Wayfinder Learning Lab

"Learning is about living, and as such is lifelong." Elkjaer.



An Inquiry Crossfader: Authentic vs Effective Learning?

In reading more about Understanding Learners and Learning, learning theories and high-impact teaching and learning strategies, I got thinking again about a conversation Jon Schatzky and I had a year and a half ago about a continuum of inquiry. I'll use this post to morph that idea into an Inquiry Crossfader, and use it to acknowledge some of the (real or perceived) tensions in the transition across the MYP-DP gap. This is by no means an exhaustive discussion of the topic, and there is a lot of thinking still to do (apologies for the rambling). Your thoughts are appreciated in the comments or on Twitter (@iBiologyStephen), especially if you have constructive criticism or pertinent journal articles to share.

Inquiry: "critical reflective thought." As teachers we can set the balance between telling students what to know generating authentic inquiry. With careful design we can turn both effective and authentic learning up to 11.

Inquiry:critical reflective thought.” As teachers we can set the balance between telling students what to know generating authentic inquiry. With careful design we can turn both effective and authentic learning up to 11.

Defining Inquiry

Definitions of inquiry differ depending on who you talk to or who you are teaching. A PYP teacher might describe inquiry as largely student-driven questioning that drives the curriculum, is highly open-ended, and can lead students in many directions in terms of curricular outcomes:

“[The PYP is committed to] structured, purposeful inquiry that engages students actively in their own learning. In the PYP it is believed that this is the way in which students learn best—that students should be invited to investigate significant issues by formulating their own questions, designing their own inquiries, assessing the various means available to support their inquiries, and proceeding with research, experimentation, observation and analysis that will help them in finding their own responses to the issues. The starting point is students’ current understanding, and the goal is the active construction of meaning by building connections between that understanding and new information and experience, derived from the inquiry into new content.” Making the PYP Happen, p29 (emphasis mine)

This is a fine approach to teaching and learning, especially in the younger years, where the backwash effect of university entry is not a driving factor in school-wide or classroom-level decision-making with regards to teaching, learning and assessment. It is certainly the way I want my own young children to learn. However, in my (anecdotal) experience, the term inquiry meets resistance on the march up to high school, as teachers feel the pressure of terminal assessment and more heavily prescribed syllabus outcomes or standards. It can be seen as too open-ended, or 'loose', perhaps sacrificing 'standards' for exploration. When we look at Hattie's learning impacts, the open-ended inquiry-based learning that these teachers fear rates below average, with an effect size of just d=0.31 (against an average of d=0.40); entirely understandable when the tools for measuring learning in older students tend to be highly standardized and based on a pre-determined set of syllabus outcomes or core skills.
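(For readers unfamiliar with the numbers: Hattie's d values are standardized mean differences. In a simplified form, leaving aside the detail of the meta-analytic synthesis:

d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{comparison}}}{s_{\text{pooled}}}

where s_pooled is the pooled standard deviation of the two groups. Hattie sets his 'hinge point' at d = 0.40, the typical effect across all of the interventions in the synthesis, which is why 0.31 reads as below average.)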

I'd prefer to use Bente Elkjaer's definition of inquiry as "critical or reflective thinking [that] concerns consequences," a future-oriented approach ('what-if' rather than 'if-then') in which meaning is "identified by anticipating 'what-if' consequences to potential actions and conduct."

As we think about the role of inquiry from this perspective, we can see myriad opportunities for authentic meaning-making in the experience of learning without sacrificing the pedagogies of 'effective' teaching and learning. It is a definition that agrees with the PYP approach to inquiry quoted above, as well as being an appropriate description of higher-order learning in a middle or high-school classroom. It does not discount the role of skills and content in the class; otherwise what is our core curriculum, and upon what do we build conceptual understandings? It instead opens the door to more student-centred approaches to learning (such as modeling science) that require a student to think critically and reflectively, construct meaning in their learning and apply their factual and conceptual understandings to new situations through transfer.

I would be highly skeptical of any teacher who said they didn’t want to develop critical and reflective thinkers in their classes and instead preferred to keep the learning to only that which can be easily measured through simple testing.

Effective vs Authentic Learning? 

A deliberately provocative – and not necessarily true – dichotomy: are we teaching for a measurable impact, to get results (effective) or are we aiming to build meaning (authentic)? Where I observe conflict across the MYP-DP gap (again anecdotal) it tends to be as a result of a teacher determining their philosophy (and resultant practices) as either/or, when we should be concerned with both. 

In the most extreme of cases and most simplistic of distinctions between competing educational philosophies, we might split the camps into 'results-getters' (objective-focused, effective learning) and 'meaning-makers' (inquiry-focused, authentic learning), the two approaches being exemplary of a behavioral/empirical perspective on learning and a cognitive/rationalist view respectively (see Cognition and Learning, in the references below). A results-getter takes pride in high student scores on standardised testing, while a meaning-maker values the (lifelong?) impact of the learning on the student in a more transformative sense. Of course, it is entirely possible to construct meaning in a highly content-driven high-school classroom, just as it is to fail to construct meaning in a low-functioning pseudo-inquiry environment: in the car recently, my 6yo daughter and I had a conversation on the difference between worthwhile 'inquiry' questions and superficiality such as 'are we there yet?' Nevertheless, the tensions in moving from an inquiry-focused MYP class into a high-stakes DP class hinge on the (real or perceived) conflicts between a teacher-directed, outcome-driven pedagogy and a more open-ended, inquiry-focused approach to learning in the classroom.

I would argue that the master teacher gets the balance right.

Depending on your subject, it might be true that opportunities for open-ended inquiry become more limited in the vertical progression through the curriculum, yet the opportunities for engaging students in critical and reflective thought should remain and even strengthen as students develop a more solid conceptual foundation and set of discipline-related skills and content. The sciences, for example, fit into this category: we focus on building solid conceptual understandings through the MYP, yet experience a highly prescriptive, outcomes-based syllabus in the Diploma Programme. As a result we risk losing the spirit of learner-led inquiry that characterizes true science as students get older, and it is important, in terms of both motivation and the aims of our programmes, that we help students construct meaning and relevance in their studies.

The same content-loaded high-school course could be taught in different ways, and the learning experienced by students depends highly on the teacher's philosophy of education. The focus on effective teaching and learning in these classrooms is relatively straightforward, as the clearly-defined objectives of the syllabus make it easier for the teacher to employ high-impact practices such as formative assessment (d=0.9), feedback (d=0.73), spaced practice (d=0.71) and reciprocal teaching (d=0.74). The greater challenge might be to 'make space' for inquiry, so that students can apply their learning and make meaning through critical reflective thought; though it takes only a basic understanding of the higher-level assessment descriptors to see that transfer, critical inquiry and reflection play strongly into student achievement.

On the other hand, subjects such as Design, with minimal prescribed content, should allow students to really spread their inquiry wings through their application of the design cycle to authentic problems and design challenges as they get older, building upon the skills, knowledge and concepts they have developed in earlier years. Making meaning should therefore be easy, as student interest drives the curriculum. In this case 'effective teaching' might present the more significant challenge: even an excellent teacher would need to think very carefully about how to deploy high-impact teaching practices, and how to know their impact, as students follow diverse lines of inquiry.

So where does the Inquiry Crossfader come in? 

This is just a way to visualize the dichotomy outlined above, in order to emphasize that we can 'set the slider' for any of our classes, and that we need to bear both effective and authentic teaching and learning in mind in our curriculum and instructional design. The comparison below the diagram highlights some of the characteristics of the philosophy and the classroom practices at the opposite ends of the crossfader; I have attempted to draw some comparisons between practical conceptualizations of each approach, though this is open to editing and adjustment as I work on the assignment further. Below that is an expanded outline of the relevance of the components of the DJ metaphor.

[Diagram: the Inquiry Crossfader (@iBiologyStephen)]

THE OUTCOMES-DRIVEN CLASSROOM [FOCUS ON EFFECTIVE LEARNING]

Aligns with behavioral/empirical perspectives on learning (Greeno, Collins & Resnick):

  • Knowledge is an accumulation of stimulus-response associations.
  • Transfer is a gradient of similarity between prior and current learning in terms of associations and stimulus/response.
  • Motivation may well be more extrinsic, based on a desire to achieve grades over making meaning in learning.

In practice:

  • Objectives are clearly defined and generally pre-determined.
  • The curriculum is generally content-based.
  • Deployment of high-impact teaching practices might be more straightforward, as progress towards defined, pre-determined outcomes can be easier to measure.
  • The teacher's role is as the expert of content and assessment.
  • Thinking may be more determined by 'if-then' scenarios, in terms of stimulus-response (Elkjaer, in Illeris).
  • Grading might suit a simple points/percentages system, in which students 'earn credit' for completion and scores in controlled assessments.

THE INQUIRY-DRIVEN CLASSROOM [FOCUS ON AUTHENTIC LEARNING]

Aligns with cognitive/rationalist perspectives on learning (Greeno, Collins & Resnick):

  • Knowledge is concept-founded, where learning is a process of conceptual construction (constructivism).
  • Transfer is the application of generalities and problem-solving from conceptual understandings.
  • Motivation is more likely to be intrinsic, with a desire to learn and make meaning taking priority.

In practice:

  • Objectives might be (partially) defined, but student inquiry forms an important part of the curriculum outcomes.
  • The curriculum is generally concept-based.
  • Deployment of high-impact teaching practices might be more difficult, as progress towards defined, pre-determined outcomes can be messier to measure. However, clearly-defined success criteria should still allow for a lot of formative feedback and improvement.
  • The teacher's role is as the coach or mentor of the learner.
  • Thinking may be more determined by 'what-if' scenarios (a future-focused, pragmatic approach) (Elkjaer, in Illeris).
  • Assessment is more likely to be criterion-based (or standards-based), in which grades are linked to (and evidenced by) mastery of descriptors. There may be more diversity in the assessment tools used, though these need to be very carefully designed*.

*See Grant Wiggins' recent post on the false dichotomy between testing and projects as assessment tools. No matter the perspective on learning, we need to construct effective assessment tools… by design.

……….o0O0o……….

Labouring the Metaphor

As a bedroom DJ in a past life, I'll take the liberty of walking through the diagram and the relevance of each part.

Two turntables. The left represents the content-driven (behavioural/empirical) approach, while the right represents the inquiry/concept-driven (cognitive/rationalist) approach. As the DJ builds a set, the balance moves from left to right as the DJ switches records, though many turntablists use both at the same time to build layers of complexity; this is analogous to the master teacher ensuring both effective and authentic learning are taking place.

Volume control. As well as controlling the balance between each track, the volume of each can be controlled. Consider a complex mix between a highly-effective and highly-authentic classroom: the crossfader is set near the middle, yet both tracks are ‘turned up to 11’.

Beat-matching. A difficult skill to master: the DJ needs to keep the tracks in time in terms of tempo and alignment of bars. Transitions between records should not be noticed by the audience; a botched mix leads to an uncomfortable dissonance. The analogy here is that students notice when a teacher 'switches gear' artificially, as the beats go out of step and cause confusion.

Building the set. DJs don't make it up as they go along: they plan their set for peaks and lulls, for the big moments and the build-ups. They start with the end in mind and know what they want their audience to experience; they practice backwards design. With a solid foundation of content (the records in their box) and a knowledge of where they can be flexible (differentiation), they can adapt their set to suit the feedback of the audience and meet their needs. Building the set might also apply to vertical articulation of the curriculum: building a student's cumulative experience of a discipline over the years, morphing inquiry as the years progress.

We might go a step further to over-egg the analogy and add a microphone, where the teacher makes the teaching visible to students, outlining the what, the why and the how of learning in the classroom, making learning intentions clear and acting as a credible coach. We might also add the headphones, where the teacher previews and fine-tunes the learning experience, predicting and preventing mishaps or a poor mix, and uses feedback to improve the performance. Finally we could add the recording equipment – the formative and summative assessment data – with which the teacher can gain feedback and make adjustments regarding teaching and learning for future lessons.

Conclusion

While it is possible to recognise tensions in the transition from open-ended inquiry in the MYP to a more content-driven, assessment-led Diploma Programme, it is not helpful to do so with such broad and definitive strokes. Inquiry, if defined as "critical and reflective thinking", is not only possible but strengthened as students progress up through the school, even if the form of that inquiry looks radically different from that of the PYP and early MYP years. We need to recognise that all classes at all times sit somewhere on the crossfader between the two approaches, and are likely to demonstrate characteristics of each. A master teacher strikes the right balance between the two sides in each moment, making adjustments where needed so that learning can be both effective and authentic.

Crank it up to 11.

……….o0O0o………..

Useful Sources: 

IBO. Making the PYP Happen: A curriculum framework for international primary education.

Greeno, Collins & Resnick. Cognition & Learning, chapter in Berliner, D. & Calfee, R. (eds.), Handbook of Educational Psychology, Macmillan, New York: 15-46

Illeris, Knud. Contemporary Theories of Learning: Learning theorists… in their own wordsChapters by Knud Illeris, Bente Elkjaer.

Hattie & Yates. Visible Learning & The Science of How We Learn.



Making Feedback Visible: Four Levels Experiment

This quick brain-dump is based on ideas from Hattie’s Visible Learning for Teachers, Wiliam’s Embedded Formative Assessment and the pdf of The Power of Feedback (Hattie & Timperley) linked below. 

I spent much of today trying to grade a large project (Describing the Motion of the Rokko Liner, our local train), which was assessed for MYP Sciences criteria D, E, F. Based on some of our Student Learning Goal work on helping students cope with data presentation and interpretation, the lab had been broken into stages (almost all completed in-class), spread across A4 and A3 paper and GoogleDocs in Hapara.

Hattie & Timperley, Four Levels of Feedback. Click for the pdf of 'The Power of Feedback.'

Hattie & Timperley, Four Levels of Feedback. Click for the pdf of ‘The Power of Feedback.’ The image is on the page numbered 87.

The result: a lot of visible learning, in that I could keep track of each student, see their work in progress and comment where needed. A lot of verbal feedback was given along the way, with some worked examples for students. Breaking the large assignment into stages helped keep it authentic and manageable for students, with some days dedicated to individual strands of the assessment criteria.

The challenge: a Frankenstein's monster of a grading pile, part paper, part digital and all over the place. After trying to put comments on the various bits of paper and Google Docs I gave up, realising that I would be there for many hours and that potentially very little would be read carefully by students or actioned in the next assignment. I turned to Hattie (and Wiliam). Visible Learning for Teachers has a very useful section on feedback (d=0.73, though formative assessment is d=0.9), and so I spent some time making up the following document, with the aim of getting all the feedback focused and in one place for students.

It is based on the four levels of feedback: task-level, process-level, self-regulation and self. In each of the first three sections I have check-boxed a few key items, based on things I am looking for in particular in this task and the common advice that I will give based on a first read through the pile. A couple of boxes will be checked for each student as specific areas for improvement, with the ‘quality’ statements explained in person. There is space under each for personal comments where needed. I fudged the ‘self’ domain a bit for the purpose of student synthesis of the feedback they are given – really making it a reflective space, geared towards the positive after the preceding three sections of constructive commentary.
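To make that concrete, here is a rough sketch of the kind of coversheet described; the checklist items below are illustrative inventions, not the actual document:

  Task level: [ ] graph conventions (title, axes, units)  [ ] processing of raw data  (+ space for comments)
  Process level: [ ] strategies for checking and validating data  [ ] linking evidence to conclusions  (+ space for comments)
  Self-regulation: [ ] acting on previous feedback  [ ] checking work against the rubric before submission  (+ space for comments)
  Self: a reflective space for the student's response to the feedback above.
  (Rubrics and grades sit on the reverse.)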

Once I got the sheets ready, I chugged through the grading, paying attention most closely to the descriptors in the rubric, the task-specific instructions to students and then the points for action. However, I put very little annotation directly on the student work, instead focusing on this coversheet. It was marginally quicker to grade overall than the same task would have been normally, but the feedback this time is more focused. The double-sided sheet was given to them in class, attached to the paper components of their work, with the feedback facing out and the rubrics with grades hidden behind. This is a deliberate attempt to put feedback first. We spent about 25 minutes explaining and thinking through this in class.

Importantly, students were given time to think carefully about why certain notes had been made and boxes checked on their sheet. I asked them to respond to the feedback in the 'self' section, and make additional notes in the three sections of task-level, process-level and self-regulation. In discussion with individual students, we identified which were most pertinent: some higher-achieving students can take action in more detail at the task level, whereas others need to focus more on self-regulation. At the end of the lesson, the sheets and work were collected back in, so I can read the feedback and use it to inform my next round of teaching lab skills.

The purpose of all this is to make it explicit where they need to focus their efforts for the next time, without having to wade through pages of notes. It hopefully serves to make the “discrepancy between the current and desired” performance manageable, and a sea of marking on their work will not help with this. I will need to frame this carefully with students – some need work on many elements, but I will not check or note them, instead focusing on the few that are most important right now. Incidentally, it also allows me to more quickly spot trends and potentially form readiness groupings based on clusters of students needing work on individual elements in the following lab.
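As an aside, if the checked boxes were ever transcribed into a spreadsheet or simple records, spotting those clusters could even be automated. A minimal sketch in Python (hypothetical code, with invented student names and feedback items; not part of my actual workflow):

    # Hypothetical sketch: tally coversheet check-boxes to suggest readiness
    # groupings for the next lab. Names and feedback items are invented.
    from collections import defaultdict

    # Each record lists the feedback items checked on a student's coversheet.
    checked_items = {
        "Student A": ["graph conventions", "interpreting trends"],
        "Student B": ["graph conventions"],
        "Student C": ["interpreting trends", "evaluating the method"],
    }

    # Invert the records: for each feedback item, which students need work on it?
    groupings = defaultdict(list)
    for student, items in checked_items.items():
        for item in items:
            groupings[item].append(student)

    # Print candidate readiness groups, most common need first.
    for item, students in sorted(groupings.items(), key=lambda kv: -len(kv[1])):
        print(f"{item}: {', '.join(students)}")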

At the end of the task I asked students for feedback on the process. They generally found the presentation of feedback in this way easier to manage than sifting through multiple multimedia components, and will keep this document as a reference for next time. A couple of higher-achieving students asked for more detailed feedback by section in their work, which is something I can do on request, rather than perhaps by default; I know these students will value and take action on it.

Here's the doc embedded. If it looks as ugly on your computer as it does on mine, click here to open it.

If you’ve used something like this, or can suggest ways to improve it without taking it over one side per section, I’d love to hear your thoughts in the comments or on Twitter. I’ll add to the post once I’ve done the lesson with the students.

UPDATE (2 December): Feedback-first, peer-generated

Having read that adding grades to feedback weakens the effect of the feedback, I’ve been thinking about ways to get students to pay more attention to the feedback first. For this task, a pretty basic spring extension data-processing lab, I checked the labs over the weekend and wrote down the scores on paper. In class I put students in groups of three and asked them to share the GoogleDoc of the lab with their partners. They then completed a feedback circle, using the coversheet below to identify specific areas for improvement and checking them. If they could suggest an improvement (e.g. a better graph title), they could add this as a comment.

This took about 15-20 minutes, after which students completed the process-level and self-regulation sections and returned the form to me, before continuing with the day’s tasks. Before the next class, I’ll add their grades to the form (rubrics are on the reverse of the copy I gave students) and log them in Powerschool. Delaying communication of the grade this way should, I hope, have helped students engage more effectively with the feedback – I learned last week that making changes in Powerschool resulted in automatic emails to students.

I was wary of doing this first thing on a Monday, but the kids were great and enjoyed giving and receiving feedback from peers. Of course some goofed off a little, but they were easy to get back on track. For the high-flyers who enjoyed the method less the first time, this gave them a chance to really pick through each others' work to give specific feedback for improvement.

Here is the document:

……….o0O0o……….

The Power of Feedback (pdf). John Hattie & Helen Timperley, University of Auckland. DOI: 10.3102/003465430298487



Hattie & Yates: Visible Learning & the Science of How We Learn

This brief review of John Hattie and Gregory Yates’ Visible Learning & the Science of How we Learn (#HattieVLSL) is written from the multiple perspectives of a science teacher, IB MYP Coordinator and MA student. I have read both Visible Learning and Visible Learning for Teachers, and regularly refer to the learning impacts in my professional discussions and reflections. While reading the book, I started the #HattieVLSL hashtag to try to summarise my learning in 140 characters and to get more people to join in the conversation – more of this below. 

EDIT: March 2017

This review was written right after the release of VLSL, in late 2013. Since then, the ideas of 'know thy impact' and measurement of learning impacts have really taken off in education, particularly in international schools. Critics of Hattie (largely focused on mathematics or methodology) are also easy to find, though the Australian Society for Evidence Based Teaching concludes that "statistical errors do not change any of the findings" and that "Visible Learning remains the most significant summary of educational research ever compiled." We do need to be mindful that what works in some contexts might not work in others, and that the visible learning impacts could be used as a set of signposts for further investigation in our own contexts, rather than a list of 'must do' strategies for all classes.

The rest of this blog post has remained untouched since 2013. 

……….o0O0o………..

Summary Review (the tl;dr version)

Visible Learning and the Science of How We Learn (#HattieVLSL) is an engaging and accessible guide that connects the impacts of Hattie's meta-analyses with discussion of current understandings in the field of how we learn. It reduces the 'jargon of learning theory' to its implications for learning and teaching (without overly dumbing down), and aims to facilitate clarity by relegating researchers' names to the references (focusing instead on the findings in each of the 31 chapters). This aids swift reading; it would be useful for the novice teacher as a general overview of teaching and learning at the start of their studies in education. On the other hand, the academically-minded will be sifting through the references and hitting the internet for supplementation and more substantial explanation.

It is a practical volume and can be dipped into and revisited as needed, though as a ‘how-to’ guide for high-impact practices, Visible Learning for Teachers (VLT) is more immediately actionable. It would serve well as a companion to VLT and should be of particular interest to teachers who want to dig deeper into the issues or to leaders who want to think more carefully before making decisions that affect teaching and learning.

#HattieVLSL is highly quotable and provides many provocations for further thought and ideas that might challenge a teacher's thinking or way of doing things. It is concise, with short, well-structured chapters, each ending with an In Perspective summary, some study guide questions that could structure discussion (or a teacher learning community) and some annotated references to pursue. The discussion of 'Fast Thinking & Slow Thinking' is fascinating.

Although the book is very strong, at times the examples used (Gladwell's Blink, Khan Academy) feel like they are aiming for a more populist market and might open the book to criticism. Where we have bought copies of VLT for all teachers as a catalyst for teacher learning communities, this volume might better serve those who are interested in the theoretical basis for learning, perhaps as their own reading group or learning community.

I recommend the book to anyone who is already a fan of Hattie’s work, or who has an inherent interest in connecting learning theory and studies with the learning impacts, or visible effects in the classroom. I have learned a lot through reading this volume, have been inspired to learn more and will likely be boring others by talking about it for a good while.

More detail and some tweets after the divide… 
