7 Ways to Maximize Hattie’s Effect Size on Feedback

Few would argue against the importance of feedback in increasing student achievement, even without having read the research from John Hattie. With an effect size of .73, Feedback is almost double the hinge point of .4, making it an effective instructional strategy applicable across disciplines and grades. What is clear when distilling the what and how of effective feedback is that the components are similar across the research and theory; the variability lies in the inhibiting factors and the culture of feedback in the classroom.

Top Teaching Strategy according to the research done by John Hattie

So how do we as educators recognize and remedy the variability of feedback to maximize the effect size Hattie found in his meta-analysis involving more than 150 million students, and move from Feedback to what Nuthall and Alton-Lee named Punctuating Feedback?
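Hattie's effect sizes are, at bottom, standardized mean differences (Cohen's d): the gain between two sets of scores divided by their pooled standard deviation. As a rough illustration only, using made-up class scores rather than anything from Hattie's data, the calculation looks like this:

```python
from statistics import mean, stdev

def effect_size(pre, post):
    """Standardized mean difference (Cohen's d) between two score sets."""
    n1, n2 = len(pre), len(post)
    s1, s2 = stdev(pre), stdev(post)
    # Pooled standard deviation of the two score sets
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(post) - mean(pre)) / pooled_sd

# Hypothetical scores before and after a feedback-rich unit
pre = [52, 60, 58, 47, 65, 55, 50, 62]
post = [60, 68, 64, 55, 72, 63, 58, 70]
print(round(effect_size(pre, post), 2))  # prints 1.25
```

An effect above .4, the hinge point, signals growth beyond a typical year of schooling, which is why Feedback's .73 stands out.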

Hattie and Timperley define feedback as "actions or information provided by an agent (teacher, peer, book, parent, internet, experience) that provides information regarding aspects of one's performance or understanding."

Educators and theorists offer similar definitions, centered on students asking and answering: Where am I? Where to next? How do I get there?

Punctuating Feedback includes the time given for students to process feedback, an understanding of how to interpret the feedback, and a classroom culture that supports applying the skills gained through feedback. The greatest impact of feedback occurs when it is supported by effective teaching and learning strategies.

Maximizing a Culture of Feedback

  1. Feedback sits within a formative assessment framework. It includes “where to next” and “how to improve”. Adjusting teaching depends upon this information.
  2. Internal motivation promotes curiosity and a willingness to learn and deepen current understanding. Students are actively involved in their own learning and recognize growth from where they began to where they are now, not in comparison against other students.
  3. Embed challenge, Mindframes, metacognition, and deliberate practice; spaced practice is more effective than massed practice.
  4. Normalizing and celebrating error is key to new learning and promotes a culture of actionable feedback.
  5. Equity in learning is maximized through mixed-ability grouping.
  6. Feedback needs to be task-related rather than ego-related; comments rather than grades produce greater gains in student achievement.
  7. All of this is “underpinned in the belief that all students can improve.” (Hattie)

When Effective Feedback is coupled with a Culture that maximizes it, variability is lessened and ALL students improve.

Source: John Hattie and Shirley Clarke. Visible Learning Feedback. 2018.

MAP Reading Fluency: A New Tool to Save Teachers Time & Focus on Instruction

This post is sponsored by We Are Teachers and NWEA.org. All opinions expressed are my own. (Meaning, if I don’t like something about a particular education product I will not write about it on my blog)

Across the country, literacy, especially in grades K-3, is a priority in just about every district you visit. Educators are banding together to share best practices, evidence-based interventions, and inspiring stories, all in an effort to improve student literacy.

All learning is rooted in language, and as one progresses through life, continued learning, both personal and professional, typically comes through written communication.

For me, literacy is my passion, and I have dedicated my life to reading, researching, and sharing not only how to develop young students into lifelong readers, but also how to advocate for high-quality literacy instruction for ALL students. Being literate not only allows access to information but influences one's personal, professional, and civic life. Upon graduation, my wish was for students to be equipped with the passion and skills to be critical discerners of information, to make informed decisions for the betterment of society, and to advocate for themselves and others. To be able to do these things, a solid literacy foundation must be formed in the early grades.

Educators learn about their young readers in a variety of ways when they enter their classrooms: understanding what they enjoy reading and learning about, how they choose books, and which foundational skills they have acquired as opposed to which ones they still need to practice or learn. Typically, in a K-3 classroom, teachers administer some sort of fluency test with accompanying comprehension questions. These assessments provide an abundance of information on students to inform instruction. The drawback to this type of testing is the large amount of TIME it takes to test individual students in classrooms of 25+ young readers. And we all know the one thing teachers need is…More TIME. That is why I was ecstatic to preview a new assessment tool launched by NWEA called MAP Reading Fluency.

I want to stress, NOTHING takes the place of an Expert Teacher, but when resources like this become available and save teachers time to then reclaim and use for instruction, it is a WIN – WIN for kids.

MAP Reading Fluency is the first and only K-3 oral reading assessment using speech recognition, automatic scoring, and computer-adaptive technology. It collects data on oral reading fluency, comprehension, and foundational reading skills. With this information, teachers can decide which areas they may need to dig into a bit deeper in order to differentiate instruction and meet the needs of students.

I am also a firm believer in two things when it comes to assessment and data. First, MAP Reading Fluency provides a snapshot of the student as a reader; multiple snapshots across time allow teachers to notice trends, and trends should be noted and investigated to find out the what and why. Second, assessment data does not paint the whole picture of a child as a reader. This is where the beauty of computer-aided assessment comes into play: the Reading Fluency data that is generated is immediate, organized, disaggregated, and actionable. This is a huge win for teachers and a time-saver, due in part to the streamlined process of technology. The follow-up, the instruction, and the passionate teaching of the student are then provided by the Expert Teacher.

For the past 5 years or so, I have been investigating tools and resources that would support teachers and students in this exact way; it is as if NWEA read my mind and delivered with Reading Fluency. MAP Reading Fluency was named the 2018 CODiE award winner for Best Student Assessment Solution. It is adaptive to accommodate pre-, early-, and fluent readers, and readings are recorded so that teachers can listen to their students during planning time or while working with their PLC. I am excited about the possibilities of this new assessment tool and appreciate how it aims to shorten the time spent assessing so more time can be spent on instructing! Want to learn more? Check out this FAQ sheet or request a Demo of MAP Reading Fluency.

Top 5 Takeaways from Visible Learning Institute


This week, roles were flipped as Steven Anderson and I had an opportunity to learn from John Hattie at the Visible Learning Institute in San Diego. Hattie, a researcher in education, has studied more than 150 million students, synthesizing more than 800 meta-studies to determine the effect size various influences have on teaching and learning. His work identifies not only what works in education but what works best, and perhaps most importantly, where we as educators need to concentrate our efforts to support student learning at high levels.

The institute ran two days, with Day One led by Hattie and Karen Flories, covering research, Mindframes, feedback, and how to better analyze data. Educators from around the globe had the opportunity to dig into the what, why, and how of the Visible Learning methods while being able to speak directly with both Hattie and Flories. Copious amounts of notes were taken, but the following were our Top 5 Takeaways from the first day of learning.

Top 5 Takeaways from the Visible Learning Institute:

  1. Upscaling Success – Upscaling is not typically seen in education. In fact, Hattie states that “all you need to enhance achievement is … a pulse.” Every teacher can have success in terms of student achievement in their classroom; this is why every teacher can argue that they have evidence that what they are doing works. Hattie urges us all, “Do not ask what works – but what works best!” Identify what works best for your students and upscale those practices school-wide. In most cases, it takes 10-12 weeks to see the results of new instructional methods tried with students. During that time we need to have the “sticktoitness” to follow through. But we also have to be mindful that we may not see the results we want, and not be afraid to leave behind practices that just don’t work. If something works, upscale it. If it doesn’t, abandon it and move on to something that does.
  2. Goldilocks Principle – “Not too hard, Not too boring.” In alignment with current brain research, Hattie introduced us to the Goldilocks Principle. In terms of learning, students prefer learning to be a challenge, but not so hard that success is impossible, and also learning that is relevant and engaging. This also ties back to ability grouping and how the research shows it just isn’t what is best for students, especially those who are struggling. When we group students by ability, educators naturally slow down their teaching to ensure everyone “got it.” Rather, what should take place is a heterogeneous mix of ability levels where challenge is the norm. Our brains, and especially developing ones, crave a challenge.
  3. Assessment-Capable Learners – Flories introduced the concept of Assessment-Capable Learners, claiming that they should know the answers to the 3 Key Questions of Visible Learning: What am I learning? How will I know I’ve been successful in my learning? What evidence can I provide to support that I’ve learned? Students who can answer these questions have teachers who see learning through the eyes of their students and help them become their own teachers. Learning can’t be a mystery to students. Nor can it be just a repetition of facts and figures. Teacher clarity has an effect size of 0.75. The more we are clear with students about what we are doing, why we are doing it, and how we will know we’ve done it, the more they learn. As part of this, we would add a fourth question students should be able to answer: How will I communicate what I’ve learned to others? Not only should the learning reside within the student, but there must also be opportunities for them to share what they know.
  4. Know Thy Impact – Repeated throughout the Institute, “Know Thy Impact”: Hattie argues that the most important Mindframe of Visible Learning is when teachers understand their job is to evaluate their own impact on student learning. Acknowledging that the word “Impact” is ambiguous, Hattie notes that conversations in schools about its definition solidify what each school views as important in terms of learning for their students; any definition should include a triangulation of scores, student voice, and artifacts of student work. When educators Know Their Impact, they make better decisions about student learning success.
  5. Feedback – Flories ended the day with a focus on feedback and its .70 effect size on student learning. Startling statements were shared. “80% of feedback that kids get is from each other and 80% of that feedback is wrong – Nuthall.” And “Effective feedback doubles the speed of learning – Dylan Wiliam.” Student feedback should be targeted to close the gap in their learning and used by students to understand the next steps in their learning. Effective feedback begins with teacher clarity when designing and delivering tasks. Good feedback isn’t just focused on the task. (And actually, feedback that is focused exclusively on the task doesn’t do much to help students grow anyway.) The feedback that does the most good operates at the level of self-regulation, where learners evaluate their own learning process, and is given during the process, not at the end. Feedback is just in time, just for me, information delivered when and where it can do the most good.

By the end of the first day, we had taken an endless supply of notes and had much to digest and discuss. What is even more clear to us now is that while much of what we learned feels like common sense to us, it serves as a good reminder for some and new learning for others.

Hattie says there are no bad teachers, just Good Teachers and Great Teachers. What separates the two is the willingness to know thyself, know thy students, and know thy impact. Those who do not only have students who are high achievers but also students who are fully prepared for what’s next.

In our next post, we will look at the 5 Takeaways from Day Two where we dove into Visible Learning in the Literacy Classroom with Nancy Frey.

Measuring Up: 6 Focus Areas for Blended Curriculum Assessment


It is true, not all curriculum is created equal. There are specific things I look for when reviewing a curriculum to make the best decisions for kids and teachers. So when my friends at We Are Teachers asked me to take a look at Measuring Up, a blended curriculum for grades 2-8, I was eager to check it out and provide feedback.

This post is sponsored by We Are Teachers and Mastery Education. All opinions expressed are my own. (Meaning, if I don’t like something about a particular education product I will not write about it on my blog)

I immediately recognized many positives while reading through the sample curriculum:

  • Concepts connect what students will learn to what they may already know and to real-world examples.
  • Academic vocabulary is taught in context.
  • Learning is scaffolded with guided instruction and a gradual release of responsibility.
  • Students apply their learning independently.

Along with the previous list, two things stuck out to me about Measuring Up that I appreciate as a professional. First, the instruction is done by the expert classroom teacher, not the computer; and second, the Measuring Up Live 2.0 version aligned with my view on student learning and assessment, which they have streamlined through the use of computer applications.

6 Focus Areas for Blended Curriculum Assessment:

  1. Practice – Whether it is a high-stakes test or a certification exam, assessment practices are shifting from paper and pencil to online versions for a variety of reasons (costs, access, data disaggregation, etc.). When students have little to no practice or frame of reference for online testing, anxiety rises and results are impacted. Blended curriculum should contain both digital and analog assessment options, as well as multiple types of assessments students can take in both low-stakes and high-stakes environments.
  2. Cognitive Demand – If students have limited interaction and touches on devices when it comes to testing, all of their cognitive energy is wasted on how to manipulate the computer instead of answering the questions. Cognitive energy is best used for thinking critically and demonstrating understanding. From drag and drop to typing extended answers, when students have little access to the types of computer assessments they will take in their schooling and life, cognitive demands are misplaced on basic computer skills.
  3. Adaptive – When evaluating curriculum, edtech options for assessment should include adaptive measures, meaning the test is sensitive to the answers the student provides and modifications are made based on those answers. This ensures that just-right measures are used to gauge what the student knows and what they are not yet understanding.
  4. Feedback – Feedback is another area I explore when looking at assessment provided by curriculum with blended components. Feedback could come in the form of immediate grading, but could also provide extensions and reinforcement. All of these provide students with an understanding of what they have mastered and what additional support they can access to continue refining their learning.
  5. Mastery and Goal Setting – Curriculum that provides assessment should be aligned to the standards and instruction. It should provide a clear picture as to which skills and standards the students have mastered and what they have left to master, and provide direction on how to move forward. Measuring Up provides students and teachers with this information, as well as a way for students to set their own learning goals.
  6. Informs Instruction – Finally, data collected is useless unless it is used to inform instruction. Along with providing formative and summative student information, assessment done via technology streamlines the process of accessing data, disaggregating it, and changing instruction to best meet students’ needs.
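Point 3 above, adaptive measures, can be pictured with a toy difficulty-adjustment loop. This is a generic sketch of how adaptive testing tends to behave, not Measuring Up's actual algorithm; the function name and level range are made up for illustration:

```python
def run_adaptive_test(answers, start_level=3, min_level=1, max_level=5):
    """Toy adaptive assessment: step difficulty up after a correct answer,
    down after an incorrect one, and return the levels visited.
    `answers` is a list of booleans (True = correct)."""
    level = start_level
    path = [level]
    for correct in answers:
        # The next item's difficulty reacts to the student's last response
        level = min(max_level, level + 1) if correct else max(min_level, level - 1)
        path.append(level)
    return path

# A student who answers right, right, wrong, right:
print(run_adaptive_test([True, True, False, True]))  # prints [3, 4, 5, 4, 5]
```

The point of the sketch: the sequence of items settles near the level where the student is challenged but not failing, which is exactly the "just right" measure the curriculum review should look for.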

Curriculum cycles are a part of every district I have worked with over the past 10 years. Making the most informed purchasing decisions helps educators in their instruction and assessment of students. While all companies and curriculum writers provide unique frameworks or specialty components, be sure that any curriculum claiming to be blended places value in the professional and contains a comprehensive assessment system, similar to that of Measuring Up, with a focus on the 6 areas above.

Assessment Types Explained for Educators


Assessment in education, in the early years, typically took the form of oral evaluation. Tests were subjective, often performed at the front of the classroom, and largely teacher-directed, posing questions to the student around typical areas of mastery needed to pass to the next grade level. From there, assessment took its traditional form (students at their desks with a paper-and-pencil test) in the late 1890s, following the institution of letter grades (A, B, C, etc.) to replace the teacher’s subjective measure of a student’s ability.

The first standardized test in education was the Stone Arithmetic Test (early 1900s), and the SAT made its way onto the education landscape in the 1930s as a way to check a student’s readiness for college.

Current trends in education have seen an increase in testing and data-driven decision making, but in the era of the TLA (another Three-Letter Acronym), the volume of assessments educators and districts can or have to use often leads to confusion. The following is a list of assessment terms commonly found in education, with my simple definition and use for each.

Types of Assessment

Formative Assessment – formal and informal assessment to monitor and provide feedback on student understanding of targeted learning goals. Formative assessment is frequent and ongoing; it is not typically graded.
  Who: Whole Class
  Purpose: Used to inform teacher instruction and by students to set goals and next steps.
  Examples: Exit Slips, Games, Pretest, 3-2-1

Summative Assessment – culminating assessment used to evaluate student learning, skill acquisition, and achievement. It typically occurs at the end of a unit, lesson, semester, or year. It is commonly considered “high-stakes” testing and is graded.
  Who: Whole Class
  Purpose: Demonstration of understanding by the student.
  Examples: Project, Portfolio, Test, Paper

Screener – a valid, reliable, evidence-based assessment used to indicate or predict student proficiency or identify those at risk. Screeners are brief, identify the “who”, and are given a few times a year.
  Who: Whole Class or Targeted Group
  Purpose: Identification of students at risk who need additional support.
  Examples: AIMSweb, DIBELS, FAST, EasyCBM, iReady, STAR

Diagnostic – a tool used to provide insights into a student’s specific strengths and weaknesses. The data collected provides the teacher with specific skills to target when designing individualized instruction. Diagnostic assessments identify the “what” for the student.
  Who: Individual Student
  Purpose: After a student has been identified via a screener, a diagnostic assessment is used to determine specific areas of focus.
  Examples: Error analysis of literacy progress monitoring data, Phonics Inventory, Reading Miscue Analysis

Progress Monitoring – a tool used to assess a student’s academic performance and rate of growth under individualized or targeted instruction.
  Who: Individual Student
  Purpose: To ensure the response to instruction is helping the student grow in a targeted area. Based on a specific intervention or instruction.
  Examples: A diagnostic tool can be used if multiple forms are available.

Norm-Referenced Assessment – compares a student’s performance to the “average student” score, which is constructed from a statistically selected group of test takers, typically of the same age or grade level, who have already taken the exam.
  Who: Whole Class, Whole Grade Level
  Purpose: Designed to rank test takers on a bell curve; used to determine how students in a particular school or district rank against others who take the same test.
  Examples: Standardized tests such as the California Achievement Test, Iowa Test of Basic Skills, Stanford Achievement Test, and TerraNova.

Criterion-Referenced Assessment – measures student performance against a fixed set of standards or criteria, predetermined as to what a student should be able to do at a certain stage in education. The score is determined by the number of questions answered correctly.
  Who: Whole Class
  Purpose: Can be either high-stakes (used to make decisions about students, teachers, schools, etc.) or low-stakes (used for student achievement, adjusting instruction, etc.).
  Examples: Multiple choice, true/false, short answer, or a combination; can be teacher designed.

Benchmark Assessment – fixed assessments (also called interim assessments) that measure a student’s progress against a grade-level or learning goal. Often given in between formative and summative assessments.
  Who: Whole Class or Individual Student
  Purpose: Used to communicate to educators, students, and parents which skills are important to master and the student’s progress (so far) toward those learning goals.
  Examples: Fountas and Pinnell, Reading A to Z Benchmark Passages
Other Assessment Terms You May Encounter

CFAs (Common Formative Assessments) – Assessments that are collaboratively created and agreed upon by a group or grade-level team to measure students’ attainment of the learning goals.
Alternate Assessment – Assessments for students with severe cognitive disabilities. Tests have less depth and breadth than the general assessment. (For the small number of kids on IEPs who are unable to take the general test.)
Alternative Assessment – Also called authentic assessment or performance assessment. Alternative assessment stands in contrast to the traditional standardized test and focuses on an individual’s progress and multiple ways to demonstrate understanding.
Authentic Assessment – Replicates real-world challenges that experts or professionals in the field encounter. Used to demonstrate not only mastery of learning goals or standards but also critical thinking and problem-solving skills. (Students construct, respond, or produce to demonstrate understanding.)
Synoptic Assessment – Combines multiple concepts, units, or topics in a single assessment that requires students to make connections between the learning; a holistic approach to assessment and the interconnectedness of learning.
Quantitative Data – Data collected that can be measured and written down in numbers.
Qualitative Data – Data collected that is more subjective and speaks to the expertise of the teacher to provide an opinion based on trends and past experiences.


The ability to choose the right assessment that meets the needs of students and teachers is essential. Most often, confusion does not arise over the difference between formative and summative assessments. Through my own work with districts and educators across the nation, I have found a need to clarify the definition and purpose of a Screener, a Diagnostic Tool, and Progress Monitoring. These three assessment types are essential when digging deep into student needs, and they help to inform instruction.
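Progress monitoring, in particular, hinges on rate of growth. A minimal way to quantify it is a least-squares slope over repeated, equally spaced measures; the weekly scores below are hypothetical and not drawn from any named tool:

```python
def growth_rate(scores):
    """Least-squares slope of scores over equally spaced sessions
    (points gained per session)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    # Standard simple-linear-regression slope: cov(x, y) / var(x)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical weekly oral-reading scores (words correct per minute)
weekly_wcpm = [42, 45, 44, 49, 52, 55]
print(round(growth_rate(weekly_wcpm), 2))  # prints 2.6
```

A slope like this answers the progress-monitoring question directly: is the student's response to the targeted instruction trending upward fast enough, or does the intervention need to change?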

Resources to Explore:

My Collection of Edtech Tools for Assessment

List of Screeners

List of Diagnostic Tools

Progress Monitoring List

Authentic Assessment