Learning from Failures

Continuous improvement in my practice is about identifying a specific process that can be improved, applying a change idea, collecting data, and analyzing the results. This term, I am attempting to apply change ideas in my Statistical Analysis class. This is a twelve-week introductory class focusing mostly on descriptive statistics. My goal is to have my students reason more about what the statistics are telling them and to justify their claims with evidence. Our 9th grade team has put an emphasis on the structure of claim-evidence-reasoning across the content areas, meaning that students are using this structure in humanities, science, and math. I wanted to continue that structure with my 10th graders in this statistics class. So I revamped my approach to the course.

My idea was to use claims to drive the data analysis. It started off well enough. I created some claims and used a Pear Deck to ask students to consider the kind of data that they might need to collect and analyze. (Pear Deck allows them to think individually and respond collaboratively.) Here are the claims:

  • Women who win the “Best Actress” Academy Award are typically younger than men who win the “Best Actor” Academy Award.
  • Sales of vinyl records are rising and will soon overtake the number of digital downloads.
  • Opening box office for sequels in movie franchises (for example, Captain America, Star Wars, Harry Potter, Hunger Games) is typically higher than for other movie openings.
  • LeBron James is the best professional basketball player of all time.
  • For-profit colleges are more likely to recruit low-income individuals for admission.
  • More African American males are incarcerated than any other group of Americans.

Conversation around these claims also included predictions about whether or not the students thought they were true.

Remember, though, the goal was to use the structure of claim-evidence-reasoning, and my kids needed a model. So I gave them this one. After a conversation with a humanities colleague, the students analyzed my example using the techniques they learned in humanities class (highlighting claims and evidence in two different colors). This led us to create “criteria for success” and a structure for a five-paragraph essay. The analysis showed me that my example could be improved, so I came back after Christmas break with a second draft. We had some discussion about what had changed and whether the second draft was an improvement. Seemed like all was well. Time for them to “have at it.”

But I wanted them to practice with a single, agreed-upon class claim first. So we brainstormed lots of different claims they could research and settled on:

Original films are typically better than newer entries or sequels.

They had this document to remind them about what to write, and off they went to collect whatever data they thought was relevant. And then snow season began. During the first 3 weeks of January we had only 6 classes due to holidays, workshop days, snow days, and a broken boiler (no heat). Even though we ask kids to do work during snow days, my students were making very little progress on this assignment. Colossal failure. I gave them too much all at once. They were wallowing in the data collection.

I regrouped. I looked at all of the data that they had collected and gave them this data set to analyze and this document to write their essays. Problem solved, right? Wrong, again. Still too much. At the end of week 4 of this “practice” assignment (interrupted by two more snow days), and after talking with my Better Math Teaching Network colleagues and my humanities colleague, I realized that I had never actually taught them how to write a paragraph that interprets a specific kind of statistic (even though they had examples).

So, at the end of January, I tackled how to write those body paragraphs. We started with writing about means and medians. Given box plots of the average critics’ ratings, I asked students to write what the medians say about the claim.
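For readers outside the class: comparing medians is the whole move here. The class data set isn’t reproduced in this post, so the ratings below are made up purely for illustration of what the students were asked to reason about.

```python
# Hypothetical critics' ratings (0-100); NOT the actual class data set.
original_ratings = [88, 94, 79, 85, 92, 81, 90]
sequel_ratings = [66, 72, 80, 58, 75, 70, 63]

def median(values):
    """Middle value of the sorted list (mean of the two middle values if even)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# A higher median for originals is evidence for the class claim that
# original films are typically rated better than sequels.
print("Original median:", median(original_ratings))
print("Sequel median:", median(sequel_ratings))
```

The paragraph the students write is then an interpretation of that comparison, with the medians cited as evidence.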

I figured it would take them about 5 minutes to write, so I thought we’d be able to critique the paragraphs before the end of class. Wrong, again. But we were able to take a look at what they wrote during the next class. (It’s a very small class.)

I called on my humanities colleague once more, and she helped me create some scaffolding to help them organize their thoughts. This time, the focus was variability. Each group of two received one of the variables to analyze and organize a paragraph around. Once again, we shared the paragraphs they wrote for each measure. I’m not sure how I feel about this, since all of the paragraphs are basically the same. But I guess the point was to focus on the statistics they included as evidence and not the specific language used. Were the paragraphs “quality”? Here’s a first draft of a rubric to measure that.

As January turned into February, and the snow-making machine really kicked in, I called uncle on this, feeling like I had eventually learned something – along with my students – and decided to move on. (We only have 12 weeks, after all.) I’m not sure if this is one iteration, two iterations, or three iterations of my change idea. However many iterations it is, it led me to a slightly different approach with scatterplot analysis.

But that’s another blog post.

 

4 Comments

Filed under BMTN, teaching

“I’m not good at math.”

I can’t tell you how many times I’ve heard this from students. I guess I was lucky. Growing up, nobody ever told me that I wasn’t good at math (or anything else, really). More importantly, nobody ever made it okay for me not to be good at math, or school, or whatever I was interested in learning about. But not all of my students have the family support that I did (and continue to have). So part of nurturing their talent falls to me. I’ve always told my students that I want them to be fearless problem solvers – to face life unafraid of what lies ahead. To nurture this I have to allow space within my classroom for some of the “messy stuff” like playing around with patterns and numbers, wondering about data and statistics, or building seemingly impossible 3D geometric structures. And then pointing out how they just did math – and they were good at it.

You see, when my students say, “I’m not good at math,” they really mean, “I’m not good at quickly recalling math facts when I think everyone is looking at me and waiting for me to respond in some brilliant way.” They equate math with arithmetic and math facts and speed. I try to point out the difference between math and arithmetic (math facts), which sometimes helps. I tell them how bad I am at subtracting numbers quickly in my head.

So what do I do to develop fearless problem solvers? I pose a problem for my students to solve. Then I step back. I observe. I listen. I ask questions. I make them all think before anyone is allowed to speak. I make them talk to me about what they’re thinking and I make them talk to each other, especially to each other. That way I get to listen more. I practice wait time, sometimes for awkwardly long, silent moments. Eventually, I no longer hear, “I’m not good at math.” Except when they want to compute something quickly, on the spot, in the moment, and it isn’t working. And then they turn and say, “Sorry, arithmetic.”

3 Comments

Filed under MTBoS Challenge, teaching

Fresh Blogging Opportunity

Welcome to the Explore the MTBoS 2017 Blogging Initiative! With the start of a new year, there is no better time to start a new blog! For those of you who have blogs, it is also the perfect time to get inspired to write again! Please join us to participate in this year’s blogging initiative! […]

via New Year, New Blog! — Exploring the MathTwitterBlogosphere

1 Comment

Filed under MTBoS Challenge

My Favorite Games

My advisory students are now seniors. We started together four years ago, along with the school. It’s a humbling journey to spend four years with the same group of students, helping them navigate through high school, getting them ready for whatever adventure follows.

We do a lot of work in advisory – research about “Life after Baxter,” prepping for student-led conferences, creating and maintaining digital portfolios, keeping track of academic progress, and completing any required paperwork, for starters. Even though we meet three times a week for about 35 minutes each time, we still have some “down” time.

We like to play games together. We play Set, Farkle, and Joe Name It along with various card games. Taking some time to play and laugh together is important to building those relationships.

1 Comment

Filed under Baxter, MTBoS Challenge

Standards-Based Grading

There’s lots of talk out there, and especially in New England, about standards-based education. Whatever you think about standards-based, or proficiency-based, or competency-based education (they are all the same to me – just using some different words), the bottom line is that we teachers are now supposed to be able to certify that, regardless of any other factors beyond our control, our students are able to _________. Fill in the blank with your skill or habit of choice. This is tricky business. The tricky part is

  • not to distill learning into a checklist of discrete items that have no connection to each other.
  • to maintain a cohesive, robust curriculum with a clear scope and sequence.
  • to develop cross-curricular, integrated courses that give students rich opportunities to build those skills.
  • to build an assessment system that students, teachers, and parents have a common understanding of.

My school has put a lot of energy into creating a standards-based assessment (and reporting) system. Since we are still a new school, there is nothing to change except our own perceptions. We started out using the old 1-2-3-4 system, but ran into trouble with different interpretations of what those numbers represented and what levels students could achieve. Some teachers maintained that standards in a course were global and that there was little chance for a 9th grader to demonstrate at a level higher than a 2. Other teachers defined course standards as local, so that students could earn a 3 or even a 4 on the standards within that class. Clearly, this was a problem.

The other problem is that any time grades are represented using numbers, people want to operate with them, or break them down further (using 2.3, for example). But those numbers represent discrete categories of performance or understanding. A 2.3 doesn’t make any sense if it isn’t defined. So we had to create a brand new system.

Each reporting standard – those big things like Algebra & Functions – has indicators that are connected to each level on the big scale toward graduation benchmarks. These are defined in a rubric. For any given course, we identify what the “target” knowledge & skills are: the level of the rubric we are targeting. For example, in the Modeling in Math class, the target level is Entering.

During a course, we report if a student is “below target,” “on target,” or “above target” for an assessment on a particular indicator of a reporting standard. This way a student can be “on target” – meaning that the student is making solid progress and is doing what is expected in the course – but still not be at the graduation benchmark for that standard. After all, Modeling in Math is the first course that our 9th graders take. It’s unlikely that they will meet the graduation benchmark after just this one twelve-week class.

Report cards and transcripts report the big picture status toward graduation. So that 9th grader who was “on target” during the class has made progress toward graduation, but still has work to do to meet that benchmark. And that work could happen in a series of courses or through some combination of courses and portfolio, giving the student control over her education.

 

Leave a comment

Filed under Baxter

#LessonClose versions 1.1 & 1.2

WordPress tells me that I created this draft 3 months ago. I had every intention of updating along the journey of my Lesson Close adventure. Alas, that didn’t happen. Here’s what did happen …

I found it very difficult to decide, in the moment, which survey to send to students. So, I edited the survey to allow the students to choose what they wanted to tell me about the class – what they learned, how they learned it. I used the same survey structure as before, but this time students made a choice. I honestly thought that given a choice of what to reflect on, students would engage more. Wrong.

I asked them what happened: Too many choices, completing it electronically was too much of a hassle, there wasn’t enough time at the end of class to complete it.

Enter version 1.2: paper option, fewer choices, a few extra minutes. Still didn’t work. So I asked again: Still too many choices, still not enough time. One student said, “Even though the posting reminder came up with 5 minutes to go, our conversations about the math were so engaging that we didn’t want to stop to do a survey.” Another said, “The first question was fine, but I really didn’t want to take the time to write stuff for the second question.” This was the general sentiment.

When I reflected on this sequence of events with my colleagues at the Better Math Teaching Network, one teacher (who also has several years of teaching experience) said, “I feel like exit slips are just data for someone else who isn’t in my classroom. I know what my kids know and what they don’t know because I talk with them.” And I thought, she’s absolutely right. Here I was, trying to do something with exit polls – trying to get my students to reflect on the class, to be meta-cognitive about their learning. They were telling me through their actions and class engagement that they were learning just fine, thank you.

I have lots of formative assessment strategies, but this is the last time that I try to implement exit slips for the sake of implementing exit slips. I know what my kids know because I talk to them.

2 Comments

Filed under BMTN, teaching

Note from a graduate

Hey everyone! I just felt like I should email all of you to say hi, and to assure you that I haven’t forgotten Baxter, and to remind myself to stay in touch, and to tell you that [college] is great and almost everything is going really well! I’m actually kind of tearing up writing this, which surprised me since that’s not something that happens to me very often. I’ll try to remember to stop by the school if it’s in session while I’m visiting home!

Teaching is about building relationships with our students.

Leave a comment

Filed under Baxter

Tidbits from the Week

I tried my #LessonClose a couple of times this week in my 3D geometry class. The first time I used the Collaboration poll and the second time I used a new Learning from Mistakes poll. Both polls were given while working on the question: “Which Platonic solids can tessellate space?” It was clear that cubes would work, but there was some disagreement about the tetrahedron.

Student comments about how they collaborated included:

  • I was part of the conversation when we were brainstorming answers
  • I participated but did not lead
  • I argued about shapes that would or wouldn’t work
  • Everyone’s voice was heard

Student comments about Learning from Mistakes included:

  • I assumed an angle stayed the same when rotated on the plane. This turned out to be false, and I had to later revise my answer.
  • I forgot Trig, and I may have messed up a little bit with the answer of #1
  • A few times, we started to follow a faulty train of logic, based on angle assumptions, that messed us up until we figured it out.
  • I wasn’t exactly wrong just wasn’t very sure. My group had made a prediction that was actually true.
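The disagreement about the tetrahedron, by the way, has a clean resolution: for copies of a single solid to fill space around a shared edge, its dihedral angle must divide 360° evenly. That check (using the standard dihedral-angle values for the Platonic solids – these are known facts, not data from my class) can be sketched as:

```python
import math

# Dihedral angles of the Platonic solids, in degrees.
dihedral = {
    "tetrahedron": math.degrees(math.acos(1 / 3)),               # ~70.53
    "cube": 90.0,
    "octahedron": math.degrees(math.acos(-1 / 3)),               # ~109.47
    "dodecahedron": math.degrees(math.acos(-1 / math.sqrt(5))),  # ~116.57
    "icosahedron": math.degrees(math.acos(-math.sqrt(5) / 3)),   # ~138.19
}

# Copies of one solid can close up around an edge only if the
# dihedral angle divides 360 evenly (a necessary condition).
for name, angle in dihedral.items():
    copies = 360 / angle
    closes_up = abs(copies - round(copies)) < 1e-9
    print(f"{name}: {angle:.2f} deg -> 360/angle = {copies:.2f}, closes up: {closes_up}")
```

Only the cube passes (four 90° angles meet around an edge), which matches the fact that the cube is the only Platonic solid that tessellates space by itself; five tetrahedra around an edge leave a gap, and six overlap.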

I’m finding it difficult to decide on the specific poll to give. I might create a new poll that lets the student select which aspect they would like to give me some information about. This is the heart of the PDSA cycle – learning quickly and making changes to improve.


Earlier this week, Dan Meyer wrote this post about explanations and mathematical zombies using z-scores as an example. In the comments I shared an activity that I’ve used, one that I posted about last year. It so happened that it was time for that activity again this week. In both classes, students were able to develop the reasoning behind the formula through discussion. One student even described what we were doing as a transformation to the standard normal distribution. Never once did we write a formula.
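For readers who want the formula the students reasoned their way toward: a z-score re-expresses a value as a number of standard deviations from the mean, which is exactly the “transformation to the standard normal distribution” that student described. A minimal sketch (my own illustration, not the classroom activity itself):

```python
import math

def z_scores(data):
    """Standardize data: subtract the mean, divide by the population
    standard deviation. This shifts the distribution to mean 0 and
    rescales it to standard deviation 1."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    return [(x - mean) / std for x in data]

scores = [70, 80, 90, 100]
print(z_scores(scores))  # roughly [-1.34, -0.45, 0.45, 1.34]
```

The point of the activity, though, was that the students built this reasoning through discussion before ever seeing it written down.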


Once again my Flex Friday work connects me with students who are new to Baxter Academy. This year we are teaching them skills through workshops. This Friday’s morning workshop employed design thinking: asking and answering “How Might We … ” (HMW) questions. (For more about this approach check out The Teachers Guild or the d.school at Stanford.) Once you have a bunch of HMW questions, you attempt to answer each one in as many ways as you can think of. My colleague called this “thinking around the question” and illustrated it with the question, “What’s half of thirteen (or 13)?” And here’s what we came up with.


 

Leave a comment

Filed under Flex Friday, problem solving, teaching

Lesson Closure & Exit Polls – Images

I’ve received a couple of requests for some larger images from the last post on Lesson Closure. Here’s my attempt at providing them.

First, the process map.

exit poll process map

A few exit poll examples.

I have a few other exit polls, but you get the idea. One question to rate the day and one question to elaborate a little bit.

Leave a comment

Filed under teaching

Lesson Closure & Exit Polls

At the end of June I wrote about Continuous Improvement and promised that I would share updates throughout the upcoming school year. Well, here’s update #1, thanks to a great post by @druinok about Closure and Exit Slips.

Just like the post says, I, too, have always struggled with wrapping up lessons before the bell rings. Okay, we don’t have bells, but there comes a time when the students have to move on to another class. Too often, it seems like we are all so involved that the time just creeps up on us and off we go. That means that I have to rely on my gut instincts to plan for the next day. After so many years of teaching, it seems to work, at least from my perspective, but am I really serving my students in the best possible way that I can?

As a member of the Better Math Teaching Network, I had to come up with a plan – something in my practice that I can tweak, test, and adjust with ease. So, I decided to focus on class closure. Since I don’t have an actual process for this, I had to think intentionally about what I might be able to do. I created this process map:

exit poll process map

I focused on the final 10 minutes of class. Who knows if this is appropriate or not. That will be one of the adjustments that I will have to make, I’m sure. But, I have created a set of Google forms that are designed to solicit some focused feedback that I’ve designated as “process” or “content” oriented. Here is a sampling of “process” Exit Polls I’ve created:

And for “content”:

 

What I like about the Google Form is that I anticipate it will be easy for students to access (most have smartphones, all have laptops) and I can post a link in Google Classroom.

I am hopeful that this process, this structure, will push me to gather deliberate and intentional data from my students so that I am able to plan better each day. Time will tell.

8 Comments

Filed under teaching