Category Archives: teaching

Claim, Evidence, Reasoning: Final results

In this final post about the Claim, Evidence, Reasoning approach to teaching statistics, I will share some student results. The PDSA approach to reflecting on and improving practice raises some fundamental questions:

  • Will students engage?
  • Will students learn what I am attempting to teach?
  • Will students produce quality work?

Nearly all of the students in these two classes had prior experience with statistics, which allowed me the freedom to find a new approach. That said, there were definitely times when it became clear that some content instruction was needed, especially when we got into correlation and linear regression. But instead of trying to front-load all of the content, I waited until the need arose. For example, in looking at what students wrote about the class data, it became clear that some instruction about regression lines and correlation coefficients was needed.

Now, to answer those questions.

Will students engage?

They didn’t at first – in that disastrous first attempt, only 10% completed the first assignment. But I certainly learned from that experience, regrouped, and restructured my approach. And then they engaged. My data show that 100% of my students engaged with the class, process, and content at some point and that 90% engaged consistently by the end of the term.

Will students learn what I am attempting to teach?

I was attempting to teach my students how to apply the claim, evidence, reasoning process that they had previously learned in humanities and science to statistics. Reviewing work against the rubric helped to build an understanding of what quality looks like. It also kept us focused on the goal of claim, evidence, reasoning. By the end of the class, 95% of students were able to review statements through this lens and identify whether or not they were on target.

Will students produce quality work?

This is the big question, right? It’s great if they will engage – that’s the first step – but if they aren’t producing quality work, then what have they actually learned? Here are some representative examples of student work.

Analyzing movie data: This assignment followed the best actor/actress investigation.

Education vs. unemployment, Vinyl vs. digital album sales, Juvenile incarceration rates: This was the final assignment of the univariate data unit. Students had their choice of data to analyze.

Analyzing cars: This assignment followed the class data investigation and included the opportunity for students to revise their work following feedback.

Fast food nutrition, 1919 World Series: This was the final assignment of the bivariate data unit. Students had their choice of data to analyze.

I will leave the question of whether these examples represent quality work to you, the reader. I hope you will let me know what you think.



Claim, Evidence, Reasoning: About the data

In the last post, I shared the general process that I developed to teach statistics through the lens of Claim, Evidence, Reasoning. This process was tested and refined through several iterations. The data that I chose for these assignments & iterations was critical to student engagement and learning.

How do I know what kind of data is going to be interesting to students? Well, I ask them. I’ve been asking them for many years. Not every data set is going to be interesting to every student, but overall, I have been able to identify and collect pretty good data sets.

In the spring term I used these data sets (and the associated class-devised claims):

  • Minutes between blast times for Old Faithful (Claim: The time between blasts will be 90 minutes plus or minus 20 minutes.)
  • Ages of Best Actress and Best Actor Oscar winners (Claim: The ages of the Best Actress Oscar winners are typically less than the ages of the Best Actor Oscar winners.)
  • Box office (opening weekend, domestic, worldwide), critics & audience ratings for “original” movies and their sequels (Claim: Original movies are better than sequels.)
  • Juvenile detention/incarceration rates for various types of crimes by sex and race (Claim: African-American males are incarcerated at a higher rate than any other subgroup.)
  • Education level and unemployment rates (Claim: People with a higher level of education have lower unemployment rates.)
  • Sales of vinyl records and digital album downloads (Claim: Sales of vinyl records will soon overtake digital album downloads.)
  • Class measurements such as height, arm span, kneeling height, forearm length, hand span, etc (Claim: Human body measurements are related in a predictable way.)
  • Car data including curb weight, highway mpg, fuel type, and engine size (Claim: Highway mpg depends the most on fuel type.)
  • Fast food burger nutrition including calories, fat, protein, carbohydrates, etc (Claim: Fast food burgers are unhealthy.)
  • Baseball data from the 1919 Chicago White Sox (Claim: The evidence supports the decisions made about the accused players in the 1919 World Series.)

Even with all of these options, students added their own:

  • Skateboarding data including ages and birthplaces of known skaters and number of skate parks in a state (Claim: Professional skateboarders are most likely to come from California.)
  • Olympic male swimming data (Claim: Michael Phelps is the best Olympic swimmer of all time.)

What’s important about all of these data sets?

They all provide multiple variables and opportunities for comparison. They offer students multiple ways to investigate the claims. They allow students to create different representations to support their reasoning. So, the lesson here is that the data sets used must be robust enough for students to really dig into.

Imagine what could happen if the course were integrated with science or social studies.

Next post: The results


Claim, Evidence, Reasoning: Starting fresh

I’m not an AP Stats teacher. I did that once. What’s important to me is that all students who graduate from high school have an opportunity to think and reason about real data in a deep and meaningful way. AP Stats is typically reserved for a few juniors or seniors – maybe they don’t want to take AP Calc, or maybe they’ve already taken it. Both of those reasons for landing in statistics are unacceptable to me. Data literacy needs to have a higher profile – it needs to be more important than being able to simplify rational expressions. Our students need to be able to reason about data that’s presented to them in the press, or on social media, or by our elected officials. That’s my personal crusade.

Since last December I’ve been on this journey to improve my statistics teaching and the learning of my students. I shared my catastrophic first attempt and the progress made with that group. One of the beautiful things about our trimester schedule is that it allows me to immediately apply new learning to a new group – assuming that I am teaching a new section of the same course, I don’t have to wait a whole year to apply what I’ve learned. Luckily, that was the case this year. So, in late March I was able to begin anew, armed with what I learned during the previous term.

My spring term class was also a small group, but quite different from the winter class. More than 50% of this new class struggled with writing. Since the focus of our work would be “claim, evidence, reasoning,” I would have to find alternative ways for these students to share their learning and their arguments. I wrote my new PDSA form and jumped in, hoping that I had learned enough from the winter term to be somewhat successful this time. (For information about PDSA cycles, see here and here.)

In general, I used this process to introduce concepts:

  • Tell students about the data, usually on paper and verbally, and give them time to make predictions about what they expect from the data. Students do not have access to the data yet. Have some discussion about those predictions. Write them on the board (or some other medium). These predictions become claims to investigate.
  • Give students access to the data & some graphical representations, usually on paper, and have them think about how the data might or might not support the claims that they made. Then ask them to discuss the data with a partner and determine whether or not the data support their claim.
  • Ask them to write a statement about whether or not the data support the claim and why. The why is important – it’s the evidence and the reasoning piece of the “claim, evidence, reasoning” approach.
  • Collect students’ statements, collate them into one document, then have students assess the statements according to the rubric. The focus here is on formulating an argument, not on calculating statistics or representing data. That comes later.

I completed this cycle twice, with two sets of data: minutes between blast times for Old Faithful and ages of winners of Best Actor and Best Actress Oscars.

These are the scaffolds that I provided for the first couple of steps for the Oscar data: predictions & analysis. Remember, the objective at this point is making an argument, not calculating statistics or creating representations. Taking that piece out of the mix allowed students to focus on finding evidence and formulating reasoning for the claim that we had produced as a class. The next step was to collectively look at the statements that the students produced and assess where they fell on the rubric. This was the second time that we reviewed student work against the rubric. All of this introduction was treated as formative, so although the assignment (and whether or not it was completed) went into the grade book, no grade was attached.

The process for practicing was similar, but included less scaffolding and did not include the step of reviewing student statements. It generally went like this:

  • Tell students about the data, usually on paper and verbally, and give them time to make predictions about what they expect from the data. Students do not have access to the data yet. These predictions become claims to investigate.
  • Give students access to the data, generally in digital form, and a template to help them organize their thinking.
  • Have students calculate statistics and create representations to provide evidence to support or refute their claims.
  • Have students paste their representations into the template and write a statement or paragraph explaining the evidence (this is the reasoning step).

I did this cycle twice for our unit on univariate data: once using data about movies and their sequels and again using a variety of data from which students could choose. By the 4th cycle this is what the assignment directions and template looked like. This was the end-of-unit assignment for the spring term.

At the beginning of this post I mentioned that more than 50% of this particular class had been identified as having difficulties with writing. So, what did I do? I pushed them to write something – at least one statement (or, in some cases, two) – and then offered to let them talk through their evidence and reasoning with me. I knew that there was good reasoning happening, and I wasn’t assessing their writing anyway. So, why not make the necessary accommodations?

Next post: The importance of data choices.


Making Progress

My class made some predictions about car data, without seeing it, and came up with 3 claims:

  1. The heavier the car, the lower the MPG.
  2. Electric cars will have a lower curb weight (than non-electric cars).
  3. Gas powered vehicles will have higher highway MPG than electric or hybrid vehicles. (We think this was written incorrectly, but didn’t catch the error, so decided to go with it.)

We focused on claim 1 first. Students easily produced the scatter plot …

[Scatter plot: highway MPG vs. curb weight]

and concluded that there didn’t appear to be much of a relationship between highway MPG and curb weight. But they wanted to quantify it – evidence has to be clear, after all.

[The same scatter plot with a fitted least-squares line]

Because of the viewing window, the line looks kind of steep. But the slope of the line is -0.01 (highway mpg / pound), so it’s really not very steep at all. And the correlation coefficient is -0.164, so that’s a pretty weak relationship when we group cars of all fuel types together.
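
For anyone who wants to reproduce this kind of check, here’s a minimal sketch in Python. The numbers are invented for illustration – this is not the class data set – but the calculation is the same one the students did.

```python
import numpy as np

# Invented car data for illustration only -- not the class data set.
curb_weight = np.array([2600, 2900, 3100, 3300, 3500, 3700, 3900, 4200])  # pounds
highway_mpg = np.array([33, 28, 35, 27, 31, 25, 30, 26])

# Least-squares line: highway mpg as a function of curb weight.
slope, intercept = np.polyfit(curb_weight, highway_mpg, 1)

# Correlation coefficient between the two variables.
r = np.corrcoef(curb_weight, highway_mpg)[0, 1]

print(f"slope = {slope:.4f} mpg per pound")
print(f"r = {r:.3f}")
```

A slope of a few hundredths of an mpg per pound can still look dramatic on a zoomed-in plot, which is exactly the viewing-window effect described above.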

Are there different relationships for the different fuel types?

[Scatter plots of highway MPG vs. curb weight, separated by fuel type]

Turns out, yeah.
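
In code, that check amounts to grouping by fuel type and fitting a line to each group separately. Here’s a sketch, again with invented data and assumed column names (fuel_type, curb_weight, highway_mpg are illustrative, not the class’s actual file):

```python
import numpy as np
import pandas as pd

# Invented data; the column names are assumptions, not the class's actual file.
cars = pd.DataFrame({
    "fuel_type":   ["gas"] * 4 + ["hybrid"] * 4 + ["electric"] * 4,
    "curb_weight": [2900, 3300, 3600, 4000, 2900, 3100, 3400, 3700,
                    3500, 3800, 4100, 4400],
    "highway_mpg": [33, 30, 27, 24, 52, 49, 46, 42, 102, 96, 91, 85],
})

# Fit a separate least-squares line and correlation for each fuel type.
for fuel, group in cars.groupby("fuel_type"):
    slope, _ = np.polyfit(group["curb_weight"], group["highway_mpg"], 1)
    r = np.corrcoef(group["curb_weight"], group["highway_mpg"])[0, 1]
    print(f"{fuel:>8}: slope = {slope:.4f} mpg/lb, r = {r:.3f}")
```

Splitting the data this way is what surfaced the different within-group relationships.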

After some individual analysis, some discussion, and a scaffold to help organize their work, students shared their claim-evidence-reasoning (CER) paragraphs refuting claim 1.

Working on the quality

Step one was getting my students to write these CER paragraphs. (I’ve written about this before and how disastrous my efforts were.) Step two is improving the quality. I shared a rubric with my students.

[Rubric for assessing CER paragraphs]

We all sat around a table (it’s a small class) and reviewed all of the paragraphs together. They talked, I listened and asked clarifying questions. They assessed each paragraph. They decided that most of their paragraphs were below target. They said things like:

  • “That’s some good reasoning, but there’s no evidence to support it.”
  • “I’d like to see some actual numbers to support the claim.”
  • “I really like how clearly this is stated.”

Even though it took time to review, it was worth it.


Learning from Failures

Continuous improvement in my practice is about identifying a specific process that can be improved, applying a change idea, collecting data, and analyzing the results. This term, I am attempting to apply change ideas in my Statistical Analysis class. This is a twelve-week introductory class focusing mostly on descriptive statistics. My goal is to have my students reason more about what the statistics are telling them and to justify their claims with evidence. Our 9th grade team has put an emphasis on the structure of claim-evidence-reasoning across the content areas, meaning that students are using this structure in humanities, science, and math. I wanted to continue that structure with my 10th graders in this statistics class. So I revamped my approach to the course.

My idea was to use claims to drive the data analysis. It started off well enough. I created some claims and used a Pear Deck to ask students to consider the kind of data that they might need to collect and analyze. (Pear Deck allows them to think individually and respond collaboratively.) Here are the claims:

  • Women who win the “Best Actress” Academy Award are typically younger than men who win the “Best Actor” Academy Award.
  • Sales of vinyl records are rising and will soon overtake the number of digital downloads.
  • Opening box office for sequels in movie franchises (for example, Captain America, Star Wars, Harry Potter, Hunger Games) is typically higher than for other movie openings.
  • LeBron James is the best professional basketball player of all time.
  • For-profit colleges are more likely to recruit low-income individuals for admission.
  • More African American males are incarcerated than any other group of Americans.

Conversation around these claims also included predictions about whether or not the students thought they were true.

Remember, though, the goal was to use the structure of claim-evidence-reasoning, and my kids needed a model. So I gave them this one. After a conversation with a humanities colleague, the students analyzed my example using the techniques they learned in humanities class (highlighting claims and evidence in two different colors). This led us to create “criteria for success” and a structure for a five-paragraph essay. The analysis showed me that my example could be improved, so I came back after Christmas break with a second draft. We had some discussion about what had changed and whether or not the second draft was an improvement. Seemed like all was well. Time for them to “have at it.”

But I wanted them to practice with a single, agreed-upon class claim first. So we brainstormed lots of different claims they could research and settled on:

Original films are typically better than newer entries or sequels.

They had this document to remind them about what to write and off they went to collect whatever data they thought was relevant. And then snow season began. During the first 3 weeks of January we had 6 classes due to holidays, workshop days, snow days, and a broken boiler (no heat). Even though we ask kids to do work during snow days, my students were making very little progress on this assignment. Colossal failure. I gave them too much all at once. They were wallowing in the data collection.

I regrouped. I looked at all of the data that they had collected and gave them this data set to analyze and this document to write their essays. Problem solved, right? Wrong, again. Still too much. At the end of week 4 of this “practice” assignment (interrupted by two more snow days), and after talking with my Better Math Teaching Network colleagues and my humanities colleague, I realized that I had never actually taught them how to write a paragraph that interprets a specific kind of statistic (even though they had examples).

So, at the end of January, I tackled how to write those body paragraphs. We started with writing about means and medians. Given these box plots of average critics ratings, I asked students to write what the medians say about the claim.

[Box plots of average critics ratings]
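
For anyone following along at home, here’s a minimal sketch of that exercise with made-up ratings (the class worked from the actual movie/sequel data set):

```python
import statistics
import matplotlib.pyplot as plt

# Made-up average critics ratings (0-100) -- illustration only.
originals = [84, 72, 91, 65, 78, 88, 70]
sequels   = [61, 75, 58, 70, 66, 80, 55]

print("median, originals:", statistics.median(originals))  # 78
print("median, sequels:  ", statistics.median(sequels))    # 66

# Side-by-side box plots like the ones the students wrote about.
plt.boxplot([originals, sequels], labels=["Originals", "Sequels"])
plt.ylabel("Average critics rating")
plt.show()
```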

Thinking it would take them about 5 minutes to write, I expected we’d be able to critique the paragraphs before the end of class. Wrong, again. But we were able to take a look at what they wrote during the next class. (It’s a very small class.)

I called on my humanities colleague once more and she helped me to create some scaffolding to help them organize their thoughts. This time, with variability. Each group of two received one of the variables to analyze and organize a paragraph around. Once again, we shared the paragraphs they wrote for each measure. I’m not sure how I feel about this, since all of the paragraphs are basically the same. But I guess the point was to focus on the statistics that they included as evidence and not the specific language used. Were the paragraphs “quality”? Here’s a first draft of a rubric to measure that.
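
To give a flavor of the evidence a variability paragraph might cite, here’s a small sketch computing a few spread statistics for one made-up variable (hypothetical opening-weekend figures, not the class’s data):

```python
import statistics

# Made-up opening-weekend box office figures (millions of dollars).
opening = [46, 52, 61, 48, 90, 55, 58, 47]

q1, q2, q3 = statistics.quantiles(opening, n=4)  # quartile cut points
print("range:", max(opening) - min(opening))     # 44
print("IQR:  ", q3 - q1)
print("stdev:", round(statistics.stdev(opening), 1))
```

Each pair of students could pull numbers like these for their assigned variable and build the paragraph around them.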

As January turned into February, and the snow-making machine really kicked in, I cried uncle, feeling like I had eventually learned something – along with my students – and decided to move on. (We only have 12 weeks, after all.) I’m not sure if this is one iteration, two iterations, or three iterations of my change idea. However many iterations it is, it led me to a slightly different approach with scatterplot analysis.

But that’s another blog post.



“I’m not good at math.”

I can’t tell you how many times I’ve heard this from students. I guess I was lucky. Growing up, nobody ever told me that I wasn’t good at math (or anything else, really). More importantly, nobody ever made it okay for me not to be good at math, or school, or whatever I was interested in learning about. But not all of my students have the family support that I did (and continue to have). So part of nurturing their talent falls to me. I’ve always told my students that I want them to be fearless problem solvers – to face life unafraid of what lies ahead. To nurture this I have to allow space within my classroom for some of the “messy stuff” like playing around with patterns and numbers, wondering about data and statistics, or building seemingly impossible 3D geometric structures. And then pointing out how they just did math – and they were good at it.

You see, when my students say, “I’m not good at math,” they really mean, “I’m not good at quickly recalling math facts when I think everyone is looking at me and waiting for me to respond in some brilliant way.” They equate math with arithmetic and math facts and speed. I try to point out the difference between math and arithmetic (math facts), which sometimes helps. I tell them how bad I am at subtracting numbers quickly in my head.

So what do I do to develop fearless problem solvers? I pose a problem for my students to solve. Then I step back. I observe. I listen. I ask questions. I make them all think before anyone is allowed to speak. I make them talk to me about what they’re thinking and I make them talk to each other, especially to each other. That way I get to listen more. I practice wait time, sometimes for awkwardly long, silent moments. Eventually, I no longer hear, “I’m not good at math.” Except when they want to compute something quickly, on the spot, in the moment, and it isn’t working. And then they turn and say, “Sorry, arithmetic.”


#LessonClose versions 1.1 & 1.2

WordPress tells me that I created this draft 3 months ago. I had every intention of updating along the journey of my Lesson Close adventure. Alas, that didn’t happen. Here’s what did happen …

I found it very difficult to decide, in the moment, which survey to send to students. So, I edited the survey to allow the students to choose what they wanted to tell me about the class – what they learned, how they learned it. I used the same survey structure as before, but this time students made a choice. I honestly thought that given a choice of what to reflect on, students would engage more. Wrong.

I asked them what happened: Too many choices, completing it electronically was too much of a hassle, there wasn’t enough time at the end of class to complete it.

Enter version 1.2: paper option, fewer choices, a few extra minutes. Still didn’t work. So I asked again: Still too many choices, still not enough time. One student said, “Even though the posting reminder came up with 5 minutes to go, our conversations about the math were so engaging that we didn’t want to stop to do a survey.” Another said, “The first question was fine, but I really didn’t want to take the time to write stuff for the second question.” This was the general sentiment.

When I reflected on this sequence of events with my colleagues at the Better Math Teaching Network, one teacher (who also has several years of teaching experience) said, “I feel like exit slips are just data for someone else who isn’t in my classroom. I know what my kids know and what they don’t know because I talk with them.” And I thought, she’s absolutely right. Here I was, trying to do something with exit slips – trying to get my students to reflect on the class, to be meta-cognitive about their learning. They were telling me through their actions and class engagement that they were learning just fine, thank you.

I have lots of formative assessment strategies, but this is the last time that I try to implement exit slips for the sake of implementing exit slips. I know what my kids know because I talk to them.
