Category Archives: BMTN

Making Progress

My class made some predictions about car data, without seeing it, and came up with 3 claims:

  1. The heavier the car, the lower the MPG.
  2. Electric cars will have a lower curb weight (than non-electric cars).
  3. Gas powered vehicles will have higher highway MPG than electric or hybrid vehicles. (We think this was written incorrectly, but didn’t catch the error, so decided to go with it.)

We focused on claim 1 first. Students easily produced the scatter plot …

[Scatter plot: highway MPG vs. curb weight]

and concluded that there didn’t appear to be much of a relationship between highway MPG and curb weight. But they wanted to quantify it – evidence has to be clear, after all.

[Scatter plot of highway MPG vs. curb weight with a best-fit line]

Because of the viewing window, the line looks kind of steep. But the slope of the line is -0.01 (highway mpg / pound), so it’s really not very steep at all. And the correlation coefficient is -0.164, so that’s a pretty weak relationship when we group cars of all fuel types together.

Are there different relationships for the different fuel types?

[Scatter plot: highway MPG vs. curb weight, separated by fuel type]

Turns out, yeah.
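
For anyone who wants to reproduce that kind of check, here is a minimal sketch with pandas and scipy; the file name and column names (highway_mpg, curb_weight, fuel_type) are assumptions, not the actual data set's headers.

```python
import pandas as pd
from scipy import stats

cars = pd.read_csv("cars.csv")  # hypothetical file; columns assumed below

# Least-squares slope and correlation for all cars lumped together.
slope, intercept, r, p, se = stats.linregress(cars["curb_weight"], cars["highway_mpg"])
print(f"all cars: slope = {slope:.4f} mpg per pound, r = {r:.3f}")

# The same statistics within each fuel type.
for fuel, group in cars.groupby("fuel_type"):
    slope, intercept, r, p, se = stats.linregress(group["curb_weight"], group["highway_mpg"])
    print(f"{fuel}: slope = {slope:.4f} mpg per pound, r = {r:.3f}")
```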

After some individual analysis, some discussion, and a scaffold to help organize their work, students shared their claim-evidence-reasoning (CER) paragraphs refuting claim 1.

Working on the quality

Step one was getting my students to write these CER paragraphs. (I’ve written about this before and how disastrous my efforts were.) Step two is improving the quality. I shared a rubric with my students.

[CER paragraph rubric]

We all sat around a table (it’s a small class) and reviewed all of the paragraphs together. They talked, I listened and asked clarifying questions. They assessed each paragraph. They decided that most of their paragraphs were below target. They said things like:

  • “That’s some good reasoning, but there’s no evidence to support it.”
  • “I’d like to see some actual numbers to support the claim.”
  • “I really like how clearly this is stated.”

Even though it took time to review, it was worth it.


Filed under BMTN, teaching

“I can’t wait to find out!”

As stated in the last post, Learning from Failures, I decided to adjust my approach to having students analyze and discuss data. We’d put a lot of time into working out many of the kinks, but it was really time to move on to scatterplot representations of data. My students already knew a lot about scatterplots and best-fit lines, so this allowed me to dive right in with some data.

Rather than stating a claim, I started with a statement and four questions:

I have the following measurements (in cm) for 54 students: height, arm span, kneeling height, hand span, forearm length, and wrist circumference.

  1. Which pair(s) of variables do you think might show the strongest correlation? (And what would a strong correlation look like in a scatterplot?)
  2. Which pair(s) of variables do you think might show the weakest correlation? (And what would a weak correlation look like in a scatterplot?)
  3. Which variable (from the list above) do you think would be the best predictor of a person’s height (in cm)?
  4. Write one claim statement about the class data variables.

These questions forced them to think about the data and make some predictions about what they might see once they were able to access it. We hadn’t really talked much about correlation, so I was really interested in their responses to what strong and weak correlations look like on a scatterplot.

Generally speaking, they said that strong correlations

  • look like a line
  • can almost see a line
  • looks like a more defined line
  • looks pretty linear

and weak correlations

  • look like randomly placed dots
  • have points that are far from the line
  • looks more spread out and scattered
  • has dots all over the place

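If you want to check which pair actually is strongest (or weakest), a correlation matrix answers it directly. Here is a sketch, assuming a hypothetical file and column names for the six measurements.

```python
import numpy as np
import pandas as pd

cols = ["height", "arm_span", "kneeling_height",
        "hand_span", "forearm_length", "wrist_circumference"]
students = pd.read_csv("class_data.csv")[cols]  # hypothetical file and headers

corr = students.corr()  # pairwise Pearson correlations
print(corr.round(3))

# Keep each pair once (upper triangle), then rank by the size of r.
mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(mask).stack().sort_values(key=abs, ascending=False)
print(pairs)
```
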
As for question 3, there was quite a debate between whether arm span or kneeling height would be the best predictor of a student’s height. One side (6 students) argued that arm span would be the best predictor because “everyone knows that your arm span is about the same as your height.” The other two students claimed that kneeling height would be a better predictor because “it’s part of your height.” Both sides stuck to their convictions – neither could be swayed, not even by what I thought was the astute observation that kneeling height is probably about 3/4 of height. This prediction was made by a student in the arm span camp!

Students each received their own copy of the data and investigated their claims. During the next class, we took a look at a couple of those claims, together. The plot on the left is height vs arm span, with the line y = x (height = arm span). The plot on the right is height vs kneeling height, with the line y = (4/3)x (kneeling height = 3/4 height).
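
In case it is useful, here is roughly how those two plots could be produced with matplotlib; again, the column names are hypothetical.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

students = pd.read_csv("class_data.csv")  # hypothetical file; columns assumed below

fig, (left, right) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Left: height vs arm span, with the reference line y = x (height = arm span).
left.scatter(students["arm_span"], students["height"])
xs = np.linspace(students["arm_span"].min(), students["arm_span"].max(), 2)
left.plot(xs, xs, color="gray")
left.set_xlabel("arm span (cm)")
left.set_ylabel("height (cm)")

# Right: height vs kneeling height, with y = (4/3)x (kneeling height = 3/4 of height).
right.scatter(students["kneeling_height"], students["height"])
xs = np.linspace(students["kneeling_height"].min(), students["kneeling_height"].max(), 2)
right.plot(xs, (4 / 3) * xs, color="gray")
right.set_xlabel("kneeling height (cm)")

plt.show()
```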

More debate ensued, though most admitted that kneeling height had a stronger correlation to height than arm span did (for this data, at least). And maybe the 3/4 wasn’t the best estimate, but it was pretty close. They also talked about those outliers, which led to a conversation about outliers and influential points.

Moving from Class Data to Cars

I took a similar approach with the next data set.

I have some data about cars, including highway mpg (quantitative), curb weight (quantitative), and fuel type (categorical: gas, hybrid, electric). Think about how these variables might be related and make some predictions.

  1. How might the highway mpg and curb weight be related?
  2. How might the curb weight and fuel type be related?
  3. How might highway mpg and fuel type be related?
  4. Do you think there might be any outliers or influential points? If so, what might they be?

Through some class discussion, we came up with the following claims and predictions.

[Photo: the class's claims and predictions about the car data]

Students still had not seen the data and one of them said, “I really can’t wait to see what this looks like!” Another said, “Yeah, I’m not usually all that interested in cars, but I really want to know.”
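
Once the data is in their hands, a quick first pass at the fuel type predictions (questions 2 and 3) could be grouped summaries and side-by-side box plots. A minimal sketch, with the same hypothetical column names as before:

```python
import matplotlib.pyplot as plt
import pandas as pd

cars = pd.read_csv("cars.csv")  # columns assumed: highway_mpg, curb_weight, fuel_type

# Question 2: how curb weight varies with fuel type.
print(cars.groupby("fuel_type")["curb_weight"].describe())

# Question 3: how highway mpg varies with fuel type.
print(cars.groupby("fuel_type")["highway_mpg"].describe())

# Side-by-side box plots make the comparison visible at a glance.
cars.boxplot(column="highway_mpg", by="fuel_type")
plt.show()
```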


Filed under BMTN, technology

Learning from Failures

Continuous improvement in my practice is about identifying a specific process that can be improved, applying a change idea, collecting data, and analyzing the results. This term, I am attempting to apply change ideas in my Statistical Analysis class. This is a twelve-week introductory class focusing mostly on descriptive statistics. My goal is to have my students reason more about what the statistics are telling them and to justify their claims with evidence. Our 9th grade team has put an emphasis on the structure of claim-evidence-reasoning across the content areas, meaning that students are using this structure in humanities, science, and math. I wanted to continue that structure with my 10th graders in this statistics class. So I revamped my approach to the course.

My idea was to use claims to drive the data analysis. It started off well enough. I created some claims and used a Pear Deck to ask students to consider the kind of data that they might need to collect and analyze. (Pear Deck allows them to think individually and respond collaboratively.) Here are the claims:

  • Women who win the “Best Actress” Academy Award are typically younger than men who win the “Best Actor” Academy Award.
  • Sales of vinyl records are rising and will soon overtake the number of digital downloads.
  • Opening box office for sequels in movie franchises (for example, Captain America, Star Wars, Harry Potter, Hunger Games) is typically higher than for other movie openings.
  • LeBron James is the best professional basketball player of all time.
  • For-profit colleges are more likely to recruit low-income individuals for admission.
  • More African American males are incarcerated than any other group of Americans.

Conversation around these claims also included predictions about whether or not the students thought they were true.

Remember, though, the goal was to use the structure of claim-evidence-reasoning, and my kids needed a model. So I gave them this one. After a conversation with a humanities colleague, the students analyzed my example using the techniques they learned in humanities class (highlighting claims and evidence in two different colors). This led us to create “criteria for success” and a structure for a five paragraph essay. The analysis showed me that my example could be improved, so I came back after Christmas break with a second draft. We had some discussion about what had changed and whether or not the second draft was an improvement. Seemed like all was well. Time for them to “have at it.”

But I wanted them to practice with a single, agreed upon, class claim first. So we brainstormed lots of different claims they could research and settled on:

Original films are typically better than newer entries or sequels.

They had this document to remind them about what to write and off they went to collect whatever data they thought was relevant. And then snow season began. During the first 3 weeks of January we had 6 classes due to holidays, workshop days, snow days, and a broken boiler (no heat). Even though we ask kids to do work during snow days, my students were making very little progress on this assignment. Colossal failure. I gave them too much all at once. They were wallowing in the data collection.

I regrouped. I looked at all of the data that they had collected and gave them this data set to analyze and this document to write their essays. Problem solved, right? Wrong, again. Still too much. At the end of week 4 of this “practice” assignment (interrupted by two more snow days), and after talking with my Better Math Teaching Network colleagues and my humanities colleague, I realized that I had never actually taught them how to write a paragraph that interprets a specific kind of statistic (even though they had examples).

So, at the end of January, I tackled how to write those body paragraphs. We started with writing about means and medians. Given these box plots of average critics’ ratings, I asked students to write what the medians say about the claim.

[Box plots of average critics’ ratings]
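
For reference, the medians (and the box plots themselves) can be pulled out of the data in a few lines. A sketch, assuming a hypothetical table with one row per film, a film_type label, and an average critics’ rating:

```python
import matplotlib.pyplot as plt
import pandas as pd

ratings = pd.read_csv("film_ratings.csv")  # columns assumed: film_type, critics_rating

# The medians students were asked to write about.
print(ratings.groupby("film_type")["critics_rating"].median())

# The box plots they were given.
ratings.boxplot(column="critics_rating", by="film_type")
plt.show()
```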

I figured it would take them about 5 minutes to write, so we'd be able to critique the paragraphs before the end of class. Wrong, again. But we were able to take a look at what they wrote during the next class. (It's a very small class.)

I called on my humanities colleague once more and she helped me to create some scaffolding to help them organize their thoughts. This time, with variability. Each group of two received one of the variables to analyze and organize a paragraph around. Once again, we shared the paragraphs they wrote for each measure. I’m not sure how I feel about this, since all of the paragraphs are basically the same. But I guess the point was to focus on the statistics that they included as evidence and not the specific language used. Were the paragraphs “quality”? Here’s a first draft of a rubric to measure that.
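
For the variability piece, the measures themselves are just as quick to compute; a sketch with the same hypothetical film-ratings table:

```python
import pandas as pd

films = pd.read_csv("film_ratings.csv")  # columns assumed: film_type, critics_rating

# Measures of variability by group: range endpoints, standard deviation, and IQR.
grouped = films.groupby("film_type")["critics_rating"]
print(grouped.agg(["min", "max", "std"]))
print(grouped.quantile(0.75) - grouped.quantile(0.25))  # interquartile range
```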

As January turned into February, and the snow-making machine really kicked in, I called uncle on this, feeling like I had eventually learned something – along with my students – and decided to move on. (We only have 12 weeks, after all.) I'm not sure if this is one iteration, two iterations, or three iterations of my change idea. However many iterations it is, it led me to a slightly different approach with scatterplot analysis.

But that’s another blog post.



Filed under BMTN, teaching

#LessonClose versions 1.1 & 1.2

WordPress tells me that I created this draft 3 months ago. I had every intention of updating along the journey of my Lesson Close adventure. Alas, that didn’t happen. Here’s what did happen …

I found it very difficult to decide, in the moment, which survey to send to students. So, I edited the survey to allow the students to choose what they wanted to tell me about the class – what they learned, how they learned it. I used the same survey structure as before, but this time students made a choice. I honestly thought that given a choice of what to reflect on, students would engage more. Wrong.

I asked them what happened: Too many choices, completing it electronically was too much of a hassle, there wasn’t enough time at the end of class to complete it.

Enter version 1.2: paper option, fewer choices, a few extra minutes. Still didn’t work. So I asked again: Still too many choices, still not enough time. One student said, “Even though the posting reminder came up with 5 minutes to go, our conversations about the math were so engaging that we didn’t want to stop to do a survey.” Another said, “The first question was fine, but I really didn’t want to take the time to write stuff for the second question.” This was the general sentiment.

When I reflected on this sequence of events with my colleagues at the Better Math Teaching Network, one teacher (who also has several years of teaching experience) said, “I feel like exit slips are just data for someone else who isn’t in my classroom. I know what my kids know and what they don’t know because I talk with them.” And I thought, she’s absolutely right. Here I was, trying to do something with exit polls – trying to get my students to reflect on the class, to be meta-cognitive about their learning. They were telling me through their actions and class engagement that they were learning just fine, thank you.

I have lots of formative assessment strategies, but this is the last time that I try to implement exit slips for the sake of implementing exit slips. I know what my kids know because I talk to them.


Filed under BMTN, teaching