There are lots of things I love about my classes – my students are at the top of the list, but I'm also enjoying teaching a course for the second or third time. This activity is from a statistics course called “Designing Experiments and Studies.” The course lasts one trimester (about 12 weeks), and we have just completed week 4. The Penny Stacking activity is an introduction to experiments. I ran this lesson last week (Jan 12 & 13).
The rules of penny stacking are simple:
- You can only touch one penny at a time.
- Once the penny is placed on the stack, you cannot move it.
- Stack as many pennies as you can without having the stack fall over.
- Each student stacks pennies once, with either their dominant hand or their non-dominant hand.
- Students are randomly selected to stack pennies with either their dominant or non-dominant hand.
Before embarking on the experiment, we made the following predictions:
- More pennies would be stacked with the dominant hand (although a few students disagreed and thought the results would be the same for both hands).
- A few students thought the ratio of pennies stacked with dominant hand to non-dominant hand would be 3 to 1.
- The range of pennies stacked with dominant hand would be 15-45 pennies, while the penny stacks from the non-dominant hand would range from 10-35 pennies.
Then it’s off to conduct the experiment. This is pretty tricky given that up to four students sit at a table and our building is so old that if someone walks across the floor upstairs, our floor will shake. Here are the results:
How would you interpret these results?
We calculated the means: dominant -> 30.9 pennies; non-dominant -> 26.25 pennies. It’s clear that, on average, more pennies were stacked with the dominant hand than with the non-dominant hand. But is that difference in the means (4.65 pennies) significant? Is it unusually large? Is it more than what we might expect to see just from randomly regrouping the results?
To check this out, we randomized the results. Each pair of students received a stack of cards, each card showing one student’s result. The partners shuffled the cards and dealt them out into a stack of 10 (for the dominant hand) and a stack of 8 (for the non-dominant hand), calculated the mean of each stack, and then subtracted (dominant – non-dominant). Each pair did this a couple of times, and we made a histogram from the results of the randomization test.
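In code, the card-shuffling procedure looks something like the sketch below. The post reports only the group means, not the individual penny counts, so the lists here are hypothetical values invented just so that their means match the reported 30.9 and 26.25.

```python
import random

# Hypothetical penny counts: only the group means (dominant: 30.9 over
# 10 students, non-dominant: 26.25 over 8) come from the actual class;
# these individual values are invented to match those means.
dominant = [25, 28, 30, 31, 32, 33, 34, 29, 35, 32]   # mean 30.9
non_dominant = [20, 22, 25, 26, 27, 28, 30, 32]       # mean 26.25

def one_shuffle(a, b):
    """One 'deal' of the cards: pool all results, shuffle, deal them
    back into groups of the original sizes, and return the difference
    of the new group means (dominant - non-dominant)."""
    pooled = a + b
    random.shuffle(pooled)
    regroup_a, regroup_b = pooled[:len(a)], pooled[len(a):]
    return sum(regroup_a) / len(regroup_a) - sum(regroup_b) / len(regroup_b)

# The class pooled 24 randomizations in total
diffs = [one_shuffle(dominant, non_dominant) for _ in range(24)]
```

Plotting `diffs` as a histogram reproduces the class’s picture of the differences that arise purely from chance.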
(It’s a Google Sheets histogram – I don’t know how to get rid of the space between the bars)
If you compare our difference of 4.65 to these randomized results, it looks pretty common – not at all unusual. If you think our randomization test was too small (with only 24 randomizations), you can use the Randomization Distribution tool from Core Math Tools, a free suite of tools available from NCTM. It’s the only tool I know of that runs this test effectively. Here are the results from 1000 runs, just like the card shuffling but faster.
You can even get summary statistics showing that our result was within 1 standard deviation of the mean of the randomized results. Not a very unusual result at all.
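The 1000-run version, along with the summary statistics, can be sketched in a few lines. As before, the individual penny counts are hypothetical (invented to match the reported means of 30.9 and 26.25), so the exact numbers printed will differ from the class’s output.

```python
import random
import statistics

# Hypothetical data again: only the means come from the experiment.
dominant = [25, 28, 30, 31, 32, 33, 34, 29, 35, 32]
non_dominant = [20, 22, 25, 26, 27, 28, 30, 32]

observed = statistics.mean(dominant) - statistics.mean(non_dominant)  # 4.65

pooled = dominant + non_dominant
diffs = []
for _ in range(1000):
    random.shuffle(pooled)
    diffs.append(statistics.mean(pooled[:10]) - statistics.mean(pooled[10:]))

mu, sd = statistics.mean(diffs), statistics.stdev(diffs)
# How many standard deviations is the observed 4.65 from the
# randomization mean?
z = (observed - mu) / sd
# One-sided empirical p-value: the fraction of shuffles producing a
# difference at least as large as the one we observed
p = sum(d >= observed for d in diffs) / len(diffs)
print(f"observed: {observed:.2f}, randomization mean: {mu:.2f}, sd: {sd:.2f}")
print(f"z = {z:.2f}, p = {p:.3f}")
```

If `z` comes out well under 1 and `p` is large, the observed difference is the kind of thing chance alone produces routinely, which is exactly the conclusion the class reached.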
We followed this up on Thursday with an experiment inspired by an example from NCTM’s Focus in High School Mathematics: Reasoning and Sense Making – memorizing three-letter “words.” Based on the experiment described in the book, I created random lists of three-letter words and three-letter “words.” The lists of words were meaningful, like cat, dog, act, tap, while the lists of “words” were nonsense, like nbg, rji, pxe, ghl. Students were randomly assigned to receive either a list of meaningful words or a list of nonsense words, and were then given 60 seconds to memorize as many as possible. As with the penny stacking, I had them predict the results beforehand. What would your predictions be?
This is a practical application of non-parametric statistical methods. I like it.