
Evaluating energy efficiency claims

Artifact

This (or any other) energy-efficient light bulb package.

Energy Efficient Bulb 20-75 w

So many opportunities here, depending on how targeted you want to be. Or, if you prefer, what kind of problem you plan to facilitate. There’s a clear nod to systems of linear equations (when one compares the time of payoff). There’s also an opportunity for some simple, linear equation building: evaluate the truth behind the $44 claim.

I’m even thinking of a 101qs video in which a perplexed customer at a hardware store is comparing this light bulb and, say, one of these, though the existence of incandescent bulbs is probably not long for this world. And, being Easter, hardware stores are closed today (fun fact: retailers also really don’t like it when you take photos and videos in their stores). But that brings up a whole other can of worms: how much energy will countries save by switching to energy efficient bulbs? Like I said, lots of ways to go about this, depending on whether you want to be targeted or more exploratory.

Suggested questions

  • Is that $44 claim reasonable or bogus when you compare it against a bulb that uses 75 watts?
  • How does this compare with other energy efficient bulbs at the old hardware store?
  • What would happen if you switched every bulb in your house/school/neighborhood to energy efficient ones?
  • How much does a kilowatt-hour cost in our town? And what exactly is a kilowatt-hour?

Potential Activities

  • Take some predictions: does $44 savings sound about right over 5 years? Is that too high? Too low?
  • Collect some data on how much your lights are actually on in your house.
  • Plot five years of bulb use and see what happens.
  • Go around your house and count the number of bulb outlets you have. That data may be nice to have on hand.
  • Tables, graphs, equations, the usual bit.

Potential Solutions

Not sure what electricity costs in your particular neck of the woods, but Planet Money suggests a US average of $0.12 per kilowatt-hour (kWh). These 20-watt bulbs usually cost around $12 per bulb, give or take. So our function looks like:

cost = $12 + (20 W) × (1 kW / 1000 W) × ($0.12/kWh) × hours

Incandescent bulbs go for about $2, and comparing with a 75-watt bulb, our graphs look like this.

I actually get a savings of $42.80 over 8000 hours:

($2 + (75/1000) × $0.12 × 8000) − ($12 + (20/1000) × $0.12 × 8000) = $42.80. That doesn’t take into account replacing incandescent bulbs more often. You could potentially get stepwise functions if you consider the perhaps 1,000–2,000 hour lifespan of an incandescent bulb.

(note the slightly different guesstimations of numbers in the planning form)
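If you (or your students) want to tinker with the numbers, here’s a quick sketch of the cost function in Python. The prices and the $0.12/kWh rate are the guesstimates from above, not gospel; swap in your local figures.

```python
# Total cost of owning a bulb: purchase price plus electricity.
# Figures from the post: $0.12/kWh, $12 for the 20 W bulb, $2 for
# the 75 W incandescent.
RATE = 0.12  # dollars per kilowatt-hour

def bulb_cost(price, watts, hours, rate=RATE):
    """Purchase price plus the cost of `hours` of use, in dollars."""
    return price + (watts / 1000) * rate * hours

savings = bulb_cost(2, 75, 8000) - bulb_cost(12, 20, 8000)
print(f"Savings over 8000 hours: ${savings:.2f}")  # $42.80
```

Changing the rate or the hours is exactly the “what if” lever you’d hand students.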

Final Word. Pretty much anything involving energy efficiency is going to allow for some systems problems. It’s all about tradeoffs: higher initial costs gradually offset by energy savings. Water heaters, A/C units, automobiles, window insulation; you get what you pay for.


How does one provide the complex data of global warming to students?

Update (3/12/2013): An atmospheric scientist friend of mine, Katie, suggested a few edits to this post, primarily to clear up a few of the tools listed here. The edits are in bold.

My thesis for this post was originally going to be “why don’t teachers let students investigate global warming very often?” While this post may not answer that, here’s a terrifying Google search for any teacher who is interested in having their students do some independent research on climate change. Google: “global warming raw data“.


So the first result is a good one. A legit one. There are lots of links to reputable sites maintained by reputable scientists. Then the second result is a Yahoo! Answers post. The third (third!) Google result for a simple query on raw data turns up World Net Daily, a website for conspiracy theorists and people who think they’re going to be put in FEMA camps any day now. That is not a reputable site. They provide the opposite of “raw data”.

This is not a post about the messy politics and confusion-campaigns around climate change. But this does point to a particular difficulty that you’d hope would be much simpler: where can we find raw temperature data that we can actually use? For the record, a google search of “raw temperature data” yields much more acceptable initial results. But still, many of those results can be extremely difficult for a secondary math or science teacher to pick up and use, let alone students. For one, climate data is often presented in a file format that requires heavy coding knowledge or special programs to process (such as NetCDF). Second, it’s hard to know where to start with temperature data. Do you start by geographic location? Do you take the annual mean across the globe? How would one do that, exactly?

So this is the problem, and maybe a fundamental problem of teaching science: data are messy. We have to rely on others to package them for us. Scientists are interested in providing the raw data because they want people to have access to true observations, but that raw data is so vast and difficult to process (though not that difficult to interpret!) that you practically need a Master’s degree before you can even start to decipher it. And often, scientists aren’t interested in culling the data to make it more digestible for the public. They’d prefer to show you the graph. This is great for communication, but not great for independent research. And worse, they’re now fighting on the same plane as disingenuous charlatans who are paid to be exactly that. So let’s provide students of science the raw data in a way that anyone with Microsoft Excel and a genuine curiosity can begin to explore the very real phenomenon of climate change.

My favorite site for this is NASA’s GISS Surface Temperature Analysis. In terms of accurate, raw, commentary-free, accessible, customizable, and processable data, I haven’t found a better place to start. Bookmark that site. Tell your students to go to that site. Start locally.

To find specific historic local weather stations, Katie recommends using the map rather than the search function. The map appears to have better functionality. So click on your favorite vacation spot and go find that precious, precious raw data.



Once you have the ASCII data (shown here), it’s simply a matter of copying and pasting it into Excel, or if you’re incredibly ambitious (or teaching a Stats class perhaps), having students import it into R, one of the industry standards.

For the uninitiated, let me translate a few things: 

D-J-F = December-January-February average

M-A-M = March-April-May average

J-J-A, S-O-N = I think you get the idea….

The last column, metANN = annual mean temperature. This might actually be the best place to start. 
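If Excel isn’t your thing, here’s a minimal Python sketch that pulls the year and that metANN column out of a station table saved as a plain text file. The column assumptions (first = year, last = metANN) follow the layout described above; double-check them against your actual download, and watch for missing-value flags in real files.

```python
# Read a GISS-style station table: keep the year (first column) and the
# annual mean metANN (last column), skipping header and blank lines.
def read_station(path):
    years, annual_means = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or not parts[0].isdigit():
                continue  # header row or blank line
            years.append(int(parts[0]))
            annual_means.append(float(parts[-1]))
    return years, annual_means
```

From there, the two lists drop straight into a scatterplot or a regression.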

Berkeley also has a nice data set organized by country. However, the accessible-to-layperson data is a bit more hidden.


If you’re not careful, you’ll end up downloading intense NetCDF data that’s not accessible to the layperson. Which, again, is fantastic data, but difficult to work with yourself.

But now we’ve got two sites with data that can be tossed into Excel, R, or even those statistics packages designed for secondary students. Now that we have that data, we can do a lot with it.

Suggested Activities

  • Have students investigate the temperature trend in their area.
  • Create a linear model that predicts temperature as a function of year locally.
  • Assign each group or student a different region of the world to investigate and develop a linear model for.
  • Or what about this: develop a sinusoidal equation that describes monthly temperature. Get some trig in there.
  • Ask the question: is our town/state/country/planet heating up or not? Or is it too uncertain to tell?
  • Can you find local stations that DON’T show a warming trend? Katie suggests looking at weather stations closer to the poles to consider the potential impact of polar temperature trends. This might be a bit science-y, but it’s something I’d happily let students explore in a math class.

Once you have actual data, you can start to test it to assess that last, fundamental question (which then spurs thousands of other questions, like “should I have children?”). Is β > 0 under the general linear model? Once we have that answer, even if it’s just locally, we can start to talk about the implications.
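That slope test can be sketched in a few lines. The temperature series below is invented for illustration; substitute a real station’s metANN values.

```python
# Fit temperature = slope * year + intercept and look at the sign of
# the slope. The annual means below are hypothetical placeholders.
import numpy as np

years = np.arange(2000, 2010)
temps = np.array([9.1, 9.3, 9.0, 9.4, 9.5, 9.2, 9.6, 9.7, 9.5, 9.8])  # hypothetical

slope, intercept = np.polyfit(years, temps, 1)
print(f"trend: {slope:.3f} degrees per year")
```

For the actual hypothesis test (is the slope significantly greater than zero?), `scipy.stats.linregress` also hands you a p-value to discuss with a stats class.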


CNET has some TV viewing size/distance recommendations.

Feels like there’s a similarity (and a lot of other stuff) type problem in here.

Artifact

From CNET:

 In a perfect videophile world, you’d want to sit no closer than 1.5 times the screen’s diagonal measurement, and no farther than twice that measurement to the TV. For example, for a 50-inch TV, you’d sit between 75 and 100 inches (6.25 and 8.3 feet) from the screen. Many people are more comfortable sitting farther back than that, but of course the farther away you sit from a TV, the less immersive feeling it provides.

I’m wondering if you could pair this with Tim’s TV 3 Act problem. Perhaps even Brian’s Holiday Shopping problem. There’s honestly a lot of stuff going on here from CNET: proportion, distance, maybe even a system of equations or linear programming problem (what with the upper and lower bounds suggested above, then toss in cost constraints).

Update (6/12/17): CNET has apparently redirected the original article to a generic TV buying guide, so the text above no longer appears there. However, here’s something from The Home Cinema Guide.

A good rule of thumb is that the ideal viewing distance for a flat screen HDTV is between 1.5 and 3 times the diagonal size of the screen – and we can use this to calculate both approaches.

Still, the work for the rest of this article reflects CNET’s original viewing recommendations.

Guiding Questions

  • How big a TV should I buy based on the above guidelines and my particular living room?
  • Could we develop a mathematical model to illustrate these guidelines? With, like, variables and stuff?
  • Alternatively, how could I set up my living room in order to fit the kind of TV I purchased?

Suggested Activities

  • Have students develop a model (or “rules” to follow) to express the above recommendation mathematically. (This one’s partially answered below)
  • Students could optimize viewing experience given a floorplan and a TV.
  • A Consumer Reports-ish type TV buying guide? We’re veering here…

Attempted Solution

So the initial model for the constraints listed by CNET isn’t terribly complex.

Constraint 1) “you’d want to sit no closer than 1.5 times the screen’s diagonal measurement”

Constraint 2) “no farther than twice that measurement to the TV”

So, with x as the screen’s diagonal and d as the viewing distance: lower bound, d > 1.5x; upper bound, d < 2x. And there you have it.
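The rule also makes a tidy one-line function (a sketch of the guideline, not anything from CNET):

```python
# CNET's guideline as a function: for a screen with diagonal x inches,
# sit between 1.5x and 2x inches away.
def viewing_range(diagonal):
    """Return (closest, farthest) recommended viewing distance, in inches."""
    return 1.5 * diagonal, 2 * diagonal

lo, hi = viewing_range(50)
print(lo, hi)  # 75.0 100 -- matching CNET's 50-inch example
```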

Surely we could ramp up the complexity of the problem with some of the above floorplanning activities and additional cost constraints. How would you modify this situation to serve our mathematical purpose here?


More math food blogging: I may need some help from my Southern friends.

I think I may have an eating problem. Or just an eating-mathematically problem. Here’s my problem today.

Delicious, delicious pigs-in-a-blanket (from pillsbury.com):

Pigs-in-a-blanket, for the uninitiated, are little hot dog/sausage type things warmly embraced by crescent roll dough. In fact, that’s the ingredient list:

  • Little sausages.
  • A can of crescent roll dough.

Cooking instructions: Wrap those little buggers up and toss them into an oven until you can’t stand it any longer.

At least, that’s how I’ve always made them. Maybe I could get super-ambitious and make my own dough, but that sounds like a lot of work for breakfast (side note: yes, this is a breakfast food).

Here’s the problem. How am I supposed to cut this triangular piece of dough to ensure proper sausage coverage?

Like this, this, or this? Or none of the above?

I can’t seem to get congruent triangles out of this thing. So I end up with mismatched pigs-in-blankets. Some have too much dough, some have too little. Many don’t wrap properly.

Awful. Just awful.

Like I said, I can’t get the triangles to come out congruent.

Not only are the triangles not congruent, they’re not similar at all. They’re not even the same type of triangle. So I need advice on a few levels.

How can I cut the initial right triangle dough in order to get:

    • The most congruent-like triangles?
    • The most similar-like triangles?
    • Triangles that are both congruent and similar, for easy sausage-wrapping?

Here’s what I start with.

I want to end with those perfectly covered pigs-in-blankets above. How do I get from start to finish? Please let me know in the comments or tweet me a picture of the proper triangle-slicing orientation.
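One classical answer to the congruence question: connecting the midpoints of a triangle’s sides cuts it into four congruent triangles, each similar to the original. A quick coordinate check (the 3-4-5 dimensions here are made up, not the actual dough):

```python
# Midpoint cuts: joining the midpoints of a triangle's sides yields four
# congruent triangles similar to the original (each side is halved).
# The triangle below is a hypothetical 3-4-5 right triangle.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C = (0, 0), (4, 0), (0, 3)
m1, m2, m3 = midpoint(A, B), midpoint(B, C), midpoint(C, A)

original = sorted([dist(A, B), dist(B, C), dist(C, A)])
middle   = sorted([dist(m1, m2), dist(m2, m3), dist(m3, m1)])
print([o / m for o, m in zip(original, middle)])  # [2.0, 2.0, 2.0] -> similar, ratio 2:1
```

That won’t match the crescent-roll perforations, of course, but it’s a starting point for the discussion.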


Area, Overlap, and Sandwich Meat Efficiency

I find myself writing about food a lot on this here blog. I’m starting to wonder if one could construct a whole thematic unit around the Math of Food. Or create a “meal” from appetizer, main course, and dessert items.

Or maybe I just need to eat breakfast.

Artifact

Good Sandwich Guide.

Not sure where it originated, but I found it here on one of those 99 Life Hacks! pages.

Guiding Questions

  • How much overlap of bologna occurs in the “traditional” versus the “life hack” method?
  • How much area of bread is wasted in the “traditional” orientation?

Suggested activities

  • This seems like an investigation ripe for Geogebra.

I’d also consider bringing several bread sizes and shapes. How would you orient the bologna for rectangular kinds of sandwich bread?
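For the wasted-bread question, here’s a back-of-envelope sketch under one loud assumption: square bread of side s, covered by a single round slice of bologna with diameter s.

```python
# Fraction of a square slice of bread NOT covered by an inscribed circle
# of bologna: 1 - (pi r^2) / (2r)^2 = 1 - pi/4, whatever the bread size.
import math

uncovered = 1 - math.pi / 4
print(f"about {uncovered:.1%} of the bread goes naked")  # about 21.5%
```

The “life hack” orientations change that fraction, which is exactly the Geogebra investigation.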

And don’t even get me started on cheese.


Who doesn’t want to relive the 2000 election? (Stats problem)

We’ll take a slight detour from my college readiness manifesto (that hasn’t even really started yet) to bring you the following election-related problem. Then again, this problem was lifted directly from a graduate level Statistics class, so this might give some insight into what college readiness could potentially look like. Hadn’t thought of that. Enjoy!

Artifact

Here’s a (non-abridged) problem I received in my graduate level stats class last week (due tomorrow! hope it’s ok that I’m posting it!). I think it’s a great problem and one that’s certainly prevalent around this time:

(from The Statistical Sleuth, Ramsey & Schafer, 2nd Ed.)

1. (SS#8.25) Presidential Election of 2000 

The US presidential election of November 7, 2000, was one of the closest in history. As returns were counted on election night it became clear that the outcome in the state of Florida would determine the next president. At one point in the evening, television networks projected that the state was carried by the Democratic nominee, Al Gore, but a retraction of the projection followed a few hours later. Then, early in the morning of November 8, the networks projected that the Republican nominee, George W. Bush, had carried Florida and won the presidency. Gore called Bush to concede. On the way to his concession speech, Gore then called Bush to retract that concession. When the roughly 6 million Florida votes had been counted, Bush was shown to be leading by only 1,738, and the narrow margin triggered an automatic recount. The recount, completed in the evening of November 9, showed Bush’s lead to be less than 400.

Meanwhile, angry Democratic voters in Palm Beach County complained that a confusing “butterfly” ballot in their county caused them to accidentally vote for the Reform Party candidate Pat Buchanan instead of Gore. See the ballot below. You might understand how one could accidentally vote for Buchanan instead of Gore because Gore’s name is the second listed on the left side, but his “bubble” is the third one. Two pieces of evidence supported the claim of voter confusion. First, Buchanan had an unusually high percentage of the vote in that county. Second, there were also an unusually large number of ballots discarded during counting because voters had marked two circles (possibly by inadvertently voting for Buchanan and then trying to correct the mistake by then voting for Gore).

Make a scatterplot of the data, with X = # of votes for Bush and Y = # of votes for Buchanan. What evidence is there that Buchanan received more votes than expected in Palm Beach County? Analyze the data without Palm Beach County to obtain an appropriate regression model fit. Obtain a 95% prediction interval for the number of Buchanan votes in Palm Beach County from this fitted model (assuming that the relationship between X and Y is the same in this county as the others). If it is assumed that Buchanan’s actual count contains a number of votes intended for Gore, what can be said about the likely size of this number from the prediction interval?

Why couldn’t a similar problem be asked in a HS Stats class? Maybe modified, but seriously, why not? And especially why not now, in a year divisible by four (Summer Olympics and presidential election years)? The problem’s a bit wordy though. Let’s try this:

Artifact, reworked:

The US presidential election of November 7, 2000, was one of the closest in history. As returns were counted on election night it became clear that the outcome in the state of Florida would determine the next president. When the roughly 6 million Florida votes had been counted, Bush was shown to be leading by only 1,738, and the narrow margin triggered an automatic recount. The recount, completed in the evening of November 9, showed Bush’s lead to be less than 400.

Meanwhile, angry Democratic voters in Palm Beach County complained that a confusing “butterfly” ballot in their county caused them to accidentally vote for the Reform Party candidate Pat Buchanan instead of Gore. See the ballot below.

Guiding Questions

  • How could we use statistics to determine whether or not the “butterfly” ballot confused voters?
  • How big of an outlier was Palm Beach county?
  • Had the ballot been more traditional, could we predict the outcome of the Florida electoral votes (and presumably, the 2000 election)?
  • Is there a model of sorts we could employ to detect such anomalies in the future?
  • While we’re at it, what’s up with Dade County over there?

Suggested activities

  • Make a scatterplot and a linear fit and be, like, DUH, something was whack in Palm Beach County. (Data of Bush votes and Buchanan votes by county are at the bottom of this post.)
  • Socratic discussion on outliers, not to be confused with Outliers by Malcolm Gladwell.
  • Workshops on confidence intervals, standard deviation and the like.
  • What does the line of best fit look like with and without Palm Beach? And what might that tell us about the voting discrepancies in Palm Beach?

Attempted solution

I’m not going to post my response to the problem prompt, because it may violate academic honesty or something. But I’ll post a scatterplot of Bush/Buchanan votes by county and leave it at that.
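Without giving away the homework, the core move can be sketched in a few lines: fit Buchanan votes as a function of Bush votes with Palm Beach excluded, then see how far Palm Beach sits from the line. The county figures below are invented for illustration; the real ones are in the election2000 data file linked below.

```python
# Fit Buchanan = slope * Bush + intercept, excluding the outlier county,
# then compare the outlier's actual count to the fitted prediction.
# All vote counts here are hypothetical placeholders.
import numpy as np

bush     = np.array([ 5000, 20000, 60000, 120000, 152000])   # hypothetical counties
buchanan = np.array([   20,    80,   250,    500,   3400])   # last row ~ "Palm Beach"

slope, intercept = np.polyfit(bush[:-1], buchanan[:-1], 1)   # leave out the outlier
predicted = slope * bush[-1] + intercept
print(f"predicted: {predicted:.0f}, actual: {buchanan[-1]}")
```

The Sleuth problem then asks for a 95% prediction interval around that fitted value, which is where the confidence-interval workshops come in.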

Data: election2000


Do violent video games cause violence? One Social Studies teacher’s experience teaching Math

(A lot of people have heroes. Many of those heroes are athletes or celebrities. For others, they are cops, firefighters, and teachers. One of mine is Lee Fleming, a co-worker, friend, and inspiration. Lee has taught Social Studies and Spanish. A couple weeks ago, she added “Math” to that impressive resume, despite never being formally trained in math ed. She wanted to get her kids and the neighborhood kids ready for mathematics for the year and took it upon herself to review some math with them before the school year started.

I feel like we have a lot to learn from her experience, which is posted here, in her words.)


===========================================================

My Math Experiment with Tweeners:

So you can get the context for my math below, here is the email that I sent out to my friends in the neighborhood:

Parents of x middle school students,

I am sure you know that the state of Utah will be implementing the Common Core Standards and the kids will be experiencing a new set of standards for the year.  No matter what math they had last year or how far along they are, it will be different and [redacted] middle school has re-sequenced the math to align with the core.  Some of the skills are the same, but now it includes statistics and some more thinking-based processes instead of only computational-based standards so there will be new stuff not just for the kids but for the teachers too.  Since I have some pretty solid familiarity with the core from my work over the last couple of years in a national pilot with the math core, I decided it would be good to have my kids prepped a little so they won’t struggle as much with the changes and will be prepared to be more helpful to their peers and the teachers.

My girls were actually excited to do the math, but my son was less enthused so I had this really geeky idea that I would run him through a little math project aligned to the Common Core and see if anyone else is interested. It will be kind of a fun little project in which they look at graphs and charts to try to understand how statistics works and get their math brains going again since summer is always tough to recover from anyway.  I think my son would like it more if it were done in a group so… here is my proposal for you:

*FREE SUPER FUN MATH PROJECT sessions in the Fleming’s Fancy Basement

– I will do two 90 minutes sessions, one for each of the next two Saturdays from 9:30-11:00 (August 11th and 18th)
– You can send your kid(s) to one or both of the sessions
– I am going to collect a $5 deposit upon entrance that I will *return* to the kids at the end of the session if they either master the math principles of the session or if they can prove they tried.  That way you and I both know that they got something out of it and if they don’t care, I get 5 bucks for my time and effort trying to keep them interested.
*Not including deposit

As I was planning the course I decided on this standard for 8th grade:

Investigate patterns of association in bivariate data.

What the heck is this??  Why can’t they just say something like two variables?

So I thought if I were a teacher trying to get kids to understand multiple variables and scatter plots, I would like for kids to have some general literacy about what bivariate data looks like and what it means.  I also thought that the stats leading up to this standard (I checked out the lower grades too) included binomials, understanding population sampling, and general understanding of stats and graphs.   I also thought that it would be important for kids to understand what the data is NOT saying just as much as they should try to learn something from it.

So… I decided to pick a topic of interest to the kids and pose a question to investigate:

Part I:

Pose question of the day:

What is the relationship between video games and violent behavior?

I had the kids pose a hypothesis.

On these cool white boards I had cut at Lowe’s (2’x2’):

Part II: Walk around the room and look at data in conjunction with a series of statements:

1) Video games have gotten more violent.  What makes this statement true or false?

2) Video games have caused an increase in violent crime.  What makes this statement true or false?

3) Girls who play video games have worse behavior than girls who do not.  What makes this true or false?

4) Boys who play video games are more likely to have behavior problems than boys who do not.  What makes this statement true or false?

5) As the use of video games has increased, so has bullying and teenage violence.  What makes this statement true or false?

I gave each student a color, and for each statement they had to write what made the statement true or false.  I gave them a choice of pairing up or working alone, and I had a mixture of both.  Some kids would write alone and then gravitate to another team.  Some wanted to work with a partner the whole time and some kids wanted to be alone the whole time; it seemed to be a good strategy.

As I was walking around, I found that the questions they had the hardest time with were the tables about violent behaviors for the gamers.  I had semi-anticipated this based on the fact that it was not only complicated but the question asked them about data that was NOT on the chart—the overall incidence rate of behaviors is listed, but it does not disaggregate non-gamers.  It was really interesting to hear the dialogue and it was also a great opportunity to demonstrate the value of a good graphical summary of data.

Part III Discuss findings

Once they came back together, we went through each of the boards and had them clarify any comments, explaining their proof from the data that their statements were accurate.

I also asked them if their original hypothesis had changed after looking at the data.  I was surprised how much the kids understood about the data, but what was even more interesting was their interpretation of it.  Many assumptions came up, but the two predominant conclusions were:

  • Video games have caused a decrease in violence because kids can take out their violent aggression through games instead of people
  • M-rated games cause violence

Are video games the ONLY explanation for violence decreasing?  Are you sure?  Where does it tell you in the data that video games made a difference in violent crime?  What if we looked in a place in the world where there was no electricity and we noticed that violence was decreasing too—what would you say the cause would be there?

I asked them to then draw a picture of an experiment they could run to prove that video games really did decrease violence.  Each team struggled for a bit but then one team had drawn a picture of two houses:

Then another group chimed in with “but it would have to be 100 houses!” and then a third said “I think all the houses would have to be people who never played video games before and see if their violence changed.”

So we collectively talked about independence of variables.  Pretty cool, right?  I also had them define correlation vs. causation; I don’t honestly know if those are math standards, but they seemed relevant to the conversation.

Finally, I had them take the charts that they struggled with and asked them to draw a graphic representation.  This was super fascinating!  The one girl in the group, younger than the other kids by two years, generated this chart.  She got the idea of a key and x- and y-axes, and did not hesitate to jump right in:

The boys, who up until this point had really understood the data even more than her, produced stuff like this:

And even that was only after looking at her work.

Finally, I had them leave with an exit ticket explaining the difference between causation and correlation, and then gave them an assignment to look at a minimum of one chart or graph and decide what the graph IS saying vs. what it is NOT.  We have part II tomorrow morning; I hope they retained what we discussed last week!

— Lee

==============================================

I feel like there’s a lot to learn here. What were your takeaways as an educator? Likes? Wonders? Clarifying questions? Next Steps? Let’s hear them in the comments! And thank you, Lee, for providing us a really nice case study.