
Thursday, February 9, 2012

Averages, Probability, and the Indivisible: Why you're morphine free in 1 week (probably)

Math makes everything less fun, right?


This morning I sat through a lecture discussing half-lives of drugs, and the lecturer declared that "If a drug follows first-order kinetics [meaning its concentration is cut in half every few hours] it will never be entirely eliminated from your body.  So, you still have some of the formula you drank as a baby!" Okay, so Aristotle thought of this one too.  But zealots beware: the world does not have an infinite resolution for everything.

The basic principle here is simple: if you have three baseballs, and you divide them into two groups, one group gets two baseballs and the other gets one.  You can't have 1.5 baseballs.  Sure, a half cup of water is a half cup of water, but half a baseball is a useless lump of string and leather, and at that point it has ceased to be a baseball.  Thus, you cannot have half a baseball!  If you say you have 1.5, it really means you have either one or two on average.  You cannot always divide by two, and as numbers get small, the remainder gets to be a significant part of what you have.  No matter how big a number you start with, if you keep dividing you eventually get to a number that is small enough that it matters whether you can cut a baseball in half.  Basically, we're going to have a deathmatch between Achilles and the Tortoise and the Wheat and Chessboard problem, but we're going to let the chessboard have as many squares as it needs.

Let's say you take a 20mg dose of morphine (285.34 g/mol, half life of about 2 hours).  20mg does not represent an infinite number of molecules.  Cue Avogadro's number and a bit of multiplication that I'll omit (if you'd understand it, you probably already know it), and we see that 20mg of morphine is about 4.22 * 10^19 molecules of morphine.  After one half life, that number is cut in half (2.11 * 10^19).  After two, you divide by 4.  After three, divide by 8.  After 10 half lives, 4.12 * 10^16 molecules are left.  Still a lot, right?  After ten half lives, you are dividing the original number by 1024.  That's a lot, but not compared to the original 42,200,000,000,000,000,000 molecules.  Well, because we're dividing by powers of two, just one more half life and we're dividing by 2048, then 4096, etc.  In the case of morphine, at 65 half lives you have 1.1 molecules of morphine still around.  Of course, this is impossible: we can only have 0, 1, 2, etc.  So, even assuming that each division was a perfect half, we can only have 66 half lives before the morphine is gone entirely.  Morphine has a 2-hour half life, so the bottom line is that after about 132 hours (about 5.5 days, if you're into counting "half days"), your body will not have a single molecule of morphine in it.
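If you'd rather let a computer do the dividing, here's a quick sanity-check sketch of the same arithmetic (using only the dose, molar mass, and half life quoted above):

```python
AVOGADRO = 6.022e23     # molecules per mole
dose_g = 0.020          # the 20mg dose of morphine
molar_mass = 285.34     # g/mol for morphine
half_life_hours = 2

molecules = dose_g / molar_mass * AVOGADRO  # about 4.22 * 10^19 molecules

# Keep cutting in half until less than one whole molecule "remains"
half_lives = 0
while molecules >= 1:
    molecules /= 2
    half_lives += 1

print(half_lives)                     # -> 66 half lives
print(half_lives * half_life_hours)  # -> 132 hours, about 5.5 days
```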

Now, of course what really happens is that each half life does not eliminate exactly half of what's in you. Heck, it might be off by trillions of molecules each half life, but when we're working with a trillion-trillion-trillion-etc... molecules, we don't care too much.  But what does that mean for any given molecule of morphine in you?  We might think of a half life as just another way of saying "a length of time during which any given molecule has a 50% chance of being eliminated." For big numbers, that expression behaves like a fraction, because on average 50% end up being eliminated each half life.  However, as things get small, you have to go back to the probability definition.
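To see the fraction-versus-probability distinction in action, here's a toy simulation; the only assumption is that each molecule independently flips a fair coin every half life:

```python
import random

random.seed(42)  # just so the run is repeatable

def one_half_life(molecules):
    """Each molecule independently survives with probability 0.5."""
    return sum(1 for _ in range(molecules) if random.random() < 0.5)

# With a big population, the average behaves like the fraction 1/2...
big = one_half_life(10_000)
print(big)  # close to 5,000, though almost never exactly 5,000

# ...but with a handful of molecules, you see the randomness directly
small_runs = [one_half_life(4) for _ in range(5)]
print(small_runs)  # anywhere from 0 to 4 survive each time
```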

In conclusion, all of this really illustrates two things:
1) You can only divide something by two indefinitely if it is a quantity that it makes sense to have half of.
2) Averages and statistics work fine as fractions if you are working with a big population (like saying 25% of people die of cancer), but when you get down to single individuals, it's more useful to think of them as probabilities (I have a 25% chance that I will die of cancer).

Of course, that 25% could mean that 75% of people have 0% chance of dying from cancer, and the other 25% are genetically doomed to cancer with 100% certainty, but that's a discussion for another day.

[Note: I'd love to give proper attribution for the art above, but I can't find the original source.  Let me know if you know it.]

Tuesday, February 7, 2012

Economics and the Autonomous Car

The Only Designated Driver Batman Ever Needed

I'll start with a disclaimer: I'm really excited for self-driving cars.  I may be waiting decades for my Batmobile that can come pick me up after a night of crime-fighting or, you know, fire rockets as necessary, but that doesn't mean I have to be stuck actually driving my car for another 10 years.

With Google, BMW, and others hopping onto the autonomous bandwagon, we are starting to see some real backing for self-driving cars.  Google has announced that it has driven hundreds of thousands of miles, while BMW has braved the German Autobahn sans human input.  Given that crashes will eventually happen with these vehicles, questions have arisen about who would be responsible in the event of a crash: the human driver or the company that made the car.  To me, this question relies on the answers to two other questions:

First, how good should this technology be before we put it in consumer hands?
Second, who is responsible for crashes now?

The answers seem fairly straightforward to me.

First, autonomous cars need only drive as well as people to be unleashed.  Computers make mistakes, sure, but so do people; the real question is whether computers or humans cause more accidents.  Just this morning, walking beside a busy one-way street, I saw an all-too-common occurrence.  Driver A is tailgating Driver B.  Driver A checks to see if the left lane is clear and starts changing lanes to pass Driver B.  Just then, Driver B sees that traffic in front of him has slowed down, and hits the brakes.  A little brake here, a little gas there, and BAM, Drivers A and B are both headed to the shop.  A computer probably wouldn't have made that mistake, as it can check lanes and look forward at the same time.  A computer doesn't change the radio station, adjust its hair, take phone calls, or text behind the wheel.  Of course, computers can also have a tough time telling animals and styrofoam blocks apart.  The point is, both people and computers have their faults, and the real question isn't what the faults of either one are, but which option ends up hurting more people.  If it were conclusively proven that, for example, humans average 2 fatalities per 1,000,000 miles driven while computers also average 2, the emphasis shouldn't be that computer-driven cars are killing people, but that they're as safe as their flesh-and-blood alternative.  If the computer-kill number dropped to 1 per 1,000,000 miles driven, that would be cause for celebration, while still soberly acknowledging that the technology has room for improvement.


Second, in regard to responsibility, I see no reason to shift it from the current party held liable: insurance companies.  Okay, in criminal cases things might get fuzzy, but as far as the financial ramifications, be they injury, repairs, etc., if I were to rear-end someone today, my insurance would pay the costs.  Not the car manufacturer, not me: my insurer ends up with the bill.  If insurers notice that self-driven cars crash half as much as human-driven cars, they could give you a fat discount for letting the computer take the wheel.  If they notice Chevy's autonomous cars are crashing twice as much as Ford's, they would charge you double for driving a Chevy.  I get my autonomous car, the insurer maintains profits, crash victims of autonomous or human-driven vehicles still have repairs paid for, and everyone has a nice day.  In the end, it doesn't matter whether it was the vehicle's fault or the driver's; it only matters that accidents happen, and insurers and drivers alike would like them to happen less often.

As far as criminal ramifications, I think that question is really secondary to our first question.  If autonomous cars are as safe as or safer than human-driven cars, can you hold a manufacturer liable for accidents caused by their vehicles?  If so, should they also be rewarded for all the accidents they prevented?  The same goes for the drivers who decide to switch on the autopilot.  Unless car manufacturers, you know, purposefully program their cars to hit pedestrians, we should embrace the life-saving potential this technology has, even if it comes with a few accidents along the way.

In short, the answer to it all is that autonomous cars should be welcome on the road as soon as they drive as well as we do, and our existing insurance system can handle the rest.

Tuesday, January 24, 2012

Why I won't care about 4K resolution until I have a bigger TV

So along with the OLED wave, 4K TV seems to be coming.  4K TV has roughly 4 times the pixels of 1080p: 2x horizontal and 2x vertical.  Now, I think a bajillion pixels per inch would be cool, but would it really matter for the average viewer?  To be more specific, would anyone sitting 6 feet from a 47" TV even notice a difference between 1080p and 4K resolution TVs?

Let's get some numbers and do some quick geometry.

Fact 1: According to Wikipedia, a healthy (20/20) human eye can discriminate two points that are 1 arc-minute, or 1/60 of 1˚ apart.  How this translates to width depends on the distance between the viewer and the points, as will be discussed.

Fact 2: 1080p means "1080 pixels tall by 1920 pixels wide",  4K means "2160 pixels tall by 4096 pixels wide".

Fact 3: A 47" TV is 23" tall and 41" wide. (This assumes a 16:9 aspect ratio.  If you are really good at mental math, you'll have noticed that 4096:2160 is a 256:135 aspect ratio.  For the rest of this, I'll just pretend that everything is 16:9).

Okay, now the geometry.  We know the minimum angle your eye can resolve, and we're assuming you sit 6 feet from your TV, so we can calculate the minimum distance between points (x below) your eye can resolve.

x = (6 ft)(12 in/ft) × tan(1/60°) = 72 in × tan(1/60°) ≈ 0.021 in

So, at a viewing distance of 6 feet we know that your eye can distinguish points as long as they are at least 0.021 inches apart.  What we want to know now is whether the pixels on a given TV are far enough apart that your eye can tell the difference.  Instead of pixels per inch, we really want to know the inches per pixel, which we can calculate like this:

inches per pixel = (screen width in inches) ÷ (pixels across the screen)

Now, let's look at the pixels on the 1080p standard on a 47" TV.  If we just take the horizontal axis, 1080p means 1920 pixels horizontally, and 47" TVs are 41" wide, so we can calculate the distance between horizontal pixels as follows:

41 in ÷ 1920 pixels ≈ 0.0214 in between pixels

So how would it look if we had a 47" 4K display?

41 in ÷ 4096 pixels ≈ 0.0100 in between pixels

The Verdict
The goal for a good TV is to get as many pixels as possible close together, but once you pass the threshold where your eye can't tell the difference any more, it does you no good!  So the test is this: is the distance between pixels on a TV smaller than the distance between points your eyes can distinguish?  

For a 47" TV from 6 feet, the pixels are about 0.0214 inches apart, and your eye can only distinguish points about 0.0209 inches apart, which is right at the same limit.  So, if you, like me, are watching a 47" TV from 6 feet, then congratulations!  Your TV has about as many pixels as your eye could ever care about!  So smile and relax, knowing that if someone swapped your set for a 4K display, you wouldn't even notice!  Unless you are sitting closer than 6 feet, you have it made!
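If you want to rerun the whole 47-inch check yourself, here's the geometry above as a few lines of Python (same assumptions: 6-foot viewing distance, 41-inch-wide screen, 1-arc-minute eye):

```python
import math

viewing_distance_in = 6 * 12          # 6 feet, in inches
screen_width_in = 41                  # width of a 47" 16:9 TV
one_arcminute = math.radians(1 / 60)  # the eye's limit: 1/60 of a degree

# Smallest gap the eye can resolve at this distance
eye_limit = viewing_distance_in * math.tan(one_arcminute)

# Horizontal distance between neighboring pixels
pitch_1080p = screen_width_in / 1920
pitch_4k = screen_width_in / 4096

print(round(eye_limit, 4))    # -> 0.0209 inches
print(round(pitch_1080p, 4))  # -> 0.0214 inches: right at the eye's limit
print(round(pitch_4k, 4))     # -> 0.01 inches: well past what the eye can see
```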

If, however, you catch your shows on a 65 incher, or you really like to sit close when you play Halo, be excited for the 4K future of TV!  Of course, at 4x the file sizes, can you imagine how much buffering you'll sit through watching 4K videos on YouTube?!

To appease your curiosity,  I'm including some lines from my spreadsheet looking at other TV sizes.  Remember, the 0.021 inches "resolution" of your eye only applies if you sit 6 feet away, and these show the numbers for size and pixels in the horizontal axis (though vertical should be the same ratio).
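In case the spreadsheet image doesn't survive the web, here's a little sketch that regenerates the same kind of horizontal-axis numbers; the list of sizes is just my own pick, and everything assumes 16:9 and the same 6-foot viewer:

```python
import math

EYE_LIMIT = 72 * math.tan(math.radians(1 / 60))  # ~0.021 in at 6 feet

for diagonal in (42, 47, 55, 65):
    width = diagonal * 16 / math.hypot(16, 9)  # width of a 16:9 screen, inches
    pitch_1080p = width / 1920                 # inches between pixels at 1080p
    pitch_4k = width / 4096                    # inches between pixels at 4K
    print(f'{diagonal}" TV: {width:.1f} in wide, '
          f'{pitch_1080p:.4f} in/pixel (1080p), {pitch_4k:.4f} in/pixel (4K)')
```

The bigger the set, the further the 1080p pixel spacing drifts past the 0.021-inch eye limit, which is exactly when 4K starts to earn its keep.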


Wednesday, January 11, 2012

3DTV is Awesome. Stop the Hate!

Count how many people are grimacing
and/or looking at the other movie watchers. 
Then see point 4 below.

I've seen a lot of stories in the news talking down 3D in the home, saying that consumers aren't buying it, figuratively or literally.  The literal part is somewhat true, and they have numbers to back it up.  But I have some objections.

1) When they talk about 3DTV sales, they always say something like "disappointing" or "less than analysts expected."

*My Objection:  Analysts saw the set and thought, "This is AWESOME!!!  EVERYONE will be buying these soon, despite the expected higher-than-not-3D costs."  Eventually, however, sales turn out to be modest.  Modest, but not nonexistent, and not insignificant.

To me, the big predictions are a testament to the awesomeness of 3DTV.

2) When they talk about why people don't buy 3DTVs, people usually start in on a laundry list of 'sometimes reported' problems like nausea, not being able to see it, etc.  The not-being-able-to-see-it part is pretty clear, but nobody seems to report numbers for nausea, headaches, and so on.  The problem I see is that almost every article I've seen in the news is written by some random person who saw a demo at a store for a few minutes.

*My Objection:  Remember the seizure warnings on video games?  Some people have serious problems with that.  Some people also have serious problems eating gluten, while the rest of the world lives our happy gluten-filled lives.  MOST people see 3D just fine: no headaches, nausea, or anything of the kind.  And for people to hand down a sentence on a technology they saw for 1-2 minutes in a showroom is ridiculous.

3) People keep saying 3D is a "gimmick".

*My Thoughts:  Sure!  In fact, I've been meaning to wear an eyepatch around to abolish this gimmickry from the rest of my life too!  You know how it is, it gets so distracting when you're trying to watch a perfectly flat football game on your perfectly flat TV and you suddenly realize that your kitchen appears further away than your sidetable. In fact, I've thought about turning off color on my TV to stop those darned hues distracting me from the powerful human drama that is "Everybody Loves Raymond".

To be serious, sometimes filmmakers put in gimmicky things, like swords in your face.  Some amusement park rides are rollercoasters, where the motion is manipulated to throw you off guard because you find it more exciting that way, and some rides are Ferris wheels where the motion just adds to the richness of the experience.  If my Ferris wheel took a deep dive, I'd think 'Who the heck made this thing?', not 'I've about had it with these moving amusement park rides.'

4) Well those goofy glasses are just so uncomfortable and weird looking.

*My Objections:

A-Uncomfortable? WHAT?  Okay, maybe the active shutter style ones are uncomfortable, but I wouldn't know, as I've never used them.  My passive ones, on the other hand, I've worn all the way down the stairs and into my car before I remembered I was wearing them.  And I don't wear glasses.  These things are designed to be lightweight, relaxed-fitting, functional glasses.  Which leads to my objection to the goofiness argument:

B-Remember how people watch movies in the dark?  And they, most surprisingly, watch movies when they watch movies (as opposed to looking around the room judging everyone).  3DTV is for home use.  Home, you know, where you sometimes walk around in your underpants.  Where you get upset about whether somebody did or did not eat out of your tub of cottage cheese.  Yeah, wouldn't want those 3D glasses to cramp your style.  I'm sure, when you're sloshing your bowl of Cheerios on your Spongebob PJs and "I'm With Stupid ->" shirt that isn't pointing to anyone while watching a movie at 10pm you silently think to yourself 'Thank goodness I don't have those glasses, those might make me look goofy!'

5) There's not much content.

*My thought:  Partly true, and what's there is expensive.  Just like VHS was.  Just like DVD was.  Just like Blu-ray was.  Just like the internet was.

MAJOR EXCEPTION:  Have a PS3?  Great!  A ton of new games are in 3D, and let me tell you that regardless of what you think of 3D in movies, 3D in games is AWESOME times about 10,000 ±2.  And given that (good) games generally last for hours and hours and have excellent replay value, you suddenly have HUNDREDS of hours of wonderfully immersive 3D content.

Conclusions:
   *Anti-3D hype is way more unfounded than 3D "hype", or "recognition of the awesomeness of 3D" as I prefer to call it.
   *Seriously, 3D is really awesome.

Qualifications:
    *I actually OWN a 3DTV, and use it to play 3D Blu-ray and 3D games, both of which are awesome. Both of which I've seen bad examples of, but I could also say that about movies, food, or people in general, all of which I'm still very fond of.