I stumbled on a simple thought experiment that nicely illustrates the value of iteration.
Imagine a well-intentioned but flawed device, perhaps a homely little robot, trying to navigate its way on a nighttime trip to the gas station for a midnight snack.
To do this, it navigates using the stars. Whenever it sets out, it takes a photo with its upwards-pointing telescopic web cam, then compares the result with an internal clock and a star map. Using this, it deduces where it is, and which direction it ought to speed off in.
Unfortunately, the webcam takes only grainy photographs. Worse, sometimes the stars are partly occluded by overhanging trees, a plane inches across the frame, or the robot is totally unlucky and captures a street light.
As a result, half the time it draws entirely the wrong conclusion about which direction to travel, and heads off in a completely random direction. It picks the right direction only half the time.
With this level of accuracy, our robot's journey is fraught with danger. There's only a fifty percent chance it will reach its destination at all, and even if it makes it, it faces the same risk on the way back. The odds of a successful round trip are only one in four.
Iterate!
Accordingly, we make one tweak to the robot - instead of checking its star map only at the beginning of each leg of the journey, it checks repeatedly, say every five minutes. This is done in the crudest possible way: by jumping back to the top of its simple little navigation program.
This one tweak has an amazing effect on its accuracy. Even though it has no memory, and no way to decide to stay on course after a bad photo, the robot's arrival is now a practical certainty.
Why is this?
Assuming the time between checks is small compared to the length of the journey, the refined robot will spend an average of half the time driving towards its target. The rest of the time will be spent travelling in a random direction.
Importantly, these random, off-target jaunts aren't all in the same wrong direction - they form a random walk.
Inevitable
Some of the time, the robot will be travelling in exactly the wrong direction, cancelling out whatever progress it has already made.
That will happen only rarely, however. Most of the time it will be off-target by a lesser degree. Even when it's headed almost directly away from its target, it will still be increasing the distance to its target at less than its full drive speed - a loss more than compensated for by the half of the time it does drive the right way. On average, an off-course step simply wastes time, heading perpendicular to the direction it should be going.
Overall, the tweaked robot will make an average rate of progress of half its best speed, jigging and jogging this way and that, but heading inevitably towards the gas station. And they said that GOTO was considered harmful.
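The argument above is easy to check empirically. Here is a minimal sketch of the iterating robot as a biased random walk; `simulate_trip`, the flat 2D model, and all the parameter values are my own illustration, not anything from the original thought experiment:

```python
import math
import random

def simulate_trip(accuracy=0.5, speed=1.0, max_steps=100_000,
                  target_dist=1000.0, arrive_radius=5.0, rng=None):
    """One trip for the iterating robot.

    Every step (one check of the stars) the robot picks the correct
    heading with probability `accuracy`; otherwise it drives one step
    in a uniformly random direction.  Returns the step count on
    arrival, or None if the step budget runs out.
    """
    rng = rng or random.Random()
    x, y = 0.0, 0.0                       # start at the house
    tx, ty = target_dist, 0.0             # gas station due east
    for step in range(max_steps):
        if math.hypot(tx - x, ty - y) <= arrive_radius:
            return step
        if rng.random() < accuracy:
            angle = math.atan2(ty - y, tx - x)        # good photo: aim true
        else:
            angle = rng.uniform(0.0, 2.0 * math.pi)   # bad photo: random heading
        x += speed * math.cos(angle)
        y += speed * math.sin(angle)
    return None

steps = simulate_trip(rng=random.Random(1))
print(steps)  # roughly 2000: distance / (accuracy * speed)
```

With 50% accuracy the robot covers the 1000-unit trip in about 2000 steps - half its best speed, as argued above - and it essentially always arrives.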
Comments

At what accuracy would the random walk overpower the destination-seeking?

At 0% accuracy. Any better accuracy creates a bias that favors drifting towards the destination.

Of course, with only 1% accuracy, the robot will be heading towards the correct destination at an average of only 1% of its driving speed, so it could take a while. But the point holds, I think, that a robot with a given level of accuracy is better off iterating rapidly.

All of this assumes, of course, that the cost of iterating is zero. If stopping to look at the stars takes a long time, then we're into a whole other question: the optimal number of iterations.

I guess we also need to address how the robot knows when it has arrived; if it can only check at the end of its drive, and may be inaccurate there as well, the odds might quickly descend into madness, well below 1% of its driving speed.
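The claim that accuracy p yields an average speed of p can be checked directly. The sketch below (my own illustration, with a hypothetical `mean_drift` helper) projects each step onto the direction of a far-away target, where a random heading contributes cos(angle), which averages to zero:

```python
import math
import random

def mean_drift(accuracy, steps=200_000, seed=0):
    """Average per-step progress towards a distant target, at unit speed.

    With probability `accuracy` the robot drives straight at the target
    (progress +1); otherwise its heading is uniformly random, and its
    progress is cos(angle), which averages to zero.  The net drift
    should therefore land near `accuracy` itself.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        if rng.random() < accuracy:
            total += 1.0                                    # aimed true
        else:
            total += math.cos(rng.uniform(0.0, 2.0 * math.pi))  # lost
    return total / steps

for p in (0.01, 0.5, 0.99):
    print(p, round(mean_drift(p), 3))
```

So a 1% robot does indeed crawl towards the gas station at about 1% of its driving speed, and the useful question left over is the one raised above: how often to pay the cost of a check.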
Yes, I was assuming the clerk and the home owner would take care of that, but you're right.