Wednesday, April 14, 2010

Sorry, Adobe, you screwed yourself « Sharing the truth one thread at a time

I realize this is a pro-Mac, pro-Apple forum. But being a designer myself, and an iPhone owner, I feel this decision by Apple is really just hurting the developers and designers. If I want to create some cool game for iPhone, I need to learn Objective-C? Come on.

And what you don’t know or hear from the Apple propaganda machine is that Apple is not fixing the bugs in their own s/w that are preventing Flash from improving cpu usage on Mac by taking advantage of h/w acceleration for video decode. The comparisons to HTML5 are bogus because you’re comparing apples (pardon the pun) to oranges. An apples-to-apples comparison by a 3rd party (I think I saw this article on streamingmedia.com) showed that HTML5 and Flash with gpu support performed the same. The performance problems that everyone keeps talking about are mainly due to having to do all the heavy video decoding in s/w vs. h/w.

Also, Apple is pushing HTML5 on everyone. But their own browser, Safari, on Windows doesn’t support H264. So how can they seriously be pushing HTML5 as the standard for people to adopt? They’re still forcing people to have 2 versions of their site: one for iPhone/iPad, and one for everything else. Sure, there may be some people willing to do that now to get on the iPad. But Apple went down this path before with a closed environment, and it got Steve booted out before he came back to lead them to the promised land.

I have total respect for the company. They make great products. And I totally respect their business motives here. But Jobs is not being honest about his motives. All this Flash bashing is just a smoke screen to stir up anti-Adobe/anti-Flash sentiment, so people don’t talk about the real issues, or motivation.

Did you know that of the bugs Adobe filed that would allow them to do h/w acceleration, not one of them got fixed?

Jobs claims that this translation layer results in subpar apps. What a crock of shit. You don’t think people can write crappy apps in Objective-C? Apple screens all the apps anyway. If they’re serious about their quality claims, why don’t they screen for quality, or have some set of tests that an app needs to pass to be deemed high enough quality to make it onto their platform? In the end, Adobe’s packager is just translating swf byte code into native code, the same thing any Objective-C program compiles down to. What’s the difference? But again, Jobs is deflecting all that with these ridiculous claims. Just be honest, and live or die by your decision.

     by mc April 11, 2010 at 9:35 pm

Posted via web from Rob's posterous

Monday, April 12, 2010

Change We Can Believe In - Opinionator Blog

Steven Strogatz on math, from basic to baffling.

Long before I knew what calculus was, I sensed there was something special about it.  My dad had spoken about it in reverential tones. He hadn’t been able to go to college, being a child of the Depression, but somewhere along the line, maybe during his time in the South Pacific repairing B-24 bomber engines, he’d gotten a feel for what calculus could do.  Imagine a mechanically controlled bank of anti-aircraft guns automatically firing at an incoming fighter plane.  Calculus, he supposed, could be used to tell the guns where to aim.

Every year about a million American students take calculus.  But far fewer really understand what the subject is about or could tell you why they’re learning it.  It’s not their fault.  There are so many techniques to master and so many new ideas to absorb that the overall framework is easy to miss.

Calculus is the mathematics of change.  It describes everything from the spread of epidemics to the zigs and zags of a well-thrown curveball.  The subject is gargantuan — and so are its textbooks.  Many exceed 1,000 pages and work nicely as doorstops.

But within that bulk you’ll find two ideas shining through.  All the rest, as Rabbi Hillel said of the Golden Rule, is just commentary.  Those two ideas are the “derivative” and the “integral.”  Each dominates its own half of the subject, named in their honor as differential and integral calculus.

Roughly speaking, the derivative tells you how fast something is changing; the integral tells you how much it’s accumulating.  They were born in separate times and places: integrals, in Greece around 250 B.C.; derivatives, in England and Germany in the mid-1600s.  Yet in a twist straight out of a Dickens novel, they’ve turned out to be blood relatives — though it took almost two millennia to see the family resemblance.


Next week’s column will explore that astonishing connection, as well as the meaning of integrals.  But first, to lay the groundwork, let’s look at derivatives.

Derivatives are all around us, even if we don’t recognize them as such.  For example, the slope of a ramp is a derivative.  Like all derivatives, it measures a rate of change — in this case, how far you’re going up or down for every step you take.  A steep ramp has a large derivative.  A wheelchair-accessible ramp, with its gentle gradient, has a small derivative.

Every field has its own version of a derivative.  Whether it goes by “marginal return” or “growth rate” or “velocity” or “slope,” a derivative by any other name still smells as sweet.  Unfortunately, many students seem to come away from calculus with a much narrower interpretation, regarding the derivative as synonymous with the slope of a curve.

Their confusion is understandable.  It’s caused by our reliance on graphs to express quantitative relationships.  By plotting y versus x to visualize how one variable affects another, all scientists translate their problems into the common language of mathematics.  The rate of change that really concerns them — a viral growth rate, a jet’s velocity, or whatever — then gets converted into something much more abstract but easier to picture: a slope on a graph.

Like slopes, derivatives can be positive, negative or zero, indicating whether something is rising, falling or leveling off.  Watch Michael Jordan in action making his top-10 dunks.

Just after lift-off, his vertical velocity (the rate at which his elevation changes in time, and thus, another derivative) is positive, because he’s going up.  His elevation is increasing.  On the way down, this derivative is negative.  And at the highest point of his jump, where he seems to hang in the air, his elevation is momentarily unchanging and his derivative is zero.   In that sense he truly is hanging.
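The sign of that derivative can be checked directly.  As a small sketch (the take-off speed below is made up for illustration, not a measurement of any real jump), the elevation of a jump and its derivative, the vertical velocity, are:

```python
G = 9.8    # gravity, m/s^2
V0 = 4.9   # hypothetical vertical take-off speed, m/s

def elevation(t):
    """Height of the jumper at time t (simple projectile motion)."""
    return V0 * t - 0.5 * G * t ** 2

def velocity(t):
    """Derivative of elevation: positive going up, negative coming down."""
    return V0 - G * t

t_peak = V0 / G    # the derivative crosses zero at the top of the jump
```

On the way up, velocity(t) is positive; at t_peak it is exactly zero (the "hang" moment, where elevation is momentarily unchanging); afterwards it is negative.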

There’s a more general principle at work here — things always change slowest at the top or the bottom.  It’s especially noticeable here in Ithaca.  During the darkest depths of winter, the days are not just unmercifully short; they barely improve from one to the next.  Whereas now that spring is popping, the days are lengthening rapidly.  All of this makes sense.  Change is most sluggish at the extremes precisely because the derivative is zero there.  Things stand still, momentarily.

This zero-derivative property of peaks and troughs underlies some of the most practical applications of calculus.  It allows us to use derivatives to figure out where a function reaches its maximum or minimum, an issue that arises whenever we’re looking for the best or cheapest or fastest way to do something.

My high school calculus teacher, Mr. Joffray, had a knack for making such “max-min” questions come alive.  One day he came bounding into class and began telling us about his hike through a snow-covered field.   The wind had apparently blown a lot of snow across part of the field, blanketing it heavily and forcing him to walk much more slowly there, while the rest of the field was clear, allowing him to stride through it easily.  In a situation like that, he wondered what path a hiker should take to get from point A to point B as quickly as possible.

Figure 1 – the snow-covered field

One thought would be to trudge straight across the deep snow, to cut down on the slowest part of the hike.  The downside, though, is that the rest of the trip will take longer than it would otherwise.

Figure 2 – trip spending least time in deep snow

Another strategy is to head straight from A to B.  That’s certainly the shortest distance, but it does cost extra time in the most arduous part of the trip.

Figure 3 - straight line from A to B

With differential calculus you can find the best path.  It’s a certain specific compromise between the two paths considered above.

Figure 4 – best path, compared to two earlier paths.

The analysis involves four main steps.  (For those who’d like to see the details, references are given in the notes.)

First, notice that the total time of travel — which is what we’re trying to minimize — depends on just one number, the distance x where the hiker emerges from the snow.

Figure 5 – showing what x means

Second, given a choice of x and the known locations of the starting point A and the destination B, we can calculate how much time the hiker spends walking through the fast and slow parts of the field.  For each leg of the trip, this calculation requires the Pythagorean theorem and the old algebra mantra, “distance equals rate times time.”  Adding the times for both legs together then yields a formula for the total travel time, T, as a function of x.   (See the Notes for details.)

Third, we graph T versus x.  The bottom of the curve is the point we’re seeking — it corresponds to the least time of travel and hence the fastest trip.

Figure 6 - showing T versus x

Fourth, to find this lowest point, we invoke the zero-derivative principle mentioned above.  We calculate the derivative of T, set it equal to zero, and solve for x.

These four steps require a command of geometry, algebra and various derivative formulas from calculus — skills equivalent to fluency in a foreign language and, therefore, stumbling blocks for many students.

But the final answer is worth the struggle.  It reveals that the fastest path obeys a relationship known as Snell’s law.   What’s spooky is that nature obeys it, too.
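The four steps can be carried out numerically as well.  In the sketch below, all distances and speeds are invented for illustration (the column itself gives no numbers): T(x) is written out from the Pythagorean theorem and "distance equals rate times time," its zero-derivative point is located by bisecting on the sign of dT/dx, and the optimum is checked against Snell’s law.

```python
import math

D1, D2, W = 3.0, 4.0, 10.0   # widths of the snowy and clear strips, and the sideways offset from A to B
V1, V2 = 2.0, 5.0            # hiking speed in deep snow vs. on clear ground

def travel_time(x):
    """Total time T(x) if the hiker leaves the snow at sideways offset x."""
    snow_leg = math.hypot(D1, x) / V1         # time = distance / rate
    clear_leg = math.hypot(D2, W - x) / V2
    return snow_leg + clear_leg

def dT(x, h=1e-6):
    """Numerical derivative of T (central difference)."""
    return (travel_time(x + h) - travel_time(x - h)) / (2 * h)

# Zero-derivative principle: T is lowest where dT changes sign from - to +,
# so bisect on the sign of the derivative.
lo, hi = 0.0, W
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if dT(mid) > 0 else (mid, hi)
x_best = (lo + hi) / 2

# At the optimum the path obeys Snell's law: sin(theta1)/V1 == sin(theta2)/V2.
sin1 = x_best / math.hypot(D1, x_best)
sin2 = (W - x_best) / math.hypot(D2, W - x_best)
print(f"x* = {x_best:.3f}, sin1/V1 = {sin1 / V1:.6f}, sin2/V2 = {sin2 / V2:.6f}")
```

With any positive choice of the speeds and distances, the two Snell ratios printed at the end agree at the minimizing x, which is the "spooky" agreement with refracting light described next.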

Snell’s law describes how light rays bend when they pass from air into water, as they do when shining into a swimming pool.   Light moves more slowly in water, much like the hiker in the snow, and it bends accordingly to minimize its travel time.  Similarly, light also bends when it travels from air into glass or plastic as it refracts through your eyeglass lenses.

The eerie point is that light behaves as if it were considering all possible paths and automatically taking the best one.   Nature — cue the theme from “The Twilight Zone” — somehow knows calculus.

NOTES

  1. In an online article for the Mathematical Association of America, David Bressoud presents data on the number of American students taking calculus each year.
  2. For a collection of Mr. Joffray’s calculus problems, both classic and original, see: S. Strogatz, “The Calculus of Friendship: What a Teacher and a Student Learned about Life While Corresponding About Math” (Princeton University Press, 2009).
  3. Several videos and websites present the details of Snell’s law and its derivation from Fermat’s principle (which states that light takes the path of least time).   Others provide historical accounts.
  4. Fermat’s principle was an early forerunner to the more general principle of least action.  For an entertaining and deeply enlightening discussion of this principle, including its basis in quantum mechanics, see: R. P. Feynman, R. B. Leighton and M. Sands, “The principle of least action,” The Feynman Lectures on Physics, Volume 2, Chapter 19 (Addison-Wesley, 1964).

    R. Feynman, “QED: The Strange Theory of Light and Matter” (Princeton University Press, 1988).

    In a nutshell, Feynman’s astonishing proposition is that nature actually does try all paths.  But nearly all of them cancel out with their neighboring paths, through a quantum analog of destructive interference — except for those very close to the classical path where the action is minimized (or more precisely, made stationary).  There the quantum interference becomes constructive, rendering those paths exceedingly more likely to be observed.   This, in Feynman’s account, is why nature obeys minimum principles.  The key is that we live in the macroscopic world of everyday experience, where the actions are enormous compared to Planck’s constant.  In that classical limit, quantum destructive interference becomes extremely strong and obliterates nearly everything that could otherwise happen.

Thanks to Paul Ginsparg and Carole Schiffman for their comments and suggestions, and Margaret Nelson for preparing the illustrations.



Thursday, April 01, 2010

NYTimes.com: Looking at the iPad From Two Angles

This is a killer. To buy or not to buy? Still lots of reservations...

TECHNOLOGY | April 01, 2010
By DAVID POGUE
Apple's iPad seems to be hated by techies and loved by everyone else. Here are separate reviews for the two audiences.