Anderson and Intel are just blowing smoke here. I've been in embedded computing for many years and have designed dual-core CPUs on FPGAs myself, and I can tell you that software algorithms for spreading general-purpose tasks across 48 cores DO NOT EXIST. You can divide up easily partitionable tasks, such as shading or rendering, but general-purpose computing tasks? No.
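To be concrete about what I mean by "easily partitionable", here's a rough C++ sketch (shade_pixel and the other names are just stand-ins I made up, not anyone's real code): a per-pixel shading loop splits cleanly across however many cores you have, because no pixel depends on any other. That's the easy case; a general-purpose workload doesn't break apart like this.

    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Placeholder per-pixel work: every pixel is independent of every other one.
    static void shade_pixel(std::vector<float>& framebuffer, std::size_t i) {
        framebuffer[i] = static_cast<float>(i) * 0.5f;  // stand-in for real shading math
    }

    int main() {
        const std::size_t num_pixels = 1u << 20;
        const unsigned num_cores = std::max(1u, std::thread::hardware_concurrency());
        std::vector<float> framebuffer(num_pixels);
        std::vector<std::thread> workers;

        // Static partition: hand each core a contiguous slice of the framebuffer.
        for (unsigned c = 0; c < num_cores; ++c) {
            workers.emplace_back([&, c] {
                const std::size_t begin = num_pixels * c / num_cores;
                const std::size_t end   = num_pixels * (c + 1) / num_cores;
                for (std::size_t i = begin; i < end; ++i)
                    shade_pixel(framebuffer, i);
            });
        }
        for (auto& t : workers) t.join();
    }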
Hmm. I guess I ought to quit my job then, since I install and configure general-purpose clusters that range from 16 cores to several hundred.
Think I should tell the users that their software doesn't actually work?
I bet that as more people learn how to do data-parallel coding (the way you do with CUDA, Stream, OpenCL, or DirectCompute, with upwards of 2000 processor streams), the more relaxed parallelism of 48 or more cores, which aren't required to run in data-parallel mode, will be put to good use.
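Here's a rough sketch of what I mean by that more relaxed kind of parallelism (plain C++ with std::async; the jobs are made-up placeholders): the tasks are completely unrelated, nothing runs in lockstep, and the scheduler can scatter them across whatever cores are free. Once you're used to thinking in thousands of CUDA-style work items, keeping a mere 48 cores busy this way doesn't look so exotic.

    #include <future>
    #include <iostream>
    #include <string>

    // Three unrelated, independent jobs -- nothing data-parallel about them.
    static long sum_range(long n) {
        long s = 0;
        for (long i = 1; i <= n; ++i) s += i;
        return s;
    }

    static std::string build_report() { return "report: ok"; }

    static double estimate_pi(int terms) {
        // Crude Leibniz-series estimate, just to give this task some work to do.
        double s = 0.0;
        for (int k = 0; k < terms; ++k) s += (k % 2 ? -4.0 : 4.0) / (2 * k + 1);
        return s;
    }

    int main() {
        // Each task can land on whatever core is free; none of them wait on the others
        // until we collect the results.
        auto f1 = std::async(std::launch::async, sum_range, 1000000L);
        auto f2 = std::async(std::launch::async, build_report);
        auto f3 = std::async(std::launch::async, estimate_pi, 10000);

        std::cout << f1.get() << ' ' << f2.get() << ' ' << f3.get() << '\n';
    }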