Add optimization patterns to detect and simplify masked loads and stores
when the mask is known to be all on or all off.
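A minimal sketch of the idea in plain C++ (illustrative names, not the actual
compiler pass): when the mask is statically known to be all off, the masked
store can be dropped entirely; when it is all on, it collapses to an ordinary
full-width store; only the general case needs per-lane handling.

    #include <cstdint>
    #include <cstring>

    constexpr int kWidth = 8;                   // assumed vector width

    struct MaskedStore {
        float value[kWidth];                    // values to store, one per lane
        float *dest;                            // destination pointer
        uint32_t mask;                          // one bit per lane
    };

    void emitStore(const MaskedStore &s) {
        const uint32_t allOn = (1u << kWidth) - 1;
        if (s.mask == 0)
            return;                             // all lanes off: the store is a no-op
        if (s.mask == allOn) {                  // all lanes on: plain full-width store
            std::memcpy(s.dest, s.value, sizeof(s.value));
            return;
        }
        for (int i = 0; i < kWidth; ++i)        // general case: store active lanes only
            if (s.mask & (1u << i))
                s.dest[i] = s.value[i];
    }

    int main() {
        float out[kWidth] = {};
        MaskedStore s = {{1, 2, 3, 4, 5, 6, 7, 8}, out, 0xFFu};  // all-on mask
        emitStore(s);                           // reduces to a single full-width copy
        return 0;
    }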
Enable AVX for LLVM 3.0 builds (this still generally hits bugs and
unimplemented functionality on the LLVM side, but it's getting there).
Fixed places where loads and stores were expecting vector-width-aligned
pointers when, in point of fact, there's no guarantee that they would be
aligned in general.
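As an illustration of why that assumption is unsafe (ordinary C++, not
compiler output): even when the base of an array is aligned to the vector
width, a pointer formed by indexing into it generally is not.

    #include <cstdint>
    #include <cstdio>

    int main() {
        alignas(32) float data[64];             // base aligned to a 32-byte AVX vector
        float *p = &data[3];                    // interior pointer, offset 12 bytes from the base
        std::printf("base %% 32 = %u, p %% 32 = %u\n",
                    (unsigned)((uintptr_t)data % 32),
                    (unsigned)((uintptr_t)p % 32));   // prints 0 and 12: p is not vector-aligned
        return 0;
    }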
Removed the aligned memory allocation routines from some of the examples;
they're no longer needed.
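A sketch of what this means for the example code (aligned_malloc below is a
hypothetical placeholder for whatever aligned-allocation helper an example
used): since loads and stores no longer require vector-width alignment, plain
allocation suffices.

    #include <cstdio>

    int main() {
        const int count = 1024;
        // Previously something like:
        //   float *buf = (float *)aligned_malloc(count * sizeof(float), 32);  // hypothetical helper
        float *buf = new float[count];          // ordinary allocation is now sufficient
        for (int i = 0; i < count; ++i)
            buf[i] = float(i);
        std::printf("%f\n", buf[count - 1]);
        delete[] buf;
        return 0;
    }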
No performance difference on Core 2 / Core i5 CPUs; older CPUs may see some
regressions.
Still need to update the documentation for this change and finish reviewing
alignment issues in Load/Store instructions generated by .cpp files.