There are two related optimizations at work here. (Both currently apply only to gathers where the mask is known to be all on and that access 32-bit elements, though both restrictions may be lifted in the future.)

First, for any single gather, we are now more flexible in mapping it to individual memory operations. Previously, we would map a gather either to a general gather (one scalar load per SIMD lane) or to an unaligned vector load (if the program instances could be determined to be accessing a sequential set of locations in memory). Now, we are able to break gathers into scalar, 2-wide (i.e., 64-bit), 4-wide, or 8-wide loads, and we generate the code needed to shuffle the loaded values into the right lanes. Doing fewer, larger loads in this manner, when possible, can be more efficient.

Second, we can coalesce memory accesses across multiple gathers. If we have a series of gathers without any memory writes between them, we analyze their reads collectively and choose an efficient set of loads to satisfy all of them. Not only does this help when different gathers reuse values from the same location in memory, but it is especially helpful when data with AOS layout is being accessed; in that case, we are often able to generate wide vector loads and the appropriate shuffles automatically.
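The test case below exercises this second optimization: each program instance performs two gathers, one from buf[2*programIndex] and one from buf[2*programIndex+1], so collectively the gang reads a contiguous, interleaved span of buf. With gather coalescing, those two gathers can be satisfied by contiguous vector loads and a pair of shuffles rather than two scalar loads per lane.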
export uniform int width() { return programCount; }

export void f_f(uniform float RET[], uniform float aFOO[]) {
    uniform float * uniform buf = uniform new uniform float[32*32];
    for (uniform int i = 0; i < 32*32; ++i)
        buf[i] = i;

    float a = buf[2*programIndex];
    float b = buf[2*programIndex+1];

    RET[programIndex] = a+b;
}

export void result(uniform float RET[]) {
    RET[programIndex] = 2 * programIndex + 2 * programIndex + 1;
}
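To make the idea concrete, here is a rough sketch in C with SSE intrinsics of what the coalesced accesses could look like for a 4-wide gang. This illustrates the general vector-load-plus-shuffle pattern rather than the exact code ispc emits, and the function name is purely for illustration.

#include <xmmintrin.h>

/* Conceptual sketch only: for a 4-wide gang, the two interleaved gathers
 *   a = buf[2*programIndex];
 *   b = buf[2*programIndex+1];
 * together read buf[0..7], so they can be satisfied by two contiguous
 * 4-wide loads plus two shuffles instead of eight scalar loads. */
static void coalesced_gathers(const float *buf, __m128 *a, __m128 *b) {
    __m128 lo = _mm_loadu_ps(buf);      /* buf[0], buf[1], buf[2], buf[3] */
    __m128 hi = _mm_loadu_ps(buf + 4);  /* buf[4], buf[5], buf[6], buf[7] */

    /* even-indexed elements -> a: { buf[0], buf[2], buf[4], buf[6] } */
    *a = _mm_shuffle_ps(lo, hi, _MM_SHUFFLE(2, 0, 2, 0));
    /* odd-indexed elements  -> b: { buf[1], buf[3], buf[5], buf[7] } */
    *b = _mm_shuffle_ps(lo, hi, _MM_SHUFFLE(3, 1, 3, 1));
}

On wider targets the same pattern applies, built from the scalar, 2-wide, 4-wide, and 8-wide loads described above; the analysis chooses whichever combination covers the accessed locations most efficiently.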