diff --git a/README.md b/README.md
index 0e38ddb..0533ccf 100644
--- a/README.md
+++ b/README.md
@@ -3,12 +3,147 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

-* (TODO) YOUR NAME HERE
-  * (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
-* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Lan Lou
+* Tested on: Windows 10, i7-6700HQ @ 2.6GHz, 16GB, GTX 1070 8GB (personal laptop)

-### (TODO: Your README)
+## Project Features:

-Include analysis, etc. (Remember, this is public, so don't put
-anything here that you don't want to share with the world.)

- CPU scan: the most basic way of computing a prefix sum (a simple accumulation loop on the CPU).
- Naive scan: unlike the plain CPU loop, this version runs on the GPU, using the algorithm in the following pseudocode:

```
for d = 1 to log2(n)
  for all k in parallel
    if (k >= 2^(d-1))
      x[k] = x[k - 2^(d-1)] + x[k];
```

which exposes each depth's additions as parallel work.

- Efficient GPU scan: restructuring the naive algorithm into two phases, an up-sweep and a down-sweep, cuts the number of additions significantly (O(n) instead of O(n log n)). The pseudocode for the two phases:

#### upsweep:
```
for d = 0 to log2(n) - 1
  for all k = 0 to n - 1 by 2^(d+1) in parallel
    x[k + 2^(d+1) - 1] += x[k + 2^d - 1];
```

#### downsweep:
```
x[n - 1] = 0
for d = log2(n) - 1 to 0
  for all k = 0 to n - 1 by 2^(d+1) in parallel
    t = x[k + 2^d - 1];
    x[k + 2^d - 1] = x[k + 2^(d+1) - 1];
    x[k + 2^(d+1) - 1] += t;
```

- Efficient scan, optimized (extra credit): after finishing the version above, I discovered it was not even faster than the CPU method, so I wrote an optimized version for extra credit. The reason for the slowdown is that the straightforward kernels never make full use of their threads: each thread branches on whether its index is a multiple of ```2^(d+1)```, so at every level most launched threads do nothing. I therefore moved that test out of the kernel and into the scan function. For each level the block count is computed from the number of active nodes, ```blocknum = (adjustlen/interval + blocksize) / blocksize``` with ```interval = 2^(d+1)```, and inside the kernel each thread index is multiplied by the stride so it always lands on a valid multiple. Everything else stays the same; see the sketch below the block-size chart.

- Compaction (CPU & work-efficient): the algorithm is similar for both. First, map the input into a buffer containing only 1s and 0s (1 where the element is nonzero). Second, run an exclusive scan over that buffer using the corresponding scan method. Finally, scatter: treat the scanned values as output indices, and write each element whose boolean flag is 1 into the output buffer at its scanned index. A CPU sketch appears just before the Results section.

## Questions:

- *Roughly optimize the block sizes of each of your implementations for minimal run time on your GPU.*

Block size | Naive scan (ms) | Efficient scan (ms)
-----|-----|----
128 | 1.911 | 0.541
256 | 1.908 | 0.533
512 | 1.92 | 0.545
1024 | 1.915 | 0.559

- The chart below shows how block size affects both the naive and the optimized scans. The differences are small, but a block size of 256 appears to be the best fit for my machine and these kernels.

![](https://github.com/LanLou123/Project2-Stream-Compaction/raw/master/blocksizechart.JPG)
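To make the launch-compaction idea concrete, here is a minimal host-side sketch of the optimized up-sweep loop (simplified from `efficient.cu` further down in this diff; the down-sweep loop has the same shape):

```
// one launch per depth d, with only as many threads as there are active nodes
for (int d = 0; d < dmax; d++)
{
    int interval = 1 << (d + 1);          // stride between partial sums at depth d
    int active = adjustlen / interval;    // number of nodes touched at this depth
    blocknum = (active + blocksize - 1) / blocksize;   // ceiling division
    // inside the kernel, a thread rescales its index instead of branching:
    //   data[idx * interval + interval - 1] += data[idx * interval + interval / 2 - 1];
    GPUUpsweepreal<<<blocknum, blocksize>>>(active, d, dev_arr);
}
```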
- *Compare all of these GPU Scan implementations (Naive, Work-Efficient, and Thrust) to the serial CPU version of Scan. Plot a graph of the comparison (with array size on the independent axis).*

Array size (2^n) | Naive (ms) | Work-efficient, not optimized (ms) | Work-efficient, optimized (ms) | Thrust (ms)
----|----|----|----|----
3 | 0.032 | 0.029 | 0.093 | 0.085
6 | 0.049 | 0.046 | 0.058 | 0.065
9 | 0.076 | 0.075 | 0.068 | 0.249
12 | 0.09 | 0.137 | 0.135 | 0.255
15 | 0.128 | 0.131 | 0.118 | 0.263
18 | 0.306 | 0.471 | 0.167 | 0.36
21 | 3.89 | 2.311 | 1.008 | 0.37
24 | 35.408 | 19.495 | 7.925 | 0.932
27 | 317.784 | 141.549 | 61.03 | 5.704

- The corresponding graph:

![](https://github.com/LanLou123/Project2-Stream-Compaction/raw/master/arraysize.JPG)

- From the graph, the GPU methods ordered from fastest to slowest are: Thrust > efficient (optimized) > efficient (not optimized) > naive.

- *Write a brief explanation of the phenomena you see here.*

As mentioned above, the unoptimized work-efficient scan wastes most of its launched threads on a branch that does no work, which accounts for its gap from the optimized version. Beyond that, I noticed the program crashes once the array size reaches 2^30; since this happens for every method, I suspect memory allocation and IO are the shared bottleneck at that scale.
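Before the results, here is a compact CPU sketch of the map, scan, scatter pipeline described in the features list (illustrative only, with a hypothetical helper name; the project's actual implementations are in `cpu.cu` and `efficient.cu` later in this diff):

```
// Example: idata = [0 3 0 1 2]
//   map     -> bools   = [0 1 0 1 1]
//   scan    -> indices = [0 0 1 1 2]   (exclusive prefix sum of bools)
//   scatter -> odata   = [3 1 2], returns 3
int compactSketch(int n, int *odata, const int *idata, int *bools, int *indices)
{
    if (n <= 0) return 0;
    for (int i = 0; i < n; ++i) bools[i] = (idata[i] != 0) ? 1 : 0;
    indices[0] = 0;
    for (int i = 1; i < n; ++i) indices[i] = indices[i - 1] + bools[i - 1];
    for (int i = 0; i < n; ++i)
        if (bools[i]) odata[indices[i]] = idata[i];
    return indices[n - 1] + bools[n - 1];   // surviving element count
}
```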
## Results

### The results below were produced with the following settings:

- ```blocksize``` = 1024
- ```SIZE``` (in main.cpp) = 1 << 25

```
****************
** SCAN TESTS **
****************
    [  18  25   1  39  31  37   5  10  42  11   4  11  12 ...  37   0 ]
==== cpu scan, power-of-two ====
   elapsed time: 221.761ms    (std::chrono Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
==== cpu scan, non-power-of-two ====
   elapsed time: 77.1578ms    (std::chrono Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ... 821806711 821806713 ]
    passed
==== naive scan, power-of-two ====
   elapsed time: 73.3082ms    (CUDA Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
    passed
==== naive scan, non-power-of-two ====
   elapsed time: 75.7463ms    (CUDA Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ...   0   0 ]
    passed
==== work-efficient scan, power-of-two ====
   elapsed time: 15.5116ms    (CUDA Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
    passed
==== work-efficient scan, non-power-of-two ====
   elapsed time: 15.3897ms    (CUDA Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ... 821806711 821806713 ]
    passed
==== thrust scan, power-of-two ====
   elapsed time: 1.63021ms    (CUDA Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ... 821806793 821806830 ]
    passed
==== thrust scan, non-power-of-two ====
   elapsed time: 1.63738ms    (CUDA Measured)
    [   0  18  43  44  83 114 151 156 166 208 219 223 234 ... 821806711 821806713 ]
    passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
    [   0   0   3   1   2   3   0   0   3   1   0   2   2 ...   1   0 ]
==== cpu compact without scan, power-of-two ====
   elapsed time: 121.362ms    (std::chrono Measured)
    [   3   1   2   3   3   1   2   2   1   1   2   2   1 ...   3   1 ]
    passed
==== cpu compact without scan, non-power-of-two ====
   elapsed time: 121.729ms    (std::chrono Measured)
    [   3   1   2   3   3   1   2   2   1   1   2   2   1 ...   2   3 ]
    passed
==== cpu compact with scan ====
   elapsed time: 293.673ms    (std::chrono Measured)
    [   3   1   2   3   3   1   2   2   1   1   2   2   1 ...   3   1 ]
    passed
==== work-efficient compact, power-of-two ====
   elapsed time: 15.202ms    (CUDA Measured)
    [   3   1   2   3   3   1   2   2   1   1   2   2   1 ...   3   1 ]
    passed
==== work-efficient compact, non-power-of-two ====
   elapsed time: 15.2422ms    (CUDA Measured)
    [   3   1   2   3   3   1   2   2   1   1   2   2   1 ...   2   3 ]
    passed
```
diff --git a/arraysize.JPG b/arraysize.JPG
new file mode 100644
index 0000000..15aaa20
Binary files /dev/null and b/arraysize.JPG differ
diff --git a/blocksizechart.JPG b/blocksizechart.JPG
new file mode 100644
index 0000000..914b319
Binary files /dev/null and b/blocksizechart.JPG differ
diff --git a/src/main.cpp b/src/main.cpp
index 1850161..04b3c48 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -13,7 +13,7 @@
 #include
 #include "testing_helpers.hpp"

-const int SIZE = 1 << 8; // feel free to change the size of array
+const int SIZE = 1 << 22; // feel free to change the size of array
 const int NPOT = SIZE - 3; // Non-Power-Of-Two
 int *a = new int[SIZE];
 int *b = new int[SIZE];
@@ -36,13 +36,13 @@ int main(int argc, char* argv[]) {
     // At first all cases passed because b && c are all zeroes.
     zeroArray(SIZE, b);
     printDesc("cpu scan, power-of-two");
-    StreamCompaction::CPU::scan(SIZE, b, a);
+    StreamCompaction::CPU::scan(SIZE, b, a, true);
     printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
     printArray(SIZE, b, true);

     zeroArray(SIZE, c);
     printDesc("cpu scan, non-power-of-two");
-    StreamCompaction::CPU::scan(NPOT, c, a);
+    StreamCompaction::CPU::scan(NPOT, c, a, true);
     printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
     printArray(NPOT, b, true);
     printCmpResult(NPOT, b, c);
@@ -51,7 +51,7 @@ int main(int argc, char* argv[]) {
     printDesc("naive scan, power-of-two");
     StreamCompaction::Naive::scan(SIZE, c, a);
     printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(SIZE, c, true);
+    printArray(SIZE, c, true);
     printCmpResult(SIZE, b, c);

     /* For bug-finding only: Array of 1s to help find bugs in stream compaction or scan
@@ -64,35 +64,35 @@ int main(int argc, char* argv[]) {
     printDesc("naive scan, non-power-of-two");
     StreamCompaction::Naive::scan(NPOT, c, a);
     printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(SIZE, c, true);
+    printArray(SIZE, c, true);
     printCmpResult(NPOT, b, c);

     zeroArray(SIZE, c);
     printDesc("work-efficient scan, power-of-two");
-    StreamCompaction::Efficient::scan(SIZE, c, a);
+    StreamCompaction::Efficient::realscan(SIZE, c, a, true);
     printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(SIZE, c, true);
+    printArray(SIZE, c, true);
     printCmpResult(SIZE, b, c);

     zeroArray(SIZE, c);
     printDesc("work-efficient scan, non-power-of-two");
-    StreamCompaction::Efficient::scan(NPOT, c, a);
+    StreamCompaction::Efficient::realscan(NPOT, c, a, true);
     printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(NPOT, c, true);
+    printArray(NPOT, c, true);
     printCmpResult(NPOT, b, c);

     zeroArray(SIZE, c);
     printDesc("thrust scan, power-of-two");
     StreamCompaction::Thrust::scan(SIZE, c, a);
     printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(SIZE, c, true);
+    printArray(SIZE, c, true);
     printCmpResult(SIZE, b, c);

     zeroArray(SIZE, c);
     printDesc("thrust scan, non-power-of-two");
     StreamCompaction::Thrust::scan(NPOT, c, a);
     printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(NPOT, c, true);
+    printArray(NPOT, c, true);
     printCmpResult(NPOT, b, c);

     printf("\n");
@@ -137,14 +137,14 @@ int main(int argc, char* argv[]) {
     printDesc("work-efficient compact, power-of-two");
     count = StreamCompaction::Efficient::compact(SIZE, c, a);
     printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(count, c, true);
+    printArray(count, c, true);
     printCmpLenResult(count, expectedCount, b, c);

     zeroArray(SIZE, c);
     printDesc("work-efficient compact, non-power-of-two");
     count = StreamCompaction::Efficient::compact(NPOT, c, a);
     printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
-    //printArray(count, c, true);
+    printArray(count, c, true);
     printCmpLenResult(count, expectedNPOT, b, c);

     system("pause"); // stop Win32 console from closing on exit
diff --git a/stream_compaction/CMakeLists.txt b/stream_compaction/CMakeLists.txt
index cdbef77..48e2f35 100644
--- a/stream_compaction/CMakeLists.txt
+++ b/stream_compaction/CMakeLists.txt
@@ -13,5 +13,5 @@ set(SOURCE_FILES

 cuda_add_library(stream_compaction
     ${SOURCE_FILES}
-    OPTIONS -arch=sm_20
+    OPTIONS -arch=sm_60
     )
diff --git a/stream_compaction/common.cu b/stream_compaction/common.cu
index 8fc0211..f285957 100644
--- a/stream_compaction/common.cu
+++ b/stream_compaction/common.cu
@@ -24,6 +24,12 @@ namespace StreamCompaction {
         */
        __global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
            // TODO
            int idx = blockIdx.x * blockDim.x + threadIdx.x;
            if (idx < n)
            {
                // write 0 as well as 1 so the buffer never holds stale values
                bools[idx] = idata[idx] ? 1 : 0;
            }
        }

        /**
@@ -33,6 +39,12 @@ namespace StreamCompaction {
         */
        __global__ void kernScatter(int n, int *odata, const int *idata,
                const int *bools, const int *indices) {
            // TODO
            int idx = blockIdx.x * blockDim.x + threadIdx.x;
            if (idx < n)
            {
                if (bools[idx])
                    odata[indices[idx]] = idata[idx];
            }
        }
    }
}
diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu
index 05ce667..bea019f 100644
--- a/stream_compaction/cpu.cu
+++ b/stream_compaction/cpu.cu
@@ -17,9 +17,17 @@ namespace StreamCompaction {
         * For performance analysis, this is supposed to be a simple for loop.
         * (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
         */
-       void scan(int n, int *odata, const int *idata) {
+       void scan(int n, int *odata, const int *idata, bool istimer) {
            if (istimer) timer().startCpuTimer();
            // TODO
            // exclusive scan: odata[i] holds the sum of idata[0 .. i-1];
            // starting the loop at 1 also keeps n == 1 inside the buffer
            odata[0] = 0;
            for (int i = 1; i < n; ++i)
            {
                odata[i] = odata[i - 1] + idata[i - 1];
            }
            if (istimer) timer().endCpuTimer();
        }

@@ -31,10 +39,21 @@ namespace StreamCompaction {
        int compactWithoutScan(int n, int *odata, const int *idata) {
            timer().startCpuTimer();
            // TODO
            int idx = 0;
            for (int i = 0; i < n; ++i)
            {
                if (idata[i] == 0) continue;
                // pack every nonzero element to the front of the output
                odata[idx] = idata[i];
                idx++;
            }
            timer().endCpuTimer();
-           return -1;
+           return idx;
        }
        /**
         * CPU stream compaction using scan and scatter, like the parallel version.
         *
@@ -43,8 +62,24 @@ namespace StreamCompaction {
        int compactWithScan(int n, int *odata, const int *idata) {
            timer().startCpuTimer();
            // TODO
            // map: flag the nonzero elements
            int *checker = new int[n];
            for (int i = 0; i < n; ++i)
            {
                checker[i] = idata[i] ? 1 : 0;
            }
            // scan: exclusive prefix sum of the flags, stored in odata
            scan(n, odata, checker, false);
            // count: the exclusive scan misses the last element, so add its flag
            int num = odata[n - 1] + checker[n - 1];
            // scatter in place: for an exclusive scan of 0/1 flags, odata[i] <= i,
            // so writing to odata[odata[i]] never clobbers a scan value that a
            // later iteration still needs to read
            for (int i = 0; i < n; ++i)
            {
                if (idata[i]) odata[odata[i]] = idata[i];
            }
            delete[] checker;
            timer().endCpuTimer();
-           return -1;
+           return num;
        }
    }
}
diff --git a/stream_compaction/cpu.h b/stream_compaction/cpu.h
index 236ce11..df9b6c5 100644
--- a/stream_compaction/cpu.h
+++ b/stream_compaction/cpu.h
@@ -6,7 +6,7 @@ namespace StreamCompaction {
    namespace CPU {
        StreamCompaction::Common::PerformanceTimer& timer();

-       void scan(int n, int *odata, const int *idata);
+       void scan(int n, int *odata, const int *idata, bool istimer);

        int compactWithoutScan(int n, int *odata, const int *idata);
diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu
index 36c5ef2..b99b26f 100644
--- a/stream_compaction/efficient.cu
+++ b/stream_compaction/efficient.cu
@@ -11,16 +11,134 @@ namespace StreamCompaction {
            static PerformanceTimer timer;
            return timer;
        }

        int blocksize = 1024;
        dim3 blocknum;

        // optimized up-sweep: the caller launches exactly one thread per active
        // node, so each thread rescales its index by the stride 2^(d+1) instead
        // of branching on a modulus
        __global__ void GPUUpsweepreal(int n, int d, int *idata)
        {
            int idx = blockDim.x * blockIdx.x + threadIdx.x;
            int para = 1 << (d + 1);
            int para1 = 1 << d;
            if (idx < n)
            {
                idata[idx * para + para - 1] += idata[idx * para + para1 - 1];
            }
        }

        // unoptimized up-sweep: one thread per array element, most of which
        // fail the modulus test and do no useful work
        __global__ void GPUUpsweep(int n, int d, int *idata)
        {
            int idx = blockDim.x * blockIdx.x + threadIdx.x;
            int para = 1 << (d + 1);
            int para1 = 1 << d;
            if (idx < n && idx % para == 0)
            {
                idata[idx + para - 1] += idata[idx + para1 - 1];
            }
        }

        // optimized down-sweep, indexed the same way as GPUUpsweepreal
        __global__ void GPUdownsweepreal(int n, int d, int *idata)
        {
            int idx = blockDim.x * blockIdx.x + threadIdx.x;
            int para = 1 << (d + 1);
            int para1 = 1 << d;
            if (idx < n)
            {
                int t = idata[idx * para + para1 - 1];
                idata[idx * para + para1 - 1] = idata[idx * para + para - 1];
                idata[idx * para + para - 1] += t;
            }
        }

        // unoptimized down-sweep
        __global__ void GPUdownsweep(int n, int d, int *idata)
        {
            int idx = blockDim.x * blockIdx.x + threadIdx.x;
            int para = 1 << (d + 1);
            int para1 = 1 << d;
            if (idx < n && idx % para == 0)
            {
                int t = idata[idx + para1 - 1];
                idata[idx + para1 - 1] = idata[idx + para - 1];
                idata[idx + para - 1] += t;
            }
        }
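        // Worked example of the two sweeps (illustrative input, n = 8):
        //   x               = [3 1 7  0  4  1  6  3]
        //   upsweep   d=0   : [3 4 7  7  4  5  6  9]
        //   upsweep   d=1   : [3 4 7 11  4  5  6 14]
        //   upsweep   d=2   : [3 4 7 11  4  5  6 25]
        //   zero the root   : [3 4 7 11  4  5  6  0]
        //   downsweep d=2   : [3 4 7  0  4  5  6 11]
        //   downsweep d=1   : [3 0 7  4  4 11  6 16]
        //   downsweep d=0   : [0 3 4 11 11 15 16 22]   (exclusive scan of x)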
        /**
         * Performs prefix-sum (aka scan) on idata, storing the result into odata.
         */
-       void scan(int n, int *odata, const int *idata) {
+       void scan(int n, int *odata, const int *idata, bool istimer) {
            int dmax = ilog2ceil(n);
            int adjustlen = 1 << dmax;
            int *dev_arr;

            cudaMalloc((void**)&dev_arr, adjustlen * sizeof(int));
            checkCUDAError("cudaMalloc dev_arr failed!");

            // zero-pad the tail: copying adjustlen ints straight from idata
            // would read past the end of the input when n is not a power of two
            cudaMemset(dev_arr, 0, adjustlen * sizeof(int));
            cudaMemcpy(dev_arr, idata, n * sizeof(int), cudaMemcpyHostToDevice);
            checkCUDAError("cudaMemcpy failed!");

            if (istimer) timer().startGpuTimer();

            for (int d = 0; d < dmax; d++)
            {
                blocknum = (adjustlen + blocksize - 1) / blocksize;
                GPUUpsweep<<<blocknum, blocksize>>>(adjustlen, d, dev_arr);
            }
            cudaMemset(dev_arr + adjustlen - 1, 0, sizeof(int));
            for (int d = dmax - 1; d >= 0; d--)
            {
                blocknum = (adjustlen + blocksize - 1) / blocksize;
                GPUdownsweep<<<blocknum, blocksize>>>(adjustlen, d, dev_arr);
            }

            if (istimer) timer().endGpuTimer();
            cudaMemcpy(odata, dev_arr, n * sizeof(int), cudaMemcpyDeviceToHost);
            cudaFree(dev_arr);
        }

        // the optimized scan: identical to scan() except that every level only
        // launches as many threads as it has active nodes
        void realscan(int n, int *odata, const int *idata, bool istimer) {
            int dmax = ilog2ceil(n);
            int adjustlen = 1 << dmax;
            int *dev_arr;

            cudaMalloc((void**)&dev_arr, adjustlen * sizeof(int));
            checkCUDAError("cudaMalloc dev_arr failed!");

            cudaMemset(dev_arr, 0, adjustlen * sizeof(int));
            cudaMemcpy(dev_arr, idata, n * sizeof(int), cudaMemcpyHostToDevice);
            checkCUDAError("cudaMemcpy failed!");

            if (istimer) timer().startGpuTimer();

            for (int d = 0; d < dmax; d++)
            {
                int interval = 1 << (d + 1);
                blocknum = (adjustlen / interval + blocksize) / blocksize;
                GPUUpsweepreal<<<blocknum, blocksize>>>(adjustlen / interval, d, dev_arr);
            }

            cudaMemset(dev_arr + adjustlen - 1, 0, sizeof(int));
            for (int d = dmax - 1; d >= 0; d--)
            {
                int interval = 1 << (d + 1);
                blocknum = (adjustlen / interval + blocksize) / blocksize;
                GPUdownsweepreal<<<blocknum, blocksize>>>(adjustlen / interval, d, dev_arr);
            }

            if (istimer) timer().endGpuTimer();
            cudaMemcpy(odata, dev_arr, n * sizeof(int), cudaMemcpyDeviceToHost);
            cudaFree(dev_arr);
        }
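        // Usage sketch for realscan (hypothetical host-side values, for
        // illustration only):
        //   int in[5]  = { 1, 2, 3, 4, 5 };
        //   int out[5] = { 0 };
        //   StreamCompaction::Efficient::realscan(5, out, in, false);
        //   // out == { 0, 1, 3, 6, 10 }; a non-power-of-two n works because
        //   // the device buffer is padded out to 1 << ilog2ceil(n)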
        /**
         * Performs stream compaction on idata, storing the result into odata.
         * All zeroes are discarded.
         *
@@ -31,10 +149,59 @@ namespace StreamCompaction {
         * @returns The number of elements remaining after compaction.
         */
        int compact(int n, int *odata, const int *idata) {
-           timer().startGpuTimer();
            int *dev_idata, *dev_odata, *dev_checker, *dev_indices;

            cudaMalloc((void**)&dev_idata, n * sizeof(int));
            checkCUDAError("cudaMalloc dev_idata failed!");

            cudaMalloc((void**)&dev_odata, n * sizeof(int));
            checkCUDAError("cudaMalloc dev_odata failed!");

            cudaMalloc((void**)&dev_checker, n * sizeof(int));
            checkCUDAError("cudaMalloc dev_checker failed!");

            cudaMalloc((void**)&dev_indices, n * sizeof(int));
            checkCUDAError("cudaMalloc dev_indices failed!");

            cudaMemcpy(dev_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice);

            // timing happens inside realscan (istimer == true), so the GPU
            // timer is not started here
            //timer().startGpuTimer();

            // map: flag the nonzero elements
            blocknum = (n + blocksize - 1) / blocksize;
            Common::kernMapToBoolean<<<blocknum, blocksize>>>(n, dev_checker, dev_idata);

            // scan the flags (realscan round-trips through the device itself)
            int *checker = new int[n];
            int *indices = new int[n];
            cudaMemcpy(checker, dev_checker, n * sizeof(int), cudaMemcpyDeviceToHost);

            realscan(n, indices, checker, true);

            cudaMemcpy(dev_indices, indices, n * sizeof(int), cudaMemcpyHostToDevice);

            // the exclusive scan misses the last element, so add its flag back
            int finalct = checker[n - 1] ? 1 : 0;
            int count = indices[n - 1] + finalct;

            // scatter the survivors to their scanned indices
            blocknum = (n + blocksize - 1) / blocksize;
            Common::kernScatter<<<blocknum, blocksize>>>(n, dev_odata, dev_idata, dev_checker, dev_indices);
            //timer().endGpuTimer();
            cudaMemcpy(odata, dev_odata, n * sizeof(int), cudaMemcpyDeviceToHost);

            cudaFree(dev_idata);
            cudaFree(dev_odata);
            cudaFree(dev_checker);
            cudaFree(dev_indices);

            delete[] indices;
            delete[] checker;

-           timer().endGpuTimer();
-           return -1;
+           return count;
        }
    }
}
diff --git a/stream_compaction/efficient.h b/stream_compaction/efficient.h
index 803cb4f..8154826 100644
--- a/stream_compaction/efficient.h
+++ b/stream_compaction/efficient.h
@@ -6,7 +6,9 @@ namespace StreamCompaction {
    namespace Efficient {
        StreamCompaction::Common::PerformanceTimer& timer();

-       void scan(int n, int *odata, const int *idata);
+       void scan(int n, int *odata, const int *idata, bool istimer);
+
+       void realscan(int n, int *odata, const int *idata, bool istimer);

        int compact(int n, int *odata, const int *idata);
    }
diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu
index 9218f8e..c62c623 100644
--- a/stream_compaction/naive.cu
+++ b/stream_compaction/naive.cu
@@ -3,6 +3,7 @@
 #include "common.h"
 #include "naive.h"

 namespace StreamCompaction {
    namespace Naive {
        using StreamCompaction::Common::PerformanceTimer;
@@ -13,13 +14,98 @@ namespace StreamCompaction {
        }

        // TODO: __global__
        __device__ int ilog2Device(int x) {
            int lg = 0;
            while (x >>= 1) {
                ++lg;
            }
            return lg;
        }

        __device__ int ilog2ceilDevice(int x) {
            return x == 1 ? 0 : ilog2Device(x - 1) + 1;
        }

        // one step of the naive scan: each element at index >= 2^(d-1) adds in
        // the element 2^(d-1) places to its left; results go to odata, so reads
        // and writes never alias
        __global__ void plusp(int n, int *idata, int *odata, int d)
        {
            int idx = blockDim.x * blockIdx.x + threadIdx.x;
            if (idx < n)
            {
                if (idx >= (1 << (d - 1)))
                {
                    odata[idx] = idata[idx - (1 << (d - 1))] + idata[idx];
                }
                else
                {
                    odata[idx] = idata[idx];
                }
                // note: the original version called __syncthreads() here, but a
                // barrier inside a divergent branch is undefined behavior, and it
                // is unnecessary anyway since each step reads idata and writes odata
            }
        }

        // copy odata back into idata between steps (ping-pong via copy)
        __global__ void resetidata(int n, int *idata, int *odata)
        {
            int idx = blockDim.x * blockIdx.x + threadIdx.x;
            if (idx < n)
            {
                idata[idx] = odata[idx];
            }
        }

        // shift right by one and insert the identity, turning the inclusive
        // scan into an exclusive one
        __global__ void toExclusive(int n, int *idata, int *odata)
        {
            int idx = blockDim.x * blockIdx.x + threadIdx.x;
            if (idx < n)
            {
                odata[idx] = (idx == 0) ? 0 : idata[idx - 1];
            }
        }
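        // Worked example of the naive scan (illustrative input, n = 8):
        //   x                  = [3 1  7  0  4  1  6  3]
        //   d=1 (offset 1)     : [3 4  8  7  4  5  7  9]
        //   d=2 (offset 2)     : [3 4 11 11 12 12 11 14]
        //   d=3 (offset 4)     : [3 4 11 11 15 16 22 25]   (inclusive scan)
        //   shift right, put 0 : [0 3  4 11 11 15 16 22]   (exclusive scan)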
        /**
         * Performs prefix-sum (aka scan) on idata, storing the result into odata.
         */
        void scan(int n, int *odata, const int *idata) {
            if (n <= 0) return;
            int blocksize = 1024;
            dim3 blocknum = (n + blocksize - 1) / blocksize;
            int *dev_idata, *dev_odata;

            cudaMalloc((void**)&dev_idata, n * sizeof(int));
            checkCUDAError("cudaMalloc dev_idata failed!");

            cudaMalloc((void**)&dev_odata, n * sizeof(int));
            checkCUDAError("cudaMalloc dev_odata failed!");

            cudaMemcpy(dev_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice);
            checkCUDAError("cudaMemcpy failed!");

            timer().startGpuTimer();
            // TODO
            int dmax = ilog2ceil(n);
            for (int d = 1; d <= dmax; ++d)
            {
                plusp<<<blocknum, blocksize>>>(n, dev_idata, dev_odata, d);
                resetidata<<<blocknum, blocksize>>>(n, dev_idata, dev_odata);
            }
            toExclusive<<<blocknum, blocksize>>>(n, dev_idata, dev_odata);
            timer().endGpuTimer();

            cudaMemcpy(odata, dev_odata, n * sizeof(int), cudaMemcpyDeviceToHost);
            checkCUDAError("cudaMemcpy failed!");

            cudaFree(dev_idata);
            cudaFree(dev_odata);
        }
    }
}
diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu
index 36b732d..d12ab3f 100644
--- a/stream_compaction/thrust.cu
+++ b/stream_compaction/thrust.cu
@@ -18,11 +18,15 @@ namespace StreamCompaction {
         * Performs prefix-sum (aka scan) on idata, storing the result into odata.
         */
        void scan(int n, int *odata, const int *idata) {
            thrust::device_vector<int> dev_idata(idata, idata + n);
            thrust::device_vector<int> dev_odata(odata, odata + n);
            timer().startGpuTimer();
            // TODO use `thrust::exclusive_scan`
            // example: for device_vectors dv_in and dv_out:
            // thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
            thrust::exclusive_scan(dev_idata.begin(), dev_idata.end(), dev_odata.begin());
            timer().endGpuTimer();
            thrust::copy(dev_odata.begin(), dev_odata.end(), odata);
        }
    }
}
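// Note on the Thrust timing above: the device_vector constructors and the final
// thrust::copy run outside the timed region, so only the scan itself is measured.
// A hypothetical raw-pointer variant (not this project's code) that skips the
// extra host <-> device round trips entirely:
//
//   #include <thrust/execution_policy.h>
//   #include <thrust/scan.h>
//
//   // dev_in and dev_out are device pointers holding n ints
//   thrust::exclusive_scan(thrust::device, dev_in, dev_in + n, dev_out);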