CUDA : Texture Memory

Today's exercise is a tricky one. Why? Because at first I didn't understand why they built this thing, Texture Memory, at all. Ok, let's check out what's going on. You'll see…

Heat Transfer Simulation

Let me explain a little bit. What the sample wants to show is how efficiently this simulation can run, and Texture Memory, I think, is very helpful for this case. Why? Ok, here is the theory.

Heat Conduction

What you see above is a heat transfer simulation. The heat transfer is computed over a grid: every particle of "heat" has to flow to its neighbours, just as it would physically. The natural way to model this in parallel is to map each pixel to its own location in memory and its own thread; in other words, we update each pixel, our "heat" particle, by altering the memory where that pixel is represented, based on the cells around it.
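To make that concrete, here is the usual discrete diffusion update that this kind of simulation applies to every cell at each time step. This is a sketch, not the sample's exact code; the constant k is an assumed "flow speed" factor that I am naming myself.

new_heat(x, y) = old_heat(x, y)
               + k * ( old_heat(x-1, y) + old_heat(x+1, y)
                     + old_heat(x, y-1) + old_heat(x, y+1)
                     - 4 * old_heat(x, y) )

In other words, each cell drifts toward the average of its four neighbours, so every thread needs to read the cells right next to its own.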

CUDA has a feature called Texture Memory, which sounds like mapping a picture onto an object in graphics. It is indeed quite similar to that, but I will explain a bit more.

We used to do this the old way. Here, I define a pixel as one element of the image, memory as the CUDA device RAM, and a thread as the basic processing unit of CUDA.

Old ways
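To see why this hurts, here is a rough sketch of what the old way looks like in code, assuming a DIM x DIM grid of floats sitting in plain global memory. The names blend_kernel_global, in, out and k are mine, not taken from the sample.

#include <cuda_runtime.h>

#define DIM 1024

__global__ void blend_kernel_global(float *out, const float *in, float k) {
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    if (x >= DIM || y >= DIM) return;
    int offset = x + y * DIM;

    // Manual index arithmetic for the four neighbours,
    // clamped so threads on the border never read outside the grid.
    int left   = (x == 0)       ? offset : offset - 1;
    int right  = (x == DIM - 1) ? offset : offset + 1;
    int top    = (y == 0)       ? offset : offset - DIM;
    int bottom = (y == DIM - 1) ? offset : offset + DIM;

    out[offset] = in[offset]
                + k * (in[top] + in[bottom] + in[left] + in[right] - 4.0f * in[offset]);
}

All of that offset bookkeeping lives inside the kernel, and plain global loads don't get the 2D-locality-friendly caching that the texture path is designed for.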

We want something better. The typical way doesn't help us with mapping that memory: each thread has to work out the location of every neighbouring pixel by hand, and it is easy to mess up finding the right pixel. That is why we should use Texture Memory. With Texture Memory, the whole thing looks like this.

The New Ways

Ok, get it? So, texture memory is physically just another kind of memory in CUDA, but it is built to serve exactly this access pattern: a thread can read nearby memory locations simply by looking them up through the texture, with no more calculating the position of the right element inside the thread. Roughly, it looks like the sketch below.
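Here is a hedged sketch of the same stencil read through the texture path, using the CUDA texture object API. The names blend_kernel_tex, texIn, d_in and make_heat_texture are mine (DIM is reused from the sketch above), and the actual sample may bind its texture differently.

__global__ void blend_kernel_tex(float *out, cudaTextureObject_t texIn, float k) {
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    if (x >= DIM || y >= DIM) return;
    int offset = x + y * DIM;

    // The texture unit resolves the (x, y) lookups for us; with clamp addressing
    // there is no manual border arithmetic, and nearby reads are cached in 2D.
    float c = tex2D<float>(texIn, x, y);
    float t = tex2D<float>(texIn, x, y - 1);
    float b = tex2D<float>(texIn, x, y + 1);
    float l = tex2D<float>(texIn, x - 1, y);
    float r = tex2D<float>(texIn, x + 1, y);

    out[offset] = c + k * (t + b + l + r - 4.0f * c);
}

// Host side (sketch): wrap an existing DIM x DIM float buffer d_in in a texture object.
cudaTextureObject_t make_heat_texture(float *d_in) {
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypePitch2D;
    resDesc.res.pitch2D.devPtr       = d_in;
    resDesc.res.pitch2D.desc         = cudaCreateChannelDesc<float>();
    resDesc.res.pitch2D.width        = DIM;
    resDesc.res.pitch2D.height       = DIM;
    resDesc.res.pitch2D.pitchInBytes = DIM * sizeof(float);

    cudaTextureDesc texDesc = {};
    texDesc.addressMode[0]   = cudaAddressModeClamp;   // clamp reads at the borders
    texDesc.addressMode[1]   = cudaAddressModeClamp;
    texDesc.filterMode       = cudaFilterModePoint;    // fetch exact texels, no interpolation
    texDesc.readMode         = cudaReadModeElementType;
    texDesc.normalizedCoords = 0;                      // plain (x, y) integer coordinates

    cudaTextureObject_t texIn = 0;
    cudaCreateTextureObject(&texIn, &resDesc, &texDesc, NULL);
    return texIn;
}

The indexing and the caching of neighbouring reads are now the texture unit's problem; the kernel only says which pixel it wants.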

In the end, I would like to say that, by using Texture Memory, we just declare which pixel is going to be updated, and the rest is CUDA's job. That's all.