Playing with GPU
3D cards are just GREAT, period. When you're installing such a card in your computer, you're not just plugging in a device that can render nice graphics, you're also putting a mini-computer inside your own computer. Today's graphics cards aren't a simple chip anymore: they have memory, they have a processor, they even have a BIOS! You can enjoy a LOT of features from these little things.

First of all, let's consider what a 3D card really is. 3D cards are there to take over the 3D rendering work from your computer and to send output to your screen for display. There are three parts that interest us in our 3v1L doings:

1/ The Video RAM. This is memory embedded on the card, used to store the scene to be rendered and the computed results. Most of today's cards come with more than 256 MB of memory, which gives us a nice place to store our stuff.

2/ The Graphical Processing Unit (GPU for short). This is the processor of your 3D card. Most 3D operations are math, so most GPU instructions implement math operations tailored to graphics.

3/ The BIOS. A lot of devices today include their own BIOS, and 3D cards are no exception. Their little BIOS can be very interesting: it contains the firmware of your 3D card, and once you can access a firmware, well, you can do nearly anything you dream of.

I'll give ideas about what we can do with these three elements, but first we need to know how to play with the card. Sadly, as with any device in your computer, you need the specs of your hardware, and most 3D cards are not open enough to let us do whatever we want. This is not a big problem in itself, since we can use a simple API that will talk to the card for us. Of course, this prevents us from using tricks on the card in certain conditions, like in a shellcode, but once you've gained root and can do whatever pleases you on the system, it isn't an issue anymore.

The API I'm talking about is OpenGL (see [3]), and if you're not already familiar with it, I suggest you read the tutorials at [4]. OpenGL is a 3D programming API defined by the OpenGL Architecture Review Board, which is composed of members from many of the industry's leading graphics vendors. This library usually comes with your drivers, and by using it you can easily write portable code that uses the features of whatever 3D card is present.

Now that we know how to communicate with the card, let's take a deeper look at this piece of hardware. A GPU is used to transform a 3D environment (the "scene") given by the programmer into a 2D image (your screen). Basically, a GPU is a computing pipeline applying various mathematical operations to data. I won't introduce the complete process of transforming a 3D scene into a 2D display here, as it is not the point of this paper. In our case, all you have to know is:

1/ The GPU is used to transform input (usually a 3D scene, but nothing prevents us from feeding it anything else).

2/ These transformations are done using mathematical operations commonly used in graphics programming (and again, nothing prevents us from using those operations for another purpose).

3/ The pipeline is composed of two main computations, each involving multiple steps of data transformation:

   - Transformation and Lighting: this step translates 3D objects into 2D nets of polygons (usually triangles), generating a wireframe rendering.

   - Rasterization: this step takes the wireframe rendering as input and computes the pixel values to be displayed on the screen.

So now, let's take a look at what we can do with all these features.
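Before going further, a quick sanity check that we can indeed talk to the card through OpenGL. The following tiny C program is an addition to this post (it is not part of the Phrack article); it assumes GLUT and the OpenGL development headers are installed, and simply asks the driver who is on the other end:

------[ gpu-info.c (sketch added for this post, not in the Phrack article)

#include <stdio.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
  /* glGetString() only returns something useful once a rendering
  ** context is current; GLUT is the quickest way to get one. */
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_RGB);
  glutCreateWindow("gpu-info");

  printf("vendor   : %s\n", glGetString(GL_VENDOR));
  printf("renderer : %s\n", glGetString(GL_RENDERER));
  printf("version  : %s\n", glGetString(GL_VERSION));
  return 0;
}

------

On a typical Linux box this builds with something like "gcc gpu-info.c -o gpu-info -lglut -lGL"; the three strings tell you which vendor, card and OpenGL version the driver exposes.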
What interests us here is to hide data where it would be hard to find, and to execute instructions outside the computer's processor. I won't talk about patching 3D card firmware, as it requires heavy reverse engineering and is very specific to each card, which is not the subject of this paper.

First, let's consider instruction execution. Of course, as we are playing with a 3D card, we can't do everything we can do with a computer processor, like triggering software interrupts, issuing I/O operations or manipulating memory, but we can do lots of mathematical operations. For example, we can encrypt and decrypt data with the 3D card's processor, which can make the reverse engineering task quite painful. It can also speed up programs relying on heavy mathematical operations, by letting the computer's processor do other things while the 3D card computes for it. Such things have already been widely done; in fact, some people are already having fun using GPUs for various purposes (see [5]).

The idea here is to use the GPU to transform data we feed it. GPUs provide a mechanism to program them called "shaders". You can think of a shader as a programmable hook within the GPU which allows you to add your own routines to the data transformation process. These hooks can be triggered in two places of the computing pipeline, depending on the shader you're using. Traditionally, shaders are used by programmers to add special effects to the rendering process, and as the rendering process is composed of two steps, the GPU provides two programmable shaders. The first is the "vertex shader", used during the transformation and lighting step. The second is the "pixel shader" (also called the fragment shader), used during the rasterization step.

OK, so now we have two entry points into the GPU, but this doesn't tell us how to develop and inject our own routines. Again, as we are playing in the hardware world, there are several ways to do it, depending on the hardware and the system you're running on. Shaders use their own programming languages: some are low-level assembly-like languages, others are high-level C-like languages. The three main languages used today are high-level ones:

- High-Level Shader Language (HLSL): provided by Microsoft's DirectX API, so you need MS Windows to use it. (see [6])

- OpenGL Shading Language (GLSL or GLSlang): provided by the OpenGL API. (see [7])

- Cg: introduced by NVIDIA to program their hardware using either the DirectX API or the OpenGL one. Cg comes with a full toolkit distributed by NVIDIA for free. (see [8] and [9])

Now that we know how to program GPUs, let's consider the most interesting part: data hiding. As I said, 3D cards come with a nice amount of memory. Of course, this memory is meant for graphical usage, but nothing prevents us from storing some stuff in it. In fact, with the help of shaders we can even ask the 3D card to store and encrypt our data. This is fairly easy to do: we put the data at the beginning of the pipeline, we program the shaders to decide how to store and encrypt it, and we're done. Retrieving the data is nearly the same operation: we ask the shaders to decrypt it and send it back to us.
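To make the shader idea a bit more concrete, here is a small C sketch, added for this post and not part of the original Phrack text. It builds a GLSL fragment shader that does a toy per-texel "encryption" (a modular add of a key). It assumes a GL 2.0 context is already current (created with GLUT, for example) and that glewInit() has been called; the names encrypt_src and build_encrypt_shader are mine, not from the article.

------[ shader_store.c (sketch added for this post, not in the Phrack article)

#include <GL/glew.h>

static const GLchar *encrypt_src =
  "uniform sampler2D data;  /* texture holding the bytes to hide    */\n"
  "uniform vec4 key;        /* the 'encryption' key, set by the CPU */\n"
  "void main()                                                       \n"
  "{                                                                 \n"
  "  vec4 plain = texture2D(data, gl_TexCoord[0].st);                \n"
  "  gl_FragColor = fract(plain + key); /* wraps like a modular add */\n"
  "}                                                                 \n";

GLuint build_encrypt_shader(void)
{
  GLuint sh   = glCreateShader(GL_FRAGMENT_SHADER);
  GLuint prog = glCreateProgram();

  /* compile the fragment shader and make it the active program */
  glShaderSource(sh, 1, &encrypt_src, NULL);
  glCompileShader(sh);
  glAttachShader(prog, sh);
  glLinkProgram(prog);
  glUseProgram(prog);

  /* hand the key to the GPU; with an 8-bit texture, key components
  ** that are multiples of 1/255 give an exact, reversible round trip */
  glUniform4f(glGetUniformLocation(prog, "key"),
              16.0f/255, 32.0f/255, 64.0f/255, 0.0f);
  return prog;
}

------

Drawing a full-screen textured quad with this program bound makes the GPU run the shader once per output pixel; glReadPixels() then retrieves the "encrypted" bytes, and a second shader computing fract(cipher - key) reverses the operation.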
Note that this encryption is really weak: we rely only on the shaders' computing, and the encryption and decryption process can be reversed simply by reading the shader code in your program. Still, it can constitute an effective way to improve already existing tricks (a 3D-card-based Shiva could be fun).

OK, so now we can start coding stuff that takes advantage of our 3D cards. But wait! We don't want to mess with shaders, we don't want to learn about 3D programming, we just want to execute code on the device so we can quickly test what it can do. Learning shader programming is important because it helps you understand the device better, but it can take quite a while for people unfamiliar with the 3D world. Recently, NVIDIA released an SDK allowing programmers to easily use 3D devices for purposes other than graphics. NVIDIA CUDA (see [10]) is an SDK that lets programmers use the C language with new keywords telling the compiler which parts of the code should run on the device and which parts should run on the CPU. CUDA also comes with various mathematical libraries.

Here is a funny code to illustrate the use of CUDA:

------[ 3ddb.c

/*
** 3ddb.c : a very simple program used to store an array in
** GPU memory and make the GPU "encrypt" it. Compile it using nvcc.
*/

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <cutil.h>
#include <cuda.h>

/*** GPU code and data ***/

char * store;

__global__ void encrypt(int key)
{
  /* do any encryption you want here */
  /* and put the result into 'store' */
  /* (you need to modify CPU code if */
  /* the encrypted text size is      */
  /* different than the clear text   */
  /* one).                           */
}

/*** end of GPU code and data ***/

/*** CPU code and data ***/

CUdevice dev;

void usage(char * cmd)
{
  fprintf(stderr, "usage is : %s <string> <key>\n", cmd);
  exit(0);
}

void init_gpu()
{
  int count;

  CUT_CHECK_DEVICE();
  CU_SAFE_CALL(cuInit());
  CU_SAFE_CALL(cuDeviceGetCount(&count));
  if (count <= 0)
    {
      fprintf(stderr, "error : could not connect to any 3D card\n");
      exit(-1);
    }
  CU_SAFE_CALL(cuDeviceGet(&dev, 0));
  CU_SAFE_CALL(cuCtxCreate(dev));
}

int main(int argc, char ** argv)
{
  int i;
  int key;
  char * res;

  if (argc != 3)
    usage(argv[0]);
  init_gpu();
  CUDA_SAFE_CALL(cudaMalloc((void **)&store, strlen(argv[1])));
  CUDA_SAFE_CALL(cudaMemcpy(store, argv[1], strlen(argv[1]),
                            cudaMemcpyHostToDevice));
  res = (char *)malloc(strlen(argv[1]));
  key = atoi(argv[2]);
  encrypt<<<128, 256>>>(key);
  CUDA_SAFE_CALL(cudaMemcpy(res, store, strlen(argv[1]),
                            cudaMemcpyDeviceToHost));
  for (i = 0; i < strlen(argv[1]); i++)
    printf("%c", res[i]);
  CU_SAFE_CALL(cuCtxDetach());
  CUT_EXIT(argc, argv);
  return 0;
}

------

This post is an excerpt; the full article is Phrack 64 -- "Hacking deeper in the system": http://www.phrack.org/issues.html?issue=64&id=12#article
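For readers who want something that actually runs on the device, here is one possible way to fill in the encrypt() kernel left empty above. This is a sketch of my own, not part of the Phrack listing: it passes the device buffer and its length as kernel arguments, which is the more usual CUDA idiom than the global 'store' pointer, so the launch line in main() would have to change accordingly.

------[ xor_kernel.cu (sketch added for this post, not in the Phrack article)

/*
** One thread per byte : each thread XORs its byte with the low byte
** of the key.  XOR is its own inverse, so launching the same kernel
** again with the same key decrypts the buffer.
*/

__global__ void encrypt(char *buf, int len, int key)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */

  if (i < len)
    buf[i] ^= (char)key;
}

/*
** Host side : cover the whole buffer with 256-thread blocks, e.g.
**
**   encrypt<<<(len + 255) / 256, 256>>>(store, len, key);
**
** then copy 'store' back to the host with cudaMemcpy() as in 3ddb.c.
*/

------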
Reply #1
Posted on: 2007-08-03 16:54
By the way, a quick question: how should "on-the-fly" and "in the wild", which show up in some of these articles, be translated?
Reply #2
Posted on: 2007-08-03 19:33
Quoting post #1 by z.b.Azy (2007-08-03 16:54): "On-the-fly" roughly means getting something done instantly or easily, as you go. (Google Translate renders it as "于飞".) As for the other one, I checked the original text; it is "use in the wild", which basically means widely used in practice.
Reply #3
Posted on: 2007-08-07 14:03
Quoting post #2 by 123456789012 (2007-08-03 19:33): thx