
-  计算机科学论坛  (http://bbs.xml.org.cn/index.asp)
--  『 C/C++编程思想 』  (http://bbs.xml.org.cn/list.asp?boardid=61)
----  [推荐]NeHe OpenGL教程(中英文版附带VC++源码)Lesson 29-lesson  30  (http://bbs.xml.org.cn/dispbbs.asp?boardid=61&rootid=&id=54186)


--  作者:一分之千
--  发布时间:10/22/2007 8:50:00 PM

--  [推荐]NeHe OpenGL教程(中英文版附带VC++源码)Lesson 29-lesson  30

第二十九课、第三十课源码下载

第二十九课  


Blitter 函数:

类似于DirectDraw的blit函数,虽然是过时的技术,但我们又实现了它。它非常简单,就是把一块纹理贴到另一块纹理上。

  
   
   
这篇教程最初由Andreas Löffler所写。过了几天,Rob Fletcher发了封邮件给我,他重新改写了所有的代码,我在他的基础上把GLUT框架改为Win32框架。
现在让我们开始吧!
  
   
   
下面是一个保存图像数据的结构  
   

typedef struct Texture_Image
{
 int width;         // 宽
 int height;         // 高
 int format;         // 像素格式
 unsigned char *data;        // 纹理数据
} TEXTURE_IMAGE;

   
接下来定义了两个指向这个结构的指针  
   

typedef TEXTURE_IMAGE *P_TEXTURE_IMAGE;       

P_TEXTURE_IMAGE t1;         // 指向保存图像结构的指针
P_TEXTURE_IMAGE t2;         // 指向保存图像结构的指针

   
下面的函数为w*h的图像分配内存  
   

P_TEXTURE_IMAGE AllocateTextureBuffer( GLint w, GLint h, GLint f)
{
 P_TEXTURE_IMAGE ti=NULL;       
 unsigned char *c=NULL;        
 ti = (P_TEXTURE_IMAGE)malloc(sizeof(TEXTURE_IMAGE));     // 分配图像结构内存

 if( ti != NULL ) {
  ti->width  = w;        // 设置宽度
  ti->height = h;        // 设置高度
  ti->format = f;        // 设置格式
  // 分配w*h*f个字节
  c = (unsigned char *)malloc( w * h * f);
  if ( c != NULL ) {
   ti->data = c;
  }
  else {
   MessageBox(NULL,"内存不足","分配图像内存错误",MB_OK | MB_ICONINFORMATION);
   return NULL;
  }
 }

 else
 {
  MessageBox(NULL,"内存不足","分配图像结构内存错误",MB_OK | MB_ICONINFORMATION);
  return NULL;
 }
 return ti;         // 返回指向图像数据的指针
}

   
下面的函数释放分配的内存  
   

// 释放图像内存
void DeallocateTexture( P_TEXTURE_IMAGE t )
{
 if(t)
 {
  if(t->data)
  {
   free(t->data);       // 释放图像内存
  }

  free(t);         // 释放图像结构内存
 }
}
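下面是一个脱离Win32环境的最小示例(假设去掉了MessageBox报错,其余分配逻辑与上面相同),演示这两个函数的配对使用:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Texture_Image
{
    int width;                /* 宽 */
    int height;               /* 高 */
    int format;               /* 每像素字节数 */
    unsigned char *data;      /* 纹理数据 */
} TEXTURE_IMAGE, *P_TEXTURE_IMAGE;

/* 按教程的逻辑分配一幅 w*h、每像素 f 字节的图像(示例中省略了报错对话框) */
P_TEXTURE_IMAGE AllocateTextureBuffer(int w, int h, int f)
{
    P_TEXTURE_IMAGE ti = (P_TEXTURE_IMAGE)malloc(sizeof(TEXTURE_IMAGE));
    if (ti == NULL) return NULL;
    ti->width  = w;
    ti->height = h;
    ti->format = f;
    ti->data = (unsigned char *)malloc((size_t)w * h * f);  /* w*h*f 个字节 */
    if (ti->data == NULL) { free(ti); return NULL; }
    return ti;
}

/* 先释放图像数据,再释放结构本身 */
void DeallocateTexture(P_TEXTURE_IMAGE t)
{
    if (t)
    {
        if (t->data) free(t->data);
        free(t);
    }
}
```

注意释放顺序:必须先free(t->data)再free(t),反过来会访问已释放的内存。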

   
下面我们来读取*.raw的文件。这个函数有两个参数,一个为文件名,另一个为指向用来保存图像的结构的指针。  
   

// 读取*.RAW文件,并把图像上下翻转,以符合OpenGL的使用格式。
int ReadTextureData ( char *filename, P_TEXTURE_IMAGE buffer)
{
 FILE *f;
 int i,j,k,done=0;
 int stride = buffer->width * buffer->format;     // 记录每一行的宽度,以字节为单位
 unsigned char *p = NULL;

 f = fopen(filename, "rb");       // 打开文件
 if( f != NULL )        // 如果文件存在
 {

   
如果文件存在,我们就通过循环读取纹理:从图像的最下面一行开始,一行一行地读取。  
   

  for( i = buffer->height-1; i >= 0 ; i-- )    // 循环所有的行,从最下面一行开始,一行一行地读取
  {
   p = buffer->data + (i * stride );
   for ( j = 0; j < buffer->width ; j++ )   // 读取每一行的数据
   {

   
下面的循环读取每一像素的数据,并把alpha设为255  
   

    for ( k = 0 ; k < buffer->format-1 ; k++, p++, done++ )
    {
     *p = fgetc(f);     // 读取一个字节
    }
    *p = 255; p++;      // 把255存储在alpha通道中
   }
  }
  fclose(f);        // 关闭文件
 }

   
如果出现错误,弹出一个提示框  
   

 else      
 {
  MessageBox(NULL,"不能打开文件","图像错误",MB_OK | MB_ICONINFORMATION);
 }
 return done;         // 返回读取的字节数
}

   
下面的代码创建一个2D纹理,和前面课程介绍的方法相同  
   

void BuildTexture (P_TEXTURE_IMAGE tex)
{
 glGenTextures(1, &texture[0]);
 glBindTexture(GL_TEXTURE_2D, texture[0]);
 glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
 gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, tex->width, tex->height, GL_RGBA, GL_UNSIGNED_BYTE, tex->data);
}

   
现在到了blitter函数的地方了,它允许你把一个图像的任意部分复制到另一个图像的任意部分,并可选择混合。
src为源图像
dst为目标图像
src_xstart,src_ystart为要复制的部分在源图像中的起始位置
src_width,src_height为要复制的部分的宽度和高度
dst_xstart,dst_ystart为复制到目标图像时的起始位置
也就是说,把源图像中从(src_xstart,src_ystart)开始、大小为src_width*src_height的区域,复制到目标图像中以(dst_xstart,dst_ystart)为起点的同样大小的区域
blend设置是否启用混合,0为不启用,1为启用
alpha设置混合时源图像颜色所占的比例,0为完全透明,255为完全不透明   
   

void Blit( P_TEXTURE_IMAGE src, P_TEXTURE_IMAGE dst, int src_xstart, int src_ystart, int src_width, int src_height,
    int dst_xstart, int dst_ystart, int blend, int alpha)
{
 int i,j,k;
 unsigned char *s, *d;        

 // 掐断alpha的值
 if( alpha > 255 ) alpha = 255;
 if( alpha < 0 ) alpha = 0;

 // 判断是否启用混合
 if( blend < 0 ) blend = 0;
 if( blend > 1 ) blend = 1;

 d = dst->data + (dst_ystart * dst->width * dst->format);     // 要复制的像素在目标图像数据中的开始位置
 s = src->data + (src_ystart * src->width * src->format);   // 要复制的像素在源图像数据中的开始位置

 for (i = 0 ; i < src_height ; i++ )      // 循环每一行
 {

  s = s + (src_xstart * src->format);     // 移动到源图像中本行要复制部分的起始位置
  d = d + (dst_xstart * dst->format);     // 移动到目标图像中本行的起始位置
  for (j = 0 ; j < src_width ; j++ )     // 循环复制一行
  {

   for( k = 0 ; k < src->format ; k++, d++, s++)   // 复制每一个字节
   {
    if (blend)      // 如果启用了混合
     *d = ( (*s * alpha) + (*d * (255-alpha)) ) >> 8; // 根据混合复制颜色
    else       
     *d = *s;      // 否则直接复制
   }
  }
  d = d + (dst->width - (src_width + dst_xstart))*dst->format;  // 移动到下一行
  s = s + (src->width - (src_width + src_xstart))*src->format;  
 }
}
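Blit中的混合公式可以单独拿出来验证。下面的小函数只是示例,教程代码里并没有它:它对单个字节做与Blit内部同一行完全相同的计算。注意 >>8 相当于除以256而不是255,所以即使alpha=255,结果也会比源值小1左右:

```c
#include <assert.h>

/* 单字节混合:与 Blit() 内部的公式相同
   结果 = (源*alpha + 目标*(255-alpha)) / 256 */
unsigned char BlendByte(unsigned char s, unsigned char d, int alpha)
{
    return (unsigned char)(((s * alpha) + (d * (255 - alpha))) >> 8);
}
```

例如 alpha=127 时,源值200与目标值100混合得到 (200*127 + 100*128)>>8 = 149,大约是两者的中间值。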

   
初始化代码基本不变。我们使用新的函数加载*.raw纹理,并把纹理t2的一部分blit到t1中混合,接着按常规的方法设置2D纹理。  
   

int InitGL(GLvoid)
{
 t1 = AllocateTextureBuffer( 256, 256, 4 );      // 为图像t1分配内存
 if (ReadTextureData("Data/Monitor.raw",t1)==0)     // 读取图像数据
 {          // 失败则弹出对话框
  MessageBox(NULL,"不能读取 'Monitor.raw' 文件","读取错误",MB_OK | MB_ICONINFORMATION);
  return FALSE;
 }

 t2 = AllocateTextureBuffer( 256, 256, 4 );      // 为图像t2分配内存
 if (ReadTextureData("Data/GL.raw",t2)==0)      // 读取图像数据
 {          // 失败则弹出对话框
  MessageBox(NULL,"不能读取 'GL.raw' 文件","读取错误 ",MB_OK | MB_ICONINFORMATION);
  return FALSE;
 }

   
把图像t2中从(127,127)开始的128*128区域,以50%的比例混合到图像t1中以(64,64)为起点的区域  
   

 // 把图像t2中从(127,127)开始的128*128区域混合到图像t1的(64,64)处
 Blit(t2,t1,127,127,128,128,64,64,1,127);     

   
下面的代码先用t1创建纹理,然后释放分配的图像内存,其余设置和前面的课程一样  
   

 BuildTexture (t1);        // 把t1图像加载为纹理

 DeallocateTexture( t1 );       // 释放图像数据
 DeallocateTexture( t2 );      

 glEnable(GL_TEXTURE_2D);       // 使用2D纹理

 glShadeModel(GL_SMOOTH);       // 使用光滑着色
 glClearColor(0.0f, 0.0f, 0.0f, 0.0f);     // 设置背景色为黑色
 glClearDepth(1.0);        // 设置深度缓存清除值为1
 glEnable(GL_DEPTH_TEST);       // 使用深度缓存
 glDepthFunc(GL_LESS);       // 设置深度测试函数

 return TRUE;
}

   
下面的代码绘制一个盒子  
   

GLvoid DrawGLScene(GLvoid)
{
 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // 清除颜色缓存和深度缓存
 glLoadIdentity();       
 glTranslatef(0.0f,0.0f,-5.0f);

 glRotatef(xrot,1.0f,0.0f,0.0f);
 glRotatef(yrot,0.0f,1.0f,0.0f);
 glRotatef(zrot,0.0f,0.0f,1.0f);

 glBindTexture(GL_TEXTURE_2D, texture[0]);

 glBegin(GL_QUADS);
  // 前面
  glNormal3f( 0.0f, 0.0f, 1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
  // 后面
  glNormal3f( 0.0f, 0.0f,-1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
  // 上面
  glNormal3f( 0.0f, 1.0f, 0.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
  // 下面
  glNormal3f( 0.0f,-1.0f, 0.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
  // 右面
  glNormal3f( 1.0f, 0.0f, 0.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
  // 左面
  glNormal3f(-1.0f, 0.0f, 0.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
 glEnd();

 xrot+=0.3f;
 yrot+=0.2f;
 zrot+=0.4f;
}

   
KillGLWindow() 函数没有变化  
   
   
CreateGLWindow函数没有变化  
   
   
WinMain() 没有变化  
   
   
好了,现在你可以很轻松地绘制很多混合效果了。如果你有什么好的建议,请告诉我。


--  作者:一分之千
--  发布时间:10/22/2007 8:54:00 PM

--  
Lesson 29
   
This tutorial was originally written by Andreas Löffler. He also wrote all of the original HTML for the tutorial. A few days later Rob Fletcher emailed me an Irix version of lesson 29. In his version he rewrote most of the code. So I ported Rob's Irix / GLUT code to Visual C++ / Win32. I then modified the message loop code, and the fullscreen code. When the program is minimized it should use 0% of the CPU (or close to). When switching to and from fullscreen mode, most of the problems should be gone (screen not restoring properly, messed up display, etc).

Andreas' tutorial is now better than ever. Unfortunately, the code has been modified quite a bit, so all of the HTML has been rewritten by myself. Huge thanks to Andreas for getting the ball rolling, and working his butt off to make a killer tutorial. Thanks to Rob for the modifications!

Let's begin... We create a device mode structure called DMsaved. We will use this structure to store information about the user's default desktop resolution, color depth, etc., before we switch to fullscreen mode. More on this later! Notice we only allocate enough storage space for one texture (texture[1]).   
   

#include <windows.h>        // Header File For Windows
#include <gl\gl.h>        // Header File For The OpenGL32 Library
#include <gl\glu.h>        // Header File For The GLu32 Library
#include <stdio.h>        // Header File For File Operation Needed

HDC  hDC=NULL;        // Private GDI Device Context
HGLRC  hRC=NULL;        // Permanent Rendering Context
HWND  hWnd=NULL;        // Holds Our Window Handle
HINSTANCE hInstance = NULL;       // Holds The Instance Of The Application

bool  keys[256];        // Array Used For The Keyboard Routine
bool  active=TRUE;        // Window Active Flag Set To TRUE By Default
bool  fullscreen=TRUE;       // Fullscreen Flag Set To Fullscreen Mode By Default

DEVMODE  DMsaved;        // Saves The Previous Screen Settings (NEW)

GLfloat  xrot;         // X Rotation
GLfloat  yrot;         // Y Rotation
GLfloat  zrot;         // Z Rotation

GLuint  texture[1];        // Storage For 1 Texture

   
Now for the fun stuff. We create a structure called TEXTURE_IMAGE. The structure contains information about our image's width, height, and format (bytes per pixel). data is a pointer to unsigned char. Later on, data will point to our image data.   
   

typedef struct Texture_Image
{
 int width;         // Width Of Image In Pixels
 int height;         // Height Of Image In Pixels
 int format;         // Number Of Bytes Per Pixel
 unsigned char *data;        // Texture Data
} TEXTURE_IMAGE;

   
We then create a pointer called P_TEXTURE_IMAGE to the TEXTURE_IMAGE data type. The variables t1 and t2 are of type P_TEXTURE_IMAGE where P_TEXTURE_IMAGE is a redefined type of pointer to TEXTURE_IMAGE.   
   

typedef TEXTURE_IMAGE *P_TEXTURE_IMAGE;       // A Pointer To The Texture Image Data Type

P_TEXTURE_IMAGE t1;         // Pointer To The Texture Image Data Type
P_TEXTURE_IMAGE t2;         // Pointer To The Texture Image Data Type

LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);     // Declaration For WndProc

   
Below is the code to allocate memory for a texture. When we call this code, we pass it the width, height and bytes per pixel information of the image we plan to load. ti is a pointer to our TEXTURE_IMAGE data type. It's given a NULL value. c is a pointer to unsigned char, it is also set to NULL.   
   

// Allocate An Image Structure And Inside Allocate Its Memory Requirements
P_TEXTURE_IMAGE AllocateTextureBuffer( GLint w, GLint h, GLint f)
{
 P_TEXTURE_IMAGE ti=NULL;       // Pointer To Image Struct
 unsigned char *c=NULL;        // Pointer To Block Memory For Image

   
Here is where we allocate the memory for our image structure. If everything goes well, ti will point to the allocated memory.

After allocating the memory, and checking to make sure ti is not equal to NULL, we can fill the structure with the image attributes. First we set the width (w), then the height (h) and lastly the format (f). Keep in mind format is bytes per pixel.   
   

 ti = (P_TEXTURE_IMAGE)malloc(sizeof(TEXTURE_IMAGE));    // One Image Struct Please

 if( ti != NULL ) {
  ti->width  = w;        // Set Width
  ti->height = h;        // Set Height
  ti->format = f;        // Set Format

   
Now we need to allocate memory for the actual image data. The calculation is easy! We multiply the width of the image (w) by the height of the image (h) then multiply by the format (f - bytes per pixel).   
   

  c = (unsigned char *)malloc( w * h * f);

   
We check to see if everything went ok. If the value in c is not equal to NULL we set the data variable in our structure to point to the newly allocated memory.

If there was a problem, we pop up an error message on the screen letting the user know that the program was unable to allocate memory for the texture buffer. NULL is returned.   
   

  if ( c != NULL ) {
   ti->data = c;
  }
  else {
   MessageBox(NULL,"Could Not Allocate Memory For A Texture Buffer","BUFFER ERROR",MB_OK | MB_ICONINFORMATION);
   return NULL;
  }
 }

   
If anything went wrong when we were trying to allocate memory for our image structure, the code below would pop up an error message and return NULL.

If there were no problems, we return ti which is a pointer to our newly allocated image structure. Whew... Hope that all made sense.   
   

 else
 {
  MessageBox(NULL,"Could Not Allocate An Image Structure","IMAGE STRUCTURE ERROR",MB_OK | MB_ICONINFORMATION);
  return NULL;
 }
 return ti;         // Return Pointer To Image Struct
}

   
When it comes time to release the memory, the code below will deallocate the texture buffer and then free the image structure. t is a pointer to the TEXTURE_IMAGE data structure we want to deallocate.   
   

// Free Up The Image Data
void DeallocateTexture( P_TEXTURE_IMAGE t )
{
 if(t)
 {
  if(t->data)
  {
   free(t->data);       // Free Its Image Buffer
  }

  free(t);        // Free Itself
 }
}

   
Now we read in our .RAW image. We pass the filename and a pointer to the image structure we want to load the image into. We set up our misc variables, and then calculate the size of a row. We figure out the size of a row by multiplying the width of our image by the format (bytes per pixel). So if the image was 256 pixels wide and there were 4 bytes per pixel, the width of a row would be 1024 bytes. We store the width of a row in stride.

We set up a pointer (p), and then attempt to open the file.   
   

// Read A .RAW File In To The Allocated Image Buffer Using data In The Image Structure Header.
// Flip The Image Top To Bottom.  Returns 0 For Failure Of Read, Or Number Of Bytes Read.
int ReadTextureData ( char *filename, P_TEXTURE_IMAGE buffer)
{
 FILE *f;
 int i,j,k,done=0;
 int stride = buffer->width * buffer->format;     // Size Of A Row (Width * Bytes Per Pixel)
 unsigned char *p = NULL;

 f = fopen(filename, "rb");       // Open "filename" For Reading Bytes
 if( f != NULL )         // If File Exists
 {

   
If the file exists, we set up the loops to read in our texture. i starts at the bottom of the image and moves up a line at a time. We start at the bottom so that the image is flipped the right way. .RAW images are stored upside down. We have to set our pointer now so that the data is loaded into the proper spot in the image buffer. Each time we move up a line (i is decreased) we set the pointer to the start of the new line. data is where our image buffer starts, and to move an entire line at a time in the buffer, multiply i by stride. Remember that stride is the length of a line in bytes, and i is the current line. So by multiplying the two, we move an entire line at a time.

The j loop moves from left (0) to right (width of line in pixels, not bytes).   
   

  for( i = buffer->height-1; i >= 0 ; i-- )    // Loop Through Height (Bottoms Up - Flip Image)
  {
   p = buffer->data + (i * stride );
   for ( j = 0; j < buffer->width ; j++ )    // Loop Through Width
   {

   
The k loop reads in our bytes per pixel. So if format (bytes per pixel) is 4, k loops from 0 to 2 which is bytes per pixel minus one (format-1). The reason we subtract one is because most raw images don't have an alpha value. We want to make the 4th byte our alpha value, and we want to set the alpha value manually.

Notice in the loop we also increase the pointer (p) and a variable called done. More about done later.

The line inside the loop reads a character from our file and stores it in the texture buffer at our current pointer location. If our image has 4 bytes per pixel, the first 3 bytes will be read from the .RAW file (format-1), and the 4th byte will be manually set to 255. After we set the 4th byte to 255 we increase the pointer location by one so that our 4th byte is not overwritten with the next byte in the file.

After all of the bytes have been read in per pixel, all of the pixels have been read in per row, and all of the rows have been read in, we are done! We can close the file.   
   

    for ( k = 0 ; k < buffer->format-1 ; k++, p++, done++ )
    {
     *p = fgetc(f);     // Read Value From File And Store In Memory
    }
    *p = 255; p++;      // Store 255 In Alpha Channel And Increase Pointer
   }
  }
  fclose(f);        // Close The File
 }

   
If there was a problem opening the file (does not exist, etc), the code below will pop up a message box letting the user know that the file could not be opened.

The last thing we do is return done. If the file couldn't be opened, done will equal 0. If everything went ok, done should equal the number of bytes read from the file. Remember, we were increasing done every time we read a byte in the loop above (k loop).   
   

 else          // Otherwise
 {
  MessageBox(NULL,"Unable To Open Image File","IMAGE ERROR",MB_OK | MB_ICONINFORMATION);
 }
 return done;         // Returns Number Of Bytes Read In
}
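As a sanity check on the return value (not part of the tutorial's code, just an illustration): the k loop reads format-1 bytes per pixel from the file, so a successful read should return width * height * (format-1):

```c
#include <assert.h>

/* Expected return value of ReadTextureData() when the whole file is read:
   (format-1) bytes come from the file per pixel; the alpha byte is written
   manually and is not counted by the done variable. */
int ExpectedBytesRead(int width, int height, int format)
{
    return width * height * (format - 1);
}
```

For the 256x256, 4 bytes-per-pixel images used in this lesson, that works out to 256*256*3 = 196608 bytes.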

   
This shouldn't need explaining. By now you should know how to build a texture. tex is the pointer to the TEXTURE_IMAGE structure that we want to use. We build a linear filtered texture. In this example, we're building mipmaps (smoother looking). We pass the width, height and data just like we would if we were using glaux, but this time we get the information from the selected TEXTURE_IMAGE structure.   
   

void BuildTexture (P_TEXTURE_IMAGE tex)
{
 glGenTextures(1, &texture[0]);
 glBindTexture(GL_TEXTURE_2D, texture[0]);
 glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
 gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, tex->width, tex->height, GL_RGBA, GL_UNSIGNED_BYTE, tex->data);
}

   
Now for the blitter code :) The blitter code is very powerful. It lets you copy any section of a source (src) texture and paste it into a destination (dst) texture. You can combine as many textures as you want, you can set the alpha value used for blending, and you can select whether the two images blend together or cancel each other out.

src is the TEXTURE_IMAGE structure to use as the source image. dst is the TEXTURE_IMAGE structure to use for the destination image. src_xstart is where you want to start copying from on the x axis of the source image. src_ystart is where you want to start copying from on the y axis of the source image. src_width is the width in pixels of the area you want to copy from the source image. src_height is the height in pixels of the area you want to copy from the source image. dst_xstart and dst_ystart are where you want to place the copied pixels from the source image onto the destination image. If blend is 1, the two images will be blended. alpha sets how opaque the copied image will be when it is mapped onto the destination image. 0 is completely clear, and 255 is solid.

We set up all our misc loop variables, along with pointers for our source image (s) and destination image (d). We check to see if the alpha value is within range. If not, we clamp it. We do the same for the blend value. If it's not 0-off or 1-on, we clamp it.   
   

void Blit( P_TEXTURE_IMAGE src, P_TEXTURE_IMAGE dst, int src_xstart, int src_ystart, int src_width, int src_height,
    int dst_xstart, int dst_ystart, int blend, int alpha)
{
 int i,j,k;
 unsigned char *s, *d;        // Source & Destination

 // Clamp Alpha If Value Is Out Of Range
 if( alpha > 255 ) alpha = 255;
 if( alpha < 0 ) alpha = 0;

 // Check For Incorrect Blend Flag Values
 if( blend < 0 ) blend = 0;
 if( blend > 1 ) blend = 1;

   
Now we have to set up the pointers. The destination pointer is the location of the destination data plus the starting row on the destination image's y axis (dst_ystart) * the destination image's width in pixels * the destination image's bytes per pixel (format). This gives us the starting row for our destination image.

We do pretty much the same thing for the source pointer: the location of the source data plus the starting row on the source image's y axis (src_ystart) * the source image's width in pixels * the source image's bytes per pixel (format). This gives us the starting row for our source image.

i loops from 0 to src_height which is the number of pixels to copy up and down from the source image.   
   

 d = dst->data + (dst_ystart * dst->width * dst->format);     // Start Row - dst (Row * Width In Pixels * Bytes Per Pixel)
 s = src->data + (src_ystart * src->width * src->format);   // Start Row - src (Row * Width In Pixels * Bytes Per Pixel)

 for (i = 0 ; i < src_height ; i++ )      // Height Loop
 {

   
We already set the source and destination pointers to the correct rows in each image. Now we have to move to the correct location from left to right in each image before we can start blitting the data. We increase the source pointer (s) by src_xstart, the starting location on the x axis of the source image, multiplied by the source image's bytes per pixel (format). This moves the source pointer to the starting pixel on the x axis (from left to right) in the source image.

We do the exact same thing for the destination pointer. We increase it (d) by dst_xstart multiplied by the destination image's bytes per pixel (format), which moves it to the starting pixel on the x axis (from left to right) in the destination image.

After we have calculated where in memory we want to grab our pixels from (s) and where we want to move them to (d), we start the j loop. We'll use the j loop to travel from left to right through the source image.   

After we have calculated where in memory we want to grab our pixels from (s) and where we want to move them to (d), we start the j loop. We'll use the j loop to travel from left to right through the source image.   
   

  s = s + (src_xstart * src->format);     // Move Through Src Data By Bytes Per Pixel
  d = d + (dst_xstart * dst->format);     // Move Through Dst Data By Bytes Per Pixel
  for (j = 0 ; j < src_width ; j++ )     // Width Loop
  {

   
The k loop is used to go through all the bytes per pixel. Notice as k increases, our pointers for the source and destination images also increase.

Inside the loop we check to see if blending is on or off. If blend is 1, meaning we should blend, we do some fancy math to calculate the color of our blended pixels. The destination value (d) will equal our source value (s) multiplied by our alpha value + our current destination value (d) times 255 minus the alpha value. The shift operator (>>8) keeps the value in a 0-255 range.

If blending is disabled (0), we copy the data from the source image directly into the destination image. No blending is done and the alpha value is ignored.   
   

   for( k = 0 ; k < src->format ; k++, d++, s++)   // "n" Bytes At A Time
    {
     if (blend)      // If Blending Is On
      *d = ( (*s * alpha) + (*d * (255-alpha)) ) >> 8; // Src Data*alpha + Dst Data*(255-alpha); >> 8 Keeps The Value In 0-255 Range
     else
      *d = *s;      // No Blending, Just Do A Straight Copy
   }
  }
  d = d + (dst->width - (src_width + dst_xstart))*dst->format;  // Add End Of Row
  s = s + (src->width - (src_width + src_xstart))*src->format;  // Add End Of Row
 }
}
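To watch the row-walking pointer arithmetic in isolation, here is a stripped-down sketch of the copy path: one byte per pixel, no blending, and an invented name (BlitCopy). It is not the tutorial's function, just the same traversal logic run on two tiny in-memory buffers:

```c
#include <assert.h>
#include <string.h>

/* Simplified Blit(): 1 byte per pixel, no blending.
   Copies a src_w x src_h block from (sx,sy) in src to (dx,dy) in dst,
   using the same skip-to-column / jump-to-next-row arithmetic as Blit(). */
void BlitCopy(const unsigned char *src, int src_width,
              unsigned char *dst, int dst_width,
              int sx, int sy, int src_w, int src_h,
              int dx, int dy)
{
    const unsigned char *s = src + (sy * src_width);  /* start row in src */
    unsigned char *d = dst + (dy * dst_width);        /* start row in dst */
    int i;
    for (i = 0; i < src_h; i++)
    {
        s += sx;                                  /* move to start column */
        d += dx;
        memcpy(d, s, (size_t)src_w);              /* copy one row of the block */
        s += src_w + (src_width - (src_w + sx));  /* skip rest of row -> next row */
        d += src_w + (dst_width - (src_w + dx));
    }
}
```

Blitting a 2x2 block from a 4-pixel-wide source into an 8-pixel-wide destination shows that only the targeted rectangle is written and the surrounding destination pixels are left untouched.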

   
The InitGL() code has changed quite a bit. All of the code below is new. We start off by allocating enough memory to hold a 256x256x4 Bytes Per Pixel Image. t1 will point to the allocated ram if everything went well.

After allocating memory for our image, we attempt to load the image. We pass ReadTextureData() the name of the file we wish to open, along with a pointer to our Image Structure (t1).

If we were unable to load the .RAW image, a message box will pop up on the screen to let the user know there was a problem loading the texture.

We then do the same thing for t2. We allocate memory, and attempt to read in our second .RAW image. If anything goes wrong we pop up a message box.   
   

int InitGL(GLvoid)         // This Will Be Called Right After The GL Window Is Created
{
 t1 = AllocateTextureBuffer( 256, 256, 4 );     // Get An Image Structure
 if (ReadTextureData("Data/Monitor.raw",t1)==0)     // Fill The Image Structure With Data
 {          // Nothing Read?
  MessageBox(NULL,"Could Not Read 'Monitor.raw' Image Data","TEXTURE ERROR",MB_OK | MB_ICONINFORMATION);
  return FALSE;
 }

 t2 = AllocateTextureBuffer( 256, 256, 4 );     // Second Image Structure
 if (ReadTextureData("Data/GL.raw",t2)==0)     // Fill The Image Structure With Data
 {          // Nothing Read?
  MessageBox(NULL,"Could Not Read 'GL.raw' Image Data","TEXTURE ERROR",MB_OK | MB_ICONINFORMATION);
  return FALSE;
 }

   
If we got this far, it's safe to assume the memory has been allocated and the images have been loaded. Now to use our Blit() command to merge the two images into one.

We start off by passing Blit() t2 and t1, both of which point to our TEXTURE_IMAGE structures (t2 is the second image, t1 is the first image).

Then we have to tell blit where to start grabbing data from on the source image. If you load the source image into Adobe Photoshop or any other program capable of loading .RAW images you will see that the entire image is blank except for the top right corner. The top right has a picture of the ball with GL written on it. The bottom left corner of the image is 0,0. The top right of the image is the width of the image-1 (255), the height of the image-1 (255). Knowing that we only want to copy 1/4 of the src image (top right), we tell Blit() to start grabbing from 127,127 (center of our source image).

Next we tell blit how many pixels we want to copy from our source point to the right, and from our source point up. We want to grab a 1/4 chunk of our image. Our image is 256x256 pixels, 1/4 of that is 128x128 pixels. All of the source information is done. Blit() now knows that it should copy from 127 on the x axis to 127+128 (255) on the x axis, and from 127 on the y axis to 127+128 (255) on the y axis.

So Blit() knows what to copy, and where to get the data from, but it doesn't know where to put the data once it's grabbed it. We want to draw the ball with GL written on it in the middle of the monitor image. You find the center of the destination image (256x256) which is 128,128 and subtract half the width and height of the source block (128x128) which is 64,64. So (128-64) x (128-64) gives us a starting location of 64,64.

The last thing to do is tell our blitter routine whether we want to blend the two images (a one means blend, a zero means do not blend), and how strongly to blend them. If the alpha value is 255, the copied pixels completely replace what was already there. If we use a value of 127, the two images blend together at roughly 50%, and if you use 0, the image you are copying will be completely transparent and will not show up at all.

The pixels are copied from image2 (t2) to image1 (t1). The mixed image will be stored in t1.   
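The coordinate arithmetic above boils down to one small formula: to center a copy_size block inside a dst_size image, start at (dst_size - copy_size) / 2. A throwaway helper (ours, not the tutorial's) makes that explicit:

```c
#include <assert.h>

/* Offset at which a copy_size-wide block sits centered
   inside a dst_size-wide image. */
int CenterOffset(int dst_size, int copy_size)
{
    return (dst_size - copy_size) / 2;
}
```

For a 128-pixel block in a 256-pixel image this gives 64, which is exactly the dst_xstart/dst_ystart used in the Blit() call below.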
   

 // Image To Blend In, Original Image, Src Start X & Y, Src Width & Height, Dst Location X & Y, Blend Flag, Alpha Value
 Blit(t2,t1,127,127,128,128,64,64,1,127);     // Call The Blitter Routine

   
After we have mixed the two images (t1 and t2) together, we build a texture from the combined images (t1).

After the texture has been created, we can deallocate the memory holding our two TEXTURE_IMAGE structures.

The rest of the code is pretty standard. We enable texture mapping, depth testing, etc.   
   

 BuildTexture (t1);        // Load The Texture Map Into Texture Memory

 DeallocateTexture( t1 );       // Clean Up Image Memory Because Texture Is
 DeallocateTexture( t2 );       // In GL Texture Memory Now

 glEnable(GL_TEXTURE_2D);       // Enable Texture Mapping

 glShadeModel(GL_SMOOTH);       // Enables Smooth Color Shading
 glClearColor(0.0f, 0.0f, 0.0f, 0.0f);      // This Will Clear The Background Color To Black
 glClearDepth(1.0);        // Enables Clearing Of The Depth Buffer
 glEnable(GL_DEPTH_TEST);       // Enables Depth Testing
 glDepthFunc(GL_LESS);        // The Type Of Depth Test To Do

 return TRUE;
}

   
I shouldn't even have to explain the code below. We move 5 units into the screen, select our single texture, and draw a texture mapped cube. You should notice that both textures are now combined into one. We don't have to render everything twice to map both textures onto the cube. The blitter code combined the images for us.   
   

GLvoid DrawGLScene(GLvoid)
{
 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // Clear The Screen And The Depth Buffer
 glLoadIdentity();        // Reset The View
 glTranslatef(0.0f,0.0f,-5.0f);

 glRotatef(xrot,1.0f,0.0f,0.0f);
 glRotatef(yrot,0.0f,1.0f,0.0f);
 glRotatef(zrot,0.0f,0.0f,1.0f);

 glBindTexture(GL_TEXTURE_2D, texture[0]);

 glBegin(GL_QUADS);
  // Front Face
  glNormal3f( 0.0f, 0.0f, 1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
  // Back Face
  glNormal3f( 0.0f, 0.0f,-1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
  // Top Face
  glNormal3f( 0.0f, 1.0f, 0.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
  // Bottom Face
  glNormal3f( 0.0f,-1.0f, 0.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
  // Right Face
  glNormal3f( 1.0f, 0.0f, 0.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
  // Left Face
  glNormal3f(-1.0f, 0.0f, 0.0f);
  glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
  glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
  glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
  glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
 glEnd();

 xrot+=0.3f;
 yrot+=0.2f;
 zrot+=0.4f;
}

   
The KillGLWindow() code has a few changes. You'll notice the code to switch from fullscreen mode back to your desktop is now at the top of KillGLWindow(). If the user ran the program in fullscreen mode, the first thing we do when we kill the window is try to switch back to the desktop resolution. If the quick way fails to work, we reset the screen using the information stored in DMsaved. This should restore us to our original desktop settings.   
   

GLvoid KillGLWindow(GLvoid)        // Properly Kill The Window
{
 if (fullscreen)         // Are We In Fullscreen Mode?
 {
  if (!ChangeDisplaySettings(NULL,CDS_TEST)) {    // If The Shortcut Doesn't Work
   ChangeDisplaySettings(NULL,CDS_RESET);    // Do It Anyway (To Get The Values Out Of The Registry)
   ChangeDisplaySettings(&DMsaved,CDS_RESET);   // Change Resolution To The Saved Settings
  }
  else         // Not Fullscreen
  {
   ChangeDisplaySettings(NULL,CDS_RESET);    // Do Nothing
  }

  ShowCursor(TRUE);       // Show Mouse Pointer
 }

 if (hRC)         // Do We Have A Rendering Context?
 {
  if (!wglMakeCurrent(NULL,NULL))      // Are We Able To Release The DC And RC Contexts?
  {
   MessageBox(NULL,"Release Of DC And RC Failed.","SHUTDOWN ERROR",MB_OK | MB_ICONINFORMATION);
  }

  if (!wglDeleteContext(hRC))      // Are We Able To Delete The RC?
  {
   MessageBox(NULL,"Release Rendering Context Failed.","SHUTDOWN ERROR",MB_OK | MB_ICONINFORMATION);
  }
  hRC=NULL;        // Set RC To NULL
 }

 if (hDC && !ReleaseDC(hWnd,hDC))      // Are We Able To Release The DC
 {
  MessageBox(NULL,"Release Device Context Failed.","SHUTDOWN ERROR",MB_OK | MB_ICONINFORMATION);
  hDC=NULL;        // Set DC To NULL
 }

 if (hWnd && !DestroyWindow(hWnd))      // Are We Able To Destroy The Window?
 {
  MessageBox(NULL,"Could Not Release hWnd.","SHUTDOWN ERROR",MB_OK | MB_ICONINFORMATION);
  hWnd=NULL;        // Set hWnd To NULL
 }

 if (!UnregisterClass("OpenGL",hInstance))     // Are We Able To Unregister Class
 {
  MessageBox(NULL,"Could Not Unregister Class.","SHUTDOWN ERROR",MB_OK | MB_ICONINFORMATION);
  hInstance=NULL;        // Set hInstance To NULL
 }
}

   
I've made some changes in CreateGLWindow(). The changes will hopefully eliminate a lot of the problems people are having when they switch to and from fullscreen mode. I've included the first part of CreateGLWindow() so you can easily follow through the code.   
   

BOOL CreateGLWindow(char* title, int width, int height, int bits, bool fullscreenflag)
{
 GLuint  PixelFormat;       // Holds The Results After Searching For A Match
 WNDCLASS wc;        // Windows Class Structure
 DWORD  dwExStyle;       // Window Extended Style
 DWORD  dwStyle;       // Window Style

 fullscreen=fullscreenflag;       // Set The Global Fullscreen Flag

 hInstance  = GetModuleHandle(NULL);    // Grab An Instance For Our Window
 wc.style  = CS_HREDRAW | CS_VREDRAW | CS_OWNDC;   // Redraw On Size, And Own DC For Window.
 wc.lpfnWndProc  = (WNDPROC) WndProc;     // WndProc Handles Messages
 wc.cbClsExtra  = 0;       // No Extra Window Data
 wc.cbWndExtra  = 0;       // No Extra Window Data
 wc.hInstance  = hInstance;      // Set The Instance
 wc.hIcon  = LoadIcon(NULL, IDI_WINLOGO);    // Load The Default Icon
 wc.hCursor  = LoadCursor(NULL, IDC_ARROW);    // Load The Arrow Pointer
 wc.hbrBackground = NULL;       // No Background Required For GL
 wc.lpszMenuName  = NULL;       // We Don't Want A Menu
 wc.lpszClassName = "OpenGL";      // Set The Class Name

   
The big change here is that we now save the current desktop resolution, bit depth, etc. before we switch to fullscreen mode. That way when we exit the program, we can set everything back exactly how it was. The first line below copies the display settings into the DMsaved Device Mode structure. Nothing else has changed, just one new line of code.   
   

 EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &DMsaved);   // Save The Current Display State (NEW)

 if (fullscreen)         // Attempt Fullscreen Mode?
 {
  DEVMODE dmScreenSettings;      // Device Mode
  memset(&dmScreenSettings,0,sizeof(dmScreenSettings));   // Makes Sure Memory's Cleared
  dmScreenSettings.dmSize=sizeof(dmScreenSettings);   // Size Of The Devmode Structure
  dmScreenSettings.dmPelsWidth = width;    // Selected Screen Width
  dmScreenSettings.dmPelsHeight = height;    // Selected Screen Height
  dmScreenSettings.dmBitsPerPel = bits;     // Selected Bits Per Pixel
  dmScreenSettings.dmFields=DM_BITSPERPEL|DM_PELSWIDTH|DM_PELSHEIGHT;

  // Try To Set Selected Mode And Get Results.  NOTE: CDS_FULLSCREEN Gets Rid Of Start Bar.
  if (ChangeDisplaySettings(&dmScreenSettings,CDS_FULLSCREEN)!=DISP_CHANGE_SUCCESSFUL)
  {
   // If The Mode Fails, Offer Two Options.  Quit Or Use Windowed Mode.
   if (MessageBox(NULL,"The Requested Fullscreen Mode Is Not Supported By\nYour Video Card. Use Windowed Mode Instead?","NeHe GL",MB_YESNO|MB_ICONEXCLAMATION)==IDYES)
   {
    fullscreen=FALSE;     // Windowed Mode Selected.  Fullscreen = FALSE
   }
   else
   {
    // Pop Up A Message Box Letting User Know The Program Is Closing.
    MessageBox(NULL,"Program Will Now Close.","ERROR",MB_OK|MB_ICONSTOP);
    return FALSE;      // Return FALSE
   }
  }
 }

   
WinMain() starts out the same as always. Ask the user if they want fullscreen or not, then start the loop.   
   

int WINAPI WinMain( HINSTANCE hInstance,     // Instance
   HINSTANCE hPrevInstance,     // Previous Instance
   LPSTR  lpCmdLine,     // Command Line Parameters
   int  nCmdShow)     // Window Show State
{
 MSG msg;         // Windows Message Structure
 BOOL done=FALSE;        // Bool Variable To Exit Loop

 // Ask The User Which Screen Mode They Prefer
 if (MessageBox(NULL,"Would You Like To Run In Fullscreen Mode?", "Start FullScreen?",MB_YESNO|MB_ICONQUESTION)==IDNO)
 {
  fullscreen=FALSE;       // Windowed Mode
 }

 // Create Our OpenGL Window
 if (!CreateGLWindow("Andreas Löffler, Rob Fletcher & NeHe's Blitter & Raw Image Loading Tutorial", 640, 480, 32, fullscreen))
 {
  return 0;        // Quit If Window Was Not Created
 }

 while(!done)         // Loop That Runs While done=FALSE
 {
  if (PeekMessage(&msg,NULL,0,0,PM_REMOVE))    // Is There A Message Waiting?
  {
   if (msg.message==WM_QUIT)     // Have We Received A Quit Message?
   {
    done=TRUE;      // If So done=TRUE
   }
   else        // If Not, Deal With Window Messages
   {
    TranslateMessage(&msg);     // Translate The Message
    DispatchMessage(&msg);     // Dispatch The Message
   }
  }

   
I have made some changes to the code below. If the program is not active (minimized) we wait for a message with the command WaitMessage(). Everything stops until the program receives a message (usually maximizing the window). What this means is that the program no longer hogs the processor while it's minimized. Thanks to Jim Strong for the suggestion.   
   

  if (!active)        // Program Inactive?
  {
   WaitMessage();       // Wait For A Message / Do Nothing ( NEW ... Thanks Jim Strong )
  }

  if (keys[VK_ESCAPE])       // Was Escape Pressed?
  {
   done=TRUE;       // ESC Signalled A Quit
  }

  if (keys[VK_F1])       // Is F1 Being Pressed?
  {
   keys[VK_F1]=FALSE;      // If So Make Key FALSE
   KillGLWindow();       // Kill Our Current Window
   fullscreen=!fullscreen;      // Toggle Fullscreen / Windowed Mode
   // Recreate Our OpenGL Window
    if (!CreateGLWindow("Andreas Löffler, Rob Fletcher & NeHe's Blitter & Raw Image Loading Tutorial",640,480,16,fullscreen))
   {
    return 0;      // Quit If Window Was Not Created
   }
  }

  DrawGLScene();        // Draw The Scene
  SwapBuffers(hDC);       // Swap Buffers (Double Buffering)
 }

 // Shutdown
 KillGLWindow();         // Kill The Window
 return (msg.wParam);        // Exit The Program
}

   
Well, that's it! Now the doors are open for creating some very cool blending effects for your games, engines or even applications. With the texture buffers we used in this tutorial you could do more cool effects like real-time plasma or water. When combining these effects all together you're able to do nearly photo-realistic terrain. If something doesn't work in this tutorial or you have suggestions on how to do it better, then please don't hesitate to e-mail me. Thank you for reading and good luck creating your own special effects!

Some information about Andreas: I'm an 18 year old pupil who is currently studying to be a software engineer. I've been programming for nearly 10 years now. I've been programming in OpenGL for about 1.5 years.

Andreas Löffler & Rob Fletcher

Jeff Molofee (NeHe)


--  作者:一分之千
--  发布时间:10/22/2007 9:03:00 PM

--  

第三十课

碰撞检测:

这是一课激动人心的教程,你也许已经等待它多时了。你将学到碰撞检测、物理模拟等许多东西,敬请期待吧。

  
   
   
碰撞检测和物理模拟(作者:Dimitrios Christopoulos (christop@fhw.gr))

碰撞检测

这是我遇到过的最困难的题目之一,因为它没有一个简单通用的解决办法。每一个程序都要有适合它自己的碰撞检测方法。当然也有蛮力算法,它很通用,适用于各种不同的物体,但非常费时。
我们将讲述一类非常快速、简单并在一定程度上易于扩展的算法。此外,检测到碰撞之后如何按照物理规律移动物体也同样重要。下面我们来看看这一课包含的内容:

1) 碰撞检测
移动的球-平面
移动的球-圆柱
移动的球-移动的球
2) 基于物理的建模
碰撞表示
应用重力加速度
3) 特殊效果
爆炸的表示,利用互交叉的公告板形式
声音使用Windows声音库
4) 关于代码
代码被分为以下5个部分
Lesson30.cpp   : 主程序代码
Image.cpp, Image.h : 加载图像
Tmatrix.cpp, Tmatrix.h : 矩阵
Tray.cpp, Tray.h : 射线
Tvector.cpp, Tvector.h : 向量

1) 碰撞检测

我们使用射线来完成相关的算法,它的定义为:

射线上的点 = 射线的原点+ t * 射线的方向

t 用来描述该点沿射线方向距离原点的位置,它的取值范围是 [0, 无限远)。

现在我们可以使用射线来计算它和平面以及圆柱的交点了。
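作为补充,下面给出射线求值的一个最小示意实现(假设使用一个简单的 Vec3 结构,并非教程源码中的 TVector 类):

```cpp
#include <cassert>

// 假设的简单三维向量(仅作示意,不是教程中的 TVector)
struct Vec3 {
    double x, y, z;
};

Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(const Vec3& v, double t)      { return {v.x * t, v.y * t, v.z * t}; }

// 射线上的点 = 射线的原点 + t * 射线的方向
Vec3 PointOnRay(const Vec3& start, const Vec3& dir, double t)
{
    return start + dir * t;
}
```

代入 t = 0 即得到射线原点本身;增大 t 则沿方向向量前进。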

射线和平面的碰撞检测:

平面被描述为:

Xn dot X = d

Xn 是平面的法线.
X 是平面上的一个点.
d 是平面到原点的距离.

现在我们得到射线和平面的两个方程:

PointOnRay = Raystart + t * Raydirection
Xn dot X = d

如果它们相交,则上述方程组有解,如下所示:

Xn dot PointOnRay = d

(Xn dot Raystart) + t * (Xn dot Raydirection) = d

解得 t:

t = (d - Xn dot Raystart) / (Xn dot Raydirection)

t 代表沿射线方向从原点到交点的距离参数,把 t 代回射线方程就得到与平面的碰撞点。如果 Xn dot Raydirection = 0,则说明射线与平面平行,不会产生碰撞;如果 t 为负值,则说明交点在射线起点的反方向上,同样不会产生碰撞。
  
   

//判断是否和平面相交,是则返回1,否则返回0
int TestIntersionPlane(const Plane& plane,const TVector& position,const TVector& direction, double& lamda, TVector& pNormal)
{
 double DotProduct=direction.dot(plane._Normal);
 double l2;

 //判断是否平行于平面
 if ((DotProduct<ZERO)&&(DotProduct>-ZERO))
  return 0;

 l2=(plane._Normal.dot(plane._Position-position))/DotProduct;

 if (l2<-ZERO)        //判断碰撞是否在射线起点之后
  return 0;

 pNormal=plane._Normal;
 lamda=l2;
 return 1;
}

   
射线-圆柱的碰撞检测

计算射线和圆柱方程组的解。  
   

int TestIntersionCylinder(const Cylinder& cylinder,const TVector& position,const TVector& direction, double& lamda, TVector& pNormal,TVector& newposition)

   
球-球之间的碰撞检测

球被表示为球心和它的半径。判断两个静止的球是否相交很容易:只要求出两球心之间的距离,看它是否小于两球半径之和即可。

在处理两个移动的球是否相交时有一个问题:当它们的移动速度太快时,会出现运动路径相交、但在相邻两次检测中都测不出相交的情况,如下图所示:

图 1


有一个替代的办法:把时间步细分为更小的片段,在每个小片段上做静态的相交检测,一旦发现碰撞即可确定碰撞时刻。举例来说,可以把细分数设为 3;实际代码中把时间步细分为 150 小步,如下:   
   

//判断球和球是否相交,是则返回1,否则返回0
int FindBallCol(TVector& point, double& TimePoint, double Time2, int& BallNr1, int& BallNr2)
{
 TVector RelativeV;
 TRay rays;
 double MyTime=0.0, Add=Time2/150.0, Timedummy=10000, Timedummy2=-1;
 TVector posi;

 //把时间步细分为150小步,两两检测所有的球
 for (int i=0;i<NrOfBalls-1;i++)
 {
  for (int j=i+1;j<NrOfBalls;j++)
  {
   RelativeV=ArrayVel[i]-ArrayVel[j];   // 相对速度
   rays=TRay(OldPos[i],TVector::unit(RelativeV));
   MyTime=0.0;

   if ( (rays.dist(OldPos[j])) > 40) continue;  // 球心距离大于2*半径则不可能相交

   while (MyTime<Time2)     // 循环寻找准确的相交时刻
   {
    MyTime+=Add;
    posi=OldPos[i]+RelativeV*MyTime;
    if (posi.dist(OldPos[j])<=40)
    {
     point=posi;
     if (Timedummy>(MyTime-Add)) Timedummy=MyTime-Add;
     BallNr1=i;
     BallNr2=j;
     break;
    }
   }
  }
 }

 if (Timedummy!=10000)
 {
  TimePoint=Timedummy;
  return 1;
 }

 return 0;
}

   
怎样应用我们的知识

现在我们已经可以决定射线和平面/圆柱的交点了,如下图所示:

图 2a                                         图 2b

当我们找到了碰撞位置后,下一步需要知道碰撞是否发生在当前时间步内:如果起点到碰撞点的距离小于这一步内球体运动的总距离,则碰撞发生。设 Dst 为起点到终点的距离,Dsc 为起点到碰撞点的距离,T 为时间步长,我们用如下公式计算运动到碰撞点所需的时间:
Tc= Dsc*T / Dst
接着我们知道碰撞点位置,如下面公式所示:
Collision point= Start + Velocity*Tc
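上面两个公式可以写成如下的示意代码(Dsc、Dst、T 等名字按正文含义假设,并非教程源码中的变量):

```cpp
#include <cassert>

// 假设的简单三维向量(仅作示意,不是教程中的 TVector)
struct Vec3 { double x, y, z; };
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(const Vec3& v, double t)      { return {v.x * t, v.y * t, v.z * t}; }

// Tc = Dsc*T / Dst,仅当 Dsc <= Dst(碰撞发生在本时间步内)时才有意义
double CollisionTime(double Dsc, double T, double Dst)
{
    return Dsc * T / Dst;
}

// 碰撞点 = Start + Velocity*Tc
Vec3 CollisionPoint(const Vec3& start, const Vec3& velocity, double Tc)
{
    return start + velocity * Tc;
}
```

例如时间步为 1、碰撞点恰好在路程中点时,Tc 为 0.5。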

2) 基于物理的模拟


碰撞反应

为了计算对于一个静止物体的碰撞,我们需要知道以下信息:碰撞点,碰撞法线,碰撞时间.

它基于以下物理规律:碰撞的入射角等于反射角。如下图所示:

图 3


R 为反射方向
I 为入射方向
N 为法线方向

反射方向由以下公式计算:

R= 2*(-I dot N)*N + I
  
   

rt2=ArrayVel[BallNr].mag();      // 返回速度向量的模
ArrayVel[BallNr].unit();      // 归一化速度向量

// 计算反射向量
ArrayVel[BallNr]=TVector::unit( (normal*(2*normal.dot(-ArrayVel[BallNr]))) + ArrayVel[BallNr] );
ArrayVel[BallNr]=ArrayVel[BallNr]*rt2;     

   
球体之间的碰撞

由于它很复杂,我们用下图来说明这个原理.

图 4


U1 和 U2 为碰撞前的速度向量。我们用 X_Axis 表示两球心连线的轴,U1x 和 U2x 为 U1、U2 在这个轴上的分量,U1y 和 U2y 为垂直于 X_Axis 轴的分量。M1 和 M2 为两个球体的质量。V1 和 V2 为碰撞后的速度,V1x、V1y、V2x、V2y 为它们的分量。

在我们的例子里,所有球的质量都相等,解出方程可得:在垂直轴上的速度分量不变,在 X_Axis 轴上两球互相交换速度分量。代码如下:
  
   

TVector pb1,pb2,xaxis,U1x,U1y,U2x,U2y,V1x,V1y,V2x,V2y;
double a,b;
pb1=OldPos[BallColNr1]+ArrayVel[BallColNr1]*BallTime;   // 球1的位置
pb2=OldPos[BallColNr2]+ArrayVel[BallColNr2]*BallTime;   // 球2的位置
xaxis=(pb2-pb1).unit();       // X-Axis轴
a=xaxis.dot(ArrayVel[BallColNr1]);     // X_Axis投影系数
U1x=xaxis*a;        // 计算在X_Axis轴上的速度
U1y=ArrayVel[BallColNr1]-U1x; // 计算在垂直轴上的速度
xaxis=(pb1-pb2).unit();       
b=xaxis.dot(ArrayVel[BallColNr2]);     
U2x=xaxis*b;        
U2y=ArrayVel[BallColNr2]-U2x;
V1x=(U1x+U2x-(U1x-U2x))*0.5;      // 计算新的速度
V2x=(U1x+U2x-(U2x-U1x))*0.5;
V1y=U1y;
V2y=U2y;
for (j=0;j<NrOfBalls;j++)      // 更新所有球的位置
ArrayPos[j]=OldPos[j]+ArrayVel[j]*BallTime;
ArrayVel[BallColNr1]=V1x+V1y;      // 设置新的速度
ArrayVel[BallColNr2]=V2x+V2y;      
   
万有引力的模拟

我们使用欧拉方程来模拟万有引力,如下所示:
Velocity_New = Velocity_Old + Acceleration*TimeStep
Position_New = Position_Old + Velocity_New*TimeStep

在每次模拟中,我们用上面公式计算的速度取代旧的速度
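下面是欧拉积分一步的最小示意(一维标量情形,函数名为假设,并非教程源码):

```cpp
#include <cassert>

// 欧拉积分一步:
//   Velocity_New = Velocity_Old + Acceleration*TimeStep
//   Position_New = Position_Old + Velocity_New*TimeStep
// 注意位置更新使用的是新速度,与正文公式一致
void EulerStep(double& pos, double& vel, double accel, double dt)
{
    vel += accel * dt;   // 先更新速度
    pos += vel * dt;     // 再用新速度更新位置
}
```

每次模拟中对每个球调用一次,accel 取重力加速度即可。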

3) 特殊效果

爆炸

表示爆炸效果最好的办法是使用两个互相垂直的面片,并使用 alpha 混合把它们绘制出来。接着让 alpha 逐渐减小到 0,爆炸效果就淡出不可见了。代码如下所示:  
   

// 渲染/混合爆炸效果
glEnable(GL_BLEND);       // 使用混合
glDepthMask(GL_FALSE);       // 禁用深度缓存
glBindTexture(GL_TEXTURE_2D, texture[1]);    // 设置纹理
for(i=0; i<20; i++)       // 渲染20个爆炸效果
{
 if(ExplosionArray[i]._Alpha>=0)
 {
  glPushMatrix();
  ExplosionArray[i]._Alpha-=0.01f;   // 设置alpha
  ExplosionArray[i]._Scale+=0.03f;   // 设置缩放
  // 设置颜色
  glColor4f(1,1,0,ExplosionArray[i]._Alpha);  
  glScalef(ExplosionArray[i]._Scale,ExplosionArray[i]._Scale,ExplosionArray[i]._Scale);
  // 设置位置
  glTranslatef((float)ExplosionArray[i]._Position.X()/ExplosionArray[i]._Scale,
   (float)ExplosionArray[i]._Position.Y()/ExplosionArray[i]._Scale,
   (float)ExplosionArray[i]._Position.Z()/ExplosionArray[i]._Scale);
  glCallList(dlist);     // 调用显示列表绘制爆炸效果
  glPopMatrix();
 }
}

   
声音

在Windows下我们简单的调用PlaySound()函数播放声音。

4) 代码的流程

如果你成功地读完了理论部分,在你开始研究源代码以前,我们先用伪代码向你介绍整个流程,以便你能顺利地看懂代码。  
   

While (Timestep!=0)
{
 对每一个球
 {
  计算最近的与平面碰撞的位置;
  计算最近的与圆柱碰撞的位置;
  如果碰撞发生,则保存并替换最近的碰撞点;
 }
 检测各个球之间的碰撞;
 如果碰撞发生,则保存并替换最近的碰撞点;

 If (碰撞发生)
 {
  移动所有的球到碰撞发生的时刻;
  (我们已经计算出了碰撞点、法线和碰撞时间。)
  计算碰撞后的效果;
  Timestep-=CollisonTime;
 }
 else
  移动所有的球体一步
}

   
下面是对上面伪代码的实现:
  
   

//模拟函数,计算碰撞检测和物理模拟
void idle()
{
double rt,rt2,rt4,lamda=10000;
TVector norm,uveloc;
TVector normal,point,time;
double RestTime,BallTime;
TVector Pos2;
int BallNr=0,dummy=0,BallColNr1,BallColNr2;
TVector Nc;
//如果没有锁定到球上,旋转摄像机
if (!hook_toball1)
{
camera_rotation+=0.1f;
if (camera_rotation>360)
camera_rotation=0;
}

RestTime=Time;
lamda=1000;

//计算重力加速度
for (int j=0;j<NrOfBalls;j++)
ArrayVel[j]+=accel*RestTime;

//如果在一步的模拟时间内(如果来不及计算,则跳过几步)
while (RestTime>ZERO)
{
lamda=10000;

//对于每个球,找到它们最近的碰撞点
for (int i=0;i<NrOfBalls;i++)
{
//计算新的位置和移动的距离
OldPos[i]=ArrayPos[i];
TVector::unit(ArrayVel[i],uveloc);
ArrayPos[i]=ArrayPos[i]+ArrayVel[i]*RestTime;
rt2=OldPos[i].dist(ArrayPos[i]);

//测试是否和墙面碰撞
if (TestIntersionPlane(pl1,OldPos[i],uveloc,rt,norm))
{
//计算碰撞的时间
rt4=rt*RestTime/rt2;

//如果小于当前保存的碰撞时间,则更新它
if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=OldPos[i]+uveloc*rt;
lamda=rt4;
BallNr=i;
}
}
}

if (TestIntersionPlane(pl2,OldPos[i],uveloc,rt,norm))
{
rt4=rt*RestTime/rt2;

if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=OldPos[i]+uveloc*rt;
lamda=rt4;
BallNr=i;
dummy=1;
}
}

}

if (TestIntersionPlane(pl3,OldPos[i],uveloc,rt,norm))
{
rt4=rt*RestTime/rt2;

if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=OldPos[i]+uveloc*rt;
lamda=rt4;
BallNr=i;
}
}
}

if (TestIntersionPlane(pl4,OldPos[i],uveloc,rt,norm))
{
rt4=rt*RestTime/rt2;

if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=OldPos[i]+uveloc*rt;
lamda=rt4;
BallNr=i;
}
}
}

if (TestIntersionPlane(pl5,OldPos[i],uveloc,rt,norm))
{
rt4=rt*RestTime/rt2;

if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=OldPos[i]+uveloc*rt;
lamda=rt4;
BallNr=i;
}
}
}

//测试是否与三个圆柱相碰
if (TestIntersionCylinder(cyl1,OldPos[i],uveloc,rt,norm,Nc))
{
rt4=rt*RestTime/rt2;

if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=Nc;
lamda=rt4;
BallNr=i;
}
}

}
if (TestIntersionCylinder(cyl2,OldPos[i],uveloc,rt,norm,Nc))
{
rt4=rt*RestTime/rt2;

if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=Nc;
lamda=rt4;
BallNr=i;
}
}

}
if (TestIntersionCylinder(cyl3,OldPos[i],uveloc,rt,norm,Nc))
{
rt4=rt*RestTime/rt2;

if (rt4<=lamda)
{
if (rt4<=RestTime+ZERO)
if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
{
normal=norm;
point=Nc;
lamda=rt4;
BallNr=i;
}
}

}
}


//计算每个球之间的碰撞,如果碰撞时间小于与上面的碰撞,则替换它们
if (FindBallCol(Pos2,BallTime,RestTime,BallColNr1,BallColNr2))
{
if (sounds)
PlaySound("Data/Explode.wav",NULL,SND_FILENAME|SND_ASYNC);

if ( (lamda==10000) || (lamda>BallTime) )
{
RestTime=RestTime-BallTime;

TVector pb1,pb2,xaxis,U1x,U1y,U2x,U2y,V1x,V1y,V2x,V2y;
double a,b;

pb1=OldPos[BallColNr1]+ArrayVel[BallColNr1]*BallTime;
pb2=OldPos[BallColNr2]+ArrayVel[BallColNr2]*BallTime;
xaxis=(pb2-pb1).unit();

a=xaxis.dot(ArrayVel[BallColNr1]);
U1x=xaxis*a;
U1y=ArrayVel[BallColNr1]-U1x;

xaxis=(pb1-pb2).unit();
b=xaxis.dot(ArrayVel[BallColNr2]);
U2x=xaxis*b;
U2y=ArrayVel[BallColNr2]-U2x;

V1x=(U1x+U2x-(U1x-U2x))*0.5;
V2x=(U1x+U2x-(U2x-U1x))*0.5;
V1y=U1y;
V2y=U2y;

for (j=0;j<NrOfBalls;j++)
ArrayPos[j]=OldPos[j]+ArrayVel[j]*BallTime;

ArrayVel[BallColNr1]=V1x+V1y;
ArrayVel[BallColNr2]=V2x+V2y;

//Update explosion array
for(j=0;j<20;j++)
{
if (ExplosionArray[j]._Alpha<=0)
{
ExplosionArray[j]._Alpha=1;
ExplosionArray[j]._Position=ArrayPos[BallColNr1];
ExplosionArray[j]._Scale=1;
break;
}
}

continue;
}
}

//最后的测试,替换下次碰撞的时间,并更新爆炸效果的数组
if (lamda!=10000)
{
RestTime-=lamda;

for (j=0;j<NrOfBalls;j++)
ArrayPos[j]=OldPos[j]+ArrayVel[j]*lamda;

rt2=ArrayVel[BallNr].mag();
ArrayVel[BallNr].unit();
ArrayVel[BallNr]=TVector::unit( (normal*(2*normal.dot(-ArrayVel[BallNr]))) + ArrayVel[BallNr] );
ArrayVel[BallNr]=ArrayVel[BallNr]*rt2;

for(j=0;j<20;j++)
{
if (ExplosionArray[j]._Alpha<=0)
{
ExplosionArray[j]._Alpha=1;
ExplosionArray[j]._Position=point;
ExplosionArray[j]._Scale=1;
break;
}
}
}
else
RestTime=0;

}

}

   
你可以从源代码中得到全部细节,我已尽最大努力解释每一行代码。一旦理解了碰撞的原理,代码就非常简单了。

就像我开头所说的,碰撞检测是一个非常难的题目。你已经学到了很多新知识,并能够用它创建出非常棒的演示。但在这个课题上你仍有很多需要学习的东西;既然已经入了门,其它的原理和模型就容易掌握了。



--  作者:一分之千
--  发布时间:10/22/2007 9:03:00 PM

--  
Lesson 30
   
Collision Detection and Physically Based Modeling Tutorial by Dimitrios Christopoulos (christop@fhw.gr).

The source code upon which this tutorial is based, is from an older contest entry of mine (at OGLchallenge.dhs.org). The theme was Collision Crazy and my entry (which by the way took the 1st place :)) was called Magic Room. It features collision detection, physically based modeling and effects.

Collision Detection

A difficult subject, and to be honest, as far as I have seen up until now there has been no easy solution for it. For every application there is a different way of finding and testing for collisions. Of course there are brute force algorithms which are very general and would work with any kind of objects, but they are expensive.

We are going to investigate algorithms which are very fast, easy to understand and to some extent quite flexible. Furthermore, importance must be given to what to do once a collision is detected and how to move the objects in accordance with the laws of physics. We have a lot of stuff to cover. Let's review what we are going to learn:

1) Collision Detection
Moving Sphere - Plane
Moving Sphere - Cylinder
Moving Sphere - Moving Sphere
2) Physically Based Modeling
Collision Response
Moving Under Gravity Using Euler Equations
3) Special Effects
Explosion Modeling Using A Fin-Tree Billboard Method
Sounds Using The Windows Multimedia Library (Windows Only)
4) Explanation Of The Code
The Code Is Divided Into 5 Files
Lesson30.cpp   : Main Code For This Tutorial
Image.cpp, Image.h : Code To Load Bitmaps
Tmatrix.cpp, Tmatrix.h : Classes To Handle Rotations
Tray.cpp, Tray.h : Classes To Handle Ray Operations
Tvector.cpp, Tvector.h : Classes To Handle Vector Operations

A lot of handy code! The Vector, Ray and Matrix classes are very useful. I used them until now for personal projects of my own.

1) Collision Detection

For the collision detection we are going to use algorithms which are mostly used in ray tracing. Let's first define a ray.

A ray using vector representation is represented using a vector which denotes the start and a vector (usually normalized) which is the direction in which the ray travels. Essentially a ray starts from the start point and travels in the direction of the direction vector. So our ray equation is:

PointOnRay = Raystart + t * Raydirection

t is a float which takes values from [0, infinity).

With 0 we get the start point and substituting other values we get the corresponding points along the ray. PointOnRay, Raystart, Raydirection, are 3D Vectors with values (x,y,z). Now we can use this ray representation and calculate the intersections with plane or cylinders.

Ray - Plane Intersection Detection

A plane is represented using its Vector representation as:

Xn dot X = d

Xn, X are vectors and d is a floating point value.
Xn is its normal.
X is a point on its surface.
d is a float representing the distance of the plane along the normal, from the center of the coordinate system.

Essentially a plane represents a half space. So all that we need to define a plane is a 3D point and a normal from that point which is perpendicular to that plane. These two vectors form a plane, ie. if we take for the 3D point the vector (0,0,0) and for the normal (0,1,0) we essentially define a plane across x,z axes. Therefore defining a point and a normal is enough to compute the Vector representation of a plane.

Using the vector equation of the plane the normal is substituted as Xn and the 3D point from which the normal originates is substituted as X. The only value that is missing is d which can easily be computed using a dot product (from the vector equation).

(Note: This Vector representation is equivalent to the widely known parametric form of the plane Ax + By + Cz + D=0 just take the three x,y,z values of the normal as A,B,C and set D=-d).
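As a quick sketch (using a minimal vector type for illustration, not the tutorial's TVector class), d is just one dot product away once a point and a normal are given:

```cpp
#include <cassert>

// Minimal 3D vector type for illustration (not the tutorial's TVector)
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Given a point on the plane and the plane normal, d = Xn dot X
double PlaneD(const Vec3& normal, const Vec3& pointOnPlane)
{
    return dot(normal, pointOnPlane);
}
```

For the example in the text, the point (0,0,0) with normal (0,1,0) gives d = 0, the plane across the x,z axes.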

The two equations we have so far are:

PointOnRay = Raystart + t * Raydirection
Xn dot X = d

If a ray intersects the plane at some point then there must be some point on the ray which satisfies the plane equation as follows:

Xn dot PointOnRay = d or (Xn dot Raystart) + t * (Xn dot Raydirection) = d

solving for t:

t = (d - Xn dot Raystart) / (Xn dot Raydirection)

replacing d:

t= (Xn dot PointOnRay - Xn dot Raystart) / (Xn dot Raydirection)

summing it up:

t= (Xn dot (PointOnRay - Raystart)) / (Xn dot Raydirection)

t represents the distance from the start until the intersection point along the direction of the ray. Therefore substituting t into the ray equation we can get the collision point. There are a few special cases though. If Xn dot Raydirection = 0 then these two vectors are perpendicular (ray runs parallel to plane) and there will be no collision. If t is negative the collision takes place behind the starting point of the ray along the opposite direction and again there is no intersection.   
   

int TestIntersionPlane(const Plane& plane,const TVector& position,const TVector& direction, double& lamda, TVector& pNormal)
{
 double DotProduct=direction.dot(plane._Normal);   // Dot Product Between Plane Normal And Ray Direction
 double l2;

 // Determine If Ray Parallel To Plane
 if ((DotProduct<ZERO)&&(DotProduct>-ZERO))
  return 0;

 l2=(plane._Normal.dot(plane._Position-position))/DotProduct; // Find Distance To Collision Point

 if (l2<-ZERO)       // Test If Collision Behind Start
  return 0;

 pNormal=plane._Normal;
 lamda=l2;
 return 1;
}

   
The code above calculates and returns the intersection. It returns 1 if there is an intersection, otherwise it returns 0. The parameters are the plane, the start and direction of the ray, a double (lamda) where the collision distance is stored if there is any, and the returned normal at the collision point.

Ray - Cylinder Intersection

Computing the intersection between an infinite cylinder and a ray is much more complicated, which is why I won't explain it here. There is way too much math involved to explain it easily, and my goal is primarily to give you the tools to do it without getting into a lot of detail (this is not a geometry class). If anyone is interested in the theory behind the intersection code, please look at the Graphics Gems II book (p. 35, intersection of a ray with a cylinder). A cylinder is represented as a ray, using a start and direction (here it represents the axis) vector and a radius (radius around the axis of the cylinder). The relevant function is:   
   

int TestIntersionCylinder(const Cylinder& cylinder,const TVector& position,const TVector& direction, double& lamda, TVector& pNormal,TVector& newposition)

   
Returns 1 if an intersection was found and 0 otherwise.

The parameters are the cylinder structure (look at the code explanation further down), the start, direction vectors of the ray. The values returned through the parameters are the distance, the normal at the intersection point and the intersection point itself.

Sphere - Sphere Collision

A sphere is represented using its center and its radius. Determining if two spheres collide is easy. By finding the distance between the two centers (the dist method of the TVector class) we can determine if they intersect: they do if the distance is less than the sum of their two radii.
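The static test can be sketched in a few lines (minimal vector type for illustration; the tutorial's own code instead uses TVector::dist against the hard-coded radius sum 40):

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector type for illustration (not the tutorial's TVector)
struct Vec3 { double x, y, z; };

double dist(const Vec3& a, const Vec3& b)
{
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Two static spheres intersect if the distance between their centers
// is no greater than the sum of their radii
bool SpheresOverlap(const Vec3& c1, double r1, const Vec3& c2, double r2)
{
    return dist(c1, c2) <= r1 + r2;
}
```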

The problem lies in determining if 2 MOVING spheres collide. Below is an example where 2 spheres move during a time step from one point to another. Their paths cross in-between, but this is not enough to prove that an intersection occurred (they could pass the crossing at different times), nor can the collision point be determined.

Figure 1


The previous intersection methods were solving the equations of the objects to determine the intersection. When using complex shapes or when these equations are not available or can not be solved, a different method has to be used. The start points, endpoints, time step, velocity (direction of the sphere + speed) of the sphere and a method of how to compute intersections of static spheres is already known. To compute the intersection, the time step has to be sliced up into smaller pieces. Then we move the spheres according to that sliced time step using its velocity, and check for collisions. If at any point collision is found (which means the spheres have already penetrated each other) then we take the previous position as the intersection point (we could start interpolating between these points to find the exact intersection position, but that is mostly not required).

The smaller the time steps (the more slices we use), the more accurate the method is. As an example let's say the time step is 1 and we use 3 slices. We would check the two balls for collision at times 0, 0.33, 0.66 and 1. Easy!

The code which performs this is:   
   

/*****************************************************************************************/
/***                         Find if any of the current balls                          ***/
/***                intersect with each other in the current timestep                  ***/
/*** Returns the index of the 2 intersecting balls, the point and time of intersection ***/
/*****************************************************************************************/

int FindBallCol(TVector& point, double& TimePoint, double Time2, int& BallNr1, int& BallNr2)
{
 TVector RelativeV;
 TRay rays;
 double MyTime=0.0, Add=Time2/150.0, Timedummy=10000, Timedummy2=-1;
 TVector posi;       // Test All Balls Against Eachother In 150 Small Steps
 for (int i=0;i<NrOfBalls-1;i++)
 {
  for (int j=i+1;j<NrOfBalls;j++)
  {
   RelativeV=ArrayVel[i]-ArrayVel[j];  // Find Distance
   rays=TRay(OldPos[i],TVector::unit(RelativeV));
   MyTime=0.0;

   if ( (rays.dist(OldPos[j])) > 40) continue;  // If Distance Between Centers Greater Than 2*radius
         // An Intersection Occurred
   while (MyTime<Time2)    // Loop To Find The Exact Intersection Point
   {
    MyTime+=Add;
    posi=OldPos[i]+RelativeV*MyTime;
    if (posi.dist(OldPos[j])<=40)
    {
     point=posi;
     if (Timedummy>(MyTime-Add)) Timedummy=MyTime-Add;
     BallNr1=i;
     BallNr2=j;
     break;
    }
   }
  }
 }

 if (Timedummy!=10000)
 {
  TimePoint=Timedummy;
  return 1;
 }
 return 0;
}

   
How To Use What We Just Learned

So now that we can determine the intersection point between a ray and a plane/cylinder we have to use it somehow to determine the collision between a sphere and one of these primitives. What we can do so far is determine the exact collision point between a particle and a plane/cylinder. The start position of the ray is the position of the particle and the direction of the ray is its velocity (speed and direction). To make it usable for spheres is quite easy. Look at Figure 2a to see how this can be accomplished.

Figure 2a                                         Figure 2b


Each sphere has a radius; take the center of the sphere as the particle and offset the surface of each plane/cylinder of interest along its normal. In Figure 2a these new primitives are represented with dotted lines. Your actual primitives of interest are the ones represented by continuous lines, but the collision testing is done with the offset primitives (represented with dotted lines). In essence we perform the intersection test with a slightly offset plane and a cylinder of larger radius. Using this little trick the ball does not penetrate the surface if an intersection is determined with its center. Otherwise we get a situation as in Figure 2b, where the sphere penetrates the surface. This happens because we determine the intersection between its center and the primitives, which means we did not modify our original code!

Having determined where the collision takes place we have to determine if the intersection takes place in our current time step. Timestep is the time we move our sphere from its current point according to its velocity. Because we are testing with infinite rays there is always the possibility that the collision point is after the new position of the sphere. To determine this we move the sphere, calculate its new position and find the distance between the start and end point. From our collision detection procedure we also get the distance from the start point to its collision point. If this distance is less than the distance between start and end point then there is a collision. To calculate the exact time we solve the following simple equation. Represent the distance between start - end point with Dst, the distance between start - collision point Dsc, and the time step as T. The time where the collision takes place (Tc) is:

Tc= Dsc*T / Dst

All this is performed, of course, only if an intersection is determined. The returned time is a fraction of the whole time step, so if the time step was 1 sec and we found an intersection exactly in the middle of the distance, the calculated collision time would be 0.5 sec. This is interpreted as "0.5 sec after the start there is an intersection". Now the intersection point can be calculated by just multiplying Tc with the current velocity and adding it to the start point.

Collision point= Start + Velocity*Tc

This is the collision point on the offset primitive; to find the collision point on the real primitive we add to that point the reverse of the normal at that point (which is also returned by the intersection routines), scaled by the radius of the sphere. Note that the cylinder intersection routine returns the intersection point if there is one, so in that case it does not need to be calculated.
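The correction from the offset primitive back to the real surface can be sketched as follows (minimal vector type for illustration; names are assumptions, not from the tutorial source):

```cpp
#include <cassert>

// Minimal 3D vector type for illustration (not the tutorial's TVector)
struct Vec3 { double x, y, z; };
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(const Vec3& v, double s)      { return {v.x * s, v.y * s, v.z * s}; }

// The collision point on the real primitive lies one radius "behind" the
// point found on the offset primitive, along the reversed unit normal.
Vec3 PointOnRealPrimitive(const Vec3& offsetPoint, const Vec3& unitNormal, double radius)
{
    return offsetPoint + unitNormal * (-radius);
}
```

For a ball of radius 1 resting on the floor plane (normal (0,1,0)), the offset-plane contact at the ball's center maps back down to the floor itself.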

2) Physically Based Modeling

Collision Response

Determining how to respond after hitting static objects like planes and cylinders is as important as finding the collision point itself. Using the algorithms and functions described, the exact collision point, the normal at the collision point and the time within a time step at which the collision occurs can be found.

To determine how to respond to a collision, laws of physics have to be applied. When an object collides with a surface its direction changes, i.e. it bounces off. The angle of the new direction (or reflection vector) with the normal at the collision point is the same as the angle between the original direction vector and the normal. Figure 3 shows a collision with a sphere.

Figure 3


R is the new direction vector
I is the old direction vector before the collision
N is the Normal at the collision point

The new vector R is calculated as follows:

R= 2*(-I dot N)*N + I

The restriction is that I and N have to be unit vectors. The velocity vector as used in our examples represents both speed and direction, so it cannot be plugged into the equation in place of I without a transformation. The speed has to be extracted first: it is simply the magnitude of the vector. Once the magnitude is found, the vector can be normalized to a unit vector and plugged into the equation, giving the reflection vector R. R now gives the direction of the reflected ray, but to be used as a velocity vector it must also incorporate the speed. Therefore it gets multiplied by the magnitude of the original vector, resulting in the correct velocity vector.

In the example this procedure is applied to compute the collision response when a ball hits a plane or a cylinder. But it also works for arbitrary surfaces; it does not matter what the shape of the surface is. As long as a collision point and a normal can be found, the collision response method is always the same. The code which does these operations is:
   

rt2=ArrayVel[BallNr].mag();      // Find Magnitude Of Velocity
ArrayVel[BallNr].unit();      // Normalize It

// Compute Reflection
ArrayVel[BallNr]=TVector::unit( (normal*(2*normal.dot(-ArrayVel[BallNr]))) + ArrayVel[BallNr] );
ArrayVel[BallNr]=ArrayVel[BallNr]*rt2;     // Multiply With Magnitude To Obtain Final Velocity Vector

   
When Spheres Hit Other Spheres

Determining the collision response when two balls hit each other is much more difficult. Complex equations of particle dynamics have to be solved, so I will just post the final solution without proof. Just trust me on this one :) During the collision of two balls we have the situation depicted in Figure 4.

Figure 4


U1 and U2 are the velocity vectors of the two spheres at the time of impact. There is an axis (X_Axis) vector which joins the 2 centers of the spheres, and U1x, U2x are the projected vectors of the velocity vectors U1,U2 onto the axis (X_Axis) vector.

U1y and U2y are the projected vectors of the velocity vectors U1, U2 onto the axis which is perpendicular to the X_Axis. To find these vectors a few simple dot products are needed. M1 and M2 are the masses of the two spheres respectively. V1, V2 are the new velocities after the impact, and V1x, V1y, V2x, V2y are the projections of the new velocities onto the X_Axis and its perpendicular.

In More Detail:

a) Find X_Axis

X_Axis = (center2 - center1);
Normalize X_Axis: X_Axis.unit();

b) Find Projections

U1x= X_Axis * (X_Axis dot U1)
U1y= U1 - U1x
U2x =-X_Axis * (-X_Axis dot U2)
U2y =U2 - U2x

c)Find New Velocities

V1x = ((U1x*M1)+(U2x*M2)-(U1x-U2x)*M2) / (M1+M2)

V2x = ((U1x*M1)+(U2x*M2)-(U2x-U1x)*M1) / (M1+M2)

In our application we set M1=M2=1, so the equations get even simpler: V1x = U2x and V2x = U1x, i.e. equal-mass spheres simply exchange the components of their velocities along the X_Axis.

d)Find The Final Velocities

V1y=U1y
V2y=U2y
V1=V1x+V1y
V2=V2x+V2y

Deriving these equations takes a lot of work, but once they are in a form like the above they can be used quite easily. The code which does the actual collision response is:
   

TVector pb1,pb2,xaxis,U1x,U1y,U2x,U2y,V1x,V1y,V2x,V2y;
double a,b;
pb1=OldPos[BallColNr1]+ArrayVel[BallColNr1]*BallTime;   // Find Position Of Ball1
pb2=OldPos[BallColNr2]+ArrayVel[BallColNr2]*BallTime;   // Find Position Of Ball2
xaxis=(pb2-pb1).unit();       // Find X-Axis
a=xaxis.dot(ArrayVel[BallColNr1]);     // Find Projection
U1x=xaxis*a;        // Find Projected Vectors
U1y=ArrayVel[BallColNr1]-U1x;
xaxis=(pb1-pb2).unit();       // Do The Same As Above
b=xaxis.dot(ArrayVel[BallColNr2]);     // To Find Projection
U2x=xaxis*b;        // Vectors For The Other Ball
U2y=ArrayVel[BallColNr2]-U2x;
V1x=(U1x+U2x-(U1x-U2x))*0.5;      // Now Find New Velocities
V2x=(U1x+U2x-(U2x-U1x))*0.5;
V1y=U1y;
V2y=U2y;
for (j=0;j<NrOfBalls;j++)      // Update All Ball Positions
ArrayPos[j]=OldPos[j]+ArrayVel[j]*BallTime;
ArrayVel[BallColNr1]=V1x+V1y;      // Set New Velocity Vectors
ArrayVel[BallColNr2]=V2x+V2y;      // To The Colliding Balls

   
Moving Under Gravity Using Euler Equations

To simulate realistic movement with collisions, determining the collision point and computing the response is not enough. Movement based upon physical laws also has to be simulated.

The most widely used method for doing this is Euler integration. As indicated, all the computations are performed in time steps: the whole simulation is advanced in discrete steps, during which all the movement, collision and response tests are performed. As an example, we could advance the simulation by 2 seconds on each frame. Based on the Euler equations, the velocity and position at each new time step are computed as follows:

Velocity_New = Velocity_Old + Acceleration*TimeStep
Position_New = Position_Old + Velocity_New*TimeStep

Now the objects are moved and tested against collisions using this new velocity. The acceleration for each object is determined by accumulating the forces acting upon it and dividing by its mass, according to the equation:

Force = mass * acceleration

A lot of physics formulas :)

But in our case the only force acting on the objects is gravity, which can be represented directly as an acceleration vector, in our case something negative in the Y direction like (0,-0.5,0). This means that at the beginning of each time step we calculate the new velocity of each sphere and move the spheres, testing for collisions. If a collision occurs during a time step (say after 0.5 sec with a time step equal to 1 sec), we advance the object to this position, compute the reflection (the new velocity vector), and move the object for the remaining time (0.5 sec in our example), testing again for collisions during this time. This procedure is repeated until the time step is completed.

When multiple moving objects are present, each moving object is first tested against the static geometry for intersections and the nearest intersection is recorded. Then the intersection test is performed for collisions among the moving objects, where each object is tested against every other. The returned intersection is compared with the intersection returned by the static objects and the closest one is taken. The whole simulation is updated to that point (i.e. if the closest intersection is after 0.5 sec, we move all the objects for 0.5 sec), the reflection vector is calculated for the colliding object, and the loop is run again for the remaining time.

3) Special Effects

Explosions

Every time a collision takes place, an explosion is triggered at the collision point. A nice way to model explosions is to alpha blend two polygons which are perpendicular to each other and centered at the point of interest (here the intersection point). The polygons are scaled up and fade out over time; the fading is done by decreasing the alpha values of the vertices from 1 to 0. Because many alpha blended polygons can overlap each other and cause problems with the Z buffer (as stated in the Red Book in the chapter about transparency and blending), we borrow a technique used in particle rendering. To be fully correct we would have to sort the polygons from back to front according to their distance from the eye point, but disabling depth buffer writes (not reads) also does the trick (this is also documented in the Red Book). Notice that we limit the number of explosions to a maximum of 20 per frame; if additional explosions occur while the buffer is full, they are discarded. The source which updates and renders the explosions is:
   

// Render / Blend Explosions
glEnable(GL_BLEND);       // Enable Blending
glDepthMask(GL_FALSE);       // Disable Depth Buffer Writes
glBindTexture(GL_TEXTURE_2D, texture[1]);    // Select Explosion Texture
for(i=0; i<20; i++)       // Update And Render Explosions
{
 if(ExplosionArray[i]._Alpha>=0)
 {
  glPushMatrix();
  ExplosionArray[i]._Alpha-=0.01f;   // Update Alpha
  ExplosionArray[i]._Scale+=0.03f;   // Update Scale
  // Assign Vertices Colour Yellow With Alpha
  // Colour Tracks Ambient And Diffuse
  glColor4f(1,1,0,ExplosionArray[i]._Alpha);
  glScalef(ExplosionArray[i]._Scale,ExplosionArray[i]._Scale,ExplosionArray[i]._Scale); // Scale
  // Translate Into Position Taking Into Account The Offset Caused By The Scale
  glTranslatef((float)ExplosionArray[i]._Position.X()/ExplosionArray[i]._Scale,
   (float)ExplosionArray[i]._Position.Y()/ExplosionArray[i]._Scale,
   (float)ExplosionArray[i]._Position.Z()/ExplosionArray[i]._Scale);
  glCallList(dlist);     // Call Display List
  glPopMatrix();
 }
}

   
Sound

For the sound, the Windows multimedia function PlaySound() is used. It is a quick and dirty way to play wav files without trouble.

4) Explaining the Code

Congratulations...

If you are still with me, you have successfully survived the theory section ;) Before having fun playing around with the demo, some further explanations about the source code are necessary. The main flow and steps of the simulation are as follows (in pseudo code):
   

While (Timestep!=0)
{
 For each ball
 {
  compute nearest collision with planes;
  compute nearest collision with cylinders;
   Save and replace if it is the nearest
   intersection in time computed until now;
 }
 Check for collision among moving balls;
 Save and replace if it is the nearest
 intersection in time computed until now;
 If (Collision occurred)
 {
  Move All Balls for time equal to collision time;
  (We already have computed the point, normal and collision time.)
  Compute Response;
  Timestep-=CollisonTime;
 }
 else
  Move All Balls for time equal to Timestep
}

   
The actual code which implements this pseudo code is much harder to read, but essentially it is an exact implementation of the pseudo code above.
   

// While Time Step Not Over
while (RestTime>ZERO)
{
 lamda=10000;       // Initialize To Very Large Value
 // For All The Balls Find Closest Intersection Between Balls And Planes / Cylinders
 for (int i=0;i<NrOfBalls;i++)
 {
  // Compute New Position And Distance
  OldPos[i]=ArrayPos[i];
  TVector::unit(ArrayVel[i],uveloc);
  ArrayPos[i]=ArrayPos[i]+ArrayVel[i]*RestTime;
  rt2=OldPos[i].dist(ArrayPos[i]);
   // Test If Collision Occurred Between Ball And All 5 Planes
  if (TestIntersionPlane(pl1,OldPos[i],uveloc,rt,norm))
  {
   // Find Intersection Time
   rt4=rt*RestTime/rt2;
   // If Smaller Than The One Already Stored Replace In Timestep
   if (rt4<=lamda)
   {
    // If Intersection Time In Current Time Step
    if (rt4<=RestTime+ZERO)
     if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
     {
      normal=norm;
      point=OldPos[i]+uveloc*rt;
      lamda=rt4;
      BallNr=i;
     }
   }
  }

  if (TestIntersionPlane(pl2,OldPos[i],uveloc,rt,norm))
  {

   // ...The Same As Above Omitted For Space Reasons
  }

  if (TestIntersionPlane(pl3,OldPos[i],uveloc,rt,norm))
  {

   // ...The Same As Above Omitted For Space Reasons
  }

  if (TestIntersionPlane(pl4,OldPos[i],uveloc,rt,norm))
  {

   // ...The Same As Above Omitted For Space Reasons
  }

  if (TestIntersionPlane(pl5,OldPos[i],uveloc,rt,norm))
  {

   // ...The Same As Above Omitted For Space Reasons
  }

  // Now Test Intersection With The 3 Cylinders
  if (TestIntersionCylinder(cyl1,OldPos[i],uveloc,rt,norm,Nc))
  {
   rt4=rt*RestTime/rt2;
   if (rt4<=lamda)
   {
    if (rt4<=RestTime+ZERO)
     if (! ((rt<=ZERO)&&(uveloc.dot(norm)>ZERO)) )
     {
      normal=norm;
      point=Nc;
      lamda=rt4;
      BallNr=i;
     }
   }
  }

  if (TestIntersionCylinder(cyl2,OldPos[i],uveloc,rt,norm,Nc))
  {
   // ...The Same As Above Omitted For Space Reasons
  }

  if (TestIntersionCylinder(cyl3,OldPos[i],uveloc,rt,norm,Nc))
  {
   // ...The Same As Above Omitted For Space Reasons
  }

 }

 // After All Balls Were Tested With Planes / Cylinders Test For Collision
 // Between Them And Replace If Collision Time Smaller
 if (FindBallCol(Pos2,BallTime,RestTime,BallColNr1,BallColNr2))
 {
  if (sounds)
   PlaySound("Explode.wav",NULL,SND_FILENAME|SND_ASYNC);

  if ( (lamda==10000) || (lamda>BallTime) )
  {
   RestTime=RestTime-BallTime;
   TVector pb1,pb2,xaxis,U1x,U1y,U2x,U2y,V1x,V1y,V2x,V2y;
   double a,b;
   .
   .
   Code Omitted For Space Reasons
   The Code Is Described In The Physically Based Modeling
   Section Under Sphere To Sphere Collision
   .
   .
   //Update Explosion Array And Insert Explosion
   for(j=0;j<20;j++)
   {
    if (ExplosionArray[j]._Alpha<=0)
    {
     ExplosionArray[j]._Alpha=1;
     ExplosionArray[j]._Position=ArrayPos[BallColNr1];
     ExplosionArray[j]._Scale=1;
     break;
    }
   }

   continue;
  }
 }

 // End Of Tests
 // If Collision Occurred Move Simulation For The Correct Timestep
 // And Compute Response For The Colliding Ball
 if (lamda!=10000)
 {
  RestTime-=lamda;
  for (j=0;j<NrOfBalls;j++)
  ArrayPos[j]=OldPos[j]+ArrayVel[j]*lamda;
  rt2=ArrayVel[BallNr].mag();
  ArrayVel[BallNr].unit();
  ArrayVel[BallNr]=TVector::unit( (normal*(2*normal.dot(-ArrayVel[BallNr]))) + ArrayVel[BallNr] );
  ArrayVel[BallNr]=ArrayVel[BallNr]*rt2;

  // Update Explosion Array And Insert Explosion
  for(j=0;j<20;j++)
  {
   if (ExplosionArray[j]._Alpha<=0)
   {
    ExplosionArray[j]._Alpha=1;
    ExplosionArray[j]._Position=point;
    ExplosionArray[j]._Scale=1;
    break;
   }
  }
 }
 else RestTime=0;
}         // End Of While Loop

   
The Main Global Variables Of Importance Are:

TVector dir;
TVector pos(0,-50,1000);
float camera_rotation=0;
Represent the direction and position of the camera. The camera is moved using the LookAt function. As you will probably notice, when not in hook mode (which I will explain later), the whole scene rotates around; the degree of rotation is handled with camera_rotation.

TVector accel(0,-0.05,0);
Represents the acceleration applied to the moving balls. Acts as gravity in the application.

TVector ArrayVel[10];
TVector ArrayPos[10];
TVector OldPos[10];
int NrOfBalls=3;
Arrays which hold the new and old ball positions and the velocity vector of each ball. The number of balls is hard coded to 10.

double Time=0.6;
The time step we use.

int hook_toball1=0;
If 1, the camera view changes and the ball with index 0 in the array is followed. To make the camera follow the ball, its position and velocity vector are used to place the camera exactly behind the ball, looking along the ball's velocity vector.

struct Plane
struct Cylinder
struct Explosion
Self-explanatory structures holding data about planes, cylinders and explosions.

Explosion ExplosionArray[20];
The explosions are stored in an array of fixed length.


The Main Functions Of Interest Are:

int TestIntersionPlane(...);
int TestIntersionCylinder(...);
Perform intersection tests with the primitives.

void LoadGLTextures();
Loads textures from bmp files.

void DrawGLScene();
Has the rendering code. Renders the balls, walls, columns and explosions.

void idle();
Performs the main simulation logic.

void InitGL();
Sets up the OpenGL state.

int FindBallCol(...);
Finds whether any balls collide with each other in the current time step.


For more information look at the source code. I tried to comment it as best as I could. Once the collision detection and response logic is understood, the source should become very clear. For any more info don't hesitate to contact me.

As I stated at the beginning of this tutorial, collision detection is a very difficult subject to cover in one tutorial. You will learn a lot here, enough to create some pretty impressive demos of your own, but there is still a lot more to learn on this subject. Now that you have the basics, all the other sources on collision detection and physically based modeling out there should become easier to understand. With this said, I send you on your way and wish you happy collisions!!!

Some information about Dimitrios Christopoulos: He is currently working as a Virtual Reality software engineer at the Foundation of the Hellenic World in Athens, Greece (www.fhw.gr). Although born in Germany, he studied in Greece at the University of Patras for a B.Sc. in Computer Engineering and Informatics. He also holds an MSc degree (with honours) from the University of Hull (UK) in Computer Graphics and Virtual Environments. He took his first steps in game programming using Basic on a Commodore 64, and switched to C/C++/Assembly on the PC platform after the start of his studies. During the last few years OpenGL has become his graphics API of choice. For more information visit his site at: http://members.xoom.com/D_Christop.

Dimitrios Christopoulos

Jeff Molofee (NeHe)



--  Author: 和你一样
--  Posted: 8/15/2008 12:18:00 AM

--  
Deeply grateful~~
--  Author: Jokcy
--  Posted: 4/1/2011 9:25:00 AM

--  
Just here to learn.