Topic: [Recommended] NeHe OpenGL Tutorials (Chinese and English versions, with VC++ source) Lesson 35 - Lesson 36

Source code for Lessons 35 and 36

Lesson 35


Playing AVI Files In OpenGL

How do you play an AVI file in OpenGL? Grab each frame with the Windows API and bind it to an OpenGL texture. It is not the fastest approach, but the results look good. Give it a try.

      
       
       
    First off, I have to say I am very proud of this tutorial. When Jonathan de Blok gave me the idea to write an AVI player in OpenGL, I had no idea how to open an AVI file, let alone write a player. I flipped through my collection of programming books: not one of them covered AVI files. I then read everything the MSDN has on the AVI format; there is a lot of useful information there, but I needed more.

    After hours of searching the net for AVI examples, I had found just two sites. My search-engine skills may not be amazing, but 99.9% of the time I can find what I am looking for. I was completely shocked to discover how few AVI examples there were! Most of the examples would not compile, some used methods far too complex (for me, at least), and the rest worked but were written in VB, Delphi and so on (not VC++).

    The first page I found was an article by Jonathan Nix titled "AVI Files", at http://www.gamedev.net/reference/programming/features/avifile. Thanks to Jonathan for writing such a good document on the AVI format. Although I do things differently, his code snippets and clear comments made the learning process much easier! The second site is titled "The AVI Overview", by John F. McGowan, Ph.D. I could go on about how amazing John's page is, but it is easier if you check it out yourself: http://www.jmcgowan.com/avi.html. It covers just about everything there is to know about the AVI format. Thanks to John for making such a valuable page available.

    The last thing to mention is that none of this code was borrowed and none of it was copied. I wrote it over three days, using the information from the sites and articles above. Which is to say, my code may not be the best way to play an AVI file, and it may not even be the correct way, but it works and it is easy to use. If you dislike the code or my coding style, or feel my remarks hurt the programming community, you have a few options: 1) search the net for alternate resources, 2) write your own AVI player, or 3) write a better article. Anyone visiting this site should know by now that I am just an intermediate programmer (I have said so at the start of many articles on this site)! I code for fun. The goal of this site is to make it easier for the non-elite coder to get started with OpenGL. These articles are merely examples of how I achieved a few specific effects, nothing more.

    On to the code. The first thing to notice is that we include and link to the Video For Windows header and library. Big thanks to Microsoft (I can't believe I just said that). This library makes opening and playing AVI files a snap. For now, all you need to know is that you MUST include the header file vfw.h and link to the vfw32.lib library if you want the code to compile :)

      
       

    #include <windows.h>      // Windows header
    #include <gl\gl.h>       // OpenGL32 library header
    #include <gl\glu.h>       // GLu32 library header
    #include <vfw.h>       // Video For Windows header
    #include "NeHeGL.h"      // NeHeGL header

    #pragma comment( lib, "opengl32.lib" )    // Link to OpenGL32.lib
    #pragma comment( lib, "glu32.lib" )    // Link to GLu32.lib
    #pragma comment( lib, "vfw32.lib" )     // Link to VFW32.lib


    GL_Window* g_window;
    Keys*  g_keys;

       
    Now we define our variables. angle is used to rotate the objects based on the amount of time that has passed; to keep things simple, angle drives all the rotations.
    next is an integer used to count how much time has passed (in milliseconds). It keeps the frame rate at a steady speed. More on this later!
    frame is the current frame to display from the animation, starting at 0 (the first frame). I think it is safe to assume that if the AVI opened successfully, it has at least one frame :)
    effect is the effect currently on screen (cube, sphere, cylinder). env is a boolean: if it is true, environment mapping is enabled; if false, the object is not environment mapped. If bg is true, you see the video playing fullscreen behind the object; if false, you see only the object (no background).
    sp, ep and bp are used to make sure the user is not holding a key down.
       

    float  angle;       // Used for rotation
    int  next;       // Used for animation timing
    int  frame=0;       // Frame counter
    int  effect;       // Current effect
    bool  sp;       // Space bar pressed?
    bool  env=TRUE;       // Environment mapping (default on)
    bool  ep;       // 'E' pressed?
    bool  bg=TRUE;       // Background (default on)
    bool  bp;       // 'B' pressed?

       
    The psi structure will hold information about our AVI file. pavi is a pointer to a buffer that receives the stream handle once the AVI file is opened. pgf is a pointer to our GetFrame object. bmih will be used later in the code to convert each frame of animation to the format we want (it holds the bitmap header information). lastframe holds the number of the last frame in the AVI animation. width and height hold the dimensions of the AVI stream, and finally... pdata is the pointer to the image data returned each time we grab a frame from the AVI. mpf will be used to calculate how many milliseconds each frame is displayed for. More on this later.
       

    AVISTREAMINFO  psi;      // Structure containing stream info
    PAVISTREAM  pavi;      // Handle to an open stream
    PGETFRAME  pgf;       // Pointer to a GetFrame object
    BITMAPINFOHEADER bmih;       // Header information for DrawDibDraw decoding
    long   lastframe;     // Last frame of the stream
    int   width;      // Video width
    int   height;      // Video height
    char   *pdata;      // Pointer to texture data
    int   mpf;      // Rough milliseconds per frame

       
    In this lesson we create two quadric shapes (a sphere and a cylinder) using the GLU library. quadratic is the pointer to our quadric object.
    hdd is a handle to a DrawDib device context. hdc is a handle to a device context.
    hBitmap is a handle to a device-independent bitmap (used in the bitmap conversion later).
    data is the pointer that will eventually point to the converted bitmap's image data. It will make sense later in the code, keep reading :)
       

    GLUquadricObj *quadratic;      // Storage for our quadric object

    HDRAWDIB hdd;       // Handle for our DrawDib DC
    HBITMAP hBitmap;       // Handle to a device-independent bitmap
    HDC hdc = CreateCompatibleDC(0);     // Create a compatible device context
    unsigned char* data = 0;      // Pointer to our resized image

       
    Now for some assembly language. Those of you who have never used assembly should not be intimidated. It looks cryptic, but it is actually pretty simple!

    While writing this lesson I discovered something very odd. The first video I got working played fine, but the colors were messed up: everything that should be red was blue, and everything that should be blue was red. I went absolutely nuts! I was convinced I had made a mistake somewhere in the code. After going over everything I still could not find the bug, so I started reading through the MSDN again. Why would the red and blue bytes be swapped?! The MSDN plainly says 24-bit bitmaps are RGB! After some more reading I found the answer: in Windows, RGB data is actually stored backwards (BGR), whereas in OpenGL, RGB data is used in exactly that order: RGB!

    After some complaints from Microsoft fans :) I decided to add a note! I am not trashing Microsoft because their RGB data is stored backwards. I just find it very odd that it is called RGB when it is actually stored as BGR in the file!

    A note: this has to do with "little endian" and "big endian". Intel and Intel-compatible processors use little endian, where the least significant byte (LSB) is stored first. OpenGL came from Silicon Graphics machines, which use big endian, so the standard OpenGL bitmap format is big endian. That is my understanding of it.
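    As a quick illustration of byte order (a generic C++ check, not part of this tutorial's code): on a little-endian machine such as an Intel PC, the least significant byte of a multi-byte value sits first in memory.

    #include <cstdio>

    int main()
    {
     unsigned int x = 0x01020304;
     unsigned char* p = (unsigned char*)&x;   // View the int as raw bytes
     // Prints "04 03 02 01" on a little-endian (Intel) machine:
     // the least significant byte (0x04) comes first in memory.
     printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
     return 0;
    }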

    Wonderful! So this first player was absolute crap! My first solution was to swap the bytes with a loop. It worked, but it was slow. Completely fed up, I changed the texture generation code to use GL_BGR_EXT instead of GL_RGB: a huge speed increase, and the colors looked right! So my problem was solved... or so I thought! It turns out some OpenGL drivers do not support GL_BGR_EXT... :(

    After discussing it with my good friend Maxwell Sayles, he recommended swapping the bytes with assembly code. A minute later he had ICQ'd me the code below! It may not be optimized, but it is fast and it does the job!

    Each frame of animation is stored in a buffer. The image is 256 pixels wide, 256 pixels tall, with one byte per color (3 bytes per pixel). The code below scans through the whole buffer and swaps the red and blue bytes. Red is stored at ebx+0 and blue at ebx+2. We move through the buffer 3 bytes at a time (because one pixel is 3 bytes) and keep going until all the data has been swapped.

    Some of you are not fond of assembly code, so I should explain why it is used in this lesson. I had planned to use GL_BGR_EXT; it works, but not on all cards! I then tried the XOR swap method, which works on every machine but is not terribly fast. With assembly the swap is very fast, and considering we are dealing with real-time video, you want the fastest swap you can get. Weighing the options, assembly is the best choice! (A portable C++ sketch of the loop-based swap follows the assembly listing below.) If you have a better way to do the job, please use it! I am not telling you how you have to do things; I am showing you how I did it, and I explain the code in detail. If you want to swap in better code, you will know exactly what this code does, making it easier to optimize later.

      
       

    void flipIt(void* buffer)      // Flips the red and blue bytes (256x256)
    {
     void* b = buffer;      // Pointer to the buffer
     __asm       // Assembly code follows
     {
      mov ecx, 256*256     // Set up a counter (pixels in the memory block)
      mov ebx, b     // Point ebx at our data (b)
      label:      // Label used for looping
       mov al,[ebx+0]    // Load the value at ebx into al
       mov ah,[ebx+2]    // Load the value at ebx+2 into ah
       mov [ebx+2],al    // Store al at ebx+2
       mov [ebx+0],ah    // Store ah at ebx

       add ebx,3     // Move through the data 3 bytes at a time
       dec ecx     // Decrease the loop counter
       jnz label     // If ecx is not zero, jump back to label
     }
    }
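    For reference, here is a portable C++ version of the same red/blue swap: the slower loop-based approach the text mentions. This is a sketch under the tutorial's assumptions (256x256 image, 3 bytes per pixel), not part of the original code; flipItPortable is a hypothetical name.

    #include <algorithm>      // For std::swap

    void flipItPortable(void* buffer)    // Hypothetical portable fallback for flipIt()
    {
     unsigned char* p = (unsigned char*)buffer;  // Treat the buffer as raw bytes
     for (int i = 0; i < 256*256; i++, p += 3)  // One pixel = 3 bytes (BGR)
      std::swap(p[0], p[2]);    // Swap the bytes at +0 and +2 (red/blue)
    }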

       
    The code below opens the AVI file in read mode. szFile is the name of the file to open. title[100] will hold the modified window title (showing information about the AVI file).
    The first thing we need to do is call AVIFileInit(), which initializes the AVI file library (gets things ready for us).

    There are many ways to open an AVI file. I use AVIStreamOpenFromFile(...), which opens a single stream from an AVI file (AVI files can contain multiple streams). Its parameters are as follows: pavi is a pointer to a buffer that receives the stream handle; szFile is the name of the file to open (complete with path). The third parameter is the type of stream to open; in this project we only care about the video stream (streamtypeVIDEO). The fourth parameter is 0, meaning we want the first video stream found (an AVI file can contain several; we want the first). OF_READ opens the file read-only. The last parameter is a pointer to a class identifier for the handler to use; to be honest, I have no idea what it does, so I pass NULL and let Windows pick for me.

      
       

    void OpenAVI(LPCSTR szFile)      // Opens an AVI file (szFile)
    {
     TCHAR title[100];     // Will hold the modified window title

     AVIFileInit();      // Opens the AVIFile library

     // Open the AVI stream
     if (AVIStreamOpenFromFile(&pavi, szFile, streamtypeVIDEO, 0, OF_READ, NULL) !=0)
     {
      // An error occurred opening the stream
      MessageBox (HWND_DESKTOP, "Failed To Open The AVI Stream", "Error", MB_OK | MB_ICONEXCLAMATION);
     }

       
    If we made it this far, it is safe to assume the file was opened and a stream was located! Next we grab some information from the AVI file with AVIStreamInfo(...).
    Earlier we created a structure called psi to hold information about the AVI stream. In the first line below we fill it with information about the AVI: everything from the stream's width (in pixels) to the animation's frame rate ends up in psi. Those of you who want accurate playback speeds, take note of what I just said. For more information, look up AVIStreamInfo in the MSDN.

    We calculate the frame width by subtracting the left border from the right border; the result is an accurate width in pixels. For the height, we subtract the top of the frame from the bottom, giving the height in pixels.

    Then we grab the number of the last frame with AVIStreamLength(...), which returns the number of the final frame in the AVI. The result is stored in lastframe.

    Calculating the frame rate is easy: frames per second = psi.dwRate / psi.dwScale. The value should match the frame rate shown when you right-click the AVI and check its properties. So what does this have to do with mpf, you ask? When I first wrote this code I used the frame rate to select the current frame of animation, and I ran into a problem: the video played too fast! So I looked at the video properties. face2.avi is 3.36 seconds long with a frame rate of 29.974 fps, yet the video has 91 frames of animation. But 3.36 * 29.974 = 100.71. Very odd!!

    So I do things a little differently: instead of calculating frames per second, I calculate how long each frame should be displayed. AVIStreamSampleToTime() converts a position in the animation into the number of milliseconds it takes to reach that position. So we get the length of the whole animation by asking for the time of the last frame, then divide that result by the total number of frames (lastframe). That gives the display time per frame in milliseconds, stored in mpf (milliseconds per frame). For face2.avi this works out to roughly 3360 ms / 91 frames, about 36 ms per frame. You could also get the per-frame time by asking for the time of just one frame: AVIStreamSampleToTime(pavi,1). Either way works! Big thanks to Albert Chaulk for the idea!

    I say rough milliseconds per frame because mpf is an integer, so any fractional value gets rounded off.

      
       

     AVIStreamInfo(pavi, &psi, sizeof(psi));   // Read information about the stream into psi
     width=psi.rcFrame.right-psi.rcFrame.left;   // Width is right side minus left
     height=psi.rcFrame.bottom-psi.rcFrame.top;   // Height is bottom minus top

     lastframe=AVIStreamLength(pavi);    // Last frame of the stream

     mpf=AVIStreamSampleToTime(pavi,lastframe)/lastframe;  // Rough milliseconds per frame
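    The frame-rate formula mentioned above is never actually used by the player, which relies on mpf instead. For reference, a one-line sketch of it, using the psi structure above:

     float fps = (float)psi.dwRate / (float)psi.dwScale;  // Frames per second, e.g. 29.974 for face2.avi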

       
    Because OpenGL requires texture data to be a power of 2, and most videos are 160x120, 320x240 or some other odd size, we need a fast way to resize the video on the fly to a format usable as a texture. We take advantage of the Windows DIB functions to do this.
    The first thing to do is describe the type of image we want, by filling the bmih BitmapInfoHeader structure with the requested parameters.
    We start by setting the size of the structure, then set the bit planes to 1. Three bytes of data works out to 24 bits (RGB). We want the image to be 256 pixels wide and 256 pixels tall, and finally we want the data returned as UNCOMPRESSED RGB data (BI_RGB).

    CreateDIBSection creates a device-independent bitmap (DIB) we can write to directly. If all goes well, hBitmap becomes a handle to the new DIB. hdc is a handle to a device context (DC). The second parameter is a pointer to our BitmapInfo structure, which describes the DIB as above. The third parameter (DIB_RGB_COLORS) specifies that the data is RGB values. data is a pointer to a variable that receives a pointer to the location of the DIB's bit values (whew, what a mouthful). Setting the fifth parameter to NULL lets memory be allocated for our DIB, and the last parameter can be ignored (NULL).

    Quoted from the MSDN: the SelectObject function selects an object into the specified device context (DC).

    We now have a DIB we can draw directly to. Yay :)

      
       

     bmih.biSize  = sizeof (BITMAPINFOHEADER);  // Size of the BitmapInfoHeader
     bmih.biPlanes  = 1;     // Bit planes
     bmih.biBitCount  = 24;     // Bit format we want (24 bit, 3 bytes)
     bmih.biWidth  = 256;     // Width we want (256 pixels)
     bmih.biHeight  = 256;     // Height we want (256 pixels)
     bmih.biCompression = BI_RGB;      // Requested mode = RGB

     hBitmap = CreateDIBSection (hdc, (BITMAPINFO*)(&bmih), DIB_RGB_COLORS, (void**)(&data), NULL, NULL);
     SelectObject (hdc, hBitmap);     // Select hBitmap into our device context (hdc)

       
    A few more things to do before we are ready to read frames from the AVI. Next we prepare our program to decompress video frames from the AVI file, using the AVIStreamGetFrameOpen(...) function.
    You can pass a structure similar to the one above as the second parameter to have a specific video format returned. Unfortunately, the only things you can alter are the width and height of the returned image. The MSDN also mentions passing AVIGETFRAMEF_BESTDISPLAYFMT to select the best display format; oddly enough, my compiler had no definition for it.

    If all goes well, a GETFRAME object is returned (used to read frame data). If there is a problem, a message box pops up on the screen telling you there was an error!

      
       

     pgf=AVIStreamGetFrameOpen(pavi, NULL);    // Create the PGETFRAME using our requested mode
     if (pgf==NULL)
     {
      // An error occurred opening the frame
      MessageBox (HWND_DESKTOP, "Failed To Open The AVI Frame", "Error", MB_OK | MB_ICONEXCLAMATION);
     }

       
    The code below prints the video's width, height and frame count to title. SetWindowText(...) displays title at the top of the window. Run the program in windowed mode to see what this code does.
       

     // Title bar information (width / height / last frame)
     wsprintf (title, "NeHe's AVI Player: Width: %d, Height: %d, Frames: %d", width, height, lastframe);
     SetWindowText(g_window->hWnd, title);    // Modify the title bar
    }

       
    Now for the fun stuff: we grab a frame from the AVI and convert it to a usable image size and color depth. lpbi will hold the BitmapInfoHeader information for the frame. The second line of code below accomplishes a few things at once: first it grabs a frame of animation, the frame we want being specified by frame; this pulls in that frame and points lpbi at its header information.
    More fun stuff: now we need to point at the image data. We have to skip over the header information (lpbi->biSize). One thing I did not realize until writing this article: we also have to skip over any color information, which means skipping biClrUsed*sizeof(RGBQUAD) bytes (translator's note: I believe he means skipping the palette information). After all of that, we are left with a pointer to the image data (pdata).

    We also need to convert each frame of animation to a usable texture size, and convert the data to RGB. For this we use DrawDibDraw(...).

    A quick explanation: we can draw directly to our custom DIB, and that is what DrawDibDraw(...) does. The first parameter is a handle to our DrawDib DC; the second is a handle to the DC. Next come the upper-left corner (0,0) and lower-right corner (256,256) of the destination rectangle.

    lpbi points to the bitmapinfoheader of the frame we just read. pdata points to the image data of the frame we just read.

    Then come the upper-left corner (0,0) of the source image (the frame we just read) and its lower-right corner (the frame's width and height). The last parameter should be 0.

    This converts an image of any size and color depth to a 256x256, 24-bit image.

      
       

    void GrabAVIFrame(int frame)      // Grabs a frame from the stream
    {
     LPBITMAPINFOHEADER lpbi;      // Holds the bitmap header information
     lpbi = (LPBITMAPINFOHEADER)AVIStreamGetFrame(pgf, frame);  // Grab data from the AVI stream
     pdata=(char *)lpbi+lpbi->biSize+lpbi->biClrUsed * sizeof(RGBQUAD); // Pointer to the data returned by AVIStreamGetFrame (skip the header and color information)

     // Convert the data to the requested bitmap format
     DrawDibDraw (hdd, hdc, 0, 0, 256, 256, lpbi, pdata, 0, 0, width, height, 0);

       
    We now have our frame of animation, but the red and blue bytes are swapped. To fix that, we jump to our speedy flipIt(...) code. Remember, data is a pointer to a variable that receives a pointer to the location of the DIB's bit values. That means after calling DrawDibDraw, data points to the resized (256x256), 24-bit bitmap data.
    Originally I updated the texture by recreating it for each frame of animation. I received a few emails suggesting glTexSubImage2D(). Flipping through the OpenGL Red Book, I stumbled across this note: "Creating a texture may be more computationally expensive than modifying an existing one. In OpenGL Release 1.1, there are new routines to replace all or part of a texture image with new information. This can be helpful for certain applications, such as using real-time, captured video images as texture images. For that application, it makes sense to create a single texture and use glTexSubImage2D() to repeatedly replace the texture data with new video images."

    Personally I did not notice a huge speed increase, but on slower cards you might! The parameters of glTexSubImage2D() are: the target, a 2D texture (GL_TEXTURE_2D); the detail level (0), used for mipmapping; the x (0) and y (0) offsets telling OpenGL where to start copying (0,0 is the lower-left corner of the texture); then the width and height of the image we are copying, 256 pixels wide and 256 pixels tall; GL_RGB, the format of our data; we are copying unsigned bytes; and finally... the pointer to our data: data. Very simple!

    Kevin Rogers adds: I want to point out another important reason to use glTexSubImage2D(). Not only is it faster on many OpenGL implementations, the target area does not need to be a power of 2. This is handy for video playback, since frame dimensions are rarely powers of 2 (often something like 320x200). It gives you the flexibility to play the video stream at its original aspect rather than distorting or clipping each frame to fit your texture dimensions.
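    To make Kevin's point concrete, here is a minimal sketch of updating just a 320x240 region of a larger power-of-2 texture. The sizes are hypothetical; this is not something the tutorial's code does.

     // Assume a 512x256 texture was created with glTexImage2D() at init time.
     glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, 320, 240, GL_RGB, GL_UNSIGNED_BYTE, data);
     // When drawing, sample only s in [0.0, 320.0/512.0] and t in [0.0, 240.0/256.0]
     // so the frame keeps its original aspect instead of being stretched.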

    It is important to note that you can NOT update a texture if you have not created it in the first place! We create the texture in Initialize().

    One more thing: if you plan to use more than one texture in your project, make sure you bind the texture you want to update; otherwise the texture you update may not be the one you intended! (A small binding sketch follows the update code below.)

      
       

     flipIt(data);       // Swap the red and blue bytes

     // Update the texture
     glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, data);
    }
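    For the multiple-texture case just mentioned, a minimal sketch of binding before updating. avitex, data0 and data1 are hypothetical names, not part of this tutorial's code; the textures are assumed to have been created with glGenTextures() and glTexImage2D() at startup.

     glBindTexture(GL_TEXTURE_2D, avitex[0]);   // Make texture 0 current
     glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, data0);
     glBindTexture(GL_TEXTURE_2D, avitex[1]);   // Updates now affect texture 1
     glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, data1);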

       
    The following section of code is called when the program exits. We close the DrawDib DC and free its resources, then release the AVI GetFrame resources, and finally release the stream and then the file.
       

    void CloseAVI(void)       // Properly closes the AVI resources
    {
     DeleteObject(hBitmap);      // Delete the device-independent bitmap
     DrawDibClose(hdd);       // Close the DrawDib DC
     AVIStreamGetFrameClose(pgf);     // Deallocate the GetFrame resources
     AVIStreamRelease(pavi);      // Release the stream
     AVIFileExit();       // Release the file
    }

       
    Initialization is pretty straightforward. We set the starting angle to 0, then open the DrawDib library (grabbing a DC). If all goes well, hdd becomes a handle to the newly created device context.
    We clear the screen to black, enable depth testing, and so on.

    Then we create a new quadric. quadratic is the pointer to the new object. We set up smooth normals and enable texture coordinate generation for it.
      
       

    BOOL Initialize (GL_Window* window, Keys* keys)    // Any GL init code and user initialization goes here
    {
     g_window = window;
     g_keys  = keys;

     // Start of user initialization
     angle = 0.0f;       // Set the starting angle to zero
     hdd = DrawDibOpen();      // Grab a DC for our DIB
     glClearColor (0.0f, 0.0f, 0.0f, 0.5f);    // Black background
     glClearDepth (1.0f);      // Depth buffer setup
     glDepthFunc (GL_LEQUAL);      // Type of depth testing (less or equal)
     glEnable(GL_DEPTH_TEST);      // Enable depth testing
     glShadeModel (GL_SMOOTH);      // Smooth shading
     glHint (GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);   // Most accurate perspective calculations

     quadratic=gluNewQuadric();      // Create a pointer to the quadric object
     gluQuadricNormals(quadratic, GLU_SMOOTH);    // Create smooth normals
     gluQuadricTexture(quadratic, GL_TRUE);    // Create texture coordinates

       
    In the next bit of code we enable 2D texture mapping, set the texture filters to GL_NEAREST (fast, but rough looking), and set up sphere mapping (to create the environment-mapping effect). Play around with the filters; if you have the power, try GL_LINEAR for a smoother-looking animation.
    After setting up the texture and sphere mapping, we open the AVI file. I tried to keep things simple... can you tell :) The file we open is called face2.avi. Finally we create the initial texture; we have to, so that glTexSubImage2D() has a texture to update in GrabAVIFrame().

      
       

     glEnable(GL_TEXTURE_2D);     // Enable 2D texture mapping
     glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST); // Set the texture mag filter
     glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST); // Set the texture min filter

     glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  // Texture generation mode for S: sphere mapping
     glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  // Texture generation mode for T: sphere mapping

     OpenAVI("data/face2.avi");     // Open the AVI file

     // Create the texture
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);

     return TRUE;      // Return TRUE (initialization successful)
    }

       
    When shutting down we call CloseAVI(), which properly closes the AVI file and releases all used resources.
       

    void Deinitialize (void)      // Any user deinitialization goes here
    {
     CloseAVI();      // Close the AVI file
    }

       
    Here is where we check for key presses and update the rotation angle based on time passed. By now I should not have to explain this code in detail. We check whether the space bar is pressed; if so, we increase effect. There are three effects (cube, sphere, cylinder), and when the fourth is selected (effect=3) no object is drawn, showing only the background! If the fourth effect is active and space is pressed, we reset to the first effect (effect=0). Yeah, I know I should have called it OBJECT :)
    Then we check whether 'B' is pressed; if so, we toggle the background (bg) from on to off or off to on.

    Environment mapping is handled the same way: we check whether 'E' is pressed, and if so, toggle env from TRUE to FALSE or FALSE to TRUE, turning environment mapping off or on!

    Each time Update() is called, angle is increased by a small fraction. I divide the time passed by 60.0f to slow things down a little.

      
       

    void Update (DWORD milliseconds)     // Perform motion updates here
    {
     if (g_keys->keyDown [VK_ESCAPE] == TRUE)   // Is ESC being pressed?
     {
      TerminateApplication (g_window);   // Terminate the program
     }

     if (g_keys->keyDown [VK_F1] == TRUE)    // Is F1 being pressed?
     {
      ToggleFullscreen (g_window);   // Toggle fullscreen mode
     }

     if ((g_keys->keyDown [' ']) && !sp)    // Space pressed and not held?
     {
      sp=TRUE;      // Set sp to TRUE
      effect++;      // Increase effect
      if (effect>3)     // Past our limit?
       effect=0;     // Reset back to 0
     }

     if (!g_keys->keyDown[' '])     // Space released?
      sp=FALSE;      // Set sp to FALSE

     if ((g_keys->keyDown ['B']) && !bp)    // 'B' pressed and not held?
     {
      bp=TRUE;      // Set bp to TRUE
      bg=!bg;      // Toggle background off/on
     }

     if (!g_keys->keyDown['B'])     // 'B' released?
      bp=FALSE;      // Set bp to FALSE

     if ((g_keys->keyDown ['E']) && !ep)    // 'E' pressed and not held?
     {
      ep=TRUE;      // Set ep to TRUE
      env=!env;      // Toggle environment mapping off/on
     }

     if (!g_keys->keyDown['E'])     // 'E' released?
      ep=FALSE;      // Set ep to FALSE

     angle += (float)(milliseconds) / 60.0f;   // Update angle based on the timer

       
    In the original article, all AVI files played at the same speed. Since then I have rewritten it so the video plays at the correct speed. next is increased by the number of milliseconds that have passed. If you remember from earlier, we calculated how long each frame should be displayed in milliseconds (mpf). To find the current frame, we divide the elapsed time (next) by the display time per frame (mpf).
    We also check that the current frame has not passed the last frame of the video. If it has, frame and the animation timer are reset to 0, and the animation starts over.

    The code below drops frames if your computer is running too slowly or another application is hogging the CPU. If you want every frame displayed no matter how slow the machine, you could instead check whether next is greater than mpf; if so, reset next to 0 and increase frame by one (a sketch of this variant follows the code below). Either way works, though the code below is better for fast machines.

    If you feel energetic, try adding looping, fast-forward, pause or reverse play!

      
       

     next+= milliseconds;      // Increase next based on the timer (milliseconds)
     frame=next/mpf;       // Calculate the current frame

     if (frame>=lastframe)      // Past the last frame?
     {
      frame=0;       // Reset frame back to zero (start of the video)
      next=0;       // Reset the animation timer (next)
     }
    }
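    For readers who want the alternatives mentioned above, here is a minimal sketch of the never-drop-frames variant, plus a hypothetical paused flag as one starting point for the suggested pause feature. It assumes the same globals (next, frame, mpf, lastframe) as the code above; none of this is in the original tutorial.

     bool paused = false;      // Hypothetical pause flag (toggle it on a key press)

     void UpdateNoSkip (DWORD milliseconds)   // Hypothetical variant of the timing code above
     {
      if (!paused)      // Only advance the timer while playing
       next += milliseconds;    // Accumulate elapsed time
      if (next > mpf)      // One frame's worth of time has passed?
      {
       next = 0;      // Reset the accumulator
       frame++;      // Advance exactly one frame (never skip)
      }
      if (frame >= lastframe)     // Past the last frame?
       frame = 0;      // Loop back to the start
     }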

       
    Now for the drawing code :) We clear the screen and depth buffer, then grab a frame of animation. Again, I tried to keep it simple: you pass the frame you want (frame) to GrabAVIFrame(). Very simple! Of course, if you wanted multiple AVIs, you would have to pass a texture ID as well (more for you to do).
       

    void Draw (void)       // Draw our scene
    {
     glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear the screen and depth buffer

     GrabAVIFrame(frame);     // Grab a frame from the AVI

       
    The code below checks whether we want to draw a background image. If bg is TRUE, we reset the modelview matrix and draw a single texture-mapped quad (mapped with a frame from the AVI) large enough to fill the entire screen. The quad is drawn 20 units into the screen so it appears behind the object (further in the distance).
       

     if (bg)       // Is the background visible?
     {
      glLoadIdentity();     // Reset the modelview matrix
      glBegin(GL_QUADS);     // Begin drawing the background (one quad)
       // Front face
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 11.0f,  8.3f, -20.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-11.0f,  8.3f, -20.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-11.0f, -8.3f, -20.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 11.0f, -8.3f, -20.0f);
      glEnd();      // Done drawing the background
     }

       
    After drawing the background (or not), we reset the modelview matrix (putting us back at the center of the screen), then translate 10 units into the screen. After that we check whether env is TRUE; if it is, we enable sphere mapping to create the environment-mapping effect.
      
       

     glLoadIdentity ();      // Reset the modelview matrix
     glTranslatef (0.0f, 0.0f, -10.0f);    // Translate 10 units into the screen

     if (env)       // Is environment mapping on?
     {
      glEnable(GL_TEXTURE_GEN_S);    // Enable texture coordinate generation for S
      glEnable(GL_TEXTURE_GEN_T);    // Enable texture coordinate generation for T
     }

       
    I added the code below at the last minute. It rotates on the x-axis and y-axis (based on angle) and then translates 2 units on the z-axis, moving us away from the center of the screen. Remove the three lines below and the object spins in the center of the screen; with them, the object moves around a bit as it spins :)
    If you do not understand rotations and translations... you should not be reading this lesson :)

      
       

     glRotatef(angle*2.3f,1.0f,0.0f,0.0f);    // Throw in some rotations to move things around a bit
     glRotatef(angle*1.8f,0.0f,1.0f,0.0f);    // Throw in some rotations to move things around a bit
     glTranslatef(0.0f,0.0f,2.0f);     // After rotating, translate to a new position

       
    The code below checks which object (effect) we want to draw. If effect is 0, we do a few rotations and then draw a cube. The rotations keep the cube spinning on the x-, y- and z-axes. By now the code to build a cube should be burned into your head :)
       

     switch (effect)       // Which effect?
     {
     case 0:        // Effect 0 - cube
      glRotatef (angle*1.3f, 1.0f, 0.0f, 0.0f);  // Rotate on the x-axis by angle
      glRotatef (angle*1.1f, 0.0f, 1.0f, 0.0f);  // Rotate on the y-axis by angle
      glRotatef (angle*1.2f, 0.0f, 0.0f, 1.0f);  // Rotate on the z-axis by angle
      glBegin(GL_QUADS);    // Begin drawing a cube
       // Front face
       glNormal3f( 0.0f, 0.0f, 0.5f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);

       // Back face
       glNormal3f( 0.0f, 0.0f,-0.5f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);

       // Top face
       glNormal3f( 0.0f, 0.5f, 0.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);

       // Bottom face
       glNormal3f( 0.0f,-0.5f, 0.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);

       // Right face
       glNormal3f( 0.5f, 0.0f, 0.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);

       // Left face
       glNormal3f(-0.5f, 0.0f, 0.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
      glEnd();      // Done drawing the cube
      break;
       
    Here is where we draw the sphere. We start with a few rotations, then draw the sphere. It has a radius of 1.3f, with 20 slices and 20 stacks. I used 20 because I did not want the sphere to be perfectly smooth; fewer slices and stacks give a rougher-looking sphere, making it obvious the sphere is spinning when sphere mapping is on. Try other values! Keep in mind that more slices and stacks require more processing power!
       

     case 1:        // Effect 1 - sphere
      glRotatef (angle*1.3f, 1.0f, 0.0f, 0.0f);  // Rotate on the x-axis by angle
      glRotatef (angle*1.1f, 0.0f, 1.0f, 0.0f);  // Rotate on the y-axis by angle
      glRotatef (angle*1.2f, 0.0f, 0.0f, 1.0f);  // Rotate on the z-axis by angle
      gluSphere(quadratic,1.3f,20,20);   // Draw the sphere
      break;
       
    Here we draw the cylinder. We start with a few rotations. The cylinder has a top and bottom radius of 1.0f and is 3.0f units tall, with 32 slices and 32 stacks. Reduce the slices and stacks and the cylinder is built from fewer polygons, so it looks less round.
       

     case 2:        // Effect 2 - cylinder
      glRotatef (angle*1.3f, 1.0f, 0.0f, 0.0f);  // Rotate on the x-axis by angle
      glRotatef (angle*1.1f, 0.0f, 1.0f, 0.0f);  // Rotate on the y-axis by angle
      glRotatef (angle*1.2f, 0.0f, 0.0f, 1.0f);  // Rotate on the z-axis by angle
      glTranslatef(0.0f,0.0f,-1.5f);    // Center the cylinder
      gluCylinder(quadratic,1.0f,1.0f,3.0f,32,32);  // Draw the cylinder
      break;
     }

       
    Below we check whether env is TRUE; if so, we disable sphere mapping. We call glFlush() to flush the rendering pipeline (making sure everything is rendered before the next frame begins).
       

     if (env)        // Is environment mapping on?
     {
      glDisable(GL_TEXTURE_GEN_S);    // Disable texture coordinate generation for S
      glDisable(GL_TEXTURE_GEN_T);    // Disable texture coordinate generation for T
     }

     glFlush ();       // Flush the rendering pipeline
    }

       
    I hope you enjoyed this lesson. It is 2:00am right now (translator oak's note: it happened to be exactly 2:00am when I translated this line too!)... I have spent six hours writing it. Sounds crazy, but writing things so they make sense is not easy. I have read this article three times, trying to keep it easy to understand. Believe it or not, what matters most to me is that you understand how the code works and why it works. That is why I ramble on and over-comment everything.

    Either way, I would love to hear feedback about this article. If you find mistakes, or would like to help me improve it, please contact me. As I said, this is my first time writing AVI-related code. Normally I would not write about a subject I have only just learned, but my excitement got the best of me, and considering how few articles there are on the subject, I am hoping this opens the door to higher-quality AVI demos and code! Maybe it will, maybe it will not. Either way, the code is yours to do with as you please.

    Huge thanks to Fredster for the face AVI file. Face was one of six AVI animations he sent me, with no questions and no conditions. He helped in whatever way he could, so thanks!

    Even bigger thanks to Jonathan de Blok: without him, this article would not exist. He got me interested in the AVI format by sending me the code for his own AVI player, and he answered my questions about that code. The important thing is that I did not borrow or copy his code; it only helped me understand the mechanics of an AVI player. My player opens, decodes and plays AVI files with different code!

    Thanks to everyone who helped, including all the visitors! Without you, my site would be worthless!!!


    Lesson 35 (original English text)
       
    I would like to start off by saying that I am very proud of this tutorial. When I first got the idea to code an AVI player in OpenGL thanks to Jonathan de Blok, I had no idea how to open an AVI let alone code an AVI player. I started off by flipping through my collection of programming books. Not one book talked about AVI files. I then read everything there was to read about the AVI format in the MSDN. Lots of useful information in the MSDN, but I needed more information.

    After browsing the net for hours searching for AVI examples, I had just two sites bookmarked. I'm not going to say my search engine skills are amazing, but 99.9% of the time I have no problems finding what I'm looking for. I was absolutely shocked when I realized just how few AVI examples there were! Most of the examples I found wouldn't compile... A handful of them were way too complex (for me at least), and the rest did the job, but they were coded in VB, Delphi, etc. (not VC++).

    The first page I bookmarked was an article written by Jonathan Nix titled "AVI Files". You can visit it at http://www.gamedev.net/reference/programming/features/avifile/. Huge respect to Jonathan for writing an extremely brilliant document on the AVI format. Although I decided to do things differently, his example code snippets and clear comments made the learning process a lot easier! The second site is titled "The AVI Overview" by John F. McGowan, Ph.D. I could go on and on about how amazing John's page is, but it's easier if you check it out yourself! The URL is http://www.jmcgowan.com/avi.html. His site pretty much covers everything there is to know about the AVI format! Thanks to John for making such a valuable page available to the public.

    The last thing I wanted to mention is that NONE of the code has been borrowed, and none of the code has been copied. It was written during a 3-day coding spree, using information from the above mentioned sites and articles. With that said, I feel it is important to note that my code may not be the BEST way to play an AVI file. It may not even be the correct way to play an AVI file, but it does work, and it's easy to use! If you dislike the code, my coding style, or if you feel I'm hurting the programming community by releasing this tut, you have a few options: 1) search the net for alternate resources, 2) write your own AVI player, OR 3) write a better tutorial! Everyone visiting this site should know by now that I'm an average programmer with average skills (I've stated that on numerous pages throughout the site)! I code for FUN! The goal of this site is to make life easier for the non-elite coder to get started with OpenGL. The tutorials are merely examples on how 'I' managed to accomplish a specific effect... Nothing more, nothing less!

    On to the code...

    The first thing you will notice is that we include and link to the Video For Windows header / library. Big thanks to Microsoft (I can't believe I just said that!). This library makes opening and playing AVI files a SNAP! For now... All you need to know is that you MUST include the vfw.h header file and you must link to the vfw32.lib library file if you want the code to compile :)   
       

    #include <windows.h>       // Header File For Windows
    #include <gl\gl.h>       // Header File For The OpenGL32 Library
    #include <gl\glu.h>       // Header File For The GLu32 Library
    #include <vfw.h>       // Header File For Video For Windows
    #include "NeHeGL.h"       // Header File For NeHeGL

    #pragma comment( lib, "opengl32.lib" )     // Search For OpenGL32.lib While Linking
    #pragma comment( lib, "glu32.lib" )     // Search For GLu32.lib While Linking
    #pragma comment( lib, "vfw32.lib" )     // Search For VFW32.lib While Linking

    #ifndef CDS_FULLSCREEN       // CDS_FULLSCREEN Is Not Defined By Some
    #define CDS_FULLSCREEN 4      // Compilers. By Defining It This Way,
    #endif         // We Can Avoid Errors

    GL_Window* g_window;
    Keys*  g_keys;

       
    Now we define our variables. angle is used to rotate our objects around based on the amount of time that has passed. We will use angle for all rotations just to keep things simple.

    next is an integer variable that will be used to count how much time has passed (in milliseconds). It will be used to keep the framerate at a decent speed. More about this later!

    frame is of course the current frame we want to display from the animation. We start off at 0 (first frame). I think it's safe to assume that if we managed to open the video, it HAS to have at least one frame of animation :)

    effect is the current effect seen on the screen (object: Cube, Sphere, Cylinder, Nothing). env is a boolean value. If it's true, then environment mapping is enabled, if it's false, the object will NOT be environment mapped. If bg is true, you will see the video playing fullscreen behind the object. If it's false, you will only see the object (there will be no background).

    sp, ep and bp are used to make sure the user isn't holding a key down.   
       

    // User Defined Variables
    float  angle;       // Used For Rotation
    int  next;       // Used For Animation
    int  frame=0;      // Frame Counter
    int  effect;       // Current Effect
    bool  sp;       // Space Bar Pressed?
    bool  env=TRUE;      // Environment Mapping (Default On)
    bool  ep;       // 'E' Pressed?
    bool  bg=TRUE;      // Background (Default On)
    bool  bp;       // 'B' Pressed?

       
    The psi structure will hold information about our AVI file later in the code. pavi is a pointer to a buffer that receives the new stream handle once the AVI file has been opened. pgf is a pointer to our GetFrame object. bmih will be used later in the code to convert the frame of animation to a format we want (holds the bitmap header info describing what we want). lastframe will hold the number of the last frame in the AVI animation. width and height will hold the dimensions of the AVI stream and finally.... pdata is a pointer to the image data returned after we get a frame of animation from the AVI! mpf will be used to calculate how many milliseconds each frame is displayed for. More on this later.   
       

    AVISTREAMINFO  psi;      // Pointer To A Structure Containing Stream Info
    PAVISTREAM  pavi;      // Handle To An Open Stream
    PGETFRAME  pgf;      // Pointer To A GetFrame Object
    BITMAPINFOHEADER bmih;      // Header Information For DrawDibDraw Decoding
    long   lastframe;     // Last Frame Of The Stream
    int   width;      // Video Width
    int   height;      // Video Height
    char   *pdata;      // Pointer To Texture Data
    int   mpf;      // Will Hold Rough Milliseconds Per Frame

       
    In this tutorial we will create 2 different quadratic shapes (a sphere and a cylinder) using the GLU library. quadratic is a pointer to our quadric object.

    hdd is a handle to a DrawDib device context. hdc is a handle to a device context.

    hBitmap is a handle to a device-independent bitmap (used in the bitmap conversion process later).

    data is a pointer that will eventually point to our converted bitmap image data. Will make sense later in the code. Keep reading :)   
       

    GLUquadricObj *quadratic;      // Storage For Our Quadratic Objects

    HDRAWDIB hdd;        // Handle For Our Dib
    HBITMAP hBitmap;       // Handle To A Device-Independent Bitmap
    HDC hdc = CreateCompatibleDC(0);     // Creates A Compatible Device Context
    unsigned char* data = 0;      // Pointer To Our Resized Image

       
    Now for some assembly language. For those of you that have never used assembly before, don't be intimidated. It might look cryptic, but it's actually pretty simple!

    While writing this tutorial I discovered something very odd. The first video I actually got working with this code was playing fine but the colors were messed up. Everything that was supposed to be red was blue and everything that was supposed to be blue was red. I went absolutely NUTS! I was convinced that I made a mistake somewhere in the code. After looking at all the code, I was unable to find the bug! So I started reading through the MSDN again. Why would the red and blue bytes be swapped!?! It says right in the MSDN that 24 bit bitmaps are RGB!!! After some more reading I discovered the problem. In WINDOWS (figures), RGB data is actually stored backwards (BGR). In OpenGL, RGB is exactly that... RGB!

    After a few complaints from fans of Microsoft :) I decided to add a quick note! I am not trashing Microsoft because their RGB data is stored backwards. I just find it very frustrating that it's called RGB when it's actually BGR in the file!

    Blue Adds: It's more to do with "little endian" and "big endian". Intel and Intel compatibles use little endian where the least significant byte (LSB) is stored first. OpenGL came from Silicon Graphics machines, which are probably big endian, and thus the OpenGL standard required the bitmap format to be in big endian format. I think this is how it works.

    Wonderful! So here I am with a player that looks like absolute crap! My first solution was to swap the bytes manually with a for-next loop. It worked, but it was very slow. Completely fed up, I modified the texture generation code to use GL_BGR_EXT instead of GL_RGB. A huge speed increase, and the colors looked great! So my problem was solved... or so I thought! It turns out, some OpenGL drivers have problems with GL_BGR_EXT.... Back to the drawing board :(

    After talking with my good friend Maxwell Sayles, he recommended that I swap the bytes using asm code. A minute later, he had icq'd me the code below! It may not be optimized, but it's fast and it does the job!

    Each frame of animation is stored in a buffer. The image will always be 256 pixels wide, 256 pixels tall and 1 byte per color (3 bytes per pixel). The code below will go through the buffer and swap the Red and Blue bytes. Red is stored at ebx+0 and blue is stored at ebx+2. We move through the buffer 3 bytes at a time (because one pixel is made up of 3 bytes). We loop through the data until all of the bytes have been swapped.

    A few of you were unhappy with the use of ASM code, so I figured I would explain why it's used in this tutorial. Originally I had planned to use GL_BGR_EXT as I stated, it works. But not on all cards! I then decided to use the swap method from the last tut (very tidy XOR swap code). The swap code works on all machines, but it's not extremely fast. In the last tut, yeah, it works GREAT. In this tutorial we are dealing with REAL-TIME video. You want the fastest swap you can get. Weighing the options, ASM in my opinion is the best choice! If you have a better way to do the job, please ... USE IT! I'm not telling you how you HAVE to do things. I'm showing you how I did it. I also explain in detail what the code does. That way if you want to replace the code with something better, you know exactly what this code is doing, making it easier to find an alternate solution if you want to write your own code!   
       

    void flipIt(void* buffer)      // Flips The Red And Blue Bytes (256x256)
    {
     void* b = buffer;      // Pointer To The Buffer
     __asm        // Assembler Code To Follow
     {
      mov ecx, 256*256     // Set Up A Counter (Dimensions Of Memory Block)
      mov ebx, b      // Points ebx To Our Data (b)
      label:       // Label Used For Looping
       mov al,[ebx+0]     // Loads Value At ebx Into al
       mov ah,[ebx+2]     // Loads Value At ebx+2 Into ah
       mov [ebx+2],al     // Stores Value In al At ebx+2
       mov [ebx+0],ah     // Stores Value In ah At ebx

       add ebx,3     // Moves Through The Data By 3 Bytes
       dec ecx      // Decreases Our Loop Counter
       jnz label     // If Not Zero Jump Back To Label
     }
    }

       
    The code below opens the AVI file in read mode. szFile is the name of the file we want to open. title[100] will be used to modify the title of the window (to show information about the AVI file).

    The first thing we need to do is call AVIFileInit(). This initializes the AVI file library (gets things ready for us).

    There are many ways to open an AVI file. I decided to use AVIStreamOpenFromFile(...). This opens a single stream from an AVI file (AVI files can contain multiple streams).

    The parameters are as follows: pavi is a pointer to a buffer that receives the new stream handle. szFile is of course, the name of the file we wish to open (complete with path). The third parameter is the type of stream we wish to open. In this project, we are only interested in the VIDEO stream (streamtypeVIDEO). The fourth parameter is 0. This means we want the first occurrence of streamtypeVIDEO (there can be multiple video streams in a single AVI file... we want the first stream). OF_READ means that we want to open the file for reading ONLY. The last parameter is a pointer to a class identifier of the handler you want to use. To be honest, I have no idea what it does. I let windows select it for me by passing NULL as the last parameter!

    If there are any errors while opening the file, a message box pops up letting you know that the stream could not be opened. I don't pass a PASS or FAIL back to the calling section of code, so if this fails, the program will try to keep running. Adding some type of error checking shouldn't take a lot of effort, I was too lazy :)
       

    void OpenAVI(LPCSTR szFile)      // Opens An AVI File (szFile)
    {
     TCHAR title[100];      // Will Hold The Modified Window Title

     AVIFileInit();       // Opens The AVIFile Library

     // Opens The AVI Stream
     if (AVIStreamOpenFromFile(&pavi, szFile, streamtypeVIDEO, 0, OF_READ, NULL) !=0)
     {
      // An Error Occurred Opening The Stream
      MessageBox (HWND_DESKTOP, "Failed To Open The AVI Stream", "Error", MB_OK | MB_ICONEXCLAMATION);
     }
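    As noted above, failure here only pops up a message box and execution continues. A minimal sketch of propagating the error instead (a hypothetical variant with made-up names, not the tutorial's code):

     // Hypothetical: return FALSE on failure so the caller can abort, e.g.
     // in Initialize(): if (!OpenAVIChecked("data/face2.avi")) return FALSE;
     BOOL OpenAVIChecked(LPCSTR szFile)
     {
      AVIFileInit();      // Open the AVIFile library
      if (AVIStreamOpenFromFile(&pavi, szFile, streamtypeVIDEO, 0, OF_READ, NULL) != 0)
       return FALSE;     // Let the caller bail out instead of limping on
      // ... remainder identical to the rest of OpenAVI() ...
      return TRUE;
     }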

       
    If we made it this far, it's safe to assume that the file was opened and a stream was located! Next we grab a bit of information from the AVI file with AVIStreamInfo(...).

    Earlier we created a structure called psi that will hold information about our AVI stream. We'll fill this structure with information about the AVI with the first line of code below. Everything from the width of the stream (in pixels) to the framerate of the animation is stored in psi. For those of you that want accurate playback speeds, make a note of what I just said. For more information look up AVIStreamInfo in the MSDN.

    We can calculate the width of a frame by subtracting the left border from the right border. The result should be an accurate width in pixels. For the height, we subtract the top of the frame from the bottom of the frame. This gives us the height in pixels.

    We then grab the last frame number from the AVI file using AVIStreamLength(...). This returns the number of frames of animation in the AVI file. The result is stored in lastframe.

    Calculating the framerate is fairly easy. Frames per second = psi.dwRate / psi.dwScale. The value returned should match the frame rate displayed when you right click on the AVI and check its properties. So what does this have to do with mpf you ask? When I first wrote the animation code, I tried using the frames per second to select the correct frame of animation. I ran into a problem... All of the videos played too fast! So I had a look at the video properties. The face2.avi file is 3.36 seconds long. The frame rate is 29.974 frames per second. The video has 91 frames of animation. If you multiply 3.36 by 29.974 you get roughly 100 frames of animation. Very odd!

    So, I decided to do things a little different. Instead of calculating the frames per second, I calculate how long each frame should be displayed. AVIStreamSampleToTime() converts a position in the animation to how many milliseconds it would take to get to that position. So we calculate how many milliseconds the entire video is by grabbing the time (in milliseconds) of the last frame (lastframe). We then divide the result by the total number of frames in the animation (lastframe). This gives us the amount of time each frame is displayed for in milliseconds. We store the result in mpf (milliseconds per frame). You could also calculate the milliseconds per frame by grabbing the amount of time for just 1 frame of animation with the following code: AVIStreamSampleToTime(pavi,1). Either way should work fine! Big thanks to Albert Chaulk for the idea!

    The reason I say rough milliseconds per frame is because mpf is an integer so any floating values will be rounded off.   
       

     AVIStreamInfo(pavi, &psi, sizeof(psi));    // Reads Information About The Stream Into psi
     width=psi.rcFrame.right-psi.rcFrame.left;   // Width Is Right Side Of Frame Minus Left
     height=psi.rcFrame.bottom-psi.rcFrame.top;   // Height Is Bottom Of Frame Minus Top

     lastframe=AVIStreamLength(pavi);    // The Last Frame Of The Stream

     mpf=AVIStreamSampleToTime(pavi,lastframe)/lastframe;  // Calculate Rough Milliseconds Per Frame

       
    Because OpenGL requires texture data to be a power of 2, and because most videos are 160x120, 320x240 or some other odd dimensions, we need a fast way to resize the video on the fly to a format that we can use as a texture. To do this, we take advantage of specific Windows Dib functions.

    The first thing we need to do is describe the type of image we want. To do this, we fill the bmih BitmapInfoHeader structure with our requested parameters. We start off by setting the size of the structure. We then set the bitplanes to 1. Three bytes of data works out to 24 bits (RGB). We want the image to be 256 pixels wide and 256 pixels tall and finally we want the data returned as UNCOMPRESSED RGB data (BI_RGB).

    CreateDIBSection creates a dib that we can directly write to. If everything goes well, hBitmap will be a handle to our new dib. hdc is a handle to a device context (DC). The second parameter is a pointer to our BitmapInfo structure. The structure contains information about the dib file as mentioned above. The third parameter (DIB_RGB_COLORS) specifies that the data is RGB values. data is a pointer to a variable that receives a pointer to the location of the DIB's bit values (whew, that was a mouthful). By setting the 5th value to NULL, memory is allocated for our DIB. Finally, the last parameter can be ignored (set to NULL).

    Quoted from the MSDN: The SelectObject function selects an object into the specified device context (DC).

    We have now created a DIB that we can directly draw to. Yay :)   
       

     bmih.biSize  = sizeof (BITMAPINFOHEADER);  // Size Of The BitmapInfoHeader
     bmih.biPlanes  = 1;     // Bitplanes
     bmih.biBitCount  = 24;     // Bits Format We Want (24 Bit, 3 Bytes)
     bmih.biWidth  = 256;     // Width We Want (256 Pixels)
     bmih.biHeight  = 256;     // Height We Want (256 Pixels)
     bmih.biCompression = BI_RGB;    // Requested Mode = RGB

     hBitmap = CreateDIBSection (hdc, (BITMAPINFO*)(&bmih), DIB_RGB_COLORS, (void**)(&data), NULL, NULL);
     SelectObject (hdc, hBitmap);     // Select hBitmap Into Our Device Context (hdc)

       
    A few more things to do before we're ready to read frames from the AVI. The next thing we have to do is prepare our program to decompress video frames from the AVI file. We do this with the AVIStreamGetFrameOpen(...) function.

    You can pass a structure similar to the one above as the second parameter to have a specific video format returned. Unfortunately, the only thing you can alter is the width and height of the returned image. The MSDN also mentions that you can pass AVIGETFRAMEF_BESTDISPLAYFMT to select the best display format. Oddly enough, my compiler had no definition for it.
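    As a sketch of that alternative (not what this tutorial does), you could pass the bmih header filled in earlier to request 256x256x24 output straight from the decoder; the call returns NULL if the decoder cannot honor the requested format:

     pgf=AVIStreamGetFrameOpen(pavi, &bmih);   // Hypothetical: ask the decoder for the format described by bmih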

    If everything goes well, a GETFRAME object is returned (which we need to read frames of data). If there are any problems, a message box will pop onto the screen telling you there was an error!   
       

     pgf=AVIStreamGetFrameOpen(pavi, NULL);    // Create The PGETFRAME Using Our Request Mode
     if (pgf==NULL)
     {
      // An Error Occurred Opening The Frame
      MessageBox (HWND_DESKTOP, "Failed To Open The AVI Frame", "Error", MB_OK | MB_ICONEXCLAMATION);
     }

       
    The code below prints the video's width, height and frame count to title. We display title at the top of the window with the command SetWindowText(...). Run the program in windowed mode to see what the code below does.
       

     // Information For The Title Bar (Width / Height / Last Frame)
     wsprintf (title, "NeHe's AVI Player: Width: %d, Height: %d, Frames: %d", width, height, lastframe);
     SetWindowText(g_window->hWnd, title);    // Modify The Title Bar
    }

       
    Now for the fun stuff... we grab a frame from the AVI and then convert it to a usable image size / color depth. lpbi will hold the BitmapInfoHeader information for the frame of animation. We accomplish a few things at once in the second line of code below. First we grab a frame of animation ... The frame we want is specified by frame. This will pull in the frame of animation and will fill lpbi with the header information for that frame.

    Now for the fun stuff... we need to point to the image data. To do this we need to skip over the header information (lpbi->biSize). One thing I didn't realize until I started writing this tut was that we also have to skip over any color information. To do this we also add colors used multiplied by the size of RGBQUAD (biClrUsed*sizeof(RGBQUAD)). After doing ALL of that :) we are left with a pointer to the image data (pdata).

    Now we need to convert the frame of animation to a usable texture size, and we also need to convert the data to RGB data. To do this, we use DrawDibDraw(...).

    A quick explanation. We can draw directly to our custom DIB. That's what DrawDibDraw(...) does. The first parameter is a handle to our DrawDib DC. The second parameter is a handle to the DC. Next we have the upper left corner (0,0) and the lower right corner (256,256) of the destination rectangle.

    lpbi is a pointer to the bitmapinfoheader information for the frame we just read. pdata is a pointer to the image data for the frame we just read.

    Then we have the upper left corner (0,0) of the source image (frame we just read) and the lower right corner of the frame we just read (width of the frame, height of the frame). The last parameter should be left at 0.

    This will convert an image of any size / color depth to a 256*256*24bit image.   
       

    void GrabAVIFrame(int frame)      // Grabs A Frame From The Stream
    {
     LPBITMAPINFOHEADER lpbi;     // Holds The Bitmap Header Information
     lpbi = (LPBITMAPINFOHEADER)AVIStreamGetFrame(pgf, frame); // Grab Data From The AVI Stream
     pdata=(char *)lpbi+lpbi->biSize+lpbi->biClrUsed * sizeof(RGBQUAD); // Pointer To Data Returned By AVIStreamGetFrame
              // (Skip The Header Info To Get To The Data)
     // Convert Data To Requested Bitmap Format
     DrawDibDraw (hdd, hdc, 0, 0, 256, 256, lpbi, pdata, 0, 0, width, height, 0);

       
    We have our frame of animation but the red and blue bytes are swapped. To solve this problem, we jump to our speedy flipIt(...) code. Remember, data is a pointer to a variable that receives a pointer to the location of the DIB's bit values. What that means is that after we call DrawDibDraw, data will point to the resized (256*256) / modified (24 bit) bitmap data.

    Originally I was updating the texture by recreating it for each frame of animation. I received a few emails suggesting that I use glTexSubImage2D(). After flipping through the OpenGL Red Book, I stumbled across the following quote: "Creating a texture may be more computationally expensive than modifying an existing one. In OpenGL Release 1.1, there are new routines to replace all or part of a texture image with new information. This can be helpful for certain applications, such as using real-time, captured video images as texture images. For that application, it makes sense to create a single texture and use glTexSubImage2D() to repeatedly replace the texture data with new video images".

    I personally didn't notice a huge speed increase, but on slower cards you might! The parameters for glTexSubImage2D() are as follows: Our target, which is a 2D texture (GL_TEXTURE_2D). The detail level (0), used for mipmapping. The x (0) and y (0) offset which tells OpenGL where to start copying to (0,0 is the lower left corner of the texture). Then we have the width and height of the image we wish to copy, which is 256 pixels wide and 256 pixels tall. GL_RGB is the format of our data. We are copying unsigned bytes. Finally... The pointer to our data which is represented by data. Very simple!

    Kevin Rogers Adds: I just wanted to point out another important reason to use glTexSubImage2D. Not only is it faster on many OpenGL implementations, but the target area does not need to be a power of 2. This is especially handy for video playback since the typical dimensions for a frame are rarely powers of 2 (often something like 320 x 200). This gives you the flexibility to play the video stream at its original aspect, rather than distorting / clipping each frame to fit your texture dimensions.

    It's important to note that you can NOT update a texture if you have not created the texture in the first place! We create the texture in the Initialize() code!

    I also wanted to mention... If you planned to use more than one texture in your project, make sure you bind the texture you want to update. If you don't bind the texture you may end up updating textures you didn't want updated!   
       

     flipIt(data);       // Swap The Red And Blue Bytes (GL Compatability)

     // Update The Texture
     glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, data);
    }

       
    The following section of code is called when the program exits. We close our DrawDib DC, and free allocated resources. We then release the AVI GetFrame resources. Finally we release the stream and then the file.   
       

    void CloseAVI(void)       // Properly Closes The Avi File
    {
     DeleteObject(hBitmap);      // Delete The Device Dependant Bitmap Object
     DrawDibClose(hdd);      // Closes The DrawDib Device Context
     AVIStreamGetFrameClose(pgf);     // Deallocates The GetFrame Resources
     AVIStreamRelease(pavi);      // Release The Stream
     AVIFileExit();       // Release The File
    }

       
    Initialization is pretty straightforward. We set the starting angle to 0. We then open the DrawDib library (which grabs a DC). If everything goes well, hdd becomes a handle to the newly created device context.

    Our clear screen color is black, depth testing is enabled, etc.

    We then create a new quadric. quadratic is the pointer to our new object. We set up smooth normals, and enable texture coordinate generation for our quadric.   
       

    BOOL Initialize (GL_Window* window, Keys* keys)    // Any GL Init Code & User Initialization Goes Here
    {
     g_window = window;
     g_keys  = keys;

     // Start Of User Initialization
     angle = 0.0f;       // Set Starting Angle To Zero
     hdd = DrawDibOpen();      // Grab A Device Context For Our Dib
     glClearColor (0.0f, 0.0f, 0.0f, 0.5f);    // Black Background
     glClearDepth (1.0f);      // Depth Buffer Setup
     glDepthFunc (GL_LEQUAL);     // The Type Of Depth Testing (Less Or Equal)
     glEnable(GL_DEPTH_TEST);     // Enable Depth Testing
     glShadeModel (GL_SMOOTH);     // Select Smooth Shading
     glHint (GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);  // Set Perspective Calculations To Most Accurate

     quadratic=gluNewQuadric();     // Create A Pointer To The Quadric Object
     gluQuadricNormals(quadratic, GLU_SMOOTH);   // Create Smooth Normals
     gluQuadricTexture(quadratic, GL_TRUE);    // Create Texture Coords

       
    In the next bit of code, we enable 2D texture mapping, we set the texture filters to GL_NEAREST (fast, but rough looking) and we set up sphere mapping (to create the environment mapping effect). Play around with the filters. If you have the power, try out GL_LINEAR for a smoother looking animation.

    After setting up our texture and sphere mapping, we open the .AVI file. I tried to keep things simple... can you tell :) The file we are going to open is called face2.avi... it's located in the data directory.

    The last thing we have to do is create our initial texture. We need to do this in order to use glTexSubImage2D() to update our texture in GrabAVIFrame().   
       

     glEnable(GL_TEXTURE_2D);     // Enable Texture Mapping
     glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);// Set Texture Max Filter
     glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);// Set Texture Min Filter

     glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  // Set The Texture Generation Mode For S To Sphere Mapping
     glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  // Set The Texture Generation Mode For T To Sphere Mapping

     OpenAVI("data/face2.avi");     // Open The AVI File

     // Create The Texture
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);

     return TRUE;       // Return TRUE (Initialization Successful)
    }

       
    When shutting down, we call CloseAVI(). This properly closes the AVI file, and releases any used resources.   
       

    void Deinitialize (void)      // Any User DeInitialization Goes Here
    {
     CloseAVI();       // Close The AVI File
    }

       
    This is where we check for key presses and update our rotation (angle) based on time passed. By now I shouldn't have to explain the code in detail. We check to see if the space bar is pressed. If it is, we increase the effect. We have three effects (cube, sphere, cylinder) and when the 4th effect is selected (effect=3) nothing is drawn... showing just the background scene! If we are on the 4th effect and space is pressed, we reset back to the first effect (effect=0). Yeah, I know I should have called it OBJECT :)

    We then check to see if the 'B' key is pressed; if it is, we toggle the background (bg) from ON to OFF or from OFF to ON.

    Environment mapping is done the same way. We check to see if 'E' is pressed. If it is, we toggle env from TRUE to FALSE or from FALSE to TRUE. Turning environment mapping off or on!

    The angle is increased by a tiny fraction each time Update() is called. I divide the time passed by 60.0f to slow things down a little.   
       

    void Update (DWORD milliseconds)     // Perform Motion Updates Here
    {
     if (g_keys->keyDown [VK_ESCAPE] == TRUE)   // Is ESC Being Pressed?
     {
      TerminateApplication (g_window);   // Terminate The Program
     }

     if (g_keys->keyDown [VK_F1] == TRUE)    // Is F1 Being Pressed?
     {
      ToggleFullscreen (g_window);    // Toggle Fullscreen Mode
     }

     if ((g_keys->keyDown [' ']) && !sp)    // Is Space Being Pressed And Not Held?
     {
      sp=TRUE;      // Set sp To True
      effect++;      // Change Effects (Increase effect)
      if (effect>3)      // Over Our Limit?
       effect=0;     // Reset Back To 0
     }

     if (!g_keys->keyDown[' '])     // Is Space Released?
      sp=FALSE;      // Set sp To False

     if ((g_keys->keyDown ['B']) && !bp)    // Is 'B' Being Pressed And Not Held?
     {
      bp=TRUE;      // Set bp To True
      bg=!bg;       // Toggle Background Off/On
     }

     if (!g_keys->keyDown['B'])     // Is 'B' Released?
      bp=FALSE;      // Set bp To False

     if ((g_keys->keyDown ['E']) && !ep)    // Is 'E' Being Pressed And Not Held?
     {
      ep=TRUE;      // Set ep To True
      env=!env;      // Toggle Environment Mapping Off/On
     }

     if (!g_keys->keyDown['E'])     // Is 'E' Released?
      ep=FALSE;      // Set ep To False

     angle += (float)(milliseconds) / 60.0f;    // Update angle Based On The Timer

       
    In the original tutorial, all AVI files were played at the same speed. Since then, the tutorial has been rewritten to play the video at the correct speed. next is increased by the number of milliseconds that have passed since this section of code was last called. If you remember earlier in the tutorial, we calculated how long each frame should be displayed in milliseconds (mpf). To calculate the current frame, we take the amount of time that has passed (next) and divide it by the time each frame is displayed for (mpf).

    After that, we check to make sure that the current frame of animation hasn't passed the last frame of the video. If it has, frame is reset to zero, the animation timer (next) is reset to 0, and the animation starts over.

    The code below will drop frames if your computer is running too slowly, or if another application is hogging the CPU. If you want every frame to be displayed no matter how slow the user's computer is, you could check to see if next is greater than mpf; if it is, you would reset next to 0 and increase frame by one. Either way will work, although the code below is better for faster machines.

    If you feel energetic, try adding rewind, fast forward, pause or reverse play!   
       

     next+= milliseconds;      // Increase next Based On Timer (Milliseconds)
     frame=next/mpf;       // Calculate The Current Frame

     if (frame>=lastframe)      // Have We Gone Past The Last Frame?
     {
      frame=0;      // Reset The Frame Back To Zero (Start Of Video)
      next=0;       // Reset The Animation Timer (next)
     }
    }

       
    Now for the drawing code :) We clear the screen and depth buffer. We then grab a frame of animation. Again, I tried to keep it simple! You pass the requested frame (frame) to GrabAVIFrame(). Pretty simple! Of course, if you wanted multiple AVI's, you would have to pass a texture ID. (More for you to do).   
       

    void Draw (void)       // Draw Our Scene
    {
     glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear Screen And Depth Buffer

     GrabAVIFrame(frame);      // Grab A Frame From The AVI

       
    The code below checks to see if we want to draw a background image. If bg is TRUE, we reset the modelview matrix and draw a single texture mapped quad (mapped with a frame from the AVI video) large enough to fill the entire screen. The quad is drawn 20 units into the screen so it appears behind the object (further in the distance).
       

     if (bg)        // Is Background Visible?
     {
      glLoadIdentity();     // Reset The Modelview Matrix
      glBegin(GL_QUADS);     // Begin Drawing The Background (One Quad)
       // Front Face
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 11.0f,  8.3f, -20.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-11.0f,  8.3f, -20.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-11.0f, -8.3f, -20.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 11.0f, -8.3f, -20.0f);
      glEnd();      // Done Drawing The Background
     }

       
    After drawing the background (or not), we reset the modelview matrix (starting us back at the center of the screen). We then translate 10 units into the screen.

    After that, we check to see if env is TRUE. If it is, we enable sphere mapping to create the environment mapping effect.   
       

     glLoadIdentity ();      // Reset The Modelview Matrix
     glTranslatef (0.0f, 0.0f, -10.0f);    // Translate 10 Units Into The Screen

     if (env)       // Is Environment Mapping On?
     {
      glEnable(GL_TEXTURE_GEN_S);    // Enable Texture Coord Generation For S (NEW)
      glEnable(GL_TEXTURE_GEN_T);    // Enable Texture Coord Generation For T (NEW)
     }
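
    For the sphere mapping to actually work, the texture coordinate generation mode itself is presumably set up once during initialization (that code is not shown in this excerpt); a typical setup looks like:

     glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  // Sphere-Map The S Coordinate
     glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  // Sphere-Map The T Coordinate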

       
I added the code below at the last minute. It rotates on the x-axis and y-axis (based on the value of angle) and then translates 2 units on the z-axis. This moves us away from the center of the screen. If you remove the three lines of code below, the object will spin in the center of the screen. With the three lines of code, the objects move around a bit as they spin :)

    If you don't understand rotations and translations... you shouldn't be reading this tutorial :)   
       

     glRotatef(angle*2.3f,1.0f,0.0f,0.0f);    // Throw In Some Rotations To Move Things Around A Bit
     glRotatef(angle*1.8f,0.0f,1.0f,0.0f);    // Throw In Some Rotations To Move Things Around A Bit
     glTranslatef(0.0f,0.0f,2.0f);     // After Rotating Translate To New Position

       
    The code below checks to see which effect (object) we want to draw. If the value of effect is 0, we do a few rotations and then draw a cube. The rotations keep the cube spinning on the x-axis, y-axis and z-axis. By now, you should have the code to create a cube burned into your head :)   
       

     switch (effect)       // Which Effect?
     {
     case 0:        // Effect 0 - Cube
      glRotatef (angle*1.3f, 1.0f, 0.0f, 0.0f);  // Rotate On The X-Axis By angle
      glRotatef (angle*1.1f, 0.0f, 1.0f, 0.0f);  // Rotate On The Y-Axis By angle
      glRotatef (angle*1.2f, 0.0f, 0.0f, 1.0f);  // Rotate On The Z-Axis By angle
      glBegin(GL_QUADS);     // Begin Drawing A Cube
       // Front Face
       glNormal3f( 0.0f, 0.0f, 0.5f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
       // Back Face
       glNormal3f( 0.0f, 0.0f,-0.5f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
       // Top Face
       glNormal3f( 0.0f, 0.5f, 0.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
       // Bottom Face
       glNormal3f( 0.0f,-0.5f, 0.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
       // Right Face
       glNormal3f( 0.5f, 0.0f, 0.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f);
       // Left Face
       glNormal3f(-0.5f, 0.0f, 0.0f);
       glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);
       glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f);
       glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);
       glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f);
      glEnd();      // Done Drawing Our Cube
      break;       // Done Effect 0

       
This is where we draw the sphere. We start off with a few quick rotations on the x-axis, y-axis and z-axis. We then draw the sphere. The sphere will have a radius of 1.3f, with 20 slices and 20 stacks. I decided to use 20 because I didn't want the sphere to be perfectly smooth. Using fewer slices and stacks gives the sphere a rougher look (less smooth), making it semi-obvious that the sphere is actually rotating when sphere mapping is enabled. Try playing around with the values! It's important to note that more slices or stacks require more processing power!   
       

     case 1:        // Effect 1 - Sphere
      glRotatef (angle*1.3f, 1.0f, 0.0f, 0.0f);  // Rotate On The X-Axis By angle
      glRotatef (angle*1.1f, 0.0f, 1.0f, 0.0f);  // Rotate On The Y-Axis By angle
      glRotatef (angle*1.2f, 0.0f, 0.0f, 1.0f);  // Rotate On The Z-Axis By angle
      gluSphere(quadratic,1.3f,20,20);   // Draw A Sphere
      break;       // Done Drawing Sphere

       
This is where we draw the cylinder. We start off with some simple rotations on the x-axis, y-axis and z-axis. Our cylinder has a base and top radius of 1.0f units. It's 3.0f units high, and is composed of 32 slices and 32 stacks. If you decrease the slices or stacks, the cylinder will be made up of fewer polygons and will appear less rounded.

Before we draw the cylinder, we translate -1.5f units on the z-axis. By doing this, our cylinder will rotate around its center point. The general rule for centering a cylinder is to divide its height by 2 and translate by the result in a negative direction on the z-axis. If you have no idea what I'm talking about, take out the glTranslatef(...) line below. The cylinder will rotate around its base, instead of its center point. (A tiny helper illustrating this rule follows the switch below.)   
       

     case 2:        // Effect 2 - Cylinder
      glRotatef (angle*1.3f, 1.0f, 0.0f, 0.0f);  // Rotate On The X-Axis By angle
      glRotatef (angle*1.1f, 0.0f, 1.0f, 0.0f);  // Rotate On The Y-Axis By angle
      glRotatef (angle*1.2f, 0.0f, 0.0f, 1.0f);  // Rotate On The Z-Axis By angle
      glTranslatef(0.0f,0.0f,-1.5f);    // Center The Cylinder
      gluCylinder(quadratic,1.0f,1.0f,3.0f,32,32);  // Draw A Cylinder
      break;       // Done Drawing Cylinder
     }
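
    A tiny helper illustrating the cylinder-centering rule described above (a sketch, not part of this tutorial's code): translate by minus half of whatever height you pass to gluCylinder, and the cylinder will spin about its middle.

     void DrawCenteredCylinder(GLUquadricObj *quad, float radius, float height) // Hypothetical Helper
     {
      glTranslatef(0.0f, 0.0f, -height/2.0f);   // Shift Back By Half The Height
      gluCylinder(quad, radius, radius, height, 32, 32);  // Cylinder Now Spins About Its Center
     }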

       
    Next we check to see if env is TRUE. If it is, we disable sphere mapping. We call glFlush() to flush out the rendering pipeline (makes sure everything gets rendered before we draw the next frame).   
       

     if (env)       // Environment Mapping Enabled?
     {
      glDisable(GL_TEXTURE_GEN_S);    // Disable Texture Coord Generation For S (NEW)
      glDisable(GL_TEXTURE_GEN_T);    // Disable Texture Coord Generation For T (NEW)
     }

     glFlush ();       // Flush The GL Rendering Pipeline
    }

       
    I hope you enjoyed this tutorial. It's 2:00am at the moment... I've been working on this tut for the last 6 hours. Sounds crazy, but writing things so that they actually make sense is not an easy task. I have read the tut 3 times now and I'm still trying to make things easier to understand. Believe it or not, it's important to me that you understand how things work and why they work. That's why I babble endlessly, why I over-comment, etc.

    Anyways... I would love to hear some feedback about this tut. If you find mistakes or you would like to help make the tut better, please contact me. As I said, this is my first attempt at AVI. Normally I wouldn't write a tut on a subject I just learned, but my excitement got the best of me, plus the fact that there's very little information on the subject bothered me. What I'm hoping is that I'll open the door to a flood of higher quality AVI demos and example code! Might happen... might not. Either way, the code is here for you to use however you want!

    Huge thanks to Fredster for the face AVI file. Face was one of about 6 AVI animations he sent to me for use in my tutorial. No questions asked, no conditions. I emailed him and he went out of his way to help me out... Huge respect!

    An even bigger thanks to Jonathan de Blok. If it wasn't for him, this tutorial would not exist. He got me interested in the AVI format by sending me bits of code from his own personal AVI player. He also went out of his way to answer any questions that I had in regards to his code. It's important to note that nothing was borrowed or taken from his code, it was used only to understand how an AVI player works. My player opens, decodes and plays AVI files using very different code!

Thanks to everyone for the great support! This site would be nothing without its visitors!!!

    Jeff Molofee (NeHe)

Lesson 36

Radial Blur And Rendering To A Texture:

How do you create a radial filter effect? It looks hard, but it's actually simple: grab the rendered image as a texture, then use OpenGL's own texture filtering to smear it back over the scene. Don't believe it? Try it yourself.

      
       
       
Hi, I'm Dario Corno, also known as rIo of SpinningKids. First of all, I want to explain why I decided to write this little tutorial. I have been a scener since 1989, and I want you to download a few demos so you understand what a demo is and what demo effects are.
Demos are made to show off hardcore, sometimes brutal, coding as well as artistic skill. You can usually find some really killer effects in today's demos. This won't be a killer-effect tutorial, but the end result is very cool! You can find a huge collection of demos at http://www.pouet.net and http://ftp.scene.org.
Now that the introduction is out of the way, we can go on with the tutorial.
I will explain how to do an eye candy effect that looks like radial blur. Sometimes it's referred to as volumetric lighting; don't believe it, it's just a fake radial blur ;D
Radial blur is usually done by blurring every pixel of the original image in a direction opposite the center of the blur.
With today's hardware it is quite difficult to do the blurring by hand using the color buffer (at least in a way that is supported by all graphics cards), so we need a little trick to achieve the same effect.
As a bonus, while learning the radial blur effect you will also learn how to render to a texture the easy way.
I decided to use a spring as the shape in this tutorial because it's a cool shape, and I'm tired of cubes :}
It's important to note that this tutorial is more a guideline on how to create the effect. I don't explain the code in great detail; you should know most of it by heart :}
Below are the variable definitions and headers used.

      
       

#include <math.h>        // We Need Some Math

float  angle;       // Used To Rotate The Helix
float  vertexes[3][3];      // Holds Float Info For 3 Sets Of Vertices
float  normal[3];      // An Array To Store The Normal Data
GLuint  BlurTexture;      // An Unsigned Int To Store The Texture Number

       
The function EmptyTexture() creates an empty texture and returns that texture's ID. We just allocate some free space (exactly 128 * 128 * 4 unsigned integers).
128 * 128 is the size of the texture (128 pixels wide and tall); the 4 means that for every pixel we want 4 bytes to store the RED, GREEN, BLUE and ALPHA components.
      
       

GLuint EmptyTexture()       // Create An Empty Texture
{
 GLuint txtnumber;       // Texture ID
 unsigned int* data;      // Stored Data

 // Create Storage Space For Texture Data (128x128x4)
 data = (unsigned int*)new GLuint[((128 * 128)* 4 * sizeof(unsigned int))];

       
After allocating the space we zero it with the ZeroMemory function, passing the pointer (data) and the size of the memory to be zeroed.
Another semi-important thing to note is that we set the magnification and minification filters to GL_LINEAR. We will be stretching this texture hard, and GL_NEAREST looks quite bad when stretched.
      
       

 ZeroMemory(data,((128 * 128)* 4 * sizeof(unsigned int))); // Clear Storage Memory

 glGenTextures(1, &txtnumber);    // Create 1 Texture
 glBindTexture(GL_TEXTURE_2D, txtnumber);   // Bind The Texture
 glTexImage2D(GL_TEXTURE_2D, 0, 4, 128, 128, 0,
  GL_RGBA, GL_UNSIGNED_BYTE, data);   // Build Texture Using Information In data
 glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
 glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);

 delete [] data;      // Release data

 return txtnumber;      // Return The Texture ID
}
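
    EmptyTexture() is presumably called once during initialization (the init code is not part of this excerpt), along the lines of:

     BlurTexture = EmptyTexture();    // Create The Empty 128x128 Texture We Will Blur Into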

       
This function simply normalizes the length of a normal vector. Vectors are expressed as arrays of 3 floats, where the first element represents X, the second Y and the third Z. A normalized vector Vn is given by Vn = (Vox/|Vo|, Voy/|Vo|, Voz/|Vo|), where Vo is the original vector and |Vo| is its modulus (or length). In code: compute the length of the original vector as sqrt(x^2 + y^2 + z^2), then divide each component by that length.
      
       

void ReduceToUnit(float vector[3])     // Reduces A Normal Vector (3 Coordinates)
{        // To A Unit Normal Vector With A Length Of One
 float length;      // Holds The Length
 // Calculate The Length Of The Vector
 length = (float)sqrt((vector[0]*vector[0]) + (vector[1]*vector[1]) + (vector[2]*vector[2]));

 if(length == 0.0f)      // Prevents A Divide By 0 Error
  length = 1.0f;     // By Using 1 For Vectors Too Close To 0

 vector[0] /= length;     // Dividing Each Element By
 vector[1] /= length;      // The Length Results In A
 vector[2] /= length;      // Unit Normal Vector
}

       
The following routine calculates the normal given the 3 vertices of a triangle (each an array of 3 floats). It takes two parameters: v[3][3] and out[3]. The first parameter is a 3x3 matrix of floats where every row is one vertex of the triangle; out is where we put the resulting normal vector.
A bit of (easy) math. We use the famous cross product: by definition, the cross product is an operation between two vectors that returns a third vector orthogonal to both. The normal is the vector orthogonal to a surface, pointing away from it (and usually of normalized length). Imagine the two vectors lie along two sides of a triangle; then the vector orthogonal to those two sides (computed with the cross product) is exactly the normal of that triangle.
Harder to explain than to do.
We start by finding the vector going from vertex 0 to vertex 1, and the vector from vertex 1 to vertex 2. This is done by subtracting each component of one vertex from the corresponding component of the next. Now we have the vectors for the triangle's sides, and by taking their cross product we get the normal of that triangle.
Let's see the code.
v[0][] is the first vertex, v[1][] the second, v[2][] the third. Every vertex holds: v[][0] the x coordinate, v[][1] the y coordinate, v[][2] the z coordinate.
By simply subtracting every coordinate of one vertex from that of the next we get the VECTOR between them. v1[0] = v[0][0] - v[1][0] computes the X component of the vector going from vertex 0 to vertex 1, v1[1] = v[0][1] - v[1][1] computes the Y component, v1[2] = v[0][2] - v[1][2] computes the Z component, and so on.
Now we have two vectors, so we compute their cross product to get the normal of the triangle.
The formula for the cross product is:
    out[x] = v1[y] * v2[z] - v1[z] * v2[y]

    out[y] = v1[z] * v2[x] - v1[x] * v2[z]

    out[z] = v1[x] * v2[y] - v1[y] * v2[x]


We finally have the normal of the triangle in out[].

      
       

void calcNormal(float v[3][3], float out[3])    // Calculates A Quad Normal Using 3 Points
{
 float v1[3],v2[3];      // Vector 1 (x,y,z) & Vector 2 (x,y,z)
 static const int x = 0;     // Define X Coord
 static const int y = 1;     // Define Y Coord
 static const int z = 2;     // Define Z Coord

 // Find The Vector Between 2 Points By Subtracting
 // The x,y,z Coordinates Of One Point From Another

 // Calculate The Vector From Point 1 To Point 0
 v1[x] = v[0][x] - v[1][x];
 v1[y] = v[0][y] - v[1][y];
 v1[z] = v[0][z] - v[1][z];
 // Calculate The Vector From Point 2 To Point 1
 v2[x] = v[1][x] - v[2][x];
 v2[y] = v[1][y] - v[2][y];
 v2[z] = v[1][z] - v[2][z];
 // Compute The Cross Product To Give Us A Surface Normal
 out[x] = v1[y]*v2[z] - v1[z]*v2[y];
 out[y] = v1[z]*v2[x] - v1[x]*v2[z];
 out[z] = v1[x]*v2[y] - v1[y]*v2[x];

 ReduceToUnit(out);      // Normalize The Vector
}

       
The following routine just sets up a point of view using gluLookAt. We place the eye at (0, 5, 50), looking at (0, 0, 0), with the UP vector pointing up (0, 1, 0)! :D
       

void ProcessHelix()       // Draws A Helix
{
 GLfloat x;       // Helix x Coordinate
 GLfloat y;       // Helix y Coordinate
 GLfloat z;       // Helix z Coordinate
 GLfloat phi;       // Angle
 GLfloat theta;       // Angle
 GLfloat v,u;       // Angles
 GLfloat r;       // Radius Of The Twist
 int twists = 5;       // 5 Twists

 GLfloat glfMaterialColor[]={0.4f,0.2f,0.8f,1.0f};   // Set The Material Color
 GLfloat specular[]={1.0f,1.0f,1.0f,1.0f};    // Sets Up Specular Lighting

 glLoadIdentity();       // Reset The Modelview Matrix
 gluLookAt(0, 5, 50, 0, 0, 0, 0, 1, 0);    // Eye Position (0,5,50), Center Of Scene (0,0,0), Up On The Y Axis

 glPushMatrix();       // Push The Modelview Matrix

 glTranslatef(0,0,-50);      // Translate 50 Units Into The Screen
 glRotatef(angle/2.0f,1,0,0);     // Rotate By angle/2 On The X-Axis
 glRotatef(angle/3.0f,0,1,0);     // Rotate By angle/3 On The Y-Axis

 glMaterialfv(GL_FRONT_AND_BACK,GL_AMBIENT_AND_DIFFUSE,glfMaterialColor);
 glMaterialfv(GL_FRONT_AND_BACK,GL_SPECULAR,specular);

       
We then evaluate the helix formula and render the spring. It's quite simple, and I won't explain it, because it isn't the main goal of this tutorial. The helix code was borrowed (and optimized a bit) from our friends at Listen Software. It is written the simple way, not the fastest way; using vertex arrays would make it faster!
       

 r=1.5f;       // Radius

 glBegin(GL_QUADS);      // Begin Drawing Quads
 for(phi=0; phi <= 360; phi+=20.0)    // 360 Degrees In Steps Of 20
 {
  for(theta=0; theta<=360*twists; theta+=20.0)  // 360 Degrees * Number Of Twists, In Steps Of 20
  {
   v=(phi/180.0f*3.142f);   // Calculate Angle Of First Point ( 0 )
   u=(theta/180.0f*3.142f);   // Calculate Angle Of First Point ( 0 )

   x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (1st Point)
   y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (1st Point)
   z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (1st Point)

   vertexes[0][0]=x;    // Set x Value Of First Vertex
   vertexes[0][1]=y;    // Set y Value Of First Vertex
   vertexes[0][2]=z;    // Set z Value Of First Vertex

   v=(phi/180.0f*3.142f);   // Calculate Angle Of Second Point ( 0 )
   u=((theta+20)/180.0f*3.142f);  // Calculate Angle Of Second Point ( 20 )

   x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (2nd Point)
   y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (2nd Point)
   z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (2nd Point)

   vertexes[1][0]=x;    // Set x Value Of Second Vertex
   vertexes[1][1]=y;    // Set y Value Of Second Vertex
   vertexes[1][2]=z;    // Set z Value Of Second Vertex

   v=((phi+20)/180.0f*3.142f);   // Calculate Angle Of Third Point ( 20 )
   u=((theta+20)/180.0f*3.142f);  // Calculate Angle Of Third Point ( 20 )

   x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (3rd Point)
   y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (3rd Point)
   z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (3rd Point)

   vertexes[2][0]=x;    // Set x Value Of Third Vertex
   vertexes[2][1]=y;    // Set y Value Of Third Vertex
   vertexes[2][2]=z;    // Set z Value Of Third Vertex

   v=((phi+20)/180.0f*3.142f);   // Calculate Angle Of Fourth Point ( 20 )
   u=((theta)/180.0f*3.142f);   // Calculate Angle Of Fourth Point ( 0 )

   x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (4th Point)
   y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (4th Point)
   z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (4th Point)

   vertexes[3][0]=x;    // Set x Value Of Fourth Vertex
   vertexes[3][1]=y;    // Set y Value Of Fourth Vertex
   vertexes[3][2]=z;    // Set z Value Of Fourth Vertex

   calcNormal(vertexes,normal);  // Calculate The Quad Normal

   glNormal3f(normal[0],normal[1],normal[2]); // Set The Normal

   // Render The Quad
   glVertex3f(vertexes[0][0],vertexes[0][1],vertexes[0][2]);
   glVertex3f(vertexes[1][0],vertexes[1][1],vertexes[1][2]);
   glVertex3f(vertexes[2][0],vertexes[2][1],vertexes[2][2]);
   glVertex3f(vertexes[3][0],vertexes[3][1],vertexes[3][2]);
  }
 }
 glEnd();       // Done Rendering Quads

 glPopMatrix();      // Pop The Matrix
}

       
These two routines (ViewOrtho and ViewPerspective) were written to make it easy to draw in an orthographic view and then get back to perspective rendering with ease.
ViewOrtho selects the projection matrix and pushes a copy of the current projection matrix onto the OpenGL stack. The identity matrix is then loaded, and an orthographic view with the current screen resolution is set up.
That way it is possible to draw using 2D coordinates, with 0,0 in the upper-left corner of the screen and 640,480 in the lower-right corner.
Finally, the modelview matrix is activated for rendering.
ViewPerspective selects the projection matrix mode and pops back the non-orthographic matrix that ViewOrtho pushed onto the stack. The modelview matrix is then selected so we can render as usual.
I suggest you keep these two procedures; being able to render in 2D without worrying about the projection matrix is very handy.
       

void ViewOrtho()       // Set Up An Ortho View
{
 glMatrixMode(GL_PROJECTION);    // Select The Projection Matrix
 glPushMatrix();      // Push The Matrix
 glLoadIdentity();      // Reset The Matrix
 glOrtho( 0, 640 , 480 , 0, -1, 1 );    // Select Ortho Mode (640x480)
 glMatrixMode(GL_MODELVIEW);     // Select The Modelview Matrix
 glPushMatrix();      // Push The Matrix
 glLoadIdentity();      // Reset The Matrix
}

void ViewPerspective()       // Set Up A Perspective View
{
 glMatrixMode( GL_PROJECTION );     // Select The Projection Matrix
 glPopMatrix();       // Pop The Matrix
 glMatrixMode( GL_MODELVIEW );     // Select The Modelview Matrix
 glPopMatrix();       // Pop The Matrix
}

       
Now it's time to explain how the fake radial blur effect is done.
We need to draw the scene so it appears blurred in all directions starting from the center, and the trick is doing this without a major performance hit. We can't read and write pixels directly, and if we want compatibility with ordinary video cards, we can't use extensions or driver-specific commands.
Time to give up?
No, the solution is quite easy: OpenGL gives us the ability to "blur" textures. Ok... not really blur, but if we scale a texture using linear filtering, the result (with a bit of imagination) looks like gaussian blur.
So what would happen if we put a lot of stretched textures right on top of the 3D scene and scaled them?
The answer is simpler than you might think: a radial blur effect!
Problem one: rendering to a texture.
This is easy to solve on pixel formats that have a back buffer; rendering to a texture without one is a real pain on the eyes.
Rendering to a texture takes just one function. We draw our object, then copy the result (before swapping the back and front buffers) into a texture using glCopyTexImage2D.
Problem two: fitting the texture exactly in front of the 3D object.
We know that if we change the viewport without setting the right perspective, we get a stretched rendering of our object. For example, if we set a really wide viewport, we get a vertically stretched rendering.
The solution is first to set a viewport that is square like our texture (128x128). After rendering our object to the texture, we render the texture to the screen at the current screen resolution. This way OpenGL shrinks the object to fit into the texture, and when we stretch the texture to the full size of the screen, OpenGL resizes it to fit perfectly over top of our 3D object. Hopefully I haven't lost anyone. Another quick example: if you took a 640x480 screenshot and resized it to a 256x256 bitmap, you could load that bitmap as a texture and stretch it to fit a 640x480 screen. The quality would not be as good, but the texture should line up pretty close to the original 640x480 image.
On to the fun stuff! This function is really easy and is one of my preferred "design tricks". It sets a viewport whose size matches our BlurTexture dimensions (128x128). It then calls the routine that renders the spring; the spring will be stretched to fit the 128x128 texture because of the viewport.
After the spring is rendered into the 128x128 viewport, we bind BlurTexture and copy the color buffer from the viewport into BlurTexture with glCopyTexImage2D.
The parameters are as follows:
GL_TEXTURE_2D indicates we are using a 2-dimensional texture; 0 is the mipmap level we want to copy the buffer into (the default level is 0); GL_LUMINANCE indicates the format of the data to be copied. I used GL_LUMINANCE because the final result looks better; this way the luminance part of the buffer is copied to the texture. Other options are GL_ALPHA, GL_RGB, GL_INTENSITY and more.
The next two parameters tell OpenGL where to start copying from (0,0). The width and height (128,128) are how many pixels to copy from left to right and how many to copy up and down. The last parameter is only used if we want a border, which we don't.
Now that we have a copy of the color buffer (with the stretched spring) in our BlurTexture, we can clear the buffer and set the viewport back to the proper dimensions (640x480, fullscreen).
IMPORTANT:
This trick can only be used with double-buffered pixel formats. The reason is that all these operations are hidden from the viewer (done on the back buffer).
      
       

void RenderToTexture()      // Renders To A Texture
{
 glViewport(0,0,128,128);     // Set Our Viewport (Match Texture Size)

 ProcessHelix();      // Render The Helix

 glBindTexture(GL_TEXTURE_2D,BlurTexture);   // Bind To The Blur Texture

 // Copy Our Viewport To The Blur Texture (From 0,0 To 128,128... No Border)
 glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 0, 0, 128, 128, 0);

 glClearColor(0.0f, 0.0f, 0.5f, 0.5);    // Set The Clear Color To Medium Blue
 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear The Screen And Depth Buffer

 glViewport(0 , 0,640 ,480);     // Set The Viewport Back (0,0 To 640x480)
}
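
    Since EmptyTexture() already allocated the texture's storage, later frames could also be copied with glCopyTexSubImage2D, which updates the existing texture instead of redefining it every frame. A minimal sketch of that variation (an alternative, not this tutorial's code):

     glBindTexture(GL_TEXTURE_2D, BlurTexture);   // Bind The Blur Texture
     glCopyTexSubImage2D(GL_TEXTURE_2D, 0,    // Mipmap Level 0
      0, 0,        // Destination Offset Inside The Texture (0,0)
      0, 0, 128, 128);      // Copy 128x128 Pixels From The Viewport Origin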

       
The DrawBlur function simply draws some blended quads in front of our 3D scene, using the BlurTexture we grabbed before. This way, playing a bit with alpha and scaling the texture, we get something that really looks like radial blur.
I first disable GEN_S and GEN_T (I'm addicted to sphere mapping, so my routines usually enable these :P ).
We enable 2D texturing, disable depth testing, set the proper blend function, enable blending and bind the BlurTexture.
The next thing we do is switch to an ortho view; that way it's easier to draw quads that perfectly fit the screen size. This is how we line up the texture over top of the 3D object (by stretching it to match the screen ratio). This is where problem two gets solved!
      
       

void DrawBlur(int times, float inc)     // Draw The Blurred Image
{
 float spost = 0.0f;     // Starting Texture Coordinate Offset
 float alphainc = 0.9f / times;    // Fade Speed For Alpha Blending
 float alpha = 0.2f;     // Starting Alpha Value

 // Disable Automatic Texture Coordinate Generation
 glDisable(GL_TEXTURE_GEN_S);
 glDisable(GL_TEXTURE_GEN_T);

 glEnable(GL_TEXTURE_2D);     // Enable 2D Texture Mapping
 glDisable(GL_DEPTH_TEST);     // Disable Depth Testing
 glBlendFunc(GL_SRC_ALPHA,GL_ONE);    // Set The Blending Mode
 glEnable(GL_BLEND);     // Enable Blending
 glBindTexture(GL_TEXTURE_2D,BlurTexture);   // Bind To The Blur Texture
 ViewOrtho();      // Switch To An Ortho View

 alphainc = alpha / times;     // alphainc = 0.2f / Number Of Passes

       
We draw the texture many times to create the radial effect, scaling the texture coordinates and lowering the alpha value on every pass. We draw 25 quads, stretching the texture by inc (0.02f in our Draw call) each time.  
       

 glBegin(GL_QUADS);      // Begin Drawing Quads
  for (int num = 0;num < times;num++)   // Number Of Times To Render The Blur
  {
   glColor4f(1.0f, 1.0f, 1.0f, alpha);  // Set The Alpha Value (Starts At 0.2)
   glTexCoord2f(0+spost,1-spost);   // Texture Coordinate ( 0, 1 )
   glVertex2f(0,0);    // First Vertex ( 0, 0 )

   glTexCoord2f(0+spost,0+spost);   // Texture Coordinate ( 0, 0 )
   glVertex2f(0,480);    // Second Vertex ( 0, 480 )

   glTexCoord2f(1-spost,0+spost);   // Texture Coordinate ( 1, 0 )
   glVertex2f(640,480);    // Third Vertex ( 640, 480 )

   glTexCoord2f(1-spost,1-spost);   // Texture Coordinate ( 1, 1 )
   glVertex2f(640,0);    // Fourth Vertex ( 640, 0 )

   spost += inc;    // Gradually Increase spost (Zooming Closer To The Texture Center)
   alpha = alpha - alphainc;   // Gradually Decrease alpha (Fading The Image Out)
  }
 glEnd();       // Done Drawing Quads

 ViewPerspective();      // Switch Back To A Perspective View

 glEnable(GL_DEPTH_TEST);     // Enable Depth Testing
 glDisable(GL_TEXTURE_2D);     // Disable 2D Texture Mapping
 glDisable(GL_BLEND);     // Disable Blending
 glBindTexture(GL_TEXTURE_2D,0);    // Unbind The Blur Texture
}

       
And voila, the shortest Draw routine ever seen, with a great-looking effect!
We call RenderToTexture. Thanks to the viewport change, this renders the stretched spring once; the stretched spring is copied into our texture, and the buffers are cleared.
We then draw the "real" spring (the 3D object you see on screen) by calling ProcessHelix().
Finally, we draw some blended quads in front of the spring. The textured quads will be stretched to fit exactly over top of the real 3D spring.

      
       

void Draw (void)       // Draw The Scene
{
 glClearColor(0.0f, 0.0f, 0.0f, 0.5);    // Set The Clear Color To Black
 glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear The Screen And Depth Buffer
 glLoadIdentity();      // Reset The View
 RenderToTexture();      // Render To A Texture
 ProcessHelix();      // Draw Our Helix
 DrawBlur(25,0.02f);     // Draw The Blur Effect
 glFlush ();      // Flush The GL Rendering Pipeline
}

       
I hope you enjoyed this tutorial. It really doesn't teach much more than rendering to a texture, but it's an interesting effect to add to your 3D programs.
If you have any comments or suggestions, or if you know a better way to implement this effect, contact me at rio@spinningkids.org.
I also want to leave you with a list of things to do (homework) :D
1) Modify the DrawBlur routine to get a horizontal blur, a vertical blur and some other good effects (twirl blur!).
2) Play with the DrawBlur parameters (add, remove) to build a good routine to sync with your music.
3) Play around with the DrawBlur parameters and a SMALL texture using GL_LUMINANCE (funky shininess!).
4) Try faking volumetric shadows using dark textures instead of luminance ones!
Ok, that should be all for now.
Visit my site at http://www.spinningkids.org/rio for more upcoming tutorials.

    Lesson 36
       
    Hi! I'm Dario Corno, also known as rIo of SpinningKids. First of all, I want to explain why I decided to write this little tutorial. I have been a scener since 1989. I want all of you to download some demos so you understand what a demo is and what demo effects are.

Demos are done to show off hardcore and sometimes brutal coding as well as artistic skill. You can usually find some really killer effects in today's demos! This won't be a killer effect tutorial, but the end result is very cool! You can find a huge collection of demos at http://www.pouet.net and http://ftp.scene.org.

    Now that the introduction is out of the way, we can go on with the tutorial...

    I will explain how to do an eye candy effect (used in demos) that looks like radial blur. Sometimes it's referred to as volumetric lights, don't believe it, it's just a fake radial blur! ;D

    Radial blur was usually done (when there were only software renderers) by blurring every pixel of the original image in a direction opposite the center of the blur.

With today's hardware it is quite difficult to do blurring by hand using the color buffer (at least in a way that is supported by all the gfx cards), so we need to do a little trick to achieve the same effect.

    As a bonus while learning the radial blur effect, you will also learn how to render to a texture the easy way!

    I decided to use a spring as the shape in this tutorial because it's a cool shape, and I'm tired of cubes :)

    It's important to note that this tutorial is more a guideline on how to create the effect. I don't go into great detail explaining the code. You should know most of it off by heart :)

    Below are the variable definitions and includes used:   
       

    #include <math.h>       // We'll Need Some Math

    float  angle;       // Used To Rotate The Helix
    float  vertexes[3][3];      // An Array Of 3 Floats To Store The Vertex Data
    float  normal[3];      // An Array To Store The Normal Data
    GLuint  BlurTexture;      // An Unsigned Int To Store The Texture Number

       
The function EmptyTexture() creates an empty texture and returns the number of that texture. We just allocate some free space (exactly 128 * 128 * 4 unsigned integers).

128 * 128 is the size of the texture (128 pixels wide and tall), the 4 means that for every pixel we want 4 bytes to store the RED, GREEN, BLUE and ALPHA components.   
       

    GLuint EmptyTexture()       // Create An Empty Texture
    {
     GLuint txtnumber;      // Texture ID
     unsigned int* data;      // Stored Data

     // Create Storage Space For Texture Data (128x128x4)
     data = (unsigned int*)new GLuint[((128 * 128)* 4 * sizeof(unsigned int))];

       
    After allocating space we zero it using the ZeroMemory function, passing the pointer (data) and the size of memory to be "zeroed".

A semi-important thing to note is that we set the magnification and minification methods to GL_LINEAR. That's because we will be stretching our texture, and GL_NEAREST looks quite bad if stretched.   
       

     ZeroMemory(data,((128 * 128)* 4 * sizeof(unsigned int))); // Clear Storage Memory

     glGenTextures(1, &txtnumber);     // Create 1 Texture
     glBindTexture(GL_TEXTURE_2D, txtnumber);   // Bind The Texture
     glTexImage2D(GL_TEXTURE_2D, 0, 4, 128, 128, 0,
      GL_RGBA, GL_UNSIGNED_BYTE, data);   // Build Texture Using Information In data
     glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
     glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);

     delete [] data;       // Release data

     return txtnumber;      // Return The Texture ID
    }

       
This function simply normalizes the length of a normal vector. Vectors are expressed as arrays of 3 elements of type float, where the first element represents X, the second Y and the third Z. A normalized vector Vn is given by Vn = (Vox / |Vo| , Voy / |Vo|, Voz / |Vo|), where Vo is the original vector, |Vo| is the modulus (or length) of that vector, and x,y,z are its components. Doing it "digitally": calculate the length of the original vector as sqrt(x^2 + y^2 + z^2), where x,y,z are the 3 components of the vector, then divide each component by that length.   
       

    void ReduceToUnit(float vector[3])     // Reduces A Normal Vector (3 Coordinates)
    {         // To A Unit Normal Vector With A Length Of One.
     float length;       // Holds Unit Length
     // Calculates The Length Of The Vector
     length = (float)sqrt((vector[0]*vector[0]) + (vector[1]*vector[1]) + (vector[2]*vector[2]));

     if(length == 0.0f)      // Prevents Divide By 0 Error By Providing
  length = 1.0f;      // An Acceptable Value For Vectors Too Close To 0.

     vector[0] /= length;      // Dividing Each Element By
     vector[1] /= length;      // The Length Results In A
     vector[2] /= length;      // Unit Normal Vector.
    }

       
The following routine calculates the normal given 3 vertices (always in the 3 float array). We have two parameters: v[3][3] and out[3]. Of course, the first parameter is a matrix of floats with m=3 and n=3 where every line is a vertex of the triangle. out is the place where we'll put the resulting normal vector.

A bit of (easy) math. We are going to use the famous cross product. By definition, the cross product is an operation between two vectors that returns a third vector orthogonal to the two original vectors. The normal is the vector orthogonal to a surface, pointing away from that surface (and usually with a normalized length). Imagine now that the two vectors above lie along two sides of a triangle; then the orthogonal vector (calculated with the cross product) of those two sides is exactly the normal of that triangle.

    Harder to explain than to do.

    We will start finding the vector going from vertex 0 to vertex 1, and the vector from vertex 1 to vertex 2, this is basically done by (brutally) subtracting each component of each vertex from the next. Now we got the vectors for our triangle sides. By doing the cross product (vXw) we get the normal vector for that triangle.

    Let's see the code.

    v[0][] is the first vertex, v[1][] is the second vertex, v[2][] is the third vertex. Every vertex has: v[][0] the x coordinate of that vertex, v[][1] the y coord of that vertex, v[][2] the z coord of that vertex.

By simply subtracting every coord of one vertex from the next we get the VECTOR from this vertex to the next. v1[0] = v[0][0] - v[1][0] calculates the X component of the VECTOR going from vertex 0 to vertex 1; v1[1] = v[0][1] - v[1][1] calculates the Y component; v1[2] = v[0][2] - v[1][2] calculates the Z component, and so on...

    Now we have the two VECTORS, so let's calculate the cross product of them to get the normal of the triangle.

    The formula for the cross product is:

    out[x] = v1[y] * v2[z] - v1[z] * v2[y]

    out[y] = v1[z] * v2[x] - v1[x] * v2[z]

out[z] = v1[x] * v2[y] - v1[y] * v2[x]

We finally have the normal of the triangle in out[].   
       

    void calcNormal(float v[3][3], float out[3])    // Calculates Normal For A Quad Using 3 Points
    {
     float v1[3],v2[3];      // Vector 1 (x,y,z) & Vector 2 (x,y,z)
     static const int x = 0;      // Define X Coord
     static const int y = 1;      // Define Y Coord
     static const int z = 2;      // Define Z Coord

     // Finds The Vector Between 2 Points By Subtracting
     // The x,y,z Coordinates From One Point To Another.

     // Calculate The Vector From Point 1 To Point 0
 v1[x] = v[0][x] - v[1][x];     // Vector 1.x = Vertex[0].x - Vertex[1].x
 v1[y] = v[0][y] - v[1][y];     // Vector 1.y = Vertex[0].y - Vertex[1].y
 v1[z] = v[0][z] - v[1][z];     // Vector 1.z = Vertex[0].z - Vertex[1].z
 // Calculate The Vector From Point 2 To Point 1
 v2[x] = v[1][x] - v[2][x];     // Vector 2.x = Vertex[1].x - Vertex[2].x
 v2[y] = v[1][y] - v[2][y];     // Vector 2.y = Vertex[1].y - Vertex[2].y
 v2[z] = v[1][z] - v[2][z];     // Vector 2.z = Vertex[1].z - Vertex[2].z
     // Compute The Cross Product To Give Us A Surface Normal
     out[x] = v1[y]*v2[z] - v1[z]*v2[y];    // Cross Product For Y - Z
     out[y] = v1[z]*v2[x] - v1[x]*v2[z];    // Cross Product For X - Z
     out[z] = v1[x]*v2[y] - v1[y]*v2[x];    // Cross Product For X - Y

     ReduceToUnit(out);      // Normalize The Vectors
    }
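
    A quick sanity check of calcNormal (illustrative values, not from the tutorial): a triangle lying flat in the z=0 plane should produce a normal pointing straight along the z-axis.

     float tri[3][3] = { {0,0,0}, {1,0,0}, {1,1,0} };  // Three Vertices In The XY Plane
     float n[3];       // Receives The Normal
     calcNormal(tri, n);      // n Becomes (0, 0, 1)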

       
    The following routine just sets up a point of view using gluLookAt. We set a point of view placed at 0, 5, 50 that is looking to 0, 0, 0 and that has the UP vector looking UP (0, 1, 0)! :D   
       

    void ProcessHelix()       // Draws A Helix
    {
     GLfloat x;       // Helix x Coordinate
     GLfloat y;       // Helix y Coordinate
     GLfloat z;       // Helix z Coordinate
     GLfloat phi;       // Angle
     GLfloat theta;       // Angle
     GLfloat v,u;       // Angles
     GLfloat r;       // Radius Of Twist
     int twists = 5;       // 5 Twists

     GLfloat glfMaterialColor[]={0.4f,0.2f,0.8f,1.0f};  // Set The Material Color
     GLfloat specular[]={1.0f,1.0f,1.0f,1.0f};   // Sets Up Specular Lighting

     glLoadIdentity();      // Reset The Modelview Matrix
     gluLookAt(0, 5, 50, 0, 0, 0, 0, 1, 0);    // Eye Position (0,5,50) Center Of Scene (0,0,0)
             // Up On Y Axis.
     glPushMatrix();       // Push The Modelview Matrix

     glTranslatef(0,0,-50);      // Translate 50 Units Into The Screen
     glRotatef(angle/2.0f,1,0,0);     // Rotate By angle/2 On The X-Axis
     glRotatef(angle/3.0f,0,1,0);     // Rotate By angle/3 On The Y-Axis

     glMaterialfv(GL_FRONT_AND_BACK,GL_AMBIENT_AND_DIFFUSE,glfMaterialColor);
     glMaterialfv(GL_FRONT_AND_BACK,GL_SPECULAR,specular);

       
We then calculate the helix formula and render the spring. It's quite simple, but I won't explain it, because it isn't the main goal of this tutorial. The helix code was borrowed (and optimized a bit) from our friends at Listen Software. This is written the simple way, and is not the fastest method. Using vertex arrays would make it faster (a tiny illustration follows the routine below)!   
       

     r=1.5f;        // Radius

     glBegin(GL_QUADS);      // Begin Drawing Quads
 for(phi=0; phi<=360; phi+=20.0)     // 360 Degrees In Steps Of 20
     {
      for(theta=0; theta<=360*twists; theta+=20.0)  // 360 Degrees * Number Of Twists In Steps Of 20
      {
       v=(phi/180.0f*3.142f);    // Calculate Angle Of First Point (  0 )
       u=(theta/180.0f*3.142f);   // Calculate Angle Of First Point (  0 )

       x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (1st Point)
       y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (1st Point)
       z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (1st Point)

       vertexes[0][0]=x;    // Set x Value Of First Vertex
       vertexes[0][1]=y;    // Set y Value Of First Vertex
       vertexes[0][2]=z;    // Set z Value Of First Vertex

       v=(phi/180.0f*3.142f);    // Calculate Angle Of Second Point (  0 )
       u=((theta+20)/180.0f*3.142f);   // Calculate Angle Of Second Point ( 20 )

       x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (2nd Point)
       y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (2nd Point)
       z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (2nd Point)

       vertexes[1][0]=x;    // Set x Value Of Second Vertex
       vertexes[1][1]=y;    // Set y Value Of Second Vertex
       vertexes[1][2]=z;    // Set z Value Of Second Vertex

       v=((phi+20)/180.0f*3.142f);   // Calculate Angle Of Third Point ( 20 )
       u=((theta+20)/180.0f*3.142f);   // Calculate Angle Of Third Point ( 20 )

       x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (3rd Point)
       y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (3rd Point)
       z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (3rd Point)

       vertexes[2][0]=x;    // Set x Value Of Third Vertex
       vertexes[2][1]=y;    // Set y Value Of Third Vertex
       vertexes[2][2]=z;    // Set z Value Of Third Vertex

       v=((phi+20)/180.0f*3.142f);   // Calculate Angle Of Fourth Point ( 20 )
       u=((theta)/180.0f*3.142f);   // Calculate Angle Of Fourth Point (  0 )

       x=float(cos(u)*(2.0f+cos(v) ))*r;  // Calculate x Position (4th Point)
       y=float(sin(u)*(2.0f+cos(v) ))*r;  // Calculate y Position (4th Point)
       z=float((( u-(2.0f*3.142f)) + sin(v) ) * r); // Calculate z Position (4th Point)

       vertexes[3][0]=x;    // Set x Value Of Fourth Vertex
       vertexes[3][1]=y;    // Set y Value Of Fourth Vertex
       vertexes[3][2]=z;    // Set z Value Of Fourth Vertex

       calcNormal(vertexes,normal);   // Calculate The Quad Normal

       glNormal3f(normal[0],normal[1],normal[2]); // Set The Normal

       // Render The Quad
       glVertex3f(vertexes[0][0],vertexes[0][1],vertexes[0][2]);
       glVertex3f(vertexes[1][0],vertexes[1][1],vertexes[1][2]);
       glVertex3f(vertexes[2][0],vertexes[2][1],vertexes[2][2]);
       glVertex3f(vertexes[3][0],vertexes[3][1],vertexes[3][2]);
      }
     }
     glEnd();       // Done Rendering Quads

     glPopMatrix();       // Pop The Matrix
    }
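
    A tiny illustration of the vertex-array idea mentioned above (a sketch using legacy OpenGL client-side arrays, not this tutorial's code): store the vertices once, then draw them in a single call instead of issuing one glVertex3f per corner.

     static GLfloat quadVerts[] = {     // 4 Corners * 3 Floats
      -1.0f,-1.0f,0.0f,  1.0f,-1.0f,0.0f,
       1.0f, 1.0f,0.0f, -1.0f, 1.0f,0.0f };
     glEnableClientState(GL_VERTEX_ARRAY);    // Enable Vertex Arrays
     glVertexPointer(3, GL_FLOAT, 0, quadVerts);   // Point OpenGL At Our Data
     glDrawArrays(GL_QUADS, 0, 4);     // Draw All 4 Vertices In One Call
     glDisableClientState(GL_VERTEX_ARRAY);    // Back To Immediate Mode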

       
These two routines (ViewOrtho and ViewPerspective) were coded to make it easy to draw in an orthogonal way and get back to perspective rendering with ease.

    ViewOrtho simply sets the projection matrix, then pushes a copy of the actual projection matrix onto the OpenGL stack. The identity matrix is then loaded and an orthographic view with the current screen resolution is set up.

    This way it is possible to draw using 2D coordinates with 0,0 in the upper left corner of the screen and with 640,480 in the lower right corner of the screen.

    Finally, the modelview matrix is activated for rendering stuff.

ViewPerspective sets up projection matrix mode and pops back the non-orthogonal matrix that ViewOrtho pushed onto the stack. The modelview matrix is then selected so we can render stuff.

    I suggest you keep these two procedures, it's nice being able to render in 2D without having to worry about the projection matrix!   
       

    void ViewOrtho()       // Set Up An Ortho View
    {
     glMatrixMode(GL_PROJECTION);     // Select Projection
     glPushMatrix();       // Push The Matrix
     glLoadIdentity();      // Reset The Matrix
     glOrtho( 0, 640 , 480 , 0, -1, 1 );    // Select Ortho Mode (640x480)
     glMatrixMode(GL_MODELVIEW);     // Select Modelview Matrix
     glPushMatrix();       // Push The Matrix
     glLoadIdentity();      // Reset The Matrix
    }

    void ViewPerspective()       // Set Up A Perspective View
    {
     glMatrixMode( GL_PROJECTION );     // Select Projection
     glPopMatrix();       // Pop The Matrix
     glMatrixMode( GL_MODELVIEW );     // Select Modelview
     glPopMatrix();       // Pop The Matrix
    }
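
    The typical usage pattern (exactly what DrawBlur does below) is to bracket any 2D drawing between the two calls, so the perspective projection is restored afterwards:

     ViewOrtho();       // Switch To 2D (0,0 Top-Left ... 640,480 Bottom-Right)
     // ... Draw 2D Overlay Quads Here ...
     ViewPerspective();      // Back To The 3D Projection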

       
    Now it's time to explain how the fake radial blur effect is done:

    We need to draw the scene so it appears blurred in all directions starting from the center. The trick is doing this without a major performance hit. We can't read and write pixels, and if we want compatibility with non kick-butt video cards, we can't use extensions or driver specific commands.

    Time to give up... ?

    No, the solution is quite easy, OpenGL gives us the ability to "blur" textures. Ok... Not really blurring, but if we scale a texture using linear filtering, the result (with a bit of imagination) looks like gaussian blur.

    So what would happen if we put a lot of stretched textures right on top of the 3D scene and scaled them?

    The answer is simple... A radial blur effect!

    There are two problems: How do we create the texture realtime and how do we place the texture exactly in front of the 3D object?

    The solutions are easier than you may think!

    Problem ONE: Rendering To A Texture

    The problem is easy to solve on pixel formats that have a back buffer. Rendering to a texture without a back buffer can be a real pain on the eyes!

Rendering to texture is achieved with just one function! We need to draw our object and then copy the result (BEFORE SWAPPING THE BACK BUFFER WITH THE FRONT BUFFER) to a texture using the glCopyTexImage2D function.

    Problem TWO: Fitting The Texture Exactly In Front Of The 3D Object

    We know that, if we change the viewport without setting the right perspective, we get a stretched rendering of our object. For example if we set a viewport really wide we get a vertically stretched rendering.

    The solution is first to set a viewport that is square like our texture (128x128). After rendering our object to the texture, we render the texture to the screen using the current screen resolution. This way OpenGL reduces the object to fit into the texture, and when we stretch the texture to the full size of the screen, OpenGL resizes the texture to fit perfectly over top of our 3d object. Hopefully I haven't lost anyone. Another quick example... If you took a 640x480 screenshot, and then resized the screenshot to a 256x256 bitmap, you could load that bitmap as a texture and stretch it to fit on a 640x480 screen. The quality would not be as good, but the texture should line up pretty close to the original 640x480 image.

    On to the fun stuff! This function is really easy and is one of my preferred "design tricks". It sets a viewport with a size that matches our BlurTexture dimensions (128x128). It then calls the routine that renders the spring. The spring will be stretched to fit the 128*128 texture because of the viewport (128x128 viewport).

After the spring is rendered to fit the 128x128 viewport, we bind to the BlurTexture and copy the colour buffer from the viewport to the BlurTexture using glCopyTexImage2D.

    The parameters are as follows:

    GL_TEXTURE_2D indicates that we are using a 2Dimensional texture, 0 is the mip map level we want to copy the buffer to, the default level is 0, GL_LUMINANCE indicates the format of the data to be copied. I used GL_LUMINANCE because the final result looks better, this way the luminance part of the buffer will be copied to the texture. Other parameters could be GL_ALPHA, GL_RGB, GL_INTENSITY and more.

The next 2 parameters tell OpenGL where to start copying from (0,0). The width and height (128,128) is how many pixels to copy from left to right and how many to copy up and down. The last parameter is only used if we want a border, which we don't.

    Now that we have a copy of the colour buffer (with the stretched spring) in our BlurTexture we can clear the buffer and set the viewport back to the proper dimensions (640x480 - fullscreen).

    IMPORTANT:

    This trick can be used only with double buffered pixel formats. The reason why is because all these operations are hidden from the viewer (done on the back buffer).   
       

    void RenderToTexture()       // Renders To A Texture
    {
     glViewport(0,0,128,128);     // Set Our Viewport (Match Texture Size)

     ProcessHelix();       // Render The Helix

     glBindTexture(GL_TEXTURE_2D,BlurTexture);   // Bind To The Blur Texture

     // Copy Our ViewPort To The Blur Texture (From 0,0 To 128,128... No Border)
     glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 0, 0, 128, 128, 0);

     glClearColor(0.0f, 0.0f, 0.5f, 0.5);    // Set The Clear Color To Medium Blue
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear The Screen And Depth Buffer

     glViewport(0 , 0,640 ,480);     // Set Viewport (0,0 to 640x480)
    }

       
    The DrawBlur function simply draws some blended quads in front of our 3D scene, using the BlurTexture we got before. This way, playing a bit with alpha and scaling the texture, we get something that really looks like radial blur.

    I first disable GEN_S and GEN_T (I'm addicted to sphere mapping, so my routines usually enable these instructions :P ).

    We enable 2D texturing, disable depth testing, set the proper blend function, enable blending and then bind the BlurTexture.

The next thing we do is switch to an ortho view, that way it's easier to draw a quad that perfectly fits the screen size. This is how we line up the texture over top of the 3D object (by stretching the texture to match the screen ratio). This is where problem two is resolved!   
       

    void DrawBlur(int times, float inc)     // Draw The Blurred Image
    {
     float spost = 0.0f;      // Starting Texture Coordinate Offset
     float alphainc = 0.9f / times;     // Fade Speed For Alpha Blending
     float alpha = 0.2f;      // Starting Alpha Value

     // Disable AutoTexture Coordinates
     glDisable(GL_TEXTURE_GEN_S);
     glDisable(GL_TEXTURE_GEN_T);

     glEnable(GL_TEXTURE_2D);     // Enable 2D Texture Mapping
     glDisable(GL_DEPTH_TEST);     // Disable Depth Testing
     glBlendFunc(GL_SRC_ALPHA,GL_ONE);    // Set Blending Mode
     glEnable(GL_BLEND);      // Enable Blending
     glBindTexture(GL_TEXTURE_2D,BlurTexture);   // Bind To The Blur Texture
     ViewOrtho();       // Switch To An Ortho View

     alphainc = alpha / times;     // alphainc=0.2f / Times To Render Blur

       
We draw the texture many times to create the radial effect, scaling the texture coordinates and lowering the alpha value every time we do another pass. We draw 25 quads, stretching the texture by inc (0.02f in our Draw call) each time.   
       

     glBegin(GL_QUADS);      // Begin Drawing Quads
      for (int num = 0;num < times;num++)   // Number Of Times To Render Blur
      {
       glColor4f(1.0f, 1.0f, 1.0f, alpha);  // Set The Alpha Value (Starts At 0.2)
       glTexCoord2f(0+spost,1-spost);   // Texture Coordinate (   0,   1 )
       glVertex2f(0,0);    // First Vertex  (   0,   0 )

       glTexCoord2f(0+spost,0+spost);   // Texture Coordinate (   0,   0 )
       glVertex2f(0,480);    // Second Vertex (   0, 480 )

       glTexCoord2f(1-spost,0+spost);   // Texture Coordinate (   1,   0 )
       glVertex2f(640,480);    // Third Vertex  ( 640, 480 )

       glTexCoord2f(1-spost,1-spost);   // Texture Coordinate (   1,   1 )
       glVertex2f(640,0);    // Fourth Vertex ( 640,   0 )

       spost += inc;     // Gradually Increase spost (Zooming Closer To Texture Center)
       alpha = alpha - alphainc;   // Gradually Decrease alpha (Gradually Fading Image Out)
      }
     glEnd();       // Done Drawing Quads

     ViewPerspective();      // Switch To A Perspective View

     glEnable(GL_DEPTH_TEST);     // Enable Depth Testing
     glDisable(GL_TEXTURE_2D);     // Disable 2D Texture Mapping
     glDisable(GL_BLEND);      // Disable Blending
     glBindTexture(GL_TEXTURE_2D,0);     // Unbind The Blur Texture
    }

       
    And voila', this is the shortest Draw routine ever seen, with a great looking effect!

    We call the RenderToTexture function. This renders the stretched spring once thanks to our viewport change. The stretched spring is rendered to our texture, and the buffers are cleared.

    We then draw the "REAL" spring (the 3D object you see on the screen) by calling ProcessHelix( ).

    Finally, we draw some blended quads in front of the spring. The textured quads will be stretched to fit over top of the REAL 3D spring.   
       

    void Draw (void)       // Draw The Scene
    {
     glClearColor(0.0f, 0.0f, 0.0f, 0.5);    // Set The Clear Color To Black
     glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear Screen And Depth Buffer
     glLoadIdentity();      // Reset The View
     RenderToTexture();      // Render To A Texture
     ProcessHelix();       // Draw Our Helix
     DrawBlur(25,0.02f);      // Draw The Blur Effect
     glFlush ();       // Flush The GL Rendering Pipeline
    }

       
    I hope you enjoyed this tutorial, it really doesn't teach much other than rendering to a texture, but it's definitely an interesting effect to add to your 3d programs.

If you have any comments, suggestions, or if you know of a better way to implement this effect, contact me at rio@spinningkids.org.

    You are free to use this code however you want in productions of your own, but before you RIP it, give it a look and try to understand what it does, that's the only way ripping is allowed! Also, if you use this code, please, give me some credit!

I also want to leave you all with a list of things to do (homework) :D

1) Modify the DrawBlur routine to get a horizontal blur, a vertical blur and some more good effects (twirl blur!). See the sketch after this list for one way to start.
2) Play with the DrawBlur parameters (add, remove) to get a good routine to sync with your music.
3) Play around with DrawBlur params and a SMALL texture using GL_LUMINANCE (Funky Shininess!).
4) Try superfake volumetric shadows using dark textures instead of luminance ones!
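
    For homework #1, one possible starting point (an assumption, not the tutorial's code) is to offset the texture coordinates along X only inside DrawBlur's quad loop, which smears the image horizontally instead of radially:

     glTexCoord2f(0.0f+spost, 1.0f); glVertex2f(  0,  0);  // Stretch U Only, Keep V Fixed
     glTexCoord2f(0.0f+spost, 0.0f); glVertex2f(  0,480);
     glTexCoord2f(1.0f-spost, 0.0f); glVertex2f(640,480);
     glTexCoord2f(1.0f-spost, 1.0f); glVertex2f(640,  0);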

    Ok, that should be all for now.

Visit my site (and the SpinningKids one) for more upcoming tutorials: http://www.spinningkids.org/rio.

    Dario Corno (rIo)

    Jeff Molofee (NeHe)
