-- Author: 一分之千 -- Posted: 10/25/2007 9:38:00 AM --

[Recommended] NeHe OpenGL Tutorials (Chinese and English, with VC++ source): Lesson 35 - Lesson 36
Lesson 35

How do you play an AVI file in OpenGL? Using the Windows API, each frame can be bound to OpenGL as a texture. It is slow, but the effect is quite good. Give it a try.

The first page I found was an article by Jonathan Nix titled "AVI Files", at http://www.gamedev.net/reference/programming/features/avifile/. Thanks to Jonathan for writing such a good piece on the AVI format. Although I do things differently, his code snippets and clear comments made learning very easy! The second site is "The AVI Overview" by John F. McGowan, Ph.D. I could go on about how amazing John's page is, but it is better if you see it for yourself. The URL is http://www.jmcgowan.com/avi.html. The site covers almost everything related to the AVI format. Thanks to John for making such a useful site.

Lastly, I want to mention that I did not borrow or copy any code. My code was written over three days, using what I learned from the sites and articles above. Which is to say, my code may not be the best way to play an AVI file, and it may not even be the correct way, but it works and it is easy to use. If you dislike the code or my coding style, or feel my remarks hurt the whole programming community, you have the following options: 1) find alternative resources on the net, 2) write your own AVI player, or 3) write a better article. Anyone visiting this site should know by now that I am just an intermediate programmer (a point I have made at the start of many articles on this site)! I code for my own amusement. The goal of this site is to make it easier for non-elite programmers to get started with OpenGL. These articles are only about how I achieved a few particular effects... nothing more.

On to the code. The first thing to notice is that we include and link against the Video for Windows header and library. Many thanks to Microsoft (I can't believe I said that). The library makes opening and playing AVI files very easy. For now, all you need to know is that you must include the header vfw.h and link against vfw32.lib if you want your code to compile :)

#include <vfw.h>	// Video For Windows header
#pragma comment( lib, "opengl32.lib" )

float angle;	// used for rotation
AVISTREAMINFO psi;	// structure containing stream info
GLUquadricObj *quadratic;	// storage for our quadric object
HDRAWDIB hdd;	// handle for our DrawDib DIB

After complaining about Microsoft :) I decided to add a note! I am not bashing Microsoft because the RGB data is stored backwards. I just find it odd that it is called RGB when in the file it is actually stored as BGR!

A note: this is related to "little endian" and "big endian". Intel and Intel-compatible processors use little endian, where the least significant byte (LSB) is stored first. OpenGL came from Silicon Graphics machines, which use big endian, so the standard OpenGL bitmap format is big endian. That is how I understand it.

Great! So the first version of this player was garbage! My solution was to swap the data with a loop. It worked, but it was slow. I then replaced GL_RGB with GL_BGR_EXT in the texture generation code; the speed shot up and the colors displayed correctly! Problem solved... or so I thought! It then turned out that some OpenGL drivers do not support GL_BGR_EXT... :(

After discussing it with my good friend Maxwell Sayles, he recommended swapping the data with assembly code. One minute later he had sent me the code below over ICQ! It may not be optimal, but it is fast and effective!

Each frame of the animation is stored in a buffer. The image is 256 pixels wide, 256 pixels tall, one byte per color (3 bytes per pixel). The code below scans through the buffer and swaps the red and blue bytes. Red is stored at ebx+0 and blue at ebx+2. We step forward 3 bytes at a time (because one pixel is 3 bytes) and keep scanning until all the data has been swapped.

Some of you don't like using assembly, so I felt it was necessary to explain it in this lesson. I had originally planned to use GL_BGR_EXT; it works, but not all cards support it! I then used the XOR swap method, which works on every machine but is not terribly fast. With assembly the swap is very fast. Considering that we are dealing with real-time video, you want the fastest swap possible. Weighing the options, assembly is the best choice! If you have a better way, by all means use your own! I am not telling you how you must do things, only how I did them. I also explain the code in detail, so if you want to substitute better code, you know exactly what this code does, and you can make your own code easier to optimize later.

void flipIt(void* buffer)	// swaps the red and blue bytes (256x256)
	add ebx,3	// move through the data 3 bytes at a time

There are many ways to open an AVI file. I use AVIStreamOpenFromFile(...). It opens a single stream from an AVI file (AVI files can contain multiple streams). Its parameters are as follows: pavi is a pointer to a buffer that receives the stream handle; szFile is the name of the file to open (including the path). The third parameter is the type of stream to open; in this project we only care about the video stream (streamtypeVIDEO). The fourth parameter is 0, meaning we want the first video stream found (a single AVI file can contain several video streams, and we want the first). OF_READ means the file is opened read-only. The last parameter is a pointer to a class identifier for the handler. Honestly, I have no idea what it does. I let Windows pick by passing NULL.

void OpenAVI(LPCSTR szFile)	// opens the AVI file szFile
	AVIFileInit();	// opens the AVI file library
	// open the AVI stream

We compute the frame width by subtracting the left edge from the right edge; the result is an exact width in pixels. For the height, subtract the top edge from the bottom edge, giving the height in pixels.

Then AVIStreamLength(...) gets the number of the last frame in the AVI file. AVIStreamLength(...) returns the index of the last frame of the animation. The result is stored in lastframe.

Calculating the frame rate is simple: frames per second = psi.dwRate / psi.dwScale. The returned value should match the playback rate shown when you right-click the AVI and view its properties. So what does this have to do with mpf, you ask? When I first wrote this code I tried using fps to select the correct frame. I ran into a problem... the video played too fast! So I looked at the video properties. The file face2.avi is 3.36 seconds long with a frame rate of 29.974 fps, and the video has 91 frames. But 3.36 * 29.974 = 100.71. Very odd!!

So I took a slightly different approach. Instead of calculating the frame rate, I calculate how long each frame should be displayed. AVIStreamSampleToTime() converts a position in the animation into the number of milliseconds needed to reach that position. So by finding the time of the last frame, we get the playing time of the whole animation. Dividing that by the total number of frames (lastframe) gives the display time per frame in milliseconds. The result is stored in mpf (milliseconds per frame). You could also compute the milliseconds per frame by getting the time of just one frame, with the code: AVIStreamSampleToTime(pavi,1). Either way works! Many thanks to Albert Chaulk for the idea!

I say the milliseconds per frame is rough because mpf is an integer, so any fractional value gets rounded off.

AVIStreamInfo(pavi, &psi, sizeof(psi));	// read stream info into psi
lastframe=AVIStreamLength(pavi);	// index of the last frame
mpf=AVIStreamSampleToTime(pavi,lastframe)/lastframe;	// rough value of mpf

CreateDIBSection creates a device-independent bitmap (DIB) we can write to directly. If all goes well, hBitmap points to the DIB's bit values. hdc is a handle to the device context (DC). The second parameter is a pointer to a BitmapInfo structure, which holds the information about the DIB mentioned above. The third parameter (DIB_RGB_COLORS) specifies that the data is RGB values. data is a pointer to a pointer to the location of the DIB's bit values (whew, what a mouthful). The fifth parameter is set to NULL, so memory is allocated for our DIB. Finally, the last parameter can be ignored (set to NULL).

Quoted from the MSDN: the SelectObject function selects an object into the device context (DC).
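Put together, the DIB setup just described plausibly looks like the sketch below. This is a hedged reconstruction from the parameter descriptions above (the bmih field values follow the 256x256x24-bit uncompressed format discussed in this lesson), not a verbatim copy of the lesson's source:

// hedged reconstruction: create a writable 256x256 24-bit DIB section
hdc = CreateCompatibleDC(0);                // memory DC compatible with the screen
hdd = DrawDibOpen();                        // DrawDib handle, used for frame conversion later
bmih.biSize        = sizeof(BITMAPINFOHEADER);
bmih.biPlanes      = 1;                     // one bitplane
bmih.biBitCount    = 24;                    // 3 bytes per pixel (RGB)
bmih.biWidth       = 256;                   // 256 pixels wide
bmih.biHeight      = 256;                   // 256 pixels tall
bmih.biCompression = BI_RGB;                // uncompressed RGB data
hBitmap = CreateDIBSection(hdc, (BITMAPINFO*)(&bmih), DIB_RGB_COLORS, (void**)(&data), NULL, NULL);
SelectObject(hdc, hBitmap);                 // select the DIB into the DC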
Now we have created a DIB we can draw to directly. Yay :)

bmih.biSize = sizeof (BITMAPINFOHEADER);	// size of the BitmapInfoHeader
hBitmap = CreateDIBSection (hdc, (BITMAPINFO*)(&bmih), DIB_RGB_COLORS, (void**)(&data), NULL, NULL);

If all goes well, a GETFRAME object is returned (used to read frame data). If there is a problem, a message box pops up on the screen telling you there was an error!

pgf=AVIStreamGetFrameOpen(pavi, NULL);	// create the PGETFRAME using the requested mode
// title bar info (width / height / frames)

We also have to convert each frame of the animation to a size the texture can use, and convert the data to RGB data. This is done with DrawDibDraw(...).

A rough explanation: we can draw directly to our custom DIB, and that is what DrawDibDraw(...) does. The first parameter is a handle to our DrawDib DC. The second is a handle to the DC. Next come the upper-left corner (0,0) and lower-right corner (256,256) of the destination rectangle. lpbi points to the bitmapinfoheader of the frame just read, and pdata is a pointer to the image data of the frame just read. Then we set the upper-left corner of the source image (the frame just read) to (0,0) and its lower-right corner to (frame width, frame height). The last parameter should be 0. This method converts an image of any size and color depth into a 256x256x24-bit image.

void GrabAVIFrame(int frame)	// grabs a frame from the stream

I personally did not notice a big speed increase; maybe you will on a low-end card. The parameters of glTexSubImage2D() are: the target, a 2D texture (GL_TEXTURE_2D); the detail level (0), used for mipmapping; x (0) and y (0), which tell OpenGL where to start copying (0,0 is the lower-left corner of the texture); then the width and height of the image we are copying, 256 pixels wide and 256 pixels tall; GL_RGB, our data format; we are copying unsigned bytes; and finally... the image data pointer, data. Very simple!

Kevin Rogers adds: I want to point out another important reason to use glTexSubImage2D(). Not only is it fast on many OpenGL implementations, but the target area does not have to be a power of 2. That is handy for video playback, since frame dimensions are usually not powers of 2 (but something like 320x200). This gives you great flexibility: you can play the video stream in its original form, rather than distorting or clipping each frame to fit the texture dimensions.

Importantly, you cannot update a texture that you never created in the first place! We create the texture in Initialize().

Also worth mentioning... if you plan to use more than one texture in your project, be sure to bind the texture you want to update. Otherwise the texture that gets updated may not be the one you intended!

flipIt(data);	// swap the red and blue bytes
// update the texture

void CloseAVI(void)	// closes the AVI resources

Then we create a new quadric. quadratic is the pointer to the new object. We set up smooth normals and enable texture coordinate generation.

BOOL Initialize (GL_Window* window, Keys* keys)	// user initialization starts here
	quadratic=gluNewQuadric();	// create a pointer to the quadric

glEnable(GL_TEXTURE_2D);	// enable 2D texture mapping
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);	// set the texture generation mode for S to sphere mapping

OpenAVI("data/face2.avi");	// open the AVI file
// create the texture
return TRUE;	// initialization succeeded, return TRUE

void Deinitialize (void)	// all the cleanup work

The key handling for environment mapping works the same way. We check whether 'E' has been pressed; if so, env is toggled from TRUE to FALSE or from FALSE to TRUE. This simply turns environment mapping off or on!

Each time Update() is called, angle is increased by a small fraction. I divide the elapsed time by 60.0f to slow things down a little.

void Update (DWORD milliseconds)	// update the animation
if (g_keys->keyDown [VK_F1] == TRUE)	// F1 pressed?
if ((g_keys->keyDown [' ']) && !sp)	// space pressed and not held?
if (!g_keys->keyDown[' '])	// space released?
if ((g_keys->keyDown ['B']) && !bp)	// 'B' pressed and not held?
if (!g_keys->keyDown['B'])	// 'B' released?
if ((g_keys->keyDown ['E']) && !ep)	// 'E' pressed and not held?
if (!g_keys->keyDown['E'])	// 'E' released?
angle += (float)(milliseconds) / 60.0f;	// update angle based on the timer

The code below will drop frames if your computer is too slow. If you are feeling energetic, try adding looping, fast forward, pause, or reverse play.

next+= milliseconds;	// increase next based on the timer
if (frame>=lastframe)	// past the last frame?

void Draw (void)	// draw our screen
GrabAVIFrame(frame);	// grab a frame of the animation
if (bg)	// background visible?
glLoadIdentity ();	// reset the modelview matrix
if (env)	// environment mapping on?
glRotatef(angle*2.3f,1.0f,0.0f,0.0f);	// throw in some rotation to keep things moving
switch (effect)	// which effect?
case 1:	// effect 1, sphere
case 2:	// effect 2, cylinder
if (env)	// environment mapping enabled?
glFlush ();	// flush the rendering pipeline

Many thanks to Fredster for the face AVI file. Face was one of six AVI animations he sent me, with no questions asked and no conditions. He helped me in his own way, and I thank him!

Even bigger thanks to Jonathan de Blok; without him this article would not exist. He sent me code from his own AVI player, which got me interested in the AVI format, and he answered my questions about his code. The important thing is that I did not borrow or copy his code; it only helped me understand how an AVI player works. My player opens, decodes, and plays AVI files with different code!

Thanks to everyone who has helped, including all the visitors! Without you my site would be worth nothing!!!
-- Author: 一分之千 -- Posted: 10/25/2007 9:39:00 AM --

Lesson 35

I would like to start off by saying that I am very proud of this tutorial. When I first got the idea to code an AVI player in OpenGL thanks to Jonathan de Blok, I had no idea how to open an AVI, let alone code an AVI player. I started off by flipping through my collection of programming books. Not one book talked about AVI files. I then read everything there was to read about the AVI format in the MSDN. Lots of useful information in the MSDN, but I needed more.

After browsing the net for hours searching for AVI examples, I had just two sites bookmarked. I'm not going to say my search engine skills are amazing, but 99.9% of the time I have no problems finding what I'm looking for. I was absolutely shocked when I realized just how few AVI examples there were! Most of the examples I found wouldn't compile... A handful of them were way too complex (for me at least), and the rest did the job, but they were coded in VB, Delphi, etc. (not VC++).

The first page I bookmarked was an article written by Jonathan Nix titled "AVI Files". You can visit it at http://www.gamedev.net/reference/programming/features/avifile/. Huge respect to Jonathan for writing an extremely brilliant document on the AVI format. Although I decided to do things differently, his example code snippets and clear comments made the learning process a lot easier! The second site is titled "The AVI Overview" by John F. McGowan, Ph.D. I could go on and on about how amazing John's page is, but it's easier if you check it out yourself! The URL is http://www.jmcgowan.com/avi.html. His site pretty much covers everything there is to know about the AVI format! Thanks to John for making such a valuable page available to the public.

The last thing I wanted to mention is that NONE of the code has been borrowed, and none of the code has been copied. It was written during a 3 day coding spree, using information from the above mentioned sites and articles. With that said, I feel it is important to note that my code may not be the BEST way to play an AVI file. It may not even be the correct way to play an AVI file, but it does work, and it's easy to use! If you dislike the code, my coding style, or if you feel I'm hurting the programming community by releasing this tut, you have a few options: 1) search the net for alternate resources, 2) write your own AVI player, OR 3) write a better tutorial! Everyone visiting this site should know by now that I'm an average programmer with average skills (I've stated that on numerous pages throughout the site)! I code for FUN! The goal of this site is to make life easier for the non-elite coder to get started with OpenGL. The tutorials are merely examples of how 'I' managed to accomplish a specific effect... Nothing more, nothing less!

On to the code... The first thing you will notice is that we include and link to the Video For Windows header / library. Big thanks to Microsoft (I can't believe I just said that!). This library makes opening and playing AVI files a SNAP! For now... All you need to know is that you MUST include the vfw.h header file and you must link to the vfw32.lib library file if you want the code to compile :)

#include <windows.h>	// Header File For Windows
#pragma comment( lib, "opengl32.lib" )	// Search For OpenGL32.lib While Linking
#ifndef CDS_FULLSCREEN	// CDS_FULLSCREEN Is Not Defined By Some Compilers

GL_Window* g_window;
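This dump only preserves a few of the declarations. Based on the stated requirement to link vfw32.lib and on the variable descriptions in the paragraphs that follow, the missing setup plausibly looks like this. It is a hedged reconstruction, not the lesson's verbatim source; the exact types are inferred from the API calls shown later:

#include <vfw.h>	// Video For Windows header (required)
#pragma comment( lib, "vfw32.lib" )	// link the Video For Windows library

// user variables, roles described in the next paragraphs
float				angle;			// rotation angle
int					next;			// elapsed-time accumulator (milliseconds)
int					frame = 0;		// current frame of the animation
int					effect;			// current effect (cube / sphere / cylinder / nothing)
bool				sp, bp, ep;		// space / 'B' / 'E' key-held flags
bool				env = true;		// environment mapping on/off
bool				bg  = true;		// fullscreen video background on/off
PAVISTREAM			pavi;			// handle to an open AVI stream
PGETFRAME			pgf;			// frame-grabbing object
BITMAPINFOHEADER	bmih;			// header describing the DIB format we want
long				lastframe;		// index of the last frame in the stream
int					width, height;	// frame dimensions in pixels
char				*pdata;			// pointer to a frame's image data
int					mpf;			// rough milliseconds per frame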
next is an integer variable that will be used to count how much time has passed (in milliseconds). It will be used to keep the framerate at a decent speed. More about this later! frame is of course the current frame we want to display from the animation. We start off at 0 (first frame). I think it's safe to assume that if we managed to open the video, it HAS to have at least one frame of animation :)

effect is the current effect seen on the screen (object: Cube, Sphere, Cylinder, Nothing). env is a boolean value. If it's true, then environment mapping is enabled; if it's false, the object will NOT be environment mapped. If bg is true, you will see the video playing fullscreen behind the object. If it's false, you will only see the object (there will be no background). sp, ep and bp are used to make sure the user isn't holding a key down.

// User Defined Variables
AVISTREAMINFO psi;	// Structure Containing Stream Info

hdd is a handle to a DrawDib device context. hdc is a handle to a device context. hBitmap is a handle to a device-independent bitmap (used in the bitmap conversion process later). data is a pointer that will eventually point to our converted bitmap image data. It will make sense later in the code. Keep reading :)

GLUquadricObj *quadratic;	// Storage For Our Quadratic Objects
HDRAWDIB hdd;	// Handle For Our Dib

While writing this tutorial I discovered something very odd. The first video I actually got working with this code played fine, but the colors were messed up. Everything that was supposed to be red was blue and everything that was supposed to be blue was red. I went absolutely NUTS! I was convinced that I made a mistake somewhere in the code. After looking at all the code, I was unable to find the bug! So I started reading through the MSDN again. Why would the red and blue bytes be swapped!?! It says right in the MSDN that 24 bit bitmaps are RGB!!! After some more reading I discovered the problem. In WINDOWS (figures), RGB data is actually stored backwards (BGR). In OpenGL, RGB is exactly that... RGB!

After a few complaints from fans of Microsoft :) I decided to add a quick note! I am not trashing Microsoft because their RGB data is stored backwards. I just find it very frustrating that it's called RGB when it's actually BGR in the file!

Blue adds: It's more to do with "little endian" and "big endian". Intel and Intel compatibles use little endian, where the least significant byte (LSB) is stored first. OpenGL came from Silicon Graphics machines, which are probably big endian, and thus the OpenGL standard required the bitmap format to be in big endian format. I think this is how it works.

Wonderful! So here I am with a player that looks like absolute crap! My first solution was to swap the bytes manually with a for loop. It worked, but it was very slow. Completely fed up, I modified the texture generation code to use GL_BGR_EXT instead of GL_RGB. A huge speed increase, and the colors looked great! So my problem was solved... or so I thought! It turns out, some OpenGL drivers have problems with GL_BGR_EXT... Back to the drawing board :(

After talking with my good friend Maxwell Sayles, he recommended that I swap the bytes using asm code. A minute later, he had icq'd me the code below! It may not be optimized, but it's fast and it does the job!

Each frame of animation is stored in a buffer. The image will always be 256 pixels wide, 256 pixels tall and 1 byte per color (3 bytes per pixel). The code below will go through the buffer and swap the Red and Blue bytes. Red is stored at ebx+0 and blue is stored at ebx+2. We move through the buffer 3 bytes at a time (because one pixel is made up of 3 bytes). We loop through the data until all of the bytes have been swapped.
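For reference, here is what that swap does, written as a portable C++ sketch equivalent to the asm routine described above (not the tutorial's actual assembly):

void flipIt(void* buffer)	// swap red and blue in a 256x256, 24-bit buffer
{
	unsigned char* p = (unsigned char*)buffer;
	for (int i = 0; i < 256 * 256; i++, p += 3)	// one pixel = 3 bytes
	{
		unsigned char tmp = p[0];	// red lives at offset 0
		p[0] = p[2];				// blue lives at offset 2
		p[2] = tmp;
	}
}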
A few of you were unhappy with the use of ASM code, so I figured I would explain why it's used in this tutorial. Originally I had planned to use GL_BGR_EXT; as I stated, it works, but not on all cards! I then decided to use the swap method from the last tut (very tidy XOR swap code). The swap code works on all machines, but it's not extremely fast. In the last tut, yeah, it works GREAT. In this tutorial we are dealing with REAL-TIME video. You want the fastest swap you can get. Weighing the options, ASM in my opinion is the best choice! If you have a better way to do the job, please... USE IT! I'm not telling you how you HAVE to do things. I'm showing you how I did it. I also explain in detail what the code does. That way if you want to replace the code with something better, you know exactly what this code is doing, making it easier to find an alternate solution if you want to write your own code!

void flipIt(void* buffer)	// Flips The Red And Blue Bytes (256x256)
	add ebx,3	// Moves Through The Data By 3 Bytes

The first thing we need to do is call AVIFileInit(). This initializes the AVI file library (gets things ready for us).

There are many ways to open an AVI file. I decided to use AVIStreamOpenFromFile(...). This opens a single stream from an AVI file (AVI files can contain multiple streams). The parameters are as follows: pavi is a pointer to a buffer that receives the new stream handle. szFile is of course the name of the file we wish to open (complete with path). The third parameter is the type of stream we wish to open. In this project, we are only interested in the VIDEO stream (streamtypeVIDEO). The fourth parameter is 0. This means we want the first occurrence of streamtypeVIDEO (there can be multiple video streams in a single AVI file... we want the first stream). OF_READ means that we want to open the file for reading ONLY. The last parameter is a pointer to a class identifier of the handler you want to use. To be honest, I have no idea what it does. I let windows select it for me by passing NULL as the last parameter!

If there are any errors while opening the file, a message box pops up letting you know that the stream could not be opened. I don't pass a PASS or FAIL back to the calling section of code, so if this fails, the program will try to keep running. Adding some type of error checking shouldn't take a lot of effort; I was too lazy :)

void OpenAVI(LPCSTR szFile)	// Opens An AVI File (szFile)
	AVIFileInit();	// Opens The AVIFile Library
	// Opens The AVI Stream

Earlier we created a structure called psi that will hold information about our AVI stream. We fill this structure with information about the AVI with the first line of code below. Everything from the width of the stream (in pixels) to the framerate of the animation is stored in psi. For those of you that want accurate playback speeds, make a note of what I just said. For more information look up AVIStreamInfo in the MSDN.

We can calculate the width of a frame by subtracting the left border from the right border. The result should be an accurate width in pixels. For the height, we subtract the top of the frame from the bottom of the frame. This gives us the height in pixels.

We then grab the last frame number from the AVI file using AVIStreamLength(...). This returns the number of frames of animation in the AVI file. The result is stored in lastframe.
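Assembling the calls just described, OpenAVI plausibly reads as follows. This is a hedged sketch built from the parameter walkthrough above, including the mpf calculation explained next; the MessageBox wording is illustrative:

void OpenAVI(LPCSTR szFile)	// open the AVI file szFile
{
	AVIFileInit();	// initialize the AVI file library

	// open the first video stream, read-only; let Windows pick the handler (NULL)
	if (AVIStreamOpenFromFile(&pavi, szFile, streamtypeVIDEO, 0, OF_READ, NULL) != 0)
	{
		MessageBox(HWND_DESKTOP, "Failed To Open The AVI Stream", "Error", MB_OK | MB_ICONEXCLAMATION);
	}

	AVIStreamInfo(pavi, &psi, sizeof(psi));	// read stream info into psi
	width  = psi.rcFrame.right  - psi.rcFrame.left;	// frame width in pixels
	height = psi.rcFrame.bottom - psi.rcFrame.top;	// frame height in pixels

	lastframe = AVIStreamLength(pavi);	// index of the last frame
	mpf = AVIStreamSampleToTime(pavi, lastframe) / lastframe;	// rough milliseconds per frame
}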
Calculating the framerate is fairly easy. Frames per second = psi.dwRate / psi.dwScale. The value returned should match the frame rate displayed when you right click on the AVI and check its properties. So what does this have to do with mpf you ask? When I first wrote the animation code, I tried using the frames per second to select the correct frame of animation. I ran into a problem... All of the videos played too fast! So I had a look at the video properties. The face2.avi file is 3.36 seconds long. The frame rate is 29.974 frames per second. The video has 91 frames of animation. But if you multiply 3.36 by 29.974 you get 100.71 frames of animation. Very odd!

So, I decided to do things a little differently. Instead of calculating the frames per second, I calculate how long each frame should be displayed. AVIStreamSampleToTime() converts a position in the animation to how many milliseconds it would take to get to that position. So we calculate how many milliseconds the entire video is by grabbing the time (in milliseconds) of the last frame (lastframe). We then divide the result by the total number of frames in the animation (lastframe). This gives us the amount of time each frame is displayed for in milliseconds. We store the result in mpf (milliseconds per frame). You could also calculate the milliseconds per frame by grabbing the amount of time for just 1 frame of animation with the following code: AVIStreamSampleToTime(pavi,1). Either way should work fine! Big thanks to Albert Chaulk for the idea!

The reason I say rough milliseconds per frame is because mpf is an integer, so any floating point values will be rounded off.

AVIStreamInfo(pavi, &psi, sizeof(psi));	// Reads Information About The Stream Into psi
lastframe=AVIStreamLength(pavi);	// The Last Frame Of The Stream
mpf=AVIStreamSampleToTime(pavi,lastframe)/lastframe;	// Calculate Rough Milliseconds Per Frame

The first thing we need to do is describe the type of image we want. To do this, we fill the bmih BitmapInfoHeader structure with our requested parameters. We start off by setting the size of the structure. We then set the bitplanes to 1. Three bytes of data works out to 24 bits (RGB). We want the image to be 256 pixels wide and 256 pixels tall, and finally we want the data returned as UNCOMPRESSED RGB data (BI_RGB).

CreateDIBSection creates a dib that we can directly write to. If everything goes well, hBitmap will point to the dib's bit values. hdc is a handle to a device context (DC). The second parameter is a pointer to our BitmapInfo structure. The structure contains information about the dib file as mentioned above. The third parameter (DIB_RGB_COLORS) specifies that the data is RGB values. data is a pointer to a variable that receives a pointer to the location of the DIB's bit values (whew, that was a mouthful). By setting the 5th value to NULL, memory is allocated for our DIB. Finally, the last parameter can be ignored (set to NULL).

Quoted from the MSDN: The SelectObject function selects an object into the specified device context (DC).

We have now created a DIB that we can directly draw to. Yay :)

bmih.biSize = sizeof (BITMAPINFOHEADER);	// Size Of The BitmapInfoHeader
hBitmap = CreateDIBSection (hdc, (BITMAPINFO*)(&bmih), DIB_RGB_COLORS, (void**)(&data), NULL, NULL);

You can pass a structure similar to the one above as the second parameter to have a specific video format returned. Unfortunately, the only thing you can alter is the width and height of the returned image.
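So, hypothetically, if you wanted Video for Windows to hand you frames already converted to a specific size, the open call could look like this short sketch (illustrative only; the tutorial itself passes NULL to get the default format, as shown below):

BITMAPINFOHEADER wanted = bmih;	// copy the 256x256x24-bit description from above
pgf = AVIStreamGetFrameOpen(pavi, &wanted);	// ask the decoder for frames in that format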
The MSDN also mentions that you can pass AVIGETFRAMEF_BESTDISPLAYFMT to select the best display format. Oddly enough, my compiler had no definition for it.

If everything goes well, a GETFRAME object is returned (which we need to read frames of data). If there are any problems, a message box will pop onto the screen telling you there was an error!

pgf=AVIStreamGetFrameOpen(pavi, NULL);	// Create The PGETFRAME Using Our Request Mode
// Information For The Title Bar (Width / Height / Last Frame)

Now for the fun stuff... we need to point to the image data. To do this we need to skip over the header information (lpbi->biSize). One thing I didn't realize until I started writing this tut was that we also have to skip over any color information. To do this we also add the colors used multiplied by the size of RGBQUAD (biClrUsed*sizeof(RGBQUAD)). After doing ALL of that :) we are left with a pointer to the image data (pdata).

Now we need to convert the frame of animation to a usable texture size, and we also need to convert the data to RGB data. To do this, we use DrawDibDraw(...).

A quick explanation. We can draw directly to our custom DIB. That's what DrawDibDraw(...) does. The first parameter is a handle to our DrawDib DC. The second parameter is a handle to the DC. Next we have the upper left corner (0,0) and the lower right corner (256,256) of the destination rectangle. lpbi is a pointer to the bitmapinfoheader information for the frame we just read. pdata is a pointer to the image data for the frame we just read. Then we have the upper left corner (0,0) of the source image (frame we just read) and the lower right corner of the frame we just read (width of the frame, height of the frame). The last parameter should be left at 0. This will convert an image of any size / color depth to a 256*256*24 bit image.

void GrabAVIFrame(int frame)	// Grabs A Frame From The Stream

Originally I was updating the texture by recreating it for each frame of animation. I received a few emails suggesting that I use glTexSubImage2D(). After flipping through the OpenGL Red Book, I stumbled across the following quote: "Creating a texture may be more computationally expensive than modifying an existing one. In OpenGL Release 1.1, there are new routines to replace all or part of a texture image with new information. This can be helpful for certain applications, such as using real-time, captured video images as texture images. For that application, it makes sense to create a single texture and use glTexSubImage2D() to repeatedly replace the texture data with new video images."

I personally didn't notice a huge speed increase, but on slower cards you might! The parameters for glTexSubImage2D() are as follows: Our target, which is a 2D texture (GL_TEXTURE_2D). The detail level (0), used for mipmapping. The x (0) and y (0) offset which tells OpenGL where to start copying (0,0 is the lower left corner of the texture). Then we have the width and height of the image we wish to copy, which is 256 pixels wide and 256 pixels tall. GL_RGB is the format of our data. We are copying unsigned bytes. Finally... The pointer to our data, which is represented by data. Very simple!
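Putting those steps together, the heart of GrabAVIFrame plausibly looks like this. It is a hedged sketch assembled from the descriptions of AVIStreamGetFrame, the pointer math, DrawDibDraw and glTexSubImage2D above, not the verbatim lesson source:

void GrabAVIFrame(int frame)	// grab frame number 'frame' from the stream
{
	LPBITMAPINFOHEADER lpbi;	// holds the frame's bitmap header
	lpbi = (LPBITMAPINFOHEADER)AVIStreamGetFrame(pgf, frame);	// decode the requested frame

	// skip the header and any palette entries to reach the pixel data
	pdata = (char *)lpbi + lpbi->biSize + lpbi->biClrUsed * sizeof(RGBQUAD);

	// convert the frame (any size / color depth) into our 256x256x24-bit DIB
	DrawDibDraw(hdd, hdc, 0, 0, 256, 256, lpbi, pdata, 0, 0, width, height, 0);

	flipIt(data);	// swap red and blue for OpenGL

	// replace the existing texture's data with the new frame
	glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, data);
}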
Kevin Rogers adds: I just wanted to point out another important reason to use glTexSubImage2D(). Not only is it faster on many OpenGL implementations, but the target area does not need to be a power of 2. This is especially handy for video playback since the typical dimensions for a frame are rarely powers of 2 (often something like 320 x 200). This gives you the flexibility to play the video stream at its original aspect, rather than distorting / clipping each frame to fit your texture dimensions.

It's important to note that you can NOT update a texture if you have not created the texture in the first place! We create the texture in the Initialize() code!

I also wanted to mention... If you plan to use more than one texture in your project, make sure you bind the texture you want to update. If you don't bind the texture, you may end up updating textures you didn't want updated!

flipIt(data);	// Swap The Red And Blue Bytes (GL Compatibility)
// Update The Texture

void CloseAVI(void)	// Properly Closes The Avi File

Our clear screen color is black, depth testing is enabled, etc. We then create a new quadric. quadratic is the pointer to our new object. We set up smooth normals, and enable texture coordinate generation for our quadric.

BOOL Initialize (GL_Window* window, Keys* keys)	// Any GL Init Code & User Initialization Goes Here
// Start Of User Initialization
quadratic=gluNewQuadric();	// Create A Pointer To The Quadric Object

After setting up our texture and sphere mapping, we open the .AVI file. I tried to keep things simple... can you tell :) The file we are going to open is called face2.avi... it's located in the data directory. The last thing we have to do is create our initial texture. We need to do this in order to use glTexSubImage2D() to update our texture in GrabAVIFrame().

glEnable(GL_TEXTURE_2D);	// Enable Texture Mapping
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);	// Set The Texture Generation Mode For S To Sphere Mapping
OpenAVI("data/face2.avi");	// Open The AVI File
// Create The Texture
return TRUE;	// Return TRUE (Initialization Successful)

void Deinitialize (void)	// Any User DeInitialization Goes Here

We then check to see if the 'B' key is pressed; if it is, we toggle the background (bg) from ON to OFF or from OFF to ON. Environment mapping is done the same way. We check to see if 'E' is pressed. If it is, we toggle env from TRUE to FALSE or from FALSE to TRUE, turning environment mapping off or on!

The angle is increased by a tiny fraction each time Update() is called. I divide the time passed by 60.0f to slow things down a little.

void Update (DWORD milliseconds)	// Perform Motion Updates Here
if (g_keys->keyDown [VK_F1] == TRUE)	// Is F1 Being Pressed?
if ((g_keys->keyDown [' ']) && !sp)	// Is Space Being Pressed And Not Held?
if (!g_keys->keyDown[' '])	// Is Space Released?
if ((g_keys->keyDown ['B']) && !bp)	// Is 'B' Being Pressed And Not Held?
if (!g_keys->keyDown['B'])	// Is 'B' Released?
if ((g_keys->keyDown ['E']) && !ep)	// Is 'E' Being Pressed And Not Held?
if (!g_keys->keyDown['E'])	// Is 'E' Released?
angle += (float)(milliseconds) / 60.0f;	// Update angle Based On The Timer

After that, we check to make sure that the current frame of animation hasn't passed the last frame of the video. If it has, frame is reset to zero, the animation timer (next) is reset to 0, and the animation starts over. The code below will drop frames if your computer is running too slow, or if another application is hogging the CPU. If you want every frame to be displayed no matter how slow the user's computer is, you could check to see if next is greater than mpf; if it is, you would reset next to 0 and increase frame by one. Either way will work, although the code below is better for faster machines. If you feel energetic, try adding rewind, fast forward, pause or reverse play!

next+= milliseconds;	// Increase next Based On Timer (Milliseconds)
if (frame>=lastframe)	// Have We Gone Past The Last Frame?
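One plausible completion of those fragments, matching the frame-dropping behaviour just described (a hedged sketch, not the verbatim lesson source):

next += milliseconds;	// accumulate elapsed time
frame = next / mpf;	// pick whichever frame the clock has reached (drops frames when slow)
if (frame >= lastframe)	// gone past the last frame?
{
	frame = 0;	// restart the animation
	next  = 0;	// and reset the animation timer
}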
void Draw (void)	// Draw Our Scene
GrabAVIFrame(frame);	// Grab A Frame From The AVI
if (bg)	// Is Background Visible?

After that, we check to see if env is TRUE. If it is, we enable sphere mapping to create the environment mapping effect.

glLoadIdentity ();	// Reset The Modelview Matrix
if (env)	// Is Environment Mapping On?

If you don't understand rotations and translations... you shouldn't be reading this tutorial :)

glRotatef(angle*2.3f,1.0f,0.0f,0.0f);	// Throw In Some Rotations To Move Things Around A Bit

switch (effect)	// Which Effect?
case 1:	// Effect 1 - Sphere

Before we draw the cylinder, we translate -1.5f units on the z-axis. By doing this, our cylinder will rotate around its center point. The general rule for centering a cylinder is to divide its height by 2 and translate by the result in a negative direction on the z-axis. If you have no idea what I'm talking about, take out the glTranslatef(...) line below. The cylinder will rotate around its base instead of its center point.

case 2:	// Effect 2 - Cylinder
if (env)	// Environment Mapping Enabled?
glFlush ();	// Flush The GL Rendering Pipeline

Anyways... I would love to hear some feedback about this tut. If you find mistakes or you would like to help make the tut better, please contact me. As I said, this is my first attempt at AVI. Normally I wouldn't write a tut on a subject I just learned, but my excitement got the best of me, plus the fact that there's very little information on the subject bothered me. What I'm hoping is that I'll open the door to a flood of higher quality AVI demos and example code! Might happen... might not. Either way, the code is here for you to use however you want!

Huge thanks to Fredster for the face AVI file. Face was one of about 6 AVI animations he sent to me for use in my tutorial. No questions asked, no conditions. I emailed him and he went out of his way to help me out... Huge respect!

An even bigger thanks to Jonathan de Blok. If it wasn't for him, this tutorial would not exist. He got me interested in the AVI format by sending me bits of code from his own personal AVI player. He also went out of his way to answer any questions that I had in regards to his code. It's important to note that nothing was borrowed or taken from his code; it was used only to understand how an AVI player works. My player opens, decodes and plays AVI files using very different code!

Thanks to everyone for the great support! This site would be nothing without its visitors!!!

Jeff Molofee (NeHe)
-- Author: 一分之千 -- Posted: 10/25/2007 9:40:00 AM --

Lesson 36

How do you achieve a radial blur filter effect? It looks hard, but it is actually simple. Grab the rendered image as a texture, then use OpenGL's own texture filtering to produce the effect. Try it and see.

#include <math.h>	// math library

float angle;	// used to rotate the helix

GLuint EmptyTexture()	// creates an empty texture
// allocate storage for the texture data (128*128*4)
ZeroMemory(data,((128 * 128)* 4 * sizeof(unsigned int)));	// clear the storage
glGenTextures(1, &txtnumber);	// create one texture
delete [] data;	// release the data
return txtnumber;	// return the texture ID

void ReduceToUnit(float vector[3])	// normalizes a vector
if(length == 0.0f)	// avoid a divide-by-zero error
vector[0] /= length;	// normalize the vector

out[x] = v1[y] * v2[z] - v1[z] * v2[y]
out[y] = v1[z] * v2[x] - v1[x] * v2[z]
out[z] = v1[x] * v2[y] - v1[y] * v2[x]

void calcNormal(float v[3][3], float out[3])	// computes a quad's normal from three points
// get a vector between two points by subtraction // the X,Y,Z coordinates from one point to the other // compute the vector from point 1 to point 0
ReduceToUnit(out);	// normalize the vector

void ProcessHelix()	// draws a helix
GLfloat glfMaterialColor[]={0.4f,0.2f,0.8f,1.0f};	// set the material color
glLoadIdentity();	// reset the modelview matrix
glTranslatef(0,0,-50);	// move 50 units into the screen
glMaterialfv(GL_FRONT_AND_BACK,GL_AMBIENT_AND_DIFFUSE,glfMaterialColor);
r=1.5f;	// radius

glBegin(GL_QUADS);	// begin drawing quads
x=float(cos(u)*(2.0f+cos(v) ))*r;	// compute the x position (1st point)
vertexes[0][0]=x;	// set the x value of the first vertex
v=(phi/180.0f*3.142f);	// compute the angle of the second point ( 0 )
x=float(cos(u)*(2.0f+cos(v) ))*r;	// compute the x position (2nd point)
vertexes[1][0]=x;	// set the x value of the second vertex
v=((phi+20)/180.0f*3.142f);	// compute the angle of the third point ( 20 )
x=float(cos(u)*(2.0f+cos(v) ))*r;	// compute the x position (3rd point)
vertexes[2][0]=x;	// set the x value of the third vertex
v=((phi+20)/180.0f*3.142f);	// compute the angle of the fourth point ( 20 )
x=float(cos(u)*(2.0f+cos(v) ))*r;	// compute the x position (4th point)
vertexes[3][0]=x;	// set the x value of the fourth vertex
calcNormal(vertexes,normal);	// compute the quad's normal
glNormal3f(normal[0],normal[1],normal[2]);	// set the normal
// render the quad
glPopMatrix();	// pop the matrix

void ViewOrtho()	// set up an orthographic view
void ViewPerspective()	// set up a perspective view

void RenderToTexture()	// render to a texture
ProcessHelix();	// render the helix
glBindTexture(GL_TEXTURE_2D,BlurTexture);	// bind the blur texture
// copy our viewport into the blur texture (from 0,0 to 128,128... no border)
glClearColor(0.0f, 0.0f, 0.5f, 0.5);	// set the clear color to medium blue
glViewport(0 , 0,640 ,480);	// reset the viewport (0,0 to 640x480)

void DrawBlur(int times, float inc)	// draw the blurred image
// disable automatic texture coordinate generation
glEnable(GL_TEXTURE_2D);	// enable 2D texture mapping
alphainc = alpha / times;	// decrease the alpha value per pass
glBegin(GL_QUADS);	// begin drawing quads
glTexCoord2f(0+spost,0+spost);
glTexCoord2f(1-spost,0+spost);
glTexCoord2f(1-spost,1-spost);
spost += inc;	// gradually increase spost (zooming closer to the texture center)
ViewPerspective();	// switch back to a perspective view
glEnable(GL_DEPTH_TEST);	// enable depth testing

void Draw (void)	// draw the scene
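To show how those pieces fit together: the lesson's Draw routine essentially chains the steps above, roughly like the hedged sketch below. The pass count and zoom increment are illustrative values:

void Draw(void)	// draw the scene
{
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	// clear color and depth buffers
	glLoadIdentity();	// reset the modelview matrix
	RenderToTexture();	// render the helix into the 128x128 blur texture
	ProcessHelix();	// draw the "real" full-size helix
	DrawBlur(25, 0.02f);	// overlay blended, progressively zoomed copies of the texture
	glFlush();	// flush the rendering pipeline
}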
-- Author: 一分之千 -- Posted: 10/25/2007 9:40:00 AM --

Lesson 36

Hi! I'm Dario Corno, also known as rIo of SpinningKids. First of all, I want to explain why I decided to write this little tutorial. I have been a scener since 1989. I want all of you to download some demos so you understand what a demo is and what demo effects are. Demos are done to show off hardcore and sometimes brutal coding as well as artistic skill. You can usually find some really killer effects in today's demos! This won't be a killer effect tutorial, but the end result is very cool! You can find a huge collection of demos at http://www.pouet.net and http://ftp.scene.org.

Now that the introduction is out of the way, we can go on with the tutorial... I will explain how to do an eye candy effect (used in demos) that looks like radial blur. Sometimes it's referred to as volumetric lights; don't believe it, it's just a fake radial blur! ;D

Radial blur was usually done (when there were only software renderers) by blurring every pixel of the original image in a direction opposite the center of the blur. With today's hardware it is quite difficult to do blurring by hand using the color buffer (at least in a way that is supported by all the gfx cards), so we need to do a little trick to achieve the same effect. As a bonus while learning the radial blur effect, you will also learn how to render to a texture the easy way!

I decided to use a spring as the shape in this tutorial because it's a cool shape, and I'm tired of cubes :) It's important to note that this tutorial is more a guideline on how to create the effect. I don't go into great detail explaining the code. You should know most of it off by heart :) Below are the variable definitions and includes used:

#include <math.h>	// We'll Need Some Math

float angle;	// Used To Rotate The Helix

128 * 128 is the size of the texture (128 pixels wide and tall); the 4 means that for every pixel we want 4 bytes to store the RED, GREEN, BLUE and ALPHA components.

GLuint EmptyTexture()	// Create An Empty Texture
// Create Storage Space For Texture Data (128x128x4)

A semi important thing to note is that we set the magnification and minification methods to GL_LINEAR. That's because we will be stretching our texture, and GL_NEAREST looks quite bad if stretched.

ZeroMemory(data,((128 * 128)* 4 * sizeof(unsigned int)));	// Clear Storage Memory
glGenTextures(1, &txtnumber);	// Create 1 Texture
delete [] data;	// Release data
return txtnumber;	// Return The Texture ID

void ReduceToUnit(float vector[3])	// Reduces A Normal Vector (3 Coordinates)
if(length == 0.0f)	// Prevents Divide By 0 Error
vector[0] /= length;	// Dividing Each Element By The Length
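Filling in the gaps between those fragments, the two helpers plausibly look like this. It is a hedged reconstruction based on the 128x128 RGBA and GL_LINEAR notes above, not the lesson's verbatim source:

GLuint EmptyTexture()	// create an empty 128x128 RGBA texture
{
	GLuint txtnumber;	// texture ID
	unsigned int* data = new unsigned int[128 * 128];	// one 4-byte RGBA value per pixel
	ZeroMemory(data, 128 * 128 * 4);	// clear the storage

	glGenTextures(1, &txtnumber);	// create 1 texture
	glBindTexture(GL_TEXTURE_2D, txtnumber);	// bind it
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0,
		GL_RGBA, GL_UNSIGNED_BYTE, data);	// build the texture from the zeroed buffer
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);	// linear minification
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);	// linear magnification

	delete [] data;	// release the buffer
	return txtnumber;	// return the texture ID
}

void ReduceToUnit(float vector[3])	// normalize a 3-component vector
{
	float length = sqrtf(vector[0]*vector[0] + vector[1]*vector[1] + vector[2]*vector[2]);
	if (length == 0.0f)	// avoid a divide-by-zero error
		length = 1.0f;
	vector[0] /= length;
	vector[1] /= length;
	vector[2] /= length;
}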
A bit of (easy) math. We are going to use the famous cross product. By definition, the cross product is an operation between two vectors that returns a third vector orthogonal to the two original vectors. The normal is the vector orthogonal to a surface, pointing away from that surface (and usually of normalized length). Now imagine that the two vectors above are the sides of a triangle; then the orthogonal vector (calculated with the cross product) of two sides of a triangle is exactly the normal of that triangle.

Harder to explain than to do. We start by finding the vector going from vertex 0 to vertex 1, and the vector from vertex 1 to vertex 2. This is done by (brutally) subtracting each component of each vertex from the next. Now we have the vectors for our triangle sides. By computing the cross product (v1 x v2) we get the normal vector for that triangle.

Let's see the code. v[0][] is the first vertex, v[1][] is the second vertex, v[2][] is the third vertex. Every vertex has: v[][0], the x coordinate of that vertex; v[][1], the y coordinate; v[][2], the z coordinate. By simply subtracting every coordinate of one vertex from the next, we get the VECTOR from this vertex to the next. v1[0] = v[0][0] - v[1][0] calculates the X component of the VECTOR going from VERTEX 0 to VERTEX 1; v1[1] = v[0][1] - v[1][1] calculates the Y component; v1[2] = v[0][2] - v[1][2] calculates the Z component, and so on...

Now we have the two VECTORS, so let's calculate their cross product to get the normal of the triangle. The formula for the cross product is:

out[x] = v1[y] * v2[z] - v1[z] * v2[y]
out[y] = v1[z] * v2[x] - v1[x] * v2[z]
out[z] = v1[x] * v2[y] - v1[y] * v2[x]

We finally have the normal of the triangle in out[].

void calcNormal(float v[3][3], float out[3])	// Calculates Normal For A Quad Using 3 Points
// Finds The Vector Between 2 Points By Subtracting
// Calculate The Vector From Point 1 To Point 0
ReduceToUnit(out);	// Normalize The Vectors

void ProcessHelix()	// Draws A Helix
GLfloat glfMaterialColor[]={0.4f,0.2f,0.8f,1.0f};	// Set The Material Color
glLoadIdentity();	// Reset The Modelview Matrix
glTranslatef(0,0,-50);	// Translate 50 Units Into The Screen
glMaterialfv(GL_FRONT_AND_BACK,GL_AMBIENT_AND_DIFFUSE,glfMaterialColor);
r=1.5f;	// Radius

glBegin(GL_QUADS);	// Begin Drawing Quads
x=float(cos(u)*(2.0f+cos(v) ))*r;	// Calculate x Position (1st Point)
vertexes[0][0]=x;	// Set x Value Of First Vertex
v=(phi/180.0f*3.142f);	// Calculate Angle Of Second Point ( 0 )
x=float(cos(u)*(2.0f+cos(v) ))*r;	// Calculate x Position (2nd Point)
vertexes[1][0]=x;	// Set x Value Of Second Vertex
v=((phi+20)/180.0f*3.142f);	// Calculate Angle Of Third Point ( 20 )
x=float(cos(u)*(2.0f+cos(v) ))*r;	// Calculate x Position (3rd Point)
vertexes[2][0]=x;	// Set x Value Of Third Vertex
v=((phi+20)/180.0f*3.142f);	// Calculate Angle Of Fourth Point ( 20 )
x=float(cos(u)*(2.0f+cos(v) ))*r;	// Calculate x Position (4th Point)
vertexes[3][0]=x;	// Set x Value Of Fourth Vertex
calcNormal(vertexes,normal);	// Calculate The Quad Normal
glNormal3f(normal[0],normal[1],normal[2]);	// Set The Normal
// Render The Quad
glPopMatrix();	// Pop The Matrix

ViewOrtho simply sets the projection matrix mode, then pushes a copy of the current projection matrix onto the OpenGL stack. The identity matrix is then loaded and an orthographic view with the current screen resolution is set up. This way it is possible to draw using 2D coordinates with 0,0 in the upper left corner of the screen and 640,480 in the lower right corner. Finally, the modelview matrix is activated for rendering.

ViewPerspective sets projection matrix mode and pops back the non-orthographic matrix that ViewOrtho pushed onto the stack. The modelview matrix is then selected so we can render. I suggest you keep these two procedures; it's nice being able to render in 2D without having to worry about the projection matrix!

void ViewOrtho()	// Set Up An Ortho View
void ViewPerspective()	// Set Up A Perspective View
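The bodies of those two helpers are not preserved in this dump. Based on the description above they plausibly read as follows (a hedged sketch; note that glOrtho's flipped top/bottom arguments put 0,0 at the upper left):

void ViewOrtho()	// set up an ortho view
{
	glMatrixMode(GL_PROJECTION);	// select the projection matrix
	glPushMatrix();	// save the current (perspective) matrix
	glLoadIdentity();	// reset it
	glOrtho(0, 640, 480, 0, -1, 1);	// 0,0 upper left; 640,480 lower right
	glMatrixMode(GL_MODELVIEW);	// switch to the modelview matrix
	glPushMatrix();	// save it too
	glLoadIdentity();	// and reset it for 2D drawing
}

void ViewPerspective()	// set up a perspective view
{
	glMatrixMode(GL_PROJECTION);	// select the projection matrix
	glPopMatrix();	// restore the saved perspective matrix
	glMatrixMode(GL_MODELVIEW);	// select the modelview matrix
	glPopMatrix();	// restore it for rendering
}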
We need to draw the scene so it appears blurred in all directions starting from the center. The trick is doing this without a major performance hit. We can't read and write pixels, and if we want compatibility with non kick-butt video cards, we can't use extensions or driver specific commands. Time to give up...? No, the solution is quite easy. OpenGL gives us the ability to "blur" textures. Ok... Not really blurring, but if we scale a texture using linear filtering, the result (with a bit of imagination) looks like gaussian blur. So what would happen if we put a lot of stretched textures right on top of the 3D scene and scaled them? The answer is simple... A radial blur effect!

There are two problems: How do we create the texture in realtime, and how do we place the texture exactly in front of the 3D object? The solutions are easier than you may think!

Problem ONE: Rendering To A Texture

The problem is easy to solve on pixel formats that have a back buffer. Rendering to a texture without a back buffer can be a real pain on the eyes! Rendering to texture is achieved with just one function! We need to draw our object and then copy the result (BEFORE SWAPPING THE BACK BUFFER WITH THE FRONT BUFFER) to a texture using the glCopyTexImage2D function.

Problem TWO: Fitting The Texture Exactly In Front Of The 3D Object

We know that, if we change the viewport without setting the right perspective, we get a stretched rendering of our object. For example, if we set a really wide viewport we get a vertically stretched rendering. The solution is first to set a viewport that is square like our texture (128x128). After rendering our object to the texture, we render the texture to the screen using the current screen resolution. This way OpenGL reduces the object to fit into the texture, and when we stretch the texture to the full size of the screen, OpenGL resizes the texture to fit perfectly over top of our 3D object. Hopefully I haven't lost anyone. Another quick example... If you took a 640x480 screenshot, and then resized the screenshot to a 256x256 bitmap, you could load that bitmap as a texture and stretch it to fit on a 640x480 screen. The quality would not be as good, but the texture should line up pretty close to the original 640x480 image.

On to the fun stuff! This function is really easy and is one of my preferred "design tricks". It sets a viewport with a size that matches our BlurTexture dimensions (128x128). It then calls the routine that renders the spring. The spring will be stretched to fit the 128x128 texture because of the viewport (128x128 viewport).

After the spring is rendered to fit the 128x128 viewport, we bind to the BlurTexture and copy the colour buffer from the viewport to the BlurTexture using glCopyTexImage2D. The parameters are as follows: GL_TEXTURE_2D indicates that we are using a 2-dimensional texture; 0 is the mip map level we want to copy the buffer to, the default level being 0; GL_LUMINANCE indicates the format of the data to be copied. I used GL_LUMINANCE because the final result looks better; this way the luminance part of the buffer will be copied to the texture. Other options are GL_ALPHA, GL_RGB, GL_INTENSITY and more. The next 2 parameters tell OpenGL where to start copying from (0,0). The width and height (128,128) is how many pixels to copy from left to right and how many to copy up and down. The last parameter is only used if we want a border, which we don't.

Now that we have a copy of the colour buffer (with the stretched spring) in our BlurTexture, we can clear the buffer and set the viewport back to the proper dimensions (640x480 - fullscreen).

IMPORTANT: This trick can be used only with double buffered pixel formats. The reason why is because all these operations are hidden from the viewer (done on the back buffer).

void RenderToTexture()	// Renders To A Texture
ProcessHelix();	// Render The Helix
glBindTexture(GL_TEXTURE_2D,BlurTexture);	// Bind To The Blur Texture
// Copy Our ViewPort To The Blur Texture (From 0,0 To 128,128... No Border)
glClearColor(0.0f, 0.0f, 0.5f, 0.5);	// Set The Clear Color To Medium Blue
glViewport(0 , 0,640 ,480);	// Set Viewport (0,0 to 640x480)
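Expanded from those fragments and the parameter walkthrough above, RenderToTexture plausibly looks like this hedged sketch:

void RenderToTexture()	// render to a texture
{
	glViewport(0, 0, 128, 128);	// shrink the viewport to match BlurTexture (128x128)

	ProcessHelix();	// render the helix into the back buffer

	glBindTexture(GL_TEXTURE_2D, BlurTexture);	// bind the blur texture
	// copy the viewport's luminance into the blur texture (from 0,0 to 128,128... no border)
	glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 0, 0, 128, 128, 0);

	glClearColor(0.0f, 0.0f, 0.5f, 0.5f);	// set the clear color to medium blue
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	// clear the buffers
	glViewport(0, 0, 640, 480);	// set the viewport back to fullscreen
}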
I first disable GEN_S and GEN_T (I'm addicted to sphere mapping, so my routines usually enable these instructions :P ). We enable 2D texturing, disable depth testing, set the proper blend function, enable blending and then bind the BlurTexture.

The next thing we do is switch to an ortho view; that way it's easier to draw a quad that perfectly fits the screen size. This is how we line up the texture over top of the 3D object (by stretching the texture to match the screen ratio). This is where problem two is resolved!

void DrawBlur(int times, float inc)	// Draw The Blurred Image
// Disable AutoTexture Coordinates
glEnable(GL_TEXTURE_2D);	// Enable 2D Texture Mapping
alphainc = alpha / times;	// alphainc=0.2f / Times To Render Blur
glBegin(GL_QUADS);	// Begin Drawing Quads
glTexCoord2f(0+spost,0+spost);	// Texture Coordinate ( 0, 0 )
glTexCoord2f(1-spost,0+spost);	// Texture Coordinate ( 1, 0 )
glTexCoord2f(1-spost,1-spost);	// Texture Coordinate ( 1, 1 )
spost += inc;	// Gradually Increase spost (Zooming Closer To Texture Center)
ViewPerspective();	// Switch To A Perspective View
glEnable(GL_DEPTH_TEST);	// Enable Depth Testing

We call the RenderToTexture function. This renders the stretched spring once thanks to our viewport change. The stretched spring is rendered to our texture, and the buffers are cleared. We then draw the "REAL" spring (the 3D object you see on the screen) by calling ProcessHelix(). Finally, we draw some blended quads in front of the spring. The textured quads will be stretched to fit over top of the REAL 3D spring.

void Draw (void)	// Draw The Scene

If you have any comments, suggestions, or if you know of a better way to implement this effect, contact me at rio@spinningkids.org. You are free to use this code however you want in productions of your own, but before you RIP it, give it a look and try to understand what it does; that's the only way ripping is allowed! Also, if you use this code, please give me some credit!

I also want to leave you all with a list of things to do (homework) :D 1) Modify the DrawBlur routine to get a horizontal blur, vertical blur and some more good effects (twirl blur!).

Ok, that should be all for now. Visit my site (and the SK one) for more upcoming tutorials: http://www.spinningkids.org/rio.

Dario Corno (rIo)
Jeff Molofee (NeHe)