How to combine depth and stencil tests?
I have a continuous height-mapped mesh that represents a landscape. I use the stencil test to create holes in the mesh: I draw the holes into the stencil buffer and then use it to discard mesh fragments. Everything works fine, except that my holes are visible through the mesh even when they are occluded by higher ground. Is there any way to apply depth testing to a hole while keeping my mesh drawn in a single call to glDrawElements()?
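A minimal sketch of one way this is often handled, under the assumption that the terrain's depth is already in the depth buffer (e.g. from a depth pre-pass); drawHoleGeometry() and indexCount are hypothetical names. The holes are stenciled with depth testing enabled, so holes behind higher ground never mark the stencil, and the mesh itself still needs only one glDrawElements():

    // Pass 1: mark holes in the stencil buffer, but only where the depth test passes.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);                               // holes shouldn't write depth
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // ...or color
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);           // stencil = 1 only where depth passes
    drawHoleGeometry();

    // Pass 2: draw the mesh once, discarding fragments where the stencil is set.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);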
What are some good learning resources for OpenGL?
I have been using OpenGL ES on the iPhone for a while now, and basically I feel pretty lost outside of the small set of commands I've seen in examples and adopted as my own. I would love to use OpenGL on other platforms and have a good understanding of it. Every time I search for a book I find these HUGE bibles that seem uninteresting and very hard for a beginner. If I had a couple of weekends to spend on learning OpenGL, what would be the best way to spend my time (and money)?
How to do decal clipping with OpenGL using a depth buffer comparison?
I'm already drawing decals on top of various flat surfaces, and that works great. But as soon as a decal approaches the edge of a surface, it naturally doesn't get clipped. Is there a way to add simple clipping by comparing the value stored in the depth buffer with the depth value (Z) of the decal pixel about to be written? If the depth value about to be written is about the same (within a range), the decal pixel gets drawn; otherwise it is discarded. Decals don't write to the depth buffer themselves. Example scene:
Calling shader functions inside other shaders
I'm new to OpenGL and GLSL, and a bit confused about the calling conventions of shader functions. I'm trying to implement various procedural noise algorithms in GLSL. I'd like to have a separate file for each noise type and call those functions from other fragment shaders. Say I have two files, perlin.glsl and simplex.glsl: perlin.glsl contains pnoise2, pnoise3, pnoise4, and simplex.glsl contains snoise2, snoise3, snoise4. I have another fragment shader, marble.frag, which calls both snoise2 and pnoise2 and has a main(). How do I call fragment shader functions from inside other fragment shaders? Is this considered good practice? Can you think of a better alternative? Thanks.
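For what it's worth, a common pattern, shown as a hedged C++ sketch (compileShader() and loadFile() are hypothetical helpers): GLSL has no #include, but several compiled shader objects of the same stage can be linked into one program, as long as exactly one of them defines main(). marble.frag then only needs forward declarations such as float snoise2(vec2 p);.

    GLuint noiseA = compileShader(GL_FRAGMENT_SHADER, loadFile("perlin.glsl"));
    GLuint noiseB = compileShader(GL_FRAGMENT_SHADER, loadFile("simplex.glsl"));
    GLuint marble = compileShader(GL_FRAGMENT_SHADER, loadFile("marble.frag")); // has main()

    GLuint program = glCreateProgram();
    glAttachShader(program, noiseA);   // provides pnoise2/pnoise3/pnoise4
    glAttachShader(program, noiseB);   // provides snoise2/snoise3/snoise4
    glAttachShader(program, marble);   // calls them; declares them at the top
    // (vertex shader attachment omitted for brevity)
    glLinkProgram(program);            // definitions are resolved at link time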
Loading a texture in OpenGL
I am working on a graphics project where I want to build a city in OpenGL with C. For the last few days I have been trying to load a texture, but nothing has worked; I have tried many code samples and followed some tutorials, but it still doesn't work. Here is my LoadTexture function:

    int LoadTexture(char *filename, int alpha)
    {
        int i, j = 0;              // index variables
        FILE *l_file;              // file pointer
        unsigned char *l_texture;  // pointer to the memory zone in which we will load the texture
        // windows.h gives us these types to work with bitmap files
        BITMAPFILEHEADER fileheader;
        BITMAPINFOHEADER infoheader;
        RGBTRIPLE rgb;

        num_texture++;             // the counter of the current texture is increased
        if ((l_file = fopen(filename, "rb")) == NULL) return (-1); // open the file for reading

        fread(&fileheader, sizeof(fileheader), 1, l_file);  // read the fileheader
        fseek(l_file, sizeof(fileheader), SEEK_SET);        // jump past the fileheader
        fread(&infoheader, sizeof(infoheader), 1, l_file);  // and read the infoheader

        // now we need to allocate the memory for our image (width * height * color depth)
        l_texture = (byte *) malloc(infoheader.biWidth * infoheader.biHeight * 4);
        // and fill it with zeros
        memset(l_texture, 0, infoheader.biWidth * infoheader.biHeight * 4);

        // at this point we can read every pixel of the image
        for (i = 0; i < infoheader.biWidth * infoheader.biHeight; i++)
        {
            fread(&rgb, sizeof(rgb), 1, l_file); // load an RGB value from the file
            l_texture[j + 0] = rgb.rgbtRed;      // red component
            l_texture[j + 1] = rgb.rgbtGreen;    // green component
            l_texture[j + 2] = rgb.rgbtBlue;     // blue component
            l_texture[j + 3] = alpha;            // alpha value
            j += 4;                              // go to the next position
        }
        fclose(l_file); // close the file stream

        glBindTexture(GL_TEXTURE_2D, num_texture); // bind the texture ID
        // the next commands set the texture parameters
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // if the u,v coordinates overflow [0,1] the image is repeated
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // magnification filter ("linear" produces better results)
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // minification filter
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

        // finally we define the 2D texture
        glTexImage2D(GL_TEXTURE_2D, 0, 4, infoheader.biWidth, infoheader.biHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);
        // and create 2D mipmaps for the minification filter
        gluBuild2DMipmaps(GL_TEXTURE_2D, 4, infoheader.biWidth, infoheader.biHeight, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);
        free(l_texture);      // free the memory we used to load the texture
        return (num_texture); // return the current texture's OpenGL ID
    }

I call the function in the init section:

    textureID = LoadTexture("Building.bmp", 255);

and in the display section I wrote:

    glBindTexture(GL_TEXTURE_2D, textureID);
    glBegin(GL_QUADS); // front
    glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);
    glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
    glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
    glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
    glEnd();
Creating an OpenGL FPS camera: I have the position and orientation vectors, now what?
I have been struggling to create a first-person camera in OpenGL ES 2.0 without using gluLookAt(). I grab the camera's orientation vectors (the way it's looking) from the current modelview matrix and use them to calculate the new forward/backward (Z) translation value. I then calculate the strafe (X) vector from the cross product of the look direction and the Y axis. So I have all the information I need to create a view matrix, but how do I do that without gluLookAt? Almost all the examples I've seen use gluLookAt, but no such function exists in OpenGL ES 2.0. Besides, one of the moderators on cprogramming.com mentioned that gluLookAt is not appropriate for FPS cameras: http://cboard.cprogramming.com/game-programming/135390-how-properly-move-strafe-yaw-pitch-camera-opengl-glut-using-glulookat.html
I am really confused by all the conflicting information I'm getting. I just want to create a first-person camera that moves forward and back (W and S keys), side to side (A and D keys), and rotates around its center (Y axis only), Wolfenstein style. Here is more or less my existing code:

    vec4 cur_look, dir_move, dir_strafe, cam_pos;
    mat4 mat_modelview, mat_rot, mat_temp;
    float lr; // left/right keypress increment value
    float ud; // up/down keypress increment value

    // grab the current camera orientation (look-at) from the modelview matrix
    cur_look[0] = mat_modelview[8];
    cur_look[1] = mat_modelview[9];
    cur_look[2] = mat_modelview[10];
    cur_look[3] = mat_modelview[11];

    // rotate the direction vector using the rotation matrix
    mat4_identity(mat_rot);
    mat4_identity(mat_temp);
    mat4_rotate_Y(mat_rot, mat_temp, lr); // create rotation matrix
    // multiply the current direction vector by the rotation matrix, result in dir_move
    mat4_mul_vec4(dir_move, mat_rot, cur_look);
    // normalize the fwd/back direction vector
    vec4_norm(dir_move, dir_move);

    // calculate the strafe vector using the cross product of Z and Y
    vec4_cross(dir_strafe, dir_move, (vec4){0.0, 1.0, 0.0, 0.0});
    // normalize the strafe vector
    vec4_norm(dir_strafe, dir_strafe);

    // detect key input
    // FWD/BACK
    if (key_pressed('w')) cam_pos[2] += dir_move[2] * INPUT_SENS_Z; // z
    if (key_pressed('s')) cam_pos[2] -= dir_move[2] * INPUT_SENS_Z; // z
    // ROTATION
    if (key_pressed(LEFT_ARROW))  lr += INPUT_SENS_ROT;
    if (key_pressed(RIGHT_ARROW)) lr -= INPUT_SENS_ROT;

    // build the new modelview matrix
    mat4_identity(mat_modelview);
    // multiply the modelview matrix by the rotation matrix
    mat4_mul(mat_modelview, mat_modelview, mat_rot);
    // create the translation matrix
    mat4_translate(mat_tran, cam_pos[0], 0.0, cam_pos[2]);
    // multiply the modelview matrix by the translation matrix
    mat4_mul(mat_modelview, mat_modelview, mat_tran);

    // update uniforms and draw the scene using glDrawArrays()
    glUniformMatrix4fv(loc_modelview, 1, GL_FALSE, mat_modelview);

Any help on this would be much appreciated!
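As a point of comparison, here is a hedged sketch (using GLM rather than the asker's math library) of what gluLookAt-free FPS view construction usually boils down to: the view matrix is the inverse of the camera's world transform, which for a yaw-only camera is a rotation by -yaw followed by a translation by -position.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 fpsView(const glm::vec3& camPos, float yaw /* radians */)
    {
        glm::mat4 rot   = glm::rotate(glm::mat4(1.0f), -yaw, glm::vec3(0, 1, 0));
        glm::mat4 trans = glm::translate(glm::mat4(1.0f), -camPos);
        return rot * trans; // inverse of (translate * rotate)
    }

    // The movement axes fall out of the yaw directly:
    //   forward = (sin(yaw), 0, -cos(yaw)),  strafe = cross(forward, (0,1,0))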
How to tell if a GLUT window has focus, from C#
How can I tell whether a GLUT window has focus? I'm using C# with Tao, and I'll use P/Invoke if necessary. Basically I want to ignore input when the window doesn't have focus.
How to rotate a direction
I'm working on a spotlight for my deferred renderer and I'm having trouble matching the mesh to the visual representation of the light. Right now my mesh is a cone: the apex of the cone is at (0,0,0), it has a height of 1 and a radius of 1. The direction of this cone is (0,-1,0) when its rotation is (0,0,0). The relevant GLSL code:

    float spot_alpha  = dot(-l, normalize(vec3(0, -1, 0)));
    float inner_alpha = cos(light.falloff);
    float outer_alpha = cos(light.radius);
    float spot = clamp((spot_alpha - outer_alpha) / (inner_alpha - outer_alpha), 0., 1.);

As you can see, the GLSL code uses a direction to define the area to be lit, so I could pass this direction in as a uniform, but then I would need to derive a rotation from it for the mesh to follow; or I can take the rotation and derive a direction from it. I don't know how to do either of these things. Can I rotate a direction? How do I do it? If I can't, is there another solution to this problem?
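One possible answer, sketched with GLM (hedged; rotationBetween() is an illustrative helper, not a known engine function): build the shortest-arc quaternion that carries the cone's rest direction (0,-1,0) onto the light's direction, and use the resulting matrix as the cone's rotation.

    #include <cmath>
    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    // Shortest-arc rotation taking unit vector a onto unit vector b.
    // Degenerate when a == -b (any axis perpendicular to a works there).
    glm::quat rotationBetween(const glm::vec3& a, const glm::vec3& b)
    {
        float s = std::sqrt(2.0f * (1.0f + glm::dot(a, b)));
        return glm::quat(0.5f * s, glm::cross(a, b) / s); // (w, xyz) half-angle form
    }

    glm::mat4 coneRotation(const glm::vec3& lightDir)
    {
        const glm::vec3 rest(0.0f, -1.0f, 0.0f); // cone direction at rotation (0,0,0)
        return glm::mat4_cast(rotationBetween(rest, glm::normalize(lightDir)));
    }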
MSAA/CSAA/FXAA: how to set the mode in OpenGL?
I'm learning OpenGL, and something I'm stuck on is anti-aliasing, especially turning it on and off at runtime. I know that I can set the sample count when I create an FBO and blit it over to the final window; when I want to change the mode, I switch the FBO and everything is fine. What I'm completely stuck on is how to change the mode itself and, just as important, how to query the modes that the card supports. By "mode" I mean CSAA, MSAA, etc. I can't find much on this; at least I know that it is vendor specific. I hope someone can point me in the right direction. Thanks.
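A hedged sketch of the query side: plain MSAA limits are core GL queries, while CSAA is exposed through the NVIDIA extension GL_NV_framebuffer_multisample_coverage, so its availability has to be discovered via the extension list. FXAA is just a post-process shader, so "support" is only a matter of running it.

    #include <cstring>

    GLint maxSamples = 0;
    glGetIntegerv(GL_MAX_SAMPLES, &maxSamples);   // upper bound for multisampled FBOs

    bool hasCSAA = false;                         // CSAA is NVIDIA-specific
    GLint numExtensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
    for (GLint i = 0; i < numExtensions; ++i) {
        const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, i);
        if (std::strcmp(ext, "GL_NV_framebuffer_multisample_coverage") == 0)
            hasCSAA = true;
    }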
Shader special effect: unit selection
I would like to know how the shaders used to show which unit is selected are made. Here is an image to illustrate. How can the merge effect of the blue circles be achieved? I haven't found any explanation of this effect. I know that to project those textures I can build a decal system, but I don't know how to create the merging effect.
How should I structure my Android platform board game?
I'm new to developing Android games, but not new to developing mobile games (J2ME). For a school project I'm currently developing a board game with two parts: a board and a spinning wheel (both displayed at the same time). The user can zoom in/out and scroll around the board, and spin the wheel. The wheel is also animated and resized during the game. The board is built from tiles (2-4 different tiles on one image per board) and the wheel is an image with numbers drawn on it using graphics calls. My question is: what is the best practice to achieve the best performance (the game has to run on every possible Android version)?
- Should I use the Android Canvas or OpenGL?
- Is there a mechanism for drawing tiles and animations, or should I just implement it myself using drawImage()?
- Should I separate the wheel and the board into two different threads?
- Should I separate the wheel and the board into two activities, or put them in one activity and draw each part separately?
- What would be the best way to resize the wheel during gameplay? Scaling the wheel image (the animation has to be smooth — OpenGL vs. Canvas)?
- What would be the best way to make the board zoomable? Should I scale every image on the board when a zoom is detected, or does Android have a better way to do this?
- What would be the best way to make the board scrollable? Should I implement a camera that displays just a piece of the board, or does Android have a better way to do this?
Started game development with no computer graphics background: should I learn tools or concepts?
I am in the 6th semester of my computer science bachelor's program and working as an intern at a startup. I started game development using AndEngine, and things are going well because I have a good hold on OOP and Java. But I don't have any experience with OpenGL programming, and I never took a computer graphics course. I want to develop 3D games, and there are tools available such as Unity3D. My question is: should I master the tools, or take online computer graphics lectures to learn the basics first? I want to continue game development as my profession. So what should I do: start learning from scratch, or learn the existing tools and just dive into development? I see successful designers with no academic background who just did a Photoshop course and are now in a software house making websites, sprites, etc.
OpenGL: is it possible to use VAOs without specifying a VBO?
All the tutorials I can find about VAOs (vertex array objects) show how to use them by configuring vertex attributes and binding a VBO (vertex buffer object). But I want to create a VAO that is configured for a set of VBOs in combination with a fixed shader, where each buffer uses the same data layout (vertex, uv, color, etc.). So I want to create one VAO for multiple VBOs that will all be drawn using one shader. I couldn't find any demo of this, so I decided to just give it a try, but it doesn't work: it crashes on the glDrawArrays call, as if the VBO weren't bound. Here is the code I'm using.
Rendering:

    // prepare vertex attributes
    glBindVertexArrayOES(vao);
    // upload the buffer to the GPU
    glBindBuffer(GL_ARRAY_BUFFER, pool->next());
    glBufferSubData(GL_ARRAY_BUFFER, 0, parts * buffer.stride() * 6, buffer.getBuffer());
    // draw the triangles
    glDrawArrays(GL_TRIANGLES, 0, parts * 6);
    glBindVertexArrayOES(0);

VAO creation:

    glBindVertexArrayOES(vao);
    glEnableVertexAttribArray(ls.position);
    glVertexAttribPointer(ls.position, 2, GL_FLOAT, GL_FALSE, buffer.stride(), 0);
    glEnableVertexAttribArray(ls.color);
    glVertexAttribPointer(ls.color, 3, GL_FLOAT, GL_FALSE, buffer.stride(), GL_BUFFER_OFFSET(buffer.dataTypeSize * 2));
    glBindVertexArrayOES(0);

where ls is a simple struct that holds the attribute locations. In the rendering part, swapping glBindBuffer and glBindVertexArrayOES around didn't work either. So the question is: is it even possible to do this, or will I have to create a VAO for each buffer? And if I have to create a VAO for each VBO, is it possible to update the VBO's data using glBufferSubData in combination with a VAO?
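A hedged sketch of why this crashes, with one workaround. A VAO records, per attribute, whichever buffer was bound to GL_ARRAY_BUFFER at the time glVertexAttribPointer was called; the buffer bound at draw time is irrelevant. So with one layout shared across many VBOs, the pointers have to be respecified when switching buffers (or one VAO kept per VBO). glBufferSubData, by contrast, only changes a buffer's contents, not binding state, so updating a VBO's data works regardless of which VAO references it.

    // ls.position/ls.color and the ES 2.0 OES entry points follow the question.
    void attachBufferToVAO(GLuint vao, GLuint vbo, GLint posLoc, GLint colLoc,
                           GLsizei stride, size_t floatSize)
    {
        glBindVertexArrayOES(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo); // this binding is what the VAO captures
        glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, stride, (void*)0);
        glVertexAttribPointer(colLoc, 3, GL_FLOAT, GL_FALSE, stride, (void*)(floatSize * 2));
        glBindVertexArrayOES(0);
    }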
OpenGL profiling with AMD PerfStudio 2
I'm rendering only a really small number of polygons for my UI, but I still tried to increase the FPS. In the end I removed redundant calls, which increased the FPS. I really don't want to lose FPS for nothing, so I keep looking for more improvements. The first thing I noticed is the "huge" time span where no calls are made before SwapBuffers (the black one). I know that OpenGL works asynchronously, so SwapBuffers has to wait until everything is done — but shouldn't PerfStudio mark that time as black too? Correct me if I am wrong. The second thing I noticed is that some glUniform2f calls just take longer (the brown ones). They should all upload two floats to the GPU; how can the time be so different from call to call? The program isn't even changed in between. I also tried other tools like gDEBugger and CodeXL, but they often crashed and show fewer statistics (only number of calls, redundant calls, etc.). EDIT: I also realized that the draw calls have different durations, which was obvious to me, but sometimes drawing more vertices is faster than drawing fewer vertices.
How many texture units are available per shader stage, compared to the total number available on the hardware?
The answer to the question "How many textures can I usually bind at once?", given here, explains that OpenGL 3.x defines the minimum per-stage limit to be 16, and that there can be a higher limit. This still leaves some ambiguity that I would like to clarify. If we calculate the total number of texture units to be, e.g., 192 (see here for more details), does this mean:
1. that we theoretically have access to 192 texture units in the pixel shader? OR
2. that we have access to 192 texture units across all stages, which still doesn't mean we have access to 192 texture units in any particular stage? OR
3. something else?
If 2 is correct, how do we find out the maximum number of texture units available to any given stage?
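A hedged sketch of the relevant queries: the combined number and the per-stage numbers are separate GL limits (which corresponds to interpretation 2), and each stage can be asked individually.

    GLint fragmentUnits = 0, vertexUnits = 0, geometryUnits = 0, combinedUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS,          &fragmentUnits); // fragment stage
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS,   &vertexUnits);   // vertex stage
    glGetIntegerv(GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS, &geometryUnits); // geometry stage
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combinedUnits); // total across all stages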
How can I make this tile map correctly?
I'm having the following problem creating this hexa-tiled map; this is what I wanted. EDIT: OK, I've managed to do this:

    const float scaleX = ((float)tileWidth) / 10000, scaleY = ((float)tileHeight) / 10000;
    const float hexagon_r  = scaleY / 2;
    const float hexagon_dx = hexagon_r * cosf(30 * 3.14f / 180.0);
    const float hexagon_dy = hexagon_r * sinf(30 * 3.14f / 180.0);
    const float hexagon_gx = 2.0 * hexagon_dx;
    const float hexagon_gy = 2.0 * hexagon_dx * sinf(60 * 3.14f / 180.0);

    Vector2 auxPos;
    auxPos.x = 0;
    auxPos.y = 0;
    float j = 0;
    for (unsigned int y = 0; y < layer->layerHeight; y++, j += hexagon_gx / 2)
    {
        auxPos.y = y * (hexagon_gy / 2);
        auxPos.x = j;
        for (unsigned int x = 0; x < layer->layerWidth; x++)
        {
            auxPos.x += hexagon_gx * 2;
            float angle = 90 * 3.14f / 180;
            auxPos.y += hexagon_gy / 2;
            SetData(stack->vData, stack->vCounter + 0,  auxPos.x + ((-scaleX) * cosf(angle) - (-scaleY) * sinf(angle)));
            SetData(stack->vData, stack->vCounter + 1,  auxPos.y + ((-scaleX) * sinf(angle) + (-scaleY) * cosf(angle)));
            SetData(stack->vData, stack->vCounter + 2,  layer->layerDepth);
            SetData(stack->vData, stack->vCounter + 3,  auxPos.x + ((scaleX) * cosf(angle) - (-scaleY) * sinf(angle)));
            SetData(stack->vData, stack->vCounter + 4,  auxPos.y + ((scaleX) * sinf(angle) + (-scaleY) * cosf(angle)));
            SetData(stack->vData, stack->vCounter + 5,  layer->layerDepth);
            SetData(stack->vData, stack->vCounter + 6,  auxPos.x + ((scaleX) * cosf(angle) - (scaleY) * sinf(angle)));
            SetData(stack->vData, stack->vCounter + 7,  auxPos.y + ((scaleX) * sinf(angle) + (scaleY) * cosf(angle)));
            SetData(stack->vData, stack->vCounter + 8,  layer->layerDepth);
            SetData(stack->vData, stack->vCounter + 9,  auxPos.x + ((-scaleX) * cosf(angle) - (scaleY) * sinf(angle)));
            SetData(stack->vData, stack->vCounter + 10, auxPos.y + ((-scaleX) * sinf(angle) + (scaleY) * cosf(angle)));
            SetData(stack->vData, stack->vCounter + 11, layer->layerDepth);
        }
    }

I can't seem to make this work...
Rendering text with SDL2 and OpenGL
I've been trying to render text in my OpenGL scene using SDL2. The tutorial I came across is this one: Rendering text. I followed the same code, and the text renders fine. However, the issue I'm having is that there is obviously a "conflict" between the OpenGL rendering and the SDL_Renderer used in the code: when I run my scene, the text displays but everything else keeps flickering. The following GIF shows the issue. Any idea how to overcome this while still using just SDL2 and OpenGL, without another library or anything? Thanks.
OpenGL slower than Canvas
Up to 3 days ago I used a Canvas in a SurfaceView to do all my graphics operations, but I switched to OpenGL because my game went from 60 FPS down to 30-45 as the number of sprites increased in some levels. However, I find myself disappointed: OpenGL now reaches only around 40-50 FPS at all levels. Surely (I hope) I'm doing something wrong. How can I get a stable 60 FPS? My game is pretty simple and I cannot believe it's impossible to reach that. I use 2D sprite textures applied to quads for all objects. I use a transparent GLSurfaceView; the real background is applied to an ImageView behind the GLSurfaceView. Some code:

    public MyGLSurfaceView(Context context, AttributeSet attrs) {
        super(context);
        setZOrderOnTop(true);
        setEGLConfigChooser(8, 8, 8, 8, 0, 0);
        getHolder().setFormat(PixelFormat.RGBA_8888);
        mRenderer = new ClearRenderer(getContext());
        setRenderer(mRenderer);
        setLongClickable(true);
        setFocusable(true);
    }

    public void onSurfaceCreated(final GL10 gl, EGLConfig config) {
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glShadeModel(GL10.GL_SMOOTH);
        gl.glDisable(GL10.GL_DEPTH_TEST);
        gl.glDepthMask(false);
        gl.glEnable(GL10.GL_ALPHA_TEST);
        gl.glAlphaFunc(GL10.GL_GREATER, 0);
        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
        gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(0, width, height, 0, -1f, 1f);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
    }

    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

        // Draw all the graphic objects.
        for (byte i = 0; i < mGame.numberOfObjects(); i++)
            mGame.getObject(i).draw(gl);

        // Disable the client state before leaving.
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    }

mGame.getObject(i).draw(gl) looks like this for all objects:

    // HERE there is always a glTranslatef and glScalef transformation, and sometimes glRotatef
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTexPointer[0]);
    // Point to our vertex buffer
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
    // Draw the vertices as a triangle strip
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, mVertices.length / 3);

EDIT: After some testing it seems to be due to the transparent GLSurfaceView. If I delete this line of code:

    setEGLConfigChooser(8, 8, 8, 8, 0, 0);

the background becomes all black, but I reach 60 FPS. What can I do?
Optimized algorithm for line-sphere intersection in GLSL
Well, hello then! I need to find the intersection between a line and a sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way:

    // The line passes through p1 and p2:
    vec3 p1 = (...);
    vec3 p2 = (...);
    // Sphere center is p3, radius is r:
    vec3 p3 = (...);
    float r = ...;

    float x1 = p1.x; float y1 = p1.y; float z1 = p1.z;
    float x2 = p2.x; float y2 = p2.y; float z2 = p2.z;
    float x3 = p3.x; float y3 = p3.y; float z3 = p3.z;

    float dx = x2 - x1;
    float dy = y2 - y1;
    float dz = z2 - z1;

    float a = dx * dx + dy * dy + dz * dz;
    float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3));
    float c = x3 * x3 + y3 * y3 + z3 * z3 + x1 * x1 + y1 * y1 + z1 * z1
            - 2.0 * (x3 * x1 + y3 * y1 + z3 * z1) - r * r;

    float test = b * b - 4.0 * a * c;
    if (test > 0.0) {
        // Hit (according to Treebeard, "a fine hit").
        float u = (-b - sqrt(test)) / (2.0 * a);
        vec3 hitp = p1 + u * (p2 - p1);
        // Now use hitp.
    }

It works perfectly! But it seems slow... I'm new to GLSL. You can answer this question in two ways: tell me there is no better solution, showing some proof or strong evidence; or tell me about GLSL features (vector APIs, primitive operations) that make the above algorithm faster, showing an example. Thanks a lot!
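A hedged sketch of the vectorized form, written here in C++ with GLM (which mirrors GLSL's dot/sqrt one-to-one, so it translates directly back into a shader): working on vec3s instead of nine scalar temporaries shortens the code and hands the compiler vector operations directly. The math is the same quadratic; note that c collapses to dot(m, m) - r*r with m = p1 - center.

    #include <cmath>
    #include <glm/glm.hpp>

    bool lineSphere(const glm::vec3& p1, const glm::vec3& p2,
                    const glm::vec3& center, float r, glm::vec3& hit)
    {
        glm::vec3 d = p2 - p1;       // line direction (unnormalized)
        glm::vec3 m = p1 - center;   // line start relative to sphere center
        float a = glm::dot(d, d);
        float b = 2.0f * glm::dot(m, d);
        float c = glm::dot(m, m) - r * r;
        float disc = b * b - 4.0f * a * c;
        if (disc <= 0.0f) return false;                // miss (or tangent)
        float u = (-b - std::sqrt(disc)) / (2.0f * a); // nearer of the two roots
        hit = p1 + u * d;
        return true;
    }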
Is learning OpenGL 2.1 today a bad idea?
Possible duplicate: Is learning OpenGL 2.1 useless today? That question was asked around two years ago, and I have read the answers to it, but that was two years ago; I would like to know whether it's a bad idea for me to learn OpenGL 2.1 today. I bought the OpenGL SuperBible (4th edition) rather than the 5th because a user in the ratings said it was much better, and I believed him. But now I'm afraid that was a long time ago. Thanks for all your feedback!
How are CSS and WebGL coordinates related?
I'd like to build a simple framework whose rendering combines web page DOM elements with WebGL, such that they're manipulable in the same coordinate space. How does the plain CSS coordinate system relate to the one used by WebGL? How can I make sure the two line up (e.g. have a div and a WebGL quad transformed the same way)? Can I expect CSS 3D transform coordinates to correspond to WebGL?
How do I fill a shape with a texture in LibGDX?
I am making a 2D racing game, and I want to fill the area of the ground object with a texture. I have the shapes of the ground (Box2D shapes). If I make full images of the ground with the textures baked in, it uses a lot of RAM, so I instead want to fill the shapes with copies of a small repeating texture.
Using large textures on limited hardware
I've run into a problem where some of the models I'm loading can have very large textures (2000x2000, for example). While my desktop computer loads them just fine, my laptop segfaults at the glTexImage2D call. I've thought of a couple of things I could do in this situation: before giving the image to OpenGL, I could downscale it so it's below the max texture size, but I'm not sure how to then stretch the image back to its original width and height. I could also outright refuse to load the texture and replace it with a generic repeating texture, like Valve's black-and-purple grid, but this isn't at all reliable or desirable. Are these two options viable? How is it done in game engines like Unity or Source? Are there any other options better than my ideas?
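On the first option, a hedged sketch: texture coordinates are normalized (0..1), so a downscaled texture automatically stretches over the same geometry — nothing extra is needed to restore the original on-screen size; it just gets blurrier. downscaleImage() and imgWidth/imgHeight are hypothetical names (on desktop GL, gluScaleImage can serve as the resampler).

    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // per-driver limit, e.g. 1024/4096/16384

    int w = imgWidth, h = imgHeight;                // dimensions of the loaded image
    while (w > maxSize || h > maxSize) { w /= 2; h /= 2; }

    if (w != imgWidth)                              // only resample when needed
        pixels = downscaleImage(pixels, imgWidth, imgHeight, w, h);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);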
What causes some computers to have no or slow OpenGL, and how can it be fixed?
I am using Java with JOGL to create OpenGL-enhanced 2D graphics. The graphics operations I use are nothing fancy and should be supported by almost any recent graphics card; for example, my game runs great on a netbook. I was hoping the game would run on most computers, and it runs fine on my own. However, I found that some computers have very slow performance (apparently a software fallback, yielding about 2 frames per second). I ran an LWJGL app on one such computer; it doesn't run at all (it reports something like org.lwjgl.LWJGLException: Pixel format not accelerated — you can find various forum threads complaining about this but no apparent solutions, except the suggestion that it is a driver problem). Other OpenGL software does not seem to work either. I also found that the Flash version of my game, with exactly the same graphical effects, performs pretty well full screen on that same computer. The computer in question has a recent ATI card, but unfortunately I have no access to the driver manager. The problem appears fairly widespread. I think it is very unfortunate that OpenGL does not always provide access to graphics features found on most computers; this makes it less attractive for casual 2D games, which I expect to run on any computer. Did any of you run into this problem and manage to fix it? AFAIK, both NVIDIA and ATI provide OpenGL as part of their standard driver sets, but maybe there are some exceptions? Is this problem caused by third-party drivers not supporting OpenGL, and can it be fixed by installing better drivers? How many other graphics cards are out there without OpenGL drivers?
EDIT: As a final note, I can conclude that OpenGL is just not well supported on Windows machines. Microsoft seems to have done its best to keep it off their platform, for example by deliberately leaving OpenGL drivers out of some Windows driver bundles: get the vendor drivers and you get OpenGL; get the standard drivers that Windows downloads for you and you don't. This has been causing no end of trouble for others as well. For example, to implement WebGL, web browsers use ANGLE, an OpenGL ES emulator on top of DirectX. So what we'd really need is something like ANGLE, only for full OpenGL.
Why are there no glClear() and glClearColor() methods in GL30?
In the GL30 interface, both glClear() and glClearColor() are absent. I tried to call Gdx.gl30.glClear(GL30.GL_COLOR_BUFFER_BIT) inside render(), but it threw a null pointer exception. So I checked the interface: there is no glClear() method in GL30, only in GL20. But the OpenGL documentation says these functions are supported in both v2.0 and v3.0. Why is it not included in LibGDX?
Calculating a directional shadow map using the camera frustum
I'm trying to calculate the 8 corners of the view frustum so that I can use them to compute the ortho projection and view matrix needed to render shadows based on the camera's position. Currently I'm not sure how to convert the frustum corners from local space into world space. I have calculated the frustum corners in local space as follows (correct me if I'm wrong):

    float tan = 2.0 * std::tan(m_Camera->FOV * 0.5);
    float nearHeight = tan * m_Camera->Near;
    float nearWidth  = nearHeight * m_Camera->Aspect;
    float farHeight  = tan * m_Camera->Far;
    float farWidth   = farHeight * m_Camera->Aspect;

    Vec3 nearCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Near;
    Vec3 farCenter  = m_Camera->Position + m_Camera->Forward * m_Camera->Far;

    Vec3 frustumCorners[8] = {
        nearCenter - m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // near bottom left
        nearCenter + m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // near top left
        nearCenter + m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // near top right
        nearCenter - m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // near bottom right
        farCenter  - m_Camera->Up * farHeight  - m_Camera->Right * nearWidth, // far bottom left
        farCenter  + m_Camera->Up * farHeight  - m_Camera->Right * nearWidth, // far top left
        farCenter  + m_Camera->Up * farHeight  + m_Camera->Right * nearWidth, // far top right
        farCenter  - m_Camera->Up * farHeight  + m_Camera->Right * nearWidth, // far bottom right
    };

How do I move these corners into world space?
Update: I'm still not sure whether what I'm doing is right. I've also attempted to build the ortho projection by looping through the frustum corners and taking the min and max x, y, z coordinates, then simply setting the values of the projection as left = minX, right = maxX, top = maxY, bottom = minY, near = maxZ, far = minZ. I've searched the internet, but all the tutorials use hard-coded values, so their shadow maps don't apply to an open world, only to a restricted portion of the scene. Any help? Pseudocode is preferred, as my linear algebra skills (and reading skills) aren't that great.
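Two equivalent routes for the world-space question, as a hedged GLM sketch. Note that corners built from m_Camera->Position/Forward/Up/Right are already world-space if those vectors are world-space; the inverse-view route applies when the corners are built around the origin in view space.

    // (a) view space -> world space: only the inverse view matrix is needed
    glm::vec4 worldCorner = glm::inverse(camera.getView()) * viewSpaceCorner;

    // (b) unproject the 8 corners of the NDC cube (+-1 in each axis) through
    //     inverse(projection * view), remembering the perspective divide
    glm::mat4 invVP = glm::inverse(camera.getProjection() * camera.getView());
    glm::vec4 c = invVP * glm::vec4(ndcX, ndcY, ndcZ, 1.0f);
    c /= c.w;  // perspective divide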
Setting a uniform float in a fragment shader results in strange values — is this a type conversion, and how can it be fixed?
First, some details:
- I'm learning OpenGL from the tutorials on https://open.gl
- My computer runs Linux Mint 18.1 Xfce 64-bit
- My graphics card is a GeForce GTX 960M
- OpenGL version: 4.5.0 NVIDIA 375.66; GLSL version: 4.50 NVIDIA
- Graphics card driver: nvidia-375, version 375.66-0ubuntu0.16.04.1
- CPU: Intel Core i7 6700HQ
The code I'm working on can be found here: https://github.com/Faison/sdl2-learning/blob/8a61032d20edf91cfa60f665e1bb4d72e58f634b/phase_01_initial_setup/main.c (the Makefile is in the same directory). In a fragment shader, I'm trying to make an image do a sort of "flipping mirror" animation (lines 44-57):

    #version 450 core
    in vec3 Color;
    in vec2 Texcoord;
    out vec4 outColor;
    uniform sampler2D tex;
    uniform float factor;
    void main()
    {
        if (Texcoord.y < factor)
            outColor = texture(tex, Texcoord) * vec4(Color, 1.0);
        else
            outColor = texture(tex, vec2(Texcoord.x, 1.0 - Texcoord.y));
    }

When factor is 1, the image should be right side up with some color on it. When factor is 0, the image should be upside down with no color added. When factor is 0.5, the top half should be right side up and the bottom half upside down. Currently, that is only the case if I replace factor with a literal number. When I set the uniform factor with glUniform1f(), I get very strange results. To illustrate, I added some debug code at lines 188-197 that sets the uniform to one number, reads the number back from the uniform, and prints both values:

    GLfloat factorToSet = 1.0f;
    GLfloat setFactor = 0.0f;
    GLint uniFactor = glGetUniformLocation(shader_program, "factor");

    while (factorToSet >= 0.0f) {
        glUniform1f(uniFactor, factorToSet);
        glGetUniformfv(shader_program, uniFactor, &setFactor);
        printf("Factor of %.1f becomes %f\n", factorToSet, setFactor);
        factorToSet -= 0.1;
    }

And here are the results:

    Factor of 1.0 becomes 0.000000
    Factor of 0.9 becomes 2.000000
    Factor of 0.8 becomes 0.000000
    Factor of 0.7 becomes 2.000000
    Factor of 0.6 becomes 0.000000
    Factor of 0.5 becomes 0.000000
    Factor of 0.4 becomes 2.000000
    Factor of 0.3 becomes 36893488147419103232.000000
    Factor of 0.2 becomes 0.000000
    Factor of 0.1 becomes 36893488147419103232.000000
    Factor of 0.0 becomes 0.000000

So with what little I understand about OpenGL and how scalar types are stored in binary, I'm thinking my GLfloat is getting converted into something else on the way to the shader's uniform float — but I'm grasping at straws. What could be causing this strange conversion between the number I send to the uniform and the value the uniform becomes? What could I do to fix it, if it's possible to fix? Thanks in advance for any help and leads, I really appreciate it :)
An additional note after receiving a working answer: George Hanna provided a link to a post where someone had a similar issue. The comments suggested using -DGL_GLEXT_PROTOTYPES as a CFLAG. So I rolled my local code back to glUniform1f(), added -DGL_GLEXT_PROTOTYPES to the Makefile, and everything worked! Even crazier, all the compiler warnings about implicit declarations of OpenGL functions were gone! So, in addition to the answer below, if you have this issue, try adding -DGL_GLEXT_PROTOTYPES to your CFLAGS. (You can also get this effect by adding #define GL_GLEXT_PROTOTYPES before any OpenGL includes.)
How can I implement a side-scrolling seamless fog shader?
I've run into a problem trying to code a 2D game. In my game you run through underground dungeons and fight enemies; for ambiance I've added a smoke/fog shader superimposed over the dungeon backdrop. However, the noise fog isn't seamless with respect to movement: the fog stays the same even when moving through the map — it doesn't move with the player camera. How do I make it so the fog is generated as the camera moves sideways or upwards while still being seamless, i.e. fitting in with the fog from before the camera moved? For example, when I move right, the fog background should move left, instead of being the same backdrop anywhere on the map. The shader I've set up is based on this one by @patriciogv (with noise based on Morgan McGuire's):

    // Author @patriciogv - 2015
    // http://patriciogonzalezvivo.com

    #ifdef GL_ES
    precision mediump float;
    #endif

    uniform vec2 u_resolution;
    uniform vec2 u_mouse;
    uniform float u_time;

    float random (in vec2 _st) {
        return fract(sin(dot(_st.xy, vec2(12.9898, 78.233))) * 43758.5453123);
    }

    // Based on Morgan McGuire @morgan3d
    // https://www.shadertoy.com/view/4dS3Wd
    float noise (in vec2 _st) {
        vec2 i = floor(_st);
        vec2 f = fract(_st);

        // Four corners in 2D of a tile
        float a = random(i);
        float b = random(i + vec2(1.0, 0.0));
        float c = random(i + vec2(0.0, 1.0));
        float d = random(i + vec2(1.0, 1.0));

        vec2 u = f * f * (3.0 - 2.0 * f);
        return mix(a, b, u.x) + (c - a) * u.y * (1.0 - u.x) + (d - b) * u.x * u.y;
    }

    #define NUM_OCTAVES 5

    float fbm (in vec2 _st) {
        float v = 0.0;
        float a = 0.5;
        vec2 shift = vec2(100.0);
        // Rotate to reduce axial bias
        mat2 rot = mat2(cos(0.5), sin(0.5), -sin(0.5), cos(0.50));
        for (int i = 0; i < NUM_OCTAVES; ++i) {
            v += a * noise(_st);
            _st = rot * _st * 2.0 + shift;
            a *= 0.5;
        }
        return v;
    }

    void main() {
        vec2 st = gl_FragCoord.xy / u_resolution.xy * 3.;
        st += st * abs(sin(u_time * 0.1) * 3.0);
        vec3 color = vec3(0.0);

        vec2 q = vec2(0.);
        q.x = fbm(st + 0.00 * u_time);
        q.y = fbm(st + vec2(1.0));

        vec2 r = vec2(0.);
        r.x = fbm(st + 1.0 * q + vec2(1.7, 9.2) + 0.15 * u_time);
        r.y = fbm(st + 1.0 * q + vec2(8.3, 2.8) + 0.126 * u_time);

        float f = fbm(st + r);

        color = mix(vec3(0.101961, 0.619608, 0.666667),
                    vec3(0.666667, 0.666667, 0.498039),
                    clamp((f * f) * 4.0, 0.0, 1.0));
        color = mix(color, vec3(0, 0, 0.164706), clamp(length(q), 0.0, 1.0));
        color = mix(color, vec3(0.666667, 1, 1), clamp(length(r.x), 0.0, 1.0));

        gl_FragColor = vec4((f * f * f + .6 * f * f + .5 * f) * color, 1.);
    }
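A hedged sketch of one fix (the uniform name u_camera and the camera variables are assumptions): since the fbm field is defined everywhere in 2D, offsetting the noise domain by the camera position makes the fog scroll with the world while staying seamless. In the shader, before the fbm calls, something like st += u_camera / u_resolution;, and on the application side:

    GLint loc = glGetUniformLocation(program, "u_camera");
    glUniform2f(loc, cameraX, cameraY);   // camera offset in pixels, updated every frame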
A two-component color model
RGB is the natural color model for OpenGL, but a lot of other color models exist: CMY(K) for printers, YUV for JPEG, its little cousins YCbCr and YCoCg, HSL and HSV from the 70s, and so on. All these models tend to share a common property: they are based on three components. Therefore my question is: does a two-component color model exist? I'm surprised not to find any. I was expecting something along the lines of "hue + lightness" to exist. I guess it cannot be as "complete" as a true three-component color model, but a fine enough approximation would be good for my use case. The end objective is to store the two components in a single BC5 texture (GL_COMPRESSED_RED_GREEN_RGTC2 in OpenGL). A third component would require a second fetch into a second texture, which hurts performance.
OpenGL tile rendering
I'm currently trying to render a tile map using OpenGL 2.1 and GLSL 1.2, and I would like to draw every tile in just one draw call. I use a single texture containing all the tiles, identifying each one by an index. The vertex data per tile is: vec2 worldPos, the position used to place the tile quad in world coordinates; and vec2 texCoord, the UV coordinates (top-left corner), calculated on the CPU from the tile index. But I can't find a way to draw everything in one draw call. The UV coordinates can't be calculated in the shader because the vertex shader doesn't know which corner of the quad it is processing. I can't draw by element either, because my quad vertex data only contains 4 vertices, and storing repeated vertices wastes memory — it would only work if I could use a separate element buffer just for the vertices (0, 1, 2, 3, 0, 1, ...). Does someone have a suggestion on how I should proceed? Thanks!
What are some ways of making a game engine centered around drawing only vector lines and polygons?
I've always loved the look of games that use simple lines and polygons for graphics; Rez is one of my favorite games visually. I'm a programmer and designer first and foremost, I'm horrible at creating Actual Art, and the idea of making a game like Rez, using mostly line and polygon drawing, really appeals to me. Sort of like how a lot of programmers like to make roguelikes with minimal or ASCII art, I really like the idea of making a game just using trigonometry to draw lines in cool ways.
I've been making a game for a school project using my school's proprietary game engine. Instead of using its 3D model and 2D sprite drawing capabilities, I opted to make it using only the engine's "debug draw" functionality. I went so far as to create a library of functions that let me create "VecSprites" — collections of lines that act the same way as sprites, in both 2D and 3D contexts. (For 3D, I do my own matrix math to calculate perspective and such, forgoing the engine's own 3D rendering capabilities entirely.) Here is a video of my current project in action.
Going forward, though, I want to create my own game engine that will allow me to make these sorts of games on my own, without using someone else's engine. I have plenty of experience with game engine architecture, but I don't know much about actually drawing things to the screen. (Whenever I made a from-scratch game project for school, someone else was in charge of the DirectX or OpenGL side of things.) I have, however, researched the different methods of rasterizing lines and curves to the screen.
What I'm wondering is: if I want to make a game engine specifically for drawing only primitive lines and circles (and then extend it with classes to allow grouping lines together into shapes), where is my best bet for getting started? In the past I fooled around with Cairo, only to be disappointed by the fact that it currently isn't really meant for rendering graphics in real time on the GPU. Two years ago I made a simple game project using it, and it worked great until I ran it at 1080p — as it turns out, pushing 1920x1080x4 bytes to the GPU sixty times a second isn't exactly what modern computers were meant to do.
So where do I begin? I want to draw primitive lines and polygons on PCs, specifically Windows and possibly Linux, as fast as possible. I wouldn't mind going through the effort of creating my own systems for drawing anti-aliased lines, filled polygons, and so forth; I just want to know what my options are for doing such things. Do I draw lines and polygons using low-level OpenGL calls and go from there? Is it possible to pass a bunch of data describing the shapes I want to draw to a shader, and have the shader draw them? (I'm only minimally familiar with how shaders work.) I know this question is kind of broad, but I really want to set out on this adventure to try and make this thing, and I just don't really know where to begin, at least as far as the drawing side of things is concerned.
Sprite quickly disappears after rendering
I'm currently making Space Invaders using the game loop pattern as described here. I have an Entity class from which a Spaceship class derives. The base Entity class contains all the general entity data, such as the x/y position, speed, etc. Currently, the x and y positions are initialized to (0,0), so the spaceship renders in the center of the screen. I tried initializing the y position to a different coordinate, but for some reason the spaceship just keeps moving vertically, as if the y position were being accumulated each frame. I think the problem is that in the Render() method of the Spaceship class I translate the model matrix by the ship's x and y position, which would explain why there is no vertical movement when the coordinates are (0,0). I'm not sure this is the proper way to structure Render(), but I couldn't think of another way to change the object's position. Below is the relevant code.
Spaceship class definition:

    class Spaceship : public Entity {
    public:
        Spaceship(float xDirect, float yDirect, float xPosition, float yPosition,
                  float speed, float rState, ShaderProgram* program,
                  SheetSprite newSprite, bool movingLeft, bool movingRight)
            : Entity(xDirect, yDirect, xPosition, yPosition, speed, rState, program, newSprite),
              movingLeft(movingLeft), movingRight(movingRight)
        {
            posY = -0.1;
        }

        virtual void Update(float elapsed) {
            // move stuff and check for collisions
            if (movingLeft)
                posX -= elapsed * 0.001;
            else if (movingRight)
                posX += elapsed * 0.001;
        }

        virtual void Render() {
            setOrthoProj();
            setObjMatrices();
            translateObj(posX, posY, 0.0);
            mySprite.Draw();
        }

        bool movingLeft;
        bool movingRight;
    };

Initialization code in main:

    Spaceship* spaceship = new Spaceship(-1.0f, 1.0f, 5.1f, 0.0f, 3.0f, 0.0f,
                                         program, mySprite, false, false);
    spaceship->Render();

Game loop movement code:

    const Uint8* keys = SDL_GetKeyboardState(NULL);
    if (keys[SDL_SCANCODE_LEFT]) {
        spaceship->movingRight = false;
        spaceship->movingLeft = true;
        spaceship->Update(elapsed);
    }
    else if (keys[SDL_SCANCODE_RIGHT]) {
        spaceship->movingLeft = false;
        spaceship->movingRight = true;
        spaceship->Update(elapsed);
    }
    spaceship->Render();

For some reason the first Render() call outside of the loop doesn't render the spaceship; only the one inside the loop works. Given the structure of the program, what could be causing the problem?
EDIT: Translate method from the Entity base class:

    void translateObj(float x, float y, float z) {
        posX = x;
        posY = y;
        modelMatrix.Translate(posX, posY, 0.0);
    }

    void setObjMatrices() {
        glUseProgram(program->programID);
        program->setModelMatrix(modelMatrix);
        program->setProjectionMatrix(projectionMatrix);
        program->setViewMatrix(viewMatrix);
    }
Is this a good way of separating graphics from game logic?
My current architecture for my game engine looks like this, though the diagram is not exact: everything graphics-related is done by GraphicsEngine and its components (Material, Mesh, etc.). My problem is that I want to store pointers in RenderData, but then I have to include the Mesh, Material, etc. header files, which include GLEW. I currently change an object's material using GetRenderer().SetMaterial("xyz"), which sets a string in the RenderData, to be processed by the graphics engine; the correct pointer is then set, if it exists. This is not very modular, because the scene ends up including graphics-related files like GLEW, and that is a problem. My only solution is to store indices in RenderData: there would be no material pointer, but instead an index to where the material lives in the GraphicsEngine's material store. This way RenderData is just a "blind" store of integers and strings, on which the Renderer and the GraphicsEngine operate. Is this a good solution?
Meshes have VertexData members (position, normal, texture). When I call GraphicsEngine.CreateMesh(), passing the mesh name and file name, where should the file processing go? I use Tiny Obj Loader, and I don't know where to include it and call its functions.
1. I call the function from inside GraphicsEngine, then transform the returned structures into my Mesh's structure, which I pass to the Mesh's constructor; the initializer list assigns it to the corresponding member variable.
2. Inside Mesh: I pass the file name to the Mesh constructor and let it handle everything by itself.
I think the first solution is better, but I don't really know why. Maybe having GraphicsEngine "create" assets is better than GraphicsEngine commanding assets to "create themselves" — but this is just a personal feeling. Which solution is better?
Shaders and performance
I'm writing my first shader in my little game engine, and I have some questions about performance and common approaches.
1. Is shader code processed by the video card instead of the CPU? I ask so I know whether it's possible to offload some calculations to save processor power.
2. Generally, should I do the math calculations in my application code or in the shader? I could calculate everything (lighting, for example) and just send the final values to be multiplied/used to the shader, but what's the best approach?
3. Shaders, as far as I understand, are the main party responsible for visual effects. So, for example, if I want to add a "blur" effect to only one object on screen, should I use an if/switch statement in the shader, or should I have different shader program IDs and just switch with glUseProgram(programID)?
Sorry if some questions seem stupid, and thanks for your time!
Why does my GLSL 1.20 shader not work with an OpenGL 4.0 driver?
I'm just starting out with OpenGL on Linux. In order to write future-proof code, I explicitly wrote for the OpenGL 4.0 core profile in the first place, so the shaders are GLSL 4.00. That worked fine on a recent notebook with an AMD graphics card using the fglrx driver, which supports OpenGL 4.0. I then tried to run this code on another box with an Intel card whose driver only supports OpenGL 2.1. That failed, obviously. I then rewrote the shaders in GLSL 1.20 without changing the C code at all. Now it works on the Intel box, but on the AMD notebook it displays nothing (except for the glClearColor). That confuses me. The code checks glGetError() periodically, so an error in shader compilation would have been caught. My only idea was that I accidentally created a GL 4 core context (instead of a compatibility context) on the AMD box, but the context is created by SDL 1.2, which (says the documentation) can create compatibility contexts only. Can anyone see what the problem is, or give hints on how to debug this? For reference, I'm attaching all the shaders. (They're pretty trivial; as I said, I'm just starting.)

    // version 120 (vertex shader)
    #version 120
    uniform mat4 ModelMatrix;
    uniform mat4 ViewMatrix;
    uniform mat4 ProjectionMatrix;
    attribute vec4 in_Color;
    attribute vec4 in_Position;
    varying vec4 pass_Color;
    void main(void)
    {
        gl_Position = (ProjectionMatrix * ViewMatrix * ModelMatrix) * in_Position;
        pass_Color = in_Color;
    }

    // version 400 (vertex shader)
    #version 400
    layout(location = 0) in vec4 in_Position;
    layout(location = 1) in vec4 in_Color;
    out vec4 ex_Color;
    uniform mat4 ModelMatrix;
    uniform mat4 ViewMatrix;
    uniform mat4 ProjectionMatrix;
    void main(void)
    {
        gl_Position = (ProjectionMatrix * ViewMatrix * ModelMatrix) * in_Position;
        ex_Color = in_Color;
    }

    // version 120 (fragment shader)
    #version 120
    varying vec4 pass_Color;
    void main(void)
    {
        gl_FragColor = pass_Color;
    }

    // version 400 (fragment shader)
    #version 400
    in vec4 ex_Color;
    out vec4 out_Color;
    void main(void)
    {
        out_Color = ex_Color;
    }
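One thing worth checking, sketched under the assumption that the C code uses hard-coded attribute indices 0 and 1: the #version 120 shaders have no layout(location = ...) qualifiers, so the linker is free to assign in_Position and in_Color to different slots on different drivers. The locations can either be pinned before linking or queried after:

    // Pin the locations the C code expects (must happen before glLinkProgram):
    glBindAttribLocation(program, 0, "in_Position");
    glBindAttribLocation(program, 1, "in_Color");
    glLinkProgram(program);

    // ...or ask the linker what it chose and use those indices instead:
    GLint posLoc = glGetAttribLocation(program, "in_Position");
    GLint colLoc = glGetAttribLocation(program, "in_Color");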
Storing rendering data for a voxel game: one VAO and VBO, or one per chunk?
I have a Minecraft-style voxel game with block placing and digging. It ran at 400 FPS on my computer, but when I added semi-transparent water it started running at 40 FPS — a per-frame time increase of 0.0175 s, or 8 times what it used to be. What I currently do is keep a VAO and VBO for each chunk; every time a chunk is changed or loaded, I find all the non-occluded blocks and put their positions into another buffer in the chunk, then use instancing to render the whole chunk in one draw call. So every frame I issue about 300 draw calls. Once I added the water (using order-independent transparency, http://www.openglsuperbible.com/2013/08/20/is-order-independent-transparency-really-necessary/), I was individually calling glDrawElements() for every visible water block — around 9000 calls if half the visible area is ocean. I know using instancing for these blocks would help, but then I'd have two VAOs for every chunk and 600 draw calls over buffers holding only a couple hundred blocks each. Is there a better way to organize this?
Qt OpenGL: glTexImage2D in a widget
I had an OpenGL display where I rendered into an OpenGL context (GLUT). Now I would like to integrate it into a big project that does not use the Qt OpenGL API at all (QGLWidget, QOpenGLWidget, ...). So I would have to display into a QWidget, for example using glTexImage2D(), because it can take a QImage as input ((GLvoid*)QImage.bits()).
How do I calculate the bounding box for an ortho matrix for cascaded shadow mapping?
I've been trying to implement a cascaded shadow mapping system in my engine, but the bounding boxes for the cascades don't appear to be correct. The part I'm interested in can be found here, under the function name "CalcOrthoProjs". I've been trying to understand the matrix multiplications with this answer, but ogldev's variable and function names are kind of confusing me. This is how I modified ogldev's function to work with my variables:

    void Scene::calcOrthoProjections(Camera& camera, glm::mat4 LightView,
                                     std::vector<glm::mat4>& orthoContainer,
                                     std::vector<GLfloat>& cascadeEnd)
    {
        GLfloat FOV, nearPlane, farPlane, ratio;
        camera.getPerspectiveInfo(FOV, nearPlane, farPlane);
        ratio = static_cast<GLfloat>(RE_config.height) / static_cast<GLfloat>(RE_config.width);
        GLfloat tanHalfHFov = glm::tan(glm::radians(FOV / 2.0f));
        GLfloat tanHalfVFov = glm::tan(glm::radians((FOV * ratio) / 2.0f));

        for (GLuint i = 0; i < NR_CASCADES; i++)
        {
            GLfloat xn = cascadeEnd[i]     * tanHalfHFov;
            GLfloat xf = cascadeEnd[i + 1] * tanHalfHFov;
            GLfloat yn = cascadeEnd[i]     * tanHalfVFov;
            GLfloat yf = cascadeEnd[i + 1] * tanHalfVFov;

            // The frustum corners in view (camera) space
            glm::vec4 frustumCorners[8] = {
                // near face
                glm::vec4( xn,  yn, cascadeEnd[i], 1.0),
                glm::vec4(-xn,  yn, cascadeEnd[i], 1.0),
                glm::vec4( xn, -yn, cascadeEnd[i], 1.0),
                glm::vec4(-xn, -yn, cascadeEnd[i], 1.0),
                // far face
                glm::vec4( xf,  yf, cascadeEnd[i + 1], 1.0),
                glm::vec4(-xf,  yf, cascadeEnd[i + 1], 1.0),
                glm::vec4( xf, -yf, cascadeEnd[i + 1], 1.0),
                glm::vec4(-xf, -yf, cascadeEnd[i + 1], 1.0)
            };

            // The frustum corners in light space
            glm::vec4 frustumCornersL[8];

            GLfloat minX, maxX, minY, maxY, minZ, maxZ;
            minX = minY = minZ = std::numeric_limits<GLfloat>::max();
            maxX = maxY = maxZ = std::numeric_limits<GLfloat>::min();

            glm::mat4 cam = camera.getProjection() * camera.getView();
            glm::mat4 camInverse = glm::inverse(cam);

            for (GLuint j = 0; j < 8; j++)
            {
                // view (camera) space to world space
                glm::vec4 vW = camInverse * frustumCorners[j];
                // world space to light space
                frustumCornersL[j] = LightView * vW;

                minX = min(minX, frustumCornersL[j].x);
                maxX = max(maxX, frustumCornersL[j].x);
                minY = min(minY, frustumCornersL[j].y);
                maxY = max(maxY, frustumCornersL[j].y);
                minZ = min(minZ, frustumCornersL[j].z);
                maxZ = max(maxZ, frustumCornersL[j].z);
            }
            orthoContainer[i] = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ) * LightView;
        }
    }

LightView is a matrix created with glm::lookAt(-glm::normalize(light.direction), glm::vec3(0.0), glm::vec3(0.0, 1.0, 0.0)). camera.getProjection() returns the perspective matrix of the main camera, and camera.getView() returns a lookAt matrix aimed at the camera's target. The orthoContainer values are then fed into the depth rendering unaltered (modified afterwards only by each model's model matrix). I wrote some comments on how I think the math is done, trying to understand what's wrong. The result is a frustum that is too wide, resulting in low-res shadows (even for the closest shadow map); attached is the depth map of the closest shadow map. Any insight as to why this isn't working, or any other best-practice advice, is welcome.
Smallest, most memory-efficient way to have tiles? (C, OpenGL)
I need tiles in my game — just 16x16 images — and there would be hundreds (or even thousands) of them making up a level. It's obviously not viable to have thousands of memory-hogging full entities, but tiles are just repeated images. What's the most memory-efficient way to do tiles?
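A hedged sketch of the usual answer, the flyweight pattern (levelWidth/levelHeight and the draw call are illustrative names): tile images are shared per tile type, and the level itself is just a grid of small indices, so thousands of tiles cost one byte each plus a handful of shared textures.

    #include <cstdint>
    #include <vector>

    struct TileType {          // shared data: one entry per *kind* of tile
        int  textureId;        // or UV offsets into a single tile atlas
        bool solid;
    };

    std::vector<TileType> tileTypes;                       // e.g. up to 256 kinds
    std::vector<uint8_t>  level(levelWidth * levelHeight); // one byte per placed tile

    // Drawing: each cell is only an index into the shared table, e.g.
    // draw(tileTypes[level[y * levelWidth + x]], x * 16, y * 16);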
Invalid coordinates returned by glutMouseFunc()
I am using GLUT's glutMouseFunc() function to retrieve the coordinates of mouse clicks. I want to move the object at that coordinate to another coordinate, but when I click on the object, the coordinates returned via glutMouseFunc() differ from the original ones.

    int windowWidth = 1250, windowHeight = 1000;

    void onMouse(int button, int state, int x, int y);

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE);
        glutInitWindowSize(1250, 1000);
        glutCreateWindow("OpenGL First window demo");
        ...
        glutMouseFunc(onMouse);
        ...
        glutMainLoop();
        return 0;
    }

    void onMouse(int button, int state, int x, int y)
    {
        if (state != GLUT_DOWN && button != GLUT_LEFT_BUTTON)
            return;

        GLbyte color[4];
        GLfloat depth;
        GLuint index;

        glReadPixels(x, windowHeight - y - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, color);
        glReadPixels(x, windowHeight - y - 1, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
        glReadPixels(x, windowHeight - y - 1, 1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_INT, &index);

        printf("Clicked on pixel %d, %d, color %02hhx%02hhx%02hhx%02hhx, depth %f, stencil index %u\n",
               x, y, color[0], color[1], color[2], color[3], depth, index);
        move();
    }

Inside my render function I have set the window coordinates to (0,0) using glOrtho:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, windowWidth, windowHeight, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

If my object is at (550, 900), onMouse() reports it as (556, 650), while for an object at position (375, 475) onMouse() returns (376, 342). Basically there is a huge difference in the y-axis values returned by the function. How can I get the correct screen coordinates?
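A hedged guess worth testing with a sketch: glutInitWindowSize() is only a request, and both reported points here are scaled by roughly the same factor (650/900 ≈ 342/475 ≈ 0.72), which is what one would expect if the window manager delivered a client area shorter than the requested 1000 pixels. Querying the actual size and rescaling into the glOrtho space keeps the mapping exact:

    int actualW = glutGet(GLUT_WINDOW_WIDTH);    // real client-area size
    int actualH = glutGet(GLUT_WINDOW_HEIGHT);

    // map the click into the glOrtho(0, windowWidth, windowHeight, 0, ...) space;
    // no y-flip is needed because this ortho already puts y = 0 at the top
    float worldX = x * (float)windowWidth  / actualW;
    float worldY = y * (float)windowHeight / actualH;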
Front-face culling with shadow mapping
To avoid shadow acne I usually use front-face culling, which works great. But in my current implementation the mesh is quite complex terrain (depending on the LOD level), and then the shading (dot product of the face normal with the negative light direction) is not always correct, as some normals actually face the light. Yet there is no information in the shadow map to shadow those fragments, because of the front-face culling (as if it were not exact enough). What I end up with is bright spots in the shadowed area. I'm not really sure why this doesn't work — wouldn't that mean front-face culling is also culling some back faces when the mesh has a very high level of detail? In the image with front-face culling (top picture) you can see these bright lines in the shadowed area, whereas without it (bottom image) the shadowed area is completely black. My current workaround is to check the highest LOD level currently displayed and, based on that, switch back and forth between front-face culling and a depth bias. I use OpenGL. The front-face culling:

    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);

The shading in the fragment shader:

    float shading = max(dot(normal, lightDir), 0.0);
Performing manual clipping in OpenGL
I'm learning OpenGL, and I understand that OpenGL performs clipping as part of the pipeline — but is it a good idea to also perform manual clipping? By manual clipping, I mean not asking the GPU to draw an object at all if I'm certain it won't be visible on screen. The advantage I can think of is that the GPU won't waste time executing the vertex shader only to find out the primitives are to be discarded. On the other hand, performing these calculations on the CPU may cause overall performance to decrease. I believe the world terrain model is loaded once, so it isn't a good idea to perform manual clipping for the terrain and re-upload selective parts to GPU memory, but maybe doing this for things like trees boosts performance. Thank you!
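For reference, a hedged sketch of what this kind of manual (frustum) culling typically looks like on the CPU: extract the six planes from projection*view (the Gribb-Hartmann method) and test each object's bounding sphere before issuing its draw call. The per-object cost is six dot products, usually far cheaper than running the vertex shader over the whole mesh.

    #include <glm/glm.hpp>

    struct Frustum { glm::vec4 planes[6]; };

    Frustum extractFrustum(const glm::mat4& m)   // m = projection * view
    {
        glm::vec4 row[4];
        for (int r = 0; r < 4; ++r)              // GLM stores columns, so gather rows
            row[r] = glm::vec4(m[0][r], m[1][r], m[2][r], m[3][r]);

        Frustum f;
        f.planes[0] = row[3] + row[0];           // left
        f.planes[1] = row[3] - row[0];           // right
        f.planes[2] = row[3] + row[1];           // bottom
        f.planes[3] = row[3] - row[1];           // top
        f.planes[4] = row[3] + row[2];           // near
        f.planes[5] = row[3] - row[2];           // far
        for (glm::vec4& p : f.planes)
            p /= glm::length(glm::vec3(p));      // normalize so distances are metric
        return f;
    }

    bool sphereVisible(const Frustum& f, const glm::vec3& center, float radius)
    {
        for (const glm::vec4& p : f.planes)
            if (glm::dot(glm::vec3(p), center) + p.w < -radius)
                return false;                    // fully outside one plane
        return true;                             // inside or intersecting
    }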
OpenGL render-to-texture causing edge artifacts
This is my first post here, so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to the screen I get the expected result, but when I render to a texture, edge artifacts appear. Anti-aliasing is turned off for both. I'm guessing this has something to do with depth buffer accuracy, but I've tried a lot of different methods to improve the result with no success :( I'm currently using the following code to set up my FBO:

    GLuint frameBufferID;
    glGenFramebuffers(1, &frameBufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID);

    glGenTextures(1, &coloursTextureID);
    glBindTexture(GL_TEXTURE_2D, coloursTextureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Depth buffer setup
    GLuint depthrenderbuffer;
    glGenRenderbuffers(1, &depthrenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH, SCREEN_HEIGHT);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0);
    GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
    glDrawBuffers(1, DrawBuffers);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return false;

Thank you so much for any help :)
1 | What happens with missing vertices in a geometry shader I am relatively new to GLSL shader programming, and the documentation I found is unfortunately often inscrutable. I am having trouble understanding a few things about how geometry shaders fit into the pipeline. I am trying to implement silhouette detection by detecting boundary triangles in a geometry shader. A triangle is a boundary triangle if it is front facing and an adjacent triangle is back facing. The idea then is to emit additional camera-facing triangles to outline them. For this purpose, I am looking at GL_TRIANGLES_ADJACENCY shaders, since they seem to fit the bill. Let's assume for the moment that I have already got the data loaded properly into the buffer. For clarity, I plan on doing skinning and the model-view transform in the vertex shader, but to hold off on the perspective projection until I decide what to do in the geometry shader (this seems reasonable, but maybe I am wrong). Here is where the uncertainties start. First of all, am I expected to manually perform backface culling at some shader stage? Second, if a triangle has incomplete adjacency information (only two out of three adjacent triangles, for example), how is this conveyed to the shader? Does it just have a length of less than 6 vertices? Third, what does the pass-through shader (the one that does nothing special) have to perform? Do I emit every vertex in the input list, or just the 3 from the first triangle? And again, am I supposed to / allowed to cull back faces? And last, if I decide to emit additional geometry, is there any way to distinguish it from the (passed-through) input geometry from the point of view of the fragment shader? I'd like to be able to texture map them separately if possible. If I am on the wrong path with any of this, of course feel free to let me know. In case it is important, I am using OpenGL 3.3. Based on my understanding and Nathan's answer, I could start with a passthrough shader that looks something like this:

layout(triangles_adjacency) in;
layout(triangle_strip, max_vertices = 14) out;
flat out float is_border;

void main()
{
    gl_Position = gl_in[0].gl_Position; is_border = 0.0; EmitVertex();
    gl_Position = gl_in[2].gl_Position; is_border = 0.0; EmitVertex();
    gl_Position = gl_in[4].gl_Position; is_border = 0.0; EmitVertex();
    EndPrimitive();
}

And then, when/if I add triangle strips for outlines, I will set is_border = 1.0 for each of those vertices.
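For the boundary test itself, one common approach is to compute the facing of the center triangle and of each adjacent triangle from the projected positions, and emit extra geometry only across edges where the sign flips. A sketch of the facing test under the assumption that positions have been divided by w (so the z of the screen-space cross product gives the facing); the winding convention is an assumption to adapt:

// true if triangle (a, b, c) winds front-facing in screen space
bool frontFacing(vec3 a, vec3 b, vec3 c)
{
    return cross(b - a, c - a).z > 0.0;
}

void main()
{
    vec3 v0  = gl_in[0].gl_Position.xyz / gl_in[0].gl_Position.w; // center triangle
    vec3 v1  = gl_in[2].gl_Position.xyz / gl_in[2].gl_Position.w;
    vec3 v2  = gl_in[4].gl_Position.xyz / gl_in[4].gl_Position.w;
    vec3 adj = gl_in[1].gl_Position.xyz / gl_in[1].gl_Position.w; // neighbour across edge v0-v1

    if (frontFacing(v0, v1, v2) && !frontFacing(v0, adj, v1))
    {
        // edge v0-v1 is a silhouette edge: emit a camera-facing quad here,
        // with is_border = 1.0 on its vertices (omitted for brevity)
    }
    // ... repeat for the other two edges (gl_in[3] and gl_in[5]), then
    // pass the original triangle through as in the snippet above
}

Note that in GL_TRIANGLES_ADJACENCY input, vertices 0, 2, 4 form the triangle and vertices 1, 3, 5 are the neighbours across edges 0-2, 2-4 and 4-0 respectively.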
1 | Multithreaded game fails on SwapBuffers in render thread at exit The render loop and Windows message loop run on separate threads. The way the program exits is that after PostQuitMessage is called in WM_DESTROY, the message loop thread signals the render loop thread to exit. As far as I can tell, before the render loop thread can even process the signal, it tries SwapBuffers and that fails. My question: is there something about how Windows processes WM_DESTROY and WM_QUIT, maybe in DefWindowProc, that causes various objects associated with rendering to go away even though I haven't explicitly deleted anything? That would explain why the rendering thread is making bad calls at exit.
1 | Transparent parts of texture are opaque black instead I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top sprite. The transparent parts are black (the clear colour) and opaque instead, though, and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

uniform sampler2D texture;
varying vec2 f_texcoord;
void main()
{
    gl_FragColor = texture2D(texture, f_texcoord);
}

I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I'm sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise). Edit: Here's a screenshot.
1 | Best way to render multiple objects I have a scene that consists of 1 player object, 1 platform, 1 enemy, and 1 background. Currently, this is how my render function looks:

void Sprite::Render()
{
    glUseProgram(m_Program);
    glActiveTexture(GL_TEXTURE0);
    // Background
    glBindVertexArray(BackgroundVAO);
    glBindTexture(GL_TEXTURE_2D, m_Textures[1]);
    glUniform1i(glGetUniformLocation(m_Program, "Texture1"), 0);
    m_ProjectionMatrix = m_Camera.ViewToWorldMatrix();
    m_TransformationMatrix = m_ProjectionMatrix;
    m_TransformationMatrixLoc = glGetUniformLocation(m_Program, "TransformationMatrix");
    glUniformMatrix4fv(m_TransformationMatrixLoc, 1, GL_FALSE, &m_TransformationMatrix[0][0]);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);
    // Enemy
    glBindVertexArray(EnemyVAO);
    glBindTexture(GL_TEXTURE_2D, m_Textures[2]);
    glUniform1i(glGetUniformLocation(m_Program, "Texture3"), 0);
    m_ProjectionMatrix = glm::translate(glm::mat4(), glm::vec3(EnemyX, EnemyY, 0.0f)) * m_Camera.ViewToWorldMatrix();
    m_TransformationMatrix = m_ProjectionMatrix;
    m_TransformationMatrixLoc = glGetUniformLocation(m_Program, "TransformationMatrix");
    glUniformMatrix4fv(m_TransformationMatrixLoc, 1, GL_FALSE, &m_TransformationMatrix[0][0]);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);
    // Platform One
    glBindVertexArray(PlatformVAO);
    glBindTexture(GL_TEXTURE_2D, m_Textures[3]);
    glUniform1i(glGetUniformLocation(m_Program, "Texture4"), 0);
    m_ProjectionMatrix = glm::translate(glm::mat4(), glm::vec3(0.0f, 0.5f, 0.0f)) * m_Camera.ViewToWorldMatrix();
    m_TransformationMatrix = m_ProjectionMatrix;
    m_TransformationMatrixLoc = glGetUniformLocation(m_Program, "TransformationMatrix");
    glUniformMatrix4fv(m_TransformationMatrixLoc, 1, GL_FALSE, &m_TransformationMatrix[0][0]);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);
    // Player
    glBindVertexArray(PlayerVAO);
    glBindTexture(GL_TEXTURE_2D, m_Textures[0]);
    glUniform1i(glGetUniformLocation(m_Program, "Texture2"), 0);
    m_ProjectionMatrix = glm::translate(glm::mat4(), glm::vec3(PlayerX, PlayerY, 0.0f)) * m_Camera.ViewToWorldMatrix();
    m_TransformationMatrix = m_ProjectionMatrix;
    m_TransformationMatrixLoc = glGetUniformLocation(m_Program, "TransformationMatrix");
    glUniformMatrix4fv(m_TransformationMatrixLoc, 1, GL_FALSE, &m_TransformationMatrix[0][0]);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindVertexArray(0);
}

As you can see, it is unnecessarily complicated, and if I want to draw, say, two or more enemies, or 4 or more platforms, then it'd get ridiculously large! My question is this: how can I make this smaller? I'd prefer to be able to do something like this in my main.cpp: Sprite.Render(Player); Sprite.Render(EnemyOne); etc... Looking forward to reading your tips, thank you!
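One common refactor is to give each drawable a small struct holding its VAO, texture, and position, and a single Render that takes one of those, so the draw loop collapses. A minimal sketch under the names used above (RenderObject is a hypothetical type, and the multiplication order mirrors the original code):

struct RenderObject {
    GLuint vao;
    GLuint texture;
    glm::vec3 position;
};

void Sprite::Render(const RenderObject& obj)
{
    glBindVertexArray(obj.vao);
    glBindTexture(GL_TEXTURE_2D, obj.texture);
    glm::mat4 transform = glm::translate(glm::mat4(), obj.position)
                        * m_Camera.ViewToWorldMatrix();
    glUniformMatrix4fv(m_TransformationMatrixLoc, 1, GL_FALSE, &transform[0][0]);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);
}

// caller:
// glUseProgram(m_Program); glActiveTexture(GL_TEXTURE0);
// sprite.Render(player); sprite.Render(enemyOne);
// for (const auto& p : platforms) sprite.Render(p);

Looking up the uniform location once at init time (instead of every frame, for every object) is another easy win hiding in the original function.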
1 | GLM Velocity Vectors Basic Maths to Simulate Steering UPDATE: Code updated below, but I still need help adjusting my math. I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, when it moves forwards, it travels along the 30-degree angle on a new axis). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

glm::vec3 vel; // velocity vector

void renderMovingCube()
{
    glUseProgram(movingCubeShader.handle());
    GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
    glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);
    glm::mat4 viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);
    vel.x = cos(rotX);
    vel.y = sin(rotX);
    vel *= moveCube; // move cube
    ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos + vel);
    // bring ground and cube to bottom of screen
    ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0, -48, 0));
    ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0, 1, 0)); // manually turn
    glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); // pass matrix to shader
    movingCube.render(); // draw
    glUseProgram(0);
}

Keyboard input:

void keyboard()
{
    char BACKWARD = keys['S'];
    char FORWARD = keys['W'];
    char ROT_LEFT = keys['A'];
    char ROT_RIGHT = keys['D'];
    if (FORWARD) { // W - move forwards
        globalPos += vel;
        globalPos.z -= moveCube;
        BACKWARD = false;
    }
    if (BACKWARD) { // S - move backwards
        globalPos.z += moveCube;
        FORWARD = false;
    }
    if (ROT_LEFT) { // A - turn left
        rotX += 0.01f;
        ROT_LEFT = false;
    }
    if (ROT_RIGHT) { // D - turn right
        rotX -= 0.01f;
        ROT_RIGHT = false;
    }
}

Where am I going wrong with my vectors? I would like to change the direction of the cube (which it does) but then move forwards in that direction.
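The usual fix is to derive a forward vector from the heading angle each frame and move the position along it, rather than mixing a velocity into the view matrix translation. A minimal sketch, assuming rotX is the heading around the world Y axis in radians and the cube faces -Z at rotX = 0 (the axis conventions are assumptions to adapt):

// forward direction on the XZ plane for a yaw of rotX radians
glm::vec3 forward(std::sin(rotX), 0.0f, std::cos(rotX));

if (FORWARD)  globalPos += forward * moveCube;   // drive along the current heading
if (BACKWARD) globalPos -= forward * moveCube;

// then build the model matrix from position + heading only:
glm::mat4 model = glm::translate(glm::mat4(1.0f), globalPos)
                * glm::rotate(glm::mat4(1.0f), rotX, glm::vec3(0, 1, 0));
ModelViewMatrix = viewMatrixMovingCube * model;

The key design point is that the heading rotation and the forward translation use the same angle, so steering and movement can never disagree. (Depending on the GLM version, glm::rotate may expect degrees rather than radians, so keep the units consistent with the sin/cos calls.)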
1 | Creating map files for a 3D game I've created plenty of 2D games, and now that I've gotten my hands dirty working with 3D in OpenGL I want to start a game. The issue is I don't know how I can store all the map data: not only the terrain, but the textures and objects in the world, lighting, what's interactable, etc. It seems that it'll get way too big if I don't have an efficient system for storing and loading maps. So what methods or articles can you suggest on this?
1 | Small independent game development on a virtual machine I've been learning about OpenGL and SFML with C++ now for about 6-8 months, and would like to work on a small personal game to put some of my skills to the test. I want to kill two birds with one stone and also increase my knowledge of Ubuntu and Linux development in general by developing this project on Ubuntu. Currently my main computer runs Windows 7, while I have an older computer that runs Fedora 18, so I set up an Ubuntu 12.04 LTS virtual machine on my main computer. The problem is: what effects does this have graphics-wise? I know virtual machines running games can often have graphical and input bugs; should I not use a virtual machine? Or is this fine and I'm simply overthinking the subject? Thanks! Note: I am using VM VirtualBox on my Windows 7 desktop.
1 | OpenGL 2D Origin in Upper Left Corner I got an OpenGL context working in SDL, and I'm trying to set it up so I can render with pixel coordinates of the screen. I got this far, trying to fill the entire screen with a white rectangle:

glViewport(0, 0, MonitorWidth, MonitorHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, MonitorWidth, MonitorHeight, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glColor3f(1, 1, 1);
glBegin(GL_QUADS);
glVertex2f(0, 0);
glVertex2f(MonitorWidth, 0);
glVertex2f(MonitorWidth, MonitorHeight);
glVertex2f(0, MonitorHeight);
glEnd();

It doesn't work, and the entire screen appears black. What's wrong with this?
1 | JNI Error when Wrapping Jar with JOGL using Launch4j I have been trying to wrap my fat JAR file into an EXE using Launch4j, but I have been running into problems when I try to execute the EXE. Here is the error log I get from Launch4j:

Executing: E:\Downloads\CasualCaving.exe
JNILibLoaderBase: Caught IllegalArgumentException: No Jar name in <jar:file:/E:/Downloads/CasualCaving.exe!/jogamp/common/Debug.class>
Exception in thread "main" java.lang.UnsatisfiedLinkError: Can't load library: E:\Downloads\natives\windows-amd64\gluegen-rt.dll
    at java.lang.ClassLoader.loadLibrary(Unknown Source)
    at java.lang.Runtime.load0(Unknown Source)
    at java.lang.System.load(Unknown Source)
    at com.jogamp.common.jvm.JNILibLoaderBase.loadLibraryInternal(JNILibLoaderBase.java:624)
    at com.jogamp.common.jvm.JNILibLoaderBase.access$000(JNILibLoaderBase.java:63)
    at com.jogamp.common.jvm.JNILibLoaderBase$DefaultAction.loadLibrary(JNILibLoaderBase.java:106)
    at com.jogamp.common.jvm.JNILibLoaderBase.loadLibrary(JNILibLoaderBase.java:487)
    at com.jogamp.common.os.DynamicLibraryBundle$GlueJNILibLoader.loadLibrary(DynamicLibraryBundle.java:421)
    at com.jogamp.common.os.Platform$1.run(Platform.java:317)
    at java.security.AccessController.doPrivileged(Native Method)
    at com.jogamp.common.os.Platform.<clinit>(Platform.java:287)
    at com.jogamp.opengl.GLProfile.<clinit>(GLProfile.java:147)
    at org.graphics.Render.<init>(Render.java:20)
    at org.engine.Main.main(Main.java:18)

I am using JOGL 2.3.2, imported through Maven with the assembly plugin to compile it to a fat JAR. Here is a picture of my library configuration. I am not sure what is wrong, as the JAR file works fine.
1 | Change value of uniform for each VAO I've heard from several sources that it's a better approach to pass the model matrix to a shader via a uniform rather than an attribute. I also know that the idea of a uniform is that it has the same value for every vertex and fragment within a draw call. If so, how can I change the uniform's value for each object I have in the scene?
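Since a uniform is only constant within one draw call, the standard pattern is simply to set it again between draws. A minimal sketch, assuming a hypothetical scene list and a "u_Model" uniform in the shader:

GLint modelLoc = glGetUniformLocation(program, "u_Model");   // look up once, at init

for (const Object& obj : scene)
{
    glBindVertexArray(obj.vao);
    // uniforms are per-program state: set, draw, set again, draw again
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(obj.modelMatrix));
    glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, 0);
}

Each draw call sees whatever value the uniform held when the call was issued, so every object can use its own model matrix through the same shader.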
1 | Infinite Treadmilling Hexagonal Grid So, I can happily render an infinite square grid by moving said grid whenever the camera moves out of a grid square:

auto diff = (cameraTranslation - gridTranslation);
bool moveGrid = false;
if (diff.x > cellSize.x / 2) {
    gridTranslation.x += cellSize.x;
    moveGrid = true;
} else if (diff.x < -cellSize.x / 2) {
    gridTranslation.x -= cellSize.x;
    moveGrid = true;
}
if (diff.z > cellSize.z / 2) {
    gridTranslation.z += cellSize.z;
    moveGrid = true;
} else if (diff.z < -cellSize.z / 2) {
    gridTranslation.z -= cellSize.z;
    moveGrid = true;
}
if (moveGrid)
    gridTransform.Translate(gridTranslation);

However, when working with a hexagonal grid, things get a bit more complicated. (For that matter, if the grid were textured, I'd have to adjust the grid translation based on the tessellation distance... ouch.) In this case, I'm dealing with a wireframe hex grid, and I'm a little unsure how to treadmill it. The first thing that comes to mind is that, given that the camera starts in the center of a hexagon, I determine whether the camera has exited the cell in which it started. If so, adjust the grid position by:

float angle_from_center = atan2(camera.z - grid.z, camera.x - grid.x);
grid.x += cell_size.x * cos(angle_from_center);
grid.z += cell_size.z * sin(angle_from_center);

This will probably work, but the issue gets more complicated when considering that I'm not only checking if the camera has moved a certain distance from the cell center, but whether it has crossed a border. I could clamp the angle of the camera relative to the cell origin to 1/12 of a circle, which would make the hex boundary calculation easier, but this seems like a hack. Any suggestions? I'm 75% sure I'm over-thinking this one.
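One way to sidestep the angle math entirely is to convert the camera position to fractional hex coordinates, round to the nearest hex with the standard cube-rounding trick, and snap the grid to that hex's center. A sketch for pointy-top hexes of size s (the layout is an assumption; swap the axis roles for flat-top):

// fractional axial coordinates of world point p for pointy-top hexes of size s
float qf = (std::sqrt(3.0f) / 3.0f * p.x - 1.0f / 3.0f * p.z) / s;
float rf = (2.0f / 3.0f * p.z) / s;

// cube rounding: round all three cube coords, then fix the one with the largest error
float xf = qf, zf = rf, yf = -xf - zf;
float rx = std::round(xf), ry = std::round(yf), rz = std::round(zf);
float dx = std::fabs(rx - xf), dy = std::fabs(ry - yf), dz = std::fabs(rz - zf);
if (dx > dy && dx > dz)      rx = -ry - rz;
else if (dy > dz)            ry = -rx - rz;
else                         rz = -rx - ry;

// world-space center of that hex: snap the grid here
float cx = s * std::sqrt(3.0f) * (rx + rz / 2.0f);
float cz = s * 1.5f * rz;

Because the rounding always lands on the hex the camera is actually inside, the "has it crossed a border" question never needs to be asked explicitly.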
1 | How to render water reflections at multiple heights I'm making a voxel game in OpenGL and am trying to find a way to render semi-realistic water (at least partially good looking; it doesn't need to be strictly scientifically accurate). All the sources I find on this topic seem to assume that all the water in the scene has the same height, which is not the case in a voxel game, where you can have (and see at the same time) multiple bodies of water, each with a different height. The tricky detail lies in the multi-height part of the problem, which makes it really difficult to render the scene in real time using typical water reflection algorithms aimed at a single-level water surface. As an example, consider the following image: the dirt pillar at the left should be reflected on the water in front of it, as should the mountains near the horizon. The water adjacent to the pillar is 20m higher than the general sea water, so traditional reflection methods (assuming a fixed water level) would simply not work in this case. Considering that a lot of different water levels can be seen in a single scene, performing a traditional water reflection pass for each water body height is not an option if we want an acceptable frame rate. How can I overcome this problem? Is there any way to render it in real time? If it isn't actually possible, what approximation should I take to get a good-looking result, even if reflections aren't 100% realistic?
1 | OpenGL streaming from multiple windows I am using the GLUT library for my research game. In my game I have 25 windows created using glutCreateWindow(title), and each of them has its own display callback registered using glutDisplayFunc(Draw). I am trying to stream the screen of each of the 25 windows from my computer to another client, which will do some processing on all the screen captures of the 25 windows. Currently I am capturing the screen of each of the 25 windows with glReadPixels(0, 0, Xres, Yres, GL_RGB, GL_UNSIGNED_BYTE, pixels). Just doing the call to glReadPixels for each of the 25 windows slows my frame rate down to 7 FPS. However, I require at least 25 FPS; is there any way I can improve this?
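A standard way to take the stall out of glReadPixels is to read into a pixel pack buffer (PBO), so the transfer runs asynchronously and you map the previous frame's buffer instead of waiting on the current one. A minimal double-buffered sketch per window (names are illustrative, and it trades one frame of latency for the speedup):

GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, Xres * Yres * 3, nullptr, GL_STREAM_READ);
}

// each frame: kick off a read into one PBO, map the other
int write = frame % 2, read = (frame + 1) % 2;
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[write]);
glReadPixels(0, 0, Xres, Yres, GL_RGB, GL_UNSIGNED_BYTE, 0);  // returns immediately

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[read]);
void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels) { /* hand the previous frame to the streaming client */ glUnmapBuffer(GL_PIXEL_PACK_BUFFER); }
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

Note that with GLUT each window has its own context, so the PBO pair has to be created per window (or shared contexts used), which is an extra assumption of this sketch.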
1 | Making a weapon stay with a first-person camera I have been looking all over the internet for information on how to get a gun to stay with the camera as done in FPS games. I am using OpenGL and GLSL to carry this out. I knew a way of doing this in earlier OpenGL versions, but I could never figure it out in the newer versions. The type of camera that I am trying to get is something similar to this. With the view matrix and everything else, I should be able to figure out the movement of the hand and the shooting. Here is some of the code that I have so far:

// Copyright (c) 2019 Ryan Hall. All Rights Reserved.
// The gun has two defining variables: one that actually creates the gun
// and another that moves it around in object space
weaponOfChoice = glm::translate(weaponOfChoice, glm::vec3(camera.GetPosition().x + 0.15, camera.GetPosition().y - 0.15, camera.GetPosition().z + 0.3));
weaponOfChoice = glm::rotate(gun, angle, glm::vec3(0.0f, 1.0f, 0.0f));
weaponOfChoice = glm::scale(gun, glm::vec3(0.005f, 0.005f, 0.005f));
glUniformMatrix4fv(glGetUniformLocation(shader.Program, "gun"), 1, GL_FALSE, glm::value_ptr(weaponOfChoice));

I have spent quite some time working out how to fix the code so that it will render the gun correctly and have not been able to find any great sources online to help solve my problem. How could I do this? Do I need an identity matrix as used in older versions of OpenGL? If so, how do I write a function mimicking glLoadIdentity() that will help me with this problem? Thanks, rjhwinner03
1 | How do I use graphics APIs to select the proper display device among multiple attached to a PC? I have an LCD monitor and an Oculus Rift attached to my PC, and an Nvidia 820M dedicated GPU. How does the GPU know which display device it has to render to (or send the rendered output to)? Are there any OpenGL/DirectX calls to bind a display to the GPU?
1 | Cut a translucent square in a texture How do I remove (cut out) a transparent rectangle in a texture, so that the hole is translucent? On Android I would use the Xfermodes approach: https://stackoverflow.com/questions/8115732/how-to-use-masks-in-android. But in libgdx I will have to use OpenGL. So far I have almost achieved what I was looking for by using glBlendFunc. From this nice and very helpful page I learned that glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA) should solve my problem, but I tried it out and it did not quite work as expected:

batch.end();
batch.begin();
batch.setBlendFunction(GL20.GL_ZERO, GL20.GL_ONE_MINUS_SRC_ALPHA);
// Draw the background
super.draw(batch, x, y, width, height);
// draw the foreground mask
mask.draw(batch, x + innerButtonTable.getX(), y + innerButtonTable.getY(), innerButtonTable.getWidth(), innerButtonTable.getHeight());
// result = foreground * (0,0,0,0) + background * (1 - sourceAlpha)
batch.end();
batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch.begin();

It is just making the mask area plain black, whereas I was expecting transparency. Any ideas? This is what I get. This is what I expected.
1 | Is it possible to gain performance by omitting vertex normals in the GPU pipe? I am working on a rendering problem where I want to render as many raw triangles to the screen as I can with either OpenGL or DirectX with the absolute fastest performance possible. I wondered about omitting vertex normals completely and only transforming vertex positions during the vertex shader stage. 1) Is this possible? 2) Is it actually going to increase performance or is the "bare metal" of the GPU designed in such a way that trying to omit normals won't gain any more throughput? p.s. Yes, I realize that omitting normals will leave you with the problem of how to shade the triangle during the shader stage, but I could at least render a solid color to the screen (no shading). At this point, all I'm wondering about is how much data I can eliminate from the typical pipeline to increase the pipeline throughput to the absolute maximum. |
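It is certainly possible: vertex attributes are whatever the application declares, so a position-only layout is legal and shrinks vertex fetch bandwidth to 12 bytes per vertex. A minimal sketch of such a pipeline with a flat-color fragment shader (names like u_mvp are illustrative):

// C++ side: one tightly packed vec3 position per vertex, nothing else
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 3, (void*)0);
glEnableVertexAttribArray(0);

// vertex shader
// #version 330 core
// layout(location = 0) in vec3 position;
// uniform mat4 u_mvp;
// void main() { gl_Position = u_mvp * vec4(position, 1.0); }

// fragment shader
// #version 330 core
// out vec4 color;
// void main() { color = vec4(1.0); }   // unshaded solid color

Whether it actually raises triangle throughput depends on where the bottleneck is; if the pipeline is limited by triangle setup or rasterization rather than vertex fetch, dropping normals may change little, so this is something to measure rather than assume.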
1 | Slick2D fullscreen with black bars NOTE: I am moving this question here from Stack Overflow because I feel it belongs better on this forum. I am deleting the original question from Stack Overflow. I'm working on a game in Slick2D and I want to have some versatility with the resolutions I can use for fullscreen. Is there a way to make the game's fullscreen mode add black bars around the game instead of stretching the game to whatever resolution the graphics card can support for fullscreen? I'm not terribly familiar with OpenGL, so if the only fix requires OpenGL, I would appreciate a detailed explanation of how to implement the code. Currently I'm using arg0.setDisplayMode(X_resolution, Y_resolution, true), where arg0 is my AppGameContainer instance. But it means I have to stretch my game to one of the compatible resolutions.
1 | How can I extract the RGB color data from a TGA image? I am working in OpenGL, and I am trying to create terrains using height maps. I am using my own functions to load a TGA image, and in order to pass data to the heightmap vertex shader I need to retrieve the RGB components from the TGA image. I am not quite sure how to do that. I saw in one tutorial that the SDL library can be used to retrieve the RGB components from a TGA image, but I do not want to use any such library because I am already able to load a TGA. So, how do I recover the RGB information per pixel from a TGA that I have loaded?
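Once the TGA is decoded into a byte buffer, a per-pixel fetch is just index arithmetic. A minimal sketch, assuming the loader produced tightly packed 24-bit pixels (uncompressed TGA stores channels in BGR order, and rows usually start at the bottom of the image unless the descriptor flag says otherwise):

struct Pixel { unsigned char r, g, b; };

Pixel getPixel(const unsigned char* data, int width, int x, int y)
{
    const unsigned char* p = data + (y * width + x) * 3;  // 3 bytes per pixel
    return { p[2], p[1], p[0] };                          // TGA stores BGR on disk
}

// heightmap use: take one channel as a 0..1 height value
float height = getPixel(img, w, x, y).r / 255.0f;

For 32-bit TGAs the stride becomes 4 and the fourth byte is alpha; the swizzle from BGR(A) to RGB(A) is the part most loaders get wrong on the first try.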
1 | Style: When to call GL Enable/Disable I'm working on an OpenGL application; some routines render textured objects, some just colored primitives. I was wondering if there is a standard convention for how to deal with setting the OpenGL 'state'. For example, consider this routine (using OpenGL ES / OpenTK):

/// <summary>
/// Renders the given tile at the corresponding screen tile location.
/// </summary>
public static void RenderTile(float[] vertices, float xOffset, float yOffset, uint textureId)
{
    GL.EnableClientState(All.VertexArray);
    GL.EnableClientState(All.ColorArray);
    GL.EnableClientState(All.TextureCoordArray);
    GL.PushMatrix();
    GL.Translate(xOffset, yOffset, 0.0f);
    GL.BindTexture(All.Texture2D, textureId);
    GL.VertexPointer(2, All.Float, 0, vertices);
    GL.ColorPointer(4, All.Float, 0, SQUARE_COLORS);
    GL.TexCoordPointer(2, All.Float, 0, SQUARE_TEXTURE_COORDS);
    GL.DrawArrays(All.TriangleStrip, 0, 4);
    GL.PopMatrix();
}

I'm curious about that GL.EnableClientState(All.TextureCoordArray) bit. Should I just get into the habit of calling GL.Enable(...) / GL.EnableClientState(...) for everything the current function needs, even if it is largely redundant? Or is it better to just assume that the application has a certain set of things enabled (blending, back-face culling, 2D textures, etc.)? I know this question is pretty subjective, so I'm looking for advice from someone who has worked with a large OpenGL codebase. (All my experience is from solo projects.)
1 | Using same buffer for vertex and index data? Is it possible to use the same buffer for both GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER? I load both vertex data and index data into a big slab of memory, so it would be easier for me to just load it all into a single buffer. So naturally, I do like this:

glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, dataSize, data, usage);
glBindBuffer(GL_ARRAY_BUFFER, 0);

Is it legal, during rendering, to simply use it as both?

glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
glVertexAttribPointer(...);
...
glDrawElements(mode, count, dataType, (void*)indexOffset);

I can't find anything in the spec saying it's OK to do so, but I can't find anything that says that I can't either. Googling doesn't turn up much either, but I might be looking in the wrong places.
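For what it's worth, core OpenGL treats the targets as mere bind points, so nothing forbids binding one buffer object to both; the draw call reads vertices and indices from the offsets you give it. A minimal sketch with vertex data at the front and index data appended (the offsets are the part to get right; Vertex is a hypothetical struct):

// layout inside the single buffer: [ vertex bytes ][ index bytes ]
GLsizeiptr vertexBytes = vertexCount * sizeof(Vertex);

glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)0);                 // vertices start at offset 0
glEnableVertexAttribArray(0);

glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
               (void*)vertexBytes);              // indices start after the vertices

One caveat: some drivers are reported to place index and vertex data in different memory pools and may prefer separate buffers, so treat the single-buffer layout as something to profile rather than a guaranteed win.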
1 | Intersperse 2D with 3D opengl I want to be able to draw 3D objects as well as 2D objects in an openGL environment. Normally, I would draw my 3D stuff, disable the depth buffer and depth mask, then draw my 2D stuff. However, this creates a hassle. What if I have a single draw() function that is supposed to draw 3D and 2D? I would need to separate it, which I really don't want to do. I want to be able to do stuff like draw3D(...) draw2D(...) draw3D(...) So is there a common way of doing this? I can think of a few solutions but would prefer to do it a standard way. Here's a visual of the problem. My 2D text is interfering with my 3D environment |
1 | OpenGL question about glColorPointer So, I'm just starting out with LWJGL, and my current task is to render two colored cubes on the screen. I can render them; however, I'm looking for some advice on the most efficient way to specify the colors. Currently, I'm doing this:

// specify vertices and colors
vertexData.put(new float[] {
    -0.5f, -0.5f, 0,   0.5f, -0.5f, 0,   -0.5f, 0.5f, 0,   0.5f, 0.5f, 0,   // 1st cube vertices
    -1.0f, -2.5f, 0,   0, -2.5f, 0,   -1.0f, -1.5f, 0,   0, -1.5f, 0 });    // 2nd cube vertices
colorData.put(new float[] {
    1, 0, 0,  0, 1, 0,  0, 0, 1,  1, 1, 1,     // colors for 1st cube
    1, 0, 0,  0, 1, 0,  0, 0, 1,  1, 1, 1 });  // colors for 2nd

In the render loop:

glVertexPointer(vertexBufSize, GL_FLOAT, 0, 0L);
glColorPointer(colorBufSize, GL_FLOAT, 0, 0L);
...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // 1st cube
glDrawArrays(GL_TRIANGLE_STRIP, 4, 4); // 2nd cube

This works, but I can't help but think that since I'm specifying the exact same color values for both cubes, I'm wasting memory. I was hoping to just have one set of vertex colors that could be applied to each cube that gets drawn. Can it be done? If so, what is the best way to do it? Thanks
1 | How do you create a fractal cube map? I want to create a map similar to how Minecraft and other related games do it; I just haven't the faintest clue how. Can anyone point me to a decent tutorial or give me a decent run-through? I program in Java and use OpenGL.
1 | OpenGL flickering of fragments even with disabled depth test I'm trying to render a quad with a coloured border. I'm using texture coordinates to detect whether the fragment should be considered part of the border or not. If it is part of the border, it is rendered green, otherwise black. Here are my vertices / normals / tex coordinates:

float vertices[] = {
    // positions            normals             texture coords
    -0.5f, -0.5f,  0.5f,    0.0f, 0.0f, 1.0f,   0.0f, 0.0f,
     0.5f, -0.5f,  0.5f,    0.0f, 0.0f, 1.0f,   1.0f, 0.0f,
     0.5f,  0.5f,  0.5f,    0.0f, 0.0f, 1.0f,   1.0f, 1.0f,
     0.5f,  0.5f,  0.5f,    0.0f, 0.0f, 1.0f,   1.0f, 1.0f,
    -0.5f,  0.5f,  0.5f,    0.0f, 0.0f, 1.0f,   0.0f, 1.0f,
    -0.5f, -0.5f,  0.5f,    0.0f, 0.0f, 1.0f,   0.0f, 0.0f,
};

Here's my fragment shader:

#version 330 core
in vec3 frag_pos;
in vec3 frag_nor;
in vec2 frag_tex;
out vec4 frag_color;

void main()
{
    vec2 origin = vec2(0.01, 0.01);
    float width  = 1.0 - origin.x * 2.0;
    float height = 1.0 - origin.y * 2.0;
    if ((frag_tex.x > origin.x && frag_tex.x < origin.x + width) &&
        (frag_tex.y > origin.y && frag_tex.y < origin.y + height))
        frag_color = vec4(0.0);
    else
        frag_color = vec4(0.0, 1.0, 0.0, 0.0);
}

And this is how I'm rendering:

glDisable(GL_DEPTH_TEST);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);

On the right, I'm drawing the same quad with a pass-through fragment shader in wireframe mode. As you can see, the left quad is flickering while moving the camera. Any ideas how to fix this?
1 | Why are my texture coordinates always (0,0) in this shader? What I'm trying to do is add my depth buffer's values to my scene, i.e. make objects closer to the camera darker and objects further away lighter. That should be easy: render the depth buffer to a texture, then render the scene, multiplying each pixel's colour by the colour value at the same coordinates in my depth buffer texture... Yet the texture coordinates are failing me. What's happening is that the shader only ever uses the (0,0) coordinate of my texture, so the entire scene changes colour depending on what's there. What I'm doing is this (the critical stuff from my render function):

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTextureId); // binds the texture
box.render(); // sets the viewing matrix and renders a test VBO

I guess that's the right way to bind a texture? Anyhow, here's the vertex shader's important stuff:

out vec4 ShadowCoord;
void main()
{
    gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; // projection, view and model matrices
    ShadowCoord = gl_MultiTexCoord0; // something I kept seeing in examples, was hoping it would work
}

It seems gl_MultiTexCoord0 is the problem? Maybe? The fragment shader sets the colour using this expression:

vec4(texture2D(ShadowMap, ShadowCoord.st).x * vec3(Color), 1.0)

where ShadowMap is the sampler2D for the texture and Color is the vertex's color... So what am I missing? How come the coordinates never change from (0,0)?
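One relevant detail: gl_MultiTexCoord0 only carries data if the application actually submits texture coordinates (e.g. via glTexCoordPointer or glTexCoord2f), which would explain a constant (0,0). If the goal is just to darken by the depth at the same screen position, the lookup can instead be derived from gl_FragCoord in the fragment shader, since the depth texture and framebuffer are the same size. A sketch, assuming a new screenSize uniform is added:

uniform sampler2D ShadowMap;
uniform vec2 screenSize;    // assumed new uniform: framebuffer width/height in pixels

void main()
{
    vec2 uv = gl_FragCoord.xy / screenSize;        // 0..1 across the screen
    float depth = texture2D(ShadowMap, uv).x;
    gl_FragColor = vec4(depth * vec3(Color), 1.0);
}

This removes the vertex-shader texcoord plumbing entirely, which tends to be the simpler design for full-screen effects like this one.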
1 | NiftyGUI Text isn't rendering I am trying to create a GUI with Nifty on top of LWJGL. I've already had some problems during the Nifty setup; however, now Nifty is set up and running correctly except for text rendering. Here is my XML file:

<?xml version="1.0" encoding="UTF-8"?>
<!-- To change this license header, choose License Headers in Project Properties.
     To change this template file, choose Tools | Templates and open the template in the editor. -->
<nifty xmlns="http://nifty-gui.sourceforge.net/nifty-1.4.xsd"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://nifty-gui.sourceforge.net/nifty-1.4.xsd http://nifty-gui.sourceforge.net/nifty-1.4.xsd">
  <screen id="start">
    <layer childLayout="center">
      <panel id="panel" height="25%" width="35%" align="center" valign="center"
             backgroundColor="#f60f" childLayout="center" visibleToMouse="true">
        <text id="text" font="aurulent-sans-16.fnt" color="#ffff" text="Hello World!"/>
      </panel>
    </layer>
  </screen>
</nifty>

and here is the respective Java class:

package niftylwjgl;

import de.lessvoid.nifty.Nifty;
import de.lessvoid.nifty.nulldevice.NullSoundDevice;
import de.lessvoid.nifty.renderer.lwjgl.input.LwjglInputSystem;
import de.lessvoid.nifty.renderer.lwjgl.render.LwjglRenderDevice;
import de.lessvoid.nifty.tools.TimeProvider;
import org.lwjgl.LWJGLException;
import org.lwjgl.input.Keyboard;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.GL11;

public class NiftyLwjgl {
    private static Nifty nifty;
    private static boolean close;
    private static LwjglInputSystem inSys;

    public static void main(String[] args) throws LWJGLException, Exception {
        init();
        initNifty();
        while (!close) {
            update();
            preRender();
            render();
            postRender();
        }
        destroy();
    }

    private static void init() throws LWJGLException {
        Display.setDisplayMode(Display.getDesktopDisplayMode());
        Display.setFullscreen(true);
        Display.create();
        GL11.glOrtho(0, 1920, 0, 1080, -1, 1);
    }

    private static void initNifty() throws Exception {
        inSys = new LwjglInputSystem();
        inSys.startup();
        nifty = new Nifty(new LwjglRenderDevice(), new NullSoundDevice(), inSys, new TimeProvider());
        nifty.fromXml("xml/main.xml", "start");
        nifty.loadStyleFile("nifty-default-styles.xml");
        nifty.loadControlFile("nifty-default-controls.xml");
    }

    private static void preRender() {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT | GL11.GL_STENCIL_BUFFER_BIT);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glEnable(GL11.GL_TEXTURE_2D);
    }

    private static void postRender() {
        GL11.glDisable(GL11.GL_TEXTURE_2D);
    }

    private static void update() {
        if (Display.isCloseRequested() || Keyboard.isKeyDown(Keyboard.KEY_ESCAPE) || nifty.update()) close = true;
    }

    private static void render() {
        nifty.render(false);
        Display.update();
    }

    private static void destroy() {
        inSys.shutdown();
        Display.destroy();
    }
}

Here is a screenshot of the application running. I somehow feel like I'm missing something incredibly simple but important, like a glXXXXX call. Any help or suggestions are appreciated.
1 | Cascade (waterfall) particle animation Has anyone tried to create a waterfall particle animation using openFrameworks? Is it possible? Does it work well? If not, can you recommend any other frameworks for easier creation of cascade animation? Thanks a lot!
1 | Gamma Space and Linear Space with Shader I am using Unity and I can choose between two color space modes in the settings: Gamma or Linear. I am trying to build a custom lighting surface shader, but I am facing some problems with these color spaces, because the render is not the same depending on the color space. If I render lightDir, Normal or viewDir, I can see that they differ depending on the color space I use. I made some tests, and the result I get in Linear space is great, but how can I obtain the same result in Gamma space? Are there some transformations? To which components should I apply those transformations? Thank you very much!
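The usual trick when a shader must look the same in both modes is to do the lighting math in linear space and convert at the boundaries. A sketch of the two conversions (the 2.2 power curve is the common approximation of the sRGB curve; exactly where Unity applies them for you differs between the two project settings, so this is the manual fallback):

// gamma-encoded color input -> linear, before any lighting math
vec3 toLinear(vec3 c) { return pow(c, vec3(2.2)); }

// linear lighting result -> gamma, before output
// (only needed in Gamma mode; in Linear mode the pipeline encodes for you)
vec3 toGamma(vec3 c)  { return pow(c, vec3(1.0 / 2.2)); }

Directional quantities like lightDir, Normal and viewDir are not colors and should never be gamma-converted; only color values (textures, light colors, the final output) belong inside these transforms.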
1 | Can I use glTranslate and glRotate when my GLSL version is over #version 120? I was using GLSL version 120, but now I am following a tutorial that uses gl_ClipDistance[0] inside the vertex shader, and GLSL version 120 doesn't support gl_ClipDistance[0]. So I need to move to a GLSL version above 120, for example 330. I also have to replace gl_ModelViewProjectionMatrix inside the vertex shader. But my question is: do I need to change glTranslate, glRotate, glLoadIdentity, etc.?
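In a core profile with #version 330, the fixed-function matrix stack (glTranslate, glRotate, glLoadIdentity) and the built-in gl_ModelViewProjectionMatrix are gone, so the usual replacement is to build the matrix on the CPU (for example with GLM) and upload it as a uniform. A minimal sketch (u_mvp and u_clipPlane are illustrative names):

// C++ side, replacing glLoadIdentity/glTranslate/glRotate:
glm::mat4 model = glm::rotate(glm::translate(glm::mat4(1.0f), pos),
                              angle, glm::vec3(0, 1, 0));
glm::mat4 mvp = projection * view * model;
glUniformMatrix4fv(glGetUniformLocation(prog, "u_mvp"), 1, GL_FALSE,
                   glm::value_ptr(mvp));
glEnable(GL_CLIP_DISTANCE0);   // enables the first user clip distance

// vertex shader, replacing gl_ModelViewProjectionMatrix:
// #version 330
// layout(location = 0) in vec3 position;
// uniform mat4 u_mvp;
// uniform vec4 u_clipPlane;   // hypothetical clip plane, same space as position
// void main() {
//     gl_Position = u_mvp * vec4(position, 1.0);
//     gl_ClipDistance[0] = dot(vec4(position, 1.0), u_clipPlane);
// }

(If a compatibility profile context is used instead, the old matrix functions keep working alongside newer GLSL versions, but mixing the two styles tends to get confusing fast.)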
1 | I get GL_INVALID_VALUE after calling glTexSubImage2D I am trying to figure out why my texture allocation does not work. Here is the code:

glTexStorage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 2048, 2048);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, &BitMap[0]);

glTexSubImage2D returns GL_INVALID_VALUE, but the maximum texture size allowed is 16384x16384 on my card. The source image is 16-bit (red 5, green 6, blue 5).
1 | How should I structure VBOs for my 2D world data? My game is played on a fixed-size hex-based arena, where each hex can be of a different type and possibly contain creatures, items, anything on it. When I started out, I got the advice to have a VBO per entity, which I took quite literally and ended up creating a VBO per hex per frame, which got real slow. This led me to restructure the data so that I pre-calculate all of the VBO data for the "background" hex grid just once, and just repeatedly draw it. The problem is that now I can't simply rely on changing the model and having everything be updated, as the whole "map" is pre-calculated. Following this, I created a separate VBO which is filled just with "dynamic" objects that change each frame: I first render the static background, then the dynamic part. But this leads me to a question: what if I need to change the background at runtime? For example, what if a "wall" gets torn down every once in a while and I need to update the world data? Currently I can think of only a few options:

1. Change the VBO in place during the frame. This would only work if the change is small enough to not lag the frame.
2. Keep a separate VBO that just has the "changes" and is drawn over the original one. I don't really like this approach, as it doesn't feel flexible.
3. Build a new VBO on a separate thread and atomically swap them once done. While this would allow a larger update without sacrificing framerate, it could also introduce a weird kind of latency, where the user would still see the old state for a few frames until the new VBO is calculated.

Ideally I'd just change my "model" and re-build the whole VBO from it, but that's about as slow as it gets, so I'm not really sure if I should even keep thinking this way. How do larger and more complicated games handle updating geometry on the fly? Is everything just pre-calculated animations that simply get swapped around?
1 | Problem using glm::lookAt and glm::perspective I'm trying to change the code from the 22nd tutorial at http://ogldev.atspace.co.uk to use the GLM library, but the result seems wrong. The problem is shown in the picture below, while it should look like the second picture. This program displays a model on the screen. The model is loaded using a library, and I've verified that all of its vertices, normals, and texture coordinates load correctly. The problem seems to be in creating the projection-view-model matrix. I searched for how to use GLM and wrote the code below to calculate that matrix:

glm::vec3 Pos(3.0f, 7.0f, 10.0f);
glm::vec3 Target(0.0f, 0.2f, 1.0f);
glm::vec3 Up(0.0f, 1.0f, 0.0f);
glm::mat4 viewMat = glm::lookAt(Pos, Target, Up);
glm::mat4 perMat = glm::perspective(60.0f, (float)WINDOW_WIDTH / WINDOW_HEIGHT, 1.0f, 100.0f);
glm::mat4 modelMat = glm::scale(glm::mat4(1.0f), glm::vec3(0.1f, 0.1f, 0.1f));
modelMat = glm::rotate(modelMat, m_scale, glm::vec3(0.0f, 1.0f, 0.0f));
modelMat = glm::translate(modelMat, glm::vec3(0.0f, 0.0f, 10.0f));
// FINAL MATRIX
glm::mat4 PVMMat = perMat * viewMat * modelMat;

After that, I supplied the matrix to the shader like this:

glUniformMatrix4fv(m_WVPLocation, 1, GL_TRUE, &PVMMat[0][0]);

Finally, in the shaders, I calculate gl_Position as below. Vertex shader:

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
    TexCoord0 = TexCoord;
    Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;
    WorldPos0 = (gWorld * vec4(Position, 1.0)).xyz;
}

Fragment shader:

void main()
{
    FragColor = texture2D(gSampler, TexCoord0.xy);
}

I tried my best for a few days and I can't see any error in the way I use glm::lookAt and glm::perspective; it matches the manual on the GLM website. Could you suggest some reason for the problems I'm seeing, or some way for me to further investigate the error? Thanks so much. I hope to see your answer.
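One concrete thing worth checking: GLM stores matrices in column-major order, which is what OpenGL expects, so the transpose flag in glUniformMatrix4fv should be GL_FALSE. Passing GL_TRUE (as in the snippet above) transposes an already correctly laid out matrix, which typically produces exactly this kind of warped rendering. A sketch of the upload:

glm::mat4 PVMMat = perMat * viewMat * modelMat;   // P * V * M, applied right to left
glUniformMatrix4fv(m_WVPLocation, 1, GL_FALSE, glm::value_ptr(PVMMat));
// glm::value_ptr lives in <glm/gtc/type_ptr.hpp>

(The ogldev tutorials use their own row-major Matrix4f with GL_TRUE, which is where the flag likely came from; switching to GLM means flipping it. Also note that, depending on the GLM version, glm::perspective may expect the field of view in radians rather than degrees.)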
1 | Having trouble running OpenGL on MSVC I'm using the OpenGL Programming Guide, 8th Edition with MSVC 2013, but I can't get the triangles.cpp file to run. These are the errors popping up: http://puu.sh/jAokn/c07420cf46.png
1 | Displacement Mapping in OpenGL ES I need to make an application similar to Morfo. I posted a question here, and the answer states that the solution is "displacement mapping". I googled how to do this in OpenGL ES, but I couldn't figure out how to start or implement it. Can someone give me a starting point?
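The core of displacement mapping is small: sample a height texture in the vertex shader and push each vertex along its normal. A minimal GLSL ES style sketch (the attribute/uniform names are illustrative):

attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texcoord;
uniform sampler2D u_displacementMap;   // grayscale height texture
uniform float u_strength;              // how far to push, in model units
uniform mat4 u_mvp;
varying vec2 v_texcoord;

void main()
{
    float h = texture2DLod(u_displacementMap, a_texcoord, 0.0).r;  // 0..1 height
    vec3 displaced = a_position + a_normal * h * u_strength;
    v_texcoord = a_texcoord;
    gl_Position = u_mvp * vec4(displaced, 1.0);
}

Note that vertex texture fetch is optional in OpenGL ES 2.0 (check that MAX_VERTEX_TEXTURE_IMAGE_UNITS is greater than zero on the target device), which is why texture2DLod is used here; the mesh also needs enough vertices for the displacement to be visible.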
1 | Camera Rolling when Implementing Pitch and Yaw I am implementing a camera in OpenGL for an Android game and am having problems getting the pitch/yaw/roll of the camera correct. I have read many tutorials on quaternions and have implemented a basic camera based on this link. Everything works well except that when I pitch and yaw I get unwanted roll. I have read things that say I need to apply the pitch/yaw rotations together to prevent this, and other things that say not to track individual changes to angles, but I'm not sure what they mean or how to extend this implementation to mine. I am new to this type of math, so any help would be appreciated. My goal is to have a camera that can rotate similar to how a spaceship would rotate: I want the camera to be able to rotate 360 degrees in all directions and turn upside down, which is why I was rotating the upVector too. Any suggestions on what I can do to stop the camera rolling when rotating around the x and y axes? Thanks! Here is the code that translates user input into two angles and rotates the lookVector, upVector, and rightVector of the camera. My problem is the camera also rolls when I just pitch and yaw.

// Camera uses a position vector, lookAt vector, and upVector; eventually calls Matrix.setLookAtM()
Camera c = mRenderer.theCamera;
// PVector is a class that implements a 3-dimensional vector
float thetaX = getTouchChange(...); // set by the user touching the screen
float thetaY = getTouchChange(...); // set by the user touching the screen

// I keep 3 vectors in the camera class: one points up, one points right,
// and the 3rd is the point I am looking at

// Pitch - rotate the up and look vectors around right
PVector lookVector = PVector.sub(c.getLookAtPoint(), c.getLocation());
PVector otherVector = c.getUpDirection();
// get axis
PVector rAxis = c.getRightDirection();
rAxis.normalize();
// rotate look
PVector rotated = Quaternion.rotateVector(lookVector, rAxis, thetaY);
// add back camera location to convert vector to actual look point
rotated.add(mRenderer.theCamera.getLocation());
c.setLookAtPoint(rotated);
// rotate up
rotated = Quaternion.rotateVector(otherVector, rAxis, thetaY);
rotated.normalize();
c.setUpDirection(rotated);

// Yaw - rotate the look and right vectors around up
lookVector = PVector.sub(c.getLookAtPoint(), c.getLocation());
otherVector = c.getRightDirection();
rAxis = c.getUpDirection();
rAxis.normalize();
rotated = Quaternion.rotateVector(lookVector, rAxis, thetaX);
rotated.add(mRenderer.theCamera.getLocation());
c.setLookAtPoint(rotated);
rotated = Quaternion.rotateVector(otherVector, rAxis, thetaX);
rotated.normalize();
c.setRightDirection(rotated);

In my rendering code I update the camera orientation by calling the following:

// called in onSurfaceChanged
Matrix.frustumM(m3DProjectionMatrix, 0, -screenRatio, screenRatio, -1, 1, 1, 70);
// called in onDrawFrame
Matrix.setLookAtM(m3DViewMatrix, 0,
    theCamera.getLocation().x, theCamera.getLocation().y, theCamera.getLocation().z,
    theCamera.getLookAtPoint().x, theCamera.getLookAtPoint().y, theCamera.getLookAtPoint().z,
    theCamera.getUpDirection().x, theCamera.getUpDirection().y, theCamera.getUpDirection().z);
Matrix.multiplyMM(m3DMVPMatrix, 0, m3DProjectionMatrix, 0, m3DViewMatrix, 0);

m3DMVPMatrix is eventually passed to the shader along with the shape data.
1 | Should all primitives be GL_TRIANGLES in order to create large, unified batches? Optimizing modern OpenGL relies on aggressive batching, which is done by calls like glMultiDrawElementsIndirect. Although glMultiDrawElementsIndirect can render a large number of different meshes, it makes the assumption that all these meshes are made of the same primitive (e.g. GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_POINTS). In order to batch rendering most efficiently, is it wise to force everything to be GL_TRIANGLES (ignoring the possible optimizations of GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN) in order to make it possible to group more meshes together? This thought comes from reading the Approaching Zero Driver Overhead slides, which suggest drawing everything (or, presumably, as much as possible) in a single glMultiDrawElementsIndirect call.
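For reference, each record consumed by glMultiDrawElementsIndirect has this fixed layout, which is also why every record in one call shares a single primitive mode: the mode is a parameter of the call, not of the record.

// one record per mesh in the GL_DRAW_INDIRECT_BUFFER
struct DrawElementsIndirectCommand {
    GLuint count;          // index count for this mesh
    GLuint instanceCount;  // number of instances to draw
    GLuint firstIndex;     // offset into the shared index buffer
    GLuint baseVertex;     // offset into the shared vertex buffer
    GLuint baseInstance;   // offset into per-instance attribute data
};

// one call, one primitive mode, N meshes:
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, meshCount, 0);

Since indexed GL_TRIANGLES with a post-transform vertex cache recovers most of the vertex reuse that strips provide, converting everything to indexed triangle lists to enlarge batches is the trade most engines make.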
1 | Why am I having these weird framerate issues with OpenGL on Windows? I'm using OpenGL on Windows (and have been for a while now), and I've come across a strange issue. Every so often, the rate at which frames are presented on the screen drops to roughly 10 FPS. However, my framerate counter stays at the usual framerate (2000 FPS in the menu, 300 FPS in game). My framerate counter is based on the time between draw calls, so the graphics card is definitely rendering 2000 frames a second. What is the problem? How can I fix it? EDIT: I forgot to mention, this only happens when running in windowed mode.
1 | Texture coordinate discontinuity with mipmaps creates seams I just started learning OpenGL and I am getting this artifact when texturing a sphere with mipmaps. Basically, when the fragment shader samples the edge of my texture, it detects the discontinuity (say from 1 to 0) and picks the smallest mipmap, which creates this ugly seam: http://cdn.imghack.se/images/6bbe0ad0173cac003cd5dddc94bd43c7.png So I tried to manually override the gradients using textureGrad:

// fragVert is the original vertex from the vertex shader
vec2 ll = vec2((atan(fragVert.y, fragVert.x) / 3.1415926 + 1.0) * 0.5,
               (asin(fragVert.z) / 3.1415926 + 0.5));
vec2 ll2 = ll;
if (ll.x < 0.01 || ll.x > 0.99)
    ll2.x = 0.5;
vec4 surfaceColor = textureGrad(material.tex, ll, dFdx(ll2), dFdy(ll2));

Now I get two seams instead of one. How can I get rid of them? And why does the code above generate two seams? You can't tell from the two images, but the two seams are on either side of the original seam: http://cdn.imghack.se/images/44a38ef13cc2cdd801967c9223fdd2d3.png
1 | How to clear a buffer to 1.0 instead of 0.0 in OpenGL? Using glClear() you can reset a buffer, which by default sets the color buffer to 0.0. This is useful, say, if you want pixels not covered by models to be black, because vec3(0.0, 0.0, 0.0) is black. But for the depth buffer, clearing should mean setting it to the maximum value 1.0, since you don't want pixels not covered by models to have depth 0.0, which is the camera's position, but depth 1.0, which means as far away as possible. So how can I clear the depth buffer to 1.0, or get around this issue another way?
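It turns out the clear value is separate per-buffer state: glClear writes whatever glClearDepth (for depth) and glClearColor (for color) were last set to, and the depth clear value already defaults to 1.0, so depth clears to "far" unless something changed it. A sketch:

glClearDepth(1.0);                                    // the default, shown for clarity
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);                 // color clears to opaque black
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // depth buffer becomes 1.0 everywhere

So no workaround is needed; the two clear values are configured independently and only applied by the glClear call.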
1 | How to change the color of an invalid texture? In OpenGL, when I don't bind a texture, or I bind a texture that was loaded incorrectly, any calls to texture2D/texelFetch in the shader will return vec4(0, 0, 0, 1). Is there a way to change this to return vec4(1, 1, 1, 1) instead? Initially I looked for a way to test whether the sampler2D was valid, and when I couldn't find one I tried using glColor4f, but that doesn't seem to work either (although that could be the amdgpu driver; it seems to have some trouble with the old pipeline calls generally). Is there some way to change what this default return color is?
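As far as I know the sampled value for an incomplete texture is not configurable, so the common workaround is a tiny fallback texture: bind a 1x1 white texture whenever the real one is missing, and texture2D then returns vec4(1). A sketch:

GLuint whiteTex;
glGenTextures(1, &whiteTex);
glBindTexture(GL_TEXTURE_2D, whiteTex);
const unsigned char white[4] = { 255, 255, 255, 255 };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, white);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// at draw time:
glBindTexture(GL_TEXTURE_2D, textureLoadedOk ? realTex : whiteTex);

A nice side effect of this design is that failed loads become visually obvious (the object renders plain white, or whatever color the fallback is) instead of silently going black.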
1 | Coordinate transformation in voxel ray tracing? I am implementing voxelization, and I can't understand the coordinate transformation done in the shader. I have read some papers and code; the first step in the geometry shader is to project the triangle along its dominant axis. What's the difference between this method and using a projection matrix? In the usual way, we transform data from local space to world space to view space, then to projection space. Does projecting the triangle mean we transform from view space to projection space? What should we get after this process, at the end of the pixel shader?
1 | How to use FrameBuffer objects (OpenGL) I want to draw a 2D scene and, after the scene, draw some light effects. When I draw a light, I create an FBO, draw into it, and when finished drawing, I want to get the content of the FBO into a texture. My current code:

// Init
// Create texture
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, out texfbo);
glBindTexture(GL_TEXTURE_2D, texfbo);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, screenWidth, screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, null);
glBindTexture(GL_TEXTURE_2D, 0);
// Create renderbuffer
glGenRenderbuffers(1, out rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, screenWidth, screenHeight);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// Create framebuffer
glGenFramebuffers(1, out fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texfbo, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rb);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Loop
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Draw scene
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, texfbo);
// Draw rectangle with texture from framebuffer

The scene is not drawn on the screen, so I think that means it's drawn into the right buffer. But I can't get anything onto the screen. What is wrong?
1 | OpenGL: What steps to take to correctly set up a uniform block array I have managed to get uniform blocks to work, but I seem to be doing something wrong when trying to set up an array of uniform blocks. Assume this GLSL:

layout(std140, binding = 1) uniform LightingBlock
{
    vec4 ambient;
    vec4 diffuse;
    vec4 specular;
    vec3 factors;
    float shininess;
} lighting[3];

What's the exact procedure to bind this? What I am doing (and what works for a single block, not an array):

GenBuffer...
BindBuffer...
BindBufferRange(UNIFORM_BUFFER, 1, buffer_id, 0, size_in_bytes);
index = GetProgramResourceIndex(program_id, UNIFORM_BLOCK, "LightingBlock");
UniformBlockBinding(program_id, index, 1);

I read that I should replace the "LightingBlock" with "lighting[0]", "lighting[1]", etc., but this only returns invalid indices. So my current attempt looks like this:

GenBuffer...
BindBuffer...
binding = 1;
for (int i = 0; i < 3; ++i)
{
    BindBufferRange(UNIFORM_BUFFER, binding + i, buffer_id, 0, size_in_bytes);
    index = GetProgramResourceIndex(program_id, UNIFORM_BLOCK, "lighting[" + str(i) + "]");
    UniformBlockBinding(program_id, index, binding + i);
}

What am I doing wrong? How do I do this correctly?
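One likely culprit: for arrays of uniform blocks, each element is a separate block resource addressed by the block name with an index, not the instance name, so the string to query is "LightingBlock[0]", "LightingBlock[1]", and so on. A sketch of the loop under that assumption, also giving each element its own byte offset into the buffer (the original bound the same range three times):

// alignedBlockSize = blockSize rounded up to GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT
for (int i = 0; i < 3; ++i)
{
    glBindBufferRange(GL_UNIFORM_BUFFER, binding + i, buffer_id,
                      i * alignedBlockSize, blockSize);      // one range per element
    std::string name = "LightingBlock[" + std::to_string(i) + "]";
    GLuint index = glGetProgramResourceIndex(program_id, GL_UNIFORM_BLOCK, name.c_str());
    glUniformBlockBinding(program_id, index, binding + i);
}

Note that with layout(binding = 1) in the GLSL, the elements already get consecutive bindings 1, 2, 3 assigned by the compiler, so the glUniformBlockBinding calls may be redundant; the per-element buffer ranges are still required either way.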
1 | How to get PS3/Xbox 360 experience without having access to dev kits? I am a budding game programmer trying to get into the industry programming for PS3 and Xbox 360. The main problem I see is the need to demonstrate my skills to a potential employer, but without access to dev kits for the PS3 or Xbox 360, doing this directly is impossible. My question is: what is the best alternative way to show console developers my skills? C++ programming in DirectX for Windows seems close to showing Xbox 360 programming skills, and C++ programming in OpenGL seems relatively close to showing PS3 programming skills. Unfortunately, from web research it seems as if both Xbox 360 and PS3 actually have their own proprietary libraries, making this not a 100% fruitful endeavor. This approach seems closest, but also most time consuming. Plus you're not actually making anything run on the console. On the other hand, programming in XNA has the benefit that your games actually run on the console, though I get the impression that this is looked upon as not "the real deal", since it is just a wrapper around DirectX and uses C# instead of C++. Does anyone have knowledge or experience from inside the industry so as to know what kind of game demos would be most useful to show a potential employer? C++ in DirectX, OpenGL, XNA, Unreal Engine, Unity3D, Flash, etc.? There are only so many hours in the day, and I'd love to know how to direct my efforts. My gut feeling is that DirectX would be the best choice, as it seems closest to what is used on the Xbox 360, but if having a good demo in another language or engine is just as good, it would obviously be less time consuming to go another route. Thanks in advance for your help and advice!
1 | How do I represent blended tiles in a mesh/vertex array? I recently started making a Terraria clone using the LÖVE library, which is based on OpenGL. In Terraria, for each tile there is a large texture with all possible combinations for merging with neighbouring tiles. They usually only support tiles of the same type, and sometimes other materials, like dirt. As a result, they only need a single vertex array. In Starbound, it seems to be simpler: a tutorial I found notes the use of basic 8x8 blocks, with a few edges drawn if the block is next to a block of a different type. I want to implement a similar mechanic, but I encountered a serious issue: a single vertex array does not allow z-ordering, which I would need. My first idea was to use one vertex array for each type of block, but that would eat a lot of memory. My second idea was to use the fact that an OpenGL vertex array draws primitives in order: I could use some complicated data structure to represent tiles, so I would always know where to place their vertices in the array. However, that would require moving either lots of vertices or lots of memory. What is the best way to implement this?
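One way to keep a single vertex array and still get correct overlap is to lean on the depth buffer instead of array order: give each tile a z value for its layer and discard fully transparent texels in the fragment shader, so depth testing sorts opaque pixels regardless of where they sit in the array. A sketch of the fragment side (this alpha-cutout trick works for hard-edged pixel art; soft-alpha edges still need sorting):

// fragment shader
uniform sampler2D atlas;
in vec2 uv;
out vec4 color;

void main()
{
    vec4 texel = texture(atlas, uv);
    if (texel.a < 0.5)
        discard;        // transparent pixels write neither color nor depth
    color = texel;
}

With depth writes enabled and each tile's quad given its layer as the z coordinate, vertices can then be appended to the array in any order, which removes the need for the complicated ordering data structure.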
1 | Point Light shows black box/rect (PointLight not working) libgdx 3D I am creating a 3D scene (currently a box and a rect) and trying to enable lighting. When I create a PointLight and add it to the Environment, everything turns black. All I want to do is create a 3D scene and enable a point light, like a sun or rays coming from a point, shading the objects. Code:

environment = new Environment();
environment.add(new PointLight().set(1f, 1f, 1f, 0, 0, 20f, 100f));
modelBatch = new ModelBatch();
...
square = new ModelBuilder().createBox(300, 300, 300, new Material(ColorAttribute.createDiffuse(Color.GREEN)), VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
squareinst = new ModelInstance(square);
squareinst.transform.setTranslation(-500, 0, 0);

sprites.get(0).setRotationY(sprites.get(0).getRotationY() + 1f);
sprites.get(1).setRotationY(sprites.get(1).getRotationY() + 1f);
squareinst.transform.rotate(1, 0, 0, 1);
modelBatch.begin(camera);
for (Sprite3D sp : sprites) // has 3D rect models
    sp.draw(modelBatch, environment);
modelBatch.render(squareinst, environment);
modelBatch.end();

The PointLight turns everything black. Without using an environment or lights (as per my investigation): if the PointLight were simply not working, everything should also be black, because the environment needs light. It works fine with a DirectionalLight (only the back face of the rect is black even after rotations; I don't know why). libgdx version 1.6.1, Android Studio; I checked it on both an Android device and desktop. Please, I really need to get this PointLight working. I don't know if it will take a custom shader; if so, please guide me to some links, because I am not experienced with shaders. I also read about PointLight not working on some devices or not working with OpenGL ES 2.0 enabled, but I am not sure.
1 | How can I obtain a triangle ID during rasterization? I want to know which triangle each rendered pixel comes from. Is there a way to do this in OpenGL?
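GLSL exposes exactly this as gl_PrimitiveID, which is readable in the fragment shader (from GL 3.2 / GLSL 1.50 onward, with no geometry shader required) and counts primitives within the current draw call. A sketch that writes the ID to an integer color attachment so it can be read back per pixel:

// fragment shader
#version 330 core
out int triangleId;   // render target should use an integer format, e.g. GL_R32I

void main()
{
    triangleId = gl_PrimitiveID;   // index of the triangle within this draw call
}

Note that the counter resets every draw call, so to identify triangles across an entire scene one typically adds a per-draw base offset uniform to the value before writing it out.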
1 | Sprite framework: binding multiple textures In an attempt to batch render as many quads (sprites) as possible, I'm instance rendering a single unit-sized quad and passing in a buffer of per-instance data that includes width/height, texture coordinates, color, texture ID, etc. Offline, I have a tool that constructs texture atlases based on any particular sprite animation, so the output is flexible in terms of texture atlas width/height, which sprites reference which texture atlas, and so on. The end result is that I may have many texture atlases, and sprites which reference any one of those atlases for their current frame. I need a robust solution that'll allow me to draw these sprites in any order while reducing texture binding as much as possible. After some research there are two options: combine multiple 2D textures into a 2D array texture (up to GL_MAX_ARRAY_TEXTURE_LAYERS_EXT), or bind multiple textures to an array of samplers (up to MAX_TEXTURE_IMAGE_UNITS). The first solution, I believe, requires all textures to have the same width and height. The second solution seems to have a limitation that requires a constant index to access the array of samplers in the fragment shader. I think I want to go with the second solution, but I'm worried I won't be able to write a correct fragment shader if I have to use a constant index, as I'd be getting the index via per-instance input. Are my suspicions correct? Is there a known workaround for the second solution? Or is there a better way to do this than what I presented?
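On the second worry: what sampler arrays require is a dynamically uniform index, and a per-instance value that varies within a draw call is not one, so the suspicion is justified. A texture array sidesteps this, because the layer coordinate of a sampler2DArray may vary freely per fragment, at the cost of all layers sharing one size and format. A sketch of the fragment side:

// fragment shader
uniform sampler2DArray atlases;   // all atlases stacked as layers of one array texture
in vec2 uv;
flat in float layer;              // per-instance atlas index, passed through the vertex shader
out vec4 color;

void main()
{
    color = texture(atlases, vec3(uv, layer));   // layer may differ per sprite
}

The same-size restriction is usually handled by padding smaller atlases up to the largest layer size and rescaling their UV ranges in the offline atlas tool, which keeps the runtime down to a single texture bind.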
1 | How can I render a simple lattice in LibGDX? I have searched all over, but I can't find what I think will be a simple answer. I am using OpenGL ES 2.0 and LibGDX. I simply want to use GL_LINES primitives to create a lattice structure. I have used ShapeRenderer with the following code in create():

shapeRenderer = new ShapeRenderer();
shapeRenderer.setProjectionMatrix(cam.combined);

in render():

shapeRenderer.setProjectionMatrix(cam.combined);
shapeRenderer.begin(ShapeType.Line);
shapeRenderer.setColor(0, 1, 0, 1);
for (float i = -10; i < 10; i++)
    for (float j = -10; j < 10; j++) {
        shapeRenderer.line(i, j, -10, i, j, 10);
        shapeRenderer.line(-10, i, j, 10, i, j);
        shapeRenderer.line(j, -10, i, j, 10, i);
    }
shapeRenderer.end();

The problem is, this is not a model instance and thus doesn't seem to use the RenderContext (GL_DEPTH_TEST, etc.). When I render the lattice this way, it either shows up entirely behind my models or entirely in front of them (depending on the order I render them in the render method). Is there a way to build a model which is simply a set of lines to be rendered with the OpenGL primitive GL_LINES? Any pointers would be greatly appreciated.
1 | What's the difference between bloom and emission? In my engine I've implemented bloom, but I also want to implement emission, because some of my models come with an emission map. It sounded like something adjacent to bloom, but I guess it's different, since other engines differentiate the two, like here: Unity Emission: https://docs.unity3d.com/Manual/StandardShaderMaterialParameterEmission.html, Unity Bloom: https://docs.unity3d.com/Manual/PostProcessing-Bloom.html, and also because it seems to be something done in the material shader and not in any post-processing. What is the difference (i.e. the part that is not post-processing), and how could I go about writing emission into my standard material shader (I'm using a deferred rendering technique)? UPDATE: Alright, I've seen that emission is just something you add on top of the fragment after sampling the emission map. How then would someone get the light to appear to glow off the surface, like in Unreal and Unity, seemingly lighting other surfaces? Is that to do with global illumination (I currently have a baked GI solution for static scenes)?
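In a deferred setup the usual pattern is: write the sampled emission into the G-buffer (or add it during the lighting resolve), add it to the lit result without any light attenuation or shadowing, and let the existing bloom pass pick up whatever ends up brighter than its threshold; the visible "glow" is the bloom, while actually illuminating neighbouring surfaces is GI's job. A sketch of the lighting-resolve end, with gEmissive as an assumed extra G-buffer target:

// deferred lighting resolve (fragment shader)
vec3 albedo   = texture(gAlbedo, uv).rgb;
vec3 emissive = texture(gEmissive, uv).rgb;   // assumed extra G-buffer attachment

vec3 lit   = computeLighting(albedo, uv);     // existing diffuse/specular result
vec3 color = lit + emissive;                  // emission is added, never shadowed

fragColor = vec4(color, 1.0);                 // bloom then thresholds the bright emissive

Scaling the emissive value above 1.0 (an HDR intensity) is what makes it reliably cross the bloom threshold and read as glowing, which is effectively what the Unity/Unreal emission intensity sliders do.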
1 | Using GLFW and GLUT together I'm new to OpenGL. Let's say I create an OpenGL window using GLFW, and I need the UI features from GLUT (such as popup menus). Is it possible to use GLFW and GLUT in one program? Thank you
1 | Comparing meshes and reducing duplication of data I'm writing a class that reads an OBJ file, indexes each mesh, and creates a VAO and a GameObject of the appropriate type. I've stumbled on a design issue. Objects of the same name (Tree_A001, Tree_A002) are currently skipped over, and their min and max extents are used to calculate their centre, and therefore where to position a copy of that mesh. Not only does this mean I've read the mesh in twice, just to ignore all the data as soon as I've got the min and max, but in my case I have 230 trees. It works, but it takes 12-15 seconds to read in the file, and I've not been able to glean the rotation or scale of each object. I understand that comparing the min and max extents of the duplicate mesh against the original could give me the scale, but as soon as the mesh is rotated in the scene file that breaks. I'd like to do something more efficient than this, and I'm stuck on what to do. What is the done thing in professional games that allows multiple objects to be placed in 3D modelling software with their individual position, rotation and scale kept? I've experimented with using single triangles to represent the position, scale, etc., but it makes designing the scene much harder, as you can't see what you're placing. Thanks for any advice
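For what it's worth, the usual production answer is that OBJ itself is the limitation: it bakes each object's vertices into scene space and stores no per-object transform, so the rotation and scale genuinely cannot be recovered from the extents. Pipelines instead keep mesh assets separate from a scene description (a format such as glTF or COLLADA, or a custom exporter script) in which each node stores a mesh reference plus its own position, rotation and scale. A sketch of the engine-side half, with all names hypothetical: each distinct mesh is parsed and uploaded once, and every duplicate becomes a lightweight instance.

#include <memory>
#include <string>
#include <unordered_map>

struct Mesh { unsigned vao = 0; int indexCount = 0; };
struct Transform { /* position, rotation, scale */ };
struct Instance { std::shared_ptr<Mesh> mesh; Transform transform; };

class MeshCache {
public:
    // "Tree_A001" and "Tree_A002" both resolve to the key "Tree_A".
    std::shared_ptr<Mesh> get(const std::string& baseName) {
        auto it = cache_.find(baseName);
        if (it != cache_.end()) return it->second;        // reuse the existing VAO
        auto mesh = std::make_shared<Mesh>(loadAndUpload(baseName));
        cache_.emplace(baseName, mesh);
        return mesh;
    }
private:
    static Mesh loadAndUpload(const std::string& name);   // hypothetical: parse once, build the VAO
    std::unordered_map<std::string, std::shared_ptr<Mesh>> cache_;
};

With that split, the 12-15 second load largely disappears: 230 trees cost one parse plus 230 small transform records.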
1 | OpenGL Transformations I'm not sure if I correctly understand 3D transformations in OpenGL. Let's assume I'm using the typical matrix stack. It seems like you move the world X units over, drop in a bag of verts (a mesh), then move the world back by popping the matrix off the stack, and repeat the process to place another model? Say I wanted to place a model at (10, 0, 0) and another at (0, 10, 0). I think I would:

PUSH
  move(10, 0, 0)
  DRAW model A
POP
PUSH
  move(0, 10, 0)
  DRAW model B
POP

Some questions come to mind. For example, what does loading an identity matrix onto the stack do? Am I right to assume that these operations in effect move the world around, and then you draw the mesh at (0, 0, 0) in world space once you've moved the world to your liking? Or is it the other way around: does GL "apply" the matrix stack to your mesh as you draw it? Thanks, Cody
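A minimal fixed-function sketch of exactly that pattern (drawModelA and drawModelB are hypothetical functions that issue each mesh's draw calls):

#include <GL/gl.h>

void drawModelA();  // hypothetical
void drawModelB();  // hypothetical

void drawScene() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                  // overwrite the stack top with the identity matrix
    // ... apply the camera/view transform here ...

    glPushMatrix();                    // save the current (view) matrix
    glTranslatef(10.0f, 0.0f, 0.0f);   // stack top is now view * translate(10,0,0)
    drawModelA();                      // each vertex is multiplied by the stack top at draw time
    glPopMatrix();                     // restore the saved matrix

    glPushMatrix();
    glTranslatef(0.0f, 10.0f, 0.0f);
    drawModelB();
    glPopMatrix();
}

Both readings in the question describe the same thing: GL multiplies every incoming vertex by the current top of the modelview stack as the mesh is drawn, so "moving the world and drawing at the origin" and "applying the matrix to the mesh" are two views of one operation. glLoadIdentity simply replaces the top of the stack with the identity matrix, discarding whatever transform was there.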
1 | What technique should I use to create models and animation sequences in OpenGL code? I'm getting into game development using OpenGL (and the LWJGL library) and I want to create models for characters, NPCs, etc. in code, as well as animation sequences (for example, the way the models are done in Minecraft). What is the process for doing something like this? Is there a particular feature set that is used, or are there common methods of doing it? I'm basically looking for pointers as to what to search for when trying to find examples of how this is done.
1 | Irrlicht engine game won't compile on Linux (undefined references: OpenGL and XFree) I'm trying to port the game I'm developing over to Linux, but when I compile I get a lot of undefined references, mostly to functions that look like they belong to OpenGL (most are named gl...), plus one called XFree. I compiled it with this command:

g++ main.cpp -L../../../LIB/irrlicht-1.8.3/lib/Linux -lIrrlicht -I../../../LIB/irrlicht-1.8.3/include

One of the errors:

/home/owner/LIB/irrlicht-1.8.3/source/Irrlicht/COpenGLDriver.cpp:3746: undefined reference to `glVertex3f'
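For what it's worth, this pattern of errors usually means libIrrlicht.a is a static library, so its own OpenGL and X11 dependencies must also appear on the final link line. A hedged sketch of the command (the exact library set can vary by distribution and Irrlicht build, but Irrlicht's own Linux examples link these four):

g++ main.cpp -I../../../LIB/irrlicht-1.8.3/include \
    -L../../../LIB/irrlicht-1.8.3/lib/Linux -lIrrlicht \
    -lGL -lXxf86vm -lXext -lX11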
1 | How to only render fragments with Z from 0 to 1 in OpenGL? I have been using OpenGL for a year now, but I just recently found out that OpenGL clips vertices only when the absolute value of the x, y or z coordinate is greater than the absolute value of the w coordinate. Previously I had assumed that the z coordinate would have to satisfy 0 <= z <= w to be rendered. Is there any way to clip vertices with z less than 0 without a performance hit? I am using OpenGL 4.4.
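A minimal sketch of one approach that works on 4.4, using a user clip distance (hardware clip distances cost essentially nothing): feed clip-space z into gl_ClipDistance[0], so anything with z < 0 is clipped while the standard -w <= z <= w volume still applies, leaving exactly 0 <= z <= w. The uMVP name is hypothetical, and GL_CLIP_DISTANCE0 assumes a loader (GLEW/GLAD) exposing GL 3.0+ enums:

#include <glad/glad.h>  // or any loader providing GL 3.0+ declarations

// C++ side: activate gl_ClipDistance[0] once at init.
void enableZeroToOneClip() {
    glEnable(GL_CLIP_DISTANCE0);
}

// Vertex shader, embedded as a C++ raw string.
const char* vertexSrc = R"GLSL(
#version 440 core
layout(location = 0) in vec3 aPosition;
uniform mat4 uMVP;
void main() {
    gl_Position = uMVP * vec4(aPosition, 1.0);
    gl_ClipDistance[0] = gl_Position.z;  // interpolates below 0 -> fragment clipped
}
)GLSL";

(On GL 4.5, or 4.4 with ARB_clip_control, glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) changes the clip volume itself to 0 <= z <= w, with no shader change.)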