_id | text
---|---
1 | Is it possible to store diffuse and normal maps in the same texture array and preserve their sRGB/linear colour spaces? Usually one uploads colour texture data with GL_SRGB as the internalformat of a texture, and normal or specular-highlight maps with GL_RGB (or some other linear format). We can minimize context switches by using a texture array, but that forces all layers to have the same internalformat. Is there a way to store all the needed textures in a single texture array while preserving colour spaces? Or should I convert from sRGB to linear space myself when uploading the texture data? |
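One workable compromise, sketched below under the assumption that the whole array stays in a linear format such as GL_RGBA8: decode the sRGB layers manually in the fragment shader. The `atlas` and `firstLinearLayer` names and the layer-index convention are hypothetical, and the `pow(2.2)` decode is a cheap approximation of the exact sRGB transfer function.

```glsl
uniform sampler2DArray atlas;
uniform int firstLinearLayer; // hypothetical convention: layers below this hold sRGB colour data

vec3 fetchLinear(vec3 uvLayer) {
    vec3 c = texture(atlas, uvLayer).rgb;
    if (int(uvLayer.z) < firstLinearLayer)
        c = pow(c, vec3(2.2)); // manual sRGB -> linear decode for colour layers only
    return c;                  // normal/specular layers pass through untouched
}
```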
1 | Quaternion LookAt for camera. I am using the following code to rotate entities to look at points: `glm::vec3 forwardVector = glm::normalize(point - position); float dot = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector); float rotationAngle = (float)acos(dot); glm::vec3 rotationAxis = glm::normalize(glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector)); rotation = glm::normalize(glm::quat(rotationAxis * rotationAngle));` This works fine for my usual entities. However, when I use this on my camera entity, I get a black screen. If I flip the subtraction in the first line, so that I take the forward vector to be the direction from the point to my camera's position, then my camera works, but naturally my entities rotate to look in the opposite direction of the point. I compute the transformation matrix for the camera and then take its inverse as the view matrix, which I pass to my OpenGL shaders: `glm::mat4 viewMatrix = glm::inverse(cameraTransform->GetTransformationMatrix());` The orthographic projection matrix is created using glm::ortho. What's going wrong? |
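Two things worth noting here, offered as a sketch rather than a definitive diagnosis: an OpenGL-convention camera looks down its local -Z axis, so a rotation built from +Z leaves the camera facing backwards; and the `glm::quat(vec3)` constructor interprets its argument as Euler angles, not as axis times angle (`glm::angleAxis` is the axis-angle constructor). A minimal alternative:

```cpp
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Build an orientation looking from 'position' toward 'point'. 'localForward'
// is the object's forward axis in its own space: +Z for these entities,
// (0,0,-1) for a GL-style camera.
glm::quat lookAtQuat(glm::vec3 position, glm::vec3 point, glm::vec3 localForward) {
    glm::vec3 to = glm::normalize(point - position);
    float d = glm::clamp(glm::dot(localForward, to), -1.0f, 1.0f); // guard acos domain
    glm::vec3 axis = glm::normalize(glm::cross(localForward, to)); // degenerate if parallel; handle separately
    return glm::angleAxis(acosf(d), axis);                         // true axis-angle constructor
}
```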
1 | Improve bloom quality. I'm trying to understand how I can improve bloom-effect quality using all the optimizations known to me. Currently I do it as follows: create a new texture using some threshold to extract the areas which should glow; downsample that texture 4 times (using bilinear filtering, GL_LINEAR); blur each texture horizontally using a 5x5 Gaussian blur; blur each texture vertically using a 5x5 Gaussian blur; composite each blurred texture with the final image. I've implemented the Gaussian blur using the Incremental Gaussian Algorithm and set the radius to 5. However, because of the size of the last texture, after composition the bloom has very low quality, especially when moving the camera. It doesn't look clear here, but you can see it near the bunny's tail. A simple workaround to the issue was increasing the radius for the smaller textures, but then the image was more blurred. The effect is similar when I use the solution presented by Philip Rideout. I get better results when I use the blur shader presented here, but I still see some kind of vertical stripes. I also tried to improve the algorithm by blurring the image first and then downsampling it, repeating the whole process for the rest of the images, but I haven't spotted any difference. I also wonder how it is done in Unreal Engine, i.e. how the effective blur radius is computed. The documentation claims that each Bloom Size value is "the size in percent of the screen width" and that each texture is half the size of the previous one. |
1 | Simple mouse-ray picking in OpenGL. I've been looking at tutorials and trying to figure out how to do basic ray picking, but I'm stuck on what space to do the distance calculations in. What space does glm::unProject() lead to exactly? This is what I'm doing. First I unproject the mouse, like so: `vec3 m_uproj = glm::unProject(vec3(mouse_xy_.x * glutGet(GLUT_WINDOW_WIDTH), mouse_xy_.y * glutGet(GLUT_WINDOW_HEIGHT), 0.0f), workshop.access_gui()->view_mat(), workshop.access_gui()->proj_mat(), glm::ivec4(0, 0, glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT))); /* mouse ray start */` and the end of the ray: `vec3 m_uproj2 = glm::unProject(vec3(mouse_xy.x * glutGet(GLUT_WINDOW_WIDTH), mouse_xy.y * glutGet(GLUT_WINDOW_HEIGHT), 1.0f), workshop.access_gui()->view_mat(), workshop.access_gui()->proj_mat(), ivec4(0, 0, glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT)));` Then I find its direction, mray, like so: `vec3 mouse_ray = normalize(m_uproj2 - m_uproj); /* get mray direction */` And I'm expecting to find the closest point to some object with this calculation: `vec3 closest_point = mouse_ray * glm::dot(locations[i], mouse_ray);` But locations seems to be in the wrong space? Or am I thinking about this the wrong way? I've been looking around, but I can't find anywhere that explains just this part that I must be misunderstanding. The idea is to compare the distance between closest_point and locations[i], but the results are incorrect. I'm getting something like this, where it should be red only if the cursor is over the square. What space does glm::unProject() put my ray in anyway? And in what space should I put the objects that I want to pick/highlight? |
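glm::unProject inverts the view and projection transforms, so with the matrices used above the result is in world space. A likely issue with the closest-point formula, sketched here as an assumption about the intent: it treats the ray as if it passed through the world origin, ignoring the ray origin m_uproj.

```cpp
// Closest point on the picking ray to a world-space object position.
// Assumes m_uproj (ray start) and mouse_ray (unit direction) from the question;
// pickRadius is a hypothetical hit threshold.
glm::vec3 toObj   = locations[i] - m_uproj;          // vector from ray origin to object
float     t       = glm::dot(toObj, mouse_ray);      // distance along the ray
glm::vec3 closest = m_uproj + t * mouse_ray;         // closest point on the ray, world space
bool hit = glm::length(locations[i] - closest) < pickRadius;
```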
1 | For vertex buffer streaming, multiple glBufferSubData vs. orphaning? I was learning OpenGL recently. In games we need to update the positions of game objects frequently, and they constantly come in and out of the screen, so when rendering we need to update the vertex buffer quite often as well. In OpenGL, one intuitive way is to use glBufferSubData to update the parts that changed. But I also read online about a trick called orphaning, which creates a new buffer data store and uploads the whole vertex data to it. Also, due to state-change and upload costs, multiple glBufferSubData calls may cost more. Here are my questions: which method is better? Do stalls really matter in this case? Do state-change and upload costs really matter in this case? Thanks! |
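For reference, the orphaning idiom the question mentions looks like this sketch: re-specifying the data store lets the driver allocate fresh storage while the GPU may still be reading the old one, so the upload doesn't stall. The size and usage hint are illustrative.

```cpp
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Orphan: same size, null data -> driver can detach the old store still in use by the GPU.
glBufferData(GL_ARRAY_BUFFER, bufferSize, nullptr, GL_STREAM_DRAW);
// Fill the fresh store; no synchronization with in-flight draws is needed.
glBufferSubData(GL_ARRAY_BUFFER, 0, bytesThisFrame, vertices);
```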
1 | What OpenGL version(s) to learn and/or use? So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new way of doing things confusing. I guess my first question is: does anyone know figures about the percentage of gamers that can run each version of OpenGL? What's the market share like for 2.x, 3.x, 4.x? I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and they usually try to hit a very wide user base, and they list a minimum of a GeForce 8 series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supports up to a certain version, or OS X only supports 3.2, and all kinds of other things. Overall, I'm just confused as to how much support there is for the various versions and which version to learn and use. I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and a couple of reviews mention that the 4th edition is better and that the 5th edition doesn't properly teach the new features. Basically, even within the learning material I'm seeing discrepancies, and I just don't know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read in various articles and reviews that you want to "stay away from deprecated features like glBegin(), glEnd()", yet a lot of books and tutorials I've seen use exactly that method. I've seen people saying that, basically, the new way of doing things is more complicated yet the old way is bad. Just a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors in as well, because as far as I understand that's only in 4.x? (And by the way, my desktop supports 4.2.) |
1 | What are the factors that determine the default frequency of a shader call? After playing for some days with various vertex and fragment shaders, it seems clear to me that these programs are called by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values, because I don't have a big collection of hardware right now to do extensive tests. For all I know the answer could be really trivial, like "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the techniques for controlling the FPS that I have seen so far use a call to the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when the rendering started, but then I have to rely on the CPU to do the math. Why can't I set an FPS value in OpenGL, if OpenGL clearly has a counter and a timer/clock? P.S. I'm referring to OpenGL 3.0. |
1 | Android: object gets jagged at the border. I am new to OpenGL ES 1 on Android. My 3D model's border is getting jagged; please help me make it look smooth instead. Screenshot: http://i.stack.imgur.com/1Gq83.png `private class Renderer implements GLSurfaceView.Renderer { public Renderer() { setEGLConfigChooser(8, 8, 8, 8, 16, 0); getHolder().setFormat(PixelFormat.TRANSLUCENT); setZOrderOnTop(true); } public void onSurfaceCreated(GL10 gl, EGLConfig config) { gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); gl.glDisable(GL10.GL_DITHER); gl.glEnable(GL10.GL_DEPTH_TEST); gl.glEnable(GL10.GL_BLEND); gl.glBlendFunc(GL10.GL_SRC_ALPHA_SATURATE, GL10.GL_ONE); gl.glDepthFunc(GL10.GL_LEQUAL); gl.glHint(GL10.GL_POLYGON_SMOOTH_HINT, GL10.GL_NICEST); gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST); gl.glEnable(GL10.GL_TEXTURE_2D); gl.glShadeModel(GL10.GL_SMOOTH); } public void onSurfaceChanged(GL10 gl, int w, int h) { mViewWidth = (float)w; mViewHeight = (float)h; gl.glViewport(0, 0, w, h); gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); GLU.gluPerspective(gl, 45, mViewWidth / mViewHeight, 0.1f, 100f); gl.glMatrixMode(GL10.GL_MODELVIEW); gl.glLoadIdentity(); } public void onDrawFrame(GL10 gl) { gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glPushMatrix(); gl.glDisable(GL10.GL_DITHER); GLU.gluLookAt(gl, 0, 0, 10, 0, 0, 0, 0, 1, 0); /* draw model */ gl.glPushMatrix(); if (mOrigin != null && mRotate != null) { gl.glTranslatef(mOrigin.x, mOrigin.y, mOrigin.z); gl.glRotatef(mRotate.x, 1f, 0f, 0f); gl.glRotatef(mRotate.y, 0f, 1f, 0f); gl.glRotatef(mRotate.z, 0f, 0f, 1f); } if (mModel != null) { mModel.draw(gl); if (!RendererView.textureFileName.equals("")) mModel.bindTextures(mContext, gl); } gl.glPopMatrix(); gl.glPopMatrix(); if (isPictureTake) { w = getWidth(); h = getHeight(); b = new int[w * (y + h)]; bt = new int[w * h]; IntBuffer ib = IntBuffer.wrap(b); ib.position(0); gl.glReadPixels(0, 0, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib); createBitmapFromGLSurface(context); isPictureTake = false; } } }` Thanks in advance. |
1 | Flickering/tearing with glfwSwapBuffers. I recently separated my logic and rendering threads to fix this problem: "GLFW window freezes on title bar hold/drag". This means the logic and rendering no longer run in lock step as they did before, so it is possible that the logic has not finished executing by the time we are ready to render. To handle this case, the rendering thread keeps sleeping for 1 ms until the logic has finished executing. Pseudocode, logic thread: `while (!exiting) { timeBefore = now(); updateGame(); renderingDue = true; timeElapsed = now() - timeBefore; sleep(MS_PER_FRAME - timeElapsed); }` Rendering thread: `while (!exiting) { if (renderingDue) { render(); renderingDue = false; GLFW.glfwSwapBuffers(window); } else { /* not yet ready to render, but try again in a short while */ sleep(1); } }` This seems to work fine for me, at least with vsync disabled. When I tested with vsync enabled I encountered stuttering, which I suppose is because if we miss a rendering frame, we have to wait until the next monitor refresh interval before we can catch up. However, on some other systems, in particular low-end laptops, there is a lot of flickering/tearing during gameplay. I don't understand what would cause this, since we always call glfwSwapBuffers() after rendering, not during. Is there something fundamentally wrong with my approach? I know that ideally the rendering thread should take a delta value so that it can render the state "between" logic updates (yes, I have read "Fix Your Timestep!"), but I am nonetheless curious about what is happening here. |
1 | How to get a texture of the current point of view in OpenGL ES 2.0? The title is probably confusing, but I didn't know how to ask better, sorry about that. What I would like to do is get a bitmap texture that represents exactly what's rendered at one point in time and save it as a file (I know how to save it; I only need to find out how to get the bitmap data from OpenGL). Like a screenshot, sort of. |
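A minimal read-back sketch that works on ES 2.0: glReadPixels copies the currently bound framebuffer into CPU memory (rows come back bottom-up, so you may need to flip them before saving). The width/height variables are assumed to match your viewport.

```cpp
#include <vector>
std::vector<unsigned char> pixels(width * height * 4);      // RGBA, 8 bits per channel
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// 'pixels' now holds the rendered frame, ready to hand to your file writer.
```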
1 | Multiple render targets, multiple fragment shaders. I render a normal and a depth image of a scene. Then I want to reuse these two images for further image processing in a second fragment shader. I use a framebuffer with 3 textures attached to it: two for the normal and depth textures, and one that is supposed to contain the final processed image. The problem is that I can't get the first two images into the second fragment shader to use them as texture samplers. Here is my code. First I create an FBO and attach 3 textures to it: `glGenFramebuffers(1, &Framebuffer); glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer); glGenTextures(1, &renderedNormalTexture); /* glBindTexture, glTexImage2D, glTexParameteri ... left out for clarity */ glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedNormalTexture, 0); glGenTextures(1, &renderedDepthTexture); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, renderedDepthTexture, 0); glGenTextures(1, &edgeTexture); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, edgeTexture, 0); ndbuffers[0] = GL_COLOR_ATTACHMENT0; ndbuffers[1] = GL_COLOR_ATTACHMENT1; ndbuffers[2] = GL_COLOR_ATTACHMENT2; glDrawBuffers(3, ndbuffers);` Fragment shader 1, which outputs two textures to locations 0 and 1: `in vec3 position_worldspace; in vec3 normal_cameraspace; in vec4 vpos_to_fragment; uniform float zmin; uniform float zmax; layout(location = 0) out vec3 normalcolor; layout(location = 1) out vec3 depthcolor; void main() { normalcolor = normalize(normal_cameraspace) * 0.5 + 0.5; /* normal out */ vec4 v = vec4(vpos_to_fragment); v /= v.w; float gray = (-v.z - zmin) / (zmax - zmin); depthcolor = vec3(gray); /* depth out */ }` Fragment shader 2, which is supposed to receive two texture samplers from locations 0 and 1 and do something with them: `uniform sampler2D normalImage; uniform sampler2D depthImage; uniform float width; uniform float height; in vec2 UV; layout(location = 2) out vec3 color; void main() { vec3 irgb = texture2D(normalImage, UV).rgb; /* do sth here... */ color = irgb; }` And finally the rendering step (maybe I am mistaken here). I render the geometry scene (once?!) and apply two fragment shaders: `glUseProgram(FragmentShader1); SetMVPUniforms(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); RenderScene(); /* render geometry */ glUseProgram(FragmentShader2); GLuint nID = glGetUniformLocation(FragmentShader2, "normalImage"); GLuint dID = glGetUniformLocation(FragmentShader2, "depthImage"); glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, renderedNormalTexture); glProgramUniform1i(FragmentShader2, nID, 0); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, renderedDepthTexture); glProgramUniform1i(FragmentShader2, dID, 1);` Now what I get is either nothing or a wrongly coloured image. |
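A sketch of what the second pass typically looks like, under the assumption that the goal is classic post-processing: the scene geometry is drawn once into the FBO with the first program, then a fullscreen quad (not the scene) is drawn with the second program, sampling the attachment textures. `drawFullscreenQuad` is a hypothetical helper.

```cpp
// Pass 1: scene into the MRT framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glUseProgram(FragmentShader1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
RenderScene();

// Pass 2: fullscreen quad, reading the textures written by pass 1.
// A texture cannot be sampled while attached to the bound FBO, so switch targets.
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // or an FBO with only edgeTexture attached
glUseProgram(FragmentShader2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderedNormalTexture);
glUniform1i(glGetUniformLocation(FragmentShader2, "normalImage"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, renderedDepthTexture);
glUniform1i(glGetUniformLocation(FragmentShader2, "depthImage"), 1);
drawFullscreenQuad();                    // hypothetical: two triangles covering the screen
```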
1 | Event management using SDL 2.0. I wrote a simple SDL program using SDL 2.0. I have a little problem when I want to check mouse-wheel events. In version 2.0 the flags SDL_BUTTON_WHEELDOWN and SDL_BUTTON_WHEELUP no longer exist; there is just the flag SDL_MOUSEWHEEL. The code below correctly catches the 'WHEELUP' and 'WHEELDOWN' events, but with the same flag: `while (!terminer) { while (SDL_PollEvent(&evenements)) { switch (evenements.type) { case SDL_QUIT: terminer = true; break; case SDL_KEYDOWN: switch (evenements.key.keysym.sym) { case SDLK_ESCAPE: terminer = true; break; } break; case SDL_MOUSEMOTION: std::cout << "MOUSE MOVE" << std::endl; break; case SDL_MOUSEBUTTONUP: case SDL_MOUSEBUTTONDOWN: std::cout << "MOUSE BUTTON DOWN" << std::endl; break; case SDL_MOUSEWHEEL: std::cout << "MOUSE WHEEL" << std::endl; break; } } }` But I would like to handle the 'WHEELUP' and 'WHEELDOWN' events separately. I tried several other flags in my condition, but without success. Can anyone help me, please? Thanks a lot in advance for your help. |
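In SDL2 the wheel direction lives in the event payload rather than in separate event types: SDL_MouseWheelEvent's `y` field is positive when scrolling away from the user and negative toward the user. A sketch of the case branch:

```cpp
case SDL_MOUSEWHEEL:
    if (evenements.wheel.y > 0)       // scrolled up / away from the user
        std::cout << "MOUSE WHEEL UP" << std::endl;
    else if (evenements.wheel.y < 0)  // scrolled down / toward the user
        std::cout << "MOUSE WHEEL DOWN" << std::endl;
    break;
```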
1 | Instanced rendering with ARB_vertex_attrib_binding. I'm trying to separate the vertex format specification from the vertex data. I was able to do that for the mesh vertices successfully. For instanced rendering I wanted to further separate out the instance data (i.e. the model matrices). Here is what I'm trying, but nothing is rendered. Loading the data: `glGenBuffers(1, &vbo_instanced); glBindBuffer(GL_ARRAY_BUFFER, vbo_instanced); glBufferData(GL_ARRAY_BUFFER, n_instances * sizeof(glm::mat4), ModelArray, GL_STATIC_DRAW);` Vertex format: `glGenVertexArrays(1, &vao_instanced); glBindVertexArray(vao_instanced); glEnableVertexAttribArray(0); glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0); glVertexAttribBinding(0, 0); glEnableVertexAttribArray(1); glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3)); glVertexAttribBinding(1, 0); glVertexBindingDivisor(1, 1); for (int i = 0; i < 4; i++) { /* model matrices */ glEnableVertexAttribArray(2 + i); glVertexAttribBinding(2 + i, 1); glVertexAttribFormat(2 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4)); }` When rendering I do the following: `glBindVertexArray(vao_instanced); glBindVertexBuffer(0, vbo_mesh, 0, sizeof(Vertex)); glBindVertexBuffer(1, vbo_instanced, 0, sizeof(glm::mat4)); glDrawArraysInstanced(GL_TRIANGLES, 0, n_vertices, n_instances);` but I'm pretty sure the stride argument of glBindVertexBuffer for vbo_instanced must be wrong, because it is the stride per vertex. Does anyone know what I'm doing wrong and how to fix it? EDIT 1: It seems I made a mistake in the last argument of glVertexAttribFormat; it should instead be `glVertexAttribFormat(2 + i, 4, GL_FLOAT, GL_FALSE, 4 * i * sizeof(float));` But now, although everything gets rendered, my normals are wrong. Even if I don't transform them, they have a single value per instance instead of a single value per face. (Screenshots: what the normals look like vs. what they should actually look like.) Note that the normals are fine as long as I instead use the glVertexAttribPointer method, i.e. bind vbo_instanced directly to the VAO. For reference, here is how I declare the attributes in my shader: `#version 330 core` with `layout(location = 0) in vec3 in_Position; layout(location = 1) in vec3 in_Normal; layout(location = 2) in mat4 ModelMatrix;` EDIT 2: I want to add that I have checked that all offset values are correct. Additionally, I used apitrace and checked that all vertex buffer data is correctly uploaded; Valgrind also doesn't show any errors. Now for a weird finding: if I don't use vertex attribute array index 1 and instead bind vertex positions at 0, vertex normals at 2, and model matrices at 3 onward, everything renders correctly. Could this be a driver bug? I tried on two Linux systems, both with NVIDIA GPUs, with exactly the same results. EDIT 3: I put together a minimal working example, which you can find here. It uses SDL and GLEW and is written in C++11. If I set SKIP to 1 (which means it skips attribute 1), I get what I expect, but if I set it to 0 the normals get messed up. |
1 | What should the Z coordinate be after transformation by the projection matrix? I'm working on an OpenGL 1.x implementation for the Sega Dreamcast. Because the Dreamcast didn't have any hardware T&L, the entire vertex transformation pipeline has to be done in software. Clipping against the near Z plane also has to be done in software, as failure to clip results in the polygon being dropped entirely by the hardware. I'm having some trouble getting the transform/clip/perspective-divide process working correctly; basically I can sum up the problem as follows: I transform polygon vertices by the modelview and projection matrices; I clip each polygon against the W = 0.000001 plane; this results in new vertices on the near plane with a W of 0.000001 but a Z which is twice the near-plane distance; at perspective divide, vertex.z / vertex.w produces an extreme value because we're dividing a Z value (e.g. 0.2) by 0.000001. Something seems very wrong here. The projection matrix is generated the same way as described in the glFrustum docs. So my question is: if I have a coordinate on the near plane, should its Z value be zero after transformation by the projection matrix, should it be the near-Z distance, or something else? After clipping polygons to the W = 0.000001 plane, should the generated Z coordinates be 0.000001? Update: here is the projection matrix as calculated by gluPerspective(45.0f, 640.0f / 480.0f, 0.44f, 100.0f), rows separated by semicolons: 1.810660 0.000000 0.000000 0.000000; 0.000000 2.414213 0.000000 0.000000; 0.000000 0.000000 -1.008839 -0.883889; 0.000000 0.000000 -1.000000 0.000000. Does this look correct? It's the values in the right-hand column I'm not sure about... |
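Working the algebra from the glFrustum-style matrix above (a sketch using the standard conventions: $n$, $f$ are the near/far distances and $z_e < 0$ in front of the eye):

$$z_c = \frac{f+n}{n-f}\,z_e + \frac{2fn}{n-f}\,w_e, \qquad w_c = -z_e.$$

At the near plane, $z_e = -n$, this gives $z_c = \frac{-n(f+n) + 2fn}{n-f} = -n = -w_c$, i.e. NDC depth $z_c/w_c = -1$, not zero. At the $w_c = \epsilon$ clip plane, $z_e \to 0$, so $z_c \to \frac{2fn}{n-f} \approx -2n$ when $f \gg n$, which matches the observed "twice the near distance" Z; the very large $z_c/w_c$ after the divide is therefore expected at that plane.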
1 | LWJGL: determining whether or not a polygon is on screen. Not sure whether this is an LWJGL or a math question. I want to check whether a shape is on screen, so that I don't have to render it if it isn't. First of all, is there any simple way to do this that I am overlooking, like some method I haven't found? I'm going to assume there isn't. I tried using my trigonometry skills, but it is hard to do this because of how glRotate also distorts the image a little for perspective and realism. Alternatively, is there any way to easily determine whether a ray starting from the camera and going outward in a straight line intersects a shape? (I can probably do it with my math skills, but is there an easier way?) By the way, I can easily determine the angle at which the camera is facing around the x and y axes. EDIT: Or, possibly, I could get the angles of a vector from the camera to the object and compare those angles to my camera angles. But I have a feeling that the distortions from glRotate and glTranslate would be an issue. I'll try it though. |
1 | Update a single entry in a GLSL array. I have an array in my vertex shader like this: `uniform mat4 MeshTransforms[20];` At the moment I'm just updating the entire array of matrices, like so: `int meshTransforms = ARBShaderObjects.glGetUniformLocationARB(shader, "MeshTransforms"); ARBShaderObjects.glUniformMatrix4ARB(meshTransforms, false, UnitTransformMatrices);` where UnitTransformMatrices is a float buffer containing the transform matrices of some units. I'd like to update just a single matrix in the array (just one unit's transform matrix). So how can I overwrite a single matrix in the middle of the matrix array? This is using LWJGL, so I'm limited to the functionality it provides. |
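Each element of a uniform array has its own location, which can be queried by name with a subscript; uploading with a count of 1 then touches only that element. Sketched below with the core GL entry points (LWJGL exposes the same calls; the element index and data name are illustrative):

```cpp
// Query the location of element 5 and upload one matrix to it.
GLint loc = glGetUniformLocation(shader, "MeshTransforms[5]");
glUniformMatrix4fv(loc, 1, GL_FALSE, oneMatrix); // oneMatrix: 16 floats for that unit
```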
1 | How to draw 2D pixel data with OpenGL. I am fairly new to OpenGL. I have a 2D game in SDL2 that currently works by creating an SDL_Surface from the pixel data, copying it into an SDL_Texture, and rendering it to the screen with SDL_Renderer. But rather than using SDL to render the pixels, I'd like to switch to OpenGL. The reason is that I need to render some lines on top of the pixel data, and SDL_RenderDrawLine() just doesn't have all the features I need (like line thickness or glScissor). At first I attempted the switch using glDrawPixels() and I was happy with the results. However, I found out that glDrawPixels() does not seem to be available in OpenGL ES (mobile devices). I have looked through some tutorials, but they all use shaders and other fancy stuff that I don't think I really need. Is there a simple way (like glDrawPixels()) to just draw pixel data to the screen for a 2D game? The pixel data is in the format GL_UNSIGNED_BYTE and contains everything I want to draw on the screen (except for several 2D line segments that I plan to render on top with GL). |
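A portable glDrawPixels substitute, sketched under the assumption of RGBA byte data: keep one screen-sized texture alive, re-upload the CPU pixel buffer into it each frame, and draw a fullscreen textured quad (on ES 2.0 that quad does need a minimal shader pair, but it is boilerplate you write once). `drawTexturedFullscreenQuad` is a hypothetical helper.

```cpp
// Once at startup: glGenTextures + glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, ...).
glBindTexture(GL_TEXTURE_2D, screenTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels); // re-upload this frame's pixels
drawTexturedFullscreenQuad(); // hypothetical: two triangles + pass-through shaders
```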
1 | samplerCubeShadow and texture offset. I use sampler2DShadow when accessing a single shadow map, and I create PCF this way: `result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(-1, -1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(-1, 1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(1, 1)); result += textureProjOffset(ShadowSampler, ShadowCoord, ivec2(1, -1)); result = result * 0.25;` For a cube map I use samplerCubeShadow: `result = texture(ShadowCubeSampler, vec4(normalize(position), depth));` How do I adapt the above PCF when accessing a cube map? |
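The offset lookups have no cube-map overload, so a common workaround (a sketch; the offset scale is something to tune) is to offset the lookup direction itself, along two axes perpendicular to it:

```glsl
vec3 dir = normalize(position);
vec3 up = abs(dir.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
vec3 t = normalize(cross(dir, up));   // tangent to the lookup direction
vec3 b = cross(dir, t);               // bitangent
float s = 0.01;                       // offset scale; tune per scene
float result = 0.0;
result += texture(ShadowCubeSampler, vec4(normalize(dir + s * (-t - b)), depth));
result += texture(ShadowCubeSampler, vec4(normalize(dir + s * (-t + b)), depth));
result += texture(ShadowCubeSampler, vec4(normalize(dir + s * ( t - b)), depth));
result += texture(ShadowCubeSampler, vec4(normalize(dir + s * ( t + b)), depth));
result *= 0.25;
```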
1 | Shadow casting through time acting in "reverse" before the next noticeable shadow-map update. (Question rewritten to better explain the problem.) I have a scene with a big thin parallelepiped, a small cube, and a light source, all aligned on the Y axis. Both objects are static and the light slides along the Y axis only. A tiny (32x32) depth texture from the light's point of view is rendered off-screen and then applied to both the cube and the parallelepiped in the on-screen rendering stage, creating the effect of a shadow cast by the small cube onto the big parallelepiped. The problem I am experiencing: when the light moves away from the cube, such a low-res depth texture gets updated only 6-7 times, but its projection on the surfaces updates constantly and wrongly. While the shadow is supposed to shrink when the light source moves away and grow when it moves closer, it in fact does the reverse between updates of the texture projection! In other words, between two updates of the texture, its projection on the parallelepiped grows when the light moves away and shrinks when the light moves closer, and then "jumps" when the texture updates, which looks very wrong. To better show the problem, I made this video: http://www.youtube.com/watch?v=XOraTZimmUU (the square in the upper-right corner is the depth texture, rendered from the light's POV). As you can see, at first I move the light out (decreasing negative Y) and you can notice the projection growing between "jumps" of the texture updates; then I move the light in and you can notice the projection shrinking between the same "jumps". My question is: how do I reverse this shrinking/growing behaviour without increasing the shadow-map texture resolution (which only makes the problem less noticeable, not completely solved)? Shaders used to map the texture. Vertex: `#version 120` with `struct light_t { vec3 spot_dir; vec3 position; mat4 MVP; mat4 V; mat4 P; }; attribute vec4 in_position; uniform mat4 MVP; uniform mat4 M; uniform mat4 V; uniform mat4 P; uniform light_t light; varying vec4 shadowcoord; void main() { gl_Position = MVP * in_position; mat4 bias = mat4(0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0); shadowcoord = bias * light.MVP * in_position; }` Fragment: `#version 120` with `varying vec4 shadowcoord; uniform sampler2D shadowmap; void main() { vec4 color = vec4(0.5); float factor = 1.0; float z = texture2DProj(shadowmap, shadowcoord.xyw).z; if (z < (shadowcoord.z - 0.0005) / shadowcoord.w) factor = 0.2; gl_FragColor = factor * color; }` The full, self-contained source of a minimal (~900 lines, meh) example, extracted from the full-scale project, is here (the glm math library, a C++ compiler and glut are needed). Compile line: `g++ -g -o example main.cpp -lGL -lGLU -lglut -lGLEW`. Actual source: http://paste.eientei.org/show/287 WebGL playground: webglplayground.net/saved/NAfoKmQqJb |
1 | Equirectangular panorama rendering? I want to render my scenes as equirectangular panorama frames. I can get the angular fisheye, which is what I actually need, by applying the rendered frame as a texture to a correctly UV-mapped circle. But how can the scene be rendered as an equirectangular panorama? I'm using a 3D engine which uses OpenGL and GLSL. |
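A common approach, sketched here under the assumption that you can first render the scene into a cube map (six 90° views): a second fullscreen pass converts latitude/longitude to a direction and samples the cube. `envCube` and the 0..1 `UV` input are illustrative names.

```glsl
uniform samplerCube envCube; // scene pre-rendered into a cube map
varying vec2 UV;             // 0..1 across the output equirectangular frame

void main() {
    float lon = (UV.x * 2.0 - 1.0) * 3.14159265;    // longitude: -pi .. pi
    float lat = (UV.y - 0.5) * 3.14159265;          // latitude: -pi/2 .. pi/2
    vec3 dir = vec3(cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon));
    gl_FragColor = textureCube(envCube, dir);
}
```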
1 | What is the best method to update shader uniforms? What is the most accepted way of keeping a shader's matrices up to date, and why? For example, at the moment I have a Shader class that stores the handles to the GLSL shader program and its uniforms. Every time I move the camera I then have to pass the new view matrix to the shader, and for every different world object I must pass its model matrix to the shader. This severely limits me, as I can't do anything without having access to that shader object. I thought of creating a singleton ShaderManager class responsible for holding all active shaders. I could then access it from anywhere, and a world object wouldn't have to know which shaders are active; it would just let the ShaderManager know the desired matrices. But I'm not sure this is the best way, and there are probably some issues that would arise from taking this approach. |
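One widely used answer (GL 3.1+), sketched here: put the shared matrices in a uniform buffer object bound to a known binding point; every shader that declares the same std140 block sees updates automatically, so neither objects nor a manager need to touch each shader. Binding point 0 and the block layout are conventions chosen for the example.

```cpp
// One-time setup.
GLuint cameraUbo;
glGenBuffers(1, &cameraUbo);
glBindBuffer(GL_UNIFORM_BUFFER, cameraUbo);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, cameraUbo);

// Per shader, once: link its "Camera" block to binding point 0.
// GLSL side: layout(std140) uniform Camera { mat4 view; mat4 proj; };
GLuint idx = glGetUniformBlockIndex(program, "Camera");
glUniformBlockBinding(program, idx, 0);

// When the camera moves: update the single buffer; every linked shader sees it.
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(viewMatrix));
```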
1 | How to create a scope for a sniper. I am developing a first-person shooter/strategy game using the Lightweight Java Game Library, which has support for OpenGL. I would like to create a sniper scope, for which I need to magnify the screen and project an image onto the back of the scope. I have tried using gluPerspective() for zooming in and out, but it just makes the screen go black. I am thinking about using glReadPixels for grabbing pixel data from the zoomed-in image, but I'm not sure what to put in the 'data' field (explanation of glReadPixels here). I'm also not sure how to get the data from glReadPixels and put it back onto the back of the scope. EDIT: I'm not doing full-screen zooming; I'm showing the image on the back of the scope. Does anyone know how to do this? |
1 | OpenGL: how come drawing sprites takes so much performance? I'm wondering how drawing simple geometry with textures can eat up so much performance (below 60 fps). Even my good graphics card (GTX 960) can "only" draw up to 1000 sprites smoothly. The textures I'm using are all power-of-2 textures and don't exceed a size of 512x512; I'm even filtering with GL_NEAREST only. The sprites themselves are randomly generated in size, so there are no 1000 fullscreen quads, which would be no real use case. I draw my sprites batched, meaning I have one dynamic vertex buffer and a static index buffer. I update the vertex buffer every frame with glBufferSubData once and then draw everything with glDrawElements. I have about 5 different textures, which I bind once per frame, resulting in 5 draw calls. For rendering I use only one shader, which is bound once when the application starts. So I have 5 texture bindings, 5 draw calls, and one vertex-buffer update per frame, which is not really that much. Here is an example with one texture: `val shaderProgram = ShaderProgram("assets/default.vert", "assets/default.frag"); val texture = Texture("assets/logo.png"); val sprite = BufferSprite(texture); val batch = BufferSpriteBatch(); val projView = Matrix4f().identity().ortho2D(0f, 640f, 0f, 480f); fun setup() { glEnable(GL_TEXTURE); glColorMask(true, true, true, true); glDepthMask(false); glUseProgram(shaderProgram.program); texture.bind(); batch.begin(); for (i in 1..1000) batch.draw(sprite); batch.update() } fun render() { glClear(GL_COLOR_BUFFER_BIT); stackPush().use { stack -> val mat = stack.mallocFloat(16); projView.get(mat); val loc = glGetUniformLocation(shaderProgram.program, "u_projView"); glUniformMatrix4fv(loc, false, mat); batch.flush() } }` The batch.draw() method puts the sprite's vertex data in a CPU-side buffer, and batch.update() uploads everything to the GPU with glBufferSubData. Setting up the sprite batch looks as follows: `glBindBuffer(GL_ARRAY_BUFFER, tmpVbo); glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW); glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glEnableVertexAttribArray(2); glVertexAttribPointer(0, 2, GL_FLOAT, false, 24 * sizeof(Float), 0); glVertexAttribPointer(1, 4, GL_FLOAT, false, 24 * sizeof(Float), 2.toLong() * sizeof(Float)); glVertexAttribPointer(2, 2, GL_FLOAT, false, 24 * sizeof(Float), 6.toLong() * sizeof(Float)); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, tmpEbo); glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW)` I profiled my program first: updating the vertex buffers and all the geometry takes about 10% of the total time per frame, but swapping the buffers takes up the remaining 90% of the frame time. So I'm asking: how can big AAA games render their scenes with millions of vertices, if drawing pixels is such a time-consuming task? I know that there are a lot of optimizations in their code, but still. |
1 | Setting object-specific uniforms in a render queue. I am currently trying to write a (very simple) queued renderer using OpenGL. My current idea looks like this: `struct RenderCommand { GLuint vao; GLuint texture; GLuint shader; GLint zOrder; }; void Renderer::addCommand(const RenderCommand& cmd) { /* add to queue */ } void Renderer::render() { /* sort the command queue to try and z-sort, batch textures, etc. */ for (const auto& cmd : queue_) { /* bind shader/vao/texture when necessary, issue draw calls */ } }` Now my problem is this: for sprites, for example, I only have one set of quad vertices on the GPU, and I send the sprite vertex shader the sprite size using a uniform. The same goes for the model transform matrix of literally any object. That's okay when render calls are issued while iterating over scene objects, but it becomes a problem when everything is queued: since most sprites will use the same shader, how do I set the uniform? I've had a few ideas for solutions, but none of them feels right. Add data in RenderCommand for size, model matrix, etc.: the big problem with that is flexibility; it supports the engine's default sprite (or 3D model or text) shader, but custom shaders with other uniforms won't work. Add a custom data pointer to RenderCommand pointing to uniform data to send before rendering: this is probably the most flexible option. Add a UniformCommand type, which only uploads a uniform to the currently bound shader: it'll have to be handled carefully when the queue is sorted, to ensure it stays contiguous with whatever shader and model it corresponds to, but that sounds feasible. |
1 | How do I create geometry in SceneKit? I have been experimenting with Apple's new SceneKit for fun, but I cannot seem to figure out how to input vertex data without loading a .dae file. Does anybody who has been testing SceneKit have any idea how to set or modify vertex data? |
1 | How do I flip upside-down fonts in FTGL? I just started using FTGL in my app. I want to use FTBufferFont to render text, but it renders the wrong way: the font (texture? buffer?) is flipped on the wrong axis. I want to use this kind of orthographic setup: `void enable2D(int w, int h) { winWidth = w; winHeight = h; glViewport(0, 0, w, h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); /* I don't want to swap the 3rd and 4th params, because I like to keep the top-left as the origin */ glOrtho(0, w, h, 0, 0, 1); glMatrixMode(GL_MODELVIEW); }` I render the font like this (no pushing and popping of matrices, no translation): `font.Render("Hello World!", -1, position, spacing, FTGL::RENDER_FRONT);` On other forums they said to just scale it by -1, but that won't work in mine; for example, the font ends up above the screen. I can't find this exact problem discussed anywhere on Google, so I decided to ask it here again. How can I invert the texture's v coordinate without modifying FTGL's source code (assume it's read-only)? |
1 | GLSL "varying" interpolation component wise? Reference in the spec? Probably a dumb question, but I can't find (or am not understanding) a conclusive answer in the spec or in other questions (e.g., this one). For smoothly interpolated varying vec and mat vertex shader outputs fragment shader inputs, is each element of the vector or matrix interpolated individually? GL spec sec. 13.5.1 (clipping) describes linear clip space interpolation of the "output values associated with a vertex." It also says that those are componentwise for vectors, but doesn't mention matrices. Similarly, sec. 14.5.1 (rasterizing lines) and sec 14.6 (rasterizing polygons) describe interpolation of an "associated datum f for the fragment". Is each element of a vector or matrix considered an individual "associated datum" and interpolated independently from the other elements of the vector or matrix? Secs. 11.1.3.10 (shader outputs) and 15.1 (fragment shader variables) mention interpolation but refer elsewhere for the details. Similarly, in the GLSL spec, sec. 4.5 (interpolation qualifiers) says that interpolation happens but does not distinguish scalars from multi component variables. I am looking for a definitive statement about how vecs and mats are interpolated, if there is one. Or let me know if there isn't! Thank you! (Note answers can be for any OpenGL version.) |
1 | Deferred rendering order? There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking. The framebuffer objects: FBO 1 has a color attachment and a depth attachment; FBO 2 has a color attachment. The render passes: (1) render g-buffer normals and depth (used by the outline and DoF blur shaders), output to FBO 1; (2) render solid geometry, bold outlines (as in a toon shader), and fog, output to FBO 2 (this can all be done by a single fragment shader, I think); (3, optional) DoF-blur the scene, output to the default framebuffer, OR ELSE render FBO 2 directly to the default framebuffer; (4, optional) mesh wireframes, composited over what's already in the default framebuffer. Does this order seem viable? Any obvious mistakes? |
1 | Rendering many objects in OpenGL 4. Sorry for asking such a basic question. I am reading books on OpenGL 4, but in most examples they only render a handful of objects. So I understand how to deal with vertex buffers, vertex array objects, etc., but my question is: how does it work when you have, say, hundreds of objects to draw? Should you have one VBO, for example, and replace the data in the buffer each time you need to draw a new object? Should you have hundreds of VBOs? My understanding was that you are limited in the number of buffers you can create (using glGenBuffers()). How would one typically do this? Thank you. |
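A common pattern, sketched with hypothetical Mesh/Object structs: create one VBO/VAO per unique mesh at load time, then reuse it for every object that shares that mesh, changing only a uniform per draw. Buffer names are just handles, and glGenBuffers imposes no meaningful practical limit on their count.

```cpp
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

struct Mesh   { GLuint vao; GLsizei indexCount; };
struct Object { int meshId; glm::mat4 modelMatrix; };

void drawScene(const std::vector<Object>& scene, const std::vector<Mesh>& meshes, GLint modelLoc) {
    for (const Object& obj : scene) {
        const Mesh& mesh = meshes[obj.meshId];
        glBindVertexArray(mesh.vao); // no re-upload; the data already lives on the GPU
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(obj.modelMatrix));
        glDrawElements(GL_TRIANGLES, mesh.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}
```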
1 | How do I represent blended tiles in a mesh/vertex array? I recently started making a Terraria clone using the LÖVE library, which is based on OpenGL. In Terraria, each tile type has a large texture with all possible combinations for merging with neighbouring tiles. They usually only support tiles of the same type, and sometimes other materials, like dirt. As a result, they only need a single vertex array. In Starbound, it seems to be simpler: a tutorial I found notes the use of basic 8x8 blocks, with a few edges drawn if the block is next to a block of a different type. I want to implement a similar mechanic, but I've run into a serious issue: a single vertex array does not allow z-ordering, which I would need. My first idea was to use one vertex array for each type of block, but that would eat a lot of memory. My second idea was to exploit the fact that an OpenGL vertex array draws primitives in order: I could use some complicated data structure to represent tiles, so I would always know where to place their vertices in the array. However, that would require moving either lots of vertices or lots of memory. What is the best way to implement this? |
1 | Help with dual quaternion skinning. I'm trying to convert my code to use dual-quaternion skinning instead of matrix skinning, because I just can't get the skinning matrix created correctly from bones/weights using matrices. (Edit: just to clarify, if each vertex has only one bone, the matrix version did skin correctly.) But it's just not working and I really don't understand why. I'm using code I found here: http://www.chinedufn.com/dual-quaternion-shader-explained/ and http://donw.io/post/dual-quaternion-skinning/ I've looked at the assorted papers often brought up in the answers to these questions, but for the most part I just don't understand them. I'm not even entirely sure how the x, y, z, w of a glm::quat correspond to the 1, i, j, k of a quaternion; I think w might be the scalar because it goes first in the actual byte order of the glm::quat struct, but I'm not sure. Anyway, in my shader bindings I have: `void gltfShader::bindBones(const std::vector<glm::mat4>& bones) { std::vector<glm::fdualquat> dual_quats(std::min<int>(bones.size(), 64)); glm::quat r; glm::vec3 t, s, sk; glm::vec4 pr; glm::fdualquat dq; for (size_t i = 0; i < dual_quats.size(); ++i) { glm::decompose(bones[i], s, r, t, sk, pr); dq[0] = r; dq[1] = glm::quat(t.x, t.y, t.z, 0) * r * .5f; dual_quats[i] = dq; } glUniformMatrix2x4fv(u_bones, dual_quats.size(), GL_FALSE, (float*)&dual_quats[0]); }` In the vertex shader: `mat2x4 GetBoneTransform(ivec4 joints, vec4 weights) { float sum_weight = weights.x + weights.y + weights.z + weights.w; /* fetch bones */ mat2x4 dq0 = u_bones[joints.x]; mat2x4 dq1 = u_bones[joints.y]; mat2x4 dq2 = u_bones[joints.z]; mat2x4 dq3 = u_bones[joints.w]; /* ensure all bone transforms are in the same neighbourhood */ weights.y *= sign(dot(dq0[0], dq1[0])); weights.z *= sign(dot(dq0[0], dq2[0])); weights.w *= sign(dot(dq0[0], dq3[0])); /* blend */ mat2x4 result = weights.x * dq0 + weights.y * dq1 + weights.z * dq2 + weights.w * dq3; result[0][3] += int(sum_weight < 1) * (1 - sum_weight); /* normalise */ float norm = length(result[0]); return result / norm; } mat4 GetSkinMatrix() { mat2x4 bone = GetBoneTransform(a_joints0, a_weights0); vec4 r = bone[0]; vec4 t = bone[1]; return mat4( 1.0 - (2.0 * r.y * r.y) - (2.0 * r.z * r.z), (2.0 * r.x * r.y) + (2.0 * r.w * r.z), (2.0 * r.x * r.z) - (2.0 * r.w * r.y), 0.0, (2.0 * r.x * r.y) - (2.0 * r.w * r.z), 1.0 - (2.0 * r.x * r.x) - (2.0 * r.z * r.z), (2.0 * r.y * r.z) + (2.0 * r.w * r.x), 0.0, (2.0 * r.x * r.z) + (2.0 * r.w * r.y), (2.0 * r.y * r.z) - (2.0 * r.w * r.x), 1.0 - (2.0 * r.x * r.x) - (2.0 * r.y * r.y), 0.0, 2.0 * (-t.w * r.x + t.x * r.w - t.y * r.z + t.z * r.y), 2.0 * (-t.w * r.y + t.x * r.z + t.y * r.w - t.z * r.x), 2.0 * (-t.w * r.z - t.x * r.y + t.y * r.x + t.z * r.w), 1); }` EDIT: video of the results: https://youtu.be/8jIt_Xhhffk The one on the left is supposed to be a walk cycle; the one on the right is supposed to be this: https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/Monster |
1 | Is it a good idea to perform all matrix operations on the GPU? I was wondering whether it is an improvement to use OpenGL for matrix calculations instead of doing them on the CPU. And if it is an improvement, is it worth changing the math class to use OpenGL? |
1 | How do I render using VBOs? I'm trying to render a hemisphere in OpenGL; the problem is that the hemisphere isn't rendered at all, or only part of it. I initialize it, then at each frame I draw it using the following code. I'm trying to use a VBO for drawing it. `void Jellyfish::Init_HemiSphere(const float radius, const int segments) { m_iSegements = segments; m_fVerts = new float[(segments + 1) * 2 * 3]; m_fNormals = new float[(segments + 1) * 2 * 3]; m_fTexCoords = new float[(segments + 1) * 2 * 2]; for (int j = 0; j < segments / 2; ++j) { float theta1 = j * 2 * 3.14159f / segments - (3.14159f / 2); float theta2 = (j + 1) * 2 * 3.14159f / segments - (3.14159f / 2); for (int i = 0; i < segments; ++i) { Vec3f e, p; float theta3 = i * 2 * 3.14159f / segments; e.x = math<float>::cos(theta1) * math<float>::cos(theta3); e.y = math<float>::sin(theta1); e.z = math<float>::cos(theta1) * math<float>::sin(theta3); p = e * radius; m_fNormals[i * 3 * 2 + 0] = e.x; m_fNormals[i * 3 * 2 + 1] = e.y; m_fNormals[i * 3 * 2 + 2] = e.z; m_fTexCoords[i * 2 * 2 + 0] = 0.999f - i / (float)segments; m_fTexCoords[i * 2 * 2 + 1] = 0.999f - 2 * j / (float)segments; m_fVerts[i * 3 * 2 + 0] = p.x; m_fVerts[i * 3 * 2 + 1] = p.y; m_fVerts[i * 3 * 2 + 2] = p.z; e.x = math<float>::cos(theta2) * math<float>::cos(theta3); e.y = math<float>::sin(theta2); e.z = math<float>::cos(theta2) * math<float>::sin(theta3); p = e * radius; m_fNormals[i * 3 * 2 + 3] = e.x; m_fNormals[i * 3 * 2 + 4] = e.y; m_fNormals[i * 3 * 2 + 5] = e.z; m_fTexCoords[i * 2 * 2 + 2] = 0.999f - i / (float)segments; m_fTexCoords[i * 2 * 2 + 3] = 0.999f - 2 * (j + 1) / (float)segments; m_fVerts[i * 3 * 2 + 3] = p.x; m_fVerts[i * 3 * 2 + 4] = p.y; m_fVerts[i * 3 * 2 + 5] = p.z; } } glGenBuffers(3, &SVboId[0]); /* vertices */ glBindBuffer(GL_ARRAY_BUFFER, SVboId[0]); glBufferData(GL_ARRAY_BUFFER, sizeof(*m_fVerts) * (m_iSegements + 1) * 2 * 3, m_fVerts, GL_DYNAMIC_DRAW); /* normals */ glBindBuffer(GL_ARRAY_BUFFER, SVboId[1]); glBufferData(GL_ARRAY_BUFFER, sizeof(*m_fNormals) * (m_iSegements + 1) * 2 * 3, m_fNormals, GL_DYNAMIC_DRAW); }` `void Jellyfish::drawHemiSphere() { glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, sizeof(*m_fVerts) * (m_iSegements + 1) * 2 * 3, m_fVerts); glEnableClientState(GL_VERTEX_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, SVboId[0]); glVertexPointer(3, GL_FLOAT, 0, 0); glEnableClientState(GL_NORMAL_ARRAY); glBindBuffer(GL_NORMAL_ARRAY, SVboId[1]); glNormalPointer(GL_FLOAT, 0, 0); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glTexCoordPointer(2, GL_FLOAT, sizeof(*m_fTexCoords) * (m_iSegements + 1) * 2 * 3, m_fTexCoords); glEnableClientState(GL_NORMAL_ARRAY); glNormalPointer(GL_FLOAT, sizeof(*m_fNormals) * (m_iSegements + 1) * 2 * 2, m_fNormals); for (int j = 0; j < m_iSegements / 2; ++j) { for (int i = 0; i < m_iSegements; ++i) { glDrawArrays(GL_TRIANGLES, 0, (m_iSegements + 1) * 2); } } glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); }` |
1 | Deferred rendering with SFML. I've been looking to implement a deferred rendering system in SFML. I've read the OGLdev tutorials on it, and I was wondering how to draw a scene built in SFML (sprites, text, etc.) into the multiple render targets needed for the deferred shading process. Basically, I need to get SFML to draw every object into the G-buffer's MRTs, so I have the parameters needed to do what I want. Is this possible, or will I have to build my own rendering backend from scratch? |
1 | Lag spike when creating a model. I am creating a game using OpenGL in C++. Whenever I create a new model while the game is running, such as firing a bullet, there is a huge lag spike. The function that creates the model is below: `std::string jsonString; jsonString = file->load(type); json jf = json::parse(jsonString); /* might be causing the lag */ indicesSizeTexture = jf["textureIndices"].size(); verticesSizeTexture = jf["textureVertices"].size(); indicesSizeCollision = jf["collisionIndices"].size(); verticesSizeCollision = jf["collisionVertices"].size(); verticesTexture = new float[verticesSizeTexture * 8]; verticesCollision = new float[verticesSizeCollision * 8]; verticesCollisionUpdated = new float[verticesSizeCollision * 8]; indicesTexture = new int[indicesSizeTexture]; indicesCollision = new int[indicesSizeCollision]; for (int i = 0; i < verticesSizeTexture; i++) { /* just the texture vertices */ verticesTexture[i] = jf["textureVertices"][i]; } for (int i = 0; i < indicesSizeTexture; i++) { /* just the texture indices */ indicesTexture[i] = jf["textureIndices"][i]; } for (int i = 0; i < verticesSizeCollision; i++) { /* just the collision vertices */ verticesCollision[i] = jf["collisionVertices"][i]; verticesCollisionUpdated[i] = verticesCollision[i]; } for (int i = 0; i < indicesSizeCollision; i++) { /* just the collision indices */ indicesCollision[i] = jf["collisionIndices"][i]; } /* binds id */ glGenBuffers(1, &VBO); glGenVertexArrays(1, &VAO); glGenBuffers(1, &EBO); glGenTextures(1, &texture); glBindVertexArray(VAO); glBindBuffer(GL_ARRAY_BUFFER, VBO); glBufferData(GL_ARRAY_BUFFER, verticesSizeTexture * 8 * sizeof(float), verticesTexture, GL_STATIC_DRAW); /* position attribute */ glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0); glEnableVertexAttribArray(0); /* texture */ glBindTexture(GL_TEXTURE_2D, texture); glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float))); glEnableVertexAttribArray(2); stbi_set_flip_vertically_on_load(true); unsigned char* data = stbi_load(texturePathString.c_str(), &width, &height, &nrChannels, 0); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); glGenerateMipmap(GL_TEXTURE_2D); stbi_image_free(data); glBindBuffer(GL_ARRAY_BUFFER, 0); glBindVertexArray(0);` I stripped out a lot of the parts that I am almost certain aren't causing the lag. There is a lot going on, but it is mostly simple mathematical operations. The only part I think could be causing the lag is the JSON section used for loading the model data (the model data is stored in a variable loaded from a file as a string; I do need the JSON section for data storage, though). What could be causing the lag? Should I find a different storage format? What if I created a bullet off-screen at startup, then copied it whenever I needed it? The specific JSON library I am using is https://github.com/nlohmann/json |
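A sketch of the "create once, copy when needed" idea from the question, with hypothetical names: parse the JSON and build the GL objects a single time per model type, cache the result, and stamp out cheap instances that share the same VAO and texture; per-bullet state is then just a transform.

```cpp
#include <string>
#include <unordered_map>

struct ModelPrototype { GLuint vao, vbo, ebo, texture; int indexCount; };

std::unordered_map<std::string, ModelPrototype> prototypeCache;

const ModelPrototype& getPrototype(const std::string& type) {
    auto it = prototypeCache.find(type);
    if (it == prototypeCache.end()) {
        // buildFromJson stands in for the existing loader; it now runs once per
        // type (at startup or on first use) instead of once per bullet fired.
        it = prototypeCache.emplace(type, buildFromJson(type)).first;
    }
    return it->second;
}
```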
1 | LibGDX Box2DLights shadow offset problem on bodies. Hello, I just started using LibGDX, and it's awesome. I looked at the Box2DLights library and started to learn how the lighting works. I got something up (source: gyazo.com). As you can see, it works, but the shadow of the sprite goes over the sprite itself and doesn't start in the right place. Why is it doing this? Is it possible to set offsets? This is how I initialize them: `this.texture = new Texture("sprites/water.png"); this.camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); this.camera.setToOrtho(false); this.renderer = new SpriteBatch(); this.map = new TileMap(20, 20); this.map.generateAllSame(new Texture("sprites/sprite.png")); this.world = new World(new Vector2(), true); this.createBodies(); RayHandler.setGammaCorrection(true); RayHandler.useDiffuseLight(true); this.ray = new RayHandler(this.world); this.ray.setAmbientLight(0.2f, 0.2f, 0.2f, 0.1f); this.ray.setCulling(true); this.ray.pointAtLight(50, 0); this.ray.setBlurNum(1); camera.update(true); this.spriteLight = new PointLight(ray, 128); this.spriteLight.setDistance(500f); this.spriteLight.setPosition(150, 150); this.spriteLight.setColor(new Color(0.5f, 0.8f, 0.7f, 1f)); this.spriteLight.setSoftnessLength(0); this.spriteLight.setSoft(true);` `public void createBodies() { CircleShape chain = new CircleShape(); chain.setRadius(10); FixtureDef def = new FixtureDef(); def.restitution = 0.8f; def.friction = 0.01f; def.shape = chain; def.density = 1f; BodyDef body = new BodyDef(); body.type = BodyType.DynamicBody; for (int i = 0; i < 3; i++) { body.position.x = 40 + MathUtils.random(500); body.position.y = 40 + MathUtils.random(500); Body box = this.world.createBody(body); box.createFixture(def); this.bodies.add(box); } chain.dispose(); }` And my rendering: `public void render() { handleInput(); camera.update(); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); renderer.setProjectionMatrix(camera.combined); renderer.disableBlending(); renderer.begin(); this.map.render(renderer); renderer.draw(texture, 50, 50); renderer.enableBlending(); for (Body body : this.bodies) { Vector2 pos = body.getPosition(); renderer.draw(texture, pos.x - amount, pos.y); } renderer.end(); ray.setCombinedMatrix(camera.combined, camera.position.x, camera.position.y, camera.viewportWidth * camera.zoom, camera.viewportHeight * camera.zoom); ray.update(); ray.render(); }` What did I do wrong? |
1 | Smooth shading vs. flat shading: what's the difference in the models? I'm loading the exact same model with Assimp, except one was exported from Blender shaded smooth and the other exported from Blender shaded flat. Here are my results from loading both into my game: the flat-shaded model has 1968 vertices while the smooth-shaded model only has 671. Why is this happening? I don't understand why there would be fewer vertices when it's shaded smoothly...? |
1 | Stencil buffer appears not to be decrementing values correctly. I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which increment the stencil-buffer values it covers. Then the widget draws its children, which only get drawn in the stencilled area (so that if they extend outside, they're clipped). After a widget is done drawing its children, it pops that rectangle from the stack, and in the process decrements the values in the stencil buffer that it previously incremented. The slightly simplified code is below: `static void drawStencil(Rect& rect, unsigned int ref) { /* save previous values of the color and depth masks */ GLboolean colorMask[4]; GLboolean depthMask; glGetBooleanv(GL_COLOR_WRITEMASK, colorMask); glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask); /* turn off drawing */ glColorMask(0, 0, 0, 0); glDepthMask(0); /* draw vertices here ... left out for clarity */ /* turn everything back on */ glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]); glDepthMask(depthMask); /* only render pixels in areas where the stencil buffer value == ref */ glStencilFunc(GL_EQUAL, ref, 0xFF); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); } void pushScissor(Rect rect) { /* increment things only at the current stencil stack level */ glStencilFunc(GL_ALWAYS, s_scissorStack.size(), 0xFF); glStencilOp(GL_INCR, GL_INCR, GL_INCR); s_scissorStack.push_back(rect); drawStencil(rect, s_scissorStack.size()); } void popScissor() { /* undo what was done in the previous push; decrement things only at the current stencil stack level */ glStencilFunc(GL_ALWAYS, s_scissorStack.size(), 0xFF); glStencilOp(GL_DECR, GL_DECR, GL_DECR); Rect rect = s_scissorStack.back(); s_scissorStack.pop_back(); drawStencil(rect, s_scissorStack.size()); }` And this is how it's used by the widgets: `if (m_clip) pushScissor(m_rect); drawInternal(target, states); for (auto child : m_children) target.draw(*child, states); if (m_clip) popScissor();` This is the result of the above code: there are two things on the screen, a giant test button and a window with some buttons and text areas on it. The text-area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area appears on top of the button. The only reason I can think of for this to happen is if the stencil values were not getting decremented in the pop, so that when it comes time to render the button, those pixels don't have the right stencil value and it doesn't draw over them. But I can't figure out what's wrong with my code that would cause that to happen. |
1 | OpenGL: rotate around global axes. I want to create a game in which I rotate a cube around two fixed axes (x and y) using my mouse; here is what I want to do (just move the mouse to see what kind of rotation I mean). I calculated my yaw and pitch values according to mouse movement, and they work fine when I rotate around only one axis (x or y), but when I try to rotate around both axes it doesn't work (because OpenGL rotates the axes along with the object when you apply a rotation). How do I make them both work at the same time? Here is my code: `glm::mat4 model; glm::mat4 view; glm::mat4 projection; float cameraDistance = 3.0; glm::vec3 center = glm::vec3(0.0f, 0.0f, 0.0f); glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f); glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f); glm::vec3 modelDistance = glm::vec3(0.0, 0.0, 3.0); int oldX, oldY; GLfloat yaw = 0.0f; /* Yaw is initialized to -90.0 degrees, since a yaw of 0.0 results in a direction vector pointing to the right (due to how Euler angles work), so we initially rotate a bit to the left. */ GLfloat pitch = 0.0f; while (running) { oldX = sf::Mouse::getPosition(window).x; oldY = sf::Mouse::getPosition(window).y; yaw += oldX * 0.1; pitch += oldY * 0.1; projection = glm::perspective(45.0f, (GLfloat)WIDTH / (GLfloat)HEIGHT, 1.0f, 100.0f); view = glm::mat4(); view = glm::lookAt(center + modelDistance, center + modelDistance + cameraFront, cameraUp); model = glm::mat4(); /* Here is the problem: how do I use both at the same time? */ model = glm::rotate(model, glm::radians(yaw), glm::vec3(0.0, 1.0, 0.0)); model = glm::rotate(model, glm::radians(pitch), glm::vec3(1.0, 0.0, 0.0)); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); ourShader.Use(); GLint modelLoc = glGetUniformLocation(ourShader.Program, "model"); GLint viewLoc = glGetUniformLocation(ourShader.Program, "view"); GLint projLoc = glGetUniformLocation(ourShader.Program, "projection"); glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model)); glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view)); glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection)); glBindVertexArray(VAO); glDrawArrays(GL_TRIANGLES, 0, 27 * n * n); }` |
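One way to get fixed-world-axis behaviour, sketched with glm under the assumption that per-frame mouse deltas are available: keep a persistent orientation quaternion and pre-multiply each frame's world-axis increment, instead of rebuilding the matrix from accumulated yaw/pitch (which applies the second rotation in the already-rotated frame).

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat orientation; // identity by default; persists across frames

// Call once per frame with this frame's mouse deltas (dx, dy in pixels).
glm::mat4 updateModel(float dx, float dy) {
    // Pre-multiplying applies each increment about the *world* axes.
    orientation = glm::angleAxis(glm::radians(dx * 0.1f), glm::vec3(0, 1, 0)) * orientation;
    orientation = glm::angleAxis(glm::radians(dy * 0.1f), glm::vec3(1, 0, 0)) * orientation;
    return glm::mat4_cast(orientation);
}
```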
1 | OpenGL shadow-map limit. I would like to render an OpenGL scene with an arbitrary number of point lights (at least 1024) with shadows enabled. My current method of rendering shadow maps, however, cannot do this. Let me explain: there are only 32 textures you can pass to a fragment shader at a time, and I am using 31 of those 32 texture units for shadow maps at the moment. Pseudocode for my main code: `setShader("DepthBufferPointLight", 1); /* the int is actually numLights, used for deciding whether or not to recompile a final-pass shader; irrelevant at this stage and set to 1 for debugging purposes */ FBO* fBO = CreateFBOs(6 * numLights); for (int j = 0; j < 6; j++) { for (int i = 0; i < 6; i++) { setRenderTarget(fBO[i]); setupCameraforLight(j, i); /* move the camera to the right position */ renderSceneDepth(); /* textures aren't even rendered for this */ } } setRenderTargetDefault(); setShader("FinalPass", numLights); /* the shader dynamically recompiles for final-pass calls if the number of lights has changed, because it will need to change how many uniform struct lights there are */ setLightTextureUnits(fBO, 6, numLights); /* glActiveTexture and glBindTexture calls */ renderSceneFinalPass(); /* textures and uniforms are all set up for this; also does the lights */` ...and then in the shader pipeline I do my calculations using the shadow maps. My pseudocode is rough, but you get the picture. (Side note: the dynamic recompiling is actually really simple and only changes one line of the shader, and I'm going to deprecate and remove that function once I figure out how uniform variable arrays work...) My code breaks once numLights * 6 is greater than 31. Is there a better way to do this? My ultimate goal is to render an arbitrarily high number of point lights using shadow mapping. |
1 | Could executing OpenGL shaders sent from a server be dangerous? I just came to the realization that since uncompiled shaders are just text, they can easily be sent over a network from a server, then compiled at runtime and executed on clients. I'm not actually planning on doing this (probably...), but I am curious as to whether or not it would result in any potential security vulnerabilities on the clients executing the shaders. My gut feeling is that it would not, since even if a shader were to gain unauthorized access (e.g. by reading unallocated data from a video buffer), it would have no way of sending this data back to the attacker. On the other hand, allowing arbitrary code execution seems like an almost certain way to get hacked. My question is how this would be possible in this scenario (assuming that it is).
1 | How can I render a simple lattice in LibGDX? I have searched all over, but I can't find what I think will be a simple answer. I am using OpenGL ES 2.0 and LibGDX. I simply want to use GL LINES primitives to create a lattice structure. I have used shapeRenderer with the following code in create() shapeRenderer new ShapeRenderer() shapeRenderer.setProjectionMatrix(cam.combined) in render() shapeRenderer.setProjectionMatrix(cam.combined) shapeRenderer.begin(ShapeType.Line) shapeRenderer.setColor(0, 1, 0, 1) for (float i 10 i lt 10 i ) for (float j 10 j lt 10 j ) shapeRenderer.line(i, j, 10, i, j, 10) shapeRenderer.line( 10, i, j, 10, i, j) shapeRenderer.line(j, 10, i, j, 10, i) shapeRenderer.end() The problem is, this is not a model instance, thus doesn't seem to use the RendererContext (GL DEPTH TEST, etc.). When I render the lattice this way, it either shows up entirely behind my models, or entirely in front of them (depending on the order I render them in the render method). Is there a way to build a model which is simply a set of lines to be rendered with the OpenGL primitive GL LINES? Any pointers would be greatly appreciated.
1 | Should I update VAO when I update a VBO? My VAO VBO IBO work fine on iPad and on other Android devices except two (a Samsung Galaxy S4 and a Sony Xperia S). A problem appears when I start my application on these devices every element moves everywhere and starts to blink on each frame; the problem affects every element updated during the simulation. I have a SpriteRenderer that shares a VBO, so I need to update this VBO on each frame for each sprite (change color, uvs, ). The visual glitch is not present on static elements (like text). So my question is do I have to do something with my VAO on each frame? Here is what I've got Init part bind vao gt Bind vbo gt Bind ibo unbind vao Rendering part for( sprites ) Update (Need to bind VAO here?) bind vbo (lock) update vbo data unbind vbo (unlock bind) Draw. bind vao drawElement unbind vao Thanks!
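For per-frame streaming like this, a frequently recommended pattern is buffer orphaning; it avoids stalling on (or racing with) data the GPU is still reading, which is a classic source of flickering sprites on some mobile drivers. A minimal sketch — no VAO re-recording is required as long as the buffer object name and attribute layout stay the same:

// Orphan the old store, then upload this frame's data into a fresh one.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeInBytes, NULL, GL_STREAM_DRAW); // orphan
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeInBytes, spriteVertices); // upload
glBindBuffer(GL_ARRAY_BUFFER, 0);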
1 | Send Geometry Data to Multiple Shaders So I am implementing a deferred rendering model for my engine, and I want to be able to send all scene geometry into a single shader to calculate ambient, diffuse, normal, etc. that's not the question. Once I have all of this data buffered into the GPU I can render all of that geometry from a camera perspective defined by a uniform for the camera's position in the scene. I am wondering if I can reuse this model data already in VRAM in another shader, transformed by a light source's projection matrix, to calculate the shadows on this scene geometry without needing to send the same data to the GPU again. Is there a way to tell my light shader, hey, all that juicy geometry scene data you want is already in VRAM, just use this matrix transformation instead? Alternatively, when I am sending the VAO to the GPU, is there a way to send it to two shaders in parallel, one for the deferred rendering model, one for shadow casting light sources? Thanks so much!
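To the core question: once the VAO/VBO/IBO are resident, the same geometry can be drawn from any number of programs by switching only the program and its uniforms between draws; nothing is re-uploaded. A minimal sketch, assuming both programs use matching attribute locations; sceneVAO, gbufferProgram, shadowProgram and the matrix names are illustrative (GLM matrices assumed):

glBindVertexArray(sceneVAO);

// Pass 1: g-buffer from the camera's point of view.
glUseProgram(gbufferProgram);
glUniformMatrix4fv(glGetUniformLocation(gbufferProgram, "viewProj"),
                   1, GL_FALSE, &cameraViewProj[0][0]);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);

// Pass 2: same buffers, now from the light's point of view.
glUseProgram(shadowProgram);
glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "viewProj"),
                   1, GL_FALSE, &lightViewProj[0][0]);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);

glBindVertexArray(0);

(There is no way to run two programs "in parallel" over one draw in standard GL; two sequential draws over the same VAO is the usual answer.)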
1 | OpenGL NDC initial values I have a set of vertex locations to create a plane this gt Vertices position color texture 0.5f, 0.5f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.5f, 0.5f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.5f, 0.5f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.5f, 0.5f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f Everything is fine, but I can't grasp how I should input these values at the very beginning. I used to give exact locations for vertices in DirectX in pixel units, but here in OpenGL I should give vertex locations in normalized coordinates. If I wanted to create a plane, say 200px by 200px, how should I deal with that when I set the vertex data? I know I can transform my plane with coordinate matrices through model space and view space and etc. I just don't know what I should be giving for the initial vertex location values.
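A common way to keep thinking in pixels is to build the projection so that one unit equals one pixel; the vertex positions can then be authored directly in pixel units. A sketch assuming GLM; windowWidth/windowHeight are illustrative:

// Top-left origin, y grows downward, like typical 2D pixel coordinates.
glm::mat4 projection = glm::ortho(0.0f, (float)windowWidth,
                                  (float)windowHeight, 0.0f, -1.0f, 1.0f);

// A 200x200 pixel quad with its top-left corner at (100, 100):
float vertices[] = {
    100.0f, 100.0f,
    300.0f, 100.0f,
    300.0f, 300.0f,
    100.0f, 300.0f,
};
// Multiply positions by "projection" in the vertex shader; the matrix does
// the pixel-to-NDC mapping, so no normalized coordinates are written by hand.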
1 | Picking objects with mouse ray I simply want to pick a few spheres in my scene using the mouse ray. I have implemented (copied most of it, but with little understanding) a ray sphere collision code. Also I have implemented the code for converting the mouse window coords to OpenGL coords. And constructed the mouse function. I have a ray class. In other words I have made everything, but something isn't working... So, here is the ray sphere collision code bool Sphere CheckRayCollision(Ray mouseRay) double discriminant, b b DotProduct(mouseRay.GetOrigin(), mouseRay.GetDirection()) discriminant b b DotProduct(mouseRay.GetOrigin(), mouseRay.GetOrigin()) this gt radius this gt radius if(discriminant lt 0) std cout lt lt "disc" lt lt std endl return false discriminant sqrt(discriminant) double x1 b discriminant double x2 b discriminant if(x2 lt 0) std cout lt lt "x2" lt lt std endl return false if(x1 lt 0) x1 0 return true return true It is within the sphere class. The conversion between window and OGL coords Vector3d MouseClass ConvertMouseToOGLCoordinate(int mouseX, int mouseY, int mouseZ) GLint viewport 4 GLdouble modelMatrix 16 GLdouble projectionMatrix 16 glGetIntegerv(GL VIEWPORT, viewport) glGetDoublev(GL MODELVIEW MATRIX, modelMatrix) glGetDoublev(GL PROJECTION MATRIX, projectionMatrix) float winZ float winY float(viewport 3 mouseY) glReadPixels(mouseX, (int)winY, 1, 1, GL DEPTH COMPONENT, GL FLOAT, amp winZ) double x, y, z gluUnProject((double)mouseX, winY, mouseZ, modelMatrix, projectionMatrix, viewport, amp x, amp y, amp z) return Vector3d(x, y, z) And the function which checks whether the left mouse btn is clicked and, if it is, checks for the collision void HandleMouse() if(userMouse.IsLeftButtonDown()) int cursorX int(userMouse.GetCurrentPosition().GetX()) int cursorY int(userMouse.GetCurrentPosition().GetY()) Vector3d nearPoint MouseClass ConvertMouseToOGLCoordinate(cursorX, cursorY, 0.0f) Vector3d farPoint MouseClass ConvertMouseToOGLCoordinate(cursorX, cursorY, 1.0f) Vector3d direction farPoint nearPoint direction.Normalize() positionInWorld farPoint mouseRay Ray(nearPoint, direction) drawCube true for(int i 0 i lt 3 i ) bool isCollided sphere i .CheckRayCollision(mouseRay) if(isCollided) sphere i .SetColor(0.0f, 1.0f, 0.0f) else sphere i .SetColor(1.0f, 0.0f, 0.0f) The 'drawCube' bool var is just for me to show if the conversion func works well and the 'positionInWorld' var is for the cube's coords. The drawing works, so I think the conversion function is fine, but I might've done something wrong. If anyone can help me sort out why this code isn't working I would be very grateful! I have been struggling with this for a week and searched everywhere on the Internet and haven't found a solution yet... Thanks in advance! EDIT The picking isn't working here. Pressing the left mouse button doesn't select anything. No matter where I click...
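For reference, the standard ray sphere test works with the vector from the sphere's centre to the ray origin — something the posted CheckRayCollision never references, so spheres away from the world origin can never test correctly. A sketch of the usual form, assuming the sphere stores a center member and the ray direction is normalized (Vector3d/DotProduct as in the post):

#include <cmath>

bool Sphere::CheckRayCollision(Ray mouseRay)
{
    // Vector from sphere centre to ray origin; this is what makes the
    // test work for spheres anywhere in the world, not just at the origin.
    Vector3d oc = mouseRay.GetOrigin() - this->center;
    double b = DotProduct(oc, mouseRay.GetDirection());
    double c = DotProduct(oc, oc) - this->radius * this->radius;
    double discriminant = b * b - c;
    if (discriminant < 0.0)
        return false;                          // ray misses the sphere
    double s  = sqrt(discriminant);
    double t0 = -b - s, t1 = -b + s;           // entry and exit distances
    return t0 >= 0.0 || t1 >= 0.0;             // hit must be in front of origin
}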
1 | OpenGL App not setting cursor position appropriately I have written a small application using OpenGL, and have implemented some rudimentary camera controls. Unfortunately, I cannot get the application to set my cursor position correctly. The cursor is never set to where I tell it to go, so my application just reacts to where the cursor is on my entire screen. I first attempted to use GLFW, and when I saw that I couldn't set the cursor appropriately, I decided to try SFML. Neither one works. I'm on an Arch Linux install with a Gnome desktop. I've been trying to figure this out for a few days now to no avail. The relevant code is as follows sf Vector2i cursor pos sf Mouse getPosition( window) sf Mouse setPosition(sf Vector2i(1280 2, 720 2), window) This gets called every frame inside a function that messes with some matrices. I also set the cursor position at initialization. Any hints or advice would be greatly appreciated. |
1 | Does Java support OpenGL by itself? Note This is long, but I'm explaining everything that you need to know. Don't read halfway through it and say "What's the question?". It's simple but long and I need help as soon as possible. So I asked a question not long ago that was similar, but it wasn't my best work. What I'm trying to do is make a small jar with no other files but the jar (maybe natives if needed) that handles a window and graphics for games. I'm making this for some people I know who don't want to get into advanced graphics and stuff to make games, plus I figure it would be easier to stick everything they need into one jar that they know how to use. Anyway, I found JOGL, but after about 3 hours all I got with JOGL was the memory to never use it because it's a pain to install (at least for me), everyone says a different way to install it, and I need something like 100 files along with my jar to get it to work. So since I'm not dealing with JOGL, I figured it's best to try and find something else. So does anyone know any way to get OpenGL into Java without libraries that add more files, or at most just 1 file? I'm trying to get it so it's just that jar and nothing else. I just want this done but I'm very confused. I would also like it to be able to run on Windows, Linux and Mac. I only have a Windows machine, although I can get Linux to test it on and I know someone who has a Mac, but keep in mind I'm building it on a Windows machine. So my question really is how would I be able to stick OpenGL (I would like OpenAL and maybe OpenCL too) in a single jar and nothing else? I have a few exceptions I'm kinda OK if I need a few natives, but I don't want 10 jars and 50 natives, and I need it to work on all kinds of machines. Also, I would like to be able to use Swing to handle the window.
1 | How to emulate PSX's graphics with OpenGL? I want to know what options (or shaders) to set so that my OpenGL game looks like a PlayStation 1 game. I know it probably can not be achieved 100 percent, because the PSX output to a television, and a television renders differently than a monitor. Thanks. )
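Two hallmarks usually cited for the PSX look are vertex snapping (the hardware had no sub-pixel precision, so polygons wobble as they move) and affine, non-perspective-correct texture interpolation (approximated on desktop GLSL with the noperspective qualifier on the UV varying). A vertex shader sketch of the snapping, with virtualResolution as an assumed uniform:

// Quantise post-projection positions to a coarse virtual grid
// (e.g. 320x240) to reproduce the characteristic PSX polygon jitter.
uniform vec2 virtualResolution; // e.g. vec2(320.0, 240.0)

vec4 snapToGrid(vec4 clipPos)
{
    vec3 ndc = clipPos.xyz / clipPos.w;                      // perspective divide
    ndc.xy = floor(ndc.xy * virtualResolution) / virtualResolution; // snap
    return vec4(ndc * clipPos.w, clipPos.w);                 // undo the divide
}

// gl_Position = snapToGrid(projection * view * model * vec4(position, 1.0));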
1 | How to integrate android camera for LibGDX project? (deprecated material..) I'm trying to use the Android camera in LibGDX. I tried using this implementation of the API, which is basically the only you will find on Google. https github.com libgdx libgdx wiki Integrating libgdx and the device camera However, most of it is deprecated. It uses GL10, and the methods it calls to connect to the camera are stubs. It is not usable, and the API has probably got easier to use since (but I'd rather not look at google documentation when this has probably been made before... or has it? nothing on Google anyway). So, do I have to make my own implementation, or is there anything available that would make it any easier? Or is the 2013 implementation actually still usable somehow? Thanks in advance. |
1 | For drawing many layered 2D tiles, should one use the painter's algorithm, or Z buffering? Sorry if this question doesn't make sense, I'm still very new to WebGL OpenGL. Basically, I'm trying to draw a tilemap similar to the one in Stardew Valley. Here's a screenshot from that game https i.imgur.com eEgWu6b.png So, there are several tile layers that are drawn after one another to simulate depth. And I believe that when two objects are on the same layer, their draw order is based on their Y coordinate. For example, in this screenshot notice how if you are above the scarecrow, you'll be drawn behind it, and otherwise, in front of it. I'm trying to figure out how to efficiently do this, and there seem to be two main ways Use z buffering in OpenGL. That is, the map is drawn "3d" like in the following image https i.imgur.com Bi8C8z2.jpg where quad has a z value. Then, assuming the camera only translates and never rotates and just always looks top down, then it should still look like a proper top down 2D game with depth. Use the painter's algorithm. That is, first draw everything on the first layer, then everything on the second layer, then everything on the third layer, etc. Your main application code should sort all objects on the same layer based on their Y coordinate so that everything is drawn with the proper depth. So, I actually really like the idea of the first one. Because I heard that texture swapping was really expensive in OpenGL, and some of my maps are very large so they use 3 5 sprite sheets to draw everything. So with this method, I believe I should be able to just first draw every sprite that uses my first spritesheet, swap to the next texture, draw all sprites that use that texture, etc. Since they have z values, everything can be drawn out of order and still be layered properly, right? Also, if I space out the z coordinate for my layers enough, I think I should be able to incorporate the sprite's y coordinate into their z coordinate. That is, as an example, for a 2x2 map, the first layer would have z values 10000, 10001, 10002, 10003, then the next layer will have z values 20000, 20001, 20002, 20003, etc. My only concern is that I heard depth testing and occluding and whatnot can be expensive. And also apparently something called z fighting can occur? I think the painter's algorithm is the more traditional way of doing this? My only concern with this one is that it seems like it could be expensive to use software sorting on all the sprites each frame. Also, I don't know how to handle this with texture swapping. If one layer happens to include monsters that are all from different sprite sheets, won't that include a lot of texture swapping when rendering them all out, if we have to do them in order from top left to bottom right? I'm just not sure which of these I should do. Using the z index sounds like it could make everything a lot easier, but I'm not sure if the price of depth testing is cheaper than software sorting and texture swapping. Also, I'm concerned about possible z fighting. How do people normally do this?
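For the z-buffer variant, the layer/Y interleaving described above reduces to a tiny helper; a sketch assuming Y grows downward within a map of known height, with the result later mapped into the ortho near/far range:

// Fold the layer index and the sprite's Y coordinate into one depth value:
// layers stay strictly ordered, and within a layer, lower-on-screen sprites
// (larger y) get larger depth, reproducing "lower draws in front".
float spriteDepth(int layer, float y, float mapHeight)
{
    // Each layer owns a unit-wide depth band; y picks the position inside it.
    return (float)layer + y / mapHeight; // e.g. layer 2, y = mapHeight/2 -> 2.5
}

Keeping each layer in its own integer band like this also sidesteps most z-fighting, since no two sprites in different layers can share a depth value.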
1 | How do I determine if a tile is a slope based on the tile image? In my game, every tile is a 32x32 texture. All the slopes are at a 0 45 degree angle. I would like to determine, at the time I load the tile, if the tile is sloped by examining its texture bitmap data. How can I do this?
1 | Does the order of vertex buffer data when rendering indexed primitives matter? I'm building a 3d object's triangles. If I can write them to the buffer in the order they are calculated it will simplify the CPU code. The vertices for the triangles will not be contiguous. Is there any performance penalty for writing them out of order? |
1 | Deferred rendering order? There are some effects for which I must do multi pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking... The framebuffer objects FBO 1 has a color attachment and a depth attachment. FBO 2 has a color attachment. The render passes Render g buffer normals and depth (used by outline & DoF blur shaders) output to FBO no. 1. Render solid geometry, bold outlines (as in toon shader), and fog output to FBO no. 2. (can all render via a single fragment shader I think.) (optional) DoF blur the scene output to the default frame buffer OR ELSE render FBO2 directly to default frame buffer. (optional) Mesh wireframes composite over what's already in the default framebuffer. Does this order seem viable? Any obvious mistakes?
1 | GLSL "varying" interpolation component wise? Reference in the spec? Probably a dumb question, but I can't find (or am not understanding) a conclusive answer in the spec or in other questions (e.g., this one). For smoothly interpolated varying vec and mat vertex shader outputs fragment shader inputs, is each element of the vector or matrix interpolated individually? GL spec sec. 13.5.1 (clipping) describes linear clip space interpolation of the "output values associated with a vertex." It also says that those are componentwise for vectors, but doesn't mention matrices. Similarly, sec. 14.5.1 (rasterizing lines) and sec 14.6 (rasterizing polygons) describe interpolation of an "associated datum f for the fragment". Is each element of a vector or matrix considered an individual "associated datum" and interpolated independently from the other elements of the vector or matrix? Secs. 11.1.3.10 (shader outputs) and 15.1 (fragment shader variables) mention interpolation but refer elsewhere for the details. Similarly, in the GLSL spec, sec. 4.5 (interpolation qualifiers) says that interpolation happens but does not distinguish scalars from multi component variables. I am looking for a definitive statement about how vecs and mats are interpolated, if there is one. Or let me know if there isn't! Thank you! (Note answers can be for any OpenGL version.) |
1 | How to store renderer vertex index data in scene graph objects? I have a SceneNode class which contains a Mesh instance. The Mesh class stores client side information such as vertex and index arrays (before they're uploaded to the GPU). I also have an abstracted Renderer class, for GL and D3D, which render the SceneNodes. However, I'm not sure where I should store the API specific variables, e.g. GLuint via glGenBuffers for GL, and an ID3D11Buffer for D3D. The few options I've considered are Create a derived Mesh class for each API, e.g. GLMesh D3DMesh Create a derived MeshData class for each API, which is stored in the main Mesh class Store a map of Mesh to API variables in each renderer, e.g. perform a lookup of Mesh to GLuint ID3D11Buffer for each object that is rendered (variables would have to be generated after the scene had been updated, but before rendering). Separate the logic of rendering from scenes, by visiting the SceneGraph after update, and generating a RenderGraph of all renderable nodes in the scene. What's the recommended way of doing this? |
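A sketch of the second option (a derived MeshData per API), which keeps the scene graph API-agnostic while each renderer backend owns its handles; Vertex and the member names here are illustrative:

#include <cstdint>
#include <memory>
#include <vector>

struct MeshData {                       // opaque GPU-side representation
    virtual ~MeshData() = default;
};

struct GLMeshData : MeshData {          // lives in the GL backend only
    GLuint vbo = 0, ibo = 0, vao = 0;
};
// A D3DMeshData with ID3D11Buffer* members would mirror this in the D3D backend.

class Mesh {
public:
    std::vector<Vertex>   vertices;     // client-side arrays, API-neutral
    std::vector<uint32_t> indices;
    std::unique_ptr<MeshData> gpuData;  // filled in by Renderer::upload(Mesh&)
};

The renderer downcasts gpuData to its own type when drawing, so adding a new API never touches SceneNode or Mesh.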
1 | How could I do simple per fragment lighting on BSP geometries? I am programming a graphics engine for an old game. The game uses a BSP geometry which I have rendering perfectly. For its lights, however, it simply has light instances with the standard x, y, z, rgba, brightness, type. Now I know that OpenGL has an 8 light limit. How should I go about handling multiple lights? I am learning per fragment lighting just to have the concepts under my belt. I know per pixel lighting is the standard and I will eventually move there, I just want to learn how to get this concept put in play as well. I assume I will just calculate which lights are the closest and render those 8. Does anyone have any other ideas?
1 | Run OpenGL shader on part of a texture How do I run an OpenGL shader on just a portion of an off screen texture and leave the rest of the texture unmodified? Are there any calls that restrict the sampled pixels to just a rectangle or do I have to filter out the rectangle in the shader code? |
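The usual answer is the scissor test, which clips all rendering to a pixel rectangle and leaves the rest of the render target untouched, so no filtering is needed in the shader itself. A minimal sketch; fbo, the rectangle variables and the draw helper are assumptions:

// Run the shader over only a sub-rectangle of the off-screen texture.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // FBO with the texture attached
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);          // rectangle in texture pixels
drawFullscreenQuadWithShader();          // hypothetical helper; fragments
                                         // outside the rect are discarded
glDisable(GL_SCISSOR_TEST);

Alternatively, drawing a quad that covers only that rectangle (instead of a fullscreen quad) achieves the same without state changes.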
1 | How can I use the graphics pipeline to render volumetric data based on a density function? Both graphics APIs (OpenGL and DirectX) devise a well defined pipeline in which several stages are programmable. These programmable stages are required to take a fixed minimum amount of data, are supposed to do a well defined range of operations on it, and output some defined minimum output, so that data can be passed on to the next stage correctly. It seems as if these pipelines are designed to work with only a limited number of types of geometric data, which in the case of both D3D and OGL are vertex data and texture co ordinates. But, given a case where the application I plan to make doesn't use vertices (or even voxels) to represent its geometric data and does not exactly do transformations or projections or rasterisation or interpolation or anything like that, such limitations of the APIs or the pipeline make things difficult. So, is there a way in which we can change the graphics pipeline so that the functionality of what each stage does to the data, and the type of data that is output by each stage, is changed to my advantage? If not, then is there a way by which I can use the 'raw' API functions to construct my own pipeline? If not, then please mention why it isn't possible. EDIT My application uses density functions to represent geometry. The function has a value at each point in space. I divide the frustum of the camera into a 3d grid, each block can be projected as a pixel. On each block, I integrate the density function and check if its value is more than a required value. If yes, then it is assumed that something exists in that block and the pixel corresponding to that block is rendered. So, now in my renderer, I want to pass the function (which I represent with a string) to the graphics hardware instead of vertex data in vertex buffers. This also implies that the vertex shader won't have vertices to transform into homogeneous clip space and the fragment shader doesn't get pixel info. Instead, now most of the looking up and evaluation happens per pixel.
1 | Gamma Space and Linear Space with Shader I am using Unity and I can choose between two color space modes in the settings Gamma or Linear Space. I am trying to build a Custom Lighting Surface shader but I am facing some problems with those color spaces, because the render is not the same depending on the color space. If I render the lightDir, Normal or viewDir I can see that they are different depending on the color space I use. I made some tests and the result I have in Linear Space is great, but how can I obtain the same result in Gamma Space? Are there some transformations? On what component should I apply those transformations? Thank you very much!
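For reference, the common manual conversions between the two spaces use the 2.2 approximation; in Gamma mode the framebuffer and textures hold gamma-encoded values, so lighting math only matches Linear mode if the inputs are linearised first and the result re-encoded. A GLSL-style sketch of the idea (Unity's own shaders are HLSL, so treat this as pseudocode):

vec3 toLinear(vec3 c) { return pow(c, vec3(2.2)); }       // gamma -> linear
vec3 toGamma (vec3 c) { return pow(c, vec3(1.0 / 2.2)); } // linear -> gamma

// e.g. in Gamma-space mode, to reproduce a Linear-space result:
// vec3 lit = shade(toLinear(albedo), lightDir, normal);  // light in linear
// fragColor = vec4(toGamma(lit), 1.0);                   // re-encode output

Note this only affects color inputs; direction vectors like lightDir and normals are not color data and should never be gamma-converted.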
1 | Fragment shader compiling in webGL but not in OpenGL I am programming in Haxe (language compiling to multiple platforms) and I have written some shaders. My fragment shader runs fine in html5, but when I try to compile for native (OS X and or Neko, a VM for Haxe) I get a shader compilation error, but no details (I am using lime which is a platform abstraction that does these things for me). Here is the shader precision mediump float varying vec4 v color void main() gl FragColor v color Very simple as you can see. It runs fine in webGL, but it seems it won't compile in OpenGL. I am no expert in shaders so I have no idea what might be wrong. Am I using some syntax that only exists in webGL? Also just in case, here is my vertex shader (which compiles fine) attribute vec3 a position attribute vec4 a color uniform mat4 uMVMatrix uniform mat4 uPMatrix varying vec4 v color void main() gl Position uMVMatrix vec4(a position, 1) v color a color |
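The usual suspect here is the bare precision statement: WebGL's GLSL ES requires it, but a desktop compiler defaulting to version 1.10 rejects it. A sketch of the standard guard, which compiles on both (using v_color as the likely original identifier, with the underscore restored):

// GL_ES is predefined only by OpenGL ES / WebGL compilers, so the
// precision qualifier is skipped entirely on desktop GL.
#ifdef GL_ES
precision mediump float;
#endif

varying vec4 v_color;

void main() {
    gl_FragColor = v_color;
}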
1 | SDL, SFML, OpenGL, or just move to Java I recently started a new project, and I'm wondering if I should change the technology now before it's too late. I'm using SDL with C , I have around 6 classes, and the game is going alright, but I got to this point where I have to rotate my sprite to make it point at the coordinates of the mouse. It's a 2D topdown game, so if I pre cached the images, I'd have to load 360 images, which is bad for memory. If I used SDL glx, I could do the rotation in real time, but I heard it'd drop my frame rate very drastically, and I don't want my game to be slow. I also read I could use OpenGL, which is faster at these things. The problem is, how much would I have to change my code if I moved to OpenGL with SDL? So, I considered moving to a completely different language Java which is much simpler, and would allow me to focus on the game and handle networking much more easily than with C . I am very confused, and would like to hear your opinion. Thank you!
1 | GLM Euler Angles to Quaternion I hope you know GL Mathematics (GLM), because I've got a problem I cannot crack I have a set of Euler Angles and I need to perform smooth interpolation between them. The best way is converting them to Quaternions and applying the SLERP algorithm. The issue I have is how to initialize a glm quaternion with Euler Angles, please? I read the GLM documentation over and over, but I can not find an appropriate Quaternion constructor signature that would take three Euler Angles. The closest one I found is the angleAxis() function, taking an angle value and an axis for that angle. Note, please, that what I am looking for is a way to pass RotX, RotY, RotZ. For your information, this is the above mentioned angleAxis() function signature detail tquat lt valType gt angleAxis (valType const amp angle, valType const amp x, valType const amp y, valType const amp z)
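For what it's worth, GLM does have a quaternion constructor that takes a vec3 of Euler angles in radians (pitch, yaw, roll order); a sketch, assuming RotX/RotY/RotZ are in degrees:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Build the quaternion directly from the three Euler angles.
glm::quat q = glm::quat(glm::vec3(glm::radians(rotX),
                                  glm::radians(rotY),
                                  glm::radians(rotZ)));

// Smooth interpolation between two orientations:
glm::quat mid = glm::slerp(q1, q2, 0.5f);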
1 | Is it possible to run GLFW even though my graphics card (Nvidia) supports the Direct3D API? I have an Nvidia 820M GPU installed in my Windows 7 machine. In the Nvidia control panel it is shown that it supports Direct3D API version 11. Am I able to run OpenGL applications (using GLUT, GLFW) on my machine with this configuration? See the below screenshot for configuration details.
1 | Examples of transforming camera in its local coordinates vs world coordinates OpenGL I am trying to learn OpenGL and am doing some beginner learning exercises. One of the exercises is to translate and then rotate the camera along its own local coordinates and not the world coordinate system. I am at a loss at how to do one vs the other. I'm aware of how to perform translations and rotations using the glTranslatef() and glRotatef() functions. Through research I'm also aware of using the 4x4 transformation matrix to multiply our vectors by to create the desired transformations, although I've not yet messed around with it personally, so I'm not too familiar with the transformation matrix. This being said, I still have questions regarding local vs world coordinates. I understand that we can put into effect translations and rotations via these functions or the transformation matrix, but I'm concerned with understanding how we do one vs the other. I haven't found any explicit code examples of transforming a camera (or a model, but I'm more concerned with the camera in this instance) with respect to the world coordinates, and likewise for transforming it with respect to its local coordinates. I feel like at this point I need something rather explicit, because I've felt confused for several days now. Furthermore, there may be some conceptual things I am confused about. This is to my understanding If we strictly move objects and the camera (which to my understanding functions like any other object in the world) by the local coordinate system, does that mean that all the local coordinate systems share the same position of their origin with each other and the origin of the world coordinate system? Then, on the other hand, if we strictly move objects camera with respect to the world coordinate system, then all objects will always be on their own local coordinate system's origin, right? And assuming no object is positioned on the same spot, then no local coordinate system's origin will be positioned on the same spot, right?
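The explicit difference usually boils down to multiplication order. A sketch in GLM terms (column-major matrices, vectors multiplied on the right), which also mirrors what the legacy glRotatef/glTranslatef calls do internally:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 R = glm::rotate(glm::mat4(1.0f), angle, axis);

// Local-space transform: the rotation is applied in the object's own
// (current) frame -- this is what glRotatef effectively does.
model = model * R;

// World-space transform: the rotation is applied about the fixed world
// axes, regardless of how the object is currently oriented.
model = R * model;

The same rule holds for translations: post-multiplying moves along the object's local axes, pre-multiplying moves along the world axes.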
1 | Bad FPS for smaller size (OpenGL ES with SDL) If you saw my other question, well, there is still a little problem Click here to watch on youtube Basically, the frame rate is very bad on the actual device, where for some reason the animation is scaled (it looks like the left side on the video). It is quite fast on the simulator where it is not scaled (right side). For a test, I submitted this new changeset that simply hard codes the smaller size (in SDL CreateWindow() and in glViewport()), and as you see in the video, now it is slow even in the simulator (left side shows the small size, right side shows the original size otherwise the code is the same). I'm clueless why it's soooo slow with a smaller galaxy, in fact it should be FASTER. The question is not about general speed optimization like reducing screen resolution or joining glBegin(GL POINTS) glEnd() blocks. Update For a test, I simply reduced the screen size to 320x480 and compiled it to OS X desktop got the same speed as with 800x800, so this size specific slow down is on iOS only. Will post this on the SDL mailing list now. |
1 | Creating a 3rd Person Flying Camera for 3D asteroids game In order to learn PyOpenGL and test out an engine I am developing, I am trying to write a 3D asteroids flying space shooter game. Currently, I am implementing the camera using a "look at" function positioned right behind the player's ship. The player is able to control the yaw and pitch with the left right and up down arrow keys respectively. The results look like this (forgive the placeholder art) And here is the code snippet in my Player class that implements that from pyorama.entity import Entity from pyorama.math3d.vec3 import Vec3 from pyorama.math3d.mat4 import Mat4 import math class Player(Entity) def init (self, model, camera) self.model model self.center self.model.mesh.compute bounding sphere().center self.model.transform self.model.transform.translate( self.center) self.camera camera self.key down status "Left" False, "Right" False, "Up" False, "Down" False super(Player, self). init () def update(self) messages super(Player, self).update() for message in messages if message.event type "key down" key message.data "key name" if key in self.key down status.keys() self.key down status key True if message.event type "key up" key message.data "key name" if key in self.key down status.keys() self.key down status key False self.model.transform self.model.transform.translate(self.center) if self.key down status "Up" self.model.transform self.model.transform.rotate x( 0.01) if self.key down status "Down" self.model.transform self.model.transform.rotate x(0.01) if self.key down status "Left" self.model.transform self.model.transform.rotate y(0.01) if self.key down status "Right" self.model.transform self.model.transform.rotate y( 0.01) self.model.transform self.model.transform.translate( self.center) self.model.transform self.model.transform.translate(Vec3(0, 0, 0.1)) right self.model.transform.data 0 3 up Vec3( self.model.transform.data 4 7 ) forward Vec3( self.model.transform.data 8 11 ) position Vec3( self.model.transform.data 12 15 ) temp self.model.transform self.model.transform self.model.transform.translate( self.center) right self.model.transform.data 0 3 up Vec3( self.model.transform.data 4 7 ) forward Vec3( self.model.transform.data 8 11 ) position Vec3( self.model.transform.data 12 15 ) self.camera.view Mat4.look at(position 20 forward, position, up) self.model.transform temp As you can see, the ship stays perfectly still behind and the world rotates around. As a result, it looks very unnatural, as if a 2D ship sprite was simply glued onto the screen! So my question is, how are third person cameras in flying games typically implemented? What would a keyboard mouse control scheme look like that controls yaw, pitch, roll, banking, acceleration deceleration, etc? Would the camera controls for the camera be separated from moving the ship? Any help would be greatly appreciated. |
1 | Shader Calculate depth relative to Object I am trying to calculate depth relative to the object. Here is a good solution to retrieve depth relative to the camera Depth as distance to camera plane in GLSL varying float distToCamera void main() vec4 cs position glModelViewMatrix gl Vertex distToCamera cs position.z gl Position gl ProjectionMatrix cs position With this example the depth is relative to the camera. But I would like to get the depth relative to the object. I would like the same depth and value whether I am near the object or far from it. Here is an example of what I am trying to achieve. On the left you can see that the depth is relative to the camera. And on the right, even if the camera moves back from the object, the depth remains the same because it is dependent on the object.
1 | Water silhouette shader using GLSL I have this problem to solve using Cocos2d x 3.x. In my game there is water represented by a rectangle texture, modified by the code on the go. I also have a character moving around, rotating, etc. I would like to achieve a silhouette effect when he goes into the water so if a pixel of the character texture is not transparent and the corresponding pixel of the water texture is not transparent, the color should be changed, let's say to gray. I would like it to work as a fragment shader added to the character sprite. The problem is that I have no idea how to go from character UV coords to water texture coords (positions, rotations and dimensions of the textures are different). I am also not sure the approach proposed by me is the correct one; maybe there is a better way to do it? Could you please give me some thoughts on how it should be done? Any code is welcome too! Edit here is a visualization, the red triangle is the character, it changes color as it submerges
1 | OpenGL Arcball camera rotation I'm implementing arcball camera rotation, whereby a camera is looking at a coordinate and rotates around it in the x axis or the y axis such that the camera is circulating around it the y axis will be capped to prevent flipping. Suffice it to say it's not producing the output I want, as it isn't rotating around the target. I'm not sure where the problem may lie. Here is my code void Camera rotate(float x, float y) x pMouseSensitivity y pMouseSensitivity pYaw x pPitch y if (pPitch gt 89.0f) pPitch 89.0f if (pPitch lt 89.0f) pPitch 89.0f pPosition glm rotate(pPosition, x, pUp) pPosition glm rotate(pPosition, y, pRight) The rotate function takes in x and y from where the mouse position is at. These values are then added to yaw and pitch respectively before the position of the camera is rotated based on the x and y. I update the camera as follows void Camera update() pForward pPosition pTarget pRight glm normalize(glm cross(pForward, pWorldUp)) pUp glm normalize(glm cross(pForward, pRight)) pViewMat glm lookAt(pPosition, pTarget, pUp) pProjMat glm perspective(pFieldView, pAspectRatio, pNearPlane, pFarPlane) The world up vector is always vec3(0, 1, 0) and is unchanged throughout, for resetting the camera. I set the target of the camera with void Camera orbitAround(glm vec3 focus) pTarget focus This function would then take in an arbitrary point or even the position of a 3D rendered model. I have been looking at a few tutorials regarding arcball rotation, and from what I can understand it involves pTarget being what's looked at, and the rotation revolving around it. What does strike me is that I'm not using pitch and yaw, so I will have to look into what I should be doing with them. Producing the arcball rotation is what confuses me, and I require some help on it. Pseudocode would be great!
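One way to get a true orbit is to derive the position from yaw/pitch as spherical coordinates around pTarget each frame, instead of rotating pPosition incrementally. A sketch in the same style, where pDistance (the orbit radius) is an assumed member:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void Camera::updateOrbit()
{
    float yawR   = glm::radians(pYaw);
    float pitchR = glm::radians(pPitch);   // already clamped to (-89, 89)

    // Spherical coordinates: the camera always sits on a sphere of radius
    // pDistance centred on the target, so it cannot drift off the orbit.
    glm::vec3 offset;
    offset.x = pDistance * std::cos(pitchR) * std::sin(yawR);
    offset.y = pDistance * std::sin(pitchR);
    offset.z = pDistance * std::cos(pitchR) * std::cos(yawR);

    pPosition = pTarget + offset;
    pViewMat  = glm::lookAt(pPosition, pTarget, glm::vec3(0.0f, 1.0f, 0.0f));
}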
1 | Understanding the perspective projection matrix in OpenGL Setting the perspective projection matrix in OpenGL (including OpenGL ES 2.0) has the following general format glm mat4 perspective(float fovy, float aspect, float zNear, float zFar) Notice the last two parameters. The first one specifies the zNear plane, which is the plane closer to the camera. The second one specifies the zFar plane, which is far from the camera. It is common knowledge that, in OpenGL, things that are further, in other words "deeper into the screen", have negative Z axis. Why then do all the OpenGL examples that set a perspective projection matrix (taken from reliable sources like the OpenGL SuperBible and similar) look like the two above mentioned parameters (zNear, zFar) of the perspective projection function are swapped? This example is taken from the GLM (OpenGL Math) website glm mat4 Projection glm perspective(45.0f, 4.0f 3.0f, 0.1f, 100.f) You can see that the zNear parameter is actually further from the camera than the zFar parameter, which is 100 points closer to the camera viewer. Can you provide an easy to understand explanation, please? It is not just one but all the examples that pass zFar as a higher positive value than zNear. Thanks. My Example In my code, when I set glm mat4 Projection glm perspective(45.0f, 4.0f 3.0f, 1000.0f, 500.0f) I assume my frustum is 1000 points long in the z positive direction and 500 points long in the z negative direction. When I set things like this, I can see at least some objects, although things are not perfect for some reason I see only objects positioned on negative z coordinates. This might relate to my incorrect understanding of the perspective projection function.
1 | How to pass vector to GLSL shader? I am using core OpenGL to draw fonts. This works when I do just one font set at a time. Now I am trying to use three different sets in an array. I have this structure struct sdf Vertex glm vec2 position glm vec2 textureCoord glm vec4 color and I use three fonts so this vector lt sdf Vertex gt sdf vertex 3 vector lt unsigned int gt sdf indexes 3 I then populate each font type like this (this works fine, I can see all the data in each element) void sdf addVertexInfo(uint whichFont, glm vec2 position, glm vec2 textureCoord, glm vec4 color) sdf Vertex sdf vertexTemp Temp to hold vertex info before adding to vector sdf vertexTemp.position position sdf vertexTemp.textureCoord textureCoord sdf vertexTemp.color color sdf vertex whichFont .push back(sdf vertexTemp) and then I want to pass one set of font data to the shader glBufferData ( GL ARRAY BUFFER, sizeof( sdf Vertex) sdf vertex whichFont .size(), amp sdf vertex whichFont , GL DYNAMIC DRAW ) glVertexAttribPointer ( shaderProgram SHADER TTF FONT .inVertsID, 2, GL FLOAT, GL FALSE, sizeof( sdf Vertex), (GLvoid )offsetof( sdf Vertex, position) ) glEnableVertexAttribArray ( shaderProgram SHADER TTF FONT .inVertsID ) Specify the texture info glVertexAttribPointer ( shaderProgram SHADER TTF FONT .inTextureCoordsID, 2, GL FLOAT, GL FALSE, sizeof( sdf Vertex), (GLvoid )offsetof( sdf Vertex, textureCoord) ) glEnableVertexAttribArray ( shaderProgram SHADER TTF FONT .inTextureCoordsID ) Specify the color array glVertexAttribPointer ( shaderProgram SHADER TTF FONT .inColorID, 4, GL FLOAT, GL FALSE, sizeof( sdf Vertex), (GLvoid )offsetof( sdf Vertex, color) ) glEnableVertexAttribArray ( shaderProgram SHADER TTF FONT .inColorID ) glBufferData ( GL ELEMENT ARRAY BUFFER, sdf indexes whichFont .size() sizeof(unsigned int), amp sdf indexes whichFont , GL DYNAMIC DRAW ) ) glDrawElements ( GL TRIANGLES, sdf indexes whichFont .size(), GL UNSIGNED INT, 0 ) ) Checking the vertex shader with RenderDoc I'm not seeing any of the sdf vertex data coming through, or sometimes it's all scrambled. I think it's got to do with how I'm trying to point glBufferData to the vector of structs inside an array. How do I pass the location of the vector inside an array? Thanks.
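One detail worth checking: if the real code passes the address of the std vector object itself (as the snippet suggests) rather than its element storage, glBufferData uploads the vector's internal bookkeeping, which would produce exactly this scrambled-attribute behaviour. A sketch of the intended calls:

// Upload the vector's element storage, not the vector object itself.
glBufferData(GL_ARRAY_BUFFER,
             sizeof(sdf_Vertex) * sdf_vertex[whichFont].size(),
             sdf_vertex[whichFont].data(),   // or &sdf_vertex[whichFont][0]
             GL_DYNAMIC_DRAW);

glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             sizeof(unsigned int) * sdf_indexes[whichFont].size(),
             sdf_indexes[whichFont].data(),
             GL_DYNAMIC_DRAW);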
1 | Swapping Framebuffers or swapping attachments? I wanted to know which would be the better approach for post processing swapping between framebuffers, or swapping between textures attached to one framebuffer?
1 | Parsing .OBJ to fit the gldrawelements() call. c opengl I'm struggling trying to get this to work like it should. I've been able to make an OBJ loader that fits the glDrawArrays() call, with UVs and face normals, but when trying to modify it to fit glDrawElements() I'm at a loss. This is the code I have this far; the function takes the addresses of the vectors to be filled void objload3(string filename, vector lt glm vec3 gt amp vertices, vector lt glm vec2 gt amp texcords, vector lt glm vec3 gt amp normals) size t vertsize 0 size t texsize 0 size t normalsize 0 size t facesize 0 ref size containers string line string token std ifstream file(filename) if (file.is open()) while (getline(file, line)) count size if (line.compare(0, 2, "v ") 0) vertsize else if (line.compare(0, 2, "vt") 0) texsize else if (line.compare(0, 2, "vn") 0) normalsize else if (line.compare(0, 2, "f ") 0) facesize file.close() set size vector lt glm vec3 gt verticeref(vertsize) vector lt glm vec2 gt texkoordref(texsize) vector lt glm vec3 gt normalref(normalsize) vector lt string gt points(facesize 3) index filling counters size t vcount 0 size t vtcount 0 size t vncount 0 size t fcount 0 size t refcount 0 fill reference vectors file.open(filename) if (file.is open()) while (getline(file, line)) if (line.compare(0, 2, "v ") 0) istringstream s(line.substr(2)) glm vec3 v s gt gt v.x s gt gt v.y s gt gt v.z verticeref vcount v else if (line.compare(0, 2, "vt") 0) istringstream s(line.substr(2)) glm vec2 v s gt gt v.x s gt gt v.y texkoordref vtcount v else if (line.compare(0, 2, "vn") 0) istringstream s(line.substr(2)) glm vec3 v s gt gt v.x s gt gt v.y s gt gt v.z normalref vncount v else if (line.compare(0, 2, "f ") 0) line.erase(0, 2) istringstream is(line) while (getline(is, token, ' '))points fcount token file.close() vector lt int gt indexref(3 points.size()) faces allocation clean up faces information, convert to int for (unsigned int a 0 a lt points.size() a) stringstream is(points a ) while (getline(is, token, ' ')) indexref refcount stoi(token) is.clear() at this point, I would do this to fit it with the glDrawArrays() call vertices.resize(points.size()) texcords.resize(points.size()) normals.resize(points.size()) refcount 0 keep track of fill for (size t fillcount 0 fillcount lt points.size() fillcount) vertices fillcount verticeref indexref refcount 1 texcords fillcount texkoordref indexref refcount 1 normals fillcount normalref indexref refcount 1 But this fill doesn't take account of all the duplicate vertices, nor does it create an index. I'm having problems understanding the logic behind the glDrawElements call, and how to easily make an index and a list of vertices uv normals to fill the VBOs with. Seems like something the 3D model tool should do, but it doesn't? Can't I just use the list of faces already in the OBJ export? Some clarification would be much appreciated.
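For the de-duplication and index-building step the question asks about, the usual approach is a map from each unique face corner token to its final index; OBJ faces index positions, UVs and normals separately, so each distinct "v/vt/vn" combination becomes one output vertex. A sketch (OBJ indices are 1-based; parsing of each corner is elided):

#include <map>
#include <string>
#include <vector>

std::map<std::string, unsigned int> seen;   // face corner token -> final index
std::vector<unsigned int> indices;          // index buffer for glDrawElements

for (const std::string& corner : points) {  // e.g. "12/7/3"
    auto it = seen.find(corner);
    if (it != seen.end()) {
        indices.push_back(it->second);      // duplicate corner: reuse the slot
    } else {
        unsigned int newIndex = (unsigned int)vertices.size();
        // parse vi/vti/vni from the token, then append the referenced data:
        //   vertices.push_back(verticeref[vi - 1]);   // OBJ is 1-based
        //   texcords.push_back(texkoordref[vti - 1]);
        //   normals.push_back(normalref[vni - 1]);
        seen[corner] = newIndex;
        indices.push_back(newIndex);
    }
}
// vertices/texcords/normals go into the VBOs, indices into the IBO.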
1 | OpenGL blending (masking) I need some help with OpenGL texture masking. I have it working, but need to find some other blending function parameters to make it work in another way. Now I have Background ...code... glBlendFunc(GL ONE, GL ZERO) ...code Mask ...code... glBlendFuncSeparate(GL ZERO, GL ONE, GL DST COLOR, GL ZERO) ...code... Foreground ...code glBlendFunc(GL DST ALPHA, GL ONE MINUS DST ALPHA) ...code Now it sets the foreground's opacity to 0 (fills with the background texture) where the mask is transparent. I need it to react to the mask's colors. I mean something like setting the foreground's opacity depending on the mask's color. For example if the mask is black (0.0,0.0,0.0) then the opacity of that place in the foreground is 0 (it is filled with background), and if the mask is white (1.0,1.0,1.0) then the opacity of the foreground is 1 (not filled with background). It can be the reverse (white opacity 0, black opacity 1). I just need it to work depending on color. 1st column is my current result. Circle in mask is transparent. 2nd column is an example of the result I am trying to get. Circle in mask is white. 3rd column is an example of why I want to get it working just as I said (white mask color foreground alpha 0, black mask color foreground alpha 1 or reverse) 1st row is background 2nd row is mask 3rd row is foreground 4th row is result
1 | How to draw the simplest grid map OpenGL 1.0 I want to draw a simple black and white grid map, like the one shown. I have been searching for ways to generate a tile and a tile map, but I just want to draw this map and that's all. I mean that I want to draw it only once, so it should be easy. I use OpenGL 1.0. How can I draw that, and how should I draw it should I draw an image and place it like a map, or should I draw a tile and multiply it?
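A grid of this kind is just two nested loops of GL LINES in legacy immediate mode, drawn each frame (or baked once into a display list); a sketch where gridW/gridH are cell counts and cell is the cell size in pixels (illustrative names):

glColor3f(0.0f, 0.0f, 0.0f);            // black lines on a white clear color
glBegin(GL_LINES);
for (int x = 0; x <= gridW; ++x) {      // vertical lines
    glVertex2f(x * cell, 0.0f);
    glVertex2f(x * cell, gridH * cell);
}
for (int y = 0; y <= gridH; ++y) {      // horizontal lines
    glVertex2f(0.0f, y * cell);
    glVertex2f(gridW * cell, y * cell);
}
glEnd();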
1 | 2D Camera in LWJGL 3 I am trying to implement a simple 2D camera in LWJGL3. The camera has an orthographic projection and can move in 2D space. This is run once at game start up GL11.glMatrixMode(GL11.GL PROJECTION) GL11.glLoadIdentity() GL11.glOrtho(0, graphics.window().width(), graphics.window().height(), 0, 1, 1) GL11.glMatrixMode(GL11.GL MODELVIEW) GL11.glLoadIdentity() This is run as part of the draw loop GL11.glMatrixMode(GL11.GL PROJECTION) GL11.glPushMatrix() GL11.glTranslatef( position.x(), position.y(), 0f) Draw stuff GL11.glPopMatrix() Everything looks fine until I move the view, after which everything seems to be closer to the top left (0, 0) than it should be. It seems to be off by a scalar of the window size the factor is greater in x than y. How have I misunderstood the OpenGL commands? |
1 | Nvidia Nsight 4.6 VS Edition. The Graphics debugger can't find glew32.dll I'm trying to debug some textures and FBOs with Nvidia Nsight 4.6 VS Edition. But when I select either "Start CUDA debugging" or "Start graphics debugging" I get an error "The program can't start because glew32.dll is missing from your computer. Try reinstalling the program to fix this problem." The application runs just fine when I'm not using Nsight. What might be my problem? My system Windows 7 x64 bit. Nsight 4.6 x64 bit. GTX 580 with latest drivers. OpenGL version 3.3. Building a Win32 application. (Tried to change the build target to x64 but that just resulted in a bunch of linking errors for glfw and glew)
1 | How can I profile the speed of my vertex and fragment shaders separately? I'd like to know how I can check to see if either my vertex or my fragment shader is a bottleneck in my rendering pipeline. I've read about using glQueryCounter with the GL TIMESTAMP target to get clock checkpoints between OpenGL commands, but these don't distinguish between different types of shaders. For example, if one frame on the GPU took 8 ms to render, can I tell that the vertex shader took 7 ms and the fragment shader took 1 ms? |
1 | OpenGL ES 2.0 Picking Individual Polygon Sprites from within VBO Say I send 10 polygon pairs (one polygon pair = one 2D sprite = one rectangle = two triangles) into an OpenGL ES 2.0 VBO. The 10 polygon pairs represent one animated 2D object consisting of 10 frames. The 10 frames, of course, can not be rendered all at the same time, but will be rendered in a particular order to make up a smooth animation. Would you have any advice on how to pick the proper polygon pair (4 vertices) for rendering from the VBO inside the Vertex Shader? Creating a separate VBO for each frame would end up with thousands of VBOs, which is not the right way of doing it. I use OpenGL ES 2.0, and VBOs for both Vertices and Indices.
1 | Hemisphere Projection I came across the following segment of code that is supposed to project an image on a hemisphere void main(void) vColor aColor vec4 pos uModelViewMatrix vertex float lenxy length(pos.xy) if(lenxy ! 0.0) float phi atan(lenxy, pos.z) pos.xy normalize(pos.xy) pos.xy is equal to (cos theta, sin theta) float r phi (PI 2.) radius is less than or equal to 1. pos.xy r same theta, different radius gl Position uProjectionMatrix pos The following are the points I need clarification on Why would r be less than or equal to 1 by dividing phi by PI 2.? Why are we multiplying pos.xy with r? And, how does normalizing pos.xy give us cos theta and sin theta? The following are the accompanying images Thanks.
1 | How should I organize my matrices in a 3D game engine? I'm working with a group of people from around the world to create a game engine (and hopefully a game with it) within the next upcoming years. My first task is to write a camera class for the engine to use in order to add cameras to the scene, with position and follow points. The problem I have is with using matrices for transformations in the class should I keep matrices separate to each class, such as having the model matrix in the model class and the camera matrix in the camera class, or have all matrices placed in one single class? I could see pros and cons for each method, but I wanted to hear some input from a more professional standpoint.
1 | OpenGL Texturing Isn't working displaying just white Basically, see the title for whatever reason, I load a texture into OpenGL, bind it, draw a quad with texture coordinates specified, and the quad remains totally white regardless of what the texture is. I have made sure that GL TEXTURE 2D is enabled. I have verified (using glGetTexImage) that the texture has the correct data in it (or, if not the correct data, data that should be visible, as the pixels have different values). My glTexImage2D call is as follows glTexImage2D(GL TEXTURE 2D, 0, pixelFormat, this width, this height, 0, pixelFormat, GL UNSIGNED BYTE, amp (pixelData 0 )) Here, pixelFormat is either GL RGBA or GL RGB (according to pixel depth), and pixelData is the array of pixels. I verified that pixelFormat, pixelData, width and height have the correct values within them. Note that pixelData is a std vector. I bind the texture as such glActiveTexture(GL TEXTURE0 this multiTexNumber) glBindTexture(GL TEXTURE 2D, this texID) For the purposes of my testing, this multiTexNumber 0 Have I done something wrong here? Is the problem elsewhere? EDIT Via very thorough checking, I have ensured that OpenGL doesn't throw any errors. The problem persists.
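One classic cause that matches these symptoms exactly: the default minification filter is mipmapped, so a texture with only mip level 0 uploaded is "incomplete", and with fixed-function texturing an incomplete texture effectively disables sampling, leaving the untextured (white) quad. A sketch of the usual fix, worth trying before anything else:

// The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR; without
// a full mipmap chain the texture is incomplete. Use non-mipmap filters
// (or call glGenerateMipmap after the upload instead).
glBindTexture(GL_TEXTURE_2D, texID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);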
1 | How do I load a texture in OpenGL where the origin of the texture (0, 0) isn't in the bottom left? When loading a texture in OpenGL, how do I specify the origin of the data I am loading? For example, how would I load a Targa that has its origin at the top left instead of the bottom left of the image?
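GL itself has no origin flag for uploads, so the usual options are flipping the pixel rows on the CPU before glTexImage2D, or leaving the data as-is and flipping the V texture coordinate instead. A sketch of the row flip (assumes tightly packed rows; names are illustrative):

#include <cstring>
#include <vector>

void flipRows(unsigned char* pixels, int width, int height, int bytesPerPixel)
{
    const int stride = width * bytesPerPixel;       // bytes per row
    std::vector<unsigned char> tmp(stride);
    for (int y = 0; y < height / 2; ++y) {
        unsigned char* a = pixels + y * stride;                 // top row
        unsigned char* b = pixels + (height - 1 - y) * stride;  // bottom row
        std::memcpy(tmp.data(), a, stride);                     // swap rows
        std::memcpy(a, b, stride);
        std::memcpy(b, tmp.data(), stride);
    }
}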
1 | Visitor pattern vs inheritance for rendering I have a game engine that currently uses inheritance to provide a generic interface to do rendering class renderable public void render() Each class calls the gl functions itself. This makes the code hard to optimize and hard to implement something like setting the quality of rendering class sphere public renderable public void render() glDrawElements(...) I was thinking about implementing a system where I would create a Renderer class that would render my objects class sphere void render( renderer r ) r gt renderme( this ) class renderer renderme( sphere amp sphere ) magically get render resources here magically render a sphere here My main problem is where should I store the VBOs and where should I create them when using this method? Should I even use this approach or stick to the current one, perhaps something else? PS I already asked this question on SO but got no proper answers.
1 | How to put OpenGL in a state for drawing blended, colored, nontextured polys? Using OpenGL1.1 (sadly) I'm trying to draw a cube, which is colored and alpha blended. It is instead showing up as opaque black. Even without including alpha in the color it still shows up as opaque black so I don't believe blending is the problem. Override public void doRender(Entity entity, double d0, double d1, double d2, float f, float f1) assert(entity instanceof EntityExplosion) EntityExplosion explosion (EntityExplosion) entity glPushMatrix() glPushAttrib(GL BLEND) glPushAttrib(GL COLOR MATERIAL) glDepthMask(false) glColorMaterial (GL FRONT AND BACK, GL AMBIENT AND DIFFUSE) TODO fix glEnable(GL COLOR MATERIAL) i dunno if this even does anything glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) glColor3b((byte)0, (byte)255, (byte)255) TODO fix glTranslated(d0, d1, d2) trans whole thing for(NukeParticle p explosion.particles) glPushMatrix() glTranslated(p.x, p.y, p.z) glCallList(callCube) glPopMatrix() glPopAttrib() glPopAttrib() glPopMatrix() This is being done during a Minecraft Entity Renderer, meaning that the current state of GL is for drawing shaded, textured polys. What attribs or whatever do I change so that I can draw the way I want to? |
1 | nvidia SSAO visible errors I am trying to add an SSAO effect to a visualization application but there are errors in the resulting image. I am using code from https github.com nvpro samples gl ssao . Errors occur as rapidly changing lighting near the edges of the screen. The only change to the code was to remove basic shading and pass only white from the scene.frag shader, so the error could be seen more clearly. I also added OBJ input to import the Sponza scene, but the error is visible with the included sample scene as well if the camera is moved close to an object. As is, this implementation only allows viewing objects at a relative distance moving closer to objects brings up these errors. What causes this error and is there a fix?
1 | Taking cube from camera space to clip space, error in my math? I'm watching Ken Joy's Computer Graphics lectures on YouTube. One thing I'm confused about is that after he takes the cube from camera space to clip space, from my calculations the cube doesn't look like that. I expected the cube to look like the pink parallelogram in my picture if we assume the Z of the front face of the cube to be 4 3 and the back face to be 2, then the Ws come out to be 4 3 and 2 respectively. So can someone explain how, after multiplying by the viewing matrix, the cube comes out to look like how Ken has it? Ken's view matrix After view matrix has been applied What I think the side of the cube should look like (the pink parallelogram) after the view matrix has been applied my reasoning is, after the perspective divide by W, the blue and green vectors should get truncated to create that pink parallelogram. So I'm struggling to understand this. Thanks in advance.
1 | Assimp renders a partial amount of vertices I'm building a 3D game, and I'm trying to load some assets with the nice Assimp library. The model should look like the one in the first picture, but instead it takes the form of some kind of avant garde sculpture, as the second picture shows. At least I'm proud that I get to see something, but hey, I can do better. It's not a problem with shaders (it simply isn't, I pass no normals and no textures, just the vertex coordinate, since I set a static colour). Here is my code MODEL LOADING (only vertex coordinates) void Mesh open(const std string file) Assimp Importer importer const aiScene scene importer.ReadFile(file, aiProcess Triangulate) if(!scene) throw STREAM("Could open mesh '" lt lt file lt lt "'!") std vector lt float gt g vp count 0 for(uint m i 0 m i lt scene gt mNumMeshes m i ) const aiMesh mesh scene gt mMeshes 0 g vp.reserve(3 mesh gt mNumVertices) count mesh gt mNumVertices for(uint v i 0 v i lt mesh gt mNumVertices v i ) if(mesh gt HasPositions()) const aiVector3D vp amp (mesh gt mVertices v i ) g vp.push back(vp gt x) g vp.push back(vp gt y) g vp.push back(vp gt z) glGenBuffers(1, amp vbo) glBindBuffer(GL ARRAY BUFFER, vbo) glBufferData(GL ARRAY BUFFER, count 3 sizeof(float), amp g vp 0 , GL STATIC DRAW) glEnableVertexAttribArray(0) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 0, (GLubyte )NULL) SCENE RENDERING void renderScene() glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) program.use() program.setUniform("transformMat", modelMat) glBindBuffer(GL ARRAY BUFFER, m.vbo) glDrawArrays(GL TRIANGLES, 0, m.count) SDL GL SwapWindow(window gt mWin) Depth testing is enabled, and I'm using OpenGL 4.0 with v400 shaders too. The error will possibly be stupid, but I can't find it, and my head is all boiling at this very moment. Thanks for your help!
1 | OpenGL Quad is being separated like two triangles after blending I wanted to add transparency for my UI elements, so I enabled blending by adding glEnable(GL BLEND) glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) before the renderer and glDisable(GL BLEND) after the renderer. So I tried to render a simple rectangle (with glDrawArrays() because I'm using GLSL shaders). Before I enabled blending my rectangle (1 quad) looked like this And now, after blending, it looks like this (like two triangles) Do I need some extra settings for my blending or what? Edit I noticed that if I remove the anti aliasing code glHint(GL LINE SMOOTH HINT, GL NICEST) glHint(GL POINT SMOOTH HINT, GL NICEST) glHint(GL POLYGON SMOOTH HINT, GL NICEST) glHint(GL PERSPECTIVE CORRECTION HINT, GL NICEST) glEnable(GL LINE SMOOTH) glEnable(GL POINT SMOOTH) glEnable(GL POLYGON SMOOTH) the problem disappears, but I want to keep anti aliasing in my game. Any ideas?
1 | How many textures can usually I bind at once? I'm developing a game engine, and it's only going to work on modern (Shader model 4 ) hardware. I figure that, by the time I'm done with it, that won't be such an unreasonable requirement. My question is how many textures can I bind at once on a modern graphics card? 16 would be sufficient. Can I expect most modern graphics cards to support that number of textures? My GTX 460 appears to support 32, but I have no idea if that's representative of most modern video cards. |
1 | Random lines drawn on screen while all vertices are correct I'm writing a 2D program in which a monocycle follows a Catmull Rom spline. My problem is when I write the circle, the drawing goes crazy. There is one line on the screenshot (which seems to be 3 when watching it), that rotates with the circle and blinks around it. It doesn't happen always, it comes randomly, sometimes it's fine then it will start this after a few seconds. I have no clue what could cause it. My calculations are correct, there aren't faulty values or NaNs infs in the array, and if I put printf right before buffering the vbo, the lines don't appear, so I can't even debug it. And sometimes there's a small blinking in the green part too. Here's some part of my code. Creating the curve void Create() Curve glGenVertexArrays(1, amp vaoCurve) glBindVertexArray(vaoCurve) glGenBuffers(1, amp vboCurve) Generate 1 vertex buffer object glBindBuffer(GL ARRAY BUFFER, vboCurve) Enable the vertex attribute arrays glEnableVertexAttribArray(0) attribute array 0 Map attribute array 0 to the vertex data of the interleaved vbo glVertexAttribPointer(0, 2, GL FLOAT, GL FALSE, 2 sizeof(float), NULL) attribute array, components attribute, component type, normalize?, stride, offset void Create() glGenVertexArrays(1, amp vaoCyclist) glBindVertexArray(vaoCyclist) Curve glGenBuffers(1, amp vboCyclist) Generate 1 vertex buffer object glBindBuffer(GL ARRAY BUFFER, vboCyclist) glGenBuffers(1, amp vboHuman) Generate 1 vertex buffer object Enable the vertex attribute arrays glEnableVertexAttribArray(0) attribute array 0 Map attribute array 0 to the vertex data of the interleaved vbo glVertexAttribPointer(0, 2, GL FLOAT, GL FALSE, 2 sizeof(float), NULL) attribute array, components attribute, component type, normalize?, stride, offset Calculating humanData.clear() circleData.clear() for (int i 0 i lt 180 i ) cordX cosf((float)M PI i 2 180.0f) cordY sinf((float)M PI i 2 180.0f) if (i 0) circleData.push back(cordX) circleData.push back(cordY) else if (i 179) circleData.push back(cosf((float)M PI i 2 180.0f)) circleData.push back(sinf((float)M PI i 2 180.0f)) circleData.push back(cosf((float)M PI i 2 180.0f)) circleData.push back(sinf((float)M PI i 2 180.0f)) circleData.push back(cosf((float)M PI 0 2 180.0f)) circleData.push back(sinf((float)M PI 0 2 180.0f)) else circleData.push back(cordX) circleData.push back(cordY) circleData.push back(cordX) circleData.push back(cordY) Draw mat4 MVPTransform MrotateTranslate() camera.V() camera.P() MVPTransform.SetUniform(gpuProgram.getId(), "MVP") int colorLocation glGetUniformLocation(gpuProgram.getId(), "color") for (int i 0 i lt circleData.size() 1 i ) printf(" f n", circleData i ) glBindVertexArray(vaoCyclist) glBindBuffer(GL ARRAY BUFFER, vboCyclist) glBufferData(GL ARRAY BUFFER, circleData.size() sizeof(float), amp circleData 0 , GL DYNAMIC DRAW) if (colorLocation gt 0) glUniform4f(colorLocation, 0.72f, 0.16f, 0.0f, 1.0f) glDrawArrays(GL LINES, 0, circleData.size()) Cyclist attributes class Cyclist private unsigned int vaoCyclist, vboCyclist unsigned int vboHuman float r 1 float v 0 vec2 wTranslate vec2( 10.0f, 0.0f) vec2 position vec2(0, 0) float sina, fi 0, ds float cordX, cordY vector lt float gt circleData vector lt float gt humanData float dir 1.0f
1 | Optimizing black and white matrix block drawing. Disclaimer: I am uncertain whether this is the best place to post this question, so please advise me on how best to find the answer if I am doing something wrong. I am asking because I am trying to optimize a drawing function of a PC game I am programming. Preface: There is a matrix of blocks that can be either on (black) or off (white). There will be different colors, but for the sake of simplicity I am presenting the problem as black and white only, so please imagine this matrix forming some random pattern; a QR code would be a good example. Problem: Instead of drawing each black block individually, I want to draw all black blocks using as few calls to my draw-rectangle function as possible. Question: Is there an existing technique I can refer to that makes my drawing process as efficient as possible, or how would I logically go about tackling this conundrum? Edit: I tried to produce some example code for us to play with:

    std::vector<std::vector<bool>> matrix;
    int width = 100;
    int height = 100;
    matrix.resize(height);
    for (int h = 0; h < height; h++) {
        matrix[h].resize(width);
        for (int w = 0; w < width; w++) {
            matrix[h][w] = rand() % 2; // random 1/0
        }
    }

    void draw() {
        // ... draw matrix using a minimum amount of calls to draw_rectangle
    }

    void draw_rectangle(int x, int y, int width, int height) {
        // ... draws a set of black blocks
    }
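One approach that maps well to this problem is the greedy "rectangle merging" used for meshing voxel chunks: find the first undrawn black cell, grow a run rightward, extend that run downward while every row below still matches, emit one rectangle, and mark those cells done. It is greedy rather than provably minimal, but in practice it cuts the call count dramatically. A sketch against the matrix layout from the post:

    std::vector<std::vector<bool>> visited(height, std::vector<bool>(width, false));
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (!matrix[y][x] || visited[y][x]) continue;
            int w = 1;                          // grow the run to the right
            while (x + w < width && matrix[y][x + w] && !visited[y][x + w]) ++w;
            auto rowMatches = [&](int row) {    // does a whole row continue the rectangle?
                for (int i = 0; i < w; ++i)
                    if (!matrix[row][x + i] || visited[row][x + i]) return false;
                return true;
            };
            int h = 1;                          // extend the run downward
            while (y + h < height && rowMatches(y + h)) ++h;
            for (int dy = 0; dy < h; ++dy)      // mark the rectangle as handled
                for (int dx = 0; dx < w; ++dx)
                    visited[y + dy][x + dx] = true;
            draw_rectangle(x, y, w, h);         // one call per merged rectangle
        }
    }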
1 | How do I implement GPU-based dynamic geometry LOD in OpenGL? I'm trying to implement LOD to boost my game's performance. I found a very nice tutorial. The basic concept, as I understand it: get the distance from the camera to the object, check for the right LOD level, and then render the object with the "right amount of instances". How do I implement that? The provided example code is a mystery to me... Some questions: Is this a good method for implementing LOD? Can someone please explain in detail how I have to implement it, with the queries and so on? I'm rendering all of my objects with GL11.glDrawElements(GL11.GL_TRIANGLES, model.getRawModel().getVertexCount(), GL11.GL_UNSIGNED_INT, 0). The example code uses GL_POINTS. Can I also implement it with GL_TRIANGLES?
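Before reaching for fully GPU-driven selection, a plain CPU-side version already captures most of the win: bucket instances by camera distance each frame and issue one instanced draw per LOD mesh. And yes, instanced drawing works with GL_TRIANGLES just as well as with GL_POINTS. A C++-style sketch (objects, camPos, lodMesh, and uploadInstanceMatrices are hypothetical stand-ins for the engine's own types, and the distance thresholds are arbitrary):

    // Pick an LOD bucket from camera distance.
    int lodFor(float dist) {
        if (dist < 50.0f)  return 0;  // full detail
        if (dist < 150.0f) return 1;  // medium
        return 2;                     // lowest
    }

    std::vector<glm::mat4> buckets[3];
    for (const auto& obj : objects)
        buckets[lodFor(glm::distance(camPos, obj.position))].push_back(obj.transform);

    for (int lod = 0; lod < 3; ++lod) {
        uploadInstanceMatrices(buckets[lod]);       // hypothetical helper: fills the instance VBO
        glBindVertexArray(lodMesh[lod].vao);
        glDrawElementsInstanced(GL_TRIANGLES, lodMesh[lod].indexCount,
                                GL_UNSIGNED_INT, nullptr, (GLsizei)buckets[lod].size());
    }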
1 | Issue with compute shader Macbeth chart. I'm trying to generate a Macbeth chart using a compute shader, but there seems to be an issue with the output image. Here is the GLSL code:

    #version 430 core
    layout (local_size_x = 32, local_size_y = 32) in;
    layout (rgba32f, binding = 0) uniform image2D data;
    uniform int factor = 2;
    const float colorMatchingFunc[3][72] = { /* ... */ };
    const float spectralData[24][36] = { /* ... */ };
    void spectrumToXYZ(uint colorIndex, out vec3 XYZ) { /* ... */ }
    const mat3 XYZ_TO_RGB = /* ... */;
    const float gamma = 1;
    void main(void) {
        uvec2 dim = gl_NumWorkGroups.xy / factor;
        uvec2 ij = gl_WorkGroupID.xy / factor;
        uint i = ij.x + dim.x * ij.y;
        vec3 xyz;
        spectrumToXYZ(i, xyz);
        vec3 rgb = max(vec3(0), XYZ_TO_RGB * xyz);
        rgb = pow(rgb, vec3(1 / gamma));
        imageStore(data, ivec2(gl_GlobalInvocationID.xy), vec4(rgb, 1));
    }

And here is the C++ setup:

    static int factor = 4;
    static int patchSize = 32;
    int width = 192 * factor;
    int height = 128 * factor;
    glActiveTexture(TEXTURE(_id));
    glBindTexture(GL_TEXTURE_2D, buffer);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, width, height);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glDispatchCompute(width / patchSize, height / patchSize, 1);

This setup and GLSL code should give me a 6x4 Macbeth chart. I'm assuming that, regardless of which invocation/workgroup I end up in, I should always generate the correct color for the patch I'm currently on, but I seem to get some aliasing, especially on the bottom row, and it's very noticeable on all rows at lower resolutions. Low-resolution image:
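If the posted code is complete, one mismatch stands out: the shader declares uniform int factor = 2 while the C++ side dispatches with factor = 4, and no glUniform1i call appears to keep them in sync, so the patch indexing in the shader can disagree with the dispatch layout. A sketch of the fix, assuming program is the compute program object:

    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "factor"), factor); // keep shader and dispatch in sync
    glDispatchCompute(width / patchSize, height / patchSize, 1);
    // Make the image writes visible before the texture is sampled for display:
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

A more robust alternative is to derive the patch index from pixel coordinates instead of workgroup IDs, e.g. gl_GlobalInvocationID.xy divided by the patch size in pixels, so the index can never drift from the dispatch size.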
1 | OpenGL cubemap binding. I'm experiencing strange behaviour with textures inside my shaders. Basically, I need to bind two cubemap textures inside my shader, but only one actually gets bound. I've tried swapping the two textures while keeping the samplers untouched, and the texture I could previously see is now gone, so I think the problem is related to the samplers inside my shader. This is how I bind my two cubemaps to the shader:

    int firstTextureLocation = glGetUniformLocation(programID, "ConvolutionSrc"); // returns 5
    int secondTextureLocation = glGetUniformLocation(programID, "LastResult");    // returns 9
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, firstTextureID);
    glUniform1i(firstTextureLocation, 0);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_CUBE_MAP, secondTextureID);
    glUniform1i(secondTextureLocation, 1);

And this is a portion of my shader (the parts where the cubemaps are used):

    uniform samplerCube ConvolutionSrc;
    uniform samplerCube LastResult;
    // ...
    vec3 Sample(vec3 R) {
        // ...
        vec3 hdrPixel = rescaleHDR(textureLod(ConvolutionSrc, L, lod).rgb);
        // ...
    }
    void main() {
        // ...
        vec3 lastResult = textureLod(LastResult, R, ConvolutionMip).rgb;
        sampledColor.rgb = mix(lastResult.xyz, importanceSampled.xyz, 1.0f / (ConvolutionSamplesOffset));
        // ...
    }

Basically, only the texture at uniform location 5 of my shader gets bound.
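Two things worth ruling out, since the symptom matches both: glUniform1i writes to whichever program is currently bound, so the program must be active when the sampler units are set, and a sampler that ends up unused can report stale values. A quick verification sketch that sets the units and reads them back:

    glUseProgram(programID); // glUniform* affects only the currently bound program
    glUniform1i(firstTextureLocation, 0);
    glUniform1i(secondTextureLocation, 1);

    GLint unit0 = -1, unit1 = -1;
    glGetUniformiv(programID, firstTextureLocation, &unit0);
    glGetUniformiv(programID, secondTextureLocation, &unit1);
    printf("ConvolutionSrc -> unit %d, LastResult -> unit %d\n", unit0, unit1); // expect 0 and 1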
1 | GLSL 330 Core Bug? Uniform variable will not be set to a value if it is named a certain way. In the code below I have a uniform variable named "vw_matrix" used in the calculation of gl_Position. When I run my program, a rectangle gets drawn to the screen.

    #version 330 core
    layout (location = 0) in vec4 position;
    uniform mat4 pr_matrix;
    uniform mat4 vw_matrix = mat4(1.0); // Rectangle!
    uniform mat4 m1_matrix = mat4(1.0); // No rectangle!
    uniform mat4 mmmmm = mat4(1.0);     // No rectangle!
    uniform mat4 test = mat4(1.0);      // Rectangle!
    uniform mat4 aaa = mat4(1.0);       // No rectangle!

    void main() {
        gl_Position = pr_matrix * position * vw_matrix;
    }

If I instead put "m1_matrix" in the gl_Position line in place of "vw_matrix", the rectangle no longer appears. I'm not getting any GLSL compile errors.

    void main() {
        gl_Position = pr_matrix * position * m1_matrix;
    }

There are no calls to glUniformMatrix4fv changing the value of "m1_matrix". It has the same value as "vw_matrix", so for some reason it must not be initializing. I did some experimenting: a uniform variable won't be set if it starts with a certain letter, and "m" is one of those letters. Anything that starts with a "v" or "t" works fine. I can't make this up! Is there anything I should double-check or post, or is this just a rather bizarre bug? The full code is available at https://github.com/nduplessis11/Freeze. I'm running on Linux Mint 17.1 and my GL_VERSION is "OpenGL 4.4 13374 Compatibility Profile Context 15.20.2013". My video driver is AMD's fglrx, version 15.200. UPDATE: I found a bug in my main program, in the function that retrieves the location of a uniform:

    GLint Shader::getUniformLocation(const GLchar* name) {
        // This is a very slow operation, optimize later.
        glGetUniformLocation(m_ShaderID, name);
        return 0;
    }

This was causing 0 to be passed as the location for my glUniform* calls. I'm not sure how, but fixing this function stopped the unpredictable behavior when initializing uniform variables from within the shader program. I'm still going to take Trevor's advice and set them from within my main program.
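For anyone hitting the same symptom: the function above discards the lookup result and always returns 0, so every glUniform* call was writing to whichever uniform the linker happened to place at location 0. That explains why the behaviour seemed to depend on the name rather than on any GLSL rule. A corrected version that also addresses the "very slow operation" comment by caching lookups (the cache member is an addition, not part of the original class):

    GLint Shader::getUniformLocation(const GLchar* name) {
        auto it = m_UniformCache.find(name);   // std::unordered_map<std::string, GLint> member
        if (it != m_UniformCache.end())
            return it->second;
        GLint location = glGetUniformLocation(m_ShaderID, name);
        m_UniformCache[name] = location;       // -1 is cached too: "not found / optimized out"
        return location;
    }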
1 | How can I make terrain texturing look detailed both close up and far away? I'm attempting to make my game have very detailed textures and, in general, look pretty. However, I'm having some issues with that. Let's take a look at a rock texture close up: Picture. As you can see, this looks perfectly adequate for a 3D game up close. However, I'm struggling to get it working at a distance: Picture. At a distance, the texturing looks obviously uniform, very tiled and very bland: Picture. How can I solve this issue and make my textures look more detailed at all distances? I obviously have mipmapping enabled, but I don't understand why the texture looks so poorly detailed or what I can do to improve it.
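One common remedy is a detail texture: a small, high-frequency greyscale map tiled at a much higher rate than the base albedo, blended in so close views gain extra grain while distant views fall back to the base color; sampling the base at a second, different scale and blending the two also breaks up the visible tiling. A minimal fragment-shader sketch (the sampler names and the 16.0 tiling rate are illustrative, not from the post):

    uniform sampler2D baseTexture;   // rock albedo
    uniform sampler2D detailTexture; // tiling greyscale grain, 0.5 = neutral
    in vec2 uv;
    out vec4 fragColor;

    void main() {
        vec3 base    = texture(baseTexture, uv).rgb;
        float detail = texture(detailTexture, uv * 16.0).r; // much higher tiling rate
        // Modulate around 1.0 so average brightness is preserved:
        fragColor = vec4(base * (detail * 2.0), 1.0);
    }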
1 | Problem with 2D rotation in OpenGL. I have a function to perform sprite rotation:

    void Sprite::rotateSprite(float angle) {
        // Making an array of vertices (6 for 2 triangles)
        Vector2<gamePos> halfDims(_rect.w / 2, _rect.h / 2);
        Vector2<gamePos> bl(-halfDims.x, -halfDims.y);
        Vector2<gamePos> tl(-halfDims.x, halfDims.y);
        Vector2<gamePos> br(halfDims.x, -halfDims.y);
        Vector2<gamePos> tr(halfDims.x, halfDims.y);
        bl = rotatePoint(bl, angle) + halfDims;
        br = rotatePoint(br, angle) + halfDims;
        tl = rotatePoint(tl, angle) + halfDims;
        tr = rotatePoint(tr, angle) + halfDims;
        // 1st triangle
        _dataPointer.vertices[0].setPosition(_rect.x + tr.x, _rect.y + tr.y); // Top right
        _dataPointer.vertices[1].setPosition(_rect.x + tl.x, _rect.y + tl.y); // Top left
        _dataPointer.vertices[2].setPosition(_rect.x + bl.x, _rect.y + bl.y); // Bottom left
        // 2nd triangle
        _dataPointer.vertices[3].setPosition(_rect.x + bl.x, _rect.y + bl.y); // Bottom left
        _dataPointer.vertices[4].setPosition(_rect.x + br.x, _rect.y + br.y); // Bottom right
        _dataPointer.vertices[5].setPosition(_rect.x + tr.x, _rect.y + tr.y); // Top right
    }

    Vector2<gamePos> Sprite::rotatePoint(Vector2<gamePos> pos, float& angle) {
        Vector2<gamePos> newv;
        newv.x = pos.x * cos(angle) - pos.y * sin(angle);
        newv.y = pos.y * cos(angle) + pos.x * sin(angle);
        return newv;
    }

And the result is: (screenshot). Am I doing something wrong? It also happens when I use a small angle (even if I pass angle = 1). Thanks for help.
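Assuming the math above is meant to be the standard 2D rotation, one thing to check first: cos() and sin() take radians, so if angle arrives in degrees (and "angle = 1" reads like 1 degree), each call rotates by roughly 57 degrees per unit instead. A sketch of the conversion as it would sit inside rotatePoint:

    // If `angle` arrives in degrees, convert before calling the trig functions.
    // (M_PI comes from <cmath>; on MSVC it needs _USE_MATH_DEFINES.)
    float radians = angle * static_cast<float>(M_PI) / 180.0f;
    newv.x = pos.x * std::cos(radians) - pos.y * std::sin(radians);
    newv.y = pos.x * std::sin(radians) + pos.y * std::cos(radians);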
1 | SDL, SFML, OpenGL, or just move to Java? I recently started a new project, and I'm wondering if I should change the technology now, before it's too late. I'm using SDL with C++; I have around 6 classes, and the game is going alright, but I've got to the point where I have to rotate my sprite to make it point at the coordinates of the mouse. It's a 2D top-down game, so if I pre-cached the images, I'd have to load 360 of them, which is bad for memory. If I used SDL_gfx, I could do the rotation in real time, but I've heard it would drop my frame rate drastically, and I don't want my game to be slow. I also read that I could use OpenGL, which is faster at these things. The problem is how much I would have to change my code if I moved to OpenGL with SDL. So I've considered moving to a completely different language: Java, which is much simpler and would allow me to focus on the game and handle networking much more easily than C++. I am very confused and would like to hear your opinion. Thank you!
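On the specific rotation worry: with OpenGL under SDL, rotating a textured quad is essentially free, since it is just a transform applied before drawing, so neither a 360-image cache nor a frame-rate hit is necessary. A fragment-style sketch in the fixed-function style that fits an SDL 1.2-era codebase (drawTexturedQuad is a hypothetical helper that draws a quad centred on the origin):

    // Angle from the sprite to the mouse, converted to degrees for glRotatef.
    float dx = mouseX - spriteX;
    float dy = mouseY - spriteY;
    float angleDeg = std::atan2(dy, dx) * 180.0f / static_cast<float>(M_PI);

    glPushMatrix();
    glTranslatef(spriteX, spriteY, 0.0f);  // move to the sprite's position
    glRotatef(angleDeg, 0.0f, 0.0f, 1.0f); // spin around the Z axis
    drawTexturedQuad(spriteTexture);       // hypothetical helper
    glPopMatrix();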