1 | How is terrain editing implemented like a sculpt tool? Some creative sandbox games have a terrain editing feature. When thinking of a terrain, I'm thinking of a grid of vertices. The elevation of each vertex is then determined by a heightmap (image), and the user can edit these elevations, so you get mountains and valleys. Most of the time, terrain editing is 'just' changing the height component of the positions of vertices in an existing grid with a fixed number of vertices. Now I see some games that make it possible to do the following. This seems way more complex than just a grid and a heightmap. I have thought a lot about this, but what is the most common approach to implementing it? I think it's difficult to find a good data structure for the vertices. I'm pretty sure that, during editing, vertices are also created and added to the terrain, which also seems complex. Every attempt to create indices seems like a hell of a job to me. (Think of rendering this with triangle strips.) |
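For the simple grid-plus-heightmap case described above, a sculpt brush can be sketched as a falloff kernel applied to the height samples; the grid topology (and thus the index buffer) never changes, only the heights do. This is a minimal sketch under my own assumptions (names, cosine falloff choice), not code from any particular engine:

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: apply a smooth "raise" brush to a heightmap grid.
// The grid keeps a fixed topology; only height values change, so an index
// buffer built once (e.g. for triangle strips) never needs rebuilding.
struct Heightmap {
    int width, height;
    std::vector<float> h; // row-major, width*height samples

    Heightmap(int w, int ht) : width(w), height(ht), h(w * ht, 0.0f) {}

    float at(int x, int y) const { return h[y * width + x]; }

    // Raise terrain around (cx, cy) with a cosine falloff brush.
    void applyBrush(float cx, float cy, float radius, float strength) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                float dx = x - cx, dy = y - cy;
                float d = std::sqrt(dx * dx + dy * dy);
                if (d >= radius) continue;
                // Smooth falloff: 1 at the brush center, 0 at its edge.
                float falloff = 0.5f * (1.0f + std::cos(3.14159265f * d / radius));
                h[y * width + x] += strength * falloff;
            }
        }
    }
};
```

Games that allow overhangs or caves (the "way more complex" case) typically leave heightmaps behind and store a voxel/density field instead, extracting the mesh with something like marching cubes.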
1 | Are matrices calculated on the GPU or on the CPU? Would built-in matrix functions be faster than my custom ones? If I add a math library (for example, one containing a Matrix class) and use it in my program drawing with OpenGL, will my program run slower than if I used the standard OpenGL functions for matrix calculations? Does the same hold true for DirectX? |
1 | Accessing a struct in GLUT I have to write a game for a uni project using OpenGL/GLUT. I am just going to use a generic example, as I'm only looking for a solution to one specific problem. Using GLUT, I have to call 'glutDisplayFunc(display)' in the main function. I am also using glutIdleFunc(update). The problem is that, as described here, the 'display' function cannot take any arguments. I have some structs outside my main that I wish to initialize in main and have accessible from display and update. Hopefully some code will explain my problem better: #include <GL/glut.h> struct Player { GLfloat x; GLfloat y; GLfloat z; int score; ... }; /* function prototypes (showing how I would normally pass the struct) */ void InitPlayer(Player &player); void DrawPlayer(Player &player); void UpdatePlayer(Player &player); void main(int argc, char **argv) { Player player; InitPlayer(player); /* ... glut/openGL initialisation code left out ... */ glutDisplayFunc(display); glutReshapeFunc(reshape); glutIdleFunc(update); glutMainLoop(); } void display() { DrawPlayer(player); } void update() { UpdatePlayer(player); glutPostRedisplay(); } The above code doesn't work, but I hope it demonstrates what I would like to do: access the struct 'player' in 'display' and 'update', having the same values stored globally. How would I go about it? |
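Because GLUT's callbacks are plain `void(*)()` function pointers with no user-data parameter, the conventional workaround is exactly what the question guesses at: keep the shared state at file scope. A sketch of that pattern (GLUT types replaced by plain floats here so the snippet compiles without GL headers; `g_player` is my naming):

```cpp
// Sketch of the usual GLUT workaround: glutDisplayFunc/glutIdleFunc take
// void(*)() callbacks with no user-data pointer, so shared state lives at
// file scope, visible to every callback.
struct Player {
    float x, y, z;
    int score;
};

static Player g_player; // file-scope: reachable from display() and update()

void InitPlayer(Player& p) { p.x = p.y = p.z = 0.0f; p.score = 0; }
void UpdatePlayer(Player& p) { p.score += 1; }

// These match GLUT's void(*)() callback signature.
void display() { /* DrawPlayer(g_player); */ }
void update()  { UpdatePlayer(g_player); /* glutPostRedisplay(); */ }
```

`InitPlayer(g_player)` is then called from `main` before `glutMainLoop()`; both callbacks see the same object. (GLFW and freeglut's `glutSetWindowData` offer cleaner per-window user pointers if globals feel too crude.)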
1 | Combining rotation and scaling around a pivot with translation into a matrix In short: I need to combine rotation (in the form of a quaternion) and scaling around a pivot point, along with translation, into a transformation matrix. The long version: I am trying to implement a proprietary model format. This format includes animations. Part of that animation involves generating a matrix which combines rotation (in the form of a quaternion) and scaling around a pivot point, with translation, into a transformation matrix. I am using OpenGL and the OpenGL math library (GLM). Someone else has already implemented parts of the proprietary model format using DirectX. The part in question was implemented in DirectX like this: D3DXMatrixTransformation(&BaseMatrix, &ModelBaseData->PivotPoint, NULL, &ScalingVector, &ModelBaseData->PivotPoint, reinterpret_cast<D3DXQUATERNION*>(&RotationVector), &TranslationVector); I found the WINE implementation of this function and attempted to duplicate it like this: mat4 m1, m2, m3, m4, m5, m6, m7, p1, p2, p3, p4, p5; m5 = glm::translate(0, 0, 0); m7 = glm::translate(pivot + translationVector); m1 = glm::translate(-pivot); m3 = glm::scale(scalingVector); m6 = glm::mat4_cast(rotationQuaternion); // convert quaternion to a matrix; m2 and m4 are identity p1 = m1 * m2; p2 = p1 * m3; // apply scaling first p3 = p2 * m4; p4 = p3 * m5; p5 = p4 * m6; mat4 result = p5 * m7; (I realize glm::translate(0, 0, 0) is the identity matrix.) Unfortunately, neither the scaling nor the rotation seems to work correctly like this. So I took another approach, based on this ordering of translation, rotation and scaling: mat4 result = glm::scale(scalingVector) * glm::translate(pivot) * glm::mat4_cast(rotationQuaternion) * glm::translate(translationVector); However, the scaling does not appear to be around the pivot point, and I'm not entirely sure about the rotation. Does anyone have any tips on how to accomplish this? |
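The usual column-vector ordering for "transform about a pivot" is: move the pivot to the origin, scale/rotate there, then move back and translate, i.e. M = T(pivot + t) * R * S * T(-pivot). A minimal self-contained sketch of that ordering (tiny row-major mat4 instead of GLM, rotation left as identity since the point is the composition order, not the quaternion conversion):

```cpp
#include <array>

// Column-vector convention sketch: p' = T(pivot + t) * R * S * T(-pivot) * p.
// Rotation omitted (identity) to keep the focus on the ordering.
using Mat4 = std::array<float, 16>; // row-major storage
using Vec3 = std::array<float, 3>;

Mat4 identity() {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}
Mat4 translate(Vec3 v) {
    Mat4 m = identity();
    m[3] = v[0]; m[7] = v[1]; m[11] = v[2];
    return m;
}
Mat4 scale(Vec3 s) {
    Mat4 m = identity();
    m[0] = s[0]; m[5] = s[1]; m[10] = s[2];
    return m;
}
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}
Vec3 apply(const Mat4& m, Vec3 p) {
    return { m[0]*p[0] + m[1]*p[1] + m[2]*p[2]  + m[3],
             m[4]*p[0] + m[5]*p[1] + m[6]*p[2]  + m[7],
             m[8]*p[0] + m[9]*p[1] + m[10]*p[2] + m[11] };
}
// Scale about `pivot` (rotation would slot in between), then translate by `t`.
Mat4 pivotTransform(Vec3 pivot, Vec3 s, Vec3 t) {
    Vec3 pt = { pivot[0] + t[0], pivot[1] + t[1], pivot[2] + t[2] };
    Vec3 np = { -pivot[0], -pivot[1], -pivot[2] };
    return mul(translate(pt), mul(scale(s), translate(np)));
}
```

The sanity check is that the pivot itself must map to pivot + translation no matter what the scale or rotation is; in GLM terms the equivalent would be `glm::translate(pivot + t) * glm::mat4_cast(q) * glm::scale(s) * glm::translate(-pivot)`.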
1 | Additive blending problems I'm trying to get the blend of two images to work, without luck. I have a render target on which I render an object; then I want to render the same object again in the same position but under a different lighting condition. My goal is to add the results of these two renders, so I tried to do something like: glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ONE); ... glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // render on render target A // change light condition, do not clear // now render on render target A again with the new light condition glDisable(GL_BLEND); // display render target A I am surely missing something, because not only does just the first image show, but the object is transparent. I am extremely sorry if the question is stupid, but I'm very new to such things. Thank you. Edit: Following the answer of Sam Johnston I've edited my code, but although I solved the transparency, the results are not added together (to make sure of this I render once with a completely red light and once with a green light); I just get the first render as the result: glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ZERO); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // render on render target A // change light condition glBlendFunc(GL_ONE, GL_ONE); // do not clear, now render on render target A again with the new light condition glDisable(GL_BLEND); // display render target A Have I misunderstood the answer? Juan |
1 | Indirect indexing (UV coords read from a texture) In the vertex shader, I need to make a texture fetch where the texture coordinate itself is read from some other texture: vec2 uv = texture(someTexture, coords).xy; vec4 val = texture(otherTexture, uv).xyzw; As far as I know, the second sample has undefined results, because the value of uv is outside of uniform flow control. Is there any way to do what I want efficiently (i.e., without copying back the contents of the texture and uploading it as a uniform to the shader, or something similar)? |
1 | OpenGL textures in bitmap mode For reasons detailed here, I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so: glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0); The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised: 1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank. 2) Next, I write 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect this to produce 8-px-wide bars with 24-px-wide spaces between them. Instead, it produces a white 1-px-wide bar. 3) 0x55555555 = 0b01010101010101010101010101010101, therefore writing this value ought to create 1-px-wide vertical stripes with 1-px spacing. However, it creates a solid gray color. 4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation. I have reached the conclusion that, even in GL_BITMAP mode, the texturing unit is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion.
Questions: 1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturing unit to treat each byte as 8 elements, as the spec suggests? 2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am, however, forcing a 3.0 context.) 3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch. |
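A variant of option 3 that avoids shader gymnastics is to unpack the 1-bpp buffer on the CPU and upload one byte per pixel (e.g. as GL_RED/GL_R8). A hedged sketch (function name and MSB-first bit order are my assumptions; MSB-first matches what the old GL_BITMAP path specified by default via GL_UNPACK_LSB_FIRST = GL_FALSE):

```cpp
#include <cstdint>
#include <vector>

// Sketch: expand a 1-bit-per-pixel buffer to one byte per pixel on the CPU,
// ready for upload as a GL_R8 texture, since GL_BITMAP is deprecated and
// unreliable on modern drivers. MSB-first: bit 7 is the leftmost pixel.
std::vector<uint8_t> unpackBitmap(const std::vector<uint8_t>& bits, int pixelCount) {
    std::vector<uint8_t> out(pixelCount);
    for (int i = 0; i < pixelCount; ++i) {
        uint8_t byte = bits[i / 8];
        int bit = 7 - (i % 8);              // MSB is the leftmost pixel
        out[i] = ((byte >> bit) & 1) ? 255 : 0;
    }
    return out;
}
```

With this, the 0x55 case really does become alternating 1-px stripes, which is a quick way to verify the bit order matches expectations before uploading.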
1 | Restoring the projection matrix I am learning to use FBOs, and one of the things I need to do when rendering onto a user-defined FBO is set up the projection, modelview and viewport for it. Once I am done rendering to the FBO, I need to restore these matrices. I found glPushAttrib(GL_VIEWPORT_BIT) / glPopAttrib() to restore the viewport to its old state. Is there a way to restore the projection and modelview matrices to whatever they were earlier? Tech: C++, OpenGL. Thanks! |
1 | Render queue sorting, HOW? Recently I've been trying to implement a render-queue sorting system, i.e., ordering my renderable objects in an array in such a way that the overhead of OpenGL state changes is minimal. After some googling, the most appealing approach (to me) is to generate an integer key for each renderable object and sort based on that value. Once sorted, objects that have similar rendering states (same shader, texture, distance, etc.) will be close to each other, so we can just loop through the object array and submit the draw calls for rendering. Here's a detailed explanation: Order your graphics draw calls around! My problem is: how do I loop through the renderable objects' array to submit their draw calls for actual rendering? Do I need to keep track of the current OpenGL states manually? What has come to my mind is (for each renderable object): if (objects[i].getShader() != m_currentShader) { m_currentShader = objects[i].getShader(); m_currentShader.bind(); } if (objects[i].getTexture() != m_currentTexture) { m_currentTexture = objects[i].getTexture(); m_currentTexture.bind(); } // ... and so on objects[i].getMesh().draw(); This seems not so performance-friendly to me, but I can't find any further explanation of how to submit the actual draw calls from the objects' array, either in the article above or on other sites. Perhaps there's something I have missed? |
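The key-generation step the article describes can be sketched as bit-packing the expensive state into a single integer so an ordinary sort clusters matching states. Field widths and names here are my own illustrative choices, not the article's exact layout:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch: pack the most expensive state change (shader) into the highest
// bits, cheaper state (texture) below it, so sorting by the key groups
// draws that share state. Field widths are arbitrary for illustration.
struct Renderable {
    uint32_t shaderId;   // assumed to fit in 16 bits
    uint32_t textureId;  // assumed to fit in 16 bits
    uint32_t meshId;
};

uint64_t makeSortKey(const Renderable& r) {
    return (uint64_t(r.shaderId)  << 48) |
           (uint64_t(r.textureId) << 32) |
            uint64_t(r.meshId);
}

void sortQueue(std::vector<Renderable>& queue) {
    std::sort(queue.begin(), queue.end(),
              [](const Renderable& a, const Renderable& b) {
                  return makeSortKey(a) < makeSortKey(b);
              });
}
```

And yes, the submission loop in the question is the standard shape: tracking `m_currentShader`/`m_currentTexture` and skipping redundant binds is cheap compared to the GL calls it avoids, so it is performance-friendly in practice.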
1 | Why doesn't glBindVertexArray work in this case? From my understanding of what glBindVertexArray does and how it works, the following code should work fine. First, init: glGenVertexArraysOES(1, &_vertexArray); glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &_buffer); glBindBuffer(GL_ARRAY_BUFFER, _buffer); glBufferData(GL_ARRAY_BUFFER, kMaxDrawingVerticesCount * sizeof(GLKVector3), NULL, GL_DYNAMIC_DRAW); glGenBuffers(1, &_indexBuffer); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer); glBufferData(GL_ELEMENT_ARRAY_BUFFER, kMaxDrawingVerticesCount * 1.5 * sizeof(GLuint), NULL, GL_DYNAMIC_DRAW); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLKVector3), BUFFER_OFFSET(0)); glBindVertexArrayOES(0); And later, to add new geometry: glBindVertexArrayOES(_vertexArray); glBufferSubData(GL_ARRAY_BUFFER, 0, (GLintptr) data.vertices.length, data.vertices.bytes); glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, (GLintptr) data.indices.length, data.indices.bytes); glBindVertexArrayOES(0); However, it doesn't work (there is screen output, but it looks like the buffers were swapped with each other). Is binding the vertex array not enough for the vertex and index buffers to get bound? Because if I do this: glBindVertexArrayOES(_vertexArray); glBindBuffer(GL_ARRAY_BUFFER, _buffer); // <-- note this line glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer); // <-- note this line glBufferSubData(GL_ARRAY_BUFFER, 0, (GLintptr) data.vertices.length, data.vertices.bytes); glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, (GLintptr) data.indices.length, data.indices.bytes); glBindVertexArrayOES(0); everything works as expected. I find it strange, since the vertex array should have taken care of the buffer binding. Otherwise, what's the purpose of having a vertex array if you still have to bind all the buffers? |
1 | Should the modelview and projection matrices be calculated in the shader or on the CPU? At minimum I would have: a camera with rotation and world position; projection parameters such as angle of view and perspective vs. orthographic; and meshes with scale, angle, and world position. When rendering a mesh, I am wondering if I should calculate the final transformation matrix on the CPU and pass it to my shader, or if I should calculate the transformation matrix inside my shader. |
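A back-of-envelope way to frame the trade-off: composing projection*view*model on the CPU costs two 4x4 multiplies per object, while composing it per-vertex in the shader costs the same two multiplies for every vertex. A tiny illustrative sketch (the counts are the point, not the functions themselves):

```cpp
// Illustrative sketch: matrix-multiply counts for composing P*V*M.
// CPU path: compose once per object, upload one mat4 uniform.
// Shader path: the same composition repeated for every vertex.
long long shaderMatMuls(long long objects, long long vertsPerObject) {
    return objects * vertsPerObject * 2; // P*V and (P*V)*M per vertex
}
long long cpuMatMuls(long long objects) {
    return objects * 2;                  // composed once per object
}
```

In practice GPUs are fast enough that either works, but the common pattern is: compose per-object matrices (model, or full MVP) on the CPU and pass them as uniforms, and only do per-vertex work in the shader that actually varies per vertex.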
1 | Why am I not getting an sRGB default framebuffer? I'm trying to make my OpenGL Haskell program gamma-correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut: import Foreign.Marshal.Alloc(alloca); import Foreign.Ptr(Ptr); import Foreign.Storable(Storable, peek); import Graphics.Rendering.OpenGL.Raw; import qualified Graphics.UI.GLUT as GLUT; import Graphics.UI.GLUT(($=)); main :: IO (); main = do { (_progName, _args) <- GLUT.getArgsAndInitialize; GLUT.initialDisplayMode $= [GLUT.SRGBMode]; _window <- GLUT.createWindow "sRGB Test"; -- To prove that I actually have freeglut working correctly. This will fail at runtime under classic GLUT. GLUT.closeCallback $= Just (return ()); glEnable gl_FRAMEBUFFER_SRGB; colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING; print colorEncoding }; allocaOut :: Storable a => (Ptr a -> IO b) -> IO a; allocaOut f = alloca $ \ptr -> do { f ptr; peek ptr } On my desktop (Windows 8, 64-bit, with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using a linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write: everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7, 64-bit, with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default, whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong?
How can I get an sRGB framebuffer consistently, on all hardware and target OSes? |
1 | Java LWJGL: How can I click to interact with objects? I want to be able to click on a monster to walk to him and start attacking him. The part that doesn't make sense to me is the conversion between the mouse position and the actual terrain position. There are camera angles to worry about, heights, separate terrains... how is this done? I am using Java with LWJGL and rendering with OpenGL 4.4. |
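The standard technique is mouse picking: unproject the 2D mouse position into a world-space ray (multiplying NDC coordinates by the inverse of the projection*view matrix, not shown here), then intersect that ray with the scene. For flat ground at y = 0 the intersection is closed-form; a heightmap needs ray-marching, and a monster needs a ray-vs-bounding-volume test. A hedged sketch of just the ray-vs-ground step (names and plane choice are mine):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Sketch: intersect a world-space picking ray with the ground plane y = 0.
// The ray (origin, dir) would come from unprojecting the mouse position.
bool rayGroundHit(Vec3 origin, Vec3 dir, Vec3& hit) {
    if (std::fabs(dir[1]) < 1e-6f) return false; // ray parallel to ground
    float t = -origin[1] / dir[1];               // solve origin.y + t*dir.y = 0
    if (t < 0.0f) return false;                  // intersection behind the ray
    hit = { origin[0] + t * dir[0], 0.0f, origin[2] + t * dir[2] };
    return true;
}
```

In LWJGL the same math applies; the equivalent of the unprojection step is inverting the matrices you already upload as uniforms (e.g. with JOML's `Matrix4f.unproject`).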
1 | Noisy edges: smoothing out edges between faces via a fragment shader I have a generated terrain with hexagonal geometry, as per the screenshot below. I then generate biomes, but as you can see, the borders between them are really ugly and straight. To hide that hexagonal origin, I would need to smooth out the borders between biomes. This is how it looks now in wireframe, with real triangular faces. What I'm aiming for is something more like this. Each vertex has an attribute that holds the biome type, and I can also add special attributes to the vertices on the edge between two biomes, but I just can't seem to figure out how to pull this off in shader code. Obviously noise is involved here, but how do I make it continuous across multiple faces and the entire border of multiple biomes? I'm rendering with WebGL, using THREE.js. |
1 | What does the identity matrix really do? I understand that multiplying by the identity matrix is like multiplying by 1. Why would you multiply by a matrix that only gives back the same result? Also, I'm experimenting with some OpenGL code and found some very interesting things. For example, when you use setIdentity(), the image rotates slower: Matrix.setIdentityM(mProjectionMatrix, 0); Matrix.rotateM(mProjectionMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f); But if you remove the setIdentity() call, the image rotates very fast... first it rotates clockwise very fast, then it becomes slower until it stops, then it begins to rotate counter-clockwise, then stops again, then rotates clockwise. I really don't understand how the identity matrix affects this if it's just multiplying by 1. Can someone explain what's going on? Here is the tutorial I am following. |
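The behaviour described has a simple explanation: rotateM multiplies a rotation onto whatever is already in the matrix. Resetting to identity each frame gives a fixed angle; skipping the reset makes the rotations accumulate frame after frame, which is the runaway spin. A minimal sketch of the difference (2x2 rotation matrices for brevity; names are mine):

```cpp
#include <array>
#include <cmath>

// Sketch: rotate(m, a) post-multiplies a rotation onto m, like rotateM does.
// Starting from identity each frame -> constant angle.
// Reusing last frame's matrix -> the angle accumulates every frame.
using Mat2 = std::array<float, 4>; // row-major 2x2

Mat2 identity2() { return {1, 0, 0, 1}; }

Mat2 rotate(const Mat2& m, float rad) {
    float c = std::cos(rad), s = std::sin(rad);
    return { m[0]*c + m[1]*s, -m[0]*s + m[1]*c,
             m[2]*c + m[3]*s, -m[2]*s + m[3]*c };
}

// Effective rotation angle currently stored in the matrix.
float angleOf(const Mat2& m) { return std::atan2(m[2], m[0]); }
```

So the identity matrix is not there to "do" anything to the result; it is the known starting point that makes each frame's transform depend only on this frame's angle instead of on the whole history of previous multiplications.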
1 | Rectangular area light illuminance colour banding I'm currently working on an implementation of rectangular area lights, but I am having some issues with the illuminance calculation, which gives me serious colour banding across the entire lit area. I'm using the paper published by DICE, "Moving Frostbite to Physically Based Rendering" (http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/), as my base for the implementation (Listing 12). This is the relevant code used. Solid angle of the rectangle: #define M_PI 3.14159265358979323846 float RectangleSolidAngle(vec3 worldPos, vec3 p0, vec3 p1, vec3 p2, vec3 p3) { vec3 v0 = p0 - worldPos; vec3 v1 = p1 - worldPos; vec3 v2 = p2 - worldPos; vec3 v3 = p3 - worldPos; vec3 n0 = normalize(cross(v0, v1)); vec3 n1 = normalize(cross(v1, v2)); vec3 n2 = normalize(cross(v2, v3)); vec3 n3 = normalize(cross(v3, v0)); float g0 = acos(dot(-n0, n1)); float g1 = acos(dot(-n1, n2)); float g2 = acos(dot(-n2, n3)); float g3 = acos(dot(-n3, n0)); return g0 + g1 + g2 + g3 - 2.0f * M_PI; } Illuminance calculation: float RectangleIlluminance(vec3 worldPos, vec3 worldNormal, AreaLightData lightSource) { float result = 0.0f; if (dot((worldPos - lightSource.position), lightSource.normal) > 0.0f) { float hWidth = lightSource.halfWidth; float hHeight = lightSource.halfHeight; vec3 p0 = lightSource.position - lightSource.right * hWidth + lightSource.up * hHeight; vec3 p1 = lightSource.position - lightSource.right * hWidth - lightSource.up * hHeight; vec3 p2 = lightSource.position + lightSource.right * hWidth - lightSource.up * hHeight; vec3 p3 = lightSource.position + lightSource.right * hWidth + lightSource.up * hHeight; float solidAngle = RectangleSolidAngle(worldPos, p0, p1, p2, p3); float illuminance = solidAngle * 0.2f * ( Saturate(dot(normalize(p0 - worldPos), worldNormal)) + Saturate(dot(normalize(p1 - worldPos), worldNormal)) + Saturate(dot(normalize(p2 - worldPos), worldNormal)) + Saturate(dot(normalize(p3 - worldPos), worldNormal)) + Saturate(dot(normalize(lightSource.position - worldPos), worldNormal))); result = illuminance; } return result; } The paper mentions some level of banding, but it should not be on the level which I am currently getting, which can be seen in motion in this video: https://youtu.be/gmkOOxIYc9s. These images show the illuminance without shadows or anything. From what I can gather from the paper, the code is correct. The current solution is deferred, reconstructing position from the depth-stencil texture and sampling normals from an RGBA32F texture (trying to eliminate precision issues). A directional light also seems to produce some level of banding, although not as visible as with the area light, which can be seen in the following images. So maybe it's still just a precision issue? The code handling my depth and position reconstruction looks like this: uniform sampler2D gDepthStencil; vec4 ReconstructWSPosition(float z, vec2 uv_f) { vec4 sPos = vec4(uv_f * 2.0f - 1.0f, z, 1.0f); sPos = gInvProjView * sPos; return vec4(sPos.xyz / sPos.w, sPos.w); } void main() { ivec2 pixel = ivec2(gl_GlobalInvocationID.xy); float depth = texelFetch(gDepthStencil, pixel, 0).r * 2.0f - 1.0f; vec4 wPosition = ReconstructWSPosition(depth, uv); ... } Which looks correct to me, but I might be missing something. Any other ideas or suggestions? Maybe a matter of tonemapping or HDR problems? |
1 | OpenGL fragment shader: simulate LCD slow response time I have a very simple OpenGL view rendering 2 triangles with a single texture applied, the minimum setup for rendering a 2D game. What I do is redraw the texture every frame, and I easily get 60 fps. I would like to add an effect simulating an LCD's slow response by outputting pixels that are the average between the current and the previous frame. The pseudocode of the fragment shader would look like this: varying vec2 v_TexCoordinate; uniform sampler2D u_Texture; void main() { gl_FragColor = (texture2D(u_Texture, v_TexCoordinate) + PIXEL_PREVIOUS_FRAME) / 2.0; PIXEL_PREVIOUS_FRAME = gl_FragColor; } Well, I think shaders can't keep a persistent variable like PIXEL_PREVIOUS_FRAME, so I really don't know how to tackle this problem. Any suggestion? |
1 | VBO rendering crashes with glDrawArrays I'm playing around in LWJGL3 and I'm experiencing an issue regarding glDrawArrays: at glDrawArrays the JVM crashes. I'm using modern OpenGL, and therefore I have my own shaders and matrix calculations. The pipeline I've programmed is quite complex, so pasting all the code here isn't of any use. Vertex shader: #version 330 core uniform mat4 projectionMatrix; uniform mat4 viewMatrix; uniform mat4 modelMatrix; layout (location = 0) in vec3 in_position; void main() { gl_Position = (projectionMatrix * viewMatrix * modelMatrix) * vec4(in_position, 1.0f); } Fragment shader: #version 330 core void main() { gl_FragColor = vec4(1.0f, 0.0, 1.0f, 1.0f); } I'm trying to draw 1 point to the screen using a VBO. Creation of the VBO: float[] vboData = new float[]{ 0.0f, 0.0f, 0.0f }; FloatBuffer buffer = BufferUtils.createFloatBuffer(vboData); vbo_id = glGenBuffers(); glBindBuffer(GL_ARRAY_BUFFER, vbo_id); glBufferData(GL_ARRAY_BUFFER, buffer, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); And the rendering of the point using glDrawArrays: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); testShader.bind(); pipeline.applyMatrices(testShader); glBindBuffer(GL_ARRAY_BUFFER, vbo_id); glVertexAttribPointer(0, 3, GL_FLOAT, false, 12, 0); glDrawArrays(GL_POINTS, 0, 1); glBindBuffer(GL_ARRAY_BUFFER, 0); pipeline.setCameraPosition(new Vector3f(0.0f, 0.0f, 0.0f), new Vector3f(0.0f, 0.0f, 0.0f)); testShader.unbind(); GLFWUtils.swapBuffers(); Using GLIntercept I managed to get a log of every OpenGL call made: glClearColor(1.000000,0.000000,1.000000,1.000000) glPointSize(10.000000) glGenBuffers(1,000000001E0F6FC0) glBindBuffer(GL_ARRAY_BUFFER,1) glBufferData(GL_ARRAY_BUFFER,12,0000000025EB8D90,GL_STATIC_DRAW) glBindBuffer(GL_ARRAY_BUFFER,0) glCreateProgram()=1 glCreateShader(GL_VERTEX_SHADER)=2 glCreateShader(GL_FRAGMENT_SHADER)=3 glShaderSource(2,1,000000001E0F6FC0,000000001E0F6FC8) glShaderSource(3,1,000000001E0F6FC0,000000001E0F6FC8) glCompileShader(2) glCompileShader(3) glGetShaderiv(2,GL_COMPILE_STATUS,000000001E0F6FC0) glGetShaderiv(3,GL_COMPILE_STATUS,000000001E0F6FC0) glAttachShader(1,2) glAttachShader(1,3) glLinkProgram(1) glEnableVertexAttribArray(0) glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT) glUseProgram(1) glGetUniformLocation(1,"projectionMatrix")=1 glUniformMatrix4fv(1,1,false,[0.671312, 0.000000, 0.000000, 0.000000, 0.000000, 0.895083, 0.000000, 0.000000, 0.000000, 0.000000, 1.000020, 1.000000, 0.000000, 0.000000, 0.020000, 0.000000]) glGetUniformLocation(1,"viewMatrix")=2 glUniformMatrix4fv(2,1,false,[1.000000,0.000000,0.000000,0.000000, 0.000000,1.000000,0.000000,0.000000, 0.000000,0.000000,1.000000,0.000000, 0.000000,0.000000,0.000000,1.000000]) glGetUniformLocation(1,"modelMatrix")=0 glUniformMatrix4fv(0,1,false,[1.000000,0.000000,0.000000,0.000000, 0.000000,1.000000,0.000000,0.000000, 0.000000,0.000000,1.000000,0.000000, 0.000000,0.000000,0.000000,1.000000]) glBindBuffer(GL_ARRAY_BUFFER,0) glVertexAttribPointer(0,3,GL_FLOAT,false,12,0000000000000000) glDrawArrays(GL_POINTS,0,1) I've carefully tested a few things: the shaders compile and link fine; the matrices calculated look correct (you can double-check in the GLIntercept log); the VBO is correctly created and filled with the correct data; the attribute pointer (layout location 0) is enabled; the matrices are correctly sent to the shader (this happens in pipeline.applyMatrices). What am I missing? Interesting debug finding: I'm running this on nVidia. On an Intel graphics GPU it runs, but nothing shows on the screen! |
1 | Voxel engine artifacts There are little white dots between blocks at random places, mainly at very near blocks. They disappear when I move the mouse and change the view direction. I use vertex arrays with glVertexAttribPointer to send the data directly to the shader. The shader only adds some fake light based on the face direction and draws the pixels. What could be the reason for the small white artifacts? Chunk render code: public void renderChunk() { glPushMatrix(); glTranslatef(pos.x * world.chunkSize.x, 0.0f, pos.z * world.chunkSize.z); glBindBuffer(GL_ARRAY_BUFFER, vao_id); glVertexAttribPointer(world.game.positionAtt, 3, GL_FLOAT, false, 32, 0); glVertexAttribPointer(world.game.normalAtt, 3, GL_FLOAT, false, 32, 12); glVertexAttribPointer(world.game.texCoordAtt, 2, GL_FLOAT, false, 32, 24); glDrawArrays(GL_TRIANGLES, 0, arraySize / 8); glBindBuffer(GL_ARRAY_BUFFER, 0); glPopMatrix(); } Vertex shader: attribute vec3 in_position; attribute vec3 in_normal; attribute vec2 in_texcoord; varying vec2 texture_coordinate; varying vec3 normal; void main() { gl_Position = gl_ModelViewProjectionMatrix * vec4(in_position, 1.0); normal = in_normal; texture_coordinate = in_texcoord; } Fragment shader: varying vec2 texture_coordinate; varying vec3 normal; uniform sampler2D texture; void main() { float add; if (normal.x == 1.0 || normal.z == 1.0) add = 0.65; else if (normal.y != 1.0) add = 0.8; else add = 1.0; gl_FragColor = texture2D(texture, texture_coordinate) * add; } Enlarge the image by clicking it to notice the artifacts. |
1 | Why do we move the world instead of the camera? I heard that in an OpenGL game, what we do to let the player move is not to move the camera but to move the whole world around. For example, here is an extract from this tutorial: "OpenGL View matrix. In real life you're used to moving the camera to alter the view of a certain scene; in OpenGL it's the other way around. The camera in OpenGL cannot move and is defined to be located at (0,0,0), facing the negative Z direction. That means that instead of moving and rotating the camera, the world is moved and rotated around the camera to construct the appropriate view." Why do we do that? |
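The two descriptions are mathematically the same thing: "moving the world" just means applying the inverse of the camera's transform to every vertex, which is what the view matrix is. A minimal sketch for the translation-only case (names are mine):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Sketch: the view transform is the inverse of the camera's transform.
// For a camera that is only translated, that inverse is a subtraction:
// a point located at the camera lands on the origin, where OpenGL expects
// the (fixed) camera to be, looking down -Z.
Vec3 worldToView(Vec3 point, Vec3 cameraPos) {
    return { point[0] - cameraPos[0],
             point[1] - cameraPos[1],
             point[2] - cameraPos[2] };
}
```

So nothing is lost by fixing the camera at the origin: "camera moves forward by v" and "world moves backward by v" produce identical images, and keeping the camera fixed lets the whole pipeline be expressed as one matrix multiply per vertex.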
1 | Peer to peer first person shooter I've been developing a first person shooter massively multiplayer online role-playing game for my small business, and was wondering whether it would be feasible to use peer to peer technology for communication between players, without an intermediary server (as our company does not have the funds for a high speed connection to run a game server on). How would such peer to peer technology best be implemented in such a game? Here are a few of the options I've been considering: a) Divide the game into segments, and have each segment be played on a separate P2P "mesh" (currently named XRevolution) b) Have one mesh for all portions of the game, and place all of the players on a single grid c) Some other P2P solution I've already used option A (considering only one area of the game is done) on a small scale (4 computers), but am wondering how well it would scale when thousands of computers would potentially be bound to the same mesh. Here's a list of possible concerns with P2P technology: During penetration testing, it was identified that a client could spoof packets to lie about the state of the game to other peers, allowing people to cheat. This is identified as "medium" risk, as it could be prevented to a certain extent through various security routines integrated within the game (such as encryption, movement prediction, double-checking state with other peers, AI, etc.) Lag was originally a problem on slower networks, but now movement prediction and other techniques are used to reduce the effect of lag, and to disconnect peers whose lag is too high. However, a remaining problem is the amount of time it would take on a larger scale network to find peers (other players) and establish connections.
Due to the amount of time it would take to connect to so many peers, solution A would seem optimal to solve this particular problem A problem with solution A, would be in game chat between meshes (as no two peers in different meshes can communicate with each other, except for the purpose of relaying), however, this problem could be potentially solved by using something in between solutions A and B (such as creating a separate mesh for inter mesh communication, which would be specific to each player or something) What's the best way to do something like this? Solution A, or B, something in between the two, or an entirely different solution altogether. Due to our budget, using a server to facilitate communication between players is completely out of the question, so it would be nice to use a P2P solution, otherwise the project would have to be abandoned. |
1 | How can I draw a curve in modern OpenGL? OpenGL is capable of rendering points, lines, triangles and quads, but what if I want to draw a Bezier curve? I read online that you should use something called GL_LINE_STRIP, but the solutions were either using legacy OpenGL or not explaining the process well. The question is: how can I render Bezier curves in modern OpenGL? I'm using C++. |
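Modern GL has no curve primitive, so the common approach is: evaluate the Bezier into a vertex array on the CPU, upload it with glBufferData, and draw it with glDrawArrays(GL_LINE_STRIP, ...). (Tessellation shaders can do the evaluation on the GPU instead.) A sketch of the CPU evaluation, using the standard cubic Bernstein basis:

```cpp
#include <array>
#include <vector>

using Vec2 = std::array<float, 2>;

// Cubic Bezier via the Bernstein basis:
// B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3.
Vec2 cubicBezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t) {
    float u = 1.0f - t;
    float b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
    return { b0*p0[0] + b1*p1[0] + b2*p2[0] + b3*p3[0],
             b0*p0[1] + b1*p1[1] + b2*p2[1] + b3*p3[1] };
}

// Fill a buffer ready for glBufferData and then
// glDrawArrays(GL_LINE_STRIP, 0, segments + 1).
std::vector<Vec2> tessellate(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, int segments) {
    std::vector<Vec2> verts;
    for (int i = 0; i <= segments; ++i)
        verts.push_back(cubicBezier(p0, p1, p2, p3, float(i) / segments));
    return verts;
}
```

Pick the segment count by on-screen curve size (16-64 is typical); the curve always starts at p0 and ends at p3, which makes a handy correctness check.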
1 | Do Java and ActionScript use OpenGL? As far as I know there are only 3 base graphics libraries on Windows: GDI, OpenGL and DirectX. Is that correct? That would mean that Java, ActionScript and every other language must use one of these 3 libraries if they are to display graphics, or maybe Java has its own graphics library API? |
1 | How to animate a model in WebGL? I created a human model in Blender, exported the vertices and indices into a JSON file, and render the model in a browser using WebGL. Now I have created walk and jump animations in Blender and would like to do the same with WebGL. I saw examples that use a list of vertices for every frame of the animation. Is this the way to go? Do I need to export the vertices of every frame for every animation? |
1 | OpenGL sluggish performance in extracting texture from GPU I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it, which requires extracting it from GPU memory. For this operation I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that. For example, an 8 MB uncompressed texture requires 3 seconds (yes, seconds) to extract. That's mind-puzzling. I'm almost wondering if my graphics card is connected by a serial link... Anyway, I've looked around and found some people complaining about the same thing, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but failed at so far. Edit: by moving the call to glGetTexImage() to a different place, the speed has improved a bit for the most dramatic samples: the 8 MB texture now requires 500 ms instead of 3 s. That's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timings remained in the 60-80 ms range). Using glFinish() didn't help either. Note that if I call glFinish() (without glGetTexImage), I get a fixed 16 ms result, whatever the texture size or complexity; it really looks like the timing of one frame at 60 fps. The timing is measured for the full rendering-plus-saving sequence. The call to glGetTexImage() alone does not really matter; that being said, it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is created on the GPU, hence the need to save it. Edit 2: I've also tried glReadPixels() instead of glGetTexImage(), but it's worse unfortunately: approximately twice as slow. Edit 3: Here is the code. Note that it uses a "framework" that I've not developed.
So some of the function calls here are not OpenGL standard, but they are nonetheless quite self-expressive.

glDisable(GL_BLEND);
// Simple construction, for test; the resulting texture is output into a render buffer
renderbufx1.bind();
clear(0, 0, 0, 0);
simpleDiffShader.bind();
simpleDiffShader.setTexture("TextureRef", img.tex, 0);
simpleDiffShader.setTexture("TextureDown", renderbufx2.getTexture(), 1);
img.tex->drawPart(-1, -1, 2, 2, 0, 0, 1, 1);
simpleDiffShader.unbind();
renderbufx1.unbind();
// Not really useful, but just in case: the result is copied into another render buffer.
// renderbufx1 is going to be displayed, and I want to avoid any access conflict
savebufx1.bind();
clear(0, 0, 0, 0);
renderbufx1.getTexture()->bind();
renderbufx1.getTexture()->drawPart(-1, -1, 2, 2, 0, 0, 1, 1);
renderbufx1.getTexture()->unbind();
savebufx1.unbind();
savebufx1.getTexture()->bind();
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_BYTE, t.data);   // <- This is the line
savebufx1.getTexture()->unbind();
glEnable(GL_BLEND);
renderbufx1.getTexture()->bind();
renderbufx1.getTexture()->drawPart(tex2x, -1, w, h, texSpan, texSpan, fact, fact);
renderbufx1.getTexture()->unbind();

The timing measured encompasses all of this (and a few other things, which do not really contribute to the total). Edit 4 (on course for the most-edited question of the week): the native texture format of the GPU is supposed to be retrieved through the function glGetTexLevelParameter. More precisely, I've used this line of code: glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &iTexFormat); The result of the function (within iTexFormat) is 32856. 32856 decimal is 0x8058, and within gl.h there is: #define GL_RGBA8 0x8058
1 | Apply portion of texture atlas I'm trying to write a shader that only maps a portion of a large texture to my sprite, and I'm getting strange behaviour with my current code. This is what I have right now. Texture atlas example (256x256 px). EDIT: I've changed both shaders and I get the following output right now (first image), but I need to zoom the texture so I only render the highlighted portion of the second image. How should I modify the shaders to achieve that? These are my current shaders:

Vertex:
uniform mat4 uMVPMatrix;
attribute vec4 aPosition;
attribute vec2 aTextureCoord;
varying vec2 vTextureCoord;
varying vec2 vTextureCoordOffset;
void main() {
    // offset and texSize will be defined as attributes later.
    // They're defined here for test purposes only.
    vec2 offset = vec2(128.0, 64.0);
    vec2 texSize = vec2(256.0, 256.0);
    float u = offset.x / texSize.x;
    float v = offset.y / texSize.y;
    vTextureCoordOffset = vec2(u, v);
    vTextureCoord = aTextureCoord;
    gl_Position = uMVPMatrix * aPosition;
}

Fragment:
precision mediump float;
uniform sampler2D sTexture;
varying vec2 vTextureCoord;
varying vec2 vTextureCoordOffset;
void main() {
    gl_FragColor = texture2D(sTexture, fract(vTextureCoord) + vTextureCoordOffset);
}

SOLVED: I solved it by changing the fragment shader to:
gl_FragColor = texture2D(sTexture, fract(vTextureCoord) * vZoom + vTextureCoordOffset);
vZoom is calculated in the vertex shader: vZoom = subTextureSize / texSize;
1 | Difference in glDrawArrays and glDrawElements While refreshing my mind on OpenGL ES, I came across glDrawArrays and glDrawElements. I understand how they are used and sort of understand why they are different. What I do not seem to understand is how glDrawElements can save draw calls (saving draw calls is a description mentioned by most of the books I have read, hence my mentioning of it here). Imagine a simple scenario in which I try to draw a square using 2 triangles. I would need a set of 6 vertices with glDrawArrays, while with glDrawElements I would need only 4, in addition to an index array that has 6 elements. Given the above, here is what I do not understand: how could glDrawElements save draw calls if it still needs to use the index array (6 indices, in the case of a square) to index into my vertex array of 4 elements (6 times)? In other words, does glDrawElements still need a total of 6 draw calls, just like glDrawArrays? How would using glDrawElements save space, if one still needs two arrays, namely one for the vertices and one for the indices? In the case of drawing a square from 2 triangles, for simplicity, how many draw calls do glDrawElements (4 items in the vertex array and 6 items in the index array) and glDrawArrays (6 items only in the vertex array) need, individually? Thanks.
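The counting above can be made concrete with the actual data for a unit quad (a hypothetical sketch; the array names are mine). Either version is submitted with a single draw call: glDrawElements performs its 6 index lookups inside that one call, and the index buffer is smaller per entry than a full vertex, so shared vertices save memory and let the GPU reuse cached vertex-shader results for repeated indices.

```cpp
// glDrawArrays path: every vertex is duplicated at each use (6 vertices).
float arrayVerts[] = {
    0, 0,  1, 0,  1, 1,   // triangle 1
    0, 0,  1, 1,  0, 1    // triangle 2
};

// glDrawElements path: 4 unique vertices plus 6 small indices.
float elemVerts[] = { 0, 0,  1, 0,  1, 1,  0, 1 };
unsigned short indices[] = { 0, 1, 2,  0, 2, 3 };

// Either way the quad is ONE draw call:
//   glDrawArrays(GL_TRIANGLES, 0, 6);
//   glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
```

The saving grows with vertex size: here each vertex is only two floats, but a real vertex (position, normal, UV, color) is far larger than a 2-byte index, and in a typical mesh each vertex is shared by 4 to 6 triangles.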
1 | Heightmap Physics Optimization Improvement I'm working on implementing the physics surrounding a player walking on a heightmap. A heightmap is a grid of points which are evenly spaced on the x and z axes, with varying heights. The physical representation of my player (what is exposed to my physics-engine manager) is simply a point mass where his feet are, rather than complicating the problem by treating him as a sphere or a box. This image should help further explain heightmaps and show how triangles are generated from the points. This picture would be looking straight down on a heightmap, and each intersection would be a vertex that has some height. Feel free to move this over to math, physics, or Stack Overflow, but I'm pretty sure this is where it belongs as it is a programming question related to games. Here's my current algorithm: Calculate the player's velocity from input + gravity + previous velocity, etc. Move the player's position (nextPos = prevPos + velocity * timeSinceLastFrame). Figure out which triangle (all graphics is done in triangles!) of my heightmap the player's new position is vertically aligned with. Use the three vertices of that triangle to calculate the equation of the plane that triangle lies in. Plug the player's x and z coordinates into the plane's equation to get the y coordinate for the player's position on that plane. Set the y coordinate of the player's position to this (if newPos.y < y). This is all fine and dandy, but I'm wondering if there's anything I can optimize. For example, my first thought is to store the plane's equation with the triangle's vertex information. That way all I have to do is plug the x and z values into the equation to get the y. However, this would require adding 4 floats to every vertex of the heightmap, which is a little ridiculous memory-wise. Also, my heightmaps are dynamic (meaning the heights will change at runtime), which would mean changing the plane equations every time a vertex's height changes.
Is there a faster way to calculate that point than digging up the plane's equation and then plugging in x and z? Should I store the plane equations and update them on vertex height change, or should I just regenerate the plane equation for every triangle every time the player moves on that triangle? Is there a better way to do heightmap physics that maintains this simplicity? (This seems very simple to me, as far as physics engines go.)
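The plane-equation step described above is cheap enough to regenerate on the fly; for reference, here is a minimal sketch (my own helper, not from the question) that derives the plane from the triangle's three vertices via a cross product and solves it for y at the player's (x, z):

```cpp
struct Vec3 { float x, y, z; };

// Given the three vertices of the triangle under the player, return the
// height of the plane they span at world-space (x, z).
// Assumes the triangle is not vertical (n.y != 0), which holds for any
// heightmap cell since no two grid points share the same (x, z).
float heightOnTriangle(Vec3 a, Vec3 b, Vec3 c, float x, float z) {
    // Plane normal from two edge vectors (cross product).
    Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n = { u.y * v.z - u.z * v.y,
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    // Plane: n.x*(x - a.x) + n.y*(y - a.y) + n.z*(z - a.z) = 0; solve for y.
    return a.y - (n.x * (x - a.x) + n.z * (z - a.z)) / n.y;
}
```

That is one cross product, two multiplies and a divide per query, so caching four floats per triangle mostly buys nothing unless profiling says otherwise.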
1 | How to draw a 3D capsule? I want to draw a 3D capsule defined by a segment (two points: start/end) and a radius. Image source: Orionist on Wikimedia Commons. I already have a sphere based on this answer: https://stackoverflow.com/a/31326534/1902536. I understand that I can expand the parametric equation to an ellipse, but I'm not sure how to do it with the segment (which may have rotation). What is the best way to draw/model a capsule?
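One way to sketch this (my own construction, under these assumptions): build the capsule in a canonical orientation with its axis along +Y, reusing the sphere's parametric rings but shifting the upper half of the rings up by the segment's length; the cylindrical wall then falls out of connecting the two middle rings. To place it between arbitrary start/end points, transform the result by a basis whose Y axis is normalize(end - start). For an exactly straight wall you would emit the equator ring twice, once at each end; this compact version approximates it.

```cpp
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// Canonical capsule: axis along +Y from y = 0 to y = height, radius r.
// Produces (rings + 1) * (slices + 1) vertices in a row-major grid,
// ready to be indexed like any sphere mesh.
std::vector<Vec3> capsuleVertices(float r, float height, int rings, int slices) {
    std::vector<Vec3> verts;
    const float PI = 3.14159265358979f;
    for (int i = 0; i <= rings; ++i) {
        float phi = PI * i / rings;          // 0 at top pole, PI at bottom pole
        float y = std::cos(phi) * r;
        float ringR = std::sin(phi) * r;
        // Upper-hemisphere rings sit on top of the cylinder, lower ones at its base.
        float offset = (i <= rings / 2) ? height : 0.0f;
        for (int j = 0; j <= slices; ++j) {
            float theta = 2.0f * PI * j / slices;
            verts.push_back({ std::cos(theta) * ringR, y + offset,
                              std::sin(theta) * ringR });
        }
    }
    return verts;
}
```

The rotation to the actual segment is the same change-of-basis you would use to orient a billboard or cone: pick any vector not parallel to the axis, build two perpendicular vectors with cross products, and use those three as the columns of the model matrix.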
1 | Importing Models from Maya to OpenGL I am looking for ways to import models into my game project. I am using Maya as modelling software and GLUT for the windowing of my game. I found this great parser; it imports all the textures and normal vectors, but it is compatible with .obj files from 3ds Max. I tried to use it with Maya OBJs, and it turned out that Maya's .obj files are a bit different from the former, so it cannot parse them. If you know any way to convert Maya .obj files to 3ds Max .obj files, that would be as acceptable as a new parser for Maya .obj files.
1 | Deforming a lip mesh to match face tracking points I'm creating a lipstick filter. I have a basic mesh for the lip shape (~140 vertices) which I am able to render on screen. I need the mesh since I have my own lighting system and hence need the normal info. Plus, I don't want to create a model at runtime, since by loading an obj file I have the flexibility of rendering high-quality meshes. The problem I am now facing is that when a user uses it, their face will be moving around in the video frame, and I will be getting the lips' positions accordingly. Hence at each frame the mesh will be rendered differently (in one frame they might be smiling, in the next frowning, etc.), in different places on the screen. How can I set the OpenGL vertices to match the positions of the tracked feature points and interpolate the rest? I have only around 20 detection landmark points for each of the upper and lower lips.
1 | Lwjgl or opengl double pixels I'm working in Java with LWJGL and trying to double all my pixels. I'm trying to draw in an area of 800x450 and then stretch the whole frame image to the full 1600x900 pixels without it getting blurred. I can't figure out how to do that in Java; everything I find is in C++... A hint would be great! Thanks a lot. EDIT: I've tried drawing to a texture created in OpenGL by attaching it to the framebuffer, but I can't find a way to use glGenTextures() in Java... so this is not working. I also thought about using a shader, but then I would not be able to draw only in the smaller region...
1 | OpenGL get the outline of multiple overlapping objects I just had an idea for my ongoing game made with OpenGL in C++: I'd like to have a big outline (5-6 pixels) on multiple overlapping objects when the player wins something. I thought the best way would be the stencil buffer, but I've spent a few hours trying to do some off-screen render of the stencil buffer and I can't achieve any kind of result, so probably there are other techniques! This is what I want to get: Any ideas?
1 | Knowing the size of a framebuffer when rendering transformed meshes to a texture I have a couple of 2D meshes that make up a hierarchical animated model. I want to do some post-processing on it, so I decided to render this model to a texture, so that I could do the post-processing with a fragment shader while rendering it as a textured quad. But I don't suppose it would be very smart to make the render texture as large as the entire screen for every layer I'd like to compose; it would be nicer if I could use a smaller render texture, just big enough to fit every element of my hierarchical model, right? But how am I supposed to know the size of the render target before I actually render it? Is there any way to figure out the bounding rectangle of a transformed mesh? (Keep in mind that the model is hierarchical, so there might be multiple meshes translated/rotated/scaled to their proper positions during rendering to make the final result.) I mean, sure, I could transform all the vertices of my meshes myself to get their world-space/screen-space coordinates and then take their minima/maxima in both directions to get the size of the image required. But isn't that what vertex shaders were supposed to do, so that I wouldn't have to calculate it myself on the CPU? (I mean, if I have to transform everything myself anyway, what's the point of having a vertex shader in the first place?) It would be nice if I could just pass those meshes through the vertex shader first somehow, without rasterizing anything yet, just to let the vertex shader transform those vertices for me, then get their min/max extents and create a render texture of that particular size, and only after that let the fragment shader rasterize those vertices into that texture. Is such a thing possible to do, though? If it isn't, then what would be a better way to do that? Is rendering the entire screen for each composition layer my only option?
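For what it's worth, the CPU min/max pass described above is usually the pragmatic route (transform feedback can read vertex-shader output back, but that is heavier than the problem deserves for a handful of 2D meshes). A minimal sketch of the extents step, with hypothetical names of my own:

```cpp
#include <vector>
#include <algorithm>

struct Vec2f { float x, y; };
struct Box { float minX, minY, maxX, maxY; };

// After transforming every vertex of every mesh in the hierarchy on the
// CPU (each mesh's model matrix applied), feed all results in here to get
// the rectangle the render texture has to cover.
Box extentsOf(const std::vector<Vec2f>& transformedVerts) {
    Box b = { 1e30f, 1e30f, -1e30f, -1e30f };
    for (const Vec2f& p : transformedVerts) {
        b.minX = std::min(b.minX, p.x);
        b.minY = std::min(b.minY, p.y);
        b.maxX = std::max(b.maxX, p.x);
        b.maxY = std::max(b.maxY, p.y);
    }
    return b;
}
```

A cheaper variant transforms only each mesh's 4 local-space AABB corners instead of every vertex; the result is conservative (possibly a bit larger than needed) but costs almost nothing per frame.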
1 | How to spin a 2D quad in place using only matrices? In short, I have a textured 2D quad (a sprite). I would like to rotate/spin it about the z axis (coming out of the screen) using nothing but matrices. If I do the following to a transform that already has scale in it, the object spins, but also follows a circular path around the origin. As a test, if I first undo the scale, then this works fine (the object spins in place). But I'm not normally keeping scale, orientation and translation separate and then computing the matrix every frame; in this case I'm mutating the matrix all the time. So this isn't workable. Can this be done without first undoing the scale?

// self is the current transformation matrix (4x4)
// pivot is the "center" of the quad's AABB (recomputed immediately before this)
// eulerAngles is a vec3 containing 0.0, 0.0, z_angle
// translate to the origin
translate(self, -pivot)
// rotate
rotate(self, eulerAngles)
// go back
translate(self, pivot)

(I'm using OpenGL, but that probably doesn't matter much here.) There are other related articles, but I didn't get to a solution. I'm linking them for reference: Combining rotation, scaling around a pivot with translation into a matrix; How can I rotate about an arbitrary point in 3D (instead of the origin)?
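One way to sidestep the baked-in scale entirely (a sketch of my own, shown in 2D with a minimal affine type so it can be verified; not the question's API): pre-multiply the existing model matrix by T(pivot) * R * T(-pivot), where pivot is in world space. Because the rotation is applied after the full model transform, whatever scale is inside the matrix never needs to be undone.

```cpp
#include <cmath>

// Minimal 2D affine matrix, columns (a,b), (c,d), translation (tx,ty).
struct Mat3 { float a, b, c, d, tx, ty; };

Mat3 mul(const Mat3& m, const Mat3& n) {            // m * n
    return { m.a * n.a + m.c * n.b,  m.b * n.a + m.d * n.b,
             m.a * n.c + m.c * n.d,  m.b * n.c + m.d * n.d,
             m.a * n.tx + m.c * n.ty + m.tx,
             m.b * n.tx + m.d * n.ty + m.ty };
}

Mat3 translation(float x, float y) { return { 1, 0, 0, 1, x, y }; }
Mat3 rotation(float rad) {
    float c = std::cos(rad), s = std::sin(rad);
    return { c, s, -s, c, 0, 0 };                   // counterclockwise
}

// Spin an existing model matrix (scale and all) about a WORLD-space pivot:
// pre-multiply by T(pivot) * R * T(-pivot).
Mat3 spinAbout(const Mat3& model, float rad, float px, float py) {
    return mul(mul(mul(translation(px, py), rotation(rad)),
                   translation(-px, -py)), model);
}

void apply(const Mat3& m, float x, float y, float* ox, float* oy) {
    *ox = m.a * x + m.c * y + m.tx;
    *oy = m.b * x + m.d * y + m.ty;
}
```

In glm terms the same idea is `self = glm::translate(pivot) * rotationMat * glm::translate(-pivot) * self` with the pivot taken in world space (the transformed AABB center), not model space.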
1 | How to detect GLSL warnings? After compiling a shader with glCompileShader, I can call glGetShaderiv with GL_COMPILE_STATUS to check if the shader compiled successfully. I can also call glGetShaderInfoLog to get information about possible errors, warnings or other info. The information log returned by this function is unspecified. In a tool where users can write their own shaders, I would like to print all errors and warnings from the compilation, but nothing if no warnings or errors were found. The problem is that GL_COMPILE_STATUS returns false only if the compilation failed, and true otherwise. If no problems were found, some drivers return an empty info log from glGetShaderInfoLog, but some drivers can return something else, such as "No errors.", which I do not want to print to the user. How is this problem generally solved?
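Since the log contents are driver-defined, one pragmatic approach (my own sketch, not a specified behavior) is a small filter: fetch the log via glGetShaderiv(shader, GL_INFO_LOG_LENGTH, ...) plus glGetShaderInfoLog, then show it only when it is non-trivial after trimming and not on a blacklist of known "all clear" strings you accumulate per driver.

```cpp
#include <string>
#include <cctype>

// Drivers may return "", whitespace, or a string like "No errors." for a
// clean compile. Print the log only when it survives this filter.
// The "No errors." entry is an example blacklist item, not a guarantee
// of what any particular driver emits.
bool logWorthShowing(const std::string& raw) {
    std::string s;
    for (char ch : raw)
        if (!std::isspace(static_cast<unsigned char>(ch)))
            s += ch;
    if (s.empty()) return false;
    if (s == "Noerrors.") return false;   // "No errors." with spaces removed
    return true;
}
```

This keeps genuine warnings visible (they contain line/column diagnostics no blacklist entry matches) while suppressing the cosmetic success messages.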
1 | Why is my model rotating opposite direction around the Y axis? I have a simple scenario. I have a simple scene, and in it I'm drawing a grid and a model, with some axes for orientation. I noticed that when I rotate the model about its local Y axis, if I rotate the model clockwise, the vertices end up rendering as if they've been rotated counterclockwise. I can see this by rendering the model's local forward vector. Here's a picture of what I mean. Initially, the world forward, local forward, and model all face the same direction. However, after I begin rotating the object clockwise, I notice the "world forward vector" is rotating clockwise, but my model rotates counterclockwise? When I export the model from Blender3D I use the coordinate system "Forward -Z" and "Up Y Up", which matches the RHS coordinate system I'm using in OpenGL. When I "rotate the object", I mean I multiply the object's rotation matrix like so:

float constexpr angle = 1.0f;
glm::vec3 const axis = glm::vec3{0.0, 1.0, 0.0};
object.rotation = glm::angleAxis(glm::radians(angle), axis) * object.rotation;

Here's my vertex shader; I'm just multiplying the vertices by the MVP matrix:

// vertex shader
in vec4 a_position;
in vec4 a_color;
out vec4 v_color;
uniform mat4 u_mvpmatrix;
void main() {
    gl_Position = u_mvpmatrix * a_position;
    v_color = a_color;
}

The rotation (object.rotation)'s initial value is the quaternion identity.
Here's how I calculate the MVP matrix uniform value:

glm::mat4 const tmatrix = glm::translate(glm::mat4{}, this->translation);
glm::mat4 const rmatrix = glm::toMat4(this->rotation);
glm::mat4 const smatrix = glm::scale(glm::mat4{}, this->scale);
glm::mat4 const model_matrix = tmatrix * rmatrix * smatrix;

// camera_xyz are world coords for the camera
// target_xyz is world coords of the object
glm::vec3 const UP = glm::vec3{0.0f, 1.0f, 0.0f};
glm::mat4 const view = glm::lookAt(camera_xyz, target_xyz, UP);
glm::mat4 const projection = glm::perspective(fov, aspect_ratio, near, far);
glm::mat4 const mvp_matrix = projection * view * model_matrix;

Other relevant calculations:

glm::vec3 eye_forward() const { return forward; }
glm::vec3 eye_up() const { return up; }
glm::vec3 eye_backward() const { return -eye_forward(); }
glm::vec3 eye_left() const { return -eye_right(); }
glm::vec3 eye_right() const { return glm::normalize(glm::cross(eye_forward(), eye_up())); }
glm::vec3 eye_down() const { return -eye_up(); }
glm::vec3 world_forward() const { return forward * orientation(); }
glm::vec3 world_up() const { return up * orientation(); }
glm::vec3 world_backward() const { return -world_forward(); }
glm::vec3 world_left() const { return -world_right(); }
glm::vec3 world_right() const { return glm::normalize(glm::cross(world_forward(), world_up())); }
glm::vec3 world_down() const { return -world_up(); }

From what I can tell, it seems like I'm doing everything right. Can someone explain it to me? I also have a video showing this in more detail. This is important, because I move the object "forward" based on its forward vector, so when I rotate the object and move it "forward" it's not moving in the direction it is facing. Edit: Am I calculating the "local forward" correctly with this calculation? Double edit: I just noticed I labeled the "World" and "Object space" vectors backwards. The green arrow is the world forward vector (0, 0, 1) and the light blue is the object's local forward vector.
// forward is initialized with Z (0.0, 0.0, 1.0)
glm::vec3 world_forward() const { return forward * orientation(); }
glm::quat const& orientation() const { return object.rotation; }
1 | Correct way to calculate Perspective Matrix I have seen at least 3 different ways to calculate the perspective matrix and I'm confused as to which one I should be using and what the differences are? OpenGL says to do it this way:

f = cotangent(fov * 0.5)

[ f/aspect, 0, 0,                             0                               ]
[ 0,        f, 0,                             0                               ]
[ 0,        0, (far + near) / (near - far),   (2 * far * near) / (near - far) ]
[ 0,        0, -1,                            0                               ]

glm does it this way:

range  = tan(fov * 0.5) * near
left   = -range * aspect
right  =  range * aspect
bottom = -range
top    =  range

[ (2 * near) / (right - left), 0,                           0,                                 0 ]
[ 0,                           (2 * near) / (top - bottom), 0,                                 0 ]
[ 0,                           0,                           -(far + near) / (far - near),     -1 ]
[ 0,                           0,                           -(2 * far * near) / (far - near),  0 ]
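For a symmetric frustum the two formulations are algebraically the same matrix: with left = -right and bottom = -top, (2*near)/(right - left) reduces to near/(tan(fov/2)*near*aspect) = cot(fov/2)/aspect, and the sign flips on the third column are just the second version being written in column-major order. A small numeric check (my own sketch) of the four distinct entries:

```cpp
#include <cmath>

// The four distinct entries of a symmetric-frustum perspective matrix.
struct Persp { float m00, m11, m22, m23; };

// gluPerspective-style: cotangent form.
Persp glStyle(float fovRad, float aspect, float n, float f) {
    float c = 1.0f / std::tan(fovRad * 0.5f);
    return { c / aspect, c,
             (f + n) / (n - f),
             (2.0f * f * n) / (n - f) };
}

// glm-style: range / left / right derivation.
Persp glmStyle(float fovRad, float aspect, float n, float f) {
    float range = std::tan(fovRad * 0.5f) * n;
    float left = -range * aspect, right = range * aspect;
    float bottom = -range, top = range;
    return { (2.0f * n) / (right - left),
             (2.0f * n) / (top - bottom),
             -(f + n) / (f - n),
             -(2.0f * f * n) / (f - n) };
}
```

So any of the variants is "correct"; the differences people see are row-major vs column-major notation and whether fov means the vertical or horizontal angle.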
1 | (GLSL) Lighting code outputting a black quad So, I've been transitioning to modern OpenGL recently and it's going rather well. But alas, something must go wrong. As the title says, all I'm getting is a completely black quad. (I've double-checked my C++ code and I'm pretty sure it has nothing to do with that.)

// Vertex shader
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec3 vertexColor;
layout(location = 2) in vec2 vertexUV;
layout(location = 3) in vec3 vertexNormal;
out vec3 vertNorm;
out vec3 fragmentColor;
out vec3 vertPos;
out vec2 UV;
uniform mat4 MVP;
void main() {
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0f);
    UV = vertexUV;
    fragmentColor = vertexColor;
    vertPos = vertexPosition_modelspace;
    vertNorm = vertexNormal;
}

// Fragment shader
#version 330 core
in vec3 Normal;
in vec3 fragmentColor;
in vec3 vertPos;
in vec2 UV;
out vec3 color;
uniform sampler2D textureSampler;
uniform mat4 Model;
void main() {
    vec3 lightPos = vec3(0, 0, 0);
    mat3 normalMatrix = transpose(inverse(mat3(Model)));
    vec3 normal = normalize(normalMatrix * Normal);
    vec3 fragPosition = vec3(Model * vec4(vertPos, 1));
    vec3 surfaceToLight = lightPos - fragPosition;
    float brightness = dot(normal, surfaceToLight) / (length(surfaceToLight) * length(normal));
    brightness = clamp(brightness, 0, 1);
    color = vec3(brightness * 1 * (texture(textureSampler, UV).rgb * fragmentColor));
}

But if you require my C++ code, say so and I'll edit.
1 | Do UV coordinates need correction for moving object? The image above is a static capture of a dynamic OpenGL project I created in which I wrapped a NASA albedo, i.e., sans clouds, image on an OpenGL generated sphere. In so doing, I also generated the UV coordinates associated with each vertex position. This was an incremental learning effort in which I had already applied model matrix corrections to the vertex positions for the rotating and translating (orbiting) "Earth". I was surprised to find that I did not have to apply a model matrix correction to the UV coordinates. I have tentatively concluded that once the jpg image coordinates are associated with the corresponding vertex positions with the UV coordinates in the range of 0, 1 , they are fixed and need no further correction. Does that sound correct, or is there more to the situation? |
1 | Sprite Sheet Texture not being rendered I'm making Space Invaders (in OpenGL/SDL) and I've run into a problem when trying to draw the sprite for the spaceship from the spritesheet. In my entity class, I have a pointer for the sprite that belongs to the current instance of the object. In main, I make an instance of SheetSprite on the heap, passing in the u, v, width, and height of the sprite as parameters, and then store this reference in the mySprite field of the spaceship entity. The coordinates provided by the texture atlas are:

<SubTexture name="playerShip2_green.png" x="112" y="866" width="112" height="75"/>

To draw the spaceship, I call the Draw method of its mySprite attribute. For some reason, only a white square is being rendered, without the actual texture. What could be causing the problem? The relevant code is below:

class SheetSprite {
public:
    SheetSprite(unsigned int textureID, float u, float v, float width, float height, float size, ShaderProgram program)
        : textureID(textureID), u(u), v(v), width(width), height(height), size(size), program(program) {}

    void Draw() {
        glBindTexture(GL_TEXTURE_2D, textureID);
        glUseProgram(program.programID);
        GLfloat texCoords[] = {
            u, v + height, u + width, v, u, v,
            u + width, v, u, v + height, u + width, v + height
        };
        float aspect = width / height;
        float vertices[] = {
            -0.5f * size * aspect, -0.5f * size, 0.5f * size * aspect, 0.5f * size,
            -0.5f * size * aspect, 0.5f * size, 0.5f * size * aspect, 0.5f * size,
            -0.5f * size * aspect, -0.5f * size, 0.5f * size * aspect, -0.5f * size
        };
        glBindTexture(GL_TEXTURE_2D, textureID);
        float vertices2[] = { -0.5, -0.5, 0.5, -0.5, 0.5, 0.5, -0.5, -0.5, 0.5, 0.5, -0.5, 0.5 };
        glVertexAttribPointer(program.positionAttribute, 2, GL_FLOAT, false, 0, vertices);
        glEnableVertexAttribArray(program.positionAttribute);
        float texCoords2[] = { 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0 };
        glVertexAttribPointer(program.texCoordAttribute, 2, GL_FLOAT, false, 0, texCoords);
        glEnableVertexAttribArray(program.texCoordAttribute);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        glDisableVertexAttribArray(program.positionAttribute);
        glDisableVertexAttribArray(program.texCoordAttribute);
    }

    float size;
    unsigned int textureID;
    float u;
    float v;
    float width;
    float height;
    ShaderProgram program;
};

class Entity {
public:
    Matrix modelMatrix, projectionMatrix, viewMatrix;
    float width, height = 1.0;
    float xDir, yDir = 0.0;
    float posX, posY = 0.0;
    float objSpeed = 0.0;
    float rotState = 0.0;
    float u, v, spr_width, spr_hght, spr_size = 0.0;
    unsigned int textureID;
    ShaderProgram program;
    GLuint spriteSheetTexture = LoadTexture("sheet.png");
    SheetSprite* mySprite;

    Entity(float wid, float hght, float xDirect, float yDirect, float xPosition, float yPosition, float speed, float rState, ShaderProgram program)
        : width(wid), height(hght), xDir(xDirect), yDir(yDirect), posX(xPosition), posY(yPosition), objSpeed(speed), rotState(rState), program(program) {}
    // Extra methods
};

// instantiation of SheetSprite in main
Entity spaceship(0.1f, 0.7f, 1.0f, 1.0f, -5.1f, 0.0f, 3.0f, 0.0f, program);
spaceship.objSpeed = 10;
GLuint spriteSheetTexture = LoadTexture("sheet.png");
spaceship.mySprite = new SheetSprite(spriteSheetTexture, 112.0f / 1024.0f, 866.0f / 1024.0f, 112.0f / 1024.0f, 75.0f / 1024.0f, 0.7, program);

// in the game loop
spaceship.mySprite->Draw();
1 | indices for surface of revolution I'd like to implement a surface of revolution. I already implemented creating vertices based on a 2D line. I now want to get the indices to render the mesh with GL_TRIANGLES. Here's the code to create the 3D vertices out of a 2D line:

void createVertices(vector<Vector2>& points, unsigned int iterations) {
    int i;
    int j;
    unsigned int index;
    vector<Vertex> newVertices;
    vector<unsigned int> newIndices;
    for (j = 0; j < points.size(); j++) {
        for (i = 0; i < iterations; i++) {
            Vector2 p = points.at(j);
            float theta = M_PI * 2.0 * (float)i / (float)iterations;
            float x = sinf(theta) * p.x;
            float z = cosf(theta) * p.x;
            newVertices.push_back(Vertex(x, p.y, z));
        }
    }
}

and here's the code that will be called to save the vertices:

void addVertices(vector<Vertex>& vertices, vector<unsigned int>& indices) {
    this->vertices = vertices;
    this->indices = indices;
    unsigned int verticesSize = vertices.size() * sizeof(Vertex);
    unsigned int indicesSize = indices.size() * sizeof(unsigned int);
    float* vertexBuffer = new float[vertices.size() * sizeof(Vertex)];
    createBuffers(vertices, vertexBuffer);
    glGenVertexArrays(1, &VAO);
    glBindVertexArray(VAO);
    glGenBuffers(1, &verticesVBO);
    glBindBuffer(GL_ARRAY_BUFFER, verticesVBO);
    glBufferData(GL_ARRAY_BUFFER, verticesSize, vertexBuffer, GL_STATIC_DRAW);
    glGenBuffers(1, &indicesVBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesVBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesSize, &indices[0], GL_STATIC_DRAW);
    delete[] vertexBuffer;
}

Does someone have a basic idea?
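Since createVertices() lays the points out row-major (one row per 2D point, one column per rotation step), the index buffer is the standard grid pattern: two triangles per cell. Here is a sketch with a name of my own; note it assumes a non-wrapping grid, so to close the seam of the revolution you would either duplicate the first column of vertices (iterations + 1 columns, simplest for texturing) or wrap the column index with a modulo.

```cpp
#include <vector>

// Index buffer for a (rows x cols) row-major vertex grid:
// rows = points.size(), cols = iterations (or iterations + 1 if the
// first column is duplicated to close the seam).
std::vector<unsigned int> gridIndices(unsigned int rows, unsigned int cols) {
    std::vector<unsigned int> idx;
    for (unsigned int j = 0; j + 1 < rows; ++j) {
        for (unsigned int i = 0; i + 1 < cols; ++i) {
            unsigned int tl = j * cols + i;   // top-left of the cell
            unsigned int tr = tl + 1;         // top-right
            unsigned int bl = tl + cols;      // bottom-left
            unsigned int br = bl + 1;         // bottom-right
            idx.push_back(tl); idx.push_back(bl); idx.push_back(tr);
            idx.push_back(tr); idx.push_back(bl); idx.push_back(br);
        }
    }
    return idx;
}
```

That produces (rows - 1) * (cols - 1) * 6 indices, which is exactly the count to pass to glDrawElements(GL_TRIANGLES, ...).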
1 | Should I make a FPS game on Fixed Function Pipeline or Programmable Pipeline OpenGL? I have an FPS game programmed with the fixed-function pipeline and one made with the programmable-pipeline OpenGL. While the programmable pipeline has lots of weird things that you can edit, it does not have the glLoadIdentity that I need for the gun to be attached to the camera. There is little to no information on this subject, and most of the information I can find is for the fixed-function pipeline rather than the programmable one. Keep in mind that with the fixed-function pipeline, I can just use the glLoadIdentity function, attach the gun, and then move on to something else. With the programmable one, I don't know how to do this, so I have spent a whole week looking up how I can do it. Should I just use the fixed-function-pipeline version and abandon the programmable pipeline? What shall I do? Thanks!
1 | How to toggle fullscreen with lwjgl I'm using GLFW in LWJGL 3 to try to create a toggleFullscreen method. However it always gives me errors. I saw this question: Toggle Fullscreen at Runtime, but it didn't help because glfwOpenWindow() and glfwCloseWindow() don't exist for me. I guess LWJGL uses a different version of GLFW? Does anyone know of a good solution for toggling fullscreen? Thanks. This is my current code, if it helps:

public void toggleFullscreen() {
    long newWindow;
    fullscreen = !fullscreen;
    if (fullscreen) {
        newWindow = glfwCreateWindow(windowWidth, windowHeight, "Title", glfwGetPrimaryMonitor(), window.getWindowHandle());
        glfwMakeContextCurrent(newWindow);
    } else {
        newWindow = glfwCreateWindow(windowWidth, windowHeight, "Title", NULL, windowHandle());
        glfwMakeContextCurrent(newWindow);
    }
    glfwDestroyWindow(window.getWindowHandle());
    windowHandle = newWindow;
    glfwMakeContextCurrent(windowHandle);
    glfwSwapInterval(1);
    glfwShowWindow(windowHandle);
    GLContext.createFromCurrent();
    GL11.glEnable(GL11.GL_TEXTURE_2D);
    glClearColor(0.0f, 1.0f, 0.0f, 0.0f);
    GL11.glEnable(GL11.GL_BLEND);
    GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
    GL11.glViewport(0, 0, windowWidth, windowHeight);
    GL11.glOrtho(0, windowWidth, windowHeight, 0, -1, 1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    game.initInputPolling();
}
1 | Loading multi texture 3ds c I have a question about loading 3ds using this tutorial. I want to use more than one texture on the model (because here all the models have more than one) but it seems that this library can't do that. Do you know any other alternatives or a way to edit this existing library to reach my aim? |
1 | Image rendering with additional space around it When I try to display an image that is 400 pixels wide and 800 pixels high, it is not displayed this way. Instead it is displayed like this: You can see at the bottom, and a few pixels to the right of the phone, some thin white lines. I did not add these manually and they are not part of the picture; the picture is perfectly cropped around the phone. When in my fragment shader I add vec4(1, 1, 1, .5), it shows the area that the phone should have covered. Code for creation of an object that holds info about the image:

GuiTexture phone = new GuiTexture(loader.loadTexture("phone_cropped"), Display.getWidth() / 2, Display.getHeight() / 2, 2, 400, 800);

The loadTexture method:

public int loadTexture(String fileName) {
    Texture texture = null;
    try {
        texture = TextureLoader.getTexture("PNG", new FileInputStream("res/" + fileName + ".png"));
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    int textureID = texture.getTextureID();
    textures.add(textureID);
    return textureID;
}

The line that calls the render class to render the image:

guiRenderer.render(guis);

The transformation matrices that the image is multiplied by:

float width = gui.getWidth() / 2;
float height = gui.getHeight() / 2;
Matrix4f modelMatrix = new Matrix4f();
modelMatrix.m00 = 2.0f / (float) Display.getWidth();
modelMatrix.m11 = 2.0f / (float) Display.getHeight();
modelMatrix.m30 = -1;
modelMatrix.m31 = -1;
shader.loadModel(modelMatrix);
Matrix4f transformationMatrix = Maths.createTransformationMatrix(
    new Vector2f(gui.getxPos(), (gui.getyPos())),
    gui.getRotation(),
    new Vector2f(width, height));
shader.loadTransformation(transformationMatrix);

Vertex shader code:

gl_Position = modelMatrix * transformationMatrix * vec4(position, 0.0, 1.0);
textureCoords = vec2((position.x + 1.0) / 2.0, 1 - (position.y + 1.0) / 2.0);

This does not happen to all images: when I load in an image that is square and I try to display it with width 400 and height 800, it works perfectly.
With different images it yields different amounts of extra space. All images are .png. The phone image is 1009x2057 pixels. I also tried a phone image that was 2048 pixels high (1006x2048), since this is a power of 2; that still yields a white line on the side, but it does look better. The square (third image), which was elongated to look like a rectangle and does display correctly, is 256x256 pixels. All displayed images have a slight rotation, because without rotation the thin white lines don't always show up; the rotation did not change anything else about the images. To load images I use Slick-Util's TextureLoader class.
1 | What is the order of postprocessing effects? I am a newbie in game development. I have read about how many shader effects are usually implemented, and I have implemented most of them myself. However, now I have to program my post-process pipeline, and so the question is: in what order should I apply post-process effects? Mostly I ask about SSAO, blur, motion blur, chromatic aberration, fog, shadows... Actually, I am curious about all of them. The basic idea of my engine is: Always draw everything to a texture. Apply (what I call "prePostProcess") effects (lighting, for example). Apply post-process fx (blur and so on). I understand that since I change the texture sequentially, the result will differ if I change the order in which the programs are applied: chromatic aberration followed by blur is different from blur followed by chromatic aberration. I code my game with OpenGL 4.4 at this moment, so I tagged it so.
1 | Why is the programmable pipeline (GLSL) faster than the fixed pipeline? So I'm teaching myself GLSL and am trying to figure out why it's supposed to be faster than the fixed-function pipeline. The reason I am having a problem is that, from my understanding, the shaders you create replace sections of the pipeline that were there before. So how does simply providing your own version speed things up? The only thing I can think of is that if you tried to supply, say, your own lighting equation before, you would have to do the calculation on the CPU, but now you can do the calculations on the GPU, which will be faster. Am I understanding this correctly?
1 | Rotating a quad around its center How can you rotate a quad around its center? This is what I am trying to do but it is not working: GL11.glTranslatef(x + getWidth() / 2, y + getHeight() / 2, 0) GL11.glRotatef(30, 0.0f, 0.0f, 1.0f) GL11.glTranslatef(-(x + getWidth() / 2), -(y + getHeight() / 2), 0) // DRAW My main problem is that it renders it off the screen. Draw code: GL11.glBegin(GL11.GL_QUADS) GL11.glTexCoord2f(0, 0) GL11.glVertex2f(0, 0) GL11.glTexCoord2f(0, getTexture().getHeight()) GL11.glVertex2f(0, height) GL11.glTexCoord2f(getTexture().getWidth(), getTexture().getHeight()) GL11.glVertex2f(width, height) GL11.glTexCoord2f(getTexture().getWidth(), 0) GL11.glVertex2f(width, 0) GL11.glEnd()
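The usual pattern is: translate so the quad's centre is at its world position, rotate, then offset by half the size before emitting vertices, so the rotation pivots about the centre rather than a corner. The maths can be checked without OpenGL; this helper is a sketch (not LWJGL) mirroring glTranslatef(x + w/2, y + h/2) / glRotatef(deg) / glTranslatef(-w/2, -h/2):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Transform a point given in the quad's local space (0..w, 0..h) by
// rotating 'deg' degrees about the quad's centre, placing the quad at (x, y).
Vec2 transformQuadPoint(Vec2 local, float x, float y, float w, float h, float deg) {
    float rad = deg * 3.14159265358979f / 180.0f;
    float c = std::cos(rad), s = std::sin(rad);
    // step 1: move the local point so the quad centre sits at the origin
    float px = local.x - w / 2.0f;
    float py = local.y - h / 2.0f;
    // step 2: rotate about the origin
    float rx = px * c - py * s;
    float ry = px * s + py * c;
    // step 3: move the centre to its world position
    return { rx + x + w / 2.0f, ry + y + h / 2.0f };
}
```

In the fixed-function code that means translating to (x + w/2, y + h/2), rotating, then translating by (-w/2, -h/2), while keeping the glVertex2f calls over 0..width / 0..height unchanged; translating back by the full -(x + w/2) undoes the placement and is why the quad ends up off screen.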
1 | SDL for 3D game programming? I have been studying SDL for a few weeks and I have succeeded in making a 2D Ping Pong game, but I want to get started in 3D development, and I'd like to know if SDL is capable of (and suitable for) 3D game development, or whether I must use OpenGL?
1 | 2D game with 3D models and terrains I am approaching OpenGL and general game development. I want to start by coding a simple 2D game in Java (lwjgl) but, since I can create models with Blender, I don't want to use 2D sprites, but a semi-flat environment. My target is to remake a classic game but with some cool stuff like particle emitters, bump mapping, lighting and so on... The basic idea is to make something that looks like Super Mario Bros Wii (or Kirby's Adventures Wii, or Super Smash Bros. etc...). Do you think I have to use JavaMonkeyEngine? Some hints, code snippets, tutorials, links? Do I have to handle a "z fixed" camera or set up OpenGL to render in 2D mode? What about the HUD? Sorry if my question is too generic, but I would like to have a clear idea of what I have to do before starting to write code. As always, thanks for your help!
1 | GLFW shift key not behaving like other keys I am using GLFW 3.2.1. Currently I am implementing a camera. For the vertical controls I am using space to go up and shift to go down. For all other keys, pressing the key results in the section of the code associated with the key always executing. But for shift, it only gets executed when pressing and releasing, not while keeping the key pressed. The key callback is: #define CAM_SPEED 0.2f static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods) { if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS) glfwSetWindowShouldClose(window, GLFW_TRUE); if (key == GLFW_KEY_W) c.translateForward(CAM_SPEED); if (key == GLFW_KEY_S) c.translateForward(-CAM_SPEED); if (key == GLFW_KEY_A) c.translateSideways(-CAM_SPEED); if (key == GLFW_KEY_D) c.translateSideways(CAM_SPEED); if (key == GLFW_KEY_SPACE) c.translate(vec3(0,1,0) * CAM_SPEED); if (key == GLFW_KEY_LEFT_SHIFT && mods == GLFW_MOD_SHIFT) c.translate(vec3(0,-1,0) * CAM_SPEED); }
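GLFW key callbacks fire on press, release and (for most keys) repeat events, not continuously while a key is held, so moving the camera inside the callback makes behaviour depend on key repeat. A more robust pattern is to record press/release transitions in the callback and apply movement once per frame. A pure-logic sketch of that pattern (the integer constants stand in for GLFW_PRESS / GLFW_RELEASE; names here are illustrative):

```cpp
#include <cassert>
#include <cmath>

const int PRESS = 1, RELEASE = 0; // stand-ins for GLFW_PRESS / GLFW_RELEASE
const int MAX_KEYS = 512;

bool keyDown[MAX_KEYS] = { false };

// Would be called from the GLFW key callback: just record the state.
void onKey(int key, int action) {
    if (key < 0 || key >= MAX_KEYS) return;
    if (action == PRESS)   keyDown[key] = true;
    if (action == RELEASE) keyDown[key] = false;
}

// Would be called once per frame, before rendering.
float verticalVelocity(int keySpace, int keyShift, float speed) {
    float v = 0.0f;
    if (keyDown[keySpace]) v += speed;  // ascend while held
    if (keyDown[keyShift]) v -= speed;  // descend while held
    return v;
}
```

Equivalently, you can skip the state array and poll glfwGetKey(window, GLFW_KEY_LEFT_SHIFT) each frame; either way the movement no longer depends on repeat events, which shift does not generate the same way as printable keys on some platforms.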
1 | Cursor position to a 3D ray using angles I've been stuck for a month trying to get gluUnProject working. After my attempts to use gluUnProject failed (as well as attempts to implement gluUnProject functionality manually) I implemented a method to get the ray using angles. float RadToDeg = 180.0f / (float)M_PI; // conversion coefficient to degrees float DegToRad = 1 / RadToDeg; // conversion coefficient to radians float DeltaX = tan(fovy / 2 * DegToRad) * 2 * (Width / 2 - winX) * zNear / Height; float DeltaY = tan(fovy / 2 * DegToRad) * 2 * (Height / 2 - winY) * zNear / Height; float AngleX = atan(DeltaX / zNear) * RadToDeg; float AngleY = atan(DeltaY / zNear) * RadToDeg; float DeltaXY = sqrt(DeltaX * DeltaX + DeltaY * DeltaY); float AngleXY = atan(DeltaXY / zNear) * RadToDeg; float AxisAngle = atan(DeltaX / DeltaY) * RadToDeg; vec3 RotationAxis = rotate(Camera->X, AxisAngle, Camera->Z); if (DeltaY < 0) RotationAxis = -RotationAxis; vec3 ray = rotate(-(Camera->Z), AngleXY, RotationAxis); // getting ray direction ray = vec3(ray.x * scale, ray.y * scale, ray.z * scale); // scale ray if needed vec3 RayEndPoint = Camera->Position + ray; The main idea behind my method is to rotate a ray pointing perpendicularly from the middle of the screen (like in first-person shooters) to the position where you clicked. Necessary values are the fovy angle (field of view in the vertical direction), zNear (distance from the camera to the near plane) and the camera transformation matrix (Camera->X, Camera->Y, Camera->Z, Camera->Position in the 1st, 2nd, 3rd and 4th columns). Difference between camera transformation matrix and view matrix here. Besides, we need some method rotate(vec3 vector, float angle, vec3 axis) that will rotate any vector by any angle around any arbitrary axis. Explaining how it works would consume too many lines (check the photo of my calculations). Q: Is there any easier or better way to do it? Any information or links welcome.
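A common simpler alternative to the angle juggling is to map the cursor to normalized device coordinates and build a camera-space direction directly from the vertical field of view and aspect ratio, then rotate it into world space with the camera's basis vectors (world = d.x * Camera->X + d.y * Camera->Y + d.z * Camera->Z in the usual convention where the camera looks down its local -Z). A sketch, checkable without a GL context:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Cursor (window coords, y growing downwards) to a unit ray direction in
// camera space, for a standard perspective projection.
Vec3 cursorRayCameraSpace(float winX, float winY, float width, float height,
                          float fovyDeg) {
    float aspect = width / height;
    float tanHalf = std::tan(fovyDeg * 0.5f * 3.14159265358979f / 180.0f);
    // window coords -> NDC in [-1, 1]
    float ndcX = 2.0f * winX / width - 1.0f;
    float ndcY = 1.0f - 2.0f * winY / height;
    Vec3 d = { ndcX * tanHalf * aspect, ndcY * tanHalf, -1.0f }; // camera looks down -Z
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```

The centre of the screen gives exactly (0, 0, -1), the "straight ahead" first-person ray, and no intermediate angles or axis flips are needed.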
1 | Displaying normals of a geometry I have a rectangle which is made of 2 triangles lying in the x-z plane, and I have an object on it. Now, the normals of the two triangles (face normals) point along the y axis, i.e. (0,1,0). I want to display the normals of the two triangles, for which I need two points to draw a line showing each normal. Intuitively, if I take the center of a triangle as one point and (0,1,0) as the second point of the normal, then both normals will converge to a single point. I don't think that is correct, as each normal should be exactly perpendicular to its surface. We can do a trick and take one point as the center and the other by increasing the y coordinate while keeping x and z the same, but what if we have a curved surface and not a flat board? What about a cylinder made of triangles? How could we display the normals of a cylinder as lines? I guess I am missing a very basic concept here; please help me understand it and tell me how to calculate the two points of a normal.
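The two points of a normal line are the point on the surface (for a face normal, typically the triangle's centroid) and that same point plus the unit normal scaled to some display length; the normal is a direction, not a position, so it must be added to the origin point rather than used as an endpoint. This works identically for flat and curved surfaces such as a cylinder, where only the per-face normal changes. A minimal sketch:

```cpp
#include <cassert>
#include <cmath>

struct P3 { float x, y, z; };

// Second endpoint of the normal line: origin + normalize(normal) * length.
// The first endpoint is 'origin' itself (e.g. the triangle centroid).
P3 normalLineEnd(P3 origin, P3 normal, float length) {
    float len = std::sqrt(normal.x * normal.x + normal.y * normal.y + normal.z * normal.z);
    return { origin.x + normal.x / len * length,
             origin.y + normal.y / len * length,
             origin.z + normal.z / len * length };
}
```

With this, two triangles that share the same normal direction still get two distinct, parallel lines, because each line starts from its own centroid instead of ending at the shared point (0,1,0).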
1 | GL_SPOT_CUTOFF not working properly I'm new to OpenGL. I'm studying OpenGL 2.1 and I'm trying to make a little program to test the GL_SPOT_CUTOFF property, but when I set a value between 0.0 and 90.0, the light doesn't work and everything is dark. The code: void lightInit(void) { GLfloat light0Position[] = { 0.0, 0.0, 2.0, 1.0 }; glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0); glLightfv(GL_LIGHT0, GL_POSITION, light0Position); } void reshapeFunc(int w, int h) { glViewport(0, 0, (GLsizei) w, (GLsizei) h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); if (w < h) glOrtho(-4, 4, -4 * (GLfloat)h / (GLfloat)w, 4 * (GLfloat)h / (GLfloat)w, -4.0, 4.0); else glOrtho(-4 * (GLfloat)w / (GLfloat)h, 4 * (GLfloat)w / (GLfloat)h, -4, 4, -4, 4.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); gluLookAt(0, 0, 0, 0, 0, -1, 0, 1, 0); } void displayFunc(void) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); /* center sphere */ glutSolidSphere(1, 100, 100); /* right sphere */ glPushMatrix(); glTranslatef(3, 0, 0); glutSolidSphere(1, 100, 100); glPopMatrix(); /* left sphere */ glPushMatrix(); glTranslatef(-3, 0, 0); glutSolidSphere(1, 100, 100); glPopMatrix(); glutSwapBuffers(); } void keyboardFunc(unsigned char key, int x, int y) { if (key == 27) exit(EXIT_SUCCESS); } int main(int argc, char** argv) { /* freeglut init and window creation */ glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); glutInitWindowSize(600, 500); glutInitWindowPosition(300, 100); glutCreateWindow("OpenGL"); /* glew init and error check */ GLenum err = glewInit(); if (GLEW_OK != err) { fprintf(stderr, "Error: %s\n", glewGetErrorString(err)); return 1; } fprintf(stdout, "GLEW %s\n\n", glewGetString(GLEW_VERSION)); /* general settings */ glClearColor(0.0, 0.0, 0.0, 0.0); glShadeModel(GL_SMOOTH); glEnable(GL_DEPTH_TEST); /* light settings */ lightInit(); /* callback functions */ glutDisplayFunc(displayFunc); glutReshapeFunc(reshapeFunc); glutKeyboardFunc(keyboardFunc); glutMainLoop(); return 0; } This code produces this image. If I delete glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0), the next image is produced. Is there some kind of bug?
1 | Anti-aliasing in OpenGL C++ I'm trying to make anti-aliasing work inside of OpenGL, here's what I've tried: glEnable(GL_POINT_SMOOTH); glHint(GL_POINT_SMOOTH_HINT, GL_NICEST); glEnable(GL_LINE_SMOOTH); glHint(GL_LINE_SMOOTH_HINT, GL_NICEST); glEnable(GL_POLYGON_SMOOTH); glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); But so far none of these have worked. I have gotten antialiasing to work by enabling it in the control panel for my video card (Catalyst Control Center in my case), but I would like to get it working inside my program instead. This is what the rendering looks like with 4x antialiasing enabled via the video card control panel: And this is what it looks like when I do it with my program: How do I get antialiasing to work?
1 | OpenGL tile rendering Currently I'm trying to render a tile map using OpenGL 2.1, GLSL 1.2. I would like to draw every tile in just one draw call. I use a single texture with all tiles, identifying each one by an index. The vertex data per tile is: vec2 worldPos, the position to transform the tile quad to, in world coordinates; vec2 texCoord, the uv coordinates, calculated from the tile index by the CPU (top-left corner). But I can't find a way to draw everything with one draw call: The uv coordinates can't be calculated in the shader because the vertex shader doesn't know which corner of the quad it is processing. I can't draw by element because my quad vertex data only contains 4 vertices; storing repeated vertices is a memory waste. Only if I could use a separate element buffer just for the vertices (0, 1, 2, 3, 0, 1, ...). Does someone have any suggestion about how I should proceed? Thanks!
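One way around the "which corner am I?" problem is to give each vertex a small corner id (0..3) as an extra attribute, or, where available, derive it from gl_VertexID (core in GLSL 1.30; on GLSL 1.2 it needs the EXT_gpu_shader4 extension). The full UV can then be computed from just the tile index and the corner id, so no per-corner UVs need to be stored. A CPU-side sketch of that computation, with illustrative names:

```cpp
#include <cassert>

struct TexUV { float u, v; };

// UV for one corner of a tile in an atlas of 'cols' x 'rows' tiles.
// corner: 0 = top-left, 1 = top-right, 2 = bottom-right, 3 = bottom-left
// (the same arithmetic can run in the vertex shader).
TexUV tileUV(int tileIndex, int cols, int rows, int corner) {
    float tileW = 1.0f / cols, tileH = 1.0f / rows;
    float u0 = (tileIndex % cols) * tileW;   // top-left corner of the tile
    float v0 = (tileIndex / cols) * tileH;
    float du = (corner == 1 || corner == 2) ? tileW : 0.0f; // right-side corners
    float dv = (corner == 2 || corner == 3) ? tileH : 0.0f; // bottom corners
    return { u0 + du, v0 + dv };
}
```

With this, the per-tile vertex data shrinks to worldPos plus a tile index, and the four corners of each quad can share one small static pattern of corner ids.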
1 | Calculating vertex normals in OpenGL C++ Does anyone know a simple solution for calculating vertex normals? I've been looking for this on the internet but I can't find a simple solution. For example, if I have some vertices like this: GLfloat vertices[] = { 0.5f, 0.5f, 0.0f, // Top Right 0.5f, -0.5f, 0.0f, // Bottom Right -0.5f, -0.5f, 0.0f, // Bottom Left -0.5f, 0.5f, 0.0f // Top Left };
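The standard recipe: for each triangle, take the cross product of two edge vectors to get the face normal, add it to each of the triangle's three vertices, then normalize the per-vertex sums at the end. A minimal self-contained sketch (the winding of the index list determines which way the normals point):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) { return { a.y * b.z - a.z * b.y,
                                       a.z * b.x - a.x * b.z,
                                       a.x * b.y - a.y * b.x }; }

// One normal per vertex: the normalized sum of the face normals of every
// triangle (given as index triples) that uses that vertex.
std::vector<V3> vertexNormals(const std::vector<V3>& pos,
                              const std::vector<unsigned>& idx) {
    std::vector<V3> n(pos.size(), { 0, 0, 0 });
    for (size_t i = 0; i + 2 < idx.size(); i += 3) {
        unsigned a = idx[i], b = idx[i + 1], c = idx[i + 2];
        V3 fn = cross(sub(pos[b], pos[a]), sub(pos[c], pos[a])); // face normal
        unsigned tri[3] = { a, b, c };
        for (unsigned v : tri) {                                 // accumulate
            n[v].x += fn.x; n[v].y += fn.y; n[v].z += fn.z;
        }
    }
    for (V3& v : n) {                                            // normalize
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0) { v.x /= len; v.y /= len; v.z /= len; }
    }
    return n;
}
```

Not normalizing the face normal before accumulating makes larger triangles contribute more, which is the usual area-weighted smoothing; normalize it first if you want unweighted averaging.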
1 | OpenGL slower than Canvas Up to 3 days ago I used a Canvas in a SurfaceView to do all the graphics operations but now I switched to OpenGL because my game went from 60FPS to 30 45 with the increase of the sprites in some levels. However, I find myself disappointed because OpenGL now reaches around 40 50 FPS at all levels. Surely (I hope) I'm doing something wrong. How can I increase the performance at stable 60FPS? My game is pretty simple and I can not believe that it is impossible to reach them. I use 2D sprite texture applied to a square for all the objects. I use a transparent GLSurfaceView, the real background is applied in a ImageView behind the GLSurfaceView. Some code public MyGLSurfaceView(Context context, AttributeSet attrs) super(context) setZOrderOnTop(true) setEGLConfigChooser(8, 8, 8, 8, 0, 0) getHolder().setFormat(PixelFormat.RGBA 8888) mRenderer new ClearRenderer(getContext()) setRenderer(mRenderer) setLongClickable(true) setFocusable(true) public void onSurfaceCreated(final GL10 gl, EGLConfig config) gl.glEnable(GL10.GL TEXTURE 2D) gl.glShadeModel(GL10.GL SMOOTH) gl.glDisable(GL10.GL DEPTH TEST) gl.glDepthMask(false) gl.glEnable(GL10.GL ALPHA TEST) gl.glAlphaFunc(GL10.GL GREATER, 0) gl.glEnable(GL10.GL BLEND) gl.glBlendFunc(GL10.GL ONE, GL10.GL ONE MINUS SRC ALPHA) gl.glHint(GL10.GL PERSPECTIVE CORRECTION HINT, GL10.GL NICEST) public void onSurfaceChanged(GL10 gl, int width, int height) gl.glViewport(0, 0, width, height) gl.glMatrixMode(GL10.GL PROJECTION) gl.glLoadIdentity() gl.glOrthof(0, width, height, 0, 1f, 1f) gl.glMatrixMode(GL10.GL MODELVIEW) gl.glLoadIdentity() public void onDrawFrame(GL10 gl) gl.glClear(GL10.GL COLOR BUFFER BIT) gl.glMatrixMode(GL10.GL MODELVIEW) gl.glLoadIdentity() gl.glEnableClientState(GL10.GL VERTEX ARRAY) gl.glEnableClientState(GL10.GL TEXTURE COORD ARRAY) Draw all the graphic object. 
for (byte i 0 i lt mGame.numberOfObjects() i ) mGame.getObject(i).draw(gl) Disable the client state before leaving gl.glDisableClientState(GL10.GL VERTEX ARRAY) gl.glDisableClientState(GL10.GL TEXTURE COORD ARRAY) mGame.getObject(i).draw(gl) is for all the objects like this HERE there is always a translatef and scalef transformation and sometimes rotatef gl.glBindTexture(GL10.GL TEXTURE 2D, mTexPointer 0 ) Point to our vertex buffer gl.glVertexPointer(3, GL10.GL FLOAT, 0, mVertexBuffer) gl.glTexCoordPointer(2, GL10.GL FLOAT, 0, mTextureBuffer) Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL TRIANGLE STRIP, 0, mVertices.length 3) EDIT After some test it seems to be due to the transparent GLSurfaceView. If I delete this line of code setEGLConfigChooser(8, 8, 8, 8, 0, 0) the background becomes all black but I reach 60 fps. What can I do? |
1 | Rendering objects with either normal maps, either specular maps, or with both, or with neither? My hobby engine has a deferred renderer that supports normal maps and specular maps. Now, some objects may have normal maps, and some may have specular maps. In some cases, an object has both maps, and in some cases, it has neither. The question is how should I implement the rendering of these objects? Should I have a render queue for each different object type and render them with separate shaders like this Queue A Objects without normal and specular map Queue B Objects with normal map, without specular map Queue C Objects without normal map, with specular map Queue D Objects with both normal map and specular map Render loop bind shader for type 'A' objects for each object in Queue A render object bind shader for type 'B' objects for each object in Queue B render object and so forth... Or, should I use a single shader and bind a "default" normal map and specular map for those objects that do not have such maps? By a default map, I mean for example a normal map texture that's completely colored (128, 128, 255). This would be something like this bind shader bind default normal map texture bind default specular map texture for each object in Queue A render object for each object in Queue B bind object's normal map render object bind default normal map texture for each object in Queue C bind object's specular map render object for each object in Queue D bind object's normal map bind object's specular map render object Basically, the first approach would involve less texture binds and more shader binds, whereas the second approach would be the opposite. Is either of these a preferred way to approach the problem? Or have I missed something completely here? You can assume the objects are queued correctly to the queues. |
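For the first approach, the queueing itself is cheap to organize: derive a 2-bit material key from (has normal map, has specular map) and bucket objects by it each frame, so each shader variant is bound exactly once. A sketch with illustrative types, not from any particular engine:

```cpp
#include <cassert>
#include <vector>

struct Object { bool hasNormalMap; bool hasSpecularMap; };

// 0 = neither (queue A), 1 = normal only (B), 2 = specular only (C), 3 = both (D)
int materialKey(const Object& o) {
    return (o.hasNormalMap ? 1 : 0) | (o.hasSpecularMap ? 2 : 0);
}

std::vector<std::vector<const Object*>> buildQueues(const std::vector<Object>& objs) {
    std::vector<std::vector<const Object*>> queues(4);
    for (const Object& o : objs)
        queues[materialKey(o)].push_back(&o);
    return queues;
}
```

Whether the four queues then map to four specialized shaders or to one shader with default maps bound, the bucketing is the same; with only four variants, the handful of extra shader binds per frame is usually cheap either way, so this tends to come down to shader maintenance cost rather than performance.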
1 | Dynamic chunk loading with high FPS. But still chops I am creating a voxel world, (like any other person), but I currently have a small performance hit when loading unloading chunks. Right now I can load and unload chunks dynamically with "infinite" type world. I am using VBO's to store chunk data with float buffers. Also, (my big point), I am using Greedy meshing to only render the edges I need. My only thought of why it's not loading well is that I am using SSAO shading, which should have a hit on my FPS, but my FPS stays in the high 200's. Also, I am using frustum culling and normal culling. What I want is the player to have a really large render distance without actually rendering the chunks until the player is close. Almost like fog but instead of covering up unloading, it's actually reducing the detail as you move away. Also, when a player creates a voxel or destroys a voxel (basically any change in the chunk) it saves the chunk to a file in the corresponding world folder. Any ideas? Here is a video of the game in its' current stage. Here's a screenshot with a far render distance. Anything with a render distance that is far, hits the performance BAD |
1 | DirectX Bullet Tracer Effect I'm wondering if anyone knows some common and efficient ways to do a fast tracer for an instant bullet. I've seen people speak on forums of using primitive lines with DirectX, but I doubt this is up to "Commercial" standards. I also thought of using some sort of minimal prism geometry encompassing the line and using a shader but even then, I'm not sure how to give it that nice glow and fade. I'm obviously not looking for any code in specific, just some ideas in general. Thanks, Joey |
1 | Texture flipping behaviour I was having this problem with OpenGL where I'd have all my textures being rendered upside down. I did the most logical thing I could think of and tried reversing the order of the mapping, and that fixed the issue. But why? I know that OpenGL's coordinate system is cartesian and that it works by mapping each uv to a specific vertex. I had something like (0,0), (0,1), (1,1), (1,0), which would theoretically, as far as I understand it, go from top-left -> bottom-left -> bottom-right -> top-right. I changed this to (1,1), (1,0), (0,0), (0,1), which would represent bottom-right -> top-right -> top-left -> bottom-left. In my understanding it should be quite the opposite. Making a pretty quick sketch on paper shows me that the initial order would theoretically render my texture correctly and not upside down. Am I correct, and is there something else messing with my rendering? Like my perspective matrix? This is my orthographic matrix: Matrix4f ortho = new Matrix4f(); ortho.setIdentity(); float zNear = 0.01f; float zFar = 100f; ortho.m00 = 2 / (float) RenderManager.getWindowWidth(); ortho.m11 = 2 / (float) RenderManager.getWindowHeight(); ortho.m22 = -2 / (zFar - zNear); return ortho; I can't really say I understand it though, I had quite a hard time with it. And looking through youtube tutorials and articles you can see most people don't really understand it either, they just use it. I do have a good linear algebra background but still can't wrap my head around how this matrix is normalizing my coordinates (screen coords to OpenGL coords (-1, 1)). Anyway, I digress, any help is appreciated.
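The usual culprit is not the mapping order but the vertical convention: most image loaders hand over pixel rows top-to-bottom, while OpenGL's texture coordinate origin (0,0) is the bottom-left of the texture data. So a "top-left = (0,0)" mental model shows up flipped, and reversing the corner order happens to compensate. The cleaner fix is to flip only the v coordinate; a trivial sketch:

```cpp
#include <cassert>

struct UV { float u, v; };

// An image stored top-row-first samples right-side-up in OpenGL's
// bottom-left UV convention if v is mirrored.
UV flipV(UV t) { return { t.u, 1.0f - t.v }; }

// Apply to a whole quad: top-left (0,0) in image space becomes (0,1) in GL space.
void flipQuad(UV uv[4]) {
    for (int i = 0; i < 4; ++i) uv[i] = flipV(uv[i]);
}
```

Alternatively, many loaders can flip the pixel rows at load time, which keeps the intuitive "top-left = (0,0)" UVs intact; either way the orthographic matrix is not involved.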
1 | How can I efficiently render a tile based map with many Z levels, where the levels act like hollowed out voxels? My map setup is a little different to what you normally have with tilemaps. The map itself is a rectangular prism, with the width, height, and depth being variable. In my case, width is the x axis horizontally across the screen, height is the y axis vertically across the screen and depth is the z level, or how "deep" the map goes into/out of the screen. In my map you could consider the rectangular prism as "solid". Many/most of these voxels could be considered as occluders to the voxels below them. Some voxels are hollowed out to make rooms and corridors. The map is viewed at a specific z level, from 0, the "bottom" of the rectangular prism, to n, the top. A "hollowed out" voxel, if viewed from its level, would result in me rendering a floor texture in that tile. If there were 4 voxels above it hollowed out, then that floor texture would be rendered when the map is viewed from each of these z levels, but each z level above the "floor" would have a semi-transparent black rect rendered on top of the tile (so it gets successively darker the further "up" you are). This occurs until you're on a z level above the tallest voxel that is hollowed out to make the room, at which point you can't see it any longer, because this non-hollowed-out voxel blocks your view of the room below. The problem I have is that there are many z levels to render and most of the hollowed out sections criss-cross around in the different layers to make a very complex network of little rooms and tunnels. This results in me naively rendering a floor texture, followed by several layers of semi-transparent black rects, followed by a fully opaque black rect, potentially repeated for any additional overlapping rooms, for every x,y tile on the screen.
I'm trying to wrap my head around a way to render this more efficiently but I'm having trouble. Currently, for every render loop, I'm attempting to step through the z layers, from the current viewed z level down to 0, and finding the z level that is occluded first (or going all the way to the 0 level), so that when rendering, I can skip z layers for that tile that are below this value. This isn't really giving me much of a performance increase. I'm looking for ideas on how to better perform this series of draw calls. I'm using OpenGL 3.3 directly, so I'm not restricted by any library or engine constraints. Currently, I'm rendering each "tile" on each layer separately and would prefer to keep it that way for simplicity.
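The per-column occlusion scan described above is worth keeping, but the win comes from caching it rather than recomputing it every frame. A sketch of the scan for one (x, y) column, with an illustrative data layout ('solid[z]' true where the voxel is not hollowed out):

```cpp
#include <cassert>
#include <vector>

// From the viewed z level, walk downwards and return the level of the first
// solid voxel; everything below that level is hidden for this column.
int firstOccluderBelow(const std::vector<bool>& solid, int viewZ) {
    for (int z = viewZ; z >= 0; --z)
        if (solid[z]) return z;      // this voxel blocks the view further down
    return 0;                        // open shaft all the way to the bottom
}
```

If this result is cached per column (one small array per viewable z level, or recomputed lazily) and only invalidated when a voxel in that column changes, the per-frame cost drops to a table lookup, and the darkening overlays only need to be drawn from the view level down to the returned level. The number of semi-transparent rects can also be collapsed: the stack of identical overlays is equivalent to a single rect whose alpha is a function of (viewZ - floorZ), which removes most of the overdraw.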
1 | How can I render "two sided" clouds like in Minecraft? The clouds in Minecraft are semi transparent and are rendered on both sides. If you fly into the cloud you can see inside of the cloud. If I render clouds the inside faces would be visible on the outside. How can I prevent that? Z Ordering the faces and render near to far with depth test on? There must be a better, easier way. Could the accum or stencil buffers be used somehow? UPDATE I think the crucial point everyone is forgetting is that these clouds and semi transparent and "blended" into the scene as each face is rendered. If two faces are rendered on top of each other the "white" texture will double up which is undesired. The faces also seem to have slight lighting variations which would rule out using a stencil. UPDATE2 Just another note. Each cloud is 12x12x4 blocks (roughly). Larger clouds are just a group of the base clouds stuck together. Here's showing clouds from above. They are translucent and the chunks below can be seen. And here's the view from inside them. |
1 | Memory dataflow for uniform variables? When a texture (2D) is supplied to a shader as a 'uniform' input, it is first uploaded to OpenGL using glTexImage2D() and then, using glUniform1i(), it is associated with the shader uniform. E.g.: Texture data: glTexImage2D() is used to transfer texture data to the server side; glGetUniformLocation() is used to access the shader uniform handle; glUniform1i() associates the data pointed to by the texture unit with the shader 'uniform'. But when we pass a matrix (e.g. a matrix4x4) to a shader as a 'uniform' input, we don't use any specific function to upload it to OpenGL. (We just use glUniform..() to associate the data with the shader input, which we also used in the case of texture data.) Matrix data: glGetUniformLocation() to access the shader uniform handle; glUniformMatrix4fv() to associate matrix data with the shader uniform input. Where does the matrix data live at each step in the process of passing it to a shader as a uniform input? Does the matrix data always live in client-side, CPU-accessible memory and get fetched every frame by the server side? If it is uploaded to OpenGL, which step/function call uploads the data? Where does the data live in OpenGL memory? How is its memory location referenced?
1 | OpenGL Core 3.2 framebuffer rendering black on Mac OSX I'm trying to get a 2 pass post processing system going in OpenGL in a cross platform manor using FBOs. I'm starting the dev on mac OSX (since in the past I've found it the most finicky to get working of windows linux osx), I have a toggle to toggle between using the FBO(post processing) and not. The shaders are working, but it seems the FBO didn't load the texture unit bound to it. The following is the init code for the FBO and it's texture glGenTextures(1, amp fboimg) glBindTexture(GL TEXTURE 2D,fboimg) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA8, 512, 512, 0, GL RGBA, GL UNSIGNED BYTE, 0) glBindTexture(GL TEXTURE 2D, 0) glGenFramebuffers(1, amp fboHandle) glBindFramebuffer(GL FRAMEBUFFER,fboHandle) glFramebufferTexture2D(GL FRAMEBUFFER,GL COLOR ATTACHMENT0,GL TEXTURE 2D,fboimg,0) GLenum status glCheckFramebufferStatus(GL FRAMEBUFFER) switch(status) case GL FRAMEBUFFER COMPLETE cout lt lt "The fbo is complete n" lt lt endl break case GL FRAMEBUFFER INCOMPLETE ATTACHMENT cout lt lt "GL FRAMEBUFFER INCOMPLETE ATTACHMENT EXT n" lt lt endl break case GL FRAMEBUFFER INCOMPLETE MISSING ATTACHMENT cout lt lt "GL FRAMEBUFFER INCOMPLETE MISSING ATTACHMENT EXT n" lt lt endl break case GL FRAMEBUFFER INCOMPLETE DRAW BUFFER cout lt lt "GL FRAMEBUFFER INCOMPLETE DRAW BUFFER" lt lt endl break case GL FRAMEBUFFER INCOMPLETE READ BUFFER cout lt lt "GL FRAMEBUFFER INCOMPLETE READ BUFFER" lt lt endl break if(status ! GL FRAMEBUFFER COMPLETE) cout lt lt "status ! 
GL FRAMEBUFFER COMPLETE" lt lt endl GLUtils checkForOpenGLError( FILE , LINE ) else shader and VAO setup for post fboSetup true Here is the render code start rendering to FBO if(postFlag amp amp fboSetup) glBindFramebuffer(GL FRAMEBUFFER, fboHandle) glUseProgram(programHandle) start rendering scene one way or another glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glUniformMatrix4fv(mats.projHandle,1,GL FALSE, glm value ptr(mats.projMatrix)) glUniformMatrix4fv(mats.mvHandle,1,GL FALSE,glm value ptr(mats.modelViewMatrix)) bind to vertex array object glBindVertexArray(vaoHandle) render scene glDrawArrays(GL TRIANGLES, 0, 240 3 ) do post processing if we have it enabled if(postFlag amp amp fboSetup) glFlush() glBindFramebuffer(GL FRAMEBUFFER, 0) glUseProgram(fboProgram) glClear(GL COLOR BUFFER BIT) glBindTexture(GL TEXTURE 2D,fboimg) glUniform1i(glGetUniformLocation(fboProgram,"sampler0"),fboimg) glUniform2f(glGetUniformLocation(fboProgram,"resolution"),512,512) glBindVertexArray(vaoFBO) glDrawArrays(GL TRIANGLES, 0, 6) glBindTexture(GL TEXTURE 2D, 0) Tried everything I can think of or find in a FBO tutorial or have read about. I don't get any errors and it returns as complete. (Also I can't seem to get it glTexImage2d a image of size width and height. says invalid values, and if I try to use GL TEXTURE RECTANGLE it says invalid enum but that's for a different question after I just get it working to begin with ) |
1 | Manage VBO VAO in a graphic engine I'm trying to make a 2D graphics engine to train myself. I've actually made it with immediate draw, and I've kept the renderer separate (so I can switch between OpenGL and DirectX). How can I manage Vertex Buffer Objects and Vertex Array Objects? I've made a geometry object, and I don't think VBOs and VAOs need to be there. Is it the work of my renderer to manage the scene? (Group objects into a large VBO, hide objects that are off screen, order objects by transparency, ...) More explanation of my architecture: Spacial: a spatial element containing spatial elements (like a node). Mesh: an object with a geometry and a material. Scene: manages spatial elements (like meshes) and lights. Renderer: draws the given scene (meshes and lights). Where should I manage buffers (index buffer, vertex buffer object and vertex array object)? At first I started to put them in the Geometry class, but that doesn't seem right, because one buffer can store multiple Geometry objects. So I'm thinking of putting the buffers in a buffer manager (in the scene object): the scene can manage meshes (ordering static/dynamic to group them into buffers). What do you think about that? Thanks!
1 | glDrawElements not acting as expected code Vertices are created from an .obj file. (loading OBJFile.java) I draw a cube perfectly fine with glDrawArrays. (VertexModel.java) created like this: new VertexModel(square.getIndexedVertexBuffer()) However, when I try to do the same with glDrawElements I get a distorted cube. (IndexedVertexModel.java) The IndexedVertexModel is created like this: new IndexedVertexModel(square.getVertexBuffer(), square.getVertexIndecies()) The result can be seen here. I've tried to figure out what's wrong, but I really have no idea what causes it to behave this way. The OBJ file I'm using looks like this: v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 f 1 4 1 2 3 1 4 2 1 f 3 4 3 4 3 3 6 2 3 f 5 4 4 6 3 5 8 2 4 f 7 4 6 8 3 6 2 2 6 f 2 4 7 8 3 7 6 2 7 f 7 4 8 1 3 8 3 2 8 f 1 4 1 4 4 1 3 3 1 f 3 4 3 6 4 3 5 3 3 f 5 4 4 8 4 4 7 3 4 f 7 4 6 2 4 6 1 3 6 f 2 4 7 6 4 7 4 3 7 f 7 4 8 3 4 8 5 3 8
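A classic cause of a distorted mesh with glDrawElements is that OBJ face entries are 1-based and (in the v/vt/vn form) index positions, texcoords and normals independently, so they cannot be copied straight into an element buffer that indexes a single vertex array. Each slash-separated triple has to be parsed, rebased to 0, and deduplicated into combined vertices first. A sketch of just the parsing step (a hypothetical helper, not from the question's OBJFile.java):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

struct FaceVertex { int v, vt, vn; };   // 0-based indices; -1 if absent

// Parse one OBJ face vertex token such as "5/2/3", "7//4" or "12".
FaceVertex parseFaceVertex(const std::string& token) {
    int parts[3] = { -1, -1, -1 };
    size_t start = 0;
    for (int i = 0; i < 3 && start <= token.size(); ++i) {
        size_t slash = token.find('/', start);
        std::string piece = token.substr(
            start, slash == std::string::npos ? std::string::npos : slash - start);
        if (!piece.empty())
            parts[i] = std::atoi(piece.c_str()) - 1;  // OBJ indices are 1-based
        if (slash == std::string::npos) break;
        start = slash + 1;
    }
    return { parts[0], parts[1], parts[2] };
}
```

A full loader would then map each distinct (v, vt, vn) triple to one combined vertex and build the element buffer from those mapped indices; feeding the raw OBJ numbers to glDrawElements picks the wrong positions and produces exactly this kind of distortion.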
1 | openGL Updating instanced model transform in vbo every frame I am using OpenGL to render a large number of models by instanced rendering (using LWJGL wrapper). As far as I can tell I have implemented the instancing correctly, although, after profiling, I've come upon an issue. The program is able to render a million cubes at 60fps when their model (world) transformations are not changing. Once I make them all spin though, the performance drops significantly. I deduced from the profiler that this is due to the way I write the matrix data to the VBO. My current approach is to give each unique mesh a new VAO (so all instances of cubes come under 1 VAO), have 1 VBO for vertex positions, textures, and normals and 1 instance array (VBO) for storing instance model matrices. All VBOs are interwoven. In order to make the cubes spin, I need to update the instance VBO every frame. I do that by iterating through every instance and copying the matrix values into the VBO. The code is something like this float matrices new float models by mesh.get(mesh).size() 16 for (int i 0 i lt models.size() i ) Model cube models.get(i) float matrix new float 16 cube.getModelMatrix(matrix) store model matrix into array System.arraycopy(matrix, 0, matrices, i 16, 16) glBindBuffer(GL ARRAY BUFFER, instance buffers by mesh.get(mesh) glBufferData(GL ARRAY BUFFER, matrices, GL STATIC DRAW) render I realise that I create new buffer storage and float array every frame by calling glBufferData instead of glBufferSubData but when I write outside loop soon after VBO creation glBufferData(GL ARRAY BUFFER, null, GL DYNAMIC DRAW) or stream when updating models glBufferSubData(GL ARRAY BUFFER, 0, matrices) nothing is displaying I'm not sure why, perhaps I'm misusing subData but that's another issue. I have been looking at examples of particle simulators (in OpenGL) and most of them update the instance VBO the same way as me. 
I'm not sure what the problem could be and I can't think of a more efficient way of updating the VBO. I'm asking for suggestions and potential improvements to my code.
1 | Some OpenGL code not working, but some is I'm just starting out using GLEW and GLFW in a C project for my rendering code, but I'm having trouble actually getting anything to render. I know OpenGL is working in some capacity, because I am able to change the background colour, but nothing else is working beyond that. I'm setting up the OpenGL context with this code Attempts to initiate GLFW. If it doesn't, it returns immediately if (!glfwInit()) fprintf(stderr, "Failed to initialize GLFW n") return glfwWindowHint(GLFW SAMPLES, 4) 4x antialiasing glfwWindowHint(GLFW CONTEXT VERSION MAJOR, 3) We want OpenGL 3.3 glfwWindowHint(GLFW CONTEXT VERSION MINOR, 3) glfwWindowHint(GLFW OPENGL FORWARD COMPAT, GL TRUE) To make MacOS happy should not be needed glfwWindowHint(GLFW OPENGL PROFILE, GLFW OPENGL CORE PROFILE) We don't want the old OpenGL window GLFWwindowPtr() window.reset(glfwCreateWindow(width, height, windowTitle, NULL, NULL), DeleteWindow) if (window.get() nullptr) fprintf(stderr, "Failed to open GLFW window n") glfwTerminate() return glfwMakeContextCurrent(window.get()) glewExperimental GL TRUE Needed in core profile int errorCode glewInit() if (errorCode) printf(" i", errorCode) fprintf(stderr, "Failed to initialize GLEW n") return glClearColor(0.0, 0.0, 0.0, 1.0) glClear(GL COLOR BUFFER BIT) glfwSwapBuffers(window.get()) A little while after that in my main loop, I call this code every frame float ratio int width, height glfwGetFramebufferSize(WindowHandler getMainWindowGLPointer(), amp width, amp height) ratio width (float)height glViewport(0, 0, width, height) glClearColor(red 255, green 255, blue 255, 1.0) glClear(GL COLOR BUFFER BIT) Now this code works. I'm incrementing the red, green and blue variables every frame so that I can check that OpenGL is working as a whole and I see the background colour change. However, when I put the following code after it, I'm seeing nothing different appear. 
glBegin(GL_QUADS);            // Each set of 4 vertices form a quad
glColor3f(red / 255, green / 255, blue / 255); // Red
glVertex2f(-0.5f, -0.5f);     // x, y
glVertex2f( 0.5f, -0.5f);
glVertex2f( 0.5f,  0.5f);
glVertex2f(-0.5f,  0.5f);
glEnd();

I used the code from an existing example found on https://www3.ntu.edu.sg/home/ehchua/programming/opengl/CG_Introduction.html so I had a better chance of the problem being something in my setup, and not me messing up trying to write OpenGL code.
1 | Looking for a GUI library for OpenGL, or tips for my own (MetroUI) I am making an OpenGL graphics accelerated FPS, and I need to find a library to handle the GUI correctly. My current GUI handler just won't do! Yes, as the title says, the UI of the game is Metro style (Zune HD, WP7, etc.). So the UI contains little to NO images, just fonts.
1 | Render with multiple lights (one pass per light) I already have a system that handles multiple lights by passing an array of light structs to the shader and looping through it. I have been told to switch to a multipass rendering approach. How should I do this in an efficient way? Can you link me to some resources about this? I cannot switch to deferred rendering.
1 | Batching and Z order with Alpha blending in a 3D world I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve). (OpenGL ES2 with C++.) Currently, I'm ordering elements back to front before drawing them, without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment: Order all elements of my scene back to front. Send the ordered list of elements to the renderer. The renderer looks in its batch manager for a batch matching the given element's material. If no batch exists, create a new one. If a batch exists for the element's material, add the sprite to that batch. Compute a big mesh with all sprites for each batch (1 material type = 1 batch). When all batches are ready, the batch manager computes draw commands for the renderer. The renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements). Image with my problem here But I've got a problem: objects in one batch can be behind objects in another batch. How can I handle something like that?
1 | How does glm lookAt produce a View Matrix? Say we do this:

glm::mat4 View = glm::lookAt(glm::vec3(4, 3, -3), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));

And after printing to the console with glm::to_string (column major order):

View =
-0.600000 -0.411597  0.685994  0.000000
 0.000000  0.857493  0.514496  0.000000
-0.800000  0.308697 -0.514496  0.000000
 0.000000  0.000000 -5.830953  1.000000

This matrix works as intended. I multiplied it by a perspective projection matrix and passed the resulting matrix in to my vertex shader to be multiplied by the vertices of a cube with side length 1. So the full expression for each projected vertex is VERTEX * VIEW * PERSPECTIVE_PROJECTION:

p.x   -0.600000 -0.411597  0.685994  0.000000     0.629325 0.000000  0.000000  0.000000
p.y    0.000000  0.857493  0.514496  0.000000     0.000000 0.839100  0.000000  0.000000
p.z   -0.800000  0.308697 -0.514496  0.000000     0.000000 0.000000 -1.002002 -1.000000
1.0    0.000000  0.000000 -5.830953  1.000000     0.000000 0.000000 -0.200200  0.000000

Which produces this. However, I was under the impression that to account for the translation of the camera, the camera's coordinates should be inverted and placed in the fourth column of the View Matrix. So I tried this:

p.x   -0.600000 -0.411597  0.685994 -4.000000     0.629325 0.000000  0.000000  0.000000
p.y    0.000000  0.857493  0.514496 -3.000000     0.000000 0.839100  0.000000  0.000000
p.z   -0.800000  0.308697 -0.514496  3.000000     0.000000 0.000000 -1.002002 -1.000000
1.0    0.000000  0.000000  0.000000  1.000000     0.000000 0.000000 -0.200200  0.000000

Which produces this. Why does the first View Matrix work but not the second one?
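For reference, a minimal hand-rolled sketch of what lookAt builds (no glm; the struct names are my own, but the matrix is stored column-major like glm's). The key point the question runs into: the translation column is not the negated eye position, but the negated eye position expressed in the camera's rotated basis, i.e. each basis vector dotted with eye.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Column-major 4x4, m[col][row] -- same memory layout glm uses.
struct Mat4 { float m[4][4]; };

// Hand-rolled right-handed lookAt. The rotation rows are the camera
// basis vectors; the translation column holds -eye *in that basis*
// (dot products), which is why pasting raw -eye values in breaks.
Mat4 lookAt(Vec3 eye, Vec3 center, Vec3 up) {
    Vec3 f = normalize(sub(center, eye)); // forward
    Vec3 s = normalize(cross(f, up));     // right ("side")
    Vec3 u = cross(s, f);                 // corrected up
    Mat4 r = {};
    r.m[0][0] =  s.x; r.m[1][0] =  s.y; r.m[2][0] =  s.z;
    r.m[0][1] =  u.x; r.m[1][1] =  u.y; r.m[2][1] =  u.z;
    r.m[0][2] = -f.x; r.m[1][2] = -f.y; r.m[2][2] = -f.z;
    r.m[3][0] = -dot(s, eye);
    r.m[3][1] = -dot(u, eye);
    r.m[3][2] =  dot(f, eye);
    r.m[3][3] = 1.0f;
    return r;
}
```

With eye (4, 3, -3), |eye| = sqrt(34) ≈ 5.8309, and the dot products with the right and up vectors happen to cancel to zero for this particular eye, which is why only the z slot of the translation column is non-zero in the printout above.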
1 | In a shader, why does substituting a variable with the expression producing it cause different behaviour? I have this correctly working OpenGL shader producing Perlin noise float left lerp(fade(v), downleft, topleft) float right lerp(fade(v), downright, topright) float result lerp(fade(u), left, right) Then I tried plugging the definitions of right and left into result float result lerp(fade(u), lerp(fade(v), downleft, topleft), lerp(fade(v), downright, topright)) Surprisingly, this behaves completely differently, giving visible edges in my Perlin noise. Below are both results My whole 30 line shader is here. What is the difference between those? |
1 | What are valid ranges of space dimensions used in OpenGL? I want to know what the valid ranges of all the spaces in OpenGL are. Specifically Maximum and Minimum Clip space X, Y, and Z (gl Position) Maximum and Minimum gl FragCoord X, Y, and Z Maximum and Minimum gl FragDepth value |
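For context, a sketch of the fixed-function steps that define those ranges, assuming the default glDepthRange(0, 1) and a viewport at (0, 0): a vertex survives clipping when its clip coordinates satisfy -w <= x, y, z <= w; the perspective divide then lands NDC in [-1, 1] per axis; and the viewport transform maps NDC to gl_FragCoord, with z (and gl_FragDepth) in [0, 1].

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// Clip -> NDC by perspective divide, NDC -> window coordinates by the
// viewport transform, mirroring what the pipeline does after the
// vertex shader (default depth range, viewport origin at 0,0 assumed).
Vec3 clipToWindow(Vec4 clip, float vpWidth, float vpHeight) {
    float ndcX = clip.x / clip.w; // in [-1, 1] if the vertex was inside
    float ndcY = clip.y / clip.w;
    float ndcZ = clip.z / clip.w;
    return { (ndcX * 0.5f + 0.5f) * vpWidth,   // gl_FragCoord.x
             (ndcY * 0.5f + 0.5f) * vpHeight,  // gl_FragCoord.y
             ndcZ * 0.5f + 0.5f };             // gl_FragCoord.z in [0, 1]
}
```

So gl_FragCoord.xy spans the viewport in pixels and gl_FragCoord.z / gl_FragDepth span [0, 1] by default.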
1 | Invalid coordinates returned by glutMouseFunc() I am using GLUT's glutMouseFunc() function to retrieve the coordinates of mouse clicks. I want to move the object at that coordinate to another coordinate. But when I click on the object, the coordinates returned by glutMouseFunc() are different from the actual position.

windowWidth = 1250;
windowHeight = 1000;

void onMouse(int button, int state, int x, int y);

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE);
    glutInitWindowSize(1250, 1000);
    glutCreateWindow("OpenGL First window demo");
    // ...
    glutMouseFunc(onMouse);
    // ...
    glutMainLoop();
    return 0;
}

void onMouse(int button, int state, int x, int y)
{
    // flag
    if (state != GLUT_DOWN && button != GLUT_LEFT_BUTTON)
        return;

    GLbyte color[4];
    GLfloat depth;
    GLuint index;

    glReadPixels(x, windowHeight - y - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, color);
    glReadPixels(x, windowHeight - y - 1, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    glReadPixels(x, windowHeight - y - 1, 1, 1, GL_STENCIL_INDEX, GL_UNSIGNED_INT, &index);

    printf("Clicked on pixel %d, %d, color %02hhx%02hhx%02hhx%02hhx, depth %f, stencil index %u\n",
           x, y, color[0], color[1], color[2], color[3], depth, index);

    move();
}

Inside my render function I have set the window coordinates to (0, 0) using glOrtho:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, windowWidth, windowHeight, 0, 0, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

If my object is at (550, 900), onMouse() reports it as (556, 650), while if I click on an object at position (375, 475), onMouse() returns (376, 342). Basically there is a huge difference in the y axis value returned by the function. How can I get the correct screen coordinates?
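One thing worth checking: GLUT reports mouse coordinates in window pixels with the origin at the top-left, the glOrtho(0, w, h, 0, ...) projection above uses the same top-left convention, but glReadPixels addresses the framebuffer from the bottom-left, and the window may have been resized away from the hard-coded 1250x1000 (a y mismatch like 900 vs 650 is consistent with the window being shorter than assumed). A sketch of the two conversions, taking the *current* window size as a parameter rather than a constant (in GLUT you would query it with glutGet(GLUT_WINDOW_WIDTH) / glutGet(GLUT_WINDOW_HEIGHT)); the function names are my own:

```cpp
#include <cassert>

struct Pixel { int x, y; };

// Window-space click -> framebuffer row for glReadPixels (bottom-left origin).
Pixel mouseToFramebuffer(int x, int y, int winHeight) {
    return { x, winHeight - y - 1 };
}

// Window-space click -> ortho coordinates when the ortho box is
// (0, worldW) x (worldH, 0): same top-left convention, so it is just a
// proportional rescale from the current window size, with no y flip.
Pixel mouseToOrtho(int x, int y, int winW, int winH, int worldW, int worldH) {
    return { x * worldW / winW, y * worldH / winH };
}
```

If the window size exactly matches the ortho extents, mouseToOrtho is the identity; a discrepancy only shows up once the window has been resized.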
1 | Directional and orientation problem I have drawn 5 tentacles which are shown in red. I have drew those tentacles on a 2D Circle, and positioned them on 5 vertices of the that circle. BTW, The circle is never be drawn, I have used it to simplify the problem. Now I wanted to attached that circle with tentacles underneath the jellyfish. There is a problem with the current code but I don't know what is it. You can see that the circle is parallel to the base of the jelly fish. I want it to be shifted so that it be inside the jelly fish. but I don't know how. I tried to multiply the direction vector to extend it but that didn't work. One tentacle is constructed from nodes Get the direction of the first tentacle's node 0 to node 39 of that tentacle Vec3f dir m tentacle 0 gt geNodesPos() 0 m tentacle 0 gt geNodesPos() 39 Draw the circle with tentacles on it Vec3f pos m SpherePos drawCircle(pos,dir,30,m tentacle.size()) for (int i 0 i lt m tentacle.size() i ) m tentacle i gt Draw() Draw the jelly fish, and orient it on the 2D Circle gl pushMatrices() Quatf q assign quaternion to rotate the jelly fish around the tentacles q.set(Vec3f(0, 1,0),Vec3f(dir.x,dir.y,dir.z)) tanslate it to the position of the whole creature per every frame gl translate(m SpherePos.x,m SpherePos.y,m SpherePos.z) gl rotate(q) draw the jelly fish at center 0,0,0 drawHemiSphere(Vec3f(0,0,0),m iRadius,90) gl popMatrices() |
1 | OpenGL Better to switch out material or shader for changing object colors? I'm fairly new to shaders and OpenGL, so bear with me please, I just want to make sure I'm doing it correctly! I'm using LibGDX in order to create a simple 3D diagram for my company. I have a few models that I need to display as grey until they are active, at which point they fade to a brighter blue, probably gaining some emission value so that they're brighter. Now, when I export my models from Blender to .fbx and then run fbx-conv to generate .g3dj files, I get output like the following:

"materials": [
  {
    "id": "glowy blue material",
    "ambient": [0.000000, 0.000000, 0.000000],
    "diffuse": [0.000000, 0.753256, 1.000000],
    "emissive": [0.000000, 0.753256, 1.000000],
    "opacity": 0.592209,
    "specular": [0.000000, 0.759441, 1.000000],
    "shininess": 9.607843,
    "textures": [
      {
        "id": "Texture",
        "filename": "heat exchange uv edge highlight.png",
        "type": "DIFFUSE"
      }
    ]
  }
]

Which is all good and displays what I want, but I've been reading about shaders the past few days and see that in a lot of cases LibGDX suggests extending the default shader and making tweaks that way. From a computational point of view, what is the best way to go about switching colors of models? Extending the default shader with some logic like "if x is inactive, render as greyscale, else render as the normal blue material"? Or should I create a greyscale version of the material and switch them out? (I'm not quite sure how to do this; also, because I apply the material in the g3dj file, could I do this in Java instead of the g3dj file? Does it cause a performance hit? If anyone could point me in the direction of some documentation somewhere...) Any help would be appreciated. Thanks!
1 | How do games make fire and smoke effects? I have been searching the internet for information about particle systems and fire effects, but I haven't found any good answers. In some games I have noticed that a sort of movie clip is shown as the fire, which looks decent, but not for fires close to the viewer. Please let me know how to make realistic fire and smoke effects in a game. If you have good sample code or a good description of how to make these kinds of fires, please share that too.
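The usual real-time approach is a particle system: many small camera-facing textured quads (billboards), each rising, slowing, and fading over its lifetime, drawn with additive blending for fire and ordinary alpha blending for smoke. A minimal CPU-side update sketch of that idea (the structure, spawn values, and drag factor are my own illustrative choices):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Particle {
    float x, y, z;    // position
    float vx, vy, vz; // velocity
    float life;       // seconds remaining; <= 0 means dead
};

// Advance every particle by dt seconds; dead particles respawn at the
// emitter (the origin here) with an upward velocity and a fresh lifetime.
void updateParticles(std::vector<Particle>& ps, float dt) {
    for (Particle& p : ps) {
        p.life -= dt;
        if (p.life <= 0.0f) {
            p = {0, 0, 0, 0, 1.0f, 0, 2.0f}; // respawn: rise at 1 unit/s, live 2 s
            continue;
        }
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
        p.vy *= 0.99f; // drag: hot gas decelerates as it rises
    }
}
```

At render time each particle's remaining life typically drives its size, colour (white-hot to red to smoke-grey), and opacity; per-particle randomness in spawn velocity and lifetime is what makes the flame look alive.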
1 | OpenGL Mapped Memory Shader Source Is there any way to get a pointer to a newly created shader object's source storage? I'd like to load a shader directly from a file into my shader object instead of loading into an intermediate variable and then using glShaderSource(). I haven't been able to find any information regarding this, only regarding buffers.
1 | How to decorate the floor with a grid? I got this photo from mixamo.com I want to draw grid lines similar to this on my floor. I'm sure this is easy by using textures, but I'm trying to avoid using textures as much as possible. Is there a way to implement these floor patterns in OpenGL 3.3? |
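One texture-free option available in OpenGL 3.3 is to generate the grid procedurally in the fragment shader from world-space coordinates. A sketch (GLSL 330; worldPos is assumed to come from the vertex shader, and the cell size, line width behaviour, and colours are my own illustrative choices):

```glsl
#version 330 core
in vec3 worldPos;
out vec4 fragColor;

void main() {
    float cellSize = 1.0;
    vec2 coord = worldPos.xz / cellSize;
    // Distance to the nearest grid line, divided by the screen-space
    // derivative so the lines stay roughly one pixel wide at any zoom.
    vec2 grid = abs(fract(coord - 0.5) - 0.5) / fwidth(coord);
    float line = min(grid.x, grid.y);
    float intensity = 1.0 - min(line, 1.0);
    fragColor = vec4(mix(vec3(0.2), vec3(0.8), intensity), 1.0);
}
```

The fwidth-based division is what keeps the pattern anti-aliased without mipmapped textures; a second, coarser set of lines (like the bold lines in the screenshot) can be layered the same way with a larger cellSize.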
1 | OpenGL Move camera regardless of rotation For a 2D board game I'd like to move and rotate an orthogonal camera in coordinates given in a reference system (window space), but simply can't get it to work. The idea is that the user can drag the camera over a surface, rotate and scale it. Rotation and scaling should always be around the center of the current viewport. The camera is set up as:

gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrtho(-width / 2, width / 2, -height / 2, height / 2, nearPlane, farPlane);

where width and height are equal to the viewport's width and height, so that 1 unit is one pixel when no zoom is applied. Since these transformations usually mean (scaling and) translating the world, then rotating it, the implementation is:

gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glRotatef(rotation, 0, 0, 1);  // e.g. 45
gl.glTranslatef(x, y, 0);         // e.g. 10 for 10px right, -2 for 2px down
gl.glScalef(zoomFactor, zoomFactor, zoomFactor); // e.g. scale by 1.5

That however has the nasty side effect that translations are transformed as well, that is, applied in world coordinates. If I rotate by 90 degrees and translate again, the X and Y axes are swapped. If I reorder the transformations so they read:

gl.glTranslatef(x, y, 0);
gl.glScalef(zoomFactor, zoomFactor, zoomFactor);
gl.glRotatef(rotation, 0, 0, 1);

the translation is applied correctly (in reference space, so translation along x always visually moves the camera sideways), but rotation and scaling are now performed around the origin. It shouldn't be too hard, so what is it I'm missing?
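One way to reconcile the two orderings, sketched as a hypothesis: keep the original rotate-then-translate order (so rotation and zoom stay centred on the viewport), but convert each screen-space drag delta into world space by applying the *inverse* of the current camera rotation before accumulating it into x/y. With modelview = R * T, a change dt in the translation moves the image by R * dt, so to get a desired screen motion d you need dt = R^-1 * d. A pure-math sketch of that conversion (names are my own):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Rotate a screen-space drag delta by the inverse (negative) of the
// current camera rotation, yielding the world-space delta to add to
// the camera's x/y. With zoom, you would additionally divide by the
// zoom factor so a one-pixel drag stays a one-pixel motion on screen.
Vec2 screenDeltaToWorld(Vec2 d, float cameraRotDeg) {
    float a = -cameraRotDeg * 3.14159265f / 180.0f; // inverse rotation
    return { d.x * std::cos(a) - d.y * std::sin(a),
             d.x * std::sin(a) + d.y * std::cos(a) };
}
```

With the camera rotated 90 degrees, a drag of (10, 0) on screen becomes a world-space delta along the other axis, which is exactly the axis swap the question describes, now applied deliberately so the on-screen motion stays aligned with the drag.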
1 | OpenGL strange rendering problem when buffers have different sizes I have encountered a very odd error in my program, "odd" in the sense that everything the API says suggests that the error should not occur. I have a bunch of 2D un-indexed vertex data, and I want to render it as lines. So far, so good. Then, I wanted each vertex to have its own (RGB) color, so I generate a color for each vertex. For simplicity, I chose red. Works fine, except now only 2/3 of the points are being rendered! The problem arises from the fact that each vertex's position data consists of only 2 numbers, whereas the color data consists of 3 numbers. So, the "position" buffer has 2 elements per vertex while the "color" one has 3 elements per vertex. I thought that using glVertexAttribPointer to tell this information to OpenGL would be enough, but it turns out it's not. In fact, if I say that the color data has only 2 elements per vertex, using glVertexAttribPointer(vertexColorID2, 2, GL_DOUBLE, GL_FALSE, 0, (void*)0) (as opposed to 3), it renders all the points, except now I can only specify two numbers for the RGB color, so I can't get the right color.
The full code of the issue is below:

glUseProgram(programID2);

// draw the graph
graph_data = graphData();
std::vector<double> graphcolordata(graph_data.size() / 2 * 3);
for (int i = 0; i < graph_data.size(); i += 3)
    graphcolordata[i] = 1;

glEnableVertexAttribArray(vertexPosition_modelspaceID2);
glBindBuffer(GL_ARRAY_BUFFER, graphbuffer);
glBufferData(GL_ARRAY_BUFFER, graph_data.size() * sizeof(GLdouble), &graph_data[0], GL_STREAM_DRAW);
glVertexAttribPointer(vertexPosition_modelspaceID2, 2, GL_DOUBLE, GL_FALSE, 0, (void*)0);

glEnableVertexAttribArray(vertexColorID2);
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer2);
glBufferData(GL_ARRAY_BUFFER, graphcolordata.size() * sizeof(GLdouble), &graphcolordata[0], GL_STREAM_DRAW);
glVertexAttribPointer(vertexColorID2, 3, GL_DOUBLE, GL_FALSE, 0, (void*)0);

glDrawArrays(GL_LINES, 0, graph_data.size() / 2);

glDisableVertexAttribArray(vertexPosition_modelspaceID2);
glDisableVertexAttribArray(vertexColorID2);

Note: programID2 is my basic shader program, and the following variable definitions were previously used:

GLuint vertexPosition_modelspaceID2 = glGetAttribLocation(programID2, "vertexPosition_modelspace");
GLuint vertexColorID2 = glGetAttribLocation(programID2, "vertexColor");

Edit: Incredibly stupid error, figured it out immediately after posting when it had previously stumped me for half an hour.

std::vector<double> graphcolordata(graph_data.size() / 2 * 3);
for (int i = 0; i < graph_data.size(); i += 3)
    graphcolordata[i] = 1;

should be

std::vector<double> graphcolordata(graph_data.size() / 2 * 3);
for (int i = 0; i < graphcolordata.size(); i += 3)
    graphcolordata[i] = 1;

When this initialization is fixed, it works fine. I would delete this, but I do not see how.
1 | In theory, how much gain should I expect from instancing in OpenGL for a small scene with large meshes? I am trying to implement one-pass stereo rendering in OpenGL for a VR application that I am building. I implement this through instancing, so I make one render call (glDrawElementsInstanced with an instance count of 2) and do the clipping in the vertex shader, but I am not seeing much gain in performance. My scene has just 2 massive meshes, each rendered twice (2 instances), one per eye, totaling around 3.5m triangles for each eye, or 7m per frame. Obviously I can reduce the triangle count by doing some decimation, but my question here is: in theory, should I expect a significant gain in performance by using instancing in this scenario, even though the number of draw calls and state changes in the entirety of my application is only around 15 per pass, or 30 per frame? Or, to rephrase the question: does OpenGL instancing improve performance mainly by reducing state changes, or by making fewer calls to glDraw*?
1 | Aiming with a crosshair with a lot of polygons triangles I'm working on a 3D kind of game where I'll eventually be able to modify the shapes present in the environment by pulling their faces with a crosshair. The thing is that I don't know how to achieve ray quad or ray triangle collision. I did it once with a teacher at school, but he didn't have much time and it was not well constructed (and I lost that code). I was also wondering how I could check for these collisions against thousands of shapes, because I know this is quite resource demanding, and doing it for 1000 spheres/cubes/freeform shapes is not easy. If anyone could help me push my knowledge a bit further, it'd be very much appreciated! I work in Java using LWJGL, so OpenGL.
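For the ray/triangle part, the usual starting point is the Möller-Trumbore intersection test, sketched below (the struct and function names are my own). For thousands of shapes you would not run this against every triangle: first cull with a cheap bounding volume per object (sphere or AABB), or a spatial structure like a BVH or octree, and only test triangles of the survivors.

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller-Trumbore: returns true on a hit and writes the distance t
// along the ray; u/v are barycentric coordinates inside the triangle.
bool rayTriangle(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t) {
    const float EPS = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false; // ray parallel to the plane
    float inv = 1.0f / det;
    V3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > EPS; // hit must be in front of the ray origin
}
```

The crosshair ray itself comes from unprojecting the screen centre through the inverse view-projection matrix; among all hit triangles you keep the one with the smallest t.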
1 | How can I access an OpenGL buffer from within a GLSL 330 shader? I am currently attempting to create a simulation using OpenGL's GPU acceleration for both the simulation computations and the rendering. The rendering portion of my simulation is complete and working, but I still need to create the simulation portion. To do this, I want to use transform feedback, for I am rendering as GL_POINTS and each point needs to be moved in relation to every other point each frame. This maps nicely onto a vertex shader plus transform feedback abstraction. The issue comes from the fact that my input data (the initial positions of each point) is in a buffer, and each vertex shader invocation in the transform feedback render only has access to that specific entry in my buffer, while I need access to all entries in my buffer. How can I solve this issue? I can't use compute shaders because I am targeting OpenGL 3.3. I have also considered taking the buffer data, copying it to the CPU, and then back to the GPU in the form of a texture; however this is likely to be incredibly inefficient, especially as I begin scaling up how many points I am working with. This is further exacerbated by the fact that textures don't allow me to get pixels at integer coordinates, and passing whole numbers as floating point coordinates is likely to cause floating point error significant enough to be a problem, especially with texture filtering (I have thousands of points for now, but that will increase to whatever my GPU can handle once I have it working).
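Worth noting for this scenario: OpenGL 3.3 core includes buffer textures (core since 3.1), which expose a buffer's contents to any shader stage, and GLSL's texelFetch takes an *integer* index, so no filtering or float-coordinate rounding is involved. On the CPU side the existing positions VBO can be attached directly with glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, positionsVbo) (no CPU round-trip, though three-component formats like GL_RGB32F are not guaranteed for buffer textures until GL 4.0, so padding to vec4 is the safe layout). A vertex-shader sketch under those assumptions (names and the placeholder update are my own):

```glsl
#version 330 core
uniform samplerBuffer allPositions; // buffer texture over every point's position
uniform int pointCount;
in vec3 position;                   // this invocation's own point
out vec3 newPosition;               // captured by transform feedback

void main() {
    vec3 accel = vec3(0.0);
    for (int i = 0; i < pointCount; ++i) {
        vec3 other = texelFetch(allPositions, i).xyz; // exact, integer index
        // ... accumulate this point's interaction with `other` here ...
    }
    newPosition = position + accel; // placeholder update rule
}
```

Note the loop is O(n) per point, so the whole update is O(n^2) per frame regardless of how the data is fetched; that, rather than the fetch mechanism, will be the limit as the point count grows.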
1 | How to render water reflections on multiple heights I'm making a voxel game in OpenGL, and am trying to find a way to render semi-realistic water (at least partially good looking; it doesn't need to be strictly scientifically accurate). All sources I find on the topic seem to assume that all the water in the scene has the same height, which is not the case in a voxel game, where you can have (and see at the same time) multiple bodies of water, each with a different height. The tricky detail lies in the multi-height part of the problem, which makes it really difficult to render the scene in real time using typical water reflection algorithms aimed at a single water level. As an example, consider the following image. The dirt pillar at the left should be reflected in the water in front of it, as should the mountains near the horizon. The water adjacent to the pillar is 20m higher than the general sea water, so traditional reflection methods (which assume a fixed water level) simply would not work in this case. Considering that a lot of different water levels can be seen in a single scene, performing a traditional water reflection pass for each water body height is not an option if we want to achieve an acceptable frame rate. How can I overcome this problem? Is there any way to render it in real time? If it isn't actually possible, what approximation should I take to get a good looking result, even if the reflections aren't 100% realistic?
1 | Rendering without VAO's VBO's? I am trying to port a demo I found on PositionBasedDynamics. It has a generic function which does the rendering, and in their example it works, but they don't generate or bind any Vertex Array Object or Vertex Buffer Object even though they use core OpenGL and shaders. The function is this:

template <class PositionData>
void Visualization::drawTexturedMesh(const PositionData &pd, const IndexedFaceMesh &mesh,
                                     const unsigned int offset, const float *const color, GLuint text)
{
    // draw mesh
    const unsigned int *faces = mesh.getFaces().data();
    const unsigned int nFaces = mesh.numFaces();
    const Vector3r *vertexNormals = mesh.getVertexNormals().data();
    const Vector2r *uvs = mesh.getUVs().data();

    std::cout << nFaces << std::endl;

    glBindTexture(GL_TEXTURE_2D, text);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_REAL, GL_FALSE, 0, &pd.getPosition(offset)[0]);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_REAL, GL_FALSE, 0, &uvs[0][0]);
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 3, GL_REAL, GL_FALSE, 0, &vertexNormals[0][0]);

    glDrawElements(GL_TRIANGLES, (GLsizei)3 * mesh.numFaces(), GL_UNSIGNED_INT, mesh.getFaces().data());

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(2);
    glBindTexture(GL_TEXTURE_2D, 0);
}

I did the same thing and it renders a white screen. I used RenderDoc to check what's going on and it shows this: https://puu.sh/rM1wB/80620ade8d.png. How can they get it to work while I can't?
1 | OpenGL Planet Generation Simple Matrix Issue (Planet Spins With Mouse) I originally asked this question on StackOverflow amp was directed here by a commenter. Im currently working on a OpenGL planet rendering. I'm using the Tessellation pipeline. So far things are going very well bar one issue. It's at the stage where I've been banging my head off it for ages and feel like progress isnt happening. First of all here is a gif of what I'm dealing with. Essentially my problem is that whenever the mouse is moved the planet rotates as if its "looking" at where the camera is pointing. There are some graphical issues but they are due to me simply repeating the same heightmap across the whole cubemap. Since it doesnt match up on the sides there are clear seams. Below is my evaluation shader void main(void) vec4 p0 gl in 0 .gl Position vec4 p1 gl in 1 .gl Position vec4 p2 gl in 2 .gl Position vec3 p gl TessCoord.xyz Normal FS in mat3(transpose(inverse((MV)))) (p0 p.x (p1 p.y) p2 p.z).xyz float displacment texture(tex1, Normal FS in).r 800 gl Position MVP (p0 p.x (p1 p.y) p2 p.z) (vec4(vec3(displacment,displacment,0) normalize(Normal FS in),1)) Its pretty simple. The shader calculates a normal for the output vertex. This is then used to grab a displacement value from the heightmap which is a cube map. Then GL Position is calculated. What I've been trying to work out is if my problem lies in the shaders or in the rest of the package. My attempts have largely been tinkering. I've moved all normal related stuff into the evaluation shader rather then calculating it in the vertex shader and passing them through to the control and evaluation shaders. The issue only occurs when the mouse is moved. Any suggestions would be fantastic hopefully all I'll need is a pointed in the right direction. Edit Doing the inverse transpose of the modelview matrix OR view matrix yields the result in the gif. 
Using the model matrix leads to distortion of the normals when the camera is moved, such that the terrain bends about the place. |
1 | Can't use SFML sprite drawing and OpenGL rendering at the same time I'm using some SFML built-in functions to draw sprites and text as an overlay on top of some OpenGL rendering in an SFML RenderWindow. The OpenGL rendering appears fine until I add the code to draw the sprites or text; drawing either causes the OpenGL output to disappear. The following code shows what I'm trying to do:

sf::RenderWindow window(sf::VideoMode(viewport.width, viewport.height, 32), "SFML Window");

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, viewport.width, 0, viewport.height, 0, 1);

while (window.pollEvent(Event))
{
    // event handling...
}

// begin drawing
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glBegin(GL_TRIANGLES);
glColor3f(col.x, col.y, col.z);
for (int i = 0; i < 3; i++)
    glVertex2f(pos.x + verts[i].x, pos.y + verts[i].y);
glEnd();

// adding this line causes all the previous OpenGL triangles not to appear
window.draw("Sometext");

window.display();
1 | AlphaToCoverage Alpha Blending Artifacts I'm experiencing a strange problem using OpenGL SampleAlphaToCoverage mode. There are rendering artifacts when using alpha blending on pixels that have been rendered using alpha to coverage and I can't explain why. Here's the basic setup Render the world. Some batches of vertices are rendered using alpha to coverage. Blending is never enabled at this stage. Disable depth testing an render a screen overlay. At this point I'm using alpha blending. Alpha to coverage is never enabled at this stage. I'm using alpha to coverage on the airplanes glass window. This is what the scene looks like Everything works as expected so far. However, as soon the screen overlays alpha blending is drawn at a screen location where alpha to coverage has been previously active, I'm getting this Note that there is no error as long as blending affects different pixels than alpha to coverage! I also tested the following cases Use alpha to coverage for the overlay instead of alpha blending No artifacts. Clear the entire screen before rendering the screen overlay No artifacts. Clear all buffers except the color buffer Artifacts Clear only the color buffer No artifacts The screen overlay item in question consists of a single textured Quad. The texture is 256x256. You can download the test application at http www.fetzenet.de atc error.zip Camera controls are WASD Mouse (Hold left or right button) Does anyone have a clue whats going on here? Also, can anyone reproduce this behavior? |
1 | How can I get started programming OpenGL on Mac OS X? I'm trying to start OpenGL programming on a Mac, which brings me into unknown territory on a lot of things. During the day, I'm a Web Developer, working in C and before that in PHP and Delphi, all on Windows. During the night, I try to pick up Mac OpenGL skills, but everything is so different. I've been trying to look for some books, but the OpenGL books are usually for iOS (tons of them out there) and the Mac Books usually cover "normal" application Development. I want to start simple with Pong, Tetris and Wolfenstein. I see that there are a bunch of different OpenGL Versions out there. I know about OpenGL ES 1 amp 2, but I don't know about the "big" OpenGL Versions which ones are commonly supported on 10.6 and 10.7 on current (2010 2011) Macs? Are there any up to date (XCode 4) books or tutorials? I don't want to use a premade Engine like Unity yet again, I know next to nothing about any Mac development. |
1 | Shadow map shimmering, indexing outside the shadow map I have tried to reduce shadow shimmering/flickering using the technique described here: https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx. It works as I want and the shimmering is reduced, but sometimes I get artifacts. It looks like my code tries to index space outside the shadow map. The article above mentions this, but I didn't find a solution. When I played with the code I also got black strips on the corners. Code:

// reduce shadow shimmering/flickering
Vector2 vecWorldUnitsPerTexel = Vector2(D.x() / (float)D.shadowMapSize(),
                                        D.y() / (float)D.shadowMapSize());

// get only x and y dimensions
Vector2 min2D = min.vector2(), max2D = max.vector2();

min2D /= vecWorldUnitsPerTexel;
min2D = Round(min2D);
min2D *= vecWorldUnitsPerTexel;

max2D /= vecWorldUnitsPerTexel;
max2D = Round(max2D);
max2D *= vecWorldUnitsPerTexel;

min.set(min2D, min.z);
max.set(max2D, max.z);

// crop matrix, based on this article:
// https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch10.html
Vector scale;
Vector offset;
scale.x = 2.0f / (max.x - min.x);
scale.y = 2.0f / (max.y - min.y);
scale.z = 1.0f / (max.z - min.z);
offset.x = -0.5f * (max.x + min.x) * scale.x;
offset.y = -0.5f * (max.y + min.y) * scale.y;
offset.z = -min.z * scale.z;

Matrix4 m;
m.x = Vector4(scale.x, 0, 0, 0);
m.y = Vector4(0, scale.y, 0, 0);
m.z = Vector4(0, 0, scale.z, 0);
m.w = Vector4(offset.x, offset.y, offset.z, 1.0f);

I think that I should store a slightly larger area in the depth map, but I'm not sure how to do this. I tried to change the scale of the crop matrix but it doesn't help. EDIT: It seems I've found a solution. When rounding (or flooring) the min and max values, I subtract one from the min value and add one to the max value. This makes the shadow map contain a slightly larger area and I don't see any artifacts.
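The fix described in the edit can be condensed into one small helper: snap the light-space bounds to whole shadow-map texels and pad by one texel on each side, so the cropped region moves in texel-sized steps and always covers slightly more than the caster's bounds. A sketch (names are my own; the question's Round becomes an explicit floor/ceil so the padding always expands outward):

```cpp
#include <cassert>
#include <cmath>

struct Range { float min, max; };

// Snap [mn, mx] outward to the shadow-map texel grid, plus one texel of
// padding on each side, so the crop never lands just inside the casters.
Range snapToTexels(float mn, float mx, float worldSize, int shadowMapSize) {
    float unitsPerTexel = worldSize / (float)shadowMapSize;
    mn = std::floor(mn / unitsPerTexel) - 1.0f; // pad one texel below
    mx = std::ceil (mx / unitsPerTexel) + 1.0f; // pad one texel above
    return { mn * unitsPerTexel, mx * unitsPerTexel };
}
```

Because the snapped bounds only ever move in whole-texel steps, the rasterized shadow edges stay fixed relative to the texel grid as the camera moves, which is the anti-shimmer property the MSDN technique is after.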
1 | How do I render using VBOs? I'm trying to render a hemisphere in OpenGL; the problem is that the hemisphere isn't rendered at all, or only part of it. I initialize it, then at each frame I draw it using the following code. I'm trying to use a VBO for drawing it.

void Jellyfish::Init_HemiSphere(const float radius, const int segments)
{
    m_iSegements = segments;
    m_fVerts     = new float[(segments + 1) * 2 * 3];
    m_fNormals   = new float[(segments + 1) * 2 * 3];
    m_fTexCoords = new float[(segments + 1) * 2 * 2];

    for (int j = 0; j < segments / 2; j++)
    {
        float theta1 = j * 2 * 3.14159f / segments - (3.14159f);
        float theta2 = (j + 1) * 2 * 3.14159f / segments - (3.14159f);

        for (int i = 0; i < segments; i++)
        {
            Vec3f e, p;
            float theta3 = i * 2 * 3.14159f / segments;

            e.x = math<float>::cos(theta1) * math<float>::cos(theta3);
            e.y = math<float>::sin(theta1);
            e.z = math<float>::cos(theta1) * math<float>::sin(theta3);
            p = e * radius;
            m_fNormals[i * 3 * 2 + 0] = e.x;
            m_fNormals[i * 3 * 2 + 1] = e.y;
            m_fNormals[i * 3 * 2 + 2] = e.z;
            m_fTexCoords[i * 2 * 2 + 0] = 0.999f - i / (float)segments;
            m_fTexCoords[i * 2 * 2 + 1] = 0.999f - 2 * j / (float)segments;
            m_fVerts[i * 3 * 2 + 0] = p.x;
            m_fVerts[i * 3 * 2 + 1] = p.y;
            m_fVerts[i * 3 * 2 + 2] = p.z;

            e.x = math<float>::cos(theta2) * math<float>::cos(theta3);
            e.y = math<float>::sin(theta2);
            e.z = math<float>::cos(theta2) * math<float>::sin(theta3);
            p = e * radius;
            m_fNormals[i * 3 * 2 + 3] = e.x;
            m_fNormals[i * 3 * 2 + 4] = e.y;
            m_fNormals[i * 3 * 2 + 5] = e.z;
            m_fTexCoords[i * 2 * 2 + 2] = 0.999f - i / (float)segments;
            m_fTexCoords[i * 2 * 2 + 3] = 0.999f - 2 * (j + 1) / (float)segments;
            m_fVerts[i * 3 * 2 + 3] = p.x;
            m_fVerts[i * 3 * 2 + 4] = p.y;
            m_fVerts[i * 3 * 2 + 5] = p.z;
        }
    }

    glGenBuffers(3, &SVboId[0]);

    // Vertex
    glBindBuffer(GL_ARRAY_BUFFER, SVboId[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(*m_fVerts) * (m_iSegements + 1) * 2 * 3,
                 m_fVerts, GL_DYNAMIC_DRAW);

    // Normals
    glBindBuffer(GL_ARRAY_BUFFER, SVboId[1]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(*m_fNormals) * (m_iSegements + 1) * 2 * 3,
                 m_fNormals, GL_DYNAMIC_DRAW);
}

void Jellyfish::drawHemiSphere()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(*m_fVerts) * (m_iSegements + 1) * 2 * 3, m_fVerts);

    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, SVboId[0]);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    glEnableClientState(GL_NORMAL_ARRAY);
    glBindBuffer(GL_NORMAL_ARRAY, SVboId[1]);
    glNormalPointer(GL_FLOAT, 0, 0);

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, sizeof(*m_fTexCoords) * (m_iSegements + 1) * 2 * 3, m_fTexCoords);

    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(*m_fNormals) * (m_iSegements + 1) * 2 * 2, m_fNormals);

    for (int j = 0; j < m_iSegements / 2; j++)
        for (int i = 0; i < m_iSegements; i++)
            glDrawArrays(GL_TRIANGLES, 0, (m_iSegements + 1) * 2);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
}
1 | glm perspective() with wide field of view I'm still pretty new to this GL stuff, but I've managed to put together some C and C++ code that allows me to construct simple 3D objects consisting of line segments and display them on my laptop using GLFW and the glm library. I've hooked in enough keyboard callbacks to allow me to navigate the position and orientation of an observer via glm::lookAt(). A couple of the keys allow me to grow or shrink the field of view value that gets passed to glm::perspective(), like so:

float field_of_view; // degrees, not radians

glm::mat4 proj = glm::perspective(
    glm::radians(field_of_view), // field of view
    1920.0f / 1080.0f,           // aspect ratio
    0.0f, 100.0f                 // near and far clipping planes
);

From my experience as an amateur photographer, I know that if one takes a photo with a wide angle lens of a scene that contains straight lines near the periphery of the field of view, the physically straight lines will appear bent in the photo. They bow outward so as to create a more circular shape around the center of view. But when I modify the field of view that gets passed to glm::perspective() so that it is a high value (approaching 180 degrees), with an orientation such that straight line segments appear near the edges of the field, the rendered line segments never bend like I'd see with a real camera and wide angle lens. Instead, the lines remain straight but appear to get very long. This creates very unnatural looking images. Is there some other glm function that I should call instead in order to make the rendered lines curved like I'd expect to see in real life?
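The behaviour being observed is inherent to the projection, not a glm quirk: glm::perspective is a rectilinear (pinhole) projection, and a pinhole projection maps world-space straight lines to straight lines at *any* field of view; they only stretch toward the edges. The barrel ("bowed outward") look of wide-angle photos comes from lens distortion, which would have to be added separately (e.g. a post-process pass that warps the image by distance from the centre, or a fisheye mapping). A tiny pure-math check of the straightness property, under a simplified pinhole model of my own (camera looking down -z, square aspect):

```cpp
#include <cassert>
#include <cmath>

struct P2 { float x, y; };

// Simplified pinhole projection: focal length from the vertical FOV,
// then divide by depth. This is the core of what glm::perspective does
// to x and y after the perspective divide.
P2 project(float x, float y, float z, float fovDeg) {
    float f = 1.0f / std::tan(fovDeg * 3.14159265f / 360.0f); // focal length
    return { f * x / -z, f * y / -z }; // camera looks down -z
}
```

Projecting three collinear world points with a 170-degree FOV and checking the 2D cross product of the image points confirms they stay collinear, i.e. no amount of widening the FOV will ever bow a straight edge.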
1 | Can't get world position from reverse Z buffer I'm using this solution to render using a reversed Z buffer. This looks fine and completely fixes all my z fighting, but it breaks what I use in shader to derive the world position from the depth for various purposes such as deferred lighting and fog (it has worked for a regular projection matrix)

vec4 screenSpacePosition = vec4(texcoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1);
vec4 worldSpacePosition = invProjView * screenSpacePosition;
vec3 finalPosition = worldSpacePosition.xyz / worldSpacePosition.w;
return finalPosition;

I've thought that this was due to this using a single view projection matrix, so I've also tried this

vec4 clipSpacePosition = vec4(texcoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
vec4 viewSpacePosition = invProj * clipSpacePosition;
viewSpacePosition /= viewSpacePosition.w;
vec4 worldSpacePosition = invView * viewSpacePosition;
return worldSpacePosition.xyz;

That must be suboptimal performance wise, but it doesn't work either anyway. Any idea what I'm missing? |
1 | SDL2, OpenGL, Nvidia laptop screen tearing EDIT 2017 05 14 Issue still active to this very day, see geforce forum link in main question. Note 3 "Stoltverd" has posted a "fix guide" on the geforce forum. It has workarounds for non OpenGL games. https forums.geforce.com default topic 1008527 geforce mobile gpus guide diagonal screen tearing problems with optimus nvidias control panel not working like it should need a fix get in here Note 4 My workaround for OpenGL games use HDMI output to an external monitor. (In my case, the only game I care about is my own, and I use a big cheap HDMI TV for demos.) EDIT 2016 03 06. Note 1 Apparently this is a known issue with Nvidia. If you've found your way here due to the same problem, please stoke the collective "please fix it" by filing a "question" at https nvidia.custhelp.com app ask. Note 2 Doesn't happen on an external monitor. Also doesn't happen on the internal display if an external monitor is mirroring. (Surprise.) I'm developing my game directly in C++, OpenGL, and SDL2. On an Alienware 15 R2 laptop with an Nvidia GTX 965M, I get screen tearing along a curious diagonal line like the one shown below. It goes across the window from top left to bottom right. The image below was taken during a screen fade in, so it shows just a brightness line, but the tearing happens between any two different frames. If my window is full screen, the line goes from the top left to the bottom right of the whole screen. The same code runs fine (no tearing) on all Macs, on the same laptop using the Intel integrated GPU, and on a desktop PC with an Nvidia 730. The same laptop looks fine with other games, Chrome, Shadertoy, &c. The main loop looks something like

// setup code
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetSwapInterval(1);
flags |= SDL_WINDOW_OPENGL;
SDL_CreateWindow(..., flags);

// main loop
while (1) {
    this->checkEvents();
    SDL_GL_SwapWindow(this->mainWindow);
    this->doGlDrawing();
}

Grepping forums, I've seen reports of some OpenGL games showing this on Alienware and other Dell laptops. All suggestions welcome! Thanks.
Edit 2016 02 20, some links to similar user side reports https forums.geforce.com default topic 903422 geforce mobile gpus diagonal screen tearing issues on gtx 860m 870m 960m 965m 970m 980m http en.community.dell.com owners club alienware f 3746 t 19658623 https www.reddit.com r Alienware comments 427ltv anyone have diagonal screen tearing when playing |