Camera scrolling and game boundaries. I am making a platformer game in JBox2D and LWJGL that has a scrolling camera, but I have hit a wall with the boundaries of the camera. Essentially what I have right now is a Box2D world that is being drawn on the screen, a viewport being moved by gluLookAt, and several boundaries specified by Box2D edges that also keep the player from moving out of bounds. It looks a bit like this. What I'd like to do is "push" the camera whenever it comes in contact with an edge, but otherwise keep it centered on the player. I've tried a couple of methods for this, such as finding the min and max width and height and doing a simple box hit test, moving the screen accordingly, but that has its limitations, such as when the world is not a rectangular shape. Another method I've tried is raytracing from one vertex to the other of each edge, and if the ray comes into contact with the screen, I "push" the screen out of the way, but that hasn't quite worked either:

    Fixture fixture = boundary.getFixtureList();
    Vec2 point = null;
    boolean hit = false;
    fixtureLoop:
    for (int i = 0; i < boundary.m_fixtureCount; i++) {
        EdgeShape edge = (EdgeShape) fixture.getShape();
        Vec2 origin = new Vec2(edge.m_vertex1.x * Main.OPENGL_SCALE, edge.m_vertex1.y * Main.OPENGL_SCALE);
        Vec2 direction = new Vec2(edge.m_vertex2.x * Main.OPENGL_SCALE, edge.m_vertex2.y * Main.OPENGL_SCALE).sub(origin);
        for (float t = 0; t < 1.0f; t += 0.001f) { // raytrace loop
            point = origin.add(direction.mul(t));
            // See if point is within screen bounds
            if (point.x > screenX - Main.WIDTH / 2 && point.x < screenX + Main.WIDTH / 2
                    && point.y > screenY - Main.HEIGHT / 2 && point.y < screenY + Main.HEIGHT / 2) {
                hit = true;
            }
            // If hit, find closest vertex
            if (hit) {
                float shortestDistance = Float.MAX_VALUE;
                Vec2 vertex = new Vec2();
                for (int v = 0; v < screenVertices.length; v++) {
                    float distance = point.sub(screenVertices[v]).length();
                    if (distance < shortestDistance) {
                        shortestDistance = distance;
                        vertex = screenVertices[v];
                    }
                }
                Vec2 displacement = point.add(vertex.sub(new Vec2(screenX, screenY)));
                screenX += displacement.x;
                screenY += displacement.y;
                break fixtureLoop;
            }
        }
        fixture = fixture.getNext();
    }

What I have here is my attempt, which I have played around with WAY too much. Essentially what I'm trying to do is detect if any edges are hitting the screen, find the point where they hit, find the closest screen vertex, subtract them to find the displacement between them, and apply it to the screen so that they're no longer in contact. (The screenVertices array is essentially 4 vertices that look like Vec2(screenX - WIDTH / 2, screenY - HEIGHT / 2), etc.) I was wondering if anyone could help me out; I'm not sure if I'm having logic problems, code problems, or both. Absolutely any help would be appreciated, thanks!
How should I render multiple objects in OpenGL? I am just getting my head wrapped around modern OpenGL. Let's say that I would like to draw 10 objects; how would I structure that? Would I call glDrawElements(...) in a for loop 10 times?

    void render() {
        shader.bind();   // calls glUseProgram
        bindVAO();       // calls glBindVertexArray
        for (int i = 0; i < 10; i++)
            glDrawElements(....);
        unbindVAO();
        shader.unbind();
    }
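A common structure is exactly that: bind the shared state once, then loop over the objects and change only the per-object uniforms between draw calls. Below is a minimal C++ sketch of that pattern, assuming each object shares one shader and VAO and differs only by a model matrix; the Object struct, modelLoc, and indexCount are illustrative names, not part of the code above.

```cpp
#include <glad/glad.h>               // any GL loader works; glad is an assumption
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <vector>

struct Object { glm::mat4 model; };  // illustrative: one transform per object

// Draw every object with a shared shader/VAO, varying only a uniform per draw.
void renderAll(GLuint program, GLuint vao, GLint modelLoc,
               GLsizei indexCount, const std::vector<Object>& objects)
{
    glUseProgram(program);
    glBindVertexArray(vao);
    for (const Object& obj : objects)
    {
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(obj.model));
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    }
    glBindVertexArray(0);
    glUseProgram(0);
}
```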
More than 8 lights without deferred shading. I want to know if there is an efficient technique for using more than 8 lights in a scene made with OpenGL and GLSL, without resorting to deferred shading. I have not implemented deferred shading because of its limitations: I would not be able to use transparency or antialiasing. If there is a good alternative, please describe it with an example. I use OpenGL 2.0. Thank you.
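One commonly cited alternative (offered here as a sketch, not the definitive answer) is multi-pass forward rendering: draw the scene once per light and accumulate the passes with additive blending, which works on plain OpenGL 2.0 and keeps transparency and MSAA options open. In the C++ sketch below, setLightUniforms and drawScene are placeholders for the application's own code.

```cpp
#include <GL/glew.h>
#include <vector>

struct Light { float pos[4]; float color[4]; };  // illustrative layout

void setLightUniforms(GLuint program, const Light& light); // app-specific
void drawScene();                                          // app-specific

// One pass per light; passes after the first are summed into the framebuffer.
void renderLit(GLuint program, const std::vector<Light>& lights)
{
    glUseProgram(program);
    glDepthFunc(GL_LEQUAL);              // allow re-rendering at equal depth
    for (size_t i = 0; i < lights.size(); ++i)
    {
        if (i == 1)                      // from the second pass on, add on top
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE);
        }
        setLightUniforms(program, lights[i]);
        drawScene();
    }
    glDisable(GL_BLEND);
}
```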
Smooth shading vs. flat shading: what's the difference in the models? I'm loading the exact same model with Assimp, except one was exported from Blender shaded smoothly and the other was exported from Blender shaded flat. Here are my results from loading both into my game. The flat-shaded model has 1968 vertices while the smooth-shaded model only has 671. Why is this happening? I don't understand why there would be fewer vertices when it's shaded smoothly.
Choosing a draw call when not reusing vertex data. Up until now I've always used glDrawElements, with my vertex and index buffers bound. From what I've read, glDrawElements is generally the way to go, and it works fine. However, I'm starting a new project and am questioning whether it is still the best fit. For the current project, I'm recomposing a 2D scene from scratch every frame: the player's character runs around, and I only draw the part of the world that is visible, generating 2D quads from the map tiles and rebuilding the vertex and index buffers. Given that (from my understanding) glDrawElements works best when you upload your VBOs once and make multiple draw calls against them, wouldn't it be better to use something else in this case, such as glDrawArrays? I've generally read that glDrawArrays is slower, but given that I'm having to push the vertex data every frame anyway, that might not be true here. I don't want to switch draw calls and assume I made the right choice when I could in fact be using the current one improperly to begin with, invalidating any performance improvements I see.
Why, after calling SetVideoMode as in the following, does nothing appear on my screen? I am trying to create an application that needs double buffering (for the purpose of vsync). I am using SDL.NET. From what I understood, in order to have double buffering I have to call SetVideoMode with the opengl parameter set to true. Here's the code I'm using:

    Video.Initialize();
    Video.GLSetAttribute(OpenGLAttr.DoubleBuffer, 1);
    Video.GLSetAttribute(OpenGLAttr.SwapControl, 1);
    Video.GLSetAttribute(OpenGLAttr.RedSize, 8);
    Video.GLSetAttribute(OpenGLAttr.GreenSize, 8);
    Video.GLSetAttribute(OpenGLAttr.BlueSize, 8);
    Video.GLSetAttribute(OpenGLAttr.DepthSize, 16);
    Video.SetVideoMode(VideoInfo.ScreenWidth, VideoInfo.ScreenHeight, false, true, true, true);

If the 4th parameter (bool opengl) is false, it works: a new fullscreen window is created and displayed (but I assume the OpenGLAttr values set above are meaningless in that case). If the 4th parameter is true, nothing happens. A new window gets created (at least, it appears in the list of open windows) but I cannot alt-tab into it and nothing appears on the screen. What am I doing wrong?
Freeglut functions missing. I'm currently learning OpenGL (in class) and we're using freeglut 2.8.2, which works just fine (using Visual Studio 2012). As an additional learning resource I'm reading the "OpenGL Superbible, 5th Edition". However, I've noticed that there are some functions that aren't available using the freeglut library. For example, gluGenerateMipmap is missing, and setting anisotropic filtering doesn't work:

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, fLargest);
    void glGenerateMipmap(GLenum target);

It simply does not know these functions (glTexParameterf is known, GL_TEXTURE_MAX_ANISOTROPY_EXT is not). I've googled a bit but didn't really find anything. And since glGenerateMipmap seems to belong to the standard OpenGL library (and not glu/glut), it should be available (I hope). Is there anything I'm missing?
How do I use glm::rotate with a Euler angle? I have a vec3 to represent my object's orientation/rotation, but the glm::rotate method expects a quaternion. If I just convert it to a quaternion like this:

    glm::quat rot = rotation;

the w value will just be zero, right? And then there won't be any changes in rotation. In my code I just want to be able to do rotation.x += 5.0f in the update method of an object. This is the method I'm using for my transformations:

    glm::mat4 GameObject::Transform(glm::mat4 model, glm::vec3 position, glm::vec3 scale, glm::quat angleAxis)
    {
        model = glm::translate(model, position);
        if (angleAxis.w > 360.0f)
            angleAxis.w -= 360.0f;
        else if (angleAxis.w < 0.0f)
            angleAxis.w += 360.0f;
        model = glm::rotate(model, angleAxis.w * toRadians, glm::vec3(angleAxis.x, angleAxis.y, angleAxis.z));
        model = glm::scale(model, scale);
        return model;
    }

Currently I'm just passing that rotation vec3 to the angleAxis quaternion parameter, but that obviously doesn't work. This is how I currently calculate my front, up, and right vectors:

    void GameObject::calculateCameraView()
    {
        front.x = cos(glm::radians(rotation.x)) * cos(glm::radians(rotation.y));
        front.y = sin(glm::radians(rotation.y));
        front.z = sin(glm::radians(rotation.x)) * cos(glm::radians(rotation.y));
        front = glm::normalize(front);
        right = glm::normalize(glm::cross(front, worldUp));
        up = glm::normalize(glm::cross(right, front));
        front.y = invertMouse ? front.y * -1 : front.y;
    }
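For what it's worth, glm can build a quaternion directly from a vec3 of Euler angles, which sidesteps the zero-w problem. A minimal C++ sketch, assuming the vec3 holds degrees and glm's (pitch, yaw, roll) convention is acceptable:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// Build a model matrix from position, Euler angles (degrees), and scale.
// glm::quat's vec3 constructor treats the vector as Euler angles in radians.
glm::mat4 makeTransform(const glm::vec3& position,
                        const glm::vec3& eulerDegrees,
                        const glm::vec3& scale)
{
    glm::quat q(glm::radians(eulerDegrees));   // Euler -> quaternion
    glm::mat4 model(1.0f);
    model = glm::translate(model, position);
    model *= glm::mat4_cast(q);                // quaternion -> rotation matrix
    model = glm::scale(model, scale);
    return model;
}
```

With this, `rotation.x += 5.0f` in the update method keeps working, and the conversion happens only when the matrix is built.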
Geometry shader and triangle adjacency. I'm currently trying to change my project to use GL_TRIANGLES_ADJACENCY instead of GL_TRIANGLES. Following this question, I have managed to construct my index buffer fine, but when it comes to the drawing stage I'm getting unexpected results. Bear in mind that I store my indices like so: (vertex1, adjacent1, vertex2, adjacent2, vertex3, adjacent3). Here is my geometry shader code:

    #version 330
    precision highp float;

    layout (triangles_adjacency) in;
    layout (triangle_strip, max_vertices = 3) out;

    smooth in vec2 vVaryingTexCoords[];
    smooth in vec3 vVaryingNormals[];
    smooth out vec2 gsUV;
    smooth out vec3 gsNormals;

    void main(void)
    {
        for (int i = 0; i < gl_in.length(); i++) {
            switch (i) {
                case 0:
                case 2:
                case 4:
                    gl_Position = gl_in[i].gl_Position;
                    gsUV = vVaryingTexCoords[i];
                    gsNormals = vVaryingNormals[i];
                    EmitVertex();
                    break;
                default:
                    break;
            }
        }
        EndPrimitive();
    }

Any ideas? EDIT: I'd just like to point out that setting each adjacent index to be the same as its vertex index, i.e. (v1, v1, v2, v2, v3, v3), still produces the same results.
How do I implement flat shading in GLSL? I'm working with GLSL and trying to implement flat shading on a 3D model (rather than smooth shading). To illustrate what I mean, here are two screenshots of cubes in Blender. Here's one with flat shading. And here's the same cube with smooth shading. I understand the theory behind this kind of shading. Each face on the cube (six total) has a normal facing away from the surface. Each vertex (eight total) has a normal computed by summing the face normals, then normalizing to unit length. This results in each vertex normal pointing directly away from the center of the cube. Smooth shading can be implemented in basically two ways. In the first, color is computed per vertex (using light direction and normal), then fragment color is interpolated among all vertices. In the second, the normals themselves are interpolated, then color is computed per fragment (using the same lighting calculations). Here are my current GLSL shaders implementing the first option (there's no specular lighting yet, but ambient and diffuse get the idea across). Vertex shader first:

    in vec3 vPosition;
    in vec3 vNormal;
    out vec4 fColor;

    uniform mat4 mvp;
    uniform vec3 aColor;
    uniform vec3 lDirection;
    uniform vec3 lColor;

    void main()
    {
        gl_Position = mvp * vec4(vPosition, 1);
        vec4 ambient = vec4(aColor, 1);
        vec4 diffuse = vec4(max(dot(lDirection, vNormal), 0) * lColor, 1);
        fColor = ambient + diffuse;
    }

Then the fragment shader:

    in vec4 fColor;

    void main()
    {
        gl_FragColor = fColor;
    }

So that works fine for smooth shading, with fragment values interpolated among vertices. I'll also point out that I'm using buffers and index arrays for rendering. For flat shading, each fragment on a face should instead use the same normal (such that every pixel on the surface has the same final color after lighting calculations). The problem is that I can't pass data to shaders per face, only per vertex. Given this, I can think of three solutions.

1. Pass four vertices per face, with each vertex storing the face normal. This would still technically be smooth shading, but done in such a way that every interpolated pixel uses the same color (making it effectively flat). This approach seems wrong because it would basically ruin my vertex buffer, since I'd have to pass 24 vertices (four per face) despite the cube only containing eight unique vertices.

2. Use GLSL's flat mode (there's a flat keyword in GLSL). Using this approach, each fragment would only pull from a single "provoking vertex" rather than interpolating from all vertices on the face. This feels wrong because I wouldn't actually be using the correct face normal. I also haven't been able to figure out the proper syntax for this style anyway. For the record, I'm aware of glShadeModel, but it's apparently deprecated.

3. Average vertex normals per fragment rather than interpolating them. To me, this feels like exactly the correct solution, since every pixel on a face would use the same normal, with that normal computed by summing and normalizing vertex normals (similar to how vertex normals are computed from face normals to begin with).

From those options, 3 clearly feels like the correct solution, but I haven't had any luck figuring it out. So that's my question: how can I tell the fragment shader to use a normal averaged among all vertices, rather than interpolated?
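One more technique worth knowing (not among the three options above): derive the face normal per fragment from screen-space derivatives, which needs no per-face vertex data at all. A hedged sketch, written as a GLSL 1.30+ fragment shader held in a C++ string; worldPos and the uniform names mirror the shaders above but are otherwise illustrative:

```cpp
// Fragment shader computing a true per-face normal from derivatives:
// dFdx/dFdy of the interpolated position give two in-plane tangents,
// and their cross product is the (flat) face normal.
const char* flatShadingFragmentSource = R"(
#version 130
in vec3 worldPos;              // interpolated world-space position
uniform vec3 aColor;
uniform vec3 lDirection;
uniform vec3 lColor;
out vec4 fragColor;
void main()
{
    vec3 faceNormal = normalize(cross(dFdx(worldPos), dFdy(worldPos)));
    float diff = max(dot(lDirection, faceNormal), 0.0);
    fragColor = vec4(aColor + diff * lColor, 1.0);
}
)";
```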
Create a white background for the texture and then blend using GLSL. I have a transparent PNG texture and I'd like to create a white background and then blend the texture on top of that. Is this possible using just GLSL? I can't simply multiply, add, or mix colors, because I don't want to overlay the color white; I want it to be behind the texture. I can achieve what I want by creating two objects with the exact same dimensions and position, setting the color of the object behind to white and the object in front to the transparent texture, but this seems less than ideal to me. Any suggestions will be much appreciated!
Is it sensible to do all physics on the GPU using transform feedback? I'm learning OpenGL and today I read about something new to me. It's called transform feedback, and if I understand correctly, it can capture the values of vertex shader output variables. I read an example where a particle system was created with it, including collisions. After reading it I have a question: is it sensible to do all physics like this? I.e., is there any kind of benefit to it?
Can you sync screen updates to the vertical retrace with OpenGL? In OpenGL, is there a way to ensure I get exactly, no more and no fewer, 60 (or whatever rate my monitor is set to) frames per second? Given, of course, that the new frame can be calculated in less than 1/60 second. I was thinking of Windows more than Linux or Mac OS X, though it is interesting to keep an eye on portability.
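On Windows this is usually requested through the WGL_EXT_swap_control extension; it is a request to the driver rather than a hard guarantee, since control-panel settings can override it. A minimal C++ sketch using GLEW's extension flags, assuming a current GL context already exists:

```cpp
#include <GL/glew.h>
#include <GL/wglew.h>   // Windows-specific WGL extension entry points

// Ask for one vertical retrace per buffer swap (classic vsync).
void enableVSync()
{
    if (WGLEW_EXT_swap_control)   // extension available on this driver?
        wglSwapIntervalEXT(1);    // 0 = off, 1 = sync every retrace
}
```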
Understanding the ModelView matrix. I want to analyze each component of my 4x4 ModelView matrix. I have learned that the upper-left 3x3 part of the ModelView matrix stores the rotation. If I want my object to have no rotation with respect to the camera, my ModelView matrix looks like this. How should I change my ModelView matrix if I want NO translation or scaling? Can anyone explain the maths behind this?
Texture artifacts depending on texture size. I get some strange artifacts with textures depending on their size. I run OpenGL 3.3 with a GTX 580, so it should definitely support non-power-of-two textures. I've narrowed down the problem specifically to the texture's size; I've checked whether transparency, color channels, etc. had anything to do with it. As you can see, the 512x512 and 300x300 textures look just fine, but the 437x437 one is all distorted. What could be the cause of this and how can it be fixed? I could of course just stick to power-of-two textures, which seem to work fine, but since this is a personal educational project I really want to understand what's going on.
View matrix to texture matrix. I'm converting view coordinates to texture coordinates for both my shadow maps and screen-space reflections. I keep seeing this conversion in examples:

    var T = new Matrix {
        M11 = 0.5f, M22 = -0.5f, M33 = 1.0f,
        M41 = 0.5f, M42 = 0.5f, M44 = 1.0f
    };
    Matrix m = view * projection * T;

What is T and why does this "work"? I say "work" because I'm not satisfied with the result, though the problem may not lie here. I would like to use an inverted view and projection and then adjust the coordinates from [-1, 1] to [0, 1], but that doesn't work at all. What have I misunderstood?
OpenGL FBO: render off-screen to a texture. I need to do some off-screen rendering to use the rendered image in something other than an OpenGL context (for instance, I need to use the image in a QListWidgetItem inside a Qt application). After reading up a little, I've found that a framebuffer object (FBO) is what I have to use, in combination with glReadPixels, to get the raw image from the GPU into my application. So here I am, up against FBOs. I've found that I need to attach a texture to the FBO, otherwise it does not work and glCheckFramebufferStatus gives me GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER. So when I init the FBO I also need glGenTextures, glBindTexture, etc. But why? I don't need a texture for now. Why do I have to declare/init/bind (I don't know what the best word is here) a texture? What is the "minimal" FBO setup for getting the image? And what if my application also needs stencil or depth images: are things different then? And what about using the back buffer instead of an FBO? Is it slower? Do I still need a texture? I'm kind of afraid of using textures because of the many parameters I don't understand yet. Sorry for the confusion.
How do I detect which OpenGL texture formats are natively supported? For example, how do I detect whether my video card doesn't support "bgr8", so I can convert the data to another format, such as "rgba8", in software. UPDATE: Sorry for the confusion. This question is more about the situation where I set internalFormat in glTexImage2D to something like "bgra8" but the video driver internally converts the data to another format, like "rgba8".
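A minimal probe for "will the driver accept this internal format at this size" is the classic proxy-texture mechanism sketched below. Note the caveat: it does not reveal whether the driver silently converts the data to another internal layout; on GL 4.3+ hardware, glGetInternalformativ with GL_INTERNALFORMAT_PREFERRED is the query aimed at that part of the question.

```cpp
#include <GL/glew.h>

// "Dry run" a texture allocation against GL_PROXY_TEXTURE_2D. If the driver
// rejects the internalFormat/size combination, the proxy's width reads as 0.
bool formatAccepted(GLint internalFormat, GLsizei width, GLsizei height)
{
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, internalFormat, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // no data is uploaded
    GLint gotWidth = 0;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &gotWidth);
    return gotWidth != 0;
}
```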
How can I draw 500 million triangles with OpenGL? I am a beginner, and I have a problem with my frame rate. I am trying to find my GPU's maximum performance using VBOs. I've seen people say a GPU can draw 1 billion triangles (so 3 billion vertices) with VBOs easily; is that right? If it is, then why am I getting 37 FPS at only 8 million triangles (24 million vertices)? I am not using a shader and I don't know how to use one. My GPU is an AMD Radeon HD 6870. When drawing 20k triangles the frame rate is 6000; when drawing 8 million triangles the frame rate is 37 and CPU usage is 1%, so I don't think the CPU is the bottleneck. My code is like this. In the header file I create:

    GLuint terrainVBO;

I made an init() function:

    glGenBuffers(1, &terrainVBO);
    glBindBuffer(GL_ARRAY_BUFFER, terrainVBO);
    glBufferData(GL_ARRAY_BUFFER, terrainVertices.size() * sizeof(terrainVec), &terrainVertices[0], GL_STATIC_DRAW);

And I draw in my main loop:

    glBindBuffer(GL_ARRAY_BUFFER, terrainVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 3 * sizeof(float), 0);
    glDrawArrays(GL_TRIANGLES, 0, terrainVertices.size());
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

Summary: how can I draw 500-600 million triangles (1.5 billion vertices) with a good frame rate? I could only draw 8 million triangles.
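One likely win for a terrain mesh like this (an assumption about the workload, not a guaranteed fix) is indexed drawing: a regular grid shares almost every vertex between up to six triangles, so glDrawElements with an index buffer lets the GPU's post-transform cache reuse work instead of processing each vertex once per triangle. A C++ sketch in the same client-state style as the code above:

```cpp
#include <GL/glew.h>
#include <vector>

GLuint terrainPosVBO = 0, terrainIBO = 0;   // illustrative names

// Upload unique vertices once, plus an index buffer describing triangles.
void initIndexed(const std::vector<float>& positions,        // xyz triples
                 const std::vector<unsigned int>& indices)   // 3 per triangle
{
    glGenBuffers(1, &terrainPosVBO);
    glBindBuffer(GL_ARRAY_BUFFER, terrainPosVBO);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
                 positions.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &terrainIBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, terrainIBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
                 indices.data(), GL_STATIC_DRAW);
}

void drawIndexed(GLsizei indexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, terrainPosVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, nullptr);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, terrainIBO);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```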
Environment mapping (cube mapping) using OpenGL. I'm trying to do cube mapping. The problem is that I'm getting this. This is what I get when I rotate it. But it should look like this. Here is the code for the vertex shader:

    varying vec2 tex_coord;

    void main()
    {
        vec3 v = vec3(gl_ModelViewMatrix * gl_Vertex);
        gl_TexCoord[0].stp = normalize(gl_Vertex.xyz);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

And the fragment shader:

    uniform samplerCube tex;

    void main()
    {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
        gl_FragColor = textureCube(tex, gl_TexCoord[0].stp);
    }

Could you please tell me what I am doing wrong?
How do I move a generated polygon horizontally in OpenGL? I am trying to develop a small ball game in OpenGL where rectangular bars of random height are generated and moved towards the left, and there is a ball which should be moved up and down (a jump action) whenever the left mouse button is pressed. I am generating the polygon pattern, but the problem is the movement. What is the best way to move the rectangles (of random height) horizontally? (Or any generated pattern, preferably a polygon, from right to left.) I tried the code below for moving a single rectangle, but I am getting a blank screen:

    void move_polygon()
    {
        int px, py;
        px = randr(50, 100);
        py = randr(100, 600);
        float r, g, b;
        r = (float)(rand() % 10);
        g = (float)(rand() % 10);
        b = (float)(rand() % 10);
        float rf = r / 10, gf = g / 10, bf = b / 10;
        glColor3f(rf, gf, bf);
        for (int pos = 800; pos < 0; pos -= 10) {
            Sleep(200);
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_POLYGON);
            glVertex2f(pos, 0);
            glVertex2f(pos, py);
            glVertex2f(px + pos, py);
            glVertex2f(px + pos, 0);
            glEnd();
            glFlush();
            Sleep(100);
        }
    }
Porting an SDL/OpenGL game to Android and iOS. I am currently learning OpenGL (3.0+) with C++. I am using SDL for input handling, window creation, etc., GLEW to load and call OpenGL functions, and GLM for OpenGL math. If I fully finish a Windows game, how can I port a game with the setup above to Android and iOS (my main focus), and maybe other platforms? I do not want to use any game engines. What I am looking for is a tool that could just make my game run on Android and iOS. If there is a way to adapt my game using the tools listed above, changing a little code to make it run on Android and iOS, I am okay with that (if that is the case, please provide resources). I have heard of OpenGL ES; I'm not quite sure what it is, but I do not want to use it if it is a completely different library and I would have to rewrite my entire rendering engine. I also want my app to run without the user having to download any libraries such as SDL, so please make sure that my app can ship as an .apk file (or whatever format) that I can just tap on and it will open. I also want my app to run on most smartphones (Android and iOS), so please be sure that what you are suggesting is not limited to a small set of devices. Another thing I want is good performance: no .exe-to-.apk emulators or simulators that run very slowly. It should run as fast as other apps, at roughly the same speed as my Windows version. Also, point out any mistakes I may have made (for example, maybe I am just crazy in thinking that smartphones use OpenGL).
Failing to move the exponential depth term into the depth shader in exponential shadow mapping. I'm playing around in my little toy project to see if I can understand how exponential shadow mapping works. To begin, I have the following two fragment shaders. Light depth texture shader:

    layout(location = 0) out float fragmentdepth;

    void main()
    {
        fragmentdepth = gl_FragCoord.z;
    }

Main shader:

    ...
    vec3 shadowDiv = shadowCoords.xyz / shadowCoords.w;
    float lightDepth = texture(depthMap, shadowDiv.xy);
    float visibility = clamp(exp(-70 * shadowDiv.z) * exp(70 * lightDepth), 0, 1);
    ...

This works fine: I have shadows where I expect them, and I'm using the approximation to the boolean test as a product of exponentials, as shown in the ESM paper. However, my understanding is that the whole point of this is that I can move exp(70 * lightDepth) out of this shader and into the shader that generates the depth texture at the light. That is, the new light depth texture shader:

    layout(location = 0) out float fragmentdepth;

    void main()
    {
        fragmentdepth = exp(70 * gl_FragCoord.z);
    }

Main shader:

    ...
    vec3 shadowDiv = shadowCoords.xyz / shadowCoords.w;
    float lightDepth = texture(depthMap, shadowDiv.xy);
    float visibility = clamp(exp(-70 * shadowDiv.z) * lightDepth, 0, 1);
    ...

When I make this change, the entire scene becomes shadowed. I'm sure I'm misunderstanding something somewhere, but I'm not sure where. I have tried varying the constant term (70) between 1 and 70, though this doesn't appear to make a difference, and I have no errors reported by KHR_debug. I am creating my light depth texture as:

    genLightDepthMap :: IO GLTextureObject
    genLightDepthMap = do
      lightDepthMap <- overPtr (glGenTextures 1)
      glBindTexture GL_TEXTURE_2D lightDepthMap
      glTexParameteri GL_TEXTURE_2D GL_TEXTURE_MIN_FILTER GL_LINEAR
      glTexParameteri GL_TEXTURE_2D GL_TEXTURE_MAG_FILTER GL_LINEAR
      glTexParameteri GL_TEXTURE_2D GL_TEXTURE_WRAP_S GL_CLAMP_TO_EDGE
      glTexParameteri GL_TEXTURE_2D GL_TEXTURE_WRAP_T GL_CLAMP_TO_EDGE
      glTexImage2D GL_TEXTURE_2D 0 GL_DEPTH_COMPONENT24 shadowMapResolution shadowMapResolution 0 GL_DEPTH_COMPONENT GL_FLOAT nullPtr
Skybox texture artifact on edge. I have a strange problem drawing a skybox texture on Mac; on iPhone everything is fine. I have tried changing the near and far plane values with no success. It is a skybox of six textures, and for every texture I set this:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

What could be the problem? Screenshot. EDIT: I have tried setting the background color to red, to check whether it is bleeding through the texture. I have also tried pulling the texture coordinates inward by half a texel:

    float err_corr = 0.5 / TEXTURE_SIZE;
    // the eight unit-cube corners, each coordinate pulled inward by err_corr
    GLfloat vertices[24] = { ... };

And the box is drawn in this order:

    GLubyte indices[14] = { 0, 1, 2, 3, 7, 1, 5, 4, 7, 6, 2, 4, 0, 1 };

SOLUTION: the solution is to set GL_CLAMP_TO_EDGE for the cubemap itself!

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Blending for multiple lighting passes. I am attempting to perform shadow mapping with deferred shading, using the following code:

    for (auto light : m_ShadowDirs)
    {
        // render shadow map
        shadowDirPass(m_ShadowBuffer, light);

        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);

        // activate shader for non-shadow-casting lights
        m_ShadowDirLightShader.activate();
        m_FrameBuffer.LightingPass(m_ShadowDirLightShader);

        m_ShadowDirLightShader.setUniform("WSCamPos", m_ActiveCamera->getPosition());
        m_ShadowDirLightShader.setDirLight("dirLight", static_cast<DirLightData&>(light->getLightData()));
        m_ShadowDirLightShader.setUniform("ShadowTransform", m_ShadowDirMatrix);

        // bind depth map
        m_ShadowDirLightShader.bindTexture("shadowMap", 5, m_ShadowBuffer.getDepthTexture());

        // render
        renderQuad();
        glDisable(GL_BLEND);
    }

shadowDirPass(m_ShadowBuffer, light) simply renders the geometry to the shadow map, and m_FrameBuffer.LightingPass(m_ShadowDirLightShader) just binds the G-buffer textures. I have two directional lights in the scene, one with red colour (1, 0, 0) and one with blue colour (0, 0, 1). One would think the scene would be coloured purple, but no: every time I render a light, the image in the target framebuffer is overwritten completely instead of blended. To illustrate this, see the following images: above is the scene with a single red light, and then the same scene with a single blue light from a different direction (see the shadows). When I enable both lights, the result is as if the first light was never used, though by stepping through the code I can confirm that each light definitely goes through a render cycle. I'm pretty sure I've messed up the blending somehow, but I could use some guidance.
Java LWJGL: monster follow script not working properly. I have written a script that is triggered whenever the player is within a certain distance of the monster. The script checks whether the monster's x position is greater or less than the player's x, and the same for z (y is automatically set by the terrain). The checkWalkX and checkWalkZ functions work with my monster's walk function, which picks a new position on a timer and walks to it. But when I use the same kind of idea for following, it doesn't work correctly:

    public int checkWalkX(Vector3f position) {
        if (Math.floor(this.getX()) != Math.floor(position.x)) {
            if (this.getX() > position.x)
                return 1; // greater
            if (this.getX() < position.x)
                return 2; // less
        }
        return 0;
    }

    public int checkWalkZ(Vector3f position) {
        if (Math.floor(this.getZ()) != Math.floor(position.z)) {
            if (this.getZ() > position.z)
                return 1; // greater
            if (this.getZ() < position.z)
                return 2; // less
        }
        return 0;
    }

    public void follow(Player player) {
        walking = false;
        following = true;
        if (checkWalkX(player.getPosition()) == 1)
            this.setX(this.getX() - mobSpeed);
        else if (checkWalkX(player.getPosition()) == 2)
            this.setX(this.getX() + mobSpeed);
        if (checkWalkZ(player.getPosition()) == 1)
            this.setZ(this.getZ() - mobSpeed);
        else if (checkWalkZ(player.getPosition()) == 2)
            this.setZ(this.getZ() + mobSpeed);
        if (Math.floor(checkWalkX(walkToPosition)) == 0 && Math.floor(checkWalkZ(walkToPosition)) == 0)
            following = false;
    }

For some reason, when I run this script the monster will only move within a distance of about 2. He sort of moves the right way, but he doesn't follow me. Does anyone know why this is?
Why would OpenGL ignore the GL_DEPTH_TEST setting? I cannot figure out why some of my objects are being rendered on top of each other. I have depth testing on:

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

Do I need to draw in order of what is closest to the camera? (I thought OpenGL did that for you.) Setup code:

    private void setUpStates() {
        glShadeModel(GL_SMOOTH);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightModel(GL_LIGHT_MODEL_AMBIENT, BufferTools.asFlippedFloatBuffer(new float[]{ 0, 0f, 0f, 1f }));
        glLight(GL_LIGHT0, GL_CONSTANT_ATTENUATION, BufferTools.asFlippedFloatBuffer(new float[]{ 1, 1, 1, 1 }));
        glEnable(GL_COLOR_MATERIAL);
        glColorMaterial(GL_FRONT, GL_DIFFUSE);
        glMaterialf(GL_FRONT, GL_SHININESS, 50f);

        camera.applyOptimalStates();

        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);
        glEnable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);

        glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    }

Render code:

    private void render() {
        // Clear the pixels on the screen and clear the contents of the depth buffer
        // (3D contents of the scene)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Reset any translations the camera made last frame update
        glLoadIdentity();

        // Apply the camera position and orientation to the scene
        camera.applyTranslations();

        glLight(GL_LIGHT0, GL_POSITION, BufferTools.asFlippedFloatBuffer(500f, 100f, 500f, 1));
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

        for (ChunkBatch cb : InterthreadHolder.getInstance().getBatches())
            cb.draw(camera.x(), camera.y(), camera.z());
    }

The draw method in ChunkBatch:

    public void draw(float x, float y, float z) {
        shader.bind();
        shader.setUniform("cameraPosition", x, y, z);
        for (ChunkVBO c : VBOs) {
            glBindBuffer(GL_ARRAY_BUFFER, c.vertexid);
            glVertexPointer(3, GL_FLOAT, 0, 0L);
            glBindBuffer(GL_ARRAY_BUFFER, c.colorid);
            glColorPointer(3, GL_FLOAT, 0, 0L);
            glBindBuffer(GL_ARRAY_BUFFER, c.normalid);
            glNormalPointer(GL_FLOAT, 0, 0L);
            glDrawArrays(GL_QUADS, 0, c.visibleFaces * 6);
        }
        ShaderProgram.unbind();
    }
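One thing worth ruling out first (an assumption about the cause, not a diagnosis): if the window's pixel format was created with zero depth bits, depth testing silently has no effect no matter what state is set afterwards. A quick check, sketched in C++ (the equivalent legacy query works from LWJGL's compatibility profile too):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Print how many bits the current context's depth buffer actually has.
// 0 means the pixel format has no depth buffer and GL_DEPTH_TEST is a no-op.
void reportDepthBits()
{
    GLint depthBits = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depthBits);  // legacy query, fine in compat
    std::printf("depth buffer bits: %d\n", depthBits);
}
```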
How do I separate physics from framerate? Skyrim (Creation Engine/Gamebryo) is a prime example of a game that has its physics tied to the framerate, for which it has been heavily criticized: if you disable vsync or don't have a 60 Hz monitor, the physics will glitch, characters will fly into the air, and the screen will flicker while you are put into a swimming position. The problem can be worked around with a framerate-limiting mod, but I want my game to run at any framerate and any refresh rate just fine. How can I do that? Sorry if this isn't necessarily a game development question; it's more of a game engine question. I saw similar questions asked, but they were very old, and new tech and programming techniques might have allowed better ways to do it since 2008.
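The standard answer in the literature is a fixed-timestep loop with an accumulator, described in depth in Glenn Fiedler's "Fix Your Timestep" article: physics always advances in constant slices, while rendering runs as fast as it likes and interpolates between the last two physics states. A hedged C++ sketch; running, stepPhysics, and render stand in for the engine's own functions:

```cpp
#include <chrono>

bool running();                 // placeholder: loop condition
void stepPhysics(double dt);    // placeholder: advance the simulation by dt
void render(double alpha);      // placeholder: draw, blending states by alpha

void gameLoop()
{
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 120.0;   // physics step, independent of the display
    double accumulator = 0.0;
    auto previous = clock::now();

    while (running())
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= dt)    // consume real time in fixed slices
        {
            stepPhysics(dt);
            accumulator -= dt;
        }
        render(accumulator / dt);    // leftover fraction for interpolation
    }
}
```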
Heightmap terrain picking. I've implemented an OpenGL-based terrain using a tessellation shader to divide each 'terrain cell' into the desired tiles. The heightmap is uploaded to the GPU and applied in the shader. When it comes to picking, I don't really know what to do. Testing triangles for intersection with the pick ray is not possible, because only the GPU 'knows' the mesh. Is render-to-texture with a unique color per cell the way to go? Or should I hold an additional mesh in RAM just for picking collision?
Frustum culling without per-mesh positions. I am trying to implement frustum culling in a C++ renderer I'm writing, but I feel like I've hit a brick wall. The plan is to load the Sponza level and, per mesh, create a collision box using the Bullet physics library, which also holds the mesh translation, scale, and rotation, then get the camera planes and check whether the collision box falls inside them. I'm loading the Sponza level using the Assimp library; it loads fine and displays correctly, but every node's transformation is at (0, 0, 0), which leads me to believe all the vertices are pre-translated (even without optimization flags). I checked in Blender, and even there the translation gizmo is at (0, 0, 0) for every mesh. I tried a couple of different Sponza files, FBX too, but they all lack per-mesh positions. Should I just give up on this? Are there other ways to perform frustum culling?
GLSL variables as main function params vs. on their own line? I am learning OpenGL and GLSL. I was taught that the in/out variables should be declared like this:

    in vec3 something;
    out vec3 somethingElse;

    int main() { etc... }

However, I ran across code like this online (ShaderToy):

    int main(in vec3 something, out vec3 somethingElse)

Is there any difference?
How do I find the right GLxx class for a given function in LWJGL? I'm just starting to learn the fundamentals of OpenGL via LWJGL. Every OpenGL function is implemented as a method on a GLxx class, where xx corresponds to the version of the spec in which that function was introduced, such as GL20 for functions added in OpenGL 2.0. So far, so good. The difficulty comes when following tutorials or looking at code written against the C API: I find myself having to either guess or Google the version of every single function I want to use, which is quite time-consuming. Is there a quick way of finding out in which version of OpenGL a given feature was introduced? (Or any other way of figuring out the right LWJGL class for a function?)
What are the units when reading depth using glReadPixels()? Suppose I use glReadPixels() to read the depth of a pixel from the depth buffer. What are the units of this value? Is it the distance from the camera, or the distance from the near clip plane? And is the distance normalized to clip space, or in real units?
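For reference, a hedged sketch of the usual interpretation: with the default glDepthRange, glReadPixels with GL_DEPTH_COMPONENT returns window-space depth in [0, 1], which is non-linear in eye-space distance. For a standard perspective projection it can be converted back like this (zNear/zFar are the projection's clip planes):

```cpp
// Window-space depth d in [0,1] -> eye-space distance for a perspective
// projection: first remap to NDC [-1,1], then invert the projection's z term.
float linearizeDepth(float d, float zNear, float zFar)
{
    float zNdc = 2.0f * d - 1.0f;
    return (2.0f * zNear * zFar) /
           (zFar + zNear - zNdc * (zFar - zNear));
}
```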
Smooth seams and banding of overlapping lights in deferred rendering. I have finally managed to get multiple lights on screen with a deferred renderer, but the result is somewhat disappointing. In particular, I have a severe banding problem: besides clear banding for a single light, the seams of overlapping light volumes are in some cases disturbing. Currently my bounding-sphere scale is calculated by solving the quadratic equation with a minimum light intensity of 0.01 and using the attenuation terms of each light source (in the screenshot above, every light has an equivalent volume). The attenuation in the shader is simply 1.0 / (constant + linear * dist + quadratic * dist * dist). The position is reconstructed from a 32-bit depth buffer and the lighting is done in view space. The light passes are blended together simply with:

    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_ONE, GL_ONE);

Is there a way to fix this issue easily? Thank you!
Rendering a black-and-white image in OpenGL 1.1. Is there any way to simply disable color in OpenGL 1.1? Or can I "grey out" textures in LWJGL?
What am I doing wrong with regard to multitexturing? I am writing a bump-mapping demo, so I need an image texture and a normal texture to be loaded into the fragment shader. This is the texture part of my OpenGL code:

    glActiveTexture(GL_TEXTURE0);
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 1024, 0, GL_RGB, GL_UNSIGNED_BYTE, brick);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    GLint textureLocation = glGetUniformLocation(shader, "texture");
    glUniform1i(textureLocation, 0);

    glActiveTexture(GL_TEXTURE1);
    GLuint normal_textureID;
    glGenTextures(1, &normal_textureID);
    glBindTexture(GL_TEXTURE_2D, normal_textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 1024, 0, GL_RGB, GL_UNSIGNED_BYTE, brick_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    GLint normalTextureLocation = glGetUniformLocation(shader, "normal_texture");
    glUniform1i(normalTextureLocation, 1);

Here is the fragment shader:

    uniform vec3 light;
    uniform sampler2D texture;
    uniform sampler2D normal_texture;

    void main()
    {
        vec3 tex = texture2D(normal_texture, gl_TexCoord[0].st).rgb;
        gl_FragColor = vec4(tex, 1.0);
    }

I am sure that the brick array contains the image texture and the brick_texture array contains the normal texture, but it seems that normal_texture and texture are both the image texture, not the normal texture. What am I doing wrong with regard to multitexturing?
Should I learn OpenGL 1.5? I want to start learning OpenGL with a book I have had for a long time (Beginning OpenGL Game Programming), and it uses OpenGL 1.5. So my question is: should I learn OpenGL using this book and then learn a higher version, or should I start learning a higher version at once? Do the same core concepts apply to higher versions?
Correct use of VAOs in OpenGL ES 2 for iOS? I'm migrating to OpenGL ES 2 for one of my iOS projects, and I'm having trouble getting any geometry to render successfully. Here's where I'm setting up the VAO rendering:

    void bindVAO(int vertexCount, struct Vertex *vertexData, GLushort *indexData, GLuint *vaoId, GLuint *indexId)
    {
        // generate the VAO & bind
        glGenVertexArraysOES(1, vaoId);
        glBindVertexArrayOES(*vaoId);

        GLuint positionBufferId;

        // generate the VBO & bind
        glGenBuffers(1, &positionBufferId);
        glBindBuffer(GL_ARRAY_BUFFER, positionBufferId);

        // populate the buffer data
        glBufferData(GL_ARRAY_BUFFER, vertexCount, vertexData, GL_STATIC_DRAW);

        // size of vertex position
        GLsizei posTypeSize = sizeof(kPositionVertexType);
        glVertexAttribPointer(kVertexPositionAttributeLocation, kVertexSize, kPositionVertexTypeEnum, GL_FALSE, sizeof(struct Vertex), (void *)offsetof(struct Vertex, position));
        glEnableVertexAttribArray(kVertexPositionAttributeLocation);

        // create & bind index information
        glGenBuffers(1, indexId);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *indexId);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, vertexCount, indexData, GL_STATIC_DRAW);

        // restore default state
        glBindVertexArrayOES(0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

And here's the rendering step:

    // bind the frame buffer for drawing
    glBindFramebuffer(GL_FRAMEBUFFER, outputFrameBuffer);
    glClear(GL_COLOR_BUFFER_BIT);

    // use the shader program
    glUseProgram(program);
    glClearColor(0.4, 0.5, 0.6, 0.5);

    float aspect = fabsf(320.0 / 480.0);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.0f);
    GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

    glUniformMatrix4fv(projectionMatrixUniformLocation, 1, GL_FALSE, projectionMatrix.m);
    glUniformMatrix4fv(modelViewMatrixUniformLocation, 1, GL_FALSE, mvpMatrix.m);

    glBindVertexArrayOES(vaoId);
    glDrawElements(GL_TRIANGLE_FAN, kVertexCount, GL_FLOAT, &indexId);

    // bind the color buffer
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];

The screen renders the color passed to glClearColor correctly, but not the shape passed into bindVAO. Is my VAO being built correctly? Thanks!
How portable are OpenGL versions, really? If I write a game engine that uses OpenGL 1.5 (making no assumptions about what else I do), is it portable now, and will it still be portable five years from now, or will hardware and driver support for OpenGL become exclusive to much later target versions? Lately I've seen a lot of answers on this site directing users toward the most recent OpenGL versions, citing hardware surveys of DirectX support, and only recommending earlier versions as a last, final resort (as if to imply there is something wrong with them that makes all usage of them invalid or pointless). If I only have computers that can provide OpenGL 1.5 or 2.1 contexts at most, should I just give up game programming because I can't afford a new computer with hardware and drivers for 3.x and 4.x? Or should I finish my game engine the way I intended? By the time I get a 4.x-supporting setup, will there be new versions and a lack of backwards compatibility that trash all usage of 4.x? Will 4.x ever dominate earlier versions in terms of support before a new major version is released?
Memory dataflow for uniform variables? When a 2D texture is supplied to a shader as a 'uniform' input, it is first uploaded to OpenGL using glTexImage2D() and then associated with a shader uniform using glUniform1i(). E.g., for texture data: glTexImage2D() is used to transfer texture data to the server side; glGetUniformLocation() is used to get the shader uniform handle; glUniform1i() associates the data pointed to by the texture unit with the shader uniform. But when we pass a matrix (e.g. a 4x4 matrix) to a shader as a 'uniform' input, we don't use any specific function to upload it to OpenGL; we just use glUniform...() to associate the data with the shader input (which we also used in the case of texture data). For matrix data: glGetUniformLocation() to get the shader uniform handle; glUniformMatrix4fv() to associate matrix data with the shader uniform input. Where does the matrix data live at each step of passing it to a shader as a uniform input? Does matrix data always live in client-side, CPU-accessible memory and get fetched every frame by the server side? If it is uploaded to OpenGL, which step/function call uploads the data? Where does the data live in OpenGL memory, and how is its memory location referenced?
Can't get a simple OpenGL texture working using SDL2 and FreeImage3. I created a simple OpenGL program that displays a quad with a texture, but it doesn't seem to be working, as it only displays a white quad. What could possibly be wrong? I checked everything I could think of that could cause the issue. The bitmap is 256x256. Any clues? Here's the source code: http://pastebin.com/5nVbMPVp. I'm on Linux, if that helps. Thanks!
How to calculate a directional light frustum from the camera frustum. I've been playing around with OpenGL for a few weeks now. For the following screenshot I picked the glm::ortho values for my light source by trial and error. There are two directional light sources with shadows. I would like to calculate the values for glm::ortho by creating a bounding box around the camera frustum. I have the corners of the camera frustum in world space... what is the next step? I think I should move the camera frustum into light space, calculate the bounding box, and put the dimensions of that bounding box into glm::ortho. But... how? :) The block below is a simplified version of my code with only one light source.

    // Camera
    mat4 cameraViewMatrix = lookAt(
        vec3(1.2f, 1.2f, 1.2f),
        vec3(0.0f, 0.0f, 0.0f),
        vec3(0.0f, 0.0f, 1.0f));
    mat4 cameraProjectionMatrix = perspective(45.0f, 800.0f / 600.0f, 0.5f, 10.0f);

    // Light
    mat4 lightViewMatrix = lookAt(
        vec3(3.0f, 2.0f, 2.0f),
        vec3(0.0f, 0.0f, 0.0f),
        vec3(0.0f, 0.0f, 1.0f));

    // Camera frustum: the eight corners of the NDC cube
    vector<vec4> cubeNDC;
    cubeNDC.push_back(vec4(-1.0f, -1.0f, -1.0f, 1.0f));
    cubeNDC.push_back(vec4( 1.0f, -1.0f, -1.0f, 1.0f));
    cubeNDC.push_back(vec4( 1.0f,  1.0f, -1.0f, 1.0f));
    cubeNDC.push_back(vec4(-1.0f,  1.0f, -1.0f, 1.0f));
    cubeNDC.push_back(vec4(-1.0f, -1.0f,  1.0f, 1.0f));
    cubeNDC.push_back(vec4( 1.0f, -1.0f,  1.0f, 1.0f));
    cubeNDC.push_back(vec4( 1.0f,  1.0f,  1.0f, 1.0f));
    cubeNDC.push_back(vec4(-1.0f,  1.0f,  1.0f, 1.0f));

    mat4 viewProjectionMatrixInverse = inverse(matProj * matView);

    vector<vec4> cameraFrustum;
    for (vec4 vertex : cubeNDC) {
        vec4 vertexTransformed = viewProjectionMatrixInverse * vertex;
        vertexTransformed /= vertexTransformed.w;
        cameraFrustum.push_back(vertexTransformed);
    }

    // Magic
    mat4 lightProjectionMatrix = ortho(...);

Thanks for your help. :)
How do I create OpenGL models like the ones created by Blender's wireframe modifier? How could I programmatically create wireframe models like this out of a triangular mesh? What would the algorithm behind it be? (Source.) Creating an additional mesh using lines instead of triangles, or just using a single mesh with the typical geometry shader based on barycentric coordinates, are the most straightforward approaches, but they're far from being as cool as the results shown in the link above. So I was wondering how difficult it would be to create a new mesh like the one shown in the link; would it be possible to achieve starting from just a simple triangle soup?
Rotating a scene around a fixed axis. I am looking for a way to move my scene so that when I translate forward/back/left/right, it moves directly up/down/left/right on the screen. Currently it does do that, until I rotate (by anything other than 360 degrees). After rotation, up/down/left/right movement follows the direction of the rotation, but I need it to move directly up/down/left/right on screen. I've been racking my brains trying all kinds of different matrix rotations and multiplications, but I can't get it. MOST IMPORTANTLY: I need to maintain a single rotation axis at (0, 0, 0), so the centre of rotation is always near the middle of the screen regardless of how far I scroll. I can easily rotate about the centre of the scene itself and have it translate how I want (translate to coordinate, rotate, translate back, go to position), but I need that axis to remain fixed; otherwise the centre of rotation could be somewhere way off screen. Fig 1 shows the starting position of the model (scene). Fig 2: the modelview is rotated 45 degrees. Fig 3: decrementing the z value causes the model to move out along z, which I do not want. Fig 4: I need the model to move straight back (between z and x). I have tried using the sine/cosine rule in an attempt to translate the vertices back along the arc between the undesired position (Fig 3) and the desired one (Fig 4), but the math was off and the process messy. I just know there has to be a clean, simple solution out there. I am coding this in Java (JOGL) and passing modelview and projection matrices to a vertex shader. Also, I am not using deprecated functions (no glBegin/glEnd, glRotate, etc.). Does anyone know of a matrix multiplication that will accomplish this?
LWJGL: loading textures of various types. I googled around a bit and nobody seems to have asked this question. I have images in multiple color formats (all of them PNGs). Most of them are ARGB, but my bitmap fonts are grayscale, and I would like them to stay that way. All I want to do is find out what format BufferedImage uses to store my pixel data and then use that information with glTexImage2D. Java, in all its wisdom, seems determined to hide that information from me at all costs. I also need to know how BufferedImage aligns its pixel data in both of these formats (glTexImage2D cares). Could someone please tell me how to:

1. Determine the pixel format of my BufferedImage. If it is ARGB32, I'm going to have to reorder the bytes and use GL_RGBA. If it's grayscale, I will be using GL_INTENSITY.

2. Extract the actual bytes from the image. I have seen a few examples on the web that use BufferedImage.getRaster().getDataBuffer(). This is nonsensical to me: why are there different types of buffers like DataBufferInt? Because of Java's strong typing, I need a DataBufferByte. If this is the only way, could somebody give me specific directions for using the different types of buffers with glTexImage2D?

3. Figure out how the aforementioned image data is aligned. I will use this information with glPixelStorei.

In addition, I come from C and C++ programming. In C this was 100 lines of simple libPNG and GL calls. Should I expect more trouble like this in the future?
glCreateShader causes segmentation fault. I can't create a shader when trying to use shaders with SFML: the call glCreateShader(GL_VERTEX_SHADER) causes a segmentation fault. At first I googled it and found that this happens when the program does not have an OpenGL context. I tried SDL first, but the poor documentation and "look at the header to know what to do" attitude made me go for SFML. The code that causes the segfault is below:

    sf::Window App(sf::VideoMode(800, 600, 32), "SFML OpenGL");

    // Set color and depth clear value
    glClearDepth(1.f);
    glClearColor(0.f, 0.f, 0.f, 0.f);

    // Enable Z-buffer read and write
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);

    // Set up a perspective projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    GLuint vertShader = glCreateShader(GL_VERTEX_SHADER);
    ...

I'm including GLEW, gl.h, and SFML's window and system modules, using OpenGL 2.1 on GCC/Linux. What is missing?
OpenGL indices array. I have a terrain class which creates a grid of quads. I do it like this:

    for (int z = 0; z < length; z++)
        for (int x = 0; x < width; x++)
            vertices.push_back(vec3((float)x * 250, 0.f, (float)z * 250));

    for (int z = 0; z < (length - 1); z++) {
        for (int x = 0; x < (width - 1); x++) {
            int index = z * width + x;
            Vertex quadVertices[] = {
                Vertex(vertices.at(index),             vec3(0, 0, 0)),
                Vertex(vertices.at(index + 1),         vec3(0, 0, 0)),
                Vertex(vertices.at(index + width),     vec3(0, 0, 0)),
                Vertex(vertices.at(index + 1 + width), vec3(0, 0, 0))
            };
            unsigned short indices[] = { index, index + 1, index + width,
                                         index + 1, index + width, index + width + 1 };
            Quad quad(quadVertices, 4, indices, 6);
            squares.push_back(quad);
            i++;
        }
    }

The vertices and the logic are correct, but the indices aren't, for some reason. Here is the output for this code. But when I change the indices to this:

    unsigned short indices[] = { 0, 1, 2, 1, 2, 3 };

it works great. The problem is I don't understand why the line

    unsigned short indices[] = { index, index + 1, index + width,
                                 index + 1, index + width, index + width + 1 };

doesn't work. If it worked, my grid would consume a lot fewer resources. If someone could explain why it doesn't work, that would be great, thanks. In case you need to know how I draw a Quad, here is the code:

    class Quad {
    public:
        Quad(Vertex *_vertices, int n, unsigned short *_indices, unsigned short numIndices)
        {
            for (int i = 0; i < numIndices; i++)
                indices.push_back(_indices[i]);
            for (int i = 0; i < n; i++) {
                vec3 v = vec3(_vertices[i].position, lengthPower);
                position.push_back(v);
            }
            glGenVertexArrays(1, &mVertexArray);
            glBindVertexArray(mVertexArray);

            glGenBuffers(1, &mPositionBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * position.size(), position.data(), GL_STATIC_DRAW);

            glGenBuffers(1, &mIndicesBuffer);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndicesBuffer);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned short) * indices.size(), indices.data(), GL_STATIC_DRAW);
        }

        void draw()
        {
            glEnableVertexAttribArray(0);
            glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer);
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndicesBuffer);
            glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
            glDisableVertexAttribArray(0);
        }

        ~Quad() {}

    private:
        std::vector<unsigned short> indices;
        std::vector<vec3> position;
        GLuint mVertexArray;
        GLuint mPositionBuffer;
        GLuint mIndicesBuffer;
    };

I'm using OpenGL, glm, glfw, etc.
Skeletal animation in OpenGL. I'm using Assimp to do skeletal animation in my OpenGL application. I used Blender to export this one-boned model to a COLLADA file. The model has only one bone, called arm_bone, that controls the arm mesh; all the other meshes are static. I made several structures and classes that help me play animations. All the nodes are added to an std::vector of Node objects; each Node contains aiNode data and a toRoot matrix. The bone hierarchy is encapsulated in a Skeleton class, and the animation matrices (T * R) are updated for each bone in a class called Animation. My Model::draw() function is this:

    void Model::draw()
    {
        // iterate through all animation sets; if an animation is running,
        // update the bones it affects.
        for (size_t i = 0; i < animations.size(); i++)
            if (animations[i].running())
                animations[i].updateAnimationMatrices(&skeleton);

        // calculate Bone::finalMatrix for each bone
        skeleton.calculateFinalMatrices(skeleton.rootBone());

        // iterate through the nodes and draw their meshes
        for (size_t i = 0; i < nodes.size(); i++) {
            shaderProgram.setUniform("ModelMatrix", nodes[i].toRoot());
            nodes[i].draw();
        }
    }

To get the "animationMatrix" for each bone (the T * R matrix) I call Animation::updateAnimationMatrices(). Here's what it looks like:

    void Animation::updateAnimationMatrices(Skeleton *skeleton)
    {
        double time = ((double)timer.elapsed()) / 1000.0;
        while (time > animation->mDuration)
            time -= animation->mDuration;

        // iterate through the aiNodeAnims (called channels) and update their corresponding Bone
        for (unsigned int iChannel = 0; iChannel < animation->mNumChannels; iChannel++) {
            aiNodeAnim *channel = animation->mChannels[iChannel];
            Bone *bone = skeleton->getBoneByName(channel->mNodeName.C_Str(), skeleton->rootBone());

            glm::mat4 R; // ... calculate rotation matrix based on time
            glm::mat4 T; // ... calculate translation matrix based on time

            // set the animation matrix for the bone
            bone->animationMatrix = T * R;
            bone->needsUpdate = true;
        }
    }

Now, in order to calculate the "finalMatrix" for each bone (based on animationMatrix, offsetMatrix, etc.) and upload it to the vertex shader, I call Skeleton::calculateFinalMatrices():

    void Skeleton::calculateFinalMatrices(Bone *root)
    {
        if (root) {
            Node *node = getNodeByName(root->name->C_Str());
            if (node == nullptr) {
                std::cout << "could not find corresponding node for bone " << root->name->C_Str() << "\n";
                return;
            }
            // update only the bones that need to be updated
            // (their animationMatrix has been changed)
            if (root->needsUpdate) {
                root->finalMatrix = root->animationMatrix * root->offsetMatrix;

                // upload the bone matrix to the shader;
                // the array is defined as "uniform mat4 Bones[64]"
                std::string str = "Bones[";
                char buf[4] = {0};
                _itoa_s(root->index, buf, 10);
                str += buf;
                str += "]";
                shaderProgram->setUniform(str.c_str(), root->finalMatrix);
                root->needsUpdate = false;
            }
            for (unsigned int i = 0; i < root->numChildren; i++)
                calculateFinalMatrices(root->children[i]);
        }
    }

Here's my Bone structure, if it helps. My GLSL vertex shader is pretty standard; here it is. And finally, here's the result I get (ignore the model's static legs; that must be a bug in the Blender exporter). And here's the result I should get (using 3rd-party software). It looks like there's something wrong with the bone matrix calculation, although I don't know what. Any ideas or tips? Thanks!
How can I calculate a terrain's normals? I'm trying to implement basic lighting (a sun) in OpenGL 3 with this tutorial: http://www.mbsoftworks.sk/index.php?page=tutorials&series=1&tutorial=11. I'm building a basic terrain and it's working well. Now I'm trying to add normals, and I don't think they're coming out right (see the terrain in wireframe: as you can see, I don't think my normals are good). This is how I compute them:

    for (x = 0; x < this.m_size.width - 1; x++) {
        for (y = 0; y < this.m_size.height - 1; y++) {
            immutable uint indice1 = y * this.m_size.width + x;
            immutable uint indice2 = y * this.m_size.width + (x + 1);
            immutable uint indice3 = (y + 1) * this.m_size.width + x;
            immutable uint indice4 = (y + 1) * this.m_size.width + x;
            immutable uint indice5 = y * this.m_size.width + (x + 1);
            immutable uint indice6 = (y + 1) * this.m_size.width + (x + 1);

            Vector3 v1 = vertexes[indice3] - vertexes[indice1];
            Vector3 v2 = vertexes[indice2] - vertexes[indice1];
            Vector3 normal = v1.cross(v2);
            normal.normalize();

            normals[indice1] = normal;
            indices ~= [indice1, indice2, indice3, indice4, indice5, indice6];
        }
    }

I use this basic shader: http://ogldev.atspace.co.uk/www/tutorial18/tutorial18.html, or the one in the link I posted at the top. Thanks.
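For comparison, the usual smooth-normal recipe accumulates every adjacent face's normal onto each vertex and normalizes at the end, rather than assigning a single face's normal to one vertex as above. A hedged sketch in C++ with glm (the question's code is D, so the names and types here are illustrative):

```cpp
#include <glm/glm.hpp>
#include <vector>

// indices describe GL_TRIANGLES: three entries per face.
std::vector<glm::vec3> computeNormals(const std::vector<glm::vec3>& vertices,
                                      const std::vector<unsigned int>& indices)
{
    std::vector<glm::vec3> normals(vertices.size(), glm::vec3(0.0f));
    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        glm::vec3 faceNormal = glm::normalize(
            glm::cross(vertices[b] - vertices[a], vertices[c] - vertices[a]));
        normals[a] += faceNormal;   // every face contributes to its 3 vertices
        normals[b] += faceNormal;
        normals[c] += faceNormal;
    }
    for (glm::vec3& n : normals)
        n = glm::normalize(n);      // average out the accumulated directions
    return normals;
}
```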
Calculate the target angle to a certain vector. I'm developing a 3D game in LWJGL. I have a player class and a monster class, and I need the monster to chase the player when the player is within a certain distance. For that I have the monster's direction vector (let's call it monDirVector) and the direction vector that points from the monster to the player (let's call it dirToPlayer). The way my monster class works is that it increments its current rotation to match a target angle (this way I can simulate the walking/turning animation). Given this, I need to calculate the target angle so that monDirVector comes to overlap dirToPlayer, making the monster walk right towards the player. How would I go about this? Here's a diagram for better understanding of what I need: I pretty much need to get the angle c and assign it as the monster's target angle. Angle a is the monster's current rotation and b is the angle between the two vectors; both of these are known. Please note that the monster can be oriented any way and the player can be in any quadrant relative to the monster. This is a problem because I can't simply subtract the angle towards the player from the current rotation of the monster. I would really appreciate it if someone helped me.
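One way to sidestep the quadrant bookkeeping entirely (a suggestion, not the only approach) is to compute the absolute yaw toward the player with atan2 and use that directly as the target angle; the function handles all four quadrants by itself. A C++ sketch (std::atan2 maps directly to Java's Math.atan2); the axis and sign convention below is an assumption to be matched to the game's own yaw convention:

```cpp
#include <cmath>

// Absolute yaw (degrees) from the monster to the player in the XZ plane,
// measured from +Z toward +X. Covers all quadrants without special cases.
float targetYawDegrees(float monsterX, float monsterZ,
                       float playerX, float playerZ)
{
    float dx = playerX - monsterX;
    float dz = playerZ - monsterZ;
    return std::atan2(dx, dz) * 180.0f / 3.14159265f;
}
```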
Optimized algorithm for line sphere intersection in GLSL Well, hello then! I need to find the intersection between a line and a sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way:

    // The line passes through p1 and p2:
    vec3 p1 = (...);
    vec3 p2 = (...);

    // Sphere center is p3, radius is r:
    vec3 p3 = (...);
    float r = ...;

    float x1 = p1.x; float y1 = p1.y; float z1 = p1.z;
    float x2 = p2.x; float y2 = p2.y; float z2 = p2.z;
    float x3 = p3.x; float y3 = p3.y; float z3 = p3.z;

    float dx = x2 - x1;
    float dy = y2 - y1;
    float dz = z2 - z1;

    float a = dx * dx + dy * dy + dz * dz;
    float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3));
    float c = x3 * x3 + y3 * y3 + z3 * z3 + x1 * x1 + y1 * y1 + z1 * z1
              - 2.0 * (x3 * x1 + y3 * y1 + z3 * z1) - r * r;

    float test = b * b - 4.0 * a * c;

    if (test >= 0.0) {
        // Hit (according to Treebeard, "a fine hit").
        float u = (-b - sqrt(test)) / (2.0 * a);
        vec3 hitp = p1 + u * (p2 - p1);
        // Now use hitp.
    }

It works perfectly! But it seems slow... I'm new at GLSL. You can answer this question in two ways: 1. Tell me there is no solution, showing some proof or strong evidence. 2. Tell me about GLSL features (vector APIs, primitive operations) that make the above algorithm faster, showing some example. Thanks a lot!
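For reference, a sketch of the same quadratic written with vector built-ins, which collapses the per-component arithmetic into three dot products. It is shown with GLM in C++, whose vec3/dot types and functions mirror GLSL one-to-one, so it ports to the shader line for line:

    #include <cmath>
    #include <glm/glm.hpp>

    // Solves |p1 + u*d - p3|^2 = r^2 for u; algebraically identical to the
    // component-wise version above.
    glm::vec3 d = p2 - p1;            // line direction
    glm::vec3 m = p1 - p3;            // sphere center to line start
    float a = glm::dot(d, d);
    float b = 2.0f * glm::dot(m, d);
    float c = glm::dot(m, m) - r * r;

    float test = b * b - 4.0f * a * c;
    if (test >= 0.0f)
    {
        float u = (-b - std::sqrt(test)) / (2.0f * a);  // nearer of the two roots
        glm::vec3 hitp = p1 + u * d;
        // use hitp ...
    }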
How do I fix this access violation when I exit my custom OpenGL game engine? I'm writing a game engine, where the engine is written in one project and exports a .dll file. In another project, in the same solution as the engine, there is a sandbox project which uses the engine. However, there is a bug. I run the sandbox project in debug mode with the engine dll. When I spam my mouse and keyboard for a few seconds and close the program via the exit button, the program crashes with an error:

    Exception thrown at 0x00000043 in phantom sandbox.exe: 0xC0000005: Access violation
    executing location 0x00000043. If there is a handler for this exception, the program
    may be safely continued.

I found the source of the bug. Since I'm currently writing the engine for OpenGL, I have to initialize GLEW, and I need to create a HGLRC. If I don't initialize this HGLRC, everything works. This is not the ideal solution, since I need to use OpenGL for my engine. I went forth without exporting the .dll from the engine, making the engine an application instead. I made a main.cpp and wrote it to use the engine, enabling OpenGL rendering. I tried to recreate the bug, but everything works! I thought it might be OpenGL, but now I'm thinking it might be my engine. How do I fix this? Here is the code where it errors:

    int const main_result = invoke_main();
    _telemetry_main_return_trigger(nullptr);
    if (!__scrt_is_managed_app())
        exit(main_result);
    if (!has_cctor)          // <- this is the line being sent to the call stack
        _cexit();
    __scrt_uninitialize_crt(true, false);
    return main_result;

    __except (_seh_filter_exe(GetExceptionCode(), GetExceptionInformation()))
    {
        int const main_result = GetExceptionCode();
        if (!__scrt_is_managed_app())
            exit(main_result);
    }

Here is the call stack
How to solve artifacts caused by vertex lighting in my voxel engine? My current lighting system bakes the light amount per vertex, based on ray tracing from the light source to the 8 corners of each block and on the block's distance to the light. It works acceptably, but it's definitely not perfect. Of course the blocks are made out of faces, which are made out of triangles. In the situation shown in the screenshot, where there is a light directly behind the camera, you get those weird triangle lighting issues. How can I fix this problem?
glDrawElements/Arrays not working Given the following OpenGL call stack: according to my knowledge, the calls to glDrawElements/Arrays are OK (?), but they do not draw anything. (There is no triangle.) The calls to glBegin() and glVertex() work fine. Full source here
Architecture to draw many different objects in OpenGL I have some objects that I want to draw. I am not sure how I can create my architecture in a way where I can draw everything as fast as possible. As an example:

    class MyObject
    {
        float[] vertices;
        float[] colors;
        float[] textures;

        public MyObject()
        {
            // fill every buffer with data
        }

        public void Draw()
        {
            // set shader, set shader uniforms etc.
            // foreach buffer: BindBuffer(); VertexAttribPointer();
            DrawArrays();
        }
    }

In this way, I can draw every object with myObject.Draw() after it is constructed. The problem is that if I have 5000 objects, I have 5000 draw calls. I need to set the shader every time, even if I loop over the 5000 objects, because I don't know if they use the same shaders. Another way:

    class MyObject
    {
        bool needsToBeChanged;

        public MyObject()
        {
            // set my edge points
            needsToBeChanged = true;
        }

        void ChangeMyObject()
        {
            // e.g. I changed the height of my object
            needsToBeChanged = true;
        }
    }

    class GLWindow
    {
        float[] vertices;
        float[] colors;
        float[] textures;

        GameLoop()
        {
            foreach(MyObject myObj in allMyObjects)
            {
                if(myObj.needsToBeChanged)
                {
                    // get the object's points for drawing it and set the global buffers up
                    myObj.needsToBeChanged = false;
                }
            }
            DrawArrays(); // draw only once (because all the data is in the buffer)
        }
    }

Only one draw call for everything; I guess this will be faster than the first method. The problem with this code is the big buffer that contains everything. Let's imagine we have inherited from MyObject and want to draw objects in different shapes. Now the loop gets more complicated. You may have to split it because you need another shader, for example for text. Now you have two arrays... And with other changes the loop becomes more and more complicated and harder to maintain. What solutions (besides these two) are available for this kind of problem? Can I use the first way, or is there a better way where I can reduce the draw calls? How is it solved in other applications?
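For reference, a common middle ground between the two approaches is to group objects into batches keyed by shader/material, so state changes happen once per batch rather than once per object. A minimal sketch, with hypothetical names (Batch, vertexCountOf):

    #include <vector>
    #include <GL/glew.h>

    // Sketch: one buffer per shader/material combination; objects sharing a
    // batch are uploaded into the same VBO and drawn with a single call.
    struct Batch {
        GLuint shader;
        GLuint vbo;
        std::vector<MyObject*> objects;
    };

    void drawAll(std::vector<Batch>& batches)
    {
        for (Batch& batch : batches)
        {
            glUseProgram(batch.shader);              // state change once per batch
            glBindBuffer(GL_ARRAY_BUFFER, batch.vbo);
            // re-upload only objects flagged dirty with glBufferSubData(...),
            // then issue one draw for the whole batch:
            glDrawArrays(GL_TRIANGLES, 0, vertexCountOf(batch));
        }
    }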
How to obtain touch events from a GL viewport, not the whole screen? Background: I'm implementing viewport resizing in order for my game to maintain the same display ratio on all devices. However, I've found an issue with getting touch events. Basically, if my viewport uses the whole of the physical screen's width and height, then there is no issue. However, if it is resized, say in the vertical, then my touch events are thrown off, because touch events are registered from the top of the physical screen. Hence if I put a sprite at, say, position 50 in the Y direction, then it puts the sprite 50 pixels down from the top of the viewport, not the top of the actual screen. Do I have to manage this myself by adding my viewport's offset to my touch events, or is there a way to tell Android to register 0,0 as the top of the viewport as opposed to the actual screen? Here is a graphic to illustrate what I mean; any help would be appreciated.
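As far as I know, Android reports touch coordinates relative to the view, so offsetting them yourself is the usual approach. A minimal sketch (C++-style; the arithmetic is the same in Java), assuming hypothetical viewportX/Y/Width/Height values matching what was passed to glViewport, and a logical game resolution:

    // Convert a raw touch position into viewport-local (game) coordinates.
    float toGameX(float touchX) { return (touchX - viewportX) * logicalWidth  / viewportWidth;  }
    float toGameY(float touchY) { return (touchY - viewportY) * logicalHeight / viewportHeight; }

    // Optionally ignore touches that land in the letterbox bars:
    bool insideViewport(float x, float y)
    {
        return x >= viewportX && x < viewportX + viewportWidth
            && y >= viewportY && y < viewportY + viewportHeight;
    }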
Invalid GLSL on some machines I'm writing a game engine using OpenGL 4.3 and gcc 5, mainly to teach myself graphics programming. Initial development was on my Surface Pro 3 using mingw-w64 and worked like a charm. I've decided to move to my desktop, which is running two GTX 670s, and both Windows and Arch Linux. I've made sure that I have the latest NVIDIA drivers installed on both operating systems, but am having issues with my vertex and fragment shaders. All I know is that neither system compiles the shaders, but the OpenGL Reference Compiler simply issues a warning saying that OpenGL 4.3 might not be fully implemented. I've also tried downgrading all the way to OpenGL 4.0, but this doesn't seem to fix the issue. Below are the shaders that I'm currently trying out. Hoping someone more experienced than I am can lend a hand?

shader.vert:

    #version 430

    layout (location = 0) in vec3 vertex_position;
    layout (location = 1) in vec2 vt;

    uniform mat4 cam_view, cam_proj, sprite_matrix;

    out vec2 texture_coordinates;

    void main() {
        texture_coordinates = vt;
        gl_Position = cam_proj * cam_view * sprite_matrix * vec4(vertex_position, 1.0);
    }

shader.frag:

    #version 430

    in vec2 texture_coordinates;
    layout (binding = 0) uniform sampler2D tex;
    out vec4 frag_colour;

    void main() {
        vec4 texel = texture(tex, texture_coordinates);
        frag_colour = texel;
    }

Thanks in advance :)
Running OpenGL app on Windows XP x86 produces incorrect texture colors I'm working with the Cen64 emulator and I compiled from source an x86 version that operates fine on Windows 10 x64. As soon as I run it on a Windows XP x86 machine, the colors are all incorrect. Here are some screenshots. Running on Win 10: Running on Win XP SP3: The color format is set to GL_UNSIGNED_SHORT_5_5_5_1 and the internal format is GL_RGBA:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, hres + hskip, vres, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, buffer);

I'm pretty new to configuring color packing formats, but it appears like some color channels are getting swapped? Maybe some sort of endianness issue going from an x64 to an x86 architecture? Please let me know if I need to show more of my code to help debug the issue. Thanks for any help!
What OpenCL video card (or FPGA) features are needed for high speed multiplication? I'm benchmarking some cryptography-related software and am looking for video cards that are better at parallel multiplication vs. parallel addition. Is there any prior work that graphs video card performance against math operations (calculating with large numbers, like 256 to 2048 bits, calculating the modulus, etc.)? What GPU features should I look for?
VBO glGenBuffers() IllegalStateException I'm kind of a noob in OpenGL, but I have this problem: I get an IllegalStateException at this code:

    int vboVertex = glGenBuffers();

In detail, the exception is:

    Exception in thread "main" java.lang.IllegalStateException: Function is not supported
        at org.lwjgl.BufferChecks.checkFunctionAddress(BufferChecks.java:58)
        at org.lwjgl.opengl.GL15.glGenBuffers(GL15.java:116)
        at Mesh.initVbo(Mesh.java:49)

OpenGL version: 2.0.5819 WinXP Release. Thanks for the support, and sorry for the horrible English.
Game Engine Design: Ubershader / shader management design I want to implement a flexible ubershader system with deferred shading. My current idea is to create shaders out of modules, which deal with certain features, such as FlatTexture, BumpTexture, displacement mapping, etc. There are also little modules which decode color, do tone mapping, etc. This has the advantage that I can replace certain types of modules if the GPU doesn't support them, so I can adapt to the current GPU capabilities. I am not sure if this design is good. I fear I could make a bad design choice now and pay for it later. My question is: where do I find resources, examples, and articles about how to implement a shader management system effectively? Does anyone know how the big game engines do this?
How can I use OpenGL and D3D to render to the same window at the same time? I have a main render loop in which initial drawing is done via OpenGL to an SDL window, and after that the same window handle is passed to a Direct3D device, which does subsequent rendering. Once I execute the program, it will initially draw the OpenGL scene and then do the Direct3D drawing, but that latter drawing overwrites the OpenGL work. What I want to see is both drawings in parallel. How can I accomplish that?
Is OpenGL appropriate for 2D games? I have been teaching myself the OpenGL library for a while now, and want to start making a game. However, for an easier introduction, I want to start with something 2D, such as a top-down Pokémon-style game. Is this a good plan, or is OpenGL made specifically for 3D?
Using GLEW, glViewport cannot get the correct proportion of a window The generated window resolution is (1334, 750). When I call glViewport(0, 0, 1334, 750), the whole window is filled, but when I call glViewport(0, 0, 1334 / 2, 750), the result is as below. The proportion is obviously wrong; does anyone know what happened?
Problem with rotating a camera with the mouse (OpenGL/GLFW) I am trying to move my camera with my mouse. I can translate it just by changing the current position of my camera, but when I want to move the forward vector and the up vector of my camera it doesn't work very well; the camera seems to move even though that's not what should happen...

    void Camera::moveCameraRotateUpdtade(GLFWwindow* window)
    {
        // cursor movement ...
        double xpos, ypos;
        Vec3f new_front;
        std::cout << "front " << CameraFront << std::endl;
        glfwGetCursorPos(window, &xpos, &ypos);
        if(!(xpos > width - 1 || xpos < 0) && !(ypos > height - 1 || ypos < 0))
        {
            float x = (2.0f * xpos) / width - 1.0f;
            float y = 1.0f - (2.0f * ypos) / height;
            horizontalAngle += mouseSpeed * float(x);
            verticalAngle += mouseSpeed * float(y);
            std::cout << "x y " << x << " " << y << std::endl;
            std::cout << "horizontalAngle verticalAngle " << horizontalAngle << " " << verticalAngle << std::endl;
            new_front = Vec3f(cos(verticalAngle) * sin(horizontalAngle),
                              sin(verticalAngle),
                              cos(verticalAngle) * cos(horizontalAngle));
            Vec3f right = Vec3f(sin(horizontalAngle - M_PI / 2.0f), 0.0, cos(horizontalAngle - M_PI / 2.0f));
            CameraUp = right.cross(new_front);
            CameraUp = CameraUp.normalize();
            CameraFront = new_front.normalize();
            std::cout << "CameraFront " << CameraFront;
            std::cout << "CameraUp " << CameraUp;
            std::cout << "position " << positionCam;
            LookAt(positionCam + CameraFront, positionCam, CameraUp);
        }
    }

Here is my LookAt function:

    Mat4f Camera::LookAt(Vec3f target, Vec3f position, Vec3f CameraUp)
    {
        // what I need to compute my camera parameters...
        Vec3f f = (target - position).normalize();
        Vec3f u = CameraUp.normalize();
        Vec3f s = f.cross(u).normalize();
        u = s.cross(f);
        Mat4f R;
        R.setCol(0, s);
        R.setCol(1, u);
        Vec3f f_tmp = f * -1.0;
        R.setCol(2, f_tmp);
        R.setElement(0, 3, -1.0f * s.dot(position));
        R.setElement(1, 3, -1.0f * u.dot(position));
        R.setElement(2, 3, f.dot(position));
        LookAt = R.transpose();
        return LookAt;
    }

Here is a video showing my problem: https://www.youtube.com/watch?v=Am 1sfMQllQ
Using index arrays with OpenGL VBOs I've recently started reading about VBOs. If, for example, I want to draw a cube using VBOs, can I use one VBO to hold the coordinates for the 8 vertices, and another one as an index array to specify the order in which the vertices are drawn? If it's possible, I'd appreciate some help on the matter, or a link to a good tutorial you may know on the subject. Thanks.
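For what it's worth, this is exactly what GL_ELEMENT_ARRAY_BUFFER is for. A minimal sketch, assuming cubeVertices and cubeIndices arrays are defined elsewhere:

    // One buffer for the 8 cube vertices, one for the draw order;
    // glDrawElements then walks the index buffer.
    GLuint vbo, ibo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), cubeVertices, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), cubeIndices, GL_STATIC_DRAW);

    // At draw time, with both buffers bound and vertex attributes set up:
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0); // 12 triangles * 3 indices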
Indices for surface of revolution I'd like to implement a surface of revolution. I already implemented creating the vertices based on a 2D line. I now want to get the indices to render the mesh with GL_TRIANGLES. Here's the code to create the 3D vertices out of a 2D line:

    void createVertices(vector<Vector2>& points, unsigned int iterations)
    {
        int i;
        int j;
        unsigned int index;
        vector<Vertex> newVertices;
        vector<unsigned int> newIndices;
        for(j = 0; j < points.size(); j++)
        {
            for(i = 0; i < iterations; i++)
            {
                Vector2 p = points.at(j);
                float theta = M_PI * 2.0 * (float)i / (float)iterations;
                float x = sinf(theta) * p.x;
                float z = cosf(theta) * p.x;
                newVertices.push_back(Vertex(x, p.y, z));
            }
        }
    }

and here's the code that will be called to save the vertices:

    void addVertices(vector<Vertex>& vertices, vector<unsigned int>& indices)
    {
        this->vertices = vertices;
        this->indices = indices;
        unsigned int verticesSize = vertices.size() * sizeof(Vertex);
        unsigned int indicesSize = indices.size() * sizeof(unsigned int);
        float* vertexBuffer = new float[vertices.size() * sizeof(Vertex)];
        createBuffers(vertices, vertexBuffer);
        glGenVertexArrays(1, &VAO);
        glBindVertexArray(VAO);
        glGenBuffers(1, &verticesVBO);
        glBindBuffer(GL_ARRAY_BUFFER, verticesVBO);
        glBufferData(GL_ARRAY_BUFFER, verticesSize, vertexBuffer, GL_STATIC_DRAW);
        glGenBuffers(1, &indicesVBO);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesVBO);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesSize, &indices[0], GL_STATIC_DRAW);
        delete[] vertexBuffer;
    }

Does anyone have a basic idea?
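For reference, one common way to build the GL_TRIANGLES index list for a surface of revolution: connect each ring of vertices to the next ring with two triangles per segment, wrapping at the seam. A minimal sketch matching the vertex layout produced above (each ring of ringSize == iterations vertices stored contiguously; the winding order is an assumption):

    #include <vector>

    std::vector<unsigned int> makeIndices(unsigned int ringCount, unsigned int ringSize)
    {
        std::vector<unsigned int> idx;
        for (unsigned int j = 0; j + 1 < ringCount; ++j)        // ring j to ring j+1
        {
            for (unsigned int i = 0; i < ringSize; ++i)         // each segment of the ring
            {
                unsigned int i0 = j * ringSize + i;
                unsigned int i1 = j * ringSize + (i + 1) % ringSize; // % wraps the seam closed
                unsigned int i2 = (j + 1) * ringSize + i;
                unsigned int i3 = (j + 1) * ringSize + (i + 1) % ringSize;
                // two triangles forming the quad between the rings:
                idx.insert(idx.end(), { i0, i1, i2,  i2, i1, i3 });
            }
        }
        return idx;
    }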
How can I benefit when I don't use the GPU? I am trying to make a 3D game with C++, SDL, and OpenGL. My program roughly looks like this: the control function has only CPU operations; the draw function has CPU operations and OpenGL function calls. (I just picked numbers at random.) How can I use the GPU while the program is in the control function? Or do I even need to? Do game engines use the GPU all the time, for example Unity, or Unreal Engine?
How do I better organise rendering in an entity component system? I'm developing an isometric RPG with 3D characters in a 2D level. I was using a standard object-oriented programming paradigm, but was faced with a lot of issues. Recently, I learned about entity component systems, and am currently rewriting my engine using this paradigm. I use OpenGL, and have two rendering systems: one for drawing in 2D, and one for drawing in 3D. Each system consists of one VBO id with the appropriate geometry, a shader program id, the id of the texture unit bound to the texture atlas (which accommodates all tilesets or all model textures), and other information like that. The two systems provide different logic for drawing in 2D and 3D. When it comes to the components for these systems, I am stuck. Say I have a component defined by the buffer offset from the beginning of the VBO and the primitive count that needs to be rendered. Let's pretend that I have two instances of these components. The first instance belongs to the level floor entity, and describes a bunch of quad tiles. The second instance is a character mesh that belongs to the player entity. The first component instance should use the 2D rendering system, and the second instance should use the 3D rendering system, but the data structures of these instances are identical. It must be said that I use EntityX, and the components get into the systems automatically depending on their type. Both systems require position and graphics components, so both systems will take both component instances, and one of these components will be processed improperly: a 2D graphics component in a 3D rendering system, and vice versa. I'm thinking about creating "hollow" components, like sprite and model, which will indicate what the graphics components in the entity mean. Since these components will not contain any properties, and would be used only as flags, this solution seems clumsy. How do I better organise rendering in an entity component system?
How to store renderer vertex/index data in scene graph objects? I have a SceneNode class which contains a Mesh instance. The Mesh class stores client-side information such as vertex and index arrays (before they're uploaded to the GPU). I also have an abstracted Renderer class, for GL and D3D, which renders the SceneNodes. However, I'm not sure where I should store the API-specific variables, e.g. a GLuint via glGenBuffers for GL, and an ID3D11Buffer for D3D. The few options I've considered are:

- Create a derived Mesh class for each API, e.g. GLMesh/D3DMesh
- Create a derived MeshData class for each API, which is stored in the main Mesh class
- Store a map of Mesh to API variables in each renderer, e.g. perform a lookup of Mesh to GLuint/ID3D11Buffer for each object that is rendered (variables would have to be generated after the scene had been updated, but before rendering)
- Separate the logic of rendering from scenes, by visiting the SceneGraph after update and generating a RenderGraph of all renderable nodes in the scene

What's the recommended way of doing this?
Blur gets displaced compared to original image I have implemented SSAO, and I'm using a blur step to smooth it out. The problem is that the blurred texture is slightly displaced compared to the original. I'm blurring using a 4x4 kernel, since that was my noise kernel size in the SSAO pass. The following is the blur shader:

    float result = 0.0;
    for(int i = 0; i < 4; i++)
    {
        for(int j = 0; j < 4; j++)
        {
            vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
            result += texture(aoSampler, TexCoord + offset).r;
        }
    }
    out_AO = vec4(vec3(0.0), result * 0.0625);

where TEXEL_SIZE is one over my window resolution. I was thinking that this was an error based on how OpenGL counts the texel center, so I tried displacing the texture coordinate I was using by 0.5 * TEXEL_SIZE, but there was still a slight displacement. The texture input to my blur shader has wrap parameters:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

When I tell the blur shader to just output the value of the pixel, the result is not displaced, so it must have something to do with how neighboring pixels are sampled. Any thoughts?
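Worth noting: the loop above only samples texels at offsets 0..3, i.e. to one side of the current pixel, which by itself shifts the average by about 1.5 texels. A hedged sketch of a centered 4x4 window, written with GLM whose syntax mirrors GLSL line for line (sampleR is a hypothetical stand-in for texture(...).r):

    // Centered 4x4 window: offsets -1.5, -0.5, +0.5, +1.5 texels per axis,
    // so the average is taken symmetrically around the current pixel.
    float result = 0.0f;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
        {
            glm::vec2 offset = (glm::vec2(float(i), float(j)) - 1.5f) * TEXEL_SIZE;
            result += sampleR(aoSampler, TexCoord + offset);
        }
    // then result * 0.0625, as before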
OpenGL glTexImage3D not working? I'm trying to use array textures with OpenGL 3.3, since that is the target version for my application. So I have to use the glTexImage3D() method instead of glTexStorage3D(). The problem is, if I use glTexImage3D() instead of glTexStorage3D(), the screen is always black. The other way round, everything works. Here is how I create the array texture:

    public ArrayTexture(int width, int height) {
        this.width = width;
        this.height = height;
        this.arrayTextureHandle = GL11.glGenTextures();
        GL13.glActiveTexture(GL13.GL_TEXTURE0);
        GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, arrayTextureHandle);

        // Works perfectly:
        GL42.glTexStorage3D(GL30.GL_TEXTURE_2D_ARRAY, 1, GL11.GL_RGBA8, width, height, 10);

        // Doesn't work with these parameters:
        GL12.glTexImage3D(GL30.GL_TEXTURE_2D_ARRAY, 0, GL11.GL_RGBA8, width, height, 10, 0,
                          GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);

        GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, 0);
    }

    public void addTexture(ScrollingSprite... sprites) {
        if (zLayerCounter + sprites.length <= MAX_LAYERS) {
            GL13.glActiveTexture(GL13.GL_TEXTURE0);
            GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, arrayTextureHandle);
            Arrays.asList(sprites).forEach(sprite -> {
                sprite.getTexture().getTextureData().prepare();
                GL12.glTexSubImage3D(GL30.GL_TEXTURE_2D_ARRAY,
                                     0,                     // Mipmap number
                                     0, 0, zLayerCounter++, // xoffset, yoffset, zoffset
                                     width, height, 1,      // width, height, depth
                                     GL11.GL_RGBA,          // format
                                     GL11.GL_UNSIGNED_BYTE, // type
                                     sprite.getTexture().getTextureData().consumePixmap().getPixels()); // pointer to data
                GL11.glTexParameteri(GL30.GL_TEXTURE_2D_ARRAY, GL11.GL_TEXTURE_WRAP_S, sprite.getTextureWrapS());
                GL11.glTexParameteri(GL30.GL_TEXTURE_2D_ARRAY, GL11.GL_TEXTURE_WRAP_T, sprite.getTextureWrapT());
            });
            GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, 0);
        } else {
            throw new IllegalArgumentException("Maximum number of textures reached!");
        }
    }
How to implement a physical toss with a perspective effect on Android? I'm working on a project that looks like PaperToss. Instead of tossing a page, you toss a coin. Suppose that I have a coin in three-dimensional space with coordinates at A(x,y,z). I throw that coin ahead; after 1/100 of a second, the coin moves from A(x,y,z) to A'(x',y',z'). This way, I have two problems to solve: 1. Where will the coin be at time t? 2. How can I display this on a screen? For 2., I thought about using orthographic projection & perspective projection. I'm told that OpenGL can help me, but I don't know how. How can I solve 1. and 2.?
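For problem 1, a minimal sketch of the standard ballistic formula (constant gravity, no air drag), shown with GLM:

    #include <glm/glm.hpp>

    // Position of the coin t seconds after launch, from initial position p0,
    // launch velocity v0 and gravity g (e.g. (0, -9.8f, 0)).
    glm::vec3 positionAt(const glm::vec3& p0, const glm::vec3& v0,
                         const glm::vec3& g, float t)
    {
        return p0 + v0 * t + 0.5f * g * t * t;
    }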
C++ OpenGL ShadowMap Issue: Artifacts I am currently implementing basic shadow mapping in my custom C++ engine using GLSL 4.10. It is currently working with basic PCF anti-aliasing and very minimal reduction of unwanted artifacts. Here is a screenshot for reference. Recently, I have started modifying my shadow map pipeline to reduce shadow acne. After reading through various articles, many suggest either using a bias value, which is subtracted from the depth value being compared to the shadow map:

    // Shadows
    float bias = 0.0005;
    vec4 shadowCoordinate = u_worldToShadowMapSpaceTransform * vec4(worldPosition.xyz, 1.0);
    shadowCoordinate.z -= bias;
    float shadowValue = textureProj(u_shadowMap, shadowCoordinate);

and/or culling front faces when rendering to the shadow map:

    glCullFace(GL_FRONT);
    // Render to ShadowMap FBO
    glCullFace(GL_BACK);
    // Render Scene

Culling front faces reduces most of the shadow acne but creates another problem that none of the articles have mentioned. The screenshot below shows the issue. If you look at the white box in front of the horned creature, you will notice the creature's shadow is cast upon the other side of the box. This is because the front faces of the box are culled, which leads the back faces to compare their depth values against geometry to the north of the box. Naturally, this also leads to these shadows being cast below the floor as well as above. Here is a screenshot of the bottom of the floor. It is difficult to see, but you can see the shadows being cast. I believe many 3D engines utilize variations of these artifact reduction techniques. So, how do they avoid the shadow casting issue described above?
How do you do very simple texturing in OpenGL 3+? I managed to create a sphere (calculating all the vertices etc.). Now I want to apply a texture to it. I have no idea how. I googled some "OpenGL 3 texture tutorials" but I can't seem to find anything simple. I am using shaders, btw. Is there anywhere a very basic step-by-step tutorial on how to implement a texture? A texture is just an image that wraps around a shape, right? Sounds simple, but from what I have seen on the web it's kind of complicated.
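For reference, a minimal sketch of the core-profile texturing steps. It assumes a pixels buffer of RGBA8 image data loaded elsewhere, a vec2 UV attribute passed through to the fragment shader, and "uniform sampler2D tex;" declared in it:

    // Create and fill the texture object.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // At draw time: bind the texture to unit 0 and point the sampler at it.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glUniform1i(glGetUniformLocation(program, "tex"), 0);

    // Fragment shader side (GLSL): fragColor = texture(tex, uv);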
Why do I get an error "exit() redefinition" while using OpenGL? I am using Visual C++ 2010 and I get the following errors:

    1> d:\visual c++\vc\include\stdlib.h(353): error C2381: 'exit' : redefinition; __declspec(noreturn) differs
    1> d:\glut-3.7.6-bin\include\gl\glut.h(146): see declaration of 'exit'

In my project I have 3 files. Here are the includes from each of them.

Main.cpp:

    #include <iostream>
    #include "BmpLoader.h"
    #include <glut.h>

BmpLoader.h:

    #include <stdio.h>
    #include <glut.h>
    #include <Windows.h>

BmpLoader.cpp:

    #include "BmpLoader.h"

As far as I know, I receive this error due to the include order. I have tried several arrangements of the includes so far, but haven't figured out the proper arrangement. What library combination causes this error, and what include order should I use in order to prevent it?
Alpha blending performance on iOS I've got a few questions without answers about my game's development; can you help me? Here are the questions:

1. In my game, when a large object appears on the screen, the GPU hits its limits and my framerate becomes very slow. How can I handle that? (I'm working on mobile, and currently I'm sorting objects from back to front.)
2. I've read some stuff about batching (How to draw efficiently a large number of objects with alpha blending?). How can I handle batching when my game needs to draw the scene from back to front with alpha blending?
3. Which is faster: adding a condition to test pixel transparency in the fragment shader, or enabling alpha blending? (I get colors around my sprites without alpha blending or the condition.)

Thanks!
Fragment shader with SNORM textures I want to apply an SNORM texture using GL_TEXTURE_3D as the target. What should the fragment shader look like for this? Also, what should the data type of the texture data be?
What does "GL CLAMP TO EDGE should be used in NPOT textures" mean? I have two sRGB PNG images I am using for textures. One is 64x64, and works fine. The other is 64x47, and when I attempt to use it I get an error reason 'GL CLAMP TO EDGE should be used in NPOT textures' What does this mean, and how do I address it?
Are display lists faster than VBOs? I'm making a voxel rendering engine. My "chunks" are 32x32x256 blocks, and I can render a 16x16 square of them (which corresponds to Minecraft's maximal render distance). I'm using VBOs holding XYZ float positions and UV float coordinates; I use one VBO per chunk. VBOs are allocated with:

    GL44.glBufferStorage(GL15.GL_ARRAY_BUFFER,
        BufferUtils.createFloatBuffer(32 * 32 * 256 * 6 * 6 * 5),
        GL30.GL_MAP_WRITE_BIT | GL44.GL_MAP_PERSISTENT_BIT);

I map them once at creation:

    GL30.glMapBufferRange(GL15.GL_ARRAY_BUFFER, 0, 32 * 32 * 256 * 6 * 6 * 5 * 4,
        GL30.GL_MAP_WRITE_BIT | GL30.GL_MAP_UNSYNCHRONIZED_BIT | GL44.GL_MAP_PERSISTENT_BIT,
        null);

Then I can write directly into the buffer and make changes visible to the GPU with:

    GL42.glMemoryBarrier(GL44.GL_CLIENT_MAPPED_BUFFER_BARRIER_BIT);

This works, except I get about 20 FPS, while Minecraft gets 300 without frustum culling. The CPU is clearly not the bottleneck, as I have 1M FPS when the window is minimized. Am I doing something wrong, or are VBOs slower than Minecraft's display lists? How can I speed this up? System specs: i7-4770 & GTX 770 OC.
OpenGL app not setting cursor position appropriately I have written a small application using OpenGL and have implemented some rudimentary camera controls. Unfortunately, I cannot get the application to set my cursor position correctly. The cursor is never set to where I tell it to go, so my application just reacts to where the cursor is on my entire screen. I first attempted to use GLFW, and when I saw that I couldn't set the cursor appropriately, I decided to try SFML. Neither one works. I'm on an Arch Linux install with a Gnome desktop. I've been trying to figure this out for a few days now to no avail. The relevant code is as follows:

    sf::Vector2i cursor_pos = sf::Mouse::getPosition(window);
    sf::Mouse::setPosition(sf::Vector2i(1280 / 2, 720 / 2), window);

This gets called every frame inside a function that messes with some matrices. I also set the cursor position at initialization. Any hints or advice would be greatly appreciated.
How to find the "up" direction of the view matrix, with GLM Using OpenGL and the GLM matrix library, I want to translate my camera relative to the world coordinate system. This requires me to compute the necessary view matrix. To initialise the view matrix, I used view matrix glm lookAt(eye, centre, up) Where eye (0, 0, 10), centre (0, 0, 0), and up (0, 1, 0). Suppose I want to now translate the view matrix by 5 unites in the camera's y direction, i.e. to move upwards relative to the model. I tried view matrix glm translate(view matrix, glm vec3(0, 5, 0)) This works fine at first, because the view and world coordinate systems are aligned. However, if I then rotate my camera a bit, and then perform this translation, it no longer works. Instead, it moves the camera along the world coordinate system, rather than the camera coordinate system. So what I need to do, is to find the vector which represents the "up" direction of the camera, and then translate along this vector. This is similar to the inverse of the glm lookAt(eye, centre, up) function. In summary Is it possible to find the "up" vector of the view matrix using the GLM library?
Getting a ray using gluUnProject or an inverted MVP matrix I've read a lot of topics here, on SO, opengl.org, etc. An example of how gluUnProject should work (from the NeHe tutorial):

    winX = (float)x;
    winY = (float)viewport[3] - (float)y;
    glReadPixels(x, int(winY), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
    gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

The problem is that gluUnProject returns mostly zeros; if not zeros, then it returns the camera position in world coordinates. The winZ value I'm getting from glReadPixels is always correct: if I scale it like below,

    float z_distance = -ProjectionMatrix[14] / (winZ * 2.0f - 1.0f + ProjectionMatrix[10]);

I get the distance between the near plane and the parallel plane that contains the mouse click point. Look at the sample below:

    gluUnProject(winX, winY, -1, modelview, projection, viewport, &posX1, &posY1, &posZ1);
    gluUnProject(winX, winY, 0, modelview, projection, viewport, &posX2, &posY2, &posZ2);
    gluUnProject(winX, winY, 1, modelview, projection, viewport, &posX3, &posY3, &posZ3);
    gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX4, &posY4, &posZ4);

The result is always the same: (posX1,posY1,posZ1) = (posX2,posY2,posZ2) = (posX3,posY3,posZ3) = (posX4,posY4,posZ4), and it is either (0,0,0) or (CameraPosition.x, CameraPosition.y, CameraPosition.z). After gluUnProject I tried to do it manually:

    float NDC_x = 2.0f * winX / Width - 1.0f;
    float NDC_y = 1.0f - 2.0f * winY / Height;
    float winZ = 0;
    glReadPixels(winX, (Height - winY), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
    vec4 point1 = inverse(ViewProjectionMatrix) * vec4(NDC_x, NDC_y, winZ, 1.0f);
    vec4 point2 = inverse(ViewProjectionMatrix) * vec4(NDC_x, NDC_y, -1.0f, 1.0f);
    vec4 point3 = inverse(ViewProjectionMatrix) * vec4(NDC_x, NDC_y, 0.0f, 1.0f);
    vec4 point4 = inverse(ViewProjectionMatrix) * vec4(NDC_x, NDC_y, 1.0f, 1.0f);

Now all the points' coordinates are different and never zeros; they seem like real values, but I don't know how to handle them, or I missed something. BTW, I used a different approach to get the ray (using angles) and it works perfectly. Check here. Q1: Any suggestions what's wrong with gluUnProject? Q2: What is my mistake in the manual implementation? I thought I messed up the row/column-major format, but transposing the matrices had no effect. I read/write matrices in column-major format, so the last column is (ViewMatrix[12], ViewMatrix[13], ViewMatrix[14], ViewMatrix[15]).
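One step the manual version above appears to skip: the result of multiplying NDC coordinates by the inverse view-projection matrix is homogeneous, so it must be divided by w before it is a usable world position. A minimal sketch with GLM:

    #include <glm/glm.hpp>

    // Unproject the near- and far-plane points, then perspective-divide.
    glm::vec4 nearH = glm::inverse(ViewProjectionMatrix) * glm::vec4(NDC_x, NDC_y, -1.0f, 1.0f);
    glm::vec4 farH  = glm::inverse(ViewProjectionMatrix) * glm::vec4(NDC_x, NDC_y,  1.0f, 1.0f);
    glm::vec3 nearPoint = glm::vec3(nearH) / nearH.w;   // the divide by w is the key step
    glm::vec3 farPoint  = glm::vec3(farH)  / farH.w;

    glm::vec3 rayOrigin = nearPoint;
    glm::vec3 rayDir    = glm::normalize(farPoint - nearPoint);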
Mapping a rectangular texture onto a trapezoid-like shape I have a problem with rendering rectangular textures on non-rectangular surfaces. While googling, I found a thread with a person having a pretty similar problem: http://www.gamedev.net/topic/419296-skewedsheared-texture-mapping-in-opengl. But in my case, I cannot be sure about the surface's exact properties. It's not a regular trapezoid; it can be slightly bent. Which means that I cannot just calculate the correct texture matrix, or at least I don't know how. I've tried to render a triangle-fan-like shape, but it just changed the characteristics of the final distortion.
OpenGL light not shining on quad I've constructed a scene using OpenGL/GLUT with a spot light, but I'm having trouble with the light not shining on some of the walls. What is going on, and how do I solve it?
How to organize a fallback system for older GPUs in OpenGL? I don't want to make this question too broad or opinion-based, but I really need some help about good practices. The scenario: I created a particle engine with functions which require at least OpenGL 4.3. But some GPUs don't support this version, so I want to try to implement a fallback system for it. And it's not only the particle system; there are many features in 3D engines that can't simply be replaced by a different function name. Some features need different logic behind them, and it can get quite complex. What are the best practices for managing fallback rendering in OpenGL?
Tips for writing 3D collision detection with OpenGL I would like any tips/articles/tutorials on how to write collision detection using OpenGL and C++ in 3D, mainly just simple box collisions etc., but if there are any advanced resources, that would be great too. I also don't want to use any external libraries if possible. Thanks.
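For the simple box case mentioned, a minimal sketch of the axis-aligned bounding box (AABB) overlap test: two boxes intersect exactly when their intervals overlap on all three axes.

    #include <glm/glm.hpp>

    struct AABB { glm::vec3 min, max; };

    // Separating-axis test specialized to world axes: any axis with no
    // overlap means no collision.
    bool intersects(const AABB& a, const AABB& b)
    {
        return a.min.x <= b.max.x && a.max.x >= b.min.x
            && a.min.y <= b.max.y && a.max.y >= b.min.y
            && a.min.z <= b.max.z && a.max.z >= b.min.z;
    }

One design note: collision detection is pure math and runs on the CPU side of the engine; OpenGL only comes in for visualizing the boxes, so no external library is strictly needed for tests like this.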
How to deal with VBOs when rendering meshes that may or may not be displayed? I'm working on a multiplayer game and will be displaying other players near the player. At most 16 players could be near the gamer, but there could also be 0. What I'm thinking of doing is setting up 16 empty VBOs so they're ready to load with characters' positions when the client receives them. Is it valid to create empty VBOs? If not, how should I go about rendering data that may or may not be there, depending on what the server is sending the client?
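For what it's worth, glBufferData accepts a null data pointer, which reserves storage without filling it. A minimal sketch of allocating up front and streaming player data in later (maxPlayerBytes, strideBytes and the data variables are assumptions):

    // Reserve space for up to 16 players' vertex data; the contents are
    // undefined until written.
    glBindBuffer(GL_ARRAY_BUFFER, playerVbo);
    glBufferData(GL_ARRAY_BUFFER, maxPlayerBytes, nullptr, GL_DYNAMIC_DRAW);

    // Later, when the server sends a player's positions:
    glBindBuffer(GL_ARRAY_BUFFER, playerVbo);
    glBufferSubData(GL_ARRAY_BUFFER, playerIndex * strideBytes, bytesReceived, data);

    // ...and simply skip the draw call for slots with no active player.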
glDrawElements draws nothing As the title says :( nothing is drawn on screen. Is there something I'm missing?

    Mesh* Mesh_Create(GLfloat vertices[], GLuint indices[], GLfloat uvs[], GLfloat colors[])
    {
        Mesh* mesh = (Mesh*)malloc(sizeof(Mesh));
        mesh->vertices = (GLfloat*)malloc(sizeof(vertices));
        memcpy(mesh->vertices, vertices, sizeof(vertices));
        mesh->indices = (GLuint*)malloc(sizeof(indices));
        memcpy(mesh->indices, indices, sizeof(indices));
        mesh->elementCount = sizeof(indices) / sizeof(GLuint);

        glGenVertexArrays(1, &mesh->vao);
        glBindVertexArray(mesh->vao);

        glGenBuffers(1, &mesh->vbo[VERTEXBUFFER]);
        glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo[VERTEXBUFFER]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(mesh->vertices), &mesh->vertices, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 2, GL_FLOAT, false, 0, 0);
        glEnableVertexAttribArray(0);

        glGenBuffers(1, &mesh->vbo[TEXTUREBUFFER]);
        glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo[TEXTUREBUFFER]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(uvs), uvs, GL_STATIC_DRAW);
        glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
        glEnableVertexAttribArray(1);

        glGenBuffers(1, &mesh->vbo[INDICESBUFFER]);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->vbo[INDICESBUFFER]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(mesh->indices), &mesh->indices, GL_STATIC_DRAW);

        glGenBuffers(1, &mesh->vbo[COLORBUFFER]);
        glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo[COLORBUFFER]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(colors), colors, GL_STATIC_DRAW);
        glVertexAttribPointer(3, 4, GL_FLOAT, false, 4 * sizeof(GLfloat), NULL);
        glEnableVertexAttribArray(3);

        glBindVertexArray(0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        return mesh;
    }

    Mesh* Mesh_CreateQuad()
    {
        GLfloat vertices[] = { -0.5, -0.5,  0.5, -0.5,  0.5, 0.5,  -0.5, 0.5 };
        GLuint indices[] = { 0, 1, 2,  0, 2, 3 };
        GLfloat uvs[] = { 0, 1,  1, 1,  1, 0,  0, 0 };
        GLfloat colors[] = { 1, 1, 1, 1,  1, 1, 1, 1,  1, 1, 1, 1,  1, 1, 1, 1 };
        return Mesh_Create(vertices, indices, uvs, colors);
    }

    void Mesh_Render(Mesh* mesh)
    {
        glBindVertexArray(mesh->vao);
        glDrawElements(GL_TRIANGLES, mesh->elementCount, GL_UNSIGNED_INT, &mesh->indices);
    }
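For reference, two spots in this pattern that commonly go wrong: sizeof on an array parameter yields the size of a pointer (array parameters decay), and the last argument of glDrawElements is a byte offset into the bound index buffer, not a client pointer. A minimal sketch with explicit sizes (the _Sized/_Fixed names and the parameters are hypothetical):

    /* Sketch: pass byte sizes in, since sizeof(vertices) inside the function
     * is just sizeof(GLfloat*) (4 or 8 bytes). */
    Mesh* Mesh_CreateSized(GLfloat* vertices, size_t vertexBytes,
                           GLuint* indices, size_t indexBytes)
    {
        Mesh* mesh = (Mesh*)malloc(sizeof(Mesh));
        mesh->elementCount = indexBytes / sizeof(GLuint);

        glGenVertexArrays(1, &mesh->vao);
        glBindVertexArray(mesh->vao);

        glGenBuffers(1, &mesh->vbo[VERTEXBUFFER]);
        glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo[VERTEXBUFFER]);
        glBufferData(GL_ARRAY_BUFFER, vertexBytes, vertices, GL_STATIC_DRAW); /* data pointer, not &mesh->vertices */
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(0);

        glGenBuffers(1, &mesh->vbo[INDICESBUFFER]);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->vbo[INDICESBUFFER]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indices, GL_STATIC_DRAW);

        glBindVertexArray(0);
        return mesh;
    }

    void Mesh_RenderFixed(Mesh* mesh)
    {
        glBindVertexArray(mesh->vao);
        /* byte offset 0 into the VAO's bound GL_ELEMENT_ARRAY_BUFFER: */
        glDrawElements(GL_TRIANGLES, mesh->elementCount, GL_UNSIGNED_INT, (void*)0);
    }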
Uniform Block: solve padding alignment for vec3 in CPU struct I have a struct on the CPU which I'm sending to a uniform block in my shader. After a bit of frustration I finally got it to work. The problem I had was that vec3s are actually treated as 16 bytes, or in other words they need to have padding. So to solve this I added a dummy float after each vec3 (see below). I know I could just use vec4, but since I want to treat alpha as a separate component I don't want to type that 4th component for each vector every time. Furthermore, I would have to use swizzling in the shader as well to separate alpha. Now to the question: is there any nicer/better way to solve the padding problem than having to add an extra float?

CPU:

    struct Material
    {
    public:
        vec3 Emissive;
        float p1;
        vec3 Ambient;
        float p2;
        vec3 Diffuse;
        float p3;
        vec3 Specular;
        float Shininess;
        float Alpha;
    };

Fragment shader:

    layout(std140) uniform uniMaterial
    {
        vec3 Emissive;
        vec3 Ambient;
        vec3 Diffuse;
        vec3 Specular;
        float Shininess;
        float Alpha;
    };

Regards, Tobias

My solution:

    struct GpuVec3
    {
    public:
        GpuVec3() {}
        GpuVec3& operator=(const vec3& other)
        {
            v.x = other.x; v.y = other.y; v.z = other.z; v.w = 1.0f;
            return *this;
        }
        GpuVec3& operator=(const vec4& other)
        {
            v.x = other.x; v.y = other.y; v.z = other.z; v.w = other.w;
            return *this;
        }
    private:
        vec4 v;
    };
    typedef GpuVec3 GpuVec4;
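For reference, a sketch of another option if C++11 is available: let the compiler insert the std140 padding via alignas instead of dummy floats or wrapper types. A float declared directly after a vec3 packs into the vec3's trailing 4 bytes, which matches how std140 places Shininess after Specular:

    #include <glm/glm.hpp>

    struct Material
    {
        alignas(16) glm::vec3 Emissive;   // each vec3 starts on a 16-byte boundary
        alignas(16) glm::vec3 Ambient;
        alignas(16) glm::vec3 Diffuse;
        alignas(16) glm::vec3 Specular;
        float Shininess;                  // lands at Specular's offset + 12, as in std140
        float Alpha;
    };
    static_assert(sizeof(Material) % 16 == 0, "UBO structs round up to 16 bytes");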
OpenGL 4D textures with bilinear interpolation I want to use an interpolated 4D texture in OpenGL, i.e. a texture that is accessed with a texture coordinate vector (s, t, p, q) and interpolated linearly in every texture coordinate. The extension GL_SGIS_texture4D does exactly this. However, it is unsupported on literally every GPU I have access to. If there was a GL_TEXTURE_3D_ARRAY, I guess I could sample the appropriate 2 layers of a 3D array texture with the (s, t, p) coordinates and then manually interpolate in the q coordinate. However, there are only "ordinary" GL_TEXTURE_2D_ARRAY texture arrays. A possible solution is to treat the texture array as a 2D array of 2D textures, sample the appropriate 4 layers with (s, t), and then manually interpolate in p and q. This method is somewhat constrained by GL_MAX_ARRAY_TEXTURE_LAYERS. I am also afraid of the relatively large number of 2D texture fetches required to fetch a single 4D texture value. Is there a simpler way of achieving 4D interpolated textures?
Performance issue in a simple game loop I have a problem with a simple game loop. My rendering and iteration functions together take 10 ms, yet the time between update and swapBuffers measures around 140 ms. What is causing this? Is there a straightforward way to fix it? Could it be related to the fact that each rect() (in a for loop) calls glDrawElements(), and it somehow queues up the whole process?

    void game::startup()
    {
        startup();
        glfwSwapInterval(1);
    }

    void game::render(double currentTime)
    {
        static const GLfloat green[] = { 0.0f, 0.25f, 0.0f, 1.0f };
        static const GLfloat one = 1.0f;
        glClearBufferfv(GL_COLOR, 0, green);
        glClearBufferfv(GL_DEPTH, 0, &one);

        auto t1 = std::chrono::high_resolution_clock::now();
        for (size_t y = 0; y < game_board.getWorldHeight(); y++)
            for (size_t x = 0; x < game_board.getWorldWidth(); x++)
                rect(x * cell_size, y * cell_size, cell_size, cell_size,
                     game_board.get_state(y * game_board.getWorldWidth() + x));
        auto t2 = std::chrono::high_resolution_clock::now();

        game_board.iterate(1);
        auto t3 = std::chrono::high_resolution_clock::now();

        float rendertime = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count() / 1000.0f;
        float generatetime = std::chrono::duration_cast<std::chrono::microseconds>(t3 - t2).count() / 1000.0f;

        std::chrono::milliseconds ms_perframe(1000 / 60); // About 60 fps
        auto cur_time = std::chrono::high_resolution_clock::now();
        float sleep_time = std::chrono::duration_cast<std::chrono::microseconds>(t1 + ms_perframe - t2).count() / 1000.0f;
        std::this_thread::sleep_for(t1 + ms_perframe - t3);

        cout << "render time (ms)" << rendertime << "\n"
             << "generate time (ms)" << generatetime << "\n"
             << "sleeping time (ms)" << sleep_time << "\n";
    }

Output (board size 256 x 128 = 32768 rectangles drawn per frame):

    render time (ms) 8.1
    generate time (ms) 1.3
    sleeping time (ms) 6.6
    SWAP TIME (ms) 146

Application run function, rendering part:

    startup();
    do
    {
        render(glfwGetTime());
        auto t1 = std::chrono::high_resolution_clock::now();
        glfwSwapBuffers(window);
        auto t2 = std::chrono::high_resolution_clock::now();
        float swap_time = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count() / 1000.0f;
        std::cout << "SWAP TIME (ms)" << swap_time << "\n";
        glfwPollEvents();
        running &= (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_RELEASE);
        running &= (glfwWindowShouldClose(window) != GL_TRUE);
    } while (running);
    shutdown();
    glfwDestroyWindow(window);
    glfwTerminate();

Rectangle draw function:

    // input is top-left corner of rectangle, width and height of rectangle,
    // and the state of the unit
    void rect(int x, int y, int w, int h, int state)
    {
        GLfloat fstate = (float)state * 0.5f;
        GLfloat vertices[] = {
            x,     y,     fstate, // Top left
            x + w, y,     fstate, // Top right
            x + w, y + h, fstate, // Bottom right
            x,     y + h, fstate  // Bottom left
        };
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
        glUseProgram(shaderProgram);
        glUniformMatrix4fv(mvp_location, 1, GL_FALSE, mvp);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    }
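For reference, the per-cell glBufferSubData plus glDrawElements pattern issues 32768 uploads and 32768 draw calls per frame, and with vsync enabled (glfwSwapInterval(1)) any GPU backlog shows up as time spent inside glfwSwapBuffers. A sketch of batching all cells into one upload and one draw; appendRect is a hypothetical helper that pushes the 4 corner vertices, and the index buffer is assumed to be pre-filled with quad indices for the whole board:

    #include <vector>

    std::vector<GLfloat> verts;
    verts.reserve(boardW * boardH * 4 * 3);    // 4 corners * 3 floats per cell
    for (size_t y = 0; y < boardH; ++y)
        for (size_t x = 0; x < boardW; ++x)
            appendRect(verts, x * cell_size, y * cell_size, cell_size, cell_size,
                       game_board.get_state(y * boardW + x));

    glUseProgram(shaderProgram);
    glUniformMatrix4fv(mvp_location, 1, GL_FALSE, mvp);
    glBufferSubData(GL_ARRAY_BUFFER, 0, verts.size() * sizeof(GLfloat), verts.data());
    glDrawElements(GL_TRIANGLES, 6 * boardW * boardH, GL_UNSIGNED_INT, 0); // one call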
How can I position framebuffer objects on the screen in OpenGL? I currently have two framebuffers which are drawn using:

    glBlitFramebuffer(0, 0, 1920, 1080, 0, 0, 1920, 1080, GL_COLOR_BUFFER_BIT, GL_NEAREST);

But the texture does not take up the whole screen, and I would like to position it at a certain x, y on the screen. The images by default start from 0,0 and then span the width/height provided; however, I would like the image to start at, say, 100, 100 and span the width/height provided. How would I do this?
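For what it's worth, the four coordinates before the mask are the destination rectangle, so offsetting them positions the blit on screen. A minimal sketch drawing the 1920x1080 source starting at (100, 100):

    glBlitFramebuffer(0, 0, 1920, 1080,                  // source rect (x0, y0, x1, y1)
                      100, 100, 100 + 1920, 100 + 1080,  // destination rect
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);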
OpenGL: How do I render a "shadow" for an object that's behind another? First off, an image from Fez that depicts the effect I'm after. I'm trying to achieve a similar effect in my project. I'm quite certain this is done with a stencil buffer, but the resources on such an effect are scarce. How should I approach this? I'm guessing I'm after a sort of "AND" stencil, where I'd render a semi-transparent black rectangle over the player covering only the pixels that had both the player and a map object. I'm not sure if it has any effect on it (besides maybe inverting the stencil operations..?), but for other reasons, my pipeline first renders the world and the player is rendered last.
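A hedged sketch of one way to get that "AND" with the stencil buffer, which happens to fit a world-first, player-last pipeline. It assumes the framebuffer has a stencil attachment that is cleared each frame, and drawWorld/drawPlayer/drawPlayerSilhouette are hypothetical:

    glEnable(GL_STENCIL_TEST);

    // Pass 1: the map writes 1 into the stencil buffer wherever it covers pixels.
    glStencilMask(0xFF);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawWorld();

    // Pass 2: draw the player normally; the depth test hides the occluded parts.
    glStencilMask(0x00);    // stop writing stencil
    drawPlayer();

    // Pass 3: redraw the player as a flat semi-transparent color, but only
    // where stencil == 1, i.e. where a map object covers the player.
    glDisable(GL_DEPTH_TEST);   // the silhouette must show through the occluder
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    drawPlayerSilhouette();

    glEnable(GL_DEPTH_TEST);
    glDisable(GL_STENCIL_TEST);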
OpenGL 4: overhead of several glUseProgram calls I'm developing a little 2D game using OpenGL 4.x, and I've also coded a very simple light system which does not take care of shadows. The main concept behind this light system is framebuffer multiplicative blending. I basically have two framebuffers: one for the normal scene render, and the other for the ambient light and the scene light sources. Then I blend those two framebuffers and I get my final result (which is pretty good, to be honest). In this model I have 3 different shader programs: one for rendering the normal scene (with texturing and other normal features), one for rendering lights, which computes light halos and other light effects, and one for blending the two framebuffers together. Now, in my main application loop I have to switch between those three shader programs in order to complete a full rendering cycle. I'm also planning to add more shaders to render different light effects and particles. At the moment I'm in the design phase of my game, so I'm not able to test the performance of this approach, and I have no previous experience with it. For the same reason, I'd like to start with the best approach possible for this situation. So my question is: considering your experience, is switching between several shader programs at each frame something good, or is it bad behaviour in a 2D environment? Is using fewer shader programs, but with more if statements and more unused uniform variables, a better solution?
Easy road from DisplayObject to Molehill? I have a finished Flash game which is rendered using the built-in display tree, i.e. Bitmaps contained in Sprites (and some text here and there, a few vector graphics, and one bitmap-filled shape). For extra performance, I'd like it to use Molehill for rendering, but that's not possible out of the box. What's the easiest way to make this game use Molehill when available, but fall back to the current method if it's not available?
OpenGL dynamic font glyph cache library I have begun work on an OpenGL application (all on my own and with little knowledge) and started with FTGL, rendering TrueType fonts, which, with a lot of text, has a great impact on frames per second. I believe that to overcome this limitation one needs to generate a font atlas, saving glyphs to OpenGL textures which are then drawn by shaders. I have done a search of GitHub for such a library and found several, a few of which are incomplete (and by their own admission have memory leaks). I am looking for, and hoping that one of you can recommend, the most stable OpenGL font library that you prefer which does the aforementioned. I also need Unicode support. Or a guide to help a noob like me achieve this. Thank you for reading.
GLSL: multiplying fragment output color by the normal after calling the texture function displays the wrong texture So I'm quite new to OpenGL/GLSL. I'm trying to make multitexturing and simple normal lighting work. Here is the vertex shader:

    #version 130

    uniform mat4 transform;

    in vec3 vertPos;
    in vec2 texUV;
    in vec3 vertNorm;
    in uint thisLayer;

    out vec2 fragTexCoord;
    out vec3 fragNorm;
    out float fragLayer;

    void main() {
        gl_Position = transform * vec4(vertPos, 1.0f);
        fragTexCoord = texUV;
        fragLayer = float(thisLayer);
        fragNorm = vertNorm;
    }

...and here is the fragment shader:

    #version 130

    uniform sampler2DArray texArray;
    uniform uint texCount;
    uniform float rTime;

    in vec2 fragTexCoord;
    in vec3 fragNorm;
    in float fragLayer;

    out vec4 color;

    void main() {
        float actual_layer = max(0, min(texCount - uint(1), floor(fragLayer + 0.5))); // Getting texture layer
        vec3 texCoord = vec3(fragTexCoord, actual_layer);
        color = texture(texArray, texCoord);
    }

And it works fine. Multiplying color.rgb by any constant works as expected:

    color = texture(texArray, texCoord);
    color.rgb *= vec3(0.5, 0.5, 0.5);

But if I try to multiply color.rgb by a normal, like this:

    vec3 finalNorm = (normalize(fragNorm) + vec3(1, 1, 1)) / vec3(2, 2, 2);
    color = texture(texArray, texCoord);
    color.rgb *= finalNorm;

...the wall texture changes to the last texture in the array (the one on the top). I have no idea why this happens. I normalize the vertex normals and map their values into the [0, 1] range, so the colors will stay in the [0, 1] range after multiplication as well. It is supposed to simply shift the colors' intensity, like it did when multiplying by a constant, not change the texture itself, right? I am aware of the fact that this is not how you perform normal lighting; however, this is not the correct behavior either, and I'm ripping my hair out as to why. Any help is appreciated; I will provide more info if needed.

EDIT: OS is Windows 10. GPUs: Intel HD 4600 / Nvidia GTX 860M (happens on both of them). OpenGL version 4.3.0, build 20.19.15.4624. Some additional info: I use the SOIL library to load data from PNG files. I use

    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, img_width, img_height, layer_count * 2, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);

to create a container for the texture array, and use

    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, img_width, img_height, layer_count,
                    GL_RGBA, GL_UNSIGNED_BYTE, data);

to load data into it. One issue I have with this is that glTexImage3D takes the layer count (the number of textures) multiplied by 2, and I have no idea why as well! Maybe this is somehow related to the issue I'm having above?
Get SFML to report the version of OpenGL that is being used How can I get SFML to report the version of OpenGL that is being used by the render window?
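For what it's worth, SFML exposes this through the context settings the window actually obtained. A minimal sketch using sf::Window::getSettings() (SFML 2):

    #include <SFML/Graphics.hpp>
    #include <iostream>

    int main()
    {
        sf::RenderWindow window(sf::VideoMode(800, 600), "GL version check");
        // getSettings() reports the settings of the context SFML actually
        // created, which may differ from what was requested.
        const sf::ContextSettings& s = window.getSettings();
        std::cout << "OpenGL " << s.majorVersion << "." << s.minorVersion << std::endl;
    }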
Prevent tile layout gaps I'm making a map viewer where you can specify where tiles go in a map and then see it on the screen, translate it, and zoom in and out. Here's a picture showing the tiles used to make my example map. Here are the tiles connected together, more or less my desired result. The problem is that at particular combinations of translations and zoom levels the spacing is off. Here's an example picture of the problem. In order to arrange the tiles I'm using a perspective projection matrix parametrized by the zoom level, a camera matrix which just translates the scene based on the camera's 2D position, and finally each tile is translated using another matrix derived from its x and y position: tile * camera * projection. How can I avoid this weird spacing issue at particular positions? I'm guessing this is some kind of floating-point precision issue, but I'm not sure. (For those interested, the tiles I'm using are from this collection.)

EDIT: I was using the following parameters for the standard OpenGL perspective projection matrix:

    let near = 0.5;
    let far = 1024.0;
    let scale = zoom * near;
    let top = scale;
    let bottom = -top;
    let right = aspect_ratio * scale;
    let left = -right;

where aspect_ratio is the window width divided by the window height (in pixels), and zoom in the error case above was set to 8.0. I read some more on perspective projection and floating-point errors but couldn't find anything sensible to change, so I resorted to adjusting the parameters. Specifically, I tried setting near to 1.0, and now I can't replicate the weird spacing. So I guess my problem is solved? But this is a mysterious and unsatisfying answer, so I would still appreciate it if someone could explain what went wrong, and how I can avoid it in the future without fiddling with numbers until it "magically" works!

EDIT, responding to amitp: Here's a screenshot from the same version as above (with near still set to 0.5), with the background set to red as suggested. This seems to indicate that it's something wrong with the tile layout, not image bleed.