How to do directional per-fragment lighting in world space? I am attempting to create a GLSL shader for a simple, per-fragment directional light. So far, after following many tutorials, I have continually run into the same issue: my light is specified in world coordinates, but the shader treats the light's position as being in eye space, so the light direction changes when I move the camera. My question is: how do I transform a directional light position such as (50, 50, 50, 0) into eye space, or would doing things this way be the wrong approach to the problem?
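A minimal sketch of the usual fix, assuming a uniform named viewMatrix carries the world-to-eye transform (both names are illustrative): multiply the direction by the view matrix with w = 0, so only the rotational part of the matrix applies, which is exactly what a direction needs.

    uniform mat4 viewMatrix;    // world -> eye
    uniform vec4 lightDirWorld; // e.g. (50.0, 50.0, 50.0, 0.0), w = 0 marks a direction

    void main() {
        // With w = 0 the translation column of viewMatrix is ignored,
        // so the camera position never leaks into the direction.
        vec3 lightDirEye = normalize((viewMatrix * lightDirWorld).xyz);
        // ... use lightDirEye against eye-space normals in the lighting equation ...
    }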
RGB to xyY color space conversion and luminance. The luminance calculated by the following GLSL functions (a fragment-shader tonemap) comes out with different values:

    float GetLuminance(vec3 rgb)
    {
        return (0.2126 * rgb.x) + (0.7152 * rgb.y) + (0.0722 * rgb.z);
    }

    vec3 RGB2xyY(vec3 rgb)
    {
        const mat3 RGB2XYZ = mat3(0.4124, 0.3576, 0.1805,
                                  0.2126, 0.7152, 0.0722,
                                  0.0193, 0.1192, 0.9505);
        vec3 XYZ = RGB2XYZ * rgb;
        return vec3(XYZ.x / (XYZ.x + XYZ.y + XYZ.z),
                    XYZ.y / (XYZ.x + XYZ.y + XYZ.z),
                    XYZ.y);
    }

I used the GLM library to compute an example result. For glm::vec3(2.0f, 3.0f, 8.0f), GetLuminance returns 3.1484, while RGB2xyY returns a glm::vec3 whose z component equals 3.8144. What is wrong?
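One likely culprit: GLM and GLSL fill matrix constructors column by column, so listing the CIE matrix rows in the constructor actually stores the transposed matrix. With the transposed matrix, the y component of RGB2XYZ * (2, 3, 8) is 0.3576*2 + 0.7152*3 + 0.1192*8 = 3.8144, exactly the value reported. A sketch verifying this with GLM:

    #include <glm/glm.hpp>
    #include <cstdio>

    int main() {
        // Column-major fill: the rows written here become columns in storage.
        glm::mat3 rowsAsColumns(0.4124f, 0.3576f, 0.1805f,
                                0.2126f, 0.7152f, 0.0722f,
                                0.0193f, 0.1192f, 0.9505f);
        glm::mat3 rgb2xyz = glm::transpose(rowsAsColumns); // restore row-major intent

        glm::vec3 rgb(2.0f, 3.0f, 8.0f);
        glm::vec3 xyz = rgb2xyz * rgb;
        std::printf("Y = %f\n", xyz.y); // 3.1484, matching GetLuminance
        return 0;
    }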
Invalid GLSL on some machines. I'm writing a game engine using OpenGL 4.3 and gcc 5, mainly to teach myself graphics programming. Initial development was on my Surface Pro 3 using mingw-w64 and worked like a charm. I've decided to move to my desktop, which runs two GTX 670s, and both Windows and Arch Linux. I've made sure that I have the latest NVIDIA drivers installed on both operating systems, but I'm having issues with my vertex and fragment shaders. All I know is that neither system compiles the shaders; the OpenGL Reference Compiler simply issues a warning saying that OpenGL 4.3 might not be fully implemented. I've also tried downgrading all the way to OpenGL 4.0, but this doesn't seem to fix the issue. Below are the shaders that I'm currently trying out. Hoping someone more experienced than I can lend a hand?

shader.vert:

    #version 430
    layout (location = 0) in vec3 vertex_position;
    layout (location = 1) in vec2 vt;
    uniform mat4 cam_view, cam_proj, sprite_matrix;
    out vec2 texture_coordinates;
    void main() {
        texture_coordinates = vt;
        gl_Position = cam_proj * cam_view * sprite_matrix * vec4(vertex_position, 1.0);
    }

shader.frag:

    #version 430
    in vec2 texture_coordinates;
    layout (binding = 0) uniform sampler2D tex;
    out vec4 frag_colour;
    void main() {
        vec4 texel = texture(tex, texture_coordinates);
        frag_colour = texel;
    }

Thanks in advance :)
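When the driver rejects a shader, the actual reason is in the info log; a sketch of retrieving it (src is the assumed source pointer):

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        GLint len = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
        std::vector<char> log(len > 1 ? len : 1);
        glGetShaderInfoLog(shader, (GLsizei)log.size(), nullptr, log.data());
        std::fprintf(stderr, "compile failed:\n%s\n", log.data()); // the driver's exact complaint
    }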
World-position reconstruction from depth fails when the viewport size does not match the window size. I'm facing a very strange issue. My code for world-position reconstruction works correctly when the viewport size equals the window size (the framebuffer size, in other words). Below is the part that reconstructs to view space; later I multiply the result by the inverse view matrix:

    vec3 posFromDepth(in vec2 Tex, in float depth)
    {
        vec3 pos = vec3(Tex, depth) * 2.0 - 1.0; // clip-space position
        float z = projection[3][2] / (pos.z + projection[2][2]);
        float x = pos.x * halfTanFovX * z;
        float y = pos.y * halfTanFovY * z;
        return vec3(x, y, z);
    }

When I change the viewport, the light and shadow calculations seem to be incorrect. The shadows, for example, instead of being "glued" to the object, float a little bit along with camera movement. When I switched my deferred renderer to use the world position stored in the G-buffer, everything was calculated correctly. Is there some transformation of texture coordinates that I'm missing? I tried passing fullscreen-quad coords as vertex attributes and computing them as gl_FragCoord.xy * texelSize, but the results were the same. I've been searching for the issue for more than a week and am slowly running out of ideas about what could be wrong. Maybe you'll come up with something useful.
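If the deferred pass derives its UVs from gl_FragCoord, the divisor has to be the viewport rectangle, not the window size; a sketch with illustrative uniform names:

    uniform vec2 viewportOrigin; // (x, y) passed to glViewport
    uniform vec2 viewportSize;   // (width, height) passed to glViewport

    vec2 TexFromFragCoord() {
        // Maps the current fragment to [0,1] across the viewport, so the
        // reconstruction stays consistent when viewport != framebuffer size.
        return (gl_FragCoord.xy - viewportOrigin) / viewportSize;
    }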
GL_EXT_draw_instanced vs VBOs. I'm having trouble understanding the benefit of the newer GL_EXT_draw_instanced over traditional VBOs. Don't both keep geometry cached on the GPU for faster redrawing? VBOs seem much more flexible.
Efficient way of loading Wavefront models in an OpenGL game. In my game, an RTS, the units are all Wavefront OBJ models, and each animation frame is a separate OBJ file, i.e. fully rigid models without any skeletal animation. So when many units are loaded before a game starts, it takes too much time: for example, if 20 types of units are loaded, they take approximately 700-800 MB of memory and at least 3 minutes to load. Can this be done more efficiently somehow? Solutions I have already used: since OBJ parsing takes too much time, I store each OBJ as a memory dump and load that dump instead; this helped a lot, reducing the time from 10 minutes to 3 for the example above. Further possible solutions I tried: loading the animation frames at runtime, when they are accessed — but this makes the game slow at crucial moments (when fighting, all units' fighting frames are loaded together, slowing the game down); and loading the units on different threads, which made no improvement.
How to calculate the normal from a normal map in world space? (OpenGL) I'm trying to do normal mapping in a deferred renderer and I'm stuck on how to implement normal maps. I have a bool that indicates whether to use a normal-mapped value and thus whether to calculate the TBN matrix. My vertex code for the geometry pass looks as follows:

    #version 410 core
    layout (location = 0) in vec3 aPos;
    layout (location = 1) in vec3 aNormal;
    layout (location = 2) in vec2 aTexCoords;
    layout (location = 3) in vec3 aTangent;   // optional
    layout (location = 4) in vec3 aBitangent; // optional
    out vec3 FragPos;
    out vec2 TexCoords;
    out vec3 Normal;
    out mat3 TBN;
    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;
    uniform bool hasNormalMap;
    void main() {
        vec4 worldPos = model * vec4(aPos, 1.0);
        FragPos = worldPos.xyz;
        TexCoords = aTexCoords;
        Normal = transpose(inverse(mat3(model))) * aNormal;
        if (hasNormalMap) {
            vec3 T = normalize(vec3(model * vec4(aTangent, 0.0)));
            vec3 N = normalize(vec3(model * vec4(aNormal, 0.0)));
            // re-orthogonalize T with respect to N
            T = normalize(T - dot(T, N) * N);
            // then retrieve the perpendicular vector B with the cross product of T and N
            vec3 B = cross(N, T);
            TBN = mat3(T, B, N);
        }
        gl_Position = projection * view * worldPos;
    }

Here is where I am confused: in my calculation, I multiplied T and N by the model matrix, which should have moved them into world space. Now transposing (T, B, N) should move me back into model space (I think; I'm not sure). In the fragment shader, how do I use the TBN to calculate the normal in world space? If there are better approaches, they are welcome. Thank you. Update: I removed the transposing of the TBN, as there's no reason to transform into tangent space if we want to pass it in world space. Now that we have the TBN matrix in the fragment shader, how do we apply it so that the normal is the correct value for lighting? Currently I've done:

    #version 410 core
    layout (location = 0) out vec3 gPosition;
    layout (location = 1) out vec3 gNormal;
    layout (location = 2) out vec4 gAlbedoSpec;
    in vec2 TexCoords;
    in vec3 FragPos;
    in vec3 Normal;
    in mat3 TBN;
    struct Material {
        sampler2D diffuseMap;
        sampler2D specularMap;
        sampler2D normalMap;
        float shininess;
    };
    uniform Material material;
    uniform bool hasNormalMap;
    void main() {
        // store the fragment position vector in the first gbuffer texture
        gPosition = FragPos;
        // also store the per-fragment normals into the gbuffer
        gNormal = normalize(Normal);
        if (hasNormalMap) {
            gNormal = texture(material.normalMap, TexCoords).rgb;
            gNormal = TBN * gNormal;
            gNormal = normalize(gNormal);
        }
        // and the diffuse per-fragment color
        gAlbedoSpec.rgb = texture(material.diffuseMap, TexCoords).rgb;
        // store specular intensity in gAlbedoSpec's alpha component
        gAlbedoSpec.a = texture(material.specularMap, TexCoords).r;
    }

but that doesn't feel right. I imagine that I would transform the sampled value from the normal map by the TBN matrix to get it into world space. Am I missing something?
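For what it's worth, the one step the fragment shader above seems to be missing is the range remap: normal maps store components in [0,1], while the TBN expects a vector in [-1,1]. A sketch of the hasNormalMap branch with that added:

    if (hasNormalMap) {
        vec3 n = texture(material.normalMap, TexCoords).rgb;
        n = n * 2.0 - 1.0;            // [0,1] -> [-1,1] tangent space
        gNormal = normalize(TBN * n); // tangent -> world, since T, B, N are world-space
    }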
OpenGL specification error? Follow-up to my previous question, whose answer led to another question. As the OpenGL specification documentation states, in glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid* indices), indices "Specifies a pointer to the location where the indices are stored." So why does calling gl->glDrawElements(GL_TRIANGLES, IndexBuffer.size(), GL_UNSIGNED_INT, (void*)(0)) work, but calling gl->glDrawElements(GL_TRIANGLES, IndexBuffer.size(), GL_UNSIGNED_INT, &(IndexBuffer.at(0))) doesn't?
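For reference, the behaviour hinges on whether a GL_ELEMENT_ARRAY_BUFFER is bound: if it is, the last argument is reinterpreted as a byte offset into that buffer; only without one (and outside the core profile) is it a real client-memory pointer. A sketch of both modes (ibo and indexData are illustrative):

    // With an element buffer bound, 'indices' is a byte offset into it:
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, (void*)0);

    // Only with NO element buffer bound (compatibility profile) is it a pointer:
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, indexData.data());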
glUniformMatrix4fv OpenTK equivalent. A very simple and quick question which, surprisingly, I couldn't find an answer to on the internet: what is the equivalent of glUniformMatrix4fv in OpenTK? I've browsed all 7 overloads of GL.UniformMatrix4 and none of them seems correct to me, nor have I found any example usage. E.g. if I have a Matrix4[] matrices variable (properly initialized, etc.) that I want to map onto a mat4 matrices[2] in GLSL, which GL.UniformMatrix4 overload should I use? P.S. Using the OpenTK.Next NuGet package, version 1.1.1616.8959.
Trapping the mouse inside the window in OpenGL with GLUT? I perfectly understand that GLUT is limited and the following problem probably can't be solved with OpenGL/GLUT, but since I don't know for sure, I'd better just ask. Maybe I'm doing something wrong or forgetting something important. Or probably not, and GLUT doesn't get better than this. My problem is that I can't trap the mouse inside the window. Well, actually, I can: the code below does just that. I trap the mouse inside the window and I can use the mouse freely to rotate the world. The problem? If I move the mouse too fast I can get away from the window prison. Is there a way around this with OpenGL/GLUT, or is the only option another library or making some calls to the Windows API directly? Enough words; here's my current code:

    void processPassiveMouseMotion(int x, int y) {
        static int centerX = glutGet(GLUT_WINDOW_WIDTH) / 2;
        static int centerY = glutGet(GLUT_WINDOW_HEIGHT) / 2;
        CameraAngle.x = 1.0f * (y - centerY);
        CameraAngle.y = 1.0f * (x - centerX);
        if (CameraAngle.x != 0.0f || CameraAngle.y != 0.0f) {
            SceneCamera.Rotate(CameraAngle);
            glutPostRedisplay();
        }
        glutWarpPointer(centerX, centerY);
    }

    glutPassiveMotionFunc(processPassiveMouseMotion);
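freeglut has no "confine cursor" call, so on Windows the usual escape hatch is ClipCursor, which the OS enforces regardless of how fast the mouse moves. A sketch, assuming you can obtain the window's HWND (e.g. via GetActiveWindow()):

    #include <windows.h>

    // Confine the OS cursor to the window's client area (Windows-only).
    void clipCursorToWindow(HWND hwnd) {
        RECT rc;
        GetClientRect(hwnd, &rc);
        // ClipCursor wants screen coordinates, so convert both corners.
        POINT tl = { rc.left, rc.top }, br = { rc.right, rc.bottom };
        ClientToScreen(hwnd, &tl);
        ClientToScreen(hwnd, &br);
        RECT clip = { tl.x, tl.y, br.x, br.y };
        ClipCursor(&clip); // pass nullptr later to release the cursor
    }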
Drawing fog of war using shaders. I am making an RTS game, and I'd like some advice on how to best render fog of war, given what I'm already doing. You can imagine this game as a classic RTS like Age of Empires 2, where the fog of war is basically handled by a 2D array telling whether a given "tile" is explored or not. The specific things to consider here are: 1) I'm only doing a few draw calls to draw the whole screen, using shaders; I'm not drawing tile by tile in a 2D loop. 2) The whole map is much bigger than the screen, and the screen can move every frame or so. In that case, how could I draw the fog of war? I have no issue maintaining a 2D array on the CPU side giving the fog of war for each tile, but what would be the best way to actually display it dynamically? Thanks!
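A common approach that fits both constraints, sketched with illustrative names: mirror the CPU-side array in a one-byte-per-tile texture (GL_R8, re-uploaded with glTexSubImage2D whenever visibility changes) and let the terrain fragment shader darken each pixel by the fog value of the tile it falls in — no per-tile draw calls, and scrolling is free because the lookup uses map-space coordinates.

    #version 330 core
    in vec2 tilePos;             // position in tile units, from the vertex shader
    in vec2 uv;
    out vec4 color;
    uniform sampler2D terrainTex;
    uniform sampler2D fogMap;    // GL_R8, one texel per map tile
    uniform vec2 mapSizeInTiles;
    void main() {
        vec3 base = texture(terrainTex, uv).rgb;
        float explored = texture(fogMap, tilePos / mapSizeInTiles).r;
        color = vec4(base * mix(0.25, 1.0, explored), 1.0); // unexplored stays dark
    }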
Creating a frustum for culling in world space with GLM matrices. I need to do frustum culling where the bounding boxes are in world space, to determine which entities get updated/drawn. I was trying to use the classic projection-view matrix plane-extraction method, but it doesn't seem to work with perspective matrices created by GLM. Is this method appropriate for world-space culling? It seems like it would be (it takes the eye position into account, and the projection matrix shapes the frustum). I've only looked at the near/far plane extraction so far, and they're wrong for a frustum sitting at the origin (both have a c component that's negative, which means near and far are facing the same direction). Also, since the d components can't match the near/far clipping values with this method, is it wrong for world-space culling?
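A frequent pitfall with GLM here is indexing: glm stores matrices column-major, so m[i] is a column and the "rows" of the Gribb/Hartmann method must be gathered as m[0][i], m[1][i], m[2][i], m[3][i]. A sketch of the extraction (world-space planes when the input is projection * view):

    #include <glm/glm.hpp>
    #include <array>

    std::array<glm::vec4, 6> extractPlanes(const glm::mat4& m) {
        auto row = [&](int i) {
            return glm::vec4(m[0][i], m[1][i], m[2][i], m[3][i]); // row i, column-major storage
        };
        std::array<glm::vec4, 6> p;
        p[0] = row(3) + row(0); // left
        p[1] = row(3) - row(0); // right
        p[2] = row(3) + row(1); // bottom
        p[3] = row(3) - row(1); // top
        p[4] = row(3) + row(2); // near
        p[5] = row(3) - row(2); // far
        for (auto& pl : p)
            pl /= glm::length(glm::vec3(pl)); // normalize so pl.w is a signed distance
        return p;
    }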
FBO picking in Oculus Rift applications. I am writing an Oculus Rift application, rendering a very high-poly mesh that I wish to be able to perform picking on using the Oculus Touch. Ideally, I want to be able to get the triangle id and other information attached to it. Currently, I am using an OBB tree and ray casting to perform picking on the CPU. It works perfectly; the problem is that even with the OBB tree the picking process is slow. I thought I'd perform picking on the GPU by rendering the view (from the point of view of the Oculus Touch) to an FBO using a custom shader that outputs "triangle information" to the buffer, and then use glReadPixels to read the central pixel's data. The problem I am facing is that the Oculus applies distortion to my on-screen scene, but there is no way to apply it to the off-screen buffer, so there is a significant difference between the on-screen buffer and the off-screen buffer. My question is: is ray casting the only feasible way to do picking in Oculus apps, or is there a way to perform the faster FBO picking even when the view is distorted?
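One way to keep FBO picking feasible: since the picking pass never reaches the screen, it does not need the Rift's distortion at all — render undistorted from the Touch's pose into a tiny viewport and write gl_PrimitiveID into an integer attachment. A sketch of the fragment shader (an R32UI color attachment and a GL_RED_INTEGER readback are assumed on the application side):

    #version 430
    layout(location = 0) out uint triId;
    void main() {
        triId = uint(gl_PrimitiveID) + 1u; // reserve 0 for "nothing hit"
    }

Then glReadPixels(x, y, 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &id) on the centre pixel; the distortion mismatch disappears because the pick ray is defined by the off-screen camera, not by the distorted on-screen image.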
2D pixels change size when scrolling. I'm working on a 2D side-scroller: OpenGL, pixel art. And I'm having trouble getting the pixels to remain square when scrolling the camera. By that I mean: when scrolling the camera at low speed, with tile pixels drawn at 2x2 scale, you can see some pixels become 3x2 in size rather than 2x2. I've tried different ways of calculating the size of the orthographic projection in order to get the pixels sized correctly, and while they usually appear correct when stationary, they always wobble while scrolling. Here is my latest attempt at calculating the window size for the orthographic projection:

    #define TILE_SIZE 32        // Each tile is 32x32
    #define TILE_MIN_HEIGHT 10  // Adjust pixel size to fit at least 10 tiles vertically in the window

    float windowRatio = (float)window->width / (float)window->height;
    int pixelSize = window->height / (TILE_SIZE * TILE_MIN_HEIGHT);
    float tileHeight = (float)window->height / (pixelSize * TILE_SIZE);
    gameState->windowTileWidth = tileHeight * windowRatio * TILE_SIZE;
    gameState->windowTileHeight = tileHeight * TILE_SIZE;
    window->orthoProjection = matrix44_createOrthographic2D(
        gameState->windowTileWidth, gameState->windowTileHeight);

So far, I've tried all of these in an attempt to eliminate the wobble: calculating the orthographic projection as above, to fit at least 10 tiles vertically, adjusting the projection size to fit the window ratio; calculating the orthographic projection to exactly fit a 20x10-tile scene, scaling pixels to fit the window; rendering the scene to a 1:1-scale framebuffer, then blitting that buffer to the window to scale it up to the desired pixel size; and disabling the offsetting of texture coordinates (I draw tiles from a texture atlas, and so offset the coordinates by 0.001 inside each quad to eliminate bleed from surrounding textures). None of these have eliminated the wobbling. Sometimes it is extremely subtle, but it is still present when examining the movement closely. What is the correct way to draw a 2D image to the screen with perfectly square pixel accuracy, at a given pixel size?
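One fix worth trying on top of the projection math: snap the camera translation to whole screen pixels every frame before building the view matrix, so a texel can never straddle a pixel boundary. A sketch, assuming camX/camY hold the camera position in world units:

    #include <cmath>

    // world units per screen pixel at the current integer zoom
    float unitsPerPixel = 1.0f / pixelSize;
    camX = std::round(camX / unitsPerPixel) * unitsPerPixel; // snap to the pixel grid
    camY = std::round(camY / unitsPerPixel) * unitsPerPixel;
    // build the view/projection from the snapped position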
Aspect ratio of a drawn quad gets messed up after rotating. When I draw a quad that is rotating, the aspect ratio of the quad gets messed up and the size changes (see the gif of what is happening). I am confident it has something to do with the way I calculate the size, because that is relative to the width and height of the screen; I just don't know why the aspect ratio would change, since the aspect ratio of the screen doesn't change. Code for the transformation-matrix creation:

    Matrix4f matrix = Maths.createTransformationMatrix(
        new Vector2f(
            gui.getPosition().getX() / Display.getWidth() * 2 - 1 + gui.getSize().getX() / Display.getWidth(),
            gui.getPosition().getY() / Display.getHeight() * 2 - 1 + gui.getSize().getY() / Display.getHeight()),
        gui.getRotation(),
        new Vector2f(
            gui.getSize().getX() / Display.getWidth(),
            gui.getSize().getY() / Display.getHeight()));
    shader.loadTransformation(matrix);

The x size, y size, x position and y position are in pixels; if you put in 1600 pixels for the x position and the screen is 1600 pixels wide, it will fill the whole screen. The calculation done here translates those sizes to values that the shaders will display on the screen. I had tried calculating the sizes a different way: gui.getSize().getX() / (float)(Display.getWidth() * Math.abs(Math.cos(Math.toRadians(gui.getRotation()))) + Display.getHeight() * Math.abs(Math.sin(Math.toRadians(gui.getRotation())))). But this does not work at all; my thought process was that if it is sideways it should use Display.getHeight to calculate the x size, and if it is vertical it should use Display.getWidth. This does work for 0, 90, 180 and 270 degrees of rotation, but in between it gets smaller and weird. Of course I've also done this for the x position, but with cos and sin switched around (see the video of this method). And this is the createTransformationMatrix method:

    public static Matrix4f createTransformationMatrix(Vector2f translation, float rotation, Vector2f scale) {
        Matrix4f matrix = new Matrix4f();
        matrix.setIdentity();
        Matrix4f.translate(translation, matrix, matrix);
        Matrix4f.rotate((float) Math.toRadians(rotation), new Vector3f(0, 0, 1), matrix, matrix);
        Matrix4f.scale(new Vector3f(scale.x, scale.y, 1f), matrix, matrix);
        return matrix;
    }
Deferred shading: how to combine multiple lights? I'm starting out with GLSL and I've implemented simple deferred shading that outputs a G-buffer with positions, normals and albedo. I've also written a simple point-light shader. Now I draw a sphere for the point light, and the output goes into a lighting buffer. The problem is: how do I combine the results in the lighting buffer when drawing multiple lights? E.g. when I'm drawing the second light to the light buffer using the point-light shader, how do I add the first light to the second light in the lighting buffer? I mean, you can't read from and write to the same output buffer, can you?
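The standard answer is to not read the lighting buffer at all: keep it as the render target and let fixed-function blending do the summing. A sketch (lightFBO and drawLightVolume are illustrative):

    // Accumulate each light pass additively into the lighting buffer.
    glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
    glClear(GL_COLOR_BUFFER_BIT);  // start from black once per frame
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_ONE, GL_ONE);   // dst = dst + src: contributions simply sum
    for (const Light& l : lights)
        drawLightVolume(l);        // reads the G-buffer, writes this light's radiance
    glDisable(GL_BLEND);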
Should I continue studying OpenGL, or switch to DirectX to give myself a better chance of landing a job in the game industry? I've been learning graphics programming for some time now, using OpenGL and Linux. I'm pretty familiar with most of the concepts, but I would really like to further my knowledge and eventually pursue a career in game development, especially game-engine development. So far it seems to me that the majority of game studios make games for Windows using DirectX. Edit: I know that the OpenGL vs DirectX question has been asked here before, but I haven't found an answer from the perspective I want. Edit 2: After reading all the responses and comments, I've decided to continue diving deeper into graphics with OpenGL/GLSL, but I'll try to play around with DX as well, just to have a basic understanding of the API. I'd like to thank everyone for the answers and insight you've given me.
Game Maker CGA shader: how to swap the palette. I created a shader that takes the actual color of the surface and renders it as the closest color in a CGA palette. I have to change it so it can swap between 4-color palettes, but I don't know how. Secondly, I have to make the palette selection dynamic (maybe using a uniform variable), because it has to change based on the current palette. Please help me; thanks. CGA shader:

    varying vec2 v_vTexcoord;
    varying vec4 v_vColour;

    #define _ 0.0
    #define o (1. / 3.)
    #define b (2. / 3.)
    #define B 1.0
    #define check(r,g,b) color = vec4(r, g, b, 0.); dist = distance(sample, color); \
        if (dist < bestDistance) { bestDistance = dist; bestColor = color; }

    void main()
    {
        float dist;
        float bestDistance = 1000.;
        vec4 color;
        vec4 bestColor;
        vec2 pixel = floor(v_vTexcoord.xy * vec2(320, 200));
        vec4 sample = texture2D(iChannel0, pixel / vec2(320, 200));
        sample += (texture2D(iChannel1, pixel / iChannelResolution[1].xy) - 0.5) * 0.5;
        // palette 0
        check(_, _, _) check(o, B, o) check(B, o, o) check(B, B, o)
        vec4 color1 = bestColor;
        bestDistance = 1000.;
        // palette 1
        check(_, _, _) check(o, B, B) check(B, o, B) check(B, B, B)
        vec4 color2 = bestColor;
        float t = (clamp(sin(iGlobalTime) * 4., -1., 1.) + 1.) / 2.;
        v_vColour = mix(color2, color1, t);
        gl_FragColor = v_vColour * texture2D(gm_BaseTexture, v_vTexcoord);
    }
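A sketch of the dynamic-palette idea: move the palette out of the #defines into a uniform array, which the game can set per draw (via GameMaker's shader_set_uniform_f_array on the GML side); all names are illustrative:

    uniform vec3 u_palette[4]; // the 4 CGA colors of the active palette

    vec4 nearestPaletteColor(vec4 sample) {
        float bestDistance = 1000.;
        vec3 bestColor = u_palette[0];
        for (int i = 0; i < 4; i++) {
            float dist = distance(sample.rgb, u_palette[i]);
            if (dist < bestDistance) { bestDistance = dist; bestColor = u_palette[i]; }
        }
        return vec4(bestColor, 1.0);
    }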
Does triple buffering cause input lag? Consider some time in between two vsyncs. Suppose the first display buffer is being used to display the current image, and suppose the game was really fast and computed and rendered the next image to the second display buffer, and the one after that to the third display buffer. That is, the rendering to the second and third display buffers happens so fast that it occurs before the next vsync. Suppose input from the user comes in now. What you would like is for the results of the input to show up on the next vsync or (probably more typically) the vsync after that. However, with the third display buffer already rendered, the input can only affect the image after that — meaning the input will only take effect, at best, 3 vsyncs later. I wish I had an image to show the exact timings of what I mean.
Camera scrolling and game boundaries. I am making a platformer game in JBox2D and LWJGL that has a scrolling camera, but I have hit a wall with the boundaries of the camera. Essentially what I have right now is a Box2D world that is drawn on the screen, a viewport moved by gluLookAt, and several boundaries specified by Box2D edges that also help keep the player from moving out of bounds. What I'd like to do is "push" the camera whenever it comes in contact with an edge, but otherwise keep it centered on the player. I've tried a couple of methods for this, such as finding the min and max width and height and simply doing an easy box hit test, moving the screen accordingly — but that has its limitations, such as when the world is not rectangular in shape. Another method I've tried is raytracing from one vertex of each edge to the other, and if the ray comes into contact with the screen, I "push" the screen out of the way, but that hasn't quite worked either:

    Fixture fixture = boundary.getFixtureList();
    Vec2 point = null;
    boolean hit = false;
    fixtureLoop:
    for (int i = 0; i < boundary.m_fixtureCount; i++) {
        EdgeShape edge = (EdgeShape) fixture.getShape();
        Vec2 origin = new Vec2(edge.m_vertex1.x * Main.OPENGL_SCALE, edge.m_vertex1.y * Main.OPENGL_SCALE);
        Vec2 direction = new Vec2(edge.m_vertex2.x * Main.OPENGL_SCALE, edge.m_vertex2.y * Main.OPENGL_SCALE).sub(origin);
        for (float t = 0; t < 1.0f; t += 0.001f) { // raytrace loop
            point = origin.add(direction.mul(t));
            // See if point is within screen bounds
            if (point.x > screenX - Main.WIDTH / 2 && point.x < screenX + Main.WIDTH / 2
                    && point.y > screenY - Main.HEIGHT / 2 && point.y < screenY + Main.HEIGHT / 2) {
                hit = true;
            }
            // If hit, find closest vertex
            if (hit) {
                float shortestDistance = Float.MAX_VALUE;
                Vec2 vertex = new Vec2();
                for (int v = 0; v < screenVertices.length; v++) {
                    float distance = point.sub(screenVertices[v]).length();
                    if (distance < shortestDistance) {
                        shortestDistance = distance;
                        vertex = screenVertices[v];
                    }
                }
                Vec2 displacement = point.add(vertex.sub(new Vec2(screenX, screenY)));
                screenX += displacement.x;
                screenY += displacement.y;
                break fixtureLoop;
            }
        }
        fixture = fixture.getNext();
    }

What I have here is my attempt, which I have played around with WAAAY too much. Essentially what I'm trying to do is detect if any edges are hitting the screen, find the point where they hit, find the closest screen vertex, subtract them to find the displacement between them, and apply it to the screen so that they're no longer in contact. (The screenVertices array is essentially 4 vertices that look like Vec2(screenX + WIDTH / 2, screenY + HEIGHT / 2), etc.) I was wondering if anyone could help me out; I'm not sure if I'm having logic problems, code problems, or both. Absolutely any help would be appreciated, thanks!
How to use the default framebuffer's depth/stencil when rendering to a texture? Is it possible to use the default framebuffer's depth buffer when rendering to a texture (instead of using a depth texture)? The idea is to continue to render part of the scene normally, but to a separate texture, so I can later apply a post-processing effect without having to incur the cost of another depth/stencil buffer. How can I do that?
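The default framebuffer's depth buffer can't be attached to an FBO, but the duplicate cost can still be avoided: one depth-stencil renderbuffer may be shared by several FBOs. A sketch (the FBO handles are assumed):

    // Create a single depth-stencil renderbuffer...
    GLuint depthRb;
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

    // ...and attach the same one to every FBO that needs depth testing.
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);
    glBindFramebuffer(GL_FRAMEBUFFER, postFBO);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);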
How to organize a fallback system for older GPUs in OpenGL? I don't want to make this question too broad or opinion-based, but I really need some help with good practices. The scenario: I created a particle engine with functions that require at least OpenGL 4.3. But some GPUs don't support this version, so I want to implement a fallback system for it. And it's not only the particle system: there are many features in 3D engines that can't simply be replaced by a different function name. Some features need different logic behind them, and it can get quite complex. What are the best practices for managing fallback rendering in OpenGL?
What's better: drawing every interval that the window updates, or drawing when necessary and displaying when drawing? In case the title is a bit confusing, I mean: 1) Drawing every window-update interval. For example, for a 60 FPS window, every ~17 milliseconds:

    window.setFramerateLimit(60); // update every ~17 ms for 60 FPS
    while (window.isOpen()) {
        window.display();
    }

Or 2) drawing when you need to (for example, when a sprite moves) and displaying it straight after. For example:

    void func(sf::RenderWindow& window) {
        window.draw(sf::Sprite()); // whatever you're drawing
        window.display();
    }

Or is there a much better way?
1D vs 2D vs 3D texture performance. I'm working on a 3D game with large, blocky, untextured but individually colored voxels (similar to Cube World). For rendering, right now I naively convert all visible faces ahead of time to triangles, with the color information stored in the vertex attributes. The screenshot below is an example of what I'm currently doing. There's a lot of room to improve here, and I want to switch to a system that uses fewer vertices and stores color information in a texture. What I'd like to do is combine adjacent triangles with the same normals and simplify the mesh so I'm left with a fraction of the vertices. My big hang-up is color information. It's easy to add UVs, but the correct choice for texturing isn't obvious. I'm trying to figure out the relative performance of these options: use a 1D texture (compact, but I'm concerned that large 1D textures won't play nicely with texture-mapping units optimized for 2D textures); use a 2D texture (simple, but it'll end up with gaps); or use a 3D texture with the full set of voxels. Does anyone have insight into the relative performance of these options? No mipmapping or other filtering is needed here, so all three options should be viable. For reference, I'm making this in C using OpenGL on Windows. My target GPU architectures are the GTX 780 Ti and higher, or the AMD equivalent.
OpenGL: sluggish performance when extracting a texture from the GPU. I'm currently working on an algorithm which creates a texture within a render buffer. The operations are pretty complex, but for the GPU this is a simple task, done very quickly. The problem is that, after creating the texture, I would like to save it, which requires extracting it from GPU memory. For this operation, I'm using glGetTexImage(). It works, but the performance is sluggish. No, I mean even slower than that: for example, an 8 MB texture (uncompressed) requires 3 seconds (yes, seconds) to be extracted. That's mind-puzzling. I'm almost wondering if my graphics card is connected by a serial link... Anyway, I've looked around and found some people complaining about the same, but no working solution so far. The most promising advice was to "extract data in the native format of the GPU", which I've tried and tried, but failed at so far. Edit: by moving the call to glGetTexImage() to a different place, the speed has improved a bit for the most dramatic samples: looking again at the 8 MB texture, it now requires 500 ms instead of 3 s. That's better, but still much too slow. Smaller texture sizes were not affected by the change (typical timings remained in the 60-80 ms range). Using glFinish() didn't help either. Note that if I call glFinish() (without glGetTexImage), I get a fixed 16 ms result, whatever the texture size or complexity; it really looks like the timing of one frame at 60 fps. The timing is measured for the full rendering-plus-saving sequence. The call to glGetTexImage() alone does not really matter; that being said, it is this call which changes the performance. And yes, of course, as stated at the beginning, the texture is created on the GPU, hence the need to save it. Edit 2: I've also tried using glReadPixels() instead of glGetTexImage(), but it's worse unfortunately — approximately twice as slow. Edit 3: Here is the code. Note that it uses a "framework" that I've not developed, so some of the function calls here are not OpenGL standard, but they are nonetheless quite self-explanatory.

    glDisable(GL_BLEND);
    // Simple construction, for test: the resulting texture is output into a render buffer
    renderbufx1.bind();
    clear(0, 0, 0, 0);
    simpleDiffShader.bind();
    simpleDiffShader.setTexture("TextureRef", img.tex, 0);
    simpleDiffShader.setTexture("TextureDown", renderbufx2.getTexture(), 1);
    img.tex->drawPart(-1, -1, 2, 2, 0, 0, 1, 1);
    simpleDiffShader.unbind();
    renderbufx1.unbind();
    // Not really useful, but just in case, the result is copied into another render buffer;
    // renderbufx1 is going to be displayed, and I want to avoid any access conflict
    savebufx1.bind();
    clear(0, 0, 0, 0);
    renderbufx1.getTexture()->bind();
    renderbufx1.getTexture()->drawPart(-1, -1, 2, 2, 0, 0, 1, 1);
    renderbufx1.getTexture()->unbind();
    savebufx1.unbind();
    savebufx1.getTexture()->bind();
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_BYTE, t.data); // <-- This is the line
    savebufx1.getTexture()->unbind();
    glEnable(GL_BLEND);
    renderbufx1.getTexture()->bind();
    renderbufx1.getTexture()->drawPart(tex2x, -1, w, h, texSpan, texSpan, fact, fact);
    renderbufx1.getTexture()->unbind();

The timing measured encompasses all of this (and a few other things, which do not really contribute to the total). Edit 4 (on course for the most-edited question of the week): the native texture format of the GPU is supposed to be retrieved through the function glGetTexLevelParameter. More precisely, I've used this line of code:

    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &iTexFormat);

The result of the function (within iTexFormat) is 32856. 32856 decimal is 0x8058, and within gl.h there is:

    #define GL_RGBA8 0x8058
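Two things worth trying, sketched below. First, GL_BYTE does not match a GL_RGBA8 texture — it forces a signed conversion on every texel, so switching to GL_UNSIGNED_BYTE alone may help. Second, routing the readback through a pixel buffer object makes glGetTexImage return immediately and lets the data be fetched a frame or two later without a pipeline stall:

    // Asynchronous readback through a PBO; with GL_PIXEL_PACK_BUFFER bound,
    // the last argument of glGetTexImage becomes a byte offset into the PBO.
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void*)0); // no CPU wait

    // ... do other work, then map when the copy has had time to complete:
    void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    // ... save the pixels ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);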
Texturing in OpenGL: should texture coordinates be assigned to vertices in the shader? I am attempting to texture 3D models (a cube, for example) using IBOs, with OpenGL in Java. Currently, my textures are distorted. I believe this is because only a single texture coordinate is being assigned to each vertex, as the meshes are loaded from Wavefront (.obj) files. As each vertex should correspond to multiple texture coordinates, should I be assigning texture coordinates to vertices as the data is loaded from the file, or would it be better to assign texture coordinates to vertices in the shader? I would appreciate example code.
My generated texture is flipped. I'm learning OpenGL, and I don't understand why my texture is flipped if I give as UVs the "matching" vec2 of each mesh vertex (vec3). Example: vert (-256, -256) gets uv (0, 0); vert (256, 256) gets uv (1, 1). I've had to try a different order for the UVs to be able to get the correct result... Can someone explain why that is so? Thanks. Screenshots: http://imgur.com/a/49ZVp. These are the snippets of code I use:

    // Create texture
    GLuint texDebugId;
    // RGBA texture: R, G, B, A
    unsigned char texDebugBuf[] = {
        255, 0,   0,   255,
        0,   255, 0,   255,
        0,   0,   255, 255,
        0,   0,   0,   0,
    };
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texDebugId);
    glBindTexture(GL_TEXTURE_2D, texDebugId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, texDebugBuf);
    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);

    // Prepare VBOs
    static const GLfloat verts[] = {
        -256.0f, -256.0f, 0.0f,
         256.0f, -256.0f, 0.0f,
        -256.0f,  256.0f, 0.0f,
         256.0f,  256.0f, 0.0f,
    };
    // FLIPPED (but why?)
    static const GLfloat uvs[] = { 0.0f, 0.0f,  1.0f, 0.0f,  0.0f, 1.0f,  1.0f, 1.0f };
    // OK
    // static const GLfloat uvs[] = { 0.0f, 1.0f,  1.0f, 1.0f,  0.0f, 0.0f,  1.0f, 0.0f };

    // Upload vertices
    glBindBuffer(GL_ARRAY_BUFFER, vbo_id);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    // Upload UVs
    glBindBuffer(GL_ARRAY_BUFFER, vbo_tex_id);
    glBufferData(GL_ARRAY_BUFFER, sizeof(uvs), uvs, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Render
    glBindTexture(GL_TEXTURE_2D, texId);
    glEnableVertexAttribArray((GLuint) pd.vertex_a);
    glEnableVertexAttribArray((GLuint) pd.uv_a);
    // Vertices
    glBindBuffer(GL_ARRAY_BUFFER, vbo_id);
    glVertexAttribPointer((GLuint) pd.vertex_a, 3, GL_FLOAT, GL_FALSE, 0, (void*) 0);
    // UVs
    glBindBuffer(GL_ARRAY_BUFFER, vbo_tex_id);
    glVertexAttribPointer((GLuint) pd.uv_a, 2, GL_FLOAT, GL_FALSE, 0, (void*) 0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDisableVertexAttribArray((GLuint) pd.uv_a);
    glDisableVertexAttribArray((GLuint) pd.vertex_a);
    glBindTexture(GL_TEXTURE_2D, 0);

Note the matrices used: proj = glm::ortho(-320.f, 320.f, -240.f, 240.f, 1.0f, 100.0f) (640x480 with (0,0) at the center); camera: cam_target = glm::vec3(); cam_up = glm::vec3(0.0, 1.0f, 0.0f); cam_pos = glm::vec3(0.0f, 0.0f, 5.0f); camera = glm::lookAt(cam_pos, cam_target, cam_up); MVP: glm::mat4 mvp = proj * camera;
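For reference, glTexImage2D treats the first row of the supplied buffer as the bottom row of the texture (t = 0; OpenGL's texture origin is bottom-left), while almost all image data is authored top-row-first — hence the apparent flip. Either reorder the rows on upload, or flip the V coordinate once:

    // When UVs were authored in top-left image space:
    float flippedV = 1.0f - v;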
Is there any performance benefit to sharing shaders between programs? OpenGL allows you to share the same shader between multiple programs. Aside from saving small amounts of memory and a shader handle, are there any GPU-side performance benefits to doing this?
How do I render tile maps in modern OpenGL? I'm trying to use SDL2 and modern OpenGL to make a 2D RPG. The problem is, I have no idea how to render a tile map. I'm using the Tiled map editor to create maps. I have a vague idea of how to parse the maps using this tmx parser (https://github.com/andrewrk/tmxparser). I try to parse the maps using said parser, but then I run into the problem of how to actually get the tile maps to appear on screen when using a spritebatch class. How would I go about rendering these maps using modern OpenGL?
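A sketch of the usual modern-GL approach, with illustrative names (map, atlas, firstX/lastX etc. are assumptions): walk the visible region of the parsed TMX layer, emit two textured triangles per tile into one interleaved buffer, and draw it in a single call against a tileset texture atlas:

    struct TileVertex { float x, y, u, v; };
    const float TS = 32.0f; // tile size in world units (assumed)

    std::vector<TileVertex> verts;
    for (int ty = firstY; ty <= lastY; ++ty)
        for (int tx = firstX; tx <= lastX; ++tx) {
            int id = map.tileAt(tx, ty);  // from the parsed TMX data
            auto uv = atlas.uvRect(id);   // tile's sub-rectangle in the atlas
            // two triangles per tile
            verts.push_back({tx * TS,      ty * TS,      uv.u0, uv.v0});
            verts.push_back({tx * TS + TS, ty * TS,      uv.u1, uv.v0});
            verts.push_back({tx * TS + TS, ty * TS + TS, uv.u1, uv.v1});
            verts.push_back({tx * TS,      ty * TS,      uv.u0, uv.v0});
            verts.push_back({tx * TS + TS, ty * TS + TS, uv.u1, uv.v1});
            verts.push_back({tx * TS,      ty * TS + TS, uv.u0, uv.v1});
        }
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(TileVertex),
                 verts.data(), GL_DYNAMIC_DRAW);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());

This is exactly the pattern a spritebatch class implements internally, so an existing batch can also just be fed one quad per visible tile.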
What is the better approach to redrawing a static background scene for a game? Take, for example, the background scene of a game which should remain static for an interval. When redrawing the scene each frame, is it better to store the color buffer for every pixel of the viewport and redraw using that stored color buffer, to apply a single texture to the whole background, or to draw the triangle/quad meshes again every time?
What is the correct way to draw layered sprites in modern OpenGL? So what do I mean by layered sprites? Layered sprites are sprites that consist of multiple layers, e.g. you have sprite sheets for the basic body, the head, clothes, weapons, etc. Now I want to know how you would draw this in the most efficient way. Right now the idea is to use vertex buffer objects (VBOs), but I heard that it is expensive to change the textures in a VBO... So what would be the correct way to draw them? My idea right now would be to draw the map tiles first (with a texture atlas), then draw all the basic bodies, then the heads, then clothes, and then the weapons. Would that be the best way, or is there a more efficient way? I also thought of using vertex arrays instead of VBOs, but as far as I know they are deprecated...
3D models on a 2D tilemap: perspective when scrolling. I am creating a small top-down game where the player traverses a 2D tilemap, with an illusion of depth provided by 3D models for things like buildings or trees. Having gotten to the point where I can render models and a basic terrain as a background, I've gotten stuck on the task of joining the two together in a way that doesn't seem too unnatural. I think an image description of my problem would be best suited here. For the sake of explanation, the model I am rendering is a transparent cube. In a position where the camera has a head-on view, everything looks generally normal. Now, if I shift the model a bit to the right, the result looks off. Not horrible by any means, but a tad awkward: the road remains straight while the building seems to jut out from the path. The building itself has been panned, but the ground has remained static. Additionally, this has proven rather nausea-inducing during testing, due to the awkward mixing of angles. The only possible solution that came to me (and hence why I am asking here) would be to render each tile as a model as well. I think that creating a flat rectangle the size of the viewable screen and texturing it with the visible tiles could possibly work: since it is a model, it can be rotated and shrunk based on visibility, like the cube. However, I am (very) new to OpenGL, and I fail to see how I could implement this. I know that I can draw a rectangle with one solid texture; I don't know how to draw multiple, smaller textures across a face of the rectangle — is it even possible? Then, considering that I wish to have other things like cliffs, I wondered if cliff tiles could be rendered as slanted. My map format has a height map, so the slant could be determined by the direction in which the nearest tile with a smaller height resides. I fear I'm being rather clumsy at explaining this, which is why, using my most Picasso-esque skills, I have put together the next image. Essentially, though the map itself is made up of 2D tiles, by changing the angle at which they are rendered I reckon it would be possible to simulate a 3D effect: the player camera (the smiley in the image) would not be able to see the leftmost side (yellow) if it is positioned towards the right. Regardless, this is all hypothetical and I have no idea where to even begin. Is it possible to render the background as essentially a model in 3D space? If so, how would this be done — via one large cuboid, or multiple smaller ones? Lastly, would slanting textures based on neighbor height work? More importantly, is there a better way to achieve my goals than what I have proposed? For what it matters, I am writing this game in Java, utilizing the LWJGL OpenGL wrapper.
Should I unbind buffers? I'm running some tests with OpenGL ES 2 and have some questions. My current program works like this. Init: create the index buffer; fill the index buffer with glBufferData; create the vertex buffer; fill the vertex buffer with glBufferData. Draw: 1. Apply the vertex buffer: bind the VAO; bind the vertex buffer and enable the attributes (glVertexPointer, ...); unbind the vertex buffer; unbind the VAO; bind the VAO. 3. Apply the index buffer. 4. Draw. The problem: the given code crashes. After some research, I've understood why: I need to unbind my index buffer in the init part (after "fill index buffer glBufferData"), or unbind it before the first "bind VAO". My questions are: can I put my index buffer in the VAO (does a VAO store the index buffer)? Do I have to unbind buffers after each update (glBufferData)? In my application I've got some buffers that are updated on each frame (for example, particles), so I've got an OpenGL call sequence like: bind buffer 1; update buffer 1; close buffer 1; bind buffer 1; draw. The first three steps update the vertex buffer, the last two draw the object; it should be something like: bind buffer 1; update buffer 1; draw. Thanks.
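On the first question: yes — the GL_ELEMENT_ARRAY_BUFFER binding is recorded in the VAO (in ES 2 this requires the OES_vertex_array_object extension, which is presumably what supplies the VAO calls here). That is also why the crash appears: unbinding the index buffer while the VAO is still bound wipes the VAO's copy of the binding. A sketch of a safe ordering:

    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo); // captured by the VAO
    glBindVertexArray(0);                       // unbind the VAO FIRST
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);   // now safe: the VAO keeps its binding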
Conversion from 3D to 2D in OpenGL. A few days ago, we learned about homogeneous matrices in an OpenGL class. From what I have understood, using these matrices we can convert 3D objects to 2D. However, I am facing some difficulty answering a simple question like "Which of these matrices fits the definition of a (homogeneous) conversion from 3D to 2D?" A, B, C, D. I'm not asking you to do my homework — I know the answer is D and C, and that is all I need for this section. I just want to understand why, and how it works: why not A and B, and why C and D? Thank you.
OpenGL: are strips/fans faster for rendering, or do they just save data bandwidth? When we send data for drawing, we can mark it as TRIANGLE_STRIP or TRIANGLE_FAN to reduce the number of vertices we have to specify. Now, does this actually improve the rendering speed on the graphics card, or does it merely reduce the amount of data that has to be sent to the card? I'm using a very simple model. To construct it correctly I need multiple calls to glDrawArrays using TRIANGLE_STRIP. If I switch to using just GL_TRIANGLES I could have a single call to glDrawArrays. Is this type of approach useful, or is the overhead of a call to glDrawArrays low enough that it won't make a difference? I will profile, but since I have only one card I don't know if my result will be indicative of the general case. (Note: I need to be ES2-compliant, so some of my options are limited.)
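For what it's worth, under ES 2 you can keep strip-sized vertex counts and still issue a single draw call by bridging strips with degenerate triangles: repeat the last index of one strip and the first index of the next, producing zero-area triangles the GPU culls for free. A sketch:

    // Merge several strips into ONE indexed draw (ES2-friendly):
    GLushort indices[] = {
        0, 1, 2, 3,   // strip A
        3, 4,         // degenerate bridge (A_last, B_first)
        4, 5, 6, 7    // strip B
    };
    glDrawElements(GL_TRIANGLE_STRIP,
                   sizeof(indices) / sizeof(indices[0]),
                   GL_UNSIGNED_SHORT, indices); // or a byte offset, with an IBO bound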
Trying to implement a camera. I'm trying to implement a Camera class in order to walk around and look at the world, as follows:

    #ifndef CAMERA_H
    #define CAMERA_H
    #include <glm/glm.hpp>

    class Camera {
    public:
        Camera();
        ~Camera();
        void Update(const glm::vec2& newXY);
        // if 'by' is 0.0 the class's speed constant is used to scale the move
        void MoveForward(const float by = 0.0f);
        void MoveBackword(const float by = 0.0f);
        void MoveLef(const float by = 0.0f);
        void MoveRight(const float by = 0.0f);
        void MoveUp(const float by = 0.0f);
        void MoveDown(const float by = 0.0f);
        void Speed(const float speed = 0.0f);
        glm::vec3& GetCurrentPosition();
        glm::vec3& GetCurrentDirection();
        glm::mat4 GetWorldToView() const;
    private:
        glm::vec3 position, viewDirection, strafeDir;
        glm::vec2 oldYX;
        float speed;
        const glm::vec3 up;
    };
    #endif

    #include "Camera.h"
    #include <glm/gtx/transform.hpp>

    Camera::Camera() : up(0.0f, 1.0f, 0.0f), viewDirection(0.0f, 0.0f, -1.0f), speed(0.1f) {}
    Camera::~Camera() {}

    void Camera::Update(const glm::vec2& newXY) {
        glm::vec2 delta = newXY - oldYX;
        auto length = glm::length(delta);
        if (glm::length(delta) < 50.f) {
            strafeDir = glm::cross(viewDirection, up);
            glm::mat4 rotation = glm::rotate(-delta.x * speed, up)
                               * glm::rotate(-delta.y * speed, strafeDir);
            viewDirection = glm::mat3(rotation) * viewDirection;
        }
        oldYX = newXY;
    }

    void Camera::Speed(const float speed) { this->speed = speed; }
    void Camera::MoveForward(const float by)  { float s = by == 0.0f ? speed : by; position += s * viewDirection; }
    void Camera::MoveBackword(const float by) { float s = by == 0.0f ? speed : by; position -= s * viewDirection; }
    void Camera::MoveLef(const float by)      { float s = by == 0.0f ? speed : by; position -= s * strafeDir; }
    void Camera::MoveRight(const float by)    { float s = by == 0.0f ? speed : by; position += s * strafeDir; }
    void Camera::MoveUp(const float by)       { float s = by == 0.0f ? speed : by; position += s * up; }
    void Camera::MoveDown(const float by)     { float s = by == 0.0f ? speed : by; position -= s * up; }
    glm::vec3& Camera::GetCurrentPosition()  { return position; }
    glm::vec3& Camera::GetCurrentDirection() { return viewDirection; }
    glm::mat4 Camera::GetWorldToView() const { return glm::lookAt(position, position + viewDirection, up); }

and I update and render as follows:

    void Game::OnUpdate() {
        glLoadIdentity();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glUniformMatrix4fv(program->GetUniformLocation("modelToViewWorld"), 1, GL_FALSE,
                           &cam.GetWorldToView()[0][0]);
    }
    void Game::OnRender() { model->Draw(); }

The vertex shader is:

    #version 410
    layout (location = 0) in vec3 inVertex;
    layout (location = 1) in vec2 inTexture;
    layout (location = 2) in vec3 inNormal;
    uniform mat4 modelToViewWorld;
    void main() {
        gl_Position = vec4(mat3(modelToViewWorld) * inVertex, 1);
    }

But as a result, the model itself is moving/rotating instead of the camera moving around the model. What am I doing wrong here?
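One likely cause, offered as a guess: the mat3() cast in the vertex shader throws away the translation column of the world-to-view matrix, so camera movement can only ever rotate the model. Keeping the full mat4 (plus a projection matrix, here assumed as an extra uniform) behaves as expected:

    uniform mat4 projection; // assumed: a perspective matrix set by the app

    void main() {
        gl_Position = projection * modelToViewWorld * vec4(inVertex, 1.0);
    }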
How do I write a timing function for lightning flashes in C++? I need to render a horror scene with lightning flashes. Unfortunately I am new to both C++ and OpenGL, and I am looking for an efficient way to mimic lightning timing in C++/OpenGL. I don't need the graphical implementation, just the algorithmic one: the function should return a float representing lightning intensity, and the delay between lightning flashes should be random (5 seconds, followed by 10 seconds, then 3 seconds, etc.). I hope I made everything clear!
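A minimal self-contained sketch along those lines — call it once per frame with the frame's delta time in seconds; every constant is a tweakable guess:

    #include <cstdlib>

    // Returns lightning intensity in [0, 1].
    float lightningIntensity(float dt) {
        static float nextFlashIn = 5.0f;  // seconds until the next flash
        static float flashLeft = 0.0f;    // seconds of flash remaining
        const float flashLength = 0.3f;
        auto frand = []() { return std::rand() / (float)RAND_MAX; };

        nextFlashIn -= dt;
        if (nextFlashIn <= 0.0f) {
            flashLeft = flashLength;
            nextFlashIn = 3.0f + frand() * 7.0f; // random 3-10 s until the next one
        }
        if (flashLeft <= 0.0f)
            return 0.0f;
        flashLeft -= dt;
        // decay from full brightness with a little random flicker
        float i = (flashLeft / flashLength) * (0.7f + 0.3f * frand());
        return i < 0.0f ? 0.0f : i;
    }

The returned value can then scale an ambient-light uniform each frame.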
How can I change the value of a matrix uniform in an OpenGL vertex shader? I'm new to OpenGL. I have a uniform transform matrix in a vertex shader file. I want to modify the matrix by individually assigning each value in it. How can I do that in C++? I learned that we have to get the location variable by using glGetUniformLocation, but I am not sure how to access and modify index n with it via glUniform.
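A minimal sketch with GLM (the uniform name "transform" is assumed): modify individual cells on the CPU side, then upload the whole matrix in one call — there is no glUniform variant for setting a single matrix cell:

    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    glm::mat4 transform(1.0f);  // identity
    transform[3][0] = 2.5f;     // glm indexing is [column][row]: x translation
    transform[1][1] = 2.0f;     // scale y by 2

    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "transform");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(transform));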
Casting a Matrix4 class to float*.

    glUniformMatrix4fv(glGetUniformLocation(program, "projMatrix"), 1, false, (float*) &projMatrix);

projMatrix is an object of type Matrix4, where the first variable declared is a float array. Does (float*) &projMatrix therefore somehow retrieve this array? What does the cast appear to be doing?
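What the cast exploits, made explicit: if Matrix4 is a standard-layout class and the float array is its first member, the object's address is also the array's address, so reinterpreting the pointer yields the matrix data. A sketch ('m' is a hypothetical member name):

    // Equivalent ways to pass the matrix, assuming standard layout and no vtable:
    glUniformMatrix4fv(loc, 1, GL_FALSE, projMatrix.m);                             // direct member
    glUniformMatrix4fv(loc, 1, GL_FALSE, reinterpret_cast<const float*>(&projMatrix)); // what the C cast does

If Matrix4 ever gains a virtual function or a member placed before the array, the cast silently breaks, which is why accessing the array member directly is the safer choice.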
Geometry shader crashing. I keep getting some strange errors from my geometry shader, and when I search for the cause, nothing substantial comes up. Here is the code. Shader:

    #version 450 core
    layout(triangles) in;
    layout(points, max_vertices = 3) out;
    void main(void) {
        int i;
        for (i = 0; i < gl_in.length(); i++) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
    }

Log:

    0(4) : error C3008: unknown layout specifier 'max_vertices = 3'
    0(4) : error C3008: unknown layout specifier 'points'
    0(10): error C3004: function "void EmitVertex()" not supported in this profile

I'm using OpenGL 4.5 (my PC supports it, before anyone asks :p) and GLFW 3.2. EDIT: Here are the steps I'm using to load up my shader. Creating a shader instance:

    shader = new StaticShader("VertexShader.shader", "TesselationControl.shader",
                              "TessilationEvaluation.shader", "Geometry.shader",
                              "FragmentShader.shader");

StaticShader constructor:

    ShaderProgram::ShaderProgram(const char* vertex_file_path,
                                 const char* tesselation_controll_shader_path,
                                 const char* tesselation_evaluation_shader_path,
                                 const char* geometry_shader_path,
                                 const char* fragment_file_path)
    {
        vertexShaderID = loadShader(vertex_file_path, GL_VERTEX_SHADER);
        tessControllShaderID = loadShader(tesselation_controll_shader_path, GL_TESS_CONTROL_SHADER);
        tessEvaluationShaderID = loadShader(tesselation_evaluation_shader_path, GL_TESS_EVALUATION_SHADER);
        geometryShaderID = loadShader(geometry_shader_path, GL_GEOMETRY_SHADER);
        fragmentShaderID = loadShader(fragment_file_path, GL_FRAGMENT_SHADER);

        Status("ShaderProgram", "Initilizing shader program");
        programID = glCreateProgram();
        glAttachShader(programID, vertexShaderID);
        glAttachShader(programID, tessControllShaderID);
        glAttachShader(programID, tessEvaluationShaderID);
        glAttachShader(programID, geometryShaderID);
        glAttachShader(programID, fragmentShaderID);
        bindAttributes();
        glLinkProgram(programID);
        GLint isLinked = 0;
        glGetProgramiv(programID, GL_LINK_STATUS, (int*) &isLinked);
        if (isLinked == GL_FALSE) {
            GLint maxLength = 0;
            glGetProgramiv(programID, GL_INFO_LOG_LENGTH, &maxLength);
            if (maxLength > 1) {
                GLchar* infoLog = (GLchar*) malloc(maxLength);
                glGetProgramInfoLog(programID, maxLength, &maxLength, &infoLog[0]);
                std::cout << infoLog;
            }
            StatusError();
            exit(0);
        }
        StatusOkay();

        // Clean up
        glDetachShader(programID, vertexShaderID);
        glDetachShader(programID, tessControllShaderID);
        glDetachShader(programID, tessEvaluationShaderID);
        glDetachShader(programID, geometryShaderID);
        glDetachShader(programID, fragmentShaderID);
        glDeleteShader(vertexShaderID);
        glDeleteShader(tessControllShaderID);
        glDeleteShader(tessEvaluationShaderID);
        glDeleteShader(geometryShaderID);
        glDeleteShader(fragmentShaderID);
    }

Load shader function:

    GLuint ShaderProgram::loadShader(const char* file_path, int type) {
        GLuint shaderID = glCreateShader(type);
        std::string shaderCode = FileManager::readFile(file_path);
        char const* sourcePointer = shaderCode.c_str();
        Status("ShaderObject", "Compiling Shader " + std::string(file_path));
        glShaderSource(shaderID, 1, &sourcePointer, NULL);
        glCompileShader(shaderID);
        GLint isCompiled = 0;
        glGetShaderiv(shaderID, GL_COMPILE_STATUS, &isCompiled);
        if (isCompiled == GL_FALSE) {
            GLint maxLength = 0;
            glGetShaderiv(shaderID, GL_INFO_LOG_LENGTH, &maxLength);
            if (maxLength > 1) {
                GLchar* infoLog = (GLchar*) malloc(maxLength);
                glGetShaderInfoLog(shaderID, maxLength, &maxLength, &infoLog[0]);
                std::cout << infoLog;
            }
            StatusError();
            exit(0);
        }
        StatusOkay();
        return shaderID;
    }
JOGL hardware-based shadow mapping: computing the texture matrix. I am implementing hardware shadow mapping as described here. I've rendered the scene successfully from the light's POV and loaded the depth buffer of the scene into a texture. This texture has been loaded correctly — I check this by rendering a small thumbnail, as you can see in the screenshot below, upper left corner. The depth of the scene appears to be correct: objects further away are darker, and those closer to the light are lighter. However, I run into trouble while rendering the scene from the camera's point of view using the depth texture: the texture on the polygons in the scene is rendered in a weird, nondeterministic fashion, as shown in the screenshot. I believe I am making an error while computing the texture transformation matrix, but I am unsure where exactly. Since I have no matrix utilities in JOGL other than the glLoad/MultMatrix procedures, I multiply the matrices using them, like this:

    void calcTextureMatrix() {
        glPushMatrix()
        glLoadIdentity()
        glLoadMatrixf(biasmatrix, 0)
        glMultMatrixf(lightprojmatrix, 0)
        glMultMatrixf(lightviewmatrix, 0)
        glGetFloatv(GL_MODELVIEW_MATRIX, shadowtexmatrix, 0)
        glPopMatrix()
    }

I obtained these matrices by using the glOrtho and gluLookAt procedures:

    glLoadIdentity()
    val wdt = width / 45
    val hgt = height / 45
    glOrtho(-wdt, wdt, -hgt, hgt, -45.0, 45.0)
    glGetFloatv(GL_MODELVIEW_MATRIX, lightprojmatrix, 0)
    glLoadIdentity()
    glu.gluLookAt(xlook + lightpos._1, ylook + lightpos._2, lightpos._3,
                  xlook, ylook, 0.0f, 0.f, 0.f, 1.0f)
    glGetFloatv(GL_MODELVIEW_MATRIX, lightviewmatrix, 0)

My bias matrix is:

    float[] biasmatrix = new float[] {
        0.5f, 0.f,  0.f,  0.f,
        0.f,  0.5f, 0.f,  0.f,
        0.f,  0.f,  0.5f, 0.f,
        0.5f, 0.5f, 0.5f, 1.f
    }

After applying the camera projection and view matrices, I do:

    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR)
    glTexGenfv(GL_S, GL_EYE_PLANE, shadowtexmatrix, 0)
    glEnable(GL_TEXTURE_GEN_S)

for each component. Does anybody know why the texture is not being rendered correctly? Thank you.
Changing a flag on mouse press while in a loop. I have a program that starts a loop when a key is pressed. The problem is that once the loop starts, I want to be able to stop it halfway through with a mouse press — but the mouse press isn't registered until the loop is done. How do I stop the loop halfway if the mouse was pressed during it? I tried a flag, if (mousePressed), but the flag doesn't change until the loop is done.
Create a YUV texture for the GL_TEXTURE_EXTERNAL_OES format. I need to create a YUV texture for the GL_TEXTURE_EXTERNAL_OES format. Source: https://github.com/crossle/MediaPlayerSurface/blob/master/src/me/crossle/demo/surfacetexture/VideoSurfaceView.java. I am doing all processing in YUV, so it would save clock cycles if I could generate a YUV texture as the output of texture2D. To get the 'y' value, I need to take the dot product of each texel with a vec3 of (0.3, 0.59, 0.11). Since, for my purpose, I need to take a 3x3 pixel block's 'y' values and compute a convolution over them, this results in a performance impact — hence it would save clock cycles if I could generate a YUV texture as the output of texture2D.
How to emulate the PSX's graphics with OpenGL? I want to know what options (or shaders) to set so that my OpenGL game looks like a PlayStation 1 game. I know it probably can't be achieved 100%, because the PSX was displayed on a television, which renders differently than a monitor. Thanks. :)
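One of the most recognizable PS1 traits is low-precision vertex snapping, which is easy to fake in a vertex shader. A sketch (mvp and position are assumed names):

    vec4 pos = mvp * vec4(position, 1.0);
    vec2 grid = vec2(320.0, 240.0) * 0.5;          // half a PSX-ish resolution
    pos.xy = floor(pos.xy / pos.w * grid) / grid * pos.w; // snap in NDC, keep w
    gl_Position = pos;

Pairing this with noperspective interpolation on the texture coordinates (the PS1 rasterized affine) and a low-resolution render target gets most of the way there.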
Rotation along a normal vector. I have a triangle in the 3D world. I have the positions of the three points and the normal vectors for the three points (all the same, because vertices are not shared). This triangle faces some direction, so it has an XYZ rotation (but I don't know the rotation itself). If I put a cube into the same scene, how can I achieve the same rotation for the cube? So if the triangle "looks" upwards, the cube should do the same; and if it has a 10-degree x rotation, how can I get that x?
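A sketch of the usual construction with GLM: build the shortest-arc rotation that carries the cube's rest "up" axis onto the triangle normal (the rest axis is an assumption; use whatever axis your cube model treats as "facing"):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <cmath>

    glm::mat4 rotationFromNormal(const glm::vec3& normal) {
        const glm::vec3 up(0.0f, 1.0f, 0.0f);       // cube's rest orientation
        glm::vec3 n = glm::normalize(normal);
        float c = glm::clamp(glm::dot(up, n), -1.0f, 1.0f);
        glm::vec3 axis = glm::cross(up, n);
        if (glm::length(axis) < 1e-6f)              // parallel or exactly opposite
            return c > 0.0f ? glm::mat4(1.0f)
                            : glm::rotate(glm::mat4(1.0f), 3.14159265f, glm::vec3(1, 0, 0));
        return glm::rotate(glm::mat4(1.0f), std::acos(c), glm::normalize(axis));
    }

Using the result as the cube's model matrix makes it "look" wherever the triangle does; individual Euler angles can then be decomposed from it if the x rotation itself is needed.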
(While using a cube map) box-like textures appear around my scene whenever I move the camera. I've been learning about cube maps and I implemented one in my program. It seemed to work well until I started moving the camera around the scene and zooming out. As you can see in the attached gif, there seems to be a problem with the cube map, since this weird behavior occurs: https://gyazo.com/70ad4ce027d1e032bc19258e28def66f. Main program:

    unsigned int cubemapTexture = texo.loadCubeMap(faces);
    glDepthFunc(GL_EQUAL);
    bg.bind();
    skyBox.use();
    glm::mat4 projection = glm::perspective(glm::radians(fov), (float)scr_width / (float)scr_height, 0.1f, 100.0f);
    glm::mat4 view = camera.GetViewMatrix();
    skyBox.setUniformMat4("projection", projection);
    skyBox.setUniformMat4("view", view);
    glActiveTexture(GL_TEXTURE20);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
    GLCall(glDrawArrays(GL_TRIANGLES, 0, 36));

loadCubeMap:

    unsigned int Textures::loadCubeMap(std::vector<std::string> faces)
    {
        unsigned int textureID;
        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
        stbi_set_flip_vertically_on_load(false);
        int width, height, nrChannels;
        for (unsigned int i = 0; i < faces.size(); i++) {
            unsigned char* data = stbi_load(faces[i].c_str(), &width, &height, &nrChannels, 0);
            if (data) {
                glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB,
                             width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
                stbi_image_free(data);
            } else {
                std::cout << "Cubemap tex failed to load at path " << faces[i] << std::endl;
                stbi_image_free(data);
            }
        }
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        return textureID;
    }

Vertex shader:

    #version 330 core
    layout (location = 0) in vec3 aPos;
    out vec3 TexCoords;
    uniform mat4 view;
    uniform mat4 projection;
    void main() {
        vec4 pos = projection * view * vec4(aPos, 1.0);
        TexCoords = aPos;
        // Setting the z value to w (1.0) so that the cube map is always in the background
        gl_Position = pos.xyww;
    }

Fragment shader:

    #version 330 core
    out vec4 FragColor;
    in vec3 TexCoords;
    uniform samplerCube skybox;
    void main() {
        FragColor = texture(skybox, TexCoords);
    }

The problem probably originates in one of these snippets, but I can provide more info if you have other ideas.
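One likely culprit, offered as a guess: the skybox receives the camera's full view matrix, translation included, so the box's faces drift through the scene as the camera moves. Stripping the translation keeps the box centred on the viewer (most references also pair pos.xyww with glDepthFunc(GL_LEQUAL) rather than GL_EQUAL):

    // Keep only the rotational 3x3 part of the view matrix for the skybox.
    glm::mat4 skyView = glm::mat4(glm::mat3(camera.GetViewMatrix()));
    skyBox.setUniformMat4("view", skyView);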
Drawing multiple objects from one vertex buffer object in OpenGL/OpenTK. I am experimenting with drawing methods using VBOs in OpenGL. Many people normally use one VBO to store one object's data array. I was trying to do quite the opposite: storing multiple objects' data in one VBO and then drawing it. There is a story behind why I want to do this: I sometimes want to group many objects as a single object. However, my code doesn't do it justice. The following is my pseudo-code. Data:

    int[] vBO = new int[1]; // vertex buffer objects
    triangleVertices = new float[] {
        0.0f,  1.0f, 0.0f,  -1.0f, -1.0f, 0.0f,   1.0f, -1.0f, 0.0f, // first triangle line loop, positioned in the middle of the screen
       -3.0f,  1.0f, 0.0f,  -2.0f, -1.0f, 0.0f,  -4.0f, -1.0f, 0.0f, // second triangle line loop, on the left side of the first
        3.0f,  1.0f, 0.0f,   4.0f, -1.0f, 0.0f,   2.0f, -1.0f, 0.0f  // third triangle line loop, on the right side of the first
    };

Setting up:

    void init() {
        GL.GenBuffers(1, vBO);
        GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]);
        GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * triangleVertices.Length),
                      triangleVertices, BufferUsageHint.StaticDraw);
        GL.VertexPointer(3, VertexPointerType.Float, 0, 0);
        GL.EnableClientState(ArrayCap.VertexArray);
    }

Drawing:

    void display() {
        GL.Clear(ClearBufferMask.ColorBufferBit);
        GL.Clear(ClearBufferMask.DepthBufferBit);
        // setting up camera and projection
        float[] eyes = { 0.0f, 0.0f, 10.0f };
        float[] target = { 0.0f, 0.0f, 0.0f };
        Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); // 45 degrees = 0.785398163 rad
        Matrix4 view = Matrix4.LookAt(eyes[0], eyes[1], eyes[2], target[0], target[1], target[2], 0, 1, 0);
        Matrix4 model = Matrix4.Identity;
        Matrix4 MV = view * model;
        GL.MatrixMode(MatrixMode.Projection);
        GL.LoadIdentity();
        GL.LoadMatrix(ref projection);
        GL.MatrixMode(MatrixMode.Modelview);
        GL.LoadIdentity();
        GL.LoadMatrix(ref MV);
        GL.Viewport(0, 0, glControlWindow.Width, glControlWindow.Height);
        GL.Enable(EnableCap.DepthTest); // enable correct Z drawing
        GL.DepthFunc(DepthFunction.Less);
        GL.MatrixMode(MatrixMode.Modelview);
        // Draw
        drawTriangleLineLoops();
        // Finally...
        GraphicsContext.CurrentContext.VSync = true; // caps frame rate so as not to overrun the GPU
        glControlWindow.SwapBuffers(); // takes from 'GL' and puts into the control
    }

    void drawTriangleLineLoops() {
        GL.BindBuffer(BufferTarget.ArrayBuffer, vBO[0]);
        GL.VertexPointer(3, VertexPointerType.Float, 0, 0);
        GL.EnableClientState(ArrayCap.VertexArray);
        GL.DrawArrays(BeginMode.LineLoop, 0, 3);  // first triangle line loop, first vertex starts at index 0
        GL.DrawArrays(BeginMode.LineLoop, 9, 3);  // second triangle line loop, first vertex starts at index 9
        GL.DrawArrays(BeginMode.LineLoop, 18, 3); // third triangle line loop, first vertex starts at index 18
        GL.DisableClientState(ArrayCap.VertexArray);
    }

I expected 3 different triangles to be drawn, but only one was. I don't know what went wrong.
1 | OpenGL Array Texture generates black texture I am creating a game using OpenGL that has many textured cubes. I am trying to use an Array Texture to load multiple textures from one file. I tried to setup my texture atlas however, all that renders is a black texture. I'm not sure what exactly I am doing wrong. Here is the texture loading code Load image stbi set flip vertically on load(true) unsigned char data stbi load(path, amp width, amp height, amp nrChannels, 0) Generate texture object glActiveTexture(GL TEXTURE0) glGenTextures(1, amp textureId) glBindTexture(GL TEXTURE 2D ARRAY, textureId) Actually load texture if (data) glTexImage3D(GL TEXTURE 2D ARRAY, 0, GL RGBA, width, height, 0, 0, GL RGB, GL UNSIGNED BYTE, data) glGenerateMipmap(GL TEXTURE 2D ARRAY) else std cout lt lt "Failed to load " lt lt path lt lt " texture" lt lt std endl set texture filtering parameters glTexParameteri(GL TEXTURE 2D ARRAY, GL TEXTURE MIN FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D ARRAY, GL TEXTURE MAG FILTER, GL NEAREST) Free memory stbi image free(data) Here is my fragment shader version 330 core out vec4 FragColor in vec2 TexCoord uniform sampler2DArray my sampler void main() FragColor texture(my sampler, vec3(0, 0, 1)) vec4(1.0f) Here is my texture file Does anyone know how I can solve my issue? |
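For reference, glTexImage3D's signature is (target, level, internalFormat, width, height, depth, border, format, type, data); the call above passes 0 where depth belongs, so zero layers are allocated and sampling layer 1 returns black. A sketch of an allocation that assumes the file is a vertical strip of layerCount tiles loaded with four channels (stbi_load(..., 4)):

glBindTexture(GL_TEXTURE_2D_ARRAY, textureId);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             tileW, tileH, layerCount,          // depth = number of layers, not 0
             0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
for (int layer = 0; layer < layerCount; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    tileW, tileH, 1, GL_RGBA, GL_UNSIGNED_BYTE,
                    data + layer * tileW * tileH * 4);  // 4 bytes per RGBA texel
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);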
1 | How can I draw curves above a mesh surface? I am making an animation of a deformed sphere that represents some aspects of the wave function in a hydrogen atom. I am starting with an octahedron that I push through a tessellation shader. The tessellation evaluation shader then first deforms the octahedron into something more sphere like and then applies the deformation stuff. Now, to further improve the visual impression of the deformation, I would like to place lines on top of the sphere and also deform those, to achieve something resembling level curves. What would be a good way to achieve this? Generating circles on the CPU side and then sending them to GL and having them deformed separately seems rather odd, but I cannot figure out another way. If I want to achieve this in one shader run I am also limited to drawing triangles (patches), and making these appear as lines also seems rather unwieldy.
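One shader-only option: since the tessellation evaluation shader already knows the displacement, pass it down and draw iso-lines in the fragment shader instead of submitting separate line geometry; the curves then deform with the surface for free. A rough fragment-side sketch, assuming the TES writes the scalar displacement into a varying of that name:

in float displacement;            // assumed varying written by the TES
uniform int  levelCount;          // number of level curves
uniform vec3 lineColor;
// ...inside main(), after computing the shaded 'color':
float f    = fract(displacement * float(levelCount));
float dist = min(f, 1.0 - f);                     // distance to the nearest iso-level
float line = 1.0 - smoothstep(0.0, 0.05, dist);   // thin soft band near each level
color = mix(color, lineColor, line);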
1 | How to properly transform vertices for a model loaded using Assimp? When I try to load a DAE model then vertices are not placed correctly. Here's the code snippet I use to load 3D models using Assimp aiNode CGameObject findNode(aiNode rootNode, const char name) if (!strcmp(name, rootNode gt mName.data)) return rootNode for (int i 0 i lt rootNode gt mNumChildren i ) aiNode found findNode(rootNode gt mChildren i , name) if (found) return found return NULL void CGameObject transformNode(aiMatrix4x4 result, aiNode rootNode) if (rootNode gt mParent) transformNode(result, rootNode gt mParent) aiMultiplyMatrix4(result, amp rootNode gt mTransformation) else result rootNode gt mTransformation Assimp Importer importer const aiScene scene importer.ReadFileFromMemory(buffer, size, aiProcess CalcTangentSpace aiProcess Triangulate aiProcess JoinIdenticalVertices aiProcess GenUVCoords aiProcess TransformUVCoords aiProcess FlipUVs aiProcess SortByPType) if (!scene) return false const int iVertexTotalSize sizeof(aiVector3D) 2 sizeof(aiVector2D) int iTotalVertices 0 for(int i 0 i lt scene gt mNumMeshes i ) Mesh m new Mesh aiMesh mesh scene gt mMeshes i aiNode node findNode(scene gt mRootNode, mesh gt mName.data) transformNode( amp node gt mTransformation, node) aiTransposeMatrix4( amp node gt mTransformation) int iMeshFaces mesh gt mNumFaces for (int f 0 f lt iMeshFaces f ) const aiFace face amp mesh gt mFaces f for (int k 0 k lt 3 k ) int vertexIndex face gt mIndices k iTotalVertices aiVector3D pos mesh gt mVertices vertexIndex aiTransformVecByMatrix4( amp pos, amp node gt mTransformation) m gt vertices.push back(pos.x) m gt vertices.push back(pos.y) m gt vertices.push back(pos.z) ... What am I doing wrong ? I will create a VBO for all the vertex positions later and render them. When I load an OBJ model everything loads fine and there is no need for applying per node transformations. Also, I noticed that Assimp renames node names from "Cube.001" to "Cube 001", is this behavior normal expected. I'm using Assimp 3.1.1. |
1 | Using PyOpenGL, can I rotate raw data rather than the current matrix? I'm trying to create a sort of PyOpenGL renderer for the Bullet physics engine, and at the moment I'm rotating my basic cubes with math in Python. However, if I do the math in Python that's going to take more time than doing the math with OpenGL. OpenGL has the ability to rotate a matrix using glRotatef(), but it only rotates the "current matrix". I'm not really interested in convoluting my render loop with an individual check for each object's rotation and position if I don't have to. So my question is, is there a way I can, for example, load a matrix of points into OpenGL, rotate this matrix and then read it back and give it to my render loop? Or should I instead look for an alternative 3D math library? (Or for that matter, is running the code in python really all that laggy?) |
1 | How to mix pixel colors in Shader? I have a pixel that has an RGB colour. This colour is calculated by the shader and can be anything. How can I override this colour with a colour I choose? If my pixel is white it's simple, I can do this half3 original = half3(1, 1, 1) half3 mycolor = half3(1, 0, 0) half3 result = original * mycolor But what can I do if half3 original = half3(0.36, 0.74, 0.18) half3 mycolor = half3(1, 0, 0) half3 result = ? What operation or function should I apply to my original pixel color to override it?
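A few common choices, sketched in GLSL (mix is GLSL's name for HLSL/Cg lerp); which one is right depends on whether the original shading should survive the override:

vec3 original = vec3(0.36, 0.74, 0.18);
vec3 mycolor  = vec3(1.0, 0.0, 0.0);
vec3 tinted   = original * mycolor;            // multiply: keeps shading, shifts hue
vec3 faded    = mix(original, mycolor, 0.5);   // blend halfway toward mycolor
float luma    = dot(original, vec3(0.299, 0.587, 0.114));
vec3 recolor  = mycolor * luma;                // full recolor, brightness preserved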
1 | LWJGL 3 Random Sprites Don't Blend Some of my sprites don't "blend" in with tiles behind them Z wise. glEnable(GL DEPTH TEST) Depth testing is enabled and works. glEnable(GL BLEND) glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) The blend function is above. Below is a picture of four different instances of the application. As you can see, blending only works on random sprites. Why aren't some of the sprites blending? An observation that I made is that when I move the sprites (layers 0 and 1) below (Z wise) the button, you can see the sprites through the gaps left by letters (layer 10), yet the button itself (layer 9) doesn't show in that area. This is what I find the strangest.
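When depth testing and alpha blending meet, a sprite drawn first writes depth even where its texels are fully transparent, and anything drawn later at a greater depth fails the test there, which matches the "gaps left by letters" observation. The two usual fixes are to sort sprites back to front, or to discard transparent fragments so they never write depth. A fragment-shader sketch of the latter (names assumed):

vec4 texel = texture(spriteTexture, uv);
if (texel.a < 0.1)      // cutoff is tunable; fully transparent texels...
    discard;            // ...no longer write depth and stop occluding
fragColor = texel;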
1 | OpenGL ES wrong orientation I'm writing a little Android game in OpenGL ES. My game graphics are not appearing in the right orientation. In fact everything is drawn in reverse vertically. Can someone please comment on my understanding of how OpenGL works below? My understanding is that OpenGL works in a right handed coordinate system, with y up, x to the right, and z pointing out of the screen towards the viewer. So, I set my orthographic projection matrix using Matrix.orthoM(..., -2, 2, -2, 2, 1, 15) for example, so if I were to draw a line from (-2, -2, 0) to (2, 2, 0) I should be drawing a line in the X Y plane from (-2, -2) to (2, 2). Near and far planes are 1 and 15 respectively. In addition I am setting up my view matrix using Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 10, 0, 0, 0, 0, 1, 0) so I'm standing on the positive Z axis at 10 units, looking towards the origin with the Y axis as the upwards vector. But when I come to draw stuff, everything is flipped in the X axis. So a point at (1, 1) would appear at (-1, 1). Does anyone see what might be wrong here?
1 | How to "hot reload" a glsl shader I am wondering if its possible to dynamically change shaders while the code is running. In my game, I want to have a development mode in which users can change the shader source and dynamically see their changes like so https www.youtube.com watch?v hlSG iVC6xc this is my current reload method Shader is my shader class and shader is the current shader delete all resources of previous shader if (shader gt GetProgram() ! 0) glDeleteProgram(shader gt GetProgram()) if (inShader gt GetVert() ! 0) glDeleteShader(inShader gt GetVert()) if (inShader gt GetGeom() ! 0) glDeleteShader(inShader gt GetGeom()) if (inShader gt GetFrag() ! 0) glDeleteShader(inShader gt GetFrag()) GLuint vertex shader glCreateShader(GL VERTEX SHADER) GLuint fragment shader glCreateShader(GL FRAGMENT SHADER) GLuint geometry shader glCreateShader(GL GEOMETRY SHADER) create vertex shader glShaderSource(vertex shader, 1, (const GLchar )vertSource, NULL) glCompileShader(vertex shader) create fragment shader glShaderSource(fragment shader, 1, (const GLchar )fragSource, NULL) glCompileShader(fragment shader) GLuint program glCreateProgram() glAttachShader(program, vertex shader) glAttachShader(program, fragment shader) glLinkProgram(program) Shader theResult new Shader(context) theResult gt SetVertShader(vertex shader) theResult gt SetFragShader(fragment shader) theResult gt SetGeomShader(geometry shader) theResult gt SetProgram(program) replace the shader we created with the old one shader theResult delete theResult return shader The above code does one of two things when called, it either gives some undefined graphical behaviour or doesn't seem to reload the shader and use the previous ones. Is there something I'm missing regarding how I'm linking the new shaders, or is this even the right approach? As requested i will add the shader class information shader.h protected Shader properties unsigned int m VertexShader unsigned int m GeometricShader unsigned int m FragmentShader unsigned int m Program OpenGL mContext public Constructor Shader(OpenGL context) Shader() Getter setters shader properties inline unsigned int GetVert() return m VertexShader inline unsigned int GetGeom() return m GeometricShader inline unsigned int GetFrag() return m FragmentShader inline unsigned int GetProgram() return m Program inline void SetVertShader(unsigned int inValue) m VertexShader inValue inline void SetGeomShader(unsigned int inValue) m GeometricShader inValue inline void SetFragShader(unsigned int inValue) m FragmentShader inValue inline void SetProgram(unsigned int inValue) m Program inValue shader.cpp Shader Shader(OpenGL context) m VertexShader(0), m GeometricShader(0), m FragmentShader(0), m Program(0), mContext(context) Shader Shader() |
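Two things help with this kind of reload: check compile/link status before touching the old program (so a typo in the edited source doesn't leave you with no usable shader), and don't delete the old program until the new one has linked. A minimal sketch of the check, assuming the same member names as the method above:

GLint ok = GL_FALSE;
glCompileShader(vertex_shader);
glGetShaderiv(vertex_shader, GL_COMPILE_STATUS, &ok);
if (!ok) {
    char log[1024];
    glGetShaderInfoLog(vertex_shader, sizeof(log), nullptr, log);
    fprintf(stderr, "vertex shader: %s\n", log);  // report the error...
    glDeleteShader(vertex_shader);
    return shader;                                // ...and keep the old program in use
}
// ...same check for the fragment shader; after glLinkProgram, test GL_LINK_STATUS.
// Only once the new program links successfully, delete the old one and swap.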
1 | opengl rotations for a human I currently can rotate around a pivot point by first translating to the pivot point then performing the rotation and finally translating back to the origin. I do that easily enough for the shoulder in my example. However I cannot figure out how to also add in a rotation around the elbow for the forearm. I've tried the following for the forearm rotation around the elbow translate to shoulder, rotate, translate to origin, translate to forearm, rotate, translate to origin translate to shoulder, rotate, translate to forearm, rotate, translate to shoulder, translate to origin Neither work for me. Any suggestions? I'm really stuck on this one. |
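The usual pattern for a joint chain is to leave each joint's transform on the stack rather than translating back to the origin between joints; the elbow transform is then expressed relative to the shoulder. A fixed-function sketch (lengths and angles are placeholder names):

glPushMatrix();
glTranslatef(shoulderX, shoulderY, 0.0f);   // pivot at the shoulder
glRotatef(shoulderAngle, 0, 0, 1);
drawUpperArm();                             // modeled with its base at the origin
glTranslatef(upperArmLength, 0.0f, 0.0f);   // elbow sits at the end of the upper arm
glRotatef(elbowAngle, 0, 0, 1);
drawForearm();                              // inherits the shoulder rotation too
glPopMatrix();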
1 | Is glDrawArraysInstanced in OpenGL parallel when drawing those instances? Is glDrawArraysInstanced in OpenGL parallel when drawing those instances? I cannot figure it out by referring to its reference pages and numerous online tutorials. Update To be more clear, I mean, for example, we have 100 instances. Is the drawing of those 100 instances parallel? Is it as if we issued the command "glDrawArrays(mode, first, count)" 100 times? Is the execution of those 100 commands sequential or parallel? I think it should be sequential the GPU begins to draw the second instance only AFTER the first instance has finished drawing. Or does glDrawArraysInstanced just save the time spent "giving your GPU the commands"? Update 2 The reason I want to know the answer is that if it is sequential, I personally think drawing thousands of tree leaves in this way is time consuming. I may instead draw them in one shader program with some algorithm tweak.
1 | Does the order of vertex buffer data when rendering indexed primitives matter? I'm building a 3d object's triangles. If I can write them to the buffer in the order they are calculated it will simplify the CPU code. The vertices for the triangles will not be contiguous. Is there any performance penalty for writing them out of order? |
1 | OpenGL Textures not sitting right on model I have been trying to get textures loading properly onto my models but to no avail. I'm using picopng to load my images. Here is my code. Texture code std vector lt unsigned char gt buffer, image, aaa std ifstream file(location.c str(), std ios in std ios binary std ios ate) std streamsize size 0 if (file.seekg(0, std ios end).good()) size file.tellg() if (file.seekg(0, std ios beg).good()) size file.tellg() if (size gt 0) buffer.resize((size t)size) file.read((char )( amp buffer 0 ), size) else buffer.clear() unsigned long w, h int error decodePNG(image, w, h, buffer.empty() ? 0 amp buffer 0 , (unsigned long)buffer.size()) if (error ! 0) std cout lt lt "error " lt lt error lt lt std endl glEnable(GL TEXTURE 2D) GLuint id glGenTextures(1, amp id) glBindTexture(GL TEXTURE 2D, id) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL NEAREST) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA, w, h, 0, GL RGBA, GL UNSIGNED BYTE, amp image 0 ) glBindTexture(GL TEXTURE 2D, 0) glGenerateMipmap(GL TEXTURE 2D) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR MIPMAP LINEAR) float mipMapingAffectivness 0.4f glTexParameterf(GL TEXTURE 2D, GL TEXTURE LOD BIAS, mipMapingAffectivness) Any advice? Update I have tried flipping the image but to no avail, I have tried all 8 orientation variations and non seem to work. |
1 | SDL Image and typical SDL BMP loading fails completely been messing with OpenGL and SDL for a pair of weeks. The thing is quite weird. I have been loading a texture from a BMP and using a really easy shader to make it work, and so far it has worked very well. Now, i've refractored my code and made a heightmap loader, with is pretty cool and works nice. I have a ResourceManager class which successfully loads shaders but fails at loading textures. The structure of that class is simple Constructor (empty) Destructor (empty) AddTexture (const char FileName, const char indexName) GetTexture (const char indexName) typedef map TextureMap typedef TextureMap iterator TextureIt TextureMap Textures I tried using SDL Load BMP function and SDL img Load IMG function. The first one makes this weird result Close Up When the real image is a simple BMP in 32bit format, as always Real Texture And the second method, using SDL image, simply doesn't show the image. The 'AddTexture' function bool TEXTURE TextureManager AddTexture(const char fileName, const char indexName) SDL Surface img SDL LoadBMP(fileName) unsigned int id glGenTextures(1, amp id) glBindTexture(GL TEXTURE 2D, id) glTexImage2D(GL TEXTURE 2D, 0, GL RGB, img gt w, img gt h, 0, GL RGB, GL UNSIGNED SHORT 5 6 5, img gt pixels) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) SDL FreeSurface(img) Textures indexName id return true And the 'AddTexture' SDL image version GLuint id 0 SDL Surface Surface IMG Load(fileName) glGenTextures(1, amp id) glBindTexture(GL TEXTURE 2D, id) int Mode NULL if(Surface gt format gt BytesPerPixel 4) Mode GL RGBA else Mode GL RGB glTexImage2D(GL TEXTURE 2D, 0, Mode, Surface gt w, Surface gt h, 0, Mode, GL UNSIGNED BYTE, Surface gt pixels) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glBindTexture(GL TEXTURE 2D, 0) std cout lt lt "The ID of the tex is " lt lt id lt lt std endl Textures indexName id return true I create the texture here Res gt TexManager gt AddTextureAlt("res sum.bmp", "Brick") And the following code is used to bind the texture GLuint TEX Res gt TexManager gt GetTexture("Brick") glBindTexture(GL TEXTURE 2D, TEX) The normals and tex coords are alright, i've been using it all the time and it was displaying the tex and light perfectly well before. I can't find the problem. I just can't. Is someone able to help a poor noob? |
1 | Texturing in OpenGL, Should texture coordinates be assigned to vertices in the shader? I attempting to texture 3D models (a cube for example) using ibo s, with OpenGL in Java. Currently, my textures are distorted. I believe this is because only a single texture coordinate is being assigned to each vertex, as the meshes are loaded from wave front (.obj) files. As each vertex should correspond to multiple texture coordinates, should I be assigning texture coordinates to vertices as the data is loaded from file or would it be better to assign texture coordinates to vertices in the shader? I would appreciate example code. |
1 | How to modify VBO data I am learning LWJGL so i can start working on my game. In order to learn LWJGL I got the idea to implement the map builder so I can get comfortable with graphics programming. Now, for the map creation tool I need to draw new elements or draw the old one's with different coordinates. Let me explain this My game will be a 2D scroller. The map will be consisting of multiple rectangles ( 2 strip triangles). When I click my left mouse button i want to start the rectangle and when I release it I want to stop the rectangle bottom right at that position. As I want to use VBOs I want to know how to modify data inside the VBO based on user input. Should i have a copy of a vertex array and then add the whole array to the VBO at each user input? How is usually implemented the VBO update? |
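For this kind of editing the buffer doesn't have to be rebuilt from scratch: glBufferSubData overwrites a byte range in place, which fits updating one rectangle while the mouse drags. A sketch, assuming each rectangle is 4 vertices of 2 floats and the buffer was created with GL_DYNAMIC_DRAW:

float rectVerts[8] = { x0,y0, x1,y0, x0,y1, x1,y1 };  // triangle-strip order
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER,
                rectIndex * sizeof(rectVerts),        // byte offset of this rectangle
                sizeof(rectVerts), rectVerts);
// Appending a brand-new rectangle past the allocated size still needs a
// glBufferData reallocation, so pre-allocating a generous size up front helps.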
1 | GL SPOT CUTOFF not working properly I'm new to OpenGL. I'm studying OpenGL 2.1 and I'm trying to make a little program to test the GL SPOT CUTOFF property, but when I set a value between 0.0 90.0, the light doesn't work and everything is dark. The code void lightInit(void) GLfloat light0Position 0.0,0.0,2.0,1.0 glEnable(GL LIGHTING) glEnable(GL LIGHT0) glLightf(GL LIGHT0, GL SPOT CUTOFF, 45.0) glLightfv(GL LIGHT0, GL POSITION, light0Position) void reshapeFunc(int w, int h) glViewport(0, 0, (GLsizei) w, (GLsizei) h) glMatrixMode(GL PROJECTION) glLoadIdentity() if (w lt h) glOrtho( 4, 4, 4 (GLfloat)h (GLfloat)w, 4 (GLfloat)h (GLfloat)w, 4.0, 4.0) else glOrtho( 4 (GLfloat)w (GLfloat)h,4 (GLfloat)w (GLfloat)h, 4, 4, 4, 4.0) glMatrixMode(GL MODELVIEW) glLoadIdentity() gluLookAt(0,0,0,0,0, 1,0,1,0) void displayFunc(void) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) center sphere glutSolidSphere(1, 100,100) right sphere glPushMatrix() glTranslatef(3,0,0) glutSolidSphere(1, 100,100) glPopMatrix() left sphere glPushMatrix() glTranslatef( 3,0,0) glutSolidSphere(1, 100,100) glPopMatrix() glutSwapBuffers() void keyboardFunc(unsigned char key, int x, int y) if (key 27) exit(EXIT SUCCESS) int main(int argc, char argv) freeglut init and windows creation glutInit( amp argc, argv) glutInitDisplayMode(GLUT DOUBLE GLUT RGB GLUT DEPTH) glutInitWindowSize(600, 500) glutInitWindowPosition(300, 100) glutCreateWindow("OpenGL") glew init and errors check GLenum err glewInit() if (GLEW OK ! err) fprintf(stderr, "Error s n", glewGetErrorString(err)) return 1 fprintf(stdout, " GLEW s n n", glewGetString(GLEW VERSION)) general settings glClearColor(0.0, 0.0, 0.0, 0.0) glShadeModel(GL SMOOTH) glEnable(GL DEPTH TEST) light settings lightInit() callback functions glutDisplayFunc(displayFunc) glutReshapeFunc(reshapeFunc) glutKeyboardFunc(keyboardFunc) glutMainLoop() return 0 This code produces this image If I delete glLightf(GL LIGHT0, GL SPOT CUTOFF, 45.0) , the next image is produced Is there some kind of bug ? |
1 | JNI Error when Wrapping Jar with JOGL using Launch4j I have been trying to wrap my fat JAR file into an EXE using Launch4j, but I have been running into problems when I try to execute the EXE. Here is the error log I get from Launch4j Executing E Downloads CasualCaving.exe JNILibLoaderBase Caught IllegalArgumentException No Jar name in lt jar file E Downloads CasualCaving.exe! jogamp common Debug.class gt Exception in thread "main" java.lang.UnsatisfiedLinkError Can't load library E Downloads natives windows amd64 gluegen rt.dll at java.lang.ClassLoader.loadLibrary(Unknown Source) at java.lang.Runtime.load0(Unknown Source) at java.lang.System.load(Unknown Source) at com.jogamp.common.jvm.JNILibLoaderBase.loadLibraryInternal(JNILibLoaderBase.java 624) at com.jogamp.common.jvm.JNILibLoaderBase.access 000(JNILibLoaderBase.java 63) at com.jogamp.common.jvm.JNILibLoaderBase DefaultAction.loadLibrary(JNILibLoaderBase.java 106) at com.jogamp.common.jvm.JNILibLoaderBase.loadLibrary(JNILibLoaderBase.java 487) at com.jogamp.common.os.DynamicLibraryBundle GlueJNILibLoader.loadLibrary(DynamicLibraryBundle.java 421) at com.jogamp.common.os.Platform 1.run(Platform.java 317) at java.security.AccessController.doPrivileged(Native Method) at com.jogamp.common.os.Platform. lt clinit gt (Platform.java 287) at com.jogamp.opengl.GLProfile. lt clinit gt (GLProfile.java 147) at org.graphics.Render. lt init gt (Render.java 20) at org.engine.Main.main(Main.java 18) I am using JOGL 2.3.2 imported through Maven, with the assembly plugin to compile it to a fat JAR. Here is a picture of my library configuration I am not sure what is wrong, as the JAR file works fine. |
1 | How should I do 3D games through Java on a mac? I have been self teaching myself Java on the mac mostly because the language is cross platform. Recently, I have been only able to develop 2D games using the Graphics2D class. Now, I want to learn how to make 3D games in Java. I used to model and animate stuff in 3D, so my knowledge of 3 Dimensional stuff is okay. I have spent the last 3 hours using google to look up ways of making 3D games in java. Apparently the best one to use is OpenGL, so i looked up a tutorial on it and i cannot find a tutorial that shows how to (if there is a way) install JOGL on the Mac platform. Should i continue to use Java? How can i make 3D games using Java? What is the best way to make 3D games on a mac? |
1 | Java OpenGl Rotating Cube Problem! OK, so I successfully learned how to bind textures to quads and display this cool looking crate. However, when I rotate the cube something goes wrong. During rotation the back face of the cube overlays the front face and appears in front of the front face. This is also the case with the right and left faces. They overlay each other giving a weird perception. I'm not sure why this happens. Here is the code where I draw only four face of the quad (front, back right, left). gl.glClear(GL2.GL COLOR BUFFER BIT GL2.GL DEPTH BUFFER BIT) clear the screen and depth buffers gl.glLoadIdentity() reset the current modelview matrix draw quad gl.glTranslatef(0.0f, 0.0f, 6.0f) translate to the left and into the screen gl.glRotatef(rotateAngle, 0.0f, 1.0f, 0.0f) rotate triangle around the y axis gl.glBindTexture(GL2.GL TEXTURE 2D, textures 2 ) bind the texture you want to use to gl texture 2d gl.glBegin(GL2.GL QUADS) front face gl.glTexCoord2f(0.0f, 1.0f) bottom left gl.glVertex3f( 1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 1.0f) bottom right gl.glVertex3f(1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 0.0f) top right gl.glVertex3f(1.0f, 1.0f, 1.0f) gl.glTexCoord2f(0.0f, 0.0f) top left gl.glVertex3f( 1.0f, 1.0f, 1.0f) right face gl.glTexCoord2f(0.0f, 1.0f) bottom left gl.glVertex3f(1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 1.0f) bottom right gl.glVertex3f(1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 0.0f) top right gl.glVertex3f(1.0f, 1.0f, 1.0f) gl.glTexCoord2f(0.0f, 0.0f) top left gl.glVertex3f(1.0f, 1.0f, 1.0f) left face gl.glTexCoord2f(0.0f, 1.0f) bottom left gl.glVertex3f( 1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 1.0f) bottom right gl.glVertex3f( 1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 0.0f) top right gl.glVertex3f( 1.0f, 1.0f, 1.0f) gl.glTexCoord2f(0.0f, 0.0f) top left gl.glVertex3f( 1.0f, 1.0f, 1.0f) back face gl.glTexCoord2f(0.0f, 1.0f) bottom left gl.glVertex3f( 1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 1.0f) bottom right gl.glVertex3f(1.0f, 1.0f, 1.0f) gl.glTexCoord2f(1.0f, 0.0f) top right gl.glVertex3f(1.0f, 1.0f, 1.0f) gl.glTexCoord2f(0.0f, 0.0f) top left gl.glVertex3f( 1.0f, 1.0f, 1.0f) gl.glEnd() Moreover, the way I draw textures and then vertices is not yet clear to me. According to what I've read the bottom left corner (0, 0) is where the textures start drawing, so I should also start drawing my quad from bottom left accordingly. I tried that but it didn't work. The texture was drawn, but up side down. So I had to go through some trial and error till I got the texture drawn correctly. I wish I could attach a picture to this post so I can show you how the back face of the cube overlays the front one during rotation, but I think this facility isn't provided in stack exchange. Thanks. |
1 | Does interleaving in VBOs speed up performance when using VAOs You usually get a speed up when you use interleaved VBOs instead of using multiple VBOs. Is this also valid when using VAOs? Because it's much more convenient to have a VBO for the positions, and one for the normals etc. And you can use one VBO in multiple VAOs. |
1 | Run OpenGL shader on part of a texture How do I run an OpenGL shader on just a portion of an off screen texture and leave the rest of the texture unmodified? Are there any calls that restrict the sampled pixels to just a rectangle or do I have to filter out the rectangle in the shader code? |
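The scissor test is the direct tool for this: fragments outside the scissor rectangle are dropped before they touch the framebuffer, so the rest of the texture stays untouched. A sketch, assuming the texture is attached to an FBO and drawFullscreenQuad is a hypothetical helper that issues the filter pass:

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);     // pixels outside this rectangle are preserved
glViewport(x, y, width, height);    // map the quad onto the same rectangle
drawFullscreenQuad();               // runs the shader over just that region
glDisable(GL_SCISSOR_TEST);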
1 | OpenGL Index buffers problem I have a custom file format that has all the needed information for a 3D mesh (exported from 3ds Max). I've extracted the data for vertices, vertex indices and normals. I pass to OpenGL the vertex data, vertex indices and normals data and I render the mesh with a call to glDrawElements(GL TRIANGLES,...) Everything looks right but the normals. The problem is that the normals have different indices. And because OpenGL can use only one index buffer, it uses that index buffer for both the vertices and the normals. I would be really grateful if you could suggest me how to go about that problem. Important thing to note is that the vertex normal data is not "sorted" and therefore I am unable to use the functionality of glDrawArrays(GL TRIANGLES,...) the mesh doesn't render correctly. Is there a way algorithm that I can use to sort the data so the mesh can be drawn correctly with glDrawArrays(GL TRIANGLES,..) ? But even if there is an algorithm, there is one more problem I will have to duplicate some vertices (because my vertex buffer consists of unique vertices for example if you have cube my buffer will only have 8 vertices) and I am not sure how to do that. |
1 | What is the "main" implementation of OpenGL? What is the main implementation of OpenGL? I see GLUT everywhere, but I want to use whatever is closest to an official implementation, or what ever I will have the most control over. I might not be asking this right since it's contributed to by different parties. Which is why I have been looking at DirectX, but I see more people using OpenGl so I'd like to use that instead. |
1 | Constant size geometries for glDrawArraysInstanced calls In my application I can keep the objects constant sized const Graphics VectorDouble diffVec(dTranslateX m CameraTransformOffset.x, dTranslateY m CameraTransformOffset.y, dTranslateZ m CameraTransformOffset.z) camScaleInv Graphics Lenght(diffVec) 2.0f tan(fov 0.5f PI 180.0f) Then I scale my view matrix with camScaleInv, and I think that is a very common thing to do, but I have some objects which I draw with glDrawArraysInstanced. I pass the parent surface only once and another array for duplicating the parent surface in different positions. That causes my "diffVec" to be different for each instance, so I am not able to use the keep-same-distance logic on the CPU side. Is there any way for me to keep the objects which are being drawn with glDrawArraysInstanced the same size, either on the CPU or GPU (GLSL) side?
1 | Store static environment in chunks I have terrain separated by chunks and I would like to put environment (For example, rocks, trees, etc..) in each chunk randomly. My question is related to how to implement such system in OpenGL. What I have tried Solution Draw the environment with instancing once for all the terrain (not a specific chunk) Problem I except the chunk to sometimes take a bit to load and because I am using threads the environment will appear as floating. Solution Draw the environment with instancing for each chunk. Problem To draw each chunk, I will need to bind the VBO for the chunk, draw the chunk, bind the VBO for the environment (and the VAO probably) and draw it. I don't want to put so many glBindBuffer functions because I heard it is slow (Please correct me if I am wrong) (Not tried) Solution Somehow merge the vertices of the terrain with its environment and draw them together. Problem My terrain is drawn with GL TRIANGLE STRIP so this is a first problem, the second problem(?) is that I don't know how well it will function (talking speed). I tried looking up solutions on the internet but didn't seem to find any that relate to chunks. Anyone know how other games that uses chunks do that? Is there a way to do it without causing a lot of speed decrease? |
1 | OpenGL Keeping alpha in a render buffer In my current task, i need to render a texture into a render buffer, in order to work on it (apply special filters) there. The result is then considered a "new texture", which is later displayed. This works fine, except when the texture contains some transparent semi transparent parts. My current guess it that, within the render buffer, the texture is "merged" with a kind of "grey background". In this case, it obviously impacts the R,G,B color components of transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi transparent pixels, which RGB are "tainted" by the grey background. |
1 | OpenGL ES indices optimization I am using OpenGL ES 1.x 2.x I have 2 attributes to be passed to the GPU(one is colors, one is vertices, one color per vertex). I use indices. Both attributes will use the same indices array This is not a big issue but I was wondering if there is a way to tell the GPU that both attributes use the same indices array so that it is not transfered to the gpu twice, or maybe it does not matter because the GPU uses the RAM? |
1 | Is there a way to start with OpenGL 3.0 without needing to write my own shaders? I'm starting with OpenGL and found out that from OpenGL 3.x on you must write your own shaders (I think it's obligatory). Am I right here? I have done some research but I can't seem to find the answer. If it's necessary to write the shaders for OpenGL 3, then I can still use 2.x, right? It's just that I'm only starting out and OpenGL seems a little bit lower level than I'm accustomed to, so I think I'll have a really hard time writing a shader myself. P.S. My GPU only supports up to OpenGL 3.0. I think it's because of the drivers, I'm running a Linux distro that only has FLOSS (no proprietary software).
1 | Shaders and Performance I'm coding my first Shader in my little game engine, and I have some questions about it's performance and common approaches. Is the Shader code processed by the video card instead of the PC processor? Just so I know if it's possible to share some calculations to save some processor power. Generally, should I do the math calculations in my code or the Shader? I could calculate them all (lightning for example) and just send the final values to be multiplied used to the Shader, but what's the best approach? Shaders as far as I could understand are the main party responsible for visual effects, so for example, if I want to add a "blur" effect to only one object on screen, should I use an if switch statement in the Shader or should I have different Shader ProgramIDs and I just switch the call glUseProgram(programID)? Sorry if some questions seems stupid, and thanks for your time! |
1 | Shadow map shimmering, indexing outside the shadow map I have tried to reduce the shadow shimmering flickering using the technique described here http msdn.microsoft.com en us library windows desktop ee416324 28v vs.85 29.aspx It works as I want and shimmering is reduced but sometimes I have artifacts. It looks like my code tries to index space outside the shadow map. The article above writes about it but I didn't find a solution. When I played with the code I also got black strips on the corners. Code reduce shadow shimmering flickering Vector2 vecWorldUnitsPerTexel Vector2(D.x() (float)D.shadowMapSize(), D.y() (float)D.shadowMapSize()) get only x and y dimensions Vector2 min2D min.vector2(), max2D max.vector2() min2D vecWorldUnitsPerTexel min2D Round(min2D) min2D vecWorldUnitsPerTexel max2D vecWorldUnitsPerTexel max2D Round(max2D) max2D vecWorldUnitsPerTexel min.set(min2D, min.z) max.set(max2D, max.z) crop matrix based on this article https developer.nvidia.com gpugems GPUGems3 gpugems3 ch10.html Vector scale Vector offset scale.x 2.0f (max.x min.x) scale.y 2.0f (max.y min.y) scale.z 1.0f (max.z min.z) offset.x 0.5f (max.x min.x) scale.x offset.y 0.5f (max.y min.y) scale.y offset.z min.z scale.z Matrix4 m m.x Vector4(scale.x, 0, 0, 0) m.y Vector4(0, scale.y, 0, 0) m.z Vector4(0, 0, scale.z, 0) m.w Vector4(offset.x, offset.y, offset.z, 1.0f) I think that I should store in the depth map a slightly larger area but I'm not sure how to do this. I tried to change the scale of the crop matrix but it doesn't help. EDIT It seems I've found a solution. When I'm rounding (or flooring) min and max values I subtract one from the min value and add one to the max value. This makes the shadow map contain a slightly larger area and I don't see any artifacts. |
1 | Draw contour around object in Opengl I need to draw contour around 2d objects in 3d space. I tried drawing lines around object( points to fill the gap), but due to line width, some part of it( 50 ) was covering object. I tried to use stencil buffer, to eliminate this problem, but I got sth like this(contour is green) http goo.gl OI5uc (sorry I can't post images, due to my reputation) You can see(where arrow points), that some parts of line are behind object, and some are above. This changes when I move camera, but always there is some part, that is covering it. Here is code, that I use for drawing object glColorMask(1,1,1,1) std list lt CObjectOnScene gt iterator objIter ptr gt objects.begin(),objEnd ptr gt objects.end() int countStencilBit 1 while(objIter! objEnd) glColorMask(1,1,1,1) glStencilFunc(GL ALWAYS,countStencilBit,countStencilBit) glStencilOp(GL REPLACE,GL KEEP,GL REPLACE ) ( objIter) gt DrawYourVertices() glStencilFunc(GL NOTEQUAL,countStencilBit,countStencilBit) glStencilOp(GL KEEP,GL KEEP,GL REPLACE) ( objIter) gt DrawYourBorder() objIter countStencilBit I've tried different settings of stencil buffer, but always I was getting sth like that. Here is question 1.Am I setting stencil buffer wrong? 2. Are there any other simple ways to create contour on such objects? Thanks in advance. EDIT 1. I don't have normals of objects. 2. Object can be concave. 3. I can't use shaders(see below why). |
1 | Problem with bullet movement I'm trying to draw "bullets" in my game, but I think something is wrong with the velocity computation. Sometimes the "bullets" go the right way, but sometimes they don't. The CameraFront is constructed with Euler angles. Can you help? Thanks in advance So this is how I create bullets Bullet tracer makeBulletTracer(v3 cameraFront, v3 startP, v3 endP) Bullet tracer result assert((gTracersCount 1) < MAX BULLET TRACERS) mat4 model identity() model scale(model, v3(.5f,.5f,1.5f)) v3 up v3(0.0f,1.0f,0.0f) v3 right normalize(cross(cameraFront,up)) mat4 M rows3x3(right,up,cameraFront) model model M model translate(model, startP v3(0.0f,3.0f,0.0f)) result.model model result.velocity cameraFront gTracersCount return result This is how I draw them for (u32 tracerIndex 0 tracerIndex < gTracersCount tracerIndex ) Bullet tracer tracer gameState->tracers tracerIndex v3 p getTranslationPart(tracer->model) v3 offset (p tracer->velocity) input->dtForFrame 0.01f tracer->model translate(tracer->model, offset) passUniformMatrix(opengl.shaderProgram, tracer->model, false, "model") glDrawArrays(GL TRIANGLES, 0, 36)
1 | What does glBlendFunc do when given an incorrect enumeration value? In cocos2d x, I am using particle system from plist with blend values dst 1, and src 100 which is wrong enum for gl blend. Although app logged "OpenGL error 0x0500" but it still run and showed a good blend effect. And i can not find the right pairs of dst and src to recreate that effect. So i hope someone can tell me how BlendFunc gl blend will work when it having bad src value. Base on that i hope i can recreate that effect without using wrong src value. Thanks |
1 | OpenGL can an Element Array Buffer point to only one VBO? I'm making a Minecraft style game and currently storing 36 vertices and texture coordinates per cube. Could I make an EBO for only the positions and leave the texture coordinates as they are? OK, I think this is how I will do it. I would have an array of 24 vertices (3 GLfloats each) and 24 texture coordinates (2 GLfloats each) for the corners, then I would have an EBO pointing to the corners with 36 indices. So memory total 24*3*4 + 24*2*4 + 36*4 = 624 bytes instead of 36*3*4 + 36*2*4 = 720 bytes
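For reference, the element buffer indexes all enabled attributes at once with the same index, so positions and texture coordinates can live in separate VBOs and still share the one EBO, which matches the 624-byte layout above. A sketch of the VAO setup under those assumptions:

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, posVbo);               // 24 corners * vec3
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, uvVbo);                // 24 corners * vec2
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);          // 36 indices, shared by both attributes
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);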
1 | OpenGL GLSL vector of sampler2D If im trying to create an obj. file loader in c and draw it with opengl i need to use multiple textures in the fragment shader.I know i can just use uniform sampler2D Texture1 uniform sampler2D Texture2 uniform sampler2D Texture3 uniform sampler2D Texture4 But i think there must be an elegant way to do it.Like a vector of samplers or something.Im asking this because i dont want to declare n textures in my fragment shader and type n if's where i test which texture i should use for the current fragment.It just looks ugly. |
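GLSL does allow arrays of samplers, with a caveat: in #version 330 the array index must be a constant expression, and even in 4.00+ it must be dynamically uniform, so a freely varying per-fragment material index is not guaranteed to work; a sampler2DArray sidesteps all of that. A sketch of the array form under those assumptions:

#version 400 core
uniform sampler2D textures[4];
flat in int  materialId;   // assumed per-primitive index (must be dynamically uniform)
in vec2      uv;
out vec4     fragColor;
void main() {
    fragColor = texture(textures[materialId], uv);
}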
1 | OpenGL Fragment Shader Interpolation Vs Inverse Calculation I have found openGL fragment shader tutorials on the web, some that use use inverse calculations to revert the fragment coordinate back to the world space and others that interpolate the position. With that said... Is there a difference between interpolated world space position and the actual world space position calculated using inverse matrices and the fragment window space coordinates? Does anyone know of any resources that would help me understand the mathematical differences between these two things, if they are different? If they aren't different, then why would anyone use inverse matrices instead of interpolating, which seems less expensive? I wrote a shader to test 1, but I cannot tell if rounding errors are to blame for the differences or not. Thanks! |
1 | How do I implement camera axis aligned billboards? I am trying to make an axis aligned billboard with Pyglet. I have looked at several tutorials, but they only show me how to get the up, right, and look vectors. So far this is what I have target cam.pos look norm(target billboard.pos) right norm(Vector3(0,1,0) look) up look right gluLookAt( look.x, look.y, look.z, self.pos.x, self.pos.y, self.pos.z, up.x, up.y, up.z ) This does nothing for me visibly. Any idea what I'm doing wrong? |
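One pitfall in the snippet: gluLookAt builds a camera (view) matrix, which is the inverse of what places an object. For an object, write the basis vectors straight into the model matrix columns. A sketch in C++/GLM (the math ports directly to Pyglet):

glm::vec3 look  = glm::normalize(cam.pos - billboard.pos);        // toward the camera
glm::vec3 right = glm::normalize(glm::cross(glm::vec3(0, 1, 0), look));
glm::vec3 up    = glm::cross(look, right);
glm::mat4 model(1.0f);
model[0] = glm::vec4(right, 0.0f);          // columns are the billboard's axes
model[1] = glm::vec4(up,    0.0f);
model[2] = glm::vec4(look,  0.0f);
model[3] = glm::vec4(billboard.pos, 1.0f);  // translation goes in the last column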
1 | Deferred shadow mapping Question What am i doing wrong in the CalcShadowFactor method? It looks like the depth check is not working correctly. Body I'm using deferred rendering in my engine and i have generated the following shadow map Which is draw by the following fragment void main() gl FragColor vec4(texture(gColorMap, TexCoord0).x) The position texture is generated like this WorldPos0 (gWorld vec4(Position, 1.0)).xyz Where gWorld is the transformation of the object being draw(translation rotation scale) Finally, in the light pass, i want to generate the shadow produced from this spotlight with the following code float CalcShadowFactor(vec3 WorldPos) vec4 ShadowCoord gLightWVP vec4(WorldPos,1) ShadowCoord ShadowCoord.w vec2 UVCoords UVCoords.x 0.5 ShadowCoord.x 0.5 UVCoords.y 0.5 ShadowCoord.y 0.5 float z 0.5 ShadowCoord.z 0.5 float Depth texture(gShadowMap, UVCoords).x if (Depth lt z 0.00001) return 0 else return 1.0 It returns a float which multiplicates the color output. I know, im just returning 0 to set the color to complete transparent, it is intensional. The variables used were gLightWVP It is the ProjectionView of the camare set at the light Projection LightCameraRotation LightCameraTranslation WorldPos It is the position got from the Position Texture vec3 WorldPos texture2D( gPositionMap, TextCoord0 ).xyz gShadowMap It is the shadow map texture sampler. This is what is displayed(notice how the spotlight looks at the first image) The engine passes are Shadow pass Render the scene for every light using a alternative camera which is placed in their position with their orientation. It generates the shadow maps for every light. Currently i just have one shadow map for that spot light. Render pass It renders all the scene using the main camera. It generates the position map, normal map, diffuse map and a specular map. Light pass It uses all the textures generated to place the illumination. Here is where the shadow map is being checked for generate the shadows. EDIT I'll also leave here the shadow map texture definition. glGenTextures(1, amp m shadowMap) glBindTexture(GL TEXTURE 2D, m shadowMap) glTexImage2D(GL TEXTURE 2D, 0, GL DEPTH COMPONENT32, WindowWidth, WindowHeight, 0, GL DEPTH COMPONENT, GL FLOAT, NULL) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE COMPARE MODE, GL NONE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glBindFramebuffer(GL FRAMEBUFFER, m fbo) glFramebufferTexture2D(GL FRAMEBUFFER, GL DEPTH ATTACHMENT, GL TEXTURE 2D, m shadowMap, 0) |
1 | How to spin a 2D quad in place using only matrices? In short, I have a textured 2D quad (a sprite). I would like to rotate spin it about the z axis (coming out of the screen) using nothing but matrices. If I do the following to a transform with scale in it already, the object spins, but also follows a circular path around the origin. As a test, if I first undo the scale, then this works fine (the object spins in place). But I'm not normally keeping scale, orientation and translation separate and then computing the matrix every frame in this case. I'm mutating the matrix all the time. So this isn't workable. Can this be done without first undoing the scale? self is the current transformation matrix (4x4) pivot is the "center" of the quad's AABB (recomputed immediately before this) eulerAngles is a vec3 containing 0.0, 0.0, z angle translate to the origin translate(self, pivot) rotate rotate(self, eulerAngles) go back translate(self, pivot) (I'm using OpenGL, but that probably doesn't matter much here) There are other related articles, but I didn't get to a solution. I'm linking for reference Combining rotation,scaling around a pivot with translation into a matrix How can I rotate about an arbitrary point in 3D (instead of the origin)? |
1 | How can I reduce a frustum to the subset that passes through a portal AABB? I'm trying to implement portal based occlusion culling There are sectors and portals. When a portal is visible, the sector it is connected to is rendered. The sector is made of polygons and portals. The sector is sealed by a portal to the next sector. The portal has an axis aligned bounding box. The view frustum has an array of mathematical planes in it. There is top, bottom, left, right, near and far. The view frustum planes change shape when in contact with a wall. When the view frustum planes are in contact with the portal's AABB, the sector get rendered. Then an array of mathematical planes are added behind the portal to create a reduced view frustum. Reduced frustums are made from other portals in contact with the view frustum until there are no more portals in view. How can I perform the bolded step, where I find a new collection of frustum planes representing the portion of the original frustum that continues through the portal's AABB? |
1 | FBX Importer Vertex Color I imported vertex positions, indices and normals successfully in OpenGL using fbx sdk, but I just can't figure out how to import vertex colors. I tried to fetch the pointer to array of colors trough mesh layer but it returns null. Can anyone help me with this one please? |
1 | Getting the Ray position from View and Projection Matrix I'm having some trouble calculating the direction and position of the ray from my matrices. I have tried some things such as private Vec3 getPick(Mat4 projection, Mat4 view) Mat4 inverseProject projection.copy().inverse() Mat4 pick inverseProject.mul(view.copy()) Vec3 pos new Vec3(pick.m10, pick.m11, pick.m12) return pos I am using custom math classes and I'm pretty sure they work. Also, would it be better to perform this math in my shader? For better performance etc..? I contacted Notch (creator of Minecraft) on this issue and he told me to inverse the view Matrix and multiply by the projection, then multiply by the mouse coordinates. I tried that, it didn't do much. EDIT Tried normalising the screen space. It seems to do the job except in negative directions. Plus its a little off. Vec3 getPick(double w, double h, double ar) double sx ((Mouse.getX() w 2.0) 1.0) ar double sw 1.0 (2 ((h Mouse.getY()) h)) Mat4 viewProjInv view.copy().mul(projection).inverse() Vec3 mouseNormal new Vec3(sx, sw, 1.0) Vec3 dir viewProjInv.mul(mouseNormal) return dir.mul(1) |
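What usually goes missing in this derivation is the perspective divide after multiplying by the inverse of (projection * view); picking a single matrix row does not give the ray. A sketch in C++/GLM, with the mouse first normalized to NDC:

glm::vec2 ndc(2.0f * mouseX / viewW - 1.0f, 1.0f - 2.0f * mouseY / viewH);
glm::mat4 invVP = glm::inverse(projection * view);
glm::vec4 nearP = invVP * glm::vec4(ndc, -1.0f, 1.0f);  // point on the near plane
glm::vec4 farP  = invVP * glm::vec4(ndc,  1.0f, 1.0f);  // point on the far plane
nearP /= nearP.w;                                       // the perspective divide
farP  /= farP.w;
glm::vec3 origin = glm::vec3(nearP);
glm::vec3 dir    = glm::normalize(glm::vec3(farP) - glm::vec3(nearP));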
1 | Font loads with artifacts I've recently come on to an error when loading fonts in my game, some letters show up perfect while others have artifacts around them. Here's an example of this While some letters are perfectly normal, others are being unstable. My options menu This is what my loader method looks like. (Yes i've tried other fonts) private static int loadTexture(BufferedImage image) int BYTES PER PIXEL 4 int pixels new int image.getWidth() image.getHeight() image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth()) ByteBuffer buffer BufferUtils.createByteBuffer(image.getWidth() image.getHeight() BYTES PER PIXEL) for(int y 0 y lt image.getHeight() y ) for(int x 0 x lt image.getWidth() x ) int pixel pixels y image.getWidth() x buffer.put((byte) ((pixel gt gt 16) amp 0xFF)) Red component buffer.put((byte) ((pixel gt gt 8) amp 0xFF)) Green component buffer.put((byte) (pixel amp 0xFF)) Blue component buffer.put((byte) ((pixel gt gt 24) amp 0xFF)) Alpha component. Only for RGBA buffer.flip() int textureID glGenTextures() glBindTexture(GL TEXTURE 2D, textureID) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL NEAREST) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA8, image.getWidth(), image.getHeight(), 0, GL RGBA, GL UNSIGNED BYTE, buffer) return textureID |
1 | Beginner question about vertex arrays in OpenGL Is there a special order in which vertices are entered into a vertex array? Currently I'm drawing single textures like this glBindTexture(GL TEXTURE 2D, texName) glVertexPointer(2, GL FLOAT, 0, vertices) glTexCoordPointer(2, GL FLOAT, 0, coordinates) glDrawArrays(GL TRIANGLE STRIP, 0, 4) where vertices has four "xy pairs". This is working fine. As a test I doubled the sizes of the vertices and coordinates arrays and changed the last line above to glDrawArrays(GL TRIANGLE STRIP, 0, 8) since vertices now contains eight "xy pairs". I do see two textures (the second is intentionally offset from the first). However the textures are now distorted. I've tried passing GL TRIANGLES to glDrawArrays instead of GL TRIANGLE STRIP but this doesn't work either. I'm so new to OpenGL that I thought it's best to just ask here ) Cheers! |
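On the distortion: GL_TRIANGLE_STRIP treats all eight vertices as one connected strip, so the second quad gets stitched to the first. Either issue one draw per quad, or bridge them with repeated ("degenerate") vertices in the array. A sketch of both options:

// Option 1: one call per quad.
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDrawArrays(GL_TRIANGLE_STRIP, 4, 4);
// Option 2: one call, with the bridge vertices duplicated in the array:
//   A0 A1 A2 A3  A3 B0  B0 B1 B2 B3   (A3 and B0 repeated -> zero-area triangles)
glDrawArrays(GL_TRIANGLE_STRIP, 0, 10);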
1 | How to 'point' an arrow in the direction that it's going? I am relatively new to OpenGL and 3D environments and am having trouble creating a rotation matrix that will rotate an arrow model so it 'points' in the direction that it is going. I am using GLM as a math library. My current approach is to predict the position the arrow will be at next frame and try to rotate the arrow to face that position. double nextVelY velocity.y (gravity (((glfwGetTime() dt) clock) 1000.0)) glm vec3 nextVelocity velocity nextVelocity.y pos.y 0.5 gt 3 ? nextVelY velocity.y glm vec3 lookat glm vec3((nextVelocity glm vec3(20) glm vec3(dt))) glm vec3 pos2 glm vec3(0) And then try to calculate angles between the current position and the predicted position model glm translate(model, pos) model glm scale(model, glm vec3(0.05)) float angleX atan(nextVelocity.y nextVelocity.z) float angleY atan(nextVelocity.x nextVelocity.z) float angleZ atan(nextVelocity.x nextVelocity.y) glm vec3 zAxis glm normalize(lookat pos2) glm vec3 xAxis glm normalize(glm cross(up, zAxis)) glm vec3 yAxis glm cross(zAxis, yAxis) model glm rotate(model, angleX, xAxis) model glm rotate(model, angleY, yAxis) model glm rotate(model, angleZ, zAxis) I'm not at all sure if this is a good way to try and achieve this. I've also tried using glm lookAt model glm transpose(glm lookAt(pos2, lookat, glm vec3(0, 1, 0))) and various other methods I've found with no success. |
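A compact alternative to assembling per-axis atan angles, which tend to fight each other near the poles: build the rotation from the velocity itself via glm::lookAt and invert it, since lookAt produces a view matrix. A sketch, assuming the arrow mesh points down -Z:

glm::vec3 dir = glm::normalize(nextVelocity);
glm::vec3 up  = glm::vec3(0, 1, 0);
if (glm::abs(glm::dot(dir, up)) > 0.99f) up = glm::vec3(1, 0, 0); // dir nearly vertical
glm::mat4 rot = glm::inverse(glm::lookAt(glm::vec3(0.0f), dir, up)); // rotation only
model = glm::translate(glm::mat4(1.0f), pos) * rot
      * glm::scale(glm::mat4(1.0f), glm::vec3(0.05f));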
1 | Understanding ticksPerSecond and duration with skeletal animations This is my first question here, so I hope I'm doing everything correctly. For several weeks I have been reading about skeletal animation, aiming to add a simple animation controller to my small game engine. After following a video tutorial by ThinMatrix, I successfully added skeletal animation to my engine (I used Assimp to load a .dae file from Blender with one animation in it). Once all the base work was finished, I started thinking about how to change the animation speed by a factor (like x2, x3, etc...) and here I found some problems. As far as I understand, every animation is measured by a duration field (in ticks) that, I suppose, should be at least something like 25 fps (to obtain some kind of smooth animation) times the animation's length in seconds. Then for every animation there is also another field called ticksPerSecond that (as the name says) is the number of ticks in every second. In my engine I have a data structure called Animator that contains an array of Animation objects, and for each one it has a ticks per second and duration array. The following code shows how I take the data from Assimp. animator->ticks_per_second[animation_index] = scene->mAnimations[animation_index]->mTicksPerSecond != 0 ? scene->mAnimations[animation_index]->mTicksPerSecond : 25.f animator->duration[animation_index] = scene->mAnimations[animation_index]->mDuration If I print the two variables I get this result DEBUG ticks per sec 1.000000 DEBUG total ticks 0.833333 So here is my question Why do these variables take on these values? I am trying to explain it to myself, and the idea I found is that if ticks is equal to 1 the animation would run at the same frame rate as the main loop but, in the end, I am not sure about it and I would appreciate your help.
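Those values are actually consistent: the exporter simply authored the clip on a one-tick-per-second timeline, so a duration of about 0.8333 ticks means the clip lasts about 0.83 seconds. Playback speed then falls out of the time conversion rather than the data. A sketch, assuming anim points at the aiAnimation:

double ticksPerSecond = (anim->mTicksPerSecond != 0.0) ? anim->mTicksPerSecond : 25.0;
double speedFactor    = 2.0;                                 // x2 playback
double timeInTicks    = elapsedSeconds * ticksPerSecond * speedFactor;
double animationTime  = fmod(timeInTicks, anim->mDuration);  // wrap for looping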
1 | OpenGL object load in reverse I am trying to load a model and it is loading in reverse. When I try to rotate it 180 degrees it changes the lighting as well. I am not sure what I need to do to change the direction the model is facing when it is being loaded. This is the object loader if (!submarineShader->load("BasicView", "glslfiles basicTransformations.vert", "glslfiles basicTransformations.frag")) cout << "failed to load shader" << endl glUseProgram(submarineShader->handle()) use the shader glEnable(GL TEXTURE 2D) cout << " loading model " << endl if (objLoader.loadModel("submarine submarine v2 submarine5.obj", model)) returns true if the model is loaded, puts the model in the model parameter cout << " model loaded " << endl model.calcVertNormalsUsingOctree() model.initDrawElements() model.initVBO(submarineShader) model.deleteVertexFaceData() else cout << " model failed to load " << endl
1 | shadow mapping and linear depth I'm implementing ominidirectional shadow mapping for point lights. I want to use a linear depth which will be stored in the color textures (cube map). A program will contain two filtering techniques software pcf (because hardware pcf works only with depth textures) and variance shadow mapping. I found two ways of storing linear depth const float linearDepthConstant 1.0 (zFar zNear) first float moment1 viewSpace.z linearDepthConstant float moment2 moment1 moment1 outColor vec2(moment1, moment2) second float moment1 length(viewSpace) linearDepthConstant float moment2 moment1 moment1 outColor vec2(moment1, moment2) What are differences between them ? Are both ways correct ? For the standard shadow mapping with software pcf a shadow test will depend on the linear depth format. What about variance shadow mapping ? I implemented omnidirectional shadow mapping for points light using a non linear depth and hardware pcf. In that case a shadow test looks like this vec3 lightToPixel worldSpacePos worldSpaceLightPos vec3 aPos abs(lightToPixel) float fZ max(aPos.x, max(aPos.y, aPos.z)) vec4 clip pLightProjection vec4(0.0, 0.0, fZ, 1.0) float depth (clip.z clip.w) 0.5 0.5 float shadow texture(ShadowMapCube, vec4(normalize(lightToPixel), depth)) I also implemented standard shadow mapping without pcf which using second format of linear depth (Edit 1 i.e. distance to the light some offset to fix shadow acne) vec3 lightToPixel worldSpacePos worldSpaceLightPos const float linearDepthConstant 1.0 (zFar zNear) float fZ length(lightToPixel) linearDepthConstant float depth texture(ShadowMapCube, normalize(lightToPixel)).x if(depth lt fZ) shadow 0.0 else shadow 1.0 but I have no idea how to do that for the first format of linear depth. Is it possible ? Edit 2 For non linear depth I used glPolygonOffset to fix shadow acne. For linear depth and distance to the light some offset should be add in the shader. I'm trying to implement standard shadow mapping without pcf using a linear depth ( viewSpace.z linearDepthConstant offset) but following shadow test doesn't produce correct results vec3 lightToPixel worldSpacePos worldSpaceLightPos vec3 aPos abs(lightToPixel) float fZ max(aPos.x, max(aPos.y, aPos.z)) vec4 clip pLightProjection vec4(0.0, 0.0, fZ, 1.0) float fDepth (clip.z clip.w) 0.5 0.5 float depth texture(ShadowMapCube, normalize(lightToPixel)).x if(depth lt fDepth) shadow 0.0 else shadow 1.0 How to fix that ? |
1 | Can we flip znear and zfar so that positive z values increase away from the viewer?

I am confused about the projection matrix in OpenGL. I have a habit of writing the code as follows:

    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    if (w <= h)
        glOrtho(0.0f, 250.0f, 0.0f, 250.0f * h / w, -1.0, 1.0);
    else
        glOrtho(0.0f, 250.0f * w / h, 0.0f, 250.0f, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

Simple: this code sets up a viewing/clipping region that maintains equal width and height. A negative z value points away from the viewer, and the z range extends from -1 to 1; anything that lies outside this range won't be visible to the user. But I have seen code like this:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(left, right, bottom, top, 1.0, 500.0); // how????
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -250.0f); // ?????
    glutSolidSphere(15.0f, 15, 15); // how is this visible on the screen?

Please help me understand this code. From my point of view the z values are totally flipped: znear is 1.0, which is okay, but zfar is 500, and following that code there is a translation of the z value to -250, after which the sphere is drawn and is somehow visible on the screen. How? My question is: can we flip znear and zfar so that positive z values go away from the viewer?
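A short sketch of the convention at work (assumed values, not from the snippet above): glOrtho's near and far parameters are distances measured along the negative z axis in eye space, so positive near/far values select negative eye-space z, and negating them flips the selected range.

    // near=1, far=500 -> visible eye-space z is [-500, -1],
    // so an object translated to z = -250 sits inside the volume.
    glOrtho(-100.0, 100.0, -100.0, 100.0, 1.0, 500.0);
    glTranslatef(0.0f, 0.0f, -250.0f);

    // Negating near/far flips the range: visible eye-space z becomes [1, 500],
    // i.e. positive z now increases away from the viewer.
    glOrtho(-100.0, 100.0, -100.0, 100.0, -1.0, -500.0);
    glTranslatef(0.0f, 0.0f, 250.0f);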
1 | Restart a 2D game OpenGL, GLUT

I started learning OpenGL and GLUT by making a snake game. The problem I encountered is that when I press "new game" in the menu, the window has to be resized for the window's contents to be updated. From what I have read, it's because of the main loop that's waiting for an event, but I don't know how to fix it. I've tried glutPostRedisplay, but it doesn't change anything; maybe I'm placing it wrong. Here is the code associated with restarting the game:

    int main( int argc, char** argv )
    {
        glutInit( &argc, argv );
        glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB );
        glutInitWindowSize( 800, 600 );
        glutInitWindowPosition( 100, 100 );
        glutCreateWindow( "Aarghhh! O ramaaa !" );
        createMenu();
        glClearColor( 0.0, 0.0, 0.0, 0.0 );
        init();
        glutReshapeFunc( reshape );
        glutDisplayFunc( dreptunghi );
        glutSpecialFunc( player );
        glutMainLoop();
        return 0;
    }

    void init( void )
    {
        glClearColor( 1.0, 1.0, 1.0, 0.0 );
        glMatrixMode( GL_PROJECTION );
        gluOrtho2D( 0.0, 800.0, 0.0, 600.0 );
        glShadeModel( GL_FLAT );
    }

    void menu( int num )
    {
        if ( num == 0 )
        {
            exit( 0 );
        }
        else if ( num == 1 )
        {
            menu_value = num;
            snake.clear();
            i = 30.0;
            j = 30.0;
            alpha = 1.0;
            value = 1;
            speed = 3;
            eaten = true;
            collided_food = false;
            collided_self = false;
            createMenu();
            glClearColor( 0.0, 0.0, 0.0, 0.0 );
            init();
            glutDisplayFunc( dreptunghi );
            glutReshapeFunc( reshape );
            glutSpecialFunc( player );
            glutSwapBuffers();
            glutPostRedisplay();
        }
    }

    void createMenu( void )
    {
        glutCreateMenu( menu );
        glutAddMenuEntry( "New game!", 1 );
        glutAddMenuEntry( "Exit", 0 );
        glutAttachMenu( GLUT_RIGHT_BUTTON );
    }

I have also tried writing a function for displaying the menu, called via glutIdleFunc, but that didn't solve anything either. I have run out of ideas.
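A hedged observation with a sketch: calling init() again multiplies a second ortho matrix onto the current GL_PROJECTION matrix (gluOrtho2D does not reset it), and the display/reshape/special callbacks only need to be registered once in main(). A leaner restart handler under those assumptions could look like this (state names taken from the post):

    void restartGame( void )
    {
        // reset game state
        snake.clear();
        i = 30.0;
        j = 30.0;
        alpha = 1.0;
        value = 1;
        speed = 3;
        eaten = true;
        collided_food = false;
        collided_self = false;

        // rebuild the projection from identity instead of accumulating it
        glMatrixMode( GL_PROJECTION );
        glLoadIdentity();
        gluOrtho2D( 0.0, 800.0, 0.0, 600.0 );
        glMatrixMode( GL_MODELVIEW );

        glutPostRedisplay(); // mark the window for redisplay from the menu callback
    }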
1 | OpenGL error: the loaded object takes the same colors and style as the texture?

I'm new to OpenGL and I'm facing this problem. Draw function:

    void Renderer::Draw()
    {
        glUseProgram(programID);
        shader.UseProgram();

        mat4 view = mat4(mat3(myCamera->GetViewMatrix()));
        glm::mat4 VP = myCamera->GetProjectionMatrix() * myCamera->GetViewMatrix();
        shader.BindVPMatrix(&VP[0][0]);
        glm::mat4 VP2 = myCamera->GetProjectionMatrix() * myCamera->GetViewMatrix() * floorM;

        model13D->Render(&shader, scale(100.0f, 100.0f, 100.0f)); // scaling the skybox
        t2->Bind();
        model3D->Render(&shader, scale(2.0f, 2.0f, 2.0f)); // scaling the aircraft

        glUniformMatrix4fv(VPID, 1, GL_FALSE, &VP2[0][0]);
        mySquare->Draw();
    }

The loading code:

    shader.LoadProgram();
    model3D = new Model3D();
    model3D->LoadFromFile("data/models/obj/Galaxy/galaxy.obj", true);
    model3D->Initialize();
    myCamera->SetPerspectiveProjection(90.0f, 4.0f / 3.0f, 0.1f, 10000000.0f);

    model13D = new Model3D();
    model13D->LoadFromFile("data/models/obj/skybox/Skybox.obj", true);
    model13D->Initialize();

    // Projection matrix
    shader.LoadProgram();

    // View matrix
    myCamera->Reset(
        0.0f, 0.0f, 5.0f,  // Camera Position
        0.0f, 0.0f, 0.0f,  // Look at Point
        0.0f, 1.0f, 0.0f   // Up Vector
    );

    std::string Images_names[6];
    Images_names[0] = "right.png";
    Images_names[1] = "left.png";
    Images_names[2] = "top.png";
    Images_names[3] = "bottom.png";
    Images_names[4] = "back.png";
    Images_names[5] = "front.png";
    t = new Texture(Images_names, 0);
    t2 = new Texture("arrakisday_dn.tga", 1);
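A hedged guess at the cause, with a sketch: if every draw goes through the same shader and the sampler uniform is never repointed, both models sample whatever texture unit was bound last, so one model picks up the other's image. The uniform name uTexSampler and the texture-id variables below are assumptions, not from the code above:

    // Point the sampler at the right unit before each draw.
    GLint uTexSampler = glGetUniformLocation(programID, "uTexSampler");

    glActiveTexture(GL_TEXTURE0);               // unit 0: skybox texture
    glBindTexture(GL_TEXTURE_2D, skyboxTexId);
    glUniform1i(uTexSampler, 0);
    model13D->Render(&shader, scale(100.0f, 100.0f, 100.0f));

    glActiveTexture(GL_TEXTURE1);               // unit 1: aircraft texture
    glBindTexture(GL_TEXTURE_2D, aircraftTexId);
    glUniform1i(uTexSampler, 1);
    model3D->Render(&shader, scale(2.0f, 2.0f, 2.0f));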
1 | How to implement a HUD

I'm wondering how one could implement a HUD in LWJGL. I've seen tutorials on this, but they don't seem to work. I know the basic structure goes like this:

    init3d();
    // ... 3D code ...
    init2d();
    // ... HUD ...

where init3d and init2d are the GL initialization routines. Also, how would you draw the images for the HUD (or should I ask that in a separate question)? If this is too vague, let me know and I'll update the question.
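A minimal fixed-function sketch of the init2d step (assumed 800x600 window; LWJGL's GL11 class mirrors these calls one-to-one, e.g. GL11.glMatrixMode(GL11.GL_PROJECTION)). The idea is to push an orthographic projection, draw the HUD in pixel coordinates with depth testing off, then restore the 3D matrices:

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, 800, 600, 0, -1, 1);   // 2D pixel coordinates, y pointing down
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);         // HUD draws on top of the 3D scene

    // ... draw textured quads for the HUD here ...

    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();

For the images themselves, the usual approach is to load each image into a texture and draw it as a textured quad inside this block.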