How could RTT be so slow on my Intel card? I simply draw something into a render target and use it as a normal texture. It has always worked great for me on my NVIDIA video card, but today I found my program runs terribly slow (less than 5 fps) on an Intel card. After profiling I found that glGenerateMipmapEXT is the troublemaker; it costs most of the CPU time. Here's how I bind an RT texture: glBindTexture(GL_TEXTURE_2D, m_textureID); glGenerateMipmapEXT(GL_TEXTURE_2D); Without glGenerateMipmapEXT the texture is nothing but a pure white picture. Is something wrong with my RTT?
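One likely explanation, offered as a hedged sketch rather than a certain diagnosis: the default minification filter expects mipmaps, so without generating them the texture is "incomplete" (which often shows up as plain white), and generating mipmaps every frame is expensive on some Intel drivers. If the render target does not actually need mipmaps, switching the filter avoids the call entirely. `m_textureID` is taken from the question; the rest is standard GL state.

```cpp
#include <GL/glew.h>

// Make the render-target texture usable without mipmaps, so the per-frame
// glGenerateMipmapEXT call (the hot spot on the Intel driver) can be dropped.
void configureRenderTargetTexture(GLuint m_textureID)
{
    glBindTexture(GL_TEXTURE_2D, m_textureID);
    // The default GL_NEAREST_MIPMAP_LINEAR min filter makes a mipmap-less texture
    // incomplete; GL_LINEAR samples only the base level and needs no mipmaps.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
```

If mipmaps are genuinely needed, generating them only when the render-target contents change, rather than every time the texture is bound, is the usual compromise.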
OpenGL: How to render a "shadow" for an object that's behind another? First off, an image from Fez that depicts the effect I'm after. I'm trying to achieve a similar effect in my project. I'm quite certain this is done with a stencil buffer, but resources on this effect are scarce. How should I approach it? I'm guessing I'm after a sort of "AND" stencil, where I'd render a semi-transparent black rectangle over the player, but only on the pixels covered by both the player and a map object. I'm not sure if it has any effect on this (besides maybe inverting the stencil operations?), but for other reasons my pipeline renders the world first and the player last.
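A minimal sketch of one way to do the stencil pass described here, assuming a stencil buffer is attached and the world has already been drawn; the two draw helpers are placeholders for whatever the existing pipeline uses, not real functions.

```cpp
#include <GL/glew.h>

void drawPlayerSilhouettePass()
{
    // 1) While drawing the occluding map objects, write 1 into the stencil buffer.
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    // drawOccludingMapObjects();   // placeholder: map geometry that can hide the player

    // 2) Draw the player last (matching the pipeline order in the question), but only
    //    where the stencil was marked, as a dark translucent overlay: the silhouette.
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // drawPlayerTinted(0.0f, 0.0f, 0.0f, 0.5f);   // placeholder: player sprite, black at 50% alpha
    glDisable(GL_STENCIL_TEST);
}
```

The regular, depth-tested player draw still happens as before; this extra pass only adds the tinted silhouette where the occluders won the depth test.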
Loading a multi-texture .3ds in C++: I have a question about loading .3ds files using this tutorial. I want to use more than one texture on the model (because here all the models have more than one), but it seems that this library can't do that. Do you know any alternatives, or a way to edit this existing library to achieve my goal?
OpenGL approach to depth testing a large buffer of fragments: I have a depth buffer and a color buffer created by another effect. The buffer is more than double my screen size. It is not rendered from geometry, but as a whole it resembles a scene rendered with an ortho projection. I can't generate this data with any other projection. I'm trying to redraw the scene in two-point perspective. This means each fragment from the orthographic data will be skewed and translated based on its depth value. If I render a fullscreen quad, each fragment will not know how or where to sample from the original buffer. It seems like the sources have to assign themselves to destinations, rather than a destination gathering sources. Impractical or incomplete solutions: Generate a tri or point for every pixel in the source buffer, and actually rasterize them with depth testing. This is really slow, but it could be possible to generate the vertices in a compute shader or something? We're still dealing with millions of tris for an average screen, so I'm skeptical that this is the real solution. Something with an SSBO and atomicMax. It seems like you can only compare your actual data, so I would have to pack my colors with depth values somehow. This seems like a fairly fundamental operation, so I'm hoping there is some canonical technique or OpenGL extension that does this sort of thing for me. Even a name for this operation might help me find existing literature. The final solution must be realtime for typical screen sizes. C++/OpenGL.
Acceptable memory size consumed by mobile game in 2017? How do you think, what is the acceptable memory size consumed by mobile game in 2017, taking into account current state of mobile devices hardware performance? I've made a pack of optimizations using a number of recommendations and managed to reduce memory consumption from 270Mb to 120Mb, but it still seems to me too much. Anyway, my game looks pretty nice, smooth and fast on iPhone 5s (produced in 2013), also I run it on cheapest no name android, and it still looks quite good. So my question can be rephrased this way is there anything criminal if the game eats about 200Mb of RAM in our reality when 1Gb of ram is not something incredible for the mobile device?
Modern OpenGL, 2D only, should I be using uniforms or VBOs for sprite transformation? I'm new to OpenGL, I'm currently building a 2D game engine. Right now I'm only using one shader as I only draw textured quads (basically sprites). The thing is... I don't know if should I be using uniforms or VBOs with glBufferSubData. Edit My concern is efficiency, more specifically CPU overhead. Edit2 I want to make the best out of OpenGL and the GPU, and to have as little CPU overhead as possible. The way I have it setup right now is that I create a single VBO, VAO and EBO. This is my VBO (vec2 for position center of the screen and another vec2 for texture coordinates, times 4) 0.5f, 0.5f, 0.0f, 0.0f, 0.5f, 0.5f, 1.0f, 0.0f, 0.5f, 0.5f, 1.0f, 1.0f, 0.5f, 0.5f, 0.0f, 1.0f And this is the EBO which I use to draw the quad with glDrawElements with the above VBO 0, 1, 2, 2, 3, 0 This is all static and I currently use uniforms for transformation (position, camera, rotation, etc.) and color. So every frame, for every sprite, I update the uniforms with glUniformMatrix4fv and glUniform2fv and I draw the quad (I batch sprites with identical transformation and color attributes, to minimize the amount of these calls). And I have no idea if this is bad or not. I have no prior experience in OpenGL. I know the VAO will always be the same (as I only render textured quads with a single shader), but should I not use uniforms and instead create a VBO for each sprite object and update the needed data in the VBO with glBufferSubData? P.S. If both ways are not good and there's a better way to do this please let me know. Update (kind of an answer) Using the method above made it impossible to use spritesheets as the UV was static. I didn't think of that when I began working on it. So what I did instead was implementing a completely different approach that I came across last week by reading some books. I'm still using a single shader, a single VBO, a single VAO and a single EBO. The VBO consists of positions, UVs and colors. Except that the VBO and EBO are large (predefined MaxVertices variable determines how large). I initially allocate memory with glBufferData (only once). To the VBO with GL DYNAMIC DRAW and to the EBO with GL STATIC DRAW. I upload the data to the EBO once, which is just static indices to the VBO (using 4 vertices to draw a quad by drawing them with that EBO and glDrawElements, same method as before). I have a Begin(), Draw(...) and End() calls, which are similar to a typical sprite batch class. Whenever the amount of vetices exceeds the predefined MaxVertices variable, or when End() is called, the vertices that were batched are actually uploaded (with glBufferSubData to avoid memory reallocation) and drawn. For now, I found this to be the most efficient way to do what I want. If batched correctly, each batch will have the least amount of CPU and GPU overhead by "sacrificing" (more like utilizing) GPU memory. I am still not sure if this is the most optimal way to reduce overhead, but so far it's a big improvement over what I've previously done. And it makes it so I can use spritesheets which I completely forgot about P I would appreciate any input on this implementation. )
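A minimal sketch of the flush step described in the update above, with the common "orphaning" trick added to reduce CPU/GPU synchronization stalls. The `Vertex` layout, `maxVertices`, and the buffer handles are assumptions standing in for the question's own names.

```cpp
#include <vector>
#include <GL/glew.h>

struct Vertex { float x, y, u, v, r, g, b, a; };   // position, UV, color (assumed layout)

// Upload one batch of sprite vertices and issue a single draw call.
// vao/vbo are the single pre-created objects; maxVertices matches the size
// passed to glBufferData at init time; the EBO is already bound to the VAO.
void flushBatch(GLuint vao, GLuint vbo, const std::vector<Vertex>& verts, size_t maxVertices)
{
    if (verts.empty()) return;

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Optional: orphan the old storage so the driver can avoid stalling if the GPU
    // is still reading the previous frame's data.
    glBufferData(GL_ARRAY_BUFFER, maxVertices * sizeof(Vertex), nullptr, GL_DYNAMIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, verts.size() * sizeof(Vertex), verts.data());

    glBindVertexArray(vao);
    // 6 indices per quad, 4 vertices per quad, matching the pre-filled static EBO.
    glDrawElements(GL_TRIANGLES, static_cast<GLsizei>((verts.size() / 4) * 6),
                   GL_UNSIGNED_INT, nullptr);
}
```

Whether per-sprite uniforms or batched vertex data wins depends mostly on how many sprites share state; the batched approach above trades a little CPU work and GPU memory for far fewer uniform updates and draw calls.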
C , OpenGL Building a polyhedron via geometry shader I'm stuck with geometry shaders in OpenGL c programming. I want to create simple cube by repeating 6 times drawing one rotated wall. Here is my vertex shader (everyting has version 330 core in preamble) uniform mat4 MVP uniform mat4 ROT layout(location 0) in vec3 vertPos void main() vec4 pos (MVP ROT vec4(vertPos,1.5)) gl Position pos Now geometry shader layout (triangles) in layout (triangle strip, max vertices 6) out out vec4 pos void main(void) for (int i 0 i lt 3 i ) vec4 offset vec4(i 2.,0,0,0) gl Position gl in i .gl Position offset EmitVertex() EndPrimitive() And now fragment shader uniform mat4 MVP in vec4 pos out vec3 color void main() vec3 light (MVP vec4(0,0,0,1)).xyz vec3 dd pos.xyz light float cosTheta length(dd) length(dd) color vec3(1,0,0) Well, there is some junk, I wanted also put shading into my cube, but I've got a problem with sending coordinates. The main problem is here I get my scaled square (by MVP matrix), I can even rotate it with basic interface (ROT matrix), but when I uncomment my " offset" line I get some mess. What should I do to make clean 6 times repeating?
Why does Assimp appear to be loading vertices in the incorrect order in this code? I'm getting a weird bug in my code that reads the vertices from an .obj file. The .obj file is just a cube, hence there are eight vertices. When I print out the vector it seems to work perfectly up until the 5th vertex, and then it seems to read the last 3 backwards. This is the file:

    # Blender v2.69 (sub 0) OBJ File: ''
    # www.blender.org
    mtllib cube.mtl
    o Cube
    v 2.000000 2.000000 2.000000
    v 3.000000 3.000000 3.000000
    v 4.000000 4.000000 4.000000
    v 5.000000 5.000000 5.000000
    v 6.000000 6.000000 6.000000
    v 7.999999 7.000000 7.000001
    v 8.000000 8.000000 8.000000
    v 9.000000 9.000000 9.000000
    usemtl Material
    s off
    f 1 2 3 4
    f 5 8 7 6
    f 1 5 6 2
    f 2 6 7 3
    f 3 7 8 4
    f 5 1 4 8

This is my code:

    m->vertices = new glm::vec3[m->numVertices];
    for (GLuint i = 0; i < m->numVertices; i++)
    {
        const aiVector3D* pPos = &(mesh->mVertices[i]);
        m->vertices[i] = glm::vec3(pPos->x, pPos->y, pPos->z);
        std::cout << "X " << m->vertices[i].operator[](0)
                  << " Y " << m->vertices[i].operator[](1)
                  << " Z " << m->vertices[i].operator[](2) << std::endl;
    }

However, this is what is stored in the vector (what the above code prints out):

    X 2 Y 2 Z 2
    X 3 Y 3 Z 3
    X 4 Y 4 Z 4
    X 5 Y 5 Z 5
    X 6 Y 6 Z 6
    X 9 Y 9 Z 9
    X 8 Y 8 Z 8
    X 8 Y 7 Z 7

As you can see, the last three lines are messed up, and I have no idea how this is happening in the loop.
What advantage do OpenGL, SFML and SDL have over software rendering? I started watching the Handmade Hero stream, where Casey Muratori creates a game engine without using frameworks or such. Yesterday I got to the part where he showed how an image is drawn onto the screen. As far as I understood it he just allocated some memory as big as the size of the screen he wants to draw to. And then he created a bitmap which he passed to the buffer memory he allocated and drew it to the screen using a os specific function. This seems quite straight forward. I used GameMaker, changed to Love2D, worked a little bit with Sprite Kit but I was always wondering what was really happening beneath this sometimes confusing layers. Given that, why even bother using graphics libraries (OpenGL, SFML, SDL, ) when all you have to do is simply allocate some buffer, pass a bitmap and draw it to the screen? If you then want to draw distinct things to you screen you just write them to your bitmap which then gets passed into the buffer. I'm quite new to programming, but this seems quite simple to me. Please correct me if I'm wrong.
Why use sprite tile maps on the GPU in WebGL? I'm trying to figure out the best way of rendering my layered tiled maps with WebGL, and have come across this tutorial several times https blog.tojicode.com 2012 07 sprite tile maps on gpu.html Someone even created a library for it here https github.com englercj gl tiled The gist of this quot GPU abuse quot seems to be generate a texture where each pixel as a lookup table into the tileset spritesheet. That is, the red component refers to the x coordinate and the green component refers to the y coordinate. Additionally, in the library, the blue component refers to the corresponding tileset to use. This seems to accomplish exactly what I am attempting to do, but I am confused on two parts. For one, the author describes this is a neat little trick and quot GPU abuse quot . I feel as if the implication there is, quot this is a fun little exercise, but don't actually use this for your game or any serious code. quot Is this the case, or is this actually a potentially serious technique that I could use to power my map rendering? Secondly, I'm somewhat confused why this is even necessary. I am still learning WebGL, so I might mess up a few terms here, but couldn't this same exact thing be accomplished in a more straight forward manner? Why go through all these steps of creating a custom generated texture of your map, when you can just create a (static?) Vertex Buffer Object, load it with all your tiles, and then just render them? I'm not really understanding what exactly this technique buys as that can't be accomplished in the same way by just creating a large buffer that just contains all our tile data, and then just rendering it like that. Isn't that conceptually simpler, arguably easier to implement, more straight forward, and doesn't have weird restrictions like maximum tile count or width? Basically, I'm having trouble understanding what advantages this has over traditional rendering where you just load up a buffer with tiles. If there are some advantages, should this technique be seriously used in a game?
Developing a game using OpenGL. Hello everyone, I want to create a game using OpenGL to learn the basics of game development. I know C and OOP and can manage the coding. I have tried the free engines on the market like Unreal Engine, Unity and GameMaker. Unreal Engine in particular doesn't work well on my system due to its higher requirements, and all the other engines add some kind of abstraction of their own. I have time to spare, so please suggest a starting point for developing with OpenGL, and also the math required. All I want is to learn it thoroughly at the lower level. Thanks in advance.
Inverse of the perspective matrix: what is it for? I don't understand what I need the inverse of the perspective matrix for in computer graphics, or how to calculate it. Maybe someone has an explanation for me.
How to convert OpenGL 2.0 code to OpenGL ES 2.0? I have a 3D game library that uses OpenGL 2.0 for PC and I need to convert it to OpenGL ES 2.0 to compile it for Android. Because the library is huge, I'd like to avoid converting it line by line by hand. Is there some faster way I can convert desktop OpenGL to OpenGL ES source code, like a wrapper, or maybe some layer running on Android that converts desktop OpenGL to ES at runtime?
How many matrices should I use for OpenGL transformations? As I start to get a somewhat deeper (or at least slightly better) understanding of how camera transformations work, I am curious about one thing: how many matrices should I use? Until now I have been working with a simple 2D ortho projection and world positions (not in the range -1.0 to 1.0), so I just created one matrix at the start of the program and that was it. But when I wanted to move to 3D (to get the same effect of not using NDC) I noticed in tutorials that they use a lot of matrices (for example, they create a matrix for every in-game object for translation and then send it to the GPU). This confuses me, because I thought I would use only 3 matrices (one for each transform step). Can you explain how this is done in professional games? Do I need to load a new model matrix for every object (and send it to the vertex shader)? Or change the whole MVP matrix in the shader for every object being rendered? Is that still sufficient? Thanks. Edit, about efficiency: is it efficient to change a shader variable that often? I want to reuse the batch system from my old code, which looks like this now: 1. bind texture/VAO; 2. render object 1 through object n via glDrawArrays/Elements/Instanced. But now it would be: 1. bind texture/VAO; 2. load the model matrix of object 1 (via glUniformMatrix4fv) and render it, then load another model matrix for object 2, and the same for every object in the batch. Isn't it inefficient to change a shader variable that much?
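A sketch of the pattern most tutorials mean: one projection and one view matrix set once per frame, and one model matrix uniform updated per object. The uniform names `uProj`, `uView` and `uModel` are assumptions, not taken from the question.

```cpp
#include <vector>
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

struct Object { glm::mat4 model; GLuint vao; GLsizei indexCount; };

void drawAll(GLuint program, const glm::mat4& proj, const glm::mat4& view,
             const std::vector<Object>& objects)
{
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uProj"), 1, GL_FALSE, glm::value_ptr(proj));
    glUniformMatrix4fv(glGetUniformLocation(program, "uView"), 1, GL_FALSE, glm::value_ptr(view));

    GLint modelLoc = glGetUniformLocation(program, "uModel");   // cache once, reuse per object
    for (const Object& obj : objects) {
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(obj.model));
        glBindVertexArray(obj.vao);
        glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}
```

One glUniformMatrix4fv per object is normally cheap compared to the draw call itself; when it does become a bottleneck, instancing or uniform buffer objects are the usual next step rather than avoiding per-object matrices altogether.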
Rendering without VAO's VBO's? I am trying to port a demo I found on PositionBasedDynamics . It has a generic function which does the rendering and on their example works but they don't generate bind any Vertex Array Object or Vertex Buffer Object even though they use Core OpenGL and shaders. The function is this template lt class PositionData gt void Visualization drawTexturedMesh(const PositionData amp pd, const IndexedFaceMesh amp mesh, const unsigned int offset, const float const color, GLuint text) draw mesh const unsigned int faces mesh.getFaces().data() const unsigned int nFaces mesh.numFaces() const Vector3r vertexNormals mesh.getVertexNormals().data() const Vector2r uvs mesh.getUVs().data() std cout lt lt nFaces lt lt std endl glBindTexture(GL TEXTURE 2D, text) glEnableVertexAttribArray(0) glVertexAttribPointer(0, 3, GL REAL, GL FALSE, 0, amp pd.getPosition(offset) 0 ) glEnableVertexAttribArray(1) glVertexAttribPointer(1, 2, GL REAL, GL FALSE, 0, amp uvs 0 0 ) glEnableVertexAttribArray(2) glVertexAttribPointer(2, 3, GL REAL, GL FALSE, 0, amp vertexNormals 0 0 ) glDrawElements(GL TRIANGLES, (GLsizei)3 mesh.numFaces(), GL UNSIGNED INT, mesh.getFaces().data()) glDisableVertexAttribArray(0) glDisableVertexAttribArray(1) glDisableVertexAttribArray(2) glBindTexture(GL TEXTURE 2D, 0) I did the same thing and it render's a white screen . I used RenderDoc to check what's going on and it show's these https puu.sh rM1wB 80620ade8d.png . How can they get it to work while i can't?
How can I do ambient occlusion for simple meshes with few vertices? I want to do ambient occlusion for some simple buildings. Since simple buildings have very few vertices, how can I get good results? Is it possible with a per-vertex AO implementation?
OpenGL texture2d image sampling issue. Strange artifacts in texture I have an issue when using textures in OpenGL, strange artifacts occur where geometry overlaps, but not always. Video Reference. I am using a GL TEXTURE 2D with GL ARB image load store to make a custom depth test shader that stores material data for opaque and transparent geometry. The video given shows the artifacts occur where the support structure for a table is occluded behind the top of the table, but strangely, not occurring where the base of the table is occluded by the support. version 450 core in VS OUT vec3 Position vec3 Normal vec2 TexCoords mat3 TanBitanNorm fs in Material data uniform sampler2D uAlbedoMap uniform sampler2D uNormalMap uniform sampler2D uMetallicMap Material info out layout(rgba16f) coherent uniform image2D uAlbedoDepthOpaque layout(rgba16f) coherent uniform image2D uNormalMetallicOpaque layout(rgba16f) coherent uniform image2D uAlbedoDepthTransparent layout(rgba16f) coherent uniform image2D uNormalAlphaTransparent Depth info in out layout(r8) uniform image2D uDepthBufferOpaque layout(r8) uniform image2D uDepthBufferTransparent void main() vec3 n tex texture(uNormalMap, fs in.TexCoords).xyz n tex n tex 2.0f 1.0f ivec2 tx loc ivec2(gl FragCoord.xy) const float opaque depth imageLoad(uDepthBufferOpaque, tx loc).r Stored depth of opaque const float trans depth imageLoad(uDepthBufferTransparent, tx loc).r Stored depth of transparent Depth processing if (gl FragCoord.z gt opaque depth) bool tran false if (trans depth gt opaque depth) tran trans depth gt gl FragCoord.z else tran true Transparent if (texture(uAlbedoMap, fs in.TexCoords).a lt 1.0f amp amp tran) imageStore(uDepthBufferTransparent, tx loc, vec4(gl FragCoord.z)) imageStore(uAlbedoDepthTransparent, tx loc, vec4(texture(uAlbedoMap, fs in.TexCoords).rgb, gl FragCoord.z)) imageStore(uNormalAlphaTransparent, tx loc, vec4(abs(length(n tex) 1.0f) gt 0.1f ? fs in.Normal normalize(fs in.TanBitanNorm n tex), texture(uAlbedoMap, fs in.TexCoords).a)) Opaque else imageStore(uDepthBufferOpaque, tx loc, vec4(gl FragCoord.z)) imageStore(uAlbedoDepthOpaque, tx loc, vec4(texture(uAlbedoMap, fs in.TexCoords).rgb, gl FragCoord.z)) imageStore(uNormalMetallicOpaque, tx loc, vec4(abs(length(n tex) 1.0f) gt 0.1f ? fs in.Normal normalize(fs in.TanBitanNorm n tex), texture(uMetallicMap, fs in.TexCoords).r)) if (opaque depth 0.0f) imageStore(uDepthBufferOpaque, tx loc, vec4(0.125f)) else imageStore(uDepthBufferOpaque, tx loc, vec4(0.125f opaque depth)) Render with overlapping geometry shows that artifacts still occur outside of reading from the texture. Also in the video, I move the camera back and forth (with orthographic projection) and the artifacts become brighter and darker. Render with overlapping geometry w out depth processing shows that the brighter darker values were from the depth test. Any ideas on why this occurs, and how can I fix it?
Why do we use 4x4 matrices to transform things in 3D? To translate a vector by 10 units in the X direction, why do we have to use a matrix? We can just add 10 to mat[0][0] and we get the same result too.
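A short worked example of the usual justification, using GLM: a single translation can indeed be done by adding to a component, but the 4x4 homogeneous form lets translation, rotation and scale all be composed into one matrix and applied with a single multiply per vertex (the w = 1 component is what makes the translation column take effect).

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void example()
{
    glm::vec4 p(1.0f, 2.0f, 3.0f, 1.0f);

    // For one isolated translation, adding works fine:
    glm::vec4 moved = p + glm::vec4(10.0f, 0.0f, 0.0f, 0.0f);

    // But a 4x4 matrix lets translate, rotate and scale be combined once on the CPU
    // and applied to every vertex with a single multiply on the GPU.
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f))
                    * glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f))
                    * glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
    glm::vec4 transformed = model * p;   // one operation, all three effects

    (void)moved; (void)transformed;
}
```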
texture cordinate VBO not being updated OpenGL I'm making a minecraft style game and I decided to add a VBO with the texture atkas coordinates of the vertices but it is appearing all white. However I'm following the same process as another VBO for the block positions which works the one that works glBindBuffer(GL ARRAY BUFFER,chunk gt posOffsetVBO) glBufferData(GL ARRAY BUFFER,sizeof(glm vec3) 65536,NULL,GL DYNAMIC DRAW) glVertexAttribPointer(2,3,GL FLOAT,GL FALSE,3 sizeof(GLfloat),(GLvoid )0) glEnableVertexAttribArray(2) glVertexAttribDivisor(2,1) then I fill it with glm vec3s like this glBindBuffer(GL ARRAY BUFFER,adjacentChunks 1 1 gt posOffsetVBO) glBufferSubData(GL ARRAY BUFFER,0,sizeof(posOffsets), amp posOffsets 0 ) this is the one with the texture coordinates that doesn't work glBindBuffer(GL ARRAY BUFFER,chunk gt texCoordVBO) glBufferData(GL ARRAY BUFFER,sizeof(glm vec2) 36 1000,NULL,GL DYNAMIC DRAW) glVertexAttribPointer(1,2,GL FLOAT,GL FALSE,2 sizeof(GLfloat),(GLvoid )0) glEnableVertexAttribArray(1) then fill It with 36 glm vec2s for each block so one per vertex glBindBuffer(GL ARRAY BUFFER,adjacentChunks 1 1 gt texCoordVBO) glBufferSubData(GL ARRAY BUFFER,0,sizeof(texCoords), amp texCoords 0 ) Is there a reason why the texture one doesn't work?
Scale on z-axis move: At position z = 0, one side of my box has size 0.2. When, for example, I move it to z = 1.5, it scales up. I've tried to calculate the change using the triangle formula z * tan(fov / 2), but it does not really work. Do you know how to get the size of one side after the primitive has been moved along the z axis? (This is all in OpenGL.)
Procedural texturing with opengl I have a hexagonal grid of fields, each field has a certain terrain type. I assign every vertex of hexagon with terrain type and pass it as attribute to vertex and then fragment shader. Then I use the extrapolated value to blend terrain textures together. For instance 1 is grassy, 2 is desert. Vertex shader varying vec2 vUv attribute float terrainType varying float vTerrainType void main() vUv uv vTerrainType terrainType vec4 mvPosition mvPosition modelViewMatrix vec4( position, 1.0 ) gl Position projectionMatrix mvPosition Fragment shader uniform vec3 diffuse uniform float opacity varying vec2 vUv varying float vTerrainType uniform sampler2D map uniform sampler2D map1 void main() gl FragColor vec4( diffuse, opacity ) vec4 texelColor texture2D( map, vUv ) (2.0 vTerrainType) texture2D( map1, vUv ) (vTerrainType 1.0) gl FragColor gl FragColor texelColor Implementation is WebGL, if it makes any difference. The result looks really unnatural, so is there any way to make it smoother and rounder?
Does LibGDX abstract OpenGL ES away or can I still use my OpenGL ES knowledge? I've been learning OpenGL ES, and am now turning my attention to using LibGDX. My main concern with LibGDX is, if needed, will I be able to apply my OpenGL ES knowledge to something if needed and essentially override bits and pieces of the framework, or does LibGDX essentially hide any implementations of OpenGL?
What is an efficient way to deal with large, scrolling background images? In my mobile game you basically you just fly up (infinite height) and collect stars. I have many quite large background images, scaled down so that their width is the same as the device width. Then they are appended after each other during rendering. Since I implemented these backgrounds, my game runs poorly. I've got about 20 background images with a size of 800x480 each without backgrounds the game is quite smooth. Does anyone have an idea how to implement this many backgrounds without making the game slow down? The images are used as a 2DTexture. If I leave the clouds out of the image and "just" display the blue part, the app still slows down. Showing some code is a bit difficult, because I got many many classes which will do the loading, rendering and display stuff. Basically its done as Google does it in there "spriteMethodTest" example here http code.google.com p apps for android source browse trunk SpriteMethodTest 2 of these image set. First http picbox.im view b7c8c86abb 01.png Second http picbox.im view 3a8162314a 02.png
Cube map or 2D texture map I'm trying to map quadsphere with COBE spherical cube (CSC) projection in OpenGL (wanna map planets). I managed to create a 2D texture and it works well except seams at edges. Then I learned that there is a cubemap in OpenGL. It has some advantages It automatically generates seamless mipmap and it requires less memory (I was using 4 size width 3 size height texture where a half is just wasted like this https camo.githubusercontent.com b017f71b03e6180dc75ff34f017d45f32e9db817 68747470733a2f2f7261772e6769746875622e636f6d2f6369782f517561645370686572652f6d61737465722f6578616d706c65732f677269642e706e67) The problem with using cubemap is that CSC is not gnomonic projection. So I can't just use vertex point as 3D texture coordinates. I have to calculate 3D texture coordinates from uv coordinates and the face. Then OpenGL recalculate it to uv coordinate and the face. I guess that slows down the rendering compared to simple 2D texture. The problem with using 2D texture is that it is not easy to make texture seamless. And it takes more memory if I use one image file for whole texture. I think I could pre generate mipmaps sampling adjacent faces and add 1px padding and modify uv coordinates slightly (like this? https gamedev.stackexchange.com a 49585 74052). But it sounds complicated and I have to pre generate all mipmaps, and textures might no be in size of power of 2 and I might have another problem with that. I have no idea which way to go. When using cubemap, is there a way to specify uv coordinates and face directly, and not specify 3D texture coordinates (direction)? Is there a way to set 2D texture wrap mode to something like CLAMP TO EDGE OF OTHER TEXTURE to make it seamless at edges? Is there other way to make a 2D cube texture seamless? Any other suggestion to deal with this problem? Thanks in advance!
What's the correct way to move 2D sprites in OpenGL 2.1? I'm getting into OpenGL 2.1 and wanted to know how I can move 2D sprites. I already created my VBO and IBO, and the vertex data is already there. But how can I move a sprite once it's already drawn? Should I update the vertex data with glBufferSubData (I don't think that's efficient), or should I use glTranslatef? If I use glTranslate I move all the sprites on the screen. Any help appreciated, thanks!
Why does my terrain texture fail to load? (OpenGL) I'm currently using a vertex shader and a fragment shader for loading my texture onto the terrain I made. Here is my vertex shader version 330 core layout (location 0) in vec3 aPos layout (location 1) in vec3 aNormal uniform mat4 model uniform mat4 view uniform mat4 projection float scale 0.5 out vec2 TexX out vec2 TexY out vec2 TexZ out vec3 blend weights void main() vec3 blend weights abs(aNormal.xyz) blend weights (blend weights 0.2) 0.7 blend weights max(blend weights, 0) blend weights (blend weights.x blend weights.y blend weights.z) TexX aPos.yz scale TexY aPos.zx scale TexZ aPos.xy scale gl Position projection view model vec4(aPos, 1.0) And here is my fragment shader. version 330 core out vec4 FragColor uniform sampler2D terrainTexture in vec2 TexX in vec2 TexY in vec2 TexZ in vec3 blend weights void main() FragColor texture(terrainTexture, TexX) blend weights.x texture(terrainTexture, TexY) blend weights.y texture(terrainTexture, TexZ) blend weights.z I was trying to implement 3D texture planar projections, based on section 1.5 of this link https developer.nvidia.com gpugems GPUGems3 gpugems3 ch01.html If I just try to implement the texture in a very simple way, it works (which looks terribly stretched on the mountanous terrains, of course). But if I implement it this way, the texture does not load. What could be the possible reason? Just in case anyone is wondering, the following is my VAO, VBO, EBO configuration VAO, VBO configuration unsigned int VAO, VBO, EBO glGenVertexArrays(1, amp VAO) glGenBuffers(1, amp VBO) glGenBuffers(1, amp EBO) glBindVertexArray(VAO) glBindBuffer(GL ARRAY BUFFER, VBO) glBufferData(GL ARRAY BUFFER, positions.size() sizeof(glm vec3) normals.size() sizeof(glm vec3) , NULL, GL STATIC DRAW) glBufferSubData(GL ARRAY BUFFER, 0, positions.size() sizeof(glm vec3), amp positions 0 ) glBufferSubData(GL ARRAY BUFFER, positions.size() sizeof(glm vec3), normals.size() sizeof(glm vec3), amp normals 0 ) glBindBuffer(GL ELEMENT ARRAY BUFFER, EBO) glBufferData(GL ELEMENT ARRAY BUFFER, indices.size() sizeof(int), amp indices 0 , GL STATIC DRAW) glEnableVertexAttribArray(0) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 3 sizeof(float), 0) glEnableVertexAttribArray(1) glVertexAttribPointer(1, 3, GL FLOAT, GL FALSE, 3 sizeof(float), (void )(positions.size() sizeof(glm vec3))) This is my draw function Inside the render loop void drawScene(Shader amp shader, unsigned int VAO, unsigned int numIndices, unsigned int texture1) draw the scene with the parameter data. glm mat4 model model glm translate(model, glm vec3( (int)(gridXNum) 2, 0.0f, (int)(gridZNum) 2)) glm mat4 view camera.GetViewMatrix() glm mat4 projection glm perspective(glm radians(camera.Zoom), (float)SCR WIDTH (float)SCR HEIGHT, 0.1f, 100.0f) shader.use() shader.setMat4("model", model) shader.setMat4("view", view) shader.setMat4("projection", projection) shader.setFloat("TERRAIN WIDTH",gridXNum) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, texture1) glBindVertexArray(VAO) glDrawElements(GL TRIANGLE STRIP, numIndices, GL UNSIGNED INT, 0) glBindVertexArray(0) However, I really don't think this is the problem because as I said, if I change the shader codes into a really basic one, the texture successfully loads. There is definitely something wrong with my shader program. Any help would be appreciated. Thanks in advance!
Porting SDL OpenGL Game to Android and IOS I am currently learning OpenGL (3.0 ) with C . I am using SDL for input handling, window creation, etc., GLEW to use OpenGL and call OpenGL Functions, and GLM for OpenGL Math stuff. If I fully finish a Windows game, how can I port my game with the setup above to Android, IOS, and maybe even other platforms (but my main focus is Android and IOS). I do not want to use any Game Engines. What I am looking for is a program that could just make my game run on android and IOS. If there is a way to optimize my game using the tools listed above a little bit and change some code to make my game run on android and IOS, I am okay with that (if that is the case, please provide resources). I have heard of OpenGL Es, not quite sure what that is, but I do not want to use it if it is a completely different library and if I have to rewrite the entirety of my game rendering engine. I also want my app to run without the user having to download any libraries such as SDL, so please make sure that my app can just run as an .apk file or whatever format that I can just tap on and it will open. I also want my app to run on most smartphones (Android and IOS), so please be sure that what you are suggesting is not only available to a limited amount of smartphones. Another thing that I want is optimal performance. No emulators or simulators from .exe to .apk that run very slowly. It should run as fast as other apps and run at probably the same speed as my windows version. Also point out any mistakes that I may have made (for example, maybe I am just crazy thinking that smartphones use OpenGL).
Picking a suitable resolution for a modern low-res game? I'm working on a 2D game project right now (using SFML/OpenGL and C++) and I'm trying to figure out how to choose a resolution. I want my game to have a pixel resolution around that of classic '16-bit' era consoles like the Super Nintendo or Neo Geo. However, I'd also like my game to fit the 16:9 aspect ratio that most modern PC monitors use. Finally, I'd like to include an option for running full screen. I know that I could create my own low-res 16:9 resolution that is more or less around the size of SNES or Neo Geo games. However, the problem seems to be that doing so would leave me with a non-standard resolution that my monitor would not be able to support in fullscreen mode. For example, if I divide the common 16:9 resolution 1920x1080 by 4, I get a 16:9 resolution that is relatively close to the resolution used by 16-bit era games: 480x270. That would be fine in a windowed mode, but I don't think it would be supported in fullscreen mode. How can I choose a resolution that suits my needs? Can I use something like 480x270? If so, how would I go about getting fullscreen mode to work with such a non-standard resolution? (I'm guessing OpenGL/SFML might have a way of upscaling, but...)
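A hedged sketch of the usual approach: keep the fullscreen window at the desktop resolution, render the game at the fixed 480x270 into an off-screen texture, and scale that texture up when presenting it. The monitor never has to support the low resolution. SFML 2.x API is assumed here, and the game-drawing call is a placeholder.

```cpp
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode::getDesktopMode(), "Game", sf::Style::Fullscreen);
    sf::RenderTexture lowRes;
    lowRes.create(480, 270);                  // fixed internal resolution

    while (window.isOpen()) {
        sf::Event e;
        while (window.pollEvent(e))
            if (e.type == sf::Event::Closed) window.close();

        lowRes.clear();
        // drawGame(lowRes);                  // placeholder: all game rendering at 480x270
        lowRes.display();

        sf::Sprite frame(lowRes.getTexture());
        frame.setScale(float(window.getSize().x) / 480.f,
                       float(window.getSize().y) / 270.f);
        window.clear();
        window.draw(frame);
        window.display();
    }
}
```

Integer scaling with letterboxing (e.g. 4x on a 1080p monitor) is a common refinement to keep pixels square and crisp.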
How can this kind of entity component organization improve cache efficiency? I've been reading up about entity component systems as a design pattern for an OpenGL engine. The style I'm trying to implement has entities only being integers, and components being long contiguous arrays of, for example glm vec3 for a world position component. The entity integers then work as indices into those arrays. All of this described best by this post. As I understand, the main benefits of an ECS would include Simplification and elimination of unnecessary dependencies and spaghetti code Performance boost contiguous component arrays translate into fewer cache misses and an overall de bloating of complex inheritance While I do agree with the first point, I don't understand how the second could be true. On a regular draw call, you'd need almost always the majority of components. You need position, scale, rotate for loading a MVP matrix into the shaders. You need the mesh and textures for rendering itself. You even need details like light color or whether or not it casts shadows. Everything! This means that you will inevitably encounter cache misses in some form, since you can't fit your component arrays in cache. Add to that the fact that since the entity integers are indices into the component arrays, all of the component arrays need to grow in size whenever you add an entity independent of it's type. This means that adding 30 entities with no mesh will inevitable generate 30 empty mesh allocations that will never be used. How would you implement such a system to truly minimize cache misses without incurring in unnecessary memory waste?
Rendering a model with a transparent or translucent UV map applied doesn't work. Before I try to make anything transparent, the model renders nicely. When I change the UV layout so that one piece of the model will be transparent, it renders horribly. This is the result with a translucent green texture. I did transparency with OpenGL when I made 2D games and everything worked nicely because of these two lines: glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); Now, I fear, this won't be enough, right? Or maybe I should use some other technique?
How do I check for collision between transparent textures? I am creating a 2D game using OpenGL. For sprites, I use textured quads (actually two triangles). The texures contain transparent pixels, since objects are not always perfectly rectangular. How do I do collision detection on the objects, not the quads? I was thinking to first check the quads for collision and if they match, check the textures. But how can I check if two non transparent pixels are on top of each other (or next to each other) for the two objects? Or is there a completely different way of how this is done best?
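A minimal sketch of the common two-step test hinted at in the question: first a cheap quad (AABB) overlap check, then a per-pixel alpha check against CPU-side masks built once from the texture data at load time. Unrotated, axis-aligned sprites are assumed; all type and function names are placeholders.

```cpp
#include <vector>
#include <algorithm>
#include <cstdint>

// CPU-side alpha mask kept next to each texture (built once when the image is loaded).
struct Mask {
    int w, h;
    std::vector<uint8_t> alpha;                    // 0 = transparent, >0 = solid
    bool solid(int x, int y) const { return alpha[y * w + x] > 0; }
};

struct Sprite { int x, y; const Mask* mask; };     // screen position of the quad's top-left

bool pixelCollision(const Sprite& a, const Sprite& b)
{
    // 1) Cheap AABB overlap test on the quads.
    int left   = std::max(a.x, b.x);
    int right  = std::min(a.x + a.mask->w, b.x + b.mask->w);
    int top    = std::max(a.y, b.y);
    int bottom = std::min(a.y + a.mask->h, b.y + b.mask->h);
    if (left >= right || top >= bottom) return false;

    // 2) Per-pixel test, only inside the overlapping rectangle.
    for (int y = top; y < bottom; ++y)
        for (int x = left; x < right; ++x)
            if (a.mask->solid(x - a.x, y - a.y) && b.mask->solid(x - b.x, y - b.y))
                return true;
    return false;
}
```

Doing this on the CPU avoids reading texture data back from the GPU every frame; rotated or scaled sprites need the sample coordinates transformed into each sprite's local space first.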
OpenGL pitching problem I've been trying to implement several camera movements for my application. So far yawing, rolling, strafing, walking has been working properly, but I can't get my pitching to work properly. If I continue pitching upwards, it doesn't rotate back 360 degree, and instead gets stuck looking at the top of the screen. Here's my code for Camera class class Camera float m 16 Vector dirForward, dirUp, dirRight void loadCameraDirections() glGetFloatv(GL MODELVIEW MATRIX, m) dirForward Vector(m 2 , m 6 , m 10 ).unitVector() dirUp Vector(m 1 , m 5 , m 9 ).unitVector() dirRight Vector(m 0 , m 4 , m 8 ).unitVector() public Vector position Vector lookAt Vector up Camera() position Vector(300, 0, 100) lookAt Vector(0, 0, 100) up Vector(0, 0, 1) void rotatePitch(double rotationAngle) glPushMatrix() glRotatef(rotationAngle, dirRight.x, dirRight.y, dirRight.z) loadCameraDirections() glPopMatrix() lookAt position.repositionBy(dirForward.reverseVector()) cam I call the gluLookAt funtion by gluLookAt(cam.position.x, cam.position.y, cam.position.z, cam.lookAt.x, cam.lookAt.y, cam.lookAt.z, cam.up.x, cam.up.y, cam.up.z)
How portable are OpenGL versions, really? If I write a game engine that uses OpenGL 1.5 (assuming nothing else about what I do), is it portable now, and will it still be portable five years from now, or will hardware and driver support for OpenGL become exclusive to their (much further along) target OpenGL versions? Lately I've been looking at a lot of answers on this website that direct users to divert their work towards the most recent OpenGL versions, citing hardware surveys of DirectX support and only recommending earlier versions as a last, final resort (as if to imply there is something wrong with them that makes all usage of them invalid or pointless). If I only have computers that can provide OpenGL < 1.5 or < 2.1 contexts, should I just give up game programming if I can't afford a new computer with hardware and drivers for 3.x and 4.x? Or should I finish my game engine the way I intended to? By the time I get a 4.x-supporting setup, will there be new versions and a lack of backwards compatibility that trash all usage of 4.x? Will 4.x ever dominate over earlier versions, support-wise, before a new major version is released?
Determine user mouse selection of 3D Object for multiple viewports I am currently working on setting up some world objects for my level editor and am running into a bit of a snag. When I get the hit location from the mouse raycast, I would like to determine what part of the object that the user has hit. I am looking for something that would work for each viewport in my map editor. I would like to use hit detection change the color of an object when the user mouses over it. For instance, if the user mouses over the x axis (red arrow) it would turn from red to white. It doesn t have to be pixel perfect but I would like to find a single solution that would work in all viewports. I was thinking that after I get the hit location, I could transform the hit location to match the inverse of the rotation of the camera and perform the checks from there. I'm skeptical that solution will work though.
Workaround for reading and writing same texture? To apply post effects, it is often needed to read the preliminary image, perform computations on its pixels and store the result in the same texture again. For example, think of a tone mapping or desaturation effect. The input and output should be the same texture, but this isn't allowed by OpenGL. Therefore, what is a good workaround? One idea I came up with but which isn't very satisfying is to have two image textures and automatically use them alternating. Also keeping track of which is the newer one to finally display it on the screen.
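A minimal sketch of the alternating ("ping-pong") scheme the question proposes: two color textures attached to two FBOs, with each post-effect pass reading from one and writing to the other before swapping roles. Creation of the FBOs and textures, and the fullscreen-quad pass itself, are omitted placeholders.

```cpp
#include <GL/glew.h>
#include <utility>

// Two color textures + two FBOs created elsewhere at init time.
GLuint fbo[2], tex[2];
int src = 0, dst = 1;

void applyPostEffects(int passCount)
{
    for (int i = 0; i < passCount; ++i) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   // write target for this pass
        glBindTexture(GL_TEXTURE_2D, tex[src]);        // read source for this pass
        // runPostPass();                               // placeholder: bind shader, draw fullscreen quad
        std::swap(src, dst);                            // roles alternate automatically
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // tex[src] now holds the newest image; draw or blit it to the default framebuffer.
}
```

Keeping the "which one is newer" bookkeeping inside a single swap, as above, is usually enough; the small extra memory for the second texture is the accepted cost of this approach.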
In OpenGL, what does it mean to make a context current? I have a few questions. 1) What does it mean to make a context current? Does it mean that all subsequent OpenGL calls will apply to that context/window? 2) With GLFW3, how do I use multiple windows (say, 2)? Is it enough to create 2 windows/contexts, make the context current on the first one, draw things, then make the context current on the second window, and draw things? 3) How does it all fit together with an OpenGL loading library (like gl3w)? If I initialize gl3w once after creating the contexts, will the OpenGL calls work on whichever context is current even if I change it a lot, or will I have to re-initialize gl3w each time I make another context current?
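A minimal two-window sketch for question 2: glfwMakeContextCurrent binds that window's context to the calling thread, and subsequent GL calls then target it. For question 3, treat this as a hedged note rather than a guarantee: gl3w loads function pointers once, and in practice those pointers usually remain valid across contexts created with the same settings on the same driver, but that behavior is platform- and driver-dependent and worth verifying.

```cpp
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit()) return -1;
    GLFWwindow* w1 = glfwCreateWindow(640, 480, "One", nullptr, nullptr);
    GLFWwindow* w2 = glfwCreateWindow(640, 480, "Two", nullptr, nullptr);

    while (!glfwWindowShouldClose(w1) && !glfwWindowShouldClose(w2)) {
        glfwMakeContextCurrent(w1);          // GL calls now affect window 1's context
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(w1);

        glfwMakeContextCurrent(w2);          // and now window 2's
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(w2);

        glfwPollEvents();
    }
    glfwTerminate();
}
```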
Understanding the z axis vector from OpenGL modelview matrix I would like to shoot spheres in the current view direction in a simple scene. I use an FPS camera, so no z rotation. The vector pointing in the correct direction should be (m 8 , m 9 , m 10 ) where m is the modelview matrix as returned by glGetDoublev(...). In my understanding that vector is normalized, so when I scale it by some scalar its length will equal that scalar. If I draw an object every frame and translate it by the scaled z axis vector (plus camera position), the object should be directly in front of me, all the time, right? However, when I run my program, the object is not in front of me all the time. It is in front of me, as long as no rotation around the y axis is done. Here is the relevant (I think) source code camera glMatrixMode(GL MODELVIEW) glLoadIdentity() glRotatef(g cam.rx, 1, 0, 0) glRotatef(g cam.ry, 0, 1, 0) glTranslatef( g cam.tx, g cam.ty, g cam.tz) get z axis vector double mm 16 glGetDoublev(GL MODELVIEW MATRIX, mm) should be in front of me, all the time glPushMatrix() glTranslatef(g cam.tx 10 mm 8 , g cam.ty 10 mm 9 , g cam.tz 10 mm 10 ) glColor3f(0, 0, 1) glCullFace(GL FRONT) glutSolidTeapot(2) glCullFace(GL BACK) glPopMatrix() Note that this is not about how to place an object in front of me all the time, but rather what is wrong with my understanding, that using the z axis vector does not work here. I know that this is a frequent question, because I found lots of material on the internet. However, I seem to be missing something yet. It would be great if you could help me with it ) If the problem is not as obvious as I think it is, I can prepare a small example demonstrating the issue.
Editing the pixels of a rendered image I want to render a simple OpenGL scene as usual, but then I want to superimpose a small image of my own (such as from a bitmap file) on top of the render, such that this image always shows. For example, this could be thought of as showing a logo in the corner of the screen for a 3D game, where the logo is always displayed on top of the rendered scene. Please could somebody start me off in the right direction? What should I be looking into? I am rather a novice at OpenGL... Let us suppose that I have the following code include lt GL glut.h gt void renderScene(void) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glBegin(GL TRIANGLES) glVertex3f( 0.5, 0.5,0.0) glVertex3f(0.5,0.0,0.0) glVertex3f(0.0,0.5,0.0) glEnd() glutSwapBuffers() int main(int argc, char argv) init GLUT and create Window glutInit( amp argc, argv) glutInitDisplayMode(GLUT DEPTH GLUT DOUBLE GLUT RGBA) glutInitWindowPosition(100,100) glutInitWindowSize(320,320) glutCreateWindow("GLUT Triangles") register callbacks glutDisplayFunc(renderScene) enter GLUT event processing cycle glutMainLoop() return 1 This renders a triangle on the screen. How, for example, would I now render a 10 by 10 bitmap from file, at location (100, 100) on the screen? If the viewpoint was static, I could just calculate its 3D location and render it. However, I want the bitmap image to always be displayed in this location, even when the viewpoint changes. Thanks )
Blurring part of the screen optimisation. I'm developing a 3D menu and sometimes I need to blur only part of the screen. I use forward rendering. I create a frame buffer object with 3 color attachments. Rendering looks like this: bind the FBO; render the objects which should be blurred to the first texture; bind the first texture, perform a vertical blur and save the result in the second texture; bind the second texture, perform a horizontal blur and save the result in the third texture; unbind the FBO (render to the default FBO); bind the third texture and render it; render the objects which are not blurred. I can use subroutines to reduce overhead when switching shaders. I also found the following article. Do you have any other ideas how to optimize the above rendering?
Why would one use a separate alpha mask for sprites? I've seen tilesets of the game Braid, and for each tileset in the main folder there is an alpha map for it in "alpha" folder. I wonder, why just not to draw your image as it is (with transparent parts where you want), export to PNG format and parse it to RGBA texture? Why would one use a separate alpha map for this, is there some kind of performance benefit?
Backface culling without light leaking through I want to be able to see through walls, so to do this I used planes for the walls, and enabled backface culling. However with shadow mapping I have a lot of light leaking through I read that using a thick wall solves this, so I did just that. This solved the leaking light but now I cannot see through the walls (camera is within the room in the above screen), all the backfaces are inward. How could I prevent light leaking, as well as enable seeing through the walls?
GLSL subroutine not being used I'm using a gaussian blur fragment shader. In it, I thought it would be concise to include 2 subroutines one for selecting the horizontal texture coordinate offsets, and another for the vertical texture coordinate offsets. This way, I just have one gaussian blur shader to manage. Here is the code for my shader. The NAME bits are template placeholders that I substitute in at shader compile time version 420 subroutine vec2 sample coord type(int i) subroutine uniform sample coord type sample coord in vec2 texcoord out vec3 color uniform sampler2D tex uniform int texture size const float offsets NUM SAMPLES float ( SAMPLE OFFSETS ) const float weights NUM SAMPLES float ( SAMPLE WEIGHTS ) subroutine(sample coord type) vec2 vertical coord(int i) return vec2(0.0, offsets i texture size) subroutine(sample coord type) vec2 horizontal coord(int i) return vec2(offsets i texture size, 0.0) return vec2(0.0, 0.0) just for testing if this subroutine gets used void main(void) color vec3(0.0) for (int i 0 i lt NUM SAMPLES i ) color texture(tex, texcoord sample coord(i)).rgb weights i color texture(tex, texcoord sample coord(i)).rgb weights i Here is my code for selecting the subroutine blur program gt start() blur program gt set subroutine("sample coord", "vertical coord", GL FRAGMENT SHADER) blur program gt set int("texture size", width) blur program gt set texture("tex", deferred output) blur program gt draw() draws a quad for the fragment shader to run on and void ShaderProgram set subroutine(constr name, constr routine, GLenum target) GLuint routine index glGetSubroutineIndex(id, target, routine.c str()) GLuint uniform index glGetSubroutineUniformLocation(id, target, name.c str()) glUniformSubroutinesuiv(target, 1, amp routine index) debugging int num subs glGetActiveSubroutineUniformiv(id, target, uniform index, GL NUM COMPATIBLE SUBROUTINES, amp num subs) std cout lt lt uniform index lt lt " " lt lt routine index lt lt " " lt lt num subs lt lt " n" I've checked for errors, and there are none. When I pass in vertical coord as the routine to use, my scene is blurred vertically, as it should be. The routine index variable is also 1 (which is weird, because vertical coord subroutine is the first listed in the shader code...but no matter, maybe the compiler is switching things around) However, when I pass in horizontal coord, my scene is STILL blurred vertically, even though the value of routine index is 0, suggesting that a different subroutine is being used. Yet the horizontal coord subroutine explicitly does not blur. What's more is, whichever subroutine comes first in the shader, is the subroutine that the shader uses permanently. Right now, vertical coord comes first, so the shader blurs vertically always. If I put horizontal coord first, the scene is unblurred, as expected, but then I cannot select the vertical coord subroutine! ) Also, the value of num subs is 2, suggesting that there are 2 subroutines compatible with my sample coord subroutine uniform. Just to re iterate, all of my return values are fine, and there are no glGetError() errors happening. Any ideas?
Camera movement with slerp I have 3 spots, I would like to move my camera to using slerp. Just As seen in the image below. My question is how I can connect my camera to the first spot? I should be able to move between other spots after I connect my camera to the first spot. Maybe a better way to ask is how can I make the camera location my first quaternion spot?
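A hedged GLM sketch of one way to set this up: store each spot as a position plus an orientation quaternion, "connect" the camera to the first spot by simply adopting its transform, and move between spots by interpolating position with mix and orientation with slerp. All names here are assumptions, not from the question.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct CameraSpot { glm::vec3 position; glm::quat orientation; };

glm::vec3 cameraPos;
glm::quat cameraRot;

// "Connecting" the camera to the first spot is just copying its transform.
void snapTo(const CameraSpot& spot)
{
    cameraPos = spot.position;
    cameraRot = spot.orientation;
}

// Move between two spots; t goes from 0 to 1 over the duration of the move.
void moveBetween(const CameraSpot& from, const CameraSpot& to, float t)
{
    cameraPos = glm::mix(from.position, to.position, t);       // linear position blend
    cameraRot = glm::slerp(from.orientation, to.orientation, t); // shortest-arc rotation blend
    // Build the view matrix from cameraRot/cameraPos each frame, e.g. with
    // glm::mat4_cast(glm::conjugate(cameraRot)) and a translation by -cameraPos.
}
```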
View to normal calculation in GLSL Sorry for the terrible title, but I really cant think of anything better.. Suggestions welcome. I am trying to do something like showcased in this video http www.youtube.com watch?v CaTI2d0tQME So basically smoothly change the opacity when looked face on. This is my vertex shader so far, the fragment shader is simple as it just multiplies the lightColor with the texture version 430 core uniform mat4 MV uniform mat4 MVP layout(location 0) attribute vec3 vertexPosition layout(location 1) attribute vec2 vertexUV layout(location 2) attribute vec3 vertexNormal layout(location 3) attribute mat4 bufferMatrix For per instance translation varying vec2 UV varying vec4 lightColor flat varying int InstanceID void main() vec4 mcPosition MV bufferMatrix vec4(vertexPosition, 1.0) mcPosition mcPosition length(mcPosition) vec3 mcNormal vertexNormal vec3 ecNormal vec3(MV bufferMatrix vec4(mcNormal, 0.0)) ecNormal ecNormal length(ecNormal) float dotProduct dot(vec4(mcNormal, 1.0), mcPosition) lightColor vec4(dotProduct) gl Position MVP bufferMatrix vec4(vertexPosition, 1.0) UV vertexUV InstanceID gl InstanceID The base is copied code from a 'phong' shader that works!.. I have tried everything I could think of, as well as searched google for quite some time. I think I realize what I need to do mathematically, which is getting the dot product of the vertex normal on to the view to vertex vector. That is mathematically speaking, another is to do it in GLSL with matrices etc. I am bad at debugging GLSL code, but right now the lightColor is always 0, no matter where I look at the model from. Quick Bonus Question What is the technique in the video actually called if anything?
Creating a custom mouse cursor with LWJGL2 in Java I have been trying to create a custom mouse cursor in my LWJGL2 application running under Linux and I am almost there. I have implemented the following method that I call right after creating the game window public void loadCursor(BufferedImage img) throws LWJGLException final int w img.getWidth() final int h img.getHeight() int rgbData new int w h for (int i 0 i lt rgbData.length i ) int x i w int y h 1 i w this will also flip the image vertically rgbData i img.getRGB(x, y) IntBuffer buffer BufferUtils.createIntBuffer(w h) buffer.put(rgbData) buffer.rewind() Cursor cursor new Cursor(w, h, 2, h 2, 1, buffer, null) Mouse.setNativeCursor(cursor) The resulting cursor is almost perfect, except for a horizontal blank line in the middle. Note The image is being flipped vertically in my code because otherwise the cursor is displayed upside down in OpenGL. I have seen other implementations that flip the IntBuffer instead of doing this in the for loop but they lead to the same result for me. I don't think it has to do with screen tearing either since I enabled VSync and the blank line is always at 50 of the cursor's height (there is no flickering). Can anybody tell what I am doing wrong or if my code example is missing something essential in order to find the problem? Thank you!
Get world position in vertex shader. I'm wondering how I can get the final world position of a vertex. I use glTranslate in my render code, and I'm not getting the world coordinates correct. My world is divided into chunks and my position gets screwed up. I have tried multiplying the position with the 3 built-in matrices, but with no success.
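A hedged sketch of one workaround, assuming a compatibility-profile setup (since the question uses glTranslate): the built-in modelview matrix already has the camera folded into it, so world space cannot be recovered from it alone. Passing the chunk's translation as its own uniform lets the shader rebuild the world position itself. The uniform and variable names are assumptions.

```cpp
// GLSL (compatibility profile) vertex shader, kept as a C++ string for context.
const char* vsSource = R"(
    uniform vec3 uChunkOffset;          // the same offset passed to glTranslate
    varying vec3 vWorldPos;
    void main() {
        vWorldPos   = gl_Vertex.xyz + uChunkOffset;              // reconstructed world position
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;  // clip-space path unchanged
    }
)";

// C++ side, once per chunk before drawing it (program and offsets are placeholders):
// glUniform3f(glGetUniformLocation(program, "uChunkOffset"), chunkX, chunkY, chunkZ);
```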
Runtime resolution changing with GLFW3. I've been trying to figure out the correct method for changing the resolution/fullscreen state of a GLFW window for a while now, but after searching, all I found were references on how to do it with older versions of the library, such as this. I suspect you'd just destroy the window object and re-create it, but I was not sure, because some functions such as glfwSetKeyCallback take a GLFWwindow as a parameter, and I don't know if they'd continue to work after it has been re-created. The documentation also does not have any examples of doing such a thing, so any help would be appreciated.
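A hedged sketch: from GLFW 3.2 onward, glfwSetWindowMonitor can switch an existing window between windowed and fullscreen (and change its size) without destroying it, so previously registered callbacks keep working. On earlier 3.x versions the recreate-the-window route is the fallback, and callbacks then have to be registered again on the new GLFWwindow pointer.

```cpp
#include <GLFW/glfw3.h>

// Toggle fullscreen on an existing window (GLFW 3.2+).
void setFullscreen(GLFWwindow* window, bool fullscreen, int width, int height)
{
    if (fullscreen) {
        GLFWmonitor* monitor = glfwGetPrimaryMonitor();
        const GLFWvidmode* mode = glfwGetVideoMode(monitor);
        glfwSetWindowMonitor(window, monitor, 0, 0, width, height, mode->refreshRate);
    } else {
        glfwSetWindowMonitor(window, nullptr, 100, 100, width, height, GLFW_DONT_CARE);
    }

    // The framebuffer size likely changed, so update the viewport to match.
    int fbw, fbh;
    glfwGetFramebufferSize(window, &fbw, &fbh);
    glViewport(0, 0, fbw, fbh);
}
```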
How do I only render fragments with Z from 0 to 1 in OpenGL? I have been using OpenGL for a year now, but I just recently found out that OpenGL only clips vertices when the absolute value of the x, y or z coordinate is less than the absolute value of the w coordinate. Previously I had assumed that the z coordinate would have to satisfy 0 < z < w to be rendered. Is there any way to clip vertices with z less than 0, without a performance hit? I am using OpenGL 4.4.
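A hedged sketch of one option that works on core GL 4.4: a user clip distance that discards everything with clip-space z below 0, while the built-in clipping still enforces z <= w on the far side. The cost is negligible on modern hardware. (On GL 4.5, or with ARB_clip_control, glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) changes the clip volume itself and is an alternative.)

```cpp
#include <GL/glew.h>

// C++ side: turn on the first user clip distance once at init.
void enableNearHalfSpaceClip()
{
    glEnable(GL_CLIP_DISTANCE0);
}

// GLSL vertex shader side, added after gl_Position is written:
//
//     out float gl_ClipDistance[1];
//     ...
//     gl_ClipDistance[0] = gl_Position.z;   // negative distance => clipped, i.e. z < 0 is culled
```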
OpenGL setup on Windows I have been trying to use OpenGL for two days now. First on Mac, then on Windows. The problem with Mac is that it doesn't support the newer versions of OpenGL. I ran a tutorial that actually did get some things working, but it only works in XCode (i.e., I can't create a new file, paste in the same code, and get it to work). Because of these issues, I moved to Windows. My Windows 7 has OpenGL 4.3, which is the same that is used in alot of other tutorials. However, not one of these tutorials gives any instruction on how to set it up for the first time. I have come across some vague posts saying that some libraries need to be linked. But WHAT libraries, and HOW do I link them? Please help. I am pretty desperate to set this up as this project is due for work soon. I have actually used OpenGL before at my university, but the computers already had everything set up. The project itself is very easy, but setting up OpenGL is not something I know how to do. Edit Using C .
1
Setting a uniform float in a fragment shader results in strange values, is this a type conversion? How can it be fixed? First, some details I'm learning OpenGL from the tutorials on https open.gl My computer is running Linux Mint 18.1 Xfce 64 bit My graphics card is a GeForce GTX 960M OpenGL Version 4.5.0 NVIDIA 375.66 GLSL Version 4.50 NVIDIA Graphics Card Driver nvidia 375 Version 375.66 0ubuntu0.16.04.1 CPU Intel Core i7 6700HQ The code I'm working on can be found here https github.com Faison sdl2 learning blob 8a61032d20edf91cfa60f665e1bb4d72e58f634b phase 01 initial setup main.c (Makefile is located in the same directory) In a fragment shader, I'm trying to make an image do a sort of "flipping mirroring" animation (lines 44 57) version 450 core in vec3 Color in vec2 Texcoord out vec4 outColor uniform sampler2D tex uniform float factor void main() if (Texcoord.y lt factor) outColor texture(tex, Texcoord) vec4(Color, 1.0) else outColor texture(tex, vec2(Texcoord.x, 1.0 Texcoord.y)) When factor is 1, the image should be right side up and has some color on it. When factor is 0, the image should be upside down and has no color added to it. When factor is 0.5, the top half should be right side up and the bottom half should be upside down. Currently, that is only the case if I replace factor with the number. When I set the uniform factor with glUniform1f(), I'm getting very strange results. To illistrate, I added some debug code to lines 188 197 that sets the uniform with one number, retrieves the number from the uniform, and outputs both values to try and see what's going on. Here's the code GLfloat factorToSet 1.0f GLfloat setFactor 0.0f GLint uniFactor glGetUniformLocation(shader program, "factor") while (factorToSet gt 0.1f) glUniform1f(uniFactor, factorToSet) glGetUniformfv(shader program, uniFactor, amp setFactor) printf("Factor of .1f becomes f n", factorToSet, setFactor) factorToSet 0.1 And here are the results Factor of 1.0 becomes 0.000000 Factor of 0.9 becomes 2.000000 Factor of 0.8 becomes 0.000000 Factor of 0.7 becomes 2.000000 Factor of 0.6 becomes 0.000000 Factor of 0.5 becomes 0.000000 Factor of 0.4 becomes 2.000000 Factor of 0.3 becomes 36893488147419103232.000000 Factor of 0.2 becomes 0.000000 Factor of 0.1 becomes 36893488147419103232.000000 Factor of 0.0 becomes 0.000000 So with what little I understand about OpenGL and the way scalar types are stored in binary, I'm thinking that this issue is caused my GLfloat getting converted into something else on the way to the shader's uniform float. But I'm grasping at straws. What could be causing this strange conversion between the number I send to the uniform float and the value that the uniform float becomes? What could I do to fix it if it's possible to fix? Thanks in advanced for any help and leads, I really appreciate it ) An additional note after receiving a working answer George Hanna provided a link to a post where someone had a similar issue. I read over the comments and someone said to use DGL GLEXT PROTOTYPES as a CFLAG. So I rolled back my local code to use glUniform1f() again, added DGL GLEXT PROTOTYPES to the Makefile, and everything worked! Even crazier, all the compiler warnings I had for implicit declarations of OpenGL functions were gone! So in addition to the answer below, if you have this issue, try adding DGL GLEXT PROTOTYPES to your CFLAGS. (You can also get this affect by adding define GL GLEXT PROTOTYPES before any OpenGL includes)
1
LibGDX 2D Silhouette recently I decided to implement in my game the drawing of a silhouette of the player when he is behind objects (the top layer of the map). Found a similar question here, but it doesn't matter to me to understand anything. Can someone please explain how it all works? I will be very grateful to you! Specifically, I'm interested in the issue of creating the Vertex and Fragments of the shaders? And why is only one shader passed to two setSheader methods. Code that I do not understand Rendering the upper map layer Simple quot if (gl FragColor.a 0.0) discard quot fragment shader renderer.getBatch().setShader(shader) Rendering the silhouettes quot gl FragColor vec4(0.0, 1.0, 1.0, 0.2) texture2D(u texture, v texCoord0).a quot batch.setShader(shader) EDITS I tried adding this code and this is what I got Java code above render the normal player texture outside the shader code mapRenderer.render(lowerLayer) player.update(mapMgr, game.batch, delta, world) mapRenderer.render(upperLayer) Gdx.gl20.glClear(GL20.GL STENCIL BUFFER BIT) Gdx.gl20.glEnable(GL20.GL STENCIL TEST) Gdx.gl20.glStencilFunc(GL20.GL ALWAYS, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL REPLACE, GL20.GL REPLACE, GL20.GL REPLACE) mapRenderer.getBatch().setShader(shader) mapRenderer.render(upperLayer) mapRenderer.getBatch().setShader(null) Gdx.gl20.glStencilFunc(GL20.GL LEQUAL, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL KEEP, GL20.GL KEEP, GL20.GL KEEP) game.batch.setShader(shader) player.update(mapMgr, game.batch, delta, world) game.batch.setShader(null) Gdx.gl20.glDisable(GL20.GL STENCIL TEST) Fragment GLSL code varying vec4 v color varying vec2 v texCoords uniform sampler2D u texture void main() vec4 c vec4(.5, .5, .5, texture2D(u texture, v texCoords).a) if (c.a 0.0) discard gl FragColor c It didn't work out that way. In addition to the fact that the color of the top layer changes, the player is also completely painted, regardless of whether he is under the top layer or not. I think the problem is in some of this This part of the code does not work. Because even if you remove it, nothing changes. Gdx.gl20.glClear(GL20.GL STENCIL BUFFER BIT) Gdx.gl20.glEnable(GL20.GL STENCIL TEST) Gdx.gl20.glStencilFunc(GL20.GL ALWAYS, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL REPLACE, GL20.GL REPLACE, GL20.GL REPLACE) Gdx.gl20.glStencilFunc(GL20.GL LEQUAL, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL KEEP, GL20.GL KEEP, GL20.GL KEEP) Gdx.gl20.glDisable(GL20.GL STENCIL TEST) Fragment shader is not written correctly I spent all day solving this problem. Nothing works. I hope someone will figure it out! Thanks.
1
Why are my texture coordinates always (0,0) in this shader? What I'm trying to do is add my depth buffers values to my scene, ie. I'm trying to make objects closer to the camera darker and objects further away lighter. Which should be easy just render the depth buffer to a texture, and then render the scene, multiplying each pixels colour by the colour value at the same coordinates in my depth buffer texture... Yet I don't know how to use coordinates for textures. Or more like it's just failing. What's happening is that the vertex shader is only using the (0,0) coordinate of my texture, so the entire scene changes colour depending on what's there. What I'm doing is this (the critical stuff from my render function) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, depthTextureId) Binds the texture box.render() Sets the viewing matrix and renders a test VBO I guess that's the right way to bind a texture? Anyhoo, here's the vertex shader's important stuff out vec4 ShadowCoord void main() gl Position PMatrix (VMatrix MMatrix) gl Vertex Projection view and model matrices ShadowCoord gl MultiTexCoord0 something I kept seeing in examples, was hoping it would work. It seems gl MultiTexCoord0 is the problem? Maybe? The frag shader sets the colour using this function vec4(texture2D(ShadowMap, ShadowCoord.st).x vec3(Color), 1.0) Where the ShadowMap is the sampler2D for the texture, and Color is the vertex's color... So what am I missing? How come the coordinates are not changing from 0,0?
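If the test VBO only supplies positions and colors, gl MultiTexCoord0 never receives per-vertex data, which would match the "everything samples (0,0)" symptom. For a whole-screen effect like this you can skip vertex texture coordinates entirely and build the lookup coordinate from gl_FragCoord in the fragment shader instead. A hedged sketch of the fragment-shader side (assumes the depth texture is the same size as the viewport, GLSL 1.30+ for textureSize, and the same ShadowMap/Color names as in your shader):

    const char* fragSnippet = R"(
        uniform sampler2D ShadowMap;   // the depth texture rendered earlier
        // gl_FragCoord.xy is in window pixels; dividing by the texture size
        // gives 0..1 coordinates that line up with the stored depth buffer.
        vec2 uv = gl_FragCoord.xy / vec2(textureSize(ShadowMap, 0));
        float sceneDepth = texture2D(ShadowMap, uv).x;
        gl_FragColor = vec4(sceneDepth * vec3(Color), 1.0);
    )";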
1
How can I switch between DirectX and OpenGL renderer in my engine? I am currently developing my engine with DirectX, but I want to make it cross platform in the future, using OpenGL. How can I check which platform my engine is running on? And can I use an if statement after I've checked the platform to say which renderer to use? I currently have an abstract renderer and a DirectX9 renderer as its child.
1
Should I use GL GENERATE MIPMAP SGIS or GL GENERATE MIPMAP? I'm using Windows 7 and most people in the group of this specific OpenGL project is too, there's only one member on Windows XP. We are all using Visual Studio 2010 though. I don't know if the OpenGL headers come with VS or with Windows but I would assume it's VS. And since we are all using the same VS version, albeit not the same SKU, I'm also assuming we all have the same OpenGL headers and version running on our systems. That said, the version installed on my system (note that I haven't installed anything OpenGL related) is 2.1 and it does support GL GENERATE MIPMAP, I don't need to resort ot the SGIS extension. However, the Windows headers only support version 1.1. To work around that, I'm using GLEW. I also noticed that both GL GENERATE MIPMAP SGIS and GL GENERATE MIPMAP have the same exact values defined. So my question is, should I do the following on my texture loading routine if(GLEW SGIS generate mipmap) glTexParameteri(GL TEXTURE 2D, GL GENERATE MIPMAP SGIS, GL TRUE) glTexImage2D(GL TEXTURE 2D, 0, glFormat, texWidth, texHeight, 0, glFormat, GL UNSIGNED BYTE, texPixels) else gluBuild2DMipmaps(GL TEXTURE 2D, glFormat, texWidth, texHeight, glFormat, GL UNSIGNED BYTE, texPixels) ...or can I just assume everything will work and just do this glTexParameteri(GL TEXTURE 2D, GL GENERATE MIPMAP, GL TRUE) glTexImage2D(GL TEXTURE 2D, 0, glFormat, texWidth, texHeight, 0, glFormat, GL UNSIGNED BYTE, texPixels) ? I would think that this code would fail (and the game wouldn't probably run) if the OpenGL version doesn't support mipmapping generation like that, but nowadays, most people probably have OpenGL 2.1 on their systems right, probably in Windows XP too (without any development tools installed)? Also, another small question... My laptop has 2 graphics card, one integrated (Intel) and the other dedicated (NVIDIA), with NVIDIA Optimus technology. Mipmapping using the dedicated GPU seems to have better quality. I mean, the integrated card mipmapping seems more pixelized in near planes (I'm talking about a plane terrain texture) than the dedicated one. And when I move the camera, the near pixels "blink" a little (with the integrated card) and it's distracting, while the far pixels look nicely linear. Yes, I'm using GL LINEAR MIPMAP LINEAR for the minifying filter. Anyway to fix this?
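A hedged alternative worth considering: on GL 3.0-class drivers (and on many 2.1 drivers through the framebuffer_object extensions) you can upload the texture first and then ask for mipmaps explicitly, which avoids relying on the GL_GENERATE_MIPMAP texture parameter at all; GLEW can tell you at runtime which path is available. A sketch:

    // Upload the base level first, then generate the mip chain explicitly.
    glTexImage2D(GL_TEXTURE_2D, 0, glFormat, texWidth, texHeight, 0,
                 glFormat, GL_UNSIGNED_BYTE, texPixels);

    if (GLEW_VERSION_3_0 || GLEW_ARB_framebuffer_object)
        glGenerateMipmap(GL_TEXTURE_2D);        // core path
    else if (GLEW_EXT_framebuffer_object)
        glGenerateMipmapEXT(GL_TEXTURE_2D);     // common on GL 2.1 drivers
    else
        gluBuild2DMipmaps(GL_TEXTURE_2D, glFormat, texWidth, texHeight,
                          glFormat, GL_UNSIGNED_BYTE, texPixels);

As for the Intel versus NVIDIA difference at the end of the question, that sounds like filtering quality rather than mipmap generation; enabling anisotropic filtering via GL_TEXTURE_MAX_ANISOTROPY_EXT (where EXT_texture_filter_anisotropic is supported) usually reduces the shimmering on near-ground textures.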
1
Flipping Textures in Modern OpenGL How do I flip a texture in modern OpenGL (3.1 )? This is the unflipped render, keep in mind it is drawn with 8x8 tiles from a texture atlas, so I edited green squares below him into the image to represent it. Rendered without horizontal flip I have tried doing 1 texCoord.x in my vec2 texCoord in my vertex shader. It gave me... this result This is how texture atlas is laid out, if it matters Attempting other methods such as inverting texture coordinates of the VBO gave me similarly garbled results. What should I actually be doing, to flip the texture, or more accurately each individual tile? I know I would also have to change each tile's coordinates, but that's for after.
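Since the tiles come from an atlas, 1 - texCoord.x mirrors the whole atlas, which is why the tiles end up scrambled; to flip a single tile you mirror only within that tile's own UV rectangle, i.e. swap its left and right U values when building the vertex data. A sketch, assuming 8x8-pixel tiles addressed by column and row (the function and parameter names are illustrative):

    #include <algorithm>   // std::swap

    // Compute the UV corners of one tile and optionally mirror it horizontally.
    // tileX/tileY are the tile's column and row; atlasW/atlasH are in pixels.
    void tileUVs(int tileX, int tileY, int atlasW, int atlasH, bool flipX,
                 float& u0, float& v0, float& u1, float& v1)
    {
        const float tile = 8.0f;
        u0 = (tileX * tile)        / atlasW;
        u1 = (tileX * tile + tile) / atlasW;
        v0 = (tileY * tile)        / atlasH;
        v1 = (tileY * tile + tile) / atlasH;
        if (flipX)
            std::swap(u0, u1);     // mirrors this tile only, not the whole atlas
    }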
1
Why does this work in the fragment shader but not in the vertex shader? I'm doing some model view and projection transforms in the vertex shader and I want to determine whether the current vertex will end up on the viewport or not. After searching a bit I found that after the projection transform the coordinates have not been "normalized". So the criteria for a vertex (after the transforms) to appear in the final viewport is the following w lt x lt w w lt y lt w 0 lt z lt w where w is the 4th coordinate of the vertex after the transforms Performing this check I had a great amount of vertices passing the test without being in the viewport and this caused many sorts of problems in whatever I was trying to implement. However, passing the vertices in their local coordinates to the fragment shader and then doing the exact same process with the transforms and the tests everything worked out fine. Why?
1
Deferred lighting and point light volumes I'm doing deferred point light shadow mapping and I am drawing my point lights using light volumes. Normally I access the position diffuse texture normals using this in the fragment shaders vec2 texcoord gl FragCoord.xy UnifPointLightPass.mScreenSize vec3 worldPos texture(unifPositionTexture, texcoord).xyz vec3 normal texture(unifNormalTexture, texcoord).xyz vec3 diffuse texture(unifDiffuseTexture, texcoord).xyz When doing directional lights, I just draw a fullscreen rectangle, but when doing point lights I use light volumes drawing a 3d sphere from the cameras point of view to limit the amount of fragments processed. Does light volumes change the way I get the texcoord?
1
How to get a safe index for glVertexAttribPointer without shader? I'm learning to use VBOs and trying to keep it simple before building up. Trying to do it without writing a shader right now. It looks like this is possible, but I cannot seem to find a way to get the index parameter for glVertexAttribPointer. I've seen that you can get the index with glGetAttribLocation, but that function seems to require a shader be passed in to get the index. I don't want to assign the index, because I'm worried about different pieces of a project accidentally using the same index and causing problems. (If there's some reason I shouldn't be worried about that, please let me know) Question is Without using shaders and without assigning the index, how can I get the index for glVertexAttribPointer?
1
My 3D Shader wont render light direction correctly opengl c glsl This is what I'm doing vertex shader version 400 layout ( location 0 ) in vec3 vertex position layout ( location 1 ) in vec3 vertex normal uniform mat4 ModelViewMatrix model uniform mat3 NormalMatrix transpose(inverse(model))) uniform mat4 priv mat projection view model out vec3 normal out vec3 position void main() normal normalize(NormalMatrix vertex normal) position vec3(ModelViewMatrix vec4(vertex position,1.0)) gl Position priv mat vec4(vertex position,1.0) fragment shader version 400 in vec3 normal in vec3 position uniform vec4 LightPosition light position vec4(0,0,0,1) uniform vec3 LightIntensity uniform vec3 Kd Diffuse reflectivity uniform vec3 Ka Ambient reflectivity uniform vec3 Ks Specular reflectivity uniform float Shininess Shininess vec3 ads( ) vec3 n normal vec3 s normalize( vec3(LightPosition) position ) vec3 v normalize( vec3( position)) vec3 r reflect( s, n ) return LightIntensity ( Ka Kd max( dot(s, n), 0.0 ) Ks pow( max( dot(r,v), 0.0 ), Shininess ) ) void main() gl FragColor vec4(ads(), 1.0) But when I run it, I get this Where the light doesnt just come from the wrong spot, but it also doesnt change when I move the planet around, like so The diffuse value somehow gets the wrong direction? According to the example given (page 90), the diffuse is calculated by getting the dot(s,n), where s is lightposition position, and position is (viewmodelvertex position). But like you can see, the result doesnt shade the direction correctly. Solution The answer was matrix confusion all along. The problem was the ModelViewMatrix, and that It didnt have to be multiplied with view before being sent to the shader. The rest is just correct. Basically I was putting the "position" in the wrong space. Not multiplying with view before sending to the shader fixes it.
1
Vertex shader in OpenGL GLSL transformation of the interior of a textured quad I have a LWJGL project and ran into a problem with a vertex shader I wrote. In my scene I am rendering a map whose ground consists of rectangular tiles. On top of that there are other objects (I used tiny white balls here). Here is a screenshot of my scene screenshot of my scene http 110.imagebam.com download 0 Ktu6XjdWSMpKytTr3Trg 34736 347358711 scene static.jpg You can also click here to see the image. Each of the four larger rectangles at the bottom is one huge quad drawn in one piece. Every one of them contains four coordinates (the corners top left, top right, bottom right, bottom left). Their interior is filled with a texture. The small white balls on top are single game objects each drawn by itself. Note that I aligned them with the vertical edges of the underlying rectangles. I used the following vertex shader to render the scene default shader version 120 void main() gl TexCoord 0 gl MultiTexCoord0 gl Position gl ModelViewProjectionMatrix gl Vertex Now imagine the scene getting covered with water. I modified the shader adding a little water effect water shader version 120 uniform float amplitude uniform float phase void main() gl TexCoord 0 gl MultiTexCoord0 transform x vec4 a position gl Vertex a position.x a position.x amplitude sin(phase a position.x) gl Position gl ModelViewProjectionMatrix a position An animated image of my scene can be found here or here. You can see that the borders of the rectangles keep aligned with the white balls on top of them. However, vertices inside the rectangles (I marked the most distinct areas red) suffer a displacement relative to the white balls. In the last rectangle you can even see the balls cross the white line in the middle. I need the interior of my textured rectangles to be transformed exactly like the white balls on top and move uniformly when being animated by my shader. Please let me know if my problem is clear and if any other information is needed. Thank you very much for your help. Greetings Xoric
1
Where to start learning OpenGL with C++? Possible Duplicate What are some good learning resources for OpenGL? I have learnt C++ and made some cool text based games and such, but I would love to start graphical programming. I'm a decent artist (I will have some of my work below). I know the basics of C++ but I really would like to get into OpenGL. I need someone to show me some good tutorials for OpenGL with C++ so I can really get into game development. My goal is to be able to program a simple 2D game by the end of the year and I have lots of time to do so. I'm enrolled in a game development course next year and really need some help with starting off.
1
How can I create an orthographic display that handles different screen dimensions? I'm trying to create an iPad iPhone game using GLES2.0 that contains a 3D scene with a heads up display GUI overlaid on the top. However, this problem would also apply if I were to port my game to a computer and run the game in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI HUD work I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen and am trying to treat the orthographic projection as an old style 2D display with 0,0 in the upper left and screenWidth,ScreenHeight in the lower right. This causes me all sorts of confusion when I rotate my ipad into Landscape mode since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone or a Retina display since I have to then draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL co ords 1 1 with the screen? While typing out this question it occurs to me that I could keep my origin in the centre, still running 1 1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGLES 2.0 and have a matrix library that has equivalents to the GLES1.1 glOrthof() and glFrustrum() calls.
1
What is the difference between OpenGL 1.x and 2.x? Is there a good tutorial that shows the difference between OpenGL 1. and 2. ? It would be very helpful to know which functions I should not be calling (like glBegin(), I assume).
1
Fastest way to draw quads in OpenGL ES? I am using OpenGL ES 2.0. I have a bunch of quads to be drawn, and would love to be able to pass only 4 vertices per quad as if I were using GL QUADS, but basically I just want to know the best way of drawing a bunch of separate quads. So far what I've found I could do GL TRIANGLES (6 vertices per quad) Degenerate GL TRIANGLE STRIP (6 vertices per quad) Possibilities I've found GL TRIANGLE STRIPS with a special index value that resets the quad strip (this would mean 5 indexes per quad, but I don't think this is possible in OpenGL ES 2.0)
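For what it's worth, the usual approach on ES 2.0 (primitive restart is indeed not available there) is indexed GL_TRIANGLES: 4 unique vertices per quad in the vertex buffer plus 6 small indices per quad in an element buffer, all drawn with one glDrawElements call. A sketch of building the shared index buffer:

    #include <cstdint>
    #include <vector>

    // 6 indices per quad over 4 unique vertices: triangles 0-1-2 and 2-3-0.
    // With 16-bit indices this covers up to 16384 quads per draw call.
    std::vector<uint16_t> buildQuadIndices(int quadCount)
    {
        std::vector<uint16_t> indices;
        indices.reserve(quadCount * 6);
        for (int q = 0; q < quadCount; ++q)
        {
            uint16_t b = static_cast<uint16_t>(q * 4);
            const uint16_t quad[6] = { b, uint16_t(b + 1), uint16_t(b + 2),
                                       uint16_t(b + 2), uint16_t(b + 3), b };
            indices.insert(indices.end(), quad, quad + 6);
        }
        return indices;
    }
    // Then: glDrawElements(GL_TRIANGLES, quadCount * 6, GL_UNSIGNED_SHORT, 0);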
1
Recurring stalls in SwapBuffer I am currently doing measurements on a test scene implemented in OpenGL 4.5 with different "setups". I noticed that when I have a very high FPS (1550 in this specific case), SwapBuffer stalls about every 20th frame and the frame time increases from 0.5ms to 1.2ms. This ONLY happens when the FPS is above a certain threshold. I already figured out that it is not related to pre rendered frames (I disabled pre rendered frames in the NV control panel) and it is also not related to vsync (wglSwapIntervalEXT(0) is used). Can someone tell me what might be causing these stalls, lags or whatever? I attached two screenshots made with Nsight. The first shows a "healthy" setup with no stalls, while the second shows the setup with the stalls
1
Casting a Matrix4 class to float glUniformMatrix4fv(glGetUniformLocation(program, "projMatrix"), 1, false, (float*) &projMatrix) projMatrix is an object of type Matrix4 where the first variable declared is a float array. Does (float*) &projMatrix therefore somehow retrieve this array? What does the casting appear to be doing?
1
OpenGL object movement is not smooth and vibrating In my android NDK OpenGL C project, I have a render method which executes every frame on draw event so this is the algorithm void Engine render() deltaTime GetCurrentTime() lastFrame glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) update() renderDepthMaps() renderMeshes() if (skybox ! nullptr) shaders.drawSkyBox(skybox, camera, width, height) lastFrame GetCurrentTime() first I calculate the delta time between the last frame and the current frame, then I update all transformation and view matrices from input then render the scene so the game loop depends on the android draw frames, I have an object which moves over a terrain and a third person camera moves with it and rotates around it, so after the object moves for some distance it begins to flicker forward and backward, the update function for the object is double amp delta engine.getDeltaTime() GLfloat velocity delta movementSpeed glm vec3 t(glm vec3(0, 0, 1) velocity 3.0f) matrix glm translate(matrix, t) glm vec3 f(matrix 2 0 , matrix 2 1 , matrix 2 2 ) f glm normalize(f) velocity 3.0f camera gt translate(f) If it is an interpolation issue I don't know how to make interpolation with using translate matrix to its forward vector.
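One thing worth checking (this is a guess, not a confirmed diagnosis): lastFrame is set at the very end of render(), so deltaTime only measures the gap between the end of the previous frame and the start of the current one and misses the render time itself; when the per-frame cost varies, that can produce exactly this kind of back-and-forth jitter. Sampling the clock once at the top keeps the step consistent. A sketch of that pattern using the same method names as the question:

    // Sample the clock once per frame; deltaTime then covers the whole previous
    // frame (update + render + present) instead of only part of it.
    void Engine::render()
    {
        double now = GetCurrentTime();
        deltaTime  = now - lastFrame;
        lastFrame  = now;                 // set immediately, not at the end

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        update();
        renderDepthMaps();
        renderMeshes();
        if (skybox != nullptr)
            shaders.drawSkyBox(skybox, camera, width, height);
    }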
1
Object outlining, stencil is not working WebGL I can't seem to figure out my problem with stencil buffer in object outlining algo. function init() ... gl.enable(gl.STENCIL TEST) gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE) ... var mscaled scaled model matrix of a cube function render() gl.clear(gl.COLOR BUFFER BIT gl.DEPTH BUFFER BIT gl.STENCIL BUFFER BIT) gl.stencilFunc(gl.ALWAYS, 1, 0xFF) gl.stencilMask(0xFF) use program1 set attrib pointers bind cube's ebo buffer gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED SHORT, 0) gl.stencilFunc(gl.NOTEQUAL, 1, 0xFF) gl.stencilMask(0x00) set scaled model matrix mat4.fromRotationTranslationScale(mscaled, cube.rotation, cube.translation, vec3.fromValues(1.1, 1.1, 1.1)) use program2 set attrib pointers bind cube's ebo buffer gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED SHORT, 0) And it just draws the whole second cube on top of the first, like there's no stencil test. I've tested with up scale and down scale. the red cube should be the outliner. In the first image it is scaled up but WebGL draws it just on top of the first one, in the second image it is scaled down ideally it shouldn't been even drawn due to stencil test (of course, depth test is disabled in the case of a second image)
1
OpenGL application, not working on AMD. Recently, I moved an app I am building from a computer with Intel amp Nvidia to a computer with AMD CPU amp GPU. The problem is, that glDrawElements gives me bad access location and I can't figure out why. The render function is the following template lt class PositionData gt void drawTexturedMesh(const PositionData amp pd, const IndexedFaceMesh amp mesh, const unsigned int offset, const float const color, GLuint text) draw mesh const unsigned int faces mesh.getFaces().data() const unsigned int nFaces mesh.numFaces() const Vector3r vertexNormals mesh.getVertexNormals().data() const Vector2r uvs mesh.getUVs().data() Update our buffer data passed to GPU glBindBuffer(GL ARRAY BUFFER, vbo 0 ) glBufferSubData(GL ARRAY BUFFER, 0, sizeof(double) 3 pd.size(), amp pd.getPosition(0) 0 ) glBindBuffer(GL ARRAY BUFFER, 0) glBindBuffer(GL ARRAY BUFFER, vbo 1 ) glBufferSubData(GL ARRAY BUFFER, 0, sizeof(double) 2 mesh.getUVs().size(), amp uvs 0 0 ) glBindBuffer(GL ARRAY BUFFER, 0) glBindBuffer(GL ARRAY BUFFER, vbo 2 ) glBufferSubData(GL ARRAY BUFFER, 0, sizeof(double) 3 mesh.getVertexNormals().size(), amp vertexNormals 0 0 ) glBindBuffer(GL ARRAY BUFFER, 0) Binding element array and drawing. numFaces 3 triangles per face. glBindBuffer(GL ELEMENT ARRAY BUFFER, ibo) glBufferSubData(GL ELEMENT ARRAY BUFFER, 0, sizeof(unsigned int) mesh.getFaces().size(), mesh.getFaces().data()) glDrawElements(GL TRIANGLES, 3 nFaces, GL UNSIGNED INT, BUFFER OFFSET(0)) glBindBuffer(GL ELEMENT ARRAY BUFFER, 0) Math, vectors and matrices are from Eigen. Any ideas or any other code part you would like to see? Bare in mind that this exact code works fine with nVidia. Thanks.
1
Draw line around mesh with defined width at cut plane I want to display a solid line around a mesh with a defined width at a cut plane. Currently I have implemented it using the same dot() based test that gl ClipDistance uses. However, the output is not what I want, since the line width changes depending on the angle between the current triangle and the plane. There are also some bad aliasing effects. My guess is that the abs(factor) < 0.1 is wrong here, but I also don't know any alternative. I'm fairly new to shaders... How can I solve that issue? Current output It should look similar to that output Fragment Shader version 330 core out vec4 fragColor in vec3 Normal in vec3 FragPos uniform vec4 clipPlane0 void main() float factor dot(vec4(FragPos, 1.0), clipPlane0) if (abs(factor) < 0.1) fragColor.rgb vec3(1, 0, 0) else fragColor.rgb vec3(0, 1, 0)
1
Procedural terrain using 3D noise I'm coding a procedural terrain generation based on this article from GPU Gems 3. But using CPU (not GPU). I'm stuck at generating the procedural terrain. I just can't figure out how make a 3D texture the way it looks in the article (fig. 1 10). Am I missing something big here? Here is the code float CTerrain DensityFunction(float x, float y, float z) CPerlin3DTexture perl tex float noise uint32 t octaves 3 float amplitude 10 0.25,0.5,1.0,1.9,4.1,8.1,16.1,9,10 float lacunarity 1.5 float freq 10 4.03,1.96,1.01,0.51,0.2487,0.12563,0.8013,0.39913, 0, 0 uint32 t i noise 0.0 for(i 0 i lt octaves i ) noise perl tex.noise(x freq i , y freq i , z freq i ) amplitude i x lacunarity y lacunarity z lacunarity return noise Values (nx,ny,nz) (128,128,10) float CTerrain GenerateDensityTexture(uint32 t nx, uint32 t ny, uint32 t nz) float step 3 int32 t i, j, k step 0 1.0 (float)nx step 1 1.0 (float)ny step 2 1.0 (float)nz for(i 1 i lt nx 1 i ) for(j 1 j lt ny 1 j ) for(k 1 k lt nz 1 k ) tex i j k DensityFunction((float)i step 0 , (float)j step 1 , (float)k step 2 ) return tex Help, anybody? This is what I get from the above code
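For comparison, a conventional fBm accumulation keeps one running frequency and one running amplitude and scales them by the lacunarity and gain every octave, instead of indexing two separate hard-coded tables; that is what produces the multi-scale density field the GPU Gems chapter shows. A minimal sketch on top of your CPerlin3DTexture (the gain and lacunarity values are just typical defaults, not taken from the article):

    // Fractal Brownian motion: frequency grows by 'lacunarity' and amplitude
    // shrinks by 'gain' each octave. noise() is assumed to return roughly [-1, 1].
    float fbm(CPerlin3DTexture& perl_tex, float x, float y, float z,
              int octaves = 5, float lacunarity = 2.0f, float gain = 0.5f)
    {
        float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
        for (int i = 0; i < octaves; ++i)
        {
            sum += amplitude * perl_tex.noise(x * frequency,
                                              y * frequency,
                                              z * frequency);
            frequency *= lacunarity;
            amplitude *= gain;
        }
        return sum;
    }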
1
Is there any performance benefit to sharing shaders between programs? OpenGL allows you to share the same shader between multiple programs. Aside from saving small amounts of memory and a shader handle, are there any GPU side performance benefits to doing this?
1
Image effects No need for Framebuffer? Just use Textures and Shaders? I am doing simple image effects, and I always see in examples that people are binding textures to framebuffers. Why can't I just use textures? So the process would be 1) Input Texture 2) Shader Do processing in this shader 3) Render to the same Texture So no need for a framebuffer? This step is done completely linearly... no off screen rendering. So can I say that I don't need a framebuffer in this scenario? Thanks!
1
OpenGL LWJGL3 Matrix4x4 not rotating correctly I tried today to make my own matrix4f class because of that LWJGL 3 does not include a class for it. So I arrived at rotation and it does not seem to work. I tried using the old util from LWJGL 2 and then it worked fine. Here's a screenshot of how it looks like. Here's a gif animation of how it looks. It's supposed to be a cube but as you can see it stretches when it rotates. Not only the x axis but the y and z axis to. Here's the code public Matrix4f rotate(Vector3f rotation) Matrix4f xRotation new Matrix4f() xRotation.setIdentity() float x rotation.x float y rotation.y float z rotation.z xRotation.m11 (float) cos(x) xRotation.m21 (float) sin(x) xRotation.m12 (float) sin(x) xRotation.m22 (float) cos(x) Matrix4f yRotation new Matrix4f() yRotation.setIdentity() yRotation.m00 (float) cos(y) yRotation.m20 (float) sin(y) yRotation.m02 (float) sin(y) yRotation.m22 (float) cos(y) Matrix4f zRotation new Matrix4f() zRotation.setIdentity() zRotation.m00 (float) cos(z) zRotation.m10 (float) sin(z) zRotation.m01 (float) sin(z) zRotation.m11 (float) cos(z) return multilpy(xRotation).multilpy(yRotation).multilpy(zRotation) Can someone point the problem because I cannot find it. Edit The multiplication and the projection code public Matrix4f multilpy(Matrix4f matrix) Row 0 m00 m00 matrix.m00 m10 matrix.m01 m20 matrix.m02 m30 matrix.m03 m10 m00 matrix.m10 m10 matrix.m11 m20 matrix.m12 m30 matrix.m13 m20 m00 matrix.m20 m10 matrix.m21 m20 matrix.m22 m30 matrix.m23 m30 m00 matrix.m30 m10 matrix.m31 m20 matrix.m32 m30 matrix.m33 Row 1 m01 m01 matrix.m00 m11 matrix.m01 m21 matrix.m02 m31 matrix.m03 m11 m01 matrix.m10 m11 matrix.m11 m21 matrix.m12 m31 matrix.m13 m21 m01 matrix.m20 m11 matrix.m21 m21 matrix.m22 m31 matrix.m23 m31 m01 matrix.m30 m11 matrix.m31 m21 matrix.m32 m31 matrix.m33 Row 2 m02 m02 matrix.m00 m12 matrix.m01 m22 matrix.m02 m32 matrix.m03 m12 m02 matrix.m10 m12 matrix.m11 m22 matrix.m12 m32 matrix.m13 m22 m02 matrix.m20 m12 matrix.m21 m22 matrix.m22 m32 matrix.m23 m32 m02 matrix.m30 m12 matrix.m31 m22 matrix.m32 m32 matrix.m33 Row 3 m03 m03 matrix.m00 m13 matrix.m01 m23 matrix.m02 m33 matrix.m03 m13 m03 matrix.m10 m13 matrix.m11 m23 matrix.m12 m33 matrix.m13 m23 m03 matrix.m20 m13 matrix.m21 m23 matrix.m22 m33 matrix.m23 m33 m03 matrix.m30 m13 matrix.m31 m23 matrix.m32 m33 matrix.m33 return this public static Matrix4f perspective(int width, int height, int fov, float zFar, float zNear) Matrix4f matrix new Matrix4f() float aspectRatio (float) width (float) height float fovY (float) ((1f tan(Math.toRadians(fov 2f))) aspectRatio) float fovX fovY aspectRatio float frustum length zFar zNear matrix.m00 fovX matrix.m11 fovY matrix.m22 ((zFar zNear) frustum length) matrix.m23 1 matrix.m32 ((2 zNear zFar) frustum length) matrix.m33 0 return matrix Vertex Shader version 400 core in vec3 position in vec2 textureCoord out vec2 textureCoords uniform mat4 tranformation uniform mat4 projection void main(void) gl Position projection tranformation vec4(position,1.0) textureCoords textureCoord Fragment Shader version 400 core in vec2 textureCoords out vec4 out Color uniform sampler2D sampler void main(void) vec4 color texture(sampler, textureCoords) if(color.a lt 0.5) discard out Color texture(sampler, textureCoords)
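One thing that may be worth a look (offered as a suggestion, since only the posted excerpt is visible): multilpy() writes its results into m00, m10, ... while the remaining rows still read those fields, so the later sums use already-overwritten values; multiplying into a temporary matrix and copying back (or returning it) avoids that. A sketch of the safe pattern in C++ notation, assuming a column-major m[column][row] layout; the same structure translates directly to the Java Matrix4f fields:

    // r = a * b, computed into a separate result so no operand is read after it
    // has been partially overwritten.
    Matrix4f multiply(const Matrix4f& a, const Matrix4f& b)
    {
        Matrix4f r;
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row)
            {
                float sum = 0.0f;
                for (int k = 0; k < 4; ++k)
                    sum += a.m[k][row] * b.m[col][k];
                r.m[col][row] = sum;
            }
        return r;
    }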
1
Rendering objects with either normal maps, either specular maps, or with both, or with neither? My hobby engine has a deferred renderer that supports normal maps and specular maps. Now, some objects may have normal maps, and some may have specular maps. In some cases, an object has both maps, and in some cases, it has neither. The question is how should I implement the rendering of these objects? Should I have a render queue for each different object type and render them with separate shaders like this Queue A Objects without normal and specular map Queue B Objects with normal map, without specular map Queue C Objects without normal map, with specular map Queue D Objects with both normal map and specular map Render loop bind shader for type 'A' objects for each object in Queue A render object bind shader for type 'B' objects for each object in Queue B render object and so forth... Or, should I use a single shader and bind a "default" normal map and specular map for those objects that do not have such maps? By a default map, I mean for example a normal map texture that's completely colored (128, 128, 255). This would be something like this bind shader bind default normal map texture bind default specular map texture for each object in Queue A render object for each object in Queue B bind object's normal map render object bind default normal map texture for each object in Queue C bind object's specular map render object for each object in Queue D bind object's normal map bind object's specular map render object Basically, the first approach would involve less texture binds and more shader binds, whereas the second approach would be the opposite. Is either of these a preferred way to approach the problem? Or have I missed something completely here? You can assume the objects are queued correctly to the queues.
1
Texture object and texture unit in GL As I understand it, texture usage consists of two parts How to store the discrete data about the texture internally how many dimensions, channels, etc. How to fetch sample filter it The questions relative to OpenGL are q1 What parameters are stored in the texture object (glGenTextures) and what parameters are stored in the texture unit (glActiveTexture)? q2 Does glTexParameter perform setup per texture object or per texture unit?
1
What files and libraries do I need for OpenGL and controls I am starting a 3D game, however I find the OpenGL file setup very confusing. One OpenGL package doesn't have one file, another package doesn't have another, so I need a summary of which files I need to start playing with 3D. It would be good if it included keyboard and mouse controls. Shortened question Where can I get all the files for OpenGL 4.0 and an integrated controls library?
1
FBO Depth Buffer not working I'm trying to get the depth buffer for my 2D game working by offsetting the z value of the rectangles. For some reason, my depth buffer is coming back empty. The value is always 0. I'm assuiming there is something wrong with how I attach the depth buffer to the FBO? But I've looked over that code many times and don't see anything wrong with it. Let me know if you need more information. I have depth testing enabled with GL LEQUAL. I have znear and zfar set to 1.f, 10.f and I have 9 quads setup at three different depths. I have set glClearDepth(1.0) and I clear bot the COLOR BUFFER BIT and DEPTH BUFFER BIT before I draw to the FBO, however the value is still 0. FBO setup RenderTarget RenderTarget(int width, int height) mWidth width mHeight height mVbo 0 Create the color buffer glGenTextures(1, amp mTextureId) glBindTexture(GL TEXTURE 2D, mTextureId) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA, mWidth, mHeight, 0, GL RGBA, GL UNSIGNED BYTE, NULL) Create the depth buffer glGenTextures(1, amp mDepth) glBindTexture(GL TEXTURE 2D, mDepth) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexImage2D(GL TEXTURE 2D, 0, GL DEPTH COMPONENT32, mWidth, mHeight, 0, GL RED, GL BYTE, NULL) glGenRenderbuffers(1, amp mDepth) glBindRenderbuffer(GL RENDERBUFFER, mDepth) glRenderbufferStorage(GL RENDERBUFFER, GL DEPTH COMPONENT32, mWidth, mHeight) Create the frame buffer glGenFramebuffers(1, amp mFbo) glBindFramebuffer(GL FRAMEBUFFER, mFbo) glFramebufferTexture2D(GL FRAMEBUFFER, GL COLOR ATTACHMENT0, GL TEXTURE 2D, mTextureId, 0) glFramebufferTexture2D(GL FRAMEBUFFER, GL DEPTH ATTACHMENT, GL TEXTURE 2D, mDepth, 0) GLenum err glCheckFramebufferStatus(GL FRAMEBUFFER) assert(err GL FRAMEBUFFER COMPLETE) glBindFramebuffer(GL FRAMEBUFFER, 0) createVbo() Fragment shader version 120 uniform sampler2D diffuseMap uniform sampler2D lightmap uniform sampler2D depthMap varying vec4 texCoord 2 void main() vec4 color texture2D(diffuseMap, gl TexCoord 0 .st) vec4 light texture2D(lightmap, gl TexCoord 0 .st) vec4 depth texture2D(depthMap, gl TexCoord 0 .st) vec4 final color vec4(0.1) final color light gl FragColor vec4(depth.r, depth.r, depth.r, 1.0) Method to draw sprites to the screen framebuffer void GraphicsDevice drawSprite(ISprite sprite, float x, float y, float z, Material TextureType type, Color c) glActiveTexture(GL TEXTURE0) glEnableClientState(GL VERTEX ARRAY) glEnableClientState(GL TEXTURE COORD ARRAY) glBindBuffer(GL ARRAY BUFFER, sprite gt getVbo()) glBindTexture(GL TEXTURE 2D, sprite gt getMaterial() gt getTexture(type) gt getId()) glPushMatrix() glTranslatef(x, y, z) glColor4f(c.r, c.g, c.b, c.a) glVertexPointer(3, GL FLOAT, sizeof(Vertex), (void )offsetof(Vertex, x)) glTexCoordPointer(2, GL FLOAT, sizeof(Vertex), (void )offsetof(Vertex, tx)) glDrawArrays(GL TRIANGLE STRIP, 0, 4) glPopMatrix() glBindBuffer(GL ARRAY BUFFER, 0) glDisableClientState(GL TEXTURE COORD ARRAY) glDisableClientState(GL VERTEX ARRAY)
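Two things stand out in the setup code (offered as observations to check, not a guaranteed fix): mDepth is first created as a texture and then passed to glGenRenderbuffers, which overwrites the id, so the name attached as GL_DEPTH_ATTACHMENT is no longer the texture that was configured; and a GL_DEPTH_COMPONENT32 texture should be specified with the external format GL_DEPTH_COMPONENT rather than GL_RED. A sketch of a depth-texture attachment along those lines:

    // Depth texture attachment: one id, depth-typed upload, then attach.
    glGenTextures(1, &mDepth);
    glBindTexture(GL_TEXTURE_2D, mDepth);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, mWidth, mHeight, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

    glGenFramebuffers(1, &mFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, mFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, mTextureId, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, mDepth, 0);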
1
Confused about why my projection matrix works My projection matrix was buggy. I'm not great at mathematics, but I checked it against the songho tutorial, and the broken one seems correct to me, but switching nearplane to farplane seems to have fixed it. What am I missing? My nearplane and farplane values are positive, nearplane is small about 0.01, 1.0f the last time I ran against both farplane is usually relatively large about 1000.0f, 500.0f the last time I ran it against both. f32 l left f32 r right f32 t top f32 b bottom f32 n nearplane f32 f farplane m4x4 Result TODO why did changing n to f in 0 and 5 fix it? and make sure it is fixed if 0 works 2 f (r l), 0, (r l) (r l), 0, 0, 2 f (t b), (t b) (t b), 0, 0, 0, (f n) (f n), 2 f n (f n), 0, 0, 1, 0, else doesn't (2 n) (r l), 0, (r l) (r l), 0, 0, (2 n) (t b), (t b) (t b), 0, 0, 0, (f n) (f n), ( 2 f n) (f n), 0, 0, 1, 0, endif
1
shadow mapping and linear depth I'm implementing ominidirectional shadow mapping for point lights. I want to use a linear depth which will be stored in the color textures (cube map). A program will contain two filtering techniques software pcf (because hardware pcf works only with depth textures) and variance shadow mapping. I found two ways of storing linear depth const float linearDepthConstant 1.0 (zFar zNear) first float moment1 viewSpace.z linearDepthConstant float moment2 moment1 moment1 outColor vec2(moment1, moment2) second float moment1 length(viewSpace) linearDepthConstant float moment2 moment1 moment1 outColor vec2(moment1, moment2) What are differences between them ? Are both ways correct ? For the standard shadow mapping with software pcf a shadow test will depend on the linear depth format. What about variance shadow mapping ? I implemented omnidirectional shadow mapping for points light using a non linear depth and hardware pcf. In that case a shadow test looks like this vec3 lightToPixel worldSpacePos worldSpaceLightPos vec3 aPos abs(lightToPixel) float fZ max(aPos.x, max(aPos.y, aPos.z)) vec4 clip pLightProjection vec4(0.0, 0.0, fZ, 1.0) float depth (clip.z clip.w) 0.5 0.5 float shadow texture(ShadowMapCube, vec4(normalize(lightToPixel), depth)) I also implemented standard shadow mapping without pcf which using second format of linear depth (Edit 1 i.e. distance to the light some offset to fix shadow acne) vec3 lightToPixel worldSpacePos worldSpaceLightPos const float linearDepthConstant 1.0 (zFar zNear) float fZ length(lightToPixel) linearDepthConstant float depth texture(ShadowMapCube, normalize(lightToPixel)).x if(depth lt fZ) shadow 0.0 else shadow 1.0 but I have no idea how to do that for the first format of linear depth. Is it possible ? Edit 2 For non linear depth I used glPolygonOffset to fix shadow acne. For linear depth and distance to the light some offset should be add in the shader. I'm trying to implement standard shadow mapping without pcf using a linear depth ( viewSpace.z linearDepthConstant offset) but following shadow test doesn't produce correct results vec3 lightToPixel worldSpacePos worldSpaceLightPos vec3 aPos abs(lightToPixel) float fZ max(aPos.x, max(aPos.y, aPos.z)) vec4 clip pLightProjection vec4(0.0, 0.0, fZ, 1.0) float fDepth (clip.z clip.w) 0.5 0.5 float depth texture(ShadowMapCube, normalize(lightToPixel)).x if(depth lt fDepth) shadow 0.0 else shadow 1.0 How to fix that ?
1
Nothing gets rendered in SceneKit I have this code in OpenGL Vuforia Matrix44F modelViewProjection VuforiaApplicationUtils translatePoseMatrix(0.0f, 0.0f, self.scale, amp Vuforia modelViewMatrix.data 0 ) VuforiaApplicationUtils scalePoseMatrix(self.scale, self.scale, self.scale, amp Vuforia modelViewMatrix.data 0 ) VuforiaApplicationUtils multiplyMatrix( amp projectionMatrix.data 0 , amp Vuforia modelViewMatrix.data 0 , amp modelViewProjection.data 0 ) And the modelViewProjection gets used like this glUniformMatrix4fv(mvpMatrixHandle, 1, GL FALSE, (const GLfloat ) amp modelViewProjection.data 0 ) With mvpMatrixHandle being a piece of code in the shader mvpMatrixHandle glGetUniformLocation(shaderProgramID, "modelViewProjectionMatrix") And in the shader itself gl Position modelViewProjectionMatrix vertexPosition Now I try to convert this to SceneKit code (never used it before, so I might miss a big part of my SceneKit code) Setup let plane SCNNode(mdlObject MDLAsset(url Bundle.main.url(forResource "Bubble", withExtension "obj")!).object(at 0)) scene.rootNode.addChildNode(plane) let camera SCNCamera() plane.camera camera Each frame modelViewMatrix SCNMatrix4Translate(modelViewMatrix, 0, 0, scale) modelViewMatrix SCNMatrix4Scale(modelViewMatrix, scale, scale, scale) let modelViewProjection SCNMatrix4Mult(arrayToMatrix(pose projectionMatrix), modelViewMatrix) camera.projectionTransform modelViewProjection scene.rootNode.childNodes 0 .transform modelViewProjection As you can see, I have also tried to set this as the projectionTransform of the camera, but that did not work either. My result is the most annoying to debug nothing gets rendered using the SceneKit code, while the OpenGL code works.
1
OpenGL ES high quality 2D scaling Let's say I'm making a 2D game and I want to implement a zoom in out feature. Normally this is as simple as modifying the projection matrix to get more or less of the world to show. However, this results in quite blurry resizing, and I suspect the culprit is the GL LINEAR parameter you supply to glTexParameteri as explained here. I've done my fair share of video resizing and I know much better algorithms exist such as bicubic filtering, lanczos, spline, etc, however the only one OpenGL offers is (tri?)linear. What would be the best way of improving 2d scaling? I suppose this would involve shaders but I'm not familiar with the approach. Any reference to working code would be a nice plus.
1
OpenGL vs OGRE Which is best for a beginner? I am interested in getting into game development and possess good C/C++ programming skills. I have tried OGRE before, and I am curious whether I should learn OGRE or OpenGL as a starting point. From my understanding, OGRE is a wrapper for OpenGL and DirectX. So, supposing that I learn OGRE, do I still need to learn OpenGL as well to develop a game? Also, can someone point me to a good tutorial for OpenGL? I found that OGRE is well documented (it even comes with the distribution!) but for OpenGL, most of the tutorials I came across are outdated.
1
Manage VBO VAO in a graphic engine I'm trying to make a 2D graphics engine to train myself. I've made it with immediate drawing so far, and I've kept the renderer separate (so I can switch between OpenGL and DirectX). How can I manage Vertex Buffer Objects and Vertex Array Objects? I've made a geometry object, and I don't think the VBO and VAO need to be there. Is it the work of my renderer to manage the scene? (Group objects into a large VBO, hide objects that are off screen, order objects by transparency, ...) More explanation of my architecture Spacial a spatial element containing spatial elements (like a node). Mesh an object with a geometry and a material. Scene manages spatial elements (like meshes) and lights. Renderer draws the given scene (meshes and lights). Where should I manage buffers (index buffer, vertex buffer object and vertex array object)? At first I started to put them in the Geometry class, but that doesn't seem right because one buffer can hold multiple Geometry objects. So I'm thinking of putting the buffers in a buffer manager (in the scene object) the Scene can manage meshes (ordered static dynamic to regroup them in buffers). What do you think about that? Thanks!
1
Efficiently color a procedural mesh? I'm creating a procedural world with LWJGL and GLSL. I want to better visualize the biome map being produced and the height map it creates, but my attempts so far have been very inefficient. My first idea was to make a vec3 holding the biome color for each vertex, but that very quickly took the game from 150 fps to 10 15. The time to render a 512x512 chunk went from around 3 per second to one per five seconds. As you could expect, there was also a problem with it very quickly using up 12gb of ram. I then thought about making an int array for each vertex where each point of the mesh would be able to use as a dictionary to grab a vec3 for the color. So 512 512 integers in an array getting from an array of 15 colors intead of a single array of 512 512 vec3s. This seemed even worse somehow in both FPS and chunk rendering time. To summarize again, I'm trying to color each vertex with its own color to visually show the biomes. What is a more efficient way to do this?
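For what it's worth, a per-vertex color does not have to be a vec3 of floats: four normalized unsigned bytes per vertex (or even a single byte biome index looked up in a tiny palette texture or uniform array in the shader) adds only 4 bytes per vertex, which is usually negligible next to the position data. A sketch of the packed-byte attribute (the struct and attribute locations are assumptions):

    #include <cstddef>   // offsetof

    struct TerrainVertex
    {
        float         position[3];   // 12 bytes
        unsigned char color[4];      // 4 bytes, RGBA in 0..255
    };

    // Location 0: position, location 1: color. GL_TRUE normalizes the bytes to
    // 0..1, so the vertex shader just declares "in vec4 biomeColor;".
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(TerrainVertex),
                          (void*)offsetof(TerrainVertex, position));
    glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(TerrainVertex),
                          (void*)offsetof(TerrainVertex, color));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);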
1
Unpacking Sprite Sheet Into 2D Texture Array I am using WebGL 2. A tag for it does not exist but it should. I have a 10x10 sprite sheet of squares that are 16x16 pixels in size (all in one PNG image). I'd like to create a 2D texture array out of them, where each 16x16 square gets its own, unique Z depth value. let texture gl.createTexture() let image new Image() image.onload function() gl.bindTexture(gl.TEXTURE 2D ARRAY, texture) gl.pixelStorei(gl.UNPACK FLIP Y WEBGL, false) gl.texStorage3D(gl.TEXTURE 2D ARRAY, 5, gl.RGBA, 16, 16, NUM IMAGES) Now what? gl.texSubImage3D doesn't let me copy in a section of the src image image.src "https source url.fake image.png" I know that gl.texSubImage3D exists but it only accepts an entire image as a source? glTexSubImage3D https www.khronos.org registry OpenGL Refpages gl2.1 xhtml glTexSubImage3D.xml
1
Is Batching Geometry Every Frame Always Slower Than Individual Draw Calls I currently have an application that has 10k draw calls. I implemented a batching scheme where I group all objects that share material, vertex format, etc., pre transform them by their world matrices and put all those objects in one vertex buffer index buffer, and then draw them with DrawIndexedPrimitive (I do this every frame, since some objects might have moved or changed material, etc.). However, this method is always slower than if I just had multiple draw calls. Using VTune, it seems most of my time is spent computing the transformed vertices and copying the vertex index data into the buffers. My question is Is this expected? Do systems that perform batching typically cache the batched data, or are there some that get away with batching every frame like I am attempting to do (and if so, are there any tips for what I might be doing poorly)? My application is in DirectX 9, but I'm curious about solutions on other platforms as well (OpenGL or DirectX 11).
1
In OpenGL what's quicker, lots of smaller VAOs, or one large one updated each frame? In my game engine, a mesh can be made of many submeshes. These submeshes may or may not share vertex data with the rest of the mesh if they don't, they have their own vertex data array. I've noticed that rendering a mesh with many submeshes that don't share vertices is incredibly slow. This is because each submesh has its own VAO (and vertex index VBO pair). The alternative is to gather the vertices and indices into one pair of VBOs and update them every frame. But will this be much faster? Is it faster to create one large VAO and update it every frame, or to render many small ones which don't need updating each frame?
1
Common light map practices My scene consists of individual meshes. At the moment each mesh has its associated light map texture, and I was able to implement the light mapping using these many small textures. 1) Of course, I want to create an atlas, but how do you split atlases into pages? I mean, do you group the light maps of objects that are close to each other, and load light maps on the fly if the scene is expected to be big? 2) The 3d authoring software provides automatic uv coordinates for each mesh in the scene, but there are empty areas in the texel space, so if I scale the texture polygons the texel density of each face will not match other meshes if I create an atlas like that there will be varying light map resolution. How do you solve this just leave it as it is, or ignore resolution?
1
Missing any kind of lights? I am trying to make my first 3D game in OpenGL with Shaders (LWJGL3) I read and follow a lot of tutorials but can't figure it out what kind of light I am missing here? The wall doesn't get any effect from the light It is supposed to be like this one And my point light part in fragment vec4 calcLightColour(vec3 light colour, float light intensity, vec3 position, vec3 to light dir, vec3 normal) vec4 diffuseColour vec4(0, 0, 0, 0) vec4 specColour vec4(0, 0, 0, 0) Diffuse Light float diffuseFactor max(dot(normal, to light dir), 0.0) diffuseColour diffuseC vec4(light colour, 1.0) light intensity diffuseFactor Specular Light vec3 camera direction normalize( position) vec3 from light dir to light dir vec3 reflected light normalize(reflect(from light dir , normal)) float specularFactor max( dot(camera direction, reflected light), 0.0) specularFactor pow(specularFactor, specularPower) specColour speculrC light intensity specularFactor material.reflectance vec4(light colour, 1.0) return (diffuseColour specColour) vec4 calcPointLight(PointLight light, vec3 position, vec3 normal) vec3 light direction light.position position vec3 to light dir normalize(light direction) vec4 light colour calcLightColour(light.colour, light.intensity, position, to light dir, normal) Apply Attenuation float distance length(light direction) float attenuationInv light.att.constant light.att.linear distance light.att.exponent distance distance return light colour attenuationInv
1
Transparency problem, phong model It's the first time I'm trying to implement the Phong lighting model. I'm pretty sure everything is working fine. I was experimenting using different materials, meaning I played with Kd,Ks,Ka and Ns values when I came across this problem. Whenever I use a material with alpha value less than one I get this weird result At first I thought it has to do with the normals of the model since they are all supposed to point away from the model but it doesn't seem to be the case. Any ideas?
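If it helps, an alpha below 1 in the material only produces proper transparency when the blending state is set up and the transparent mesh is drawn after the opaque geometry, with depth writes disabled (and ideally sorted roughly back to front); otherwise the mesh's own triangles depth-occlude each other and you get exactly this kind of patchy result. A sketch of the typical render state for the transparent pass:

    // Opaque pass first (normal depth test and write), then the transparent pass:
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);    // still test against opaque depth, but don't write

    // ... draw transparent meshes here, sorted back to front ...

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);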
1
What is the advantage of OpenGL's direct state access mechanism? I've been reading about OpenGL 4.5 Direct State Access (DSA) at opengl.org and not sure if I'm getting it right. It seems to imply, that the old way is less efficient glBind(something) glSetA(..) glSetB(..) glSetC(..) than the new way glSetA(something, ..) glSetB(something, ..) glSetC(something, ..) From the looks of it now each glSet has to include glBind(something) inside of it and if OpenGL still being a state machine cannot take advantage of streamed changes applied to a single something. Please explain the reasoning behind and advantages of the new DSA.
1
glReadPixels with GL DEPTH COMPONENT into PBO is slow I need to read depth buffer back to cpu memory. It may be few frames old, so I use glReadPixels with a buffer bound to GL PIXEL PACK BUFFER. I use several buffers and ping pong them. Finally, I read from the PBO with glGetBufferSubData. I have tried creating the buffers with GL STATIC READ, GL DYNAMIC READ and GL STREAM READ all with same results. Unfortunately, all this is still horribly slow. class DepthBuffer private static const uint32 PboCount 2 Buffer buffer uint32 index uint32 pbo PboCount uint32 w PboCount , h PboCount public DepthBuffer() DepthBuffer() void performCopy(uint32 fbo, uint32 w, uint32 h) returns 0..1 in logarithmic depth float valuePix(uint32 x, uint32 y) pix in 0..(cw ch 1) float valueNdc(float x, float y) ndc in 1..1 DepthBuffer DepthBuffer() index(0), pbo 0, 0 , w 0, 0 , h 0, 0 glGenBuffers(PboCount, pbo) DepthBuffer DepthBuffer() glDeleteBuffers(PboCount, pbo) void DepthBuffer performCopy(uint32 fbo, uint32 paramW, uint32 paramH) copy framebuffer to pbo glBindBuffer(GL PIXEL PACK BUFFER, pbo index ) if (w index ! paramW h index ! paramH) glBufferData(GL PIXEL PACK BUFFER, paramW paramH sizeof(float), nullptr, GL STREAM READ) w index paramW h index paramH glBindFramebuffer(GL READ FRAMEBUFFER, fbo) CHECK GL FRAMEBUFFER(GL READ FRAMEBUFFER) glReadPixels(0, 0, w index , h index , GL DEPTH COMPONENT, GL FLOAT, 0) CHECK GL("read the depth (framebuffer to pbo)") copy gpu pbo to cpu buffer index (index 1) PboCount float depths nullptr uint32 reqsiz w index h index sizeof(float) if (buffer.size() lt reqsiz) buffer.allocate(reqsiz) depths (float )buffer.data() glBindBuffer(GL PIXEL PACK BUFFER, pbo index ) glGetBufferSubData(GL PIXEL PACK BUFFER, 0, w index h index sizeof(float), depths) glBindBuffer(GL PIXEL PACK BUFFER, 0) CHECK GL("read the depth (pbo to cpu)") float DepthBuffer valuePix(uint32 x, uint32 y) if (w index h index 0) return nan1() assert(x lt w index amp amp y lt h index ) return ((float )buffer.data()) x y w index float DepthBuffer valueNdc(float x, float y) assert(x gt 1 amp amp x lt 1 amp amp y gt 1 amp amp y lt 1) return valuePix((x 0.5 0.5) (w index 1), (y 0.5 0.5) (h index 1)) The glReadPixels is taking all the time (on cpu). Am I doing something wrong? I thought that reading into the PBO should be asynchronous. Thanks.
1
Draw quad with OpenGL VBO using OpenTK I'm trying to learn how to use VBO (Vertex Buffer Objects) by putting together a simple program that draws a quad to the screen using OpenTK (C OpenGL bindings). Unfortunately I'm not seeing anything on screen. If I draw the same quad in immediate mode it shows up fine. Since it wasn't working I tried adding Color Data too (in case this was the problem) but it didn't help. Any help would be greatly appreciated! private uint indexBufferId private uint vertexBufferId private uint colorBufferId private void InitialiseData() Set up index buffer ushort indices new ushort 0, 1, 2, 3 GL.GenBuffers(1, out indexBufferId) GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBufferId) GL.BufferData( BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length sizeof(ushort)), indices, BufferUsageHint.StaticDraw) Set up vertex buffer float vertexData new float 50.0f, 50.0f, 100.0f, 50.0f, 100.0f, 100.0f, 50.0f, 100.0f GL.GenBuffers(1, out vertexBufferId) GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBufferId) GL.BufferData( BufferTarget.ArrayBuffer, (IntPtr)(vertexData.Length sizeof(float)), vertexData, BufferUsageHint.StaticDraw) Set up color buffer float colorData new float 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f GL.GenBuffers(1, out colorBufferId) GL.BindBuffer(BufferTarget.ArrayBuffer, colorBufferId) GL.BufferData( BufferTarget.ArrayBuffer, (IntPtr)(colorData.Length sizeof(float)), colorData, BufferUsageHint.StaticDraw) protected override void OnRenderFrame(FrameEventArgs e) base.OnRenderFrame(e) GL.Clear(ClearBufferMask.ColorBufferBit) Bind vertex buffer GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBufferId) GL.EnableClientState(ArrayCap.VertexArray) GL.VertexPointer(2, VertexPointerType.Float, 0, IntPtr.Zero) Bind color buffer GL.BindBuffer(BufferTarget.ArrayBuffer, colorBufferId) GL.EnableClientState(ArrayCap.ColorArray) GL.ColorPointer(4, ColorPointerType.Float, 0, IntPtr.Zero) Bind index buffer GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBufferId) Draw GL.DrawElements( BeginMode.Quads, 1, DrawElementsType.UnsignedShort, IntPtr.Zero) Disable GL.DisableClientState(ArrayCap.VertexArray) GL.DisableClientState(ArrayCap.ColorArray) SwapBuffers()
1
Handling 3D perspective for different aspect ratios I have a problem supporting different aspect ratios while developing a mobile game. I was developing on a 2:3 screen, which is the iPhone 4 (Retina). When I use the width and height for the screen size and a fixed FOV, most cases look OK, but on 3:4 (which is the iPad) the user can "see more" horizontally. If I force the aspect ratio to 2:3, it looks "squeezed". I am lost adjusting other values like moving the camera or changing the FOV. Is there any proper way to do this, or any tips? Or, what is a good FOV for a portrait mobile device screen that can cover most aspect ratios and look OK? I am currently using 45 and forcing the aspect ratio to 2:3.
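One common convention (a suggestion, not the only answer): decide which axis of the FOV you want to keep constant and derive the other from the aspect ratio, so a wider device simply sees a bit more to the sides instead of squeezing. If the horizontal framing is what matters for your portrait game, convert a fixed horizontal FOV into the vertical FOV that the projection matrix expects:

    #include <cmath>

    // aspect = width / height. Keeps the horizontal FOV constant and returns
    // the matching vertical FOV in degrees for the projection matrix.
    float verticalFovFromHorizontal(float fovXDegrees, float aspect)
    {
        const float toRad = 3.14159265f / 180.0f;
        float fovY = 2.0f * std::atan(std::tan(fovXDegrees * 0.5f * toRad) / aspect);
        return fovY / toRad;
    }

    // Example: with a 45 degree horizontal FOV, a 2:3 screen gets a taller
    // vertical FOV than a 3:4 screen, but both show the same horizontal slice.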
1
How to do perspective projection parallax but without changing the scale or offset of objects? Hello everyone, I have a problem and I have tried everything I could think of. The problem I am making a 2D game with a parallax effect, but I am using 3D space, so I am not simulating the parallax but letting the perspective projection take care of it for me. Now the problem I have my own game editor where I design the levels in this editor I use just images and I set a Z value for each layer. However, I want the layers to show in the game engine exactly as I set them in the game editor in short, I want the perspective projection to do the parallax but without changing their scale or offset position. The obvious solution is to scale them up and offset them, but the problem is how to calculate their offset. With scaling I tried object Scale(object.Z view.Z) and it seems to return them to their real size, but their positions are still wrong. I tried object setPositionX(object getPosition().x (object.Z view.Z)) and it seems to be aligned except they all seem shifted. I have tried unprojecting and tried to convert from world matrix to screen matrix and find some ratios and so on. If anyone has any idea or any way this could be done in an elegant mathematical way, I will be most grateful. Thank you all in advance.