1
Errors when loading Assimp meshes

I am experiencing some oddities when loading meshes in Assimp. Some models render perfectly, while others are complete jumbles of triangles.

Some example images: Teapot (correct), Teapot (actual), Flower (correct), Flower (actual). OBJ download links: Teapot.obj, Flower.obj.

I've searched far and wide, and I can't seem to pin down the cause of this. I have noticed while debugging that the value of ai_mesh->mNumVertices differs from what Assimp Viewer or Blender display. ai_mesh->mNumVertices is smaller in the Teapot example by 156 vertices, and larger in the Flower example by 20 vertices. I've considered that Assimp post-processing may be causing the difference here. However, I'm using the EXACT same post-processing flags used in Assimp Viewer, copied directly from the source code.

Here is my mesh loader:

    mesh::mesh(aiMesh* ai_mesh, model* parent_model) : mesh()
    {
        name_ = ai_mesh->mName.length != 0 ? ai_mesh->mName.C_Str() : "";

        // Get vertices
        if (ai_mesh->mNumVertices > 0)
        {
            for (int ii = 0; ii < ai_mesh->mNumVertices; ii++)
            {
                aiVector3D ai = ai_mesh->mVertices[ii];
                glm::vec3 vec = glm::vec3(ai.x, ai.y, ai.z);
                vertices_.push_back(vec);
            }
        }

        // Get normals
        if (ai_mesh->HasNormals())
        {
            for (int ii = 0; ii < ai_mesh->mNumVertices; ii++)
            {
                aiVector3D ai = ai_mesh->mNormals[ii];
                glm::vec3 vec = glm::vec3(ai.x, ai.y, ai.z);
                normals_.push_back(vec);
            }
        }

        // Get mesh indices
        for (int t = 0; t < ai_mesh->mNumFaces; t++)
        {
            aiFace* face = &ai_mesh->mFaces[t];
            if (face->mNumIndices != 3)
            {
                std::cout << "Mesh face with not exactly 3 indices, ignoring this primitive.\n";
                continue;
            }
            indices_.push_back(face->mIndices[0]);
            indices_.push_back(face->mIndices[1]);
            indices_.push_back(face->mIndices[2]);
        }

        material_ = parent_model->getMaterial(ai_mesh->mMaterialIndex);
    }

And the render function, just in case:

    void mesh::render()
    {
        material_->bind();
        glVertexPointer(3, GL_FLOAT, 0, &vertices_[0]);
        glNormalPointer(GL_FLOAT, 0, &normals_[0]);
        glDrawElements(GL_TRIANGLES, indices_.size(), GL_UNSIGNED_BYTE, &indices_[0]);
    }

Thanks ahead.
1
What does glBlendFunc(GL_DST_COLOR, GL_ZERO) mean?

I need to write a description of a filter method I made, but I don't know what glBlendFunc(GL_DST_COLOR, GL_ZERO) means.
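For reference, a short worked expansion of what this call computes under the default blend equation (GL_FUNC_ADD); this is standard OpenGL blending behaviour rather than anything specific to the filter in question:

    // final = src * sfactor + dst * dfactor
    // with sfactor = GL_DST_COLOR and dfactor = GL_ZERO:
    // final = src * dst + dst * 0
    //       = src * dst   (multiplicative blending: the incoming fragment
    //                      scales the colour already in the framebuffer)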
1
Unpacking Sprite Sheet Into 2D Texture Array

I am using WebGL 2. (A tag for it does not exist, but it should.) I have a 10x10 sprite sheet of squares that are 16x16 pixels in size (all in one PNG image). I'd like to create a 2D texture array out of them, where each 16x16 square gets its own, unique Z depth value.

    let texture = gl.createTexture();
    let image = new Image();
    image.onload = function() {
        gl.bindTexture(gl.TEXTURE_2D_ARRAY, texture);
        gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false);
        gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 5, gl.RGBA, 16, 16, NUM_IMAGES);
        // Now what? gl.texSubImage3D doesn't let me copy in a section of the src image
    };
    image.src = "https://source.url.fake/image.png";

I know that gl.texSubImage3D exists, but it only accepts an entire image as a source? glTexSubImage3D: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glTexSubImage3D.xml
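A hedged sketch of one way to finish this (my own example, not the asker's code): WebGL 2 honours the UNPACK_ROW_LENGTH, UNPACK_SKIP_PIXELS and UNPACK_SKIP_ROWS pixel-store parameters when uploading from a DOM image, so each 16x16 cell can be copied into its own layer with texSubImage3D. This assumes the storage above is allocated with a sized internal format such as gl.RGBA8 and that NUM_IMAGES is 100:

    // inside image.onload, after texStorage3D
    gl.pixelStorei(gl.UNPACK_ROW_LENGTH, image.width);   // stride = full sheet width
    for (let layer = 0; layer < 100; layer++) {
        const cellX = (layer % 10) * 16;
        const cellY = Math.floor(layer / 10) * 16;
        gl.pixelStorei(gl.UNPACK_SKIP_PIXELS, cellX);    // left edge of this cell
        gl.pixelStorei(gl.UNPACK_SKIP_ROWS, cellY);      // top edge of this cell
        gl.texSubImage3D(gl.TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                         16, 16, 1, gl.RGBA, gl.UNSIGNED_BYTE, image);
    }
    gl.pixelStorei(gl.UNPACK_ROW_LENGTH, 0);             // reset unpack state
    gl.pixelStorei(gl.UNPACK_SKIP_PIXELS, 0);
    gl.pixelStorei(gl.UNPACK_SKIP_ROWS, 0);

An alternative that avoids the unpack parameters is to draw each cell to a 16x16 canvas first and upload the canvas per layer.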
1
Creating a DPI- and Resolution-Independent GUI

How would one best go about it, assuming I'm using OpenGL to do all the drawing? It seems like the best approach is to have the GUI elements maintain the same physical size regardless of the screen they are displayed on. It seems that Windows will act as if you had a screen with 72 DPI regardless of its actual size, unless you specifically tell it that you handle high-DPI cases, which would greatly simplify things; does this hold for Linux as well? Do things change between running in windowed mode versus fullscreen? It seems like they should, as the window changes size depending on resolution, while in fullscreen the image is always the same size, just with less or more density!
1
How to use continuous collision detection in a dynamic AABB tree

I am currently writing a game in C using OpenGL, and I am currently using a kinetic sweep-and-prune algorithm for the broad phase and then using GJK raycast (GJK & EPA) for the narrow phase. However, I have realized that kinetic sweep and prune may not be the best choice, because there are many objects in the scene that are just static and cause a lot of swapping when an object moves. So basically I would like to implement a dynamic AABB tree for the broad phase, knowing there will be only a few objects requiring continuous collision detection (but these objects are essential). However, for these fast-moving objects, what is the best way to detect a possible collision with other objects in the broad phase? I am thinking about using an AABB that contains the object's trajectory from one frame to the next; is this a good idea? Or will this add a lot of overhead due to many false positives? Also, I haven't read too much about dynamic AABB trees, but I think I understand the idea: for each object that moved, check if its AABB is overlapping with the tree node; if it is, do the same check with its children, and do so until we are at the leaves of the tree. All help is greatly appreciated.
1
How to enable geometry shader in OpenGL 4.2?

I'm porting my Direct3D-based engine to OpenGL, and I'm using geometry shaders for rendering text characters (basically, textured billboards). The D3D version works fine, but in OpenGL mode it gives only a flickering point right in the center of the screen. It seems that somehow the geometry shader stage is not enabled (in AMD GPUPerfStudio the box representing the GS stage is grayed out). I triple-checked the OpenGL back end and the shader code but couldn't find any mistakes. Maybe I'm missing something. Below is the source code of the relevant shaders.

Vertex shader:

    in vec4 in_xy_wh;
    in vec4 in_tl_br;   // UVs for top-left and bottom-right corners

    out vec4 v_xy_wh;
    out vec4 v_tl_br;

    void main()
    {
        v_xy_wh = in_xy_wh;
        v_tl_br = in_tl_br;
    }

Geometry shader:

    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    in vec4 v_xy_wh[];
    in vec4 v_tl_br[];   // UVs for top-left and bottom-right corners

    out vec2 v_uv;

    void main()
    {
        vec2 pos     = v_xy_wh[0].xy;
        float width  = v_xy_wh[0].z;
        float height = v_xy_wh[0].w;
        vec2 tl = v_tl_br[0].xy;
        vec2 br = v_tl_br[0].zw;

        gl_Position = vec4(pos.x, pos.y, 0.0f, 1.0f);
        v_uv = vec2(tl.x, tl.y);
        EmitVertex();

        gl_Position = vec4(pos.x + width, pos.y, 0.0f, 1.0f);
        v_uv = vec2(br.x, tl.y);
        EmitVertex();

        gl_Position = vec4(pos.x, pos.y + height, 0.0f, 1.0f);
        v_uv = vec2(tl.x, br.y);
        EmitVertex();

        gl_Position = vec4(pos.x + width, pos.y + height, 0.0f, 1.0f);
        v_uv = vec2(br.x, br.y);
        EmitVertex();

        EndPrimitive();
    }

Fragment shader:

    in vec2 v_uv;
    out vec4 v_color;

    uniform sampler2D s_font;

    void main()
    {
        vec4 color = texture(s_font, v_uv).rgba;
        if (color.w < 1.0 / 255.0)
            discard;
        v_color = color;
    }

I've added the "#version 420 core" preamble to each shader. I know I shouldn't use the glProgramParameteriEXT function to define GL_GEOMETRY_INPUT_TYPE_EXT, GL_GEOMETRY_OUTPUT_TYPE_EXT and GL_GEOMETRY_VERTICES_OUT_EXT, because these parameters are specified in the geometry shader (which is core in OpenGL 4.2).

EDIT: My mistake was incorrect usage of glVertexAttribPointer (wrong value of the last parameter). Thanks for all the answers and sorry for your wasted time!
1
How do I change a sprite's color?

In my rhythm game, I have a note object which can be of a different color depending on the note chart. I could use a sprite sheet with all the different color variations I use, but I would prefer to parametrize this. (Each note sprite is made of different shades of a hue. For example, a red note has only red, light red and dark red.) How can I colourise a sprite anew? I'm working with OpenGL, but any algorithm or math explanation will do. :)
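A minimal sketch of one common approach (an assumption on my part, not from the post): author the note sprite in grayscale or a neutral hue and multiply it by a per-note colour in the fragment shader, so the texture supplies the shading and the uniform supplies the hue:

    // u_sprite and u_noteColor are hypothetical names for this example
    uniform sampler2D u_sprite;
    uniform vec4 u_noteColor;
    in vec2 v_uv;
    out vec4 fragColor;

    void main()
    {
        // light/dark variation comes from the texture, the hue from the uniform
        fragColor = texture(u_sprite, v_uv) * u_noteColor;
    }

For finer control over the exact light/dark shades described above, a hue shift in HSV space inside the shader is another option.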
1
Can you trilinear sample a non-volume texture?

Let's say that I have a regular 2D texture (not a volume texture). Is it possible to do trilinear texture sampling of that texture, even though it isn't a volume texture, in OpenGL or DirectX? Specifically, if I had a 2x4 texture laid out like below, where A through H are pixels:

    A B
    C D
    E F
    G H

What I'd like to get is a trilinear sample being the result of linearly interpolating the bilinear sampling of (A,B,C,D) and (E,F,G,H). I know I could take two bilinear texture samples and lerp them to get what I want, but is there a hardware-supported way to do this?
1
360 degree video of my OpenGL game

I want to make a 360 degree video of my OpenGL game. Concerning the rendering: is it enough to render it in OpenGL with a specific projection matrix? If yes, which one? Or can I render it into a cube map and then encode it? (Which will require much more rendering power and be more complicated, which I want to avoid.) And how do I encode a 360 degree video with FFmpeg?
1
Easy way to set face colors with indexed VBOs?

I'm loading OBJs, which lend themselves well to setting up as indexed VBOs, since each vertex is only defined once and then a face definition will reference the same vertex more than once when they share an edge. I want to define face colors based on the materials loaded from the OBJ. This is quite easy in immediate mode, but I've realized that using VBOs will require me to duplicate each vertex in order to have two adjacent faces with different colors. What's the best way to go about solving this without duplicating vertices?

EDIT: Code (LWJGL):

    // Get the VBO from the GPU
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Tell OpenGL what to expect from the VBO
    glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);              // Location
    glVertexAttribPointer(1, 2, GL_FLOAT, false, Vertex.SIZE * 4, 4 * 3);          // Texture Coord
    glVertexAttribPointer(2, 3, GL_FLOAT, false, Vertex.SIZE * 4, 4 * 3 + 4 * 2);  // Normal
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glDrawElements(GL_TRIANGLES, size, GL_UNSIGNED_INT, 0);
1
LibGDX: Draw multiple textures with different positions with one shader?

Currently I'm optimizing my render calls by drawing multiple textures with one shader. The background textures, which all share the same position, are no problem. But now I want to draw some textures with different positions with one shader. I think it isn't possible, because the textures don't have the same geometry, but maybe there is a way to do this? I have never worked with the geometry shader, for example. For example: draw a 512x512 texture at position (100, 50), draw a 256x256 texture at position (500, 300), ..., like in this picture.
1
C++ OpenGL: More textured objects equals less FPS

So I'm creating a game engine with many features; last time I focused on textures. When I finished implementing some new code (made by myself), I created a new Entity with a texture, and it was working really nicely. But if I create 2 or more textured objects, the game starts lagging:

    1 Cube without texture: 60 FPS
    1 Cube with texture: 60 FPS
    2 Cubes without texture: 60 FPS
    2 Cubes with texture: 40-50 FPS
    10 Cubes without texture: 60 FPS
    10 Cubes with texture: 20 FPS
    100 Cubes without texture: 60 FPS
    100 Cubes with texture: <1 FPS
    10000 Cubes without texture: 60 FPS

I'm using a GLSL sampler2D and texture UVs from a VBO. Here is my renderer code:

    bool textured = false;
    if (entity->hasComponent("MaterialComponent"))
    {
        MaterialComponent* materialComponent = (MaterialComponent*) entity->getComponent("MaterialComponent");
        if (materialComponent->material != NULL && materialComponent->material->texture != NULL)
            textured = true;
    }

    if (component->cullFaces)
    {
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);
    }

    if (textured) glEnable(GL_TEXTURE_2D);

    glBindVertexArray(component->model->vaoID);
    glEnableVertexAttribArray(0);
    if (textured) glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);

    glDrawElements(GL_TRIANGLES, component->model->getVertexCount(), GL_UNSIGNED_INT, 0);

    if (textured)
    {
        MaterialComponent* materialComponent = (MaterialComponent*) entity->getComponent("MaterialComponent");
        if (materialComponent->material->texture->textureID == 0)
            glGenTextures(1, &materialComponent->material->texture->textureID);

        glBindTexture(GL_TEXTURE_2D, materialComponent->material->texture->textureID);

        SDL_Surface* surface = IMG_Load(materialComponent->material->texture->textureFile->getPath().c_str());
        if (!surface)
        {
            // Error
            throwError("Cannot load texture file!");
        }

        S32 colorMode = GL_RGB;
        if (surface->format->BytesPerPixel == 4)
            colorMode = GL_RGBA;

        glTexImage2D(GL_TEXTURE_2D, 0, colorMode, surface->w, surface->h, 0, colorMode, GL_UNSIGNED_BYTE, surface->pixels);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glGenerateMipmap(GL_TEXTURE_2D);

        glActiveTexture(GL_TEXTURE0);
        glUniform1i(baseShader->loc_sampler, 0);

        SDL_FreeSurface(surface);

        glDrawElements(GL_TRIANGLES, component->model->getVertexCount(), GL_UNSIGNED_INT, 0);
    }

    if (textured)
    {
        glBindTexture(GL_TEXTURE_2D, 0);
        glDisable(GL_TEXTURE_2D);
    }

    glDisableVertexAttribArray(0);
    if (textured) glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(2);
    glBindVertexArray(0);

    if (textured) glDisable(GL_TEXTURE_2D);
    if (component->cullFaces) glDisable(GL_CULL_FACE);

So I can create a new TextureAsset for each cube, or I can use one TextureAsset for all cubes; it makes no difference to the FPS.

I'm creating a new TextureAsset like this:

    TextureAsset* textureAsset = new TextureAsset(new FilePath("PATH_TO_TEXTURE"));

Then putting it inside the Entity's MaterialComponent:

    MaterialComponent* materialC = (MaterialComponent*) cube1->getComponent("MaterialComponent");
    materialC->materialAsset->texture = textureAsset;

I think there is something bad inside my renderer method; it happens only if there is more than one textured object. Maybe I have to clean up the texture in a different way? I'm using SDL2, SDL2_image and GLEW. If you need more code, just tell me.
1
Animating a jellyfish: hardcoding?

I have drawn a hemisphere by myself and would like to do a basic jellyfish animation by animating the vertices of the hemisphere. I have tried a lot of experiments, but it seems I'm doing it wrong. I need hints or ideas on how to do it. This is the code for drawing the hemisphere. I don't know which values to actually play with to animate the sphere as a jellyfish.

    for (float phi = 0.0; phi < 1.567; phi += factor)
    {
        glBegin(GL_QUAD_STRIP);
        for (float theta = 0.0; theta < 2 * 3.14 + factor; theta += factor)
        {
            x = rh * sin(phi) * cos(theta);
            z = rh * sin(phi) * sin(theta);
            y = rh * cos(phi);
            gl_vertex(Vec3f(x, y, z));

            x = rh * sin(phi + factor) * cos(theta);
            z = rh * sin(phi + factor) * sin(theta);
            y = rh * cos(phi + factor);
            gl_vertex(Vec3f(x, y, z));
        }
        glEnd();
    }
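A hedged sketch of one way to get a pulsing bell out of the same loop (my own guess at the kind of motion wanted, not code from the post): modulate the radius with a time-varying term that grows toward the rim, so the bottom edge of the bell flexes more than the top:

    // t is an elapsed-time value updated once per frame (assumed to exist).
    float pulse = 1.0f + 0.15f * sin(3.0f * t) * (phi / 1.567f);  // rim moves the most
    x = rh * pulse * sin(phi) * cos(theta);
    z = rh * pulse * sin(phi) * sin(theta);
    y = rh * cos(phi);                      // keep the height, flex only outward
    gl_vertex(Vec3f(x, y, z));

The same modulation would be applied to the second vertex of the strip (the phi + factor one).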
1
Do OpenGL buffers overflow to CPU memory? This is a question about OpenGL buffers and memory. My game world is mid sized, one contiguous space, unchanging, and only partially visible from any position. Will modern OpenGL overflow buffers into CPU memory? And if so, can I just allocate buffers (vertex and texture) for all of it, and adjust my draw calls to skip non visible areas, and let OpenGL pull buffers into GPU as needed, hopefully minimizing thrashing? (Or does OpenGL just fail on the allocate after a while?) EDIT The above was ambiguous. But maybe both variations are interesting. Could I just allocate one huge vertex buffer and a few giant textures, and hope that OpenGL moves parts of it in and out of the GPU as needed, and maybe enough early depth testing lets it skip some of the texture drawing... Could I have a handful, or maybe a lot, of "sensibly sized" vertex and texture buffers, but only glDrawXxx() some of them on each frame? In that case, would OpenGL move them up and down from the GPU, maybe in a least recently used sort of way?
1
How to free the GPU from a long task?

I'm developing an application that uses the GPU. Some tasks might be rather long (like a fragment shader with a loop). I have the impression that I can make the entire OS visually freeze by asking the GPU to perform a job that doesn't end. Of course, I'm not doing this on purpose, but when it accidentally happens I have to force power off my machine, and this is very difficult to debug. Some questions: Is this behaviour normal? Or could this be a bug in the hardware or driver? Is there a technique to recover from this without force-stopping my machine? Since the OS is still functioning (only not rendering), I could try to ssh into my machine and kill the process. Would that free the GPU from its job? Update: I tried to ssh into my machine and sudo kill -9 the pid, but it didn't work. I would like to know if there is an OS-independent answer for this. If there isn't, my setup is macOS, with OpenGL (running on Intel HD Graphics 3000).
1
How do I set up 9-slicing in OpenGL?

What is the ideal way to set up 9-slicing? (For the people who don't know what 9-slicing is: let's say we have a quad with a texture applied to it. 9-slicing is basically a way of dividing a texture into 9 parts. The reason why we do this is to have a quad with a texture that can be resized without the issue of having the texture all warped. We divide the texture into edges, corners, and the main body.) As can be seen in the pictures, we have a quad that can be scaled without warping the texture, which would otherwise look something like this (no 9-slicing).

Now, to come back to my question: how should I set up 9-slicing? Should I have multiple quads (one for the center part, one for the right edge, one for the left edge, ...)? But then how do I specify the different texture coordinates for each quad? Another way would be to just create the quads with their texture coordinates and relative textures and then move them all together like a whole object. I hope someone can explain a little more about this "9-slicing" approach, since I've never used it before. Thanks, everyone, for the help.

EDIT (to answer HolyBlackCat): "Another way would be to just create the quads with their texture coordinates and relative textures and then move them all together like a whole object." What I wanted to say is this: I create separate quads, attach separate textures to them, and render and move them at the same time. This is, in my opinion, the most logical way to do this. Despite that, I still want to know what would be a better, faster way to do this, without having too many vertices for a quad.
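A hedged sketch of the multi-quad idea (my own example with a hypothetical addQuad helper, not the asker's code): the nine patches fall out of four x-coordinates and four y-coordinates in both screen space and UV space, so the per-quad texture coordinates come from the same loop that positions the quads:

    // (x, y, w, h) is the widget rectangle on screen, border is the corner size in
    // pixels, and tb is the corner size in UV units (e.g. 16.0f / textureWidth).
    float xs[4] = { x, x + border, x + w - border, x + w };
    float ys[4] = { y, y + border, y + h - border, y + h };
    float us[4] = { 0.0f, tb, 1.0f - tb, 1.0f };
    float vs[4] = { 0.0f, tb, 1.0f - tb, 1.0f };

    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 3; col++)
            addQuad(xs[col], ys[row], xs[col + 1], ys[row + 1],    // position corners
                    us[col], vs[row], us[col + 1], vs[row + 1]);   // UV corners

Only the middle row and column stretch when the widget is resized; the corners keep their pixel size, which is the point of the technique.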
1
Best way to do buttons for an OpenGL ES iPhone game I'm making a simple 2d game in OpenGL ES and I want to add movement buttons to it. What's the best way of going about this? In previous projects I've simply added UIButtons to the view but I hear there are performance implications in doing so with OpenGL ES so I'm wondering what the possible alternatives are if so.
1
Point light shows black box/rect (PointLight not working), libGDX 3D

I am creating a 3D scene, currently a box and a rect, and trying to enable lighting. When I create a PointLight and add it to the Environment, everything turns black. All I want to do is create a 3D scene and enable a point light, like a sun or rays coming from a point, shading the objects. Code:

    environment = new Environment();
    environment.add(new PointLight().set(1f, 1f, 1f, 0, 0, 20f, 100f));
    modelBatch = new ModelBatch();
    ...
    square = new ModelBuilder().createBox(300, 300, 300,
            new Material(ColorAttribute.createDiffuse(Color.GREEN)),
            VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
    squareinst = new ModelInstance(square);
    squareinst.transform.setTranslation(-500, 0, 0);

    sprites.get(0).setRotationY(sprites.get(0).getRotationY() + 1f);
    sprites.get(1).setRotationY(sprites.get(1).getRotationY() + 1f);
    squareinst.transform.rotate(1, 0, 0, 1);
    modelBatch.begin(camera);
    for (Sprite3D sp : sprites)   // has 3d rect models
        sp.draw(modelBatch, environment);
    modelBatch.render(squareinst, environment);
    modelBatch.end();

With the PointLight, everything turns black. Without using the environment or lights (as per my investigation): if the point light is not working, then everything should be black, as it currently is, because the environment needs a light. It works fine with a DirectionalLight (only the back face of the rect is black even after rotations; I don't know why). libGDX version 1.6.1, Android Studio; I checked it on both an Android device and desktop. Please, I really need to get this PointLight working. I don't know if it will take a custom shader; if so, please guide me to some links, because I am not experienced with shaders. I also read about PointLight not working on some devices or not working with OpenGL ES 2.0 enabled, but I am not sure.
1
Transferring my OpenGL PC game to PS4 and/or Xbox One

I want to transfer my OpenGL PC game to consoles, PS4 and/or Xbox One. It's a complex application, where many calculations (incl. fast Fourier transforms) are performed within OpenGL via FBOs and transform feedback. What is the best way to bring it to the PS4? Or is it easier to implement the game in an engine like Unity or Unreal? As far as I know they have console support. An advantage would be that we could offer our methods as a plugin/asset; we are planning to do this too. There are a lot of GLSL shaders and OpenGL API calls. Is it possible at all to "translate" that into an engine like Unreal? Or would we have to use the engine's "predefined" rendering mechanisms? We basically need shaders operating on floating point textures, changing render targets frequently, and transform feedback if possible. We also need low-level access to shading attributes, like changing the point size in the vertex shader, etc. And I really would like to know which is less work. I could imagine that it is possible to abstract all the API calls into classes/functions, where first all the OpenGL calls are implemented, and then we would do an implementation for the console API. Is this realistic with the PS4 API? I know that DirectX has a different design and therefore abstraction itself won't be enough. I believe porting to an engine might be even more work, as you have to respect its structures.
1
glDrawElements draws nothing

As the title says, nothing is drawn on screen. Is there something I'm missing?

    Mesh* Mesh_Create(GLfloat vertices[], GLuint indices[], GLfloat uvs[], GLfloat colors[])
    {
        Mesh* mesh = (Mesh*) malloc(sizeof(Mesh));

        mesh->vertices = (GLfloat*) malloc(sizeof(vertices));
        memcpy(mesh->vertices, vertices, sizeof(vertices));

        mesh->indices = (GLuint*) malloc(sizeof(indices));
        memcpy(mesh->indices, indices, sizeof(indices));

        mesh->elementCount = sizeof(indices) / sizeof(GLuint);

        glGenVertexArrays(1, &mesh->vao);
        glBindVertexArray(mesh->vao);

        glGenBuffers(1, &mesh->vbo[VERTEXBUFFER]);
        glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo[VERTEXBUFFER]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(mesh->vertices), &mesh->vertices, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 2, GL_FLOAT, false, 0, 0);
        glEnableVertexAttribArray(0);

        glGenBuffers(1, &mesh->vbo[TEXTUREBUFFER]);
        glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo[TEXTUREBUFFER]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(uvs), uvs, GL_STATIC_DRAW);
        glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);
        glEnableVertexAttribArray(1);

        glGenBuffers(1, &mesh->vbo[INDICESBUFFER]);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->vbo[INDICESBUFFER]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(mesh->indices), &mesh->indices, GL_STATIC_DRAW);

        glGenBuffers(1, &mesh->vbo[COLORBUFFER]);
        glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo[COLORBUFFER]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(colors), colors, GL_STATIC_DRAW);
        glVertexAttribPointer(3, 4, GL_FLOAT, false, 4 * sizeof(GLfloat), NULL);
        glEnableVertexAttribArray(3);

        glBindVertexArray(0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        return mesh;
    }

    Mesh* Mesh_CreateQuad()
    {
        GLfloat vertices[] = { 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 };
        GLuint indices[] = { 0, 1, 2, 0, 2, 3 };
        GLfloat uvs[] = { 0, 1, 1, 1, 1, 0, 0, 0 };
        GLfloat colors[] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
        return Mesh_Create(vertices, indices, uvs, colors);
    }

    void Mesh_Render(Mesh* mesh)
    {
        glBindVertexArray(mesh->vao);
        glDrawElements(GL_TRIANGLES, mesh->elementCount, GL_UNSIGNED_INT, &mesh->indices);
    }
1
Bones Animation: Matrices and calculations

We are "on final" when it comes to finishing the project, but just before implementing the animation system. Our client decided to choose "bones animation", which means I should export each transformation matrix (matrix4x4, rotation + translation) for every frame and for every bone that this animated object has. Objects in our game are animated with the 3ds Max Physique modifier, so we will have bone weighting data per vertex. But I will simplify things here just to shed a bit of light on this subject. I would like to split this post into 2 points, where "Exporting bone matrices for every frame" is about the correct method of exporting the bone positions for later animation purposes, where I have to 'move' and 'rotate' every vertex influenced by a bone to that bone's position at frame X, and "Calculating the final vertex position" is about the proper matrix operations to calculate the new vertex position according to the bone transformation at frame X.

1. EXPORTING BONE MATRICES FOR EVERY FRAME

Do I understand correctly that while exporting the animated object I should: grab the BONE transformation matrix at frame 0 and invert this matrix; grab the BONE transformation matrix at FRAMEx; multiply 1 and 2 to get the transformation offset of the BONE at FRAMEx?

    // pseudocode
    // Animation export: for each frame, export the bone transformation offset
    for (int iFrame = 0; iFrame < vFrames.size(); iFrame++)
    {
        // For every bone in the object
        for (int iBone = 0; iBone < vBones.size(); iBone++)
        {
            // Grab the transformation matrix for this bone at frame 0 and inverse it
            Matrix3 matBoneMatrixAtStart = pNode->GetObjectTMAfterWSM(0);
            matBoneMatrixAtStart.Inverse();

            // Grab the transformation matrix for this bone at frame iFrame
            Matrix3 matBoneMatrixAtCurrentFrame = pNode->GetObjectTMAfterWSM(iFrame);

            // Multiply the inversed transformation matrix of this bone at frame 0
            // with the current frame transformation matrix
            Matrix3 matBoneTransformationOffset = matBoneMatrixAtStart * matBoneMatrixAtCurrentFrame;

            // Save matBoneTransformationOffset - vertices will be multiplied by this matrix
            // for animation purposes
            fwrite(.....);
        }
    }
    // pseudocode

Will that be enough? Or is there something I am missing here?

2. CALCULATING NEW VERTEX POSITIONS (FINAL VERTEX POSITION AT FRAME X)

Later on, when rendering, object vertices are going to be multiplied by the exported bone transformation matrix for the current animation frame, and then multiplied by the whole model's transformation matrix to place the object in the correct position inside the level:

    // pseudocode
    Update()
    {
        // The model transformation matrix describing the position of the model in the level
        matModelTransformationMatrix

        // Calculate the new vertex position according to its bone transformation offset
        NewVertexPosition = (OriginalVertexPosition * matBoneTransformationOffset[iFrame]) * matModelTransformationMatrix

        // Increment the frame for testing purposes
        iFrame++
    }
    // pseudocode

Am I thinking correctly here? So, having the bone transformation offset for frame X, multiplying every vertex affected by this bone by this offset should result in a vertex transformed exactly as this bone, right?
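A hedged sketch of how the per-vertex weighting usually enters this (my own addition, since the post deliberately leaves the Physique weights out): each vertex blends the offsets of the bones that influence it before the model transform is applied:

    // pseudocode, assuming up to 4 influences per vertex with weights summing to 1,
    // and a per-bone, per-frame offset matrix as exported above
    skinnedPosition = (0, 0, 0)
    for (int i = 0; i < 4; i++)
        skinnedPosition += vertex.weight[i] *
            (OriginalVertexPosition * matBoneTransformationOffset[vertex.boneIndex[i]][iFrame])
    NewVertexPosition = skinnedPosition * matModelTransformationMatrix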
1
Handling multiple lights of different types in GLSL

I want to be able to support multiple lights of different types (point, spot & directional). Note that I also want to be able to render transparent/translucent objects, which rules out deferred rendering. This means that I will need a fixed upper limit on the number of lights. All of the implementations I have seen use a uniform array, for example:

    struct PointLight {
        vec3 color;
        float intensity;
        float radius;
    };

    uniform PointLight pointLights[8];

How can I best extend this to multiple types of lights? Two possible options are: create multiple uniform arrays, one for each type of light; or define a struct that can be used for all three types of light and then have some way of differentiating the types of lights in the array (e.g. using start index and count uniforms, or adding some kind of type flag to the struct). Both have disadvantages: both waste memory (although the first more so than the second), and the second either requires additional uniforms or flag checking. What is the usual method for handling multiple lights of various types?
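A hedged sketch of option 2 (my own example, not from the post): a single struct that carries a type flag plus the superset of fields, dispatched with hypothetical per-type shading helpers:

    const int LIGHT_DIRECTIONAL = 0;
    const int LIGHT_POINT = 1;
    const int LIGHT_SPOT = 2;

    struct Light {
        int type;            // one of the constants above
        vec3 color;
        float intensity;
        vec3 position;       // unused for directional lights
        vec3 direction;      // unused for point lights
        float radius;        // point/spot attenuation radius
        float spotCosCutoff; // spot only
    };

    uniform Light lights[8];
    uniform int lightCount;

    vec3 shadeAllLights(vec3 N, vec3 P, vec3 albedo) {
        vec3 result = vec3(0.0);
        for (int i = 0; i < lightCount; i++) {
            if (lights[i].type == LIGHT_DIRECTIONAL)
                result += shadeDirectional(lights[i], N, albedo);   // hypothetical helpers
            else if (lights[i].type == LIGHT_POINT)
                result += shadePoint(lights[i], N, P, albedo);
            else
                result += shadeSpot(lights[i], N, P, albedo);
        }
        return result;
    }

The lightCount uniform keeps the loop from touching unused slots, which addresses part of the wasted-work concern.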
1
Rotate an object given only by its points?

I was recently writing a simple 3D maze FPP game. Once I was done fiddling with planes in OpenGL, I wanted to add support for importing Blender objects. The approach I used was triangulation of the object, then using Three.js to export the points to plaintext, and then parsing the resulting JSON in my app. An example file can be seen here: https://github.com/d33tah/tinyfpp/blob/master/Data/Models/cross.txt The numbers represent x, y, z, u, v of a single vertex, which combined in threes make a triangle. I then rendered such an object triangle by triangle and played with it. I can move it back and forth and sideways, but I still have no idea how to rotate it about some axis. Let's say I'd like to rotate all the points by five degrees to the left; what would code doing that look like?
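A hedged sketch (my own example, not from the post), assuming the rotation is about the model's vertical (Y) axis through the origin: each point's (x, z) pair is multiplied by a 2D rotation, and y, u, v stay unchanged. If the model is not centred on the origin, subtract the pivot first and add it back afterwards.

    // Vertex is a hypothetical struct holding the x, y, z, u, v values from the file.
    void rotatePointsY(std::vector<Vertex>& points, float angleRadians)
    {
        const float c = std::cos(angleRadians);
        const float s = std::sin(angleRadians);
        for (Vertex& v : points) {
            const float x = v.x, z = v.z;
            v.x =  c * x + s * z;
            v.z = -s * x + c * z;
        }
    }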
1
How to draw a rotatable and zoomable sphere?

My game is a sort of business simulation game of Earth, and I want the main interface to be just like a Google Earth view. Is there a way to do this with built-in OpenGL features, or do I have to implement the engine myself?
1
How can I forward GLFW's keyboard input to another object?

I'm having trouble trying to handle keyboard events in another class with GLFW3. The problem I'm having is that GLFW3 uses a static function for input, as shown:

    static UI u;
    ...
    static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
    {
        u.controls(window, key, action);
    }

u is also static, and controls() holds the input handling for the WSAD keys (the only way I could get key events). From here, pressing a key works, displaying which key is pressed in the console window. The trouble I'm having is trying to use the key press to manipulate a variable in another class. I have another class called MainMenu that has the function update(). Is there a way I can use my UI class within this function?
1
How do I create a view matrix that does not contain the camera translation?

I'm looking at this tutorial: https://capnramses.github.io/opengl/cubemaps.html About half way down the webpage, there is a description of a vertex shader:

    #version 400
    in vec3 vp;
    uniform mat4 P, V;
    out vec3 texcoords;

    void main() {
        texcoords = vp;
        gl_Position = P * V * vec4(vp, 1.0);
    }

"The P and V matrices are my camera's projection, and view matrices, respectively. The view matrix here is a special version of the camera's view matrix that does not contain the camera translation. Inside my main loop I check if the camera has moved. If so I build a very simple view matrix and update its uniform for the cube map shader. This camera only can only rotate on the Y Axis, but you might use a quaternion to generate a matrix for a free look camera. Remember that view matrices use negated orientations so that the scene is rotated around the camera. Of course if you are using a different maths library then you will have different matrix functions here."

I'm having difficulty figuring out how to construct this matrix in my program. How would I construct it from a projection matrix, view matrix, or camera matrix? (These are the matrices already within my program.)
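A hedged sketch of one common way to build such a matrix (my own example, not the tutorial's code): strip the translation from the existing view matrix by round-tripping it through a 3x3 matrix, which keeps only the rotational part:

    // GLM, on the CPU:
    glm::mat4 viewNoTranslation = glm::mat4(glm::mat3(viewMatrix));

    // Equivalent GLSL, if you prefer to do it in the shader:
    // mat4 V_rot = mat4(mat3(V));

Either form can then be uploaded as the V uniform used by the cubemap shader above.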
1
Texturing a PyOpenGL 3D Cube with PySDL2

So, I've just started learning OpenGL with PySDL2, and I've created a class that will create a cube in the window that I've created with PySDL2. What I'd like to do now is to figure out a way to texture the cube. I don't believe that the actual texturing part is the problem, but actually loading the image. I've tried to do this with PySDL's image loader, but this causes an error from the OpenGL texture creator. Here is my code to load a texture and bind it to OpenGL:

    def LoadTexture(self, filename):
        """Loads a texture for the cube"""
        surface = SDL_LoadBMP(filename)
        # Checks if the loading succeeded
        if surface:
            # Translate the LP_SDL_Surface pointer got from SDL_LoadBMP() to a real SDL_Surface
            texture_surface = surface.contents
            texture_format = GL.GL_RGBA

            GL_TEXTURE_ID = GL.glGenTextures(1)
            GL.glBindTexture(GL.GL_TEXTURE_2D, GL_TEXTURE_ID)
            GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1)
            GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, 3, texture_surface.w, texture_surface.h,
                            0, texture_format, GL.GL_UNSIGNED_BYTE, texture_surface)
            return GL_TEXTURE_ID
        return None

And the error that gives looks like this:

    TypeError: ("No array type handler for type <class 'sdl2.surface.SDL_Surface'> (value <sdl2.surface.SDL_Surface object at 0x030762B0>) registered",
                <OpenGL.GL.images.ImageInputConverter object at 0x02EC49F0>)

I'm using Python 3, so using something like PIL won't work for me. Any ideas how to get the texture loaded, preferably with PySDL2?
1
OpenGL ES picking object

I have seen a lot of OpenGL ES picking code, but nothing worked. Can someone tell me what I am missing? My code is (from tutorials/forums):

    Vec3 far = Camera.getPosition();
    Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0);
    Vec3 direction = far.sub(near);
    direction.normalize();

    Log.e("direction", direction.x + " " + direction.y + " " + direction.z);

    Ray mouseRay = new Ray(near, direction);

    for (int n = 0; n < ObjectFactory.objects.size(); n++) {
        if (ObjectFactory.objects.get(n) != null) {
            IObject obj = ObjectFactory.objects.get(n);
            float discriminant, b;
            float radius = 0.1f;

            b = mouseRay.getOrigin().dot(mouseRay.getDirection());
            discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius * radius;
            discriminant = FloatMath.sqrt(discriminant);
            double x1 = b + discriminant;
            double x2 = b - discriminant;

            Log.e("asd", obj.getName() + " " + discriminant + " " + x1 + " " + x2);
        }
    }

My camera vectors:

    // cam
    Vec3 position = new Vec3(obj.getPosX() + x, obj.getPosZ() + 0.3f, obj.getPosY() + z);
    Vec3 direction = new Vec3(obj.getPosX(), obj.getPosZ(), obj.getPosY());
    Vec3 up = new Vec3(0.0f, 1.0f, 0.0f);
    Camera.set(position, direction, up);

And my picking code:

    public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) {
        int[] viewport = getViewport();
        float[] modelview = getModelView();
        float[] projection = getProjection();
        float winX, winY;
        float[] position = new float[4];
        winX = (float) mouseX;
        winY = (float) Shared.screen.width - (float) mouseY;
        GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0);
        return new Vec3(position[0], position[1], position[2]);
    }

My camera is moving all the time in 3D space, and my actor models are moving too. The camera follows one actor model, and the user can move the camera on a circle around this model. How can I change the above code to make it work?
1
How can I identify loaded models so I can efficiently avoid loading them twice?

I'm making a fairly basic OpenGL 3D engine at the moment. In this engine, when you load an object, you would write:

    manager->LoadObjFile("cube.obj");
    manager->AddParent("uvmap.bmp", "cube.obj", "cube", glm::vec3(10.0f, 1.0f, 0.0f));

"uvmap.bmp" being the texture, "cube.obj" being the .obj file path, and "cube" being the ID for the object. My question is: at the moment, when it loads an object, it stores the object in a struct with all of the buffer data and an ID, the ID being the path of the obj file. So when I load an .obj, it runs through all of the already loaded files and compares the ID of each using strcmp, which obviously, when there are thousands of items, is going to start to get a bit slow. It does the same for AddParent and most of my other methods (a different vector of data than where the obj data is stored). What would be best in terms of identifying items, an integer? I really don't know. I want this to be as high-level and user-friendly as possible, but I want to keep it as optimized and as efficient as possible.
1
Assimp, Blender and model rotating

There is something I can't figure out. I have two models: grass.blend and tower.blend. Note the X, Y and Z axes. The problem is that when I load these models using Assimp, the scene looks like this: the tower should be placed vertically, but for some reason it is placed horizontally. I didn't rotate it this way. Why is this happening? A project example is here (the tower has been rotated the right way in Blender for now). The tower model can be downloaded from blendswap.com.
1
How can I deal with vertex precision errors between terrain chunks?

I am using OpenGL to render the following scene, using vertex data from one of the map files of a popular MMORPG. The data is chunked, and the pictured scene is made up of 256 (16x16) chunks. However, between the chunks I am seeing what I believe to be glitches due to float precision. Yet I am not doing any large translations and am rendering very close to the origin; should I be seeing this problem? The glitches are very noticeable as you fly over the scene, with it flickering at different points along the chunk edges. I was wondering if I was overlooking something simple, or whether there are tricks other games use to mask this problem. I guess the source data could be the reason for this problem, but I am not aware of it happening in the original game. View the screenshots in a new window to see the issue.
1
GLFW mouselook under OSX

I'm continuing to port an OpenGL app from Visual Studio 2012 to Xcode 5. The only major issue I'm having is mouselook: it "doesn't work" under OSX (Mavericks). Here's the (pseudo)code.

Pre-update, executed at the start of the frame:

    void Dispatcher::PreUpdate(GameTime& time)
    {
        m_currentKeyboardState = m_nextKeyboardState;
        m_previousMousePosition = m_currentMousePosition;
        GetMousePosition(m_currentMousePosition);
    }

Post-update, executed, well, after input processing, at the end of the frame:

    void Dispatcher::PostUpdate(GameTime& time)
    {
        m_previousKeyboardState = m_currentKeyboardState;
        if (m_captureMouse)
            RecenterMouse();
    }

Method to recenter the mouse:

    void Dispatcher::RecenterMouse()
    {
        glfwSetCursorPos(m_window, m_windowSize.x / 2.f, m_windowSize.y / 2.f);
        GetMousePosition(m_currentMousePosition);
        if (m_previousMousePosition != m_currentMousePosition)
        {
            cout << "Mouse " << m_currentMousePosition.x << "," << m_currentMousePosition.y << endl;
            m_previousMousePosition = m_currentMousePosition;
        }
    }

And, just for completeness:

    void Dispatcher::GetMouseMotion(Vector2& movedBy)
    {
        if (m_captureMouse)
        {
            movedBy.x = (m_currentMousePosition.x - m_previousMousePosition.x);
            movedBy.y = (m_currentMousePosition.y - m_previousMousePosition.y);
        }
        else
        {
            movedBy.x = movedBy.y = 0;
        }
    }

If m_captureMouse is true, and m_previousMousePosition != m_currentMousePosition, the y delta is always 1, no matter how far I've moved the mouse. (Actually, I'm using the trackpad, but it should be identical.) This code works perfectly under Win7, but never returns any motion other than (0,1) under OSX. Suggestions? I've seen other questions regarding GLFW and OSX, but no solutions.

EDIT: It's worth noting that if I turn off mouse capture (which basically does everything but recenter the mouse at the end of the frame), mouselook works correctly, but, of course, the cursor is free to wander outside of the window, which doesn't really work for a first-person app.
1
OpenGL, Blender and model optimization

I'm learning OpenGL and there is something I don't understand regarding model loading. Let's say I found a free .blend model of a tree. The problem is that currently in my program all models have only one texture, but this model uses two textures: one for the trunk and another for the branches and leaves. Actually, the trunk and leaves are two separate models in one scene. What to do in this situation? Editing the model manually could take a long time (I'm not an expert in Blender). I could join the two textures into one and correct the UVs automatically, but most likely a lot of space would be wasted on such an automatically created texture. I could also modify my model format (it's more like a memory dump header, actually) so I could store N models with N textures as one object. But using N times more VBOs and textures when you actually need only one is inefficient. I'm wondering, maybe there is a well-known tool for automatically joining multiple objects into one? Or maybe some good practice for this case? Or something like this.
1
Is there a way to start with OpenGL 3.0 without needing to write my own shaders?

I'm starting with OpenGL and found out that from OpenGL 3.x on you must write your own shaders (I think it's obligatory). Am I right here? I have done some research but I can't seem to find the answer. If it's necessary to write the shaders for OpenGL 3, then I can still use 2.x, right? Just because I'm only starting out, and OpenGL seems a little lower-level than I'm accustomed to, I think I'll have a really hard time writing a shader myself. P.S. My GPU only supports up to OpenGL 3.0. I think it's because of the drivers; I'm running a Linux distro that only has FLOSS (no proprietary software).
1
LWJGL: Mixing 2D and 3D

I'm trying to mix 2D and 3D using LWJGL. I have written two little methods that allow me to easily switch between 2D and 3D:

    protected static void make2D() {
        glEnable(GL_BLEND);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 0.0f, 1.0f);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glLoadIdentity();
    }

    protected static void make3D() {
        glDisable(GL_BLEND);
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        // Reset The Projection Matrix
        GLU.gluPerspective(45.0f, ((float) SCREEN_WIDTH / (float) SCREEN_HEIGHT), 0.1f, 100.0f);
        // Calculate The Aspect Ratio Of The Window
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        glLoadIdentity();
    }

Then in my rendering code I would do something like:

    make2D();
    // draw 2D stuff here
    make3D();
    // draw 3D stuff here

What I'm trying to do is to draw a 3D shape (in my case a quad) and a 2D image. I found this example and took the code from TextureLoader, Texture and Sprite to load and render a 2D image. Here is how I load the image:

    TextureLoader loader = new TextureLoader();
    Sprite s = new Sprite(loader, "player.png");

And how I render it:

    make2D();
    s.draw(0, 0);

It works great. Here is how I render my quad:

    glTranslatef(0.0f, 0.0f, 30.0f);
    glScalef(12.0f, 9.0f, 1.0f);
    DrawUtils.drawQuad();

Once again, no problem; the quad is properly rendered. DrawUtils is a simple class I wrote containing utility methods to draw primitive shapes. Now my problem is when I want to mix both of the above: loading/rendering the 2D image and rendering the quad. When I try to load my 2D image with the following:

    s = new Sprite(loader, "player.png");

my quad is not rendered anymore (I'm not even trying to render the 2D image at this point). Only the act of creating the texture causes the issue. After looking a bit at the code of Sprite and TextureLoader, I found that the problem appears after the call to glTexImage2D. In the TextureLoader class:

    glTexImage2D(target, 0, dstPixelFormat,
                 get2Fold(bufferedImage.getWidth()),
                 get2Fold(bufferedImage.getHeight()),
                 0, srcPixelFormat, GL_UNSIGNED_BYTE, textureBuffer);

Commenting out this line makes the problem disappear. My question is then: why? Is there anything special to do after calling this function to do 3D? Does this function alter the render part, the projection matrix?
1
Rotating between two coordinate frames

I have two coordinate frames, A and B. I want to create the rotation matrix R_AB which takes you from A to B. A is a right-handed system, and B is a left-handed system. Furthermore, after moving from a right- to a left-handed system, there is a further rotation. These two coordinate frames are illustrated in the image below. As can be seen, it appears that the axes in B are simply the negatives of the axes in A. So, my first attempt was to simply make these negative in the rotation matrix:

    R_AB = [ -1  0  0 ]
           [  0 -1  0 ]
           [  0  0 -1 ]

(sorry, I don't know how to write matrices properly here...) However, after working through some examples, this did not work out. Please could somebody answer either (or both!) of these questions: (a) Why does my above solution not hold? (b) What is the correct rotation matrix R_AB?
1
Why does accessing a uniform float make my shader more than twice as slow?

My fragment shader was significantly slowed down by a recent change, and I've been trying to understand why. I isolated the main slowdown to accessing a single particular uniform float. If I include this line:

    float not_used = my_uniform;

then the shader runs more than twice as slowly as it does without this line. The not_used float is never referenced again. Why would this be happening? I hope to understand it so that I can try to come up with a workaround that runs more quickly. I'm running this on a Mac with Intel HD Graphics 3000. I'm measuring the performance by making OpenGL timestamp queries before and after executing my glDraw calls and looking at the millisecond intervals. I can provide more specs/details if it would be helpful in diagnosing the problem.
1
Texture is black when manually building mipmap

I am trying to manually build a mipmap from a series of images. For the sake of brevity, let's assume the files containing the images I want (from 256x256 to 32x32) have paths file_1...4. What I do to load the texture is:

    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 4);

    // for i from 1 to 4:
    unsigned char* image = stbi_load_from_file(file_i, &width, &height, &comp, 3);
    glTexImage2D(GL_TEXTURE_2D, (i - 1), GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
    stbi_image_free(image);

When I go and try to use this created texture in a render, all I get is a black result. Using each texture individually works properly, so I have ruled out that they're malformed. What can it be?
1
Spherical harmonics lighting interpolation

I want to use hardware filtering to smooth out colors in texels of a texture when I'm accessing texels at coordinates that are not directly at the center of the texel; the catch being that the texels store 2 bands of spherical harmonics coefficients (4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mipmapping) without any considerations? In other terms: if I were to first convert the coefficients back to intensity representations, then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to intensities?
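A hedged worked note (my own reasoning, not from the post): reconstruction from SH coefficients is a dot product of the coefficient vector c with the basis Y evaluated at the direction, so it is linear in the coefficients, and the two orders of operation agree:

    reconstruct(lerp(c1, c2, t), dir) = dot(lerp(c1, c2, t), Y(dir))
                                      = (1 - t) * dot(c1, Y(dir)) + t * dot(c2, Y(dir))
                                      = lerp(reconstruct(c1, dir), reconstruct(c2, dir), t)

This covers the pure interpolation question; any non-linear step applied after reconstruction (clamping, gamma, etc.) would of course break the equivalence.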
1
Sprite with alpha blending in a 3D world

I'm working on a game in a 3D world with 2D-only elements (like the game Don't Starve) for Android and iOS. Currently, I've managed "sprites" without alpha blending; I've just put a condition in the pixel shader to test if the pixel's alpha channel is null:

    if (texture.a < 0.5)
        discard;

Everything is working here. (On old devices like an HTC Desire, this condition destroys the framerate, but that's another problem.) But recently, I've tried to add another sprite with alpha blending activated:

    glCheck( glEnable(GL_BLEND) );
    glCheck( glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) );
    glCheck( glDisable(GL_DEPTH_TEST) );

My sprite transparency is OK, but it is now in front of everything in the scene (probably due to glDisable(GL_DEPTH_TEST)). How can I handle depth with alpha blending activated?

EDIT: A video to show the problem: http://www.youtube.com/watch?v=W0gpBekEwW8&feature=youtu.be
1
Managing Game Entity coordinates

In my game I currently have "scene coordinates", which are the X, Y coordinates relative to a game scene. In that scene there are game entities; let's say there's a GameEntity A at x = 100.0, y = 100.0 (scene coordinates). What is the best way to have "entity coordinates", so that from GameEntity A's point of view it is standing at x = 0.0, y = 0.0, with OpenGL transformations? That way, if I apply something like glRotatef(angle, 0, 0, 1), the entity will rotate around its own origin and not the scene's "global" origin.
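A hedged sketch of the usual fixed-function pattern (my own example, not from the post): push a matrix, translate to the entity's scene position, then rotate, so the rotation happens about the entity's own origin while its geometry stays expressed around (0, 0):

    glPushMatrix();
    glTranslatef(100.0f, 100.0f, 0.0f);   // the entity's scene coordinates
    glRotatef(angle, 0, 0, 1);            // now rotates about the entity's own origin
    drawEntityA();                        // hypothetical call that draws geometry around (0, 0)
    glPopMatrix();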
1
How can I draw multiple lines connected via "nodes" in libgdx Scene2D?

I have two Vectors which indicate the end points of each line. I am trying to draw lines similar to football formation lines like these. My main problems using ShapeRenderer are:

- ShapeRenderer coordinates are relative to screen coordinates and not the Scene/Table/Stack. This creates varying coordinate points on different screen sizes. Although I can fix it, I still get varying line widths on different screen sizes.
- ShapeRenderer performance issues, as it is mainly intended for debugging.
- ShapeRenderer draws over all Actors.

I have tried drawing the lines as a Pixmap and adding them to the scene as an image. This is part of the constructor of my custom class, which extends Group:

    pixmap = new Pixmap(16, 16, Pixmap.Format.RGBA8888);
    pixmap.setColor(Color.RED);
    pixmap.drawLine(Ax, Ay, Bx, By);
    Texture pixmaptex = new Texture(pixmap, Pixmap.Format.RGB888, false);
    Image pixmapimg = new Image(pixmaptex);
    pixmapimg.setPosition(x, y);
    this.addActor(new Image(pixmaptex));
    pixmap.dispose();

I am trying to draw using this, but it just draws a square with the pixmap width and height at (0,0) of the stack. Which is the best class to extend to draw these lines? I am prepared to write a custom class if this doesn't work.
1
What state is stored in an OpenGL Vertex Array Object (VAO) and how do I use the VAO correctly?

I was wondering what state is stored in an OpenGL VAO. I've understood that a VAO contains state related to the vertex specifications of buffered vertices (what attributes are in the buffers, and what buffers are bound, ...). To better understand the correct usage of VAOs, I'd like to know exactly what state they hold.

How I assume VAOs should be used: from simple examples, I've understood that correct usage of VAOs is as follows.

Setup:
- Generate VAO
- Bind VAO
- Specify vertex attributes
- Generate VBOs
- Bind VBOs
- Buffer vertex data into VBOs
- Unbind VBOs
- Unbind VAO

Rendering:
- Bind VAO
- Draw
- Unbind VAO

From this, I assume that at least the vertex buffer bindings and the vertex attribute specifications are stored in the VAO. I'm unsure, however, how this usage pattern extends to situations where (multiple) textures and (multiple) shader programs come into play. Is the active shader program stored in the VAO? And are the texture bindings (with their sampling/wrapping settings) stored in the VAO as well? Ditto for uniforms?

Therefore, my questions are:
- What exact state is stored in an OpenGL VAO? (VBO bindings, attribute specifications, active shader program, texture bindings, texture sampling/wrapping settings, uniforms...?)
- How do I correctly use VAOs in a more complex rendering setup where (multiple) textures with associated sampling/wrapping settings, (multiple) shader programs and uniforms are involved?
1
OpenGL depth problem I am drawing two cubes, first a red one and then a green one in front of the red one. When I rotate the camera I can see the green cube through the red cube. I have enabled depth test and I clear depth buffer. I'm using OpenGL 4.4 core profile and a simple diffuse shader by the way.
1
Time to render each frame is proportional to the amount of models in the scene

This question is deliberately written in a "high level" manner to avoid screeds and screeds of code snippets; hopefully I can get my point across. I am using C and OpenGL. I have a game engine, and in the engine I have my "game loop" that is called every frame. One of the things in this game loop is my call to Render(), before a call to SwapBuffers(); nothing new there. Essentially my engine draws the scene using a scene graph, that is, I have GameObjects and each object has a GameComponent. So when Render() is called in the "game loop", I get the root GameObject and render its GameComponent, then move on to the next GameObject and render its GameComponent, etc., until all GameObjects and each of their GameComponents are drawn using glDrawElements(...). One of the GameComponents a GameObject can have is a MeshRenderer, that is, a Mesh paired with a Texture. In my scene I want to add loads of tree models, so I create, for example, 50 GameObjects that each have a MeshRenderer component (a tree) that will eventually, in the game loop described above, be drawn.

The issue with this is that the time it takes to render a frame is directly proportional to the number of tree models I have (O(N), I think, is how you word it). For example, with 100 trees the render time is 23 ms, with 200 trees it is 46 ms, with 400 trees it is 90 ms, and so on. This is terrible; I need at least 1000 trees in my "game world" and even at around 400 my game is "laggy". Because the time to draw a frame is proportional to the number of trees, I know that each frame it is essentially re-drawing each mesh (every frame drawing 100 trees, then 200 and so on, and the time to draw scales with it).

My question is this: how can I store the mesh data so that I don't need to draw the meshes from scratch every frame, but can draw them once and then re-use that data for the next frame without re-drawing it all? Is there a way to first use glDrawElements(...), then store that mesh data (all the vertices etc.) and then, every frame after, call something like, for example, glDrawExistingElements() instead of glDrawElements(...)? Ultimately meaning that for consecutive frame draws the time is exactly the same whether I have 1 tree or 1000 trees, because the system is simply saying "here is the scene I already drew", instead of "let's draw it all again from scratch".
1
Using multiple shaders in OpenGL 3.3

Guys, I have a question on how to use multiple shaders in my app. The app is simple: I have a 3D scene, say a simple game, and I want to show some 2D GUI in the scene. I was following this tutorial on how to add font rendering to my scene. One difference is that I am using Java and LWJGL, but everything is implemented as in the tutorial. So I have 2 sets of shaders (2 programs): the 1st handles the 3D models and the scene in general. I added the second set of shaders, which I just copied from the tutorial. Here they are.

Vertex:

    #version 330
    in vec2 position;
    in vec2 texcoord;
    out vec2 TexCoords;
    uniform mat4 projection;

    void main()
    {
        gl_Position = projection * vec4(position, 0.0, 1.0);
        TexCoords = texcoord;
    }

And fragment:

    #version 330
    in vec2 TexCoords;
    out vec4 color;
    uniform sampler2D text;
    uniform vec3 textColor;

    void main()
    {
        vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);
        color = vec4(textColor, 1.0) * sampled;
    }

I compile the shaders and link them into separate programs (so I have modelProgram and fontProgram). However, when I run my application, I see errors in the console (although the application runs fine):

    WARNING: Output of vertex shader 'TexCoords' not read by fragment shader
    ERROR: Input of fragment shader 'vNormal' not written by vertex shader
    ERROR: Input of fragment shader 'vTexCoord' not written by vertex shader
    ERROR: Input of fragment shader 'vPosition' not written by vertex shader

As you can see, TexCoords is an out variable in font.vs.glsl and the other 3 are in variables in model.fs.glsl, so they belong to the other set of shaders, the other program. My question is why this happens. It looks like the pipeline tries to combine one program with another, although the application runs smoothly. The other problem I have is that I do not see any text rendered. I don't know whether this is caused by the above, or whether it happens because of something else. Any help will be appreciated! Thank you.
1
clamp a 2D coordinate to fit within an ellipse I need to clamp a 2D coordinate to fit within an ellipse. Call of Duty Modern Warfare 2 does something similar where capture points are translated from a 3D vector in the world to a 2D screen coordinate and then the 2D coordinates are clamped within an ellipse. When the capture points are in view they're within the bounds of the ellipse. When they're behind you they are clamped to be within the bounds of the ellipse. Given a 2D coordinate that could be off screen, etc, what is the math behind clamping it within an ellipse?
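A hedged sketch of one way to do the clamp (my own math, not from the post), assuming the point should be pulled straight toward the ellipse centre, which matches the described behaviour for off-screen objectives: scale the offset into the ellipse's unit-circle space, and if it lies outside the unit circle, divide by its length:

    // centre (cx, cy), semi-axes a (horizontal) and b (vertical), point (px, py)
    float dx = px - cx, dy = py - cy;
    float t = std::sqrt((dx * dx) / (a * a) + (dy * dy) / (b * b));
    if (t > 1.0f) {          // outside the ellipse: pull it back onto the boundary
        px = cx + dx / t;
        py = cy + dy / t;
    }

t <= 1 means the point is already inside the ellipse and is left alone.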
1
Calculate distance from the centre to the edge of a cube in OpenGL

I'm trying to calculate the distance between a central point on a cube and anywhere on the surface of the cube, depending on the location of a second 3D point. I have two vertices in 3D: Q and P. Q is the centre of the cube, P is a random point outside of the cube. I also know the size of the cube, where the distance across is d. If I have the normalised vector u, which is Q -> P, how do I calculate the distance from the centre to where it hits the surface of the cube? I could not find the answer to this anywhere on Stack Exchange.
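A hedged sketch of the math (my own derivation, not from the post), assuming the cube is axis-aligned and centred at Q with edge length d: the ray Q + t*u exits through the face whose axis has the largest absolute component of u, so:

    // u is the normalised direction from Q toward P, d is the cube's edge length
    float halfSize = d * 0.5f;
    float m = std::max(std::abs(u.x), std::max(std::abs(u.y), std::abs(u.z)));
    float distanceToSurface = halfSize / m;

If the cube can be rotated, rotate u into the cube's local frame first and apply the same formula.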
1
Protecting game assets through archiving

I'm writing a game in C (with OpenGL) and am getting quite far into development. Currently I'm loading the data directly from different directories (e.g. I load textures from Data/Textures/...png). This is fine, but not preferable. Ideally I would like to protect these assets without too much of a performance hit. I looked at other games that I own, both indie and AAA, and found that the majority of them archive the data. However, I am unable to access these archives with archiver programs like WinRAR, which implies that they have some level of protection. I'm not particularly knowledgeable in the field of encryption and data storage, so I may be overlooking something obvious, but I would like to know how exactly I could achieve something similar. (I am aware that competent individuals are able to use the GPU to gain access to the data, but that's not preventable.)
Make openGL program only update every 1 60 seconds I'm learning C and openGL and have this program as a result from tutorials and playing around. The problem is that the main loop is running at "full speed", making the program unnecessarily cpu intensive. I have managed to make it only perform rendering every 16.7 ms or so, but the outside loop that is waiting to render still is iterating as fast as my computer can handle. This is the full main.cpp http pastebin.com KaCW7wZw This is the main loop at line 95 while (!terminate) if (SDL GetTicks() gt time start frame rate) time start SDL GetTicks() while (SDL PollEvent( amp event)) if (event.type SDL QUIT) terminate true break glClearColor(0.1f, 0.1f, 0.1f, 1.0f) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) renderLoop(entities, amp counter) SDL GL SwapWindow(m window) counter 1.0f I've tried googling and haven't found anything concrete (obviously), except that using some kind of sleep is bad design for these loops. So how can I alter this to slow the whole thing down, and avoid having 100 cpu usage for a program that's not doing much at the moment?
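One common fix, sketched rather than lifted from the pastebin code: measure how long the frame actually took and hand the leftover time back to the OS with SDL_Delay, so the outer loop stops spinning (FRAME_MS is an illustrative constant; this assumes SDL2).

const Uint32 FRAME_MS = 1000 / 60;              // roughly 16.7 ms per frame

while (!terminate)
{
    Uint32 frameStart = SDL_GetTicks();

    // ... poll events, update, render, SDL_GL_SwapWindow(m_window) ...

    Uint32 elapsed = SDL_GetTicks() - frameStart;
    if (elapsed < FRAME_MS)
        SDL_Delay(FRAME_MS - elapsed);          // sleep off the remainder of the frame
}

SDL_Delay is only millisecond-accurate, but that is usually close enough; the other common option is simply enabling vsync with SDL_GL_SetSwapInterval(1) and letting the buffer swap block for you.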
OpenGL Understanding the relationship between Model, View and World Matrix I am having a bit of trouble understanding how these matrixes work and how to set them up in relation to one another to get a proper system running. In my understanding the Model Matrix is the matrix of a object, for example a cube or a sphere, there will be many of these in the application game. The World Matrix is the matrix which defines the origin of the 3D world. the starting point. And the View Matrix is the "camera" everything gets translated with this to make sure you have the illusion of an actual camera when in fact everything is moving instead of this matrix? I am a bit lost here. So I was hoping someone here could help me understand this properly. Does every modelMatrix get translated multiplied with the world matrix and the worldMatrix then with the viewMatrix? Or does every modelMatrix get translated multiplied with the viewMatrix and then that with the worldMatrix? How do all these matrixes relate and how do you set up a world with multiple objects and a "camera"? EDIT Thanks a lot for the feedback already. I did some googling aswel and I think I do understand it a bit better now, however would it be possible to get some pseudo code advice? projectionMatrix Matrix makePerspective(45, width, height, 0.1, 1000.0, projectionMatrix) modelMatrix Matrix identity(modelMatrix) translate(modelMatrix, 0.0, 0.0, 10.0 ) move back 10 on z axis viewMatrix Matrix identity(viewMatrix) do some translation based on input with viewMatrix Do I multiply or translate the viewMatrix with the modelMatrix or the other way around? and what then? I currently have a draw method up in such a way that it only needs 2 matrixes for arguments to draw. Here is my draw method draw(matrix1 matrix2) bindBuffer(ARRAY BUFFER, cubeVertexPositionBuffer) vertexAttribPointer(shaderProgram.getShaderProgram().vertexPositionAttribute, cubeVertexPositionBuffer.itemSize, FLOAT, false, 0, 0) bindBuffer(ARRAY BUFFER, cubeVertexColorBuffer) vertexAttribPointer(shaderProgram.getShaderProgram().vertexColorAttribute, cubeVertexColorBuffer.itemSize, FLOAT, false, 0, 0) bindBuffer(ELEMENT ARRAY BUFFER, cubeVertexIndexBuffer) setMatrixUniforms(shaderProgram, matrix1, matrix2) drawElements(TRIANGLES, cubeVertexIndexBuffer.numItems, UNSIGNED SHORT, 0) What are those matrixes suppose to be? Thanks a lot in advance again guys.
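As a rough sketch of how the matrices are usually combined per object (GLM-style calls used purely for illustration; width, height, cameraPos, cameraTarget and objectPos are placeholders): each object gets its own model matrix, the camera contributes one view matrix, the projection comes last, and the product is applied right to left.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 projection = glm::perspective(glm::radians(45.0f), width / (float)height, 0.1f, 1000.0f);
glm::mat4 view       = glm::lookAt(cameraPos, cameraTarget, glm::vec3(0, 1, 0));  // the "camera": inverse of its world placement
glm::mat4 model      = glm::translate(glm::mat4(1.0f), objectPos);                // this object's placement in the world

glm::mat4 mvp = projection * view * model;   // model space -> world space -> view space -> clip space
// pass mvp (or projection*view and model separately) to your draw(matrix1, matrix2)

There is no separate "world matrix" in this scheme: the model matrix is exactly the object-to-world transform, and the view matrix plays the role of moving the whole world in front of the fixed camera.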
Gamma Space and Linear Space with Shader I am using Unity and I can choose between two color space mode in the settings Gamma or Linear Space. I am trying to build a Custom Lighting Surface shader but I am facing some problems with those Color Space. Because the render is not the same depending of the Color Space. If I render the lightDir, Normal or viewDir I can see that they are different depending of the Color Space I use. I made some test and the result I have in Linear Space is great but how can I obtain the same result in Gamma Space ? Are there some transformations ? On what component should I apply those transformations ? Thank you very much !
Client-side arrays in the OpenGL 3.3 core profile Is this possible in the core profile, i.e. drawing without a VBO? In the OpenGL 3.0 compatibility profile I can draw this way:
GLint position_index = attrib_location_get("VertexPosition")
gl EnableVertexAttribArray(position_index)
gl VertexAttribPointer(position_index, 3, gl FLOAT, false, 0, pos_Data)
gl DrawArrays(gl TRIANGLES, 0, count_of_vertices)
But in the OpenGL 3.3 core profile the same code displays a blank screen. Is that the expected behaviour?
What to do with unused vertices? Imagine a vertex array in OpenGL representing blocks in a platform game, where some vertices may go unused. The environment is dynamic, so at any moment some vertex may suddenly become invisible. What is the best way to avoid drawing them? Graphics cards are complicated and it's hard to predict which approach is best. The options I can think of: delete the vertex and move every vertex after it to fill the freed space (sounds extremely inefficient); set its position to 0; or set its transparency to maximum. I could of course benchmark, but what works fastest on my computer won't necessarily be fastest on someone else's.
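One option that avoids touching the vertex data at all, sketched under the assumption that you draw with an index buffer (Block, b.first and ibo are invented names): keep the vertex buffer as-is and rebuild just the index buffer whenever visibility changes, so hidden blocks are simply never referenced.

#include <vector>

std::vector<GLuint> indices;
indices.reserve(blocks.size() * 6);                    // e.g. 6 indices per quad

for (const Block& b : blocks)
    if (b.visible)
        indices.insert(indices.end(),
                       { b.first, b.first + 1, b.first + 2,       // two triangles of the block's quad
                         b.first + 2, b.first + 3, b.first });

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint), indices.data(), GL_DYNAMIC_DRAW);
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr);

An index buffer of a few thousand entries is cheap to re-upload, while the alternatives listed above (zeroed positions, full transparency) still pay vertex processing for geometry that contributes nothing.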
Efficiently rendering to 3D texture I have an existing depth texture and some other color textures, and want to process the information in them by rendering to a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x y) in the depth texture will be rendered to (x y texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to what slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively for each slice for each texel in depth texture process color textures and render to slice So I have to sample the depth texture completely per each slice, and I also have to go through the processing (at least until to discard ) for all texels in it. It would be much faster if I could rearrange the process to for each texel in depth texture figure out what slice it should end up in process color textures and render to slice Is this possible? If so, how? What I'm actually trying to do the color textures contain lighting information (as seen from light view, it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically I'm trying to implement Cryteks Light Propagation Volumes algorithm.
Rendering Performance num Draw Calls num Texture Bindings I'm making a game with Libgdx 1.6.4 and experience some lag issues on iPhone 4 and then discovered in the constructor GLProfiler.enable() ... in the render method Gdx.app.debug("draw calls " GLProfiler.drawCalls) Gdx.app.debug("texture bindings " GLProfiler.textureBindings) GLProfiler.reset() What the log shows is that the number of draw calls is always equal to the number of texture bindings. Do you know if this is ok? It seems strange because I have all images as TextureRegion in only one TextureAtlas with size 1024x512. Isn't it supposed to be that I will have for example 50 draw calls and only 1 2 texture binding instead of 50 draw calls and 50 texture bindings. Can this be the source of the lag? Another clue may be, that I use a SpriteBatch and Scene2d Stage at the same time, but they both use the same TextureAtlas. The Scene2d Skin is loading the TextureAtlas and SpriteBatch draws TextureRegions from the skins' regions. Thank you for any help. Update The main SpriteBatch (it's only one) is making 9 totalRenderCalls. The value of maxSpritesInBatch is 26. These values seem normal from what I read in the Docs and the FPS during rendering with the SpriteBatch is 60 FPS. Which is ok, no problem there. The problem with the lag is when I use a Scene2D Stage to display a Dialog above the GameScreen. The Stage has it's own batch which I don't manipulate directly. When the Dialog is displayed the frame rate drops to 49 50 and the Animations in the Dialog are crappy. I guess the problem has something to do with Texture Binding of the main SpriteBatch and the Stage's SpriteBatch.
How should I do 3D games through Java on a mac? I have been self teaching myself Java on the mac mostly because the language is cross platform. Recently, I have been only able to develop 2D games using the Graphics2D class. Now, I want to learn how to make 3D games in Java. I used to model and animate stuff in 3D, so my knowledge of 3 Dimensional stuff is okay. I have spent the last 3 hours using google to look up ways of making 3D games in java. Apparently the best one to use is OpenGL, so i looked up a tutorial on it and i cannot find a tutorial that shows how to (if there is a way) install JOGL on the Mac platform. Should i continue to use Java? How can i make 3D games using Java? What is the best way to make 3D games on a mac?
What is the border parameter used for? I'm learning OpenGL and have a question. While reading about texturing I found the border parameter in the glCopyTexImage and glCompressedTexImage functions. The book "OpenGL Programming Guide" says that border is reserved and must be zero. If it is reserved, why is it a parameter at all? P.S. My one guess is that it relates to texture borders, as in "don't read data outside that border". Is that it?
Managing many draw calls for dynamic objects We are developing a game (cross platform) using Irrlicht. The game has many (around 200 500) dynamic objects flying around during the game. Most of these objects are static mesh and build from 20 50 unique Meshes. We created seperate scenenodes for each object and referring its mesh instance. But the output was very much unexpected. Menu screen (150 tris Just to show you the full speed rendering performance of 2 test computers) a) NVidia Quadro FX 3800 with 1GB 1600 FPS DirectX and 2600 FPS on OpenGL b) Mac Mini with Geforce 9400M 256mb 260 FPS in OpenGL Now inside the game in a test level (160 dynamic objects counting around 10K tris) a) NVidia Quadro FX 3800 with 1GB 45 FPS DirectX and 50 FPS on OpenGL b) Mac Mini with Geforce 9400M 256mb 45 FPS in OpenGL Obviously we don't have the option of mesh batch rendering as most of the objects are dynamic. And the one big static terrain is already in single mesh buffer. To add more information, we use one 2048 png for texture for most of the dynamic objects. And our collision detection hardly and other calculations hardly make any impact on FPS. So we understood its the draw calls we make that eats up all FPS. Is there a way we can optimize the rendering, or are we missing something?
RGB to xyY color space conversion and luminance The luminance calculated by following GLSL functions (fragment shaders tonemap) has different value float GetLuminance (vec3 rgb) return (0.2126 rgb.x) (0.7152 rgb.y) (0.0722 rgb.z) vec3 RGB2xyY (vec3 rgb) const mat3 RGB2XYZ mat3(0.4124, 0.3576, 0.1805, 0.2126, 0.7152, 0.0722, 0.0193, 0.1192, 0.9505) vec3 XYZ RGB2XYZ rgb return vec3(XYZ.x (XYZ.x XYZ.y XYZ.z), XYZ.y (XYZ.x XYZ.y XYZ.z), XYZ.y) I used a glm library to calculate an example result. For glm vec3(2.0f, 3.0f, 8.0f) GetLuminance returns 3.1484. RGB2xyY returns glm vec3 which z component is equal 3.8144. What is wrong ?
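One likely culprit worth checking rather than taking as given: both GLSL's and glm's mat3 constructors take their nine values in column-major order, so a matrix typed out row by row ends up transposed; XYZ.y is then computed with the coefficients 0.3576, 0.7152, 0.1192, which for (2, 3, 8) gives exactly 3.8144. A sketch of the fix on the glm side (the GLSL shader needs the same treatment):

#include <glm/glm.hpp>

// glm (like GLSL) fills matrices column by column, so list the columns of RGB -> XYZ...
const glm::mat3 RGB2XYZ(0.4124f, 0.2126f, 0.0193f,    // column 0
                        0.3576f, 0.7152f, 0.1192f,    // column 1
                        0.1805f, 0.0722f, 0.9505f);   // column 2

// ...or keep the row-major listing and transpose it once:
// const glm::mat3 RGB2XYZ = glm::transpose(glm::mat3(0.4124f, 0.3576f, 0.1805f,
//                                                    0.2126f, 0.7152f, 0.0722f,
//                                                    0.0193f, 0.1192f, 0.9505f));

glm::vec3 XYZ = RGB2XYZ * glm::vec3(2.0f, 3.0f, 8.0f);  // XYZ.y is now 3.1484, matching GetLuminance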
3D picking for mouse move events I implemented color picking recently and I would like to use it for a highlight effect when the mouse is over an object, but I am concerned about performance. Color picking requires the whole frame to be drawn again, effectively losing up to half of the performance (MouseMove can happen nearly every frame). Also, when the mouse moves it is very likely to be close to the previous point; is there any way to use this fact to improve the performance of 3D picking? Or is my best option some other picking technique, or an approximation of the result? tl;dr: if I wanted to do "realtime" 3D picking on MouseMove, what is the best technique performance-wise? EDIT: the exact numbers are even worse; when rendering 1M triangles the render time increases from 1.5 ms to 2-6 ms, with spikes up to 15 ms.
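A couple of common mitigations, sketched under the assumption that the colour-ID pass renders into its own framebuffer (pickFbo, renderColorIdPass, sceneChangedSinceLastPick, mouseX/mouseY and viewportHeight are placeholders): only redraw the ID pass when the scene or camera actually changed, and read back just the single pixel under the cursor rather than the whole frame.

// Re-render the ID pass only when something moved; otherwise the old buffer is still valid.
if (sceneChangedSinceLastPick)
{
    glBindFramebuffer(GL_FRAMEBUFFER, pickFbo);
    renderColorIdPass();                      // the existing colour-picking draw
    sceneChangedSinceLastPick = false;
}

// Read just the 1x1 region under the mouse.
unsigned char id[4];
glBindFramebuffer(GL_READ_FRAMEBUFFER, pickFbo);
glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, id);

Rendering the ID pass at a reduced resolution (say quarter size) is another easy cut, since a hover highlight rarely needs pixel-perfect precision.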
How should I move 2D objects in OpenGL ES 2? I am a bit confused about what I need in order to move a basic square. Should I use a translation matrix or just change the object's vertices? Which one is better? I use a simple vertex shader, gl_Position = myPMVMatrix * a_vertex, along with a VBO.
Software and Hardware Rendering From what I understand, the main difference between software rendering and hardware rendering is that in software rendering you use the CPU to determine what to color each pixel, and in hardware rendering you let the GPU take care of calculations and determine how to draw each pixel. However, in the end, isn't an API such as OpenGL still required to do the actual drawing no matter which method you pick? I saw a basic software rendering engine that someone built, and the person said he still used OpenGL to do the actual drawing after using the CPU to do the rendering. So is an API like OpenGL or DirectX still required in the end? And if so, is it true that a computer must at least have an integrated GPU or dedicated GPU to display anything at all?
BRDF Incorrect specular highlights I'm currently attempting to implement BRDF lighting, and am hitting a bit of a snag with my specular term the specular highlights aren't rendering correctly. To make things simple, I'm using a single directional light rendering down the Z axis only, and manually specify the gloss specularity of each of the following spheres. Glossyness increases left to right, and Specularity increases bottom to top. Notice how the highlights seem shifted nearly 90 degrees upwards Calculating the specular BRDF I use the generic Fresnel Shlick eq For the NDF, I use the GGX Trowbridge Reitz eq And for the Geometric Shadowing function, I use the Smith Shlick Beckmann eq vec3 BRDF Specular(vec3 F0, vec3 1minusF0, vec3 WorldPos, vec3 Normal) vec3 LightDirection vec3(0, 0, 1) vec3 ViewDirection normalize(EyePos WorldPos) vec3 HalfVec normalize(LightDirection ViewDirection) float NdotL clamp(dot(Normal, LightDirection), 0.0, 1.0) float NdotV clamp(dot(Normal, ViewDirection), 0.0, 1.0) float NdotH clamp(dot(Normal, HalfVec), 0.0, 1.0) float VdotH clamp(dot(ViewDirection, HalfVec), 0.0, 1.0) float Roughness max(1.0f Gloss, 0.0) Turn into roughness float Roughness 2 (Roughness Roughness) float NdotH 2 NdotH NdotH Normal Distribution Function GGX (Trowbridge Reitz) float D0 (NdotH 2 (a 2 1.0f) 1.0f) float D a 2 (M PI (D0 D0)) Fresnel Shlick vec3 Fs F0 1minusF0 pow(1.0 VdotH, 5.0) Geometric Shadowing Smith Shlick Beckmann float k a 2 sqrt(2.0 M PI) float GV NdotV (NdotV (1.0 k) k) float GL NdotL (NdotL (1.0 k) k) float G GV GL vec3 brdfspec vec3(max(( Fs D G ) (4.0f NdotL NdotV), 0.0)) return brdfspec I've been following so many different resources and mimicked many other implementations, and even reducing it down to this basic level I'm still having this issue. I don't know why I'm not getting proper results I'm using a deferred renderer, I derive the World position of each fragment from the depth buffer, and store my normals in viewspace. I don't use a "Metalness" map, but instead use a single rgb specular map, hense why my Fresnel calculation uses Vec3's instead of floats.
OpenGL Texture Loading I've been using OpenGL for a bit so I have a general idea of what I'm doing. Recently, I've been working on a framework to let me test stuff more easily. As part of the framework, I split up loading images into OpenGL into two parts, one where I load the data (ie from a file) and another part where I actually send the data to OpenGL. However, when I use the system I designed, only a black triangle is displayed. When I bypass the system I created and load the texture directly into OpenGL using SOIL's SOIL load OGL texture function, everything works with no other changes. Here's the relevant sections of code Main.cpp Section commented out to test loading a texture using SOIL instead Texture Texture2DSource texData texData.loadTextureFromFile("Background.png", Filesystem FOLDER TEXTURES) auto texture Texture Interface getSingleton().loadTexture(texData) Debugging function I added to access the texture ID directly GLuint textureID texture gt getTextureID() Load the texture using SOIL as opposed to my texture loading classes GLuint textureID SOIL load OGL texture( "C Users Username AppData Roaming Test Textures Background.png", SOIL LOAD RGBA, SOIL CREATE NEW ID, SOIL FLAG INVERT Y) glActiveTexture(GL TEXTURE0) This line is used when the SOIL texture loading is used glBindTexture(GL TEXTURE 2D, textureID) I hardcoded in 0 as the uniform ID just for testing normally I query the uniform ID and use that glUniform1i(uniformID, 0) Vertex Shader version 440 layout(location 0) in vec2 vertex layout(location 1) in vec2 uvData out vec2 uv void main() gl Position vec4(vertex, 0.0f, 1.0f) uv uvData Fragment Shader version 440 in vec2 uv out vec4 color uniform sampler2D textureSampler void main() color texture(textureSampler, uv) C code for loading a texture into memory Create variables to store the image dimensions and properties int width 0 int height 0 int channels 0 Load the image auto data SOIL load image(fileLocation.c str(), amp width, amp height, amp channels, SOIL LOAD RGBA) Resize the texture's vector to the same size as the array returned from SOIL m data.resize(width height) Copy the data from SOIL's array to the vector const uint32 t BYTES PER PIXEL 4 memcpy( amp m data 0 , data, width height BYTES PER PIXEL) Set the dimensions m x width m y height Flip the image Code omitted as irrelevant C code for loading the texture data into OpenGL Set up the texture storage GLuint textureID 1 glGenTextures(1, amp textureID) glBindTexture(GL TEXTURE 2D, textureID) Load the texture data glTexImage2D( GL TEXTURE 2D, Texture Target 0, Level of Detail level GL RGBA, Internal Format data.m x, Width data.m y, Height 0, Border (must be 0) GL RGBA, Pixel Data Format GL UNSIGNED BYTE, Data Type amp data.m data 0 Texture Data ) Set texture filtering options glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL NEAREST MIPMAP NEAREST) Create a Texture object and load the texture ID into it, then return it Texture lt GL TEXTURE 2D gt texture texture.m textureID textureID return texture Here's some more images This is what the window shows when using my texture loading process. This is what Nvidia NSight shows for the Fragment shader. This is what the window shows when bypassing my texture loading process. This is what Nvidia NSight shows when bypassing my texture loading process. 
NSight shows the texture as loaded properly for both texture loading processes, and everything else seems to be the same. Any ideas on what's causing the issue?
Efficient per frame constants in shaders I have some variables that stay the same during the entire frame and will be used by a large number of the various shaders I use (several dozens). These include things such as the various transforms of the camera, light source data, etc. The main options to provide this data to all the shaders (at least that I can think of) are Just give each shader uniforms for the data and set all uniforms manually when they're required Use buffers, such as Uniform Buffer Objects Shader Storage Buffer Objects Buffer Textures Which of these options is generally the most performance efficient, and why?
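For data shared by dozens of programs a uniform buffer object is the usual answer; here is a minimal sketch of the setup and the once-per-frame update, where the struct layout, binding point 0 and the variable names are arbitrary choices made for illustration:

#include <glm/glm.hpp>

struct PerFrame                      // must match a layout(std140) uniform block in the shaders
{
    glm::mat4 view;
    glm::mat4 projection;
    glm::vec4 lightPos;              // vec4 rather than vec3 to respect std140 alignment
};

// at startup:
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(PerFrame), nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);          // binding point 0, shared by every program

// once per frame:
PerFrame data = { view, projection, glm::vec4(lightWorldPos, 1.0f) };
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(PerFrame), &data);

Each program's block then needs to be tied to that binding point once (layout(binding = 0) in GLSL 4.2+, or glUniformBlockBinding otherwise), after which no per-shader uniform setting is required for this data. SSBOs and buffer textures work too, but for a handful of matrices and light parameters a UBO is typically the cheapest and most widely supported of the three.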
Noise when using SSLR (Screen Space Local Reflections) When I tried to apply reflections to my scene, I ran into the problem of noise My fragment shader code version 330 core uniform sampler2D normalMap in view space uniform sampler2D depthMap in view space uniform sampler2D colorMap uniform sampler2D reflectionStrengthMap uniform mat4 projection uniform mat4 inv projection in vec2 texCoord layout (location 0) out vec4 fragColor vec3 calcViewPosition(in vec2 texCoord) Combine UV amp depth into XY amp Z (NDC) vec3 rawPosition vec3(texCoord, texture(depthMap, texCoord).r) Convert from (0, 1) range to ( 1, 1) vec4 ScreenSpacePosition vec4(rawPosition 2 1, 1) Undo Perspective transformation to bring into view space vec4 ViewPosition inv projection ScreenSpacePosition Perform perspective divide and return return ViewPosition.xyz ViewPosition.w vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) dir 0.3f for (int i 0 i lt 20 i ) hitCoord dir vec4 projectedCoord projection vec4(hitCoord, 1.0) projectedCoord.xy projectedCoord.w projectedCoord.xy projectedCoord.xy 0.5 0.5 float depth calcViewPosition(projectedCoord.xy).z dDepth hitCoord.z depth if(dDepth lt 0.0) return projectedCoord.xy return vec2( 1.0f) void main() vec3 normal texture(normalMap, texCoord).xyz 2.0 1.0 float depth texture(depthMap, texCoord).r vec3 screenPos 2.0 vec3(texCoord, depth) 1.0 vec3 viewPos calcViewPosition(texCoord) Reflection vector vec3 reflected normalize(reflect(normalize(viewPos), normalize(normal))) Ray cast vec3 hitPos viewPos float dDepth float minRayStep 0.1f vec2 coords rayCast(reflected max(minRayStep, viewPos.z), hitPos, dDepth) if (coords ! vec2( 1.0)) fragColor mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r) else fragColor texture(colorMap, texCoord) Normal map Reflection strength map (only red chanel)
indices for surface of revolution I'd like to implement a surface of revolution. I already implemented creating Vertices based on a 2D line. I now want to get the indices to render the mesh with GL TRIANGLES Here's the code to create the 3D Vertices out of a 2D line void createVertices(vector lt Vector2 gt amp points,unsigned int iterations) int i int j unsigned int index vector lt Vertex gt newVertices vector lt unsigned int gt newIndices for(j 0 j lt points.size() j) for(i 0 i lt iterations i) Vector2 p points.at(j) float theta M PI 2.0 (float)i (float)iterations float x sinf(theta) p.x float z cosf(theta) p.x newVertices.push back(Vertex(x,p.y,z)) and here's the code that will be called to save the Vertices void addVertices(vector lt Vertex gt amp vertices,vector lt unsigned int gt amp indices) this gt vertices vertices this gt indices indices unsigned int verticesSize vertices.size() sizeof(Vertex) unsigned int indicesSize indices.size() sizeof(unsigned int) float vertexBuffer new float vertices.size() sizeof(Vertex) createBuffers(vertices,vertexBuffer) glGenVertexArrays(1, amp VAO) glBindVertexArray(VAO) glGenBuffers(1, amp verticesVBO) glBindBuffer(GL ARRAY BUFFER,verticesVBO) glBufferData(GL ARRAY BUFFER,verticesSize,vertexBuffer,GL STATIC DRAW) glGenBuffers(1, amp indicesVBO) glBindBuffer(GL ELEMENT ARRAY BUFFER,indicesVBO) glBufferData(GL ELEMENT ARRAY BUFFER,indicesSize, amp indices 0 ,GL STATIC DRAW) delete vertexBuffer someone has a basic idea?
Passing data into a vertex shader for the perspective divide In OpenGL and GLSL, I am just learning about perspective projection and the vertex shader. However, I am a little confused about what data actually needs to be passed to the vertex shader, and what needs to be done in the shader code itself. I have two questions. First, suppose I have a triangle defined in 3D coordinates (x, y, z). Do I need to pass a 4D vector with values (x, y, z, w), where w = z? Or do I just pass the 3D vector? The reason I ask is that I know that somewhere in the pipeline the x and y coordinates are divided by the w component, in the perspective divide. Second, in the vertex shader code, do I need to divide the x and y components by the w component myself, or is this taken care of automatically? Thanks!
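For intuition, here is the same arithmetic done on the CPU with GLM (purely illustrative; in the real shader you pass the plain 3D position as an attribute, write the clip-space result to gl_Position, and the hardware performs the divide by w after the vertex shader, so you neither set w = z yourself nor do the division):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 vertex(1.0f, 2.0f, -5.0f);                        // the 3D position you actually store and pass in
glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

glm::vec4 clip = proj * glm::vec4(vertex, 1.0f);             // the projection matrix produces w (here w = -z)
glm::vec3 ndc  = glm::vec3(clip) / clip.w;                   // the perspective divide the GPU does for you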
Vector out of range (Batch rendering opengl) So ive (tried to) implement a batch rendering system, and at the for loop the error pops up and I'm not shure what ive done wrong. Any suggestions to improve the system would be much appreciated. (It's my first time trying to implement this) include "masterRenderer.h" masterRenderer masterRenderer() quad vector lt Quad gt (1000) numOf 0 void masterRenderer addQuad(Quad quad) masterRenderer quad numOf quad numOf void masterRenderer init(GLuint shader, string texPath) vector lt GLfloat gt vertex((3 4) (numOf 1)) vector lt GLfloat gt normals((3 4) (numOf 1)) vector lt GLfloat gt color((3 4) (numOf 1)) vector lt GLfloat gt uv((2 4) (numOf 1)) int nv 0, nn 0, nc 0, nu 0 GLfloat sizeV 0, sizeN 0, sizeC 0, sizeUv 0 for (int i 0 i lt numOf i ) The error seems to pop up over here. for (int j 0 j lt 3 4 j ) vertex nv quad i .g vertex buffer data j sizeV quad i .g vertex buffer data j nv normals nn quad i .g normal buffer bata j sizeN quad i .g normal buffer bata j nn color nc quad i .g color buffer data j sizeC quad i .g color buffer data j nc for (int j 0 j lt 2 4 j ) uv nc quad i .g uv buffer data j sizeUv quad i .g uv buffer data j nc TextureID glGetUniformLocation(shader, "textureSampler") texture handle Util loadTexture("test.png") glGenBuffers(1, amp normalbuffer) glBindBuffer(GL ARRAY BUFFER, normalbuffer) glBufferData(GL ARRAY BUFFER, sizeof(sizeN), amp normals 0 , GL STATIC DRAW) glGenBuffers(1, amp colorbuffer) glBindBuffer(GL ARRAY BUFFER, colorbuffer) glBufferData(GL ARRAY BUFFER, sizeof(sizeC), amp color 0 , GL STATIC DRAW) glGenBuffers(1, amp uvbuffer) glBindBuffer(GL ARRAY BUFFER, uvbuffer) glBufferData(GL ARRAY BUFFER, sizeof(sizeUv), amp uv 0 , GL STATIC DRAW) glGenBuffers(1, amp vertexbuffer) glBindBuffer(GL ARRAY BUFFER, vertexbuffer) glBufferData(GL ARRAY BUFFER, sizeof(sizeV), amp vertex 0 , GL STATIC DRAW) void masterRenderer draw(glm mat4 Model, glm mat4 View, glm mat4 Projection, GLuint MatrixID) glPushMatrix() glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, texture handle) glUniform1i(TextureID, 0) glEnableVertexAttribArray(0) glBindBuffer(GL ARRAY BUFFER, vertexbuffer) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 0, (void )0) glEnableVertexAttribArray(1) glBindBuffer(GL ARRAY BUFFER, colorbuffer) glVertexAttribPointer(1, 3, GL FLOAT, GL FALSE, 0, (void )0) glEnableVertexAttribArray(2) glBindBuffer(GL ARRAY BUFFER, uvbuffer) glVertexAttribPointer(2, 2, GL FLOAT, GL FALSE, 0, (void )0) glEnableVertexAttribArray(3) glBindBuffer(GL ARRAY BUFFER, normalbuffer) glVertexAttribPointer(3, 3, GL FLOAT, GL FALSE, 0, (void )0) for (int i 0 i lt numOf 1 i ) Model glm translate(glm mat4(1.0f), glm vec3(quad i 1 .pos.x, quad i 1 .pos.y, quad i 1 .pos.z)) Model glm rotate(Model, glm radians(quad i 1 .yrot), glm vec3(0.0f, 1.0f, 0.0f)) glm mat4 mvp Projection View Model glUniformMatrix4fv(MatrixID, 1, GL FALSE, amp mvp 0 0 ) glDrawArrays(GL QUADS, 4 i 1, 4 i) glDisableVertexAttribArray(0) glDisableVertexAttribArray(1) glDisableVertexAttribArray(2) glDisableVertexAttribArray(3) glPopMatrix() masterRenderer masterRenderer()
How can I determine a budget for RAM used (LRU involving VRAM, in particular) After some profiling, I've determined that one of my most expensive functions involves drawing text. As a solution, I'd like to implement a LRU type of cache that will "remember" the vertices, tex coords (etc) for a given string. This is particularly complicated because some of the resources involved reside in VRAM (most do, actually I take a given string and convert it into a long sequence of vertices tex coords indices), while some reside in plain RAM. If I knew what my limits were (in both respects) I could write a decent LRU cache system and probably buy myself a substantial performance gain. The primary target for this app is the Android platform, and as I understand it, most such devices share RAM with the GPU. I'd appreciate any answers to this, whether they directly answer my question or not.
What Shading Rendering techniques are being used in this image? My previous question wasn't clear enough. From a rendering point of view what kind of techniques are used in this image as I would like to apply a similar style (I'm using OpenGL if that matters) http alexcpeterson.com My specific questions are How is that sun glare made? How does the planet look "cartoon" like? How does the space around the planet look warped misted? How does the water look that good? I'm a beginner so any information keywords on each question would be helpful so I can go off and learn more. Thanks
GLM conversion from euler angles to quaternion and back does not hold I am trying to convert the orientation of an OpenVR controller that I have stored as a glm vec3 of Euler angles into a glm fquat and back, but I get wildly different results and the in game behavior is just wrong (hard to explain, but the orientation of the object behaves normally for a small range of angles, then flips in weird axes). This is my conversion code get orientation from OpenVR controller sensor data const glm vec3 eulerAnglesInDegrees orientation PITCH , orientation YAW , orientation ROLL debugPrint(eulerAnglesInDegrees) const glm fquat quaternion glm radians(eulerAnglesInDegrees) const glm vec3 result glm degrees(glm eulerAngles(quaternion)) debugPrint(result) result should represent the same orientation as eulerAnglesInDegrees I would expect eulerAnglesInDegrees and result to either be the same or equivalent representations of the same orientation, but that is apparently not the case. These are some example values I get printed out 39.3851 5.17816 3.29104 39.3851 5.17816 3.29104 32.7636 144.849 44.3845 147.236 35.1512 135.616 39.3851 5.17816 3.29104 39.3851 5.17816 3.29104 32.0103 137.415 45.1592 147.99 42.5846 134.841 As you can see above, for some orientation ranges the conversion is correct, but for others it is completely different. What am I doing wrong? I've looked at existing questions and attempted a few things, including trying out every possible rotation order listed here, conjugating the quaternion, and other random things like flipping pitch yaw roll. Nothing gave me the expected result. How can I convert euler angles to quaternions and back, representing the original orientation, using glm? Some more examples of discrepancies original 4 175 26 computed 175 4 153 difference 179 171 179 original 6 173 32 computed 173 6 147 difference 179 167 179 original 9 268 46 computed 170 88 133 difference 179 356 179 original 27 73 266 computed 27 73 93 difference 0 0 359 original 33 111 205 computed 146 68 25 difference 179 43 180 I tried to find a pattern to fix the final computed results, but it doesn't seem like there's one easy to identify. GIF video of the behavior Full video on YouTube Visual representation of my intuition current understanding The above picture shows a sphere, and I'm in the center. When I aim the gun towards the green half of the sphere, the orientation is correct. When I aim the gun towards the red half of the sphere, it is incorrect it seems like every axis is inverted, but I am not 100 sure that is the case.
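One way to check whether the round trip is genuinely wrong, rather than just returning a different but equivalent Euler triple (eulerAngles is free to pick another triple for the same orientation): rebuild a quaternion from the result and compare the two quaternions, remembering that q and -q encode the same rotation. sameRotation below is only a sketch of that test.

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

bool sameRotation(const glm::vec3& eulerDegA, const glm::vec3& eulerDegB)
{
    glm::quat qa(glm::radians(eulerDegA));
    glm::quat qb(glm::radians(eulerDegB));
    float d = std::fabs(glm::dot(qa, qb));   // |dot| == 1 when both encode the same orientation
    return d > 0.9999f;
}

If this returns true for the printed pairs, the conversion itself is fine and the in-game flipping comes from elsewhere, most often from how the pitch/yaw/roll values coming out of OpenVR are being interpreted or re-applied.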
Transparent textures being handled oddly In OpenGL why is it that when rendering transparent textures it changes the color values of all the pixels? Ex. when this transparent texture is rendered it comes out as this. Also if I change the texture to not have transparency it still has the same problem of changing pixel values. What causes OpenGL to completely change a transparent texture when it is rendered? (Note the textures have been created in GIMP and the program is running on a mac (OSX 10.10.3)) EDIT Turns out this happens when you load a texture with transparency on the RGB color spectrum instead of the RGBA spectrum.
Started game development with no idea of computer graphics. Should I learn tools or concepts? I am in the 6th semester of my computer science bachelor's degree program and working as an intern at a start-up. I started game development using AndEngine, and things are going well because I have a good hold on OOP and Java. But I don't have any experience in OpenGL programming, and I haven't taken a computer graphics course either. I want to develop 3D games, and there are tools available like Unity3D. My question is: should I master the tools, or take online computer graphics lectures to get the basics down first? I want to continue game development as my profession. So what should I do: start learning from scratch, or learn the already-built tools and just dive into development? I see successful designers with no academic background who just did a Photoshop course and are now working in a software house making websites, sprites, etc.
Textures not displaying. Problem with fragment and vertex shaders Hi i have newbe question. I am sending to gpu textures unit and they dont display. This is simple version of my fragment and vertexshader. (More complicated version also dont work with other textures than DDS but maybe simpler version will tell you what newbe mistake i made) for (unsigned int i 0 i lt textures.size() i ) glActiveTexture(GL TEXTURE0 i) glBindTexture(GL TEXTURE 2D, Textures i ) Set our "myTextureSampler" sampler to user Texture Unit 0 glUniform1i(TextureID, i) This is my simple fragmentshader version 330 core Interpolated values from the vertex shaders in vec2 UV Ouput data out vec3 color Values that stay constant for the whole mesh. uniform sampler2D myTextureSampler void main() Output color color of the texture at the specified UV color texture2D( myTextureSampler, UV ).rgb This is my simple vertexshader version 330 core Input vertex data, different for all executions of this shader. layout(location 0) in vec3 vertexPosition modelspace layout(location 1) in vec2 vertexUV Output data will be interpolated for each fragment. out vec2 UV Values that stay constant for the whole mesh. uniform mat4 MVP void main() Output position of the vertex, in clip space MVP position gl Position MVP vec4(vertexPosition modelspace,1) UV of the vertex. No special space for this one. UV vertexUV EDIT this corrected version but still something dont work. If i had DDS everything was working OK but jpg textures all time i have problems. EDIT i have correct number of textures but they looks diffrent than original
How do I avoid rendering what is behind another object? I am working on a 3D project using OpenGL and am looking for a way to optimize my rendering. Is there a way to tell whether an object is hidden behind another object, and therefore not visible, so I don't waste time rendering it? I am already working on a frustum culling implementation to skip what lies outside the viewing frustum, but I haven't found a way to know whether an object is located behind another object. Can anyone help me with this, please?
How do I implement a quaternion based camera? UPDATE The error here was a pretty simple one. I have missed a radian to degrees conversion. No need to read the whole thing if you have some other problem. I looked at several tutorials about this and when I thought I understood I tried to implement a quaternion based camera. The problem is it doesn't work correctly, after rotating for approx. 10 degrees it jumps back to 10 degrees. I have no idea what's wrong. I'm using openTK and it already has a quaternion class. I'm a noob at opengl, I'm doing this just for fun, and don't really understand quaternions, so probably I'm doing something stupid here. Here is some code (Actually almost all the code except the methods that load and draw a vbo (it is taken from an OpenTK sample that demonstrates vbo s)) I load a cube into a vbo and initialize the quaternion for the camera protected override void OnLoad(EventArgs e) base.OnLoad(e) cameraPos new Vector3(0, 0, 7) cameraRot Quaternion.FromAxisAngle(new Vector3(0,0, 1), 0) GL.ClearColor(System.Drawing.Color.MidnightBlue) GL.Enable(EnableCap.DepthTest) vbo LoadVBO(CubeVertices, CubeElements) I load a perspective projection here. This is loaded at the beginning and every time I resize the window. protected override void OnResize(EventArgs e) base.OnResize(e) GL.Viewport(0, 0, Width, Height) float aspect ratio Width (float)Height Matrix4 perpective Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect ratio, 1, 64) GL.MatrixMode(MatrixMode.Projection) GL.LoadMatrix(ref perpective) Here I get the last rotation value and create a new quaternion that represents only the last rotation and multiply it with the camera quaternion. After this I transform this into axis angle so that opengl can use it. (This is how I understood it from several online quaternion tutorials) protected override void OnRenderFrame(FrameEventArgs e) base.OnRenderFrame(e) GL.Clear(ClearBufferMask.ColorBufferBit ClearBufferMask.DepthBufferBit) double speed 1 double rx 0, ry 0 if (Keyboard Key.A ) ry speed e.Time if (Keyboard Key.D ) ry speed e.Time if (Keyboard Key.W ) rx speed e.Time if (Keyboard Key.S ) rx speed e.Time Quaternion tmpQuat Quaternion.FromAxisAngle(new Vector3(0,1,0), (float)ry) cameraRot tmpQuat cameraRot cameraRot.Normalize() GL.MatrixMode(MatrixMode.Modelview) GL.LoadIdentity() Vector3 axis float angle cameraRot.ToAxisAngle(out axis, out angle) THIS IS WHAT I DID WRONG I NEED TO CONVERT FROM RADIANS TO DEGREES BEFORE GL.Rotate(angle, axis) AFTER GL.Rotate(angle (float)180.0 (float)Math.PI, axis) GL.Translate( cameraPos) Draw(vbo) SwapBuffers() Here are 2 images to explain better I rotate a while and from this it jumps into this Any help is appreciated. Update1 I add these to a streamwriter that writes into a file sw.WriteLine("camerarot X 0 Y 1 Z 2 W 3 L 4 ", cameraRot.X, cameraRot.Y, cameraRot.Z, cameraRot.W, cameraRot.Length) sw.WriteLine("ry 0 ", ry) The log is available here http www.pasteall.org 26133 text. At line 770 the cube jumps from right to left, when camerarot.Y changes signs. I don't know if this is normal. Update2 Here is the complete project.
How do you display non cutout transparent 2D textures with a depth buffer? (OpenGL) I've been able to get my 2D renderer to display transparent cutout textures by testing the alpha of a fragment and discarding if it is less than 1 (or any fraction really). The problem is I want to support using translucent textures. The current way I sort my sprites is by what texture they use, so that I can minimize texture changes. The only way I can think of getting this to work properly is by scrapping that and only sorting by z order. But I don't want to throw away the optimization I already did. Is there any way to do both? Does only rendering in 2D simplify the problem at all? I was hoping to support translucent sprites, but my font renderer makes translucent font textures, so I can't just only use cutouts. EDIT After doing some research, it seems there really is no easy way to do this. (depth peeling for a 2D renderer seems a little overkill) I'm going to compromise by having my renderer hold 2 different sets of sprites, cutouts and translucent. I can draw the cutouts first in whatever order I want, making full use of texture atlases. The translucent textures, however, will need to be in z order, ignoring atlases. If anyone can tell me a better way, I'm all ears.
How do I draw a rectangle at x, y where x and y are in pixels? Here is my code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glViewport(0, 0, screen_width, screen_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, screen_width, 0.0, screen_height, 1.0, 1.0);
glMatrixMode(GL_MODELVIEW_MATRIX);
glTranslatef(0, 0, 5);
glColor4f(rect.color.R(), rect.color.G(), rect.color.B(), rect.color.A());
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_QUADS);
glRectd(rect.transforms.position.x, rect.transforms.position.y, rect.transforms.size.width, rect.transforms.size.height);
glEnd();
glfwSwapBuffers(window);
glFlush();
where rect.transforms.position.x is in pixels. Issue: nothing shows up. The color was tested before and works on a rectangle that uses OpenGL coordinates.
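For comparison, a minimal fixed-function sketch that draws a rectangle in pixel coordinates, assuming a screen_width x screen_height window (and assuming the minus signs in the posted glOrtho/glTranslatef calls were simply lost in formatting): glRectd takes two opposite corners rather than position plus size, it must not be placed inside glBegin/glEnd, the matrix-mode enum is GL_MODELVIEW rather than GL_MODELVIEW_MATRIX, and translating by -5 on z pushes the quad outside an ortho volume whose near/far planes are -1 and 1.

glViewport(0, 0, screen_width, screen_height);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, screen_width, 0.0, screen_height, -1.0, 1.0);    // 1 unit = 1 pixel, origin at bottom-left

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();                                              // no translate: keep the quad at z = 0

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glColor4f(r, g, b, a);                                         // r, g, b, a: your rect colour
glRectd(x, y, x + width, y + height);                          // two corners, not position + size

glfwSwapBuffers(window);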
Octrees and Vertex Buffer Objects Like many others, I want to code a game with voxel-based terrain. The data is represented by voxels, which are rendered as triangles. I have heard of two different approaches and want to combine them. First, I could divide the space once into chunks of a fixed size, like many games do. I would then generate a polygon mesh for each chunk and store it in a vertex buffer object (VBO). Each time a voxel changes, the mesh and VBO of its chunk are recreated. This also makes it easy to dynamically load and unload parts of the terrain. The other approach would be to use octrees: divide the space into eight cubes, which are divided again and again. That way I could render the terrain efficiently, because I don't have to recurse into a solid cube and can draw it as a single block (with a repeated texture). What I would like to use for my game is an octree data structure, but I can't imagine how to use VBOs with it. How is that done, or is it impossible?
OpenGL cube map is always black When creating and rendering a skybox with a cube map texture, the skybox is black. Here is how I create the cube map texture GLuint loadCubemap(std vector lt std string gt faces) GLuint textureID glGenTextures(1, amp textureID) glActiveTexture(GL TEXTURE0) int width,height unsigned char image glBindTexture(GL TEXTURE CUBE MAP, textureID) for(GLuint i 0 i lt faces.size() i ) image SOIL load image(faces i .c str(), amp width, amp height, 0, SOIL LOAD RGBA) glTexImage2D( GL TEXTURE CUBE MAP POSITIVE X i, 0, GL RGB, width, height, 0, GL RGB, GL UNSIGNED BYTE, image ) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP R, GL CLAMP TO EDGE) glBindTexture(GL TEXTURE CUBE MAP, 0) return textureID Here is how I draw the cube map void Skybox update() the sky box should be in the background glDepthMask(GL FALSE) set the position of the skybox around the camera pos Camera getCurrentlyBound() gt getPos() faces of the cube map std vector lt std string gt faces " home a Programming GameEngine Game Assets skybox px.png", x " home a Programming GameEngine Game Assets skybox nx.png", x " home a Programming GameEngine Game Assets skybox py.png", y " home a Programming GameEngine Game Assets skybox ny.png", y " home a Programming GameEngine Game Assets skybox pz.png", z " home a Programming GameEngine Game Assets skybox nz.png", z create the texture object GLuint cubeMap loadCubemap(faces) set up some matrices glm mat4 modelMatrix glm translate(glm mat4(), pos) glm mat4 cast(orien) glm mat4 mvpMatrix Camera getCurrentlyBound() gt getVP() modelMatrix you can ignore this part Program prog ProgramManager getProgram(msh.getProgramPath()) bind the program prog gt bind() set uniform data prog gt setUniformData(mvpMatrix, "MVP") bind the VAO msh.bind() bind the cube map texture glBindTexture(GL TEXTURE CUBE MAP, cubeMap) glDrawElements(GL TRIANGLES, msh.getNumVerts(), GL UNSIGNED INT, 0) draw msh.unbind() unbind the VAO prog gt unbind() glDepthMask(GL TRUE) And here are the shaders Vertex Shader version 330 core in vec3 pos uniform mat4 MVP out vec3 fragTexVector void main() gl Position MVP vec4(pos, 1) fragTexVector pos Fragment Shader version 330 core out vec4 finalColor in vec3 fragTexVector uniform samplerCube skybox void main() finalColor vec4(1,1,1,1) finalColor texture(skybox, fragTexVector) I have scanned over this piece of code and over my entire project many times to find what was wrong and could not find anything. The samplerCube seems to always get black pixels. Extra Information (might be helpful, I'm not sure) I have tried debugging the program with VOGL (Valve's opengl debugger) but when I try to capture a snapshot of a frame it crashes with an error Internal error KTX texture failed internal consistency check, texture 7 target GL TEXTURE CUBE MAP. This should not happen! I'm very new to OpenGL and if anybody with OpenGL experience could help me with issue it would be greatly appreciated. Thanks.
Project 2D texture onto a cubemap I'm looking to take a 2D texture (previously rendered from the user's perspective), and overlay it overtop of a cubemap. Since a cubemap has 6 textures, I need to run a shader over top of all 6, and use some formula for calculating the UV from a given view. I'm at a loss for how to calculate this. Here's an illustration If the user were to spin around in 1 spot, they would build up full panoramic cubemap. The intention is to reuse end of frame data to update a persistent cubemap that follows the user. I know how to build manual cubemaps by rendering a scene 6 times, once on all axes, however in this case I wish to take an already rendered view and transplant it onto the correct spots on a cubemap.
GLSL rewriting (geometry) shader from 330 to 130 version I'm having trouble running example from https raw.github.com progschj OpenGL Examples master 07geometry shader blending.cpp My graphics card supports only 130 shaders version so I have to rewrite shaders. I figured out how to fix vertex and fragment shaders. I removed layout(location 0) from layout(location 0) in vec4 vposition and added this into my source code glBindAttribLocation(shader program, 0, "vposition") However I have no idea how to rewrite geometry shader... version 130 uniform mat4 View uniform mat4 Projection layout (points) in layout (triangle strip, max vertices 4) out out vec2 txcoord void main() vec4 pos View gl PositionIn 0 .gl Position txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex() txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex() txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex() txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex() The main problem is how to get rid of location here layout (points) in layout (triangle strip, max vertices 4) out Can someone help me? EDIT melak47 I edited shader, now I'm not getting any error, but window is black. version 130 extension GL EXT geometry shader4 enable precision mediump float uniform mat4 View uniform mat4 Projection out vec2 txcoord void main() vec4 pos View gl PositionIn 0 txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex() txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex() txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex() txcoord vec2( 1, 1) gl Position Projection (pos vec4(txcoord,0,0)) EmitVertex()
Knowing the size of a framebuffer when rendering transformed meshes to a texture I have a couple of 2D meshes that make a hierarchical animated model. I want to do some post processing on it, so I decided to render this model to a texture, so that I could do the post processing with a fragment shader while rendering it as a textured quad. But I don't suppose that it would be very smart to have the render texture's size as large as the entire screen for every layer that I'd like to compose it would be nicer if I could use a smaller render texture, just big enough to fit every element of my hierarchical model, right? But how am I supposed to know the size of the render target before I actually render it? Is there any way to figure out the bounding rectangle of a transformed mesh? (Keep in mind that the model is hierarchical, so there might be multiple meshes translated rotated scaled to their proper positions during rendering to make the final result.) I mean, sure, I could transform all the vertices of my meshes myself to get their world space screen space coordinates and then take their minima maxima in both directions to get the size of the image required. But isn't that what vertex shaders were supposed to to so that I wouldn't have to calculate that myself on the CPU? (I mean, if I have to transform everything myself anyway, what's the point of having a vertex shader in the first place? q ) It would be nice if I could just pass those meshes through the vertex shader first somehow without rasterizing it yet, just to let the vertex shader transform those vertices for me, then get their min max extents and create a render texture of that particular size, and only after that let the fragment shader rasterize those vertices into that texture. Is such thing possible to do though? If it isn't, then what would be a better way to do that? Is rendering the entire screen for each composition layer my only option?
Restart a 2D game OpenGL, GLUT I started learning OpenGL and GLUT by making a snake game. The problem I encountered is that when I press the "new game" in the menu, the window has to be resized so that the content of the window to be updated. From what I have read, it's because of the MainLoop that's waiting for an event, but I don't know how to fix it. I've tried with glutPostRedisplay, but it doesn't change anything maybe I'm placing it wrong. Here is the code associated with the restart of the game int main( int argc, char argv ) glutInit( amp argc, argv ) glutInitDisplayMode( GLUT DOUBLE GLUT RGB ) glutInitWindowSize( 800, 600 ) glutInitWindowPosition( 100, 100 ) glutCreateWindow( "Aarghhh! O ramaaa !" ) createMenu() glClearColor( 0.0, 0.0, 0.0, 0.0 ) init() glutReshapeFunc( reshape ) glutDisplayFunc( dreptunghi ) glutSpecialFunc( player ) glutMainLoop() return 0 void init( void ) glClearColor( 1.0, 1.0, 1.0, 0.0 ) glMatrixMode( GL PROJECTION ) gluOrtho2D( 0.0, 800.0, 0.0, 600.0 ) glShadeModel( GL FLAT ) void menu( int num ) if ( num 0 ) exit( 0 ) else if ( num 1 ) menu value num snake.clear() i 30.0 j 30.0 alpha 1.0 value 1 speed 3 eaten true collided food false collided self false createMenu() glClearColor( 0.0, 0.0, 0.0, 0.0 ) init() glutDisplayFunc( dreptunghi ) glutReshapeFunc( reshape ) glutSpecialFunc( player ) glutSwapBuffers() glutPostRedisplay() void createMenu( void ) glutCreateMenu( menu ) glutAddMenuEntry( "New game!", 1 ) glutAddMenuEntry( "Exit", 0 ) glutAttachMenu( GLUT RIGHT BUTTON ) I have also tried to write a function for the display of the menu, that's called using glutIdleFunc, but it didn't solve anything either. I have ran out of ideas.
Turning on Vertex Attribute Divisor With Instanced Rendering Renders Nothing When I render with glDrawArraysInstanced with the vertex attribute divisor set to zero, the triangle appears as expected. But when the divisor is set to any value other than zero, the triangle disappears. This is the code I am using to render static const u32 NUM TRIANGLES 1 struct Vertex f32 x f32 y static const Vertex g triangles NUM TRIANGLES 3 0.125f, 0.125f , 0.125f,0.125f , 0.125f, 0.125f , glGenVertexArrays(1, amp g vao) glBindVertexArray(g vao) glGenBuffers(1, amp g vbo) glBindBuffer(GL ARRAY BUFFER, g vbo) glBufferData(GL ARRAY BUFFER, sizeof(g triangles), nullptr, GL DYNAMIC DRAW) glBufferSubData(GL ARRAY BUFFER, 0, sizeof(g triangles), reinterpret cast lt const void gt ( amp g triangles)) glVertexAttribPointer(0, 2, GL FLOAT, GL FALSE, 0, reinterpret cast lt void gt (0)) glEnableVertexAttribArray(0) glVertexAttribDivisor(0, 1) Setting this to 0 renders the triangle This is the draw call glDrawArraysInstanced(GL TRIANGLES, 0, 3, NUM TRIANGLES) Here are my shaders version 420 core layout (location 0) in vec2 v void main() gl Position vec4(v, 0.0f, 1.0f) version 420 core out vec4 out color void main() out color vec4(1.0f,0.0f,0.0f,1.0f) My problem is similar to OpenGL 2D instancing glDrawArraysInstanced with a divisor renders nothing, but that question wasn't resolved.
Opengl Best VAO model for voxel engine I make a voxel engine on LWJGL 3 (OpenGL binding for JVM). I store all object data in world class. World is subdivided into 32 32 32 chunks. Chunks use octree storage. The main part of all my rendering is cubes that share one mesh but have different transformation matrices and some other data. To render those i use one VAO per chunk, it contains only one cube mesh instance and has all cubes data (like transformations) stored in its VBOs. I construct each chunk VAO once and render it fully in one call glDrawElementsInstanced. If i need to change some cube data in a chunk, i will reconstuct VAO from ground up or change individual buffer data. That should be pretty efficient, right ? But I need some other stuff to be rendered, like complex mesh objects or effects or some debug stuff. How should I render those objects ? Should I create a VAO for each mesh and then perform instanced rendering for each object with the same mesh ? Or should I batch all objects with their meshes into one VAO for each chunk and render it fully ? I was trying to create a VAO recently, that contained one instance of each mesh and all object data. But i could not render it correctly. Because each time I needed to somehow switch object data VBO while rendering an instanced mesh. It didn't work. Code for the last VAO http pastebin.com WanNsR5L
using heightmap to simulate 3d in an isometric 2d game I saw a video of an 2.5d engine that used heightmaps to do zbuffering. Is this hard to do? I have more or less no idea of Opengl(lwjgl) and that stuff. I could imagine, that you compare each pixel and its depthmap to the depthmap of the already drawn background to determine if it gets drawn or not. Are there any tutorials on how to do this, is this a common problem? It would already be awesome if somebody knows the names of the Opengl commands so that i can go through some general tutorials on that. greets! Great 2.5d engine with the needed effect, pls go to the last 30 seconds Edit, just realised, that my question wasn't quite clear expressed How can i tell Opengl to compare the existing depthbuffer with an grayscale texure, to determine if a pixel should get drawn or not?
How do I initialise levels sequentially? I have been learning OpenGL and have made good progress over the past few months, but I still struggle with structuring game logic in C++ (I am new to C++ as well). Say I have this program:
#include <headers>
int main()
{
    // Step 1
    VAO_VBO_EBO_inits();
    while (!glfwWindowShouldClose(window))
    {
        glClearColor(.2f, .2f, .3f, 1.f);
        glClear(GL_COLOR_BUFFER_BIT);
        // Step 2
        DrawCalss();
        glfwPollEvents();
        glfwSwapBuffers(window);
    }
    return 0;
}
For OpenGL programmers this should be clear. With this structure I have to load all levels up front and then draw only part of them: step 1 feeds the GPU the vertex data and its attributes, and step 2 binds the shader programs and draws the vertices. Say I have levels A, B, C and D. Inside the VAO_VBO_EBO_inits() function I have to load all of A, B, C and D, and then the DrawCalss() function draws only the relevant levels. DrawCalss() is dynamic, but VAO_VBO_EBO_inits() is invoked only once. What I want instead is to have a separate game loop (or at least separate setup) for each level and to be able to switch between levels, so that when I switch, the preceding level is unloaded and the next level is loaded. Since I am new to C++ I don't know which way would be the most efficient, which is why I haven't tried anything yet. How do I achieve this, and what is the best practice for such scenarios?
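One common shape for this in C++, sketched with invented names (Level, makeLevel and so on are not from any library): give each level its own load/unload/draw, keep a single window loop, and swap the active level inside it, so VAO/VBO/EBO creation happens in load() whenever a level becomes current and is undone in unload() when it stops being current.

#include <memory>

struct Level
{
    virtual ~Level() = default;
    virtual void load()   = 0;   // create this level's VAOs, VBOs, EBOs, textures
    virtual void unload() = 0;   // delete them again
    virtual void draw()   = 0;
    virtual int  next()   = 0;   // index of the level to switch to, or -1 to stay
};

std::unique_ptr<Level> makeLevel(int id);   // your factory returning LevelA, LevelB, ...

// inside main(), after the GL context and window exist:
int current = 0;
std::unique_ptr<Level> level = makeLevel(current);
level->load();

while (!glfwWindowShouldClose(window))
{
    glClearColor(0.2f, 0.2f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    level->draw();

    int wanted = level->next();
    if (wanted >= 0 && wanted != current)    // switch: unload the old level, load the new one
    {
        level->unload();
        level = makeLevel(wanted);
        level->load();
        current = wanted;
    }

    glfwPollEvents();
    glfwSwapBuffers(window);
}

Creating and deleting buffers at a level switch is perfectly fine; it is the per-frame loop that should avoid reallocating GPU resources.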
Understanding ticksPerSecond and duration with skeletal animations This is my first question here, so I hope I'm doing everything correctly. For several weeks I have been reading up on skeletal animation, aiming to add a simple animation controller to my small game engine. After following a video tutorial by ThinMatrix, I successfully added skeletal animation to my engine (I used Assimp to load a .dae file from Blender with one animation in it). Once the basics were done I started thinking about how to change an animation's speed by a factor (x2, x3, etc.), and here I ran into some problems. As I understand it, every animation has a duration field (in ticks) that, I assume, should be something like 25 fps (to get reasonably smooth animation) times the animation's length in seconds. Each animation also has another field called ticksPerSecond that (as the name says) is the number of ticks per second. In my engine I have a data structure called Animator that contains an array of Animation objects, and for each one it stores a ticks_per_second and duration entry. The following code shows how I take the data from Assimp:
animator->ticks_per_second[animation_index] = scene->mAnimations[animation_index]->mTicksPerSecond != 0 ? scene->mAnimations[animation_index]->mTicksPerSecond : 25.f;
animator->duration[animation_index] = scene->mAnimations[animation_index]->mDuration;
If I print the two variables I get this result: DEBUG ticks per sec 1.000000, DEBUG total ticks 0.833333. So here is my question: why do these variables take on these values? The only explanation I have come up with is that if ticksPerSecond is 1, the animation would advance at the main loop's frame rate, but I am not sure about that and would appreciate your help.
1
Quaternion rotation around center, undefined behavior
Here's my code:

    vec4 qx, qy, qz;
    mat4 mx, my, mz;

    // rotating using quaternions
    glm_quat(qx, to_radians(a->rx), 1.0f, 0.0f, 0.0f);
    glm_quat(qy, to_radians(a->ry), 0.0f, 1.0f, 0.0f);
    glm_quat(qz, to_radians(a->rz), 0.0f, 0.0f, 1.0f);

    // turning the quaternions into matrices
    glm_quat_mat4(qx, mx);
    glm_quat_mat4(qy, my);
    glm_quat_mat4(qz, mz);

    mat4 trans = {{1, 0, 0, 0},
                  {0, 1, 0, 0},
                  {0, 0, 1, 0},
                  {0, 0, 0, 1}};
    mat4 rot   = {{1, 0, 0, 0},
                  {0, 1, 0, 0},
                  {0, 0, 1, 0},
                  {0, 0, 0, 1}};
    mat4 final;

    // combining the rotations into one
    glm_mat4_mulN((mat4 *[]){&mx, &my, &mz}, 3, rot);

    // translating the trans matrix
    glm_translate(trans, (vec3){a->x, a->y, a->z});

    // finally combining the translation with the rotation into one
    glm_mat4_mul(trans, rot, final);

My desired behavior is that the object rotates around its center, but here is what happens instead: it seems that my object is rotating around some other, undefined point. I have no idea why this happens. Any ideas? Thank you.
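A minimal sketch of the usual fix, assuming the model's vertices are not already centred on the origin (center here is a hypothetical vec3 holding the object's centre point): sandwich the rotation between a translation to the origin and a translation back, so the final model matrix becomes T(position) * T(+center) * R * T(-center).

    mat4 toOrigin, back, tmp, rotAboutCenter;
    glm_mat4_identity(toOrigin);
    glm_mat4_identity(back);
    glm_translate(toOrigin, (vec3){-center[0], -center[1], -center[2]});
    glm_translate(back,     (vec3){ center[0],  center[1],  center[2]});

    glm_mat4_mul(rot, toOrigin, tmp);            /* R * T(-c)               */
    glm_mat4_mul(back, tmp, rotAboutCenter);     /* T(+c) * R * T(-c)       */
    glm_mat4_mul(trans, rotAboutCenter, final);  /* T(position) * the above */

If the mesh is modelled around the origin, the same symptom can instead come from the order of the final multiply: with column-vector conventions, trans * rot rotates the object about its local origin and then translates it, while rot * trans makes it orbit the world origin.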
1
OpenGL's matrix stack vs hand-multiplying
Which is more efficient: using OpenGL's transformation stack, or applying the transformations by hand? I've often heard that you should minimize the number of state transitions in your graphics pipeline, and pushing and popping translation matrices seems like a big change. However, I wonder whether the graphics card might more than make up for the pipeline hiccup by using its parallel execution hardware to bulk-multiply the vertices. My specific case: I have a font rendered to a sprite sheet. The coordinates of each character of a string are calculated and added to a vertex buffer. Now I need to move that string. Would it be better to iterate through the vertex buffer and adjust each of the vertices by hand, or to temporarily push a new translation matrix?
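For concreteness, a minimal sketch of the two options in fixed-function style (the buffer and variable names are my own assumptions, not from the question):

    // Option A: push a translation and let the GPU apply it to every vertex.
    glPushMatrix();
    glTranslatef(offsetX, offsetY, 0.0f);
    glDrawArrays(GL_QUADS, firstStringVertex, stringVertexCount);
    glPopMatrix();

    // Option B: move the vertices on the CPU, then draw the adjusted data.
    for (size_t i = 0; i < stringVertexCount; ++i)
    {
        vertices[firstStringVertex + i].x += offsetX;
        vertices[firstStringVertex + i].y += offsetY;
    }

Option A touches only a small amount of state per string, while option B touches (and re-uploads) every vertex of the string each time it moves.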
1
What is the convention for column-major order matrix transformations?
According to this cheat sheet for CG, if I want to use column-major order for my matrix/vector math I have to multiply from the left, applying transformations from right to left, i.e. v' = P * V * v. I built my view and projection matrices like this:

    Matrix44 view(right.x,   right.y,   right.z,   position.x,
                  up.x,      up.y,      up.z,      position.y,
                  forward.x, forward.y, forward.z, position.z,
                  0.0f,      0.0f,      0.0f,      1.0f);

    Matrix44 projection(Sx,   0.0f, 0.0f, 0.0f,
                        0.0f, Sy,   0.0f, 0.0f,
                        0.0f, 0.0f, Sz,   Pz,
                        0.0f, 0.0f, 1.0f, 0.0f);

However, when using them in the vertex shader,

    gl_Position = projection * view * vec4(position, 1.0);

I cannot see my geometry unless I swap the transformation to

    gl_Position = vec4(position, 1.0) * view * projection;

which means that I am effectively using row-major order instead of column-major order. So my question is: is the PDF I'm following correct in its notation for column-major view and projection matrices, or what am I missing?
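One thing worth checking (a hedged sketch, since I don't know how Matrix44 stores its elements or how it is uploaded): if the constructor arguments are stored row by row in memory but the uniform is uploaded without transposing, GLSL will read the data column by column, and the two effects cancel out in exactly the way described, i.e. only v * M appears to work. Either transpose at upload time or author the matrix so its columns end up consecutive in memory; data() below is a hypothetical accessor for the raw 16 floats.

    // Variant 1: keep the row-major constructor and let GL transpose on upload.
    glUniformMatrix4fv(viewLoc, 1, GL_TRUE /* transpose */, view.data());

    // Variant 2: pass the arguments transposed relative to the matrix you mean,
    // so the memory layout becomes column-major and GLSL sees the intended
    // matrix (the last four arguments become its fourth column, the translation).
    Matrix44 view(right.x,    up.x,       forward.x,  0.0f,
                  right.y,    up.y,       forward.y,  0.0f,
                  right.z,    up.z,       forward.z,  0.0f,
                  position.x, position.y, position.z, 1.0f);

This only addresses the memory-order question; whether the view matrix values themselves are what you want (for example, negated dot products for the camera-space translation) is a separate issue.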
1
Blending vs Texture Sampling
Let's say I want to blend two textures together: a texture containing the result of the SSAO calculation, and a texture containing the rendered scene. I could do it in two ways: 1) Use a shader that samples both the SSAO and scene textures, blends them together and outputs the final color to a render target. 2) Render to the texture containing the scene and use a blending mode to blend the SSAO texture on top of it; only the SSAO texture will be sampled inside the shader. Is it possible to give a general answer about which version is faster, or is it highly hardware dependent?
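For reference, a minimal sketch of the two variants (the texture and uniform names are my own assumptions); both assume a multiplicative combine, scene * ssao.

    // Variant 1: one fullscreen pass that samples both textures in the shader.
    //   uniform sampler2D sceneTex;
    //   uniform sampler2D ssaoTex;
    //   in vec2 uv;
    //   out vec4 color;
    //   void main() { color = texture(sceneTex, uv) * texture(ssaoTex, uv).r; }

    // Variant 2: draw over the scene's render target and let the blend unit
    // do the multiply; the shader only samples ssaoTex.
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);   // result = src * dst = ssao * scene
    // ... draw a fullscreen quad whose fragment shader outputs the SSAO value ...
    glDisable(GL_BLEND);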
1
Input before or after update/draw?
This is how I understood the game loop, and I wanted to know whether I'm correct or not.

    1) Draw/render -> input
       CPU: Update, Draw, Input
       GPU: Rendering

    2) Input -> draw/render
       CPU: Input, Update, Draw
       GPU: (nothing?), Rendering

By rendering I mean updating the actual frame. What happens in the second case? Is the CPU doing nothing, or is it already checking input for the next frame? Are there any differences between checking input before or after the update/draw part?
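As a point of reference, a minimal sketch of the second ordering with GLFW (the function names other than the GLFW calls are my own): input is polled at the top of the frame, so the update and draw that follow act on this frame's input, while the GPU keeps rendering the submitted commands asynchronously as the CPU moves on to the next iteration.

    while (!glfwWindowShouldClose(window))
    {
        glfwPollEvents();        // input
        update(dt);              // game logic using that input
        draw();                  // record and submit the GL commands
        glfwSwapBuffers(window); // present; the GPU finishes rendering in parallel
    }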
1
Drawing many separate lines using the mouse, OpenGL (GLFW/glad)
So, in order to draw a line, I track the coordinates of the mouse, add them to an array and draw it as GL_LINE_STRIP_ADJACENCY. However, when I, for example, finish drawing line 1 at P1 and decide to start drawing a different line at P2, as shown in the figure, my two points P1 and P2 get joined together. How do I fix this? Do I need to clear the array after finishing the drawing at point P1? It doesn't help if I use glClearColor and glClear(GL_COLOR_BUFFER_BIT). Is there any other way?
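glClear only clears the framebuffer, not the vertex data, so the strip stays connected as long as all points live in a single strip. A minimal sketch of one way around it (container and variable names are my own assumptions): remember which slice of the point array belongs to which stroke and issue one draw call per stroke, starting a new slice whenever the mouse button is pressed.

    #include <vector>

    struct Stroke { GLint first; GLsizei count; };

    std::vector<float>  points;   // x,y pairs of every stroke, uploaded to one VBO
    std::vector<Stroke> strokes;  // which slice of the VBO belongs to which stroke

    // on mouse-button press: start a new stroke
    strokes.push_back({ (GLint)(points.size() / 2), 0 });

    // on mouse move while the button is held: append a point to the current stroke
    points.push_back(x);
    points.push_back(y);
    strokes.back().count += 1;
    // ...then re-upload the buffer with glBufferData / glBufferSubData...

    // when rendering: one draw call per stroke, so strokes are never joined
    for (const Stroke& s : strokes)
        glDrawArrays(GL_LINE_STRIP, s.first, s.count);

Primitive restart (glEnable(GL_PRIMITIVE_RESTART) plus glPrimitiveRestartIndex with indexed drawing) is another way to break one buffer into separate strips.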
1
Missing any kind of lights?
I am trying to make my first 3D game in OpenGL with shaders (LWJGL 3). I have read and followed a lot of tutorials but can't figure out what kind of light I am missing here. The wall doesn't get any effect from the light; it is supposed to look like this one. Here is the point light part of my fragment shader:

    vec4 calcLightColour(vec3 light_colour, float light_intensity, vec3 position, vec3 to_light_dir, vec3 normal)
    {
        vec4 diffuseColour = vec4(0, 0, 0, 0);
        vec4 specColour = vec4(0, 0, 0, 0);

        // Diffuse Light
        float diffuseFactor = max(dot(normal, to_light_dir), 0.0);
        diffuseColour = diffuseC * vec4(light_colour, 1.0) * light_intensity * diffuseFactor;

        // Specular Light
        vec3 camera_direction = normalize(-position);
        vec3 from_light_dir = -to_light_dir;
        vec3 reflected_light = normalize(reflect(from_light_dir, normal));
        float specularFactor = max(dot(camera_direction, reflected_light), 0.0);
        specularFactor = pow(specularFactor, specularPower);
        specColour = speculrC * light_intensity * specularFactor * material.reflectance * vec4(light_colour, 1.0);

        return (diffuseColour + specColour);
    }

    vec4 calcPointLight(PointLight light, vec3 position, vec3 normal)
    {
        vec3 light_direction = light.position - position;
        vec3 to_light_dir = normalize(light_direction);
        vec4 light_colour = calcLightColour(light.colour, light.intensity, position, to_light_dir, normal);

        // Apply Attenuation
        float distance = length(light_direction);
        float attenuationInv = light.att.constant + light.att.linear * distance +
                               light.att.exponent * distance * distance;
        return light_colour / attenuationInv;
    }
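One term the snippet doesn't show is an ambient component, so anything the point light can't reach (or whose normals face away from it) comes out black. A hedged sketch of how it is usually combined in the same fragment shader; the uniform and varying names below are assumptions, not taken from your code:

    uniform vec3 ambientLight;   // e.g. vec3(0.3, 0.3, 0.3)

    void main()
    {
        vec4 totalLight = vec4(ambientLight, 1.0);
        totalLight += calcPointLight(pointLight, mvVertexPos, mvVertexNormal);
        fragColor = texture(texture_sampler, outTexCoord) * totalLight;
    }

If an ambient term is already present, a flat black wall can also be a normals problem (missing or untransformed normals), which produces the same look.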
1
How do I create a 2D tile map, and implement movement within it?
How do I create a 2D tile map that I can use? How do I get the world coordinates of it so I can move something to that tile? I'm learning OpenGL at the moment, but I'm having a lot of issues understanding how to create a 2D tile map. I have rendered tiles to the world, but I have no way of using them. For example, I'm trying to implement movement where, when clicking on a tile, I would move the player to that tile. To elaborate, I want a tile's coordinates when I click on or near it, so I can use that position when implementing movement. Right now I have no idea how to do that. I am able to get the window's 2D coordinates on click, but I don't know how to turn that into a useful way of finding the coordinates I want, based on where my tile is in the world.
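A minimal sketch of the usual pipeline, assuming an orthographic camera and a uniform tile size (every name below is a placeholder, not from your project): keep the map as a plain 2D array, convert the mouse click from window space to world space with the inverse of the projection*view matrix, then snap the world position to a tile index and use that tile's centre as the movement target.

    #include <cmath>
    #include <glm/glm.hpp>

    const float TILE_SIZE = 32.0f;       // world units per tile
    int tiles[MAP_H][MAP_W];             // which tile type sits at (row, column)

    // window -> world, for an orthographic projection*view matrix
    glm::vec2 windowToWorld(double mouseX, double mouseY, int winW, int winH,
                            const glm::mat4& projView)
    {
        // to normalized device coordinates (-1..1, with y flipped)
        glm::vec4 ndc(2.0f * float(mouseX) / winW - 1.0f,
                      1.0f - 2.0f * float(mouseY) / winH,
                      0.0f, 1.0f);
        glm::vec4 world = glm::inverse(projView) * ndc;
        return glm::vec2(world);
    }

    // world -> tile index, and tile index -> the tile's centre (movement target)
    int tileX = int(std::floor(worldPos.x / TILE_SIZE));
    int tileY = int(std::floor(worldPos.y / TILE_SIZE));
    glm::vec2 target((tileX + 0.5f) * TILE_SIZE, (tileY + 0.5f) * TILE_SIZE);

The player can then be moved toward target over several frames, and tiles[tileY][tileX] tells you what kind of tile was clicked.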
1
3D position of an arbitrary UV coordinate
I have a UV map for a 3D mesh that encodes "links" between pairs of UV coordinates. I have previously defined these links (or pairs), one to one. The links are encoded using the function rgb = color(u1, v1) = (u2, v2, 0.0). Therefore, in the fragment shader, given the UV coordinates of the current fragment, I can get the UV coordinates of the fragment "linked" with the current one. What I want to do is draw these links in 3D space, i.e. draw a line between the current fragment with texture coordinates (u1, v1) and its "linked" texture coordinate (u2, v2). As far as I know this is a hard problem to solve, because in the fragment shader you cannot access the 3D coordinates of (u2, v2). Edit: notice that the UV map is fragmented, therefore you cannot just draw these lines in the UV domain. Do you have any idea how to approach this problem?
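One approach that might fit (a hedged sketch, every texture and variable name in it is my own): first bake a "position map" by rasterising the mesh in UV space and writing each texel's world-space position into a floating-point texture; afterwards any UV coordinate, including (u2, v2), can be turned into a 3D position with a single texture fetch, and the line endpoints can then be generated on the CPU or in a separate geometry pass.

    // Bake pass, vertex shader: place each vertex at its UV coordinate and
    // carry the world position along.
    #version 330 core
    layout(location = 0) in vec3 worldPos;   // or transform from object space here
    layout(location = 1) in vec2 uv;
    out vec3 worldPosOut;
    void main()
    {
        worldPosOut = worldPos;
        gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);  // UV -> clip space
    }

    // Bake pass, fragment shader: write into an RGBA32F colour attachment.
    //   positionMapOut = vec4(worldPosOut, 1.0);

    // Looking up one link later (in a shader, or via a CPU read-back):
    //   vec3 p1  = texture(positionMap, uv1).xyz;   // 3D position of (u1, v1)
    //   vec2 uv2 = texture(linkMap,     uv1).xy;    // the linked UV coordinate
    //   vec3 p2  = texture(positionMap, uv2).xyz;   // 3D position of (u2, v2)
    //   ...then emit a line segment between p1 and p2.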