Is a continuous loop the best idea for an Android OpenGL game? I've written some small OpenGL games for the desktop. I used a continuous loop (with VSync) for the game life cycle: input, update, render. Now I'm going to write a simple game for Android, and I'm wondering whether that is the right way to go there. What matters most is battery usage: I've played some simple games that drained far more battery than their complexity and graphics would justify, and I don't want to create a battery-hungry game like that.
samplerCube for point-light shadow map has dark corners that change with the screen aspect ratio? I almost have point-light shadows working, but the cube map I use for the shadow map has corners that get darker depending on the main camera. Is this something to do with a transform into a different space somewhere? I followed Learn OpenGL's tutorial on this and they don't seem to have this issue, or it just didn't come up in their simple example. Here is a YouTube video of the problem, because it is kind of hard to explain: https://youtu.be/FXpwgyJh1ZA (that shows the depth-map sampler, not the shadows, by the way). This also happens when the radius is almost 0 and the cube map is completely white.

    float PointLightShadow(vec3 NegL,   // FragWorldPos - pointLights[i].Position
                           float R)     // radius of light
    {
        if (mat_hasShadowMap2 == 0)
            return 1.0f;
        float closestDepth = texture(mat_shadowMap2, NegL).r;
        float currentDepth = length(NegL);
        float bias = 0.05;
        float shadow = R * closestDepth > currentDepth - bias ? 1.0 : 0.0;
        return closestDepth; // for this example
    }

Sending the light position to the shader:

    ...
    m_lightData.PointLights[i].Position = lights[i]->Position(); // set in world space
    ...

and for FragWorldPos:

    out vec3 FragWorldPos;
    ...
    vec4 worldPos = model * vec4(vert, 1);
    FragWorldPos = worldPos.xyz;
    ...

and then the fragment shader has a matching in variable. I think 99% of this is working; I just don't understand why the cube map seems to have these dark corners. They change aspect when the screen changes, which also suggests that they are in some type of camera space, but I don't see where that would happen. Let me know if you need more context for the code snippets; these are the only important parts, I think.
Fixed-function vs programmable pipeline performance with many batches. In OpenGL 2.0 I can easily make 10,000 draw calls per frame (with state changes in between each call). However, if I try to do this in either OpenGL ES 2.0 or DirectX 9 with shaders, my performance is 1 Hz. Is there an inherent difference between fixed-function and programmable-pipeline rendering that forces the programmable pipeline to use far fewer draw calls per frame, or is it more likely that I'm just doing something stupid in my code? I thought that most fixed-function pipelines were implemented with shaders nowadays anyway, so if it is the case that I have to batch, how does the fixed-function implementation get away with not batching if it's really using shaders? Any "official" resources you could point me to would be great.
Cubemaps turn black (OpenGL / GLSL / Java / LWJGL). Recently I tried to add cubemaps to my 3D rendering engine. The objects with a cubemap now turn completely black. This is how I load my cubemap:

    public static int loadCubeMap(String filename) {
        int id = GL11.glGenTextures();
        GL13.glActiveTexture(GL13.GL_TEXTURE0);
        GL11.glBindTexture(GL13.GL_TEXTURE_CUBE_MAP, id);
        for (int i = 0; i < CUBEMAP_NAMES.length; i++) {
            TextureData data = decodeTextureFile(CUBEMAPS_LOCATION + filename + "_" + CUBEMAP_NAMES[i]);
            GL11.glTexImage2D(GL13.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL11.GL_RGBA,
                    data.getWidth(), data.getHeight(), 0,
                    GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, data.getBuffer());
        }
        GL11.glTexParameteri(GL13.GL_TEXTURE_CUBE_MAP, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
        GL11.glTexParameteri(GL13.GL_TEXTURE_CUBE_MAP, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
        GL11.glTexParameteri(GL13.GL_TEXTURE_CUBE_MAP, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
        GL11.glTexParameteri(GL13.GL_TEXTURE_CUBE_MAP, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
        return id;
    }

The TextureData class only contains int width, height and a ByteBuffer buffer:

    private static TextureData decodeTextureFile(String fileName) {
        int width = 0;
        int height = 0;
        ByteBuffer buffer = null;
        try {
            FileInputStream in = new FileInputStream(TEXTURES_LOCATION + fileName + ".png");
            PNGDecoder decoder = new PNGDecoder(in);
            width = decoder.getWidth();
            height = decoder.getHeight();
            buffer = ByteBuffer.allocateDirect(4 * width * height);
            decoder.decode(buffer, width * 4, Format.RGBA);
            buffer.flip();
            in.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new TextureData(width, height, buffer);
    }

This is how I load the cubemap into the samplerCube cubemap[i] in the fragment shader:

    public void loadCubeMap(int id, CubeMap cubemap, int cubemapid) {
        if (cubemap != null) {
            super.loadTexture(location_cubemaps[cubemapid], cubemap.id, id, GL13.GL_TEXTURE_CUBE_MAP);
            super.loadFloat(location_cubemap_intensity[id], cubemap.intensity);
        } else {
            super.loadFloat(location_cubemap_intensity[id], 0.0f);
        }
    }

    // in the superclass
    protected void loadTexture(int location, int texture, int i, int texturetype) {
        loadInt(location, i + 1);
        GL13.glActiveTexture(GL13.GL_TEXTURE0 + i);
        GL11.glBindTexture(texturetype, texture);
    }

And this is my fragment shader:

    #version 400 core

    in vec4 color;
    in vec2 texCoord0;
    in vec3 surface_normal;
    in vec3 to_light_vector;
    in vec3 world_position_out;

    out vec4 out_Color;

    uniform sampler2D sampler0;
    uniform samplerCube cubemap[3];
    uniform vec3 lightColor;
    uniform float cubemap_intensity[3];
    uniform int hasTexture;

    void main(void) {
        vec3 unitNormal = normalize(surface_normal);
        vec3 unitLight = normalize(to_light_vector);
        float nDot1 = dot(unitNormal, unitLight);
        float brightness = max(nDot1, 0.02);
        vec3 diffuse = color.xyz * brightness * lightColor;
        vec4 textureColor = texture(sampler0, texCoord0.xy);
        if (textureColor.a < 0.5)
            discard;
        vec4 shadedTextureColor = brightness * vec4(lightColor, 1) * textureColor;
        vec4 coloredShadedTexture = mix(vec4(diffuse, 1), shadedTextureColor, textureColor.a * hasTexture);
        vec4 reflectionColor = texture(cubemap[0], world_position_out);
        vec4 refractionColor;
        vec4 cubemapColor;
        out_Color = reflectionColor;
    }
Does glScissor affect stencil and depth buffer operations? I know glScissor() affects glColorMask() and glDepthMask(), but does it affect the stencil and depth buffers? For example:

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_SCISSOR_TEST);
    glEnable(GL_STENCIL_TEST);
    glScissor(X, Y, W, H);

    // Is this color mask set only for the scissor area?
    glColorMask(TRUE, TRUE, TRUE, TRUE);
    // Does this stencil function only work within the scissor area?
    glStencilFunc(GL_ALWAYS);
    // Does this stencil op only work within the scissor area?
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    // Is this depth mask set only for the scissor area?
    glDepthMask(GL_TRUE);
    // Does this depth function only work within the scissor area?
    glDepthFunc(GL_ALWAYS);
When I add a transformationMatrix in Java I can't see my images? When I add the transformationMatrix I can't see the images moving, but when I remove it I can. Why, and how do I fix it? Any ideas? My Renderer class:

    public class Renderer {
        private static final float FOV = 70;
        private static final float NEAR_PLANE = 0.1f;
        private static final float FAR_PLANE = 1000;
        private Matrix4f projectionMatrix;

        public Renderer(StaticShader shader) {
            createProjectionMatrix();
            shader.start();
            shader.loadProjectionMatrix(projectionMatrix);
            shader.stop();
        }

        public void prepare() {
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
            GL11.glClearColor(1, 0, 0, 1);
        }

        public void render(Entity entity, StaticShader shader) {
            TexturedModel model = entity.getModel();
            RawModel rawModel = model.getRawModel();
            GL30.glBindVertexArray(rawModel.getVaoID());
            GL20.glEnableVertexAttribArray(0);
            GL20.glEnableVertexAttribArray(1);
            Matrix4f transformationMatrix = Maths.createTransformationMatrix(entity.getPosition(),
                    entity.getRotX(), entity.getRotY(), entity.getRotZ(), entity.getScale());
            shader.loadTransformationMatrix(transformationMatrix);
            GL13.glActiveTexture(GL13.GL_TEXTURE0);
            GL11.glBindTexture(GL11.GL_TEXTURE_2D, model.getTexture().getID());
            GL11.glDrawElements(GL11.GL_TRIANGLES, rawModel.getVertexCount(), GL11.GL_UNSIGNED_INT, 0);
            GL20.glDisableVertexAttribArray(0);
            GL20.glDisableVertexAttribArray(1);
            GL30.glBindVertexArray(0);
        }

        private void createProjectionMatrix() {
            float aspectRatio = (float) Display.getWidth() / (float) Display.getHeight();
            float y_scale = (float) ((1f / Math.tan(Math.toRadians(FOV / 2f))) * aspectRatio);
            float x_scale = y_scale / aspectRatio;
            float frustum_length = FAR_PLANE - NEAR_PLANE;

            projectionMatrix = new Matrix4f();
            projectionMatrix.m00 = x_scale;
            projectionMatrix.m11 = y_scale;
            projectionMatrix.m22 = -((FAR_PLANE + NEAR_PLANE) / frustum_length);
            projectionMatrix.m23 = -1;
            projectionMatrix.m32 = -((2 * NEAR_PLANE * FAR_PLANE) / frustum_length);
            projectionMatrix.m33 = 0;
        }
    }

Edit: added the StaticShader class

    package shader;

    import org.lwjgl.util.vector.Matrix4f;

    public class StaticShader extends ShaderProgram {
        private static final String VERTEX_FILE = "src/shader/vertexShader.txt";
        private static final String FRAGMENT_FILE = "src/shader/fragmentShader.txt";

        private int location_transformationMatrix;
        private int location_projectionMatrix;

        public StaticShader() {
            super(VERTEX_FILE, FRAGMENT_FILE);
        }

        @Override
        protected void bindAttributes() {
            super.bindAttribute(0, "position");
            super.bindAttribute(1, "textureCoords");
        }

        @Override
        protected void getAllUniformLocations() {
            location_transformationMatrix = super.getUniformLocation("transformationMatrix");
            location_projectionMatrix = super.getUniformLocation("projectionMatrix");
        }

        public void loadTransformationMatrix(Matrix4f matrix) {
            super.loadMatrix(location_transformationMatrix, matrix);
        }

        public void loadProjectionMatrix(Matrix4f projection) {
            super.loadMatrix(location_projectionMatrix, projection);
        }
    }

Edit: vertexShader.txt

    #version 400 core

    in vec3 position;
    in vec2 textureCoords;

    out vec3 colour;
    out vec2 pass_textureCoords;

    uniform mat4 transformationMatrix;
    uniform mat4 projectionMatrix;

    void main(void) {
        gl_Position = projectionMatrix * transformationMatrix * vec4(position, 1.0);
        pass_textureCoords = textureCoords;
        colour = vec3(position.x + 0.5, 0.0, position.y + 0.5);
    }
glUniformMatrix4fv OpenTK equivalent. A very simple and quick question which, surprisingly, I couldn't find an answer to on the internet: what is the equivalent of glUniformMatrix4fv in OpenTK? I've browsed all 7 overloads of GL.UniformMatrix4 and none of them seems correct to me, nor have I found any example usage. E.g. if I have a Matrix4[] matrices variable (properly initialized, etc.) that I want to map onto a mat4 matrices[2] in GLSL, which GL.UniformMatrix4 overload should I use? P.S. Using the OpenTK.Next NuGet package, version 1.1.1616.8959.
Is frustum culling still needed today? I'm reading about efficient frustum-culling algorithms. I found an article about a smart method that first uses the frustum's AABB (axis-aligned bounding box) to eliminate most of the scene before testing again against the actual frustum representing the camera. I haven't done any performance tests yet, but maybe somebody else has and can answer this question. Let's say I'm using an octree to quickly test against the camera's bounding box, but I do not want to do a second pass against the camera planes. If I simply render everything that survives the first pass, isn't that actually faster than doing the second pass? Assume object geometry is loaded into VRAM using hardware vertex buffers.
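For concreteness, the coarse pretest the article describes can be sketched as a plain AABB-vs-AABB overlap check (the helper name and the numbers below are hypothetical, not from the article):

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Coarse reject: True iff two axis-aligned boxes intersect on every axis."""
    return all(max_a[i] >= min_b[i] and max_b[i] >= min_a[i] for i in range(3))

# A box spanning roughly the camera's view volume (made-up numbers)
frustum_min, frustum_max = (-10, -10, 0), (10, 10, 100)

# An object behind the near plane is rejected without ever touching the
# frustum planes; only survivors would need the exact plane test.
print(aabb_overlap(frustum_min, frustum_max, (-1, -1, -5), (1, 1, -2)))  # False
```

The trade-off the question asks about is exactly whether the survivors of this cheap test are few enough that skipping the exact plane test is a net win.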
Direct3D and OpenGL matrix representation. As I've read, OpenGL matrices are column-major: if I create a 16-element array, the first four elements are the first column of the matrix. Is it the same for Direct3D, or is some transformation required?
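To make the convention concrete, here is a small sketch (hypothetical helper name) of flattening a matrix column by column, OpenGL style; note the translation components land at flat indices 12-14:

```python
def flatten_column_major(m):
    """m is a 4x4 matrix given as a list of rows; emit its columns back to back."""
    return [m[row][col] for col in range(4) for row in range(4)]

# A translation by (tx, ty, tz) in ordinary row-per-line math notation
tx, ty, tz = 5.0, 6.0, 7.0
translation = [
    [1, 0, 0, tx],
    [0, 1, 0, ty],
    [0, 0, 1, tz],
    [0, 0, 0, 1],
]

flat = flatten_column_major(translation)
print(flat[12:15])  # [5.0, 6.0, 7.0] -- the 4th column holds the translation
```

A row-major array of the transposed matrix produces the same 16 floats in the same order, which is why the two APIs can often exchange raw matrix data as long as one side transposes (or flags the transpose in the upload call).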
glGenVertexArrays causes a crash. My code keeps crashing at runtime. I have done some creative debugging and determined that it is glGenVertexArrays that causes the crash. I've looked around and come across answers that told me to enable experimental mode in GLEW, but that didn't work. As far as I can tell my graphics card supports it; my OpenGL version is 3.1. I'm using freeGLUT and GLEW. Here's the code; the line in question is 45: http://hastebin.com/rekizejuza.cpp

    std::cout << "made it here\r\n";
    glGenVertexArrays(1, &meshID);
    std::cout << "not here\r\n";
    glBindVertexArray(meshID);
TBN matrix for normal and parallax mapping. I'd like to refer to this question because I didn't completely solve my problem there. I've implemented normal and parallax mapping, but because of some assumptions I have to use two different TBN matrices, one for each effect. One of the most important assumptions is that I have a deferred renderer with normal encoding and light calculations in view space. This implies that I have to convert data from normal maps (which are in tangent space) to view space using the TBN matrix below:

    mat3 NormalMatrix = transpose(inverse(mat3(ModelViewMatrix)));
    vec3 T = normalize(NormalMatrix * Tangent);
    vec3 N = normalize(NormalMatrix * Normal);
    vec3 B = normalize(NormalMatrix * Bitangent);
    mat3 TBN = mat3(T, B, N);

On the other hand, parallax mapping requires the view-direction vector in tangent space. To compute that, I take the camera position and fragment position (in world space) and multiply them by the following TBN matrix to move these vectors from world space to tangent space:

    vec3 T = normalize(mat3(ModelMatrix) * Tangent);
    vec3 N = normalize(mat3(ModelMatrix) * Normal);
    vec3 B = normalize(mat3(ModelMatrix) * Bitangent);
    mat3 TBN = transpose(mat3(T, B, N));

I'm looking for a way to optimize this. Is it possible to do better?
Java/LWJGL: How can I click to interact with objects? I want to be able to click on a monster so my character walks to it and starts attacking it. The part that doesn't make sense to me is the conversion between the mouse position and the actual terrain position. There are camera angles to worry about, heights, separate terrains... how is this done? I am using Java/LWJGL and rendering with OpenGL 4.4.
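For context, the standard approach ("mouse picking") starts by converting the pixel coordinate to normalized device coordinates; that NDC point is then unprojected through the inverse projection and view matrices into a world-space ray that is intersected with the terrain or monster bounds. The first step can be sketched like this (helper name and window size are illustrative):

```python
def mouse_to_ndc(mx, my, width, height):
    """Map a pixel position (origin at top-left) to OpenGL NDC in [-1, 1]."""
    x = 2.0 * mx / width - 1.0
    y = 1.0 - 2.0 * my / height   # flipped: OpenGL's y axis points up
    return x, y

# The centre of an 800x600 window maps to the middle of NDC space
print(mouse_to_ndc(400, 300, 800, 600))  # (0.0, 0.0)
```

The remaining steps (multiplying (x, y, -1, 1) by the inverse of projection * view, then ray-marching or ray-triangle testing against the terrain height field) are where the camera angles and heights the question worries about get handled.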
In an OpenGL shader, why does adding color change vertex positions? I have the following vertex shader:

    #version 120

    attribute vec4 position;
    attribute vec4 acolor;
    varying vec4 theColor;

    void main() {
        gl_Position = position;
        theColor = acolor;                       // (1)
        // theColor = vec4(1.0, 0.0, 0.0, 1.0); // (2)
    }

When I use the constant color of line (2), everything is transmitted correctly to the fragment shader, but when I read the acolor attribute (line (1)), the positions of the triangles come out wrong. From the data in the main program, it looks to me like the positions are receiving the color values. I have attached two images: one with the right vertex positions and one with the wrong vertex positions. UPDATE: render code on GitHub.
Is this a good way of separating graphics from game logic? My current architecture for my game engine looks like this, though the diagram is not exact: everything graphics-related is done by GraphicsEngine and its components (Material, Mesh, etc.). My problem is that I want to store the pointers in RenderData, but then I have to include the Mesh, Material, etc. header files, which include glew. I currently change an object's material using GetRenderer().SetMaterial("xyz"), which sets a string in the RenderData; the graphics engine then processes it and sets the correct pointer, if it exists. This is not very modular, because the scene ends up including graphics-related files like glew, and that is a problem. My only solution is to store indices in RenderData: there won't be a material pointer, but instead an index into the GraphicsEngine's material store. That way RenderData is just a "blind" integer-and-string store on which the Renderer and the GraphicsEngine operate. Is this a good solution?

Meshes have VertexData members (position, normal, texture). When I call GraphicsEngine.CreateMesh(), passing the MeshName and FileName, where should the file processing go? I use Tiny Obj Loader, and I don't know where I should include it and call its functions:

1. I call the function from inside GraphicsEngine, then transform the returned structures into my Mesh's structure, which I pass to the Mesh's constructor; the initializer list assigns it to the corresponding member variable.
2. Inside GraphicsEngine, I pass the FileName to the Mesh constructor and let it handle everything by itself.

I think the first solution is better, but I don't really know why. Maybe having GraphicsEngine "create" assets is better than GraphicsEngine commanding assets to "be created", but that is just a personal feeling. Which solution is better?
OpenGL: generating a single square with a quad strip. I am currently trying to create a terrain-generation system for my 3D game with OpenGL 2.1. The way I am doing it requires the creation of a massive 2D square that consists of multiple smaller squares. After the flat square is generated, the points are randomly offset along the y axis, turning it into a 3D structure. But I can't seem to figure out how to generate this using GL_QUAD_STRIP. Can someone please help me figure this out? Thanks! P.S. I know my method may not be the best, but I would like to get it to work anyway, as it suits my game.
glGenBuffers is NULL. I'm using GLEW 1.13.0, (GLUT), SDL2 and OpenGL 3.3 core.

    #include <GL/glew.h>
    #include <GL/glut.h>

    int main(int argc, char* args[])
    {
        Engine::init();           // Initializes SDL + GL attributes
        glutInit(&argc, args);    // Don't know if needed
        glewExperimental = GL_TRUE;
        glewInit();
        WINDOW["main"] = new Window("Test", 800, 600, false, true);

        GLfloat vertices[] = {
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
             0.0f,  0.5f, 0.0f
        };

        GLuint VBO[1];
        glGenBuffers(1, VBO);     // Here my program crashes, as glGenBuffers is NULL
        glBindBuffer(GL_ARRAY_BUFFER, VBO[0]);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
        ...
When does OpenGL perform the perspective divide? When does OpenGL divide by w? Is this done automatically within the vertex shader? It is my understanding that the vertex shader outputs the final vertex position, so I presume no transformations can be made after that point...
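For reference, the divide happens in fixed-function hardware after the vertex shader and before rasterization: the shader writes clip coordinates to gl_Position, and the GPU divides x, y, z by w to produce normalized device coordinates. The arithmetic itself is just this:

```python
def perspective_divide(clip):
    """Clip-space (x, y, z, w) -> normalized device coordinates (x/w, y/w, z/w)."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)

# A clip-space vertex with w = 2 lands at half its x/y/z in NDC
print(perspective_divide((1.0, 2.0, 4.0, 2.0)))  # (0.5, 1.0, 2.0)
```

So the vertex shader's output is not quite the "final" position: clipping, the perspective divide, and the viewport transform all happen afterwards, outside programmable code.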
2D outline shader in GLSL. I have a simple prototype with Worms-like destructible 2D terrain. I use a trivial shader to discard pixels based on a mask:

    varying vec2 v_texCoords;
    uniform sampler2D u_texture;
    uniform sampler2D u_mask;

    void main() {
        vec4 colour = texture2D(u_texture, v_texCoords);
        vec4 mask = texture2D(u_mask, v_texCoords);
        if (mask.a > 0.0)
            discard;
        else
            gl_FragColor = colour;
    }

What would be a good and simple way to add a few-pixels-thick black outline to the terrain, to make the carved parts look better? I've researched edge-detection algorithms a bit, but my hunch says that such a full-blown thing would be overkill for this task. I'm not after quality or high performance, but simplicity. Thanks for any ideas.
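One simple direction (my suggestion, not from the post): keep a pixel black instead of textured when it is solid but at least one of its four neighbours is carved in the mask. Prototyped on a plain 2D grid where 1 means carved:

```python
def is_outline(mask, x, y):
    """Solid pixel with at least one carved 4-neighbour => outline pixel."""
    if mask[y][x] == 1:          # carved pixels stay discarded
        return False
    neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return any(mask[ny][nx] == 1 for nx, ny in neighbours
               if 0 <= nx < len(mask[0]) and 0 <= ny < len(mask))

mask = [
    [0, 0, 0],
    [0, 1, 0],   # one carved pixel in the middle
    [0, 0, 0],
]
print(is_outline(mask, 0, 1))  # True: solid, next to the carved hole
print(is_outline(mask, 0, 0))  # False: no carved 4-neighbour
```

In the fragment shader this becomes four extra texture2D reads of u_mask at one-texel offsets (1.0 / maskresolution); widening the offsets thickens the outline. That is far cheaper than a general edge-detection pass and fits the "simplicity over quality" requirement.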
OpenGL cubemap skybox edge issue. I implemented a skybox in my program using a tutorial, and with the six textures provided by that tutorial my cube map looked fine. However, every other skybox texture set I have since tried has had issues with the edges not blending together. Here is generally how they always end up looking. Here is my code for loading the textures and setting the parameters:

    glActiveTexture(GL_TEXTURE0);
    textureID = SOIL_load_OGL_cubemap(textures[0], textures[1], textures[2],
                                      textures[3], textures[4], textures[5],
                                      0, 0, SOIL_LOAD_RGB);
    glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
    if (textureID == 0) {
        glGenTextures(1, &textureID);
        int width, height;
        unsigned char* image;
        glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
        for (GLuint i = 0; i < 6; i++) {
            GLuint sID = SOIL_load_OGL_texture(textures[i], 0, 0, 0);
            image = SOIL_load_image(textures[i], &width, &height, 0, SOIL_LOAD_RGB);
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, width, height,
                         0, GL_RGB, GL_UNSIGNED_BYTE, image);
            SOIL_free_image_data(image);
        }
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);

And the code for rendering the skybox:

    glDepthMask(GL_FALSE);
    shader->bind();
    glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
    viewMatrix.r4.x = 0;
    viewMatrix.r4.y = 0;
    viewMatrix.r4.z = 0;
    shader->loadMatrix("viewMatrix", &viewMatrix.m11);
    glBindVertexArray(vao);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, 36);
    glBindVertexArray(0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    shader->unbind();
    glDepthMask(GL_TRUE);

And the shader code:

    const char* vertexShader =
        "#version 330\r\n"
        "in vec3 position;\r\n"
        "out vec3 texCoord;\r\n"
        "uniform mat4 projectionMatrix;\r\n"
        "uniform mat4 viewMatrix;\r\n"
        "void main()\r\n"
        "{\r\n"
        "    gl_Position = projectionMatrix * viewMatrix * vec4(position, 1);\r\n"
        "    texCoord = position;\r\n"
        "}";

    const char* fragmentShader =
        "#version 330\r\n"
        "in vec3 texCoord;\r\n"
        "out vec4 color;\r\n"
        "uniform samplerCube sampler;\r\n"
        "void main()\r\n"
        "{\r\n"
        "    color = texture(sampler, texCoord);\r\n"
        "}";

Does anyone see something I might be missing that causes my skybox not to render properly for most cube maps?
Draw coloured transparent polygons on top of a texture in modern OpenGL. I am rendering an image in the viewport by creating a quad of half-size symmetry_create = 1 and binding a texture to it:

    static const GLfloat vertices[] = {
        // vertex data                               // uv
        -symmetry_create,  symmetry_create, 0.0f,    0.0f, 1.0f,
         symmetry_create,  symmetry_create, 0.0f,    1.0f, 1.0f,
         symmetry_create, -symmetry_create, 0.0f,    1.0f, 0.0f,
        -symmetry_create, -symmetry_create, 0.0f,    0.0f, 0.0f
    };

My vertex shader looks like:

    #version 330 core
    layout (location = 0) in vec3 aPos;
    layout (location = 1) in vec2 texCoord;
    out vec2 TexCoord;

    void main() {
        gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
        TexCoord = texCoord;
    }

And the fragment shader looks like:

    #version 330 core
    in vec2 TexCoord;
    out vec4 frag_color;
    uniform sampler2D myTexture;

    void main() {
        frag_color = texture(myTexture, TexCoord);
    }

I have set up my VAO, VBO and indices correctly, so I get the result as expected (i.e. the provided image on the display screen). Now I would like to draw n transparent polygons on top of the generated image. For each polygon, I have thought of collecting mouse cursor positions and clicks and rendering with glDrawArrays(GL_TRIANGLE_FAN, 0, n). If I use the same vertex and fragment shaders, my result is black opaque polygons on top of the image. How do I achieve transparency and colour in the polygons? Do I need another set of shaders (vertex and fragment)?
Should I give each character its own VBO, or should I batch them into a single VBO? I'm making a 3D first-person game. Should I give each character its own VBO, or should I batch all characters into a single VBO? What are the pros and cons?
Rotation going wrong. I'm calculating matrices by hand. Translations are fine:

    void Translate(float x, float y, float z, float m[4][4])
    {
        Identity(m);
        m[3][0] = x;
        m[3][1] = y;
        m[3][2] = z;
    }

If I multiply a vector by this matrix, I get the correct transformation. My problem now is rotations. I copied the definition from the OpenGL reference on glRotate, but I can't get it right. Can you spot my mistake?

    void Rotate(float angle, float x, float y, float z, float m[4][4])
    {
        float c = cos(angle);
        float s = sin(angle);

        m[0][0] = x*x*(1-c)+c;   m[0][1] = y*x*(1-c)+z*s; m[0][2] = x*z*(1-c)-y*s; m[0][3] = 0;
        m[1][0] = x*y*(1-c)-z*s; m[1][1] = y*y*(1-c)+c;   m[1][2] = y*z*(1-c)+x*s; m[1][3] = 0;
        m[2][0] = x*z*(1-c)+y*s; m[2][1] = y*z*(1-c)-x*s; m[2][2] = z*z*(1-c)+c;   m[2][3] = 0;
        m[3][0] = 0;             m[3][1] = 0;             m[3][2] = 0;             m[3][3] = 1;
    }

I don't know what else is relevant, so if you're kind enough to lend me a hand on this and I just haven't presented enough info, say so. Thank you for taking the time to read this question. EDIT: The trouble I'm having is the following: if I do Rotate(180, 0, 0, 0), the vertices are inverted as intended, but the resulting triangle (in this case) is smaller. Screenshot.
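Two things worth checking against the code above (my observations, not from the post): the C library's cos/sin expect radians while glRotate's reference formula is specified for an angle in degrees, and the reference also assumes the axis (x, y, z) is a unit vector. A sketch of the same matrix with both fixes applied, in row-major form for readability:

```python
import math

def rotate(angle_deg, x, y, z):
    """Axis-angle rotation matrix per the glRotate man page (degrees in, axis normalized)."""
    a = math.radians(angle_deg)           # cos/sin want radians, not degrees
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n         # the formula assumes a unit axis
    c, s = math.cos(a), math.sin(a)
    return [
        [x*x*(1-c)+c,   x*y*(1-c)-z*s, x*z*(1-c)+y*s],
        [y*x*(1-c)+z*s, y*y*(1-c)+c,   y*z*(1-c)-x*s],
        [z*x*(1-c)-y*s, z*y*(1-c)+x*s, z*z*(1-c)+c],
    ]

m = rotate(180, 0, 0, 1)   # half turn about the z axis
px = m[0][0] * 1 + m[0][1] * 0
py = m[1][0] * 1 + m[1][1] * 0
print(round(px, 6), round(py, 6))  # -1.0 0.0 -- (1, 0) maps to (-1, 0), same length
```

A pure rotation never changes lengths, so a shrinking triangle is a strong hint that cos(angle) is being evaluated with degrees fed into a radians function: cos(180.0) in radians is about -0.598, which scales the geometry down instead of flipping it cleanly.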
SDL2 and OpenGL flickering with double buffering — what am I doing wrong? I'm currently fiddling with SDL2 and OpenGL to get an understanding of how they work and, moreover, how shaders are done and work. Right now I'm following this tutorial, using SDL instead of GLFW, and in an SDL window I render two triangles via a shader and change their color dynamically, but an obnoxious flickering effect shows, even though I'm sure double buffering is active. Now, to the code. Here are the SDL attributes for OpenGL:

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
    // Double buffer should be on by default, but I set it anyway
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

SDL window creation:

    mWindow = SDL_CreateWindow("Window Title",
                               SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                               mWidth, mHeight,
                               SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE);

    // Use VSync
    if (SDL_GL_SetSwapInterval(1) < 0) {
        SDL_LogError(SDL_LOG_CATEGORY_APPLICATION,
                     "Warning: Unable to set VSync! SDL Error: %s\n", SDL_GetError());
    }

OpenGL initialization:

    glClearColor(0.f, 0.f, 0.f, 1.f);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

while one of the shaders I use (no point in pasting the other one too, as the issue is the same) is — vertex shader:

    #version 330
    layout (location = 0) in vec3 position;
    layout (location = 1) in vec3 color;
    out vec3 LSampleFrag;

    void main() {
        gl_Position = vec4(position, 1.0);
        LSampleFrag = color;
    }

fragment shader:

    #version 330
    out vec4 color;
    uniform vec4 LSampleFrag;

    void main() {
        color = LSampleFrag;
    }

Binding of the vertex data to the shader:

    glClearColor(0.f, 0.f, 0.f, 1.f);
    // VBO data
    GLfloat vertexData[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f
    };
    glGenVertexArrays(1, &gVAO);
    glGenBuffers(1, &gVBO);
    // Bind the Vertex Array Object first, then bind and set vertex buffer(s) and attribute pointer(s).
    glBindVertexArray(gVAO);
    glBindBuffer(GL_ARRAY_BUFFER, gVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
    // Position attribute
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), nullptr);
    glEnableVertexAttribArray(0);
    glBindVertexArray(0);

and its render function is this one, where every time it is called I change the shade of green:

    GLfloat timeValue = SDL_GetTicks();
    GLfloat greenValue = (sin(timeValue) / 2) + 0.5;
    GLint vertexColorLocation = glGetUniformLocation(mProgramID, "LSampleFrag");
    glUniform4f(vertexColorLocation, 0.0f, greenValue, 0.0f, 1.0f);
    glBindVertexArray(gVAO);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0);

In the main render function, after rendering the shader:

    SDL_GL_SwapWindow(window->getSDLWindow());

As I've already said, double buffering is active but flickering is still an issue. Am I missing something? I really don't understand why it is happening.
Shadow mapping: what is the light looking at? I'm all set to implement shadow mapping in my 3D engine, but there is one thing I am struggling to understand. The scene needs to be rendered from the light's point of view, so I simply move my camera to the light's position first, but then I need to find out which direction the light is looking. Since it's a point light, it isn't shining in any particular direction. How do I figure out the orientation for the light's point of view?
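For a point light the usual answer (standard cube-map shadow practice, not spelled out in the post) is that there is no single direction: you render six 90-degree-FOV views from the light's position, one per cube-map face, each with its own forward and up vector. Sketched as data, using the face order and up vectors conventionally paired with GL cube-map targets:

```python
# (forward, up) pairs for the six cube-map faces: +X, -X, +Y, -Y, +Z, -Z
FACES = [
    ((+1, 0, 0), (0, -1, 0)),
    ((-1, 0, 0), (0, -1, 0)),
    ((0, +1, 0), (0, 0, +1)),
    ((0, -1, 0), (0, 0, -1)),
    ((0, 0, +1), (0, -1, 0)),
    ((0, 0, -1), (0, -1, 0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Sanity check: every face's forward vector is perpendicular to its up vector
print(all(dot(f, u) == 0 for f, u in FACES))  # True
```

Each of the six passes uses lookAt(lightPos, lightPos + forward, up) with a 90-degree perspective projection and a 1:1 aspect ratio, and the depth results go into the corresponding cube-map face. (A spotlight or directional light, by contrast, really does have a single view and needs only one 2D shadow map.)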
What is causing these texture edge artifacts on some video cards? I have a 2D heightfield converted into a very simple mesh and textured with tiles from a texture atlas. The tile texture is drawn with:

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

On my integrated Intel card it draws as I intend (top picture). On a friend's ATI card it has very visible edges (bottom picture). What causes this, and how can I avoid it? Extra info: glColor4f(1,1,1,1) has been called before drawing the mesh; the texture atlas is a simple vertical strip, so the right side of each tile has x = 1.0f; the vertices are shared, so there isn't really a 'hole' between my rows; the resolution on both systems is the same; the vertices and texture coordinates are all passed as GL_FLOAT; the tile atlas is mipmapped.
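One common cause (an assumption on my part, since the post doesn't confirm it): sampling the atlas exactly on tile borders, where GL_LINEAR (and especially mipmapping) blends in texels from the neighbouring tile; drivers round these samples differently, which is why only some cards show seams. The usual mitigation is insetting each tile's UV range by half a texel:

```python
def tile_uv_range(tile_index, tile_count, atlas_height_px):
    """V range for one tile in a vertical-strip atlas, inset by half a texel."""
    half_texel = 0.5 / atlas_height_px
    v0 = tile_index / tile_count + half_texel
    v1 = (tile_index + 1) / tile_count - half_texel
    return v0, v1

# 4 tiles stacked in a 256-px-tall strip: tile 0 spans slightly inside [0, 0.25]
print(tile_uv_range(0, 4, 256))  # (0.001953125, 0.248046875)
```

Half a texel keeps the bilinear filter footprint inside the tile at the base mip level; with mipmapping enabled, deeper mip levels still cross tile borders, so padding each tile with duplicated border pixels (or using separate textures / array textures) is the more robust fix.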
OpenGL 4: overhead of several glUseProgram calls. I'm developing a little 2D game using OpenGL 4.x, and I've also coded a very simple light system which does not handle shadows. The main concept behind this light system is multiplicative framebuffer blending. I basically have two framebuffers, one for the normal scene render and the other for the ambient light and the scene's light sources. Then I blend those two framebuffers together and get my final result (which is pretty good, to be honest). In this model I have three different shader programs: one for rendering the normal scene (with texturing and other normal features); one for rendering lights, which computes light halos and other light effects; and one for blending the two framebuffers together. Now, in my main application loop I have to switch between those three shader programs in order to complete a full rendering cycle, and I'm also planning to add more shaders to render different light effects and particles. At the moment I'm in the design phase of my game, so I'm not able to test the performance of this approach, and I have no previous experience with it. For the same reason I'd like to start with the best approach possible for this situation. So my question is: considering your experience, is switching between several shader programs each frame fine, or is it bad behaviour in a 2D environment? Is using fewer shader programs, but with more if statements and more unused uniform variables, a better solution?
Second glBindBuffer() call crashes the program on the draw call. Background/Issue: I'm pretty new to OpenGL and I'm trying to create a game engine (for learning purposes). My program keeps crashing on my glDrawElements() call, but only after trying to set glBindBuffer a second time. Code: below is some of the code in my engine. Basically, I have two possible objects I can draw with my engine, a triangle and a square. I first send the initial shape data down to GPU buffers within my RenderSystem's Initialize function, like so:

    // RenderSystem.cpp
    bool RenderSystem::Initialize()
    {
        // Send triangle shape data down to GPU
        MyOpenGL::InitializeBuffers(ShapeData::Triangle().vertices.size(),
                                    &ShapeData::Triangle().vertices.front(),
                                    ShapeData::Triangle().indicies.size(),
                                    &ShapeData::Triangle().indicies.front(),
                                    triangleVertexBufferID, triangleIndexBufferID);

        // Send square shape data down to GPU
        MyOpenGL::InitializeBuffers(ShapeData::Square().vertices.size(),
                                    &ShapeData::Square().vertices.front(),
                                    ShapeData::Square().indicies.size(),
                                    &ShapeData::Square().indicies.front(),
                                    squareVertexBufferID, squareIndexBufferID);

        return true;
    }

The MyOpenGL::InitializeBuffers() function code is next:

    void InitializeBuffers(int64 sizeOfGeometry, const void* GeometryDataFirstElement,
                           int64 sizeOfIndicies, const void* indicieDataFirstElement,
                           uint32 vertexBufferID, uint32 indexBufferID)
    {
        glGenBuffers(1, &vertexBufferID);
        glGenBuffers(1, &indexBufferID);
        glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
        glBufferData(GL_ARRAY_BUFFER, sizeof(Vector2D) * sizeOfGeometry,
                     GeometryDataFirstElement, GL_DYNAMIC_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(uint16) * sizeOfIndicies,
                     indicieDataFirstElement, GL_DYNAMIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 2, nullptr);
    }

Now, I call a draw function within my RenderSystem's update() function, which basically just calls this MyOpenGL::Draw() function, passing in whichever buffer IDs I want to draw (square or triangle):

    void Draw(uint32 vertexBufferID, uint32 indexBufferID, uint16 numOfIndices)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
        glDrawElements(GL_TRIANGLES, numOfIndices, GL_UNSIGNED_SHORT, 0);
    }

However, after the glDrawElements call my program crashes. If I remove the glBindBuffer calls, then it works, drawing whatever buffer object I bound last. Why is my program crashing when trying to rebind to whatever object I want to draw?
OpenGL camera setup for zoom in/out centered at the point under the cursor. I am trying to implement a zoom in/out navigation mode in an OpenGL 3D viewer. I was able to implement zoom centered at the screen center just by moving the eye towards the center in perspective mode. Now I am trying to zoom centered at an arbitrary position under the cursor. I am unable to figure out how I should move my camera forward and backward so that the point under the cursor remains at the same screen coordinates after zooming in or out. Any help would be appreciated. The images below show the desired effect. Just to mention, I am working in perspective mode with eye, target and up vectors to control the camera. I found the same effect in Google SketchUp and in the 'zoom to mouse position' setting in Blender.
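The property to preserve can be written down directly: if the eye dollies along the ray from the eye through the world-space point under the cursor, and the camera's orientation stays fixed, that point's direction from the eye is unchanged, so it projects to the same pixel. A sketch of the idea (hypothetical helper name; the hit point would come from unprojecting the cursor and intersecting the scene):

```python
def zoom_toward(eye, cursor_point, factor):
    """Move the eye along the eye -> cursor_point ray; factor < 1 zooms in."""
    return tuple(c + (e - c) * factor for e, c in zip(eye, cursor_point))

eye = (0.0, 0.0, 10.0)
under_cursor = (2.0, 1.0, 0.0)   # world-space hit point under the mouse
print(zoom_toward(eye, under_cursor, 0.5))  # (1.0, 0.5, 5.0)
```

With an eye/target/up camera, the target must be translated by the same offset as the eye (new_eye - old_eye) so the view direction, and therefore the orientation, does not change; otherwise the look-at rotation would swing toward the cursor point and break the invariant.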
1
OpenGL float not working? I have a float array in my terrainFragmentShader to load/render the biomes, however the code simply does not work. No errors are being cast, and when I remove the code biomeMap[int(pass_textureCoordinates.x)][int(pass_textureCoordinates.y)] by either deleting that section or manually setting it, it loads the appropriate texture.

vec4 textureColour = texture(backgroundTexture, pass_textureCoordinates);
float c = biomeMap[int(pass_textureCoordinates.x)][int(pass_textureCoordinates.y)];
if(c < 0) {
    textureColour = texture(sandTexture, pass_textureCoordinates);
} else if(c < 32) {
    textureColour = texture(stoneTexture, pass_textureCoordinates);
}

All three textures load just fine and are rendered when not using this method, and the biomes are being properly loaded and initialized. Loading:

public void loadBiomes(float[][] biomeMap) {
    for(int x = 0; x < Terrain.VERTEX_COUNT; x++) {
        for(int y = 0; y < Terrain.VERTEX_COUNT; y++) {
            super.loadFloat(location_biomeMap[x][y], biomeMap[x][y]);
        }
    }
}

Initializing:

location_biomeMap = new int[Terrain.VERTEX_COUNT][Terrain.VERTEX_COUNT];
for(int x = 0; x < Terrain.VERTEX_COUNT; x++) {
    for(int y = 0; y < Terrain.VERTEX_COUNT; y++) {
        location_biomeMap[x][y] = super.getUniformLocation("biomeMap[" + x + "][" + y + "]");
    }
}

And finally here is the entire terrainFragmentShader:

#version 400 core

in vec2 pass_textureCoordinates;
in vec3 surfaceNormal;
in vec3 toLightVector;
in vec3 toCameraVector;
in int biomeSize;

out vec4 out_Color;

uniform sampler2D backgroundTexture;
uniform sampler2D sandTexture;
uniform sampler2D stoneTexture;
uniform vec3 lightColour;
uniform float shineDamper;
uniform float reflectivity;
uniform float biomeMap[128][128];
uniform float offsetX;
uniform float offsetZ;

const int levels = 32;

void main(void) {
    vec4 textureColour = texture(backgroundTexture, pass_textureCoordinates);
    float c = biomeMap[int(pass_textureCoordinates.x)][int(pass_textureCoordinates.y)];
    if(c < 0) {
        textureColour = texture(sandTexture, pass_textureCoordinates);
    } else if(c < 32) {
        textureColour = texture(stoneTexture, pass_textureCoordinates);
    }
    vec3 unitNormal = normalize(surfaceNormal);
    vec3 unitLightVector = normalize(toLightVector);
    float nDotl = dot(unitNormal, unitLightVector);
    float brightness = max(nDotl, 0.25);
    vec3 unitVectorToCamera = normalize(toCameraVector);
    vec3 lightDirection = -unitLightVector;
    vec3 reflectedLightDirection = reflect(lightDirection, unitNormal);
    vec3 totalDiffuse = brightness * lightColour;
    out_Color = vec4(totalDiffuse, 1.0) * textureColour;
}

Why isn't the biome loading? Is it an issue with me not getting the appropriate x/y from pass_textureCoordinates?
1
How to render portals in OpenGL? I am making RPG in OpenGl and I need to make some portals. How should I render it if I want to see through the portal on the other side?
1
How can I use ImGui to render simple text instead of using stb truetype directly? Since ImGui builds on top of stb truetype for its font rendering, I thought it could be nicer to use its already built font processing capabilities (ImGui GetIO().Fonts), and render with those, instead of using stb truetype directly. However, I've been having trouble figuring out how to do this, specifically how to get quad position texture coordinates for a given string to use with the preloaded texture at ImGui GetIO().Fonts gt TexID. I'm not looking to draw buttons text inside ImGui windows, all I want to do is to use ImGui to build vertex data for a given string so that I can render it anywhere.
1
Storing attributes in static geometry. I have a Minecraft-like world where I statically create one instance of each tile type, and then place it around the world. However, I don't know how to change individual attributes for each tile. For instance, I need to change the lighting per tile, but if I do, it'll change the color of every single tile. I was thinking about storing two arrays per chunk, one for the tiles and one for the light; when I want to change the lighting, I can just change the value in the lighting array that coincides with the position in the tile array. However, this means I'll have to store twice as much data per chunk. Is that something I just have to accept?
1
Cubemap faces rotation in GS shader I Can't get correct rotations for cubemap faces. Thats should come to geometry shader, from camera "view" matrix I want make 6 faces for my cubemap, but seems I can't get it correctly... My first guess that camera 'indent' matrix faces to ' z' X rot 0 (mat4 ( 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) X rot 1 (mat4 ( 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) Y rot 2 (mat4 ( 1 , 0 , 0 , 0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) Y rot 3 (mat4 ( 1 , 0 , 0 , 0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) z rot 5 (mat4 ( 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1.0 )) z rot 4 (mat4 ( 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1.0 )) EDIT Correct for me Using LookAt magic, finaly find correct order of rotation NOTE thats RH coordinate system Column major order (glsl default) X rot 0 (mat4 ( 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) X rot 1 (mat4 ( 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) Y rot 3 (mat4 ( 1 , 0 , 0 , 0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) Y rot 2 (mat4 ( 1 , 0 , 0 , 0 , 0 , 0 , 1.0 , 0 , 0 , 1.0 , 0 , 0 , 0 , 0 , 0 , 1.0 )) z rot 4 (mat4 ( 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1.0 )) z rot 5 (mat4 ( 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1 , 0 , 0 , 0 , 0 , 1.0 ))
1
WebGL wrong scaling of rectangle. I'm working with this tutorial: http www.html5rocks.com en tutorials webgl webgl transforms and my result is that the rectangle moves in the direction of its scaling values, and also scales at the same time. So if I scale by (2.0, 1.0) it moves right and stretches, but I only want to stretch/scale it. This is my vertex shader; it is nearly the same as in the tutorial (declarations omitted):

void main(void) {
    vec2 scaledPosition = a_position * u_scale;
    vec2 rotatedPosition = vec2(
        scaledPosition.x * u_rotation.y + scaledPosition.y * u_rotation.x,
        scaledPosition.y * u_rotation.y - scaledPosition.x * u_rotation.x);
    vec2 pos = rotatedPosition + u_translation;
    vec2 zeroToOne = pos / u_resolution;
    vec2 zeroToTwo = zeroToOne * 2.0;
    vec2 clipSpace = zeroToTwo - 1.0;
    v_texCoord = a_texCoord;
    gl_Position = vec4(clipSpace, 0, 1);
}

What could be the mistake if it is not in the shader? Thanks in advance.
1
Making video from 3D graphics in OpenGL. What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-Body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: The user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete, and then can watch the results. The user is able to increase or decrease the playback speed of the simulation, whereby in slow motion more frames are used (i.e., you see higher-resolution time steps), and when the speed is increased, you see lower-resolution time steps at a higher rate, but the number of frames per second flashing on the screen is constant.
1
OpenGL: Attempt to allocate a texture too big for the current hardware. I'm getting the following error:

java.io.IOException: Attempt to allocate a texture to big for the current hardware
    at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:320)
    at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:254)
    at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:200)
    at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:64)
    at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:24)

The image I'm trying to use is 128x128. From System.out.println(GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE)) I get 32. 32?! My graphics card is an AMD Radeon HD 7970M with 2048 MB GDDR5 RAM; I can run all the latest games in 1080p and 60fps with no problem, and those textures sure as hell don't look like they are 32x32 pixels to me! How can I fix this? Edit: Here's the chaotic code I use to init OpenGL:

Display.setDisplayMode(new DisplayMode(500, 500));
Display.create();
if (!GLContext.getCapabilities().OpenGL11) {
    throw new Exception("OpenGL 1.1 not supported.");
}
Display.setTitle("Game");
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
GLU.gluPerspective(45, 1, 0.1f, 5000);
Mouse.setGrabbed(true);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glClearColor(0, 0, 0, 0);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_POLYGON_OFFSET_FILL);
glShadeModel(GL_SMOOTH);

Display is an LWJGL thing; it makes the OpenGL context and the window. Anyway, I don't think there's anything in the init code that can help me, but you never know...
1
Creating a movable camera using glm::lookAt(). I came across this tutorial on how to create a movable camera in OpenGL using glm::lookAt(glm::vec3 position, glm::vec3 target, glm::vec3 up). In the tutorial, in order to keep the camera always facing in one direction while moving, the view matrix is created as view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp), where cameraPos, cameraFront, and cameraUp are all of type glm::vec3. What I would like to ask is: why does the second argument have to be cameraPos + cameraFront? If the camera position moved to the right without changing cameraFront, wouldn't cameraPos + cameraFront have the effect of rotating to the right as opposed to staying in the same direction (which I think is what should be needed)?
1
OpenGL Precompute a texture rotation I'm trying to speed up particles, and one way to do that is by precomputing the texture rotations. What I want to do is load the texture, rotate it and save it to a handle. How would I go about doing this? Thanks in advance.
1
Trying to implement Camera I'm trying to implement a Camera class in order to walk and look on the world as follow ifndef CAMERA H define CAMERA H include lt glm glm.hpp gt class Camera public Camera() Camera() void Update(const glm vec2 amp newXY) if by 0.0 it means, it will use the const Class speed to scale it void MoveForward(const float by 0.0f) void MoveBackword(const float by 0.0f) void MoveLef(const float by 0.0f) void MoveRight(const float by 0.0f) void MoveUp(const float by 0.0f) void MoveDown(const float by 0.0f) void Speed(const float speed 0.0f) glm vec3 amp GetCurrentPosition() glm vec3 amp GetCurrentDirection() glm mat4 GetWorldToView() const private glm vec3 position, viewDirection, strafeDir glm vec2 oldYX float speed const glm vec3 up endif include "Camera.h" include lt glm gtx transform.hpp gt Camera Camera() up(0.0f, 1.0f, 0.0), viewDirection(0.0f, 0.0f, 1.0f), speed(0.1f) Camera Camera() void Camera Update(const glm vec2 amp newXY) glm vec2 delta newXY oldYX auto length glm length(delta) if (glm length(delta) lt 50.f) strafeDir glm cross(viewDirection, up) glm mat4 rotation glm rotate( delta.x speed, up) glm rotate( delta.y speed, strafeDir) viewDirection glm mat3(rotation) viewDirection oldYX newXY void Camera Speed(const float speed) this gt speed speed void Camera MoveForward(const float by) float s by 0.0f ? speed by position s viewDirection void Camera MoveBackword(const float by) float s by 0.0f ? speed by position s viewDirection void Camera MoveLef(const float by ) float s by 0.0f ? speed by position s strafeDir void Camera MoveRight(const float by ) float s by 0.0f ? speed by position s strafeDir void Camera MoveUp(const float by ) float s by 0.0f ? speed by position s up void Camera MoveDown(const float by ) float s by 0.0f ? 
speed by position s up glm vec3 amp Camera GetCurrentPosition() return position glm vec3 amp Camera GetCurrentDirection() return viewDirection glm mat4 Camera GetWorldToView() const return glm lookAt(position, position viewDirection, up) and I update and render as follow void Game OnUpdate() glLoadIdentity() glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glUniformMatrix4fv(program gt GetUniformLocation("modelToViewWorld"), 1, GL FALSE, amp cam.GetWorldToView() 0 0 ) void Game OnRender() model gt Draw() Which the Vertix shader is version 410 layout (location 0) in vec3 inVertex layout (location 1) in vec2 inTexture layout (location 2) in vec3 inNormal uniform mat4 modelToViewWorld void main() gl Position vec4(mat3(modelToViewWorld) inVertex, 1) But in result the model itslef is moving rotating instead of moving the camera around the model. What am I doing wrong here?
1
Android glowing pulsing line triangle. I would like to create a simple Android app using OpenGL ES 2.0 that shows a simple shape (like a line or triangle) that glows and pulses like the Nexus X logo in this video: http youtu.be jBKVAfZUFqI?t 59s What should I look for? So far I googled around for glowing effects and found techniques like "bloom" or "additive blending". Are they relevant here? How would I implement a pulsing glow with them? Any links to relevant works are very appreciated. Thanks! P.S. I am very familiar with the Android SDK, just started with OpenGL ES.
1
Rotating a circle on its axis. I have a circular object which I want to rotate like a fan along its own axis. I can change the rotation in any direction, i.e. dx, dy, dz, using my transformation matrix. The following is the code:

Matrix4f matrix = new Matrix4f();
matrix.setIdentity();
Matrix4f.translate(translation, matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rx), new Vector3f(1, 0, 0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(ry), new Vector3f(0, 1, 0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rz), new Vector3f(0, 0, 1), matrix, matrix);
Matrix4f.scale(new Vector3f(scale, scale, scale), matrix, matrix);

My vertex code:

vec4 worldPosition = transformationMatrix * vec4(position, 1.0);
vec4 positionRelativeToCam = viewMatrix * worldPosition;
gl_Position = projectionMatrix * positionRelativeToCam;

But it's not rotating along its own axis; it is flipping like a coin instead. What am I missing here?
1
How do you determine which object surface the user's pointing at with LWJGL? Title pretty much says it all. I'm working on a simple 'let's get used to LWJGL' project involving manipulation of a Rubik's cube, and I can't figure out how to tell which side/square the user's pointing at.
1
Shader Transmittance or Absorption. I am trying to create a transmittance or absorption shader (GLSL, HLSL, Cg, etc...) in realtime, but I can't find any good tutorial or white paper on this subject; I only find offline rendering references. Is it possible to achieve this kind of effect in realtime using standard rasterisation of a 3D mesh? How?
1
How to implement a pixelated screen transition shader? I am interested in creating a screen transition seen in a lot of retro games. The transition is just a kind of pixelated distortion that increases or decreases in granularity over time. The effect is present in Super Mario World, and can be seen recreated in this clip. Also, here is an image depicting the transition (ignore the gradual lightening of the screen) I want to achieve this by animating the uniforms of a GLSL shader. Unfortunately, I don't know how to design the shader. I know how to sample gradient textures to create various screen wipes, as well as how to sample noise textures to create simple distortion effects. But I can't figure out exactly how to create this effect. Any advice on how to set up a shader to achieve this effect?
1
Transformation before the perspective divide but after projecting perspectively. My problem is that I would like to confine a scene render to a (possibly rotated) rectangle without using glViewport(). I don't want to use it, to save, if possible, some cycles that would otherwise be spent on state switching. Also, there is the tantalizing possibility of rotating the scene render, which is not possible with glViewport... Is it possible to confine a scene render to this (possibly rotated) rectangle in this way?
1
Noisy edges, smoothing out edges between faces via fragment shader. I have a generated terrain with hexagonal geometry, as per the screenshot below. I then generate biomes, but as you can see the borders between them are really ugly and straight. To hide that hexagonal origin, I would need to smooth out the borders between biomes. This is how it looks now in wireframe with real triangular faces. What I'm aiming for is something more like this. Each vertex has an attribute that holds the biome type; I can also add special attributes to the vertices on the edge between two biomes, but I just don't seem to be able to figure out how to pull this off in shader code. Obviously noise is involved here, but how do I make it continuous across multiple faces and the entire border of multiple biomes? I'm rendering with WebGL using THREE.js.
1
Multiplatform GLSL shader validator? I'm working on a multiplatform (PC, Mac, Linux) game that uses shaders quite extensively. Since we do not have any funding, it is pretty hard to test our game on all possible hardware configurations. While our engine seems to run fine on different platforms, it's usually the slight differences in GLSL compiling that give us headaches. We have things set up such that we can test the shaders on ATI/Nvidia cards, but since the new MacBooks have Intel graphics, I'm really looking for a tool that can simply validate my GLSL shaders for different hardware without the need for yet another system. Does anyone know of such a tool?
1
Linear filter problem with diagonal lines on adjacent tiles. I am quite new to using OpenGL/GLSL. Basically, the project I am working on is my first 'real' experience with it. I do not know whether this is relevant, but I use libgdx for my project. Currently, I am trying to use the GL_LINEAR filter to draw the tiles of my (tile) map (before, I was using GL_NEAREST without problems). My map contains diagonal roads (just simple lines in this example). A partial overview example of my map (using the nearest filter without corrections): With the GL_NEAREST filter, initially I added a 1-pixel border, so the tiles were slightly overlapping and the diagonal roads were drawn without the small gaps at the edges of the tiles. This does not work with the GL_LINEAR filter, because I suppose the overlapping parts are drawn twice, resulting in a darker 'blob'. I tried to get rid of the blobs by making the horizontal lines shorter, but there is always an irregularity, either a blob or a 'decrease of line width' (as seen in the diagonal part of the line). So I removed the 1-pixel additional border and have this as a result: Now I am trying to fill in the 'gaps' by drawing two triangular fillers like this, but I can't get it completely right (this is the best I managed to get). I suppose it is virtually impossible to get this perfectly right by 'manually' drawing these triangles. Also, when looking at the third image, the lines at the 'edges' (that need to be filled) are very sharp. I suppose this is problematic..? So, my question is: how does one solve this problem? Do I need to prepare/modify my textures for this? Can I use a shader to fill the gaps automatically? Is manually filling the gaps with the triangles the way to go? Or is there some entirely different technique to have smooth diagonal lines that span multiple tiles?
1
Handling collision with LWJGL rectangles I'm testing collision with other rectangles so I can implement it into my current project. The problem is the rectangle starts at the right x and y, but I'm not sure where exactly they are. I'm pretty sure they're starting from the x and y point and the height is going either up and down. My current ortho makes the y axis start from the bottom of the screen but I'm not sure how their rectangle calculates. How can I improve this class so for each side the bottom rectangle touches, it turns a certain color to identify collision. public class a static int playerX 400 static int playerY 400 static int enemyX 100 static int enemyY 100 public static void main(String args) try Display.setDisplayMode(new DisplayMode(640, 480)) Display.setTitle("collision") Display.create() catch (LWJGLException e) e.printStackTrace() Display.destroy() System.exit(1) glMatrixMode(GL PROJECTION) glLoadIdentity() glOrtho(0, 640, 0, 480, 1, 1) glMatrixMode(GL MODELVIEW) Rectangle rl new Rectangle(playerX, playerY, 10, 20) Rectangle r2 new Rectangle(enemyX, enemyY, 200, 10) float c2 0 color while (!Display.isCloseRequested()) glClear(GL COLOR BUFFER BIT) rl.setX(playerX) rl.setY(playerY) r2.setX(enemyX) r2.setY(enemyY) if(rl.intersects(r2)) c2 1f else if(!rl.intersects(r2)) c2 0f if(Keyboard.isKeyDown(Keyboard.KEY A)) playerX 1 if(Keyboard.isKeyDown(Keyboard.KEY D)) playerX 1 if(Keyboard.isKeyDown(Keyboard.KEY W)) playerY 1 if(Keyboard.isKeyDown(Keyboard.KEY S)) playerY 1 System.out.println(playerY 100) glPolygonMode(GL FRONT AND BACK, GL LINE) glColor3f(0, 1, 0) PLAYER l glBegin(GL QUADS) glVertex2f(playerX, playerY 10 ) glVertex2f(playerX 10, playerY 10 ) glVertex2f(playerX 10, playerY 100 ) glVertex2f(playerX, playerY 100 ) glEnd() r glBegin(GL QUADS) glVertex2f(playerX 100, playerY 10 ) glVertex2f(playerX 110, playerY 10 ) glVertex2f(playerX 110, playerY 100 ) glVertex2f(playerX 100, playerY 100 ) glEnd() top glBegin(GL QUADS) glVertex2f(playerX 10, 
playerY 9 ) glVertex2f(playerX 100, playerY 9 ) glVertex2f(playerX 100, playerY 20 ) glVertex2f(playerX 10, playerY 20 ) glEnd() bot glBegin(GL QUADS) glVertex2f(playerX 10, playerY 90 ) glVertex2f(playerX 100, playerY 90 ) glVertex2f(playerX 100, playerY 101 ) glVertex2f(playerX 10, playerY 101 ) glEnd() glColor3f(c2, 1, 0) ENEMY glBegin(GL QUADS) glVertex2f(enemyX, enemyY) glVertex2f(enemyX 200, enemyY) glVertex2f(enemyX 200, enemyY 10) glVertex2f(enemyX, enemyY 10) glEnd() Display.update() Display.sync(60) Display.destroy() System.exit(0) I just made it so OpenGL set its position to the rectangle instead of the rectangle to OpenGL.
1
Shadow Map field of view. I'm implementing a shadow map algorithm with a spotlight (one that always "looks" at a given object). My issue is that for some configurations, part of the object the light is "looking at" is outside the light frustum. As a consequence, everything that is not "seen" is shadowed. To account for this I tried to increase the field of view of the "light camera", but as a result I get very blocky shadows. Am I doing something wrong? Is there a more clever way to make sure the light can see the whole object? If not, how can I solve the above artefact?
1
Modifying GLFW callbacks. I'm currently using ImGui for the GUI part of my OpenGL C++ engine, with the GLFW binding. The problem is that this binding has encapsulated the input callbacks in a global .cpp file, which makes them impossible to access from other classes. Now I could always use the regular routine to handle input without the callbacks, like

if (glfwGetKey(m_window, GLFW_KEY_A) == GLFW_PRESS) { ... }

but this won't give me the one-to-one mapping the key callback provides, as the conditional code is processed more than once. I have tried using static booleans without success, to see if I could overcome the problem by setting the boolean to true when the key is pressed in the callback and then handling the outcome elsewhere. I also know that you can override callbacks with different ones, but that would not make any sense in this case. Does anyone know how to solve this problem?
1
Differences between OpenGL 3 and OpenGL 4. I'm just getting started with game programming and I want to start learning OpenGL. I found a very good from-scratch tutorial for getting started with OpenGL 3, and I'm wondering if there is a big difference between OpenGL 3 and OpenGL 4. Or I should ask: does OpenGL 4 make OpenGL 3 obsolete, or can I start with OpenGL 3 and then move to OpenGL 4?
1
Texture coordinate VBO not being updated (OpenGL). I'm making a Minecraft-style game and I decided to add a VBO with the texture atlas coordinates of the vertices, but it is appearing all white. However, I'm following the same process as another VBO for the block positions, which works. The one that works:

glBindBuffer(GL_ARRAY_BUFFER, chunk->posOffsetVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * 65536, NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(2);
glVertexAttribDivisor(2, 1);

Then I fill it with glm::vec3s like this:

glBindBuffer(GL_ARRAY_BUFFER, adjacentChunks[1][1]->posOffsetVBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(posOffsets), &posOffsets[0]);

This is the one with the texture coordinates that doesn't work:

glBindBuffer(GL_ARRAY_BUFFER, chunk->texCoordVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec2) * 36 * 1000, NULL, GL_DYNAMIC_DRAW);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);

Then I fill it with 36 glm::vec2s for each block, so one per vertex:

glBindBuffer(GL_ARRAY_BUFFER, adjacentChunks[1][1]->texCoordVBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(texCoords), &texCoords[0]);

Is there a reason why the texture one doesn't work?
1
btRigidBody world transform and scale issue. When I create a collision shape for a rigid body (in this case a box) I use the vertices' positions and scale from the OpenGL matrix. The code looks this way:

glm::vec3 boxDim = getBoxDim(verticesPositions, scale);
collisionShape.reset(new btBoxShape(btVector3(boxDim.x, boxDim.y, boxDim.z)));

When a simulation step is done I need to update the OpenGL matrix for the object. I calculate a new model matrix this way:

btTransform transform = rigidBody->getWorldTransform();
float openglMatrix[16];
transform.getOpenGLMatrix(openglMatrix);
glm::mat4 newModelMatrix = glm::make_mat4(openglMatrix);
newModelMatrix = glm::scale(newModelMatrix, glTransform.getScale());

My code works, but I wonder if there is any possibility to have the scale already in the transform returned by getWorldTransform. Then I could omit this line of the code:

newModelMatrix = glm::scale(newModelMatrix, glTransform.getScale());
1
How to use texture arrays pre-OpenGL 4.2? I've been using texture arrays in my current project. However, I have come to the problem of OpenGL 4.2 functions not being present on current Mac software when porting over, due to only version 4.1 of OpenGL being supported. All my problems stem from a single function, glTexStorage3D, which is the only function not supported. This is how it currently works on my OpenGL 4.2 capable machine (and works wonderfully, I might add):

1.) I allocate the memory beforehand with glTexStorage3D:

glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 255, GL_RGBA8, 16, 16, maxLayerCount);

2.) Then, per image, I load it into the previously allocated memory block with the following:

glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layerCount, 16, 16, 1, GL_RGBA, GL_UNSIGNED_BYTE, image.getPixelsPtr());

3.) Success. How would I go about mimicking this functionality pre-OpenGL 4.2, i.e. without using immutable storage (glTexStorage3D)? I've tried stuffing all my images into a single horizontal image and then loading that into glTexImage3D with no success. Am I missing something? Additional note: I prefer to use texture arrays for my project in order to avoid rendering artifacts I had using a texture atlas. Their simplicity in shaders is also a big plus.
1
BRDF Incorrect specular highlights. I'm currently attempting to implement BRDF lighting, and am hitting a bit of a snag with my specular term: the specular highlights aren't rendering correctly. To make things simple, I'm using a single directional light rendering down the Z axis only, and manually specify the gloss/specularity of each of the following spheres. Glossiness increases left to right, and specularity increases bottom to top. Notice how the highlights seem shifted nearly 90 degrees upwards. Calculating the specular BRDF, I use the generic Fresnel-Schlick equation; for the NDF, the GGX (Trowbridge-Reitz) equation; and for the geometric shadowing function, the Smith-Schlick-Beckmann equation:

vec3 BRDF_Specular(vec3 F0, vec3 _1minusF0, vec3 WorldPos, vec3 Normal)
{
    vec3 LightDirection = vec3(0, 0, 1);
    vec3 ViewDirection = normalize(EyePos - WorldPos);
    vec3 HalfVec = normalize(LightDirection + ViewDirection);
    float NdotL = clamp(dot(Normal, LightDirection), 0.0, 1.0);
    float NdotV = clamp(dot(Normal, ViewDirection), 0.0, 1.0);
    float NdotH = clamp(dot(Normal, HalfVec), 0.0, 1.0);
    float VdotH = clamp(dot(ViewDirection, HalfVec), 0.0, 1.0);

    float Roughness = max(1.0f - Gloss, 0.0); // Turn gloss into roughness
    float a_2 = (Roughness * Roughness);
    float NdotH_2 = NdotH * NdotH;

    // Normal Distribution Function: GGX (Trowbridge-Reitz)
    float D0 = (NdotH_2 * (a_2 - 1.0f) + 1.0f);
    float D = a_2 / (M_PI * (D0 * D0));

    // Fresnel-Schlick
    vec3 Fs = F0 + _1minusF0 * pow(1.0 - VdotH, 5.0);

    // Geometric Shadowing: Smith-Schlick-Beckmann
    float k = a_2 * sqrt(2.0 / M_PI);
    float GV = NdotV / (NdotV * (1.0 - k) + k);
    float GL = NdotL / (NdotL * (1.0 - k) + k);
    float G = GV * GL;

    vec3 brdfspec = vec3(max((Fs * D * G) / (4.0f * NdotL * NdotV), 0.0));
    return brdfspec;
}

I've been following so many different resources and mimicked many other implementations, and even reducing it down to this basic level I'm still having this issue. I don't know why I'm not getting proper results. I'm using a deferred renderer; I derive the world position of each fragment from the depth buffer, and store my normals in view space. I don't use a "metalness" map, but instead use a single RGB specular map, hence why my Fresnel calculation uses vec3s instead of floats.
1
How to only render fragments with Z from 0 to 1 in OpenGL? I have been using OpenGL for a year now, but I just recently found out that OpenGL only keeps vertices when the absolute value of the x, y, or z coordinate is less than the absolute value of the w coordinate. Previously I had assumed that the z coordinate would have to satisfy 0 < z < w to be rendered. Is there any way to clip vertices with z less than 0, without a performance hit? I am using OpenGL 4.4.
1
opengl matrix multiplication. Can someone provide some type of example of multiplying two 4x4 matrices without using loops?

typedef struct matrix4 { float data[16]; } m4;

Can someone provide a sample of how you'd multiply two of these in the OpenGL column-major way?
1
Tilemap not rendering properly I'm making a single screen scrolling platformer (OpenGL SDL) and I've made a tilemap out of a 2D array, pre sized with variables of the LEVEL HEIGHT and LEVEL WIDTH. Each element in this 2D array corresponds to a position in the spritesheet that I'm loading from. In my render function, I have the following code to iterate through the matrix,and put each element into a 1D vertex buffer for (int y 0 y lt LEVEL HEIGHT y ) for (int x 0 x lt LEVEL WIDTH x ) if (levelData y x ! 0) float u (float)(((int)levelData y x ) SPRITE COUNT X) (float)SPRITE COUNT X float v (float)(((int)levelData y x ) SPRITE COUNT X) (float)SPRITE COUNT Y float spriteWidth 1.0f (float)SPRITE COUNT X float spriteHeight 1.0f (float)SPRITE COUNT Y float vertices TILE SIZE x, TILE SIZE y, TILE SIZE x, ( TILE SIZE y) TILE SIZE, (TILE SIZE x) TILE SIZE, ( TILE SIZE y) TILE SIZE, TILE SIZE x, TILE SIZE y, (TILE SIZE x) TILE SIZE, ( TILE SIZE y) TILE SIZE, (TILE SIZE x) TILE SIZE, TILE SIZE y GLfloat texCoords u, v, u, v (spriteHeight), u spriteWidth, v (spriteHeight), u, v, u spriteWidth, v (spriteHeight), u spriteWidth, v glBindTexture(GL TEXTURE 2D, sSheetIds 0 ) glVertexAttribPointer(program gt positionAttribute, 2, GL FLOAT, false, 0, vertices) glEnableVertexAttribArray(program gt positionAttribute) glVertexAttribPointer(program gt texCoordAttribute, 2, GL FLOAT, false, 0, texCoords) glEnableVertexAttribArray(program gt texCoordAttribute) glDrawArrays(GL TRIANGLES, 0, 6) glDisableVertexAttribArray(program gt positionAttribute) glDisableVertexAttribArray(program gt texCoordAttribute) For some reason, when I run the code, the textures are overlapping between tiles. 
Consider the following example sprite sheet with tiles. The 2D array with the indices into the sprite sheet:

int levelData[LEVEL_HEIGHT][LEVEL_WIDTH] = {
    {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1},
    {4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4},
    {4,4,4,4,4,4,4,4,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,6,6,6,6,6,6,6,6,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,6,6,6,6,6,0,0,0,0,0,0,0,0,6,6,6,6,6,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,6,6,6,6,6,6,6,6,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,6,6,6,6,6,6,6,6,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0},
    {0,5,0,0,0,0,0,6,6,6,6,6,6,6,6,0,0,0,0,0,5,0}
};

Tile map being rendered when the game is run: I've tried varying the size of each tile, but the textures just seem to scale up or down accordingly. I think the problem could be with the orthographic projection, but I'm not sure. What is causing the distortion here? Here is the full code for reference.
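For reference, here is a hedged sketch of how the index-to-UV mapping is usually done, with a half-texel inset added, since sampling exactly on a shared texel border is a common cause of neighbouring tiles bleeding into each other. The sheet dimensions and texture size below are assumptions of mine, not taken from the question:

```cpp
#include <cassert>
#include <cmath>

// Assumed sheet dimensions, in tiles.
const int SPRITE_COUNT_X = 16;
const int SPRITE_COUNT_Y = 8;

struct UVRect { float u0, v0, u1, v1; };

// Map a tile index to its UV rectangle in the sprite sheet.
UVRect tileUV(int index, float texWidth, float texHeight)
{
    float spriteW = 1.0f / SPRITE_COUNT_X;
    float spriteH = 1.0f / SPRITE_COUNT_Y;
    // Column from the remainder, row from the quotient.
    float u = (float)(index % SPRITE_COUNT_X) * spriteW;
    float v = (float)(index / SPRITE_COUNT_X) * spriteH;
    // Half-texel inset so bilinear filtering never reads the next tile.
    float du = 0.5f / texWidth;
    float dv = 0.5f / texHeight;
    return { u + du, v + dv, u + spriteW - du, v + spriteH - dv };
}
```

With nearest filtering and no mipmaps the inset may be unnecessary, but with linear filtering or atlased mipmaps it is the standard fix for exactly this kind of overlap.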
1
Orthographic Projection Issue I have a problem with my ortho matrix. The engine uses the perspective projection fine, but for some reason the ortho matrix is messed up (see screenshots below). Can anyone understand what is happening here? At the minute I am taking Projection * Transform (translate, rotate, scale) and passing it to the vertex shader to multiply the vertices by it. VIDEO: shows the same scene, rotating on the Y axis. http://youtu.be/2feiZAIM9Y0

void Matrix4f::InitOrthoProjTransform(float left, float right, float top, float bottom, float zNear, float zFar)
{
    m[0][0] = 2 / (right - left); m[0][1] = 0;                  m[0][2] = 0;                    m[0][3] = 0;
    m[1][0] = 0;                  m[1][1] = 2 / (top - bottom); m[1][2] = 0;                    m[1][3] = 0;
    m[2][0] = 0;                  m[2][1] = 0;                  m[2][2] = 1 / (zFar - zNear);   m[2][3] = 0;
    m[3][0] = -(right + left) / (right - left);
    m[3][1] = -(top + bottom) / (top - bottom);
    m[3][2] = -zNear / (zFar - zNear);
    m[3][3] = 1;
}

This is what happens with the ortho matrix: This is the perspective matrix:
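For comparison, here is a sketch of an orthographic projection following the same storage convention as the code above (translation in the m[3][*] row, which implies row vectors or a transpose on upload; whether glUniformMatrix4fv needs its transpose flag set depends on how the shader multiplies, and a mismatch there produces exactly this kind of distortion). Names and the small transform helper are my own assumptions:

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

// Maps x in [left,right] and y in [bottom,top] to [-1,1],
// and z in [zNear,zFar] to [0,1], matching the code above.
Mat4 orthoProj(float left, float right, float bottom, float top,
               float zNear, float zFar)
{
    Mat4 r = {};
    r.m[0][0] = 2.0f / (right - left);
    r.m[1][1] = 2.0f / (top - bottom);
    r.m[2][2] = 1.0f / (zFar - zNear);
    r.m[3][0] = -(right + left) / (right - left);
    r.m[3][1] = -(top + bottom) / (top - bottom);
    r.m[3][2] = -zNear / (zFar - zNear);
    r.m[3][3] = 1.0f;
    return r;
}

// Transform a point as a row vector: p' = p * M.
void transform(const Mat4& M, const float p[4], float out[4])
{
    for (int c = 0; c < 4; ++c)
        out[c] = p[0]*M.m[0][c] + p[1]*M.m[1][c]
               + p[2]*M.m[2][c] + p[3]*M.m[3][c];
}
```

Checking that the corners of the view volume land exactly on the NDC corners is a quick way to verify the matrix independently of the shader.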
1
How can I orbit a camera about its target point? I'm drawing a scene where the camera freely moves about the universe. The camera class keeps track of the view (or look-at) point, the position of the camera, and the up vector. These vectors/points are then passed into gluLookAt. Pan and zoom are nearly trivial to implement. However, I'm finding rotation about the look-at point to be much more of an issue. I want to write a function Camera.rotate that takes 2 angles, one that rotates up/down and one that rotates left/right along an imaginary sphere that is centered on the look-at point. Is there an easy way to do this? I've (briefly) read about quaternions, but I wanted to see if there was an easier solution given the relatively simple construction of my scene.
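Quaternions are indeed not required for this: since the orbit is just two angles on a sphere around the look-at point, spherical coordinates are enough. A minimal sketch (axis conventions and names are my own assumptions; adapt to your scene):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Eye position on a sphere of the given radius around the target.
// yaw rotates left/right around the world Y axis, pitch up/down.
// Clamp pitch to slightly less than +/- pi/2 in the caller so the
// camera never flips over the poles and the up vector stays valid.
Vec3 orbitPosition(Vec3 target, float radius, float yaw, float pitch)
{
    Vec3 p;
    p.x = target.x + radius * std::cos(pitch) * std::sin(yaw);
    p.y = target.y + radius * std::sin(pitch);
    p.z = target.z + radius * std::cos(pitch) * std::cos(yaw);
    return p;
}
```

Camera.rotate(dYaw, dPitch) then reduces to: accumulate the two angles, clamp pitch, recompute the eye position with this function, and pass it to gluLookAt with the unchanged look-at point.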
1
How does a shader program still work after detaching the shader objects? In the following process of shader creation, the shader objects are detached after linking the program (glLinkProgram). How does the shader program still work after the shader objects are detached and deleted? glCreateShader, glShaderSource, glCompileShader, glCreateProgram, glAttachShader, glLinkProgram, glDetachShader, glDeleteShader, use shader.
1
My 3D shader won't render the light direction correctly (OpenGL, C++, GLSL) This is what I'm doing:

vertex shader:

#version 400
layout (location = 0) in vec3 vertex_position;
layout (location = 1) in vec3 vertex_normal;
uniform mat4 ModelViewMatrix;  // model
uniform mat3 NormalMatrix;     // transpose(inverse(model))
uniform mat4 priv_mat;         // projection * view * model
out vec3 normal;
out vec3 position;
void main()
{
    normal = normalize(NormalMatrix * vertex_normal);
    position = vec3(ModelViewMatrix * vec4(vertex_position, 1.0));
    gl_Position = priv_mat * vec4(vertex_position, 1.0);
}

fragment shader:

#version 400
in vec3 normal;
in vec3 position;
uniform vec4 LightPosition;   // light position, vec4(0,0,0,1)
uniform vec3 LightIntensity;
uniform vec3 Kd;              // Diffuse reflectivity
uniform vec3 Ka;              // Ambient reflectivity
uniform vec3 Ks;              // Specular reflectivity
uniform float Shininess;      // Shininess
vec3 ads()
{
    vec3 n = normal;
    vec3 s = normalize(vec3(LightPosition) - position);
    vec3 v = normalize(vec3(-position));
    vec3 r = reflect(-s, n);
    return LightIntensity * (Ka + Kd * max(dot(s, n), 0.0)
         + Ks * pow(max(dot(r, v), 0.0), Shininess));
}
void main()
{
    gl_FragColor = vec4(ads(), 1.0);
}

But when I run it, I get this: the light doesn't just come from the wrong spot, it also doesn't change when I move the planet around, like so: The diffuse value somehow gets the wrong direction? According to the example given (page 90), the diffuse term is calculated from dot(s, n), where s is LightPosition - position, and position is (view * model * vertex_position). But as you can see, the result doesn't shade in the correct direction. Solution: The answer was matrix confusion all along. The problem was the ModelViewMatrix: it didn't have to be multiplied with view before being sent to the shader. The rest is correct. Basically I was putting "position" in the wrong space. Not multiplying with view before sending it to the shader fixes it.
1
Indexed draw vs draw array Lately I wondered which draw command is faster, drawArrays or drawElements. I know the difference between them: drawArrays just draws every vertex in the order they were provided, and drawElements draws vertices based on the provided indices. But I'm still curious which command is faster, or when I should use drawArrays instead of drawElements and vice versa.
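One concrete difference worth having in mind: with drawElements, a vertex shared by several triangles is stored and fetched once, so indexed drawing usually wins for connected meshes, while drawArrays can be simpler (and no slower) for fully disjoint geometry. A small CPU-side sketch of building an index buffer from a raw triangle list (names are mine):

```cpp
#include <cassert>
#include <vector>
#include <map>
#include <array>

using Vertex = std::array<float, 3>;

// Deduplicate a raw triangle list into a unique-vertex array plus
// indices, which is the data layout glDrawElements consumes.
void buildIndexed(const std::vector<Vertex>& raw,
                  std::vector<Vertex>& unique,
                  std::vector<unsigned>& indices)
{
    std::map<Vertex, unsigned> seen;
    for (const Vertex& v : raw) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            it = seen.emplace(v, (unsigned)unique.size()).first;
            unique.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```

Two triangles forming a quad repeat two corners, so 6 raw vertices compress to 4 unique vertices plus 6 small indices; on top of the memory saving, the GPU's post-transform cache can reuse the shader result for a repeated index.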
1
Replacing client side vertex arrays with glBufferData I need to replace client side vertex arrays in order to upgrade to a new version of OpenGL but I'm not sure what the best way to buffer data is now. What I have is a 2D sprite engine which is using batching to push as many vertices to the GPU (using fixed pipeline functions glVertexPointer etc...) but frequently the batch is only a single quad. Because of how sorting works the buffer needs to be updated every frame (or more). I need to use glBufferData glVertexAttribPointer now so what is the best way to handle this case? I can allocate the buffer for glBufferData large enough to hold the maximum size of a batch (which is more than maybe 5000 vertices) so should I just push the old vertex array to glBufferData every frame or use another method? Maybe calling glVertexPointer glBufferData has the same costs associated with them to copy memory to the GPU so I don't need to worry about it but I'd like to know since I'm still pretty new to OpenGL. CONCLUSION In my simple tests I found calling glBufferData every frame with all vertices (OpenGL 4.1) actually slightly faster than client side vertex arrays (OpenGL 2.1). Thanks.
1
Performance issues with different types of uniform buffer I've tested different buffer layouts, because I'm confused about some performance issues. These are my test buffers (OpenGL 4.1, GLSL 410, GT 750M, MacBook Pro):

Test 1: using a fixed struct buffer with unused variables inside

struct SPerFrameUniformBufferVS {
    vec4 m_UnusedButPlaned1;
    vec4 m_SometimesInUse2;
    vec4 m_UnusedButPlaned3;
    vec4 m_Used4;
    mat4 m_Used5;
    mat4 m_Used6;
};
layout(row_major, std140) uniform UPerFrameUniformBufferVS {
    SPerFrameUniformBufferVS m_PerFrameUniformBufferVS;
};

Test 2: the same buffer without the unused variables

struct SPerFrameUniformBufferVS {
    vec4 m_Used4;
    mat4 m_Used5;
    mat4 m_Used6;
};
layout(row_major, std140) uniform UPerFrameUniformBufferVS {
    SPerFrameUniformBufferVS m_PerFrameUniformBufferVS;
};

Test 3: using a fixed buffer with unused variables inside

layout(row_major, std140) uniform UPerFrameUniformBufferVS {
    vec4 m_UnusedButPlaned1VS;
    vec4 m_SometimesInUse2VS;
    vec4 m_UnusedButPlaned3VS;
    vec4 m_Used4VS;
    mat4 m_Used5VS;
    mat4 m_Used6VS;
};

Test 4: the same buffer without the unused variables

layout(row_major, std140) uniform UPerFrameUniformBufferVS {
    vec4 m_Used4VS;
    mat4 m_Used5VS;
    mat4 m_Used6VS;
};

In my case I get the following performance results: Test 1: 40 FPS, Test 2: 55 FPS, Test 3: 46 FPS, Test 4: 60 FPS. My application does 1558 draw and 2341 glGetUniformBlockIndex calls. glGetUniformBlockIndex seems to be the bottleneck, because its cost is much higher in Test 1 and Test 3.
1
When drawing a transparent object in OpenGL, why does it cut off sides of other objects? I am making an OpenGL program, and I have a lot of cubes next to each other, like this: When I am making this hole, I am just skipping those cubes; I just don't draw them:

for (int i = 0; i < NUM_OF_CUBES; i++) {
    for (int j = 0; j < NUM_OF_CUBES; j++) {
        if ((i == 3 && j == 3) || (i == 4 && j == 3) || (i == 3 && j == 4) || (i == 4 && j == 4))
            continue;
        cubes[i][j].draw(true);
    }
}

The result is: But when I make those cubes invisible by setting alpha to 0, the result is: The code is:

for (int i = 0; i < NUM_OF_CUBES; i++) {
    for (int j = 0; j < NUM_OF_CUBES; j++) {
        if ((i == 3 && j == 3) || (i == 4 && j == 3) || (i == 3 && j == 4) || (i == 4 && j == 4))
            cubes[i][j].draw(true);
        else
            cubes[i][j].draw(false);
    }
}

The true/false flag which I send to the draw function just sets alpha to 0 or 1: if true, the alpha component of glColor4f is set to 0, and if false is sent, the alpha component is set to 1. Does anyone have an idea why it looks like some sides of the other cubes are cut, like they don't exist?
1
OpenGL using quaternion to rotate camera to avoid gimbal lock I read in many sources about using a quaternion to avoid gimbal lock, but I can't apply this practically in my code. I have a camera class I want to rotate with the mouse, so I have Euler angles pitch and yaw, and a camQuat quaternion to store the rotation quaternion each frame. The initCamera method initializes the camera:

void Camera::initCamera(glm::vec3& pos, glm::vec3& center, GLfloat yaw, GLfloat pitch)
{
    view = glm::translate(view, center - pos);
    camQuat = glm::quat(glm::vec3(glm::radians(pitch), glm::radians(yaw), 0.0));
    camQuat = glm::normalize(camQuat);
    glm::mat4 rot = glm::mat4_cast(camQuat);
    view = view * rot;
    view = glm::translate(view, pos - center);
}

Then each frame I update the camera with this method:

void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c)
{
    xoffset *= this->mouseSensitivity;
    yoffset *= this->mouseSensitivity;
    view = glm::translate(view, c);
    glm::quat q(glm::vec3(glm::radians(yoffset), glm::radians(xoffset), 0.0f));
    q = glm::normalize(q);
    glm::quat temp(q);
    q = q * camQuat * glm::conjugate(q);
    camQuat = temp;
    glm::mat4 rot = glm::mat4_cast(q);
    view = view * rot;
    view = glm::translate(view, -c);
}

This makes the rotation, but with gimbal lock. What is the bug in my code, or what is the right way to use a quaternion to prevent gimbal lock? Edit: the first image is the normal FPS camera; the rotation should be around the x and y axes only (no rolling). Once I rotate around any axis (as I read, until the angle becomes 90), the gimbal lock appears and the camera begins to roll around z, like in the second image.
Solution: thanks to DMGregory's link below, this code works fine:

void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c, GLboolean constrainPitch)
{
    xoffset *= this->mouseSensitivity;
    yoffset *= this->mouseSensitivity;
    glm::quat Qx(glm::angleAxis(glm::radians(yoffset), glm::vec3(1.0f, 0.0f, 0.0f)));
    glm::quat Qy(glm::angleAxis(glm::radians(xoffset), glm::vec3(0.0f, 1.0f, 0.0f)));
    glm::mat4 rotX = glm::mat4_cast(Qx);
    glm::mat4 rotY = glm::mat4_cast(Qy);
    view = glm::translate(view, c);
    view = rotX * view;
    view = view * rotY;
    view = glm::translate(view, -c);
}

I get the quaternions for both x and y, then post-multiply Qx by the view matrix and pre-multiply Qy by the view matrix. The idea is rotating pitch globally and yaw locally to avoid unwanted creeping roll.
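The same fix can be reduced to plain quaternion math, which makes it easy to unit-test: keep yaw and pitch as separate axis-angle quaternions and always compose them in a fixed order, so roll can never accumulate. The tiny quaternion struct below is my own sketch, not GLM:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Quaternion for a rotation of `angle` radians about unit axis (ax,ay,az).
Quat angleAxis(float angle, float ax, float ay, float az)
{
    float s = std::sin(angle * 0.5f);
    return { std::cos(angle * 0.5f), ax * s, ay * s, az * s };
}

// Hamilton product a * b (apply b first, then a).
Quat mul(Quat a, Quat b)
{
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

// Rotate vector v by unit quaternion q: v' = q * (0,v) * conj(q).
void rotate(Quat q, const float v[3], float out[3])
{
    Quat p = { 0.0f, v[0], v[1], v[2] };
    Quat c = { q.w, -q.x, -q.y, -q.z };
    Quat r = mul(mul(q, p), c);
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
}
```

The camera orientation is then always mul(pitchQuat, yawQuat) built from the two accumulated angles (pitch about world X applied after yaw about world Y); because the order is fixed and no increment is multiplied onto both sides, there is no roll drift.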
1
Why does my vertex shader produce no output until I add some arbitrary value to the position attribute? I want to draw a textured quad, but nothing is drawn even though the setup should be right. This is my vertex shader:

#version 330
in vec2 position;
in vec2 texel;
out vec2 texCoord;
void main()
{
    texCoord = texel;
    gl_Position = vec4(position, 0.0, 1.0);
}

This code produces no output on screen. Now comes the crazy part. If I change it to the following,

#version 330
in vec2 position;
in vec2 texel;
in vec2 foo;
out vec2 texCoord;
void main()
{
    texCoord = texel;
    gl_Position = vec4(position + foo, 0.0, 1.0);
}

everything is as it should be. I am pretty sure that my setup is correct. I have checked the values of the vertex attributes by passing them as texCoord instead of the uv coordinates and assigning them to my fragment color output in the fragment shader. In both cases this produces the expected color gradient. The extra uninitialized attribute foo must be zero, because it's rendered as the vertex color. So the thing is, nothing is drawn unless I add a vertex attribute to position. It doesn't matter if I add texel or the mysterious uninitialized vertex attribute; the output is the same. Update:

glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)sizeof(glm::vec2));

This is my vertex struct:

struct Vertex {
    glm::vec3 pos;
    glm::vec2 tex;
    glm::vec3 normal;
    Vertex(const glm::vec3& _pos, const glm::vec2& _tex, const glm::vec3& _normal)
        : pos(_pos), tex(_tex), normal(_normal) {}
    Vertex(const Vertex&) = default;
    Vertex& operator=(const Vertex&) = default;
};

The buffer is correct. I have mapped the vertex buffer and printed it to standard output; all numbers are correct.
1
Is there a standard way to track 2D tile positions both locally and on screen? I'm building a 2D engine based on 32x32 tiles with OpenGL. My OpenGL setup draws from the top left, so Y coordinates go down the screen as they increase. Obviously this is different from a standard graph, where Y coordinates move up as they increase. I'm having trouble determining how I want to track positions for both sprites and tile objects (objects that are collections of tiles). My brain wants to set the world position as the bottom left of the object and track every object this way. The problem with this is that I would have to translate it to an on-screen position when rendering. The positive is that I could easily visualize (especially in the case of objects made of multiple tiles) how something is structured and needs to be built. Are there standard ways of doing this? Should I just suck it up and get used to positions beginning in the top left? Here are the OpenGL calls to start rendering:

// enable textures since we're going to use these for our sprites
glEnable(GL_TEXTURE_2D);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
// enable alpha blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// disable the OpenGL depth test since we're rendering 2D graphics
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);

I assume I need to change glOrtho(0, WIDTH, HEIGHT, 0, -1, 1) to glOrtho(0, WIDTH, 0, HEIGHT, -1, 1).
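Both options are workable: swapping the bottom and top arguments of glOrtho as suggested makes Y grow upward everywhere, or the world can stay bottom-left/Y-up with a one-line flip at draw time. A hedged sketch of the latter, assuming 32px tiles and pixel-space rendering (names are mine):

```cpp
#include <cassert>

const int TILE_SIZE = 32;

// Convert a tile's world-space Y (bottom edge, Y-up) into the
// screen-space Y of its top-left corner (Y-down, origin top left).
int screenYFromWorldY(int worldY, int screenHeight)
{
    // worldY + TILE_SIZE is the tile's top edge in Y-up coordinates;
    // subtracting from screenHeight flips into top-left Y-down space.
    return screenHeight - (worldY + TILE_SIZE);
}
```

With a 480px-tall window, the bottom row of tiles (worldY = 0) draws at screen Y 448 and occupies the last 32 pixel rows, which matches the bottom-left mental model while keeping the top-left projection untouched.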
1
Blurring part of the screen optimisation I'm developing a 3D menu and sometimes I need to blur only part of the screen. I use forward rendering. I create a frame buffer object with 3 color attachments. Rendering looks like this: bind the FBO; render the objects to be blurred into the first texture; bind the first texture; perform a vertical blur and save the result in the second texture; bind the second texture; perform a horizontal blur and save the result in the third texture; unbind the FBO (render to the default FBO); bind the third texture; render the third texture; render the objects which are not blurred. I can use subroutines to reduce overhead when switching shaders. I also found the following article. Do you have any other ideas how to optimize the above rendering?
1
lwjgl and slick util text over textured quad So, I load a TrueTypeFont like this:

private TrueTypeFont trueTypeFont;

try {
    InputStream inputStream = ResourceLoader.getResourceAsStream("assets/fonts/main.ttf");
    Font awtFont2 = Font.createFont(Font.TRUETYPE_FONT, inputStream);
    awtFont2 = awtFont2.deriveFont(24f); // set font size
    trueTypeFont = new TrueTypeFont(awtFont2, true);
} catch (Exception e) {
    e.printStackTrace();
}

After that, I draw a textured quad as one usually would, and then draw trueTypeFont.drawString(this.x, this.y, this.text, Color.white). What this gives, however, is far from text; this is what it does, and the black is supposed to be text... How do I fix this?
1
OpenGL Shadow Mapping from directional light I have read this tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/ I wonder, is this currently the best technique for generating shadows in an arbitrary 3D scene? Is it the way modern games implement directional light shadows, or are they using some better way of calculating shadows on the fly? What I mean is: if you know your geometry so well that you know exactly every impact a shadow can have on an object's colors, it must be better to use that information instead of rendering a shadow map? For example, if we have a 3D grid world with all perfect cubes at perfect offsets (think Minecraft) and the directional light is at exactly 45 degrees in all dimensions, that leaves us with a situation where a cube closer to the light will also cast shadow on the cubes at position offset 1, 1, 1 from that cube, and so on. The shadows always go diagonally and put the sides of other equally sized cubes in one of three states: all shadow, all light, or 50% in shadow. If we know our geometry that well, then using that information must be much faster than rendering shadow maps, right? Even when most of such calculation is done on the CPU instead of the GPU.
1
glfw resizing causing image scaling I have a quad rendered that extends from the top left of the window to the width of the window and is 64 pixels high. When I resize the window from its initial size, the quad and text scale proportionately bigger or smaller, the same way Photoshop can scale an image. What I'm seeking, on a basic level, is that regardless of how I resize the window, everything drawn remains the same size. In the image below, the right side is the initial size and the left side is what happens when I drag the window to a smaller size. The red bar and text scale with it. This is how I'm handling my resizing:

glfwSetWindowSizeCallback(pWindow, WindowSizeCallback); // initialized after context

void WindowSizeCallback(GLFWwindow* window, int width, int height)
{
    glfwSetWindowSize(window, width, height);
}
1
Game (X Plane) boot startup time performance I use X Plane for my question but it also concerns probably every other flight simulator or simulation game in general. When developing a plugin what bothers me most is the startup time of the application to test my plugin functionality so I was wondering how I can improve the startup speed of the simulator. First I removed all additional scenery and reduced the loaded files to a minimum which gave me a startup time of 34 seconds which is quite fast already. To improve the time even further I thought it would make sense to run the whole application from memory and I installed ubuntu 64 bit and created a 2GB tempfs filesystem in memory (ram disk). When starting x plane from this memory file system it still takes 30 seconds. Can anyone explain what the application does during startup which needs so much processing time? I assume that the graphic bitmaps for the object and the environment are compressed on disk and therefore they have to be decompressed in memory first before they can be used in opengl. This would explain the time spent on startup. If it is true that the bitmaps need to be decompressed first would it be possible to improve the time by using multiple cores? If the application is multithreaded and there are four cores would the time be divided by four or does this processing happen on the GPU? Any explanation which helps me to better understand the startup process of a computer game is really appreciated because I'm interested in improving the overall performance of the system and therefore I need a better understanding of the bottlenecks (RAM, CPU, GPU, Disks (SSD Raid), OS, libraries, ...).
1
Does one need normals for a strictly 2D game? I'm starting to learn OpenGL by creating a pure 2D game. I have to decide on the format of the vertices. Do I need a normal component, or is it not needed for a 2D game? My gut feeling says I won't need it since everything is flat, but perhaps I need it for some shader or other thing I don't see yet.
1
Bad FPS for smaller size (OpenGL ES with SDL) If you saw my other question, well, there is still a little problem Click here to watch on youtube Basically, the frame rate is very bad on the actual device, where for some reason the animation is scaled (it looks like the left side on the video). It is quite fast on the simulator where it is not scaled (right side). For a test, I submitted this new changeset that simply hard codes the smaller size (in SDL CreateWindow() and in glViewport()), and as you see in the video, now it is slow even in the simulator (left side shows the small size, right side shows the original size otherwise the code is the same). I'm clueless why it's soooo slow with a smaller galaxy, in fact it should be FASTER. The question is not about general speed optimization like reducing screen resolution or joining glBegin(GL POINTS) glEnd() blocks. Update For a test, I simply reduced the screen size to 320x480 and compiled it to OS X desktop got the same speed as with 800x800, so this size specific slow down is on iOS only. Will post this on the SDL mailing list now.
1
How do I draw an animated object in OpenGL ES? I have a VBO, which I initialise like this (just an example):

- (void)setupVBOs {
    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);

    GLuint indexBuffer;
    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
}

As you can see, I'm using GL_STATIC_DRAW, which is good for visually unchanging objects (not counting transformations such as translation). How do I draw animated objects though? I mean things that might be changed by user interaction. This video is a good example; it is obvious OpenGL is being used, as the vertices are manipulated by gestures. How is it done? By changing the x/y/z coordinates on every touch? Are they using GL_DYNAMIC_DRAW? Is this hard?
1
Why doesn't glBindVertexArray work in this case? From my understanding of what glBindVertexArray does and how it works, the following code should work fine:

First init:

glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);

glGenBuffers(1, &_buffer);
glBindBuffer(GL_ARRAY_BUFFER, _buffer);
glBufferData(GL_ARRAY_BUFFER, kMaxDrawingVerticesCount * sizeof(GLKVector3), NULL, GL_DYNAMIC_DRAW);

glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, kMaxDrawingVerticesCount * 1.5 * sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);

glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLKVector3), BUFFER_OFFSET(0));

glBindVertexArrayOES(0);

And later, to add new geometry:

glBindVertexArrayOES(_vertexArray);
glBufferSubData(GL_ARRAY_BUFFER, 0, (GLintptr)data.vertices.length, data.vertices.bytes);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, (GLintptr)data.indices.length, data.indices.bytes);
glBindVertexArrayOES(0);

However, it doesn't work (there is screen output, but it looks as if the buffers were swapped with each other). Isn't binding the vertex array enough for the index buffer to get bound? Because if I do this:

glBindVertexArrayOES(_vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, _buffer);               // <- note this line
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);  // <- note this line
glBufferSubData(GL_ARRAY_BUFFER, 0, (GLintptr)data.vertices.length, data.vertices.bytes);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, (GLintptr)data.indices.length, data.indices.bytes);
glBindVertexArrayOES(0);

everything works and looks as expected. I find it strange, since the vertex array should have taken care of the buffer binding. Otherwise, what's the purpose of having a vertex array if you still have to bind all the buffers?
1
Getting crash on glDrawElements Here is the code where I initialize the VAO, the vertex attributes (also the main VBO) and the EBO. (I'm using my own wrapper class for these "data buffers" to hide some of the API features and make life easier, so I don't think the problem is in the generic class, as it was working without problems.)

void initVAOManager(const bool& ebo)
{
    if (_vaoID == 0) glGenVertexArrays(1, &_vaoID);
    glBindVertexArray(_vaoID);

    // Here is the main data buffer (positions, colors, UVs).
    // If it doesn't exist, a new one is created.
    if (!_mainBuffer) _mainBuffer = new DataBuffer<T>(GL_ARRAY_BUFFER);
    _mainBuffer->bindBuffer();

    if (!_eboBuffer && ebo) {
        _eboBuffer = new DataBuffer<eboData>(GL_ELEMENT_ARRAY_BUFFER);
        _eboBuffer->bindBuffer();
    }

    // This is the position
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
    // Color attrib pointer
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (void*)offsetof(Vertex, color));
    // UV
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_TRUE, sizeof(Vertex), (void*)offsetof(Vertex, uv));

    _mainBuffer->unbindBuffer();
    if (ebo) _eboBuffer->unbindBuffer();
    glBindVertexArray(0);
}

Then the render function (don't mind the for loop, as I want to render multiple objects from the batch in one function):

void renderBatchNormal()
{
    uploadData();
    glBindVertexArray(_VAOManager->getVAO());
    for (std::size_t i = 0; i < _DATA.size(); i++) {
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    }
    glBindVertexArray(0);
    clearData();
}

The uploadData function sends data from the vectors to their buffers. I can post it too, but as I'm using my generic wrapper and it worked before with normal drawing, I assume there is no problem there.
And finally the eboData class, if anyone wondered (basically just a blank class with an array of 6 indices):

class eboData {
public:
    GLuint indices[6];
};

However, this is causing crashes on the line where I try to execute the glDrawElements command. I read that it can be caused by binding the ELEMENT buffer with no VAO bound, but as you can see from the code, I'm doing it right (at least I think so). However, if I change the offending line as follows:

std::vector<eboData> eboVector;
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, eboVector.data());

the code works (the problem is also that I don't know how to render the second item in the buffer, as it is showing only the first one). Do you have any ideas what could cause this crash? PS: glGetError() returns 0.
1
Transparency in opengl texture with alpha, color from background I have to draw a texture with transparent areas onto a square, but I don't want the transparent areas to take their color from the polygon; they should take it from the background. Do you have any good tutorial to achieve this? EDIT:

glColor4f(1, 0, 0, 1);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(-0.5, -0.5, 1);
glTexCoord2f(1, 0); glVertex3f(1, 1, 1);
glTexCoord2f(1, 1); glVertex3f(0.5, 0.5, 1);
glTexCoord2f(0, 1); glVertex3f(-0.5, 0.5, 1);
glEnd();

It just takes the color from the polygon. How do I make this polygon transparent?
1
File format for animated scene I've got a custom OpenGL based rendering engine and I'd like to add support for cinema type scene animation. The artist that is helping me uses primarily 3DSMax. I'd like a file format for exporting and importing this data. I'm also in need of a file format for skeletal animation data, which may have an impact here. I've been looking at MAXScript to manually export this stuff, which would buy me the most flexibility, but I have virtually no experience with 3DSMax itself, so I get a little lost when it comes to terminology. So I'd like to know what file formats exist for animated scene data, and whether they are appropriate for my use (my fear is that they will be way too broad for my fairly simple needs.) The way I view animated scene data is basically a bunch of references to animated models with keyframe based matrices describing their orientation over time. And probably some special camera stuff to handle perspective. I might also want some event type stuff for adding removing objects. Is this a sane concept?
1
Render on other render targets starting from one already rendered on I have to perform a double-pass convolution on a texture that is actually the color attachment of another render target, and store it in the color attachment of ANOTHER render target. This must be done multiple times, but using the same texture as the starting point. What I do now is (a bit abstracted, but what I have abstracted is guaranteed to work on its own):

renderOnRT(firstTarget); // This is working.

for each other RT currRT:

glBindFramebuffer(GL_FRAMEBUFFER, currRT.frameBufferID);

programX.use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, firstTarget.colorAttachmentID);
programX.setUniform1i("colourTexture", 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, firstTarget.depthAttachmentID);
programX.setUniform1i("depthTexture", 1);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffID); // quadBuffID is a VBO for a screen-aligned quad. It is fine.
programX.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_QUADS, 0, 4);

programY.use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, currRT.colorAttachmentID); // The second pass is done on the previous pass
programY.setUniform1i("colourTexture", 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, currRT.depthAttachmentID);
programY.setUniform1i("depthTexture", 1);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffID);
programY.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_QUADS, 0, 4);

The problem is that I end up with black textures and not the wanted result. The GLSL programs (programX, programY) work fine, already tested on single targets. Is there something stupid I am missing? Even a hint is much appreciated, thanks!
1
Wrong faces culled in OpenGL when drawing a rectangular prism I'm trying to learn OpenGL. I wrote some code for building a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But I keep getting back faces drawn even when viewing from the front, and sometimes sides vanish when rotating. Can someone point me in the right direction?

glPolygonMode(GL_FRONT, GL_LINE); // draw wireframe polygons
glColor3f(0, 1, 0);               // set color green
glCullFace(GL_BACK);              // don't draw back faces
glEnable(GL_CULL_FACE);           // don't draw back faces
glTranslatef(-10, -1, 0);         // position
glBegin(GL_QUADS);
// face 1
glVertex3f(0, -1, 0);
glVertex3f(0, -1, 2);
glVertex3f(2, -1, 2);
glVertex3f(2, -1, 0);
// face 2
glVertex3f(2, -1, 2);
glVertex3f(2, -1, 0);
glVertex3f(2, 5, 0);
glVertex3f(2, 5, 2);
// face 3
glVertex3f(0, 5, 0);
glVertex3f(0, 5, 2);
glVertex3f(2, 5, 2);
glVertex3f(2, 5, 0);
// face 4
glVertex3f(0, -1, 2);
glVertex3f(2, -1, 2);
glVertex3f(2, 5, 2);
glVertex3f(0, 5, 2);
// face 5
glVertex3f(0, -1, 2);
glVertex3f(0, -1, 0);
glVertex3f(0, 5, 0);
glVertex3f(0, 5, 2);
// face 6
glVertex3f(0, -1, 0);
glVertex3f(2, -1, 0);
glVertex3f(2, 5, 0);
glVertex3f(0, 5, 0);
glEnd();
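The usual cause of symptoms like this is inconsistent winding order: with culling on, every face must wind counter-clockwise when viewed from outside the solid (given the default glFrontFace(GL_CCW)). A hedged way to audit each quad offline is to compute its normal from the first three vertices and check that it points out of the prism; the helper below is my own sketch, not taken from the question:

```cpp
#include <cassert>

struct V3 { float x, y, z; };

// Normal of the triangle (a,b,c) via the cross product of its edges.
// For a planar quad, the first three vertices determine the winding:
// if the normal points out of the solid, the face is CCW from outside.
V3 faceNormal(V3 a, V3 b, V3 c)
{
    V3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
    V3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
    return { e1.y * e2.z - e1.z * e2.y,
             e1.z * e2.x - e1.x * e2.z,
             e1.x * e2.y - e1.y * e2.x };
}
```

Running every face of the prism through this and checking the normal against the face's known outward direction quickly reveals which quads need their vertex order reversed.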
1
OpenGL 3.3 tilemap rendering techniques? I'm trying to draw regular 2D tiles to the screen from a single texture, but there are a few ways to accomplish this and I don't feel like I know enough about rendering to assess the trade-offs properly. First of all, I have potentially a lot of tiles to render. Each tile is 8-32px (depending on zoom factor), and on a Full HD monitor that's over half a million tiles (and over a million triangles) per layer, and I have two layers. While I'm not going to go out of my way to support massive screens that are zoomed all the way out, I do want a solid, reliable experience for players with screens up to 2048x2048px at any zoom, which ends up being about a quarter of a million triangles (before non-tile sprites, effects, and HUD elements are added on top). I've been getting a lot of conflicting advice on what is good and what is bad to do with regard to tilemapping... What's been the nicest method so far to work with is instancing 6 points (two triangles in a square configuration) and giving each instance a position and texture position. But now I'm told instancing with fewer than 100 vertices creates a lot of overhead... Using a geometry shader to produce vertices is also crazy expensive and impracticable for hundreds of thousands of triangles, apparently. Sometimes I'm told triangle strips are the way to go, but you have to add in messy degenerate triangles when you have tilemap gaps. Other times plain triangles are said to be better. Do all roads lead back to a static baked-in array of all corners of each tile for efficiency? Also, when you move left, right, up, or down, you need to add in more tiles at the side... but how? Is there a way to do this that isn't messy and complicated?
Questions related to rendering to texture and rendering to buffer (OpenGL) I am experimenting with off-screen rendering. I understand there are two solutions to this: rendering to texture (you attach a texture to one of the FBO's attachment points, which can be a color, depth, etc. attachment) or using a render buffer. Question 1: I first experimented with using render buffers. I have a FBO and 2 RBOs, one for color (attached to COLOR0) and one for depth (attached to DEPTH). When I want to visualise the content of what I rendered to the render buffer, I use the glBlitFramebuffer function to copy the content from the FBO to the window frame buffer (index 0):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_LINEAR);

First, I would like to know if this is the best way of passing the image from a render buffer back to the window buffer. Question 2: The method I described above works, but I can only see the "color" buffer of the FBO, not the depth buffer (I would also like to visualise the depth buffer). So I was wondering whether this is possible using this approach. I tried to use the index of the depth render buffer in place of fboId in glBindFramebuffer but it didn't change anything. Can I make it work without using the render-to-texture approach? Question 3: My end goal is to render two images using an off-screen approach and blend them together. The result of the mix between the two rendered images is what I want to display to the screen. To keep things fast, I understand this can be done in hardware. What's the best way of doing this? Can it be done using the render buffer approach, or do I need to go the render-to-texture route (i.e. render to two textures and then blend these textures with a fragment shader in a final pass in which I render a quad stretched over the surface of the screen)?
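A side note relevant to Question 2: even once a depth buffer is read back or sampled, raw perspective depth values are non-linear and tend to look almost uniformly white. A common debugging trick (sketched here on the CPU; the same formula is often used in a shader) is to convert the stored value back to eye-space distance:

```cpp
#include <cassert>
#include <cmath>

// Convert a depth-buffer sample d in [0,1] back to eye-space distance for a
// standard perspective projection with the given near/far planes. Useful when
// visualising a depth attachment, whose raw values cluster near 1.0.
float linearizeDepth(float d, float zNear, float zFar) {
    float ndc = d * 2.0f - 1.0f;  // back to NDC range [-1, 1]
    return (2.0f * zNear * zFar) / (zFar + zNear - ndc * (zFar - zNear));
}
```

At d = 0 this returns the near-plane distance and at d = 1 the far-plane distance, so remapping the result into [0,1] gives a readable grayscale image.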
Deferred shadow mapping Question: What am I doing wrong in the CalcShadowFactor method? It looks like the depth check is not working correctly. Body: I'm using deferred rendering in my engine and I have generated the following shadow map, which is drawn by the following fragment shader:

void main()
{
    gl_FragColor = vec4(texture(gColorMap, TexCoord0).x);
}

The position texture is generated like this:

WorldPos0 = (gWorld * vec4(Position, 1.0)).xyz;

where gWorld is the transformation of the object being drawn (translation * rotation * scale). Finally, in the light pass, I want to generate the shadow produced by this spotlight with the following code:

float CalcShadowFactor(vec3 WorldPos)
{
    vec4 ShadowCoord = gLightWVP * vec4(WorldPos, 1);
    ShadowCoord /= ShadowCoord.w;
    vec2 UVCoords;
    UVCoords.x = 0.5 * ShadowCoord.x + 0.5;
    UVCoords.y = 0.5 * ShadowCoord.y + 0.5;
    float z = 0.5 * ShadowCoord.z + 0.5;
    float Depth = texture(gShadowMap, UVCoords).x;
    if (Depth < z + 0.00001)
        return 0.0;
    else
        return 1.0;
}

It returns a float which multiplies the color output. I know I'm just returning 0 to set the color completely transparent; it is intentional. The variables used are: gLightWVP: the projection-view matrix of a camera placed at the light (Projection * LightCameraRotation * LightCameraTranslation). WorldPos: the position read from the position texture: vec3 WorldPos = texture2D(gPositionMap, TexCoord0).xyz. gShadowMap: the shadow map texture sampler. This is what is displayed (notice how the spotlight looks at the first image). The engine passes are:
Shadow pass: render the scene for every light using an alternative camera placed at the light's position with its orientation. It generates the shadow maps for every light. Currently I just have one shadow map for that spotlight.
Render pass: render the whole scene using the main camera. It generates the position map, normal map, diffuse map and specular map.
Light pass: use all the generated textures to apply the illumination.
Here is where the shadow map is checked to generate the shadows. EDIT: I'll also leave here the shadow map texture definition:

glGenTextures(1, &m_shadowMap);
glBindTexture(GL_TEXTURE_2D, m_shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);
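One way to debug a function like CalcShadowFactor is to replicate its projective step on the CPU and feed it a few known points. This mirror of the divide-and-bias math (a sketch, not taken from the question's engine) takes the clip-space coordinates that gLightWVP produces and returns the shadow-map UV plus the [0,1] depth that gets compared against the map:

```cpp
#include <cassert>

// CPU-side mirror of the projective step in a shadow lookup: perspective
// divide into NDC, then remap [-1,1] into [0,1] for UV and depth.
struct ShadowSample { float u, v, z; };

ShadowSample toShadowUV(float cx, float cy, float cz, float cw) {
    float x = cx / cw, y = cy / cw, z = cz / cw;  // perspective divide -> NDC
    return ShadowSample{0.5f * x + 0.5f, 0.5f * y + 0.5f, 0.5f * z + 0.5f};
}
```

Feeding a world point that is known to be visible from the light should yield UVs inside [0,1]; if it doesn't, the problem is in the light matrix rather than the depth comparison.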
Using gl3w and the Win32 API, without GLUT I'm still a beginner. The examples on skaslev's site require us to use GLUT as an interface to create a window. For some reason, I'm required to create a window purely with the Win32 API, but I am not sure how to integrate it together with gl3w.c. Is there a guide or tutorial on how to do that? I've been searching for hours, but seem to be out of luck :(
How do I pick tiles from an isometric map with slopes? I'm looking for a way to convert mouse screen coordinates to isometric map coordinates, with the addition that the world has slopes and cliffs, and I have to be able to tell which quadrant of the tile is being pointed at by the mouse. The textures are handled by OpenGL, so I can't (easily) pick directly based on the tile sprite. I've found several similar solutions (e.g. "Isometric Tiles Math", XNA Resources, and "Mouse Maps for Isometric Height Maps", with the latter looking most promising), but none of them seem to quite fit my requirements. My tiles look like this: What algorithm or technique could I use here?
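For reference, the flat-ground part of this conversion is closed-form. A sketch for a standard 2:1 diamond layout, ignoring height for the moment (slopes and cliffs then need a correction pass that tests candidate tiles from the highest possible elevation downward; tileW/tileH are the diamond's pixel extents, and all names here are assumptions, not from the question):

```cpp
#include <cassert>

// Screen -> map for diamond isometric tiles where
//   screenX = (mapX - mapY) * tileW / 2
//   screenY = (mapX + mapY) * tileH / 2
// Inverting that pair of equations gives:
void screenToIso(float sx, float sy, float tileW, float tileH,
                 float& mapX, float& mapY) {
    mapX = sy / tileH + sx / tileW;
    mapY = sy / tileH - sx / tileW;
}
```

The integer parts of mapX/mapY give the tile; the fractional parts tell you which quadrant of the diamond the cursor is in, which is the second half of what the question asks for.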
Better texture downscaling at 9xMSAA than at 8xMSAA I'm drawing my game to a render-to-texture FBO with multisampling. The multisampling works as expected, except for one thing: No MSAA / 8xMSAA / 9xMSAA (left is an untextured quad, right is a textured quad; the texture is much larger than the quad, and the cross in the center of the texture is 2 line primitives). As you can see, textures seem to suddenly have better downscaling at 9 samples. I'm trying to figure out why this is, because I'd prefer to have this quality of downscaling all the time (even if I'm at, say, 4 samples). The artefacting in the first two images is pretty ugly. I'm not using any mipmapping here, but I did try that as an alternative to this, and it does look much better. However, mipmapping looks a bit blurry compared to the third image, and is still not preferable in comparison. This was rendered with an NVIDIA GeForce 980M, and it appeared to have the same behavior on other NVIDIA cards.
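Context that may help frame this question: MSAA only adds coverage samples at triangle edges; texels in a triangle's interior are still sampled once per pixel, which is why a heavily minified texture aliases regardless of sample count. The amount of minification, and hence the mip level GL would pick if mipmapping were on, can be estimated roughly like this (a simplified model, not the exact per-pixel derivative computation the GPU performs):

```cpp
#include <cassert>
#include <cmath>

// For a texture 'texels' wide drawn across 'pixels' on screen, the base LOD
// is about log2(texels / pixels). A value of 2 means each screen pixel covers
// roughly a 4x4 block of texels, i.e. heavy undersampling without mipmaps.
int approxMipLevel(float texels, float pixels) {
    float lod = std::log2(texels / pixels);
    return lod < 0.0f ? 0 : (int)std::floor(lod + 0.5f);
}
```

If this returns anything above 0 for the quad in the screenshots, the aliasing in the first two images is expected, and the 9x result is likely the driver falling back to a supersampling-like mode rather than true MSAA.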
GL_SPOT_CUTOFF not working properly I'm new to OpenGL. I'm studying OpenGL 2.1 and I'm trying to make a little program to test the GL_SPOT_CUTOFF property, but when I set a value in the range 0.0-90.0, the light doesn't work and everything is dark. The code:

void lightInit(void)
{
    GLfloat light0Position[] = { 0.0, 0.0, 2.0, 1.0 };
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);
    glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
}

void reshapeFunc(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (w < h)
        glOrtho(-4, 4, -4 * (GLfloat)h / (GLfloat)w, 4 * (GLfloat)h / (GLfloat)w, -4.0, 4.0);
    else
        glOrtho(-4 * (GLfloat)w / (GLfloat)h, 4 * (GLfloat)w / (GLfloat)h, -4, 4, -4, 4.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, 0, 0, 0, -1, 0, 1, 0);
}

void displayFunc(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // center sphere
    glutSolidSphere(1, 100, 100);
    // right sphere
    glPushMatrix();
    glTranslatef(3, 0, 0);
    glutSolidSphere(1, 100, 100);
    glPopMatrix();
    // left sphere
    glPushMatrix();
    glTranslatef(-3, 0, 0);
    glutSolidSphere(1, 100, 100);
    glPopMatrix();
    glutSwapBuffers();
}

void keyboardFunc(unsigned char key, int x, int y)
{
    if (key == 27)
        exit(EXIT_SUCCESS);
}

int main(int argc, char **argv)
{
    // freeglut init and window creation
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(600, 500);
    glutInitWindowPosition(300, 100);
    glutCreateWindow("OpenGL");
    // glew init and error check
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
        return 1;
    }
    fprintf(stdout, "GLEW %s\n\n", glewGetString(GLEW_VERSION));
    // general settings
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    // light settings
    lightInit();
    // callback functions
    glutDisplayFunc(displayFunc);
    glutReshapeFunc(reshapeFunc);
    glutKeyboardFunc(keyboardFunc);
    glutMainLoop();
    return 0;
}

This code produces this image: If I delete glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0), the next image is produced: Is there some kind of bug?
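For reasoning about this setup: a fixed-function spotlight only lights a vertex when the angle between GL_SPOT_DIRECTION (which the code above never sets, leaving the default (0, 0, -1)) and the light-to-vertex vector is within GL_SPOT_CUTOFF. A tiny CPU-side check mirroring that spec rule (an illustration, not the full GL lighting equation) shows which of the three spheres can fall inside a 45-degree cone:

```cpp
#include <cassert>
#include <cmath>

// True when point (px,py,pz) lies inside the spotlight cone of a light at
// (lx,ly,lz) pointing along (dx,dy,dz) with the given half-angle cutoff.
bool insideSpotCone(float lx, float ly, float lz,   // light position
                    float dx, float dy, float dz,   // spot direction
                    float px, float py, float pz,   // vertex position
                    float cutoffDeg) {
    float vx = px - lx, vy = py - ly, vz = pz - lz;
    float vlen = std::sqrt(vx * vx + vy * vy + vz * vz);
    float dlen = std::sqrt(dx * dx + dy * dy + dz * dz);
    float cosAngle = (vx * dx + vy * dy + vz * dz) / (vlen * dlen);
    return cosAngle >= std::cos(cutoffDeg * 3.14159265f / 180.0f);
}
```

With the light at (0, 0, 2) pointing down -z, the center sphere is inside the cone but the side spheres at x = ±3 are not, so even a working spotlight would leave them dark.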
Know if you're fully utilizing the GPU I render 17,000 VAOs each frame: 2,840,386 triangles, only applying a texture, nothing else. I have three computers, and the performance across them is not as expected. Cheap laptop (i3 4010U and Intel HD 4400) runs at 20 fps. Desktop1 (FX 9590 and GT 630) runs at 65 fps. Desktop2 (i7 5820k and GTX 970) runs at 105 fps. As the game is in a pretty early stage, the only CPU task each frame is to loop through a 3-dimensional map containing VAO ids and then render them. When I look at the CPU benchmarks and the GPU benchmarks of my systems, I would expect a much bigger difference between Desktop2 and the two other systems. The GTX 970's 3D score (8662) is more than 15 times better than the Intel HD 4400's (546), though it only runs 5 times as fast. Even more ridiculous, the GTX 970's 3D score is 11 times better than the GT 630's (797), but it runs only 1.6 times faster. Even without taking the benchmarks into consideration, I would still expect a bigger difference. When I compare the performance differences in other games (like GTA V) across my systems, there's a much bigger difference. Therefore I know the problem is my game, and not a hardware or driver related issue. Am I right that I should expect larger differences between the different systems? If yes, how may I find the issue and how can I solve it? Edit: I've read that a GTX 590 is capable of 3.2 billion triangles/second, whereas my game is only able to squeeze 300 million out of a GTX 970. Edit 2: I've tested my Desktop1 (GT 630), which runs 1080p at 65 fps, and if I change the resolution to 1x1, I only get a performance boost of 20 fps (85 fps).
Painter's algorithm vs. 3D rendering with Z-buffer when drawing 2D sprites I'm currently developing a tile-based engine. I want it to look like your average old-school tile-based RPG like "Zelda: A Link to the Past" (orthographic projection, square tile textures, textures overlapping each other, alpha channels, etc.). The twist is that the world is internally 3D, meaning that every tile has an elevation and entities move in a 3D space. The reason for this is beyond the scope of this question, but I'm struggling to compare the following two ideas for the rendering. I'm using OpenGL with libGDX in Java but wouldn't mind writing my own code to interface with OpenGL. That's also why I didn't tag it. A screenshot of "Zelda: A Link to the Past" in case you don't know what it looks like. Painter's algorithm: Disable the Z-buffer and use the painter's algorithm to draw from back to front in the appropriate order, with some overdraw and a lot of texture rebinding. I know a texture atlas can minimize the texture rebinding, but the engine is supposed to act more like a sandbox, so I don't want to rely too heavily on assumptions like an efficient texture atlas that I might not have all the time. Pro: Easy to implement and very straightforward. Con: I've no idea how big the impact will be, since almost every sprite will cause a texture rebinding, and I don't know how to predict the performance costs of that many texture bind calls. Full-blown 3D: Rotate the camera 45 degrees up around the x-axis with an orthographic projection matrix. Then use the 3D information from the game world to draw the tile textures with billboarding at the real position in the world's 3D space. Same for entities and such. This way I can gather all sprites and then render them sorted by their texture, which means one texture bind per texture and reduced overdraw thanks to the Z-buffer. Pro: I can use a freecam to debug render problems. I assume it's significantly faster than the painter's algorithm.
Con: Some sprites might share the same place, which could lead to Z-fighting. I'm thinking about how to use multiple layers on a tile. For instance, you may have a bridge tile with a railing, where the railing always overlaps an entity on the bridge, but the floor of it is always overlapped by the entity. So I would have to add a little offset to different layers in the 3D space. I'm not really sure how that would work out and, to be honest, had a hard time figuring out how old games did it, so maybe I'm overthinking it. I guess I have a very rough idea of how each method will work out, but I don't know how much the texture rebinding (or sorting and preparing the sprites for the 3D approach) affects performance, and while it's easy to find information regarding large 3D scenes, I found it difficult to get information about 2D scenes with way fewer polygons. I hope this question is not too much a subject of personal preference. I know that premature optimization is the root of all evil, but since the decision affects almost the entire rendering process I don't want to make the wrong call. I deliberately didn't mention any OpenGL functions, mainly because I've mostly used the libGDX API, which hides the function calls behind its wrappers, but generally I know how the internals work and just got into the internals and how the OpenGL API works. I don't think specific OpenGL calls matter for the question, but feel free to use them in your answer; I'll just look them up.
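The Z-fighting concern in the bridge-railing example is commonly handled by deriving each sprite's depth from its world position plus a tiny per-layer bias, so sprites on the same tile still order deterministically. A minimal sketch of that idea (the bias value and function shape are assumptions, not from the question):

```cpp
#include <cassert>

// Depth for a billboarded sprite: farther-down-the-map sprites draw behind,
// elevation lifts a sprite forward, and a small per-layer bias separates
// things stacked on the same tile (floor = layer 0, railing = layer 1, ...).
float spriteDepth(float worldY, float elevation, int layer) {
    const float layerBias = 0.001f;  // must stay below the tile spacing
    return worldY - elevation + layer * layerBias;
}
```

An entity on the bridge would then use a layer between the bridge floor and the railing, reproducing the "floor behind entity behind railing" ordering without any per-frame sorting.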
Linear search vs. octree (frustum cull) I am wondering whether I should look into implementing an octree of some kind. I have a very simple game which consists of a 3D plane for the floor. There are multiple objects scattered around on the ground, each with an AABB in world space. Currently I just loop through the list of all these objects and check if each bounding box intersects with the frustum. It works great, but I am wondering whether an octree would be a good investment. I only have at most 512 of these objects on the map, and they all have bounding boxes. I am not sure an octree would make it faster, since I have so few objects in the scene.
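For scale: the per-object test the question describes boils down to checking an AABB against each of the six frustum planes, and 512 × 6 of these per frame is trivial work, which is why a flat loop is usually fine at this object count. A sketch of the single-plane test (plane stored as ax + by + cz + d ≥ 0 for the inside half-space):

```cpp
#include <cassert>

// Returns true when the whole AABB lies outside the plane. Uses the
// "p-vertex" trick: test only the corner most aligned with the plane normal;
// if even that corner is outside, the entire box is.
bool aabbOutsidePlane(float minX, float minY, float minZ,
                      float maxX, float maxY, float maxZ,
                      float a, float b, float c, float d) {
    float px = a >= 0 ? maxX : minX;
    float py = b >= 0 ? maxY : minY;
    float pz = c >= 0 ? maxZ : minZ;
    return a * px + b * py + c * pz + d < 0;
}
```

An object is culled when this returns true for any of the six planes; an octree only starts paying off when the object count grows by one or two orders of magnitude.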
Terminology for the way transformation matrix data is treated I recently asked a question at Math Stack Exchange and realized a similar question is more suited for this forum; the original is here: https math.stackexchange.com questions 1526601 terminology for transformation matrices update or rigid body transform. I'm attempting to describe transformation matrix operations within SketchUp's Ruby scripts, and I need some terminology. SketchUp allows the selection of a group of "loose" drawing objects (i.e. edges and faces), and provides options to create a grouped object from them. The drawing elements inside the group have position data relative to the group's local coordinate system. The grouped objects respond to a getter "transformation" method that returns a transformation object containing data representing the orientation, scale, and position of the group. SketchUp's "world space" supports multiple grouped objects, and typically there are numerous grouped entities in the outer-level entities, as well as loose drawing objects. Each grouped object has a separate transformation that determines its position, orientation, and scaling in world space. The transformation can be altered or replaced with methods that include: a) The "transform!" method: applies a transformation (e.g. a rotation) to rotate the group, and b) The "transformation=" setter method: replaces the existing transformation with the supplied argument. Let's say I have a grouped object called "entity1" and a rotation transformation object called "rotation". To produce the same rotation effect I could perform either of the following in SketchUp's Ruby programming: entity1.transform!( rotation ) OR entity1.transformation = rotation * entity1.transformation. Only groups and components support the "transformation=" setter method, while the fundamental Point3d and Vector3d objects provide "transform!"-style methods.
It is obvious, at least from within SketchUp (which uses OpenGL internally), that transformation matrix data can be treated as either: Relative: a transformation performs an incremental operation (example: "entity1.transform! identity" has no effect on the grouped model), or Absolute: the data inside a transformation is treated as the absolute position, orientation, and scaling of the grouped object (example: "entity1.transformation = IDENTITY" sets the model to SketchUp's origin position, and the model is de-rotated to the orientation it had when first grouped, and descaled). Questions: 1) What is the terminology for the type of transformation that is supplied to the "transform!" method? I was calling it an "affine"; now I want to call it an "update transformation". 2) What is the terminology for the type of transformation that is the argument to the setter "transformation="? I have been calling it a "rigid body transformation", and sometimes an "absolute transform". It appears that OpenGL calls this the "model matrix". 3) If I call it "model matrix", is more than one model matrix transform allowed in world space? A previous English Stack Exchange question asking for a word meaning "position and orientation" is at https english.stackexchange.com questions 119883 word for position and orientation. The responses there recommend using "pose matrix".
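The relative/absolute distinction described above can be made concrete with a couple of lines of matrix code (a 2D sketch for brevity; names are illustrative, not SketchUp API): "transform!" composes a delta onto the current matrix, while "transformation=" simply replaces it, which is exactly why the identity is a no-op in the first style and a reset in the second.

```cpp
#include <cassert>

// 2x2 matrices are enough to show the two update styles.
struct Mat2 { float a, b, c, d; };

Mat2 mul(Mat2 m, Mat2 n) {
    return Mat2{m.a * n.a + m.b * n.c, m.a * n.b + m.b * n.d,
                m.c * n.a + m.d * n.c, m.c * n.b + m.d * n.d};
}
// "transform!" style: compose the delta with the current matrix.
Mat2 applyRelative(Mat2 current, Mat2 delta) { return mul(delta, current); }
// "transformation=" style: the argument IS the new matrix.
Mat2 applyAbsolute(Mat2 /*current*/, Mat2 replacement) { return replacement; }
```

Applying the identity via applyRelative leaves the current matrix unchanged; applying it via applyAbsolute resets the object, matching the two SketchUp behaviours the question contrasts.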
An odd performance problem rendering a simple scene (less than 14k vertices) in OpenGL using two VBOs with LWJGL. Problem: I have been having a strange degrading-performance issue rendering a simple scene containing two "chunks" of 4x4x4 cubes each. Video of problem: this is a screen capture showing my console output; look specifically at the FPS dropping (which is the issue): http assets.cognitive.io bugconsole.swf. Details about the problem: my machine (a mid-2009 MBP 15") is on average using something like 10% CPU, but the machine is getting very hot to the touch, so I believe the graphics card is working very hard. Someone else tested this code on another machine and experienced 100% CPU usage on a quad-core AMD CPU (on one core) and the same kind of performance problem. I can't quite figure out what's going on; multiple people have looked at this and we are not getting any smarter. Currently there is no frustum culling or any other sort of culling. I do not think that is the problem; I should be able to render a much, much higher number of vertices without experiencing problems. I have a list of chunks (ArrayList) that is created on start, not within the render loop, that generates a VBO and all that. In my renderer I loop through the list of chunks, and for each one I bind the VBO and set up some uniform variables before calling glDrawArrays(). I have a version of this where I only have one VBO (but then also only one cube per VBO/glDrawArrays call, very inefficient) where everything works fine. Thoughts and ideas: I believe the problem is related to either VRAM filling up or me doing something incredibly wrong. I've looked at my project with OS X's OpenGL Profiler application, which has given me two different results. First I was just using glBufferData each frame to pass the data, which resulted in 90% of the GL time being spent on CGLFlushDrawable().
After some advice from someone in #opengl on Freenode, it was suggested I should set up the VBO once with glBufferData and then call glMapBuffer to pass the data after that. This results in the same performance problem, but with 99% of the GL time being spent on glMapBuffer(). I was hoping someone had some advice for me; I would really like to just be able to continue learning OpenGL instead of being stuck debugging this performance issue for weeks. Code on GitHub: the code is available at https github.com flexd Game tree chunkrenderer (specifically the chunkrenderer branch! You can see that the lighting2 branch works without the same performance issue). The main game class is Game.java, which contains the main method and is also where I set up OpenGL. Chunk.java, Cube.java, Renderable.java and ChunkRenderer.java are also of interest. Shader.java is where the shader class lives, but I doubt that has anything to do with the issues; it's basically the same shader as before. Thanks in advance, I hope to be able to find the issue soon :) Looking forward to being able to learn more of this. PS: Let me know if I am posting this in the wrong place; I don't know if there's an opengl category that's better suited for this.
How to draw or translate into world space? I've been hacking around with OpenGL, but there are a few concepts which I can not find the answer to. I want to draw three GL_QUADS next to each other, like so: 1 2 3. I know GL_QUADS are deprecated, but for this question I'd like to use them nonetheless. Each quad will have its own color, the first being red, the second green and the third blue: r g b. My code:

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    GLfloat vertices[] = {
        -1, -1,  1, -1,  1, 1,  -1, 1,
         1, -1,  3, -1,  3, 1,   1, 1,
         3, -1,  5, -1,  5, 1,   3, 1
    };
    GLfloat colors[] = {
        1, 0, 0,  1, 0, 0,  1, 0, 0,  1, 0, 0,
        0, 1, 0,  0, 1, 0,  0, 1, 0,  0, 1, 0,
        0, 0, 1,  0, 0, 1,  0, 0, 1,  0, 0, 1
    };
    glLoadIdentity();
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glColorPointer(3, GL_FLOAT, 0, colors);
    glMatrixMode(GL_PROJECTION);
    glTranslatef(-1, 0, 0);
    glDrawArrays(GL_QUADS, 0, 12);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glFlush();
}

with this result: I'm confused about which matrix I should apply the translation to, and in which order. I'm almost certain I should not use glMatrixMode(GL_PROJECTION). What are the initial values of a matrix? I'm not certain which coords to pass in the vertex array at this point, model space or world space? I hope you can shed some light on my confusion.
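To the conceptual questions raised here: every fixed-function matrix stack starts as the identity, vertex arrays are specified in model space, and model/world transforms belong on GL_MODELVIEW (GL_PROJECTION is for the projection only). What glTranslatef does can be sketched as plain matrix math, here in 2D homogeneous form for brevity:

```cpp
#include <cassert>

// 3x3 homogeneous 2D transforms, row-major. Each gl* matrix call
// post-multiplies the CURRENT matrix; the vertex is then transformed by
// the accumulated product.
struct M3 { float m[9]; };

M3 identity()                      { return M3{{1,0,0, 0,1,0, 0,0,1}}; }
M3 translation(float tx, float ty) { return M3{{1,0,tx, 0,1,ty, 0,0,1}}; }

M3 mul(const M3& a, const M3& b) {
    M3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i*3+j] += a.m[i*3+k] * b.m[k*3+j];
    return r;
}

// Transform the model-space point (x, y, 1) into world space.
void apply(const M3& a, float x, float y, float& ox, float& oy) {
    ox = a.m[0]*x + a.m[1]*y + a.m[2];
    oy = a.m[3]*x + a.m[4]*y + a.m[5];
}
```

So a modelview of identity followed by a translate of (-1, 0) moves the model-space vertex (1, 1) to world position (0, 1), which is the whole of what the questioner's glTranslatef call should be doing on the modelview stack.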
OpenGL SDL2: How to resize the render area with the window? After calling SDL_SetWindowSize, the area being rendered to doesn't change with the window, so if the window gets bigger, it leaves a black area on the top and right sides. I am adjusting the OpenGL viewport to the new window dimensions. Everything I'm drawing scales correctly but is cut off when the window is bigger. I've been searching for the solution all day, and all I've found is creating a new OpenGL context, but that causes crashes unless I reload all my graphics data, which seems ridiculous just to resize the window. Is it the default framebuffer that needs to be resized? According to the OpenGL wiki, all default framebuffer images are automatically resized to the size of the output window as it is resized. So if the default framebuffer is being resized, why is it only rendering to the same small area? I could initialize it at the largest possible resolution and then shrink it, but wouldn't that cause OpenGL to process a bunch of fragments outside the window?
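Related to the viewport adjustment mentioned in the question: if the goal is a fixed-aspect render area inside the resized window, the viewport rectangle can be computed like this and handed to glViewport whenever SDL reports a size change (a sketch of the arithmetic only; the SDL event handling itself is not shown):

```cpp
#include <cassert>

// Compute the largest rectangle of the given aspect ratio that fits in a
// winW x winH window, centered (letterboxed or pillarboxed as needed).
void fitViewport(int winW, int winH, float targetAspect,
                 int& x, int& y, int& w, int& h) {
    w = winW;
    h = (int)(winW / targetAspect);
    if (h > winH) {            // too tall for the window: pillarbox instead
        h = winH;
        w = (int)(winH * targetAspect);
    }
    x = (winW - w) / 2;
    y = (winH - h) / 2;
}
```

Calling glViewport(x, y, w, h) with these values after every resize keeps the drawing scaled and centered, with any leftover window area intentionally black.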
OpenGL glRotatef causes a performance drop while rotating x and y I have made a rotating cube and I get a performance drop while using two glRotatef calls. This code gives me 80 FPS (code 1):

GLrotate_x += 0.4f;
GLrotate_y += 0.4f;
glPushMatrix();
glRotatef(GLrotate_y, 0.0f, 1.0f, 0.0f);
glRotatef(GLrotate_x, 1.0f, 0.0f, 0.0f);
glTranslatef(multiplxGL * 0.5f * spaceGL * 2, multiplyGL * 0.5f * spaceGL * 2, multiplzGL * 0.5f * spaceGL * 2);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
glPopMatrix();

While this one gives me 100 FPS (code 2):

GLrotate_y += 0.4f;
glPushMatrix();
glRotatef(GLrotate_y, 1.0f, 1.0f, 0.0f);
glTranslatef(multiplxGL * 0.5f * spaceGL * 2, multiplyGL * 0.5f * spaceGL * 2, multiplzGL * 0.5f * spaceGL * 2);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
glPopMatrix();

The rotation results from code 1 and code 2 are not the same (the cube rotates differently). I know code 1 is the proper way to do the rotation here, but why the performance drop? Any chance of bringing my code 1 up to the performance of code 2? On the other side, in DirectX I use:

D3DXMatrixRotationYawPitchRoll(&matDXRotate, D3DXToRadian(DXrotate_y), D3DXToRadian(DXrotate_x), D3DXToRadian(0.0f));

and then:

matDXStack->Push();
D3DXMatrixTranslation(&matDXmove, multiplxDX * 0.5f * spaceDX * 2, multiplyDX * 0.5f * spaceDX * 2, multiplzDX * 0.5f * spaceDX * 2);
matDXStack->LoadMatrix(&(matDXmove * matDXRotate));
d3ddev->SetTransform(D3DTS_WORLD, matDXStack->GetTop());
d3ddev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, 36, 0, 12);
matDXStack->Pop();

which gives me 100 FPS and the same rotation result as code 1.
How can I implement a camera like the one in RotMG? RotMG, an MMO top-down shooter, takes on a unique 2D/3D style and has an intriguing camera. The game is obviously 3D, not simply isometric, and if you play the game and turn on camera rotation you will notice the effect the game produces, like so. RotMG is made using Flash, and I am currently experimenting with libGDX (which uses LWJGL/OpenGL). How should I go about implementing a RotMG-like camera?
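One way to think about this camera (an interpretation, not RotMG's actual implementation): an ordinary perspective camera tilted down at the player, rotated about the vertical axis as the player turns the view, with sprites drawn as billboards facing the camera. The yaw part, rotating a world point around the player, is just a 2D rotation on the ground plane:

```cpp
#include <cassert>
#include <cmath>

// Rotate the ground-plane position (x, z) around the player at (px, pz)
// by yawRad radians. Applying this to the camera's offset from the player
// (or equivalently to world geometry) produces the rotating-view effect.
void rotateAboutPlayer(float px, float pz, float yawRad,
                       float& x, float& z) {
    float dx = x - px, dz = z - pz;
    float c = std::cos(yawRad), s = std::sin(yawRad);
    x = px + dx * c - dz * s;
    z = pz + dx * s + dz * c;
}
```

In libGDX terms this would drive a PerspectiveCamera whose position orbits the player while its up-tilted look direction stays fixed on them, with sprite decals billboarded toward the camera each frame.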