**Understanding normal mapping**

I am trying to understand the basics of normal mapping using this tutorial: http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html

What I don't get there is the following equation:

E1 = (U1 - U0) * T + (V1 - V0) * B

How did they arrive at this equation? It seems to come out of nowhere to me. What is E1? The tutorial says that E1 is one edge of the triangle, but I don't get it: in the equation E1 seems to be a real number, not a vector (which an edge is supposed to be, right? It has an x and a y component).
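The equation is in fact vector-valued: E1 is the edge vector p1 − p0, expressed in the tangent-space basis, so E1 = (U1 − U0)·T + (V1 − V0)·B holds per component. Writing the same relation for the second edge E2 gives a 2x2 linear system that can be solved for T and B. A minimal sketch of that solve (hypothetical `Vec3` type, not from the tutorial):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s){ return {a.x * s, a.y * s, a.z * s}; }

// Solve  E1 = dU1*T + dV1*B  and  E2 = dU2*T + dV2*B  for T and B
// by inverting the 2x2 matrix of UV deltas.
void computeTangentBasis(Vec3 p0, Vec3 p1, Vec3 p2,
                         float u0, float v0, float u1, float v1,
                         float u2, float v2,
                         Vec3& T, Vec3& B)
{
    Vec3 E1 = sub(p1, p0), E2 = sub(p2, p0);
    float dU1 = u1 - u0, dV1 = v1 - v0;
    float dU2 = u2 - u0, dV2 = v2 - v0;
    float f = 1.0f / (dU1 * dV2 - dU2 * dV1); // determinant of the UV matrix
    T = scale(add(scale(E1,  dV2), scale(E2, -dV1)), f);
    B = scale(add(scale(E1, -dU2), scale(E2,  dU1)), f);
}
```

For a triangle whose UVs match its positions, the solve returns the world axes, which is a quick sanity check.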
**rotation matrix problems (OpenGL and own types)**

In an effort to learn things more deeply, I'm writing my own math code for the first time instead of using libraries. As far as I can tell my matrix multiplication is correct, and translation and scaling work fine. I am, however, struggling to debug the rotations. I've tried pretty much everything I can think of to check it, and it still won't behave as expected. If I set the angle to zero then, as expected, the model matrix is not affected. Changing it to 1.0f makes it explode; the only value that results in expected behaviour seems to be zero. I haven't written a view matrix yet, and I've been trying to debug this with the (orthographic) projection matrix both on and off.

Below is the code with some notes. Matrices are linear arrays of 16 values; i32 is an int, f32 is a float, internal is static.

```cpp
inline m4x4 operator*(m4x4 a, m4x4 b)
{
    // TODO: redo this with SIMD
    m4x4 Result;
    for (i32 Row = 0; Row <= 3; ++Row)
        for (i32 Col = 0; Col <= 3; ++Col)
            for (i32 n = 0; n <= 3; ++n)
                Result.c[(Row * 4) + Col] += a.c[(Row * 4) + n] * b.c[(4 * n) + Col];
    return (Result);
}

inline m4x4 GetRotationZ(f32 angle)
{
    m4x4 Result;
    f32 c = Cos(angle);
    f32 s = Sin(angle);
    Result = { c, -s, 0, 0,
               s,  c, 0, 0,
               0,  0, 1, 0,
               0,  0, 0, 1 };
    return (Result);
}

inline m4x4 RotateZ(m4x4 m, f32 angle)
{
    m4x4 Rotation = GetRotationZ(angle);
    m4x4 Result = Rotation * m;
    return (Result);
}

internal void DrawSprite(sprite Sprite, v2 Position, v2 Size, f32 RotationAngle, v3 Colour)
{
    glUseProgram(Sprite.ShaderProgram);
    glBindVertexArray(Sprite.VAO);
#if 0
    Position = v2{400, 300};
    Size = v2{100.0f, 100.0f};
    RotationAngle = 0.0f;
#endif
#if 0
    m4x4 Projection = Orthographic(0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f);
    u32 ProjectionUniformID = glGetUniformLocation(GlobalSprite.ShaderProgram, "Projection");
    glUniformMatrix4fv(ProjectionUniformID, 1, GL_TRUE, &Projection.c[0]);
#endif
    m4x4 Model = IdentityMatrix();
    Position.x = Position.x * 0.01f;
    Position.y = Position.y * 0.01f;
    Model = Translate(Model, V3(Position, 0.0f));
    Model = Translate(Model, v3{0.5f * Size.x, 0.5f * Size.y, 0.0f});
    Model = RotateZ(Model, RotationAngle);
    Model = RotateZ(Model, 45.0f);
    Model = Translate(Model, v3{-0.5f * Size.x, -0.5f * Size.y, 0.0f});
    Size = v2{0.25, 0.25};
    Model = Scale(Model, V3(Size, 1.0f));
    u32 ModelUniformID = glGetUniformLocation(Sprite.ShaderProgram, "Model");
    glUniformMatrix4fv(ModelUniformID, 1, GL_TRUE, &Model.c[0]);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}
```

Next, the shader source:

```glsl
char *VertexShaderSource = R"(
#version 330 core
layout (location = 0) in vec4 PositionAndTextureCoords;
uniform mat4 Model;
uniform mat4 Projection;
out vec2 TexCoords;
void main()
{
    TexCoords = vec2(PositionAndTextureCoords.zw);
    gl_Position = vec4(PositionAndTextureCoords.xy, 0.0f, 1.0f);
    gl_Position = Model * vec4(PositionAndTextureCoords.xy, 0.0f, 1.0f);
    gl_Position = Projection * Model * vec4(PositionAndTextureCoords.xy, 0.0f, 1.0f);
    gl_Position = Model * vec4(PositionAndTextureCoords.xy, 0.0f, 1.0f);
}
)";

char *FragmentShaderSource = R"(
#version 330 core
in vec2 TexCoords;
uniform sampler2D Texture;
out vec4 FragColour;
void main()
{
    FragColour = texture(Texture, TexCoords);
}
)";
```

I've stepped through the debugger, checked every line of code again and again, and worked through a few matrices by hand, and still I don't know what's going wrong. It should be something trivial, yet I can't seem to find the bug. Below are pictures demonstrating the difference between 0.0f and 1.0f; as of this state of the code it not only skews and rotates incorrectly, but its scaling is well off as well.
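Two things worth checking in code like the above: a 4x4 product has to cover all four rows, columns, and inner terms (and the accumulator must start at zero), and `Cos`/`Sin` in most math libraries expect radians, so a call like `RotateZ(Model, 45.0f)` would need a degrees-to-radians conversion first. A minimal row-major sketch with hypothetical names (not the asker's types):

```cpp
#include <cmath>

// Row-major 4x4 multiply: every index runs over the full 0..3 range,
// and the accumulator is explicitly zero-initialized.
void mul4x4(const float a[16], const float b[16], float out[16])
{
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col) {
            float sum = 0.0f;
            for (int n = 0; n < 4; ++n)
                sum += a[row * 4 + n] * b[n * 4 + col];
            out[row * 4 + col] = sum;
        }
}

// Z rotation; the angle is in RADIANS (convert with deg * pi / 180 first).
void rotationZ(float radians, float out[16])
{
    float c = std::cos(radians), s = std::sin(radians);
    float m[16] = { c, -s, 0, 0,
                    s,  c, 0, 0,
                    0,  0, 1, 0,
                    0,  0, 0, 1 };
    for (int i = 0; i < 16; ++i) out[i] = m[i];
}
```

Multiplying the 90-degree rotation by the identity and inspecting the entries is a quick way to verify both routines at once.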
**TBN Matrix: Eye vs. World Space Conflict**

I am tired of misleading and insufficient articles making me more confused each time I read one; I need a clarification that will settle my TBN matrix problem for good. Each article informs me differently about which space the TBN matrix converts from into tangent space: one article says it converts from eye space, another says it converts from world space. As far as I know, world space is ModelMatrix * vertex, and eye space is ViewMatrix * ModelMatrix * vertex. Yet another article says that the transpose of the TBN matrix actually does it. Can you please explain how to use the TBN matrix: does it (or its transpose) convert from eye space or from world space to tangent space? Should I convert my lighting vectors from eye space to world space and then apply the TBN matrix to them?
**cutting part of object**

How can I cut away part of an object when it is inside a specific area, as shown in the attached image?

P.S. It looks like the stencil buffer should be used, but I am not really sure how to do it.
**How to read rendered textures back without killing performance**

I have an application where I need to render some OpenGL-based DLLs, read the rendered textures back, and send them to another DirectX-based application that will render them in DirectX. Right now I am rendering them using FBOs, reading them back, and then sending the data across the network. However, the "reading them back" step is killing performance. My basic loop is as follows:

```
while (true)
{
    ... render 5 to 10 textures ...
    foreach (texture in renderedTextures)
    {
        bind texture
        glGetTexImage(data)
        push data onto a queue to send asynchronously
    }
}
```

Commenting out the glGetTexImage call causes performance to improve drastically. What techniques can I use to read the data back faster?
**LWJGL 3: how to convert screen coordinates to world coordinates?**

I'm trying to convert a screen coordinate to a world coordinate on a mouse click event. For LWJGL 3 there is no GLU utility class available, whereas LWJGL 2 had one. I'm using the JOML math classes and wrote the following code, but it returns the wrong world coordinate; I'm doing something wrong and can't figure out what.

On program init, I get viewProjMatrix and the viewport:

```java
viewProjMatrixUniform = glGetUniformLocation(this.program, "viewProjMatrix");
IntBuffer viewportBuffer = BufferUtils.createIntBuffer(4);
int[] viewport = new int[4];
glGetIntegerv(GL_VIEWPORT, viewportBuffer);
viewportBuffer.get(viewport);
```

In the render loop, I calculate viewProjMatrix:

```java
viewProjMatrix
    .setPerspective((float) Math.toRadians(30), (float) width / height, 0.01f, 500.0f) // define min and max planes
    .lookAt(eye_x, eye_y, eye_z, eye_x, eye_y, 0.0f, 0.0f, 2.0f, 0.0f);
glUniformMatrix4fv(viewProjMatrixUniform, false, viewProjMatrix.get(matrixBuffer));
```

I convert the screen coordinate to a world coordinate with the following code:

```java
DoubleBuffer mouseXBuffer = BufferUtils.createDoubleBuffer(1);
DoubleBuffer mouseYBuffer = BufferUtils.createDoubleBuffer(1);
glfwGetCursorPos(window, mouseXBuffer, mouseYBuffer);
double x = mouseXBuffer.get(0);
double y = mouseYBuffer.get(0);
System.out.println("clicked at " + x + " " + y);
Vector3f v3f = new Vector3f();
viewProjMatrix.unproject((float) x, (float) y, 0f, viewport, v3f);
System.out.println("world coordinate " + v3f.x + " " + v3f.y);
```

Here's the full source code: https://gist.github.com/digz6666/48bb433c83801ea4b82fa194f05b4f02
**What can I do to ensure that everyone can run my OpenGL game?**

I am currently learning OpenGL from here. That website teaches OpenGL 3.3 with GLSL 3.3 and says it is best to learn those because they are well supported, and older versions work in a less efficient way. So far this has been going well, but when I tried to run my OpenGL programs on my laptop with Intel HD Graphics 3000, I ran into issues with the lack of GLSL 3.3 support (glGetShaderiv() and such claim that GLSL 3.3 is unavailable; when I ignore these errors, nothing beyond the background color is rendered). I am learning OpenGL in order to create a video game whose target market will contain many older non-gaming computers (in addition to users with brand-new GTX 1080s). It wouldn't surprise me if a few users were still on Windows XP. It is essential that everyone can run my game without buying extra hardware. It is equally essential that it looks good on hardware that can support it.

What single, non-deprecated graphics API is available on all hardware (using software emulation if needed) on Windows, Mac, and Linux? If no single such API is available, what can I do to ensure that my game works properly everywhere while also taking advantage of new features when available, without learning multiple APIs? I would strongly prefer to stay close to the hardware by avoiding higher-level APIs such as Ogre3D. If I need to learn multiple APIs to do this, what are they, and why hasn't the industry standardized this by creating one API and then releasing drivers for old hardware?

Some possible solutions are:

- A well-enough-performing software implementation of OpenGL 3.3 along with GLSL 3.3.
- An OpenGL 3.3 implementation that uses OpenGL 1.0 in the background and can be swapped in automatically at runtime if newer versions aren't directly available.
- A simple-to-use older version of GLSL that allows me to load separate shaders as needed.
**Graphics hardware: texture formats and shader speed**

I'm interested to know whether there is a direct correlation between the speed at which a shader runs and the bit depth of the texture it samples. For example, if I have a 2-bit stencil texture and I'm accessing data from it, will the access time be 16 times faster than accessing the data in a 32-bit texture, or is the smaller format primarily useful for its memory-saving capabilities? SEE EDIT
**How to know if these profiler values are good or bad?**

I'm making a 2D game for Android using LibGDX and Java in Android Studio. My game mostly runs at 60 FPS, but from time to time it drops to 50 FPS, and I'm trying to find out what causes it. I came across this profiler and implemented it, and here are the values in a very "messy" frame with lots of objects moving around:

```
06-19 15:14:21.088 9999-10033 com.gadarts.parashoot.android I GL Report: Calls: 498
06-19 15:14:21.088 9999-10033 com.gadarts.parashoot.android I GL Report: Draw Calls: 37
06-19 15:14:21.088 9999-10033 com.gadarts.parashoot.android I GL Report: Shader Switches: 8
06-19 15:14:21.088 9999-10033 com.gadarts.parashoot.android I GL Report: Bindings: 36
06-19 15:14:21.088 9999-10033 com.gadarts.parashoot.android I GL Report: Vertex Count: 31.945946
```

My question is: how am I supposed to know whether these values are considered high or low? I read the LibGDX documentation, but it doesn't give any direction or indication about what the values mean in practice. Yes, it might be an opinion-based question, but can at least someone give a relevant direction using these values? Thanks in advance.
**How to process multiple shadow maps in deferred shading**

I'm attempting to implement shadow mapping in an OpenGL deferred shading system. I know how to handle a single shadow map in the lighting stage, but I want my rendering system to support multiple shadow maps. At present, my lighting shader can support up to 128 point and spot lights in a single pass. I'd like to know how to modify the shader to handle an indeterminate number of shadow maps of each type (ortho for directional lights, cube for point lights, projection for spotlights). Normally, when I define a sampler in GLSL, I simply use:

```glsl
uniform sampler2D shadowMap;
```

But I think it would be cumbersome to define up to 256 individual sampler uniforms, if it were even possible (which I am far from convinced of). Is there a way to define an array of samplers which I can iterate through on a 1:1 ratio for each light, and if so, how would I do this? For instance, I know that I can use UBOs to send the data for many lights to the shader, so can I do the same to populate an array of samplers, like so?

```glsl
uniform sampler2D shadowMaps[MAX_SHADOWMAPS];
```
**How can I tell in code if vsync is disabled on a desktop PC?**

My game needs to behave differently to get the best performance if the user disables vsync globally (basically, I need to change the scheduling of my housekeeping operations). Is there a graphics-card-independent way (an SDK call, in C/C++) to find out whether vsync is disabled? I'm using OpenGL with GLFW 3.2.1, and I can use glfwSwapInterval(0) to force vsync off, but that's not the same as reading the player's current default preferences. I can always use application settings, but it would be good if the game could default to the approach best suited to the default video settings.
**How to handle hitboxes/hurtboxes in animations?**

I'm curious how I should go about structuring hitboxes and hurtboxes for my actors. For context, I'm developing a game engine in OpenGL, using C# and OpenTK. I have developed a working skeletal animation system which loads animations using Assimp and then passes bone IDs, bone weights, and transform matrices to the vertex shader, which calculates all vertex deforms appropriately for the current animation keyframe (or the current interpolation between keyframes). Is there a set standard for calculating collision in animations? I'd prefer to continue using Assimp, which means using standard, recognizable animation formats that my engine can load, so I'd rather not store any collision data in the animation files, unless there is precedent for this, or room for metadata or something.

My current thoughts: one option would be to calculate the hitboxes and hurtboxes upon loading an animation, and have a mapping that determines which meshes are hitboxes and which are hurtboxes. The second option would be to create a new file format where I store my hitboxes and hurtboxes in a way similar to animation files (minus bone weights, since I'll just be transforming the scale/position of axis-aligned rectangles), where I can define keyframes for hitboxes and hurtboxes, then interpolate based on these to get the current collision box for each frame. I could combine the two options by automatically generating these files from the animations, then editing them manually until they are reasonable. Am I on the right track, or are there smarter ways of going about this? I don't want to reinvent the wheel if there are better options available.
**OpenGL: must I always preserve data that is uploaded to a VBO?**

If I load my vertex data from a file into a buffer in RAM, and then upload this to the GPU (say, to a static VBO, so there's no need to modify the data), do I need to keep that buffer around? If not, how do I know that the upload is complete?
**Hard edges with non-power-of-2 images in OpenGL**

When downscaling 2D images in Java2D, it does a great job of preserving hard edges while downscaling to non-powers of 2. However, in OpenGL, I have been unable to find a solution to this. I have tried using GL_NEAREST to get hard edges, but it creates wonky edges as well. I cannot use GL_LINEAR in this case either, but still, the linear interpolation looks wonky. Example (both rendering a 30x30 hard-edged image at 30x30 pixels). Please note: I am aware that this occurs with images that aren't powers of two, like 30, but the main problem is that if I try to upscale or downscale an image that *is* a power of two (say 16x16) to something that isn't (say 20x20), I still get this effect. This scaling issue also occurs in Java2D, but there it is far less noticeable. My question is: what parameters do I need to use when rendering a non-power-of-two image, so I can get an effect similar to what I get in Java2D? I can post source code if needed.

EDIT: Original image (30x30 PNG). Turns out I wasn't scaling them at all! Both were being rendered at the original 30x30 pixels in both OpenGL and Java2D.

Sample code, Java2D (no rendering hints are applied, just the bare minimum):

```java
g.drawImage(heart, 0, 0, 30, 30, null);
```

In OpenGL, the vertex shader:

```glsl
#version 400
in vec2 position;
out vec2 textureCoords;
uniform float width;
uniform float height;
uniform float xOffset;
uniform float yOffset;
uniform mat4 transformationMatrix;
void main(void)
{
    gl_Position = transformationMatrix * vec4(position.x, position.y, 0.0, 1.0);
    gl_Position.x += (xOffset) / (width / 2);
    gl_Position.y += (yOffset) / (height / 2);
    gl_Position.y = -gl_Position.y;
    textureCoords = vec2(position.x + 1, position.y + 1);
}
```

Fragment shader:

```glsl
#version 400
in vec2 textureCoords;
out vec4 out_Color;
uniform sampler2D texture2d;
uniform float transparency;
void main()
{
    out_Color = texture2D(texture2d, vec2(textureCoords.x, textureCoords.y));
    if (textureCoords.y > 1 || textureCoords.y < 0)
        discard;
}
```

Rendering methods:

```java
public void makeWonkyImage() {
    render(0, 0, 30, 30, heart);
}

public void render(float x, float y, float width, float height, int textureID) {
    PostProcess.imageRenderer.render(x, Gfx.HEIGHT - y, width / 1000, height / 1000, textureID);
}

public void render(float x, float y, float width, float height, int textureID) {
    shader.start();
    prepare(textureID);
    Vector2f position = new Vector2f((Display.WIDTH * (width / 1000f)), (Display.HEIGHT * (height / 1000f)));
    Matrix4f matrix = Maths.createTransformationMatrix(position, new Vector2f(width * Gfx.WIDTH / 1000f, height * Gfx.HEIGHT / 1000f));
    shader.loadTransformation(matrix);
    shader.loadScreenDimensions((float) Display.getWidth(), (float) Display.getHeight());
    shader.loadOffsets(x, y);
    GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, quad.getVertexCount());
    end();
    shader.stop();
}

public void prepare(int texture) {
    GL30.glBindVertexArray(quad.getVaoID());
    GL20.glEnableVertexAttribArray(0);
    GL11.glEnable(GL11.GL_BLEND);
    GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
    GL11.glDisable(GL11.GL_DEPTH_TEST);
    GL13.glActiveTexture(GL13.GL_TEXTURE0);
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture);
    GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
    GL11.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);
}
```
**How to rotate a sprite in 2D with LWJGL 3 and GLSL**

I already have all the basic rendering set up, with translation, scaling, etc. What matrix (or matrices) and/or vector(s) do I need to multiply/add into my existing code to rotate an object? Preferably around a point given by a Vector2f, and only in degrees, because quaternions aren't implemented and it's only 2D, not 3D.

Snippet of the vertex shader:

```glsl
gl_Position = transformation * vec4(position, 1.0) + translation;
```

where the transformation is computed in Java as (note: not actual code, as `*` is replaced with .mul()):

```java
Matrix4f transformation = ortho * scale;
```

and the translation is given by a method:

```java
Vector4f translation = new Vector4f(x, y, 0f, 0f).mul(gc.getOrtho());
```

All the code listed above works as expected; it is just lacking the rotation feature mentioned at the top. How can I add rotation to my code?
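For 2D rotation about an arbitrary pivot, the standard recipe is translate(pivot) * rotate(angle) * translate(−pivot): move the pivot to the origin, rotate, then move back. A minimal sketch of the combined transform applied directly to a point (plain functions for illustration, not the LWJGL/JOML API):

```cpp
#include <cmath>

// Rotate point (px, py) by `radians` around pivot (cx, cy).
void rotateAbout(float px, float py, float cx, float cy, float radians,
                 float& outX, float& outY)
{
    float c = std::cos(radians), s = std::sin(radians);
    float x = px - cx, y = py - cy; // translate pivot to origin
    outX = cx + x * c - y * s;      // rotate, then translate back
    outY = cy + x * s + y * c;
}
```

In matrix form the same thing is done by multiplying the three matrices into the existing transformation (degrees are converted to radians first, since cos/sin take radians).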
**LWJGL: int vertex attribute not working in shader**

I'm trying to send an integer attribute to my GLSL shader. The shader receives the attribute as follows:

```glsl
layout (location = 3) in int integer_value;
```

I'm creating the attribute as follows:

```java
glBindBuffer(GL_ARRAY_BUFFER, this.integerValueVboId);
glBufferData(GL_ARRAY_BUFFER, integer_values_buffer, GL_STATIC_DRAW);
glVertexAttribPointer(3, 1, GL_INT, false, 0, 0);
```

this.integerValueVboId is a valid ID generated with all the other VBOs, and integer_values_buffer is an IntBuffer confirmed to contain the correct integer values. When I set one of the integer values to 1, the shader receives the value 1065353216, which I have found to be the value obtained when one reinterprets a float 1.0f as an integer.
**OpenGL Planet Generation: Simple Matrix Issue (Planet Spins With Mouse)**

I originally asked this question on StackOverflow and was directed here by a commenter. I'm currently working on OpenGL planet rendering using the tessellation pipeline. So far things are going very well, bar one issue. It's at the stage where I've been banging my head against it for ages, and it feels like progress isn't happening. First of all, here is a gif of what I'm dealing with. Essentially my problem is that whenever the mouse is moved, the planet rotates as if it's "looking" at where the camera is pointing. There are some graphical issues, but they are due to me simply repeating the same heightmap across the whole cubemap; since it doesn't match up on the sides, there are clear seams.

Below is my evaluation shader:

```glsl
void main(void)
{
    vec4 p0 = gl_in[0].gl_Position;
    vec4 p1 = gl_in[1].gl_Position;
    vec4 p2 = gl_in[2].gl_Position;
    vec3 p = gl_TessCoord.xyz;
    Normal_FS_in = mat3(transpose(inverse(MV))) * (p0 * p.x + (p1 * p.y) + p2 * p.z).xyz;
    float displacement = texture(tex1, Normal_FS_in).r * 800;
    gl_Position = MVP * ((p0 * p.x + (p1 * p.y) + p2 * p.z) + (vec4(vec3(displacement, displacement, 0) * normalize(Normal_FS_in), 1)));
}
```

It's pretty simple: the shader calculates a normal for the output vertex, which is then used to grab a displacement value from the heightmap (a cubemap), and then gl_Position is calculated. What I've been trying to work out is whether my problem lies in the shaders or in the rest of the program. My attempts have largely been tinkering: I've moved all normal-related work into the evaluation shader rather than calculating it in the vertex shader and passing it through the control and evaluation shaders. The issue only occurs when the mouse is moved. Any suggestions would be fantastic; hopefully all I need is a pointer in the right direction.

Edit: taking the inverse transpose of the model-view matrix OR the view matrix yields the result in the gif. Using the model matrix leads to distortion of the normals when the camera is moved, such that the terrain bends about the place.
**What is OpenGL ES? What do gl and GLES20 mean?**

I am new to OpenGL. In the tutorials I've seen, some calls start with gl, some with GLES20, and some with GL10. What are the differences? Does GLES20 mean OpenGL ES 2.0, whereas GL10 means version 1.0?
**How could RTT be so slow on my Intel card?**

I simply draw something into a render target and use it as a normal texture. It has always worked great for me on my NVIDIA video card, but today I found that my program ran terribly slowly (less than 5 FPS) on an Intel card. After profiling, I found that glGenerateMipmapEXT is the troublemaker; it costs most of the CPU time. Here's how I bind an RTT texture:

```cpp
glBindTexture(GL_TEXTURE_2D, m_textureID);
glGenerateMipmapEXT(GL_TEXTURE_2D);
```

Without glGenerateMipmapEXT, the texture is nothing but a pure white picture. Is something wrong with my RTT?
**Cylindrical coordinates: pointing the camera at the origin**

I have a camera with the following attributes: pos (the position of the camera in the scene), look (either the direction the camera faces, or a target vector), and an up vector (the y axis). I am using a cylindrical coordinate system to move the camera around the centre of the scene. My question is: how can I find a look vector for the camera such that it points at the origin of the scene at a 45-degree angle? According to this thread there is no such way: "And you can never look directly down at the focal point, no matter how high up you go."

P.S. I need to get the look vector somehow, if it's possible. Thank you for your attention.
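Computing the look vector itself is straightforward: it is just normalize(target − pos), and for a target at the origin the camera looks down at exactly 45 degrees whenever its height above the target plane equals its horizontal distance from the axis (h = r in cylindrical coordinates). A quick sketch (hypothetical `Vec` type):

```cpp
#include <cmath>

struct Vec { float x, y, z; };

// Unit-length direction from the camera position toward the origin.
Vec lookAtOrigin(Vec pos)
{
    float len = std::sqrt(pos.x * pos.x + pos.y * pos.y + pos.z * pos.z);
    return { -pos.x / len, -pos.y / len, -pos.z / len };
}
```

For example, a camera at (1, 1, 0) with the target at the origin gets a look vector of roughly (−0.707, −0.707, 0), i.e. tilted 45 degrees below the horizontal.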
**Section cut through (solid) geometry**

I'm looking for an image-based (screen-space) technique to render section cuts through arbitrary (solid) geometry. I found and studied image-based CSG (Kirsch '05, OpenCSG), but I found it to be perhaps a bit of overkill for my case, where all I need is a section-plane cut. Above is a naive implementation using discard in the fragment shader, but that is obviously not even halfway there, as I need to close the gaps. Does anyone know of a technique or hack I could use?
**Render to cubemap: wrong Y values**

I'm currently trying to render to a cubemap in order to blur it. However, the top and bottom faces appear much closer than they should in the blurred version. I thought the problem came from my transformation matrices, but they're just rotations. I'm beginning to believe it's the way I'm handling the framebuffer that's causing this. Here's how I'm doing it:

```cpp
unsigned fbo;
unsigned rt;
glViewport(0, 0, size, size);
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rt);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glBindRenderbuffer(GL_RENDERBUFFER, rt);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, size, size);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rt);
// ... enable source cubemap, set projection ...
for (int i = 0; i < 6; ++i)
{
    // set rotation
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, target_cubemap, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // draw cube
}
```

(The images below are from the same corner.)
**UE4: OpenGL context**

Is there a way to get an OpenGL context to do my own drawing in UE4? I'd like to use some shaders before a scene loads (a kind of intro thing), but I couldn't find anything related to getting an OpenGL context.
**Why is there no glClear() or glClearColor() method in GL30?**

In the GL30 interface, both glClear() and glClearColor() are absent. I tried to call Gdx.gl30.glClear(GL30.GL_COLOR_BUFFER_BIT) inside render(), but it threw a NullPointerException. So I checked the interface: there's no glClear() method in GL30, only in GL20. But the OpenGL documentation says they are supported in both v2.0 and v3.0. Why are they not included in LibGDX?
**Why is this orthographic projection matrix not showing my textured quad?**

I've been following tutorials, mainly this one, and I am still not quite sure why my textured quad is not showing inside the frustum that I've set up. I can see it if and only if I don't multiply gl_Position by OrthoProjMatrix * vertexposition_modelspace, and instead use vertexposition_modelspace alone. Here is some of my code; my Main.cpp is also available via PasteBin.

Orthographic projection matrix setup code:

```cpp
void OpenGL_Engine::OrthoProjectionSetup(GLuint program)
{
    GLfloat Right  = 100.0;
    GLfloat Left   = 50.0;
    GLfloat Top    = 100.0;
    GLfloat Bottom = 50.0;
    GLfloat zFar   = 1.0;
    GLfloat zNear  = -1.0;

    GLfloat LeftAndRight = 2.0f / (Right - Left);
    GLfloat TopAndBottom = 2.0f / (Top - Bottom);
    GLfloat ZFarAndZNear = -2.0f / (zFar - zNear);

    GLfloat orthographicprojmatrix[] = {
        /* XX XY XZ XW */ LeftAndRight, 0.0, 0.0, -(Right + Left) / (Right - Left),
        /* YX YY YZ YW */ 0.0, TopAndBottom, 0.0, -(Top + Bottom) / (Top - Bottom),
        /* ZX ZY ZZ ZW */ 0.0, 0.0, ZFarAndZNear, -(zFar + zNear) / (zFar - zNear),
        /* WX WY WZ WW */ 0.0, 0.0, 0.0, 1.0
    };

    GLint orthographicmatrixloc = glGetUniformLocation(program, "OrthoProjMatrix");
    glUniformMatrix4fv(orthographicmatrixloc, 1, GL_TRUE, &orthographicprojmatrix[0]);
}
```

Vertex shader code:

```glsl
#version 330 core
layout(location = 0) in vec4 vertexposition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 OrthoProjMatrix;
void main()
{
    gl_Position = OrthoProjMatrix * vertexposition_modelspace;
    UV = vertexUV;
}
```

I'm having problems with the orthographic projection matrix: either it is not being built correctly, it is not uploaded correctly, my shader is not set up correctly, or the textured quad is simply not in view. Please note that I do not want to use a library for this. What am I doing wrong?
**How to Construct a Perspective Projection With 4 Vanishing Points**

Is it possible to construct a projection matrix which will create a perspective with four (or more) vanishing points? This question has an OpenGL tag, but general insights are welcome as well.
**An efficient way of generating a smooth circle**

I'm looking to create a smooth circle. OpenGL supports points, lines, and triangles; to create other primitives like circles, we build them from the preceding shapes. In my case, I've used points, as follows:

```cpp
float radius(0.5f);
for (float angle(0); angle <= glm::radians(360.0f); angle += glm::radians(0.5f))
{
    Vertex vertices[] = { Vertex(glm::vec3(radius * cos(angle), radius * sin(angle), 0)) };
    meshes.push_back(new Mesh(vertices, sizeof(vertices) / sizeof(vertices[0]), 'P'));
}
```

In the rendering loop:

```cpp
while (Window.isOpen())
{
    Window.PollEvents();
    Window.clear();
    // (Rendering)
    Shader.Use();
    for (int i(0); i < meshes.size(); ++i)
        meshes[i]->draw();
    Window.SwapBuffers();
}
```

The result is as pictured. Now, to create a smoother circle, basically I just increase the number of points. I don't like this approach, since I need to create a lot of points for a single shape. My question is: is there an alternative, yet efficient, approach for this?
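Rather than a fixed 0.5-degree step (720 points), the segment count can be derived from a maximum allowed deviation: the sagitta of a chord spanning angle 2π/n is r·(1 − cos(π/n)), so the smallest n whose sagitta stays below a tolerance e gives a circle that looks smooth at the size it is actually drawn, usually with far fewer vertices. A quick sketch (putting all the vertices into a single triangle fan in one buffer, rather than one Mesh per point, is also generally cheaper):

```cpp
#include <cmath>

// Smallest segment count whose chord deviation (sagitta) stays <= maxError.
int circleSegments(float radius, float maxError)
{
    int n = 3; // a fan needs at least a triangle
    while (radius * (1.0f - std::cos(3.14159265f / n)) > maxError)
        ++n;
    return n;
}
```

For a radius of 0.5 units and a tolerance of 0.005 (half a pixel at 100 pixels per unit), this lands at a few dozen segments instead of 720.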
**Z value of clip-space position is always 1.0**

I render a lot of quads on the screen in the z direction (20 x 2000). I want to get the depth value into a final render target, but it looks like z is always 1.0f. I checked the result with the OpenGL debugger. Any advice?

Vertex shader:

```glsl
#version 410
layout(row_major) uniform UView
{
    mat4 m_View;
    mat4 m_Projection;
};
layout(row_major) uniform UModelMatrix
{
    mat4 m_ModelMatrix;
};
layout(location = 0) in vec2 VertexPosition;
out vec3 PSVSPosition;
void main(void)
{
    vec4 VSPosition = m_View * m_ModelMatrix * vec4(VertexPosition.xy, 0.0f, 1.0f);
    PSVSPosition = VSPosition.xyz;
    gl_Position = m_Projection * VSPosition;
}
```

Fragment shader:

```glsl
#version 410
layout(row_major) uniform USettings
{
    mat4 m_ProjectionMatrix;
};
layout(location = 0) out vec4 PSColor;
in vec3 PSVSPosition;
void main(void)
{
    // Calculate depth
    vec4 CSPosition = m_ProjectionMatrix * vec4(PSVSPosition, 1.0f);
    CSPosition.xyz /= CSPosition.w;
    PSColor = vec4(vec3(CSPosition.z * 0.5f + 0.5f), 1.0f);
}
```

Results: (images of the closer result, the view-space position result, and the clip-space position result).
**Fire simulation using Java and OpenGL**

I'm new to working with OpenGL. I'm trying to create a simple program that simulates fire. My question: what ways are there, other than particle effects, to simulate fire? And can fire simulation really be done without a particle system?
**Are there still advantages to using GL_QUADS?**

OK, I understand that GL_QUADS is deprecated, and thus we're not "supposed" to use it anymore. I also understand that a modern PC running a game that uses GL_QUADS is actually drawing two triangles. Now, I've heard that because of that, a game should be written using triangles instead. But I'm wondering whether, due to the specifics of how OpenGL turns that quad into two triangles, it is ever still advantageous to use quads.

Specifically, I'm currently rendering many unconnected quads from rather large buffer objects. One of the areas where I have to be careful is how large the vector of floats I use to update these buffer objects gets (I have quite a few extra float values per vertex, and a lot of vertices in a buffer; the largest buffers are about 500KB). So it strikes me that if I change my buffer objects to draw triangles, this vertex data is going to be 50% larger (six vertices to draw a square rather than four) and take 50% longer for the CPU to generate. If GL_QUADS still works, am I getting a benefit here, or is the 50% extra memory and CPU time still being spent in OpenGL's automatic conversion to two triangles?
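A middle ground that keeps the 4-vertices-per-quad memory footprint is indexed drawing: keep four vertices per quad in the VBO and add a six-entry index buffer per quad (GL_ELEMENT_ARRAY_BUFFER plus glDrawElements). The index pattern is fixed, so it can be generated once and reused for any number of quads. A sketch of the index generation (assumed vertex order: 0 = bottom-left, 1 = bottom-right, 2 = top-right, 3 = top-left):

```cpp
#include <vector>
#include <cstdint>

// Six indices (two triangles) per quad; quads are laid out 4 vertices apart.
std::vector<uint32_t> quadIndices(uint32_t quadCount)
{
    std::vector<uint32_t> idx;
    idx.reserve(quadCount * 6);
    for (uint32_t q = 0; q < quadCount; ++q) {
        uint32_t base = q * 4;
        uint32_t pattern[6] = { 0, 1, 2, 2, 3, 0 };
        for (uint32_t p : pattern)
            idx.push_back(base + p);
    }
    return idx;
}
```

The per-vertex float data stays at four vertices per quad; only the small, static index buffer grows, so the CPU-side generation cost barely changes.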
**OpenGL: flickering near the edges**

I am trying to simulate particles moving around the scene, with OpenCL for computation and OpenGL for rendering with GLUT. There is no OpenCL/OpenGL interop yet, so the drawing is done in the older fixed-pipeline way. Whenever circles get close to the edges, they start to flicker. The drawing should draw part of the circle at the top of the scene and part at the bottom, wrapping around the scene, so to say, but they constantly flicker. The balls you see at the bottom should be one part at the bottom and one part at the top. The code for drawing them is:

```cpp
void Scene::drawCircle(GLuint index)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(pos.at(2 * index), pos.at(2 * index + 1), 0.0f);
    glBegin(GL_TRIANGLE_FAN);
    GLfloat incr = (2.0 * M_PI) / (GLfloat) slices;
    glColor3f(0.8f, 0.255f, 0.26f);
    glVertex2f(0.0f, 0.0f);
    glColor3f(1.0f, 0.0f, 0.0f);
    for (GLint i = 0; i <= slices; ++i)
    {
        GLfloat x = radius * sin((GLfloat) i * incr);
        GLfloat y = radius * cos((GLfloat) i * incr);
        glVertex2f(x, y);
    }
    glEnd();
}
```

If it helps, this is the reshape method:

```cpp
void Scene::reshape(GLint width, GLint height)
{
    if (0 == height)
        height = 1; // Prevent division by zero
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(xmin, xmax, ymin, ymax);
    std::cout << xmin << " " << xmax << " " << ymin << " " << ymax << std::endl;
}
```
1 | GLSL std140 uniform block fields (vec, float, mat4) always 0.0. First of all: if I just use plain uniforms, everything works as it should. After switching to uniform blocks nothing works, and all values read as 0.0. I tested this with various if/then/else branches in the vertex shader that modify the output colour. According to CodeXL 1.3 (where the data is listed as a VBO, but I think that is only cosmetic), GL.BindBuffer with GL.MapBuffer, and GL.GetBufferSubData, the proper data really is in my UBO; nonetheless, the uniform block in the shader contains only 0.0. Workflow — create the UBO:

BufferUBOName = new int[2];
GL.GenBuffers(2, BufferUBOName); // Generate the buffer
BufferIndex = 0;
GL.BindBuffer(BufferTarget.UniformBuffer, BufferUBOName[0]); // Bind the buffer for writing
GL.BufferData(BufferTarget.UniformBuffer, (IntPtr)uboWorldData.SizeInBytes, IntPtr.Zero, BufferUsageHint.StreamDraw); // Request the memory to be allocated
GL.BindBufferRange(BufferRangeTarget.UniformBuffer, BufferIndex, BufferUBOName[0], IntPtr.Zero, (IntPtr)uboWorldData.SizeInBytes); // Bind the created uniform buffer to the buffer index

Creating and linking the shaders works fine and gives the following output:

GL 4.3, GLSL 4.30
Shader basic fs: ok
Shader basic vs: ok
ShaderProgram hexgrid: basic vs, basic fs
Active Uniform Block UboWorld, Size 144, BlockBinding 1, BufferPoint 0
Name UboWorld.time, Type Float (4), Offset 0
Name UboWorld.screenwidth, Type Int (4), Offset 4
Name UboWorld.screenheight, Type Int (4), Offset 8
Name UboWorld.promat, Type FloatMat4 (64), Offset 16, matrix stride 16
Name UboWorld.mvpmat, Type FloatMat4 (64), Offset 80, matrix stride 16

Done after successful program linking:

uniformBlockIndices["ubodata"] = GL.GetUniformBlockIndex(shaderprog.Handle, "UboWorld"); // Gets the uniform block index
if (uniformBlockIndices["ubodata"] != -1)
    GL.UniformBlockBinding(shaderprog.Handle, uniformBlockIndices["ubodata"], BufferUBOName[0]);

Then the VBO/VAO are created and are working fine, including drawing elements.
On UpdateFrame the UBO data is updated like this:

GL.BindBuffer(BufferTarget.UniformBuffer, BufferUBOName[0]);
GL.BufferSubData(BufferTarget.UniformBuffer, IntPtr.Zero, (IntPtr)uboWorldData.SizeInBytes, ref uboWorld);

The offsets are taken from the information read after program linking (the various values from UniformBlockActiveUniforms). I skip that output here, as the struct should reflect it properly:

[StructLayout(LayoutKind.Explicit)]
struct uboWorldData
{
    [FieldOffset(0)] public float time;
    [FieldOffset(4)] public Int32 screenwidth;
    [FieldOffset(8)] public Int32 screenheight;
    [FieldOffset(16)] public Matrix4 projectionMatrix;
    [FieldOffset(80)] public Matrix4 modelviewMatrix;
    public static readonly int SizeInBytes = 144;
}

Vertex shader for testing:

#version 400
in vec3 in_position;
in vec3 in_normal;
out vec3 normal;
out vec3 colortime;
out vec3 colortimeubo;

// These uniforms work
uniform mat4 mvpmat;
uniform mat4 promat;
uniform float time;

// This compiles fine, but always reads 0.0
layout(std140) uniform UboWorld {
    float time;
    int screenwidth;
    int screenheight;
    mat4 promat;
    mat4 mvpmat;
} w;

const float pi = 3.1415;

void main() {
    // This uses the plain uniform float
    colortime = vec3(abs(sin(time * 360.0 * pi / 180.0)), 0.3, 0.3);
    // Here the uniform block UboWorld
    colortimeubo = vec3(abs(sin(w.time * 360.0 * pi / 180.0)), 0.3, 0.3);
    normal = (mvpmat * vec4(in_normal, 0)).xyz;
    gl_Position = promat * mvpmat * vec4(in_position, 1);
}

Any hints why w.time and also w.mvpmat and w.promat are always zero-filled?
1 | Framebuffer formats with enhanced alpha precision. I render some lines with alpha values into an FBO. Because I play a lot with alpha, I need the alpha channel to have more precision than RGBA8. For example, RGBA32F works like a charm, but I only have a Quadro GPU to test on. Is there general advice on which FBO format works best (in terms of speed) on most cards? Would there be a benefit to using RGBA16F, RGBA16, etc.? What formats can I expect a GPU to support? Is RGBA32F a standard feature on GPUs no older than five years?
1 | Get fragment from mouse position. I am working on a painting app for texture artists. I am able to paint onto a flat canvas that updates the texture of a 3D object in an object viewer. Now I want to be able to paint directly onto the 3D model. One way I can think of is to get the UV coordinate from the mouse position and use that as the position to paint onto my 2D canvas, which updates the 3D model's texture. Only one object at a time is active, so that should make things a little simpler. Is this the right approach? If it is, how should I start? Or is there a simpler, better way of painting directly onto a 3D model? How does ZBrush do it?
1 | Earth's rotation along its axis. I've modeled an Earth using a sphere. I can rotate it around the z axis using the glRotatef function by incrementing the angle. How do I simulate Earth's movement, which is tilted about 23.5 degrees from the z axis? (See this image.) I guess I need to calculate the x, y, z vectors. Any ideas? I'm a complete newbie in computer graphics programming.
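One way to think about it: the 23.5-degree tilt is just a fixed rotation applied before the daily spin, e.g. glRotatef(23.5f, 0, 0, 1) followed by glRotatef(spin, 0, 1, 0) in legacy OpenGL. The spin axis is then the Y axis rotated by the tilt about Z, which a quick sketch (plain C++, no GL; the function name is illustrative) can verify:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Rotate the Y axis (0,1,0) about Z by the tilt angle: this is the axis
// the tilted planet actually spins around.
// R_z(t) * (0,1,0) = (-sin t, cos t, 0)
std::array<double, 3> tiltedSpinAxis(double tiltDegrees) {
    double t = tiltDegrees * 3.14159265358979323846 / 180.0;
    return { -std::sin(t), std::cos(t), 0.0 };
}
```

The resulting axis is still unit length, leaning away from straight up by exactly the tilt angle.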
1 | Stencil buffer vs. conditional discard in the fragment shader. I have a continuous height-mapped mesh representing a landscape. I also have 1 to, let's say, 10 wells on this landscape, represented by additional models. What I want to achieve is the illusion of an actual hole in the landscape at the place where each well sits. I see two possible solutions: draw the holes to the stencil buffer and then use it in the fragment shader to discard landscape fragments; or send an array of uniforms (vec2) and conditionally discard fragments if they are near those hole points. The question is: which approach should I prefer if I'm fine with rectangular holes and want the best performance?
1 | What alternatives to GLUT exist? I am trying to learn OpenGL, and I just found out that GLUT is obsolete. I already know SDL, and it seems to be a good alternative. Should I use SDL to develop games with OpenGL, or are there better alternatives? I am new to game development, so I don't know much about the state of the art.
1 | How to achieve rendering at huge distances? These days I've been reading about the technology behind the upcoming GTA V, in particular how it is able to truly render all the buildings you can see, without a draw-distance cap or any faking. Since I will soon be prototyping a big city environment, I wanted some advice on this topic. We're talking about a really big city with simple geometry, but fully lit and shadowed. My doubt concerns in particular the projection matrix, which imposes a maximum draw distance (the zFar parameter). At the same time, I read everywhere that zFar should be as small as possible for better rendering results, particularly because of depth-buffer precision issues caused by floating point. So, assuming my computer can render this big city at a stable framerate, how should I approach the problem of rendering parts of the city that I can see really far away, fully lit and shadowed? Shadow maps also seem to have problems with low depth-buffer precision. Thanks
1 | Time to render each frame is proportional to the number of models in the scene. This question is deliberately written in a "high level" manner to avoid screeds and screeds of code snippets; hopefully I can still get my point across. I am using C++ and OpenGL. I have a game engine, and in the engine I have my "game loop" that is called every frame. One of the things in this game loop is my call to Render(), before a call to SwapBuffers(); nothing new there. Essentially my engine draws the scene using a scene graph: I have GameObjects, and each object has a GameComponent. So when Render() is called in the game loop, I get the root GameObject and render its GameComponent, then move on to the next GameObject and render its GameComponent, and so on until all GameObjects and each of their GameComponents are drawn using glDrawElements(...). One of the GameComponents a GameObject can have is a MeshRenderer, that is, a Mesh paired with a Texture. In my scene I want to add loads of tree models, so I create, for example, 50 GameObjects that each have a MeshRenderer component (a tree), which will eventually be drawn in the game loop described above. The issue is that the time it takes to render a frame is directly proportional to the number of tree models I have (O(N), I think is how you word it). For example, with 100 trees the render time is 23 ms, with 200 trees it is 46 ms, with 400 trees it is 90 ms, and so on. This is terrible: I need at least 1000 trees in my game world, and even at around 400 my game is laggy. Because the time to draw a frame is proportional to the number of trees, I know that each frame it is essentially re-drawing every mesh (every frame drawing 100 trees, then 200, and so on, with the time scaling accordingly). My question is this: how can I store the mesh data so that I don't need to draw the meshes from scratch every frame? I'd like to draw them once, and then reuse that data for the next frame rather than re-draw it all.
Is there a way to first use glDrawElements(...), then store that mesh data (all the vertices, etc.) and on every subsequent frame call something like, for example, glDrawExistingElements() instead of glDrawElements(...)? Ultimately this would mean that for consecutive frames the draw time is exactly the same whether I have 1 tree or 1000 trees, because the system would simply say "here is the scene I already drew" instead of "let's draw it all again from scratch".
1 | Pretty sure I have support for OpenGL 4, but it's not running. What can I do? Many game engines require OpenGL to run, and I have one of those. I've confirmed that the program, and any benchmarks for OpenGL versions above OpenGL 2, fail to run. Is there a way to confirm I have the needed dependencies or something to that effect? Is OpenGL installed system-wide, or is it just a library included in other programs? I'm specifically using Ubuntu Linux 17.10; the computer's specs are fairly low, but I've run the engine on Windows on the same machine before. Linux should have more up-to-date graphics drivers, so I'm not sure what the root of the problem might be.
1 | Spherical harmonics lighting interpolation. I want to use hardware filtering to smooth out values between texels of a texture when I'm accessing texels at coordinates that are not directly at the center of a texel. The catch is that the texels store 2 bands of spherical harmonics coefficients (4 coefficients), not RGBA intensity values. Can I just use hardware filtering like that (GL_LINEAR, with and without mip-mapping) without any special considerations? In other terms: if I were to first convert the coefficients back to intensities and then manually interpolate between two intensities, would the resulting intensity be the same as if I interpolated between the coefficient vectors directly and then converted the interpolated result to an intensity?
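For what it's worth, the second question can be checked directly: 2-band SH evaluation is a dot product of the coefficient vector with the basis functions, i.e. linear in the coefficients, so interpolating coefficients and then evaluating gives the same intensity as evaluating and then interpolating. A sketch (the basis constants are the standard real SH ones; names are illustrative):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using SH4 = std::array<float, 4>;  // 2 bands: 1 x L0 + 3 x L1 coefficients

// Evaluate 2-band SH for a unit direction (x, y, z).
float evalSH(const SH4& c, float x, float y, float z) {
    const float Y0 = 0.282095f;  // L0 basis constant
    const float Y1 = 0.488603f;  // L1 basis constant
    return c[0] * Y0 + c[1] * Y1 * y + c[2] * Y1 * z + c[3] * Y1 * x;
}

// Linear interpolation of coefficient vectors -- what GL_LINEAR does per channel.
SH4 lerpSH(const SH4& a, const SH4& b, float t) {
    SH4 r;
    for (int i = 0; i < 4; ++i) r[i] = a[i] * (1 - t) + b[i] * t;
    return r;
}
```

Because evalSH is linear in c, evalSH(lerpSH(a, b, t), d) == lerp(evalSH(a, d), evalSH(b, d), t) for any direction d, which is exactly the equivalence asked about.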
1 | How could I render a star like those in Elite: Dangerous — coronas and some sort of 3D "plasma" effect on the surface? Let's assume I wish to render a large 3D star object in a game engine using OpenGL. I can use a few different meshing methods: mapping a cube to a sphere (with better subdivision to reduce distortion), a primitive latitude/longitude based "globe" mesh, or a selectively subdivided icosphere. I would choose whatever is easiest to apply the 3D effect to. I'm guessing an icosphere, since it doesn't distort as much, and the issues with mathematical discontinuities (when finding the tangent, normal, and binormal vectors) should not be a worry (unless rendering the following effect would use a bump/normal-mapping technique with a GPU-generated texture?). Further, let's assume I wish to render a boiling, roiling, or vaguely evolving surface. The diamond-square algorithm could generate the proper effect, but how could it be used in 3D to generate something that looks vaguely like the surface of a star? Could I use the GPU to generate a texture that evolves over time, using a procedural noise algorithm that includes the analytical derivatives (for acquiring normal/bump-map components to add "depth")? Should the noise algorithm be 2D or 3D? Lastly, what if I want to render the corona of the star in some volumetric fashion as well? Coronas created with billboards work well enough, but their appearance rapidly falls apart once the viewpoint gets closer to the star (as is common with sprite-based techniques). Example images:
1 | Does interleaving in VBOs speed up performance when using VAOs You usually get a speed up when you use interleaved VBOs instead of using multiple VBOs. Is this also valid when using VAOs? Because it's much more convenient to have a VBO for the positions, and one for the normals etc. And you can use one VBO in multiple VAOs. |
1 | Can frame buffer objects in openGL ES have different kinds of attachments? This question is with regards to the QT openGL app presented in this repository https github.com PRBonn semantic suma I am trying to run this application on NVIDIA AGX Xavier, however, I am getting errors with respect to frame buffer object attachments. I observe that whenever any framebuffer object is attached to 2 different types of attachments (for example renderbuffer object and texture), I am getting GL FRAMEBUFFER INCOMPLETE ATTACHMENT error. I observe that if a texture is being attached to any framebuffer object which already is attached to renderbuffer object then this error is seen. This can be checked with glGetError(). Could someone please suggest any workaround for this issue. I am a newbie to OpenGL so please excuse me for this question. |
1 | GLSL: multiple uniform structs. I'm developing a lighting system for my voxel game, and I have to send many lights (say, up to 200) to my shader program. Those lights contain the following data: position (vec3), color (vec3), radius (float), and strength (float). What is the most efficient way to send a lot of those light structs to my shaders? I would like it to work with lower versions of OpenGL, like 2.1.
1 | Geometric Transformations on the CPU vs GPU I've noticed that many 3d programs normally do vector matrix calculations as well as geometric transformations on the CPU. Has anyone found an advantage in moving these calculations into vertex shaders on the GPU? |
1 | How can I fade in 3D foliage smoothly? Take a look at this picture: it shows me diving down from a great height. As you can see, the world is a simple one with grass, snow, trees, etc. The problem here is the 3D foliage. It quite obviously just "jumps" into the scene. It also changes the scene dramatically: instead of simply adding detail to the current scene, it effectively overwrites the terrain underneath it and sets its own colours and details. Let's take a look at a popular game such as War Thunder. Yes, you can see the 3D foliage appearing as the player gets closer to it. However, it is very hard to actually notice how or where it is appearing. Also, the grass adds detail to the terrain, rather than completely overwriting it and presenting a completely different picture. The TL;DR on this is: my game has grass that "pops" into existence and creates an obvious change in the terrain, whereas professional games have unobtrusive, hard-to-notice geometry gradually fading into view. How do other games do this, and how can I implement it?
1 | Order of rotation in Euler angles. I want to control the direction my camera looks, so I'm using Euler angles, where rotating around an axis is relative to the rotation around the previous axis. Something like this: I want to always rotate using the blue axes. So I'm using three separate matrices to track rotation along their respective axes.

...
glm::vec3 mulVec(glm::mat4 const &, glm::vec3 &);

glm::vec3 camPos = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 target = glm::vec3(0.0f, 0.0f, 5.0f);
glm::vec3 camup  = glm::vec3(0.0f, 1.0f, 0.0f);

tulsi::ModelInput a1;
while (1) {
    ...
    a1.tick(); // Tracks user input.
    glm::mat4 lookat = glm::lookAt(
        camPos,
        mulVec(a1.yAxis * a1.xAxis, target),
        mulVec(a1.yAxis * a1.xAxis, camup)
    );
    ...
}

I get the desired output (rotate around the x axis, then y). But when I switch the order of multiplication, I get different output (rotate around the y axis, then x):

glm::mat4 lookat = glm::lookAt(
    camPos,
    mulVec(a1.xAxis * a1.yAxis, target),
    mulVec(a1.xAxis * a1.yAxis, camup)
);

So what's happening here? I assumed the order should not matter.
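The short, checkable fact behind this: 3D rotation matrices do not commute in general, so xAxis * yAxis and yAxis * xAxis are genuinely different transforms. A few lines of plain C++ demonstrate it (hand-rolled matrices so it runs without glm):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Row-major 3x3 matrix product.
Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat3 rotX(double a) {  // rotation about the x axis
    double c = std::cos(a), s = std::sin(a);
    return {{{1, 0, 0}, {0, c, -s}, {0, s, c}}};
}

Mat3 rotY(double a) {  // rotation about the y axis
    double c = std::cos(a), s = std::sin(a);
    return {{{c, 0, s}, {0, 1, 0}, {-s, 0, c}}};
}
```

Multiplying rotX then rotY versus rotY then rotX yields matrices that differ in their entries, which is exactly why the two lookAt variants behave differently.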
1 | How to use multiple custom vertex attributes in OpenGL My code currently uses glBindAttribLocation and glVertexAttribPointer to specify two custom vertex attributes in indices 6 and 7. This seems to work fine, but I wish to add another attribute and no index other than 6 or 7 will work the shader instead acts like the attribute is always set to a value of 0. I'm using gl Vertex, gl Normal, gl Color and gl MultiTexCoord0, and apparently some nVidia thing means indices 0, 2, 3 and 8 are off limits, but that should still leave other indices. I don't use gl SecondaryColor or gl FogCoord anywhere in my code or shaders for example, but indices 4 and 5 still don't work. If I change graphics cards for an ATI one which supports more than 16 attributes, then indices 16 work fine, but I want to support cards with only 16 attributes. |
1 | OpenGL error: the loaded object takes on the colors and style of the texture. I'm new to OpenGL and faced this problem. Draw function:

void Renderer::Draw() {
    glUseProgram(programID);
    shader.UseProgram();
    mat4 view = mat4(mat3(myCamera->GetViewMatrix()));
    glm::mat4 VP = myCamera->GetProjectionMatrix() * myCamera->GetViewMatrix();
    shader.BindVPMatrix(&VP[0][0]);
    glm::mat4 VP2 = myCamera->GetProjectionMatrix() * myCamera->GetViewMatrix() * floorM;
    model13D->Render(&shader, scale(100.0f, 100.0f, 100.0f)); // scaling the skybox
    t2->Bind();
    model3D->Render(&shader, scale(2.0f, 2.0f, 2.0f)); // scaling the aircraft
    glUniformMatrix4fv(VPID, 1, GL_FALSE, &VP2[0][0]);
    mySquare->Draw();
}

The loading code:

shader.LoadProgram();
model3D = new Model3D();
model3D->LoadFromFile("data/models/obj/Galaxy/galaxy.obj", true);
model3D->Initialize();
myCamera->SetPerspectiveProjection(90.0f, 4.0f / 3.0f, 0.1f, 10000000.0f);
model13D = new Model3D();
model13D->LoadFromFile("data/models/obj/skybox/Skybox.obj", true);
model13D->Initialize();
// Projection matrix
shader.LoadProgram();
// View matrix
myCamera->Reset(
    0.0f, 0.0f, 5.0f,  // Camera position
    0.0f, 0.0f, 0.0f,  // Look-at point
    0.0f, 1.0f, 0.0f   // Up vector
);
std::string Images_names[6];
Images_names[0] = "right.png";
Images_names[1] = "left.png";
Images_names[2] = "top.png";
Images_names[3] = "bottom.png";
Images_names[4] = "back.png";
Images_names[5] = "front.png";
t = new Texture(Images_names, 0);
t2 = new Texture("arrakisday dn.tga", 1);
1 | Render an infinite plane. I wish to be able to render any plane (defined by a normal and a distance from the origin). The closest answer states to use "directions" (4D vectors with w = 0) with Lengyel's infinite projection matrix, but unfortunately nothing renders with my implementation:

const float FOV = glm::radians(55.0f);
const float e = 1.0f / std::tan(FOV / 2.0f);
const float a = height / width;
const float n = 0.1f;
this->projection = glm::mat4(
    e, 0, 0, 0,
    0, e / a, 0, 0,
    0, 0, -1.0f, -2.0f * n,
    0, 0, -1, 0
);

std::vector<Vertex> vertices = {
    Vertex{ glm::vec4( 0.0f,  0.0f, 0.0f, 1.0f), Vector::Up() },
    Vertex{ glm::vec4( 1.0f,  0.0f, 0.0f, 0.0f), Vector::Up() },
    Vertex{ glm::vec4( 0.0f,  1.0f, 0.0f, 0.0f), Vector::Up() },
    Vertex{ glm::vec4(-1.0f,  0.0f, 0.0f, 0.0f), Vector::Up() },
    Vertex{ glm::vec4( 0.0f, -1.0f, 0.0f, 0.0f), Vector::Up() }
};
std::vector<GLuint> indices = { 0, 1, 2, 0, 2, 3, 0, 3, 4, 0, 4, 1 };
// Render by indices ...
1 | Direction from the camera to the light source. I'm currently writing a game using OpenGL and GLSL. For the shader I need the direction from the current camera to the light source. The light source position is given by lightSource.position as a uniform, as well as the ModelviewMatrix (mat4). The task is: // TODO: compute the vectors from the current vertex towards the camera and towards the light source. I need to fill two varyings with that information, but I'm not sure how to compute them. In addition, I have the following:

#version 330
layout(location = 0) in vec3 vertex;
layout(location = 1) in vec3 vertex_normal;
1 | How to make a volume heat-distortion effect in OpenGL? I'm working on adding volume effects to an existing, open-source game engine. At the moment, the engine only supports two-dimensional "thruster" bitmaps, with a planar heat distortion drawn over the bitmap. Is it possible to make a volume-filling heat distortion? If so, what would be the general idea behind it?
1 | Sobel edge detection on a depth texture. I'm currently trying to implement contour shading via edge detection on a depth texture. I render the color and depth information into two textures. In my post-processing shader I do the following:

out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D colorTexture;
uniform sampler2D depthTexture;
uniform float far;
uniform float near;

Then the matrices for the Sobel filter:

mat3 sobel_y = mat3(
     1.0, 0.0, -1.0,
     2.0, 0.0, -2.0,
     1.0, 0.0, -1.0
);
mat3 sobel_x = mat3(
     1.0,  2.0,  1.0,
     0.0,  0.0,  0.0,
    -1.0, -2.0, -1.0
);

The function to linearize the depth value:

float LinearizeDepth(float z) {
    float n = near;
    float f = far;
    return (2.0 * n) / (f + n - z * (f - n));
}

In main I fill the matrix to calculate the gradients, then subtract the result from the diffuse color to get black if the gradient is high (an edge), and just output the diffuse color if the gradient is low:

void main() {
    vec3 colorDiff = texture(colorTexture, TexCoords).rgb;
    mat3 I;
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            float depth = LinearizeDepth(texture(depthTexture, TexCoords + vec2(i - 1, j - 1)).r);
            I[i][j] = depth;
        }
    }
    float gx = dot(sobel_x[0], I[0]) + dot(sobel_x[1], I[1]) + dot(sobel_x[2], I[2]);
    float gy = dot(sobel_y[0], I[0]) + dot(sobel_y[1], I[1]) + dot(sobel_y[2], I[2]);
    float g = sqrt(pow(gx, 2.0) + pow(gy, 2.0));
    FragColor = vec4(colorDiff - vec3(g), 1.0);
}

My problem is that g is always 0, since I just get the normal color as output. If I just output the depth values, it does look correct. Any idea what I'm doing wrong here?
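As a sanity check, it can help to run the same Sobel step on the CPU: if all nine samples return the same depth — which is one likely outcome when the sample offsets are whole texture coordinates (vec2(i - 1, j - 1)) instead of texel-sized steps (divided by the texture size) and the sampler clamps — the gradient is exactly zero. A hypothetical CPU mirror of the shader's kernel:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// CPU mirror of the shader's gradient computation over a 3x3 depth window.
double sobelMag(const std::array<std::array<double, 3>, 3>& I) {
    const double sx[3][3] = {{ 1,  2,  1}, {0, 0, 0}, {-1, -2, -1}};
    const double sy[3][3] = {{ 1,  0, -1}, {2, 0, -2}, { 1,  0, -1}};
    double gx = 0, gy = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            gx += sx[i][j] * I[i][j];
            gy += sy[i][j] * I[i][j];
        }
    return std::sqrt(gx * gx + gy * gy);
}
```

A constant window produces a magnitude of exactly zero; a window with a vertical step produces a nonzero one, so if g stays zero the samples are almost certainly all landing on the same value.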
1 | Are there any reasons to use legacy (2.x) OpenGL? The benefits of the modern OpenGL 3.x and 4.x APIs are well documented, but I'm wondering whether there are any benefits to sticking with the old OpenGL, or whether learning OpenGL 2.x is a complete waste of time now no matter what. In particular, I've wondered whether using the OpenGL 2.x API is appropriate if the target platform's graphics hardware is only capable of OpenGL 2.x. Would a driver update on said target platform allow programs compiled using the modern OpenGL APIs to run on this old platform? If both work, which would be faster? Thanks
1 | glm direction vector rotation. I'm working on a flight simulator, but I'm stuck on my airplane's orientation. I tried some things, but none worked correctly. This is what I have. To be able to move it and roll it around itself, I need two vectors, forward and up, and I use them to create the quaternions I need for the rotation:

void Plane::Update() {
    m_position += (m_forward * m_speed);
    mat4 translation = translate(mat4(1.0f), m_position);
    float angle = dot(vec3(1.0f, 0.0f, 0.0f), m_forward);
    quat direction = angleAxis(acos(angle), cross(vec3(1.0f, 0.0f, 0.0f), m_forward));
    m_matrix = translation * mat4_cast(direction);
}

vec3(1.0f, 0.0f, 0.0f) is my model's orientation. Note that in this code I don't have a quaternion for rolling the plane, because I first want to have a correct direction. This works great. What doesn't work is when I want to make it take off. To do that, I first get the right vector, then use it to create my quaternion with the angle I need, and apply the rotation to the forward and up vectors:

void Plane::FlyUp() {
    vec3 right = cross(m_forward, m_up);
    quat temp = angleAxis(radians(1.0f), right);
    m_up = temp * m_up;
    m_up = normalize(m_up);
    m_forward = temp * m_forward;
    m_forward = normalize(m_forward);
}

Using the debugger and an online vector visualizer, it seems to give me the right vectors, but the plane rotates weirdly (in fact, it's not even only rotating, it's scaled too for some reason...). What am I doing or understanding wrong? Edit: To be more precise, here are screenshots of what I have, and what I'm trying to have, whatever the m_forward vector is pointing to.
1 | Custom complex body shape with cut feature I need to create sprites (or scene2d actors or whatever) with custom shape, preferably generated from a png image (marching squares?). I followed some Mesh tutorials but they all hardcode the triangularized shape vertices in an array, which is completely insane. I will later have to give my sprite a cut feature, like this game. so I'll have to regenerate and redraw a new shape several times. Some told me to review the Drawable classes as well as Mesh, and create my own MeshDrawable that renders using meshes. Do you have an idea of how to dynamically generate vertices and shapes instead of hardcoding coordinates? at what level will my Mesh be responding to touch events and reacting with the user? Any clarification will be welcome, I'm kind of lost. |
1 | Loading screen in OpenGL with the GLUT API. I'm trying to make a loading screen. My idea is to have two threads working: one in the background loading the 'game' scene, and one displaying the 'load' scene. When building the 'game' scene is finished, it's displayed instead of the 'load' scene. Before threading, I want to make it work the usual way: first display the 'load' scene, then the 'game' scene. Both GameScene and LoadScene inherit from a virtual class Scene. Here's what my main looks like:

LoadScene* load_scene;
GameScene* game_scene;
Scene* scene;

int main(int argc, char** argv) {
    // Init GLUT and create window
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA | GLUT_STENCIL);
    glutInitWindowPosition(initWindowX, initWindowY);
    glutInitWindowSize(windowWidth, windowHeight);
    glutCreateWindow("OpenGL");

    // Register callback functions for change in size and rendering.
    glutDisplayFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutIdleFunc(renderScene);

    // Register input callback functions.
    // 'Normal' keys processing
    glutKeyboardFunc(processNormalKeys);
    glutKeyboardUpFunc(processNormalKeysUp);
    // Special keys processing
    glutSpecialFunc(processSpecialKeys);
    glutSpecialUpFunc(processSpecialKeysUp);
    // Mouse callbacks
    // void glutMouseFunc(void (*func)(int button, int state, int x, int y));
    glutMotionFunc(processActiveMouseMove);
    glutPassiveMotionFunc(processPassiveMouseMove);
    glutMouseFunc(processMouseButtons);
    glutMouseWheelFunc(processMouseWheel);

    // Position mouse in centre of window before main loop (window not resized yet)
    glutWarpPointer(windowWidth / 2, windowHeight / 2);
    // Hide mouse cursor
    glutSetCursor(GLUT_CURSOR_NONE);

    // Initialise input and scene objects.
    input = new Input();
    scene = new LoadingScene(input);
    load_scene = new LoadScene();
    game_scene = new GameScene(input);
    // load_scene: Create GameScene(); Create LoadingScene(GameScene*); delete LoadingScene();
    // The LoadingScene thread tells the game to start loading assets.

    // Enter GLUT event processing cycle
    glutMainLoop();
    return 1;
}

The game scene reassignment must be called in the changeSize function. I thought it should be in the renderScene function, but this is the only way it works. Here's what it looks like:

void changeSize(int w, int h) {
    scene = load_scene;
    scene->resize(w, h);
}

Any ideas how I could go about it? So far, if I assign the 'load_scene' object to the 'scene' pointer it displays the 'load' scene, and the same goes for the 'game' scene. Is it even possible to achieve a transition between scenes without threading?
1 | Bullet Physics integration: direct movement of rigid bodies. I'm adding Bullet physics to my engine. The physics simulation bits are all working nicely, but one bit I'm struggling with is being able to move objects using their coordinates and then have them affect other Bullet objects. I currently have this code just before I step the simulation; it moves each object to the coordinates the game engine thinks the object should be at, due to outside movement:

btTransform trans;
trans.setFromOpenGLMatrix(glm::value_ptr(host->getTransMat() * host->getRotMat()));
motionState->setWorldTransform(trans);
body->setWorldTransform(trans);

Then I step the simulation and move every object to where Bullet thinks it should be. I am aware there are nicer ways to do this part (custom-written motion-state classes, I think), but I want to get the logic down first. This works, but moving a cube directly into another causes the second cube to just shake a bit. I've read in a few places that I should be applying forces to objects, but I don't really want to expose the btRigidBody and physics stuff to the core game object, and I have a lot of code that does its own movement using coordinates, which I don't really want to rewrite, although I will if it's the only way. Could I replace the code below with something that compares the position of the game object to the rigid body's position and applies the correct force to make that happen? How would I implement this? It can't be as simple as F = ma, can it, given that this would happen every frame?

Edit 1: I have implemented the formula provided by DMGregory and have had no success. I've tried swapping this out for applyCentralImpulse too. The objects just stay stationary, and anything falling goes into hyper-speed. This runs for every physics object just before stepping the simulation, and then the simulation's positions are applied back to their hosts:

if (host == nullptr) return;
btVector3 newPos = convert(host->getPos());
btVector3 oldPos = body->getWorldTransform().getOrigin();
btVector3 dist = newPos - oldPos;
btVector3 acc = 2 * ((newPos - oldPos - (body->getLinearVelocity() * timeStep)) / (pow(timeStep, 2)));
if (dist.length() > btScalar(0.1f))
    std::cout << acc.length() << " large acc\n";
body->applyCentralForce(mass * acc);
1 | How do I calculate NDC coordinates in a fragment shader? I have some weird problem going on in my OpenGL shader. First, I pass the view-space position from the vertex shader to the fragment shader like this:

vec4 view_pos = V * M * vec4(world_position.xyz, 1.0);

Then, in the fragment shader, I do this:

vec4 clip_pos = P * view_pos;
vec2 ndc_xy = clip_pos.xy / clip_pos.w;
vec2 ndc_fragcoord = 2.0 * gl_FragCoord.xy / viewport_wh - 1.0;

I would expect ndc_xy to be the same as ndc_fragcoord; however, when rendering them to textures, they are not nearly the same. Can someone explain the difference to me? How can I get the same result in ndc_fragcoord by using the view-space position, or vice versa? This is important information for me at a later stage, when reconstructing the view-space position from the depth. I have already been sitting on this the whole day, and am close to despair.
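For reference, the two paths are related by the fixed-function viewport transform, which maps NDC x in [-1, 1] to window x in [0, width] (gl_FragCoord additionally samples at pixel centers, i.e. with a half-pixel offset, so the two textures will differ slightly per pixel even when everything is correct). A minimal sketch of the mapping and its inverse:

```cpp
#include <cassert>
#include <cmath>

// NDC <-> window coordinate mapping for one axis (default glViewport origin,
// depth range ignored).
double windowFromNdc(double ndc, double size) { return (ndc * 0.5 + 0.5) * size; }
double ndcFromWindow(double win, double size) { return 2.0 * win / size - 1.0; }
```

The shader's ndc_fragcoord line is exactly ndcFromWindow applied to gl_FragCoord, so after a round trip the two representations must agree up to that half-pixel offset.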
1 | Point sprite size and coloring, OpenGL ES 2.0. I am trying to render point sprites with variable color, but they are all black. Before I added gl_PointSize = 5.0 they had color. The environment is Android with C++, OpenGL ES 2.0 I believe. I have tried to work from "point sprite size" and "applying color to point sprite", but have had no luck.

Vertex shader:

precision mediump float;
precision mediump int;
attribute vec4 vertex;
uniform mat4 mvp;
varying vec4 v_color;
void main() {
    gl_Position = mvp * vertex;
    v_color = vertex;
    gl_PointSize = 5.0;
}

Fragment shader:

precision mediump float;
precision mediump int;
varying vec4 v_color;
uniform sampler2D tex;
void main() {
    gl_FragColor = vec4(v_color);
    // gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    // gl_FragColor = texture2D(tex, gl_PointCoord);
    // gl_FragColor = vec4(v_color.rgb, texture2D(tex, gl_PointCoord).a);
}

All of the gl_FragColor assignments, both commented and active, lead to black points. How do I give these points color?
1 | Depth to World Space Position problem I am having a problem turning depth into a world-space position. I am using GLSL. What could be going wrong? Here is the code: float x = (uv.x * 2.0) - 1.0; float y = (uv.y * 2.0) - 1.0; float z = (depth * 2.0) - 1.0; vec4 pos_clip = vec4(x, y, z, 1.0); vec4 inverse_pos_view = InvCamProjMatrix * pos_clip; inverse_pos_view.xyz /= inverse_pos_view.w; vec4 pos_ws = InvCamViewMatrix * inverse_pos_view; return pos_ws.xyz; Thanks.
1 | Prevent rendering queue from overflowing when using Vsync My game loop currently looks something like this: while (!quit) { takeInput(); // Input is sampled however fast the loop can run time_now = get_time(); time_passed += (time_now - time_prev); time_prev = time_now; while (time_passed > SOME_TIME) { update(); // Update is done over a fixed time step for determinism time_passed -= SOME_TIME_STEP; // This time step is less than or equal to SOME_TIME } render(time_passed); // Render using interpolation from previously updated position and velocity } Basically, I take input as fast as I can, run the movement and physics only in discrete chunks of time, and interpolate the difference in the renderer. The renderer is set to draw at Vsync (using SDL_GL_SetSwapInterval(1)) and currently the value of SOME_TIME is set so that update() runs at half the display's refresh frequency (but with a minimum bound of SOME_TIME_STEP, which is set to 5 milliseconds). The problem is that I am running the renderer as fast as I can, which submits frames continuously. Because I am currently loading a test level, the game loop basically finishes in a few microseconds (unless there is something wrong with how I am calculating time too). As far as I understand, my program will submit frames to the driver way faster than it can draw them (because of Vsync), and this should lead to some kind of periodic locking up or lagging. So my question is, how do I prevent this? Should I put the render call in a time-bound loop too, just like update()? Should I call sleep() manually instead of relying on the graphics driver for Vsync?
1 | GLSL to Cg: why is the effect different? With reference to this question, where I was trying to make the shader compile, I am now trying to make an effect appear. The effect can be shown here, through a GLSL shader. But when I use the equivalent Cg shader, the result becomes this. I am using the same images (color map/normal map) and the same code (except the way variables are retrieved). Here is the original GLSL shader: uniform sampler2D color_texture; uniform sampler2D normal_texture; void main() { // Extract the normal from the normal map vec3 normal = normalize(texture2D(normal_texture, gl_TexCoord[0].st).rgb * 2.0 - 1.0); // Determine where the light is positioned (this can be set however you like) vec3 light_pos = normalize(vec3(1.0, 1.0, 1.5)); // Calculate the lighting diffuse value float diffuse = max(dot(normal, light_pos), 0.0); vec3 color = diffuse * texture2D(color_texture, gl_TexCoord[0].st).rgb; // Set the output color of our current pixel gl_FragColor = vec4(color, 1.0); } And here is the Cg shader I wrote: struct fsOutput { float4 color : COLOR; }; uniform sampler2D color_texture : TEXUNIT0; uniform sampler2D normal_texture : TEXUNIT1; fsOutput FS_Main(float2 colorCoords : TEXCOORD0, float2 normalCoords : TEXCOORD1) { fsOutput fragm; float4 anorm = tex2D(normal_texture, normalCoords); float3 normal = normalize(anorm.rgb * 2.0f - 1.0f); float3 light_pos = normalize(float3(1.0f, 1.0f, 1.5f)); float diffuse = max(dot(normal, light_pos), 0.0); float3 color = diffuse * tex2D(color_texture, colorCoords).rgb; fragm.color = float4(color, 1.0f); return fragm; } Please let me know if something needs to be changed in order to obtain the effect, or if you need the C++ code.
1 | How do I create a regular grid of triangles correctly? I am trying to create a terrain using OpenTK/OpenGL. I have a problem with the VBO/IBO. I think a picture of the problem is the best way to show it: I don't understand why the last triangle of a row connects to the first vertices of the next row. Here is the code: int mapSize = 7; void CreateVertexBuffer() { Vector3[] vertices = new Vector3[(mapSize + 1) * (mapSize + 1)]; short[] indices = new short[mapSize * mapSize * 6]; for (int x = 0; x < mapSize + 1; x++) for (int y = 0; y < mapSize + 1; y++) { int li_offset = x * (mapSize + 1) + y; vertices[li_offset] = new Vector3(x * 4.0f, 0, y * 4.0f); } int index = 0; for (int x = 0; x < mapSize; x++) for (int y = 0; y < mapSize; y++) { indices[index + 0] = (short)(x * (mapSize + 1) + y); indices[index + 1] = (short)(indices[index + 0] + mapSize + 2); indices[index + 2] = (short)(indices[index + 0] + mapSize + 1); indices[index + 3] = (short)(indices[index + 0]); indices[index + 4] = (short)(indices[index + 0] + 1); indices[index + 5] = (short)(indices[index + 0] + 2); index += 6; } GL.GenBuffers(1, out vbo); GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BufferData<Vector3>(BufferTarget.ArrayBuffer, new IntPtr(vertices.Length * Vector3.SizeInBytes), vertices, BufferUsageHint.StaticDraw); GL.GenBuffers(1, out ibo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); GL.BufferData<short>(BufferTarget.ElementArrayBuffer, new IntPtr(indices.Length * sizeof(short)), indices, BufferUsageHint.StaticDraw); } How can I make the last row of triangles properly formed like all the others?
1 | OpenGL VBO or glBegin()/glEnd()? I was recently given this link to a tutorial site by someone to whom I gave my original OpenGL Red Book. The third header down says distinctly to forget glBegin() and glEnd() as the typical render method. I learned via the Red Book's method, but I see some benefit in VBOs. Is this really the way to go, and if so, is there a way to easily convert the render code, and subsequent shaders, to VBOs and the corresponding data types?
1 | Correct order of operations when enabling/disabling Cg shaders in OpenGL I've started writing an Effect class which uses Cg shaders in OpenGL, and I'm a bit confused about the order of operations when creating and rendering using Cg. Currently, my Effect class contains CGprogram, CGprofile and an array of CGparameter variables which get populated on loading the Effect, similar to this: m_vertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX); cgGLSetOptimalOptions(m_vertexProfile); m_vertexProgram = cgCreateProgramFromFile(g_cgContext, CG_SOURCE, fileName, m_vertexProfile, entryPointName, NULL); cgGLLoadProgram(m_vertexProgram); // color CGparameter param = cgGetFirstParameter(m_vertexProgram, CG_PROGRAM); while (param) { const char* paramName = cgGetParameterName(param); m_vertexParameters[m_vertexParamNum++] = param; param = cgGetNextParameter(param); } It's not exactly like this and this is only using a vertex shader, but it contains the important code. Anyway, that's how I create the Effect, and then when I want to use it during a render I call Enable() and Disable() before and after I draw the verts, etc.: void Effect::Enable() { cgGLBindProgram(m_vertexProgram); cgGLEnableProfile(m_vertexProfile); } void Effect::Disable() { cgGLUnbindProgram(m_vertexProfile); cgGLDisableProfile(m_vertexProfile); } I'm not sure if this is the correct way to do it, though. Is it correct to enable and disable the profile for each shader? More to the point, do I actually want a profile per shader? I'm using the same profile for each shader, so surely I could just have a global one and use that? Any advice would be much appreciated.
1 | GLSL Multiple Uniform Structs I'm developing a lighting system for my voxel game, and I have to send multiple lights (a lot, say up to 200) to my shader program. Those lights contain the following data: Position (vec3), Color (vec3), Radius (float), Strength (float). What is the most efficient way to send a lot of those light structs to my shaders? I would like it to work with lower versions of OpenGL, like 2.1.
1 | Running OpenGL app on Windows XP x86 produces incorrect texture colors I'm working with the Cen64 emulator and I compiled from source an x86 version that operates fine on Windows 10 x64. As soon as I run it on a Windows XP x86 machine, the colors are all incorrect. Here are some screenshots: Running on Win 10 / Running on Win XP SP3. The color format is set to GL_UNSIGNED_SHORT_5_5_5_1 and the internal format is GL_RGBA: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, hres + hskip, vres, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, buffer); I'm pretty new to configuring color packing formats, but it appears like some color channels are getting swapped? Maybe some sort of endianness issue going from an x64 to an x86 architecture? Please let me know if I need to show more of my code to help debug the issue. Thanks for any help!
1 | GLFW Problem: "The driver does not appear to support OpenGL" I am trying to learn OpenGL and GLFW. When I run this simple project I get error ID "65542" with the message "The driver does not appear to support OpenGL". My graphics card is an old and cheap Intel with 64 MB.
1 | Frame timing for GLFW versus GLUT I need a library which ensures that the timing between frames is as constant as possible during an experiment in visual psychophysics. This is usually done by synchronizing the main loop with the refresh rate of the screen. For example, if my monitor runs at 60 Hz I would like to specify that frequency to my framework. If my game loop is the following: void gameloop() { // do some computation printDeltaT(); // Flip buffers } I would like to see a constant time interval printed. Is this possible with GLFW?
1 | Learning OpenGL 3.3 on an OpenGL 2.1 machine? I am going to learn OpenGL 3.3 instead of OpenGL 2.1 because much of OpenGL 2.1 is deprecated (based on this information). It's a pity that my "old friend" only supports OpenGL 2.1. Based on this article at learnopengl.com, I found that the step from OpenGL 2.1 to OpenGL 3.3 was a huge jump by the Khronos Group, and I will receive errors if I run OpenGL 3.3 functions on an OpenGL 2.1 machine. 1) Is it possible to learn OpenGL 3.3 on an OpenGL 2.1 machine? 2) Related to this question from 7 years ago: is learning OpenGL 2.1 useless today? Thank you!
1 | Problem using glm::lookAt I am trying to rotate a sprite so it is always facing a 3D camera. Object: GLfloat vertexData[] = { // X Y Z U V 0.0f, 0.8f, 0.0f, 0.5f, 1.0f, -0.8f, -0.8f, 0.0f, 0.0f, 0.0f, 0.8f, -0.8f, 0.0f, 1.0f, 0.0f }; Per-frame transform: glm::mat4 newTransform = glm::lookAt(glm::vec3(0), gCamera.position(), gCamera.up()); shaders->setUniform("camera", gCamera.matrix()); shaders->setUniform("model", newTransform); In the vertex shader: gl_Position = camera * model * vec4(vert, 1); The object will track the camera if I move the camera up or down, but if I move the camera left/right (spin the camera around the object's y axis), it rotates in the other direction, so I end up seeing its front twice and its back twice as I rotate around it 360 degrees. If I use -gCamera.up() instead, it tracks the camera side to side, but spins the opposite direction when I move the camera up/down. What am I doing wrong?
1 | Advice on rendering a 2D scene My game will be made up of objects. Essentially the level editor will give me a bunch of objects to choose from, I can drag them in, and thus a level is made. The objects could have animation, in which case they are responsible for knowing which frame and layer they are on. So I can have these: animated non-movable objects, unanimated non-movable objects, animated movable objects, unanimated movable objects. First I was thinking of building a quadtree for all non-movable objects when the level starts. This way I can easily avoid rendering ones which are not in view. Then from there I can do bounding-box checks on dynamic objects (or maybe if I add something like Box2D it can do that). But then comes the rendering. At first I was thinking of each object rendering itself, but then I cannot do atlasing, and I'd like to do atlasing. I was thinking that animated objects can just use their own frame sheet, but objects which are not animated should use an atlas. I know this can be categorized as premature optimization, but I don't want to eventually find out that it is horribly slow. Any design suggestions or comments would be welcome. Does it seem like a good idea to try to atlas in the sort of game I'm making, which is a platformer that is not tile based?
1 | Why does the lighting change the object's color? I have code that draws a sphere. Without lighting it is white, but if I enable lighting, it's drawn in gray. I don't know why the sphere changed its color. #include <GL/gl.h> #include <GL/glu.h> #include <GL/glut.h> void init(void) { GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 }; GLfloat mat_shininess[] = { 50.0 }; GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 }; glClearColor(0.0, 0.0, 0.0, 0.0); glShadeModel(GL_SMOOTH); glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular); glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess); glLightfv(GL_LIGHT0, GL_POSITION, light_position); glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glEnable(GL_DEPTH_TEST); } void display(void) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glutSolidSphere(1.0, 20, 16); glFlush(); } void reshape(int w, int h) { glViewport(0, 0, (GLsizei) w, (GLsizei) h); glMatrixMode(GL_PROJECTION); glLoadIdentity(); if (w <= h) glOrtho(-1.5, 1.5, -1.5 * (GLfloat)h / (GLfloat)w, 1.5 * (GLfloat)h / (GLfloat)w, -10.0, 10.0); else glOrtho(-1.5 * (GLfloat)w / (GLfloat)h, 1.5 * (GLfloat)w / (GLfloat)h, -1.5, 1.5, -10.0, 10.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH); glutInitWindowSize(500, 500); glutInitWindowPosition(100, 100); glutCreateWindow(argv[0]); init(); glutDisplayFunc(display); glutReshapeFunc(reshape); glutMainLoop(); return 0; }
1 | Directional lighting I have managed to get a point light working, but I am facing problems with directional lighting. Fragment shader: uniform vec4 lightColour; uniform vec3 lightPos; uniform float lightRadius; in Vertex { vec3 colour; vec2 texCoord; vec3 normal; vec3 tangent; vec3 binormal; vec3 worldPos; } IN; void main(void) { vec4 diffuse = texture(diffuseTex, IN.texCoord); mat3 TBN = mat3(IN.tangent, IN.binormal, IN.normal); vec3 normal = normalize(TBN * (texture(bumpTex, IN.texCoord).rgb * 2.0 - 1.0)); vec3 incident = normalize(lightPos - IN.worldPos); float lambert = max(0.0, dot(incident, normal)); float dist = length(lightPos - IN.worldPos); float atten = 1.0 - clamp(dist / lightRadius, 0.0, 1.0); vec3 viewDir = normalize(cameraPos - IN.worldPos); vec3 halfDir = normalize(incident + viewDir); float rFactor = max(0.0, dot(halfDir, normal)); float sFactor = pow(rFactor, 33.0); vec3 colour = (diffuse.rgb * lightColour.rgb); colour += (lightColour.rgb * sFactor) * 0.33; gl_FragColor = vec4(colour * atten * lambert, diffuse.a); gl_FragColor.rgb += (diffuse.rgb * lightColour.rgb) * 0.1; } As far as I know, directional lights do not have a radius or a position, just a direction. How should I calculate directional lighting?
1 | Tangents face the same direction on opposite sides of mesh I have noticed that the tangent vectors I am calculating are not always facing the correct direction. The tangents on the left and right of the mesh both face the same direction. Here is a screenshot showing this: I've mapped the XYZ values of the tangent vectors to RGB values in the fragment shader. Z is towards the camera. As you can see, both sides of the teapot have tangents facing towards the camera instead of only the left, as I'd expected. Also, the UVs for the mesh are correct and increase smoothly across the surface (I used Blender's default sphere mapping to generate the UV coordinates); the tangents are correct even over the seams. I am using the following piece of code to compute my tangents: for (size_t i = 0; i < mesh.face_list.size(); i++) { const Face& face = mesh.face_list[i]; const size_t& i1 = face.vertex_indices[0]; const size_t& i2 = face.vertex_indices[1]; const size_t& i3 = face.vertex_indices[2]; const Vec3& p1 = mesh.vertex_list[i1 - 1]; const Vec3& p2 = mesh.vertex_list[i2 - 1]; const Vec3& p3 = mesh.vertex_list[i3 - 1]; // Compute edges. Vec3 e1 = p2 - p1; Vec3 e2 = p3 - p1; const size_t& txi1 = face.texcoord_indices[0]; const size_t& txi2 = face.texcoord_indices[1]; const size_t& txi3 = face.texcoord_indices[2]; Vec2 texcoordDelta1 = mesh.texcoord_list[txi2 - 1] - mesh.texcoord_list[txi1 - 1]; Vec2 texcoordDelta2 = mesh.texcoord_list[txi3 - 1] - mesh.texcoord_list[txi1 - 1]; float coefficient = 1.0f / (texcoordDelta1.x * texcoordDelta2.y - texcoordDelta2.x * texcoordDelta1.y); Vec3 faceTangent = coefficient * (texcoordDelta2.y * e1 - texcoordDelta1.y * e2); const size_t& ti1 = face.normal_indices[0]; const size_t& ti2 = face.normal_indices[1]; const size_t& ti3 = face.normal_indices[2]; // Add tangent to all vertices making up the face. mesh.tangent_list[ti1 - 1] += faceTangent; mesh.tangent_list[ti2 - 1] += faceTangent; mesh.tangent_list[ti3 - 1] += faceTangent; } // Normalise and orthogonalise all tangents. for (size_t i = 0; i < mesh.tangent_list.size(); i++) mesh.tangent_list[i] = normalise(mesh.tangent_list[i] - (mesh.normal_list[i] * dot(mesh.normal_list[i], mesh.tangent_list[i]))); As far as I can tell, the main part of the calculation seems correct compared to things I've found online: float coefficient = 1.0f / (texcoordDelta1.x * texcoordDelta2.y - texcoordDelta2.x * texcoordDelta1.y); Vec3 faceTangent = coefficient * (texcoordDelta2.y * e1 - texcoordDelta1.y * e2); Generally, normal mapping works great, but there are subtle lighting errors on parts of the mesh after the incorrect tangent-space transformation. Hopefully this is all of the information needed to get to the bottom of this.
1 | Create a crosshair in OpenGL How do I draw a white crosshair in the middle of the screen in OpenGL? It's all well and good knowing how to render objects in 3D space, but I have literally no idea how to draw something that sticks to the screen no matter what. Would this require a shader one that does not take the model-view-projection matrix into account? At what point would I draw the cross after everything else, to coincide with the painter's algorithm? Or do I give it a z value?
1 | openGL managing images, VBOs and shaders I'm working on a game where I use shaders with vertex attributes (so not immediate mode). I'm drawing lots of images and frequently changing the width/height of the quads I use to draw them. To optimize this it's probably a good idea to have one buffer, but then one needs to update the complete buffer when one image changes (or only a part of the buffer using glBufferSubData...). I was just wondering what kind of strategies you guys are using?
1 | LWJGL or OpenGL double pixels I'm working in Java with LWJGL and trying to double all my pixels. I'm trying to draw in an area of 800x450 and then stretch the complete frame image to the full 1600x900 pixels without it getting blurred. I can't figure out how to do that in Java; everything I find is in C++... A hint would be great! Thanks a lot. EDIT: I've tried drawing to a texture created in OpenGL by attaching it to the framebuffer, but I can't find a way to use glGenTextures() in Java... so this is not working... I also thought about using a shader, but then I would not be able to draw only in the smaller region...
1 | Why does glDrawElements draw nothing? I tried to implement indexed drawing, but when I call it I get nothing on screen. Even stranger, I don't get any errors from the call, and the buffer variables are packed correctly. Here is my constructor: Mesh::Mesh(Shader& shader, std::vector<GLuint>* indices, std::vector<VertexData>* vertices) { _shaderProgram = &shader; _meshLength = (GLsizei)indices->size(); _indices = indices; glGenBuffers(1, &_ebo); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo); glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices->size(), indices, GL_STATIC_DRAW); // create one here glGenVertexArrays(1, &_vao); glBindVertexArray(_vao); std::vector<glm::vec3> positions; std::vector<glm::vec3> normals; std::vector<glm::vec2> texture; for (size_t i = 0; i < vertices->size(); i++) { positions.push_back(vertices->at(i).positions); normals.push_back(vertices->at(i).normals); texture.push_back(vertices->at(i).texCoords); } glGenBuffers(1, &_vbo); glBindBuffer(GL_ARRAY_BUFFER, _vbo); glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * (positions.size() + normals.size()), nullptr, GL_DYNAMIC_DRAW); glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(glm::vec3) * positions.size(), positions.data()); glBufferSubData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * positions.size(), sizeof(glm::vec3) * normals.size(), normals.data()); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr); glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, reinterpret_cast<const void*>(sizeof(glm::vec3) * positions.size())); glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glBindBuffer(GL_ARRAY_BUFFER, 0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); glBindVertexArray(0); } And here is my draw code: void Mesh::Draw() { glBindVertexArray(_vao); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo); if (_ebo == 0) glDrawArrays(GL_TRIANGLES, 0, _meshLength); else glDrawElements(GL_TRIANGLES, _meshLength, GL_UNSIGNED_INT, nullptr); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); glBindVertexArray(0); } Also see my vertices info screenshot, and if you need it I have an apitrace trace file: https://drive.google.com/file/d/0B_dWIpzqq91mQXVIeHhTOHFhRkU/view?usp=sharing
1 | How can I blend grass decals in a transparent alpha channel in 3D? LibGDX I'm new to 3D and trying to create grass simulation using decals in libGDX. I'm following the logic outlined here. Atm my grass decal is a billboard of grass stems with a transparent background. I've placed 3 decals together in the form of a triangle. When I pan the camera around at times the decals blend together but most of the time you can see the outline of the decal and you can distinguish between the pngs. Can someone please help me understand a way to blend my decals better? The decal http www.reinerstilesets.de 3dtextures billboardgrass0002.png The code public class Grass implements Screen private FirstPersonCameraController camera control private PerspectiveCamera camera private Mesh floor mesh private Vector lt Texture gt grass private Vector lt Decal gt decals private DecalBatch decalBatch private Vector lt TextureRegion gt grass regions private ShaderProgram shaderProgram private Stage stage Override public void show() stage new Stage() decals new Vector lt Decal gt () grass new Vector lt Texture gt () grass regions new Vector lt TextureRegion gt () camera camera new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()) camera.near 0.01f camera.far 10f camera.position.set(1, 2, 5) camera.update() camera control new FirstPersonCameraController(camera) camera control.setVelocity(3) camera control.setDegreesPerPixel(.25f) Gdx.input.setInputProcessor(camera control) grass grass.add(new Texture("billboardgrass0002.png")) for(Texture t grass) grass regions.add(new TextureRegion(t)) decalBatch new DecalBatch(new CameraGroupStrategy(camera)) for(int i 0 i lt 3 i ) decals.add(Decal.newDecal(0.75f, 1.25f, grass regions.get(0), true)) set grass decals.get(0).setPosition(1f, 0, 1f) decals.get(1).setPosition(1.1f, 0, 1.1f) decals.get(1).setRotationY(45) decals.get(2).setPosition(.8f, 0, 1.1f) decals.get(2).setRotationY( 45) Override public void render(float delta) gl 
Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()) Gdx.gl20.glClearColor(1, 1, 1, 1) Gdx.gl20.glClear(GL20.GL COLOR BUFFER BIT GL20.GL DEPTH BUFFER BIT) Gdx.gl20.glEnable(GL20.GL DEPTH TEST) Gdx.gl20.glEnable(GL20.GL TEXTURE 2D) Gdx.gl20.glEnable(GL20.GL BLEND) Gdx.gl20.glBlendFunc(GL20.GL SRC ALPHA, GL20.GL ONE MINUS SRC ALPHA) Gdx.gl20.glCullFace(GL20.GL NONE) camera camera control.update() disable depth writing Gdx.gl20.glDepthMask(false) draw grass decals for(Decal d decals) decalBatch.add(d) decalBatch.flush() stage stage.getViewport().update(1280, 720, true) stage.act(delta) stage.draw() Override public void resize(int width, int height) Override public void hide() Override public void pause() Override public void resume() Override public void dispose() shaderProgram.dispose() decalBatch.dispose() decals.clear() for(Texture t grass) t.dispose() grass.clear() floor mesh.dispose() Edit disabled depth buffer writing before drawing decals, now looks better |
1 | World texturing techniques in an FPS game Texturing for small objects (pickups and enemies) can easily be done by UV-unwrapping the model and using a texture of reasonable size to make the model look good. But how can texturing be done for the world? Covering a large building with one texture would require a huge texture (otherwise it will look blurry), and a lot of it would be repeated (brick-wall textures, wallpapers...), so that approach seems inefficient. Assigning different textures to different faces may be slow, since there cannot be a texture switch within a draw call. I am targeting OpenGL 4 or later. I prefer good-looking results over extremely high frame rates I aim for 30 fps, but perhaps with motion blur (which I guess requires four times that). About tiling: tiling would work, but some parts of the mesh require some other texture, and then I need to switch textures.
1 | What is a reasonable OpenGL version baseline for a mid-range 3D game? I recently decided to write a 3D game in my spare time, as I was tired of my daily "corporate programming". If I expect to be done in 6 months to 1 year, which version of OpenGL should I use as a baseline? In other words, which version of OpenGL should I require my potential users to have? I did some OpenGL as a student, but that was in the days of the first edition of the Red Book. Things have changed a lot since then.
1 | THREE.ShaderMaterial cannot perform antialiasing I created a ShaderMaterial to draw a box in three.js using the following key code: // magnetic orientation let magnetMaterial = new THREE.ShaderMaterial({ uniforms: { orientation: { value: new THREE.Vector3(1) } }, vertexShader: ` varying vec3 vPos; void main() { vPos = position; gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0); } `, // https://stackoverflow.com/questions/15688232/check-which-side-of-a-plane-points-are-on fragmentShader: ` #extension GL_OES_standard_derivatives : enable varying vec3 vPos; uniform vec3 orientation; void main() { float a = dot(orientation, vPos); gl_FragColor = vec4(step(a, 0.), 0, step(0., a), 1.0); } ` }); See also the online demo. Even if I set WebGLRenderer.antialias to true, there's heavy aliasing if the box is not axis-aligned. I found "Point Sampled AA" and "How do I implement anti-aliasing in OpenGL?", but I didn't understand how to apply them. In addition, the custom shader cannot work with lights, if any. In order to make the custom color work with the built-in effects (lights), I want to try post-processing via EffectComposer. Can anyone help me out? Related: Add scene lights to custom vertex/fragment shaders and shader materials? (the last shader, where I added point lights), threejs Adding lighting to ShaderMaterial, How do you combine a ShaderMaterial and LambertMaterial?
1 | How can I pass an array of floats to the fragment shader using textures? I want to map out a 2D array of depth elements for the fragment shader to check depth against, to create shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); Which doesn't work, because that stores depth components (0.0 to 1.0), which I don't want because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
1 | OpenGL Regarding Frame Buffer Caching I have been refreshing my knowledge of OpenGL ES. Whilst doing so, I had the following question in mind: does OpenGL have any "caching" mechanism? For example, let's say we draw a stationary triangle with only 3 vertices on the screen. If no transformation is applied to the primitive, will OpenGL ES still go through the vertex and fragment shaders, drawing the primitive anew every single frame? Or does it have a "caching" mechanism smart enough to know that, since there is no transformation, a cached buffer can display the primitive on screen, bypassing the calculations in the vertex and fragment shaders, so that unnecessary work is reduced or avoided in hopes of enhancing frame rate and performance? Thank you.
1 | Handling collision with LWJGL rectangles I'm testing collision with other rectangles so I can implement it into my current project. The problem is the rectangle starts at the right x and y, but I'm not sure where exactly they are. I'm pretty sure they're starting from the x and y point and the height is going either up and down. My current ortho makes the y axis start from the bottom of the screen but I'm not sure how their rectangle calculates. How can I improve this class so for each side the bottom rectangle touches, it turns a certain color to identify collision. public class a static int playerX 400 static int playerY 400 static int enemyX 100 static int enemyY 100 public static void main(String args) try Display.setDisplayMode(new DisplayMode(640, 480)) Display.setTitle("collision") Display.create() catch (LWJGLException e) e.printStackTrace() Display.destroy() System.exit(1) glMatrixMode(GL PROJECTION) glLoadIdentity() glOrtho(0, 640, 0, 480, 1, 1) glMatrixMode(GL MODELVIEW) Rectangle rl new Rectangle(playerX, playerY, 10, 20) Rectangle r2 new Rectangle(enemyX, enemyY, 200, 10) float c2 0 color while (!Display.isCloseRequested()) glClear(GL COLOR BUFFER BIT) rl.setX(playerX) rl.setY(playerY) r2.setX(enemyX) r2.setY(enemyY) if(rl.intersects(r2)) c2 1f else if(!rl.intersects(r2)) c2 0f if(Keyboard.isKeyDown(Keyboard.KEY A)) playerX 1 if(Keyboard.isKeyDown(Keyboard.KEY D)) playerX 1 if(Keyboard.isKeyDown(Keyboard.KEY W)) playerY 1 if(Keyboard.isKeyDown(Keyboard.KEY S)) playerY 1 System.out.println(playerY 100) glPolygonMode(GL FRONT AND BACK, GL LINE) glColor3f(0, 1, 0) PLAYER l glBegin(GL QUADS) glVertex2f(playerX, playerY 10 ) glVertex2f(playerX 10, playerY 10 ) glVertex2f(playerX 10, playerY 100 ) glVertex2f(playerX, playerY 100 ) glEnd() r glBegin(GL QUADS) glVertex2f(playerX 100, playerY 10 ) glVertex2f(playerX 110, playerY 10 ) glVertex2f(playerX 110, playerY 100 ) glVertex2f(playerX 100, playerY 100 ) glEnd() top glBegin(GL QUADS) glVertex2f(playerX 
10, playerY 9 ) glVertex2f(playerX 100, playerY 9 ) glVertex2f(playerX 100, playerY 20 ) glVertex2f(playerX 10, playerY 20 ) glEnd() bot glBegin(GL QUADS) glVertex2f(playerX 10, playerY 90 ) glVertex2f(playerX 100, playerY 90 ) glVertex2f(playerX 100, playerY 101 ) glVertex2f(playerX 10, playerY 101 ) glEnd() glColor3f(c2, 1, 0) ENEMY glBegin(GL QUADS) glVertex2f(enemyX, enemyY) glVertex2f(enemyX 200, enemyY) glVertex2f(enemyX 200, enemyY 10) glVertex2f(enemyX, enemyY 10) glEnd() Display.update() Display.sync(60) Display.destroy() System.exit(0) I just made it so OpenGL set its position to the rectangle instead of the rectangle to OpenGL. |
1 | Problems Animating Texture in OpenGL I'm trying to animate a texture to scroll a static screen for a television, however I'm having some issues. Just translating within the texture matrix animates all textures in the scene which is obviously a problem. However when trying to push and pop the matrix the texture matrix seems to keep getting reset. So rather than a scrolling texture, the texture just stays at the same translation. Here's the code snippet. glMatrixMode(GL TEXTURE) glPushMatrix() glTranslatef(0, 0.03 dt, 0) glBindTexture(GL TEXTURE 2D, staticTexture) RepeatTexture() glBegin(GL QUADS) glTexCoord2f(0, 0) glNormal3f(0, 0, 1) glVertex3f(0.49, 0.205, 0.461) glTexCoord2f(0, 1) glNormal3f(0, 0, 1) glVertex3f(0.49, 1.204, 0.461) glTexCoord2f(1, 1) glNormal3f(0, 0, 1) glVertex3f( .80, 1.204, 0.461) glTexCoord2f(1, 0) glNormal3f(0, 0, 1) glVertex3f( .80, 0.205, 0.461) glEnd() glPopMatrix() glMatrixMode(GL MODELVIEW) |
1 | Manual GLU.gluUnProject Before, I used GLU.gluUnProject to calculate the picking ray in my OpenGL game. Recently I switched to my own calculated matrices, so now I can no longer rely on gluUnProject. How can I calculate the picking ray with my own matrices? My picking-ray code (for the fixed pipeline): public Ray getPickingRay() { FloatBuffer modelBuffer = BufferUtils.createFloatBuffer(16); FloatBuffer projBuffer = BufferUtils.createFloatBuffer(16); glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer); glGetFloat(GL_PROJECTION_MATRIX, projBuffer); IntBuffer viewPort = BufferUtils.createIntBuffer(16); glGetInteger(GL_VIEWPORT, viewPort); FloatBuffer obj_pos = BufferUtils.createFloatBuffer(3); float middleX = (Display.getWidth() / 2) * 0.01f; float middleY = (Display.getHeight() / 2) * 0.01f; GLU.gluUnProject(middleX, middleY, 1.0f, modelBuffer, projBuffer, viewPort, obj_pos); Vector3f objPos = new Vector3f(obj_pos.get(0), obj_pos.get(1), obj_pos.get(2)); Vector3f.sub(objPos, game.player.position, objPos); return new Ray(game.player.position, objPos); } At first I thought I could just use my own model and projection matrices, but I have not found a way to manually calculate the GL_VIEWPORT IntBuffer. Thanks, folks.
1 | lwjgl and slick util text over textured quad So, I load a TrueTypeFont like this: private TrueTypeFont trueTypeFont; try { InputStream inputStream = ResourceLoader.getResourceAsStream("assets/fonts/main.ttf"); Font awtFont2 = Font.createFont(Font.TRUETYPE_FONT, inputStream); awtFont2 = awtFont2.deriveFont(24f); // set font size trueTypeFont = new TrueTypeFont(awtFont2, true); } catch (Exception e) { e.printStackTrace(); } After that, I draw a textured quad as one usually would, and then draw trueTypeFont.drawString(this.x, this.y, this.text, Color.white). What this gives, however, is far from text this is what it does, and the black is supposed to be text... How do I fix it?
1 | How does one write to another process's OpenGL DirectX context? I want to write a sort of chat client that displays its messages in game (OpenGL DirectX), but I really don't know how to handle this. It is easy to draw in my own graphics context, but what about another game's context? My primary target is Windows (Win 7), then Mac, then Linux, though I would be happy with a solution for Windows alone. It should also be as compatible as possible with different DX and OGL versions.
1 | render with const depth value This is a question whose answer may differ between vanilla desktop GL and GL ES 2.0 (wishful thinking says ES 3.0 has the same answer as vanilla GL). What I'm doing is rendering a cubemap with a fullscreen quad, and I'd like to know whether there's a way to specify a constant depth value for an entire draw call rather than getting involved at the fragment shader level, both in hopes of chasing a little performance and for flexibility, in that I could use the same shader for drawing to different constant depth values. As for why it might benefit performance, the note at the bottom here offers some ideas, such as early z testing... I suppose realistically, if I emit a constant distant fragment depth, then early z can still happen while maintaining the flexibility of programmable per pixel depth in general (necessary for proper impostors, as shown), so I may be splitting hairs unnecessarily at this point. However, that note does mention that if I can avoid writing gl_FragDepth I should, and it would be up to the driver whether it does something smart if I e.g. set gl_FragDepth to a constant. As an example, I could render my 3D scene, then set the depth draw value to maximum, keep depth testing on, and then render a fullscreen quad for my cubemap skybox, and it would just do the right thing. Otherwise, the choices seem to be either render the skybox first with depth testing switched off and then render the scene, or use a fragment shader for the skybox that writes the requisite depth value (which may very well just be the answer).
1 | Hardware Fragment Sorting? I'm writing a rendering engine in OpenGL and want to do order independent transparency. I had heard somewhere that some GPUs have support for actually sorting the fragments of all the objects in the scene by depth and then drawing them, and I then realized that this feature is likely very important to many people. Does OpenGL have a built in fragment sorting algorithm, or expose access to this hardware?
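For context (my addition, not from the question): core OpenGL exposes no per-fragment sort, so the common fallback is to sort transparent draw calls back-to-front on the CPU each frame, which is object-level rather than true fragment-level OIT. A minimal sketch, with struct and function names my own:

```cpp
#include <algorithm>
#include <vector>

struct Draw {
    float position[3]; // world-space centre of the transparent object
};

// Squared distance avoids a sqrt; ordering is unchanged.
float distSq(const float a[3], const float b[3])
{
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}

// Back-to-front: farthest from the camera drawn first, so standard alpha
// blending composites correctly.
void sortBackToFront(std::vector<Draw>& draws, const float eye[3])
{
    std::sort(draws.begin(), draws.end(),
              [&](const Draw& a, const Draw& b) {
                  return distSq(a.position, eye) > distSq(b.position, eye);
              });
}
```

This breaks down for intersecting or mutually overlapping geometry, which is exactly where the true per-fragment techniques (depth peeling, per-pixel linked lists) come in.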
1 | What is an OpenGL equivalent to ID3DXSprite? As a Direct3D developer I can use the ID3DXSprite interface (in the D3DX library) for drawing 2D graphics. What's the best way to implement this functionality in OpenGL?
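For illustration (my addition): OpenGL has no built-in sprite helper, so the usual equivalent is an orthographic projection plus batched textured quads. The core of such a batcher is expanding each sprite into two triangles; a minimal sketch, with names my own:

```cpp
#include <array>

struct Vertex { float x, y, u, v; };

// Expand one sprite (screen-space rectangle, full texture) into two CCW
// triangles, ready to be appended to a batched vertex buffer.
std::array<Vertex, 6> makeSpriteQuad(float x, float y, float w, float h)
{
    Vertex bl{x,     y,     0.0f, 0.0f};
    Vertex br{x + w, y,     1.0f, 0.0f};
    Vertex tr{x + w, y + h, 1.0f, 1.0f};
    Vertex tl{x,     y + h, 0.0f, 1.0f};
    return {{bl, br, tr, bl, tr, tl}};
}
```

Appending many such quads into one vertex buffer and issuing a single draw per texture is essentially what ID3DXSprite's Begin/Draw/End batching does under the hood.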
1 | Learning OpenGL Red and Blue book still relevant? I've recently purchased the Orange Book (GLSL) and am wondering whether it is important at all to read through the Red and Blue Books as well? Any thoughts?
1 | Invalid GLSL on some machines I'm writing a game engine using OpenGL 4.3 and gcc 5, mainly to teach myself graphics programming. Initial development was on my Surface Pro 3 using mingw w64 and worked like a charm. I've since moved to my desktop, which runs two GTX 670s under both Windows and Arch Linux. I've made sure that I have the latest NVIDIA drivers installed on both operating systems, but am having issues with my vertex and fragment shaders. All I know is that neither system compiles the shaders, while the OpenGL Reference Compiler simply issues a warning saying that OpenGL 4.3 might not be fully implemented. I've also tried downgrading all the way to OpenGL 4.0, but that doesn't fix the issue. Below are the shaders I'm currently trying out. Hoping someone more experienced than I am can lend a hand? shader.vert: #version 430 layout (location = 0) in vec3 vertex_position; layout (location = 1) in vec2 vt; uniform mat4 cam_view, cam_proj, sprite_matrix; out vec2 texture_coordinates; void main() { texture_coordinates = vt; gl_Position = cam_proj * cam_view * sprite_matrix * vec4(vertex_position, 1.0); } shader.frag: #version 430 in vec2 texture_coordinates; layout (binding = 0) uniform sampler2D tex; out vec4 frag_colour; void main() { vec4 texel = texture(tex, texture_coordinates); frag_colour = texel; } Thanks in advance )
1 | Creating a fragment shader to darken a white texture over time OpenGL GLSL So, as part of learning OpenGL, I've decided to try to be a bit more creative with shaders in a practice game I'm making using C and OpenGL. I'm completely new when it comes to working with shaders, so I'm still getting used to the thought process behind creating something with them. To that end, here's what I want to attempt. As part of the gameplay, the entire background of the game consists of a plain white 2D texture. The white texture should slowly animate black circles into it over time, until finally the entire texture is completely black. I've considered some different approaches to the problem, and would like to know which one will be the most practical when it comes to shaders. Using framebuffers As I've understood from looking around the internet, you can't really, say, apply certain colors to a specific area of fragments in your fragment shader (like drawing a black circle) and then save the state of those fragments for the next frame. Instead, I would have to apply the effect, render the texture to a framebuffer, and then reuse it the next frame. By doing this continuously, I could slowly add black circles to the texture. The problem is, is this really too much work for something as simple as this? Or would it be fast enough? As far as I've seen, framebuffers are usually used to post process entire scene frames. This would not be for the entire scene, only for the background. The advantage of this approach is that it's relatively simple to work with, as you're only working on one texture. Creating individual circle objects with their own fragment shaders This was another approach I thought about doing.
Basically, instead of having one shader work on the entire background texture, I would spawn textured quads that each run a fragment shader that simply turns their texture black over time, using a percentage value passed in as a uniform. The disadvantage of this solution is the added complexity of handling more objects at runtime instead of working with one texture, and more vertices would also have to be handled when rendering. Of these two approaches, I don't really know which is the better one to run with. I would like to use the framebuffer approach, but I don't know whether I would be "abusing" the technique on this kind of problem. Or perhaps there's another way of doing this that I've completely missed because I'm new to shaders in that case, other suggestions would be welcome! Thank you all for your help )
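Whichever approach is taken, the per-fragment logic is the same: test the fragment's position against a list of circles whose radii grow with time. A hedged CPU sketch of what the fragment shader body might compute (the circle representation and names are my own assumptions):

```cpp
#include <vector>

struct Circle { float cx, cy, spawnTime; };

// White (1.0) everywhere except inside a circle, where it is black (0.0).
// Each circle's radius grows with time elapsed since it spawned, mirroring
// what a GLSL loop over a uniform array of circles would do per fragment.
float backgroundShade(float px, float py,
                      const std::vector<Circle>& circles,
                      float now, float growthRate)
{
    for (const Circle& c : circles) {
        float r = growthRate * (now - c.spawnTime);
        float dx = px - c.cx, dy = py - c.cy;
        if (dx * dx + dy * dy <= r * r)
            return 0.0f; // inside a black circle
    }
    return 1.0f; // untouched white background
}
```

With the framebuffer approach this test effectively runs once per circle as it is stamped into the accumulation texture; with the per-quad approach it runs every frame for every fragment, which is exactly the trade-off the question describes.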