_id (int64, 0 to 49) | text (string, lengths 71 to 4.19k)
---|---
1 | How is animation handled in non-immediate OpenGL? I'm a newbie to modern OpenGL. I'm comfortable with the immediate-mode OpenGL methodology, but I've never made any serious use of VBOs. My question is about animation. In immediate mode, to achieve animation you just stream different vertex positions (interpolated from keyframes) in object space. How is this achieved in non-immediate OpenGL? Obviously, uploading fresh VBO data for each frame is surely going to hog the graphics bus. I haven't found any literature about the modern way to do it. Thinking about it, I came up with several options. Attributes: animation as a 3D offset. For each frame, a different (possibly interpolated) offset attribute is passed for each vertex, applied to the same vertex each keyframe. Indices: storing keyframes as absolute vertices and accessing them through indexing, using a different set of vertices for every keyframe. I find this approach unworkable, since you can't access adjacent keyframes and therefore can't interpolate between them. Also, it seems like a bad idea for procedural animation. Texture: this might be a stretch, but it sounds like a good solution to me. Again, animation is treated as an xyz offset for each vertex. Each keyframe can be stored in a 1D texture where the dimension maps to vertexID. If I'm not mistaken, in OpenGL 4.0 you can access textures from any shader, so you could read this texture from the vertex shader and apply each vertex transformation. A 2D texture could hold several frames. You still perform interpolation (and if interpolation works for textures outside the fragment shader, which I'm not sure of, you can linearly interpolate for free!). This could be applied to more complex animation systems like bone animation without much effort. Am I overthinking this? Can anyone shed some light? |
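A CPU-side sketch of the "attributes" option above (all names here are illustrative, not from the question): blend two keyframe positions by a factor, which is exactly what a vertex shader would do with `mix(posA, posB, t)` when both keyframes are bound as separate vertex attributes.

```c
#include <assert.h>

/* Illustrative sketch: blend two keyframe positions. A vertex shader
 * doing the same would receive both keyframes as attributes and a
 * blend factor t as a uniform, then call mix(posA, posB, t). */
typedef struct { float x, y, z; } Vec3;

static Vec3 lerp_keyframe(Vec3 a, Vec3 b, float t)
{
    Vec3 r = { a.x + (b.x - a.x) * t,
               a.y + (b.y - a.y) * t,
               a.z + (b.z - a.z) * t };
    return r;
}
```

This keeps both keyframes' VBOs static on the GPU; only the scalar `t` changes per frame, so nothing large crosses the bus.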
1 | Loading models with OpenGL I'm developing a video game using OpenGL as the graphics API and the C programming language, and I'm creating all models with Blender. One question I have is how you deal with models (vertices): once you have created a model and want to load it with OpenGL, do you hard-code the data or use a software module to load it at runtime? |
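A minimal sketch of the runtime-loading approach, assuming the model is exported from Blender as a Wavefront OBJ file (the helper name is ours, not from the question): each `v x y z` line is one vertex.

```c
#include <stdio.h>

/* Illustrative helper: parse one "v x y z" vertex line from a
 * Wavefront OBJ file. Returns 1 on success, 0 if the line is not a
 * plain vertex line (e.g. "vn" normals or "f" faces). */
static int parse_obj_vertex(const char *line, float *x, float *y, float *z)
{
    return sscanf(line, "v %f %f %f", x, y, z) == 3;
}
```

A full loader would loop over the file with `fgets`, collect vertices and faces into arrays, and upload them once with `glBufferData`.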
1 | OpenGL Applications Bring Computer to a Halt Whenever I run any application that uses the OpenGL interface, my entire computer comes to a halt, but it doesn't do this when an application uses the DirectX interface. I run both Linux (Ubuntu 15.10) and Windows 10, so this isn't exactly caused by the operating system. I'm running the latest drivers from NVidia and both OSes are completely up to date. This is happening on a Dell Precision M6300 laptop (Core 2 Duo 2.5GHz, NVidia Quadro FX 1600M, 4GB RAM), and although it's a bit old it should be completely capable of rendering a blank OpenGL window using GLFW. However it slows down my entire computer (every application starts freezing to where it becomes unusable until the application is closed). This happens in games like Left4Dead, Half-Life 2, etc., but also in my own OpenGL programs. The same programs and games do not have the same effect on my desktop (although it has much better hardware, a blank OpenGL window shouldn't matter). Any help would be greatly appreciated, thank you. Also my apologies if I left out any vital information or made a confusing question. Just ask me to clarify or add something and I shall. Added the code for the blank OpenGL window in question:

```cpp
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>
using namespace glm;

int main(int argc, char *argv[])
{
    if (!glfwInit()) {
        fprintf(stderr, "Failed to initialize GLFW\n");
        return -1;
    }
    glfwWindowHint(GLFW_SAMPLES, 4);               // 4x AA
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); // GL 3.3
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow *window = glfwCreateWindow(1024, 768, "OpenGL Tutorial", NULL, NULL);
    if (window == NULL) {
        fprintf(stderr, "Failed to create OpenGL Window.");
        return -1;
    }
    glfwMakeContextCurrent(window);
    glewExperimental = true;
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "Failed to initiate the glew context!");
        return -1;
    }
    glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
    do {
        glfwSwapBuffers(window);
        glfwPollEvents();
    } while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
             glfwWindowShouldClose(window) == 0);
}
``` |
1 | How do I implement object picking, using OBB in OpenGL? I am trying to make 3D drawing software. I want to have a drag feature, so I am doing object picking using the OBB algorithm. I am having trouble understanding the algorithm, and my implementation is buggy as a result. How do I implement object picking, using OBB in OpenGL? As I am new to OpenGL (freeglut library), a step by step explanation would be helpful. |
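A sketch of the slab test at the heart of OBB picking (names illustrative): intersect the pick ray with an axis-aligned box. For an oriented box, first transform the ray into the box's local space using the inverse of the box's world matrix, then run this same test there.

```c
#include <assert.h>
#include <math.h>

typedef struct { float ox, oy, oz, dx, dy, dz; } Ray;

/* Slab method: clip the ray against the three pairs of box planes and
 * check the surviving interval. Relies on IEEE infinities when a
 * direction component is zero (degenerate when the origin lies exactly
 * on a slab plane, which a full implementation should special-case). */
static int ray_hits_box(Ray r, const float bmin[3], const float bmax[3])
{
    float o[3] = { r.ox, r.oy, r.oz };
    float d[3] = { r.dx, r.dy, r.dz };
    float tmin = -INFINITY, tmax = INFINITY;
    for (int i = 0; i < 3; i++) {
        float t1 = (bmin[i] - o[i]) / d[i];
        float t2 = (bmax[i] - o[i]) / d[i];
        if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
        if (t1 > tmin) tmin = t1;
        if (t2 < tmax) tmax = t2;
    }
    return tmax >= tmin && tmax >= 0.0f;
}
```

For picking, build the ray by unprojecting the mouse position at the near and far planes, then test every object's box and keep the closest hit.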
1 | What is GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS? I am a beginner in OpenGL. I am learning about textures in OpenGL. What I don't understand is how to determine how many texture units are in the GPU. I heard someone say that you can see how many texture units there are with the following code.

```cpp
int total_units;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &total_units);
std::cout << total_units << '\n'; // the result is 192
```

Are there 192 texture units in my GPU? The documentation says: GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS: params returns one value, the maximum supported texture image units that can be used to access texture maps from the vertex shader and the fragment processor combined. If both the vertex shader and the fragment processing stage access the same texture image unit, then that counts as using two texture image units against this limit. The value must be at least 48. See glActiveTexture. So I wanted to know how many texture units can be used to access texture maps from the vertex and fragment shaders, and I wrote and ran the following code.

```cpp
int vertex_units, fragment_units;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertex_units);
std::cout << vertex_units << "\n"; // the result is 32
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragment_units);
std::cout << fragment_units << "\n"; // the result is also 32
```

So 32 + 32 = 64. But why does GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS show me 192? I think I am missing something. What do I need to count to get 192? And also, why does OpenGL only have the GL_TEXTURE0 to GL_TEXTURE31 macros? I think these macros are per shader stage. Am I right? |
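One plausible accounting for the 192 (an assumption about this particular driver, not something the question confirms): the combined limit covers every programmable stage, and a GL 4.x pipeline has six of them (vertex, tessellation control, tessellation evaluation, geometry, fragment, compute).

```c
#include <assert.h>

/* Assumed breakdown: per-stage unit count times number of shader
 * stages. With 32 units per stage and 6 stages, 32 * 6 = 192. */
static int combined_units(int per_stage, int stages)
{
    return per_stage * stages;
}
```

The GL_TEXTURE0..GL_TEXTURE31 macros match the 32 per-stage binding points; each stage's samplers index into those independently.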
1 | Render specific part of a texture in OpenGL (2D Sprite Sheet) I've looked at this answer to find out how to render just a part of a texture: C++ OpenGL render part of an image. I tried that, but the problem is, this is how I render my texture:

```cpp
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(x, y);
glTexCoord2f(1, 0); glVertex2f(x + w, y);
glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
glTexCoord2f(0, 1); glVertex2f(x, y + h);
glEnd();
```

I tried doing this to render the top left quarter:

```cpp
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(x, y);
glTexCoord2f(0.5, 0); glVertex2f(x + w, y);
glTexCoord2f(0.5, 0.5); glVertex2f(x + w, y + h);
glTexCoord2f(0, 0.5); glVertex2f(x, y + h);
glEnd();
```

But it just totally messes up and makes my texture bigger and cut off. What I want is to just draw a part, like you would on a sprite sheet. I've been messing around with those lines for a while and I just can't seem to find out how to do what I want. I've been messing around with this (yeah, I know this math is crappy and weird, but I'm trying EVERYTHING at this point); it draws up to dcw (the width) just fine, but I can't seem to integrate cx/cy...

```cpp
double dcw = double(cw) / double(w);
double dch = double(ch) / double(h);
double dcx = double(cx) / double(dcw);
double dcy = double(cy) / double(dch);
std::cout << "dcw " << dcx << "\n" << "dch " << dcy << "\n";
glBegin(GL_QUADS);
// bottom right
glTexCoord2f(0, 0); glVertex2f(cx, cy);
// top right
glTexCoord2f(dcw, 0); glVertex2f(cx + cw, cy);
// top left
glTexCoord2f(dcw, dch); glVertex2f(cx + cw, cy + ch);
// bottom left
glTexCoord2f(0, dch); glVertex2f(cx, cy + ch);
glEnd();
``` |
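A sketch of the usual sprite-sheet arithmetic (names ours): compute the UV rectangle of one cell in pixels divided by the sheet size, and keep the `glVertex2f` quad at its original `x..x+w` size. Changing only the texcoords selects a sub-region; the quad stretching described above happens when the vertex positions are changed too.

```c
#include <assert.h>

typedef struct { float u0, v0, u1, v1; } UVRect;

/* Illustrative: UVs for a cell given in pixels inside a sheet of
 * sheet_w x sheet_h pixels. Feed u0/v0/u1/v1 to glTexCoord2f while the
 * glVertex2f corners stay at (x, y)..(x + w, y + h). */
static UVRect sheet_uv(int cell_x, int cell_y, int cell_w, int cell_h,
                       int sheet_w, int sheet_h)
{
    UVRect r;
    r.u0 = (float)cell_x / sheet_w;
    r.v0 = (float)cell_y / sheet_h;
    r.u1 = (float)(cell_x + cell_w) / sheet_w;
    r.v1 = (float)(cell_y + cell_h) / sheet_h;
    return r;
}
```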
1 | Shader: Calculate depth relative to Object I am trying to calculate depth relative to the object. Here is a good solution to retrieve depth relative to the camera, from "Depth as distance to camera plane in GLSL":

```glsl
varying float distToCamera;

void main()
{
    vec4 cs_position = gl_ModelViewMatrix * gl_Vertex;
    distToCamera = -cs_position.z;
    gl_Position = gl_ProjectionMatrix * cs_position;
}
```

With this example the depth is relative to the camera, but I would like to get the depth relative to the object. I would like the same depth value whether I am near the object or far from it. Here is an example of what I am trying to achieve. On the left you can see that the depth is relative to the camera. And on the right, even if the camera moves back from the object, the depth remains the same because it is dependent on the object. |
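One way to get a camera-independent depth (a sketch under the assumption that "relative to the object" means normalized across the object's own extent): remap the vertex's object-space z into [0, 1] using the object's bounding box. A shader would do the same with the bounds passed as uniforms, before any view transform is applied.

```c
#include <assert.h>

/* Illustrative: depth relative to the object itself. z, z_min and
 * z_max are all in object space, so moving the camera changes nothing. */
static float object_depth(float z, float z_min, float z_max)
{
    return (z - z_min) / (z_max - z_min);
}
```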
1 | Camera rotation around point, but without centering Let's say I have the following: a point somewhere in space, and a camera with position and orientation (up, right, forward). I want to rotate the camera around the point, but also keep this point in the same place on screen. So, if the point was at (32, 32) on the window, after rotation I want it to still be at (32, 32). I've seen "How can I orbit a camera about its target point?", and it was somewhat helpful. I needed code to rotate a point around an arbitrary axis (the camera's up and right), so I used this resource. The problem is, I get something like numerical errors and my camera starts to wander weirdly when rotating around both the camera's up and right axes (it seems fine when I rotate only around one of them). I tested my implementation with this code:

```cpp
Matrix m1 = MatrixRotate(Vector(1, 1, 1), 33);
Matrix m2 = MatrixRotate(Vector(1, 1, 1), -33);
Vector a = Vector(1, 1, 1);
Vector c = a;
c = m1 * c;
c = m2 * c;
printf("%f %f %f %f\n", c.x, c.y, c.z, c.w);
```

And got: 1.028036 0.960396 1.124331 0.000000. It worked fine when the rotation axis was something 'normal' like (1, 0, 0) or (0, 0, 1). So, how else can I rotate a camera around a point, while keeping said point in the same place on screen? |
1 | How can I efficiently render lots of 2D quads? I have lots of 2D quads. I write their local vertex position info to a buffer (accompanied with a world transform matrix) which gets sent to the render thread. I then pass the world transform matrix as a uniform into the vertex shader which applies itself to each vertex and works fine. I however have to make a call to glDrawElements() for each quad I want to render because each one has a different world transform. What is the most efficient way for me to render all of these quads? Should I do what I'm already doing, or should I calculate the world position for each vertex on the CPU before I write them to the buffer, so I can attempt to batch more quads into a single glDrawElements() call? Or is there another approach? |
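A sketch of the CPU-side batching option the question asks about (names ours): pre-transform each quad's four corners by its world matrix before writing them into one shared vertex buffer, so all quads go out in a single glDrawElements call. Instanced rendering via glDrawElementsInstanced, with per-quad matrices in a buffer, is the other common route.

```c
#include <assert.h>

/* Illustrative 2D affine transform, column-major 3x3 matrix:
 * columns are x-axis, y-axis, translation. Apply this to each quad
 * corner on the CPU, then append the result to one big batch buffer. */
static void transform_point(const float m[9], float x, float y,
                            float *ox, float *oy)
{
    *ox = m[0] * x + m[3] * y + m[6];
    *oy = m[1] * x + m[4] * y + m[7];
}
```

The trade-off: pre-transforming costs CPU time and re-uploads vertices when quads move, while instancing keeps the per-quad matrix on the GPU; for "lots of 2D quads" either beats one draw call per quad.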
1 | Why do nearby triangles tend to disappear? I've just enabled back face culling and I'm noticing a weird behavior: when all vertices of my triangle are outside the view and 2 of them are behind me (I think), the triangle disappears. To see it, here is a GIF. I suspect the projection matrix reverses the order of the two vertices when they fall behind me, and changes the winding of my triangle. But it's unclear why the triangles disappear only if all vertices are out of view... How can I work around this problem, if possible? I develop on Linux, if that matters. UPDATE: It's been pointed out it might not be due to back face culling. I disabled it and I can indeed still reproduce it. The cubes are 20x20 and the vertical field of view is 90 degrees. Their vertical apparent size roughly fills the window. UPDATE 2: OK, I'll post the relevant part of the code. The projection and view matrices are set up using my own functions:

```c
void createViewMatrix(
    GLfloat matrix[16],
    const Vector3 *forward,
    const Vector3 *up,
    const Vector3 *pos)
{
    /* Setting up perpendicular axes */
    Vector3 rright;
    Vector3 rup = *up;
    Vector3 rforward = *forward;
    vbonorm(&rright, &rup, &rforward); /* Orthonormalization (right is computed from scratch) */

    /* Filling the matrix */
    matrix[0] = rright.x;  matrix[1] = rup.x;  matrix[2]  = rforward.x; matrix[3]  = 0;
    matrix[4] = rright.y;  matrix[5] = rup.y;  matrix[6]  = rforward.y; matrix[7]  = 0;
    matrix[8] = rright.z;  matrix[9] = rup.z;  matrix[10] = rforward.z; matrix[11] = 0;
    matrix[12] = -vdp(pos, &rright);
    matrix[13] = -vdp(pos, &rup);
    matrix[14] = -vdp(pos, &rforward);
    matrix[15] = 1;
}

void createProjectionMatrix(
    GLfloat matrix[16],
    GLfloat vfov,
    GLfloat aspect,
    GLfloat near,
    GLfloat far)
{
    GLfloat vfovtan = 1 / tan(RAD(vfov * 0.5));
    memset(matrix, 0, sizeof(*matrix) * 16);
    matrix[0] = vfovtan / aspect;
    matrix[5] = vfovtan;
    matrix[10] = (near + far) / (near - far);
    matrix[11] = -1;
    matrix[14] = (2 * near * far) / (near - far);
}
```

The projection matrix is set up with this call: `createProjectionMatrix(projMatrix, VERTICAL_FOV, ASPECT_RATIO, Z_NEAR, 10000);` (VERTICAL_FOV = 90, ASPECT_RATIO = 4.0/3, Z_NEAR = 1). Level drawing is simply:

```c
void drawStuff()
{
    GLfloat projectView[16];
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    createViewMatrix(viewMatrix, &camera.forward, &camera.up, &camera.pos);
    multiplyMatrix(projectView, viewMatrix, projMatrix); /* <- Row-major multiplication. */
    glUniformMatrix4fv(renderingMatrixId, 1, GL_FALSE, projectView);
    bailOnGlError(__FILE__, __LINE__);
    renderLevel(&testLevel);
}
```

Cubes are rendered wall by wall (optimizing this will be another story):

```c
for (j = 0; j < 6; j++) {
    glBindTexture(GL_TEXTURE_2D, cube->wallTextureIds[j]);
    bailOnGlError(__FILE__, __LINE__);
    glDrawElements(GL_TRIANGLE_FAN, 4, GL_UNSIGNED_INT, (void *)(sizeof(GLuint) * 4 * j));
    bailOnGlError(__FILE__, __LINE__);
    glUniform4f(extraColorId, 1, 1, 1, 1);
    bailOnGlError(__FILE__, __LINE__);
}
```

Vertex shader:

```glsl
#version 110
attribute vec3 position;
attribute vec3 color;
attribute vec2 texCoord;
varying vec4 f_color;
varying vec2 f_texCoord;
uniform mat4 renderingMatrix;
void main()
{
    gl_Position = renderingMatrix * vec4(position, 1);
    f_color = vec4(color, 1);
    f_texCoord = texCoord;
}
```

Fragment shader:

```glsl
#version 110
varying vec4 f_color;
varying vec2 f_texCoord;
uniform sampler2D tex;
uniform vec4 extraColor;
void main()
{
    gl_FragColor = texture2D(tex, f_texCoord) * vec4(f_color) * extraColor;
}
```

The depth buffer is simply set up by enabling it. |
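A small sketch of why vertices behind the camera misbehave (this illustrates the general clip-space rule, not a diagnosis of the code above): with the question's projection, the only nonzero term of the w row is matrix[11] = -1, so clip-space w is just the negated view-space z. A point behind the camera gets w < 0, and the perspective divide by a negative w flips its x and y, which is what makes winding appear reversed. Correct pipelines clip against -w <= x, y, z <= w before the divide, never after.

```c
#include <assert.h>

/* With matrix[11] = -1 and the rest of the w row zero, clip-space w is
 * simply -z_view. Negative result means "behind the camera": the
 * vertex must be clipped before the perspective divide. */
static float perspective_w(float z_view)
{
    return -z_view;
}
```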
1 | Java OpenGL Perspective matrix not working I'm trying to render a simple triangle with OpenGL in Java using LWJGL3. Everything is working great, but the projection matrix (perspective) is not working. In C++ I just used to call the glm::perspective() method, which works just great. But in Java, I implemented it myself since there are no libraries like GLM handling it. So here is the code for the perspective in Java:

```java
public mat4 perspective(float fov, float aspectRatio, float zNear, float zFar) {
    mat4 perspective = new mat4();
    float halfTanFov = (float) Math.tan(Math.toRadians(fov / 2));
    float range = zNear - zFar;
    perspective.m00 = 1f / (halfTanFov * aspectRatio);
    perspective.m11 = 1f / halfTanFov;
    perspective.m22 = (zFar + zNear) / range;
    perspective.m23 = -1;
    perspective.m32 = (2f * zFar * zNear) / range;
    return perspective;
}
```

Of course I tested this multiplication and compared a result with my TI's output, and it worked great. Other information: the default constructor of the mat4 class sets all the values to 0; here is the code:

```java
public void setZero() {
    m00 = 0; m01 = 0; m02 = 0; m03 = 0;
    m10 = 0; m11 = 0; m12 = 0; m13 = 0;
    m20 = 0; m21 = 0; m22 = 0; m23 = 0;
    m30 = 0; m31 = 0; m32 = 0; m33 = 0;
}
```

The viewMatrix() on the other hand is working great. It's a simple implementation of the lookAt() method. So when it's lookAtMatrix * modelMatrix * position, where position is a vec4, the result is good. But when I try to add the projection matrix for the MVP (perspective * lookAtMatrix * model * position), the result is nothing. Here is where I do it in the code:

```java
public mat4 getViewProjection() {
    mViewProjection = MatrixTransform.getInstance().lookAt(mPosition, mPosition.add(mDirection), mUp);
    return mViewProjection;
}

public mat4 getMVP(mat4 model) {
    return mPerspective.mult(getViewProjection()).mult(model);
}
```

And here is my simple GLSL shader (for the vertex shader):

```glsl
#version 430
layout(location = 0) in vec3 position;
uniform mat4 MVP;
void main(void) {
    gl_Position = MVP * vec4(position, 1);
}
```

I tried other implementations of the perspective without success, so I guess my mistake is somewhere else, but sadly enough, I can't figure out where. If someone could help, it'd be great! Thank you. If you need other information, please ask me and I'll post it. EDIT: Here is how I put a mat4 into a uniform:

```java
public void uniform(String variableName, mat4 matrix) {
    int loc = glGetUniformLocation(mId, variableName);
    FloatBuffer buffer = BufferUtils.createFloatBuffer(16);
    matrix.putIntoBuffer(buffer);
    buffer.flip();
    glUniformMatrix4(loc, false, buffer);
}
```

and the method putIntoBuffer():

```java
public void putIntoBuffer(FloatBuffer buffer) {
    buffer.put(m00); buffer.put(m01); buffer.put(m02); buffer.put(m03);
    buffer.put(m10); buffer.put(m11); buffer.put(m12); buffer.put(m13);
    buffer.put(m20); buffer.put(m21); buffer.put(m22); buffer.put(m23);
    buffer.put(m30); buffer.put(m31); buffer.put(m32); buffer.put(m33);
}
``` |
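A hedged observation about code like the putIntoBuffer above (a common pitfall, not a confirmed diagnosis): writing m00, m01, m02, m03, ... fills the buffer row by row, while glUniformMatrix4 with transpose = false expects column-major data (m00, m10, m20, m30, ...). The two layouts only coincide for symmetric matrices, and a perspective matrix is not symmetric. The sketch below shows the layouts differ; the fix would be to pass transpose = true or write columns.

```c
#include <assert.h>

/* Row-major flattening: element (r, c) lands at index r*4 + c. */
static void flatten_row_major(const float m[4][4], float out[16])
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            out[r * 4 + c] = m[r][c];
}

/* Column-major flattening (what GL expects with transpose = false):
 * element (r, c) lands at index c*4 + r. */
static void flatten_col_major(const float m[4][4], float out[16])
{
    for (int c = 0; c < 4; c++)
        for (int r = 0; r < 4; r++)
            out[c * 4 + r] = m[r][c];
}
```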
1 | OpenGL glVertexAttribFormat vs glVertexAttribPointer I am attempting to change my code from using glVertexAttribPointer to glVertexAttribFormat, as I have heard it's more efficient since it reduces binding call overhead. I have 2 versions of the code, one working, one not. The code looks like:

```cpp
glGenBuffers(1, &tvbo1);
glBindBuffer(GL_ARRAY_BUFFER, tvbo1);
glObjectLabel(GL_BUFFER, tvbo1, 21, "Mesh Vertex Buffer");
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * triangle.size(), triangle.data(), GL_STATIC_DRAW);

glGenBuffers(1, &tvbo2);
glBindBuffer(GL_ARRAY_BUFFER, tvbo2);
glObjectLabel(GL_BUFFER, tvbo2, 22, "Mesh Normals Buffer");
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * ns.size(), ns.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0); // comment
glVertexAttribBinding(0, 0); // comment

glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 0); // comment
glVertexAttribBinding(1, 1); // comment

GLuint ptrs[] = { tvbo1, tvbo2 };
GLintptr offsets[] = { 0, 0 };
int strides[] = { 0, 0 };
glBindVertexBuffers(0, 2, ptrs, offsets, strides); // comment
```

Whenever you see `// comment`, that denotes that in the other version the line is commented out. The version that does work uses:

```cpp
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
```

The version that does not work uses:

```cpp
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);
glBindVertexBuffers(0, 2, ptrs, offsets, strides);
```

The difference between working and not working is having a purple triangle vs having a black screen. It seems that the error comes from the line `glBindVertexBuffers(0, 2, ptrs, offsets, strides);`, as having that line present results in a black screen regardless of the rest of the code. |
1 | How to organize passing data to shaders in a cross-API render system? I am trying to create a rendering system that supports DirectX and OpenGL. I am trying to create a class for constant buffers, but DirectX constant buffers and OpenGL uniform buffers have different memory organisation rules. Is there any way to achieve compatibility between DirectX and std140 from OpenGL? Or would it be better to choose another way to abstract constant buffers for shaders? And how do developers solve the problem of passing data to shaders effectively in serious engines? |
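A sketch of std140's core alignment rule, which is the part a cross-API buffer class has to encode (the specific member sequence below is an illustrative example, not from the question): scalars align to 4 bytes, vec2 to 8, vec3 and vec4 to 16. One common cross-API policy is to restrict buffer members to vec4/mat4 so std140 and HLSL cbuffer packing coincide.

```c
#include <assert.h>

/* Round `offset` up to the next multiple of `alignment`; this is the
 * primitive from which std140 member offsets are computed. */
static unsigned align_up(unsigned offset, unsigned alignment)
{
    return (offset + alignment - 1) / alignment * alignment;
}
```

For example, a float (4 bytes at offset 0) followed by a vec3 places the vec3 at align_up(4, 16) = 16 under std140, where a naive C struct would put it at 4; a shared layout calculator on the CPU side keeps both APIs honest.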
1 | How costly is an OpenGL draw call, and how do I optimise them? How costly is a draw call in OpenGL? What are the ways to reduce the number of draw calls? I have seen people using glMapBufferRange, but I can't figure out how it improves the performance. How much impact do draw calls have on the overall performance of a program? |
1 | Impact of variable length loops on GPU shaders It's popular to render procedural content inside the GPU, e.g. in the demoscene (drawing a single quad to fill the screen and letting the GPU compute the pixels). Ray marching is popular. This means the GPU is executing some unknown number of loop iterations per pixel (although you can have an upper bound like maxIterations). How does having a variable length loop affect shader performance? Imagine this simple ray marching pseudocode:

```
t = 0.f
while (t < maxDist) {
    p = rayStart + rayDir * t
    d = DistanceFunc(p)
    t += d
    if (d < epsilon) { ... /* emit p */ return }
}
```

How are the various mainstream GPU families (Nvidia, ATI, PowerVR, Mali, Intel, etc.) affected? Vertex shaders, but particularly fragment shaders? How can it be optimised? |
1 | How much is atomicAdd slower than an atomic counter? I am considering replacing an atomic counter in my shader code with an SSBO and an atomicAdd operation. What I need to know is the difference in performance between these two. I know the atomic counter executes in roughly 3 clock cycles on my GeForce 840M. The reason behind this is that I would like to add an arbitrary number to the counter rather than just increment by 1. |
1 | Plotting 2D map coordinates to OpenGL 3D projection I'm trying to convert a 2D map created in my map editor to a 3D plotting with OpenGL. This is my map generated in my map editor. Those vertices are relative to my Cartesian origin world coordinate (top of the picture) and I'm applying this formula to convert them to OpenGL object coordinates, with a world size of 800x600: x = (X / 800) - 0.5, y = (Y / 600) - 0.5. Getting this result (first object face): 0.48625, 0.068333333; 0.12625, 0.07; 0.12875, 0.481666667; 0.4875, 0.486666667. Plotting this vertex buffer in OpenGL, I got a very weird result. So how can I get a 3D model from those vertex positions? Like this picture. I'm rendering OpenGL in triangles mode and using this example as the starting point: https://github.com/JoeyDeVries/LearnOpenGL/blob/master/src/1.getting_started/7.4.camera_class/camera_class.cpp Edit: Using the conversion formula and the Earcut tessellation (https://github.com/mapbox/earcut.hpp), I've finally got this rectangle rendering correctly inside OpenGL. With two planes differing only in the Z axis, the problem now is how to render its laterals, since Earcut only works with 2D coordinates... |
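The conversion formula above as code (a sketch assuming the 800x600 world size stated in the question and a subtraction in the original formula, which centers the map on the origin):

```c
#include <assert.h>

/* Map-editor pixel coordinates -> centered OpenGL object coordinates,
 * for an 800x600 map. The map center lands on the origin. */
static void map_to_gl(float X, float Y, float *x, float *y)
{
    *x = X / 800.0f - 0.5f;
    *y = Y / 600.0f - 0.5f;
}
```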
1 | How can I create an orthographic display that handles different screen dimensions? I'm trying to create an iPad/iPhone game using GLES 2.0 that contains a 3D scene with a heads-up display GUI overlaid on top. However, this problem would also apply if I were to port my game to a computer and run the game in a resizable window, or allow the user to change screen resolutions... When trying to make the 2D GUI/HUD work, I've made the assumption that all I'm really doing is drawing a load of 2D textured 'quads' on the screen, and I'm trying to treat the orthographic projection as an old-style 2D display with (0, 0) in the upper left and (screenWidth, screenHeight) in the lower right. This causes me all sorts of confusion when I rotate my iPad into landscape mode, since I can't work out what to put into my projection and modelview matrices to turn everything around the right way. It also gets messy if I want to support the iPad's large screen, an iPhone, or a Retina display, since I then have to draw three sets of textures for everything and work out which ones to use. Should I be trying to map the 2D OpenGL coords (-1 to 1) to the screen? While typing out this question, it occurs to me that I could keep my origin in the centre, still running -1 to 1 along the axes. This would let me scale my 2D content appropriately on the different screen sizes, but wouldn't I end up with the textures being scaled and possibly losing quality? I'm using OpenGL ES 2.0 and have a matrix library that has equivalents to the GLES 1.1 glOrthof() and glFrustumf() calls. |
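A sketch of the pixel-space mapping the question describes (our helper, not from any library): convert a pixel coordinate with a top-left origin into GL's normalized device coordinates, which is all an orthographic HUD projection amounts to. The screen size is a parameter, so rotation or resolution changes only change the two numbers passed in.

```c
#include <assert.h>

/* Pixel coordinates (origin top-left, y down) to normalized device
 * coordinates (origin center, y up, -1..1 both axes). */
static void pixel_to_ndc(float px, float py, float screen_w, float screen_h,
                         float *nx, float *ny)
{
    *nx = px / screen_w * 2.0f - 1.0f;
    *ny = 1.0f - py / screen_h * 2.0f; /* flip y for the top-left origin */
}
```

This is equivalent to glOrthof(0, screen_w, screen_h, 0, -1, 1); feeding the current width and height on every resize or rotation keeps the HUD pixel-accurate without multiple texture sets.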
1 | Is learning OpenGL 2.1 today a bad idea? Possible Duplicate: Is learning OpenGL 2.1 useless today? That question was asked around 2 years ago and I have read the answers to it, but that was 2 years ago, and I would like to know if it's a bad idea for me to learn OpenGL 2.1 today. I bought the OpenGL SuperBible (4th edition) and not the 5th because some user in the ratings said that it was much better, and I believed him. But now I'm afraid that was a long time ago. Thanks for all your feedback! |
1 | Local shape color blending I am trying to implement this in Unity 4 Pro, but I am stuck on the blending part. I don't understand how you could blend multiple textures' colors using multiple volumes on an object. How could you access those volumes in the shader and check for "collision"? It seems to be very similar to vertex/pixel lighting. But maybe I am wrong. Here is the simple effect I am trying to create. |
1 | GLSL: associating multiple uniform samplerBuffers At the moment I'm not sure how my VBO and TBO associate with a specific uniform samplerBuffer in my shader; I have not linked them using the location, or linked the VBO and TBO together. It seems to still work, though. But now that I am using multiple samplerBuffers, how do I establish the link between a specific TBO and the uniform? Do I need to use glUniformX? |
1 | Where did I lose the vertex binding? I am making an OpenGL render framework for my bachelor graduation work. I created simple Model and Shader classes. But when I tried to render some vertices, they did not render. I saw with an OpenGL profiler that I have no vertex data, but in the code I bind it. Here is my main code:

```cpp
GLfloat vertex_positions[] = {
     0.0f,  0.5f, 1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.0f, 1.0f, 0.0f,
    -0.5f, -0.5f, 0.0f, 0.0f, 1.0f
};

GLchar *vertexShader = (GLchar *)
    "#version 150\n"
    "\n"
    "in vec2 position;\n"
    "\n"
    "void main()\n"
    "{\n"
    "    gl_Position = vec4(position, 0.0, 1.0);\n"
    "}";

GLchar *fragmentShader = (GLchar *)
    "#version 400 core\n"
    "\n"
    "out vec4 fColor;\n"
    "\n"
    "void main()\n"
    "{\n"
    "    fColor = vec4(0.5, 0.4, 0.8, 1.0);\n"
    "}";

static const GLfloat vertex_colors[] = {
    1.0f, 1.0f, 1.0f, 1.0f,
    1.0f, 1.0f, 0.0f, 1.0f,
    1.0f, 0.0f, 1.0f, 1.0f,
    0.0f, 1.0f, 1.0f, 1.0f
};

Shader shader;
shader.CompileShader(Shader::ShaderType::Fragment, fragmentShader);
shader.CompileShader(Shader::ShaderType::Vertex, vertexShader);
glBindFragDataLocation(shader.GetProgram(), 0, "fColor");
shader.Link();
Texture texture((GLfloat *) vertex_colors, 16);
Model model(vertex_positions, &shader, &texture);
while (!glfwWindowShouldClose(window)) {
    static const float black[] = { 0.2f, 0.3f, 1.0f, 1.0f };
    glClearBufferfv(GL_COLOR, 0, black);
    shader.Bind();
    model.Draw();
    glfwPollEvents();
    glfwSwapBuffers(window);
}
```

And my Model class:

```cpp
void Model::Draw()
{
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

Model::Model(GLfloat meshVert[], int m_size, GLuint vertexInd[], int v_size,
             Shader *material, Texture *texture)
{
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * m_size, meshVert, GL_STATIC_DRAW);
    GLint posAttrib = glGetAttribLocation(material->GetProgram(), "position");
    glEnableVertexAttribArray((GLuint) posAttrib);
    glVertexAttribPointer((GLuint) posAttrib, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), 0);
    GLint colAttrib = glGetAttribLocation(material->GetProgram(), "color");
    glEnableVertexAttribArray((GLuint) colAttrib);
    glVertexAttribPointer((GLuint) colAttrib, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void *)(2 * sizeof(GLfloat)));
}
```

P.S. The OpenGL profiler breaks on the glDrawArrays command, but when I looked at the data info, it was empty. That's why I think I lost some data binding. P.P.S. System: macOS Sierra, Intel HD 6000. |
1 | Why does the lighting change the object's color? I have code that draws a sphere. Without lighting it is white, but if I enable lighting, it's drawn in gray. I don't know why the sphere changed its color.

```c
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

void init(void)
{
    GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat mat_shininess[] = { 50.0 };
    GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_DEPTH_TEST);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSolidSphere(1.0, 20, 16);
    glFlush();
}

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (w <= h)
        glOrtho(-1.5, 1.5, -1.5 * (GLfloat)h / (GLfloat)w, 1.5 * (GLfloat)h / (GLfloat)w, -10.0, 10.0);
    else
        glOrtho(-1.5 * (GLfloat)w / (GLfloat)h, 1.5 * (GLfloat)w / (GLfloat)h, -1.5, 1.5, -10.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}
``` |
1 | (LWJGL) Resize window content I am having a little issue with my game: trying to get the screen to draw a ton larger. I am going for a retro type of feel for my game and I believe that this would help with that. I am basically trying to take a small area (for example 100x177) and draw it 9 times the original size while still keeping nearest-neighbor filtering. The image below shows what I am trying to achieve (image taken from http://forums.tigsource.com/index.php?topic=27928.0). I have been trying to render the current frame to a framebuffer object, then draw the framebuffer's texture larger on the window. I have not been able to find any example code or figure out how I would go about doing that. Thank you in advance for the help. EDIT: Added sample code. This code is in the same method where I initialize OpenGL:

```java
fboID = GL30.glGenFramebuffers();
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboID);
colID = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, colID);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, scaleWidth, scaleHeight, 0,
        GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
depID = GL30.glGenRenderbuffers();
GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, depID);
GL30.glRenderbufferStorage(GL30.GL_RENDERBUFFER, GL11.GL_DEPTH_COMPONENT, scaleWidth, scaleHeight);
GL30.glFramebufferRenderbuffer(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT, GL30.GL_RENDERBUFFER, depID);
GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, colID, 0);
drawBuffs = BufferUtils.createIntBuffer(1);
drawBuffs.put(0, GL30.GL_COLOR_ATTACHMENT0);
GL20.glDrawBuffers(drawBuffs);
if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE) {
    System.out.println("Framebuffer not complete!");
} else {
    System.out.println("Framebuffer is complete!");
}
```

And this is my render method that the game loop runs (updates at 60fps):

```java
// clear screen
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

// Start FBO Rendering Code
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboID);
// Resets the current viewport: set viewport to be the size of the FBO
GL11.glViewport(0, 0, scaleWidth, scaleHeight);
// Clear the FrameBuffer
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
// Actual render code!
gameMap.render();

// draw the texture from the FBO
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
GL11.glViewport(0, 0, scaleWidth * scale, scaleHeight * scale);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f(0.0f, 0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f((float) scaleWidth * scale, 0.0f, 0.0f);
GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex3f((float) scaleWidth * scale, (float) scaleHeight * scale, 0.0f);
GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, (float) scaleHeight * scale, 0.0f);
GL11.glEnd();

// Resets the current viewport
GL11.glViewport(0, 0, scaleWidth * scale, scaleHeight * scale);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
// let subsystem paint
if (callback != null) {
    callback.frameRendering();
}
// update window contents
Display.update();
```

I had to comment out `GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, colID, 0);` because it was throwing a "Function not supported" error. Thanks |
1 | Does scaling affect performance in OpenGL? I've never been able to understand the best practice in this context. I usually want to ship my game with as small a size as possible, so wherever possible I try to scale my graphics. Let's suppose I have to draw a 1000 x 300 px wall of yellow color in my game. I usually just use a 3 x 3 px yellow image and stretch it in game (using a nearest neighbor filter). Is this the right approach? Let us consider another situation. Suppose I wish to render rain in my game: basically 2 x 30 px blue-white gradient streaks, with at most 200 drops rendered at any time. Now if I just ship a 2 x 6 px streak with the game and scale it at runtime, will it affect performance? In short, how does scaling affect performance in OpenGL?
1 | glfw resizing causing image scaling I have a quad rendered that extends from the top left of the window to the width of the window and is 64 pixels high. When I resize the window from its initial size, the quad and text scale proportionately bigger or smaller, the same way Photoshop can scale an image. What I'm seeking, on a basic level, is that regardless of how I resize the window, everything drawn remains the same size. In the image below, the right side is the initial size and the left side is what happens when I drag the window to a smaller size. The red bar and text scale with it. This is how I'm handling my resizing:
glfwSetWindowSizeCallback(pWindow, WindowSizeCallback); // initialized after context
void WindowSizeCallback(GLFWwindow* window, int width, int height)
{
    glfwSetWindowSize(window, width, height);
}
1 | How to implement weighted blended order independent transparency in OpenGL? I want to implement weighted blended OIT in my C++ OpenGL 3D game, but I didn't find any C++ examples of weighted blended order independent transparency. Paper: How do I clear accumTexture (a framebuffer attachment) to vec4(0.0)? How do I clear revealageTexture (a framebuffer attachment) to float(1.0)?
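Since the question is about the two buffers McGuire and Bavoil's technique needs, it may help to see the resolve math those clear values feed into. Below is a hedged NumPy sketch of the compositing step; the depth-based weight function is just one of the variants suggested in the paper, and the layer tuples stand in for per-fragment shader inputs:

```python
import numpy as np

def wboit_resolve(layers, background):
    # accumulation target starts at vec4(0), revealage at float(1) --
    # exactly the clear values the question asks about
    accum = np.zeros(4)
    revealage = 1.0
    for color, alpha, depth in layers:
        # depth-based weight; one of several variants from the paper
        w = alpha * max(1e-2, 3e3 * (1.0 - depth) ** 3)
        accum[:3] += np.asarray(color, float) * alpha * w
        accum[3] += alpha * w
        revealage *= (1.0 - alpha)  # multiplicative blend into the revealage target
    avg_color = accum[:3] / max(accum[3], 1e-5)
    # final full-screen composite over the opaque background
    return avg_color * (1.0 - revealage) + np.asarray(background, float) * revealage
```

The point of the weighted average is that the result does not depend on the order the transparent layers arrive in, which is why no sorting is needed. (In GL itself, per-attachment clears can be done with glClearBufferfv rather than glClear.)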
1 | Wrong texture position on camera move When I move my character in my game, the camera follows it. I have no problem drawing this character when it is moving, or drawing another character moving while the camera is still. But when I move both of them, the texture position of the second character is wrong, as you can see here: Under the hood I round the values before moving the viewport and drawing textures, and I have had no problems except in this case. How can I fix this?
1 | How do I render a filled and stroked path using OpenGL? I want to render a 2-dimensional geometric path consisting of Bézier curves and straight lines. Paths can be concave. What is the most efficient way to draw this using modern OpenGL? Can I do this with a vertex shader? How should I store the path segments?
1 | How to compose a matrix to perform isometric (dimetric) projection of a world coordinate? I have a 2D unit vector containing a world coordinate (the player's direction), and I want to convert that to screen coordinates (classic isometric tiles). I'm aware I can achieve this by rotating around the relevant axes, but I want to see and understand how to do this using a purely matrix-based approach. Partly because I'm learning 'modern OpenGL' (v2+) and partly because I will want to use this same technique for other things, so I need a solid understanding, and my math ability is a little lacking. If needed, my screen's coordinate system has its origin at top left with x and y pointing right and down respectively. Also, my vertex positions are converted to the NDC range in my vertex shader, if that's relevant. Language is C with no supporting libraries.
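One way to think about composing such a matrix: for a classic 2:1 dimetric view, the columns of a 2x2 matrix are simply where the world x and y axes land on screen. A minimal NumPy sketch; the 64x32 tile size is an assumption, and the signs match the question's y-down screen axes:

```python
import numpy as np

def dimetric_matrix(tile_w=64, tile_h=32):
    # column 0: screen image of world +x (right and down)
    # column 1: screen image of world +y (left and down)
    return np.array([[ tile_w / 2, -tile_w / 2],
                     [ tile_h / 2,  tile_h / 2]], dtype=float)

def world_to_screen(p, m=None):
    m = dimetric_matrix() if m is None else m
    return m @ np.asarray(p, dtype=float)
```

World (1, 0) maps to (32, 16), world (0, 1) to (-32, 16), and their sum (1, 1) lands straight down at (0, 32), which produces the diamond tile layout; inverting the same matrix converts a mouse position back to world coordinates.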
1 | Light Attenuation Formula Derivation I understand that when sampling the brightness of a given point on a surface, a certain cutoff needs to be taken into consideration. In other words, when the light is further away, the intensity decreases. I came across the following formula that is used to compute the aforementioned attenuation: 1.0 / (1.0 + a * dist + b * dist * dist). However, I have not been able to find out why and how this formula is derived. From the formula, what I can understand is that as the distance gets larger, the result becomes smaller, but how it is derived is beyond me. If distance plays such an important role in the formula, which helps determine the intensity, could we simply ignore a and b and use attenuation = 1.0 / (1.0 + dist)? Does anyone know how the formula is derived and the importance the values a and b play within the formula? Thanks.
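To make the roles of the terms concrete, here is a small Python sketch of the formula; the default a and b values are typical point-light choices, not anything canonical. The constant 1.0 caps the result at 1 near the light (and avoids division by zero), the linear term a*dist dominates at mid range, and the quadratic term b*dist*dist, which matches the physical inverse-square falloff, dominates far away. Dropping a and b as suggested leaves purely linear falloff, which tends to look too dim close up and too bright far away:

```python
def attenuation(dist, a=0.09, b=0.032):
    # 1.0 keeps the result <= 1 at dist == 0 (no division blow-up),
    # a scales the linear falloff, b the quadratic (inverse-square-like) falloff
    return 1.0 / (1.0 + a * dist + b * dist * dist)
```

With b > 0 the function approaches inverse-square behaviour for large dist, which is the physically motivated part; a and the constant 1.0 are artistic controls for how fast the curve bends near the light.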
1 | How to render models using a personalized projection matrix? I'm creating a game with a great demand for sprites. Our team is considering automating the sprite generation using 3D models. The problem is we have a very particular orthographic projection: We have already set up a good projection matrix. The problem is that none of today's 3D renderers have an option for using custom matrices. How is this kind of problem dealt with?
1 | Seamless tilemap rendering (borderless adjacent images) I have a 2D game engine that draws tilemaps by drawing tiles from a tileset image. Because by default OpenGL can only wrap the entire texture (GL_REPEAT), and not just part of it, each tile is split off into a separate texture. Then regions of the same tile are rendered adjacent to each other. Here's what it looks like when it's working as intended: However, as soon as you introduce fractional scaling, seams appear: Why does this happen? I thought it was due to linear filtering blending the borders of the quads, but it still happens with point filtering. The only solution I've found so far is to ensure all positioning and scaling only happen at integer values, and to use point filtering. This can degrade the visual quality of the game (particularly in that sub-pixel positioning no longer works, so motion is not as smooth). Things I have tried or considered: antialiasing reduces, but does not entirely eliminate, the seams; turning off mipmapping has no effect; rendering each tile individually and extruding the edges by 1px works, but this is a de-optimisation, since it can no longer render regions of tiles in one go, and it creates other artefacts along the edges of areas of transparency; adding a 1px border around source images and repeating the last pixels works, but then they are no longer power of two, causing compatibility problems with systems without NPOT support; writing a custom shader to handle tiled images, but then what would you do differently? GL_REPEAT should be grabbing the pixel from the opposite side of the image at the borders, not picking up transparency. The geometry is exactly adjacent; there are no floating point rounding errors. If the fragment shader is hard coded to return the same color, the seams disappear. If the textures are set to GL_CLAMP instead of GL_REPEAT, the seams disappear (although the rendering is wrong). If the textures are set to GL_MIRRORED_REPEAT, the seams disappear (although the rendering is wrong again).
If I make the background red, the seams are still white. This suggests it's sampling opaque white from somewhere rather than transparency. So the seams appear only when GL_REPEAT is set. For some reason, in this mode only, at the edges of the geometry there is some bleed/leakage of transparency. How can that be? The entire texture is opaque.
1 | Why does glGetString return a NULL string? I am trying my hand at the GLFW library. I have written a basic program to get the OpenGL renderer and vendor strings. Here is the code:
#include <GL/glew.h>
#include <GL/glfw.h>
#include <cstdio>
#include <cstdlib>
#include <string>
using namespace std;
void shutDown(int returnCode)
{
    printf("There was an error in running the code with error %d\n", returnCode);
    GLenum res = glGetError();
    const GLubyte* errString = gluErrorString(res);
    printf("Error is %s\n", errString);
    glfwTerminate();
    exit(returnCode);
}
int main()
{
    // start GL context and O/S window using GLFW helper library
    if (glfwInit() != GL_TRUE)
        shutDown(1);
    if (glfwOpenWindow(0, 0, 0, 0, 0, 0, 0, 0, GLFW_WINDOW) != GL_TRUE)
        shutDown(2);
    // start GLEW extension handler
    glewInit();
    // get version info
    const GLubyte* renderer = glGetString(GL_RENDERER); // get renderer string
    const GLubyte* version = glGetString(GL_VERSION);   // version as a string
    printf("Renderer: %s\n", renderer);
    printf("OpenGL version supported: %s\n", version);
    // close GL context and any other GLFW resources
    glfwTerminate();
    return 0;
}
I googled this error and found out that we have to initialize the OpenGL context before calling glGetString(). Although I have initialized the OpenGL context using glfwInit(), the function still returns a NULL string. Any ideas? Edit: I have updated the code with error checking mechanisms. On running, this code outputs the following: There was an error in running the code with error 2. Error is: no error
1 | 3D sphere generation I have found a 3D sphere in a closed source game engine that I really like the look of and would like to have something similar in my own 3D engine. This is what the sphere looks like when it is created in the game engine, at program/game start: At program start, a function named CreateSphere is called, and the user has the option to choose a 3D position and a radius for the sphere. That's all I know about the function, since the engine is closed source. Does anyone have any idea of how this sphere might be generated programmatically? I have checked other posts and sites discussing spheres, but none of them has the look of the sphere in the image. Edit: removed some unnecessary information to get to the point of what I need help with.
1 | Can't understand these UV texture coordinates (range is NOT 0.0 to 1.0) I am trying to draw a simple 3D object generated by Google SketchUp 8 Pro in my WebGL app; the model is a simple cylinder. I opened the exported file and copied the vertex positions, indices, normals and texture coordinates into a .json file in order to be able to use them in JavaScript. Everything seems to work fine, except for the texture coordinates, which have some pretty big values, like 46.331676, and also negative values. Now I don't know if I am wrong, but aren't 2D texture coordinates supposed to be in a range from 0.0 to 1.0 only? Well, drawing the model using these texture coordinates gives me a totally weird look, and I can only see the texture properly when the camera is very close to the model, as if the texture has been insanely reduced in size and repeated infinitely across the model's faces. (Yes, I am using GL_REPEAT for the texture wrap.) What I noticed is that if I divide all these coordinates by 10 or 100 I get a much more "normal" look, but still not in the 0.0 to 1.0 range. Here's my json file: http://pastebin.com/Aa4wvGvv Here are my GLSL shaders: http://pastebin.com/DR4K37T9 And here is the .X file exported by SketchUp: http://pastebin.com/hmYAJZWE I've also tried to draw this model using XNA, but it's still not working, using these HLSL shaders: http://pastebin.com/RBgVFq08 I tried exporting the same model to different formats: collada, fbx, and x. All of those yield the same thing.
1 | How to handle wildly varying rendering hardware getting baseline I've recently started with mobile programming (cross platform, also with desktop) and am encountering wildly differing hardware performance, in particular with OpenGL and the GPU. I know I'll basically have to adjust my rendering code, but I'm uncertain of how to detect performance and what reasonable default settings are. I notice that certain shader functions are basically free on a desktop implementation but can be unusable on a mobile device. The problem is I have no way of knowing what features will cause what performance issues on all the devices. So my first issue is that even if I allow configuring options, I'm uncertain which options I have to make configurable. I'm also wondering whether one just writes one very configurable pipeline, or whether I should have 2 distinct options (high/low). I'm also unsure of where to set the default. If I set it for the poorest performer, the graphics will be so minimal that any user with a modern device would dismiss the game. If I set it at some moderate point, the low end devices will basically become a slide show. I was thinking perhaps that I just run some benchmarks when the user first installs and guess what works, but I've not seen a game do this before.
1 | GLSL Shader compiles, but source is empty I'm trying to compile a GLSL shader, for which I use the following code.
// Initialization
SDL_Window* boringInitStuff()
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    Uint32 windowFlags = SDL_WINDOW_OPENGL;
    SDL_Window* sdlWindow = SDL_CreateWindow("Boooring", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 400, 400, windowFlags);
    SDL_GL_CreateContext(sdlWindow);
    glewExperimental = GL_TRUE;
    glewInit();
    return sdlWindow;
}
// File parser
void readFile(std::string path, std::string &data)
{
    std::ifstream f(path.c_str(), std::ios::binary);
    data.assign((std::istreambuf_iterator<char>(f)), (std::istreambuf_iterator<char>()));
}
// Main
int main(int argv, char** args)
{
    SDL_Window* sdlWindow = boringInitStuff();
    std::string buffer;
    readFile("./compile_test.vert", buffer);
    const char* cBuffer = buffer.c_str();
    GLuint shaderID = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shaderID, 1, &cBuffer, nullptr);
    glCompileShader(shaderID);
    while (!SDL_QuitRequested()) { SDL_GL_SwapWindow(sdlWindow); }
    return 0;
}
But when I try to inspect the source code in gDEBugger, the source code is gone. Linking of course doesn't work as well. The weird thing is that the compilation error checking works. EDIT When I copy & paste the main part into another OpenGL project, it works.
1 | Good system for experimenting with shaders in different languages I'm trying to experiment a bit with shaders, and they have been programmed in several different languages (GLSL, Cg and HLSL). Now most systems (DirectX, OpenGL) have support for only one of them. Does anybody know of a system that easily allows all 3? Thank you
1 | glsl demo suggestions? In a lot of places I interviewed recently, I have been asked many times if I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked me if I could send them a sample of 'something' that is "visually polished". So, I decided to take the plunge and wrote a simple shader in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders. I have a very short window left to send something to them, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention. Thanks
1 | Render rotated rectangle inside other rectangle bounds using Libgdx I have this code to generate a red rectangle inside a grey rectangle:
new Rectangle(grey_rectangle_position_x, Game.SCREEN_HEIGHT / 2 - Rectangle.height / 2, 0);
This code makes the following: Now, I want to rotate the red rectangle and randomize its X coordinate but keep it inside the grey rectangle. This is the actual code of the rectangle:
public class Rectangle {
    public static int width = 30;
    public static int height = 60;
    public static Color color = Color.RED;
    public Rectangle(int x, int y, int angle) {
        super(x, y, angle);
    }
    public void render(ShapeRenderer shapeRenderer) {
        shapeRenderer.begin(ShapeType.Filled);
        shapeRenderer.setColor(color);
        shapeRenderer.identity();
        shapeRenderer.translate(pos_x + width / 2, pos_y, 0.f);
        shapeRenderer.rotate(0.f, 0.f, 1.f, angle);
        shapeRenderer.rect(-width / 2, -height / 2, width, height);
        shapeRenderer.end();
    }
}
And this is what I should do to randomize the position of the rectangle:
new Rectangle(randomized_x_position, Game.SCREEN_HEIGHT / 2 - Rectangle.height / 2, 0);
The problem is that I don't know how to calculate the randomized_x_position in order to keep the rectangle inside the grey rectangle. Another problem is that the current Rectangle.render() method draws my rectangle outside the grey rectangle after rotation: When what I really want is:
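One way to approach the bounds part, sketched in Python rather than libGDX (the function names are made up for illustration): compute the horizontal extent of the rotated rectangle's axis-aligned bounding box first, then shrink the random range by that half extent so no rotation can poke outside the grey rectangle:

```python
import math
import random

def rotated_half_width(w, h, angle_deg):
    # half width of the axis-aligned bounding box of a w x h rectangle
    # rotated about its center by angle_deg
    a = math.radians(angle_deg)
    return (abs(w * math.cos(a)) + abs(h * math.sin(a))) / 2.0

def random_center_x(left, right, w, h, angle_deg, rng=random):
    # pick a center x such that the rotated rectangle stays inside [left, right]
    hw = rotated_half_width(w, h, angle_deg)
    return rng.uniform(left + hw, right - hw)
```

The same AABB reasoning also explains the clipping after rotation: the rectangle's footprint widens as it rotates (up to height wide at 90 degrees), so the allowed range for its center must shrink accordingly.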
1 | Rotate a plane defined by its normal and its distance First, apologies for the amount of pictures; it's a bit hard trying to explain my problem without them. Hope I've provided all the relevant code. If you feel you want to know how I am doing something else, please tell me and I will include it. I've been trying to work out how to rotate a plane in 3D space, but I keep hitting dead ends. The situation is the following: I have a physics engine where I simulate a moving sphere inside a cube. To make things simpler, I have only drawn the top and bottom planes and moved the sphere vertically. I have defined my two planes as follows:
CollisionPlane* p = new CollisionPlane(glm::vec3(0.0, 1.0, 0.0), 5.0);
CollisionPlane* p2 = new CollisionPlane(glm::vec3(0.0, -1.0, 0.0), 5.0);
Where the vec3 defines the normal of the plane, and the second parameter defines the distance of the plane along the normal. The reason I defined their distance as 5 is because I have scaled the model that represents my two planes by 10 on all axes, so now the distance from the origin is 5 to top and bottom, if that makes any sense. To give you some reference, I am creating my two planes as two line loops, and I have a model which models those two line loops, like the following:
// top plane
std::shared_ptr<Mesh> f1 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts = { Vertex(glm::vec3(0.5, 0.5, 0.5)), Vertex(glm::vec3(0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, 0.5)) };
f1->BufferVertices(verts);
// bottom plane
std::shared_ptr<Mesh> f2 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts2 = { Vertex(glm::vec3(0.5, -0.5, 0.5)), Vertex(glm::vec3(0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, 0.5)) };
f2->BufferVertices(verts2);
std::shared_ptr<Model> faceModel = std::make_shared<Model>(std::vector<std::shared_ptr<Mesh>>{ f1, f2 });
And like I said, I scale the model by 10.
Now I have a sphere that moves up and down and collides with each face, and the collision response is implemented as well. The problem I am facing is when I try to rotate my planes. It seems to work fine when I rotate around the Z axis, but when I rotate around the X axis it doesn't. The following shows the result of rotating around Z: However, if I try to rotate around X, the ball penetrates the bottom plane, as if the collision plane has moved down: The following is the code I've tried to rotate the normals and the planes:
for (int i = 0; i < m_entities.size(); i++)
{
    glm::mat3 normalMatrix = glm::mat3_cast(glm::angleAxis(glm::radians(6.0f), glm::vec3(0.0, 0.0, 1.0)));
    CollisionPlane* p = (CollisionPlane*)m_entities[i]->GetCollisionVolume();
    glm::vec3 normalDivLength = p->GetNormal() / glm::length(p->GetNormal());
    glm::vec3 pointOnPlane = normalDivLength * p->GetDistance();
    glm::vec3 newNormal = normalMatrix * normalDivLength;
    glm::vec3 newPointOnPlane = newNormal * (normalMatrix * (pointOnPlane - glm::vec3(0.0)) + glm::vec3(0.0));
    p->SetNormal(newNormal);
    float newDistance = newPointOnPlane.x + newPointOnPlane.y + newPointOnPlane.z;
    p->SetDistance(newDistance);
}
I've done the same thing for rotating around X, except I changed glm::vec3(0.0, 0.0, 1.0) to glm::vec3(1.0, 0.0, 0.0). m_entities are basically my physics entities that hold the different collision shapes (spheres, planes, etc). I based my code on the answer here: Rotating plane with normal and distance. I can't seem to figure out at all why it works when I rotate around Z but not when I rotate around X. Am I missing something crucial?
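A property worth checking against code like the above, sketched here in NumPy under the same normal-plus-distance representation: for a rotation about the origin, the rotated plane's distance must come out unchanged, because the plane's nearest point to the origin just moves along with the rotation. If the computed distance changes after a pure rotation, the distance update is where the bug is:

```python
import numpy as np

def rotate_plane(n, d, r):
    # plane: all points x with dot(n, x) == d; rotate every point by matrix r
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    point = n * d                        # the point on the plane nearest the origin
    new_n = r @ n
    new_d = float(new_n @ (r @ point))   # distance = dot(new normal, rotated point)
    return new_n, new_d

def rot_x(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(deg):
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
```

Since r is orthonormal, new_d = (r n) . (r (n d)) = d (n . n) = d for any axis, so X and Z rotations should behave identically under this math.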
1 | What's better drawing every interval that the window updates, or drawing when necessary and updating when drawing? So what's better? In case the title is a bit confusing, I mean: 1) Drawing every window update interval. For example, for a 60FPS window, every 17 milliseconds:
window.setFramerateLimit(60);
// Update every 17 milliseconds for 60FPS
while (window.isOpen())
{
    window.display();
}
Or 2) Drawing when you need to (for example, when a sprite moves) and displaying it straight after. For example:
void func(sf::RenderWindow &window)
{
    window.draw(sf::Sprite()); // Whatever you're drawing
    window.display();
}
Or is there a much better way?
1 | For deferred rendering and SSAO, what coordinate system are the normals actually in? So, I'm following the very helpful LearnOpenGL online tutorials, and I'm working on implementing SSAO. I don't have a deferred rendering pipeline, but I need to collect normals during my depth pass so that I can sample them for the SSAO effect. I'm following this tutorial: https://learnopengl.com/Advanced-Lighting/SSAO, and referencing this one when needed: https://learnopengl.com/Advanced-Lighting/Deferred-Shading. However, neither of these tutorials really explains what's going on with the normals. What is the normalMatrix being used in the shader?
mat3 normalMatrix = transpose(inverse(mat3(view * model)));
Normal = normalMatrix * (invertedNormals ? -aNormal : aNormal);
I'm not worried about the invertedNormals conditional; that's probably just something the author added for handling different conventions. But what the heck is going on with that normalMatrix? Why would we take the inverse transpose of the view * model matrix? If we're starting with aNormal in object space (whether it's coming directly from a vertex buffer or from a normal map TBN calculation), I can't for the life of me figure out what coordinate system the resulting normal is going to end up in! So, long story short, what coordinate frame should my normals be in if I'm to use them for SSAO? World space? View space? Clip space? None of the above? If it's some weird coordinate system, can somebody help me get there? I'm probably more confused than I should be.
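On the math itself: taking the inverse transpose of mat3(view * model) maps an object-space normal into view space, which is what an SSAO pass wants, since it compares normals against view-space positions and sample kernels. The inverse transpose (rather than the matrix itself) is needed so normals stay perpendicular to surfaces under non-uniform scale; a small NumPy check of that claim:

```python
import numpy as np

def normal_matrix(m):
    # inverse-transpose of the upper-left 3x3 of a transform;
    # equal to m itself when m is a pure rotation
    return np.linalg.inv(m[:3, :3]).T

# a surface with normal n and tangent t (perpendicular), under non-uniform scale
m = np.diag([2.0, 1.0, 1.0])
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
t = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

naive = m @ n                  # transforming the normal like a position: wrong
correct = normal_matrix(m) @ n # stays perpendicular to the transformed surface
```

The transformed tangent m @ t lies in the stretched surface; only the inverse-transpose version of the normal remains perpendicular to it, while for pure rotations (no scale) both approaches coincide.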
1 | Setting up openGL on Linux I am trying to figure out how to set up OpenGL on Linux based operating systems (Ubuntu 12.10) without using any GLUT libraries. I don't have much experience with developing on Linux distributions. To be more precise, I'm trying to make a statically linked library that compiles on the three major OS flavours (using some preprocessor macros to figure out the OS it's being compiled on and switch up renderer code, window creation code, etc). On Windows I have successfully created an OpenGL 4.0 rendering context using wglCreateContextAttribsARB and the GL/gl.h provided by MinGW. On Ubuntu I have managed to create an OpenGL 2.2 rendering context using glXCreateContextAttribsARB and the GL/gl.h provided by Mesa 3D. This was however in a VMware Ubuntu VM, and so Mesa (9.0.2 I believe) would only manage to go that far with a virtual graphics card. So I decided to go ahead and install Ubuntu on an external HDD to take advantage of the graphics card, only to later find out that Mesa (9.1) can only go up to OpenGL 3.1. After a little bit of research I figured out that the AMD and Nvidia drivers have support for OpenGL 4.3. So I installed the AMD 13.1 drivers (I have a Sapphire 7970 Vapor-X 6GB), but the OpenGL header files that come with them are really strange: GL/glATI.h tries to include windows.h, it doesn't define APIENTRY, etc. The question is: what's the proper way of setting up OpenGL (with 4.3 support) for Linux? And even if the GL/glATI.h files were proper, wouldn't they fail if run on an Nvidia card? I'm manually creating the window with XCreateWindow and then loading the extensions with glXGetProcAddressARB, so I'm not going to use GLUT or any GLUT-like library.
1 | How to mix pixel colors in Shader? I have a pixel that has a colour RGB. This color is calculated by the shader and can be anything. How can I override this color with a colour I choose? If my pixel is white it's simple:
half3 original = half3(1, 1, 1);
half3 mycolor = half3(1, 0, 0);
half3 result = original * mycolor;
But what can I do if
half3 original = half3(0.36, 0.74, 0.18);
half3 mycolor = half3(1, 0, 0);
half3 result = ?
What operation or function should I apply to my original pixel color to override it?
1 | OpenTK Camera Rotation issue I'm currently developing a 3D game engine in C# using OpenTK. I have basic game objects, and each game object has a transform (translation, rotation and scaling). A game object can have components (much like Unity). One game object (the camera game object) has three components: a camera, a free move and a free look component. The free move component allows the user to move the camera around the scene, whilst the free look component is SUPPOSED to rotate the camera. However, I get this odd effect: https://youtu.be/rcdWZcaEqSY (Note that the plane isn't changing position; it seems the camera is ... turning upside down?) On top of that, once I rotate the camera, moving the camera (using the arrow keys) gets really weird: up can become down, left can become right. It's all very odd. Anywho, here is the update method for my free look component:
protected override void OnUpdate(object sender, UpdateEventArgs e)
{
    Rectangle bounds = Game.GetCurrentGame().Window.Bounds;
    Vector2 center = new Vector2(bounds.Left + (bounds.Width / 2), bounds.Top + (bounds.Height / 2));
    Vector2 mousePosition = new Vector2(System.Windows.Forms.Cursor.Position.X, System.Windows.Forms.Cursor.Position.Y);
    Vector2 deltaPosition = center - mousePosition;
    bool rotX = deltaPosition.X != 0;
    bool rotY = deltaPosition.Y != 0;
    if (rotY)
        Transform.Rotate(new Vector3(0, 1, 0), MathHelper.DegreesToRadians(-deltaPosition.X * sensitivity));
    if (rotX)
        Transform.Rotate(new Vector3(1, 0, 0), MathHelper.DegreesToRadians(-deltaPosition.Y * sensitivity));
    if (rotY || rotX)
        System.Windows.Forms.Cursor.Position = new Point((int)center.X, (int)center.Y);
    base.OnUpdate(sender, e);
}
here is the Rotate method in my transform class:
public void Rotate(Vector3 axis, float angle)
{
    rotation = (Quaternion.FromAxisAngle(axis, angle) * rotation).Normalized();
}
I hope someone can help me get to the bottom of this; I've been so frustrated! If there's any other code you need to see, just let me know.
1 | Proper way to update pixel array data For a game that updates a board every frame, I am calculating the next arrangement of the board, updating the pixel array data, and rendering the board as a 2D texture on a quad the size of the screen using OpenGL. I use
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, &pixel_data.front());
to initialize the texture object and
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex_width, tex_height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, &pixel_data.front());
to update the texture. The slowest part of the code is update_pixels() (below), which takes the 256x128 element input array and creates a four component pixel array based on it. This function is very slow (45ms) and it seems it should be done differently. What would be the proper way to implement this function?
// Update pixel data per frame
void update_pixels()
{
    float state = 0;
    size_t it = 0;
    for (size_t y = 0; y < board.getWorldHeight(); y++)
    {
        for (size_t x = 0; x < board.getWorldWidth(); x++)
        {
            state = board.get_state(y * board.getWorldWidth() + x);
            pixel_data[it] = static_cast<unsigned char>(state * 255); it++;
            pixel_data[it] = static_cast<unsigned char>(state * 255); it++;
            pixel_data[it] = static_cast<unsigned char>(state * 255); it++;
            pixel_data[it] = static_cast<unsigned char>(1 * 255);     it++;
        }
    }
}
// Time measurement
void render()
{
    // render_frame(): glDrawElements() for quad and scale uniform update, takes 0.2ms
    // generate_states(): new states for board, takes 1.3ms
    auto t1 = std::chrono::high_resolution_clock::now();
    update_pixels(); // create new pixel array, takes 32 ms
    auto t2 = std::chrono::high_resolution_clock::now();
    float update_time = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count() / 1000.0f;
    cout << update_time;
    glfwSwapBuffers(); // from update_pixels to swap takes 13ms
}
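For comparison, here is a hedged sketch of the same greyscale-to-RGBA expansion done in bulk rather than per channel. In C++ the analogous fix is writing whole 32-bit pixels (or memcpy-able rows) instead of four separate byte stores, but the structure is easiest to see with NumPy (`states` stands in for the board's float state grid):

```python
import numpy as np

def update_pixels(states):
    # states: (H, W) float array with values in [0, 1]
    grey = (states * 255).astype(np.uint8)
    pixels = np.empty(states.shape + (4,), dtype=np.uint8)
    pixels[..., 0] = grey   # R
    pixels[..., 1] = grey   # G
    pixels[..., 2] = grey   # B
    pixels[..., 3] = 255    # A
    return pixels.reshape(-1)  # flat buffer, ready for glTexSubImage2D
```

An even cheaper route is to skip the CPU expansion entirely: upload the single-channel state array as a GL_RED texture and do the grey/alpha expansion in the fragment shader, so only a quarter of the bytes cross the bus each frame.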
1 | Rendering multiple textures on same image for terrain with index buffers (LWJGL 3) I have started with LWJGL3 and am trying to build a game engine. I'm stuck on generating terrain; this is my second big problem and I'm exhausted, so I need some advice on how to solve it. What I want to achieve: I want to have all textures in one place and to have some kind of texture map that I will use to render a specific texture at a given place. Example textures map: but the result is: The only thing that I think is the problem is that I use shared points (index buffers), so instead of 8 points, I have 6. Is it possible that the texture then gets all messed up? Should I use all points, even if they are at the same or almost the same location? But then I would have a lot more vertices than actually needed.
1 | Section cut through (solid) geometry I'm looking for an image based (screen space) technique to render section cuts through arbitrary (solid) geometry. I found and studied image based CSG (Kirsch 05, OpenCSG) but found it to be perhaps a bit of overkill for my case, where all I need is a section plane cut. Above is a naive implementation using a discard in the fragment shader, but that obviously is not even halfway there, as I need to close the gaps. Does anyone know of a technique or hack I could use?
1 | How does Halo draw projectiles? I am trying to draw projectiles using billboarding. A projectile consists of a billboarded "particle" and a "tracer". When I billboard a projectile, it cannot be seen when the player's viewing direction is parallel to the projectile axis. How do Halo or other games solve this problem, so that projectiles can be seen from behind?
1 | How do I play a video file in OpenGL? Is there a library that will let me load a movie file and play it in an OpenGL application? Or maybe just a code sample that someone has lying around? I'm also using GLUT, if that makes a difference. I guess file format doesn't matter, although currently my movie is in AVI format. |
1 | recommended shader pipeline infrastructure in core opengl 3.3 I am writing a game project in Go and I am using an OpenGL 3.3 core context to do my rendering stuff. At the moment I have different types of renderers. Each renderer has its own pair of vertex and fragment shaders, a struct of uniform and attribute locations, and a struct with glBuffers, a vertex array object, and numverts (int), which together contain all data required to render one object (mesh). Last but not least, each has a constructor to create and initialize the attribute/uniform locations, a method to load a mesh into mesh data, and the render method itself. I am absolutely not happy with this design. Every time I want to create a new simple shader, I have to write all this code, and I haven't found a way to make the overhead of a new shader smaller. All I was able to do with the help of Go reflection is to automatically get the attribute/uniform locations based on the variable names in the attribute/uniform location struct, and to automatically set the attribute pointers based on the layout of the vertex struct. Another thing that I don't like is that when I want to implement, for example, the functionality of the function glClipPlane of the fixed function pipeline, I need to add the uniform to each shader separately, set this uniform in each shader, and implement the discard of fragments in each fragment shader. Are there some common practices that significantly reduce the code overhead of a new shader? Are there good shader pipelines that you can recommend I take a look at? Are there some good practices to add functionality to several shaders at once?
1 | How to solve artifacts caused by vertex lighting in my voxel engine? My current lighting system bakes the light amount per vertex, based on ray tracing from the light source to the 8 corners of each block and on the block's distance to the light. It works acceptably, but it's definitely not perfect. Of course the blocks are made out of faces, which are made out of triangles. In the situation shown in the screenshot, where there is a light directly behind the camera, you get those weird triangle lighting issues. How can I fix this problem?
1 | Artifacts when drawing particles with some alpha I want to draw some particles in my game. But when I draw one particle above another, the upper particle's alpha channel "clears" the previously drawn particle. I set up blending in OpenGL this way glBlendFunc( GL SRC ALPHA, GL ONE MINUS SRC ALPHA ) My fragment shader for particles is very simple precision highp float precision highp int uniform sampler2D u maps 2 varying vec2 v texture uniform float opaque uniform vec3 colorize void main() vec4 texColor texture2D(u maps 0 , v texture) gl FragColor.rgb texColor.rgb colorize.rgb gl FragColor.a texColor.a opaque I attach a screenshot of this Do you know what I did wrong? I use OpenGL ES 2.0.
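For reference, the "over" operator that glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) implements is order dependent, which is one common source of artifacts like the one described (the other being depth writes rejecting fragments behind already drawn particles). Below is a minimal numeric sketch of the difference between over and additive blending; it is a hypothetical single channel model for illustration, not the poster's actual code:

```c
#include <assert.h>
#include <math.h>

/* "Over" blending, as set up by glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
   Order dependent: particles must be drawn back to front (with depth writes off). */
static float blend_over(float src, float src_a, float dst)
{
    return src * src_a + dst * (1.0f - src_a);
}

/* Additive blending, glBlendFunc(GL_SRC_ALPHA, GL_ONE).
   Order independent, so it is a popular choice for glowing particles. */
static float blend_add(float src, float src_a, float dst)
{
    return src * src_a + dst;
}
```

Swapping the draw order of two half transparent particles changes the result under over blending but not under additive blending, which is why sorting the particles (or switching blend mode, or disabling depth writes while drawing them) is usually part of the fix.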
1 | Failing to move exponential depth term to depth shader in exponential shadow mapping I'm playing around in my little toy project to see if I can understand how exponential shadow mapping works. To begin, I have the following two fragment shaders Light depth texture shader layout(location 0) out float fragmentdepth void main() fragmentdepth gl FragCoord.z Main shader ... vec3 shadowDiv shadowCoords.xyz shadowCoords.w float lightDepth texture(depthMap, shadowDiv.xy) float visibility clamp(exp( 70 shadowDiv.z) exp(70 lightDepth), 0, 1) ... This works fine I have shadows where I expect them, and I'm using the approximation to the boolean function by taking the product of exponentials, as shown in the ESM paper. However, my understanding is that the whole point of this, is that I can move exp(70 lightDepth) out of this shader, and into the shader when I generate the depth texture at the light. That is New light depth texture shader layout(location 0) out float fragmentdepth void main() fragmentdepth exp(70 gl FragCoord.z) Main shader ... vec3 shadowDiv shadowCoords.xyz shadowCoords.w float lightDepth texture(depthMap, shadowDiv.xy) float visibility clamp(exp( 70 shadowDiv.z) lightDepth, 0, 1) ... When I make this change, the entire scene becomes shadowed. I'm sure I'm misunderstanding something somewhere, but I'm not sure where. I have tried varying the constant term (70) between 1 and 70, though this doesn't appear to make a difference. 
I have no errors reported by KHR debug I am creating my light depth texture as genLightDepthMap IO GLTextureObject genLightDepthMap do lightDepthMap lt overPtr (glGenTextures 1) glBindTexture GL TEXTURE 2D lightDepthMap glTexParameteri GL TEXTURE 2D GL TEXTURE MIN FILTER GL LINEAR glTexParameteri GL TEXTURE 2D GL TEXTURE MAG FILTER GL LINEAR glTexParameteri GL TEXTURE 2D GL TEXTURE WRAP S GL CLAMP TO EDGE glTexParameteri GL TEXTURE 2D GL TEXTURE WRAP T GL CLAMP TO EDGE glTexImage2D GL TEXTURE 2D 0 GL DEPTH COMPONENT24 shadowMapResolution shadowMapResolution 0 GL DEPTH COMPONENT GL FLOAT nullPtr |
1 | Vertex shader in OpenGL GLSL transformation of the interior of a textured quad I have a LWJGL project and ran into a problem with a vertex shader I wrote. In my scene I am rendering a map whose ground consists of rectangular tiles. On top of that there are other objects (I used tiny white balls here). Here is a screenshot of my scene screenshot of my scene http 110.imagebam.com download 0 Ktu6XjdWSMpKytTr3Trg 34736 347358711 scene static.jpg You can also click here to see the image. Each of the four larger rectangles at the bottom is one huge quad drawn in one piece. Every one of them contains four coordinates (the corners top left, top right, bottom right, bottom left). Their interior is filled with a texture. The small white balls on top are single game objects each drawn by itself. Note that I aligned them with the vertical edges of the underlying rectangles. I used the following vertex shader to render the scene default shader version 120 void main() gl TexCoord 0 gl MultiTexCoord0 gl Position gl ModelViewProjectionMatrix gl Vertex Now imagine the scene getting covered with water. I modified the shader adding a little water effect water shader version 120 uniform float amplitude uniform float phase void main() gl TexCoord 0 gl MultiTexCoord0 transform x vec4 a position gl Vertex a position.x a position.x amplitude sin(phase a position.x) gl Position gl ModelViewProjectionMatrix a position An animated image of my scene can be found here or here. You can see that the borders of the rectangles keep aligned with the white balls on top of them. However, vertices inside the rectangles (I marked the most distinct areas red) suffer a displacement relative to the white balls. In the last rectangle you can even see the balls cross the white line in the middle. I need the interior of my textured rectangles to be transformed exactly like the white balls on top and move uniformly when being animated by my shader. 
Please let me know if my problem is clear and if any other information is needed. Thank you very much for your help. Greetings Xoric |
1 | Send Geometry Data to Multiple Shaders So I am implementing a deferred rendering model for my engine, and I want to be able to send all scene geometry into a single shader to calculate ambient, diffuse, normal, etc. that's not the question. Once I have all of this data buffered into the GPU, I can render all of that geometry from a camera perspective defined by a uniform for the camera's position in the scene. I am wondering if I can reuse this model data already in VRAM in another shader, translated by a light source's projection matrix, to calculate the shadows on this scene geometry without needing to send the same data to the GPU again. Is there a way to tell my light shader, hey, all that juicy geometry scene data you want is already in VRAM, just use this matrix transformation instead? Alternatively, when I am sending the VAO to the GPU, is there a way to send it to two shaders in parallel, one for the deferred rendering model, one for shadow casting light sources? Thanks so much!
1 | Confusion over GLViewport I'm hoping someone can help me understand GLViewport and what happens when we resize it This will illustrate my confusion.... So, here I have a quad stuck in the middle of the screen. If I have my GLViewport match the device's width and height, I get what is on the first (left hand) picture. Exactly what I would expect. Device resolution, 2560 x 1600 Viewport resolution 2560 x 1600 Quad size 200 x 200 (Note, the image above is not to scale!!!) Quad shape, appears as square Now, for the 2nd (right hand) picture... Device resolution, 2560 x 1600 Viewport resolution 2560 x 1200 (and centered vertically) Quad size (200, 200) Quad shape, appears as rectangle My question is, why is the quad displaying as a rectangle now and not a square? I've confirmed by logging that my quad is 200 x 200 pixels surely the size of the physical pixels stays the same? They can't change. So what is going on here? I thought (clearly incorrectly) that when I scaled the viewport, it literally just chopped off pixels. Would appreciate it if someone could explain how this works. Edit Currently, I'm setting my viewport like this width (int) Math.min(deviceWidth, deviceHeight 1.702127659574468) height (int) Math.min(deviceHeight, deviceWidth 1.702127659574468) ratio width height GLES20.glViewport(offsetX, offsetY, width, height) Matrix.orthoM(mProjMatrix, 0, ratio, ratio, 1, 1, 3, 7) offsetX and offsetY are just so that when there are letterboxes, the viewport is centered.
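A quad only appears square when the aspect ratio fed to orthoM matches the aspect ratio of the viewport actually set; the viewport does not chop pixels off, it remaps the full clip space range onto whatever rectangle it is given. A sketch of the letterbox computation (a hypothetical helper, not the poster's API), after which the projection ratio must be computed from v.w and v.h rather than the device size:

```c
#include <assert.h>
#include <math.h>

typedef struct { int x, y, w, h; } Viewport;

/* Largest viewport with the requested aspect that fits the device,
   centered so any leftover space becomes letterbox bars. */
static Viewport letterbox(int dev_w, int dev_h, float target_aspect)
{
    Viewport v;
    v.w = dev_w;
    v.h = (int)(dev_w / target_aspect + 0.5f);
    if (v.h > dev_h) {          /* too tall: fit to height instead */
        v.h = dev_h;
        v.w = (int)(dev_h * target_aspect + 0.5f);
    }
    v.x = (dev_w - v.w) / 2;
    v.y = (dev_h - v.h) / 2;
    return v;
}
```

The ratio passed to Matrix.orthoM should then be v.w / (float)v.h; if it stays at the device's 2560/1600 while the viewport is 2560x1200, every drawn shape is stretched vertically, which matches the rectangle in the second picture.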
1 | uniform z slices in clip space 1) Context I'm using a regular OpenGL perspective projection matrix created with GLM (glm perspective) and taking the inverse (glm inverse) to transform clip space back into view space (and world space ultimately). Now with z values at the near and far plane it's easy to choose the x,y,z,w components of the clip space cube corners and get the world space point of the frustum volume. 2) Question Now, I want to harness the fact that clip space is a cube and generate a grid over the frustum. for x,y this works with the regular inverse projection matrix. For z, if I take uniform subdivisions of the range 1.0, 1.0 with w 1.0, the view space z values will not be linear. I haven't got enough sleep recently to convince myself to do the math. What would be an inverse projection matrix to do this? |
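For the record, the mapping between NDC z and view space z under the standard GL perspective matrix can be inverted in closed form, which saves pushing every grid point through the full inverse matrix. A sketch derived from the usual gluPerspective / glm::perspective conventions (double check the signs against your exact matrix):

```c
#include <assert.h>
#include <math.h>

/* View space z for a given NDC z in [-1, 1], standard GL projection
   with near plane n and far plane f (view space z is negative). */
static double ndc_to_view_z(double n, double f, double z_ndc)
{
    return 2.0 * f * n / (z_ndc * (f - n) - (f + n));
}

/* Inverse: the NDC z that a view space slice at z_view maps to.
   Feed these values (with w = 1) through the inverse projection to get
   uniformly spaced slices in view space. */
static double view_to_ndc_z(double n, double f, double z_view)
{
    return (f + n) / (f - n) + 2.0 * f * n / ((f - n) * z_view);
}
```

The nonlinearity the question describes is severe: with n = 0.1 and f = 100, the midpoint of the NDC range (z_ndc = 0) still sits only about 0.2 units in front of the camera, so uniform NDC slices are anything but uniform in view space.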
1 | How can I apply a mesh distortion to walls like in Dungeon Keeper 2? In Dungeon Keeper 2, the walls of the dungeon have different random shapes depending on its state (freshly dug or reinforced, and so on). They look like they are cubes of 3x3 vertices that have a x z planar jitter or distortion. However, the corner vertices are shared by adjacent cubes, so there must be some kind of algorithm rather than just a random jitter. How could I achieve a similar effect? |
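The usual trick for "random but shared at corners" is to derive each offset deterministically from the corner's integer grid coordinates: every cube that touches a corner hashes the same (x, z) and therefore computes the same offset, so the mesh stays watertight with no explicit sharing logic. A sketch (the hash constants are arbitrary primes, my assumption for illustration):

```c
#include <assert.h>
#include <math.h>

/* Integer hash of a grid corner; the same input always gives the same bits. */
static unsigned hash2(int x, int z)
{
    unsigned h = (unsigned)x * 374761393u + (unsigned)z * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return h ^ (h >> 16);
}

/* Horizontal jitter in [-amplitude, amplitude] for the corner at (x, z).
   Use one call for the x offset and, say, hash2(z, x) for the z offset. */
static float corner_jitter(int x, int z, float amplitude)
{
    return ((hash2(x, z) & 0xFFFFu) / 65535.0f * 2.0f - 1.0f) * amplitude;
}
```

Different wall states (freshly dug, reinforced, and so on) can then simply use different amplitudes, or mix an extra state dependent term into the hash, while shared corners continue to line up.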
1 | Shader Convert vector into scalar I am trying to convert a half3 to a simple half but I am facing an issue. For example, half3(1, 0, 0) gives me white but half3(0, 1, 0) gives me black. How can I properly convert a half3 to a single black and white half value? Thank you.
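When a vector is implicitly narrowed to a scalar, most shading languages just take the first component, which matches the symptom above (red becomes white, green becomes black). If the goal is a grayscale value, the standard approach is a weighted sum of the channels; a sketch in plain C using the common Rec. 601 luma weights:

```c
#include <assert.h>
#include <math.h>

/* Perceptual grayscale from RGB (Rec. 601 luma weights).
   In a shader this is typically dot(color, half3(0.299, 0.587, 0.114)). */
static float luminance(float r, float g, float b)
{
    return 0.299f * r + 0.587f * g + 0.114f * b;
}
```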
1 | OpenGL Two point perspective view Summary I want vertical lines to be drawn as vertical lines regardless of their position relative to the camera (distance offset). I want the projection to be perspective so the furthest objects from camera look smaller while keeping their respective proportions. What am I trying to achieve A two (vanishing) point perspective like this one http insidetheoutline.com wp content uploads 2019 07 2 Point Perspective Cityscape.jpg Why am I at it I'm really bad at 3D (obviously) therefore I'm trying to replace character models with 2D sprites like in Populous 3, or quot The Game quot quest in Fable 3. I also want to have a top downish view like in isometric games. Alas, perspective distortion totally kills the effect, making sprites look skewed whenever they are off center. What am I doing right now glMatrixMode(GL PROJECTION) glLoadIdentity gluPerspective(FOV, wndWidth wndHeight, zNear, zFar) Googling lead me to this MV matrix, alas it's not working for me 1, 0, 0, 0, 0, 1, 0, 0, 0, camY camZ, camZ, 0, 0, 0, 0, 1 glMatrixMode(GL MODELVIEW) glLoadMatrixf( cam) camX camY camZ 50 gluLookAt( camX, camY, camZ, 0, 0, 0, 0, 0, 1 ) then I draw a uniform grid 5..5 by 5..5 and slightly off center wireframe box to test if they are drawn correctly Does it work? No, it doesn't. Provided matrix produce no result at all, so I had to change it to 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, camY camZ, camZ, 1 . Just like any text on a two point perspective says quot Set one of the values in the last column to zero quot . I tried zero in any position, except the last one, to no avail. Generally, I get something like this The rightmost vertical line the one that aligns with screen center is vertical, just as I want all of them to be, but the others are skewed away. What else I managed to google Not much, really. 
I found an example that kinda looks like something I want to achieve, but they place the camera at ground level with zero Z delta, effectively eliminating the third vanishing point. In some other example, they just narrowed FOV to some absurdly low value, making it look like a two point perspective to some extent. I'm afraid I'm missing or misunderstanding something. I also not sure if the desired output will, in fact, look like I want, and I won't discard it if it's not. |
1 | Ray picking get direction from pitch and yaw I am attempting to cast a ray from the center of the screen and check for collisions with objects. When rendering, I use these calls to set up the camera GL11.glRotated(mPitch, 1, 0, 0) GL11.glRotated(mYaw, 0, 1, 0) GL11.glTranslated(mPositionX, mPositionY, mPositionZ) I am having trouble creating the ray, however. This is the code I have so far ray.origin new Vector(mPositionX, mPositionY, mPositionZ) ray.direction new Vector(?, ?, ?) My question is what should I put in the question mark spots? I.e. how can I create the ray direction from the pitch and yaw? Any help would be much appreciated!
1 | Does LibGDX abstract OpenGL ES away or can I still use my OpenGL ES knowledge? I've been learning OpenGL ES, and am now turning my attention to using LibGDX. My main concern with LibGDX is, if needed, will I be able to apply my OpenGL ES knowledge to something if needed and essentially override bits and pieces of the framework, or does LibGDX essentially hide any implementations of OpenGL? |
1 | Calling opengl32.DLL from java? I don't like LWJGL in some cases, so I prefer to use Swing. The thing is that Swing doesn't have OpenGL. I have tried JOGL and it's a mess to install, needs external jars, and I have yet to get it working. So I was wondering if I could just make an OpenGL class that uses opengl32.DLL and put the graphics into a window made with Swing? Also, is opengl32.DLL able to be called with a 64 bit Java program? |
1 | What's the equivalent of wglShareLists for Mac OS? I'm trying to share lists between two contexts on Mac OS but despite my research I couldn't come up with an answer so far. I've found that NSOpenGLContext was able to initialize a context with a shared context but not to set it afterward. What's the equivalent of wglShareLists on Mac OS? |
1 | How to generate a multiplier map for radiosity I am following this tutorial here I am at the part where you are creating a hemicube. I have got the code to render the scene into a texture and therfore an array. Now how can I generate a so called "multiplier map" for use given these parameters width, height, camera location, camera direction normal. I want the multiplier map to be stored in an array like this unsigned char mult0 new unsigned char width height I will have 5 of these maps for each side of the hemicube. |
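In the classic hemicube formulation (Cohen and Greenberg's), the multiplier of each pixel is that pixel's delta form factor, which has a closed form per face. A sketch under the usual convention of a unit hemicube with faces at distance 1 from the patch center (normalize the weights so they sum to 1 if your pipeline expects that):

```c
#include <assert.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Delta form factor for a pixel on the hemicube's top face,
   at coordinates (x, y) in [-1, 1], with pixel area dA. */
static double top_face_weight(double x, double y, double dA)
{
    double d = x * x + y * y + 1.0;
    return dA / (PI * d * d);
}

/* Delta form factor for a pixel on a side face, at height z in [0, 1]
   and lateral coordinate y in [-1, 1]. */
static double side_face_weight(double y, double z, double dA)
{
    double d = y * y + z * z + 1.0;
    return z * dA / (PI * d * d);
}
```

To fill the `unsigned char` arrays described above, evaluate the appropriate formula at each pixel center (mapping pixel indices to the [-1, 1] face coordinates) and quantize; a float array avoids the quantization loss if memory allows.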
1 | openGL textures in bitmap mode For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8 bit pixmap). Right now I have a bitmap stored in an on device buffer, and am mounting it like so glBindBuffer(GL PIXEL UNPACK BUFFER, BFR.G (T 1) 2 ) glTexImage2D(GL TEXTURE 2D, 0, GL RGB, W, H, 0, GL COLOR INDEX, GL BITMAP, 0) The OpenGL spec has this to say about glTexImage2D "If type is GL BITMAP, the data is considered as a string of unsigned bytes (and format must be GL COLOR INDEX). Each data byte is treated as eight 1 bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised 1) When I build my texture, I write to the buffer in 32 bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1 px wide vertical bars with 31 wide spaces between them. However, it appears blank. 2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8 wide bars with 24 wide spaces between them. Instead, it produces a white 1 px wide bar. 3) 0x55555555 1010101010101010101010101010101, therefore writing this value ought to create 1 wide vertical stripes with 1 pixel spacing. However, it creates a solid gray color. 4) Using my original 8 bit pixmap in GL BITMAP mode produces the correct animation. I have reached the conclusion that, even in GL BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting that I was working in two tone), as well as the fact that my original 8 bit pixmap generates the correct picture, support this conclusion. 
Questions 1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as it suggests in the spec? 2) Or does it simply not work because modern hardware does not support it? (I have read that GL BITMAP mode was deprecated in 3.3, I am however forcing a 3.0 context.) 3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for but I suppose there is no such thing as a free lunch. |
1 | Noisy edges, smoothing out edges between faces via fragment shader I have a generated terrain, with hexagonal geometry, as per the screenshot below I then generate biomes, but as you can see the borders between them are really ugly and straight. To hide that hexagonal origin, I would need to smooth out the borders between biomes. This is how it looks now in wireframe with real triangular faces What I'm aiming for is something more like this Each vertex has an attribute that holds the biome type, and I can also add special attributes to the vertices on the edge between two biomes, but I just can't figure out how to pull this off in shader code. Obviously noise is involved here, but how do I make it continuous across multiple faces and the entire border of multiple biomes? I'm rendering with WebGL using THREE.js
1 | how to add water effect to an image This is what I am trying to achieve A given image would occupy say 3 4th height of the screen. The remaining 1 4th area would be a reflection of it with some waves (water effect) on it. I'm not sure how to do this. But here's my approach render the given texture to another texture called mirror texture (maybe FBOs can help me?) invert mirror texture (scale it by 1 along Y) render mirror texture at height 3 4 of the screen add some sense of noise to it OR using pixel shader and time, put pixel.z sin(time) to make it wavy (Tech C OpenGL glsl) Is my approach correct ? Is there a better way to do this ? Also, can someone please recommend me if using FrameBuffer Objects would be the right thing here ? Thanks |
1 | OpenGL Blending feedback effect I'm struggling with a simple project, an example sandbox I'm rendering a small oscillating rectangle on my output. I'm not using glClearColor() instead, on every frame, I draw a black rectangle before anything else, blending with glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) My goal is to see, as I play with the alpha of this black rectangle, feedback of previous frames, slowly fading some kind of trail and it's more or less working. My main problem though the trail never really disappears, and the longer I try to make the trail, the worse it gets. Also, I have to crank the alpha quite a bit before seeing any trail at all, and I don't really understand why.
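One explanation worth checking (an assumption, since the framebuffer format isn't stated): in an 8 bit framebuffer the fade multiply is rounded back to an integer every frame, and for small alphas round(v·(1−a)) equals v again, so the trail stalls at a nonzero gray instead of decaying to black. That would also explain why a large alpha is needed before any visible fading. A simulation of one blended fade step:

```c
#include <assert.h>

/* One frame of fading toward black through glBlendFunc(GL_SRC_ALPHA,
   GL_ONE_MINUS_SRC_ALPHA) with a black source, as an 8-bit target
   stores it: dst' = round(dst * (1 - a)), dst in 0..255. */
static int fade_step_8bit(int v, float a)
{
    return (int)(v * (1.0f - a) + 0.5f);
}
```

Common fixes include accumulating the feedback in a floating point FBO, or fading with a subtractive blend (glBlendEquation with GL FUNC REVERSE SUBTRACT and a small constant) so the stored value is guaranteed to step down each frame.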
1 | What version of OpenGL should I target for Steam? I'm planning on developing a game (targeting PC and Linux) and putting it up on Steam in the future, but I am not sure what version of OpenGL to target so that the majority of Steam users will be able to play my game. I know Steam has a hardware survey, but I'm still not sure which OpenGL version to use. A lot of people recommend OpenGL 3.0, and to be careful not to say just "OpenGL 3.x", since 3.x != 3.0.
1 | Uniform not being applied to proper mesh Ok, I got some code, and you select blocks on a grid. The selection works. I can modify the blocks to be raised when selected and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is rendered light. However, to debug logic I decided to move the block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic and it knows the correct one is selected and it is showing in, in the correct place and rendering it correctly. When there is only 1 it works properly. Video Of the bug in action, note how the highlighted and elevated blocks are not the same block, however the code for color and My Renderer is here (For the items being drawn) public void render(Renderer renderer) mGrid.render(renderer, mGameState) for (Entity e mGameEntities) UnitTypes ut UnitTypes.valueOf((String)e.getObject(D.UNIT TYPE.ordinal())) if (ut UnitTypes.Soldier) renderer.testShader.begin() renderer.testShader.setUniformMatrix("u mvpMatrix",mEntityMatrix) renderer.texture soldier.bind(0) Vector2 pos (Vector2) e.getObject(D.COORDS.ordinal()) mEntityMatrix.set(renderer.mCamera.combined) if (mSelectedEntities.contains(e)) mEntityMatrix.translate(pos.x, 1f, pos.y) renderer.testShader.setUniformf("v color", 0.5f,0.5f,0.5f,1f) else mEntityMatrix.translate(pos.x, 0f, pos.y) renderer.testShader.setUniformf("v color", 1f,1f,1f,1f) mEntityMatrix.scale(0.2f, 0.2f, 0.2f) renderer.model soldier.render(renderer.testShader,GL20.GL TRIANGLES) renderer.testShader.end() else if (ut UnitTypes.Enemy Infiltrator) renderer.testShader.begin() renderer.testShader.setUniformMatrix("u mvpMatrix",mEntityMatrix) renderer.testShader.setUniformf("v color", 1.0f,1,1,1.0f) renderer.texture enemy infiltrator.bind(0) Vector2 pos (Vector2) e.getObject(D.COORDS.ordinal()) 
mEntityMatrix.set(renderer.mCamera.combined) mEntityMatrix.translate(pos.x, 0f, pos.y) mEntityMatrix.scale(0.2f, 0.2f, 0.2f) renderer.model enemy infiltrator.render(renderer.testShader,GL20.GL TRIANGLES) renderer.testShader.end() String vertexShader "uniform mat4 u mvpMatrix n" "attribute vec4 a position n" "attribute vec2 a uv n" "varying vec2 v uv n" "void main() n" " n" " v uv vec2(a uv.x,1.0 a uv.y) n" " gl Position u mvpMatrix a position n" " n" String fragmentShader " ifdef GL ES n" "precision mediump float n" " endif n" " uniform vec4 v color " " uniform sampler2D tex " " varying vec2 v uv n" "void main() n" " n" " gl FragColor v color texture2D(tex, v uv) n" " " testShader new ShaderProgram(vertexShader, fragmentShader) |
1 | Why is my depth buffer texture so bright? https www.youtube.com watch?v QuvAEqgHrMY amp feature youtu.be https www.youtube.com watch?v 5ob1JsPIGAs amp feature youtu.be gluPerspective(60, (float)CONTEXT WIDTH CONTEXT HEIGHT, 0.1f, 1.f) values used for the first video gluPerspective(60, (float)CONTEXT WIDTH CONTEXT HEIGHT, 0.1f, 2000.f) values used for the second video the larger far clip value seems to improve the rendered depth, but it is still far from the example images below. I set my projection matrix with gluPerspective. This seems to happen when I use gluPerspective to set up a first person projection matrix. The type of image I would like to get is Is the visual result I am getting correct? If so, are the example depth buffer images merely the result of a post rendering refinement, added simply so that humans can see them better?
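Assuming the depth texture is being displayed raw, what the videos show is expected: perspective depth is hyperbolic, so with a far plane of 2000 almost the entire scene maps to stored values just under 1.0 and the image looks nearly white. The usual fix for visualization is to linearize the stored value before displaying it:

```c
#include <assert.h>
#include <math.h>

/* Convert a depth buffer value d in [0, 1] back to linear eye space
   distance, for a standard GL projection with planes n and f. */
static double linearize_depth(double d, double n, double f)
{
    double z_ndc = 2.0 * d - 1.0;
    return 2.0 * f * n / (f + n - z_ndc * (f - n));
}
```

Dividing the linearized value by f gives the smooth 0..1 gradient seen in the example images; the same formula translates directly into a fragment shader for on screen debugging.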
1 | When to use a vertex array and when to use a VBO? I'm trying to learn about vertex arrays and vertex buffer objects, but I don't understand the differences in terms of case of use (static geometry like terrains, geometry that changes every frame like a particle system, etc.) performance portability (old graphics card, consoles, devices like Android or iPhone, etc.) some clarifications? |
1 | Does Blitz3D use its own 3D engine or does it wrap OpenGL? How does Blitz3D work? I mean internally, does it use OpenGL with basic wrappers or it using some open source 3D engine that itself wraps OpenGL? |
1 | Possible to create transparency shader which doesn't stack alpha values The image above best demonstrates what I'm trying to achieve. It's a transparent shader for objects, but wherever the objects with this shader intersect they don't add together but simply merge with the same amount of transparency. It seems like a simple thing, but I can't tell if it's even possible from my understanding of how the shader pipeline works. From what I understand of the process (and please correct me if I'm wrong), a fragment is created by each independent object. Once all the fragments for each object are created, the depth buffer chooses which fragment to write to that pixel in the framebuffer at a time, layering them on top of each other, so to speak. Once they've all been added to the framebuffer, the final result becomes the pixel. If I could access the individual fragments to compare them against each other before they get written ('layered') into the frame buffer, it would be a simple case of tracking them and discarding any extra fragments in the same pixel space that are part of this shader render queue. But I don't think this is possible with OpenGL? Is there perhaps another way I can achieve the same effect? |
1 | alpha test shader 'discard' operation not working GLES2 I wrote this shader to illustare alpha test action in GLES2 (Galaxy S6). I think is not working at all cause I don't see any change with or without it. Is there anything Im missing? Any syntax error? I know its better not using if in shader but for now this is the solution I need. precision highp float precision highp int precision lowp sampler2D precision lowp samplerCube 0 CMPF ALWAYS FAIL, 1 CMPF ALWAYS PASS, 2 CMPF LESS, 3 CMPF LESS EQUAL, 4 CMPF EQUAL, 5 CMPF NOT EQUAL, 6 CMPF GREATER EQUAL, 7 CMPF GREATER bool Is Alpha Pass(int func,float alphaRef, float alphaValue) bool result true if (func 0) result false break if (func 1) result true break if (func 2) result alphaValue lt alphaRef break if (func 3) result alphaValue lt alphaRef break if (func 4) result alphaValue alphaRef break if (func 5) result alphaValue ! alphaRef break if (func 6) result alphaValue gt alphaRef break if (func 7) result alphaValue gt alphaRef break return result void FFP Alpha Test(in float func, in float alphaRef, in vec4 texel) if (!Is Alpha Pass(int(func), alphaRef, texel.a)) discard |
1 | Best way to render multiple objects I have a scene that consists of 1 player object, 1 platform, 1 enemy, and 1 background. Currently, this is how my render function looks like void Sprite Render() glUseProgram(m Program) glActiveTexture(GL TEXTURE0) Background glBindVertexArray(BackgroundVAO) glBindTexture(GL TEXTURE 2D, m Textures 1 ) glUniform1i(glGetUniformLocation(m Program, "Texture1"), 0) m ProjectionMatrix m Camera.ViewToWorldMatrix() m TransformationMatrix m ProjectionMatrix m TransformationMatrixLoc glGetUniformLocation(m Program, "TransformationMatrix") glUniformMatrix4fv(m TransformationMatrixLoc, 1, GL FALSE, amp m TransformationMatrix 0 0 ) glDrawElements(GL TRIANGLES, 6, GL UNSIGNED BYTE, 0) Enemy glBindVertexArray(EnemyVAO) glBindTexture(GL TEXTURE 2D, m Textures 2 ) glUniform1i(glGetUniformLocation(m Program, "Texture3"), 0) m ProjectionMatrix glm translate(glm mat4(), glm vec3(EnemyX, EnemyY, 0.0f)) m Camera.ViewToWorldMatrix() m TransformationMatrix m ProjectionMatrix m TransformationMatrixLoc glGetUniformLocation(m Program, "TransformationMatrix") glUniformMatrix4fv(m TransformationMatrixLoc, 1, GL FALSE, amp m TransformationMatrix 0 0 ) glDrawElements(GL TRIANGLES, 6, GL UNSIGNED BYTE, 0) Platform One glBindVertexArray(PlatformVAO) glBindTexture(GL TEXTURE 2D, m Textures 3 ) glUniform1i(glGetUniformLocation(m Program, "Texture4"), 0) m ProjectionMatrix glm translate(glm mat4(), glm vec3(0.0f, 0.5f, 0.0f)) m Camera.ViewToWorldMatrix() m TransformationMatrix m ProjectionMatrix m TransformationMatrixLoc glGetUniformLocation(m Program, "TransformationMatrix") glUniformMatrix4fv(m TransformationMatrixLoc, 1, GL FALSE, amp m TransformationMatrix 0 0 ) glDrawElements(GL TRIANGLES, 6, GL UNSIGNED BYTE, 0) Player glBindVertexArray(PlayerVAO) glBindTexture(GL TEXTURE 2D, m Textures 0 ) glUniform1i(glGetUniformLocation(m Program, "Texture2"), 0) m ProjectionMatrix glm translate(glm mat4(), glm vec3(PlayerX, PlayerY, 0.0f)) m Camera.ViewToWorldMatrix() 
m TransformationMatrix m ProjectionMatrix m TransformationMatrixLoc glGetUniformLocation(m Program, "TransformationMatrix") glUniformMatrix4fv(m TransformationMatrixLoc, 1, GL FALSE, amp m TransformationMatrix 0 0 ) glDrawElements(GL TRIANGLES, 6, GL UNSIGNED BYTE, 0) glBindTexture(GL TEXTURE 2D, 0) glBindVertexArray(0) As you can see, it is unnecessarily complicated. And if I want to draw, say, two or more enemies, or 4 or more platforms, then it'd get ridiculously large! My question is this, how can I make this smaller? I prefer to be able to do something like this in my main.cpp Sprite.Render(Player) Sprite.Render(EnemyOne) etc... Looking forward to reading your tips, thank you! |
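One way to shrink a render function like the one above is to stop hand writing one block per object: keep the per object state (texture, VAO, translation) in an array of structs, loop over it, and keep the GL calls in exactly one place. Sorting the array by texture also minimizes redundant binds. A GL free sketch of the idea (names are made up for illustration):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    int   texture_id;   /* which texture to bind */
    float x, y;         /* translation fed into the model matrix */
} DrawItem;

static int cmp_by_texture(const void *a, const void *b)
{
    return ((const DrawItem *)a)->texture_id - ((const DrawItem *)b)->texture_id;
}

/* How many texture binds one pass over the list would issue; in a real
   renderer the body of the loop is where glBindTexture, the uniform
   upload, and glDrawElements live. */
static int count_texture_binds(const DrawItem *items, int n)
{
    int binds = 0, last = -1;
    for (int i = 0; i < n; i++) {
        if (items[i].texture_id != last) {
            binds++;
            last = items[i].texture_id;
        }
    }
    return binds;
}
```

The render function then collapses to "sort, then loop", and adding a second enemy or a fourth platform is just one more element in the array rather than another copy pasted block.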
1 | Resizing a Framebuffer Object (ie its attachments) on Screen Resize I have been experimenting with some post processing effects and I have been using FBOs to store stuff. The problem is, I attempt to resize them when I change resolution. I get no errors, however the image has been stretched so that I only see about the bottom left quarter of it on my screen. I have looked over the internet and and found issues such as not using glTexStorage because it is immutable, and being sure to call glViewport after binding a framebuffer. As far as I can see I'm not missing anything simple like that. Here are some (very dark) screen shots showing the issue Please excuse the horrible graphics. I am working on a bloom filter and these images only contain the bright bits so it looks dark and weird. My FBOs are initialised with a blank texture attached to GL COLOR ATTACHMENT0. Here is the rough pseudo code (in java). (my actual code is abstracted behind a game engine I'm writing) Generating texture Creates the texture this.id glGenTextures() glPixelStorei(GL UNPACK ALIGNMENT, 1) glBindTexture(GL TEXTURE 2D, this.id) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) Set up texture scaling glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, magnification) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, minification) Filling it with data (data is a ByteBuffer filled with 0s) glTexImage2D(GL TEXTURE 2D, 0, colorMode, width, height, 0, GL RGBA, GL UNSIGNED BYTE, data) glGenerateMipmap(GL TEXTURE 2D) Binding the framebuffer glBindFramebuffer(GL FRAMEBUFFER, this.id) glViewport(0, 0, width, height) width and height taken from width and height of framebuffer texture Resizing the framebuffer stores new width and height to be used with glViewport this.width width this.height height resize renderbuffer if(renderbuffer ! 
0) glBindRenderbuffer(GL RENDERBUFFER, renderbuffer) glRenderbufferStorage(GL RENDERBUFFER, GL DEPTH COMPONENT, width, height) glBindRenderbuffer(GL RENDERBUFFER, 0) glBindTexture(GL TEXTURE 2D, texture) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA, width, height, 0, GL RGBA, GL UNSIGNED BYTE, data) glGenerateMipmap(GL TEXTURE 2D) do i need this? Thanks very much in advance if you can spot an issue with my code. Let me know if you need the full source as well. I have a couple of ideas for issues is it possible to resize the renderbuffer like that? do I have to re attach the textures after resizing them should i just re create the entire framebuffer renderbuffer textures instead of resizing them. |
1 | Opengl texture rendered as solid color,why? her is my vertex array and texture coordinate.. float POS 0.5, 0.5, 1.0, 0.5, 0.5, 1.0, 0.5,0.5, 1.0, 0.5,0.5, 1.0 float texCoords 0.0,0.0, 1.0,0.0, 1.0,1.0, 0.0,1.0 here is my texture creation stbi set flip vertically on load(1) openglTexBuff stbi load("lena.jpg", amp w, amp h, amp c,0) glGenTextures(1, amp TexID) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D,TexID) glTexImage2D(GL TEXTURE 2D,0,GL RGB,w,h,GL FALSE,GL RGB,GL UNSIGNED BYTE,openglTexBuff) glGenerateMipmap(GL TEXTURE 2D) glTexParameteri(GL TEXTURE 2D,GL TEXTURE WRAP S,GL REPEAT) glTexParameteri(GL TEXTURE 2D,GL TEXTURE WRAP T,GL REPEAT) glTexParameteri(GL TEXTURE 2D,GL TEXTURE MAG FILTER,GL NEAREST) glTexParameteri(GL TEXTURE 2D,GL TEXTURE MIN FILTER,GL NEAREST) glBindTexture(GL TEXTURE 2D,0) stbi image free(openglTexBuff) ' and these are my shaders shader vertex version 330 core layout(location 0) in vec4 positions layout(location 1) in vec2 texCoords uniform mat4 translation matrix inactive shader variables.. uniform mat4 rotation matrix mat4 model matrix mat4 projection matrix out vec4 pos out vec2 frag TexCoord void main() rotation matrix gl Position rotation matrix translation matrix positions translation matrix operated with POS vector.. 
pos positions frag TexCoord texCoords translation matrix shader fragment version 330 core layout(location 0) out vec4 color in vec4 pos in vec2 frag TexCoord rag TexCoord uniform float col uniform sampler2D a texture void main() col a texture color texture(a texture,frag TexCoord) when i use (solid)color instead of texture in shader..everything works fine but when i use shader it gives solid color instead of texture..It seems like my whole tex is super zoomed to output single color here is draw call glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) box.shader.setUniformat4x4("translation matrix",translation) box.shader.setUniformat4x4("rotation matrix",rotation arr) box.shader.setUniform1f("col",1.0) glBindTexture(GL TEXTURE 2D,TexID) box.shader.setUniform1i("a texture",0) box.render() box.render void render() vertices.bind() indices.bind() shader.bind() glDrawElements(render As,vertices.getSize(),GL UNSIGNED INT,NULL) vertices.unbind() indices.unbind() |
1 | How can I calculate a terrain's normals? I'm trying to implement basic lighting (a sun) in OpenGL 3 with this tutorial: http://www.mbsoftworks.sk/index.php?page=tutorials&series=1&tutorial=11 I'm building a basic terrain and it's working well. Now I'm trying to add normals, and I don't think it's working:

Terrain in wireframe

As you can see, I don't think my normals are good. This is how I compute them:

for (x = 0; x < this.m_size.width - 1; x++)
{
    for (y = 0; y < this.m_size.height - 1; y++)
    {
        immutable uint indice1 = y * this.m_size.width + x;
        immutable uint indice2 = y * this.m_size.width + (x + 1);
        immutable uint indice3 = (y + 1) * this.m_size.width + x;
        immutable uint indice4 = (y + 1) * this.m_size.width + x;
        immutable uint indice5 = y * this.m_size.width + (x + 1);
        immutable uint indice6 = (y + 1) * this.m_size.width + (x + 1);

        Vector3 v1 = vertexes[indice3] - vertexes[indice1];
        Vector3 v2 = vertexes[indice2] - vertexes[indice1];
        Vector3 normal = v1.cross(v2);
        normal.normalize();
        normals[indice1] = normal;
        indices ~= [indice1, indice2, indice3, indice4, indice5, indice6];
    }
}

I use this basic shader: http://ogldev.atspace.co.uk/www/tutorial18/tutorial18.html or the one in the link I posted at the top. Thanks.
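The code above assigns each vertex the normal of a single adjacent triangle, which gives faceted shading. A common fix is to accumulate the face normal of every triangle that touches a vertex and normalize the sum at the end. Below is a minimal self-contained C++ sketch of that idea (plain structs, no OpenGL; the helper names are illustrative, not from the question's codebase):

```cpp
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Smooth per-vertex normals: sum the (un-normalized) normal of every
// triangle touching a vertex, then normalize once at the end.
// Leaving the face normals un-normalized weights larger triangles more.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& verts,
                                const std::vector<unsigned>& indices) {
    std::vector<Vec3> normals(verts.size(), {0, 0, 0});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 n = cross(verts[b] - verts[a], verts[c] - verts[a]);
        normals[a] = normals[a] + n;
        normals[b] = normals[b] + n;
        normals[c] = normals[c] + n;
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```

The same accumulation works directly on the terrain grid's index buffer, as long as the winding order is consistent so the face normals don't cancel each other out.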
1 | How to implement "model quality" in OpenGL In many games, there is an option named "Model Quality", which ranges from Low to High. Low model quality simply removes a lot of vertices from the model, to make it faster, while High preserves the original model vertices (I think). So I'm wondering, how do I implement this in my game? |
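The usual technique is discrete level of detail (LOD): generate several simplified versions of each mesh offline (e.g. via decimation in Blender or a tool like meshoptimizer), ship all of them, and pick one at load time based on the quality setting (or at runtime based on distance). A minimal C++ sketch of just the selection logic; the type and field names are made up for illustration:

```cpp
#include <vector>

// One mesh stored at several pre-simplified detail levels,
// ordered from most detailed (index 0) to least detailed.
struct LodMesh {
    std::vector<int> lodVertexCounts;  // stand-in for the real per-level mesh data

    // qualitySetting: 0 = Low, higher = better quality.
    // Clamped so an out-of-range setting still maps to a valid level.
    int levelForQuality(int qualitySetting) const {
        int maxLevel = static_cast<int>(lodVertexCounts.size()) - 1;
        int level = maxLevel - qualitySetting;  // highest quality picks index 0
        if (level < 0) level = 0;
        if (level > maxLevel) level = maxLevel;
        return level;
    }
};
```

The renderer then binds the vertex/index buffers of the chosen level; nothing about the draw path itself changes, only which buffers get drawn.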
1 | What to do with unused vertices? Imagine a vertex array in OpenGL representing blocks in a platform game, where some vertices may be unused. The environment is dynamic, so at any moment some vertex may suddenly become invisible. What is the best way to make them not draw? Graphics cards are complicated and it's hard to predict which approach is best. The best ways I can think of: delete the vertex and move all vertices after it to fill the freed space (sounds extremely inefficient); set its position to 0; set its transparency to maximum. I could of course benchmark, but what works faster on my computer doesn't have to be faster on other machines.
1 | GL_INVALID_OPERATION in glGenerateMipmap (incomplete cube map) I'm trying to learn OpenGL and I'm using SOIL to load images. I have the following piece of code:

GLuint texID = 0;

bool loadCubeMap(const char* baseFileName)
{
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_CUBE_MAP, texID);
    const char* suffixes[] = { "posx", "negx", "posy", "negy", "posz", "negz" };
    GLuint targets[] = {
        GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
    };
    for (int i = 0; i < 6; i++)
    {
        int width, height;
        std::string fileName = std::string(baseFileName) + " " + suffixes[i] + ".png";
        std::cout << "Loading " << fileName << std::endl;
        unsigned char* image = SOIL_load_image(fileName.c_str(), &width, &height, 0, SOIL_LOAD_RGB);
        if (!image)
        {
            std::cerr << __FUNCTION__ << " cannot load image " << fileName
                      << " (" << SOIL_last_result() << ")" << std::endl;
            return false;
        }
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
        SOIL_free_image_data(image);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    return true;
}

When I call this, the images load successfully, but then I get an error in the console:

OGL DEBUG message <1> 'API' reported 'Error' with 'High' severity: GL_INVALID_OPERATION in glGenerateMipmap(incomplete cube map) BACKTRACE

and no cubemap is displayed at all. Do you see any mistake in this code?
1 | OpenGL Calculate Matrices I'm trying to switch from glTranslate etc. to my own matrices, but for some reason it does not work. Here are my two functions to create the view and projection matrices:

public Matrix4f getViewMatrix() {
    Matrix4f viewMatrix = new Matrix4f();
    viewMatrix.setIdentity();
    viewMatrix.translate(game.player.position);
    viewMatrix.rotate(game.player.rotation.x, new Vector3f(1.0f, 0.0f, 0.0f));
    viewMatrix.rotate(game.player.rotation.y, new Vector3f(0.0f, 1.0f, 0.0f));
    viewMatrix.rotate(game.player.rotation.z, new Vector3f(0.0f, 0.0f, 1.0f));
    System.out.println(viewMatrix);
    return viewMatrix;
}

public Matrix4f getProjectionMatrix() {
    FloatBuffer projectionBuffer = BufferUtils.createFloatBuffer(16);
    GL11.glGetFloat(GL_MODELVIEW_MATRIX, projectionBuffer);
    Matrix4f projectionMatrix = new Matrix4f();
    projectionMatrix.load(projectionBuffer);
    return projectionMatrix;
}

I send those two matrices to the vertex shader as uniforms, and use:

gl_Position = view_matrix * proj_matrix * vec4(in_position, 1.0);

where in_position is the coordinate of the vertex. I do see some things on the screen, but it's very, very buggy, and nothing is right about it. If I use the built-in gl_ModelViewProjectionMatrix in the shader and use glTranslate and glRotate in OpenGL, it works perfectly fine. What am I doing wrong here? Here is the output of my view matrix together with the camera position:

0.5095141 0.0 0.8604623 1.1625774
0.7321221 0.52541286 0.43351877 16.980185
0.45209795 0.8508474 0.26770526 0.8665553
0.0 0.0 0.0 1.0
Vector3f[1.1625774, 16.980185, 0.8665553]
1 | What input and window handler should I learn to complement OpenGL? I have a good base in C programming and I have made some 2D games using SDL. Now I want to start making 3D games, and as much as possible I want to learn by following the standards of the professional industry. I think that by learning OpenGL I will gain more transferable skills for programming 3D graphics, so I prefer to study OpenGL instead of a game engine. My question is: which library should I choose to complement OpenGL for input and window handling, if possible following the standards of the industry? Thanks in advance for the help!
1 | Point Light shows black box rect (PointLight not working) libgdx 3D I am creating a 3D scene (currently a box and a rect) and trying to enable lighting. When I create a PointLight and add it to the Environment, everything turns black. All I want to do is create a 3D scene and enable a point light, like a sun or rays coming from a point, shading the objects. Code:

environment = new Environment();
environment.add(new PointLight().set(1f, 1f, 1f, 0, 0, 20f, 100f));
modelBatch = new ModelBatch();
...
square = new ModelBuilder().createBox(300, 300, 300,
        new Material(ColorAttribute.createDiffuse(Color.GREEN)),
        VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
squareinst = new ModelInstance(square);
squareinst.transform.setTranslation(-500, 0, 0);

sprites.get(0).setRotationY(sprites.get(0).getRotationY() + 1f);
sprites.get(1).setRotationY(sprites.get(1).getRotationY() + 1f);
squareinst.transform.rotate(1, 0, 0, 1);
modelBatch.begin(camera);
for (Sprite3D sp : sprites)  // 3d rect models
    sp.draw(modelBatch, environment);
modelBatch.render(squareinst, environment);
modelBatch.end();

The PointLight turns everything black. As per my investigation: if the PointLight were not working at all, everything should be black as it is now, because the environment needs a light. It works fine with a DirectionalLight (although only the back face of the rect is black, even after rotations — I don't know why). libgdx version 1.6.1, Android Studio. I checked it on both an Android device and the desktop. Please, I really need to get this PointLight working. I don't know if it will take a custom shader; if so, please guide me to some links, because I am not experienced with shaders. I also read about PointLight not working on some devices, or not working with OpenGL 2.0 enabled, but I am not sure.
1 | handling buffers in OpenGL I'm reading through the OpenGL documentation for version 3.3 core. I'm having issues understanding proper buffer deletion. At the moment I have an object that loads itself into OpenGL memory in the constructor and only exposes a VAO with the attribute pointers and a bound element array, ready for rendering. When it stops existing, it deletes all the buffers and sets the pointers to NULL. How do I properly delete OpenGL data? I'm going to assume that the object doesn't stop existing while it's being rendered. Do I have to do something more than just delete the VAO and then delete the buffer objects? Does that leave anything out? Should I bind the VAO, disable the attributes, unbind it, and then delete it?
1 | Can I directly pass a Boost ptr_vector list to glBufferData? I have a data structure like this:

typedef struct vertex {
    float x;
    float y;
    float z;
    float s;
    float t;
} vertex;

Then I add to a list declared as boost::ptr_vector<vertex> vertices. Is there a way to use vertices to provide the parameters for glBufferData?
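Not directly: boost::ptr_vector owns its elements through pointers, so the vertex structs are scattered on the heap, while glBufferData needs one contiguous block. The usual workaround is to flatten into a plain std::vector<vertex> before uploading. A sketch of that, with std::vector<std::unique_ptr<vertex>> standing in for the ptr_vector so the example compiles without Boost:

```cpp
#include <memory>
#include <vector>

struct vertex { float x, y, z, s, t; };

// glBufferData wants one contiguous block of bytes: copy the
// pointer container's elements into a flat std::vector first.
std::vector<vertex> flatten(const std::vector<std::unique_ptr<vertex>>& src) {
    std::vector<vertex> flat;
    flat.reserve(src.size());
    for (const auto& p : src) flat.push_back(*p);
    return flat;
}

// The upload itself would then look like (needs a live GL context):
// glBufferData(GL_ARRAY_BUFFER, flat.size() * sizeof(vertex),
//              flat.data(), GL_STATIC_DRAW);
```

If the vertex data never needs polymorphism or stable addresses, storing it in a std::vector<vertex> from the start avoids the copy entirely.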
1 | Do I have to take aspect ratio into account when computing a direction in view space? I read an article about implementing a single-pass volume renderer. After thinking about it, I have a question about the provided GLSL fragment shader snippet. To calculate the ray direction, the author uses the FocalLength (the distance of the projection plane from the view point) as the Z component, and the fragment coordinates in NDC space as the xy components. This gives a vector pointing from the view point to the pixel on the projection plane being computed. Obviously this direction is supposed to be in view space, because it assumes (0,0,0) as the view point, and the ModelView matrix is used to convert the direction to object-local space. But I am not sure this is correct, because in view space the projection window isn't necessarily normalized — it has a width of twice the aspect ratio. It only gets normalized by applying the projection matrix (at least in Direct3D, as far as I know). So if this is true, why does this work? Why does he map the x component of gl_FragCoord to [-1, 1], when the leftmost pixel would have an x coordinate of -aspectRatio in view space?
1 | JOGL hardware based shadow mapping computing the texture matrix I am implementing hardware shadow mapping as described here. I've rendered the scene successfully from the light's POV, and loaded the depth buffer of the scene into a texture. This texture has been loaded correctly — I check this by rendering a small thumbnail, as you can see in the screenshot below, upper left corner. The depth of the scene appears to be correct: objects further away are darker, and those closer to the light are lighter. However, I run into trouble while rendering the scene from the camera's point of view using the depth texture: the texture on the polygons in the scene is rendered in a weird, nondeterministic fashion, as shown in the screenshot. I believe I am making an error while computing the texture transformation matrix, but I am unsure where exactly. Since I have no matrix utilities in JOGL other than the glLoad/MultMatrix procedures, I multiply the matrices using them, like this:

void calcTextureMatrix() {
    glPushMatrix();
    glLoadIdentity();
    glLoadMatrixf(biasmatrix, 0);
    glMultMatrixf(lightprojmatrix, 0);
    glMultMatrixf(lightviewmatrix, 0);
    glGetFloatv(GL_MODELVIEW_MATRIX, shadowtexmatrix, 0);
    glPopMatrix();
}

I obtained these matrices by using the glOrtho and gluLookAt procedures:

glLoadIdentity();
val wdt = width / 45;
val hgt = height / 45;
glOrtho(-wdt, wdt, -hgt, hgt, -45.0, 45.0);
glGetFloatv(GL_MODELVIEW_MATRIX, lightprojmatrix, 0);
glLoadIdentity();
glu.gluLookAt(
    xlook + lightpos._1, ylook + lightpos._2, lightpos._3,
    xlook, ylook, 0.0f,
    0.f, 0.f, 1.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, lightviewmatrix, 0);

My bias matrix is:

float[] biasmatrix = new float[16] {
    0.5f, 0.f,  0.f,  0.f,
    0.f,  0.5f, 0.f,  0.f,
    0.f,  0.f,  0.5f, 0.f,
    0.5f, 0.5f, 0.5f, 1.f
};

After applying the camera projection and view matrices, I do:

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGenfv(GL_S, GL_EYE_PLANE, shadowtexmatrix, 0);
glEnable(GL_TEXTURE_GEN_S);

for each component. Does anybody know why the texture is not being rendered correctly? Thank you.
1 | how to use glm rotate with a euler angle? I have a vec3 to represent my object's orientation/rotation, but the glm rotate method expects a quaternion. If I just convert it to a quaternion like this:

glm::quat rot = rotation;

the W value will just be zero, right? And then there won't be any changes in rotation. In my code I just want to be able to do rotation.x += 5.0f in the update method of an object. This is the method I'm using for my transformations:

glm::mat4 GameObject::Transform(glm::mat4 model, glm::vec3 position, glm::vec3 scale, glm::quat angleAxis)
{
    model = glm::translate(model, position);
    if (angleAxis.w > 360.0f)
        angleAxis.w -= 360.0f;
    else if (angleAxis.w < 0.0f)
        angleAxis.w += 360.0f;
    model = glm::rotate(model, angleAxis.w * toRadians, glm::vec3(angleAxis.x, angleAxis.y, angleAxis.z));
    model = glm::scale(model, scale);
    return model;
}

Currently I'm just passing that rotation vec3 as the angleAxis quaternion parameter, but that obviously doesn't work. This is how I currently calculate my front, up, and right vectors:

void GameObject::calculateCameraView()
{
    front.x = cos(glm::radians(rotation.x)) * cos(glm::radians(rotation.y));
    front.y = sin(glm::radians(rotation.y));
    front.z = sin(glm::radians(rotation.x)) * cos(glm::radians(rotation.y));
    front = glm::normalize(front);
    right = glm::normalize(glm::cross(front, worldUp));
    up = glm::normalize(glm::cross(right, front));
    front.y = invertMouse ? front.y * -1 : front.y;
}
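For reference, glm::quat has a constructor that takes a vec3 of Euler angles in radians — glm::quat(glm::radians(rotation)) — which is the usual way to get a proper quaternion rather than stuffing xyz into one with w = 0. The underlying conversion is the half-angle product formula, sketched below in self-contained C++ without glm (the axis composition order here mirrors glm's, but verify against your glm version before relying on it):

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Euler angles (radians) -> unit quaternion via the half-angle
// product formula. Identity rotation (0,0,0) yields w=1, xyz=0.
Quat fromEuler(float pitch, float yaw, float roll) {
    float cx = std::cos(pitch * 0.5f), sx = std::sin(pitch * 0.5f);
    float cy = std::cos(yaw   * 0.5f), sy = std::sin(yaw   * 0.5f);
    float cz = std::cos(roll  * 0.5f), sz = std::sin(roll  * 0.5f);
    Quat q;
    q.w = cx * cy * cz + sx * sy * sz;
    q.x = sx * cy * cz - cx * sy * sz;
    q.y = cx * sy * cz + sx * cy * sz;
    q.z = cx * cy * sz - sx * sy * cz;
    return q;
}
```

With a quaternion built this way, the model matrix rotation reduces to model = model * glm::mat4_cast(q) instead of the angle/axis overload of glm::rotate.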
1 | Accessing uniform variables from a Cg shader in OpenGL I am trying to implement a simple PC program with OpenGL, using Cg shaders as a requirement (no Unity whatsoever). I found some tips on this page: http://bobobobo.wordpress.com/2008/10/05/cg-1/ and kept the CGprogram, CGcontext and CGprofile types in my own project. The problem is: how can you access uniform variables from the C code? I have tried the glGetUniformLocation way, but that won't work, because its first parameter must be a program ID of type GLuint, and what I have is of type CGprogram. I have already searched by myself (including Nvidia's Cg 'tutorial') but I can't seem to find Cg and OpenGL discussed together on a page without having to deal with Unity. edit: I can now access the variables, but in my case I'm trying to change a texture, and it doesn't bind.
1 | Correct way to calculate Perspective Matrix I have seen at least 3 different ways to calculate the perspective matrix and I'm confused as to which one I should be using and what the differences are. OpenGL (gluPerspective) says to do it this way:

f = cotangent(fov / 2)

f/aspect  0  0                      0
0         f  0                      0
0         0  (far+near)/(near-far)  (2*far*near)/(near-far)
0         0  -1                     0

glm does it this way:

range  = tan(fov / 2) * near
left   = -range * aspect
right  = range * aspect
bottom = -range
top    = range

(2*near)/(right-left)  0                      0                        0
0                      (2*near)/(top-bottom)  0                        0
0                      0                      -(far+near)/(far-near)   -1
0                      0                      -(2*far*near)/(far-near)  0
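For a symmetric frustum the two formulations are the same matrix: right = -left = range*aspect and top = -bottom = range, so (2*near)/(right-left) collapses to near/range = cot(fov/2)/aspect, and the sign flip in the glm listing just comes from dividing by (far-near) instead of (near-far). A quick numeric check in plain C++ (row-major layout, fov in radians):

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<float, 16>;  // row-major

// gluPerspective-style: built directly from cot(fov/2).
Mat4 perspectiveCot(float fov, float aspect, float n, float f) {
    float c = 1.0f / std::tan(fov * 0.5f);
    return { c / aspect, 0, 0, 0,
             0, c, 0, 0,
             0, 0, (f + n) / (n - f), (2 * f * n) / (n - f),
             0, 0, -1, 0 };
}

// glm-style: built from an explicit symmetric frustum.
Mat4 perspectiveFrustum(float fov, float aspect, float n, float f) {
    float range  = std::tan(fov * 0.5f) * n;
    float left   = -range * aspect, right = range * aspect;
    float bottom = -range,          top   = range;
    return { (2 * n) / (right - left), 0, 0, 0,
             0, (2 * n) / (top - bottom), 0, 0,
             0, 0, -(f + n) / (f - n), -(2 * f * n) / (f - n),
             0, 0, -1, 0 };
}
```

The remaining apparent difference in the glm listing (the -1 and the 2fn term swapping places) is only storage order: glm stores matrices column-major, so its printed layout is the transpose of the row-major form above.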
1 | How to adjust position relative to resolutions? I have a lot of objects on the screen, and I would like their positions to be rendered correctly at different resolutions, irrespective of the resolution. Is it correct to multiply the position by the aspect ratio?
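A more robust pattern than multiplying positions by the aspect ratio is to store them in resolution-independent units — normalized 0..1 screen space, or a fixed "virtual" resolution — and convert to actual pixels only at draw time. A minimal C++ sketch of the virtual-resolution approach (the default 1920x1080 virtual size is just an illustrative choice):

```cpp
struct Pixel { int x, y; };

// Positions live in a fixed virtual resolution; scale each axis to
// the actual window size at draw time. Note: scaling the axes
// independently stretches when the window's aspect ratio differs
// from the virtual one; using one common scale plus letterboxing
// avoids that, at the cost of unused margins.
Pixel toScreen(float vx, float vy, int winW, int winH,
               float virtW = 1920.0f, float virtH = 1080.0f) {
    return { static_cast<int>(vx / virtW * static_cast<float>(winW)),
             static_cast<int>(vy / virtH * static_cast<float>(winH)) };
}
```

With this scheme, object positions never change when the window is resized; only the final conversion does.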
1 | OpenGL Updating VAO array buffer range I am trying to stream data to a buffer which I have bound to a VAO vertex buffer binding point, that I want to access through vertex attributes in the shader. Usually I stream to buffers which I have bound as GL_UNIFORM_BUFFERs, and I do it like this:

// create an immutable data storage
glNamedBufferStorage(uboID, capacity, data, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
// map entire storage forever
mappedPtr = glMapNamedBufferRange(uboID, 0, capacity, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
// bind buffer to target (GL_UNIFORM_BUFFER) binding point
glBindBufferBase(GL_UNIFORM_BUFFER, uboBinding, uboID);
// bind shader interface block to target binding point
glUniformBlockBinding(shaderProgramID, blockIndex, uboBinding);

Now the uniform block can be accessed in the shader program, and I can stream to it like so:

// copy data to mapped pointer at stream offset
memcpy(mappedPtr + uploadOffset, &uploadData[0], uploadSize);
// This is the important part: tell the shader which range of the buffer to use
glBindBufferRange(GL_UNIFORM_BUFFER, uboBinding, uboID, uploadOffset, uploadSize);
// increment offset
uploadOffset += uploadSize;

If I bind a buffer to a VAO and access its data as vertex attributes in the shader, I don't know how to tell the shader the range in which the updated data can be found. I can give an offset when binding a buffer to a VAO with glVertexArrayVertexBuffer, but it has a few more parameters and does not seem like the right thing to call every frame. I have not gotten it to work with it, either. Is there something like glBindBufferRange for vertex buffers bound to VAOs?
1 | Why do I see spaces between polygons in OpenGL, only on an Nvidia 720M? I'm developing an OpenGL application that renders .obj 3D models. I use VBOs to render the polygons. Using the same code, the loaded models appear fine on all PCs, except on my Nvidia 720M. My laptop sports an Intel HD 4000 embedded graphics GPU, and an Nvidia 720M. When I use the Intel GPU, it looks fine, but when I load it on the Nvidia 720M, spaces appear around polygons like so On Nvidia 720M GPU On everything else No other 3D application that runs on the Nvidia 720M GPU has the same issue. I am using the latest drivers. Is this a problem with my application? How can I fix it? |
1 | Rendering two textures with blending and alpha test What I am looking for is the following I have a circle on a square image, alpha is 0 at the corners and also a square shadow, alpha is 0 everywhere else I would like to have as final result a blending of these two renders, plus the shadow not being rendered outside the circle How could I achieve that? |