_id | text |
---|---|
1 | glfw resizing causing image scaling I have a quad rendered that extends from the top left of the window to the width of the window and that is 64 pixels high. When I resize the window from its initial size, the quad and text scale proportionately bigger or smaller, in the same way Photoshop can scale an image. What I'm seeking, on a basic level, is that regardless of how I resize the window, everything drawn remains the same size. From the below image, the right side is the initial size and the left side is what happens when I drag the window to be a smaller size. The red bar and text scale with it. This is how I'm handling my resizing: glfwSetWindowSizeCallback(pWindow, WindowSizeCallback) initialized after context void WindowSizeCallback(GLFWwindow* window, int width, int height) glfwSetWindowSize(window, width, height) |
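A minimal sketch of the usual fix, assuming a GLFW 3 window and a fixed-function pixel-space projection (the callback and names are illustrative). Calling glfwSetWindowSize from inside the size callback only re-requests the size the window already has, so the content keeps stretching with the default viewport; updating the viewport and rebuilding the projection in pixel units keeps a 64-pixel-high bar at 64 pixels:

```cpp
// Hypothetical sketch: keep drawn elements at a fixed pixel size after a resize.
#include <GLFW/glfw3.h>

static void FramebufferSizeCallback(GLFWwindow* window, int width, int height)
{
    // Match the GL viewport to the new framebuffer size.
    glViewport(0, 0, width, height);

    // Rebuild the projection in pixel units so a 64-px-high quad stays 64 px high
    // regardless of the window size, instead of being stretched with it.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, (double)width, (double)height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
}

// Registered once after the context is created:
// glfwSetFramebufferSizeCallback(pWindow, FramebufferSizeCallback);
```

With a matrix-based pipeline the same idea applies: recompute the orthographic projection from the new width and height before drawing.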
1 | Is index drawing faster than non index drawing I need to draw a lot of polygons consisting of 6 vertices (two triangles). Without any texture coordinates, normals etc., both approaches result in 72 bytes. In the future I would definitely also need texture coordinates and normals, which would make index drawing consume less memory, though not a lot. So my question is: for VAOs with few vertex overlaps, which approach is faster? I don't care about the extra memory consumed by non index drawing, only speed. Edit: to make it clear. Non index approach: float[18] vertices Triangle 1: 1,1,0, 1,0,0, 0,0,0, Triangle 2: 1,0,0, 0,1,0, 0,0,0. Index approach: float[12] vertices 1,1,0, 1,0,0, 0,0,0, 0,1,0, int[6] indices Triangle 1: 0,1,2, Triangle 2: 0,3,2 |
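For reference, a sketch of the two submission paths using the data from the question (OpenGL 3.x style; buffer setup is abbreviated and names are illustrative). With only a couple of shared vertices the speed difference is usually small; indexed drawing additionally benefits from the post-transform vertex cache when vertices are reused:

```cpp
// Non-indexed: 6 vertices drawn directly.
float verts[18]       = { 1,1,0,  1,0,0,  0,0,0,   1,0,0,  0,1,0,  0,0,0 };
// glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
// glDrawArrays(GL_TRIANGLES, 0, 6);

// Indexed: 4 unique vertices plus an index buffer that reuses two of them.
float uniqueVerts[12] = { 1,1,0,  1,0,0,  0,0,0,  0,1,0 };
unsigned int indices[6] = { 0,1,2,  0,3,2 };
// glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
```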
1 | GLSL Bokeh using Quads and Textures I'm trying to create a depth of field effect with bokeh sprites in GLSL. Specifically, what I would like to do is, for each pixel: see if the pixel is out of the focal range, and if it is, draw a quad and apply a texture to provide a bokeh sprite. This kind of implementation is seen in the Unreal Engine and by Matt Pettineo, however both implementations are in DX11 and I'm using OpenGL. I'm a bit stuck on the drawing a quad and applying a texture bit. Does anyone know how I can do this, or provide any relevant links as to how I can do this? Thanks |
1 | How to get independent from GLEW dll file I'm literally an absolute noob in GL. I just wrote my first GLEW piece of code yesterday: #include <stdio.h> #include <stdlib.h> #include <GL/glew.h> #include <GL/glfw.h> int main(void) glfwInit() glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 4) glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3) glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3) glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE) glfwOpenWindow(640, 480, 8, 8, 8, 0, 24, 0, GLFW_WINDOW) glewInit() glfwSetWindowTitle("OpenGL Rules!") glfwEnable(GLFW_STICKY_KEYS) do glfwSwapBuffers() while( glfwGetKey(GLFW_KEY_ESC) != GLFW_PRESS && glfwGetWindowParam(GLFW_OPENED) ) This compiles just fine with (using MinGW on Win8 x64): gcc opengl.c -lglfw -lglew32 -lopengl32 However, in order to run the output, I have to copy glew32.dll to the same directory from which I'm running the program. Is there a way to get independent from the dll? Like, compiling once and using it without having to carry the dll around? |
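One common way to avoid shipping the DLL is to link GLEW statically. This is a sketch under the assumption that a static build of GLEW is available for MinGW; the exact library name varies between distributions and self-built copies:

```c
/* Define GLEW_STATIC before the include so glew.h does not expect dllimport symbols. */
#define GLEW_STATIC
#include <GL/glew.h>
#include <GL/glfw.h>

/* Build roughly like:
   gcc opengl.c -DGLEW_STATIC -lglfw -lglew32s -lopengl32 -lgdi32
   (the static GLEW library may be named glew32s or libglew32.a depending on how it was built,
    and extra system libraries may be needed). */
```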
1 | Why is my OpenGL 4.1 shader not working on OS X Macbook ( works on Linux )? I've recently rebuilt the shaders for my program and it stopped "working" (black screen) on OS X (El Capitan), but it's OK on Linux. What could be the cause? There are no shader compilation errors, and here is my shader code: https://github.com/Marqin/YuriaViewer/blob/master/vertex.glsl Keep in mind that this software worked on El Capitan with OpenGL 4.1 before the shader rewrite. Here are my glfw hints: glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4) glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1) glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE) glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE) My program also checks for GL_ARB_gpu_shader_fp64 and it's available on my Macbook (MacBook Air mid 2013). EDIT1: I've "debugged" it a little and it looks like i is always less than vis on OS X, that's why it's black. EDIT2: I made a simple test. I typed all uniform values in the shader by hand and now it wasn't black, but I got some gibberish on screen. Then I changed every dvec3 and dvec2 to the float versions and it showed a nice fractal. So it looks like double is not working on OS X. But how can that be? It says that GL_ARB_gpu_shader_fp64 is available and it doesn't even complain when I request it in the vertex shader. EDIT3: It works on an iMac with an R9 M395. So it's a problem with Intel hardware (or a problem with the OS X Intel driver). |
1 | removing triangle artifacts from overlapping height maps OpenGL I have two 100x100 height maps that I am drawing using a triangle mesh. One represents the land height, and the other represents the (water + land) height. I currently draw both meshes on top of each other, and I make the water mesh transparent when the water level is 0. But where the land meets the water there are lots of artifacts. I believe the cause is that the water mesh is at the height of the land mesh, so at the edges the water mesh height goes from being land mesh height to …, and the visible slope is actually the water mesh slope and not the land mesh slope. That said, I don't know any good way to fix it. Note: the water level may not be flat, so replacing the water mesh with a plane won't work |
1 | lighting for landscape I've got a landscape(created in Photoshop .raw file) and a .tga texture for it. I read .raw file and read .tga file like this LoadRawFile("landscape.Raw", MapSize MapSize, amp HeightMap 0 0 ) Texture landscape if (LoadTGA( amp landscape, "landscape.tga")) glGenTextures(1, amp landscape.texID) glBindTexture(GL TEXTURE 2D, landscape.texID) glTexImage2D(GL TEXTURE 2D, 0, landscape.bpp 8, landscape.width, landscape.height, 0, landscape.type, GL UNSIGNED BYTE, landscape.imageData) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) if (landscape.imageData) free(landscape.imageData) else cout lt lt "Cannot load texture" lt lt endl glEnableClientState(GL VERTEX ARRAY) glVertexPointer (3, GL FLOAT, 0, VertexMap) glEnableClientState(GL TEXTURE COORD ARRAY) glTexCoordPointer(2, GL FLOAT, 0, TextureMap) for (int Row 0 Row lt MapSize 2 Row ) Indices Row Row I render landscape like this(method renderLandscape) int x, y, i, j int Index 0 for (i 0 i lt MapSize 1 i ) Index 0 for (j 0 j lt MapSize 1 j ) x j Zoom y i Zoom TextureMap Index 0 0 j TextureBit TextureMap Index 0 1 i TextureBit TextureMap Index 1 0 j TextureBit TextureMap Index 1 2 (i 1) TextureBit VertexMap Index 0 2 HeightMap j i VertexMap Index 1 2 HeightMap j i 1 VertexMap Index 0 0 x VertexMap Index 0 3 y VertexMap Index 1 0 x VertexMap Index 1 4 y Zoom Index 2 glDrawElements(GL TRIANGLE STRIP, Index, GL UNSIGNED INT, Indices) draw method glPushMatrix() glTranslatef( 20, 0, 0) glScalef(0.01, 0.01, 0.01) glRotatef(90, 1, 0, 0) landscape.RenderLandscape() glPopMatrix() So by default(without lighting) it looks like this But when I enable lighting(add this lines to my opengl init function) GLfloat light ambient 0.5, 0.5, 0.5, 1.0 GLfloat light diffuse 1.0, 1.0, 1.0, 1.0 GLfloat light specular 1.0, 1.0, 1.0, 1.0 GLfloat light position 1.0, 1.0, 1.0, 0.0 glLightfv(GL LIGHT0, GL AMBIENT, light ambient) glLightfv(GL LIGHT0, GL DIFFUSE, light diffuse) glLightfv(GL LIGHT0, GL SPECULAR, light specular) glLightfv(GL LIGHT0, GL POSITION, light position) glEnable(GL LIGHTING) glEnable(GL LIGHT0) The result landscape looks like this I don't need any special or difficult lighting. So it seems I need to calculate normales(I am new in opengl and usually create .obj models where vn precalculated). What is the best way to calculate normales in my case? |
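A sketch of one common way to get the missing normals for a regular height map, using central differences. Here h(x, y) stands in for a read of HeightMap at that grid cell and Zoom for the grid spacing used in the rendering code above; boundary handling is omitted:

```cpp
#include <cmath>

extern float h(int x, int y);   // assumed accessor into the loaded HeightMap

void computeNormal(int x, int y, float Zoom, float outN[3])
{
    // Slope of the height field in each direction gives an (unnormalized) normal.
    float nx = (h(x - 1, y) - h(x + 1, y)) / (2.0f * Zoom);
    float nz = (h(x, y - 1) - h(x, y + 1)) / (2.0f * Zoom);
    float ny = 1.0f;

    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    outN[0] = nx / len;  outN[1] = ny / len;  outN[2] = nz / len;
}
```

The resulting per-vertex normals would then be supplied with glEnableClientState(GL_NORMAL_ARRAY) and glNormalPointer alongside the existing vertex and texture coordinate arrays, so the fixed-function lighting has something to work with.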
1 | Cannot Get The Texture Showed Up Correctly glDrawElements I still have this problem almost 1 month. Tried to search on Google but did not find any solution to this. I have loaded all the data correctly but don't know why the texture came up like this. Here is my loader Header struct SVertex public float X, Y, Z struct STexture public float U, V struct SFace public int X, Y, Z class CBMap public struct SMesh SVertex Vertex SVertex Normal STexture TexCoord SFace Face SFace FaceNormal char Texture 32 int TextureID int TotalVertice int TotalNormal int TotalFace SMesh Mesh CBMap() CBMap() int TotalMesh int Load(char szFile) int Get FileHeader(FILE pFile) int Get TotalMesh(FILE pFile) int GetData(FILE pFile) CPP int CBMap Load(char szFile) FILE pFile fopen(szFile, "r") if (!pFile) return 0 int iBMap Get FileHeader(pFile) if (!iBMap) printf("BMAP file is not valid n") return 0 TotalMesh Get TotalMesh(pFile) Mesh new SMesh TotalMesh GetData(pFile) int iMesh 0 char pJunk char szHeader 32 while (!feof(pFile)) fscanf(pFile, " s", amp szHeader) if (!strncmp(szHeader, "BMAP", 4)) continue if (!strncmp(szHeader, "mesh", 4)) continue if (!strncmp(szHeader, "mtl", 3)) fscanf(pFile, " s", Mesh iMesh .Texture) Mesh iMesh .TextureID LoadTexture(Mesh iMesh .Texture) if (!strncmp(szHeader, "ve", 2)) fscanf(pFile, " d", amp pJunk) Mesh iMesh .Vertex (SVertex )malloc(sizeof (SVertex) Mesh iMesh .TotalVertice) for (int i 0 i lt Mesh iMesh .TotalVertice i) fscanf(pFile, " f f f", amp Mesh iMesh .Vertex i .X, amp Mesh iMesh .Vertex i .Y, amp Mesh iMesh .Vertex i .Z) if (!strncmp(szHeader, "uv", 2)) fscanf(pFile, " d", amp pJunk) Mesh iMesh .TexCoord (STexture )malloc(sizeof (STexture) Mesh iMesh .TotalVertice) for (int i 0 i lt Mesh iMesh .TotalVertice i) fscanf(pFile, " f f", amp Mesh iMesh .TexCoord i .U, amp Mesh iMesh .TexCoord i .V) if (!strncmp(szHeader, "vn", 2)) fscanf(pFile, " d", amp pJunk) Mesh iMesh .Normal (SVertex )malloc(sizeof (SVertex) Mesh iMesh .TotalNormal) for (int i 0 i lt Mesh iMesh .TotalNormal i) fscanf(pFile, " f f f", amp Mesh iMesh .Normal i .X, amp Mesh iMesh .Normal i .Y, amp Mesh iMesh .Normal i .Z) if (!strncmp(szHeader, "f", 1)) fscanf(pFile, " d", amp pJunk) Mesh iMesh .Face (SFace )malloc(sizeof (SFace) Mesh iMesh .TotalFace 3) Mesh iMesh .FaceNormal (SFace )malloc(sizeof (SFace) Mesh iMesh .TotalFace 3) for (int i 0 i lt Mesh iMesh .TotalFace i) fscanf(pFile, " d d d d d d", amp Mesh iMesh .Face i .X, amp Mesh iMesh .Face i .Y, amp Mesh iMesh .Face i .Z, amp Mesh iMesh .FaceNormal i .X, amp Mesh iMesh .FaceNormal i .Y, amp Mesh iMesh .FaceNormal i .Z) if (!strncmp(szHeader, "end", 3)) if (iMesh lt TotalMesh) iMesh printf("CBMap Model Loaded n") fclose(pFile) return 1 First picture is reality and second picture expectation |
1 | Z Value of clip space position is always 1.0 I render a lot of quads on the screen into z direction (20 x 2000). I want to get the depth value in a final render target. But it looks like z is always 1.0f. I checked the result with the OpenGL debugger. Any advice? Vertex shader version 410 layout(row major) uniform UView mat4 m View mat4 m Projection layout(row major) uniform UModelMatrix mat4 m ModelMatrix layout(location 0) in vec2 VertexPosition out vec3 PSVSPosition void main(void) vec4 VSPosition m View m ModelMatrix vec4(VertexPosition.xy, 0.0f, 1.0f) PSVSPosition VSPosition.xyz gl Position m Projection VSPosition Fragment shader version 410 layout(row major) uniform USettings mat4 m ProjectionMatrix layout(location 0) out vec4 PSColor in vec3 PSVSPosition void main(void) Calculate depth vec4 CSPosition m ProjectionMatrix vec4(PSVSPosition, 1.0f) CSPosition.xyz CSPosition.w PSColor vec4(vec3(CSPosition.z 0.5f 0.5f), 1.0f) Result Closer result view space position result clip space position result |
1 | How to use OpenGL blend mode functions to brighten/darken a texture I tried this code, but the texture did not get any lighter. try texture TextureLoader.getTexture("png", Game.class.getResourceAsStream(" brick.png"), true, GL_NEAREST) catch (IOException e) e.printStackTrace() GL11.glBindTexture(GL11.GL_TEXTURE_2D, texture.getTextureID()) glEnable(GL_BLEND) glBlendFunc(GL_CONSTANT_ALPHA, GL_CONSTANT_ALPHA) GL14.glBlendColor(1.0f, 1.0f, 1.0f, 0.5f) glColor4f(1, 1, 1, 0.5f) GL11.glBegin(GL11.GL_QUADS) Start Drawing Quads Front Face GL11.glNormal3f(0.0f, 0.0f, 1.0f) Normal Pointing Towards Viewer GL11.glTexCoord2f(0.0f, 0.0f) GL11.glVertex3f( 1.0f, 1.0f, 1.0f) Point 1 (Front) GL11.glTexCoord2f(1.0f, 0.0f) GL11.glVertex3f(1.0f, 1.0f, 1.0f) Point 2 (Front) GL11.glTexCoord2f(1.0f, 1.0f) GL11.glVertex3f(1.0f, 1.0f, 1.0f) Point 3 (Front) GL11.glTexCoord2f(0.0f, 1.0f) GL11.glVertex3f( 1.0f, 1.0f, 1.0f) Point 4 (Front) glEnd() |
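A sketch of two standard blend setups that do brighten or darken what is already in the framebuffer; the LWJGL GL11.* constants are the same values, so the calls translate directly. This assumes the quad is drawn over the content that should be lightened or darkened:

```cpp
// Brighten: additive blending adds the incoming fragment color to the framebuffer.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);

// Darken: multiply the framebuffer by the incoming color; values below 1 can only darken.
glBlendFunc(GL_ZERO, GL_SRC_COLOR);
glColor4f(0.5f, 0.5f, 0.5f, 1.0f);   // draws the quad's area roughly 50% darker
```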
1 | Reset Camera view webgl I have camera rotating by dragging mouse. Im trying to add reset button that restart the scene to initial view. The rotating camera works as expected but I can't figure out how to reset it. The scene rotating in weird directions, not as expected. I have 2 vars absX, absY which hold the overall change angle value from the init mode. when reset applied mat4.rotate(getRotationCameraMat, degToRad( absX 10), 0, 1, 0 ) mat4.rotate(getRotationCameraMat, degToRad( absY 10), 1, 0, 0 ) absX 0 absY 0 apply the rotate mat4.multiply(mvMatrix, cameraRotationMat) the camera code var mouseDown false var prevMouseX null var prevMouseY null var cameraRotationMat mat4.create() mat4.identity(cameraRotationMat) function handleMouseDown(event) mouseDown true prevMouseX event.clientX prevMouseY event.clientY function handleMouseUp(event) mouseDown false var absX 0 var absY 0 function handleMouseMove(event) if (!mouseDown) return var currentX event.clientX var currentY event.clientY var deltaX currentX prevMouseX absX deltaX var newRotationMatrix mat4.create() mat4.identity(newRotationMatrix) mat4.rotate(newRotationMatrix, degToRad(deltaX 10), 0, 1, 0 ) var deltaY currentY prevMouseY absY deltaY mat4.rotate(newRotationMatrix, degToRad(deltaY 10), 1, 0, 0 ) mat4.multiply(newRotationMatrix, cameraRotationMat,cameraRotationMat) prevMouseX currentX prevMouseY currentY |
1 | understanding the basics of raytracing I have a sphere in my world space. I don't understand how I can find my sphere using the X and Y on my screen, because I don't understand what the Z value of my ray should be, given that we use the world coordinate system. |
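A sketch of the usual answer: there is no single Z for a screen point, so the screen X and Y are unprojected twice, once at the near plane (window z = 0) and once at the far plane (window z = 1), and the ray runs between those two world-space points. This version uses GLM; the view, projection and viewport values are assumed to come from the application's camera:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin, direction; };

Ray screenPointToRay(float mouseX, float mouseY,
                     const glm::mat4& view, const glm::mat4& proj,
                     const glm::vec4& viewport)   // (x, y, width, height)
{
    // Window coordinates have y going up, so flip mouse y if it comes from a top-left origin.
    glm::vec3 winNear(mouseX, viewport.w - mouseY, 0.0f);
    glm::vec3 winFar (mouseX, viewport.w - mouseY, 1.0f);

    glm::vec3 nearPoint = glm::unProject(winNear, view, proj, viewport);
    glm::vec3 farPoint  = glm::unProject(winFar,  view, proj, viewport);

    return { nearPoint, glm::normalize(farPoint - nearPoint) };
}
```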
1 | How do I build Assimp with MinGW? How can I build Assimp with cMake and MinGW? I tried, but I don't get a functioning library... Details of my attempt I am trying to build the Open Asset Import Library (Assimp) but I have been running into problems. The assimp documentation is really poor and expects you to know exactly what you are doing. The developers haven't been particularly helpful either. I hope someone here has successfully built assimp and can let me know where I am going wrong. I suspect that I have several problems that are contributing to my failure. I am using 64 bit Windows 8.1 pro and using MinGW version 4.8.1. The first thing I tried was downloading assimp 3.1.1 and boost 1.57. I extracted both folders and tried to use cMake to generate the makefile for MinGW. I haven't used cMake before and the assimp instructions are use cMake as you normally would, so I have no idea if I configured it right. I pointed BOOST ROOT to the boost folder I extracted from the download, set it to build static libraries and generated the makefile. I then tried running the makefile and got a number of errors. The first was IFCReaderGen.cpp.obj too many sections and was too big. After some googling, I found a workaround was to set CMAKE BUILD TYPE to release. That seemed to work and it finished the build but I only got the files assimp32.dll and libassimp.dll.a, which I thought was odd because I was expecting lib relese libassimp.a to be generated as per the details on the assimp website. Though the website might also be wrong or out of date. I linked with lassimp.dll and that allowed me to build my program. However it crashed upon start up the error that appeared immediately at start up was Program has stopped working (There was no additional info.) I guessed that this was a dll problem which was odd because I had (tried) to build the static libraries through cMake. I copied assimp32.dll into my executable folder. This time, the program wouldn't crash but the screen would be blank. I'm guessing that there was something wrong with the library I build that was causing it to link incorrectly. At this point, I deleted everything to try a fresh start. I tried to follow this article I downloaded assimp 3.1.1 and boost 1.57 and extracted them. I opened cmd, changed to the boost root directory and ran bootstrap.bat mingw I then ran b2 build dir "C Libraries boost " variant release link static address model 32 toolset gcc The result of this was 598 targets updates, 3 targets skipped, 2 targets failed. I now have a folder C Libraries boost boost bin.v2 with two folders libs and standalone, but I'm not certain what my BOOST ROOT directory is anymore. I opened cMake, selected the assimp folder I had extracted, and configured the following BOOST ROOT "C Libraries boost boost " ASSIMP BUILD STATIC LIB TRUE ASSIMP BUILD TESTS FALSE ASSIMP ENABLE BOOST WORKAROUND FALSE BUILD SHARED LIBS FALSE I then pressed configure and got the error that the boost libraries were not found. I'm only guessing what these cMake settings do, as I can't find any documentation for assimp. I'd like to build some version of assimp that I can then link to and use in a simple test program. At some point I will go back and build the shared libraries, but first I just want to get something working and understand how to do it again. Can someone see what has gone wrong? |
1 | glUniformMatrix4fv OpenTK equivalent Very simple and quick question which surprisingly I couldn't find an answer to over the internet: what is the equivalent of glUniformMatrix4fv for OpenTK? I've browsed all 7 overloads of GL.UniformMatrix4 and none of them seems correct to me, nor have I found any example usage for them. E.g. if I have a Matrix4[] matrices variable (properly initialized etc.) that I want to map onto a mat4 matrices[2] in GLSL, which GL.UniformMatrix4 overload should I use? P.S. Using the OpenTK.Next NuGet package version 1.1.1616.8959 |
1 | OpenGl lighting casting black shadow I'm having issues with lighting that I can't figure out, namely solid black sections. Any ideas? Here's my fragment shader code version 400 core in vec2 pass textureCoords in vec3 surfaceNormal in vec3 toLightVector in vec3 toCameraVector out vec4 out Colour uniform sampler2D modelTexture uniform vec3 lightColour uniform float shineDamper uniform float reflectivity void main(void) vec3 unitNormal normalize(surfaceNormal) vec3 unitLightVector normalize(toLightVector) float nDot1 dot(unitNormal, unitLightVector) float brightness max(nDot1,0.4) vec3 diffuse brightness lightColour vec3 unitVectorToCamera normalize(toCameraVector) vec3 lightDirection unitLightVector vec3 reflectedLightDirection reflect(lightDirection,unitNormal) float specularFactor dot(reflectedLightDirection, unitVectorToCamera) specularFactor max(specularFactor,0.0) float dampedFactor pow(specularFactor,shineDamper) vec3 finalSpecular dampedFactor reflectivity lightColour vec4 textureColour texture(modelTexture,pass textureCoords) if(textureColour.a lt 0.5) discard out Colour vec4(diffuse,1.0) textureColour vec4(finalSpecular,1.0) and vertex shader code version 400 core in vec3 position in vec2 textureCoords in vec3 normal out vec2 pass textureCoords out vec3 surfaceNormal out vec3 toLightVector out vec3 toCameraVector uniform mat4 transformationMatrix uniform mat4 projectionMatrix uniform mat4 viewMatrix uniform vec3 lightPosition uniform float useFakeLighting void main(void) vec4 worldPosition transformationMatrix vec4(position,1.0) gl Position projectionMatrix viewMatrix worldPosition pass textureCoords textureCoords vec3 actualNormal normal if (useFakeLighting gt 0.5) actualNormal vec3(0.0,1.0,0.0) surfaceNormal (transformationMatrix vec4(actualNormal,0.0)).xyz toLightVector lightPosition worldPosition.xyz toCameraVector (inverse(viewMatrix) vec4(0.0,0.0,0.0,1.0)).xyz worldPosition.xyz |
1 | Render in an ImGui Window How do I render my game scene into an ImGui window? I want to get from this to this |
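A common approach, sketched below under the assumption that the scene is first rendered into a framebuffer object with a color texture attachment (sceneTexture and the sizes are illustrative names): draw the scene to the FBO, then hand that texture to ImGui as an image.

```cpp
#include "imgui.h"

void drawSceneWindow(unsigned int sceneTexture, int sceneWidth, int sceneHeight)
{
    ImGui::Begin("Scene");
    // ImGui only needs the texture handle; OpenGL texture ids are passed as ImTextureID.
    ImGui::Image((ImTextureID)(intptr_t)sceneTexture,
                 ImVec2((float)sceneWidth, (float)sceneHeight),
                 ImVec2(0, 1), ImVec2(1, 0));   // flipped UVs, since GL textures start bottom-left
    ImGui::End();
}
```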
1 | Ray Tracing in One Weekend, how to add depth of field to view matrix? I'm working through the infamous Ray Tracing in One Weekend book as I implement it on a OpenGL compute shader, the only thing I have left to do is add depth of field. I have this function that generates rays from a view and projection matrix, so its slightly different from the book where he does his own camera setup, here is my function Ray Camera getRay(inout uvec4 useed) float aperture 0.1 float lens radius aperture 2 vec3 rd lens radius sampleUnitDisk(useed) vec3 rayNDC vec3( ((2.0f fragCoord.x) iResolution.x 1.0f), (1.0f (2.0f fragCoord.y) iResolution.y), 1.0f ) vec4 rayClip vec4( rayNDC.x, rayNDC.y 1, 1.0f, 1.0f ) vec4 rayCamera inverse(projection) rayClip rayCamera.z 1.0f, rayCamera.w 0.0f vec3 direction normalize((inverse(view) rayCamera).xyz) return Ray(position, direction) I am trying to add the randomized offset as described in the Defocus Blur chapter. I have gotten this far float aperture 0.1 float lens radius aperture 2 vec3 rd lens radius sampleUnitDisk(useed) But I am not sure what to do with rd next, do I simply add it to the ray's position? do I also need to add it to the ray direction? Do I alter the view matrix? Any help is appreciated. |
1 | How to use GLX from Java? I am trying to use Clojure to create an OpenGL window under Linux. Is there a tutorial on how to use GLX and X Window directly? I'd rather learn how to do this directly, which I already know how to do under Windows with the Win32 SDK, instead of relying on a wrapper. |
1 | Problem with bullet movement I'm trying to draw "bullets" in my game, but I think something is wrong with the velocity computation. Sometimes the "bullets" go the right way, but sometimes they don't. The CameraFront is constructed from Euler angles. Can you help? Thanks in advance. So this is how I create bullets: Bullet_tracer makeBulletTracer(v3 cameraFront, v3 startP, v3 endP) Bullet_tracer result assert((gTracersCount + 1) < MAX_BULLET_TRACERS) mat4 model = indentity() model = scale(model, v3(.5f,.5f,1.5f)) v3 up = v3(0.0f,1.0f,0.0f) v3 right = normalize(cross(cameraFront,up)) mat4 M = rows3x3(right,up,cameraFront) model = model * M model = translate(model, startP + v3(0.0f,3.0f,0.0f)) result.model = model result.velocity = cameraFront gTracersCount++ return result This is how I draw them: for (u32 tracerIndex = 0; tracerIndex < gTracersCount; tracerIndex++) Bullet_tracer *tracer = &gameState->tracers[tracerIndex] v3 p = getTranslationPart(tracer->model) v3 offset = (p + tracer->velocity) * input->dtForFrame * 0.01f tracer->model = translate(tracer->model, offset) passUniformMatrix(opengl.shaderProgram, tracer->model, false, "model") glDrawArrays(GL_TRIANGLES, 0, 36) |
1 | Why does the lighting change the object's color? I have code that draws a sphere. Without lighting it is white, but if I enable lighting, it's drawn in gray. I don't know why the sphere changed its color. #include <GL/gl.h> #include <GL/glu.h> #include <GL/glut.h> void init(void) GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 } GLfloat mat_shininess[] = { 50.0 } GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 } glClearColor(0.0, 0.0, 0.0, 0.0) glShadeModel(GL_SMOOTH) glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular) glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess) glLightfv(GL_LIGHT0, GL_POSITION, light_position) glEnable(GL_LIGHTING) glEnable(GL_LIGHT0) glEnable(GL_DEPTH_TEST) void display(void) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) glutSolidSphere(1.0, 20, 16) glFlush() void reshape(int w, int h) glViewport(0, 0, (GLsizei) w, (GLsizei) h) glMatrixMode(GL_PROJECTION) glLoadIdentity() if (w < h) glOrtho( 1.5, 1.5, 1.5 (GLfloat)h (GLfloat)w, 1.5 (GLfloat)h (GLfloat)w, 10.0, 10.0) else glOrtho( 1.5 (GLfloat)w (GLfloat)h, 1.5 (GLfloat)w (GLfloat)h, 1.5, 1.5, 10.0, 10.0) glMatrixMode(GL_MODELVIEW) glLoadIdentity() int main(int argc, char **argv) glutInit(&argc, argv) glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH) glutInitWindowSize(500, 500) glutInitWindowPosition(100, 100) glutCreateWindow(argv[0]) init() glutDisplayFunc(display) glutReshapeFunc(reshape) glutMainLoop() return 0 |
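The short explanation: with GL_LIGHTING enabled the fixed-function pipeline ignores glColor and uses the current material instead, and the default material diffuse is (0.8, 0.8, 0.8), which is why the sphere renders gray. A sketch of two common fixes, to be placed in init():

```cpp
// (a) Set the material diffuse/ambient explicitly:
GLfloat mat_diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, mat_diffuse);

// (b) Or let glColor drive the material:
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
glColor3f(1.0f, 1.0f, 1.0f);
```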
1 | glViewport() Win10 doesn't draw new parts I've been steadily working through building a basic 2D tile framework in python. I'm trying to use modern openGL so I have a simple shader and some vertex buffers and it's mostly doing what I want so far. I have a Scene with a method for centering the viewport on a Sprite def centre viewport(self, x, y) x x self.viewport width 2 y y self.viewport height 2 if x gt 0 x 0 if y lt 0 y 0 if x self.viewport width lt self.width x self.width self.viewport width if y self.viewport height gt self.height y self.height self.viewport height self.viewport x x self.viewport y y glViewport(x, y, self.viewport width, self.viewport height) self.viewport width and height are set when a scene is added to the window, so they have the same dimensions as whatever window is created. In my test game, the scene width and height are twice the window dimensions of 704x512. Everything works fine on my main mac development machine. On my work windows machine though, when the sprite moves and causes the scene to scroll, the previously hidden parts of the scene are not drawn My draw loop is very simple def draw(self) glBindBuffer(GL ARRAY BUFFER, self.sprite vertex buffer) glVertexAttribPointer(self.vertices, 3, GL FLOAT, GL FALSE, 0, None) glBindBuffer(GL ARRAY BUFFER, self.sprite texture buffer) glVertexAttribPointer(self.tex coords, 2, GL FLOAT, GL TRUE, 0, None) glDrawArrays(GL TRIANGLES, 0, self.sprite vertex count) glBindBuffer(GL ARRAY BUFFER, self.tile vertex buffer) glVertexAttribPointer(self.vertices, 3, GL FLOAT, GL FALSE, 0, None) glBindBuffer(GL ARRAY BUFFER, self.tile texture buffer) glVertexAttribPointer(self.tex coords, 2, GL FLOAT, GL TRUE, 0, None) glDrawArrays(GL TRIANGLES, 0, self.tile vertex count) and as mentioned, it works fine on my mac. Is glViewport() the appropriate way to be doing this, and why isn't it working properly on windows? |
1 | OpenGL strange rendering problem when buffers have different sizes I have encountered a very odd error in my program, "odd" in the sense that everything the API says suggests that the error should not occur. I have a bunch of 2D un indexed vertex data, and I want to render it as lines. So far, so good. Then, I wanted to make each vertex have its own (RGB) color, so I generate a color for each vertex. For simplicity, I chose red. Works fine, except now only 2 3 of the points are being rendered! The problem arises from the fact that each vertex's position data consists of only 2 numbers, whereas the color data consists of 3 numbers. So, the "position" buffer has 2 elements per vertex while the "color" one has 3 elements per vertex. I thought that using glVertexAttribPointer to tell this information to OpenGL would be enough, but turns out it's not. In fact, if I say that the color data has only 2 elements per vertex, using glVertexAttribPointer(vertexColorID2,2,GL DOUBLE,GL FALSE,0,(void )0) (as opposed to 3), it renders all the points except now I can only specify two numbers for the RGB color, so I can't get the right color. The full code of the issue is below glUseProgram(programID2) draw the graph graph data graphData() std vector lt double gt graphcolordata(graph data.size() 2 3) for (int i 0 i lt graph data.size() i 3) graphcolordata i 1 glEnableVertexAttribArray(vertexPosition modelspaceID2) glBindBuffer(GL ARRAY BUFFER, graphbuffer) glBufferData(GL ARRAY BUFFER, graph data.size() sizeof(GLdouble), amp graph data 0 , GL STREAM DRAW) glVertexAttribPointer(vertexPosition modelspaceID2,2,GL DOUBLE,GL FALSE,0,(void )0) glEnableVertexAttribArray(vertexColorID2) glBindBuffer(GL ARRAY BUFFER, colorbuffer2) glBufferData(GL ARRAY BUFFER, graphcolordata.size() sizeof(GLdouble), amp graphcolordata 0 , GL STREAM DRAW) glVertexAttribPointer(vertexColorID2,3,GL DOUBLE,GL FALSE,0,(void )0) glDrawArrays(GL LINES, 0, graph data.size() 2) glDisableVertexAttribArray(vertexPosition modelspaceID2) glDisableVertexAttribArray(vertexColorID2) Note programID2 is my basic shader program, and the following variable definitions were previously used GLuint vertexPosition modelspaceID2 glGetAttribLocation(programID2, "vertexPosition modelspace") GLuint vertexColorID2 glGetAttribLocation(programID2, "vertexColor") Edit Incredibly stupid error, figured it out immediately after posting when it had previously stumped me for half an hour. std vector lt double gt graphcolordata(graph data.size() 2 3) for (int i 0 i lt graph data.size() i 3) graphcolordata i 1 should be std vector graphcolordata(graph data.size() 2 3) for (int i 0 i lt graphcolordata.size() i 3) graphcolordata i 1 When this initialization is fixed, it works fine. I would delete this, but I do not see how. |
1 | Multiple Render Targets, Multiple Fragment Shaders I render a normal and a depth image of a scene. Then I want to reuse these two images to do further texture image processing in a second fragment shader. I use a framebuffer with 3 textures attached to it. 2 for the normal and the depth textures and one is supposed to contain the final processed image. The problem is, that I can't get the first two images to the second fragment shader to use them as texture samplers. Here is my code First I create a fbo and attach 3 textures to it. create FBO glGenFramebuffers(1, amp Framebuffer) glBindFramebuffer(GL FRAMEBUFFER, Framebuffer) glGenTextures(1, amp renderedNormalTexture) glBindTexture, glTexImage2D, glTexParameteri ... left out for clarity glFramebufferTexture2D(GL FRAMEBUFFER, GL COLOR ATTACHMENT0, GL TEXTURE 2D, renderedNormalTexture, 0) glGenTextures(1, amp renderedDepthTexture) glFramebufferTexture2D(GL FRAMEBUFFER, GL COLOR ATTACHMENT1, GL TEXTURE 2D, renderedDepthTexture, 0) glGenTextures(1, amp edgeTexture) glFramebufferTexture2D(GL FRAMEBUFFER, GL COLOR ATTACHMENT2, GL TEXTURE 2D, edgeTexture, 0) ndbuffers 0 GL COLOR ATTACHMENT0 ndbuffers 1 GL COLOR ATTACHMENT1 ndbuffers 2 GL COLOR ATTACHMENT2 glDrawBuffers(3, ndbuffers) Fragment shader 1 This is the first fragment shader which outputs 2 textures to position 0 and 1 in vec3 position worldspace in vec3 normal cameraspace in vec4 vpos to fragment uniform float zmin uniform float zmax layout(location 0) out vec3 normalcolor layout(location 1) out vec3 depthcolor void main() normalcolor normalize( normal cameraspace ) 0.5 0.5 normal out vec4 v vec4(vpos to fragment) v v.w float gray ( v.z zmin) (zmax zmin) depthcolor vec3(gray) depth out fragment shader 2 And the second fragment shader which is supposed to receive two texture samplers from position 0 and 1 and do sth. with them uniform sampler2D normalImage uniform sampler2D depthImage uniform float width uniform float height in vec2 UV layout(location 2) out vec3 color void main() vec3 irgb texture2D(normalImage, UV).rgb do sth here... color irgb And finally the rendering step Maybe I am mistaken here. I render the geometry scene (once?!) and apply two fragment shaders. glUseProgram(FragmentShader1) SetMVPUniforms() glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) RenderScene() render geometry glUseProgram(FragmentShader2) GLuint nID glGetUniformLocation(FragmentShader2, "normalImage") GLuint dID glGetUniformLocation(FragmentShader2, "depthImage") glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, renderedNormalTexture) glProgramUniform1i(FragmentShader2, nID, 0) glActiveTexture(GL TEXTURE1) glBindTexture(GL TEXTURE 2D, renderedDepthTexture) glProgramUniform1i(FragmentShader2, dID, 1) Now what I get is either nothing or a wrong colored image. |
1 | Color Picking Troubles LWJGL OpenGL I'm attempting to check which object the user is hovering over. While everything seems to be just how I'd think it should be, I'm not able to get the correct color due to the second time I draw (without picking colors). Here is my rendering code public void render() glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glLoadIdentity() camera.applyTranslations() scene.pick() glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glLoadIdentity() camera.applyTranslations() scene.render() And here is what gets called on each block tile on "scene.pick()" public void pick() glColor3ub((byte) pickingColor.x, (byte) pickingColor.y, (byte) pickingColor.z) draw() glReadBuffer(GL FRONT) ByteBuffer buffer BufferUtils.createByteBuffer(4) glReadPixels(Mouse.getX(), Mouse.getY(), 1, 1, GL RGBA, GL UNSIGNED BYTE, buffer) int r buffer.get(0) amp 0xFF int g buffer.get(1) amp 0xFF int b buffer.get(2) amp 0xFF if(r pickingColor.x amp amp g pickingColor.y amp amp b pickingColor.z) hovered true else hovered false I believe the problem is that in the method of each tile block called by scene.pick(), it is reading the color from the regular drawing state, after that method is called somehow. I believe this because when I remove the "glReadBuffer(GL FRONT)" line from the pick method, it seems to almost fix it, but then it will also select blocks behind the one you are hovering as it is not only looking at the front. If you have any ideas of what to do, please be sure to reply! EDIT Adding scene.render(), tile.render(), and tile.draw() scene.render public void render() for(int x 0 x lt tiles.length x ) for(int z 0 z lt tiles.length z ) tiles x z .render() tile.render public void render() glColor3f(color.x, color.y, color.z) draw() if(hovered) glColor3f(1, 1, 1) glPolygonMode(GL FRONT AND BACK, GL LINE) draw() glPolygonMode(GL FRONT AND BACK, GL FILL) tile.draw public void draw() float x position.x, y position.y, z position.z Top glBegin(GL QUADS) glVertex3f(x, y size, z) glVertex3f(x size, y size, z) glVertex3f(x size, y size, z size) glVertex3f(x, y size, z size) glEnd() Left glBegin(GL QUADS) glVertex3f(x, y, z) glVertex3f(x size, y, z) glVertex3f(x size, y size, z) glVertex3f(x, y size, z) glEnd() Right glBegin(GL QUADS) glVertex3f(x size, y, z) glVertex3f(x size, y size, z) glVertex3f(x size, y size, z size) glVertex3f(x size, y, z size) glEnd() (The game is like an isometric game. That's why I only draw 3 faces.) |
1 | Picking 3D with OpenGL ES 2 I'm trying to implement picking in my framework but I don't understand how I can do this. I'm working with OpenGL ES 2. GLM mathematic library. What I have understand, picking can be made with two methods Draw scene with one color per polygon, and read pixel under mouse. Ray projection in 3D and finally calculate geometry lt ray intersection. My architecture A Geometry class with indices vertices texture coords colors data. A Scene class with a Mesh array (using a geometry). A Transformable class with Matrix computed with rotation position scale. A Camera class with methods like getViewport . What I have My ray calculation void Ray createFromEvent( const Camera amp camera ) glm mat4 model camera.getMatrix() glm mat4 projection camera.getProjectionMatrix() glm vec4 viewPort camera.getViewport() this gt origin glm unProject( glm vec3( x, y, camera.getMinDistance() ), model, projection, viewPort ) this gt direction glm unProject( glm vec3( x, y, camera.getMaxDistance() ), model, projection, viewPort ) So, my questions are How can I calculate intersection with my scene geometries? It's better to calculate if point is inside a bounding box or ray traversing a bounding box? How can I compute bounding box with Transformable matrix (rotation, scale, )? (I am on mobile so I am limited by hardware performance.) Thanks! |
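For the ray-versus-geometry part of the question above, a common first step is to test the ray against each mesh's bounding sphere (cheap and rotation-invariant) before doing any per-triangle work. A sketch of the analytic test, assuming GLM and a normalized ray direction:

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Returns true and the distance t along the ray if the ray hits the sphere.
bool raySphere(const glm::vec3& origin, const glm::vec3& dir,      // dir assumed normalized
               const glm::vec3& center, float radius, float& t)
{
    glm::vec3 oc = origin - center;
    float b = glm::dot(oc, dir);
    float c = glm::dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;           // ray misses the sphere
    t = -b - std::sqrt(disc);                // nearest intersection
    if (t < 0.0f) t = -b + std::sqrt(disc);  // origin is inside the sphere
    return t >= 0.0f;
}
```

For the Transformable part: transform the local-space bounding-sphere center by the model matrix and scale the radius by the largest scale factor; a sphere avoids the trouble of rotating an axis-aligned box.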
1 | Orbital rotation equation I am developing a space sandbox (3D). Now I am working on an asteroid belt around a planet. I have an Asteroid class and an AsteroidBelt class that has a collection of the Asteroid instances. Each Asteroid object is placed randomly during initialization. Asteroid class Matrix4 transform double angle float x, y, z Vector3 initPosition public Asteroid(ModelObject object, Texture diffuseTex) initPosition new Vector3( MathUtils.random( Settings.RingsDepth, Settings.RingsDepth), x MathUtils.random( .5f, .5f), y MathUtils.random( Settings.RingsDepth, Settings.RingsDepth)) z public void update(Camera camera) angle Settings.RingsRotationalSpeed if(angle gt 360) angle 0 if(angle lt 0) angle 360 double angleRad angle MathUtils.degreesToRadians x (float) (Math.sin(angleRad) Settings.RingsRadius initPosition.x) y initPosition.y z (float) (Math.cos(angleRad) Settings.RingsRadius initPosition.z) transform.idt() transform.translate(x, y, z) The code above works correctly, but it doesn't allow to rotate the asteroid belt around a planet. How to modify x, y, and z to rotate the asteroid belt around a planet freely? I hope the illustration below will clarify my problem Now the asteroid belt looks like below But I need to rotate it from 90 to 90 degrees by X and Y axis Any help will be appreciated! |
1 | Terminology for the way Transformation Matrix Data is treated I recently asked a question at math stack exchange and realize a similar questions is more suited for this forum, but the original is here https math.stackexchange.com questions 1526601 terminology for transformation matrices update or rigid body transform I'm attempting to describe transformation matrix operations within Sketchup's Ruby scripts, and I need some terminology. Sketchup allows the selection of a group of "loose" drawing objects (i.e. edges and faces), and provides options to create a grouped object from them. The drawing elements inside the group have position data relative to the group's local coordinate system. The grouped objects respond to a getter "transformation" method that returns a transformation object that contains data representing the orientation, scale, and position of the group. Sketchup's "world space" supports multiple grouped objects, and typically there are numerous grouped entities in the outer level entities, as well as loose drawing objects. Each grouped object has a separate transformation that determines their position, orientation, and scaling in world space. The transformation can be altered or replaced with methods that include a) The "transform" method Applies a transformation (i.e. like a rotation) to rotate the group and b) The "transformation " setter method Replaces the existing transformation with the supplied argument. Lets say I have a grouped object called "entity1" and rotation transformation object called "rotation". To provide the same rotation effect I could perform either of the following in Sketchup's Ruby programming entity1.transform!( rotation ) OR entity1.transformation rotation entity1.transformation Only groups and components support the "transformation " setter method, while fundamental Point3d amp Vector3d object provide "transform!" kind of methods. It is obvious, at least from within Sketchup (which uses OpenGL internally), that transformation matrix internal data can be treated as either Relative A transformation performs an incremental operation (Example "entity1.transform! identity" does not have an affect on the grouped model) or Absolute The data inside a transformation is treated as absolute position, orientation, and scaling of the grouped object. (Example "entity1.transformation IDENTITY" sets the model to Sketchup's origin position, the model is de rotated to the orientation it was when first grouped, and it is descaled) Questions 1) What is the terminology for the type of transformation that is supplied to the "transform!" method? I was calling it an "affine", now want to call it an "update transformation". 2) What is the terminology for the type of transformation that is the argument to the setter "transformation "? I have been calling it a "rigid body transformation", and sometimes "absolute transform". It appears that OpenGL is calling this the "model matrix". 3) If I call it "model matrix", is more than one model matrix transform allowed in world space? A previous english stack exchange article for a word to mean "position and orientation" is at https english.stackexchange.com questions 119883 word for position and orientation The article responses recommend to use "pose matrix". |
1 | Creating a 3rd Person Flying Camera for 3D asteroids game In order to learn PyOpenGL and test out an engine I am developing, I am trying to write a 3D asteroids flying space shooter game. Currently, I am implementing the camera using a "look at" function positioned right behind the player's ship. The player is able to control the yaw and pitch with the left right and up down arrow keys respectively. The results look like this (forgive the placeholder art) And here is the code snippet in my Player class that implements that from pyorama.entity import Entity from pyorama.math3d.vec3 import Vec3 from pyorama.math3d.mat4 import Mat4 import math class Player(Entity) def init (self, model, camera) self.model model self.center self.model.mesh.compute bounding sphere().center self.model.transform self.model.transform.translate( self.center) self.camera camera self.key down status "Left" False, "Right" False, "Up" False, "Down" False super(Player, self). init () def update(self) messages super(Player, self).update() for message in messages if message.event type "key down" key message.data "key name" if key in self.key down status.keys() self.key down status key True if message.event type "key up" key message.data "key name" if key in self.key down status.keys() self.key down status key False self.model.transform self.model.transform.translate(self.center) if self.key down status "Up" self.model.transform self.model.transform.rotate x( 0.01) if self.key down status "Down" self.model.transform self.model.transform.rotate x(0.01) if self.key down status "Left" self.model.transform self.model.transform.rotate y(0.01) if self.key down status "Right" self.model.transform self.model.transform.rotate y( 0.01) self.model.transform self.model.transform.translate( self.center) self.model.transform self.model.transform.translate(Vec3(0, 0, 0.1)) right self.model.transform.data 0 3 up Vec3( self.model.transform.data 4 7 ) forward Vec3( self.model.transform.data 8 11 ) position Vec3( self.model.transform.data 12 15 ) temp self.model.transform self.model.transform self.model.transform.translate( self.center) right self.model.transform.data 0 3 up Vec3( self.model.transform.data 4 7 ) forward Vec3( self.model.transform.data 8 11 ) position Vec3( self.model.transform.data 12 15 ) self.camera.view Mat4.look at(position 20 forward, position, up) self.model.transform temp As you can see, the ship stays perfectly still behind and the world rotates around. As a result, it looks very unnatural, as if a 2D ship sprite was simply glued onto the screen! So my question is, how are third person cameras in flying games typically implemented? What would a keyboard mouse control scheme look like that controls yaw, pitch, roll, banking, acceleration deceleration, etc? Would the camera controls for the camera be separated from moving the ship? Any help would be greatly appreciated. |
1 | Large vertex buffer vs multiple draw calls I'm just getting started with OpenGL, and I'm attempting to use it to create a 2D game. In this game, I have a hexagonal grid made up of a very large variety of differently colored hexagons. As a newbie OpenGL programmer, I see two ways of drawing this grid: Using a vertex buffer with the data for a single hexagon, then using a uniform offset value and iterating on the CPU, drawing the same hexagon many times until I have a grid. Creating a single very large pre-calculated vertex buffer that draws all the hexagons in a single call. What's the most efficient method? Is there a better way of doing this? |
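A third option worth knowing about is instanced rendering, sketched below for OpenGL 3.3+ (the buffer names and the InstanceData layout are illustrative): the hexagon mesh is uploaded once, and a second buffer supplies a per-instance offset and color, so the whole grid is still a single draw call without pre-baking a huge vertex buffer.

```cpp
struct InstanceData { float offset[2]; float color[3]; };   // one entry per hexagon

// Setup (once): upload per-instance data next to the single hexagon mesh in the VAO.
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(InstanceData),
             instances.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(1);                 // per-instance offset
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(InstanceData), (void*)0);
glVertexAttribDivisor(1, 1);                  // advance once per instance, not per vertex

glEnableVertexAttribArray(2);                 // per-instance color
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(InstanceData), (void*)(2 * sizeof(float)));
glVertexAttribDivisor(2, 1);

// Draw (each frame): every hexagon in one call.
glDrawArraysInstanced(GL_TRIANGLE_FAN, 0, hexVertexCount, (GLsizei)instances.size());
```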
1 | Is a Single Texture Cube Map Possible? I'm currently developing a test project to explore OpenGL 3 texturing abilities. I have a simple cube, made of 8 vertices and 36 indices. I want each of the cube's faces to have a different texture, so I devised this texture I made it obvious which sections I want visible (I hope...). In Direct3D, I once made a skybox, and I used a cubemap. However, I had to split it into 6 different textures. This is annoying and hard to manage; it would be nice to have just one texture. Is this even possible? I read somewhere that I could do this by duplicating vertices. Is that a good idea? Someone else said I could do it in the shader, but that also baffles me... |
1 | Creating a glitch effect similar to Watch Dogs I'm currently working on a LibGDX game. When a user does something wrong, I would like all the graphics on the screen to jitter, very similar to the glitch distort effect seen in the game Watch Dogs (see below). My question is this: can this effect be achieved in real time by writing a shader? If so, are there any references online on how to do this? (I've had a quick Google but all I could find is how to achieve this effect in Photoshop/After Effects.) Thank you for your help. Screen jitter: https://www.youtube.com/watch?v=EYkqC9uI8Nc Text glitch effect: https://www.youtube.com/watch?v=Wj26Wp2AH_U |
1 | Multiple texture coordinates per mesh? So far I've used the same texture coordinates for both the normal and diffuse textures on a mesh, yet when reading the Assimp documentation on a Mesh (http://assimp.sourceforge.net/lib_html/structai_mesh.html#details), it implies that there can be more than one texture coordinate set per mesh. My question is: in general, in what scenario would these be present, and what is their purpose? |
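A sketch of how those extra channels are exposed by Assimp; a typical reason for a second set is a dedicated, non-overlapping lightmap or ambient-occlusion unwrap while channel 0 keeps the tiling diffuse/normal UVs:

```cpp
#include <assimp/mesh.h>

void readUVs(const aiMesh* mesh)
{
    // A mesh can carry up to AI_MAX_NUMBER_OF_TEXTURECOORDS UV channels.
    for (unsigned int ch = 0; ch < AI_MAX_NUMBER_OF_TEXTURECOORDS; ++ch)
    {
        if (!mesh->HasTextureCoords(ch))
            continue;                               // this channel is not present
        for (unsigned int v = 0; v < mesh->mNumVertices; ++v)
        {
            const aiVector3D& uv = mesh->mTextureCoords[ch][v];
            // use uv.x, uv.y (uv.z only matters for 3D texture coordinates)
        }
    }
}
```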
1 | Rendering model with multiple tiling textures See edit. How would I go about rendering a model with multiple tiling textures? I have a few ideas, but all of them have drawbacks. Here are a few (by the way, this is for an OpenGL game): Use a huge texture, but this comes at the cost of the texture being expensive. Render different objects for each textured part (e.g. one model could be the bricks of a house, another could be the roof, each with its own tiled texture), but this would become very messy, very quickly. Here is an (amazing) image to help demonstrate what I'm trying to do. The repeating "R" on the red area represents perhaps a repeating brick texture, while the green one represents a roof, maybe roof shingles. To add on to that, I've added a window texture. My question is, how do most games deal with this? I can't seem to wrap my head around how I could fit this in one texture, yet I see many other games do exactly what I want. EDIT: This should give a better visual demo. I want a large mesh (not very large in the demo) using one texture atlas. (Please keep in mind not all textures in the atlas will be the same size!) Let's say this is my object: it uses each texture on the atlas, but not the same number of repeats (the brick repeats 3 times, while the smiley face repeats 9 or so times). How do most games achieve something like this, without using some huge file with repeating textures? Example (from BOTW): I assume they didn't just repeat this texture on the atlas, considering how large that would be (and expensive). |
1 | How can I bend an object in OpenGL? Is there a way one could bend an object, like a cylinder or a plane, using OpenGL? I'm an OpenGL beginner (I'm using OpenGL ES 2.0, if that matters, although I suspect math matters most in this case, so it's somewhat version independent). I understand the basics: translate, rotate, matrix transformations, etc. I was wondering if there is a technique which allows you to actually change the geometry of your objects (in this case by bending them)? Any links, tutorials or other references are welcome! |
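One way to think about it: a bend is a per-vertex deformation that no single 4x4 matrix can express, so the vertices are transformed individually, either on the CPU before uploading or in a vertex shader. A sketch of a basic bend that maps the Z axis onto a circular arc in the XZ plane; radius is an illustrative parameter controlling how tight the bend is:

```cpp
#include <cmath>

void bendVertex(float& x, float& y, float& z, float radius)
{
    // Map the distance travelled along Z onto an angle on a circle of the given radius.
    float angle = z / radius;
    float nx = radius - (radius - x) * std::cos(angle);
    float nz = (radius - x) * std::sin(angle);
    x = nx;
    z = nz;
    (void)y;   // y is unchanged for a bend confined to the XZ plane
}
```

The same math moved into a vertex shader lets the bend amount be animated through a uniform without touching the vertex buffer.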
1 | Rotation matrix from OpenGL to DirectX I have an application which uses OpenGL and I have to port it to DirectX. To sum up my issue: how can I port a rotation matrix based on a right handed coordinate system to a left handed coordinate system? I hardly found any documentation on the internet. This is what I found on MSDN: Flip the order of triangle vertices so that the system traverses them clockwise from the front. In other words, if the vertices are v0, v1, v2, pass them to Direct3D as v0, v2, v1. Use the view matrix to scale world space by -1 in the z direction. To do this, flip the sign of the _31, _32, _33, and _34 members of the D3DMATRIX structure that you use for your view matrix. This works fine, except I cannot easily flip my vertices because I got a 3D model from CATIA which is not a primitive, so I can't use that approach. (I'm aware of the row major and column major difference, it does not matter.) Do you know how I can port my rotation matrix from OpenGL to DirectX? Thanks a lot |
1 | Lightmap not moving properly with camera movement I've implemented a 2D lighting system (also with support for 2D shadows). Everything was fine until today, when I realised that it's not working when moving the camera: it looks like the lightmap has some offset from the camera position (for example, if camera.x is 100, the lightmap is at x 200). Little video showing the problem: Video. I can post any piece of code, for now I'm sending you my light shader (fragment): #version 440 in vec4 fragmentColor in vec2 TexCoords out vec4 color uniform float ambientStrength uniform vec2 resolution uniform sampler2D textureSampler uniform sampler2D lightMapTexture uniform bool textureON void main() Calc the lightMap vec2 lightCoord = (gl_FragCoord.xy / resolution) vec4 lightMap = texture(lightMapTexture, lightCoord) if (textureON) vec4 textureColor = texture(textureSampler, TexCoords) vec4 finalColor = lightMap * textureColor * fragmentColor color = vec4(finalColor.rgb, textureColor.a * fragmentColor.a) else color = fragmentColor Vertex: #version 440 in vec3 vertexPosition in vec4 vertexColor in vec2 vertexUV out vec4 fragmentColor out vec2 TexCoords uniform mat4 Projection uniform mat4 View uniform mat4 Model void main() gl_Position = Projection * View * Model * vec4(vertexPosition.xy, 0, 1) fragmentColor = vertexColor TexCoords = vec2(vertexUV.x, 1.0 - vertexUV.y) At first I thought that it's a problem with the resolution, but everything is good. Everything is working fine until I move the camera (or change its position from 0,0 to something else; all other objects have correct positions, also the mouse coords). |
1 | Cook Torrance model implementation black specular light I am trying to implement the Cook Torrance model, and this is how I calculate the parameter Rs float Rs(float m,float F,vec3 N, vec3 L,vec3 V, vec3 H) float result float NdotV dot(N,V) float NdotH dot(N,H) float NdotL dot(N,L) float VdotH dot(V,H) float Geom min(min(1.0, (2.0 NdotV NdotH) VdotH), (2.0 NdotL NdotH) VdotH) float Rough pow(1.0 (pow(m,2.0) pow(NdotH,4.0)), ( pow(NdotH,2.0) 1.0) ( pow(m,2.0) pow(NdotH,2.0))) float Fresnel F pow(1.0 VdotH,5.0) (1.0 F) return (Fresnel Rough Geom) (NdotV NdotL) I apply this formula Where I set m to 0.5 and F0 to 2.0. But I think it's wrong because I'm getting a black area where there should be the specular light PS With OpenGL 2.1, GLSL 1.20. |
1 | Send Geometry Data to Multiple Shaders So I am implementing a deferred rendering model for my engine, and I want to be able to send all scene geometry into a single shader to calculate ambient, diffuse, normal, etc. That's not the question. Once I have all of this data buffered into the GPU, I can render all of that geometry from a camera perspective defined by a uniform for the camera's position in the scene. I am wondering if I can reuse this model data, already in VRAM, in another shader translated by a light source's projection matrix, to calculate the shadows on this scene geometry without needing to send the same data to the GPU again. Is there a way to tell my light shader: hey, all that juicy geometry scene data you want is already in VRAM, just use this matrix transformation instead? Alternatively, when I am sending the VAO to the GPU, is there a way to send it to two shaders in parallel, one for the deferred rendering model, one for shadow casting light sources? Thanks so much! |
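For reference, a sketch of the usual pattern: the vertex data only lives in GPU memory once, and the same VAO can be drawn with any number of programs; only the bound program and its uniforms change between passes. The program and uniform names here are illustrative:

```cpp
glBindVertexArray(sceneVAO);                       // geometry already uploaded once

// Pass 1: shadow map from the light's point of view.
glUseProgram(shadowProgram);
glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "u_lightViewProj"),
                   1, GL_FALSE, &lightViewProj[0][0]);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);

// Pass 2: G-buffer / deferred pass from the camera's point of view.
glUseProgram(geometryProgram);
glUniformMatrix4fv(glGetUniformLocation(geometryProgram, "u_cameraViewProj"),
                   1, GL_FALSE, &cameraViewProj[0][0]);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
```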
1 | Python Opengl Procedural Terrain Rendering I am rendering procedural terrain in Python using OpenGL. Once generated, data for each terrain tile is stored in lists. However currently I am only getting around 25fps despite rendering only around 4000 polygons (2 per terrain tile). The model x and z coordinates are stored in the position matrix contained in the terrain quad location list where as the heights are stored in the vertices of each terrain tile quad in the terrain VAO list. It seems like this low fps is due to inefficiency when render large numbers of models not polygons as when I render models with high polygon counts in addition to this my fps is minimally affected. For reference I am using VAOs, VBOs and EBOs and built upon these tutorials. Is there a way I can make the code more efficient? Before Runtime count 0 for i in range(terrain quad number) terrain VAO count glGenVertexArrays(1) glBindVertexArray(terrain VAO count ) terrain VBO count glGenBuffers(1) glBindBuffer(GL ARRAY BUFFER, terrain VBO count ) glBufferData(GL ARRAY BUFFER, terrain quad vertices count .nbytes, terrain quad vertices count , GL STATIC DRAW) terrain EBO count glGenBuffers(1) glBindBuffer(GL ELEMENT ARRAY BUFFER, terrain EBO count ) glBufferData(GL ELEMENT ARRAY BUFFER, terrain quad vertices count .nbytes, quad indices, GL STATIC DRAW) glEnableVertexAttribArray(0) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, terrain quad vertices count .itemsize 5, ctypes.c void p(0)) glEnableVertexAttribArray(1) glVertexAttribPointer(1, 2, GL FLOAT, GL FALSE, terrain quad vertices count .itemsize 5, ctypes.c void p(12)) Runtime moo 0 for i in range (len(rendered terrain number)) model terrain quad location rendered terrain number moo glBindVertexArray(terrain VAO rendered terrain number moo ) glBindTexture(GL TEXTURE 2D, ground textures int(terrain quad sub biome int(rendered terrain number moo ) 0 0 ) 1 ) glUniformMatrix4fv(model loc, 1, GL FALSE, model) glDrawElements(GL TRIANGLES, len(quad indices), GL UNSIGNED INT, None) moo moo 1 I understand that I may need to lower the number of draw calls which may be done my merging terrain tile models, but how do I do this given I want each tile to have its own texture? Apparently Minecraft does this by merging every block in a chunk into one VBO but how is this done given every block has a different texture? Instancing would not work either. Here is a video of the program. |
1 | Eye Parallax Refraction Shader I am trying to implement the Parallax Refraction effect explained by Jorge Jimenez on this presentation http www.iryoku.com downloads Next Generation Character Rendering v6.pptx and I am facing some difficulties. Here is a screenshot of the interesting part. But this document lacks a bit of explanations especially the second part with the Physically based Refraction. Here is what I've achieved for the moment. This is the simple Parallax Refraction effect and, as you might notice in the screenshot, there is small glitch at grazing angles in the iris when the Parallax Scale value is too high. This seems to be caused by the parallax calculation but are there some tricks to avoid or minimize such issue ? I don't want to go deeper with parallax mapping for the moment, so I don't want to use Steep Parallax Mapping or Parallax Occlusion Mapping. This is the simple Parallax Refraction part. height value comes from a texture float2 offset height viewDir offset.y offset.y texcoord ParallaxScale offset Next there is the texture sampling with the texcoord value Here is the way I calculate the refraction vector according to Snell's law. float cosine dot(viewDir, worldNormal) float sine sqrt(1 cosine cosine) float sine2 ( IOR sine) float cosine2 sqrt(1 sine2 sine2) float3 x worldNormal float3 y normalize(cross(cross(viewDir, worldNormal), worldNormal)) float3 refractedW x cosine2 y sine2 This is integrated with the Physically based Refraction and it gives interesting results but I still don't know how to get rid of the high parallax scale issue which appear at grazing angles... Any idea how to get rig of that ? Thanks a lot. |
1 | model view projection multiplication order I'm debugging a lighting problem where the camera position is affecting the diffuse lighting component on my 3D model. In researching my problem I went back and am reading over all the how to documents. I found that one of my old sources (where I learned OpenGL 2.0) said this (summarized): When creating the modelViewProjection matrix you can't change the order or you'll get unexpected results. First take the View matrix and post multiply it by the projection matrix to create a viewProjection matrix. Then post multiply the model matrix to get the modelViewProjection matrix. Looking at my code, all this time I haven't been doing this: matrixMultiply(mView, thisItem->objModel, mModelView) matrixMultiply(mProjection, mModelView, mModelViewProjection) As a test I tried changing it to this: matrixMultiply(mView, thisItem->objModel, mModelView) matrixMultiply(mProjection, mView, mViewProjection) matrixMultiply(mViewProjection, thisItem->objModel, mModelViewProjection) ...but the result appears the same. EDIT: To answer a question about what the matrices contain, a snapshot of the matrix values: EDIT2: As a test I did the math both ways to compare the results, and even with moving the camera and model around and rotating the model, I'm getting the same results for the final ModelViewProjection. |
1 | How to toggle fullscreen with lwjgl I'm using glfw in lwjgl 3 to try to create a toggleFullscreen method. However it always gives me errors. I saw this question Toggle Fullscreen at Runtime but it didn't help because glfwOpenWindow() and glfwCloseWindow() don't exist for me. I guess lwjgl uses a differnt version of glfw? Does anyone know of a good solution for toggling fullscreen? thanks. this is my current code if it helps. public void toggleFullscreen() long newWindow fullscreen !fullscreen if(fullscreen) newWindow glfwCreateWindow(windowWidth, windowHeight, "Title", glfwGetPrimaryMonitor(), window.getWindowHandle()) glfwMakeContextCurrent( newWindow ) else newWindow glfwCreateWindow(windowWidth, windowHeight, "Title", NULL, windowHandle()) glfwMakeContextCurrent( newWindow ) glfwDestroyWindow(window.getWindowHandle()) windowHandle newWindow glfwMakeContextCurrent(windowHandle) glfwSwapInterval(1) glfwShowWindow(windowHandle) GLContext.createFromCurrent() GL11.glEnable(GL11.GL TEXTURE 2D) glClearColor(0.0f, 1.0f, 0.0f, 0.0f) GL11.glEnable(GL11.GL BLEND) GL11.glBlendFunc(GL11.GL SRC ALPHA, GL11.GL ONE MINUS SRC ALPHA) GL11.glViewport(0,0,windowWidth,windowHeight) GL11.glOrtho(0, windowWidth, windowHeight, 0, 1, 1) GL11.glMatrixMode(GL11.GL MODELVIEW) game.initInputPolling() |
1 | OpenGL Get Rotated X and Y of quad I am developing a game in 2D using the LWJGL library. So far I have a rotating box. I have done basic Rectangle collision, but it doesn't work for rotated rectangles. Does OpenGL have a function that returns the vertices of a rotated rectangle? Or is there another way of doing this using trigonometry? I researched how to do this, and everything I found used some matrix that I don't understand, so I am asking if there is another way of doing this. For clarification, I am trying to find out the true (rotated) X,Y of each point of the rectangle. Let's say the first point of a rectangle (top,left) has x = 10, y = 10. The width and height are 100 pixels. When I rotate the rectangle using glRotatef() the x and y stay the same. The rotation is happening inside OpenGL. I need to extract the x,y of the rectangle so I can detect collisions properly.
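As an illustration of the trigonometry only route (a sketch, not tied to LWJGL; the rectangle, pivot and angle below are made up example values):

#include <cmath>

struct Vec2 { float x, y; };

// Rotate one point around a pivot by angleDeg degrees
// (counter-clockwise for positive angles, the same convention as glRotatef about the +Z axis).
Vec2 rotateAround(Vec2 p, Vec2 pivot, float angleDeg)
{
    float rad = angleDeg * 3.14159265f / 180.0f;
    float s = std::sin(rad), c = std::cos(rad);
    float dx = p.x - pivot.x, dy = p.y - pivot.y;
    return { pivot.x + dx * c - dy * s,
             pivot.y + dx * s + dy * c };
}

int main()
{
    // Example: a 100x100 rectangle with its top-left corner at (10,10), rotated 30 degrees about its centre.
    Vec2 pivot{60.0f, 60.0f};
    Vec2 corners[4] = { {10, 10}, {110, 10}, {110, 110}, {10, 110} };
    for (Vec2& corner : corners)
        corner = rotateAround(corner, pivot, 30.0f);
    return 0;
}

The rotated corners are then what a point-in-polygon or separating-axis test would use for the collision check.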
1 | How can I make OpenGL textures scale without becoming blurry? I'm using OpenGL through LWJGL. I have a 16x16 textured quad rendering at 16x16. When I change it's scale amount, the quad grows, then becomes blurrier as it gets larger. How can I make it scale without becoming blurry, like in Minecraft. Here is the code inside my RenderableEntity object public void render() Color.white.bind() this.spriteSheet.bind() GL11.glBegin(GL11.GL QUADS) GL11.glTexCoord2f(0,0) GL11.glVertex2f(this.x, this.y) GL11.glTexCoord2f(1,0) GL11.glVertex2f(getDrawingWidth(), this.y) GL11.glTexCoord2f(1,1) GL11.glVertex2f(getDrawingWidth(), getDrawingHeight()) GL11.glTexCoord2f(0,1) GL11.glVertex2f(this.x, getDrawingHeight()) GL11.glEnd() And here is code from my initGL method in my game class GL11.glEnable(GL11.GL TEXTURE 2D) GL11.glClearColor(0.46f,0.46f,0.90f,1.0f) GL11.glViewport(0,0,width,height) GL11.glOrtho(0,width,height,0,1, 1) And here is the code that does the actual drawing public void start() initGL(800,600) init() while(true) GL11.glClear(GL11.GL COLOR BUFFER BIT) for(int i 0 i lt entities.size() i ) ((RenderableEntity)entities.get(i)).render() Display.update() Display.sync(100) if(Display.isCloseRequested()) Display.destroy() System.exit(0) |
1 | How can I render a font in C with OpenGL? What I tried I was testing some things in order to render text with stb truetype.h and OpenGL in C. I took as a reference the example that appears here. Basically, this example, loads a .ttf file and returns the raw information in bytes, that can be used to generate a texture in OpenGL. I adapted the example, mentioned before, into modern OpenGL, because, the example uses OpenGL deprecated functions, like glVertex2f. The only thing I get to output on screen was this kind of noise of strange colors The code I use texture t fnt texture GLuint fnt shader unsigned char ttf buffer 1 lt lt 20 unsigned char temp bitmap 512 512 stbtt bakedchar cdata 96 ASCII 32..126 is 95 glyphs define FONT VS quot version 330 core n quot quot layout(location 0) in vec3 m Position quot quot layout(location 1) in vec2 m TexCoords quot quot out vec2 TexCoords n quot quot void main() n quot quot TexCoords m TexCoords n quot quot gl Position vec4(m Position, 1.0) n quot quot n quot define FONT FS quot version 330 core n quot quot in vec2 TexCoords n quot quot uniform sampler2D Texture n quot quot void main() n quot quot gl FragColor texture(Texture, TexCoords) n quot quot n quot void font init(void) fread(ttf buffer, 1, 1 lt lt 20, fopen( quot c windows fonts times.ttf quot , quot rb quot )) stbtt BakeFontBitmap(ttf buffer, 0, 32.0, temp bitmap, 512, 512, 32, 96, cdata) no guarantee this fits! glGenTextures(1, amp fnt texture 3 ) My texture type, is an array that saves the texture on the 3rd position. glBindTexture(GL TEXTURE 2D, fnt texture 3 ) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA8, 512, 512, 0, GL RGBA, GL UNSIGNED BYTE, temp bitmap) glGenerateMipmap(GL TEXTURE 2D) can free temp bitmap at this point glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glBindTexture(GL TEXTURE 2D, 0) fnt shader shader init(FONT VS, FONT FS) void font render(model t model) shader bind(fnt shader) texture bind(fnt texture, 0) model begin(model) model draw(model, GL TRIANGLES) The model (vao, vbo, ibo) is rendering the whole buffer, not individual glyphs model end() texture unbind() shader unbind() Can someone tell me what I'm doing wrong, and, how I'm suposed to render correctly text, with modern OpenGL, with textures and buffers, in order to read the .ttf file and create the necessary information with stb truetype.h and, then, render the text? |
1 | Should I update VAO when I update a VBO? My VAO VBO IBO work fine on iPad and other devices on Android except two (a Samsung Galaxy S4 and a Sony Xperia S). A problem is present when I start my application on these devices: every element moves everywhere and starts to blink on each frame, and the problem is present on every element updated during the simulation. I have a SpriteRenderer that shares a VBO, so I need to update this VBO on each frame for each sprite (change color, uvs, ). The visual glitch is not present on static elements (like text). So my question is: do I have to do something with my VAO on each frame? Here is what I've got Init part bind vao > Bind vbo > Bind ibo unbind vao Rendering part for( sprites ) Update (Need to bind VAO here?) bind vbo (lock) update vbo data unbind vbo (unlock bind) Draw. bind vao drawElement unbind vao Thanks!
1 | Is Windows window creation faster or more efficient than GLFW window creation? Are there any benefits to using a Windows window for OpenGL or Vulkan rendering, rather than using a GLFW window? Are there any performance hits when using C Window creation over GLFW, since GLFW provides the OpenGL and Vulkan interface?
1 | how to move a 2d shape in opengl 4.6 (glfw) Class Button So from my understanding, in order to move a 2d shape you need to manipulate the vertices and then update? If I do need to update the buffer, how would I go about doing that? float xpos 150 Coords relative to the window application, so let's say the window size is 480 by 260 float ypos 60 Where would I plug in position in the vertices array? float vertices .5f, .5f,0, .5f, .5f,0, .5f,.5f,0, .5f,.5f,0 shader.setup( "Resources Shaders shape.vertex", "Resources Shaders shape.frag" ) glGenVertexArrays(1, &VAO) glGenBuffers(1, &VBO) bind the Vertex Array Object first, then bind and set vertex buffer(s), and then configure vertex attributes(s). glBindVertexArray(VAO) glBindBuffer(GL ARRAY BUFFER, VBO) glBufferData(GL ARRAY BUFFER, sizeof(vertices), vertices, GL DYNAMIC DRAW) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 3 sizeof(float), (void )0) glEnableVertexAttribArray(0) note that this is allowed, the call to glVertexAttribPointer registered VBO as the vertex attribute's bound vertex buffer object so afterwards we can safely unbind glBindBuffer(GL ARRAY BUFFER, 0) You can unbind the VAO afterwards so other VAO calls won't accidentally modify this VAO, but this rarely happens. Modifying other VAOs requires a call to glBindVertexArray anyways so we generally don't unbind VAOs (nor VBOs) when it's not directly necessary. glBindVertexArray(0)
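For what it's worth, here is a minimal sketch of one way to do the update (assuming the VBO above, created with GL DYNAMIC DRAW, an existing GL context, and a hypothetical 50 pixel half size for the button): rebuild the four positions from the pixel coordinates and overwrite the buffer with glBufferSubData.

// convert the pixel position (origin at the top-left of a 480x260 window) to normalized device coordinates
float winW = 480.0f, winH = 260.0f;
float halfW = 2.0f * 50.0f / winW;              // hypothetical 50 px half-width, converted to NDC
float halfH = 2.0f * 50.0f / winH;              // hypothetical 50 px half-height, converted to NDC
float cx = 2.0f * xpos / winW - 1.0f;
float cy = 1.0f - 2.0f * ypos / winH;           // pixel y grows downward, NDC y grows upward
float moved[12] = {
    cx - halfW, cy - halfH, 0.0f,
    cx + halfW, cy - halfH, 0.0f,
    cx + halfW, cy + halfH, 0.0f,
    cx - halfW, cy + halfH, 0.0f,
};
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(moved), moved);   // overwrite the old positions in place
glBindBuffer(GL_ARRAY_BUFFER, 0);

The other common option is to leave the vertices alone and add a position offset uniform in the vertex shader, which avoids touching the buffer at all when the button moves.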
1 | Alpha Blending performance on iOS I've got a few questions without responses about my game development, can you help me? Here are the questions: In my game when a large object appears on the screen, the GPU goes to its limits and my framerate becomes very slow. How can I handle that? (I'm working on mobile). (Currently, I'm sorting objects from back to front) I've read some stuff about batching (How to draw efficiently large number of objects with alpha blending?), how can I handle batching when my game needs to draw my scene from back to front with alpha blending? What is faster: adding a condition to test pixel transparency in the fragment shader, or enabling alpha blending? (I've got colors around my sprites without alpha blending or a condition) Thanks!
1 | Possible to export f-curves in Blender? I work with some friends on a rail shooter game in OpenGL. For creating our world we are using Blender. Since we want our tank to follow a path, I have started to work with creating Bezier curves and exporting them. That works, but takes a lot of time. Is it possible to export the f-curves you use in the graph editor to an obj file or any other file that can be used to describe the position for a specific point in time? I would be grateful if someone has an idea that could be helpful.
1 | Texture coordinates projection I have some classic texture coordinates and, as the normal behaviour, they follow the mesh's transformations. I am trying to use the same texture coordinate behaviour but without being affected by the mesh's rotation transformation. The result would be a sort of texture coordinate projection. I don't know if I've explained this well, but how could I achieve such an effect? Thanks a lot.
1 | What are some good learning resources for OpenGL? I have been using OpenGL ES on the iPhone for a while now and basically I feel pretty lost outside of the small set of commands I've seen in examples and adopted as my own. I would love to use OpenGL on other platforms and have a good understanding of it. Every time I search for a book I find these HUGE bibles that seem uninteresting and very hard for a beginner. If I had a couple of weekends to spend on learning OpenGL, what would be the best way to spend my time (and money)?
1 | Rotation and translation like in GTA 1 OpenGL Okay, so I have a figure in the XZ plane. I want to move it forward/backward and rotate it around its own Y axis, then move forward again in the rotation's direction, like the character in GTA 1. Code so far Init spaceship position glm vec3(0,0,0) spaceship rotation glm vec3(0,0,0) spaceship scale glm vec3(1, 1, 1) Draw glm mat4 transform glm scale<float>(spaceship scale) glm rotate<float>(spaceship rotation.x, 1, 0, 0) glm rotate<float>(spaceship rotation.y, 0, 1, 0) glm rotate<float>(spaceship rotation.z, 0, 0, 1) glm translate<float>(spaceship position) drawMesh(spaceship, texture, transform) Update switch (key.keysym.sym) case SDLK UP spaceship position.z 0.1 break case SDLK DOWN spaceship position.z 0.1 break case SDLK LEFT spaceship rotation.y 1 break case SDLK RIGHT spaceship rotation.y 1 break So this only moves on the Z axis, but how can I move the object on both the Z and X axes in the direction the object is facing?
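For reference, a sketch of the usual trick (assuming spaceship rotation.y is the yaw in degrees, as the glm rotate calls above suggest): derive the forward direction from the yaw with sin/cos and add it to the position. The signs are an assumption and depend on which way the model faces at yaw 0, so they may need flipping.

#include <cmath>
float yawRad = spaceship_rotation.y * 3.14159265f / 180.0f;
float speed  = 0.1f;
switch (key.keysym.sym) {
case SDLK_UP:                                       // forward along the facing direction
    spaceship_position.x -= std::sin(yawRad) * speed;
    spaceship_position.z -= std::cos(yawRad) * speed;
    break;
case SDLK_DOWN:                                     // backward
    spaceship_position.x += std::sin(yawRad) * speed;
    spaceship_position.z += std::cos(yawRad) * speed;
    break;
}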
1 | Most efficient way of drawing a lot of cubes OpenGL ES I have to draw a lot of cubes in my OpenGL program for Android. All the cubes have the same size but different colors. I know that calling glDrawArrays is an expensive operation, so I should call it as little as possible. But as far as I know I have to call it 6 times (one per side), and since I have more than 500 cubes it's not efficient at all. Does anyone have any idea what to do? Btw, I am using OpenGL ES 1.0. I saw that I can use one big VBO but I don't know how to do that.
1 | How can I calculate the bounding box of a 3D model? I am making a game in OpenGL and Blender and trying to use JBullet as the collision manager. I am using BoxShapes for collisions, which requires a javax.vecmath.Vector3f representing half of the box to create. How do I calculate the dimensions (width, depth and height) of a Blender model in game? |
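One way to get those dimensions (a sketch, not JBullet specific; the vertex list is whatever your model loader produces) is to scan every vertex for the per-axis minimum and maximum; half of each extent is then the half-extents vector a box shape expects.

#include <vector>
#include <algorithm>
#include <limits>

struct Vec3 { float x, y, z; };

// Returns half of the bounding box size on each axis (width/2, height/2, depth/2).
Vec3 halfExtents(const std::vector<Vec3>& vertices)
{
    float inf = std::numeric_limits<float>::max();
    Vec3 mn{ inf,  inf,  inf};
    Vec3 mx{-inf, -inf, -inf};
    for (const Vec3& v : vertices) {
        mn.x = std::min(mn.x, v.x); mx.x = std::max(mx.x, v.x);
        mn.y = std::min(mn.y, v.y); mx.y = std::max(mx.y, v.y);
        mn.z = std::min(mn.z, v.z); mx.z = std::max(mx.z, v.z);
    }
    return { (mx.x - mn.x) * 0.5f, (mx.y - mn.y) * 0.5f, (mx.z - mn.z) * 0.5f };
}

Note that this is the model-space box; if the model is scaled in game, the half extents need the same scale applied.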
1 | Android OpenGL ES, getting number of fragments that passed the depth test Occlusion Query? After reading many blogs and searching the Internet, I did not find any solution for getting the number of fragments/pixels of the object which are occluded or not occluded on the OpenGL ES 3.0 Android version. GL SAMPLES PASSED is not present in OpenGL ES 3.0. Is there any way to get these occluded or non occluded fragments on Android OpenGL ES 3.0?
1 | Does swap buffer with vsync guarantee synchronization? I was wondering if I could assume that all buffer related GPU operations such as glDrawElements, glBufferData, glBufferSubData, glUnmapBuffer are guaranteed to be completed after the buffer swap is performed (i.e. the frame is finished), assuming vsync is on. I'm confused as I've come across implementations of vertex streaming techniques such as round robin VBOs which imply that a VBO could still be in use during the next frame. What I basically want to do is stream vertices through glMapBufferRange with GL UNSYNCHRONIZED BIT, managing the correct ranges myself so that writes and reads never overlap. This would work very well if I could just assume synchronization and reset the stream range index at the end of the frame. In other words, does swap buffer with vsync guarantee synchronization?
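For context, the explicit way to answer this per buffer range, rather than relying on the swap, is a fence sync object; a rough sketch (GL 3.2 or later, and the one second timeout is an arbitrary example):

// after submitting the last draw that reads from the streamed range this frame:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
// ...next frame, before overwriting that same range:
GLenum res = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull);  // wait up to 1 second
if (res == GL_TIMEOUT_EXPIRED || res == GL_WAIT_FAILED) {
    // the GPU may still be reading the range: wait longer, or write into a different region instead
}
glDeleteSync(fence);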
1 | Rotating a swinging sword in LWJGL gives wierd results I'm making a 2d top down game in Java using LWJGL(OpenGL, essentially). I recently tried adding a swinging sword mechanism. My coordinate system is set to the bottom left FOR ALL objects. The rotation worked, but it happens with respect to the bottom left corner of the Sword Quad. I want it to rotate with the center of the bottom line as the pivot for rotation. I tried a lot of solutions from the internet like translating the object to origin(0,0 of the screen), then rotating, then translate to the point of pivot. translating object normally, then rotating, then again translating to the point you want it to pivot at. But they all yield weird results. Here's the code The Sword Class public class Sword extends GameObject private static int speed 5, deg 0 private Texture tex public Sword(float x, float y, ObjectId id) super(x, y, id) tex ImageLoader.loadTexture(null, null) protected void update(ArrayList lt GameObject gt objects) if(deg lt 360) deg 1 else deg 0 protected void render() Draw.rect(x, y, 10, 32, deg, 1, 0, 0) public Rectangle getBounds() return null The Draw.rect method public static void rect( float x, float y, float width, float height, float deg, float r, float g, float b) glDisable(GL TEXTURE 2D) glPushMatrix() glTranslatef(0, 0, 0) glRotatef(deg, 0, 0, 1) glTranslatef(x (width 2), y, 0) glColor3f(r, g, b) glBegin(GL QUADS) Specifies to the program where the drawing code begins. just to keep stuff neat. GL QUADS specifies the type of shape you're going to be drawing. glVertex2f(0, 0) Specify the vertices. 0, 0 is on BOTTOM LEFT CORNER OF SCREEN. glVertex2f(0, height) 2f specifies the number of args we're taking(2) and the type (float) glVertex2f(width, height) glVertex2f(width, 0) glEnd() glPopMatrix() glEnable(GL TEXTURE 2D) EDIT Here is an image of what "weird result" is. In the pic, I'm translating first to origin (0, 0) of the WORLD , then rotating(by 90 degrees), then translating to translating to x (width 2) , y . As you can see, the red block that represents the sword is out of the "arena", even though I've set the x, and y of the sword to near the player. The image |
1 | Does it make sense to apply perspective projection in a 2D game? Does it make sense to apply perspective projection in a 2D game? What will be the cost of doing so, and does it outweigh the benefits gained from the 3rd dimension? Perspective projection has features that could be used in 2D games. Two very beneficial characteristics that come to my mind are Using perspective projection as a substitute for parallax scrolling. Using perspective projection to perform depth ordering for 2D sprites On the other hand, perspective projection resizes objects with distance, including 2D sprites. That is why mip mapping has been developed. Mip mapping, however, does not always produce perfect results, as it relies on interpolation between two neighboring mip levels. This might not be sufficient for a 2D game, where developers require pixel based precision and detail.
1 | Bullet physics with OpenGL I have got some problem implementing bullet physics into my opengl game. The thing is that it doesn't want to update my translatef value continously but only at the end. The code for bullet looks like this void CGL initPhysics( void ) broadphase new btDbvtBroadphase() collisionConfiguration new btDefaultCollisionConfiguration() dispatcher new btCollisionDispatcher(collisionConfiguration) solver new btSequentialImpulseConstraintSolver dynamicsWorld new btDiscreteDynamicsWorld(dispatcher,broadphase,solver,collisionConfiguration) dynamicsWorld gt setGravity(btVector3(0, 10,0)) ballShape new btSphereShape(1) pinShape new btCylinderShape(btVector3(1,1,1)) pinShape gt setMargin(0.04) fallMotionState new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1),btVector3(0,10,0))) btScalar mass 1 btVector3 fallInertia(0,0,0) ballShape gt calculateLocalInertia(mass,fallInertia) btCollisionShape groundShape new btStaticPlaneShape(btVector3(0,1,0),1) btDefaultMotionState groundMotionState new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1),btVector3(0, 1,0))) btRigidBody btRigidBodyConstructionInfo groundRigidBodyCI(0,groundMotionState,groundShape,btVector3(0,0,0)) btRigidBody groundRigidBody new btRigidBody(groundRigidBodyCI) dynamicsWorld gt addRigidBody(groundRigidBody) btRigidBody btRigidBodyConstructionInfo fallRigidBodyCI(mass,fallMotionState,ballShape,fallInertia) btRigidBody fallRigidBody new btRigidBody(fallRigidBodyCI) dynamicsWorld gt addRigidBody(fallRigidBody) for (int i 0 i lt 300 i ) dynamicsWorld gt stepSimulation(1 60.f,10) btTransform trans fallRigidBody gt getMotionState() gt getWorldTransform(trans) fallY trans.getOrigin().getY() state list.remove( STATE FALL BALL ) printf("stoped n") And the drawing function which is called at the beginning looks like this void CGL fallingBall( void ) glPushMatrix() float colBall2 4 0.0f, 0.0f, 1.0f, 1.0f glMaterialfv( GL FRONT, GL AMBIENT, colBall2) glTranslatef(0.0f,fallY,0.0f) printf("fallY f n",fallY) glutSolidSphere(1.0f,20,20) glPopMatrix() The thing is that it shows correct value in this function's printf but translation is called only at the beginning I mean I can only see the last state. Can anyone help me, please? |
1 | Fire simulation using Java and OpenGL I'm new to working with OpenGL. I'm trying to create a simple program that will simulate fire. My question is: what are the ways, other than particle effects, to simulate fire? And can fire simulation really be done without a particle system effect?
1 | Large vertex buffer vs multiple draw calls I'm just getting started with OpenGL, and I'm attempting to use it to create a 2D game. In this game, I have a hexagonal grid made up of a very large variety of differently colored hexagons. As a newbie OpenGL programmer, I see two ways of drawing this grid Using a vertex buffer with the data for a single hexagon, then using a uniform offset value and iterating on the CPU to draw the same program many times until I have a grid. Creating a singular very large pre calculated vertex buffer that draws all the hexagons in a single call. What's the most efficient method? Is there a better way of doing this? |
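As a rough illustration of the second option (a sketch only; the interleaved x, y, r, g, b layout and the Hex struct are assumptions, not an established API), the grid can be pre-baked into one vertex array on the CPU and then drawn with a single glDrawArrays call:

#include <vector>
#include <cmath>

struct Hex { float cx, cy, r, g, b; };   // centre position plus per-hexagon colour

std::vector<float> buildGridVertices(const std::vector<Hex>& hexes, float radius)
{
    std::vector<float> data;
    for (const Hex& h : hexes) {
        for (int i = 0; i < 6; ++i) {                    // 6 triangles per hexagon (a fan, unrolled)
            float a0 = 3.14159265f / 3.0f * i;
            float a1 = 3.14159265f / 3.0f * (i + 1);
            float tri[3][2] = {
                { h.cx, h.cy },
                { h.cx + radius * std::cos(a0), h.cy + radius * std::sin(a0) },
                { h.cx + radius * std::cos(a1), h.cy + radius * std::sin(a1) },
            };
            for (auto& p : tri)
                data.insert(data.end(), { p[0], p[1], h.r, h.g, h.b });
        }
    }
    return data;   // upload once with glBufferData, then one glDrawArrays(GL_TRIANGLES, 0, vertexCount)
}

When the colour lives in the vertex data like this, the whole grid stays in one draw call; only when a hexagon's colour changes does its slice of the buffer need a glBufferSubData update.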
1 | How to use the traditional pixel coordinate system in OpenGL? (with C SDL2) I find the normalized, centered coordinate system used in OpenGL weird and annoying. Is there anything I can do at all to make it work like normal pixel coordinates, as on everything ever except OpenGL? And yes, of course I Googled it. EDIT I'm using OpenGL 2.1
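For reference, the usual workaround is to leave clip space alone and feed vertices through an orthographic projection expressed in pixels; a sketch with GLM (windowWidth and windowHeight are placeholders), which applies equally to OpenGL 2.1 where the same matrix can be loaded onto the fixed-function projection stack:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// (0,0) at the top-left, x to the right, y downward, both measured in pixels.
glm::mat4 projection = glm::ortho(0.0f, float(windowWidth),
                                  float(windowHeight), 0.0f,
                                  -1.0f, 1.0f);
// multiply vertex positions by this in the shader, or in GL 2.1:
// glMatrixMode(GL_PROJECTION); glLoadMatrixf(&projection[0][0]);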
1 | World texturing techniques in an FPS game Texturing for small objects (pickups and enemies) can easily be done by UV unwrapping the model and using a texture of reasonable size to make the model look good. But how can texturing be done for the world? Covering a large building with one texture would require a huge texture (otherwise it will look blurry), and a lot of it will be repeated (brickwall textures, wallpapers ...), so that approach seems inefficient. Assigning different textures to different faces may be slow, since there cannot be a texture switch within the draw call. I am targeting OpenGL 4 or later. I prefer good looking stuff rather than an extremely high framerate; I aim for 30 fps, but perhaps with motion blur (which I guess requires four times that). About tiling: tiling would work, but then there are some parts of the mesh which require some other texture. And then I need to switch textures.
1 | Light Attenuation Formula Derivation I understand that when sampling the brightness of a given point on the surface, a certain cutoff needs to be taken into consideration. In other words, when the light is further away, the intensity decreases. I came across the following formula that is used to compute the aforementioned attenuation: 1.0 / (1.0 + a * dist + b * dist * dist) However, I have not been able to find out why and how this formula is derived. From the formula what I can understand is that as the distance gets larger, the result will become smaller, but how it is derived is beyond me. If distance plays an important role in the above formula, which helps determine the intensity, could we simply ignore a and b from the formula and simply use attenuation = 1.0 / (1.0 + dist)? Does anyone know how the formula is derived and the importance of the values a and b within the formula? Thanks.
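For what it's worth, the formula is usually presented as a tunable wrapper around the inverse square law rather than something derived from first principles; a short LaTeX summary (my own notes, not from the question's source):

% An ideal point light falls off with the inverse square of distance: I(d) \propto 1/d^2.
% The leading 1 keeps the value finite (and equal to 1) at d = 0, while a and b blend in
% linear and quadratic falloff:
\[
  \mathrm{att}(d) = \frac{1}{1 + a\,d + b\,d^{2}},
  \qquad \mathrm{att}(0) = 1,
  \qquad \lim_{d \to \infty} \mathrm{att}(d) = 0 .
\]
% With b > 0 the far-field behaviour approaches the physical 1/d^2; dropping b, as in
% attenuation = 1/(1 + dist), still fades with distance but only linearly in the
% denominator, so it looks softer and is not physically motivated.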
1 | Batching and Z order with Alpha blending in a 3D world I'm working on a game in a 3D world with 2D sprites only (like Don't Starve game). (OpenGL ES2 with C ) Currently, I'm ordering elements back to front before drawing them without batch (so 1 element 1 drawcall). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment Order all elements of my scene back to front. Send order list of elements to the Renderer. Renderer look in his batch manager if a batch exist for the given element with his Material. Batch didn't exist create a new one. Batch exist for element with this Material Add sprite to the batch. Compute big mesh with all sprite for each batch (1 material type 1 batch). When all batches are ok, the batch manager compute draw commands for the renderer. Renderer process draw commands (bind shader, bind textures, bind buffers, draw element) Image with my problem here But I've got some problems because objects can be behind another objects inside another batch. How can I do something like that? |
1 | Can I leverage the fact that my scene is often static to improve OpenGL (JOGL) performance? My scene is drawn based on the location of several (often several million) vertices (kept in VBO's) and a camera. I can easily tell in my code when my scene has changed and when it hasn't. There are also some odd cases such as the window being resized, but I believe I can easily enumerate and handle those as well. Can I (in user code or through some OpenGL property) leverage this to increase the performance when the scene is static? Clearly when the scene is changing, all of the math must be done to properly calculate what should be rendered. But when the scene is static, that picture isn't changing each frame. I've tried implementing something in my code to do this, but the result is a flickering scene (and I'm not entirely sure why). Basically I check to see if anything has changed and if it hasn't I simply return from the display(GLAutoDrawable drawable) function that is invoked by the JOGL FPSAnimator. I feel like this is probably a common problem that should have a standard solution. However, I haven't been able to find anything so far. |
1 | VBO glGenBuffers() IllegalStateException I'm kind of a noob in OpenGL, but I have this problem: I get an IllegalStateException at this code int vboVertex glGenBuffers() In detail, the exception: Exception in thread "main" java.lang.IllegalStateException Function is not supported at org.lwjgl.BufferChecks.checkFunctionAddress(BufferChecks.java 58) at org.lwjgl.opengl.GL15.glGenBuffers(GL15.java 116) at Mesh.initVbo(Mesh.java 49) OpenGL version 2.0.5819 WinXP Release Thanks for the support and sorry for the horrible English.
1 | Cross platform GLFW build with CMake I know this is probably a popular question and can be very vague, so I'm going to be as specific as possible. I don't really know how to even form this question to be able to search Google, so I came here. I've been following the tutorials from learnopengl.com and they have been by far the best explanations I've found on modern OpenGL anywhere. After I'm done with the tutorials, I'll be making a game. I have already started working on the game engine and it works just fine on Ubuntu and Mac OS X. Building the project is done with CMake (I've found this to be the easiest). The project uses GLFW and Glad as used in the tutorial. I can post the GitHub repo if you want. Today I started trying to figure out how to get the project to run on Windows, but don't even know where to start. Hence coming here. I plan on using Visual Studio but haven't used it since college. How do I even start making the project? I've tried building GLFW on the machine and the building the project using CMake, but I get build tool errors that I don't know how to troubleshoot. Where do I begin? |
1 | Is glxinfo saying that the 980 GTX doesn't support a 32 bit depth buffer? I've been using the glxinfo command (glxinfo v) to explore the supported framebuffer configurations. There are two values relating to depth, "depth" and "depthsize." According the source, it appears that the "depth" value relates to the X config and the "depthsize" value relates to the OpenGL config. Assuming that is correct, would the lack of a "depthsize 32" entry suggest that 32 bit depth buffers aren't supported? Or is my understanding of the glxinfo output flawed? |
1 | OpenGL What steps to take to correctly set up a Uniform Block Array I have managed to get uniform blocks to work, but I seem to be doing something wrong when trying to set up an array of uniform blocks. Assume this glsl layout(std140, binding 1) uniform LightingBlock vec4 ambient vec4 diffuse vec4 specular vec3 factors float shininess lighting[3] What's the exact procedure to bind this? What I am doing (and what works for a single block, not an array) GenBuffer... BindBuffer... BindBufferRange(UNIFORM BUFFER, 1, buffer id, 0, size in bytes) index GetProgramResourceIndex(program id, UNIFORM BLOCK, "LightingBlock") UniformBlockBinding(program id, index, 1) I read that I should replace LightingBlock with lighting[0], lighting[1] etc., but this only returns invalid indices. So my current attempt looks like this GenBuffer... BindBuffer... binding 1 for(int i 0 i < 3 i) BindBufferRange(UNIFORM BUFFER, binding i, buffer id, 0, size in bytes) index GetProgramResourceIndex(program id, UNIFORM BLOCK, "lighting[" str(i) "]") UniformBlockBinding(program id, index, binding i) What am I doing wrong? How do I do this correctly?
1 | Calculate normal angle in screen space I'm working on adding billboarded sprites to a game engine, but the engine allows for walking on curved terrain, like spheres. In order to look like the player is walking on the terrain, I want to start with the billboard, then rotate it around its origin (bottom centre) so that its up vector matches the screenspace normal of the terrain. Here's a diagram of the general idea The billboarding works fine passed the camera's rotation matrix to the sprite, and the poly quad's 0,0,0 is at the bottom centre of the canvas. How can I get the angle between the normal and screenspace up? I think it's a matter of projecting the normal with the MVP matrix, to get it as a 2D screenspace vector to calculate the angle from. But I'm not having much luck attempting to implement it. The sprites take their positioning from the centre of the tiles, along with the normal. I'd also like to move the sprites down in the rotated axis, so that they cover the tile they're standing on. Any other methods are welcome too. |
1 | Long loading times on large PNG file using C , OpenGL SOIL I'm currently optimising a C OpenGL game, and I've noticed something slightly odd about texture loading. I've got a number of PNG files I load as textures and use as spritesheets these are generally 512px or 1024px, with one 2048px. I use SOIL (www.lonesock.net soil.html) to load the PNGs then bind them to a custom OpenGL texture class. I've put some primitive code in to measure how long it takes to load these PNG files as textures measured in ticks. Generally speaking the load times are 512px PNG 100 150 ticks 1024px PNG 400 500 ticks 2048px PNG 1800 2000 ticks Now I recognise that not all image files are created equally, but these don't quite seem to add up with what I understand in terms of texture files. There's not much to be gained by using fewer larger textures compared to more smaller textures, and often there's no gain for that 2048px PNG compared to four 1024px. My question is, are these (relative) load times normal? Is there something I'm missing when it comes to loading textures with SOIL? Note that my question isn't 'how can I generically speed up image loading times' (I may at a later date convert these to more OpenGL friendly file formats like DDS), but more is there a pattern I should shouldn't be using to optimise this? Thanks Nathan |
1 | Initializing a blank texture in OpenGL without artifacts I'm generating a texture atlas in OpenGL, where I want to create a blank texture and copy my sprites to it. The texture is generated like this glGenTextures(1, amp texture) glBindTexture(GL TEXTURE 2D, texture) glPixelStorei(GL UNPACK ALIGNMENT, 1) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexEnvf(GL TEXTURE ENV, GL TEXTURE ENV MODE, GL MODULATE) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA, width, height, 0, GL RGBA, GL UNSIGNED BYTE, NULL) While the sprites are copied like this GLenum format if (sprite.hasAlpha) format GL RGBA else format GL RGB glTexSubImage2D( GL TEXTURE 2D, 0, sprite.x, sprite.y, sprite.width, sprite.height, format, GL UNSIGNED BYTE, sprite.data) The issue is I keep getting artifacts in the texture not covered by a sprite Here I'm supposed to only have two slugs, one sword in the stone, one light grey tile, and one dark grey tile. I tried creating a width height sizeof(GLuint) GLuint array filled with zeros and passing it to the glTexImage2D call, but that didn't seem to do anything. |
1 | Which consoles may I target with OpenGL? I'm thinking about the technical design for a game/game engine using OpenGL, and I wonder if there are any recent consoles (Xbox 360, PS3, Wii U, Xbox One and PS4) that I could work with if I do so. I found plenty of conflicting answers through forums. The only straight answer seems to be that it is impossible for Xboxes, because Microsoft forbids using anything other than DirectX, its own product. So, finally, does anyone know for sure which consoles can support an OpenGL based game?
1 | Create a crosshair openGL How do I draw a white crosshair in the middle of the screen in OpenGL? It's all well and good knowing how to render objects in 3d space, but I have literally no idea how to draw something that sticks on the screen no matter what. Would this require a shader? One that does not take into account the model view projection matrix? At what point would I draw the cross? After everything, to coincide with the painter's algorithm? Or do I give it a z value?
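One common approach (a sketch only; crosshairShader, crosshairVAO, screenW and screenH are placeholder names, and the VAO is assumed to hold four line vertices crossing at the screen centre) is to draw it last, after the scene, with depth testing disabled and a plain pixel space orthographic projection instead of the scene's model view projection matrix:

glDisable(GL_DEPTH_TEST);                               // ignore scene depth so the crosshair is always on top
glm::mat4 ortho = glm::ortho(0.0f, float(screenW), float(screenH), 0.0f);
glUseProgram(crosshairShader);
glUniformMatrix4fv(glGetUniformLocation(crosshairShader, "projection"), 1, GL_FALSE, &ortho[0][0]);
glBindVertexArray(crosshairVAO);
glDrawArrays(GL_LINES, 0, 4);                           // two short white lines
glEnable(GL_DEPTH_TEST);

Since it is drawn after everything else with the depth test off, no z value or painter-style sorting is needed for it.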
1 | Deferred shadow mapping Question What am i doing wrong in the CalcShadowFactor method? It looks like the depth check is not working correctly. Body I'm using deferred rendering in my engine and i have generated the following shadow map Which is draw by the following fragment void main() gl FragColor vec4(texture(gColorMap, TexCoord0).x) The position texture is generated like this WorldPos0 (gWorld vec4(Position, 1.0)).xyz Where gWorld is the transformation of the object being draw(translation rotation scale) Finally, in the light pass, i want to generate the shadow produced from this spotlight with the following code float CalcShadowFactor(vec3 WorldPos) vec4 ShadowCoord gLightWVP vec4(WorldPos,1) ShadowCoord ShadowCoord.w vec2 UVCoords UVCoords.x 0.5 ShadowCoord.x 0.5 UVCoords.y 0.5 ShadowCoord.y 0.5 float z 0.5 ShadowCoord.z 0.5 float Depth texture(gShadowMap, UVCoords).x if (Depth lt z 0.00001) return 0 else return 1.0 It returns a float which multiplicates the color output. I know, im just returning 0 to set the color to complete transparent, it is intensional. The variables used were gLightWVP It is the ProjectionView of the camare set at the light Projection LightCameraRotation LightCameraTranslation WorldPos It is the position got from the Position Texture vec3 WorldPos texture2D( gPositionMap, TextCoord0 ).xyz gShadowMap It is the shadow map texture sampler. This is what is displayed(notice how the spotlight looks at the first image) The engine passes are Shadow pass Render the scene for every light using a alternative camera which is placed in their position with their orientation. It generates the shadow maps for every light. Currently i just have one shadow map for that spot light. Render pass It renders all the scene using the main camera. It generates the position map, normal map, diffuse map and a specular map. Light pass It uses all the textures generated to place the illumination. Here is where the shadow map is being checked for generate the shadows. EDIT I'll also leave here the shadow map texture definition. glGenTextures(1, amp m shadowMap) glBindTexture(GL TEXTURE 2D, m shadowMap) glTexImage2D(GL TEXTURE 2D, 0, GL DEPTH COMPONENT32, WindowWidth, WindowHeight, 0, GL DEPTH COMPONENT, GL FLOAT, NULL) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE COMPARE MODE, GL NONE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glBindFramebuffer(GL FRAMEBUFFER, m fbo) glFramebufferTexture2D(GL FRAMEBUFFER, GL DEPTH ATTACHMENT, GL TEXTURE 2D, m shadowMap, 0) |
1 | What resources are available for Mac OS game development? Are there are any modern resources on how to develop games for Mac OS? I suppose this would include objective c, cocoa and opengl 3 . The book Beginning Mac OS X Game Development with Cocoa looks very promising but isn't out until early next year. |
1 | Cheap ways to do scaling ops in shader? I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete grid based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage: Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU side code. Do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non PoT) divisions tend to be slow, however. Use (some power of two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read values, but possibly quite a bit faster than div by a non PoT value. Use a texture as a lookup table to convert the value from decimetres to metres. I've heard that this is really fast. Or whatever solutions others can offer to achieve the same result: minimal vertex data with sensible scaling.
1 | Draw coloured transparent polygons on top of texture in modern opengl I am trying to render an image in the viewPort using symmetry create 1 and binding texture to it. static const GLfloat vertices vertexdata symmetry create, symmetry create, 0.0f, uv 0.0f, 1.0f, symmetry create, symmetry create, 0.0f, uv 1.0f, 1.0f, symmetry create, symmetry create, 0.0f, uv 1.0f, 0.0f, symmetry create, symmetry create, 0.0f, uv 0.0f, 0.0f My vertex shader looks like version 330 core layout (location 0) in vec3 aPos layout (location 1) in vec2 texCoord out vec2 TexCoord void main() gl Position vec4(aPos.x, aPos.y, aPos.z, 1.0) TexCoord texCoord And fragment shader looks like version 330 core in vec2 TexCoord out vec4 frag color uniform sampler2D myTexture void main() frag color texture(myTexture, TexCoord) I have setup my vao, vbo and indices correctly and thus I am getting the result as expected (ie A provided image on the display screen). Now I would like to draw 'n' number of transparent polygons on the image generated.For each polygon, I have thought of achieving using mouse cursor positions and clicks and render using glDrawArrays(GL TRIANGLE FAN, 0, n) . If I use the same vertex and fragment shaders my result now is black opaque polygons on top of the image. How to achieve transparency and color in the polygons. Do I need to another set of shaders(vertex and frag)? |
1 | OpenGL Water reflection seems to follow camera yaw and pitch I'm attempting to add reflective water to my procedural terrain. I've got it to a point which seems like it's reflecting however when I move the camera left right up down the reflections move with it. I believe the problem has something to do with the way I convert from world space to clip space for the projective texture mapping. Here is a gif of what is happening. http i.imgur.com PDta5Qu.gifv Vertex Shader version 400 in vec4 vPosition out vec4 clipSpace uniform mat4 model uniform mat4 view uniform mat4 projection void main () clipSpace projection view model vec4(vPosition.x, 0.0, vPosition.z, 1.0) gl Position clipSpace Fragment Shader version 400 in vec4 clipSpace out vec4 frag colour uniform sampler2D reflectionTexture void main () vec2 ndc (clipSpace.xy clipSpace.z) 2.0 0.5 vec2 reflectTexCoords vec2(ndc.x, ndc.y) vec4 reflectColour texture(reflectionTexture, reflectTexCoords) frag colour reflectColour I'm using this code to move the camera under the water's surface to get the reflection float distance 2 (m camera gt GetPosition().y m water gt GetHeight()) m camera gt m cameraPosition.y distance m camera gt m cameraPitch m camera gt m cameraPitch If this is insufficient code to diagnose the problem, I'll post more. I tried to keep it to what I thought could be the problem. |
1 | GLSL C Only first sampler array index is accessible I have the following shader, which whose fragment shader contains a sampler array of 16 elements. Fragment version 330 core in vec2 uv flat in int instanceID out vec4 color uniform sampler2D sampler 16 void main() color texture(sampler instanceID , uv) Now when I try to set the second array element using the following method void Shader setUniform(const std string amp uniform, GLuint values, size t amount) const bindProgram() GLuint loc getUniformLocation(uniform " 0 ") for(int i 0 i lt amount i ) glUniform1ui(loc i, values i ) std cout lt lt values i lt lt " " For debug std cout lt lt std endl For debug It just sets the first element of the shader. Even though I'm passing 2 values. This is the output of the cout's 5 4 5 4 5 4 5 4 ... Looking up the shader variables with GDebugger, only the first element is set. It also only draws the first (index 0) image. |
1 | Generating circular bounding volume AABB for bone animated object I have the animated bounding boxes for the individual bones of an object which comes from a pre computation of the bone's bounding boxes multiplied with the bone matrix. In this question I'm referring to the surrounding aka overall BB for the model. I get it by looping through the precomputed bounding boxes of the bones. I'm able to get an OBB but not an AABB or circular volume because I can't get into the correct "space" for the center of the volume. I thought that getting an AABB would be very easy, but since I have the min max points in bone space (?) I need to do some operation to disable rotation. All I can get is OBB ( I figure that the aabbV4 vector is in "bone space". So, I have to take the inverse of that space somehow, but since I have the combined value of each bone, I need the AABB circular volumes to follow tutorials on collision detection and not just OBB that I have now. So, each object has multiple bones which have a bounding box which animate each frame according to a bone matrix. Then, the mvpGet() function creates the MVP matrix which moves rotates scales the whole thing. I render the OBB surrounding volume of the bones like this for (auto amp i myAbj.allObj) if (i gt bb gt val b amp amp i gt anim gt val b) i gt aabbV4.clear() for (auto amp j i gt bbSkelAll) for (auto amp k i gt aiGbones) if (k.name j.name) i gt obbMVP glm transpose(k.animatedXform) j.obbMVP precomputed obbMVP AABB STEP 1 GATHER STORE glm vec4 bbSkelXformMin glm transpose(k.animatedXform) glm vec4(j.min, 1.f) glm vec4 bbSkelXformMax glm transpose(k.animatedXform) glm vec4(j.max, 1.f) i gt aabbV4.push back(bbSkelXformMin) i gt aabbV4.push back(bbSkelXformMax) break i gt mvpGet() i gt render() AABB STEP 2 MIN MAX for (auto amp i myAbj.allObj) if (i gt bb gt val b amp amp i gt anim gt val b) glm vec4 aabbMin (i gt aabbV4.empty()) ? glm vec4(0.f) i gt aabbV4 0 glm vec4 aabbMax (i gt aabbV4.empty()) ? glm vec4(0.f) i gt aabbV4 0 for (uint j 0 j lt i gt aabbV4.size() j) aabbMin glm min(i gt aabbV4 j , aabbMin) aabbMax glm max(i gt aabbV4 j , aabbMax) glm vec3 aabbSize aabbMax aabbMin glm vec3 aabbCenter .5f (aabbMin aabbMax) i gt aabbMVP glm translate(glm mat4(), aabbCenter) glm scale(glm mat4(), aabbSize) i gt aabbTgl 1 i gt mvpGet() i gt render() i gt aabbTgl 0 cout lt lt "aabbCenter " lt lt glm to string(aabbCenter) lt lt endl |
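As a point of comparison, a sketch of how a world space AABB is usually grown from one transformed box (GLM types for brevity; xform stands for whatever matrix the engine applies to that bone): all eight corners of the local box have to be transformed, because after a rotation the original min/max pair alone no longer spans the box.

#include <glm/glm.hpp>

void growAabb(const glm::mat4& xform, glm::vec3 localMin, glm::vec3 localMax,
              glm::vec3& outMin, glm::vec3& outMax)
{
    for (int i = 0; i < 8; ++i) {
        // pick min or max per axis to enumerate the 8 corners
        glm::vec3 corner((i & 1) ? localMax.x : localMin.x,
                         (i & 2) ? localMax.y : localMin.y,
                         (i & 4) ? localMax.z : localMin.z);
        glm::vec3 world = glm::vec3(xform * glm::vec4(corner, 1.0f));
        outMin = glm::min(outMin, world);
        outMax = glm::max(outMax, world);
    }
}

A bounding sphere then falls out of the result: the centre is the midpoint of outMin and outMax, and the radius is half the length of (outMax - outMin).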
1 | Variable number of light sources I'm going to be writing a very simple renderer, mainly for learning purposes (using OpenGL). I've been wondering how to implement support for a dynamic number of light sources. Two solutions that come to my mind are Define an upper limit for each kind of light source (point, directional etc.) and pass an array of lights in a uniform. Write a shader for each of the light source types, iterate over each light in the scene and invoke its appropriate shader, giving it as input the gBuffer with positions, normals, etc. This seems inefficient to me, as we will loop n times through the fragments in the framebuffer, where n is the number of lights. Are there any more solutions?
1 | Technique suggestion to render corroded pipes in 3D We currently have a lot of data relating to cracks/corrosion/deformations in metal pipes which, at present, can be viewed in a crude flat 2D application which makes the metal anomalies hard to spot. What I would like to create is a virtual 3D pipe viewer that displays the pipe and then models all of the dents in it. I have limited knowledge of graphics programming from university; however, what I was thinking of doing is creating a height map from the data given and then using tessellation to push/pull the vertices of the pipe's mesh, similar to terrain generation techniques. My Question(s) Can a tessellated square plane be wrapped around to form a pipe (and would this approach lose accuracy)? (I've only ever manipulated vertices in the y direction on a flat plane). Some of the dents are extremely small (1 to 5 mm across/deep), would my approach be accurate enough to model these anomalies? Is this tessellated terrain generation style the correct way to proceed, or is there a simpler solution? Any advice is welcome
1 | cylindrical coordinate point in origin I have a camera which has the following attributes pos (position of the camera in the scene) look (either the direction in which the camera will face, or a target vector) up vector (y axis) I am using a cylindrical coordinate system for moving the camera around the centre of the scene. My question is: how can I find the look vector of the camera in such a way that it will point at the origin of the scene at 45 degrees? According to this thread there is no such way. And you can never look directly down at the focal point, no matter how high up you go. P.S. I need to somehow get the look vector if it's possible. Thank you for your attention
1 | Unable to Render to multiple rendertargets I am trying to implement a simple deferred rendering engine in openGL and i have a little problem with the GBuffer. I cant get it to render to more than one texture at a single time, which is the one attached to GL COLOR ATTACHMENT0. GBuffer initialization code glGenFramebuffers(1, amp m FBO) glBindFramebuffer(GL FRAMEBUFFER,m FBO) create Gbuffers glEnable(GL TEXTURE 2D) create diffusemap glGenTextures(1, amp m Textures GBUFFER DIFFUSE 24 ) glBindTexture(GL TEXTURE 2D,m Textures GBUFFER DIFFUSE 24 ) glTexImage2D(GL TEXTURE 2D,0,GL RGB,m Width,m Height,0,GL RGB,GL UNSIGNED BYTE,0) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glFramebufferTexture2D(GL FRAMEBUFFER,GL COLOR ATTACHMENT0,GL TEXTURE 2D,m Textures GBUFFER DIFFUSE 24 ,0) if(!glIsTexture(m Textures GBUFFER DIFFUSE 24 )) return false create normalmap glGenTextures(1, amp m Textures GBUFFER NORMAL24 MATID8 ) glBindTexture(GL TEXTURE 2D,m Textures GBUFFER NORMAL24 MATID8 ) glTexImage2D(GL TEXTURE 2D,0,GL RGB32F,m Width,m Height,0,GL RGB,GL FLOAT,NULL) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glFramebufferTexture2D(GL FRAMEBUFFER,GL COLOR ATTACHMENT1,GL TEXTURE 2D,m Textures GBUFFER NORMAL24 MATID8 ,0) if(!glIsTexture(m Textures GBUFFER NORMAL24 MATID8 )) return false create depthmap glGenTextures(1, amp m Textures GBUFFER DEPTH32 ) glBindTexture(GL TEXTURE 2D,m Textures GBUFFER DEPTH32 ) glTexImage2D(GL TEXTURE 2D,0,GL DEPTH24 STENCIL8,m Width,m Height,0,GL DEPTH STENCIL,GL UNSIGNED INT 24 8,0) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glFramebufferTexture2D(GL FRAMEBUFFER,GL DEPTH STENCIL ATTACHMENT,GL TEXTURE 2D,m Textures GBUFFER DEPTH32 ,0) if(!glIsTexture(m Textures GBUFFER DEPTH32 )) return false GLenum drawBuffers GL COLOR ATTACHMENT0, GL COLOR ATTACHMENT1 glDrawBuffers(2,drawBuffers) GLenum status glCheckFramebufferStatus(GL FRAMEBUFFER) if(status ! 
GL FRAMEBUFFER COMPLETE) printf("FBO error!!, status 0x x n",status) return false glBindFramebuffer(GL FRAMEBUFFER,0) And then before i render all geometry i do this glClearColor(0.2f,0.6f,0.3f,0) glBindFramebuffer(GL DRAW FRAMEBUFFER,m FBO) GLenum drawBuffers GL COLOR ATTACHMENT0, GL COLOR ATTACHMENT1 glDrawBuffers(2,drawBuffers) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT GL STENCIL BUFFER BIT) And then after i rendered all of it i blit it to the normal framebuffer RenderGeometry() glBindFramebuffer(GL FRAMEBUFFER,0) glClearColor(0.0f,0.0f,0.0f,0) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT GL STENCIL BUFFER BIT) glBindFramebuffer(GL READ FRAMEBUFFER,m GBuffer gt GetHandle()) glReadBuffer(GL COLOR ATTACHMENT0) glBlitFramebuffer(0,0,1280,720, 0,0,640,370,GL COLOR BUFFER BIT,GL LINEAR) glReadBuffer(GL COLOR ATTACHMENT1) glBlitFramebuffer(0,0,1280,720, 640,370,640,370,GL COLOR BUFFER BIT,GL LINEAR) glfwSwapBuffers(m Window) And my fragment shader code is version 430 in vec2 Tex in vec3 Normal uniform sampler2D tex1 uniform bool UseTexture layout(location 0) out vec3 FragmentColor layout(location 1) out vec3 FragmentNormal void main() FragmentNormal.xyz normalize(Normal) if(UseTexture) FragmentColor texture(tex1,Tex).rgb diffuse color else FragmentColor vec3(1,1,1) And the result is It should be two rectangles drawn to the screen one containing the diffuse color and one containing the screen normals. However i am only getting one. |
1 | How can I efficiently render a tile based map with many Z levels, where the levels act like hollowed out voxels? My map setup is a little different to what you normally have with tilemaps. The map itself is a rectangular prism, with width, height, and depth being variable. In my case, width is the x axis horizontally across the screen, height is the y axis vertically across the screen and depth is the z level, or how "deep" the map goes into/out of the screen. In my map you could consider the rectangular prism as "solid". Many/most of these voxels could be considered as occluders to the voxels below them. Some voxels are hollowed out to make rooms and corridors. The map is viewed at a specific z level, from 0, the "bottom" of the rectangular prism, to n, the top. A "hollowed out" voxel, if viewed from its level, would result in me rendering a floor texture in that tile. If there were 4 voxels above it hollowed out, then that floor texture would be rendered when the map is viewed from each of these z levels, but each z level above the "floor" would have a semi transparent black rect rendered on top of the tile (so it gets successively darker the further "up" you are). This occurs until you're on a z level above the tallest voxel that is hollowed out to make the room, at which point you can't see it any longer, because this non hollowed out voxel blocks your view to the room below. The problem I have is that there are many z levels to render and most of the hollowed out sections criss cross around in the different layers to make a very complex network of little rooms and tunnels. This results in me naively rendering a floor texture, followed by several layers of semi transparent black rects, followed by a fully opaque black rect, and potentially repeated for any additional overlapping rooms, for every x,y tile on the screen. I'm trying to wrap my head around a way to render this more efficiently but I'm having trouble. Currently, for every render loop, I'm attempting to step through the z layers, from the current viewed z level down to 0, and finding the z level that is occluded first (or all the way to the 0 level), so that when rendering, I can skip z layers for that tile that are below this value. This isn't really giving me much of a performance increase. I'm looking for ideas on how to better perform this series of draw calls. I'm using OpenGL 3.3 directly, so I'm not restricted by any library or engine constraints. Currently, I'm rendering each "tile" on each layer separately and would prefer to keep it that way for simplicity.
1 | What does glBlendFunc do when given an incorrect enumeration value? In cocos2d x, I am using a particle system from a plist with blend values dst = 1 and src = 100, which is a wrong enum for GL blending. Although the app logged "OpenGL error 0x0500", it still ran and showed a good blend effect. And I cannot find the right pair of dst and src to recreate that effect. So I hope someone can tell me how glBlendFunc GL blending will work when it has a bad src value. Based on that, I hope I can recreate that effect without using a wrong src value. Thanks
1 | Region selection in OpenGL If I have a mesh of triangles and am going to make a selection on it using a region (and not a rectangle using Glu.gluPickMatrix(...)), how can I implement it?
1 | How can the glass breaking effect from Smash Hit be achieved? I saw Smash Hit the other day and was amazed by the physics of the game, especially the shattered glass effect. I've read other posts about this subject, but I still feel that they don't share enough details to let me get started on implementing this on my own with OpenGL GLSL. Is it possible for somebody with an enhanced perception and graphics understanding to watch the gameplay and give some pointers on how this effect could be replicated? I'd rather not use a 3rd party physics engine and do the entire thing on my own for educational purposes, so could you mention some of the physics that goes on behind this as well? References to other documents and demos are highly appreciated.
1 | Do game engines put all interpolation onto the GPU? Is it considered standard to push all the vertices and an interpolation of the next position onto the GPU? Suppose a sector is moving up every game tick at 50 units per second. You could put in the shader something like (good old parametric equation like formulas) pos.x = a * x1 + (1 - a) * x0 where you'd calculate a as a uniform and only send this value to the GPU in an uncapped fashion, like so while (true) a = percentage of the way between frames, in range from 0.0 to 1.0 updateGLUniform(..., a) render() Rest of stuff delay() The other way of doing it was to manually set each entity and sector every single time you loop, but then you might have to update the position of a lot of things every game cycle. Just setting one value seems way more efficient. However, I don't know if there are any downsides to this. Are there any negatives to shoving interpolation off to the GPU to save computation time in the game? Or is there something that I'm missing? What do major games do?
1 | How do I apply 2 rotations about different points to a single primitive using OpenGL I'm working on a 2D top down shooter game that has a rotation feature like Realm Of The Mad God such that if you press e the camera rotates around the character in a clockwise direction and q rotates the camera around the character in a counterclockwise direction. I have this working with my floors and walls by translating to the character, doing the screen rotation, and drawing everything with the character as the origin. The problem arises when I shoot projectiles which need to both rotate around the character and rotate around themselves (shooting uses the mouse cursor so I can shoot at any angle). For example, if the screen is not rotated and I'm shooting rectangular projectiles, I want them to face in the direction I'm shooting (rotation around themselves). However if I only do this rotation, when I then rotate the screen the projectiles will always shoot at the same position because my cursor position does not change. Therefore I need to also either rotate the projectiles around the character or rotate the mouse cursor position to get the correct position (which would then totally screw up all of the collision detection). Likewise if I only do the screen rotation on projectiles, the rectangles will always be facing the same way and they would only look correct if I were shooting straight up or straight down. So my question is, how can I perform 2 rotations on a primitive around 2 different points? The only way I can think of is to translate to the character and do the screen rotation, then somehow calculate the translation required to move back to the middle of the projectile (seeing as how my axes are now rotated) and do its rotation. Or am I thinking about this in the wrong way and there is a different solution to accomplishing this effect? |
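For illustration only (written with GLM rather than the fixed function LWJGL calls in the post; characterPos, projectileCenter, screenAngle and projectileAngle are placeholder names): each rotation around a point is a translate, rotate, translate-back sandwich, and the two sandwiches can simply be multiplied together before drawing the projectile.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// rotate around an arbitrary pivot: move the pivot to the origin, rotate, move back
glm::mat4 aroundPoint(glm::vec2 pivot, float angleDeg)
{
    glm::mat4 m(1.0f);
    m = glm::translate(m, glm::vec3(pivot, 0.0f));
    m = glm::rotate(m, glm::radians(angleDeg), glm::vec3(0.0f, 0.0f, 1.0f));
    m = glm::translate(m, glm::vec3(-pivot, 0.0f));
    return m;
}

// screen rotation about the character, then the projectile's own rotation about its centre
glm::mat4 model = aroundPoint(characterPos, screenAngle)
                * aroundPoint(projectileCenter, projectileAngle);

With the fixed pipeline the same composition is just the sequence glTranslatef(pivot), glRotatef(angle), glTranslatef(-pivot) issued once per pivot, in the same order.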
1 | How does re-glBufferData()'ing interact with glVertexAttribPointer? I understand the basic sequence: glGenBuffers() creates a "buffer object name," glBindBuffer() creates the actual buffer object (as well as binding the name), glBufferData() "creates and initializes a buffer object's data store," and glVertexAttribPointer() sets the format of the buffer data, as well as saving the buffer object binding as part of vertex attrib array state. The tricky thing that I'm unsure of is whether the glVertexAttribPointer() state remains valid if the "buffer object data store" (but not the buffer object itself) is blown away by a new call to glBufferData(). I'm aware that this isn't usually a good idea (use glBufferSubData() when just rewriting buffers), but glBufferData() seems like the only option if the buffer needs to be resized. Plus, I'm just curious. The reference pages make it sound like it could go either way, in particular (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glVertexAttribPointer.xhtml): "If pointer is not NULL, a non-zero named buffer object must be bound to the GL_ARRAY_BUFFER target (see glBindBuffer), otherwise an error is generated. pointer is treated as a byte offset into the buffer object's data store. The buffer object binding (GL_ARRAY_BUFFER_BINDING) is saved as generic vertex attribute array state (GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) for index index. When a generic vertex attribute array is specified, size, type, normalized, stride, and pointer are saved as vertex array state, in addition to the current vertex array buffer object binding." It seems like both the byte offset and the buffer object binding could remain valid, even if the data store was replaced by something different. But I could also see it being implemented such that, at the time of the call, the binding captures a specific pointer into the data store, which becomes invalidated if the data store is replaced. Neither way seems to be precluded by the wording.
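For concreteness, a small sketch of the sequence being asked about, illustrative only; it assumes a current GL context and a loader such as GLEW, and the attribute layout (location 0, three floats per vertex) and variable names are arbitrary.

    float firstVerts[9]   = { /* one triangle */ };
    float biggerVerts[18] = { /* two triangles */ };

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);                        // creates the buffer object name
    glBindBuffer(GL_ARRAY_BUFFER, vbo);           // creates and binds the buffer object
    glBufferData(GL_ARRAY_BUFFER, sizeof(firstVerts), firstVerts, GL_DYNAMIC_DRAW); // data store #1

    // Format and buffer binding are captured into the vertex attrib array state here.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(0);

    // Later: resize by replacing the data store, without touching the attribute setup.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(biggerVerts), biggerVerts, GL_DYNAMIC_DRAW); // data store #2
    // The question: does the earlier glVertexAttribPointer() state still apply to the new store?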
1 | GLSL shader algorithm to render a sprite map with a single draw call (on a single sprite mesh) I'm trying to render a tile map world on a single sprite mesh (instead of one sprite per tile). This significantly reduces the array vertex bus labor for extremely large worlds. In order to do this, you pass two textures to the shader: the sprite sheet itself, and a texture containing tile IDs. Sprite map (I know some tiles repeat but ignore that, this could be any spritesheet). Below: the entire tile world represented by a 128 x 128 texture map, where each RGB value holds the actual tile ID for that location on the map. There are more efficient ways to represent the actual tile ID. Here, for the sake of simplicity, it is stored in the R component. Putting it all together: the right-most square is the screen output. In the image I just repeat the texture... but what should actually happen is a tile world rendered from tiles on the sprite sheet. I wrote the shader that is supposed to do this, but I'm stuck. I can render the sprite sheet itself, and even scale it on the sprite mesh, but I can't seem to get the tile IDs to render at the proper location, matching the tile world data. I do think I am really close and there are just a couple of modifications needed to the shader, so what am I doing wrong? The following shader is what I have so far: tilemap.vs version 330 core layout (location 0) in vec3 position layout (location 1) in vec3 color layout (location 2) in vec2 texCoord out vec2 TexCoord uniform sampler2D ourTexture uniform sampler2D ourTilemapDataTexture uniform mat4 Model uniform mat4 View uniform mat4 Projection void main() gl Position Projection View Model vec4(position, 1.0) TexCoord texCoord tilemap.frag version 330 core in vec2 TexCoord uniform sampler2D ourTexture uniform sampler2D ourTilemapDataTexture void main() size of the tile world (128 by 128 tiles on one sprite) float cols 128.0f float rows 128.0f float tilesPerSide 8.0f figure out which tile we're in from the 3D coordinates vec2 tile vec2(floor(TexCoord.x) cols, floor(TexCoord.y) rows) get tile's texture ID from tile ID texture map float tex texture2D(ourTilemapDataTexture, tile).r vec2 UV vec2(tex fract(TexCoord.x), fract(TexCoord.y)) color texture2D(ourTexture, UV) if (color.a 0) remove transparent areas discard PS I looked at every tutorial I could find on Google, and tried to re-implement it in my shader to no effect. This is as far as I got so far. Can anyone point out what I'm doing wrong? Update 1: I was able to get position sampling to work. I painted a bit on the world map. It's not perfect, but definitely a step forward. Adjustments made to the fragment shader: float cols 128.0f float rows 128.0f float tex texture2D(ourTilemapDataTexture, TexCoord).r float tey texture2D(ourTilemapDataTexture, TexCoord).g vec2 UV vec2( (TexCoord.x (cols) (tex 32.0f)), (TexCoord.y (rows) (tey 32.0f)) ) color texture2D(ourTexture, UV) Sure, it's not 100% correct, but at least map indexing works now :)
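For comparison, here is a minimal sketch of the usual tile-lookup math in a fragment shader. It is an illustrative reconstruction of the general technique, not a fix verified against the poster's project: the sampler names, the 8 x 8 sheet size, and the assumption that tile IDs are stored as normalized bytes in the R channel of a 128 x 128 map sampled with GL_NEAREST are all mine.

    #version 330 core
    in vec2 TexCoord;                    // 0..1 across the whole tile-map mesh
    out vec4 fragColor;
    uniform sampler2D spriteSheet;       // 8 x 8 grid of tiles
    uniform sampler2D tileIdMap;         // 128 x 128, R = tileIndex / 255.0, GL_NEAREST

    const float mapSize    = 128.0;      // tiles per side of the world
    const float sheetTiles = 8.0;        // tiles per side of the sprite sheet

    void main()
    {
        // Which world tile does this fragment fall into, and where inside it?
        vec2 tileIndex  = floor(TexCoord * mapSize);   // 0..127
        vec2 insideTile = fract(TexCoord * mapSize);   // 0..1 within the tile

        // Fetch the tile ID (0..63) stored as a normalized byte, sampling the texel centre.
        float id = texture(tileIdMap, (tileIndex + 0.5) / mapSize).r * 255.0;

        // Convert the ID into a cell of the 8 x 8 sprite sheet.
        vec2 cell = vec2(mod(id, sheetTiles), floor(id / sheetTiles));

        // Final UV: start of the cell plus the offset inside the tile, scaled to cell size.
        vec2 uv = (cell + insideTile) / sheetTiles;
        fragColor = texture(spriteSheet, uv);
        if (fragColor.a == 0.0) discard;               // drop transparent texels
    }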
1 | Lasers in 3D space game I am creating a little 3D space shooter and am somewhat unsure how to implement the laser beams. I'm thinking of something along the lines of Star Wars, where you mostly shoot rather short laser beams, but very many of them. Should I create a "tube" for each of the beams, render them via instancing, and give them a better look just with the shader? How would you go about this? I was rather perplexed that there was no good tutorial on this matter out there.
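As a point of reference, a bare-bones sketch of the instanced approach the question describes: one shared beam mesh drawn many times with a per-instance transform. Everything here (the struct layout, the names beamVao, instanceVbo, beamIndexCount, and gatherLiveBolts) is an illustrative assumption, not a known-good recipe.

    // One entry per live laser bolt, re-uploaded each frame.
    struct BoltInstance {
        glm::mat4 model;   // positions, orients and stretches the unit beam mesh
        glm::vec4 color;   // tint, can be faded toward the ends in the fragment shader
    };

    std::vector<BoltInstance> bolts = gatherLiveBolts();   // hypothetical game-side function

    // beamVao, instanceVbo and beamIndexCount are assumed to be set up elsewhere;
    // the per-instance mat4 occupies four vec4 attribute slots, each with
    // glVertexAttribDivisor(slot, 1).
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, bolts.size() * sizeof(BoltInstance),
                 bolts.data(), GL_STREAM_DRAW);

    glBindVertexArray(beamVao);
    glDrawElementsInstanced(GL_TRIANGLES, beamIndexCount, GL_UNSIGNED_SHORT,
                            nullptr, (GLsizei)bolts.size());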
1 | Simple OpenGL code paradox I'm trying to create the most basic application that will draw a cube using indices and textures for starters, but nothing about it makes any sense whatsoever. I ran it on my machine with an nVidia card and the display driver (latest, not that it matters) crashes (null ref), along with the program, on the glDrawElements(...) command (Renderer.cpp). Fine. Then I sent it to a friend of mine who has an AMD card to try to debug it, because I was unable to find the bug in the code, and the program worked (didn't crash) on his machine. That surprised me, since I only did the most basic OpenGL stuff; there's no way it should be vendor-specific. To get to the bottom of this, I downloaded APItrace (http://apitrace.github.io) to try to see what's going on under the hood of my machine and why my driver crashes. Once I dumped the trace after running the program, it said that the shader program linker failed because I didn't write to gl_Position (which I have). The problem is that the program links just fine in the VS debugger and no error is returned there. So it just keeps getting weirder. If you want to mess with this, you can get the code (commit with the issue linked) at https://github.com/Karlovsky120/7DaysWorldEditor/commit/e86b8be40c81511c4416dc5e253b90e42a5a8ec0 I don't think you'd see the bug if I just pasted the code here; there must be a mistake with the setup somewhere. I really made an effort to make this as hassle-free as possible: clone it, build it, and you're good to go, no additional setup required. You will need to open it with VS2017, though. There are a lot of classes in the code, but all of the relevant code is in Scene.cpp and any of the classes it references. Thank you for any help you give, even if it's just a guess at what might be wrong, because I'm at my wits' end here.
1 | 2D graphic over 3D perspective projection To draw a 2D HUD (just a simple triangle, for now) over 3D graphics in OpenGL, I draw all 3D objects, then call glDisable(GL_DEPTH_TEST), draw the 2D HUD triangle, and call glEnable(GL_DEPTH_TEST). However, it seems I am missing something and the triangle is only rendered when I set the Model View Projection matrix to identity.
DrawFrame()
...
// Draw 3D objects
...
// Coordinates of 2D triangle
vert[0] = 0.0f;  vert[1] = 0.5f;  vert[2] = 0.0f;  // TOP
vert[3] = -0.5f; vert[4] = -0.5f; vert[5] = 0.0f;  // LEFT
vert[6] = 0.5f;  vert[7] = -0.5f; vert[8] = 0.0f;  // RIGHT
glDisable(GL_DEPTH_TEST);
glm::mat4 guiMVP = glm::mat4(1.0f);
// glm::mat4 guiMVP = Ortho * View * glm::mat4(1.0f);       // This does not render the triangle
// glm::mat4 guiMVP = Projection * View * glm::mat4(1.0f);  // This DOES render the triangle in its place, but in the 3D world
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &guiMVP[0][0]);
...
// render 2D triangle
glEnable(GL_DEPTH_TEST);

OnSurfaceChanged(int width, int height)
glViewport(0, 0, width, height);
aspect = (float) width / height;
// Projection Matrix
Projection = glm::perspective(45.0f, aspect, 0.1f, 10000.0f);
// Orthographic Matrix for 2D HUD
Ortho = glm::ortho(0.0f, (float)width, (float)height, 0.0f);

setCamera(float cameraX, float cameraY, float cameraZ, float cameraLX, float cameraLY, float cameraLZ)
// Camera View matrix
View = glm::lookAt(
    glm::vec3(cameraX, cameraY, cameraZ),    // Camera position
    glm::vec3(cameraLX, cameraLY, cameraLZ), // pointing at
    glm::vec3(0, 1, 0)                       // Angle
);

Here's my vertex shader:
attribute vec3 in_Position;
uniform mat4 MVP;
void main(void)
{
    vec4 v = vec4(in_Position, 1.0);
    gl_Position = MVP * v;
}
This is a screenshot of what I want. I DO get this output by using an identity MVP matrix for 2D graphics. I know this is an ugly hack that works for now. How do I do this properly using the orthographic matrix?
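For what it's worth, a small sketch of the pixel-space HUD pass the question is aiming for, illustrative only. It reuses the Ortho and MatrixID names from the post, but the idea of specifying the triangle directly in window coordinates (and leaving the 3D camera View out of the HUD's matrix) is an assumption of this sketch, not a statement of the poster's intent.

    // HUD pass: vertices are given in window/pixel coordinates, so only the
    // orthographic matrix is applied (no perspective, no camera View).
    float hudVerts[9] = {
        320.0f,  40.0f, 0.0f,   // TOP   (pixels from the top-left, matching glm::ortho(0, w, h, 0))
        280.0f, 120.0f, 0.0f,   // LEFT
        360.0f, 120.0f, 0.0f    // RIGHT
    };

    glDisable(GL_DEPTH_TEST);
    glm::mat4 hudMVP = Ortho * glm::mat4(1.0f);              // Ortho alone; model is identity here
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &hudMVP[0][0]);
    // ... upload hudVerts and issue the draw call for the triangle ...
    glEnable(GL_DEPTH_TEST);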
1 | How do open world game engines allocate memory? My troubles: I've been trying to create a game engine, but since I am not well experienced in C I am having trouble deciding how to load new scenes efficiently, whether level by level or as an open world. Example: Recent games are heavy, the most popular games are about 40 to 100 gigabytes, but neither RAM nor the GPU has that much memory, at least on a normal PC or console. So obviously most game engines pre-allocate enough memory to fit the scene and sometimes just tolerate some wasted memory, but how would you decide how much is enough? Possible solution: I was planning to create separate game loops. Before entering each game loop I would load the level, then start the loop; that way I would be able to deallocate the preceding level and then allocate new memory for the next level. Since I am new to the world of games and memory management my solution may sound rather stupid, but I need to know how to handle such situations efficiently.
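To make the "possible solution" concrete, here is a rough sketch of a per-level allocation loop of the kind described. It is purely illustrative: the Level type, file names, and budget figure are made up, and a real engine would typically stream chunks in and out rather than freeing everything between levels.

    #include <cstddef>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical level container: owns all per-level allocations so they are
    // released together when the level is destroyed.
    struct Level {
        std::vector<std::byte> assetArena;   // one big block reserved up front
        explicit Level(const std::string& path, std::size_t budgetBytes)
            : assetArena(budgetBytes)        // pre-allocate the level's memory budget
        {
            // ... parse 'path' and place meshes/textures into assetArena ...
        }
    };

    int main() {
        const std::vector<std::string> levels = { "level1.dat", "level2.dat" };
        for (const auto& path : levels) {
            auto level = std::make_unique<Level>(path, 256u * 1024u * 1024u); // e.g. a 256 MiB budget
            // runGameLoop(*level);          // hypothetical: play until the level is finished
        }                                    // level goes out of scope here, freeing its memory
        return 0;
    }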