_id | text
---|---
1 | Mandelbrot set not displaying properly I am trying to render the Mandelbrot set using GLSL. I'm not sure why it's not rendering the correct shape. Does the Mandelbrot calculation require the (x, y) or (real, imag) values to be within a certain range? Here is a screenshot. I render a quad as follows:

    float w2 = 6;
    float h2 = 5;
    glBegin(GL_QUADS);
    glVertex3f(-w2, -h2, 0.0);
    glVertex3f(-w2,  h2, 0.0);
    glVertex3f( w2,  h2, 0.0);
    glVertex3f( w2, -h2, 0.0);
    glEnd();

My vertex shader:

    varying vec3 Position;
    void main(void)
    {
        Position = gl_Vertex.xyz;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

My fragment shader (where all the meat is):

    uniform float MAXITERATIONS;
    varying vec3 Position;
    void main(void)
    {
        float zoom = 1.0;
        float centerX = 0.0;
        float centerY = 0.0;
        float real = Position.x * zoom + centerX;
        float imag = Position.y * zoom + centerY;
        float r2 = 0.0;
        float iter;
        for (iter = 0.0; iter < MAXITERATIONS && r2 < 4.0; ++iter)
        {
            float tempreal = real;
            real = (tempreal * tempreal) - (imag * imag);
            imag = 2.0 * real * imag;
            r2 = (real * real) + (imag * imag);
        }
        vec3 color;
        if (r2 < 4.0)
            color = vec3(1.0);
        else
            color = vec3(iter / MAXITERATIONS);
        gl_FragColor = vec4(color, 1.0);
    }
|
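For reference, the escape-time iteration this shader implements can be sketched on the CPU (Python, illustrative only, not the asker's code). The full Mandelbrot set lives roughly in real ∈ [-2, 1], imag ∈ [-1.5, 1.5], so a quad spanning ±6 × ±5 with zoom = 1.0 maps most fragments far outside it; note also that both components must derive from the previous iteration's values before either is overwritten:

```python
def escape_count(c: complex, max_iter: int) -> int:
    """How many z -> z*z + c iterations run before |z|^2 exceeds the
    bailout radius 4; returning max_iter means the point is (probably)
    inside the set."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c  # real and imag both derive from the *old* z here
        if z.real * z.real + z.imag * z.imag > 4.0:
            return i
    return max_iter

print(escape_count(0j, 50))      # the origin never escapes
print(escape_count(1 + 1j, 50))  # a far point escapes almost immediately
```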
1 | Setting relative anchor point via matrix The page at https://webglfundamentals.org/webgl/lessons/webgl-2d-matrices.html provides an example for rotating around the center of the image: "make a matrix that will move the origin of the 'F' to its center. var moveOriginMatrix = m3.translation(-50, -75);" Similarly, the library I'm using (gl-matrix) provides the ability to rotate around an origin: "fromRotationTranslationScaleOrigin ... Creates a matrix from a quaternion rotation, vector translation and vector scale, rotating and scaling around the given origin." However, I'd like to generically set a rotation origin based on the relative local transform, e.g. something like passing (0.5, 0.5, 0) as the origin parameter to fromRotationTranslationScaleOrigin() and having it always rotate around the center. Would this require handling it somewhere else in the pipeline (e.g. after composing with the perspective matrix), or are there best-practice ways of handling this? I'd also like a matrix to allow centering within another's world matrix, but that's a different question I guess :) |
1 | Earth and sun How to make the sun revolve around the earth in the following code? I want to imitate the planetary system, with the Earth and Sun drawn as solid spheres in the same window. I have drawn the two: the earth as the bigger sphere and the sun as the smaller sphere. Now I want the earth to revolve around the sun when I press key A, and when I press key B the earth must rotate around its axis. How do I achieve this? Thanks. Here is the code so far:

    #include <GL/glut.h>

    void init(void)
    {
        glClearColor(0.0, 0.0, 0.0, 0.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }

    void display(void)
    {
        glClearColor(0.05, 0.05, 0.5, 0.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glColor3f(0.03, 0.05, 0.09);
        glutSolidSphere(0.5, 30, 30);   // Earth
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(.7, .7, .7);
        glColor3f(1.0, 1.0, 1.0);
        glutSolidSphere(0.2, 50, 50);   // Sun
        glLoadIdentity();
        glFlush();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE);
        glutInitWindowPosition(10, 10);
        glutInitWindowSize(400, 400);
        init();
        glutCreateWindow("Planets");
        glutDisplayFunc(display);
        glFlush();
        glutMainLoop();
    }
|
1 | Does glScissor affect stencil and depth buffer operations? I know glScissor() affects glColorMask() and glDepthMask(), but does it affect the stencil and depth buffer operations? For example:

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_SCISSOR_TEST);
    glEnable(GL_STENCIL_TEST);
    glScissor(X, Y, W, H);
    // Is this color mask set only for the scissor area?
    glColorMask(TRUE, TRUE, TRUE, TRUE);
    // Does this stencil function only work within the scissor area?
    glStencilFunc(GL_ALWAYS);
    // Does the stencil operation only work within the scissor area?
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    // Is this depth mask set only for the scissor area?
    glDepthMask(GL_TRUE);
    // Does this depth function only work within the scissor area?
    glDepthFunc(GL_ALWAYS);
|
1 | Frame Buffer Object (FBO) is not working. What is the right way to use it? I am trying to use an FBO, but I am having some problems. I will show you my steps, but first my running screen, so we can compare before FBO / after FBO. My running screen and Draw() function code:

    glClearColor(0.5, 0.5, 0.5, 1.0);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -3.0);
    glRotatef(0, 0.0, 1.0, 0.0);
    glUniform3f(glGetUniformLocation(mainShader->getProgramId(), "lightPos"), 0, 1, 2);
    mainShader->useShader();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    scene->draw(mainShader->getProgramId());
    mainShader->delShader();

After that I tried to add an FBO. Create-FBO-texture function:

    unsigned int createTexture(int w, int h, bool isDepth = false)
    {
        unsigned int textureId;
        glGenTextures(1, &textureId);
        glBindTexture(GL_TEXTURE_2D, textureId);
        glTexImage2D(GL_TEXTURE_2D, 0, (!isDepth ? GL_RGBA8 : GL_DEPTH_COMPONENT), w, h, 0,
                     isDepth ? GL_DEPTH_COMPONENT : GL_RGBA, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        int i = glGetError();
        if (i != 0)
            std::cout << "Error happened while loading the texture: " << i << std::endl;
        glBindTexture(GL_TEXTURE_2D, 0);
        return textureId;
    }

Init() function:

    void init()
    {
        glClearColor(0, 0, 0, 1);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(50, 640.0 / 480.0, 1, 1000);
        glMatrixMode(GL_MODELVIEW);
        glEnable(GL_DEPTH_TEST);
        mainShader = new shader("vertex.vs", "fragment.frag");
        quadRenderShader = new shader("quadRender.vs", "quadRender.frag");
        scene = new meshLoader("test.blend");
        renderTexture = createTexture(640, 480);
        depthTexture = createTexture(640, 480, true);
        glGenFramebuffers(1, &FBO);
        glBindFramebuffer(GL_FRAMEBUFFER, FBO);
        // GL_COLOR_ATTACHMENT0 / GL_DEPTH_ATTACHMENT
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTexture, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
        int i = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (i != GL_FRAMEBUFFER_COMPLETE)
            std::cout << "Framebuffer is not OK, status=" << i << std::endl;
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

And the Draw() function:

    void display()
    {
        // rendering to texture...
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glLoadIdentity();
        glTranslatef(0.0, 0.0, -3.0);
        glRotatef(0, 0.0, 1.0, 0.0);
        glUniform3f(glGetUniformLocation(mainShader->getProgramId(), "lightPos"), 0, 1, 2);
        mainShader->useShader();
        glBindFramebuffer(GL_FRAMEBUFFER, FBO);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        scene->draw(mainShader->getProgramId());
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        mainShader->delShader();
        glClearColor(0.0, 0.0, 0.0, 1.0);
        // render texture to screen
        glLoadIdentity();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        quadRenderShader->useShader();
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, depthTexture);
        glUniform1i(glGetUniformLocation(quadRenderShader->getProgramId(), "texture"), 0);
        quad->draw(quadRenderShader->getProgramId());
        quadRenderShader->delShader();
    }

Result: only the last clear color (glClearColor) is drawn, so the screen is black. The result should be like in the tutorial. Note: I know the tutorial monkey is purple, but that is not the problem. |
1 | How do I align the cube in which shadows are computed with the view frustum? ("View Space aligned frustum") Short and concise: given 8 world-space positions that form a cube of arbitrary size, position and orientation, and given an arbitrary light direction, how do I compute the view and projection matrix for a directional light, such that the shadows are computed in exactly and only this cube? Edit/Clarification: I want to create shadow maps in the way that Crytek calls "View Space aligned" (http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf, pages 54-55), so I want the cube in which shadows are computed to be aligned with the view frustum. How do I achieve that, when I already have the 8 world-space positions of a cube around the view frustum? |
1 | LWJGL texture bleeding fix won't work I tried a lot of things to fix texture bleeding, but nothing works. I don't want to add a transparent border around my textures, because I already have too many and it would take too much time, and I can't do it in code because I'm loading textures with Slick. My textures are separate textures, and they seem to wrap around to the other side (texture bleeding). Here are the textures that are "bleeding": the head, body, arm and leg are separate textures. Here's the code I'm using to draw a texture:

    public static void drawTextureN(Texture texture, Vector2f position, Vector2f translation,
            Vector2f origin, Vector2f scale, float rotation, Color color, FlipState flipState) {
        texture.setTextureFilter(GL11.GL_NEAREST);
        color.bind();
        texture.bind();
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
        GL11.glTranslatef((int) position.x, (int) position.y, 0);
        GL11.glTranslatef((int) translation.x, (int) translation.y, 0);
        GL11.glRotated(rotation, 0f, 0f, 1f);
        GL11.glScalef(scale.x, scale.y, 1);
        GL11.glTranslatef((int) origin.x, (int) origin.y, 0);
        float pixelCorrection = 0f;
        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(0, 0); GL11.glVertex2f(0, 0);
        GL11.glTexCoord2f(1, 0); GL11.glVertex2f(texture.getTextureWidth(), 0);
        GL11.glTexCoord2f(1, 1); GL11.glVertex2f(texture.getTextureWidth(), texture.getTextureHeight());
        GL11.glTexCoord2f(0, 1); GL11.glVertex2f(0, texture.getTextureHeight());
        GL11.glEnd();
        GL11.glLoadIdentity();
    }

I tried a half-pixel correction, but it didn't make any sense because of GL12.GL_CLAMP_TO_EDGE. I set pixelCorrection to 0, but it still won't work. |
1 | Rendering 2D Tile World (With Player In The Middle) What I have at the moment is a series of data structures I'm using, and I would like to render the world onto the screen (just the visible parts). I've actually already done this several times (lots of rewrites), but it's a bit buggy (rounding seems to make the screen jump ever so slightly every x tiles the player walks past). Basically I've been confusing myself heavily on what I feel should be a pretty simple problem... so here I am asking for some help! OK! So I have a 50x50 array holding the tiles of the world. I have the player position as 2 floats, x in [0, 49] and y in [0, 49], in that array. I have the application size exactly in pixels (x and y). I have an arbitrary TILE_SIZE static int (based on screen pixels). What I think is heavily confusing me is using a 2D orthographic projection in OpenGL which maps (0,0) to the top left of the screen and (SCREEN_SIZE_X, SCREEN_SIZE_Y) to the bottom right of the screen:

    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluOrtho2D(0, getActualWidth(), getActualHeight(), 0);
    gl.glMatrixMode(GL.GL_MODELVIEW);
    gl.glLoadIdentity();

The map tiles are set so that (0,0) in the array is the bottom left. And the player has to be in the middle of the screen (SCREEN_SIZE_X/2, SCREEN_SIZE_Y/2). What I've been doing so far is trying to render 1-2 tiles more all around what would be displayed on the screen, so that I don't have to worry about figuring out rendering half a tile from the top left, depending where the player is. It seems like such an easy problem, but after spending about 40 hours on it and rewriting it many times, I think I'm at a point where I just can't think clearly anymore... Any help would be appreciated. It would be great if someone could provide some very basic pseudocode on keeping the player in the middle when your projection is mapped to screen coordinates and only rendering the tiles you would be able to see. Thanks! |
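A common fix for the per-tile rounding jumps described above is to compute a single floating-point camera offset from the player position and let every tile inherit it, so rounding happens in one place only. A minimal sketch (Python, hypothetical names; the tile size and the 1-tile border are assumptions matching the question):

```python
import math

TILE_SIZE = 32  # assumed tile size in pixels

def visible_tiles(player_x, player_y, screen_w, screen_h):
    """Return the tile range to draw plus the camera offset in pixels.

    (cam_x, cam_y) is the world-pixel position of the screen's top-left
    corner; tile (c, r) is then drawn at
    (c * TILE_SIZE - cam_x, r * TILE_SIZE - cam_y), which keeps the
    player exactly at the screen centre with no per-tile rounding.
    """
    cam_x = player_x * TILE_SIZE - screen_w / 2.0
    cam_y = player_y * TILE_SIZE - screen_h / 2.0
    first_col = math.floor(cam_x / TILE_SIZE) - 1  # 1-tile safety border
    first_row = math.floor(cam_y / TILE_SIZE) - 1
    cols = screen_w // TILE_SIZE + 3
    rows = screen_h // TILE_SIZE + 3
    return first_col, first_row, cols, rows, cam_x, cam_y
```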
1 | Optimising the modelview transformation in GLSL for 2D So, the standard way to transform vertices and then pass to the fragment shader in GLSL is something like this:

    uniform mat4 u_modelview;
    attribute vec4 a_position;
    void main()
    {
        gl_Position = u_modelview * a_position;
    }

However, I am working in 2D, so there is redundancy in the 4x4 matrix. Would it be more efficient for me to do this?

    uniform mat3 u_modelview;
    attribute vec3 a_position;
    void main()
    {
        gl_Position = vec4(u_modelview * a_position, 1.0);
    }

gl_Position requires a 4-component vector, so an extra operation is required at output. However, the matrix multiplication is over 9 elements instead of 16. Can I do any better? |
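To sanity-check that the mat3 form loses nothing in 2D, here is a small CPU-side demonstration (Python, illustrative only; plain row-major lists stand in for the GLSL matrices) that a 3x3 homogeneous 2D transform produces the same x/y as the zero-padded 4x4 version:

```python
import math

def mat_vec(m, v):
    """Row-major matrix * column vector."""
    return [sum(m[r][c] * v[c] for c in range(len(v))) for r in range(len(m))]

# One 2D modelview: rotate by 30 degrees, then translate by (5, -2).
t, tx, ty = math.radians(30), 5.0, -2.0
c, s = math.cos(t), math.sin(t)

m3 = [[c, -s, tx],
      [s,  c, ty],
      [0,  0,  1]]

m4 = [[c, -s, 0, tx],
      [s,  c, 0, ty],
      [0,  0, 1,  0],
      [0,  0, 0,  1]]

p3 = mat_vec(m3, [1.0, 2.0, 1.0])        # (x, y, w) with w = 1
p4 = mat_vec(m4, [1.0, 2.0, 0.0, 1.0])   # (x, y, z, w)
assert abs(p3[0] - p4[0]) < 1e-12 and abs(p3[1] - p4[1]) < 1e-12
```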
1 | Quaternion LookAt for camera I am using the following code to rotate entities to look at points:

    glm::vec3 forwardVector = glm::normalize(point - position);
    float dot = glm::dot(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector);
    float rotationAngle = (float)acos(dot);
    glm::vec3 rotationAxis = glm::normalize(glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), forwardVector));
    rotation = glm::normalize(glm::quat(rotationAxis * rotationAngle));

This works fine for my usual entities. However, when I use this on my Camera entity, I get a black screen. If I flip the subtraction in the first line, so that I take the forward vector to be the direction from the point to my camera's position, then my camera works, but naturally my entities rotate to look in the opposite direction of the point. I compute the transformation matrix for the camera and then take the inverse to be the view matrix, which I pass to my OpenGL shaders:

    glm::mat4 viewMatrix = glm::inverse(cameraTransform->GetTransformationMatrix());

The orthographic projection matrix is created using glm::ortho. What's going wrong? |
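The cross/acos construction in the question can be checked numerically. A small sketch (Python, illustrative only; the (w, x, y, z) quaternion layout and helper names are my own) confirming that the quaternion built from the axis and angle really carries +Z onto the forward vector:

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians
    around the unit-length `axis`."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rotate(q, v):
    """Rotate v by unit quaternion q using v' = v + 2 r x (r x v + w v)."""
    w, r = q[0], (q[1], q[2], q[3])
    t = cross(r, v)
    t = (t[0] + w * v[0], t[1] + w * v[1], t[2] + w * v[2])
    u = cross(r, t)
    return (v[0] + 2 * u[0], v[1] + 2 * u[1], v[2] + 2 * u[2])

# Build the look-at rotation: carry +Z onto an arbitrary forward direction.
forward = (1 / math.sqrt(2), 0.0, 1 / math.sqrt(2))
axis = cross((0.0, 0.0, 1.0), forward)
n = math.sqrt(sum(comp * comp for comp in axis))
axis = tuple(comp / n for comp in axis)
angle = math.acos(forward[2])  # dot((0,0,1), forward)
q = quat_from_axis_angle(axis, angle)
```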
1 | Why would OpenGL ignore GL DEPTH TEST setting? I cannot figure out why some of my objects are being rendered on top of each other. I have depth testing on:

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);

Do I need to draw in order of what is closest to the camera? (I thought OpenGL did that for you.) Setup code:

    private void setUpStates() {
        glShadeModel(GL_SMOOTH);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightModel(GL_LIGHT_MODEL_AMBIENT, BufferTools.asFlippedFloatBuffer(new float[]{0, 0f, 0f, 1f}));
        glLight(GL_LIGHT0, GL_CONSTANT_ATTENUATION, BufferTools.asFlippedFloatBuffer(new float[]{1, 1, 1, 1}));
        glEnable(GL_COLOR_MATERIAL);
        glColorMaterial(GL_FRONT, GL_DIFFUSE);
        glMaterialf(GL_FRONT, GL_SHININESS, 50f);
        camera.applyOptimalStates();
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);
        glEnable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    }

Render code:

    private void render() {
        // Clear the pixels on the screen and clear the contents of the depth buffer (3D contents of the scene)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Reset any translations the camera made last frame update
        glLoadIdentity();
        // Apply the camera position and orientation to the scene
        camera.applyTranslations();
        glLight(GL_LIGHT0, GL_POSITION, BufferTools.asFlippedFloatBuffer(500f, 100f, 500f, 1));
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        for (ChunkBatch cb : InterthreadHolder.getInstance().getBatches()) {
            cb.draw(camera.x(), camera.y(), camera.z());
        }
    }

The draw method in ChunkBatch:

    public void draw(float x, float y, float z) {
        shader.bind();
        shader.setUniform("cameraPosition", x, y, z);
        for (ChunkVBO c : VBOs) {
            glBindBuffer(GL_ARRAY_BUFFER, c.vertexid);
            glVertexPointer(3, GL_FLOAT, 0, 0L);
            glBindBuffer(GL_ARRAY_BUFFER, c.colorid);
            glColorPointer(3, GL_FLOAT, 0, 0L);
            glBindBuffer(GL_ARRAY_BUFFER, c.normalid);
            glNormalPointer(GL_FLOAT, 0, 0L);
            glDrawArrays(GL_QUADS, 0, c.visibleFaces * 6);
        }
        ShaderProgram.unbind();
    }
|
1 | How can I consistently map mouse movements to camera rotation? I am writing an OpenGL game engine and also an editor for the engine. In my editor, I can import 3D models from FBX/Collada as a scene graph. Now I want to implement the option for the user to rotate the camera in the viewport using the mouse. I found many links on rotating the camera by some angle based on the delta x and delta y of the mouse. This is fine. But my problem is selecting the axis for rotation. For example, if the user moves the mouse along the x axis, I change the camera's local rotation angle around the y axis (the up axis). But this does not always work. If the camera node's parent node is rotated 90 degrees on the x axis, then when I change the camera's local y-axis rotation angle, the scene rotates in the wrong direction; in that case I have to rotate the camera's z-axis angle instead. This is my problem. So how can I ensure the camera always rotates left and right (whatever the parent nodes' angles are) when the user moves the mouse horizontally? Likewise, when the user moves the mouse up and down, I always want the camera to rotate up and down. |
1 | Achieving Retro Resolution with HD I couldn't really find any tips on this (or perhaps I just lack the proper words once again), but I'm thinking about how to get a retro look (SNES 16-bit, specifically) on a modern system. Basically, the game still runs at the native resolution; however, vertically and horizontally we're limited to 256x224 (512x448). The colour palette is not an issue nowadays. So I basically came up with this idea and am wondering if it's a smart approach (using OpenGL): 1) Create an orthographic projection matrix 256 units wide and 224 units tall. 2) Use a fragment shader that doesn't do anti-aliasing on the textures, so the textures are upscaled to look pixel-y. Since I couldn't really find a shader for 2), I also came up with a plan b): 1) Same as above. 2) Don't use textures at all; replace pixels with 1x1 coloured quads, converting spritesheets to 3D models made of quads. I think plan a) seems more realistic, however. But I do wonder how other games (Shovel Knight, Freedom Planet) approach a pixel-y, retro look that stays true to the systems of 20 years ago. |
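For the upscaling step, a widely used approach (not necessarily what the games named above do) is to pick the largest whole-number scale of the native framebuffer that fits the window and letterbox the rest, so every source pixel maps to an exact NxN block of screen pixels. A sketch (Python, illustrative names):

```python
def integer_scale(native_w, native_h, window_w, window_h):
    """Largest whole-number upscale of the native framebuffer that fits
    the window; 0 means the window is smaller than the native size."""
    return min(window_w // native_w, window_h // native_h)

def letterbox(native_w, native_h, window_w, window_h):
    """Viewport (x, y, w, h) centring the integer-scaled image."""
    s = integer_scale(native_w, native_h, window_w, window_h)
    w, h = native_w * s, native_h * s
    return (window_w - w) // 2, (window_h - h) // 2, w, h

# e.g. a 256x224 SNES-style framebuffer inside a 1920x1080 window
print(letterbox(256, 224, 1920, 1080))
```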
1 | Quaternion Rotation Weird Rotation I have an FPS camera, and I am representing rotation with quaternions. Every frame I grab how much the mouse moved that frame, and then I simply do:

    Quat DeltaQ = Quat::CreateRotationXYZ(MouseYDelta, MouseXDelta, 0);
    m_CurrentRotationQ = DeltaQ * m_CurrentRotationQ;

However, everything works until I rotate 90 degrees to the right. When I do that, I cannot rotate up and down. Also, moving in the direction I am facing works; however, depending on where I am looking, strafing left and right moves me in different directions. I've tried many things, such as reversing the order of multiplication; I'm really at a loss. Any help would be greatly appreciated. |
1 | Sharing VBO with multiple objects and fixed size buffer data I'm just messing around with OpenGL and getting some basic structures in place, and my first attempt resulted in each SceneObject class (which just contains vertex information right now) having its own VBO inside it. However, I've read that it might be better to share VBOs across multiple objects. Also, I read that you should avoid resizing a VBO (repeated calls to glBufferData with different size parameters), and instead choose a fixed size for a VBO and just draw a range from the buffer. I don't think changing the size of the buffer data would happen too often, but surely it would be better to only allocate the data you need? Choosing an arbitrary value seems risky. I'm looking for some advice on working with individual objects in a scene and their associated buffer data. |
1 | glOrthof not being applied I have had a problem with OpenGL where glOrthof is not being applied, leading to my frame having the default 1:1:1 ratio. Here is the code initializing it:

    public void reshape(GLAutoDrawable glAutoDrawable, int i, int i1, int i2, int i3) {
        GL2 gl = glAutoDrawable.getGL().getGL2();
        if (window.getWidth() != screenWidth || window.getHeight() != screenHeight)
            window.setSize(screenWidth, screenHeight);
        unitsTall = window.getHeight() / (window.getWidth() / unitsWide);
        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity();
        gl.glOrthof(0.0f, unitsWide, 0.0f, unitsTall, 0.0f, 1.0f);
        gl.glMatrixMode(GL2.GL_MODELVIEW);
    }

unitsWide is equal to 100. Here is the code for drawing the rectangle:

    public static void drawRect(float x, float y, float width, float height) {
        GL2 gl = Render.getGL2();
        gl.glRotatef(-rotation, 0, 0, 1); // Rotation needed to be reversed
        gl.glColor4f(red, green, blue, alpha);
        gl.glBegin(GL2.GL_QUADS);
        gl.glVertex2f(x, y);
        gl.glVertex2f(x + width, y);
        gl.glVertex2f(x + width, y + height);
        gl.glVertex2f(x, y + height);
        gl.glEnd();
        gl.glFlush();
        gl.glRotatef(rotation, 0, 0, 1);
    }

The "Render" class is the class with all of the main methods for OpenGL. Here is the code that I used for drawing the rectangle:

    Graphics.drawRect(0, 0, 0.5f, 1);

Finally, here is a screenshot of what it looks like when this code executes. I have looked around and have not found any other problems like this. Please tell me what I might be doing wrong. |
1 | LWJGL Resize window and glTranslate breaking screen resolution I'm trying to make a 2D tile RPG with LWJGL, but I'm having a problem with display resizing. I want the user to be able to resize the window to whatever size they want just by dragging it (as opposed to picking a resolution option from a list). See the images below for the problem: at the default resolution, and at a resized resolution. As you can see, the tile is drawn correctly in the first image. However, after moving the camera and resizing the display, the tile gets cut off as it moves away from the bottom left corner of the screen. The default screen resolution is 800x600. I have a camera object that moves based on the player using the WASD keys. I decide what's to be drawn by using glTranslatef(camera.x, camera.y, 0). Should I use if (Display.wasResized()) in the game loop, then get the new resolution and use it in the glTranslatef call somehow? Any help or suggestions would be great! EDIT: I added the code from Fletcher D's answer; however, it produced this: the tile doesn't get cut off anymore, but it is now being drawn at a size of 1x1 pixels. Also, when the screen resolution changes, the resolution of everything on screen changes, which is not what I want. I want the screen to simply display more of the world. As I said, I'm using a panning camera that moves with the player's WASD keys. So if the world is currently being displayed from (0,0) to (800,600) and the player resizes the window to 1080x720, I want the display to show the world from (-140,-60) to (940,660), maintaining the center of the screen at the moment of resizing. |
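Keeping the centre fixed on resize reduces to a small bounds computation: hold the old view's world-space centre and grow the visible region by the window delta, e.g. 800x600 around centre (400, 300) growing to 1080x720. A sketch (Python, illustrative names):

```python
def ortho_bounds_after_resize(old_w, old_h, new_w, new_h, left=0.0, bottom=0.0):
    """New (left, bottom, right, top) for the orthographic projection
    after a resize: the centre of the old view stays fixed and the
    window shows more of the world instead of rescaling it."""
    cx = left + old_w / 2.0
    cy = bottom + old_h / 2.0
    return (cx - new_w / 2.0, cy - new_h / 2.0,
            cx + new_w / 2.0, cy + new_h / 2.0)

print(ortho_bounds_after_resize(800, 600, 1080, 720))
```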
1 | How can I orient a circle relative to the position of a 3D object? How can I orient a circle that has a position relative to a 3D object? If I have a 3D object which is a hemisphere, I would like the circle to sit underneath that hemisphere, and whenever the hemisphere moves, the circle should move with the same angle and position, staying underneath it. In other words, I would like to attach a 2D circle to a hemisphere, parallel to it. |
1 | OpenGL glTexImage3D not working? I'm trying to use array textures with OpenGL 3.3, since this is the target version for my application. So I have to use the glTexImage3D() method instead of glTexStorage3D(). The problem is, if I use glTexImage3D() instead of glTexStorage3D(), the screen is always black; the other way round, everything works. Here is how I create the array texture:

    public ArrayTexture(int width, int height) {
        this.width = width;
        this.height = height;
        this.arrayTextureHandle = GL11.glGenTextures();
        GL13.glActiveTexture(GL13.GL_TEXTURE0);
        GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, arrayTextureHandle);
        // Works perfectly:
        // GL42.glTexStorage3D(GL30.GL_TEXTURE_2D_ARRAY, 1, GL11.GL_RGBA8, width, height, 10);
        // Doesn't work with these parameters:
        GL12.glTexImage3D(GL30.GL_TEXTURE_2D_ARRAY, 0, GL11.GL_RGBA8, width, height, 10,
                          0, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
        GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, 0);
    }

    public void addTexture(ScrollingSprite... sprites) {
        if (zLayerCounter + sprites.length < MAX_LAYERS) {
            GL13.glActiveTexture(GL13.GL_TEXTURE0);
            GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, arrayTextureHandle);
            Arrays.asList(sprites).forEach(sprite -> {
                sprite.getTexture().getTextureData().prepare();
                GL12.glTexSubImage3D(GL30.GL_TEXTURE_2D_ARRAY,
                    0,                      // mipmap number
                    0, 0, zLayerCounter++,  // xoffset, yoffset, zoffset
                    width, height, 1,       // width, height, depth
                    GL11.GL_RGBA,           // format
                    GL11.GL_UNSIGNED_BYTE,  // type
                    sprite.getTexture().getTextureData().consumePixmap().getPixels()); // pointer to data
                GL11.glTexParameteri(GL30.GL_TEXTURE_2D_ARRAY, GL11.GL_TEXTURE_WRAP_S, sprite.getTextureWrapS());
                GL11.glTexParameteri(GL30.GL_TEXTURE_2D_ARRAY, GL11.GL_TEXTURE_WRAP_T, sprite.getTextureWrapT());
            });
            GL11.glBindTexture(GL30.GL_TEXTURE_2D_ARRAY, 0);
        } else {
            throw new IllegalArgumentException("Maximum number of textures reached!");
        }
    }
|
1 | Problems Animating Texture in OpenGL I'm trying to animate a texture to scroll, for a static screen on a television, but I'm having some issues. Just translating within the texture matrix animates all textures in the scene, which is obviously a problem. However, when trying to push and pop the matrix, the texture matrix seems to keep getting reset; so rather than a scrolling texture, the texture just stays at the same translation. Here's the code snippet:

    glMatrixMode(GL_TEXTURE);
    glPushMatrix();
    glTranslatef(0, 0.03 * dt, 0);
    glBindTexture(GL_TEXTURE_2D, staticTexture);
    RepeatTexture();
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glNormal3f(0, 0, 1); glVertex3f(0.49, 0.205, 0.461);
    glTexCoord2f(0, 1); glNormal3f(0, 0, 1); glVertex3f(0.49, 1.204, 0.461);
    glTexCoord2f(1, 1); glNormal3f(0, 0, 1); glVertex3f(-.80, 1.204, 0.461);
    glTexCoord2f(1, 0); glNormal3f(0, 0, 1); glVertex3f(-.80, 0.205, 0.461);
    glEnd();
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
|
1 | OpenGL large tile map rendering I want to be able to have a big tilemap (e.g. like Terraria, or an infinite procedurally generated one), with only a handful of tiles on my screen. This means that I have a texture atlas (i.e. spritesheet) which contains the textures for the tiles. For the sake of simplicity, let's assume that I have a huge internal 2D array containing the tile IDs, and a lookup-table hashmap for the UV coords of each tile type. All the tiles have the same dimensions (the texture AND the onscreen tiles). I know a couple of methods to render such a 2D grid tilemap, but I don't know which one of them is a solid foundation for a game which heavily relies on modifiable tiles. Additionally, I want to be able to dynamically change the tile size (i.e. zooming in/out), modify the tiles/tilemap (e.g. destroying, building), and load fast (fast-paced player actions). 1. Using a single draw call: I create a VBO with all the quads to fill the screen (multiple overlapping vertices, because each quad needs a different texture) and have a persistent buffer to modify the texture buffer object. As soon as the camera moves, I update the UV coordinates in the TBO. 2. I can use attributeless rendering: I make a glDrawArrays call on an empty VAO, and use gl_VertexID % 6 and gl_VertexID / 6 to get the vertex ID and the tile ID. To map the UV coordinates, I pass a TBO, which is updated at every camera move. I don't know how expensive this calculation is for the GPU. 3. Hopping on the train of attributeless rendering, I can pass a uniform or create a const array to hold the UV offsets of every tile type, and only pass the updated tile type IDs to the shader at every camera move (with a uniform?). With gl_VertexID / 6 I know which index holds the current tile's type information, and with this type I can access my constant array to look up the UV coordinates. 4. Either using triangle strips or indices to reduce the vertex count, and using provoking vertices to correctly map the UV coordinates (this can be combined with attributeless rendering). The vertices are all fixed for the window, and only the textures change. Here, once again, I have to map the TBO to a persistent buffer and update it at every camera move. 5. Every tile gets a separate draw call (thus a separate VBO, IBO, TBO), and I can therefore render only the tiles which are seen by the camera. Because of the low number of onscreen tiles, those separate draw calls shouldn't be too expensive. 6. I put the whole map in the VAO and TBO at the start of the game, and modify the index buffer so that only the onscreen tiles are rendered (the IBO only holds enough indices to render the onscreen quads). My problem here is that I send too much data to the GPU every frame, and I get really bad cache locality, because the information has to be accessed from different parts of the buffer objects. There might also be a way to use a geometry shader, although I have never used one. My concrete question is: is there a "state of the art" method to render big tilemaps, is any of my above-mentioned ideas a good solution, and does it really matter which option I use? Thank you! |
1 | How can I draw textures with JOGL? I have begun to learn OpenGL using JOGL. There's something I haven't found any actual or easy guide to, though: how can I draw textures? For example, what kind of code would I need to draw a texture at an (X, Y, Z) position? I want to use glDrawArrays() or glDrawElements(). |
1 | Would it perform faster to split calculation between vertex and fragment shaders? Assuming the total amount of computation would be the same (at least as far as my code goes), would there be any performance increase in splitting the calculation between separate vertex and fragment shaders vs. doing all the calculations in just a fragment shader? In my use case, I would not be using vertex shaders as they are typically intended (so, not processing actual vertices). I just thought that if the GPU hardware is set up such that the pipeline is expected to be used, it might be better to take advantage of that; or would it be just as fast to put it all in a fragment shader? Conceptually, it would be simpler to just use the fragment shader. Note that I do need a lot of data going to the fragment shader that is updated every frame (e.g. a texture buffer object). |
1 | (LWJGL) Resize window content I am having a little issue with my game: trying to get the screen to draw a ton larger. I am going for a retro type of feel for my game, and I believe that this would help with that. I am basically trying to take a small area (for example 100x177) and draw it at 9 times the original size while still keeping nearest-neighbour filtering. The image below shows what I am trying to achieve (image taken from http://forums.tigsource.com/index.php?topic=27928.0). I have been trying to render the current area to a framebuffer and then draw the framebuffer larger on the window. I have not been able to find any example code or how I would go about doing that. Thank you in advance for the help. EDIT: Added sample code. This code is in the same method where I initialize OpenGL:

    fboID = GL30.glGenFramebuffers();
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboID);
    colID = GL11.glGenTextures();
    GL11.glBindTexture(GL11.GL_TEXTURE_2D, colID);
    GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, scaleWidth, scaleHeight, 0,
                      GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
    GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
    depID = GL30.glGenRenderbuffers();
    GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, depID);
    GL30.glRenderbufferStorage(GL30.GL_RENDERBUFFER, GL11.GL_DEPTH_COMPONENT, scaleWidth, scaleHeight);
    GL30.glFramebufferRenderbuffer(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT, GL30.GL_RENDERBUFFER, depID);
    GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, colID, 0);
    drawBuffs = BufferUtils.createIntBuffer(1);
    drawBuffs.put(0, GL30.GL_COLOR_ATTACHMENT0);
    GL20.glDrawBuffers(drawBuffs);
    if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE)
        System.out.println("Framebuffer not complete!");
    else
        System.out.println("Framebuffer is complete!");

And this is my render method that the game loop runs (updates at 60fps):

    // clear screen
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    // Start FBO Rendering Code
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboID);
    // Set viewport to be the size of the FBO
    GL11.glViewport(0, 0, scaleWidth, scaleHeight);
    // Clear the FrameBuffer
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
    // Actual render code!
    gameMap.render();
    // draw the texture from the FBO
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
    GL11.glViewport(0, 0, scaleWidth * scale, scaleHeight * scale);
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex3f(0.0f, 0.0f, 0.0f);
    GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex3f((float) scaleWidth * scale, 0.0f, 0.0f);
    GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex3f((float) scaleWidth * scale, (float) scaleHeight * scale, 0.0f);
    GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex3f(0.0f, (float) scaleHeight * scale, 0.0f);
    GL11.glEnd();
    // Reset the current viewport
    GL11.glViewport(0, 0, scaleWidth * scale, scaleHeight * scale);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glLoadIdentity();
    // let subsystem paint
    if (callback != null) callback.frameRendering();
    // update window contents
    Display.update();

I had to comment out "GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, colID, 0);" because it was throwing a "Function not supported" error. Thanks. |
1 | No performance gain from instanced rendering? I recently worked through this tutorial about instanced rendering. At the end it promises to draw a huge number of instances of one model without performance drops. So I tried some simple instanced rendering to see these effects. I created a VBO containing, four times over, xyz coordinates, vertex color and texcoords float[] backgroundData = new float[] { 1.0f, 1.0f, 0.0f, 1f,1f,1f,1f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1f,1f,1f,1f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1f,1f,1f,1f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1f,1f,1f,1f, 1.0f, 1.0f, 0.0f } I also created a VAO and EBO containing the glVertexAttribPointer calls and indices int[] indices = new int[] { 0,1,2,1,2,3 }. Now I draw everything with glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0, 10000). But the frame rate drops as much as if I had the indices 10,000 times in my EBO. Why is the outcome so different? Did I do something wrong?
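One thing worth checking back-of-envelope style: instancing removes per-draw CPU submission overhead, but both versions above are already a single draw call, so the GPU performs identical vertex and fragment work either way. A rough Python estimate (the 1920x1080 framebuffer size is an assumption) of the fragments shaded per frame when every instance covers the whole screen:

```python
def shaded_fragments(instances, quad_px):
    # Every instance still rasterizes the same quad, so the fragment
    # count scales linearly with the instance count regardless of how
    # the draw call is issued (instanced, or one giant index buffer).
    return instances * quad_px

# 10,000 instances of a quad covering a 1920x1080 framebuffer:
full_hd = 1920 * 1080
print(shaded_fragments(10_000, full_hd))  # 20736000000 fragments per frame
```

If a number that size is what the rasterizer has to chew through, the workload is fill-rate bound, and no draw-call optimization will change the frame rate.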
1 | Selectively applying overlay shadows in a 2D tile based game I'm currently building a top down 2D tile based game using OpenGL. There are three tile layers the background layer, the player sprite layer and the foreground layer. The foreground layer is pretty much used to place translucent tiles that give the impression of a shadow when blended over the two underlying layers. The problem is that many textures for tiles in the background layer already have a shadow applied in their source images. I would like to exclude these background tiles from having the shadow overlaid while still applying it to sprites that may be above these tiles. Any ideas as to how I could go about accomplishing this? In the accompanying image, notice how I only want the foreground shadow applied to the character in the sprite layer to get the result.
1 | What's the equivalent of wglShareLists for Mac OS? I'm trying to share lists between two contexts on Mac OS but despite my research I couldn't come up with an answer so far. I've found that NSOpenGLContext was able to initialize a context with a shared context but not to set it afterward. What's the equivalent of wglShareLists on Mac OS? |
1 | How do I render .dae models? I'm building a game for iOS. I'm quite new to OpenGL, but what I want is to take a 3D model I have made in Google SketchUp and use it in my 3D game. The problem is I don't know how to proceed. I have built 3D graphics in OpenGL before by specifying the vertices by hand, but the 3D shape I have made is too complex for that. Is there a way .dae files can be broken down and re-formed as an OpenGL 3D model?
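Since .dae is just XML (COLLADA), one possible first step is pulling the raw vertex data out with any XML parser and uploading it as ordinary vertex arrays. A minimal, hedged Python sketch — it only grabs the first float_array and ignores the accessor stride and the triangle index list (<p>) that a real loader must honor:

```python
import xml.etree.ElementTree as ET

def load_positions(dae_text):
    # Minimal sketch: pull the first <float_array> out of a COLLADA file
    # and group it into (x, y, z) triples. Real .dae files also need the
    # <accessor> stride and the <p> index list to rebuild triangles.
    ns = {"c": "http://www.collada.org/2005/11/COLLADASchema"}
    root = ET.fromstring(dae_text)
    arr = root.find(".//c:float_array", ns)
    values = [float(v) for v in arr.text.split()]
    return [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]

# Tiny hand-written sample, not real SketchUp output:
sample = """<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema">
  <float_array id="verts">0 0 0 1 0 0 0 1 0</float_array>
</COLLADA>"""
print(load_positions(sample))  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```

In practice an existing loader (Assimp, or converting to a simpler format offline) saves a lot of pain, but the point is that nothing magical is needed: it decomposes into plain vertex/index arrays.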
1 | How to efficiently render a large terrain mesh? Recently I've been stuck on a problem, thinking about the best way to generate terrain in my game. In other projects I normally used heightmaps, so all the core work was based on the engine used, but now this cannot be done because the terrain has millions of specific polygons that must be drawn accurately. Also, many of them cannot be derived from the Y vector (because of polygons hidden beneath others), that is, a heightmap is not useful here. In this case I had to use a COLLADA object. Someone told me to manually divide the model inside software like Blender, but unfortunately this is also not possible because these terrains are created in chunks in another program and loaded into the game afterwards (that's the idea). Therefore it would be a lot of work to be obliged to manually slice them every time. Thus, for a week I've been studying how I could solve this problem and procedurally load this mesh, the terrain, according to the camera frustum, saving as much performance as possible. I came across many documents about procedural mesh generation and I think that my problem could be solved by mapping the mesh into octrees. This is BIG work, at least for me, and that's why I'm here, because I don't want to risk taking the wrong path without first hearing from experienced people. In short, I have millions of vertices and indices that together form the terrain, but for obvious reasons I cannot draw them all at the same time. Some kind of procedure is needed. What's the best way to do that, to treat a big mesh as a terrain? Is there any specific book about this? Is there a best way to implement it? Sorry for any kind of mistake, I'm very much a novice in this area.
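Before committing to full octrees, the core idea can be sketched very simply: split the terrain into chunks (which already exists here, since the terrain is authored in chunks) and each frame draw only the chunks near the camera — or, better, inside the frustum. A hedged Python sketch of the naive distance-based version, with made-up chunk centers:

```python
def visible_chunks(chunks, cam_pos, view_dist):
    # Naive chunk culling: keep only chunks whose center lies within
    # view_dist of the camera. A real frustum test would check each
    # chunk's AABB against the six frustum planes instead, and an
    # octree/quadtree just makes this rejection hierarchical.
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [c for c in chunks if dist2(c, cam_pos) <= view_dist ** 2]

chunks = [(0, 0, 0), (100, 0, 0), (1000, 0, 0)]  # hypothetical chunk centers
print(visible_chunks(chunks, (0, 0, 0), 150))    # [(0, 0, 0), (100, 0, 0)]
```

Each chunk would own its own VBO/IBO, so a culled chunk costs nothing on the GPU; the tree structure only speeds up deciding which chunks pass the test.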
1 | Trouble getting shadow maps working I am trying to implement shadow maps in my game following this tutorial. For some reason, the light is not being occluded. In the above screenshot, the big white sprite in the foreground is a rendering of what the occlusion map looks like. In the background, you can see the result does not produce any shadows. It's hard to see, but in the top left it shows the shadow map. Enlarged version of the shadow map The occlusion map and the shadow map seem to be generating correctly, so it must be an issue with how I'm taking it into account when rendering the light. Here is the fragment shader for the light version 330 uniform sampler2D uDiffuseTexture uniform sampler2D uNormalsTexture uniform sampler2D uShadowMap uniform vec4 uLightColor uniform float uConstAtten uniform float uLinearAtten uniform float uQuadradicAtten uniform float uColorIntensity uniform vec4 uAmbientColor in vec2 TexCoords in vec2 GeomSize out vec4 FragColor float sample(vec2 coord, float r) return step(r, texture2D(uShadowMap, coord).r) float occluded() float PI 3.14 vec2 normalized TexCoords.st 2.0 1.0 float theta atan(normalized.y, normalized.x) float r length(normalized) float coord (theta PI) (2.0 PI) vec2 tc vec2(coord, 0.0) float center sample(tc, r) float sum 0.0 float blur (1.0 GeomSize.x) smoothstep(0.0, 1.0, r) sum sample(vec2(tc.x 4.0 blur, tc.y), r) 0.05 sum sample(vec2(tc.x 3.0 blur, tc.y), r) 0.09 sum sample(vec2(tc.x 2.0 blur, tc.y), r) 0.12 sum sample(vec2(tc.x 1.0 blur, tc.y), r) 0.15 sum center 0.16 sum sample(vec2(tc.x 1.0 blur, tc.y), r) 0.15 sum sample(vec2(tc.x 2.0 blur, tc.y), r) 0.12 sum sample(vec2(tc.x 3.0 blur, tc.y), r) 0.09 sum sample(vec2(tc.x 4.0 blur, tc.y), r) 0.05 return sum smoothstep(1.0, 0.0, r) float calcAttenuation(float distance) float linearAtten uLinearAtten distance float quadAtten uQuadradicAtten distance distance float attenuation 1.0 (uConstAtten linearAtten quadAtten) return attenuation vec3 calcFragPosition(void) 
return vec3(TexCoords GeomSize, 0.0) vec3 calcLightPosition(void) return vec3(GeomSize 2.0, 1.0) float calcDistance(vec3 fragPos, vec3 lightPos) return length(fragPos lightPos) vec3 calcLightDirection(vec3 fragPos, vec3 lightPos) return normalize(lightPos fragPos) vec4 calcFinalLight(vec2 worldUV, vec3 lightDir, float attenuation) float diffuseFactor dot(normalize(texture2D(uNormalsTexture, worldUV).rgb), lightDir) vec4 diffuse vec4(0.0) vec4 lightColor uLightColor uColorIntensity if(diffuseFactor gt 0.0) diffuse vec4(texture2D(uDiffuseTexture, worldUV.xy).rgb, 1.0) diffuse diffuseFactor lightColor diffuseFactor else discard return (uAmbientColor diffuse lightColor) attenuation void main(void) vec3 fragPosition calcFragPosition() vec3 lightPosition calcLightPosition() float distance calcDistance(fragPosition, lightPosition) float attenuation calcAttenuation(distance) vec2 worldPos gl FragCoord.xy vec2(1024, 768) vec3 lightDir calcLightDirection(fragPosition, lightPosition) lightDir (lightDir 0.5) 0.5 float atten calcAttenuation(distance) FragColor calcFinalLight(worldPos, lightDir, atten) vec4(vec3(1.0), occluded()) |
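For reference, the polar remapping the fragment shader performs (quad UV mapped to [-1,1], then expressed as angle/radius so the 1D shadow map is indexed by angle) can be sanity-checked on the CPU. A Python transcription of just that part of the shader:

```python
import math

def polar_coord(tex_coords):
    # CPU-side reference of the shader's polar mapping: remap the quad's
    # [0,1] UV to [-1,1], then express the fragment as (angle, radius)
    # so the 1D shadow map can be indexed by angle.
    nx = tex_coords[0] * 2.0 - 1.0
    ny = tex_coords[1] * 2.0 - 1.0
    theta = math.atan2(ny, nx)
    r = math.hypot(nx, ny)
    u = (theta + math.pi) / (2.0 * math.pi)  # shadow-map x coordinate
    return u, r

# Center of the light quad maps to radius 0:
print(polar_coord((0.5, 0.5)))  # (0.5, 0.0)
```

If these values come out as expected but the light still isn't occluded, the mismatch is likely between this mapping and the one used when *writing* the shadow map — both passes must agree on the angle convention and on what radius is stored.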
1 | Render to texture doesn't work on nvidia cards. OpenGL 3.3 I've implemented some postprocessing effects (DOF, HDR, Bloom) into my engine. I've tested it on AMD card which supports OpenGL 4.2. Yesterday I've made a test on NVidia card which supports only OpenGL 3.3. I tried to write my code to works also in OpenGL 3.3 but I was suprised when I saw something like on the screen It seems that textures are filled with random values and never change during app life (only when I'm changing dimensions of textures so basically recreate them). It happens mostly when I want downscale texture to some size (I use 2 textures to do that and a shader). Here is some examples of my code which implements Bloom effect Creating framebuffer if (! p buffer)glGenFramebuffers(1, amp p buffer) generate postprocess buffer glBindFramebuffer(GL DRAW FRAMEBUFFER, p buffer) for (int i 0 i lt 2 i ) lens i .create(x D.lensDownsample(), y D.lensDownsample(), 0, IMAGE 16F 3, IMAGE 2D, true, FILTER LINEAR) create Lens Flare texture (with half resolution) glFramebufferTexture2D(GL DRAW FRAMEBUFFER, GL COLOR ATTACHMENT2 i, GL TEXTURE 2D, lens i .get(), 0) Render to texture glBindFramebuffer(GL DRAW FRAMEBUFFER, p buffer) first downsample image glDrawBuffer(GL COLOR ATTACHMENT2) glClear(GL COLOR BUFFER BIT) glActiveTexture(GL TEXTURE0) aux aux number .lock() Shaders "ScaleBias" .set(true) Shaders "ScaleBias" .set("scale", Vector4(1.2f)) Shaders "ScaleBias" .set("bias", Vector4( 0.4)) Shaders "ScaleBias" .set("texelSize", Vector2(1 (float)D.x() D.lensDownsample(), 1 (float)D.y() (float)D.lensDownsample())) glViewport(0, 0, D.x() D.lensDownsample(), D.y() D.lensDownsample()) Quad() ScaleBias shader void main() FragColor max(vec4(0.0), texture(col, gl FragCoord.xy texelSize) bias) scale What "create" method actually does here if(! 
image)glGenTextures(1, amp image) glBindTexture(GL TEXTURE 2D, image) glTexParameteri( gl target, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri( gl target, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameteri( gl target, GL TEXTURE WRAP R, GL CLAMP TO EDGE) glTexParameteri( gl target, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri( gl target, GL TEXTURE MAG FILTER, GL LINEAR) glTexImage2D(GL TEXTURE 2D, 0, GL RGB16F, x, y, 0, GL RGB,GL FLOAT, NULL) I used debug extension to check if there is any invalid parameters but debuging shows nothing interesting. I tried make G Buffer textures as GL FLOAT but they works ok. I tried make shader as simple as possible (return color texture) but error still was valid. I've observed similar problem on newer NVidia cards but the only difference was that textures are clear (black). But they still didn't update. Framebuffer doesn't output any errors. Could you point me out the cause of this bug or give some clue? |
1 | glOrtho setting view I am duplicating this thread from Stack Overflow, please remove it if that is not allowed. I'm completely new to OpenGL. I have this problem I have a quite complicated scene, and I am looking at it from the front (default camera position). The way I have seen to move the camera is using the gluLookAt() function to set the point you want to look at, and glTranslatef() to move the camera position. I need to move the camera with different data my data is not the point I want to look at rather, I have the data of the projection plane, determined by a viewing vector and a point in the plane. Is there a way to set the camera using this data rather than a "look at" point? I am using an orthographic projection (glOrtho()), so everything is projected onto the projection plane. UPDATE To be more precise the point I have is a point in the plane onto which I want to project. I have in fact not only the plane, but the square onto which I need to project, defined. So I have a scene somewhere in space, and a square somewhere away from that scene. I want to project the scene onto that square and show that projection on screen. I hope that made it a bit clearer what I need to achieve ... gluLookAt defines a point in the scene at which to look, and the position of the eye, where the eye is one point in space and the projection offers perspective. With orthographic projection, you do not have one point, you have a whole square onto which to project, so how can I define that? For easier understanding of what I am trying to achieve here is the image of what glOrtho normally does And here is an image of what I am trying to achieve
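One possible approach, sketched in Python: a plane normal plus a point on the plane is already enough information for gluLookAt — use the point as the eye and point + normal as the look-at target, and let glOrtho's left/right/bottom/top define the projection square. The world-up vector below is an assumption (it only needs to not be parallel to the normal):

```python
import math

def look_along(normal, point, up=(0.0, 1.0, 0.0)):
    # Sketch: build the camera basis for an orthographic view whose
    # viewing direction is the plane normal and whose eye sits at a
    # point on the plane. gluLookAt(eye=point, center=target, up=up)
    # produces the same orientation.
    def norm(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    f = norm(normal)                      # forward = plane normal
    r = norm(cross(f, up))                # right
    u = cross(r, f)                       # true up
    target = tuple(p + n for p, n in zip(point, f))
    return f, r, u, target                # feed eye=point, center=target to gluLookAt

f, r, u, target = look_along((0.0, 0.0, -1.0), (0.0, 0.0, 5.0))
print(target)  # (0.0, 0.0, 4.0)
```

With an orthographic projection the eye's exact position along the normal doesn't change the image, only near/far clipping — so any point on the plane works as the eye.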
1 | Linking error at tessellation shaders in GLSL I'm testing the triangle tessellation from the link http prideout.net blog ?p 48 shaders . All the shader are compiled correctly, but when I try to link the program using the command glLinkProgram(programID) I got the following error Tessellation control info (0) error C6029 No input primitive type Tessellation evaluation info (0) error c7005 No tessellation primitive mode specified Geometry info (0) error C6022 No input primitive type (0) error C6029 No ouput primitive type Not valid It's so strange I declare output for TC Shader using command layout(vertices 3) out And input layout for TE shader using command layout(triangles, equal spacing, cw) in Why I still got this error? I hope to see you answer about it. I put my shaders in below Vertex shader version 410 core in vec4 Position out vec3 vPosition void main() vPosition Position.xyz TC shader version 410 core layout(vertices 3) out in vec3 vPosition out vec3 tcPosition define ID gl InvocationID void main() float TessLevelInner 3 float TessLevelOuter 2 tcPosition ID vPosition ID if (ID 0) gl TessLevelInner 0 TessLevelInner gl TessLevelOuter 0 TessLevelOuter gl TessLevelOuter 1 TessLevelOuter gl TessLevelOuter 2 TessLevelOuter TE Shader version 410 core TessEval layout(triangles, equal spacing, cw) in in vec3 tcPosition out vec3 tePosition out vec3 tePatchDistance uniform mat4 Projection uniform mat4 Modelview void main() vec3 p0 gl TessCoord.x tcPosition 0 vec3 p1 gl TessCoord.y tcPosition 1 vec3 p2 gl TessCoord.z tcPosition 2 tePatchDistance gl TessCoord tePosition normalize(p0 p1 p2) gl Position Projection Modelview vec4(tePosition, 1) Geometry shader version 410 core geometry shader layout(triangles) in layout(triangle strip, max vertices 3) out in vec3 tePosition 3 in vec3 tePatchDistance 3 out vec3 gFacetNormal out vec3 gPatchDistance out vec3 gTriDistance uniform mat4 Modelview uniform mat3 NormalMatrix void main() vec3 A tePosition 2 tePosition 0 vec3 B 
tePosition 1 tePosition 0 gFacetNormal NormalMatrix normalize(cross(A, B)) gPatchDistance tePatchDistance 0 gTriDistance vec3(1, 0, 0) gl Position gl in 0 .gl Position EmitVertex() gPatchDistance tePatchDistance 1 gTriDistance vec3(0, 1, 0) gl Position gl in 1 .gl Position EmitVertex() gPatchDistance tePatchDistance 2 gTriDistance vec3(0, 0, 1) gl Position gl in 2 .gl Position EmitVertex() EndPrimitive() Fragment Shader version 410 core fragment shader out vec4 FragColor in vec3 gFacetNormal in vec3 gTriDistance in vec3 gPatchDistance in float gPrimitive uniform vec3 LightPosition float amplify(float d, float scale, float offset) d scale d offset d clamp(d, 0, 1) d 1 exp2( 2 d d) return d void main() vec3 AmbientMaterial vec3(0.04f, 0.04f, 0.04f) vec3 DiffuseMaterial vec3(0, 0.75, 0.75) vec3 LightPosition vec3(0.25, 0.25, 1) vec3 N normalize(gFacetNormal) vec3 L LightPosition float df abs(dot(N, L)) vec3 color AmbientMaterial df DiffuseMaterial float d1 min(min(gTriDistance.x, gTriDistance.y), gTriDistance.z) float d2 min(min(gPatchDistance.x, gPatchDistance.y), gPatchDistance.z) color amplify(d1, 40, 0.5) amplify(d2, 60, 0.5) color FragColor vec4(color, 1.0) |
1 | What's the best practice to use the pbo to upload multi textures? I have a basic model to upload textures as shown in the following picture. I design this for several reasons Only the primary thread owns the OpenGL context, so I choose to create buffers, map buffers and unmap buffers in the primary thread. I have many pictures to load and I don't want them to block the primary thread, so I use the subthread to load images and copy the memory. Here are my questions Is my model correct? Is my model the best practice? Should I create a PBO for each picture or create two PBO for all pictures and use them in turn? Should I use a shared context? Thank you for helping me out |
1 | Compressing textures and recompressing textures Which tools are considered best quality for compressing textures for use in OpenGL? Which can be used from the Linux commandline? And which lossless compressors give good ratio speed on compressed textures? (I see that Skyrim's BSA archives are typically only half the size of the DDSes they contain) |
1 | OpenGL missing GL SPECULAR light on the texture I'am missing specular lighting on the texture. I have include lt GL glext.h gt in the project, so basically I used glLightModeli(GL LIGHT MODEL COLOR CONTROL EXT, GL SEPARATE SPECULAR COLOR EXT) for specular effect. My init code glEnable(GL DEPTH TEST) glHint(GL PERSPECTIVE CORRECTION HINT, GL NICEST) glEnable(GL CULL FACE) GLfloat AmbientLight 0.3, 0.3, 0.3, 1.0 GLfloat DiffuseLight 0.7, 0.7, 0.7, 1.0 GLfloat SpecularLight 1.0, 1.0, 1.0, 1.0 GLfloat Shininess 90.0 GLfloat Emission 0.0, 0.0, 0.0, 1.0 GLfloat Global Ambient 0.1, 0.1, 0.1, 1.0 GLfloat LightPosition 7.0, 7.0, 7.0, 1.0 glLightModelfv(GL LIGHT MODEL AMBIENT, Global Ambient) glLightfv(GL LIGHT0, GL AMBIENT, AmbientLight) glLightfv(GL LIGHT0, GL DIFFUSE, DiffuseLight) glLightfv(GL LIGHT0, GL SPECULAR, SpecularLight) glLightfv(GL LIGHT0, GL POSITION,LightPosition) glLightf(GL LIGHT0, GL CONSTANT ATTENUATION, 0.05f ) glLightf(GL LIGHT0, GL LINEAR ATTENUATION, 0.03f ) glLightf(GL LIGHT0, GL QUADRATIC ATTENUATION, 0.002f) glMaterialfv(GL FRONT AND BACK, GL AMBIENT, AmbientLight) glMaterialfv(GL FRONT AND BACK, GL DIFFUSE, DiffuseLight) glMaterialfv(GL FRONT AND BACK, GL SPECULAR, SpecularLight) glMaterialfv(GL FRONT AND BACK, GL SHININESS, Shininess) glMaterialfv(GL FRONT AND BACK, GL EMISSION, Emission) glShadeModel(GL SMOOTH) glEnable(GL LIGHT0) glEnable(GL LIGHTING) glEnable(GL COLOR MATERIAL) glColorMaterial(GL FRONT AND BACK, GL AMBIENT AND DIFFUSE) My render code glClearColor( 0.117f, 0.117f, 0.117f, 1.0f ) glClearDepth(1.0f) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) load texture glEnable(GL TEXTURE 2D) glTexEnvf(GL TEXTURE ENV, GL TEXTURE ENV MODE, GL MODULATE) glEnable(GL LIGHTING) glLightModeli(GL LIGHT MODEL COLOR CONTROL EXT, GL SEPARATE SPECULAR COLOR EXT) glColor4f(1.0, 1.0, 1.0, 0.2) glDisable(GL COLOR MATERIAL) render geometry here glFlush() What is missing here? |
1 | Trying to translate object I'm trying to tranlate a matrix by xy but the result is strange. can someone help me translate and rotate the matrix properly? void Transform Update(Transform transform, GLfloat depth) Matrix4 SetToIdentity(transform gt transformationMatrix) Matrix4 Set(transform gt transformationMatrix, 0, 3, transform gt position gt x) Matrix4 Set(transform gt transformationMatrix, 1, 3, transform gt position gt y) Matrix4 Set(transform gt transformationMatrix, 2, 3, depth) Here is my Matrix set value. void Matrix4 Set(Matrix4 m, GLuint line, GLuint column, GLfloat value) m gt data line 4 column value Here is where I make the matrix to the identity matrix void Matrix4 SetToIdentity(Matrix4 m) GLfloat data 16 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 memcpy(m gt data, data, sizeof(GLfloat) 16 ) This is where I render my mesh and update shader values void Renderer Render(Renderer renderer, Camera camera) glUseProgram(renderer gt shader gt id) Transform Update(renderer gt gameObject gt transform, renderer gt depth) GLint location Shader GetUniformLocation(renderer gt shader, "transformationMatrix") glProgramUniformMatrix4fv(renderer gt shader gt id, location, 1, GL FALSE, renderer gt gameObject gt transform gt transformationMatrix gt data) GLint location2 Shader GetUniformLocation(renderer gt shader, "projectionMatrix") glProgramUniformMatrix4fv(renderer gt shader gt id, location2, 1, GL FALSE, camera gt projectionMatrix) GLint location3 Shader GetUniformLocation(renderer gt shader, "viewMatrix") glProgramUniformMatrix4fv(renderer gt shader gt id, location3, 1, GL FALSE, camera gt viewMatrix) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, renderer gt texture gt id) Mesh Render(renderer gt mesh) Here is my shader transforming the vertices version 400 core in vec3 position in vec2 textureCoords out vec2 pass textureCoords uniform mat4 transformationMatrix uniform mat4 projectionMatrix uniform mat4 viewMatrix void main(void) vec4 
worldPosition transformationMatrix vec4(position, 1.0) vec4 positionRelativeToCamera viewMatrix worldPosition gl Position projectionMatrix positionRelativeToCamera pass textureCoords textureCoords Here is how I get the view and projection matrix void Camera Render(Screen screen, Camera camera) glClearColor(camera gt backgroundColor gt red, camera gt backgroundColor gt green, camera gt backgroundColor gt blue, camera gt backgroundColor gt alpha) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glLoadIdentity() glTranslatef(camera gt transform gt position gt x, camera gt transform gt position gt y, 0) glRotatef(camera gt transform gt angle, 0, 0, 1) glGetFloatv(GL PROJECTION MATRIX, camera gt projectionMatrix) glGetFloatv(GL MODELVIEW MATRIX, camera gt viewMatrix) The current output when I change X is this |
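One likely culprit, shown with a small Python model: Matrix4 Set stores values row-major (data[line * 4 + column]), but glProgramUniformMatrix4fv with transpose = GL_FALSE interprets the array column-major. A translation written at row 0, column 3 therefore lands in the bottom row of the matrix the shader sees, corrupting the w row instead of translating:

```python
def set_rc(m, row, col, value):
    # Same indexing as the question's Matrix4_Set: row-major storage.
    m[row * 4 + col] = value

def identity():
    return [1.0 if i % 5 == 0 else 0.0 for i in range(16)]

m = identity()
set_rc(m, 0, 3, 7.0)  # put tx at (row 0, column 3), row-major convention
# glProgramUniformMatrix4fv with transpose = GL_FALSE reads the array
# column-major, where the translation column occupies slots 12..14.
# Uploading with transpose = GL_TRUE (or storing tx at m[12]) fixes it.
print(m[3], m[12])  # 7.0 0.0
```

So either flip the transpose flag to GL_TRUE when uploading, or write the translation into elements 12, 13, 14 — but not both at once.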
1 | Material, Pass, Technique and shaders I'm trying to make a clean and advanced Material class for the rendering of my game, here is my architecture class Material void sendToShader() program->sendUniform( nameInShader, valueInMaterialOrOther ) private Blend blendmode // Alpha, Add, Multiply, ... Color ambient Color diffuse Color specular DrawingMode drawingMode // Lines, Triangles, ... Program program std::map<string, TexturePacket> textures // list of textures, with TexturePacket Texture , vec2 offset, vec2 scale How can I handle the link between the Shader and the Material? (the sendToShader method) If the user wants to send additional information to the shader (like time elapsed), how can I allow that? (The user can't edit the Material class.) Thanks!
1 | LWJGL 3 how to convert screen coordinate to world coordinate? I'm trying to convert a screen coordinate to a world coordinate on a mouse click event. For LWJGL 3 no GLU utility class is available, whereas LWJGL 2 had one. I'm using the JOML math classes and wrote the following code, but it returns the wrong world coordinate I'm doing something wrong and couldn't figure out what. On program init, I get the uniform location and the viewport viewProjMatrixUniform = glGetUniformLocation(this.program, "viewProjMatrix") IntBuffer viewportBuffer = BufferUtils.createIntBuffer(4) int[] viewport = new int[4] glGetIntegerv(GL_VIEWPORT, viewportBuffer) viewportBuffer.get(viewport) In the render loop, I calculate viewProjMatrix viewProjMatrix .setPerspective((float) Math.toRadians(30), (float) width / height, 0.01f, 500.0f) // define min and max planes .lookAt(eye_x, eye_y, eye_z, eye_x, eye_y, 0.0f, 0.0f, 2.0f, 0.0f) glUniformMatrix4fv(viewProjMatrixUniform, false, viewProjMatrix.get(matrixBuffer)) I convert the screen coordinate to a world coordinate with the following code DoubleBuffer mouseXBuffer = BufferUtils.createDoubleBuffer(1) DoubleBuffer mouseYBuffer = BufferUtils.createDoubleBuffer(1) glfwGetCursorPos(window, mouseXBuffer, mouseYBuffer) double x = mouseXBuffer.get(0) double y = mouseYBuffer.get(0) System.out.println("clicked at " + x + " " + y) Vector3f v3f = new Vector3f() viewProjMatrix.unproject((float)x, (float)y, 0f, viewport, v3f) System.out.println("world coordinate " + v3f.x + " " + v3f.y) Here's the full source code https://gist.github.com/digz6666/48bb433c83801ea4b82fa194f05b4f02
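Two common pitfalls with this setup: GLFW's cursor y grows downward while OpenGL's window y grows upward (so y must be flipped against the window height before unprojecting), and unprojection works with the *inverse* of the view-projection matrix. A hedged Python sketch of the whole transform, using plain nested lists instead of JOML:

```python
def unproject(win_x, win_y, depth, inv_view_proj, viewport):
    # Window coords -> NDC (note the y flip: GLFW cursor y grows
    # downward, NDC y grows upward) -> multiply by the inverse
    # view-projection matrix -> perspective divide by w.
    vx, vy, vw, vh = viewport
    ndc = [
        (win_x - vx) / vw * 2.0 - 1.0,
        1.0 - (win_y - vy) / vh * 2.0,   # flip y
        depth * 2.0 - 1.0,
        1.0,
    ]
    out = [sum(inv_view_proj[r][c] * ndc[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return (out[0] / w, out[1] / w, out[2] / w)

# With an identity matrix, the screen center at depth 0.5 is the NDC origin:
ident = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(unproject(400, 300, 0.5, ident, (0, 0, 800, 600)))  # (0.0, 0.0, 0.0)
```

JOML expects the OpenGL window convention, so passing the raw GLFW cursor y produces a vertically mirrored result — flipping with `windowHeight - y` before the call is the usual fix.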
1 | Shadertoy getting help moving to GLSL I spent some time writing a shader on Shadertoy, but now, when I try to translate my code to OpenGL, I don't know how to calculate the uv that they describe like this vec2 uv = fragCoord.xy / iResolution.xy How can I do that in a typical OpenGL 330 core fragment shader?
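The translation is direct: gl_FragCoord.xy plays the role of fragCoord, and iResolution becomes a uniform you upload yourself each time the viewport changes (e.g. `uniform vec2 uResolution` — the name is an assumption). The math itself is just a component-wise divide, shown here in Python:

```python
def shadertoy_uv(frag_coord, resolution):
    # Shadertoy's `uv = fragCoord.xy / iResolution.xy` is only a divide:
    # gl_FragCoord.xy is the fragment's window position in pixels,
    # resolution is the viewport size, so uv spans [0,1] across the screen.
    return (frag_coord[0] / resolution[0], frag_coord[1] / resolution[1])

print(shadertoy_uv((960.0, 540.0), (1920.0, 1080.0)))  # (0.5, 0.5)
```

In the GLSL 330 shader that becomes `vec2 uv = gl_FragCoord.xy / uResolution;` with the resolution set via glUniform2f whenever the window is resized.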
1 | Geometry shader wireframe not rendering correctly GLSL OpenGL C Im trying to make a tool for skinning 3D models, and as part of that, I need to show faces wireframed, making use of the geometry shader stage. Im following the approach suggested here and here. My problem however, is that it ends up looking like this Where some of the lines get thicker when the faces are oriented in a specific way. This is my geometry shader (The vertex shader just passes vertices, so theres no need to show it) version 400 layout(triangles) in layout(triangle strip, max vertices 3) out noperspective out vec3 gDist void main() 800,600 window size(make uniform later) vec2 p0 vec2(800,600) gl in 0 .gl Position.xy gl in 0 .gl Position.w vec2 p1 vec2(800,600) gl in 1 .gl Position.xy gl in 0 .gl Position.w vec2 p2 vec2(800,600) gl in 2 .gl Position.xy gl in 0 .gl Position.w vec2 v0 p2 p1 vec2 v1 p2 p0 vec2 v2 p1 p2 float area abs(v1.x v2.y v1.y v2.x) gDist vec3(area length(v0),0,0) gl Position gl in 0 .gl Position EmitVertex() gDist vec3(0,area length(v1),0) gl Position gl in 1 .gl Position EmitVertex() gDist vec3(0,0,area length(v2)) gl Position gl in 2 .gl Position EmitVertex() EndPrimitive() and frag shader version 400 noperspective in vec3 gDist const vec4 wire color vec4(0.0,0.5,0.0,1) const vec4 fill color vec4(1,1,1,0) void main() float d min(gDist 0 ,min(gDist 1 ,gDist 2 )) float i exp2( 2 d d) gl FragColor i wire color (1.0 i) fill color So what am I doing wrong here? I feel like im missing something. Is anyone familiar with this? |
1 | Transparency problem, phong model It's the first time I'm trying to implement the Phong lighting model. I'm pretty sure everything is working fine. I was experimenting using different materials, meaning I played with Kd,Ks,Ka and Ns values when I came across this problem. Whenever I use a material with alpha value less than one I get this weird result At first I thought it has to do with the normals of the model since they are all supposed to point away from the model but it doesn't seem to be the case. Any ideas? |
1 | OpenGL Water reflection seems to follow camera yaw and pitch I'm attempting to add reflective water to my procedural terrain. I've got it to a point where it seems like it's reflecting, however when I move the camera left/right/up/down the reflections move with it. I believe the problem has something to do with the way I convert from world space to clip space for the projective texture mapping. Here is a gif of what is happening http://i.imgur.com/PDta5Qu.gifv Vertex Shader #version 400 in vec4 vPosition out vec4 clipSpace uniform mat4 model uniform mat4 view uniform mat4 projection void main () { clipSpace = projection * view * model * vec4(vPosition.x, 0.0, vPosition.z, 1.0); gl_Position = clipSpace; } Fragment Shader #version 400 in vec4 clipSpace out vec4 frag_colour uniform sampler2D reflectionTexture void main () { vec2 ndc = (clipSpace.xy / clipSpace.z) / 2.0 + 0.5; vec2 reflectTexCoords = vec2(ndc.x, ndc.y); vec4 reflectColour = texture(reflectionTexture, reflectTexCoords); frag_colour = reflectColour; } I'm using this code to move the camera under the water's surface to get the reflection float distance = 2 * (m_camera->GetPosition().y - m_water->GetHeight()); m_camera->m_cameraPosition.y -= distance; m_camera->m_cameraPitch = -m_camera->m_cameraPitch; If this is insufficient code to diagnose the problem, I'll post more. I tried to keep it to what I thought could be the problem.
1 | Can't understand these UV texture coordinates (range is NOT 0.0 to 1.0) I am trying to draw a simple 3D object generated by Google SketchUp 8 Pro in my WebGL app the model is a simple cylinder. I opened the exported file and copied the vertex positions, indices, normals and texture coordinates into a .json file in order to be able to use them in JavaScript. Everything seems to work fine, except for the texture coordinates, which have some pretty big values, like 46.331676, and also negative values. Now I don't know if I am wrong, but aren't 2D texture coordinates supposed to be in a range from 0.0 to 1.0 only? Drawing the model using these texture coordinates gives me a totally weird look, and I can only see the texture properly when I am very close (not really me, the cam) to the model, as if the texture has been insanely reduced in size and repeated infinitely across the model's faces. (Yes, I am using GL_REPEAT for the texture wrap mode.) What I noticed is that if I take all these coordinates and divide them by 10 or 100 I get a much more "normal" look, but still not in the 0.0 to 1.0 range. Here's my json file http://pastebin.com/Aa4wvGvv Here are my GLSL shaders http://pastebin.com/DR4K37T9 And here is the .X file exported by SketchUp http://pastebin.com/hmYAJZWE I've also tried to draw this model using XNA, but it's still not working, using these HLSL shaders http://pastebin.com/RBgVFq08 I tried exporting the same model to different formats collada, fbx, and x. All of those yield the same thing.
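UVs outside [0,1] are legal and usually intentional: with GL_REPEAT the sampler uses only the fractional part of the coordinate, so a value like 46.33 simply tiles the texture many times across a face — which matches the "tiny, infinitely repeated" look, and suggests the exporter baked a tiling factor into the coordinates. A quick Python illustration of the wrap:

```python
import math

def repeat_wrap(u):
    # GL_REPEAT samples at the fractional part of the coordinate, so
    # UVs like 46.33 or -0.25 are valid: the texture just tiles.
    return u - math.floor(u)

print(repeat_wrap(46.331676))  # ~0.331676
print(repeat_wrap(-0.25))      # 0.75
```

So the data isn't corrupt; the question is whether SketchUp's material really meant the texture to tile that densely (texture scale in the material), or whether a per-texture scale/offset from the export is being dropped when copying into the .json.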
1 | OpenGL calculate circle rotation around a given point I'm trying to rotate an object around a certain point. If this point is the center of my world space I use the following algorithm glm::vec3 center is the center of the current object each frame it will get updated. glm::mat4 transformationMatrix is the 4x4 matrix I am "uploading" to my shader later on it will get multiplied by the vertex position. t_point is the point I'd like to rotate around t_deflection is the impact for each axis (in order to draw ellipses instead of circles) t_speed is in my case the current time since the game started (e.g. glfwGetTime()) void rotateByPointY( const glm::vec3 &t_point, const glm::vec3 &t_deflection, const float &t_speed ) { float radius = glm::length( t_point - center ); glm::vec3 newPosition( glm::sin( t_speed ) * radius * t_deflection.x, center.y, glm::cos( t_speed ) * radius * t_deflection.z ); translationMatrix = glm::translate( glm::mat4( 1.0f ), newPosition ); center = newPosition; } How can I take into account a point other than the center of my world in the calculation? EDIT So my rotateByPointY calculates the translation around the world center. Therefore I am adding the new center vector to my new position translationMatrix = glm::translate( glm::mat4( 1.0f ), newPosition + t_point ); center = newPosition + t_point; That is working now!
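The general recipe for orbiting an arbitrary pivot is offset-then-translate: compute the circular offset around the origin, then add the pivot back in — which is exactly what the EDIT arrives at. A Python sketch of that corrected formula (axis names follow the question's Y-axis orbit; the per-axis deflection scales the circle into an ellipse):

```python
import math

def orbit(point, radius, deflection, angle):
    # Rotate around an arbitrary pivot: build the offset on a circle of
    # the given radius, scale per-axis for ellipses, then translate by
    # the pivot. Omitting the "+ point[...]" terms is what made the
    # original version orbit the world origin instead.
    x = point[0] + math.sin(angle) * radius * deflection[0]
    z = point[2] + math.cos(angle) * radius * deflection[1]
    return (x, point[1], z)

# Orbiting pivot (10, 0, 0) at radius 2, angle 0 -> offset along +z:
print(orbit((10.0, 0.0, 0.0), 2.0, (1.0, 1.0), 0.0))  # (10.0, 0.0, 2.0)
```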
1 | OpenGL multiple mesh management I've been working on coding a new 3D engine in Java. For the moment at least I'm sticking to openGL... Currently I'm reworking how meshes get transformed and then drawn. ATM each mesh created has its own VBO and IBO along with its own draw method so if I wanted to draw multiple meshes it would look like this mesh1 MeshLoader.loadMesh("cube.obj") mesh2 MeshLoader.loadMesh("pyramid.obj") mesh1.addToBuffers() adds vertex and index data to vbo and ibo mesh2.addToBuffers() mesh1.draw() mesh2.draw() The draw method defined in class Mesh enables vertex attribute arrays, binds the bufffers, and then draws elements based on the ibo. Then I send my projection matrix (transformation and perspective) as a uniform to the vertex shader. QUESTION Games are made of many many meshes and each has to be identified so that transformation can be applied. I want to be able to apply transformations (translate, rotate, scale) directly to a given mesh, independent of the other meshes. In other words I want to move mesh1 and only mesh1 5 units to the right (on its own axis) without it affecting mesh2. How do I do all this without making a million draw calls to the gpu? Any advice appreciated. |
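A common pattern, sketched in Python: keep one set of GPU buffers per unique mesh, and give every *placed object* its own model matrix that is uploaded as a uniform right before that object's draw call — so moving one instance never touches the others, and identical meshes share their VBO/IBO. (The class and method names below are made up for illustration; for very large counts of the same mesh, instanced arrays replace the per-object uniform.)

```python
class MeshInstance:
    # Many instances share one mesh (VBO/IBO id) but each owns its own
    # transform; the renderer issues one draw per instance with the
    # instance's model matrix bound as a uniform.
    def __init__(self, mesh_id):
        self.mesh_id = mesh_id
        self.position = [0.0, 0.0, 0.0]

    def translate(self, dx, dy, dz):
        self.position[0] += dx
        self.position[1] += dy
        self.position[2] += dz

    def model_matrix(self):
        # Column-major 4x4 translation matrix, the layout
        # glUniformMatrix4fv expects with transpose = GL_FALSE.
        tx, ty, tz = self.position
        return [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, tx, ty, tz, 1]

a = MeshInstance("cube.obj")
b = MeshInstance("cube.obj")   # same mesh data, independent transform
a.translate(5.0, 0.0, 0.0)
print(a.model_matrix()[12], b.model_matrix()[12])  # 5.0 0.0
```

This is still one glDrawElements per object, not per triangle — the "million draw calls" worry only bites when the object count itself gets huge, at which point batching or instancing kicks in.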
1 | Know if you're fully utilizing the GPU I render 17,000 VAOs each frame, 2,840,386 triangles, only applying a texture, nothing else. I have three computers and the performance across them is not as expected. Cheap laptop (i3 4010U & Intel HD 4400) runs at 20 fps. Desktop1 (FX 9590 & GT 630) runs at 65 fps. Desktop2 (i7 5820k & GTX 970) runs at 105 fps. As the game is in a pretty early stage, the only CPU task each frame is to loop through a 3-dimensional map containing VAO ids and then render them. When I look at the CPU and GPU benchmarks of my systems, I would expect a much bigger difference between Desktop2 and the two other systems. The GTX 970's 3D score (8662) is more than 15 times better than the Intel HD 4400's (546), though it only runs 5 times as fast. Even more ridiculous: the GTX 970's 3D score is 11 times better than the GT 630's (797), but it runs only 1.6 times faster. Even without taking the benchmarks into consideration I would still expect a bigger difference. When I compare the performance differences in other games (like GTA V) across my systems, there's a much bigger difference. Therefore I know the problem is my game, and not a hardware or driver related issue. Am I right that I should expect larger differences between the different systems, and if yes: how may I find the issue and how can I solve it? Edit: I've read that a GTX 590 is capable of 3.2 billion triangles per second, whereas my game is only able to squeeze 300 million out of a GTX 970. Edit 2: I've tested my Desktop1 (GT 630), which runs 1080p at 65 fps, and if I change the resolution to 1x1, I only get a performance boost of 20 fps (85 fps).
1 | Storing attributes in static geometry I have a Minecraft-like world where I statically create one instance of each tile type and then place it around the world. However, I don't know how to actually change individual attributes for each tile. For instance, I need to change the lighting per tile, but if I do, it'll change the color of every single tile. I was thinking about storing two arrays per chunk, one for the tiles and one for the light; when I want to change the lighting, I can just change the value in the lighting array that coincides with the position in the tile array. However, this means I'll have to store twice as much data per chunk. Is that something I have to accept?
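For what it's worth, the parallel-array idea described above is the standard chunk layout: every per-tile attribute lives in its own flat array, and all of them share one index computed from the tile coordinates. A sketch (sizes and the 4-bit light range are illustrative assumptions):

```python
CHUNK_W = CHUNK_H = CHUNK_D = 16

def index(x, y, z):
    # One flat index shared by every per-tile attribute array.
    return x + CHUNK_W * (y + CHUNK_H * z)

tiles = [0] * (CHUNK_W * CHUNK_H * CHUNK_D)  # tile type ids
light = [15] * len(tiles)                    # e.g. 4-bit light level per tile

light[index(3, 2, 1)] = 7  # change one tile's light; tile types untouched
print(light[index(3, 2, 1)], tiles[index(3, 2, 1)])  # 7 0
```

Since a light level fits in a byte (or a nibble), the extra array is usually far less than double the memory of the tile array, so the overhead tends to be acceptable.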
1 | How are dynamic blending shadows like this created? I would like to know, how dynamic shadows, that 'blend' onto other objects, are created. |
1 | Get the new vertices after rotating in JOGL I have rotated an object with OpenGL (JOGL). Is there a simple way to get the new vertices?
1 | How to manage shaders? I've done some shader programming some time ago, but only simple stuff. I'm especially interested in how you manage shaders: do you just write one of each kind, or do you need more of them? If so, how exactly do you split and manage them? Also, I've recently read that all the transformation matrix operations I used in OpenGL (glPushMatrix, etc.) are now deprecated and you need to manage your own matrices. Is that something you do in a vertex shader, and how would I go about doing it correctly? Suggestions for books about this topic will be highly appreciated. Thank you.
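On the matrix question: in modern OpenGL you keep the matrices yourself on the CPU (a math library, or a hand-rolled 4x4 type), compose projection * view * model, and pass the result to the vertex shader as a uniform, where the shader just does `gl_Position = mvp * position;`. A minimal sketch of the CPU side, assuming column-major storage as OpenGL expects (hand-rolled, not any particular library's API):

```python
def mat_mul(a, b):
    # 4x4 column-major multiply: result = a * b
    r = [0.0] * 16
    for c in range(4):
        for row in range(4):
            r[c * 4 + row] = sum(a[k * 4 + row] * b[c * 4 + k] for k in range(4))
    return r

def translation(x, y, z):
    # Identity with the translation in the last column (indices 12..14).
    return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  x, y, z, 1]

# Replacing glPushMatrix/glTranslatef: compose matrices yourself.
m = mat_mul(translation(1, 0, 0), translation(2, 0, 0))
print(m[12])  # 3, translations accumulate
```

The old matrix stack is then just a plain list of matrices you push and pop yourself.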
1 | Aggregation of value in GLSL loop results in 0 I'm banging my head against a wall trying to understand why this code is giving me some reasonable results, with some visible colors on parts of the screen...

#version 130
in vec2 vTex;        // must match name in vertex shader
flat in int iLayer;  // must match name in fragment shader
out vec4 fragColor;  // first out variable is automatically written to the screen
uniform sampler2DArray tex;
uniform sampler2DArray norm;
#define MAX_LIGHTS 4
struct Light {
    vec3 position;
    vec4 color;
    vec3 falloff;
    vec3 aim;
    float aperture;
    float aperturehardness;
};
uniform Light lights[MAX_LIGHTS];

void main() {
    vec4 DiffuseColor = texture(tex, vec3(vTex.x, vTex.y, iLayer));
    if (DiffuseColor.a == 0.0) discard;
    vec3 NormalMap = texture(norm, vec3(vTex.x, vTex.y, iLayer)).rgb;
    NormalMap.g = 1.0 - NormalMap.g;
    vec3 FinalColor = vec3(0, 0, 0);
    for (int i = 0; i < MAX_LIGHTS; i++) {
        vec3 LightDir = vec3((lights[i].position.xy - gl_FragCoord.xy) / vec2(320.0, 200.0), lights[i].position.z);
        float D = length(LightDir);
        vec3 N = normalize(NormalMap * 2.0 - 1.0);
        vec3 L = normalize(LightDir);
        vec3 Diffuse = (lights[i].color.rgb * lights[i].color.a) * max(dot(N, L), 0.0);
        vec3 sd = normalize(vec3(gl_FragCoord.xy, 1.0) - lights[i].position);
        float Attenuation = smoothstep(lights[i].aperture * lights[i].aperturehardness, lights[i].aperture, dot(sd, lights[i].aim))
                          / (lights[i].falloff.x + lights[i].falloff.y * D + lights[i].falloff.z * D * D);
        vec3 Intensity = Diffuse * Attenuation;
        FinalColor = max(FinalColor, DiffuseColor.rgb * Intensity);
    }
    fragColor = vec4(FinalColor, DiffuseColor.a);
}

... but when I change FinalColor = max(FinalColor, DiffuseColor.rgb * Intensity); to FinalColor = clamp(FinalColor + DiffuseColor.rgb * Intensity, vec3(0, 0, 0), vec3(1, 1, 1)); everything goes black. Is there something I don't understand about accumulating values in loops in GLSL, or about the + operator on vectors? By my logic, the changed code should return more brightness than the original code, because the changed code adds all the values whereas the old code just takes the highest one.
1 | OpenGL Textures not sitting right on model I have been trying to get textures loading properly onto my models but to no avail. I'm using picopng to load my images. Here is my texture code:

std::vector<unsigned char> buffer, image, aaa;
std::ifstream file(location.c_str(), std::ios::in | std::ios::binary | std::ios::ate);
std::streamsize size = 0;
if (file.seekg(0, std::ios::end).good()) size = file.tellg();
if (file.seekg(0, std::ios::beg).good()) size -= file.tellg();
if (size > 0) {
    buffer.resize((size_t)size);
    file.read((char*)(&buffer[0]), size);
} else {
    buffer.clear();
}
unsigned long w, h;
int error = decodePNG(image, w, h, buffer.empty() ? 0 : &buffer[0], (unsigned long)buffer.size());
if (error != 0) std::cout << "error " << error << std::endl;
glEnable(GL_TEXTURE_2D);
GLuint id;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image[0]);
glBindTexture(GL_TEXTURE_2D, 0);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
float mipMapingAffectivness = 0.4f;
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, mipMapingAffectivness);

Any advice? Update: I have tried flipping the image but to no avail; I have tried all 8 orientation variations and none seem to work.
1 | LibGDX Box2DLights shadow offset problem on bodies Hello I just started to use LibGDX, and it's awesome. I looked at the Box2DLights library, and started to learn how the lighting work here. I got something up (source gyazo.com) As you can see, it works, but the shadow of the sprite goes over itself, and doesn't start in the right place. Why is it doing this? is it possible to set offsets? This is how I initialize them this.texture new Texture("sprites water.png") this.camera new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight()) this.camera.setToOrtho(false) this.renderer new SpriteBatch() this.map new TileMap(20, 20) this.map.generateAllSame(new Texture("sprites sprite.png")) this.world new World(new Vector2(), true) this.createBodies() RayHandler.setGammaCorrection(true) RayHandler.useDiffuseLight(true) this.ray new RayHandler(this.world) this.ray.setAmbientLight(0.2f, 0.2f, 0.2f, 0.1f) this.ray.setCulling(true) this.ray.pointAtLight(50, 0) this.ray.setBlurNum(1) camera.update(true) this.spriteLight new PointLight(ray, 128) this.spriteLight.setDistance(500f) this.spriteLight.setPosition(150, 150) this.spriteLight.setColor(new Color(0.5f, 0.8f, 0.7f, 1f)) this.spriteLight.setSoftnessLength(0) this.spriteLight.setSoft(true) public void createBodies() CircleShape chain new CircleShape() chain.setRadius(10) FixtureDef def new FixtureDef() def.restitution 0.8f def.friction 0.01f def.shape chain def.density 1f BodyDef body new BodyDef() body.type BodyType.DynamicBody for (int i 0 i lt 3 i ) body.position.x 40 MathUtils.random(500) body.position.y 40 MathUtils.random(500) Body box this.world.createBody(body) box.createFixture(def) this.bodies.add(box) chain.dispose() And my rendering public void render() handleInput() camera.update() Gdx.gl.glClear(GL20.GL COLOR BUFFER BIT) renderer.setProjectionMatrix(camera.combined) renderer.disableBlending() renderer.begin() this.map.render(renderer) renderer.draw(texture, 50, 50) renderer.enableBlending() 
for (Body body this.bodies) Vector2 pos body.getPosition() renderer.draw(texture, pos.x amount, pos.y) renderer.end() ray.setCombinedMatrix(camera.combined, camera.position.x, camera.position.y, camera.viewportWidth camera.zoom, camera.viewportHeight camera.zoom) ray.update() ray.render() What did I do wrong? |
1 | What rotation needs to be applied to align mesh with expected axis of target? I'm using LWJGL and JOML to create a 3D view of hexagons whose positions lie on a torus. I have a number (NxM) of hexagons, whose centres and normals I have calculated so that they are placed on the torus to completely cover its surface, but in the "game" engine I'm using I need to convert each item being rendered to a position and 3 rotation angles. I'm struggling to go from the 3 normals of the item to the 3 angles. EDIT: Subsequent to posting this I have got some way in creating a matrix from the angles and converting to Euler angles; everything is now turned according to those angles, but they aren't facing the directions I expect. The background: I'm trying to create a visualisation of Conway's Game of Life using hexagons, but instead of a simple plane, mapping each hexagon onto a torus. I've done the maths to calculate the centres of every hexagon, and the 3 direction unit vectors that they need to point to when in their places around the torus. For illustrative purposes, here's a view of the torus and 2 hexagons that would lie on it (not real, this is just me mocking it up in Blender). What I'm struggling to understand is how to rotate the single mesh for a hexagon to its calculated normals at the position I want to place it, i.e. how do I rotate some "unit" hexagon mesh (loaded from an OBJ file exported from Blender) to point in the direction of the 3 normals I've calculated for each hexagon around the torus? I have read a similar question here, but I'm struggling to get from the idea of the 4D rotation matrix to how I convert that to a Vector3f for rotations. I have the 3 vector normals and could create the 4D matrix, but I need a Vector3f (the rotations about x, y, z) so the mesh is drawn correctly. My code is here.
I'm following this guide for using LWJGL to create GameItems (my hexagons) and position/rotate them from a loaded OBJ file mesh, but as I say, I'm struggling to calculate the rotation Vector3f needed to point in the direction I've calculated. Here's the code section relevant to the problem at hand:

val mesh = loadMesh("conwayhex models simple hexagon.obj")
hexGrid.hexAxes().forEach { (location, axis) ->
    // axis is a Matrix3f with my 3 normals at the centre of the hexagon, e.g.
    //   cX cY cZ
    //    0  1  0
    //    0  0  1
    //    1  0  0
    val gameItem = GameItem(mesh)
    gameItem.position = location
    gameItem.scale = 0.2f // TODO: calculate this according to the torus size
    // What rotation do I give this? How do I calculate it from the given axis for the current item?
    gameItem.rotation = Vector3f(30f, 30f, 30f)
    gameItems += gameItem
}

The output of the application given the above static 30 degree rotation is pictured. Can anyone help me understand how I apply the rotation to my items so they align to what I've calculated they should be?
1 | Efficient Dynamic Memory Management My world is procedurally generated. As the player moves, chunks behind them are unloaded and chunks in front of them are loaded. Each chunk has a mesh of triangles. At the moment, I create two VBOs for each chunk (vertices and colours), when the chunk is loaded. Once a mesh is created, it is only edited every few seconds. These buffers are deleted when the chunk is no longer visible. Am I leaking memory here by constantly creating and destroying buffers? I've heard somewhere that OpenGL (or WebGL in this case) doesn't do garbage collection until the program quits, due to it being slow. Is this right? Could I improve this system somehow? |
1 | How should I structure my Android platform board game? I'm new to developing Android games, but not new to developing mobile games (J2ME). I'm currently developing a board game, for a school project, with 2 things: a board and a spinning wheel (both are displayed at the same time). The user is able to zoom in/out and scroll around the board and spin the wheel. The wheel is also animated and resized during the game. The board is built using tiles (2 to 4 different tiles on one image per board) and the wheel is an image with numbers drawn on it using graphics. My question is: what is the best practice to achieve the best performance of the game (the game has to run on every possible Android version)? Should I use the Android canvas or OpenGL? Is there a mechanism for drawing tiles and animations, or should I just implement it myself using drawImage()? Should I separate the wheel and the board into two different threads? Should I separate the wheel and the board into 2 activities, or put them in 1 activity and draw each part separately? What would be the best way to resize the wheel during gameplay? Scale the wheel image (but the animation has to be smooth; OpenGL vs canvas)? What would be the best way to make the board zoomable? Should I scale every image on the board when the zoom is detected, or does Android have some better way to do this? What would be the best way to make the board scrollable? Should I implement a camera that displays just a piece of the board, or does Android have some better way to do this?
1 | I want to upload a 2 GB model animation to VRAM with the VBO technique, but I am getting this error with NVIDIA (with AMD, no error) When I try it on AMD it works, with an AMD HD 6870 and an ATI X1550, but when I try it on a GTX 950 it gives this error. If I upload less than 2 GB, for example 300 MB, then it works, but I want to upload 2 GB. What can I do? Sorry for bad English.
1 | LWJGL Multi-threading to separate update and render Introduction: I am currently designing a game in Java using LWJGL 3.0, with Gradle. I have quite advanced knowledge of multi-threading, and I am aware of how GLFW does and doesn't implement multi-threading, from this guide. Currently, I have one class Updater with a loop for updating, and another class Renderer for rendering, each extending Runnable with their own loops and time calculations. Below is a diagram showing the structure of the classes and update and render methods. I have also set up a lock system, using the Java Lock class (again below). My problem: despite setting each thread as the current context before each loop executes its window call, and setting the current context to null at the end, I still get the error: Exception in thread "RENDERER" java.lang.IllegalStateException: GLFW error 0x10008: WGL: Failed to make context current: The requested resource is in use. Any idea as to why? Been at this for a few hours now. Loop class private void threadLoop() try lock.lock() locked true loop() TPS catch (InterruptedException e) e.printStackTrace() finally lock.unlock() locked false Override public void run() init() long lastTime System.nanoTime() double ns SECOND (MAX TPS 2) double delta 0 long timer System.currentTimeMillis() while (running) long now System.nanoTime() delta (now lastTime) ns lastTime now while(delta > 1) threadLoop() TPS delta if(System.currentTimeMillis() timer > 1000) finalTPS TPS timer 1000 TPS 0 stop() Thread Manager public void update() if (window == null) return window.setThread() window.update() window.nullThread() public void render() if (window == null) return window.setThread() window.render() window.nullThread() Renderer Override protected void loop() if (state != null) state.render() threadManager.intermediateCode() threadManager.render() Updater Override protected void loop() if (state != null) state.update() threadManager.intermediateCode() threadManager.update() Window public void setThread() glfwMakeContextCurrent(getID()) public void nullThread() glfwMakeContextCurrent(NULL) public void render() glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glClearColor(0.0f, 0.3f, 0.8f, 0.0f) glfwSwapBuffers(ID) System.out.println("Rendered window") public void update() glfwPollEvents() System.out.println("Updated window")
1 | Why is my transform stretching? I've read a lot about transformations: model to world space, world to camera space, and projection transformations. But when programming it, I can't get things right. I think I'm missing something, so here is my example code package com.test.rendering import... public class Test extends ApplicationAdapter Texture texture Matrix4 projection Matrix4 transform SpriteBatch batch float angle 0f Override public void create () float width Gdx.graphics.getWidth() float height Gdx.graphics.getHeight() projection new Matrix4() projection.setToProjection( width 2, width 2, height 2, height 2, 1, 100) batch new SpriteBatch() batch.setTransformMatrix(transform) batch.setProjectionMatrix(projection) texture new Texture("badlogic.jpg") Gdx.gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f) Override public void render () Gdx.gl.glClear(Gdx.gl.GL COLOR BUFFER BIT) angle (angle 0.5f Gdx.graphics.getDeltaTime()) 360 transform new Matrix4() translate the world so we can see the texture at z 0 transform.translate(0.0f, 0.0f, 5.0f) transform.rotate(Vector3.X, angle) batch.setTransformMatrix(transform) batch.begin() batch.draw(texture, texture.getWidth() 2, texture.getHeight() 2) batch.end() Override public void dispose () If you run this code you will see a very weird transformation: the texture starts rotating, but then stretches until some part of the screen, and then slowly disappears. I was expecting to see the texture just rotating around the x axis, with no stretching. Why is this happening? What is wrong with my code?
1 | How to make an oscillation move on OpenGL Qt I'm trying to make a character perform an oscillating motion. That is, the character will start by rotating to a certain angle, say 60 degrees, then slowly come back to an upright position, then rotate in the opposite direction to an angle less than 60, say 55, and do it all again until the rotation angle reaches zero and the character stops. Currently I'm trying to achieve this by declaring global variables, checking them with if blocks, and changing their value by 1 degree in those blocks, so that each time the timer calls paintGL it decreases the angle by 1 degree and draws the object, thus making it look like it's slowly rotating. I'm having trouble with stopping the character. This is the part of my code that handles it: glRotatef(j,0,1,0) glRotatef(k,0,1,0) if(flag) j j 2 if (j 0) k 60 flag = !flag if(!flag) k k 2 if(k 0) j 60 flag = !flag gluCylinder(player, 1,1,8,100,100) Here the cylinder is my character and the if blocks increase and decrease the global variables j and k. j is initialized to 60 and k is initialized to 60. What I want to do is make the character lean toward the ground by an angle on a mouse click, and when the mouse is released, the character slowly goes upright and leans in the other direction, then slowly goes upright again and leans in the first direction, and so on. How can I make this work, and how can I stop it? Thanks in advance.
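An alternative to nudging global angles by fixed steps each paintGL call is to model the motion as a damped oscillation, angle(t) = A·e^(−λt)·cos(ωt): it swings back and forth with a shrinking amplitude and settles at zero on its own. A minimal sketch; the constants A, λ (damping) and ω are tuning values I chose, not taken from the code above:

```python
import math

def swing_angle(t, amplitude=60.0, damping=0.8, omega=4.0):
    # Damped oscillation: starts at 'amplitude' degrees and decays toward 0.
    return amplitude * math.exp(-damping * t) * math.cos(omega * t)

# At t = 0 (mouse released) the character leans the full 60 degrees.
print(round(swing_angle(0.0), 2))  # 60.0
```

Each timer tick you would compute the elapsed time since release, set the rotation from swing_angle(t), and stop the timer once the envelope amplitude·e^(−λt) falls below some small threshold.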
1 | Spherical Area Lights do not match reference So I'm adding spherical area lights to my application, and comparing my results with Mitsuba, I am getting some differences (left is my approach, right is a path-traced Mitsuba reference). What I am mainly noticing is that: 1) the specular is way too bright for roughness values over 0.5; 2) the specular shape gets "flat" (left: mine, right: Mitsuba; noise caused by low sample count). I am using the closest-point approximation and the normalization term proposed in Real Shading in UE4, i.e. widening the roughness to alpha' = alpha + radius / (distance * 2.0). I believe issue 1) could be a problem with my normalization term; however, I think issue 2) might be a problem with the closest-point approximation? Is this kind of inaccuracy expected with the approximations I use, and if so, are there better approximations?
1 | 3D Studio MAX dxf model to OpenGL and DirectX Main question: I saw this: Loading and Animating MD5 Models with OpenGL, an old post explaining .md5mesh and .md5anim files. Are there any similar alternate mechanisms? Additional questions: 1.) Is there an open source tool or a plug-in for 3ds Max to convert a .DXF file to such files, which can be used in 3D OpenGL or DirectX game programming? 2.) If I am developing a game with this approach, how do I encrypt these model files and pack them into one big file containing all of them, so that the user cannot crack or hack it? 3.) If I am providing an update via an online gaming server, how do I update these models and keep them encrypted?
1 | OpenGL Perspective Issue I'm trying to troubleshoot a problem with my simple OpenGL test program, which I've written in C. I've written some math routines to do the matrix manipulation, but even after copying known working perspective functions, I cannot get anything to appear on screen. The program is simple: the only geometry is a simple cube centered around the origin. Here is the render loop int renderer draw frame(float delta time) glClearColor(0.2f, 0.3f, 0.2f, 1.0f) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glUseProgram(prog) bind textures glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, tile tex) glUniform1i(glGetUniformLocation(spo prog(spo), "Tex1"), 0) vec3f eye 0.0f, 0.0f, 3.0f vec3f target 0.0f, 0.0f, 0.0f vec3f up 0.0f, 1.0f, 0.0f mat4f model mat mat4f translate(0.0f, 0.0f, 0.0f) mat4f view mat look at(eye, target, up) proj mat mat4f projection(45.0f, (float)win width (float)win height, 0.1f, 100.0f) glUniformMatrix4fv(modelLoc, 1, GL FALSE, &model mat.f[0]) glUniformMatrix4fv(viewLoc, 1, GL FALSE, &view mat.f[0]) glUniformMatrix4fv(projLoc, 1, GL FALSE, &proj mat.f[0]) glBindVertexArray(VAO) glDrawArrays(GL TRIANGLES, 0, 36) glBindVertexArray(0) return 0 mat4f and vec3f are just this typedef struct mat4f mat4f struct mat4f float f 4 4 typedef struct vec3f vec3f struct vec3f float x float y float z The math routines in question define M4(m,i,j) m.f mat4f idx(i,j) define PI 3.14159265359f mat4f mat4f projection(float fov, float ratio, float near, float far) float d2r PI 180.0f float ys 1.0f tanf(d2r fov 2.0f) float xs ys ratio float nf near far mat4f r 0 M4(r, 0, 0) xs M4(r, 1, 1) ys M4(r, 2, 2) (far near) nf M4(r, 2, 3) 1.0f M4(r, 3, 2) 2 far near nf return r mat4f look at(vec3f eye, vec3f center, vec3f up) vec3f zaxis vec3f normalize(vec3f subv(eye, center)) vec3f xaxis vec3f normalize(vec3f crossp(vec3f normalize(up), zaxis)) vec3f yaxis vec3f crossp(zaxis,xaxis) mat4f trans mat4f id(1.0f, 1.0f, 1.0f, 1.0f) M4(trans, 0, 3) eye.x M4(trans, 1, 3) eye.y M4(trans, 2, 3) eye.z mat4f rot mat4f id(1.0f, 1.0f, 1.0f, 1.0f) M4(rot, 0, 0) xaxis.x M4(rot, 0, 1) xaxis.y M4(rot, 0, 2) xaxis.z M4(rot, 1, 0) yaxis.x M4(rot, 1, 1) yaxis.y M4(rot, 1, 2) yaxis.z M4(rot, 2, 0) zaxis.x M4(rot, 2, 1) zaxis.y M4(rot, 2, 2) zaxis.z return mat4f mul(rot, trans) Finally, the simple vertex shader version 330 core layout (location 0) in vec3 position attrib pos 0 layout (location 1) in vec2 texCoord attrib pos 1 out vec2 TexCoord uniform mat4 model uniform mat4 view uniform mat4 projection void main() gl Position projection view model vec4(position, 1.0f) TexCoord vec2(texCoord.x, 1.0f texCoord.y) Here is what I see. If I change the shader like so void main() gl Position view model vec4(position, 1.0f) TexCoord vec2(texCoord.x, 1.0f texCoord.y) I see this. Why does it completely disappear in projection view? I've tried changing the view look at parameters but I can't ever see anything through projection view. I've tried several different implementations, currently gluPerspective (the one above is pretty much that function copied). I've been at this all night and I can't seem to get a grasp on what went wrong. Can someone give me some pointers on where to go next?
1 | Obtain linear depth values on FBO and deal with small differences I have a render target on which I have both a color and a depth attachment. In a second pass I need to run a filter whose width depends on the derivative and values of the depth. Now, if I try, in the second pass, where I'm accessing the texture I rendered to previously, to display the depth information, all I get is a grey blotch: http://imgur.com/UFm5oI6 (don't pay attention to the background; I'm showing just non-background values). Before displaying it I modify it with float depthValue = texture(depthTexture, uv); depthValue = (depthValue - 0.1) / 0.9; as my near clip is 0.1 and the far one is 1.0. Instead, if I run the program with gDEBugger and look at the content of the depth texture, what I see is a much cleaner result: http://imgur.com/TtOgMY2 Why is what I see totally different? How does gDEBugger show the values? Are the values I fetch from the depth texture wrong? Should I render depth values differently? Moreover, the width of the kernel doesn't really change, as it is computed based on the depth value and depth derivatives, and those two are practically identical, at least judging from what I see. Is there a way to make these differences more marked? Sorry if it's a dumb question. Thank you.
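For reference, the value stored in a depth texture is non-linear in eye-space distance, so a linear remap like (d − 0.1) / 0.9 still leaves almost everything compressed into a narrow band near 1.0, which is consistent with the grey blotch. Assuming the standard perspective projection with near plane n and far plane f, a depth sample d in [0, 1] maps back to eye-space distance as z = 2nf / (f + n − (2d − 1)(f − n)). A sketch of that linearization:

```python
def linearize_depth(d, near, far):
    # d: depth-buffer sample in [0, 1]; returns eye-space distance.
    ndc = 2.0 * d - 1.0  # back to NDC in [-1, 1]
    return (2.0 * near * far) / (far + near - ndc * (far - near))

# With near = 0.1, far = 1.0 the extremes map back to the clip planes.
print(round(linearize_depth(0.0, 0.1, 1.0), 6), round(linearize_depth(1.0, 0.1, 1.0), 6))
```

Displaying (z − near) / (far − near) instead of the raw sample usually gives the even gradient that debuggers like gDEBugger show.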
1 | Matrix operations to flip 3d models front to back, without changing position Multiple 3D objects look to be in the correct screen position, and I can move around them and this continues to be true, but each one looks like it is flipped back to front (I can see the backside of the object as if the camera were on the other side of it). Any clues as to why this may be happening, in addition to how to correct it, would be welcome. Also: I'm getting a transform matrix from one API that is documented as OpenGL 4x4 matrix format, but my underlying API (bgfx) wraps OpenGL, so it may be assuming another format. I can probably get the transform in a different format, but now I'd really like to know how to correct this even if there were no option but to start with the OpenGL matrix.
1 | Why is my OpenGL enabled Java game not detected by Fraps? Recently I found out that I could enable OpenGL hardware acceleration in my Java game with the line System.setProperty("sun.java2d.opengl", "True") Initial tests showed a big boost in performance. However, several days later I wanted to make a small test video using Fraps but I found out that it didn't recognize my application (as in the FPS counter did not show and I could not record anything). After I removed the earlier mentioned line of code (thus disabling OpenGL hardware acceleration) it recorded just fine. Is there any way to use Fraps while I have OpenGL enabled in my Java game? |
1 | Linear color workflow with render to texture Banding in alpha channel? In OpenGL, I am using GL_SRGB8_ALPHA8 for the OpenGL internal texture format for my textures and render targets. This eliminated some banding I was seeing in dark opaque areas. However, in the following screenshot, you can see there is still some banding in the nearly opaque black pixels on the left side. The render target is transparent in the areas where you see the checkerboard pattern. Semitransparent pixels in sRGB-formatted textures and render target textures appear too transparent when rendered to the screen, and pixels that are very nearly opaque (254, 253, 252 alpha) show noticeable banding, as you can see in the sample image. I've seen the issue with both sRGB-formatted render target textures and ordinary textures with sRGB format. The banding appears to go away if I don't call glEnable(GL_FRAMEBUFFER_SRGB). I'm using premultiplied alpha, but I'm pretty sure I've eliminated that as a cause of the issue. I'm not doing anything to the sampled color in the pixel shader. My blend functions: Non-premultiplied alpha textures: glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA). Premultiplied alpha textures: glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA). I tried exporting my render target to a PNG, and the true alpha values stored in the render target are increasing by 1 at each band. The texture appears much more opaque in image editors than it does in my app. Any advice as to what could cause this kind of alpha banding?
1 | How do I change a sprite's color? In my rhythm game, I have a note object which can be of a different color depending on the note chart. I could use a sprite sheet with all the different color variations I use, but I would prefer to parametrize this. (Each note sprite is made of different shades of a hue. For example, a red note has only red, light red and dark red.) How can I colourise a sprite anew? I'm working with OpenGL, but any algorithm or math explanation will do. :)
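One common parametrization: author the sprite in greyscale (the shades of the reference hue become shades of grey) and multiply every texel by a tint colour at draw time; in a GLSL fragment shader that is just `texture(tex, uv) * tintColor`. The per-channel math, sketched in Python:

```python
def tint(texel, color):
    # Per-channel multiply: a grey texel times a hue gives a shaded hue.
    return tuple(t * c for t, c in zip(texel, color))

# Light grey (0.8) and dark grey (0.3) texels tinted red:
print(tint((0.8, 0.8, 0.8), (1.0, 0.0, 0.0)))  # light red
print(tint((0.3, 0.3, 0.3), (1.0, 0.0, 0.0)))  # dark red
```

Because the multiply preserves the relative brightness of the shades, one greyscale sheet covers every note colour the chart asks for.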
1 | glDeleteBuffers causing other objects not to draw I have a few objects in the scene and they exist until I turn off the application. Their calls to glDeleteBuffers are in the destructor. Since I don't delete anything in the middle of the game, everything is fine. But now I created one helper function for drawing lines with the following body:

void DrawLine(glm::vec3 p1, glm::vec3 p2, glm::vec4 color)
{
    GLuint vao, vbo;
    glm::vec3 data[2] = { p1, p2 };
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 2 * sizeof(glm::vec3), data, GL_STATIC_DRAW);
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);
    glBindVertexArray(vao);
    SimpleShader.SetAttribute("World", glm::mat4x4());
    SimpleShader.SetAttribute("UseStaticColor", true);
    SimpleShader.SetAttribute("UseLighting", false);
    SimpleShader.SetAttribute("StaticColor", color);
    glDrawArrays(GL_LINES, 0, 2);
    SimpleShader.SetAttribute("UseStaticColor", false);
    SimpleShader.SetAttribute("UseLighting", true);
    glDeleteBuffers(1, &vbo);
    glDeleteBuffers(1, &vao);
}

But as soon as glDeleteBuffers is called, nothing is drawn anymore. If I comment out those two lines everything is fine (except a bunch of unreleased object names, of course). Anyone know why this is happening?
1 | clip space, normalized device coordinate space and window space in OpenGL I am learning OpenGL. Clip space, normalized device coordinate space and window space are confusing. I searched but still don't understand them clearly. So, the question is: what are the differences between them, and how are they converted from one to another? And what coordinates do the built-in OpenGL functions (e.g. glBufferData) take as input?
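In short: clip space is the homogeneous 4D output of the vertex shader (gl_Position); dividing x, y, z by w (the perspective divide) gives normalized device coordinates in [−1, 1]; the viewport transform then maps NDC to window (pixel) space. glBufferData just stores raw vertex data in whatever coordinates you choose; they only become clip coordinates after your shader's matrix multiply. The two fixed-function steps, sketched (assuming a viewport at the origin and the default [0, 1] depth range):

```python
def clip_to_ndc(clip):
    x, y, z, w = clip
    return (x / w, y / w, z / w)  # perspective divide

def ndc_to_window(ndc, width, height):
    x, y, z = ndc
    # Viewport transform for glViewport(0, 0, width, height).
    return ((x + 1) / 2 * width, (y + 1) / 2 * height, (z + 1) / 2)

# The clip-space origin lands in the centre of an 800x600 window.
print(ndc_to_window(clip_to_ndc((0.0, 0.0, 0.0, 2.0)), 800, 600))  # (400.0, 300.0, 0.5)
```

Everything outside the NDC cube (|x|, |y|, |z| > 1 after the divide) is clipped away, which is where "clip space" gets its name.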
1 | Simplifying Camera Strafing I have a camera that follows the position and direction of the player. These are updated using spin and velocity. The velocity is updated like so (acceleration is relative to the object, not the world):

auto velocity = get_velocity();
velocity += get_acceleration().x * get_right();
velocity += get_acceleration().y * get_up();
velocity += get_acceleration().z * get_forward();
set_velocity(velocity);

This is the first time I have used OpenGL and I am not sure if this is the best solution, but it does what I want. I was wondering if there is a better way to do this using OpenGL Mathematics (GLM). Additionally, this method works like a rocket, which is good for what I am trying to do, but if I want something more like a human, I have to change it to:

auto velocity = get_velocity();
auto const y = velocity.y + get_acceleration().y;
velocity += get_acceleration().x * get_right();
velocity += get_acceleration().z * get_forward();
velocity.y = y;
set_velocity(velocity);

...to prevent the player from being able to fly by looking up. Is there a way of simplifying this case, too? Acceleration is defined as:

switch (dir)
{
case left:  return glm::vec3(-1, 0, 0);
case right: return glm::vec3( 1, 0, 0);
case front: return glm::vec3( 0, 0, -1);
case back:  return glm::vec3( 0, 0,  1);
case up:    return glm::vec3( 0, 1, 0);
case down:  return glm::vec3( 0, -1, 0);
default:    assert(false);
}
1 | OpenGL ES object rotation around z axis I have an object on my screen which is presented rotated and panned, But i have 2 problems regarding the z axis rotations. It's a bit tricky to explain so i uploaded 2 videos to describe each problem. 1) Reverse rotation After rotating the object around the x axis, the z rotations are being reversed and not as it should be. 2) Wrong Z axis rotation Again, After rotating the object around the x axis, i'm trying to rotate the object around the z axis and the rotation results in a different axis rotations. I do believe the video's describe the problems well. EDIT Second attempt As kindly explained by the user Fault, i understood the problems i was facing with. So i tried to solve those by just rotating the projection matrix around the z axis, which seemed to be working, but it doesn't. Since i am rotating the model's axis, I get this problem (video). the issue appears in the 23'd seconds, while i'm demonstrating how it works. As can been seen, when the z rotation causes a wrong x and y rotations. I do understand that i must likely have to move these rotations to world view and not model view, the thing is i'm not really sure how to do that. Here is the relevant code Render function ... CC3GLMatrix projection CC3GLMatrix matrix float h 4.0f self.frame.size.height self.frame.size.width projection populateFromFrustumLeft 2 andRight 2 andBottom h 2 andTop h 2 andNear 4 andFar 100 projection rotateByZ zRotationEnd glUniformMatrix4fv( projectionUniform, 1, 0, projection.glMatrix) modelView populateFromTranslation currentPan modelView rotateBy currentRotation glUniformMatrix4fv( modelViewUniform, 1, 0, modelView.glMatrix) ... 
X and Y rotations (void) rotateAroundX (float) x andY (float) y newRotate (BOOL) isNewRotate if (isNewRotate) rotationStart CC3VectorMake(0.0, 0.0, 0.0) int rotationDirection ceil(zRotationEnd 90) if (rotationDirection 4 2 rotationDirection 4 3) rotationEnd.x rotationEnd.x (x rotationStart.x) rotationEnd.y rotationEnd.y (y rotationStart.y) else rotationEnd.x rotationEnd.x (x rotationStart.x) rotationEnd.y rotationEnd.y (y rotationStart.y) rotationStart.x x rotationStart.y y currentRotation CC3VectorMake(rotationEnd.y, rotationEnd.x, 0) NSLog( "Current x is f y is f z is f", currentRotation.x, currentRotation.y,zRotationEnd) Z rotations (void) rotateAroundZ (float) z newRotate (BOOL) isNewRotate if (isNewRotate) zRotationStart 0 zRotationEnd zRotationEnd (z zRotationStart) zRotationStart z And the vertex shader attribute vec4 Position attribute vec4 SourceColor varying vec4 DestinationColor uniform mat4 Projection uniform mat4 Modelview attribute vec2 TexCoordIn varying vec2 TexCoordOut void main(void) DestinationColor SourceColor gl Position Projection Modelview Position TexCoordOut TexCoordIn I would really like to get this right, so any help will be appreciated. Cheers! |
1 | How do I wrap textures inside shader GLSL? I'm trying out GLSL and one of the problems I'm facing is wrapping a random texture sampler in the shader. Searching for answers on the web first, this leads me to using these glTexParameter() GL TEXTURE WRAP S GL REPEAT GL TEXTURE WRAP T GL REPEAT I'm not sure where to put this or how to use it. I'm using a custom engine for this shader and I would assume I could wrap the texture in the shader. My main concern on this is my final output render having left and bottom artifacts. I got some previous advise it has to do with wrapping a random texture that is used for noise if that helps in my case. |
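If the engine doesn't expose glTexParameteri (which is normally called once after glBindTexture, at texture setup time), repeat-style wrapping can also be emulated per fetch in the shader with GLSL fract(uv), filtering at the seam aside. The equivalent math, sketched in Python:

```python
import math

def wrap_repeat(u):
    # Equivalent of GLSL fract(u): keeps only the fractional part,
    # so coordinates outside [0, 1) wrap around like GL_REPEAT.
    return u - math.floor(u)
```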
1 | How to know when graphics driver or card changes I'm about to start work on implementing GLSL binary shader compilation and I was curious how to handle the cases when the shaders need to be recompiled, such as when the driver (or perhaps even the graphics card itself) changes. Are there any mechanisms out there to do these types of things, or is this just a "delete the shader cache folder if you update your driver" type of thing? I did a quick search on here and Google and didn't see anything that looked promising. Edit or am I just ignorant and GL will let me know when a recompile is necessary? |
1 | 3D Camera Rotation (Unwanted Roll) Space Flight Cam I am working on a camera class that will have full range of motion (pitch, yaw, and roll). When only altering pitch and yaw, I am getting a large amount of roll. I understand that the issue is related to I 39 m rotating an object on two axes, so why does it keep twisting around the third axis? However, I have not been able to come up with a solution. I would like the camera to have all motion (i.e. like a spaceship). Here is the relevant code void Camera3D update(const glm vec2 amp current mouse coords) if (m mouse first movement) if (current mouse coords.x ! 0 current mouse coords.y ! 0) m mouse first movement false else const glm vec2 mouse delta (current mouse coords m old mouse coords) mouse sensitivity pitch( mouse delta.y) yaw( mouse delta.x) m old mouse coords current mouse coords void Camera3D pitch(const float angle) Pitch Rotation const glm quat pitch quaternion glm angleAxis( angle, m camera right) Update Vectors m camera up glm normalize(glm rotate(pitch quaternion, m camera up)) m camera forward glm normalize(glm rotate(pitch quaternion, m camera forward)) void Camera3D yaw(const float angle) Yaw Rotation const glm quat yaw quaternion glm angleAxis(angle, m camera up) Update Vectors m camera right glm normalize(glm rotate(yaw quaternion, m camera right)) m camera forward glm normalize(glm rotate(yaw quaternion, m camera forward)) glm mat4 Camera3D get view matrix() m view matrix glm lookAt( m camera position, m camera position m camera forward, m camera up ) return m view matrix I would like the movement of the camera to be based on local coordinates so the controls (up down left right vertical up vertical down) move the camera along its own local axis. Additional clarification based on the comment below If I look up 60 degrees and then left, I would like the horizon to stay level (essentially adding roll in the opposite direction to keep the horizon level) Any help is appreciated. Thank you! |
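A common fix for the "keep the horizon level" behaviour is to yaw about the fixed world up axis instead of the camera's local up vector (pitch stays on the local right axis). The rotation itself, sketched in Python under a right-handed, y-up assumption:

```python
import math

def yaw_world_up(v, angle):
    # Rotate vector v about the world +Y axis by `angle` radians.
    # Yawing about world up (instead of the camera's local up)
    # keeps the horizon level, so no roll accumulates.
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)
```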
1 | What is the fastest way of drawing simple, textured geomtries and keeping the depth test? I'm looking for a fast way to draw simple 3D geometries that will consist of up to 10 vertices. Each of them will have a texture (though varying between geometries). I also want to store the fragment depth for depth testing. There will be no lighting and only very simple transformations (this actually is for isometric game). So far what I did was trying to achieve top performance using OpenGL for drawing simple quads (what the problem actually boils down to). For my machine (from around 2008), I had the following results Using immediate mode 4800 quads took 120ms Using vertex arrays and VBOs this took around 40 ms What I see as a problem is the bottleneck of glDraw calls. I want to somehow get below 4 5 ms , as there will be really a lot of objects on the screen. Any ideas about how could (or perhaps couldn't) achieve that? |
1 | GLM Euler Angles to Quaternion I hope you know GL Mathematics (GLM) because I've got a problem I can not break: I have a set of Euler Angles and I need to perform smooth interpolation between them. The best way is converting them to Quaternions and applying the SLERP algorithm. The issue I have is how to initialize a glm quaternion with Euler Angles, please? I read the GLM Documentation over and over, but I can not find an appropriate Quaternion constructor signature that would take three Euler Angles. The closest one I found is the angleAxis() function, taking an angle value and an axis for that angle. Note, please, that what I am looking for is a way to parse RotX, RotY, RotZ. For your information, this is the above mentioned angleAxis() function signature detail::tquat<valType> angleAxis(valType const & angle, valType const & x, valType const & y, valType const & z)
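If the GLM version in use has it, glm::quat offers a constructor taking a glm::vec3 of Euler angles in radians (pitch, yaw, roll). Its math, sketched in Python for reference — the XYZ combination order below matches GLM's implementation as I understand it, but is worth verifying against the version at hand:

```python
import math

def euler_to_quat(x, y, z):
    # Same math as glm::quat(glm::vec3(x, y, z)), angles in radians:
    # combined rotation about X (pitch), Y (yaw), Z (roll),
    # returned as (w, x, y, z).
    cx, sx = math.cos(x * 0.5), math.sin(x * 0.5)
    cy, sy = math.cos(y * 0.5), math.sin(y * 0.5)
    cz, sz = math.cos(z * 0.5), math.sin(z * 0.5)
    w  = cx * cy * cz + sx * sy * sz
    qx = sx * cy * cz - cx * sy * sz
    qy = cx * sy * cz + sx * cy * sz
    qz = cx * cy * sz - sx * sy * cz
    return (w, qx, qy, qz)
```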
1 | Texture filtering of look up table in post process shader I am doing post processing by drawing to an FBO and then applying a certain fragment shader when drawing the FBO's texture to the screen. I want to use a look up table texture to apply color grading. Since I am targeting OpenGL ES 2.0 and possibly older PCs, I cannot use 3D textures. Instead I can use a 2D texture as an atlas of the layers of my 3D look up table. What I'm concerned about is correctness and performance when the texture lookup is performed. My look up texture needs to have linear filtering so I don't have to have a full 256^3 sized look up table to cover all hues. My fragment shader would look something like this Round B up and down to get the two regions of the texture to sample from, add offsets accordingly to R G and sample the texture twice with the two offset .rg values. And finally linearly interpolate based on B. But as I understand it, when the GPU encounters a texture2D call on a texture with linear filtering, it will be calculating the input coordinates for those calls for neighboring pixels in parallel to get a derivative. This derivative is used to determine how to sample the texture pixels. Since this is a post process, I don't want two neighboring pixels to influence each other. It could be a black pixel from the edge of a sprite next to a bright blue sky pixel. In the look up table, these two would need to sample distant points. So is the GPU going to decide the derivative is huge (minification) and try to linearly sample and involve all the random in between texels on my look up table? Is there a way to get my linear filter to ignore neighboring pixels and only interpolate from the 4 nearest texels? Sort of like treating everything as magnification? The problem is similar to this question that was asked in regards to HLSL, but I'm targeting OpenGL 3.0 and OpenGL ES 2.0 Color grading, shaders and 3d textures
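To make the two-slice sampling concrete, here is the addressing math in Python (layout assumption: `size` slices laid side by side horizontally; `sample2d` stands in for a bilinearly filtered texture2D fetch; half-texel insets between slices are deliberately omitted to keep it short):

```python
import math

def lut_lookup(sample2d, r, g, b, size):
    # Emulated 3D lookup in a horizontal slice atlas: the blue axis
    # is interpolated manually between two slices, so the hardware
    # filter never touches texels that sit between slices.
    slice_f = b * (size - 1)
    s0 = int(math.floor(slice_f))
    s1 = min(s0 + 1, size - 1)
    t = slice_f - s0
    c0 = sample2d((s0 + r) / size, g)   # fetch in the lower slice
    c1 = sample2d((s1 + r) / size, g)   # fetch in the upper slice
    return tuple(a + (b2 - a) * t for a, b2 in zip(c0, c1))
```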
1 | OpenGL textures look poor I'm having some issues loading in textures in OpenGL, as my textures keep rendering incorrectly or coming out looking muddy. For instance, here I tried to load a 256x256 color spectrum image. On the left is how it looks in OpenGL and on the right is how it looks in an image viewing program As you can see, while the left image resembles the right image, the left image appears to squish the blues and greens, and extend the pinks. I also tried loading in this 512x512 image of a dog and the result came out like this (again, left is OpenGL, right is image viewer) For this image, the image looks like it has lost a lot of its color, resulting in something that looks white washed and like it came out of a 1970s camera. (the fact that it is flipped is fine however since the cube that I am drawing this on has some texture coordinates flipped to accommodate for a different image). I load in these .BMP textures using SOIL, as such glEnable(GL_TEXTURE_2D); GLuint texID = 0; glGenTextures(1, &texID); int height = 0, width = 0; unsigned char *imgData = SOIL_load_image(filePath.c_str(), &width, &height, 0, SOIL_LOAD_AUTO); glBindTexture(GL_TEXTURE_2D, texID); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, imgData); // set texture filtering, gen mip map Then in my fragment shader I do the following to apply the texture #version 330 core in vec2 TexCoord; uniform sampler2D textureSampler; void main() { gl_FragColor = texture2D(textureSampler, TexCoord); }
1 | Efficient way to output display debugging data in a window I'm writing a program in Visual Studio, C++ with OpenGL and for the first time, I think it will be beneficial to see some live data on top of my display render window. To give some scope, I'm developing a 3D world with a ball bouncing and it would be nice to see things like the current velocity and y positions of various objects, amongst other things. What method is a nice, effective way to display this type of information in a window? Are there any libraries, VS settings or third party implementations that can be useful? (please excuse my naivety, I'm used to just cout << some info on a console).
1 | Best strategy to track object hierarchy using groups and obj files I am making a 3D game in OpenGL from scratch. In this game I have a ship with stuff inside it. How can I attach the stuff to the ship in the CAD program and maintain that hierarchy in my own game? For example say I have a fire extinguisher in my ship that mounts on the wall. There are two approaches both with problems. Solution 1 Save fire extinguisher and ship as separate obj files. Problem How can I place the fire extinguisher in the proper place inside my ship in my game? With hundreds of objects manually placing them is completely infeasible. I want to arrange stuff in my CAD and load it into the game and be done. Solution 2 Save fire extinguisher as its own group inside the ship obj file. Problem Now I can't reuse the fire extinguisher in other ships. The obj files for game assets will balloon out of control in size with new instances of reused sub objects. Is there some way I can specify the position of an external object? A 3d point in my ship obj file representing the origin of another obj model? |
1 | opengl matrix multiplication Can someone provide some type of example of multiplying a 4x4 matrix without using loops? typedef struct matrix4 { float data[16]; } m4; can someone provide a sample of how you'd multiply two of these in the opengl column major way?
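A fully unrolled column-major multiply, sketched in Python (a length-16 list with element (row r, column c) at index c*4 + r, matching how OpenGL lays out matrices; the same 16 expressions translate line for line into C with the struct above):

```python
def mul4(a, b):
    # o = a * b for 4x4 column-major matrices, no loops:
    # o[c*4 + r] = sum over k of a[k*4 + r] * b[c*4 + k].
    o = [0.0] * 16
    o[0]  = a[0]*b[0]  + a[4]*b[1]  + a[8]*b[2]   + a[12]*b[3]
    o[1]  = a[1]*b[0]  + a[5]*b[1]  + a[9]*b[2]   + a[13]*b[3]
    o[2]  = a[2]*b[0]  + a[6]*b[1]  + a[10]*b[2]  + a[14]*b[3]
    o[3]  = a[3]*b[0]  + a[7]*b[1]  + a[11]*b[2]  + a[15]*b[3]
    o[4]  = a[0]*b[4]  + a[4]*b[5]  + a[8]*b[6]   + a[12]*b[7]
    o[5]  = a[1]*b[4]  + a[5]*b[5]  + a[9]*b[6]   + a[13]*b[7]
    o[6]  = a[2]*b[4]  + a[6]*b[5]  + a[10]*b[6]  + a[14]*b[7]
    o[7]  = a[3]*b[4]  + a[7]*b[5]  + a[11]*b[6]  + a[15]*b[7]
    o[8]  = a[0]*b[8]  + a[4]*b[9]  + a[8]*b[10]  + a[12]*b[11]
    o[9]  = a[1]*b[8]  + a[5]*b[9]  + a[9]*b[10]  + a[13]*b[11]
    o[10] = a[2]*b[8]  + a[6]*b[9]  + a[10]*b[10] + a[14]*b[11]
    o[11] = a[3]*b[8]  + a[7]*b[9]  + a[11]*b[10] + a[15]*b[11]
    o[12] = a[0]*b[12] + a[4]*b[13] + a[8]*b[14]  + a[12]*b[15]
    o[13] = a[1]*b[12] + a[5]*b[13] + a[9]*b[14]  + a[13]*b[15]
    o[14] = a[2]*b[12] + a[6]*b[13] + a[10]*b[14] + a[14]*b[15]
    o[15] = a[3]*b[12] + a[7]*b[13] + a[11]*b[14] + a[15]*b[15]
    return o
```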
1 | (LWJGL) Pixel Unpack Buffer Object is Disabled? (glTexImage2D) I am trying to create a render target for my game so that I can re render at a different screen size. But I am receiving the following error Exception in thread "main" org.lwjgl.opengl.OpenGLException Cannot use offsets when Pixel Unpack Buffer Object is disabled Here is the source code for my Render method // clear screen GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // Start FBO Rendering Code // The framebuffer, which regroups 0, 1, or more textures, and 0 or 1 depth buffer. int FramebufferName = GL30.glGenFramebuffers(); GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, FramebufferName); // The texture we're going to render to int renderedTexture = glGenTextures(); // "Bind" the newly created texture all future texture functions will modify this texture glBindTexture(GL_TEXTURE_2D, renderedTexture); // Give an empty image to OpenGL ( the last "0" ) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, 0); // Poor filtering. Needed! glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Set "renderedTexture" as our colour attachement 0 GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, renderedTexture, 0); // Set the list of draw buffers. IntBuffer drawBuffer = BufferUtils.createIntBuffer(20 * 20); GL20.glDrawBuffers(drawBuffer); // Always check that our framebuffer is ok if(GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE) { System.out.println("Framebuffer was not created successfully! Exiting!"); return; } // Resets the current viewport GL11.glViewport(0, 0, scaleWidth * scale, scaleHeight * scale); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); // let subsystem paint if (callback != null) callback.frameRendering(); // update window contents Display.update(); It is crashing on this line glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, 0); I am not really sure why it is crashing and looking around I have not been able to find out why. Any help or insight would be greatly welcome.
1 | Orthographic Projection Issue I have a problem with my Ortho Matrix. The engine uses the perspective projection fine but for some reason the Ortho matrix is messed up. (See screenshots below). Can anyone understand what is happening here? At the min I am taking the Projection matrix Transform (Translate, rotate, scale) and passing to the Vertex shader to multiply the Vertices by it. VIDEO Shows the same scene, rotating on the Y axis. http youtu.be 2feiZAIM9Y0 void Matrix4f InitOrthoProjTransform(float left, float right, float top, float bottom, float zNear, float zFar) m 0 0 2 (right left) m 0 1 0 m 0 2 0 m 0 3 0 m 1 0 0 m 1 1 2 (top bottom) m 1 2 0 m 1 3 0 m 2 0 0 m 2 1 0 m 2 2 1 (zFar zNear) m 2 3 0 m 3 0 (right left) (right left) m 3 1 (top bottom) (top bottom) m 3 2 zNear (zFar zNear) m 3 3 1 This is what happens with Ortho Matrix This is the Perspective Matrix |
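For comparison, a reference glOrtho-style matrix sketched in Python (column-major, z mapped to [-1, 1] — note the snippet above maps z to a 0-to-1 range instead, which is a D3D-style convention; helper names are mine). Checking that each box corner lands on ±1 is a quick way to validate every element:

```python
def ortho(l, r, b, t, n, f):
    # glOrtho-style projection, column-major, flat length-16 list.
    return [2 / (r - l), 0, 0, 0,
            0, 2 / (t - b), 0, 0,
            0, 0, -2 / (f - n), 0,
            -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1]

def transform(m, x, y, z):
    # Multiply the column-major matrix by the point (x, y, z, 1).
    return (m[0] * x + m[4] * y + m[8] * z + m[12],
            m[1] * x + m[5] * y + m[9] * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14])
```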
1 | Texture artifacts depending on texture size I get some strange artifacting with textures depending on their size. I run OpenGL 3.3 with a GTX 580 so it should definitely support non power of two textures. I've narrowed down the problem specifically to the texture's size; I've tried checking if transparency had anything to do with it, color channels etc. As you can see, the 512x512 and 300x300 textures look just fine but the 437x437 is all distorted. What could be the cause of this and how can it be fixed? I could of course just stick to power of two textures which seem to work fine, but since this is a personal educational project I really want to understand what's going on.
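One classic cause of exactly this symptom (not necessarily the cause here) is row padding: with the default GL_UNPACK_ALIGNMENT of 4, glTexImage2D expects each pixel row to start on a 4-byte boundary, so a tightly packed 437-wide RGB image (1311 bytes per row) is read with a per-row skew, while 512- and 300-wide RGB rows happen to be multiples of 4 already. The arithmetic, sketched in Python:

```python
def row_stride(width, bytes_per_pixel, alignment=4):
    # Bytes per row that OpenGL expects under a given
    # GL_UNPACK_ALIGNMENT: the tight row size rounded up to the
    # next multiple of `alignment`.
    tight = width * bytes_per_pixel
    return (tight + alignment - 1) // alignment * alignment
```

If the stride differs from the tight size, either call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before uploading or pad the rows.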
1 | Deferred rendering camera inside point light's sphere of effect I'm trying out deferred rendering and I'm using the tutorials at http ogldev.atspace.co.uk. I've got the basics working and I'm currently trying to implement the final step from tutorial 37 (http ogldev.atspace.co.uk www tutorial37 tutorial37.html). In the previous tutorial, the author explains how to render point lights as spheres in a second render pass, combing the effect of the lights with the data generated in the geometry buffers in the first pass, and I've got this working already. In tutorial 37, he explains how to use the stencil buffer to limit the influence of the lighting to only light up objects actually inside the light sphere. He describes how the stencil buffer is set up to decrease the stencil value when the depth test fails for front facing polygons and increase it for back facing polygons, resulting in the stencil buffer containing zeroes for all pixels belonging to objects outside the light sphere (as these will either be in front of the light sphere, leading to the depth test failing both for the front facing and the back facing polygons of the lights sphere, causing the stencil value to first decrease by one and then increase by one, or behind it, leading to neither front facing nor back facing polygons of the light sphere failing the test so the stencil value remains unchanged). Objects inside the sphere though cause the depth test to fail only for back facing polygons, leading to the stencil value being not zero. This will later be used to only render the light sphere on pixels for which the stencil value is not zero. So far, so good. But then there is a remark saying that the case when the camera is inside the light sphere is left as an exercise for the reader, and this comment puzzles me. Won't it still work the same way? If the camera is inside the light sphere, then objects in front of the light sphere will not be rendered at all (since they are behind the camera). 
Objects behind it will still not cause the depth test to fail for the back facing polygons of the light sphere (and the front facing polygons will of course not be rendered) so the stencil values for those pixels will still be zero. As for objects inside the light sphere, the depth test for the back facing polygons will fail, causing the stencil value to be different from zero. It seems to me that no special consideration will need to be taken for the case when the camera is inside the light sphere, but maybe I'm missing something. Can someone please help me figure this out? |
1 | How can I reduce the data sent to the GPU for a voxel? I'm making a minecraft style game where each chunk each chunk has buffers holding the position of all visible blocks and texture atlas coordinates of the 36 vertices that are needed to draw the triangles. I have 8 vertices and an EBO that I use for the triangles so it takes up sizeof(GLfloat) 3 8 sizeof(GLuint) 36 sizeof(Glfloat) 2 36 so 528 bytes. that seems like a lot per block meaning that if I am rendering around 16 16 chunks(I use an stl map not an array) 16 16 256 chunks that have around 300 visible blocks that means I am sending about 40 mb of data to the GPU each frame. It runs smoothly now but could this end up being a problem? or is there a way to reduce this? one other thing, I tried to change my program from using gldraw arrays and having 720 bytes for each block to using an EBO for just the position. how can you control what vbo the ebo operates on? for example the EBO would have indices to the points of the cube and texture coordinates but still have an instance array for the position |
1 | OpenGL why do I have to set a normal with glNormal? I'm learning some basics of OpenGL but I'm wondering why there is a call glNormal to set the normal of vertices. If I create a simple triangle like this glBegin(GL TRIANGLES) glVertex3f(0,0,0) glVertex3f(1,0,0) glVertex3f(0,1,0) glEnd() Shouldn't the normals be defined implicitly by the type of the geometric primitive? If I don't set a normal will OpenGL calculate it? |
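For reference, fixed-function OpenGL never derives normals from the primitive; glNormal exists precisely because you have to supply them yourself (per vertex, which is also what allows smooth shading where a vertex's normal differs from the face plane). The flat face normal you would otherwise compute by hand is the normalized cross product of two edges, sketched in Python:

```python
import math

def face_normal(a, b, c):
    # Normal of triangle (a, b, c): cross product of edges
    # (b - a) and (c - a), normalized. Winding order determines
    # which side the normal points to.
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    l = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / l, n[1] / l, n[2] / l)
```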
1 | Problem with 2d rotation in OpenGL I have a function to perform sprite rotation void Sprite::rotateSprite(float angle) { // Making an array of vertices: 6 for 2 triangles Vector2<gamePos> halfDims(rect.w / 2, rect.h / 2); Vector2<gamePos> bl(-halfDims.x, -halfDims.y); Vector2<gamePos> tl(-halfDims.x, halfDims.y); Vector2<gamePos> br(halfDims.x, -halfDims.y); Vector2<gamePos> tr(halfDims.x, halfDims.y); bl = rotatePoint(bl, angle) + halfDims; br = rotatePoint(br, angle) + halfDims; tl = rotatePoint(tl, angle) + halfDims; tr = rotatePoint(tr, angle) + halfDims; // 1st triangle // Top right dataPointer->vertices[0].setPosition(rect.x + tr.x, rect.y + tr.y); // Top left dataPointer->vertices[1].setPosition(rect.x + tl.x, rect.y + tl.y); // Bottom left dataPointer->vertices[2].setPosition(rect.x + bl.x, rect.y + bl.y); // 2nd triangle // Bottom left dataPointer->vertices[3].setPosition(rect.x + bl.x, rect.y + bl.y); // Bottom right dataPointer->vertices[4].setPosition(rect.x + br.x, rect.y + br.y); // Top right dataPointer->vertices[5].setPosition(rect.x + tr.x, rect.y + tr.y); } Vector2<gamePos> Sprite::rotatePoint(Vector2<gamePos> pos, float &angle) { Vector2<gamePos> newv; newv.x = pos.x * cos(angle) - pos.y * sin(angle); newv.y = pos.y * cos(angle) + pos.x * sin(angle); return newv; } And the result is Am I doing something wrong? It happens also when I put in a small angle (even angle = 1) Thanks for help.
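One classic cause of "even angle = 1 distorts the quad" (not necessarily the cause here) is passing degrees to cos()/sin(), which take radians — an angle of 1 then means about 57 degrees. A hypothetical Python version of rotatePoint with an explicit conversion:

```python
import math

def rotate_point(x, y, degrees):
    # cos()/sin() take radians; convert explicitly so that 1 means
    # one degree, not ~57 degrees. Counter-clockwise rotation about
    # the origin.
    a = math.radians(degrees)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```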
1 | Creating a glitch effect similar to Watch Dogs I'm currently working on a LibGDX game. When a user does something wrong, I would like all the graphics on the screen to jitter very similar to the glitch distort effect seen in the game Watch Dogs (See Below). My question is this can this effect be achieved in real time by writing a shader? If so are there any references online on how to do this? (I've had a quick Google but all I could find is how to achieve this effect in Photoshop After Effects). Thank you for your help. Screen jitter https www.youtube.com watch?v EYkqC9uI8Nc Text glitch effect https www.youtube.com watch?v Wj26Wp2AH U |
1 | Move Camera Freely Around Object While Looking at It I've got a 3D model loaded (a planet) and I have a camera that I want to allow the user to move freely around it. I have no problem getting the camera to orbit the planet around either the x or y axis. My problem is when I try to move the camera on a different axis I have no idea how to go about doing it. I am using OpenGL on Android with the libGDX library. I want the camera to orbit the planet in the direction that the user swipes their finger on the screen. |
1 | Stencil buffer appears to not be decrementing values correctly I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running in to. This is what I'm doing A widget can pass a rectangle to the the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below static void drawStencil(Rect amp rect, unsigned int ref) Save previous values of the color and depth masks GLboolean colorMask 4 GLboolean depthMask glGetBooleanv(GL COLOR WRITEMASK, colorMask) glGetBooleanv(GL DEPTH WRITEMASK, amp depthMask) Turn off drawing glColorMask(0, 0, 0, 0) glDepthMask(0) Draw vertices here ... 
Turn everything back on glColorMask(colorMask 0 , colorMask 1 , colorMask 2 , colorMask 3 ) glDepthMask(depthMask) Only render pixels in areas where the stencil buffer value ref glStencilFunc(GL EQUAL, ref, 0xFF) glStencilOp(GL KEEP, GL KEEP, GL KEEP) void pushScissor(Rect rect) increment things only at the current stencil stack level glStencilFunc(GL ALWAYS, s scissorStack.size(), 0xFF) glStencilOp(GL INCR, GL INCR, GL INCR) s scissorStack.push back(rect) drawStencil(rect, s ScissorStack.size()) void popScissor() undo what was done in the previous push, decrement things only at the current stencil stack level glStencilFunc(GL ALWAYS, s scissorStack.size(), 0xFF) glStencilOp(GL DECR, GL DECR, GL DECR) Rect rect s scissorStack.back() s scissorStack.pop back() drawStencil(rect, s scissorStack.size()) And this is how it's being used by the Widgets if (m clip) pushScissor(m rect) drawInternal(target, states) for (auto child m children) target.draw( child, states) if (m clip) popScissor() This is the result of the above code There are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop, and when it comes time to render the button, since those pixels don't have the right stencil value it doesn't draw over. But I can't figure out whats wrong with my code that would cause that to happen. |
1 | OpenGL behaviour depending on the graphics card? This is something that never happened to me before. I have an OpenGL code that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, etc... I wanted to check how the performance of my code improves using a faster graphics card (both new and old are NVIDIA, using always the NVIDIA development drivers). But now I have found that once I run the code using the new graphics card, it behaves completely different (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really look into what has changed, but I am guessing that some OpenGL states are, by default, set different. Is this possible? Have you ever found different OpenGL GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging back the old one, push all OpenGL default states, and compare with the ones I initially get using the new card..) Edit 1 The graphics cards are NVIDIA Quadro GX7300 (in which my code works OK) and NVIDIA GeForce GTX 560 Ti (in which the results changes fails) Edit 2 I have commented out a lot of my code, and apparently the strange behaviour has nothing to do with texture handling. A simple chessboard like floor looks differents. The diagonal white lines did not appear using the old NVIDIA Quadro GX7300. Any guess what OpenGL related thing could be causing this? Figure 1 Edit 3 I have now fixed the issue commented on the previous edit, regarding the weird unwanted diagonal thin whit lines. As I commented below, I had to remove the glEnable(GL POLYGON SMOOTH) , which was affecting the NVIDIA GeForce GTX 560 for whatever reason (probably due to reasons explained by mh01 in his answer. However, I am still facing a "texture blending" problem when using the NVIDIA GeForce GTX 560. 
I have a 3D model that is being textured using a shader that blends 8 different images to compute the right texture, depending on where the camera is at that particular moment. The resulting texture is then a combination of different images, and ideally they were blended nicely, using a set of blending weights computed each time the camera moves. The blending weights are still well computed, but the resulting texture is wrong. I am guessing that the GL BLEND function is somehow behaving different, but I have checked it in both graphics cards and it is actually the same one. I have no idea what else can be involved in getting this wrong result As you can imagine, the black line is where two original textures are being blended in order to get a seamless texture. If I use the exact some code and a NVIDIA Quadro GX730, the shader works as expected. However, when I use the NVIDIA GeForce GTX 560, the texture blending goes wrong. Any guess? This is the GLSL shader I am using. |
1 | How to draw realtime 3D amoebas in OpenGL? How would one draw 3D amoebas in real time in OpenGL? The key components I'm looking for are Curved closed surface that changes shape Generally transparent Something translucent inside showing the volume is occupied It has to be in real time, and be able to handle many (100's) of these moving and animating on screen simultaneously. What techniques might one use to accomplish this? |
1 | Calculate normal angle in screen space I'm working on adding billboarded sprites to a game engine, but the engine allows for walking on curved terrain, like spheres. In order to look like the player is walking on the terrain, I want to start with the billboard, then rotate it around its origin (bottom centre) so that its up vector matches the screenspace normal of the terrain. Here's a diagram of the general idea The billboarding works fine passed the camera's rotation matrix to the sprite, and the poly quad's 0,0,0 is at the bottom centre of the canvas. How can I get the angle between the normal and screenspace up? I think it's a matter of projecting the normal with the MVP matrix, to get it as a 2D screenspace vector to calculate the angle from. But I'm not having much luck attempting to implement it. The sprites take their positioning from the centre of the tiles, along with the normal. I'd also like to move the sprites down in the rotated axis, so that they cover the tile they're standing on. Any other methods are welcome too. |
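The projection idea can be sketched like this in Python (helper names are mine): transform the sprite origin and the normal's tip with the full MVP, do the perspective divide, and take atan2 of the 2D difference measured against screen-space up.

```python
import math

def project(m, p):
    # m: column-major 4x4 as a length-16 list, p: (x, y, z).
    # Returns the NDC (x, y) after the perspective divide.
    x, y, z = p
    cx = m[0] * x + m[4] * y + m[8] * z + m[12]
    cy = m[1] * x + m[5] * y + m[9] * z + m[13]
    cw = m[3] * x + m[7] * y + m[11] * z + m[15]
    return (cx / cw, cy / cw)

def screen_space_tilt(mvp, origin, normal):
    # Angle (radians) between screen-space up and the projected
    # normal at the sprite's origin; 0 means the terrain normal
    # points straight up on screen.
    ox, oy = project(mvp, origin)
    tx, ty = project(mvp, (origin[0] + normal[0],
                           origin[1] + normal[1],
                           origin[2] + normal[2]))
    # atan2(x, y) rather than atan2(y, x): zero at "up",
    # positive clockwise.
    return math.atan2(tx - ox, ty - oy)
```

The returned angle can be fed straight into the billboard's roll after the camera-facing rotation.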
1 | glOrthof not being applied I have had a problem with OpenGL where glOrthof is not being applied, leading to my frame having the default 1 1 1 ratio. Here is the code initializing it public void reshape(GLAutoDrawable glAutoDrawable, int i, int i1, int i2, int i3) GL2 gl glAutoDrawable.getGL().getGL2() if(window.getWidth()! screenWidth window.getHeight()! screenHeight)window.setSize(screenWidth,screenHeight) unitsTall window.getHeight() (window.getWidth() unitsTall) gl.glMatrixMode(GL2.GL PROJECTION) gl.glLoadIdentity() gl.glOrthof(0.0f, unitsWide, 0.0f, unitsTall, 0.0f, 1.0f) gl.glMatrixMode(GL2.GL MODELVIEW) unitsWide is equal to 100. Here is the code for drawing the rectangle public static void drawRect(float x,float y,float width,float height) GL2 gl Render.getGL2() gl.glRotatef( rotation,0,0,1) Rotation needed to be reversed gl.glColor4f(red,green,blue,alpha) gl.glBegin(GL2.GL QUADS) gl.glVertex2f(x,y) gl.glVertex2f(x width,y) gl.glVertex2f(x width,y height) gl.glVertex2f(x,y height) gl.glEnd() gl.glFlush() gl.glRotatef(rotation,0,0,1) The "Render" class is the class with all of the main methods for OpenGL. Here is the code that I used for drawing the rectangle Graphics.drawRect(0,0,0.5f,1) Finally, here is a screenshot of what it looks like when this code executes I have looked around and have not found any other problems like this. Please tell me what I might be doing wrong. |
1 | Texture with transparency not rendered correctly in LibGDX The title might be a bit misleading, but I'm having a hard time explaining the problem, so I'll try with pictures.

Same tree from the opposite side:

I'm trying to create a voxel game, and at the moment I'm trying to get textures with transparency working. The mesh is generated for each chunk individually, so every chunk has its own ModelInstance that is rendered. The result should be like in the second picture (almost): every block behind the transparent texture is rendered, but that's not the case for every side. The relevant code for the transparent textures:

    Mesh mesh = new Mesh(true, meshDataBlended.sizeEstimate,
            (int) Math.round(meshDataBlended.sizeEstimate * 1.5),
            new VertexAttribute(VertexAttributes.Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE),
            new VertexAttribute(VertexAttributes.Usage.Normal, 3, ShaderProgram.NORMAL_ATTRIBUTE),
            new VertexAttribute(VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE + "0"));

    BlendingAttribute blendingAttribute = new BlendingAttribute();
    blendingAttribute.blended = false;
    blendingAttribute.sourceFunction = GL20.GL_SRC_ALPHA;
    blendingAttribute.destFunction = GL20.GL_ONE_MINUS_SRC_ALPHA;

    mesh.setVertices(meshDataBlended.vertices);
    mesh.setIndices(meshDataBlended.indices);

    Node terrainNode = modelBuilder.node();
    terrainNode.id = "terrain blended";
    modelBuilder.part(
            "terrain blended",
            mesh,
            GL20.GL_TRIANGLES,
            new Material(TextureAttribute.createDiffuse(World.TEXTURE), blendingAttribute)
    );

I'm pretty new to OpenGL, so I'm totally lost with this problem. I have no idea why it behaves like this, and I don't even know where to find the information I need. So far I just know that it depends on what is rendered first (that's why it works from one side but not the opposite side). Sorry for using Minecraft textures, it's just for testing purposes. Let me know if additional information is needed and I'll add it.

Thanks, Blakk

EDIT: Setting blended to true makes everything behind the leaves visible except other leaves (still only from specific sides).
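The "depends on what is rendered first" observation is the key: alpha blending in OpenGL is order-dependent. A standard workaround is to draw all opaque geometry first, then each frame sort the transparent chunk meshes back-to-front by distance to the camera before drawing them. A minimal, engine-agnostic sketch of that sort (the Chunk class and its field names are assumptions for illustration, not LibGDX API):

```java
import java.util.*;

class Chunk {
    final double cx, cy, cz; // chunk centre in world space (hypothetical fields)
    Chunk(double cx, double cy, double cz) { this.cx = cx; this.cy = cy; this.cz = cz; }
    // Squared distance to the camera; square roots are unnecessary for ordering.
    double dist2(double[] cam) {
        double dx = cx - cam[0], dy = cy - cam[1], dz = cz - cam[2];
        return dx * dx + dy * dy + dz * dz;
    }
}

public class TransparentSort {
    // Return the transparent chunks ordered farthest-first, so that when
    // drawn in this order, blending always finds the geometry behind each
    // surface already in the framebuffer.
    static List<Chunk> sortBackToFront(List<Chunk> chunks, double[] cam) {
        List<Chunk> out = new ArrayList<>(chunks);
        out.sort(Comparator.comparingDouble((Chunk c) -> c.dist2(cam)).reversed());
        return out;
    }

    public static void main(String[] args) {
        List<Chunk> sorted = sortBackToFront(
                Arrays.asList(new Chunk(1, 0, 0), new Chunk(5, 0, 0), new Chunk(3, 0, 0)),
                new double[]{0, 0, 0});
        for (Chunk c : sorted) System.out.println(c.cx); // 5.0, 3.0, 1.0
    }
}
```

For cut-out textures like leaves, an alpha test (discarding fragments below a threshold, e.g. via LibGDX's FloatAttribute.createAlphaTest) can sidestep the sorting problem entirely, since fully transparent texels are discarded instead of blended; this also explains why blended = true still fails between leaves of the same mesh, where no per-face sorting happens.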
1 | Why do most game devs prefer OGL for OS X and D3D for Windows? Today I decided to check what the Diablo 3 developers used for graphics: OpenGL or Direct3D? My mind was completely blown. For Windows they used D3D, and for OS X they used OGL. I did some research and found that most game developers do the same. This choice doesn't make sense to me. Writing one game engine with OpenGL for OS X and then rewriting the same thing for Windows with D3D seems like a lot of unnecessary work. Wouldn't it be easier to just take OpenGL and make a game engine that works on both operating systems?