_id (int64, 0 to 49) | text (string, lengths 71 to 4.19k) |
---|---|
1 | How to draw 2D above a 3D scene? I have an OpenGL (3.3) GLEW application. I want to draw a black vertical rectangle at the left side of the window for writing some information on it, like FPS (I use a GLEW function for drawing text). I have a common pipeline, like this: m_transformation = PersProjTrans * CameraRotateTrans * CameraTranslationTrans * TranslationTrans * RotateTrans * ScaleTrans. I know how it works, but I can't figure out how to draw the rectangle as a flat 2D overlay on the window. I would be glad of any advice. |
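A hedged sketch of the usual approach for this kind of 2D-over-3D overlay, assuming a GLM-based GL 3.3 setup; `drawScene`, `drawQuad` and `hudShader` are hypothetical helpers, not from the question. Render the 3D pass with the perspective pipeline first, then switch to an orthographic projection in pixel coordinates, disable depth testing, and draw the panel:

```cpp
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

extern GLuint hudShader;                                   // hypothetical: simple flat-colour shader
extern void drawScene();                                   // hypothetical: the existing 3D pass
extern void drawQuad(float x, float y, float w, float h);  // hypothetical: screen-space quad

void renderFrame(int windowWidth, int windowHeight)
{
    drawScene();                                    // normal 3D pass (perspective pipeline)

    glDisable(GL_DEPTH_TEST);                       // overlay must not be occluded by the scene
    glm::mat4 ortho = glm::ortho(0.0f, (float)windowWidth,
                                 (float)windowHeight, 0.0f); // pixel coordinates, top-left origin
    glUseProgram(hudShader);
    glUniformMatrix4fv(glGetUniformLocation(hudShader, "u_projection"),
                       1, GL_FALSE, &ortho[0][0]);
    drawQuad(0.0f, 0.0f, 200.0f, (float)windowHeight);  // black panel on the left edge
    glEnable(GL_DEPTH_TEST);
}
```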
1 | What is the practical use of IBOs / degenerate vertices in OpenGL? Vertices in 3D models can get cut in the process of optimizing 3D geometry (degenerate vertices) by 3D graphics software (Blender, ...) when exporting, because they aren't needed when a vertex is reused for multiple triangles. (In the current case the 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the data for each vertex that is cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals and tangents per vertex, which are also cut. To use complex shader techniques, must IBOs not be used? Or is there a way to use IBOs and retain the per-vertex data that was originally lost? |
1 | How to reconcile depth ordering with minimal shader context changes? We generally want to minimise shader program switches (glUseProgram and all associated context changes) for the sake of performance. AFAIK it is not uncommon to render by shader program, i.e. group draw calls of all mesh instances that use the same set of shaders or vertex shader at least (i.e. that have the same set of vertex attributes), for this reason. How then do we go about doing both depth ordering and shader program changes, quickly? Because it seems to me one either orders primarily by program (thus minimal program context changes), or primarily by depth (frequent context changes), and that they're mutually exclusive in terms of maximising performance. PS. In my current work I'm not using the hardware Z buffer, instead I'm using the painter's algorithm prior to making all draw calls... If this affects the answer to the question at all, I'd be happy to hear solutions for both sorts of depth ordering. |
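One widely used answer (a hedged sketch, not from the question) is to give every draw call a packed sort key and sort once per frame: for opaque geometry the program ID goes in the high bits and a quantized depth in the low bits, so program switches are minimized while draws stay roughly front-to-back inside each program; for blended geometry the priority is reversed so depth wins. The struct and field names below are illustrative.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawCall {
    uint32_t shaderId;        // program (or program-group) identifier
    float    viewSpaceDepth;  // assumed non-negative distance from the camera
    // ... mesh / material handles would live here
};

// Shader ID in the high bits, 24-bit quantized depth in the low bits.
static uint64_t makeOpaqueKey(const DrawCall& d, float maxDepth)
{
    float t = std::min(std::max(d.viewSpaceDepth / maxDepth, 0.0f), 1.0f);
    return ((uint64_t)d.shaderId << 24) | (uint64_t)(t * 0xFFFFFF);
}

void sortOpaque(std::vector<DrawCall>& calls, float maxDepth)
{
    std::sort(calls.begin(), calls.end(),
              [maxDepth](const DrawCall& a, const DrawCall& b) {
                  return makeOpaqueKey(a, maxDepth) < makeOpaqueKey(b, maxDepth);
              });
}
```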
1 | Reduce texture switches in UI I'm working on game which uses LibGDX's Stage2d for GUI. All textures for UI are packed into one texture atlas. Fonts (BitmapFont) are generated in runtime by gdx freetype for different locales and font sizes (at least 6 locales and 4 font sizes for each locale) and then placed into separate texture. Game UI contains much different elements (strategy game with lots of numbers) buttons, icons, text labels, which usually alternate between each other and trigger many texture switches when drawn. I wonder if there's any way to reduce the number of texture switches as it sometimes takes up to 5ms per frame for stage.draw(). The only way I see is to draw fonts only when all UI textures are drawn, when it is possible, but it will produce awful code and will harm the hierarchy of elements in stage. I didn't merge textures (but thought of it), because Textures use different filtering (MipMapLinearNearest for UI textures, Linear for fonts) and it will require manual texture filtering switches or mess with stage2d overriden Label class. UI texture already has a size of 2048x2048px, and I can't add all font size textures to it as it will require higher texture sizes, which are not supported by many old devices. Is there any good way of solving this issue? |
1 | THREE.ShaderMaterial cannot perform antialiasing. I created a ShaderMaterial to draw a box in three.js using the following key code: // magnetic orientation let magnetMaterial = new THREE.ShaderMaterial({ uniforms: { orientation: { value: new THREE.Vector3(1) } }, vertexShader: `varying vec3 vPos; void main() { vPos = position; gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0); }`, // https://stackoverflow.com/questions/15688232/check-which-side-of-a-plane-points-are-on fragmentShader: `#extension GL_OES_standard_derivatives : enable\n varying vec3 vPos; uniform vec3 orientation; void main() { float a = dot(orientation, vPos); gl_FragColor = vec4(step(a, 0.), 0, step(0., a), 1.0); }` }); See also the online demo. Even if I set WebGLRenderer.antialias to true, there's heavy aliasing if the box is not axis aligned. I found Point Sampled AA and How do I implement anti-aliasing in OpenGL?, but I don't know how to apply them. In addition, the custom shader cannot work with lights, if any. In order to make the custom color work with the built-in effect (light), I want to try post-processing via EffectComposer. Can anyone help me out? Related: Add scene lights to custom vertex/fragment shaders and shader materials?, the last shader I added point lights to (three.js), Adding lighting to ShaderMaterial, How do you combine a ShaderMaterial and LambertMaterial? |
1 | Is there a way to use other fonts besides the default ones in OpenGLUT? I'm using OpenGLUT functions like glutBitmapString to render sentences and words in a game. However, there is a limited set of fonts to use and I need some bigger fonts. Is there a way to add new fonts to this API? Thanks |
1 | Projection Matrix Breaks My Rectangle This is my vertex shader, shown below. version 330 core in vec3 a position in vec4 a colour FOV 70, near plane 0.1, far plane 1000 const mat4 u projection mat4( 1.428148, 0.0, 0.0, 0.0, 0.0, 1.428148, 0.0, 0.0, 0.0, 0.0, 1.0001999, 0.20002, 0.0, 0.0, 1.0, 0.0 ) uniform mat4 u projection uniform mat4 u view uniform mat4 u transformation out vec4 v colour void main() gl Position u projection u transformation vec4(a position, 1) v colour a colour Whenever I take out u projection, my square appears. When I add it back, the square is malformed. The vertices of my square are as follows, aka the contents of a position. float vertices 0, 0, 0.5f, 0.5f, 0.5f, 0.5f, 0, 0.5f, 0.5f, 0, 0, 0.5f, 0f, 0.5f, 0.5f, 0, 0.5f, 0.5f The position of the square is (0, 0, 0), the rotation is (0, 0, 0) and the scale is 1. These are computed into u transformation and uploaded. This works perfectly. If I change the 1.0001999 and the last 0 to 1 part to 1 then the square is not hidden. void bindAttribute(int index, String name) GL20.glBindAttribLocation(program, index, name) bindAttribute(0, "a position") bindAttribute(1, "a colour") |
1 | How do I draw an animated object in OpenGL ES? I have a VBO, which I initialise like this (just an example): - (void)setupVBOs { GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW); GLuint indexBuffer; glGenBuffers(1, &indexBuffer); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW); } As you can see, I'm using GL_STATIC_DRAW, which is good for visually unchanging objects (not including translations and such). How do I draw animated objects though? I mean things that might be changed by user interaction. This video is a good example. It is obvious OpenGL is being used, as the vertices are manipulated by gestures. How is it done? By changing the x/y/z coordinates on every touch? Are they using GL_DYNAMIC_DRAW? Is this hard? |
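A hedged sketch of the typical pattern for geometry that changes every frame (or on every touch): allocate the buffer once with GL_DYNAMIC_DRAW and overwrite it with glBufferSubData whenever the CPU-side copy changes. The Vertex struct and array size are illustrative, not from the question.

```cpp
#include <GLES2/gl2.h>   // on iOS this would be <OpenGLES/ES2/gl.h>; the calls are identical

typedef struct { float position[3]; float color[4]; } Vertex;  // illustrative layout

static Vertex vertices[256];   // CPU-side copy that gesture / animation code modifies
static GLuint vertexBuffer;

void createDynamicVBO(void)
{
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
}

void updateDynamicVBO(void)    // call after the CPU-side vertices have been changed
{
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
}
```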
1 | Depth Stencil Buffer: In OpenGL, what is the difference between GL_DEPTH_COMPONENT and GL_DEPTH_STENCIL? I have looked around and have been unable to find a clear explanation. Information on their usage with GLSL would also be appreciated (mainly GL_DEPTH_STENCIL). Thanks for any help. |
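For illustration, a hedged sketch of the practical difference: GL_DEPTH_COMPONENT formats store depth only, while GL_DEPTH_STENCIL formats (typically GL_DEPTH24_STENCIL8) pack depth and stencil into one image and are attached to GL_DEPTH_STENCIL_ATTACHMENT. Sizes and usage below are illustrative.

```cpp
#include <GL/glew.h>

// Depth-only: attach with GL_DEPTH_ATTACHMENT; in GLSL it samples as a single float depth.
GLuint makeDepthOnlyTexture(int w, int h)
{
    GLuint tex; glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    return tex;
}

// Packed depth + stencil: attach with GL_DEPTH_STENCIL_ATTACHMENT.
GLuint makeDepthStencilTexture(int w, int h)
{
    GLuint tex; glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, w, h, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
    return tex;
}
```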
1 | How are walkable regions determined, and movements handled on a 3D map? I've created a 3D scene with both indoor and outdoor parts. The outdoor part has bridges and caves as well. How is it possible to handle player movements on it? My intuition is: to determine the walkable regions, I could create a large grid. If the normal under a segment is not too steep, then it is walkable. If the difference between the heights of two neighboring segments is not large, then the player can move between them. My solution is far from optimal. It would consume a lot of memory, and it can't handle situations when there are multiple walkable regions at the same XZ position (for example a bridge over a valley). How can I manage movements effectively? How is it solved in game engines like Unreal or Unity? |
1 | Fragment shader directional light positioning with camera. I'm trying to set up directional lighting in the fragment shader, so the direction of my light moves with the camera position. #version 150 core uniform sampler2D diffuseTex; uniform vec4 lightColour; uniform vec3 lightDirection; vec3 LNorm = normalize(lightDirection); vec3 normal = normalize(IN.normal); vec3 calColour = lightColour[i].rgb * intensity; gl_FragColor = vec4(diffuse.rbg * calColour, diffuse.a); It lights the entire scene. |
1 | Parts of 3D model disappear during animation I'm trying to properly play an animation done in Blender 2.8 inside my OpenGL application. When the animation is running, it happens that at some particular frames, some parts of the 3D model "disappear", by "disappear" I mean that they obviously get relocated at some incorrect position and rotation in the 3D scene. The model is an FBX file. Has anyone got an idea about what could be the possible reasons for that? Maybe some wrong settings in Blender? Thanks in advance! |
1 | Calculating Per Vertex Normal in Geometry Shader I am able to calculate normals per face in my Geometry Shader but i want to calculate per vertex normal for smooth shading. My Geometry shader is version 430 core layout ( triangles ) in layout ( triangle strip, max vertices 3 ) out out vec3 normal out uniform mat4 projectionMatrix uniform mat4 viewMatrix uniform mat4 modelTranslationMatrix uniform mat4 modelRotationXMatrix uniform mat4 modelRotationYMatrix uniform mat4 modelRotationZMatrix uniform mat4 modelScaleMatrix void main(void) Please ignore my modelMatrix and NormalMatrix calculation here mat4 modelMatrix modelTranslationMatrix modelScaleMatrix modelRotationXMatrix modelRotationYMatrix modelRotationZMatrix mat4 modelViewMatrix viewMatrix modelMatrix mat4 mvp projectionMatrix modelViewMatrix vec3 A gl in 2 .gl Position.xyz gl in 0 .gl Position.xyz vec3 B gl in 1 .gl Position.xyz gl in 0 .gl Position.xyz mat4 normalMatrix transpose(inverse(modelViewMatrix)) normal out mat3(normalMatrix) normalize(cross(A,B)) gl Position mvp gl in 0 .gl Position EmitVertex() gl Position mvp gl in 1 .gl Position EmitVertex() gl Position mvp gl in 2 .gl Position EmitVertex() EndPrimitive() Since i don't have access to adjacent faces here, i cannot calculate per vertex normals. How can i calculate per vertex normals in my Geometry Shader? |
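Because a geometry shader only sees one primitive at a time (it has no adjacency information unless you supply it), per-vertex smooth normals are normally precomputed on the CPU by averaging the face normals of all triangles that share a vertex, then passed in as an ordinary vertex attribute. A hedged sketch, assuming an indexed mesh and GLM:

```cpp
#include <glm/glm.hpp>
#include <cstdint>
#include <vector>

// Accumulate each (area-weighted) face normal into its three vertices, then normalize.
// Requires an indexed mesh so shared vertices are actually shared; isolated vertices
// keep a zero normal and would need a fallback in real code.
std::vector<glm::vec3> computeSmoothNormals(const std::vector<glm::vec3>& positions,
                                            const std::vector<uint32_t>& indices)
{
    std::vector<glm::vec3> normals(positions.size(), glm::vec3(0.0f));
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        uint32_t i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        glm::vec3 faceNormal = glm::cross(positions[i1] - positions[i0],
                                          positions[i2] - positions[i0]);
        normals[i0] += faceNormal;
        normals[i1] += faceNormal;
        normals[i2] += faceNormal;
    }
    for (auto& n : normals)
        n = glm::normalize(n);
    return normals;
}
```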
1 | How can I rotate a 2D textured quad in legacy OpenGL immediate mode? I have a texture of a tank, and I want its appearance on the screen to depend on its current direction, so I decided to use OpenGL's rotation functions. I followed some advice found through Google but got unsuccessful results. Can anyone help me with this problem? This is my render() function: #define UP 0 #define DOWN 1 #define LEFT 2 #define RIGHT 3 void GameObject::render() { glBindTexture(GL_TEXTURE_2D, textureID); glLoadIdentity(); glTranslatef((GLfloat)coord.getX(), (GLfloat)coord.getY(), 0.0f); glDrawArrays(GL_QUADS, 0, 4); } |
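A hedged sketch of the usual fixed-function recipe: translate to the sprite's position, rotate about the Z axis, and draw the quad centred on the origin so it spins around its own centre. Parameter names are illustrative.

```cpp
#include <GL/gl.h>

void drawRotatedQuad(float x, float y, float angleDegrees, float halfW, float halfH)
{
    glLoadIdentity();
    glTranslatef(x, y, 0.0f);                   // move to the tank's position
    glRotatef(angleDegrees, 0.0f, 0.0f, 1.0f);  // spin around the screen-facing Z axis
    glBegin(GL_QUADS);                          // quad centred on the origin rotates in place
        glTexCoord2f(0, 0); glVertex2f(-halfW, -halfH);
        glTexCoord2f(1, 0); glVertex2f( halfW, -halfH);
        glTexCoord2f(1, 1); glVertex2f( halfW,  halfH);
        glTexCoord2f(0, 1); glVertex2f(-halfW,  halfH);
    glEnd();
}
```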
1 | Sprite quickly disappears after rendering I'm currently making space invaders and I'm using the game loop pattern as described here. I have an entity class from which there is a spaceship derived class. The base entity class contains all of the general entity data such as the x y position, speed, etc... Currently, I have the X and Y positions initialized to (0,0) so the spaceship is rendered in the center of the screen. I tried initializing the y position to a different coordinate corresponding to a position, but for some reason, the spaceship just keep moving vertically as if the y position is just being accumulated at each frame. I think the problem is caused by the fact that in my render() method of the spaceship class, I'm translating the modelmatrix with the x and y position of the spaceship as that would explain why there isn't vertical movement when the coordinates are (0,0). I'm not sure exactly if this is the proper way to structure the render() method but I couldn't think of another way to change the object's position. Below is the relevant code Spaceship class definition class Spaceship public Entity public Spaceship(float xDirect, float yDirect, float xPosition, float yPosition, float speed, float rState, ShaderProgram program, SheetSprite newSprite, bool movingLeft, bool movingRight) Entity(xDirect, yDirect, xPosition, yPosition, speed, rState, program, newSprite), movingLeft(movingLeft), movingRight(movingRight) posY 0.1 virtual void Update(float elapsed) move stuff and check for collisions if (movingLeft) posX elapsed 0.001 else if (movingRight) posX elapsed 0.001 virtual void Render() setOrthoProj() setObjMatrices() translateObj(posX, posY, 0.0) mySprite.Draw() bool movingLeft bool movingRight Initialization Code in main Spaceship spaceship new Spaceship( 1.0f, 1.0f, 5.1f, 0.0f, 3.0f, 0.0f, program,mySprite, false, false) spaceship gt Render() Game Loop movement code const Uint8 keys SDL GetKeyboardState(NULL) if (keys SDL SCANCODE LEFT ) spaceship gt movingRight false spaceship gt movingLeft true spaceship gt Update(elapsed) else if (keys SDL SCANCODE RIGHT ) spaceship gt movingLeft false spaceship gt movingRight true spaceship gt Update(elapsed) spaceship gt Render() For some reason, the first render() call outside of the loop isn't rendering the spaceship, only the one inside the loop works. Given the structure of the program, what could be causing the problem? EDIT Translate method from Entity base class void translateObj(float x, float y, float z) posX x posY y modelMatrix.Translate(posX, posY, 0.0) void setObjMatrices() glUseProgram(program gt programID) program gt setModelMatrix(modelMatrix) program gt setProjectionMatrix(projectionMatrix) program gt setViewMatrix(viewMatrix) |
1 | How can I organize render and transformation data in a scalable fashion? I am writing for OpenGL 2.0 and in the future porting to OpenGL ES 2.0. I only use VBOs and shaders (no immediate mode, no vertex arrays). I already have working solutions, they just... feel wrong. All my calls related to OpenGL are tucked away in a separate class called RenderManager (this includes creation of VBOs and IBOs, texture IDs and shader programs) and rendering. I called the creation of OpenGL objects logging in and out, so majority of this action is invoked from the ResourceManager object upon loading of data. However, a lot of geometry (level) is generated dynamically and therefore some RenderManager calls to create the VBOs IBOs is scattered throughout the level class. Now, I have a Renderable interface which returns the (single) transformation matrix and a RenderOp object when it is time for rendering. RenderOps is an object that has Vertex Index Buffer pointers (has actual geom. data which in turn has VBO IBO indices) and some parameters such as whether to use the index buffer and if the RenderOp needs blending and stuff. There is also the RenderView object it has the viewport parameters, projection matrix and view matrix (camera). The game objects reside in the World class (single dimension arrays). These game objects are added upon their creation, and are derived from Entity interface. Entity is able to return the number of Renderable type objects within as well as the actual pointer to a Renderable at a given index. The magic happens when the main loop traverses the World. First it collects all Light objects visible from within Frustum. Then it collects Entity object that's within Frustum (it gets added to std vector inside RenderView if it is), and then it checks that same object to see if it is inside any of the visible Light's bounding volumes (and gets added into Light's std vector). Then the RenderView is passed onto the RenderManager class, which extracts appropriate Renderables from each Entity, applies Renderable's shader, then applies Renderable's textures, then stores the transforms inside itself and promptly shoves them down the programmable pipeline's throat along with attributes stored inside the RenderOperation (that gets extracted from Renderable as well). Last note I have NO way to render debug data, which is extremely annoying. A lot of what I am doing feels wrong. My question is, how do you organize all your Renderable and Transformation data to facilitate your rendering needs? I am burning out thinking of semi flexible ways of managing this data and I want to hear your ways. I am not a big fan of scene graphs, and consequently don't really want to implement one as I feel there are no real hierarchies inside the game, but I feel like I'm running myself into a corner. Please throw me a bone here, how do you manage all this data? And if you suggest an event based rendering system, how would you design it? |
1 | making a game in 2D(C ). SDL or openGL? Or, why not both? I'm trying to make a platform game in 2D and I want to know what tool should I use to make it happen. I understand that I can use SDL with openGL. However, if I want to make a solid 2D platform game, should I use SDL? Or, openGL? |
1 | How to make shadows softer as distance increases? I'm trying to make shadows become more blurry as the distance between the caster and receiver increases, so they look a little more realistic. What method can I use to achieve this? |
1 | 2D Sidescroller camera I'm using OpenGL. For my tiles, I'm using a display list and I'm just using immediate more for my player (for now). When I move the player, I want to center him in the center of the window, but allow him to jump around on the y axis and not have the camera follow him. But the problem is, is that I can't figure out how to center the player in the viewport! Here is the player update method public void update() if (Keyboard.isKeyDown(Keyboard.KEY D)) World.scrollx Constants.scrollSpeed setCurrentSprite(Sprite.PLAYER RIGHT) if (Keyboard.isKeyDown(Keyboard.KEY A)) World.scrollx Constants.scrollSpeed setCurrentSprite(Sprite.PLAYER LEFT) move((Constants.WIDTH) 2 World.scrollx, getY()) World.scrollx and World.scrolly are variables that I increase decrease to move the tiles. move() is just a method that sets the player position, nothing else. I render the player at his current coordinates like this public void render() glBegin(GL QUADS) Shape.renderSprite(getX(), getY(), getCurrentSprite()) glEnd() Shape.renderSprite is this public static void renderSprite(float x, float y, Sprite sprite) glTexCoord2f(sprite.x, sprite.y Spritesheet.tiles.uniformSize()) glVertex2f(x, y) glTexCoord2f(sprite.x Spritesheet.tiles.uniformSize() , sprite.y Spritesheet.tiles.uniformSize()) glVertex2f(x Constants.PLAYER WIDTH, y) glTexCoord2f(sprite.x Spritesheet.tiles.uniformSize(), sprite.y) glVertex2f(x Constants.PLAYER WIDTH, y Constants.PLAYER HEIGHT) glTexCoord2f(sprite.x, sprite.y) glVertex2f(x, y Constants.PLAYER HEIGHT) Pretty simple, I just render the quad at the current player's position. This is how I actually render everything public void render(float scrollx, float scrolly) Spritesheet.tiles.bind() glPushMatrix() glTranslatef(scrollx, scrolly, 0) glCallList(tileID) glPopMatrix() player.render() This is the part I'm confused on. I translate the tiles according to the scrollx and scrolly variables, and then I render the player at his current position. But the player moves faster than the tiles scroll, and he can escape out the side of the screen! How do I center the player with moving tiles? Thanks for any help! |
1 | Camera movement with slerp. I have 3 spots I would like to move my camera to using slerp, as seen in the image below. My question is: how can I connect my camera to the first spot? I should be able to move between the other spots after I connect my camera to the first spot. Maybe a better way to ask is: how can I make the camera location my first quaternion spot? |
1 | How can I render a JBox2D ParticleGroup? I want to render a ParticleGroup from JBox2d using OpenGL. I've managed to define a particle group area, but I'm unsure how to draw the individual particles. Here's how I create the ParticleGroup m world.setParticleRadius(0.15f) m world.setParticleDamping(0.2f) PolygonShape shape new PolygonShape() shape.setAsBox(8, 10, new Vec2( 12, 10.1f), 0) ParticleGroupDef pd new ParticleGroupDef() pd.shape shape m world.createParticleGroup(pd) This is how I draw a normal Square (which I don't know how to apply to groups of particles) public void draw(GLAutoDrawable gLDrawable, Vec3 position, float angle) gLDrawable.getGL().getGL2().glEnable(GL.GL BLEND) gLDrawable.getGL().getGL2().glEnable(GL.GL TEXTURE 2D) gLDrawable.getGL().getGL2().glBlendFunc(GL.GL SRC ALPHA, GL.GL ONE MINUS SRC ALPHA) gLDrawable.getGL().getGL2().glBindTexture(GL2.GL TEXTURE 2D, TextureFactory.getTextureIndex(TextureCollection.valueOf(getTextureSelection()))) gLDrawable.getGL().getGL2().glPushMatrix() gLDrawable.getGL().getGL2().glTranslatef(position.x getP2M(), position.y getP2M(), position.z) gLDrawable.getGL().getGL2().glRotated(Math.toDegrees(angle), 0, 0, 1) gLDrawable.getGL().getGL2().glBegin(GL2.GL QUADS) gLDrawable.getGL().getGL2().glTexCoord2f(0.0f, 0.0f) gLDrawable.getGL().getGL2().glVertex3f( getWidth() 2 getP2M(), getHeight() 2 getP2M(), 0.0f) gLDrawable.getGL().getGL2().glTexCoord2f(0.0f, 1.0f) gLDrawable.getGL().getGL2().glVertex3f( getWidth() 2 getP2M(), getHeight() 2 getP2M(), 0.0f) gLDrawable.getGL().getGL2().glTexCoord2f(1.0f, 1.0f) gLDrawable.getGL().getGL2().glVertex3f(getWidth() 2 getP2M(), getHeight() 2 getP2M(), 0.0f) gLDrawable.getGL().getGL2().glTexCoord2f(1.0f, 0.0f) gLDrawable.getGL().getGL2().glVertex3f(getWidth() 2 getP2M(), getHeight() 2 getP2M(), 0.0f) gLDrawable.getGL().getGL2().glEnd() gLDrawable.getGL().getGL2().glFlush() gLDrawable.getGL().getGL2().glPopMatrix() gLDrawable.getGL().getGL2().glDisable(GL.GL TEXTURE 2D) gLDrawable.getGL().getGL2().glDisable(GL.GL BLEND) How should I do this? (Example code would be great.) |
1 | VBO and shaders confusion, what's their connection? Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls per each N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset, see point 3) When dealing with logical object (player, tree, cube etc), should I always use the same shader or should I customize (or be able to customize) the shaders per each object? Considering an entity class, should I create and define the shader at object initialization? When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO object at 0,0 and define an uniform offset to pass to the shader to calculate its real position? |
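For the last point, a common GL 2.1-friendly pattern (a hedged sketch; handle and uniform names are hypothetical) is to keep one shared VBO for the mesh and draw each instance with its own model matrix uploaded as a uniform; hardware instancing or per-instance attributes are alternatives on newer hardware.

```cpp
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <vector>

extern GLuint  zombieVBO, zombieShader;   // hypothetical handles created elsewhere
extern GLsizei zombieVertexCount;

// Assumes attribute 0 was bound to the position input at link time (glBindAttribLocation).
void drawZombies(const std::vector<glm::vec3>& positions)
{
    glUseProgram(zombieShader);
    glBindBuffer(GL_ARRAY_BUFFER, zombieVBO);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

    GLint modelLoc = glGetUniformLocation(zombieShader, "u_model");  // hypothetical uniform
    for (const glm::vec3& p : positions) {
        glm::mat4 model = glm::translate(glm::mat4(1.0f), p);
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, &model[0][0]);
        glDrawArrays(GL_TRIANGLES, 0, zombieVertexCount);
    }
    glDisableVertexAttribArray(0);
}
```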
1 | MSAA / CSAA / FXAA: How to set the mode in OpenGL? I'm learning OpenGL and something I am stuck with is AA, especially when I want to turn it on and off at runtime. I know that I can set the sample count when I create an FBO and blit it over to the final window. When I want to change the mode I switch the FBO and everything is fine. The question I am completely stuck with is: how do I change the mode, and, also important, how do I query the modes that the card supports? By mode I mean CSAA, MSAA, ... I can't find much about it. At least I know that it is vendor specific. I hope anyone can point me in the right direction. Thanks |
1 | How can I render a font in C with OpenGL? What I tried I was testing some things in order to render text with stb truetype.h and OpenGL in C. I took as a reference the example that appears here. Basically, this example, loads a .ttf file and returns the raw information in bytes, that can be used to generate a texture in OpenGL. I adapted the example, mentioned before, into modern OpenGL, because, the example uses OpenGL deprecated functions, like glVertex2f. The only thing I get to output on screen was this kind of noise of strange colors The code I use texture t fnt texture GLuint fnt shader unsigned char ttf buffer 1 lt lt 20 unsigned char temp bitmap 512 512 stbtt bakedchar cdata 96 ASCII 32..126 is 95 glyphs define FONT VS quot version 330 core n quot quot layout(location 0) in vec3 m Position quot quot layout(location 1) in vec2 m TexCoords quot quot out vec2 TexCoords n quot quot void main() n quot quot TexCoords m TexCoords n quot quot gl Position vec4(m Position, 1.0) n quot quot n quot define FONT FS quot version 330 core n quot quot in vec2 TexCoords n quot quot uniform sampler2D Texture n quot quot void main() n quot quot gl FragColor texture(Texture, TexCoords) n quot quot n quot void font init(void) fread(ttf buffer, 1, 1 lt lt 20, fopen( quot c windows fonts times.ttf quot , quot rb quot )) stbtt BakeFontBitmap(ttf buffer, 0, 32.0, temp bitmap, 512, 512, 32, 96, cdata) no guarantee this fits! glGenTextures(1, amp fnt texture 3 ) My texture type, is an array that saves the texture on the 3rd position. glBindTexture(GL TEXTURE 2D, fnt texture 3 ) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA8, 512, 512, 0, GL RGBA, GL UNSIGNED BYTE, temp bitmap) glGenerateMipmap(GL TEXTURE 2D) can free temp bitmap at this point glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glBindTexture(GL TEXTURE 2D, 0) fnt shader shader init(FONT VS, FONT FS) void font render(model t model) shader bind(fnt shader) texture bind(fnt texture, 0) model begin(model) model draw(model, GL TRIANGLES) The model (vao, vbo, ibo) is rendering the whole buffer, not individual glyphs model end() texture unbind() shader unbind() Can someone tell me what I'm doing wrong, and, how I'm suposed to render correctly text, with modern OpenGL, with textures and buffers, in order to read the .ttf file and create the necessary information with stb truetype.h and, then, render the text? |
1 | OpenGL can't use glew 3.0 I've been trying to follow a tutorial of glew, but i can't run this glGenVertexArrays, it always leads to a memmory access violation!... I tried glExperimental GL TRUE too, also updated my video card driver 2 times, but it doesn't run. Btw, there's my code (to TRY to generate a simple triangle) include lt stdio.h gt include lt stdlib.h gt include lt iostream gt using namespace std include lt glew.h gt include lt glfw.h gt include lt glm glm.hpp gt using namespace glm define GLFW DLL pragma comment(linker, " SUBSYSTEM WINDOWS") pragma comment(linker, " ENTRY mainCRTStartup") int main() glewExperimental GL TRUE if(!glfwInit()) cout lt lt "Failed to load graphics... " lt lt endl return 1 GLuint VertexArrayID glGenVertexArrays(1, amp VertexArrayID) glBindVertexArray(VertexArrayID) static const GLfloat g vertex buffer data 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, GLuint vertexbuffer glGenBuffers(1, amp vertexbuffer) glBindBuffer(GL ARRAY BUFFER, vertexbuffer) glBufferData(GL ARRAY BUFFER, sizeof(g vertex buffer data), g vertex buffer data, GL STATIC DRAW) glfwOpenWindowHint(GLFW FSAA SAMPLES, 4) glfwOpenWindowHint(GLFW VERSION MAJOR, 3) glfwOpenWindowHint(GLFW VERSION MINOR, 1) glfwOpenWindowHint(GLFW OPENGL PROFILE, GLFW OPENGL CORE PROFILE) glfwOpenWindowHint(GLFW OPENGL PROFILE, 0) if(!glfwOpenWindow(1024, 768, 0, 0, 0, 0, 32, 0, GLFW WINDOW)) cout lt lt "Failed to open window" glfwTerminate() return 2 glfwSetWindowTitle("IfUSeeingThisMsgTheShitWorked") glfwEnable(GLFW STICKY KEYS) Main loop do cout lt lt "happaned" glEnableVertexAttribArray(0) glBindBuffer(GL ARRAY BUFFER, vertexbuffer) glVertexAttribPointer(0,3, GL FLOAT, GL FALSE, 0, (void ) 0) glDrawArrays(GL TRIANGLES, 0, 3) glDisableVertexAttribArray(0) glfwSwapBuffers() while(glfwGetKey(GLFW KEY ESC) ! GLFW PRESS amp amp glfwGetWindowParam(GLFW OPENED)) return 0 Also, I want to point out that this function doesn't work glfwOpenWindowHint(GLFW OPENGL PROFILE, GLFW OPENGL CORE PROFILE) so I had to change it to glfwOpenWindowHint(GLFW OPENGL PROFILE, 0) Does it have anything to do with my error? hm. I'm posting it here, just to make sure that there's no other thing I can do, other than buy a new video card. I really want to use the modern OpenGL, it is pointless to learn the old one. |
1 | When I add in Java transformationMatrix I can't see images? When I add in Java transformationMatrix I can't see images moving but when I remove it I can see why and how to fix it? Any ideas how to fix it? My Renderer class public class Renderer private static final float FOV 70 private static final float NEAR PLANE 0.1f private static final float FAR PLANE 1000 private Matrix4f projectionMatrix public Renderer(StaticShader shader) createProjectionMatrix() shader.start() shader.loadProjectionMatrix(projectionMatrix) shader.stop() public void prepare() GL11.glClear(GL11.GL COLOR BUFFER BIT) GL11.glClearColor(1, 0, 0, 1) public void render(Entity entity,StaticShader shader) TexturedModel model entity.getModel() RawModel rawModel model.getRawModel() GL30.glBindVertexArray(rawModel.getVaoID()) GL20.glEnableVertexAttribArray(0) GL20.glEnableVertexAttribArray(1) Matrix4f transformationMatrix Maths.createTransformationMatrix(entity.getPosition(), entity.getRotX(), entity.getRotY(), entity.getRotZ(), entity.getScale()) shader.loadTransformationMatrix(transformationMatrix) GL13.glActiveTexture(GL13.GL TEXTURE0) GL11.glBindTexture(GL11.GL TEXTURE 2D, model.getTexture().getID()) GL11.glDrawElements(GL11.GL TRIANGLES, rawModel.getVertexCount(), GL11.GL UNSIGNED INT, 0) GL20.glDisableVertexAttribArray(0) GL20.glDisableVertexAttribArray(1) GL30.glBindVertexArray(0) private void createProjectionMatrix() float aspectRatio (float) Display.getWidth() (float) Display.getHeight() float y scale (float) ((1f Math.tan(Math.toRadians(FOV 2f))) aspectRatio) float x scale y scale aspectRatio float frustum length FAR PLANE NEAR PLANE projectionMatrix new Matrix4f() projectionMatrix.m00 x scale projectionMatrix.m11 y scale projectionMatrix.m22 ((FAR PLANE NEAR PLANE) frustum length) projectionMatrix.m23 1 projectionMatrix.m32 ((2 NEAR PLANE FAR PLANE) frustum length) projectionMatrix.m33 0 Edit added StaticShader class package shader import org.lwjgl.util.vector.Matrix4f public class StaticShader extends ShaderProgram private static final String VERTEX FILE "src shader vertexShader.txt" private static final String FRAGMENT FILE "src shader fragmentShader.txt" private int location transformationMatrix private int location projectionMatrix public StaticShader() super(VERTEX FILE, FRAGMENT FILE) Override protected void bindAttributes() super.bindAttribute(0, "position") super.bindAttribute(1, "textureCoords") Override protected void getAllUniformLocations() location transformationMatrix super.getUniformLocation("transformationMatrix") location projectionMatrix super.getUniformLocation("projectionMatrix") public void loadTransformationMatrix(Matrix4f matrix) super.loadMatrix(location transformationMatrix, matrix) public void loadProjectionMatrix(Matrix4f projection) super.loadMatrix(location projectionMatrix, projection) Edit vertexShader.txt version 400 core in vec3 position in vec2 textureCoords out vec3 colour out vec2 pass textureCoords uniform mat4 transformationMatrix uniform mat4 projectionMatrix void main(void) gl Position projectionMatrix transformationMatrix vec4(position,1.0) pass textureCoords textureCoords colour vec3(position.x 0.5,0.0,position.y 0.5) |
1 | Why does this work in the fragment shader but not in the vertex shader? I'm doing some model-view and projection transforms in the vertex shader and I want to determine whether the current vertex will end up in the viewport or not. After searching a bit I found that after the projection transform the coordinates have not been "normalized". So the criteria for a vertex (after the transforms) to appear in the final viewport are the following: -w < x < w, -w < y < w, 0 < z < w, where w is the 4th coordinate of the vertex after the transforms. Performing this check I had a great number of vertices passing the test without being in the viewport, and this caused all sorts of problems in whatever I was trying to implement. However, when passing the vertices in their local coordinates to the fragment shader and then doing the exact same process with the transforms and the tests there, everything worked out fine. Why? |
1 | Input arrays in an OpenGL vertex shader: would something like this be valid as an input to a vertex shader? layout (location = 0) in float faceTextureOffsets[6]; I know you can use things like vec3s, but for this situation I need an array of 6 floats. I am using a texture atlas and I would like the different sides of the cubes I'm rendering to have different textures, so I need a way to change the position in the texture atlas. |
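For reference, an array input of this form is allowed by GLSL, but it occupies one attribute location per element, so float faceTextureOffsets[6] at location 0 consumes locations 0 through 5 and each one needs its own attribute pointer. A hedged C++ sketch of the client-side setup, assuming the buffer stores six consecutive floats per vertex:

```cpp
#include <GL/glew.h>
#include <cstddef>

void setupFaceOffsetAttributes(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    const GLsizei stride = 6 * sizeof(float);   // six offsets per vertex
    for (int i = 0; i < 6; ++i) {
        glEnableVertexAttribArray(i);           // locations 0..5 back the array elements
        glVertexAttribPointer(i, 1, GL_FLOAT, GL_FALSE, stride,
                              (const void*)(std::size_t)(i * sizeof(float)));
    }
}
```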
1 | Textures created with Photoshop show up with a white border All of a sudden, I started having this problem. I've used libGDX Photoshop before and never had this problem. One of my textures is showing up with a white border glow around. The texture is the black blob on the middle, which is a 256 pixels png with a few black brushstrokes I created with Photoshop, just to test it. Everything in libGDX is on default. I haven't changed the blending function, I'm just using a SpriteBatch to render the image without any changes. The weird thing is, some other textures I created with Photoshop, same size, RBG 8 bit color mode (just like my blob image), look fine. These are temporary textures I created by downloading a png from the web, pasting it into Photoshop, applying a solid color overlay effect to it and saving it back as a png. Any idea what's causing this? Edit Here's how I'm loading my textures private static Texture createTexture(String fileName, boolean maxQuality, Texture.TextureWrap textureWrap) Texture result new Texture(Gdx.files.internal(fileName), true) result.setFilter(maxQuality ? Texture.TextureFilter.MipMapLinearLinear Texture.TextureFilter.MipMapLinearNearest, Texture.TextureFilter.Linear) TODO Are these the best? if(textureWrap ! null) result.setWrap(textureWrap, textureWrap) return result texture new TextureRegion(createTexture("data blob.png")) I just found out using this technique that the transparent pixels in my texture are white (1, 1, 1, 0). So I'm pretty sure the cause is a combination of this, the linear resizing and probably the blending function. Not sure what's the right way to fix it though. Don't want to change the resizing to nearest. |
1 | Minecraft OpenGL Create Multiple viewports I'm working on a project for the game and i have been trying to draw a second smaller "window" that would act like a second camera, independent from the player. I have looked into glViewport however i keep running into issues where the second viewport is the only one being drawn (the first one gets covered by the second and has a blank screen where the first viewport should be So i guess the question is How can i have multiple windows on screen and make it so it plays nice with the game's default rendering system? I know it is possible since the game uses OpenGL, and maybe its just a simple case of putting the code in the right class. |
1 | Quaternion precision problems Each entity and camera have a Transformation object that holds position info as vec3, rotation info as quat and scale info as vec3. When I need a matrix (that being either model or view matrix), I construct the matrix out of given data. That gives me the freedom to change any of transformation info without having to worry about order of change. Now, I'm trying to implement it in a fly camera. The two constraints are pitch that has to be limited from 90 to 90 and roll which should not occur. While rotating the camera along X and Y axis, a sort of drift happens along the Z axis. In order to combat that, after all changes to rotation are applied, I get the angle of rotation around the Z axis and then rotate it by angle, effectively resetting the Z rotation to 0. That all works nicely in theory, but not so much in practice. You see, when I go over 55ish both in the positive or negative Y rotation, the camera starts freaking out and fluctuating. It seems that every frame it switches from some negative z rotation, to positive z rotation of the same value. It looks like this (moving camera to the left, negative Y rotation) Of course, since the object is only in the one place at one time, I had to edit the screenshot in paint to give you a better idea of what's going on. It really does look like there are two objects at the same time, both flickering as crazy. Also note that if I were to move camera further to the left, the Z rotation offset would increase, moving both objects further from the horizontal plane. At this point, the Y rotation would also start to fluctuate ever so slightly. You couldn't see it on screen, but printing out angle each frame would show that it's fluctuating at about 3rd digit after the dot (0.00 ). Once I get far enough, the camera would just give up and just show the object rapidly changing it's Z rotation with no apparent pattern. My guess is that quaternion rotation is imprecise for some reason. You can tell by the fact that my strict Z rotation starts to influence the Y rotation after the Z drift gets severe enough. Here's a snippet of code, for what it's worth transformation gt rotateAroundY(angleY) GLfloat roll transformation gt getRoll() transformation gt rotateAroundZ( roll) angleY is delta Y of mouse position between this and previous frame. rotateAround (angle) calls this code rotation glm tquat lt GLfloat gt (glm angleAxis(glm radians(angle), axis)) where rotation is the quaternion, while axis is tvec3 lt GLfloat gt representing the axis around which the rotation takes place. How do I combat this? How can I consistently reverse unintentional Z drift without making the camera freak out? |
1 | Compress 16-bit raw data in WebGL. I have a 3D volume where every voxel is 16 bits. Is there any way I can use some kind of compression to store the data so I can use less video RAM? WebGL supports different compressions if you enable them with extensions, but can I use them for one-channel data? For example, WEBGL_compressed_texture_s3tc only supports RGB and RGBA data: COMPRESSED_RGB_S3TC_DXT1_EXT, COMPRESSED_RGBA_S3TC_DXT1_EXT, COMPRESSED_RGBA_S3TC_DXT3_EXT, COMPRESSED_RGBA_S3TC_DXT5_EXT. If it's not possible in WebGL, does OpenGL support compressing 16-bit single-channel values? |
1 | Proper way to maintain Vertex Buffer Objects I've started learning WebGL, currently I'm building a 2D lighting system, but there is some confusion going on inside my head. How the lighting works is based on this tutorial http archive.gamedev.net archive reference programming features 2dsoftshadow default.html (probably the most linked article of the genre). My question is about the proper way to store create update the Vertex Buffer Objects of the polygons. Currently I have something like during initialization of each polygon build vertices from the provided points ... create the VBO this. VBO gl.createBuffer() gl.bindBuffer( gl.ARRAY BUFFER, this. VBO ) gl.bufferData( gl.ARRAY BUFFER, vertices, gl.DYNAMIC DRAW ) during rendering of each polygon gl.bindBuffer( gl.ARRAY BUFFER, this. VBO ) gl.enableVertexAttribArray( program.aVertexPosLoc ) gl.vertexAttribPointer( program.aVertexPosLoc, 3, gl.FLOAT, false, 0, 0 ) The vertices array is created only one time at the top of the file, and reused for each new polygon, the motive to do that is to avoid the creation of various Float32Arrays per object. top of file var vertices new Float32Array( 20 3 ) maximum of 20 points It's okay to have one VBO for each object? I was thinking about the possibility of a single VBO, but I don't understand how I can make it work with only one. The number of vertices can vary between one polygon and another, so how I'm going to store that? Currently I'm not taking into consideration textures, however, answers that already take they into account are welcome. |
1 | How can I use ImGui to render simple text instead of using stb_truetype directly? Since ImGui builds on top of stb_truetype for its font rendering, I thought it could be nicer to use its already built font processing capabilities (ImGui::GetIO().Fonts), and render with those, instead of using stb_truetype directly. However, I've been having trouble figuring out how to do this, specifically how to get quad positions and texture coordinates for a given string to use with the preloaded texture at ImGui::GetIO().Fonts->TexID. I'm not looking to draw buttons or text inside ImGui windows; all I want to do is use ImGui to build vertex data for a given string so that I can render it anywhere. |
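A hedged sketch of one way to do this with ImGui's public font API, assuming the atlas has already been built (e.g. after ImGui::NewFrame): each ImFontGlyph exposes quad offsets (X0..Y1), atlas UVs (U0..V1) and an advance, which is enough to lay out a string for your own renderer. GlyphQuad and the ASCII-only loop are illustrative.

```cpp
#include "imgui.h"
#include <vector>

struct GlyphQuad { float x0, y0, x1, y1; float u0, v0, u1, v1; };  // illustrative output type

std::vector<GlyphQuad> layoutText(const char* text, float originX, float originY)
{
    ImFont* font = ImGui::GetIO().Fonts->Fonts[0];   // first loaded font in the atlas
    std::vector<GlyphQuad> quads;
    float penX = originX;
    for (const char* c = text; *c; ++c) {            // ASCII only, for brevity
        const ImFontGlyph* g = font->FindGlyph((ImWchar)*c);
        if (!g) continue;
        quads.push_back({ penX + g->X0, originY + g->Y0,   // screen-space quad corners
                          penX + g->X1, originY + g->Y1,
                          g->U0, g->V0, g->U1, g->V1 });    // atlas texture coordinates
        penX += g->AdvanceX;                                // advance the pen
    }
    return quads;   // render these with the texture at ImGui::GetIO().Fonts->TexID
}
```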
1 | Consistent Shadow Map Filtering I want to filter my shadow map generated by PSSM, but the problem is that I have a inconsistent filter size. The problem is that the shadow map sources rotate to find the best fit for the camera frustum, and thus sampling with a constant offset (e.g. 10.0 resolution) produces inconsistent results. A gif shows it probably best (same shadow and sun position, just different view angles) GIF showing the issue My idea was to, instead of using a fixed filter offset, compute the filter offset by projecting 2 world space points to light space and taking the difference of both, like this vec4 find filter size(mat4 projection, vec3 light, float sample radius) Find an arbitrary tangent and bitangent to the light vector vec3 v0 abs(light.z) lt 0.99 ? vec3(0, 0, 1) vec3(1, 0, 0) vec3 tangent normalize(cross(v0, light)) vec3 binormal normalize(cross(light, tangent)) Project everything vec2 proj origin project(projection, vec3(0)).xy vec2 dx proj tangent project(projection, tangent).xy proj origin vec2 dx proj binormal project(projection, binormal).xy proj origin return vec4(dx proj tangent, dx proj binormal) sample radius However that didn't quite work out, and, in fact, produces even more artifacts. Any suggestions how I could fix this? I just need a basic idea, then I can figure out the implementation by myself. |
1 | Architecture to draw many different objects in OpenGL I have some objects that I want to draw. I am not sure how I can create my architecture in a way where I can draw everything as fast as possible. As example class MyObject float vertices float colors float textures public MyObject() fill every buffer with data public void Draw() set shader set shader uniforms etc. foreach buffer BindBuffer() VertexAttribPointer() DrawArrays() In this way, I can draw every object like myObject.Draw after it is constructed. The problem would be, that if I have 5000 objects, I have 5000 draw calls. I need to set the shader every time even if I loop over the 5000 objects because I don't know if they use the same shaders. Another way class MyObject bool needsToBeChanged public MyObject() set my edge points needsToBeChanged true void ChangeMyObject() e.g. I changed the height of my object needsToBeChanged true class GLWindow float vertices float colors float textures GameLoop() foreach(MyObject myObj in allMyObjects) if(myObj.needsToBeChanged) get objects points for drawing it and set the global buffers up myObj.needsToBeChanged false DrawArrays() draw only once (because every data is in the buffer) Only 1 draw call for everything. I guess this will be faster than first method. The problem on this code is the big buffer that contains everything. Let's imagine we have inherited from MyObject and want to draw objects in different shapes. Now the Loop gets more complicated. You may have to split it because you need another shader for example for text. Now you have 2 arrays... And with other changes the Loop becomes more and more complicated and harder to maintain. What solutions (beside the both) are available for this kind of problem? Can I use the first way or is there a better way where I can reduce the draw calls? How is it solved in other applications? |
1 | Why do I have to switch T(v) texture coordinates while importing OpenGL to Direct3D? I am importing my code from OpenGL to Direct3D. My D3DTS PROJECTION uses D3DXMatrixPerspectiveFovRH, and my D3DTS VIEW uses D3DXMatrixLookAtRH to set a view equal to OpenGL's view. My question is why do I have to switch all of my 1.0000 Tex(v) texture coordinates to "minus value" in D3D to get equal texture mapping as in OpenGL. OpenGL T(u) T(v) 1.0000, 1.0000, 0.0000, 1.0000, 1.0000, 0.0000, 1.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000, 1.0000, 0.0000, 0.0000, 1.0000, 0.0000, ... Direct3D T(u) T(v) D3DXVECTOR2( 1.0000f, 1.0000f) D3DXVECTOR2( 0.0000f, 1.0000f) D3DXVECTOR2( 1.0000f, 0.0000f) D3DXVECTOR2( 1.0000f, 0.0000f) D3DXVECTOR2( 0.0000f, 1.0000f) D3DXVECTOR2( 0.0000f, 0.0000f) D3DXVECTOR2( 0.0000f, 1.0000f) D3DXVECTOR2( 0.0000f, 0.0000f) D3DXVECTOR2( 1.0000f, 1.0000f) D3DXVECTOR2( 1.0000f, 1.0000f) D3DXVECTOR2( 0.0000f, 0.0000f) D3DXVECTOR2( 1.0000f, 0.0000f) ... Is it because Direct3D's origin (0.0) of texture coordinates lies in different place than in OpenGL? |
1 | Using gluLookAt to move camera in 2D iPhone game? I'm trying to use gluLookAt to move the camera in my iPhone game, but every time I've tried to use gluLookAt my screen just goes "blank" ( grey in this case ) I'm trying to render a simple triangle and to move the camera, this is my code to setup my scene I do glViewport(0, 0, backingWidth, backingHeight) glMatrixMode(GL PROJECTION) glLoadIdentity() glRotatef( 90.0, 0.0, 0.0, 1.0) using iPhone in horizontal mode glOrthof( 240, 240, 160, 160, 1, 1) glMatrixMode(GL MODELVIEW) then my "triangle rendering" code looks like GLfloat triangle 0, 100, 100, 0, 100, 0, glClearColor(0.7, 0.7, 0.7, 1.0) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glEnableClientState(GL VERTEX ARRAY) glColor4f(1.0, 0.0, 0.0, 1.0) glVertexPointer(2, GL FLOAT, 0, amp triangle) glDrawArrays(GL TRIANGLES, 0, 6) glDisableClientState(GL VERTEX ARRAY) This draws a red triangle in the middle of the screen, when I try to apply gluLookAt ( I got the implementation of the function from Cocos2D so I asume it's correct ), i do glMatrixMode(GL MODELVIEW) glLoadIdentity() gluLookAt(0,0,1,0,0,0,0,0,1) try to move the camera a bit ? GLfloat triangle 0, 100, 100, 0, 100, 0, glClearColor(0.7, 0.7, 0.7, 1.0) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glEnableClientState(GL VERTEX ARRAY) glColor4f(1.0, 0.0, 0.0, 1.0) glVertexPointer(2, GL FLOAT, 0, amp triangle) glDrawArrays(GL TRIANGLES, 0, 6) glDisableClientState(GL VERTEX ARRAY) This leads me to grey screen (glClearColor is grey), I've tried all sort of things and read what I've found about gluLookAt on the net, but no luck (, if someone could explain me or show me how to move to move the camera in a top down fashion ( zelda, etc ), I would really appreciate it. Thanks! |
1 | Should I use the X Y plane when using an orthographic projection in OpenGL? I'm currently at a loss rendering a tile based 3D map with an orthographic projection in OpenGL. Imagine any isometric 3D game (using actual geometry instead of sprites). Internally, the tiles of my map have x and y coordinates (for column and row position within the map respectively). The tile with x 0 and y 0 is the one in the "top left", the one with y 0 and x 1 the one to the right of the first tile and so on. Since in OpenGL, the coordinate system is so that the X Z plane is the "ground" plane, I create the model matrix for each tile by using the tile's y coordinate for the matrix' (or position vector's) z component. I then set up an orthographic "camera" by using glm ortho(). This seems to work fine at first, but I quickly ran into clipping issues when trying to render larger parts of the map. I assume this is due to the fact that the orthographic projection clips on z. A quick and dirty attempt of having the map in the X Y plane confirmed this, as I didn't have clipping issues then. Now, I'm not sure what's the best approach here? Change the code so the tiles are being layed out in the X Y plane? In that case, it seems as if I would have to remember to rotate every single object around x accordingly, which seems wrong. Do I need to set up the orthographic projection manually to use Y as the clipping plane? Is that even possible? I think OpenGL always uses z for clipping, right? Some other approach? I think I'm lost in coordinates and matrices, really. I assume this is a common problem with a well known solution, so I'm hoping to learn about that before I go ahead, change a lot and then learn I did it the wrong way... |
1 | Simplest way of doing skeletal animation in OpenGL. I am not interested in using easy libraries like Assimp, which will import Collada files with ease to get skeletal animation done. I want to know the minimal requirements to get the animation done. What I understand is that a bone is a 3x3 rotation matrix associated with a group of vertices, and we have to combine it with the model matrix to animate anything. How would I play a sequential animation with my existing bone system? What's the workflow to process the bones? Sending bones through a uniform block is slow, but I would consider doing it. What's the simplest file format to store rig and animation data? Collada is unnecessarily advanced and complex. Or how would I write my own file format for animation? I am using Blender to model and rig. How would I even manage to get an exporter for my format in Blender? |
1 | What are the different attributes of a vertex array object (VAO)? What are the 16 attributes of a VAO? 0. vertex position 1. vertex colours 2. normal vector 3. texture coords 4. ??? 5. ??? 6. ??? 7. ??? 8. ??? 9. ??? 10. ??? 11. ??? 12. ??? 13. ??? 14. ??? 15. ??? |
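For context, a hedged note: in core-profile OpenGL none of the attribute slots has a fixed meaning. They are generic, the count is whatever GL_MAX_VERTEX_ATTRIBS reports (at least 16), and the position/colour/normal/texcoord assignment listed above is only a common convention. An illustrative sketch:

```cpp
#include <GL/glew.h>
#include <cstdio>

void describeAttributeLimit()
{
    GLint maxAttribs = 0;
    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);   // at least 16 on conformant GL
    std::printf("This implementation exposes %d generic vertex attributes\n", maxAttribs);
}

// The indices below follow the common convention, but any slot can feed any shader input
// declared with a matching layout(location = N) qualifier.
void bindConventionalAttributes(GLuint vbo, GLsizei stride)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);  // position (convention only)
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
    glEnableVertexAttribArray(2);  // e.g. normals could live in slot 2, or slot 7
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
}
```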
1 | How to animate a model in WebGL? I created a human model in Blender, exported the vertices and indices into a JSON file and render the model in a browser using WebGL. Now I created a walk and jump animation in Blender and would like to do the same with WebGL. I saw examples that use a list of vertices for every frame of the animation. Is this the way to go? Do I need to export the vertices for every frame for every animation? |
1 | Creating a retro style palette swapping effect in OpenGL I'm working on a Megaman like game where I need to change the color of certain pixels at runtime. For reference in Megaman when you change your selected weapon then main character's palette changes to reflect the selected weapon. Not all of the sprite's colors change, only certain ones do. This kind of effect was common and quite easy to do on the NES since the programmer had access to the palette and the logical mapping between pixels and palette indices. On modern hardware, though, this is a bit more challenging because the concept of palettes is not the same. All of my textures are 32 bit and do not use palettes. There are two ways I know of to achieve the effect I want, but I'm curious if there are better ways to achieve this effect easily. The two options I know of are Use a shader and write some GLSL to perform the "palette swapping" behavior. If shaders are not available (say, because the graphics card doesn't support them) then it is possible to clone the "original" textures and generate different versions with the color changes pre applied. Ideally I would like to use a shader since it seems straightforward and requires little additional work opposed to the duplicated texture method. I worry that duplicating textures just to change a color in them is wasting VRAM should I not worry about that? Edit I ended up using the accepted answer's technique and here is my shader for reference. uniform sampler2D texture uniform sampler2D colorTable uniform float paletteIndex void main() vec2 pos gl TexCoord 0 .xy vec4 color texture2D(texture, pos) vec2 index vec2(color.r paletteIndex, 0) vec4 indexedColor texture2D(colorTable, index) gl FragColor indexedColor Both textures are 32 bit, one texture is used as lookup table containing several palettes which are all the same size (in my case 6 colors). I use the source pixel's red channel as an index to the color table. This worked like a charm for achieving Megaman like palette swapping! |
1 | How to properly repeat a texture on a quad in OpenGL? I'm drawing a quad and binding a texture to this quad like this define TERRAIN WIDTH 2000 define TERRAIN LENGTH 2000 define TERRAIN WIDTH 2 (TERRAIN WIDTH 2) define TERRAIN LENGTH 2 (TERRAIN LENGTH 2) glBindTexture(GL TEXTURE 2D, gameTextures.terrain) glBegin(GL QUADS) glTexCoord2f(0, 0) glVertex3f( TERRAIN WIDTH 2, 0.0f, TERRAIN LENGTH 2) glTexCoord2f(X, 0) glVertex3f( TERRAIN WIDTH 2, 0.0f, TERRAIN LENGTH 2) glTexCoord2f(X, X) glVertex3f( TERRAIN WIDTH 2, 0.0f, TERRAIN LENGTH 2) glTexCoord2f(0, X) glVertex3f( TERRAIN WIDTH 2, 0.0f, TERRAIN LENGTH 2) glEnd() glBindTexture(GL TEXTURE 2D, 0) Normally, X would be equal to 1 but this will stretch the texture because the quad is very large. The idea is to increase that value so the texture is repeated (the texture was generated with GL REPEAT of course). But how should I find the best value for X? I have a few 512x512 textures which look nice if I replace X 512. Then I have another 1024x1024 texture which looks good to with X 512 but doesn't start to look so good with a higher lower value. Another texture is 2048x2048 and now I need to go down to X 256, otherwise it won't look so good. What's confusing me even more is the fact that both 512x512 and 1024x1024 textures look good with X 512. The way I see it, the lower the texture resolution, the more times I need to repeat it. How should I calculate the value of X then? |
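A hedged way to choose X is to derive the repeat count from world sizes rather than the texture's pixel size: decide how many world units one copy of the texture should cover (the tileWorldSize tuning value below is illustrative) and divide the quad size by it; the pixel resolution then only affects how detailed each tile looks, not the repeat count. For example, a 2000-unit terrain with a tile that spans 4 world units gives X = 500 whether the texture is 512x512 or 2048x2048.

```cpp
#include <GL/gl.h>

void drawTerrainQuad(float terrainWidth, float terrainLength, float tileWorldSize)
{
    float repX = terrainWidth  / tileWorldSize;   // repeats along X
    float repZ = terrainLength / tileWorldSize;   // repeats along Z
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-terrainWidth / 2, 0.0f, -terrainLength / 2);
        glTexCoord2f(repX, 0.0f); glVertex3f( terrainWidth / 2, 0.0f, -terrainLength / 2);
        glTexCoord2f(repX, repZ); glVertex3f( terrainWidth / 2, 0.0f,  terrainLength / 2);
        glTexCoord2f(0.0f, repZ); glVertex3f(-terrainWidth / 2, 0.0f,  terrainLength / 2);
    glEnd();
}
```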
1 | OpenGL fixed function WASD 2D movement I need to make a simple object (like a turtle) move on a 2D scene. The keys w, s, a and d, respectively, make a positive and negative translate on x axis (move to forward and backward), and rotate 1 degree (a and d). The problem is the simple object, will initially rotate and translate correctly (because initially the object is at the origin). But after that the commands don't work like I intend them to. Which are the best practices to make this? Anyone can help me? To compile (assuming this code is saved in a file named movement2D.cpp) g movement2D.cpp o movement2D lGL lglut To execute, use . movement2D 500 500 Here's the code include lt GL glut.h gt struct point int x int y typedef struct point t point struct object int width int height t point pose float angle typedef struct object t object void display(void) void keyPress(unsigned char, int, int) void initialize(int, int) void exit(void) void draw(void) t object head, body int main(int argc, char argv) glutInit( amp argc, argv) glutInitDisplayMode(GLUT SINGLE GLUT RGB) glutInitWindowSize(atoi(argv 1 ), atoi(argv 2 )) glutInitWindowPosition(10, 10) glutCreateWindow( quot Movement quot ) initialize(atoi(argv 1 ), atoi(argv 2 )) glutDisplayFunc(display) glutKeyboardFunc(keyPress) atexit(exit) glutMainLoop() void display(void) glClear(GL COLOR BUFFER BIT) glPushMatrix() glLoadIdentity() glRotatef(body gt angle, 0.0f, 0.0f, 1.0f) glTranslatef(body gt pose.x, body gt pose.y, 0.0f) draw() glPopMatrix() glutSwapBuffers() void keyPress(unsigned char key, int x, int y) switch (key) case 'a' case 'A' body gt angle 1 break case 'd' case 'D' body gt angle 1 break case 'w' case 'W' body gt pose.x 5 break case 's' case 'S' body gt pose.x 5 break case 27 exit(1) glutPostRedisplay() void initialize(int width, int height) head (t object ) calloc(1, sizeof(t object)) body (t object ) calloc(1, sizeof(t object)) body gt width body gt height 100 head gt width head gt height 20 body gt pose.x body gt pose.y 0 glClearColor(0.0f, 0.0f, 0.0f, 1.0f) glMatrixMode(GL PROJECTION) glOrtho( width 2, width 2, height 2, height 2, 100, 100) glMatrixMode(GL MODELVIEW) void exit(void) free(head) free(body) void draw(void) glColor3f(1.0f, 0.0f, 0.0f) glBegin(GL QUADS) glVertex2i( body gt width 2, body gt height 2) glVertex2i( body gt width 2, body gt height 2) glVertex2i(body gt width 2, body gt height 2) glVertex2i(body gt width 2, body gt height 2) glEnd() glTranslatef(body gt width 2 head gt width 2, 0.0f, 0.0f) glColor3f(0.0f, 0.0f, 1.0f) glBegin(GL QUADS) glVertex2i( head gt width 2, head gt height 2) glVertex2i( head gt width 2, head gt height 2) glVertex2i(head gt width 2, head gt height 2) glVertex2i(head gt width 2, head gt height 2) glEnd() |
1 | Is there any reason not to store skinning animation data in a texture? I have thought about storing animation data in a texture. I think I could save shader parameter setup and CPU-side interpolation cost, and it would also enable instanced rendering of animated meshes. But I couldn't find any text mentioning this, so I assume there's some obvious reason people don't do it that I'm missing. What am I missing? Or is this just a widely known technique?
1 | Optimizing texture fetches with higher mip levels Let's say I have some shader program in DirectX or OpenGL rendering a full screen quad. And in a pixel fragment shader I sample some huge textures at random texture coordinates. That is one same texture coordinate for all texture samplings in one shader invocation, but it is various among different shader invocations. These fetch operations produce performance drop, I even think that due to the size of the textures the GPU texture cache is not big enough and is used not efficiently. Now I have a theoretical question can I optimize the performance by using some low resolution like 32x32 mask textures, which are built by mipmapping the large textures, and if a value in a mask texture at given texture coordinate at some higher mip level is not appropriate, then I don't need to perform texture fetches at full size level 0? Something like this in HLSL (GLSL code is pretty similar, but there is no branch attribute) float2 tc calculateTexCoordinates() bool performHeavyComputations testValue(largeMipmappedTexture.SampleLevel(sampler, tc, 5)) float result 0 branch if (performHeavyComputations) result largeMipmappedTexture.SampleLevel(sampler, tc, 0) About 50 of texels at mip level 5 will not pass the test. And so a lot of shader invocations should not sample the full size textures. But I am introducing branching in the code. May this branching hurt the performance even worse than sampling the full size texture even if that is not needed? Different GPUs may behave differently, some may not even support branching, will they perform two fetches instead of one? I can test this code on some machines, but my question is theoretical. And can you suggest another optimizations, if this won't work properly ? |
1 | OpenGL scrolling world guidance I'm looking for a tutorial on how to implement a horizontally scrolling background with various objects that auto-scrolls as the player character moves, just like in many car and motorbike games. I'll have images for the various obstacles and objects along the path. I'm concerned about loading everything at the start of the game; I would rather load only what is in the visible area as the player moves (or appears to move). I'm looking for an Android tutorial and I'm new to OpenGL, but I welcome any sort of help.
1 | Problems with rendering a SkyBox At the moment I'm writing an Android OpenGL ES 2.0 game but now I get stuck on rendering a SkyBox. Here is my (a bit simplified) code for the SkyBox float vertices 1,1,1,1,1,1, 1, 1,1,1, 1,1, 1,1, 1,1,1, 1, 1, 1, 1,1, 1, 1 byte drawOrder 1,3,0,0,3,2,4,6,5,5,6,7,0,2,4,4,2,6,5,7,1,1,7,3,5,1,4,4,1,0,6,2,7,7,2,3 private final void bindTexture() int shaderTextureUnit GLES20.glGetUniformLocation(shaderProgram, "u TextureUnit") GLES20.glActiveTexture(GLES20.GL TEXTURE0) GLES20.glBindTexture(GLES20.GL TEXTURE CUBE MAP, texture) GLES20.glUniform1i(shaderTextureUnit, 0) public void draw(float viewMatrix) viewMatrix Camera data array GLES20.glUseProgram(shaderProgram) bindTexture() Load Texture shaderPosition GLES20.glGetAttribLocation(shaderProgram, "a Position") shaderMatrix GLES20.glGetUniformLocation(shaderProgram, "u Matrix") get handle to shape's transformation matrix GLES20.glUniformMatrix4fv(shaderMatrix, 1, false, vpMatrix, 0) Prepare the triangle coordinate data GLES20.glEnableVertexAttribArray(shaderPosition) GLES20.glVertexAttribPointer(shaderPosition, 3, GLES20.GL FLOAT, false, 0, vertexBuffer) Draw the object to the screen GL UNSIGNED BYTE GLES20.glDrawElements(GLES20.GL TRIANGLES, orderBuffer.capacity(), GLES20.GL UNSIGNED BYTE, orderBuffer) GLES20.glDisableVertexAttribArray(shaderPosition) Free the position in shader I think this code should be perfect because I got it from an OpenGL for beginners book. My problem now is that it looks like this if I render the SkyBox to the screen. I initialize my Camera with this code and here you have also the code for my touch event. If my considerations are right this should produce a circle around the origin so I can see all sides of the SkyBox. private float angle 90 Matrix.setLookAtM(data, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0) the viewMatrix Z out of screen Z in screen public void onTouch() angle 2 float s (float) Math.sin(angle Math.PI 180) float c (float) Math.cos(angle Math.PI 180) Matrix.setLookAtM(data, 0, 0, 0, 0, c, 0, s, 0, 1, 0) the viewMatrix The draw Method from the SkyBox is called with the data Matrix from the camera (ViewMatrix) and with disabled depthBuffer. Do you have any ideas why the SkyBox is rendered so strange? What is wrong with my code? Only to be sure that all is right with my shaders here is the code from them FragmentShader precision mediump float uniform samplerCube u TextureUnit varying vec3 v Position void main() gl FragColor textureCube(u TextureUnit, v Position) VertexShader uniform mat4 u Matrix attribute vec3 a Position varying vec3 v Position void main() v Position a Position v Position.z v Position.z gl Position u Matrix vec4(a Position, 1.0) gl Position gl Position.xyww |
1 | How can I blend up to 3 textures on a polygon without blend maps? For my voxel game I need to blend up to 3 textures in the same polygon. It would be preferable if I could specify a texture id for each vertex, but other solutions are accepted as well. Here's an illustration of what I want. How can I achieve this type of blending? Multi texturing with blend maps is NOT a solution. Also please notice that there will be a lot of different textures. |
1 | What is the best way to draw the outline of an object using OpenGL? I want to pick the best way to draw the outline of a 3D human-like object, and I'd like to know what works best for this kind of mesh. I have found stencil buffer based methods, geometry shader based methods and toon shader based methods; is one of these good, and if so, which one? Or is there another, better way to draw an outline?
1 | Using large textures on limited hardware I've run into a problem where some of the models that I'm loading can have very large textures (2000x2000 for example). While my desktop computer can load them just fine, my laptop gets a segfault at the glTexImage call. I've thought of a couple of things I can do in this situation Before giving the image to OpenGL, I could downscale it so it's below the max texture size, but I'm not sure how to then stretch the image to its original width and height I could also outright refuse to load the texture, and replace it with a generic repeating texture, like Valve's black and purple grid, but this isn't at all reliable or desired Are these two options viable? How is it done with game engines like Unity or Source? Are there any other options that are better than my ideas? |
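For the first option, the rough approach I've been sketching (untested) is to query GL_MAX_TEXTURE_SIZE and resample before upload; gluScaleImage is just the first resampler that came to mind, and imgW, imgH and pixels are my own names for the loaded image data, not anything from my current loader.

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

int newW = (imgW > maxSize) ? maxSize : imgW;
int newH = (imgH > maxSize) ? maxSize : imgH;

std::vector<unsigned char> scaled((size_t)newW * newH * 4);
const unsigned char* upload = pixels;
if (newW != imgW || newH != imgH) {
    // Resample the RGBA image down to a size the driver accepts.
    gluScaleImage(GL_RGBA, imgW, imgH, GL_UNSIGNED_BYTE, pixels,
                  newW, newH, GL_UNSIGNED_BYTE, scaled.data());
    upload = scaled.data();
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newW, newH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, upload);
// UVs are normalized (0..1), so nothing needs to be "stretched back";
// the mesh samples the smaller texture over the same 0..1 range.

Is that the sane way to handle it, or do engines do something smarter?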
1 | Ray Picking how can I find which copy of model to pick, if they share the same vertices, but each one is translated before being drawn? I have a scene, in which I am drawing few different objects each one has the same vertices and each one is translated to proper place before being drawn. While using libgdx (but I think that this question should be generic enough to work with other libraries), I try to do ray picking, casting ray and trying to find intersections of given ray with triangles, from meshes of each object. My problem is, that because they all share the same set of vertices, how should I find which one exactly was pointed by user? As I have no experience in this, I was thinking about translating the vertices for each object and getting rid of glTranslate(x, y, z) called before each one, but in that situation I won't be able to use one copy of mesh to work with all objects. May I ask for a few hints links to articles how this should be done? |
1 | If a desktop machine supports OpenGL 3.0, can I assume that it supports OpenGL ES 2.0 too? This isn't clear to me: if I use the drivers from the GPU manufacturer and they support OpenGL 3.0 or above, can I always make an OpenGL ES 2.0 application work?
1 | Matrix loading problems with jbullet and lwjgl The following code does not load the matrix correctly from jbullet. box is a RigidBody. Transform trans = new Transform(); trans = box.getMotionState().getWorldTransform(trans); float[] matrix = new float[16]; trans.getOpenGLMatrix(matrix); Then I pass that matrix to OpenGL and render the cube: FloatBuffer buffer = ByteBuffer.allocateDirect(4 * 16).asFloatBuffer().put(matrix); buffer.rewind(); glPushMatrix(); glMultMatrix(buffer); glBegin(GL_POINTS); glVertex3f(0, 0, 0); glEnd(); glPopMatrix(); The jbullet world is configured like so: collisionConfiguration = new DefaultCollisionConfiguration(); dispatcher = new CollisionDispatcher(collisionConfiguration); Vector3f worldAabbMin = new Vector3f(-10000, -10000, -10000); Vector3f worldAabbMax = new Vector3f(10000, 10000, 10000); AxisSweep3 overlappingPairCache = new AxisSweep3(worldAabbMin, worldAabbMax); SequentialImpulseConstraintSolver solver = new SequentialImpulseConstraintSolver(); dynamicWorld = new DiscreteDynamicsWorld(dispatcher, overlappingPairCache, solver, collisionConfiguration); dynamicWorld.setGravity(new Vector3f(0, -10, 0)); dynamicWorld.getDispatchInfo().allowedCcdPenetration = 0f; CollisionShape groundShape = new BoxShape(new Vector3f(1000.f, 50.f, 1000.f)); Transform groundTransform = new Transform(); groundTransform.setIdentity(); groundTransform.origin.set(new Vector3f(0.f, -60.f, 0.f)); float mass = 0f; Vector3f localInertia = new Vector3f(0, 0, 0); DefaultMotionState myMotionState = new DefaultMotionState(groundTransform); RigidBodyConstructionInfo rbInfo = new RigidBodyConstructionInfo(mass, myMotionState, groundShape, localInertia); RigidBody body = new RigidBody(rbInfo); dynamicWorld.addRigidBody(body); dynamicWorld.clearForces(); Nothing is rendered on the screen. What am I doing wrong?
1 | Deferred rendering and gaussian blur artifacts I compute Gaussian blur in two passes (horizontally and vertically). Shaders look like this Horizontal blur fragment shader version 420 layout (location 0) out vec4 outColor in vec2 texCoord float PixOffset 5 float (0.0,1.0,2.0,3.0,4.0) float Weight 5 float ( 0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162 ) float scale 4.0 uniform sampler2D texture0 uniform vec2 screenSize void main(void) float dx 1.0 screenSize.x vec4 sum texture(texture0, texCoord) Weight 0 for( int i 1 i lt 5 i ) sum texture(texture0, texCoord vec2(PixOffset i , 0.0) scale dx ) Weight i sum texture(texture0, texCoord vec2(PixOffset i , 0.0) scale dx ) Weight i outColor sum I use deferred rendering and the following screens shows a diffuse material texture after blurring. I simplified a render loop for the sake of clarity(only one render target a diffuse material). A render loop first case bind fbo1 gbuffer stage unbind fbo1 bind fbo2 read a diffuse texture, render to a temporary texture (full screen quad) read a temporary texture, horizontal blur, render to a temporary texture (full screen quad) read a temporary texture, vertical blur, render to a temporary texture (full screen quad) unbind fbo2 read a temporary texture, render to the default framebuffer (full screen quad) The final image has artifacts(flickering pixels). Some of them are placed between two triangles which create the full screen quad. A render loop second case bind fbo1 gbuffer stage unbind fbo1 bind fbo2 read a diffuse texture, render to a temporary texture (full screen quad) read a temporary texture, horizontal blur, render to a temporary texture (full screen quad) unbind fbo2 read a temporary texture, vertical blur, render to the default framebuffer (full screen quad) Some artifacts may appear between triangles A render loop third case bind fbo1 gbuffer stage unbind fbo1 bind fbo2 read a diffuse texture, horizontal blur, render to a temporary texture (full screen quad) unbind fbo2 read a temporary texture, vertical blur, render to the default framebuffer (full screen quad) The final image doesn't contain artifacts. How to fix that ? The above code is simplified but normally the first case(render loop order) is most useful, for example I want to blur a glow texture and use it when shading. |
1 | LWJGL 3 how to convert screen coordinate to world coordinate? I'm trying to convert screen coordinate to world coordinate on mouse click event. For LWJGL 3 there's not GLU utility class is available whereas LWJGL 2 has. I'm using JOML math classes and wrote following code, but its returning wrong world coordinate, I'm doing something wrong and couldn't figure out. On program init, I get viewProjMatrix and viewport viewProjMatrixUniform glGetUniformLocation(this.program, "viewProjMatrix") IntBuffer viewportBuffer BufferUtils.createIntBuffer(4) int viewport new int 4 glGetIntegerv(GL VIEWPORT, viewportBuffer) viewportBuffer.get(viewport) On render loop, I calculate viewProjMatrix viewProjMatrix .setPerspective((float) Math.toRadians(30), (float) width height, 0.01f, 500.0f) define min and max planes .lookAt(eye x, eye y, eye z, eye x, eye y, 0.0f, 0.0f, 2.0f, 0.0f) glUniformMatrix4fv(viewProjMatrixUniform, false, viewProjMatrix.get(matrixBuffer)) I convert screen coordinate to world coordinate with following code DoubleBuffer mouseXBuffer BufferUtils.createDoubleBuffer(1) DoubleBuffer mouseYBuffer BufferUtils.createDoubleBuffer(1) glfwGetCursorPos(window, mouseXBuffer, mouseYBuffer) double x mouseXBuffer.get(0) double y mouseYBuffer.get(0) System.out.println("clicked at " x " " y) Vector3f v3f new Vector3f() viewProjMatrix.unproject((float)x, (float)y, 0f, viewport, v3f) System.out.println("world coordinate " v3f.x " " v3f.y) Here's the full source code https gist.github.com digz6666 48bb433c83801ea4b82fa194f05b4f02 |
1 | How to draw 2D pixel data with OpenGL I am fairly new to OpenGL. I have a 2D game in SDL2 that uses currently works by creating a SDL Surface from the pixel data, copying it into a SDL Texture, and rendering it to the screen with SDL Renderer. But rather than using SDL to render the pixels, I'd like to switch to OpenGL. The reason I'd like to switch is because I need to render some lines on top of the pixel data and SDL RenderDrawLine() just doesn't have all the features I need (like line thickness or glScissor). At first, I attempted to switch to OpenGL by using glDrawPixels() and I was happy with the results. However, I found out that glDrawPixels() does not seem to be available in OpenGL ES (mobile devices). I have looked through some tutorials, but they all use shaders and other fancy stuff that I don't think I really need. Is there a simple way (like glDrawPixels()) to just draw pixel data to the screen for a 2D game? The pixel data is in the format GL UNSIGNED BYTE and it contains everything that I want to draw on the screen (except for several 2D line segments that I plan on using GL to render on top). |
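The direction I'm currently leaning (not sure it's the right one) is to keep a single screen-sized texture alive and re-upload the pixel buffer into it every frame, then draw it on a full-screen textured quad. The upload part would look roughly like this; width, height and pixels are my own names for the existing pixel buffer.

// One-time setup: allocate a screen-sized RGBA texture, no data yet.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Every frame: overwrite the texture with the new pixel data,
// then draw one full-screen textured quad (two triangles) with it.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

Is a per-frame glTexSubImage2D upload a reasonable substitute for glDrawPixels, or is there a simpler path I'm missing?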
1 | What makes a game look "good"? I am working on a 3D space game using OpenGL and C and I am planning to focus on giving the game modern, eye catching graphics, but the more I think of it the more I realise I don't really know what makes graphics "good". Sure, I can go and play some well known AAA games and bask in the amazingly put together graphics, but I don't really know how the graphics looks good. (this is why I consider games to be an art!) This is what I can think of now High quality textures High quality models A good lighting model Bumpmapping specularity mapping High quality UI, if applicable A wealth of not overdone posteffects I'm asking here in the hope of an experienced game developer who has produced games and know how they work inside and out can explain some techniques that warrant a game's graphics to look "good", and some not well known quirky tips. That'd be awesome. Thanks! |
1 | Is glxinfo saying that the 980 GTX doesn't support a 32 bit depth buffer? I've been using the glxinfo command (glxinfo v) to explore the supported framebuffer configurations. There are two values relating to depth, "depth" and "depthsize." According the source, it appears that the "depth" value relates to the X config and the "depthsize" value relates to the OpenGL config. Assuming that is correct, would the lack of a "depthsize 32" entry suggest that 32 bit depth buffers aren't supported? Or is my understanding of the glxinfo output flawed? |
1 | View to normal calculation in GLSL Sorry for the terrible title, but I really cant think of anything better.. Suggestions welcome. I am trying to do something like showcased in this video http www.youtube.com watch?v CaTI2d0tQME So basically smoothly change the opacity when looked face on. This is my vertex shader so far, the fragment shader is simple as it just multiplies the lightColor with the texture version 430 core uniform mat4 MV uniform mat4 MVP layout(location 0) attribute vec3 vertexPosition layout(location 1) attribute vec2 vertexUV layout(location 2) attribute vec3 vertexNormal layout(location 3) attribute mat4 bufferMatrix For per instance translation varying vec2 UV varying vec4 lightColor flat varying int InstanceID void main() vec4 mcPosition MV bufferMatrix vec4(vertexPosition, 1.0) mcPosition mcPosition length(mcPosition) vec3 mcNormal vertexNormal vec3 ecNormal vec3(MV bufferMatrix vec4(mcNormal, 0.0)) ecNormal ecNormal length(ecNormal) float dotProduct dot(vec4(mcNormal, 1.0), mcPosition) lightColor vec4(dotProduct) gl Position MVP bufferMatrix vec4(vertexPosition, 1.0) UV vertexUV InstanceID gl InstanceID The base is copied code from a 'phong' shader that works!.. I have tried everything I could think of, as well as searched google for quite some time. I think I realize what I need to do mathematically, which is getting the dot product of the vertex normal on to the view to vertex vector. That is mathematically speaking, another is to do it in GLSL with matrices etc. I am bad at debugging GLSL code, but right now the lightColor is always 0, no matter where I look at the model from. Quick Bonus Question What is the technique in the video actually called if anything? |
1 | How do I draw a portion of a texture onscreen in OpenGL? I have a sprite sheet loaded as an OpenGL texture, and I'd like to draw a portion of that texture sequentially for animation. Is there an actual OpenGL command to draw part of a texture? I can get it done by modifying the U and V components when drawing the texture, but I'm not sure if there's a more elegant correct method of doing this. |
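To show what I mean by modifying U and V, this is roughly what I'm doing now (col, row, frameW, frameH, sheetW, sheetH are my own names for the sprite-sheet layout):

// Sprite sheet laid out in a regular grid of frameW x frameH pixel cells.
float u0 = (col * frameW)       / (float)sheetW;
float v0 = (row * frameH)       / (float)sheetH;
float u1 = ((col + 1) * frameW) / (float)sheetW;
float v1 = ((row + 1) * frameH) / (float)sheetH;

glBegin(GL_QUADS);
glTexCoord2f(u0, v0); glVertex2f(x,          y);
glTexCoord2f(u1, v0); glVertex2f(x + frameW, y);
glTexCoord2f(u1, v1); glVertex2f(x + frameW, y + frameH);
glTexCoord2f(u0, v1); glVertex2f(x,          y + frameH);
glEnd();

It works, I just want to know whether there's a dedicated OpenGL call for this or whether adjusting UVs like this is the expected way.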
1 | OpenGL Transformations I'm not sure if I correctly understand 3D transformations in OpenGL. Let's assume I'm using the typical matrix stack. It seems like you move the world X units over, drop in a bag of verts (a mesh) and move the world back by popping the matrix off the stack, then you repeat the process to place another model? Say I wanted to place a model at (10, 0, 0) and another at (0, 10, 0). I think I would... PUSH move(10,0,0) DRAW model A POP PUSH move(0,10,0) DRAW model B POP Some questions come to mind, like for example what does loading an identity matrix do in the stack? Am I right to assume that these operations in effect move the world around, then you draw the mesh at 0,0,0 in world space when you've moved the world to your liking? Or is it the other way around, does GL "apply" the matrix stack to your mesh as you draw it? Thanks, Cody |
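In actual fixed-function GL calls, I believe my pseudocode above would be something like the following (drawModelA/drawModelB are placeholders for my own mesh-drawing functions) — just to make sure I have the idea right:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();                 // reset to the view/camera transform (identity here)

glPushMatrix();
glTranslatef(10.0f, 0.0f, 0.0f);  // model A ends up at (10, 0, 0)
drawModelA();                     // its vertices are authored around its own origin
glPopMatrix();

glPushMatrix();
glTranslatef(0.0f, 10.0f, 0.0f);  // model B ends up at (0, 10, 0)
drawModelB();
glPopMatrix();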
1 | What is the recommended way to output values to FBO targets? (OpenGL 3.3 GLSL 330) I'll begin by apologizing for any dumb assumptions you might find in the code below since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBO's and their associated targets (textures in my case). I have a simple (I think P) geometry fragment shader program and I'd like to write its Fragment Shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so version 330 in vec3 WorldPos0 in vec2 TexCoord0 in vec3 Normal0 in vec3 Tangent0 layout(location 0) out vec3 WorldPos layout(location 1) out vec3 Diffuse layout(location 2) out vec3 Normal uniform sampler2D gColorMap uniform sampler2D gNormalMap vec3 CalcBumpedNormal() vec3 Normal normalize(Normal0) vec3 Tangent normalize(Tangent0) Tangent normalize(Tangent dot(Tangent, Normal) Normal) vec3 Bitangent cross(Tangent, Normal) vec3 BumpMapNormal texture(gNormalMap, TexCoord0).xyz BumpMapNormal 2 BumpMapNormal vec3(1.0, 1.0, 1.0) vec3 NewNormal mat3 TBN mat3(Tangent, Bitangent, Normal) NewNormal TBN BumpMapNormal NewNormal normalize(NewNormal) return NewNormal void main() WorldPos WorldPos0 Diffuse texture(gColorMap, TexCoord0).xyz Normal CalcBumpedNormal() If my render target textures are configured as RT1 (GL RGB32F, GL RGB, GL FLOAT, GL TEXTURE0, GL COLOR ATTACHMENT0) RT2 (GL RGB32F, GL RGB, GL FLOAT, GL TEXTURE1, GL COLOR ATTACHMENT1) RT3 (GL RGB32F, GL RGB, GL FLOAT, GL TEXTURE2, GL COLOR ATTACHMENT2) And assuming that each texture has an internal format capable of contaning the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are Multiple Render Targets? From some Googling, I think there are two other ways to output to MRTs 1 Output each component to gl FragData n . Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach. 2 Use a typed output array variable for the expected type. In this case, I think it would be something like this out vec3 3 output void main() output 0 WorldPos0 output 1 texture(gColorMap, TexCoord0).xyz output 2 CalcBumpedNormal() So which is then the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help! |
1 | How to fix texture edge artefacts? This issue has been annoying me for a long time now and even after reading a lot of articles about it, I am still unable to fix the issue. First, to my setup. I'm using LWJGL for a 2D project, rendering in immediate mode (yes, I know, FBOs, but I feel like it's too late to change it now). Textures can be scaled, rotated tinted and cropped in any way and are read from packed spritesheets that are multiples of 2 (the one in question is 512x1024 for example). The Problem Because there is no smoothing applied to the textures, scaled and rotated textures look quite ugly, showing jagged lines and black outlines. I tried improving it with using glEnable(GL POLYGON SMOOTH) (even though I know now that this is apparantly a bad practice, but I have no idea how to add multisampling to immediate mode rendering, other sites even state that multisampling is probably overkill for 2D games, which leaves me even more confused). This is what it looks like, with and without POLYGON SMOOTH Quite clearly, there are multiple issues visible With POLYGON SMOOTH disabled, there are horribly jagged lines and dark edges (that aren't there in the texture) With POLYGON SMOOTH enabled, the edges are no longer jagged, but the dark edges are still there and now there are those diagonal lines. After reading up on that a bit, I found out that it has something to do with how graphics cards render quads, which is as two triangles, and when smoothing is applied, this happens. To test that out, instead of drawing a single quad I draw two triangles which indeed moves the diagonal lines around. The dark edges seem to be caused by something like like pre multiplied alpha (according to this), but when I check the raw pixel data, it doesn't look pre multiplied to me. A completely white texture with 0.5f alpha for example is represented as (in ARGB) 0x80FFFFFF. So applying the "fix" (using glBlendFunc(GL ONE, GL ONE MINUS SRC ALPHA) instead of the default glBlendFunc(GL SRC, GL ONE MINUS SRC ALPHA)suggested in the article doesn't really work, because my textures aren't premultiplied alpha, so alpha components don't work anymore (same texture, black background) Texture without alpha I do use custom shaders, so I guess I probably could make alpha work in some way by passing it to the shader and then multiplying the color by it, would that fix the dark edges (which would enable me to use GL ONE, the "better" blend function)? That would still leave me with the issue of the diagonal lines though, which I seem to be able to move around, but not remove. |
1 | OpenGL and 3ds model loading Path of least resistance? Hey guys, im working on a final class project for a graphics class, and me and a teammate are making a simple 3d tower defense game. We're currently planning on using 3ds models and drawing them with OpenGL.However, niether of us have a lot of practice experience with loading drawing models. What is the fastest and or easiest (not neccesarily the best or most feature implemented) way to load a 3ds model and draw it with a OpenGL glut setup? |
1 | Applying different materials to an object I'm currently implementing an Object Loader for the Wavefront File Format ( .obj). When exporting a model (with associated materials) from blender, a material for a group of faces is specified like this usemtl MyMaterial s off f 2 1 1 4 2 1 3 3 1 f 1 4 1 2 1 1 3 3 1 Since each material consists of (at least) 3 components for each of ambient, diffuse specular color and also shininess (which totals up to 10 floats) I would consider it to be a tremendous waste of memory to store all of these values for each vertex. Does OpenGL provide us with a more elegant viable way to achieve the same result with a more reasonable memory consumption or do I have to swallow the bitter pill? |
1 | Multiple buffers and calling glBufferSubData Originally asked this on Computer Graphics, but it might fit in better here. In my project, for convenience I would like to use many buffers. Many buffers in my case means 50 100 terrain patches represented by buffers with vertex coordinates, normals, indices and maybe color. The magnitude of data would be, let's say 10 4 floats per buffer. Some of this data can be shared between each terrain patch, f.ex. xz coordinates and indices. During the rendering loop, some terrain patches will be updated. Which means that for certain buffers I call glBufferSubData() for the whole buffer. My question is are there any pros cons, performance wise between these two methods 1) Controlling my data in many buffers (50 100), thus letting me call glBufferSubData on a complete buffer when needed. 2) Controlling my data in fewer (5 20) buffers, with more data in each. But then having to set up a system where I need to call glBufferSubData on smaller portion of a buffer. (Which leads to a more complex design in my case). |
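For option 2, the per-patch update I have in mind would just be a sub-range upload into the shared buffer; patchIndex, floatsPerPatch, bigVbo and patchData are my own bookkeeping names, not anything standard.

// Each terrain patch owns a fixed slice of one large VBO.
GLintptr   offset = (GLintptr)(patchIndex * floatsPerPatch) * sizeof(float);
GLsizeiptr size   = (GLsizeiptr)floatsPerPatch * sizeof(float);

glBindBuffer(GL_ARRAY_BUFFER, bigVbo);
glBufferSubData(GL_ARRAY_BUFFER, offset, size, patchData);

So the question is really whether many small full-buffer updates or fewer offset updates like this behave better.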
1 | No performance gain from instanced rendering? I recently worked through this tutorial about instanced rendering. At the end it promises to draw a huge number of instances of one model without performance drops. So I tried some simple instanced rendering to see these effects. I created a VBO containing four vertices, each with xyz coordinates, vertex color and texcoords: float[] backgroundData = new float[] { 1.0f, 1.0f, 0.0f, 1f, 1f, 1f, 1f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1f, 1f, 1f, 1f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1f, 1f, 1f, 1f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1f, 1f, 1f, 1f, 1.0f, 1.0f, 0.0f }; I also created a VAO and EBO containing the glVertexAttribPointers and the indices int[] indices = new int[] { 0, 1, 2, 1, 2, 3 };. Now I draw everything with glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0, 10000). But the framerate drops as much as if I had the indices 10,000 times in my EBO. Why is the outcome so different? Did I do something wrong?
1 | GLSL Shaders How to manage? As your game get's bigger and bigger, you will use more and more different shader effects. Let's take an easy example I have clouds in my voxel based world, and I want to give it a blue ish tint with a shader. Do I create a new shader, disable the default shader, enable the cloud shader, draw the cloud, disable the cloud shader and enable the default one again? If your game starts to get huge, how on earth can you use 300 shader effects enabled for 1 voxel and disabled for the other? I know the question may sound strange, but it really makes me think. I now have a Vertex and Fragmentshader, for the matrix calculations, and basic lighting. If I want to add a shading effect for for example water blocks, what to do? I hope someone can help me with this mystery. |
1 | How can I make a transparent hole with shaperenderer using stencil masking in libgdx? I'm making a 2d game, where I need a resizeable, moveable rectangle outline. I'm trying to use stencil masking to do it by cutting a hole in a solid rectangle, and I thought this would help How can I add a transparent overlay to a UI in libGDX? But all I get is a solid box. Here's my code Gdx.gl.glClear(GL STENCIL BUFFER BIT) Gdx.gl.glColorMask(false, false, false, false) Gdx.gl.glDepthMask(false) Gdx.gl.glEnable(GL20.GL STENCIL TEST) Gdx.gl.glStencilFunc(GL20.GL ALWAYS, 0x1, 0xffffffff) Gdx.gl.glStencilOp(GL REPLACE, GL REPLACE, GL REPLACE) shapeRenderer.begin(ShapeRenderer.ShapeType.Filled) shapeRenderer.box(cubby.xpos 5 , cubby.ypos 5, 0, cubby.size 10, cubby.size 10, 0) shapeRenderer.end() shapeRenderer.begin(ShapeRenderer.ShapeType.Filled) Gdx.gl.glColorMask(true, true, true, true) Gdx.gl.glDepthMask(true) Gdx.gl.glStencilFunc(GL NOTEQUAL, 0x1, 0xffffffff) Gdx.gl.glStencilOp(GL KEEP, GL KEEP, GL KEEP) shapeRenderer.box(cubby.xpos, cubby.ypos,0,cubby.size, cubby.size, 0) shapeRenderer.end() Gdx.gl.glDisable(GL20.GL STENCIL TEST) |
1 | Irradiance Map ( Irradiance environment map)? As irradiance map is generated for every possible normal for all the texels in environment map (as every texel act as a light source) so that we can look up irradiance map, based on normal of fragment to find out its resulting illumination. When Irradiance map is generated, does it get generated for every object in the scene to capture all possible normal and then eventually combined in a single texture? Is there any example code to create irradiance map and use it to light the scene? Any one has an example of CG shader for reading irradiance map? Thanks |
1 | Is it appropriate approach to simulate shadowing via occlusion culling of lights? I have own deferred renderer and a scene with both closed and open spaces. I want to prevent light passing though solid objects. For example, there can be a house with a lot of point lights inside. I want to prevent lighting of objects outside the house, but it is not always possible to adjust appropriate falloff distance. And it seems that creating shadows maps for each of them will require a lot of time and memory. So I could try another approach create some set of bounding primitives for each light source and perform occlusion culling with all of them to determine whether some space is affected by the source. Is this method appropriate for the described situation or maybe is there any other methods to simulate shadowing? |
1 | What resources are available for Mac OS game development? Are there are any modern resources on how to develop games for Mac OS? I suppose this would include objective c, cocoa and opengl 3 . The book Beginning Mac OS X Game Development with Cocoa looks very promising but isn't out until early next year. |
1 | How do I load textures with SFML for OpenGL? I'm looking at NeHe's texture mapping tutorial. It looks overly complicated for just loading a texture. Is there a way to load a texture in SFML and then use it in Open GL? I use SFML for my windowing. |
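What I'd hoped would work is loading with sf::Image and handing the pixels straight to glTexImage2D, roughly like this (SFML 2.x API, untested, "terrain.png" is just a placeholder path):

sf::Image image;
if (!image.loadFromFile("terrain.png"))
    return false;

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// sf::Image stores tightly packed 8-bit RGBA, which is what this call expects.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             image.getSize().x, image.getSize().y, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, image.getPixelsPtr());

Is that enough, or is there more to it (mipmaps, pixel store settings, etc.)?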
1 | Is it possible to create a Facebook game that is a Windows executable? I'm thinking of trying to make a game, and I heard Facebook is good place to make it popular, but I don't want to make a sprite based Flash game, I want to use OpenGL for rendering to get nice graphics and good performance with it. I know there is Java which supports OpenGL (I think), but I would like to make my game closed source so that people cannot make their own mods for it like what happened with Minecraft. Another option I thought was to make OpenGL executable and a Flash version, but I think it might be too much work for one guy. Not to mention I have never done anything in Flash. So, is there any way to develop closed source game that can be also played through a browser? By closed source I mean the people have no way of getting the source code by decompiling it in any way. |
1 | Can frequent state changes decrease rendering performance? Can frequent texture and shader binding decrease rendering performance? "Frequent" binding example for object for material in object render part of object using that material "Low count" binding example for material for object in material render part of object using that material I'm planning to use an octree later and with this "low count" method of rendering it can drastically increase memory consumption. So is it good idea? |
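What I'm considering, to keep the "low count" ordering without restructuring the scene graph, is collecting draw calls into a flat list each frame and sorting by a state key before submitting. A rough sketch of the idea (all type and variable names here are my own):

#include <algorithm>
#include <vector>

struct DrawCall {
    GLuint shader;
    GLuint texture;
    // ... mesh handle, offsets, etc.
};

std::vector<DrawCall> calls;  // filled during scene/octree traversal

// Sort so identical shaders and textures end up adjacent,
// then bind only when the state actually changes.
std::sort(calls.begin(), calls.end(), [](const DrawCall& a, const DrawCall& b) {
    if (a.shader != b.shader) return a.shader < b.shader;
    return a.texture < b.texture;
});

GLuint boundShader = 0, boundTexture = 0;
for (const DrawCall& c : calls) {
    if (c.shader  != boundShader)  { glUseProgram(c.shader);               boundShader  = c.shader;  }
    if (c.texture != boundTexture) { glBindTexture(GL_TEXTURE_2D, c.texture); boundTexture = c.texture; }
    // issue the actual draw for c here
}

Would something like this keep the memory benefits of per-object data while still cutting down the switches?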
1 | TBN Matrix Eye vs. World Space Conflict I am tired of misleading and insufficient articles making me more confused each time I read, I need a clarification that will solve my TBN matrix problem forever. Each article I read informs me differently about from which space TBN matrix converts to tangent space. One article says that it converts from eye space, other one says that it converts from world space to tangent space. As far as I know, world space is ModelMatrix vertex, eye space is ViewMatrix ModelMatrix vertex. Another article says that transpose of TBN Matrix actually does it. Can you please explain me how to use TBN matrix, does it (or transposed tbn) convert from eye or world to tangent ? Should I convert my lighting vectors from eye space to world and then apply TBN on them ? |
1 | What is the difference between Constant Vertex Attributes and Uniforms? According to the OpenGL ES 2.0 Programming Guide A constant vertex attribute is the same for all vertices of a primitive, and therefore only one value needs to be specified for all the vertices of a primitive. For uniforms the book states ...any parameter to a shader that is constant across either all vertices or fragments (but that is not known at compile time) should be passed in as a uniform. I've always used uniforms for data that is constant for a primitive but now it appears that attributes can also be used in the same way. Is there more to constant vertex attribute than simply 'they are the same as uniforms'? |
1 | One framebuffer for each light vs unique framebuffer per light Currently I have two ideas how to implement my lighting system 1) Use one framebuffer per light source no need to switch texture attachments of framebuffer may be faster bigger memory usage 2) Use one framebuffer for every light source some time may be wasted on attaching new textures to framebuffer less memory usage So could you please tell me which way is better and why? Or I'm totally wrong on my thoughts and there is no difference at all? |
1 | How to achieve rendering at huge distances? These days I was reading some information about the upcoming GTA V technology, in particular, how it is able to truly render all buildings you can see without a draw distance cap or any faking. Since I will soon be prototyping a big city environment, I wanted some advice on this topic. We're talking about a really big city, with simple geometry but fully lit and shadowed. My doubt comes in particular to the projection matrix, which imposes a maximum draw distance (the zFar parameter). Now, at the same time, I read everywhere that zFar should be as small as possible for better rendering results, in particular related to depth buffer issues etc, because of the floating point issues. So, assuming my computer can render this big city in a stable framerate, how should I approach the problem of rendering parts of the city which I can see REALLY far away, fully lit and shadowed? Shadow maps also seem to have problems with low depth buffer precision.. Thanks |
1 | Why does glGetString return a NULL string? I am trying my hand at the GLFW library. I have written a basic program to get the OpenGL renderer and vendor string. Here is the code: #include <GL/glew.h> #include <GL/glfw.h> #include <cstdio> #include <cstdlib> #include <string> using namespace std; void shutDown(int returnCode) { printf("There was an error in running the code with error %d\n", returnCode); GLenum res = glGetError(); const GLubyte *errString = gluErrorString(res); printf("Error is %s\n", errString); glfwTerminate(); exit(returnCode); } int main() { // start GL context and O/S window using GLFW helper library if (glfwInit() != GL_TRUE) shutDown(1); if (glfwOpenWindow(0, 0, 0, 0, 0, 0, 0, 0, GLFW_WINDOW) != GL_TRUE) shutDown(2); // start GLEW extension handler glewInit(); // get version info const GLubyte *renderer = glGetString(GL_RENDERER); // get renderer string const GLubyte *version = glGetString(GL_VERSION); // version as a string printf("Renderer %s\n", renderer); printf("OpenGL version supported %s\n", version); // close GL context and any other GLFW resources glfwTerminate(); return 0; } I googled this error and found out that we have to initialize the OpenGL context before calling glGetString(). Although I have initialized the OpenGL context using glfwInit(), the function still returns a NULL string. Any ideas? Edit: I have updated the code with error checking mechanisms. On running, this code outputs the following: There was an error in running the code with error 2 Error is no error
1 | Generate screen space distance field from depth buffer I've been wanting to try out raymarching on real 3D scenes to implement effects like AO, soft shadows and such. I pretty much know how to use signed distance functions (as described by Inigo Quilez) to approximate a model and raymarch it, but I want to have way more precision and have the distance field of my scene. How would you go about generating a (un)signed distance field from the depth buffer in screen space ? Or would the usual depth buffer be usable as a distance field (I don't think so) ? Or there might be better ways that don't use the depth buffer ? |
1 | How to properly scale from natural coordinates to screen coordinate system in OpenGL? I am working on a physics engine, and I have been using OpenGL to visualize what it's doing. I think I have a scaling issue when going from the natural coordinate system to the OpenGL screen coordinate system, which is between -1 and 1. I am saying that 1 is 150 meters and -1 is -150 meters. Therefore, to get to scale, I should be able to take a natural coordinate system point, say <0, 100>, and scale it to screen coordinates by doing <0, 100/150>, which is approximately <0, 0.667> in screen coordinates. When I watch a 1 kg ball fall from 100 meters, it seems to be falling too slowly, as if it's falling under water (see https://www.youtube.com/watch?v=KiZODHIvJ4A for reference). The physics engine seems to be correct. I think it has something to do with the coordinate system translation (just dividing by the 150 scalar right now). Does anyone know if I am doing this correctly?
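As a sanity check I've been thinking of comparing the on-screen fall time against the analytic value; a tiny helper like this (just my own check, not part of the engine) is what I have in mind:

#include <cmath>
#include <cstdio>

// Analytic free-fall time from height h (meters) under gravity g (m/s^2):
// h = 0.5 * g * t^2  =>  t = sqrt(2h / g)
int main() {
    const double h = 100.0;   // drop height in meters
    const double g = 9.81;    // gravitational acceleration
    const double t = std::sqrt(2.0 * h / g);
    std::printf("expected fall time: %.2f s\n", t);  // ~4.5 s
    return 0;
}

If the ball on screen takes much longer than that, I suspect the problem is in my timestep rather than the divide-by-150 mapping, since that mapping only scales positions, not time — but I'd like confirmation.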
1 | OpenGL vertex data per index Usually, vertex data is assigned to a particular vertex, like this data vertices 1.31 gt 1, 13, 5 84.3 gt 5, 8, 12 .095 gt 8, 3, 10 Then, you would typically have an array saying which order to draw the vertices in indices 0 2 1 2 Using this, the first vertex would be drawn first, then the third vertex, then the second one, then the third one again. My question is if it is possible to assign vertex data to the index array rather than to the vertex array, like this data indices 1.31 gt 0 84.3 gt 2 .095 gt 1 30.1 gt 2 So that when it draws each vertex from the index array, it uses that particular value. Please note that I am dealing with a lot more vertices than are shown here. (17 3 to be precise) The examples I shown above wouldn't be useful by themselves, but it would make a significant difference in performance in my program if this was possible. |
1 | Why is my depth buffer texture so bright? https www.youtube.com watch?v QuvAEqgHrMY amp feature youtu.be https www.youtube.com watch?v 5ob1JsPIGAs amp feature youtu.be gluPerspective(60, (float)CONTEXT WIDTH CONTEXT HEIGHT, 0.1f, 1.f) values used for first video gluPerspective(60, (float)CONTEXT WIDTH CONTEXT HEIGHT, 0.1f, 2000.f) values used for second video, the further z clip value seems to improve the depth rendered, but far from example images below. I set my projection matrix with gluPerspective. This seems to happen when I try gluPerspective when I set my projection matrix to first person. The type of image I would like to get is Is the visual result I am getting correct? Then are the listed examples of depth buffer images are merely a result of refining post rendering process, that are simply added in order for humans to see it better? |
1 | why is my OpenGL texture transparent? I have a terrain in OpenGL, and two textures which I am combining using GLSL mix() function. Here are the textures I am using. Now I am able to combine and mix these two textures, but for some reason, when I render the textures on the terrain, the terrain becomes transparent. I render the LHS texture first, and then I render the RHS texture with alpha channel, I don't understand why it is transparent. Here is an interesting fact, in the screenshot, you can see the result of the terrain when rendered on Nvidia GPU, when I render the same thing on interl HD 3k, I get different result. This result is how it is supposed to be, nothing is transparent in this following screenshot. Here is my fragment shader code.. void main() vec4 dryTex texture( u dryTex, vs texCoord 1 ) vec4 grassTex texture( u grassTex, vs texCoord 1 ) vec4 texColor1 mix(dryTex, grassTex , grassTex.a) out fragColor texColor1 |
1 | Render on other render targets starting from one already rendered on I have to perform a double pass convolution on a texture that is actually the color attachment of another render target, and store it in the color attachment of ANOTHER render target. This must be done multiple time, but using the same texture as starting point What I do now is (a bit abstracted, but what I have abstract is guaranteed to work singularly) renderOnRT(firstTarget) This is working. for each other RT currRT glBindFramebuffer(GL FRAMEBUFFER, currRT.frameBufferID) programX.use() glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, firstTarget.colorAttachmentID) programX.setUniform1i("colourTexture",0) glActiveTexture(GL TEXTURE1) glBindTexture(GL TEXTURE 2D, firstTarget.depthAttachmentID) programX.setUniform1i("depthTexture",1) glBindBuffer(GL ARRAY BUFFER, quadBuffID) quadBuffID is a VBO for a screen aligned quad. It is fine. programX.vertexAttribPointer(POSITION ATTRIBUTE, 3, GL FLOAT, GL FALSE, 0, (void )0) glDrawArrays(GL QUADS,0,4) programY.use() glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, currRT.colorAttachmentID) The second pass is done on the previous pass programY.setUniform1i("colourTexture",0) glActiveTexture(GL TEXTURE1) glBindTexture(GL TEXTURE 2D, currRT.depthAttachmentID) programY.setUniform1i("depthTexture",1) glBindBuffer(GL ARRAY BUFFER, quadBuffID) programY.vertexAttribPointer(POSITION ATTRIBUTE, 3, GL FLOAT, GL FALSE, 0, (void )0) glDrawArrays(GL QUADS, 0, 4) The problem is that I end up with black textures and not the wanted result. The GLSL programs program(X,Y) works fine, already tested on single targets. Is there something stupid I am missing? Even an hint is much appreciated, thanks! |
1 | Disadvantages of using multiple versions of OpenGL in LWJGL? So, I'm trying to figure out LWJGL, and my goal is to use OpenGL 3.2 (because pretty shaders are pretty). But in every tutorial I can find for LWJGL, they import a bunch of different OpenGL versions and use them at the same time. For example, this one imports GL11, GL15, GL20, and GL30 and uses functions from all of them. This seems intuitively like it would be messy and generally a bad idea. Besides the normal disadvantages of using deprecated functions, is there any particular reason not to do this? Could it affect performance in any way? Is there a potential for errors and odd behavior to occur? Also, if possible, does anyone have an example of LWJGL written only using GL32? I've found C code samples that do this, but I'm not too familiar with C , so it's difficult for me to translate. |
1 | Inter Quake Model IQM render Directx9 I'm trying to render an Inter Quake Model(http lee.fov120.com iqm ) in DirectX9 that I exported from blender. I want to display animations which IQM supports and my model format does not. The model is a cylinder. It loads fine in the iqm sdk opengl viewer but when i try to render it in directx9 using for example(this is just to render the vertices) IDirect3DDevice9 device HRESULT hr S OK for(int i 0 i lt nummeshes i ) iqmmesh amp m meshes 0 hr device gt DrawIndexedPrimitiveUP(D3DPT TRIANGLELIST, 0, 3 m.num triangles, m.num triangles , amp tris m.first triangle ,D3DFMT INDEX32 ,inposition ,sizeof(unsigned int)) It renders like this Incorrect The light grey bit that looks like two triangles in the middle is what is rendered(ignore the other stuff). Whereas it is meant to look like this(using a custom importer which I designed which matches what is displayed in blender) Correct Anyone have any suggestions on what might be going wrong? |
1 | Wierd behaviour upon limiting rotation on Y axis Okay, this is the code that works for a FPS camera, except it allows the player to flip it over by going under over 90 rotation on Y axis transformation gt rotateObjectFromRight(glm radians(angleX), glm tvec3 lt GLfloat gt (0.0f, 1.0f, 0.0f)) transformation gt rotateObjectFromLeft(glm radians(angleY), glm tvec3 lt GLfloat gt (1.0f, 0.0f, 0.0f)) transformation object holds the data required to construct the view matrix, among others, a quaternion representing the rotation. The functions called will construct the quaternion with the given data and then multiply it with the rotation quaternion either from the left (relative to global axes) or the right (relative to local axes) side. angleX and angleY are mouse X and Y offsets representing the angles to rotate around. The first line yaws the camera around local Y axis, while the second then pitches it around global X axis, creating the characteristic FPS camera movement. Okay. So far so good. This is the code that should limit the pitch to never exceed the 90 boundaries transformation gt rotateObjectFromRight(glm radians(angleX), glm tvec3 lt GLfloat gt (0.0f, 1.0f, 0.0f)) glm tvec3 lt GLfloat gt forwardVector glm rotate(glm inverse(transformation gt getRotation()), glm tvec3 lt GLfloat gt (0.0f, 0.0f, 1.0f)) GLfloat pitchAngle forwardVector.y 90.0f angleY glm clamp(angleY, (pitchAngle 90.0f), (pitchAngle 90.0f)) transformation gt rotateObjectFromLeft(glm radians(angleY), glm tvec3 lt GLfloat gt (1.0f, 0.0f, 0.0f)) First and the last line are the same. What happens here? First I get the unit vector that lies on the Z axis and rotate it so that it represents the forward vector of the camera. I then take the Y component of that vector (ranging from 1 to 1) and multiply it with 90 to get the degrees. Then I clamp the angleY so that current rotation combined with the new one never exceeds the 90 boundary. If the rotation was about to flip the camera overhead, it would change the new rotation angle so the camera looks straight up. Similar thing for underfoot flipping. However, the camera acts weird. It's fastest at pitch angles close to 0 , but gets progressively slower as the pitch nears 90 . If I move my mouse slower, the camera is faster, if I move it faster, it's slower. It's as though it's discarding the too large small angleY values instead of clamping them to the appropriate values (but it's not, and I can confirm via the debugger all the calculated values look alright and are correct). What could be amiss here to cause such weird movement? |
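One alternative I've thought about (but haven't tried yet) is to stop deriving the pitch limit from the forward vector every frame and instead keep an explicit accumulated pitch, clamping the increment before applying it. A rough sketch, reusing my own rotateObjectFromLeft call and with accumulatedPitch being a new variable I'd add:

static float accumulatedPitch = 0.0f;   // degrees, 0 = looking at the horizon

// Clamp the incoming mouse delta so the total never leaves (-89, 89).
float maxUp   =  89.0f - accumulatedPitch;
float maxDown = -89.0f - accumulatedPitch;
angleY = glm::clamp(angleY, maxDown, maxUp);
accumulatedPitch += angleY;

transformation->rotateObjectFromLeft(glm::radians(angleY),
                                     glm::tvec3<GLfloat>(1.0f, 0.0f, 0.0f));

I'd still like to understand why the forward-vector approach slows down near 90 degrees, though.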
1 | How to mitigate this strange pattern shown in Phong lighting? I am creating an entire lighting scene in OpenGL. The entire scene consist of only one point light. I noticed some strange z fighting like pattern. This flickers when I move the camera. Can anyone tell me what is the potential cause of the problem? How should I mitigate this problem? Thank you. EDIT The problem is solved. It was primarily caused by normal transform computation error and my ill made model might have been aggravated this speckle flickering effect. |
1 | Data Organization in Custom 3D Mesh File Format After careful consideration to use middleware, I have decided on creating my own 3d file format format to export meshes from 3D authoring application (Softimage) into my game. I will need to export the following Vertices Indices Normals UVs Material Information Animation information (no clue, how to import it) Collision mesh Game Properties defined within 3D authoring tool (object intelligence, aggressivity, etc..) ..another assets.. Can I kindly ask for a hint, how to construct my custom file format. How to organize data within my files, please? Does anoybody have a good adivce on exporting animation information, especially when the mesh changes its geometry? I would be thankful for advices that could point me into right direction. It would be nice to save some time instead of wasting it on incorrect approaches. I use Softimage as my 3D authoring tool. Target platform is OpenGL ES 2.0 running on mobile devices (iOS, Android). Programming language C . |
1 | Stencil buffer mask calculation I don't understand exactly how the stencil buffer works in OpenGL. One aspect that confuses me is why we use a bit mask in glStencilFunc. Some texts say it is used to achieve multiple stencil planes, but I am not sure how that can be done. Can anybody please explain the whole procedure of the stencil buffer calculation during the rendering cycle? Thank you.
1 | How to implement OpenGL triple buffering of buffer objects to avoid stalls? I'm trying to implement the triple buffering described here. The intent is to gain a higher frame rate by avoiding waiting for glBufferSubData() to finish. My understanding is that this is how to implement triple buffering: make 3 buffers with glGenBuffers(). Current frame: bind to buffer 3, draw the last frame, bind to buffer 1, glBufferSubData for all the vertices, keep track of their texture, vertex count, blend function, etc. Next frame: bind to buffer 1, draw the last frame (bind texture, blend func, glDrawArrays), and so on. There doesn't seem to be any performance gain between this and doing glBufferSubData() and then immediately calling glDrawArrays(). Am I missing something, or do I misunderstand how triple buffering should work? Also, is there a way to know when glDrawArrays() stalls because glBufferSubData() has still not finished?
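For that last part — detecting whether a buffer is still in use before I write to it — the idea I've been considering is placing a fence right after the draw that reads each buffer and checking it before the next upload into that buffer. A sketch of what I mean (the fences array and indexing are my own, not from the article):

// One fence per buffer; created right after the draw that reads buffer i.
GLsync fences[3] = { 0, 0, 0 };

// After drawing from buffer i:
fences[i] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// Before writing buffer i again with glBufferSubData:
if (fences[i]) {
    GLenum state = glClientWaitSync(fences[i], 0, 0);   // timeout 0 = just query
    if (state == GL_ALREADY_SIGNALED || state == GL_CONDITION_SATISFIED) {
        glDeleteSync(fences[i]);
        fences[i] = 0;            // safe to overwrite the buffer now
    } else {
        // GL_TIMEOUT_EXPIRED: the GPU is still reading this buffer,
        // so writing it now is exactly the stall case I'm worried about.
    }
}

Is checking sync objects like this the accepted way to confirm whether the triple buffering is actually helping?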
1 | How can I render a shiny, metallic surface? I am creating a game for which I want to render a kind of shiny black metallic surface with some refection property in it. I know that there would be something related to masks involved with it. But somehow I am not able to get started on it. So could someone give me insight in the matter or give a link to a good tutorial? |
1 | OpenGL What's the best way to convert a screen coordinate to a world coordinate? For example, suppose I'm building a first person shooter and the player pulls the trigger. I want to convert the screen coordinate (x, y) into a world coordinate (x, y, z) to know what they've hit. I've seen some people mention a lookup using glReadPixels, but I've heard it's slow and those references were years old, so I'm not sure if this is the best way. I'm working with core OpenGL 3.3, not ES. gluUnProject is not part of the API I'm asking about. These types of answers would be most helpful: An explanation of how to retrieve a pixel's (x, y, z) value using glReadPixels and how to transform that into world coordinates. I know how to multiply by inverse matrices; I don't know exactly what steps OpenGL takes to arrive at its depth buffer value. Or, an explanation of how to do something similar, perhaps more efficiently, with a pixel buffer object. Or some method that I don't even know about that's better than either of the above. For some context: I have many months' worth of code in a game I'm developing in OpenGL using the core 3.3 profile. I'm using minimal libraries and not the old fixed function GL pipeline, which is why it does not make sense for me to use gluUnProject. I mention this to highlight why this question is not a duplicate of a currently linked older question. Note: I previously mentioned a shader idea that did not add much to the question, so I removed it. I also edited the question to clarify the context and the type of answer I am looking for.
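To make the kind of answer I'm hoping for concrete, here is the rough shape of what I imagine the glReadPixels route looks like (untested, glm-based; x, y, viewportWidth, viewportHeight, projection and view are my own names, and the depth-to-NDC step is exactly the part I'm unsure about):

#include <glm/glm.hpp>

// Read the depth under the cursor; window y grows downward, GL's grows upward.
float depth = 0.0f;
glReadPixels(x, viewportHeight - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

// Window coordinates -> normalized device coordinates in [-1, 1].
glm::vec4 ndc(
    2.0f * x / viewportWidth  - 1.0f,
    1.0f - 2.0f * y / viewportHeight,
    2.0f * depth - 1.0f,
    1.0f);

// NDC -> world: multiply by the inverse of projection * view, then divide by w.
glm::vec4 world = glm::inverse(projection * view) * ndc;
world /= world.w;

If someone could confirm or correct that sequence (and explain where the default [0,1] depth range assumption comes in), that would answer the first bullet.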
1 | Efficient Dynamic Memory Management My world is procedurally generated. As the player moves, chunks behind them are unloaded and chunks in front of them are loaded. Each chunk has a mesh of triangles. At the moment, I create two VBOs for each chunk (vertices and colours), when the chunk is loaded. Once a mesh is created, it is only edited every few seconds. These buffers are deleted when the chunk is no longer visible. Am I leaking memory here by constantly creating and destroying buffers? I've heard somewhere that OpenGL (or WebGL in this case) doesn't do garbage collection until the program quits, due to it being slow. Is this right? Could I improve this system somehow? |
1 | What methods exist to implement a 2D looping world? I would like to know what methods exist to implement a 2D wrapping world. I know the simple modulo solution (ObjectPosition % MapSize), but that doesn't really fulfill my requirements. My issue with this easy solution is that it only works on points and not on objects. I tried cloning/duplicating my objects and making them visible on both sides of the level that way, but it doesn't really work out with Box2D physics. So I am wondering if there is any other solution. I would like to make my game world a truly looping, cylindrical world, but I don't know what other methods exist to implement such a thing in 2D.