_id | text |
---|---|
1 | Acquire disassembly of shader code Is there a way to get the disassembly that your driver generates when compiling a shader? I noticed that you can get an accidental disassembly dump if you go over the maximum thread group size supported by the hardware in compute shaders, so naturally I figured there must be some way to do this for all shader types. Is there a reliable way to get the disassembly (perhaps specific to NVIDIA/AMD/Intel hardware), using tools or even just "hacks" like causing errors during compilation/linking? |
1 | How to create realistic rainfall animation in modern OpenGL using shaders? I see many examples of particle systems in old-style OpenGL with falling raindrops. Examples of rainfall that can interact with environment lighting and look realistic are hard to find. How do I create a realistic rainfall animation in modern OpenGL using shaders? |
1 | OpenGL multiple mesh management I've been working on coding a new 3D engine in Java. For the moment at least I'm sticking to OpenGL... Currently I'm reworking how meshes get transformed and then drawn. At the moment each mesh created has its own VBO and IBO along with its own draw method, so if I wanted to draw multiple meshes it would look like this: mesh1 = MeshLoader.loadMesh("cube.obj"); mesh2 = MeshLoader.loadMesh("pyramid.obj"); mesh1.addToBuffers(); // adds vertex and index data to the VBO and IBO mesh2.addToBuffers(); mesh1.draw(); mesh2.draw(); The draw method defined in class Mesh enables vertex attribute arrays, binds the buffers, and then draws elements based on the IBO. Then I send my projection matrix (transformation and perspective) as a uniform to the vertex shader. QUESTION: Games are made of many, many meshes and each has to be identified so that transformations can be applied. I want to be able to apply transformations (translate, rotate, scale) directly to a given mesh, independent of the other meshes. In other words, I want to move mesh1, and only mesh1, 5 units to the right (on its own axis) without it affecting mesh2. How do I do all this without making a million draw calls to the GPU? Any advice appreciated. |
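The usual answer to the question above is that the per-object transform does not have to live in the vertex data at all: each mesh keeps its own model matrix on the CPU and uploads it as a uniform right before its draw call, so moving mesh1 never touches mesh2. A hedged C++/GLM sketch (the Mesh struct, uniform names and GL loader are assumptions for illustration, not taken from the post, and the original engine is Java/LWJGL):

```cpp
#include <vector>
#include <glad/glad.h>            // assumption: any GL loader works here
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Hypothetical per-mesh data: geometry lives once in GL buffers,
// the transform is just a matrix stored on the CPU side.
struct Mesh {
    GLuint vao = 0;
    GLsizei indexCount = 0;
    glm::mat4 model = glm::mat4(1.0f);   // translate/rotate/scale this mesh only
};

void drawScene(GLuint program, GLint modelLoc, const glm::mat4& viewProj,
               const std::vector<Mesh>& meshes)
{
    glUseProgram(program);
    // viewProj is uploaded once per frame; only the model matrix changes per mesh.
    GLint vpLoc = glGetUniformLocation(program, "u_viewProj");   // assumed uniform name
    glUniformMatrix4fv(vpLoc, 1, GL_FALSE, glm::value_ptr(viewProj));

    for (const Mesh& m : meshes) {
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(m.model));
        glBindVertexArray(m.vao);
        glDrawElements(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}

// Moving mesh1 five units along +X affects only mesh1:
//   meshes[0].model = glm::translate(meshes[0].model, glm::vec3(5.0f, 0.0f, 0.0f));
```

One draw call per mesh (not per transform) is normally fine; if the mesh count really explodes, instanced drawing with a per-instance matrix attribute is the usual next step rather than merging everything into one buffer.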
1 | Why does GLM only have a translate function that returns a 4x4 matrix, and not a 3x3 matrix? I'm working on a 2D game engine project, and I want to implement matrices for my transformations. I'm going to use the GLM library. Since my game is only 2D, I figured I only need a 3x3 matrix to combine the translation, rotation and scale operations. However, glm::translate is only overloaded to return a 4x4 matrix, and never a 3x3. I thought a translation could be performed using a 3x3 matrix, so why does GLM only have a translate function that returns a 4x4 matrix, and not a 3x3 matrix? |
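For the 2D case above, the reason a 3x3 is needed at all is that translation is affine rather than linear, so it has to sit in the homogeneous column and points must be multiplied as vec3(x, y, 1). GLM does not stop you from building that matrix yourself; a minimal sketch, assuming column-major glm::mat3 and counter-clockwise rotation:

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Build a 3x3 homogeneous 2D transform: scale, then rotate, then translate.
// glm matrices are column-major, so m[2] is the third column.
glm::mat3 makeTransform2D(glm::vec2 translation, float angleRad, glm::vec2 scale)
{
    float c = std::cos(angleRad);
    float s = std::sin(angleRad);

    glm::mat3 S(1.0f);
    S[0][0] = scale.x;
    S[1][1] = scale.y;

    glm::mat3 R(1.0f);
    R[0] = glm::vec3( c,  s, 0.0f);
    R[1] = glm::vec3(-s,  c, 0.0f);

    glm::mat3 T(1.0f);
    T[2] = glm::vec3(translation, 1.0f);   // translation sits in the last column

    return T * R * S;   // applied to points as: transform * glm::vec3(p, 1.0f)
}
```

If I remember correctly, GLM also ships a GLM_GTX_matrix_transform_2d extension with mat3 translate/rotate/scale helpers, but hand-rolling the matrix as above avoids depending on an experimental header.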
1 | Is index drawing faster than non index drawing I need to draw a lot of polygons consisting of six vertices (two triangles). Without any texture coordinates, normals etc., both approaches result in 72 bytes. In the future I would definitely also need texture coordinates and normals, which would make indexed drawing consume less memory. Not a lot though. So my question is: for VAOs with few vertex overlaps, which approach is faster? I don't care about the extra memory consumed by non-indexed drawing, only speed. Edit: To make it clear. Non-indexed approach: float[18] vertices, Triangle 1: 1,1,0, 1,0,0, 0,0,0, Triangle 2: 1,0,0, 0,1,0, 0,0,0. Indexed approach: float[12] vertices: 1,1,0, 1,0,0, 0,0,0, 0,1,0, plus int[6] indices, Triangle 1: 0,1,2, Triangle 2: 0,3,2 |
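Restating the two layouts from the post as C++ arrays, with the draw call each one pairs with (buffer creation and attribute setup omitted; the vertex values are copied from the post as written):

```cpp
#include <glad/glad.h>   // assumption: any GL loader

// Non-indexed: 6 vertices, drawn with glDrawArrays.
const float quadVerts[18] = {
    1,1,0,  1,0,0,  0,0,0,      // triangle 1
    1,0,0,  0,1,0,  0,0,0 };    // triangle 2 (values mirrored from the post)

// Indexed: 4 unique vertices plus 6 indices, drawn with glDrawElements.
const float quadUnique[12]    = { 1,1,0,  1,0,0,  0,0,0,  0,1,0 };
const unsigned int quadIdx[6] = { 0,1,2,  0,3,2 };

void drawNonIndexed() { glDrawArrays(GL_TRIANGLES, 0, 6); }
void drawIndexed()    { glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr); }
```

With this little vertex sharing the index buffer saves almost nothing, so any speed difference tends to come from the post-transform vertex cache, which only indexed drawing can exploit.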
1 | How to draw textured tile map in opengl I am trying to create a textured hexagonal tile map in opengl. I have the VBO and respective index buffer. Additionally I have a texture atlas for texturing individual tiles. I'm attempting to create a distinct texture for each hex without mixing the textures. I understood that one should use a pair of (U,V) texture coordinates for each vertex. Now if the tile map would not be indexed and have the overlapping vertices, I could just set the texture coordinates for each vertex and get each hex rendering the correct texture. However, with indexing the overlapping vertices are gone and I can only set single pair of texture coordinates for each vertex which results in the textures mixing inside the hexes. Is there a way to texture tiles with indexing or another alternative approach to creating hex maps with different textures? |
1 | How does a shader program still work even after detaching the shader objects? In the following process of shader creation, the shader objects are detached after linking the program (glLinkProgram); how does the shader program still work after detaching and deleting the shader objects? glCreateShader → glShaderSource → glCompileShader → glCreateProgram → glAttachShader → glLinkProgram → glDetachShader → glDeleteShader → use shader. |
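The short version is that glLinkProgram copies the compiled code into the program object itself, so the shader objects are only compilation intermediates afterwards; detaching and deleting them just lets the driver reclaim that intermediate storage. A minimal C++ sketch of the sequence listed above (error checking omitted):

```cpp
#include <glad/glad.h>   // assumption: any GL loader

// After glLinkProgram the program object holds its own executable; the shader
// objects are no longer needed, so detaching/deleting them is safe.
GLuint buildProgram(const char* vsSrc, const char* fsSrc)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSrc, nullptr);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSrc, nullptr);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);        // executable is baked into 'prog' here

    glDetachShader(prog, vs);   // 'prog' no longer depends on the shader objects
    glDetachShader(prog, fs);
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;                // glUseProgram(prog) still works afterwards
}
```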
1 | Animate multiple entities I'm trying to animate multiple (3) entities using one model (IQM format). It's working but performance is really bad because I'm calling the animate function for each entity in my game loop (I think the problem is there). What's the best way to animate multiple entities (with different animations, of course) in OpenGL? I think I could try building one VBO per entity for better performance, but I don't think it's the best way to do it. |
1 | Jittery Rotational Movement with Arcball implementation I have an assignment where I need to implement arcball using Opengl ( 2.0). I have it more or less implemented but I have a some problems issues bugs and I'm not sure whats either causing them or how to solve. Arcball rotation Specically minor variance in mouse movement away from the axis I'm rotating on seems to "confuse" my program, causing artifacts (such as rotating briefly a completely different direction before resuming its current path). Code void mouse motion(int mx, int my) float angleRadians handles mouse motion events if ( arcball on amp amp (mx ! last mx my ! last my) ) if left button is pressed lArcAngle arcAngle cout lt lt " Last Angle " lt lt lArcAngle lt lt endl cur mx mx cur my my glm vec3 va get arcball vector(last mx, last my) glm vec3 vb get arcball vector(cur mx, cur my) angleRadians acos(min(1.0f, glm dot(va, vb))) arcAngle ( ( angleRadians 180 ) 3.14159265 ) cout lt lt " Angle in degrees " lt lt arcAngle lt lt endl cout lt lt " ArcAngle after Mult " lt lt arcAngle lt lt endl arcAngle lArcAngle cout lt lt " ArcAngle after accumulation " lt lt arcAngle lt lt endl axis in world coord glm cross(va, vb) last mx cur mx last my cur my glutPostRedisplay() other code for other functionality... void display() Background color glClearColor(1, 1, 1, 0) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glClear(GL COLOR BUFFER BIT) glEnable(GL DEPTH TEST) Matrix setup glMatrixMode(GL PROJECTION) glViewport(0, 0, width, height) glLoadIdentity() gluPerspective(40, (float)width (float)height, 0.1, 1000) Matrix setup glMatrixMode(GL MODELVIEW) glLoadIdentity() glTranslatef(0, 0, 3) int myFlagCtr getFlagCtr() cout lt lt "CRX " lt lt crx lt lt " CRY " lt lt cry lt lt " CRZ " lt lt crz lt lt endl glPushMatrix() glRotatef(arcAngle, axis in world coord.x, axis in world coord.y, axis in world coord.z) glTranslatef(vec blx, vec bly, zScaling) glRotatef(arcAngle, axis in world coord.x, axis in world coord.y, axis in world coord.z) if (myFlagCtr 1) Set Wireframe glPolygonMode(GL FRONT AND BACK, GL LINE) color red glColor3f(1.0f, 0.0f, 0.0f) else if (myFlagCtr 2) glPolygonMode(GL FRONT AND BACK, GL LINE) color red glColor3f(1.0f, 0.0f, 0.0f) glEnable(GL POLYGON OFFSET LINE) glPolygonOffset( 1.0, 1.0) draw object blDrawMouseMesh() glPolygonMode(GL FRONT AND BACK, GL FILL) color red glColor3f(0.0f, 1.0f, 0.0f) else if (myFlagCtr 0 myFlagCtr 1) Bring back to Default glPolygonMode(GL FRONT AND BACK, GL FILL) glColor3f(0.53f, 0.12f, 0.47f) blDrawMouseMesh() glPopMatrix() glutSwapBuffers() As you can see between the two mouse vectors I get the angle between them, and the orthonogol vector which is the axis I am rotating around. But this only seems to work in a rather "dirty" and not particularly smooth way. Edit I now think the problem is the accumulation of degrees, when I switch axis's it remembers the angle from before, so it will "rotate" by the same amount previously plus the new angle, I'm not sure how to solve given the loading of the identity matrix causing non accumulated values to reset. |
1 | Is it a good idea to render many textures into one texture? I'm making a 2D game with OpenGL. In order to avoid changing the state machine and binding at runtime, I want to consolidate my textures into bigger textures, for example, taking 4 128x128 textures and making 1 big 512x512. I could then just render part of the texture rather than binding a new one. Is this a good idea? Could my newly created texture get lost, or are there major faults in doing this? Thanks |
1 | Is the "impossible object" possible in computer graphics? This may be a silly question but I want to know the answer to it. I saw this thing called the "impossible object", while they're many different images of this online, it's suppost to be impossible geometry. Here is an example Now as far as logic goes, I know you don't have to obey it in games, such as a flying cow, or an impossible object. So that's out of the way, but what stands in my way is whether or not there is a way to draw this onto a 3D scene. Like is there a way to represent it as a 3D object? Thanks! |
1 | Rotate view matrix based on touch coordinates I'm working on an Android game where I need to rotate the camera around the origin based on the user dragging their finger. My view matrix has initial position of sitting on the negative z and facing origin. I have succeeded in moving the camera through rotation left or right, up or down based on the user dragging the finger, but my problem is obviously that after I drag my finger up down and rotate say 90 degrees so my intial position of z is now y and still facing origin, if I drag my finger left right I want to rotate from y to x, but what happens is it rotates around the pole y. This is to be expected as I am mapping 2D touch drag coords to 3D space, but I dont know where to start trying to do what I want. Perhaps someone can point me in the right direction, I've been googling for a while now but I don't know what I want to do is called! Edit What I was looking for is called an ArcBall, google it for lots of info on it. |
1 | How to calculate view matrix for OpenGL 3. 2D Camera Roll I'm trying to create a camera object for my 2D game engine, but I just can't seem to get the view matrix down right. Here's the code I'm currently using: glm::mat4 Camera::GetViewMatrix() { glm::mat4 view; glm::vec3 camPos = glm::vec3(this->pos, CAM_Z); GLfloat xup, yup; xup = sin(this->rot); yup = cos(this->rot); view = glm::lookAt(camPos, camPos - glm::vec3(0.0f, 0.0f, 1.0f), glm::vec3(xup, yup, 0.0f)); return view; } The result I get from this is as follows: Before rotation / After rotation. As you can see, the camera is rotating around its top left corner. The top left corner is the origin point (0,0,0) and the camera's position is (0,0). Shouldn't the lookAt function position the camera at (0,0,1) and point it at (0,0,0)? That's not what seems to be happening. I'm just trying to get the camera to roll about its set position. Any help would be greatly appreciated! Thanks in advance! |
1 | Infinite terrain generation performance one big heightmap or several smaller heightmaps? I'm generating a procedural terrain using (a home made fBm based on) Perlin noise as a heightmap. To make the terrain infinite, I redraw part of it while the camera moves. There are two alternatives: either have several (say 9) heightmap textures and rotate them while the camera moves, or have only one heightmap (say 9 times bigger) and redraw only part of it as necessary. Note: The two approaches redraw exactly the same amount of pixels each time. But in the first case, I must bind 3 small textures and entirely redraw them to redraw a line of my grid; in the second case, I bind the whole bigger texture and redraw only a line inside it. Question: Should I expect one to be faster than the other? Does glBindTexture have a higher cost for larger textures? |
1 | Map Engine Rendering Broken So I've been making my own tile map engine in LibGDX as the one currently implemented doesn't support collision detection with scaling. So I've come up with my own Layer and Map class and a drawing function for the map. Currently this is what it produces. Here's the rendering code: public void draw(SpriteBatch batch) { for (Layer layer : layers) { for (int x = 0; x < layer.getTileMap().length; x++) { for (int y = 0; y < layer.getTileMap()[x].length; y++) { batch.draw(layer.getTiles()[layer.getTileMap()[x][y]], xOffset + (x * (layer.getTileWidth() * scale)), yOffset + (y * (layer.getTileHeight() * scale)), (layer.getTileWidth() * scale), (layer.getTileHeight() * scale)); } } } } EDIT: Yes, I'm aware the map looks weird, it's just random textures thrown in from a tile sheet. |
1 | How do you fix wobbling shadow edges? I've implemented an omnidirectional shadow map and I've noticed a rather unwanted behaviour in the shadows. It seems like when the angle between the occluded points and the light source is really steep, the edge of the shadow starts to wobble. It almost looks like it's pulsating. Could it have something to do with the fact that when the light source moves further away from the occluded polygons, the pixels next to each other are further away from each other than the corresponding texels on the shadow map face that I sample from, which results in some sort of magnification side effect (aliasing)? This is just a really wild guess! Is anyone familiar with this type of behaviour and/or has any solution to it? Thank you! |
1 | Using a heightmap to simulate 3D in an isometric 2D game I saw a video of a 2.5D engine that used heightmaps to do z-buffering. Is this hard to do? I have more or less no idea of OpenGL (LWJGL) and that stuff. I could imagine that you compare each pixel and its depth map value to the depth map of the already drawn background to determine if it gets drawn or not. Are there any tutorials on how to do this? Is this a common problem? It would already be awesome if somebody knows the names of the OpenGL commands so that I can go through some general tutorials on that. Greets! Great 2.5D engine with the needed effect, please go to the last 30 seconds. Edit: just realised that my question wasn't expressed quite clearly. How can I tell OpenGL to compare the existing depth buffer with a grayscale texture, to determine if a pixel should get drawn or not? |
1 | In OpenGL, how can I discover the depth range of a depth buffer? I am doing a GL multi pass rendering app for iOS. The first pass renders to a depth buffer texture. The second pass uses the values in the depth buffer to control the application of a fragment shader. I want to rescale the values in the depth buffer to something useful but before I can do that I need to know the depth value range of the depth buffer values. How do I do this? |
1 | Multithreaded Game Loop I'm trying to implement a multithreaded game loop. I already did that but had to use a few locks for that, which ruined the performance. After researching a bit I came up with this idea Instead of splitting the engines subsystems into different threads (e.g. physics, animation), all subsystems run on all threads. So when we got four CPUs, four threads are created, with each thread having one loop for all subsystems. So the single core game loop is copied on all four threads. These game loops are controlled by one other loop, which sends messages (or 'jobs', 'tasks') to one of these threads (depending on their usage) according to user input or scripts. This could be done with a double buffered command buffer. Only the rendering loop is alone in a thread for maximum rendering performance. Now I'm thinking of the best way to communicate with the rendering loop. The best idea I could come up with is to again use a command buffer and swap it when the rendering loop is complete. That way the rendering loop doesn't have to wait for any of the loops and can go on rendering. If the game loop hasn't finished when the rendering loop swapped the buffer, all commands after that will be executed in the next frame of the rendering loop. To make sure that all objects will be drawn even if the game loop hasn't finished, the rendering loop holds all objects that will be drawn and draws them until it gets the command to stop drawing them. My goal is to make the engine scalable to cpu numbers and make it to use all cores. Is this a way to do that? What is the best approach to this and how are modern engines handling this? |
1 | Batching and Z order with Alpha blending in a 3D world I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve). (OpenGL ES 2 with C++) Currently, I'm ordering elements back to front before drawing them without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment: Order all elements of my scene back to front. Send the ordered list of elements to the Renderer. The Renderer looks in its batch manager to see whether a batch exists for the given element with its Material. Batch doesn't exist: create a new one. Batch exists for an element with this Material: add the sprite to the batch. Compute a big mesh with all sprites for each batch (1 material type = 1 batch). When all batches are ready, the batch manager computes draw commands for the renderer. The Renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements). Image with my problem here. But I've got some problems because objects can be behind other objects inside another batch. How can I do something like that? |
1 | How do I create good looking plasma explosion effects? Is this just a billboard quad with a bloom shader? |
1 | How to check for cube collisions? I want a method which takes two "ObjectBox" objects (an "ObjectBox" has .getX() .getY() .getZ() .getSizeX() .getSizeY() .getSizeZ() methods) as parameters and returns true if the two boxes are colliding and false if they aren't. So it should be something like this: public static boolean checkCollision(ObjectBox box1, ObjectBox box2) { return TRUE IF COLLIDING / FALSE IF NOT COLLIDING; } I tried figuring it out but it seemed pretty hard to me. |
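For reference, the standard axis-aligned box test is a per-axis interval overlap check. A hedged sketch in C++ mirroring the getters described above (the original is Java; this assumes getX/getY/getZ return the box's minimum corner, so if they return the centre, each comparison needs shifting by half the size):

```cpp
// Axis-aligned box overlap test.
struct ObjectBox {
    float x, y, z;      // minimum corner (assumption)
    float sx, sy, sz;   // size along each axis
};

bool checkCollision(const ObjectBox& a, const ObjectBox& b)
{
    // Two intervals [min, min + size] overlap iff each starts before the other ends.
    return a.x < b.x + b.sx && b.x < a.x + a.sx &&
           a.y < b.y + b.sy && b.y < a.y + a.sy &&
           a.z < b.z + b.sz && b.z < a.z + a.sz;
}
```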
1 | Migrating 2D game from immediate mode to VBOs I am in the process of migrating away from legacy OpenGL calls in my 2D LWJGL game. Previously I would render each sprite using immediate mode, which was dead easy bind the texture, add the vertices, and it's done. Now I want to use vertex arrays. Ideally I would put the vertices for ALL my entities in one big array and render it in a single draw call. The problem? Each entity type uses a different texture, and glDrawArrays uses just one texure at a time. Here are some possible solutions I came up with 1) Create a separate vertex array for each entity This is by far the easiest solution, but it somehow seems wrong to me, because it's so similar to using immediate mode. Doesn't this defeat the benefit of using vertex arrays? 2) Create a separate vertex array for each entity type In other words, all entities that have the same texture will be rendered together. This isn't ideal as it would involve a good chunk of extra work and some extra effort on the CPU to separate the entities by type, and more importantly, would affect the z ordering of the entities. For example, it would be impossible to have a Tree that appears in front of one Squirrel but behind another Squirrel. That is, unless I introduced a depth buffer, but I was trying to avoid this as it adds another level of complexity and, as I understand it, more work on the GPU (although that probably doesn't matter too much). 3) Combine all textures using a texture atlas If I was able to use a single texture for all entities, I could use a single vertex array. However, there are 2 problems with this I have not been able to find a good library for creating a texture atlas at runtime, and writing this code myself would be non trivial. I am worried that some GPUs may have a prohibitively small texture size limit. Summary Would the first solution be viable advisable? Am I being foolish for trying to avoid using the depth buffer? Am I missing something here? Is there a better solution? |
1 | My Ambient lighting is not working correctly I'm having a problem when using ambient lights in my OpenGL game. When I first started with my program, I had a positioned light, and the code was this: GLfloat AmbientColor[] = {1.0f, 1.0f, 1.0f, 1.0f}; glLightModelfv(GL_LIGHT_MODEL_AMBIENT, AmbientColor); // POSITIONED LIGHT GLfloat LightColor0[] = {0.9f, 0.9f, 0.7f, 1.0f}; GLfloat LightPos0[] = {0.0f, 900.0f, 1000.0f, 1.0f}; glLightfv(GL_LIGHT0, GL_DIFFUSE, LightColor0); glLightfv(GL_LIGHT0, GL_POSITION, LightPos0); When doing this, I had a well lit area with no problem. Once I changed my matrix to GL_MODELVIEW, I noticed that my lights were not equal on each face. I changed my code to look like this: GLfloat AmbientColor[] = {1.0f, 1.0f, 1.0f, 1.0f}; glLightModelfv(GL_LIGHT_MODEL_AMBIENT, AmbientColor); // POSITIONED LIGHT GLfloat LightColor0[] = {0.9f, 0.9f, 0.7f, 1.0f}; glLightfv(GL_LIGHT0, GL_AMBIENT, AmbientColor); Once I used this code, only the mountains in my terrain were lit up, and anything perfectly flat was not lit at all (completely black...). So my question is: how would I use my ambient light correctly? Example code would be nice, thanks. |
1 | Lag Spike When Creating Model I am creating a game using OpenGl in c . Whenever I create a new model while the game is running, such as fire a bullet, there is a huge lag spike. The function that creates the model is below. std string jsonString jsonString file gt load(type) json jf json parse(jsonString) Might be causing the lag indicesSizeTexture jf quot textureIndices quot .size() verticesSizeTexture jf quot textureVertices quot .size() indicesSizeCollision jf quot collisionIndices quot .size() verticesSizeCollision jf quot collisionVertices quot .size() verticesTexture new float verticesSizeTexture 8 verticesCollision new float verticesSizeCollision 8 verticesCollisionUpdated new float verticesSizeCollision 8 indicesTexture new int indicesSizeTexture indicesCollision new int indicesSizeCollision for (int i 0 i lt verticesSizeTexture i ) responsible for just the texture vertices verticesTexture i jf quot textureVertices quot i for (int i 0 i lt indicesSizeTexture i ) responsible for just the texture indices indicesTexture i jf quot textureIndices quot i for (int i 0 i lt verticesSizeCollision i ) responsible for just the collision vertices verticesCollision i jf quot collisionVertices quot i verticesCollisionUpdated i verticesCollision i for (int i 0 i lt indicesSizeCollision i ) responsible for just the collision indices indicesCollision i jf quot collisionIndices quot i binds id glGenBuffers(1, amp VBO) glGenVertexArrays(1, amp VAO) glGenBuffers(1, amp EBO) glGenTextures(1, amp texture) glBindVertexArray(VAO) glBindBuffer(GL ARRAY BUFFER, VBO) glBufferData(GL ARRAY BUFFER, verticesSizeTexture 8 sizeof(float), verticesTexture, GL STATIC DRAW) position attribute glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 8 sizeof(float), (void )0) glEnableVertexAttribArray(0) texture glBindTexture(GL TEXTURE 2D, texture) glVertexAttribPointer(2, 2, GL FLOAT, GL FALSE, 8 sizeof(float), (void )(6 sizeof(float))) glEnableVertexAttribArray(2) stbi set flip vertically on load(true) unsigned char data stbi load(texturePathString.c str(), amp width, amp height, amp nrChannels, 0) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA, width, height, 0, GL RGBA, GL UNSIGNED BYTE, data) glGenerateMipmap(GL TEXTURE 2D) stbi image free(data) glBindBuffer(GL ARRAY BUFFER, 0) glBindVertexArray(0) I stripped out a lot of parts that I am almost certain aren't causing the lag. There is a lot of stuff going on, but it is mostly simple mathematical operations. The only parts that I think could be causing the lag is the json section used for loading the model data. The model data is stored in a variable from file as a string. I need the json section for the data storage though. What could be causing the lag? should I find a different data storage type? What if I created a bullet offscreen on startup, then copied it whenever I needed it? The specific json library I am using is https github.com nlohmann json |
1 | how to make a fast fragment shader that converts intensity to saturation? I have a simple fragment shader that looks like this ifdef GL ES precision lowp float endif varying vec4 v fragmentColor varying vec2 v texCoord uniform vec4 u desiredColor uniform sampler2D u texture void main() gl FragColor texture2D(u texture, v texCoord) u desiredColor My goal is to take a grayscale image and apply a color to it, but maintain luminance. There was a stackoverflow post from years ago where the poster was after exactly what I want https stackoverflow.com questions 4361023 opengl es 1 1 how to change texture color without losing luminance The answers however are outdated (OpenGL ES 1.0) and also I am a little confused by them. The accepted answer says to use multi texturing, and the 2nd answer says that is "nonsense". Regardless, the code example there is incompatible with the version of OpenGL ES 2.0 that my game is using with cocos2d v2.x. Here are my original grayscale sprites With my simple shader applied and setting u desiredColor to vec4(1.0, 0.0, 0.0, 1.0), you can see it does exactly what I don't want it to do it completely tints the entire image, turning everything red, including the white. What I really want is for the darker shades of gray have the richest color, but as the sampled grayscale pixel approaches white, the colorized pixels lose saturation, where ultimately a sampled pixel of white has 0 saturation but a sampled pixel of dark gray would be 100 saturation. My desired output is My thought was that the simplest approach would be to have my shader make the output pixel color's saturation be something like (1.0 samplePixelIntensity). So, if the sample pixel's intensity was 1.0, that would mean 0 saturation in the output, if the sample pixel's intensity was 0.5, it would have 50 saturation.. if the sample pixel's intensity was 0.2, then it would have 80 saturation. However, I am not very familiar with writing shaders, so I am wondering how I might construct something like this? |
1 | OpenGL vector rotation matrix to a given point I have a sphere with a radius of .5 The sphere is centred at 0,0,0 I am now trying to place items around my sphere based on a given x,y,z on the surface of the sphere. My items have been draw created in such a way that they are positioned on the surface when translated to 0,0,0 because their centre offset has been positioned at the centre of the sphere. What I need is some help with the calculation needed to get the correct glRotate values so that my icons will show at the correct x,y,z on the surface. To help explain what I mean, here is a picture So you can see 0,0,0 at the centre, and also rotating around the same point is the blue line, This represents the rotation of my object. The green part is the visible part of my item, and so will draw on the surface of the sphere. I have everything ready to go for this, in that I can already draw my items in this manner, however I am not able to correctly calculate the rotation. With the default rotation of Identity my items will draw on the bottom of the sphere. In summary I am looking for something like this glRotate GetRotationForSurface(X,Y,Z) I hope that's clear, this is a bit tricky to explain. Thanks, |
1 | Single Array Texture, multiple wrap modes I have a array texture of which I would like some of its sub images have wrap modes of GL REPEAT and others to be GL CLAMP TO EDGE. From my research, I haven't found a clear answer. It appears Sampler Objects may be what I'm looking for, however I'm unsure as to how to go about implementing this. As current I have a sampler class which looks like this class TextureSampler public unsigned int id TextureSampler() TextureSampler(unsigned int wrapS, unsigned int wrapT, unsigned int filterMin, unsigned int filterMag) TextureSampler() void bind(unsigned int textureId) void unbind(unsigned int textureId) As you can expect the constructor simply sets all the wrap and filter parameters. I bind with this glBindSampler(textureId, id) . The textureId being that from glGenTextures() and the id from glGenSamplers(). Now when it comes to my implementation, I've created two samplers, one for repeating wrap mode and another for edge clamping. This is where things are a bit murky. I've got two samplers, one texture array... it doesn't seem possible to bind both samplers at the same time and yet how do I access both types in the fragment shader? Google has not been much help and I've read parts of the ARB sampler objects extension to see if it had useful information and it seemed to say ... Should sampler objects be made visible to the shading language. This is left for a future extension. I am using GL 4.3, so this is extension is not built in, however that quote leaves me wondering if I've approached this all wrong. What is the correct approach to this? Is this possible with a single texture array? |
1 | 3D collision detection on non flat surface I am developing a game which needs an accurate collision detection algorithm for when an object travels down a slope which isn't flat. To be more precise, I need to simulate a skier who travels down a ramp. My first idea was to create simple bounding boxes around the skis and the ramp, then place the skier in mid air and start calculating gravity. When the two bounding boxes intersect, a collision is detected (this is how I did it in 2D). But the problem is that the skis are flat and the slope isn't. So there is a chance that the ski will stick into the jump (due to the curved surface of the jump) or, even worse, the skier will go under the jump (I don't want the skis to "sink" into the jump either). What would be the best way of solving this? What I could think of is: when a collision is detected, rotate the skis until the front and the back end of the skis are colliding with the surface. Is this idea any good? Did anyone face this kind of problem? Would it be better to morph the skis according to the slope (but this would probably be an overhead I can't afford)? P.S. if I didn't explain enough, write a comment and I'll make a sketch. |
1 | Collision Detection Tips I need collision detection for my 3D racing game but it isn't going well right now. I think I understand the concept of testing boxes and generating a response, however the implementation part is a nightmare for me. OpenGL makes everything more complicated than it has to be. How can I use a shape that is constructed with glVertex? Should I create the bounding boxes with some Vector3D class? |
1 | How do I flip upside down fonts in FTGL I just use FTGL to use it in my app. I want to use the version FTBufferFont to render font but it renders in the wrong way. The font(texture?buffer?) is flipped in the wrong axis. I want to use this kind of orthographic settings void enable2D(int w, int h) winWidth w winHeight h glViewport(0, 0, w, h) glMatrixMode(GL PROJECTION) glLoadIdentity() I don't even want to swap the 3rd and 4th param because I like to retain the top left as the origin glOrtho(0, w, h, 0, 0, 1) glMatrixMode(GL MODELVIEW) I render the font like this No pushing and popping of matrices No translation font.Render("Hello World!", 1, position, spacing, FTGL RenderMode RENDER FRONT) On the other forums, they said, just scaled it down to 1, but it wont work in mine. Example font is above the screen I can't see relevant problem like in mine in google so I decide to ask this here again. How can I invert texture's v coordinate without modifying its source code ? (assume its read only) |
1 | Geometry shader for multiple primitives How can I create a geometry shader that can handle multiple primitives? For example, when creating a geometry shader for triangles, I define a layout like so: layout(triangles) in; layout(triangle_strip, max_vertices = 3) out; But if I use this shader then lines or points won't show up. So adding: layout(triangles) in; layout(triangle_strip, max_vertices = 3) out; layout(lines) in; layout(line_strip, max_vertices = 2) out; The shader will compile and run, but will only render lines (or whatever the last primitive defined is). So how do I define a single geometry shader that will handle multiple types of primitives? Or is that not possible, and do I need to create multiple shader programs and change shader programs before drawing each type? |
1 | When should I use GL_TRUE or GL_FALSE values? When using GLboolean, should I just use true and false or should I use GL_TRUE and GL_FALSE? When should I prefer using GL_TRUE and GL_FALSE? Or maybe I shouldn't care at all (because both work)? Here it is said that both are actually a different type. But on some tutorial websites I've read, they use GL_TRUE or GL_FALSE in functions that require GLboolean. Example: glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*) 0) Well, it works too if we substitute GL_FALSE with false in the code above. |
1 | model view projection multiplication order I'm debugging a lighting problem where the camera position is affecting the diffuse lighting component on my 3D model. In researching my problem I went back and am reading over all the how to documents. I found that one of my old sources (where I learned OpenGL 2.0) said this (summarized): When creating the modelViewProjection matrix you can't change the order or you'll get unexpected results. First take the View matrix and post multiply it by the Projection matrix to create a viewProjection matrix. Then post multiply the Model matrix to get the modelViewProjection matrix. Looking at my code, all this time I haven't been doing this: matrixMultiply(mView, thisItem->objModel, mModelView); matrixMultiply(mProjection, mModelView, mModelViewProjection); As a test I tried changing it to this: matrixMultiply(mView, thisItem->objModel, mModelView); matrixMultiply(mProjection, mView, mViewProjection); matrixMultiply(mViewProjection, thisItem->objModel, mModelViewProjection); ...but the result appears the same. EDIT: To answer a question about what the matrices contain, here is a snapshot of the matrix values. EDIT2: As a test I did the math both ways to compare the results, and even with moving the camera and model around and rotating the model I'm getting the same results for the final ModelViewProjection. |
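With column-vector math (the convention GLM and fixed-function OpenGL use), the combined matrix is Projection · View · Model, and the two code paths above only differ in where the parentheses fall, which is why the results match: matrix multiplication is associative, just not commutative. A small GLM sketch of the same comparison (names are illustrative, not from the post):

```cpp
#include <glm/glm.hpp>

glm::mat4 buildMVP(const glm::mat4& projection, const glm::mat4& view, const glm::mat4& model)
{
    // Both groupings produce the same matrix; associativity means only the
    // left-to-right order P, V, M matters, not where the parentheses go.
    glm::mat4 viewProjection = projection * view;
    glm::mat4 mvpA = viewProjection * model;
    glm::mat4 mvpB = projection * (view * model);
    return mvpA;   // == mvpB up to floating-point rounding
}
```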
1 | How to use LWJGL Vertex Buffer Objects? I have been learning how to make a game with LWJGL for a while now by following YouTube tutorials online but I've recently been having a problem understanding Vertex Buffer Objects. I've looked at several tutorials including the one at the LWJGL Wiki, but I still don't understand how to use them. Can someone please clarify how Vertex Buffer Objects should be used? I understand the code behind it but I don't know how it should be used. I know that the VBO should only be created once and be rendered every frame loop. But my main question is, is there one Vertex Buffer Object per game or one per Object to be drawn? Also, what's the difference between VBOs and Interleaved VBOs? |
1 | I get GL_INVALID_VALUE after calling glTexSubImage2D I am trying to figure out why my texture allocation does not work. Here is the code: glTexStorage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 2048, 2048); glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, &BitMap[0]); glTexSubImage2D returns GL_INVALID_VALUE, but the maximum texture size allowed is 16384x16384 on my card. The source of the image is 16 bit (Red 5, Green 6, Blue 5). |
1 | How to make OpenGL rendering resolution independent from its window context resolution? Can the rendering resolution of OpenGL and the window size (at least for the Windows OS) be separated? For example, I may only want to render at 400x300 resolution, but I want my window size to be 800x600. If so, how is this done? |
1 | Blending vs Texture Sampling Lets say I want to blend two textures together A texture containing the result of the SSAO calculation. A texture containing the rendered scene. I could do it in two ways Use a shader that samples both the SSAO and scene textures, blends them together and outputs the final color to a render target. Render to the texture containing the scene and use a blending mode to blend the SSAO texture on top of it. Only the SSAO texture will be sampled inside the shader. Is it possible to give a general answer about which version is faster, or is it highly hardware dependent? |
1 | GLSL multiplying fragment output color by normal after calling texture function displays wrong texture So I'm quite new to OpenGL GLSL. I'm trying to make multitexturing and simple normal lighting work. Here is a Vertex shader version 130 uniform mat4 transform in vec3 vertPos in vec2 texUV in vec3 vertNorm in uint thisLayer out vec2 fragTexCoord out vec3 fragNorm out float fragLayer void main() gl Position transform vec4(vertPos, 1.0f) fragTexCoord texUV fragLayer float(thisLayer) fragNorm vertNorm ... and here is Fragment shader version 130 uniform sampler2DArray texArray uniform uint texCount uniform float rTime in vec2 fragTexCoord in vec3 fragNorm in float fragLayer out vec4 color void main() float actual layer max(0, min(texCount uint(1), floor(fragLayer 0.5))) Getting texture layer vec3 texCoord vec3(fragTexCoord, actual layer) color texture(texArray, texCoord) And it works fine Multiplying color.rbg by any constant works as expected color texture(texArray, texCoord) color.rgb vec3(0.5, 0.5, 0.5) But if I try to multiply color.rgb by a normal, like this vec3 finalNorm (normalize(fragNorm) vec3(1,1,1)) vec3(2,2,2) color texture(texArray, texCoord) color.rgb finalNorm The wall texture changes to the last texture in the array (the one on the top) I have no idea why this happens. I normalize vertex normals and map their values into 0, 1 range, so colors will stay in 0, 1 range after multiplication as well. It is supposed to simply shift the colors intensity, like it did while multiplying by a constant, not change the texture itself, right? I am aware of the fact that this is not how you perform normal lighting, however, this is not the correct behavior as well, and I'm ripping my hair off as to why. Any help is appreciated. I will provide more info if needed. EDIT OS is Windows 10. GPU's Intel HD 4600 Nvidia GTX 860M (happens on both of them). OpenGL version 4.3.0 build 20.19.15.4624. Some additional info I use SOIL library to load data from PNG files. I use glTexImage3D(GL TEXTURE 2D ARRAY, 0, GL RGBA, img width, img height, layer count 2, 0, GL RGBA, GL UNSIGNED BYTE, 0) to create a container for texture array, and use glTexSubImage3D(GL TEXTURE 2D ARRAY, 0, 0, 0, i, img width, img height, layer count, GL RGBA, GL UNSIGNED BYTE, data) to load data into it. One issue I have with this is that glTexImage3D takes layer count (the number of textures) multiplied by 2, and I have no idea why as well! Maybe this is somehow related to the issue I'm having above? |
1 | How to free the GPU from a long task? I'm developing an application that uses the GPU. Some tasks might be rather long (like a fragment shader with a loop). I have the impression that I can make the entire OS visually freeze by asking the GPU to perform a job that doesn't end. Of course, I'm not doing this on purpose, but when that accidentally happens, I have to force power off my machine and this is very difficult to debug. Some questions Is this behaviour normal? Or could this be a bug in the hardware or driver? Is there a technique to recover from this without force stopping my machine? Since the OS is still functioning (only not rendering), I could try to ssh into my machine and kill the process. Would that free the GPU from its job? Update I tried to ssh into my machine and sudo kill 9 pid , but didn't work. I would like to know if there is an OS independent answer for this. If there isn't, my setup is macOS, with OpenGL (running on Intel HD Graphics 3000). |
1 | shadow mapping and linear depth I'm implementing ominidirectional shadow mapping for point lights. I want to use a linear depth which will be stored in the color textures (cube map). A program will contain two filtering techniques software pcf (because hardware pcf works only with depth textures) and variance shadow mapping. I found two ways of storing linear depth const float linearDepthConstant 1.0 (zFar zNear) first float moment1 viewSpace.z linearDepthConstant float moment2 moment1 moment1 outColor vec2(moment1, moment2) second float moment1 length(viewSpace) linearDepthConstant float moment2 moment1 moment1 outColor vec2(moment1, moment2) What are differences between them ? Are both ways correct ? For the standard shadow mapping with software pcf a shadow test will depend on the linear depth format. What about variance shadow mapping ? I implemented omnidirectional shadow mapping for points light using a non linear depth and hardware pcf. In that case a shadow test looks like this vec3 lightToPixel worldSpacePos worldSpaceLightPos vec3 aPos abs(lightToPixel) float fZ max(aPos.x, max(aPos.y, aPos.z)) vec4 clip pLightProjection vec4(0.0, 0.0, fZ, 1.0) float depth (clip.z clip.w) 0.5 0.5 float shadow texture(ShadowMapCube, vec4(normalize(lightToPixel), depth)) I also implemented standard shadow mapping without pcf which using second format of linear depth (Edit 1 i.e. distance to the light some offset to fix shadow acne) vec3 lightToPixel worldSpacePos worldSpaceLightPos const float linearDepthConstant 1.0 (zFar zNear) float fZ length(lightToPixel) linearDepthConstant float depth texture(ShadowMapCube, normalize(lightToPixel)).x if(depth lt fZ) shadow 0.0 else shadow 1.0 but I have no idea how to do that for the first format of linear depth. Is it possible ? Edit 2 For non linear depth I used glPolygonOffset to fix shadow acne. For linear depth and distance to the light some offset should be add in the shader. I'm trying to implement standard shadow mapping without pcf using a linear depth ( viewSpace.z linearDepthConstant offset) but following shadow test doesn't produce correct results vec3 lightToPixel worldSpacePos worldSpaceLightPos vec3 aPos abs(lightToPixel) float fZ max(aPos.x, max(aPos.y, aPos.z)) vec4 clip pLightProjection vec4(0.0, 0.0, fZ, 1.0) float fDepth (clip.z clip.w) 0.5 0.5 float depth texture(ShadowMapCube, normalize(lightToPixel)).x if(depth lt fDepth) shadow 0.0 else shadow 1.0 How to fix that ? |
1 | How can I simulate a limited (256) color palette in OpenGL? On Twitter, I found this screenshot of a game in development The image on top seems to be without any color limitation. But the two other pictures at the bottom have a 256 color palette. I want to achieve a similar effect in my game (I am using OpenGL). How can I do so? |
1 | GLM Euler Angles to Quaternion I hope you know GL Mathematics (GLM), because I've got a problem I cannot crack. I have a set of Euler angles and I need to perform smooth interpolation between them. The best way is converting them to quaternions and applying the SLERP algorithm. The issue I have is how to initialize a glm quaternion with Euler angles, please? I read the GLM documentation over and over, but I cannot find an appropriate quaternion constructor signature that would take three Euler angles. The closest one I found is the angleAxis() function, taking an angle value and an axis for that angle. Note, please, that what I am looking for is a way to pass in RotX, RotY, RotZ. For your information, this is the above mentioned angleAxis() function signature: detail::tquat<valType> angleAxis(valType const & angle, valType const & x, valType const & y, valType const & z) |
1 | What is the best method to update shader uniforms? What is the most accepted way of keeping a shader's matrices up to date, and why? For example, at the moment I have a Shader class that stores the handles to the GLSL shader program and uniforms. Every time I move the camera I then have to pass the new view matrix to the shader, then for every different world object I must pass its model matrix to the shader. This severely limits me as I can't do anything without having access to that shader object. I thought of creating a singleton ShaderManager class that is responsible for holding all active shaders. I can then access it from anywhere, and a world object wouldn't have to know about what shaders are active, just that it needs to let the ShaderManager know the desired matrices, but I'm not sure this is the best way and there are probably some issues that will arise from taking this approach. |
1 | OpenGL lighting not working I have rendered a spinning model in LWJGL. I have calculated normals and enabled lighting. Now I make a light: float[] lightpos = {0, 0, 0, 0}; FloatBuffer lightposb = BufferUtils.createFloatBuffer(8); lightposb.put(lightpos); glLight(GL_LIGHT0, GL_POSITION, lightposb); float[] light = {10, 10, 10, 1}; FloatBuffer lightb = BufferUtils.createFloatBuffer(8); lightb.put(light); glLight(GL_LIGHT0, GL_SPECULAR, lightb); glLight(GL_LIGHT0, GL_DIFFUSE, lightb); glLight(GL_LIGHT0, GL_AMBIENT, lightb); The model, instead of being lit up, is a dark gray. Also, no color appears on the model, even though I set its color to cyan: float[] color = {0, 1, 1, 1}; FloatBuffer colorb = BufferUtils.createFloatBuffer(8); colorb.put(color); glMaterial(GL_FRONT, GL_AMBIENT, colorb); UPDATE: Even though I have calculated normals and set them with glNormal3f, the model still appears to be shaded flat. UPDATE: I HAVE done glEnable(GL_LIGHTING) and glEnable(GL_LIGHT0); that is NOT the problem. UPDATE: I reversed the order of the light code and the model code and now the model flashes white, then goes dark grey and stays there. |
1 | Animate some tiles of tile map I have a game where each "tile" of the terrain is a triangle (Settlers 2). Those are placed next to each other and in way creating an infinite world (right bottom jumps back to left top). There is a 3D component (each triangle point has a height) but that gets translated into an Y position offset, so everything can be considered 2D. Everything (texture coords, screen coords, color (white lt black, FoW)) is put into VBOs already and usually does not change (FoW may disappear which updates the color VBO, but rarely) When drawing I iterate over the visible tiles (currently in view) and collect "runs" of tiles with the same texture. After that I do Bind current texture Draw current run (offset into VBOs num of same textured tiles) Repeat until all runs are drawn To reduce the number of texture switches I wanted to create a texture atlas. But the problem Some textures are animated (texture animation so all tiles use the same frame of the animation at any given time) and those animations may be different. To give some numbers 30 textures, pretty small ( 30 60px each) 8 are animated, rest static animation length is 4 8 frames animation frame time is constant How would I best go with those animations? Put all static textures into 1 texture, animated ones into separate ones? Unroll animations so I have 8 textures (possibly 24 if I have 8 frame and 6 frame animations) where each texture is 1 frame (so static textures are copied into those unmodified) Repeat all textures into 1 texture and do some U coordinate shifting in a shader (not used ATM, but will add soon) again possible waste Some shader magic? Using OpenGL 2.1, C . Visualization of 3 textures Triangles cut out from squares in any position (in fact only 6 are possible Top to bottom, top to middle, top to middle rotated by 135 and their mirrored counterparts). Shown is 1 of 2 triangles per square, the other one is mirrored horizontally. All tiles may have different sizes |
1 | Can I change the order of these OpenGL Win32 calls? I've been adapting the NeHe OpenGL Win32 code to be more object oriented and I don't like the way some of the calls are structured. The example has the following pseudo structure: Register window class; Change display settings with a DEVMODE; Adjust window rect; Create window; Get DC; Find closest matching pixel format; Set the pixel format to the closest match; Create rendering context; Make that context current; Show the window; Set it to foreground; Set it to having focus; Resize the GL scene; Init GL. The points in bold are what I want to move into a rendering class (the rest are what I see being pure Win32 calls) but I'm not sure if I can call them after the Win32 calls. Essentially what I'm aiming for is to encapsulate the Win32 calls into a Platform::Initiate() type method and the rest into a sort of Renderer::Initiate() method. So my question essentially boils down to "Would OpenGL allow these methods to be called in this order?": Register window class; Adjust window rect; Create window; Get DC; Show the window; Set it to foreground; Set it to having focus; Change display settings with a DEVMODE; Find closest matching pixel format; Set the pixel format to the closest match; Create rendering context; Make that context current; Resize the GL scene; Init GL (obviously passing through the appropriate window handles and device contexts). Thanks in advance. |
1 | Event Based Render Update Loop I'm working on a few game dev tools which use OpenGL or DirectX to render 3D models (viewing). Consider your favorite 3D modeling software package (Softimage, Maya, 3DS Max, modo, etc.). Note that the 3D viewport does not sit in a constant render loop pegged at max FPS. It remains idle (0 FPS) until some action is taking place in the viewport, such as moving the viewport camera from side to side using the mouse or pressing play for animation. If the viewport is idle, there is no drain on the CPU/GPU. I'd like to use this technique in a couple of tools I'm working on and possibly within the game. Is there a generally accepted programming technique with OpenGL and/or DirectX (via C++) that will achieve this? One solution would be to do a standard render loop and check for input 1000s of times per second, but that seems inefficient and hard to manage as you add more and more "events" which would trigger an updated render. Please let me know if the question is unclear. If I knew how to describe it perfectly I could probably just search Google. |
1 | OpenGL How to map a point inside the frustum to normalized device coordinates (NDC)? I read this article: http://www.songho.ca/opengl/gl_projectionmatrix.html. It explains how to calculate the projection matrix coefficients. But I completely can't understand how the author performs the mapping from the frustum to NDC. Why is x_n a linear function of x_p? I can't imagine it. Maybe someone can explain it? Or share a useful source. |
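Reconstructing that step for the x coordinate (y is analogous; these formulas are the standard derivation, not quoted from the article): an eye-space point is first projected onto the near plane, and the projected coordinate is then mapped linearly from [l, r] onto [-1, 1], which is exactly the linear relationship between x_p and x_n.

```latex
x_p = \frac{n\,x_e}{-z_e}, \qquad
x_n = \frac{2\,x_p}{r-l} - \frac{r+l}{r-l}
    = \left(\frac{2n}{r-l}\,x_e + \frac{r+l}{r-l}\,z_e\right) \Big/ (-z_e)
```

The last form is what ends up in the first row of the projection matrix, with the division by -z_e deferred to the perspective divide by w_c = -z_e.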
1 | Geometry design and buffers I'm making some tests with rendering stuff and I'm wondering how to design my Geometry class. For the moment, here is how I do Init Stock array with positions, array with colors, array with normals, has attributes. Just before rendering, if my geometry have changed, I update his vertex buffer and index buffer. To do that, I compute interleaved array here and send it to my vertex buffer. I've made some tests with particles (10000 on iPad), the heaviest function is "Geometry computeInterleavedArray. How can I avoid this part? Is it better to stock geometry's data inside a single array with all data already interleaved? |
1 | libgdx glClearColor not setting the right color? I'm just new to libgdx and trying to understand the example code. The following code sets the bg color: Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); Gdx.gl.glClearColor(60, 181, 00, 0f); My expected color is green, R 60 Green 181 B 0, but the above code shows me a yellow color in my android app. Anything I'm doing wrong? http rapid tools.net online color picker |
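One thing worth noting about the snippet above: glClearColor takes floating-point components that are clamped to [0, 1], so byte-style values need dividing by 255 first (the LibGDX Gdx.gl wrapper forwards to the same GL call). A small C++ sketch of the idea; the helper name is made up:

```cpp
#include <glad/glad.h>   // assumption: any GL loader

// glClearColor expects normalized components in [0, 1]; 8-bit channel values
// must be divided by 255 before being passed in.
void setClearColorFromBytes(int r, int g, int b, int a = 255)
{
    glClearColor(r / 255.0f, g / 255.0f, b / 255.0f, a / 255.0f);
}

// e.g. setClearColorFromBytes(60, 181, 0);  // the green from the post
```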
1 | OpenGL Render two scenes with one draw call I need to draw a normal vector scene and a default scene. I could achieve this by having two programs with different fragment shaders to produce the following images. It will require me to draw twice, something like: useProgram(normal vector rendering program); render_bunny(); useProgram(default scene program); render_bunny(); However, when I am rendering the default scene, I am already passing normal vector information to achieve the lighting. So it got me curious: can I produce two different scenes with one program (a program that has two color buffer attachments) and one draw call somehow? I think if that is possible then I am saving lots of computing resources by not having to draw multiple times for different images. |
1 | Translating Viewmatrix is inverted, why? So I've defined a Projectionmatrix, Viewmatrix and a Modelmatrix using OpenGL (LWJGL). But when I translate my Viewmatrix to X it moves my object to the right (hence my camera is moving to the left), and when I move it to X the object moves to the left (camera moving to the right). Same goes for Y movement, Z works as it should though. My matrices are as follows Projectionmatrix perspectiveMatrix new Matrix4f() perspectiveMatrix.setIdentity() float aspectRatio WIDTH HEIGHT float zRange zNear zFar COLUMN ROW perspectiveMatrix.m00 (float) (1.0f (aspectRatio Math.tan(degreesToRadians(fieldOfView 2.0f)))) perspectiveMatrix.m11 (float) (1.0f (Math.tan(degreesToRadians(fieldOfView 2.0f)))) perspectiveMatrix.m22 ( zNear zFar) zRange perspectiveMatrix.m23 1 perspectiveMatrix.m32 (2 zFar zNear) zRange Viewmatrix Matrices.viewMatrix new Matrix4f() Matrix4f.setIdentity(Matrices.viewMatrix) Initialize all vectors this.xAxis new Vector3f() this.yAxis new Vector3f() this.zAxis new Vector3f() this.newPos new Vector3f() this.currPos new Vector3f() position.xyz 0, 2, 7 target.xyz 0, 0, 1 up.xyz 0, 1, 0 Set all values to the Camera class values position.negate(this.currPos) target.normalise(this.zAxis) up.normalise(this.yAxis) Vector3f.cross(target, up, xAxis) Vector3f.cross(xAxis, target, yAxis) Recalulate yAxis to make it valid coordinate system. This defines the View Matrix. This is calculated and explained here http ogldev.atspace.co.uk www tutorial13 tutorial13.html m(Column)(Row) Matrices.viewMatrix.m00 xAxis.x Matrices.viewMatrix.m10 xAxis.y Matrices.viewMatrix.m20 xAxis.z Matrices.viewMatrix.m01 yAxis.x Matrices.viewMatrix.m11 yAxis.y Matrices.viewMatrix.m21 yAxis.z Matrices.viewMatrix.m02 zAxis.x Matrices.viewMatrix.m12 zAxis.y Matrices.viewMatrix.m22 zAxis.z Matrices.viewMatrix.m33 1 Matrices.viewMatrix.translate(this.currPos) Modelmatrix Identity |
1 | GLSL Multiple Uniform Structs I'm developing a lighting system for my voxel game, and I have to send multiple (a lot, say up to 200) lights to my shader program. Those lights contain the following data: Position (vec3), Color (vec3), Radius (float), Strength (float). What is the most efficient way to send a lot of those light structs to my shaders? I would like it to work with lower versions of OpenGL, like 2.1. |
1 | How to rotate a direction I'm working a spotlight for my deferred renderer and I'm having trouble with matching the mesh to the visual representation of the light. Right now my mesh is a cone, the apex of the cone is at (0,0,0), it has a height of 1 and a radius of 1. The direction of this cone is (0, 1,0) when the rotation is (0,0,0). The relevant GLSL code float spot alpha dot( l,normalize(vec3(0, 1,0) )) lt float inner alpha cos(light.falloff) float outer alpha cos(light.radius) float spot clamp((spot alpha outer alpha) (inner alpha outer alpha),0.,1.) As you can see, the GLSL code uses a direction to define the area to be lit, so I could get this direction as a uniform, but I would need to find a rotation from that for the mesh to follow, or I can get the rotation and find a direction, but I don't know how to do either of these things. Can I rotate a direction? How do I do it? If I can't, is there another solution for this problem? |
1 | How do multipass shaders work in OpenGL? In Direct3D, multipass shaders are simple to use because you can literally define passes within a program. In OpenGL, it seems a bit more complex because it is possible to give a shader program as many vertex, geometry, and fragment shaders as you want. A popular example of a multipass shader is a toon shader. One pass does the actual cel shading effect and the other creates the outline. If I have two vertex shaders, "cel.vert" and "outline.vert", and two fragment shaders, "cel.frag" and "outline.frag" (similar to the way you do it in HLSL), how can I combine them to create the full toon shader? I don't want you saying that a geometry shader can be used for this because I just want to know the theory behind multipass GLSL shaders ) |
1 | Methods to 'cull check' polys in OpenGL A quick search through the web suggests there's quite a few methods of potentially detecting back culled faces on the CPU. The purpose of the check is to evaluate whether to performing other operations such as tessellations is reasonable by dynamically altering the LOD of visible polys based on the number of polys on the screen. What would your suggestions be for implementing this? Speed isn't necessarily the greatest concern for this prototype mathematical simplicity is more important. Code snippets welcome ) Edit It seems obvious to me to calculate the dot product based on the camera's viewing vector in world space against the surface normal vector but I'm asking this question because I'm intrigued as to why everyone and their (programming genius) dog seems to have an alternative method of handling this. |
1 | How to store renderer vertex index data in scene graph objects? I have a SceneNode class which contains a Mesh instance. The Mesh class stores client side information such as vertex and index arrays (before they're uploaded to the GPU). I also have an abstracted Renderer class, for GL and D3D, which render the SceneNodes. However, I'm not sure where I should store the API specific variables, e.g. GLuint via glGenBuffers for GL, and an ID3D11Buffer for D3D. The few options I've considered are Create a derived Mesh class for each API, e.g. GLMesh D3DMesh Create a derived MeshData class for each API, which is stored in the main Mesh class Store a map of Mesh to API variables in each renderer, e.g. perform a lookup of Mesh to GLuint ID3D11Buffer for each object that is rendered (variables would have to be generated after the scene had been updated, but before rendering). Separate the logic of rendering from scenes, by visiting the SceneGraph after update, and generating a RenderGraph of all renderable nodes in the scene. What's the recommended way of doing this? |
1 | Best strategy on VAO and texture coordinates for voxel rendering? I'm working on a game that has to render a large amount of cubes (voxels) with OpenGL. All cubes have the same geometry (so I can re use the vertex position VBO) and a single sprite sheet texture is used for all of them. This means the texture coordinates will have to vary for the cubes, depending on which sprite to use. All six sides of a cube will use the same sprite (in other words, the same texture coordinates). Now, what is the best strategy to approach this with a modern OpenGL renderer? I came up with two possibilities Create as many VAOs as there are cube sprites in the sprite sheet. Bind the same vertex position VBO to all of them, but a different texture coordinate VBO (according to what sprite part of the texture to show). In my specific case, this would make for about 600 different texture coordinate VBOs and therefore 600 VAOs. Create only one VAO. Set up the texture coordinate VBO in a generic way (e.g. 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0). In the rendering loop, send the actual texture coordinates (only 4 values) as uniform variable to the shaders. In the shader, the generic texture coordinates can be used to figure out which two of the four values from the uniform should be used. Are there other possibilities that I'm missing? What is best regarding performance? |
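A hedged sketch of option 2, which keeps a single VAO: the per-vertex attribute stays a generic 0..1 corner coordinate and a per-draw uniform carries the sprite's rectangle in the atlas. The uniform name and the regular-grid atlas layout below are assumptions.

#include <GL/glew.h>

// Atlas is assumed to be a grid of equally sized sprites, 'tilesPerRow' per row.
void drawCube(GLuint program, GLuint vao, int spriteIndex, int tilesPerRow)
{
    float tile = 1.0f / tilesPerRow;
    float u = (spriteIndex % tilesPerRow) * tile;
    float v = (spriteIndex / tilesPerRow) * tile;   // may need flipping depending on atlas orientation

    // Shader side (sketch): texCoord = uvRect.xy + genericUV * uvRect.zw;
    glUniform4f(glGetUniformLocation(program, "uvRect"), u, v, tile, tile);

    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 36);
}

For very large voxel counts a third option is usually worth considering: bake the texture coordinates into one merged mesh per chunk, so the number of draw calls no longer scales with the number of cubes.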
1 | OpenGL Reconstructing Position from Depth I know this has been asked a lot of times before but none of those answers fixed my problem. I am trying to implement deferred shading and to do so I need to reconstruct the world space position from the depth. (Having a position texture in the gbuffer would be a waste of data) The calculation works fine for rotating the camera ( moving the mouse) but as soon as I move the camera ( pressing wasd) the reconstructed positions get weird. GLSL vec4 screenSpacePosition vec4(pass Texture 2.0 1.0, texture(gbuffer texture 2 , pass Texture).r, 1) vec4 worldSpacePosition invertedViewProjection screenSpacePosition vec3 finalPosition worldSpacePosition.xyz worldSpacePosition.w gbuffer texture 2 is the depth attachment of OpenGL. invertedViewProjection is the matrix I used to render the scene inverted on the CPU.
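The depth sampled from a depth attachment is in [0,1], but the inverse view-projection expects NDC coordinates in [-1,1] on all three axes; the snippet above remaps x and y but feeds the raw depth in as z, which is a frequent cause of this kind of breakage. A hedged sketch of the corrected reconstruction, kept as a GLSL fragment inside a C++ string constant (underscores and array brackets in the names are assumed restorations of the snippet above):

// Assumed to match the uniform/varying names used in the question's shader.
const char* reconstructPositionGLSL = R"(
    float depth = texture(gbuffer_texture[2], pass_Texture).r;
    vec4 ndc = vec4(pass_Texture * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 worldSpacePosition = invertedViewProjection * ndc;
    vec3 finalPosition = worldSpacePosition.xyz / worldSpacePosition.w;
)";

It is also worth double-checking that invertedViewProjection really is inverse(projection * view) and that the matrices are multiplied in the same order as when rendering the scene.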
1 | Why am I having trouble, combining color attachments to implement bloom? I'm trying to implement bloom in the same manner as this tutorial, but I am having difficulties. I have the blur buffer blurring correctly, but I can't seem to combine the two images. Below, you will find my code. The commented out lines just render one or the other buffer to the screen, so I know these are correct. For now, I am just trying to combine them together not fix them with HDR. I just add together both textures, but no matter how I change things around, all I get is the one on the left. blurShader.use() glUniform1i(glGetUniformLocation(blurShader.program, "horizontal"), true) glBindFramebuffer(GL FRAMEBUFFER, pingpongFBO 1 ) long blurCount 5 for(long i 0 i lt blurCount i ) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glActiveTexture(GL TEXTURE0) bind texture of other framebuffer (or scene if first iteration) glBindTexture(GL TEXTURE 2D, pingpongColorbuffers i 2 ) RenderQuad() glBindFramebuffer(GL FRAMEBUFFER, pingpongFBO i 2 ) glUniform1i(glGetUniformLocation(blurShader.program, "horizontal"), false) for(long i 0 i lt blurCount i ) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glActiveTexture(GL TEXTURE0) bind texture of other framebuffer (or scene if first iteration) glBindTexture(GL TEXTURE 2D, pingpongColorbuffers (i 1) 2 ) RenderQuad() glBindFramebuffer(GL FRAMEBUFFER, pingpongFBO (i 1) 2 ) glBindFramebuffer(GL FRAMEBUFFER, 0) renderScreenQuad(colorBuffers 0 ) renderScreenQuad(pingpongColorbuffers 1 ) 2. Now render floating point color buffer to 2D quad and tonemap HDR colors to default framebuffer's (clamped) color range glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) bloomShader.use() glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, colorBuffers 0 ) glActiveTexture(GL TEXTURE1) glBindTexture(GL TEXTURE 2D, pingpongColorbuffers 1 ) glUniform1i(glGetUniformLocation(bloomShader.program, "bloom"), bloom) glUniform1f(glGetUniformLocation(bloomShader.program, "exposure"), exposure) RenderQuad() version 330 core out vec4 FragColor in vec2 TexCoords uniform sampler2D colorTexture uniform sampler2D bloomTexture uniform bool bloom uniform float exposure void main() vec3 firstColor texture(colorTexture, TexCoords).rgb vec3 secondColor texture(bloomTexture, TexCoords).rgb FragColor vec4(firstColor secondColor, 1.0f) version 330 core layout (location 0) in vec3 position layout (location 1) in vec2 texCoords out vec2 TexCoords void main() gl Position vec4(position, 1.0f) TexCoords texCoords Basically, what happens is that the first image bound goes through, and is accessible. Then, the second is the same as the first! If I change the order in which I bind the images, then the other image becomes the one sent in. glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, pingpongColorbuffers 0 ) glActiveTexture(GL TEXTURE1) glBindTexture(GL TEXTURE 2D, pingpongColorbuffers 1 ) glUniform1i(glGetUniformLocation(bloomShader.program, "bloom"), bloom) glUniform1f(glGetUniformLocation(bloomShader.program, "exposure"), exposure) RenderQuad() The above code is the section I'm talking about. Each of the pingpongcolorbuffers is attached to its own FBO. I am trying to send those in. Is there some reason that would limit me to sending in only one? Here is the combined result, before HDR |
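One thing the code above never does is tell the combine shader which texture unit each sampler reads from; with both samplers left at their default value of 0 they sample the same texture, which matches the symptom of always getting whichever image is bound to GL_TEXTURE0. A hedged sketch of the missing calls, using the sampler names from the shader shown above:

#include <GL/glew.h>

void bindBloomInputs(GLuint bloomProgram, GLuint sceneTex, GLuint blurredTex)
{
    glUseProgram(bloomProgram);
    glUniform1i(glGetUniformLocation(bloomProgram, "colorTexture"), 0); // sampler -> unit 0
    glUniform1i(glGetUniformLocation(bloomProgram, "bloomTexture"), 1); // sampler -> unit 1

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, blurredTex);
}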
1 | Can DDS be used in OpenGL in Linux without license patent issue? Is there a 'standard' for OpenGL game creation intended for both Windows and Linux? I understand DDS is the DirectX standard (or, at least, it appears to be). Is there one that does not have potential patent license issue or does it really not matter? I am asking as I would like to avoid focusing on DDS as the format to use, only to have that kick me in the teeth later when some license patent ??? issue requires me to remove and replace DDS with texture format here . This is not about what is better or an opinion post, but what is (relatively) safe from the constraints mentioned earlier? |
1 | My sweep and prune is giving false positives My code is just giving false positives. Every frame it reports an imaginary collision. Here is my code Sweep And Prune public static void updateObjects() List lt BoundingBox gt activeList new LinkedList lt gt () boolean adding true for (BoundingBox box axisList) box.getOverlappingPairs().clear() if(adding) activeList.add(box) adding false else for(BoundingBox e activeList) if(box.collides(e)) box.setOverlappingPairs(e) System.out.println("COLLISION " box " " e) else activeList.remove(box) Edit My AABB boxes are colliding with themselves. I have debugged my classes and have narrowed it down to the sweep and prune class. I would appreciate if someone would help me. |
1 | rotate opengl mesh relative to camera I have a cube in opengl. Its position is determined by multiplying its specific model matrix, the view matrix, and the projection matrix and then passing that to the shader as per this tutorial (http www.opengl tutorial.org beginners tutorials tutorial 3 matrices ). I want to rotate it relative to the camera. The only way I can think of getting the correct axis is by multiplying the inverse of the model matrix (because that's where all the previous rotations and transforms are stored) times the view matrix times the axis of rotation (x or y). I feel like there's got to be a better way to do this like use something other than model, view and projection matrices, or maybe I'm doing something wrong. That's what all the tutorials I've seen use. PS I'm also trying to keep with opengl 4 core stuff. edit If quaternions would fix my problems, could someone point me to a good tutorial example for switching from 4x4 matrices to quaternions? I'm a little daunted by the task.
1 | How would you implement chromatic aberration? How would you implement the effect of chromatic aberration with shaders? Would rendering of the world with different focus distances for each color solve the problem (maybe with the usage of only one depth rendering pass)? |
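A hedged sketch of the usual post-process approach, which avoids rendering the scene multiple times: render once to a texture, then in a full-screen pass sample the red, green and blue channels with slightly different UV offsets that grow toward the edge of the screen. The GLSL is kept in a C++ string constant, and the uniform and varying names are assumptions.

// Fragment shader for a full-screen quad; sceneTex holds the rendered frame,
// uv is the quad's interpolated texture coordinate from the vertex shader.
const char* chromaticAberrationFS = R"(
    #version 330 core
    uniform sampler2D sceneTex;
    uniform float strength;        // e.g. 0.005
    in  vec2 uv;
    out vec4 fragColor;
    void main()
    {
        vec2 fromCenter = uv - vec2(0.5);
        vec2 offset = fromCenter * strength;   // shift grows toward the screen edges
        float r = texture(sceneTex, uv + offset).r;
        float g = texture(sceneTex, uv).g;
        float b = texture(sceneTex, uv - offset).b;
        fragColor = vec4(r, g, b, 1.0);
    }
)";

Rendering the scene several times with per-channel focus would model the physical effect more faithfully, but the single-pass channel offset is what most games use because it costs only a couple of extra texture fetches.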
1 | Shadowmap first phase and shaders I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadowmap. This is my vertex shader in vec3 position uniform mat4 lightWVP void main() gl Position lightWVP vec4(position, 1.0) Now, do I even need a fragment shader in this shader pass? From what I understand after reading http www.opengl.org wiki Fragment Shader, by default gl FragCoord.z is written to the currently attached depth component (to which my cubemap texture is bound). Thus I shouldn't even need a fragment shader for this pass and from what I understand, there is no other work to do in the fragment shader other than writing this value. Is this correct?
1 | How to put triangle information into a glsl shader for dynamic shadows? I am experimenting with how much I can push the GPU with clever optimizations and correct use of hardware, so I know this may lag my computer a lot, that's not the issue. Currently my goal is as follows Define a variable size structure (i.e an array) that contains floats, every 3 floats define a position vector, every 9 define a triangle. I need to create an array of these structures (the array of all shadow casting objects currently loaded) and pass the array to the GPU (the glsl shader). If I follow a couple of conventions when putting my data into this format I can use hashing to get an amortized constant search time for the shadow casting triangles, The issue is to get the data into the format that I need. If this was C the format would be an array of vectors and we're done, but GLSL is not as good at managing memory, does somebody know a way to pass the information in the fashion I described, or am I trying to do something that can't be done? |
1 | Slick2D crashes on Image.getGraphics() on some machines On some machines, creating a new Image and calling getGraphics() on it causes lwjgl to enter into some kind of faulty state that crashes when swapping buffers. Any idea what causes this and how to make Slick clean up after itself properly? Reproduction code package crashrepro import org.newdawn.slick. public class CrashRepro extends BasicGame public static void main(String args) throws Exception CrashRepro cr new CrashRepro("CrashRepro") Crash only happens in fullscreen mode. AppGameContainer agc new AppGameContainer(cr, 800, 600, true) agc.start() public CrashRepro(String title) super(title) Override public void init(GameContainer gc) throws SlickException Override public void update(GameContainer gc, int i) throws SlickException Override public void render(GameContainer gc, Graphics grphcs) throws SlickException Image img new Image(128, 128) img.getGraphics() This crashes the game. Stack trace Java frames (J compiled Java code, j interpreted, Vv VM code) j org.lwjgl.opengl.WindowsContextImplementation.nSwapBuffers(Ljava nio ByteBuffer )V 0 j org.lwjgl.opengl.WindowsContextImplementation.swapBuffers()V 35 j org.lwjgl.opengl.ContextGL.swapBuffers()V 3 j org.lwjgl.opengl.DrawableGL.swapBuffers()V 0 j org.lwjgl.opengl.Display.swapBuffers()V 39 j org.lwjgl.opengl.Display.update(Z)V 44 j org.lwjgl.opengl.Display.update()V 1 j org.newdawn.slick.AppGameContainer.gameLoop()V 78 j org.newdawn.slick.AppGameContainer.start()V 17 j crashrepro.CrashRepro.main( Ljava lang String )V 27 v StubRoutines call stub |
1 | Confused About My Code Suggesting The Normal Matrix Is Equivalent To The ModelView Matrix I'm learning environment mapping in OpenGL by following this page. In his vertex shader, the author calculates the vertex normal in eye space with the following code nEye vec3(viewMatrix modelMatrix vec4(vertexNormal, 0.0)) This works, but I've fooled around and made modifications to the shader code, so that it calculates the vertex normal with this code nEye vec3(normalMatrix vec4(vertexNormal, 0.0)) Where normalMatrix is a uniform 4x4 matrix I calculated outside the shader objectNormalMatrix viewMatrix modelMatrix objectNormalMatrix.invert() objectNormalMatrix.transpose() (I read from this question that the above is how you are supposed to calculate the normal matrix). For either method of calculating the vertex normal I use, I get the same (working) graphical result, and this is what is confusing me. My question is, why is(viewMatrix modelMatrix) apparently equivalent normalMatrix? In calculating the normal matrix, I invert and transpose, but I do none of that when just multiplying by (viewMatrix modelMatrix). Here is some extra info that might be relevant The matrix library I use in my application is column major order (same as OpenGL). When I pass my matrices to GLSL with glUniformMatrix, I have transpose as false. The view matrix and model matrix in GLSL are uniforms from the application, and they are the exact same view matrix and model matrix matrices I use in my calculation of the normal matrix. |
1 | How do I draw a point sprite using OpenGL ES on Android? Edit I'm using the GL enum, which is incorrect since it's not part of OpenGL ES (see my answer). I should have used GL10, GL11 or GL20 instead. Here's a few snippets of what I have so far... void create() renderer new ImmediateModeRenderer() tiles Gdx.graphics.newTexture( Gdx.files.getFileHandle("res tiles2.png", FileType.Internal), TextureFilter.MipMap, TextureFilter.Linear, TextureWrap.ClampToEdge, TextureWrap.ClampToEdge) void render() Gdx.gl.glClear(GL10.GL COLOR BUFFER BIT GL10.GL DEPTH BUFFER BIT) Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1) void renderSprite() int handle tiles.getTextureObjectHandle() Gdx.gl.glBindTexture(GL.GL TEXTURE 2D, handle) Gdx.gl.glEnable(GL.GL POINT SPRITE) Gdx.gl11.glTexEnvi(GL.GL POINT SPRITE, GL.GL COORD REPLACE, GL.GL TRUE) renderer.begin(GL.GL POINTS) renderer.vertex(pos.x, pos.y, pos.z) renderer.end() create() is called once when the program starts, and renderSprites() is called for each sprite (so, pos is unique to each sprite) where the sprites are arranged in a sort of 3D cube. Unfortunately though, this just renders a few white dots... I suppose that the texture isn't being bound which is why I'm getting white dots. Also, when I draw my sprites on anything other than 0 z axis, they do not appear I read that I need to crease my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL. |
1 | Hardware Fragment Sorting? I'm writing a rendering engine in OpenGL. I Want to do order independent transparency. I had heard somewhere that some GPUs have support for actually sorting the fragments of all the objects in the scene based on depth and then drawing them. I then realized that this feature is likely very important to many people. Does OpenGL have a built in fragment sorting algorithm, or access to this hardware? |
1 | Enabling multisampling in Irrlicht? I'm working on a little game that uses Irrlicht. I'm pretty new to Irrlicht and I was wondering how I could enable multisampling. The device driver is EDT OPENGL initiated as such IrrlichtDevice device createDevice(video EDT OPENGL, core dimension2d lt u32 gt (800, 700), 16, NO) How do I enable multisampling? |
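createDevice does not expose an anti-aliasing setting, but Irrlicht's createDeviceEx takes a SIrrlichtCreationParameters struct with an AntiAlias field; to the best of my knowledge this is the supported way to request MSAA, though the exact set of fields can vary between Irrlicht versions. A hedged sketch:

#include <irrlicht.h>
using namespace irr;

IrrlichtDevice* createDeviceWithMSAA()
{
    SIrrlichtCreationParameters params;
    params.DriverType = video::EDT_OPENGL;
    params.WindowSize = core::dimension2d<u32>(800, 700);
    params.Bits       = 16;
    params.Fullscreen = false;
    params.AntiAlias  = 4;   // request 4x MSAA; 0 disables it
    return createDeviceEx(params);
}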
1 | Nothing gets rendered in SceneKit I have this code in OpenGL Vuforia Matrix44F modelViewProjection VuforiaApplicationUtils translatePoseMatrix(0.0f, 0.0f, self.scale, amp Vuforia modelViewMatrix.data 0 ) VuforiaApplicationUtils scalePoseMatrix(self.scale, self.scale, self.scale, amp Vuforia modelViewMatrix.data 0 ) VuforiaApplicationUtils multiplyMatrix( amp projectionMatrix.data 0 , amp Vuforia modelViewMatrix.data 0 , amp modelViewProjection.data 0 ) And the modelViewProjection gets used like this glUniformMatrix4fv(mvpMatrixHandle, 1, GL FALSE, (const GLfloat ) amp modelViewProjection.data 0 ) With mvpMatrixHandle being a piece of code in the shader mvpMatrixHandle glGetUniformLocation(shaderProgramID, "modelViewProjectionMatrix") And in the shader itself gl Position modelViewProjectionMatrix vertexPosition Now I try to convert this to SceneKit code (never used it before, so I might miss a big part of my SceneKit code) Setup let plane SCNNode(mdlObject MDLAsset(url Bundle.main.url(forResource "Bubble", withExtension "obj")!).object(at 0)) scene.rootNode.addChildNode(plane) let camera SCNCamera() plane.camera camera Each frame modelViewMatrix SCNMatrix4Translate(modelViewMatrix, 0, 0, scale) modelViewMatrix SCNMatrix4Scale(modelViewMatrix, scale, scale, scale) let modelViewProjection SCNMatrix4Mult(arrayToMatrix(pose projectionMatrix), modelViewMatrix) camera.projectionTransform modelViewProjection scene.rootNode.childNodes 0 .transform modelViewProjection As you can see, I have also tried to set this as the projectionTransform of the camera, but that did not work either. My result is the most annoying to debug nothing gets rendered using the SceneKit code, while the OpenGL code works. |
1 | OpenGL strange rendering problem when buffers have different sizes I have encountered a very odd error in my program, "odd" in the sense that everything the API says suggests that the error should not occur. I have a bunch of 2D un indexed vertex data, and I want to render it as lines. So far, so good. Then, I wanted to make each vertex have its own (RGB) color, so I generate a color for each vertex. For simplicity, I chose red. Works fine, except now only 2 3 of the points are being rendered! The problem arises from the fact that each vertex's position data consists of only 2 numbers, whereas the color data consists of 3 numbers. So, the "position" buffer has 2 elements per vertex while the "color" one has 3 elements per vertex. I thought that using glVertexAttribPointer to tell this information to OpenGL would be enough, but turns out it's not. In fact, if I say that the color data has only 2 elements per vertex, using glVertexAttribPointer(vertexColorID2,2,GL DOUBLE,GL FALSE,0,(void )0) (as opposed to 3), it renders all the points except now I can only specify two numbers for the RGB color, so I can't get the right color. The full code of the issue is below glUseProgram(programID2) draw the graph graph data graphData() std vector lt double gt graphcolordata(graph data.size() 2 3) for (int i 0 i lt graph data.size() i 3) graphcolordata i 1 glEnableVertexAttribArray(vertexPosition modelspaceID2) glBindBuffer(GL ARRAY BUFFER, graphbuffer) glBufferData(GL ARRAY BUFFER, graph data.size() sizeof(GLdouble), amp graph data 0 , GL STREAM DRAW) glVertexAttribPointer(vertexPosition modelspaceID2,2,GL DOUBLE,GL FALSE,0,(void )0) glEnableVertexAttribArray(vertexColorID2) glBindBuffer(GL ARRAY BUFFER, colorbuffer2) glBufferData(GL ARRAY BUFFER, graphcolordata.size() sizeof(GLdouble), amp graphcolordata 0 , GL STREAM DRAW) glVertexAttribPointer(vertexColorID2,3,GL DOUBLE,GL FALSE,0,(void )0) glDrawArrays(GL LINES, 0, graph data.size() 2) glDisableVertexAttribArray(vertexPosition modelspaceID2) glDisableVertexAttribArray(vertexColorID2) Note programID2 is my basic shader program, and the following variable definitions were previously used GLuint vertexPosition modelspaceID2 glGetAttribLocation(programID2, "vertexPosition modelspace") GLuint vertexColorID2 glGetAttribLocation(programID2, "vertexColor") Edit Incredibly stupid error, figured it out immediately after posting when it had previously stumped me for half an hour. std vector lt double gt graphcolordata(graph data.size() 2 3) for (int i 0 i lt graph data.size() i 3) graphcolordata i 1 should be std vector graphcolordata(graph data.size() 2 3) for (int i 0 i lt graphcolordata.size() i 3) graphcolordata i 1 When this initialization is fixed, it works fine. I would delete this, but I do not see how. |
1 | OpenGL Orthographic projection questions I'm currently studying projections with OpenGL and I have some questions Does the viewer's location matter? I know that moving forward backward makes no difference, but what if I move upward by 30 degrees, for example? Is the viewer located at the origin (0,0,0), or can they be somewhere else after defining glOrtho's properties? Are there only 6 projections? (I'm asking this because of the properties of the box, and defining the glOrtho near, far, left, right, top, bottom) Can I hide objects with an ortho projection? I think it's obvious that the answer is yes, but I'm still curious maybe I'm wrong. Do objects that are closer to me appear bigger, meaning that other objects may be hidden? I believe not the objects don't appear bigger, they stay the same size.
1 | Using a programmable pipeline in a game engine As a learning experience, I'm developing my own 3D game engine using OpenGL. I'm a little confused as to how to implement my rendering engine such that it uses a programmable pipeline while still being responsible for projecting and displaying vertices correctly. Let's say my engine has a simple vertex shader that uses projection and modelview matrices to project a vertex in 3D space. How do I use the engine's vertex shader while still allowing the programmer to provide his her own vertex shader for other calculations in their game? What is the normal approach here? |
1 | How can I make a transparent hole with shaperenderer using stencil masking in libgdx? I'm making a 2d game, where I need a resizeable, moveable rectangle outline. I'm trying to use stencil masking to do it by cutting a hole in a solid rectangle, and I thought this would help How can I add a transparent overlay to a UI in libGDX? But all I get is a solid box. Here's my code Gdx.gl.glClear(GL STENCIL BUFFER BIT) Gdx.gl.glColorMask(false, false, false, false) Gdx.gl.glDepthMask(false) Gdx.gl.glEnable(GL20.GL STENCIL TEST) Gdx.gl.glStencilFunc(GL20.GL ALWAYS, 0x1, 0xffffffff) Gdx.gl.glStencilOp(GL REPLACE, GL REPLACE, GL REPLACE) shapeRenderer.begin(ShapeRenderer.ShapeType.Filled) shapeRenderer.box(cubby.xpos 5 , cubby.ypos 5, 0, cubby.size 10, cubby.size 10, 0) shapeRenderer.end() shapeRenderer.begin(ShapeRenderer.ShapeType.Filled) Gdx.gl.glColorMask(true, true, true, true) Gdx.gl.glDepthMask(true) Gdx.gl.glStencilFunc(GL NOTEQUAL, 0x1, 0xffffffff) Gdx.gl.glStencilOp(GL KEEP, GL KEEP, GL KEEP) shapeRenderer.box(cubby.xpos, cubby.ypos,0,cubby.size, cubby.size, 0) shapeRenderer.end() Gdx.gl.glDisable(GL20.GL STENCIL TEST) |
1 | Rendering only a part of the screen in high detail If graphics are rendered for a large viewing angle (e.g. a very large TV or a VR headset), the viewer can't actually focus on the entire image, just a part of it. (Actually, this is the case for regular sized screens as well.) Combined with a way to track the viewer's eyes (which is mostly viable in VR I guess), you could theoretically exploit this and render the graphics away from the viewer's focus with progressively less details and resolution, gaining performance, without losing perceived quality. Are there any techniques for this available or under development today? |
1 | How to load large arrays to gpu and render with OpenGL? I am trying to make a volumetric rendering of a cloud. I have been defining the cloud density functions on the glsl shaders and performing ray marching methods successfully. But now I would like to render a 3D grid (100x100x100) representing the density of a cloud that I calculated using the cpu. The idea that I was trying was to make use of the storage buffer objects, but when I access the array to get the density value and render it, it doesn't work. This is at the beginning of the glsl fragment version 440 core layout(std430, binding 3) buffer layoutName float data SSBO 100 100 100 And the density function definition is float density(vec3 position, float t) const float dx 1. 100., dy 1. 100., dz 1. 100. int i, j, k if( (position.x gt 0.) amp amp (position.y gt 0.) amp amp (position.z gt 0.) amp amp (position.x lt 1.) amp amp (position.y lt 1.) amp amp (position.z lt 1.)) i int(position.x dx) j int(position.y dy) k int(position.z dz) return data SSBO i 100 100 j 100 k else return 0. And in the c code there are the buffer creation, bindings, etc glGenBuffers(1, amp ssbo) glBindBuffer(GL SHADER STORAGE BUFFER, ssbo) glBufferData(GL SHADER STORAGE BUFFER, 100 100 100 sizeof(float), grid, GL STATIC DRAW) glBufferSubData(GL ARRAY BUFFER, 0, 100 100 100 sizeof(float), grid) and in the rendering function there is glClearColor(1.f, 1.f, 0.f, 1.0f) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glBindBuffer(GL ARRAY BUFFER, VBO) glBindBuffer(GL SHADER STORAGE BUFFER, ssbo) glBindBufferBase(GL SHADER STORAGE BUFFER, 3, ssbo) glBufferSubData(GL ARRAY BUFFER, 0, 3 2 2 sizeof(float), buffer) glVertexAttribPointer(0, 2, GL FLOAT, GL FALSE, 2 sizeof(float), (void )0) coordenadas glEnableVertexAttribArray(0) glUseProgram(shaderProgram) I believe the problem has to do with the binding, I have been trying different combinations, like binding after and before glUseProgram, etc. I literally have no idea what is wrong, I see this is really confusing. |
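For comparison, a hedged sketch of a minimal SSBO round trip with the binding point fixed at 3 to match the layout(binding 3) above. One thing that stands out in the code shown is that glBufferSubData is called on GL_ARRAY_BUFFER while the grid lives in a shader storage buffer; the sketch keeps everything on the storage-buffer target and binds the base point once after the buffer exists.

#include <GL/glew.h>

GLuint createDensitySSBO(const float* grid, int count)          // count = 100*100*100
{
    GLuint ssbo = 0;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, count * sizeof(float), grid, GL_STATIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, ssbo);         // matches "binding 3" in the shader
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
    return ssbo;
}

// Per frame the base binding persists, so only the program, VAO and draw call are needed:
// glUseProgram(shaderProgram); glBindVertexArray(vao); glDrawArrays(...);

Note that with std430 a float array is tightly packed, so the flat i*100*100 + j*100 + k indexing in the density function is fine; with std140 each array element would be padded to 16 bytes and that indexing would break.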
1 | Framerate limited by lack of mouse movement? Using Torque, it appears that the program is running at around 25fps when the mouse is still, but as long as I keep the mouse moving, the framerate can hit well over 300fps. What in the world would cause the framerate to be tied to mouse movement? |
1 | What is the difference between OpenGL ES and OpenGL? Android uses OpenGL ES, what is the difference between it and OpenGL? |
1 | C 11 Efficient Networking ClientServer"Zone(s)" I am building both an engine and a networked game at once because I'm completely insane. The original design plan was to use internal Unix sockets for each "zone" for server lt zone communication, with UDP for server lt client(s) communication. My conundrum is this It seems like there are a variety of ways to handle getting packets from a client to the intended "zone", but none of them seem particularly efficient. Simply move data from a singular queue to the intended zone server, via data objects within code. (Pro Simple Con Potentially slow as molasses in a Canadian winter.) (Somehow) Allow Server and Zones to communicate via Unix sockets, with a networking class wrapper to pass packets into a "pool" for redistribution to Zones. (Pro Unix sockets, from my reading, tend to be fairly quick. Con Need to re evaluate how I am currently handling packets and the redistribution of packets makes it too similar to 1 for my liking.) Use the underlying messages system on Linux. Ref (Pro Crazy fast for large objects. Con No idea how to implement at the moment.) Additionally, if there is a specific text, document, or book(s) that I should be hitting instead of harassing everyone here, PLEASE let me know! Not at all afraid to spend a week tearing through a 200 page tome. Thank You!
1 | OpenGL Blending GUI Textures I'm currently creating a menu for my project and I'm trying to get the textures to blend so I'm only left with the actual Image on the texture and not the background. The problem is the whole texture is somewhat transparent, it's not just removing the background. My RGBA texture looks like And the black background needs to be removed from the image. I was using GL SRC ALPHA, GL ONE MINUS SRC, ALPHA for my blend function but it wasn't blending anything. I changed to GL ONE, GL ONE and now I'm at where I am now. You can see the text is there but its also transparent, but the background has been removed which is good. This is how i'm drawing my button. The world behind it is drawn after (i've tried switching order didnt change anything) and it's drawn using VBO's whereas the Buttons are drawn in immediate mode. glEnable(GL BLEND) glBlendFunc(GL ONE, GL ONE) left get button pos Point3 lt float gt pos button gt getPos() get button dimensions int width button gt getWidth() int height button gt getHeight() bind button texture and draw quad glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, button gt getState() ? button gt getDownTex() gt getTextureID() button gt getUpTex() gt getTextureID()) glBegin(GL QUADS) glTexCoord2i(0, 1) glVertex3f(pos.x, pos.y, 0.0f) glTexCoord2i(1, 1) glVertex3f(pos.x width, pos.y, 0.0f) glTexCoord2i(1, 0) glVertex3f(pos.x width, pos.y height, 0.0f) glTexCoord2i(0, 0) glVertex3f(pos.x, pos.y height, 0.0f) glEnd() I'm using sdl to load the texture SDL Surface image IMG Load(textureName) data.m w image gt w data.m h image gt h data.m bitsPerPixel image gt format gt BitsPerPixel data.m alpha image gt format gt alpha int colourMode image gt format gt BytesPerPixel if (colourMode 4) internalFormat GL RGBA if (colourMode 3) internalFormat GL RGB if (colourMode 1) internalFormat GL LUMINANCE if (SDL BYTEORDER SDL LIL ENDIAN) if (colourMode 4) format GL BGRA else format GL BGR else if (colourMode 4) format GL RGBA else format GL RGB GLuint texture 1 create texture handle glGenTextures(1, amp texture) gen texture bindTexture(GL TEXTURE 2D, texture) glTexParameterf( GL TEXTURE 2D, GL TEXTURE WRAP S, GL REPEAT) glTexParameterf( GL TEXTURE 2D, GL TEXTURE WRAP T, GL REPEAT) glTexParameterf( GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR ) glTexParameterf( GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR ) glTexImage2D(GL TEXTURE 2D, 0, internalFormat, image gt w, image gt h, 0, format, GL UNSIGNED BYTE, image gt pixels) normal texture |
1 | instancing and GPU skinning I'm trying to render a large number of identical rigged entities with independent animation. Compared to standard skeletal animation, I can't compute the pose of each entities and send it to the GPU for each frame. This would demand too much work for the CPU and saturate the VRAM very quickly. Let's say I have 5000 entities with 20 bones, I would have to compute and send 5000 20 (size of matrix quaternion translation dual quaternion) poses to the GPU, each frame. I recently found this article paper https developer.nvidia.com gpugems GPUGems3 gpugems3 ch02.html Their idea is to load animations to the GPU in order to access them independently with each instance. Correct me if I'm wrong but I think this is equivalent to move the interpolation part to the GPU. I pre computed all the bone transformation in local space for each frame for each animation and packed it into texture buffers. It works as expected. However, I experience strange results with my animated models... In fact the interpolation is necessarily wrong ! I am trying to interpolate between poses in local space, not in bone space like with standard CPU interpolation. There is no problem with translations, but with rotations. (the axis is shifted to the local space origin). I imagine I could send the bone space transformations AND the bone hierarchy. But that would impose to parse the hierarchy for each vertex (in vertex shader), in order to re compute the interpolated bone transformations. One other idea I had is to upsample the animations in order to have smaller interpolation intervals, but again, it will use more VRAM. To be honest I'm not sure I understood well the paper (especially in the interpolation part)... Any suggestion ? EDIT 1 It's maybe unclear by "local space" I mean entity space, so it is more like "global space"... EDIT 2 To provide more details about my implementation In the CPU (c ) pre process part, I pre compute all the bone transformations in local space for each frame (natively, I get bone space transformations using Assimp library so I have to do a bit of work here). Then, I send this information as an array to the GPU (array of numFrames numBones numAnims dual quaternions). In the vertex shader, I get information about the current instance using gl instanceID. I get world position, current animation and time in current animation. Then I compute the interpolated pose of the bones influencing the current vertex. I do that by finding the bounding frames using the instance's time. I interpolate my pairs of transformations (pairs of dual quaternions) for each concerned bone. (I know there is a mistake because the interpolation must be performed in bone space). Once they are interpolated, I blend linearly the transformations into one using the bone weights attached to my vertex (and I normalize my final dual quaternion). It worked when i was doing interpolation in bone space in CPU, I'm just trying to move the interpolation part on the GPU. |
1 | How do I implement my old OpenGL based gfx render triangle list using DX11? I am working at a game that has lots of procedural content. I had built a game engine using OpenGL that handles everything needed for creating a basic 2D game, sprites, primitives, blending, polygons etc. At a particular moment I decided to implement target textures to obtain interesting effects and problems started to arise Sometimes the engine did not initialize OpenGL and GLEW properly and could not isolate the exception. Finally I abandoned the idea of using OpenGL because of similar issues (SOMETIMES cannot use newer GLEW functions...NOTE drivers up to date). I decided to use DX11 instead and take benefit from the new technologies of the latest GPUs. So I did. Now I am porting my engine to DX11 and I am quite stuck. I had procedures to be called in a render() function, like engine gt gfx render triangle list(pointer) which in principle just called a procedure like glBegin(GL TRIANGLES) ... glVertex2f(.....) ... glEnd() Now the problem that arises for me is the new pipeline of DX11. Load up vertices in the VertexBuffer, process them by the VertexShader then fill them with pixels and transform with the PixelShader. This is great as I have full access to what is being displayed on the screen, but, how do I do it the 'old fashion' way? I mean, how do I draw primitives like in OpenGL, each frame sending unique, new triangles to the screen. I didn't find any resources that explain this and I cannot think this through myself, loading some vertices in the vertex buffer is designed for meshes that are human made and static and they are only transformed by the shaders. Long story short How do I implement my old gfx render triangle list using DX11? NOTE My engine will be used as a DLL. |
1 | OpenGL light calculation I want to add some basic point lights to my OpenGL application. I read here that I have to calculate the light in a pre projection space Lighting can be done in any pre projection space (e.g., world coordinates or eye coordinates), as long as all the objects and lights are all in that same space. And here is my problem. My Renderer currently looks something like this public void onDrawFrame(GL10 unused) GLES20.glClear(GLES20.GL COLOR BUFFER BIT GLES20.GL DEPTH BUFFER BIT) Matrix.multiplyMM(camProjection, 0, projection, 0, camera, 0) for(int i 0 i lt sceneObjects.size() i ) SceneObject object sceneObjects.get(i) object.draw(camProjection) ... As you can see I multiply my projectionMatrix with the viewMatrix and I think that is a good solution because I only have to do this once. To draw I give the PVMatrix to all objects and multiply it with their modelMatrix and then I push the final MVPMatrix to the shader. This is a problem because I can not calculate the light direction in eye space in the shader because I do not have any pre projection matrix here. Do I really have to push a MVMatrix to the shader and calculate the normals and so on with it? I think this is really inefficient because I would have to do many more matrix multiplications, wouldn't I? What would you suggest to calculate this efficiently?
1 | Can I store D3D9 textures so that they don't consume process memory and also don't require "lost device" handling? I have an OpenGL application that uses a lot of texture memory. While the texture is stored in the system memory, that texture memory is not part of my application process memory. When this application is ported to D3D and the textures are uploaded to the D3DPOOL MANAGED pool, the process memory increases proportional to the texture memory and the application crashes due to running out of memory (while in OpenGL, even though the overall system memory utilization has greatly increased, the process memory is still low) I don't want to have to switch to D3DPOOL DEFAULT, since the I would have to reload all textures when the device is reset. Is there a workable solution to get my D3D9 application to put the system texture memory somewhere else beside my process, similar to OpenGL? EDIT So I just learned about IDirect3DDevice9Ex. Turns out using this interface instead of IDirect3DDevice9 make DirectX work exactly like opengl in terms of texture memory (system memory goes up proportional to my texture usage, but not the process memory). So this is an option for me. However I would still like to know if there are any options since IDirect3DDevice9Ex doesn't work on XP. |
1 | How to map control key to a specific angle or orientation? At the moment, I basically have built out a large hash table of various angles and what direction they map to, if the user would press right, up, down, left. This is needed, in my case, as I have a collection of geometries that when the user presses right, they all go right, and when left, they all go left ... However, when the scene containing these is rotated anything other than its original state, these controls need to be updated reversed etc. That is, if you're looking at the scene from behind, pressing left now has the objects all going right, etc. I was trying to see if there is a simpler way of applying matrix transformations instead of hard coding a large hash table mapping various scene angles to keys. |
1 | Should I make an FPS game on Fixed Function Pipeline or Programmable Pipeline OpenGL? I have an FPS game I have programmed in the Fixed Function Pipeline and one made in the Programmable Pipeline OpenGL. While the programmable pipeline has lots of weird things that you can edit, it does not have the glLoadIdentity that I need for the gun to be attached to the camera. There is little to no information on this subject and most of the information that I can find is about the fixed function pipeline rather than the programmable one. Keep in mind that with the fixed function pipeline, I can just use the glLoadIdentity function and attach it then move on to another thing. On the programmable one, I don't know how to do this, so I have spent a whole week looking up how I can do it. Should I just use the fixed function pipeline one and abandon the programmable pipeline? What shall I do? Thanks!
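A hedged sketch of the programmable-pipeline equivalent of drawing the gun after glLoadIdentity: simply skip the view matrix for the gun (or, equivalently, use a model matrix equal to the inverse of the view matrix), so the mesh is specified directly in camera space and stays attached to the camera. GLM is used here for illustration; the offset values and helper name are assumptions.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// World geometry: MVP = projection * view * model, as usual.
// Gun: no view matrix, so the mesh lives in camera space and follows the camera.
glm::mat4 gunMVP(const glm::mat4& projection)
{
    glm::mat4 gunModel = glm::translate(glm::mat4(1.0f), glm::vec3(0.3f, -0.25f, -0.8f));
    return projection * gunModel;   // identity view == the old glLoadIdentity trick
}

Upload that matrix with glUniformMatrix4fv and draw the gun after the rest of the scene, typically with the depth buffer cleared or depth testing adjusted so the gun never clips into walls.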
1 | What is the best way to store Vertex Buffer Object data? Until now, I have been using a vertexData structure to store data for a Vertex Buffer Object (VBO) vertexData holds a static array of 6 vertices (2 triangles). I then save them to a vector of the vertexData type, before finally using this vertexData in the buffering method struct vertexData Vertex vertices 6 position, color, UVs std vector lt vertexData gt DATA void SpriteBatch createVBO() glBindBuffer(GL ARRAY BUFFER, VBO) glBufferData(GL ARRAY BUFFER, DATA.size() sizeof(vertexData),nullptr,GL DYNAMIC DRAW) glBufferSubData(GL ARRAY BUFFER,0, DATA.size() sizeof(vertexData), DATA.data()) glBindBuffer(GL ARRAY BUFFER,0) This works well, as everything in my vector is contiguous. However, I want to work with other vertex sizes, for different shapes. I tried a vector of vector types, in order to work with dynamically allocated arrays, but it did not work, as it was not contiguous. At first, I thought about using a polymorphic class to be stored as a vector, where every child would have a different array size but then I recognized that this vertex buffer does not work with pointers The second idea was to create a generic SpriteBatch. For example, a batch for 6 vertices, and then a batch for 2 vertices, etc. etc. What is the best way to store Vertex Buffer Object data?
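A hedged sketch of the usual way around the fixed-size array: keep one flat std::vector of vertices for the whole batch and remember, per shape, its first vertex and vertex count. Everything stays contiguous regardless of how many vertices each shape has, and drawing becomes a series of glDrawArrays calls over sub-ranges. The Vertex layout below is an assumption standing in for the one used above.

#include <GL/glew.h>
#include <vector>

struct Vertex { float position[3]; float color[4]; float uv[2]; };  // assumed layout

struct BatchEntry { GLint first; GLsizei count; };   // sub-range of the big vertex array

std::vector<Vertex>     vertices;   // one contiguous block holding every shape
std::vector<BatchEntry> entries;

void addShape(const Vertex* verts, GLsizei count)
{
    entries.push_back({ (GLint)vertices.size(), count });
    vertices.insert(vertices.end(), verts, verts + count);
}

void upload(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                 vertices.data(), GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

void drawAll()   // after binding the VAO / vertex attributes for Vertex
{
    for (const BatchEntry& e : entries)
        glDrawArrays(GL_TRIANGLES, e.first, e.count);
}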
1 | FBO picking in Oculus Rift applications I am writing an Oculus Rift application, I am rendering a very high poly mesh that I wish to be able to perform picking on using the Oculus Touch. Ideally, I want to be able to get the "triangle" id and other information attached to it. Currently, I am using an OBB Tree and ray casting to perform picking on the CPU, It works perfectly, the problem is that even with OBB tree the picking process is slow. I thought I'd perform picking on the GPU by rendering the view (from the point of view of the Oculus Touch) to an FBO using a custom shader that outputs "triangle information" to the buffer and then use glReadPixels to read the central pixel data. The problem I am facing is that the Oculus does distortion to my on screen scene but there is no way to apply it to the off screen buffer, so there is significant difference between the on screen buffer and the off screen buffer. My question is, Is ray casting the only feasible way to do picking in Oculus Apps or is there a way to perform the faster FBO picking even when the view distorted? |
1 | ASSIMP aiNode mMeshes understanding I'm trying to use ASSIMP to load an fbx file with a skeleton and animations, but I'm stuck on something I'm misunderstanding and struggling to see it. I've attached 2 images, one of the fbx file as rendered by Open 3D Mod which uses ASSIMP and looks correct. The second image is my renderer that does not look correct. The only thing wrong with mine is that the main body is rotated 90 degrees from what it should be. For this fbx file, it has 3 nodes that contain a mesh to draw "Mesh 1" is the main body and "Mesh 0" is an axe. The axe is referenced twice, once from each hand node. The main body mesh's vertices are tied to the bones, so Mesh 1 gets animated with the bones. The axe mesh's vertices are not tied to bones. So I'm traversing the skeleton, and when I get to a node that references a mesh to draw, I transform to that node's position, then draw the mesh instance at that spot. This is how the axe meshes get animated the axe mesh vertices are not tied to bones, but since the hand nodes reference them, I transform wherever the hand bone animation currently is then draw them. My problem is when I do this same thing on the main body (Mesh 1), it ends up rotated 90 degrees as seen in the blue picture. The reason is because the node it is attached to has an mTransformation of 90 degrees about the x axis. Another note is that the mTransformation of the RootNode is the identity matrix, so the global inverse is identity. There's too much code involved for me to realistically be able to post it here. However, the relevant logic follows what's shown in this tutorial http ogldev.atspace.co.uk www tutorial38 tutorial38.html The one thing the tutorial doesn't mention is handling the mesh references at each node. If I ignore where the meshes are referenced from, the body works but the axes break (axes don't animate any more and simply draw at their bind position). The model I'm using was a free one I grabbed from the unity store for testing https assetstore.unity.com packages 3d animations zombie 0 1 19254 My questions are Am I correctly interpreting the mesh references at a node by transforming to the node position prior to drawing? If so, why does this get the axes to the correct place but not the main body? If not, what am I missing? Open 3D Mod's render, looking down the Z axis My render, looking down the Z axis |
1 | Drawing multiple Textures as tilemap I am trying to draw a 2d game map and the objects on the map in a single pass. Here is my OpenGL initialization code Turn off unnecessary operations glDisable(GL DEPTH TEST) glDisable(GL LIGHTING) glDisable(GL CULL FACE) glDisable(GL STENCIL TEST) glDisable(GL DITHER) glEnable(GL BLEND) glEnable(GL TEXTURE 2D) activate pointer to vertex amp texture array glEnableClientState(GL VERTEX ARRAY) glEnableClientState(GL TEXTURE COORD ARRAY) My drawing code is being called by a NSTimer every 1 60 s. Here is the drawing code of my world object (void) draw (NSRect)rect withTimedDelta (double)d GLint t glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, textureManager textureByName "blocks" ) glTexEnvi(GL TEXTURE ENV, GL TEXTURE ENV MODE, GL REPLACE) for (int x 0 x lt map getWidth x ) for (int y 0 y lt map getHeight y ) GLint v 16 x ,16 y, 16 x 16,16 y, 16 x 16,16 y 16, 16 x ,16 y 16 t textureManager getBlockWithNumber map getBlockAtX x andY y glVertexPointer(2, GL INT, 0, v) glTexCoordPointer(2, GL INT, 0, t) glDrawArrays(GL QUADS, 0, 4) ( textureManager is a Singelton only loading a texture once!) The object drawing codes is identical (except the nested loops) in terms of OpenGL calls (void) drawWithTimedDelta (double)d GLint t GLint v 16 xpos ,16 ypos, 16 xpos 16,16 ypos, 16 xpos 16,16 ypos 16, 16 xpos ,16 ypos 16 glBindTexture(GL TEXTURE 2D, textureManager textureByName textureName ) t textureManager getBlockWithNumber 12 glVertexPointer(2, GL INT, 0, v) glTexCoordPointer(2, GL INT, 0, t) glDrawArrays(GL QUADS, 0, 4) As soon as my central drawing routine calls the two drawing methods the second call overlays the first one. i would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me, that the first call is performed correctly (world is being drawn), but the following call to all objects ONLY draws the objects, the rest of the scene is getting black. I think i need to blend the drawn textures, but i cant seem to figure out how. Any help is appreciated. Thanks PS Here is the github link to the project. It may not be in sync of my post here, but for some more in depth analysis it may help. |
1 | How can I use OpenGL and D3D to render to the same window at the same time? I have main render loop in which initial drawing is done via OpenGL to an SDL window, and after that the same window handle is passed to a Direct3D device, which does subsequent rendering. Once I execute the program it will initially draw the OpenGL scene, and then do Direct3D drawing but that latter drawing overwrites the OpenGL work. What I want to see is both drawings in parallel. How can I accomplish that? |
1 | Difference between the terms Material Effect I'm making an effect system right now (I think, because it may be a material system... or both!). The effects system follows the common (e.g. COLLADA, DirectX) effect framework abstraction of Effects have Techniques, Techniques have Passes, Passes have States amp Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could be relate to quality (e.g. high precision, high LOD, etc.), or in game situation (e.g. night day, power up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, amp passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, amp settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, amp selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material effect system". Is the material not an instance of an effect? Ergo, if I had effect objects, stored in a collection, amp each effect instance object with there own parameter setting, then there is no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations as I want to be clear on a distinction (if any), amp don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework amp COLLADA definitions closely. |
1 | What's the equivalent of wglShareLists for Mac OS? I'm trying to share lists between two contexts on Mac OS but despite my research I couldn't come up with an answer so far. I've found that NSOpenGLContext was able to initialize a context with a shared context but not to set it afterward. What's the equivalent of wglShareLists on Mac OS? |
1 | Painter algorithm vs. 3D rendering with Z buffer when drawing 2D Sprites I'm currently developing a tile base engine. I want it to look like your average old school tilebased RPG like Zelda Link to the past (orthographic projection, squared tile texture, textures overlapping each other, alpha channels, etc.). The twist is that the world is internally 3d meaning that every tile has an elevation and entities are moving in a 3d space. Reason for this is beyond the scope of this question but I'm struggling on comparing the follow two ideas for the rendering. I'm using OpenGL with libGDX in Java but wouldn't mind writing my own code to interface OpenGL. That's also why I didn't tag it. A screenshot of quot Zelda Link to the past quot in case you don't know what it looks like. Painter algorithm Disable Z Buffer and use painter algorithm to draw from back to front in the appropriate order with some overdraw and a lot of texture rebinding. I know a texture atlas can minimize the texture rebinding but the engine is supposed to act more like a sandbox so I don't want to rely too heavily on assumptions like a efficient texture atlas that I might not have all the time. Pro Easy to implement and very straight forward. Con I've no idea how big the impact will be since almost every sprite will cause a texture rebinding. And I don't know how to predict the performance costs of that many texture bind calls. Full blown 3D Rotate the camera 45 degrees up around the x axis with an orthographic projection matrix. Then use the 3d information from the game world to draw the tile textures with billboarding at the real position in the worlds 3d space. Same for entities and such. This way I can gather all sprites and then render them sorted by their texture which means one texture bind per texture and reduced overdraw due to usage of the Z buffer Pro I can use a freecam to debug render problems. I assume it's significantly faster that the painter algorithm. Con Some sprites might share the same place which could lead to Z fighting. I'm thinking about how to use multiple layers on a tile. For instance you may have a bridge tile with a railing where the railing is always overlapping an entity on the bridge but the floor of it is always overlapped by the entity. So I would have to add a little offset to different layers in the 3d space. I'm not really sure how that would work out and to be honest had a hard time figuring out how old games did it so maybe I'm overthinking it. I guess I have a very rough idea of how each method will work out but I don't know how much the texture rebinding (or sorting and preparing the sprites for the 3d approach) is affecting the performance and while it's easy to find information regarding large 3D scenes I found it difficult to get information about 2D scenes with way less polygons.I hope this question is not too much a subject of personal preference. I know that Premature optimization is the root of all evil but due to how the decision is affecting almost the entire rendering process I don't want to make the wrong call. I deliberately didn't mention any OpenGL functions mainly because I've mostly used the libGDX API which hides the function calls behind it's wrappers but generally I know how the internals work and just got into the internals and how the OpenGL API works. I don't think specific OpenGL calls do matter for the question but feel free to use them for your answer I'll just look them up. |
1 | Skinning on simple model doesn't work as expected I have created a simple application to make skinning work with OpenGL. so I want to share with my vertex shader version 330 core layout (location 0) in vec3 RealPos layout (location 1) in vec3 vertex color layout (location 2) in vec2 vertex textcoord layout (location 3) in vec3 RealNor layout (location 4) in vec4 Joint layout (location 5) in vec4 Weight uniform mat4 u jointMatrix 2 out vec2 vs text out vec3 vs color out vec3 normal out vec3 FragPos uniform mat4 model uniform mat4 view uniform mat4 projection uniform bool isSkin void main() mat4 skinMatrix Weight.x u jointMatrix int(Joint.x) Weight.y u jointMatrix int(Joint.y) Weight.z u jointMatrix int(Joint.z) Weight.w u jointMatrix int(Joint.w) vs text vertex textcoord vs color vertex color FragPos vec3(model vec4(RealPos, 1.0f)) normal mat3(transpose(inverse(model))) RealNor if(isSkin) gl Position projection view model skinMatrix vec4(RealPos, 1.0f) else gl Position projection view model vec4(RealPos, 1.0f) so as you can see, if my model is skinned then it will call the first statement. and here my C code unsigned j 0 for (unsigned u 0 u lt m.numOfJoints u ) glm mat4 g glm mat4(inverseBindMatriceHandler.at(j), inverseBindMatriceHandler.at(j 1), inverseBindMatriceHandler.at(j 2), inverseBindMatriceHandler.at(j 3),inverseBindMatriceHandler.at(j 4), inverseBindMatriceHandler.at(j 5), inverseBindMatriceHandler.at(j 6), inverseBindMatriceHandler.at(j 7), inverseBindMatriceHandler.at(j 8), inverseBindMatriceHandler.at(j 9), inverseBindMatriceHandler.at(j 10), inverseBindMatriceHandler.at(j 11), inverseBindMatriceHandler.at(j 12), inverseBindMatriceHandler.at(j 13), inverseBindMatriceHandler.at(j 14), inverseBindMatriceHandler.at(j 15)) m.inverseBindMatrice.push back(g) glm mat4 gTransform glm inverse(g) m.globalTransform.push back(gTransform) j 16 if (Model.isSkinned) for (unsigned l 0 l lt Model.numOfJoints l ) jointM l Model.globalTransform.at(l) Model.inverseBindMatrice.at(l) int isSkinLoc glGetUniformLocation(program, quot isSkin quot ) glUniform1i(isSkinLoc, 1) for (unsigned l 0 l lt Model.numOfJoints l ) 2 std string name quot u jointMatrix quot std to string(l) quot quot int jointMatrixLoc glGetUniformLocation(program, name.c str()) glUniformMatrix4fv(jointMatrixLoc, 1, GL FALSE, glm value ptr(jointM l )) else int isSkinLoc glGetUniformLocation(program, quot isSkin quot ) glUniform1i(isSkinLoc, 0) inverseBindMatriceHandler is a vector. I have 2 joints (numOfJoints). I can't understand why my model doesn't work properly. can someone help me? Thanks |
1 | glOrtho setting view I am duplicating this thread from stackoverflow, please remove it if that is not allowed. I'm completely new to OpenGL. I have this problem I have quite a complicated scene, and I am looking at it from the front (default camera position). The way I have seen to move the camera is using the gluLookAt() function to set the point you want to look at, and glTranslatef() to move the camera position. I need to move the camera with different data my data is not the point I want to look at rather, I have the data of the projection plane determined with a viewing vector and a point in the plane. Is there a way to set the camera using this data rather than a "look at" point? I am using an orthographic projection (glOrtho()), so everything is projected onto the projection plane. UPDATE To be more precise the point I have is a point in the plane onto which I want to project. I have in fact not only the plane, but the square to which I need to project, defined. So I have a scene somewhere in space, and a square somewhere away from that scene. I want to project the scene onto that square and show that projection on screen. I hope that made it a bit clearer what I need to achieve ... gluLookAt defines a point in the scene to look at, and the position of the eye. But while the eye is one point in space and that projection offers perspective, with an orthographic projection you do not have one point, you have a whole square onto which to project, so how can I define that? To make it easier to understand what I am trying to achieve Here is the image of what glOrtho normally does Here is an image of what I am trying to achieve
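A hedged sketch of building the camera from exactly the data described: a point p on the projection plane and the plane's viewing vector n. Place the eye at p, look along n, and pick any up vector that is not parallel to n; with glOrtho the eye's distance along n does not change the size of the projection, only what falls between near and far. halfSize is assumed to be half the side length of the square being projected onto (it maps to the glOrtho left/right/bottom/top extents).

#include <GL/glu.h>
#include <cmath>

// p = point on the projection plane, n = the plane's viewing vector (need not be normalized)
void setCameraFromPlane(const float p[3], const float n[3], float halfSize)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-halfSize, halfSize, -halfSize, halfSize, -1000.0, 1000.0);

    // any up vector not parallel to n will do
    float up[3] = { 0.0f, 1.0f, 0.0f };
    if (std::fabs(n[1]) > 0.99f * std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]))
    {
        up[0] = 1.0f; up[1] = 0.0f;
    }

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(p[0], p[1], p[2],                       // eye: the point on the plane
              p[0] + n[0], p[1] + n[1], p[2] + n[2],  // look along the plane's viewing vector
              up[0], up[1], up[2]);
}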
1 | GL GENERATE MIPMAP vs. GL MAX TEXTURE SIZE If we generate mipmaps for a texture using GL GENERATE MIPMAP (or glGenerateMipmap), how big can the original texture be? Is it the size returned by GL MAX TEXTURE SIZE, or half of it? |