Draw the same object multiple times, translated and rotated

I want to draw lots of spheres in different locations and orientations with OpenGL 4 and JOGL. As the vertices and colours are the same for all of them, I have just one array for vertices and another for colours. For the positions and orientations, I have another big matrix holding the data for all spheres. Drawing one sphere with glDrawArrays is not a problem, but for several, I have read that I should use glDrawArraysInstanced instead. My problem is that I am a bit confused about how to apply each transformation to my particles. How should I introduce this array into the shader? Should I send the model matrix after doing the transformations on the CPU, or should I send the positions and orientations and transform them inside the shader? How do I connect the data to the shader? What should the shader look like?
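A common approach for the question above is to build one model matrix per sphere on the CPU and stream the array of matrices to the GPU (as a per-instance attribute split over four vec4 slots, or in a uniform/storage block indexed by gl_InstanceID). A minimal, hypothetical sketch of the CPU side, assuming a rotation about Z only and the column-major layout OpenGL expects:

```cpp
#include <array>
#include <cmath>

// Build a column-major 4x4 model matrix (OpenGL convention) from a
// position and a rotation angle about the Z axis. One such matrix per
// sphere can be streamed into an instanced attribute or a uniform block.
std::array<float, 16> modelMatrix(float x, float y, float z, float angleRad) {
    float c = std::cos(angleRad), s = std::sin(angleRad);
    return {
        c,   s,   0.f, 0.f,  // column 0
        -s,  c,   0.f, 0.f,  // column 1
        0.f, 0.f, 1.f, 0.f,  // column 2
        x,   y,   z,   1.f   // column 3: translation
    };
}
```

Whether to build matrices on the CPU or reconstruct them in the vertex shader from position + orientation is a bandwidth trade-off; for a few thousand instances either works.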
Updating texture memory via shader?

What the title says: is it possible to update a texture via a GLSL shader? Something like: read a pixel with texture2D(TextureID, gl_TexCoord[TextureIndex].st), modify it (e.g. combine it with vec4(0.0, 0.0, 0.0, AlphaPatch) to get a NewPixel), and then write NewPixel back to texture memory? How do I write back to the texture?
OpenGL: Drawing multiple meshes at once using VBOs and IBOs

I have been learning OpenGL 2.1, but using shaders, VBOs, IBOs, etc. I have a rendering engine that can load and draw meshes, materials, forward lighting (no shadows yet), SceneNodes, and NodeComponents. There are no optimizations yet (for obvious reasons) such as occlusion culling; only face culling at the moment. A (simplified) mesh class looks like this:

struct Mesh {
    GLuint vbo;
    GLuint ibo;
    GLuint size;

    void draw(Program* shaderProgram) const {
        glEnableVertexAttribArray(shaderProgram->attrib("vertPos"));
        glEnableVertexAttribArray(shaderProgram->attrib("vertTexCoord"));
        ...
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(shaderProgram->attrib("vertPos"), 3, GL_FLOAT, false, sizeof(Vertex), (const GLvoid*)(0 * sizeof(float)));
        ...
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glDrawElements(GL_TRIANGLES, size, GL_UNSIGNED_INT, 0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glDisableVertexAttribArray(shaderProgram->attrib("vertPos"));
        ...
    }
};

When I draw multiple objects with the same mesh (or multiple meshes), I notice a frame drop compared to a single mesh with the same number of triangles. The Stanford models were used (1 face = 1 triangle):

Model       Num Models  Total Faces   With Lighting (ms/frame)  Without Lighting (ms/frame)
(control)   0           0             0.47                      0.41
bunny       1           69630         1.2                       0.64
bunny       14          974820        12.4                      3.50
buddha      1           1087474       7.87                      2.89
buddha      10          10874740      71.4                      23.8

It does look like frame time is O(n). However, drawing one VBO with 12% more faces, the frame time is decreased by 37%. So my questions are: Is drawing multiple VBOs actually worse than drawing one big one? How would I combine these VBOs and IBOs into one big one if this is true?
Separate shader programs or branch in shader?

I have a bunch of point lights and directional lights. Instead of checking the light type in the fragment shader and then branching to either the point-light calculation or the directional-light calculation, is it more efficient to use two separate programs, one for point lights and one for directional lights? (Using deferred shading in OpenGL 3.3.)
Managing many draw calls for dynamic objects

We are developing a cross-platform game using Irrlicht. The game has many (around 200–500) dynamic objects flying around during play. Most of these objects are static meshes, built from 20–50 unique meshes. We created separate scene nodes for each object, each referring to its mesh instance, but the output was very much unexpected. Menu screen (150 tris, just to show the full-speed rendering performance of the two test computers): a) NVIDIA Quadro FX 3800 with 1GB: 1600 FPS in DirectX and 2600 FPS in OpenGL; b) Mac Mini with GeForce 9400M 256MB: 260 FPS in OpenGL. Now inside the game, in a test level (160 dynamic objects, around 10K tris): a) NVIDIA Quadro FX 3800 with 1GB: 45 FPS in DirectX and 50 FPS in OpenGL; b) Mac Mini with GeForce 9400M 256MB: 45 FPS in OpenGL. Obviously we don't have the option of mesh batch rendering, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048 PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we understood it's the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?
What's the best way of drawing a glowing 3D line using LWJGL?

Sort of like a strip-light effect: not actually a light source, but just a polygon with glowing edges. Can this be done easily? Right now I'm contemplating drawing the line more than once with varying alpha/width values to achieve this, but I think that'll cause weird glowing points at the vertices, which is far from ideal.
OpenGL Framebuffer Object issue

I am struggling with implementing a proper Framebuffer Object. glCheckFramebufferStatus is returning GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT (36054). What am I missing?

GLuint fbo, rboColor, rboDepth;
// Color renderbuffer.
glGenRenderbuffers(1, &rboColor);
glBindRenderbuffer(GL_RENDERBUFFER, rboColor);
// Set storage for currently bound renderbuffer.
glRenderbufferStorage(GL_RENDERBUFFER, GL_BGRA, w, h);
// Depth renderbuffer
glGenRenderbuffers(1, &rboDepth);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);
// Framebuffer
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rboColor);
// Set renderbuffers for currently bound framebuffer
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboDepth);
// Set to write to the framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Tell glReadPixels where to read from.
glReadBuffer(GL_COLOR_ATTACHMENT0);
// glCheckFramebufferStatus checks the completeness status of a framebuffer
GLenum e = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if (e != GL_FRAMEBUFFER_COMPLETE)
    printf("\nThere is a problem with the FBO.");
Modern OpenGL sprite drawing not rendering

I'm trying to render sprites based on this tutorial, in LWJGL. Here's what I have so far, with descriptions of the classes I use. Transformable2D is a class that holds transform matrices meant for 2D rendering, and Drawable is just an interface that has a draw() function. Texture is a container for loading and creating textures, and its bind function sets the active texture to its ID and binds itself to the passed unit parameter. SpriteTechnique is a container for a shader program, and I will paste the shaders here too, although they are the same as in the tutorial. I make the shader and VAO static, because every Sprite uses the same VAO and shader. Finally, I pass a View to the draw() function, which holds the necessary info to make an orthogonal matrix based off the transforms of the object. This matrix is correct, as I also use it for text rendering, where it works as expected. All of the classes described above work completely; I am 100% sure the problem is not within the backing classes.

Here are the GL settings I set in my backend, too:

glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);

So, onto the actual class:

public class Sprite extends Transformable2D implements Drawable {
    private Texture texture;
    private Vector3f color;
    private static SpriteTechnique tech = new SpriteTechnique();
    private static boolean isRendererInitialized;
    private static int vao;

    public Sprite(Texture texture) {
        this.texture = texture;
        color = new Vector3f();
    }

    public static boolean initRenderer() {
        if (isRendererInitialized) return true;
        if (!tech.init()) return false;
        float[] vertices = {
            // Pos      Tex
            0.0f, 1.0f, 0.0f, 1.0f,
            1.0f, 0.0f, 1.0f, 0.0f,
            0.0f, 0.0f, 0.0f, 0.0f,
            0.0f, 1.0f, 0.0f, 1.0f,
            1.0f, 1.0f, 1.0f, 1.0f,
            1.0f, 0.0f, 1.0f, 0.0f
        };
        FloatBuffer vertexBuf = BufferUtils.createFloatBuffer(vertices.length);
        vertexBuf.put(vertices);
        vertexBuf.flip();
        vao = glGenVertexArrays();
        glBindVertexArray(vao);
        int vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexBuf, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, false, 16, 0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);
        isRendererInitialized = true;
        return true;
    }

    @Override
    public void draw(View v, RenderStates states) {
        if (!isRendererInitialized) {
            System.err.println("Sprite renderer must be initialized before drawing sprites!");
            return;
        }
        tech.enable();
        tech.setTextureUnit(COLOR_TEXTURE_UNIT_INDEX);
        tech.setColor(color);
        tech.setProjection(getOrthoTrans(v).getMatrix());
        texture.bind(COLOR_TEXTURE_UNIT);
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        glBindVertexArray(0);
    }
}

Here are my shaders. Sprite.vs:

#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec2 TexCoords;
uniform mat4 gProjection;
void main() {
    TexCoords = vertex.zw;
    gl_Position = gProjection * vec4(vertex.xy, 0.0, 1.0);
}

Sprite.fs:

#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D gSpriteTexture;
uniform vec3 gSpriteColor;
void main() {
    color = vec4(gSpriteColor, 1.0) * texture(gSpriteTexture, TexCoords);
}

When I try to render a Sprite, however, nothing appears on the screen. Not even a black quad. I've tried to scale the quad, but to no avail. What am I doing wrong here?
Is a glDraw call asynchronous or not?

When I call glDrawArrays with a large set of data, will the function return only after drawing all the vertices, or will it happen asynchronously between CPU and GPU? Also, how does vsync work? Aren't they related?
GLSL shader compiles, but source is empty

I'm trying to compile a GLSL shader, for which I use the following code.

// Initialization
SDL_Window* boringInitStuff() {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    Uint32 windowFlags = SDL_WINDOW_OPENGL;
    SDL_Window* sdlWindow = SDL_CreateWindow("Boooring", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 400, 400, windowFlags);
    SDL_GL_CreateContext(sdlWindow);
    glewExperimental = GL_TRUE;
    glewInit();
    return sdlWindow;
}

// File parser
void readFile(std::string path, std::string& data) {
    std::ifstream f(path.c_str(), std::ios::binary);
    data.assign((std::istreambuf_iterator<char>(f)), (std::istreambuf_iterator<char>()));
}

// Main
int main(int argv, char** args) {
    SDL_Window* sdlWindow = boringInitStuff();
    std::string buffer;
    readFile("./compile_test.vert", buffer);
    const char* cBuffer = buffer.c_str();
    GLuint shaderID = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shaderID, 1, &cBuffer, nullptr);
    glCompileShader(shaderID);
    while (!SDL_QuitRequested()) {
        SDL_GL_SwapWindow(sdlWindow);
    }
    return 0;
}

But when I try to inspect the source code in gDEBugger, the source code is gone. Linking of course doesn't work as well. The weird thing is that the compilation error checking works. EDIT: When I copy & paste the main part into another OpenGL project, it works.
Is making an acceptable 3D engine possible using only SDL functions?

I've been watching Javidx9's 3D engine series and I decided to start making it with SDL, because it was something I had heard of and it seemed simple enough. I'm not going to implement overly complex graphical features anyway. So I start writing code, and I'm drawing my triangles with SDL_DrawLine(), and I'm filling my triangles with SDL_DrawLine, and I go to load up the Utah teapot and the frames are stuttering quite a bit. I'm pretty sure I already know the answer to this question, but I want to be sure: is it unrealistic to try to create a 3D engine without using raw OpenGL functions? It would make sense that the frame rate is so bad, because SDL has to work through something like OpenGL, so I'm just calling an abstracted DrawLine function that makes the computer do extra work and slows it down. By the way, I do know that SDL has OpenGL support, but I (naively?) figured it would just be easier to go without it.
Loading models with OpenGL

I'm developing a video game using OpenGL as the graphics API and the C programming language, and I'm creating all models with Blender. One question I have is how you deal with models (vertices): once you have created a model and want to load it with OpenGL, do you hard-code the data, or use a software module to load it at runtime?
OpenGL: Objects won't draw at certain angles while using lighting (shaders)

I was following the Lighthouse3D point-light-per-pixel GLSL tutorial (lighthouse3d.com), but when the camera is at a certain angle, if I have alpha test and blend enabled, the cubes I have in the scene won't draw; otherwise (if alpha test and blend are disabled) they are black. Any thoughts on what's going on? Here is some relevant code:

glClearColor(0.6, 0.6, 0.6, 1);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(80, (float)window.getSize().x / (float)window.getSize().y, 0.01f, 1000.0);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.1f);
glEnable(GL_COLOR_MATERIAL);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_AUTO_NORMAL);
glEnable(GL_NORMALIZE);
.....

void Lighting(Camera cam, sf::Vector3f Pos) {
    GLfloat amb[] = {0.2, 0.2, 0.2, 1};
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, amb);
    GLfloat diff[] = {1, 1, 1, 1};
    GLfloat pos[] = {Pos.x, Pos.y, Pos.z, 1};
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diff);
    glLightfv(GL_LIGHT0, GL_POSITION, pos);
}
....

glLoadIdentity();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
cam.up(delta, window);
Lighting(cam, Pos);
sf::Shader::bind(&shader);
cube.draw();
cube1.draw();
window.display();
sf::Shader::bind(NULL);

// draw function for cubes
glPushMatrix();
glTranslatef(pos.x, pos.y, pos.z);
glBegin(GL_QUADS);
glColor3f(color.r / 255, color.g / 255, color.b / 255);
glVertex3f(0, 0, 0); glVertex3f(width, 0, 0); glVertex3f(width, height, 0); glVertex3f(0, height, 0);
glVertex3f(width, 0, 0); glVertex3f(width, 0, length); glVertex3f(width, height, length); glVertex3f(width, height, 0);
glVertex3f(0, 0, length); glVertex3f(0, 0, 0); glVertex3f(0, height, 0); glVertex3f(0, height, length);
glVertex3f(width, 0, length); glVertex3f(0, 0, length); glVertex3f(0, height, length); glVertex3f(width, height, length);
glVertex3f(width, 0, 0); glVertex3f(0, 0, 0); glVertex3f(0, 0, length); glVertex3f(width, 0, length);
glVertex3f(0, height, 0); glVertex3f(width, height, 0); glVertex3f(width, height, length); glVertex3f(0, height, length);
glEnd();
glPopMatrix();
OpenGL additive vertex color mode

Surely OpenGL has a way to set the vertex color mode? By default it is multiplication. When I have an existing texture on a quad representing the player in my game, I want to add color to it. As of now, doing so only multiplies the color with the texels on the quad. So if I have a color that represents bright light, say (128, 128, 128), what I want is additive behavior: take the color in the texel and add the channels together. I have not found anything to do this automatically in OpenGL without using shaders, or without a second draw step on the sprite with a different blend mode to simulate it.
GLSL sampler2D fallback to constant color?

So I have the following situation: I'm sharing a Blinn shader across many meshes. Some meshes have specular & normal maps, others do not. I'd like to, without making the shader code too complicated, be able to specify a constant color instead of a texture for the normal or specular maps, for example when a given mesh doesn't need one of those maps. The way I imagine it, I would just pass a flat "grey" as the specular map, for instance, and the shader could act as if a texture was passed in. Is this possible? Ideally, I don't want an extra uniform for each mesh specifying whether or not the texture should be used. Another alternative would be to actually create a grey texture on the fly; if this is the better way, please advise on the simplest way to do this.
Tangent on a generated sphere

I have difficulties understanding the tangent/bitangent concept for normal mapping, or rather the calculation of them. I draw a sphere which is generated with the code in the OpenGL red book: http://www.glprogramming.com/red/chapter05.html. I use cube mapping to texture the sphere. Now, pretty much everything on the web links to http://www.terathon.com/code/tangent.html, but in that algorithm they use the texture coords u and v, which I don't have like this because my texture mapping is 3D... Any pointers to how I can compute the tangents and bitangents with this setup? Also, aren't the tangents just vectors orthogonal to the normal of a point on the sphere? To me it seems there could be an easier way. Edit: As far as I understand, I need those values to be able to "apply" the normal from the normal map to the point on the sphere (I'm not sure what the correct mathematical terminologies are here); in other words, to deviate the normals of any point on the sphere based on the normal map. I calculated the normals as point on surface minus center of sphere (which, in model space, when the center is (0,0,0), is just the point on the surface).
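Since the sphere's normal is just the normalised position, one way to sidestep the UV-based algorithm is to derive a tangent frame directly from the normal: cross it with a fixed up axis to get a tangent along the lines of longitude, then cross again for the bitangent. This is a convention choice, not the only valid frame (it degenerates at the poles, where any perpendicular vector will do); a sketch:

```cpp
#include <array>
#include <cmath>

using vec3 = std::array<float, 3>;

vec3 cross(vec3 a, vec3 b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

vec3 normalize(vec3 v) {
    float l = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return { v[0]/l, v[1]/l, v[2]/l };
}

// Tangent frame for a unit normal on an origin-centred sphere, built
// without UVs: tangent follows increasing longitude around the Y axis.
void sphereTangentFrame(vec3 n, vec3& tangent, vec3& bitangent) {
    vec3 up = {0.f, 1.f, 0.f};
    if (std::fabs(n[1]) > 0.999f) up = {1.f, 0.f, 0.f}; // near a pole
    tangent   = normalize(cross(up, n));
    bitangent = cross(n, tangent);
}
```

The caveat is that the frame must be consistent with however the normal map's texels were authored; for cube-mapped normals an object-space normal map avoids tangents entirely.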
Texture artifacts depending on texture size

I get some strange artifacting with textures depending on their size. I run OpenGL 3.3 with a GTX 580, so it should definitely support non-power-of-two textures. I've narrowed down the problem specifically to the texture's size; I've checked whether transparency, color channels, etc. had anything to do with it. As you can see, the 512x512 and 300x300 textures look just fine, but the 437x437 one is all distorted. What could be the cause of this, and how can it be fixed? I could of course just stick to power-of-two textures, which seem to work fine, but since this is a personal educational project I really want to understand what's going on.
Animation of a circle moving along a square's outer surface

I want to make this animation in OpenGL; I attached a simple GIF of how I want it to look. The main problem is that I cannot figure out how to move around the corners: the circle should smoothly travel around a corner, not just jump to the next side's start position.

import Blender
from Blender import Draw, BGL
from Blender.BGL import *
from math import sin, cos
import time

squareLength, squareX, squareY, circleRadius = 200, 200, 200, 20
movingX, movingY = 0, 0

def event(evt, val):
    if evt == Draw.ESCKEY:
        Draw.Exit()

def gui():
    global movingX, movingY
    glClearColor(0.17, 0.24, 0.31, 1.0)
    glClear(BGL.GL_COLOR_BUFFER_BIT)
    glLineWidth(1)
    glColor3f(0.74, 0.76, 0.78)
    glBegin(GL_QUADS)
    glVertex2i(squareX, squareY)
    glVertex2i(squareX, squareY + squareLength)
    glVertex2i(squareX + squareLength, squareY + squareLength)
    glVertex2i(squareX + squareLength, squareY)
    glEnd()
    glColor3f(0.58, 0.65, 0.65)
    glPushMatrix()
    # Movement logic: steps along each side, then jumps at the corners
    if movingX == -circleRadius:
        if movingY < squareLength:
            movingY += 1
        else:
            movingX = 0
            movingY = squareLength + circleRadius
    elif movingY == squareLength + circleRadius:
        if movingX < squareLength:
            movingX += 1
        else:
            movingX = squareLength + circleRadius
            movingY = squareLength
    elif movingY < 0:
        if movingX > 0:
            movingX -= 1
        else:
            movingX = -circleRadius
            movingY = 0
    else:
        if movingY > 0:
            movingY -= 1
        else:
            movingX = squareLength
            movingY = -circleRadius
    glTranslatef(movingX, movingY, 0)
    glBegin(GL_LINE_LOOP)
    for i in xrange(0, 360, 1):
        glVertex2f(squareX + sin(i) * circleRadius, squareY + cos(i) * circleRadius)
    glEnd()
    glPopMatrix()
    Draw.Redraw(1)

Draw.Register(gui, event, None)

Can you please tell me how I can change this code to make the circle move smoothly around the corners? I have just started learning computer graphics and think up such exercises for training, so I will really appreciate any help.
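One way to get the smooth corner motion asked about is to parametrize the whole path by arc length: four straight edges plus four quarter-circle arcs of radius r at the corners, so the centre never jumps. A standalone sketch of that parametrization (plain C++ rather than the Blender API; the square has side L with its lower-left corner at the origin, and names are illustrative):

```cpp
#include <cmath>

struct P { float x, y; };

// Position of the circle's centre after travelling distance s along the
// outside of the square. The centre stays r away from each side; at each
// corner it sweeps a quarter arc of radius r instead of jumping.
P roundedSquarePath(float s, float L, float r) {
    const float pi = 3.14159265358979f;
    float edge = L, arc = pi * r / 2.0f;
    float total = 4 * (edge + arc);
    s = std::fmod(s, total);
    // Bottom edge, left to right, centre at y = -r
    if (s < edge) return { s, -r };
    s -= edge;
    if (s < arc) { float a = s / r; return { L + r * std::sin(a), -r * std::cos(a) }; }
    s -= arc;
    // Right edge, bottom to top
    if (s < edge) return { L + r, s };
    s -= edge;
    if (s < arc) { float a = s / r; return { L + r * std::cos(a), L + r * std::sin(a) }; }
    s -= arc;
    // Top edge, right to left
    if (s < edge) return { L - s, L + r };
    s -= edge;
    if (s < arc) { float a = s / r; return { -r * std::sin(a), L + r * std::cos(a) }; }
    s -= arc;
    // Left edge, top to bottom
    if (s < edge) return { -r, L - s };
    s -= edge;
    float a = s / r;
    return { -r * std::cos(a), -r * std::sin(a) };
}
```

Advancing s by a constant amount per frame gives constant speed everywhere, including through the corner arcs; the per-side branching in the original code disappears entirely.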
Where to store OpenGL object ids?

When working with OpenGL, you often receive integer ids to keep track of OpenGL objects. For example, representing a simple mesh, you may have a number of references to objects like so:

GLuint arrayBuffer;
GLuint elementArrayBuffer;
GLuint texture;

Having many meshes, the ids add up quickly. Therefore my problem is: where is it most logical and efficient to store them?

Alternative 1: My first thought was to hand out the ids to the corresponding game entities, something along the lines of:

class Entity {
public:
    GLuint arrayBuffer;
    GLuint elementArrayBuffer;
    GLuint texture;
    // Other methods...
};

Then having a render class:

// Create the buffers and textures and pass ids to entity.
renderer.init(entity);
// Render with corresponding buffer/texture ids stored in entity
renderer.render(projection, view, entity);

where it is easy enough, in the implementation of the render method, to just pick the ids and render with them. A disadvantage is that this is a leaky abstraction: OpenGL concepts "leak" into game entities, which does not feel quite right. An advantage might be that the method is supposedly efficient; you do not have to look up ids, you know them for each rendering pass.

Alternative 2: Another alternative would be to store the ids internally in the renderer, for example in an internal data struct:

struct RenderData {
    GLuint arrayBuffer;
    GLuint elementArrayBuffer;
    GLuint texture;
};

then having an unordered map from each entity to its corresponding RenderData:

std::unordered_map<unsigned int, RenderData> render_data;

Then in the rendering method, the correct render data is fetched from the map by a unique id that is stored in each entity. An advantage of this approach is that it is less "leaky": only the entity id is passed to the renderer, to know what data to use. A disadvantage may be slightly less efficiency, because of lookups in the unordered map.

I would greatly appreciate input on this: which approach is preferable, or are there any other approaches out there?
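As a concrete sketch of alternative 2, the renderer can own a registry keyed by an opaque handle, so entities never see GLuints at all. The GLuint alias below is a stand-in for the GL headers, and the class names are illustrative:

```cpp
#include <unordered_map>

using GLuint = unsigned int; // stand-in; normally comes from the GL headers

struct RenderData {
    GLuint arrayBuffer;
    GLuint elementArrayBuffer;
    GLuint texture;
};

// Renderer-owned registry: entities carry only an opaque handle, and all
// GL object ids stay inside the renderer.
class RenderRegistry {
    std::unordered_map<unsigned, RenderData> data_;
    unsigned next_ = 1;
public:
    unsigned add(const RenderData& rd) { data_[next_] = rd; return next_++; }
    const RenderData* find(unsigned h) const {
        auto it = data_.find(h);
        return it == data_.end() ? nullptr : &it->second;
    }
};
```

The map lookup is amortized O(1) and typically negligible next to the GL calls themselves; if it ever shows up in profiles, the handle can be turned into a direct index into a vector.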
Why is OpenGL provided with the latest GeForce drivers so extremely slow?

I'm writing code in OpenGL and using two computers, an old one and a new one. On the old computer (which I use for debugging and creating an implementation for the old GL) I have a GeForce 5500 FX. Before the Windows reinstall on it, I had NVIDIA drivers with OpenGL 1.5, and everything worked perfectly: games ran at quite good FPS, even with dozens or hundreds of sprites. However, after the XP reinstall, I installed the drivers provided with my video card, but it seems there's no OpenGL at all (a mysterious 'GDI Generic' renderer). So I tried to download the newest NVIDIA drivers. After their installation I saw there's OpenGL 2.3, but when I ran my app it worked with extremely low FPS (about 0.25, I mean I had to wait four seconds for any app reaction) with only one sprite. As I loaded smaller images, the FPS relatively increased; but on the new computer the speed is always the same (no matter how big and how many sprites), so it must be a performance problem. Without any sprites it seemed to work normally, but a game without graphics doesn't make any sense. That's why I downloaded the oldest drivers for the NVIDIA GeForce 5500 FX I could find on the NVIDIA site, but they still provide OpenGL 2.0 (2.1 exactly), and the problem is the same. I also tried to change all the OpenGL properties to increase performance, but that didn't help either. So what can I do? I'd like to find drivers with OpenGL 1.5, but it seems they're not on the NVIDIA site, and most people say it's best to download them from there.
How to render a texture partly transparent?

Good morning StackOverflow, I'm having a bit of a problem right now, as I can't seem to find a way to render part of a texture transparently with OpenGL. Here is my setting: I have a quad, representing a wall, covered with this texture (converted to PNG for uploading purposes). Obviously, I want the wall to be opaque, except for the panes of glass. There is another plane behind the wall which is supposed to show a landscape, and I want to see the landscape through the window. Each texture is a TGA with an alpha channel. The "landscape" is rendered first, then the wall. I thought this would be sufficient to achieve the effect, but apparently it's not the case: the part of the window that is supposed to be transparent is black, and the landscape only appears when I move past the wall. I tried to fiddle with glBlendFunc() after having enabled blending, but it doesn't seem to do the trick. Am I forgetting an important step? Thank you :)
Storing and using UV and normal data with OpenGL

I'm having trouble mapping UV coordinates to vertices in OpenGL. I'm binding my buffers and enabling all of my attrib arrays perfectly well, and things are rendering, but what I'm stuck on is how to map UV and normal data. You can't just store UV and normal data with the vertex data, because obviously you'll be re-using vertices, but not necessarily with the same normal or UV. For clarity, this is the general flow of what I'm doing at the moment: I bind my shader program. I bind my vertex array buffer with GL.BindBuffer. I bind my element buffer with GL.BindBuffer. I use GL.Uniform (1, 2, 3 or 4) to bind values to my uniform variables. I enable all of my vertex buffer attributes with GL.EnableVertexAttribArray and then encode their layout using GL.VertexAttribPointer. I draw using GL.DrawElements. I disable all of my vertex attrib arrays again. I'm totally stuck on how to encode UV and normal data in my vertex attributes. Do I need to use a second buffer? If so, how does OpenGL know to use a different set of indices to find them? I'm totally lost, and any help would be greatly appreciated.
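The standard answer to the re-used-vertices problem: OpenGL's element buffer uses a single index per vertex, so each unique (position, UV, normal) combination must become its own vertex, deduplicated while building the buffers. This is exactly what OBJ loaders do when flattening `v/vt/vn` face references. A sketch of that index unification, with illustrative container types (each triple here stands for indices into separate position/UV/normal source arrays):

```cpp
#include <map>
#include <tuple>
#include <vector>

// Unique (position, uv, normal) index triples plus a single index stream
// into them, ready to expand into one interleaved vertex buffer.
struct MeshIndices {
    std::vector<std::tuple<int, int, int>> vertices; // unique triples
    std::vector<unsigned> indices;                   // into `vertices`
};

MeshIndices unify(const std::vector<std::tuple<int, int, int>>& faceRefs) {
    MeshIndices out;
    std::map<std::tuple<int, int, int>, unsigned> seen;
    for (const auto& ref : faceRefs) {
        auto it = seen.find(ref);
        if (it == seen.end()) {
            // First time this combination appears: emit a new vertex.
            it = seen.emplace(ref, (unsigned)out.vertices.size()).first;
            out.vertices.push_back(ref);
        }
        out.indices.push_back(it->second);
    }
    return out;
}
```

So no second index set is needed: a corner shared by faces with different normals simply becomes two vertices that share a position.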
How to use continuous collision detection in a dynamic AABB tree

I am currently writing a game in C++ using OpenGL, and I am using a kinetic sweep-and-prune algorithm for the broad phase, then GJK raycast and GJK & EPA for the narrow phase. However, I have realized that kinetic sweep and prune may not be the best choice, because there are many objects in the scene that are just static and cause a lot of swapping when an object moves. So basically I would like to implement a dynamic AABB tree for the broad phase, knowing there will be only a few objects requiring continuous collision detection (but these objects are essential). However, for these fast-moving objects, what is the best way to detect a possible collision with other objects in the broad phase? I am thinking about using an AABB that contains the object's trajectory from one frame to the next; is this a good idea, or will this cause a lot of overhead due to many false positives? Also, I haven't read too much about dynamic AABB trees, but I think I understand the idea: for each object that moved, check if its AABB overlaps the tree node; if it does, do the same check with the node's children, and continue until we reach the leaves of the tree. All help is greatly appreciated.
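For the fast movers, the trajectory-enclosing AABB mentioned above is the usual choice: take the union of the object's AABB at the start and end of the frame and query the tree with that. It is conservative (false positives grow with long diagonal moves), which the narrow-phase GJK raycast then filters out. A sketch:

```cpp
#include <algorithm>

struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

// Conservative broad-phase volume for a fast-moving object: the union of
// its AABB at the start (a) and end (b) of the frame encloses the whole
// motion, at the cost of false positives for long diagonal moves.
AABB sweptAABB(const AABB& a, const AABB& b) {
    return { std::min(a.minX, b.minX), std::min(a.minY, b.minY), std::min(a.minZ, b.minZ),
             std::max(a.maxX, b.maxX), std::max(a.maxY, b.maxY), std::max(a.maxZ, b.maxZ) };
}
```

Since only a few objects need CCD, the extra false positives are usually cheap; an alternative for very long motions is a segment/ray query against the tree instead of a box query.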
Pixel-perfect clickable picture in OpenGL/C++

So I have a picture (for easier understanding of the problem, something like this: http://www.lib.utexas.edu/maps/europe/europe_95.jpg). My goal is to click on any of the countries and get which country I clicked. The picture is full of colors, not just, say, a simple purple for Germany. Also, if I hover above a country I could do things like highlight it, write some text in a bar, etc. Solutions that popped into my mind: draw every slice into a mesh and ray-trace the click (but making lots of meshes takes a lot of time); or draw a single quad with the texture, make a pixel map, get the clicked coordinates, and look them up in the table (but I would have to make a map for every resolution). Is there any better way of doing this? Or are there any algorithms for it?
OpenGL: Updating a VAO array buffer range

I am trying to stream data to a buffer which I have bound to a VAO vertex buffer binding point, which I want to access through vertex attributes in the shader. Usually I stream to buffers which I have bound as GL_UNIFORM_BUFFERs, and I do it like this:

// create an immutable data storage
glNamedBufferStorage(uboID, capacity, data, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
// map entire storage forever
mappedPtr = glMapNamedBufferRange(uboID, 0, capacity, GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
// bind buffer to target (GL_UNIFORM_BUFFER) binding point
glBindBufferBase(GL_UNIFORM_BUFFER, uboBinding, uboID);
// bind shader interface block to target binding point
glUniformBlockBinding(shaderProgramID, blockIndex, uboBinding);

Now the uniform block can be accessed in the shader program, and I can stream to it like so:

// copy data to mapped pointer at stream offset
memcpy(mappedPtr + uploadOffset, &uploadData[0], uploadSize);
// This is the important part: tell the shader which range of the buffer to use
glBindBufferRange(GL_UNIFORM_BUFFER, uboBinding, uboID, uploadOffset, uploadSize);
// increment offset
uploadOffset += uploadSize;

If I bind a buffer to a VAO and access its data as vertex attributes in the shader, I don't know how to tell the shader the range in which the updated data can be found. I can give an offset when binding a buffer to a VAO with glVertexArrayVertexBuffer, but it has a few more parameters and does not seem like the right thing to call every frame; I have not gotten it to work that way either. Is there something like glBindBufferRange for vertex buffers bound to VAOs?
Keeping the ratio the same across devices in a fixed-screen game

My game is an Android game using OpenGL ES 2.0 (but this question could apply to any platform). I have read many questions on here regarding ratio management, and also read many tutorials outside of this site, but I'm still really confused as to how to manage this. My game is a fixed-screen 2D platformer. By fixed screen, I mean the player sees the whole screen at once and the screen doesn't scroll; all action takes place on this one screen (kind of like Bubble Bobble). Therefore scrolling is not possible, as we need to see the whole play area. On my development device, I've written everything to look perfect. What I've currently done is, when the game is run on other devices, I resize my GL viewport so that I maintain the ratio. Obviously this has its own problem, namely that it wastes screen real estate. Now, I would accept this reluctantly if I couldn't find a better solution; however, Google's documentation states that it's not allowed (of sorts): see "App uses the whole screen in both orientations and does not letterbox to account for orientation changes." So, finally, I just stretched it out to fit the screen. This takes the whole screen, but frankly looks a little naff, as everything is stretched. Am I out of options? I see some similar games on the Play Store (i.e., fixed screen) and they seem to look identical on different screens (nothing is stretched, and there doesn't appear to be any extra space), but I have absolutely no idea how they achieve this. I would love to hear from someone who has dealt with this problem themselves or has any ideas on how best to proceed.
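For reference, the letterboxing variant described above boils down to fitting the largest rectangle of the target aspect ratio inside the real surface; the common alternative that avoids both bars and stretching is to keep the play area at that fixed rectangle and fill the leftover space with decorative background art instead of black. A sketch of the fitting math (struct name illustrative):

```cpp
struct Viewport { int x, y, w, h; };

// Largest viewport with a fixed target aspect ratio, centred in the
// window (letterbox/pillarbox). screenW/screenH is the real surface size.
Viewport fitViewport(int screenW, int screenH, float targetAspect) {
    int w = screenW;
    int h = static_cast<int>(w / targetAspect + 0.5f);
    if (h > screenH) {            // too tall: clamp height, shrink width
        h = screenH;
        w = static_cast<int>(h * targetAspect + 0.5f);
    }
    return { (screenW - w) / 2, (screenH - h) / 2, w, h };
}
```

The same rectangle can be fed to glViewport either way; in the "extended background" approach the play area uses this rectangle while the full-window viewport draws the filler behind it.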
libgdx, TiledMap, and tearing

My TiledMap tears when the window is at specific sizes, is resized, or the camera moves. The tileset is padded by 2px, and I have tried as high as 10px. Each tile is 70x70. Here are the pack.json settings:

paddingX: 2, paddingY: 2, bleed: true, edgePadding: true, duplicatePadding: true, maxWidth: 4096, maxHeight: 4096, filterMin: Nearest, filterMag: Nearest, ignoreBlankImages: true, wrapX: ClampToEdge, wrapY: ClampToEdge, grid: true, fast: false

Here is the code:

Parameters params = new Parameters();
params.textureMinFilter = TextureFilter.Nearest;
params.textureMagFilter = TextureFilter.Nearest;
TiledMap map = new TmxMapLoader().load(file.getAbsolutePath(), params);

So what am I doing wrong? If nothing, does that mean libgdx absolutely cannot handle tile maps of any sort?
1 | How do I calculate specular contribution in physically based rendering? I'm trying to implement physically based rendering in a small game engine built for academic and learning purposes. I cannot understand the right way to calculate the specular and diffuse contribution based on the material's metallic and roughness values. We don't use any third party libraries or engines for rendering; everything is hand written in OpenGL 3.3. Right now, I use this:

// Calculate contribution based on metallicity
vec3 diffuseColor = baseColor - baseColor * metallic;
vec3 specularColor = mix(vec3(0.00), baseColor, metallic);

I'm under the impression that the specular has to be dependent on the roughness, somehow. I was thinking to change it to this:

vec3 specularColor = mix(vec3(0.00), baseColor, roughness);

Again, I'm not sure. What is the right way to do it? Is there even a right way, or should I just use the 'trial and error' method until I get a satisfying result? Here is the full GLSL code:

// Calculates specular intensity according to the Cook-Torrance model
float CalcCookTorSpec(vec3 normal, vec3 lightDir, vec3 viewDir, float roughness, float F0)
{
    // Calculate intermediary values
    vec3 halfVector = normalize(lightDir + viewDir);
    float NdotL = max(dot(normal, lightDir), 0.0);
    float NdotH = max(dot(normal, halfVector), 0.0);
    // Note: the following line could also be NdotL, which is the same value
    float NdotV = max(dot(normal, viewDir), 0.0);
    float VdotH = max(dot(viewDir, halfVector), 0.0);

    float specular = 0.0;
    if (NdotL > 0.0)
    {
        float G = GeometricalAttenuation(NdotH, NdotV, VdotH, NdotL);
        float D = BeckmannDistribution(roughness, NdotH);
        float F = Fresnel(F0, VdotH);
        specular = (D * F * G) / (NdotV * NdotL * 4);
    }
    return specular;
}

vec3 CalcLight(vec3 lightColor, vec3 normal, vec3 lightDir, vec3 viewDir, Material material, float shadowFactor)
{
    // Helper variables
    vec3 baseColor = material.diffuse;
    vec3 specColor = material.specular;
    vec3 emissive = material.emissive;
    float roughness = material.roughness;
    float fresnel = material.fresnel;
    float metallic = material.metallic;

    // Calculate contribution based on metallicity
    vec3 diffuseColor = baseColor - baseColor * metallic;
    vec3 specularColor = mix(vec3(0.00), baseColor, metallic);

    // Lambertian reflectance
    float Kd = DiffuseLambert(normal, lightDir);

    // Specular shading (Cook-Torrance model)
    float Ks = CalcCookTorSpec(normal, lightDir, viewDir, roughness, fresnel);

    // Combine results
    vec3 diffuse = diffuseColor * Kd;
    vec3 specular = specularColor * Ks;
    vec3 result = lightColor * (emissive + diffuse + specular);

    return result * (1.0 - shadowFactor);
}
1 | What does "GL_CLAMP_TO_EDGE should be used in NPOT textures" mean? I have two sRGB PNG images I am using for textures. One is 64x64, and works fine. The other is 64x47, and when I attempt to use it I get an error: reason: 'GL_CLAMP_TO_EDGE should be used in NPOT textures'. What does this mean, and how do I address it?
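For context: OpenGL ES 2.0 only guarantees support for NPOT (non-power-of-two) textures when no mipmapped minification filter is used and both wrap modes are GL_CLAMP_TO_EDGE, which is why a 64x47 image trips this while 64x64 does not. A quick way to see which sizes are affected (hypothetical helper, plain Python):

```python
def is_power_of_two(n):
    """Classic bit trick: powers of two have exactly one bit set."""
    return n > 0 and (n & (n - 1)) == 0

def needs_clamp_to_edge(width, height):
    """True if GLES2 requires CLAMP_TO_EDGE (and no mipmaps) for this size."""
    return not (is_power_of_two(width) and is_power_of_two(height))
```

So for the 64x47 texture the fix would be setting GL_TEXTURE_WRAP_S/T to GL_CLAMP_TO_EDGE (and a non-mipmap min filter), or padding the image up to 64x64.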
1 | OpenGL gradient banding on Samsung Galaxy S2 Android phone I've got a live wallpaper out on the market which uses OpenGL to render some basic shapes and a flat plane. The simple lighting creates a gradient effect across the plane, which looks fine on most devices. The Samsung Galaxy S2 series seems to have some trouble rendering the gradient, though, as you can see in this screen shot The color banding looks awful, especially compared to this screen shot from an Incredible I'm using a 565 EGL config in both cases, so I believe this is just a display issue with the GS2 devices. Can anyone confirm this suspicion? Is there any solution to the banding? |
1 | Advanced copying between textures in OpenGL I have many RGBA textures in my project. I often copy a part of one texture into another texture using copyTexSubImage2D . I am probably able to copy just a specific channel by setting mask (colorMask). I would like to multiply (and store) the alpha of one texture by an alpha of another texture. Is it possible to do it somehow without creating and calling shaders? I am working with WebGL 1.0 (OpenGL ES 2.0). |
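One fixed-function avenue worth noting (an assumption to explore, not something stated in the question): blending itself can multiply the destination by an incoming alpha, since the blend equation is dst' = src*srcFactor + dst*dstFactor. With blend factors GL_ZERO for source and GL_SRC_ALPHA for destination, the existing pixel is scaled by the drawn fragment's alpha. A tiny simulation of that equation:

```python
def blend(src, dst, src_factor, dst_factor):
    """Simulate the per-channel blend equation: out = src*sf + dst*df."""
    return tuple(s * sf + d * df
                 for s, d, sf, df in zip(src, dst, src_factor, dst_factor))

def multiply_dst_by_src_alpha(src_rgba, dst_rgba):
    """Emulates glBlendFunc(GL_ZERO, GL_SRC_ALPHA): dst scaled by src alpha."""
    a = src_rgba[3]
    return blend(src_rgba, dst_rgba, (0, 0, 0, 0), (a, a, a, a))
```

Whether this is usable here depends on being able to render one texture onto the other as a quad, which does avoid a custom shader but not a draw call.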
1 | OpenGL model not rendering properly I have been playing with OpenGL for a while and I have hit a wall that I don't know how to get past. I am trying to render a model of an object based on a .obj file. In that file I have position coordinates, UV coordinates, and indices of positions and UV coordinates (faces). I am trying to render the model like so: get all the positions from the file; get all the UV coordinates from the file; get all the faces; generate an array of vertices with all the positions and UV coordinates in the order defined by the indices; index the vertices 0, 1, 2, ...; draw the indexed vertices. I got blocked when I tried just to render the model without the texture. I am shown a monstrosity instead of what I am trying to achieve. When I draw the model the other way (get all the vertices and index them in the order they should be drawn) everything is fine, but that way I cannot texture the model the way I wanted. I am adding the code below.

Reading from the file:

std::vector<float> verts;          // container for vertices
std::vector<unsigned int> inds;    // container for indexes of vertices
std::vector<unsigned int> texinds; // container for indexes of textures
std::vector<float> texs;           // container for textures

bool LoadFromFile(const std::string &path)
{
    std::ifstream f(path);
    if (!f.is_open())
        return false;

    while (!f.eof())
    {
        char line[128];
        f.getline(line, 128);

        std::strstream s;
        s << line;

        char junk;
        char junk1;
        char junk2;
        char junk3;

        if ((line[0] == 'v') && (line[1] == 't'))
        {
            float Textu[2];
            s >> junk >> junk1 >> Textu[0] >> Textu[1]; // ignoring the first 2 characters (vt) before data
            texs.push_back(Textu[0]);
            texs.push_back(Textu[1]);
        }

        if (line[0] == 'f')
        {
            unsigned int Index[6];
            s >> junk >> Index[0] >> junk1 >> Index[1] >> Index[2] >> junk2 >> Index[3] >> Index[4] >> junk3 >> Index[5]; // ignoring f and everything between indexes
            inds.push_back(Index[0] - 1);
            texinds.push_back(Index[1] - 1);
            inds.push_back(Index[2] - 1);
            texinds.push_back(Index[3] - 1);
            inds.push_back(Index[4] - 1);
            texinds.push_back(Index[5] - 1);
        }

        if ((line[0] == 'v') && (line[1] == ' '))
        {
            float Vertex[3];
            s >> junk >> Vertex[0] >> Vertex[1] >> Vertex[2];
            verts.push_back(Vertex[0]);
            verts.push_back(Vertex[1]);
            verts.push_back(Vertex[2]);
        }
    }
    return true;
}

Creating the array of vertices and indexing them:

float Vertices[89868];
for (int i = 0; i < inds.size(); i++)
    Vertices[i] = verts[inds[i]]; // creating array with the vertices in order defined in the index vector

unsigned int indices[89868];
for (int i = 0; i < inds.size(); i++)
    indices[i] = i;

I understand I have probably made a stupid mistake somewhere, but I am literally incapable of finding it.
1 | Setting up OpenGL camera with off center perspective I'm using OpenGL ES (on iOS) and am struggling with setting up a viewport with an off-center vanishing point. Consider a game where you have a character on the left hand side of the screen, and some controls alpha'd over the left hand side. The "main" part of the screen is on the right, but you still want to show what's in the view on the left. However when the character moves "forward" you want the character to appear to be going "straight", or "up" on the device, and not heading at an angle toward the point that is geographically at the mid-x position of the screen. Here's the gist of how I set my viewport up where it is centered in the middle:

// setup the camera
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
const GLfloat zNear = 0.1;
const GLfloat zFar = 1000.0;
const GLfloat fieldOfView = 90.0; // can definitely adjust this to see more/less of the scene

GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
CGRect rect;
rect.origin = CGPointMake(0.0, 0.0);
rect.size = CGSizeMake(backingWidth, backingHeight);
glFrustumf(-size, size,
           -size / (rect.size.width / rect.size.height),
            size / (rect.size.width / rect.size.height),
           zNear, zFar);
glMatrixMode(GL_MODELVIEW);

// rotate the whole scene by the tilt to face down on the dude
const float tilt = 0.3f;
const float yscale = 0.8f;
const float zscale = 4.0f;
glTranslatef(0.0, yscale, zscale);
const int rotationMinDegree = 0;
const int rotationMaxDegree = 180;
glRotatef(tilt * (rotationMaxDegree - rotationMinDegree) / 2, 1.0f, 0.0f, 0.0f);
glTranslatef(0, -yscale, -zscale);

static float b = 25;
static float c = 0;

// rotate by 'a' to face in the direction of the dude
float a = RADIANS_TO_DEGREES(atan2f(gCamera.orientation.x, gCamera.orientation.z));
glRotatef(a, 0.0, 1.0, 0.0);

// and move to where it is
glTranslatef(-gCamera.pos.x, -gCamera.pos.y, -gCamera.pos.z);

// draw the rest of the scene
...

I've tried a variety of things to make it appear as though "the dude" is off to the right: a translate after the frustum in the x direction; a rotation after the frustum about the up/y axis; moving the camera with a bias to the left of the dude. Nothing I do seems to produce good results: the dude will either look like he's stuck at an angle, or the whole scene will appear tilted. I'm no OpenGL expert, so I'm hoping someone can suggest some ideas or tricks on how to "off center" these model views in OpenGL. Thanks!
1 | When trying to render a texture in OpenGL after including the stb_image.h file in the project, I get a linker error. Initially I gave the path of the stb_image.h & stb_image.c file location in the C/C++ include directories, as adding the header file directly to the project was not working.
1 | Do I need Texture Units when NOT using shaders? Does calling glActiveTexture() even make sense when not using shaders? I only have to switch the textures before drawing a buffer with glBindTexture(), right? |
1 | using heightmap to simulate 3d in an isometric 2d game I saw a video of a 2.5D engine that used heightmaps to do z-buffering. Is this hard to do? I have more or less no idea of OpenGL (LWJGL) and that stuff. I could imagine that you compare each pixel and its depth map value to the depth map of the already drawn background to determine if it gets drawn or not. Are there any tutorials on how to do this? Is this a common problem? It would already be awesome if somebody knows the names of the OpenGL commands, so that I can go through some general tutorials on them. Greets! Great 2.5d engine with the needed effect, pls go to the last 30 seconds. Edit, just realised that my question wasn't quite clearly expressed: how can I tell OpenGL to compare the existing depth buffer with a grayscale texture, to determine if a pixel should get drawn or not?
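In principle the technique comes down to writing a per-pixel depth taken from the grayscale map (in a fragment shader this would be assigning the sampled value to gl_FragDepth) and then letting the ordinary GL_LESS depth test accept or reject later fragments. A software model of that comparison, purely to illustrate the idea (everything here is illustrative Python, not real GL calls):

```python
def depth_test(depth_buffer, x, y, fragment_depth):
    """Emulate GL_LESS depth testing: draw only if the new fragment is
    closer than what the depth buffer already holds at (x, y)."""
    if fragment_depth < depth_buffer[y][x]:
        depth_buffer[y][x] = fragment_depth   # depth write
        return True                           # fragment passes and is drawn
    return False                              # fragment hidden by earlier draw
```

Drawing the background first while filling the depth buffer from its heightmap, then drawing sprites with their own depths, gives exactly the "walk behind tall things" behaviour without any per-pixel sorting on the CPU.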
1 | Taking fixed direction on hemisphere and project to normal (openGL) I am trying to perform sampling using a hemisphere around a surface normal. I want to experiment with fixed directions (and maybe jitter slightly between frames). So I have those directions:

vec3 sampleDirections[6] = vec3[](
    vec3(0.0f, 1.0f, 0.0f),
    vec3(0.0f, 0.5f, 0.866025f),
    vec3(0.823639f, 0.5f, 0.267617f),
    vec3(0.509037f, 0.5f, -0.700629f),
    vec3(-0.509037f, 0.5f, -0.700629f),
    vec3(-0.823639f, 0.5f, 0.267617f)
);

Now I want the first direction to be projected onto the normal and the others accordingly. I tried these 2 codes, both failing. This is what I used for random sampling (it doesn't seem to work well, the samples seem to be biased towards a certain direction), and I just used one of the fixed directions instead of s (here is the code of the random sample; when I used it with the fixed direction I didn't use theta and phi):

vec3 CosWeightedRandomHemisphereDirection(vec3 n, float rand1, float rand2)
{
    float theta = acos(sqrt(1.0f - rand1));
    float phi = 6.283185f * rand2;

    vec3 s = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));

    vec3 v = normalize(cross(n, vec3(0.0072, 1.0, 0.0034)));
    vec3 u = cross(v, n);
    u = s.x * u;
    v = s.y * v;
    vec3 w = s.z * n;

    vec3 direction = u + v + w;
    return normalize(direction);
}

EDIT: This is the new code:

vec3 FixedHemisphereDirection(vec3 n, vec3 sampleDir)
{
    vec3 x;
    vec3 z;

    if (abs(n.x) < abs(n.y))
    {
        if (abs(n.x) < abs(n.z))
            x = vec3(1.0f, 0.0f, 0.0f);
        else
            x = vec3(0.0f, 0.0f, 1.0f);
    }
    else
    {
        if (abs(n.y) < abs(n.z))
            x = vec3(0.0f, 1.0f, 0.0f);
        else
            x = vec3(0.0f, 0.0f, 1.0f);
    }

    z = normalize(cross(x, n));
    x = cross(n, z);

    mat3 M = mat3(x.x, n.x, z.x,
                  x.y, n.y, z.y,
                  x.z, n.z, z.z);

    return M * sampleDir;
}

So if my n = (0,0,1) and my sampleDir = (0,1,0), shouldn't M * sampleDir be (0,0,1)? Because that is what I was expecting.
1 | Slick2d font rendering makes all other drawings vanish I'm converting a Java game to LWJGL and Slick utils. I've followed the Slick2D examples for font loading/rendering, but I found that the font shows as a solid box of color unless I add the following glBlendFunc code:

GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
headFont.drawString((float)(x + 20), (float)(y + 80), "Title Here", org.newdawn.slick.Color.green);
GL11.glDisable(GL11.GL_BLEND);

I've tried leaving this code as is, and I've tried leaving the blend function on, but any other drawing methods I have vanish when that blendFunc line is left in. For example, here's a basic rectangle drawing method. The rect drawn here only shows when I leave out the blend func:

GL11.glEnable(GL11.GL_BLEND);
GL11.glColor4f(r / 255f, g / 255f, b / 255f, a / 255f);
GL11.glRecti(x1, y1, x2, y2);
GL11.glEnd();
GL11.glDisable(GL11.GL_BLEND);

Is there a different blend func I need to constantly switch to? Update: setting the blendFunc back to default values for the drawRect method I have works, but I'm not sure yet if it's the right solution:

GL11.glBlendFunc(GL11.GL_ONE, GL11.GL_ZERO);
1 | Artifacts when drawing particles with some alpha I want to draw some particles in my game. But when I draw one particle above another particle, the alpha channel of the one above "clears" the previously drawn particle. I set up blending in OpenGL this way:

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

My fragment shader for particles is very simple:

precision highp float;
precision highp int;

uniform sampler2D u_maps[2];
varying vec2 v_texture;
uniform float opaque;
uniform vec3 colorize;

void main()
{
    vec4 texColor = texture2D(u_maps[0], v_texture);
    gl_FragColor.rgb = texColor.rgb * colorize.rgb;
    gl_FragColor.a = texColor.a * opaque;
}

I attach a screenshot of this. Do you know what I did wrong? I use OpenGL ES 2.0.
1 | Should all primitives be GL_TRIANGLES in order to create large, unified batches? Optimizing modern OpenGL relies on aggressive batching, which is done by calls like glMultiDrawElementsIndirect. Although glMultiDrawElementsIndirect can render a large number of different meshes, it makes the assumption that all these meshes are made of the same primitive type (e.g. GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_POINTS). In order to batch rendering most efficiently, is it wise to force everything to be GL_TRIANGLES (while ignoring possible optimizations with GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN) in order to make it possible to group more meshes together? This thought comes from reading the Approaching Zero Driver Overhead slides, which suggest drawing everything (or, presumably, as much as possible) in a single glMultiDrawElementsIndirect call.
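For reference, each record consumed by glMultiDrawElementsIndirect is five consecutive GLuints (count, instanceCount, firstIndex, baseVertex, baseInstance), so batching different meshes under one primitive type means packing one such struct per mesh into the indirect buffer. A sketch of that packing (Python's struct module; the mesh numbers are made up):

```python
import struct

def draw_elements_indirect_cmd(count, instance_count, first_index,
                               base_vertex, base_instance):
    """Pack one DrawElementsIndirectCommand: 5 tightly packed GLuints."""
    return struct.pack("<5I", count, instance_count, first_index,
                       base_vertex, base_instance)

# one command per mesh, concatenated into the indirect draw buffer
buf = b"".join([
    draw_elements_indirect_cmd(36, 1, 0, 0, 0),      # mesh A: 36 indices
    draw_elements_indirect_cmd(4608, 10, 36, 24, 1), # mesh B: 10 instances
])
```

Because every command shares the single primitive mode passed to the draw call, a strip-based mesh can only join this buffer after being converted to plain triangles (or kept in a separate batch).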
1 | OpenGL SDL2 How to resize the render area with the window? After calling SDL_SetWindowSize, the area being rendered to doesn't change with the window, so if the window gets bigger, it leaves a black area on the top and right sides. I am adjusting the OpenGL viewport to the new window dimensions. Everything I'm drawing scales correctly but is cut off when the window is bigger. I've been searching for the solution all day and all I've found is creating a new OpenGL context, but that causes crashes unless I reload all my graphics data, which seems ridiculous just to resize the window. Is it the default framebuffer that needs to be resized? According to the OpenGL wiki, "All default framebuffer images are automatically resized to the size of the output window, as it is resized." So if the default framebuffer is being resized, why is it only rendering to the same small area? I could initialize it to the largest possible resolution and then shrink it, but wouldn't that cause OpenGL to process a bunch of fragments outside the window?
1 | Having trouble running OpenGL on MSVC I'm using the OpenGL Programming Guide, 8th Edition with MSVC 2013, but I can't get the triangles.cpp file to run. These are the errors popping up: http://puu.sh/jAokn/c07420cf46.png
1 | How can the glass breaking effect from Smash Hit be achieved? I saw Smash Hit the other day and was amazed by the physics of the game, especially the shattered glass effect. I've read other posts about this subject, but I still feel that they don't share enough details to let me get started on implementing this on my own with OpenGL/GLSL. Is it possible for somebody with enhanced perception and graphics understanding to watch the gameplay and give some pointers on how this effect could be replicated? I'd rather not use a 3rd party physics engine and would like to do the entire thing on my own for educational purposes, so could you mention some of the physics that goes on behind this as well? References to other documents and demos are highly appreciated.
1 | Pixelation shader explanation? I was looking for a pixelation shader for my postprocessing and came across this shader snippet. Works pretty well! There are not a whole lot of explanations on how it works, except for "Pixelation is the process where a pixel at x, y is duplicated into an x + dx, y + dy rectangle". I re-wrote/formatted the shader a bit:

out vec4 FinalColor;
in vec2 FragUV;
uniform sampler2D Texture;

void main()
{
    float Pixels = 512.0;
    float dx = 15.0 * (1.0 / Pixels);
    float dy = 10.0 * (1.0 / Pixels);
    vec2 Coord = vec2(dx * floor(FragUV.x / dx),
                      dy * floor(FragUV.y / dy));
    FinalColor = texture(Texture, Coord);
}

My understanding: messing with 'Pixels' I found that the higher this number, the more pixels there are on the screen (the smaller the pixels get), and thus a less pixelated effect. Then he's calculating some x and y offset values based on that number and using those offsets to get new texture coordinates to sample from. My question: I have no idea how that's actually happening. I mean, how does all that achieve "duplicating a pixel at x, y into an x + dx, y + dy rectangle"? What do 15 and 10 represent? (How much to offset on x and y?) What happens when we actually multiply dx/dy with the result of the floor function and divide the texture coordinate by them? And why 'floor' in particular? Any help or explanation is appreciated, thanks!
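The floor trick can be checked with plain numbers: dividing a UV by dx, flooring, and multiplying back snaps every coordinate inside one dx-by-dy cell down to that cell's corner, so the whole cell samples the same texel and reads as one big pixel. A small numeric mirror of the shader math (Python; the constants match the snippet above):

```python
import math

def pixelate_uv(u, v, pixels=512.0, ax=15.0, ay=10.0):
    """Mirror of the shader: snap a UV pair to a grid of dx*dy cells."""
    dx = ax * (1.0 / pixels)   # cell width in UV space
    dy = ay * (1.0 / pixels)   # cell height in UV space
    return dx * math.floor(u / dx), dy * math.floor(v / dy)
```

Two nearby UVs inside the same cell come out identical, which is exactly the "duplicated into a rectangle" behaviour; 15 and 10 just set how many texels wide and tall each fake pixel is relative to 1/Pixels.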
1 | Simulation of ball movement in a 3D landscape: the easiest way? I have a landscape (generated via Perlin noise) and a ball. I want the ball to move along the geodesic (an implementation of basic physics: gravitation, friction). I thought of doing a raycast around the ball against the landscape, choosing the lowest point and moving the ball to that point, but it won't work in every case and it won't allow the ball to jump (with inertia). So, what is the best way/algorithm to implement such a feature? P.S. I don't want to use any libraries. Thanks.
1 | How do I sort with both depth and y axis in OpenGL? Continuing my misadventures in PyOpenGL, I've refactored the whole thing to use 4 buffers: tile vertices (all drawn at the start, probably never modified); tile texture co-ords (not modified often, but enables support for animated tiles later); sprite vertices (modified often as sprites move around the map); sprite texture co-ords (modified often as sprites animate). My draw loop binds each buffer and calls drawArrays() twice. I have enabled depth testing, and clear the depth buffer on each draw:

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH)
...
glDepthFunc(GL_LESS)
glEnable(GL_DEPTH_TEST)
...
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

For the moment, I've hard coded the z coordinate for each of the things I'm drawing, just to be sure they're definitely in the right order: grass tiles are at 0.9, the tree (4 tiles) is set to 1.0, and the sprites are at 0.0. This is what I get: In a previous attempt to build this engine without OpenGL, I was using Tkinter canvas images, which ended up very slow. To get the draw order right, I had to sort all the objects by their y coordinate plus their z coordinate, because it is all on a 2D plane. I want something similar here. I don't understand why the grass tiles cause this problem, when they should be drawn right at the back. GL_DEPTH_TEST is definitely enabled: if I disable it by commenting out the glEnable() call, the sprites are drawn on top of everything (including on top of the trees). I want an illusion of depth by being able to walk behind objects that stick up high. I've continued hacking away at this, and I've gotten a little further by modifying my fragment shader:

shaders.compileShader("""
    uniform sampler2D u_image;
    varying vec2 v_texCoords;

    void main()
    {
        vec4 tex = texture2D(u_image, v_texCoords);
        if (tex.a < 1.0)
            discard;
        gl_FragColor = tex;
    }
""", GL_FRAGMENT_SHADER)

By checking for transparent pixels and throwing them away, I get a lot closer to my goal; however it is still not quite right, and I do not know how to fix it. The wizard is at z = 0.5 and the tree trunk is at z = 0.2: it is doing what I'm telling it to do, just not what I want it to do. If I put them on the same z level, the trunk is drawn over the wizard when he stands in front of the tree, and correctly when he stands behind it. I don't just want depth sorting, I also want to sort on the y axis. How can I do this in OpenGL?
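One common way to get "depth sorting plus y sorting" in a 2D scene is to derive the z each sprite is drawn at from the y of its baseline (its feet), so objects lower on the map come out nearer to the camera. A hedged sketch of that mapping (plain Python; in practice the same formula would live in the vertex shader or in the code that fills the sprite vertex buffer):

```python
def depth_from_y(y_feet, map_height, near=0.0, far=1.0):
    """Map a sprite's baseline y (0 at map top .. map_height at bottom)
    to a z value so sprites lower on the map get a smaller (nearer) z."""
    t = y_feet / float(map_height)     # 0.0 at the top, 1.0 at the bottom
    return far - t * (far - near)      # bottom of the map maps to `near`
```

Giving both the wizard and the tree trunk a z computed this way from their baselines makes the depth test itself resolve "in front of / behind the tree", with the alpha-discard shader handling the transparent parts of each quad.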
1 | 2D games and modern OpenGL Preconcepts: OK, so what I've gathered so far is this: don't use the fixed pipeline (deprecated or going to be deprecated); VBOs store "object models" (n vertex data, mostly); VAOs describe how data is laid down so that draw calls know what part of each VBO is for what kind of vertex information (one VAO can refer to multiple VBOs, the opposite is kinda difficult); each draw call also sends vertex data to shaders. How I see 3D (optional): Given this information, I can see how drawing complicated 3D objects is very nice with modern OpenGL. You basically load a bunch of object models (probably from Blender or other similar software) into VBOs with local coordinates, and then you simply provide for each instance of an object a different shader parameter (offset) to draw to world space. Problem/question: In 2D, problems and priorities are completely different, though. You don't draw many complex objects, you don't need complicated projection matrices and whatnot, and shaders are much simpler. What would be the best way to draw frequently (really frequently, basically every frame) changing geometry with modern OpenGL? In the following paragraph you can see some ideas of problems (the circle and rectangle problems) that better identify the kind of changes I'm interested in. My attempts (optional): So, I began to think how I would deal with drawing basic 2D geometry on the screen. A square: load a (1, 0), (1, 1), (0, 1), (0, 0) VBO for the geometry of a square in local space, then provide the shader with the actual width of the square and world coordinates and color information. Cool, looks easy. Let's move to a circle. A circle: a triangle fan with... eh, how much precision (number of vertexes)? For small circles precision must be small and for big circles precision must be high. Clearly loading 1 VBO cannot possibly fit all cases. What if I need to add precision because a circle is resized to be bigger? Less cool. Let's move to something slightly easier, a rectangle. A rectangle: eh, there's no "general rectangle geometry". You just have a width/height proportion and that's it, but each rectangle is probably different if the size changes. As you can see, things go downhill from there, especially with complex polygons and whatnot. No code policy :P I just need an overview of the idea; no code is necessary, especially C or C++ code. Just say stuff like "make a VBO with this vertex data, and then bind it, ...".
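On the circle-precision point above: the vertex count does not have to be baked into a VBO. It can be derived from the on-screen radius and an error tolerance, and the fan rebuilt (and re-uploaded, e.g. via glBufferSubData) only when the radius changes enough to matter. A sketch of picking the segment count from a chord-error bound (Python; the tolerance heuristic is my own assumption, not a standard):

```python
import math

def circle_segments(radius_px, max_error_px=0.25):
    """Smallest segment count whose chords deviate from the true circle
    by at most max_error_px (clamped to a sane minimum)."""
    if radius_px <= max_error_px:
        return 3
    n = math.ceil(math.pi / math.acos(1.0 - max_error_px / radius_px))
    return max(3, int(n))

def circle_fan(cx, cy, radius_px):
    """Triangle-fan vertices: centre first, then points on the rim."""
    n = circle_segments(radius_px)
    pts = [(cx, cy)]
    for i in range(n + 1):                     # the +1 closes the fan
        a = 2.0 * math.pi * i / n
        pts.append((cx + radius_px * math.cos(a),
                    cy + radius_px * math.sin(a)))
    return pts
```

A 10 px circle then gets far fewer vertices than a 100 px one, and a resized circle just triggers a small buffer update rather than needing a different prebuilt VBO.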
1 | Uniform arrays do not work on every GPU I am trying to implement instanced rendering for objects that repeat, so I came up with this idea: I could simply group objects that loaded the same model files, create an array with their model matrices and so on, and finally pass this array as a uniform array to the shaders, which is indexed by gl_InstanceID to get the correct model matrix for each object. The C++ implementation looks pretty much like this:

std::vector<glm::vec3> RelativePositions, Scales;
std::vector<glm::mat4> Models;
int ctr = 0;
bool first = true;

int isz = 0;
obj->getIndices(isz);

// pDrawData->refObjects holds all Object3D instances which share the same model geometry
for (std::list<IObject3D*>::iterator pObj = pDrawData->refObjects.begin(); pObj != pDrawData->refObjects.end(); ++pObj)
{
    IObject3D* obj = *pObj;
    int render = prepareForRender(obj, flags, isShadow, offset);
    if (!render)
        continue;

    if (first)
    {
        // this is gonna be the same for all objects of this class
        setUniform(Type, (int)obj->getType());
        setUniform(Flags, (int)obj->getFlags());
        first = false;
    }

    Scales.push_back(obj->getScale());
    RelativePositions.push_back(obj->getPos());
    Models.push_back(obj->getModel());
    ctr++;
}

if (ctr)
{
    int Rem = 0;
    const int Step = MAX_OBJ_PER_INST; // 64
    for (size_t i = 0, j = Scales.size(); i < j; i += Step)
    {
        Rem = Scales.size() - i;
        if (Rem > Step)
            Rem = Step;

        // the following functions set a uniform array of the specified length
        setUniform(Scale, Scales.data() + i, Rem);
        setUniform(RelativePosition, RelativePositions.data() + i, Rem);
        setUniform(Model, Models.data() + i, Rem);

        glDrawElementsInstanced(GL_TRIANGLES, (GLsizei)isz, GL_UNSIGNED_INT, 0, Rem);
    }
}

Rendering an object is done via Vertex Shader -> Tess Control Shader -> Tess Eval Shader -> Geom Shader -> Fragment Shader. The instance ID is passed between stages using flat out/in int InstanceID, having been set once in the Vertex shader from gl_InstanceID. The Tessellation Evaluation Shader logic looks like this:

#version 430

layout(triangles) in;

in vec3 tcPosition[];
in vec3 tcTexCoord[];
in vec3 tcNormal[];
flat in int tcInstanceID[];

out vec3 tePosition;
out vec3 teDistance;
out vec3 teTexCoord;
out vec3 teNormal;
flat out int teInstanceID;

#define MAX_OBJ_PER_INST 64

uniform mat4 Model[MAX_OBJ_PER_INST];
uniform vec3 Scale[MAX_OBJ_PER_INST];
uniform vec3 RelativePosition[MAX_OBJ_PER_INST];
// ... other uniforms

void main(void)
{
    vec3 p0 = gl_TessCoord.x * tcPosition[0];
    vec3 p1 = gl_TessCoord.y * tcPosition[1];
    vec3 p2 = gl_TessCoord.z * tcPosition[2];

    // ... other stuff ...

    teDistance = gl_TessCoord;
    tePosition = (p0 + p1 + p2);
    teInstanceID = tcInstanceID[0];

    switch (Type)
    {
    // ... other types ...
    default:
        tePosition = (vec4(tePosition, 1) * Model[teInstanceID]).xyz * Scale[teInstanceID] + RelativePosition[teInstanceID];
        break;
    }
}

This works fine when I run it on any nVidia GPU (including on a laptop), but when I try to run it on an AMD GPU or Intel integrated graphics, it suddenly doesn't work and I only get a blank screen. But if I remove the uniform arrays and pass only a single variable for each uniform, so all instances would suddenly have the same model matrix, position and scale, it works even on the other GPUs, so the problem is definitely with the arrays... Is there any solution to this? Note: the std::vector element types shown above (glm::vec3 / glm::mat4) are my best reconstruction; the template arguments were lost in formatting.
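One driver-friendly alternative worth considering (my suggestion, not something from the post): 64 mat4s plus two vec3 arrays is a lot of uniform components, and the guaranteed limits differ between vendors. Per-instance data can instead be streamed into a VBO read through instanced vertex attributes (glVertexAttribDivisor(attrib, 1), with a mat4 occupying four consecutive vec4 attribute slots). The buffer is then just the per-instance values flattened, e.g. column-major matrix followed by scale and position:

```python
def flatten_instance_data(models, scales, positions):
    """Interleave per-instance data: 16 floats of the column-major model
    matrix, 3 of scale, 3 of position -> 22 floats per instance."""
    out = []
    for m, s, p in zip(models, scales, positions):
        for col in range(4):          # column-major, matching GLSL mat4
            for row in range(4):
                out.append(m[row][col])
        out.extend(s)                 # 3 floats of scale
        out.extend(p)                 # 3 floats of relative position
    return out

identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
data = flatten_instance_data([identity], [(1, 1, 1)], [(0, 0, 0)])
```

This removes the per-batch uniform limit entirely: the instance count is then bounded by buffer size, and the shader reads the matrix as a regular (instanced) attribute instead of indexing a uniform array.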
1 | GLSL light coloring blocked surfaces I have created a very simple lighting shader. It currently only supports point lights, but it lights up surfaces that are completely blocked from the light. I know why, but I want to know how I can fix it. I was thinking of using a form of shadow mapping to darken those areas, but shadow mapping is difficult for me to understand and has very major performance issues. I was hoping for a more elegant and efficient version, even if it is not as nice. Until something like that exists, I need a nice, efficient way of checking to see if a fragment or vertex is visible and computing things from there. I would also like to add that I intend to support semi-transparent objects, which should block light based on an alpha value.

The things I want: fast (25 fps with the modified shaders; I currently run at 82); easily modified to incorporate more lights (three or four at most, probably); support for edge smoothing (the user can enable it, but will lose performance).

The things I do not want: pulling information from the GPU during rendering (view stuttering is always visible when that happens); visible/noticeable movement of lighting when the scene is stationary (like the light is moving when it really isn't, due to randomizing); expensive operations when the scene first renders.

I do not expect anybody to implement these things; just leave room and maybe a comment where these would be done. I know this is the "holy grail" of rendering when combined with bump maps and such, but if you tone down the quality a bit, I hope it is possible. I may have to do some extra work on the CPU, but that is acceptable. Now for the shaders I currently have working.

Vertex shader:

varying vec4 color;
varying vec3 normal;
varying vec4 position;

void main(void)
{
    color = gl_Color;                                        // pass color to the fragment shader
    normal = normalize(gl_NormalMatrix * gl_Normal);         // calculate useable normals from the basic normal data
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;  // set position
    gl_TexCoord[0] = gl_MultiTexCoord0;                      // set texture coordinates
    position = gl_Vertex;  // pass gl_Vertex to the fragment shader for light calculations
}

Fragment shader:

varying vec4 color;
varying vec3 normal;
varying vec4 position;

uniform sampler2D texture;
uniform vec4 light; // position of the light, same coordinate system as non-transformed objects

void main(void)
{
    vec3 vector = light.xyz - position.xyz; // get the vector needed to determine length
    float distance = sqrt((vector.x * vector.x) + (vector.y * vector.y) + (vector.z * vector.z)); // determine length
    vec4 final = color;             // set color
    final.xyz *= (1.0 / distance);  // calculate light intensity on pixel
    gl_FragColor = final * texture2D(texture, gl_TexCoord[0].st); // multiply by texture color
}

I can do a lot of extra cool things with this current code, such as giving lights their own colors and giving objects their own material properties, but that is not the point now. I just want basic code, preferably without needing more buffers and shaders, to determine visibility. From my research, it sounds like the stencil and depth buffers may be of use here, and they are already set up, so they may be a better way to do it. I cannot stress enough that I only need the bare essentials here, nothing overly fancy. Thanks in advance for your help.
1 | OpenGL render to texture causing edge artifacts This is my first post here so any help would be massively appreciated :) I'm using C++ with SDL and OpenGL 3.3. When rendering directly to screen I get the following result: And when I render to texture this happens: Anti-aliasing is turned off for both. I'm guessing this has something to do with depth buffer accuracy, but I've tried a lot of different methods to improve the result with no success :( I'm currently using the following code to set up my FBO:

GLuint frameBufferID;
glGenFramebuffers(1, &frameBufferID);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferID);

glGenTextures(1, &coloursTextureID);
glBindTexture(GL_TEXTURE_2D, coloursTextureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Depth buffer setup
GLuint depthrenderbuffer;
glGenRenderbuffers(1, &depthrenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthrenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, SCREEN_WIDTH, SCREEN_HEIGHT);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthrenderbuffer);

glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, coloursTextureID, 0);

GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    return false;

Thank you so much for any help :)
1 | How to mitigate this strange pattern shown in Phong lighting? I am creating an entire lighting scene in OpenGL. The entire scene consists of only one point light. I noticed some strange z-fighting-like pattern. This flickers when I move the camera. Can anyone tell me what the potential cause of the problem is? How should I mitigate this problem? Thank you. EDIT: The problem is solved. It was primarily caused by a normal-transform computation error, and my ill-made model might have aggravated this speckle/flickering effect.
1 | Why doesn't the y Axis work with SuperBible frame reference or GluLookAt I'm currently trying to understand how to use the GLFrame class in the SuperBible book, and according to the 4th edition of the book, the camera matrix derived from the frame of reference class should work the same as gluLookAt. When I add these lines cameraFrame.SetForwardVector(-0.5f, 0.0f, 0.5f) cameraFrame.Normalize() the camera looks in the correct direction, yaw at 45 degrees (am I doing that right?). However when I add this cameraFrame.SetForwardVector(0.0f, 0.5f, 0.5f) the camera just looks as if it was set to (0.0f, 0.0f, 1.0f). Why is this? It's been driving me mad for three days. Maybe I'm not passing in the vectors correctly, but I'm not sure how to pass in x,y 360 degrees for the look at (forward) location vector. Do the vectors have to be normalized before passing them in? Eventually I hope to do full mouse look (FPS style), but for now just understanding why I can't make the camera simply pitch up would be a good start. Thanks! Here is the code in situ.
Called to draw scene void RenderScene(void) Color values static GLfloat vFloorColor 0.0f, 1.0f, 0.0f, 1.0f static GLfloat vTorusColor 1.0f, 0.0f, 0.0f, 1.0f static GLfloat vSphereColor 0.0f, 0.0f, 1.0f, 1.0f Time Based animation static CStopWatch rotTimer float yRot rotTimer.GetElapsedSeconds() 60.0f Clear the color and depth buffers glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) Save the current modelview matrix (the identity matrix) modelViewMatrix.PushMatrix() M3DMatrix44f mCamera My Code cameraFrame.SetForwardVector( 0.5f, 0.5f, 0.5f) cameraFrame.Normalize() End of my code cameraFrame.GetCameraMatrix(mCamera) modelViewMatrix.PushMatrix(mCamera) Transform the light position into eye coordinates M3DVector4f vLightPos 0.0f, 10.0f, 5.0f, 1.0f M3DVector4f vLightEyePos m3dTransformVector4(vLightEyePos, vLightPos, mCamera) Draw the ground shaderManager.UseStockShader(GLT SHADER FLAT, transformPipeline.GetModelViewProjectionMatrix(), vFloorColor) floorBatch.Draw() for(int i 0 i lt NUM SPHERES i ) modelViewMatrix.PushMatrix() modelViewMatrix.MultMatrix(spheres i ) shaderManager.UseStockShader(GLT SHADER POINT LIGHT DIFF, transformPipeline.GetModelViewMatrix(), transformPipeline.GetProjectionMatrix(), vLightEyePos, vSphereColor) sphereBatch.Draw() modelViewMatrix.PopMatrix() Draw the spinning Torus modelViewMatrix.Translate(0.0f, 0.0f, 2.5f) Save the Translation modelViewMatrix.PushMatrix() Apply a rotation and draw the torus modelViewMatrix.Rotate(yRot, 0.0f, 1.0f, 0.0f) shaderManager.UseStockShader(GLT SHADER POINT LIGHT DIFF, transformPipeline.GetModelViewMatrix(), transformPipeline.GetProjectionMatrix(), vLightEyePos, vTorusColor) torusBatch.Draw() modelViewMatrix.PopMatrix() "Erase" the Rotation from before Apply another rotation, followed by a translation, then draw the sphere modelViewMatrix.Rotate(yRot 2.0f, 0.0f, 1.0f, 0.0f) modelViewMatrix.Translate(0.8f, 0.0f, 0.0f) shaderManager.UseStockShader(GLT SHADER POINT LIGHT DIFF, 
transformPipeline.GetModelViewMatrix(), transformPipeline.GetProjectionMatrix(), vLightEyePos, vSphereColor) sphereBatch.Draw() Restore the previous modleview matrix (the identity matrix) modelViewMatrix.PopMatrix() modelViewMatrix.PopMatrix() Do the buffer Swap glutSwapBuffers() Tell GLUT to do it again glutPostRedisplay() |
1 | Why is this orthographic projection matrix not showing my textured quad? I've been following tutorials, mainly this one, and I am still not quite sure why my textured quad is not showing inside the frustum that I've rendered before. I can see it if and only if I don't multiply gl Position with OrthoProjMatrix vertexmodelspace, and instead, multiply gl position with vertexmodelspace. Here is some of my code my Main.CPP is also available via PasteBin. Orthographic Projection Matrix Setup Code void OpenGL Engine OrthoProjectionSetup(GLuint program) GLfloat Right 100.0 GLfloat Left 50.0 GLfloat Top 100.0 GLfloat Bottom 50.0 GLfloat zFar 1.0 GLfloat zNear 1.0 GLfloat LeftAndRight 2.0f (Right Left) GLfloat TopAndBottom 2.0f (Top Bottom) GLfloat ZFarAndZNear 2.0f (zFar zNear) GLfloat orthographicprojmatrix XX XY XZ XW LeftAndRight, 0.0, 0.0, (Right Left) (Right Left), YX YY YZ YW 0.0, TopAndBottom, 0.0 , (Top Bottom) (Top Bottom), ZX ZY ZZ ZW 0.0, 0.0 , ZFarAndZNear, WX WY WZ WW (zFar zNear ) (zFar zNear), 0.0, 1.0 GLint orthographicmatrixloc glGetUniformLocation(program, "OrthoProjMatrix") glUniformMatrix4fv(orthographicmatrixloc, 1, GL TRUE, amp orthographicprojmatrix 0 ) Vertex Shader Code version 330 core layout(location 0) in vec4 vertexposition modelspace layout(location 1) in vec2 vertexUV out vec2 UV uniform mat4 OrthoProjMatrix void main() gl Position OrthoProjMatrix vertexposition modelspace UV vertexUV I'm having problems with the orthographic projection matrix either it is not being done correctly, not setup correctly, my shader is not setup correctly or it's the textured quad that is not in view. Please note that I do not want to use a library for this. What am I doing wrong? |
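For comparison, here is what the standard glOrtho-style matrix looks like when built by hand in row-major order (which matches the GL_TRUE transpose flag the code above already passes to glUniformMatrix4fv). Note the translation column: each term is -(right+left)/(right-left) and so on, and the z row uses -2/(far-near). This is a generic sketch with hypothetical helper names, not a drop-in replacement for the code above:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<double, 16>;  // row-major storage
using Vec4 = std::array<double, 4>;

// Same mapping as glOrtho: (l, b, -n) -> (-1, -1, -1) and (r, t, -f) -> (1, 1, 1).
Mat4 ortho(double l, double r, double b, double t, double n, double f) {
    Mat4 m{};
    m[0]  = 2.0 / (r - l);   m[3]  = -(r + l) / (r - l);
    m[5]  = 2.0 / (t - b);   m[7]  = -(t + b) / (t - b);
    m[10] = -2.0 / (f - n);  m[11] = -(f + n) / (f - n);
    m[15] = 1.0;
    return m;
}

Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[row * 4 + col] * v[col];
    return r;
}
```

Checking the corners of the box is a quick way to validate any hand-rolled projection: if (Left, Bottom) does not land on (-1, -1), the quad can easily end up outside clip space, which would explain a blank screen.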
1 | opengl z index changing does not zoom in I am making a simple opengl application where I have a cube and I want to move forward towards it with the glTranslate3d function, where I change the z value. One would expect the object to get bigger because of the view, but it does not My code

glClearColor(0, 0, 0, 1);
glClearDepth(1.0);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glOrtho(30, width, 0, height, -100, 100);
glMatrixMode(GL_MODELVIEW);

I have tried many things also I have been told that gluPerspective is outdated and should not be used.... EDIT it works when I rotate the object you can see how 3d it is p
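glOrtho is exactly why nothing gets bigger: an orthographic projection has no perspective divide, so moving along z never changes projected size. A perspective projection fixes it; gluPerspective builds such a matrix, and if you would rather avoid the deprecated call, the same matrix is easy to produce by hand. A sketch (row-major, hypothetical helper names):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<double, 16>;  // row-major

// Same matrix gluPerspective would build; fovyRadians is the vertical field of view.
Mat4 perspective(double fovyRadians, double aspect, double zNear, double zFar) {
    double f = 1.0 / std::tan(fovyRadians / 2.0);
    Mat4 m{};
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = 2.0 * zFar * zNear / (zNear - zFar);
    m[14] = -1.0;  // w' = -z: this divide is what makes distant things smaller
    return m;
}

// Project a point and apply the perspective divide; returns NDC x.
double projectedX(const Mat4& m, double x, double y, double z) {
    double clipX = m[0] * x + m[1] * y + m[2] * z + m[3];
    double clipW = m[12] * x + m[13] * y + m[14] * z + m[15];
    return clipX / clipW;
}
```

The divide by w = -z is the whole effect: a point twice as far away projects at half the offset from the screen centre, which is the zoom behaviour the question expects.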
1 | How can I render a semi transparent model with OpenGL correctly? I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl FragColor) to a non opaque alpha value but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong so please watch this short video I recorded http www.youtube.com watch?v s0JqA0rZabE I thought this was a depth testing issue so I tried playing around with enabling disabling depth testing and back face culling. Enabling back face culling changes the output slightly but the problem in the video is still there. Enabling disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order independent transparency implementations. 
edit Vertex Shader color varying for fragment shader varying mediump vec3 LightIntensity varying highp vec3 VertexInModelSpace void main() vec4 LightPosition vec4(0.0, 0.0, 0.0, 1.0) vec3 LightColor vec3(1.0, 1.0, 1.0) vec3 DiffuseColor vec3(1.0, 0.25, 0.0) find the vector from the given vertex to the light source vec4 vertexInWorldSpace gl ModelViewMatrix vec4(gl Vertex) vec3 normalInWorldSpace normalize(gl NormalMatrix gl Normal) vec3 lightDirn normalize(vec3(LightPosition vertexInWorldSpace)) save vertexInWorldSpace VertexInModelSpace vec3(gl Vertex) calculate light intensity LightIntensity LightColor DiffuseColor max(dot(lightDirn,normalInWorldSpace),0.0) calculate projected vertex position gl Position gl ModelViewProjectionMatrix gl Vertex Fragment Shader varying to define color varying vec3 LightIntensity varying vec3 VertexInModelSpace void main() gl FragColor vec4(LightIntensity,0.5) |
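What the video shows is the classic ordering problem with alpha blending: with depth writes on, whichever transparent triangle happens to be drawn first blocks the ones behind it, and the "winner" changes as the camera moves. The usual simple remedy is: draw opaque geometry first, then enable blending, disable depth writes (glDepthMask(GL_FALSE) while keeping GL_DEPTH_TEST on), and draw transparent objects sorted back to front. Sorting whole objects by distance to the camera looks like this sketch (hypothetical struct, not the poster's engine):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Transparent {
    float x, y, z;  // object centre in world space
};

float dist2(const Transparent& o, float cx, float cy, float cz) {
    float dx = o.x - cx, dy = o.y - cy, dz = o.z - cz;
    return dx * dx + dy * dy + dz * dz;  // squared distance: no sqrt needed for ordering
}

// Farthest objects first, so nearer transparent surfaces blend over farther ones.
void sortBackToFront(std::vector<Transparent>& objs, float cx, float cy, float cz) {
    std::sort(objs.begin(), objs.end(),
              [&](const Transparent& a, const Transparent& b) {
                  return dist2(a, cx, cy, cz) > dist2(b, cx, cy, cz);
              });
}
```

Per-object sorting cannot fix self-overlapping triangles within one model; for a single roughly convex mesh a cheap trick is drawing back faces first, then front faces, and anything beyond that heads into true order-independent transparency, which the question explicitly wants to avoid.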
1 | How to render a curved horizon when rendering terrain far away For games with long view distance, it might look a bit weird when you fly high above the ground and the horizon is all flat. So, how can I make a slight bend effect when I fly high above the surface? Can this be done in the shaders? If you're not sure what I mean, here's a picture A more clear example, though it might be captured with a fish eye lens. What would be the best way to do this? I believe you could rotate and move (using a transformation matrix) all vertices a tiny little bit according to how far they are from the camera. But would this even create the desired effect, and would it be too resource intensive? Or maybe you know a whole other way to do it.
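This works well in the vertex shader: leave the geometry alone and push every vertex down by the sagitta of a circle of some chosen planet radius R, i.e. drop ≈ d²/(2R), where d is the horizontal distance from the camera. That quadratic is the small-angle approximation of the exact value R - sqrt(R² - d²) and is cheap enough to run per vertex. A sketch of just the math (the radius used below is a made-up tuning value, not anything from the question):

```cpp
#include <cassert>
#include <cmath>

// Exact vertical drop of a point d units away along the surface of a sphere of radius R.
double exactDrop(double d, double R) { return R - std::sqrt(R * R - d * d); }

// Cheap per-vertex approximation used in shaders: first term of the series expansion.
double approxDrop(double d, double R) { return (d * d) / (2.0 * R); }
```

In GLSL this becomes one line after the view transform, e.g. `position.y -= dot(position.xz, position.xz) / (2.0 * R);` — the cost is negligible, and shrinking R exaggerates the fish-eye look of the second picture.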
1 | OpenGL Fragment Shader simulate LCD slow response time I have a very simple OpenGL view rendering 2 triangles with a single texture applied. The minimum setup for rendering a 2d game. What i do is redraw the texture for every frame and easily get 60fps. I would like to add an effect simulating LCD slow response by outputting pixels that are the average between the current and the previous frame. The pseudo code of fragment shader would look like this varying vec2 v TexCoordinate uniform sampler2D u Texture void main() gl FragColor (texture2D(u Texture, v TexCoordinate) PIXEL PREVIOUS FRAME) 2 PIXEL PREVIOUS FRAME gl FragColor Well, i think shaders can't keep a persistent variable like PIXEL PREVIOUS FRAME, then i really don't know how to tackle this problem. Any suggestion? |
1 | camera movement along with model I am making a game in which a cube travels along a maze with the motive of crossing the maze safely. I have two problems in this. The cube needs to have a smooth movement like it is traveling on a frictionless surface. So could someone help me achieve this. I need to have this done in a event callback function I need to move the camera along with the cube. So could someone advice me a good tutorial about camera positions along with an object? |
1 | SDL2 Draw scene to texture. SDL2 RenderTexture like SFML I've been developing a 2D Engine using SFML ImGui. The editor is rendered using ImGui and the scene window is a sf RenderTexture where I draw the GameObjects and then is converted to ImGui Image to render it in the editor. Now I need to create a 3D Engine during this year in my Bachelor Degree but using SDL2 ImGui and I want to recreate what I did with the 2D Engine. I've managed to render the editor like I did in the 2D Engine using this Example that comes with ImGui. But I don't know how to create an equivalent of sf RenderTexture in SDL2, so I can draw the 3D scene there and convert it to ImGui Image to show it in the editor. If you can provide code will be better. And if you want me to provide any specific code tell me. Thanks! |
1 | Creating a voxel world with 3D arrays using threads I am making a voxel game (a bit like Minecraft) in C++ (11), and I've come across an issue with creating a world efficiently. In my program, I have a World class, which holds a 3D array of Region class pointers. When I initialize the world, I give it a width, height, and depth so it knows how large of a world to create. Each Region is split up into a 32x32x32 area of blocks, so as you may guess, it takes a while to initialize the world once the world gets to be above 8x4x8 Regions. In order to alleviate this issue, I thought that using threads to generate different levels of the world concurrently would make it go faster. Having not used threads much before this, and being still relatively new to C++, I'm not entirely sure how to go about implementing one thread per level (a level being an xz plane with a height of 1), when there is a variable number of levels. I tried this

for (int i = 0; i < height; i++) {
    std::thread th(std::bind(&World::load, this, width, height, depth));
    th.join();
}

Where load() just loads all Regions at height "height". But that executes the threads one at a time (which makes sense, looking back), and that of course takes as long as generating all Regions in one loop. I then tried

std::thread t1(std::bind(&World::load, this, w, h1, h2 - 1, d));
std::thread t2(std::bind(&World::load, this, w, h2, h3 - 1, d));
std::thread t3(std::bind(&World::load, this, w, h3, h4 - 1, d));
std::thread t4(std::bind(&World::load, this, w, h4, h - 1, d));
t1.join();
t2.join();
t3.join();
t4.join();

This works in that the world loads about 3 3.5 times faster, but this forces the height to be a multiple of 4, and it also gives the same exact VAO object to every single Region, which need individual VAOs in order to render properly. The VAO of each Region is set in the constructor, so I'm assuming that somehow the VAO number is not thread safe or something (again, unfamiliar with threads).
So basically, my question is in two parts: How do I implement a variable number of threads that all execute at the same time, and force the main thread to wait for them using join() without stopping the other threads? How do I make the VAO objects thread safe, so when a bunch of Regions are being created at the same time across multiple threads, they don't all get the exact same VAO? Turns out it has to do with GL contexts not working across multiple threads. I moved the VAO VBO creation back to the main thread. Fixed! Here is the code for block.h/.cpp, region.h/.cpp, and CVBObject.h/.cpp which controls VBOs and VAOs, in case you need it. If you need to see anything else just ask. EDIT Also, I'd prefer not to have answers that are like "you should have used boost". I'm trying to do this without boost to get used to threads before moving onto other libraries.
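For the first part of the question (the VAO part turned out to be the GL-context issue noted in the edit): the usual pattern for a variable number of concurrent workers is to push the std::thread objects into a vector, start them all, and only then join them all — joining inside the launch loop, as in the first snippet, serializes everything. A sketch, assuming load() does CPU-only work:

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// One worker per level; all levels run concurrently, then the caller
// blocks until every worker has finished.
void loadAllLevels(int height, std::atomic<int>& levelsDone) {
    std::vector<std::thread> workers;
    workers.reserve(height);
    for (int level = 0; level < height; ++level) {
        workers.emplace_back([level, &levelsDone] {
            // ... generate all Regions at this level here (no GL calls:
            //     buffer/VAO creation stays on the context-owning thread) ...
            levelsDone.fetch_add(1);
        });
    }
    for (auto& t : workers) t.join();  // join AFTER launching them all
}
```

The worker threads should fill plain CPU-side vertex arrays; the main thread then uploads each finished array with its own glGen*/glBufferData calls, which also sidesteps the shared-VAO symptom.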
1 | Does anything need to be done to support variable refresh rate? Does anything have to be done in order for my game to take advantage of variable refresh rate technology supported by some displays, such as G SYNC and FreeSync? |
1 | OpenGL Array Texture generates black texture I am creating a game using OpenGL that has many textured cubes. I am trying to use an Array Texture to load multiple textures from one file. I tried to set up my texture atlas however, all that renders is a black texture. I'm not sure what exactly I am doing wrong. Here is the texture loading code

// Load image
stbi_set_flip_vertically_on_load(true);
unsigned char* data = stbi_load(path, &width, &height, &nrChannels, 0);
// Generate texture object
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D_ARRAY, textureId);
// Actually load texture
if (data) {
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, width, height, 0, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
} else {
    std::cout << "Failed to load " << path << " texture" << std::endl;
}
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Free memory
stbi_image_free(data);

Here is my fragment shader

#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2DArray my_sampler;
void main() {
    FragColor = texture(my_sampler, vec3(0, 0, 1)) * vec4(1.0f);
}

Here is my texture file Does anyone know how I can solve my issue?
1 | FBO result not drawing to screen I recently added framebuffer rendering to my game and rendering to the FBO works (verified with glGetTexImage), but when I go to render a quad to show the result nothing is drawn to the screen. I'm using OpenGL 3.3 Core, so I'm drawing using vertex buffers for the quad and I have two simple pass through shaders to display the quad. I apologize for the amount of code that follows, but I felt it was all necessary for anyone who ends up looking at this. Quad generation create vertices Vertex(position, normal, tex coord) quadVertices.Add( Vertex( Vector3( 1.0f, 1.0f, 0.0f ), Vector3(), Vector2( 0.0f, 1.0f ) ) ) quadVertices.Add( Vertex( Vector3( 1.0f, 1.0f, 0.0f ), Vector3(), Vector2( 0.0f, 0.0f ) ) ) quadVertices.Add( Vertex( Vector3( 1.0f, 1.0f, 0.0f ), Vector3(), Vector2( 1.0f, 0.0f ) ) ) quadVertices.Add( Vertex( Vector3( 1.0f, 1.0f, 0.0f ), Vector3(), Vector2( 1.0f, 1.0f ) ) ) quadVertices.SendData() gl Buffer methods wrapper create indices (quadIndices is a buffer of 16 bit integers) quadIndices.Add( 0 ) quadIndices.Add( 1 ) quadIndices.Add( 2 ) quadIndices.Add( 0 ) quadIndices.Add( 2 ) quadIndices.Add( 3 ) quadIndices.SendData() Quad rendering glDisable( GL DEPTH TEST ) glDisable( GL CULL FACE ) bind the texture glActiveTexture( GL TEXTURE0 ) glBindTexture( GL TEXTURE 2D, framebuffer.GetColorHandle() ) shader.Seti( "texture0", 0 ) projection matrix is glm ortho( 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f ) shader.Set( "proj", projection ) get attributes int vertex shader.GetAttribLoc( "vertex" ) int texCoord shader.GetAttribLoc( "texCoordV" ) bind buffers quadVertices.Bind() quadIndices.Bind() enable arrays and point to data glEnableVertexAttribArray( vertex ) glVertexAttribPointer ( vertex, 3, GL FLOAT, GL FALSE, sizeof( Vertex ), (void )( 0 ) ) glEnableVertexAttribArray( texCoord ) glVertexAttribPointer ( texCoord, 2, GL FLOAT, GL FALSE, sizeof( Vertex ), (void )( sizeof( Vector3 ) 2 ) ) draw! 
glDrawElements( GL TRIANGLES, quadIndices.Size(), GL UNSIGNED SHORT, 0 ) disable arrays glDisableVertexAttribArray( vertex ) glDisableVertexAttribArray( texCoord ) unbind everything quadVertices.Unbind() quadIndices.Unbind() glBindTexture( GL TEXTURE 2D, 0 ) glEnable( GL CULL FACE ) glEnable( GL DEPTH TEST ) Vertex shader version 330 core in vec3 vertex in vec2 texCoordV out vec2 texCoordF uniform mat4 proj void main() texCoordF texCoordV gl Position proj vec4( vertex, 1.0 ) Fragment shader version 330 core layout(location 0) out vec4 fragColor in vec2 texCoordF uniform sampler2D texture0 void main() fragColor texture( texture0, texCoordF ) |
1 | Deferred shading how to combine multiple lights? I'm starting out with GLSL and I've implemented simple deferred shading that outputs G buffer with positions, normals and albedo. I've also written a simple point light shader. Now I draw a sphere for the point light and output goes into a lighting buffer. The problem is, how do I combine the results of lighting buffer when drawing multiple lights? E.g. when I'm drawing the second light to the lightbuffer using the point light shader, how do I add first light to the second light in the lighting buffer. I mean, you can't read from and write to the same output buffer? |
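The standard answer is additive blending: keep the light accumulation buffer bound, enable glEnable(GL_BLEND) with glBlendFunc(GL_ONE, GL_ONE), and draw each light volume; the hardware then sums every light's contribution into whatever is already in the buffer, so no shader ever needs to read the buffer it writes. The arithmetic the blend unit performs is just per-pixel addition, which a CPU-side sketch makes concrete (hypothetical types, a floating-point buffer assumed so sums don't clamp):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct RGB { float r, g, b; };

// What glBlendFunc(GL_ONE, GL_ONE) does per fragment: dst = dst + src.
void accumulateLight(std::vector<RGB>& lightBuffer,
                     const std::vector<RGB>& lightPassOutput) {
    for (std::size_t i = 0; i < lightBuffer.size(); ++i) {
        lightBuffer[i].r += lightPassOutput[i].r;
        lightBuffer[i].g += lightPassOutput[i].g;
        lightBuffer[i].b += lightPassOutput[i].b;
    }
}
```

Use a float format such as GL_RGBA16F for the light buffer so multiple overlapping lights can exceed 1.0 before tone mapping; with an 8-bit buffer the additive results clamp and bright overlaps flatten out.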
1 | Using GLFW and GLUT together I'm new to OpenGL. Let's say I create an OpenGL window using GLFW and I need the UI features from GLUT (such as popup menus). Is it possible to use GLFW and GLUT in one program? Thank you
1 | Looking for specific "textured quad" openGL tutorial in c I'm new to openGL and I've been googling around for the old and simple textured quad tutorial in openGL but I haven't been able to find one that suits my needs. OpenGL3.X compatible OpenGL ES 2.0 compatible AND Only uses core openGL library (no GLUT, GLEW...) I'm creating the window and the openGL context with SDL2. I only need the most basic stuff to draw quads. I'm not interested in any of the new OpenGL features, just in compatibility and cross platformness (Windows, OSX, Linux, Android and iOS). |
1 | How do I draw a point sprite using OpenGL ES on Android? Edit I'm using the GL enum, which is incorrect since it's not part of OpenGL ES (see my answer). I should have used GL10, GL11 or GL20 instead. Here's a few snippets of what I have so far... void create() renderer new ImmediateModeRenderer() tiles Gdx.graphics.newTexture( Gdx.files.getFileHandle("res tiles2.png", FileType.Internal), TextureFilter.MipMap, TextureFilter.Linear, TextureWrap.ClampToEdge, TextureWrap.ClampToEdge) void render() Gdx.gl.glClear(GL10.GL COLOR BUFFER BIT GL10.GL DEPTH BUFFER BIT) Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1) void renderSprite() int handle tiles.getTextureObjectHandle() Gdx.gl.glBindTexture(GL.GL TEXTURE 2D, handle) Gdx.gl.glEnable(GL.GL POINT SPRITE) Gdx.gl11.glTexEnvi(GL.GL POINT SPRITE, GL.GL COORD REPLACE, GL.GL TRUE) renderer.begin(GL.GL POINTS) renderer.vertex(pos.x, pos.y, pos.z) renderer.end() create() is called once when the program starts, and renderSprites() is called for each sprite (so, pos is unique to each sprite) where the sprites are arranged in a sort of 3D cube. Unfortunately though, this just renders a few white dots... I suppose that the texture isn't being bound which is why I'm getting white dots. Also, when I draw my sprites on anything other than 0 z axis, they do not appear I read that I need to crease my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL. |
1 | Is Phong shading supposed to be so camera angle dependent? I'm not sure if I have a bug in my code or not. It seems like it's a bug, or at least a major shortfall. Here's two images of the same model at slightly different angles (by moving the camera) As we can see, the diffuse shading is wildly different from moving the camera just a bit. I would expect this effect from the specular, but not the diffuse (and removing the specular component has no noticeable affect on this shading). The relevant part of the fragment shader (which removing will remove this issue, but also remove all diffuse shading) is clamp(dot(normal, lightDir), 0, 1) Where normal is the normalized normal vector in camera space (found with (VP M vec4(vertexNormal modelspace,0)).xyz) and lightDir is the normalized light direction vector, which is found with vec3 vertexPosition cameraspace (VP M vec4(vertexPosition modelspace, 1)).xyz eyeDirection cameraspace vec3(0, 0, 0) vertexPosition cameraspace vec3 lightPosition cameraspace (VP vec4(lightPos, 1)).xyz lightDirection cameraspace lightPosition cameraspace eyeDirection cameraspace The above clamped dot product is multiplied by the diffuse color and light color. It seems to be exactly what the Wikipedia page describes (they don't clamp it, but describe the need for it in the text). Most of the Phong shading code is directly or heavily drawn from this site. Is the Phong model supposed to result in such camera angle dependent diffuse shading? If not, any ideas as to what might be wrong? If so, what alternatives are there that don't have this effect? What I want to achieve is essentially the diffuse shading, but without it changing on camera angle, which seems very unrealistic to me. Instead, only the angle between the light source and the normal would matter (and these don't change when we move the camera). This is homework. |
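One thing worth checking in code like the above: multiplying positions and normals by VP (which includes the projection matrix) does not produce camera-space vectors — a perspective matrix is not a rigid transform, so directions and angles computed from it change as the camera moves, which matches the symptom exactly. Diffuse lighting should use only rigid/affine transforms (model, or model-view), because rotations preserve the dot product that Lambert shading depends on. A small demonstration of that invariance (generic math, not the poster's shader):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

V3 normalize(V3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambert term: what clamp(dot(normal, lightDir), 0, 1) computes.
double lambert(V3 n, V3 l) {
    double d = dot(normalize(n), normalize(l));
    return d < 0.0 ? 0.0 : d;
}

// A rigid camera rotation about Y, applied to BOTH vectors (i.e. a view change).
V3 rotateY(V3 v, double a) {
    return {  std::cos(a) * v.x + std::sin(a) * v.z,
              v.y,
             -std::sin(a) * v.x + std::cos(a) * v.z };
}
```

Rotating both the normal and the light direction by the same view rotation leaves lambert() unchanged, so correctly transformed diffuse shading cannot depend on where the camera looks; pushing the vectors through a non-rigid "transform" like a projection breaks exactly this property.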
1 | Drawing many separate lines using mouse OpenGL(GLFW glad) So, in order to draw a line, I track the coordinates of the mouse, then I add them to the array and capture it as GL LINE STRIP ADJACENCY. However, for example, I completed drawing a line1 at P1 and decided to start drawing a different line at P2 as shown in the figure, but my two points P1 and P2 joined together, how to fix it? Need to clear the array after drawing at the point P1, actually it doesn't help if I use glClearColor and glClear(GL COLOR BUFFER BIT).. Is there any other way? |
1 | Textures not rendering with VBOs After having used display lists for my programs since I started learning OpenGL, I've finally decided to switch to VBOs after experiencing a considerable amount of lag when I started work on a new game. I've nearly finished with my transition, but I'm still having a bit of trouble with rendering the textures to the screen. The way I'm working it is this I have several sprites of the same dimensions for each type of "block" in the game (it's a Terraria port) which are loaded into the game via a convenience method, then after they're all loaded, they're compiled into a virtual texture atlas via Graphics2D, converted to a Slick texture, and their relative coordinates are saved to a HashMap. However, when it comes time to grab the textures out of the HashMap and render them, the game simply doesn't. Here's part of my code for adding a block to the VBO (this is executed 4 times per block, once for each corner) top left vertex values.add(Float.valueOf((float)b.getLocation().getPixelX())) values.add(Float.valueOf((float)b.getLocation().getPixelY())) light values.add(Float.valueOf((float)b.getLightLevel() 15)) values.add(Float.valueOf((float)b.getLightLevel() 15)) values.add(Float.valueOf((float)b.getLightLevel() 15)) texture values.add(tX) values.add(tY) (Note values is a list which is later converted to an array.) Then comes my code for rendering the VBO public static void render() glPushMatrix() glTranslatef(MineFlat.xOffset, MineFlat.yOffset, 0) glBindTexture(GL TEXTURE 2D, BlockUtil.atlas.getTextureID()) glVertexPointer(2, GL FLOAT, 28, 0) glColorPointer(3, GL FLOAT, 28, 8) glTexCoordPointer(2, GL FLOAT, 28, 20) glDrawArrays(GL QUADS, 0, vertexArray.length 7) glBindTexture(GL TEXTURE 2D, 0) glPopMatrix() The blocks are properly rendered with the correct lighting, but they lack any sort of texture. Any suggestions as to how to get the code working? 
I apologize if the answer is obvious, but I still consider myself a bit of an OpenGL noob, especially in the area of VBOs. One last thing I should mention that I'm not using any shaders in my current game. EDIT I seem to be a bit mistaken. Upon experimenting with the code by manually setting all texture coords, I discovered that the game simply converts the texture to a uniform color by averaging the RGB value of all pixels. I didn't recognize this before because the color of most blocks is grey. I recall having this problem before with display lists, but that was quite a while ago, and so I don't remember how I resolved it. SECOND EDIT Screenshots of the expected and actual result, respectively http i.stack.imgur.com 3t4r4.png http i.stack.imgur.com edv8v.png |
1 | Why do we multiply perspective * modelview * point? A common line in vertex shaders is gl_Position = projection_matrix * model_view_matrix * object_space_vertex; I've seen this a lot; why isn't it written like gl_Position = object_space_vertex * model_view_matrix * projection_matrix;? That would be more intuitive, I suppose. Is it mathematically wrong?
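It's not arbitrary: OpenGL's convention treats vertices as column vectors, and for column vectors transforms compose right-to-left — the vertex is hit by the model-view matrix first and the projection last, which is the order the pipeline needs. Writing v * M only works in a row-vector convention with transposed matrices (as Direct3D/HLSL examples often do). Associativity is why P * (M * v) and (P * M) * v agree, while swapping the matrices themselves changes the result; in 2D homogeneous coordinates this is easy to verify (generic sketch):

```cpp
#include <array>
#include <cassert>

using M3 = std::array<std::array<double, 3>, 3>;  // 2D homogeneous transform
using V3 = std::array<double, 3>;

M3 mul(const M3& a, const M3& b) {
    M3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

V3 apply(const M3& m, const V3& v) {
    V3 r{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k)
            r[i] += m[i][k] * v[k];
    return r;
}

M3 translate(double tx, double ty) {
    M3 m = {{ {{1, 0, tx}}, {{0, 1, ty}}, {{0, 0, 1}} }};
    return m;
}
M3 scale(double s) {
    M3 m = {{ {{s, 0, 0}}, {{0, s, 0}}, {{0, 0, 1}} }};
    return m;
}
```

Scale-then-translate and translate-then-scale land a point in different places, which is exactly why the matrix order in the shader matters.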
1 | Is it a good idea to render many textures into one texture? I'm making a 2D game with OpenGL. In order to avoid changing the state machine and binding at runtime, I want to consolidate my textures into bigger textures, for example, taking 4 128x128 textures and making 1 big 512x512. I could then just render part of the texture rather than binding a new one. Is this a good idea? Could my newly created texture get lost, or are there major faults in doing this? Thanks
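Packing sprites into an atlas is a common and usually sound optimization, and the combined texture doesn't "get lost" any more than a single texture does. The two real gotchas are (a) bleeding between neighbouring tiles when filtering or mipmapping samples across a tile edge (pad tiles with a border of duplicated pixels), and (b) losing per-sprite GL_REPEAT wrapping. The UV remap itself is trivial — a sketch assuming a uniform grid of equally sized tiles:

```cpp
#include <cassert>
#include <cmath>

struct UV { float u, v; };

// Map a sprite-local coordinate (0..1) into the atlas, for tile (tileX, tileY)
// in a tilesPerRow x tilesPerCol grid.
UV atlasUV(float u, float v, int tileX, int tileY, int tilesPerRow, int tilesPerCol) {
    return { (tileX + u) / tilesPerRow, (tileY + v) / tilesPerCol };
}
```

For example, the centre of tile (1, 0) in a 4x4 atlas lands at (0.375, 0.125); baking these remapped UVs into the vertex data means the shader never needs to know an atlas is involved.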
1 | Does the order of vertex buffer data when rendering indexed primitives matter? I'm building a 3d object's triangles. If I can write them to the buffer in the order they are calculated it will simplify the CPU code. The vertices for the triangles will not be contiguous. Is there any performance penalty for writing them out of order? |
1 | Detect collision with bullet physics, to make a character controller I inherited from btCollisionWorld ContactResultCallback but I really have no idea how to use this virtual function btScalar addSingleResult(btManifoldPoint amp cp, const btCollisionObjectWrapper colObj0Wrap, int partId0, int index0, const btCollisionObjectWrapper colObj1Wrap, int partId1, int index1) I thought about using btCollisionWorld ConvexResultCallback instead but there is no method in btCollisionWorld to use it. For now my only goal is to move a btCollisionObject around and detect collision with walls, to adjust the position and movement. I would just need the collision normal, some collision point, or anything else... |
1 | Calculating vertex normals in OpenGL C Does anyone know a simple solution for calculating vertex normals? I've been looking for this on the internet but I can't find a simple solution. For example, if I have some vertices like this

GLfloat vertices[] = {
     0.5f,  0.5f, 0.0f,  // Top Right
     0.5f, -0.5f, 0.0f,  // Bottom Right
    -0.5f, -0.5f, 0.0f,  // Bottom Left
    -0.5f,  0.5f, 0.0f   // Top Left
};
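A simple, widely used recipe for indexed geometry: for each triangle take the cross product of two edges to get a face normal, add that face normal into each of the triangle's three vertices, and normalize the sums at the end — vertices shared by several faces end up with a smooth average. A self-contained sketch (the position/index layout here is an assumption, 3 floats per vertex):

```cpp
#include <cassert>
#include <cmath>
#include <initializer_list>
#include <vector>

struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

std::vector<V3> computeNormals(const std::vector<V3>& pos,
                               const std::vector<unsigned>& idx) {
    std::vector<V3> n(pos.size(), {0, 0, 0});
    for (std::size_t i = 0; i + 2 < idx.size(); i += 3) {
        unsigned a = idx[i], b = idx[i + 1], c = idx[i + 2];
        // Face normal: cross of two edges (CCW winding gives outward-facing normals).
        V3 fn = cross(sub(pos[b], pos[a]), sub(pos[c], pos[a]));
        for (unsigned v : {a, b, c}) {
            n[v].x += fn.x; n[v].y += fn.y; n[v].z += fn.z;
        }
    }
    for (V3& v : n) {  // normalize the accumulated sums
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0) { v.x /= len; v.y /= len; v.z /= len; }
    }
    return n;
}
```

Note the raw position array alone isn't enough — you also need the index/triangle list, since normals are a property of faces; for a flat CCW quad like the one above, every vertex comes out as (0, 0, 1).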
1 | timing rendering to monitor screen update I have an application with a very light rendering loop, taking a predictable and small time to execute. Right now, I have two solutions. The first solution uses Vsync after a screen update, my application renders one frame immediately and then waits for the next screen update. This adds approximately one frame of latency. The second solution simply renders continuously, consuming the whole CPU and relying on OpenGL style triple buffering to catch the correct frame. Is there a way for me to acquire the monitor screen update timepoints, preferably through some kind of callback? My current workflow is GLFW with OpenGL, but other solutions are welcome too. I wish to produce a hybrid solution after a screen update, it should wait for nearly an entire frame, render one frame, and then the screen update will catch this just rendered frame. GLFW's glfwSwapInterval is almost what I want, but it works in the opposite way, by swapping the buffer right after the screen update instead of before. With the screen update timepoints, I will be able to implement what I want, by measuring confidence intervals around my rendering loop's execution time and waiting for the correct duration. |
1 | Merging OpenGL rendering result with other graphical elements like user interface. Concepts For my application, there are several elements drawing An user interface elements, and several libraries for rendering part of the content of the window. Lets imagine some kind of Google Earth where a library render the earth geoid and several buttons and texts on top (possibly with transparency). Something like that seem trivial, but it is not, cause OpenGL rendering on double buffer at 60fps and GUI elements are redrawn only on demand changes for small portion of the window. One is by GPU and the other by CPU. One use a endless main loop that peek event, the other stay blocked until some event happens. The idea is to draw the 3D elements on an internal buffer (in Thread A) and from the QT user interface (Thread B) just take the picture and put it at its correct place. Thread A and B could also have some mutex signal to synchronize. Questions are What are the correct concepts involved? FBO? PBO? Any other? Where to found up to date information about how to implements it correctly. (meaning for example PBO with OpenGL 3 , not using ARB or deprecated stuff). Is this image transfer a good idea? any better solution? what about efficiency implications? As a starting hypothesis (for simplification), I could make modification on any library source as required. |
1 | What rotation needs to be applied to align mesh with expected axis of target? I'm using LWJGL and JOML to create a 3D view of hexagons whose positions lie on a torus. I have a number (NxM) of hexagons, whose centres and normals I have calculated so they can be placed on the torus and completely cover its surface, but in the "game" engine I'm using I need to convert each item being rendered to a position and 3 rotation angles. I'm struggling to go from the 3 normals of the item to the 3 angles. EDIT Subsequent to posting this I have got some way in creating a matrix with the angles and converting to Euler angles; everything is now turned according to those angles, but they aren't facing the directions I expect. The background I'm trying to create a visualisation of a Conway Game of Life using hexagons, but instead of a simple plane, mapping each hexagon onto a torus. I've done the maths to calculate the centres of every hexagon, and the 3 direction unit vectors that they need to point to, when in their places around the torus. For illustrative purposes, here's a view of the torus and 2 hexagons that would lie on it (not real, this is just me mocking it up in Blender) What I'm struggling to understand is how to rotate the single mesh for a hexagon to its calculated normals at the position I want to place it, i.e. how do I rotate some "unit" hexagon mesh (loaded from an OBJ file exported from Blender) to point in the direction of the 3 normals I've calculated they should be for each hexagon around the torus. I have read a similar question here, but I'm struggling to get from the idea of the 4d rotation matrix to how I convert that to a Vector3f for rotations. I have the 3 vector normals, could create the 4d matrix, but I need a Vector3f (the rotations about x y z) so the mesh is drawn correctly. My code is here.
I'm following this guide for using LWJGL to create GameItems (my hexagons) and position rotate them from a loaded obj file mesh, but as I say, I'm struggling to calculate the rotation Vector3f needed to point in the same direction I've calculated. Here's the code section relevant to the problem at hand val mesh loadMesh( quot conwayhex models simple hexagon.obj quot ) hexGrid.hexAxes().forEach (location, axis) gt axis is a Matrix3f with my 3 normals at the centre of the hexagon, e.g cX cY cZ 0 1 0 0 0 1 1 0 0 val gameItem GameItem(mesh) gameItem.position location gameItem.scale 0.2f TODO calculate this according to the torus size what rotation do I give this? How do I calculate it from the given axis for the current item? gameItem.rotation Vector3f(30f, 30f, 30f) gameItems gameItem The output of the application given the above static 30 degree rotation is Can anyone help me unserstand how I apply the rotation to my items so they align to what I've calculated they should be? |
1 | Converting Euler rotation angles from Z up to Y up (Max to OpenGL) I'm working on pulling geometry and its transformation from a 3DS Max exported FBX (Z up) into an OpenGL model format (Y up). The main problem is I intend to keep the transformations as a translation plus XYZ rotations. Right now I pull all position transforms by simply going from (x y z) to (x z y) and everything looks good. However the geometry rotation comes in as Euler angle rotations (X Y Z). Rotating these has turned into a nightmare. I know I can get a matrix and just multiply it by the RotX 90 matrix, but pulling those Eulers back out is not reliable. What seems to almost work is this, and it's what seemed to give correct results most of the time. It just seems bad for the case ( 92.83, 89.41, 53.53), for example, when it looks like it's losing a 90 degree rotation on X. Matrix rot90 ( 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1) RotMat RotX RotY RotZ RotMat Rot90X RotMat Quaternion Convert from RotMat NewX Quaternion.Pitch NewY Quaternion.Yaw NewZ Quaternion.Roll I know the god of bit shifting must have some civil way of doing this. Any tips or suggestions would be greatly appreciated. P.S. Exporting the FBX with Y up won't work, as it just seems to apply a transformation to the root node, not actually swap the vertices' y z positions.
1 | OpenGL Fragment Shader simulate LCD slow response time I have a very simple OpenGL view rendering 2 triangles with a single texture applied the minimum setup for rendering a 2D game. What I do is redraw the texture every frame, and I easily get 60 fps. I would like to add an effect simulating slow LCD response by outputting pixels that are the average of the current and the previous frame. The pseudo code of the fragment shader would look like this varying vec2 v TexCoordinate uniform sampler2D u Texture void main() gl FragColor = (texture2D(u Texture, v TexCoordinate) + PIXEL PREVIOUS FRAME) / 2 PIXEL PREVIOUS FRAME = gl FragColor Well, I think shaders can't keep a persistent variable like PIXEL PREVIOUS FRAME, so I really don't know how to tackle this problem. Any suggestions?
1 | OpenGL float not working? I have a float array in my terrainFragmentShader to load and render the biomes however, the code simply does not work. No errors are thrown, and when I remove the code c biomeMap int(pass textureCoordinates.x) int(pass textureCoordinates.y) , either by deleting that section or by setting the value manually, it loads the appropriate texture. vec4 textureColour texture(backgroundTexture, pass textureCoordinates) float c biomeMap int(pass textureCoordinates.x) int(pass textureCoordinates.y) if(c lt 0) textureColour texture(sandTexture, pass textureCoordinates) else if(c lt 32) textureColour texture(stoneTexture, pass textureCoordinates) All three textures load just fine and are rendered when not using this method, and the biomes are being properly loaded and initialized. Loading public void loadBiomes(float biomeMap) for(int x 0 x lt Terrain.VERTEX COUNT x ) for(int y 0 y lt Terrain.VERTEX COUNT y ) super.loadFloat(location biomeMap x y , biomeMap x y ) Initializing location biomeMap new int Terrain.VERTEX COUNT Terrain.VERTEX COUNT for(int x 0 x lt Terrain.VERTEX COUNT x ) for(int y 0 y lt Terrain.VERTEX COUNT y ) location biomeMap x y super.getUniformLocation("biomeMap " x " " y " ") And finally here is the entire terrainFragmentShader version 400 core in vec2 pass textureCoordinates in vec3 surfaceNormal in vec3 toLightVector in vec3 toCameraVector in int biomeSize out vec4 out Color uniform sampler2D backgroundTexture uniform sampler2D sandTexture uniform sampler2D stoneTexture uniform vec3 lightColour uniform float shineDamper uniform float reflectivity uniform float biomeMap 128 128 uniform float offsetX uniform float offsetZ const int levels 32 void main(void) vec4 textureColour texture(backgroundTexture, pass textureCoordinates) float c biomeMap int(pass textureCoordinates.x) int(pass textureCoordinates.y) if(c lt 0) textureColour texture(sandTexture, pass textureCoordinates) else if(c lt 32) textureColour texture(stoneTexture, pass
textureCoordinates) vec3 unitNormal normalize(surfaceNormal) vec3 unitLightVector normalize(toLightVector) float nDotl dot(unitNormal,unitLightVector) float brightness max(nDotl, 0.25) vec3 unitVectorToCamera normalize(toCameraVector) vec3 lightDirection unitLightVector vec3 reflectedLightDirection reflect(lightDirection, unitNormal) vec3 totalDiffuse brightness lightColour out Color vec4(totalDiffuse,1.0) textureColour Why isn't the biome loading? Is the issue that I'm not getting the appropriate x y from pass textureCoordinates?
1 | Bullet Physics integration direct movement of rigid bodies I'm adding Bullet physics to my engine. The physics simulation bits are all working nicely, but one bit I'm struggling with is being able to move objects using their co ordinates, and then have them affect other Bullet objects. I currently have this code just before I step the simulation, which moves all of the objects to the co ordinates the game engine thinks each object should be at, due to outside movement. btTransform trans trans.setFromOpenGLMatrix(glm value ptr(host->getTransMat() * host->getRotMat())) motionState->setWorldTransform(trans) body->setWorldTransform(trans) Then I step the simulation, and move every object to where Bullet thinks it should be. I am aware there are nicer ways to do this part (custom written MotionState classes, I think) but I want to get the logic down first. This works, but moving a cube directly into another causes the second cube to just shake a bit. I've read in a few places I should be applying forces to objects, but I don't really want to expose the btRigidBody and physics stuff to the core game object, and I have a lot of code that does its own movement using co ordinates, which I don't really want to rewrite, although I will if it's the only way. Could I replace the code below with something that compares the position of the game object to the rigid body's position, and applies the correct force to make that happen? How would I implement this? It can't be as simple as F = MA, can it, given that this would happen every frame? Edit 1 I have implemented the formula provided by DMGregory and I've not had any success. I've tried swapping this out for applyCentralImpulse too. The objects just stay stationary and anything falling goes into hyper speed. This runs for every physics object just before stepping the simulation, and then the simulation's positions are applied back to their hosts.
if (host == nullptr) return btVector3 newPos = convert(host->getPos()) btVector3 oldPos = body->getWorldTransform().getOrigin() btVector3 dist = newPos - oldPos btVector3 acc = 2 * ((newPos - oldPos - (body->getLinearVelocity() * timeStep)) / (pow(timeStep, 2))) if (dist.length() > btScalar(0.1f)) std::cout << acc.length() << " large acc\n" body->applyCentralForce(mass * acc)
1 | How portable are OpenGL versions, really? If I write a game engine that uses OpenGL 1.5 (not assuming anything else about what I do), is it portable now, and will it still be portable five years from now or will hardware and driver support for OpenGL become exclusive to their (much further along) target OpenGL versions? Lately I've been looking at a lot of answers on this website that direct users to divert their work towards the most recent OpenGL versions, citing hardware surveys of DirectX support, and only recommending earlier versions as a last, final resort (as if to imply there is something wrong with them that makes all usage of them invalid or pointless). If I only have computers that can provide OpenGL < 1.5 or < 2.1 contexts, should I just give up game programming if I can't afford a new computer with hardware and drivers for 3.x and 4.x? Or should I finish my game engine the way I intended to? By the time I get a 4.x supporting setup, will there be new versions and a lack of backwards compatibility that trash all usage of 4.x? Will 4.x ever dominate over earlier versions, support wise, before a new major version is released?
1 | Can the diffuse and specular components of the Phong model shine through an object? I have been implementing a simple 3D engine in OpenGL, mostly based on tutorials by Tom Dalling. I have implemented the Phong lighting model as described in his tutorial, but I see light artifacts on concave shaped objects (and also when using normal mapping). I have come to a point where I don't know if my code is broken, or if this is actually normal behaviour that needs special handling. I think these artifacts could be happening because the normals of a concave object at some points still face back towards the point light source, without considering that there is a solid object in between. I tried to make a little sketch of this situation in 2D (for the diffuse component). So I need to know whether this is a common problem with this lighting model, or whether my calculations are wrong.
1 | glVertexAttribPointer normalization glVertexAttribPointer(index, size, type, normalized, stride, pointer) If I use type GL UNSIGNED BYTE and normalized GL TRUE, how is it normalized? Would the data be divided by 256 for normalization? That would mean there is no way to get a normalized value of exactly 1.0f.
1 | How can I apply a mesh distortion to walls like in Dungeon Keeper 2? In Dungeon Keeper 2, the walls of the dungeon have different random shapes depending on their state (freshly dug or reinforced, and so on). They look like they are cubes of 3x3 vertices that have an x z planar jitter or distortion. However, the corner vertices are shared by adjacent cubes, so there must be some kind of algorithm rather than just random jitter. How could I achieve a similar effect?
1 | Wrongly scaled render on cocos2dx 3.2 when rendering to multiple FBOs (more than 1 deep)? I have a bug that only happens when the retina flag is disabled. I try to draw through multiple render textures to achieve a multi pass post effect, so I have 3 render textures fbo1 for the final post effect, fbo2 for the back buffer with lighting, fbo3 for the light mask. I apply this using the FBO mechanisms of cocos2dx push fbo1 to the screen, push fbo2 to fbo1 fbo3 is just a sampler texture used by the shader that renders into fbo2. Gameplay actors are drawn into fbo2, after that fbo2 is rendered to fbo1 (we can apply some effect here), then fbo1 is drawn to the screen, then some GUI layering is done on top. The thing is, everything drawn into fbo2 seems to be scaled down 3 times, so it looks like a small game screen in the bottom left corner, but the light from fbo3, and GUI drawn directly to the screen, are positioned normally and not scaled. I really don't know what is scaling my fbo2 I checked the matrices and they are all correct. There are a lot of setViewport calls in RenderTexture.h and I don't know how they actually work, so I'd appreciate it if anyone knows how the viewport works with FBOs. There is no direct OpenGL access in my code except attaching shaders. If I don't use fbo2 (using fbo1 directly), the bug doesn't happen. This doesn't happen with retina true (scaled 2x) it happens on win32 or Mac when I disable retina. Tracing the draw order correctly logs the commands below onBegin fbo1 onBegin fbo2 onBegin fbo3 (I render some sprites to this fbo first, e.g. an alpha gradient mask) onEnd fbo3 (I render gameplay actors here) onEnd fbo2 (render fbo2's attached texture with a shader that carries a sampler of fbo3's attached texture) onEnd fbo1 (render fbo1's attached texture, may carry some shader) (render the GUI part)
1 | Runtime resolution changing with GLFW3 I've been trying to figure out the correct method for changing the resolution fullscreen state of a GLFW window for a while now, but after searching, all I found were references on how to do it with older versions of the library, such as this. I suspect you'd just destroy the window object and re create it, but I'm not sure, because some functions such as glfwSetKeyCallback take a GLFWwindow as a parameter, and I don't know if those would continue to work after it has been re created. The documentation also does not have any examples of doing such a thing, so any help would be appreciated.
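For reference, GLFW gained an in-place way to do this in version 3.2: glfwSetWindowMonitor changes the monitor, position, size and refresh rate of an existing window, so nothing needs to be destroyed and callbacks registered on the window keep working. A sketch (assuming a window created earlier with glfwCreateWindow and named window):

```cpp
/* Sketch (GLFW >= 3.2): switch an existing window to fullscreen and back
   without recreating it -- the GLFWwindow object, and therefore every
   callback set with glfwSetKeyCallback etc., stays valid throughout. */
GLFWmonitor *monitor = glfwGetPrimaryMonitor();
const GLFWvidmode *mode = glfwGetVideoMode(monitor);

/* go fullscreen at the desktop resolution */
glfwSetWindowMonitor(window, monitor, 0, 0,
                     mode->width, mode->height, mode->refreshRate);

/* back to a 1280x720 window at position (100, 100);
   refresh rate is ignored in windowed mode */
glfwSetWindowMonitor(window, NULL, 100, 100, 1280, 720, GLFW_DONT_CARE);
```

On GLFW 3.0/3.1, destroy-and-recreate (re-registering the callbacks on the new window) was indeed the only option.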
1 | glCreateShader causes segmentation fault I can't create a shader when trying to use shaders with SFML. The function glCreateShader(GL VERTEX SHADER) causes a segmentation fault. At first I googled it and found that this happens when the program does not have an OpenGL context. I tried SDL first, but the poor documentation and "look at the header to know what to do" attitude made me go for SFML. The code that causes the seg fault is below sf Window App(sf VideoMode(800, 600, 32), "SFML OpenGL") Set color and depth clear value glClearDepth(1.f) glClearColor(0.f, 0.f, 0.f, 0.f) Enable Z buffer read and write glEnable(GL DEPTH TEST) glDepthMask(GL TRUE) Setup a perspective projection glMatrixMode(GL PROJECTION) glLoadIdentity() GLuint vertShader glCreateShader(GL VERTEX SHADER) ... I'm including glew, gl.h, sfml window and sfml system, using OpenGL 2.1 on gcc Linux. What is missing?
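One guess worth checking: with GLEW, every GL 1.2+ entry point (including glCreateShader) is a function pointer that stays NULL until glewInit() runs, which produces exactly this segfault even though a context exists. glewInit must be called after the context is created, i.e. after the sf::Window. A sketch of the expected ordering:

```cpp
/* Likely fix: initialise GLEW once the sf::Window (and thus the GL
   context) exists; only then do the GL 2.x function pointers resolve. */
sf::Window App(sf::VideoMode(800, 600, 32), "SFML OpenGL");

glewExperimental = GL_TRUE;   /* helps on some drivers / core contexts */
GLenum err = glewInit();
if (err != GLEW_OK) {
    fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
}

/* only now is glCreateShader a valid pointer */
GLuint vertShader = glCreateShader(GL_VERTEX_SHADER);
```

If glewInit is already being called, the other thing to verify is that it runs on the same thread that owns the active context.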
1 | Rotate view matrix based on touch coordinates I'm working on an Android game where I need to rotate the camera around the origin based on the user dragging their finger. My view matrix has an initial position sitting on the negative z axis and facing the origin. I have succeeded in moving the camera through rotation left or right, up or down based on the user dragging a finger, but my problem is that after I drag my finger up down and rotate say 90 degrees, so my initial position on z is now on y and still facing the origin, if I then drag my finger left right I want to rotate from y to x, but what happens is it rotates around the y pole. This is to be expected, as I am mapping 2D touch drag coords to 3D space, but I don't know where to start trying to do what I want. Perhaps someone can point me in the right direction I've been googling for a while now, but I didn't know what the thing I want to do is called! Edit What I was looking for is called an ArcBall Google it for lots of info on it.
1 | Is it acceptable to buffer no data in OpenGL? Correct me if I am wrong, but if you call glBufferData(...) on an existing buffer, it will resize the buffer to whatever data you upload. Does that mean if I call something like glBufferData(GL ARRAY BUFFER, 0, NULL, GL DYNAMIC DRAW) it will keep the buffer object in existence, but with zero size and no data? In the description of glBufferData, it says it is acceptable to pass NULL as the data parameter, but it mentions nothing about the size parameter.
1 | gluUnProject works, but only when the camera is not rotated I am working on a very basic 3D program, my first one using OpenGL. What I am trying to do is trace a ray from the mouse's location on click, which works, but only when the camera is not rotated. When the camera is rotated, the unproject treats it as if it were not. What am I doing wrong? The code for the unproject Coord3<GLdouble> Display GetOGLPos(uint16 t x, uint16 t y) GLint viewport[4] GLdouble modelview[16] GLdouble projection[16] Coord2<GLfloat> window location Coord3<GLdouble> resultant ray glGetDoublev(GL MODELVIEW MATRIX, modelview) glGetDoublev(GL PROJECTION MATRIX, projection) glGetIntegerv(GL VIEWPORT, viewport) window location.x = (float)x window location.y = viewport[3] - (float)y gluUnProject(window location.x, window location.y, 1.0f, modelview, projection, viewport, &resultant ray.x, &resultant ray.y, &resultant ray.z) return resultant ray The display code void Display Render() if(!dirty) return glTranslatef(camera.location.x, camera.location.y, camera.location.z) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glRotatef(camera.angle.y, 1, 0, 0) glRotatef(camera.angle.x, 0, 1, 0) glCallList(cube) glColor3ub(0, 0, 255) glBegin(GL LINES) glVertex3f(0,0,0) glVertex3f(mouse click.x, mouse click.y, mouse click.z) glEnd() glLoadIdentity() SDL GL SwapBuffers() dirty = 0 SDL Delay(33)
1 | Simple OpenGL code paradox I'm trying to create the most basic application that will draw a cube using indices and textures, for starters, but nothing about it makes any sense whatsoever. I ran it on my machine with an nVidia card, and the display driver (latest, not that it matters) crashes (null ref), along with the program, on the glDrawElements(...) command (Renderer.cpp). Fine. Then I sent it to a friend of mine who has an AMD card to try to debug it, because I was unable to find the bug in the code, and the program worked (didn't crash) on his machine. That surprised me, since I only did the most basic OpenGL stuff no way it was vendor specific. To get to the bottom of this, I downloaded apitrace (http apitrace.github.io ) to try to see what's going on under the hood of my machine and why my driver crashes. Once I dumped the trace after running the program, it said that the shader program linker failed because I didn't write to gl Position (which I had). The problem is that the program links just fine in the VS debugger, and no error is returned there. So it just keeps getting weirder. If you want to mess with this, you can get the code (commit with issue linked) at https github.com Karlovsky120 7DaysWorldEditor commit e86b8be40c81511c4416dc5e253b90e42a5a8ec0 I don't think you'd see the bug if I just pasted the code here there must be a mistake with the setup somewhere. I really made an effort to make this as hassle free as possible it's a one click install clone it, build it, and you're good to go, no additional setup required. You will need to open it with VS2017, though. There are a lot of classes in the code, but all of the relevant code is in Scene.cpp and the classes it references. Thank you for any help you give, even if it's just a guess at what might be wrong, because I'm at my wits' end here.
1 | Why should every light source have its own ambient light? I'm currently learning how to use OpenGL, and I'm following a tutorial on learnOpenGL. I'm at the lighting chapter, and it has introduced basic lighting ambient and diffuse with specular highlight, essentially Phong shading. I'm confused about the ambient term. I understand that ambient light is basically light that comes from nowhere in particular it just fills the scene with a soft light. It's used to fake global illumination in renderers that do not compute it, so that things not in the direct view of a light source are not completely black but get some generic lighting. So going by that definition, before actually trying to implement a renderer myself, and from using other engines like UE4, I always just assumed that ambient light would be a separate thing from, say, a point light or a directional light. I thought it would be a parameter independent of other stuff that would basically just light the scene. But in the tutorial, at least for the basic lighting part, the ambient is actually part of the light "object" itself. Essentially every light source has its own ambient light. This seems counterintuitive to me. Why does every light need an ambient part too? Wouldn't it make more sense to have a separate ambient light object that provides the ambient light itself? Also, if my light has a radius of say 100 units, then outside of that radius nothing will receive any direct light, obviously, but also no ambient, and everything will be completely dark. Isn't the point of ambient light exactly not to have scenes without direct light go completely dark? Most importantly, I'm wondering if this implementation approach is the standard used in modern game engines, or if it's maybe a simplification in the tutorial.
(For example I don't remember seeing a parameter for ambient light in the light objects themselves in UE4) Links for context https learnopengl.com Lighting Basic Lighting https learnopengl.com Lighting Light casters |
1 | Can I develop Linux Windows games using C#, .NET and OpenGL? I want to learn to develop 3D and 2D graphics with either OpenGL or DirectX. I chose OpenGL as it's used in WebGL and works cross platform. I already know .NET well, and the C# language just fits my style very well I feel secure with it. However, I want to develop a game that works on Windows and Linux. Is this possible with the combination of C#, .NET and OpenGL? I know that OpenGL will not be a problem.
1 | What is UVIndex and how do I use it in OpenGL? I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to the .fbx format. I've been able to draw models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model now has a "UVIndex" and I'm not sure where I am supposed to put this UVIndex. My code looks like this GL.bindBuffer(GL.ARRAY BUFFER, this.Model.House.VertexBuffer) GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute "vPosition" ,3,GL.FLOAT, false, 0, 0) GL.bindBuffer(GL.ARRAY BUFFER, this.Model.House.NormalBuffer) GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute "vNormal" , 3, GL.FLOAT, false, 0, 0) GL.bindBuffer(GL.ARRAY BUFFER, this.Model.House.TexCoordBuffer) GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute "TexCoord" , 2, GL.FLOAT, false, 0, 0) GL.bindBuffer(GL.ELEMENT ARRAY BUFFER, this.Model.House.IndexBuffer) GL.bindTexture(GL.TEXTURE 2D, this.Texture.HTex1) GL.activeTexture(GL.TEXTURE0) GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED SHORT, 0) But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file, since I've never drawn a model that uses a UVIndex I really have no clue what to do with it. This is the json file containing the model's data http pastebin.com raw.php?i G294TVmz
1 | OpenGL optimization tips What tips or tricks do you have when it comes to making OpenGL rendering more efficient?
1 | How to implement translation, scale, rotation gizmos for manipulating 3D object transforms? I am in the process of developing a basic 3D editor. It uses OpenGL for rendering a 3D world. Right now my scene is just a few boxes of different sizes, and I am at the stage where I want to be able to select each box and then move scale rotate it to achieve any transform I want. How can I solve the problem of implementing both the rendering of these tools' gizmos (or handles, as people usually call them), and picking them on each axis to perform the change in the transform with my mouse? For clarity My research so far suggests the cleanest approach is to have an axis aligned bounding box per arrow in the gizmo and another one per square (the ones that move the object in a plane rather than along a single axis), and then cast a ray from the mouse position and see what it collides with. But this is still a bit too abstract for me I would appreciate further guidance on how this algorithm would go (pseudocode is more than enough).
1 | How can I pass an array of floats to the fragment shader using textures? I want to map out a 2D array of depth elements for the fragment shader to check depth against, to create shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL DEPTH COMPONENT glTexImage2D(GL TEXTURE 2D, 0, GL DEPTH COMPONENT, 512, 512, 0, GL DEPTH COMPONENT, GL FLOAT, smap) which doesn't work, because that stores depth components (0.0 1.0), which I don't want, because I have no idea how to calculate them from the depth value produced by the light source's MVP matrix multiplied by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?
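If the goal is raw floats rather than normalized depth values, one hedged alternative is a single-channel floating-point texture, internal format GL_R32F, which stores unclamped floats the shader can read back with an ordinary texture lookup. A sketch (assumes a GL 3.0-class context, or the relevant float-texture extension, and a `float* data` array of 512x512 values):

```cpp
/* Unclamped 32-bit float storage, one channel per texel; the shader reads
   the value back as texture(sampler, uv).r. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0,
             GL_RED, GL_FLOAT, data);

/* Float textures are not filterable on all hardware -- nearest is safe. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```

This sidesteps the 0.0-1.0 range of GL_DEPTH_COMPONENT entirely, so the light-space depth values can be stored in whatever units the lighting math produces them.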
1 | openGL Vertex Projection does not work as expected I try to render a simple grid using glBegin(GL LINES). I have a class Camera, which provides values like this float farPlane 100.0f float nearPlane 0.1f float screenRatio 1,42857 width height 1000 700 float frustum 70.0f glm vec3 position glm vec3(3.0f, 3.0f, 3.0f) glm vec3 UP glm vec3(0.0f, 1.0f, 0.0f) glm mat4 view glm lookAt(position, position glm normalize(position), UP) makes the lookAt vector always point towards (0, 0, 0) glm mat4 projection glm perspective(frustum, screenRatio, nearPlane, farPlane) using the view and projection matrices, i transform every vertex from my Grid model and render it. glm mat4 viewProjectionMatrix Graphic camera gt getViewProjectionMatrix() returns Camera projection Camera view glBegin(GL LINES) for (unsigned int i 0 i lt vertexNum i) glColor3f(vertexArray i .normal.x, vertexArray i .normal.y, vertexArray i .normal.z) normal vector is used for color in this case glm vec4 translatedPosition(viewProjectionMatrix gridTransformationMatrix (glm vec4(vertexArray i .position, 1.0f))) glVertex3f(translatedPosition.x, translatedPosition.y, translatedPosition.z) glEnd() but this is what i see when i move the camera along the line (0,0,0) u (1,1,1) http i.imgur.com PrcDcLs.gifv (you can see the camera cooridnates in the console) |