Frame Buffer Object (FBO) is not working. What is the right way to use it?

I am trying to use an FBO, but I am having some problems. I will show you my steps, but first my running screen, so we can compare them (before FBO / after FBO). My running screen and Draw() function code:

```cpp
glClearColor(0.5, 0.5, 0.5, 1.0);
glLoadIdentity();
glTranslatef(0.0, 0.0, -3.0);
glRotatef(0, 0.0, 1.0, 0.0);
glUniform3f(glGetUniformLocation(mainShader->getProgramId(), "lightPos"), 0, 1, 2);
mainShader->useShader();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
scene->draw(mainShader->getProgramId());
mainShader->delShader();
```

After that I tried to add the FBO. Create-FBO-texture function:

```cpp
unsigned int createTexture(int w, int h, bool isDepth = false)
{
    unsigned int textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexImage2D(GL_TEXTURE_2D, 0, (!isDepth ? GL_RGBA8 : GL_DEPTH_COMPONENT), w, h, 0,
                 isDepth ? GL_DEPTH_COMPONENT : GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    int i = glGetError();
    if (i != 0)
        std::cout << "Error happened while loading the texture: " << i << std::endl;
    glBindTexture(GL_TEXTURE_2D, 0);
    return textureId;
}
```

Init() function:

```cpp
void init()
{
    glClearColor(0, 0, 0, 1);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(50, 640.0 / 480.0, 1, 1000);
    glMatrixMode(GL_MODELVIEW);
    glEnable(GL_DEPTH_TEST);
    mainShader = new shader("vertex.vs", "fragment.frag");
    quadRenderShader = new shader("quadRender.vs", "quadRender.frag");
    scene = new meshLoader("test.blend");
    renderTexture = createTexture(640, 480);
    depthTexture = createTexture(640, 480, true);
    glGenFramebuffers(1, &FBO);
    glBindFramebuffer(GL_FRAMEBUFFER, FBO);
    // GL_COLOR_ATTACHMENT0 / GL_DEPTH_ATTACHMENT
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTexture, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
    int i = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (i != GL_FRAMEBUFFER_COMPLETE)
        std::cout << "Framebuffer is not OK, status " << i << std::endl;
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

And the Draw() function:

```cpp
void display()
{
    // rendering to texture...
    glClearColor(0.5, 0.5, 0.5, 1.0);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -3.0);
    glRotatef(0, 0.0, 1.0, 0.0);
    glUniform3f(glGetUniformLocation(mainShader->getProgramId(), "lightPos"), 0, 1, 2);
    mainShader->useShader();
    glBindFramebuffer(GL_FRAMEBUFFER, FBO);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    scene->draw(mainShader->getProgramId());
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    mainShader->delShader();

    glClearColor(0.0, 0.0, 0.0, 1.0);
    // render texture to screen
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    quadRenderShader->useShader();
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, depthTexture);
    glUniform1i(glGetUniformLocation(quadRenderShader->getProgramId(), "texture"), 0);
    quad->draw(quadRenderShader->getProgramId());
    quadRenderShader->delShader();
}
```

The result: only the last clear color (glClearColor) is drawn, so the screen is black. The result should be like in the tutorial. Note: I know the tutorial monkey is purple, but that is not the problem.
HDR Tone Mapping: choosing parameters

I implement HDR in my graphics engine (deferred rendering) based on this document (link). I save the luminance in a texture (RGBA16F) this way:

```glsl
const float delta = 1e-6;
vec3 color = texture(texture0, texCoord).xyz;
float luminance = dot(color, vec3(0.2125, 0.7154, 0.0721));
float logLuminance = log(delta + luminance);
// first channel stores max luminance during a minification process
fragColor = vec4(logLuminance, logLuminance, 0.0, 0.0);
```

Then I can calculate an average luminance and find a max luminance. The delta is 1e-6. Is that a good choice? Can I calculate "a" (equation 2) dynamically to achieve better results?
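For reference, the log-average that the logLuminance values feed into (equation 1 of the paper being followed) is the exponential of the mean of log(delta + L); the delta exists only to guard against log(0) on black pixels. A sketch of the maths in plain Python:

```python
import math

def log_average_luminance(luminances, delta=1e-6):
    """Exp of the mean of log(delta + L), i.e. the geometric mean of the
    (delta-shifted) pixel luminances."""
    n = len(luminances)
    return math.exp(sum(math.log(delta + l) for l in luminances) / n)

# A uniform image recovers (roughly) its own luminance:
print(log_average_luminance([0.5, 0.5, 0.5, 0.5]))  # ~0.5
# An all-black image collapses to delta rather than blowing up:
print(log_average_luminance([0.0, 0.0, 0.0, 0.0]))  # 1e-06
```

With delta this small, the result for any non-black image is dominated by the actual pixel values, which suggests 1e-6 is a reasonable choice as long as your scene is not almost entirely black.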
Is it possible to access vertex values of an object created using display lists?

I wanted to learn to develop games, so I started with Tetris. I encountered a problem right at the beginning. My game is 20% ready. There are five types of shapes, viz. YELL, BOX, LINE, TEE, YEL, which are shapes of different kinds. I created five display lists like this:

```cpp
#define YELL 1
#define BOX  2
#define LINE 3
#define TEE  4
#define YEL  5

int bs = 20;

void defineshapes()
{
    glNewList(YELL, GL_COMPILE);
    glPushMatrix();
    glTranslated(-bs, 0, 0);
    glColor3fv(color[color1]);
    glBegin(GL_POLYGON);
    glVertex2i(0, 0); glVertex2i(bs, 0); glVertex2i(bs, -bs); glVertex2i(0, -bs);
    glEnd();
    glTranslated(bs, 0, 0);
    glColor3fv(color[color2]);
    glBegin(GL_POLYGON);
    glVertex2i(0, 0); glVertex2i(bs, 0); glVertex2i(bs, -bs); glVertex2i(0, -bs);
    glEnd();
    glTranslated(bs, 0, 0);
    glColor3fv(color[color3]);
    glBegin(GL_POLYGON);
    glVertex2i(0, 0); glVertex2i(bs, 0); glVertex2i(bs, -bs); glVertex2i(0, -bs);
    glEnd();
    glTranslated(-2 * bs, -bs, 0);
    glColor3fv(color[color4]);
    glBegin(GL_POLYGON);
    glVertex2i(0, 0); glVertex2i(bs, 0); glVertex2i(bs, -bs); glVertex2i(0, -bs);
    glEnd();
    glPopMatrix();
    glEndList();

    glNewList(BOX, GL_COMPILE);
    glPushMatrix();
    // .....
}
```

and randomly picking one like this:

```cpp
srand(time(NULL));
color1 = rand() % 6;
color2 = rand() % 6;
color3 = rand() % 6;
color4 = rand() % 6;
defineshapes();
randshape = rand() % 5 + 1;
```

and drawing it in the display function like this:

```cpp
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glTranslated(x2, y2, 0);
    glCallList(randshape);
    glPopMatrix();
    glFlush();
}
```

I expected a problem right at the beginning when I used display lists, because I have to access the vertex values (x, y) of the shapes to check whether they might overlap with another block below before moving them down. Is there a way of knowing the vertex values of objects drawn using display lists? I encountered the same problem once before, when I needed to create the tyre of a car. I wanted the tyre painted with colours, so I created a mesh by rotating a circle, again using display lists. Then I needed to know the values of the intersection points of the longitudes and latitudes, which I didn't know how to get, so I abandoned that task. A YELL looks like this. What is the easiest method of getting around this problem? I am sure OpenGL has a way of getting this done, but I am new to it.
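Display lists only record GL commands; OpenGL gives you no way to read vertex data back out of one. The usual approach is to keep each piece's cells in your own data structure and use that both for drawing and for collision tests. A minimal sketch of the collision side in plain Python (the grid-cell piece layout here is invented for illustration):

```python
def shape_collides(shape_cells, occupied, dx=0, dy=0):
    """shape_cells: list of (x, y) grid cells the piece occupies;
    occupied: set of already-filled board cells.
    True if moving the piece by (dx, dy) would land on a filled cell."""
    return any((x + dx, y + dy) in occupied for (x, y) in shape_cells)

yell = [(0, 0), (1, 0), (2, 0), (0, 1)]   # an L-shaped piece, in grid cells
board = {(0, 2), (5, 5)}
print(shape_collides(yell, board))        # False: current spot is free
print(shape_collides(yell, board, dy=1))  # True: (0, 1) would move onto (0, 2)
```

The same idea answers the tyre question: keep the computed circle/mesh points in an array, then both feed them to glVertex and query them for intersections.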
Does Java support OpenGL by itself?

Note: this is long, but I explain everything you need to know. Don't read halfway through it and ask "What's the question?". It's simple but long, and I need help as soon as possible. So, I asked a similar question not long ago, but it wasn't my best work. What I'm trying to do is make a small jar, with no other files but the jar (maybe natives if needed), that handles a window and graphics for games. I'm making this for some people I know who don't want to get into advanced graphics and such to make games, plus I figure it would be easier to stick everything they need into one jar that they know how to use. Anyway, I found JOGL, but after the past 3 hours or so all I got from JOGL was the memory to never use it, because it's a pain to install (at least for me), everyone says a different way to install it, and I need like 100 files along with my jar to get it to work. So since I'm not dealing with JOGL, I figured it's best to try and find something else. Does anyone know a way to get OpenGL into Java without libraries that add more files, and if so, just one file? I want it so it's just that jar and nothing else. I'm very confused. I would also like it to run on Windows, Linux, and Mac. I only have a Windows machine, although I can get Linux to test it on, and I know someone who has a Mac, but keep in mind I'm building it on a Windows machine. So my question really is: how would I be able to stick OpenGL (I would like OpenAL and maybe OpenCL too) into a single jar and nothing else? I have a few exceptions: I'm kind of OK with needing a few natives, but I don't want 10 jars and 50 natives, I need it to work on all kinds of machines, and I would like to be able to use Swing to handle the window.
OpenGL: have an object follow the mouse

I want to have an object follow my mouse around the screen in OpenGL (I am also using GLEW, GLFW, and GLM). The best idea I've come up with: get the coordinates within the window with glfwGetCursorPos. The window was created with

```cpp
window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);
```

and the code to get the coordinates is

```cpp
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
```

Next, I use glm::unProject to get the coordinates in "object space":

```cpp
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
glm::vec3 un = glm::unProject(pos, View * Model, Projection, viewport);
```

There are two potential problems I can already see. The viewport is fine, as the initial x, y coordinates of the lower left are indeed 0, 0, and it's indeed a 1024x768 window. However, the position vector I create doesn't seem right. The z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the third dimension of the window coordinates even means (computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object space? I think it does, but the documentation is not clear. Finally, for each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by un[1], and the z coordinate by un[2]. However, since the position vector being unprojected is likely wrong, this is not giving good results: the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a value of 0.0 for z.

Edit: The (incorrectly) unprojected x values range from about -0.552 to 0.552, and the y values from about -0.411 to 0.411.
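One detail worth pinning down before touching unProject: GLFW reports the cursor with the origin at the top-left and y growing downwards, while GL window coordinates (what unProject expects) have the origin at the bottom-left. The window-space z is a depth in [0, 1] between the near plane (0) and far plane (1), which is why a fixed z of 0.0 always unprojects onto the near plane and yields a constant un[2]. A sketch of just the coordinate conversion:

```python
def cursor_to_window_coords(xpos, ypos, width, height, depth=0.0):
    """Convert a GLFW cursor position (origin top-left, y down) into GL
    window coordinates (origin bottom-left, y up). depth in [0, 1] picks a
    point between the near (0) and far (1) planes."""
    return (xpos, height - ypos, depth)

print(cursor_to_window_coords(0, 0, 1024, 768))      # (0, 768, 0.0) top-left corner
print(cursor_to_window_coords(512, 384, 1024, 768))  # (512, 384, 0.0) centre
```

To pick a genuine 3D point you would unproject twice (depth 0 and depth 1) and intersect the resulting ray with the plane your object lives on; a single depth value can only give one point along that ray.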
Generate a screen-space distance field from the depth buffer

I've been wanting to try out raymarching on real 3D scenes to implement effects like AO, soft shadows, and such. I pretty much know how to use signed distance functions (as described by Inigo Quilez) to approximate a model and raymarch it, but I want much more precision and would like the distance field of my actual scene. How would you go about generating an (un)signed distance field from the depth buffer in screen space? Or would the usual depth buffer be usable as a distance field (I don't think so)? Or might there be better ways that don't use the depth buffer at all?
How to adapt a WebGL shader using mouse position to have symmetrical behaviour

I'm trying to create a particular kind of effect for the background of my SpriteKit game, and I've found some shaders on Shadertoy that are close to what I want, but I'm struggling to adapt them to do exactly what I need. There are a few shaders that simulate the sun with atmospheric scattering based on its x/y position. Here are some examples:

https://www.shadertoy.com/view/MsVSWt
https://www.shadertoy.com/view/MdtXD2

I want something similar to this effect, but with the glow symmetrical, so that when the 'sun' gets close to the bottom or top edge the glow effect is at a maximum, and when it is in the centre it is at a minimum. I've been playing with these shaders trying to get this effect, but my understanding of WebGL is clearly lacking, as I'm struggling to achieve it. What I've tried is replacing variables such as sunPosition.y with abs(sunPosition.y), my logic being that the y axis ranges from -1 to 1, so I should be able to get symmetry by making everything positive, but this in fact has no effect. If someone could nudge me in the right direction it would be greatly appreciated, as I'm now thoroughly confused by WebGL!
How to generate an RGB*a texture for a glow effect in GLSL?

I would like to create a glow effect in GLSL. There is a tutorial that explains how to multiply RGB by a. I have some questions. Is this an operation done in a fragment shader, where RGB*a is calculated each frame, and then the blur applied at the end of the shader, the whole process in the same shader, called each frame? Or can we compute RGB*a in a separate pass at the beginning of the game and send that texture to the fragment shader that will blur the result? I have tried to translate the information I found, but I am a beginner with shaders. Could someone put me on the right track here? Should we use a "buffer"? Here is my starting code:

```cpp
GLuint CreateBufferForMixAB()
{
    unsigned char *data;
    GLuint bindingPoint = 1, mixTex, buffer;
    glGenTextures(1, &mixTex);
    glBindTexture(GL_TEXTURE_2D, mixTex);
    data = (unsigned char *)malloc(width * height);
    // ...imageTex.rgb * alphaTex.rgb...
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_UNIFORM_BUFFER, buffer);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(mixTex), &mixTex, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, buffer);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
}
```

Thanks.
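As background for the blur step the question mentions: glow blurs are usually a separate render pass over a bright-pass (RGB*a) texture, and each output pixel is just an average of its neighbours. A 1-D sketch of that averaging in plain Python (in a real shader this would run per-pixel, typically once horizontally and once vertically):

```python
def box_blur_1d(samples, radius=1):
    """Average each sample with its neighbours within `radius`,
    clamping the window at the edges."""
    out = []
    n = len(samples)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n - 1, i + radius)
        window = samples[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

# A single bright pixel spreads into its neighbours, which is the glow:
print(box_blur_1d([0, 0, 9, 0, 0]))  # [0.0, 3.0, 3.0, 3.0, 0.0]
```

Doing this in a second pass (rather than inside the main shader) means the expensive blur runs once per glow texture, not once per scene fragment.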
OpenGL: VBO or glBegin()/glEnd()?

I was recently given this link to a tutorial site by someone to whom I gave the original OpenGL Red Book. The third header down says distinctly to forget glBegin() & glEnd() as the typical render method. I learned via the Red Book's method, but I see some benefit in VBOs. Is this really the way to go, and if so, is there a way to easily convert the render code, and subsequent shaders, to VBOs and the associated datatypes?
Queries regarding geometry shaders

I am dealing with geometry shaders using the GL_ARB_geometry_shader4 extension. My code goes like:

```cpp
GLfloat vertices[] = {
    -0.5, 0.25, 1.0,
    -0.5, 0.75, 1.0,
     0.5, 0.75, 1.0,
     0.5, 0.25, 1.0,
    -0.6, 0.35, 1.0,
    -0.6, 0.85, 1.0,
     0.6, 0.85, 1.0,
     0.6, 0.35, 1.0
};

glProgramParameteriEXT(psId, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
glProgramParameteriEXT(psId, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
glLinkProgram(psId);
glBindAttribLocation(psId, 0, "Position");
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```

My vertex shader is:

```glsl
#version 150
in vec3 Position;
void main()
{
    gl_Position = vec4(Position, 1.0);
}
```

Geometry shader:

```glsl
#version 150
#extension GL_EXT_geometry_shader4 : enable
in vec4 pos[3];
void main()
{
    int i;
    vec4 vertex;
    gl_Position = pos[0];
    EmitVertex();
    gl_Position = pos[1];
    EmitVertex();
    gl_Position = pos[2];
    EmitVertex();
    gl_Position = pos[0] + vec4(0.3, 0.0, 0.0, 0.0);
    EmitVertex();
    EndPrimitive();
}
```

Nothing is rendered with this code. What exactly should the mode in glDrawArrays() be? How does the GL_GEOMETRY_OUTPUT_TYPE_EXT parameter affect glDrawArrays()? What I expect is that 3 vertices will be passed on to the geometry shader, and using those we construct a primitive of size 4 (assuming GL_TRIANGLE_STRIP requires 4 vertices). Can somebody please throw some light on this?
C OpenGL game timer

Possible duplicate: "Tips for writing the main game loop?" I am attempting to make a game using OpenGL and C, but I haven't been able to find any good tutorials, or anything that could assist me in setting up my main game loop with some kind of timer/tick system. I would prefer to use as few external libraries as possible. I am using Win32. Any help would be appreciated.
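Independent of which Win32 timer you use (the frame times would typically come from QueryPerformanceCounter), the usual structure is a fixed-timestep loop: render as often as you like, but advance the simulation in fixed dt chunks accumulated from measured frame times. A sketch of just that logic in Python:

```python
def run_fixed_timestep(frame_times, dt=1.0 / 60.0):
    """Classic fixed-timestep loop. frame_times: measured duration of each
    rendered frame. The simulation is stepped in fixed dt chunks; leftover
    time carries over in the accumulator. Returns the number of simulation
    updates performed."""
    accumulator = 0.0
    updates = 0
    for frame in frame_times:
        accumulator += frame
        while accumulator >= dt:
            # update_game(dt) would go here
            updates += 1
            accumulator -= dt
    return updates

# Two frames of 1/30 s each yield 4 updates at a 60 Hz tick rate:
print(run_fixed_timestep([1 / 30, 1 / 30]))  # 4
```

This keeps game logic deterministic regardless of rendering speed: a slow frame simply triggers several updates to catch up, and a fast frame banks its time in the accumulator.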
Combining and drawing 2D lights in OpenGL

I have progressed to the lighting portion of my little framework, and I managed to solve a few initial problems, so now I'm left with the subtle weirdnesses. At this stage I just have 2 lights: the ambient daylight and a hard-coded example light in my fragment shader (ultimately I want multiple lights that I loop through and add together, and shadows would be nice too, but one step at a time!):

```glsl
uniform sampler2D u_image;
varying vec2 v_texCoords;
varying vec4 v_position;
uniform vec3 ambient_light; // set as (0.3, 0.3, 0.3), night!

vec2 point_light_pos = vec2(-0.4, 0.3);
vec3 point_light_col = vec3(0.999, 0.999, 0.999);
float point_light_intensity = 0.4;

void main()
{
    vec4 frag_color = texture2D(u_image, v_texCoords);
    if (frag_color.a < 1.0)
        discard;
    float distance = distance(point_light_pos, v_position.xy);
    float diffuse = 0.0;
    if (distance <= point_light_intensity)
        diffuse = 1.0 - abs(distance / point_light_intensity);
    gl_FragColor = vec4(min(frag_color.rgb * ((point_light_col * diffuse) + ambient_light), frag_color.rgb), 1.0);
}
```

This almost does what I want, but there are 2 problems: the light has a bright ring in it, and the light is squashed on the y axis. I actually like how it looks squashed at this ratio, but I'd much rather do it deliberately. I think the y-axis squashing is down to the window aspect ratio; if the window is square, the light is circular, but I don't know how to fix this in the fragment shader. I don't know where the bright ring is coming from.
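On the squashing specifically: coordinates that run -1..1 on both axes cover different numbers of pixels horizontally and vertically on a non-square window, so a circle in that space renders as an ellipse. A common fix is to scale the x difference by the aspect ratio (width/height, passed in as a uniform) before taking the distance. A sketch of the maths in Python:

```python
import math

def light_distance(light, point, aspect):
    """Distance in a space where the x difference is scaled by the window
    aspect ratio, so the light's falloff stays circular on screen."""
    dx = (light[0] - point[0]) * aspect
    dy = light[1] - point[1]
    return math.hypot(dx, dy)

# On a square window (aspect 1) nothing changes:
print(light_distance((0, 0), (0.3, 0.0), 1.0))  # 0.3
# On a 16:9 window the same coordinate offset spans more pixels,
# and the corrected distance reflects that:
print(round(light_distance((0, 0), (0.3, 0.0), 16 / 9), 4))  # 0.5333
```

Choosing which axis to scale decides whether the light's nominal radius is measured vertically or horizontally; scaling x keeps the radius meaningful in the (shorter) vertical direction.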
Performance issues with different types of uniform buffer

I've tested different buffer layouts because I'm confused about some performance issues. These are my test buffers (OpenGL 4.1, GLSL 410, GT 750M, MacBook Pro).

Test 1: fixed struct buffer with unused variables inside:

```glsl
struct SPerFrameUniformBufferVS
{
    vec4 m_UnusedButPlaned1;
    vec4 m_SometimesInUse2;
    vec4 m_UnusedButPlaned3;
    vec4 m_Used4;
    mat4 m_Used5;
    mat4 m_Used6;
};
layout(row_major, std140) uniform UPerFrameUniformBufferVS
{
    SPerFrameUniformBufferVS m_PerFrameUniformBufferVS;
};
```

Test 2: same buffer without the unused variables:

```glsl
struct SPerFrameUniformBufferVS
{
    vec4 m_Used4;
    mat4 m_Used5;
    mat4 m_Used6;
};
layout(row_major, std140) uniform UPerFrameUniformBufferVS
{
    SPerFrameUniformBufferVS m_PerFrameUniformBufferVS;
};
```

Test 3: fixed buffer (no struct) with unused variables inside:

```glsl
layout(row_major, std140) uniform UPerFrameUniformBufferVS
{
    vec4 m_UnusedButPlaned1VS;
    vec4 m_SometimesInUse2VS;
    vec4 m_UnusedButPlaned3VS;
    vec4 m_Used4VS;
    mat4 m_Used5VS;
    mat4 m_Used6VS;
};
```

Test 4: same buffer without the unused variables:

```glsl
layout(row_major, std140) uniform UPerFrameUniformBufferVS
{
    vec4 m_Used4VS;
    mat4 m_Used5VS;
    mat4 m_Used6VS;
};
```

In my case I get the following performance results:

Test 1: 40 FPS
Test 2: 55 FPS
Test 3: 46 FPS
Test 4: 60 FPS

My application does 1558 draw calls and 2341 glGetUniformBlockIndex calls. glGetUniformBlockIndex seems to be the bottleneck, because its cost is much higher in Test 1 and Test 3.
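As a sanity check on what each layout actually costs in memory (separate from the call-overhead question): under std140, a vec4 is 16 bytes and a mat4 is four 16-byte columns, both 16-byte aligned, so offsets for these blocks just accumulate. A sketch that tabulates the two layouts:

```python
SIZES = {"vec4": 16, "mat4": 64}  # std140: vec4 = 16 B, mat4 = 4 x vec4

def std140_layout(members):
    """Return (offset_by_name, total_size) for a block of vec4/mat4 members.
    Both types have 16-byte base alignment, so no padding appears between
    them and offsets simply accumulate."""
    offsets, offset = {}, 0
    for name, ty in members:
        offsets[name] = offset
        offset += SIZES[ty]
    return offsets, offset

test1 = [("m_UnusedButPlaned1", "vec4"), ("m_SometimesInUse2", "vec4"),
         ("m_UnusedButPlaned3", "vec4"), ("m_Used4", "vec4"),
         ("m_Used5", "mat4"), ("m_Used6", "mat4")]
test2 = [("m_Used4", "vec4"), ("m_Used5", "mat4"), ("m_Used6", "mat4")]

print(std140_layout(test1)[1])  # 192 bytes
print(std140_layout(test2)[1])  # 144 bytes
```

The 48-byte difference is small, which supports the suspicion that the FPS gap comes from per-frame API overhead (glGetUniformBlockIndex can be queried once after linking and cached) rather than from buffer size.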
Problem with 2D rotation in OpenGL

I have a function to perform sprite rotation:

```cpp
void Sprite::rotateSprite(float angle)
{
    // Making an array of vertices: 6 for 2 triangles
    Vector2<gamePos> halfDims(_rect.w / 2, _rect.h / 2);

    Vector2<gamePos> bl(-halfDims.x, -halfDims.y);
    Vector2<gamePos> tl(-halfDims.x,  halfDims.y);
    Vector2<gamePos> br( halfDims.x, -halfDims.y);
    Vector2<gamePos> tr( halfDims.x,  halfDims.y);

    bl = rotatePoint(bl, angle) + halfDims;
    br = rotatePoint(br, angle) + halfDims;
    tl = rotatePoint(tl, angle) + halfDims;
    tr = rotatePoint(tr, angle) + halfDims;

    // 1st triangle
    // Top right
    dataPointer.vertices[0].setPosition(_rect.x + tr.x, _rect.y + tr.y);
    // Top left
    dataPointer.vertices[1].setPosition(_rect.x + tl.x, _rect.y + tl.y);
    // Bottom left
    dataPointer.vertices[2].setPosition(_rect.x + bl.x, _rect.y + bl.y);

    // 2nd triangle
    // Bottom left
    dataPointer.vertices[3].setPosition(_rect.x + bl.x, _rect.y + bl.y);
    // Bottom right
    dataPointer.vertices[4].setPosition(_rect.x + br.x, _rect.y + br.y);
    // Top right
    dataPointer.vertices[5].setPosition(_rect.x + tr.x, _rect.y + tr.y);
}

Vector2<gamePos> Sprite::rotatePoint(Vector2<gamePos> pos, float &angle)
{
    Vector2<gamePos> newv;
    newv.x = pos.x * cos(angle) - pos.y * sin(angle);
    newv.y = pos.y * cos(angle) + pos.x * sin(angle);
    return newv;
}
```

And the result is shown in the screenshot. Am I doing something wrong? It also happens when I use a small angle (even if I set angle = 1). Thanks for any help.
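For reference, the standard 2D rotation about the origin is x' = x*cos(a) - y*sin(a), y' = x*sin(a) + y*cos(a), and C's cos()/sin() expect the angle in radians. So an angle meant as "1 degree" passed straight through is actually about 57 degrees, which is one common cause of surprisingly large rotations. A quick check of the maths in Python:

```python
import math

def rotate_point(x, y, angle_degrees):
    """Rotate (x, y) about the origin, converting degrees to the radians
    that sin/cos expect."""
    a = math.radians(angle_degrees)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

x, y = rotate_point(10, 0, 90)
print(round(x, 6), round(y, 6))  # a quarter turn: ~(0, 10)
```

If the sprite also appears sheared rather than merely rotated, check that the y formula really uses + for the x*sin(a) term; flipping that sign reflects instead of rotating.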
Making a button that spawns objects in the world

I want to make a button that can spawn an object that I've already created, though I have no clue how to actually do this or where to start. Has anyone ever done something similar?
How to update vertices and indices in OpenGL

I want to change and update the vertices and indices of a 3D object. I am new to OpenGL and I do not know if this is the right place to ask this question. I searched and read about it, and found that I should bind the VAO and VBO and do that, but nothing changed in the vertices and indices. The vertices vector's element type has the position, normal, and uv coords, and the indices vector's element type is int. I wonder how I can make changes to these two vectors when I translate or scale the model. Here is the code:

```cpp
// Create buffers/arrays
glGenVertexArrays(1, &this->VAO);
glGenBuffers(1, &this->VBO);
glGenBuffers(1, &this->EBO);

glBindVertexArray(this->VAO);

// Load data into vertex buffers.
glBindBuffer(GL_ARRAY_BUFFER, this->VBO);
// A great thing about structs is that their memory layout is sequential
// for all their items. The effect is that we can simply pass a pointer to
// the struct, and it translates perfectly to a glm::vec3/2 array, which
// again translates to 3/2 floats, which translates to a byte array.
glBufferData(GL_ARRAY_BUFFER, this->vertices.size() * sizeof(vertex), &this->vertices[0], GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, this->EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, this->indices.size() * sizeof(int), &this->indices[0], GL_STATIC_DRAW);

// Set the vertex attribute pointers.
// Vertex positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid *)0);
// Vertex normals
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid *)offsetof(vertex, normal));
// Vertex texture coords
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid *)offsetof(vertex, uv));

glBindVertexArray(0);
```
What is the benefit of a sparse bindless texture array over just bindless textures?

I've seen the talks on AZDO, and they tend to suggest moving towards an architecture where you have one texture array per texture "shape". This is sparse, so you allocate loads of layers for potential future texture loads, and then commit each layer as you load a texture into it. You then make the whole thing bindless, and you can pass a handle & layer to your shaders for texturing. This is great, but what has it bought me over just using bindless textures? With bindless textures I seem to have to do exactly the same amount of work (in fact, a little less): I just pass the texture handle, or sampler handle, to my shader. So why the push for sparse bindless texture arrays?
How can I efficiently render a tile-based map with many Z levels, where the levels act like hollowed-out voxels?

My map setup is a little different from what you normally have with tilemaps. The map itself is a rectangular prism, with variable width, height, and depth. In my case, width is the x axis horizontally across the screen, height is the y axis vertically across the screen, and depth is the z level, i.e. how "deep" the map goes into/out of the screen. You could consider the rectangular prism as "solid": most of these voxels act as occluders for the voxels below them. Some voxels are hollowed out to make rooms and corridors. The map is viewed at a specific z level, from 0, the "bottom" of the prism, to n, the top. A "hollowed out" voxel, viewed from its own level, results in me rendering a floor texture in that tile. If the 4 voxels above it were also hollowed out, that floor texture would be rendered when the map is viewed from each of those z levels as well, but each z level above the "floor" would have a semi-transparent black rect rendered on top of the tile (so it gets successively darker the further "up" you are). This continues until you're on a z level above the tallest hollowed-out voxel of the room, at which point you can't see it any longer, because a non-hollowed-out voxel blocks your view of the room below. The problem is that there are many z levels to render, and most of the hollowed-out sections criss-cross around the different layers to form a very complex network of little rooms and tunnels. This results in me naively rendering a floor texture, followed by several layers of semi-transparent black rects, followed by a fully opaque black rect, potentially repeated for any additional overlapping rooms, for every x, y tile on the screen. I'm trying to wrap my head around a way to render this more efficiently, but I'm having trouble. Currently, on every render loop, I step through the z layers, from the currently viewed z level down to 0, and find the z level that is occluded first (or go all the way to level 0), so that when rendering I can skip the z layers for that tile that are below this value. This isn't really giving me much of a performance increase. I'm looking for ideas on how to better perform this series of draw calls. I'm using OpenGL 3.3 directly, so I'm not restricted by any library or engine constraints. Currently I render each "tile" on each layer separately, and I would prefer to keep it that way for simplicity.
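The per-tile occlusion scan described above can be stated compactly; a sketch of just that search (assuming a boolean solid/hollow column per tile), which could be cached per tile and recomputed only when the map or the viewed z level changes, rather than every frame:

```python
def first_occluder(column, view_z):
    """column[z] is True where the voxel at height z is solid (an occluder).
    Scanning down from the viewed level, return the z of the first solid
    voxel, or 0 if the column is open all the way down. Layers below the
    returned z can be skipped when drawing this tile."""
    for z in range(view_z, -1, -1):
        if column[z]:
            return z
    return 0

#          z:  0      1      2      3     4
column = [True, False, False, True, False]
print(first_occluder(column, 4))       # 3: the solid voxel at z=3 blocks the view
print(first_occluder([False] * 5, 4))  # 0: open all the way down
```

Since the result only depends on the column contents and view_z, caching it turns the per-frame cost into a lookup; the darkening overlays can then be collapsed into a single rect whose alpha encodes the depth difference instead of stacking several semi-transparent rects.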
How does the following code generate a full-screen quad?

How does this

```hlsl
struct Output
{
    float4 position_cs : SV_POSITION;
    float2 texcoord : TEXCOORD;
};

Output main(uint id : SV_VertexID)
{
    Output output;
    output.texcoord = float2((id << 1) & 2, id & 2);
    output.position_cs = float4(output.texcoord * float2(2, -2) + float2(-1, 1), 0, 1);
    return output;
}
```

and this

```cpp
pImmediateContext->VSSetShader(fxaaVS, ..., ...);
pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pImmediateContext->Draw(3, 0);
```

generate a full-screen quad? It has only three vertices (a quad has 4 vertices), and the positions produced are also not like (half of) a quad. Looking at this, I might be confusing a quad with a full-screen triangle.
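The bit tricks are easy to tabulate by hand. This sketch reproduces the shader's arithmetic for vertex ids 0, 1, 2, using the y-down texcoord convention from the snippet:

```python
def fullscreen_triangle_vertex(vid):
    """Mirror the vertex shader: derive the texcoord from two bit tricks,
    then map texcoord range 0..2 onto clip space -1..3 with y flipped."""
    u, v = (vid << 1) & 2, vid & 2              # texcoord in {0, 2}
    x, y = u * 2 - 1, v * -2 + 1                # clip-space position
    return (u, v), (x, y)

for vid in range(3):
    print(vid, fullscreen_triangle_vertex(vid))
# 0 ((0, 0), (-1, 1))
# 1 ((2, 0), (3, 1))
# 2 ((0, 2), (-1, -3))
```

The triangle (-1,1), (3,1), (-1,-3) is deliberately oversized: it fully contains the -1..1 clip-space square, and the rasterizer clips everything outside it. That is why three vertices suffice, and why a single triangle is usually preferred over a two-triangle quad (no diagonal seam, slightly better quad-shading efficiency along it).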
How can I change the camera from a Y-up system to a Z-up system?

I am following the tutorial from learnopengl.com, but it uses a Y-up system, and I would like to change it to a Z-up system because I'm more used to that. I tried changing the up vector to be 1.0 in the last coordinate and inverting the pitch and the yaw, but when the camera faces the positive y axis the mouse movement breaks and starts to move more slowly and opposite to the way I move the mouse. The axis system I'm trying to achieve is the same as Blender's. This is the function of the camera class which handles the pitch and yaw (the offsets are the mouse movement on the x and y axes):

```cpp
void Camera::updateCameraVectors()
{
    // Calculate the new Front vector
    glm::vec3 front;
    front.x = cos(glm::radians(Yaw)) * cos(glm::radians(Pitch));
    front.y = sin(glm::radians(Pitch));
    front.z = sin(glm::radians(Yaw)) * cos(glm::radians(Pitch));
    Front = glm::normalize(front);
    // Also re-calculate the Right and Up vector
    Right = glm::normalize(glm::cross(Front, WorldUp));
    // Normalize the vectors, because their length gets closer to 0 the more
    // you look up or down, which results in slower movement.
    Up = glm::normalize(glm::cross(Right, Front));
    // Up is the vector that I initialized as (0.0f, 0.0f, 1.0f)
}
```

Are there any resources available online that explain how to make the switch from a Y-up to a Z-up system?
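In a Z-up convention the pitch should rotate the view toward +z rather than +y, so the sin(pitch) term moves to the z component and the cos(pitch) terms spread over x and y. A sketch of the remapped front vector (assuming yaw 0, pitch 0 looks along +x in a right-handed frame):

```python
import math

def front_z_up(yaw_deg, pitch_deg):
    """Yaw/pitch to a forward vector in a Z-up, right-handed frame.
    Yaw 0, pitch 0 looks along +x; pitch +90 looks straight up +z."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(yaw) * math.cos(pitch),
            math.sin(yaw) * math.cos(pitch),
            math.sin(pitch))

fx, fy, fz = front_z_up(0, 90)
print(round(fx, 6), round(fy, 6), round(fz, 6))  # looking straight up: ~(0, 0, 1)
```

With WorldUp set to (0, 0, 1), the Right/Up cross products then work exactly as in the tutorial. The slowdown you see near the +y direction is the usual gimbal behaviour as pitch approaches +/-90 degrees, where cross(Front, WorldUp) degenerates; clamping pitch just short of 90 (as the original tutorial does for Y-up) avoids it.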
How to achieve rendering at huge distances?

These days I was reading some information about the upcoming GTA V technology, in particular how it is able to truly render all the buildings you can see, without a draw-distance cap or any faking. Since I will soon be prototyping a big city environment, I wanted some advice on this topic. We're talking about a really big city, with simple geometry but fully lit and shadowed. My doubt comes in particular from the projection matrix, which imposes a maximum draw distance (the zFar parameter). Now, at the same time, I read everywhere that zFar should be as small as possible for better rendering results, in particular because of depth-buffer precision and floating-point issues. So, assuming my computer can render this big city at a stable framerate, how should I approach the problem of rendering parts of the city which I can see really far away, fully lit and shadowed? Shadow maps also seem to have problems with low depth-buffer precision. Thanks.
Why doesn't glBindVertexArray work in this case?

From my understanding of what glBindVertexArray does and how it works, the following code should work fine. First, the init:

```cpp
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);

glGenBuffers(1, &_buffer);
glBindBuffer(GL_ARRAY_BUFFER, _buffer);
glBufferData(GL_ARRAY_BUFFER, kMaxDrawingVerticesCount * sizeof(GLKVector3), NULL, GL_DYNAMIC_DRAW);

glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, kMaxDrawingVerticesCount * 1.5 * sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);

glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLKVector3), BUFFER_OFFSET(0));

glBindVertexArrayOES(0);
```

And later, when adding new geometry:

```cpp
glBindVertexArrayOES(_vertexArray);
glBufferSubData(GL_ARRAY_BUFFER, 0, (GLintptr) data.vertices.length, data.vertices.bytes);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, (GLintptr) data.indices.length, data.indices.bytes);
glBindVertexArrayOES(0);
```

However, it doesn't work (there is screen output, but it looks like the buffers were swapped with each other). Is binding the vertex array not enough for the vertex/index buffers to get bound? Because if I do this:

```cpp
glBindVertexArrayOES(_vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, _buffer);              // <- note this line
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer); // <- note this line
glBufferSubData(GL_ARRAY_BUFFER, 0, (GLintptr) data.vertices.length, data.vertices.bytes);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, (GLintptr) data.indices.length, data.indices.bytes);
glBindVertexArrayOES(0);
```

everything works/looks as expected. I find it strange, since the vertex array should have taken care of the buffer binding. Otherwise, what's the purpose of having a vertex array if you still have to bind all the buffers?
How do I follow this GLSL 1.2 lights/shadows tutorial?

I am following this great tutorial, but I have many questions. Let's see if I understand the basic idea:

1. I must create the same number of FBOs as lights (maximum 8).
2. I must create the same number of depth textures (shadow maps) as FBOs.
3. For every FBO I must perform an offscreen render (render to texture, drawing the scene from the point of view of each of the lights, so up to 8 times).

Is this correct, or can I use just one texture with multiple FBOs, or one FBO with multiple textures, or what? Assuming the idea above, what should the final render of the scene look like?

```cpp
glUseProgram(programa);

glUniform1i(shadowM0, 4);
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, depthT0);
// Draw scene?

glUniform1i(shadowM1, 5);
glActiveTexture(GL_TEXTURE5);
glBindTexture(GL_TEXTURE_2D, depthT1);
// Draw scene?

glUniform1i(shadowM2, 6);
glActiveTexture(GL_TEXTURE6);
glBindTexture(GL_TEXTURE_2D, depthT2);
// Draw scene?
// ...

glUseProgram(0);
```

Is this okay? I think not, because it uses a lot of resources, and the scene would have to be drawn up to 7 times (once per light source plus shadow mapping). Well then, how should I render a scene with multiple light sources and shadows?
How to rotate a sprite in 2D with LWJGL3 and GLSL

I have all the basic rendering already set up, with translation, scaling, etc. What matrix (or matrices) and/or vector(s) do I need to multiply/add into my existing code to rotate an object? Preferably around a point given by a Vector2f, and only in degrees, because quaternions aren't implemented and it's only 2D, not 3D. Snippet of the vertex shader:

```glsl
gl_Position = transformation * vec4(position, 1.0) + translation;
```

where transformation is computed in Java as (note: not actual code, as * is replaced with .mul()):

```java
Matrix4f transformation = ortho * scale;
```

and translation is given by a method:

```java
Vector4f translation = new Vector4f(x, y, 0f, 0f).mul(gc.getOrtho());
```

All the code listed before works as expected; it is just lacking the rotation feature mentioned at the top. How can I add rotation to my code?
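To rotate about an arbitrary pivot p, the standard composition is T(p) * R(theta) * T(-p): translate the pivot to the origin, rotate, translate back. In the matrix chain above that rotation matrix would slot in between ortho and scale (the exact JOML calls to build it are left as an exercise; the method names here are not checked against LWJGL). The underlying arithmetic, verified in Python:

```python
import math

def rotate_about(px, py, cx, cy, degrees):
    """Rotate point (px, py) by `degrees` around pivot (cx, cy):
    translate the pivot to the origin, rotate, translate back."""
    a = math.radians(degrees)
    dx, dy = px - cx, py - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

x, y = rotate_about(2, 1, 1, 1, 90)
print(round(x, 6), round(y, 6))  # ~(1, 2): a quarter turn around (1, 1)
```

Since degrees are requested, the conversion to radians happens once at the top; everything downstream of it is the plain 2D rotation matrix applied to the pivot-relative offset.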
Prevent tile layout gaps I'm making a map viewer where you can specify where tiles go in a map and then see it on the screen, translate it, and zoom in and out. Here's a picture showing the tiles used to make my example map. Here's the tiles connected together, more or less my desired result. The problem is that at particular combinations of translation and zoom level the spacing is off. Here's an example picture of the problem. In order to arrange the tiles I'm using a perspective projection matrix parametrized by the zoom level, a camera matrix which just translates the scene based on the camera's 2D position, and finally each tile is translated using another matrix derived from its x and y position: tile * camera * projection. How can I avoid this weird spacing issue at particular positions? I'm guessing this is some kind of floating point precision issue, but I'm not sure. (For those interested, the tiles I'm using are from this collection.) EDIT I was using the following parameters for the standard OpenGL perspective projection matrix: let near = 0.5 let far = 1024.0 let scale = zoom * near let top = scale let bottom = -top let right = aspect_ratio * scale let left = -right where aspect_ratio is the window width divided by the window height (in pixels), and zoom in the error case above was set to 8.0. I read some more on perspective projection and floating point errors but couldn't find anything sensible to change, so I resorted to adjusting the parameters. Specifically I tried setting near to 1.0 and now I can't replicate the weird spacing. So I guess my problem is solved? But this is a mysterious and unsatisfying answer, so I would still appreciate it if someone could explain what went wrong, and how I can avoid it in the future without fiddling with numbers until it "magically" works! EDIT responding to amitp: Here's a screenshot from the same version as above (with near still set to 0.5), with the background set to red as suggested.
This seems to indicate that it's something wrong with the tile layout, not image bleed.
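One common mitigation, assuming the gaps come from tile edges landing between pixels after projection, is to snap world-space translations to the pixel grid before building the matrices. A minimal sketch (the pixels-per-unit factor is an assumed, app-specific quantity derived from the zoom level and viewport size):

```cpp
#include <cassert>
#include <cmath>

// Snap a world-space coordinate so that, after projection, it lands on an
// exact pixel boundary. `pixelsPerUnit` is how many screen pixels one
// world unit covers at the current zoom (assumed known by the caller).
float snapToPixel(float worldCoord, float pixelsPerUnit) {
    return std::round(worldCoord * pixelsPerUnit) / pixelsPerUnit;
}
```

Applying this to the camera translation (and/or tile translations) keeps adjacent tile edges sampled identically, so seams cannot open up from sub-pixel disagreement.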
Rendering a model with a transparent or translucent UV map applied doesn't work Before I try to make anything transparent, the model renders nicely. When I change the UV layout so that one piece of the model will be transparent, it renders horribly. This is the result with a translucent green texture. I did transparency with OpenGL when I made 2D games and everything worked nicely because of these two lines: glEnable(GL BLEND) glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) Now, I fear, this won't be enough, right? Or maybe I should use some other technique?
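For reference, blending alone is usually not enough in 3D: transparent surfaces generally have to be drawn after the opaque ones, sorted back to front relative to the camera, typically with depth writes disabled. A C++ sketch of just the sorting step (the `Drawable` type and fields are hypothetical, not from the question's code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Drawable {
    float x, y, z;   // object position (hypothetical layout)
    int id;
};

// Sort transparent objects back-to-front so that standard alpha blending
// (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) composites them correctly.
void sortBackToFront(std::vector<Drawable>& objs,
                     float camX, float camY, float camZ) {
    auto dist2 = [&](const Drawable& d) {
        float dx = d.x - camX, dy = d.y - camY, dz = d.z - camZ;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(objs.begin(), objs.end(),
              [&](const Drawable& a, const Drawable& b) {
                  return dist2(a) > dist2(b);  // farthest first
              });
}
```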
Is this process for generating an OpenGL perspective projection correct? I have been programming OpenGL for a while now and I have successfully created a perspective camera; however, I can't shake the feeling that I am doing it wrong. The code I am using is below. Is there something wrong with how I am constructing the perspective projection? int MainWindow SDL execute(QCoreApplication &app) SDL Init(SDL INIT EVERYTHING) glewInit() ... glLoadIdentity() glEnable(GL TEXTURE 2D) glEnable(GL DEPTH TEST) glMatrixMode(GL MODELVIEW) glDisable(GL LIGHTING) gluPerspective(50, 1.33, 1, 1000) gluLookAt(3, m xAxisLocation, 4, 0, 0, 0, 0, 1, 0) glewExperimental = GL TRUE ... GameStart() app.exit() return 0 I also set it up in the game loop void MainWindow SDL GameLoop() OpenGLRenderer t renderer t renderer.PreGameRender() while (m state == GameState RUNNING) glPushMatrix() glClear(GL COLOR BUFFER BIT | GL DEPTH BUFFER BIT) glLoadIdentity() processInput() glMatrixMode(GL MODELVIEW) glEnable(GL TEXTURE 2D) glEnable(GL DEPTH TEST) gluPerspective(50, 1.33, 1, 1000) gluLookAt(m xAxisLocation, m yAxisLocation, m zAxisLocation, 0, 0, 0, 0, 1, 0) t renderer.DuringGameRender() SDL GL SwapWindow(m window) glPopMatrix() t renderer.PostGameRender() GameEnd()
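For comparison, this is the matrix that gluPerspective(fovy, aspect, near, far) is documented to build; computing it by hand makes it easy to check what the fixed-function calls are actually producing (a standalone sketch, not the asker's code):

```cpp
#include <cassert>
#include <cmath>

// Build the same matrix gluPerspective(fovyDeg, aspect, zNear, zFar)
// would, in column-major order (m[column*4 + row]) as OpenGL expects.
void perspective(float fovyDeg, float aspect, float zNear, float zFar,
                 float m[16]) {
    // f = cot(fovy / 2); the /360 converts degrees to half-angle radians
    float f = 1.0f / std::tan(fovyDeg * 3.14159265358979f / 360.0f);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = 2.0f * zFar * zNear / (zNear - zFar);
}
```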
Using the same buffer for vertex and index data? Is it possible to use the same buffer for both GL ARRAY BUFFER and GL ELEMENT ARRAY BUFFER? I load both vertex data and index data into a big slab of memory, so it would be easier for me to just load it all into a single buffer. So naturally, I do like this: glBindBuffer(GL ARRAY BUFFER, vboId) glBufferData(GL ARRAY BUFFER, dataSize, data, usage) glBindBuffer(GL ARRAY BUFFER, 0) Is it legal, during rendering, to simply use it as both? glBindBuffer(GL ARRAY BUFFER, vboId) glBindBuffer(GL ELEMENT ARRAY BUFFER, vboId) glVertexAttribPointer(...) ... glDrawElements(mode, count, dataType, (void*)indexOffset) I can't find anything in the spec saying it's OK to do so, but I can't find anything that says that I can't either. Googling doesn't turn up much either, but I might be looking in the wrong places.
Uniform arrays do not work on every GPU I am trying to implement instanced rendering for objects that repeat, so I came up with idea I could simply group objects that loaded same model files, create array with their Model matrices and so on, and finally pass this array as uniform array to shaders, which will be indexed by gl instanceID to get correct model matrix for given objects. C implementation looks pretty much like this std vector RelativePositions, Scales std vector Models int ctr 0 bool first true int isz 0 obj gt getIndices(isz) pDrawData gt refObjects holds all Object3D instances which share same model geometry for (std list lt IObject3D gt iterator pObj pDrawData gt refObjects.begin() pObj ! pDrawData gt refObjects.end() pObj ) IObject3D obj pObj int render prepareForRender(obj, flags, isShadow, offset) if (!render) continue if (first) this is gonna be same for all objects of this class setUniform(Type, (int)obj gt getType()) setUniform(Flags, (int)obj gt getFlags()) first false Scales.push back(obj gt getScale()) RelativePositions.push back(obj gt getPos()) Models.push back(obj gt getModel()) ctr if (ctr) int Rem 0 const int Step MAX OBJ PER INST 64 for (size t i 0, j Scales.size() i lt j i Step) Rem Scales.size() i if (Rem gt Step) Rem Step following functions set uniform array of specified length setUniform(Scale, Scales.data() i, Rem) setUniform(RelativePosition, RelativePositions.data() i, Rem) setUniform(Model, Models.data() i, Rem) glDrawElementsInstanced(GL TRIANGLES, (GLsizei)isz, GL UNSIGNED INT, 0, Rem) Rendering object is done via Vertex Shader Tess Control Shader Tess Eval Shader Geom Shader FragmentShader. Instance ID is being passed using flat out in int InstanceID from shader to shader while being once set in Vertex shader as gl InstanceID. 
Tessellation Evaluation Shader logic looks like this:

#version 430
layout(triangles) in;

in vec3 tcPosition[];
in vec3 tcTexCoord[];
in vec3 tcNormal[];
flat in int tcInstanceID[];

out vec3 tePosition;
out vec3 teDistance;
out vec3 teTexCoord;
out vec3 teNormal;
flat out int teInstanceID;

#define MAX_OBJ_PER_INST 64
uniform mat4 Model[MAX_OBJ_PER_INST];
uniform vec3 Scale[MAX_OBJ_PER_INST];
uniform vec3 RelativePosition[MAX_OBJ_PER_INST];
// ... other uniforms

void main(void)
{
    vec3 p0 = gl_TessCoord.x * tcPosition[0];
    vec3 p1 = gl_TessCoord.y * tcPosition[1];
    vec3 p2 = gl_TessCoord.z * tcPosition[2];
    // ... other stuff ...
    teDistance = gl_TessCoord;
    tePosition = (p0 + p1 + p2);
    teInstanceID = tcInstanceID[0];
    switch (Type)
    {
    // ... other types ...
    default:
        tePosition = (vec4(tePosition, 1) * Model[teInstanceID]).xyz * Scale[teInstanceID] + RelativePosition[teInstanceID];
        break;
    }
}

This works fine on any nVidia GPU on my laptop, but when I try to run it on an AMD GPU or Intel integrated graphics it suddenly doesn't work and I only get a blank screen. But if I remove the uniform arrays and pass only a single variable per uniform, so all instances suddenly have the same model matrix, position and scale, it works even on the other GPUs, so the problem is definitely with the arrays... Is there any solution to this?
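One part of the question's draw loop worth isolating is the chunking of instances into uniform-array-sized batches; as standalone logic (a sketch mirroring the i += Step loop, with hypothetical names) it looks like this. Worth noting, hedged: AMD and Intel drivers often expose smaller default-block uniform limits than NVIDIA, so a uniform buffer object or SSBO is usually the more portable home for large per-instance arrays:

```cpp
#include <cassert>
#include <vector>

// Split `total` instances into draw batches of at most `step` instances
// each (here `step` would be the uniform-array capacity, e.g.
// MAX_OBJ_PER_INST = 64). Each returned entry is the instance count of
// one glDrawElementsInstanced call.
std::vector<int> batchSizes(int total, int step) {
    std::vector<int> out;
    for (int i = 0; i < total; i += step) {
        int rem = total - i;
        out.push_back(rem > step ? step : rem);
    }
    return out;
}
```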
Cannot draw a triangle without a VAO on macOS So I was watching Cherno's video on vertex attributes and he was successful in drawing a triangle without a VAO, but the tutorials from learnopengl.com specifically say that we need a VAO to draw a VBO, and I tested it and it works fine. Here is my snippet: glGenVertexArrays(1, &VAO) glBindVertexArray(VAO) glGenBuffers(1, &VBO) glBindBuffer(GL ARRAY BUFFER, VBO) glBufferData(GL ARRAY BUFFER, sizeof(triangleVertices), triangleVertices, GL STATIC DRAW) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 3 sizeof(GLfloat), (GLvoid )0) glEnableVertexAttribArray(0) Game Loop Loop until the user closes the window while (!glfwWindowShouldClose(window)) Poll for and process events glfwPollEvents() Render here glClearColor(0.2f, 0.3f, 0.3f, 1.0f) RGBA glClear(GL COLOR BUFFER BIT) glUseProgram(shaderProgram) glBindVertexArray(VAO) we did not unbind it so it's still bound to the assigned VAO glDrawArrays(GL TRIANGLES, 0, 3) Swap front and back buffers glfwSwapBuffers(window) This only works if I uncomment the VAO code, which is understandable; what I can't get my head around is how Cherno got it working without a VAO, using just a VBO. Is it a macOS specific thing?
UnrealEngine4 "backwards compatibility" Can I view a site that uses Unreal Engine 4 on a very old laptop and still see it in high definition? Is it backwards compatible? The project: I want to render a high definition model on my personal website such that it'll be rendered in the highest definition possible on any device, from a 20 year old Gateway PC running Windows 95 to an iPhone X. It's particularly important that it's really pretty on mobile (whatever Android and iPhone models are big these days). The question: I was thinking I'd buy an expensive desktop with GPUs to make sure I can make the app really fancy for users on a high end desktop with nice monitors. But I'm not going to do that if the high definition I get on my new expensive screen doesn't make it out to the app my employer is building. How does this work? Does OpenGL display the highest definition render it can on the mobile device?
What to do with unused vertices? Imagine a vertex array in OpenGL representing blocks in a platform game, where some vertices may not be used. The environment is dynamic, so at any moment some vertex may suddenly become invisible. What is the best way to avoid drawing them? Graphics cards are complicated and it's hard to predict which approach is best. The few best ways I can think of: delete the vertices and move all vertices after the deleted one to fill the freed space (sounds extremely inefficient); set their positions to 0; set their transparency to maximum. I could of course benchmark, but what works faster on my computer doesn't have to be faster on other machines.
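A fourth option the list doesn't mention: if the blocks are drawn with glDrawElements, the vertex data can be left alone and only the index buffer compacted with a swap-remove. A sketch (assuming six indices per quad, a hypothetical layout):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Remove quad `q` from a triangle index buffer (6 indices per quad) by
// copying the last quad's indices over it and shrinking the buffer.
// O(1) per removal, no shifting; the draw count simply drops by 6.
void removeQuad(std::vector<unsigned>& indices, std::size_t q) {
    std::size_t last = indices.size() - 6;  // start of the last quad
    std::size_t dst  = q * 6;
    for (int k = 0; k < 6; ++k) indices[dst + k] = indices[last + k];
    indices.resize(last);
}
```

After each removal you re-upload the (smaller) index buffer, or just the changed range, and draw with the reduced count; the "dead" vertices stay in the VBO but are never referenced.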
OpenGL gradient banding on Samsung Galaxy S2 Android phone I've got a live wallpaper out on the market which uses OpenGL to render some basic shapes and a flat plane. The simple lighting creates a gradient effect across the plane, which looks fine on most devices. The Samsung Galaxy S2 series seems to have some trouble rendering the gradient, though, as you can see in this screen shot The color banding looks awful, especially compared to this screen shot from an Incredible I'm using a 565 EGL config in both cases, so I believe this is just a display issue with the GS2 devices. Can anyone confirm this suspicion? Is there any solution to the banding?
Arbitrary number of VBOs to Vertex Shader I am currently using the standard modern OpenGL way to render a mesh via a VBO and attributes: glEnableVertexAttribArray(aVertexPosition) glBindBuffer(GL ARRAY BUFFER, VBO) glVertexAttribPointer(aVertexPosition, 3, GL FLOAT, GL FALSE, 0,0) glBindBuffer(GL ARRAY BUFFER, 0) and in the GLSL code the standard way to receive it, as in vec3 aVertexPosition. However, I wonder if there is any way to make this work for a dynamic number of VBOs. Let's say my software creates a number of VBOs that depends on the inputs, and I successfully create and allocate memory for all of them. How do I pass all of them to the shader? Does the number need to be constant? If so, what is the best way to do it?
Help finding the coordinate for the left of the screen in this frustum I have a frustum with l = -2, r = 2, n = 0.5, f = 10, corresponding to left, right, near, far respectively. I also define top and bottom too. I've set the camera eye up at (0, 0, 2.5), looking directly at (0, 0, 0) with up (0, 1, 0). Suppose I want to position an object centred at z = 1. I want to find the x coordinate so that under this frustum it appears exactly centred on the left edge of the screen. I set up a side-view diagram of the frustum to help me (eye at z = 2.5, the near plane at distance n = 0.5, and the object plane at z = 1, i.e. distance 1.5 from the eye). So to find x I can use simple trig. Since tan(a) = |l| / n and tan(a) = |x| / 1.5, the formula for x is |x| = |l| * 1.5 / n = 2 * 1.5 / 0.5 = 6, where |.| is the absolute value. But when I use this x value to draw an object left of the centre, it does not appear centred at the edge of the screen. I can see the object but it's not on the edge. Moreover, if I increase n then the gap between the object and the left edge increases. What am I doing wrong? UPDATE Empirically I have discovered that if I compute |x| = |l| / n * 2.5 then my object is correctly centred on the left edge of the device screen, and this works for any n value I choose. Not sure why 2.5 works... SOLUTION Not really worthy of an answer, but I found in some legacy code that the renderer was set up to translate the z forward by 1 unit, so this was undoing my z = 1 value, hence why multiplying by 2.5 worked and not 1.5. Despite this annoying fact, I'm now confident I fully understand how the frustum fits into OpenGL.
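The similar-triangles relation from the question, written as a small testable function (a hypothetical helper, just the math): a point `dist` units in front of the eye sits exactly on the left clipping plane when |x| / dist = |l| / n.

```cpp
#include <cassert>
#include <cmath>

// Eye-space x coordinate at which a point `dist` units in front of the
// eye lies exactly on the left clipping plane of an asymmetric-capable
// frustum with left edge `l` (negative) and near distance `n`:
//   |x| / dist = |l| / n   =>   x = (l / n) * dist
float leftEdgeX(float l, float n, float dist) {
    return (l / n) * dist;
}
```

With the question's numbers: distance 1.5 gives x = -6; with the legacy extra unit of translation the effective distance is 2.5, giving x = -10, which matches the empirical fix.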
Is it ok to use NPOT textures with OpenGL in this situation? I drew some icons to use with an orthographic projection, and instead of loading each icon file individually as a texture I thought of putting them all together in one single file. Now I have a file 72x36 which is not a power of two. This texture is divided into two icons, both 36px wide. I didn't remember this restriction when designing the texture, but I eventually noticed it when the texture didn't load, because I have an if statement checking whether the texture is a power of two; if it's not, then the texture is not loaded. I commented that code out to see if the texture loaded anyway, and it did. The texture was loaded and properly applied to the area I wanted. To draw the first icon I do this glBindTexture(GL TEXTURE 2D, gameTextures.hudKeys) glBegin(GL QUADS) glTexCoord2f(0, 0) glVertex2f(0, 36) glTexCoord2f(0.5, 0) glVertex2f(36, 36) glTexCoord2f(0.5, 1) glVertex2f(36, 0) glTexCoord2f(0, 1) glVertex2f(0, 0) glEnd() And for the second this glBindTexture(GL TEXTURE 2D, gameTextures.hudKeys) glBegin(GL QUADS) glTexCoord2f(0.5, 0) glVertex2f(0, 36) glTexCoord2f(1, 0) glVertex2f(36, 36) glTexCoord2f(1, 1) glVertex2f(36, 0) glTexCoord2f(0.5, 1) glVertex2f(0, 0) glEnd() Which is easy to understand: I only use half of the texture depending on which icon I want. I don't want my icons to be distorted or stretched; the icons were designed with 36x36 in mind. If I use a power of 2 texture, for instance 128x64 (the smallest that fits a 72x36 texture), how do I properly map the 36x36 texture area I want onto the quad? Or can I keep using this NPOT texture, or should I really create a power of 2 texture? How do I fix the problem above then?
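If the 72x36 atlas is padded into a 128x64 power-of-two texture (icons kept at the top-left, padding elsewhere), the icons stay unstretched as long as the texture coordinates address only the icon's pixel rectangle rather than the whole texture. A sketch of that mapping (hypothetical helper):

```cpp
#include <cassert>
#include <cmath>

struct UVRect { float u0, v0, u1, v1; };

// Texture coordinates of a pixel rectangle inside a (possibly padded)
// texture of size texW x texH. For a 36x36 icon at (0,0) inside a
// 128x64 texture, this maps exactly the icon's pixels, so nothing is
// distorted or stretched.
UVRect subRect(int x, int y, int w, int h, int texW, int texH) {
    return { (float)x / texW,       (float)y / texH,
             (float)(x + w) / texW, (float)(y + h) / texH };
}
```

The resulting u/v values replace the hard-coded 0, 0.5 and 1 in the glTexCoord2f calls.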
OpenGL ES 2 Developing a snowfall particle system I've recently begun OpenGL ES 2 and I'm trying to implement snowfall (on Android), but I'm not sure what the best approach is. I would like to develop it myself rather than use a library; it really helps a lot in understanding OpenGL. What I'm doing at the moment: I have an array of, say, 1000 particles, each with its own lifetime (the number of frames it will live). It works fine if a particle "dies" and starts all over once it has already left the screen, but if it is still on the screen when it has to die, the user will notice that the particle suddenly disappeared. The problem is very noticeable if this happens often. I need a way to kill a particle only when it has left the screen. Is there a way to improve my approach, or should I change it completely?
How to render a model that consists of individually transformable parts? Consider a 3D model for a spider that consists of a body and eight legs. The legs can be transformed (rotated) relative to the body. I am not sure what the common ways of rendering such a scene are, at a low level. Two ways I could think of Render the body and legs separately, using different VBOs (or different indices) (one for the body, one for the leg) and separate draw calls. Encode in each vertex which body part it belongs to, and decide based on this information how to transform the vertex in the vertex shader. I searched a bit and could find some tutorials on skeletal animation and similar techniques, but they still boil down to this question.
Learning OpenGL are the Red and Blue books still relevant? I've recently purchased the Orange Book (GLSL) and am wondering if it is important at all to read through the Red and Blue books as well? Any thoughts?
Deferred rendering and gaussian blur artifacts I compute Gaussian blur in two passes (horizontally and vertically). The shaders look like this. Horizontal blur fragment shader:

#version 420
layout (location = 0) out vec4 outColor;

in vec2 texCoord;

float PixOffset[5] = float[](0.0, 1.0, 2.0, 3.0, 4.0);
float Weight[5] = float[](0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162);
float scale = 4.0;

uniform sampler2D texture0;
uniform vec2 screenSize;

void main(void)
{
    float dx = 1.0 / screenSize.x;
    vec4 sum = texture(texture0, texCoord) * Weight[0];
    for (int i = 1; i < 5; i++)
    {
        sum += texture(texture0, texCoord + vec2(PixOffset[i], 0.0) * scale * dx) * Weight[i];
        sum += texture(texture0, texCoord - vec2(PixOffset[i], 0.0) * scale * dx) * Weight[i];
    }
    outColor = sum;
}

I use deferred rendering and the following screens show a diffuse material texture after blurring. I simplified the render loop for the sake of clarity (only one render target, a diffuse material). A render loop, first case:
bind fbo1
  gbuffer stage
unbind fbo1
bind fbo2
  read the diffuse texture, render to a temporary texture (full screen quad)
  read the temporary texture, horizontal blur, render to a temporary texture (full screen quad)
  read the temporary texture, vertical blur, render to a temporary texture (full screen quad)
unbind fbo2
read the temporary texture, render to the default framebuffer (full screen quad)
The final image has artifacts (flickering pixels). Some of them are placed along the edge between the two triangles which make up the full screen quad.
A render loop, second case:
bind fbo1
  gbuffer stage
unbind fbo1
bind fbo2
  read the diffuse texture, render to a temporary texture (full screen quad)
  read the temporary texture, horizontal blur, render to a temporary texture (full screen quad)
unbind fbo2
read the temporary texture, vertical blur, render to the default framebuffer (full screen quad)
Some artifacts may still appear between the triangles. A render loop, third case:
bind fbo1
  gbuffer stage
unbind fbo1
bind fbo2
  read the diffuse texture, horizontal blur, render to a temporary texture (full screen quad)
unbind fbo2
read the temporary texture, vertical blur, render to the default framebuffer (full screen quad)
The final image doesn't contain artifacts. How can I fix this? The code above is simplified, but normally the first case (render loop order) is the most useful, for example when I want to blur a glow texture and use it during shading.
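Independent of the pass-ordering issue, one thing worth double-checking in any separable blur is that the weights really sum to 1, since an off-by-a-little normalization shows up as visible brightness artifacts. A sketch of generating normalized weights at runtime (hypothetical helper, an alternative to the fixed table in the shader):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Generate normalized Gaussian weights for a separable blur with `taps`
// samples on each side of the center. The normalization counts every
// off-center weight twice (once per side), so the full kernel sums to 1
// and the blurred image keeps its overall brightness.
std::vector<float> gaussianWeights(int taps, float sigma) {
    std::vector<float> w(taps);
    float sum = 0.0f;
    for (int i = 0; i < taps; ++i) {
        w[i] = std::exp(-(float)(i * i) / (2.0f * sigma * sigma));
        sum += (i == 0) ? w[i] : 2.0f * w[i];
    }
    for (float& v : w) v /= sum;
    return w;
}
```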
Texture dimensions guidelines I have 3 related questions. What are the main optimizations related to power of 2 dimensions for 1D 2D 3D textures? This question gives some answers like mipmapping, but they are not well explained and the list of optimizations does not seem to be complete. I want a complete ordered list of the optimizations, from most important to least important, with complete explanations, with references replacing full explanations only where a full explanation would be overwhelming, and proper references (paper / graphics official page / OpenGL / Vulkan / CUDA) for every statement about an explained optimization. When the comparison is not direct, take a guess based on which one would improve overall performance most in the majority of computer graphics applications. Furthermore I want to know which of these optimizations are available at least for OpenGL and CUDA. More specifics would be appreciated. What benefits exist for dimensions divisible by 8? This article states the rule as an alternative complement to the power of 2 one, but gives neither explanations nor references. Is there any benefit for divisibility by a larger number? This article uses dimensions divisible by 32. I know that CUDA has a warp size of 32 and that blocks should be a multiple of 32 to have good overall average utilization, but I do not know if this could be related to texture sizes, and would like to know if the answer is no. In many places possibilities for texture optimization are loosely stated and not properly documented. This question aims at solving this issue.
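One small, concrete piece of the power-of-two story that is easy to verify is the length of a full mipmap chain: floor(log2(max(W, H))) + 1 levels. Power-of-two sizes halve cleanly at every level, while NPOT sizes round down, which is one concrete reason POT dimensions are friendlier to mipmapping. A sketch:

```cpp
#include <cassert>

// Number of mipmap levels in a full chain for a W x H texture:
// floor(log2(max(W, H))) + 1. Computed by repeated halving so no
// floating-point log is needed.
int mipLevels(int w, int h) {
    int m = (w > h) ? w : h;
    int levels = 1;
    while (m > 1) { m >>= 1; ++levels; }
    return levels;
}
```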
When I add the transformationMatrix in Java I can't see the images? When I add the transformationMatrix in Java I can't see the images moving, but when I remove it I can. Why is that, and how can I fix it? My Renderer class public class Renderer private static final float FOV 70 private static final float NEAR PLANE 0.1f private static final float FAR PLANE 1000 private Matrix4f projectionMatrix public Renderer(StaticShader shader) createProjectionMatrix() shader.start() shader.loadProjectionMatrix(projectionMatrix) shader.stop() public void prepare() GL11.glClear(GL11.GL COLOR BUFFER BIT) GL11.glClearColor(1, 0, 0, 1) public void render(Entity entity,StaticShader shader) TexturedModel model entity.getModel() RawModel rawModel model.getRawModel() GL30.glBindVertexArray(rawModel.getVaoID()) GL20.glEnableVertexAttribArray(0) GL20.glEnableVertexAttribArray(1) Matrix4f transformationMatrix Maths.createTransformationMatrix(entity.getPosition(), entity.getRotX(), entity.getRotY(), entity.getRotZ(), entity.getScale()) shader.loadTransformationMatrix(transformationMatrix) GL13.glActiveTexture(GL13.GL TEXTURE0) GL11.glBindTexture(GL11.GL TEXTURE 2D, model.getTexture().getID()) GL11.glDrawElements(GL11.GL TRIANGLES, rawModel.getVertexCount(), GL11.GL UNSIGNED INT, 0) GL20.glDisableVertexAttribArray(0) GL20.glDisableVertexAttribArray(1) GL30.glBindVertexArray(0) private void createProjectionMatrix() float aspectRatio (float) Display.getWidth() (float) Display.getHeight() float y scale (float) ((1f Math.tan(Math.toRadians(FOV 2f))) aspectRatio) float x scale y scale aspectRatio float frustum length FAR PLANE NEAR PLANE projectionMatrix new Matrix4f() projectionMatrix.m00 x scale projectionMatrix.m11 y scale projectionMatrix.m22 ((FAR PLANE NEAR PLANE) frustum length) projectionMatrix.m23 1 projectionMatrix.m32 ((2 NEAR PLANE FAR PLANE) frustum length) projectionMatrix.m33 0 Edit added StaticShader class package shader import org.lwjgl.util.vector.Matrix4f public class
StaticShader extends ShaderProgram private static final String VERTEX FILE "src shader vertexShader.txt" private static final String FRAGMENT FILE "src shader fragmentShader.txt" private int location transformationMatrix private int location projectionMatrix public StaticShader() super(VERTEX FILE, FRAGMENT FILE) Override protected void bindAttributes() super.bindAttribute(0, "position") super.bindAttribute(1, "textureCoords") Override protected void getAllUniformLocations() location transformationMatrix super.getUniformLocation("transformationMatrix") location projectionMatrix super.getUniformLocation("projectionMatrix") public void loadTransformationMatrix(Matrix4f matrix) super.loadMatrix(location transformationMatrix, matrix) public void loadProjectionMatrix(Matrix4f projection) super.loadMatrix(location projectionMatrix, projection) Edit vertexShader.txt version 400 core in vec3 position in vec2 textureCoords out vec3 colour out vec2 pass textureCoords uniform mat4 transformationMatrix uniform mat4 projectionMatrix void main(void) gl Position projectionMatrix transformationMatrix vec4(position,1.0) pass textureCoords textureCoords colour vec3(position.x 0.5,0.0,position.y 0.5)
Texture being rendered to main frame buffer? I'm using Ogre 1.10.12 (OpenGL ES 2 as the render system) to create a manual texture like this:

rtt_texture = Ogre::TextureManager::getSingleton().createManual("RttTex", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, Ogre::TEX_TYPE_2D, m_size.width(), m_size.height(), 0, Ogre::PF_A1R5G5B5, Ogre::TU_RENDERTARGET);
m_renderTexture = static_cast<Ogre::GLES2FBORenderTexture*>(rtt_texture->getBuffer()->getRenderTarget());
m_renderTexture->addViewport(m_camera);
m_renderTexture->getViewport(0)->setClearEveryFrame(true);
m_renderTexture->getViewport(0)->setBackgroundColour(Ogre::ColourValue::Red);
m_renderTexture->getViewport(0)->setOverlaysEnabled(false);

Then I bind the texture to the FBO and retrieve the FBO's ID like this:

Ogre::GLES2FrameBufferObject* ogreFbo = 0;
m_renderTexture->getCustomAttribute("FBO", &ogreFbo);
Ogre::GLES2FBOManager* manager = ogreFbo->getManager();
manager->bind(m_renderTexture);
GLint id;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &id);

My concern is that id is 0, so I cannot render to this texture off screen: it ends up rendered to the display, which I don't want. Shouldn't Ogre be creating a non-default framebuffer object when creating a manual texture with the TU_RENDERTARGET parameter?
Sprite batching in OpenGL I've got a Java based game with an OpenGL rendering front end that is drawing a large number of sprites every frame (during testing it peaked at 700). Now this game is completely unoptimized. There is no spatial partitioning (so a sprite is drawn even if it isn't on screen) and every sprite is drawn separately like this graphics.glPushMatrix() graphics.glTranslated(x, y, 0.0) graphics.glRotated(degrees, 0, 0, 1) graphics.glBegin(GL2.GL QUADS) graphics.glTexCoord2f (1.0f, 0.0f) graphics.glVertex2d(half size , half size) upper right same for upper left, lower left, lower right graphics.glEnd() graphics.glPopMatrix() Currently the game is running at 25FPS and is CPU bound. I would like to improve performance by adding spatial partitioning (which I know how to do) and sprite batching. Not drawing sprites that aren't on screen will help a lot; however, since players can zoom out it won't help enough, hence the need for batching. However, sprite batching in OpenGL is a bit of a mystery to me. I usually work with XNA, where a few classes to do this are built in. But in OpenGL I don't know what to do. As for further optimization, the game I'm working on has a few interesting characteristics: a lot of sprites share the same texture, and all the sprites are square. Maybe these characteristics will help determine an efficient batching technique?
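The usual replacement for per-sprite glBegin/glEnd plus matrix push/pop is to build one big CPU-side vertex array per frame (or per texture), upload it to a single VBO, and issue one draw call. A sketch of just the CPU-side quad generation with rotation baked in (hypothetical layout: interleaved x, y per corner; texture coordinates would be appended the same way):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Append one rotated sprite quad (4 corners, x/y interleaved) to a
// CPU-side vertex array. Baking the translate/rotate into the vertices
// replaces the per-sprite glTranslated/glRotated + glBegin/glEnd.
void pushSprite(std::vector<float>& verts, float x, float y,
                float halfSize, float degrees) {
    float r = degrees * 3.14159265358979f / 180.0f;
    float c = std::cos(r), s = std::sin(r);
    const float corners[4][2] = { {-halfSize, -halfSize},
                                  { halfSize, -halfSize},
                                  { halfSize,  halfSize},
                                  {-halfSize,  halfSize} };
    for (auto& p : corners) {
        verts.push_back(x + p[0] * c - p[1] * s);
        verts.push_back(y + p[0] * s + p[1] * c);
    }
}
```

Since many sprites share the same texture, sorting sprites by texture first lets each texture's quads go out in a single draw call.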
What causes some computers to have no or slow OpenGL, and how to fix it? I am using Java with JOGL to create OpenGL enhanced 2D graphics. The graphics operations I use are nothing fancy, and should be supported by almost any recent graphics card. For example, my game runs great on a Netbook. I was hoping the game would run on most computers. It runs fine on my own computers. However, I found some computers have very slow performance (apparently software fallback, yielding about 2 frames per second). I ran a LWJGL app on one such computer. It doesn't run at all (it reports something like org.lwjgl.LWJGLException Pixel format not accelerated, you can find various forum threads complaining about this but no apparent solutions, except the suggestion that it is a driver problem). Other OpenGL software does not seem to work, either. I also found that my Flash version of the game with exactly the same graphical effects performs pretty well full screen on that same computer. The computer in question has a recent ATI card but unfortunately I have no access to the driver manager. The problem appears fairly widespread. I think it is very unfortunate that OpenGL does not always provide access to graphics features found on most computers. This makes it less attractive for casual 2D games, which I expect to run on any computer. Did any of you run into this problem and manage to fix it? AFAIK, both NVidia and ATI provide OpenGL as part of their standard driver sets, but maybe there are some exceptions? Is this problem caused by third party drivers not supporting OpenGL, and can the problem be fixed by installing better drivers? How many other graphics cards are out there without OpenGL drivers? EDIT As a final note, I can conclude that OpenGL is just not well supported on Windows machines. Microsoft seems to have been doing their best to keep it off their platform, for example by deliberately leaving out OpenGL drivers in some Windows driver bundles. 
Get the vendor drivers, and you get OpenGL get the standard drivers that Windows downloads for you, and you don't. This has been causing no end of trouble for others as well. For example, for implementing WebGL, Web browsers use Angle, which is an OpenGL ES emulator for DirectX. So, what we'd really need is something like Angle, only for full OpenGL.
JOGL Hardware Shadow Mapping Transparent Shadow Texture I'm using hardware shadow mapping in JOGL, based on the demo (HardwareShadowMapping) supplied with the distribution. After generating the shadow texture from the light's point of view, I apply it to my scene with no problems. What I'm looking for is soft shadows instead of pitch black shadows. Is there any way that I can change the alpha values while applying the shadow depth texture? By the way, I'm not an expert on OpenGL, especially custom shaders, and I don't want to spend more time understanding and developing custom shaders than on the game itself. Here's a screenshot of the application in case you come up with another solution to what I'm trying to achieve. The game is actually a hack & slash RTS hybrid. Here's the pseudo code of the GL event listener attached to the shadow pbuffer:

gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
gl.glPolygonOffset(polygonOffsetFactor, polygonOffsetUnits);
gl.glEnable(GL.GL_POLYGON_OFFSET_FILL);
// render shadow casting geometry
gl.glDisable(GL.GL_POLYGON_OFFSET_FILL);
gl.glBindTexture(GL.GL_TEXTURE_2D, lightViewTextureID);
gl.glCopyTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, 0, 0, textureSize, textureSize);
// I sense here a way of updating the alpha values of this texture
How can I render a JBox2D ParticleGroup? I want to render a ParticleGroup from JBox2D using OpenGL. I've managed to define a particle group area, but I'm unsure how to draw the individual particles. Here's how I create the ParticleGroup: m_world.setParticleRadius(0.15f); m_world.setParticleDamping(0.2f); PolygonShape shape = new PolygonShape(); shape.setAsBox(8, 10, new Vec2(-12, 10.1f), 0); ParticleGroupDef pd = new ParticleGroupDef(); pd.shape = shape; m_world.createParticleGroup(pd); This is how I draw a normal square (which I don't know how to apply to groups of particles): public void draw(GLAutoDrawable gLDrawable, Vec3 position, float angle) { gLDrawable.getGL().getGL2().glEnable(GL.GL_BLEND); gLDrawable.getGL().getGL2().glEnable(GL.GL_TEXTURE_2D); gLDrawable.getGL().getGL2().glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA); gLDrawable.getGL().getGL2().glBindTexture(GL2.GL_TEXTURE_2D, TextureFactory.getTextureIndex(TextureCollection.valueOf(getTextureSelection()))); gLDrawable.getGL().getGL2().glPushMatrix(); gLDrawable.getGL().getGL2().glTranslatef(position.x * getP2M(), position.y * getP2M(), position.z); gLDrawable.getGL().getGL2().glRotated(Math.toDegrees(angle), 0, 0, 1); gLDrawable.getGL().getGL2().glBegin(GL2.GL_QUADS); gLDrawable.getGL().getGL2().glTexCoord2f(0.0f, 0.0f); gLDrawable.getGL().getGL2().glVertex3f(-getWidth() / 2 * getP2M(), -getHeight() / 2 * getP2M(), 0.0f); gLDrawable.getGL().getGL2().glTexCoord2f(0.0f, 1.0f); gLDrawable.getGL().getGL2().glVertex3f(-getWidth() / 2 * getP2M(), getHeight() / 2 * getP2M(), 0.0f); gLDrawable.getGL().getGL2().glTexCoord2f(1.0f, 1.0f); gLDrawable.getGL().getGL2().glVertex3f(getWidth() / 2 * getP2M(), getHeight() / 2 * getP2M(), 0.0f); gLDrawable.getGL().getGL2().glTexCoord2f(1.0f, 0.0f); gLDrawable.getGL().getGL2().glVertex3f(getWidth() / 2 * getP2M(), -getHeight() / 2 * getP2M(), 0.0f); gLDrawable.getGL().getGL2().glEnd(); gLDrawable.getGL().getGL2().glFlush(); gLDrawable.getGL().getGL2().glPopMatrix(); gLDrawable.getGL().getGL2().glDisable(GL.GL_TEXTURE_2D); gLDrawable.getGL().getGL2().glDisable(GL.GL_BLEND); } How should I do this? (Example code would be great.)
1
GLSL to Cg: why is the effect different? With reference to this question, where I was trying to make the shader compile, I am now trying to make an effect appear. The effect can be shown here, through a GLSL shader. But when I use the equivalent Cg shader, the result becomes this. Using the same images (color map + normal map) and the same code (except the way to retrieve variables). Here is the original GLSL shader: uniform sampler2D color_texture; uniform sampler2D normal_texture; void main() { // Extract the normal from the normal map vec3 normal = normalize(texture2D(normal_texture, gl_TexCoord[0].st).rgb * 2.0 - 1.0); // Determine where the light is positioned (this can be set however you like) vec3 light_pos = normalize(vec3(1.0, 1.0, 1.5)); // Calculate the lighting diffuse value float diffuse = max(dot(normal, light_pos), 0.0); vec3 color = diffuse * texture2D(color_texture, gl_TexCoord[0].st).rgb; // Set the output color of our current pixel gl_FragColor = vec4(color, 1.0); } And here is the Cg shader I wrote: struct fsOutput { float4 color : COLOR; }; uniform sampler2D color_texture : TEXUNIT0; uniform sampler2D normal_texture : TEXUNIT1; fsOutput FS_Main(float2 colorCoords : TEXCOORD0, float2 normalCoords : TEXCOORD1) { fsOutput fragm; float4 anorm = tex2D(normal_texture, normalCoords); float3 normal = normalize(anorm.rgb * 2.0f - 1.0f); float3 light_pos = normalize(float3(1.0f, 1.0f, 1.5f)); float diffuse = max(dot(normal, light_pos), 0.0); float3 color = diffuse * tex2D(color_texture, colorCoords).rgb; fragm.color = float4(color, 1.0f); return fragm; } Please let me know if something needs to be changed in order to obtain the effect, or if you need the C code.
1
removing triangle artifacts from overlapping height maps (OpenGL) I have two 100x100 height maps that I am drawing using a triangle mesh. One represents the land height, and the other represents the (water + land) height. I currently draw both meshes on top of each other, and I make the water mesh transparent where the water level is 0. But where the land meets the water there are lots of artifacts. I believe the cause is that where the water level is 0 the water mesh sits at the height of the land mesh, so at the edges the water mesh height changes from the land mesh height to the water surface height, and the visible slope is actually the water mesh slope and not the land mesh slope. That said, I don't know any good way to fix it. Note: the water level may not be flat, so replacing the water mesh with a plane won't work.
1
Using one GLSL shader program for textured and untextured rendering? Rather than have two separate shaders in my OpenGL code (one for when a texture is bound, one for when none is bound) I usually go for one shader program which handles both. This is my usual fragment shader version 330 uniform bool textured uniform sampler2D sampler in vec4 fColor in vec2 tCoord out vec4 final color void main() vec4 texel vec4(1.0, 1.0, 1.0, 1.0) if (textured) texel texture(sampler, tCoord) final color fColor texel In a modern OpenGL profile, this compiled but did not output anything but black pixels (alternatively in LWJGL legacy profile, it outputted regular colors if textured false and textured if textured true). I don't see any reason that this wouldn't work, but what is a method which would work for both textured and untextured fragments? EDIT My Shader Class My Main CPP File
1
OpenGL 3 Range Picking How do I perform range picking in the latest OpenGL version? By range picking I mean selecting all objects which are picked using a selection rectangle, like in an RTS game. For single object picking I'm using the ray picking method, which I guess can be used in this case as well, but I'm not sure how I should go about doing that. Could you give me any pointers? Thank you.
1
Wrong angles given by atan2 I have a method to calculate the angle between two vectors, but it is giving wrong values. Here is the code: float Vector2_Dot(Vector2 a, Vector2 b) { return a.x * b.x + a.y * b.y; } float Vector2_Cross(Vector2 a, Vector2 b) { return a.x * b.y - a.y * b.x; } float Vector2_Angle(Vector2 a, Vector2 b) { return atan2f(Vector2_Cross(a, b), Vector2_Dot(a, b)); } As you can see in the image below, the angle is slightly wrong. I'm giving the two positions marked by a black cross. Is the code correct, or do I have a problem elsewhere? Here is where I do my transformation matrix calculation: float Transform_GetLocalAngle(DeadTransform* transform) { if (transform->parent != NULL) return transform->parent->angle + transform->angle; else return transform->angle; } void Transform_Update(DeadTransform* transform, GLfloat depth) { float angle = Transform_GetLocalAngle(transform); Vector2 position = Transform_GetLocalPosition(transform); Vector2 scale = Transform_GetLocalScale(transform); GLfloat data[16] = { (GLfloat)cosf(angle) * scale.x, (GLfloat)(-sinf(angle)) * scale.y, 0.0f, position.x / 1000, (GLfloat)(sinf(angle)) * scale.x, (GLfloat)cosf(angle) * scale.y, 0.0f, position.y / 1000, 0.0f, 0.0f, 1.0f, depth, 0.0f, 0.0f, 0, 1.0f }; Matrix_SetData(transform->transformationMatrix, data); } This is where I create the road: Vector2 direction = Vector2_Subtract(tile->top->transform->position, tile->topLeft->transform->position); pos.x = tile->top->transform->position->x + direction.x / 2; pos.y = tile->top->transform->position->y + direction.y / 2; float angle = Vector2_Angle(tile->top->transform->position, tile->topLeft->transform->position); CreateConnection(application, pos.x, pos.y, angle); void CreateConnection(struct Application* application, float x, float y, float angle) { DeadGameObject* zone = GameObject_Create("Connection"); zone->transform->position->x = x; zone->transform->position->y = y; zone->transform->angle = angle; zone->transform->scale->x = 64; zone->transform->scale->y = 6; DeadRenderer* renderer = Renderer_Create(Texture2D_Create("Images/Connection.png", GL_CLAMP, GL_LINEAR)); Renderer_SetDepth(renderer, 2); GameObject_AddComponent(zone, renderer, Type_Renderer); Application_Instantiate(application, zone); }
1
Should I use the X Y plane when using an orthographic projection in OpenGL? I'm currently at a loss rendering a tile based 3D map with an orthographic projection in OpenGL. Imagine any isometric 3D game (using actual geometry instead of sprites). Internally, the tiles of my map have x and y coordinates (for column and row position within the map respectively). The tile with x 0 and y 0 is the one in the "top left", the one with y 0 and x 1 the one to the right of the first tile and so on. Since in OpenGL, the coordinate system is so that the X Z plane is the "ground" plane, I create the model matrix for each tile by using the tile's y coordinate for the matrix' (or position vector's) z component. I then set up an orthographic "camera" by using glm ortho(). This seems to work fine at first, but I quickly ran into clipping issues when trying to render larger parts of the map. I assume this is due to the fact that the orthographic projection clips on z. A quick and dirty attempt of having the map in the X Y plane confirmed this, as I didn't have clipping issues then. Now, I'm not sure what's the best approach here? Change the code so the tiles are being layed out in the X Y plane? In that case, it seems as if I would have to remember to rotate every single object around x accordingly, which seems wrong. Do I need to set up the orthographic projection manually to use Y as the clipping plane? Is that even possible? I think OpenGL always uses z for clipping, right? Some other approach? I think I'm lost in coordinates and matrices, really. I assume this is a common problem with a well known solution, so I'm hoping to learn about that before I go ahead, change a lot and then learn I did it the wrong way...
1
Rendering multiple textures on the same image for terrain with index buffers (LWJGL 3) I have started with LWJGL 3 and am trying to build a game engine. I'm stuck on generating terrains; this is my second big problem and I'm exhausted, so I need some advice on how to solve it. What I want to achieve: I want to have all textures in one place and to have some kind of texture map that I will use to render a specific texture at each place (example textures map), but the result is wrong. The only thing that I think is the problem is that I use shared points (index buffers), so instead of 8 points, I have 6. Is it possible that the texture then gets all messed up? Should I use all points, even if they are at the same or almost the same location? But then I would have a lot more vertices than actually needed.
1
What book/guide should I follow for GLSL? I searched a lot on the web without coming to a real solution, and if I ask this it's because I really have difficulties getting an answer. I need to learn GLSL 1.20 with OpenGL 2.1 well. I have bought Beginning OpenGL Game Programming, 2nd edition, but it talks too little about GLSL. So I'm following some tutorials like this one, but it just makes a lot of examples without explaining the theory. The problem is that if I want to know something, I have to search a lot, and often the thing that I'm searching for doesn't come straightforwardly. For example, I didn't know how to compute the direction between two points; I looked into some example code and discovered that it was done with the dot product, just by seeing the example code. But with this approach I waste a lot of time. I need a book/guide which tells me how to do the basic stuff and also explains the theory. I just feel like I'm travelling in fog. What book/guide would you suggest?
1
Is this a good way of separating graphics from game logic? My current architecture for my game engine looks like this, though it is not accurate: everything graphics related is done by the GraphicsEngine, and through its components (like Material, Mesh, etc). My problem is that I want to store the pointers in RenderData, but then I have to include the Mesh, Material etc. header files, which have included glew. I currently change an object's material using GetRenderer().SetMaterial("xyz"), which sets a string in the renderData, to be processed by the graphics engine; then the correct pointer will be set, if it exists. This is not so modular, because the scene has graphics related files included, like glew. This is a problem. My only solution is to store indices in RenderData. There won't be a material pointer, but instead an index where the material is in the GraphicsEngine's material store. This way, RenderData is just a "blind" integer and string store, with which the Renderer and the GraphicsEngine work. Is this a good solution? Meshes have VertexData members (position, normal, texture). When I call GraphicEngine.CreateMesh(), passing the MeshName and FileName, where should the file processing go? I use Tiny Obj Loader, and I don't know where I should include it and call its function. Option one: I call the function from inside GraphicsEngine, then I transform the returned structures to my Mesh's structure, which I pass to the Mesh's constructor; the initialiser list will assign it to the corresponding member variable. Option two: inside Mesh, I pass the FileName to the Mesh constructor, and let it handle it all by itself. I think the first solution is better, but I don't really know why. Maybe using GraphicsEngine to "create" assets is better than GraphicsEngine commanding assets to "be created", but this is just a personal feeling. Which solution is better?
1
Which code is faster to convert -1 to 0 and 1 to 1? I'm writing a shader for rendering the two sides of triangles with different colors. I have a value mediump float back = dot(V, N); which is positive if the normal faces away from the camera and negative if towards the camera. To correctly select which diffuse color to shade a particular fragment, I need something like this: lowp vec4 color = max(0.0, sign(-back)) * front_diffuse + max(0.0, sign(back)) * back_diffuse; It's more or less an "unrolled conditional"... Which code produces faster instructions: x * 0.5 + 0.5, or max(0.0, x)? The former may be better mathematically behaved than the latter, but maybe min/max clamping is super efficient in hardware.
1
OpenGL ES 2.0 Calculated Cube Vertex Normals Verification Could I kindly ask to confirm, that the calculated normals are correct, please? I have calculated them on my own, but my testcube is still strangely lighted within OpenGLES 2.0. The vertices were exported from 3D authoring application. The vertex normals were calculated using matrix cross product. The vertex and fragment shaders I were exact copies from the book, so there should not be a bug. The output in OpenGL looks like the sphere is lighted from its bottom. Moreover, it looks, like the box is visible from inside, as the top part is always transparent. Any idea, where can be a problem? Added The vertex normals are calculated based on triangulated quad polygons. The triangulation is performed internally, by the 3d application. Vertices indices are generated by calling API calls of that application. begin 8 vertices vs 24 v 25.000000 25.000000 25.000000 v 25.000000 25.000000 25.000000 v 25.000000 25.000000 25.000000 v 25.000000 25.000000 25.000000 v 25.000000 25.000000 25.000000 v 25.000000 25.000000 25.000000 v 25.000000 25.000000 25.000000 v 25.000000 25.000000 25.000000 end 8 vertices begin 12 faces fs 36 f 0 2 3 f 0 3 1 f 0 1 5 f 0 5 4 f 0 4 6 f 0 6 2 f 1 3 7 f 1 7 5 f 2 6 7 f 2 7 3 f 4 5 7 f 4 7 6 end 12 faces begin 8 calculated normals ns 24 n 0.577350 0.577350 0.577350 n 0.816497 0.408248 0.408248 n 0.408248 0.816497 0.408248 n 0.408248 0.408248 0.816497 n 0.408248 0.408248 0.816497 n 0.408248 0.816497 0.408248 n 0.816497 0.408248 0.408248 n 0.577350 0.577350 0.577350 end 8 calculated normals I use OpenGL ES 2.0, but that should not be relevant at this moment. Update I can confirm now, that the calculated vertex normals are correct, and can therefore be used by others, to verify correctness of their triangulated cube vertex normals calculations. However, I still have a problem that all the vertices, which are in positive z axis, are not rendered. 
Not sure if they are culled, but I have a pretty large frustum. Vertices in the -z axis are kept, so the rendered mesh looks like it is cut in half, as it is positioned at 0 on the z axis. The lines below show that I should not consciously be setting up a small render area: mProjection = glm::perspective(45.0f, (float)WINDOW_WIDTH / (float)WINDOW_HEIGHT, -1130.0f, 1130.0f); mViewTransformation = glm::lookAt(glm::vec3(vIntCamera.x, vIntCamera.y, 1130.0f), glm::vec3(vIntCamera.x, vIntCamera.y, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f)); Any ideas, please, what else should I check? Solved: the eye position of the camera (in the glm::lookAt() function) should be above the top of the perspective frustum defined by glm::perspective(). Thanks for your support guys, especially with the upper part of my question.
1
How can I implement a camera like the one in RotMG? RotMG, an MMO top down shooter, takes on a unique 2d 3d style, and has an intriguing camera The game is obviously 3d, not simply isometric, and if you play the game and turn on camera rotation you will notice the effect the game produces, like so. RotMG is made using flash, and I am currently experimenting with libgdx (which uses lwjgl opengl). How should I go about implementing a RotMG like camera?
1
Blending and shadow mapping? I am trying to implement shadow mapping, and currently I have 2 point lights and 1 global ambient light source, and my rendering loop looks roughly like this (the details are not relevant): void OpenGLRenderer::DrawRenderables(const uint32_t windowWidth, const uint32_t windowHeight, const RenderQueue& renderQueue, const RenderableLighting& lighting) { GLCALL(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)); for (uint32_t lightIndex = 0; lightIndex < lighting.mUsedLights; lightIndex++) { if (lighting.mLights[lightIndex].mLightType == LightType::LIGHT_TYPE_POINT) { GLCALL(glUseProgram(mShadowMapProgram.mProgramHandle)); GLCALL(glBindFramebuffer(GL_FRAMEBUFFER, mFrameBuffer)); GLCALL(glViewport(0, 0, mShadowTextureWidth, mShadowTextureHeight)); for (uint32_t faceNum = 0; faceNum < CUBEMAP_NUM_FACES; faceNum++) { GLCALL(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + faceNum, mShadowMapTexture, 0)); GLCALL(glClear(GL_DEPTH_BUFFER_BIT)); ShadowPass(); } GLCALL(glBindFramebuffer(GL_FRAMEBUFFER, 0)); } // final pass GLCALL(glUseProgram(mDefaultProgram.mProgramHandle)); GLCALL(glViewport(0, 0, (GLsizei)windowWidth, (GLsizei)windowHeight)); // enable blending, how? ShadingPass(); } GLCALL(glUseProgram(0)); } The problem is that the screen will only be shown with the last light in the light list, for example the ambient light, therefore ignoring the effects of the point lights. I assume I somehow need to do blending, but I was wondering how and where this is done? Thanks
1
How to draw 2D pixel data with OpenGL I am fairly new to OpenGL. I have a 2D game in SDL2 that currently works by creating an SDL_Surface from the pixel data, copying it into an SDL_Texture, and rendering it to the screen with SDL_Renderer. But rather than using SDL to render the pixels, I'd like to switch to OpenGL. The reason I'd like to switch is that I need to render some lines on top of the pixel data and SDL_RenderDrawLine() just doesn't have all the features I need (like line thickness or glScissor). At first, I attempted to switch to OpenGL by using glDrawPixels() and I was happy with the results. However, I found out that glDrawPixels() does not seem to be available in OpenGL ES (mobile devices). I have looked through some tutorials, but they all use shaders and other fancy stuff that I don't think I really need. Is there a simple way (like glDrawPixels()) to just draw pixel data to the screen for a 2D game? The pixel data is in the format GL_UNSIGNED_BYTE and it contains everything that I want to draw on the screen (except for several 2D line segments that I plan on using GL to render on top).
1
Render on texture with alpha 0 for the background colour Sorry if the question is stupid but I am very new to OpenGL. I render a scene with a couple of objects to a target, over a background that is nothing more than the clear colour (which is 0.0, 0.0, 0.0, 0.0). When I access the texture again, every single pixel has 1 as alpha. How can I make sure the background keeps 0.0? Thank you, Faye
1
What is an appropriate OpenGL(ES) based game engine meeting these criteria? I want to try my hands on a little shooter on the Android platform, and I'm looking for a full featured 3D game engine. I can't afford to pay more than 400 for a license, so expensive engines are excluded from the start. These are my requirements The engine has to have good level editor (able to place entities, actors and triggers) so "graphics engines" like Irrlicht and Horde 3D aren't what I'm looking for. The engine should be based on OpenGL Core and use shaders rather than the fixed function pipeline. It shouldn't use octrees and BSP PVS. It needs to have decent occlusion culling because I need good frame rates on not very powerful graphics cards. It should come with source code. I like Horde 3D very, very much and I like it's smart design, rendering capabilities and compactness. Sadly, I can't use it due to the lack of tools. So far I've come across Torque 3D, C4 engine, Shiva 3D, and Unity 3D. Torque 3D is really nice, decent design, has good tools, good performance, it's really cheap and comes with source code. Sadly it's only DirectX for now. Unity has a lot of features, decent performance and tools, runs on Android but they don't give source code within my price range. C4 is good enough, has tools, and source code but there's a catch. While you have source code the source isn't ported to Android and the engine's owner expressly forbids anyone to port source code on other platforms or to release a game on not officially supported platforms. Shiva 3D seems nice, too, supports Android, has tools but they don't give you source code license for a decent sum. That's all I could find. I used the list on devmaster.net, I searched this site, I've searched gamedev.net, gamasutra, polycount.com and of course Google and Bing. Any suggestions that could help me?
1
clamp a 2D coordinate to fit within an ellipse I need to clamp a 2D coordinate to fit within an ellipse. Call of Duty Modern Warfare 2 does something similar where capture points are translated from a 3D vector in the world to a 2D screen coordinate and then the 2D coordinates are clamped within an ellipse. When the capture points are in view they're within the bounds of the ellipse. When they're behind you they are clamped to be within the bounds of the ellipse. Given a 2D coordinate that could be off screen, etc, what is the math behind clamping it within an ellipse?
1
Optimising Voxel World Rendering There are a couple of questions like this already. This one is different because it is specifically about rendering, as opposed to navigation and generation. I've implemented suggestions from here already. The current optimisations are Only render block faces that aren't adjacent to another face Render blocks in chunks, each chunk as one mesh (VAO) According to the NetBeans profiler, 96 of time is spent on the glfwSwapBuffers(long window) method, which is when OpenGL renders the scene. This implies that my other operations are efficient enough, compared to rendering. What else can I do to improve rendering performance? The chunk size is 16x16x16 voxels. Sample renders and FPS 64 chunks 500 FPS 125 chunks 300 FPS 1000 chunks 45 FPS 4096 chunks 12 FPS
1
Reinhard tone mapping and color space I found two ways of doing tone mapping (first, second). This part of the code is the same for both versions: float lum = dot(rgb, vec3(0.2126f, 0.7152f, 0.0722f)); float L = (scale / averageLum) * lum; float Ld = (L * (1.0 + L / lumwhite2)) / (1.0 + L); First: vec3 xyY = RGBtoxyY(rgb); xyY.z = Ld; rgb = xyYtoRGB(xyY); Second: rgb = (rgb / lum) * Ld; For an example pixel the equations above produce different results. Which way is correct?
1
SFML Segmentation Fault when using VBOs? I'm trying to follow along with the gltut tutorials and for some reason when I call GLDrawArrays my program segmentation faults. I've been looking at the state of my application with the Mac OpenGL Profiler and Googling and I can't seem to figure out why it isn't working. I may have missed setting some OpenGL state somewhere. My code is here, it segfaults on line 32. If I don't call glEnableClientState(GL VERTEX ARRAY) and the respective glDisableClientState(GL VERTEX ARRAY) it compiles and runs, but I get no rendering, which is what I would expect given my limited OpenGL knowledge.
1
Procedural texturing with OpenGL I have a hexagonal grid of fields; each field has a certain terrain type. I assign every vertex of a hexagon its terrain type and pass it as an attribute to the vertex and then fragment shader. Then I use the interpolated value to blend the terrain textures together. For instance, 1 is grassy, 2 is desert. Vertex shader: varying vec2 vUv; attribute float terrainType; varying float vTerrainType; void main() { vUv = uv; vTerrainType = terrainType; vec4 mvPosition = modelViewMatrix * vec4(position, 1.0); gl_Position = projectionMatrix * mvPosition; } Fragment shader: uniform vec3 diffuse; uniform float opacity; varying vec2 vUv; varying float vTerrainType; uniform sampler2D map; uniform sampler2D map1; void main() { gl_FragColor = vec4(diffuse, opacity); vec4 texelColor = texture2D(map, vUv) * (2.0 - vTerrainType) + texture2D(map1, vUv) * (vTerrainType - 1.0); gl_FragColor = gl_FragColor * texelColor; } The implementation is WebGL, if it makes any difference. The result looks really unnatural, so is there any way to make it smoother and rounder?
1
What is the right process to get compatibility or at least a workaround for the Threaded optimization feature of NVIDIA? It's peculiar this issue is not well understood on NVIDIA forums and project forums. For example, the well known ioquake3 project based on id tech 3 requires to force 'Threaded optimization' off on the NVIDIA settings or there are severe FPS drops. Do you know what a programmer has to do to acquire compatibility with the feature or at least a workaround to not get issues with it (e.g. turning it off explicitly via the application or other means)? The main importance is that such projects get issues with it by default since not all users know they have to explicitly turn off the feature.
1
Getting OpenGL hardware acceleration with SDL on Linux I'm trying to use SDL + OpenGL but I don't believe hardware acceleration is working, because the framerate for around 18000 polys is about 24 fps on a quad core machine and a hopeless 1-2 fps on an Intel Atom. Even the quad core starts to struggle when the poly count rises above this. I've checked my code over but I'm clearly missing something obvious. I've changed my SDL initialisation code to use the same code as in the SDL OpenGL test. It reports that SDL_GL_ACCELERATED_VISUAL is 1 but that hw_available in SDL_VideoInfo is 0. Also the vendor is reported correctly as Nvidia on both machines, and accelerated apps such as Compiz and glxgears work fine. Any ideas of what to try? Thanks
1
GLSL: Rewriting shaders from 330 to 130 I recently created a game (LD21) that uses a geometry shader to convert points into textured triangles + culling. Since I was under the impression that support for 330 was widespread I only wrote 330 shaders, but it seems that a lot of not-so-old hardware only supports 130 (according to GLView). Now, since I'm only familiar with the 330 core functionality, I am having trouble rewriting my shaders to 130. The fragment shader was quite trivial to rewrite, but I've only managed to get my vertex and geometry shader down to 150. So, is it possible to rewrite the shaders, or would it require a lot of changes in my rendering engine? Geometry shader: #version 150 layout(points) in; layout(triangle_strip, max_vertices = 4) out; uniform mat4 oMatrix; in VertexData { vec4 position; vec4 texcoord; vec4 size; } vert[]; out vec2 gTexCoord; void main() { if (vert[0].position.x > -4.0 && vert[0].position.x < 4.0 && vert[0].position.y > -2.0 && vert[0].position.y < 2.0) { gTexCoord = vec2(vert[0].texcoord.z, vert[0].texcoord.y); gl_Position = vert[0].position + vec4(vert[0].size.x, vert[0].size.y, 0, 0); EmitVertex(); gTexCoord = vec2(vert[0].texcoord.x, vert[0].texcoord.y); gl_Position = vert[0].position + vec4(0.0, vert[0].size.y, 0, 0); EmitVertex(); gTexCoord = vec2(vert[0].texcoord.z, vert[0].texcoord.w); gl_Position = vert[0].position + vec4(vert[0].size.x, 0.0, 0, 0); EmitVertex(); gTexCoord = vec2(vert[0].texcoord.x, vert[0].texcoord.w); gl_Position = vert[0].position; EmitVertex(); EndPrimitive(); } } Vertex shader: #version 150 #extension GL_ARB_explicit_attrib_location : enable layout(location = 0) in vec2 position; layout(location = 1) in vec4 textureCoord; layout(location = 2) in vec2 size; uniform mat4 oMatrix; uniform vec2 offset; out VertexData { vec4 position; vec4 texcoord; vec4 size; } outData; void main() { outData.position = oMatrix * vec4(position.x + offset.x, position.y + offset.y, 0, 1); outData.texcoord = textureCoord; outData.size = oMatrix * vec4(size.x, size.y, 0, 0); }
1
How to put the camera inside a cube in OpenGL I'm really new to OpenGL and can't seem to figure out how to put the "camera" inside the cube I've created so that I can move in an FPS like style. I tried to use gluLookAt and gluPerspective but I'm clearly missing some steps. What should I do before gluLookAt? Here's the code written so far int rotate Y used to rotate the cube about the Y axis int rotate X used to rotate the cube about the X axis the display function draws the scene and redraws it void display() clear the screen and the z buffer glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glLoadIdentity() resets the transformations glRotatef(rotate X, 1.0, 0.0, 0.0) glRotatef(rotate Y, 0.0, 1.0, 0.0) Front Face of the cube vertex definition glBegin(GL POLYGON) glColor3f(0.0, 1.0, 0.0) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glEnd() Back Face of the cube vertex definition glBegin(GL POLYGON) glColor3f(1.0, 0.0, 0.0) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glEnd() Right Face of the cube vertex definition glBegin(GL POLYGON) glColor3f(1.0, 0.0, 1.0) glVertex3f(0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glEnd() Left Face of the cube vertex definition glBegin(GL POLYGON) glColor3f(0.7, 0.7, 0.0) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f( 0.5f, 0.5f, 0.5f) glEnd() Upper Face of the cube vertex definition glBegin(GL POLYGON) glColor3f(0.7, 0.7, 0.3) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glEnd() Bottom Face of the cube vertex definition glBegin(GL POLYGON) glColor3f(0.2, 0.2, 0.8) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f( 0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glVertex3f(0.5f, 0.5f, 0.5f) glEnd() glFlush() 
glutSwapBuffers(); // send image to the screen } int main(int argc, char** argv) { // initialize GLUT glutInit(&argc, argv); // request double buffering, RGB colors window and a z buffer glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); // create a window glutInitWindowSize(600, 600); glutInitWindowPosition(100, 100); glutCreateWindow("Space"); // enable depth glEnable(GL_DEPTH_TEST); // callback functions glutDisplayFunc(display); // display redraws the scene glutSpecialFunc(specialKeys); // special allows interaction with special keys // pass control to GLUT for events glutMainLoop(); return 0; // this line is never reached } Feel free to ask more about what I'm trying to achieve if I wasn't clear enough. Thanks for your help.
1
How to change the color of an invalid texture? In openGL, when I don't bind a texture, or I bind a texture that was loaded incorrectly, any calls to texture2D texelFetch in the shader will return vec4(0, 0, 0, 1), is there a way to change this to return vec4(1, 1, 1, 1) instead? Initially I looked for a way to test if the sampler2D was valid, and when I couldn't find that I tried using glColor4f, but that doesn't seem to work either although that could be the amdgpu driver, it seems to have some trouble with the old pipeline calls generally. Is there someway to change what this default return color is?
1
How do I create the playing field for my game? I want to create a game with playfield as shown in the video in OpenGL www.youtube.com user stanfordcs248 I have fair knowledge of OpenGL and I know this playfield can be rendered using several different cubes but that takes a lot of effort. Also it takes away the power of easily modifying the field. One idea that was suggested to me to solve this problem was to use an XML file describing almost every component of the game. Like the floor, walls , moving floors, moving obstacles etc. That way i can easily change the playfield when I want. But this approach also seems tedious. Also I couldn't figure out the structure of the XML that would make my life quite easy. So is there any better way to create or store information about the playfield than suggested above? Note Blender is also an option. But please provide a tutorial if you are suggesting the Blender way.
1
View matrix in OpenGL Sorry for my clumsy question, but I don't know where I am going wrong in creating the view matrix. I have the following code: createMatrix(vec4f(xAxis.x, xAxis.y, xAxis.z, dot(xAxis, eye)), vec4f(yAxis.x, yAxis.y, yAxis.z, dot(yAxis, eye)), vec4f(zAxis.x, zAxis.y, zAxis.z, dot(zAxis, eye)), vec4f(0, 0, 0, 1)) // column1, column2, ... I have tried to transpose it, but with no success. I have also tried to use gluLookAt(...), with success. At the reference page I looked at the remarks about the matrix to be created, and it seems the same as mine. Where am I wrong?
1
OpenGL doesn't draw (3.3+) Brief: I've been following this tutorial about OpenGL for 2 days, and I still can't get a triangle drawn, so I'm asking for help here. The tutorial is aimed at OpenGL 3.3 programming, using vertex arrays, buffers, etc. The libraries are GLFW3 and GLEW, and I set them up by myself. The screen stays black all the time. Full code link here (it's just like a Hello World OpenGL program). Further details: I get no errors at all. I downloaded a software tool to test my video card, and it supports OpenGL 4.1. Standard OpenGL code for drawing (from an earlier version) such as this one works normally. I'm using Microsoft Visual Studio 10.0. I presume all the OpenGL setup was done right. I added Additional Dependencies to the linker as glew32.lib, opengl32.lib, glfw3.lib. The glew.dll was placed at SysWOW64 because I'm running 64-bit Windows, and GLEW is 32-bit. Notes: I've been working hard to find out what this is, but I can't. I would appreciate it if anyone could test this code for me, so I can know whether I implemented something wrong, and that it's not my code.
1
Is it possible to gain performance by omitting vertex normals in the GPU pipe? I am working on a rendering problem where I want to render as many raw triangles to the screen as I can with either OpenGL or DirectX with the absolute fastest performance possible. I wondered about omitting vertex normals completely and only transforming vertex positions during the vertex shader stage. 1) Is this possible? 2) Is it actually going to increase performance or is the "bare metal" of the GPU designed in such a way that trying to omit normals won't gain any more throughput? p.s. Yes, I realize that omitting normals will leave you with the problem of how to shade the triangle during the shader stage, but I could at least render a solid color to the screen (no shading). At this point, all I'm wondering about is how much data I can eliminate from the typical pipeline to increase the pipeline throughput to the absolute maximum.
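Whether it helps depends on whether you are vertex-bandwidth-bound at all. Some rough arithmetic (assumptions: 32-bit floats, unindexed triangles, no other attributes) shows what is at stake per million triangles:

```python
# Back-of-envelope vertex bandwidth, assuming 32-bit float attributes
# and unindexed geometry (the worst case: 3 vertices per triangle).
floats = 4                        # bytes per float
pos_only = 3 * floats             # x, y, z
pos_normal = (3 + 3) * floats     # position + normal
tris = 1_000_000
verts = 3 * tris
saved_bytes = verts * (pos_normal - pos_only)  # bytes saved by dropping normals
```

So dropping normals halves the per-vertex payload here, but if the bottleneck is fill rate or triangle setup rather than vertex fetch, the measured gain can be far smaller; profiling is the only real answer.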
1
Compressing textures and recompressing textures Which tools are considered best quality for compressing textures for use in OpenGL? Which can be used from the Linux commandline? And which lossless compressors give good ratio speed on compressed textures? (I see that Skyrim's BSA archives are typically only half the size of the DDSes they contain)
1
SDL2 Draw scene to texture. SDL2 RenderTexture like SFML I've been developing a 2D Engine using SFML ImGui. The editor is rendered using ImGui, and the scene window is an sf RenderTexture where I draw the GameObjects, which is then converted to an ImGui Image to render it in the editor. Now I need to create a 3D Engine during this year in my Bachelor Degree, but using SDL2 ImGui, and I want to recreate what I did with the 2D Engine. I've managed to render the editor like I did in the 2D Engine using this Example that comes with ImGui. But I don't know how to create an equivalent of sf RenderTexture in SDL2, so I can draw the 3D scene there and convert it to an ImGui Image to show it in the editor. If you can provide code, that would be even better. And if you want me to provide any specific code, tell me. Thanks!
1
Region selection in OpenGL If I have a mesh of triangles and going to make a selection on it using a region (and not rectangle using Glu.gluPickMatrix(...) ), how can I implement it?
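For an arbitrary region (a lasso), a common approach is to project each vertex to window coordinates (gluProject or your own MVP transform) and then run a 2D point-in-polygon test against the region outline. The standard even-odd ray-casting test is only a few lines:

```python
def point_in_polygon(pt, poly):
    """Even-odd rule: is pt inside the closed polygon given as a list of (x, y)?"""
    x, y = pt
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Does the horizontal ray from pt cross edge (i, j)?
        if (yi > y) != (yj > y):
            if x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside
        j = i
    return inside

lasso = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]   # region in screen space
```

A triangle of the mesh is then selected if, say, all three (or any one, depending on the desired semantics) of its projected vertices pass the test. For large meshes an occlusion-aware variant renders triangle IDs to an offscreen buffer instead.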
1
How can I improve rendering speeds of a Voxel Minecraft type game? I'm writing my own clone of Minecraft (also written in Java). It works great right now. With a viewing distance of 40 meters I can easily hit 60 FPS on my MacBook Pro 8,1 (Intel i5, Intel HD Graphics 3000). But if I put the viewing distance at 70 meters, I only reach 15 25 FPS. In the real Minecraft, I can put the viewing distance on far ( 256m) without a problem. So my question is what should I do to make my game better? The optimisations I implemented Only keep local chunks in memory (depending on the player's viewing distance) Frustum culling (first on the chunks, then on the blocks) Only drawing really visible faces of the blocks Using lists per chunk that contain the visible blocks. Chunks that become visible add themselves to this list. If they become invisible, they are automatically removed from this list. Blocks become (in)visible by building or destroying a neighbour block. Using lists per chunk that contain the updating blocks. Same mechanism as the visible block lists. Use nearly no new statements inside the game loop. (My game runs about 20 seconds until the Garbage Collector is invoked) I'm using OpenGL call lists at the moment (glNewList(), glEndList(), glCallList()) for each side of a kind of block. Currently I'm not even using any sort of lighting system. I've already heard about VBOs, but I don't know exactly what they are. However, I'll do some research about them. Will they improve performance? Before implementing VBOs, I want to try to use glCallLists() and pass a list of call lists, instead of calling glCallList() a thousand times. (I want to try this because I think that the real Minecraft doesn't use VBOs. Correct?)  Are there other tricks to improve performance? 
VisualVM profiling showed me this (profiling for only 33 frames, with a viewing distance of 70 meters) Profiling with 40 meters (246 frames) Note I'm synchronising a lot of methods and code blocks, because I'm generating chunks in another thread. I think that acquiring a lock on an object is a performance issue when doing it this much in a game loop (of course, I'm talking about the time when there is only the game loop and no new chunks are generated). Is this right? Edit After removing some synchronised blocks and some other little improvements, the performance is already much better. Here are my new profiling results with 70 meters. I think it is pretty clear that selectVisibleBlocks is the issue here. Thanks in advance! Martijn Update After some extra improvements (like using for loops instead of for each, buffering variables outside loops, etc...), I can now run viewing distance 60 pretty well. I think I'm going to implement VBOs as soon as possible. PS All source code is available on GitHub https github.com mcourteaux CraftMania
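On the "only drawing really visible faces" point, the usual trick is to decide visibility per face at mesh-build time, not per frame: a face is emitted only if the neighbouring cell in that direction is air. A minimal sketch of the neighbour check (cell coordinates only, no GL):

```python
def visible_faces(solid):
    """Count cube faces adjacent to air, given a set of (x, y, z) solid cells."""
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    count = 0
    for (x, y, z) in solid:
        for dx, dy, dz in dirs:
            if (x + dx, y + dy, z + dz) not in solid:
                count += 1          # neighbour is air: this face is visible
    return count

single = visible_faces({(0, 0, 0)})            # lone cube: all 6 faces
pair = visible_faces({(0, 0, 0), (1, 0, 0)})   # two touching cubes hide 2 faces
```

Running this once per chunk when the chunk changes, and baking the surviving faces into one VBO per chunk, removes the per-frame selectVisibleBlocks cost entirely: the render loop then just draws one buffer per chunk.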
1
What rotation needs to be applied to align mesh with expected axis of target? I'm using LWJGL and JOML to create a 3D view of hexagons whose positions lie on a torus. I have a number (NxM) of hexagons, whose centres and normals I have calculated so they are placed on the torus and completely cover its surface, but in the "game" engine I'm using I need to convert each item being rendered to a position and 3 rotation angles. I'm struggling to go from the 3 normals of the item to the 3 angles. EDIT Subsequent to posting this I have got some way in creating a matrix with the angles and converting to Euler angles, everything is now turned according to those angles, but they aren't facing the directions I expect. The background I'm trying to create a visualisation of Conway's Game of Life using hexagons, but instead of a simple plane, mapping each hexagon onto a torus. I've done the maths to calculate the centres of every hexagon, and the 3 direction unit vectors that they need to point to when in their places around the torus. For illustrative purposes, here's a view of the torus and 2 hexagons that would lie on it (not real, this is just me mocking it up in Blender). What I'm struggling to understand is how to rotate the single mesh for a hexagon to its calculated normals at the position I want to place it. i.e. How do I rotate some "unit" hexagon mesh (loaded from an OBJ file exported from Blender) to point in the direction of the 3 normals I've calculated for each hexagon around the torus? I have read a similar question here, but I'm struggling to get from the idea of the 4d rotation matrix to how I convert that to a Vector3f for rotations. I have the 3 vector normals and could create the 4d matrix, but I need a Vector3f (the rotations about x y z) so the mesh is drawn correctly. My code is here. 
I'm following this guide for using LWJGL to create GameItems (my hexagons) and position rotate them from a loaded obj file mesh, but as I say, I'm struggling to calculate the rotation Vector3f needed to point in the same direction I've calculated. Here's the code section relevant to the problem at hand val mesh loadMesh("conwayhex models simple hexagon.obj") hexGrid.hexAxes().forEach (location, axis) -> axis is a Matrix3f with my 3 normals at the centre of the hexagon, e.g cX cY cZ 0 1 0 0 0 1 1 0 0 val gameItem GameItem(mesh) gameItem.position location gameItem.scale 0.2f TODO calculate this according to the torus size what rotation do I give this? How do I calculate it from the given axis for the current item? gameItem.rotation Vector3f(30f, 30f, 30f) gameItems gameItem The output of the application given the above static 30 degree rotation is Can anyone help me understand how I apply the rotation to my items so they align to what I've calculated they should be?
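The three unit axis vectors, stacked as the columns of a 3x3 matrix, already *are* the rotation; the delicate part is extracting Euler angles in the same order the engine applies them. The sketch below assumes the engine composes Rz·Ry·Rx on column vectors (a common convention, but check yours; if the order differs, the extraction formulas differ too):

```python
import math

def rot_x(a): return [[1, 0, 0], [0, math.cos(a), -math.sin(a)], [0, math.sin(a), math.cos(a)]]
def rot_y(a): return [[math.cos(a), 0, math.sin(a)], [0, 1, 0], [-math.sin(a), 0, math.cos(a)]]
def rot_z(a): return [[math.cos(a), -math.sin(a), 0], [math.sin(a), math.cos(a), 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def euler_zyx(R):
    """Angles (x, y, z) such that R = Rz(z) . Ry(y) . Rx(x).
    Assumes |R[2][0]| < 1 (no gimbal-lock branch in this sketch)."""
    y = math.asin(-R[2][0])
    x = math.atan2(R[2][1], R[2][2])
    z = math.atan2(R[1][0], R[0][0])
    return x, y, z

# Round trip: build a rotation from known angles, then recover them.
R = matmul(rot_z(0.1), matmul(rot_y(0.2), rot_x(0.3)))
angles = euler_zyx(R)
```

In practice it is often simpler to skip Euler angles entirely and hand the engine the matrix (or a quaternion built from it) directly, since Euler extraction has the gimbal-lock branch and the order dependence shown above.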
1
How can I write only to the stencil buffer in OpenGL ES 2.0? I'd like to write to the stencil buffer without incurring the cost of my expensive shaders. As I understand it, I write to the stencil buffer as a 'side effect' of rendering something. In this first pass where I write to the stencil buffer, I don't want to write anything to the color or depth buffer, and I definitely don't want to run through my lighting equations in my shaders. Do I need to create no op shaders for this (and can I just discard fragments), or is there a better way to do this? As the title says, I'm using OpenGL ES 2.0. I haven't used the stencil buffer before, so if I seem to be misunderstanding something, feel free to be verbose.
1
PBR Metalness Implementation specColor at Value 1? I am doing a BRDF supporting the metalness roughness workflow. I know that in cases of Metalness 1.0 the reflectance value is taken from the albedo map so is the specular color to tint the highlights. My question is if the specular color is taken at the same value that it is in the albedo or is it set to 1.0 first and then used ? Example A Pixel in my Albedo Map has a Hue Sat Value of 0.5 0.5 0.5 With metalness 1 the reflectance is 0.5. Is specColor a) 0.5 0.5 0.5 b) 0.5 0.5 1.0 (to not dim the specular) ? Thanks, Jens
1
Projection Matrix Breaks My Rectangle This is my vertex shader, shown below. #version 330 core in vec3 a_position in vec4 a_colour FOV 70, near plane 0.1, far plane 1000 const mat4 u_projection = mat4( 1.428148, 0.0, 0.0, 0.0, 0.0, 1.428148, 0.0, 0.0, 0.0, 0.0, 1.0001999, 0.20002, 0.0, 0.0, 1.0, 0.0 ) uniform mat4 u_projection uniform mat4 u_view uniform mat4 u_transformation out vec4 v_colour void main() gl_Position = u_projection u_transformation vec4(a_position, 1) v_colour = a_colour Whenever I take out u_projection, my square appears. When I add it back, the square is malformed. The vertices of my square are as follows, aka the contents of a_position. float vertices 0, 0, 0.5f, 0.5f, 0.5f, 0.5f, 0, 0.5f, 0.5f, 0, 0, 0.5f, 0f, 0.5f, 0.5f, 0, 0.5f, 0.5f The position of the square is (0, 0, 0), the rotation is (0, 0, 0) and the scale is 1. These are computed into u_transformation and uploaded. This works perfectly. If I change the 1.0001999 and the last 0 to 1 part to 1 then the square is not hidden. void bindAttribute(int index, String name) GL20.glBindAttribLocation(program, index, name) bindAttribute(0, "a_position") bindAttribute(1, "a_colour")
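The constants in that matrix do match the usual tan-based perspective formula, as the arithmetic below reproduces. Two things worth checking, though: the sign convention (the standard GL projection carries -(f+n)/(f-n), -2fn/(f-n) and -1 in those slots, so the all-positive variant flips the view direction), and the fact that with near = 0.1 any geometry sitting at z = 0 in view space is in front of the near plane and gets clipped, which is a common reason a quad "disappears" the moment the projection is applied:

```python
import math

# Reproduce the hard-coded projection constants from FOV 70, near 0.1, far 1000.
fov_deg, aspect, near, far = 70.0, 1.0, 0.1, 1000.0

f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # the 1.428148 diagonal entries
a = (far + near) / (far - near)                  # the 1.0001999 entry (sign aside)
b = 2.0 * far * near / (far - near)              # the 0.20002 entry (sign aside)
```

Also remember that GLSL's mat4 constructor is column-major, so a matrix written out row-by-row in source must be transposed relative to the maths notation.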
1
Why do tutorials use different approaches to OpenGL rendering? http www.sdltutorials.com sdl opengl tutorial basics http www.opengl tutorial.org beginners tutorials tutorial 2 the first triangle These two tutorials use completely different approaches to get nearly the same result. The first uses things like glBegin(GL QUADS). The second one uses stuff like vertexBufferObjects, shaders based on GLEW. But the result is the same you get basic shapes. Why do these differences exist? The first approach seems much easier to understand. What's the advantage of the complicated second approach?
1
Do UV coordinates need correction for moving object? The image above is a static capture of a dynamic OpenGL project I created in which I wrapped a NASA albedo, i.e., sans clouds, image on an OpenGL generated sphere. In so doing, I also generated the UV coordinates associated with each vertex position. This was an incremental learning effort in which I had already applied model matrix corrections to the vertex positions for the rotating and translating (orbiting) "Earth". I was surprised to find that I did not have to apply a model matrix correction to the UV coordinates. I have tentatively concluded that once the jpg image coordinates are associated with the corresponding vertex positions with the UV coordinates in the range of 0, 1 , they are fixed and need no further correction. Does that sound correct, or is there more to the situation?
1
Skeletal animation unexpected quaternion values I'm following ThinMatrix's skeletal animation tutorial, and his custom written Collada parser is deriving a different rotation quaternion from each keyframe matrix than the Assimp importer library does. They are close, except the values are signed inversely and unpredictably. I dug around his importer code but it's a mess. I assume he is manually performing an additional axis calculation somewhere, any ideas what it could be and why? Original Matrix 1 0 0 0 0 0.06466547 0.997907 0 0 0.997907 0.06466556 3.810999 0 0 0 1 Assimp rotation quaternion (0.7296115 , 0 , 0 , 0.683862) ThinMatrix's rotation quaternion (0.72961134 , 0, 0 , 0.68386203) And for reference, the position vec3 (0, 0, 3.810999) Update With an identity matrix both Assimp and the ThinMatrix code produce (0, 0, 0, 1)
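One likely explanation: q and -q encode the same rotation, so matrix-to-quaternion converters are free to disagree on the overall sign (and different branch choices in the conversion produce different signs unpredictably). A sketch of the trace-positive branch, applied to the posted matrix (assuming the off-diagonal 0.997907 pair carries opposite signs, as a pure rotation requires, since the signs were lost in the post):

```python
import math

def quat_from_matrix(m):
    """Row-major 3x3 rotation matrix -> quaternion (x, y, z, w).
    Only the trace > 0 branch of the standard conversion is shown."""
    t = m[0][0] + m[1][1] + m[2][2]
    if t <= 0:
        raise NotImplementedError("other branches omitted in this sketch")
    s = math.sqrt(t + 1.0) * 2.0
    return ((m[2][1] - m[1][2]) / s,
            (m[0][2] - m[2][0]) / s,
            (m[1][0] - m[0][1]) / s,
            0.25 * s)

def qrot(q, v):
    """Rotate vector v by quaternion q: v' = v + 2 u x (u x v + w v)."""
    x, y, z, w = q
    u = [x, y, z]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    t = [c + w * vi for c, vi in zip(cross(u, v), v)]
    return [vi + 2.0 * c for vi, c in zip(v, cross(u, t))]

m = [[1.0, 0.0, 0.0],
     [0.0, 0.06466547, -0.997907],
     [0.0, 0.997907, 0.06466556]]
q = quat_from_matrix(m)
nq = tuple(-c for c in q)   # the sign-flipped twin
```

Both q and nq rotate every vector identically, so a stray sign flip between importers is harmless for posing; it only matters when you interpolate (slerp between keyframes must pick the sign giving a positive dot product, or the blend takes the long way around).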
1
Cubemaps turn black OpenGL GLSL Java LWJGL Recently I tried to add cubemaps to my 3D rendering engine. The objects with a cubemap now turn completely black. This is how I load my cubemap public static int loadCubeMap(String filename) int id GL11.glGenTextures() GL13.glActiveTexture(GL13.GL TEXTURE0) GL11.glBindTexture(GL13.GL TEXTURE CUBE MAP, id) for(int i 0 i lt CUBEMAP NAMES.length i ) TextureData data decodeTextureFile(CUBEMAPS LOCATION filename " " CUBEMAP NAMES i ) GL11.glTexImage2D(GL13.GL TEXTURE CUBE MAP POSITIVE X i, 0, GL11.GL RGBA, data.getWidth(), data.getHeight(), 0, GL11.GL RGBA, GL11.GL UNSIGNED BYTE, data.getBuffer()) GL11.glTexParameteri(GL13.GL TEXTURE CUBE MAP, GL11.GL TEXTURE MAG FILTER, GL11.GL LINEAR) GL11.glTexParameteri(GL13.GL TEXTURE CUBE MAP, GL11.GL TEXTURE MIN FILTER, GL11.GL LINEAR) GL11.glTexParameteri(GL13.GL TEXTURE CUBE MAP, GL11.GL TEXTURE WRAP S, GL12.GL CLAMP TO EDGE) GL11.glTexParameteri(GL13.GL TEXTURE CUBE MAP, GL11.GL TEXTURE WRAP T, GL12.GL CLAMP TO EDGE) return id TextureData class only contains int width, height and a ByteBuffer buffer private static TextureData decodeTextureFile(String fileName) int width 0 int height 0 ByteBuffer buffer null try FileInputStream in new FileInputStream(TEXTURES LOCATION fileName ".png") PNGDecoder decoder new PNGDecoder(in) width decoder.getWidth() height decoder.getHeight() buffer ByteBuffer.allocateDirect(4 width height) decoder.decode(buffer, width 4, Format.RGBA) buffer.flip() in.close() catch (Exception e) e.printStackTrace() return new TextureData(width, height, buffer) This is how I load the cubemap to the samplerCube cubemap i in the fragment shader public void loadCubeMap(int id, CubeMap cubemap, int cubemapid) if(cubemap ! 
null) super.loadTexture(location cubemaps cubemapid , cubemap.id, id, GL13.GL TEXTURE CUBE MAP) super.loadFloat(location cubemap intensity id , cubemap.intensity) else super.loadFloat(location cubemap intensity id , 0.0f) in super class protected void loadTexture(int location, int texture, int i, int texturetype) loadInt(location, i 1) GL13.glActiveTexture(GL13.GL TEXTURE0 i) GL11.glBindTexture(texturetype, texture) And this is my fragment shader version 400 core in vec4 color in vec2 texCoord0 in vec3 surface normal in vec3 to light vector in vec3 world position out out vec4 out Color uniform sampler2D sampler0 uniform samplerCube cubemap 3 uniform vec3 lightColor uniform float cubemap intensity 3 uniform int hasTexture void main(void) vec3 unitNormal normalize(surface normal) vec3 unitLight normalize(to light vector) float nDot1 dot(unitNormal, unitLight) float brightness max(nDot1, 0.02) vec3 diffuse color.xyz brightness lightColor color.xyz vec4 textureColor texture(sampler0, texCoord0.xy) if(textureColor.a lt 0.5) discard vec4 shadedTextureColor brightness vec4(lightColor, 1) textureColor vec4 coloredShadedTexture mix(vec4(diffuse, 1), shadedTextureColor, textureColor.a hasTexture) vec4 reflectionColor texture(cubemap 0 , world position out) vec4 refractionColor vec4 cubemapColor out Color reflectionColor
1
Drawing sprite shapes with triangles instead of textures If I want to use no textures in a game (ie. no png images), could I just break down my drawing into triangles, combined into the shape I want to draw, and draw those triangles instead? For example, I found this in Google Images, showing how an "arrow" graphic could be drawn as a triangle mesh instead of a textured sprite
1
Which consoles may I target with OpenGL? I'm thinking about the technical design for a game game engine using OpenGL, and I wonder if there are any recent consoles (Xbox 360, PS3, Wii U, Xbox One and PS4) that I could work with if I do so. I found plenty of conflicting answers in forums. The only straight answer seems to be that it is impossible for the Xboxes, because Microsoft forbids using anything other than DirectX, its own product. So, finally, does anyone know for sure which consoles can support an OpenGL based game?
1
Cube world rendering optimizations I'm making a Minecraft like game and I would like to apply some optimizations. At first I didn't, and I rendered the world using a vbo in which I stored a cube model and I drew all blocks of the world, and of course it caused lag if there were too many blocks. Then I decided to draw the world face by face and check if a face is hidden by another one. This concept is simple if the world is made only of cubes of the same size, like this Blue and black are two blocks of the same size and the red face is the face to ignore, since it is hidden (overlapped) by another opaque face. As I said, if I want to make blocks of different sizes (scale) position rotation but still drawing them by faces, the matter is different, since I can't know if a face is completely visible or not. Here's an image of a normal block with a "custom" block on it Blue is the custom block, black a "normal" block, and red the face overlapped by the bigger one (so it should be ignored during rendering). Here the rotation is null, but if there's no way to check rotated faces I can simply ignore them and draw all rotated blocks by default (I don't rotate the whole world lol)... that's not what I'd like to do, but if there's no way, or the way is too difficult... Anyway, finally, how can I check (using GL functions or doing it in my code) if a face is hidden, so I can ignore it during rendering? I forgot to say that there's another optimization I did if a block is hidden (has a block against each face) by other blocks, I don't draw it.
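For axis-aligned blocks that touch along a shared plane, "is this face fully hidden" reduces to a 2D rectangle-containment test between the two faces projected into that plane. A hypothetical helper (the rect layout is my own choice) showing the check; rotated blocks don't fit this model and are simplest to treat conservatively as never covering anything:

```python
def face_hidden(face, cover):
    """face, cover: axis-aligned rects (minx, miny, maxx, maxy) in the shared
    touching plane. True if 'cover' fully contains 'face', so 'face' can be
    skipped when building the mesh."""
    return (cover[0] <= face[0] and cover[1] <= face[1]
            and cover[2] >= face[2] and cover[3] >= face[3])

unit = (0.0, 0.0, 1.0, 1.0)            # a full block face
small = (0.25, 0.25, 0.75, 0.75)       # a smaller custom block's face
```

One cover rectangle per opposing face is enough for simple worlds; if several small faces together hide a big one, you would need a rectangle-union test instead, at which point letting the depth buffer discard the overdraw is usually cheaper than solving it exactly.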
1
How to pass depth buffer from OGRE to CUDA? I am using OGRE for rendering some objects. At every frame, I would like to pass the resulting depth buffer to CUDA for running some kernels on it and computing a result. How can I achieve this? How do I get access to depth buffer in OGRE? How do I pass this to CUDA for processing? I do not need to write to the depth buffer in the CUDA kernels, it can be read only.
1
OpenGL Black rectangle in rendering texture I create textures (for example, I just fill the texture color), and render their. But, for some reason, to the upper edge added black rectangle. Screenshot var Textures array of GLuint procedure LoadTexture(const TextureID Byte var Texture GLuint) var w, h Cardinal Data PByteArray begin if TextureID 0 then begin w 512 h 512 end else begin w 128 h 64 end GetMem(Data, w h 3) if TextureID 0 then FillChar(Data , w h 3, 204) else FillChar(Data , w h 3, 102) glBindTexture(GL TEXTURE 2D, Texture) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexImage2D(GL TEXTURE 2D, 0, GL RGB8, w, h, 0, GL RGB, GL UNSIGNED BYTE, Data) FreeMem(Data, w h 3) Data nil end procedure glInit begin glEnable(GL TEXTURE 2D) SetLength(Textures, 2) glGenTextures(1, Textures 0 ) LoadTexture(0, Textures 0 ) glGenTextures(1, Textures 1 ) LoadTexture(1, Textures 1 ) glClearColor(1, 1, 1, 0) glShadeModel(GL SMOOTH) glClearDepth(1) glDisable(GL DEPTH TEST) glHint(GL PERSPECTIVE CORRECTION HINT, GL NICEST) glViewport(0, 0, 800, 600) glMatrixMode(GL PROJECTION) glLoadIdentity glOrtho(0, 800, 600, 0, 0, 1) glMatrixMode(GL MODELVIEW) glLoadIdentity end procedure glDraw begin glClearColor(1, 1, 1, 0) glClear(GL COLOR BUFFER BIT or GL DEPTH BUFFER BIT) glBindTexture(GL TEXTURE 2D, Textures 0 ) glBegin(GL QUADS) glTexCoord2f(0, 0) glVertex2f(0, 0) glTexCoord2f(1, 0) glVertex2f(512, 0) glTexCoord2f(1, 1) glVertex2f(512, 512) glTexCoord2f(0, 1) glVertex2f(0, 512) glEnd glBindTexture(GL TEXTURE 2D, Textures 1 ) glBegin(GL QUADS) glTexCoord2f(0, 0) glVertex2f(128, 64) glTexCoord2f(1, 0) glVertex2f(256, 64) glTexCoord2f(1, 1) glVertex2f(256, 128) glTexCoord2f(0, 1) glVertex2f(128, 128) glEnd end
1
Is it reasonable to use a 2D texture as a lookup table in GLSL I need a lookup table in a shader. The input values would be color values and the output other color values. Something like uniform float lut 256 color vec3(lut int(color.r 255.) , lut int(color.g 255.) , lut int(color.b 255) ) does what I need but certain implementations of OpenGL ES do not allow arrays with variable indexing so finally I've arrived to the conclusion that the most portable way is to have a 2D texture as the LUT. A texture of 256x256 could store many different lookup tables, each one (256 entries) in a single row. In order to load a row in the texture (c ) I would use glTexSubImage2D(GL TEXTURE 2D, 0, 0, 0, 256, 1, GL LUMINANCE, GL UNSIGNED BYTE, data) to store a table in the first row (I assume that internally in the texture I'd have the value repeated in R, G and B) To fetch the LUT in the fragment shader I'd use texture2D(lut,vec2(index, constRow)) (index in the range 0 1 and constRow 0 being the row in the texture where it is stored the lut). Is that a reasonable way to implement that? Update In order to fetch values in the texture LUT you need normalized indexes (0.0 1.0), so to access row 7 I have to index it as texture2D(lut, vec2(index, 7. 255.)) From what I see the result is (or may be) an interpolation with the two other consecutive rows.
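Regarding the update: the cross-row bleeding happens because coordinates like row/255 fall between texel centers, so GL_LINEAR blends adjacent rows. Sampling at texel centers, (i + 0.5)/N rather than i/(N-1), fixes the row axis; alternatively use GL_NEAREST for the whole LUT. The coordinate math:

```python
def lut_coords(index, row, size=256):
    """Texel-center UV for entry 'index' of the LUT stored in 'row'
    of a size x size lookup texture (GL_LINEAR-safe)."""
    u = (index + 0.5) / size
    v = (row + 0.5) / size
    return u, v
```

In the fragment shader the same thing is texture2D(lut, vec2((index * 255.0 + 0.5) / 256.0, (constRow + 0.5) / 256.0)), which keeps the sample exactly on the intended row while still interpolating smoothly along it.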
1
How do I wrap textures inside shader GLSL? I'm trying out GLSL and one of the problems I'm facing is wrapping a random texture sampler in the shader. Searching for answers on the web first, this leads me to using these glTexParameter() GL TEXTURE WRAP S GL REPEAT GL TEXTURE WRAP T GL REPEAT I'm not sure where to put this or how to use it. I'm using a custom engine for this shader and I would assume I could wrap the texture in the shader. My main concern on this is my final output render having left and bottom artifacts. I got some previous advise it has to do with wrapping a random texture that is used for noise if that helps in my case.
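glTexParameteri is CPU-side state: call it once right after glBindTexture when creating the texture, not in the shader. If the engine doesn't expose that, GL_REPEAT can be emulated inside GLSL by applying fract() to the coordinates before sampling, i.e. texture2D(tex, fract(uv)). The function fract computes, equivalently in Python:

```python
import math

def repeat(u):
    """GL_REPEAT emulated per coordinate: fract(u) = u - floor(u)."""
    return u - math.floor(u)
```

Note the edge artifacts you describe can also come from GL_LINEAR filtering across the wrap seam; clamping the noise lookup to texel centers, or using GL_NEAREST for the random texture, is worth trying alongside the wrap fix.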
1
How to achieve light that changes color mid way? I thought of creating light sources, and some colored windows. Now, the windows are semi transparent. How could I make it so that when the light (say, pure white) hits the glass and continues through it, but changes the color to the same color as the glass it passed? I know the effect described here can be faked by using area lights on the "colored" side of the window, but what if I just wanted to have one white point light?
1
Rendering only a part of the screen in high detail If graphics are rendered for a large viewing angle (e.g. a very large TV or a VR headset), the viewer can't actually focus on the entire image, just a part of it. (Actually, this is the case for regular sized screens as well.) Combined with a way to track the viewer's eyes (which is mostly viable in VR I guess), you could theoretically exploit this and render the graphics away from the viewer's focus with progressively less details and resolution, gaining performance, without losing perceived quality. Are there any techniques for this available or under development today?
1
Shadowmapping with multiple light sources I am using shadow mapping to add shadows to my world. Currently, I use one lightsource, which means I have to draw my world twice once in the lightview, once in the camera view. I want to add more light sources to the world, I want at least 2 sources, maybe 3 or 4. My world geometry is large, roughly 1M triangles, so drawing them comes at a large cost. Are there any shortcuts I can take to avoid drawing the geometry for every light source? Would it be possible to write to multiple depth textures in a single pass? E.g. pass several lightview matrices to a vertex shader and somehow write several framebuffers in one go? I am using OpenGL core profile. What I've tried I looked at Multiple Render Targets, but it seems that every attached buffer will share the same vertex transformation, which would not help in my situation every light view has a different transformation.
1
glTranslate, how exactly does it work? I have some trouble understanding how glTranslate works. At first I thought it would just simply add values to an axis to do the transformation. However, I then created two objects that load bitmaps, one with its matrix mode set to GL_TEXTURE public class Background float vertices = new float 0f, 1f, 0.0f, 4f, 1f, 0.0f, 0f, 1f, 0.0f, 4f, 1f, 0.0f .... private float backgroundScrolled = 0 public void scrollBackground(GL10 gl) gl.glLoadIdentity() gl.glMatrixMode(GL10.GL_MODELVIEW) gl.glTranslatef(0f, 0f, 0f) gl.glPushMatrix() gl.glLoadIdentity() gl.glMatrixMode(GL10.GL_TEXTURE) gl.glTranslatef(backgroundScrolled, 0.0f, 0.0f) gl.glPushMatrix() this.draw(gl) gl.glPopMatrix() backgroundScrolled += 0.01f gl.glLoadIdentity() and the other to GL_MODELVIEW public class Box float vertices = new float 0.5f, 0f, 0.0f, 1f, 0f, 0.0f, 0.5f, 0.5f, 0.0f, 1f, 0.5f, 0.0f .... private float boxScrolled = 0 public void scrollBackground(GL10 gl) gl.glMatrixMode(GL10.GL_MODELVIEW) gl.glLoadIdentity() gl.glTranslatef(0f, 0f, 0f) gl.glPushMatrix() gl.glMatrixMode(GL10.GL_MODELVIEW) gl.glLoadIdentity() gl.glTranslatef(boxScrolled, 0.0f, 0.0f) gl.glPushMatrix() this.draw(gl) gl.glPopMatrix() boxScrolled += 0.01f gl.glLoadIdentity() Now they are both drawn in Renderer.OnDraw. However, the background moves exactly 5 times faster. If I multiply boxScrolled by 5 they will be in sync and will move together. If I modify the background's vertices to be float vertices = new float 1f, 1f, 0.0f, 0f, 1f, 0.0f, 1f, 1f, 0.0f, 0f, 1f, 0.0f it will also be in sync with the box. So, what is going on under glTranslate?