1
How do I load and use substances with libGDX? I've been looking at some substances (.sbsar file format) but I have no clue how to load or use these files. The links I find bring me to Unity & Unreal Engine, but if I search for anything related to libGDX or OpenGL, I can't get any results. Does anyone know how I can use these textures in my libGDX Java game?
1
Need an ideal open-source 3D engine for building a golf simulation game. I'm looking for an open-source 3D graphics and physics engine suited to golf simulation. The preferred engine would have a short learning curve, support the Windows platform, and offer Java or C with Lua/Python support for scripting; I'd also prefer something with an outdoor golf-course scene and weather-system support. I'm new to this area and would love to hear all of your advice. I've just come from the Stack Overflow site, where mikera recommended jMonkeyEngine; it sounds like a fit, but I'm not sure if there's another ideal candidate engine.
1
OpenGL blending format. In OpenGL, is there a format for a texture in glTexImage2D that prevents blending, or do you have to disable it using glDisable(GL_BLEND)? The reason: I would like to store additional data in the alpha channel of a texture, but it blends with whatever's behind it.
1
What method replaces GL_SELECT for box selection? Two weeks ago I started working on a box selection that selects shapes using GL_SELECT, and I just got it working. When looking up resources online, a significant number of posts say GL_SELECT is deprecated in OpenGL 3.0, but there is no mention of what replaced that functionality. I learnt OpenGL 1.2 back in college two years ago, but checking Wikipedia now I realise we already have OpenGL 4.0, and I am unaware of what I need to do to keep myself up to date. So, in the meantime, what is the latest preferred method for box selection? EDIT: I found http://www.khronos.org/files/opengl-quick-reference-card.pdf; on page 5 this card still lists glRenderMode(GL_SELECT) as part of the OpenGL 3.2 reference.
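For reference, the usual modern replacement for GL_SELECT is color picking: render each selectable shape with a unique flat color into an offscreen buffer, read back the pixels in the selection box with glReadPixels, and collect the distinct colors found. The GL setup aside, the core of the technique is just packing object IDs into color channels; a minimal sketch (function names are my own, not from any library):

```python
def id_to_rgb(obj_id):
    """Pack a 24-bit object id into three 8-bit color channels."""
    return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def rgb_to_id(rgb):
    """Recover the object id from a pixel read back from the pick buffer."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

def ids_in_box(pixels):
    """Distinct object ids found in a box of read-back (r, g, b) pixels."""
    return {rgb_to_id(p) for p in pixels}
```

For this to work, the pick pass must be rendered with lighting, blending, dithering, and multisampling disabled so the colors come back exactly as written.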
1
Texture rotation inside a quad. How can I rotate a texture inside a quad without rotating the quad? Here's the code, but I'm rotating the whole quad here:

glBindTexture(GL_TEXTURE_2D, texName5);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(12.8, 10, 10);
glRotatef(rotateAd, 1, 0, 0);
glBegin(GL_QUADS);
glNormal3f(1, 0, 0);
glTexCoord2f(0.0, 0.0); glVertex3f(0, 3, 3);
glTexCoord2f(0.0, 1.0); glVertex3f(0, 3, 3);
glTexCoord2f(1.0, 0.0); glVertex3f(0, 3, 3);
glTexCoord2f(1.0, 1.0); glVertex3f(0, 3, 3);
glEnd();
glPopMatrix();

Thanks.
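Rotating the texture instead of the quad means applying the rotation on the GL_TEXTURE matrix stack (translate to the texture center, rotate, translate back) while leaving GL_MODELVIEW alone. The effect on each texture coordinate is just a 2D rotation about (0.5, 0.5); a small sketch of that math, assuming rotation about the texture center:

```python
import math

def rotate_tex_coord(u, v, degrees):
    """Rotate one texture coordinate about the texture center (0.5, 0.5)."""
    a = math.radians(degrees)
    du, dv = u - 0.5, v - 0.5
    return (0.5 + du * math.cos(a) - dv * math.sin(a),
            0.5 + du * math.sin(a) + dv * math.cos(a))
```

In fixed-function GL the equivalent is glMatrixMode(GL_TEXTURE); glTranslatef(0.5, 0.5, 0); glRotatef(angle, 0, 0, 1); glTranslatef(-0.5, -0.5, 0); before drawing, then restoring the matrix mode.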
1
OpenGL: missing GL_SPECULAR light on the texture. I'm missing specular lighting on the texture. I have included <GL/glext.h> in the project, so basically I used glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL_EXT, GL_SEPARATE_SPECULAR_COLOR_EXT) for the specular effect. My init code:

glEnable(GL_DEPTH_TEST);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glEnable(GL_CULL_FACE);
GLfloat AmbientLight[] = { 0.3, 0.3, 0.3, 1.0 };
GLfloat DiffuseLight[] = { 0.7, 0.7, 0.7, 1.0 };
GLfloat SpecularLight[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat Shininess[] = { 90.0 };
GLfloat Emission[] = { 0.0, 0.0, 0.0, 1.0 };
GLfloat Global_Ambient[] = { 0.1, 0.1, 0.1, 1.0 };
GLfloat LightPosition[] = { 7.0, 7.0, 7.0, 1.0 };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, Global_Ambient);
glLightfv(GL_LIGHT0, GL_AMBIENT, AmbientLight);
glLightfv(GL_LIGHT0, GL_DIFFUSE, DiffuseLight);
glLightfv(GL_LIGHT0, GL_SPECULAR, SpecularLight);
glLightfv(GL_LIGHT0, GL_POSITION, LightPosition);
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 0.05f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.03f);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.002f);
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, AmbientLight);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, DiffuseLight);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, SpecularLight);
glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, Shininess);
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, Emission);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHTING);
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);

My render code:

glClearColor(0.117f, 0.117f, 0.117f, 1.0f);
glClearDepth(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// load texture
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glEnable(GL_LIGHTING);
glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL_EXT, GL_SEPARATE_SPECULAR_COLOR_EXT);
glColor4f(1.0, 1.0, 1.0, 0.2);
glDisable(GL_COLOR_MATERIAL);
// render geometry here
glFlush();

What is missing here?
1
FBO result not drawing to screen. I recently added framebuffer rendering to my game, and rendering to the FBO works (verified with glGetTexImage), but when I go to render a quad to show the result, nothing is drawn to the screen. I'm using OpenGL 3.3 Core, so I'm drawing using vertex buffers for the quad, and I have two simple pass-through shaders to display it. I apologize for the amount of code that follows, but I felt it was all necessary for anyone who ends up looking at this.

Quad generation:

// create vertices: Vertex(position, normal, tex_coord)
quadVertices.Add(Vertex(Vector3(-1.0f,  1.0f, 0.0f), Vector3(), Vector2(0.0f, 1.0f)));
quadVertices.Add(Vertex(Vector3(-1.0f, -1.0f, 0.0f), Vector3(), Vector2(0.0f, 0.0f)));
quadVertices.Add(Vertex(Vector3( 1.0f, -1.0f, 0.0f), Vector3(), Vector2(1.0f, 0.0f)));
quadVertices.Add(Vertex(Vector3( 1.0f,  1.0f, 0.0f), Vector3(), Vector2(1.0f, 1.0f)));
quadVertices.SendData(); // gl*Buffer* methods wrapper
// create indices (quadIndices is a buffer of 16-bit integers)
quadIndices.Add(0); quadIndices.Add(1); quadIndices.Add(2);
quadIndices.Add(0); quadIndices.Add(2); quadIndices.Add(3);
quadIndices.SendData();

Quad rendering:

glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
// bind the texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, framebuffer.GetColorHandle());
shader.Seti("texture0", 0);
// projection matrix is glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f)
shader.Set("proj", projection);
// get attributes
int vertex = shader.GetAttribLoc("vertex");
int texCoord = shader.GetAttribLoc("texCoordV");
// bind buffers
quadVertices.Bind();
quadIndices.Bind();
// enable arrays and point to data
glEnableVertexAttribArray(vertex);
glVertexAttribPointer(vertex, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(0));
glEnableVertexAttribArray(texCoord);
glVertexAttribPointer(texCoord, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(sizeof(Vector3) * 2));
// draw!
glDrawElements(GL_TRIANGLES, quadIndices.Size(), GL_UNSIGNED_SHORT, 0);
// disable arrays
glDisableVertexAttribArray(vertex);
glDisableVertexAttribArray(texCoord);
// unbind everything
quadVertices.Unbind();
quadIndices.Unbind();
glBindTexture(GL_TEXTURE_2D, 0);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);

Vertex shader:

#version 330 core
in vec3 vertex;
in vec2 texCoordV;
out vec2 texCoordF;
uniform mat4 proj;
void main()
{
    texCoordF = texCoordV;
    gl_Position = proj * vec4(vertex, 1.0);
}

Fragment shader:

#version 330 core
layout(location = 0) out vec4 fragColor;
in vec2 texCoordF;
uniform sampler2D texture0;
void main()
{
    fragColor = texture(texture0, texCoordF);
}
1
How do you develop the model matrix for an object? This might be a misunderstanding on my part, but let me give you my superficial understanding of the model-view-projection pipeline and the different coordinate systems; then I will ask a question. As we know, we might have many models in our scene. You might load vertex attributes for each using a model loader, etc. When you load the coordinates, these are in object space or model space: they describe the object with respect to a local frame. We use the model matrix to transform this to world space, a global frame shared by all models. After this, we are in a huge world space; we might think of it as an endless Euclidean 3D space where coordinates may have any value. Somewhere in the world we have a camera, and the objects in the world have different coordinates with respect to the camera's frame. This is where the view matrix comes into play: after applying this transformation, coordinates are in view space. In the OpenGL world, we think of the camera as lying at the origin. We specify a view frustum and we do (actually delay) the clipping, etc. After this we apply the projection matrix and transform this viewing pyramid into a cube that spans [-1, 1] on all axes. This is homogeneous coordinates, and we do the perspective divide, clipping, etc. This is my view of the whole thing, roughly. The question is: we have routines to generate view and projection transformations given the required parameters, but how about the model matrix? For example, I load a bunny model stored in an OBJ file. It might have a coordinate value like (32.657, 12.545, 8.444). In which space are these coordinates? If it's model space, what is the best strategy for developing a model matrix to put this object into world space? Another OBJ model of the same bunny might have coordinates all lying in the [-1, 1] range, for example. Does this mean that object has already been transformed with an MVP? Are model matrices related only to skeletal, hierarchical models?
I think this is a strange question, so I don't see it really mentioned anywhere. But it bugs me and leaves me with a superficial understanding of the whole matter. I want to truly understand.
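To the core question above: OBJ coordinates are in model space, and the model matrix is simply whatever placement you choose for the object, conventionally composed as translation * rotation * scale (so scale is applied first). A small sketch with hand-rolled row-major 4x4 matrices (in a real project a library such as GLM supplies these; the composition order is the point):

```python
import math

def mat_mul(a, b):
    # 4x4 row-major matrix multiply
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

def rotate_y(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def transform(m, p):
    # apply a 4x4 matrix to a point (w = 1)
    v = [p[0], p[1], p[2], 1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# model matrix: scale first, then rotate, then translate
model = mat_mul(translate(10, 0, 0), mat_mul(rotate_y(90), scale(2)))
```

A bunny whose coordinates already lie in [-1, 1] hasn't necessarily been through an MVP; more likely the artist or exporter normalized it, and its model matrix is still up to you.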
1
Heightmap physics optimization/improvement. I'm working on implementing the physics for a player walking on a heightmap. A heightmap is a grid of points which are evenly spaced on the x and z axes, with varying heights. The physical representation of my player (what is exposed to my physics-engine manager) is simply a point mass at his feet, rather than complicating the problem by treating him as a sphere or a box. This image should help further explain heightmaps and show how triangles are generated from the points: the picture looks straight down on a heightmap, and each intersection is a vertex that has some height. Feel free to move this over to Math, Physics, or Stack Overflow, but I'm pretty sure this is where it belongs, as it is a programming question related to games. Here's my current algorithm:

1. Calculate the player's velocity from input, gravity, previous velocity, etc.
2. Move the player's position (nextPos = prevPos + velocity * timeSinceLastFrame).
3. Figure out which triangle (all graphics is done in triangles!) of my heightmap the player's new position is vertically aligned with.
4. Use the three vertices of that triangle to calculate the equation of the plane that the triangle lies in.
5. Plug the player's x and z coordinates into the plane's equation to get the y coordinate for the player's position on that plane.
6. Set the y coordinate of the player's position to this (if newPos.y < y).

This is all fine and dandy, but I'm wondering if there's anything I can optimize. For example, my first thought is to store the plane's equation with the triangle's vertex information; this way all I have to do is plug the x and z values into the equation to get y. However, this would require adding 4 floats to every vertex of the heightmap, which is a little ridiculous memory-wise. Also, my heightmaps are dynamic (meaning the heights will change at runtime), which would mean changing the plane equations every time a vertex's height changes.
Is there a faster way to calculate that point than digging up the plane's equation and then plugging in x and z? Should I store the plane equations and update them on vertex-height changes, or should I just regenerate the plane equations for every triangle every time the player moves on that triangle? Is there a better way to do heightmap physics that maintains this simplicity? (This seems very simple to me as far as physics engines go.)
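For steps 4 and 5 above, there is no need to store a plane equation per triangle at all: the height follows directly from the triangle's vertices in a few multiplies, so it is cheap to recompute per query even on a dynamic heightmap. A sketch, assuming vertices are (x, y, z) tuples with y up:

```python
def height_at(p0, p1, p2, x, z):
    """Height (y) of the plane through three vertices at the point (x, z)."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    # plane normal = u cross v
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    # plane: n . (p - p0) = 0  ->  solve for y
    return p0[1] - (nx * (x - p0[0]) + nz * (z - p0[2])) / ny
```

Since the grid spacing is uniform, p1 - p0 and p2 - p0 have constant x and z components, which simplifies the cross product further if this ever shows up in a profile.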
1
Giving values to uniforms in OpenGL. First: I know how to give values to uniforms in OpenGL. Second: this is a question about optimization and performance. The preferred habit for changing uniforms goes like this (consider shaders having 'aValue' as a uniform):

// during initialization
GLint loc = glGetUniformLocation(program, "aValue");

// during a loop, or when required to change the uniform's value
if (loc != -1)
    glUniform1f(loc, 0.75);

But what if it is like this ('program' is a global variable; this can be called anywhere, but not after destroying 'program'):

void updateUniformf(const char* name, float value)
{
    glUniform1f(glGetUniformLocation(program, name), value);
}

// during a loop, or when required to change the uniform's value
updateUniformf("aValue", 0.75);

How much would such an approach decrease performance? Would it affect performance at all? It would be appreciated to have some measurements or a practical example rather than all theory. Of course, I need to know the reasons as well. Thanks for answering this question!
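A common middle ground between the two styles is to keep the convenience of name-based updates but memoize the location per name, so the string lookup in the driver happens once per uniform instead of once per call. A language-agnostic sketch of the pattern (here `lookup` stands in for glGetUniformLocation bound to a program; the class name is my own):

```python
class UniformCache:
    """Cache name -> location so the string-based lookup runs once per name."""

    def __init__(self, lookup):
        self._lookup = lookup      # stand-in for glGetUniformLocation
        self._locations = {}

    def location(self, name):
        if name not in self._locations:
            self._locations[name] = self._lookup(name)
        return self._locations[name]
```

The cache must be invalidated if the program is relinked, since locations are only stable for the lifetime of a linked program.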
1
How do I fit the camera frustum inside directional-light space? I'm trying to improve the coverage of a shadow map for a directional light. Currently, it works great if the camera is looking straight down. However, if the camera is close to the ground and looking toward the horizon, I can see where the shadow map ends, because it doesn't cover enough of the scene. I'd like to create a frustum for the directional light so that it entirely encloses the camera's frustum, as this article suggests in figures 14 and 15. (I'm not really worried about the near and far planes for the light; I've picked a number that works well for every test I've tried.) I know the direction of the light, and the position, rotation, and field of view of the camera. I'm using OpenGL. How do I create a frustum for the light so that it encloses my view frustum?
1
How can I insert and remove blocks quickly in a Minecraft-style world? I currently have volume data for the world stored as an array of booleans. I then check each empty block, and if it has non-empty neighbors, the faces get drawn. This keeps me from sending a bunch of hidden faces to the graphics card using OpenGL. I'm now working on inserting and removing blocks, but I'm not sure how to do this quickly. It is simple enough to change the volume data, but I don't want to recompute all the vertices from the volume data each time someone inserts or removes a block. It occurred to me to just add the block to the vertex buffer at the end of the existing vertex data, but then I don't have a good way of destroying it, as I have no way to correlate the volume data with the vertex-buffer data. Any thoughts on how I can fix the mess I'm in?
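The usual fix for this is to split the world into fixed-size chunks, each with its own vertex buffer, so that editing a block only remeshes one small chunk rather than the whole world. When the edited block sits on a chunk border, the face-adjacent neighbor chunks must be remeshed too, since their visible faces may change. A sketch of the dirty-marking logic (the chunk size of 16 is an assumed value):

```python
CHUNK = 16  # blocks per chunk side (an assumed size)

def chunk_of(x, y, z):
    """Chunk coordinates containing block (x, y, z)."""
    return (x // CHUNK, y // CHUNK, z // CHUNK)

def dirty_chunks(x, y, z):
    """Chunks to remesh after editing block (x, y, z): its own chunk, plus
    any face-adjacent chunk when the block lies on a chunk border."""
    dirty = {chunk_of(x, y, z)}
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        dirty.add(chunk_of(x + dx, y + dy, z + dz))
    return dirty
```

This also solves the correlation problem: a chunk's vertex buffer is always regenerated wholesale from that chunk's volume data, so there is never a need to track which block produced which vertices.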
1
OpenGL ES indices optimization. I am using OpenGL ES 1.x/2.x. I have two attributes to be passed to the GPU (one is colors, one is vertices; one color per vertex), and I use indices. Both attributes will use the same indices array. This is not a big issue, but I was wondering if there is a way to tell the GPU that both attributes use the same indices array so that it is not transferred to the GPU twice, or maybe it does not matter because the GPU uses the RAM?
1
How do rendering pipelines improve the performance of updating all the vertices every frame? Let's say I am implementing a simple game engine, particularly the rendering part. From a high-level view, we have some vertices which are copied to the graphics card alongside shader information, etc. I do not really understand how this is done in a performant manner. Consider a basic OpenGL program (and all the tutorials or whatever I could find out there) which copies all the vertices to the graphics card at each draw call. This seems to be a lot of work. As far as I understand, if I am making some kind of third-person game, for example, I have to copy every vertex of the character at each draw call (and possibly more). Even though the world is static and the character represents only a small amount of the data, this still seems like a lot. Do game engines apply some kind of "caching" here, to make sure they only copy what has actually changed, or is this a rather mindless copy of everything where the performance is not such a big deal? Essentially I want to learn about the architectural choices concerning this vertex-copy operation, but I cannot seem to find the right resources. Thank you for any input you can provide.
1
How to draw 2D above a 3D scene? I have an OpenGL (3.3) + GLEW application. I want to draw a black vertical rectangle at the left side of the window for writing some information on it, like FPS (I use a GLEW function for drawing text). I have a common pipeline, like this:

m_transformation = PersProjTrans * CameraRotateTrans * CameraTranslationTrans * TranslationTrans * RotateTrans * ScaleTrans;

I know how it works, but I can't figure out how to draw a projection of the rectangle on the window, like 2D. I would be glad of any advice.
1
What is causing my default libraries to conflict with OpenGL extensions? I'm currently following the tutorial for creating OpenGL programs on learnopengl.com, so I'm using GLFW, GLEW, and the base library for OpenGL. However, when I go to build my code, I get a warning that some of my default libraries "conflict with use of other libs; use /NODEFAULTLIB:library". So I went and disabled the appropriate defaults, and I get even more errors. I'm trying to figure out what other reason there might be for this error; from what I can tell, GLFW and GLEW should not produce this warning in the first place. I wouldn't have been too upset, since it compiles properly; however, when I add a third library (one called SOIL, for adding images as textures), I get errors straight off the bat. The libraries I used were from the official websites: www.glfw.org/download.html (Windows 32-bit binaries) and sourceforge.net/projects/glew/files/glew/1.13.0 (the win32 .zip file). I've seen people asking about this particular warning before, but the methods given for fixing it don't seem to apply to me. Edit: Here's the list of errors
1
samplerCube for point-light shadow map has dark corners that change with screen aspect ratio? I almost have point-light shadows working, but the samplerCube that I use for the shadow map has corners that get darker depending on the main camera. Is this something to do with a transform into a different space somewhere? I followed Learn OpenGL's tutorial on this, and they don't seem to have this issue, or it just didn't come up in their simple example. Here's a YouTube video of the problem, because it is kinda hard to explain: https://youtu.be/FXpwgyJh1ZA (that shows the depth-map sampler, not the shadows, by the way). This also happens when the radius is almost 0 and the cube map is completely white.

float PointLightShadow(vec3 NegL, float R) // NegL = FragWorldPos - pointLights[i].Position; R = radius of light
{
    if (mat_hasShadowMap2 == 0) return 1.0f;
    float closestDepth = texture(mat_shadowMap2, NegL).r;
    float currentDepth = length(NegL);
    float bias = 0.05;
    float shadow = R * closestDepth > currentDepth - bias ? 1.0 : 0.0;
    return closestDepth; // for this example
}

Sending the light position to the shader:

m_lightData.PointLights[i].Position = lights[i]->Position(); // setting in world space

and for the FragWorldPos:

out vec3 FragWorldPos;
...
vec4 worldPos = model * vec4(vert, 1);
FragWorldPos = worldPos.xyz;
...

and then the fragment shader has a matching in variable. I think 99% of this is working; I just don't understand why the cube map seems to have these dark corners. They change aspect when the screen changes, also suggesting that they are in some type of camera space, but I don't see where that would happen. Let me know if you need more context for the code snippets; these are the only important parts, I think.
1
Depth map resolution shifting. The problem is with shadow mapping: as you can see, it actually works fine, but only under the condition that the depth map size is equal to the size of the rendering buffer. I use an infinite directional light, so if the window is 800x600 the depth map must be 800x600. When I change the size of the shadow map to 900x600 it starts to be shifted, and when its size is 1024x1024 it also shifts until it disappears. The GLSL shadow function:

float calcShadow(sampler2D Dmap, vec4 coor)
{
    vec4 sh = vec4((coor.xyz / coor.w), 1);
    sh.z -= 0.9;
    return step(sh.z, texture2D(Dmap, sh.xy).r);
}

Here's the result when it's the same size as the window (colored result & depth map), and here's the shifted result; as you can notice, the depth map is exactly as the previous one with the addition of white space to the right. Colored result: http://goo.gl/5lYIFV Depth map: http://goo.gl/7320Dd
1
Interspersing 2D with 3D in OpenGL. I want to be able to draw 3D objects as well as 2D objects in an OpenGL environment. Normally, I would draw my 3D stuff, disable the depth buffer and depth mask, then draw my 2D stuff. However, this creates a hassle: what if I have a single draw() function that is supposed to draw both 3D and 2D? I would need to separate it, which I really don't want to do. I want to be able to do stuff like draw3D(...); draw2D(...); draw3D(...). So is there a common way of doing this? I can think of a few solutions but would prefer to do it a standard way. Here's a visual of the problem: my 2D text is interfering with my 3D environment.
1
Creating 2D sprites with LibGDX using a shape and a texture separately. I am creating a 2D game with LibGDX that will have creatures generated from dozens of characteristics, with potentially millions of unique combinations. For each segment of each creature, I want to use two sprites so that I can mix shapes and colors and cut down on my resources. I would like one sprite to be a black-and-white (or grayscale) image of the shape of the body part, while the second sprite is just a color pattern. I think I may need to write a shader that splices the color pattern onto my shape to give me the sprite I need; I just need some guidance on how to get started. Another way of explaining this is with the example of clothing: I want to be able to draw a shirt shape, a pants shape, a socks shape, etc., and also draw different fabric patterns. Then I would mix and match the clothing with the patterns instead of drawing every possible combination. How can I accomplish this with LibGDX?
1
Calculating per-vertex normals in a geometry shader. I am able to calculate normals per face in my geometry shader, but I want per-vertex normals for smooth shading. My geometry shader is:

#version 430 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;
out vec3 normal_out;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelTranslationMatrix;
uniform mat4 modelRotationXMatrix;
uniform mat4 modelRotationYMatrix;
uniform mat4 modelRotationZMatrix;
uniform mat4 modelScaleMatrix;

void main(void)
{
    // Please ignore my modelMatrix and normalMatrix calculation here
    mat4 modelMatrix = modelTranslationMatrix * modelScaleMatrix * modelRotationXMatrix * modelRotationYMatrix * modelRotationZMatrix;
    mat4 modelViewMatrix = viewMatrix * modelMatrix;
    mat4 mvp = projectionMatrix * modelViewMatrix;
    vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    mat4 normalMatrix = transpose(inverse(modelViewMatrix));
    normal_out = mat3(normalMatrix) * normalize(cross(A, B));
    gl_Position = mvp * gl_in[0].gl_Position; EmitVertex();
    gl_Position = mvp * gl_in[1].gl_Position; EmitVertex();
    gl_Position = mvp * gl_in[2].gl_Position; EmitVertex();
    EndPrimitive();
}

Since I don't have access to adjacent faces here, I cannot calculate per-vertex normals. How can I calculate per-vertex normals in my geometry shader?
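A geometry shader only ever sees one primitive, so smooth normals are normally precomputed (on the CPU or in a compute pass) by accumulating the face normal of every triangle that shares a vertex and normalizing the sum, then passed in as a vertex attribute. A sketch of that averaging, independent of any GL binding:

```python
import math

def vertex_normals(vertices, triangles):
    """Smooth per-vertex normals: accumulate each face normal into its three
    vertices, then normalize the sums."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in triangles:
        p0, p1, p2 = vertices[i0], vertices[i1], vertices[i2]
        u = [p1[k] - p0[k] for k in range(3)]
        v = [p2[k] - p0[k] for k in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        for idx in (i0, i1, i2):
            for k in range(3):
                normals[idx][k] += n[k]
    result = []
    for n in normals:
        length = math.sqrt(sum(c * c for c in n)) or 1.0
        result.append(tuple(c / length for c in n))
    return result
```

Note the unnormalized cross product is area-weighted, which is usually the averaging you want; normalize each face normal before accumulating if equal weighting is preferred.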
1
OpenGL: what to do after running glBufferData? I am interested in understanding a bit more about how OpenGL does its memory management, and what some good practices are, before I start heavily coding and back myself into a corner. The real question I have is regarding memory. I have a class called Mesh that consists of a pointer to some vertex data, a pointer to the face data, and the number of vertices and faces. I am familiar with one possible process OpenGL uses to get these drawn on the screen: generate a buffer for vertices and a buffer for faces, bind them as GL_ARRAY_BUFFER and GL_ELEMENT_ARRAY_BUFFER, call glBufferData to move the data to the buffers, and lastly call glDrawElements to draw them. From here I can understand how I can load several Mesh objects and draw them on screen, and I have no issue there. My question lies in where the memory is stored and what I should be doing to conserve it. My assumption is that after the call to glBufferData, the data is copied to the GPU (or to some other storage to later be copied to the GPU). Is it safe for me to delete the original Mesh object that was used with the buffer calls? In my head, the mesh data is retained inside the GPU, so the data is not lost, and as long as I do not need to reload this data using OpenGL later, I won't miss it at any point. Also in my head, this frees up RAM for other uses, so my Mesh, in some sense, now lives totally in GPU memory and nowhere else, and I can do my drawing solely by making OpenGL calls to the correct buffers (so I obviously need to save the generated buffer IDs, but that goes without saying). Could someone tell me if this is correct or incorrect logic, and whether it is good or bad game-dev practice?
1
FBO and VBO for performance. I discovered VBOs recently and changed my code to use them instead of immediate mode. Now I'm rendering 25,000 squares and it's really slowing down my FPS. If I drew all the squares' VBOs to an FBO, then bound it to a single VBO, would that increase performance? The 25,000 squares are more of a stress test; I'm trying to optimize my rendering code for snow. Edit: Each snow particle draws one of three static VBOs; this helped performance.
1
Help me understand this VBO rotation, and how it's done (OpenGL). I'm pretty new to OpenGL, and I just can't figure out how to rotate this VBO/VAO in 2D space. This is how I bind my coordinates:

float points[] = { 0.0f, 0.10f, 0.0f, 0.10f, 0.10f, 0.0f, 0.10f, 0.10f, 0.0f };
glGenBuffers(1, &points_vbo);
glBindBuffer(GL_ARRAY_BUFFER, points_vbo);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);

I place the VBO at VAO attribute 0:

glBindBuffer(GL_ARRAY_BUFFER, points_vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

and I enable the array:

glEnableVertexAttribArray(0);

and call the draw on the CPU side with glDrawArrays(GL_TRIANGLES, 0, 3). In my shader, I load the array like this:

layout (location = 0) in vec3 vertex_position;
void main()
{
    gl_Position = vec4(vertex_position, 1.0);
}

I've edited out the coloring fragment to save space, but this is my problem: I can make a uniform to move my triangle around, but I don't know how to apply a transformation matrix to a vec4. It seems the shader reads my vertex_position VAO one vec element at a time, and trying to apply any matrix just gives an error. How do you make or apply a rotation matrix to a VAO; am I inputting it wrong? All the uniforms I apply seem to only stretch, zoom, or move the overall triangle. Sorry if the question is bad or partially confused; shading is new to me overall, so thanks in advance.

My shader:

#version 410 core
layout (location = 0) in vec3 vertex_position;
layout (location = 1) in vec3 vertex_colour;
out vec3 colour;
uniform vec3 fade = vec3(0.0, 0.0, 0.0); // fade
uniform float zoom = 1.0;
uniform mat4 transform = mat4(vec4(1.0, 0.0, 0.0, 0.0),
                              vec4(0.0, 1.0, 0.0, 0.0),
                              vec4(0.0, 0.0, 1.0, 0.0),
                              vec4(0.0, 0.0, 0.0, 1.0));
void main()
{
    colour = vertex_colour + fade;
    gl_Position = transform * vec4(vertex_position, 1.0);
}

This example, https://open.gl/transformations, and the answer made it doable.
1
OpenGL/Assimp oddity or error? A friend and I are working on developing a game engine in C++. He doesn't live anywhere near me, so we use Dropbox to sync our files. I opened his project to test his code, and I kept getting errors saying that the program could not start, with an error that stated "This application could not start. 0xc015002." I used a dependency walker to find that ILUT.dll was causing some issues. I have done just about everything suggested on most forums to make ILUT work, and I have not had any success: I've re-downloaded it, put the DLL just about everywhere I could, and I still had issues. We then tried to compile it without Assimp, and the program compiled without errors. This was not the end of our problems, though. I made a class to create a primitive cube using OpenGL. When I tested it out, there was no cube on the screen. I had my friend test the program, and to our surprise there was a perfectly rendered cube on his screen. So I'm honestly not sure where things could possibly be going wrong. Any help at all would be greatly appreciated. Thank you for taking the time to read this long question; I'll be happy to provide any info needed to help with it. My OpenGL version is 4.2. He has an nVidia graphics card, and I have an AMD Radeon graphics card. We are using Visual Studio 2012 as our IDE.
1
How do I determine what color will be written into a single-pixel framebuffer? I'm thinking about rendering into a single-pixel (1x1) framebuffer. For example, we have two triangles which cover the whole NDC area: one is green, the second red. What color will be written to the 1x1 framebuffer? As far as I understand, the graphics API (no matter which) tries to fit the whole scene into the given framebuffer, in our example into a single pixel. But I can't figure out how the GPU will choose what should be written into that pixel.
1
How do I reproduce examples from ShaderToy on my computer? I am a beginner in computer graphics. I have some experience with drawing polygons, shaded surfaces, the use of geometry shaders, etc. I am trying to create a volumetric cloud render shader like this one, with techniques such as raymarching. The problem is that I have understood that graphics are rendered through a pipeline (generate vertex information with the CPU, store it in buffer objects, process it with the vertex shader, and then render the image with the fragment shader), so it is not clear to me at what stage of the pipeline the volumetric rendering is done, since the cloud is not made of polygons or vertices in general. My first step would be to reproduce the same rendering from ShaderToy, to understand the technique better, then add parameters, lighting, etc. What kind of shader is it? How should I write a program to use it? I use OpenGL on Linux. Thank you in advance!
1
Tile Collision subtracting a minor value to the box size I'm having a certain problem when doing collision checks on bounding boxes that have their edges aligned with upper or right tiles. It shouldn't count as an intersection but I don't know a good solution for this. Let's say the player is centered with some tile with that has no collision and the 8 surrounding tiles do have collision. The problem that happens here is that my collision check function will still report a collision because the upper and right edges of the square are in touch with the other tiles. The lower and left are too, but they won't collide I check the collision using the center and size, x and y, for the box. bool isColliding( float centerX, float centerY, float sizeX, float sizeY ) So in this image, the player would be centered at ( 1.5, 1.5 ), relatively to tiles, and his size would be 1 tile x and y. That makes him centered in tile ( 1, 1 ). And when we do the math his left and lower limits are 1, so his limits are within tile ( 1, 1 ), but his upper and right limits are 2 so it reports an intersection with those tiles (the darker ones in the picture). This happens because I calculate half sizes float halfSizeX sizeX 2.0f And the left limit would be int leftLimit centerX halfSizeX In this case the player is not supposed to be intersecting any tile, so my quick fix was to subtract a very small value to the half size like this float halfSizeX sizeX 2.0f 0.001f So the right limit will be 1.999f or so, and won't actually be 2. And this will work as this is only done in the collision checking, not in correcting the position, and all amounts of movement are bigger than 0.001 tiles which would be the biggest concern. But I don't really like this solution, it looks kind of dirty.. My questions is, is there any good alternative to subtracting this value out of the blue? 
EDIT As asked below in the comments, I'm posting the code:

bool Map::isColliding( float centerX, float centerY, float sizeX, float sizeY )
{
    float halfSizeX = sizeX / 2.0f - 0.001f;
    float halfSizeY = sizeY / 2.0f - 0.001f;

    int fromX = (int)max( centerX - halfSizeX, 0.0f );
    int toX   = (int)min( centerX + halfSizeX, (float)getNumTilesX() );
    int fromY = (int)max( centerY - halfSizeY, 0.0f );
    int toY   = (int)min( centerY + halfSizeY, (float)getNumTilesY() );

    for ( int x = fromX; x <= toX; x++ )
        for ( int y = fromY; y <= toY; y++ )
            if ( getTile( x, y )->hasCollision() )
                return true;

    return false;
}
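One epsilon-free alternative is to treat the box as a half-open interval: tile (x, y) covers [x, x+1) × [y, y+1), and an edge that merely sits on a tile boundary does not reach the next tile's interior. A sketch of the index computation under that assumption (names are illustrative, not from the original code):

```cpp
#include <cassert>
#include <cmath>

// Half-open overlap test: a box edge exactly on a tile boundary does not
// count as overlapping the next tile, so no magic epsilon is needed.
// Tile (x, y) is assumed to cover the area [x, x+1) x [y, y+1).
struct TileRange { int fromX, toX, fromY, toY; };  // inclusive indices

TileRange overlappedTiles(float centerX, float centerY,
                          float sizeX, float sizeY)
{
    float halfX = sizeX / 2.0f, halfY = sizeY / 2.0f;
    TileRange r;
    r.fromX = (int)std::floor(centerX - halfX);
    r.fromY = (int)std::floor(centerY - halfY);
    // ceil(edge) - 1 is the last tile whose *interior* the box reaches;
    // an edge exactly on a boundary therefore stops one tile short.
    r.toX = (int)std::ceil(centerX + halfX) - 1;
    r.toY = (int)std::ceil(centerY + halfY) - 1;
    return r;
}
```

For the 1×1 player centered at (1.5, 1.5) this yields exactly tile (1, 1); move him to (1.6, 1.5) and tile 2 on x is correctly included again.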
1
OpenGL vs DirectX difference from Graphics card perspective? I want to know the difference, if any, at a purely hardware level. For example, the simplest question: is there a chip for DirectX and another chip for OpenGL? What do hardware producers do to make their cards DirectX or OpenGL compliant?
1
Import Blender files to SharpGL (OpenGL for C#) I have a problem importing 3D models into SharpGL (OpenGL in C#). Do you have any project where this problem is solved? I need to create models in, for example, Blender, and then import them into SharpGL. I have been trying this for 4 days without result. Could you please post any project that loads (obj, 3ds, ...) files with texturing? I am doing a big school project and without this I can never complete it. Thank you for your answer. Sorry for my English.
1
How to animate a model in WebGL? I created a human model in Blender, exported the vertices and indices into a JSON file and render the model in a browser using WebGL. Now I created a walk and jump animation in Blender and would like to do the same with WebGL. I saw examples that use a list of vertices for every frame of the animation. Is this the way to go? Do I need to export the vertices for every frame for every animation?
1
Correcting Lighting in Stencil Reflections I'm just playing around with OpenGL seeing how different methods of making shadows and reflections work. I've been following this tutorial which describes using GLUT STENCIL's and MASK's to create a reasonable interpretation of a reflection. Following that and a bit of tweaking to get things to work, I've come up with the code below. Unfortunately, the lighting isn't correct when the reflection is created. glPushMatrix() plane() draw plane that reflection appears on glColorMask(GL FALSE, GL FALSE, GL FALSE, GL FALSE) glDepthMask(GL FALSE) glEnable(GL STENCIL TEST) glStencilFunc(GL ALWAYS, 1, 0xFFFFFFFF) glStencilOp(GL REPLACE, GL REPLACE, GL REPLACE) plane() draw plane that acts as clipping area for reflection glColorMask(GL TRUE, GL TRUE, GL TRUE, GL TRUE) glDepthMask(GL TRUE) glStencilFunc(GL EQUAL, 1, 0xFFFFFFFF) glStencilOp(GL KEEP, GL KEEP, GL KEEP) glDisable(GL DEPTH TEST) glPushMatrix() glScalef(1.0f, 1.0f, 1.0f) glTranslatef(0,2,0) glRotatef(180,0,1,0) sphere(radius, spherePos) draw object that you want to have a reflection glPopMatrix() glEnable(GL DEPTH TEST) glDisable(GL STENCIL TEST) sphere(radius, spherePos) draw object that creates reflection glPopMatrix() It looked really cool to start with, then I noticed that the light in the reflection isn't correct. I'm not sure that it's even a simple fix because effectively the reflection is also a sphere but I thought I'd ask here none the less. I've tried various rotations (seen above the first time the sphere is drawn) but it doesn't seem to work. I figure it needs to rotate around the Y and Z axis but that's not correct. Have I implemented the rotation wrong or is there a way to correct the lighting?
1
Quaternion based rotation and pivot position I can't figure out how to perform matrix rotation using Quaternion while taking into account pivot position in OpenGL.What I am currently getting is rotation of the object around some point in the space and not a local pivot which is what I want. Here is the code Using Java Quaternion rotation method public void rotateTo3(float xr, float yr, float zr) rotation.x xr rotation.y yr rotation.z zr Quaternion xrotQ Glm.angleAxis((xr), Vec3.X AXIS) Quaternion yrotQ Glm.angleAxis((yr), Vec3.Y AXIS) Quaternion zrotQ Glm.angleAxis((zr), Vec3.Z AXIS) xrotQ Glm.normalize(xrotQ) yrotQ Glm.normalize(yrotQ) zrotQ Glm.normalize(zrotQ) Quaternion acumQuat acumQuat Quaternion.mul(xrotQ, yrotQ) acumQuat Quaternion.mul(acumQuat, zrotQ) Mat4 rotMat Glm.matCast(acumQuat) model new Mat4(1) scaleTo( scaleX, scaleY, scaleZ) model Glm.translate( model, new Vec3( pivot.x, pivot.y, 0)) model rotMat.mul( model) model.mul(rotMat) rotMat.mul( model) model Glm.translate( model, new Vec3( pivot.x, pivot.y, 0)) translateTo( x, y, z) notifyTranformChange() Model matrix scale method public void scaleTo(float x, float y, float z) model.set(0, x) model.set(5, y) model.set(10, z) scaleX x scaleY y scaleZ z notifyTranformChange() Translate method public void translateTo(float x, float y, float z) x x pivot.x y y pivot.y z z position.x x position.y y position.z z model.set(12, x) model.set(13, y) model.set(14, z) notifyTranformChange() But this method in which I don't use Quaternion works fine public void rotate(Vec3 axis, float angleDegr) rotation.add(axis.scale(angleDegr)) change to GLM Mat4 backTr new Mat4(1.0f) backTr Glm.translate(backTr, new Vec3( pivot.x, pivot.y, 0)) backTr Glm.rotate(backTr, angleDegr, axis) backTr Glm.translate(backTr, new Vec3( pivot.x, pivot.y, 0)) model model.mul(backTr) backTr.mul( model) notifyTranformChange()
1
How to change variable position within shader Unity I recently learned how to write shader in Unity and I'm trying to move the texture within one of my variable. this is the texture I used on the model. I separated the texture into two shapes, which is texRound and texStar based on position. What I'm trying to do is to move the star texture stored in texStar variable without affecting the position of texRound. What comes to my mind right now is to use something like this code below, but it doesn't work as the texStar variable only save the color value without the position. (texStar.x posModifier).a texUV.r What should I do to access the position of the texStar variable? Thank you so much in advance. Here's my full code Shader "Custom ASK RGB" Properties UV ("UV", 2D) "white" variable I intend to use as modifier value to change the y position of the star texture modifier ("Modifier" , float) 0 SubShader Tags "RenderType" "Opaque" LOD 200 CGPROGRAM pragma surface surf Lambert sampler2D UV struct Input float2 uv2 UV texCOORD1 void surf (Input IN, inout SurfaceOutput o) variable used to store the UV texRoundture mentioned half4 texUV tex2D ( UV, IN.uv2 UV.xy) variables used to store the round texture and the star texture half4 texRound half4 texStar if (IN.uv2 UV.x gt 0.7) if (IN.uv2 UV.y gt 0.5) texRound.a texUV.r texStar.a 0 else texRound.a 0 texStar.a texUV.r else texRound.a 0 texStar.a 0 texRound.rgb texUV.rgb texStar.rgb texUV.rgb o.Albedo texRound.rgb texRound.a texStar.rgb texStar.a ENDCG FallBack "Diffuse"
1
Need some help implementing VBOs with Frustum Culling I'm currently developing my first 3D game for a school project; the game world is completely inspired by Minecraft (a world completely made out of cubes). I'm currently seeking to improve performance by trying to implement vertex buffer objects, but I'm stuck. I already have these methods implemented: frustum culling, only drawing exposed faces, and distance culling, but I have the following doubts. I currently have about 2^24 cubes in my world, divided into 1024 chunks of 16x16x64 cubes, and right now I'm doing immediate mode rendering, which works well with frustum culling. If I implement one VBO per chunk, do I have to update that VBO each time I move the camera (to update the frustum)? Is there a performance hit with this? Can I dynamically change the size of each VBO, or do I have to make each one the biggest possible size (the chunk completely filled with objects)? Would I have to keep each visited chunk in memory, or could I efficiently remove that VBO and recreate it when needed?
1
How to draw a smooth circle in Android using OpenGL? I am learning about the OpenGL API on Android. I just drew a circle. Below is the code I used.

public class MyGLBall {
    private int points = 360;
    private float vertices[] = { 0.0f, 0.0f, 0.0f };  // centre of circle
    private FloatBuffer vertBuff;

    public MyGLBall() {
        vertices = new float[(points + 1) * 3];
        for (int i = 3; i < (points + 1) * 3; i += 3) {
            double rad = (i * 360 / points / 3) * (3.14 / 180);
            vertices[i]     = (float) Math.cos(rad);
            vertices[i + 1] = (float) Math.sin(rad);
            vertices[i + 2] = 0;
        }
        ByteBuffer bBuff = ByteBuffer.allocateDirect(vertices.length * 4);
        bBuff.order(ByteOrder.nativeOrder());
        vertBuff = bBuff.asFloatBuffer();
        vertBuff.put(vertices);
        vertBuff.position(0);
    }

    public void draw(GL10 gl) {
        gl.glPushMatrix();
        gl.glTranslatef(0, 0, 0);
        gl.glScalef(size, size, 1.0f);
        gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points / 2);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glPopMatrix();
    }
}

It is actually taken directly from here. The circle looks pretty good. But now I want to make the boundary of the circle smooth. What changes do I need to make to the code? Or do I need to use some other technique for drawing the circle? Thanks.
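Worth noting: the jagged boundary is aliasing, not a vertex-count problem, so the fix is usually a multisampled surface (or an alpha-faded rim), not more triangles. Still, the fan generation itself can be written more directly; a sketch in C++ for brevity (the math is identical in Java):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Generates a triangle-fan unit circle: the center vertex followed by
// segments + 1 rim vertices (the first rim vertex is repeated to close the
// fan). Antialiasing must still come from elsewhere, e.g. a multisampled
// EGL surface — more segments alone will not smooth the edge.
std::vector<float> circleFan(int segments)
{
    const float kPi = 3.14159265f;
    std::vector<float> v = { 0.0f, 0.0f, 0.0f };  // center
    for (int i = 0; i <= segments; ++i) {
        float a = 2.0f * kPi * i / segments;
        v.push_back(std::cos(a));
        v.push_back(std::sin(a));
        v.push_back(0.0f);
    }
    return v;
}
```

Drawn with GL_TRIANGLE_FAN over all segments + 2 vertices, this avoids the degree-based index arithmetic of the original loop.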
1
What is the difference between OpenGL ES and OpenGL? Android uses OpenGL ES, what is the difference between it and OpenGL?
1
How can I reduce aliasing in my outline glow effect? I'm trying to replicate the glowing outline effect in the game Left 4 Dead. The effect causes an object's outline to glow, even when the object is occluded. Here is a screenshot of the effect. I'm somewhat able to replicate this effect in my OpenGL based program. This is what I'm currently doing: Create a color and depth texture that is half the screen size for rendering the glowing objects. Clear the glow color depth textures (color is cleared to black). For each glowing object, render it to the glow texture as a solid color. Perform a separable gaussian blur on the glow texture. Render the full resolution scene as normal. Additively blend the glow texture with the normal scene, but use the glow depth texture to mask out the object, leaving just the blurred outline. Here is a screenshot of my approach. Here is the fragment shader that combines the glow texture with the scene:

uniform sampler2D glowColorTex;
uniform sampler2D glowDepthTex;
uniform sampler2D sceneColorTex;

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    vec4 color = texture2D(sceneColorTex, uv);
    float depth = texture2D(glowDepthTex, uv).r;
    if (depth == 1.0)
        color += 2.0 * texture2D(glowColorTex, uv);
    gl_FragColor = color;
}

As you can see, it seems to work for the most part, but the aliasing on the outline is really bad. Does anybody have any suggestions for smoothing the inner edge of the outline? Should I be sampling the neighboring depth values of each pixel and scaling the glow based on the number of depth values that equal 1.0? Or is there a better approach that will produce smoother results?
1
OpenGL ES 2.0 Point Sprites Size I am trying to draw point sprites in OpenGL ES 2.0, but all my points end up with a size of 1 pixel...even when I set gl PointSize to a high value in my vertex shader. How can I make my point sprites bigger?
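In ES 2.0 the point size comes only from gl_PointSize written in the vertex shader; if nothing writes it, points stay at 1 pixel. A minimal sketch (uniform names are illustrative):

```glsl
// ES 2.0 vertex shader: gl_PointSize must be written here, otherwise
// points default to 1 px. The value is in pixels and is clamped to the
// implementation's GL_ALIASED_POINT_SIZE_RANGE.
attribute vec4 aPosition;
uniform mat4  uMvp;
uniform float uPointSize;

void main()
{
    gl_Position  = uMvp * aPosition;
    gl_PointSize = uPointSize;
}
```

Note that on desktop GL the same shader additionally requires glEnable(GL_PROGRAM_POINT_SIZE) on the host side; ES 2.0 has no such enable.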
1
glBufferData consuming system memory I am memory profiling my game in Visual Studio, and about 60% of memory usage comes from calls to glBufferData(). I may be missing something, but shouldn't this consume GPU memory instead of system memory? I call it with GL_ARRAY_BUFFER and GL_STATIC_DRAW. Is there a way I can force it to use only VRAM? Visual Studio attributes the memory usage to "nvoglv32.dll".
1
Two reflective surfaces facing each other in OpenGL How do I render two reflective surfaces, for example mirrors, facing each other? To render a single surface I would use a cube map, rendering from the object in six directions. But to render two surfaces it becomes a paradox: which one do I render first? Can anyone give me a spark? https learnopengl.com !Advanced OpenGL Cubemaps I used this tutorial
1
Passing variables down the pipeline glsl I am sorry to post a question that may be easily tested, but I don't have OGL4 hardware at the moment and I have to make some design decisions beforehand, so I wanted a clear answer. Looking at various tutorials that involve tessellation shaders, every parameter that I want to send from the vertex shader to the fragment shader is also passed through the tessellation shaders (as it was always needed in the cases I saw). If I have a variable that I want to send from the vertex shader to the fragment one, but that's not needed by the TCS and TES, can I just do something like: Vertex shader: out vec3 foo — ignore foo in the tessellation control and eval shaders — Fragment shader: in vec3 foo? Or do I have to pass foo through the tessellation shaders as well? I always understood that out variables are just passed to the next stage, but what if the variable is not "picked up" until another, non-consecutive stage? Thank you!
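For reference: each stage's inputs come only from the immediately preceding active stage, so with tessellation enabled, foo has to be forwarded explicitly — as a per-vertex array through the TCS and interpolated in the TES. A hedged sketch of the boilerplate (GLSL 4.x, triangle patches; names are illustrative):

```glsl
// --- tessellation control shader ---
layout(vertices = 3) out;
in  vec3 foo[];                      // from the vertex shader
out vec3 fooTC[];                    // to the evaluation shader
void main()
{
    fooTC[gl_InvocationID] = foo[gl_InvocationID];
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    // ... set gl_TessLevelOuter / gl_TessLevelInner as usual ...
}

// --- tessellation evaluation shader ---
layout(triangles, equal_spacing, ccw) in;
in  vec3 fooTC[];
out vec3 foo;                        // matches "in vec3 foo" in the fragment shader
void main()
{
    foo = gl_TessCoord.x * fooTC[0]
        + gl_TessCoord.y * fooTC[1]
        + gl_TessCoord.z * fooTC[2];
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}
```

Skipping the forwarding leaves the fragment shader's foo unlinked to the vertex shader's output, since only interface names between consecutive stages are matched at link time.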
1
Why am I getting these artifacts when rendering polygons with OpenGL and SDL from far distance? I'm working in a toy BSP (quake3 version) renderer. I started creating the context and handling the input with GLFW but then I switched over SDL. The performance change was amazing with SDL leading with 100 fps. But when rendering some surfaces from far now I'm getting some strange and ugly artifacts in the borders or junctions. The only thing that I changed from GLFW to SDL was the initialization (code bellow) and the input handling but the OpenGL Draw calls are the same. Why am I getting these artifacts when rendering polygons with OpenGL and SDL from far distance? This only happens with some surfaces (mostly the ones without textures, only vertex colors) but there are some tiny surfaces (pendants or flags) that also have the same problem. Also, not all the borders of the same faces have the problem. if (SDL Init(SDL INIT VIDEO SDL INIT EVENTS) lt 0) cout lt lt "Failed to init SDL Video" lt lt endl SDL GL SetAttribute(SDL GL CONTEXT MAJOR VERSION, 4) SDL GL SetAttribute(SDL GL CONTEXT MINOR VERSION, 1) SDL GL SetAttribute(SDL GL CONTEXT FLAGS, SDL GL CONTEXT FORWARD COMPATIBLE FLAG) SDL GL SetAttribute(SDL GL CONTEXT PROFILE MASK, SDL GL CONTEXT PROFILE CORE) SDL GL SetAttribute(SDL GL DOUBLEBUFFER, 1) SDL GL SetAttribute(SDL GL MULTISAMPLEBUFFERS, 1) SDL GL SetAttribute(SDL GL MULTISAMPLESAMPLES, 4) window SDL CreateWindow("BSP Test", 0, 0, width, height, SDL WINDOW SHOWN SDL WINDOW OPENGL) if ( window nullptr) SDL Quit() return false SDL GLContext ctx SDL GL CreateContext( window) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glGetIntegerv(GL MAX PATCH VERTICES, amp ee) cout lt lt "Max patch vertices " lt lt ee lt lt endl glPatchParameteri(GL PATCH VERTICES, 9) cout lt lt "GL ERROR LOAD " lt lt glGetError() lt lt endl glClearColor(0.0f, 0.0f, 0.01f, 1.0f) glEnable(GL CULL FACE) 
glCullFace(GL BACK) glFrontFace(GL CCW) glEnable(GL DEPTH TEST) glEnable(GL MULTISAMPLE) glEnable(GL POLYGON SMOOTH) glEnable(GL LINE SMOOTH) SDL GL SetSwapInterval(0) matrices.proj glm perspective(glm radians(100.0f), (float) width (float) height, 1.0f, 4000.0f) camera.position glm vec3 41.415211f, 320.293121f, 537.225281f return true
1
How does glfw handle input? GLFW is clearly an OpenGL framework, which means we are working with the graphics card. How does input get handled, then? It seems like it handles all of the controllers on the system, not only the graphics card.
1
glDrawArrays Erasing the buffer between calls? I have two VBOs, assigned to names 1 and 2. Only 2 is being drawn. However, if I comment out VBO 2, VBO 1 is drawn with no problem. Here's the relevant code. for m in shader. models try Set the model's transform matrix uniform vbo, mode m.render data glUniformMatrix4fv( shader.uniforms 'model' 0 , 1, GL FALSE, np.array(m.model matrix)) vbo.bind() Binds the VBO. glDrawArrays(mode, 0, len(vbo)) finally vbo.unbind() Unbinds it. glGetting GL ARRAY BUFFER BINDING in the loop gives me alternating 1 and 2s, so the buffers are binding correctly. The second object is smaller and positioned in a different location, so they aren't hiding each other. The only explanation I can think of is glDrawArrays clears the whole buffer every time it draws? What the heck?
1
How many textures can I usually bind at once? I'm developing a game engine, and it's only going to work on modern (shader model 4+) hardware. I figure that, by the time I'm done with it, that won't be such an unreasonable requirement. My question is: how many textures can I bind at once on a modern graphics card? 16 would be sufficient. Can I expect most modern graphics cards to support that number of textures? My GTX 460 appears to support 32, but I have no idea if that's representative of most modern video cards.
1
Why I get inconsistent occlusion query results? My system Catalyst 15.12, mesa 11.2.1, Archlinux, kernel 4.5.1 Depending on camera position I get inconsistent occlusion query results. Following scene contains wall, objects behind wall and objects on sides. Objects behind wall are always invisible(correct). Objects on sides should be always visible but sometimes marked as invisible. Why? On screenshots green means visible, red boundary means occluded This is the code that I use void gems render objects(Scene scene) int i 0, j 0, res AssetInstance inst if(!s queryIds) s queryIds (GLuint )malloc(sizeof(GLuint) (scene gt numInstances)) glGenQueries(scene gt numInstances, s queryIds) draw everything for(i 0 i lt scene gt numInstances i ) glc draw inst(scene, scene gt instances i, amp s curProgramId, amp s curVaoId) query everything glDepthMask(GL FALSE) glColorMask(GL FALSE, GL FALSE, GL FALSE, GL FALSE) for(i 0 i lt scene gt numInstances i ) inst scene gt instances i glBeginQuery(GL SAMPLES PASSED, s queryIds i ) glc draw inst(scene, inst gt bbInst, amp s curProgramId, amp s curVaoId) glEndQuery(GL SAMPLES PASSED) glDepthMask(GL TRUE) glColorMask(GL TRUE, GL TRUE, GL TRUE, GL TRUE) draw bounding boxes of invisible glDepthMask(GL FALSE) glDisable(GL DEPTH TEST) for(i 0 i lt scene gt numInstances i ) inst scene gt instances i res 0 glGetQueryObjectiv(s queryIds i , GL QUERY RESULT, amp res) if(res 0) GLenum prevDrawType inst gt bbInst gt asset gt drawType inst gt bbInst gt asset gt drawType GL LINE STRIP glc draw inst(scene, inst gt bbInst, amp s curProgramId, amp s curVaoId) inst gt bbInst gt asset gt drawType prevDrawType glEnable(GL DEPTH TEST) glDepthMask(GL TRUE)
1
How do you develop the model matrix for an object? This might be a misunderstanding on my part but let me give you my superficial understanding of Model View Projection pipeline and different coordinate systems. Then I will ask a question. As we know, we might have many models in our scene. You might load vertex attributes for each using a model loader etc. When you load the coordinates, these are in object space or model space. So they describe the object wrt a local frame. We use model matrix to transform this to world space shared by all models, a global frame let's say. After this, we are in a huge world space. We might think this as an endless Euclidean 3D space. Here coordinates may have any value. Somewhere in the world, we have a camera and these objects in the world have different coordinates wrt. camera's frame. This is where view matrix comes into play. After applying this transformation, coordinates are in view space. In the OpenGl world, we think like the camera lies at the origin. We specify a view frustum and we do (actually delay) the clipping etc. After this we apply projection matrix and transform this viewing pyramid into a cube that spans 1, 1 in all the axes. This is homogeneous coordinates and we do perspective divide, clipping etc. This is my view of the whole thing, roughly. The question is, we have routines to generate view and projection transformations given the required parameters. But how about the model matrix? For example, I load a bunny model stored in an OBJ file. It might have a coordinate value like (32.657, 12.545, 8.444). In which space are these coordinates? If it's in the model space, what is the best strategy to develop a model matrix to put this object into world space? Another OBJ model of same bunny might have coordinates all lying in 1, 1 range for example. Does this mean this object is already transformed with an MVP? Are those model matrices related to only skeletal, hierarchical models? 
I think this is a strange question, which is perhaps why I don't see it really discussed anywhere. But it bugs me and leaves me with a superficial understanding of the whole matter. I want to truly understand.
1
How can I get the texture file name for my polygon? I have a problem with the FBX SDK. I read in the data for the vertex position and the UV coordinates. It works fine, but now I want to read for each polygon to which texture it belongs, so that I can have models with multiple textures. Can anyone tell me how I can get the texture file name for my polygon? My code to read in vertex position and uv coordinates is the following int i, j, lPolygonCount pMesh gt GetPolygonCount() FbxVector4 lControlPoints pMesh gt GetControlPoints() int vertexId 0 for (i 0 i lt lPolygonCount i ) int lPolygonSize pMesh gt GetPolygonSize(i) for (j 0 j lt lPolygonSize j ) int lControlPointIndex pMesh gt GetPolygonVertex(i, j) FbxVector4 pos lControlPoints lControlPointIndex current model vertex index .x pos.mData 0 pivot offset 0 current model vertex index .y pos.mData 1 pivot offset 1 current model vertex index .z pos.mData 2 pivot offset 2 FbxVector4 vertex normal pMesh gt GetPolygonVertexNormal(i,j, vertex normal) current model vertex index .nx vertex normal.mData 0 current model vertex index .ny vertex normal.mData 1 current model vertex index .nz vertex normal.mData 2 read in UV data FbxStringList lUVSetNameList pMesh gt GetUVSetNames(lUVSetNameList) get lUVSetIndex th uv set const char lUVSetName lUVSetNameList.GetStringAt(0) const FbxGeometryElementUV lUVElement pMesh gt GetElementUV(lUVSetName) if(!lUVElement) continue only support mapping mode eByPolygonVertex and eByControlPoint if( lUVElement gt GetMappingMode() ! FbxGeometryElement eByPolygonVertex amp amp lUVElement gt GetMappingMode() ! FbxGeometryElement eByControlPoint ) return index array, where holds the index referenced to the uv data const bool lUseIndex lUVElement gt GetReferenceMode() ! FbxGeometryElement eDirect const int lIndexCount (lUseIndex) ? 
lUVElement gt GetIndexArray().GetCount() 0 FbxVector2 lUVValue get the index of the current vertex in control points array int lPolyVertIndex pMesh gt GetPolygonVertex(i,j) the UV index depends on the reference mode int lUVIndex pMesh gt GetTextureUVIndex(i, j) lUVValue lUVElement gt GetDirectArray().GetAt(lUVIndex) current model vertex index .tu (float)lUVValue.mData 0 current model vertex index .tv (float)lUVValue.mData 1 vertex index float v1 3 , v2 3 , v3 3 v1 0 current model vertex index 3 .x v1 1 current model vertex index 3 .y v1 2 current model vertex index 3 .z v2 0 current model vertex index 2 .x v2 1 current model vertex index 2 .y v2 2 current model vertex index 2 .z v3 0 current model vertex index 1 .x v3 1 current model vertex index 1 .y v3 2 current model vertex index 1 .z collision model gt addTriangle(v1,v2,v3)
1
Calculating vertex normals in OpenGL C++ Does anyone know a simple solution for calculating vertex normals? I've been looking for this on the internet but I can't find a simple solution. For example, if I have some vertices like this:

GLfloat vertices[] = {
     0.5f,  0.5f, 0.0f,  // Top Right
     0.5f, -0.5f, 0.0f,  // Bottom Right
    -0.5f, -0.5f, 0.0f,  // Bottom Left
    -0.5f,  0.5f, 0.0f   // Top Left
};
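The standard recipe: a face normal is the normalized cross product of two triangle edges, and a smooth vertex normal is the normalized sum of the face normals of every triangle touching that vertex. A self-contained sketch (the tiny Vec3 helpers are illustrative — in practice you would use glm::vec3):

```cpp
#include <cassert>
#include <cmath>
#include <initializer_list>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Smooth vertex normals: sum the (unnormalized, hence area-weighted) face
// normals of every triangle touching a vertex, then normalize per vertex.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& pos,
                                const std::vector<unsigned>& indices)
{
    std::vector<Vec3> n(pos.size(), { 0, 0, 0 });
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 fn = cross(pos[b] - pos[a], pos[c] - pos[a]);
        for (unsigned k : { a, b, c })
            n[k] = { n[k].x + fn.x, n[k].y + fn.y, n[k].z + fn.z };
    }
    for (Vec3& v : n) v = normalize(v);
    return n;
}
```

For a flat quad all vertex normals simply equal the plane normal; the accumulation only matters once faces meet at an angle.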
1
Rotation ascending into infinity? I'm creating a rotation for my sun to move, however it quickly extends to Infinity before going into NaN. I thought that taking advantage of the Matrix4f would make this much easier but it does as previously stated. Ideally should have the sun rotate around the sunCenter. Why is it doing this and how would I fix it? private Vector3f getSunPosition() double rotation (time DAY LENGTH) 360 Matrix4f matrix new Matrix4f() Vector3f pos sun.getPosition() matrix.m03 pos.x matrix.m13 pos.y matrix.m23 pos.z Matrix4f.rotate((float) Math.toRadians(rotation), sunCenter, matrix, matrix) return new Vector3f(matrix.m03, matrix.m13, matrix.m23) I had it print out the Vector3f and rotation. The rotation looks a bit odd at first, however I can fix that another time. Vector3f 1.0, 1.0, 1.0 0.0 Vector3f 288375.56, 0.18377686, 287549.88 79.41000366210938 Vector3f 1.96323607E11, 125296.0, 1.96323607E11 158.82000732421875 Vector3f 1.05730567E17, 0.0, 1.05730567E17 238.23001098632812 Vector3f 5.5935644E22, 0.0, 5.5935644E22 240.02999877929688 Vector3f 2.9050322E28, 0.0, 2.9050322E28 241.8300018310547 Vector3f 1.4801178E34, 0.0, 1.4801178E34 243.6300048828125 Vector3f Infinity, 0.0, Infinity 243.8400115966797 Vector3f Infinity, NaN, Infinity 244.04998779296875 Vector3f NaN, NaN, NaN 244.25999450683594 Vector3f NaN, NaN, NaN 244.41000366210938 Vector3f NaN, NaN, NaN 244.55999755859375
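One likely culprit in the posted code: Matrix4f.rotate treats its Vector3f argument as a rotation axis, which is expected to be normalized, while sunCenter here is a position — a large non-unit "axis" produces the huge matrix entries seen in the log. For an orbiting sun it is often simpler to skip matrices entirely and compute the position from the angle each frame, so nothing accumulates and nothing can run off to infinity. A hedged sketch (C++ for brevity; axis choice and names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Orbit position computed fresh from the angle each frame: no state is
// accumulated between frames, so the result cannot drift toward Infinity
// or NaN. 'center' and 'radius' describe the circle the sun travels;
// the orbit plane chosen here (x/y) is just an example.
Vec3 sunPosition(float angleDegrees, Vec3 center, float radius)
{
    float a = angleDegrees * 3.14159265f / 180.0f;
    return { center.x + radius * std::cos(a),
             center.y + radius * std::sin(a),
             center.z };
}
```

With angleDegrees = (time / DAY_LENGTH) * 360 as in the question, the sun completes one loop per day regardless of frame rate.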
1
Best way of writing pixel-manipulation-intensive applications with OpenGL Direct3D Recently I have been making little experiments with engines similar to the old Wolfenstein 3D, Doom and Build engines, where the 3D rendering is done entirely in software and therefore you need full access to the screen buffers at the pixel level. In the days of DOS, before we had current GPUs, it was relatively straightforward to get a surface to plot pixels on. You had to devise the fastest way to do your calculations and the fastest way of updating the framebuffer. In fact, it is still possible by using libraries like SDL, which still give you reasonable speed when you manipulate surfaces at the pixel level. So I wonder which would be the best way of dealing with pixel-intensive applications like the ones I describe using current GPUs, with OpenGL or Direct3D. I understand this sounds absurd, but I have been looking at API documentation and it seems that this is a feature we have somewhat lost on current hardware. I see suggestions such as using glDrawPixels or pixel shaders, but no solid explanation or theory which can help me make the right decisions.
1
GL SPOT CUTOFF not working properly I'm new to OpenGL. I'm studying OpenGL 2.1 and I'm trying to make a little program to test the GL SPOT CUTOFF property, but when I set a value between 0.0 90.0, the light doesn't work and everything is dark. The code void lightInit(void) GLfloat light0Position 0.0,0.0,2.0,1.0 glEnable(GL LIGHTING) glEnable(GL LIGHT0) glLightf(GL LIGHT0, GL SPOT CUTOFF, 45.0) glLightfv(GL LIGHT0, GL POSITION, light0Position) void reshapeFunc(int w, int h) glViewport(0, 0, (GLsizei) w, (GLsizei) h) glMatrixMode(GL PROJECTION) glLoadIdentity() if (w lt h) glOrtho( 4, 4, 4 (GLfloat)h (GLfloat)w, 4 (GLfloat)h (GLfloat)w, 4.0, 4.0) else glOrtho( 4 (GLfloat)w (GLfloat)h,4 (GLfloat)w (GLfloat)h, 4, 4, 4, 4.0) glMatrixMode(GL MODELVIEW) glLoadIdentity() gluLookAt(0,0,0,0,0, 1,0,1,0) void displayFunc(void) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) center sphere glutSolidSphere(1, 100,100) right sphere glPushMatrix() glTranslatef(3,0,0) glutSolidSphere(1, 100,100) glPopMatrix() left sphere glPushMatrix() glTranslatef( 3,0,0) glutSolidSphere(1, 100,100) glPopMatrix() glutSwapBuffers() void keyboardFunc(unsigned char key, int x, int y) if (key 27) exit(EXIT SUCCESS) int main(int argc, char argv) freeglut init and windows creation glutInit( amp argc, argv) glutInitDisplayMode(GLUT DOUBLE GLUT RGB GLUT DEPTH) glutInitWindowSize(600, 500) glutInitWindowPosition(300, 100) glutCreateWindow("OpenGL") glew init and errors check GLenum err glewInit() if (GLEW OK ! 
err) fprintf(stderr, "Error s n", glewGetErrorString(err)) return 1 fprintf(stdout, " GLEW s n n", glewGetString(GLEW VERSION)) general settings glClearColor(0.0, 0.0, 0.0, 0.0) glShadeModel(GL SMOOTH) glEnable(GL DEPTH TEST) light settings lightInit() callback functions glutDisplayFunc(displayFunc) glutReshapeFunc(reshapeFunc) glutKeyboardFunc(keyboardFunc) glutMainLoop() return 0 This code produces this image If I delete glLightf(GL LIGHT0, GL SPOT CUTOFF, 45.0) , the next image is produced Is there some kind of bug ?
1
mixing forward and deferred rendering A simplified version of my deferred rendering looks like this:

bind fbo1
clear color and depth buffers
gbuffer stage (the gbuffer contains only those pixels which pass a depth test)
unbind fbo1
bind fbo2
clear color buffer
draw a full screen quad, shading stage (render to the texture for some post processing effects)
unbind fbo2
clear color and depth buffers
draw a full screen quad and render to the default framebuffer

How do I combine that with forward rendering?
1
GLSL shader without a vertex array OK, so I have an idea for a neat GPU-driven curve renderer, and I realised that the vertex shader can be hardwired to generate the points of a curve segment (to be rendered as a line strip) without sending any vertex positions: gl_Position could be set completely procedurally. That said, I'd still need to specify a "t" value per point via vertex attributes. Is it possible to specify attributes (i.e. via glVertexAttribPointer) without specifying vertices? Or does GL need "space" in the buffers for vertices even if they aren't initialized?
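In fact, from GL 3.3 core not even the per-point t attribute is needed: with no attributes enabled, glDrawArrays(GL_LINE_STRIP, 0, n) still invokes the vertex shader n times, and t can be derived from gl_VertexID. A hedged sketch evaluating a quadratic Bezier (uniform names are illustrative):

```glsl
#version 330 core
// Attributeless curve rendering: no VBO at all; t comes from gl_VertexID.
// Host side: bind an (empty) VAO — core profile still requires one — set
// the uniforms, then glDrawArrays(GL_LINE_STRIP, 0, segments + 1).
uniform vec2 p0, p1, p2;   // quadratic Bezier control points
uniform int  segments;

void main()
{
    float t = float(gl_VertexID) / float(segments);
    vec2 a = mix(p0, p1, t);      // de Casteljau, first level
    vec2 b = mix(p1, p2, t);
    gl_Position = vec4(mix(a, b, t), 0.0, 1.0);
}
```

For longer curves, control points can be supplied via a uniform block or texture buffer and selected with gl_InstanceID, one instance per segment.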
1
Font loads with artifacts I've recently come on to an error when loading fonts in my game, some letters show up perfect while others have artifacts around them. Here's an example of this While some letters are perfectly normal, others are being unstable. My options menu This is what my loader method looks like. (Yes i've tried other fonts) private static int loadTexture(BufferedImage image) int BYTES PER PIXEL 4 int pixels new int image.getWidth() image.getHeight() image.getRGB(0, 0, image.getWidth(), image.getHeight(), pixels, 0, image.getWidth()) ByteBuffer buffer BufferUtils.createByteBuffer(image.getWidth() image.getHeight() BYTES PER PIXEL) for(int y 0 y lt image.getHeight() y ) for(int x 0 x lt image.getWidth() x ) int pixel pixels y image.getWidth() x buffer.put((byte) ((pixel gt gt 16) amp 0xFF)) Red component buffer.put((byte) ((pixel gt gt 8) amp 0xFF)) Green component buffer.put((byte) (pixel amp 0xFF)) Blue component buffer.put((byte) ((pixel gt gt 24) amp 0xFF)) Alpha component. Only for RGBA buffer.flip() int textureID glGenTextures() glBindTexture(GL TEXTURE 2D, textureID) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL NEAREST) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA8, image.getWidth(), image.getHeight(), 0, GL RGBA, GL UNSIGNED BYTE, buffer) return textureID
1
BlitzMax generating 2D neon glowing line effect to png file Originally asked on StackOverflow, but it became tumbleweed. I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or laserbeam. Doesn't have to be realtime, but just to TImage objects and then maybe saved to PNG for later use in animation. I'm happy to use 3D features, but it will be for use in a 2D game. Since it will be on black space background, my strategy is to draw a series of white blurred lines with color and high transparency, then eventually central lines less blurred and more white. What I want to draw is actually bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser neon effect because it comes out looking very segmented. So, I think it may be better to use a blur effect shader on what does render well, which is a 1 pixel bezier curve. The problems I've been having are Applying a shader to just a certain area of the screen where lines are drawn. If there's a way to do draw lines to a texture and then blur that texture and save the png, that would be great to hear about. There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated. Using just 2D calls could be advantageous, simpler to understand and re use. It would be very nice to know how to save a PNG that preserves the transparency alpha stuff. p.s. I've reviewed this post (and many many others on the Blitz site), have samples working, and even developed my own 5x5 frag shaders. But, it's 3D and a scene wide thing that doesn't seem to convert to 2D or just a certain area very well. I'd rather understand how to apply shading to a 2D scene, especially using the specifics of BlitzMax.
Workaround for reading and writing the same texture? To apply post effects, one often needs to read the preliminary image, perform computations on its pixels, and store the result in the same texture again — think of a tone-mapping or desaturation effect. The input and output would ideally be the same texture, but this isn't allowed by OpenGL. What is a good workaround, then? One idea I came up with, which isn't very satisfying, is to have two image textures and use them alternately, keeping track of which one is newer so I can finally display it on the screen.
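The alternating-texture idea described above is in fact the standard workaround, commonly called "ping-ponging": each pass reads the newer texture and writes into the other, then the roles swap. A minimal sketch of just the bookkeeping, in plain C++ (the handle values stand in for real `glGenTextures()` results; all names are illustrative):

```cpp
#include <cassert>

// Two textures used alternately: each post-effect pass samples the one that
// currently holds the latest image and renders into the other via an FBO,
// then the roles swap for the next pass.
struct PingPong {
    unsigned tex[2];
    int readIdx; // which of the two currently holds the latest image
    unsigned readTex()  const { return tex[readIdx]; }     // bind as sampler
    unsigned writeTex() const { return tex[1 - readIdx]; } // attach to the FBO
    void swap() { readIdx = 1 - readIdx; }                 // call after each pass
};
```

After the last effect pass, `readTex()` is the texture to draw to the screen, so no per-object tracking is needed beyond the single index.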
How to find the bottleneck of the graphical pipeline I've been wondering about this issue for a while: how do you find the bottleneck of the graphics pipeline? Recently I've been using a program that draws a massive number of polygons in a simple scene with alpha blending (a grass scene). I used two programs: one uses static coordinates, and the other applies rotation and translation to each blade. Each runs at 60 FPS on its own with no other heavy processes running. But when I run them together (two windows, each with the same number of grass blades at the same positions), the one that rotates and translates drops to 10 FPS while the other stays at about 55 FPS. My question is: why do both run at 60 FPS alone, and when run together, why does the one doing per-blade rotation and translation drop by about 50 FPS while the other only drops to 55? Sounds like a bottleneck to me. Please let me know if you have any idea — or, as a more general answer, if you know of a technique or paper about finding the bottleneck of a GPU (or GPGPU) program, or about optimizing graphics code to run well on the GPU.
Creating a DPI and Resolution Independent GUI How would one best go about it, assuming I'm using OpenGL to do all the drawing? It seems like the best approach is to have the GUI elements maintain the same physical size regardless of the screen they are displayed on. It seems that Windows will act as if you had a screen with 72 DPI, regardless of its actual size, unless you specifically tell it that you handle high-DPI cases — which would greatly simplify things. Does this hold for Linux as well? Do things change between running in windowed mode versus fullscreen? It seems like they should: the window changes size depending on resolution, while in fullscreen the image is always the same size, just with lower or higher pixel density!
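One common way to get the "same physical size everywhere" behaviour described above is to specify GUI sizes in points and convert to device pixels at draw time. A minimal sketch, assuming the conventional 72 points per inch (so at 72 DPI the conversion is the identity, matching the OS behaviour described above for non-DPI-aware apps):

```cpp
#include <cassert>

// Convert a GUI length given in points (1 pt = 1/72 inch) to device pixels
// for the monitor's actual DPI. Element positions and sizes stay in points;
// only the final vertex coordinates handed to OpenGL are in pixels.
inline float pointsToPixels(float points, float dpi) {
    return points * dpi / 72.0f;
}
```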
opengl storing multiple indices into indices buffer After parsing Collada files I found out that we have to load two (or more) indices, each pointing to, say, a vertex or a normal, like this: `<p>3 0 2 0 0 0</p>`. Is there a way to load (and use) these indices in an index buffer (ELEMENT_ARRAY_BUFFER) like this: (vnvnvn...), where v is a vertex index and n is a normal index from the above — with vertices loaded into VBO1 like (vvvv...) and normals into VBO2 like (nnnn...) — so that we don't have to repeat the normals in VBO2? A VBO like (vnvnvn...) is also OK. (And not something like https://stackoverflow.com/questions/11148567/rendering-meshes-with-multiple-indices — I'm not trying to use multiple index buffers. I'm trying to put all the given indices into one index buffer and use them alternately. If I wasn't clear I will try to clarify.)
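For context, OpenGL's index buffer only supports one index per vertex, so the usual approach is to flatten the Collada-style (position, normal) index pairs: each distinct combination becomes a new GL vertex, and a single index buffer refers to those. This is a sketch of that flattening (names are illustrative, not from any particular loader):

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// Per-GL-vertex source indices plus the single index buffer for glDrawElements.
struct Flattened {
    std::vector<unsigned> posOf, normOf; // which source position/normal each GL vertex uses
    std::vector<unsigned> indices;       // single index stream, one index per vertex
};

// pairs holds the (position, normal) index pairs as parsed from <p>...</p>.
Flattened flatten(const std::vector<std::pair<unsigned, unsigned>>& pairs) {
    Flattened out;
    std::map<std::pair<unsigned, unsigned>, unsigned> seen;
    for (const auto& p : pairs) {
        auto it = seen.find(p);
        if (it == seen.end()) { // first time this combination appears: new GL vertex
            it = seen.emplace(p, (unsigned)out.posOf.size()).first;
            out.posOf.push_back(p.first);
            out.normOf.push_back(p.second);
        }
        out.indices.push_back(it->second);
    }
    return out;
}
```

Only the combinations that actually occur are duplicated, so normals shared between faces are not repeated needlessly.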
Irrlicht app Creating STL exception leads to segfault I'm developing an application with C++ and the Irrlicht engine. When I add a `std::runtime_error("err")` expression to my code, I get a segmentation fault at address 0 (before entering main). If I use `std::exception()` instead, the app runs fine. How can `std::runtime_error()` cause a segfault, and how do I fix it? Thanks!

```cpp
#include <stdexcept>
#include <exception>
#include "irrlicht.h"

int main() {
    irr::IrrlichtDevice* device = irr::createDevice();
    if (!device)
        throw std::runtime_error("err"); // without this it runs fine
}
```

Compile: g++ -ggdb -Wall -Wextra -std=c++11 -o main main2.cc -lIrrlicht -lGL -lXxf86vm -lXext -lX11 -lXcursor

Backtrace:

```
(gdb) bt full
#0  0x00000000 in ?? ()
No symbol table info available.
#1  0xb5d88fe1 in init () at dlerror.c:177
No locals.
#2  0xb5d8942e in _dlerror_run (operate=operate@entry=0xb5d88e10 <dlsym_doit>, args=args@entry=0xbffff160) at dlerror.c:129
        result = <optimized out>
#3  0xb5d88e98 in dlsym (handle=0x0, name=0xb7f8afe5 "posix_memalign") at dlsym.c:70
        args = {handle = 0x0, name = 0xb7f8afe5 "posix_memalign", who = 0xb7f674bc, sym = 0xb5d65b20}
        result = <optimized out>
#4  0xb7f674bc in ?? () from /usr/lib/nvidia-304/libGL.so.1
No symbol table info available.
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
```

LD debug: LD_DEBUG=all LD_DEBUG_OUTPUT=log.txt ./main; tail log.txt:

```
3848: symbol=pthread_key_create; lookup in file=/lib/i386-linux-gnu/libgcc_s.so.1 [0]
3848: symbol=pthread_key_create; lookup in file=/lib/i386-linux-gnu/libc.so.6 [0]
3848: symbol=pthread_key_create; lookup in file=/usr/lib/nvidia-304/tls/libnvidia-tls.so.304.131 [0]
3848: symbol=pthread_key_create; lookup in file=/usr/lib/nvidia-304/libnvidia-glcore.so.304.131 [0]
3848: symbol=pthread_key_create; lookup in file=/usr/lib/i386-linux-gnu/libXext.so.6 [0]
3848: symbol=pthread_key_create; lookup in file=/lib/i386-linux-gnu/libdl.so.2 [0]
3848: symbol=pthread_key_create; lookup in file=/usr/lib/i386-linux-gnu/libxcb.so.1 [0]
3848: symbol=pthread_key_create; lookup in file=/lib/ld-linux.so.2 [0]
3848: symbol=pthread_key_create; lookup in file=/usr/lib/i386-linux-gnu/libXau.so.6 [0]
3848: symbol=pthread_key_create; lookup in file=/usr/lib/i386-linux-gnu/libXdmcp.so.6 [0]
```
How can I efficiently render lots of 2D quads? I have lots of 2D quads. I write their local vertex position info to a buffer (accompanied by a world transform matrix) which gets sent to the render thread. I then pass the world transform matrix as a uniform into the vertex shader, which applies it to each vertex and works fine. However, I have to make a call to glDrawElements() for each quad I want to render, because each one has a different world transform. What is the most efficient way for me to render all of these quads? Should I keep doing what I'm already doing, or should I calculate the world position for each vertex on the CPU before I write them to the buffer, so I can batch more quads into a single glDrawElements() call? Or is there another approach?
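The CPU-batching option mentioned above is sketched below: apply each quad's world transform on the CPU and append the four transformed corners into one big vertex/index array, so everything goes out in a single glDrawElements call. This assumes a simple 2D position-plus-rotation transform; names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Appends one quad, pre-transformed into world space, to the shared arrays.
void appendQuad(std::vector<Vec2>& verts, std::vector<unsigned>& indices,
                Vec2 pos, float angle, float halfW, float halfH) {
    const float c = std::cos(angle), s = std::sin(angle);
    const unsigned base = (unsigned)verts.size();
    const Vec2 local[4] = {{-halfW, -halfH}, {halfW, -halfH},
                           {halfW,  halfH}, {-halfW,  halfH}};
    for (const Vec2& l : local) // rotate then translate into world space
        verts.push_back({pos.x + c * l.x - s * l.y, pos.y + s * l.x + c * l.y});
    const unsigned quad[6] = {0, 1, 2, 2, 3, 0}; // two triangles per quad
    for (unsigned i : quad) indices.push_back(base + i);
}
```

After filling the arrays each frame, one buffer upload and one glDrawElements draws every quad; the vertex shader then only needs the view/projection matrix as a uniform.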
Using multiple shaders in OpenGL 3.3 Guys, I have a question on how to use multiple shaders in my app. The app is simple: I have a 3D scene (say, a simple game) and I want to show some 2D GUI in the scene. I was following this tutorial on how to add font rendering to my scene. One difference is that I am using Java and LWJGL, but everything is implemented as in the tutorial. So I have two sets of shaders (two programs). The first handles the 3D models and the scene in general. I added the second set of shaders, copied straight from the tutorial. Here they are. Vertex:

```glsl
#version 330
in vec2 position;
in vec2 texcoord;
out vec2 TexCoords;
uniform mat4 projection;
void main() {
    gl_Position = projection * vec4(position, 0.0, 1.0);
    TexCoords = texcoord;
}
```

and fragment:

```glsl
#version 330
in vec2 TexCoords;
out vec4 color;
uniform sampler2D text;
uniform vec3 textColor;
void main() {
    vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);
    color = vec4(textColor, 1.0) * sampled;
}
```

I compile the shaders and link them into separate programs (so I have modelProgram and fontProgram). However, when I run my application, I see errors in the console (although the application runs fine):

```
WARNING: Output of vertex shader 'TexCoords' not read by fragment shader
ERROR: Input of fragment shader 'vNormal' not written by vertex shader
ERROR: Input of fragment shader 'vTexCoord' not written by vertex shader
ERROR: Input of fragment shader 'vPosition' not written by vertex shader
```

As you can see, TexCoords is an out variable in font.vs.glsl and the other three are in variables in model.fs.glsl, so they belong to the other set of shaders — the other program. My question is: why does this happen? It looks like the pipeline tries to combine one program with the other, although the application runs smoothly. The other problem I have is that I do not see any text rendered. I don't know whether this is caused by the above or by something else. Any help will be appreciated! Thank you
How can I set up a pixel perfect camera using OpenGL and orthographic projection? I am creating a 2D game using OpenGL and orthographic projection. For sprites, I use textured quads (actually two triangles). I want the pixel-art textures for the sprites to be displayed at a multiple of their original size to avoid blurry textures. How can I set up a pixel-perfect camera using OpenGL and orthographic projection? After doing so, how do I size the quads?
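One common setup (a sketch, not the only option): make the orthographic projection span exactly the window in pixels — e.g. glOrtho(0, winW, winH, 0, -1, 1) — so one world unit equals one screen pixel, size each quad to its texture size times an integer scale, and sample with GL_NEAREST. A helper for picking the largest integer scale at which a virtual pixel-art canvas still fits the window:

```cpp
#include <cassert>

// Largest integer scale s such that (virtualW * s, virtualH * s) fits inside
// the window; integer scales keep texels aligned to whole pixels (no blur).
inline int bestIntegerScale(int winW, int winH, int virtualW, int virtualH) {
    int sx = winW / virtualW, sy = winH / virtualH;
    int s = sx < sy ? sx : sy;
    return s < 1 ? 1 : s; // never shrink below 1:1
}
```

A 16x16 sprite would then be drawn as a quad of 16 * s by 16 * s world units, and any leftover window area can be letterboxed.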
Is linear filtering possible on depth textures in OpenGL? I'm working on shadow maps in OpenGL (using C#). First, I've created a framebuffer and attached a depth texture as follows:

```cs
// Generate the framebuffer.
var framebuffer = 0u;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// Generate the depth texture.
var shadowMap = 0u;
glGenTextures(1, &shadowMap);
glBindTexture(GL_TEXTURE_2D, shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap, 0);

// Set the read and draw buffers.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
```

Later (after rendering to the shadow map and preparing the main scene), I sample from the shadow map in a GLSL fragment shader as follows:

```glsl
float shadow = texture(shadowMap, shadowCoords.xy).r;
```

where `vec3 shadowCoords` is the coordinates of the fragment from the perspective of the global directional light source (the one used to create the shadow map). The result is shown below. As expected, shadow edges are jagged due to using GL_NEAREST filtering. To improve smoothness, I tried replacing the shadow map's filtering with GL_LINEAR, but the results haven't changed. I understand there are other avenues I could take (like Percentage-Closer Filtering), but I'd like to answer this question first, if only for my sanity. I've also noticed that other texture parameters (like GL_CLAMP_TO_EDGE rather than GL_REPEAT for wrapping) don't function for the shadow map, which hints that this may be a limitation of depth textures in general. To reiterate: is linear filtering possible using depth textures in OpenGL?
OpenGL threaded loading I'd like to introduce seamless level loading which means I need multiple threads. The main thread is for rendering the current scene (or for non seamless level loading a progress bar) while the other thread is loading the resource files and sending data to the graphics card. Now I have the following single threaded architecture when the graphics device is initialized, a dummy window and a "shared context" is created. This is the only rendering context (HGLRC in Windows terms) I'm creating. Every time a new window is created and the graphics device is asked to create a "renderable surface" into that window, the same "shared rendering context" is used. I can do this because I set the same pixel format for each window, so there is no point to make multiple rendering contexts (that would just make my life harder than it should). Here is a small image representing the current design I've read some topics about this but now I'm confused since I read really different point of views, so I have some questions As far as I know an OpenGL context can be active in a single thread only (at a time). This means that I need another "loading rc" which shares resources (through context creation or glShareList) with the "shared rc". Every time I create a new thread to load something, I have to set the "loading rc" active in that thread. Am I right? Can I use the same dummy device context (HDC) in the loading thread? Because I have a single GPU the commands sent to the graphics card are queued up. Right? Will this behaviour introduce switches between shared and loading contexts? Is there anything else I should take care of?
How to cache a large object I have an object made up of many vertices that changes very rarely (most of the time it looks the same), and I'm trying to figure out a way to avoid re-rendering all its vertices every frame. However, being an OpenGL beginner, I have no idea how to do it. I was thinking I could render it to a texture once and only show the texture every frame, but I don't know if that's possible or if there are better ways.
OpenGL shows a black screen I am new to OpenGL. I tried this example — https://stackoverflow.com/a/31524956/4564882 — but I get only a black widget. The code is exactly the same. This is the code associated with the QOpenGLWidget:

```cpp
OGLWidget::OGLWidget(QWidget *parent) : QOpenGLWidget(parent)
{
}

OGLWidget::~OGLWidget()
{
}

void OGLWidget::initializeGL()
{
    glClearColor(0, 0, 0, 1);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHTING);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
    glEnable(GL_COLOR_MATERIAL);
}

void OGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    glColor3f(1.0, 0.0, 0.0);
    glVertex3f(-0.5, -0.5, 0);
    glColor3f(0.0, 1.0, 0.0);
    glVertex3f(0.5, -0.5, 0);
    glColor3f(0.0, 0.0, 1.0);
    glVertex3f(0.0, 0.5, 0);
    glEnd();
}

void OGLWidget::resizeGL(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, (float)w / h, 0.01, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0);
}
```

I tried the example here: https://doc.qt.io/archives/qt-5.3/qtopengl-2dpainting-example.html. It works fine (trying both base classes, QGLWidget and QOpenGLWidget). This is the code associated with that widget:

```cpp
GLWidget::GLWidget(Helper *helper, QWidget *parent)
    : QGLWidget(QGLFormat(QGL::SampleBuffers), parent), helper(helper)
{
    elapsed = 0;
    setFixedSize(200, 200);
    setAutoFillBackground(false);
}

void GLWidget::animate()
{
    elapsed = (elapsed + qobject_cast<QTimer*>(sender())->interval()) % 1000;
    repaint();
}

void GLWidget::paintEvent(QPaintEvent *event)
{
    QPainter painter;
    painter.begin(this);
    painter.setRenderHint(QPainter::Antialiasing);
    helper->paint(&painter, event, elapsed);
    painter.end();
}
```

I use Qt 5.5.1 binaries built on my machine. I left the build configuration at its default, so it is based on ANGLE, not desktop OpenGL. My graphics card driver is up to date. What is the cause of such behaviour?
2D game with 3D models and terrains I am approaching OpenGL and game development in general. I want to start by coding a simple 2D game in Java (LWJGL) but, since I can create models with Blender, I don't want to use 2D sprites — I'd rather have a semi-flat environment. My target is to remake a classic game with some cool extras like particle emitters, bump mapping, lighting and so on... The basic idea is to make something that looks like Super Mario Bros Wii (or Kirby's Adventures Wii, or Super Smash Bros., etc...). Do you think I have to use jMonkeyEngine? Any hints, code snippets, tutorials, links? Do I have to handle a "z-fixed" camera or set up OpenGL to render in 2D mode? What about the HUD? Sorry if my question is too generic, but I would like to have a clear idea of what I have to do before starting to write code. As always, thanks for your help!
Pygame OpenGL init causes an X Error

```
> pygame.display.set_mode(display, HWSURFACE | OPENGL | DOUBLEBUF)
(Pdb) s
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  154 (GLX)
  Minor opcode of failed request:  3 (X_GLXCreateContext)
  Value in failed request:  0x0
  Serial number of failed request:  28
  Current serial number in output stream:  29
```

I've seen one other answer to this question related to video drivers, but I feel like I've already run my game at least once with the same video drivers installed. I'd just like to know if there's another possible cause for this issue before I reinstall my drivers for the umpteenth time on Linux.
Opengl multiple textures in a shader I'm using modern OpenGL and C++. How do I draw a number of triangles, each one having a different texture, in one draw call when I only have 32 texture units on my graphics card and the max texture size is 1024? My video card has 2 GB of memory, yet it can only bind 32 textures at a time? Why do graphics cards have this limitation? I'm a C++ programmer, and in C++ you don't have restrictions like that — the only restriction is how much RAM you have, which is fine.
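A common way around the per-draw texture-unit limit (not the only one — array textures and bindless textures are alternatives) is a texture atlas: pack many small images into one big texture, bind it once, and give each triangle UVs pointing into its tile. A sketch of the UV remapping for a square atlas of equal tiles (e.g. a 1024x1024 atlas of 64x64 tiles has tilesPerRow = 16); names are illustrative:

```cpp
#include <cassert>

struct UvRect { float u0, v0, u1, v1; };

// UV sub-rectangle of tile `index`, counting tiles row by row from the origin.
inline UvRect atlasUv(int index, int tilesPerRow) {
    const float step = 1.0f / tilesPerRow;
    const int cx = index % tilesPerRow, cy = index / tilesPerRow;
    return { cx * step, cy * step, (cx + 1) * step, (cy + 1) * step };
}
```

Every triangle then samples the same bound texture through its own UV window, so one draw call covers them all.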
For Vertex Buffer Streaming, Multiple glBufferSubData VS Orphaning? I was learning OpenGL recently. In games, we need to update the positions of game objects frequently, and they constantly come in and out of the screen. So in rendering we need to update the vertex buffer quite often as well. In OpenGL's context, one intuitive way is to use glBufferSubData to update only the parts that changed. But I also read online about a trick called orphaning, which creates a new buffer data store and uploads the whole vertex data to it. Also, due to state-change and upload costs, multiple glBufferSubData calls may end up costing more. Here are my questions: which method is better? Do stalls really matter in this case? Do state changes and upload costs really matter in this case? Thanks!
What should the Z coordinate be after transformation by the projection matrix? I'm working on an OpenGL 1.x implementation for the Sega Dreamcast. Because the Dreamcast didn't have any hardware T&L, the entire vertex transformation pipeline has to be done in software. Clipping to the near-Z plane also has to be done in software, as failure to clip results in the polygon being dropped entirely from rendering by the hardware. I'm having some trouble getting the transform/clip/perspective-divide process working correctly, and basically I can sum up the problem as follows: I transform polygon vertices by the modelview and projection matrices; I clip each polygon against the W = 0.000001 plane; this results in new vertices on the near plane with a W of 0.000001, but a Z which is twice the near-plane distance; at perspective divide, vertex.z / vertex.w results in an extreme value because we're dividing a Z value (e.g. 0.2) by 0.000001. Something seems very wrong here. The projection matrix is being generated in the same way as described in the glFrustum docs. So my question is: if I have a coordinate on the near plane, should its Z value be zero after transformation by the projection matrix, or should it be the near-Z distance, or something else? After clipping polygons to the W = 0.000001 plane, should the generated Z coordinates be 0.000001? Update: here is the projection matrix as calculated by gluPerspective(45.0f, 640.0f / 480.0f, 0.44f, 100.0f):

```
1.810660  0.000000   0.000000   0.000000
0.000000  2.414213   0.000000   0.000000
0.000000  0.000000  -1.008839  -0.883889
0.000000  0.000000  -1.000000   0.000000
```

Does this look correct? It's the value in the right-hand column I'm not sure about...
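As a sanity check, the z row of the glFrustum/gluPerspective matrix can be rebuilt from its definition and view-space points pushed through it. A point at view-space z = -near should come out of the perspective divide at NDC z = -1 (and z = -far at +1), so a correctly clipped vertex on the near plane never produces extreme z/w values. A small sketch verifying this against the matrix values above:

```cpp
#include <cassert>
#include <cmath>

// Only the z row matters for this check: clip z = m22 * z + m23, clip w = -z.
struct ZRow { float m22, m23; };

inline ZRow projectionZRow(float zNear, float zFar) {
    return { (zFar + zNear) / (zNear - zFar),
             2.0f * zFar * zNear / (zNear - zFar) };
}

inline float ndcZ(ZRow r, float viewZ) {
    return (r.m22 * viewZ + r.m23) / -viewZ; // clip z divided by clip w
}
```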
GLM conversion from euler angles to quaternion and back does not hold I am trying to convert the orientation of an OpenVR controller, which I have stored as a glm::vec3 of Euler angles, into a glm::fquat and back, but I get wildly different results and the in-game behavior is just wrong (hard to explain, but the orientation of the object behaves normally for a small range of angles, then flips on weird axes). This is my conversion code:

```cpp
// get orientation from OpenVR controller sensor data
const glm::vec3 eulerAnglesInDegrees{orientation[PITCH], orientation[YAW], orientation[ROLL]};
debugPrint(eulerAnglesInDegrees);

const glm::fquat quaternion{glm::radians(eulerAnglesInDegrees)};
const glm::vec3 result{glm::degrees(glm::eulerAngles(quaternion))};
debugPrint(result);
// result should represent the same orientation as eulerAnglesInDegrees
```

I would expect eulerAnglesInDegrees and result to be either the same or equivalent representations of the same orientation, but that is apparently not the case. These are some example values I get printed out:

```
39.3851 5.17816 3.29104
39.3851 5.17816 3.29104

32.7636 144.849 44.3845
147.236 35.1512 135.616

39.3851 5.17816 3.29104
39.3851 5.17816 3.29104

32.0103 137.415 45.1592
147.99 42.5846 134.841
```

As you can see, for some orientation ranges the conversion is correct, but for others it is completely different. What am I doing wrong? I've looked at existing questions and attempted a few things, including trying out every possible rotation order listed here, conjugating the quaternion, and other random things like flipping pitch/yaw/roll. Nothing gave me the expected result. How can I convert Euler angles to quaternions and back, representing the original orientation, using GLM?
Some more examples of discrepancies:

```
original:  4 175 26    computed: 175 4 153    difference: 179 171 179
original:  6 173 32    computed: 173 6 147    difference: 179 167 179
original:  9 268 46    computed: 170 88 133   difference: 179 356 179
original: 27 73 266    computed: 27 73 93     difference: 0 0 359
original: 33 111 205   computed: 146 68 25    difference: 179 43 180
```

I tried to find a pattern to fix the final computed results, but it doesn't seem like there's one that's easy to identify. GIF video of the behavior; full video on YouTube. Visual representation of my intuition/current understanding: the picture shows a sphere, and I'm at the center. When I aim the gun towards the green half of the sphere, the orientation is correct. When I aim the gun towards the red half of the sphere, it is incorrect: it seems like every axis is inverted, but I am not 100% sure that is the case.
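One observation worth illustrating: Euler triples are not unique, and the dumps above look like the two equivalent triples of the same orientation (e.g. pitch mirrored, yaw and roll shifted by ~180). The quaternion round trip can still be correct even when the angle triple changes, so equality should be checked on the quaternions, not on the angles. A hand-rolled sketch, independent of GLM's conventions (standard ZYX yaw-pitch-roll formulas; this is my assumed convention, not necessarily the one OpenVR uses):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Euler angles in radians, ZYX order (yaw about z, then pitch about y, then roll about x).
Quat fromEuler(float roll, float pitch, float yaw) {
    float cr = std::cos(roll / 2),  sr = std::sin(roll / 2);
    float cp = std::cos(pitch / 2), sp = std::sin(pitch / 2);
    float cy = std::cos(yaw / 2),   sy = std::sin(yaw / 2);
    return { cr*cp*cy + sr*sp*sy, sr*cp*cy - cr*sp*sy,
             cr*sp*cy + sr*cp*sy, cr*cp*sy - sr*sp*cy };
}

// Canonical extraction: pitch is clamped into [-90, 90] degrees, so an input
// pitch outside that range comes back as a *different but equivalent* triple.
void toEuler(Quat q, float& roll, float& pitch, float& yaw) {
    roll  = std::atan2(2*(q.w*q.x + q.y*q.z), 1 - 2*(q.x*q.x + q.y*q.y));
    float s = 2*(q.w*q.y - q.z*q.x);
    pitch = std::asin(s < -1 ? -1.0f : (s > 1 ? 1.0f : s));
    yaw   = std::atan2(2*(q.w*q.z + q.x*q.y), 1 - 2*(q.y*q.y + q.z*q.z));
}
```

To verify a round trip, compare the quaternions (|dot| close to 1 means same orientation, since q and -q are equivalent) rather than the printed angles.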
recommended shader pipeline infrastructure in core opengl 3.3 I am writing a game project in Go, and I am using an OpenGL 3.3 core context to do my rendering. At the moment I have different types of renderers. Each renderer has its own pair of vertex and fragment shaders, a struct of uniform and attribute locations, and a struct with GL buffers, a vertex array object, and numVerts (int), which together contain all the data required to render one object (mesh). Last but not least, there is a constructor to create and initialize the attribute/uniform locations, a method to load a mesh into mesh data, and the render method itself. I am absolutely not happy with this design. Every time I want to create a new simple shader, I have to write all this code, and I haven't found a way to make the overhead of a new shader smaller. All I was able to do with the help of Go reflection is to automatically get the attribute/uniform locations based on the variable names in the location struct, and to automatically set the attribute pointers based on the layout of the vertex struct. Another thing that I don't like is that when I want to implement, for example, the functionality of the fixed-function-pipeline function glClipPlane, I need to add the uniform to each shader separately, set this uniform in each shader, and implement the discard of fragments in each fragment shader. Are there common practices that significantly reduce the code overhead of a new shader? Are there good shader pipelines that you can recommend I take a look at? Are there good practices for adding functionality to several shaders at once?
OpenGL object loads in reverse I am trying to load a model and it is loading in reverse. When I try to rotate it 180 degrees, it changes the lighting as well. I am not sure what I need to do to change the direction the model is facing when it is loaded. This is the object loader:

```cpp
if (!submarineShader->load("BasicView", "glslfiles/basicTransformations.vert", "glslfiles/basicTransformations.frag"))
{
    cout << "failed to load shader" << endl;
}

glUseProgram(submarineShader->handle()); // use the shader
glEnable(GL_TEXTURE_2D);

cout << " loading model " << endl;
// returns true if the model is loaded, puts the model in the model parameter
if (objLoader.loadModel("submarine/submarine_v2/submarine5.obj", model))
{
    cout << " model loaded " << endl;
    model.calcVertNormalsUsingOctree();
    model.initDrawElements();
    model.initVBO(submarineShader);
    model.deleteVertexFaceData();
}
else
{
    cout << " model failed to load " << endl;
}
```
glDeleteTextures release data, but keep the texture ID? In OpenGL, is it possible to release texture data but keep the same texture ID? I want to unload textures when they aren't needed and load them again later when they are. There are a lot of objects that store texture IDs (for example spritesheets: multiple objects use the same texture ID, each representing a part of the image). glDeleteTextures releases the ID, so another texture (image data) can acquire the same ID, and I'd have to walk all objects and change the texture IDs they use. Is there a way to release the texture data only but keep its ID, so that later I could just call glTexImage2D on the same ID and keep all references to this ID valid?
GLM Quaternion SLERP Interpolation I wish to interpolate between two quaternion values. As I still cannot get working results, may I kindly ask you to verify my function calls? The code below uses the GLM (OpenGL Mathematics) library, so this question is for those who know it. Firstly, I perform quaternion initialization from Euler angles:

```cpp
glm::quat myAxisQuat(pvAnimation->at(nFrameNo).vecRotation);
glm::quat myAxisNextQuat(pvAnimation->at(nFrameNo + 1).vecRotation);
```

Secondly, I interpolate between the two input quaternions. The variable fInterpolationTime contains a value in the range 0.0f..1.0f:

```cpp
myInterpolatedRotQuat = glm::mix(myAxisQuat, myAxisNextQuat, fInterpolationTime);
```

Thirdly, I convert my interpolated quaternion back to Euler angles:

```cpp
vecInterpolatedRot = glm::gtx::quaternion::eulerAngles(myInterpolatedRotQuat);
```

In the end, the values in vecInterpolatedRot do not represent the interpolated Euler angles. It is difficult to understand the quaternion values after conversion from Euler angles, so I would like to ask for your help, please. What can be wrong? I double- and triple-checked the input variables, I tried various approaches, and the only remaining issue at this moment might be with the third alpha parameter in glm::mix(). Update: to provide you with more information, the returned values in vecInterpolatedRot are extremely low. At the end of the interpolation, I would expect valid Euler angles. This is a random sequence of interpolated values, as the object moves according to a predefined animation path.
rotX 1.7451 rotY 1.7993 rotZ 0.854642 rotX 1.06451 rotY 1.18485 rotZ 0.694015 rotX 0.254822 rotY 0.437004 rotZ 0.942035 rotX 0.578816 rotY 0.335103 rotZ 0.716057 rotX 1.53934 rotY 1.07602 rotZ 1.0182 rotX 2.5582 rotY 1.87737 rotZ 0.759468 rotX 2.58259 rotY 2.47432 rotZ 1.06071 rotX 1.35049 rotY 3.11548 rotZ 0.81839 rotX 0.0106472 rotY 2.78129 rotZ 1.04353 rotX 1.46636 rotY 2.33968 rotZ 0.879188 rotX 0.0289322 rotY 2.31166 rotZ 0.91746 rotX 1.47901 rotY 2.37235 rotZ 0.938591 rotX 2.59482 rotY 2.89469 rotZ 1.15554 rotX 2.47283 rotY 2.76131 rotZ 0.992493 rotX 1.73065 rotY 1.53285 rotZ 1.27898 rotX 0.85806 rotY 0.176976 rotZ 1.03487 rotX 0.452009 rotY 1.14604 rotZ 0.927788 rotX 0.0604701 rotY 2.12479 rotZ 1.05684 rotX 0.107648 rotY 2.07785 rotZ 1.05071 rotX 0.154894 rotY 2.03083 rotZ 1.04569 rotX 0.809623 rotY 2.14456 rotZ 1.31262 rotX 1.15268 rotY 0.332553 rotZ 0.983604 rotX 2.16299 rotY 0.545458 rotZ 1.11758 rotX 2.95376 rotY 1.2008 rotZ 0.846527 rotX 2.94892 rotY 0.892473 rotZ 1.17334 rotX 1.89716 rotY 1.30162 rotZ 1.53247 rotX 0.804938 rotY 1.93659 rotZ 1.37281 rotX 0.653453 rotY 1.73722 rotZ 1.14364 rotX 2.24713 rotY 0.658935 rotZ 1.03684 rotX 2.97528 rotY 0.508203 rotZ 0.559124 rotX 2.49988 rotY 0.640482 rotZ0.0117903 rotX 1.57379 rotY 1.16303 rotZ0.288639 rotX 1.4928 rotY 1.17794 rotZ0.902059 rotX 0.667796 rotY 1.94995 rotZ1.49074 rotX 2.12971 rotY 1.85782 rotZ0.904871 rotX 2.36951 rotY 2.03682 rotZ0.189242 rotX 1.5574 rotY 2.92156 rotZ 0.450418 rotX 1.6256 rotY 2.29519 rotZ 1.46659 rotX 2.85414 rotY 2.11303 rotZ 0.42888 rotX 2.48503 rotY 2.96942 rotZ0.189887 rotX 1.55656 rotY 3.00852 rotZ0.675669
alpha test shader 'discard' operation not working GLES2 I wrote this shader to illustrate the alpha-test operation in GLES2 (Galaxy S6). I think it is not working at all, because I don't see any change with or without it. Is there anything I'm missing? Any syntax error? I know it's better not to use `if` in shaders, but for now this is the solution I need.

```glsl
precision highp float;
precision highp int;
precision lowp sampler2D;
precision lowp samplerCube;

// 0 - CMPF_ALWAYS_FAIL,   1 - CMPF_ALWAYS_PASS,
// 2 - CMPF_LESS,          3 - CMPF_LESS_EQUAL,
// 4 - CMPF_EQUAL,         5 - CMPF_NOT_EQUAL,
// 6 - CMPF_GREATER_EQUAL, 7 - CMPF_GREATER
bool Is_Alpha_Pass(int func, float alphaRef, float alphaValue)
{
    bool result = true;
    if (func == 0) { result = false; }
    if (func == 1) { result = true; }
    if (func == 2) { result = alphaValue < alphaRef; }
    if (func == 3) { result = alphaValue <= alphaRef; }
    if (func == 4) { result = alphaValue == alphaRef; }
    if (func == 5) { result = alphaValue != alphaRef; }
    if (func == 6) { result = alphaValue >= alphaRef; }
    if (func == 7) { result = alphaValue > alphaRef; }
    return result;
}

void FFP_Alpha_Test(in float func, in float alphaRef, in vec4 texel)
{
    if (!Is_Alpha_Pass(int(func), alphaRef, texel.a))
        discard;
}
```
How to randomly place entities that don't overlap? I am creating a randomly generated environment for a game I'm developing. I am using OpenGL and coding in Java. I am trying to randomly place trees in my world (to create a forest), but I don't want the models to overlap (which happens when two trees are placed too close to each other). Here's a picture of what I'm talking about. I can provide more code if necessary, but here are the essential snippets. I am storing my objects in an ArrayList with

```java
List<Entity> entities = new ArrayList<Entity>();
```

I am then adding to that list using

```java
Random random = new Random();
for (int i = 0; i < 500; i++) {
    entities.add(new Entity(tree,
        new Vector3f(random.nextFloat() * 800 - 400, 0, random.nextFloat() * 600),
        0, random.nextFloat() * 360, 0, 3, 3, 3));
}
```

where each Entity follows the syntax

```java
new Entity(modelName, positionVector(x, y, z), rotX, rotY, rotZ, scaleX, scaleY, scaleZ);
```

rotX is the rotation along the x-axis, scaleX is the scale in the x direction, etc. You can see that I am placing these trees randomly with random.nextFloat() for the x and z positions, bounding the range so the trees appear in the desired location. However, I would like to somehow control these positions, so that if it tries to place a tree too close to a previously placed tree, it will recalculate a new random position. I was thinking that each tree Entity could have another field, such as treeTrunkGirth, and if a tree is placed within the distance between another tree's location and its treeTrunkGirth, then it will recalculate a new position. Is there a way to accomplish this? I am happy to add more code snippets and details as necessary.
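The recalculate-on-conflict idea described above is plain rejection sampling: draw a random position, and re-roll any candidate closer than some minimum distance (the "treeTrunkGirth" spacing) to an already placed tree. A sketch in C++ rather than the Java above; all names are illustrative, and maxTries caps the re-rolls so a crowded forest cannot hang the generator:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct TreePos { float x, z; };

std::vector<TreePos> placeTrees(int count, float width, float depth,
                                float minDist, unsigned seed = 1,
                                int maxTries = 1000) {
    std::srand(seed);
    std::vector<TreePos> placed;
    for (int i = 0; i < count; ++i) {
        for (int tries = 0; tries < maxTries; ++tries) {
            TreePos c = { width * (std::rand() / (float)RAND_MAX),
                          depth * (std::rand() / (float)RAND_MAX) };
            bool farEnough = true;
            for (const TreePos& p : placed) {
                float dx = p.x - c.x, dz = p.z - c.z;
                if (dx * dx + dz * dz < minDist * minDist) { farEnough = false; break; }
            }
            if (farEnough) { placed.push_back(c); break; } // accept this candidate
        }
    }
    return placed;
}
```

For denser forests, a uniform grid or Poisson-disc sampling avoids the quadratic distance checks, but for a few hundred trees the brute-force loop is fine.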
gl_VertexID values when calling glDrawElements I am struggling a bit to understand the values that the gl_VertexID built-in contains when the vertex shader is executed. I have the standard modern rendering pipeline, in which, after setting up shaders, buffers, etc., I call the code below to render a mesh:

```cpp
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, auxMesh->indicesBuffer);
glDrawElements(GL_TRIANGLES, auxMesh->numIndices, GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
```

In the vertex shader I want to manually access the current vertex being rendered, retrieving it from the original buffer (I know this sounds like nonsense, but I need to do this for a reason). Therefore I pass the vertex buffer as the texture buffer u_tbo_tex and I access the actual coordinates as follows:

```glsl
vec3 vertex_1 = texelFetch(u_tbo_tex, gl_VertexID + 0).xyz;
vec3 vertex_2 = texelFetch(u_tbo_tex, gl_VertexID + 1).xyz;
vec3 vertex_3 = texelFetch(u_tbo_tex, gl_VertexID + 2).xyz;
```

And the values of these vertices make perfect sense. However, I can't really understand what values gl_VertexID gets in each iteration. Is gl_VertexID sequentially assigned over the range 0...auxMesh->numIndices, or are the values increased by 3 at every iteration because I am drawing triangles? I need to understand this because I am interested in calling texelFetch(u_tbo_tex, i) where i is an arbitrary triangle in my mesh (or whatever i needs to be to access an arbitrary triangle), but I can't find the right way to access it.
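For reference, with glDrawElements the vertex shader runs once per index, and invocation i sees gl_VertexID equal to the value fetched from the bound index buffer at position i — neither a sequential counter nor one stepping by 3 per triangle. A CPU sketch of that semantic (my understanding of the spec, with `indexBuffer` mirroring the GL_ELEMENT_ARRAY_BUFFER contents):

```cpp
#include <array>
#include <cassert>
#include <vector>

// gl_VertexID for shader invocation i of a glDrawElements(GL_TRIANGLES, ...) call.
inline unsigned vertexIdOfInvocation(const std::vector<unsigned>& indexBuffer,
                                     unsigned i) {
    return indexBuffer[i]; // the fetched index value itself
}

// Consequently, addressing "triangle t" needs the index buffer itself (e.g.
// bound as a second texture buffer), since gl_VertexID alone cannot give it:
inline std::array<unsigned, 3> triangleCorners(
        const std::vector<unsigned>& indexBuffer, unsigned t) {
    return {{ indexBuffer[3 * t], indexBuffer[3 * t + 1], indexBuffer[3 * t + 2] }};
}
```

This also explains why fetching gl_VertexID + 1 and + 2 only "makes sense" for meshes whose indices happen to be laid out sequentially.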
1
Does the GL memory model guarantee that late depth testing accesses depth buffer values written in the same draw call? Suppose I issue a draw command that draws 3 overlapping triangles (T1, T2, T3). The fragment shader assigns depth 3 to T1 fragments, 2 to T2, and 1 to T3 fragments. The depth buffer is cleared to 0 before the draw call, and the depth function is set to GL_LESS. Now, when fragments of triangle T2 are depth tested by the hardware, is it guaranteed that the hardware will read the latest written values, i.e. 3 (written by the T1 fragments)?
1
How can I draw a curve in modern OpenGL? OpenGL is capable of rendering points, lines, triangles and quads, but what if I want to draw a bezier curve? I read online that you should use something called GL_LINE_STRIP, but the solutions were either using legacy OpenGL or not explaining the process well. The question is: how can I render bezier curves in modern OpenGL? I'm using C++.
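My current guess is that I should evaluate the curve on the CPU into a list of points and draw those as a line strip; a sketch of the evaluation side (bezier/tessellate are my own names, and I assume a cubic curve with four control points):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Evaluate a cubic Bezier curve at parameter t in [0,1] (Bernstein form).
Vec2 bezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t) {
    float u  = 1.0f - t;
    float b0 = u * u * u;
    float b1 = 3 * u * u * t;
    float b2 = 3 * u * t * t;
    float b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}

// Flatten the curve into 'segments' straight pieces. The resulting points
// would be uploaded to a VBO once and drawn with
// glDrawArrays(GL_LINE_STRIP, 0, segments + 1).
std::vector<Vec2> tessellate(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, int segments) {
    std::vector<Vec2> pts;
    for (int i = 0; i <= segments; ++i)
        pts.push_back(bezier(p0, p1, p2, p3, i / (float)segments));
    return pts;
}
```

Is this CPU flattening the idiomatic approach, or should the evaluation live in a tessellation/geometry shader instead?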
1
Collision between player and heightmap. I'm programming a small project, an OpenGL heightmap. It is built of triangles whose points are fitted to the right y position (read out of an image). The player is represented by a cuboid (p1, p2, ... p8). Now I want to implement collision detection between the player and the heightmap. I thought about splitting the whole world into cubes, and those cubes again into smaller cubes, and then distributing the triangles of the heightmap among these cubes (then check the big cube, then the smaller cube which contains the player, and then check all of its triangles for collision). But which collision detection method would be best here, and how could I implement it? (By the way, I'm using LWJGL / Java.)
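Since the heightmap is a regular grid, I wonder whether the octree is even needed: the cell under the player is known directly from its x/z, so only the two triangles of that cell matter. A sketch of the height lookup I have in mind (C++ here for illustration; terrainHeight is my own name, and I assume each grid quad is split along one diagonal, with no bounds checking):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Height of the terrain under (x, z), for a regular grid of quads of size
// 'cell', each split into two triangles along the quad diagonal. 'heights'
// is row-major with 'width' samples per row. Caller keeps (x, z) on the map.
float terrainHeight(const std::vector<float>& heights, int width,
                    float cell, float x, float z) {
    int gx = (int)std::floor(x / cell);
    int gz = (int)std::floor(z / cell);
    float fx = x / cell - gx;          // position inside the cell, 0..1
    float fz = z / cell - gz;
    float h00 = heights[gz * width + gx];
    float h10 = heights[gz * width + gx + 1];
    float h01 = heights[(gz + 1) * width + gx];
    float h11 = heights[(gz + 1) * width + gx + 1];
    if (fx + fz <= 1.0f)               // triangle touching (0,0) of the quad
        return h00 + fx * (h10 - h00) + fz * (h01 - h00);
    else                               // triangle touching (1,1) of the quad
        return h11 + (1 - fx) * (h01 - h11) + (1 - fz) * (h10 - h11);
}
```

Each frame the player's y would then just be clamped to terrainHeight(...) plus half the cuboid height, which avoids any triangle-vs-box tests entirely. Does that hold up, or is the full spatial subdivision still worth it for walls and overhangs?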
1
OpenGL: How to draw each nth triangle using glVertexAttribPointer? I have a vertex buffer. There are situations when I don't want to render the whole mesh but, let's say, each nth triangle of the mesh. I am using VAOs and VBOs. My data in the buffer is laid out like this (first triangle's indices, second triangle's indices, and so on): 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ... What I want to do is render the first triangle, skip the second, render the third, and so on. Can it be done by adjusting glVertexAttribPointer(), are there other GL methods that can manipulate the data like that, or do I have to create a new buffer with the skipped triangles?
1
Camera scrolling hiccups. I'm working on a 2D side scroller, and I've got the technical aspects working: an OpenGL renderer, camera movement using acceleration. However, while scrolling I'm constantly seeing tiny random hiccups: split-second pauses, just enough to make the supposedly smooth scrolling feel choppy. Turning on VSync actually accentuates the hiccups, yet when I display frame render times they're always constant (well, constantly flipping between 0.016 and 0.017 seconds). So far it's just a render loop, with no processing going on in the background. When I look at other 2D side-scrolling games I never see this stutter, so I don't know if there's something I'm missing that other games know to do. I recorded a video of it from my phone, on a repeating row of tiles, to show what I mean. It's poor quality, but recorded in slow motion to emphasize the hiccups. Any tips on how to get rid of this would be great.
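One suspect I've considered is that my camera update uses the raw per-frame delta, so any timer jitter moves the camera by a jittery amount even when frames are presented steadily. If the usual cure is the fixed-timestep loop with render interpolation, I imagine the core of it looks roughly like this (FixedStepper is my own name, and this is only a sketch of the accumulator idea, not a full game loop):

```cpp
#include <cassert>

// Fixed-timestep accumulator: the simulation always advances in exact 'dt'
// steps; leftover frame time is carried over and exposed as alpha() so the
// renderer can interpolate between the previous and current simulation
// state, which hides timer jitter.
struct FixedStepper {
    double accumulator = 0.0;
    const double dt;
    explicit FixedStepper(double step) : dt(step) {}

    // Feed in the measured frame time; returns how many simulation steps
    // to run this frame.
    int advance(double frameTime) {
        accumulator += frameTime;
        int steps = 0;
        while (accumulator >= dt) { accumulator -= dt; ++steps; }
        return steps;
    }

    // Blend factor for rendering: drawPos = prev + alpha() * (curr - prev).
    double alpha() const { return accumulator / dt; }
};
```

Would decoupling the simulation from the presented frames like this plausibly remove the perceived hiccups, or is the cause more likely elsewhere (driver buffering, timer resolution)?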
1
LibGDX strange camera behaviour. I'm currently doing the final assessment for my Game Development study, and I'm facing a really weird (though maybe explainable) result from my camera. I set up my scene with a lot of tiles and a character. When I set up the camera's position and direction, I experience something strange: the camera is tilted somewhat, and I can't explain why that is happening. This is looking pretty good, except that when I move the camera the tilting becomes worse. Is this behaviour I should expect from the camera? I made sure the camera is never rotated; is there any possibility the models are slightly rotated? I use LibGDX with FBX files exported from 3ds Max and converted to LibGDX's "G3DB" format. Are there any known issues with that?
1
Individual Skeletal Animation with OpenGL. I have been able to implement a pipeline in OpenGL which reads Collada files and can animate them by interpolating the keyframes. However, as somebody brand new to graphics, I don't quite know how one would go about doing individual joint animation. For example, if I have a character and I would like to move just its arm joint, none of that animation information is stored in the Collada file. How would I go about making a system that can move each joint independently?
1
LWJGL Mixing 2D and 3D. I'm trying to mix 2D and 3D using LWJGL. I have written two little methods that allow me to easily switch between 2D and 3D:

protected static void make2D() {
    glEnable(GL_BLEND);
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    glOrtho(0.0f, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 0.0f, 1.0f);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glLoadIdentity();
}

protected static void make3D() {
    glDisable(GL_BLEND);
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity(); // Reset The Projection Matrix
    GLU.gluPerspective(45.0f, ((float) SCREEN_WIDTH / (float) SCREEN_HEIGHT), 0.1f, 100.0f); // Calculate The Aspect Ratio Of The Window
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    glLoadIdentity();
}

Then in my rendering code I do something like: make2D(); /* draw 2D stuff here */ make3D(); /* draw 3D stuff here */. What I'm trying to do is to draw a 3D shape (in my case a quad) and a 2D image. I found this example and took the code from TextureLoader, Texture and Sprite to load and render a 2D image. Here is how I load the image:

TextureLoader loader = new TextureLoader();
Sprite s = new Sprite(loader, "player.png");

and how I render it:

make2D();
s.draw(0, 0);

It works great. Here is how I render my quad:

glTranslatef(0.0f, 0.0f, -30.0f);
glScalef(12.0f, 9.0f, 1.0f);
DrawUtils.drawQuad();

Once again, no problem, the quad is properly rendered. DrawUtils is a simple class I wrote containing utility methods to draw primitive shapes. Now my problem is when I want to mix both of the above: loading/rendering the 2D image and rendering the quad. When I try to load my 2D image with the following:

s = new Sprite(loader, "player.png");

my quad is not rendered anymore (I'm not even trying to render the 2D image at this point). The mere fact of creating the texture causes the issue. After looking a bit at the code of Sprite and TextureLoader I found that the problem appears after the call to glTexImage2D. In the TextureLoader class:

glTexImage2D(target, 0, dstPixelFormat, get2Fold(bufferedImage.getWidth()), get2Fold(bufferedImage.getHeight()), 0, srcPixelFormat, GL_UNSIGNED_BYTE, textureBuffer);

Commenting out this line makes the problem disappear. My question is then: why? Is there anything special to do after calling this function in order to do 3D? Does this function alter the render state or the projection matrix?
1
OpenGL object movement is not smooth and vibrating. In my Android NDK OpenGL C++ project, I have a render method which executes every frame on the draw event. This is the algorithm:

void Engine::render() {
    deltaTime = GetCurrentTime() - lastFrame;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    update();
    renderDepthMaps();
    renderMeshes();
    if (skybox != nullptr)
        shaders.drawSkyBox(skybox, camera, width, height);
    lastFrame = GetCurrentTime();
}

First I calculate the delta time between the last frame and the current frame, then I update all transformation and view matrices from input, then render the scene, so the game loop depends on the Android draw frames. I have an object which moves over a terrain and a third-person camera which moves with it and rotates around it. After the object moves for some distance, it begins to flicker forward and backward. The update function for the object is:

double& delta = engine.getDeltaTime();
GLfloat velocity = delta * movementSpeed;
glm::vec3 t(glm::vec3(0, 0, 1) * velocity * 3.0f);
matrix = glm::translate(matrix, t);
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f = glm::normalize(f) * velocity * 3.0f;
camera->translate(f);

If it is an interpolation issue, I don't know how to apply interpolation when I am translating the matrix along its forward vector.
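In case it clarifies what kind of smoothing I am asking about: something like a frame-rate-independent damped follow for the camera, instead of snapping it by velocity * delta every frame. A sketch in plain C++ (followDamped is an illustrative name, and I am genuinely unsure whether this addresses the flicker or whether the cause is elsewhere, e.g. float precision far from the origin):

```cpp
#include <cassert>
#include <cmath>

// Frame-rate-independent damped follow: move the camera a fraction of the
// remaining distance to the target each frame. The exp() correction makes
// the behaviour identical at 30 FPS and 60 FPS, unlike a fixed lerp factor.
float followDamped(float camera, float target, float stiffness, float dt) {
    float t = 1.0f - std::exp(-stiffness * dt);
    return camera + (target - camera) * t;
}
```

Applied per axis to the camera position (with the object's position as target), this absorbs small per-frame jitter in delta instead of reproducing it one-to-one in the camera.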
1
Plane is not affected by the light source. I'm going through a problem that I don't have a reasonable explanation for. Basically, I have a 3D scene containing a light source and a simple plane and cube (i.e. .obj models) modeled in and exported from Blender. The result with the cube is as follows: the light source is located at (0, 2, 0), represented as a white cube, and the light reflected off the cube looks as expected. However, when I import the plane model, the result is different: the bottom of the plane should be dark when we look at the plane from below, but this is not the case. What things can cause this kind of behavior? Note: the code is identical for both cases. I use OpenGL with version 3.3 shaders on Windows 10, and GLFW for windowing. The plane model is

# Blender v2.78 (sub 0) OBJ File: ''
# www.blender.org
mtllib plane.mtl
o Plane
v 1.000000 0.000000 1.000000
v 1.000000 0.000000 1.000000
v 1.000000 0.000000 1.000000
v 1.000000 0.000000 1.000000
vn 0.0000 1.0000 0.0000
usemtl None
s off
f 1//1 2//1 4//1 3//1

whereas the cube model is

# Blender v2.78 (sub 0) OBJ File: ''
# www.blender.org
mtllib cube.mtl
o Cube
v 0.500000 0.500000 0.500000
v 0.500000 0.500000 0.500000
v 0.500000 0.500000 0.500000
v 0.500000 0.500000 0.500000
v 0.500000 0.500000 0.499999
v 0.499999 0.500000 0.500000
v 0.500000 0.500000 0.500000
v 0.500000 0.500000 0.500000
vn 0.0000 1.0000 0.0000
vn 0.0000 1.0000 0.0000
vn 1.0000 0.0000 0.0000
vn 0.0000 0.0000 1.0000
vn 1.0000 0.0000 0.0000
vn 0.0000 0.0000 1.0000
usemtl Material
s off
f 1//1 2//1 3//1 4//1
f 5//2 8//2 7//2 6//2
f 1//3 5//3 6//3 2//3
f 2//4 6//4 7//4 3//4
f 3//5 7//5 8//5 4//5
f 5//6 1//6 4//6 8//6

I use the assimp library to import models in my application.
The fragment shader is

#version 330 core
out vec4 FragColor;

uniform vec3 objectColor;
uniform vec3 lightColor;
uniform vec3 lightPos;

in vec3 Normal;
in vec3 FragPos;

void main()
{
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;

    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    vec3 result = (ambient + diffuse) * objectColor;
    FragColor = vec4(result, 1.0);
}
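One candidate explanation I am weighing: the plane has a single normal pointing up, so dot(norm, lightDir) is positive no matter which side the viewer is on, and the underside gets the same diffuse term as the top; the usual fix would be flipping the normal for back faces (what gl_FrontFacing enables in GLSL). A CPU-side sketch of the two diffuse behaviours, to check my reasoning (names are mine; this is the lighting math only, not my actual shader):

```cpp
#include <cassert>

struct V3 { float x, y, z; };
static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse term for a two-sided surface: a plane stores one normal (up), so
// when viewed from below the normal must be flipped, otherwise the underside
// is lit exactly like the top -- which matches the symptom I am seeing.
float lambertTwoSided(V3 n, V3 lightDir, V3 viewDir) {
    if (dot3(n, viewDir) < 0.0f)       // looking at the back side
        n = { -n.x, -n.y, -n.z };
    float d = dot3(n, lightDir);
    return d > 0.0f ? d : 0.0f;
}

// What my current shader effectively does: no flip.
float lambertOneSided(V3 n, V3 lightDir) {
    float d = dot3(n, lightDir);
    return d > 0.0f ? d : 0.0f;
}
```

If this is right, the one-sided version returns full brightness from below while the two-sided one goes dark, which would explain the screenshot.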
1
How are CSS and WebGL coordinates related? I'd like to build a simple framework which rendering combines web page DOM elements with WebGL, such that they're manipulable in the same coordinate space. How does the plain CSS coordinate system relate to the one used by WebGL? How can I make sure the two line up (e.g. have a div and a WebGL quad transformed the same way)? Can I expect CSS3D transform coordinates to correspond to WebGL?
1
Testing spheres without extracting planes. I am currently a bit stuck. In OpenGL I am attempting to do view frustum culling; so far I have managed to do it by testing the projected centre of each mesh (where center is the world position of the mesh):

bool shouldRenderMesh(mat4 VP, vec3 center, float radius) {
    vec4 transformedCentre = VP * vec4(center, 1);
    vec3 PCN = vec3(
        transformedCentre.x / transformedCentre.z,
        transformedCentre.y / transformedCentre.z,
        transformedCentre.z / transformedCentre.z
    );
    if (abs(PCN.x) > 1 || abs(PCN.y) > 1 || abs(PCN.z) > 1)
        return false;
    return true;
}

The problem is that while the approach works, items disappear too soon, as the test is based on the centre point of the mesh rather than its bounding volume. Is it possible to test a sphere or box without extracting planes? Thanks
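The closest thing I have found is reading the plane equations straight off the view-projection matrix rows on the fly (the Gribb-Hartmann construction) rather than precomputing and storing planes, then comparing the signed distance of the centre against the radius. A sketch of that, with a minimal column-major Mat4 standing in for glm (the struct and names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4, m[col][row], matching GLSL/glm conventions.
struct Mat4 { float m[4][4]; };

struct Plane { float a, b, c, d; };

// Row r of the matrix, read across the columns.
static Plane row(const Mat4& vp, int r) {
    return { vp.m[0][r], vp.m[1][r], vp.m[2][r], vp.m[3][r] };
}

// Sphere vs. frustum without stored planes: the six plane equations are
// (row3 + row_i) and (row3 - row_i) for i = 0,1,2; each is normalized so the
// signed distance is comparable to the world-space radius.
bool sphereVisible(const Mat4& vp, float cx, float cy, float cz, float radius) {
    Plane r3 = row(vp, 3);
    for (int i = 0; i < 3; ++i) {
        Plane ri = row(vp, i);
        for (int sign = -1; sign <= 1; sign += 2) {
            float a = r3.a + sign * ri.a, b = r3.b + sign * ri.b;
            float c = r3.c + sign * ri.c, d = r3.d + sign * ri.d;
            float len = std::sqrt(a * a + b * b + c * c);
            if (len > 0.0f && (a * cx + b * cy + c * cz + d) / len < -radius)
                return false;   // sphere entirely outside this plane
        }
    }
    return true;
}
```

This also fixes the centre-only test as a side effect, since a sphere overlapping the frustum edge stays visible instead of popping out early.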
1
Why doesn't this function work? (GLSL / OpenGL / C++) I'm trying to move a transformation matrix onto the GPU, and I found this code to help me on the way.

vertex.shader:

#version 410 core
layout (location = 0) in vec3 vertex_position;

mat4 rotationMatrix(vec3 axis, float angle)
{
    axis = normalize(axis);
    float s = sin(angle);
    float c = cos(angle);
    float oc = 1.0 - c;
    return mat4(oc * axis.x * axis.x + c,          oc * axis.x * axis.y - axis.z * s, oc * axis.z * axis.x + axis.y * s, 0.0,
                oc * axis.x * axis.y + axis.z * s, oc * axis.y * axis.y + c,          oc * axis.y * axis.z - axis.x * s, 0.0,
                oc * axis.z * axis.x - axis.y * s, oc * axis.y * axis.z + axis.x * s, oc * axis.z * axis.z + c,          0.0,
                0.0,                               0.0,                               0.0,                               1.0);
}

void main()
{
    gl_Position = rotationMatrix(vertex_position, 45.0);
}

However, this gives "error C1035: assignment of incompatible types". Shouldn't this work? What am I missing?
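To rule out the matrix math itself, I ported the function to C++ and checked that it really produces a proper rotation; this sketch is that port (M4/V3 and the helper names are mine, column-major like GLSL). If the math is fine, I assume my remaining problem is in main(), where I assign a mat4 to gl_Position (a vec4) instead of applying it to a vector, e.g. gl_Position = rotationMatrix(axis, angle) * vec4(vertex_position, 1.0):

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
struct M4 { float m[16]; };   // column-major, like a GLSL mat4

// CPU port of the GLSL rotationMatrix() (axis-angle, Rodrigues form).
M4 rotationMatrix(V3 axis, float angle) {
    float len = std::sqrt(axis.x * axis.x + axis.y * axis.y + axis.z * axis.z);
    V3 a{ axis.x / len, axis.y / len, axis.z / len };
    float s = std::sin(angle), c = std::cos(angle), oc = 1.0f - c;
    return { { oc*a.x*a.x + c,       oc*a.x*a.y - a.z*s,   oc*a.z*a.x + a.y*s,   0,
               oc*a.x*a.y + a.z*s,   oc*a.y*a.y + c,       oc*a.y*a.z - a.x*s,   0,
               oc*a.z*a.x - a.y*s,   oc*a.y*a.z + a.x*s,   oc*a.z*a.z + c,       0,
               0, 0, 0, 1 } };
}

// Apply the upper 3x3 to a direction/position (w assumed 1, no translation).
V3 transform(const M4& m, V3 v) {
    return { m.m[0]*v.x + m.m[4]*v.y + m.m[8]*v.z,
             m.m[1]*v.x + m.m[5]*v.y + m.m[9]*v.z,
             m.m[2]*v.x + m.m[6]*v.y + m.m[10]*v.z };
}
```

The checks below only assert rotation properties (length preserved, inverse angle undoes it, axis unchanged), so they hold regardless of the handedness convention of the snippet.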
1
Display Lists in OpenGL. I heard that there was a faster method of displaying vertices than re-issuing the GL_TRIANGLES calls each time the scene is drawn. I thought I read somewhere that this method is obsolete. Why would the Khronos Group remove something that was faster? And even if it is obsolete, how would I go about implementing it?
1
WebGL immediate mode I know that WebGL is based on OpenGL ES 2.0 and that glBegin and glEnd have been removed and replaced with vertex buffer objects. I understand that VBOs are faster and use less code but is there a library or add on for JavaScript WebGL that re implements these functions? After more research, I have discovered that this functionality is called immediate mode.
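From what I can tell, any such compatibility layer boils down to the same core: calls between begin() and end() just accumulate vertices in a CPU-side array, and end() uploads the array and issues one draw call. A sketch of that core (class and method names are mine, the GL upload is left as a comment since it needs a context):

```cpp
#include <cassert>
#include <vector>

// The heart of a glBegin/glEnd emulation layer: buffer vertices on the CPU,
// flush them as a single buffered draw on end(). A WebGL version would do
// bufferData(...) + drawArrays(mode, 0, count) inside end().
class ImmediateMode {
public:
    void begin(int primitive) { mode = primitive; verts.clear(); }

    void vertex3f(float x, float y, float z) {
        verts.push_back(x);
        verts.push_back(y);
        verts.push_back(z);
    }

    // Returns the number of vertices that would be drawn.
    int end() { return (int)verts.size() / 3; }

private:
    int mode = 0;
    std::vector<float> verts;
};
```

So even if a ready-made library exists, it is mostly this buffering pattern plus a default shader; the per-call overhead is why the style was dropped from the core API.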
1
How to use assimp textures with OpenGL: getting a better understanding. I recently learned how to load a model into OpenGL using assimp, but I am having trouble figuring out how to do lighting calculations on this model. Previously, with my own created objects, I would set the normals manually and pass them to the shader as attributes; then I would use one diffuse and one specular texture, passed as uniforms, for lighting. I'm trying to do the same with a model, but I can't seem to get it to behave correctly. When I use my own objects everything works normally, but when I try to draw the model some weird lighting occurs. Here's how the model should look, but mine looks like the second screenshot. What confuses me in particular is that assimp loads both vertex normals and normal textures; I'm not sure what the difference between them is, or if there is more about model loading that I've yet to learn, which is entirely possible since I don't know much about heightmaps and other things yet. Anyway, here's what I'm doing for the calculations:

void main() {
    vec3 normal = normalize(Normal);
    vec3 viewDir = normalize(viewPos - fragPos);
    vec3 result;
    for (int j = 0; j < sizeModels; j++)
        result += CalcDirLight(DirLights, normal, viewDir, models[j]);
    FragColor = vec4(result, 1.0);
}

vec3 CalcDirLight(DirLight light, vec3 normal, vec3 viewDir, Material textures) {
    vec3 lightDir = normalize(-light.direction);
    // diffuse shading
    float diff = max(dot(normal, lightDir), 0.0);
    // specular shading
    vec3 reflectDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), textures.shininess);
    // combine results
    vec3 ambient = light.ambient * vec3(texture(textures.diffuse, Tex));
    vec3 diffuse = light.diffuse * diff * vec3(texture(textures.diffuse, Tex));
    vec3 specular = light.specular * spec * vec3(texture(textures.specular, Tex));
    return (ambient + diffuse + specular);
}

Notice that I'm providing my own diffuse values to the DirLight struct; the Material struct is what contains the diffuse and specular textures, as well as a shininess float. For reference, I am following the guides at https://learnopengl.com. Any help would be appreciated!
1
Is there a way to bypass DirectX Effect files? I am now trying to abstract my rendering pipeline, and I've been able to abstract OpenGL fairly easily. But now I have run into a rather ugly problem with DirectX: most of my knowledge of DX9, 10, and 11 makes use of effect files, something that OpenGL distinctly lacks. I know that this file format does not have to be used; however, from the way all of DirectX's documentation is written, it seems like it is required and there is no way around it. Can anyone give me a lead here?
1
How do I make floors of varying height in OpenGL? I am currently working on a small game (engine) in OpenGL and I want to build a little world editor. Nothing too fancy: just some functions to add objects to a scene and manipulate them (rotation, translation, etc.). Up to now I have everything set up for loading models into the scene and setting up simple directional lights and point lights. But one thing has been bugging me since the start: what is an average floor/ground made of? I would guess that, for example, a grass floor is just a lot of vertices/faces with a grass texture wrapped onto them. But how are those textures connected, and how can you change the elevation of certain vertices to create hills? I am essentially asking what the recommended implementation of floors is in 3D environments (preferably OpenGL 3.3), since I can't find anything on the topic on Google.
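For concreteness, here is my current guess at how such a ground is built, which I would like confirmed or corrected: a regular grid of vertices whose y comes from a height function (flat ground returns 0, hills come from noise or a heightmap image), triangulated into two triangles per cell; one tiling texture is "connected" across it simply by letting UVs run past 1.0 with GL_REPEAT. (A hedged C++ sketch; buildGround and the layout are my own invention.)

```cpp
#include <cassert>
#include <vector>

struct Mesh { std::vector<float> positions; std::vector<unsigned> indices; };

// Build an (n x n)-cell ground patch of world size 'size'. 'height' supplies
// the y for each vertex: return 0 for a flat floor, or sample noise or a
// heightmap image to create hills. Each cell becomes two triangles.
Mesh buildGround(int n, float size, float (*height)(float, float)) {
    Mesh m;
    for (int z = 0; z <= n; ++z)
        for (int x = 0; x <= n; ++x) {
            float wx = x * size / n, wz = z * size / n;
            m.positions.push_back(wx);
            m.positions.push_back(height(wx, wz));   // raise this for hills
            m.positions.push_back(wz);
        }
    for (int z = 0; z < n; ++z)
        for (int x = 0; x < n; ++x) {
            unsigned i0 = z * (n + 1) + x, i1 = i0 + 1;
            unsigned i2 = i0 + (n + 1), i3 = i2 + 1;
            unsigned quad[6] = { i0, i2, i1, i1, i2, i3 };
            m.indices.insert(m.indices.end(), quad, quad + 6);
        }
    return m;
}
```

Is this grid-of-quads approach what engines actually do for editable terrain, and is per-vertex editing (dragging heights in the editor) the standard way to sculpt hills?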