OpenGL water reflection seems to follow camera yaw and pitch

I'm attempting to add reflective water to my procedural terrain. I've got it to a point where it seems like it's reflecting; however, when I move the camera left/right/up/down, the reflections move with it. I believe the problem has something to do with the way I convert from world space to clip space for the projective texture mapping. Here is a gif of what is happening: http://i.imgur.com/PDta5Qu.gifv

Vertex shader:

    #version 400
    in vec4 vPosition;
    out vec4 clipSpace;
    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;
    void main() {
        clipSpace = projection * view * model * vec4(vPosition.x, 0.0, vPosition.z, 1.0);
        gl_Position = clipSpace;
    }

Fragment shader:

    #version 400
    in vec4 clipSpace;
    out vec4 frag_colour;
    uniform sampler2D reflectionTexture;
    void main() {
        vec2 ndc = (clipSpace.xy / clipSpace.z) / 2.0 + 0.5;
        vec2 reflectTexCoords = vec2(ndc.x, -ndc.y);
        vec4 reflectColour = texture(reflectionTexture, reflectTexCoords);
        frag_colour = reflectColour;
    }

I'm using this code to move the camera under the water's surface to get the reflection:

    float distance = 2 * (m_camera->GetPosition().y - m_water->GetHeight());
    m_camera->m_cameraPosition.y -= distance;
    m_camera->m_cameraPitch = -m_camera->m_cameraPitch;

If this is insufficient code to diagnose the problem, I'll post more. I tried to keep it to what I thought could be the problem.
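For comparison, projective texture mapping is conventionally done with a divide by the homogeneous w component rather than z. A minimal CPU-side sketch of that mapping in GLM (the function and parameter names here are illustrative, not taken from the question's code):

    #include <glm/glm.hpp>

    // Map a world-space point into [0, 1] texture coordinates of a
    // projected texture, using the standard perspective divide by w.
    glm::vec2 projectiveTexCoord(const glm::mat4& projViewModel, const glm::vec4& pos) {
        glm::vec4 clip = projViewModel * pos;       // clip space
        glm::vec2 ndc  = glm::vec2(clip) / clip.w;  // normalized device coords, [-1, 1]
        return ndc * 0.5f + 0.5f;                   // remap to [0, 1]
    }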
Data organization in a custom 3D mesh file format

After careful consideration of using middleware, I have decided on creating my own 3D file format to export meshes from my 3D authoring application (Softimage) into my game. I will need to export the following:

- Vertices
- Indices
- Normals
- UVs
- Material information
- Animation information (no clue how to import it)
- Collision mesh
- Game properties defined within the 3D authoring tool (object intelligence, aggressivity, etc.)
- ...other assets...

Can I kindly ask for a hint on how to construct my custom file format, and how to organize the data within my files, please? Does anybody have good advice on exporting animation information, especially when the mesh changes its geometry? I would be thankful for advice that could point me in the right direction. It would be nice to save some time instead of wasting it on incorrect approaches. I use Softimage as my 3D authoring tool. The target platform is OpenGL ES 2.0 running on mobile devices (iOS, Android). The programming language is C++.
Making a weapon stay with a first-person camera

I was looking all over the internet for any information on how to get a gun to stay with the camera as done in FPS games. I am using OpenGL and GLSL to carry this out. I knew a way of how to do this in earlier OpenGL versions, but I could never figure it out in the newer versions. The type of camera that I am trying to get is something similar to this: (screenshot). With the view matrix and everything else, I should be able to figure out the movement of the hand and the shooting. Here is some of the code that I have so far:

    // Copyright (c) 2019 Ryan Hall. All Rights Reserved.
    // I do not permit any of this code to be used elsewhere by anyone else
    // except me for commercial purposes.
    // The gun has two defining variables: one that actually creates the gun
    // and another that moves it around in object space.
    weaponOfChoice = glm::translate(weaponOfChoice, glm::vec3(camera.GetPosition().x - 0.15, camera.GetPosition().y - 0.15, camera.GetPosition().z - 0.3));
    weaponOfChoice = glm::rotate(gun, angle, glm::vec3(0.0f, 1.0f, 0.0f));
    weaponOfChoice = glm::scale(gun, glm::vec3(0.005f, 0.005f, 0.005f));
    glUniformMatrix4fv(glGetUniformLocation(shader.Program, "gun"), 1, GL_FALSE, glm::value_ptr(weaponOfChoice));

I have spent quite some time working on how to fix the code so that it will render the gun correctly, and have not been able to find any great sources online that will help solve my problem. How could I do this? Do I need an identity matrix as used by me in older versions of OpenGL? If so, how do I create a function mimicking glLoadIdentity() that will help me with this problem? Thanks, rjhwinner03
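A common approach for view-models (not necessarily what this engine expects) is to skip the camera's view matrix entirely for the gun: since it should stay fixed relative to the camera, it can be drawn with an identity view matrix and only a small model transform offsetting it from the eye. A hedged GLM sketch, with all names and offsets hypothetical:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Build the MVP for a first-person view-model. Because the view matrix
    // is identity, the gun stays glued to the camera no matter how the
    // camera moves or rotates.
    glm::mat4 viewModelMVP(const glm::mat4& projection) {
        glm::mat4 view(1.0f); // identity: the gun lives directly in eye space
        glm::mat4 model(1.0f);
        model = glm::translate(model, glm::vec3(0.15f, -0.15f, -0.3f)); // offset from the eye
        model = glm::scale(model, glm::vec3(0.005f));
        return projection * view * model;
    }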
What am I supposed to pass shaders to?

I don't think I fully understand how rendering shaders work. I get that you give a shader to an object/sprite when drawing it, and it applies the shader to it. But I don't think I'm doing it right. For example, I wrote a shader for when the player is underwater. It makes the player sway side to side, but the problem is I'm not sure how to tell when I can use it. Do I loop through all the water blocks, and if the player intersects, run the shader on the player? This brings up the problem where all of the player would wave instead of just the part of him below the water. I'm not sure what else I could do, and I apologize if this sounds confusing; I'm very confused myself and am not sure how to explain it. Do I run the shader on the water and somehow get it to run on all the pixels intersecting the water? What do I do? Also, just to make sure: I am using 2D.
glScalef game math issue

I can't get my head wrapped around this issue. The issue is that my latest code is making the camera zoom in and out really quickly. My approach is built on http://www.zdnet.com/blog/burnette/how-to-use-multi-touch-in-android-2-part-6-implementing-the-pinch-zoom-gesture/1847?tag=content;siu-container

The OpenGL scale call will be getting the variable scale:

    gl.glScalef(scale, scale, 1);

The distance is obtained between two fingers: the old distance (initial touch points) and the new distance (dragging touch points). The zooming in and out works well. However, it would reset glScalef each time the user starts using pinch zoom:

    scale = newDistance / oldDistance;

I tried calculating by an additive ratio. The oldtscale handles the previous distance; if it is the same, then it doesn't need to add anything to scale. The zooming is really quick: I moved the fingers closer by a mere 1 cm to 5 cm, and the zoom goes down or up fast. I think the additive ratio is a bad solution, or at least an incomplete one. I'm trying to figure out what's wrong with it.

    // additive ratio
    tscale = (newDistance / oldDistance) - 1;
    if (oldtscale == tscale) {
        oldtscale = tscale;
        tscale = 0;
    } else {
        oldtscale = tscale;
    }
    // adding up the additive ratio and scale
    tscale = scale + tscale;
    // checking tscale for limiting the maximum/minimum scale
    if (tscale > 2) {
        tscale = 2;
    } else if (tscale < 1) {
        tscale = 1;
    }
    // supply scale
    scale = tscale;
Normals showing unexpected results

I am making a game in C++ with OpenGL and GLM and was working on my terrain when this happened: (screenshot). As it appears, it is rendering just the way I planned it, but my question is: how do I make it appear so that there are not as many shadows? I have no idea how I can fix this. Here is the normal code:

    glm::vec3 calculateNormal(int x, int z) {
        float heightL = getHeight(x - 1, z);
        float heightR = getHeight(x + 1, z);
        float heightD = getHeight(x, z - 1);
        float heightU = getHeight(x, z + 1);
        glm::vec3 normal(heightL - heightR, 2.0f, heightD - heightU);
        glm::vec3 result = glm::normalize(normal);
        return result;
    }

and here is the terrain normal generation code:

    for (int i = 0; i < vertex_count; i++) {
        for (int j = 0; j < vertex_count; j++) {
            float height = getHeight(j, i);
            vert[vertexPointer * 3] = (float)j / ((float)vertex_count - 1) * size;
            vert[vertexPointer * 3 + 1] = height;
            vert[vertexPointer * 3 + 2] = (float)i / ((float)vertex_count - 1) * size;
            glm::vec3 normal = calculateNormal(j, i);
            norm[vertexPointer * 3] = normal.x;
            norm[vertexPointer * 3 + 1] = normal.y;
            norm[vertexPointer * 3 + 2] = normal.z;
            ...

Here is the terrain generation code:

    float getNoise(int x, int z) {
        int seed = x * z * (std::rand() % 15);
        return seed * 2.0f - 1.0f;
    }

    float generateheight(int x, int z) {
        return getIntoplatedNoise(x / 16.0f, z / 16.0f);
    }

    float getSmoothNoise(int x, int z) {
        float corners = (getNoise(x - 1, z - 1) + getNoise(x + 1, z - 1) + getNoise(x - 1, z + 1) + getNoise(x + 1, z + 1)) / 16.0f;
        float sides = (getNoise(x - 1, z) + getNoise(x + 1, z) + getNoise(x, z - 1) + getNoise(x, z + 1)) / 8.0f;
        float center = getNoise(x, z) / 4.0f;
        return corners + sides + center;
    }

    float getIntoplatedNoise(float x, float z) {
        int intX = (int)x;
        int intZ = (int)z;
        float fracX = x - intX;
        float fracZ = z - intZ;
        float v1 = getSmoothNoise(intX, intZ);
        float v2 = getSmoothNoise(intX + 1, intZ);
        float v3 = getSmoothNoise(intX, intZ + 1);
        float v4 = getSmoothNoise(intX + 1, intZ + 1);
        float i1 = interplate(v1, v2, fracX);
        float i2 = interplate(v3, v4, fracX);
        return interplate(i1, i2, fracZ);
    }

    float interplate(float a, float b, float blend) {
        double theta = blend * M_PI;
        float f = ((float)(1.0f - cos(theta))) * 0.5f;
        return a * (1.0f - f) + b * f;
    }

    float getHeight(int x, int z) {
        return generateheight(x, z);
    }

Thank you very much for your time.
Detect collision with Bullet physics, to make a character controller

I inherited from btCollisionWorld::ContactResultCallback, but I really have no idea how to use this virtual function:

    btScalar addSingleResult(btManifoldPoint& cp,
                             const btCollisionObjectWrapper* colObj0Wrap, int partId0, int index0,
                             const btCollisionObjectWrapper* colObj1Wrap, int partId1, int index1)

I thought about using btCollisionWorld::ConvexResultCallback instead, but there is no method in btCollisionWorld to use it. For now my only goal is to move a btCollisionObject around and detect collisions with walls, to adjust the position and movement. I would just need the collision normal, some collision point, or anything else...
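For what it's worth, a callback of this type is normally driven by btCollisionWorld::contactTest(), which invokes addSingleResult() once per contact point. A minimal sketch, assuming a btCollisionWorld* named world and a btCollisionObject* named player (both hypothetical names):

    #include <btBulletCollisionCommon.h>

    struct WallContactCallback : public btCollisionWorld::ContactResultCallback {
        btVector3 normal{0, 0, 0};
        bool hit = false;

        // Called by Bullet once for every contact point found by contactTest().
        btScalar addSingleResult(btManifoldPoint& cp,
                                 const btCollisionObjectWrapper* colObj0Wrap, int partId0, int index0,
                                 const btCollisionObjectWrapper* colObj1Wrap, int partId1, int index1) override {
            hit = true;
            normal = cp.m_normalWorldOnB; // world-space contact normal on object B
            return 0; // return value is currently unused by Bullet
        }
    };

    // Usage: test the player object against everything in the world.
    //   WallContactCallback cb;
    //   world->contactTest(player, cb);
    //   if (cb.hit) { /* push the player out along cb.normal */ }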
(GLSL) Lighting code outputting a black quad

So, I've been transitioning to modern OpenGL recently and it's going rather well. But alas, something must go wrong. As the title says, all I'm getting is a completely black quad. (I've double-checked my C++ code and I'm pretty sure it has nothing to do with that.)

Vertex shader:

    #version 330 core
    layout(location = 0) in vec3 vertexPosition_modelspace;
    layout(location = 1) in vec3 vertexColor;
    layout(location = 2) in vec2 vertexUV;
    layout(location = 3) in vec3 vertexNormal;
    out vec3 vertNorm;
    out vec3 fragmentColor;
    out vec3 vertPos;
    out vec2 UV;
    uniform mat4 MVP;
    void main() {
        gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0f);
        UV = vertexUV;
        fragmentColor = vertexColor;
        vertPos = vertexPosition_modelspace;
        vertNorm = vertexNormal;
    }

Fragment shader:

    #version 330 core
    in vec3 Normal;
    in vec3 fragmentColor;
    in vec3 vertPos;
    in vec2 UV;
    out vec3 color;
    uniform sampler2D textureSampler;
    uniform mat4 Model;
    void main() {
        vec3 lightPos = vec3(0, 0, 0);
        mat3 normalMatrix = transpose(inverse(mat3(Model)));
        vec3 normal = normalize(normalMatrix * Normal);
        vec3 fragPosition = vec3(Model * vec4(vertPos, 1));
        vec3 surfaceToLight = lightPos - fragPosition;
        float brightness = dot(normal, surfaceToLight) / (length(surfaceToLight) * length(normal));
        brightness = clamp(brightness, 0, 1);
        color = vec3(brightness * 1 * (texture(textureSampler, UV).rgb * fragmentColor));
    }

But if you require my C++ code, say so and I'll edit.
What is UVIndex and how do I use it in OpenGL?

I am a noob in OpenGL ES 2.0 (for WebGL) and I'm trying to draw a simple model I've made with a 3D tool and exported to .fbx format. I've been able to draw some models that only have a vertex buffer, an index buffer for the vertices, a normal buffer and a texture coordinate buffer, but this model now has a "UVIndex" and I'm not sure where I am supposed to put this UVIndex. My code looks like this:

    GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.VertexBuffer);
    GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vPosition"], 3, GL.FLOAT, false, 0, 0);
    GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.NormalBuffer);
    GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["vNormal"], 3, GL.FLOAT, false, 0, 0);
    GL.bindBuffer(GL.ARRAY_BUFFER, this.Model.House.TexCoordBuffer);
    GL.vertexAttribPointer(this.Shader.TextureAndLighting.Attribute["TexCoord"], 2, GL.FLOAT, false, 0, 0);
    GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, this.Model.House.IndexBuffer);
    GL.bindTexture(GL.TEXTURE_2D, this.Texture.HTex1);
    GL.activeTexture(GL.TEXTURE0);
    GL.drawElements(GL.TRIANGLES, this.Model.House.IndexBuffer.Length, GL.UNSIGNED_SHORT, 0);

But my model renders totally incorrectly, and I think it has to do with the fact that I am ignoring this "UVIndex" in the .fbx file, since I've never drawn any model that uses a UVIndex and I really have no clue what to do with it. This is the JSON file containing the model's data: http://pastebin.com/raw.php?i=G294TVmz
GLSL: all in one, or many shader programs?

I am doing some 3D demos using OpenGL and I noticed that GLSL is somewhat "limited" (or is it just me?). Anyway, I have many different types of materials. Some materials have ambient and diffuse color, some materials have an ambient occlusion map, some have a specular map and bump map, etc. Is it better to support everything in one vertex/fragment shader pair, or is it better to create many vertex/fragment shaders and select them based on the currently selected material? What is the usual shader strategy in OpenGL or D3D?
How to rotate a direction

I'm working on a spotlight for my deferred renderer and I'm having trouble matching the mesh to the visual representation of the light. Right now my mesh is a cone: the apex of the cone is at (0,0,0), it has a height of 1 and a radius of 1. The direction of this cone is (0,-1,0) when the rotation is (0,0,0). The relevant GLSL code:

    float spot_alpha = dot(-l, normalize(vec3(0, -1, 0)));
    float inner_alpha = cos(light.falloff);
    float outer_alpha = cos(light.radius);
    float spot = clamp((spot_alpha - outer_alpha) / (inner_alpha - outer_alpha), 0., 1.);

As you can see, the GLSL code uses a direction to define the area to be lit, so I could get this direction as a uniform, but I would need to find a rotation from that for the mesh to follow; or I can get the rotation and find a direction, but I don't know how to do either of these things. Can I rotate a direction? How do I do it? If I can't, is there another solution for this problem?
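To the core question: yes, a direction can be rotated directly. Directions transform by the rotation part (upper 3x3) of a transform, with no translation applied. A small GLM sketch (the Euler rotation order here is an assumption; use whatever convention your engine uses):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Rotate the cone's rest direction (0,-1,0) by Euler angles (radians) to
    // get the light direction; the same matrix can orient the cone mesh.
    glm::vec3 spotDirection(float rx, float ry, float rz) {
        glm::mat4 rot(1.0f);
        rot = glm::rotate(rot, rz, glm::vec3(0, 0, 1));
        rot = glm::rotate(rot, ry, glm::vec3(0, 1, 0));
        rot = glm::rotate(rot, rx, glm::vec3(1, 0, 0));
        return glm::mat3(rot) * glm::vec3(0.0f, -1.0f, 0.0f);
    }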
Geometry instancing in OpenGL ES 2.0

I am planning to do geometry instancing in OpenGL ES 2.0. Basically I plan to render the same geometry (a chair) maybe 1000 times in my scene. What is the best way to do this in OpenGL ES 2.0? I am considering passing the model-view mat4 as an attribute. Since attributes are per-vertex data, do I need to pass this same mat4 three times, once for each vertex of the same triangle (since the model-view remains constant across the vertices of the triangle)? That would amount to a lot of extra data sent to the GPU (2 extra vertices * 16 floats * the number of triangles of extra data). Or should I be sending the mat4 only once per triangle? But how is that possible using attributes, since attributes are defined as "per-vertex" data? What is the best and most efficient way to do instancing in OpenGL ES 2.0?
OpenGL: can an element array buffer point to only one VBO?

I'm making a Minecraft-style game and currently storing 36 vertices and texture coordinates per cube. Could I make an EBO for only the positions and leave the texture coordinates as they are? OK, I think this is how I will do it: I would have an array of 24 vertices (3 GLfloats each) and 24 texture coordinates (2 GLfloats each) for the corners, then I would have an EBO pointing to the corners with 36 indices. So the memory total is 24*3*4 + 24*2*4 + 36*4 = 624 bytes, instead of 36*3*4 + 36*2*4 = 720 bytes.
Linear filter problem with diagonal lines on adjacent tiles

I am quite new to using OpenGL/GLSL; basically, the project I am working on is my first 'real' experience with it. I do not know whether this is relevant, but I use libGDX for my project. Currently, I am trying to use the GL_LINEAR filter to draw the tiles of my (tile) map (before, I was using GL_NEAREST without problems). My map contains diagonal roads (just simple lines in this example). A partial overview example of my map (using the nearest filter, without corrections): (screenshot). With the GL_NEAREST filter, initially I added a 1-pixel border, so the tiles were slightly overlapping and the diagonal roads were drawn without the small gaps at the edges of the tiles. This does not work with the GL_LINEAR filter, because I suppose the overlapping parts are drawn twice, resulting in a darker 'blob': (screenshot). I tried to get rid of the blobs by making the horizontal lines shorter, but there is always an irregularity: either a blob or a 'decrease of line width' (as seen in the diagonal part of the line). So I removed the 1-pixel additional border and have this as a result: (screenshot). Now I am trying to fill in the 'gaps' by drawing two triangular fillers like this: (screenshot), but I can't get it completely right (this is the best I managed to get): (screenshot). I suppose it is virtually impossible to get this perfectly right by 'manually' drawing these triangles. Also, when looking at the third image, the lines at the 'edges' (that need to be filled) are very sharp. I suppose this is problematic..? So, my question is: how does one solve this problem? Do I need to prepare/modify my textures for this? Can I use a shader to fill the gaps automatically? Is manually filling the gaps with the triangles the way to go? Or is there some entirely different technique to have smooth diagonal lines that span multiple tiles?
Modern shadow rendering techniques? What is the state of the art in terms of shadow rendering? My target is OpenGL 3.2, using a deferred rendering pipeline, if that matters. It's been years since I looked into shadow rendering, and at that time there were numerous techniques available, from stencils to the various shadow mapping methods. At that time, rendering shadows required separate rendering passes, controlled by the CPU. But then recently I saw a demo where a scene was rendered entirely on the GPU, including shadows. I have no idea how that would have been accomplished, or if it is even a reasonable thing to do (beyond a tech demo). Given the large amount of old info on the internet, I'd like to learn what methods people are using these days, and how much of it can be pushed to the GPU (assuming my target OpenGL version supports it).
State-of-the-art shadowing technique for OpenGL on isometric terrain?

What's the most efficient way of creating shadows for objects on an isometric terrain with OpenGL and JOGL? Note that this terrain is not flat and is not heightmap-generated; think of it as another model. Objects' shadows on other objects are just nice to have; objects must drop shadows only on the terrain.
Where to start learning OpenGL with C++?

Possible duplicate: What are some good learning resources for OpenGL? I have learnt C++ and made some cool text-based games and such, but I would love to start graphical programming. I'm a decent artist (I will have some of my work below). I know the basics of C++, but I really would like to get into OpenGL. I need someone to show me some good tutorials for OpenGL with C++ so I can really get into game development. My goal is to be able to program a simple 2D game by the end of the year, and I have lots of time to do so. I'm enrolled in a game development course next year and really need some help with starting off.
glBlendFunc transparency in cocos2d?

GL_ONE, GL_ONE makes the flamingoes transparent on http://www.andersriggelsen.dk/glblendfunc.php, but not in cocos2d using:

    sprite.blendFunc = (ccBlendFunc){GL_ONE, GL_ONE};

How can I achieve a similar effect in cocos2d? Thank you.
Shader: transmittance or absorption

I am trying to create a transmittance or absorption shader (GLSL, HLSL, Cg, etc...) in realtime, but I can't find any good tutorial or white paper about this subject; I only find offline rendering references. Is it possible to achieve this kind of effect in realtime using standard rasterisation of a 3D mesh? How?
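For context, most absorption shading (offline and realtime alike) reduces to the Beer-Lambert law: transmittance falls off exponentially with the distance light travels through the medium. A common realtime approach rasterises the mesh's back and front faces to estimate thickness, then applies something like this sketch (the per-channel absorption coefficients below are made-up values):

    #include <cmath>

    struct Vec3 { float r, g, b; };

    // Beer-Lambert: T = exp(-sigma * d), evaluated per color channel.
    // 'thickness' is the distance the ray travels inside the object.
    Vec3 transmittance(const Vec3& sigma, float thickness) {
        return { std::exp(-sigma.r * thickness),
                 std::exp(-sigma.g * thickness),
                 std::exp(-sigma.b * thickness) };
    }

    // Example: Vec3 sigma{0.8f, 0.4f, 0.1f}; // absorbs red most -> bluish glass
    //          Vec3 t = transmittance(sigma, depthBack - depthFront);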
Is linear filtering possible on depth textures in OpenGL?

I'm working on shadow maps in OpenGL (using C#). First, I've created a framebuffer and attached a depth texture as follows:

    // Generate the framebuffer.
    var framebuffer = 0u;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // Generate the depth texture.
    var shadowMap = 0u;
    glGenTextures(1, &shadowMap);
    glBindTexture(GL_TEXTURE_2D, shadowMap);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowMap, 0);

    // Set the read and draw buffers.
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

Later (after rendering to the shadow map and preparing the main scene), I sample from the shadow map in a GLSL fragment shader as follows:

    float shadow = texture(shadowMap, shadowCoords.xy).r;

where vec3 shadowCoords is the coordinates of the fragment from the perspective of the global directional light source (the one used to create the shadow map). The result is shown below. As expected, shadow edges are jagged due to using GL_NEAREST filtering. To improve smoothness, I tried replacing the shadow map's filtering with GL_LINEAR, but the results haven't changed. I understand there are other avenues I could take (like percentage-closer filtering), but I'd like to answer this question first, if only for my sanity. I've also noticed that other texture parameters (like GL_CLAMP_TO_EDGE rather than GL_REPEAT for wrapping) don't function for the shadow map, which hints that this may be a limitation of depth textures in general. To reiterate: is linear filtering possible using depth textures in OpenGL?
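One related, well-documented path (offered for comparison, not as a direct yes/no on plain GL_LINEAR): OpenGL's depth-compare mode. With GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_REF_TO_TEXTURE and a sampler2DShadow in GLSL, GL_LINEAR filtering yields a bilinearly weighted comparison result (hardware PCF) rather than a filtered depth value. A sketch of the setup on the GL side:

    // Configure the depth texture for hardware depth comparison.
    glBindTexture(GL_TEXTURE_2D, shadowMap);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // In GLSL the sampler then becomes:
    //   uniform sampler2DShadow shadowMap;
    //   float lit = texture(shadowMap, shadowCoords); // xy = coords, z = reference depth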
Multiple textures in shader?

I have this (pseudo)code:

    unsigned int TextureLoc = glGetUniformLocation(programID, "objectTexture");
    for (int i = 0; i < object->texturesCount; i++) {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, object->textures[i]->textureID);
        glUniform1i(TextureLoc, 0);
    }

and this fragment shader code:

    in vec2 UV;
    out vec3 color;
    uniform sampler2D objectTexture;
    void main() {
        color = texture2D(objectTexture, UV).rgb;
    }

My example is that of a house model that composes a single object, but the house walls are of texture 1 and the roof is of texture 2. At first I coded it so that each object had its own material/textures, but then I noticed it's a waste of memory if I'm using the same texture in multiple places, so now I have a list of objects and a list of textures. An object can have n textures. 1. How can I get a "new" in GLSL for a dynamic texture count? Some objects might have 1, some might have 10, etc... How can I control that? 2. I have an array specifying which textureID each triangle in the object uses, but how can I use this information to properly draw the object? I send the array in the glEnableVertexAttribArray(int) function, but how can I know which texture I'm supposed to use when I'm in the shader code? Another option I thought of was to divide all objects into smaller objects if they don't have the same texture, but I'm not sure what's the best approach to this.
Best strategy to track object hierarchy using groups and OBJ files

I am making a 3D game in OpenGL from scratch. In this game I have a ship with stuff inside it. How can I attach the stuff to the ship in the CAD program and maintain that hierarchy in my own game? For example, say I have a fire extinguisher in my ship that mounts on the wall. There are two approaches, both with problems. Solution 1: save the fire extinguisher and ship as separate OBJ files. Problem: how can I place the fire extinguisher in the proper place inside my ship in my game? With hundreds of objects, manually placing them is completely infeasible. I want to arrange stuff in my CAD tool, load it into the game, and be done. Solution 2: save the fire extinguisher as its own group inside the ship OBJ file. Problem: now I can't reuse the fire extinguisher in other ships, and the OBJ files for game assets will balloon out of control in size with new instances of reused sub-objects. Is there some way I can specify the position of an external object? A 3D point in my ship OBJ file representing the origin of another OBJ model?
How to calculate the normal from a normal map in world space? (OpenGL)

I'm trying to do normal mapping in a deferred renderer and I'm stuck on how to implement normal maps. I have a bool that passes whether or not to use a normal-mapped value and thus whether to calculate the TBN matrix. My vertex code for the geometry pass looks as follows:

    #version 410 core
    layout (location = 0) in vec3 aPos;
    layout (location = 1) in vec3 aNormal;
    layout (location = 2) in vec2 aTexCoords;
    layout (location = 3) in vec3 aTangent;   // Optional
    layout (location = 4) in vec3 aBitangent; // Optional
    out vec3 FragPos;
    out vec2 TexCoords;
    out vec3 Normal;
    out mat3 TBN;
    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;
    uniform bool hasNormalMap;
    void main() {
        vec4 worldPos = model * vec4(aPos, 1.0);
        FragPos = worldPos.xyz;
        TexCoords = aTexCoords;
        Normal = transpose(inverse(mat3(model))) * aNormal;
        if (hasNormalMap) {
            vec3 T = normalize(vec3(model * vec4(aTangent, 0.0)));
            vec3 N = normalize(vec3(model * vec4(aNormal, 0.0)));
            // re-orthogonalize T with respect to N
            T = normalize(T - dot(T, N) * N);
            // then retrieve perpendicular vector B with the cross product of T and N
            vec3 B = cross(N, T);
            mat3 TBN = mat3(T, B, N);
        }
        gl_Position = projection * view * worldPos;
    }

Here is where I am confused: in my calculation, I multiplied T and N by the model matrix, which should have moved them into world space. Now transposing (T,B,N) should move me back into model space (I think; I'm not sure). In the fragment shader, how do I use the TBN to calculate the normal in world space? If there are better approaches, they are welcome. Thank you.

Update: I removed the transposing of the TBN, as there's no reason to transform into tangent space if we want to pass it in world space. Now that we have the TBN matrix in the fragment shader, how do we apply it so that the normal is the correct value for lighting? Currently I've done:

    #version 410 core
    layout (location = 0) out vec3 gPosition;
    layout (location = 1) out vec3 gNormal;
    layout (location = 2) out vec4 gAlbedoSpec;
    in vec2 TexCoords;
    in vec3 FragPos;
    in vec3 Normal;
    in mat3 TBN;
    struct Material {
        sampler2D diffuseMap;
        sampler2D specularMap;
        sampler2D normalMap;
        float shininess;
    };
    uniform Material material;
    uniform bool hasNormalMap;
    void main() {
        // store the fragment position vector in the first gbuffer texture
        gPosition = FragPos;
        // also store the per-fragment normals into the gbuffer
        gNormal = normalize(Normal);
        if (hasNormalMap) {
            gNormal = texture(material.normalMap, TexCoords).rgb * TBN;
            gNormal = normalize(gNormal);
        }
        // and the diffuse per-fragment color
        gAlbedoSpec.rgb = texture(material.diffuseMap, TexCoords).rgb;
        // store specular intensity in gAlbedoSpec's alpha component
        gAlbedoSpec.a = texture(material.specularMap, TexCoords).r;
    }

but that doesn't feel right. I imagine that I would transform the sampled value from the normal map by the TBN matrix to get it in world space. Am I missing something?
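For reference, the usual convention is to remap the sampled texel from [0,1] to [-1,1] before rotating it by the TBN, which maps tangent space to world space when T, B and N are world-space columns. A GLM sketch of the same math on the CPU side (function name is illustrative):

    #include <glm/glm.hpp>

    // 'sample' is the raw normal-map texel in [0,1]^3; T, B, N are the
    // world-space tangent frame columns. Result is a world-space normal.
    glm::vec3 worldNormal(const glm::vec3& sample,
                          const glm::vec3& T, const glm::vec3& B, const glm::vec3& N) {
        glm::vec3 t = sample * 2.0f - 1.0f; // remap [0,1] -> [-1,1]
        glm::mat3 TBN(T, B, N);             // columns: tangent, bitangent, normal
        return glm::normalize(TBN * t);     // note: matrix * vector, not vector * matrix
    }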
Parsing a Blender-exported COLLADA file in C++

Since I could not find any libraries that can help me parse COLLADA files in my C++/OpenGL project, I decided to write my own. (I did find Assimp, but binaries for Visual Studio 2013 weren't available, and I don't know enough to build it myself from the source. Can anyone build it and share it, please?) I read up a little about the structure of .dae files, figured I could do it, and exported Suzanne from Blender as a .dae file. Fine. Now I open up the .dae file in Notepad just to make sure I got the basic understanding right, and I found something weird. So the <float_array id="Suzanne-mesh-positions-array" count="1521"> ... </float_array> tags contain the huge list of vertex positions, right? And the <p> ... </p> tags inside <polylist count="968"> ... </polylist> should contain the vertex indices in triangular order, right? But that doesn't make sense, because then the first triangle is going to be vertex 46, vertex 0 and vertex 0... But how is that possible? Any help, please? If I've got it wrong, can someone direct me to some place where I can learn how to parse COLLADA files? Thanks a lot.

EDIT: I finally managed to build Assimp for VS 2013 (YAAYYYY!!) after getting awesome instructions at http://www.learnopengl.com/#!Model-Loading/Assimp. But my confusion regarding the vertex indices remains.
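One thing worth knowing about <polylist> (and a likely source of the confusion, though I can't verify it against this exact file): the <p> element holds one index per <input> for each corner, not one index per corner. With VERTEX, NORMAL and TEXCOORD inputs, indices come in groups of three, so "46 0 0" is a single corner (position 46, normal 0, texcoord 0). A sketch of de-interleaving, where the 0/1/2 offsets are assumptions that should really be read from each <input offset="...">:

    #include <vector>
    #include <cstddef>

    struct Corner { int position, normal, texcoord; };

    // 'p' is the flat index list from <p>; 'inputCount' is the number of
    // <input> elements in the <polylist> (the index stride per corner).
    std::vector<Corner> readCorners(const std::vector<int>& p, std::size_t inputCount) {
        std::vector<Corner> corners;
        for (std::size_t i = 0; i + inputCount <= p.size(); i += inputCount) {
            corners.push_back({ p[i + 0], p[i + 1], p[i + 2] }); // offsets 0/1/2 assumed
        }
        return corners;
    }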
How do you fix wobbling shadow edges?

I've implemented an omnidirectional shadow map and I've noticed rather unwanted behaviour in the shadows. It seems like when the angle between the occluded points and the light source is really steep, the edge of the shadow starts to wobble; it almost looks like it's pulsating. Could it have something to do with this: when the light source moves further away from the occluded polygons, the pixels next to each other are further away from each other than the corresponding texels on the shadow map face that I sample from, which results in some sort of magnification side effect when performing aliasing? This is just a really wild guess! Is anyone familiar with this type of behaviour and/or has any solution to it? Thank you!
GLSL/C++: only the first sampler array index is accessible

I have the following shader, whose fragment shader contains a sampler array of 16 elements.

Fragment:

    #version 330 core
    in vec2 uv;
    flat in int instanceID;
    out vec4 color;
    uniform sampler2D sampler[16];
    void main() {
        color = texture(sampler[instanceID], uv);
    }

Now when I try to set the second array element using the following method:

    void Shader::setUniform(const std::string& uniform, GLuint* values, size_t amount) const {
        bindProgram();
        GLuint loc = getUniformLocation(uniform + "[0]");
        for (int i = 0; i < amount; i++) {
            glUniform1ui(loc + i, values[i]);
            std::cout << values[i] << " "; // For debug
        }
        std::cout << std::endl; // For debug
    }

it just sets the first element of the shader, even though I'm passing 2 values. This is the output of the couts:

    5 4
    5 4
    5 4
    5 4
    ...

Looking up the shader variables with gDEBugger, only the first element is set. It also only draws the first (index 0) image.
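A detail that may matter here: per the OpenGL spec, sampler uniforms must be set with the signed-integer forms (glUniform1i / glUniform1iv); the unsigned form generates INVALID_OPERATION for samplers. The 1iv form also sets the whole array in one call starting at the location of element [0]. A sketch (values are texture unit indices, not texture object IDs):

    // Set an entire sampler array in one call.
    GLint units[2] = { 0, 1 };
    GLint loc = glGetUniformLocation(programID, "sampler[0]");
    glUniform1iv(loc, 2, units);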
Distorted LookAt when looking up or down?

I have a weird problem and I have no idea what's going on with it. I recently started doing some OpenGL programming; it's going pretty well, I hit some rough spots but worked my way through them and am otherwise pretty happy! But I've hit a roadblock with something that has me confused as hell. I'm doing an OpenGL 4 based rendering system. It's early days and I've got some basic stuff like a 3D model importer happening, but I've been having a problem with the maths involved in creating my own "LookAt" function like the one present in OpenGL 2. I've created my own Matrix & Vector classes; I think most of the functions are right, but the LookAt does something weird when looking at different angles. For a simple test to show what happens, I created a slowly moving focus point for the LookAt that starts off a few units ahead of the camera, then slowly moves closer and closer till it's 1 unit below the camera. As the focus point comes closer to the point of looking straight down, for some reason it's like the field of view is narrowing. It eventually zooms in so close that all you can see is one solid colour covering the entire screen: whatever pixel happens to be on the ground directly below the camera as it zooms in with a field of view of basically 0 degrees. It may be nothing to do with field of view at all, but I don't know, and hence why I'm here. Here is some code related to the problem; forgive me if these implementations are not perfect, again I'm just starting out and I don't know the best methods yet, still learning.

My function for calculating a LookAt:

    public static Transformation lookAt(Vector position, Vector target, Vector updirection) {
        Transformation lookAt = new Transformation();
        Vector vz = position.clone();
        vz.subtract(target);
        vz.normalise();
        Vector vx = Vector.xProduct(updirection, vz);
        vx.normalise();
        Vector vy = Vector.xProduct(vz, vx);
        lookAt.m[0] = vx.x;  lookAt.m[1] = vy.x;  lookAt.m[2] = vz.x;   lookAt.m[3] = position.x;
        lookAt.m[4] = vx.y;  lookAt.m[5] = vy.y;  lookAt.m[6] = vz.y;   lookAt.m[7] = position.y;
        lookAt.m[8] = vx.z;  lookAt.m[9] = vy.z;  lookAt.m[10] = vz.z;  lookAt.m[11] = position.z;
        lookAt.m[12] = 0;    lookAt.m[13] = 0;    lookAt.m[14] = 0;     lookAt.m[15] = 1;
        return lookAt.inverse();
    }

My function for calculating perspective:

    public static Transformation perspective(double FoV, double aspectRatio, double nearClip, double farClip) {
        Transformation proj = new Transformation();
        double FoVR = Math.toRadians(FoV);
        proj.m[0] = FoVR / aspectRatio;
        proj.m[5] = FoVR;
        proj.m[10] = (farClip + nearClip) / (nearClip - farClip);
        proj.m[11] = 2 * (farClip * nearClip) / (nearClip - farClip);
        proj.m[14] = -1;
        proj.m[15] = 0;
        return proj;
    }

I use the functions later on when generating a matrix to apply to the 3D letter G in the scene, like so:

    Transformation MVP = new Transformation();
    Transformation proj = Transformation.perspective(FoV, (double)canvas.getWidth() / (double)canvas.getHeight(), near, far);
    MVP.multiply(proj);
    Transformation view = Transformation.lookAt(cameraPosition, cameraFocus, cameraUp);
    MVP.multiply(view);
    Transformation model = Transformation.scaleRotateTranslate(objectScale, objectRotation, objectTranslation);
    MVP.multiply(model);

And the MVP is simply sent to the shader and used as you'd expect, multiplied against all the vertices/normals. What do you think I'm doing wrong? What should I check first? Is there a better way of doing what I'm trying to do? Any help/answers at all are greatly appreciated; I just don't know what to go after first. The perspective/LookAt seems like it's almost working, but something must be broken?
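For comparison (not a confirmed diagnosis of the code above): a standard perspective matrix scales x and y by the focal length f = 1 / tan(fov / 2) (divided by aspect for x), whereas the code above puts the raw angle in those slots, which only approximates the right scale for one particular FoV. A GLM reference point, assuming a vertical FoV in degrees:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <cmath>

    glm::mat4 makeProjection(float fovDegrees, float aspect, float zNear, float zFar) {
        // Reference scale terms: m[0][0] = f / aspect, m[1][1] = f,
        // where f = 1 / tan(radians(fov) / 2).
        float f = 1.0f / std::tan(glm::radians(fovDegrees) / 2.0f);
        (void)f; // shown for reference; glm::perspective computes the same terms
        return glm::perspective(glm::radians(fovDegrees), aspect, zNear, zFar);
    }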
OpenGL noob: using a VBO to draw a colored triangle

I tried using straight vertex arrays to draw a triangle with different colors for each vertex and it works fine, but when I use a VBO it won't work, so I'm doing something wrong.

    // point 1
    side1.push_back(-2.0f); // x1
    side1.push_back(-2.0f); // y1
    side1.push_back(0.0f);  // z1
    // side color
    side_color1.push_back(0.8f); // r1
    side_color1.push_back(0.0f); // g1
    side_color1.push_back(1.0f); // b1
    // point 2
    side1.push_back(2.0f);  // x2
    side1.push_back(-2.0f); // y2
    side1.push_back(0.0f);  // z2
    // ...etc

Init code:

    glGenBuffers(2, &m_vertexBuffer[0]); // Generate a buffer for the vertices
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer[0]); // Bind the vertex buffer
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * side1.size(), &side1[0], GL_STATIC_DRAW); // Send the data to OpenGL
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer[1]); // Bind the vertex buffer
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * side_color1.size(), &side_color1[0], GL_STATIC_DRAW); // Send the data to OpenGL

Render code:

    glEnableClientState(GL_VERTEX_ARRAY); // enable vertex array
    glEnableClientState(GL_COLOR_ARRAY); // enable color array
    // bind the buffer to the VBO
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer[0]);
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer[1]);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glColorPointer(3, GL_FLOAT, 0, BUFFER_OFFSET(sizeof(GLfloat) * side1.size()));
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
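One detail worth knowing about this pattern: gl*Pointer calls capture the buffer that is bound to GL_ARRAY_BUFFER at the moment of the call, so binding buffer 0 and then buffer 1 before setting both pointers means both pointers end up referencing buffer 1. A sketch of the per-buffer ordering (note the colors then start at offset 0 of their own buffer):

    // Each pointer is specified while its own buffer is bound.
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer[0]);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer[1]);
    glColorPointer(3, GL_FLOAT, 0, 0); // offset 0: colors live in their own buffer

    glDrawArrays(GL_TRIANGLES, 0, 3);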
Dynamic terrain triangulation

Does anyone know of an algorithm which can perform terrain triangulation like in the example image right under (there is a secondary image as well)? The reason I say "dynamic" is because I want it to support dynamic changes to the terrain: for example, if one were to dig into the ground or into the side of a mountain, it should be capable of triangulating the new changes without destroying the old terrain that doesn't need changes. Ignore the things like the trees, etc.; this is only about the style of the terrain itself. (Click here for the fullscreen picture; there is a secondary image as well.) I've tried using the marching cubes algorithm, though it didn't give me that triangle-ish feeling, like in the image(s) above (while ignoring the normal rendering in my example). Bottom line: does anybody know of an algorithm, or have any idea of how to triangulate terrain so it has that style like in the above example image(s), and support dynamic changes like marching cubes does, but with this triangle-ish style? Note: all the "points" would be in an unsorted list, where the "points" of course are all the vertices which should be used to generate all the triangles. Disclaimer: I usually answer and sometimes ask questions on Stack Overflow, so if anything is wrong with my question here I apologize; comment and I will fix whatever is wrong.
When drawing a transparent object in OpenGL, it cuts off sides of other objects?

I am making an OpenGL program, and I have a lot of cubes next to each other, like this: (screenshot). When I am making this hole, I am just skipping those cubes and not drawing them:

    for (int i = 0; i < NUM_OF_CUBES; i++) {
        for (int j = 0; j < NUM_OF_CUBES; j++) {
            if ((i == 3 && j == 3) || (i == 4 && j == 3) || (i == 3 && j == 4) || (i == 4 && j == 4))
                continue;
            cubes[i][j].draw(true);
        }
    }

The result is: (screenshot). But when I make those cubes invisible, by setting alpha to 0, the result is: (screenshot). The code is:

    for (int i = 0; i < NUM_OF_CUBES; i++) {
        for (int j = 0; j < NUM_OF_CUBES; j++) {
            if ((i == 3 && j == 3) || (i == 4 && j == 3) || (i == 3 && j == 4) || (i == 4 && j == 4))
                cubes[i][j].draw(true);
            else
                cubes[i][j].draw(false);
        }
    }

The true/false flag which I send to the draw function just sets alpha to 0 or 1: if true, the alpha coordinate of glColor4f is set to 0, and if false is sent, the alpha coordinate is set to 1. Does anyone have an idea why it looks like some sides of the other cubes are cut, like they don't exist?
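In case it helps: alpha-0 geometry still writes to the depth buffer, so anything drawn later behind it fails the depth test and disappears, which matches the "cut" sides. The usual remedies are to skip fully transparent geometry entirely, or to disable depth writes while drawing transparent objects. A sketch:

    // Draw opaque cubes first, with depth writes on.
    glDepthMask(GL_TRUE);
    // ... draw opaque cubes ...

    // Then draw transparent cubes without writing depth, so they
    // cannot occlude geometry drawn after them.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    // ... draw transparent cubes ...
    glDepthMask(GL_TRUE);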
Why is the specular light not working?

This is my method for lighting:

    private void lights(GL gl) {
        float[] LightPos = {0.0f, 0.0f, 1.0f, 1.0f};
        float[] LightAmb = {0.2f, 0.2f, 0.2f, 1.0f};
        float[] LightDif = {0.6f, 0.6f, 0.6f, 1.0f};
        float[] LightSpc = {0.9f, 0.9f, 0.9f, 1.0f};
        gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, LightPos, 0);
        gl.glLightfv(GL.GL_LIGHT1, GL.GL_AMBIENT, LightAmb, 0);
        gl.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, LightDif, 0);
        gl.glLightfv(GL.GL_LIGHT1, GL.GL_SPECULAR, LightSpc, 0);
        gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, LightSpc, 0);
        gl.glEnable(GL.GL_LIGHT0);
        gl.glEnable(GL.GL_LIGHT1);
        gl.glShadeModel(GL.GL_SMOOTH);
        gl.glEnable(GL.GL_LIGHTING);
    }

and I see my objects flat, with no specular light. Any ideas? P.S. To render my objects:

    gl.glColor3f(1f, 0f, 0f);
    gl.glBegin(GL.GL_TRIANGLES);
    for (Triangle t : tubeModel.getTriangles()) {
        gl.glVertex3f(t.v1.x, t.v1.y, t.v1.z);
        gl.glVertex3f(t.v2.x, t.v2.y, t.v2.z);
        gl.glVertex3f(t.v3.x, t.v3.y, t.v3.z);
    }
    gl.glEnd();

This is what I'm looking for: (screenshot). This is what I get: (screenshot). Well, it's not all black, but if I rotate it, it becomes grey/white.
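Two things fixed-function specular depends on that don't appear in the snippets above (so this is a guess at the gap, not a confirmed diagnosis): per-vertex normals, and a material with a specular color and shininess. A sketch of both in C-style GL (the JOGL calls are analogous; nx/ny/nz are placeholders for a computed face normal):

    // Material: specular color and shininess (0..128) are required for highlights.
    GLfloat matSpec[] = { 0.9f, 0.9f, 0.9f, 1.0f };
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, matSpec);
    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 32.0f);

    // Each vertex (or face) needs a normal for the lighting equation.
    glBegin(GL_TRIANGLES);
    glNormal3f(nx, ny, nz); // per-face normal, computed from the triangle's edges
    glVertex3f(x1, y1, z1);
    glVertex3f(x2, y2, z2);
    glVertex3f(x3, y3, z3);
    glEnd();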
Irrlicht engine game won't compile on Linux: undefined references (OpenGL and XFree)

I'm trying to port the game I'm developing over to Linux, but when I compile I get a lot of undefined references, mostly to functions that look like they belong to OpenGL (most are titled gl...), but one of them is called XFree. I compiled it with this command:

    g++ main.cpp -L../../../LIB/irrlicht-1.8.3/lib/Linux -lIrrlicht -I../../../LIB/irrlicht-1.8.3/include

One of the errors:

    /home/owner/LIB/irrlicht-1.8.3/source/Irrlicht/COpenGLDriver.cpp:3746: undefined reference to `glVertex3f'
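Undefined references to gl* and XFree usually mean the final link line is missing the libraries that the static Irrlicht library itself depends on; on Linux that's typically GL and X11 (Irrlicht's own example Makefiles also pull in Xxf86vm). A hedged guess at the command, since the exact library set can vary with Irrlicht build options:

    g++ main.cpp -I../../../LIB/irrlicht-1.8.3/include \
        -L../../../LIB/irrlicht-1.8.3/lib/Linux -lIrrlicht -lGL -lX11 -lXxf86vm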
Lighting objects with textures (OpenGL)

I'm trying to get lighting effects on a textured object; I'm using .obj and .mtl files to define them. No matter what I try, my object is either invisible, unlit (plain texture), or completely white. I defined my MTL by having my texture mapped to diffuse (that is, map_Kd texture.png). I found that if I don't set both a Ka and Kd (in addition to the texture), the object is invisible. Also, it seems that the actual ambient values don't change anything; they could be 0 and transparent, as long as they're defined. If I set my material to have white ambient, diffuse, and specular, then the object is 100% white, no matter what I do with my light source. This is also after I set GL_SEPARATE_SPECULAR_COLOR; before that, I just got the plain texture. I tried making sure I had a normal defined for at least one face of the model, with no luck. I'm trying to have my texture be dominant, with a little bit of specular lighting on it. What am I doing wrong? Is the lighting "working" but the all-white means I haven't set my shininess properly? Is it being infinitely reflective or something? Note: I'm using OpenTK and working in C#.
How do I position a 2D camera in OpenGL?

I can't understand how the camera is working. It's a 2D game, so I'm displaying a game map from (0, 0, 0) to (mapSizeX, 0, mapSizeY). I'm initializing the camera as follows:

    Camera::Camera(void)
        : position_(0.0f, 0.0f, 0.0f), rotation_(0.0f, 0.0f, 1.0f)
    {
    }

    void Camera::initialize(void) {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glTranslatef(position_.x, position_.y, position_.z);
        gluPerspective(70.0f, 800.0f / 600.0f, 1.0f, 10000.0f);
        gluLookAt(0.0f, 6000.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
    }

So the camera is looking down. I currently see the upper-right border of the map in the center of my window, and the map expands to the lower-left border of my window. I would like to center the map. The logical thing to do should be to move the camera to eyeX = mapSizeX / 2 and the same for z. My map has 10 x 10 cases with CASE = 400, so I should have:

    gluLookAt((10 / 2) * CASE /* = 2000 */, 6000.0f, (10 / 2) * CASE /* = 2000 */,
              0.0f, 0.0f, 1.0f,
              0.0f, 1.0f, 0.0f);

But that doesn't move the camera; it seems to rotate it. Am I doing something wrong?

EDIT: I tried this:

    gluLookAt(2000.0f, 6000.0f, 0.0f, 2000.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f);

which correctly moves the map to the middle of the window in width, but I can't move it correctly in height; it always returns the Z axis. When I go up, it goes down, and the same for right and left. I don't see the map anymore when I do:

    gluLookAt(2000.0f, 6000.0f, 2000.0f, 2000.0f, 0.0f, 2000.0f, 0.0f, 1.0f, 0.0f);
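One pitfall worth noting (a likely suspect here, though unverified against the full code): gluLookAt's up vector must not be parallel to the view direction. Looking straight down the y axis with up = (0, 1, 0) is degenerate; for a top-down camera the up vector is usually chosen along the z axis instead. A sketch:

    // Top-down camera centered on the map: eye directly above the target,
    // with 'up' pointing along -z so the basis stays well-defined.
    gluLookAt(2000.0f, 6000.0f, 2000.0f,   // eye
              2000.0f, 0.0f,    2000.0f,   // center (straight down)
              0.0f,    0.0f,   -1.0f);     // up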
GLM: Euler angles to quaternion

I hope you know GL Mathematics (GLM), because I've got a problem I cannot break: I have a set of Euler angles and I need to perform smooth interpolation between them. The best way is converting them to quaternions and applying the SLERP algorithm. The issue I have is how to initialize a glm quaternion with Euler angles, please? I read the GLM documentation over and over, but I cannot find an appropriate quaternion constructor signature that would take three Euler angles. The closest one I found is the angleAxis() function, taking an angle value and an axis for that angle. Note, please, that what I am looking for is a way to parse RotX, RotY, RotZ. For your information, this is the above-mentioned angleAxis() function signature:

    detail::tquat<valType> angleAxis(valType const & angle, valType const & x, valType const & y, valType const & z)
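For reference, current GLM versions expose a quaternion constructor that takes Euler angles in radians packed into a vec3, and composing three angleAxis rotations works too. A sketch showing both, assuming RotX/RotY/RotZ are in radians:

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    // Build a quaternion from Euler angles given in radians.
    glm::quat fromEuler(float rotX, float rotY, float rotZ) {
        // Option 1: GLM's Euler-angle constructor.
        glm::quat q1(glm::vec3(rotX, rotY, rotZ));

        // Option 2: compose per-axis rotations explicitly to control the order.
        glm::quat q2 = glm::angleAxis(rotZ, glm::vec3(0.0f, 0.0f, 1.0f))
                     * glm::angleAxis(rotY, glm::vec3(0.0f, 1.0f, 0.0f))
                     * glm::angleAxis(rotX, glm::vec3(1.0f, 0.0f, 0.0f));
        (void)q2; // alternative form; glm::slerp(a, b, t) then interpolates
        return q1;
    }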
For deferred rendering and SSAO, what coordinate system are the normals actually in?

So, I'm following the very helpful LearnOpenGL online tutorials, and I'm working on implementing SSAO. I don't have a deferred rendering pipeline, but I need to collect normals during my depth pass so that I can sample them for the SSAO effect. I'm following this tutorial: https://learnopengl.com/Advanced-Lighting/SSAO, and referencing this one when needed: https://learnopengl.com/Advanced-Lighting/Deferred-Shading. However, neither of these tutorials really explains what's going on with the normals. What is the normalMatrix being used in the shader?

    mat3 normalMatrix = transpose(inverse(mat3(view * model)));
    Normal = normalMatrix * (invertedNormals ? -aNormal : aNormal);

I'm not worried about the invertedNormals conditional; that's probably just something weird the author added for handling different conventions. But what the heck is going on with that normalMatrix? Why would we invert the view * model matrix? If we're starting with aNormal in object space (whether it's coming directly from a vertex buffer or from a normal map TBN calculation), I can't for the life of me figure out what coordinate system that resulting normal is going to end up in! So, long story short: what coordinate frame should my normals be in if I'm to use them for SSAO? World space? View space? Clip space? None of the above? If it's some weird coordinate system, can somebody help me get there? I'm probably more confused than I should be.
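For what it's worth: transpose(inverse(M)) is the standard "normal matrix"; inverting and transposing keeps normals perpendicular to surfaces under non-uniform scale. Because M here is view * model, the result lands in view space, which is the space the LearnOpenGL SSAO pass works in (its sample kernel and depth comparisons are view-space). A CPU-side GLM sketch of the same construction:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_inverse.hpp>

    // Transform an object-space normal into view space. Equivalent to the
    // shader's transpose(inverse(mat3(view * model))).
    glm::vec3 viewSpaceNormal(const glm::mat4& view, const glm::mat4& model,
                              const glm::vec3& aNormal) {
        glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(view * model));
        return glm::normalize(normalMatrix * aNormal);
    }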
OpenGL camera causes spatial distortion

I'm trying to implement a 3D camera of the "orbit around the origin" variety in a game engine I'm developing in order to learn about 3D graphics and game programming. I have a basic handle on the required math, and have implemented my own crude 3D math library and matrix stack (functionally similar to the one in OpenGL's compatibility profile). My first attempts at a camera failed miserably, until I decided that I should use quaternions to represent rotation. Once I'd done that, I had a camera that moved in the direction I expected it to at about the right speed and fairly smoothly, but there is an odd spatial distortion effect that I can't seem to get rid of. I have posted a brief video on YouTube showing exactly what is happening. I honestly don't know what could be causing this issue; the magnitude of the distortion appears to reach a maximum at certain angles, but beyond that I'm not sure what I should be looking at. Right now my chief suspect is my quaternion math code, because the matrix stack appears to be behaving itself, and if something was wrong with my matrix math code I would be having more serious issues than this. I'm not sure what information or code I can provide that would be of any use, so I'll start with the code that handles quaternion math.

    public class Quaternion {
        private float w;
        private float x;
        private float y;
        private float z;

        public Quaternion() {
            this(1.0f, 0.0f, 0.0f, 0.0f);
        }

        public Quaternion(float W, float X, float Y, float Z) {
            w = W;
            x = X;
            y = Y;
            z = Z;
        }

        public void loadIdentity() {
            w = 1.0f;
            x = 0.0f;
            y = 0.0f;
            z = 0.0f;
        }

        public float magnitude() { // AKA the "norm"
            return (float) Math.sqrt(w*w + x*x + y*y + z*z);
        }

        public void qNormalize() {
            float mag = this.magnitude();
            w /= mag;
            x /= mag;
            y /= mag;
            z /= mag;
        }

        public void qMultiply(Quaternion q) {
            this.qMultiply(q.w, q.x, q.y, q.z);
        }

        private void qMultiply(float w2, float x2, float y2, float z2) {
            // Performs A * B, where this Quaternion is A
            this.w = w*w2 - x*x2 - y*y2 - z*z2;
            this.x = w*x2 + x*w2 + y*z2 - z*y2;
            this.y = w*y2 + y*w2 + z*x2 - x*z2;
            this.z = w*z2 + z*w2 + x*y2 - y*x2;
        }

        public float[] qMatrix() {
            // Return a 4x4 matrix representation of this Quaternion,
            // column major like OpenGL prefers
            float w2 = w*w;
            float x2 = x*x;
            float y2 = y*y;
            float z2 = z*z;
            if (Math.abs(w2 + x2 + y2 + z2 - 1.0) > 0.000001) {
                this.qNormalize();
                return this.qMatrix();
            }
            return new float[] {
                1.0f - 2.0f*y2 - 2.0f*z2, 2.0f*x*y + 2.0f*w*z, 2.0f*x*z - 2.0f*w*y, 0.0f,
                2.0f*x*y - 2.0f*w*z, 1.0f - 2.0f*x2 - 2.0f*z2, 2.0f*y*z + 2.0f*w*x, 0.0f,
                2.0f*x*z + 2.0f*w*y, 2.0f*y*z - 2.0f*w*x, 1.0f - 2.0f*x2 - 2.0f*y2, 0.0f,
                0.0f, 0.0f, 0.0f, 1.0f
            };
        }

        public void qFromAA(float angle, float[] axis) {
            // create a Quaternion from an axis-angle representation
            if (Vector.magnitude(axis) == 0.0f) {
                this.loadIdentity();
                return;
            }
            float theta = angle / 2.0f;
            axis = Vector.scale((float) Math.sin(theta), Vector.normalize(axis));
            w = (float) Math.cos(theta);
            x = axis[0];
            y = axis[1];
            z = axis[2];
        }

        public String toString() {
            return "w " + w + "\tx " + x + "\ty " + y + "\tz " + z
                 + "\tmag " + (x*x + y*y + z*z + w*w);
        }
    }

Any ideas about where I should start?
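One classic pitfall to check in the in-place multiply above (flagging this as a suspicion, not a confirmed diagnosis): this.w is overwritten on the first line, and the already-modified value then feeds the x, y, z lines. Computing into temporaries first avoids that. A C++ sketch of the same Hamilton product:

    struct Quat { float w, x, y, z; };

    // Hamilton product a * b, computed into a fresh result so no component
    // of 'a' is overwritten while it is still needed.
    Quat multiply(const Quat& a, const Quat& b) {
        Quat r;
        r.w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
        r.x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
        r.y = a.w * b.y + a.y * b.w + a.z * b.x - a.x * b.z;
        r.z = a.w * b.z + a.z * b.w + a.x * b.y - a.y * b.x;
        return r;
    }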
How can I reduce a frustum to the subset that passes through a portal AABB?

I'm trying to implement portal-based occlusion culling: there are sectors and portals. When a portal is visible, the sector it is connected to is rendered. The sector is made of polygons and portals, and is sealed by a portal to the next sector. The portal has an axis-aligned bounding box. The view frustum has an array of mathematical planes in it: top, bottom, left, right, near and far. The view frustum planes change shape when in contact with a wall. When the view frustum planes are in contact with the portal's AABB, the sector gets rendered. Then an array of mathematical planes is added behind the portal to create a reduced view frustum. Reduced frustums are made from other portals in contact with the view frustum until there are no more portals in view. How can I perform the bolded step, where I find a new collection of frustum planes representing the portion of the original frustum that continues through the portal's AABB?
Order of rendering with transparency (OpenGL)

I tried to render using different blend configurations (glBlendFunc()), but I couldn't get the back object to render at certain angles. The first screenshot here shows one angle where the back object is not visible through the front one. In the second screenshot, I looked at the blocks from 180 degrees. Is there a way I could render them through? Or would I have to implement a sorting algorithm?
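In case a sketch helps: the standard fix for order-dependent artifacts with alpha blending is to draw opaque geometry first, then draw transparent objects sorted back-to-front by distance from the camera each frame. A minimal version (the Object type and camPos are hypothetical placeholders):

    #include <algorithm>
    #include <vector>
    #include <glm/glm.hpp>

    struct Object { glm::vec3 position; /* ... */ };

    // Sort transparent objects far-to-near so blending composites correctly.
    void sortBackToFront(std::vector<Object*>& transparent, const glm::vec3& camPos) {
        std::sort(transparent.begin(), transparent.end(),
                  [&](const Object* a, const Object* b) {
                      return glm::distance(camPos, a->position)
                           > glm::distance(camPos, b->position);
                  });
    }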
Rendering a metallic look

I would really like to get the same metallic reflection look as in CSR Racing for my game, in the simplest way possible. It doesn't have to be top-notch, but right now my rendering looks horrible and is in need of some serious improvement. Metallic reflection rendering seems pretty complex (for instance, looking at this), so I need some advice on what to try. I realize CSR Racing renders using some type of environment mapping. Can I cheat using spherical env maps, or do I need cube maps? (Either way I will use static images; it's too much work to generate them on the fly, and I'll keep each level in the same "tone", so I think it will work well enough.) How should I map my UVs for the env map? Or should I use the normals? If so, how do I transform 3D normals to 2D? What texture transformation should I use? I tried to use the camera transformation with the same UVs I'd have used for a diffuse texture (and using glEnable(GL_TEXTURE_GEN_S)), and there was some... result, but I think that's on the wrong track? Do I need to use specular maps, even for a shiny, clean metallic surface? When I look at what specular maps are used for, it seems like they hold information about what is matte and what is shiny. Any tips and tricks highly appreciated!
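On the "3D normals to 2D" question: the classic cheap trick is sphere mapping (a "matcap"): take the surface normal in view space and use its x and y components directly as texture coordinates. A GLM sketch (this simple mat3 use assumes no non-uniform scale in the transforms):

    #include <glm/glm.hpp>

    // Map a view-space normal to [0,1]^2 sphere-map (matcap) coordinates.
    glm::vec2 sphereMapUV(const glm::mat4& view, const glm::mat4& model,
                          const glm::vec3& normal) {
        glm::vec3 n = glm::normalize(glm::mat3(view * model) * normal);
        return glm::vec2(n.x, n.y) * 0.5f + 0.5f;
    }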
Access vertex data stored in a VBO in the shader

If I wanted to store extra data in a VBO for skinning (indices for indexing into an array of bone matrices, and floats for applying weights to those bones), how would I go about accessing that data in GLSL? I already know how to put the data in the buffer, interleaved with the rest of the data. For example, I want each vertex of a model to contain:

    Vec3 position
    Vec3 normal
    Vec4 color
    Vec2 texCoords
    int boneCount
    int[3] boneIndex
    float[3] boneWeight

Then drawing like:

    bind(vertexBufferID);
    glVertexPointer(3, GL11.GL_FLOAT, stridePlus, 0 * 4);
    glNormalPointer(GL11.GL_FLOAT, stridePlus, 3 * 4);
    glColorPointer(4, GL11.GL_FLOAT, stridePlus, (3 + 3) * 4);
    glTexCoordPointer(2, GL11.GL_FLOAT, stridePlus, (3 + 3 + 4) * 4);
    glVertexPointer(7, GL11.GL_FLOAT, stridePlus, (3 + 3 + 4 + 2) * 4);
    glDrawArrays(GL11.GL_TRIANGLES, 0, VPNCTEnd);

Then in my vertex shader I want to: get the indices of the bones; get the weights of the bones; get the bone transforms; and for each bone transform, apply it to gl_Position at the specified weight. What is the GLSL required for the last part of the process? It seems like I need to set an attribute pointer to the data somehow, but I'm not sure where to do that when the data is stored in the VBO.
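Custom per-vertex data like bone indices and weights is normally exposed through generic vertex attributes rather than the fixed-function pointers above: query (or bind) an attribute location, then point it into the interleaved VBO with its byte offset and stride. A C-style sketch, where the attribute names, layout, and shader fragment are illustrative assumptions:

    // After linking: query the generic attribute locations.
    GLint locIndices = glGetAttribLocation(program, "boneIndex");
    GLint locWeights = glGetAttribLocation(program, "boneWeight");

    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
    // Offsets assume floats, layout: pos(3) normal(3) color(4) uv(2) count(1) idx(3) w(3)
    glEnableVertexAttribArray(locIndices);
    glVertexAttribPointer(locIndices, 3, GL_FLOAT, GL_FALSE, stridePlus,
                          (const void*)((3 + 3 + 4 + 2 + 1) * 4));
    glEnableVertexAttribArray(locWeights);
    glVertexAttribPointer(locWeights, 3, GL_FLOAT, GL_FALSE, stridePlus,
                          (const void*)((3 + 3 + 4 + 2 + 1 + 3) * 4));

    // In the vertex shader:  attribute vec3 boneIndex;  attribute vec3 boneWeight;
    //   vec4 skinned = vec4(0.0);
    //   for (int i = 0; i < 3; ++i)
    //       skinned += boneWeight[i] * (bones[int(boneIndex[i])] * gl_Vertex);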
Build-time GLSL syntax validation

Is there a way to validate GLSL syntax at build time instead of run time? My application takes a long time to start, and I want to know at the earliest possible stage that my shaders are OK. I'm using Visual Studio/Xcode. The solution probably involves running a tool as a part of the build process, but I'm looking for such a tool.
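One concrete option: Khronos's reference front-end, glslang, ships a command-line tool (glslangValidator) that parses and validates shaders offline and returns a non-zero exit code on errors, which makes it straightforward to wire into a pre-build step in Visual Studio or Xcode. A sketch of such a step (the paths are placeholders):

    glslangValidator shaders/terrain.vert shaders/terrain.frag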
How do I call OpenGL methods that require pointers in C#?

I have found the following C code in this tutorial to draw a triangle with OpenGL 4:

    float points[] = {
        0.0f,  0.5f, 0.0f,
        0.5f, -0.5f, 0.0f,
       -0.5f, -0.5f, 0.0f
    };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), points, GL_STATIC_DRAW);

I use C# and OpenTK, so I tried to translate the code:

    float[] points = {
        0.0f,  0.5f, 0.0f,
        0.5f, -0.5f, 0.0f,
       -0.5f, -0.5f, 0.0f
    };
    uint[] vbo = new uint[1];
    GL.GenBuffers(1, vbo);
    GL.BindBuffer(BufferTarget.ArrayBuffer, vbo[0]);
    GL.BufferData(BufferTarget.ArrayBuffer, 9 * sizeof(float), points, BufferUsageHint.StaticDraw);

The problem is that GL.GenBuffers() and GL.BufferData() require pointers (Visual Studio shows int* or out int, and ref float/IntPtr). I've tried to fix this, but it didn't work:

    float[] points = {
        0.0f,  0.5f, 0.0f,
        0.5f, -0.5f, 0.0f,
       -0.5f, -0.5f, 0.0f
    };
    uint[] vbo = new uint[1];
    GL.GenBuffers(1, vbo);
    GL.BindBuffer(BufferTarget.ArrayBuffer, vbo[0]);
    unsafe {
        fixed (float* ppoints = points) {
            GL.BufferData(BufferTarget.ArrayBuffer, 9 * sizeof(float), ppoints, BufferUsageHint.StaticDraw);
        }
    }

So my question is: how do I use these methods in good C# style?
OpenGL strange lighting model problem

I've made a .obj reader in C++ and I've tried rendering some models. I got the whole concept from here: https://www.youtube.com/watch?v=KMWUjNE0fYI&list=PLRIWtICgwaX0u7Rf9zkZhLoLuZVfUksDP&index=9. So I've tried to copy the export settings in Blender that the guy used in the video (I'm not experienced in modelling, so I had trouble because I have a newer version), and I've split every edge so they stayed sharp even with the smooth shader. What my model looks like in Blender: (screenshot). I'm not sure if this is a texture or a lighting bug. The models that the guy provides in the video work just fine.

Edit: My drawing code is this:

    shader->Enable();
    shader->SetUniformMat4f("vw_matrix", viewMatrix);
    m_Model->GetVAO().Bind();
    m_Model->GetIBO().Bind();
    glActiveTexture(GL_TEXTURE0);
    m_Texture->Bind();
    glDrawElements(GL_TRIANGLES, m_Model->GetIBO().GetCount(), GL_UNSIGNED_INT, 0);
    m_Texture->Unbind();
    m_Model->GetVAO().Unbind();
    m_Model->GetIBO().Unbind();
    shader->Disable();

Export settings: Write Normals, Include UVs, Triangulate Faces, Objects as OBJ Objects.

Edit 2: I've added a picture of how the model looks without edge split (above). My vertex shader:

    #version 400 core
    in vec3 position;
    in vec2 textureCoords;
    in vec3 normal;
    out vec2 pass_textureCoords;
    out vec3 surfaceNormal;
    out vec3 toLightVector;
    uniform mat4 pr_matrix = mat4(1.0);
    uniform mat4 vw_matrix = mat4(1.0);
    uniform mat4 ml_matrix = mat4(1.0);
    uniform vec3 lightPosition;
    void main(void) {
        vec4 worldPosition = ml_matrix * vec4(position, 1.0);
        gl_Position = pr_matrix * vw_matrix * worldPosition;
        pass_textureCoords = textureCoords;
        surfaceNormal = (ml_matrix * vec4(normal, 0.0)).xyz;
        toLightVector = lightPosition - worldPosition.xyz;
    }
In WebGL, how can I efficiently render a mesh many times per frame, without instancing?

I'm using WebGL and I want to render a single mesh many times at different locations. I know about WebGL 2's drawArraysInstanced() and drawElementsInstanced() calls, but I think only Firefox supports them now. The only solutions I see are: (1) put all the meshes in a single buffer and render it with a single call to drawArrays()/drawElements(), but that would be a waste of memory because the mesh contains a lot of vertices, and if the meshes are not static I would need to re-upload the vertices to GPU memory every frame or so; (2) keep a single mesh in memory and render each instance with a separate call to drawArrays()/drawElements(), updating the model matrix between each call; I don't think that's the best way. Any ideas? Thanks.
Collisions between players at spawn points

I am working on an FPS/MMORPG and ran into a problem at spawn points when I enabled collision detection between players. Neither player would be able to move if both spawned at the same time, because both players would essentially be "trapped" within each other's meshes. What is the best way to resolve this problem?
Can't get normals to work correctly with lighting in OpenGL

I'm trying to light up a simple 2D triangle with my cursor as a diffuse light source, but I can't seem to set the normal correctly for the lighting to look right. The function that calculates the normal (m3dFindNormal) comes from the OpenGL SuperBible (math3d.h); it takes 3 points on a plane and computes the cross product of 2 vectors built from those points. Since this doesn't return a normal of length 1, I then call m3dNormalizeVector3 to get a normal vector of length 1. I then set the normal for the triangle using glNormal to that vector and draw the triangle. Using this code the triangle never lights up at all, but when I comment out the glNormal call it lights up on the vertices, so something must be wrong with the normal.

    glShadeModel(GL_SMOOTH);
    GLfloat diffuseLight[] = {0.9f, 0.9f, 0.9f, 1.0f};
    glEnable(GL_LIGHTING);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseLight);
    GLfloat lightPos[] = {x, y, 1.0f, 1.0f}; // x and y are the cursor's position
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    glEnable(GL_LIGHT0);
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
    glColor3f(0.75f, 0.75f, 0.75f);
    glBegin(GL_TRIANGLES);
    M3DVector3f vPoints[3] = {
        {SCREEN_WIDTH / 2 - 40, SCREEN_HEIGHT / 2 - 40, 0},
        {SCREEN_WIDTH / 2 + 40, SCREEN_HEIGHT / 2 - 40, 0},
        {SCREEN_WIDTH / 2, SCREEN_HEIGHT / 2, 0}
    };
    m3dFindNormal(vNormal, vPoints[0], vPoints[1], vPoints[2]);
    m3dNormalizeVector3(vNormal);
    glNormal3fv(vNormal);
    glVertex3fv(vPoints[0]);
    glVertex3fv(vPoints[1]);
    glVertex3fv(vPoints[2]);
    glEnd();

I know the light works, because things light up when I move the cursor around, but they light up at the wrong times. Does anyone know what I'm doing wrong?
Tweening colors in OpenGL

I'm making a sky gradient in OpenGL by drawing with glColorPointer and glDrawArrays. I would like to be able to change the sky colour from morning to daytime to evening, etc. I can either make a number of sprites and fade them in with my framework, or somehow tween the color vector in OpenGL over time and use a single sprite. The second one seems like the more efficient option, but my framework doesn't pass the time delta into the draw method for me to decide how far I've progressed into the fade. Here's my current code:

    glDisable(GL_BLEND);
    glDisable(GL_DITHER);
    glDisable(GL_FOG);
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glShadeModel(GL_SMOOTH);
    CGSize size = [[CCDirector sharedDirector] winSize];
    float w = size.width;
    float h = size.height;
    const GLfloat vertices[] = {
        0, 0,         w, 0,
        0, h / 3,     w, h / 3,
        0, h * 2 / 3, w, h * 2 / 3,
        0, h,         w, h,
    };
    const GLubyte colors[] = {
        254, 255, 134, 255,  254, 255, 134, 255,
        230, 157, 0,   255,  230, 157, 0,   255,
        230, 60,  0,   255,  230, 60,  0,   255,
        167, 0,   86,  255,  167, 0,   86,  255,
    };
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);
    glEnableClientState(GL_COLOR_ARRAY);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glEnable(GL_TEXTURE_2D);

which gives me: (screenshot)
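If you go with the single-gradient approach, tweening is just a per-channel linear interpolation between two key palettes, driven by your own clock (reading elapsed time yourself rather than relying on the draw callback). A sketch, where the key palettes are placeholders for arrays like the one above:

    #include <cstdint>

    // Linear interpolation between two RGBA byte arrays, t in [0, 1].
    void lerpColors(const uint8_t* from, const uint8_t* to, uint8_t* out,
                    int count, float t) {
        for (int i = 0; i < count; ++i) {
            out[i] = (uint8_t)(from[i] + (to[i] - from[i]) * t);
        }
    }

    // Usage: lerpColors(morningColors, dayColors, colors, 32, t);
    // then hand 'colors' to glColorPointer as before.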
1
How to implement a multi-platform Java 2D game engine's graphics? I'm not sure whether this question should be posted here. I'm trying to make a basic generic game engine in Java. Here's what I have so far.
public abstract class Device {
    public abstract void startDevice();
    public abstract void closeDevice();
    public abstract int getWidth();
    public abstract int getHeight();
    public abstract void displayImage(Image im);
}
public class Game {
    public final Device device;
    public synchronized void gameStart() { ... }
}
public abstract class GameEntity implements Serializable {
    public abstract void create(Game g);
    public abstract void updateAndDraw(Game g, Graphics2D g2d);
    public abstract void destroy(Game g);
}
So far, I have implemented two concrete classes of Device, SwingDevice and J2meDevice, and they work fine. Now the problem is I want to implement AndroidDevice. Everything would work fine except that I have forced the use of java.awt.Image and java.awt.Graphics2D to paint the Game's graphics off-screen. I then forward this Image to the device for rendering. Is there any way I can completely decouple the Device's graphics from the rest of the engine? I have two solutions but I doubt they'll fully achieve what I need. Stick with java.awt.Image: devices that support OpenGL will not fully utilize their hardware capabilities, as drawing is done on the CPU, and it might not work on platforms that can't import java.awt (I doubt their existence). Or create my own Graphics2D-like interface: this will make defining new devices harder, and again it might hide some of the device's features.
1
Send an OpenGL video stream via UDP. Is it possible to send the video stream of an OpenGL desktop app via UDP on Linux? I looked up FBOs and off-screen rendering but I still can't figure out how to extract the video stream and send it. I'm working with C, but if you have explanations in other languages, go ahead. Thanks. My original post is on Stack Overflow.
1
Shadow map shimmering, indexing outside the shadow map. I have tried to reduce shadow shimmering/flickering using the technique described here: http://msdn.microsoft.com/en-us/library/windows/desktop/ee416324%28v=vs.85%29.aspx It works as I want and shimmering is reduced, but sometimes I get artifacts: it looks like my code indexes space outside the shadow map. The article above mentions this, but I didn't find a solution. When I played with the code I also got black strips on the corners. Code:
// reduce shadow shimmering/flickering
Vector2 vecWorldUnitsPerTexel = Vector2(D.x() / (float)D.shadowMapSize(), D.y() / (float)D.shadowMapSize());
// get only x and y dimensions
Vector2 min2D = min.vector2(), max2D = max.vector2();
min2D /= vecWorldUnitsPerTexel;
min2D = Round(min2D);
min2D *= vecWorldUnitsPerTexel;
max2D /= vecWorldUnitsPerTexel;
max2D = Round(max2D);
max2D *= vecWorldUnitsPerTexel;
min.set(min2D, min.z);
max.set(max2D, max.z);
// crop matrix based on this article: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch10.html
Vector scale;
Vector offset;
scale.x = 2.0f / (max.x - min.x);
scale.y = 2.0f / (max.y - min.y);
scale.z = 1.0f / (max.z - min.z);
offset.x = -0.5f * (max.x + min.x) * scale.x;
offset.y = -0.5f * (max.y + min.y) * scale.y;
offset.z = -min.z * scale.z;
Matrix4 m;
m.x = Vector4(scale.x, 0, 0, 0);
m.y = Vector4(0, scale.y, 0, 0);
m.z = Vector4(0, 0, scale.z, 0);
m.w = Vector4(offset.x, offset.y, offset.z, 1.0f);
I think that I should store a slightly larger area in the depth map, but I'm not sure how to do this. I tried to change the scale of the crop matrix but it doesn't help. EDIT: It seems I've found a solution. When I round (or floor) the min and max values, I subtract one from the min value and add one to the max value. This makes the shadow map contain a slightly larger area and I don't see any artifacts.
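For reference, a minimal sketch of the padding fix described in the edit above, reusing the post's own variables and assuming hypothetical Floor/Ceil helpers analogous to its Round: snap the bounds outward to the texel grid and widen them by one texel on each side, so the cropped area is always slightly larger than the casters' bounds.
// Snap min down and max up to the texel grid, then pad by one texel each way.
min2D = (Floor(min2D / vecWorldUnitsPerTexel) - Vector2(1.0f, 1.0f)) * vecWorldUnitsPerTexel;
max2D = (Ceil(max2D / vecWorldUnitsPerTexel) + Vector2(1.0f, 1.0f)) * vecWorldUnitsPerTexel;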
1
Can't understand these UV texture coordinates (range is NOT 0.0 to 1.0). I am trying to draw a simple 3D object generated by Google SketchUp 8 Pro in my WebGL app; the model is a simple cylinder. I opened the exported file and copied the vertex positions, indices, normals and texture coordinates into a .json file in order to be able to use them in JavaScript. Everything seems to work fine, except for the texture coordinates, which have some pretty big values, like 46.331676, and also negative values. Now I don't know if I am wrong, but aren't 2D texture coordinates supposed to be in a range from 0.0 to 1.0 only? Well, drawing the model using these texture coordinates gives me a totally weird look, and I can only see the texture properly when I am very close (not really me, the camera) to the model, as if the texture has been drastically reduced in size and repeated infinitely across the model's faces. (Yes, I am using GL_REPEAT for the texture wrap.) What I noticed is that if I take all these coordinates and divide them by 10 or 100 I get a much more "normal" look, but still not in the 0.0 to 1.0 range. Here's my json file: http://pastebin.com/Aa4wvGvv Here are my GLSL shaders: http://pastebin.com/DR4K37T9 And here is the .X file exported by SketchUp: http://pastebin.com/hmYAJZWE I've also tried to draw this model using XNA, but it still doesn't work, using these HLSL shaders: http://pastebin.com/RBgVFq08 I tried exporting the same model to different formats: collada, fbx, and x. All of those yield the same thing.
1
Slick: create a new texture from part of an existing texture. Say I have the following image. I want to resize this texture so I can have resizable buttons with a texture. Now, as you can't repeat just a certain area of an image in OpenGL, I have to create a new texture from the center (from (0.1f, 0.1f) to (0.9f, 0.9f) in texture coordinates). How can I create a new texture from the inside of this image using the Slick library?
1
How can I render a semi-transparent model with OpenGL correctly? I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues, but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non-opaque alpha value, but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong, so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth-testing issue, so I tried playing around with enabling/disabling depth testing and back-face culling. Enabling back-face culling changes the output slightly, but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order-independent transparency implementations. edit: Vertex shader:
// varyings for the fragment shader
varying mediump vec3 LightIntensity;
varying highp vec3 VertexInModelSpace;
void main() {
    vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0);
    vec3 LightColor = vec3(1.0, 1.0, 1.0);
    vec3 DiffuseColor = vec3(1.0, 0.25, 0.0);
    // find the vector from the given vertex to the light source
    vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex);
    vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal);
    vec3 lightDirn = normalize(vec3(LightPosition - vertexInWorldSpace));
    // save vertexInWorldSpace
    VertexInModelSpace = vec3(gl_Vertex);
    // calculate light intensity
    LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn, normalInWorldSpace), 0.0);
    // calculate projected vertex position
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment shader:
varying vec3 LightIntensity;
varying vec3 VertexInModelSpace;
void main() {
    gl_FragColor = vec4(LightIntensity, 0.5);
}
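What the video shows is the classic unsorted-transparency artifact: with depth writes on, whichever of the model's own triangles happen to be drawn first block the ones behind them, and the pattern changes as the camera moves. A common minimal fix, sketched below (drawTransparentModel is a hypothetical stand-in for your draw call): render opaque geometry first, then render the transparent model with blending on, depth test on but depth writes off, and back faces culled so only the front surface shades each pixel.
// Opaque pass first, with normal depth state. Then:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);      // still test against opaque depth, but don't write
glEnable(GL_CULL_FACE);     // hide the model's back faces
glCullFace(GL_BACK);
drawTransparentModel();     // hypothetical draw call
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
The cull trick is exact only for convex models; for general meshes the full fix is sorting transparent triangles back to front, which is why you said you wanted to avoid the heavier order-independent techniques.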
1
Should I use retained mode or immediate mode? I'm trying to make an OpenGL wrapper for WinForms (.NET). Basically you code with GDI syntax but it gets rendered with OpenGL (using OpenTK's GLControl). Which mode should I use for rendering the UI? It has to be fast enough to support lag-free animations and image drawing. Immediate mode (glBegin, glEnd) is easy, but I've read there are performance issues. On the other hand, retained mode will be much more complex to implement. My preference is to first build it with immediate mode, then move to retained mode as I progress. Are there any issues with immediate mode (like missing functionality
1
What is GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS? I am a beginner in OpenGL, learning about textures. What I don't understand is how to determine how many texture units the GPU has. I heard that you can see how many texture units there are with the following code:
int total_units;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &total_units);
std::cout << total_units << '\n'; // the result is 192
Are there 192 texture units in my GPU? The documentation says: GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS: params returns one value, the maximum supported texture image units that can be used to access texture maps from the vertex shader and the fragment processor combined. If both the vertex shader and the fragment processing stage access the same texture image unit, then that counts as using two texture image units against this limit. The value must be at least 48. See glActiveTexture. So I wanted to know how many texture units can be used to access texture maps from the vertex and fragment shaders, and I wrote and ran the following code:
int vertex_units, fragment_units;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertex_units);
std::cout << vertex_units << "\n"; // the result is 32
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragment_units);
std::cout << fragment_units << "\n"; // the result is also 32
So 32 + 32 = 64. But why does GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS give me 192? I think I am missing something. What do I need to add up to get 192? And also, why are there only GL_TEXTURE0 to GL_TEXTURE31 macros in OpenGL? I thought these macros were per shader stage. Am I right?
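The combined limit is a separate implementation constant, so it cannot always be derived, but on this GPU 192 is plausibly 32 units times 6 programmable stages, not just vertex plus fragment. A sketch that queries every per-stage limit for comparison (the tessellation and compute enums exist only where the GL version or extensions expose them):
GLint v, tc, te, g, f, c, combined;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &v);
glGetIntegerv(GL_MAX_TESS_CONTROL_TEXTURE_IMAGE_UNITS, &tc);
glGetIntegerv(GL_MAX_TESS_EVALUATION_TEXTURE_IMAGE_UNITS, &te);
glGetIntegerv(GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS, &g);
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &f);          // fragment stage
glGetIntegerv(GL_MAX_COMPUTE_TEXTURE_IMAGE_UNITS, &c);  // GL 4.3+
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combined);
// If each stage reports 32, the sum 6 * 32 = 192 matches the combined value.
As for the macros: GL_TEXTURE0 through GL_TEXTURE31 are just the first 32 names; the spec defines GL_TEXTUREi as GL_TEXTURE0 + i, so units past 31 are addressed arithmetically with glActiveTexture(GL_TEXTURE0 + i), and the units are context-wide binding points shared by all stages rather than being per-shader.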
1
Small independent game development on a virtual machine. I've been learning OpenGL and SFML with C++ for about 6-8 months now, and I would like to work on a small personal game to put some of my skills to the test. I want to kill two birds with one stone and also increase my knowledge of Ubuntu and Linux development in general by developing this project on Ubuntu. Currently my main computer runs Windows 7, while I have an older computer that runs Fedora 18, so I set up an Ubuntu 12.04 LTS virtual machine on my main computer. The problem is: what effects does this have graphics-wise? I know games running in Windows virtual machines can often have graphical and input bugs. Should I not use a virtual machine? Or is this fine and am I simply overthinking it? Thanks! Note: I am using VM VirtualBox on my Windows 7 desktop.
1
Dynamic chunk loading with high FPS. But still chops I am creating a voxel world, (like any other person), but I currently have a small performance hit when loading unloading chunks. Right now I can load and unload chunks dynamically with "infinite" type world. I am using VBO's to store chunk data with float buffers. Also, (my big point), I am using Greedy meshing to only render the edges I need. My only thought of why it's not loading well is that I am using SSAO shading, which should have a hit on my FPS, but my FPS stays in the high 200's. Also, I am using frustum culling and normal culling. What I want is the player to have a really large render distance without actually rendering the chunks until the player is close. Almost like fog but instead of covering up unloading, it's actually reducing the detail as you move away. Also, when a player creates a voxel or destroys a voxel (basically any change in the chunk) it saves the chunk to a file in the corresponding world folder. Any ideas? Here is a video of the game in its' current stage. Here's a screenshot with a far render distance. Anything with a render distance that is far, hits the performance BAD
1
Getting choppy, jumpy animation. Hey, I'm trying to get a smooth animation for my player movement. Earlier I changed my frames to render at a fixed time step (1/60 sec). That makes my player run at a constant speed, but the animation now appears choppy, and it seems like the player jumps from one point to another instead of smoothly transitioning between them. Below is my code and a gif animation of the player movement. How do I make the animation appear smooth?
float deltaTime = 0;
float oldTime = 0;
// The direction in which the player will move
int playerX = 0;
int playerY = 0;
// callback method for keyboard events
void keyboard_callback(GLFWwindow* window, int key, int scancode, int action, int mods) {
    if (key == GLFW_KEY_W && action == GLFW_REPEAT) { playerX = 0; playerY = 1; }
    if (key == GLFW_KEY_S && action == GLFW_REPEAT) { playerX = 0; playerY = -1; }
    if (key == GLFW_KEY_D && action == GLFW_REPEAT) { playerX = 1; playerY = 0; }
    if (key == GLFW_KEY_A && action == GLFW_REPEAT) { playerX = -1; playerY = 0; }
}
// update player's position
void update() {
    player->move(playerX, playerY, TIME_PER_FRAME);
}
// set up glfw keyboard callback function and other stuff
void init() {
    glClearColor(0, 0, 0, 0);
    player->bindVertexAttributes(shader.getAttributeLocation("position"));
    glfwSetKeyCallback(window->getGLFWWindow(), keyboard_callback);
}
// wait logic
float expected_frame_end = glfwGetTime() + TIME_PER_FRAME;
void wait() {
    while (glfwGetTime() < expected_frame_end) {}
    expected_frame_end += TIME_PER_FRAME;
    playerX = playerY = 0;
}
// rendering function
void render() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    player->setUniformMatrixLocation(shader.getUniformLocation("projectionMatrix"), shader.getUniformLocation("transformationMatrix"));
    shader.useProgram();
    update();
    player->render();
    wait();
    shader.stopProgram();
}
// main function
int main(int argc, char** argv) {
    init();
    // main loop
    while (!glfwWindowShouldClose(window->getGLFWWindow())) {
        render();
        window->swapBuffers();
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
1
Create a crosshair in OpenGL. How do I draw a white crosshair in the middle of the screen in OpenGL? It's all well and good knowing how to render objects in 3D space, but I have literally no idea how to draw something that sticks to the screen no matter what. Would this require a shader, one that does not take the model-view-projection matrix into account? At what point would I draw the cross? After everything else, to coincide with the painter's algorithm? Or do I give it a z value?
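One way that needs no matrices at all, sketched below with hypothetical names (crosshairShader, crosshairVAO): give the crosshair its own trivial vertex shader that passes positions straight through as normalized device coordinates, disable depth testing, and draw it after the 3D scene so it always sits on top.
// GLSL vertex shader body: gl_Position = vec4(position, 0.0, 1.0);  (no MVP at all)
float cross[] = { -0.02f,  0.0f,   0.02f, 0.0f,     // horizontal bar in NDC
                   0.0f,  -0.03f,  0.0f,  0.03f };  // vertical bar (tweak for aspect ratio)
glDisable(GL_DEPTH_TEST);
glUseProgram(crosshairShader);   // hypothetical pass-through program
glBindVertexArray(crosshairVAO); // VAO holding the 4 vertices above
glDrawArrays(GL_LINES, 0, 4);
glEnable(GL_DEPTH_TEST);
Because the vertices are already in NDC, no z value or painter's-algorithm ordering is needed; disabling the depth test is what keeps it on top.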
1
OpenGL ES frame skipping causing visible artifacts. I'm using OpenGL ES 3.0 on Android 5.1, and am noticing the following issue. I tried to implement a game loop which does exactly 60 updates per second and renders as many frames as it can. Everything is smooth, but the object is rendered twice, I think due to interpolation by the hardware. If you zoom into the image (click to view in a separate tab) you'll see that the green square is rendered twice. And if I remove the catch-up code, it works correctly and renders fine. Now I'll show you my game loop. I'm using the GLSurfaceView class and doing my rendering in onDrawFrame of the renderer.
double now = TimeUtils.currentTime();
if (previous == -1) previous = now;
double delta = now - previous;
if (delta > SECOND) delta = frameTime;
lag += delta;
while (lag >= frameTime) {
    // This line plays catch-up. If commented out it works, otherwise the artifact occurs
    updates++;
    doUpdate(frameTime);
    lag -= frameTime;
}
frames++;
glClear(GL_COLOR_BUFFER_BIT);
doRender(frameTime);
if (now - lastStatsTime > SECOND) {
    updatesPerSecond = updates;
    framesPerSecond = frames;
    updates = frames = 0;
    lastStatsTime = now;
}
previous = now;
Is there anything that I'm doing wrong? I'm using a dynamic VBO for rendering; the lines are generated every frame and I use glBufferData to send them to the GPU. This happens every frame (this is being done for educational purposes, to learn how VBOs are streamed). Are there any issues that I'm forgetting?
1
glViewport() on Win10 doesn't draw new parts. I've been steadily working through building a basic 2D tile framework in Python. I'm trying to use modern OpenGL, so I have a simple shader and some vertex buffers, and it's mostly doing what I want so far. I have a Scene with a method for centering the viewport on a Sprite:
def centre_viewport(self, x, y):
    x = x - self.viewport_width / 2
    y = y - self.viewport_height / 2
    if x > 0:
        x = 0
    if y < 0:
        y = 0
    if x + self.viewport_width < self.width:
        x = self.width - self.viewport_width
    if y + self.viewport_height > self.height:
        y = self.height - self.viewport_height
    self.viewport_x = x
    self.viewport_y = y
    glViewport(x, y, self.viewport_width, self.viewport_height)
self.viewport_width and height are set when a scene is added to the window, so they have the same dimensions as whatever window is created. In my test game, the scene width and height are twice the window dimensions of 704x512. Everything works fine on my main Mac development machine. On my work Windows machine, though, when the sprite moves and causes the scene to scroll, the previously hidden parts of the scene are not drawn. My draw loop is very simple:
def draw(self):
    glBindBuffer(GL_ARRAY_BUFFER, self.sprite_vertex_buffer)
    glVertexAttribPointer(self.vertices, 3, GL_FLOAT, GL_FALSE, 0, None)
    glBindBuffer(GL_ARRAY_BUFFER, self.sprite_texture_buffer)
    glVertexAttribPointer(self.tex_coords, 2, GL_FLOAT, GL_TRUE, 0, None)
    glDrawArrays(GL_TRIANGLES, 0, self.sprite_vertex_count)
    glBindBuffer(GL_ARRAY_BUFFER, self.tile_vertex_buffer)
    glVertexAttribPointer(self.vertices, 3, GL_FLOAT, GL_FALSE, 0, None)
    glBindBuffer(GL_ARRAY_BUFFER, self.tile_texture_buffer)
    glVertexAttribPointer(self.tex_coords, 2, GL_FLOAT, GL_TRUE, 0, None)
    glDrawArrays(GL_TRIANGLES, 0, self.tile_vertex_count)
and as mentioned, it works fine on my Mac. Is glViewport() the appropriate way to be doing this, and why isn't it working properly on Windows?
1
What's the recommended way of doing a HUD for an android game? Basically the question is in the title. I'm creating a RTS game and I will need buttons like attack move attack ground, etc. I am not using any engine. When people do games in OpenGL for android (my case), do they ever use android components to control the game or do they create their components in the game? What are the general recommended approach, if there's any? How about more complex components like scrolling lists of items , etc? I would also appreciate you to pair your answer with a brief comment about how was your experience using the approach(es) you describe. Thanks )
1
Most efficient way of drawing a lot of cubes in OpenGL ES. I have to draw a lot of cubes in my OpenGL programme for Android. All the cubes have the same size but different colors. I know that calling glDrawArrays is an expensive operation, so I should call it as little as possible. But as far as I know I have to call it 6 times (once per side), and since I have more than 500 cubes that's not efficient at all. Does anyone have an idea what to do? Btw, I am using OpenGL ES 1.0. I saw that I can use one big VBO but I don't know how to do that.
1
Detect collisions with Bullet physics, to make a character controller. I inherited from btCollisionWorld::ContactResultCallback but I really have no idea how to use this virtual function:
btScalar addSingleResult(btManifoldPoint& cp, const btCollisionObjectWrapper* colObj0Wrap, int partId0, int index0, const btCollisionObjectWrapper* colObj1Wrap, int partId1, int index1)
I thought about using btCollisionWorld::ConvexResultCallback instead, but there is no method in btCollisionWorld to use it. For now my only goal is to move a btCollisionObject around and detect collisions with walls, to adjust the position and movement. I would just need the collision normal, some collision point, or anything else...
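A minimal sketch of how addSingleResult is normally used: you never call it yourself; you subclass the callback, hand an instance to btCollisionWorld::contactTest(), and Bullet invokes it once per contact point. Collecting the deepest contact's normal and penetration depth is enough to push a simple character controller out of a wall. The push-out policy at the end is an assumption of mine, not Bullet API.
struct PushOutCallback : public btCollisionWorld::ContactResultCallback {
    btVector3 normal{0, 0, 0};
    btScalar depth = 0;   // most negative distance seen (negative = penetrating)
    bool hit = false;
    btScalar addSingleResult(btManifoldPoint& cp,
                             const btCollisionObjectWrapper*, int, int,
                             const btCollisionObjectWrapper*, int, int) override {
        if (cp.getDistance() < depth) {        // keep the deepest penetration
            depth = cp.getDistance();
            normal = cp.m_normalWorldOnB;      // sign may need flipping depending on object order
            hit = true;
        }
        return 0;                              // return value is ignored by Bullet
    }
};

PushOutCallback cb;
world->contactTest(playerObject, cb);          // world: btCollisionWorld*, playerObject: btCollisionObject*
if (cb.hit) {
    btTransform& t = playerObject->getWorldTransform();
    t.setOrigin(t.getOrigin() + cb.normal * (-cb.depth));  // move out along the contact normal
}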
1
Casting a Matrix4 class to float*.
glUniformMatrix4fv(glGetUniformLocation(program, "projMatrix"), 1, false, (float*)&projMatrix);
projMatrix is an object of type Matrix4 where the first variable declared is a float array. Does (float*)&projMatrix therefore somehow retrieve this array? What does the cast appear to be doing?
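Yes, assuming Matrix4 is a standard-layout type whose first data member is the float array: the address of the object coincides with the address of its first member, so the cast hands glUniformMatrix4fv a pointer to 16 consecutive floats. A sketch of the layout assumption:
struct Matrix4 {
    float m[16];   // first member: 16 contiguous floats
    // non-virtual member functions are fine; they don't change the object layout
};

Matrix4 projMatrix;
float* p = (float*)&projMatrix;  // same address as projMatrix.m for a standard-layout type
This breaks the moment the class gains a virtual function or any member declared before the array. Also note the transpose argument: pass GL_FALSE only if the array is already stored column-major, the order glUniformMatrix4fv expects by default.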
1
App using LWJGL can't find display mode extension on Linux. I'm running an LWJGL app on an Ubuntu virtual machine with no physical graphics card. I set up Xvfb and Mesa3D but it fails with this exception:
java.lang.ExceptionInInitializerError: null
    at org.lwjgl.opengl.Pbuffer.createPbuffer(Pbuffer.java:234) [lwjgl.jar:na]
    at org.lwjgl.opengl.Pbuffer.<init>(Pbuffer.java:219) [lwjgl.jar:na]
    at org.lwjgl.opengl.Pbuffer.<init>(Pbuffer.java:190) [lwjgl.jar:na]
    ...
Caused by: java.lang.RuntimeException: org.lwjgl.LWJGLException: No display mode extension is available
    at org.lwjgl.opengl.Display.<clinit>(Display.java:141) [lwjgl.jar:na]
    ... 7 common frames omitted
Caused by: org.lwjgl.LWJGLException: No display mode extension is available
    at org.lwjgl.opengl.LinuxDisplay.init(LinuxDisplay.java:724) [lwjgl.jar:na]
    at org.lwjgl.opengl.Display.<clinit>(Display.java:138) [lwjgl.jar:na]
The app is run from cron using this script:
export DISPLAY=localhost:1.0
export XAUTHORITY=/home/ubuntu/.Xauthority
xauth add localhost:1 . 11111111111111111111111111111111
Xvfb :1 -screen 0 1024x768x16 -nolisten tcp &
/home/username/start.sh
Most of the steps above were adopted from here, but I'm not sure I did it correctly. Any help would be appreciated!
1
How to convert screen to world coordinates while using gluLookAt/gluPerspective or similar matrix transforms? I am just starting an adventure into looking under the hood of graphics for a game project I've been working on for a while, and I could use some guidance. I am using Python/Kivy (though that is not the core of the concern), and am trying to use projection and modelview matrices to perform screen-to-world coordinate conversion. I am using something similar to the gluLookAt and gluPerspective matrix transforms for those. The issue I'm running into is that the coordinates I get out of multiplying the mv and p matrices together, inverting them, then multiplying by NDC screen coords, are only either a fraction of a pixel away from the world position look_at is currently centered on, or at most a few pixels from that point. I know I'm missing something, and I would love it if someone could help me understand. I wrote a standalone example gist and made a short YouTube video showing the problem I'm having. https://youtu.be/UxbWQO9e0NE https://gist.github.com/spinningD20/951e49cb836f08c434a0e9ab0e90c766 The code in question is the screen_to_world method in the gist, when using the camera look_at/perspective method to create the MVP, which I will list here:
p = Matrix()
p.perspective(90., 16 / 9., 1, 1000)
self.canvas['projection_mat'] = p
self.canvas['modelview_mat'] = Matrix().look_at(w_x, w_y - 30, self.camera_scale * 350, w_x, w_y, 0, 0, 1, 0)
That is for creating the matrices, and...
def screen_to_world(self, x, y):
    proj = self.canvas['projection_mat']
    model = self.canvas['modelview_mat']
    # get the inverse of the current matrices, MVP
    m = Matrix().multiply(proj).multiply(model)
    inverse = m.inverse()
    w, h = self.size
    # normalize pos in window
    norm_x = x / w * 2.0 - 1.0
    norm_y = y / h * 2.0 - 1.0
    p = inverse.transform_point(norm_x, norm_y, 0)
    print('convert from screen to world', x, y, p)
    return p[:2]
This was originally written to convert coordinates when using the previous projection matrix built using translate and creating a clip space (also included in the example). While the implementation appears to be specific to Kivy, it is just a modelview matrix and projection matrix being used, and their Matrix.transform_point method used above is the same as multiplying a vec against the matrix in question. It can also include what appears to be the W part of a vec4, which I have also experimented with, with no apparent change. Here is a screenshot of the standalone example, painting where I have moved the mouse on the screen (red) and where the resulting world coordinate ends up being (green). The goal is for the converted coordinates in the world to fall directly under the red.
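For comparison, the usual unproject includes a perspective divide after multiplying by the inverse MVP; if the transform-point call skips the divide by w, points collapse toward the look-at center, which matches the symptom described. A C++/GLM sketch of the same computation under that assumption (P and V stand for the projection and modelview matrices):
#include <glm/glm.hpp>

// Unproject the mouse NDC at the near and far planes, then intersect with z = 0.
glm::mat4 invVP = glm::inverse(P * V);
glm::vec4 a = invVP * glm::vec4(norm_x, norm_y, -1.0f, 1.0f);
a /= a.w;                                   // the crucial perspective divide
glm::vec4 b = invVP * glm::vec4(norm_x, norm_y,  1.0f, 1.0f);
b /= b.w;
float t = -a.z / (b.z - a.z);               // ray parameter where the ray crosses z = 0
glm::vec3 hit = glm::vec3(a) + t * (glm::vec3(b) - glm::vec3(a));
In Kivy terms, that divide would mean splitting out the w component after the inverse transform before using the x and y results.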
1
Weird behaviour when limiting rotation on the Y axis. Okay, this is the code that works for an FPS camera, except that it allows the player to flip it over by going under/over 90 degrees of rotation on the Y axis:
transformation->rotateObjectFromRight(glm::radians(angleX), glm::tvec3<GLfloat>(0.0f, 1.0f, 0.0f));
transformation->rotateObjectFromLeft(glm::radians(angleY), glm::tvec3<GLfloat>(1.0f, 0.0f, 0.0f));
The transformation object holds the data required to construct the view matrix, among others a quaternion representing the rotation. The functions called construct a quaternion from the given data and then multiply it with the rotation quaternion either from the left (relative to global axes) or the right (relative to local axes) side. angleX and angleY are mouse X and Y offsets representing the angles to rotate around. The first line yaws the camera around the local Y axis, while the second then pitches it around the global X axis, creating the characteristic FPS camera movement. Okay. So far so good. This is the code that should limit the pitch so it never exceeds the 90-degree boundaries:
transformation->rotateObjectFromRight(glm::radians(angleX), glm::tvec3<GLfloat>(0.0f, 1.0f, 0.0f));
glm::tvec3<GLfloat> forwardVector = glm::rotate(glm::inverse(transformation->getRotation()), glm::tvec3<GLfloat>(0.0f, 0.0f, 1.0f));
GLfloat pitchAngle = forwardVector.y * 90.0f;
angleY = glm::clamp(angleY, pitchAngle - 90.0f, pitchAngle + 90.0f);
transformation->rotateObjectFromLeft(glm::radians(angleY), glm::tvec3<GLfloat>(1.0f, 0.0f, 0.0f));
The first and the last line are the same. What happens here? First I get the unit vector that lies on the Z axis and rotate it so that it represents the forward vector of the camera. I then take the Y component of that vector (ranging from -1 to 1) and multiply it by 90 to get degrees. Then I clamp angleY so that the current rotation combined with the new one never exceeds the 90-degree boundary. If the rotation were about to flip the camera overhead, it would change the new rotation angle so the camera looks straight up; similarly for underfoot flipping. However, the camera acts weird. It's fastest at pitch angles close to 0 degrees, but gets progressively slower as the pitch nears 90 degrees. If I move my mouse slower, the camera is faster; if I move it faster, it's slower. It's as though it's discarding the too-large/too-small angleY values instead of clamping them to the appropriate values (but it's not, and I can confirm via the debugger that all the calculated values look alright and are correct). What could be amiss here to cause such weird movement?
1
GLSL shader compiles, but source is empty. I'm trying to compile a GLSL shader, for which I use the following code. Initialization:
SDL_Window* boringInitStuff() {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    Uint32 windowFlags = SDL_WINDOW_OPENGL;
    SDL_Window* sdlWindow = SDL_CreateWindow("Boooring", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 400, 400, windowFlags);
    SDL_GL_CreateContext(sdlWindow);
    glewExperimental = GL_TRUE;
    glewInit();
    return sdlWindow;
}
File parser:
void readFile(std::string path, std::string& data) {
    std::ifstream f(path.c_str(), std::ios::binary);
    data.assign((std::istreambuf_iterator<char>(f)), (std::istreambuf_iterator<char>()));
}
Main:
int main(int argc, char** argv) {
    SDL_Window* sdlWindow = boringInitStuff();
    std::string buffer;
    readFile("./compile_test.vert", buffer);
    const char* cBuffer = buffer.c_str();
    GLuint shaderID = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shaderID, 1, &cBuffer, nullptr);
    glCompileShader(shaderID);
    while (!SDL_QuitRequested()) {
        SDL_GL_SwapWindow(sdlWindow);
    }
    return 0;
}
But when I try to inspect the source code in gDEBugger, the source code is gone. Linking of course doesn't work as well. The weird thing is that the compilation error checking works. EDIT: When I copy & paste the main part into another OpenGL project, it works.
1
How to create a second window in OpenGL/GLFW. I would like to have 2 windows: one for display purposes and one for settings buttons (for example, to change the color of the background in my main display). How would I go about having a second window for these settings? Would I just create another window?
GLFWwindow* window = glfwCreateWindow(WINDOW_WIDTH, WINDOW_HEIGHT, "Settings", nullptr, nullptr);
1
Problems implementing a screen-space shadow ray-tracing shader. Here I previously asked about the possibility of ray-tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment. The idea is to calculate the view-space coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel toward the light and its depth is compared with the depth at each pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance t (in p = o + t*d, where o = origin, d = direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that t would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D. Thus, some tweak is needed to make this work this way. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture-coordinate space get the 3D ray in view space. I did implement a simple version of the idea, which you can see in the following video: [video here] Shadows may seem a bit pixelated, but that's mostly because of the size of the step in t I chose. And here is the shader:
#version 330 core
uniform sampler2D DepthMap;
uniform vec2 projAB;
uniform mat4 projectionMatrix;
const vec3 light_p = vec3(-30.0, 30.0, 10.0);
noperspective in vec2 pass_TexCoord;
smooth in vec3 viewRay;
layout(location = 0) out float out_AO;

vec3 CalcPosition(void) {
    float depth = texture(DepthMap, pass_TexCoord).r;
    float linearDepth = projAB.y / (depth - projAB.x);
    vec3 ray = normalize(viewRay);
    ray = ray / ray.z;
    return linearDepth * ray;
}

void main(void) {
    vec3 origin = CalcPosition();
    if (origin.z < -60.0) discard;
    vec2 pixOrigin = pass_TexCoord; // tex coords
    vec3 dir = normalize(light_p - origin);
    vec2 texel_size = vec2(1.0 / 600.0);
    float t = 0.1;
    ivec2 pixIndex = ivec2(pixOrigin / texel_size);
    out_AO = 1.0;
    while (true) {
        vec3 ray = origin + t * dir;
        vec4 temp = projectionMatrix * vec4(ray, 1.0);
        vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
        ivec2 newIndex = ivec2(texCoord / texel_size);
        if (newIndex != pixIndex) {
            float depth = texture(DepthMap, texCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            if (linearDepth > ray.z + 0.1) {
                out_AO = 0.2;
                break;
            }
            pixIndex = newIndex;
        }
        t += 0.5;
        if (texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break;
    }
}
As you can see, here I just increment t by some arbitrary amount, calculate the 3D ray, and project it to get the pixel coordinates, which is not really optimal. I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.
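On the 2D-to-3D correspondence itself: this is the standard perspective-correct interpolation identity rather than anything from the post, but it is what makes per-pixel stepping possible without re-projecting every sample. View-space z is not linear along a screen-space line, yet its reciprocal is, so if the ray enters the screen with view-space depth \(z_0\) and exits toward the light's projection with depth \(z_1\), then at screen-space fraction \(s \in [0, 1]\):
\[ \frac{1}{z(s)} \;=\; \frac{1-s}{z_0} \;+\; \frac{s}{z_1} \]
So the depth to compare against the depth buffer at each 2D DDA step is \(z(s) = \left[(1-s)/z_0 + s/z_1\right]^{-1}\), recovering the 3D ray's depth from pure 2D stepping with no per-step matrix multiply.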
1
Correct rotation and translation with a 4x4 matrix. I am using a 4x4 matrix to transform verts in a shader. I multiply an identity matrix by a rotation matrix by a translation matrix. I am trying to first rotate the verts and then translate them; however, in my result it appears that the verts are being translated first and then rotated. My matrix looks something like this:
m00 m01 m02 tx
m10 m11 m12 ty
m20 m21 m22 tz
0   0   0   1
I am not using OpenGL's fixed-function pipeline; I am multiplying matrices on the client side and uploading the matrix to a GLSL shader. If it helps, I am using my own matrix multiplication code, but I have recreated this problem using matrices on my graphing calculator, so I don't believe my matrix code has errors. I'll include a visual aid if needed.
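For reference, a sketch of the order issue under the usual column-vector convention (v' = M * v, the convention GLSL uses and the one the matrix layout above suggests): the matrix written closest to the vector is applied first, so "rotate then translate" must be built as M = T * R, not R * T.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// tx, ty, tz and angle are placeholders for your own values.
glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(tx, ty, tz));
glm::mat4 R = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0)); // angle in radians
glm::mat4 rotateThenTranslate = T * R; // R hits the vertex first: spin in place, then move
glm::mat4 translateThenRotate = R * T; // T first: the object orbits around the origin
If your own math library uses row vectors (v' = v * M) instead, the required order flips, which is a very common source of exactly this symptom.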
1
How to stop scrolling text in OpenGL. I want to set a limit for scrolling text in OpenGL & GLUT. How can I stop the scrolling at 250? Here is my code. I tried with an if condition but it's not working: the scrolling doesn't stop at 250, it scrolls infinitely.
#include <fstream>
#include <iostream>
#include <stdlib.h>
#include <GL/glut.h>
using namespace std;
float yr = 0;
float translate = 0.0f, angle = 0.0f; // identifiers
void introscreen();
void screen();
void specialKey(int key, int x, int y) {
    switch (key) {
    case GLUT_KEY_UP: translate += 1.0f; break;
    case GLUT_KEY_DOWN: translate -= 1.0f; break;
    case GLUT_KEY_LEFT: angle += 1.0f; break;
    case GLUT_KEY_RIGHT: angle -= 1.0f; break;
    }
    glutPostRedisplay();
}
void SpeedText() {
    GLfloat y;
    GLfloat y2;
    GLfloat fSize = 5;
    GLfloat fCurrSize;
    fCurrSize = fSize * 2;
    for (y = 0.0f; y < 250.0f; yr = y += 5.0f) {
        glLineWidth(fCurrSize);
        glBegin(GL_LINES);
        glVertex3f(-200.0f, y + translate, 0);
        glVertex3f(-180.0f, y + translate, 0);
        glEnd();
        fCurrSize -= 1.0f;
        introscreen();
        if (y + translate > 50) y = 50;
    }
}
void renderbitmap(float x1, float y1, void* font, char* string) {
    char* c;
    glRasterPos2f(x1, y1);
    for (c = string; *c != '\0'; c++) glutBitmapCharacter(font, *c);
}
void introscreen(void) {
    glColor3f(0, 1, 0);
    char buf[10] = { '\0' };
    for (int row = 0; row < 250 - yr; row += 5) {
        sprintf_s(buf, "%d", row);
        renderbitmap(-220, (translate + row), GLUT_BITMAP_TIMES_ROMAN_24, buf);
    }
}
int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(width, height);
    glutInitWindowPosition(50, 100);
    glutCreateWindow("HUD Lines");
    display();
    SpeedText();
    glutSpecialFunc(specialKey);
    glutMainLoop();
    return 0;
}
1
Suitability of ground fog using layered alpha quads? A layered approach would use a series of massive alpha textured quads arranged parallel to the ground, intersecting all intervening terrain geometry, to provide the illusion of ground fog quite effectively from high up, looking down, and somewhat less effectively when inside the fog and looking toward the horizon (see image below). Alternatively, a primarily shader based approach would instead calculate density as function of view distance into the ground fog substrate, and output the fragment value based on that. Without having to performance test each approach myself, I would like first to hear others' experiences (not speculation!) on what sort of performance impact the layered alpha texture approach is likely to have. I ask specifically due to the oft cited impacts of overdraw (not sure how fill rate bound your average desktop system is). A list of games using this approach, particularly older games, would be immensely useful if this was viable on pre DX9 OpenGL2 hardware, it is likely to work fine for me. One big question is in regards to this sort of effect (Image credit goes to Lume of lume.com) Notice how the vertical fog gradation is continuous smooth. OTOH, using textured quad layers, I can only assume that layers would be mighty obvious when walking through them the more sparse they were, the more obvious this would be. This is in contrast to where fog planes are aligned to face the player every frame, where this coarseness would be much less obvious.
1
Which C/C++ model animation library for OpenGL? I'm fairly new to game development, played around with XNA before, and am just learning OpenGL and C++ now. I'm interested to know which C/C++-based model animation libraries are out there and which you would recommend. I don't have any particular model format in mind yet, but probably a format that is supported by a free modelling tool like Blender.
1
How to compose a matrix to perform isometric (dimetric) projection of a world coordinate? I have a 2D unit vector containing a world coordinate (the player's direction), and I want to convert that to screen coordinates (classic isometric tiles). I'm aware I can achieve this by rotating around the relevant axes, but I want to see and understand how to do this using a purely matrix-based approach. Partly because I'm learning 'modern OpenGL' (v2+), and partly because I will want to use this same technique for other things, so I need a solid understanding, and my math ability is a little lacking. If needed: my screen's coordinate system has its origin at the top left, with x and y pointing right and down respectively. Also, my vertex positions are converted to the NDC range in my vertex shader, if that's relevant. The language is C++ with no supporting libraries.
1
How do I make a gun align to the camera matrix in OpenGL/GLSL? I am trying to program an FPS game with OpenGL, using the OpenGL 3.3 programmable pipeline. I have a gun and a camera that I loaded. When I load all of the assets for the game, I put everything into a 4x4 matrix, as it is compatible with the view matrix. I set the corresponding matrix for the model equal to all the needed variables, then I use another variable that I defined in a header file to draw the actual model... I did some searching because I do not have the code with me, and the closest I could find is learnopengl's model loading tutorial, as that is the most similar. What I have tried: I have tried multiplying the matrix holding the gun's world info, which I will call the model-view matrix (position, scale, and so on), by the view matrix. I have also tried the same but multiplying the model-view matrix by the inverse of the view matrix. I already set the gun's position equal to the camera's position, but when I run the application and try to move around, the gun still moves away from the camera. I knew a way to fix this in OpenGL 1.1 by using glLoadIdentity, which is now deprecated in the core profile of OpenGL 3+. When I try to set the two matrices equal, I get different results because the gun doesn't move at all. When I move the camera with inverse(view) * gun, it moves away from the camera... I learned about the inverse matrix idea from here: https://forums.khronos.org/showthread.php/81932 The gun matrix times inverse(view matrix) has worked the best... this link is where I found it.
1
When is the Z coordinate normalized in GLSL? I thought that whenever you transform an object to world space, then view space, and finally screen space, the last matrix you apply (the projection matrix) normalizes the z values between 0 and 1. However, I'm getting big z coordinates, which implies that the projection matrix didn't normalize them. Am I doing something wrong? I mean, all I do is:
gl_Position = projection * view * world * gl_Vertex;
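This is expected: the projection matrix alone does not normalize z. It produces clip coordinates, and the GPU performs the divide by w after the vertex shader runs, so large values in gl_Position.z are normal. Normalized depth only exists after that divide; with OpenGL's default conventions:
\[ z_{\mathrm{ndc}} = \frac{z_{\mathrm{clip}}}{w_{\mathrm{clip}}} \in [-1,\,1], \qquad z_{\mathrm{depth}} = \frac{z_{\mathrm{ndc}} + 1}{2} \]
The second mapping to the [0, 1] depth-buffer range is the viewport transform under the default glDepthRange(0, 1).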
1
OpenGL textures in bitmap mode. For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G (T 1) 2 );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);
The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised. 1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-px-wide spaces between them. However, it appears blank. 2) Next, I write with 0x000000FF. By my apparently flawed understanding of bitmap mode, I would expect this to produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar. 3) 0x55555555 = 1010101010101010101010101010101 in binary, therefore writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color. 4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation. I have reached the conclusion that, even in GL_BITMAP mode, the texturing unit is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion. Questions: 1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturing unit to treat each byte as 8 elements, as the spec suggests? 2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am however forcing a 3.0 context.) 3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
1
How can I split up a large model into an octree, code-wise? For collision detection, I am attempting to split up a large model into an octree. If we take this black thing to be the ship (104,000 vertices), then I would like to split up the faces. I could easily split them up, but the problem is one of location. For example, if we take the bottom-left chunk, how can I know that the projectile (in this case, a cannonball) is at that specific chunk? And how can I define the areas of the chunks in the first place? And for that matter, I need them to rotate with the ship as well! I'm hopelessly lost as to how to do this, and so I'm hoping someone can help me out.
1
Framerate limited by lack of mouse movement? Using Torque, it appears that the program is running at around 25fps when the mouse is still, but as long as I keep the mouse moving, the framerate can hit well over 300fps. What in the world would cause the framerate to be tied to mouse movement?
1
How do I implement object picking, using OBB in OpenGL? I am trying to make 3D drawing software. I want to have a drag feature, so I am implementing object picking with the OBB algorithm. I am having problems understanding the algorithm, and my implementation thus has bugs. How do I implement object picking using OBBs in OpenGL? As I am new to OpenGL (freeglut library), a step-by-step explanation would be helpful.
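For the picking test itself, the standard approach is the ray-OBB "slab" intersection: intersect the mouse ray against the three pairs of parallel planes defined by the box's axes. A self-contained sketch using GLM types; the OBB struct fields are assumptions about how you might store the box, not from the original post.
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

struct OBB {
    glm::vec3 center;       // box center in world space
    glm::vec3 axis[3];      // three orthonormal local axes in world space
    glm::vec3 halfExtent;   // half-size along each axis
};

// Returns true (and the hit distance tHit) if ray (origin o, unit direction d) hits the box.
bool rayVsOBB(const glm::vec3& o, const glm::vec3& d, const OBB& box, float& tHit) {
    float tMin = 0.0f, tMax = 1e30f;
    glm::vec3 p = box.center - o;
    for (int i = 0; i < 3; ++i) {
        float e = glm::dot(box.axis[i], p);
        float f = glm::dot(box.axis[i], d);
        if (std::abs(f) > 1e-6f) {
            float t1 = (e + box.halfExtent[i]) / f;   // entry/exit through this slab pair
            float t2 = (e - box.halfExtent[i]) / f;
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;            // intervals don't overlap: miss
        } else if (-e - box.halfExtent[i] > 0 || -e + box.halfExtent[i] < 0) {
            return false;                             // ray parallel to slab and outside it
        }
    }
    if (tMax < 0) return false;                       // box is entirely behind the ray
    tHit = (tMin > 0) ? tMin : tMax;
    return true;
}
You would build the mouse ray by unprojecting the cursor at the near and far planes, then test every object's OBB and pick the one with the smallest tHit.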
1
Calling glGetError() in release builds? Currently, I'm calling glGetError() after each OpenGL function call in order to be able to detect and report bugs. I've been reading that glGetError() calls should be reduced to once per frame in release builds. But if I do this and something does not work properly, I won't be able to report where the code is failing; I would get an exception report saying that something related to OpenGL is not working properly, but I wouldn't be able to detect where the code is failing. What are the most common approaches to reporting errors related to OpenGL? Should I keep all the glGetError() calls, or should I call glGetError() just once per frame?
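A common compromise, sketched below as one option rather than the canonical answer: wrap calls in a macro that checks and reports with file/line information in debug builds and compiles down to the bare call in release builds.
#include <cstdio>

#ifndef NDEBUG
#define GL_CHECK(call) do { \
    call; \
    GLenum err; \
    while ((err = glGetError()) != GL_NO_ERROR) \
        fprintf(stderr, "GL error 0x%x at %s:%d (%s)\n", err, __FILE__, __LINE__, #call); \
} while (0)
#else
#define GL_CHECK(call) call
#endif

// Usage: GL_CHECK(glBindTexture(GL_TEXTURE_2D, tex));
Where a debug context and GL 4.3 or KHR_debug are available, registering glDebugMessageCallback once is usually better still: the driver reports errors with context, and your release code stays completely free of glGetError calls.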
1
How do I know if these profiler values are good or bad? I'm making a 2D game for Android mobile using LibGDX and Java in Android Studio. My game mostly runs alright at 60 FPS, but from time to time it drops to 50 FPS. I'm trying to find out what causes it. I came across this profiler and implemented it, and here are the values in a very "messy" frame with lots of objects going around and messing around:
06 19 15 14 21.088 9999 10033 com.gadarts.parashoot.android I GL Report Calls 498
06 19 15 14 21.088 9999 10033 com.gadarts.parashoot.android I GL Report Draw Calls 37
06 19 15 14 21.088 9999 10033 com.gadarts.parashoot.android I GL Report Shader Switches 8
06 19 15 14 21.088 9999 10033 com.gadarts.parashoot.android I GL Report Bindings 36
06 19 15 14 21.088 9999 10033 com.gadarts.parashoot.android I GL Report Vertex Count 31.945946
My question is: how am I supposed to know whether these are considered high or low values? I read the LibGDX documentation but it doesn't give any direction or indication about what the values mean in practice. Yes, it might be an opinion-based question, but at least someone can give relevant direction using these values? Thanks in advance.
1
How do I move the camera in 2D LWJGL/OpenGL? I am making a top-down RPG game with LWJGL, but I can't figure out how to make the camera follow the player. I've tried using GLU.gluLookAt(), but it seems to be designed for 3D, and when I try it with my game everything currently on the screen disappears. I have no idea where the camera gets moved to. Please explain simply how to use GLU.gluLookAt(), or a better alternative for moving the camera.
public class Camera {
    public float x = 0;
    public float y = 0;
    public void moveCamera(float toX, float toY) {
        GLU.gluLookAt(x + toX, y + toY, 0.0f, 0.0f, 0.0f, 0.0f, 0f, 1f, 0f);
    }
}
1
Android OpenGL and non-premultiplied textures. As I understand it, by default all bitmaps get the alpha channel premultiplied in Android if loaded using BitmapFactory. My setup: I have a 32-bit bitmap (.bmp) where the alpha channel holds a gloss map and the RGB holds a specular map. Issue: I found out that I can load the bitmap without premultiplication using the bitmap option opts.inPremultiplied = false. Result: When I render the alpha channel of the texture like this in the shader:
gl_FragColor.rgb = vec3(texture.a);
the resulting mesh is all white and not shades of gray as expected. I upload the texture with this method:
GLUtils.texImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image, 0);
I have also tried the GLES20.glTexImage2D approach with the same result, although I might have done something wrong there. Can anyone spot where I have gone wrong or what else I need to do?
1
Why are some of my normals facing away from the camera? I'm trying to use WebGL to render some simple models, and I'm running into issues where pixels near the edge of my model pass normals to my fragment shader that point away from the camera. This is of course messing with my attempts to use physically based shading, because BRDFs are only defined in the hemisphere around the normal vector. These normals aren't just slightly pointing away from the screen; if that were the case, I would chalk it up to floating-point error and clamp to acceptable values. But some of my pixels have v dot n values of -0.5 or less. This happens on pretty much any mesh I try to render. I have verified that my vertex normals point away from the mesh. I have back-face culling turned on. This happens with both perspective and orthographic projections. I have tested in both Firefox and Chrome. Everything else about my render looks fine. My vertex shader looks like this:
precision mediump float;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
attribute vec3 vertexPos_model;
attribute vec3 vertexNormal_model;
varying vec3 pos_world;
varying vec3 normal_world;
void main() {
    gl_Position = P * V * M * vec4(vertexPos_model, 1.0);
    pos_world = (M * vec4(vertexPos_model, 1.0)).xyz;
    // We can use M here instead of its inverse transpose because it does not scale the model.
    normal_world = mat3(M) * vertexNormal_model;
}
And my fragment shader looks like this:
precision mediump float;
uniform vec3 cameraPos_world;
varying vec3 pos_world;
varying vec3 normal_world;
void main() {
    vec3 n = normalize(normal_world);
    vec3 v = normalize(cameraPos_world - pos_world);
    float vAngleCos = dot(n, v);
    // This will highlight pixels whose view vectors are outside of the hemisphere around n.
    gl_FragColor = vec4(vec3(clamp(-vAngleCos, 0.0, 1.0)), 1.0);
}
Rendering with the fragment shader above gives me results like this. Apologies if I'm missing something obvious!
1
Understanding the ModelView matrix. I want to analyze each component of my 4x4 ModelView matrix. I have learned that the upper-left 3x3 block of the ModelView matrix stores the rotation. If I want my object to have no rotation with respect to the camera, my ModelView matrix looks like this. How do I change my ModelView matrix if I want NO translation or scaling? Can anyone explain the maths behind this?
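A sketch of the two edits, assuming the usual column-major OpenGL layout and no shear: the translation lives in the fourth column (indices 12-14 of a flat float[16]), so zero those to remove translation; the scale is the length of each of the first three column vectors, so normalize them to remove scale, leaving pure rotation.
#include <cmath>

// mv is a float[16], column-major: columns 0..2 = rotation*scale, column 3 = translation.
mv[12] = mv[13] = mv[14] = 0.0f;           // remove translation
for (int c = 0; c < 3; ++c) {              // remove scale: normalize each basis column
    float len = sqrtf(mv[4*c]*mv[4*c] + mv[4*c+1]*mv[4*c+1] + mv[4*c+2]*mv[4*c+2]);
    mv[4*c] /= len; mv[4*c+1] /= len; mv[4*c+2] /= len;
}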
1
OpenGL fragment shader: interpolation vs. inverse calculation. I have found OpenGL fragment shader tutorials on the web, some that use inverse calculations to revert the fragment coordinate back to world space, and others that interpolate the position. With that said... Is there a difference between the interpolated world-space position and the actual world-space position calculated using inverse matrices and the fragment's window-space coordinates? Does anyone know of any resources that would help me understand the mathematical differences between these two approaches, if they are different? If they aren't different, then why would anyone use inverse matrices instead of interpolating, which seems less expensive? I wrote a shader to test 1, but I cannot tell if rounding errors are to blame for the differences or not. Thanks!
1
Proper way to encapsulate a shader into different modules. I am planning to build a shader system which can be accessed through different components/modules in C++. Each component has its own functionality, like transform-related stuff (handling the MVP matrix, ...), texture handling, light calculation, etc. Here's an example: I would like to display an object which has a texture and a toon-shading material applied, and it should be movable. So I could write ONE shading program that handles all 3 functionalities, accessed through 3 different components (texture handler, toon shading, transform). This means I have to take care of feeding a GLSL shader with different uniforms/attributes, which implies knowing all the necessary uniform locations and attribute locations that the GLSL shader owns. It would also be necessary to provide different algorithms to calculate the value of each input variable. Similar functions would be grouped together in one component. A possible way would be to wrap all shaders in their own definition file written in JSON/XML and parse that file in C++ to get all the input members, then create and compile the resulting GLSL. But maybe there is another way that is not so complex? So I'm searching for a way to build a system like that, but I'm not sure yet which is the best approach.
1
How could RTT be so slow on my Intel card? I simply draw something into a render target and use it as a normal texture. It has always worked great for me with my NVIDIA video card. But today I found my program ran terribly slowly (less than 5 fps) on an Intel card. After profiling I found glGenerateMipmapEXT is the troublemaker: it costs most of the CPU time. Here's how I bind a RT texture:
glBindTexture(GL_TEXTURE_2D, m_textureID);
glGenerateMipmapEXT(GL_TEXTURE_2D);
Without glGenerateMipmapEXT the texture is nothing but a pure white picture. Is something wrong with my RTT?
1
Java LWJGL: How can I click to interact with objects? I want to be able to click on a monster to walk to him and start attacking him. The part that doesn't make sense to me is the conversion between the mouse position and the actual terrain position. There are camera angles to worry about, heights, separate terrains... how is this done? I am using Java/LWJGL and rendering with OpenGL 4.4.
1
How can I make OpenGL textures scale without becoming blurry? I'm using OpenGL through LWJGL. I have a 16x16 textured quad rendering at 16x16. When I change its scale amount, the quad grows, then becomes blurrier as it gets larger. How can I make it scale without becoming blurry, like in Minecraft? Here is the code inside my RenderableEntity object:
public void render() {
    Color.white.bind();
    this.spriteSheet.bind();
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(0, 0);
    GL11.glVertex2f(this.x, this.y);
    GL11.glTexCoord2f(1, 0);
    GL11.glVertex2f(getDrawingWidth(), this.y);
    GL11.glTexCoord2f(1, 1);
    GL11.glVertex2f(getDrawingWidth(), getDrawingHeight());
    GL11.glTexCoord2f(0, 1);
    GL11.glVertex2f(this.x, getDrawingHeight());
    GL11.glEnd();
}
And here is code from my initGL method in my game class:
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glClearColor(0.46f, 0.46f, 0.90f, 1.0f);
GL11.glViewport(0, 0, width, height);
GL11.glOrtho(0, width, height, 0, 1, -1);
And here is the code that does the actual drawing:
public void start() {
    initGL(800, 600);
    init();
    while (true) {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
        for (int i = 0; i < entities.size(); i++) {
            ((RenderableEntity)entities.get(i)).render();
        }
        Display.update();
        Display.sync(100);
        if (Display.isCloseRequested()) {
            Display.destroy();
            System.exit(0);
        }
    }
}
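Minecraft's crisp pixel scaling comes from nearest-neighbour texture filtering. GL_TEXTURE_MAG_FILTER is the parameter that matters when enlarging, and GL_LINEAR (the common loader default) is what produces the blur. A sketch of the fix, set once after binding the texture; in LWJGL these same calls live on the GL11 class:
glBindTexture(GL_TEXTURE_2D, textureId);  // textureId: your sprite sheet's id
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);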
1
How can I implement real time mutual object reflection? So, given a scene like this (cubemap skybox with "real" spheres) Everything looks great, except for the fact that the spheres don't reflect each other. What's a good way to go about this? The first thing that came to mind was to render an environment map for each cube, and apply that to each cube along with the skybox. However, that will be insanely slow when updated per frame, given that the spheres are moving relative to each other and a static map won't work. I realize this may fall under the category of ray tracing and be difficult to achieve real time, but perhaps someone has addressed this problem in the past?
Rotation of a ball moving on a surface I have a ball that moves along a platform. The ball is characterized as a sphere that has a radius and a position. The platform is basically a rectangle consisting of two vertex triangles; its class also stores the normal of the platform. While moving the ball and applying the physics of gravity doesn't pose a problem, I just can't get the sphere to rotate correctly. To represent the rotation(s) I'm using Qt's QQuaternion class. Here's my current rotation code

QVector3D right = QVector3D::crossProduct(tile->getSurfaceNormal(), step).normalized();
QVector3D forward = QVector3D::crossProduct(right, tile->getSurfaceNormal()).normalized();
float radian = QVector3D::dotProduct(ball - position, forward) / radius;
angularRate = ((radian / interval) * 180.0f / M_PI) * right;
QQuaternion rotation(angularRate.length() * interval, angularRate);
orientation = rotation * orientation;

The orientation quaternion is initialized like this

orientation = QQuaternion(0.0f, QVector3D(0.0f, 0.0f, 1.0f));

When drawing the ball I update the model matrix (QMatrix4x4) like this

modelMatrix.setToIdentity();
modelMatrix.translate(position.x(), position.y(), position.z());
modelMatrix.rotate(orientation);
modelMatrix.scale(radius);

What I noticed when debugging is that the final model matrix gets NaN values in the first three columns. Is there any physical flaw in my calculation or could it have to do with the model matrix itself? If that's important, here's the full code for updating the ball's properties

void Ball::update(float interval)
{
    collision = false;

    // Calculate new velocity
    velocity += (MOVE_FORCE / mass * interval) * pushDirection;
    velocity.setY(velocity.y() - (GRAVITY * interval));

    // Collision detection
    QVector3D normal(0.0f, 0.0f, 0.0f);
    QVector3D step = interval * velocity;
    QVector3D ball = position + step;

    // Check all tiles for collision with ball
    for (unsigned int i = 0; i < tiles->length(); i++) {
        Tile *tile = tiles->at(i);
        QVector<QVector3D> mainVertices = tile->getVertices();
        QVector3D a1 = VectorMath::closestPointOnTriangle(ball, mainVertices[0], mainVertices[1], mainVertices[2]);
        QVector3D a2 = VectorMath::closestPointOnTriangle(ball, mainVertices[2], mainVertices[3], mainVertices[0]);
        float d1 = (ball - a1).length();
        float d2 = (ball - a2).length();
        QVector3D a = (d1 < d2) ? a1 : a2;
        QVector3D aN = a + tile->getAdjustedSurfaceNormal();
        float aBallLength = (a - ball).length();
        float aNormalBallLength = (aN - ball).length();

        if (aBallLength < radius || aNormalBallLength < radius) {
            QVector3D distance = (aBallLength < aNormalBallLength) ? a - ball : aN - ball;
            float l = distance.length();
            QVector3D move = ((radius - l) / l) * distance;

            // Rotation
            QVector3D right = QVector3D::crossProduct(tile->getSurfaceNormal(), step).normalized();
            QVector3D forward = QVector3D::crossProduct(right, tile->getSurfaceNormal()).normalized();
            float radian = QVector3D::dotProduct(ball - position, forward) / radius;
            angularRate = ((radian / interval) * 180.0f / M_PI) * right;

            ball += move;
            normal += move;
            collision = true;
        }
    }

    // Set new ball position
    position = ball;
    normal.normalize();

    // Applying rebound
    if (collision) {
        float velocityNormal = QVector3D::dotProduct(velocity, normal);
        QVector3D rebound = (-(1 + ELASTICITY) * velocityNormal) * normal;
        velocity += rebound;
    }

    // Damping
    velocity *= DAMPING;

    QQuaternion rotation(angularRate.length() * interval, angularRate);
    orientation = orientation * rotation;

    if (position.y() < -10.0f) {
        reset();
    }
}
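The NaN values most likely come from normalizing a zero-length vector: when the ball is at rest, or step is parallel to the surface normal, the cross product is the zero vector, normalized() returns NaN components, and the quaternion built from that axis poisons the orientation and then the first three columns of the model matrix. A minimal guard, sketched against the code above:

QVector3D axis = QVector3D::crossProduct(tile->getSurfaceNormal(), step);
if (axis.lengthSquared() > 1e-12f) {   // skip degenerate (no-motion) frames
    QVector3D right = axis.normalized();
    QVector3D forward = QVector3D::crossProduct(right, tile->getSurfaceNormal()).normalized();
    float radian = QVector3D::dotProduct(ball - position, forward) / radius;
    angularRate = ((radian / interval) * 180.0f / M_PI) * right;
}

// Build the rotation only from a valid, normalized axis.
if (angularRate.lengthSquared() > 1e-12f) {
    QQuaternion rotation = QQuaternion::fromAxisAndAngle(
        angularRate.normalized(), angularRate.length() * interval);
    orientation = (rotation * orientation).normalized();
}

Two further notes: QQuaternion(angle, axis) treats the first argument as the scalar component, not an angle, so use QQuaternion::fromAxisAndAngle instead; and QQuaternion(0.0f, QVector3D(0, 0, 1)) is not the identity rotation, while the default-constructed QQuaternion() is.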
Rendering multiple textures on the same image for terrain with index buffers (LWJGL 3) I have started with LWJGL 3 and am trying to build a game engine. I'm stuck on generating terrain; this is my second big problem and I'm exhausted, so I need some advice on how to solve it. What I want to achieve: I want to keep all my textures in one place and have some kind of texture map that tells the renderer which texture to draw at each spot (example: textures map), but the result is wrong. The only cause I can think of is that I share vertices between quads via the index buffer, so instead of 8 vertices I have 6. Could that be why the texture gets all messed up? Should I use all the vertices, even when they are at the same (or almost the same) location? But then I would have far more vertices than I actually need.
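Shared vertices are indeed the problem here: with an index buffer, each vertex carries exactly one UV, so two neighbouring quads that need different atlas tiles fight over the corner they share and the texture coordinates get interpolated across both. The usual fix is to duplicate the corners so each quad owns its four vertices and its own UVs. A sketch of the mesh-building loop, where gridX, gridZ, tileFor, addVertex and addIndices are illustrative names for your own terrain code:

// Build 4 unique vertices and 6 indices per quad instead of sharing
// corners between neighbouring quads, so each quad can point at its
// own tile in the texture atlas.
int vertexCount = 0;
for (int z = 0; z < gridZ; z++) {
    for (int x = 0; x < gridX; x++) {
        int base = vertexCount;
        float[] uv = tileFor(x, z);  // atlas UVs {u0, v0, u1, v1} for this quad
        addVertex(x,     z,     uv[0], uv[1]);
        addVertex(x + 1, z,     uv[2], uv[1]);
        addVertex(x + 1, z + 1, uv[2], uv[3]);
        addVertex(x,     z + 1, uv[0], uv[3]);
        addIndices(base, base + 1, base + 2,  base + 2, base + 3, base);
        vertexCount += 4;
    }
}

Yes, this is roughly four times the vertices, but terrain meshes tolerate that easily. The alternative that keeps shared vertices is splat mapping: tile several textures across the whole terrain and blend between them in the fragment shader using a separate blend-map texture.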