How do I flip upside-down fonts in FTGL? I use FTGL in my app. I want to use FTBufferFont to render text, but it renders the wrong way: the font (texture? buffer?) is flipped on the wrong axis. I want to use this kind of orthographic setup:

    void enable2D(int w, int h)
    {
        winWidth = w;
        winHeight = h;
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // I don't even want to swap the 3rd and 4th params,
        // because I like to retain the top left as the origin
        glOrtho(0, w, h, 0, 0, 1);
        glMatrixMode(GL_MODELVIEW);
    }

I render the font like this (no pushing and popping of matrices, no translation):

    font.Render("Hello World!", -1, position, spacing, FTGL::RenderMode::RENDER_FRONT);

On other forums they said to just scale it by -1, but that won't work in mine; for example, the font ends up above the screen. I can't find a problem like mine on Google, so I decided to ask here. How can I invert the texture's V coordinate without modifying its source code (assume it's read-only)?

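A minimal workaround sketch, not from the original post: since this projection puts +Y downward, one common fix is to flip the Y axis locally around the text's origin before calling Render, assuming the modelview matrix may be modified ('position' and 'spacing' here are the FTPoints from the question; the font object stays untouched):

    // Hedged sketch: mirror Y locally so FTGL's bottom-up glyphs render
    // correctly under a top-left-origin orthographic projection.
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(position.X(), position.Y(), 0.0f); // move to the text origin
    glScalef(1.0f, -1.0f, 1.0f);                    // flip Y around that origin
    font.Render("Hello World!", -1, FTPoint(0, 0), spacing,
                FTGL::RenderMode::RENDER_FRONT);
    glPopMatrix();
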
Is orthographic in LWJGL 2D or 3D? I'm attempting to develop an orthographic game with LWJGL 3 and OpenGL in Eclipse, but I can't find an answer on whether to render objects in 2D or 3D. Are both applicable? If so, what are the advantages of either one? I would also like to rotate my camera, if that changes your answer.

Loading a multi-texture 3DS in C++. I have a question about loading 3DS models using this tutorial. I want to use more than one texture on the model (all my models here have more than one), but it seems that this library can't do that. Do you know any alternatives, or a way to edit this existing library to reach my aim?

New to OpenGL, having trouble understanding matrix transformations. I have modest experience developing games with SDL, libGDX, Unity etc., but I never got into learning any low-level API. So I thought about learning OpenGL and got started with the tutorials provided by lazyfoo.net. (Yes, I know he is using legacy OpenGL, but he migrates to modern OpenGL as he progresses through the tutorials.) In the fourth chapter he shows the use of glPushMatrix() and glPopMatrix(). Up until then I had the understanding that whenever I translate the modelview matrix, that translates the whole game world. Now this process of saving the current matrix on a stack has left me with some confusions:

1. If I push the current matrix onto the matrix stack, make changes to the current state, then pop that saved state again, how are these transformations combined into one matrix?

2. In the tutorial he shows how to do a scrolling background with four quads. He translates the matrix to position the camera based on user input, then pops the matrix from the stack in the render method and translates it to four different positions to draw four quads. My question is: why don't we see the quads moving with the camera (which would mean not moving at all relative to the camera), since they are drawn relative to the camera?

In the key input function:

    // Take saved matrix off the stack and reset it
    glMatrixMode( GL_MODELVIEW );
    glPopMatrix();
    glLoadIdentity();

    // Move camera to position
    glTranslatef( -gCameraX, -gCameraY, 0.f );

    // Save default matrix again with camera translation
    glPushMatrix();

In the render function:

    // Pop default matrix onto current matrix
    glMatrixMode( GL_MODELVIEW );
    glPopMatrix();

    // Save default matrix again
    glPushMatrix();

    // Move to center of the screen
    glTranslatef( SCREEN_WIDTH / 2.f, SCREEN_HEIGHT / 2.f, 0.f );

    // Red quad
    glBegin( GL_QUADS );
        glColor3f( 1.f, 0.f, 0.f );
        glVertex2f( -SCREEN_WIDTH / 4.f, -SCREEN_HEIGHT / 4.f );
        glVertex2f(  SCREEN_WIDTH / 4.f, -SCREEN_HEIGHT / 4.f );
        glVertex2f(  SCREEN_WIDTH / 4.f,  SCREEN_HEIGHT / 4.f );
        glVertex2f( -SCREEN_WIDTH / 4.f,  SCREEN_HEIGHT / 4.f );
    glEnd();

These might all be very silly questions, and I believe they are. I think there is a problem in my understanding of how matrices work in OpenGL. It would be very nice of you to enlighten me on this matter :)

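A minimal sketch of the stack semantics in question, assuming legacy fixed-function OpenGL: push copies the current top of the stack, subsequent calls multiply onto that copy, and pop discards the copy, restoring the saved matrix; nothing is "combined" on pop:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-camX, -camY, 0.f); // current = T(camera)
    glPushMatrix();                  // stack: [T(camera)], current = T(camera)
    glTranslatef(100.f, 0.f, 0.f);   // current = T(camera) * T(100,0,0)
    // ... draw something at a camera-relative offset ...
    glPopMatrix();                   // current = T(camera) again; the extra
                                     // translation is simply thrown away
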
How to do GPU batching? I need to render two shapes with OpenGL (let's say a cube and a triangular prism); one of them has a texture and the other one has a lighting effect. The thing I want to do is first set up the texture and lighting and then render the shapes. I did some research, and the only thing I could find is that this is called GPU batching. Do you have any ideas, tutorials, etc. about this subject? How can I do this? Edit: This video explains what I want to do: Crysis 2 Rendering Pipeline. Edit: I heard that it is not possible for the GPU to render different effects for different geometries back to back, so I need to group geometries by effect (the cube has a texture, so put it in the texture array; the prism has lighting, so put it in the lighting array... then loop over these arrays and render the scene). My aim is to first set up the effects and then render the geometries to draw the whole scene.

Problems with texture orientation in space. I am currently drawing a texture in 3D space and have some problems with its orientation. I'd like my textures to always be oriented front-facing toward the user. In my desired result, the text size stays unchanged when we rotate the world, and the text stays facing the user. Right now I can draw text in 3D space, but it is not oriented toward the viewer; it rotates with the world. I got these results with the following shaders.

Vertex shader:

    uniform vec3 Position;

    void main()
    {
        gl_Position = vec4(Position, 1.0);
    }

Geometry shader:

    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    out vec2 fsTextureCoordinates;

    uniform mat4 projectionMatrix;
    uniform mat4 modelViewMatrix;
    uniform sampler2D og_texture0;
    uniform float og_highResolutionSnapScale;
    uniform vec2 u_originScale;

    void main()
    {
        vec2 halfSize = vec2(textureSize(og_texture0, 0)) * 0.5 * og_highResolutionSnapScale;
        vec4 center = gl_in[0].gl_Position;
        center.xy += (u_originScale * halfSize);

        vec4 v0 = vec4(center.xy - halfSize, center.z, 1.0);
        vec4 v1 = vec4(center.xy + vec2(halfSize.x, -halfSize.y), center.z, 1.0);
        vec4 v2 = vec4(center.xy + vec2(-halfSize.x, halfSize.y), center.z, 1.0);
        vec4 v3 = vec4(center.xy + halfSize, center.z, 1.0);

        gl_Position = projectionMatrix * modelViewMatrix * v0;
        fsTextureCoordinates = vec2(0.0, 0.0);
        EmitVertex();

        gl_Position = projectionMatrix * modelViewMatrix * v1;
        fsTextureCoordinates = vec2(1.0, 0.0);
        EmitVertex();

        gl_Position = projectionMatrix * modelViewMatrix * v2;
        fsTextureCoordinates = vec2(0.0, 1.0);
        EmitVertex();

        gl_Position = projectionMatrix * modelViewMatrix * v3;
        fsTextureCoordinates = vec2(1.0, 1.0);
        EmitVertex();
    }

Fragment shader:

    in vec2 fsTextureCoordinates;
    out vec4 fragmentColor;

    uniform sampler2D og_texture0;
    uniform vec3 u_color;

    void main()
    {
        vec4 color = texture(og_texture0, fsTextureCoordinates);
        if (color.a == 0.0)
            discard;
        fragmentColor = vec4(color.rgb * u_color.rgb, color.a);
    }

Any ideas how to get my desired result?

EDIT 1: I made an edit in my geometry shader and got part of the label drawn on screen at a corner, but it is not rotating:

    ..........
    vec4 centerProjected = projectionMatrix * modelViewMatrix * center;
    centerProjected /= centerProjected.w;

    vec4 v0 = vec4(centerProjected.xy - halfSize, 0.0, 1.0);
    vec4 v1 = vec4(centerProjected.xy + vec2(halfSize.x, -halfSize.y), 0.0, 1.0);
    vec4 v2 = vec4(centerProjected.xy + vec2(-halfSize.x, halfSize.y), 0.0, 1.0);
    vec4 v3 = vec4(centerProjected.xy + halfSize, 0.0, 1.0);

    gl_Position = og_viewportOrthographicMatrix * v0;
    ..........

How do you create a 2D world and then view it in 3D? I have been learning OpenGL for a while now and have a pretty good understanding so far. What I would like to know is: if I create a 2D game with an orthographic projection, is it possible to switch to a perspective projection and view the scene without having to manually adjust the camera, etc.? Thanks

Does Blitz3D use its own 3D engine, or does it wrap OpenGL? How does Blitz3D work internally: does it use OpenGL with basic wrappers, or does it use some open-source 3D engine that itself wraps OpenGL?

Orthographic camera produces strange results. I've created a basic orthographic camera using the following view-projection matrix:

    2/(right-left), 0,              0,               -((right+left)/(right-left)),
    0,              2/(top-bottom), 0,               -((top+bottom)/(top-bottom)),
    0,              0,              -2/(zFar-zNear), -((zFar+zNear)/(zFar-zNear)),
    0,              0,              0,               1

where right = 1280, left = 0, top = 0, bottom = 720, zFar = 1.0 and zNear = -1.0. The matrix gets used in the GLSL shader like below:

    layout(location = 0) in vec3 a_Position;
    layout(location = 1) in vec2 a_TexCoord;

    uniform mat4 u_ViewProjection;
    uniform mat4 u_Transform;

    out vec2 v_TexCoord;

    void main()
    {
        v_TexCoord = a_TexCoord;
        gl_Position = u_ViewProjection * u_Transform * vec4(a_Position, 1.0);
    }

where the u_Transform uniform is passed a simple identity matrix. And I have declared a default rectangle that I am trying to get to the middle of my screen:

    float vertices[] = {
        // positions        // texcoords
         0.5f,  0.5f, 0.0f, 1.0f, 1.0f, // top right
         0.5f, -0.5f, 0.0f, 1.0f, 0.0f, // bottom right
        -0.5f, -0.5f, 0.0f, 0.0f, 0.0f, // bottom left
        -0.5f,  0.5f, 0.0f, 0.0f, 1.0f  // top left
    };

This results in an image showing a single diagonal line. The strange thing is that if I upscale the rectangle coordinates to the following:

    float vertices[] = {
        // positions        // texcoords
         1.0f,  1.0f, 0.0f, 1.0f, 1.0f, // top right
         1.0f, -1.0f, 0.0f, 1.0f, 0.0f, // bottom right
        -1.0f, -1.0f, 0.0f, 0.0f, 0.0f, // bottom left
        -1.0f,  1.0f, 0.0f, 0.0f, 1.0f  // top left
    };

it shows a full rectangle. Expected result: in both cases it should show a rectangle, with the second rectangle being larger. Actual result: the first case doesn't even produce a rectangle, while the upscaled coordinates do show a rectangle. My question: is the problem in the math, and if so, what causes this behavior? Or should this be the expected result, and am I wrong in my understanding of the model-view-projection matrix?

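For reference, a minimal sketch of building the same matrix with GLM under the stated bounds (assuming zNear = -1, zFar = 1); note that GLM stores matrices column-major, so a hand-written row-major array like the one above must be transposed before upload:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Same bounds as in the question: origin at the top left, 1280x720.
    glm::mat4 viewProjection = glm::ortho(0.0f, 1280.0f,  // left, right
                                          720.0f, 0.0f,   // bottom, top
                                          -1.0f, 1.0f);   // zNear, zFar
    // Upload with transpose = GL_FALSE since GLM is already column-major:
    // glUniformMatrix4fv(loc, 1, GL_FALSE, &viewProjection[0][0]);
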
Reason for white lines in a tile-based game. My game is tile-based, and I am getting these lines between two tiles. They don't look nice. How can I resolve this issue?

How to handle wildly varying rendering hardware / getting a baseline. I've recently started with mobile programming (cross-platform, also targeting desktop) and am encountering wildly differing hardware performance, in particular with OpenGL and the GPU. I know I'll basically have to adjust my rendering code, but I'm uncertain how to detect performance and what reasonable default settings are. I notice that certain shader functions are basically free in a desktop implementation but can be unusable on a mobile device. The problem is I have no way of knowing which features will cause which performance issues on all the devices. So my first issue is that even if I allow configuring options, I'm uncertain which options I have to make configurable. I'm also wondering whether one writes one very configurable pipeline, or whether I should have two distinct options (high/low). I'm also unsure of where to set the default: if I set it for the poorest performer, the graphics will be so minimal that any user with a modern device would dismiss the game; if I set it even at some moderate point, the low-end devices will basically become a slide show. I was thinking perhaps I could just run some benchmarks when the user first installs and guess at what works, but I've not seen a game do this before.

Why is collision not detected between two spheres? I started practicing 3D collision detection of spheres in OpenGL. There are two spheres; one sphere is hit using a cannon gun, but the collision is not detected when it hits, and when I debug the code the collision statement triggers when the spheres are too far from each other. Here is sphere one's code:

    glPushMatrix();
    glTranslatef(400, 50, -400);
    drawBomb();
    glPopMatrix();

The cannon gun code, which is controlled by the keyboard handler, is here:

    glPushMatrix();
    glTranslatef(-600, 0, -400);
    glRotatef(longFire, 1, 0, 0);
    glRotatef(longFireChaw, 0, 1, 0);
    glTranslatef(0, 0, -25);
    drawFireHolder();
    printf_s("Long fire is %d", longFire);
    glPopMatrix();

The second sphere's code, which is fired by the mouse handler, is here:

    glPushMatrix();
    // bombX is a global variable with value -600; bombY and bombZ are globals with value 0
    glTranslatef(bombX, bombY, -400);
    glRotatef(longFire, xRotate, yRotate, zRotate);
    glRotatef(longFire, 1, 0, 0);
    glRotatef(longFireChaw, 0, 1, 0);
    glRotatef(longFire, 0, 1, 0);
    glTranslatef(0, 0, bombZ);
    glTranslatef(0, 0, -25);
    drawBomb();
    glPopMatrix();

The timer and sphere collision detection code is here:

    dx = 400 - bombX;   // 400 is sphere one's x, bombX is sphere two's x
    dy = 50 - bombY;    // 50 is sphere one's y, bombY is sphere two's y
    dz = -400 - (-400); // -400 is sphere one's z, bombZ offsets sphere two's z
    float distance = sqrt(dx*dx + dy*dy + dz*dz);
    if (distance < 6 + 6) // 6 is the radius of the first and second sphere
    {
        printf("Collision Occur");
    }
    bombX += 1;
    bombY += 0.3;
    bombY += sin((longFire * 3.1424543567) / 180) * bombZ;
    printf("Y IS %f", bombY);

The keyboard handler code is here:

    if (key == 't' || key == 'T') { xRotate = 1, yRotate = 0, zRotate = 0; longFire += 5;     handlerMove(); }
    if (key == 'e' || key == 'E') { xRotate = 1, yRotate = 0, zRotate = 0; longFire -= 5;     handlerMove(); }
    if (key == 'r' || key == 'R') { xRotate = 0, yRotate = 1, zRotate = 0; longFireChaw += 5; handlerMove(); }
    if (key == 'w' || key == 'W') { xRotate = 0, yRotate = 1, zRotate = 0; longFireChaw -= 5; handlerMove(); }

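For reference, a minimal self-contained sphere-sphere test, assuming both centers are expressed in the same (world) coordinate space; the usual pitfall is testing positions that don't account for all the modelview transforms that actually place the spheres:

    #include <cmath>

    // Returns true when two spheres overlap. Centers must be in the same space.
    bool spheresCollide(float x1, float y1, float z1, float r1,
                        float x2, float y2, float z2, float r2)
    {
        float dx = x1 - x2, dy = y1 - y2, dz = z1 - z2;
        float distSq = dx * dx + dy * dy + dz * dz;
        float radii = r1 + r2;
        return distSq <= radii * radii; // compare squared values, no sqrt needed
    }
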
Can the diffuse and specular components of the Phong model shine through an object? I have been implementing a simple 3D engine in OpenGL, mostly based on tutorials by Tom Dalling. I have implemented the Phong lighting model as described in his tutorial, but I see light artifacts on concave objects (and also when using normal mapping). I've come to a point where I don't know if my code is broken, or if this is actually normal behaviour that needs special handling. I think these artifacts could be happening because the normals of a concave object at some points face back toward the point light source, without considering that there is a solid object in between. I tried to make a little sketch of this situation in 2D (for the diffuse component). So I need to know if this is a common problem with this lighting model, or if my calculations are wrong.

Parsing an OBJ file having multiple objects. I am very new to game development and have been following the YouTube channel ThinMatrix for its game-engine tutorials: https://www.youtube.com/playlist?list=PLRIWtICgwaX0u7Rf9zkZhLoLuZVfUksDP In one of the videos, an OBJ loader is created. However, I noticed that for simplicity only the bare-minimum parameters like faces, textures, normals, and vertices are parsed. (It is the same for almost all the OBJ parsers available online.) What I understand is that the assumption of a single object or group is made. On reading through the OBJ file format, I realized that there are several other attributes, like sub-objects (o), groups (g), etc. I am having trouble understanding how to parse these other parameters (if necessary). Should I load the vertex and face data of every (o) into separate VAOs? Also, in that case, how will the rotation of the whole object work? Will all sub-objects start rotating about their own centers? How do I make the multiple sub-objects rotate in unison like a single object and still have independent control over each whenever necessary? Kindly pardon me for my verbosity. This is my first post. Thank you!

Writing the correct value to the depth buffer when using ray casting. I am doing ray casting in a 3D texture until I hit a correct value. I am doing the ray casting inside a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices by the modelview matrix to get the correct position.

Vertex shader:

    world_coordinate_ = gl_Vertex;

Fragment shader:

    vec3 direction = (world_coordinate_.xyz - cameraPosition_);
    direction = normalize(direction);

    for (float k = 0.0; k < steps_; k += 1.0)
    {
        ....
        pos += direction * delta_step_;
        float thisLum = texture3D(texture3_, pos).r;
        if (thisLum > surface_)
            ...
    }

Everything works as expected. What I now want is to write the correct value to the depth buffer. The value that is currently written is the cube coordinate, but I want the value of pos in the 3D texture to be written. So let's say the cube is placed 10 away from the origin in z and its size is 10x10x10. My solution, which does not work correctly, is this:

    pos *= 10;
    pos.z += 10;
    pos.z *= -1;
    vec4 depth_vec = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
    float depth = ((depth_vec.z / depth_vec.w) + 1.0) * 0.5;
    gl_FragDepth = depth;

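A hedged sketch of the usual depth computation, assuming the hit position can be brought into eye space (if the modelview matrix is identity, world space already is eye space); the clip-space z/w should also be remapped through gl_DepthRange rather than a hard-coded [0, 1] ('hitPosWorld' is a hypothetical name for the ray/surface intersection point):

    vec4 clipPos = gl_ProjectionMatrix * gl_ModelViewMatrix * vec4(hitPosWorld, 1.0);
    float ndcDepth = clipPos.z / clipPos.w;                        // in [-1, 1]
    gl_FragDepth = (gl_DepthRange.diff * ndcDepth +
                    gl_DepthRange.near + gl_DepthRange.far) * 0.5; // window space
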
More than 8 lights without deferred shading. I want to know if there is any (efficient) technique to use more than 8 lights in a scene made with OpenGL and GLSL, without making use of deferred shading. I have not implemented that technique because of its limitations: not being able to use transparency or antialiasing. If there is a good alternative, please describe it with an example. I use OpenGL 2.0. Thank you.

Uploading vertex data for particles. I am kind of a beginner with OpenGL. I am using point sprites (currently without any texturing) for my particles, and I seem to fail to get them uploaded to GPU memory correctly. I have read lots of tutorials on VBOs and VAOs and I cannot locate the reason why the particles come out wrong. The problem is that some of the particles behave correctly, but others don't: they have random colors even though every one of them should be white, and they are not at the correct point in space. Also, when a particle dies and the size of the array gets decreased, all particles warp to another location. Here is my particle GPU data struct:

    [StructLayout(LayoutKind.Sequential)]
    public struct GPUParticle
    {
        public Vector4 Position;
        public Vector4 Color;

        public static int SizeInBytes = Vector4.SizeInBytes * 2;
    }

Here is the VAO, VBO and EBO initialization:

    GL.GenBuffers(1, out Vbo);
    GL.GenBuffers(1, out Ebo);
    GL.GenVertexArrays(1, out Vao);

    // Bind the VAO as current
    GL.BindVertexArray(Vao);

    // Bind the VBO to the VAO
    GL.BindBuffer(BufferTarget.ArrayBuffer, Vbo);

    // Describe the VAO data
    GL.EnableVertexAttribArray(0);
    GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 0, 0);
    GL.EnableVertexAttribArray(1);
    GL.VertexAttribPointer(1, 4, VertexAttribPointerType.Float, false, 0, Vector4.SizeInBytes);

    // Bind the EBO to the VAO
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, Ebo);

    // Unbind the VAO
    GL.BindVertexArray(0);

And here is the uploading, which I do every frame:

    // Bind the buffer and send the data to the GPU
    GL.BindBuffer(BufferTarget.ArrayBuffer, Vbo);
    // Invalidate old data
    GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Count * GPUParticle.SizeInBytes), IntPtr.Zero, BufferUsageHint.StreamDraw);
    // Upload new data
    GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(buffer.Length * GPUParticle.SizeInBytes), buffer, BufferUsageHint.StreamDraw);
    // Unbind
    GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
    Console.WriteLine("Uploaded array buffer. Count {0}", buffer.Length);

    // Update index buffer only if there are more elements in the buffer than there have ever been
    // Generate index buffer for EBO
    indices = new uint[buffer.Length]; // This is really stupid. Perhaps we really do not need any indices?
    for (ushort i = 0; i < buffer.Length; i++)
        indices[i] = i;

    // Bind the EBO for uploading
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, Ebo);
    // Invalidate old data
    GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(Count * sizeof(uint)), IntPtr.Zero, BufferUsageHint.DynamicDraw);
    // Upload new data
    GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(buffer.Length * sizeof(uint)), indices, BufferUsageHint.DynamicDraw);
    // Unbind EBO
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);

    // Save the particle count for drawing and zeroing the buffer
    Count = buffer.Length;

Here are the shaders:

    #version 330 core
    // VERTEX SHADER

    layout(location = 0) in vec4 vertex;
    layout(location = 1) in vec4 color;

    uniform mat4 mP;
    uniform mat4 mV;

    out vec4 fragColor;
    out vec4 pos; // For debugging

    void main()
    {
        fragColor = color;
        pos = vertex;
        gl_Position = mP * mV * vec4(vertex.xyz, 1.0);
    }

    #version 330 core
    // FRAGMENT SHADER

    in vec4 fragColor;
    in vec4 pos;

    layout(location = 0) out vec4 color;

    void main()
    {
        color = fragColor;
        color = vec4(1,1,1,1);
        color = pos;
    }

I have simplified the code a bit, but this is basically what I do. I am unable to spot what I am doing wrong. The weirdest thing is that when, in the vertex shader, I swap the color to be used as the vertex and the vertex to be used as the color, the motion of the particles gets corrected, but the colors still go haywire. Any suggestions? I am out of ideas myself. Is the VAO initialization / uploading code correct?

2D tile-based game texture atlas combining. I am new to OpenGL and game dev; I have been to courses and am trying to learn everything about this. My task is to implement a texture atlas in a 2D tile-based game (very similar to Tibia) using OpenGL. Right now the game has the sprite sheet loaded into memory, and for each tile the program generates an image, loads it into a texture, binds it, and makes a draw call. You can imagine how inefficient that is. So what I have to do is load this sprite sheet as a texture atlas and get the advantages that it brings. The problem is that in the first step, while generating an image, the program makes some combinations of the sprites. For example, the character has an outfit, and the player changes its colors (boots, pants, t-shirt, hair): there is a sprite for the base outfit and a sprite for each element of the outfit, so the program applies a color mask to each element and combines these five sprites into one image, then follows the process mentioned above. This example happens with outfits, but it happens with other things too, so it is not a special case for outfits only. My question: is there any way to make these combinations using OpenGL primitives, combining these multiple textures from my texture atlas into one result and doing it in a single draw call?

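One possible direction, sketched under the assumption that the layer sprites already live in the atlas: composite the layers into a render-to-texture target once, then draw from that result afterwards. A minimal FBO bake sketch (error checks omitted; the 32x32 size is an assumption):

    // Hedged sketch: bake layered sprites into an offscreen texture with an
    // FBO, so the combined outfit costs a single draw afterwards.
    GLuint fbo, baked;
    glGenTextures(1, &baked);
    glBindTexture(GL_TEXTURE_2D, baked);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, baked, 0);

    // Draw the base sprite quad, then each tinted overlay quad with blending
    // enabled, sampling from the atlas...
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // 'baked' now holds the combined sprite
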
Checking collisions in a 2D platformer with tiles. My team and I are developing a 2D platformer with C++/SDL/OpenGL, and we already defined a collision system, but we have a problem checking collisions against the tilemap. The tiles are 32x32, so we had to make the player's max speed in X and Y less than 32: we found that if the speed is bigger than the tile size, the position gets updated by more than 32 in a single step, so it can skip a tile entirely, which breaks the verification. At the moment we limit the X and Y speed to 30, but we don't know how to make the speed bigger than the tile size without losing collision detection against the tiles that might be skipped. Thanks a lot

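A minimal sketch of one standard fix (movement sub-stepping), assuming an axis-aligned tile grid: split each frame's displacement into steps no larger than a tile, checking collisions after every step so no tile can be jumped over. The helper collidesWithTilemap is hypothetical, standing in for the existing collision system:

    #include <algorithm>
    #include <cmath>

    const float TILE_SIZE = 32.0f;

    // bool collidesWithTilemap(float x, float y); // assumed engine helper

    void moveWithSubSteps(float& x, float& y, float vx, float vy, float dt)
    {
        float dx = vx * dt, dy = vy * dt;
        int steps = (int)std::ceil(std::max(std::fabs(dx), std::fabs(dy)) / TILE_SIZE);
        steps = std::max(steps, 1);
        float sx = dx / steps, sy = dy / steps; // each sub-step covers at most one tile
        for (int i = 0; i < steps; ++i) {
            if (!collidesWithTilemap(x + sx, y)) x += sx; // resolve X, then Y
            if (!collidesWithTilemap(x, y + sy)) y += sy;
        }
    }
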
Custom complex body shape with a cut feature. I need to create sprites (or scene2d actors or whatever) with a custom shape, preferably generated from a PNG image (marching squares?). I followed some Mesh tutorials, but they all hardcode the triangulated shape vertices in an array, which is completely insane. I will later have to give my sprite a cut feature, like in this game, so I'll have to regenerate and redraw a new shape several times. Some told me to review the Drawable classes as well as Mesh, and create my own MeshDrawable that renders using meshes. Do you have an idea of how to dynamically generate vertices and shapes instead of hardcoding coordinates? At what level will my Mesh respond to touch events and interact with the user? Any clarification will be welcome; I'm kind of lost.

glGenBuffers fails with 0x0 (Win7, GLEW). I try to run a simple renderer on my Win7 machine, but it dies at the first glGenBuffers call. The computer has an Intel HD3000 card with the latest driver (OpenGL 3.0 support). I use GLEW 1.10 (self-compiled with MinGW). The linker command is:

    g++ -L"C:\Program Files\Microsoft SDKs\Windows\v7.1\Lib" -L"C:\opt\glew-1.10.0\lib" -L"C:\opt\SDL-1.2.15\lib\x86" -o pysicsdemo.exe src\main.o src\Box.o -lOpenGL32 -lglu32 -lSDLmain -lSDL -lglew32.dll

And the result is 0x0. Most probably GLEW can't load the extension, but why?

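For reference, a hedged sketch of the usual initialization-order requirement: glewInit() only succeeds after a current GL context exists, and the resolved function pointer can be checked before use:

    #include <GL/glew.h>
    #include <cstdio>

    // After SDL_SetVideoMode / context creation:
    glewExperimental = GL_TRUE;      // helps on drivers with strict core profiles
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "GLEW error: %s\n", glewGetErrorString(err));
    }
    if (glGenBuffers == 0) {         // the pointer GLEW failed to resolve
        fprintf(stderr, "glGenBuffers not loaded\n");
    }
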
Visualize quaternion Euler angles without gimbal lock. I fuse ACC and GYRO data with the Mahony algorithm, and then I want to show a line chart representing roll, pitch and yaw to the user, so they can read the angles in degrees. As you can imagine, gimbal lock comes in at 90 degrees. I also have an activity where the user can see a cube rotating correctly, with quaternions as input. Is there a way to visualize the angles in degrees without gimbal lock? I can only manage to show an OpenGL shape rotating correctly, but my goal is to show the degrees.

OpenGL: missing GL_SPECULAR light on the texture. I'm missing specular lighting on the texture. I have included <GL/glext.h> in the project, so basically I used glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL_EXT, GL_SEPARATE_SPECULAR_COLOR_EXT) for the specular effect.

My init code:

    glEnable(GL_DEPTH_TEST);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glEnable(GL_CULL_FACE);

    GLfloat AmbientLight[]   = {0.3, 0.3, 0.3, 1.0};
    GLfloat DiffuseLight[]   = {0.7, 0.7, 0.7, 1.0};
    GLfloat SpecularLight[]  = {1.0, 1.0, 1.0, 1.0};
    GLfloat Shininess[]      = {90.0};
    GLfloat Emission[]       = {0.0, 0.0, 0.0, 1.0};
    GLfloat Global_Ambient[] = {0.1, 0.1, 0.1, 1.0};
    GLfloat LightPosition[]  = {7.0, 7.0, 7.0, 1.0};

    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, Global_Ambient);
    glLightfv(GL_LIGHT0, GL_AMBIENT, AmbientLight);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, DiffuseLight);
    glLightfv(GL_LIGHT0, GL_SPECULAR, SpecularLight);
    glLightfv(GL_LIGHT0, GL_POSITION, LightPosition);
    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 0.05f);
    glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.03f);
    glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.002f);

    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, AmbientLight);
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, DiffuseLight);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, SpecularLight);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, Shininess);
    glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, Emission);

    glShadeModel(GL_SMOOTH);

    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHTING);
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);

My render code:

    glClearColor(0.117f, 0.117f, 0.117f, 1.0f);
    glClearDepth(1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // load texture
    glEnable(GL_TEXTURE_2D);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glEnable(GL_LIGHTING);
    glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL_EXT, GL_SEPARATE_SPECULAR_COLOR_EXT);

    glColor4f(1.0, 1.0, 1.0, 0.2);
    glDisable(GL_COLOR_MATERIAL);

    // render geometry here

    glFlush();

What is missing here?

Calculate vertex coordinates. Newbie here! I need to know the effective coordinates of a triangle after applying transformations (rotations & translations) to the current (MODELVIEW) matrix. That is, given a vertex P, I want to calculate, for instance, the new coordinates of P after a rotation of 90° about the x axis. How can I do that? Is it also possible to use only OpenGL matrix operations to do this? Thanks in advance for any tips. Edit: The easy one is translation. I implemented it as

    #define T(x, v) ((x) + (v))

where I apply T to every coordinate of every vertex. But what about rotation about a specific axis? Can you please give me a hint?

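For reference, a worked example of the rotation case: rotating $P = (x, y, z)$ by an angle $\theta$ about the x axis multiplies it by the standard rotation matrix, which for $\theta = 90^\circ$ just swaps and negates the y/z components:

$$
R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix},
\qquad
R_x(90^\circ)\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ -z \\ y \end{pmatrix}.
$$
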
Getting a crash on glDrawElements. Here is the code where I initialize the VAO, vertex attributes (also the main VBO) and EBO. (I'm using my own wrapper class for these "data buffers" to hide some of the API features and make life easier, so I don't think the problem is in the generic class, as it was working without problems.)

    void initVAOManager(const bool& ebo)
    {
        if (_vaoID == 0)
            glGenVertexArrays(1, &_vaoID);

        glBindVertexArray(_vaoID);

        // Here is the main data buffer (positions, colors, UVs).
        // If it doesn't exist, a new one is created.
        if (!_mainBuffer)
            _mainBuffer = new DataBuffer<T>(GL_ARRAY_BUFFER);
        _mainBuffer->bindBuffer();

        if (!_eboBuffer && ebo)
            _eboBuffer = new DataBuffer<eboData>(GL_ELEMENT_ARRAY_BUFFER);
        _eboBuffer->bindBuffer();

        // This is the position
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
        // Color attrib pointer
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (void*)offsetof(Vertex, color));
        // UV
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_TRUE, sizeof(Vertex), (void*)offsetof(Vertex, uv));

        _mainBuffer->unbindBuffer();
        if (ebo)
            _eboBuffer->unbindBuffer();

        glBindVertexArray(0);
    }

Then the render function (don't mind the for loop, as I want to render multiple objects from the batch in one function):

    void renderBatchNormal()
    {
        uploadData();
        glBindVertexArray(_VAOManager->getVAO());
        for (std::size_t i = 0; i < _DATA.size(); i++)
        {
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
        }
        glBindVertexArray(0);
        clearData();
    }

The uploadData function sends data from the vectors to their buffers. I can post it too, but as I'm using my generic wrapper and it worked before with normal drawing, I assume there is no problem there. And finally the eboData class (basically just a blank class with an array of 6 indices):

    class eboData
    {
    public:
        GLuint indices[6];
    };

However, this is causing crashes on the line where I try to execute the glDrawElements command. I read that it can be caused by having no VAO bound while binding the ELEMENT_ARRAY_BUFFER, but as you can see from the code I'm doing it right (at least I think so). However, if I change the offending line to use a std::vector<eboData> like this:

    std::vector<eboData> eboVector;
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, eboVector.data());

the code works (the problem is also that I don't know how to render the second item in the buffer, as it shows only the first one). Do you have any ideas what could cause this crash? PS: glGetError() returns 0.

Multiple mains in a vertex shader (GLSL). I have a renderer where you can define passes, in which you pick the shader and a signal; each object can register a listener to that signal in order to draw itself. The problem comes because some objects are normal meshes, others are sprites, some have skeletal animation, and some don't. I was wondering if I could pack many different optional main functions within the same vertex shader and pick the correct one for each object. If there is no way, I would have to compile a different shader for each of the types and bind it from zero for all objects, even though all the fragment shaders are the same.

Why am I getting these artifacts when rendering polygons with OpenGL and SDL from a far distance? I'm working on a toy BSP (Quake 3 version) renderer. I started creating the context and handling the input with GLFW, but then I switched over to SDL. The performance change was amazing, with SDL ahead by 100 fps. But now, when rendering some surfaces from far away, I'm getting strange and ugly artifacts at the borders or junctions. The only things I changed from GLFW to SDL were the initialization (code below) and the input handling; the OpenGL draw calls are the same. Why am I getting these artifacts when rendering polygons with OpenGL and SDL from a far distance? This only happens with some surfaces (mostly the ones without textures, only vertex colors), but there are some tiny surfaces (pendants or flags) that also have the same problem. Also, not all the borders of the same faces have the problem.

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_EVENTS) < 0)
        cout << "Failed to init SDL Video" << endl;

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_FORWARD_COMPATIBLE_FLAG);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

    _window = SDL_CreateWindow("BSP Test", 0, 0, width, height, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    if (_window == nullptr)
    {
        SDL_Quit();
        return false;
    }

    SDL_GLContext ctx = SDL_GL_CreateContext(_window);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGetIntegerv(GL_MAX_PATCH_VERTICES, &ee);
    cout << "Max patch vertices " << ee << endl;
    glPatchParameteri(GL_PATCH_VERTICES, 9);
    cout << "GL ERROR LOAD " << glGetError() << endl;

    glClearColor(0.0f, 0.0f, 0.01f, 1.0f);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glFrontFace(GL_CCW);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_POLYGON_SMOOTH);
    glEnable(GL_LINE_SMOOTH);
    SDL_GL_SetSwapInterval(0);

    matrices.proj = glm::perspective(glm::radians(100.0f), (float)width / (float)height, 1.0f, 4000.0f);
    camera.position = glm::vec3(41.415211f, 320.293121f, 537.225281f);
    return true;

How do multipass shaders work in OpenGL? In Direct3D, multipass shaders are simple to use because you can literally define passes within a program. In OpenGL it seems a bit more complex, because it is possible to give a shader program as many vertex, geometry, and fragment shaders as you want. A popular example of a multipass shader is a toon shader: one pass does the actual cel-shading effect, and the other creates the outline. If I have two vertex shaders, "cel.vert" and "outline.vert", and two fragment shaders, "cel.frag" and "outline.frag" (similar to the way you do it in HLSL), how can I combine them to create the full toon shader? And don't tell me that a geometry shader can be used for this, because I just want to know the theory behind multipass GLSL shaders :)

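A minimal sketch of how passes are usually expressed on the GL side, assuming two already-linked programs ('celProgram', 'outlineProgram', 'vao' and 'indexCount' are assumed names): there is no pass construct in OpenGL, so the application simply draws the geometry once per program, with whatever state changes the effect needs between draws:

    // Hedged sketch: multipass = multiple draws with different programs.
    glUseProgram(outlineProgram);   // pass 1: e.g. scaled back faces as the outline
    glCullFace(GL_FRONT);
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);

    glUseProgram(celProgram);       // pass 2: the cel-shaded surface
    glCullFace(GL_BACK);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
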
How to create a skybox in an infinite world like Minecraft? I'm making a Minecraft-clone game in C++ using OpenGL. I created a skybox using OpenGL's cube map, but the camera can go outside of the skybox since it's an infinite world. Then I changed it to update the skybox's coordinates based on the camera's coordinates, like below, but it didn't change anything. How can I make a skybox stay at a relative position from the camera?

    auto coord = camera.getCoords();
    // positive x
    skyboxMesh->addFace(
        coord.x + size, coord.y - size, coord.z - size,
        coord.x + size, coord.y - size, coord.z + size,
        coord.x + size, coord.y + size, coord.z + size,
        coord.x + size, coord.y + size, coord.z - size
    );

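A minimal sketch of the standard alternative, assuming a GLM-based camera (getViewMatrix is an assumed camera method): instead of moving skybox geometry, strip the translation from the view matrix when drawing the skybox, so it is always centered on the camera:

    #include <glm/glm.hpp>

    // Truncating to mat3 and back drops the translation column, so the
    // skybox never moves relative to the camera.
    glm::mat4 view = camera.getViewMatrix();
    glm::mat4 skyboxView = glm::mat4(glm::mat3(view)); // rotation only
    // Upload 'skyboxView' as the view uniform and draw the unit cube with
    // glDepthFunc(GL_LEQUAL) so it passes at maximum depth.
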
Create a YUV texture for the GL_TEXTURE_EXTERNAL_OES format. I need to create a YUV texture for the GL_TEXTURE_EXTERNAL_OES format. Source: https://github.com/crossle/MediaPlayerSurface/blob/master/src/me/crossle/demo/surfacetexture/VideoSurfaceView.java I am doing all my processing in YUV, so it would save clock cycles if I could get a YUV texel as the output of texture2D. To get the 'y' value, I need to take the dot product of each texel with vec3(0.3, 0.59, 0.11). Since for my purpose I need to take a 3x3 pixel block's 'y' values and compute a convolution of them, this has a performance impact. So it would save clock cycles if texture2D could return YUV directly.

Just how expensive is it to bind textures in OpenGL? (LibGDX) I'm using LibGDX on top of OpenGL, and currently my game engine does something along the lines of the following per frame:

- Bind a terrain texture sprite atlas and a set of transparency masks in another texture atlas
- Render terrain tiles using the 2 bound textures to an FBO
- Bind a character and item texture sprite atlas
- Render characters over the terrain to the same FBO
- Bind the same transparency mask atlas and a normal map texture for the terrain
- Draw the same terrain tiles' normal-map version to a different FBO
- Bind the character and item normal map texture atlas
- Render character normal maps over the terrain normal map FBO
- Bind this normal map FBO as a texture
- Render lighting information to a different FBO using the normal map texture information
- Bind the diffuse and lighting FBOs as textures
- Use these to render the combined final image to the main display

So in summary, each frame I'm binding a total of 9 different textures, one or two at a time. Should I look into changing my code so all 9 textures are always bound, and the correct one is referenced at the right time? Or is this a reasonable number of texture binds per frame that isn't going to impact overall performance to a noticeable degree? Assume I'm aiming for 60fps and there's a fair amount of other calculation going on per frame.

OpenGL shadow map peter-panning effect. I am implementing shadow mapping in OpenGL with OpenTK. Everything works fine except that I have a peter-panning effect, which I can't solve even by switching the face culling to front faces when rendering to the depth buffer. The model is large, so I scale it down to a small value to render it into the shadow map. What could be causing this problem?

How to use glm::rotate with a Euler angle? I have a vec3 to represent my object's orientation/rotation, but the glm::rotate method expects a quaternion. If I just convert it to a quaternion like this:

    glm::quat rot = rotation;

the W value will just be zero, right? And then there won't be any changes in rotation. In my code I just want to be able to do rotation.x += 5.0f in the update method of an object. This is the method I'm using for my transformations:

    glm::mat4 GameObject::Transform(glm::mat4 model, glm::vec3 position, glm::vec3 scale, glm::quat angleAxis)
    {
        model = glm::translate(model, position);

        if (angleAxis.w > 360.0f)
            angleAxis.w -= 360.0f;
        else if (angleAxis.w < 0.0f)
            angleAxis.w += 360.0f;

        model = glm::rotate(model, angleAxis.w * toRadians, glm::vec3(angleAxis.x, angleAxis.y, angleAxis.z));
        model = glm::scale(model, scale);
        return model;
    }

Currently I'm just passing that rotation vec3 as the angleAxis quaternion parameter, but that obviously doesn't work. This is how I currently calculate my front, up, and right vectors:

    void GameObject::calculateCameraView()
    {
        front.x = cos(glm::radians(rotation.x)) * cos(glm::radians(rotation.y));
        front.y = sin(glm::radians(rotation.y));
        front.z = sin(glm::radians(rotation.x)) * cos(glm::radians(rotation.y));
        front = glm::normalize(front);

        right = glm::normalize(glm::cross(front, worldUp));
        up = glm::normalize(glm::cross(right, front));

        front.y = invertMouse ? front.y * -1 : front.y;
    }

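For reference, a hedged sketch of the conversion GLM already provides: glm::quat has a constructor taking a vec3 of Euler angles in radians, which avoids stuffing Euler components into quaternion fields ('rotation' here is the object's Euler angles in degrees, as in the question):

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    glm::quat q = glm::quat(glm::radians(rotation)); // Euler angles -> quaternion
    glm::mat4 rotMatrix = glm::mat4_cast(q);         // usable in the model matrix
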
OpenGL water waves problem. I want to do a simple simulation of water drops producing waves, in OpenGL with C/C++. I calculate the height for each point of my plane grid in the vertex shader with a formula, but it looks wrong right from the beginning. I will attach a photo to ease the understanding: this is how the grid looks at time 0 (when the drop touched the water). The waves weren't generated only near the touch point and then gradually advancing outward; in the corners of the grid (and far from the touch point) there should be no change until a created wave approaches.

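For reference, one common hedged formulation of such a ripple (not from the original post): the height at distance $r$ from the touch point stays zero until the wavefront, travelling at speed $c$, arrives, and only then oscillates with decaying amplitude:

$$
h(r, t) =
\begin{cases}
A\, e^{-\delta (t - r/c)} \sin\!\big(k r - \omega t\big), & r \le c\,t \\[4pt]
0, & r > c\,t
\end{cases}
$$

The piecewise condition is what keeps the grid corners flat until a wave actually reaches them.
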
Texture mapping artifacts at triangle edges (for thin triangles?). I'm working on generating programmatic textures for 3D models and showing them in WebGL using Three.js. To do this I generate a texture image (PNG) that contains per-triangle textures (a rectangle on the image, which is a rasterization of the 3D triangle) and an associated UV map. To prevent bleeding I generate a 2-pixel border around each triangle's texture rectangle in the resulting PNG. The rest of the texture image is black. This results in visible artifact lines when rendering. The texture sampler is set to use NearestFilter for magnification and minification, and the texture's wrapping is ClampToEdge for both S and T, which makes things even more confusing. I've also tried bigger per-triangle image borders and the problem still persists. These lines seem to overlap with thin triangles, which are rasterized to 2 pixels along their shortest axis. I'm sure there is no physical gap in the model, because it renders fine as a solid, so it must be a texture-sampling issue; this is also confirmed by the fact that the color of the lines is affected by the color of the "empty" parts of the texture: if I change those, the line colors also change. I've also tried offsetting UVs by 0.5 / tex_size, and the same thing is rendered. Any help would be useful. Thanks!

(While using a cube map) box-like textures appear around my scene whenever I move the camera. I've been learning about cube maps and I implemented one in my program. It seemed to work well until I started moving the camera around the scene and zooming out. As you can see in the attached GIF, there seems to be a problem with the cube map, since this weird behavior occurs: https://gyazo.com/70ad4ce027d1e032bc19258e28def66f

Main program:

    unsigned int cubemapTexture = texo.loadCubeMap(faces);
    glDepthFunc(GL_EQUAL);
    bg.bind();
    skyBox.use();
    glm::mat4 projection = glm::perspective(glm::radians(fov), (float)scr_width / (float)scr_height, 0.1f, 100.0f);
    glm::mat4 view = camera.GetViewMatrix();
    skyBox.setUniformMat4("projection", projection);
    skyBox.setUniformMat4("view", view);
    glActiveTexture(GL_TEXTURE20);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
    GLCall(glDrawArrays(GL_TRIANGLES, 0, 36));

loadCubeMap:

    unsigned int Textures::loadCubeMap(std::vector<std::string> faces)
    {
        unsigned int textureID;
        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
        stbi_set_flip_vertically_on_load(false);

        int width, height, nrChannels;
        for (unsigned int i = 0; i < faces.size(); i++)
        {
            unsigned char* data = stbi_load(faces[i].c_str(), &width, &height, &nrChannels, 0);
            if (data)
            {
                glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
                stbi_image_free(data);
            }
            else
            {
                std::cout << "Cubemap tex failed to load at path: " << faces[i] << std::endl;
                stbi_image_free(data);
            }
        }
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

        return textureID;
    }

Vertex shader:

    #version 330 core
    layout (location = 0) in vec3 aPos;

    out vec3 TexCoords;

    uniform mat4 view;
    uniform mat4 projection;

    void main()
    {
        vec4 pos = projection * view * vec4(aPos, 1.0);
        TexCoords = aPos;
        // Setting z value to w (1.0) so that the cube map is always in the background
        gl_Position = pos.xyww;
    }

Fragment shader:

    #version 330 core
    out vec4 FragColor;

    in vec3 TexCoords;

    uniform samplerCube skybox;

    void main()
    {
        FragColor = texture(skybox, TexCoords);
    }

The problem probably originates from one of these snippets, but if you have any other ideas I can provide more info.

Drawing sprites in Android OpenGL efficiently? I basically want to give myself some sprite-drawing functions (making use of OpenGL), such as draw(Texture, x, y). I want to do this using OpenGL ES 2.0 on Android. Since the textures can have varying sizes, I was thinking I would store the vertices along with the textures and pass them to the shaders every draw call using glVertexAttribPointer. Is there a better way of doing this, performance-wise? I would also make use of a model matrix to translate/rotate the sprites. Is this a normal thing to do for 2D rendering?

What does a matrix represent? I began learning OpenGL recently and am having problems visualizing what matrices are and what their role in computer graphics is. Given the template of a 4x4 matrix like this, I would assume that each matrix like this holds the coordinates of a vertex in world space, and that several of them put together and shaded give an object. But why is there an Xx, an Xy and an Xz? I read that it's a different axis (up, left, forward), but I still can't make heads or tails of the significance.

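For reference, the usual reading of that template: the matrix is not a vertex but a coordinate frame. In one common convention its columns are the X, Y, Z basis vectors of the transformed space plus a translation, so $(X_x, X_y, X_z)$ is where the x-axis unit vector lands after the transform:

$$
M = \begin{pmatrix}
X_x & Y_x & Z_x & T_x \\
X_y & Y_y & Z_y & T_y \\
X_z & Y_z & Z_z & T_z \\
0 & 0 & 0 & 1
\end{pmatrix}
$$
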
OpenGL error: the loaded object takes on the same colors and style as the texture. I'm new to OpenGL and faced this problem. Draw function:

    void Renderer::Draw()
    {
        glUseProgram(programID);
        shader.UseProgram();

        mat4 view = mat4(mat3(myCamera->GetViewMatrix()));
        glm::mat4 VP = myCamera->GetProjectionMatrix() * myCamera->GetViewMatrix();
        shader.BindVPMatrix(&VP[0][0]);
        glm::mat4 VP2 = myCamera->GetProjectionMatrix() * myCamera->GetViewMatrix() * floorM;

        model13D->Render(&shader, scale(100.0f, 100.0f, 100.0f)); // scaling the skybox
        t2->Bind();
        model3D->Render(&shader, scale(2.0f, 2.0f, 2.0f)); // scaling the aircraft

        glUniformMatrix4fv(VPID, 1, GL_FALSE, &VP2[0][0]);
        mySquare->Draw();
    }

The loading code:

    shader.LoadProgram();
    model3D = new Model3D();
    model3D->LoadFromFile("data/models/obj/Galaxy/galaxy.obj", true);
    model3D->Initialize();
    myCamera->SetPerspectiveProjection(90.0f, 4.0f / 3.0f, 0.1f, 10000000.0f);

    model13D = new Model3D();
    model13D->LoadFromFile("data/models/obj/skybox/Skybox.obj", true);
    model13D->Initialize();

    // Projection matrix
    shader.LoadProgram();
    // View matrix
    myCamera->Reset(
        0.0f, 0.0f, 5.0f, // Camera Position
        0.0f, 0.0f, 0.0f, // Look-at Point
        0.0f, 1.0f, 0.0f  // Up Vector
    );

    std::string Images_names[6];
    Images_names[0] = "right.png";
    Images_names[1] = "left.png";
    Images_names[2] = "top.png";
    Images_names[3] = "bottom.png";
    Images_names[4] = "back.png";
    Images_names[5] = "front.png";
    t = new Texture(Images_names, 0);
    t2 = new Texture("arrakisday_dn.tga", 1);

Making a button that spawns objects in the world? I want to make a button that can spawn an object that I've already created, though I have no clue how to actually do this or where to start. Has anyone ever done something similar?

Compute a normal based on a Voronoi pattern. I am applying a 3D Voronoi pattern to a mesh. Using the loops below, I am able to compute the cell position, an id and the distance, but I would like to compute a normal based on the generated pattern. How can I generate a normal, or reorient the current normal, based on this pattern and its associated cells? The aim is to give the mesh a faceted look: each cell's normal should point in a single direction, and adjacent cells should point in different directions. Those directions should be based on the original mesh normals; I don't want to totally break the mesh normals and have them point in random directions. Here's how I generate the Voronoi pattern:

    float3 p = floor(position);
    float3 f = frac(position);

    float id = 0.0;
    float distance = 10.0;
    for (int k = -1; k <= 1; k++)
    {
        for (int j = -1; j <= 1; j++)
        {
            for (int i = -1; i <= 1; i++)
            {
                float3 cell = float3(float(i), float(j), float(k));
                float3 random = hash3(p + cell);
                float3 r = cell - f + random * angleOffset;
                float d = dot(r, r);

                if (d < distance)
                {
                    id = random;
                    distance = d;
                    cellPosition = cell + p;
                    normal = ?
                }
            }
        }
    }

And here's the hash function:

    float3 hash3(float3 x)
    {
        x = float3(dot(x, float3(127.1, 311.7, 74.7)),
                   dot(x, float3(269.5, 183.3, 246.1)),
                   dot(x, float3(113.5, 271.9, 124.6)));
        return frac(sin(x) * 43758.5453123);
    }

Combine textures with coordinates. Is it possible to add one texture to another texture at specific coordinates? For example, I want to add a small texture (16x16) to a big texture (1368x768) at specific coordinates, so the small texture goes to position (100, 100).

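A minimal sketch of one way to do this with plain OpenGL, assuming the small image's pixels are available CPU-side ('bigTexture' and 'smallPixels' are assumed names): glTexSubImage2D overwrites a sub-rectangle of an existing texture at a given offset:

    // Write a 16x16 pixel block (RGBA bytes) into the big texture at (100, 100).
    glBindTexture(GL_TEXTURE_2D, bigTexture);
    glTexSubImage2D(GL_TEXTURE_2D,
                    0,          // mip level
                    100, 100,   // x/y offset inside the big texture
                    16, 16,     // width/height of the region
                    GL_RGBA, GL_UNSIGNED_BYTE,
                    smallPixels);
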
Having trouble setting color in the fragment shader. For some reason, the color isn't being applied to the object. Here's my fragment shader code. There's probably something obvious wrong with it that I'm not seeing.

    #version 330 core
    out vec3 Color;

    uniform int InColor;

    void main()
    {
        Color = vec3(float(127 / 255), 0, 0);
    }

Getting started with cross-platform game development. I am looking for a starting place to develop a cross-platform OpenGL game that runs on Mac, PC and potentially Linux. The difficulty is that I don't want to use an existing graphics library. I've done a lot of work with OpenGL in C++ for Windows, but I have never developed anything for Mac. I plan to develop most of the Windows game first and keep my project open with abstraction so that I can later implement the Mac-specific code. Is there anything that works radically differently on Mac and Linux than on Windows? With Windows, for example, I use DirectInput and XInput to capture controllers and keyboards, and window messages to capture mouse input and typing. Where can I find some kind of "translation guide"? Also, do I still need to consider endianness? And how can I design my project so that I can load it up in Visual Studio 2008 and easily compile it in OS X and Linux IDEs? Since I own Visual Studio, I'd like to use that for my Windows development rather than switching to a cross-platform IDE.

With what projection matrix should I render a portal to a texture? I'm using OpenGL, and I have a problem with my engine's portal implementation. To render the first portal I:

- create a virtual camera with the position of the second portal and the correct orientation;
- render the whole scene from the virtual camera to a texture using an FBO;
- render the first portal using the texture created before.

The problem: the virtual camera renders to the texture with the main projection and view, so when I use this texture to render the first portal, everything is rescaled. The virtual camera should only render part of the view to prevent this scaling. The problem could perhaps be solved by changing the virtual camera's projection matrix, but I don't know how to calculate such a matrix. Here's how it looks: the picture shows one portal (the texture is a little bit deformed at the bottom and top, but that's not important now) and some other objects. The second portal is behind the camera, and the camera is in the middle between the two portals. We can clearly see that the portal texture is rescaled down to the portal size. As I said before, I think the virtual camera should have another projection matrix, but how do I calculate it? Or is there a better way?

Linking error with tessellation shaders in GLSL. I'm testing the triangle tessellation from http://prideout.net/blog/?p=48 (shaders). All the shaders compile correctly, but when I try to link the program using glLinkProgram(programID) I get the following error:

    Tessellation control info
    (0) : error C6029: No input primitive type
    Tessellation evaluation info
    (0) : error c7005: No tessellation primitive mode specified
    Geometry info
    (0) : error C6022: No input primitive type
    (0) : error C6029: No output primitive type
    Not valid

It's so strange: I declare the output for the TC shader using layout(vertices = 3) out; and the input layout for the TE shader using layout(triangles, equal_spacing, cw) in;. Why do I still get this error? I hope to see your answers about it. I put my shaders below.

Vertex shader:

    #version 410 core

    in vec4 Position;
    out vec3 vPosition;

    void main()
    {
        vPosition = Position.xyz;
    }

TC shader:

    #version 410 core

    layout(vertices = 3) out;

    in vec3 vPosition[];
    out vec3 tcPosition[];

    #define ID gl_InvocationID

    void main()
    {
        float TessLevelInner = 3;
        float TessLevelOuter = 2;
        tcPosition[ID] = vPosition[ID];
        if (ID == 0) {
            gl_TessLevelInner[0] = TessLevelInner;
            gl_TessLevelOuter[0] = TessLevelOuter;
            gl_TessLevelOuter[1] = TessLevelOuter;
            gl_TessLevelOuter[2] = TessLevelOuter;
        }
    }

TE shader:

    #version 410 core
    // TessEval

    layout(triangles, equal_spacing, cw) in;

    in vec3 tcPosition[];
    out vec3 tePosition;
    out vec3 tePatchDistance;

    uniform mat4 Projection;
    uniform mat4 Modelview;

    void main()
    {
        vec3 p0 = gl_TessCoord.x * tcPosition[0];
        vec3 p1 = gl_TessCoord.y * tcPosition[1];
        vec3 p2 = gl_TessCoord.z * tcPosition[2];
        tePatchDistance = gl_TessCoord;
        tePosition = normalize(p0 + p1 + p2);
        gl_Position = Projection * Modelview * vec4(tePosition, 1);
    }

Geometry shader:

    #version 410 core
    // geometry shader

    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in vec3 tePosition[3];
    in vec3 tePatchDistance[3];
    out vec3 gFacetNormal;
    out vec3 gPatchDistance;
    out vec3 gTriDistance;

    uniform mat4 Modelview;
    uniform mat3 NormalMatrix;

    void main()
    {
        vec3 A = tePosition[2] - tePosition[0];
        vec3 B = tePosition[1] - tePosition[0];
        gFacetNormal = NormalMatrix * normalize(cross(A, B));

        gPatchDistance = tePatchDistance[0];
        gTriDistance = vec3(1, 0, 0);
        gl_Position = gl_in[0].gl_Position; EmitVertex();

        gPatchDistance = tePatchDistance[1];
        gTriDistance = vec3(0, 1, 0);
        gl_Position = gl_in[1].gl_Position; EmitVertex();

        gPatchDistance = tePatchDistance[2];
        gTriDistance = vec3(0, 0, 1);
        gl_Position = gl_in[2].gl_Position; EmitVertex();

        EndPrimitive();
    }

Fragment shader:

    #version 410 core
    // fragment shader

    out vec4 FragColor;
    in vec3 gFacetNormal;
    in vec3 gTriDistance;
    in vec3 gPatchDistance;
    in float gPrimitive;

    uniform vec3 LightPosition;

    float amplify(float d, float scale, float offset)
    {
        d = scale * d + offset;
        d = clamp(d, 0, 1);
        d = 1 - exp2(-2 * d * d);
        return d;
    }

    void main()
    {
        vec3 AmbientMaterial = vec3(0.04f, 0.04f, 0.04f);
        vec3 DiffuseMaterial = vec3(0, 0.75, 0.75);
        vec3 LightPosition = vec3(0.25, 0.25, 1);

        vec3 N = normalize(gFacetNormal);
        vec3 L = LightPosition;
        float df = abs(dot(N, L));
        vec3 color = AmbientMaterial + df * DiffuseMaterial;

        float d1 = min(min(gTriDistance.x, gTriDistance.y), gTriDistance.z);
        float d2 = min(min(gPatchDistance.x, gPatchDistance.y), gPatchDistance.z);
        color = amplify(d1, 40, -0.5) * amplify(d2, 60, -0.5) * color;

        FragColor = vec4(color, 1.0);
    }

Chess engine on which to apply a custom-made OpenGL skin. Is there any open-source chess engine that I can use to practice my OpenGL skill set? I think it would be a neat exercise.

OpenGL strange rendering problem when buffers have different sizes. I have encountered a very odd error in my program, "odd" in the sense that everything the API says suggests the error should not occur. I have a bunch of 2D un-indexed vertex data, and I want to render it as lines. So far, so good. Then I wanted each vertex to have its own (RGB) color, so I generate a color for each vertex; for simplicity, I chose red. Works fine, except now only 2/3 of the points are being rendered! The problem arises from the fact that each vertex's position data consists of only 2 numbers, whereas the color data consists of 3 numbers. So the "position" buffer has 2 elements per vertex while the "color" one has 3 elements per vertex. I thought that using glVertexAttribPointer to tell this to OpenGL would be enough, but it turns out it's not. In fact, if I say that the color data has only 2 elements per vertex, using glVertexAttribPointer(vertexColorID2, 2, GL_DOUBLE, GL_FALSE, 0, (void*)0) (as opposed to 3), it renders all the points, except now I can only specify two numbers for the RGB color, so I can't get the right color. The full code of the issue is below:

    glUseProgram(programID2);

    // draw the graph
    graph_data = graphData();
    std::vector<double> graphcolordata(graph_data.size() / 2 * 3);
    for (int i = 0; i < graph_data.size(); i += 3)
        graphcolordata[i] = 1;

    glEnableVertexAttribArray(vertexPosition_modelspaceID2);
    glBindBuffer(GL_ARRAY_BUFFER, graphbuffer);
    glBufferData(GL_ARRAY_BUFFER, graph_data.size() * sizeof(GLdouble), &graph_data[0], GL_STREAM_DRAW);
    glVertexAttribPointer(vertexPosition_modelspaceID2, 2, GL_DOUBLE, GL_FALSE, 0, (void*)0);

    glEnableVertexAttribArray(vertexColorID2);
    glBindBuffer(GL_ARRAY_BUFFER, colorbuffer2);
    glBufferData(GL_ARRAY_BUFFER, graphcolordata.size() * sizeof(GLdouble), &graphcolordata[0], GL_STREAM_DRAW);
    glVertexAttribPointer(vertexColorID2, 3, GL_DOUBLE, GL_FALSE, 0, (void*)0);

    glDrawArrays(GL_LINES, 0, graph_data.size() / 2);
    glDisableVertexAttribArray(vertexPosition_modelspaceID2);
    glDisableVertexAttribArray(vertexColorID2);

Note: programID2 is my basic shader program, and the following variable definitions were previously used:

    GLuint vertexPosition_modelspaceID2 = glGetAttribLocation(programID2, "vertexPosition_modelspace");
    GLuint vertexColorID2 = glGetAttribLocation(programID2, "vertexColor");

Edit: Incredibly stupid error; I figured it out immediately after posting, when it had previously stumped me for half an hour.

    std::vector<double> graphcolordata(graph_data.size() / 2 * 3);
    for (int i = 0; i < graph_data.size(); i += 3)
        graphcolordata[i] = 1;

should be:

    std::vector<double> graphcolordata(graph_data.size() / 2 * 3);
    for (int i = 0; i < graphcolordata.size(); i += 3)
        graphcolordata[i] = 1;

When this initialization is fixed, it works fine. I would delete this, but I do not see how.

Translating an object to a certain Vector3 in OpenGL and Java (LWJGL). So after almost two hours, I got the hang of using glTranslated() (with Java and LWJGL). If I am correct, applying glTranslated to an object moves that object by x, y, z relative to the previously moved object. I believe the correct terms for this are local vs. global, global being the one I want. I was wondering if there is a way to translate an object to a specific XYZ position, i.e. relative to the origin. Thanks! (Code or other details can be supplied if it helps, just let me know. Also, sorry if this is a noob question; I'm very new to OpenGL.)

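A minimal sketch of the usual pattern, written with the C-style GL calls (the LWJGL static methods have the same names): resetting the modelview matrix before each object makes the translation absolute, i.e. relative to the origin rather than to the previous object:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();            // wipe previous transforms: origin is the reference
    glTranslated(x, y, z);       // now an absolute position
    // draw the first object here...

    // Alternatively, wrap per-object transforms so they don't accumulate:
    glPushMatrix();
    glTranslated(x2, y2, z2);
    // draw the second object...
    glPopMatrix();
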
In OpenGL, how can I discover the depth range of a depth buffer? I am writing a GL multi-pass rendering app for iOS. The first pass renders to a depth buffer texture. The second pass uses the values in the depth buffer to control the application of a fragment shader. I want to rescale the values in the depth buffer to something useful, but before I can do that I need to know the range of the values actually in the depth buffer. How do I do this?

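A hedged sketch of the brute-force approach on desktop GL (on iOS/GLES, glReadPixels cannot read GL_DEPTH_COMPONENT, so there the depth is usually written to a color target instead and read back from that): read the depth attachment back and scan for min/max. 'width' and 'height' are the assumed framebuffer dimensions:

    #include <algorithm>
    #include <vector>

    // Values come back in window space, i.e. in [0, 1].
    std::vector<float> depths(width * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depths.data());
    auto mm = std::minmax_element(depths.begin(), depths.end());
    float minDepth = *mm.first;   // nearest value written
    float maxDepth = *mm.second;  // farthest value written
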
Is there any way in OpenGL to "bit crush" everything on the screen? I am making an educational game and am looking for options for a visual effect to show when something is "incorrect". So I am looking to see if there's an easy way in OpenGL to temporarily distort everything on the screen (all sprites, everything), like pixelating it or something. Does anyone know if this is possible? I'm using cocos2d, by the way. Edit: here is an example of what I am looking for: on the left, the original; on the right, the pixelated/bit-crushed version. The thing is, I don't want to just do this to a single texture/sprite; I want to take the entire display output, perform this pixelation effect on it, and then revert back to normal after a second or so.

Modern OpenGL sprite drawing not rendering. I'm trying to render sprites based on this tutorial, in LWJGL. Here's what I have so far, with descriptions of the classes I use. Transformable2D is a class that holds transform matrices meant for 2D rendering, and Drawable is just an interface that has a draw() function. Texture is a container for loading and creating textures, and its bind function just sets the active texture to its ID and binds itself to the passed unit parameter. SpriteTechnique is a container for a shader program, and I will paste the shaders here too, although they are the same as in the tutorial. I make the shader and VAO static because every Sprite uses the same VAO and shader. Finally, I pass a View to the draw() function, which holds the necessary info to make an orthographic matrix based on the transforms of the object. This matrix is correct, as I also use it for text rendering, where it works as expected. All of the classes described above work completely; I am 100% sure the problem is not within the backing classes. Here are the GL settings I set in my backend, too:

    glFrontFace(GL_CW);
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
    glEnable(GL_DEPTH_TEST);

So, on to the actual class:

    public class Sprite extends Transformable2D implements Drawable {
        private Texture texture;
        private Vector3f color;

        private static SpriteTechnique tech = new SpriteTechnique();
        private static boolean isRendererInitialized;
        private static int vao;

        public Sprite(Texture texture) {
            this.texture = texture;
            color = new Vector3f();
        }

        public static boolean initRenderer() {
            if (isRendererInitialized)
                return true;

            if (!tech.init())
                return false;

            float[] vertices = {
                // Pos      Tex
                0.0f, 1.0f, 0.0f, 1.0f,
                1.0f, 0.0f, 1.0f, 0.0f,
                0.0f, 0.0f, 0.0f, 0.0f,

                0.0f, 1.0f, 0.0f, 1.0f,
                1.0f, 1.0f, 1.0f, 1.0f,
                1.0f, 0.0f, 1.0f, 0.0f
            };
            FloatBuffer vertexBuf = BufferUtils.createFloatBuffer(vertices.length);
            vertexBuf.put(vertices);
            vertexBuf.flip();

            vao = glGenVertexArrays();
            glBindVertexArray(vao);

            int vbo = glGenBuffers();
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, vertexBuf, GL_STATIC_DRAW);

            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 4, GL_FLOAT, false, 16, 0);

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);

            isRendererInitialized = true;
            return true;
        }

        @Override
        public void draw(View v, RenderStates states) {
            if (!isRendererInitialized) {
                System.err.println("Sprite renderer must be initialized before drawing sprites!");
                return;
            }

            tech.enable();
            tech.setTextureUnit(COLOR_TEXTURE_UNIT_INDEX);
            tech.setColor(color);
            tech.setProjection(getOrthoTrans(v).getMatrix());
            texture.bind(COLOR_TEXTURE_UNIT);

            glBindVertexArray(vao);
            glDrawArrays(GL_TRIANGLES, 0, 6);
            glBindVertexArray(0);
        }
    }

Here are my shaders. Sprite.vs:

    #version 330 core
    layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>

    out vec2 TexCoords;

    uniform mat4 gProjection;

    void main()
    {
        TexCoords = vertex.zw;
        gl_Position = gProjection * vec4(vertex.xy, 0.0, 1.0);
    }

Sprite.fs:

    #version 330 core
    in vec2 TexCoords;
    out vec4 color;

    uniform sampler2D gSpriteTexture;
    uniform vec3 gSpriteColor;

    void main()
    {
        color = vec4(gSpriteColor, 1.0) * texture(gSpriteTexture, TexCoords);
    }

When I try to render a Sprite, however, nothing appears on the screen, not even a black quad. I've tried to scale the quad, but to no avail. What am I doing wrong here?

1
Texture mapping artifacts at triangle edges (for thin triangles?). I'm working on generating programmatic textures for 3D models and showing them in WebGL using Three.js. To do this I generate a texture image (PNG) that contains per-triangle textures (a rectangle on the image that is a rasterization of the 3D triangle) and an associated UV map. To prevent bleeding I generate a 2-pixel border around each triangle's texture rectangle in the resulting PNG. The rest of the texture image is black. This results in visible artifact lines when rendering. The texture sampler is set to use NearestFilter for magnification and minification, and the texture's wrapping is ClampToEdge for both S and T, which makes things even more confusing. I've also tried with bigger per-triangle image borders, and the problem still persists. This is the texture I generate: These lines seem to overlap with thin triangles, which are rasterized to 2 pixels along the shortest axis. I'm sure there is no physical gap in the model, because it renders fine as a solid, so it must be a texture-sampling-related thing. This is also confirmed by the fact that the colour of the lines is affected by the colour of the "empty" parts of the texture: if I change those, the line colours also change. I've also tried offsetting the UVs by 0.5 / tex_size, and the same thing is rendered. Any help would be useful. Thanks!
1
Transparent parts of texture are opaque black instead. I render a sprite twice, one on top of the other. The sprites have transparent parts, so I should be able to see the bottom sprite under the top sprite. Instead, the transparent parts are black (the clear colour) and opaque, and the topmost sprite blocks the bottom sprite. My fragment shader is trivial:

    uniform sampler2D texture;
    varying vec2 f_texcoord;
    void main()
    {
        gl_FragColor = texture2D(texture, f_texcoord);
    }

I have glEnable(GL_BLEND) and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in my initialization code. My texture comes from a PNG file that I load with libpng. I'm sure to use GL_RGBA when initializing the texture with glTexImage2D (otherwise the sprites look like noise). Edit: here's a screenshot.
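For reference, a minimal state-and-ordering sketch that gives correct sprite transparency (hedged: it assumes a depth buffer is in use, in which case draw order matters because depth writes from the front sprite's transparent texels would otherwise occlude the sprite behind it):

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // With depth testing on, either draw transparent sprites back to front,
    // or stop them from writing depth at all:
    glDepthMask(GL_FALSE);   // transparent texels no longer write depth
    drawBottomSprite();      // hypothetical draw calls for the two sprites
    drawTopSprite();
    glDepthMask(GL_TRUE);

If the transparent parts come out as the clear colour even without a second sprite, it is also worth confirming that blending is enabled on the same context that does the drawing.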
1
How to calculate the normal from a normal map in world space? (OpenGL) I'm trying to do normal mapping in a deferred renderer and I'm stuck on how to implement normal maps. I have a bool that passes whether or not to use a normal-mapped value and thus whether to calculate the TBN matrix. My vertex code for the geometry pass looks as follows:

    #version 410 core
    layout (location = 0) in vec3 aPos;
    layout (location = 1) in vec3 aNormal;
    layout (location = 2) in vec2 aTexCoords;
    layout (location = 3) in vec3 aTangent;   // Optional
    layout (location = 4) in vec3 aBitangent; // Optional

    out vec3 FragPos;
    out vec2 TexCoords;
    out vec3 Normal;
    out mat3 TBN;

    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;
    uniform bool hasNormalMap;

    void main()
    {
        vec4 worldPos = model * vec4(aPos, 1.0);
        FragPos = worldPos.xyz;
        TexCoords = aTexCoords;
        Normal = transpose(inverse(mat3(model))) * aNormal;

        if (hasNormalMap)
        {
            vec3 T = normalize(vec3(model * vec4(aTangent, 0.0)));
            vec3 N = normalize(vec3(model * vec4(aNormal, 0.0)));
            // re-orthogonalize T with respect to N
            T = normalize(T - dot(T, N) * N);
            // then retrieve perpendicular vector B with the cross product of T and N
            vec3 B = cross(N, T);
            mat3 TBN = mat3(T, B, N);
        }

        gl_Position = projection * view * worldPos;
    }

Here is where I am confused: in my calculation, I multiplied T and N by the model matrix, which should have moved them into world space. Now transposing (T, B, N) should move me back into model space (I think; I'm not sure). In the fragment shader, how do I use the TBN to calculate the normal in world space? If there are better approaches, they are welcome. Thank you. Update: I removed the transposing of the TBN, as there's no reason to transform into tangent space if we want to pass it in world space. Now that we have the TBN matrix in the fragment shader, how do we apply it so that the normal is the correct value for lighting? Currently I've done:

    #version 410 core
    layout (location = 0) out vec3 gPosition;
    layout (location = 1) out vec3 gNormal;
    layout (location = 2) out vec4 gAlbedoSpec;

    in vec2 TexCoords;
    in vec3 FragPos;
    in vec3 Normal;
    in mat3 TBN;

    struct Material {
        sampler2D diffuseMap;
        sampler2D specularMap;
        sampler2D normalMap;
        float shininess;
    };
    uniform Material material;
    uniform bool hasNormalMap;

    void main()
    {
        // store the fragment position vector in the first gbuffer texture
        gPosition = FragPos;
        // also store the per-fragment normals into the gbuffer
        gNormal = normalize(Normal);
        if (hasNormalMap)
        {
            gNormal = texture(material.normalMap, TexCoords).rgb * TBN;
            gNormal = normalize(gNormal);
        }
        // and the diffuse per-fragment color
        gAlbedoSpec.rgb = texture(material.diffuseMap, TexCoords).rgb;
        // store specular intensity in gAlbedoSpec's alpha component
        gAlbedoSpec.a = texture(material.specularMap, TexCoords).r;
    }

but that doesn't feel right. I imagine that I would transform the sampled value from the normal map by the TBN matrix to get it in world space. Am I missing something?
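For reference, the usual world-space application is close to the fragment code above but with one extra step: the sampled value is a colour in [0,1] and has to be remapped to [-1,1] before the matrix is applied. A sketch, assuming the interpolated TBN columns are the world-space tangent, bitangent and normal (note also that `mat3 TBN = mat3(T, B, N);` inside the vertex shader's if-block declares a new local that shadows the `out mat3 TBN`, so the interpolated TBN may never actually be written):

    vec3 n = texture(material.normalMap, TexCoords).rgb;
    n = n * 2.0 - 1.0;              // remap colour range [0,1] to vector range [-1,1]
    gNormal = normalize(TBN * n);   // tangent space -> world space (column convention)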
1
OpenGL two-point perspective view. Summary: I want vertical lines to be drawn as vertical lines regardless of their position relative to the camera (distance/offset), and I want the projection to be perspective so the furthest objects from the camera look smaller while keeping their respective proportions. What am I trying to achieve: a two-(vanishing-)point perspective like this one: http://insidetheoutline.com/wp-content/uploads/2019/07/2-Point-Perspective-Cityscape.jpg Why am I at it: I'm really bad at 3D (obviously), therefore I'm trying to replace character models with 2D sprites, like in Populous 3 or "The Game" quest in Fable 3. I also want to have a top-downish view like in isometric games. Alas, perspective distortion totally kills the effect, making sprites look skewed whenever they are off centre. What am I doing right now:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(FOV, wndWidth / wndHeight, zNear, zFar);

Googling led me to this MV matrix, alas it's not working for me:

    { 1, 0, 0, 0,
      0, 1, 0, 0,
      0, camY / camZ, camZ, 0,
      0, 0, 0, 1 }

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(cam);
    camX = camY = camZ = 50;
    gluLookAt(camX, camY, camZ, 0, 0, 0, 0, 0, 1);

Then I draw a uniform grid (-5..5 by -5..5) and a slightly off-centre wireframe box to test if they are drawn correctly. Does it work? No, it doesn't. The provided matrix produces no result at all, so I had to change it to { 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, camY / camZ, camZ, 1 }. Just like any text on two-point perspective says: "Set one of the values in the last column to zero". I tried zero in every position except the last one, to no avail. Generally, I get something like this: the rightmost vertical line, the one that aligns with the screen centre, is vertical, just as I want all of them to be, but the others are skewed away. What else I managed to google: not much, really. I found an example that kinda looks like something I want to achieve, but they place the camera at ground level with zero Z delta, effectively eliminating the third vanishing point. In some other example, they just narrowed the FOV to some absurdly low value, making it look like a two-point perspective to some extent. I'm afraid I'm missing or misunderstanding something. I'm also not sure the desired output will in fact look like I want, and I won't discard that conclusion if it doesn't.
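For what it's worth, the photographic equivalent of a two-point perspective is a shift lens: the view direction stays horizontal (so verticals stay vertical), and the frustum window is shifted vertically to frame the subject instead of pitching the camera. In fixed-function GL that is an asymmetric glFrustum. A sketch (axes here assume y-up, so a z-up setup like yours would swap components; `shift` is a hypothetical amount to tune):

    /* Keep the view direction level; shift the frustum window instead of pitching. */
    float n = 0.1f, f = 1000.0f;
    float halfH = n * tanf(FOV * 0.5f * (float)M_PI / 180.0f);
    float halfW = halfH * aspect;
    float shift = 0.6f * halfH;              /* tune: how far to "look" up/down    */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-halfW, halfW, -halfH + shift, halfH + shift, n, f);
    /* View matrix with no pitch: verticals project to verticals everywhere. */
    gluLookAt(camX, camY, camZ, camX + 1.0f, camY, camZ, 0.0f, 1.0f, 0.0f);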
1
Can I create a custom framebuffer and render to it in cocos2d-x? I want to do post-processing effects, so I was thinking: could I just make a custom framebuffer in cocos2d-x, like in plain OpenGL, and render ALL the scenes' objects into it? I want a single framebuffer for all scenes. Is there any way? Thanks.
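For reference, a minimal raw-GL render-target sketch (nothing cocos2d-x-specific; the idea is to bind this FBO, draw the scene, unbind, then draw colorTex through a post-processing shader to the default framebuffer):

    GLuint fbo, colorTex, depthRbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &colorTex);                       /* colour attachment */
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glGenRenderbuffers(1, &depthRbo);                  /* depth attachment */
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRbo);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        { /* handle error */ }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);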
1
Rendering without VAOs/VBOs? I am trying to port a demo I found on PositionBasedDynamics. It has a generic function which does the rendering, and in their example it works, but they don't generate or bind any Vertex Array Object or Vertex Buffer Object, even though they use core OpenGL and shaders. The function is this:

    template <class PositionData>
    void Visualization::drawTexturedMesh(const PositionData& pd, const IndexedFaceMesh& mesh,
                                         const unsigned int offset, const float* const color, GLuint text)
    {
        // draw mesh
        const unsigned int* faces = mesh.getFaces().data();
        const unsigned int nFaces = mesh.numFaces();
        const Vector3r* vertexNormals = mesh.getVertexNormals().data();
        const Vector2r* uvs = mesh.getUVs().data();

        std::cout << nFaces << std::endl;

        glBindTexture(GL_TEXTURE_2D, text);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_REAL, GL_FALSE, 0, &pd.getPosition(offset)[0]);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_REAL, GL_FALSE, 0, &uvs[0][0]);
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 3, GL_REAL, GL_FALSE, 0, &vertexNormals[0][0]);

        glDrawElements(GL_TRIANGLES, (GLsizei)3 * mesh.numFaces(), GL_UNSIGNED_INT, mesh.getFaces().data());

        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        glDisableVertexAttribArray(2);
        glBindTexture(GL_TEXTURE_2D, 0);
    }

I did the same thing and it renders a white screen. I used RenderDoc to check what's going on, and it shows this: https://puu.sh/rM1wB/80620ade8d.png How can they get it to work while I can't?
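For context: client-side attribute pointers like these only work where a default vertex array object exists, i.e. in a compatibility (or pre-3.1) context, which is presumably what the PBD demo creates. A core profile context has no default VAO, so the attribute and draw calls above have nothing to record into. A sketch of the minimal concession core requires (note that a strict core profile additionally insists the data come from buffer objects rather than raw CPU pointers, so matching the demo exactly means requesting a compatibility context):

    // One-time setup: core profile needs *some* VAO bound before any
    // glVertexAttribPointer / draw call will take effect.
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);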
1
Getting the real fragment depth in GLSL. I am trying to write a simple GLSL shader that just renders the real (not normalized) depth of a fragment as a floating-point value. So far, I've figured out how to get the depth of a vertex and just interpolate it across the fragments, but I'm starting to realize that this is not going to give me the correct depth of each fragment, is it? Here's what I have so far. Vertex shader:

    uniform mat4 world;
    uniform mat4 view;
    uniform mat4 projection;
    attribute vec3 position;
    attribute vec3 color;
    varying float distToCamera;

    void main()
    {
        vec4 cs_position = view * world * vec4(position, 1.0);
        distToCamera = -cs_position.z;
        gl_Position = projection * cs_position;
    }

Fragment shader:

    varying float distToCamera;

    void main()
    {
        gl_FragColor = vec4(distToCamera, distToCamera, distToCamera, 1.0);
    }
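Side note: varyings are interpolated perspective-correctly, so an interpolated view-space z does give a sensible per-fragment value. The alternative is to reconstruct the linear depth from the hardware depth instead. A sketch, assuming a standard perspective projection and that you pass the same near/far planes in as uniforms:

    uniform float near;   // must match the projection matrix
    uniform float far;

    float linearDepth()
    {
        float ndcZ = gl_FragCoord.z * 2.0 - 1.0;   // window [0,1] -> NDC [-1,1]
        // Invert the perspective depth mapping back to eye-space distance.
        return 2.0 * near * far / (far + near - ndcZ * (far - near));
    }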
1
Shadow map shimmering, indexing outside the shadow map. I have tried to reduce the shadow shimmering/flickering using the technique described here: http://msdn.microsoft.com/en-us/library/windows/desktop/ee416324%28v=vs.85%29.aspx It works as I want and the shimmering is reduced, but sometimes I get artifacts. It looks like my code tries to index space outside the shadow map. The article above writes about it, but I didn't find a solution. When I played with the code I also got black strips on the corners. Code:

    // reduce shadow shimmering/flickering
    Vector2 vecWorldUnitsPerTexel = Vector2(D.x() / (float)D.shadowMapSize(),
                                            D.y() / (float)D.shadowMapSize());
    // get only x and y dimensions
    Vector2 min2D = min.vector2(), max2D = max.vector2();
    min2D /= vecWorldUnitsPerTexel;
    min2D = Round(min2D);
    min2D *= vecWorldUnitsPerTexel;
    max2D /= vecWorldUnitsPerTexel;
    max2D = Round(max2D);
    max2D *= vecWorldUnitsPerTexel;
    min.set(min2D, min.z);
    max.set(max2D, max.z);

    // crop matrix based on this article:
    // https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch10.html
    Vector scale;
    Vector offset;
    scale.x = 2.0f / (max.x - min.x);
    scale.y = 2.0f / (max.y - min.y);
    scale.z = 1.0f / (max.z - min.z);
    offset.x = -0.5f * (max.x + min.x) * scale.x;
    offset.y = -0.5f * (max.y + min.y) * scale.y;
    offset.z = -min.z * scale.z;
    Matrix4 m;
    m.x = Vector4(scale.x, 0, 0, 0);
    m.y = Vector4(0, scale.y, 0, 0);
    m.z = Vector4(0, 0, scale.z, 0);
    m.w = Vector4(offset.x, offset.y, offset.z, 1.0f);

I think that I should store a slightly larger area in the depth map, but I'm not sure how to do this. I tried to change the scale of the crop matrix, but it doesn't help. EDIT: It seems I've found a solution. When rounding (or flooring) the min and max values, I subtract one from the min value and add one to the max value. This makes the shadow map contain a slightly larger area, and I don't see any artifacts.
1
How to compose a matrix to perform isometric (dimetric) projection of a world coordinate? I have a 2D unit vector containing a world coordinate (the player's direction), and I want to convert that to screen coordinates (classic isometric tiles). I'm aware I can achieve this by rotating around the relevant axis, but I want to see and understand how to do this using a purely matrix-based approach. Partly because I'm learning 'modern OpenGL' (v2+), and partly because I will want to use this same technique for other things, so I need a solid understanding, and my math ability is a little lacking. If needed: my screen's coordinate system has its origin at top left, with x & y pointing right and down respectively. Also, my vertex positions are converted to the NDC range in my vertex shader, if that's relevant. Language is C++ with no supporting libraries.
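A minimal worked example of the classic 2:1 dimetric mapping as a matrix (GLM used here only for brevity; the same two columns can be written by hand). With screen x right and y down as described, world +x runs down-right on screen and world +y runs down-left, so for a tile of tileW x tileH:

    // screen.x = (wx - wy) * tileW/2
    // screen.y = (wx + wy) * tileH/2
    glm::vec2 worldToScreen(glm::vec2 w, float tileW, float tileH)
    {
        glm::mat2 iso( tileW * 0.5f, tileH * 0.5f,    // first column:  image of world +x
                      -tileW * 0.5f, tileH * 0.5f);   // second column: image of world +y
        return iso * w;
    }

Inverting that 2x2 matrix gives the screen-to-world (picking) transform, and embedding the same two columns into a 4x4 lets you compose it with the NDC conversion in the vertex shader.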
1
Using multiple shaders in OpenGL 3.3. Guys, I have a question on how to use multiple shaders in my app. The app is simple: I have a 3D scene (say, a simple game) and I want to show some 2D GUI in the scene. I was following this tutorial on how to add font rendering to my scene. One difference is that I am using Java and LWJGL, but everything is implemented as in the tutorial. So I have 2 sets of shaders (2 programs): the 1st handles the 3D models and the scene overall. I added the second set of shaders, just copied from the tutorial. Here they are. Vertex:

    #version 330
    in vec2 position;
    in vec2 texcoord;
    out vec2 TexCoords;
    uniform mat4 projection;
    void main()
    {
        gl_Position = projection * vec4(position, 0.0, 1.0);
        TexCoords = texcoord;
    }

and fragment:

    #version 330
    in vec2 TexCoords;
    out vec4 color;
    uniform sampler2D text;
    uniform vec3 textColor;
    void main()
    {
        vec4 sampled = vec4(1.0, 1.0, 1.0, texture(text, TexCoords).r);
        color = vec4(textColor, 1.0) * sampled;
    }

I compile the shaders and link them into separate programs (so I have modelProgram and fontProgram). However, when I run my application, I see errors in the console (although the application runs fine):

    WARNING: Output of vertex shader 'TexCoords' not read by fragment shader
    ERROR: Input of fragment shader 'vNormal' not written by vertex shader
    ERROR: Input of fragment shader 'vTexCoord' not written by vertex shader
    ERROR: Input of fragment shader 'vPosition' not written by vertex shader

As you can see, TexCoords is an out variable in font.vs.glsl, and the other 3 are in variables in model.fs.glsl, so they belong to different sets of shaders, different programs. My question is: why does this happen? It looks like the pipeline tries to combine one program with another, although the application runs smoothly. The other problem I have is that I do not see any text rendered. I don't know whether this is caused by the above or by something else. Any help will be appreciated! Thank you.
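A hedged reading of those messages: they look exactly like what the GLSL linker prints when one program ends up with mismatched stages, e.g. the font vertex shader linked together with the model fragment shader (vNormal/vTexCoord/vPosition are the model interface, TexCoords the font one). It is worth double-checking that each program object gets exactly its own pair of shader objects attached before linking. The pattern, shown as plain GL calls (identical names in LWJGL):

    int modelProg = glCreateProgram();
    glAttachShader(modelProg, modelVS);   // only the model pair on this program
    glAttachShader(modelProg, modelFS);
    glLinkProgram(modelProg);

    int fontProg = glCreateProgram();
    glAttachShader(fontProg, fontVS);     // only the font pair on this one
    glAttachShader(fontProg, fontFS);
    glLinkProgram(fontProg);

    // per frame:
    glUseProgram(modelProg);  // ... draw 3D scene ...
    glUseProgram(fontProg);   // ... draw text ...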
1
OpenGL: Understanding the relationship between Model, View and World matrices. I am having a bit of trouble understanding how these matrices work and how to set them up in relation to one another to get a proper system running. In my understanding, the model matrix is the matrix of an object, for example a cube or a sphere; there will be many of these in the application/game. The world matrix is the matrix which defines the origin of the 3D world, the starting point. And the view matrix is the "camera": everything gets translated with this to make sure you have the illusion of an actual camera, when in fact everything is moving instead of this matrix? I am a bit lost here, so I was hoping someone here could help me understand this properly. Does every modelMatrix get translated/multiplied with the world matrix, and the worldMatrix then with the viewMatrix? Or does every modelMatrix get translated/multiplied with the viewMatrix, and then that with the worldMatrix? How do all these matrices relate, and how do you set up a world with multiple objects and a "camera"? EDIT: Thanks a lot for the feedback already. I did some googling as well, and I think I do understand it a bit better now; however, would it be possible to get some pseudo-code advice?

    projectionMatrix = makePerspective(45, width, height, 0.1, 1000.0, projectionMatrix)

    modelMatrix = identity(modelMatrix)
    translate(modelMatrix, 0.0, 0.0, -10.0)   // move back 10 on z axis

    viewMatrix = identity(viewMatrix)
    // do some translation based on input with viewMatrix

Do I multiply/translate the viewMatrix with the modelMatrix, or the other way around? And what then? I currently have a draw method set up in such a way that it only needs 2 matrices as arguments to draw. Here is my draw method:

    draw(matrix1, matrix2) {
        bindBuffer(ARRAY_BUFFER, cubeVertexPositionBuffer)
        vertexAttribPointer(shaderProgram.getShaderProgram().vertexPositionAttribute,
                            cubeVertexPositionBuffer.itemSize, FLOAT, false, 0, 0)
        bindBuffer(ARRAY_BUFFER, cubeVertexColorBuffer)
        vertexAttribPointer(shaderProgram.getShaderProgram().vertexColorAttribute,
                            cubeVertexColorBuffer.itemSize, FLOAT, false, 0, 0)
        bindBuffer(ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer)
        setMatrixUniforms(shaderProgram, matrix1, matrix2)
        drawElements(TRIANGLES, cubeVertexIndexBuffer.numItems, UNSIGNED_SHORT, 0)
    }

What are those matrices supposed to be? Thanks a lot in advance again, guys.
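For reference, a typical modern setup, shown as a GLM sketch with illustrative names: there is usually no separate "world matrix" uniform at all. Each model matrix places one object in the world, the view matrix is the inverse of the camera's placement, and the three are multiplied per object, applied right to left:

    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                            width / (float)height, 0.1f, 1000.0f);
    // View = inverse of the camera transform: moving the camera right is
    // implemented by moving the whole world left.
    glm::mat4 view = glm::translate(glm::mat4(1.0f), -cameraPos);

    for (const Object& obj : objects) {
        glm::mat4 model = glm::translate(glm::mat4(1.0f), obj.position); // per object
        glm::mat4 mvp = projection * view * model;  // model, then view, then projection
        drawWithMatrix(obj, mvp);                   // e.g. upload mvp as one uniform
    }

In terms of the two-matrix draw method above, a common split is matrix1 = projection and matrix2 = view * model (often called the model-view matrix).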
1
Is this equivalent to D3DXVec3TransformNormal? I was porting some code from DirectX to OpenGL. I have the following code:

    glm::mat4 rotation(1.0f);
    rotation = glm::rotate(rotation, degrees, m_up);
    m_look = m_look * rotation;

where rotation is a mat4 and m_look is a vec3. I wanted to know if the last line here, m_look * rotation, has the same effect as

    D3DXVec3TransformNormal(&m_look, &m_look, &rotation);

in DirectX. If not, then what would be the correct alternative?
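For what it's worth, a common GLM equivalent (a sketch, not the one true form): TransformNormal applies the matrix while ignoring its translation, which in homogeneous terms is a transform with w = 0. Note also the convention difference: D3DX treats vectors as rows (v * M), GLM as columns (M * v), so the multiplication order flips:

    // Direction vector: w = 0 discards the matrix's translation row/column.
    m_look = glm::vec3(rotation * glm::vec4(m_look, 0.0f));

D3DXVec3TransformNormal does not renormalize its result, so no normalize() is needed to match it exactly; add one only if the matrix may scale.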
1
Cumulative transformation matrices for hierarchical object transformations. I'm having a small issue with my design for hierarchical 3D objects; I'll try to sum things up concisely. Every object has a vector3 for its position, scale, and rotation, as well as a std::vector of child objects. An object's render function takes an optional parameter for a parent's transformation matrix, which defaults to the identity matrix. In this render function, the object's transformation matrix is computed in this form:

    transformation = parentTransformation * translationMatrix * rotationMatrix * scaleMatrix

After the matrix is computed, all of the children's render functions are called with the computed matrix being passed, and then the actual render for THIS object happens. My problem seems to be with the scaling bit. For this case, I have a wooden manikin built out of several spheres, all translated and scaled to give shape. For the legs and arms, I scaled the sphere pieces, obviously not uniformly. To avoid the child objects of those body parts also being scaled like that, I scale them by the inverse of that scale factor after scaling the particular body part. However, when I rotate a joint which is a child of one of these oddly scaled pieces, you can see how the scaling is really only in the children's "model space", and as a result they get strangely contorted. Is there a misunderstanding on my part about how objects should inherit their parents' transformations? Here are some images to demonstrate my results. The second image is after rotating the knee joint 80 degrees.
1
OpenGL Transform Feedback output size. I'm working on a particle system using transform feedback, and I would like to know if it is possible to render to anything other than floats, such as half floats, using transform feedback (OpenGL 3.3). It would save some bytes and possibly speed up the process a little bit.
1
How to make a triangle pixelated with GLFW? How can one make a triangle look pixelated without modifying the size of the window in GLFW3? Preferably, I'd like to accomplish this without the use of shaders; for example, by setting the size of the framebuffer to be 1/8th the size of the window. I've created an Xcode test project for this and tried setting a small window size to create a small framebuffer, then changing the window size to be big afterwards, but the framebuffer size always changes to match the window size. Example project output: Desired output:
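A sketch of the FBO route (no custom shaders needed): render into a small off-screen framebuffer, then stretch-blit it to the window with nearest filtering. This assumes a 3.0+ context for glBlitFramebuffer; winW/winH are the window's framebuffer dimensions:

    /* setup: an off-screen colour target 1/8 the window size */
    GLuint fbo, tex;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, winW / 8, winH / 8, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

    /* per frame: draw small, then blit big */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, winW / 8, winH / 8);
    drawTriangle();                                   /* your normal drawing */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBlitFramebuffer(0, 0, winW / 8, winH / 8, 0, 0, winW, winH,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST); /* nearest = chunky pixels */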
1
Indexed draw vs draw array. Lately I've wondered which draw command is faster, drawArrays or drawElements. I know the difference between them: drawArrays just draws every vertex in the order they were provided, and drawElements draws vertices based on the provided indices. But I'm still curious which command is faster, or when I should use drawArrays instead of drawElements and vice versa.
1
Does gluLookAt add to or set the view matrix variables? I'm trying to rotate the camera view in PyOpenGL, but it's not working well. The weirdest behavior I've noticed is that putting gluLookAt in a loop seems to change the camera view, even when I'm not changing the inputs as the loop continues. So while I'd expect something like gluLookAt(0,0,0, 0,0,-5, 0,0,1) to keep the camera constantly pointing downwards, it seems to rotate the view in some strange way, with the objects being rendered leaving the maximum clipping radius after a while. My question is: does gluLookAt take into account the previous camera settings, or do I need to look for something else wrong in my code?
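For reference: gluLookAt multiplies the current matrix rather than replacing it, so calling it every frame without a reset accumulates transforms, which matches the drifting described. The usual per-frame pattern (C-style calls shown; PyOpenGL uses the same names):

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                         /* reset first, or lookAt compounds */
    gluLookAt(0, 0, 0,  0, 0, -5,  0, 1, 0);  /* note: the up vector must not be  */
                                              /* parallel to the view direction;  */
                                              /* (0,0,1) while looking along -z   */
                                              /* makes gluLookAt degenerate       */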
1
Blending and shadow mapping? I am trying to implement shadow mapping. Currently I have 2 point lights and 1 global ambient light source, and my rendering loop looks roughly like this (the details are not relevant):

    void OpenGLRenderer::DrawRenderables(const uint32_t windowWidth, const uint32_t windowHeight,
                                         const RenderQueue& renderQueue, const RenderableLighting& lighting)
    {
        GLCALL(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));

        for (uint32_t lightIndex = 0; lightIndex < lighting.mUsedLights; lightIndex++)
        {
            if (lighting.mLights[lightIndex].mLightType == LightType::LIGHT_TYPE_POINT)
            {
                GLCALL(glUseProgram(mShadowMapProgram.mProgramHandle));
                GLCALL(glBindFramebuffer(GL_FRAMEBUFFER, mFrameBuffer));
                GLCALL(glViewport(0, 0, mShadowTextureWidth, mShadowTextureHeight));
                for (uint32_t faceNum = 0; faceNum < CUBEMAP_NUM_FACES; faceNum++)
                {
                    GLCALL(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                                  GL_TEXTURE_CUBE_MAP_POSITIVE_X + faceNum,
                                                  mShadowMapTexture, 0));
                    GLCALL(glClear(GL_DEPTH_BUFFER_BIT));
                    ShadowPass();
                }
                GLCALL(glBindFramebuffer(GL_FRAMEBUFFER, 0));
            }

            // final pass
            GLCALL(glUseProgram(mDefaultProgram.mProgramHandle));
            GLCALL(glViewport(0, 0, (GLsizei)windowWidth, (GLsizei)windowHeight));
            // enable blending, how?
            ShadingPass();
        }
        GLCALL(glUseProgram(0));
    }

The problem is the screen is only shown with the last light in the light list, for example the ambient light, therefore ignoring the effects of the point lights. I assume I somehow need to do blending, but I was wondering how and where this is done? Thanks.
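The usual multi-pass accumulation pattern (a sketch; the exact hook points depend on the engine): let the first light write the base image, then add each further light's contribution with additive blending, using a depth test that accepts the already-written depth so the geometry isn't re-occluded by itself:

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (uint32_t i = 0; i < numLights; i++)
    {
        // ... per-light shadow pass into the FBO, as in the loop above ...

        if (i == 0) {
            glDisable(GL_BLEND);            // first light writes the base image
            glDepthFunc(GL_LESS);
        } else {
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE);    // add this light's contribution on top
            glDepthMask(GL_FALSE);          // depth was already laid down by pass 0
            glDepthFunc(GL_LEQUAL);         // accept fragments at the stored depth
        }
        shadingPassForLight(i);             // hypothetical per-light shading call
    }
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);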
1
LWJGL 3.0.0 glMapBuffer. I am currently working on a project, and in C I usually use the function ByteBuffer glMapBuffer(int target, int access). Usage:

    FloatBuffer buffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY)
                             .order(ByteOrder.nativeOrder()).asFloatBuffer();

But in Java (LWJGL 3.0.0) this function returns null, and because of this it throws a NullPointerException. Does anyone have any idea how to use this function in Java? There are several overloads:

    glMapBuffer(int target, int access)
    glMapBuffer(int target, int access, ByteBuffer old_buffer)
    glMapBuffer(int target, int access, long length, ByteBuffer old_buffer)

I hope this is specific enough; thank you for your help :D
1
How can I convert a mouse click to a ray? I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray intersection code with my world. I am using my own matrix, vector and ray classes, and they all work as expected. However, when I try to convert the ray to world coordinates, my far point always ends up as (0,0,0), and so my ray goes from the mouse click to the centre of the object space, rather than through it. (The x and y coordinates of near and far are identical; they differ only in the z coordinates, where they are negatives of each other.)

    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp);
    matrix_t mv, p;
    glGetFloatv(GL_MODELVIEW_MATRIX, mv.f);
    glGetFloatv(GL_PROJECTION_MATRIX, p.f);
    const matrix_t inv = (mv * p).inverse();
    const float unit_x = (2.0f * ((float)(x - vp[0]) / (vp[2] - vp[0]))) - 1.0f,
                unit_y = 1.0f - (2.0f * ((float)(y - vp[1]) / (vp[3] - vp[1])));
    const vec_t near(vec_t(unit_x, unit_y, -1) * inv);
    const vec_t far(vec_t(unit_x, unit_y, 1) * inv);
    ray = ray_t(near, far - near);

What have I got wrong? (How do you unproject the mouse point?)
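One detail that commonly produces this exact symptom (hedged: based only on the code shown): the result of transforming by the inverse view-projection is a homogeneous coordinate, and the points must be divided by w before use. A sketch with the same naming; vec4_t and .xyz() are illustrative stand-ins for whatever your math classes provide:

    vec4_t nearH = vec4_t(unit_x, unit_y, -1, 1) * inv;   // keep w this time
    vec4_t farH  = vec4_t(unit_x, unit_y,  1, 1) * inv;
    vec_t nearP = nearH.xyz() / nearH.w;                  // perspective divide
    vec_t farP  = farH.xyz() / farH.w;
    ray = ray_t(nearP, farP - nearP);

Dropping w silently is equivalent to skipping the divide, which badly distorts the far point in particular (its w is far from 1).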
1
I get weird perspective using GLM where the depth is flipped. Please help. The depth is rendered wrong and I can't figure out why.

    using namespace std;
    using namespace glm;

    int width = 640;
    int height = 480;
    float aspect = (float)width / height;
    int fps = 60;

    void start()
    {
        glClearColor(0.0f, 0.2f, 0.6f, 1);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);
        glFrontFace(GL_CW);
        glCullFace(GL_BACK);
        glEnable(GL_CULL_FACE);

        mat4 projectionMatrix = perspective(radians(60.0f), aspect, 0.1f, 1000.0f);
        mat4 modelViewMatrix = translate(vec3(0, 25, 100));
        modelViewMatrix *= rotate(radians(50.0f), vec3(0.f, 1.f, 0.f));
        modelViewMatrix *= rotate(radians(-90.0f), vec3(1.f, 0.f, 0.f));
        mat4 modelViewProjMatrix = projectionMatrix * modelViewMatrix;

        Shader shader;
        shader.load("vertex.glsl", "fragment.glsl");
        shader.bind();
        shader.uniform("modelViewProjMatrix", value_ptr(modelViewProjMatrix));

        Model model;
        model.load("res/bob.md5mesh");

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        model.render();
        glutSwapBuffers();
    }

    void mainloop(int val)
    {
        glutTimerFunc(1000 / fps, mainloop, val);
    }

    int main(int argc, char** args)
    {
        glutInit(&argc, args);
        glutInitContextVersion(3, 0);
        glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
        glutInitWindowSize(width, height);
        glutCreateWindow("Test");
        glewInit();
        start();
        glutTimerFunc(1000 / fps, mainloop, 0);
        glutMainLoop();
        return 0;
    }

However, I found a way to fix it by multiplying the perspective by scale(1, 1, -1), although it seems like a hack to me:

    mat4 projectionMatrix = perspective(radians(60.0f), aspect, 0.1f, 1000.0f) * scale(vec3(1, 1, -1));
    mat4 modelViewMatrix = translate(vec3(0, 25, 100));

I'd really appreciate it if someone could tell me what I'm doing wrong.
1
Android Hardware Scaler. I was reading through this (using the hardware scaler for performance) and am a little confused by it. It says all you need to do to invoke the scaler is to set it like so:

    surfaceView = new GLSurfaceView(this);
    surfaceView.getHolder().setFixedSize(1280, 720);

It says to use a fixed size for all devices. On my Google Nexus 10 (which has a resolution of 2560 x 1600) I've used 1280 x 800. It then goes on to say that the system will scale it to match the device's actual resolution. So this is what I get: What am I missing here? Unfortunately the above link says that examples will be added soon, but as yet there is nothing.
1
How to display three-dimensional objects in a 2D game using OpenGL and orthographic projection? I am creating a 2D (2.5D) game using OpenGL and orthographic projection. It is simple to have relatively flat objects, e.g. characters: I simply use a quad with a texture of the character and move that about. However, what is the best way to draw big objects that have depth, e.g. a big house? Do I use one quad with a three-dimensional-looking representation of the house on it, or do I use multiple quads (e.g. front, side, top)? I prefer using one quad with a three-dimensional-looking texture on it. What are the drawbacks to this approach?
1
Unintended stuttering when moving camera. I am using OpenGL and GLFW to make a rendering engine. Things work per se, but there is some weird stuttering happening when I move the camera. Due to the nature of the problem I need to link a YouTube video: https://www.youtube.com/watch?v=XhFrXadnWbs&feature=youtu.be If you pay attention you will see the stuttering happening as I move the camera. I believe this problem occurs because of how I have implemented my camera movement:

    // GLFW key callback
    #define CAM_SPEED 0.2f
    void static key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
    {
        if (key == GLFW_KEY_W) c.translateForward(CAM_SPEED);
        if (key == GLFW_KEY_S) c.translateForward(-CAM_SPEED);
        if (key == GLFW_KEY_A) c.translateSideways(-CAM_SPEED);
        if (key == GLFW_KEY_D) c.translateSideways(CAM_SPEED);
    }

    // Camera code
    void inline translateForward(float speed)
    {
        glm::vec3 hForward = forward;
        hForward.y = 0;
        hForward = normalize(hForward);
        position += hForward * speed;
    }

    void inline translateSideways(float speed)
    {
        position += side * speed;
    }
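For reference: GLFW key callbacks fire on discrete press/repeat events (with the OS key-repeat pause before the repeats start), which produces exactly this kind of stutter when used for continuous movement. The usual pattern is to treat callbacks as edge events only and poll key state once per frame instead. A sketch, where deltaTime is assumed to be your frame time:

    // Each frame, after glfwPollEvents():
    float speed = CAM_SPEED * (float)deltaTime;   // scale by frame time, not per event
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) c.translateForward( speed);
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS) c.translateForward(-speed);
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS) c.translateSideways(-speed);
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) c.translateSideways( speed);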
1
What is an efficient way to deal with large, scrolling background images? In my mobile game you basically just fly up (infinite height) and collect stars. I have many quite large background images, scaled down so that their width is the same as the device width; they are then appended after each other during rendering. Since I implemented these backgrounds, my game runs poorly. I've got about 20 background images with a size of 800x480 each; without backgrounds the game is quite smooth. Does anyone have an idea how to implement this many backgrounds without making the game slow down? The images are used as a 2D texture. If I leave the clouds out of the image and "just" display the blue part, the app still slows down. Showing some code is a bit difficult, because I've got many, many classes which do the loading, rendering and display work. Basically it's done the way Google does it in their "SpriteMethodTest" example here: http://code.google.com/p/apps-for-android/source/browse/trunk/SpriteMethodTest Two of these image sets. First: http://picbox.im/view/b7c8c86abb-01.png Second: http://picbox.im/view/3a8162314a-02.png
1
Separate shader programs or branch in shader? I have a bunch of point lights and directional lights. Instead of checking the light type in the fragment shader and then branching to either the point-light calculation or the directional-light calculation, is it more efficient to use two separate programs, one for point lights and one for directional lights? (Using deferred shading in OpenGL 3.3.)
1
OpenGL: How can I make the edges of this textured circle smoother? I'm building a game and I've applied a certain texture (RAW file) to a circle (GL_POLYGON) in OpenGL. It loads correctly, with the right size and all, but the edges seem a bit jagged, and I would prefer them smooth (without any evidence of a polygon being underneath).

    GLuint loadTex(int wrap)
    {
        GLuint texture;
        int width, height;
        BYTE* data;
        FILE* file;

        file = fopen("tex.raw", "rb");
        if (file == NULL) {
            std::cerr << "Texture file not found" << std::endl;
            return 0;
        }
        width = 128;
        height = 128;
        data = (BYTE*)malloc(width * height * 3);   // allocate the buffer
        fread(data, width * height * 3, 1, file);   // read texture data
        fclose(file);

        glGenTextures(1, &texture);                 // texture name
        glBindTexture(GL_TEXTURE_2D, texture);      // select the current texture
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); // texture parameters
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap ? GL_REPEAT : GL_CLAMP);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap ? GL_REPEAT : GL_CLAMP);
        gluBuild2DMipmaps(GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
        free(data);
        return texture;
    }

    void drawCircle(float cx, float cy, float r, float aspectRatio)
    {
        GLuint t = loadTex(1);
        float angle, radian, x, y, tx, ty, xcos, ysin;
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, t);
        glBegin(GL_POLYGON);
        for (angle = 0.0; angle < 360.0; angle += 2.0) {
            radian = angle * (M_PI / 180.0f);
            xcos = (float)cos(radian);
            ysin = (float)sin(radian);
            x = xcos * r + cx;
            y = ysin * r * aspectRatio + cy;
            tx = xcos * 0.5 + 0.5;
            ty = ysin * 0.5 + 0.5;
            glTexCoord2f(tx, ty);
            glVertex2f(x, y);
        }
        glEnd();
        glDisable(GL_TEXTURE_2D);
    }

Is this a problem from my texture file, or is it from my code? I apply line smoothing before drawing anything (and it is being applied correctly to my lines, I believe).
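For reference: line smoothing only affects lines, not the edges of filled polygons, so the circle's rim stays aliased regardless. The usual fix is multisampling. A GLUT-flavoured sketch (assuming GLUT/freeglut is the windowing layer; other toolkits have an equivalent flag when creating the context):

    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
    /* ... create the window, then in GL setup: */
    glEnable(GL_MULTISAMPLE);   /* antialiases the polygon edge itself */

An alternative that avoids MSAA entirely is to bake the circle's soft edge into the texture's alpha channel and draw an alpha-blended quad instead of a GL_POLYGON fan.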
1
Render trajectory of particle in forcefield using GLSL shader. I want to visualize the flow of particles in some forcefield, e.g. electromagnetic, gravitational, or potential flow in a fluid (streamlines). I was thinking of using a geometry shader, which would look like this (not tested, just a sketch):

    #version 330 core
    layout (points) in;
    layout (line_strip, max_vertices = 64) out;

    uniform float dt;

    vec3 getForce(in vec3 pos) { /* ... don't care now ... */ }

    void main()
    {
        vec3 pos = gl_in[0].gl_Position.xyz;
        for (int i = 0; i < 64; i++) {
            pos += getForce(pos) * dt;   // it's actually a velocity field, but who cares
            gl_Position = vec4(pos, 1.0);
            EmitVertex();
        }
        EndPrimitive();
    }

However, I've read that geometry shaders are very slow (I have not yet tested this). So I wonder if one would be helpful in this situation, or whether tessellation shaders or compute shaders would be suitable for this job and achieve better performance. Background: I have experience with vertex shaders, fragment shaders, OpenCL... but I have never tried geometry, tessellation or compute shaders. EDIT: here is my simple example implementation using a geometry shader in 2D: https://github.com/ProkopHapala/SimpleSimulationEngine/blob/ca16b6f8d72864a8a572ed2a21ded99c5c2cc3e4/cpp/sketches_SDL/OGL3/test_GeometryShader.cpp which plots this (for a velocity field defined by a dipole).
1
360-degree video of my OpenGL game. I want to make a 360-degree video of my OpenGL game. Concerning the rendering: is it enough to render it in OpenGL with a specific projection matrix? If yes, which one? Or do I have to render it into a cube map and then encode that (which will require much more rendering power and be more complicated, which I want to avoid)? And how do I encode a 360-degree video with FFmpeg?
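For context (hedged, as one workable pipeline rather than the only one): a single projection matrix cannot cover a full sphere, so the common route is to render the scene into a cube map each frame, convert it to an equirectangular frame with a full-screen pass, and feed those frames to FFmpeg as ordinary video; 360 players then need spherical metadata injected (e.g. with Google's spatial-media tool), not a special codec. The cubemap-to-equirectangular fragment shader is short:

    #version 330 core
    in vec2 uv;                    // 0..1 across the output frame
    out vec4 color;
    uniform samplerCube scene;     // the per-frame cube map of the game

    void main()
    {
        float lon = (uv.x - 0.5) * 2.0 * 3.14159265;   // longitude: -pi .. pi
        float lat = (uv.y - 0.5) * 3.14159265;         // latitude: -pi/2 .. pi/2
        vec3 dir = vec3(cos(lat) * sin(lon), sin(lat), -cos(lat) * cos(lon));
        color = texture(scene, dir);                   // sample along that direction
    }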
1
How to share values between different shader programs? I am using Unity, but this might concern all types of shaders. I would like to know whether it is possible to share values between different shader passes. Let's imagine that I am computing something in the first pass and I would like to use this value in a second pass: is that possible? If it is possible, how can I do it?
1
libgdx glClearColor not setting the right color? I'm just new to libgdx and trying to understand the example code. The following code sets the background color:

    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    Gdx.gl.glClearColor(60, 181, 0, 0f);

My expected color is green (R 60, G 181, B 0), but the above code shows me yellow in my Android app. Am I doing anything wrong? http://rapid-tools.net/online-color-picker
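For reference (shown as the underlying GL calls, which Gdx.gl mirrors one-to-one): glClearColor takes normalized floats in 0..1, not 0..255 byte values, and it only sets state that the *next* glClear uses, so it belongs before the clear:

    glClearColor(60 / 255.0f, 181 / 255.0f, 0 / 255.0f, 1.0f);  /* normalized RGB */
    glClear(GL_COLOR_BUFFER_BIT);

Passing 60 and 181 gets clamped to 1.0, so the effective colour is (1, 1, 0): exactly the yellow being observed.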
1
Visual Studio 2010 C++ OpenGL project cannot detect .lib files. So I'm trying to start a C++ OpenGL project in Visual Studio 2010, and I have put glut.h and glut32.dll in the project directory, along with the glut.dll and glut32.lib files in a folder named "glut" in the project directory. I have also gone to Project Properties > Linker > Input > Additional Dependencies and typed in "glut32.lib" at the end, and made sure I am not missing a semicolon. When I try doing #include "glut.h" in the Main.cpp file, it complains: error LNK1104: cannot open file 'glut32.libkernel32.lib'. Am I missing any files? What am I doing wrong here?
1
How can I apply my camera to all graphics objects? Is there a way to have the camera be applied to all objects? Can you have the camera affect objects that were made with glVertex3f()? This is the fourth time I have been writing this post; it keeps getting deleted (the first time, it deleted almost all the text when I logged in). Anyway, this is some of the code that does the camera:

    Projection = glm::perspective(fov, display.Ratio(), 0.001f, 1000.0f);
    glm::vec3 Pos = glm::vec3(this->position.x, this->position.y, this->position.z);
    glm::vec3 Target = glm::vec3(this->target.x, this->target.y, this->target.z);
    glm::vec3 Up = glm::vec3(this->up.x, this->up.y, this->up.z);
    glm::mat4 Lookat = glm::lookAt(Pos, Target, Up);
    glm::mat4 ModelV = glm::rotate(glm::mat4(1.0f), 0.0f, Up);
    CamLook = this->Projection * Lookat * ModelV;

CamLook is passed to the shader as MVP. Shader code:

    #version 400
    in vec3 VertexPosition;
    in vec3 VertexColor;
    uniform mat4 MVP;
    out vec3 Color;

    void main()
    {
        Color = vec3(1, 1, 1);
        gl_Position = MVP * vec4(1, 1, 1, 1);
    }

Please, someone tell me what is wrong with this? I have been working on this for almost a month now, and I can't move on until this is done. I also have another shader that does color on the objects.
1
Rotate sphere horizontally around another sphere. I currently have an earth and a moon. What I'm trying to achieve is to have the moon physically rotate around the earth horizontally, along the equator, following a circular path.

    moonAngle = (moonAngle + 0.5f) % 360f;
    xPath = (float) Math.sin(Math.toRadians(moonAngle)) * distance;
    yPath = (float) Math.cos(Math.toRadians(moonAngle)) * distance;
    gl.glTranslatef(xPath, yPath, 30f);

The above works fine, except the moon is rotating around the earth vertically, around the prime meridian, like a wall clock. How do I adjust the angle of rotation? I've tried modifying the glTranslatef, but with no success.
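For a horizontal (equatorial) orbit in a y-up scene, the circle should lie in the XZ plane: vary x and z and hold y constant. The posted code varies x and y, which puts the circle in the vertical XY plane, hence the clock-face motion. A C-style sketch of the change (the Java version just swaps yPath for a zPath; axis names assume y is "up" in your world):

    moonAngle = fmodf(moonAngle + 0.5f, 360.0f);
    float xPath = sinf(moonAngle * (float)M_PI / 180.0f) * distance;
    float zPath = cosf(moonAngle * (float)M_PI / 180.0f) * distance;
    glTranslatef(xPath, 0.0f, zPath);   /* y fixed: the orbit stays in the XZ plane */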
1
How can I extrude a regular, grid-based 2D shape to 3D? I have a list of vertex coordinates which encircle several 2D areas: orthogonal lines only, but not necessarily convex areas... similar to PCB traces of conductive copper areas. I want to draw them like solid objects in 3D using OpenGL, but I'm still very new to 3D and struggling. Please point me at how to do this. I've managed to draw the outline only, using GL_LINE_STRIP, but when I try GL_QUAD_STRIP or GL_TRIANGLE_STRIP, the list of coordinates is not properly ordered for a triangle strip, and the fact that the areas are not convex prevents me from doing a simple triangular tessellation. These are the 2D shapes; I'd like to render them like this:
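Worth noting: the side walls of the extrusion need no tessellation at all, since every outline edge simply becomes one quad (two triangles) between the base and the extruded top. A sketch, where emitTriangle is a hypothetical stand-in for appending a triangle to your vertex list:

    // For each consecutive pair of outline points (closed loop), emit a wall quad.
    for (size_t i = 0; i < outline.size(); ++i) {
        glm::vec2 a = outline[i];
        glm::vec2 b = outline[(i + 1) % outline.size()];
        emitTriangle({a.x, a.y, 0}, {b.x, b.y, 0}, {b.x, b.y, h});
        emitTriangle({a.x, a.y, 0}, {b.x, b.y, h}, {a.x, a.y, h});
    }
    // Only the top/bottom caps need triangulating a non-convex polygon;
    // the GLU tessellator (gluTess*) or an ear-clipping routine handles that.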
1
Simulation of ball movement in a 3D landscape: the easiest way? I have a landscape (generated via Perlin noise) and a ball. I want the ball to move along the geodesic (an implementation of basic physics: gravitation, friction). I thought about raycasting around the ball down to the landscape, choosing the lowest point, and moving the ball to that point, but it won't work in every case, and it won't allow the ball to jump (with inertia). So, what is the best way/algorithm to implement such a feature? P.S. I don't want to use any libraries. Thanks.
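One common lightweight approach (a sketch of the idea, with height() and surfaceNormal() as hypothetical helpers over your Perlin height field): integrate velocity under gravity every step, and when the ball touches the terrain, remove the velocity component along the surface normal and apply friction. Inertia and jumps then fall out naturally, because nothing pins the ball to the surface while it is airborne:

    vel += glm::vec3(0.0f, -g, 0.0f) * dt;       // gravity, also while airborne
    p   += vel * dt;
    float ground = height(p.x, p.z);             // sample the height field
    if (p.y <= ground) {                         // contact: slide along the surface
        p.y = ground;
        glm::vec3 n = surfaceNormal(p.x, p.z);   // from the height-field gradient
        vel -= glm::dot(vel, n) * n;             // cancel the into-ground component
        vel *= (1.0f - friction * dt);           // friction only while in contact
    }

Replacing the velocity projection with a reflection about n (scaled by a restitution factor) would make the ball bounce instead of slide.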
1
Is OpenGL appropriate for 2D games? I have been teaching myself the OpenGL library for a while now, and want to start making a game. However, for an easier introduction, I want to start with something 2D, such as a top down Pokemon style game. Is this a good plan, or is OpenGL made specifically for 3D?
1
Why bother with a separate normal matrix, if there is never non-uniform scaling on the view matrix? I am updating one of my shaders to a version of OpenGL/GLSL that doesn't automatically provide gl_NormalMatrix (for educational purposes; I'm not ripping out working code for the sake of it). Therefore, I need to compute my own normal matrix on the CPU and pass it in. I understand that a separate normal matrix is used to avoid issues with non-uniform scaling affecting the direction of normals. However, I've noticed that in my code I never perform any scaling, let alone a non-uniform one. So I'm tempted to use the upper-left 3x3 sub-matrix for transforming my normals and call it a day (perhaps normalising them to allow for uniform scaling). My program assumes that every mesh it loads is already at the correct size, with no scaling required. Will I soon run into something that requires scaling (uniform or otherwise), or is there another reason for using a separate normal matrix that I haven't realised?
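For completeness, the general computation is cheap enough to keep around even if no scaling happens today. A GLM sketch:

    // Equivalent of the old gl_NormalMatrix: the inverse-transpose of the
    // upper-left 3x3 of the model-view matrix. For pure rotation+translation
    // this equals mat3(modelView) exactly, which is why skipping it "works"
    // right up until the first non-uniform scale sneaks into the pipeline.
    glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelView)));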
1
Batching and Z order with alpha blending in a 3D world. I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve). (OpenGL ES2 with C++.) Currently, I'm ordering elements back to front before drawing them, without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment: order all elements of my scene back to front; send the ordered list of elements to the renderer; the renderer looks in its batch manager for a batch matching the given element and its material; if the batch doesn't exist, create a new one; if a batch exists for an element with this material, add the sprite to that batch; compute a big mesh with all the sprites for each batch (1 material type = 1 batch); when all batches are ready, the batch manager computes draw commands for the renderer; the renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements). Image with my problem here: But I've got a problem, because objects can be behind other objects that are inside another batch. How can I handle something like that?
1
How to decide how many textures to break an object into? So I am making a couple of spacecraft in my game, and I want them to look sharp and as photorealistic as possible. How can I decide whether a certain craft should be broken into multiple meshes, each with its own texture, based on its size? Is there a rule, say, if you are using 4096-size textures? Obviously a bicycle could be a single mesh with one texture. But what about a small sports car? An SUV? A dump truck?
1
Understanding the ModelView matrix. I want to analyze each component of my 4x4 ModelView matrix. I came to know that the upper-left 3x3 of the ModelView matrix stores the rotation. If I want my object to have no rotation with respect to the camera, my ModelView matrix looks like this: How do I change my ModelView matrix if I want to have NO translation or scaling? Can anyone explain the maths behind this?
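If the goal is "no rotation relative to the camera" (billboarding), the standard fixed-function trick is to overwrite the upper-left 3x3 of the model-view with the identity just before drawing, keeping the translation intact. A sketch:

    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    /* Column-major layout: m[0..2], m[4..6], m[8..10] are the three basis
       (rotation/scale) columns; m[12..14] is the translation. */
    m[0] = 1; m[1] = 0; m[2]  = 0;
    m[4] = 0; m[5] = 1; m[6]  = 0;
    m[8] = 0; m[9] = 0; m[10] = 1;
    glLoadMatrixf(m);   /* translation in m[12..14] is preserved */

To remove translation instead, zero m[12], m[13], m[14]; to remove scale while keeping rotation, normalize each of the three basis columns (their lengths are the per-axis scale factors).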
1
How to change the Mac OpenGL version back to Mesa. A while ago I wanted to use the functionality of OpenGL 4.5 on my Mac, so I installed Mesa with MacPorts, and it worked. Recently, however, I tried to use my code again, and it said that these versions of OpenGL are not supported. I tried running glxinfo, and it seems to be telling me that my renderer is the default Intel-provided renderer. The last time it worked was a reasonably long time ago, and it is likely that the reason for the switch is that when I restarted my computer at some point it switched back to the old version; I don't know exactly when it stopped working. What I want to figure out is how to switch to the Mesa version. I checked with MacPorts and even looked at the files internally, and Mesa is installed and updated to version 17.1.6, which is the latest version supported by MacPorts. I can't find anything anywhere on how to change the renderer used by my computer. I think it has something to do with libGL, maybe, but I can't find any information on how to change or even view libGL's environment variables. How can I switch to the Mesa renderer? This is made especially difficult because I can't trust that the information from glxinfo is correct: both Mesa and the default Intel renderers have their own versions of glxinfo, which may give information that differs from what is actually running on the system. I'm not sure which one I am actually running when I run glxinfo in Terminal, or how to find the executable for the Mesa version. EDIT: I have looked a little more, and it seems like I can control which version is used during context creation and API loading, but that requires manually doing those things, which is not something I really want to do. Currently, I have only tested this with JOGL, and I can't find anything on what it uses to create a context. I don't know if GLFW will use the Mesa version, but GLEW did install in the directory for the Mesa/X version and not the core version that I think JOGL is using. It did work before with JOGL, so clearly there is some system-wide default setting that can change this, and I would like to know what that is.
1
Player being covered by glClearColor? Here is a video of the problem: https://www.youtube.com/watch?v=6vnDOB1Vk4o (this is the game I've been making for fun/practice). You'll notice, in the beginning of the video, I walk off the map and into the black. When I do so, the black covers the player, even though I draw the player last in my code:

    // Draw the player & the world
    public void draw(Graphics g, Graphics graphics, GameStateManager gsm) {
        glClearColor(0, 0, 0, 0);
        System.out.println(player.getPlayerX() + " " + player.getPlayerY());
        glViewport(-(int)player.getPlayerX() + 350, -(int)player.getPlayerY() + 300,
                   Display.getWidth(), Display.getHeight());
        worldGen.worldGenerator();
        // Draw the player to the screen (60, 60)
        player.draw();
    }
1
How and when to split draw calls in OpenGL and OpenGL ES. My application (with a modern-ish OpenGL 3.0 and an OpenGL ES 2.0 backend) renders more or less streaming data with up to millions of vertices, with different vertex layouts and sizes (some simple, some quite big). Some data is indexed, some is not. I'm currently hitting the problem of draw calls and buffers (index and/or vertex) becoming too large, so I need to split them. I have a hard time understanding when and how I should correctly split draw calls. My questions: 1. The limits GL_MAX_ELEMENTS_VERTICES and GL_MAX_ELEMENTS_INDICES, according to the docs, tell the application the maximum values for well-performing glDrawRangeElements calls. Why is this ONLY for glDrawRangeElements; why not glDrawRangeElementsBaseVertex, or even glDrawElements?! This does not make sense to me, because glDrawRangeElements is more or less glDrawElements after all. Also, I've seen a couple of OpenGL implementations reliably crash when rendering indices above that limit, even with glDrawElements. On the other hand, some Stack Overflow answers say that the limits are not relevant anymore... What else should I use to decide how to split my draw calls, then? 2. What about glDrawArrays? Is there an upper limit (e.g. GL_MAX_ELEMENTS_VERTICES)? 3. For the OpenGL ES 2.0 backend I require the extension GL_OES_element_index_uint to use indices above the 65k limit. What limits should I choose there for draw call size? 4. I have decided to cap my IBO and VBO sizes (max. 64 MB), as there's no way to ask OpenGL what buffer size can be allocated, other than checking for GL_OUT_OF_MEMORY. That means I have to additionally split according to that buffer size (both IBO and VBO). Is there a better way to check buffer limits? 5. What should be the preferred way to split draw calls: making them small in the application already, then batching them when rendering? This will lead to lots of duplicate vertices in some cases, which is not ideal. Splitting them in the render code can be complicated, because index data has to be analyzed, indices changed or memory copied, but in the end it is the only place where you can be sure all limits are respected... I'd prefer reliable, stable solutions even if they provide lower performance. I'm targeting desktop OpenGL hardware newer than 2010, and ANGLE as a conversion layer from OpenGL ES 2.0 to Direct3D in the case of Remote Desktop connections.
1
How do I position a 2D camera in OpenGL? I can't understand how the camera is working. It's a 2D game, so I'm displaying a game map from (0, 0, 0) to (mapSizeX, 0, mapSizeY). I'm initializing the camera as follows:

    Camera::Camera(void) : position_(0.0f, 0.0f, 0.0f), rotation_(0.0f, 0.0f, 1.0f) {}

    void Camera::initialize(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glTranslatef(position_.x, position_.y, position_.z);
        gluPerspective(70.0f, 800.0f / 600.0f, 1.0f, 10000.0f);
        gluLookAt(0.0f, 6000.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
    }

So the camera is looking down. I currently see the top-right border of the map in the centre of my window, and the map extends to the bottom-left border of my window. I would like to centre the map. The logical thing to do should be to move the camera to eyeX = mapSizeX / 2 and the same for z. My map has 10 x 10 cases with CASE = 400, so I should have gluLookAt((10 / 2) * CASE /* = 2000 */, 6000.0f, (10 / 2) * CASE /* = 2000 */, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f). But that doesn't move the camera; it seems to rotate it instead. Am I doing something wrong? EDIT: I tried gluLookAt(2000.0f, 6000.0f, 0.0f, 2000.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f), which correctly moves the map to the middle of the window in width. But I can't move it correctly in height; it always rotates about the Z axis. When I go up, it goes down, and the same for right and left. I don't see the map at all anymore when I do gluLookAt(2000.0f, 6000.0f, 2000.0f, 2000.0f, 0.0f, 2000.0f, 0.0f, 1.0f, 0.0f).
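One thing that stands out (hedged, based only on the snippets shown): for a straight-down view, eye and centre must share x/z, and the up vector must not be parallel to the view direction. In the last failing call, up = (0,1,0) is exactly parallel to the downward view, which makes gluLookAt degenerate and everything disappears. A sketch of a working top-down setup for this map:

    /* Straight-down view centred on the map (10 cases * 400 units = 4000 wide). */
    gluLookAt(2000.0f, 6000.0f, 2000.0f,   /* eye: above the map centre      */
              2000.0f,    0.0f, 2000.0f,   /* centre: the point below it     */
                 0.0f,    0.0f,   -1.0f);  /* up on screen = world -z        */

With up = (0,0,-1), the map's +z axis points down the screen; use (0,0,1) instead to flip it.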
1
ETC1 texture opacity. I'm using the cocos2d-x engine and want to support ETC1 on Android devices for my game. For ETC1 I'm using the Mali compression tool, and GLSL. Everything is working, but I can't change the opacity of my sprites anymore. My alpha blending is GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. Can anyone help? Here is one of my textures, split into an RGB tex and an alpha tex. Vertex shader:

    attribute vec4 a_position;
    attribute vec2 a_texCoord;
    attribute vec4 a_color;
    varying vec4 v_fragmentColor;
    varying vec2 v_texCoord;
    varying vec2 v_texCoord2;
    uniform sampler2D tex1;

    void main()
    {
        gl_Position = CC_PMatrix * a_position;   // CC_PMatrix comes from the engine
        v_fragmentColor = a_color;
        v_texCoord = a_texCoord;
        v_texCoord2 = v_texCoord;
    }

Fragment shader:

    varying vec4 v_fragmentColor;
    varying vec2 v_texCoord;
    varying vec2 v_texCoord2;
    uniform sampler2D u_texture1;   // alpha texture

    void main()
    {
        // CC_Texture0 comes from the cocos2d-x engine and represents the RGB
        vec3 tex = texture2D(CC_Texture0, v_texCoord).rgb;
        float alpha = texture2D(u_texture1, v_texCoord).r;
        gl_FragColor = vec4(tex, alpha);
    }
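A hedged observation: cocos2d-x delivers the node's colour and opacity through the vertex colour, which this shader receives as v_fragmentColor but never uses, so setOpacity() can no longer have any effect. Folding it back in restores opacity control:

    void main()
    {
        vec3 tex = texture2D(CC_Texture0, v_texCoord).rgb;       // RGB from ETC1
        float alpha = texture2D(u_texture1, v_texCoord).r;       // alpha from 2nd texture
        gl_FragColor = v_fragmentColor * vec4(tex, alpha);       // node colour/opacity applies
    }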
1
Android: object gets jagged at the border. I am new to OpenGL ES 1 on Android. My 3D model's border is getting jagged. Please help me make the border look smooth instead of jagged. Screenshot: http://i.stack.imgur.com/1Gq83.png

    private class Renderer implements GLSurfaceView.Renderer {
        public Renderer() {
            setEGLConfigChooser(8, 8, 8, 8, 16, 0);
            getHolder().setFormat(PixelFormat.TRANSLUCENT);
            setZOrderOnTop(true);
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            gl.glDisable(GL10.GL_DITHER);
            gl.glEnable(GL10.GL_DEPTH_TEST);
            gl.glEnable(GL10.GL_BLEND);
            gl.glBlendFunc(GL10.GL_SRC_ALPHA_SATURATE, GL10.GL_ONE);
            gl.glDepthFunc(GL10.GL_LEQUAL);
            gl.glHint(GL10.GL_POLYGON_SMOOTH_HINT, GL10.GL_NICEST);
            gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST);
            gl.glEnable(GL10.GL_TEXTURE_2D);
            gl.glShadeModel(GL10.GL_SMOOTH);
        }

        public void onSurfaceChanged(GL10 gl, int w, int h) {
            mViewWidth = (float)w;
            mViewHeight = (float)h;
            gl.glViewport(0, 0, w, h);
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            GLU.gluPerspective(gl, 45, mViewWidth / mViewHeight, 0.1f, 100f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();
        }

        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glPushMatrix();
            gl.glDisable(GL10.GL_DITHER);
            GLU.gluLookAt(gl, 0, 0, 10, 0, 0, 0, 0, 1, 0);

            // draw model
            gl.glPushMatrix();
            if (mOrigin != null && mRotate != null) {
                gl.glTranslatef(mOrigin.x, mOrigin.y, mOrigin.z);
                gl.glRotatef(mRotate.x, 1f, 0f, 0f);
                gl.glRotatef(mRotate.y, 0f, 1f, 0f);
                gl.glRotatef(mRotate.z, 0f, 0f, 1f);
            }
            if (mModel != null) {
                mModel.draw(gl);
                if (!RendererView.textureFileName.equals(""))
                    mModel.bindTextures(mContext, gl);
            }
            gl.glPopMatrix();
            gl.glPopMatrix();

            if (isPictureTake) {
                w = getWidth();
                h = getHeight();
                b = new int[w * (y + h)];
                bt = new int[w * h];
                IntBuffer ib = IntBuffer.wrap(b);
                ib.position(0);
                gl.glReadPixels(0, 0, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
                createBitmapFromGLSurface(context);
                isPictureTake = false;
            }
        }
    }

Thanks in advance.
1
Why do I have to switch T(v) texture coordinates when porting from OpenGL to Direct3D? I am porting my code from OpenGL to Direct3D. My D3DTS_PROJECTION uses D3DXMatrixPerspectiveFovRH, and my D3DTS_VIEW uses D3DXMatrixLookAtRH to set up a view equal to OpenGL's view. My question is: why do I have to switch all of my 1.0000 T(v) texture coordinates to negative values in D3D to get texture mapping equal to OpenGL's?

OpenGL T(u), T(v):

    1.0000, 1.0000,
    0.0000, 1.0000,
    1.0000, 0.0000,
    1.0000, 0.0000,
    0.0000, 1.0000,
    0.0000, 0.0000,
    0.0000, 1.0000,
    0.0000, 0.0000,
    1.0000, 1.0000,
    1.0000, 1.0000,
    0.0000, 0.0000,
    1.0000, 0.0000,
    ...

Direct3D T(u), T(v):

    D3DXVECTOR2( 1.0000f, -1.0000f)
    D3DXVECTOR2( 0.0000f, -1.0000f)
    D3DXVECTOR2( 1.0000f,  0.0000f)
    D3DXVECTOR2( 1.0000f,  0.0000f)
    D3DXVECTOR2( 0.0000f, -1.0000f)
    D3DXVECTOR2( 0.0000f,  0.0000f)
    D3DXVECTOR2( 0.0000f, -1.0000f)
    D3DXVECTOR2( 0.0000f,  0.0000f)
    D3DXVECTOR2( 1.0000f, -1.0000f)
    D3DXVECTOR2( 1.0000f, -1.0000f)
    D3DXVECTOR2( 0.0000f,  0.0000f)
    D3DXVECTOR2( 1.0000f,  0.0000f)
    ...

Is it because Direct3D's texture coordinate origin (0,0) lies in a different place than OpenGL's?
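For reference, yes: the two APIs put the texture origin in different corners. OpenGL's t = 0 row is the bottom of the image, while Direct3D's v = 0 row is the top, so the conversion is a vertical flip:

    /* Converting a GL (u, t) pair to a D3D (u, v) pair: */
    v_d3d = 1.0f - t_gl;   /* u is unchanged */

With repeat/wrap texture addressing, -t lands on the same texels as 1 - t (they differ by a whole texture period), which is why plain negation of v also happens to work here.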