Mesh rendering performance optimisation

I'm working on a libGDX implementation of mesh terrain and running into some performance issues. The terrain is represented by a number of mesh tiles; each mesh is made of vertices laid onto a 2D plane. The meshes are implemented via the libGDX `Mesh` class, each of which is cached after its initial generation. I use GL 3.0, so the vertices are handled via `VertexBufferObjectWithVAO`, which as I understand it should allow GPU caching. Meshes are indexed.

Aiming to optimise performance, I tried to increase the number of vertices in each mesh (while keeping the same overall number of vertices), but oddly performance got worse rather than better.

Question 1: What are possible reasons why, given the same total number of vertices, the scenario with the lowest number of meshes (#3 below) is slower than the scenarios with a higher number of meshes?

Question 2: Based on the OpenGL pipeline summarised below, is it correct to assume that the VBOs are transferred to the GPU once and then drawn from GPU memory?

Performance comparison:

- 1,600 meshes × 3,042 vertices (4.8M vertices): 131 FPS
- 625 meshes × 11,250 vertices (4.5M vertices): 132 FPS
- 100 meshes × 45,000 vertices (4.5M vertices): 113 FPS

Hardware details: GTX 660, 2GB. Memory utilisation during the test is 70% in all scenarios; the memory impact of vertex allocation seems negligible compared to textures.

OpenGL pipeline: from an API trace, this is the frame life cycle in summary.

```
// mesh generation (one-off)
glGenBuffers()
glGenVertexArrays()

// render (every frame)
glClear(GL_COLOR_BUFFER_BIT)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glUseProgram(...)
glUniformMatrix4fv(...)
glUniform1f(...)
glActiveTexture(...)
glBindTexture(...)
...
// for each mesh
glBindVertexArray(...)
glBindBuffer(...)
glDrawElements(...)
```

Edited to clarify that I have tried to group the vertices into a smaller number of meshes to reduce the number of draw calls. Edited to provide more data and streamline the questions.

How do I reuse the same vertex data, but with different colors, for my sphere objects?

I'm using OpenGL to display a 3D network, with nodes represented as spheres (I haven't gotten to edges yet). I'm a total novice and having a bit of trouble wrapping my head around OpenGL. These networks need to be generated programmatically, so I've got a function that produces vertices for a sphere with radius 1 centred at the origin, and I'm translating and scaling it to produce each of the nodes in the network. I'm using glDrawElements() to draw my nodes, so I'm storing indices for the vertices of my sphere as well. Once a network is produced, it doesn't change. The camera position can change, but nothing else will.

My current strategy is the following: when a network is generated, for each node, make a copy of the unit sphere and apply the appropriate translation/scale matrix. Store all of the vertices (and associated colours) in a single array/VBO. When drawing, set the scene-to-camera and perspective matrices as uniforms, then use a single glDrawElements() call to draw all my nodes.

So, a couple of questions about this:

1. Since all my nodes are essentially identical, I could just store vertices for one sphere and transform the sphere on the fly when I render the scene. There might be a lot of nodes, so this could save substantial memory. But each sphere has quite a different set of colours... is there a way to tell OpenGL to use an offset for the colour data but not the vertex data?
2. Assuming the answer to part 1 is no, would it be better to keep vertices for a single sphere in my VBO, and then substitute the colour data and draw each node with a separate call to glDrawElements()? Or is there some other solution that I'm not thinking of?
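
One other solution worth knowing about here is instanced rendering, which keeps exactly one copy of the sphere mesh and streams a small per-instance record (transform, colour) alongside it. Below is a minimal C++ sketch of the idea; the attribute locations (3 and 4), the `instanceData` layout, and the handle names are illustrative assumptions, not code from the question.

```cpp
// One unit-sphere VBO/IBO is shared; this extra buffer holds per-node data:
// vec4 (offset.xyz, scale) followed by vec4 color = 8 floats per instance.
GLuint instanceVBO;
glGenBuffers(1, &instanceVBO);
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, numNodes * 8 * sizeof(float), instanceData, GL_STATIC_DRAW);

GLsizei stride = 8 * sizeof(float);
glEnableVertexAttribArray(3);                 // offset + scale
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, stride, (void*)0);
glVertexAttribDivisor(3, 1);                  // advance once per instance, not per vertex
glEnableVertexAttribArray(4);                 // color
glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, stride, (void*)(4 * sizeof(float)));
glVertexAttribDivisor(4, 1);

// One call draws every node from the single sphere mesh.
glDrawElementsInstanced(GL_TRIANGLES, sphereIndexCount, GL_UNSIGNED_INT, 0, numNodes);
```

This needs GL 3.3 (or the ARB_instanced_arrays extension), and the vertex shader applies the per-instance offset/scale itself.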

Reading from depth textures always returns 1

I create a packed depth-stencil texture and attach it to a framebuffer like this:

```cpp
glGenTextures(1, &depthStencilTexture);
glBindTexture(GL_TEXTURE_2D, depthStencilTexture);
// set filtering to GL_NEAREST, not shown because it's not important
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
// ... create framebuffer and attach some color attachments ...
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, depthStencilTexture, 0);
```

The creation of the framebuffer causes no OpenGL status errors. In terms of usage for depth and stencil testing, the texture attachment seems to work correctly. If I disable depth testing, I get the usual artifacts of objects in the back being drawn over objects in the front if they are drawn in the wrong order. If I disable stencil testing, the effects I'm using that rely on it stop working (I'm masking out areas that shouldn't be lit by the lighting pass). Also, if I simply don't attach the depth-stencil buffer to the framebuffer, the related tests stop working (as expected).

However, reading from that texture in a shader always returns 1 for the depth part and 0 for the stencil part. I confirmed this via glReadBuffer(GL_DEPTH_STENCIL_ATTACHMENT) and glReadPixels: all integers read show up as 0xffffff00, directly after clearing via glClear with a black clear color, and also after the screen is drawn full of stuff. Drawing to and reading from the rest of the attached textures (four color buffers for now) works fine.

I have to note that I'm not actually manually writing anything to the depth-stencil attachment in the fragment shader when rendering; I'm assuming OpenGL automatically writes depth and stencil values to the correct buffer. What could be the cause of these incorrect values showing up when reading from the texture?

OpenGL: updating instanced model transforms in the VBO every frame

I am using OpenGL to render a large number of models via instanced rendering (using the LWJGL wrapper). As far as I can tell I have implemented the instancing correctly, although, after profiling, I've come upon an issue. The program is able to render a million cubes at 60 FPS when their model (world) transformations are not changing. Once I make them all spin, though, the performance drops significantly. I deduced from the profiler that this is due to the way I write the matrix data to the VBO.

My current approach is to give each unique mesh a new VAO (so all instances of cubes come under one VAO), with one VBO for vertex positions, textures, and normals, and one instance array (VBO) for storing instance model matrices. All VBOs are interleaved. In order to make the cubes spin, I need to update the instance VBO every frame. I do that by iterating through every instance and copying the matrix values into the VBO. The code is something like this:

```java
float[] matrices = new float[modelsByMesh.get(mesh).size() * 16];
for (int i = 0; i < models.size(); i++) {
    Model cube = models.get(i);
    float[] matrix = new float[16];
    cube.getModelMatrix(matrix);
    // store model matrix into array
    System.arraycopy(matrix, 0, matrices, i * 16, 16);
}
glBindBuffer(GL_ARRAY_BUFFER, instanceBuffersByMesh.get(mesh));
glBufferData(GL_ARRAY_BUFFER, matrices, GL_STATIC_DRAW);
// render
```

I realise that I create new buffer storage and a new float array every frame by calling glBufferData instead of glBufferSubData. But when I instead allocate once outside the loop, soon after VBO creation:

```java
glBufferData(GL_ARRAY_BUFFER, null, GL_DYNAMIC_DRAW); // or stream
```

and then, when updating the models:

```java
glBufferSubData(GL_ARRAY_BUFFER, 0, matrices);
```

nothing is displayed. I'm not sure why; perhaps I'm misusing glBufferSubData, but that's another issue. I have been looking at examples of particle simulators (in OpenGL) and most of them update the instance VBO the same way I do. I'm not sure what the problem could be, and I can't think of a more efficient way of updating the VBO. I'm asking for suggestions and potential improvements to my code.
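
A common pattern for exactly this situation is "buffer orphaning": reallocate the store each frame (so the driver doesn't stall on a buffer the GPU is still reading), then fill it. A C++ sketch with illustrative sizes and handle names (the LWJGL calls mirror these one-to-one); note that allocating with a null pointer still requires an explicit byte size, which is one plausible reason the glBufferSubData variant displayed nothing:

```cpp
GLsizeiptr bytes = instanceCount * 16 * sizeof(float);

glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
// Orphan: request a fresh store of the same size; the old one is released
// once the GPU finishes with it, so no synchronisation stall.
glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STREAM_DRAW);
// Now fill it; offset 0, and the size must not exceed the allocation above.
glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, matrices);
```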

Rotation going wrong

I'm calculating matrices by hand. Translations are fine:

```c
void Translate(float x, float y, float z, float m[4][4])
{
    Identity(m);
    m[3][0] = x;
    m[3][1] = y;
    m[3][2] = z;
}
```

If I multiply a vector with this matrix, I get the correct transformation. My problem now is rotations. I copied the definition from the OpenGL reference on glRotate, but I can't get it right. Can you spot my mistake?

```c
void Rotate(float angle, float x, float y, float z, float m[4][4])
{
    float c = cos(angle);
    float s = sin(angle);

    m[0][0] = x*x*(1-c) + c;    m[0][1] = y*x*(1-c) + z*s;  m[0][2] = x*z*(1-c) - y*s;  m[0][3] = 0;
    m[1][0] = x*y*(1-c) - z*s;  m[1][1] = y*y*(1-c) + c;    m[1][2] = y*z*(1-c) + x*s;  m[1][3] = 0;
    m[2][0] = x*z*(1-c) + y*s;  m[2][1] = y*z*(1-c) - x*s;  m[2][2] = z*z*(1-c) + c;    m[2][3] = 0;
    m[3][0] = 0;                m[3][1] = 0;                m[3][2] = 0;                m[3][3] = 1;
}
```

I don't know what else is relevant, so if you're kind enough to lend me a hand on this and I just haven't presented enough info, just say so. Thank you for taking the time to read this question.

EDIT: The trouble I'm having is the following: if I do Rotate(180, 0, 0, 0), the vertices are inverted as intended, but the resulting triangle (in this case) is smaller. (Screenshot omitted.)
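
Two things are worth checking with a call like Rotate(180, 0, 0, 0): the C library's cos()/sin() take radians, not degrees, and the glRotate formula assumes a unit-length axis — and (0, 0, 0) is not a valid axis at all (with a zero axis, every (1-c) term vanishes and the matrix degenerates to c on the diagonal, i.e. a uniform scale, which matches the "smaller triangle" symptom). A small sketch of the preprocessing, as an illustration rather than a drop-in fix:

```cpp
#include <cmath>

// Convert degrees to radians and normalise the axis before building the matrix.
const float kPi = 3.14159265f;
float radians = angle * kPi / 180.0f;
float len = std::sqrt(x*x + y*y + z*z);
if (len > 0.0f) {            // (0, 0, 0) is not a usable rotation axis
    x /= len; y /= len; z /= len;
    Rotate(radians, x, y, z, m);
}
```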

Is there a way to use other fonts, besides the default ones, in OpenGLUT?

I'm using OpenGLUT functions like glutBitmapString to render sentences and words in a game. However, there is a limited set of fonts to use, and I need some bigger font sizes. Is there a way to add new fonts to these API functions? Thanks.

What is the maximum number of shaders I can have in OpenGL 4?

What is the maximum limit of shaders I can have on the GPU? With 1000 different objects, I might have 1000 × 5 shaders (vertex, TCS, TES, geometry, fragment) on the GPU at a time. Though only one will be active at once, I wonder what the upper limit is.

OpenGL: multiple viewports with 3D/2D viewing

I am trying to draw two viewports: the one on top having 3D content in it, and the one at the bottom with 2D content. However, whatever 2D content I draw in the bottom viewport is not rendered to the screen. Below is the relevant piece of my code.

```c
void display()
{
    /* first viewport, of height h/8 */
    glViewport(0, 0, width, height / 8);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluOrtho2D(-2.0, 2.0, -2.0, 2.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0f, 1.0f, 1.0f);
    glRectf(-1.0, -1.0, 1.0, 1.0);
    glFlush();

    /* second viewport, of height 7h/8 */
    glViewport(0, 2 * height / 8, width, 7 * (height / 8));
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    aspect = (double)8 * width / ((double)height * 7);
    gluPerspective(fieldofview, aspect, nearPlane, farPlane);
    /* set up viewing matrix */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0f, 20.0f, 10.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glPushMatrix();
    draw_model(objs[0]);   /* draw the plane */
    draw_model(objs[1]);   /* draw the first diamond */
    glPopMatrix();

    glPushMatrix();
    glRotatef(angle, 0.0f, 0.0f, 1.0f);
    glTranslatef(moving_position.x, moving_position.y, moving_position.z);
    draw_model(objs[2]);   /* draw the second diamond */
    glPopMatrix();

    glutSwapBuffers();
    move(8);
    /* set up menus */
}

void reshape(int x, int y)
{
    width = x;
    height = y;
    if (height == 0)   /* don't divide by zero */
        height = 1;
}

void main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(position_x, position_y);
    glutInitWindowSize(width, height);
    winId = glutCreateWindow("Mesh Viewer");
    glutReshapeFunc(reshape);
    glutDisplayFunc(display);   /* display function */
    init();
    glutMainLoop();
}
```

Below is my output, with no rectangle in the first viewport. (Screenshot omitted.) Moreover, can I apply a background color to the second viewport without using glScissor?

How to tell if a GLUT window has focus, from C#

How can I tell if a GLUT window has focus? I'm using C# with Tao; I'll use P/Invokes if necessary. Basically, I want to ignore input if the window doesn't have focus.

How many OpenGL programs should I use to render multiple objects?

My scene has multiple objects in it. (Let's say 3 cubes, 1 cylinder, 8 spheres.) I assume I should create a vertex shader for each. How many programs should I have? Alternatives:

- One program per object
- One program for all cubes and another for all spheres (assuming they use the same shaders)
- One large program for everything

What is the correct approach?

Checking if an object is inside the bounds of an isometric chunk

How would I check if an object is inside the bounds of an isometric chunk? For example, I have a player and I want to check if it's inside the bounds of this isometric chunk. I draw the isometric chunk's tiles using OpenGL quads. My first try was checking in a square pattern, something like this:

```java
// e = object, this = isometric chunk
if (e.getLocation().getX() < this.getLocation().getX() + World.CHUNK_WIDTH * World.TILE_WIDTH
        && e.getLocation().getX() > this.getLocation().getX()) {
    if (e.getLocation().getY() > this.getLocation().getY()
            && e.getLocation().getY() < this.getLocation().getY() + World.CHUNK_HEIGHT * World.TILE_HEIGHT) {
        return true;
    }
}
return false;
```

What happens here is that it checks in a SQUARE around the chunk, not the real isometric bounds. (Image examples omitted: the red area is where the program checks the bounds, showing what I have now versus the desired check.) Ultimately I want to do the same for each tile in the chunk.

Extra info: until now, my game only allowed movement tile by tile, but now I want entities to move freely while still having a tile location, so that no matter where they are on a tile, their tile location is that tile; when they enter a different tile's bounding box, their tile location becomes the new tile. The same goes for chunks. The player does have an area, but the area does not matter in this case; as long as the X and Y are inside the bounding box, it should return true. They don't have to be completely on the tile.
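
Since an isometric chunk is a diamond, one way to express the check is a normalised Manhattan-distance test against the diamond's centre. A sketch (in C++; the Java translation is mechanical), assuming the chunk is a diamond centred at (cx, cy) with half-diagonals halfW and halfH — all names are illustrative:

```cpp
#include <cmath>

// True when (px, py) lies inside the diamond |dx|/halfW + |dy|/halfH <= 1.
bool insideDiamond(float px, float py, float cx, float cy, float halfW, float halfH)
{
    float dx = std::abs(px - cx) / halfW;   // 0 at the centre, 1 at the left/right tips
    float dy = std::abs(py - cy) / halfH;   // 0 at the centre, 1 at the top/bottom tips
    return dx + dy <= 1.0f;
}
```

The same test works per tile, using the tile's own centre and half-sizes.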

What's the fastest way to copy a texture to another texture in OpenGL?

Here are the options I've found:

- glBlitFramebuffer: create framebuffers for the textures, bind the textures as GL_READ_FRAMEBUFFER and GL_DRAW_FRAMEBUFFER, call glBlitFramebuffer().
- glCopyTexImage2D: my research so far says this method is probably slow.
- Shaders: make a framebuffer and render target, render the source texture to the destination texture.
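
If a GL 4.3+ context (or the ARB_copy_image extension) is available, there is also a direct GPU-side copy that needs no framebuffer at all. A sketch, assuming `srcTex` and `dstTex` are compatible-format 2D textures of the given size:

```cpp
// Copy mip level 0 of srcTex into dstTex, whole surface, no FBO involved.
glCopyImageSubData(srcTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   dstTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   width, height, 1);   // depth 1 for 2D textures
```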

Working around the flip queue (AKA pre-rendered frames) in OpenGL?

It appears that some drivers implement a "flip queue", such that even with vsync enabled, the first few calls to swap buffers return immediately (queuing those frames for later use). Only after this queue is filled do buffer swaps block to synchronize with vblank. This behavior is detrimental to my application: it creates latency. Does anyone know of a way to disable it, or a workaround for dealing with it? The OpenGL wiki page on swap interval suggests a call to glFinish after the swap, but I've had no luck with that trick.

Why does GLM only have a translate function that returns a 4x4 matrix, and not a 3x3 matrix?

I'm working on a 2D game engine project, and I want to implement matrices for my transformations. I'm going to use the GLM library. Since my game is only 2D, I figured I only need a 3x3 matrix to combine the translation, rotation and scale operations. However, glm::translate is only overloaded to return a 4x4 matrix, never a 3x3. I thought a translation could be performed with a 3x3 matrix, so why does GLM only have a translate function that returns a 4x4 matrix and not a 3x3 matrix?
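
A 2D translation does fit in a 3x3 homogeneous matrix, and building one by hand is a few lines. A sketch (GLM is column-major, so the translation goes in the last column); GLM also ships 2D helpers in the experimental `<glm/gtx/matrix_transform_2d.hpp>` header, if including GTX extensions is acceptable:

```cpp
#include <glm/glm.hpp>

// 3x3 homogeneous 2D translation: the last column holds (tx, ty, 1).
glm::mat3 translate2D(const glm::vec2& t)
{
    glm::mat3 m(1.0f);                    // identity
    m[2] = glm::vec3(t.x, t.y, 1.0f);     // column 2
    return m;
}

// Usage: glm::vec3 p2 = translate2D(offset) * glm::vec3(point, 1.0f);
```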

How to pass one float as four unsigned chars to a shader via glVertexAttribPointer?

For each vertex I use two floats for position and four unsigned bytes for color. I want to store all of them in one table, so I tried casting those four unsigned bytes to one float, but I am unable to do it correctly... All in all, my tests came down to this point:

```cpp
GLfloat vertices[] = { 1.0f, 0.5f, 0, 1.0f, 0, 0 };
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), vertices);

// VER1: draws a red triangle
unsigned char colors[] = { 0xff, 0, 0, 0xff,  0xff, 0, 0, 0xff,  0xff, 0, 0, 0xff };
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors);

// VER2: draws a greenish triangle (not "pure" green)
float f = 255 << 24 | 255;   // hex: 0xff0000ff
float colors2[] = { f, f, f };
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors2);

// VER3: draws a red triangle
int i = 255 << 24 | 255;     // hex: 0xff0000ff
int colors3[] = { i, i, i };
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, 4 * sizeof(GLubyte), colors3);

glDrawArrays(GL_TRIANGLES, 0, 3);
```

The code above is used to draw one simple red triangle. My question is: why do versions 1 and 3 work correctly, while version 2 draws a greenish triangle? The hex values are the ones I read by inspecting the variables while debugging. They are equal for versions 2 and 3, so what causes the difference?

Edit: The question is already answered; I just want to add some more information to clarify my motives for future readers. I am aware that when copied to graphics memory, all values will eventually be stored as floats (even the 4 unsigned bytes). "Packing" 4 color bytes into one float is used just to decrease the memory used by color values in my VertexBufferObject class, where all vertex values are stored in one float array. Thanks to this approach, the color information for each vertex (RGBA) is stored in a single float value instead of four. Memory usage is significantly lower, at the small cost of one additional reinterpret-cast at color creation or change.
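
The crux is that `float f = 255 << 24 | 255;` converts the integer's value to float (changing its bit pattern), while the int array keeps the bits intact. A bit-preserving pack looks like the sketch below (an illustrative helper, not code from the question):

```cpp
#include <cstdint>
#include <cstring>

// Reinterpret an RGBA8 value as a float without converting it, so the byte
// pattern survives the trip through a float array.
float packColorBits(std::uint32_t rgba)
{
    float f;
    std::memcpy(&f, &rgba, sizeof f);   // type-pun safely (no undefined behavior)
    return f;
}
```

One caveat: some RGBA patterns alias NaN bit patterns, which certain code paths may not preserve bit-exactly, which is why packing colors into float arrays is done carefully or avoided.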

Can't use the hardware scissor any more; should I use the stencil buffer or manually clip sprites?

I wrote a simple UI system for my game. There is a clip flag on my widgets that you can use to tell a widget to clip any children that try to draw outside their parent's box (for scrollboxes, for example). The clip flag uses glScissor, which is fed an axis-aligned rectangle. I just added arbitrary rotation and transformations to my widgets, so I can rotate or scale them however I want. Unfortunately, this breaks the scissor I was using, as my clip rectangle might now not be axis-aligned.

There are two ways I can think of to fix this: either use the stencil buffer to define the drawable area, or have a wrapper function around my sprite drawing function that adjusts the vertices and texture coords of the sprites being drawn, based on the clipper on top of a clipper stack. Of course, there may also be other options I can't think of (something fancy with shaders, possibly?). I'm not sure which way to go at the moment. Changing the implementation of my scissor functions to use the stencil buffer probably requires the smallest change, but I'm not sure how much overhead that has compared to the coordinate adjusting, or if the performance difference is even worth considering.

OpenGL ES god ray precision error

I have encountered what I think is a precision error. (Missing link; need 10 rep.) My source of inspiration was (missing link; need 10 rep). On the PC everything works fine, but on Android it shows weird squares. I had the same problem with a procedurally masked sprite: when the radius of the circle got too big, I had the same error, so I changed the mask from a shader radius uniform to a texture mask uniform — so I guess there is a precision problem. Someone else had the same problem, but unfortunately I can't see the answer. (Missing link; need 10 rep — please, someone upvote me.)

The code, adapted to OpenGL ES, is the following:

```glsl
#version 100
precision mediump float;

uniform sampler2D tex_diff;
uniform vec2 light_on_screen;
varying vec2 texture_coord;

const int NUM_SAMPLES = 128;

void main()
{
    const float exposure = 0.0225;
    const float decay = 0.95;
    const float density = 0.95;
    const float weight = 3.75;

    // inner used values
    vec2 deltaTextCoord = vec2(texture_coord.st - light_on_screen.xy);
    vec2 textCoo = texture_coord.st;
    deltaTextCoord *= 1.0 / float(NUM_SAMPLES) * density;
    float illuminationDecay = 1.0;
    vec4 c = vec4(0, 0, 0, 0);

    for (int i = 0; i < NUM_SAMPLES; i++) {
        textCoo -= deltaTextCoord;
        textCoo.s = clamp(textCoo.s, 0.0, 1.0);
        textCoo.t = clamp(textCoo.t, 0.0, 1.0);
        vec4 sample = texture2D(tex_diff, textCoo);
        sample *= illuminationDecay * weight;
        c += sample;
        illuminationDecay *= decay;
    }

    c *= exposure;
    c.r = clamp(c.r, 0.0, 1.0);
    c.g = clamp(c.g, 0.0, 1.0);
    c.b = clamp(c.b, 0.0, 1.0);
    c.a = clamp(c.a, 0.0, 1.0);
    gl_FragColor = c;
}
```

Showing the whole engine is useless, since it's huge. All the shader input is correct, the coordinates are correct; the only problem is the inner shader computation. Has anybody encountered this, or does anyone have ideas for workarounds? I've scanned the whole net for a solution and couldn't find one. Can anyone point me in the right direction? Maybe someone has encountered this type of error in a different context or a different shader, whose workaround I could apply here too.

OpenGL: calculating a camera view matrix

Problem: I am calculating the model, view and projection matrices independently, to be used in my shader as follows:

```glsl
gl_Position = projection * view * model * vec4(in_Position, 1.0);
```

When I try to calculate my camera's view matrix, the Z axis is flipped and my camera seems to be looking backwards. My program is written in C# using the OpenTK library.

Translation (working): I've created a test scene. (Screenshot omitted.) From my understanding of the OpenGL coordinate system, the objects are positioned correctly. The model matrix is created using:

```csharp
Matrix4 translation = Matrix4.CreateTranslation(modelPosition);
Matrix4 model = translation;
```

The view matrix is created using:

```csharp
Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition);
Matrix4 view = translation;
```

Rotation (not working): I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors:

```csharp
// Hard-coded example orientation; normally calculated from up and forward,
// similar to a look-at camera.
Vector3 r = Vector3.UnitX;
Vector3 u = Vector3.UnitY;
Vector3 f = Vector3.UnitZ;

Matrix4 rot = new Matrix4(
    r.X, r.Y, r.Z, 0,
    u.X, u.Y, u.Z, 0,
    f.X, f.Y, f.Z, 0,
    0.0f, 0.0f, 0.0f, 1.0f);
```

(Resulting matrix screenshot omitted.) I know that multiplying by the identity matrix would produce no rotation. This is clearly not the identity matrix and will therefore apply some rotation, but I thought that because it is aligned with the OpenGL coordinate system it should produce no rotation. Is this the wrong way to calculate the rotation matrix?

I then create my view matrix as:

```csharp
// OpenTK is row-major, so the order of operations is reversed:
Matrix4 view = translation * rot;
```

Rotation almost works now, but the Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as:

```csharp
Vector3 position;
Vector3 up;
Vector3 forward;
```

Apologies for writing such a long question, and thank you in advance. I've tried following tutorials and guides from many sites, but I keep ending up with something wrong.

Edit — projection matrix setup:

```csharp
Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
    (float)(0.5 * Math.PI),
    (float)display.Width / display.Height,
    0.1f, 1000.0f);
```
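
One detail that commonly causes exactly this symptom: in OpenGL's convention, a camera with identity orientation looks down -Z, so the camera's forward vector must map to view-space -Z, and the view rotation is the transpose of the camera's basis. A sketch of the construction (C++/GLM purely for illustration; the OpenTK version is the same math):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a view matrix from the camera's world-space basis and position:
// view = transpose(cameraRotation) * translate(-position).
glm::mat4 viewFromBasis(glm::vec3 r, glm::vec3 u, glm::vec3 f, glm::vec3 pos)
{
    glm::mat4 rot(1.0f);
    // The view rotation's rows are (right, up, -forward): note the minus sign,
    // because the camera looks down -Z in view space.
    rot[0] = glm::vec4(r.x, u.x, -f.x, 0.0f);
    rot[1] = glm::vec4(r.y, u.y, -f.y, 0.0f);
    rot[2] = glm::vec4(r.z, u.z, -f.z, 0.0f);
    return rot * glm::translate(glm::mat4(1.0f), -pos);
}
```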

Pretty sure I have support for OpenGL 4, but it's not running. What can I do?

Many game engines require OpenGL to run. I have one of those. I've confirmed that the program, and any benchmarks for OpenGL above OpenGL 2, fail to run. Is there a way to confirm I have the dependencies, or something to that effect? Is OpenGL installed system-wide, or is it just a library included in other programs? I'm specifically using Ubuntu Linux 17.10; my computer's specs are fairly low, but I've run the engine on Windows on the same machine before. Linux should have more up-to-date graphics drivers, so I'm not sure what the root of the problem might be.

Asteroids ship movement

I have read the source code of an Asteroids game. I want to know why, when updating the ship's position on the X and Y axes, we must write it in terms of the sine and cosine of the current angle. Is it angular velocity? Why can't we use linear velocity and update the position with that?
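
The sin/cos isn't angular velocity; it just converts the ship's facing angle into the direction of an ordinary linear velocity. A small sketch (names are assumed; angle in radians, measured from the +X axis):

```cpp
#include <cmath>

// Decompose speed-along-heading into X and Y components, then integrate.
void thrust(float heading, float speed, float dt, float& x, float& y)
{
    float vx = speed * std::cos(heading);   // how much of the speed points along X
    float vy = speed * std::sin(heading);   // how much points along Y
    x += vx * dt;
    y += vy * dt;
}
```

If the game stored the velocity directly as a (vx, vy) vector, no trigonometry would be needed per update; sin/cos would appear only at the moment thrust is applied in the direction the ship currently faces.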

How could I do simple per-fragment lighting on BSP geometry?

I am programming a graphics engine for an old game. The game uses BSP geometry, which I have rendering perfectly. For its lights, however, it simply has light instances with the standard x, y, z, RGBA, brightness, and type. Now, I know that OpenGL has an 8-light limit. How should I go about handling multiple lights? I am learning per-fragment lighting just to have the concepts under my belt. I know per-pixel lighting is the standard and I will eventually move there; I just want to learn how to put this concept into play as well. I assume I would just calculate which lights are the closest and render those 8. Does anyone have any other ideas?

How to animate objects separately?

I've set up a simple scene with some triangles and quads. With glPushMatrix() and glPopMatrix() I managed to move an object to a new position relative to another object in my render scene, but what I want is, for example, for the first triangle to translate 2 units on the Z axis every render() call, while the other triangle translates only 1 unit. I couldn't find any information on how to manage this; as it stands, every glTranslatef() affects all objects. What do I have to do?
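
A sketch of the usual structure: keep per-object state outside the render function, and bracket each object's transform in its own push/pop pair so one object's glTranslatef() can't leak into the next. The draw helpers and offsets here are illustrative:

```cpp
float z1 = 0.0f, z2 = 0.0f;   // per-object positions, persisting across frames

void render()
{
    z1 += 2.0f;               // first triangle: 2 units per render() call
    z2 += 1.0f;               // second triangle: 1 unit per render() call

    glPushMatrix();
    glTranslatef(0.0f, 0.0f, z1);
    drawFirstTriangle();      // hypothetical draw call
    glPopMatrix();            // undo, so the next object starts clean

    glPushMatrix();
    glTranslatef(0.0f, 0.0f, z2);
    drawSecondTriangle();
    glPopMatrix();
}
```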

Improve bloom quality

I'm trying to understand how I can improve my bloom effect's quality using all the optimizations known to me. Currently I do it as follows:

1. Create a new texture, using some threshold to extract the areas which should glow.
2. Downsample that texture 4 times (using bilinear filtering, GL_LINEAR).
3. Blur each texture horizontally using a 5x5 Gaussian blur.
4. Blur each texture vertically using a 5x5 Gaussian blur.
5. Composite each blurred texture with the final image.

I've implemented the Gaussian blur using the incremental Gaussian algorithm, with the radius set to 5. However, because of the size of the last texture, after composition the bloom has very low quality, especially when moving the camera. (Screenshot omitted; it doesn't look clear there, but you can see it near the bunny's tail.) A simple workaround to the issue was increasing the radius for the smaller textures, but then the image was more blurred. I get a similar effect when I use the solution presented by Philip Rideout. I get better results when I use the blur shader presented here (link), but I still see some kind of vertical stripes. I also tried to improve the algorithm by blurring the image and then downsampling it, repeating the whole process for the rest of the images, but I haven't spotted any difference.

I also wonder how it is done in Unreal Engine — that is, how the effective blur radius is computed. The documentation claims that each Bloom Size value is "the size in percent of the screen width", and each texture is 2 times smaller than the previous one.

Rendering two textures with blending and alpha test

What I am looking for is the following: I have a circle on a square image, with alpha 0 at the corners, and also a square shadow, with alpha 0 everywhere else. I would like the final result to be a blend of these two renders, with the shadow not being rendered outside the circle. How could I achieve that?

Easy method of obtaining texture coordinates for a sprite sheet

In my (OpenGL ES 2.0, Android) game, I use two types of sprite sheets. In one type all sprite frames are of equal size; in the other they are all different shapes and sizes. It is the latter to which this question pertains. Basically, I've created a sprite sheet, and I currently use Paint.NET to find the pixel coordinates a particular frame sits at. I then have to work out the texture coordinates by dividing the pixel position by the width (for X) or height (for Y). So if my image is 2048 × 2048 and a particular frame is at (300, 500), I work it out like so:

300 / 2048 = 0.146484375
500 / 2048 = 0.244140625

So my X (or S) is 0.146484375 and my Y (or T) is 0.244140625. I use the same method to work out the width and height. I then code this using a method like so:

```java
setTexCoords(0.146484375f, 0.244140625f, 0.048828125f, 0.048828125f); // X, Y, width, height
```

When working with lots of sprites, this is a very tedious process. The main problem, however, is that if I later want to add sprites to the sheet and make it bigger to accommodate them, I have to revisit every sprite and work out its new texture coords based on the new bitmap size. Is there any method I can use to make this process easier? Even if it's just a program where I can hover the mouse over a bitmap and it tells me the texture coordinate (between 0 and 1) — even that would take a little of the work out of it. Any suggestions welcomed. I have read a similar question on here; however, none of the answers really helped.
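
One low-tech improvement is to store each frame as a pixel rectangle and convert to normalized coordinates in code, so a bigger sheet only means changing the sheet dimensions in one place. A sketch (C++ for illustration; the Java version is identical arithmetic) — the names are assumptions:

```cpp
struct UV { float s, t, w, h; };   // normalized texture rectangle

// Convert a pixel rectangle on a sheet of size sheetW x sheetH to UVs.
UV pixelRectToUV(int px, int py, int pw, int ph, int sheetW, int sheetH)
{
    return { static_cast<float>(px) / sheetW,
             static_cast<float>(py) / sheetH,
             static_cast<float>(pw) / sheetW,
             static_cast<float>(ph) / sheetH };
}

// Usage, matching the example above:
// UV uv = pixelRectToUV(300, 500, 100, 100, 2048, 2048);
// setTexCoords(uv.s, uv.t, uv.w, uv.h);
```

Atlas-packing tools commonly export exactly these pixel rectangles (as XML/JSON), so they never have to be read off by hand.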

Are there still advantages to using GL_QUADS?

OK, I understand that GL_QUADS is deprecated, and thus we're not "supposed" to use it anymore. I also understand that a modern PC running a game that uses GL_QUADS is actually drawing two triangles. Now, I've heard that because of this, a game should be written using triangles instead. But I'm wondering whether, due to the specifics of how OpenGL turns that quad into two triangles, it is ever still advantageous to use quads.

Specifically, I'm currently rendering many unconnected quads from rather large buffer objects. One of the areas where I have to be careful is how large the vector of floats I use to update these buffer objects gets (I have quite a few extra float values per vertex, and a lot of vertices per buffer; the largest buffers are about 500KB). So it strikes me that if I change my buffer objects to draw triangles, this vertex data is going to be 50% larger (six vertices to draw a square rather than 4) and take 50% longer for the CPU to generate. If GL_QUADS still works, am I getting a benefit here, or is the 50% extra memory and CPU time still being spent on OpenGL's automatic conversion to two triangles?
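
For unconnected quads specifically, indexed drawing gives triangles without the 50% vertex blow-up: each quad keeps its 4 full vertices, and only a small index buffer (6 indices per quad) grows. A sketch:

```cpp
// Per quad: 4 vertices in the VBO (same as GL_QUADS), 6 indices in the IBO.
// For quad number q, the vertices are 4q..4q+3 and the indices are:
const unsigned base = 4 * q;
unsigned indices[6] = { base + 0, base + 1, base + 2,   // first triangle
                        base + 2, base + 3, base + 0 }; // second triangle

// Drawing all quads at once, with the index buffer bound to GL_ELEMENT_ARRAY_BUFFER:
glDrawElements(GL_TRIANGLES, quadCount * 6, GL_UNSIGNED_INT, nullptr);
```

The index data is tiny next to the per-vertex attributes, and it never changes when the vertex data updates, so it can be generated once up front.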

Is glxinfo saying that the 980 GTX doesn't support a 32-bit depth buffer?

I've been using the glxinfo command (`glxinfo -v`) to explore the supported framebuffer configurations. There are two values relating to depth: "depth" and "depthsize". According to the source, it appears that the "depth" value relates to the X config and the "depthsize" value relates to the OpenGL config. Assuming that is correct, would the lack of a "depthsize 32" entry suggest that 32-bit depth buffers aren't supported? Or is my understanding of the glxinfo output flawed?

How to keep the same aspect ratio on different devices with cocos2d-x?

I have been making a board game using cocos2d-x. There are two scenes for now: one is the main menu, and the other is the gameplay scene. When I run the APK on a tablet with the EXACT_FIT resolution policy in AppDelegate.cpp, sprites in the scene seem a little stretched. How can I maintain the same aspect ratio on all devices, and also the positions of game entities? Thanks.

Do I need to "use" the shader program when buffering and defining VBO data?

I'm trying to wrap my head around the relation between a GL "program" and the VAOs, buffers, textures, etc. I don't quite understand when I need to "use" my shader program, and when (if ever) I need to "un-use" it. So when, exactly, do I need to use the shader program (i.e. invoke glUseProgram)? Does my shader program need to be in use when:

- Creating and binding VAOs?
- Creating and binding VBOs?
- Buffering data to VBOs?
- Defining vertex attributes (i.e. invoking glVertexAttribPointer)?

Draw a line of defined width around a mesh at a cut plane

I want to display a solid line of defined width around a mesh at a cut plane. I realized I can use the same method as gl_ClipDistance, using a dot() operation. However, the output is not what I want, since the line width changes depending on the angle between the current triangle and the plane, and there are also some bad aliasing effects. My guess is that the `abs(factor) < 0.1` is wrong here, but I don't know any alternative. I'm fairly new to shaders... How can I solve this issue? (Current output and desired output screenshots omitted.)

Fragment shader:

```glsl
#version 330 core

out vec4 fragColor;

in vec3 Normal;
in vec3 FragPos;

uniform vec4 clipPlane0;

void main()
{
    float factor = dot(vec4(FragPos, 1.0), clipPlane0);
    if (abs(factor) < 0.1)
        fragColor.rgb = vec3(1, 0, 0);
    else
        fragColor.rgb = vec3(0, 1, 0);
}
```
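
The width varies because `factor` is a world-space distance: a triangle nearly parallel to the plane stays within 0.1 of it for a long stretch. A standard fix is to normalize by the screen-space derivative of `factor` (GLSL's fwidth), which yields a roughly constant pixel width and also gives a handle for anti-aliasing. A sketch, carried here as a C++ shader-source string; `lineWidthPx` is an assumed uniform:

```cpp
const char* kCutLineFragmentShader = R"(
    #version 330 core
    out vec4 fragColor;
    in vec3 FragPos;
    uniform vec4 clipPlane0;
    uniform float lineWidthPx;   // desired line width in pixels
    void main() {
        float d = dot(vec4(FragPos, 1.0), clipPlane0);
        // fwidth(d) estimates how much d changes per pixel, so px is the
        // distance to the plane measured in pixels - independent of the
        // triangle's angle to the plane.
        float px = abs(d) / fwidth(d);
        fragColor = (px < lineWidthPx) ? vec4(1, 0, 0, 1)
                                       : vec4(0, 1, 0, 1);
    }
)";
```

Replacing the hard comparison with a smoothstep over the last pixel of `px` would also soften the aliasing.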

What files and libraries do I need for OpenGL and controls?

I am starting a 3D game; however, I find OpenGL's file layout very confusing. One package doesn't have one file, another package doesn't have another, so I need a summary of which files I need to start playing with 3D. It would be good if it included keyboard and mouse controls. Shortened question: where can I get all the files for OpenGL 4.0 and an integrated controls library?

Translate the ModelView matrix, or change vertex coordinates?

If I have a simple 2D scene and I want to move the objects inside the scene on the X and Y axes, should I send OpenGL the original vertex coordinates with each move and apply a ModelView matrix transform, or should I simply send the updated coordinates? Since the scene is simple and there really are not that many vertices, I'm not using vertex buffers.

OpenGL and 3ds model loading: path of least resistance?

Hey guys, I'm working on a final project for a graphics class, and a teammate and I are making a simple 3D tower defense game. We're currently planning on using 3ds models and drawing them with OpenGL. However, neither of us has a lot of practical experience with loading and drawing models. What is the fastest and/or easiest (not necessarily the best or most feature-complete) way to load a 3ds model and draw it with an OpenGL/GLUT setup?

What makes a game look "good"?

I am working on a 3D space game using OpenGL and C++, and I am planning to focus on giving the game modern, eye-catching graphics. But the more I think about it, the more I realise I don't really know what makes graphics "good". Sure, I can go and play some well-known AAA games and bask in the amazingly put-together graphics, but I don't really know how the graphics look good. (This is why I consider games to be an art!) This is what I can think of now:

- High quality textures
- High quality models
- A good lighting model
- Bump mapping / specularity mapping
- High quality UI, if applicable
- A wealth of not-overdone post effects

I'm asking here in the hope that an experienced game developer who has produced games and knows how they work inside and out can explain some techniques that make a game's graphics look "good", and some not-well-known quirky tips. That'd be awesome. Thanks!

Replacing client-side vertex arrays with glBufferData

I need to replace client-side vertex arrays in order to upgrade to a new version of OpenGL, but I'm not sure what the best way to buffer data is now. What I have is a 2D sprite engine that uses batching to push as many vertices as possible to the GPU (using fixed-pipeline functions, glVertexPointer etc.), but frequently the batch is only a single quad. Because of how sorting works, the buffer needs to be updated every frame (or more often).

I need to use glBufferData/glVertexAttribPointer now, so what is the best way to handle this case? I can allocate the buffer for glBufferData large enough to hold the maximum size of a batch (which is no more than maybe 5000 vertices), so should I just push the old vertex array to glBufferData every frame, or use another method? Maybe calling glVertexPointer and glBufferData have the same costs associated with copying memory to the GPU, so I don't need to worry about it, but I'd like to know, since I'm still pretty new to OpenGL.

CONCLUSION: In my simple tests, I found calling glBufferData every frame with all vertices (OpenGL 4.1) actually slightly faster than client-side vertex arrays (OpenGL 2.1). Thanks.

FBO: blit depth buffer to screen?

I have an FBO in a deferred OpenGL 4.3 renderer, in which I set up the FBO's depth buffer like this:

```cpp
GLCALL(glGenRenderbuffers(1, &mDepthbuffer));
GLCALL(glBindRenderbuffer(GL_RENDERBUFFER, mDepthbuffer));
GLCALL(glRenderbufferStorageMultisample(GL_RENDERBUFFER, 0, GL_DEPTH32F_STENCIL8, windowWidth, windowHeight));
GLCALL(glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, mDepthbuffer));
GLCALL(glBindRenderbuffer(GL_RENDERBUFFER, 0));
```

Normally when I debug, I can output the color attachments to the screen easily, like this for normals:

```cpp
GLCALL(glReadBuffer(GBuffer::GBUFFER_COLOR_ATTACHMENT_NORMAL));
GLCALL(glBlitFramebuffer(0, 0, mWindowWidth, mWindowHeight, 0, 0, mWindowWidth, mWindowHeight,
                         GL_COLOR_BUFFER_BIT, GL_LINEAR));
```

But how can I do the same for the depth buffer contents, as it is not a color buffer?
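
For reference, the blit itself changes in two ways for depth: the mask becomes GL_DEPTH_BUFFER_BIT, and the filter must be GL_NEAREST (glReadBuffer doesn't apply, since it selects color buffers only). A sketch with an assumed FBO handle:

```cpp
glBindFramebuffer(GL_READ_FRAMEBUFFER, gbufferFBO);  // source FBO (assumed name)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);           // default framebuffer
glBlitFramebuffer(0, 0, mWindowWidth, mWindowHeight,
                  0, 0, mWindowWidth, mWindowHeight,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);  // depth blits require GL_NEAREST
```

Note this copies into the destination's depth buffer rather than making depth visible; to actually view it, the usual route is a depth texture (instead of a renderbuffer) sampled by a full-screen quad that maps depth to grayscale.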

How to solve artifacts caused by vertex lighting in my voxel engine?

My current lighting system bakes the light amount per vertex, based on ray tracing from the light source to the 8 corners of each block and on the block's distance to the light. It works acceptably, but it's definitely not perfect. Of course, the blocks are made out of faces, which are made out of triangles. In the situation shown in the screenshot (omitted), where there is a light directly behind the camera, you get those weird triangle lighting issues. How can I fix this problem?

3D camera and window resolution relation

I'm having a bit of an issue when it comes to 3D and 2D cameras in relation to a game's window resolution. I want to let the player choose from different window resolutions, either from a menu or by resizing the window by dragging the window border, and I can't get the scaling of the graphics to work properly. In games I've played, I can go into the options menu, choose a screen resolution, and then the window resizes itself and the graphics, menu buttons, etc. just scale to the correct sizes.

At the moment I've tried two different window resolutions, 800x600 and 1280x720 (don't mind the different aspect ratios). (Screenshots omitted.) As you can see, more of the game world can be seen (more stone tiles, if that makes sense) in the window with a resolution of 1280x720. I want my 3D/2D games to be resolution-independent, with the same amount of the game world visible no matter what window resolution the player chooses.

My camera class consists of:

- Camera matrix: used to translate and rotate my camera
- View matrix: the inverse of the camera matrix
- Perspective matrix: see the code below
- View-projection matrix: view matrix * perspective matrix
- FOV: 45.0f
- Aspect ratio: camera width / camera height

Note: row-major matrices.

```cpp
// Perspective matrix
const float Tangent = 1.0f / tanf(DEGREES_TO_RADIANS(Fov * 0.5f));
const float NearToFar = FarClip - NearClip;

PerspectiveMatrix(Identity);
PerspectiveMatrix[0] = Tangent / AspectRatio;
PerspectiveMatrix[5] = Tangent;
PerspectiveMatrix[10] = -FarClip / NearToFar;
PerspectiveMatrix[11] = -1.0f;
PerspectiveMatrix[14] = -(NearClip * FarClip) / NearToFar;
PerspectiveMatrix[15] = 0.0f;
```

At each update, I'm also setting the viewport by calling glViewport(0, 0, CameraWidth, CameraHeight). This might change in the future if I decide to use two cameras in a game, for a splitscreen game for example.

How can I solve the window resolution independence issue? I have thought about creating a framebuffer of the size I want, attaching it to a quad, and then rendering the quad at the size of the window, which would scale the framebuffer up/down if the window is bigger/smaller than the framebuffer's original size. But if there's an easier way of doing it, I would gladly use that instead.

Texture mapping procedurally generated geometry

How can I calculate texture coordinates for such geometry? The angle shown in the image (89.90 degrees) may vary, so the geometry changes and is not always this uniform (it may be like the geometry at the bottom of the image). The red dots are generated procedurally, depending on the degree of smoothness given. (Image omitted.)

Rotation ascending into infinity?

I'm creating a rotation to make my sun move; however, the position quickly grows to Infinity and then becomes NaN. I thought that taking advantage of Matrix4f would make this much easier, but it behaves as previously stated. Ideally the sun should rotate around sunCenter. Why is it doing this, and how would I fix it?

```java
private Vector3f getSunPosition() {
    double rotation = (time / DAY_LENGTH) * 360;
    Matrix4f matrix = new Matrix4f();
    Vector3f pos = sun.getPosition();
    matrix.m03 = pos.x;
    matrix.m13 = pos.y;
    matrix.m23 = pos.z;
    Matrix4f.rotate((float) Math.toRadians(rotation), sunCenter, matrix, matrix);
    return new Vector3f(matrix.m03, matrix.m13, matrix.m23);
}
```

I had it print out the Vector3f and rotation. The rotation looks a bit odd at first; however, I can fix that another time.

```
Vector3f[1.0, 1.0, 1.0] 0.0
Vector3f[288375.56, 0.18377686, 287549.88] 79.41000366210938
Vector3f[1.96323607E11, 125296.0, 1.96323607E11] 158.82000732421875
Vector3f[1.05730567E17, 0.0, 1.05730567E17] 238.23001098632812
Vector3f[5.5935644E22, 0.0, 5.5935644E22] 240.02999877929688
Vector3f[2.9050322E28, 0.0, 2.9050322E28] 241.8300018310547
Vector3f[1.4801178E34, 0.0, 1.4801178E34] 243.6300048828125
Vector3f[Infinity, 0.0, Infinity] 243.8400115966797
Vector3f[Infinity, NaN, Infinity] 244.04998779296875
Vector3f[NaN, NaN, NaN] 244.25999450683594
Vector3f[NaN, NaN, NaN] 244.41000366210938
Vector3f[NaN, NaN, NaN] 244.55999755859375
```
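
Two things stand out in the code above: the rotate call's second argument is the rotation axis and is normally expected to be unit length (here sunCenter, a position, is passed as the axis), and the sun's previous — already-scaled — position is fed back in every frame, so any scale error compounds exponentially, which matches the runaway values. A stateless alternative is to recompute the position from the centre each frame. A C++ sketch of that idea (illustrative names; the Java version is direct):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Recompute the orbit position from scratch; nothing accumulates, so nothing
// can blow up. timeOfDay runs 0..1 across a full day.
Vec3 sunPosition(const Vec3& center, float radius, float timeOfDay)
{
    const float angle = timeOfDay * 2.0f * 3.14159265f;
    return { center.x + radius * std::cos(angle),   // orbit in the XZ plane here;
             center.y,                              // pick the plane to taste
             center.z + radius * std::sin(angle) };
}
```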

Black or white border/shadow around PNGs in SDL + OpenGL

I'm having the same issue as "Why do my sprites have a dark shadow line/frame surrounding the texture?" However, when I apply the fix suggested there (changing GL_SRC_ALPHA to GL_ONE), it just replaces the black border with a white border on the images. I was able to find a lot of results on Google from people finding black borders and fixing them with the GL_ONE change, but no results for white borders. Any ideas? Here's some of my relevant code.

Init:

```cpp
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_DEPTH_TEST);
glEnable(GL_MULTISAMPLE);
glEnable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glAlphaFunc(GL_GREATER, 0.01);
glEnable(GL_ALPHA_TEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
```

When each texture is loaded:

```cpp
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, surface->w, surface->h,
                  GL_BGRA, GL_UNSIGNED_BYTE, surface->pixels);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

Thanks!
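
Borders like these typically come from filtering and mipmapping mixing in the RGB stored in fully transparent texels (black RGB gives dark fringes; additive-style blend tweaks turn them light). One robust fix is premultiplied alpha: multiply RGB by A at load time and blend with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). A sketch, assuming a 4-byte-per-pixel surface with alpha in the fourth byte (the channel order doesn't matter for the multiply):

```cpp
// Premultiply in place before gluBuild2DMipmaps uploads the pixels.
unsigned char* p = static_cast<unsigned char*>(surface->pixels);
for (int i = 0; i < surface->w * surface->h; ++i) {
    const unsigned a = p[4 * i + 3];
    p[4 * i + 0] = static_cast<unsigned char>(p[4 * i + 0] * a / 255);
    p[4 * i + 1] = static_cast<unsigned char>(p[4 * i + 1] * a / 255);
    p[4 * i + 2] = static_cast<unsigned char>(p[4 * i + 2] * a / 255);
}
// And blend premultiplied colors:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
```

With premultiplied data, transparent texels are exactly (0, 0, 0, 0), so filtered samples can no longer drag in a stray border color.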

ATI driver bug and rendering to a texture 2D array in OpenGL

I am trying to render to a texture2DArray in OpenGL, with a setup similar to the one described in this post. My question is whether anyone has gotten this to work on ATI hardware, or whether there is still a bug in the driver preventing multi-layered rendering. The bug is also mentioned in the forum post.

How to upload lights when doing one pass per light

Suppose I have a reasonable number of light sources that I upload at once in my forward renderer and accumulate in the shader. Now that I am willing to move to deferred rendering, and so to one pass per light, I have a design doubt. Suppose my light struct is no more than 64B; I have two options:

1. Still upload them all to the GPU once, and then use a uniform to select the right index for each pass.
2. Upload the corresponding light's info for each pass.

With option 1 I have the advantage of not uploading info on each pass, but just once. On the other hand, option 2 allows less data in memory, with the downside of having to upload about 64B per pass, per frame. While I am leaning towards option 1, I am no expert and am kind of new to the field. Knowing that I don't think I'll have more than 100 lights, which option would you choose? Is there any trade-off I am missing?

EDIT: If there's not much difference, I already have all the code ready for option 1, so that's another factor that leads me towards it. However, to be honest, I am more than happy to refactor if it means an advantage. Thanks.
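
For scale: 100 lights × 64B is 6.4KB, comfortably within uniform-block limits, so option 1 maps naturally onto a uniform buffer uploaded once plus a single integer uniform per pass. A sketch with illustrative names and layout:

```cpp
struct LightGPU { float posRadius[4]; float colorIntensity[4]; };  // 32B here; pad as needed

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(LightGPU) * lightCount, lights, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   // binding point 0, matched in the shader

// Per light pass: one tiny uniform update instead of re-uploading the light.
for (int i = 0; i < lightCount; ++i) {
    glUniform1i(lightIndexLocation, i);        // shader indexes lights[lightIndex]
    // ... draw the light's volume / full-screen quad ...
}
```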

Calculating a directional shadow map using the camera frustum

I'm trying to calculate the 8 corners of the view frustum so that I can use them to calculate the ortho projection and view matrix needed to compute shadows based on the camera's position. Currently, I'm not sure how to convert the frustum corners from local space into world space. I have calculated the frustum corners as follows (correct me if I'm wrong):

```cpp
float tan = 2.0 * std::tan(m_Camera->FOV * 0.5);
float nearHeight = tan * m_Camera->Near;
float nearWidth = nearHeight * m_Camera->Aspect;
float farHeight = tan * m_Camera->Far;
float farWidth = farHeight * m_Camera->Aspect;

Vec3 nearCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Near;
Vec3 farCenter = m_Camera->Position + m_Camera->Forward * m_Camera->Far;

Vec3 frustumCorners[8] = {
    nearCenter - m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // near bottom left
    nearCenter + m_Camera->Up * nearHeight - m_Camera->Right * nearWidth, // near top left
    nearCenter + m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // near top right
    nearCenter - m_Camera->Up * nearHeight + m_Camera->Right * nearWidth, // near bottom right
    farCenter - m_Camera->Up * farHeight - m_Camera->Right * nearWidth,   // far bottom left
    farCenter + m_Camera->Up * farHeight - m_Camera->Right * nearWidth,   // far top left
    farCenter + m_Camera->Up * farHeight + m_Camera->Right * nearWidth,   // far top right
    farCenter - m_Camera->Up * farHeight + m_Camera->Right * nearWidth,   // far bottom right
};
```

How do I move these corners into world space?

Update: I'm still not sure if what I'm doing is right. I've also attempted to build the ortho projection by looping through the frustum corners and getting the min and max x, y, z coordinates, then simply setting the values of the projection as: left = minX, right = maxX, top = maxY, bottom = minY, near = maxZ, far = minZ.

I've searched the internet, but all the tutorials use hard-coded values, so the shadow maps aren't applicable to an open world, only to a restricted portion of the scene. Any help? Pseudocode is preferred, as my linear algebra skills (and reading skills) aren't that great.
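
A common alternative that sidesteps the hand-built corner math entirely: unproject the eight corners of the NDC cube through the inverse view-projection matrix; the results land directly in world space (which also answers the local-to-world question — corners built from Position/Forward/Up/Right as above are already in world space). A GLM sketch:

```cpp
#include <glm/glm.hpp>

// The 8 corners of the canonical clip-space cube, pushed through inv(P*V),
// give the world-space frustum corners after the perspective divide.
void worldFrustumCorners(const glm::mat4& view, const glm::mat4& proj, glm::vec3 out[8])
{
    const glm::mat4 invVP = glm::inverse(proj * view);
    int i = 0;
    for (int x = -1; x <= 1; x += 2)
        for (int y = -1; y <= 1; y += 2)
            for (int z = -1; z <= 1; z += 2) {
                glm::vec4 p = invVP * glm::vec4(float(x), float(y), float(z), 1.0f);
                out[i++] = glm::vec3(p) / p.w;   // perspective divide
            }
}
```

Taking the min/max of these corners after multiplying each by the light's view matrix then yields the ortho bounds with no hard-coded values.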

Making a house in JOGL (Java, Eclipse)?

Here's what I'm trying to do: one house, with 3 rooms, a window in two of the rooms, doors in the front and back of the house and into each room (the back door has to be through one of the rooms). I've added a picture as an example (omitted). I'm using JOGL 2.0, in Eclipse. Clarification: my question is how do I code it in Java using JOGL? This is what I have so far:

```java
glBegin(GL_QUADS);
// Floor
glVertex3f(-1, -1, -1);
glVertex3f(1, -1, -1);
glVertex3f(1, -1, 1);
glVertex3f(-1, -1, 1);
// Ceiling
glVertex3f(-1, 1, -1);
glVertex3f(1, 1, -1);
glVertex3f(1, 1, 1);
glVertex3f(-1, 1, 1);
// Walls
glVertex3f(-1, -1, 1);
glVertex3f(1, -1, 1);
glVertex3f(1, 1, 1);
glVertex3f(-1, 1, 1);

glVertex3f(-1, -1, -1);
glVertex3f(1, -1, -1);
glVertex3f(1, 1, -1);
glVertex3f(-1, 1, -1);

glVertex3f(1, 1, 1);
glVertex3f(1, -1, 1);
glVertex3f(1, -1, -1);
glVertex3f(1, 1, -1);

glVertex3f(-1, 1, 1);
glVertex3f(-1, -1, 1);
glVertex3f(-1, -1, -1);
glVertex3f(-1, 1, -1);
glEnd();
```

Drawing colored geometry in OpenGL using SDL

First off, I will confess I have asked the same question on Stack Overflow, but I think this forum might be a better fit. I am trying to combine two things using SDL: drawing a webcam feed via an SDL_Surface (from OpenCV), and drawing some plain colored geometry on top using OpenGL. The problem I have is that I think the geometry is textured by the screen texture that I draw the webcam feed into, even if I call glDisable(GL_TEXTURE_2D) right before drawing the quad. See the screenshot (omitted): the square in the top left is supposed to be white, but it seems to have the color of the bottom-right texel. The code in my Display function is as follows:

```cpp
// screen_surface_ contains a frame from the camera
SDL_UpdateTexture(screen_texture_, NULL, screen_surface_->pixels, screen_surface_->pitch);
SDL_RenderClear(renderer_);
SDL_RenderCopy(renderer_, screen_texture_, NULL, NULL);

glLoadIdentity();
glDisable(GL_TEXTURE_2D);
glColor3f(1.0, 1.0, 1.0);
glBegin(GL_QUADS);
glVertex3f(10.0f, 50.0f, 0.0f);   // top left
glVertex3f(50.0f, 50.0f, 0.0f);   // top right
glVertex3f(50.0f, 10.0f, 0.0f);   // bottom right
glVertex3f(10.0f, 10.0f, 0.0f);   // bottom left
glEnd();
glColor3f(1.0, 1.0, 1.0);

SDL_RenderPresent(renderer_);
```

You can view the code of all the relevant functions here (link). I got a blanket answer saying "don't mix SDL and OpenGL draw code", as addressed in this bug report, but that would mean I'm simply stuck waiting for that bug to be fixed, so I'm still looking for a way to disable texturing after SDL_RenderCopy has been called.

Edit: I've confirmed that it's indeed still using screen_texture_. Setting the texture coordinates to values between 0 and 100 shows part of the webcam feed. (I tried 0 to 1 first, but then I read that texture coords are different for rectangular images.)

Loading non-skeletal animation into OpenGL via Assimp

I'm a newbie with Assimp and OpenGL. I'm trying to import .fbx- or .dae-formatted files into OpenGL via Assimp. Importing skeletal animation was kind of easy; lots of introductions and sample projects helped me run several files properly. But in the case of non-skeletal animation (is it called "vertex animation"?), which has no bones, I can't find any sources or instructions, including in the Assimp tutorials. I'm not even sure it is possible; maybe Assimp doesn't support this kind of animation. Is there any recommendable project for non-skeletal animation? Any single comment will be helpful. Thanks.

How do I fix this access violation when I exit my custom OpenGL game engine?

I'm writing a game engine, where the engine is a project that exports a .dll file. In another project, in the same solution as the engine, there is a sandbox project which uses the engine. However, there is a bug. I run the sandbox project in debug mode with the engine DLL. When I spam my mouse and keyboard for a few seconds and close the program via the exit button, the program crashes with an error:

Exception thrown at 0x00000043 in phantom sandbox.exe: 0xC0000005: Access violation executing location 0x00000043. If there is a handler for this exception, the program may be safely continued.

I found the source of the bug. Since I'm currently writing the engine for OpenGL, I have to initialize GLEW, and I need to create an HGLRC. If I don't initialize this HGLRC, everything works. This is not the ideal solution, since I need to use OpenGL for my engine. I then went forward without exporting the .dll from the engine, making the engine an application instead. I made a main.cpp and wrote it to use the engine, enabling OpenGL rendering. I tried to recreate the bug, but everything works! I thought it might be OpenGL, but now I'm thinking it might be my engine. How do I fix this? Here is the CRT code where it errors:

```cpp
int const main_result = invoke_main();
__telemetry_main_return_trigger(nullptr);
if (!__scrt_is_managed_app())
    exit(main_result);
if (!has_cctor)   // <- this is the line being sent to the call stack
    _cexit();
__scrt_uninitialize_crt(true, false);
return main_result;

__except (_seh_filter_exe(GetExceptionCode(), GetExceptionInformation()))
{
    int const main_result = GetExceptionCode();
    if (!__scrt_is_managed_app())
        exit(main_result);
```

(Call stack screenshot omitted.)

How to compress repetitive information when uploading mesh data?

I want to avoid sending repetitive information when drawing a mesh. If I use a single point for each face, plus two vectors as additional attributes that represent the travel of each vertex, I can use that information in a geometry shader to produce the normal and the two other points, followed by any additional attributes for that face. Is the computation cost worth the saving in data transferred? Or is there a better way to optimize the pipeline for sending a mesh to draw?

Edit: In particular, this is a case where the mesh has attributes unique to the face and not the vertex, so if it were done the regular way, each vertex would be duplicated once for every face that uses it.

Setting a uniform float in a fragment shader results in strange values; is this a type conversion? How can it be fixed?

First, some details:

- I'm learning OpenGL from the tutorials on https://open.gl
- My computer is running Linux Mint 18.1 Xfce 64-bit
- My graphics card is a GeForce GTX 960M
- OpenGL version: 4.5.0 NVIDIA 375.66
- GLSL version: 4.50 NVIDIA
- Graphics card driver: nvidia-375, version 375.66-0ubuntu0.16.04.1
- CPU: Intel Core i7-6700HQ

The code I'm working on can be found here: https://github.com/Faison/sdl2-learning/blob/8a61032d20edf91cfa60f665e1bb4d72e58f634b/phase_01_initial_setup/main.c (the Makefile is located in the same directory).

In a fragment shader, I'm trying to make an image do a sort of "flipping/mirroring" animation (lines 44-57):

```glsl
#version 450 core

in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;

uniform sampler2D tex;
uniform float factor;

void main()
{
    if (Texcoord.y < factor)
        outColor = texture(tex, Texcoord) * vec4(Color, 1.0);
    else
        outColor = texture(tex, vec2(Texcoord.x, 1.0 - Texcoord.y));
}
```

When factor is 1, the image should be right side up with some color on it. When factor is 0, the image should be upside down with no color added. When factor is 0.5, the top half should be right side up and the bottom half upside down. Currently, that is only the case if I replace `factor` with the literal number. When I set the uniform `factor` with glUniform1f(), I get very strange results. To illustrate, I added some debug code (lines 188-197) that sets the uniform with one number, retrieves the number from the uniform, and outputs both values to try to see what's going on. Here's the code:

```c
GLfloat factorToSet = 1.0f;
GLfloat setFactor = 0.0f;
GLint uniFactor = glGetUniformLocation(shader_program, "factor");

while (factorToSet > -0.1f) {
    glUniform1f(uniFactor, factorToSet);
    glGetUniformfv(shader_program, uniFactor, &setFactor);
    printf("Factor of %.1f becomes %f\n", factorToSet, setFactor);
    factorToSet -= 0.1;
}
```

And here are the results:

```
Factor of 1.0 becomes 0.000000
Factor of 0.9 becomes 2.000000
Factor of 0.8 becomes 0.000000
Factor of 0.7 becomes 2.000000
Factor of 0.6 becomes 0.000000
Factor of 0.5 becomes 0.000000
Factor of 0.4 becomes 2.000000
Factor of 0.3 becomes 36893488147419103232.000000
Factor of 0.2 becomes 0.000000
Factor of 0.1 becomes 36893488147419103232.000000
Factor of 0.0 becomes 0.000000
```

So, with what little I understand about OpenGL and the way scalar types are stored in binary, I'm thinking this issue is caused by my GLfloat getting converted into something else on the way to the shader's uniform float. But I'm grasping at straws. What could be causing this strange conversion between the number I send to the uniform and the value the uniform becomes? What could I do to fix it, if it's possible to fix? Thanks in advance for any help and leads; I really appreciate it. :)

An additional note after receiving a working answer: George Hanna provided a link to a post where someone had a similar issue. I read over the comments, and someone said to use -DGL_GLEXT_PROTOTYPES as a CFLAG. So I rolled back my local code to use glUniform1f() again, added -DGL_GLEXT_PROTOTYPES to the Makefile, and everything worked! Even crazier, all the compiler warnings I had for implicit declarations of OpenGL functions were gone! So in addition to the answer below, if you have this issue, try adding -DGL_GLEXT_PROTOTYPES to your CFLAGS. (You can also get this effect by adding `#define GL_GLEXT_PROTOTYPES` before any OpenGL includes.)

Can you dynamically set which texture to use in a shader?

I'm working on a user interface system, and I want to be able to mix textured polys with frag-colored polys. Here's my shader code, which doesn't work:

```glsl
// vertex shader
attribute vec2 vertex_coords;
attribute float texid;
attribute vec4 fragdetails;

varying float usingtex;
varying vec4 v_fragdetails;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * vec4(vertex_coords, 0.0, 1.0);
    usingtex = texid;
    v_fragdetails = fragdetails;
}
```

```glsl
// fragment shader
varying float usingtex;
varying vec4 v_fragdetails;

uniform sampler2D thetexture;

void main()
{
    if (usingtex != 0.0) {
        // thetexture = int(usingtex);  // what I'd like to do
        gl_FragColor = texture2D(thetexture, vec2(v_fragdetails[0], v_fragdetails[1]));
    } else {
        gl_FragColor = v_fragdetails;
    }
}
```

fragdetails consists of either (r, g, b, a) for a colored poly, with texid set to 0, or (texture_x, texture_y, 0, 0) for a textured one. However, samplers must be uniforms, and uniforms can't be modified in the shader. So how on earth would I swap between textures in one draw? Do I have to set them all up as uniforms and then pick from those? I'd have to use an array and know its length, which isn't really practical for a UI system where buttons will be clicked, tabs changed, and so on. Is there a way to do this?
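
A common UI-renderer approach sidesteps the branching entirely: always sample a texture and multiply by the vertex color, and bind a 1x1 white texture for "untextured" polys (colored polys then carry their RGBA in the color attribute, with UVs of 0,0). A sketch:

```cpp
// One-time setup: a 1x1 opaque white texture to stand in for "no texture".
GLuint whiteTex;
glGenTextures(1, &whiteTex);
glBindTexture(GL_TEXTURE_2D, whiteTex);
const unsigned char white[4] = { 255, 255, 255, 255 };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, white);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// The fragment shader then needs only one path (no texid attribute at all):
//   gl_FragColor = texture2D(thetexture, uv) * color;
// Textured widgets bind their image/atlas; colored widgets bind whiteTex.
```

Packing all UI images into one texture atlas keeps this to a single draw call; otherwise, draws are simply batched per bound texture.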

glBufferData consuming system memory

I am memory-profiling my game in Visual Studio, and about 60% of memory usage comes from calls to glBufferData(). I may be missing something, but shouldn't this consume GPU memory instead of system memory? I call it using GL_ARRAY_BUFFER and GL_STATIC_DRAW. I was just wondering: is there a way I can force it to use only VRAM? Visual Studio attributes the memory usage to "nvoglv32.dll".

How to implement physics with a perspective effect on Android?

I'm working on a project that looks like Paper Toss. Instead of tossing a page, you toss a coin. Suppose I have a coin in three-dimensional space with coordinates A(x, y, z). I throw the coin forward; after 1/100 of a second, the coin moves from A(x, y, z) to A'(x', y', z'). This way, I have two problems to solve:

1. Where will the coin be at time t?
2. How can I display this on a screen?

For 2, I thought about using orthographic projection and perspective projection. I'm told that OpenGL can help me, but I don't know how. How can I solve 1 and 2?
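
For problem 1, a thrown coin under gravity follows constant-acceleration projectile motion, which has a closed form — no frame-by-frame guessing needed. A sketch (the axes and gravity direction are assumptions; problem 2 is then exactly what a perspective projection matrix does when this position is rendered):

```cpp
struct Vec3 { float x, y, z; };

// Position at time t, given launch position p0 and launch velocity v0.
Vec3 positionAt(const Vec3& p0, const Vec3& v0, float t)
{
    const float g = -9.81f;                       // gravity along -Y (assumed)
    return { p0.x + v0.x * t,
             p0.y + v0.y * t + 0.5f * g * t * t,  // only Y is accelerated
             p0.z + v0.z * t };
}
```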
1
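For problem 1 above, ballistic motion under gravity alone has a closed-form answer; a small sketch (hypothetical Vec3 type, gravity along -y) of evaluating the coin's position at any time t:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Position at time t for initial position p0, initial velocity v0,
    // and constant gravity g (e.g. {0, -9.81f, 0}):
    //   p(t) = p0 + v0*t + 0.5*g*t^2
    Vec3 coinPositionAt(const Vec3& p0, const Vec3& v0, const Vec3& g, float t)
    {
        return { p0.x + v0.x * t + 0.5f * g.x * t * t,
                 p0.y + v0.y * t + 0.5f * g.y * t * t,
                 p0.z + v0.z * t + 0.5f * g.z * t * t };
    }

For problem 2, a perspective projection gives the "coin gets smaller as it flies away" effect; an orthographic one does not, which is the usual reason toss-style games pick perspective.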
Why does the pitch affect the x component of the front vector? In every tutorial for implementing a camera in OpenGL, the front vector is calculated with something like this:

    front.x = cos(pitch) * cos(yaw);
    front.y = sin(pitch);
    front.z = cos(pitch) * sin(yaw);

What I don't understand is why the pitch affects the x component. Shouldn't it just be front.x = cos(yaw)? Also, for the z component, why do we multiply cos(pitch) with sin(yaw)? I understand that the pitch and the yaw both affect the z component, but why multiply? Why not add, or something else?
1
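For reference (standard trigonometry, not from the post): these formulas are the spherical-coordinates form of a unit direction. The vertical part of a unit vector at pitch p is sin p, which leaves a horizontal projection of length cos p to be split between x and z by the yaw:

    \begin{aligned}
    \text{front}_y &= \sin(\text{pitch})\\
    \text{front}_x &= \cos(\text{pitch})\,\cos(\text{yaw})\\
    \text{front}_z &= \cos(\text{pitch})\,\sin(\text{yaw})
    \end{aligned}

Multiplying (rather than adding) is what keeps the vector unit length: front_x^2 + front_z^2 = cos^2(pitch), so the full sum is cos^2(pitch) + sin^2(pitch) = 1. With front.x = cos(yaw) alone, looking straight up would still leave a full-length horizontal component.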
How do I draw alpha-masked fragments' depth to the depth buffer? I feel like an absolute idiot for asking this, but how exactly do I safely write the depth of a fragment featuring a masked (alpha = 1) texture on its surface? So far I've literally only been doing a depth test on truly opaque geometry. Here's a logarithmic Z-buffer from GTA. And yes... it's strange that I know that and how to do it... but not alpha-mask depth. EDIT: From this, it looks like it's actually possible to write solid texture data to the depth buffer and ignore the binary transparency. Here's an image that's an example of the problem I would like to solve.
1
Clamping large content to a smaller area. I'm using OpenGL (with LWJGL) in Java, but the question is language independent. I have some region (a rectangle for simplicity) and, let's say, a big tiled map which I want to show in this area. The area is not the whole screen; I want to render something around it. I can think of two approaches, but they are not very good and are hard to do. 1. Render the whole tiled map, then render everything else (the background and the frame) on top, covering whatever leaves the window. Yes, it works, but it'd be a pain. 2. Render only the visible tiles, and only the visible portions of the border tiles. This is not possible if I, for example, render a font using an external library; there I don't have such fine control. Is there some OpenGL trick? Please, guide me.
1
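The usual OpenGL trick for exactly the situation above (a sketch, not from the post) is the scissor test: it clips every fragment to a window-space rectangle, regardless of which library issued the draw calls, so even external font renderers are clipped for free:

    #include <GL/glew.h>

    // Restrict all subsequent rendering (tiles, fonts from external
    // libraries, anything) to the given window-space rectangle.
    // Note: scissor coordinates have their origin at the bottom-left.
    void drawMapArea(int x, int y, int width, int height)
    {
        glEnable(GL_SCISSOR_TEST);
        glScissor(x, y, width, height);

        // ... render the whole tiled map here; fragments outside the
        // rectangle are discarded by the scissor test ...

        glDisable(GL_SCISSOR_TEST);
        // ... render the frame and background around the area ...
    }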
Second glBindBuffer() call crashes program on draw call. Background/issue: I'm pretty new to OpenGL and I'm trying to create a game engine (for learning purposes). My program keeps crashing on my glDrawElements() call, but only after trying to call glBindBuffer a second time. Code: below is some of the code in my engine. Basically, I have two possible objects I can draw with my engine, a triangle and a square. I first send the initial shape data down to GPU buffers within my RenderSystem's Initialize function, like so:

    // RenderSystem.cpp
    bool RenderSystem::Initialize()
    {
        // Send triangle shape data down to GPU
        MyOpenGL::InitializeBuffers(ShapeData::Triangle().vertices.size(),
                                    &ShapeData::Triangle().vertices.front(),
                                    ShapeData::Triangle().indicies.size(),
                                    &ShapeData::Triangle().indicies.front(),
                                    triangleVertexBufferID, triangleIndexBufferID);

        // Send square shape data down to GPU
        MyOpenGL::InitializeBuffers(ShapeData::Square().vertices.size(),
                                    &ShapeData::Square().vertices.front(),
                                    ShapeData::Square().indicies.size(),
                                    &ShapeData::Square().indicies.front(),
                                    squareVertexBufferID, squareIndexBufferID);

        return true;
    }

The MyOpenGL::InitializeBuffers() function code is next:

    void InitializeBuffers(int64 sizeOfGeometry, const void* GeometryDataFirstElement,
                           int64 sizeOfIndicies, const void* indicieDataFirstElement,
                           uint32 vertexBufferID, uint32 indexBufferID)
    {
        glGenBuffers(1, &vertexBufferID);
        glGenBuffers(1, &indexBufferID);

        glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);

        glBufferData(GL_ARRAY_BUFFER, (sizeof(Vector2D) * sizeOfGeometry),
                     GeometryDataFirstElement, GL_DYNAMIC_DRAW);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(uint16) * sizeOfIndicies),
                     indicieDataFirstElement, GL_DYNAMIC_DRAW);

        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 2, nullptr);
    }

Now, I want to call a draw function within my RenderSystem's update() function, which basically just calls this MyOpenGL::Draw() function, passing in whichever buffer IDs I want to draw (square or triangle):

    void Draw(uint32 vertexBufferID, uint32 indexBufferID, uint16 numOfIndices)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);

        glDrawElements(GL_TRIANGLES, numOfIndices, GL_UNSIGNED_SHORT, 0);
    }

However, after the glDrawElements call my program crashes. If I remove the glBindBuffer functions, then it works, drawing whatever buffer object I bound last. Why is my program crashing when trying to rebind to whichever object I want to draw?
1
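An observation on the code above (an assumption about the cause, not confirmed by the post): if the ID parameters are taken by value as shown, glGenBuffers writes into local copies, so the caller's triangleVertexBufferID and friends stay uninitialized and the later glBindBuffer binds garbage names. A sketch of the out-parameter fix, with a VAO per shape so the attribute setup is captured too (assumes a GL 3.0+ context; names are illustrative):

    #include <GL/glew.h>

    // Take the IDs by reference so the generated names reach the caller.
    void InitializeBuffers(GLsizeiptr vertexCount, const void* vertexData,
                           GLsizeiptr indexCount, const void* indexData,
                           GLuint& vao, GLuint& vbo, GLuint& ibo)
    {
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);              // VAO records the state set below

        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(GLfloat),
                     vertexData, GL_DYNAMIC_DRAW);

        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLushort),
                     indexData, GL_DYNAMIC_DRAW);

        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), nullptr);
    }

    // Drawing then only needs the shape's VAO:
    void Draw(GLuint vao, GLsizei indexCount)
    {
        glBindVertexArray(vao);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);
    }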
What is the best way to draw a text field in OpenGL when performance really matters? I'm creating my own GUI library in LWJGL (OpenGL for Java). I already managed to create buttons and panels, and I've also got the hover and active states of the components implemented. This question is about finding a fast way to render a complex component. The library should be very performant! So I pre-render everything I can before the game loop starts. I also want to minimize the number of VBOs, triangles, and textures needed to render a single component. My approach for creating a button is: pre-render the button appearance using a Java Graphics object and put the drawing in a power-of-two sized BufferedImage. That image is the texture. Then I've got a FilledRectangle class. That class gets a BufferedImage (the texture) and an x, y, width and height, and creates a VBO that just renders the button. For the hover and active states I do exactly the same thing, except that another drawing is made with the Graphics object, so I just create another VBO for each state. When it comes to rendering components that change their appearance, I can't pre-render much. I want to create a TextField. I pre-render the background (the box of the text field) and I also pre-render the cursor. The text has to be rendered during the game, and I have the following options: 1. Render the text using a Java Graphics object. 2. Pre-render a spritesheet (a character map) with all characters in a specific font, and then use that texture to create a new VBO every time the text changes. Note that the box of the component and the cursor are separate VBOs, so when the text changes I only have to deal with text rendering in a fast way. My questions: It's very easy for me to draw things using a Java Graphics object, but isn't this too slow to use during the game loop? And is it OK to recreate a VBO for every visible change?
1
Why does this order of quaternion multiplication not introduce roll into my FPS-style character controller? I'm working on an OpenGL-based project (in C++), employing quaternions to rotate my camera. I first tried:

    cameraOrientation = cameraOrientation * framePitch * frameYaw;

This accumulated an undesired roll in my camera controller which made rotations unusable. I found a post on Stack Exchange which suggested this reordering of operations:

    cameraOrientation = framePitch * cameraOrientation * frameYaw;

which completely solved the accumulation of roll. While I'm comfortable with matrix multiplication, I can't seem to understand why this removes the roll accumulation. Does anybody have any articles or images so I can grok what's happening here? It feels weird not to understand such a fundamental operation in my project. Thanks!
1
OpenGL: blending GUI textures. I'm currently creating a menu for my project and I'm trying to get the textures to blend so I'm only left with the actual image on the texture and not the background. The problem is that the whole texture is somewhat transparent; it's not just removing the background. My RGBA texture looks like the image shown, and the black background needs to be removed from it. I was using GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for my blend function, but it wasn't blending anything. I changed to GL_ONE, GL_ONE and now I'm at where I am now: you can see the text is there, but it's also transparent, while the background has been removed, which is good. This is how I'm drawing my button. The world behind it is drawn after (I've tried switching the order; it didn't change anything) and is drawn using VBOs, whereas the buttons are drawn in immediate mode:

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);

    // get button pos
    Point3<float> pos = button->getPos();
    // get button dimensions
    int width = button->getWidth();
    int height = button->getHeight();

    // bind button texture and draw quad
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, button->getState() ? button->getDownTex()->getTextureID()
                                                    : button->getUpTex()->getTextureID());

    glBegin(GL_QUADS);
    glColor4ub(255, 255, 255, 255);
    glTexCoord2i(0, 1); glVertex3f(pos.x, pos.y, 0.0f);
    glTexCoord2i(1, 1); glVertex3f(pos.x + width, pos.y, 0.0f);
    glTexCoord2i(1, 0); glVertex3f(pos.x + width, pos.y + height, 0.0f);
    glTexCoord2i(0, 0); glVertex3f(pos.x, pos.y + height, 0.0f);
    glEnd();

I'm using SDL to load the texture:

    SDL_Surface* image = IMG_Load(textureName);

    data.m_w = image->w;
    data.m_h = image->h;
    data.m_bitsPerPixel = image->format->BitsPerPixel;
    data.m_alpha = image->format->alpha;

    int colourMode = image->format->BytesPerPixel;
    if (colourMode == 4) internalFormat = GL_RGBA;
    if (colourMode == 3) internalFormat = GL_RGB;
    if (colourMode == 1) internalFormat = GL_LUMINANCE;

    if (SDL_BYTEORDER == SDL_LIL_ENDIAN)
    {
        if (colourMode == 4) format = GL_BGRA;
        else format = GL_BGR;
    }
    else
    {
        if (colourMode == 4) format = GL_RGBA;
        else format = GL_RGB;
    }

    GLuint texture = -1; // create texture handle
    glGenTextures(1, &texture); // gen texture
    bindTexture(GL_TEXTURE_2D, texture);

    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, image->w, image->h, 0,
                 format, GL_UNSIGNED_BYTE, image->pixels); // normal texture
1
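A note on the symptom above (editorial, not from the post): GL_ONE, GL_ONE is additive blending, so everything becomes semi-transparent brightening; standard "straight" alpha blending only works if the image's black background actually carries alpha = 0. A sketch of both halves of that, assuming RGBA8 pixel data; keyOutBlack is a hypothetical load-time helper for images whose background is black but opaque:

    #include <GL/glew.h>

    // Standard alpha blending for GUI quads: the texture's own alpha
    // channel decides per-pixel transparency.
    void beginGuiBlending()
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }

    // If the source image lacks useful alpha, one option is to key it at
    // load time: make near-black pixels fully transparent.
    void keyOutBlack(unsigned char* rgba, int pixelCount, unsigned char threshold = 8)
    {
        for (int i = 0; i < pixelCount; ++i) {
            unsigned char* p = rgba + 4 * i;
            if (p[0] < threshold && p[1] < threshold && p[2] < threshold)
                p[3] = 0;
        }
    }

With straight alpha blending, draw order matters: the opaque world first, the GUI last, so the blend reads the world's colors already in the framebuffer.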
How do you determine which object surface the user is pointing at with LWJGL? The title pretty much says it all. I'm working on a simple "let's get used to LWJGL" project involving manipulation of a Rubik's Cube, and I can't figure out how to tell which side/square the user is pointing at.
1
Keeping the ratio the same across devices in a fixed-screen game. My game is an Android game using OpenGL ES 2.0 (but this question could apply to any platform). I have read many questions on here regarding ratio management, and also many tutorials outside this site, but I'm still really confused about how to manage this. My game is a fixed-screen 2D platformer. By fixed screen, I mean the player sees the whole screen at once and the screen doesn't scroll; all action takes place on this one screen (kind of like Bubble Bobble). Therefore scrolling is not possible, as we need to see the whole play area. On my development device, I've written everything to look perfect. What I currently do on other devices is resize my GL viewport so that I maintain the ratio. Obviously, this has its own problem: it wastes screen real estate. I would accept this reluctantly if I couldn't find a better solution; however, Google's documentation states that it's not allowed (of sorts), see: "App uses the whole screen in both orientations and does not letterbox to account for orientation changes." So, finally, I just stretched it out to fit the screen. This takes the whole screen, but frankly looks a little naff, as everything is stretched. Am I out of options? I see some similar fixed-screen games on the Play Store and they seem to look identical on different screens (nothing is stretched, and there doesn't appear to be any extra space), but I have absolutely no idea how they achieve this. I would love to hear from someone who has dealt with this problem or has ideas on how best to proceed.
1
Instancing meshes messing up scene lighting. I've been rendering a scene (some objects over a large field of grass) to test shadow mapping, which is working fine. But when I use instancing to "gain" performance, I not only get a decrease in performance (losing 2-4 ms per frame!), but my lighting also gets messed up. Comparison: scene without instancing vs. scene with instancing. The shadows are still rendered fine, but the grass lighting is completely off, alternating between dark and bright grass. The small "scene" above the grass is also very dark (I use the same shader as for the grass, for convenience). Even the debug text in the upper left corner isn't rendered anymore... I use the same shader for both scenes, except I use a uniform to let the shader know when I'm using instancing and when I'm not. Here is the lighting vertex shader:

    #version 330 core
    layout (location = 0) in vec3 position;
    layout (location = 1) in vec2 texCoords;
    layout (location = 2) in vec3 normal;
    layout (location = 3) in mat4 a_model;

    out vec2 TexCoords;
    out VS_OUT {
        vec3 FragPos;
        vec3 Normal;
        vec2 TexCoords;
        vec4 FragPosLightSpace;
    } vs_out;

    uniform bool instanciate;
    uniform mat4 projection;
    uniform mat4 view;
    uniform mat4 model;
    uniform mat4 lightSpaceMatrix;
    uniform float u_time;

    void main()
    {
        mat4 model1;
        if (instanciate)
            model1 = a_model;
        else
            model1 = model;

        // Grass movement
        float time = u_time * 8.5f * gl_InstanceID;
        vec3 pos = position;
        float fact = 7 * position.y;
        float sx = sin(pos.x * 32.0 + time * 4.0) * fact * 0.5 + 0.5;
        float cy = cos(pos.y * 32.0 + time * 4.0) * fact * 0.5 + 0.5;
        vec3 displacement = vec3(sx, cy, sx * cy);
        vec3 normalN = normal.xyz * 2.0 - 1.0;
        pos += normalN * displacement * vec3(0.06, 0.06, 0.06) * vec3(8.0, 3.0, 1.0);

        gl_Position = projection * view * model1 * vec4(pos, 1.0f);

        // Used for shadow mapping
        vs_out.FragPos = vec3(model1 * vec4(position, 1.0));
        vs_out.Normal = transpose(inverse(mat3(model1))) * normal;
        vs_out.TexCoords = texCoords;
        vs_out.FragPosLightSpace = lightSpaceMatrix * vec4(vs_out.FragPos, 1.0);
    }

The fragment shader didn't change when I implemented instancing, so I don't think it's relevant to put it here. Also, here is how I render the scene:

    glUniform1i(glGetUniformLocation(programMesh, "instanciate"), GL_TRUE);

    // The buffer was set up before with the matrices
    glEnableVertexAttribArray(3);
    glBindBuffer(GL_ARRAY_BUFFER, modelbuffer); // model matrices buffer
    glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 4 * vec4Size, (GLvoid*)0);
    glEnableVertexAttribArray(4);
    glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, 4 * vec4Size, (GLvoid*)(vec4Size));
    glEnableVertexAttribArray(5);
    glVertexAttribPointer(5, 4, GL_FLOAT, GL_FALSE, 4 * vec4Size, (GLvoid*)(2 * vec4Size));
    glEnableVertexAttribArray(6);
    glVertexAttribPointer(6, 4, GL_FLOAT, GL_FALSE, 4 * vec4Size, (GLvoid*)(3 * vec4Size));
    glVertexAttribDivisor(3, 1);
    glVertexAttribDivisor(4, 1);
    glVertexAttribDivisor(5, 1);
    glVertexAttribDivisor(6, 1);

    double instanceTime = elapsedTime;
    glUniform1f(u_time, instanceTime);
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertices.size(), 100);

    glUniform1i(glGetUniformLocation(programMesh, "instanciate"), GL_FALSE);

I don't really know where the problem could be. I can give more code if needed. Thanks in advance for any help.
1
Is Ada suitable for game development? Would Ada be a practical language for game development (at least for the PC; I understand there aren't compilers for the PlayStation and other consoles)? It has bindings to OpenGL, it has decent performance, and it's less error prone than C++. Please feel encouraged to support your point with relevant example projects you know about.
1
LWJGL 3.0.0 glMapBuffer. I am currently working on a project, and usually in C I use the equivalent of this function:

    ByteBuffer glMapBuffer(int target, int access)

Usage:

    FloatBuffer buffer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();

But in Java (LWJGL 3.0.0) this function returns null, and for this reason it throws a NullPointerException. Does anyone have any idea how to use this function in Java? There are several overloads:

    glMapBuffer(int target, int access)
    glMapBuffer(int target, int access, ByteBuffer old_buffer)
    glMapBuffer(int target, int access, long length, ByteBuffer old_buffer)

I hope this is specific enough, thank you for your help :D
1
Trouble getting shadow maps working. I am trying to implement shadow maps in my game following this tutorial. For some reason, the light is not being occluded. In the above screenshot, the big white sprite in the foreground is a rendering of what the occlusion map looks like. In the background, you can see the result does not produce any shadows. It's hard to see, but the top left shows the shadow map (an enlarged version of the shadow map is also shown). The occlusion map and the shadow map seem to be generated correctly, so it must be an issue with how I take them into account when rendering the light. Here is the fragment shader for the light:

    #version 330
    uniform sampler2D uDiffuseTexture;
    uniform sampler2D uNormalsTexture;
    uniform sampler2D uShadowMap;
    uniform vec4 uLightColor;
    uniform float uConstAtten;
    uniform float uLinearAtten;
    uniform float uQuadradicAtten;
    uniform float uColorIntensity;
    uniform vec4 uAmbientColor;

    in vec2 TexCoords;
    in vec2 GeomSize;
    out vec4 FragColor;

    float sample(vec2 coord, float r)
    {
        return step(r, texture2D(uShadowMap, coord).r);
    }

    float occluded()
    {
        float PI = 3.14;
        vec2 normalized = TexCoords.st * 2.0 - 1.0;
        float theta = atan(normalized.y, normalized.x);
        float r = length(normalized);
        float coord = (theta + PI) / (2.0 * PI);
        vec2 tc = vec2(coord, 0.0);
        float center = sample(tc, r);
        float sum = 0.0;
        float blur = (1.0 / GeomSize.x) * smoothstep(0.0, 1.0, r);
        sum += sample(vec2(tc.x - 4.0 * blur, tc.y), r) * 0.05;
        sum += sample(vec2(tc.x - 3.0 * blur, tc.y), r) * 0.09;
        sum += sample(vec2(tc.x - 2.0 * blur, tc.y), r) * 0.12;
        sum += sample(vec2(tc.x - 1.0 * blur, tc.y), r) * 0.15;
        sum += center * 0.16;
        sum += sample(vec2(tc.x + 1.0 * blur, tc.y), r) * 0.15;
        sum += sample(vec2(tc.x + 2.0 * blur, tc.y), r) * 0.12;
        sum += sample(vec2(tc.x + 3.0 * blur, tc.y), r) * 0.09;
        sum += sample(vec2(tc.x + 4.0 * blur, tc.y), r) * 0.05;
        return sum * smoothstep(1.0, 0.0, r);
    }

    float calcAttenuation(float distance)
    {
        float linearAtten = uLinearAtten * distance;
        float quadAtten = uQuadradicAtten * distance * distance;
        float attenuation = 1.0 / (uConstAtten + linearAtten + quadAtten);
        return attenuation;
    }

    vec3 calcFragPosition(void)
    {
        return vec3(TexCoords * GeomSize, 0.0);
    }

    vec3 calcLightPosition(void)
    {
        return vec3(GeomSize / 2.0, 1.0);
    }

    float calcDistance(vec3 fragPos, vec3 lightPos)
    {
        return length(fragPos - lightPos);
    }

    vec3 calcLightDirection(vec3 fragPos, vec3 lightPos)
    {
        return normalize(lightPos - fragPos);
    }

    vec4 calcFinalLight(vec2 worldUV, vec3 lightDir, float attenuation)
    {
        float diffuseFactor = dot(normalize(texture2D(uNormalsTexture, worldUV).rgb), lightDir);
        vec4 diffuse = vec4(0.0);
        vec4 lightColor = uLightColor * uColorIntensity;
        if (diffuseFactor > 0.0) {
            diffuse = vec4(texture2D(uDiffuseTexture, worldUV.xy).rgb, 1.0);
            diffuse *= diffuseFactor * lightColor * diffuseFactor;
        } else {
            discard;
        }
        return (uAmbientColor + diffuse * lightColor) * attenuation;
    }

    void main(void)
    {
        vec3 fragPosition = calcFragPosition();
        vec3 lightPosition = calcLightPosition();

        float distance = calcDistance(fragPosition, lightPosition);
        float attenuation = calcAttenuation(distance);

        vec2 worldPos = gl_FragCoord.xy / vec2(1024, 768);
        vec3 lightDir = calcLightDirection(fragPosition, lightPosition);
        lightDir = (lightDir + 0.5) * 0.5;

        float atten = calcAttenuation(distance);
        FragColor = calcFinalLight(worldPos, lightDir, atten) * vec4(vec3(1.0), occluded());
    }
1
Get fragment from mouse position. I have a painting app for texture artists that I am working on. I am able to paint on a flat canvas that updates the texture of a 3D object in an object viewer. Now I want to be able to paint directly onto the 3D model. One way I can think of is to get the UV coordinate from the mouse position and use that as the position to paint onto my 2D canvas, which updates the 3D model's texture. Oh, and only one object at a time is active, so that should make things a little simpler. Is this the right approach? If it is, how should I start? Or is there a simpler, better way of painting directly onto a 3D model? How does ZBrush do it?
1
Difference between glDrawArrays and glDrawElements. While refreshing my mind on OpenGL ES, I came across glDrawArrays and glDrawElements. I understand how they are used and sort of understand why they are different. What I do not seem to understand is how glDrawElements can save draw calls (saving draw calls is a description mentioned by most of the books I have read, hence my mentioning it here). Imagine a simple scenario in which I try to draw a square using 2 triangles. I would need a set of 6 vertices using glDrawArrays, while with glDrawElements I would need only 4, in addition to an index array that has 6 elements. Given the above, here is what I do not understand: 1. How could glDrawElements save draw calls if it still needs to use the index array (6 indices, in the case of a square) to index into my vertex array of 4 elements (6 times)? In other words, does glDrawElements still need a total of 6 draw calls, just like glDrawArrays? 2. How would using glDrawElements save space if one still needs two arrays, namely one for the vertices and one for the indices? 3. In the case of drawing a square from 2 triangles, for simplicity, how many draw calls do glDrawElements (4 items in the vertex array and 6 items in the index array) and glDrawArrays (6 items in the vertex array only) each need? Thanks.
1
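A concrete sketch of the square case above for reference (standard GL usage; buffer/shader setup omitted and assumed done): both versions are exactly one draw call each; glDrawElements simply reads 6 small indices instead of duplicating 2 full vertices.

    #include <GL/glew.h>

    // Square as two triangles.
    // glDrawArrays path: 6 vertices, shared corners duplicated.
    static const GLfloat quadVerts6[6 * 2] = {
        -0.5f, -0.5f,   0.5f, -0.5f,   0.5f,  0.5f,    // triangle 1
        -0.5f, -0.5f,   0.5f,  0.5f,  -0.5f,  0.5f };  // triangle 2

    // glDrawElements path: 4 unique vertices + 6 tiny indices.
    static const GLfloat quadVerts4[4 * 2] = {
        -0.5f, -0.5f,   0.5f, -0.5f,   0.5f,  0.5f,  -0.5f,  0.5f };
    static const GLushort quadIndices[6] = { 0, 1, 2, 0, 2, 3 };

    // One draw call each (with the respective VBO/IBO already bound):
    void drawQuadArrays()   { glDrawArrays(GL_TRIANGLES, 0, 6); }
    void drawQuadElements() { glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr); }

    // Size: 6*2 floats = 48 bytes vs. 4*2 floats + 6 shorts = 44 bytes here;
    // the gap grows quickly for real meshes where each vertex is shared by
    // many triangles, and indexed drawing also lets the GPU reuse cached
    // transformed vertices.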
Lasers in a 3D space game. I am creating a little 3D space shooter and am somewhat unsure how to implement the laser beams. I'm thinking of something along the lines of Star Wars and the like, where you mostly shoot rather short laser beams, but very many of them. Should I create a "tube" for each of the beams, render them via instancing, and give them a better look just with the shader? How would you go about this? I was rather perplexed that there was no good tutorial on this matter out there.
1
Geometry shader wireframe not rendering correctly (GLSL/OpenGL/C++). I'm trying to make a tool for skinning 3D models, and as part of that, I need to show faces wireframed, making use of the geometry shader stage. I'm following the approach suggested here and here. My problem, however, is that it ends up looking like the image shown, where some of the lines get thicker when the faces are oriented in a specific way. This is my geometry shader (the vertex shader just passes vertices through, so there's no need to show it):

    #version 400

    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    noperspective out vec3 gDist;

    void main()
    {
        // 800,600 window size (make uniform later)
        vec2 p0 = vec2(800, 600) * gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
        vec2 p1 = vec2(800, 600) * gl_in[1].gl_Position.xy / gl_in[0].gl_Position.w;
        vec2 p2 = vec2(800, 600) * gl_in[2].gl_Position.xy / gl_in[0].gl_Position.w;

        vec2 v0 = p2 - p1;
        vec2 v1 = p2 - p0;
        vec2 v2 = p1 - p2;
        float area = abs(v1.x * v2.y - v1.y * v2.x);

        gDist = vec3(area / length(v0), 0, 0);
        gl_Position = gl_in[0].gl_Position;
        EmitVertex();

        gDist = vec3(0, area / length(v1), 0);
        gl_Position = gl_in[1].gl_Position;
        EmitVertex();

        gDist = vec3(0, 0, area / length(v2));
        gl_Position = gl_in[2].gl_Position;
        EmitVertex();

        EndPrimitive();
    }

and the fragment shader:

    #version 400

    noperspective in vec3 gDist;

    const vec4 wire_color = vec4(0.0, 0.5, 0.0, 1);
    const vec4 fill_color = vec4(1, 1, 1, 0);

    void main()
    {
        float d = min(gDist[0], min(gDist[1], gDist[2]));
        float i = exp2(-2 * d * d);
        gl_FragColor = i * wire_color + (1.0 - i) * fill_color;
    }

So what am I doing wrong here? I feel like I'm missing something. Is anyone familiar with this?
1
Rotate an OpenGL quad around its center. Sorry, I searched, but couldn't figure out how to apply the other answers to similar questions to my code. I want to rotate a quad; my code is the following:

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, player_texture);
    glBegin(GL_QUADS);
    glColor4ub(255, 255, 255, 255);
    glTexCoord2d(0, 0); glVertex2f(hero.xValue(), hero.yValue());
    glTexCoord2d(1, 0); glVertex2f(hero.xValue() + hero.lValue(), hero.yValue());
    glTexCoord2d(1, 1); glVertex2f(hero.xValue() + hero.lValue(), hero.yValue() + hero.hValue());
    glTexCoord2d(0, 1); glVertex2f(hero.xValue(), hero.yValue() + hero.hValue());
    glEnd();
    glDisable(GL_TEXTURE_2D);

I'm not sure how to use the glTranslatef function. Do I tell it what the center of my object is? In that case it would be hero.xValue() + hero.lValue() / 2 (x) and hero.yValue() + hero.hValue() / 2 (y). I know I have to use both glTranslatef and glRotatef, by the way. When I try, setting glTranslatef with the values mentioned above, I get a texture floating far from where the hero actually is. Any help on this matter will be appreciated, thank you. (Obs: hValue = height, lValue = length.)

Edit: new information to show the changes I made after user55564's comment. I call init() once; there I set up basic OpenGL and SDL stuff, among them:

    glViewport(0, 0, WIDTH, HEIGHT);
    glMatrixMode(GL_MODELVIEW);

Then I call my draw function, which contains:

    while (running) {
        glLoadIdentity();
        // ... stuff not related to drawing ...
        glClear(GL_COLOR_BUFFER_BIT);
        glClearColor(0, 0, 0, 0);

        glPushMatrix();
        glOrtho(0, WIDTH, HEIGHT, 0, -1, 1);

        // draws the player
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, player_texture);
        glTranslatef(hero.xValue(), hero.yValue(), 0);
        glRotatef(0, 0, 0, 1);
        glTranslatef(-(hero.xValue() + hero.lValue() / 2), -(hero.yValue() + hero.hValue() / 2), 0);
        glBegin(GL_QUADS);
        glColor4ub(255, 255, 255, 255);
        glTexCoord2d(0, 0); glVertex2f(hero.xValue(), hero.yValue());
        glTexCoord2d(1, 0); glVertex2f(hero.xValue() + hero.lValue(), hero.yValue());
        glTexCoord2d(1, 1); glVertex2f(hero.xValue() + hero.lValue(), hero.yValue() + hero.hValue());
        glTexCoord2d(0, 1); glVertex2f(hero.xValue(), hero.yValue() + hero.hValue());
        glEnd();
        glDisable(GL_TEXTURE_2D);
        glPopMatrix();

        // then I use glPushMatrix again to draw the enemies and the health
        // bars of both enemies and the player (these don't need rotation)
        glPushMatrix();
        glOrtho(0, WIDTH, HEIGHT, 0, -1, 1);
        // ...
    }
1
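For reference, the usual fixed-function recipe for the question above (a sketch with a hypothetical angle parameter; remember the last transform specified is applied to the vertices first, so the sequence reads bottom-up):

    #include <GL/gl.h>

    // Rotate a quad around its own center by 'angle' degrees.
    void drawRotatedQuad(float x, float y, float w, float h, float angle)
    {
        float cx = x + w / 2.0f;
        float cy = y + h / 2.0f;

        glPushMatrix();
        glTranslatef(cx, cy, 0.0f);     // 3) move the pivot back into place
        glRotatef(angle, 0, 0, 1);      // 2) rotate around the origin
        glTranslatef(-cx, -cy, 0.0f);   // 1) move the quad's center to the origin

        glBegin(GL_QUADS);              // vertices stay in their normal coordinates
        glVertex2f(x,     y);
        glVertex2f(x + w, y);
        glVertex2f(x + w, y + h);
        glVertex2f(x,     y + h);
        glEnd();
        glPopMatrix();
    }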
How do I access a uniform array with a float as the index in GLSL? I'm trying to do basic multitexturing of terrain in OpenGL. I'm building the terrain from an image representing different elements (beach, water, jungle...). I'm trying to map each color of this image to a texture in my game: green means grassTexture (id 0), blue means waterTexture (id 1), etc. My fragment shader contains a uniform array containing all the textures I use (grass, water, ...):

    // Terrain.frag
    uniform sampler2D terrainTextures[50];

For each vertex of the terrain, I pass a float in_TextureId corresponding to a texture id:

    // Terrain.vert
    layout (location = 0) in vec3 in_Vertex;
    layout (location = 1) in vec4 in_Color;
    layout (location = 2) in vec3 in_Normal;
    layout (location = 3) in vec2 in_TexCoord;
    layout (location = 4) in float in_TextureId;

Then I'm trying to display the correct texture for a vertex with this textureId in the fragment shader:

    outputColor = texture(terrainTextures[int(textureId)], texCoord);

With this line, I get the first result shown. If I modify the fragment shader with the following code:

    highp int tid = int(textureId);
    if (tid == 2) {
        outputColor = texture(terrainTextures[int(textureId)], texCoord);
    } else if (tid == 1) {
        outputColor = texture(terrainTextures[int(textureId)], texCoord);
    } else {
        outputColor = texture(terrainTextures[int(textureId)], texCoord);
    }

it works better. I can't understand why it works with this code; I'm doing exactly the same thing as in the first example. Maybe it's not the right way to access a uniform array?
1
2D sidescroller camera. I'm using OpenGL. For my tiles, I'm using a display list, and I'm just using immediate mode for my player (for now). When I move the player, I want to center him horizontally in the window, but allow him to jump around on the y axis without the camera following him. But the problem is that I can't figure out how to center the player in the viewport! Here is the player update method:

    public void update() {
        if (Keyboard.isKeyDown(Keyboard.KEY_D)) {
            World.scrollx += Constants.scrollSpeed;
            setCurrentSprite(Sprite.PLAYER_RIGHT);
        }
        if (Keyboard.isKeyDown(Keyboard.KEY_A)) {
            World.scrollx -= Constants.scrollSpeed;
            setCurrentSprite(Sprite.PLAYER_LEFT);
        }
        move(Constants.WIDTH / 2 - World.scrollx, getY());
    }

World.scrollx and World.scrolly are variables that I increase/decrease to move the tiles. move() is just a method that sets the player position, nothing else. I render the player at his current coordinates like this:

    public void render() {
        glBegin(GL_QUADS);
        Shape.renderSprite(getX(), getY(), getCurrentSprite());
        glEnd();
    }

Shape.renderSprite is this:

    public static void renderSprite(float x, float y, Sprite sprite) {
        glTexCoord2f(sprite.x, sprite.y + Spritesheet.tiles.uniformSize());
        glVertex2f(x, y);
        glTexCoord2f(sprite.x + Spritesheet.tiles.uniformSize(), sprite.y + Spritesheet.tiles.uniformSize());
        glVertex2f(x + Constants.PLAYER_WIDTH, y);
        glTexCoord2f(sprite.x + Spritesheet.tiles.uniformSize(), sprite.y);
        glVertex2f(x + Constants.PLAYER_WIDTH, y + Constants.PLAYER_HEIGHT);
        glTexCoord2f(sprite.x, sprite.y);
        glVertex2f(x, y + Constants.PLAYER_HEIGHT);
    }

Pretty simple, I just render the quad at the player's current position. This is how I actually render everything:

    public void render(float scrollx, float scrolly) {
        Spritesheet.tiles.bind();
        glPushMatrix();
        glTranslatef(scrollx, scrolly, 0);
        glCallList(tileID);
        glPopMatrix();
        player.render();
    }

This is the part I'm confused about. I translate the tiles according to the scrollx and scrolly variables, and then I render the player at his current position. But the player moves faster than the tiles scroll, and he can escape out the side of the screen! How do I center the player against moving tiles? Thanks for any help!
1
Failing to understand how to use glm::unProject (OpenGL 4.3). Situation: I use OpenGL 4.3, FreeGLUT 3.0, and the GLM library. Let's say I have a simple 2D object (a ball) that moves according to the simple equations:

    x = x_0 + v_0 * t * cosf(alpha)
    y = y_0 + v_0 * t * sinf(alpha) - 0.5 * g * t * t

So I can be sure what x and y are at every time step. My GLUT mouse callback is:

    void onMouse(int button, int state, int mx, int my)
    {
        if (state != GLUT_DOWN)
            return;

        view = glm::lookAt(glm::vec3(0.0, 0.0, 10.0), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0, 1, 0));
        projection = glm::perspective(45.0f, 1.0f * 640 / 480, 0.1f, 10.0f);
        glm::vec4 viewport = glm::vec4(0, 0, 640, 480);
        glm::vec3 wincoord = glm::vec3(mx, 480 - my, 0.0f);
        glm::vec3 objcoord = glm::unProject(wincoord, view, projection, viewport);
    }

So, when the ball moves and I click on it, I would expect to get (objcoord.x, objcoord.y) very close to the center of the ball (x, y). The coordinates of the ball itself (x, y) are calculated correctly. But when I click on the ball, (objcoord.x, objcoord.y) are not even close to (x, y). I always get very small values like (0.0012, 0.099), wherever I click on the screen. I feel it might be due to some Z (depth) issue, but I don't have enough knowledge here and am asking for help now.
1
Multisampled textures. I have some doubts about multisampled textures. In the fragment shader, how do I access the multiple samples? We use glTexImage2DMultisample instead of glTexImage2D, so how do we upload texture data? I want to use the default FBO for this.
1
Simple OpenGL program, major slowdown at high resolution. I have created a small OpenGL 3.3 (core) program using FreeGLUT. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS, and that's it. The problem is that I face a big slowdown in FPS when I make my window large (i.e., above 1920x1080). I have monitored GPU usage when in full screen, and it shows a GPU load of nearly 100% and a memory controller load of 85%. At 600x600, these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M, btw). Below are my shaders.

First pass VS:

    #version 330 core

    uniform mat4 ModelViewMatrix;
    uniform mat3 NormalMatrix;
    uniform mat4 MVPMatrix;
    uniform float scale;

    layout(location = 0) in vec3 in_Position;
    layout(location = 1) in vec3 in_Normal;
    layout(location = 2) in vec2 in_TexCoord;

    smooth out vec3 pass_Normal;
    smooth out vec3 pass_Position;
    smooth out vec2 TexCoord;

    void main(void)
    {
        pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
        pass_Normal = NormalMatrix * in_Normal;
        TexCoord = in_TexCoord;
        gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
    }

FS:

    #version 330 core

    uniform sampler2D inSampler;

    smooth in vec3 pass_Normal;
    smooth in vec3 pass_Position;
    smooth in vec2 TexCoord;

    layout(location = 0) out vec3 outPosition;
    layout(location = 1) out vec3 outDiffuse;
    layout(location = 2) out vec3 outNormal;

    void main(void)
    {
        outPosition = pass_Position;
        outDiffuse = texture(inSampler, TexCoord).xyz;
        outNormal = pass_Normal;
    }

Second pass VS:

    #version 330 core

    uniform float scale;

    layout(location = 0) in vec3 in_Position;

    void main(void)
    {
        gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
    }

FS:

    #version 330 core

    struct Light {
        vec3 direction;
    };

    uniform ivec2 ScreenSize;
    uniform Light light;
    uniform sampler2D PositionMap;
    uniform sampler2D ColorMap;
    uniform sampler2D NormalMap;

    out vec4 out_Color;

    vec2 CalcTexCoord(void)
    {
        return gl_FragCoord.xy / ScreenSize;
    }

    vec4 CalcLight(vec3 position, vec3 normal)
    {
        vec4 DiffuseColor = vec4(0.0);
        vec4 SpecularColor = vec4(0.0);

        vec3 light_Direction = normalize(light.direction);
        float diffuse = max(0.0, dot(normal, light_Direction));

        if (diffuse > 0.0) {
            DiffuseColor = diffuse * vec4(1.0);

            vec3 camera_Direction = normalize(-position);
            vec3 half_vector = normalize(camera_Direction + light_Direction);
            float specular = max(0.0, dot(normal, half_vector));
            float fspecular = pow(specular, 128.0);
            SpecularColor = fspecular * vec4(1.0);
        }
        return DiffuseColor + SpecularColor + vec4(0.1);
    }

    void main(void)
    {
        vec2 TexCoord = CalcTexCoord();
        vec3 Position = texture(PositionMap, TexCoord).xyz;
        vec3 Color = texture(ColorMap, TexCoord).xyz;
        vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

        out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
    }

Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of FreeGLUT? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.
1
How do you display non-cutout transparent 2D textures with a depth buffer? (OpenGL) I've been able to get my 2D renderer to display transparent cutout textures by testing the alpha of a fragment and discarding it if it is less than 1 (or any fraction, really). The problem is that I want to support translucent textures. I currently sort my sprites by the texture they use, so that I can minimize texture changes. The only way I can think of to get translucency working properly is to scrap that and sort only by z order, but I don't want to throw away the optimization I already did. Is there any way to do both? Does rendering only in 2D simplify the problem at all? I was hoping to support translucent sprites, and my font renderer produces translucent font textures, so I can't use cutouts alone. EDIT: After doing some research, it seems there really is no easy way to do this (depth peeling for a 2D renderer seems a little overkill). I'm going to compromise by having my renderer hold two different sets of sprites: cutouts and translucents. I can draw the cutouts first in whatever order I want, making full use of texture atlases. The translucent textures, however, will need to be drawn in z order, ignoring atlases. If anyone can tell me a better way, I'm all ears.
1
Making a HUD/GUI with OpenGL (LWJGL). I'm at the stage in my game's development where I need to make a HUD or GUI. I've never gotten to this part before, so I don't know how it's done. I tried rendering a simple quad at a fixed position on the screen, but there's a problem: to make my camera work with orthographic projection, I use this:

    public void lookThrough() {
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GL11.glOrtho(position.x, position.x + Display.getDisplayMode().getWidth() * zoom,
                     position.y + Display.getDisplayMode().getHeight() * zoom, position.y, 1, -1);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
    }

I don't see how I would be able to make something fixed on the screen using this method. Is there any way around this? Thanks :)
1
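A common pattern for the HUD question above (a sketch, not from the post): after rendering the world through the camera, swap in a plain screen-space ortho projection and draw the HUD in pixel coordinates, so it ignores the camera position and zoom entirely. Shown as C-style GL for brevity; the GL11 calls in LWJGL map one-to-one:

    #include <GL/gl.h>

    // Call after the world has been rendered with the camera's lookThrough().
    void beginHud(int screenWidth, int screenHeight)
    {
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        // Screen-space ortho: (0,0) at the top-left, one unit = one pixel.
        glOrtho(0, screenWidth, screenHeight, 0, 1, -1);
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();
        // ... draw HUD quads here in pixel coordinates ...
    }

    // Restore the camera matrices afterwards.
    void endHud()
    {
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPopMatrix();
    }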
OpenGL ES screen-to-world coordinates. I am currently attempting to convert my screen coordinates to world coordinates, to be able to interact with objects. I am using GLM and unProject to try to achieve this; so far this is my code:

    glm::vec4 viewPort = glm::vec4(0.0f, 0.0f, width, height);
    glm::mat4 tmpView = sceneCamera->updateView();
    glm::mat4 tmpProj = sceneCamera->updateProjection();
    glm::vec3 screenPos = glm::vec3(touchPosition.x, height - touchPosition.y - 1.0f, 1.0f);
    glm::vec3 worldPos = glm::unProject(screenPos, tmpView, tmpProj, viewPort);

    Renderer->SceneObjects[120]->translateX(worldPos.x);
    Renderer->SceneObjects[120]->translateY(worldPos.y);

I am trying to get a sprite to match the position where I tap. The issue is that the further down the screen I tap, the further the sprite overshoots, and the same happens horizontally. So if I tap 2/3 of the way down the screen, the sprite will overshoot the bottom of the screen.
1
Am I allowed to make my Minecraft clone open source? I'm developing, in my spare time, a game like Minecraft. In fact, it isn't just "like Minecraft"; I'm trying to make it as close as possible a copy of it (meant as an exercise for myself at the age of 16, and simply because it is fun for me). Of course, I'm not copying the code using the Minecraft Coder Pack (MCP); I started the game from scratch in Java using OpenGL. So, my question is: am I allowed to put my source code online on a public source-code hosting site like GitHub, Google Code, et cetera (which makes my code open source, because I don't want to pay for a private host)? Of course, I don't want to sell the game, because the game is Notch's. A detail which might be important is that I'm using a custom texture pack (so, not the one that ships with the real Minecraft). If it is allowed, are there any rules? I took a look at this page, but it doesn't seem to say anything about this: http://www.minecraft.net/terms. Edit: There is a game called Terasology (started under the name Blockmania) by Begla. That is a nice project, but it is not meant to be as close as possible to Minecraft. That project is open source.
1
std::map for storing static const objects. I am making a game similar to Minecraft, and I am trying to find a way to keep a map of Block objects sorted by their id. This is almost identical to the way Minecraft does it: they declare a bunch of static final Block objects and initialize them, and then the constructor of each block puts a reference to that block into whatever the Java equivalent of a std::map is, so there is a central place to get ids and the Blocks with those ids. The problem is that I am making my game in C++ and trying to do the exact same thing. In Block.h, I declare the Blocks like so:

    // Block.h
    static const Block Vacuum;
    static const Block Test;

And in Block.cpp I initialize them like so:

    // Block.cpp
    const Block Block::Vacuum = Block("Vacuum", 0, 0);
    const Block Block::Test = Block("Test", 1, 0);

The Block constructor looks like this:

    Block::Block(std::string name, uint16 id, uint8 tex)
    {
        // Check for repeat ids
        if (IdInUse(id))
        {
            fprintf(stderr, "Block id %u is already in use!", (uint32)id);
            throw std::runtime_error("You cannot reuse block ids!");
        }
        this->id = id;

        // Check for repeat names
        if (NameInUse(name))
        {
            fprintf(stderr, "Block name %s is already in use!", name);
            throw std::runtime_error("You cannot reuse block names!");
        }
        this->name = name;

        this->tex = tex;
        fprintf(stdout, "Using texture %u\n", tex);
        transparent = false;
        solidity = 1.0f;

        idMap[id] = this;
        nameMap[name] = this;
    }

And finally, the maps I'm using to store references to the Blocks in relation to their names and ids are declared as such:

    std::map<uint16, Block*> Block::idMap = std::map<uint16, Block*>();             // The map of block ids
    std::map<std::string, Block*> Block::nameMap = std::map<std::string, Block*>(); // The map of block names

The problem comes when I try to get Blocks from the maps using a method called const Block* GetBlock(uint16 id), whose last line is return idMap.at(id);. This line returns a Block with completely random values, like visibility = 0xcccc and such (found out through debugging). So my question is: is there something wrong with the blocks being declared as const objects and then being stored as pointers and accessed later on? The reason I can't store them as Block& is that this makes a copy of the Block when it is entered, so the block wouldn't have any of the attributes that could be set afterwards in the constructor of a child class; that's why I think I need to store them as pointers. Any help is greatly appreciated, as I don't fully understand pointers yet. Just ask if you need to see any other parts of the code.
1
Just how expensive is it to bind textures in OpenGL? (LibGDX) I'm using LibGDX on top of OpenGL, and currently my game engine does something along the lines of the following per frame:
1. Bind a terrain texture sprite atlas and a set of transparency masks in another texture atlas.
2. Render terrain tiles using the 2 bound textures to an FBO.
3. Bind a character and item texture sprite atlas.
4. Render characters over the terrain to the same FBO.
5. Bind the same transparency mask atlas and a normal map texture for the terrain.
6. Draw the same terrain tiles' normal map version to a different FBO.
7. Bind the character and item normal map texture atlas.
8. Render character normal maps over the terrain normal map FBO.
9. Bind this normal map FBO as a texture.
10. Render lighting information to a different FBO using the normal map texture information.
11. Bind the diffuse and lighting FBOs as textures.
12. Use these to render the combined final image to the main display.
So in summary, each frame I'm binding a total of 9 different textures, one or two at a time. Should I look into changing my code so all 9 textures are always bound and the correct one is referenced at the right time? Or is this a reasonable number of texture binds per frame that isn't going to noticeably impact overall performance? Assume I'm aiming for 60 fps and there's a fair amount of other calculation going on per frame.
1
Is "pure" OpenGL productive enough? I know that this is a difficult question and I hope I can convey my meaning. Over time I've used many different engines from XNA over Unity to Panda3d and even tried native directX once. My final impression is that an engine basically serves to do this implement a scene graph offer classes like actor to add them to the scene graph implement an asset pipeline that exports to actors or something similar allow for custom logic scripts that hook onto the engine and are called at the right times. In Unity you can override update() in panda3d you can add methods to the taskmanager that are then called repeatedly. In Jmonkey they are called controls. It's basically the same. Additionally you can listen to physics events and the like. then there's work done under the hood physics, rendering and networking. I appreciate all of these points very much and acknowledge the comfort a good engine can offer. The problem is I'd like to create a game in a comparably young language (Google's Go) and there's no engine out there yet. Bindings for OpenGL exist and I figure I could easily hack a little engine of my own together. The points one, two and four are not too difficult. I'd do the rendering with custom shaders anyways so that's work that has to be done in any case. For now I don't need complex physics and the standard raycast and collision check wouldn't be too hard for me. The only problem is an asset pipeline because I've got no experience in processing 3d data at all. I've only lived on the programmatical side up to now and the 3d models I've used were admittedly ugly. In the end it will probably add up to a weekend of coding and ongoing maintaining work. The bottom line is I don't have to use go but then again there's no deadline for the project and I figured it might be fun to use this language. Is anyone out there still using "pure" OpenGL and can tell me about the work that lies ahead? Have you ever done this before? Do you think that low level OpenGL is a too complex choice for a one man team or does the work to hack together something of my own pay off in the end ? EDIT You might want to read the comments below Nicol Bolas post since they explain the question a little better.
1
Do Java and ActionScript use OpenGL? As far as I know, there are only 3 base graphics libraries on Windows: GDI, OpenGL and DirectX. Is that correct? Does that mean that Java, ActionScript and every other language must use one of these 3 libraries to display graphics, or does Java perhaps have its own graphics library API?
1
What is Vulkan and how does it differ from OpenGL? Khronos Group (the standards body behind OpenGL) has just announced Vulkan: "Vulkan is the new generation, open standard API for high-efficiency access to graphics and compute on modern GPUs. This ground-up design, previously referred to as the Next Generation OpenGL Initiative, provides applications direct control over GPU acceleration for maximized performance and predictability." Their page is quite marketese/jargon heavy, as is the press release. In simple terms, what does Vulkan mean to game developers? (Gabe Newell is quoted as being strongly in favour, without further explanation.) What exactly is Vulkan's relationship to OpenGL? Its previous name "glNext" (short for "Next Generation OpenGL Initiative") makes it sound like a replacement. Update: the Vulkan 1.0 spec was released on 16/02/2016.
1
Alpha-test shader 'discard' operation not working (GLES2). I wrote this shader to implement the alpha-test operation in GLES2 (Galaxy S6). I think it is not working at all, because I don't see any change with or without it. Is there anything I'm missing? Any syntax error? I know it's better not to use if in a shader, but for now this is the solution I need.

    precision highp float;
    precision highp int;
    precision lowp sampler2D;
    precision lowp samplerCube;

    // 0 = CMPF_ALWAYS_FAIL, 1 = CMPF_ALWAYS_PASS, 2 = CMPF_LESS,
    // 3 = CMPF_LESS_EQUAL, 4 = CMPF_EQUAL, 5 = CMPF_NOT_EQUAL,
    // 6 = CMPF_GREATER_EQUAL, 7 = CMPF_GREATER
    bool Is_Alpha_Pass(int func, float alphaRef, float alphaValue)
    {
        bool result = true;
        if (func == 0) { result = false; break; }
        if (func == 1) { result = true; break; }
        if (func == 2) { result = alphaValue < alphaRef; break; }
        if (func == 3) { result = alphaValue <= alphaRef; break; }
        if (func == 4) { result = alphaValue == alphaRef; break; }
        if (func == 5) { result = alphaValue != alphaRef; break; }
        if (func == 6) { result = alphaValue >= alphaRef; break; }
        if (func == 7) { result = alphaValue > alphaRef; break; }
        return result;
    }

    void FFP_Alpha_Test(in float func, in float alphaRef, in vec4 texel)
    {
        if (!Is_Alpha_Pass(int(func), alphaRef, texel.a))
            discard;
    }
1
How can I set the attribute index location? I am trying to set up a shader which takes three input parameters. I have the following code:

    GLuint vert = glCreateShader(GL_VERTEX_SHADER);
    GLuint frag = glCreateShader(GL_FRAGMENT_SHADER);

    const GLchar* vertsrc = src.vertex.c_str();
    const GLchar* fragsrc = src.fragment.c_str();

    glShaderSource(vert, 1, &vertsrc, NULL);
    glShaderSource(frag, 1, &fragsrc, NULL);
    glCompileShader(vert);
    glCompileShader(frag);

    GLuint program = glCreateProgram();
    glAttachShader(program, vert);
    glAttachShader(program, frag);

    glBindAttribLocation(program, 0, "position");
    if (glGetError() != GL_NO_ERROR) { /* ... */ }
    glBindAttribLocation(program, 1, "normal");
    if (glGetError() != GL_NO_ERROR) { /* ... */ }
    glBindAttribLocation(program, 2, "texcoord");
    if (glGetError() != GL_NO_ERROR) { /* ... */ }

    glLinkProgram(program);
    // ...check linking errors...

    auto p = glGetAttribLocation(program, "position");
    auto n = glGetAttribLocation(program, "normal");
    auto t = glGetAttribLocation(program, "texcoord");

GL_MAX_VERTEX_ATTRIBS is 16. I don't understand why, after this code executes, the values I get from glGetAttribLocation are p = 0, n = -1, t = -1. Needless to say, I am unable to pass anything beyond vertex attrib array no. 0 to my shader. What am I doing wrong?
1
Anti-aliasing in OpenGL (C++). I'm trying to make anti-aliasing work inside OpenGL. Here's what I've tried:

    glEnable(GL_POINT_SMOOTH);
    glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);

    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);

    glEnable(GL_POLYGON_SMOOTH);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

But so far none of these have worked. I have gotten anti-aliasing to work by enabling it in the control panel for my video card (Catalyst Control Center in my case), but I would like to get it working inside my program instead. This is what the rendering looks like with 4x anti-aliasing enabled via the video card control panel, and this is what it looks like when I do it in my program. How do I get anti-aliasing to work?
1
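For context on the question above (editorial note): what the control panel enables is multisampling (MSAA), which has to be requested when the context/framebuffer is created; it is not switched on by the smoothing hints alone. A sketch of requesting it, assuming the window is created with SDL2 (GLUT, GLFW and WGL have equivalent knobs):

    #include <SDL2/SDL.h>
    #include <SDL2/SDL_opengl.h>

    // Request a 4x multisampled default framebuffer before window creation.
    SDL_Window* createMsaaWindow()
    {
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);

        SDL_Window* window = SDL_CreateWindow("MSAA",
                                              SDL_WINDOWPOS_CENTERED,
                                              SDL_WINDOWPOS_CENTERED,
                                              1024, 768, SDL_WINDOW_OPENGL);
        SDL_GL_CreateContext(window);
        glEnable(GL_MULTISAMPLE); // usually already on when samples > 0
        return window;
    }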
Problems when rendering code on an Nvidia GPU. I am following the OpenGL 4.0 Shading Language Cookbook. I have rendered a tessellated quad, as you see in the screenshot below, and I am moving the Y coordinate of every vertex using a time-based sin function, as given in the code in the book. This program, as you can see from the text in the image, runs perfectly on the built-in Intel HD graphics of my processor, but I have Nvidia GT 555M graphics in my laptop (which, by the way, has switchable graphics). When I run the program on the graphics card, the GLSL shader compilation fails. It fails on the following instruction:

    pos.y = sin.waveAmp * sin(u);

giving the error: Error C1105: Cannot call a non-function. I know this error is coming from the sin(u) call which you see in the instruction, but I am not able to understand why. When I removed sin(u) from the code, the program ran fine on the Nvidia card, and it runs fine with sin(u) on Intel HD 3000 graphics. Also, if you notice the text in the image, the program is almost unusable on Intel HD 3000 graphics; I am getting only 9 FPS, which is not enough. It's too much load for the Intel HD 3000. So, is the sin(x) function not defined in the OpenGL specification as implemented by Nvidia's drivers, or is it something else?
1
Can I directly pass a Boost ptr_vector list to glBufferData? I have a data structure like this:

    typedef struct vertex
    {
        float x;
        float y;
        float z;
        float s;
        float t;
    } vertex;

Then I add to a list declared as boost::ptr_vector<vertex> vertices. Is there a way to use vertices to provide the parameters for glBufferData?
1
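A note on the question above (editorial): a ptr_vector stores pointers, so its elements are not contiguous in memory and cannot be handed to glBufferData directly; glBufferData wants one flat block. A sketch of a copy-out fallback under that assumption, reusing the vertex struct above:

    #include <GL/glew.h>
    #include <boost/ptr_container/ptr_vector.hpp>
    #include <vector>

    struct vertex { float x, y, z, s, t; };

    void uploadVertices(const boost::ptr_vector<vertex>& src, GLuint vbo)
    {
        // ptr_vector holds heap pointers, so gather the elements into one
        // contiguous block first (its iterators dereference to vertex&,
        // so this copies the pointed-to values).
        std::vector<vertex> packed(src.begin(), src.end());

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, packed.size() * sizeof(vertex),
                     packed.data(), GL_STATIC_DRAW);
    }

    // If the container can be changed, a plain std::vector<vertex> avoids
    // the copy entirely:
    //   glBufferData(GL_ARRAY_BUFFER, v.size() * sizeof(vertex), v.data(), ...);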
moving glDepthMask into a shader Can a fragment shader make per fragment decisions on whether the fragment updates the depth buffer or not, even if the fragment is not discarded and the color is written?
1
gl_VertexID values when calling glDrawElements. I am struggling a bit to understand the values that the gl_VertexID built-in contains when the vertex shader executes. I have a standard modern rendering pipeline; after setting up shaders, buffers, etc., I call the code below to render a mesh:

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, auxMesh->indicesBuffer);
    glDrawElements(GL_TRIANGLES, auxMesh->numIndices, GL_UNSIGNED_INT, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

In the vertex shader I want to manually access the current vertex being rendered, retrieving it from the original buffer (I know this sounds like nonsense, but I need to do this for a reason). Therefore I pass the vertex buffer as the texture buffer u_tbo_tex and I access the actual coordinates as follows:

    vec3 vertex_1 = texelFetch(u_tbo_tex, gl_VertexID + 0).xyz;
    vec3 vertex_2 = texelFetch(u_tbo_tex, gl_VertexID + 1).xyz;
    vec3 vertex_3 = texelFetch(u_tbo_tex, gl_VertexID + 2).xyz;

And the values of vertex_1, vertex_2 and vertex_3 make perfect sense. However, I can't really understand what values gl_VertexID takes in each invocation. Is gl_VertexID sequentially assigned over the range 0...auxMesh->numIndices? Or are the values increased by 3 at every step because I am drawing triangles? I need to understand this because I am interested in calling texelFetch(u_tbo_tex, i) where i is an arbitrary triangle in my mesh (or whatever i needs to be to access an arbitrary triangle), but I can't find the right way to access it.
1
OpenGL draw functions and multithreading: how do they work together? I want to apply multithreading in a simple way to control and draw 4000 objects. I am using SDL and OpenGL. Control covers locations, collisions, calculations, etc.; draw covers the OpenGL draw functions (glDrawArrays, glDrawElements, etc.). For 4000 objects I am thinking along these lines; do you think it works?
1
Why do nearby triangles tend to disappear? I've just enabled back-face culling and I'm noticing a weird behavior: when all vertices of my triangle are outside the view and 2 of them are behind me (I think), the triangle disappears. To see it, here is a GIF. I suspect the projection matrix reverses the order of the two vertices when they fall behind me and changes the winding of my triangle. But it's unclear why the triangles disappear only if all vertices are out of view... How can I work around this problem, if possible? I develop on Linux, if that matters. UPDATE: It's been pointed out it might not be due to back-face culling; I disabled it and I can indeed reproduce the issue. The cubes are 20x20 and the vertical field of view is 90 degrees. The cube's vertical apparent size roughly fills the window. UPDATE 2: OK, I'll post the relevant part of the code. The projection and view matrices are set up using my own functions:

    void createViewMatrix(
        GLfloat matrix[16],
        const Vector3* forward,
        const Vector3* up,
        const Vector3* pos
    )
    {
        /* Setting up perpendicular axes */
        Vector3 rright;
        Vector3 rup = *up;
        Vector3 rforward = *forward;
        vbonorm(&rright, &rup, &rforward); /* Orthonormalization (right is computed from scratch) */

        /* Filling the matrix */
        matrix[0] = rright.x;
        matrix[1] = rup.x;
        matrix[2] = rforward.x;
        matrix[3] = 0;
        matrix[4] = rright.y;
        matrix[5] = rup.y;
        matrix[6] = rforward.y;
        matrix[7] = 0;
        matrix[8] = rright.z;
        matrix[9] = rup.z;
        matrix[10] = rforward.z;
        matrix[11] = 0;
        matrix[12] = -vdp(pos, &rright);
        matrix[13] = -vdp(pos, &rup);
        matrix[14] = -vdp(pos, &rforward);
        matrix[15] = 1;
    }

    void createProjectionMatrix(
        GLfloat matrix[16],
        GLfloat vfov,
        GLfloat aspect,
        GLfloat near,
        GLfloat far
    )
    {
        GLfloat vfovtan = 1 / tan(RAD(vfov * 0.5));

        memset(matrix, 0, sizeof(*matrix) * 16);

        matrix[0] = vfovtan / aspect;
        matrix[5] = vfovtan;
        matrix[10] = (near + far) / (near - far);
        matrix[11] = -1;
        matrix[14] = (2 * near * far) / (near - far);
    }

The projection matrix is set up with this call:

    createProjectionMatrix(projMatrix, VERTICAL_FOV, ASPECT_RATIO, Z_NEAR, 10000);

(VERTICAL_FOV = 90, ASPECT_RATIO = 4.0/3, Z_NEAR = 1). Level drawing is simply:

    void drawStuff()
    {
        GLfloat projectView[16];

        glClearColor(0, 0, 0, 1);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        createViewMatrix(viewMatrix, &camera.forward, &camera.up, &camera.pos);
        multiplyMatrix(projectView, viewMatrix, projMatrix); /* <- Row-major multiplication. */

        glUniformMatrix4fv(renderingMatrixId, 1, GL_FALSE, projectView);
        bailOnGlError(__FILE__, __LINE__);
        renderLevel(&testLevel);
    }

Cubes are rendered wall by wall (optimizing this will be another story):

    for (j = 0; j < 6; j++)
    {
        glBindTexture(GL_TEXTURE_2D, cube->wallTextureIds[j]);
        bailOnGlError(__FILE__, __LINE__);
        glDrawElements(GL_TRIANGLE_FAN, 4, GL_UNSIGNED_INT, (void*)(sizeof(GLuint) * 4 * j));
        bailOnGlError(__FILE__, __LINE__);
    }
    glUniform4f(extraColorId, 1, 1, 1, 1);
    bailOnGlError(__FILE__, __LINE__);

Vertex shader:

    #version 110

    attribute vec3 position;
    attribute vec3 color;
    attribute vec2 texCoord;

    varying vec4 f_color;
    varying vec2 f_texCoord;

    uniform mat4 renderingMatrix;

    void main()
    {
        gl_Position = renderingMatrix * vec4(position, 1);
        f_color = vec4(color, 1);
        f_texCoord = texCoord;
    }

Fragment shader:

    #version 110

    varying vec4 f_color;
    varying vec2 f_texCoord;

    uniform sampler2D tex;
    uniform vec4 extraColor;

    void main()
    {
        gl_FragColor = texture2D(tex, f_texCoord) * vec4(f_color) * extraColor;
    }

The depth buffer is simply set up by enabling it.
1
OpenGL: multiple textures in a shader. I'm using modern OpenGL and C++. How do I draw a number of triangles, each one having a different texture, in one draw call, when I only have 32 texture units on my graphics card and the max texture size is 1024? My video card has 2 GB of memory, yet it can only hold 32 textures at a time? Why do graphics cards have this limitation? I'm a C++ programmer, and in C++ you don't have restrictions like that; the only restriction you have is how much RAM you have, which is fine.
1
OpenGL never reverts to the default framebuffer. I'm trying to change a texture using FBOs. When I run it, the whole game gets drawn into the new FBO that I created and then deleted. Here's the function:

    public void ChangeTextureBuffer(int width, int height, ByteBuffer newBuffer) {
        int frameID = glGenFramebuffers();
        glBindFramebuffer(GL_FRAMEBUFFER_BINDING, frameID);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ID, 0);
        glViewport(0, 0, width, height);

        Texture texture = new Texture(width, height, newBuffer);
        glBindTexture(GL_TEXTURE_2D, texture.ID);

        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f(1, -1);
        glTexCoord2f(1, 1); glVertex2f(1, 1);
        glTexCoord2f(0, 1); glVertex2f(-1, 1);
        glEnd();

        texture.Delete();
        glDeleteFramebuffers(frameID);
        glBindTexture(GL_TEXTURE_2D, 0);
        glBindFramebuffer(GL_FRAMEBUFFER_BINDING, 0);
    }

And here's what I'm getting: it's drawn where I want it to be, with the size that I want; I just don't want everything else to be drawn there. Also, when I delete a texture or framebuffer, OpenGL isn't reusing the id; I don't know if that's a feature or a bug. EDIT: Here's how it's supposed to look, with everything drawn in its place. I just want to change the text on the bottom left side to show the FPS.
1
physically based shading, how to combine specular & diffuse parts? After writing 'standard' Phong & Blinn shaders for a while, I recently started to dabble in physically based shading. A resource that helped me a lot are these course notes, especially this paper: it explains how to make Blinn shading more physically plausible. I implemented the Blinn model proposed in the paper, and I really like how it looks. The most significant change proposed (imo) is the inclusion of the Fresnel reflectance, and this is also the part that gives me problems. Unfortunately, the author chose to focus on the specular part only, omitting diffuse reflectance. Given e.g. a Lambertian diffuse reflection, I just don't know how to combine it with the 'improved' Blinn, because just adding the diffuse and specular parts does not seem to be right any more. In some shaders I've seen a floating point 'fresnel term' in the range 0-1 being used, based on the indices of refraction of the participating media. Schlick's approximation is used every time:

    float schlick(in vec3 v0, in vec3 v1, in float n1, in float n2)
    {
        float f0 = (n1 - n2) / (n1 + n2);
        f0 *= f0;
        return f0 + (1.0 - f0) * pow(1.0 - dot(v0, v1), 5.0);
    }

Doing it like this, one can then linearly interpolate between the diffuse and specular contributions based on the fresnel term, e.g.:

    float fresnel = schlick(L, H, 1.0002926 /* air */, 1.5191 /* other material */);
    vec3 color = mix(diffuseContrib, specularContrib, fresnel);

In the paper, the author states that this approach is incorrect, because it basically just darkens the specular color whenever L is parallel or nearly parallel to H, and that instead of computing an f0 based on the indices of refraction, you should treat the specular color itself as f0 and have your Schlick approximation compute a vec3, like this:

    vec3 schlick(in vec3 v0, in vec3 v1, in vec3 spec)
    {
        return spec + (vec3(1.0) - spec) * pow(1.0 - dot(v0, v1), 5.0);
    }

This results in the specular color going towards white at glancing angles. Now my question is: how would I introduce a diffuse component into this? At 90° the specular contribution is fully white; this means all incoming light is reflected, so there can't be a diffuse contribution. For incidence angles < 90°, can I just multiply the whole diffuse part by (vec3(1.0) - schlick), i.e. the proportion of light that isn't reflected?

    vec3 diffuseContrib = max(dot(N, L), 0.0) * kDiffuse * (vec3(1.0) - schlick(L, H, kSpec));

Or do I need a completely different approach?
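For completeness, this is the combination I'm currently experimenting with, based on my reading of the course notes: the vec3 Schlick term tints the specular lobe and its complement scales the diffuse lobe, so the two can't sum to more than the incoming light. A sketch only; lightColor and shininess are my own uniforms, and the (n + 8) / 8π factor is the normalization from the notes:

    const float PI = 3.14159265;

    vec3  F     = schlick(L, H, kSpec);   // vec3 version, specular color as f0
    float NdotL = max(dot(N, L), 0.0);

    // normalized Blinn-Phong lobe, tinted by Fresnel
    vec3 specular = F * ((shininess + 8.0) / (8.0 * PI))
                      * pow(max(dot(N, H), 0.0), shininess);

    // whatever is reflected specularly is removed from the diffuse part
    vec3 diffuse = (vec3(1.0) - F) * kDiffuse / PI;

    vec3 color = (diffuse + specular) * lightColor * NdotL;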
1
OpenGL SFML GLFW3 I'm maybe asking a stupid question, but can we mix OpenGL and SFML, and add some GLFW3 to it, in the same SFML window?
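To give an idea of what I mean, raw OpenGL calls inside an SFML window are a documented use case (sketch below, assuming SFML 2.x); GLFW, on the other hand, creates and manages its own windows, so as far as I can tell it can't be pointed at an existing SFML window:

    #include <SFML/Window.hpp>
    #include <SFML/OpenGL.hpp>

    int main()
    {
        // SFML owns the window and the GL context
        sf::Window window(sf::VideoMode(800, 600), "GL in SFML",
                          sf::Style::Default, sf::ContextSettings(24));

        while (window.isOpen())
        {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed)
                    window.close();

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... raw OpenGL drawing here ...
            window.display();
        }
    }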
1
How to use multiple custom vertex attributes in OpenGL My code currently uses glBindAttribLocation and glVertexAttribPointer to specify two custom vertex attributes at indices 6 and 7. This seems to work fine, but I wish to add another attribute, and no index other than 6 or 7 will work: the shader instead acts as if the attribute were always set to a value of 0. I'm using gl_Vertex, gl_Normal, gl_Color and gl_MultiTexCoord0, and apparently some nVidia attribute-aliasing thing means indices 0, 2, 3 and 8 are off limits, but that should still leave other indices. I don't use gl_SecondaryColor or gl_FogCoord anywhere in my code or shaders, for example, but indices 4 and 5 still don't work. If I change graphics cards to an ATI one which supports more than 16 attributes, then indices 16 and up work fine, but I want to support cards with only 16 attributes.
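For reference, this is what I'm considering instead of hard-coding indices: let the linker assign the locations and query them afterwards. The attribute name, stride and offset below are placeholders; the idea is that the linker already knows which slots the built-in gl_* attributes alias on a given driver, so it should only hand back indices that are actually free:

    GLint maxAttribs;
    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);  /* at least 16 since GL 2.0 */

    glLinkProgram(program);                             /* no glBindAttribLocation beforehand */
    GLint loc = glGetAttribLocation(program, "myCustomAttrib");
    if (loc >= 0)                                       /* -1 means inactive / not found */
    {
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, stride, offset);
    }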
1
OpenGL concatenate vertex data into one big VBO I can successfully render two triangles (with a texture) that look like the side of a crate, and also "walk" around it. I also have a small .obj file parser which gives me a float array of the vertices, a float array of the texture coords, and an unsigned int array of the indices (of a simple cube). The vertices and texture coords look like the following:

    float vertices[] = {
        /* positions          texture coords */
         0.5f,  0.5f, 0.0f,   1.0f, 1.0f,
         0.5f, -0.5f, 0.0f,   1.0f, 0.0f,
        -0.5f, -0.5f, 0.0f,   0.0f, 0.0f,
        -0.5f,  0.5f, 0.0f,   0.0f, 1.0f
    };

And the indices:

    unsigned int indices[] = {
        0, 1, 3, /* first triangle */
        1, 2, 3  /* second triangle */
    };

To assign the vertices and tex coords I use the following:

    unsigned int VBO;
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, 5 * 4 * sizeof(float), vertices, GL_STATIC_DRAW);

Then I create an IBO, call glEnableVertexAttribArray() and glVertexAttribPointer(), create a shader, a texture etc., and draw it with glDrawElements(). This all works very well; I just can't wrap my head around how to use just a single VBO with the "split" data that my .obj parser gives back. When using the .obj parser I have the vertices and texture coords separated: how do I put those two arrays into one VBO? The picture I have in mind is to have fewer draw calls and also to avoid redundant code if I, for example, use the same model 100 times.
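In case the intent isn't clear, this is the kind of interleaving I'm thinking about, assuming the parser gives back matching position (xyz) and texcoord (uv) arrays after indexing (needs <stdlib.h> and <string.h>; vertexCount, positions and texCoords are placeholders):

    /* interleave xyz positions and uv texcoords into one [x y z u v] stream */
    size_t stride = 5;
    float *interleaved = malloc(vertexCount * stride * sizeof(float));
    for (size_t i = 0; i < vertexCount; i++)
    {
        memcpy(&interleaved[i * stride + 0], &positions[i * 3], 3 * sizeof(float));
        memcpy(&interleaved[i * stride + 3], &texCoords[i * 2], 2 * sizeof(float));
    }

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * stride * sizeof(float),
                 interleaved, GL_STATIC_DRAW);
    free(interleaved);

    /* attribute 0 = position, attribute 1 = texcoord */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride * sizeof(float), (void *)0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride * sizeof(float),
                          (void *)(3 * sizeof(float)));

Alternatively, I guess the two arrays could stay separate inside one buffer: allocate the full size with glBufferData(..., NULL, ...), upload each block with glBufferSubData, and point the texcoord attribute at the byte offset where the second block begins.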