1
"Normal" Blend Mode with OpenGL Trouble I've been having a lot of trouble trying to get a OpenGL blend function to work as I'd expect it to with like what I'd expect (or from any sensible image editing program). As an example, I'll use these two images to blend (bit difficult to see on white backgrounds so the colors are labeled) Images to be blended This is what I expect to happen (and what happens in paint.net) Expected Result Obviously opengl's default blend function makes it look like this (very wrong) glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) After a ton of testing, this is the closest I could get to creating a "good" blend function glBlendFuncSeparate(GL SRC ALPHA, GL ONE MINUS SRC ALPHA, GL ONE, GL ONE MINUS SRC ALPHA) Looking back at the original expected result though, you'll notice that some of the colors are a bit dimmer than they should be (the middle left part). Specifically, they are premultiplied to half their color value (because of the .5 alpha), and I can't seem to make a function that does not do this (without causing odd blending issues with the barely visible red transparent part). Does anyone know a solution to this issue? One that I had was to use premultiplied alpha in the sources (while I dont want to do this because it requires extra work to convert every color I use in my game to premultiplied or just write some stuff in each shader) and do it like that glBlendFuncSeparate(GL ONE, GL ONE MINUS SRC ALPHA, GL ONE, GL ONE MINUS SRC ALPHA) (No premultiplication) Obviously thats wrong too, but this is actually the only correct result I've gotten so far glBlendFuncSeparate(GL ONE, GL ONE MINUS SRC ALPHA, GL ONE, GL ONE MINUS SRC ALPHA) (Premultiplied inputs) Only problem is, how would I get rid of the premultiplication to display it on the screen? It would probably require an additional render cycle for each thing I blend and that seems way too complex for this issue, so I'm still looking for an answer (its interesting that I cant find anything on this, because OpenGL is so widely used that I'd image someone else ran into this problem). Sources Online blend function testing http www.andersriggelsen.dk glblendfunc.php Bottom Layer Image http i.stack.imgur.com XjLNW.png Top Layer Image http i.stack.imgur.com 9CN6w.png
1
Problems with texture orientation in space I am currently drawing texture in 3D space and have some problems with it's orientation. I'd like me textures always to be oriented with front face to user. My desirable result looks like Note, that text size stay without changes when we rotating world and stay oriented with front face to user. Now I can draw text in 3D space, but it is not oriented with front but rotating with world. Such results I got with following shaders Vertex Shader uniform vec3 Position void main() gl Position vec4(Position, 1.0) Geometry Shader layout(points) in layout(triangle strip, max vertices 4) out out vec2 fsTextureCoordinates uniform mat4 projectionMatrix uniform mat4 modelViewMatrix uniform sampler2D og texture0 uniform float og highResolutionSnapScale uniform vec2 u originScale void main() vec2 halfSize vec2(textureSize(og texture0, 0)) 0.5 og highResolutionSnapScale vec4 center gl in 0 .gl Position center.xy (u originScale halfSize) vec4 v0 vec4(center.xy halfSize, center.z, 1.0) vec4 v1 vec4(center.xy vec2(halfSize.x, halfSize.y), center.z, 1.0) vec4 v2 vec4(center.xy vec2( halfSize.x, halfSize.y), center.z, 1.0) vec4 v3 vec4(center.xy halfSize, center.z, 1.0) gl Position projectionMatrix modelViewMatrix v0 fsTextureCoordinates vec2(0.0, 0.0) EmitVertex() gl Position projectionMatrix modelViewMatrix v1 fsTextureCoordinates vec2(1.0, 0.0) EmitVertex() gl Position projectionMatrix modelViewMatrix v2 fsTextureCoordinates vec2(0.0, 1.0) EmitVertex() gl Position projectionMatrix modelViewMatrix v3 fsTextureCoordinates vec2(1.0, 1.0) EmitVertex() Fragment Shader in vec2 fsTextureCoordinates out vec4 fragmentColor uniform sampler2D og texture0 uniform vec3 u color void main() vec4 color texture(og texture0, fsTextureCoordinates) if (color.a 0.0) discard fragmentColor vec4(color.rgb u color.rgb, color.a) Any ideas how to get my desirable result? EDIT 1 I make edit in my geometry shader and got part of lable drawn on screen at corner. But it is not rotating. .......... vec4 centerProjected projectionMatrix modelViewMatrix center centerProjected centerProjected.w vec4 v0 vec4(centerProjected.xy halfSize, 0.0, 1.0) vec4 v1 vec4(centerProjected.xy vec2(halfSize.x, halfSize.y), 0.0, 1.0) vec4 v2 vec4(centerProjected.xy vec2( halfSize.x, halfSize.y), 0.0, 1.0) vec4 v3 vec4(centerProjected.xy halfSize, 0.0, 1.0) gl Position og viewportOrthographicMatrix v0 ..........
1
How to position/transform vertices for 2D UI in shaders?
I am building a 3D engine and have a rendering abstraction that focuses on writing shaders. Most of my 3D shaders compute gl_Position like this:
gl_Position = projection * view * model * vec4(position, 1);
I would like to draw some 2D UI on top of all the 3D draw calls, and I am having a hard time understanding how to do this. Let's say I wanted to draw a HUD player life bar that is fixed to the bottom-left corner of the screen. I can imagine drawing a quad and maybe applying a texture to it; I would have 4 vertices for the quad to feed to the shader. How would I change the gl_Position calculation to draw only in 2D screen space? Would I update the transformation to apply only the projection matrix?
gl_Position = projection * vec4(position, 0, 1);
How do I determine the local coordinates of the quad vertices? When the screen scales or the resolution varies, how do I ensure the quad stays positioned at the bottom left of the screen? How do I ensure the quad's dimensions keep the same ratio? (A sketch of the setup I have in mind follows.)
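To make the question concrete, this is roughly the setup I have in mind: an orthographic projection in pixel units, rebuilt when the window is resized, with the quad's vertices given directly in pixels. It is only a sketch under those assumptions (uiProjection and barQuad are placeholder names, and I am using GLM here):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// (0,0) = bottom-left corner, (width,height) = top-right corner, in pixels.
glm::mat4 uiProjection(int screenWidth, int screenHeight)
{
    return glm::ortho(0.0f, float(screenWidth), 0.0f, float(screenHeight));
}

// Life bar quad: 10 px in from the corner, 200x20 px regardless of resolution.
// The vertex shader would then just do: gl_Position = projection * vec4(pos, 0, 1);
const glm::vec2 barQuad[4] = {
    {10.0f,  10.0f}, {210.0f, 10.0f},
    {10.0f,  30.0f}, {210.0f, 30.0f},
};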
1
Does gluLookAt add to or set the view matrix variables?
I'm trying to rotate the camera view in PyOpenGL, but it's not working well. The weirdest behavior I've noticed is that putting gluLookAt in a loop seems to change the camera view, even though I'm not changing its inputs as the loop continues. So while I'd expect something like gluLookAt(0, 0, 0, 0, 0, 5, 0, 0, 1) to keep the camera constantly pointing downwards, it seems to rotate the view in some strange way, with the rendered objects eventually leaving the maximum clipping radius. My question is: does gluLookAt take into account the previous camera settings, or do I need to look for something else wrong in my code?
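For context, the structure I have in mind is roughly the following (written as C-style GL rather than my actual PyOpenGL code; drawFrame is just a placeholder name). The comment states my guess about what might be going on, which is exactly what I'm asking about:

void drawFrame()
{
    glMatrixMode(GL_MODELVIEW);
    // If gluLookAt multiplies onto the current matrix instead of replacing it,
    // I would presumably need to reset the matrix each frame before calling it.
    glLoadIdentity();
    gluLookAt(0, 0, 0,   0, 0, 5,   0, 0, 1);
    // ... draw the scene ...
}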
1
Model gets distorted when rotating the camera I'm currently developing my own 3d graphics engine and I'm having a hard time figuring out why my 3D models gets distorted when rotating the camera around. This is my projection matrix. I'm following the OpenGL's Model. def get projection mat(aspect ratio, camera) fov camera.fov z near camera.z near z far camera.z far top math.tan(fov 0.5) z near bottom top right top aspect ratio left right projection mat np.identity(4, dtype float) projection mat 0, 0 2 z near (right left) projection mat 0, 2 (right left) (right left) projection mat 1, 1 2 z near (top bottom) projection mat 1, 2 (top bottom) (top bottom) projection mat 2, 2 (z far z near) (z far z near) projection mat 2, 3 2 z far z near (z far z near) projection mat 3, 2 1 projection mat 3, 3 0 return projection mat This is my view matrix def camera matrix(self) camera matrix np.identity(4, dtype float) camera matrix 3, 0 self.left camera matrix 3, 1 self.up camera matrix 3, 2 self.forward camera matrix 3, 3 self.pos return camera matrix This is how I get the ViewProjection matrix projection mat get projection mat(self.aspect ratio, self.camera) view mat self.camera.camera matrix() view mat np.linalg.inv(view mat) self.view projection mat np.dot(projection mat, view mat) And, finally, this is the ViewProjectionModel matrix view projection model mat np.dot(model.transform mat.T, self.view projection mat) This is the method that I use to rotate the camera def rotate(self, yaw, pitch, degress True) rotation mat y helper.rotate matrix y(yaw, degrees) rotation mat x helper.rotate matrix x(pitch, degrees) rotation mat np.dot(rotation mat y, rotation mat x) self.forward np.dot(np.insert(self.forward, 3, 1), rotation mat) self.forward helper.normalized((self.forward self.forward 3 ) 3 ) self.up np.dot(np.insert(self.up, 3, 1), rotation mat) self.up helper.normalized((self.up self.up 3 ) 3 ) self.left np.dot(np.insert(self.left, 3, 1), rotation mat) self.left helper.normalized((self.left self.left 3 ) 3 ) A video demo, showing the problem To summarize, the order of multiplication is ProjectionMat X ViewMat x ModelMat x ModelFaces. If it isn't clear by now, I'm using Python for this project. Any help would be greatly appreciated. With best regards, Jo o Pedro EDIT Like this? directions, r np.linalg.qr(np.array( self.forward, self.up, self.left )) self.forward directions 0 self.up directions 1 self.left directions 2
1
OpenGL lighting with dynamic geometry
I'm currently thinking hard about how to implement lighting in my game. The geometry is quite dynamic (a fixed 3D grid with custom geometry in each cell) and needs some light to gain more depth and generally look nicer. A scene in my game always contains sunlight and local light sources like lamps (point lights). One can move underground, so sunlight must be able to illuminate as far as it can reach. Here's a render of a typical situation: the lamp is positioned behind the wall at the top, and the hollow cube has a hole in the back so that light can shine through. (I don't want soft shadows; this is just for illustration.) While spending the whole day searching through Google, I stumbled on keywords like deferred rendering, forward rendering, ambient occlusion, screen-space ambient occlusion, etc. Some articles and tutorials even refer to "normal shading", but to be honest I don't really know how to do even simple shading. OpenGL of course has a fixed-function lighting pipeline with 8 possible light sources; however, it just illuminates all vertices without checking for occluding geometry. I'd be very thankful if someone could give me some pointers in the right direction. I don't need complete solutions or the like, just good sources of information understandable to someone with nearly no lighting experience (preferably with OpenGL).
1
How does GL_INT_2_10_10_10_REV work for color data?
Can anybody tell me how exactly to use GL_INT_2_10_10_10_REV as the type parameter in glVertexAttribPointer()? I am trying to pass color values using this type. What is the significance of the "REV" suffix in this type? Does it require any special treatment in the shaders? (A sketch of what I've been trying is below.)
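For what it's worth, this is the kind of setup I've been experimenting with. It's only a sketch, and it uses the unsigned variant GL_UNSIGNED_INT_2_10_10_10_REV since that seemed simpler for colors (packColor, Vertex and colorLoc are placeholder names):

#include <cstdint>
#include <cstddef>

// Pack RGBA into one 32-bit value: 10 bits each for R, G, B and 2 bits for A.
// "REV" means the component order is reversed in memory: R occupies the lowest
// bits and A the top two bits.
uint32_t packColor(float r, float g, float b, float a)
{
    const uint32_t ri = static_cast<uint32_t>(r * 1023.0f) & 0x3FF;
    const uint32_t gi = static_cast<uint32_t>(g * 1023.0f) & 0x3FF;
    const uint32_t bi = static_cast<uint32_t>(b * 1023.0f) & 0x3FF;
    const uint32_t ai = static_cast<uint32_t>(a * 3.0f)    & 0x3;
    return (ai << 30) | (bi << 20) | (gi << 10) | ri;
}

// One packed uint per vertex; with normalized = GL_TRUE the shader just sees a
// vec4 in the 0..1 range, so no special treatment seems to be needed there.
// glVertexAttribPointer(colorLoc, 4, GL_UNSIGNED_INT_2_10_10_10_REV, GL_TRUE,
//                       sizeof(Vertex), (void*)offsetof(Vertex, color));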
1
How scanline rendering finds an intersection with an object
I'm a newbie with graphics, and after reading many articles on the web I still don't understand how, in rasterization, the intersection with an object (let's say a sphere) is found starting from a pixel coordinate like (0, 0) on the screen, before its color is determined, and how the Z-buffer comes into play. I know a little about ray tracing: is a ray shot from the (0, 0) pixel, and is the intersection point then determined by equating the ray equation with the sphere equation? If it works that way, I don't see what the Z-buffer is useful for. I don't know if I explained myself correctly; if not, please let me know and I'll try to be as clear as I can about my doubt.
1
Camera rotation around a point, but without centering
Let's say I have the following: a point somewhere in space, and a camera with a position and an orientation (up, right, forward). I want to rotate the camera around the point, but also keep that point in the same place on screen. So, if the point was at (32, 32) in the window, after the rotation I want it to still be at (32, 32). I've seen "How can I orbit a camera about its target point?", and it was somewhat helpful. I needed code to rotate a point around an arbitrary axis (the camera's up and right), so I used this resource. The problem is, I get something like numerical errors, and my camera starts to wander weirdly when rotating around both the camera's up and right axes (it seems fine when I rotate around only one of them). I tested my implementation with this code:
Matrix m1 = MatrixRotate(Vector(1, 1, 1), 33);
Matrix m2 = MatrixRotate(Vector(1, 1, 1), 33);
Vector a = Vector(1, 1, 1);
Vector c = a;
c = m1 * c;
c = m2 * c;
printf("%f %f %f %f\n", c.x, c.y, c.z, c.w);
And got: 1.028036 0.960396 1.124331 0.000000. It worked fine when the rotation axis was something 'normal' like (1, 0, 0) or (0, 0, 1). So, how else can I rotate the camera around a point while keeping that point at the same position on screen?
1
How can I add a parallax effect to my side-scrolling game?
I read a lot about parallax scrolling, so I know what the logic is and what parallax is, but I can't create a dynamic parallax effect. I have draw and update functions like this:
void UpdateBackground(Background &back)
{
    back.x += back.velX * back.dirX;
    if (back.x + back.width < 0)
        back.x = 0;
}
void DrawBackground(Background &back)
{
    draw_bitmap(back.image, back.x, back.y, 0);
    if (back.x + back.width < WIDTH)
        draw_bitmap(back.image, back.x + back.width, back.y, 0);
}
So this draws a parallax effect with two background objects, but it draws it statically. I'm creating a 2D side-scrolling game and I'm translating my character and my camera position along the x axis. So I have to translate my parallax effect with my camera, but when I add the camera offset to the background's x position it doesn't work. How can I solve this? (A sketch of what I'm trying to get at is below.)
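For comparison, the camera-relative drawing I'm trying to achieve looks roughly like this. It's only a sketch: cameraX, parallaxFactor and the wrapping logic are placeholders for whatever the real game state would be, and draw_bitmap is the same helper used above:

#include <cmath>

// Instead of moving the background itself, draw it offset by a fraction of the
// camera position, so distant layers scroll more slowly than near ones.
void DrawBackground(Background &back, float cameraX)
{
    const float parallaxFactor = 0.5f;           // 0 = fixed, 1 = moves with the camera
    float x = back.x - cameraX * parallaxFactor;

    // Wrap into [-width, 0) so two copies always cover the screen.
    x = std::fmod(x, back.width);
    if (x > 0.0f)
        x -= back.width;

    draw_bitmap(back.image, x, back.y, 0);
    draw_bitmap(back.image, x + back.width, back.y, 0);
}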
1
Render in an ImGui window
How do I render my game scene into an ImGui window? I want to get from this to this. (A sketch of the approach I'm considering is below.)
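From what I've read, the usual approach seems to be to render the scene into an offscreen framebuffer first and then hand its color texture to ImGui. This is only a sketch of that idea (drawSceneWindow, sceneTexture, sceneWidth and sceneHeight are placeholder names for my own FBO setup):

#include "imgui.h"
#include <cstdint>

// Draw the scene into an FBO elsewhere, then show its color texture inside an
// ImGui window with ImGui::Image.
void drawSceneWindow(unsigned int sceneTexture, int sceneWidth, int sceneHeight)
{
    ImGui::Begin("Scene");
    // uv0/uv1 flip V because OpenGL textures have their origin at the bottom-left.
    ImGui::Image((ImTextureID)(intptr_t)sceneTexture,
                 ImVec2((float)sceneWidth, (float)sceneHeight),
                 ImVec2(0, 1), ImVec2(1, 0));
    ImGui::End();
}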
1
Are there still advantages to using GL_QUADS?
OK, I understand that GL_QUADS is deprecated, and thus we're not 'supposed' to use it anymore. I also understand that a modern PC running a game that uses GL_QUADS is actually drawing two triangles. Now, I've heard that because of this, a game should be written using triangles instead. But I'm wondering whether, given the specifics of how OpenGL turns a quad into two triangles, it is ever still advantageous to use quads. Specifically, I'm currently rendering many unconnected quads from rather large buffer objects. One of the areas where I have to be careful is how large the vector of floats I use to update these buffer objects gets (I have quite a few extra float values per vertex, and a lot of vertices per buffer; the largest buffers are about 500 KB). So it strikes me that if I change my buffer objects to draw triangles, this vertex data is going to be 50% larger (six vertices to draw a square rather than 4) and take 50% longer for the CPU to generate. If GL_QUADS still works, am I getting a benefit here, or is the 50% extra memory and CPU time still being spent on OpenGL's automatic conversion to two triangles? (The index-buffer alternative I've been weighing is sketched below.)
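For what it's worth, the alternative I've been weighing is an index buffer, so each square still stores only 4 vertices and the 6 indices per quad are cheap to generate. This is only a sketch (buildQuadIndices and quadCount are placeholder names):

#include <cstdint>
#include <vector>

// 6 indices per quad, reusing the quad's 4 vertices as two triangles.
std::vector<uint32_t> buildQuadIndices(uint32_t quadCount)
{
    std::vector<uint32_t> indices;
    indices.reserve(quadCount * 6);
    for (uint32_t q = 0; q < quadCount; ++q)
    {
        const uint32_t v = q * 4;   // first vertex of this quad
        const uint32_t quad[6] = { v, v + 1, v + 2,   v + 2, v + 3, v };
        indices.insert(indices.end(), quad, quad + 6);
    }
    return indices;
}

// Drawn with: glDrawElements(GL_TRIANGLES, quadCount * 6, GL_UNSIGNED_INT, nullptr);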
1
Pre- or post-multiplication for rotation between coordinate frames
I have three 3D coordinate frames, O, A and B, as shown below. I want to know the rotation matrix RAB between A and B, that is, the rotation that is required, with respect to frame A, to move from A to B. Let us imagine that all I know is the rotation matrix RAO between A and O, and the rotation matrix ROB between O and B. So, what is the correct way to determine RAB? There are two suggestions that come to mind:
(1) RAB = RAO * ROB
(2) RAB = ROB * RAO
Now, my intuition is that (1) is correct, i.e. post-multiplication. This is because I am multiplying everything with respect to the local coordinate frame (as discussed in http://web.cse.ohio-state.edu/~whmin/courses/cse5542-2013-spring/6_Transformation_II.pdf). However, when I compute this, I get a different answer from the one I get by inspection of the diagram. The correct answer, I have noticed, is equal to the pre-multiplication solution (2). Can somebody please explain to me why (2) seems to be correct, rather than (1)? I was under the impression that if all your transformations are with respect to the current local frame, then post-multiplication should be used, i.e. multiply the matrices from left to right as you move between frames. However, when doing the maths, pre-multiplication gives the expected answer.
1
glTextImage2D with GL UNSIGNED BYTE giving weird results while with GL FLOAT just works, driver bug? Update OK, not being able to see the textures loaded by FreeImage was just one of the common mistakes when using modern OpenGL. My texture loading code did not set GL TEXTURE WRAP S T and GL TEXTURE MIN MAG FILTER, and didn't have any mipmaps either Textures quot don 39 t work quot when I don 39 t specify any texture parameters. Is this a driver bug or intended behavior? May then the random colors I'm getting with the testing code be caused by the texture being too small? Maybe If I hardcode a 64x64 red square I will get same results in both cases. It is still weird that with float it works at 2x2 size. I hit a wall trying to rewrite old sprite rendering code based on SDL to use modern OpenGL directly. I'm loading images using FreeImage library and I already confirmed that the pixel data in memory is as expected. But at rendering I was getting no textures. So I harcoded a 2x2 image as a single dimensional array to pass to glTexImage2D and found something weird. When using float, result is as expected but when using char or uint8 t I get random colors. Test code with floats float pixels 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f glGenTextures(1, amp mRedSquareTex) glBindTexture(GL TEXTURE 2D, mRedSquareTex) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexImage2D(GL TEXTURE 2D, 0, GL RGB, 2, 2, 0, GL RGB, GL FLOAT, pixels) Test code with uint8 t uint8 t pixels 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0 glGenTextures(1, amp mRedSquareTex) glBindTexture(GL TEXTURE 2D, mRedSquareTex) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL REPEAT) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexImage2D(GL TEXTURE 2D, 0, GL RGB, 2, 2, 0, GL RGB, GL UNSIGNED BYTE, pixels) I think that if I walk pixel by pixel the data I'm loading using FreeImage and write them to a float array, I can finally see the textures loaded from disk. But It would be nice to just be able to use them as they are. As I'm not sure about the channels ordering of images loaded with FreeImage I'm avoiding the alpha channel for now, until I confirm the channels order, but in theory, as long as I avoid the alpha channel, I must be able to see the texture drawn, maybe with the wrong colors. Shaders don't do anything special yet. The Vertex Shader just pass the textures coordinates as they are and the Fragment Shader just call texture(). I'm testing this in a machine with a NVidia GT 640, driver version 340.93, Ubuntu 14.04 64 bits.
1
Hardware fragment sorting?
I'm writing a rendering engine in OpenGL and I want to do order-independent transparency. I heard somewhere that some GPUs have support for actually sorting the fragments of all the objects in the scene based on depth and then drawing them. I then realized that this feature is likely very important to many people. Does OpenGL have a built-in fragment sorting algorithm, or access to this hardware?
1
Aiming with a crosshair with a lot of polygons/triangles
I'm working on a 3D game of sorts where I'll eventually be able to modify the shapes present in the environment by pulling their faces with a crosshair. The thing is, I don't know how to achieve ray-quad or ray-triangle collision. I did it once with a teacher at school, but he didn't have much time and it was not well constructed (and I lost that code). I was also wondering how I could check for these collisions with thousands of shapes, because I know this is quite resource-demanding, and doing it for 1000 spheres, cubes and freeform shapes is not easy. If anyone could help me push my knowledge a bit further, it'd be very much appreciated! I work in Java using LWJGL, so OpenGL.
1
Help understanding gluLookAt()
I am fairly new to OpenGL (about 3 months) and am asking for assistance in understanding the fundamentals behind gluLookAt(). So far I have spent most of my time with OpenGL modeling scenes with fixed views, and I wanted to begin using gluLookAt combined with keyboard and mouse callbacks to "explore" my scenes. I created a simple program to play around with the functionality of gluLookAt, and that's when I first realized I may not fully understand what is happening. My program creates a world using glOrtho(-4, 4, -4, 4, -4, 4), confining my area somewhat around the origin (0, 0, 0), upon which I place the standard glutSolidTeapot(0.5). Then, in an attempt to make a "camera" revolve around the teapot, I write this in an idle callback function:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(cos(business), 0, sin(business),
          0.0, 0.0, 0.0,
          0.0, 1.0, 0.0);   // up vector
glutPostRedisplay();
It turns out this worked: the view shows a revolving teapot. So I took another step and attempted to create an elliptical path around the teapot by multiplying either cos() or sin() (not both) by 2. This works as well, but does not have the effect I was expecting. The view rotates around the teapot, but I imagined that along the path the teapot would appear closer at some points, a zooming-in effect of sorts. This leads to my question: what exactly do I not understand here? Or, why is this not doing what I expect?
1
Why does my simple OpenGL shadow map fail? I want to render a simple shadow map for grass, where closer looks brighter and further looks darker, from the view of the light point. I can't get it to work. Here is the relevant code setting up buffers gl.glGenFramebuffers(1, framebuff) gl.glBindFramebuffer(GL4.GL FRAMEBUFFER, framebuff.get(0)) gl.glGenTextures(2, textureBuff) gl.glBindTexture(GL4.GL TEXTURE 2D, textureBuff.get(0)) gl.glTexStorage2D(GL4.GL TEXTURE 2D, 1, GL4.GL R32F, displayWidth, displayHeight) gl.glTexParameteri(GL4.GL TEXTURE 2D, GL4.GL TEXTURE MAG FILTER, GL4.GL LINEAR) gl.glTexParameteri(GL4.GL TEXTURE 2D, GL4.GL TEXTURE MIN FILTER, GL4.GL LINEAR MIPMAP LINEAR) gl.glFramebufferTexture(GL4.GL FRAMEBUFFER, GL4.GL COLOR ATTACHMENT0, textureBuff.get(0), 0) gl.glBindTexture(GL4.GL TEXTURE 2D, textureBuff.get(1)) gl.glTexStorage2D(GL4.GL TEXTURE 2D, 1, GL4.GL DEPTH COMPONENT32F, displayWidth, displayHeight) gl.glTexParameteri(GL4.GL TEXTURE 2D, GL4.GL TEXTURE MAG FILTER, GL4.GL LINEAR) gl.glTexParameteri(GL4.GL TEXTURE 2D, GL4.GL TEXTURE MIN FILTER, GL4.GL LINEAR MIPMAP LINEAR) gl.glTexParameteri(GL4.GL TEXTURE 2D, GL4.GL TEXTURE COMPARE MODE, GL4.GL COMPARE REF TO TEXTURE) gl.glTexParameteri(GL4.GL TEXTURE 2D, GL4.GL TEXTURE COMPARE FUNC, GL4.GL LEQUAL) gl.glFramebufferTexture(GL4.GL FRAMEBUFFER, GL4.GL DEPTH ATTACHMENT, textureBuff.get(1), 0) gl.glDrawBuffer(GL4.GL NONE) if(gl.glCheckFramebufferStatus(GL4.GL FRAMEBUFFER) ! GL4.GL FRAMEBUFFER COMPLETE) System.out.println(gl.glCheckFramebufferStatus(GL4.GL FRAMEBUFFER)) Drawing command (unsure if it's correct) gl.glBindFramebuffer(GL4.GL FRAMEBUFFER, framebuff.get(0)) gl.glViewport(0, 0, displayWidth, displayWidth) gl.glEnable(GL4.GL POLYGON OFFSET FILL) gl.glPolygonOffset(2.0f, 4.0f) gl.glClearBufferfv(GL4.GL COLOR, 0, new float 0, 0, 0 , 0) gl.glClearDepth(1.0f) gl.glClear(GL4.GL DEPTH BUFFER BIT) setupMVPMatrix() gl.glBindVertexArray(vaoBuff.get(0)) gl.glUseProgram(shaderProgram) gl.glDrawArraysInstanced(GL4.GL TRIANGLE STRIP, 0, 5, 512 512) gl.glDisable(GL4.GL POLYGON OFFSET FILL) gl.glBindFramebuffer(GL4.GL FRAMEBUFFER, 0) When I comment the glBindFramebuffer(), the grass appears correctly with the white color (from the light point of view, which shows the matrix should be correct) But if I call glBindFramebuffer() with the depth test enabled, everything just disappears. I have also checked the framebuffer status, yet it seems there is no error. What might cause this?
1
Can I make color data not render as a gradient?
I would like the color between my vertices to not be rendered as a gradient, but as a hard break. Is there any way to accomplish this in OpenGL/GLSL? (A sketch of what I've been experimenting with is below.)
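For reference, I came across the "flat" interpolation qualifier, and something like the following compiles, but I'm not sure it's the right (or only) approach. It's just a sketch, and aPosition, aColor and vColor are my own names:

// With "flat", the color of the provoking vertex is used for the whole
// primitive instead of being interpolated across it.
const char* vertexSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aColor;
    flat out vec3 vColor;                  // no interpolation
    void main() {
        vColor = aColor;
        gl_Position = vec4(aPosition, 1.0);
    }
)";

const char* fragmentSrc = R"(
    #version 330 core
    flat in vec3 vColor;
    out vec4 fragColor;
    void main() { fragColor = vec4(vColor, 1.0); }
)";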
1
OpenGL slower than Canvas Up to 3 days ago I used a Canvas in a SurfaceView to do all the graphics operations but now I switched to OpenGL because my game went from 60FPS to 30 45 with the increase of the sprites in some levels. However, I find myself disappointed because OpenGL now reaches around 40 50 FPS at all levels. Surely (I hope) I'm doing something wrong. How can I increase the performance at stable 60FPS? My game is pretty simple and I can not believe that it is impossible to reach them. I use 2D sprite texture applied to a square for all the objects. I use a transparent GLSurfaceView, the real background is applied in a ImageView behind the GLSurfaceView. Some code public MyGLSurfaceView(Context context, AttributeSet attrs) super(context) setZOrderOnTop(true) setEGLConfigChooser(8, 8, 8, 8, 0, 0) getHolder().setFormat(PixelFormat.RGBA 8888) mRenderer new ClearRenderer(getContext()) setRenderer(mRenderer) setLongClickable(true) setFocusable(true) public void onSurfaceCreated(final GL10 gl, EGLConfig config) gl.glEnable(GL10.GL TEXTURE 2D) gl.glShadeModel(GL10.GL SMOOTH) gl.glDisable(GL10.GL DEPTH TEST) gl.glDepthMask(false) gl.glEnable(GL10.GL ALPHA TEST) gl.glAlphaFunc(GL10.GL GREATER, 0) gl.glEnable(GL10.GL BLEND) gl.glBlendFunc(GL10.GL ONE, GL10.GL ONE MINUS SRC ALPHA) gl.glHint(GL10.GL PERSPECTIVE CORRECTION HINT, GL10.GL NICEST) public void onSurfaceChanged(GL10 gl, int width, int height) gl.glViewport(0, 0, width, height) gl.glMatrixMode(GL10.GL PROJECTION) gl.glLoadIdentity() gl.glOrthof(0, width, height, 0, 1f, 1f) gl.glMatrixMode(GL10.GL MODELVIEW) gl.glLoadIdentity() public void onDrawFrame(GL10 gl) gl.glClear(GL10.GL COLOR BUFFER BIT) gl.glMatrixMode(GL10.GL MODELVIEW) gl.glLoadIdentity() gl.glEnableClientState(GL10.GL VERTEX ARRAY) gl.glEnableClientState(GL10.GL TEXTURE COORD ARRAY) Draw all the graphic object. for (byte i 0 i lt mGame.numberOfObjects() i ) mGame.getObject(i).draw(gl) Disable the client state before leaving gl.glDisableClientState(GL10.GL VERTEX ARRAY) gl.glDisableClientState(GL10.GL TEXTURE COORD ARRAY) mGame.getObject(i).draw(gl) is for all the objects like this HERE there is always a translatef and scalef transformation and sometimes rotatef gl.glBindTexture(GL10.GL TEXTURE 2D, mTexPointer 0 ) Point to our vertex buffer gl.glVertexPointer(3, GL10.GL FLOAT, 0, mVertexBuffer) gl.glTexCoordPointer(2, GL10.GL FLOAT, 0, mTextureBuffer) Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL TRIANGLE STRIP, 0, mVertices.length 3) EDIT After some test it seems to be due to the transparent GLSurfaceView. If I delete this line of code setEGLConfigChooser(8, 8, 8, 8, 0, 0) the background becomes all black but I reach 60 fps. What can I do?
1
How can I render a font in C with OpenGL? What I tried I was testing some things in order to render text with stb truetype.h and OpenGL in C. I took as a reference the example that appears here. Basically, this example, loads a .ttf file and returns the raw information in bytes, that can be used to generate a texture in OpenGL. I adapted the example, mentioned before, into modern OpenGL, because, the example uses OpenGL deprecated functions, like glVertex2f. The only thing I get to output on screen was this kind of noise of strange colors The code I use texture t fnt texture GLuint fnt shader unsigned char ttf buffer 1 lt lt 20 unsigned char temp bitmap 512 512 stbtt bakedchar cdata 96 ASCII 32..126 is 95 glyphs define FONT VS quot version 330 core n quot quot layout(location 0) in vec3 m Position quot quot layout(location 1) in vec2 m TexCoords quot quot out vec2 TexCoords n quot quot void main() n quot quot TexCoords m TexCoords n quot quot gl Position vec4(m Position, 1.0) n quot quot n quot define FONT FS quot version 330 core n quot quot in vec2 TexCoords n quot quot uniform sampler2D Texture n quot quot void main() n quot quot gl FragColor texture(Texture, TexCoords) n quot quot n quot void font init(void) fread(ttf buffer, 1, 1 lt lt 20, fopen( quot c windows fonts times.ttf quot , quot rb quot )) stbtt BakeFontBitmap(ttf buffer, 0, 32.0, temp bitmap, 512, 512, 32, 96, cdata) no guarantee this fits! glGenTextures(1, amp fnt texture 3 ) My texture type, is an array that saves the texture on the 3rd position. glBindTexture(GL TEXTURE 2D, fnt texture 3 ) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA8, 512, 512, 0, GL RGBA, GL UNSIGNED BYTE, temp bitmap) glGenerateMipmap(GL TEXTURE 2D) can free temp bitmap at this point glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glBindTexture(GL TEXTURE 2D, 0) fnt shader shader init(FONT VS, FONT FS) void font render(model t model) shader bind(fnt shader) texture bind(fnt texture, 0) model begin(model) model draw(model, GL TRIANGLES) The model (vao, vbo, ibo) is rendering the whole buffer, not individual glyphs model end() texture unbind() shader unbind() Can someone tell me what I'm doing wrong, and, how I'm suposed to render correctly text, with modern OpenGL, with textures and buffers, in order to read the .ttf file and create the necessary information with stb truetype.h and, then, render the text?
1
Create a white background for the texture and then blend using GLSL
I have a transparent PNG texture and I'd like to create a white background and then blend the texture on top of it. Is this possible using just GLSL? I can't simply multiply, add or mix colors, because I don't want to overlay the color white on the texture; I want it to be behind the texture. I can achieve what I want by creating two objects with the exact same dimensions and position, setting the color of the object behind to white and the object in front to the transparent texture, but this seems less than ideal to me. (A sketch of the single-object version I have in mind is below.) Any suggestions will be much appreciated!
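To clarify what I'm after: the two-object workaround effectively computes, per pixel, "white where the texture is transparent, the texture's color where it is opaque". What I'm asking is whether that can live in a single object's fragment shader, roughly like this sketch (uTexture and vTexCoord are my own names):

// Composite the texture over an opaque white "background" directly in the
// fragment shader, so only one object needs to be drawn.
const char* fragmentSrc = R"(
    #version 330 core
    in vec2 vTexCoord;
    out vec4 fragColor;
    uniform sampler2D uTexture;
    void main() {
        vec4 tex = texture(uTexture, vTexCoord);
        vec3 overWhite = mix(vec3(1.0), tex.rgb, tex.a);  // white shows through where alpha is low
        fragColor = vec4(overWhite, 1.0);
    }
)";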
1
Can instantiated objects have different materials or textures?
While I have some experience with simple 2D games, I am new to more processing-intensive 3D games. One basic question that has been concerning me recently, and for which I am having difficulty finding a proper, detailed answer, is the following. I understand that when we instantiate an object, we save memory because the instantiated copy does not have to store mesh data (like vertex positions, UV coordinates and normals). So the instantiated copy only needs to store a transformation matrix in order to properly position, scale and rotate the mesh structure it shares with the original mesh. But can the instantiated copies have a different texture or material from the original mesh from which they were instantiated? If so, then it means the instantiated objects actually store more than transformation matrices. I would love to read more on this, so suggestions are welcome.
1
Practice OpenGL or learn a specific engine?
Possible duplicate: Should I use game engines to learn to make 3D games?
I am a university student and I want to work in the game industry. Right now I am deciding between practicing my OpenGL skills and learning a completely new game engine during my time in school. I will do this by developing a smartphone game. I am debating between using just OpenGL and using a game engine. If I learn a game engine, and the company I want to work for does not use that engine, wouldn't it be a waste of time to learn that engine now instead of solidifying my OpenGL skills? So, OpenGL or game engine? Thanks.
1
How can I render a single object that uses multiple textures?
I'm looking for a technique to render an object with multiple texture sources. One texture is static; the other is generated dynamically (it's a render target). For example, say I was rendering a TV. The frame of the TV is a static texture, and the image comes from a render-to-texture pass. It doesn't sound difficult, but I've been unable to find a decent approach. Some of my ideas are:
1) Instead of rendering to a unique texture, I render to a part of the other texture (so the part with the TV screen is overwritten and the frame remains). This doesn't work if I want to combine multiple textures (two TVs with different frames, same show playing).
2) Create a two-part object: render the TV frame as one object, then render the image as another. This would require leaving a hole in the one model, or putting the other model slightly on top. Is there a drawback to this approach?
Is there another approach that works well?
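A third idea I've wondered about, for what it's worth, is binding both textures at once and choosing between them per fragment. This is only a sketch of what I mean (uFrameTex, uScreenTex and vTexCoord are my own names, and using the frame texture's alpha as the mask is just one possible convention):

// Bind the static frame texture and the render-target texture to two units and
// pick per fragment; the frame's alpha marks where the screen shows through.
const char* fragmentSrc = R"(
    #version 330 core
    in vec2 vTexCoord;
    out vec4 fragColor;
    uniform sampler2D uFrameTex;    // static TV frame
    uniform sampler2D uScreenTex;   // render-to-texture result
    void main() {
        vec4 frame  = texture(uFrameTex, vTexCoord);
        vec4 screen = texture(uScreenTex, vTexCoord);
        fragColor = mix(screen, frame, frame.a);
    }
)";

// CPU-side setup:
// glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, frameTexture);
// glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, screenTexture);
// plus glUniform1i for uFrameTex = 0 and uScreenTex = 1.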
1
Getting crash on glDrawElements Here is the code where I initialize the VAO, vertex attributes(also the main VBO) and EBO(im using my own wrapper class for these "databuffers" to hide some of the API features and make life easier so i dont think the problem will be in the generic class as it was working without problems) void initVAOManager(const bool amp ebo) if ( vaoID 0) glGenVertexArrays(1, amp vaoID) glBindVertexArray( vaoID) Here is the main data buffer (positions,colors,UVs) If it doesn t exist a new one is created if (! mainBuffer) mainBuffer new DataBuffer lt T gt (GL ARRAY BUFFER) mainBuffer gt bindBuffer() if (! eboBuffer amp amp ebo) eboBuffer new DataBuffer lt eboData gt (GL ELEMENT ARRAY BUFFER) eboBuffer gt bindBuffer() This is the position glEnableVertexAttribArray(0) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, sizeof(Vertex), (void )offsetof(Vertex, position)) Color attrib pointer glEnableVertexAttribArray(1) glVertexAttribPointer(1, 4, GL UNSIGNED BYTE, GL TRUE, sizeof(Vertex), (void )offsetof(Vertex, color)) UV glEnableVertexAttribArray(2) glVertexAttribPointer(2, 2, GL FLOAT, GL TRUE, sizeof(Vertex), (void )offsetof(Vertex, uv)) mainBuffer gt unbindBuffer() if (ebo) eboBuffer gt unbindBuffer() glBindVertexArray(0) Then the render function (dont mind the for loop, as i want to render multiple objects from the batch in one function) void renderBatchNormal() uploadData() glBindVertexArray( VAOManager gt getVAO()) std vector lt eboData gt for (std size t i 0 i lt DATA.size() i ) glDrawElements(GL TRIANGLES, 6, GL UNSIGNED INT, 0) glBindVertexArray(0) clearData() The upload data function send a data from the vectors to their buffers, I can send it too but as Im using my generic wrapper and it worked before with normal drawings I assume there is no problem. And finally a class eboData(if anyone wondered) (basically just a blank class with an array of 6 indices) class eboData public GLuint indices 6 However, this is causing crashes on the line where I try to execute the glDrawElements command, I read that it can be caused with no binded VAO while binding the ELEMENT BUFFER but as you can see from the code I m doing it right(at least I think that). However, if I change the following line with std vector lt eboData gt eboVector glDrawElements(GL TRIANGLES, 6, GL UNSIGNED INT, eboVector.data()) The code is working (problem is also that I don t know how to render second item in the buffer as it is showing only the first one). Do you have any ideas what can cause this crash? PS glGetError() returns 0.
1
GLSL to Cg fragment shader I have found very useful resource on the Swiftless website on OpenGL. Unfortunately, I cannot manage to adapt a GLSL fragment shader to my project, which uses Cg. Here it is uniform sampler2D color texture uniform sampler2D normal texture void main() Extract the normal from the normal map vec3 normal normalize(texture2D(normal texture, gl TexCoord 0 .st).rgb 2.0 1.0) Determine where the light is positioned (this can be set however you like) vec3 light pos normalize(vec3(1.0, 1.0, 1.5)) Calculate the lighting diffuse value float diffuse max(dot(normal, light pos), 0.0) vec3 color diffuse texture2D(color texture, gl TexCoord 0 .st).rgb Set the output color of our current pixel gl FragColor vec4(color, 1.0) I have tried something struct fsOutput vec4 color COLOR uniform sampler2D detailTexture TEXUNIT0 uniform sampler2D bumpTexture TEXUNIT1 fsOutput FS Main(float2 detailCoords TEXCOORD0, float2 bumpCoords TEXCOORD1) fsOutput fragm float4 anorm tex2D(bumpTexture, bumpCoords) vec3 normal normalize(anorm.rgb 2.0f 1.0f) vec3 light pos normalize(vec3(1.0f, 1.0f, 1.5f)) float diffuse max(dot(normal, light pos), 0.0) vec3 color diffuse texture2D(detailTexture, detailCoords).rgb fragm.color vec4(color, 1.0f) return fragm But it doesn't work. To debug, I have a function that catches Cg errors, and my program breaks at this point. I have identified the two texture IDs in the main program. Can you suggest any improvement for this Cg shader?
1
A few questions about Order Independent Transparency I've been looking through several different Order Independent Transparency algorithms. But very few of them seem to answer a few things. I understand that the idea of OIT is to not worry so much about ordering. But does it still matter in some cases? And is there a way to preserve it if a certain ordering is desired? For starters, does presorting impact OIT in any way? Such as speeding it up, or producing different visual results? And is lighting still handled per pixel as normal when you submit your geometry to be rendered. Or does it happen defacto? A few algorithms I have been looking at. Intel's OIT solution. This also seems to be fairly popular with AMD as well. AMD's Powerpoint Depth Peeling. Can't find a good link. And then Depth Weighted blending. Which seems to produce muddy results with layers. Linky I guess I should also ask how necessary, and how fast is this. My main issue isn't layer's of transparent objects with overlaps. Things like holograms and layers upon layers of glass are likely to be rare. But mostly with particles and their drawing order from different particle systems, and them interacting with other transparent objects correctly. Currently... the engine treats a particle engine like one complete object, and renders them directly to the backbuffer on the transparency pass. Though because the particles are volumes... it causes some graphical errors. For example if there was a huge explosion that covers a massive chunk of land... there will be a random stack of smoke that renders on top of it. And I don't want to break up all the particle instancing to batch them with the rest of the geometry... as it would likely massively raise the draw calls for every break. I currently can't provide any pictures right now. The engine is undergoing a serious rewrite to get it more usable. So a good portion of the code is slashed out. I'm just trying to solve problems I have noticed in my first right up.
1
Nvidia "High Performance Processor" setting leads to graphical bug (seizure warning) with current lighting system, drawing completely in the shader code
I followed the lighting tutorial on LearnOpenGL, modifying some of the code to work in a 2D game engine. Everything was looking great, my team got our game done, and the lights were quite simple for our designers to use. However, we ran into a rare bug, as shown here: https://www.youtube.com/watch?v=to0mMP5I0cs One team member was able to reproduce the bug by switching his Nvidia settings to use the "High Performance Processor" as opposed to "Integrated Graphics"; otherwise everything renders properly. The bug doesn't appear when there are no lights and everything is rendered in its full color. We have already gone through a lot of ideas, but they haven't worked, and now I am at a loss. Does anyone have any ideas about what is going on?
1
Quaternion rotation around center, undefined behavior Here's my code vec4 qx, qy, qz mat4 mx, my, mz rotating using quaternions glm quat(qx, to radians(a gt rx), 1.0f, 0.0f, 0.0f) glm quat(qy, to radians(a gt ry), 0.0f, 1.0f, 0.0f) glm quat(qz, to radians(a gt rz), 0.0f, 0.0f, 1.0f) turning the quaternions into matrices glm quat mat4(qx, mx) glm quat mat4(qy, my) glm quat mat4(qz, mz) mat4 trans 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 mat4 rot 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 mat4 final combining the rotations into one. glm mat4 mulN((mat4 ) amp mx, amp my, amp mz , 3, rot) translating the trans matrix. glm translate(trans, (vec3) a gt x, a gt y, a gt z ) finally combining the translation with the rotation into one. glm mat4 mul(trans, rot, final) My desired behavior is that the object rotates around its center, but here is what happens instead So , it seems that my object is rotating around some weird other undefined point. I have no idea why this happens. Any ideas? Thank you.
1
openGL Vertex Projection does not work as expected I try to render a simple grid using glBegin(GL LINES). I have a class Camera, which provides values like this float farPlane 100.0f float nearPlane 0.1f float screenRatio 1,42857 width height 1000 700 float frustum 70.0f glm vec3 position glm vec3(3.0f, 3.0f, 3.0f) glm vec3 UP glm vec3(0.0f, 1.0f, 0.0f) glm mat4 view glm lookAt(position, position glm normalize(position), UP) makes the lookAt vector always point towards (0, 0, 0) glm mat4 projection glm perspective(frustum, screenRatio, nearPlane, farPlane) using the view and projection matrices, i transform every vertex from my Grid model and render it. glm mat4 viewProjectionMatrix Graphic camera gt getViewProjectionMatrix() returns Camera projection Camera view glBegin(GL LINES) for (unsigned int i 0 i lt vertexNum i) glColor3f(vertexArray i .normal.x, vertexArray i .normal.y, vertexArray i .normal.z) normal vector is used for color in this case glm vec4 translatedPosition(viewProjectionMatrix gridTransformationMatrix (glm vec4(vertexArray i .position, 1.0f))) glVertex3f(translatedPosition.x, translatedPosition.y, translatedPosition.z) glEnd() but this is what i see when i move the camera along the line (0,0,0) u (1,1,1) http i.imgur.com PrcDcLs.gifv (you can see the camera cooridnates in the console)
1
Interpolating frames in a vertex shader
My models are stored as a set of meshes, each with a vertex list and normal list per keyframe, and indices for GL_TRIANGLES which are shared across all frames. Each frame I lerp between two adjacent keyframes to generate the vertices for that frame on the CPU, and then draw the result. How can I move this into a GLSL vertex shader? Can a shader interpolate between two sets of vertices, and how can I store those vertices on the GPU? (A sketch of the kind of shader I imagine is below.)
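For discussion, the kind of shader I imagine looks roughly like this, with both keyframes' data uploaded as separate vertex attributes (from separate VBOs or separate offsets) and a per-draw blend factor. It's only a sketch; the attribute names, uBlend and uMvp are my own:

const char* vertexSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 aPositionA;   // keyframe A
    layout(location = 1) in vec3 aPositionB;   // keyframe B
    layout(location = 2) in vec3 aNormalA;
    layout(location = 3) in vec3 aNormalB;
    uniform float uBlend;                      // 0..1 between the two keyframes
    uniform mat4 uMvp;
    out vec3 vNormal;
    void main() {
        vec3 position = mix(aPositionA, aPositionB, uBlend);
        vNormal = normalize(mix(aNormalA, aNormalB, uBlend));
        gl_Position = uMvp * vec4(position, 1.0);
    }
)";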
1
How to render animated models via instanced rendering?
I have an animated model with a maximum of 60 bones. That means I have an array of 60 matrices when I want to render the model. Previously I would just create a uniform of fixed size (mat4[60]), but now I want to render the model via instanced rendering, which means I would have to write 60 matrices per instance into the model's VAO. That would mean creating 240 vertex attributes, each containing 4 floats, because that's the maximum number of floats an attribute can have. This is obviously a ridiculous solution, besides the fact that I think it exceeds the maximum number of attributes a VAO can have. So how do I render animated models via instanced rendering?
1
CubeRealm OpenGL rotation problems, need help. I'm totally new to OpenGL and I'm working on a Sandbox game called CubeRealm. My problem you see is rotation. I've got it in my head that to rotate the 'camera' I just rotate all the scene by the negative value of the camera's rotational values. However so far it seems to not be working. Here's the code snippets glPushMatrix() lighting() (640 2) (64 2) amount of cubes on one 640x640 plane(409,600,4,096) TODO fix z axis problem for translating glRotatef( player.camera.rotation.x,1.0,0.0,0.0) glRotatef( player.camera.rotation.y,0.0,1.0,0.0) glRotatef( player.camera.rotation.z,0.0,0.0,1.0) renderGrid() glPopMatrix() glutSwapBuffers() SNIPPET2... case LOOK UP if(player.camera.rotation.y! 90.0) if they are not looking up make them look up player.camera.rotation.y 90.0 TODO set direction the camera is facing in break case ROTATE LEFT if(player.camera.rotation.x! 0.0) player.camera.rotation.x 90.0 TODO set direction the camera is facing in else player.camera.rotation.x 360.0 player.camera.rotation.x 90.0 TODO set direction the camera is facing in break case LOOK DOWN if(player.camera.rotation.y! 90.0) if they are not looking down make them look down player.camera.rotation.y 90.0 TODO set direction the camera is facing in break case ROTATE RIGHT if(player.camera.rotation.x! 360.0) player.camera.rotation.x 90.0 TODO set direction the camera is facing in else player.camera.rotation.x 0 player.camera.rotation.x 90.0 TODO set direction the camera is facing in break When I press the right arrow key(rotate to the right) it doesn't rotate also when I rotate up(up arrow) it goes weird instead of allowing me to see the top of the skybox it still shows me my test cubes. So guys how do I fix the rotation, what am I doing wrong? Note the 'TODO set direction' stuff is for my local axis system. Those TODOs are irrelevant to the question.
1
How can I render text using the new(ish) JOGL GPU curve rendering classes? I'm fairly new to OpenGL JOGL, working through various tutorials and books and making steady progress. Text, however, is an area where I'm stuck. I figured out one way using BufferedImage and Graphics2D to draw strings and then swizzle the pixels and copy to an OpenGL texture, but the quality is low, it is resolution dependent, and it's not efficient. I found this http forum.jogamp.org GPU based Resolution Independent Curve Rendering td2764277.html. Unfortunately while there are some demos in the GitHub repo I can't quite get my head around them. The code I've tried to use is below In the init() method InputStream fontFile getClass().getResourceAsStream("media futura.ttf") try font FontFactory.get(fontFile, true) catch (IOException e) System.err.println("Couldn't open font!") e.printStackTrace() RenderState renderState RenderState.createRenderState(SVertex.factory()) renderState.setColorStatic(1, 1, 1, 1) renderState.setHintMask(RenderState.BITHINT GLOBAL DEPTH TEST ENABLED) renderer RegionRenderer.create(renderState, RegionRenderer.defaultBlendEnable, RegionRenderer.defaultBlendDisable) renderer.init(gl, Region.VBAA RENDERING BIT) util new TextRegionUtil(Region.VBAA RENDERING BIT) In the display() method PMVMatrix pmv renderer.getMatrix() pmv.glMatrixMode(GLMatrixFunc.GL MODELVIEW) pmv.glLoadIdentity() pmv.glTranslatef(0, 0, 300) float pixelSize font.getPixelSize(32, 96) util.drawString3D(gl, renderer, font, pixelSize, "Test", null, samples) I've searched and searched for a tutorial on this stuff or a simple, commented code example explaining how it works but to no avail. If anyone can help me I'd be extremely grateful!
1
Draw the same object multiple times, translated and rotated
I want to draw lots of spheres in different locations and orientations with OpenGL 4 and JOGL. As the vertices and colours are the same for all of them, I have just one array for vertices and another for colours. For the positions and orientations, I have another big matrix holding the data for all spheres. In principle, drawing one sphere with glDrawArrays is not a problem, but for several I have read that I should use glDrawArraysInstanced instead. My problem is that I am a bit confused about how to apply each transformation to my particles. How should I pass this array to the shader? Should I send the model matrices after doing the transformations on the CPU, or should I send the positions and orientations and do the transformations inside the shader? How do I connect the data to the shader? What should the shader look like?
1
How to use modern OpenGL for 2D games?
I've found a plethora of "modern" OpenGL (3.0+) tutorials for 3D, but I have found next to nothing when looking for information on how to use it for 2D game development. How can I get started using OpenGL for 2D gamedev? Specifically, I'm interested in answers on the following topics: How should I set up my various matrices for orthographic projection? Are shaders as heavily used in 2D applications as in 3D ones? If so, what is their purpose in the 2D setting? How should I handle the massive number of textures obviously required for a 2D game? I apologize for the relatively broad question, but I've spent a long time searching and I've found very little useful information that applies to modern OpenGL.
1
Rotate an image and get it back to its original position (OpenGL ES / GLKit)
I need to rotate an image in OpenGL ES with GLKit and then get it back to its original position.
rotation = 5;
_modelViewMatrix = GLKMatrix4Rotate(_modelViewMatrix, GLKMathDegreesToRadians(5), 1, 0, 0);
_modelViewMatrix = GLKMatrix4Rotate(_modelViewMatrix, GLKMathDegreesToRadians(rotation), 1, 0, 0);
I also need to move it along the x axis by a certain amount and then get it back to the original position where it started. How should I do this?
1
Can't get world position from reversed Z buffer
I'm using this solution to render using a reversed Z buffer. It looks fine and completely fixes all my z-fighting, but it breaks what I use in the shader to derive the world position from depth for various purposes such as deferred lighting and fog (it worked with a regular projection matrix):
vec4 screenSpacePosition = vec4(texcoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1);
vec4 worldSpacePosition = invProjView * screenSpacePosition;
vec3 finalPosition = worldSpacePosition.xyz / worldSpacePosition.w;
return finalPosition;
I thought this might be because it uses a single combined view-projection matrix, so I've also tried this:
vec4 clipSpacePosition = vec4(texcoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
vec4 viewSpacePosition = invProj * clipSpacePosition;
viewSpacePosition /= viewSpacePosition.w;
vec4 worldSpacePosition = invView * viewSpacePosition;
return worldSpacePosition.xyz;
That must be suboptimal performance-wise, but it doesn't work either. Any idea what I'm missing?
1
Is it possible to store diffuse and normal maps in the same texture array and preserve sRGB/linear space?
Usually, one would want to upload texture data to OpenGL with GL_SRGB as the internalformat of a texture, and GL_RGB (or some other linear format) for normal data or specular highlight maps. We can minimize context switches by using a texture array, but that forces all textures to have the same internalformat. Is there a way to store all the needed textures in a single texture array but preserve their colour spaces? Or should I convert from sRGB to linear space when uploading the texture data?
1
Texture coordinates for custom geometry in SceneKit (iOS 9)
I am trying to texture a custom plane shape I created in SceneKit on iOS 9. I need the texture to spread out and stretch over the entire surface. I have a vertex and fragment shader on an SCNProgram, but it seems I am doing something wrong, and after fiddling with this all day I have come to the conclusion that somehow the texture coordinates I specify are not correct. My question is: when creating a geometry, does SceneKit create the texture coordinates automatically, so that when I specify CGPoint(0, 0) for one corner of the texture it is automatically mapped to the bottom-left corner of the geometry? Or do I have to manually specify the texture coordinates (UVs) for the geometry I create? This problem is part of a larger question posted here: https://stackoverflow.com/questions/34104369/custom-shader-scnprogram-ios-9-scenekit Please help, I have spent all day tinkering with no success :(
1
Processing through multiple shaders (LWJGL / Java / OpenGL)
A very simple question: is it possible to process a VBO through different shaders? If so, how? What I want is something like this:
reflectionShader.bind();              // starts shader
reflectionShader.load(someValues);    // loads uniform vars into shader
process(vbo);                         // "renders" vbo using bound shader
reflectionShader.unbind();            // stops shader
refractionShader.bind();
refractionShader.load(someOtherValues);
process(vbo);
refractionShader.unbind();
render(vbo);                          // output to screen
1
OpenGL C Rotate relative to shooting gun I'm trying to make a 2D game where I have a gun that i use to shot things in the direction of the mouse resulting in an angle that I can get. I have some problems with the trajectory of the bullet because I don't know what to put in the Transpose and Rotate function. For example Point 1 has x 20 and y 30. I want to shot in the direction of Point 2 at x 50 and y 50. To do that i get an angle 't' which I use in the rotate function. But the reality is that I get the animation that I presented in the photo. I increase the x coord of the bullet because I can only shot in right side so only positive x. The bullet is moving on the 0x axis rotated at an angle t but at the same y as the gun. What I want is something like I'm sorry if this is a duplicate I couldn't find a solution to my problem. I think the resulting Matrix should have the form Translate Rotate Translate but I can't find the right x and y. Sorry if this seemed dumb and thank you for your time!
1
Aspect ratio of drawn quad messed up after rotating When I draw a quad that is rotating the aspect ratio of the quad gets messed up and the size changes. Gif of what is happening I am confident it has something to do with the way I calculate the size because that is relative to the width and height of the screen, I just don't know why the aspect ratio would change since the aspect ratio of the screen doesn't change. Code of transformationMatrix creation Matrix4f matrix Maths.createTransformationMatrix( new Vector2f( gui.getPosition().getX() Display.getWidth() 2 1 gui.getSize().getX() Display.getWidth(), gui.getPosition().getY() Display.getHeight() 2 1 gui.getSize().getY() Display.getHeight() ), gui.getRotation() ,new Vector2f( gui.getSize().getX() Display.getWidth(), gui.getSize().getY() Display.getHeight() ) ) shader.loadTransformation(matrix) The x size, y size, x position and y position are in pixels, if you put in 1600 pixels for x position and the screen is 1600 pixels wide it will fill the whole screen. The calculation that is done here will translate those sizes to a value that the shaders will display on the screen. I had tried calculating the sizes in a different way gui.getSize().getX() (float) (Display.getWidth() Math.abs(Math.cos((Math.toRadians(gui.getRotation())))) Display.getHeight() Math.abs(Math.sin((Math.toRadians(gui.getRotation()))))), but this does not work at all, my thought process was that if it is sideways it should use Display.getHeight to calculate the x size and if it is vertical it should use Display.getWidth. This does work for 0, 90, 180 and 270 degrees rotation, but in between it gets smaller and weird. Of course I've also done this for the x position but then cos and sin switched around. video of this method And this is the createTransformationMatrix method public static Matrix4f createTransformationMatrix(Vector2f translation, float rotation, Vector2f scale) Matrix4f matrix new Matrix4f() matrix.setIdentity() Matrix4f.translate(translation, matrix, matrix) Matrix4f.rotate((float) Math.toRadians(rotation), new Vector3f(0, 0, 1), matrix, matrix) Matrix4f.scale(new Vector3f(scale.x, scale.y, 1f), matrix, matrix) return matrix
1
Is there a way to get which pixel is being processed within the fragment shader?
In OpenGL, a fragment shader runs for each pixel, right? So is it possible (within the shader itself) to get which pixel it is processing, and color each specific pixel accordingly? (A sketch of the kind of thing I'd like to write is below.)
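For illustration, I came across gl_FragCoord and am wondering whether something like the following is the idiomatic way to do this; it's just a sketch (the checkerboard is an arbitrary example):

// gl_FragCoord holds the window-space position of the fragment being shaded,
// so the shader can make per-pixel decisions from it.
const char* fragmentSrc = R"(
    #version 330 core
    out vec4 fragColor;
    void main() {
        // gl_FragCoord.xy is in pixels, e.g. (0.5, 0.5) for the bottom-left pixel.
        ivec2 pixel = ivec2(gl_FragCoord.xy);
        float checker = float((pixel.x / 8 + pixel.y / 8) % 2);   // 8x8 checkerboard
        fragColor = vec4(vec3(checker), 1.0);
    }
)";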
1
How can I find the location of an OpenGL object after rotation?
I have a rotating object, a cube, which I rotate in OpenGL as follows:
gl.glPushMatrix();
gl.glTranslatef(400.0f, 300.0f, 1300.0f);
gl.glRotatef(m_x, 4.0f, 0.0f, 0.0f);
gl.glRotatef(m_y, 0.0f, 4.0f, 0.0f);
gl.glRotatef(m_z, 0.0f, 0.0f, 42.0f);
gl.glCallList(shapeNumber);   // cube 2
gl.glPopMatrix();
(m_x, m_y, and m_z are the change in rotation each frame.) I want to detect a collision between the camera and that rotating object, and to do that I need to get the coordinates of the rotated object in real time. How can I accomplish that?
1
Converting 3D coordinates to 2D and back?
I'm wondering if there is a simple way to convert 3D coordinates to 2D coordinates and, if possible, to convert in the reverse direction as well. I'm using OpenGL (GLUT) in my C++ project, and I am also using SFML for the 2D information (sprites, text, etc.). I found out that I can use gluProject(), but I have no idea how to use it. I'm asking for a simple example of using gluProject(), or another approach, to convert 3D coordinates (such as the player's position) to 2D coordinates. If I can get the simple case working, I'm confident that I can figure out the rest. (A sketch of how I think gluProject() is meant to be used is below.)
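For reference, this is roughly how I think gluProject() is meant to be used; it's a sketch only (worldToScreen and playerX/Y/Z are placeholder names), and the Y flip at the end assumes a 2D library whose origin is the top-left corner:

#include <GL/glu.h>

// Maps an object-space point through the current modelview, projection and
// viewport into window (pixel) coordinates.
bool worldToScreen(double playerX, double playerY, double playerZ,
                   double& screenX, double& screenY)
{
    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winX, winY, winZ;
    if (gluProject(playerX, playerY, playerZ,
                   modelview, projection, viewport,
                   &winX, &winY, &winZ) != GL_TRUE)
        return false;

    screenX = winX;
    screenY = viewport[3] - winY;   // flip Y for a top-left origin
    return true;
}

// gluUnProject() has the same shape for the reverse direction (2D + depth back to 3D).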
1
How can I place a ProgressBar in Android using Cocos2d?
I want to place a horizontal progress bar in my Android application and I want to change its progress color. I used the following code, but the progress bar is not being displayed:
CCProgressTimer progressBar = CCProgressTimer.progress("progressbar.png");
progressBar.setType(kCCProgressTimerTypeHorizontalBarLR);
progressBar.setScale(5);
progressBar.setAnchorPoint(CGPoint.ccp(0, 0));
progressBar.setPosition(CGPoint.ccp(0, 0));
addChild(progressBar);
1
does glBindAttribLocation silently ignore names not found in a shader? Does glBindAttribLocation silently ignore names that are not found? For example, in a shader Some vertex shader in vec3 position in vec3 normal ... And in some set up code While setting up shader GLuint program glCreateProgram() glBindAttribLocation(program, 0, "position") glBindAttribLocation(program, 1, "normal") glBindAttribLocation(program, 2, "color") What about this one? glLinkProgram(program)
1
Replace glTranslatef and glRotatef with matrices I'm not an OpenGL expert, and, as a novice, I prefer to practice a little bit with the old OpenGL just to be sure I correctly understand the basic concepts of computer graphics before dealing with shaders and modern OpenGL (3.x). I don't want to start a flame with this so I'll go through my question. I just know that what I'm using is deprecated. What I want to render is this and I'm drawing it using this piece of code draw grid drawGrid(10, 1) draw a teapot glPushMatrix() glTranslatef(modelPosition 0 , modelPosition 1 , modelPosition 2 ) glRotatef(modelAngle 0 , 1, 0, 0) glRotatef(modelAngle 1 , 0, 1, 0) glRotatef(modelAngle 2 , 0, 0, 1) drawAxis(4) drawTeapot() glPopMatrix() Now, I'd like to replace the last glTranslatef and glRotatef with matrices, and I'm doing it this way Matrix4 matrixModel matrixModel.identity() matrixModel.translate(modelPosition 0 , modelPosition 1 , modelPosition 2 ) matrixModel.rotateX(modelAngle 0 ) matrixModel.rotateY(modelAngle 1 ) matrixModel.rotateZ(modelAngle 2 ) glLoadMatrixf( matrixModel.getTranspose()) And I don't see the teapot anymore. So I thought that this matrixModel is not complete because it is only the model matrix and I need a modelview, so I have to multiply it with the projection matrix, but... this is what I get. Where am I wrong?
1
What is a VAO in OpenGL? I've just started out with OpenGL, and I've got to know what Vertex Buffer Objects are, but I really don't understand what VAOs are. Can someone help me?
1
How can I draw the depth value in GLSL? I want to draw the depth buffer in the fragment shader, I do this Vertex shader varying vec4 position gl Position gl ModelViewProjectionMatrix gl Vertex position gl ModelViewProjectionMatrix gl Vertex Fragment shader float depth ((position .z position .w) 1.0) 0.5 gl FragColor vec4(depth, depth, depth, 1.0) But all I print is white, what am I doing wrong?
1
Restoring projection matrix I am learning to use FBOs and one of the things that I need to do when rendering something onto a user defined FBO is to set up the projection, modelview and viewport for it. Once I am done rendering to the FBO, I need to restore these matrices. I found glPushAttrib(GL VIEWPORT BIT) glPopAttrib() to restore the viewport to its old state. Is there a way to restore the projection and modelview matrices to whatever they were earlier? Tech C OpenGL Thanks!
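A minimal sketch of the usual fixed-function pattern, assuming you just want to save whatever matrices were current, set your own for the FBO pass, and then put the old ones back:

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();                // save the current projection
    glLoadIdentity();
    // ... set the FBO's projection here (glOrtho / gluPerspective / ...) ...

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();                // save the current modelview
    glLoadIdentity();
    // ... set the FBO's view and render into the FBO ...

    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();                 // restore the old modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();                 // restore the old projection
    glMatrixMode(GL_MODELVIEW);

Each matrix mode has its own stack, so the pushes and pops pair up per mode; just note the projection stack is only guaranteed to be 2 entries deep.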
1
How to scale to fit OGL viewport to GLFW window? Say my window is 1280x720. I want to render my stuff at a lower resolution and then stretch it to the window. I've tried this glViewport(0, 0, 1280 / 2, 720 / 2) When I call glViewport and pass a smaller width and height than the window, I get all my OpenGL rendering in the bottom left 1/4 of the window. I need to scale it (linear filtering preferably) back to window size so I would get a pixelated effect. I wonder if there is such a possibility within the glad/GLFW API.
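One way that matches what you describe, as a hedged sketch (it assumes you've already created an FBO named fbo with a 640x360 color attachment; glad/GLFW just need a 3.0+ context for glBlitFramebuffer): render into the small FBO, then blit it stretched onto the default framebuffer.

    // Render the scene at the low resolution.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, 640, 360);
    // ... draw the scene ...

    // Stretch the low-res image over the whole 1280x720 window.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, 640, 360,      // source rectangle (FBO)
                      0, 0, 1280, 720,     // destination rectangle (window)
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

GL_NEAREST keeps the chunky pixelated look; swap in GL_LINEAR if you want it smoothed instead.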
1
How to render portals in OpenGL? I am making an RPG in OpenGL and I need to make some portals. How should I render them if I want to be able to see through a portal to the other side?
1
GL INVALID OPERATION in glGenerateMipmap(incomplete cube map) I'm trying to learn OpenGL and I'm using SOIL to load images. I have the following piece of code GLuint texID 0 bool loadCubeMap(const char baseFileName) glActiveTexture(GL TEXTURE0) glGenTextures(1, &texID) glBindTexture(GL TEXTURE CUBE MAP, texID) const char suffixes "posx", "negx", "posy", "negy", "posz", "negz" GLuint targets GL TEXTURE CUBE MAP POSITIVE X, GL TEXTURE CUBE MAP NEGATIVE X, GL TEXTURE CUBE MAP POSITIVE Y, GL TEXTURE CUBE MAP NEGATIVE Y, GL TEXTURE CUBE MAP POSITIVE Z, GL TEXTURE CUBE MAP NEGATIVE Z for (int i 0 i < 6 i ) int width, height std string fileName std string(baseFileName) " " suffixes i ".png" std cout << "Loading " << fileName << std endl unsigned char image SOIL load image(fileName.c str(), &width, &height, 0, SOIL LOAD RGB) if (!image) std cerr << FUNCTION << " cannot load image " << fileName << " (" << SOIL last result() << ")" << std endl return false glTexImage2D(GL TEXTURE 2D, 0, GL RGB, width, height, 0, GL RGB, GL UNSIGNED BYTE, image) SOIL free image data(image) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE MIN FILTER, GL LINEAR MIPMAP LINEAR) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP R, GL CLAMP TO EDGE) glGenerateMipmap(GL TEXTURE CUBE MAP) glBindTexture(GL TEXTURE CUBE MAP, 0) return true When I call this, the images load successfully, but then I get an error in the console OGL DEBUG message <1> 'API' reported 'Error' with 'High' severity GL INVALID OPERATION in glGenerateMipmap(incomplete cube map) BACKTRACE and no cubemap is displayed at all. Do you see any mistake in this code?
1
OpenGL fovx question To boil my question down to the simplest form, I fear I am oversimplifying how mat4 perspective works. I am using mat4.perspective(45, 2, 0.1, 1000.0) (the binding is WebGL fwiw). With a fovy of 45, and an aspect ratio of 2, I expect to have a fovx of 90. Thus, if I position my camera at (0, 0, 50), looking towards the origin, I expect to see a cube positioned at (50, 0, 0) (45 degrees) right at the very periphery of my screen, half on, half off. Instead, a cube at (50, 0, 0) is totally off screen, and my actual periphery occurs at about (41.1, 0, 0). What am I missing here? Thanks, nick
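For what it's worth, a worked check assuming gluPerspective-style semantics for mat4.perspective: the horizontal and vertical fields of view are related through their tangents, not linearly, i.e. tan(fovx / 2) = aspect * tan(fovy / 2). With fovy = 45 and aspect = 2 that gives fovx = 2 * atan(2 * tan(22.5 deg)), which is about 79.3 degrees rather than 90. From (0, 0, 50) the edge of the view then sits at x = 50 * tan(39.65 deg), roughly 41.4, which lines up with the ~41.1 periphery you're measuring.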
1
How do I create good looking plasma explosion effects? Is this just a billboard quad with a bloom shader?
1
Most efficient way to draw large number of the same objects, but with different transforms I'd like to draw a large number (multiple thousands) of simple meshes (each maybe... a maximum of 50 triangles, but even that is a very large upper bound), all of which are exactly the same. As a simple brute force way, right now I'm just doing thousands of draw calls, where I just change the transformation matrices that the shaders need and then draw the same objects again. Of course this is hideously slow, since (I'm guessing) there are too many draw calls for the driver and my PC to handle efficiently. What can I do to make it faster?
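In case it helps to see the shape of it, here's a rough sketch of instanced rendering (OpenGL 3.3+), which is the usual answer to "same mesh, thousands of transforms": upload all the model matrices into one buffer each frame and issue a single draw call. Names like instanceVbo and indexCount, the attribute slots 3..6, and the use of GLM for the matrix type are all assumptions for the example.

    // One mat4 per instance, streamed to the GPU each frame.
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(glm::mat4),
                 instances.data(), GL_STREAM_DRAW);

    // A mat4 attribute occupies 4 consecutive attribute slots (here 3..6).
    for (int i = 0; i < 4; ++i) {
        glEnableVertexAttribArray(3 + i);
        glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                              (void*)(sizeof(glm::vec4) * i));
        glVertexAttribDivisor(3 + i, 1);  // advance once per instance, not per vertex
    }

    // One draw call covers every instance; the vertex shader reads the
    // per-instance mat4 (or gl_InstanceID) to place each copy.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr,
                            (GLsizei)instances.size());

With ~50 triangles per mesh this turns thousands of draw calls into one, which is usually where the time was going.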
1
Efficient Rendering for multiple light sources I'm building a 3D rendering engine using C and OpenGL. Right now, I've added support for multiple light sources in my GLSL shaders, but I've hit a bit of a bump for my rendering methods. I'm worried about my engines performance if it has to render a lot of objects for a lot of light sources. The straight forward approach would be something like this for every object to be rendered closestLights for every light in the scene if (light closer than all lights in closestLights) swap furthest light from closestLights with current light loadLightSourcesToShader(closestLights) renderObject(object) However, I'm afraid the performance of this rendering loop will degrade quickly since when either the amount of lights or the amount of objects increases, the amount of calculations will rise even quicker. 1) Is there any way to circumvent this? 2) How is this implemented in 'real' engines? 3) How can I find the closest K lightsources to an object efficiently? I've heard about kd trees and octrees, but I feel like building a tree everytime I render is probably worse than what I'm thinking about doing right now. 4) I could keep an octree and a kd tree for every object (which keeps track of all other objects), which when using (smart) pointers, could be quite efficient in terms of time (but not space?). Is this a good approach? I know the saying 'Premature optimization is the root of all evil', but I'm pretty sure this WILL become a problem when a game program reaches a certain complexity size.
1
How many OpenGL programs should I use to render multiple objects? My scene has multiple objects in it. (Let's say 3 cubes, 1 cylinder, 8 spheres.) I assume I should create a vertex shader for each. How many programs should I have? Alternatives One program per object One program for all cubes and another for all spheres (assuming they use the same shaders) One large program for everything What is the correct approach?
1
What advantages does multisampling have over supersampling? I never really fully understood this, or found an article which explained all the steps in a friendly way. I'll start with what I do know already (which I hope does not contain misconceptions). I'm pretty sure allocating a multi sampled frame buffer requires as many times the memory (of a regular buffer) as the number of samples (N). This makes sense because each pixel may be sampled up to N times. During rasterization, the GPU generates a fragment for the MS frame buffer by testing if each sample is inside of the geometry being drawn. This is what provides edge anti aliasing. Each sample produces a fragment. I'm unsure about what occurs when all samples of a pixel are inside the geometry. How many fragments are generated? Is this configurable? What if I want to sample the "inside" pixels 4 times, and the edge pixels 16 times? This would require a 16x MS frame buffer. Are there other differences? It seems like if the fragment shader is run once on each sample then we are left with something not much different from basic supersampling with the exception of jittered sample locations. Actually, I'm also a bit unsure about what a fragment really is. It seems like a fragment shader gets (can get) executed more than once per pixel in a multi sampled scene, however this doesn't seem to necessarily mean that a fragment is more related to the sample than the pixel. Is a fragment best thought of as a sample, a pixel, or something else?
1
Can you sync screen update on vertical retrace with OpenGL? In OpenGL, is there a way to ensure I get exactly, no more, no less, 60 (or whatever rate my monitor is set for) frames per second? Of course given that the new frame can be calculated in less than 1/60 second. I was thinking of Windows more than Linux or Mac OS X, even though it is interesting to keep an eye on portability.
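A sketch of the usual answer: enable vsync on the buffer swap, so swapping blocks until the next retrace; if every frame is ready in under 1/60 s you then get exactly the monitor rate. How you turn it on depends on how the context was created; on Windows it is the WGL_EXT_swap_control extension (the GLFW line is only shown as a portable alternative if that's what creates your window).

    // Windows / WGL (extension WGL_EXT_swap_control):
    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int);
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // wait for one vertical retrace per SwapBuffers

    // Or, if the window comes from GLFW:
    // glfwSwapInterval(1);

Note the driver control panel can override this, and with plain vsync a frame that takes longer than 1/60 s drops you to 30 fps for that frame.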
1
Understanding how OpenGL blending works I am attempting to understand how OpenGL (ES) blending works. I am finding it difficult to understand the documentation and how the results of glBlendFunc and glBlendEquation effect the final pixel that is written. Do the source and destination out of glBlendFunc get added together with GL FUNC ADD by default? This seems wrong because "basic" blending of GL ONE, GL ONE would output 2,2,2,2 then (Source giving 1,1,1,1 and dest giving 1,1,1,1). I have written the following pseudo code, what have I got wrong? struct colour float r, g, b, a colour blend factor( GLenum factor, colour source, colour destination, colour blend colour ) colour colour factor float i min( source.a, 1 destination.a ) From http www.khronos.org opengles sdk docs man xhtml glBlendFunc.xml switch( factor ) case GL ZERO colour factor 0, 0, 0, 0 break case GL ONE colour factor 1, 1, 1, 1 break case GL SRC COLOR colour factor source break case GL ONE MINUS SRC COLOR colour factor 1 source.r, 1 source.g, 1 source.b, 1 source.a break ... return colour factor colour blend( colour amp source, colour destination, GLenum source factor, from glBlendFunc GLenum destination factor, from glBlendFunc colour blend colour, from glBlendColor GLenum blend equation from glBlendEquation ) colour source colour blend factor( source factor, source, destination, blend colour ) colour destination colour blend factor( destination factor, source, destination, blend colour ) colour output From http www.khronos.org opengles sdk docs man xhtml glBlendEquation.xml switch( blend equation ) case GL FUNC ADD output add( mul( source, source colour ), mul( destination, destination colour ) ) case GL FUNC SUBTRACT output sub( mul( source, source colour ), mul( destination, destination colour ) ) case GL FUNC REVERSE SUBTRACT output sub( mul( destination, destination colour ), mul( source, source colour ) ) return output void do pixel() colour final colour Blending if( enable blending ) final colour blend( current colour output, framebuffer pixel , ... ) else final colour current colour output Thanks!
1
How to achieve cavalier projection using OpenGL fixed pipeline? I want to make a quick demo program showing a cube, or a user loaded model, rotating on screen, rendered with one of three projections: perspective, isometric and cavalier. Using the fixed pipeline, how can I build a projection matrix for cavalier projection? I think I can start with the orthographic projection matrix and then tweak the values, by eye, until the vertices get shifted further to the right and up the farther away their z is. I want lines parallel to the z axis to be rendered as vertical lines rotated 45 degrees to the right.
1
Positioning a texture inside a 3D object with GLSL I have a 3D object in my scene and a texture that is the same size as the screen (a render to texture). Is there a way to make the object act like a "mask" for the texture (using GLSL), so the texture is aligned with the screen 2D space but only shows what is "inside" the mask object? What I'm trying to achieve here is this I have a 3D scene made of cubic tiles, some of these tiles are going to be water and I want to distort whatever is behind them. My idea was to pass the render to texture and then distort it to make a refraction effect. Is this going to work? Am I even supposed to do refraction like this?
1
Help in understanding atmospheric scattering shading I have a made a planet and wanted to make an atmosphere around it. So I was referring to this site Click to visit site I don't understand this As with the lookup table proposed in Nishita et al. 1993, we can get the optical depth for the ray to the sun from any sample point in the atmosphere. All we need is the height of the sample point (x) and the angle from vertical to the sun (y), and we look up (x, y) in the table. This eliminates the need to calculate one of the out scattering integrals. In addition, the optical depth for the ray to the camera can be figured out in the same way, right? Well, almost. It works the same way when the camera is in space, but not when the camera is in the atmosphere. That's because the sample rays used in the lookup table go from some point at height x all the way to the top of the atmosphere. They don't stop at some point in the middle of the atmosphere, as they would need to when the camera is inside the atmosphere. Fortunately, the solution to this is very simple. First we do a lookup from sample point P to the camera to get the optical depth of the ray passing through the camera to the top of the atmosphere. Then we do a second lookup for the same ray, but starting at the camera instead of starting at P. This will give us the optical depth for the part of the ray that we don't want, and we can subtract it from the result of the first lookup. Examine the rays starting from the ground vertex (B 1) in Figure 16 3 for a graphical representation of this. First Question isn't optical depth dependent on how you see that is, on the viewing angle? If yes, the table just gives me the optical depth of the rays going from land to the top of the atmosphere in a straight line. How to find the optical depth in the case where the ray pierces the atmosphere and goes through it to the camera? Second Question What is the vertical angle it is talking about...like, is it the same as the angle with the z axis as we use in polar coordinates? (I am having a very hard time understanding this angle) Third Question The article talks about scattering of the rays going to the sun..shouldn't it be the other way around? like coming from the sun to a point? Any explanation on the article or on my questions will help a lot. Thanks in advance!
1
Why am I having these weird framerate issues with OpenGL on Windows? I'm using OpenGL on Windows (have been for a while now), and I've come across a strange issue. Once every so often, the rate at which frames are presented on the screen drops to roughly 10 fps. However, my framerate counter stays at the usual framerate (2000fps in the menu, 300fps in game). My framerate counter is based on the time between draw calls, so the graphics card is definitely rendering 2000 frames a second. What is the problem? How can I fix it? EDIT I forgot to mention, this only happens when running in windowed mode.
1
What should the Z coordinate be after being transformed by the projection matrix? I'm working on an OpenGL 1.x implementation for the Sega Dreamcast. Because the Dreamcast didn't have any hardware T&L the entire vertex transformation pipeline has to be done in software. What also has to be done in software is clipping to the near Z plane as failure to clip results in the polygon being dropped entirely from rendering by the hardware. I'm having some trouble getting the transform clip perspective divide process working correctly and basically I can sum up the problem as follows I transform polygon vertices by the modelview and projection matrices I clip each polygon against the W 0.000001 plane This results in new vertices on the near plane with a W of 0.000001, but a Z which is twice the near plane distance At perspective divide vertex.z / vertex.w results in an extreme value because we're dividing a Z value (e.g. 0.2) by 0.000001 Something seems very wrong here. The projection matrix is being generated in the same way as described in the glFrustum docs. So my question is, if I have a coordinate on the near plane, should its Z value be zero after transform by the projection matrix or should it be the near z distance, or something else? After clipping polygons to the W 0.000001 plane, should the generated Z coordinates be 0.000001? Update Here is the projection matrix as calculated by gluPerspective(45.0f, 640.0f / 480.0f, 0.44f, 100.0f) 1.810660 0.000000 0.000000 0.000000 0.000000 2.414213 0.000000 0.000000 0.000000 0.000000 -1.008839 -0.883889 0.000000 0.000000 -1.000000 0.000000 Does this look correct? It's the value in the right hand column I'm not sure about...
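For what it's worth, a worked number using that exact matrix (row-major as printed, n = 0.44): a vertex sitting exactly on the near plane has eye-space z = -0.44, so clip z = -1.008839 * (-0.44) - 0.883889 = -0.44 and clip w = -z_eye = 0.44, giving NDC z = z / w = -1 after the divide (the near plane maps to -1, the far plane to +1, and the viewport depth-range transform then maps that to 0..1). Also note that for this matrix w_clip = -z_eye, so clipping against w = 0.000001 is clipping against the plane z_eye = -0.000001, essentially the camera plane, not the near plane at 0.44; vertices generated there are still well in front of the near plane, which is why z / w blows up.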
1
Anti Aliasing in OpenGL C I'm trying to make anti aliasing work inside of OpenGL, here's what I've tried glEnable(GL POINT SMOOTH) glHint(GL POINT SMOOTH HINT, GL NICEST) glEnable(GL LINE SMOOTH) glHint(GL LINE SMOOTH HINT, GL NICEST) glEnable(GL POLYGON SMOOTH) glHint(GL POLYGON SMOOTH HINT, GL NICEST) glEnable(GL BLEND) glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) But so far none of these have worked. I have gotten antialiasing to work by enabling it on the control panel for my video card (Catalyst Control Center in my case), but I would like to get it working inside my program instead. This is what the rendering looks like with 4x antialiasing enabled via the video card control panel And this is what it looks like when I do it with my program How do I get antialiasing to work?
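For context (hedged, since it depends on your windowing setup): the GL_*_SMOOTH flags are the old per-primitive smoothing and are unreliable for filled geometry, while the control panel checkbox you flipped is MSAA, which has to be requested when the window's pixel format and context are created. If the window happens to come from GLFW it looks like this; SDL has the equivalent SDL_GL_MULTISAMPLEBUFFERS / SDL_GL_MULTISAMPLESAMPLES attributes, and raw Win32 needs a multisampled pixel format via WGL_ARB_pixel_format.

    // Ask for a 4x multisampled default framebuffer before creating the window.
    glfwWindowHint(GLFW_SAMPLES, 4);
    GLFWwindow* window = glfwCreateWindow(1280, 720, "AA test", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    // Usually on by default for a multisampled framebuffer, but explicit is safe:
    glEnable(GL_MULTISAMPLE);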
1
Are matrices calculated on the GPU or on the CPU? Would built in matrix functions be faster than my custom ones? If I add a math library (for example containing a Matrix class) and use it in my program drawing with OpenGL, will my program work slower than if I used standard OpenGL functions for matrix calculations? Does the same hold true for DirectX?
1
Texturing a PyOpenGL 3D Cube with PySDL2 So, I've just started learning OpenGL with PySDL2, and I've created a class that will create a cube to the window that I've created with PySDL2. What I'd like to do now, is to figure out a way to texture the cube. I don't believe that the actual texturing part is the problem, but actually loading the image. I've tried to do this with PySDL's image loader, but this causes an error from the OpenGL texture creator. Here is my code to load an texture and bind it to OpenGL def LoadTexture(self, filename) """Loads a texture for the cube""" surface SDL LoadBMP(filename) Checks if the loading succeeded if surface Translate the LP SDL Surface pointer got from SDL LoadBMP() to a real SDL Surface texture surface surface.contents texture format GL.GL RGBA GL TEXTURE ID GL.glGenTextures(1) GL.glBindTexture(GL.GL TEXTURE 2D, GL TEXTURE ID) GL.glPixelStorei(GL.GL UNPACK ALIGNMENT, 1) GL.glTexImage2D(GL.GL TEXTURE 2D, 0, 3, texture surface.w, texture surface.h, 0, texture format, GL.GL UNSIGNED BYTE, texture surface) return GL TEXTURE ID return None And the error that gives looks like this TypeError ("No array type handler for type lt class 'sdl2.surface.SDL Surface' gt (value lt sdl2.surface.SDL Surface object at 0x030762B0 gt ) registered", lt OpenGL.GL.images.ImageInputConverter object at 0x02EC49F0 gt ) I'm using Python 3 so using something like PIL won't work for me. Any ideas how to get the texture loaded, preferably with PySDL2?
1
What does glMultiDraw do? I'm having trouble understanding exactly what glMultiDraw does, and when it should be used. Is it to be used if I have one VBO with multiple objects in it or do I use it with many VBO's? It would be great if someone could give a real example of how to use this is a VAO VBO as all the stuff I've found on google just explain what the function does rather that how to use it practically.
1
Random lines away from images in java2d opengl game Working on a Java game that uses some PNG images for icons/textures. A few images (fewer than 5, out of dozens) are showing some odd parallel "artifact" lines. They move with the image, and are certainly not in the PNG itself. I can't figure out what's causing it. Using Slick2D and LWJGL. I've tried Slick2D and OpenGL methods of clamping textures but that didn't help. This black line is showing off the right hand side of an icon (maybe 50% of the actual image width away). It should not be there.
1
OpenGL pitching problem I've been trying to implement several camera movements for my application. So far yawing, rolling, strafing, walking has been working properly, but I can't get my pitching to work properly. If I continue pitching upwards, it doesn't rotate back 360 degree, and instead gets stuck looking at the top of the screen. Here's my code for Camera class class Camera float m 16 Vector dirForward, dirUp, dirRight void loadCameraDirections() glGetFloatv(GL MODELVIEW MATRIX, m) dirForward Vector(m 2 , m 6 , m 10 ).unitVector() dirUp Vector(m 1 , m 5 , m 9 ).unitVector() dirRight Vector(m 0 , m 4 , m 8 ).unitVector() public Vector position Vector lookAt Vector up Camera() position Vector(300, 0, 100) lookAt Vector(0, 0, 100) up Vector(0, 0, 1) void rotatePitch(double rotationAngle) glPushMatrix() glRotatef(rotationAngle, dirRight.x, dirRight.y, dirRight.z) loadCameraDirections() glPopMatrix() lookAt position.repositionBy(dirForward.reverseVector()) cam I call the gluLookAt funtion by gluLookAt(cam.position.x, cam.position.y, cam.position.z, cam.lookAt.x, cam.lookAt.y, cam.lookAt.z, cam.up.x, cam.up.y, cam.up.z)
1
Animation of a circle moving around a square's outer surface I want to make this animation in OpenGL, here I attached a simple gif of how I want it to look. The main problem is that I cannot figure out how to move the circle around the corners: it should move smoothly around a corner, not just jump to the start position of the next side. import Blender from Blender import Draw,BGL from Blender.BGL import from math import sin,cos import time squareLength, squareX, squareY, circleRadius 200, 200, 200, 20 movingX, movingY 0, 0 def event(evt, val) if evt Draw.ESCKEY Draw.Exit() def gui() global movingX, movingY glClearColor(0.17, 0.24, 0.31, 1.0) glClear(BGL.GL COLOR BUFFER BIT) glLineWidth(1) glColor3f(0.74, 0.76, 0.78) glBegin(GL QUADS) glVertex2i(squareX, squareY) glVertex2i(squareX, squareY squareLength) glVertex2i(squareX squareLength, squareY squareLength) glVertex2i(squareX squareLength, squareY) glEnd() glColor3f(0.58, 0.65, 0.65) glPushMatrix() if movingX circleRadius if movingY < squareLength movingY 1 else movingX 0 movingY squareLength circleRadius else if movingY squareLength circleRadius if movingX < squareLength movingX 1 else movingX squareLength circleRadius movingY squareLength else if movingY < 0 if movingX > 0 movingX 1 else movingX circleRadius movingY 0 else if movingY > 0 movingY 1 else movingX squareLength movingY circleRadius glTranslatef(movingX, movingY, 0) glBegin(GL LINE LOOP) for i in xrange(0, 360, 1) glVertex2f(squareX sin(i) circleRadius, squareY cos(i) circleRadius) glEnd() glPopMatrix() Draw.Redraw(1) Draw.Register(gui, event, None) Can you please tell me how I can change this code to make the circle move around the corners? I have just started learning computer graphics and I make up exercises like this for practice, so I would really appreciate any help.
1
GLSL sampler2D fallback to constant color? So I have the following situation: I'm sharing a Blinn shader across many meshes. Some meshes have specular & normal maps, others do not. I'd like to, without making the shader code too complicated, be able to specify a constant color instead of a texture, for the normal or specular maps. This is for example if a given mesh doesn't need one of those maps. The way I imagine it, I would just pass a flat "grey" as the specular map, for instance, and the shader could just act as if a texture was passed in. Is this possible? Ideally, I don't want to have an extra uniform for each mesh specifying whether or not the texture should be used. Another alternative would be to actually create a grey texture on the fly; if this is the better way, please advise on the simplest way to do this.
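Regarding the "grey texture on the fly" option, it really is only a few lines, and the shader can stay exactly as it is: bind a 1x1 texture whenever a mesh has no real map. A hedged sketch (the flat-normal value 128,128,255 assumes tangent-space normal maps):

    GLuint makeSolidTexture(unsigned char r, unsigned char g, unsigned char b)
    {
        GLuint tex;
        unsigned char pixel[4] = { r, g, b, 255 };
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixel);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    }

    // Once at startup:
    GLuint defaultSpecular = makeSolidTexture(128, 128, 128); // mid grey
    GLuint defaultNormal   = makeSolidTexture(128, 128, 255); // "flat" tangent-space normal

Every sampler always has something bound, so there is no per-mesh uniform and no branching in the shader.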
1
Any GL transformation not working Today I was trying to make a test camera with a new method (I usually use gluLookAt), and I ran into a problem. void GameDraw() glPushMatrix() glMatrixMode(GL PROJECTION) glLoadIdentity() gluPerspective(90, WIDTH / HEIGHT, 0.001, 10000.0) pitch 1 glRotatef(pitch, 1.0, 0.0, 0.0) glRotatef(yaw, 0.0, 1.0, 0.0) glMatrixMode(GL MODELVIEW) glLoadIdentity() s->bind() test->drawMesh() glPopMatrix() And my draw function is glDrawElements(GL TRIANGLES, indices, GL UNSIGNED INT, nullptr) If it isn't working with glDrawElements, is there any alternative to use? Edit Disabling the shader, it works fine, but how can I use this with a shader? Please don't use GLM examples, because I don't like to use it; I feel more comfortable using my own math
1
What is a fast way to darken the vertices I'm rendering? To make a lighting system for a voxel game, I need to specify a darkness value per vertex. I'm using GL COLOR MATERIAL and specifying a color per vertex, like this glEnable(GL COLOR MATERIAL) glBegin(GL QUADS) glColor3f(0.6f, 0.6f, 0.6f) glTexCoord2f(...) glVertex3f(...) glColor3f(0.3f, 0.3f, 0.3f) glTexCoord2f(...) glVertex3f(...) glColor3f(0.7f, 0.7f, 0.7f) glTexCoord2f(...) glVertex3f(...) glColor3f(0.9f, 0.9f, 0.9f) glTexCoord2f(...) glVertex3f(...) glEnd() This is working, but with many quads it is very slow. I'm using display lists too. Any good ideas on how to make vertices darker?
1
Client side arrays in the OpenGL 3.3 core profile Is this possible (drawing without using a VBO)? In the OpenGL 3.0 compatibility profile I can draw this way GLint position index attrib location get("VertexPosition") gl EnableVertexAttribArray(position index) gl VertexAttribPointer(position index, 3, gl FLOAT, false, 0, pos Data) gl DrawArrays(gl TRIANGLES, 0, count of vertices) But in the OpenGL 3.3 core profile it displays a blank screen. Is that right?
1
Basic 2D Lighting Optimization Issue in Fragment Shader with OpenGL (GLSL) I'm using a fragment shader to implement 2D lighting (code further below). Even though I am satisfied with the visuals of the light, I noticed that it has quite a big GPU usage, and when trying to add about 40 light sources the usage is close to 100% (GTX 1050). I have a uniform array of structs that contains data about each light source and a for loop that goes through all of them. At first I thought I was pushing too much data to the GPU so I combined the RGB values of the light color in a single 32 bit integer and the two strengths of the light in a single 32 bit integer as well. Then I tried simplifying the formulas I used (using a composed, by composed I'm not referring to the operation, function from multiple linear functions) but it seemed that just made matters worse. I think it's worth noting the difference between LightStrength and VisualStrength values that I used in the code, the LightStrength is the strength of the light that lights up the medium around it and the VisualStrength is the strength of the colored hue around the light. And there is also a dark hue variable that is used to make the scene darker, as in the different times of the day. The code of the fragment shader version 450 core in vec2 texCoord0 uniform vec3 CameraPosition uniform mat4 Projection uniform float DarkHue uniform sampler2D u Texture uniform vec2 u resolution struct LightSource vec2 Position int LightColor int ArgumentValue uniform LightSource LightSources 300 uniform int LightSourceCount float GetLightfactor(float x,float Streght) return min(1 (x (Streght 100.0) 1),1) void main() gl FragColor texture2D(u Texture,texCoord0) vec3 LightSum vec3(0) vec4 PCameraPosition vec4(CameraPosition,0) Projection vec2 NormalizedPosition gl FragCoord.xy 2 u resolution 1 float LightFactor,VisualFactor,LightStreght,VisualStreght for (int i 0 i < LightSourceCount i) vec4 Pos vec4(LightSources i .Position,0,0) Projection PCameraPosition vec2 coord (NormalizedPosition Pos.xy) u resolution LightFactor 0.0 VisualFactor 0.0 LightStreght LightSources i .ArgumentValue & 0xffff VisualStreght (LightSources i .ArgumentValue >> 16) & 0xffff float lng length(coord) LightFactor GetLightfactor(lng,LightStreght) VisualFactor GetLightfactor(lng,VisualStreght) LightSum mix(LightSum,vec3(1),gl FragColor.rgb LightFactor (1 DarkHue)) vec3(((LightSources i .LightColor >> 16) & 0xff) 255.0,((LightSources i .LightColor >> 8) & 0xff) 255.0,(LightSources i .LightColor & 0xff) 255.0) VisualFactor gl FragColor.rgb DarkHue gl FragColor.rgb LightSum The code of the c function that adds a light source. (Yes, when setting uniforms, caching is used) static void AddLightSource(Vec2 Position, uint8 t R, uint8 t B, uint8 t G, uint16 t LightStrenght,uint16 t VisualStrenght) std string access "LightSources[" std to string(ActiveLightSources) "]" int Value (VisualStrenght << 16) LightStrenght int Color (R << 16) (G << 8) B Vec3 Translated VertexArrayManager TranslateValue shader->setUniform2f(access ".Position", glm vec2(Position.x Translated.x,Position.y Translated.y)) shader->setUniform1i(access ".LightColor", Color) shader->setUniform1i(access ".ArgumentValue", Value) ActiveLightSources shader->setUniform1i("LightSourceCount", ActiveLightSources)
1
3D sphere generation I have found a 3D sphere in a closed source game engine that I really like the look of and would like to have something similar in my own 3D engine. This is how the sphere looks like when it is created in the game engine, at program game start At program start, a function named CreateSphere is called and the user has the option to choose a 3D position and a radius of the sphere. That's all I know about the function since the engine is closed source. Anyone have any idea of how this sphere might be generated programmatically? I have checked other posts sites discussing spheres but none of them has the look of the sphere in the image. Edit removed some unnecessary information to get to the point of what I need help with.
1
Graphical mesh lags behind collision shape in BulletPhysics debug drawing In Bullet Physics, when I debug draw the physics world the graphical mesh lags behind the collision shape. m DynamicsWorld->stepSimulation(1 / 60.0f, 10) btTransform trans, trans2 trans2 m RigidBodies.at("CubeMesh")->m Body->getWorldTransform() m RigidBodies.at("CubeMesh")->m Body->getMotionState()->getWorldTransform(trans) using m RigidBodies.at("CubeMesh")->m Body->getMotionState()->getWorldTransform(trans) causes the graphical mesh to lag behind the collision shape in the DebugDraw call Checking with the debugger shows that the lag error keeps on increasing glm mat4 M trans.getOpenGLMatrix(glm value ptr(M)) m RigidBodies.at("CubeMesh")->GetGraphicalObject()->Model M if (m DebugDraw) m DynamicsWorld->debugDrawWorld() If I use the OpenGL matrix from trans2 then debug drawing is perfect. But with trans the graphical mesh lags behind the collision shape in debug draw. I am using btDefaultMotionState when setting up the CubeMesh rigid body. btTransform tr tr.setIdentity() tr.setOrigin(btVector3(0.0f, 90.0f, 0.0f)) cubeMesh->m PhysicsBody->m MotionState new btDefaultMotionState(tr) Can anyone explain the reason for this lag?
1
Should I distribute shaders in a compiled form or in plain text? Having an application that uses shaders written in GLSL, what is the best strategy for distributing them in the real world, for both desktop and mobile? I'm aiming to distribute them either in a binary form or as plain serialized text, and I would like a good suggestion on this.
1
Why is the size of glm's vec3 struct 12 bytes? When trying to determine the size of glm vec3 (from the GLM math library) by using the sizeof operator like so sizeof(glm vec3) I get 12 returned. When I look at the definition of the vec3 struct I see this template <typename T, precision P defaultp> struct tvec3 Implementation detail typedef tvec3 <T, P> type typedef tvec3 <bool, P> bool type typedef T value type ifdef GLM META PROG HELPERS static GLM RELAXED CONSTEXPR length t components 3 static GLM RELAXED CONSTEXPR precision prec P endif GLM META PROG HELPERS Data if GLM HAS ANONYMOUS UNION union struct T x, y, z struct T r, g, b struct T s, t, p ifdef GLM SWIZZLE GLM SWIZZLE3 2 MEMBERS(T, P, tvec2, x, y, z) GLM SWIZZLE3 2 MEMBERS(T, P, tvec2, r, g, b) GLM SWIZZLE3 2 MEMBERS(T, P, tvec2, s, t, p) GLM SWIZZLE3 3 MEMBERS(T, P, tvec3, x, y, z) GLM SWIZZLE3 3 MEMBERS(T, P, tvec3, r, g, b) GLM SWIZZLE3 3 MEMBERS(T, P, tvec3, s, t, p) GLM SWIZZLE3 4 MEMBERS(T, P, tvec4, x, y, z) GLM SWIZZLE3 4 MEMBERS(T, P, tvec4, r, g, b) GLM SWIZZLE3 4 MEMBERS(T, P, tvec4, s, t, p) endif GLM SWIZZLE other code...... For which I see three structs, each with three member variables of templated type T, which in GLM defaults to float type. My question is why is the sizeof() operator returning 12 bytes as the size of glm vec3 when it looks like it should be 36 bytes: 3 structs, with 3 float members each, 3 * 3 * 4 (number of bytes in a float) = 36.
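The key detail is the anonymous union those three structs sit inside (when GLM HAS ANONYMOUS UNION is defined): members of a union overlap in memory instead of being laid out one after another, so x/r/s are the same float, y/g/t the same, and z/b/p the same, which makes the whole thing 3 floats = 12 bytes. A tiny standalone illustration (anonymous structs inside a union are a compiler extension, which is exactly why GLM guards them behind that macro):

    #include <cstdio>

    struct Vec3Like {
        union {
            struct { float x, y, z; };
            struct { float r, g, b; };
            struct { float s, t, p; };
        };
    };

    int main() {
        std::printf("%zu\n", sizeof(Vec3Like)); // prints 12: the structs share storage
        Vec3Like v;
        v.x = 1.0f;
        std::printf("%f\n", v.r);               // prints 1.0 because r aliases x
    }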
1
Vertex data split into separate buffers or in one structure? Is it better to have all vertex data in one structure like this class MyVertex int x,y,z int u,v int normalx, normaly, normalz Or to have each component (location, normal, texture coordinates) in separate arrays/buffers? To me it always seemed logical to keep the data grouped together in one structure because they'd always be the same for each instance of a shared vertex, and that seems to be true for things like character models (e.g. the normal should be an average of adjacent normals for smooth lighting). One instance where this doesn't seem to work is other kinds of meshes, like say a cube, where the texture coordinates for each face may be the same but that causes them to be different where the vertices are shared. Are they normally kept separate? Won't this make them less space efficient if there needs to be an instance of texture coordinates and normals for each vertex (they won't be indexed)? Can OpenGL even handle this mixing of indexed (for location) vs non indexed buffers in the same VBO?
1
LibGDX 2D Silhouette Recently I decided to implement in my game the drawing of a silhouette of the player when he is behind objects (the top layer of the map). I found a similar question here, but it didn't really help me understand anything. Can someone please explain how it all works? I will be very grateful to you! Specifically, I'm interested in how the vertex and fragment shaders are created, and why only one shader is passed to the two setShader methods. Code that I do not understand Rendering the upper map layer Simple "if (gl FragColor.a 0.0) discard" fragment shader renderer.getBatch().setShader(shader) Rendering the silhouettes "gl FragColor vec4(0.0, 1.0, 1.0, 0.2) texture2D(u texture, v texCoord0).a" batch.setShader(shader) EDITS I tried adding this code and this is what I got Java code above render the normal player texture outside the shader code mapRenderer.render(lowerLayer) player.update(mapMgr, game.batch, delta, world) mapRenderer.render(upperLayer) Gdx.gl20.glClear(GL20.GL STENCIL BUFFER BIT) Gdx.gl20.glEnable(GL20.GL STENCIL TEST) Gdx.gl20.glStencilFunc(GL20.GL ALWAYS, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL REPLACE, GL20.GL REPLACE, GL20.GL REPLACE) mapRenderer.getBatch().setShader(shader) mapRenderer.render(upperLayer) mapRenderer.getBatch().setShader(null) Gdx.gl20.glStencilFunc(GL20.GL LEQUAL, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL KEEP, GL20.GL KEEP, GL20.GL KEEP) game.batch.setShader(shader) player.update(mapMgr, game.batch, delta, world) game.batch.setShader(null) Gdx.gl20.glDisable(GL20.GL STENCIL TEST) Fragment GLSL code varying vec4 v color varying vec2 v texCoords uniform sampler2D u texture void main() vec4 c vec4(.5, .5, .5, texture2D(u texture, v texCoords).a) if (c.a 0.0) discard gl FragColor c It didn't work out that way. In addition to the fact that the color of the top layer changes, the player is also completely painted, regardless of whether he is under the top layer or not. I think the problem is in some of this This part of the code does not work. Because even if you remove it, nothing changes. Gdx.gl20.glClear(GL20.GL STENCIL BUFFER BIT) Gdx.gl20.glEnable(GL20.GL STENCIL TEST) Gdx.gl20.glStencilFunc(GL20.GL ALWAYS, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL REPLACE, GL20.GL REPLACE, GL20.GL REPLACE) Gdx.gl20.glStencilFunc(GL20.GL LEQUAL, 0x1, 0xFF) Gdx.gl20.glStencilOp(GL20.GL KEEP, GL20.GL KEEP, GL20.GL KEEP) Gdx.gl20.glDisable(GL20.GL STENCIL TEST) Fragment shader is not written correctly I spent all day solving this problem. Nothing works. I hope someone will figure it out! Thanks.
1
LWJGL Lighting advice fix Problem The spotlight I've set up with OpenGL won't light up anything behind it, no matter what values I set for GL SPOT DIRECTION float LightDir new float 0,0, 1,0 float LightPos new float 0,0,15f,1 Initialization code for lighting glEnable(GL LIGHTING) glEnable(GL LIGHT0) glLightModel(GL LIGHT MODEL AMBIENT,asFlippedFloatBuffer(new float 0.1f, 0.1f, 0.1f, 1f )) glLight(GL LIGHT0,GL DIFFUSE,asFlippedFloatBuffer(new float 0.5f, 0.5f, 0.5f, 1f )) glLight(GL LIGHT0, GL POSITION, asFlippedFloatBuffer(new float 0, 0, 0, 1 )) glLightf(GL LIGHT0,GL SPOT CUTOFF,60.0f) glLightf(GL LIGHT0,GL SPOT EXPONENT,2.0f) glEnable(GL COLOR MATERIAL) glColorMaterial(GL FRONT AND BACK, GL AMBIENT AND DIFFUSE) glShadeModel(GL SMOOTH) Lighting code run every loop glLight(GL LIGHT0,GL SPOT DIRECTION,asFlippedFloatBuffer(LightDir)) glLight(GL LIGHT0,GL POSITION,asFlippedFloatBuffer(LightPos)) The following shows To try and change the direction I hit a key that does LightDir 1 The angle of the light goes up as expected, but seems to stop turning once it's pointed straight up. Which looks like Which is the problem: it won't turn around. I've tried every possible value for LightDir, but I cannot get it to illuminate the back of the tube. I've even tried to transform the light around the scene with glRotate and glTranslate. The light is supposed to function as a flashlight. Is this something that would be potentially solved with shaders? Thanks in advance! EDIT Anytime LightDir 2 goes above 0 the light goes dark.
1
glScalef game math issue I can't get my head wrapped around this issue. The issue is my latest code is making the camera zoom in and out really quickly. My approach is built on http www.zdnet.com blog burnette how to use multi touch in android 2 part 6 implementing the pinch zoom gesture 1847?tag content siu container The opengl scale will be getting the variable scale. gl.glScalef(scale, scale, 1) the distance is obtained by between two fingers, old distance (initial touch points), and new distance (dragging touch points). The zooming in and out works well. However, it would reset glScalef each time the user start using pinch zoom. scale newDistance oldDistance I tried calculating by additive ratio. The oldtscale handles the previous distance, if it is same, then it doesn't need to add up anything to scale. The zooming is really quick, I moved the fingers closer by mere 1 cm to 5 cm, zoom goes down or up fast. I think additive ratio is a bad solution. I think it might be incomplete solution. I'm trying to figure out what's wrong with it. additive ratio tscale (newDistance oldDistance) 1 if(oldtscale tscale) oldtscale tscale tscale 0 else oldtscale tscale adding up the additive ratio and scale tscale scale tscale checking tscale for limiting the maximum minimum scale if(tscale gt 2) tscale 2 else if(tscale lt 1) tscale 1 supply scale scale tscale
1
How do the stencil buffer and glFrontFace help make a shadow? I'm trying to understand tutorial 27 on the NeHe website. It is about how to cast a shadow of an object using the stencil buffer in OpenGL. The idea here is checking the directions of all the faces of the object against the direction of the light, and the result defines whether or not each face makes a shadow. In the case of making a shadow, we will draw the "shadow" into the stencil buffer as in the following code. First Pass. Increase Stencil Value In The Shadow glFrontFace( GL CCW ) glStencilOp( GL KEEP, GL KEEP, GL INCR ) doShadowPass( object, lightPosition ) Second Pass. Decrease Stencil Value In The Shadow glFrontFace( GL CW ) glStencilOp( GL KEEP, GL KEEP, GL DECR ) doShadowPass( object, lightPosition ) My question here is: why does the "glFrontFace" call in the second pass help remove the shadow between objects in the scene? I hope to see your answer. Thanks so much! P S this is the explanation in the tutorial, but I don't get it "They are rendered in two passes as you can see, one incrementing the stencil buffer with the front faces (casting the shadow), the second decrementing the stencil buffer with the backfaces ("turning off" the shadow between the object and any other surfaces)."
1
How can I compute the orbit of one body around another? I'm attempting to have a planet (with a known mass and radius) orbit its sun (also with a known mass and radius). It doesn't have to be 100% realistic, but it should be possible for the sun to have more than one planet orbiting it at a time. What equations should I use to accomplish this?
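A hedged sketch of the two usual routes: for a single decorative planet you can just parametrize a circle or ellipse, but if you want several planets (and moons) to fall out naturally, use Newton's law F = G * m1 * m2 / r^2 and integrate the resulting acceleration each frame; semi-implicit Euler is usually stable enough for a game. All names and the value of G here are up to you and your game's scale.

    #include <cmath>

    struct Body {
        double x, y;    // position
        double vx, vy;  // velocity
        double mass;
    };

    // One step: the planet is pulled toward the (much heavier) sun.
    void step(Body& planet, const Body& sun, double G, double dt)
    {
        double dx = sun.x - planet.x;
        double dy = sun.y - planet.y;
        double r2 = dx * dx + dy * dy;
        double r  = std::sqrt(r2);

        double a = G * sun.mass / r2;   // gravitational acceleration on the planet
        planet.vx += a * (dx / r) * dt; // semi-implicit Euler: velocity first...
        planet.vy += a * (dy / r) * dt;
        planet.x  += planet.vx * dt;    // ...then position
        planet.y  += planet.vy * dt;
    }

    // To start a (roughly) circular orbit, place the planet at distance r from
    // the sun and give it a sideways speed of v = sqrt(G * sun.mass / r).

Run step() for every planet against the sun each frame and extra planets come for free; add planet-planet attraction later only if you actually want it.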
1
Understanding normal mapping I am trying to understand the basics of normal mapping using this tutorial http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html What I don't get there is the following equation E1 = (U1 - U0) * T + (V1 - V0) * B How did they come to this equation? It comes out of nowhere for me. What is E1? The tutorial says that E1 is one edge of the triangle. But I don't get it: in the equation E1 seems to be a real number, not a vector (which an edge is supposed to be, right? It has x and y components).
1
Making a weapon stay with a first person camera I was looking all over the internet for any information on how to get a gun to stay with a camera as done in FPS games. I am using OpenGL and GLSL to carry this out. I knew a way of how to do this in earlier OpenGL versions, but I could never figure it out in the newer versions. The type of camera that I am trying to get is something similar to this With the view matrix and everything else, I should be able to figure out the movement of the hand and the shooting. Here is some of the code that I have so far Copyright(c) 2019 Ryan Hall All Rights Reserved I do not permit any of this code to be used elsewhere by anyone else except me for commercial purposes. The gun has two defining variables one that actually creates the gun and another that moves it around in object space weaponOfChoice glm translate(weaponOfChoice, glm vec3(camera.GetPosition().x 0.15, camera.GetPosition().y 0.15, camera.GetPosition().z 0.3)) weaponOfChoice glm rotate(gun, angle, glm vec3(0.0f, 1.0f, 0.0f)) weaponOfChoice glm scale(gun, glm vec3(0.005f, 0.005f, 0.005f)) glUniformMatrix4fv(glGetUniformLocation(shader.Program, "gun"), 1, GL FALSE, glm value ptr(weaponOfChoice)) I have spent quite some time working on how to fix the code so that it will render the gun correctly and have not been able to find any great sources online that will help solve my problem. How could I do this? Do I need an identity matrix as used by me in older versions of OpenGL? If so, how do I create a mimicking function of glLoadIdentity() that will help me with this problem? Thanks, rjhwinner03
1
Split up a screen into regions My task: I want to split up the screen into 3 regions: a buffs bar (with picked up items), score/info and a game map. It doesn't matter whether the regions intersect each other or not. For example I have a screen with width 1, height 1, and the origin of coordinates (0 0) is the bottom left point. I have 3 functions draw items, draw info, draw map. If I use them without any matrix transformations, each draws fullscreen, because its vertex coordinates are from 0 0 to 1 1. (pseudo code) drawItems() drawInfo() drawMap() And after that I see only the map drawn over the info, drawn over the items. My goal: I have some matrices for transforming vertices with 0 0 to 1 1 coordinates into strict regions. There is only one thing I need to do: set a matrix before drawing. So my call of the drawItems function is like (pseudo code) adjustViewMatrixes andSomethingElse(items.position of the region where it should be drawn, items.sizes of region to draw) setItemsMatrix() drawItems() the same function with vertex coordinates 0 0 -> 1 1, but it draws in other coordinates, because I have just set the matrix for the region. I know only some people will understand me, so there is a picture with the regions which I need to make. Every region has 0 0 1 1 inner coordinates.
1
Can I use the HD Graphics 3000's quad list primitive type via D3D? I was studying some technical documentation on the Intel HD Graphics 3000 GPU, which I'm using as a lower end reference for my 2D game engine. I noticed the hardware supports a nice "Quad List" topology, where every 4 vertices in a vert buffer are interpreted as an independent quad. My engine draws nothing but sprite quads, so that would be a nice optimization in my case, to eliminate the need for copying caching processing an accompanying index buffer with 12 bytes per quad of trivial triangle edge definitions (I'm using indexed TriangleList primitives at the moment). From what I can tell though, DirectX 9 doesn't expose quad list, and after some cursory peeking around in DX10 11 (which I don't plan to support at this point) it seems they don't expose this feature either. Does anyone know if there's a way to use Quad Lists in DirectX? I'd be curious about OpenGL also, if anyone has experience on that side.
1
What technique should I use to create models and animation sequences in OpenGL code? I'm getting into game development using OpenGL (and the LWJGL library) and I want to create models for characters, NPC's etc. in the code, as well as animation sequences (for example the way the models are done in Minecraft). What is the process to go about doing something like this? Is there a particular feature set that is used or common methods of doing this? I'm basically looking for pointers as to what to search for when trying to find examples of how this is done.
1
What advantage do OpenGL, SFML and SDL have over software rendering? I started watching the Handmade Hero stream, where Casey Muratori creates a game engine without using frameworks or such. Yesterday I got to the part where he showed how an image is drawn onto the screen. As far as I understood it he just allocated some memory as big as the size of the screen he wants to draw to. And then he created a bitmap which he passed to the buffer memory he allocated and drew it to the screen using a os specific function. This seems quite straight forward. I used GameMaker, changed to Love2D, worked a little bit with Sprite Kit but I was always wondering what was really happening beneath this sometimes confusing layers. Given that, why even bother using graphics libraries (OpenGL, SFML, SDL, ) when all you have to do is simply allocate some buffer, pass a bitmap and draw it to the screen? If you then want to draw distinct things to you screen you just write them to your bitmap which then gets passed into the buffer. I'm quite new to programming, but this seems quite simple to me. Please correct me if I'm wrong.
1
How to read rendered textures back without killing performance I have an application where I need to render some OpenGL based DLLs, read the rendered textures back, and send them to another DirectX based application that will render them in DirectX. Right now, I am rendering them using FBOs, reading them back, and then sending the data across the network. However, the "reading them back" step is killing the performance. My basic loop is as follows while(true) ... render 5 to 10 textures ... foreach( texture in renderedTextures ) bind texture glGetTexImage( data ) push data onto a queue to send asynchronously Commenting out the glGetTexImage call causes performance to improve drastically. What techniques can I use to read the data back faster?
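For reference, the usual trick is to route the readback through a pixel buffer object so glGetTexImage returns immediately and the copy happens asynchronously, then map the buffer a frame (or two) later instead of stalling right away. A rough sketch, assuming an RGBA8 texture of size w x h and a buffer id pbo created earlier with glGenBuffers:

    // Start the transfer: with a PBO bound to GL_PIXEL_PACK_BUFFER, the last
    // argument of glGetTexImage is an offset into that buffer, and the call
    // returns without waiting for the copy to finish.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, nullptr, GL_STREAM_READ);
    glBindTexture(GL_TEXTURE_2D, texture);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // ... render the next textures / do other work, ideally come back next frame ...

    // Collect the result; mapping only blocks if the copy still isn't done.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (data) {
        // push the w * h * 4 bytes at 'data' onto the network queue here
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

In practice you'd ping-pong two (or more) PBOs per texture, mapping the one you filled last frame while the current frame's copy is still in flight.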