1
Bullet Physics: implementing a custom MotionState class. I'm trying to make my engine's camera a kinematic rigid body that can collide into other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos, which updates the motion state's transform. I use the overridden class when creating my kinematic body, but the collision detection fails. I'm doing this for fun, trying to add collision detection and physics to Sean O'Neil's Procedural Universe. I referred to the Bullet wiki on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the planetary rigid bodies, but I didn't want to clutter the post. Here is my motion state class:

    class CPhysicsMotionState : public btMotionState
    {
    protected:
        // This is the transform with position and rotation of the camera
        CSRTTransform* m_srtTransform;
        btTransform m_btPos1;

    public:
        CPhysicsMotionState(const btTransform& initialpos, CSRTTransform* srtTransform)
        {
            m_srtTransform = srtTransform;
            m_btPos1 = initialpos;
        }

        virtual ~CPhysicsMotionState()
        {
            // TODO Auto-generated destructor stub
        }

        virtual void getWorldTransform(btTransform& worldTrans) const
        {
            worldTrans = m_btPos1;
        }

        void setKinematicPos(btQuaternion& rot, btVector3& pos)
        {
            m_btPos1.setRotation(rot);
            m_btPos1.setOrigin(pos);
        }

        virtual void setWorldTransform(const btTransform& worldTrans)
        {
            btQuaternion rot = worldTrans.getRotation();
            btVector3 pos = worldTrans.getOrigin();
            m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w());
            m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z()));
            m_btPos1 = worldTrans;
        }
    };

I add a rigid body for the camera:

    // Create rigid body for camera
    btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f));
    btTransform startTransform;
    startTransform.setIdentity();
    // forgot to add this line
    CVector vCamera = m_srtCamera.GetPosition();
    startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z));
    m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera);
    btScalar tMass(80.7f);
    bool isDynamic = (tMass != 0.f);
    btVector3 localInertia(0, 0, 0);
    if (isDynamic)
        cameraShape->calculateLocalInertia(tMass, localInertia);
    btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia);
    m_rigidBody = new btRigidBody(rbInfo);
    m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
    m_rigidBody->setActivationState(DISABLE_DEACTIVATION);

This is the code in Update() that runs each frame:

    CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera();
    Quaternion qRotate = srtCamera.m_qRotate;
    btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w);
    CVector vCamera = CCameraTask::GetPtr()->GetPosition();
    btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z);
    CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState();
    cameraMotionState->setKinematicPos(rot, pos);
1
How to create more vertices from within a shader in OpenGL? When rendering voxels in octrees, the only information necessary is the current octree level, position and colour/texture. But one has to send eight vertices to the rendering pipeline in order to render a complete cube, and still needs the colour/texture. Is there a better way? Since geometry shaders won't be widespread for quite some time, the "emit vertex" method in the geometry pass won't suffice (or am I wrong? I don't think Mesa currently supports geometry shaders). Also, AFAIK, backface culling is done before the geometry pass, so that would lead to additional overhead. My current idea is to send 'dummy vertices' and have the actual drawing data in between, but that really shouldn't be the solution. Kind regards
1
When would you use an octree in a voxel Engine? I've been reading about Octrees, and I understand how they work (or at least I think I do). However, I can't figure out why you would use octrees instead of a simple multidimensional array. In my project, I use chunks in my world. Every chunk is made up of 16x16x16 voxels. If I need to access a voxel, I just use the notation myChunk x y z . If I need to check neighboring voxels, I can use the same notation. I've already implemented frustum culling, face merging, and hidden surface determination. With these optimizations and this simple multidimensional array, I can render about 500 chunks at 80 fps. So, in which situation would I use octrees in this kind of Voxel Engine? Or is it useless? I can see the purpose of octrees if I would implement LOD into my voxel engine (which I'm not planning to do). Is my lack of experience in Game Development blinding me on something?
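For reference, the flat-array chunk storage described above can be sketched like this in C++ (the Chunk type and the index order are illustrative assumptions, not code from the post):

    #include <array>
    #include <cstdint>

    struct Chunk {
        static constexpr int SIZE = 16;
        std::array<uint8_t, SIZE * SIZE * SIZE> voxels{};

        // Direct O(1) access, equivalent to myChunk[x][y][z].
        uint8_t& at(int x, int y, int z) {
            return voxels[(x * SIZE + y) * SIZE + z];
        }
    };

The appeal of the flat array is exactly this constant-time neighbour lookup; an octree trades that for the ability to collapse large uniform regions into single nodes.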
1
"Highlighting" Faces Edges Corners How would I "highlight" faces edges corners? I would prefer an explanation OpenGL if possible. Here's an example
1
My model is drawn wrong in 3D. I'm learning OpenGL and now, after I made everything I needed, I started with 3D. My first problem is that my mesh doesn't draw like I intended... Here's my rectangle (uses GL_TRIANGLES):

    std::vector<GLfloat> Vpos = {
        0.5, 0.5, 0.0,
        0.5, 0.5, 0.0,
        0.5, 0.5, 0.0,
        0.5, 0.5, 0.0,
        0.5, 0.5, 0.0,
        0.5, 0.5, 0.0
    }; // comments are the 3D version

And here I draw my array:

    glDrawArrays(GL_TRIANGLES, 0, verNum / 2);

It is bound with a vertex array, and I use shaders. My fragment shader is:

    #version 330
    in vec2 passTextureCoords;
    out vec4 outColor;
    uniform sampler2D ourTexture;

    void main()
    {
        outColor = vec4(0.0, 1.0, 0.5, 1.0) * texture(ourTexture, passTextureCoords);
    }

I don't have a texture, so I use a light green for debug purposes. Also my vertex shader:

    #version 330
    layout (location = 0) in vec3 inVertPos;
    layout (location = 1) in vec2 inTexCoords;
    out vec2 passTextureCoords;

    void main()
    {
        gl_Position = vec4(inVertPos.x, inVertPos.y, inVertPos.z, 1.0);
        passTextureCoords = inTexCoords;
    }

Thanks for the help so far; I'm pretty sure that I missed something small but important.
1
Everything turning black when pitching down. Just a quick question about something that's occurring in my world. Every time I pitch my camera downward, everything starts turning black, and if I pitch upward, everything sort of intensifies. I'm multiplying my normals by the normal matrix in the shader, and I'm multiplying my light's direction by the model-view matrix. If I leave the normal and light direction in world space, everything ends up fine. I thought putting them both in view space would not cause those weird things to happen?
1
LWJGL loading textures of various types I googled around a bit and nobody seems to have asked this question. I have images in multiple color formats (all of them are PNGs). Most of them are ARGB but my bitmap fonts are gray scale, and I would like them to stay that way. All I want to do is find out what format BufferedImage uses to store my pixel data and then use that information with glTexImage2D. Java, in all its wisdom, seems to be determined to hide that information from me at all costs... I also need to know how BufferedImage aligns its pixel data in both of these formats (glTexImage2D cares). Could someone please tell me how to Determine the pixel format of my BufferedImage. If it is ARGB32, I'm going to have to reorder the bytes and use GL RGBA. If it's grayscale, I will be using GL INTENSITY. Extract the actual bytes from the image. I have seen a few examples on the web that use BufferedImage.getRaster().getDataBuffer(). This is nonsensical. Why are there different types of buffers like DataBufferInt? Because of Java's strong typing, I need DataBufferByte. If this is the only way, could somebody give me specific directions to use the different type of buffers with glTexImage2D? Figure out how the aforementioned image data is aligned. I will use this information with glPixelStorei. In addition, I come from C and C programming. In C this was 100 lines of simple libPNG and GL calls. Should I expect more trouble like this in the future?
1
Custom Model Format Loading. I've been working on a small engine for educational purposes. I understand that going to production with "raw" model files like .obj is wrong, mostly because of parsing and loading time. Thus, I want to create a program which parses these raw files and formats them in a new way, let's say using protobuf. My questions: Are there any best practices, or any example lib I can take a look at? What should I do with attached texture files? Separate files? How can I support multiple types of models and/or attributes in the models like materials, UV mapping, etc.?
1
How to remove seams from a tile map in 3D? I am using my OpenGL custom engine to render a tilemap made with Tiled, using a well-spread tileset from the web. There is nothing fancy going on. I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera in my 3D world and at Z = 0 there is a plane showing me my tiles. Everything is working correctly but I get ugly seams between the tiles. I've tried orthographic and perspective cameras and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show, but otherwise they are there 99% of the time in multiple patterns, depending on the zoom and camera parameters like field of view. Here's a screenshot of the artifact being shown: http://i.imgur.com/HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly): http://i.imgur.com/SjjHK4q.png My tileset has no mipmaps and is set to GL_NEAREST and GL_CLAMP_TO_EDGE values. I've looked around many articles on the internet and nothing helped. I tried UV correction so the UVs fall at half of the texel, rather than the end of the texel, to prevent interpolating with the neighbour value (which is transparency). I tried debugging with my geometry and I verified that with no texture and a random color in each tile, I don't seem to see any seams. All vertices have integer coordinates, i.e., the first tile is a quad from (0,0) to (1,1) and so on. Tried adding a little offset both to the UVs and to the vertices to see if the gaps cease to exist. Disabled multisampling too. Nothing fixed it so far. Thanks.
1
Migrating from OpenGL to XNA (MonoGame)? I just started working in MonoGame with 3D code. I've used OpenGL before in the past, and I've also built engines with 2D Sprites SpriteBatch's in MonoGame. However, I'm getting extremely stumped with going from GL to XNA, likely due to the fact that I believe XNA is no longer a "state" machine like GL is. I've looked thru the tutorials by RB Whitaker and Riemers but most of the stuff they have written covers models, etc. I've already migrated some of my code to build procedural meshes into a VertexBuffer. I'd like to test it now, but I'm getting very confused as to how to setup the World, View, Projection matrices, etc. Are there any good tutorials that either show how to move from GL to XNA or optionally any good tutorials that assume you're not using Models, etc? I'm looking to write something fairly low level when it comes to primitives.
1
2D Sprite batching in OpenGL: How to send transformation data to the GPU? OpenGL newbie here. So I'm trying to implement sprite batching to draw 100 sprites per draw call. I've created a VBO that contains texture coordinates, vertex coordinates, color data (for tinting), and a mat4 matrix for transforms. So far, so good, right? Well, everything is working but the transform portion of my VBO. My theory is that I could push a mat4 with all of the rotation, scaling, flipping, and world location data applied to it, so that it gets multiplied in with viewports (see GLSL code below).

    #version 130

    // Vertex position attribute
    in vec2 vertexPos;

    // Texture coordinate attribute
    in vec2 texCoordIn;
    out vec2 texCoord;

    // Vertex color
    in vec4 textureColorIn;
    out vec4 textureColor;

    // Matrix transformations for the sprite
    in mat4 transformation;

    void main()
    {
        // Process texCoord
        texCoord = texCoordIn;

        // Process texture color
        textureColor = textureColorIn;

        // Process vertex
        gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * vec4(vertexPos.x, vertexPos.y, 0.0, 1.0) * transformation;
    }

So here's my question: if you were to send transform data to the GPU, how would you do it? Sending a mat4 over the wire per vertex seems like overkill.
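For reference, a mat4 vertex attribute like the one above occupies four consecutive attribute locations, each fed as one vec4 column; a minimal C++ setup sketch (program is assumed to be the linked shader, and the stride assumes the matrices are tightly packed rather than interleaved):

    GLint loc = glGetAttribLocation(program, "transformation");
    for (int col = 0; col < 4; ++col) {
        glEnableVertexAttribArray(loc + col);
        glVertexAttribPointer(loc + col, 4, GL_FLOAT, GL_FALSE,
                              sizeof(float) * 16,                        // stride: one mat4 per vertex
                              (const void*)(sizeof(float) * 4 * col));   // offset of this column
    }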
1
Rotating a 3D object around its center. I have an object moving from A to B on the x axis and there is no translation of the object apart from that. Now, while moving, I want to rotate it around the y axis and the motion should change accordingly; I mean, if I rotate it right while it is moving from the -x to the +x axis, it should move towards the near plane. I have a variable in glTranslatef which is modified in the loop; after that I have glScalef to scale the whole object, which is made of a hierarchical structure. Now I tried the following code to achieve the expected result but it's not working properly:

    glTranslatef(move, 0, 0);

    // If I comment these 3 lines, it does not affect the output
    glTranslatef(-move, 0, 0);
    glRotatef(rotate, 0, 1, 0);
    glTranslatef(move, 0, 0);

    glScalef(0.2, 0.2, 1.0);
1
Qt 5 QOpenGLWidget not updating the screen. I'm creating a level editor for my game using Qt for the GUI and I'm in really early stages. Right now I'm trying to dynamically add objects (entities) on screen when I click a button. So far the objects are pushed into a stack but I can't see them on screen. When my widget inherits QGLWidget the screen is updated immediately and I can see the objects being drawn, but since QGLWidget is deprecated I use QOpenGLWidget. Strangely this class does not auto-update the screen. I have also connected update() to a timer and set the interval to 0 to update the screen in real time:

    connect(&timer, SIGNAL(timeout()), this, SLOT(update()));
    timer.setInterval(0);
    timer.start();

Am I missing something?
1
Is glEnable obsolete unneeded in OpenGL ES 2? In an iOS app I am writing I am now culling all the GL 1 crap from my GL 2 code. Can I safely remove glEnable?
1
3D collision detection on non flat surface I am developing a game which needs an accurate collision detection algorithm, when an object travels down a slope which isn't flat. To be more precise I need to simulate a skier who travels down a ramp. My first idea was to create simple bounding box around the skies and the ramp. Then place the skier in mid air and start calculating gravity. When the two bounding boxes intersect collision detected (this is how I did it in 2D). But the problem is that the skies are flat and the slope isn't. So there is a chance that the ski will stick into the jump (due to curved surface of the jump) or even worse the skier will go under the jump (I don't want the skies to "sink" into the jump as well) . What would be the best way of solving this? What I could think of is when collision is detected, rotate the skies until the front and the back end of the skies are colliding with the surface. Is this idea any good? Did any face this kind of problem? Would be better to morph the skies according to the slope (but this would probably be an overhead I can't afford)? P.S. if I didn't explain enough, write a comment and make a sketch.
1
What is GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS? I am a beginner in OpenGL. I am learning about textures in OpenGL. What I don't understand is how to determine how many texture units are in the GPU. I heard someone say that you can see how many texture units there are by writing the following code:

    int total_units;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &total_units);
    std::cout << total_units << '\n';  // the result is 192

Are there 192 texture units in my GPU? The documentation says: GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS: params returns one value, the maximum supported texture image units that can be used to access texture maps from the vertex shader and the fragment processor combined. If both the vertex shader and the fragment processing stage access the same texture image unit, then that counts as using two texture image units against this limit. The value must be at least 48. See glActiveTexture. So I wanted to know how many texture units can be used to access texture maps from the vertex and fragment shaders, and I wrote and ran the following code:

    int vertex_units, fragment_units;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertex_units);
    std::cout << vertex_units << "\n";    // the result is 32
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragment_units);
    std::cout << fragment_units << "\n";  // the result is also 32

So 32 + 32 = 64. But why does GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS show me the result of 192? I think I am missing something. What do I need to calculate to get 192? And also, in OpenGL, why are there only GL_TEXTURE0 to GL_TEXTURE31 macros? I think these macros are for each shader. Am I right?
1
GLSL shader compilation. When I'm compiling a shader, does it have to be complete? Can I use glCompileShader on a shader without a main() function? The OpenGL reference documentation has a nice writeup on program linking errors, but I can't find one for shader linking, so I have to ask here. I want to have the ability to keep each part of a shader in a different file. So, for example, I'll have a material-calculating function and a "main" shader with the main function that only references the material-calculating function. Right now I have my shaders as arrays of strings that can be read from a file; then I put together an array of string pointers and compile that into a single shader (vertex, fragment, geometry). But if I could compile each individual piece (parts of vertex shaders, instead of the whole vertex shader put together in an array of strings) and put them together when I'm linking the program, that would make the code much clearer, and I could move the shader compilation code from the program-managing code to the shader object itself (the one that loads the string from file and exposes it).
1
Testing spheres without extracting planes. I am currently a bit stuck. In OpenGL I am attempting to do view frustum culling; so far I managed to do it by using a PCM, where center is the world position of the mesh.

    bool shouldRenderMesh(mat4 VP, vec3 center, float radius)
    {
        vec4 transformedCentre = VP * vec4(center, 1);
        vec3 PCN = vec3(
            transformedCentre.x / transformedCentre.z,
            transformedCentre.y / transformedCentre.z,
            transformedCentre.z / transformedCentre.z
        );
        if (abs(PCN.x) > 1 || abs(PCN.y) > 1 || abs(PCN.y) > 1)
            return false;
        return true;
    }

The problem is, while the approach works, items will disappear too soon, as the testing is based on the centre point of the mesh rather than the bounding box. Is it possible to test a sphere or box without extracting planes? Thanks
1
Is OpenGL appropriate for 2D games? I have been teaching myself the OpenGL library for a while now, and want to start making a game. However, for an easier introduction, I want to start with something 2D, such as a top down Pokemon style game. Is this a good plan, or is OpenGL made specifically for 3D?
1
transformation before the perspective divide but after projecting perspectively My problem is that I would like to confine a scene render to a (possibly rotated) rectangle without using glViewport(). I don't want to use it to save, if possible, some cycles that would otherwise be spent on state switching. Also, there is the tantalizing possibility of rotating the scene render, which is not possible with glViewport... Is it possible to confine a scene render to this (possible rotated) rectangle in this way?
1
How to not render what is behind an object? I am working on a 3D project using OpenGL. I am looking for a way to optimize my renderings. Is there a way to tell if an object is behind another object, and thus not visible, so I don't waste time rendering it? I am already working on a frustum culling implementation to not display what is not visible because it is outside the viewing frustum, but I didn't find a way to know if an object is located behind another object. Can any of you help me with this, please?
1
GLSL shader time uniform frozen? I have a simple fragment shader to simulate "falling water". I'm using Ogre3D and OpenGL ES 2; this is my code:

    #version 130

    uniform sampler2D valveTex;
    uniform sampler2D noiseTex;
    uniform float time;
    uniform vec3 sprayColor;

    varying vec2 uv;
    varying float rand;

    void main()
    {
        gl_FragColor.w = texture2D(valveTex, uv).w;
        float noiseW = texture2D(noiseTex, uv + vec2(rand, time)).x;
        gl_FragColor.w *= noiseW * noiseW;
        gl_FragColor.xyz = sprayColor;
    }

The shader works well and I see the animation happening, but after some time running, the animation stops as if the variable time had stopped updating, and the water stream looks still. What can be causing this? Edit 1: the variable time is derived from Ogre's time_0_1.
1
Texture object and texture unit in GL. As I understand it, texture usage consists of two parts: how the discrete data of the texture is stored internally (how many dimensions, channels, etc.), and how it is fetched/sampled/filtered. My questions relative to OpenGL are: q1: What parameters are stored in the texture object (glGenTextures) and what parameters are stored in the texture unit (glActiveTexture)? q2: Does glTexParameter perform setup per texture object or per texture unit?
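As a point of reference, a minimal C++ sketch of the calls these two questions revolve around (the filter parameter is only an illustrative example):

    GLuint tex;
    glGenTextures(1, &tex);              // creates a texture *object* name
    glActiveTexture(GL_TEXTURE0);        // selects a texture *unit*
    glBindTexture(GL_TEXTURE_2D, tex);   // attaches the object to the active unit's 2D target
    // sets state on the texture object currently bound to this target
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);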
1
How do I render a PNG image with OpenGL? I have an image (100×200 px) painted with Paint.NET and saved as a .png with 32-bit color depth. How can I render it using OpenGL 1.1 (with the LWJGL binding) or higher inside the display? I tried creating a ByteBuffer, then loading the pixel color values into that buffer, but I'm missing the OpenGL settings that need to be enabled. Some help?
1
OpenGL ES 2.0: Moving Camera in Orthogonal (2D) Projection. I have a quite large 2D game scene. The scene is much larger than the screen of the LCD. Therefore, I have to move the camera (view) in desired directions to display particular parts of the scene. What is the correct way of moving the viewport over the scene in an orthogonal (2D) projection, please? There are two solutions coming to my mind: 1. Using gluLookAt(), which is designed to move the camera mostly in perspective projection, but it could work well for 2D as well. However, not all of the gluLookAt() parameters would be utilized in an ortho configuration. 2. Using an opposite view/translate transformation. This would mean that if I need to move my 2D camera 10 units to the right (positive x axis), I would apply the opposite translate transformation to every scene vertex (negative x axis). This way I would create the illusion that the camera moves to the right. These are the solutions coming from my mind. However, because I am self-taught, is there any correct and recommended way of moving the camera over the 2D scene, please?
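For reference, the second approach above (an inverse-translate view matrix for a 2D ortho camera) can be sketched in C++ with glm; the numbers and names are illustrative assumptions, not code from the post:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::vec2 cameraPos(10.0f, 0.0f);  // camera moved 10 units to the right
    glm::mat4 projection = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f);
    glm::mat4 view = glm::translate(glm::mat4(1.0f),
                                    glm::vec3(-cameraPos, 0.0f));  // scene shifts left by the same amount
    glm::mat4 viewProjection = projection * view;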
1
OpenGL: Understanding the relationship between Model, View and World Matrix. I am having a bit of trouble understanding how these matrices work and how to set them up in relation to one another to get a proper system running. In my understanding the Model Matrix is the matrix of an object, for example a cube or a sphere; there will be many of these in the application/game. The World Matrix is the matrix which defines the origin of the 3D world, the starting point. And the View Matrix is the "camera": everything gets translated with this to make sure you have the illusion of an actual camera, when in fact everything is moving instead of this matrix? I am a bit lost here. So I was hoping someone here could help me understand this properly. Does every modelMatrix get translated/multiplied with the world matrix, and the worldMatrix then with the viewMatrix? Or does every modelMatrix get translated/multiplied with the viewMatrix, and then that with the worldMatrix? How do all these matrices relate, and how do you set up a world with multiple objects and a "camera"? EDIT: Thanks a lot for the feedback already. I did some googling as well and I think I do understand it a bit better now; however, would it be possible to get some pseudocode advice?

    projectionMatrix = Matrix()
    makePerspective(45, width, height, 0.1, 1000.0, projectionMatrix)

    modelMatrix = Matrix()
    identity(modelMatrix)
    translate(modelMatrix, 0.0, 0.0, -10.0)  // move back 10 on z axis

    viewMatrix = Matrix()
    identity(viewMatrix)
    // do some translation based on input with viewMatrix

Do I multiply or translate the viewMatrix with the modelMatrix or the other way around? And what then? I currently have a draw method set up in such a way that it only needs 2 matrices as arguments to draw. Here is my draw method:

    draw(matrix1, matrix2) {
        bindBuffer(ARRAY_BUFFER, cubeVertexPositionBuffer)
        vertexAttribPointer(shaderProgram.getShaderProgram().vertexPositionAttribute, cubeVertexPositionBuffer.itemSize, FLOAT, false, 0, 0)
        bindBuffer(ARRAY_BUFFER, cubeVertexColorBuffer)
        vertexAttribPointer(shaderProgram.getShaderProgram().vertexColorAttribute, cubeVertexColorBuffer.itemSize, FLOAT, false, 0, 0)
        bindBuffer(ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer)
        setMatrixUniforms(shaderProgram, matrix1, matrix2)
        drawElements(TRIANGLES, cubeVertexIndexBuffer.numItems, UNSIGNED_SHORT, 0)
    }

What are those matrices supposed to be? Thanks a lot in advance again, guys.
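For orientation, the conventional composition in column-major glm/OpenGL style can be sketched like this; the variables are illustrative placeholders, not code from the post:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    float width = 800.0f, height = 600.0f;            // placeholder window size
    glm::vec3 objectPosition(0.0f);                    // placeholder object position

    glm::mat4 projection = glm::perspective(glm::radians(45.0f), width / height, 0.1f, 1000.0f);
    glm::mat4 view  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -10.0f)); // "camera" moved back
    glm::mat4 model = glm::translate(glm::mat4(1.0f), objectPosition);                // per-object transform

    // Applied right-to-left: model space -> world space -> view space -> clip space.
    glm::mat4 mvp = projection * view * model;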
1
Libgdx Transparent color over texture I am attempting to tint a texture a color but I want the texture to show under the tint. For example, I have a picture of a person but I want to tint them a light green and not change the transparency of the actual person itself. So far I have attempted to use the SpriteBatch method setColor which takes rgba values. When I set the alpha value to .5 it will render the tinting and the texture with that alpha value. Is there any way to separate the alpha values of the tint and the texture? I know I could draw another texture on top of it but I don't want to have two draw passes for the one texture because it will be inefficient. If there's anyway to do it in raw OpenGL that'd be great too.
1
Split up a screen into regions. My task: I want to split up a screen into 3 regions for a buffs bar (with picked items), score info and a game map. It doesn't matter whether the regions intersect with each other or not. For example, I have a screen with width 1, height 1 and the origin of coordinates (0,0) at the left bottom point. I have 3 functions: drawItems, drawInfo, drawMap. If I use them without any matrix transformations, each draws fullscreen, because its vertex coordinates go from (0,0) to (1,1). (pseudocode)

    drawItems()
    drawInfo()
    drawMap()

And after that I see only the map on top of the info on top of the items. My goal: I have some matrices for transforming vertices with (0,0)-(1,1) coordinates to strict regions. There is only one thing I need to do: set the matrix before drawing. So my call of the drawItems function looks like (pseudocode):

    adjustViewMatrixesAndSomethingElse(items.position /* of the region where it should be drawn */, items.sizes /* of the region to draw */)
    setItemsMatrix()
    drawItems()  // the same function with vertex coordinates (0,0) -> (1,1), but it draws in other coordinates, because I have just set the matrix for the region

I know only some people will understand me, so there is a picture with the regions which I need to make. Every region has (0,0)-(1,1) inner coordinates.
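As an illustration of the "set the matrix before drawing" idea above, one way to build such a region matrix with glm is to scale and offset the unit square into a sub-rectangle; the region values below are assumptions:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 regionMatrix(glm::vec2 regionPos, glm::vec2 regionSize)
    {
        glm::mat4 m(1.0f);
        m = glm::translate(m, glm::vec3(regionPos, 0.0f)); // move the region to its position
        m = glm::scale(m, glm::vec3(regionSize, 1.0f));    // shrink (0,0)-(1,1) coords to the region size
        return m;
    }

    // e.g. a map occupying the right half of a 0..1 screen:
    // setMatrix(regionMatrix({0.5f, 0.0f}, {0.5f, 1.0f})); drawMap();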
1
How do I change a sprite's color? In my rhythm game, I have a note object which can be of a different color depending on the note chart. I could use a sprite sheet with all the different color variations I use, but I would prefer to parametrize this. (Each note sprite is made of different shades of a hue. For example, a red note has only red, light red and dark red.) How can I colourise a sprite anew? I'm working with OpenGL, but any algorithm or math explanation will do. :)
1
How is forward rendering done using OpenGL? Recently I came across the term forward rendering. I'm kind of curious how this could be done in OpenGL. I have done a lot of searching on this; the majority of the results I get are about theory but not code implementation. May I know whether there is any code implementation in OpenGL for this rendering technique?
1
Aggregation of value in GLSL loop results in 0. I'm banging my head against a wall trying to understand why this code is giving me some reasonable results, with some visible colors on parts of the screen...

    #version 130

    in vec2 vTex;           // must match name in vertex shader
    flat in int iLayer;     // must match name in fragment shader
    out vec4 fragColor;     // first out variable is automatically written to the screen

    uniform sampler2DArray tex;
    uniform sampler2DArray norm;

    #define MAX_LIGHTS 4
    struct Light {
        vec3 position;
        vec4 color;
        vec3 falloff;
        vec3 aim;
        float aperture;
        float aperturehardness;
    };
    uniform Light lights[MAX_LIGHTS];

    void main() {
        vec4 DiffuseColor = texture(tex, vec3(vTex.x, vTex.y, iLayer));
        if (DiffuseColor.a == 0) discard;
        vec3 NormalMap = texture(norm, vec3(vTex.x, vTex.y, iLayer)).rgb;
        NormalMap.g = 1.0 - NormalMap.g;
        vec3 FinalColor = vec3(0, 0, 0);
        for (int i = 0; i < MAX_LIGHTS; i++) {
            vec3 LightDir = vec3((lights[i].position.xy - gl_FragCoord.xy) / vec2(320.0, 200.0).xy, lights[i].position.z);
            float D = length(LightDir);
            vec3 N = normalize(NormalMap * 2.0 - 1.0);
            vec3 L = normalize(LightDir);
            vec3 Diffuse = (lights[i].color.rgb * lights[i].color.a) * max(dot(N, L), 0.0);
            vec3 sd = normalize(vec3(gl_FragCoord.xy, 1.0) - lights[i].position);
            float Attenuation = smoothstep(lights[i].aperture * lights[i].aperturehardness, lights[i].aperture, dot(sd, lights[i].aim))
                                / (lights[i].falloff.x + (lights[i].falloff.y * D) + (lights[i].falloff.z * D * D));
            vec3 Intensity = Diffuse * Attenuation;
            FinalColor = max(FinalColor, DiffuseColor.rgb * Intensity);
        }
        fragColor = vec4(FinalColor, DiffuseColor.a);
    }

...but when I change

    FinalColor = max(FinalColor, DiffuseColor.rgb * Intensity);

to

    FinalColor = clamp(FinalColor + DiffuseColor.rgb * Intensity, vec3(0,0,0), vec3(1,1,1));

everything goes black. Is there something I don't understand about aggregation of variables in loops in GLSL, or about the + operator on vectors? By my logic, the changed code should return more brightness than the original code, because the changed code would be adding all the values, whereas the old code would just be taking the highest one.
1
OpenGL texture releasing. If I call glDeleteTextures, will it release the memory immediately? And what is the performance cost? I know that glGenTextures is quite cheap, because it only allocates an id, but I have not found anything about deleting.
1
Generating 3D like effect I'm making a 2D sidescroller game and want to give the blocks a 3D like effect. This way it looks like the player is walking on 3D blocks while walking on a 2D plane (thus having only x,y coordinates). I have already written the code to generate the terrain in 2D, but the 3D effect is now baked in. It looks like this At this moment the 3D effect is baked into the blocks, but I want to try to generate it. Is there a common way to achieve this 3D like effect while still staying 2D? Do I have to convert from 2D to 3D or is it also possible to do this with calculations shaders? EDIT The reason I want the top texture of the block separate of the block is because when the camera would go up, the top should get bigger as the camera sees more of the top of the block and the other way around when the camera goes down.
1
What does the keyword "interface" mean in GLSL? interface vs out vec3 get color() struct vs out impl vs out vec3 get color() return vec3(1,0,0) Where is this behavior defined? None of the specifications uses the keyword interface. It is generally reserved in version 460. I myself accidentally, at Visual Studio, discovered that this keyword is recognized, thought about it and remembered the Nvidia language C for graphics. It has the ability to create interfaces. I tried it and it worked! This code does not crash with compilation errors. It compiles fine. To check, instead of the diffuse color, I used the method vs out impl get color(), and the objects were painted red. Where to see the specification of this functionality? On the Internet, a bunch of words interface and GLSL leads to articles on interface blocks, which is not the same.
1
Why, after calling SetVideoMode as in the following, does nothing appear on my screen? I am trying to create an application that will need to use double buffering (for the purpose of vsync). I am using SDL.NET. From what I understood, in order to have double buffering, I have to call SetVideoMode with the opengl parameter set to true. Here's the code I'm using:

    Video.Initialize();
    Video.GLSetAttribute(OpenGLAttr.DoubleBuffer, 1);
    Video.GLSetAttribute(OpenGLAttr.SwapControl, 1);
    Video.GLSetAttribute(OpenGLAttr.RedSize, 8);
    Video.GLSetAttribute(OpenGLAttr.GreenSize, 8);
    Video.GLSetAttribute(OpenGLAttr.BlueSize, 8);
    Video.GLSetAttribute(OpenGLAttr.DepthSize, 16);
    Video.SetVideoMode(VideoInfo.ScreenWidth, VideoInfo.ScreenHeight, false, true, true, true);

If the 4th parameter (bool opengl) is false, it works: a new fullscreen window is created and displayed (but I assume the OpenGLAttr values set above are meaningless in this case). If the 4th parameter is true, nothing happens. A new window gets created (at least, it appears in the list of open windows) but I cannot alt-tab into it and nothing appears on the screen. What am I doing wrong?
1
Best strategy to track object hierarchy using groups and obj files I am making a 3D game in OpenGL from scratch. In this game I have a ship with stuff inside it. How can I attach the stuff to the ship in the CAD program and maintain that hierarchy in my own game? For example say I have a fire extinguisher in my ship that mounts on the wall. There are two approaches both with problems. Solution 1 Save fire extinguisher and ship as separate obj files. Problem How can I place the fire extinguisher in the proper place inside my ship in my game? With hundreds of objects manually placing them is completely infeasible. I want to arrange stuff in my CAD and load it into the game and be done. Solution 2 Save fire extinguisher as its own group inside the ship obj file. Problem Now I can't reuse the fire extinguisher in other ships. The obj files for game assets will balloon out of control in size with new instances of reused sub objects. Is there some way I can specify the position of an external object? A 3d point in my ship obj file representing the origin of another obj model?
1
Rendering perspective independent polygons in perspective view I am creating an editor where I have "grabhandles" that can be dragged to transform a primitive object. My problem is that I can't find the correct solution to show these grabhandles in a perspective view. They need to have the correct position AND same size independent of where they are,how far away they are on the perspective viewport and independent of what the viewport dimensions are. In orthographic mode I do not have this problem as I create the handle polygons by multiplying it with a zoom factor.
1
How to create a MessageBox in OpenGL using C++. I have a game that needs results displayed upon exit with a message box. I have the following code that works, but I need to add an integer variable to the text. This works:

    MessageBox(NULL, L"Description\nline2", L"Info", MB_OK | MB_ICONEXCLAMATION);

But I need to say "Description " + num. I tried to pass a string but it won't compile:

    MessageBox(NULL, L(string), L"Info", MB_OK | MB_ICONEXCLAMATION);
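For reference, one common way to compose such text at runtime in C++ is with std::wstring and std::to_wstring, keeping the string alive while .c_str() is used; the value of num here is only illustrative:

    #include <windows.h>
    #include <string>

    int num = 42;  // illustrative value
    std::wstring text = L"Description " + std::to_wstring(num);
    MessageBox(NULL, text.c_str(), L"Info", MB_OK | MB_ICONEXCLAMATION);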
1
Why does the scissor test happen after fragment shading? In OpenGL, why is the scissor test performed after shading? Wouldn't it be more efficient to do the discarding before shading? Reference: in the pipeline overview, the scissor test is inside the Fragment Processing stage.
1
Why does my depth test fail on Nvidia cards? I sent a test version of my in-development game to some friends, and they found out that the depth test in OpenGL does not work on Nvidia cards. I'm using LWJGL. I use my own matrices and send them to the shader, and at the start of every game loop I use:

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear the display

On an Nvidia card, you can see mountains through other mountains and stuff. On my Radeon HD 6650M it works perfectly fine. Any ideas? I don't have anything special in the shaders, just some basic lighting calculations. I don't touch gl_FragDepth. Here's a screenshot (with placeholder textures). I use these calculations for the projection matrix:

    public Matrix4f getProjectionMatrix() {
        // Setup projection matrix
        Matrix4f projectionMatrix = new Matrix4f();
        float fieldOfView = 40.0f;
        float aspectRatio = (float) Display.getWidth() / (float) Display.getHeight();
        float near_plane = 0.1f;
        float far_plane = 1000f;

        float y_scale = coTangent((float) Math.toRadians(fieldOfView / 2f));
        float x_scale = y_scale / aspectRatio;
        float frustum_length = far_plane - near_plane;

        projectionMatrix.m00 = x_scale;
        projectionMatrix.m11 = y_scale;
        projectionMatrix.m22 = -((far_plane + near_plane) / frustum_length);
        projectionMatrix.m23 = -1;
        projectionMatrix.m32 = -((2 * near_plane * far_plane) / frustum_length);
        projectionMatrix.m33 = 0;

        return projectionMatrix;
    }
1
Are there any good techniques for reducing or smoothing stutter after a longer frame? I've been using SDL2 with OpenGL to play around with some very basic game engine development. I'm running everything on a newer laptop with Linux and Intel integrated graphics. Regardless of whether I have VSync on or off, occasionally a call to SDL GL SwapWindow (glxSwapBuffers under the hood) will take 22 33ms which causes a noticeable stutter in any moving sprites. These long frames happen even if the game loop contains nothing but a call to glClear followed by SDL GL SwapWindow, so I'm assuming this is just due to everything running on a relatively underpowered system (and I don't see the long frames or stutter on my desktop with dedicated graphics). I would have given up here and left it at the laptop not being cut out for OpenGL, but when I look at some indie games running on the same laptop, I'm unable to notice any of these stutter effects leading me to believe there may be a way to eliminate them (or that I'm doing something wrong). A few of the indie games were built with Monogame and at least one is also using SDL2 and OpenGL. I used apitrace to look at the OpenGL calls being made by these games and nothing looks very different from what I'm doing. I then spun up a very basic Monogame project on my own machine, but still experienced the long frames and sprite stutter. I've tried different variations of fixed and variable game loops timesteps as well as deltatime averaging to try to smooth over longer frames, but I still have a noticeable stutter. Is there anything else I can do to eliminate the stutter or the long frames?
1
JOGL hardware-based shadow mapping: computing the texture matrix. I am implementing hardware shadow mapping as described here. I've rendered the scene successfully from the light POV, and loaded the depth buffer of the scene into a texture. This texture has correctly been loaded; I check this by rendering a small thumbnail, as you can see in the screenshot below, upper left corner. The depth of the scene appears to be correct: objects further away are darker, and those that are closer to the light are lighter. However, I run into trouble while rendering the scene from the camera's point of view using the depth texture: the texture on the polygons in the scene is rendered in a weird, nondeterministic fashion, as shown in the screenshot. I believe I am making an error while computing the texture transformation matrix, but I am unsure where exactly. Since I have no matrix utilities in JOGL other than the glLoad/MultMatrix procedures, I multiply the matrices using them, like this:

    void calcTextureMatrix() {
        glPushMatrix()
        glLoadIdentity()
        glLoadMatrixf(biasmatrix, 0)
        glMultMatrixf(lightprojmatrix, 0)
        glMultMatrixf(lightviewmatrix, 0)
        glGetFloatv(GL_MODELVIEW_MATRIX, shadowtexmatrix, 0)
        glPopMatrix()
    }

I obtained these matrices by using the glOrtho and gluLookAt procedures:

    glLoadIdentity()
    val wdt = width / 45
    val hgt = height / 45
    glOrtho(-wdt, wdt, -hgt, hgt, -45.0, 45.0)
    glGetFloatv(GL_MODELVIEW_MATRIX, lightprojmatrix, 0)

    glLoadIdentity()
    glu.gluLookAt(
        xlook + lightpos._1, ylook + lightpos._2, lightpos._3,
        xlook, ylook, 0.0f,
        0.f, 0.f, 1.0f)
    glGetFloatv(GL_MODELVIEW_MATRIX, lightviewmatrix, 0)

My bias matrix is:

    float[] biasmatrix = new float[] {
        0.5f, 0.f,  0.f,  0.f,
        0.f,  0.5f, 0.f,  0.f,
        0.f,  0.f,  0.5f, 0.f,
        0.5f, 0.5f, 0.5f, 1.f }

After applying the camera projection and view matrices, I do:

    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR)
    glTexGenfv(GL_S, GL_EYE_PLANE, shadowtexmatrix, 0)
    glEnable(GL_TEXTURE_GEN_S)

for each component. Does anybody know why the texture is not being rendered correctly? Thank you.
1
Texture Mapping to procedurally generated geometry How can I calculate texture coordinates of such geometry? The angle shown in the image (89.90 degree) may vary, therefore the geometry figure is changing and is not always such uniform.(maybe like geometry in the bottom of image) and red dots are generated procedurally depends on degree of smoothness given.
1
SFML: Segmentation fault when using VBOs? I'm trying to follow along with the gltut tutorials, and for some reason when I call glDrawArrays my program segmentation faults. I've been looking at the state of my application with the Mac OpenGL Profiler and Googling, and I can't seem to figure out why it isn't working. I may have missed setting some OpenGL state somewhere. My code is here; it segfaults on line 32. If I don't call glEnableClientState(GL_VERTEX_ARRAY) and the respective glDisableClientState(GL_VERTEX_ARRAY), it compiles and runs, but I get no rendering, which is what I would expect given my limited OpenGL knowledge.
1
Per fragment lighting with OpenGL 4.x tessellated model I'm experienced with OpenGL 3 . I'm dabbling with tessellation shaders and have now got to a point where I have a nicely tessellated teapot plane demo (quick look here) As can be seen from the screenshots, the lighting is broken (though admittedly doesn't look too bad in the image) I've tried to add a normal map to the equation but it still doesn't come out right, I can calculate the normals, tangents and binormals per triangle in the geometry shader but still looks wrong. I think the question would be How do I add per fragment lighting to a tessellated model? The teapot is 32 16 point patches, the plane is one single 16 point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who cant make sense of them. But peruse at your leisure if you like. Also, if this question is more suited to be somewhere else i.e. Stack Overflow or the Programming stack please let me know.
1
How to convert screen to world coordinates while using gluLookAt/gluPerspective or similar matrix transforms? I am just starting an adventure in looking under the hood of graphics for a game project I've been working on for a while, and I could use some guidance. I am using Python/Kivy (though that is not part of the concern), and am trying to use projection and modelview matrices to perform screen-to-world coordinate conversion. I am using something similar to the gluLookAt and gluPerspective matrix transforms for those. The issue I'm running into is that the coordinates I get out of multiplying the mv and p matrices together, inverting them, then multiplying by NDC screen coords, are only either a fraction of a pixel away from the world position look_at is currently centered on, or at most a few pixels from that point. I know I'm missing something, and I would love it if someone could help me understand. I wrote a standalone example gist, and made a short YouTube video showing what problem I'm having. https://youtu.be/UxbWQO9e0NE https://gist.github.com/spinningD20/951e49cb836f08c434a0e9ab0e90c766 The code in question is the screen_to_world method in the gist, when using the camera look_at/perspective method to create the MVP, which I will list here:

    p = Matrix()
    p.perspective(90., 16 / 9, 1, 1000)
    self.canvas['projection_mat'] = p
    self.canvas['modelview_mat'] = Matrix().look_at(w_x, w_y - 30, self.camera_scale * 350, w_x, w_y, 0, 0, 1, 0)

That is for creating the matrices, and...

    def screen_to_world(self, x, y):
        proj = self.canvas['projection_mat']
        model = self.canvas['modelview_mat']

        # get the inverse of the current matrices, MVP
        m = Matrix().multiply(proj).multiply(model)
        inverse = m.inverse()
        w, h = self.size

        # normalize pos in window
        norm_x = x / w * 2.0 - 1.0
        norm_y = y / h * 2.0 - 1.0

        p = inverse.transform_point(norm_x, norm_y, 0)
        print('convert from screen to world', x, y, p)
        return p[:2]

This was originally written to convert coordinates when using the previous projection matrix, built using translate and creating a clip space (also included in the example). While the implementation appears to be specific to Kivy, it is just a modelview matrix and a projection matrix being used, and their Matrix.transform_point method used above is the same as multiplying a vec against the matrix in question. It can also include what appears to be the W part of a vec4, which I have also experimented with, with no apparent change. Here is a screenshot of the standalone example, painting where I have moved the mouse on the screen (red) and where the resulting world coordinate ends up being (green). The goal is for the converted coordinates in the world to fall directly under the red.
1
How to render models using a personalized projection matrix? I'm creating a game with a great sprite demand. Our team is considering automating the sprite generation using 3D models. The problem is we have a very particular orthographic projection. We have already set up a good projection matrix. The problem is that none of today's 3D renderers have an option for using custom matrices. How is this kind of problem dealt with?
1
GLSL fragment shader compiles fine only on certain computers. I have this piece of code in a fragment shader:

    if (color.a < 0.01f) discard;

It runs fine on an older graphics card but can't even compile on my newer (maybe not new, but...) GTX 770. I have the most recent drivers and everything. The version is correct, OpenGL 4.4. The shader compiles like normal if I remove the discard statement. I've been searching around and I'm finding nothing about this. Does discard just not work on certain graphics cards, or what?
1
How can I compute the orbit of one body around another? I'm attempting to have a planet (with a known mass and radius) orbit its sun (also with a known mass and radius). It doesn't have to be 100% realistic, but it should be possible for the sun to have more than one planet orbiting it at a time. What equations should I use to accomplish this?
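As a point of reference, a minimal Newtonian sketch using semi-implicit Euler integration; the gravitational constant, masses and time step are placeholders to be scaled for the game:

    #include <glm/glm.hpp>

    const float G = 6.674e-11f;           // gravitational constant (rescale for game units)

    void stepOrbit(glm::vec3& planetPos, glm::vec3& planetVel,
                   const glm::vec3& sunPos, float sunMass, float dt)
    {
        glm::vec3 toSun = sunPos - planetPos;
        float r = glm::length(toSun);
        glm::vec3 accel = glm::normalize(toSun) * (G * sunMass / (r * r)); // a = GM / r^2
        planetVel += accel * dt;          // update velocity first (semi-implicit Euler)
        planetPos += planetVel * dt;
    }

    // A near-circular orbit results if the planet starts with tangential speed
    // v = sqrt(G * sunMass / r), and each additional planet is just another
    // position/velocity pair stepped the same way.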
1
How to move a 2D shape in OpenGL 4.6 (GLFW), class Button. So, from my understanding, in order to move a 2D shape you need to manipulate the vertices and then update???? If I do need to update the buffer, how would I go about doing that?

    float xpos = 150;  // Coords relative to the window application, so let's say the window size is 480 by 260
    float ypos = 60;
    // Where would I plug in the position in the vertices array?
    float vertices[] = {
        .5f, .5f, 0,
        .5f, .5f, 0,
        .5f, .5f, 0,
        .5f, .5f, 0
    };

    shader.setup("Resources/Shaders/shape.vertex", "Resources/Shaders/shape.frag");

    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    // bind the Vertex Array Object first, then bind and set vertex buffer(s), and then configure vertex attributes(s).
    glBindVertexArray(VAO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    // note that this is allowed, the call to glVertexAttribPointer registered VBO as the vertex attribute's
    // bound vertex buffer object so afterwards we can safely unbind
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    // You can unbind the VAO afterwards so other VAO calls won't accidentally modify this VAO, but this rarely happens.
    // Modifying other VAOs requires a call to glBindVertexArray anyways so we generally don't unbind VAOs (nor VBOs)
    // when it's not directly necessary.
    glBindVertexArray(0);
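For illustration, one way the GL_DYNAMIC_DRAW buffer above could be refreshed after moving the vertices is glBufferSubData; the pixel-to-NDC math here is an assumption based on the 480×260 window mentioned in the post:

    float offsetX = xpos / 240.0f - 1.0f;   // rough pixel -> NDC conversion for a 480-wide window
    float offsetY = 1.0f - ypos / 130.0f;   // and a 260-high window (y flipped)

    float moved[12];
    for (int i = 0; i < 4; ++i) {
        moved[i * 3 + 0] = vertices[i * 3 + 0] + offsetX;
        moved[i * 3 + 1] = vertices[i * 3 + 1] + offsetY;
        moved[i * 3 + 2] = vertices[i * 3 + 2];
    }

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(moved), moved);  // overwrite the stored positions in place

Another common design is to leave the vertex data alone and move the shape with a uniform offset or model matrix in the vertex shader instead.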
1
Directional light shader not behaving as expected. I coded my first GLSL shader, which manages the diffuse and specular effects of a directional light. This is the fragment shader:

    #version 120

    // matrix
    uniform mat4 model_matrix;
    // directional light position in world coordinates
    uniform vec3 light_position;
    uniform vec3 eye;

    // sent to fragment shader
    varying vec4 color;
    varying vec4 normal;
    varying vec4 vertex;

    void main()
    {
        float light_diffuse_intensity;
        float light_specular_intensity;
        float light_mod;
        vec3 world_normal;
        vec3 world_vertex;
        vec3 world_reflec;
        vec3 vertex_to_light;
        vec3 vertex_to_eye;

        world_normal = (model_matrix * normal).xyz;
        world_vertex = (model_matrix * vertex).xyz;

        vertex_to_light = light_position - world_vertex;  // vector from vertex to light
        vertex_to_eye = eye - world_vertex;               // vector from vertex to eye

        // reflection of vertex_to_light
        world_reflec = 2 * dot(vertex_to_light, normalize(world_normal.xyz)) * normalize(world_normal) - vertex_to_light;

        // computing angle between light and normal
        light_diffuse_intensity = dot(normalize(vertex_to_light), normalize(world_normal.xyz));
        // computing angle between eye and reflection
        light_specular_intensity = dot(normalize(vertex_to_eye), normalize(world_reflec));

        if (light_diffuse_intensity > 0)
        {
            light_mod = light_diffuse_intensity;
            light_specular_intensity = pow(light_specular_intensity, 8.0);
        }
        else
        {
            light_specular_intensity = 0;
        }

        gl_FragColor = vec4(mix(color.rgb * light_diffuse_intensity, vec3(1.0, 1.0, 1.0), light_specular_intensity), 1.0);
    }

As you can see from the screenshot below, in the first screen, where I look at the sphere from the direction of the light, the border of the sphere is white because of the specular effect. However, if the shader were correct it should not look like that: it should be white only in the middle of the sphere. In the second screen, I look at both the sphere and the source of the directional light (the yellow sun) and everything looks just fine. I cannot find the error in my shader; could someone help, please? Thank you very much!
1
How to use a mask texture? A texture pack for the Sponza model contains mask textures (black and white). I guess that I should read only the red channel from that texture, right? I use deferred rendering, and for shading calculations I use additive blending. The result is first saved in a texture (for some post-process operations), so it doesn't go directly to the default framebuffer. How do I use a mask texture?
1
LWJGL GL_QUADS texture artifact. I managed to get LWJGL working in Java, and I loaded a test image (a TV test card), but I keep getting weird artifacts outside the image. Code:

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2i(10, 10);
    glTexCoord2f(1, 0);
    glVertex2i(500, 10);
    glTexCoord2f(1, 1);
    glVertex2i(500, 500);
    glTexCoord2f(0, 1);
    glVertex2i(10, 500);
    glEnd();

What could be the cause?
1
How can I render text using the new(ish) JOGL GPU curve rendering classes? I'm fairly new to OpenGL/JOGL, working through various tutorials and books and making steady progress. Text, however, is an area where I'm stuck. I figured out one way using BufferedImage and Graphics2D to draw strings and then swizzle the pixels and copy to an OpenGL texture, but the quality is low, it is resolution dependent, and it's not efficient. I found this: http://forum.jogamp.org/GPU-based-Resolution-Independent-Curve-Rendering-td2764277.html. Unfortunately, while there are some demos in the GitHub repo, I can't quite get my head around them. The code I've tried to use is below. In the init() method:

    InputStream fontFile = getClass().getResourceAsStream("media/futura.ttf");
    try {
        font = FontFactory.get(fontFile, true);
    } catch (IOException e) {
        System.err.println("Couldn't open font!");
        e.printStackTrace();
    }
    RenderState renderState = RenderState.createRenderState(SVertex.factory());
    renderState.setColorStatic(1, 1, 1, 1);
    renderState.setHintMask(RenderState.BITHINT_GLOBAL_DEPTH_TEST_ENABLED);
    renderer = RegionRenderer.create(renderState, RegionRenderer.defaultBlendEnable, RegionRenderer.defaultBlendDisable);
    renderer.init(gl, Region.VBAA_RENDERING_BIT);
    util = new TextRegionUtil(Region.VBAA_RENDERING_BIT);

In the display() method:

    PMVMatrix pmv = renderer.getMatrix();
    pmv.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
    pmv.glLoadIdentity();
    pmv.glTranslatef(0, 0, -300);
    float pixelSize = font.getPixelSize(32, 96);
    util.drawString3D(gl, renderer, font, pixelSize, "Test", null, samples);

I've searched and searched for a tutorial on this stuff or a simple, commented code example explaining how it works, but to no avail. If anyone can help me I'd be extremely grateful!
1
What is the fastest way of drawing simple, textured geometries while keeping the depth test? I'm looking for a fast way to draw simple 3D geometries that will consist of up to 10 vertices each. Each of them will have a texture (though varying between geometries). I also want to store the fragment depth for depth testing. There will be no lighting and only very simple transformations (this actually is for an isometric game). So far what I did was try to achieve top performance using OpenGL for drawing simple quads (which is what the problem actually boils down to). For my machine (from around 2008), I had the following results: using immediate mode, 4800 quads took 120 ms; using vertex arrays and VBOs, this took around 40 ms. What I see as a problem is the bottleneck of glDraw calls. I want to somehow get below 4-5 ms, as there will be really a lot of objects on the screen. Any ideas about how I could (or perhaps couldn't) achieve that?
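For context, the usual next step beyond per-quad draw calls is batching many quads into a single buffer and a single draw call per frame; a rough C++ sketch (the Vertex layout, vbo and quadCount are assumptions, not from the post):

    #include <vector>

    struct Vertex { float x, y, z, u, v; };

    std::vector<Vertex> batch;
    batch.reserve(quadCount * 6);          // two triangles per quad
    // ... append 6 vertices per quad here, already transformed or with per-quad data ...

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, batch.size() * sizeof(Vertex), batch.data(), GL_STREAM_DRAW);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.size());   // one call instead of thousands

Grouping quads by texture (or using a texture atlas) keeps the number of such calls small.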
1
How do I pick tiles from an isometric map with slopes? I'm looking for a way to convert mouse screen coordinates to isometric map coordinates, with the addition that the world has slopes and cliffs, and I have to be able to tell which quadrant of the tile is being pointed at by the mouse. The textures are handled by OpenGL, so I can't (easily) pick directly based on the tile sprite. I've found several similar solutions, (e.g. Isometric Tiles Math, XNA Resources amp Mouse Maps for Isometric Height Maps with the latter looking most promising) but none of them seem to quite fit my requirements. My tiles look like this What algorithm or technique could I use here?
1
GLSL variables as main function params vs. on their own line? I am learning OpenGL and GLSL. I was taught that the in/out variables should be formatted like this:

    in vec3 something;
    out vec3 somethingElse;

    int main() { etc... }

However, I ran across code like this online (ShaderToy):

    int main(in vec3 something, out vec3 somethingElse)

Is there a difference? Is there any difference?
1
Does SDL2 completely encapsulate Direct3D and OpenGL? I'm starting to study game development, but the concepts of how the SDL2 lib works are still a bit blurry to me. I get that both Direct3D and OpenGL are two sides of the same coin: they are both used to draw vectors on the screen and serve as a graphics hardware abstraction layer. From what I read about the SDL2 lib, it seems to be a multi-purpose library that encapsulates functionality such as user input, threading, networking, window creation, and so on, and also "graphics hardware via OpenGL and Direct3D". Now the questions: The SDL2 webpage states that it provides access to "graphics hardware via OpenGL and Direct3D". What exactly does that mean? If I'm making a 3D game, do I need to decide beforehand which graphics API I want to use (Direct3D or OpenGL) and use it alongside SDL2? If not, does SDL2 completely encapsulate the graphics API so I could easily deploy my game targeting Direct3D or OpenGL without adapting too much code for each one? In the case that SDL2 does not completely abstract the graphics API, what texts and terms should I search for to learn how to implement such an abstraction layer using SDL2? I want to be able to easily build for Direct3D and OpenGL. Thanks very much!
1
In a shader, why does substituting a variable with the expression producing it cause different behaviour? I have this correctly working OpenGL shader producing Perlin noise:

    float left = lerp(fade(v), downleft, topleft);
    float right = lerp(fade(v), downright, topright);
    float result = lerp(fade(u), left, right);

Then I tried plugging the definitions of right and left into result:

    float result = lerp(fade(u), lerp(fade(v), downleft, topleft), lerp(fade(v), downright, topright));

Surprisingly, this behaves completely differently, giving visible edges in my Perlin noise. Below are both results. My whole 30-line shader is here. What is the difference between those?
1
How can I unit test rendering output? I've been embracing Test Driven Development (TDD) recently and it's had wonderful impacts on my development output and the resiliency of my codebase. I would like to extend this approach to some of the rendering work that I do in OpenGL, but I've been unable to find any good approaches to this. I'll start with a concrete example so we know what kinds of things I want to test lets say I want to create a unit cube that rotates about some axis, and that I want to ensure that, for some number of frames, each frame is rendered correctly. How can I create an automated test case for this? Preferably, I'd even be able to write a test case before writing any code to render the cube (per usual TDD practices.) Among many other things, I'd want to make sure that the cube's size, location, and orientation are correct in each rendered frame. I may even want to make sure that the lighting equations in my shaders are correct in each frame. The only remotely useful approach to this that I've come across involves comparing rendered output to a reference output, which generally precludes TDD practice, and is very cumbersome. I could go on about other desired requirements, but I'm afraid the ones I've listed already are out of reach.
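For reference, the reference-image comparison mentioned above typically boils down to a framebuffer readback like this in C++ (the tolerance and the source of the reference image are placeholders):

    #include <vector>
    #include <cstdlib>

    bool frameMatchesReference(int width, int height, const std::vector<unsigned char>& reference)
    {
        std::vector<unsigned char> pixels(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        for (size_t i = 0; i < pixels.size(); ++i)
            if (std::abs(int(pixels[i]) - int(reference[i])) > 2)   // small per-channel tolerance
                return false;
        return true;
    }

A common alternative that fits TDD better is to test the math (transforms, culling, lighting equations) as plain functions and reserve image comparison for a small set of integration tests.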
1
Calculating vertex normals (vn) causes ugly lines (C++, OpenGL) I'm trying to procedurally generate planets for a project I'm working on. By adding noise to each vertex, I'm able to generate elevation, but without updated vertex normals for my shader. It originally looked something like the picture below. When I tried to recalculate the vertex normals, I got some ugly sharp lines, most likely as a result of the way I was parsing the model. This is the code I'm using to recalculate the normals:

for (auto& x : normals) x = vec3(0);            // zero normals
vec3 facenormal;
// buffer indices is the faces/triangles indexed, in groups of 3
for (size_t i = 0; i < indices.size(); i += 3)
{
    facenormal = cross((shape[indices[i + 1]] - shape[indices[i]]),
                       (shape[indices[i + 2]] - shape[indices[i]]));
    normals[indices[i]]     += facenormal;
    normals[indices[i + 1]] += facenormal;
    normals[indices[i + 2]] += facenormal;
}
for (auto& x : normals) x = normalize(x);       // normalize adjacent faces

Basically I add up the adjacent face normals on the vertices they belong to, then normalize the accumulated value. Then, based on suggestions, I took a look at the parsing and made sure it doesn't share vertices in the indexing. This seems to remove the lines, but then I'm stuck with flat shading. But according to this, it should be accurate. So I'm still not fixing the problem. What is the correct formula for the vertex calculation, if it's not what I'm already doing? Like user Kolenda suggests, I was not modifying values at the base mesh level, but rather doing it after loading the indexed copies. So what I end up with is a combination of flat-shaded faces "shading" ghost vertex normals. Iterating over the faces on a mesh level, with the same formula, solves the problem.
1
Are there alternatives to multiple buffering? I'm learning about game programming. So far, my sources lead me to believe that multiple buffering is the best technique for redrawing in OpenGL. Are there other redrawing techniques? What are their advantages and disadvantages? I'm not sure about the terminology yet, but googling as best I can hasn't yielded many results. Does anyone have any experience with redrawing techniques besides multiple buffering?
1
Texture loading: everything at once, or (un)loading only the needed assets? Good evening. We've been developing quite a big game for Android on the basis of AndEngine, so we have a lot of assets to load, especially textures. At the moment everything (sound, textures, etc.) for every screen (menu, shop, and so on) is loaded when the app starts (while showing a progress bar). This way the user only has to wait once, about 16 seconds, at the start of the game. We think that this is a pretty pleasant solution from the user's perspective, but might it be bad in terms of battery usage, memory usage or for any other reason? What arguments speak for a solution where we unload all the screen-specific assets of the active screen and load the assets needed for the next screen? Thank you for your opinion!
1
Unintended stuttering when moving camera I am using OpenGL and GLFW to make a rendering engine. Things work per se, but there is some weird flickering happening when I move the camera. Due to the nature of the problem I need to link a YouTube video: https www.youtube.com watch?v XhFrXadnWbs amp feature youtu.be If you pay attention you will see the stuttering happening as I move the camera. I believe this problem occurs due to how I have implemented my camera movement, which is:

// GLFW key callback
#define CAM_SPEED 0.2f
void static key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (key == GLFW_KEY_W) c.translateForward(CAM_SPEED);
    if (key == GLFW_KEY_S) c.translateForward(-CAM_SPEED);
    if (key == GLFW_KEY_A) c.translateSideways(-CAM_SPEED);
    if (key == GLFW_KEY_D) c.translateSideways(CAM_SPEED);
}

// Camera code
void inline translateForward(float speed)
{
    glm::vec3 hForward = forward;
    hForward.y = 0;
    hForward = normalize(hForward);
    position += hForward * speed;
}

void inline translateSideways(float speed)
{
    position += side * speed;
}
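For comparison, a hedged sketch of a common alternative: instead of moving inside the key callback (which only fires on key events, with OS key-repeat gaps), poll the key state once per frame and scale by the frame time. The Camera type name and deltaTime measurement are assumptions; translateForward/translateSideways and the object c come from the code above:

#include <GLFW/glfw3.h>

// Called once per frame from the main loop, not from the key callback.
void processInput(GLFWwindow* window, Camera& c, float deltaTime)
{
    const float camSpeed = 2.5f * deltaTime;   // units per second, value is illustrative
    if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) c.translateForward( camSpeed);
    if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS) c.translateForward(-camSpeed);
    if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS) c.translateSideways(-camSpeed);
    if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) c.translateSideways( camSpeed);
}

Polling applies a small movement every frame rather than one step per key event, which is usually what smooth camera motion needs.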
1
Searching for a "university-style" free online course about OpenGL I know there are a lot of free university courses, but I'm trying to find one about OpenGL. Do you know where I can find one online?
1
3D model rendered in halftone I am writing a 3D renderer that would render 3D models as if they had been drawn in halftone. Feature-wise I need nothing fancy: no colors, just pure 3D geometry and basic lighting, like here. Image credit https blenderartists.org forum showthread.php?438820 Suzanne Sketch Notes I have written a basic 3D renderer in the past; however, I am really inexperienced when it comes to shaders (again, I do know the basics) and I have absolutely no idea where to start with this. The example image was created in Blender, but I am writing my own custom renderer. This is not a duplicate of "How can I make a shader effect that looks like a lightly shaded pencil drawing?", as my input is a 3D model, whereas the OP in the linked question used an image.
1
OpenGL ES: draw pre-rendered background onto depth buffer I want to create a scene with a 2D pre-rendered background and 3D models for characters (like those classic Final Fantasy games). For the background, I have two textures: one to be displayed, with colors, details, and so on, and one to be used as a depth buffer (the depth value stored as a grayscale value, maybe). I want to write a shader program to help me discard all the fragments which have a depth smaller than the current depth. But then, do the background and the models have to use the same shader? What value should I compare with the depth value? Is this approach even possible?
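A minimal sketch of one way the comparison could look in the characters' fragment shader under OpenGL ES 2.0, assuming the background depth is stored as a grayscale texture in the same 0..1 convention as gl_FragCoord.z (the uniform names are made up):

// Fragment shader (GLSL ES 2.0) for the 3D characters, kept as a source string.
const char* kCharacterFragmentShader = R"(
    precision mediump float;
    uniform sampler2D uBackgroundDepth;   // grayscale depth of the pre-rendered background
    uniform vec2 uViewport;               // viewport size in pixels

    void main()
    {
        vec2 screenUV = gl_FragCoord.xy / uViewport;
        float bgDepth = texture2D(uBackgroundDepth, screenUV).r;
        if (gl_FragCoord.z > bgDepth)
            discard;                      // character is hidden behind the background here
        gl_FragColor = vec4(1.0);         // real character shading goes here
    }
)";

With this kind of setup the background itself can be drawn with an ordinary textured-quad shader; only the models need the depth comparison.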
1
Palette reduction to a pre-defined palette I'm writing a bunch of GLSL effects for fun, but I can't wrap my head around this. Basically, I want to reduce a texture's palette to a pre-defined set of colors. For example, a post-processing shader would take the FBO result texture and a 1D/2D texture containing the reduced palette (say, 64 colors), and the output would then be sampled based on these. Something like this at its core:

sampler2D source;
sampler2D palette;

vec4 source = texture2D(...);   // Source texture to sample
vec4 palette = texture2D(...);  // Palette
vec3 result = ...;              // color from the source converted to the closest value available in the palette

Any ideas how to do this? And to be more precise, I don't mean palette swapping: the source texture is a full-colored result, and this shader would ultimately be a post-processing shader, reducing the number of colors to the ones predefined in the palette texture.
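One brute-force sketch of the "closest value in the palette" search, assuming the 64 entries sit in a 64x1 texture; PALETTE_SIZE, the uniform names and the varying are all illustrative:

const char* kPaletteReduceShader = R"(
    #version 120
    #define PALETTE_SIZE 64
    uniform sampler2D source;    // FBO colour result
    uniform sampler2D palette;   // 64x1 texture holding the reduced palette
    varying vec2 vTexCoord;

    void main()
    {
        vec3 src = texture2D(source, vTexCoord).rgb;
        vec3 best = vec3(0.0);
        float bestDist = 1e9;
        for (int i = 0; i < PALETTE_SIZE; ++i)
        {
            vec2 uv = vec2((float(i) + 0.5) / float(PALETTE_SIZE), 0.5);
            vec3 candidate = texture2D(palette, uv).rgb;
            float d = dot(src - candidate, src - candidate);
            if (d < bestDist) { bestDist = d; best = candidate; }
        }
        gl_FragColor = vec4(best, 1.0);
    }
)";

Sixty-four fetches per pixel is heavy but workable for a post effect, and the distance could also be measured in a perceptual colour space instead of raw RGB.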
1
Stencil buffer vs. conditional discard in fragment shader I have a continuous height-mapped mesh representing a landscape. I also have 1 to, let's say, 10 wells on this landscape, represented by additional models. What I want to achieve is to create the illusion of an actual hole in the landscape at the place where each well sits. I see two possible solutions: draw the holes to the stencil buffer and then use it to discard landscape fragments, or send an array of uniforms (vec2) and then conditionally discard fragments in the fragment shader if they are near those hole points. The question is: which way should I prefer if I'm fine with rectangular holes and want the best performance?
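For reference, a minimal sketch of the stencil variant with standard GL calls; drawHoleQuads and drawLandscape are placeholders for the question's own draw calls:

glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);

// Pass 1: write 1 into the stencil wherever a hole is, without touching colour or depth.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawHoleQuads();

// Pass 2: draw the landscape only where the stencil is still 0.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawLandscape();

glDisable(GL_STENCIL_TEST);

The stencil path rejects fragments with fixed-function hardware and needs no per-fragment loop over hole positions, at the cost of the extra marking pass.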
1
Poor performance with custom particle system in LibGDX I'm using a custom particle system for my LibGDX (Java) based game project (because I used Slick2D earlier on, needed more parameters, so I made my own and then ported it). The system is fairly standard as far as I'm concerned: it uses particle pooling (each emitter has its own fixed-size particle pool), renders each particle in a batch, and uses one single (2048x1024) packed texture with all particle textures on it. Here is how I render the particles, in semi-pseudo-code (because the entire code isn't relevant):

// ParticleSystem class
// Note: ExtendedBatch is my custom sprite batch implementation, largely just the normal
// SpriteBatch with two additional vertices for grayscale factor and additive tinting
void renderEmitter(ParticleEmitter emitter, ExtendedBatch batch) {
    ParticlePool pool = particlePools.get(emitter);
    if (emitter.shouldScissor) Renderer.pushScissor(emitter.scissor);
    for (Particle particle : pool.particles) {
        if (particle.inUse) particle.render(batch);
    }
    if (emitter.shouldScissor) Renderer.popScissor();
}

// Particle class
void render(ExtendedBatch batch) {
    relativeX = getRelativeX();
    relativeY = getRelativeY();
    if (isInScreenBounds(relativeX, relativeY)) {
        batch.setColor(myColor);
        batch.draw(myTexture, position, origin, size, scale, rotation);
    }
}

Now, for some reason, with only around 300 particles (split up into 10 emitters of varying sizes) the performance drops to an awful 30 FPS on my notebook's integrated GPU (Intel HD 4400), when I want 60 FPS at all times. I know iGPUs aren't great, but that one is one of the better ones out there, and games like Ori or Braid, which have thousands of similar particles, run without any problems at 60 FPS on that very chip. I also doubt (and hope) that it's not just Java vs. C++ causing this huge performance drop here. Looking at the in-game profiling data: it shows a few things. Particles aren't really taking that long to render and there is a lot of idle time. To me, that doesn't really make much sense. It looks like there would be enough resources to easily render everything at 200 FPS or more, but it is stuck at a horrible 30 FPS. There are a lot of things I already tried that didn't help:
Packing all particle textures into one (which is 2048x1024)
Batching all particle draw calls
Profiling to find out the cause (see above)
VisualVM to find potential memory issues, didn't help
Disabling vSync and FPS locks doesn't help
For the record, here's a VisualVM CPU sample over a timespan of 2 minutes. There must be something I'm doing wrong, and I also want and need more particles than just 300, so I definitely have to fix this, but I don't know how. Update: Using the default SpriteBatch implementation and default shaders doesn't improve performance either. Update/Solution: I forgot to turn off MSAA. I had it running at a 2x sampling rate for smoother antialiasing and completely forgot to check for that. In the meantime I improved performance in a lot of other parts, but that's what finally did it.
1
Wrong faces culled in OpenGL when drawing a rectangular prism I'm trying to learn OpenGL. I wrote some code for building a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But I keep getting back faces drawn even when viewing from the front, and sometimes when rotating, sides vanish. Can someone point me in the right direction?

glPolygonMode(GL_FRONT, GL_LINE);   // draw wireframe polygons
glColor3f(0, 1, 0);                 // set color green
glCullFace(GL_BACK);                // don't draw back faces
glEnable(GL_CULL_FACE);             // don't draw back faces
glTranslatef(-10, -1, 0);           // position
glBegin(GL_QUADS);
// face 1
glVertex3f(0, -1, 0);
glVertex3f(0, -1, 2);
glVertex3f(2, -1, 2);
glVertex3f(2, -1, 0);
// face 2
glVertex3f(2, -1, 2);
glVertex3f(2, -1, 0);
glVertex3f(2, 5, 0);
glVertex3f(2, 5, 2);
// face 3
glVertex3f(0, 5, 0);
glVertex3f(0, 5, 2);
glVertex3f(2, 5, 2);
glVertex3f(2, 5, 0);
// face 4
glVertex3f(0, -1, 2);
glVertex3f(2, -1, 2);
glVertex3f(2, 5, 2);
glVertex3f(0, 5, 2);
// face 5
glVertex3f(0, -1, 2);
glVertex3f(0, -1, 0);
glVertex3f(0, 5, 0);
glVertex3f(0, 5, 2);
// face 6
glVertex3f(0, -1, 0);
glVertex3f(2, -1, 0);
glVertex3f(2, 5, 0);
glVertex3f(0, 5, 0);
glEnd();
1
btRigidBody world transform and scale issue When I create a collision shape for a rigid body (in this case a box) I use the vertex positions and the scale from the OpenGL matrix; the code looks this way:

glm::vec3 boxDim = getBoxDim(verticesPositions, scale);
collisionShape_.reset(new btBoxShape(btVector3(boxDim.x, boxDim.y, boxDim.z)));

When a simulation step is done I need to update the OpenGL matrix for the object. I calculate a new model matrix this way:

btTransform transform = rigidBody->getWorldTransform();
float openglMatrix[16];
transform.getOpenGLMatrix(openglMatrix);
glm::mat4 newModelMatrix = glm::make_mat4(openglMatrix);
newModelMatrix = glm::scale(newModelMatrix, glTransform.getScale());

My code works, but I wonder if there is any possibility of already having the scale in the transform returned by getWorldTransform. Then I could omit this line of the code:

newModelMatrix = glm::scale(newModelMatrix, glTransform.getScale());
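If it helps, a btTransform only carries a rotation and a translation, so getWorldTransform can never include scale. Bullet keeps scale on the collision shape instead; a one-line sketch (assuming scale here is the same glm vector passed to getBoxDim above):

// Scale lives on the collision shape, not in the rigid body's world transform.
collisionShape_->setLocalScaling(btVector3(scale.x, scale.y, scale.z));

The glm::scale applied to the model matrix on the rendering side therefore stays the usual place to put the visual scale back.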
1
Arbitrary number of VBOs to vertex shader I am currently using the standard modern OpenGL way to render a mesh via a VBO and attributes:

glEnableVertexAttribArray(aVertexPosition);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(aVertexPosition, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);

and in the GLSL code the standard way to receive it, as in vec3 aVertexPosition. However, I wonder if there is any way to make this work for a dynamic number of VBOs. Let's say my software creates a number of VBOs that depends on the inputs, and I successfully create and allocate memory for all of them; how do I pass all of them to the shader? Does the number need to be constant? If so, what is the best way to do it?
1
How can I animate a portion of the textures on a model? I have a model to which I have attached multiple textures. Both textures are currently static, but if I want to move (or slide) the texture which is on the top (in UV space), is that possible? Maybe by moving the texture coordinates or something?
1
How can I extrude a regular, grid-based 2D shape to 3D? I have a list of vertex coordinates which encircle several 2D areas. Orthogonal lines only, but not necessarily convex areas... similar to PCB traces of conductive copper areas. I want to draw them like solid objects in 3D using OpenGL, but I'm still very new to 3D and struggling. Please point me in the right direction. I've managed to draw the outline only, using GL_LINE_STRIP, but when I try GL_QUAD_STRIP or GL_TRIANGLE_STRIP, the list of coordinates is not properly ordered for a triangle strip, and the fact that the areas are not convex prevents me from doing a simple triangular tessellation. These are the 2D shapes; I'd like to render them like this.
1
Accessing 3D texture data without normalized coordinates directly, but with filtering texelFetch() exists to access texture data with texture coordinates in "image dimensions", but texelFetch skips filtering. In case of 2D textures, it's possible to use a rectangle texture sampler to access textures without normalized coordinates with filtering, but the same equivalent doesn't seem to exist for 3D textures. Is this somehow possible with 3D textures?
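One workaround sketch, assuming GLSL 1.30 or later: convert the texel coordinate to a normalized coordinate yourself with textureSize(), which keeps the sampler's ordinary linear filtering. The uniform and helper names are illustrative:

const char* kFilteredFetchSnippet = R"(
    uniform sampler3D uVolume;

    vec4 fetchFiltered(ivec3 texel)
    {
        vec3 size = vec3(textureSize(uVolume, 0));
        vec3 uvw  = (vec3(texel) + 0.5) / size;   // centre of the requested texel
        return texture(uVolume, uvw);             // linear filtering still applies
    }
)";

Offsetting texel by a fractional amount before the divide then gives filtered samples between texels, which is exactly what texelFetch cannot do.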
1
Draw two models changing the MVP matrix (Android OpenGL) I'm a newbie with OpenGL ES 2 on Android, and I'm making an app in which I'm testing some things. Now I was trying to duplicate a sphere object with a texture, so I thought it was enough to change my MVP matrix by applying a small translation to the model matrix, using the same view matrix and the same projection matrix.

int[] numIndices = balon.getNumIndices();
ShortBuffer[] indices = balon.getIndices();

GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
for (int j = 0; j < numIndices.length; j++)
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, numIndices[j], GLES20.GL_UNSIGNED_SHORT, indices[j]);

GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix2, 0);
for (int j = 0; j < numIndices.length; j++)
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, numIndices[j], GLES20.GL_UNSIGNED_SHORT, indices[j]);

balon is an object of type Sphere. mMVPMatrix and mMVPMatrix2 are the MVP matrices I was talking about. I simply changed the MVP matrix and did the drawing again. The result is that I have the two spheres, but one of them is behaving strangely: it appears distorted. The position in which it appears is right, and the two spheres move together, but one of them changes its shape. What am I missing? I guess I have to duplicate some things in my code before I do this, but what? I thought that, being the same model, this was enough... This is how I changed the MVP matrix:

Matrix.setIdentityM(mModelMatrix, 0);
Matrix.setIdentityM(mModelMatrix2, 0);
Matrix.translateM(mModelMatrix, 0, posX, posY, posZ);
Matrix.translateM(mModelMatrix2, 0, posX + 0.5f, posY + 0.5f, posZ);
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
Matrix.multiplyMM(mMVPMatrix2, 0, mViewMatrix, 0, mModelMatrix2, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
Matrix.multiplyMM(mMVPMatrix2, 0, mProjectionMatrix, 0, mMVPMatrix2, 0);
1
What is glViewport for and why is it not necessary sometimes? I develop my game most of the time under Arch Linux. I recently decided to try it on Ubuntu 16.04. The situation I came across was really strange: no glGetError()s at all; no errors, warnings or anything bad in the GL debug output; none of the assertions I made for bad situations triggered; glClear successfully cleared the buffers I specified, but glDraw* did not draw anything. Luckily, I found a solution, which was to set glViewport. However, I don't quite understand why it is necessary for drawing on Ubuntu but not on Arch Linux. Another difference is the graphics adapter: on Arch Linux I use an NVIDIA 1060, on Ubuntu I use the integrated HD Graphics P530 (Skylake GT2). If it is important, I use SDL2 with OpenGL 4.4.
1
Orbit object around another object I am building a 3D solar system containing a sun, an earth and a moon. I am having difficulty adding the part in which the moon rotates around the earth. I treat the "moon" separately from the other bodies. The problem is that the center of the earth is not constant like the sun's. I know that the earth is at starsList[1], so I thought I would translate the moon to that place, but I got a weird result... Does someone have an idea how to do this?

mat4.identity(mvMatrix);
mat4.translate(mvMatrix, [0, 0, z]);
for (i = 0; i < starsList.length; i++) {
    mvPushMatrix();
    if (starsList[i].name == "moon") {
        mat4.rotate(mvMatrix, degToRad(starsList[i].angle), [0, 1, 0]);
        mat4.translate(mvMatrix, starsList[1].initPlace);
        mat4.rotate(mvMatrix, degToRad(starsList[i].angle), [0, 1, 0]);
    } else {
        mat4.rotate(mvMatrix, degToRad(starsList[i].angle), [0, 1, 0]);
        mat4.translate(mvMatrix, starsList[i].initPlace);
        mat4.rotate(mvMatrix, degToRad(starsList[i].angle), [0, 1, 0]);
    }
    mvPopMatrix();
}
1
Error: OpenGL (null) with Cocos2D in Visual Studio When I try to execute game code that is fully functional in Visual Studio 2013, using the Cocos2D (3.8.1) library, this error jumps on screen (I'm working on a laptop): "OpenGL 1.5 or higher is required (your version is (null)). Please upgrade the driver of your video card." And then this one. My graphics card is an AMD Radeon HD 7600M (OpenGL version 4.x), with the drivers fully updated. Neither my college teacher nor I know how to fix it. What can I do to fix it?
1
How can I port my OpenGL game to Linux? I made a game with OpenGL 4.3 (core profile) and C++. I used GLFW3 for window and context management. I am also using a bunch of third-party libraries which are also available for Linux. What things do I need to consider if I want to port the game, and how can I make it support both Windows and Linux?
1
How can I dim the screen when I click the pause button? I am working on an Android game using cocos2d. I want to dim the background screen when I click the pause button. How can I do this?
1
Camera movement along with model I am making a game in which a cube travels along a maze, with the goal of crossing the maze safely. I have two problems with this. First, the cube needs to have smooth movement, as if it were traveling on a frictionless surface; could someone help me achieve this? I need to have this done in an event callback function. Second, I need to move the camera along with the cube; could someone recommend a good tutorial about positioning the camera relative to an object?
1
Got garbage when getting shader compile log I got garbage when trying to compile a shader and retrieve the error log. I checked my code against some sample code, and couldn't figure out what I did wrong. Below is my code:

bool LoadShader(GLuint ShaderType, char const* ShaderFileName)
{
    os_file ShaderFile = OSReadFile(ShaderFileName);
    GLint ShaderHandle = glCreateShader(ShaderType);
    DebugPrint("Shader source %s\n%.*s\n", ShaderFileName, ShaderFile.Length, ShaderFile.Contents);

    GLchar* ShaderSource = (GLchar*)&ShaderFile.Contents;
    GLint ShaderSourceLength = ShaderFile.Length;
    glShaderSource(ShaderHandle, 1, &ShaderSource, &ShaderSourceLength);
    glCompileShader(ShaderHandle);
    OSFreeFile(&ShaderFile);

    GLint Success;
    glGetShaderiv(ShaderHandle, GL_COMPILE_STATUS, &Success);
    if (!Success)
    {
        GLchar CompileLog[512];
        glGetShaderInfoLog(ShaderType, 512, NULL, CompileLog);
        switch (ShaderType)
        {
#define SHADER_TYPE_ERROR(ShaderType) case ShaderType: DebugPrint(#ShaderType " Compile error!\n"); break;
            SHADER_TYPE_ERROR(GL_VERTEX_SHADER)
            SHADER_TYPE_ERROR(GL_FRAGMENT_SHADER)
#undef SHADER_TYPE_ERROR
        }
        DebugPrint("%s\n\n", CompileLog);
        return false;
    }
    return true;
}

Below is some output:

Shader source shader fragment.glsl
#version 330 core
out vec4 Colour;
void main() { Colour = vec4(1.0, 0.5, 0.2, 1.0); }
GL_FRAGMENT_SHADER Compile error!
p R
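For comparison, glGetShaderInfoLog expects the shader object handle (the value returned by glCreateShader) as its first argument rather than the GL_*_SHADER enum; a short sketch in the same style as the code above:

GLint Success;
glGetShaderiv(ShaderHandle, GL_COMPILE_STATUS, &Success);
if (!Success)
{
    GLchar CompileLog[512];
    // First argument must be the shader handle, not the shader type enum.
    glGetShaderInfoLog(ShaderHandle, sizeof(CompileLog), NULL, CompileLog);
    DebugPrint("%s\n", CompileLog);
}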
1
glVertexAttribPointer stride GL_INVALID_VALUE (OpenGL 3.3) I'm experiencing issues similar to those described in this question, which was never answered. Basically, when I call glVertexAttribPointer with a stride greater than this exact value, 640, an OpenGL GL_INVALID_VALUE error is raised. According to the documentation, such an error can be raised in one case: "GL_INVALID_VALUE is generated if stride is negative." Which is obviously not my case. In OpenGL 4.4 the maximum value is specified and set to GL_MAX_VERTEX_ATTRIB_STRIDE, according to this site. Is there a certain magic number in older versions of OpenGL (3.3 in my case) for a maximum vertex stride? Is there any other reason that this function can raise GL_INVALID_VALUE?
1
How can I achieve a good fire effect with alpha blending and particles? Using the following setting for the OpenGL particle effect, SRC = GL_SRC_ALPHA and DST = GL_ONE, creates an additive blend, which looks spectacular on a black background but terrible on brighter colours, as it begins to fade to white. I then used alpha blending, SRC = GL_SRC_ALPHA and DST = GL_ONE_MINUS_SRC_ALPHA. This allows other backgrounds to be used without affecting the color of the particles, but the particles themselves look dull compared to the additive blend. How can I achieve a good fire effect with alpha blending and particles? Additive Alpha UPDATE: Following David's advice below, I created a separate texture and then used additive blending on the particle effect before drawing onto the texture. The problem with that is that drawing on an alpha-0 texture resulted in just the coloured parts of the particle appearing in front of my world map, since normally you have a black background instead. The trick was to use two textures. I created a black texture and then drew the particles on it. Then I removed the alpha layer of the particles from this texture, effectively removing all the surrounding solid black and fading out the partially visible particles, while leaving the underlying black as you'd expect when making additive-blend particles on a black background. In short, a gruelling process, but I got there eventually. Here's the thread where I posted my process: http www.cocos2d iphone.org forum topic 28707?replies 8 post 141528 Video: http www.youtube.com watch?v JptGbEO3b5E
1
Why does OpenGL 3 only allow VBOs? I see that OpenGL versions 3 and up eliminate the use of client side rendering. Immediate mode has been eliminated, and vertex arrays seem to be deprecated. Instead, if I understand correctly, VBOs are the main way of rendering vertices. While I see the logic behind having a uniform way of rendering everything, is it the case that VBOs have no major downsides over vertex arrays? I thought VBOs were supposed to be large buffers containing 1MB of data, generally. What if I have a scene that has a lot of smaller geometry? I have a scene graph with a large number of nodes, each of which needs its own transform, etc. Each node should also be able to be deleted separately, added to separately, etc. I was using vertex arrays before. So my first question is whether, if I switch to VBOs, there will be a greater overhead to my scene graph objects now because a VBO needs to be allocated for each object. Another concern is that the geometry I'm rendering can be highly dynamic. In the worst case, there may be times when all geometry needs to be resent every frame for some period of time. Will VBOs have worse performance than vertex arrays in this use case, or do VBOs at worst do just as much work as vertex arrays but no more? So, in more concise format, my questions are 1) Is there a substantial overhead to allocating deallocating VBOs (I mean the mere act of setting up a buffer)? 2) If I'm updating the data from the CPU every frame, can this be substantially worse than if I had used vertex arrays? And finally, I'd like to know 3) If the answer to either of the above questions is "yes", why deprecate other modes of rendering that could have advantages over VBOs? Is there something I'm missing here, like techniques I'm supposed to use to mitigate some of these potential allocation costs, etc.? 4) Do the answers to any of these questions change substantially depending on what OpenGL version I'm using? If I refactor my code to be OpenGL 3 or 4 forward compatible by using VBOs in a way that is performant, will the same techniques be likely to perform well with OpenGL 2, or is it likely that certain techniques are much faster with OpenGL 3 and others with OpenGL 2? I asked this question on stack overflow, but I am re posting here because I realized this site may be more appropriate for my question.
1
Texture filtering of look-up table in post-process shader I am doing post-processing by drawing to an FBO and then applying a certain fragment shader when drawing the FBO's texture to the screen. I want to use a look-up table texture to apply color grading. Since I am targeting OpenGL ES 2.0 and possibly older PCs, I cannot use 3D textures. Instead I can use a 2D texture as an atlas of the layers of my 3D look-up table. What I'm concerned about is correctness and performance when the texture lookup is performed. My look-up texture needs to have linear filtering so I don't have to have a full 256^3-sized look-up table to cover all hues. My fragment shader would do something like this: round B up and down to get the two regions of the texture to sample from, add offsets accordingly to R and G, and sample the texture twice with the two offset RG values. Finally, linearly interpolate based on B. But as I understand it, when the GPU encounters a texture2D call on a texture with linear filtering, it will be calculating the input coordinates for those calls for neighboring pixels in parallel to get a derivative. This derivative is used to determine how to sample the texture pixels. Since this is a post-process, I don't want two neighboring pixels to influence each other. It could be a black pixel from the edge of a sprite next to a bright blue sky pixel. In the look-up table, these two would need to sample distant points. So is the GPU going to decide the derivative is huge (minification) and try to linearly sample and involve all the random in-between texels on my look-up table? Is there a way to get my linear filter to ignore neighboring pixels and only interpolate from the 4 nearest texels? Sort of like treating everything as magnification? The problem is similar to this question that was asked in regards to HLSL, but I'm targeting OpenGL 3.0 and OpenGL ES 2.0: Color grading, shaders and 3D textures
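For reference, a hedged sketch of the two-fetch lookup from a slice atlas, where the cross-slice blend is done explicitly with mix() so each screen pixel only ever does two bilinear fetches inside its own cell. A 16-slice LUT laid out horizontally is assumed; the names and layout are illustrative:

const char* kColorGradeShader = R"(
    #define LUT_SIZE 16.0
    uniform sampler2D uScene;
    uniform sampler2D uLut;     // (LUT_SIZE * LUT_SIZE) x LUT_SIZE slice atlas
    varying vec2 vTexCoord;

    void main()
    {
        vec3 c = texture2D(uScene, vTexCoord).rgb;

        float slice  = c.b * (LUT_SIZE - 1.0);
        float slice0 = floor(slice);
        float slice1 = min(slice0 + 1.0, LUT_SIZE - 1.0);
        float f      = slice - slice0;

        vec2 uv;
        uv.x = (c.r * (LUT_SIZE - 1.0) + 0.5) / (LUT_SIZE * LUT_SIZE);
        uv.y = (c.g * (LUT_SIZE - 1.0) + 0.5) / LUT_SIZE;

        vec3 g0 = texture2D(uLut, uv + vec2(slice0 / LUT_SIZE, 0.0)).rgb;
        vec3 g1 = texture2D(uLut, uv + vec2(slice1 / LUT_SIZE, 0.0)).rgb;

        gl_FragColor = vec4(mix(g0, g1, f), 1.0);
    }
)";

Giving the LUT plain GL_LINEAR min/mag filtering with no mipmaps also removes the derivative-driven minification concern, since without mip levels only the 4 nearest texels of the base level are ever blended.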
1
How do I convert my matrix from OpenGL to Marmalade? I am using a third-party rendering API, Marmalade, on top of OpenGL code, and I cannot get my matrices correct. One of the API's authors states this: "We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default." In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third-party rendering API? I don't understand what they mean by a 180 degree rotation around Z: is that in the view matrix itself, or something done in the camera before building the view matrix? Any code would be helpful, thanks.
1
Flickering/tearing with glfwSwapBuffers I recently separated my logic and rendering threads to fix this problem: GLFW window freezes on title bar hold/drag. This means the logic and rendering are no longer running in lock step as they were before, so it is possible that the logic has not finished executing by the time we are ready to render. To handle this case, the rendering thread will keep sleeping for 1 ms until the logic has finished executing. Pseudo-code:

// Logic thread
while (!exiting) {
    timeBefore = now();
    updateGame();
    renderingDue = true;
    timeElapsed = now() - timeBefore;
    sleep(MS_PER_FRAME - timeElapsed);
}

// Rendering thread
while (!exiting) {
    if (renderingDue) {
        render();
        renderingDue = false;
        GLFW.glfwSwapBuffers(window);
    } else {
        // Not yet ready to render, but try again in a short while
        sleep(1);
    }
}

This seems to work fine for me, at least with vsync disabled. When I tested with vsync enabled I encountered stuttering, which I suppose is because if we miss a rendering frame, we have to wait until the next monitor refresh interval before we can catch up. However, on some other systems, in particular low-end laptops, there is a lot of flickering and tearing during gameplay. I don't understand what would cause this, since we are always calling glfwSwapBuffers() after rendering, not during. Is there something fundamentally wrong with my approach? I know ideally the rendering thread should take a delta value so that it can render the state "between" logic updates (yes, I have read Fix Your Timestep!), but I am nonetheless curious about what is happening here.
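Since Fix Your Timestep is mentioned, here is a minimal single-threaded sketch of that accumulator/interpolation pattern for reference, written against the C GLFW API rather than the question's Java-style binding; updateGame and render are the same placeholders used above:

const double dt = 1.0 / 60.0;          // fixed logic step
double accumulator = 0.0;
double previousTime = glfwGetTime();

while (!glfwWindowShouldClose(window))
{
    double now = glfwGetTime();
    accumulator += now - previousTime;
    previousTime = now;

    while (accumulator >= dt)
    {
        updateGame(dt);                // logic always advances in fixed steps
        accumulator -= dt;
    }

    double alpha = accumulator / dt;   // 0..1: how far we are into the next step
    render(alpha);                     // draw state blended between previous and current
    glfwSwapBuffers(window);
    glfwPollEvents();
}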
1
Best way to do buttons for an OpenGL ES iPhone game I'm making a simple 2d game in OpenGL ES and I want to add movement buttons to it. What's the best way of going about this? In previous projects I've simply added UIButtons to the view but I hear there are performance implications in doing so with OpenGL ES so I'm wondering what the possible alternatives are if so.
1
Setting up OpenGL camera with off-center perspective I'm using OpenGL ES (on iOS) and am struggling with setting up a viewport with an off-center viewpoint. Consider a game where you have a character on the left-hand side of the screen, and some controls alpha'd over the left-hand side. The "main" part of the screen is on the right, but you still want to show what's in the view on the left. However, when the character moves "forward" you want the character to appear to be going "straight", or "up" on the device, and not heading at an angle towards the point that is geographically at the mid-x position of the screen. Here's the gist of how I set my viewport up where it is centered in the middle:

// setup the camera
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
const GLfloat zNear = 0.1;
const GLfloat zFar = 1000.0;
const GLfloat fieldOfView = 90.0; // can definitely adjust this to see more/less of the scene
GLfloat size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
CGRect rect;
rect.origin = CGPointMake(0.0, 0.0);
rect.size = CGSizeMake(backingWidth, backingHeight);
glFrustumf(-size, size,
           -size / (rect.size.width / rect.size.height),
            size / (rect.size.width / rect.size.height),
           zNear, zFar);
glMatrixMode(GL_MODELVIEW);

// rotate the whole scene by the tilt to face down on the dude
const float tilt = 0.3f;
const float yscale = 0.8f;
const float zscale = 4.0f;
glTranslatef(0.0, yscale, zscale);
const int rotationMinDegree = 0;
const int rotationMaxDegree = 180;
glRotatef(tilt * (rotationMaxDegree - rotationMinDegree) / 2, 1.0f, 0.0f, 0.0f);
glTranslatef(0, -yscale, -zscale);

static float b = 25;
static float c = 0;

// rotate to face in the direction of the dude
float a = RADIANS_TO_DEGREES(atan2f(gCamera.orientation.x, gCamera.orientation.z));
glRotatef(a, 0.0, 1.0, 0.0);
// and move to where it is
glTranslatef(-gCamera.pos.x, -gCamera.pos.y, -gCamera.pos.z);
// draw the rest of the scene ...

I've tried a variety of things to make it appear as though "the dude" is off to the right: doing a translate after the frustum in the x direction, doing a rotation after the frustum about the up (y) axis, and moving the camera with a bias to the left of the dude. Nothing I do seems to produce good results: the dude either looks like he's stuck at an angle, or the whole scene appears tilted. I'm no OpenGL expert, so I'm hoping someone can suggest some ideas or tricks on how to "off-center" these model views in OpenGL. Thanks!
1
Unexpectedly fast rotation after refactoring OpenGL code to add more abstraction I've been working on an OpenGL program that simply renders a square that rotates in 3D space. The square also has a texture applied to both sides. Here you can see an example screenshot of the program running. I used very little abstraction in that project and most of the code was in main.cpp (apart from the shader loading and usage). After finishing this project, I wanted to improve it as much as I could with my current knowledge. I started abstracting away functionality. The final result was a very clean main.cpp file with multiple classes (Mesh, Shader, Resource Manager and Window classes). Apart from the texture, everything should functionally be the same, but I experienced the following after using the rotate function (left is the abstracted version of the first project, right is the first project): the square in the new project rotates at a much higher speed than in the previous project. The implementation is the same, in that the angle in degrees that the square should rotate around the y axis is the same in both projects: the time the program has been running, obtained through GLFW's glfwGetTime(). I checked nearly everything. The .lib files are linked the same way between the projects. There are no differences between the vertex data. Both projects are running in DEBUG mode. The only difference I could think of is the texture not being applied in the second project, but I recall that the textureless version of the first project was still running at the same speed as it is currently. And rendering a 64x64 texture on a square shouldn't be that expensive anyway. NOTE: I'm using GLM for the matrices and for the rotation transformation. Does abstracting away all of OpenGL's initializing and buffer code improve performance by that much? I'm using GLFW for window and context creation, and thus using (float)glfwGetTime() for the angle of rotation along the y axis. The second project basically moves all functionality into categorized classes. So everything to do with meshes was moved to a Mesh class, loading of GLSL shaders and textures was moved over to a ResourceManager class, etc. So all functionality should be the same.
1
Limit/clamp camera movement using quaternions I'm making a camera object for rendering with OpenGL. However, instead of using the typical "LookAt" method I'm trying to use just a quaternion for orientation and a Vector3 for position. Instead of messing with Euler angles to rotate the camera, I use quaternions directly, like this:

glm::vec3 up = glm::normalize(glm::inverse(cam.Orientation) * glm::vec3(0, 1, 0));
glm::vec3 right = glm::vec3(-1, 0, 0);
cam.Orientation = glm::rotate(cam.Orientation, glm::radians(-xoffset), up);
cam.Orientation = glm::rotate(cam.Orientation, glm::radians(-yoffset), right);

xoffset and yoffset are just the amount the mouse has moved since the last frame. I create my view matrix from the inverse of the two variables. I've already got FPS-style camera movement and rotation working. The problem I have now is that I want to prevent the camera from being able to flip over, i.e. moving the mouse up or down enough that the view becomes upside down. Is there a way to clamp the rotation using purely quaternions to prevent this?
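One common workaround, not purely quaternion-based, is to track the accumulated pitch in a separate float and clamp the per-frame change before applying it. A sketch; the sign convention and the 89 degree limit are assumptions, treating yoffset as the per-frame pitch change in degrees:

static float pitchDeg = 0.0f;   // accumulated pitch in degrees
float delta = glm::clamp(pitchDeg + yoffset, -89.0f, 89.0f) - pitchDeg;
pitchDeg += delta;
// apply only the clamped portion about the camera's right axis
cam.Orientation = glm::rotate(cam.Orientation, glm::radians(delta), glm::vec3(-1, 0, 0));

The yaw rotation stays exactly as in the question; only the pitch goes through the clamp.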
1
OpenGL tile rendering Currently I'm trying to render a tile map using OpenGL 2.1 and GLSL 1.2. I would like to draw every tile in just one draw call. I use a single texture with all the tiles, identifying each one by an index. The vertex data per tile is: vec2 worldPos, the position to transform the tile quad to in world coordinates, and vec2 texCoord, the UV coordinates (top-left corner), calculated on the CPU from the tile index. But I can't find a way to draw everything with one draw call. The UV coordinates can't be calculated in the shader because the vertex shader doesn't know which corner of the quad it is processing. I can't draw by element because my quad vertex data only contains 4 vertices, and storing repeated vertices is a memory waste. If only I could use a separate element buffer just for the vertices (0, 1, 2, 3, 0, 1, ...). Does anyone have a suggestion on how I should proceed? Thanks!
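A hedged sketch of the straightforward route: expand each tile into four vertices on the CPU (full position and UV per corner) and index them, so the vertex shader never needs to know which corner it is. The container names, tileSize and tileUvSize are illustrative, not from the question:

struct TileVertex { float x, y, u, v; };

std::vector<TileVertex>     vertices;
std::vector<unsigned short> indices;   // switch to unsigned int if the map exceeds 65535 vertices

for (int t = 0; t < tileCount; ++t)
{
    vec2 p  = tileWorldPos[t];     // per-tile world position, as in the question
    vec2 uv = tileTexCoord[t];     // top-left corner of the tile in the atlas
    unsigned short base = (unsigned short)vertices.size();

    vertices.push_back({ p.x,            p.y,            uv.x,              uv.y              });
    vertices.push_back({ p.x + tileSize, p.y,            uv.x + tileUvSize, uv.y              });
    vertices.push_back({ p.x + tileSize, p.y + tileSize, uv.x + tileUvSize, uv.y + tileUvSize });
    vertices.push_back({ p.x,            p.y + tileSize, uv.x,              uv.y + tileUvSize });

    unsigned short quad[6] = { base, (unsigned short)(base + 1), (unsigned short)(base + 2),
                               base, (unsigned short)(base + 2), (unsigned short)(base + 3) };
    indices.insert(indices.end(), quad, quad + 6);
}
// Upload both arrays once (or whenever the map changes), then draw the whole map with a
// single glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0) call.

It costs four vertices per tile instead of one, but keeps everything in one draw call without any per-corner logic in GLSL 1.2.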
1
Why is my diffuse lighting changing with camera position? I am trying to do lighting in view space, but I am afraid I have an issue. I calculate the normals as N = norm_mat * vec4(vert_norm, 0.0), where norm_mat is transpose(inverse(model_view_mat)). Then in the lighting shader (for a directional light) I perform the following:

N = normalize(N.xyz);
vec4 view_L_pos = vec4(lightPosition.xyz, 0.0) * view_mat;
vec3 L = normalize(view_L_pos.xyz);
out_col = vec3(dot(N, L));

However, moving the camera changes the floor's lighting. What am I doing wrong?
1
What is the correct multiplication order for a 2D matrix? I'm currently trying to create a camera and entity model matrix for my 2D game, similar to that of Unity3D. I've already tried to find answers to this question on Stack Overflow and GameDev, but I couldn't find an answer that explains how to center the camera and the image.

Goal: have a camera matrix which is centered at the position of the camera, and have an entity matrix which draws everything centered.

Current implementation:

camera_matrix = Matrix.translation(Screen.width * 0.5, Screen.height * 0.5)
              * Matrix.rotation(camera.rotation_radians)
              * Matrix.scale(camera.scale.x, camera.scale.y)
              * Matrix.translation(camera.x, camera.y)

entity_matrix = Matrix.rotation(entity.rotation_radians)
              * Matrix.scale(entity.scale.x, entity.scale.y)
              * Matrix.translation(entity.x, entity.y)

// image component draw function
setMatrix(camera.matrix * entity.matrix)
drawImage(x = 0, y = 0, image)   // position is already included in the entity matrix

Results: the image rotates around the upper left corner of the screen, not around its center, and the image is not centered; the image's origin is at its upper left corner. The camera matrix seems to be correct.

Questions: What is the correct multiplication order for the entity model matrix? Can I use a single matrix for all components of an entity, or do I need to take into account the width/height of the image/text/animation component?
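For comparison, one common layout for this, expressed here with GLM rather than the question's Matrix helper (so the order shown assumes column-major matrices applied right to left; halfSize, entityPos and the other names are made up): the entity matrix first moves the image's origin to its centre, then scales and rotates, then places it in the world, and the camera matrix is the inverse-style transform ending at the screen centre.

glm::mat4 entity =
      glm::translate(glm::mat4(1.0f), glm::vec3(entityPos, 0.0f))       // place in world
    * glm::rotate(glm::mat4(1.0f), entityRadians, glm::vec3(0, 0, 1))   // rotate around centre
    * glm::scale(glm::mat4(1.0f), glm::vec3(entityScale, 1.0f))         // scale around centre
    * glm::translate(glm::mat4(1.0f), glm::vec3(-halfSize, 0.0f));      // origin -> image centre

glm::mat4 camera =
      glm::translate(glm::mat4(1.0f), glm::vec3(screenCentre, 0.0f))
    * glm::rotate(glm::mat4(1.0f), -cameraRadians, glm::vec3(0, 0, 1))
    * glm::scale(glm::mat4(1.0f), glm::vec3(cameraScale, 1.0f))
    * glm::translate(glm::mat4(1.0f), glm::vec3(-cameraPos, 0.0f));

If the engine's Matrix helper is row-major or composes left to right, the same idea applies with the order reversed.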
1
2D tile-based game texture atlas combining I am new to OpenGL and game dev; I have taken courses and am trying to learn everything about this. My task is to implement a texture atlas in a 2D tile-based game (very similar to Tibia) using OpenGL. Right now the game has the sprite sheet loaded into memory, and for each tile the program generates an image, loads it into a texture, binds it and makes a draw call. You can imagine how inefficient that is. So what I have to do is load this sprite sheet as a texture atlas and get the advantages it brings. The problem is that in the first step, while generating an image, the program makes some combinations of the sprites. For example, the character has an outfit and the player changes its colors (boots, pants, t-shirt, hair). To do this there is a sprite for the base outfit and a sprite for each element of the outfit, so the program applies a color mask to each element and combines these five sprites into one image, then goes through the process I mentioned above. This is an example of what happens with the outfit, but it happens with other things too, so this is not a special case for outfits only. My question is: is there any way to make these combinations using OpenGL itself, combining these multiple textures from my texture atlas in a single draw call?
1
OpenGL indices array I have a terrain class which creates a grid of quads. I do it like this:

for (int z = 0; z < length; z++)
    for (int x = 0; x < width; x++)
        vertices.push_back(vec3((float)x * 250, 0.f, (float)z * 250));

for (int z = 0; z < (length - 1); z++)
{
    for (int x = 0; x < (width - 1); x++)
    {
        int index = z * width + x;

        Vertex _vertices[] = {
            Vertex(vertices.at(index),             vec3(0, 0, 0)),
            Vertex(vertices.at(index + 1),         vec3(0, 0, 0)),
            Vertex(vertices.at(index + width),     vec3(0, 0, 0)),
            Vertex(vertices.at(index + 1 + width), vec3(0, 0, 0))
        };

        unsigned short indices[] = { index, index + 1, index + width,
                                     index + 1, index + width, index + width + 1 };

        Quad quad(_vertices, 4, indices, 6);
        squares.push_back(quad);
    }
}

The vertices and the logic are correct, but the indices aren't, for some reason. Here is the output for this code. But when I change the indices to this:

unsigned short indices[] = { 0, 1, 2, 1, 2, 3 };

it works great. The problem is I don't understand why this line

unsigned short indices[] = { index, index + 1, index + width, index + 1, index + width, index + width + 1 };

doesn't work. And if it worked, my grid would consume a lot fewer resources. If someone could explain why it doesn't work, that would be great, thank you. In case you need to know how I draw a Quad, here is the code:

class Quad
{
public:
    Quad(Vertex* _vertices, int n, unsigned short* _indices, unsigned short numIndices)
    {
        for (int i = 0; i < numIndices; i++)
            indices.push_back(_indices[i]);
        for (int i = 0; i < n; i++)
        {
            vec3 v = vec3(_vertices[i].position, lengthPower);
            position.push_back(v);
        }
        glGenVertexArrays(1, &mVertexArray);
        glBindVertexArray(mVertexArray);
        glGenBuffers(1, &mPositionBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * position.size(), position.data(), GL_STATIC_DRAW);
        glGenBuffers(1, &mIndicesBuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndicesBuffer);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned short) * indices.size(), indices.data(), GL_STATIC_DRAW);
    }

    void draw()
    {
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndicesBuffer);
        glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
        glDisableVertexAttribArray(0);
    }

    ~Quad() {}

private:
    std::vector<unsigned short> indices;
    std::vector<vec3> position;
    GLuint mVertexArray;
    GLuint mPositionBuffer;
    GLuint mIndicesBuffer;
};

I'm using OpenGL, GLM, GLFW, etc.
1
GLSL sphere from vertex I am working on a particle simulation where we have a lot of spheres which can have different radii. Using this tutorial, http mmmovania.blogspot.de 2011 01 point sprites as spheres in opengl33.html (see also the code below), I was able to create a sphere from a point, but they all have the size from glPointSize(). Is it possible to extend this with a radius?

#version 330
out vec4 vFragColor;
uniform vec3 Color;
uniform vec3 lightDir;

void main(void)
{
    // calculate normal from texture coordinates
    vec3 N;
    N.xy = gl_PointCoord * 2.0 - vec2(1.0);
    float mag = dot(N.xy, N.xy);
    if (mag > 1.0) discard;   // kill pixels outside circle
    N.z = sqrt(1.0 - mag);
    // calculate lighting
    float diffuse = max(0.0, dot(lightDir, N));
    vFragColor = vec4(Color, 1) * diffuse;
}

Maybe of interest: I am using #version 120 for my shaders. Or is there a better way to do this? So, combining the two answers, I now have:

void Draw::setGeometry(float* geometry, float* velocity, float* radius, GLuint size)
{
    this->size = size;

    glGenBuffers(1, &vertexData);
    glBindBuffer(GL_ARRAY_BUFFER, vertexData);
    glBufferData(GL_ARRAY_BUFFER, size * 3 * sizeof(float), geometry, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glGenBuffers(1, &velocityData);
    glBindBuffer(GL_ARRAY_BUFFER, velocityData);
    glBufferData(GL_ARRAY_BUFFER, size * 3 * sizeof(float), velocity, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glGenBuffers(1, &radiusData);
    glBindBuffer(GL_ARRAY_BUFFER, radiusData);
    glBufferData(GL_ARRAY_BUFFER, size * sizeof(float), radius, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

And the drawing method:

void Draw::paint(GLenum mode)
{
    glBindBuffer(GL_ARRAY_BUFFER, vertexData);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(mode, 0, size * 3 * sizeof(float));
    glDisableVertexAttribArray(0);
}

And the vertex shader:

#version 130
attribute vec3 position;
attribute vec3 velocity;
attribute float size;
uniform mat4 MVP;
uniform mat4 MV;

void main()
{
    gl_PointSize = size;
    gl_Position = MVP * vec4(position, 1.0);
}

The problem is that when gl_PointSize = 15 is set, they are drawn well. When I make it equal to size, then it looks like this: http picpaste.com Big particles daIBnJCG.png It works better now. Particles are still getting bigger as they move along.
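If the goal is for each sphere's on-screen size to shrink with distance, a commonly used approximation is to derive gl_PointSize from the world-space radius and the projection. A hedged sketch; the extra uniforms are made up, and desktop GL also needs glEnable(GL_PROGRAM_POINT_SIZE) for the shader-written size to take effect:

const char* kParticleVertexShader = R"(
    #version 130
    in  vec3  position;
    in  float size;                  // world-space radius, as in the question
    uniform mat4  MVP;
    uniform float viewportHeight;    // in pixels
    uniform float projScale;         // projection[1][1] = 1.0 / tan(fovY * 0.5)

    void main()
    {
        gl_Position  = MVP * vec4(position, 1.0);
        // projected radius in pixels; keeps sprites roughly sphere-sized at any depth
        gl_PointSize = size * projScale * viewportHeight / gl_Position.w;
    }
)";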