Skeletal animation in OpenGL

I'm using Assimp to do skeletal animation in my OpenGL application. I used Blender to export a one-boned model to a COLLADA file. The model has only one bone, called "arm bone", that controls the arm mesh; all the other meshes are static.

I made several structures and classes that help me play animations. All the nodes are added to a std::vector of Node objects; each Node contains the aiNode data and a toRoot matrix. The bone hierarchy is encapsulated in a Skeleton class, and the animation matrices (T * R) are updated for each bone in a class called Animation.

My Model::draw() function is this:

```cpp
void Model::draw()
{
    // Iterate through all animation sets. If the animation is running,
    // update the bones it affects.
    for (size_t i = 0; i < animations.size(); i++)
        if (animations[i].running())
            animations[i].updateAnimationMatrices(&skeleton);

    // Calculate Bone::finalMatrix for each bone.
    skeleton.calculateFinalMatrices(skeleton.rootBone());

    // Iterate through the nodes and draw their meshes.
    for (size_t i = 0; i < nodes.size(); i++)
    {
        shaderProgram.setUniform("ModelMatrix", nodes[i].toRoot());
        nodes[i].draw();
    }
}
```

To get the animationMatrix for each bone (the T * R matrix) I call Animation::updateAnimationMatrices(). Here's what it looks like:

```cpp
void Animation::updateAnimationMatrices(Skeleton* skeleton)
{
    double time = (double)timer.elapsed() / 1000.0;
    while (time > animation->mDuration)
        time -= animation->mDuration;

    // Iterate through the aiNodeAnims (called channels) and update
    // their corresponding Bone.
    for (unsigned int iChannel = 0; iChannel < animation->mNumChannels; iChannel++)
    {
        aiNodeAnim* channel = animation->mChannels[iChannel];
        Bone* bone = skeleton->getBoneByName(channel->mNodeName.C_Str(), skeleton->rootBone());

        glm::mat4 R; // ... calculate rotation matrix based on time
        glm::mat4 T; // ... calculate translation matrix based on time

        // Set the animation matrix for the bone.
        bone->animationMatrix = T * R;
        bone->needsUpdate = true;
    }
}
```

Now, to calculate the finalMatrix for each bone (based on animationMatrix, offsetMatrix, etc.) and upload it to the vertex shader, I call Skeleton::calculateFinalMatrices():

```cpp
void Skeleton::calculateFinalMatrices(Bone* root)
{
    if (root)
    {
        Node* node = getNodeByName(root->name->C_Str());
        if (node == nullptr)
        {
            std::cout << "could not find corresponding node for bone " << root->name->C_Str() << "\n";
            return;
        }

        // Update only the bones that need to be updated
        // (their animationMatrix has been changed).
        if (root->needsUpdate)
        {
            root->finalMatrix = root->animationMatrix * root->offsetMatrix;

            // Upload the bone matrix to the shader.
            // The array is defined as "uniform mat4 Bones[64]".
            std::string str = "Bones[";
            char buf[4] = {0};
            _itoa_s(root->index, buf, 10);
            str += buf;
            str += "]";
            shaderProgram->setUniform(str.c_str(), root->finalMatrix);
            root->needsUpdate = false;
        }

        for (unsigned int i = 0; i < root->numChildren; i++)
            calculateFinalMatrices(root->children[i]);
    }
}
```

Here's my bone structure, if it helps. My GLSL vertex shader is pretty standard. And finally, here's the result I get (ignore the model's static legs; that must be some bug in the Blender exporter), and here's the result I should get (using a third-party software). It looks like there's something wrong with the bone matrix calculation, although I don't know what. Any ideas or tips? Thanks!
C++ Help with Separating Axis Theorem

I am trying to detect collision between two triangles using the Separating Axis Theorem, but I am unaware what is wrong with my code. CollisionHelper::isTriangleIntersectingTriangle is called every frame and is passed the vertices of both triangles. It never returns true, however. I've been stuck on this for days now. Any help is appreciated.

```cpp
glm::vec3 CalcSurfaceNormal(glm::vec3 tri1, glm::vec3 tri2, glm::vec3 tri3)
{
    // Subtracts each coordinate respectively.
    glm::vec3 u = tri2 - tri1;
    glm::vec3 v = tri3 - tri1;
    glm::vec3 nrmcross = glm::cross(u, v);
    nrmcross = glm::normalize(nrmcross);
    return nrmcross;
}

bool SATTriangleCheck(glm::vec3 axis,
    glm::vec3 tri1vert1, glm::vec3 tri1vert2, glm::vec3 tri1vert3,
    glm::vec3 tri2vert1, glm::vec3 tri2vert2, glm::vec3 tri2vert3)
{
    int t1v1 = glm::dot(axis, tri1vert1);
    int t1v2 = glm::dot(axis, tri1vert2);
    int t1v3 = glm::dot(axis, tri1vert3);
    int t2v1 = glm::dot(axis, tri2vert1);
    int t2v2 = glm::dot(axis, tri2vert2);
    int t2v3 = glm::dot(axis, tri2vert3);

    int t1min = glm::min(t1v1, glm::min(t1v2, t1v3));
    int t1max = glm::max(t1v1, glm::max(t1v2, t1v3));
    int t2min = glm::min(t2v1, glm::min(t2v2, t2v3));
    int t2max = glm::max(t2v1, glm::max(t2v2, t2v3));

    if ((t1min < t2max && t1min > t2min) || (t1max < t2max && t1max > t2min))
        return true;
    if ((t2min < t1max && t2min > t1min) || (t2max < t1max && t2max > t1min))
        return true;
    return false;
}

bool CollisionHelper::isTriangleIntersectingTriangle(
    glm::vec3 tri1, glm::vec3 tri2, glm::vec3 tri3,
    glm::vec3 otherTri1, glm::vec3 otherTri2, glm::vec3 otherTri3)
{
    // Triangle surface normals, 2 axes to test.
    glm::vec3 tri1FaceNrml = CalcSurfaceNormal(tri1, tri2, tri3);
    glm::vec3 tri2FaceNrml = CalcSurfaceNormal(otherTri1, otherTri2, otherTri3);

    glm::vec3 tri1Edge1 = tri2 - tri1;
    glm::vec3 tri1Edge2 = tri3 - tri1;
    glm::vec3 tri1Edge3 = tri3 - tri2;
    glm::vec3 tri2Edge1 = otherTri2 - otherTri1;
    glm::vec3 tri2Edge2 = otherTri3 - otherTri1;
    glm::vec3 tri2Edge3 = otherTri3 - otherTri2;

    // Axes. TODO: may need to (un)normalize the cross products.
    glm::vec3 axes[11] = {
        tri1FaceNrml,
        tri2FaceNrml,
        glm::normalize(glm::cross(tri1Edge1, tri2Edge1)),
        glm::normalize(glm::cross(tri1Edge1, tri2Edge2)),
        glm::normalize(glm::cross(tri1Edge1, tri2Edge3)),
        glm::normalize(glm::cross(tri1Edge2, tri2Edge1)),
        glm::normalize(glm::cross(tri1Edge2, tri2Edge2)),
        glm::normalize(glm::cross(tri1Edge2, tri2Edge3)),
        glm::normalize(glm::cross(tri1Edge3, tri2Edge1)),
        glm::normalize(glm::cross(tri1Edge3, tri2Edge2)),
        glm::normalize(glm::cross(tri1Edge3, tri2Edge3)),
    };

    // Perform SAT.
    for (const glm::vec3& axis : axes)
        if (SATTriangleCheck(axis, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3))
            return true;

    return false;
}
```
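One likely culprit in the code above: the results of glm::dot are stored in int, which truncates projections like 0.7 down to 0 and makes the interval comparison meaningless for small geometry. A minimal float-based sketch of the projection/overlap step (my own helper names, standing in for glm) looks like this; note also that SAT reports intersection only when the intervals overlap on *every* axis, so a correct outer loop should return false on the first non-overlapping axis:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical minimal types, standing in for glm::vec3 / glm::dot.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Projects both triangles onto 'axis' with float precision and reports
// whether the two intervals overlap. If ANY tested axis yields no
// overlap, the triangles are separated.
bool projectionsOverlap(const Vec3& axis, const Vec3 t1[3], const Vec3 t2[3]) {
    float t1min = dot(axis, t1[0]), t1max = t1min;
    float t2min = dot(axis, t2[0]), t2max = t2min;
    for (int i = 1; i < 3; ++i) {
        t1min = std::min(t1min, dot(axis, t1[i]));
        t1max = std::max(t1max, dot(axis, t1[i]));
        t2min = std::min(t2min, dot(axis, t2[i]));
        t2max = std::max(t2max, dot(axis, t2[i]));
    }
    return t1max >= t2min && t2max >= t1min; // closed-interval overlap
}
```

With int projections, both triangles of a unit-sized mesh collapse to the interval [0, 0], which is consistent with the test never behaving as expected.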
How to use a buffer in GLSL to do a LUT lookup?

I am currently working on a medical application which needs different kinds of (up to totally individual) lookup tables (LUTs) for image display, and this is done with 10-bit precision. So it is usually required to use 16-bit grayscale pixels. I chose to use modern OpenGL, and thus I want the lookup to happen in a GLSL shader program. I had a try at uploading a 1D texture (R16) and using sampler1D to do the lookup, as such:

```glsl
uniform sampler2D tex;
uniform sampler1D lutData;

in vec2 fragTexCoord;
out vec4 finalColor;

void main()
{
    float primaryLvl = texture(tex, fragTexCoord).r; // index of the image 'color' in the LUT texture
    float red = texture(lutData, primaryLvl).r;
    finalColor = vec4(red, red, red, 1.0);
}
```

It works about as expected; however, textures (also 1D, apparently) are limited to 8192 in size (on my machine), so the requirement for full-range 16-bit images is not met. For this test I upload my images as R16 (for me it is gray), normalized float 0-1.0. Would it be feasible to upload a buffer (arbitrary data) and do the lookup there? How would I do this (syntax)? And will my current pixel format interfere with my idea, so that I have to upload it as R16 INT instead? Or can (and should) I convert it in the shader from normalized to int (how is this done)? Thanks
Directional light type

I am currently trying to implement a specific directional light type. This light type was used in the game INSIDE and is called an orthogonal spotlight (aka local directional light). I assume that this is a directional light which behaves like a spotlight and has a squared or rectangular attenuation, but I have some difficulty integrating it into my deferred pipeline and grasping the general concept of this light type. Implementing a simple directional light is simple, dot(worldNormal, lightDir), but what kind of data should I use to constrain its application to a square or a rectangle? I hope you'll be able to give me some clues. Thanks a lot!
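One way to think about it (a sketch of the general idea, not INSIDE's actual implementation; all names here are mine): a local directional light is a directional light whose influence is clipped to a box in light space. You keep a constant light direction for shading, but transform the shaded point into the light's orthonormal frame and test it against the box's half-extents; outside the box, the attenuation is zero.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal vector type for the CPU-side sketch.
struct V3 { float x, y, z; };

float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// 'right', 'up' and 'forward' are the light's orthonormal axes, with
// 'forward' being the (constant) light direction. The light affects the
// volume of width 2*halfW, height 2*halfH, extending 'range' units along
// the light direction from lightPos.
bool insideLightBox(V3 p, V3 lightPos, V3 right, V3 up, V3 forward,
                    float halfW, float halfH, float range) {
    V3 d = { p.x - lightPos.x, p.y - lightPos.y, p.z - lightPos.z };
    float lx = dot3(d, right);    // sideways offset in light space
    float ly = dot3(d, up);       // vertical offset in light space
    float lz = dot3(d, forward);  // distance along the light direction
    return std::fabs(lx) <= halfW && std::fabs(ly) <= halfH
        && lz >= 0.0f && lz <= range;
}
```

In a deferred shader you would do the same test with the inverse of the light's world matrix, and optionally replace the hard cutoff with a smoothstep on |lx|/halfW and |ly|/halfH to soften the rectangle's edges.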
Finding correct index of triangles

I'm generating a basic terrain and it looks something like this:

```cpp
// Load the vertex and index array with the terrain data.
for (j = 0; j < (m_terrainHeight - 1); j++)
{
    for (i = 0; i < (m_terrainWidth - 1); i++)
    {
        index1 = (m_terrainHeight * j) + i;             // Top left.
        index2 = (m_terrainHeight * (j + 1)) + i;       // Bottom left.
        index3 = (m_terrainHeight * j) + (i + 1);       // Top right.
        index4 = (m_terrainHeight * (j + 1)) + (i + 1); // Bottom right.

        // Top left.
        vertices[index].position = glm::vec3(m_heightMap[index1].x, m_heightMap[index1].y, m_heightMap[index1].z);
        vertices[index].texture  = glm::vec2(m_heightMap[index1].tu, m_heightMap[index1].tv); // (0.0f, 0.0f)
        indices[index] = index;
        index++;

        // Bottom left.
        vertices[index].position = glm::vec3(m_heightMap[index2].x, m_heightMap[index2].y, m_heightMap[index2].z);
        vertices[index].texture  = glm::vec2(m_heightMap[index2].tu, m_heightMap[index2].tv); // (0.0f, 1.0f)
        indices[index] = index;
        index++;

        // Top right.
        vertices[index].position = glm::vec3(m_heightMap[index3].x, m_heightMap[index3].y, m_heightMap[index3].z);
        vertices[index].texture  = glm::vec2(m_heightMap[index3].tu, m_heightMap[index3].tv); // (1.0f, 0.0f)
        indices[index] = index;
        index++;

        // Top right (again).
        vertices[index].position = glm::vec3(m_heightMap[index3].x, m_heightMap[index3].y, m_heightMap[index3].z);
        vertices[index].texture  = glm::vec2(m_heightMap[index3].tu, m_heightMap[index3].tv); // (1.0f, 0.0f)
        indices[index] = index;
        index++;

        // Bottom left (again).
        vertices[index].position = glm::vec3(m_heightMap[index2].x, m_heightMap[index2].y, m_heightMap[index2].z);
        vertices[index].texture  = glm::vec2(m_heightMap[index2].tu, m_heightMap[index2].tv); // (0.0f, 1.0f)
        indices[index] = index;
        index++;

        // Bottom right.
        vertices[index].position = glm::vec3(m_heightMap[index4].x, m_heightMap[index4].y, m_heightMap[index4].z);
        vertices[index].texture  = glm::vec2(m_heightMap[index4].tu, m_heightMap[index4].tv); // (1.0f, 1.0f)
        indices[index] = index;
        index++;
    }
}
```

Now, if I wanted to find which grid square the player is on, I could simply do this (if the camera position is relative to the terrain):

```cpp
// Grid square the camera is on.
int gridX = (int)std::floor(cameraX / gridSquareSize);
int gridZ = (int)std::floor(cameraZ / gridSquareSize);
```

Continuing, we know that each face consists of two triangles. The top triangle is represented by three vertices whose indices correspond to top left, bottom left and top right; the bottom triangle would then be top right, bottom left and bottom right. Now I'm having a hard time trying to find the height values (y values) at these points, because if I try to do:

```cpp
// Top left triangle.
glm::vec3(0, vertices[(m_terrainHeight * gridZ) + gridX].position.y, 0);       // Top left.
glm::vec3(0, vertices[(m_terrainHeight * (gridZ + 1)) + gridX].position.y, 1); // Bottom left.
glm::vec3(1, vertices[(m_terrainHeight * gridZ) + (gridX + 1)].position.y, 0); // Top right.
```

we conclude that the top left and bottom left positions will be okay, but the top right position won't get the correct index, because as it is now (if we say that gridX = 0 and gridZ = 0), the index will generate a value of 1, which is not the correct height value at that position; the index we want should be 2 (top right). So, is there a way I could write the code so I would get the correct index for the triangles of the terrain?
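A sketch of one way out (my own helper names, assuming the generation order in the code above): the expanded vertex array holds exactly six vertices per grid square, written in the fixed order top-left, bottom-left, top-right, top-right, bottom-left, bottom-right. So instead of reusing the height-map index formulas on the vertex array, you can address the vertex array by quad:

```cpp
#include <cassert>

// Each grid square's six vertices start at quadIndex * 6, where quads
// are written row by row and there are (terrainWidth - 1) quads per row
// (fence-post counting: N vertices per row make N - 1 squares).
int quadBase(int terrainWidth, int gridX, int gridZ) {
    return (gridZ * (terrainWidth - 1) + gridX) * 6;
}

// Offsets into the 6-vertex block, matching the write order above.
int topLeftVertex(int terrainWidth, int gridX, int gridZ)     { return quadBase(terrainWidth, gridX, gridZ) + 0; }
int bottomLeftVertex(int terrainWidth, int gridX, int gridZ)  { return quadBase(terrainWidth, gridX, gridZ) + 1; }
int topRightVertex(int terrainWidth, int gridX, int gridZ)    { return quadBase(terrainWidth, gridX, gridZ) + 2; }
int bottomRightVertex(int terrainWidth, int gridX, int gridZ) { return quadBase(terrainWidth, gridX, gridZ) + 5; }
```

Then `vertices[topRightVertex(m_terrainWidth, gridX, gridZ)].position.y` gives the height you were after. (Alternatively, keep indexing `m_heightMap` with the original index1..index4 formulas, since that array is not expanded.)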
What's the best practice for using PBOs to upload multiple textures?

I have a basic model for uploading textures, as shown in the following picture. I designed it this way for several reasons:

Only the primary thread owns the OpenGL context, so I chose to create buffers, map buffers and unmap buffers on the primary thread. I have many pictures to load and I don't want them to block the primary thread, so I use a subthread to load the images and copy the memory.

Here are my questions: Is my model correct? Is my model the best practice? Should I create a PBO for each picture, or create two PBOs for all pictures and use them in turn? Should I use a shared context? Thank you for helping me out.
OpenGL back/front end threading and the Doom 3 BFG engine

Introduction: I have been reading through the source code of id Software's Doom 3 BFG engine. The whole codebase is on GitHub at id-Software/DOOM-3-BFG. The architecture is both clean and elegant, and the whole code base is of exceptional quality (it makes for a great code read). Fabien Sanglard has an easy-to-follow code review of the engine. In particular, I am interested in id Software's implementation of the renderer, which is explained here: http://fabiensanglard.net/doom3_bfg/renderer.php

Architecture of the renderer: Fabien's site offers an architectural diagram of the renderer. From the source code, and from the review, you can see that the renderer consists of a frontend and a backend. The engine is multithreaded in two ways: first, there is a job pool whose task items are consumed by worker threads without relying on mutexes (this is awesome). In addition to the job pool, the renderer frontend and backend each have their own thread.

My implementation of something similar: I played around with building something similar in my own game engine. (I will probably make the repo public and link to it here at some point in the future.) The renderer frontend lives on the same thread as the program entry point main(), where my service provider and subsystem singletons also live. Upon startup it launches the backend on a separate thread. The frontend talks to the backend via a non-blocking command buffer, which I implemented using the awesome single-producer single-consumer lock-free queue on GitHub at cameron314/readerwriterqueue. The backend consumes commands off the command buffer queue and turns them into OpenGL calls. The backend also creates and manages the OpenGL context lifecycle. (At the moment this is an obstacle, because I am using SDL2, and I don't want SDL API functionality, like input, running on the renderer backend thread.)

Is it worth it? Aren't OpenGL calls supposed to occur asynchronously anyway, given that we are somewhat successful at minimizing GPU/CPU synchronization? Is all the added architectural complexity worth it from a performance perspective? With more modern rendering techniques like vertex buffer streaming, it seems as if more and more work is being done to shift load from the CPU to the GPU and to make drawing asynchronous. Adding the command buffer queue also adds a relatively heavy layer of abstraction between the originating draw service call and the underlying driver call. This complexity certainly must impact cache coherence and branch prediction. Newer, closer-to-the-silicon graphics APIs like Metal only further exaggerate my point. I guess moving work away from the CPU is the obvious future path of progression for graphics performance.

Is it relevant? As elegant as the Doom 3 BFG engine is, the architecture is probably a bit dated and probably misses out on some newer innovations. Did the engine originally rely on a fixed pipeline? That might explain this design choice. Do any modern game engines use something like this?

Apology: Sorry for the long-winded and multi-faceted question; I hope the community will forgive me. I also realize profiling would be able to answer my question. Unfortunately, neither my implementation of the technique I described, nor the version without it, is mature enough to do any meaningful profiling. Not to mention, there are a whole lot of other variables as well. I'd rather not build a complete rendering pipeline around a flawed and outdated idea, only to discover that it is over-engineered and slow. In the meantime, I will continue to develop my job scheduler approach, but this could be a future design improvement. Any insights or discussion would be greatly appreciated. Thanks!
Rotating objects in a double orbit

I have an object at the center, and a set of objects rotating around the center in a first orbit. Now I want other objects to rotate around the objects in the first orbit. In the figure above, a set of triangles rotates around the square, and circles rotate around the triangles. I have code which works for the first orbit, but I am not able to render the second orbit. I am calling display() in a loop, and i is a static global variable. Which transformations will produce the second orbit?
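The idea, sketched as plain math (no OpenGL calls; names are mine): a body in the second orbit is positioned relative to its parent in the first orbit, which is positioned relative to the center. In matrix terms this is the chain center * R(a1) * T(r1) * R(a2) * T(r2), i.e. you re-apply the same rotate-then-translate pattern starting from the parent's position instead of the center:

```cpp
#include <cassert>
#include <cmath>

struct P2 { double x, y; };

// One orbit step: a point at 'radius' from 'center', at angle 'angleRad'.
P2 orbit(P2 center, double radius, double angleRad) {
    return { center.x + radius * std::cos(angleRad),
             center.y + radius * std::sin(angleRad) };
}

// First orbit: e.g. a triangle around the central square.
// Second orbit: e.g. a circle around that triangle.
P2 nestedOrbit(P2 center, double r1, double a1, double r2, double a2) {
    P2 parent = orbit(center, r1, a1);   // position in the first orbit
    return orbit(parent, r2, a2);        // orbit around the parent
}
```

With the fixed-function matrix stack, the same chain is glTranslatef(center); glRotatef(a1); glTranslatef(r1, 0, 0); then, with the matrix still on the stack, glRotatef(a2); glTranslatef(r2, 0, 0); and draw the second-orbit object.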
OpenGL textures look poor

I'm having some issues loading textures in OpenGL: my textures keep rendering incorrectly or coming out looking muddy. For instance, here I tried to load a 256x256 color spectrum image. On the left is how it looks in OpenGL and on the right is how it looks in an image viewing program. As you can see, while the left image resembles the right one, the left image appears to squish the blues and greens and extend the pinks. I also tried loading this 512x512 image of a dog, and the result came out like this (again, left is OpenGL, right is the image viewer): the image looks like it has lost a lot of its color, resulting in something washed out, like it came out of a 1970s camera. (The fact that it is flipped is fine, since the cube I am drawing it on has some texture coordinates flipped to accommodate a different image.)

I load in these .BMP textures using SOIL, as such:

```cpp
glEnable(GL_TEXTURE_2D);
GLuint texID = 0;
glGenTextures(1, &texID);

int height = 0, width = 0;
unsigned char* imgData = SOIL_load_image(filePath.c_str(), &width, &height, 0, SOIL_LOAD_AUTO);

glBindTexture(GL_TEXTURE_2D, texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, imgData);
// set texture filtering, gen mip map
```

Then in my fragment shader I do the following to apply the texture:

```glsl
#version 330 core

in vec2 TexCoord;
uniform sampler2D textureSampler;

void main()
{
    gl_FragColor = texture2D(textureSampler, TexCoord);
}
```
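Two classic causes of exactly this look are worth ruling out (a hedged guess, since the screenshots aren't reproducible here): a red/blue channel swap (BMP stores pixels as BGR while the upload declares GL_RGB), and row padding (glPixelStorei(GL_UNPACK_ALIGNMENT, 1) is needed when a row's byte size isn't a multiple of 4). If your GL build lacks a GL_BGR source format, you can swizzle on the CPU before glTexImage2D; a minimal helper (my own, not part of SOIL):

```cpp
#include <cassert>
#include <cstddef>

// Swaps the red and blue bytes of a tightly packed 3-bytes-per-pixel
// image in place, converting BGR data to the RGB layout the
// glTexImage2D call above declares.
void swapRedBlue(unsigned char* data, std::size_t pixelCount) {
    for (std::size_t i = 0; i < pixelCount; ++i) {
        unsigned char* px = data + i * 3; // 3 bytes per pixel
        unsigned char tmp = px[0];
        px[0] = px[2];
        px[2] = tmp;
    }
}
```

A quick diagnostic: if the spectrum image renders with blues and pinks traded places after this swap, the channel order was the problem; if diagonal "shearing" appears on images whose width isn't a multiple of 4, it's the unpack alignment.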
Do OpenGL buffers overflow to CPU memory?

This is a question about OpenGL buffers and memory. My game world is mid-sized, one contiguous space, unchanging, and only partially visible from any position. Will modern OpenGL overflow buffers into CPU memory? And if so, can I just allocate buffers (vertex and texture) for all of it, adjust my draw calls to skip non-visible areas, and let OpenGL pull buffers into the GPU as needed, hopefully minimizing thrashing? (Or does OpenGL just fail on the allocation after a while?)

EDIT: The above was ambiguous, but maybe both variations are interesting:

1. Could I just allocate one huge vertex buffer and a few giant textures, and hope that OpenGL moves parts of them in and out of the GPU as needed? Maybe enough early depth testing would let it skip some of the texture drawing.

2. Could I have a handful, or maybe a lot, of "sensibly sized" vertex and texture buffers, but only glDrawXxx() some of them each frame? In that case, would OpenGL move them up and down from the GPU, maybe in a least-recently-used sort of way?
Why does reading my depth texture in GLSL return less than one?

I've created a depth texture in OpenGL (using C#) as follows:

```csharp
// Create the framebuffer.
var framebuffer = 0u;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// Create the depth texture.
var depthTexture = 0u;
glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 800, 600, 0, GL_DEPTH_COMPONENT, GL_FLOAT, null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
```

Later, I sample from the depth texture as follows:

```glsl
float depth = texture(depthTexture, texCoords).r;
```

But even when no geometry has been rendered to that pixel, the depth value coming back is less than 1 (it seems to be very slightly above 0.5). This is confusing to me since, per the documentation on glClearDepth, the default value is 1. Note that this is not a problem of linearizing depth, since I'm attempting to compare depth directly (using the same near and far planes), not convert that depth back to world space. Why is my depth texture sample returning less than 1 when no geometry has been rendered?
Can't get a simple OpenGL texture working using SDL2 and FreeImage3

I created a simple OpenGL program that displays a quad with a texture, but it doesn't seem to be working, as it only displays a white quad. What could possibly be wrong? I checked everything I could think of that could cause the issue. The bitmap is also 256x256. Any clues? Here's the source code: http://pastebin.com/5nVbMPVp — I'm also on Linux, if that helps. Thanks!
Why is this orthographic projection matrix not showing my textured quad?

I've been following tutorials, mainly this one, and I am still not quite sure why my textured quad is not showing inside the frustum that I've set up. I can see it if and only if I don't multiply gl_Position by OrthoProjMatrix * vertexposition_modelspace, and instead set gl_Position from vertexposition_modelspace alone. Here is some of my code; my main.cpp is also available via Pastebin.

Orthographic projection matrix setup code:

```cpp
void OpenGL_Engine::OrthoProjectionSetup(GLuint program)
{
    GLfloat Right  = 100.0;
    GLfloat Left   = 50.0;
    GLfloat Top    = 100.0;
    GLfloat Bottom = 50.0;
    GLfloat zFar   = 1.0;
    GLfloat zNear  = -1.0;

    GLfloat LeftAndRight = 2.0f / (Right - Left);
    GLfloat TopAndBottom = 2.0f / (Top - Bottom);
    GLfloat ZFarAndZNear = -2.0f / (zFar - zNear);

    GLfloat orthographicprojmatrix[] = {
        /* XX XY XZ XW */ LeftAndRight, 0.0, 0.0, -(Right + Left) / (Right - Left),
        /* YX YY YZ YW */ 0.0, TopAndBottom, 0.0, -(Top + Bottom) / (Top - Bottom),
        /* ZX ZY ZZ ZW */ 0.0, 0.0, ZFarAndZNear, -(zFar + zNear) / (zFar - zNear),
        /* WX WY WZ WW */ 0.0, 0.0, 0.0, 1.0
    };

    GLint orthographicmatrixloc = glGetUniformLocation(program, "OrthoProjMatrix");
    glUniformMatrix4fv(orthographicmatrixloc, 1, GL_TRUE, &orthographicprojmatrix[0]);
}
```

Vertex shader code:

```glsl
#version 330 core

layout(location = 0) in vec4 vertexposition_modelspace;
layout(location = 1) in vec2 vertexUV;

out vec2 UV;
uniform mat4 OrthoProjMatrix;

void main()
{
    gl_Position = OrthoProjMatrix * vertexposition_modelspace;
    UV = vertexUV;
}
```

I'm having problems with the orthographic projection matrix: either it is not being built correctly, it is not being uploaded correctly, my shader is not set up correctly, or the textured quad is simply not in view. Please note that I do not want to use a library for this. What am I doing wrong?
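A cheap way to debug this kind of matrix is to check it on the CPU: multiplying a corner of the view volume by a correct orthographic matrix must give exactly ±1 in normalized device coordinates. This sketch builds the standard glOrtho-style matrix (row-major here, since the question uploads with transpose = GL_TRUE) and transforms a test point; if your quad's vertices fall outside [Left, Right] x [Bottom, Top], they land outside clip space and nothing is drawn:

```cpp
#include <cassert>
#include <cmath>

// Standard orthographic projection, row-major layout.
void ortho(float l, float r, float b, float t, float n, float f, float m[16]) {
    m[0]  = 2.0f / (r - l); m[1]  = 0;              m[2]  = 0;               m[3]  = -(r + l) / (r - l);
    m[4]  = 0;              m[5]  = 2.0f / (t - b); m[6]  = 0;               m[7]  = -(t + b) / (t - b);
    m[8]  = 0;              m[9]  = 0;              m[10] = -2.0f / (f - n); m[11] = -(f + n) / (f - n);
    m[12] = 0;              m[13] = 0;              m[14] = 0;               m[15] = 1;
}

// Row-major 4x4 times (x, y, z, 1); returns the transformed x only,
// which is enough to verify the horizontal mapping.
float transformX(const float m[16], float x, float y, float z) {
    return m[0] * x + m[1] * y + m[2] * z + m[3];
}
```

With Left = 50 and Right = 100, the point x = 100 must map to +1 and x = 50 to -1; a quad whose model-space x coordinates are, say, in [-1, 1] would map far outside clip space and be culled entirely.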
How can I pass an array of matrices to a vertex shader?

I'm trying to learn how to use OpenGL, but I'm having problems trying to pass an array of matrices to my vertex shader. I think the problem is in the vertex shader, because the values in modelvalue seem to be passed correctly to the components of model, but for some reason only one of the three cubes I've created is drawn. Here's the main code:

```cpp
glm::mat4 model[3];
GLint ModelLocation = glGetUniformLocation(MyShader.program, "model");
vector<GLfloat> modelvalue;
const int n = int(sizeof(model[0]) / sizeof(GLfloat));

for (int i = 0; i < 3; i++)
{
    model[i] = glm::translate(model[i], cubePositions[i]);
    for (int j = 0; j < n; j++)
        modelvalue.push_back(glm::value_ptr(model[i])[j]);
}
glUniformMatrix4fv(ModelLocation, 3, GL_FALSE, &modelvalue[0]);
```

The vertex shader:

```glsl
#version 400 core

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 texCoord;

out vec2 TexCoord;
out vec4 position0;
out vec4 position1;
out vec4 position2;

uniform mat4 model[3];
uniform mat4 view;
uniform mat4 projection;

void main()
{
    position0 = projection * view * model[0] * vec4(Position, 1.0f);
    position1 = projection * view * model[1] * vec4(Position, 1.0f);
    position2 = projection * view * model[2] * vec4(Position, 1.0f);
    TexCoord = vec2(texCoord.x, 1 - texCoord.y);
}
```

And the fragment shader (as currently written, it repeats the vertex shader's code):

```glsl
#version 400 core

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 texCoord;

out vec2 TexCoord;
out vec4 position0;
out vec4 position1;
out vec4 position2;

uniform mat4 model[3];
uniform mat4 view;
uniform mat4 projection;

void main()
{
    position0 = projection * view * model[0] * vec4(Position, 1.0f);
    position1 = projection * view * model[1] * vec4(Position, 1.0f);
    position2 = projection * view * model[2] * vec4(Position, 1.0f);
    TexCoord = vec2(texCoord.x, 1 - texCoord.y);
}
```
OpenGL: have an object follow the mouse

I want to have an object follow my mouse around the screen in OpenGL (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is:

Get the coordinates within the window with glfwGetCursorPos. The window was created with

```cpp
window = glfwCreateWindow(1024, 768, "Test", NULL, NULL);
```

and the code to get coordinates is

```cpp
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
```

Next, I use glm::unProject to get the coordinates in "object space":

```cpp
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
glm::vec3 un = glm::unProject(pos, View * Model, Projection, viewport);
```

There are two potential problems I can already see. The viewport is fine, as the initial x,y coordinates of the lower left are indeed 0,0, and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right: the Z coordinate should probably not be zero. glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to 3D window coordinates, especially since I am not sure what the third dimension of window coordinates even means (since computer screens are 2D).

Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object space? I think it does, but the documentation is not clear.

Finally, for each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by un[1], and the z coordinate by un[2]. However, since the position vector being unprojected is likely wrong, this is not giving good results: the object does move as my mouse moves, but it is offset quite a bit (i.e., moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, probably because the position vector I pass into unProject always has a z of 0.0.

Edit: The (incorrectly) unprojected x values range from about -0.552 to 0.552, and the y values from about -0.411 to 0.411.
Smooth shading vs flat shading: what's the difference in the models?

I'm loading the exact same model with Assimp, except one was exported from Blender shaded smooth, and the other was exported from Blender shaded flat. Here are my results from loading both into my game: the flat-shaded model has 1968 vertices, while the smooth-shaded model has only 671. Why is this happening? I don't understand why there would be fewer vertices when it's shaded smoothly...?
Rotation along a normal vector

I have a triangle in the 3D world. I have the positions of the three points and the normal vectors at the three points (all the same, because vertices are not shared). This triangle faces some direction, so it has an XYZ rotation (but I don't know the rotation itself). If I put a cube into the same scene, how can I achieve the same rotation for the cube? So if the triangle "looks" upwards, the cube should do the same. If it has a 10.0f rotation about X, how can I recover that X value?
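One approach that sidesteps Euler angles entirely (a sketch with my own helper names): build an orthonormal basis whose "up" is the triangle's normal, and use those three vectors as the columns of the cube's rotation matrix. The helper vector is arbitrary as long as it isn't parallel to the normal:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y, z; };

Vec cross(Vec a, Vec b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec normalize(Vec v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Completes the unit normal n into an orthonormal basis (t, b, n).
// Placing t, b, n as the columns of a 3x3 matrix yields a rotation
// that aligns the cube's local up axis with the triangle's normal.
void basisFromNormal(Vec n, Vec& t, Vec& b) {
    // Pick a helper axis that is guaranteed not to be parallel to n.
    Vec helper = (std::fabs(n.y) < 0.99f) ? Vec{0, 1, 0} : Vec{1, 0, 0};
    t = normalize(cross(helper, n));
    b = cross(n, t); // already unit length: n and t are orthonormal
}
```

If you genuinely need angles afterwards (e.g. that "10.0f about X"), you can extract them from the resulting matrix, but for orienting the cube the matrix itself is all the renderer needs.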
Rotate an object given only by its points?

I was recently writing a simple 3D maze FPP game. Once I was done fiddling with planes in OpenGL, I wanted to add support for importing Blender objects. The approach I used was triangulation of the object, then using Three.js to export the points to plain text, and then parsing the resulting JSON in my app. An example file can be seen here: https://github.com/d33tah/tinyfpp/blob/master/Data/Models/cross.txt — the numbers represent the x, y, z, u, v of a single vertex, and three of them combined make a triangle. I then rendered such an object triangle by triangle and played with it. I could move it back and forth and sideways, but I still have no idea how to rotate it around some axis. Let's say I'd like to rotate all the points by five degrees to the left; what would code doing that look like?
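A minimal sketch of the math (plain C++, names are mine): rotating "all the points" about the vertical (Y) axis is just applying the same 2D rotation to the x/z of every vertex; the u/v texture coordinates are untouched. (In practice you would usually keep the vertices fixed and rotate a model matrix instead, but the arithmetic is identical.)

```cpp
#include <cassert>
#include <cmath>

struct Vtx { double x, y, z; };

// Rotates a point about the Y axis by the given angle in degrees.
// Positive angles rotate counterclockwise when viewed from +Y.
Vtx rotateY(Vtx v, double degrees) {
    double rad = degrees * 3.14159265358979323846 / 180.0;
    double c = std::cos(rad), s = std::sin(rad);
    return { c * v.x + s * v.z,
             v.y,
             -s * v.x + c * v.z };
}
```

To turn the whole model five degrees, call rotateY(v, 5.0) on each parsed vertex before uploading (or each frame, accumulate the angle and rotate the original, unmodified points to avoid drift from repeated small rotations).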
Shadow map first pass and shaders

I am using OpenGL 3.3 and am trying to implement shadow mapping using cube maps. I have a framebuffer with a depth attachment and a cube map texture. My question is how to design the shaders for the first pass, when creating the shadow map. This is my vertex shader:

```glsl
in vec3 position;
uniform mat4 lightWVP;

void main()
{
    gl_Position = lightWVP * vec4(position, 1.0);
}
```

Now, do I even need a fragment shader in this pass? From what I understand after reading http://www.opengl.org/wiki/Fragment_Shader, by default gl_FragCoord.z is written to the currently attached depth component (to which my cube map texture is bound). Thus I shouldn't even need a fragment shader for this pass, and from what I understand, there is no other work to do in the fragment shader other than writing this value. Is this correct?
How do I write a timing function for a lightning flash in C++?

I need to render a horror scene with lightning flashes. Unfortunately, I am new to both C++ and OpenGL, and I am looking for an efficient way to mimic lightning timing in C++/OpenGL. I don't need the graphical implementation, just the algorithmic one: the function should return a float representing lightning intensity, and the delay between lightning flashes should be random (5 seconds, followed by 10 seconds, then 3 seconds, etc.). I hope I made everything clear!
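One simple algorithmic sketch (all constants here are made up; tune them to taste): keep a timestamp for the next strike. When the current time passes it, record the flash start and schedule the next strike at a random delay. While a flash is active, intensity decays exponentially from 1 toward 0, which reads as a bright strike followed by a fading afterglow:

```cpp
#include <cassert>
#include <cmath>
#include <random>

class Lightning {
public:
    explicit Lightning(unsigned seed) : rng(seed), delay(2.0, 10.0) {
        nextFlash = delay(rng); // first strike after a random delay
    }

    // Call once per frame with elapsed time in seconds;
    // returns light intensity in [0, 1].
    float intensity(double now) {
        if (now >= nextFlash) {
            flashStart = now;
            nextFlash = now + delay(rng); // schedule the next strike
        }
        double sinceFlash = now - flashStart;
        if (sinceFlash < 0.0) return 0.0f;           // before the first strike
        return (float)std::exp(-8.0 * sinceFlash);   // fast decay after a strike
    }

private:
    std::mt19937 rng;
    std::uniform_real_distribution<double> delay; // seconds between strikes
    double nextFlash = 0.0;
    double flashStart = -1e9; // "long ago": no flash has happened yet
};
```

In the render loop you would multiply this intensity into your ambient light each frame. For a more cinematic double-strobe, you could trigger a second, weaker flash a fixed ~0.1 s after each strike.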
OpenGL global/local states

OpenGL has many APIs which can modify different states, and it's not always obvious whether an API has global or local effect. For example, rumour has it that glTexEnv has global effect while glTexParameter has local effect (it only affects the currently bound texture). Is this true? How can I determine whether an API has global or local effect?
Reflecting objects by rendering them to the cube map

I have 2 sphere objects, and each isn't reflecting the other one. The picture is what I get when it simply reflects the static cube map, but when I try to render to the cube map, the spheres turn out black.

In the camera class:

```cpp
void switchToFace(int faceIndex) // for the cube map camera
{
    switch (faceIndex)
    {
    case 0:  m_pitch = 0;   m_yaw = 90;   break;
    case 1:  m_pitch = 0;   m_yaw = -90;  break;
    case 2:  m_pitch = 90;  m_yaw = 180;  break;
    case 3:  m_pitch = -90; m_yaw = 180;  break;
    case 4:  m_pitch = 0;   m_yaw = 180;  break;
    case 5:  m_pitch = 0;   m_yaw = 0;    break;
    default: break;
    }
}
```

In main:

```cpp
static GLuint createEmptyCubeMap(int width, int height)
{
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
    for (GLuint i = 0; i < 6; ++i)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    return textureID;
}

// main() stuff
glm::vec3 spherePositions[] = {
    glm::vec3(0.0f, 0.0f, 0.0f),
    glm::vec3(1.5f, 2.2f, 2.5f),
};

// New cube map.
Camera reflectionCamera(glm::vec3(0.0f, 0.0f, 0.0f)); // camera position; should be on the ball

GLint imageWidth, imageHeight;
SOIL_load_image(faces[1], &imageWidth, &imageHeight, 0, SOIL_LOAD_RGB);
GLuint newEnvironment = createEmptyCubeMap(imageWidth, imageHeight);

GLuint FBO, DBO;
glGenFramebuffers(1, &FBO);
glBindFramebuffer(GL_FRAMEBUFFER, FBO);
glDrawBuffer(GL_COLOR_ATTACHMENT0);

glGenRenderbuffers(1, &DBO);
glBindRenderbuffer(GL_RENDERBUFFER, DBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, imageWidth, imageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, DBO);

glViewport(0, 0, imageWidth, imageHeight);
for (GLuint i = 0; i < 6; ++i)
{
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, newEnvironment, 0);
    reflectionCamera.switchToFace(i);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, WIDTH, HEIGHT);

while (window.isOpen()) // SFML
{
    glBindVertexArray(sphereVAO);
    for (GLuint i = 0; i < 2; ++i)
    {
        model = glm::mat4();
        model = glm::translate(model, spherePositions[i]);
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model)); // modelLoc is the location of the model uniform
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_CUBE_MAP, newEnvironment);
        glDrawElements(GL_TRIANGLES, numberOfIndexes, GL_UNSIGNED_INT, 0);
    }
}
```

sphereVAO is simply the vertex array object for the sphere, and here we bind the texture to newEnvironment, which is supposed to reflect the other object at spherePositions[i]. The picture above is with the sphere position at (0, 0, 0). But the code results in this:
Understanding VAOs and adding different arrays to VAOs I'm really confused on what you do. I can do them, however I got this problem. Say you have several squares, say 1,000 squares. Now I can make a VAO for each 1,000 squares and then do some for loop to render all of the squares. I can also use shaders to move the squares. However, is it possible to put all 1,000 squares in one VAO. The problem is I'm going to create a voxel game. So I need to render about 10,000 cubes. However, the game will lag a lot even if I render 200 squares how I'm currently doing it. float points 0.0f, 0.0f, 0.0f, 0.3f, 0.0f, 0.0f, 0.3f, 0.3f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.3f, 0.0f, 0.3f, 0.3f, 0.0f float colours 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f unsigned int points vbo 0 glGenBuffers (1, amp points vbo) glBindBuffer (GL ARRAY BUFFER, points vbo) glBufferData (GL ARRAY BUFFER, 18 sizeof (float), amp points, GL STATIC DRAW) unsigned int colours vbo 0 glGenBuffers (1, amp colours vbo) glBindBuffer (GL ARRAY BUFFER, colours vbo) glBufferData (GL ARRAY BUFFER, 18 sizeof (float), amp colours, GL STATIC DRAW) unsigned int vao 0 glGenVertexArrays (1, amp vao) glBindVertexArray (vao) glBindBuffer (GL ARRAY BUFFER, points vbo) glVertexAttribPointer (0, 3, GL FLOAT, GL FALSE, 0, (GLubyte )NULL) glBindBuffer (GL ARRAY BUFFER, colours vbo) glVertexAttribPointer (1, 3, GL FLOAT, GL FALSE, 0, (GLubyte )NULL) glEnableVertexAttribArray (0) glEnableVertexAttribArray (1) Setup shaders std string vertex shader loadshaders("test vs.txt") std string fragment shader loadshaders("test fs.txt") unsigned int vs glCreateShader (GL VERTEX SHADER) const char str vertex shader.c str () glShaderSource (vs, 1, amp str, NULL) glCompileShader (vs) unsigned int fs glCreateShader (GL FRAGMENT SHADER) const char strb fragment shader.c str () glShaderSource (fs, 1, amp strb, NULL) glCompileShader (fs) unsigned int shader programme glCreateProgram () 
glAttachShader (shader programme, fs) glAttachShader (shader programme, vs) glLinkProgram (shader programme) main loop while (!glfwWindowShouldClose(window)) float ratio int width, height glfwGetFramebufferSize(window, amp width, amp height) glClear (GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glClearColor (0.6f, 0.6f, 0.8f, 1.0f) glUseProgram (shader programme) glBindVertexArray (vao) glDrawArrays (GL TRIANGLES, 0, 6) glfwSwapBuffers(window) glfwPollEvents() So how would I say add points2 , points3 and put them into the same VAO. How can I design my shader so I can then do transformation on the points in the arrays say translate points2 array 4 blocks right, without effecting every other point.
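The usual way to avoid one VAO (and one draw call) per square is to pack all squares into a single vertex buffer and issue one glDrawArrays over the whole thing. A minimal sketch of building such a batch on the CPU, with hypothetical names and an assumed 0.3f square size; the resulting vector would be uploaded once with glBufferData and drawn with glDrawArrays(GL_TRIANGLES, 0, count * 6):

```cpp
#include <vector>
#include <cassert>

// Pack `count` squares (2 triangles = 6 vertices each, xyz per vertex)
// into one contiguous vertex array, offsetting each square along x.
std::vector<float> buildSquares(int count, float size, float spacing)
{
    std::vector<float> verts;
    verts.reserve(static_cast<std::size_t>(count) * 6 * 3);
    for (int i = 0; i < count; ++i) {
        float x = i * spacing;  // per-square offset baked into the vertices
        float quad[6][3] = {
            { x,        0.0f, 0.0f }, { x + size, 0.0f, 0.0f },
            { x + size, size, 0.0f }, { x,        0.0f, 0.0f },
            { x,        size, 0.0f }, { x + size, size, 0.0f },
        };
        for (auto& v : quad)
            verts.insert(verts.end(), v, v + 3);
    }
    return verts;
}
```

To move a subset of squares independently without touching the others, the common approaches are re-uploading just that square's 6 vertices, or passing a per-square offset to the shader (e.g. via a per-instance attribute with instanced rendering).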
Screen space decals, converting world to decal space I'm trying to do screen space deferred decals following the presentation made by Pope Kim about SSDs in WH40K Space Marine (link). I've gotten to the point where I can render a decal if the bounding box is placed at the world space origin (0, 0, 0). The moment I move the bounding volume the decal is still trying to render at the world space origin and can be seen if you look "through" the bounding volume. The red planes in the picture are the bounding volume rendered for reference, the circular patch is the decal. Picture showing a decal rendered correctly at world space origin Picture showing a decal clipped when rendered offset from world space origin Decal as seen "through" offset bounding volume. My code is fairly similar to that off Kim, with minor differences accounting for the D3D to OpenGL transition. Vertex shader version 430 layout (shared) uniform PerFrameBlock mat4 gView mat4 gProjection uniform mat4 modelMatrix uniform vec3 decalSize layout ( location 0 ) in vec4 positionIN layout ( location 1 ) in vec4 normalIN layout ( location 2 ) in vec4 tangentIN layout ( location 3 ) in ivec4 boneIndices layout ( location 4 ) in vec4 boneWeights layout ( location 5 ) in vec2 uvIN out vec4 posFS out vec4 posW out vec2 uvFS void main() posW modelMatrix vec4(positionIN.xyz 1, positionIN.w) Move position to clip space posFS gProjection gView posW uvFS uvIN gl Position posFS Fragment shader version 430 extension GL ARB texture rectangle enable in vec4 posFS in vec4 posW in vec2 uvFS uniform sampler2D gNormalDepth uniform sampler2D gDiffuse uniform float gGamma uniform mat4 invProjView uniform mat4 invModelMatrix vec4 reconstruct pos(float z, vec2 uv f) vec4 sPos vec4(uv f 2.0 1.0, z, 1.0) sPos invProjView sPos return vec4((sPos.xyz sPos.w ), sPos.w) layout ( location 1 ) out vec4 diffuseRT layout ( location 2 ) out vec4 specularRT layout ( location 3 ) out vec4 glowMatIDRT void main() vec2 screenPosition posFS.xy 
posFS.w vec2 depthUV screenPosition 0.5f 0.5f depthUV vec2(0.5f 1280.0f, 0.5f 720.0f) half pixel offset float depth texture2D(gNormalDepth, depthUV).w vec4 worldPos reconstruct pos(depth, depthUV) vec4 localPos invModelMatrix worldPos float dist 0.5f abs(localPos.y) float dist2 0.5f abs(localPos.x) if (dist gt 0.0f amp amp dist2 gt 0) vec2 uv vec2(localPos.x, localPos.y) 0.5f vec4 diffuseColor texture2D(gDiffuse, uv) diffuseRT diffuseColor else diffuseRT vec4(1.0f, 0, 0, 1) I think my problem stems from the conversion from world space to decal space using the inverse model matrix. The model matrix is build by the following code glm quat rot glm mat4 rotationMatrix glm mat4 translationMatrix glm mat4 scaleMatrix rotationMatrix glm toMat4(rotation) translationMatrix glm translate(glm mat4(1.0f), position) scaleMatrix glm scale(scaleX, scaleY, scaleZ) modelMatrix translationMatrix rotationMatrix scaleMatrix To get the inverse I just use the following glm inverse(modelMatrix) What I'm wondering is if inverting the model matrix like this actually correct or am I doing something with it which would cause this kind of behaviour? I've tried offsetting the calculated local space position in the shader with no luck (probably because the distance gets borked) and I'm starting to run out of ideas. Any help is much appreciated!
How can I get the texture file name for my polygon? I have a problem with the FBX SDK. I read in the data for the vertex position and the UV coordinates. It works fine, but now I want to read for each polygon to which texture it belongs, so that I can have models with multiple textures. Can anyone tell me how I can get the texture file name for my polygon? My code to read in vertex position and uv coordinates is the following int i, j, lPolygonCount pMesh gt GetPolygonCount() FbxVector4 lControlPoints pMesh gt GetControlPoints() int vertexId 0 for (i 0 i lt lPolygonCount i ) int lPolygonSize pMesh gt GetPolygonSize(i) for (j 0 j lt lPolygonSize j ) int lControlPointIndex pMesh gt GetPolygonVertex(i, j) FbxVector4 pos lControlPoints lControlPointIndex current model vertex index .x pos.mData 0 pivot offset 0 current model vertex index .y pos.mData 1 pivot offset 1 current model vertex index .z pos.mData 2 pivot offset 2 FbxVector4 vertex normal pMesh gt GetPolygonVertexNormal(i,j, vertex normal) current model vertex index .nx vertex normal.mData 0 current model vertex index .ny vertex normal.mData 1 current model vertex index .nz vertex normal.mData 2 read in UV data FbxStringList lUVSetNameList pMesh gt GetUVSetNames(lUVSetNameList) get lUVSetIndex th uv set const char lUVSetName lUVSetNameList.GetStringAt(0) const FbxGeometryElementUV lUVElement pMesh gt GetElementUV(lUVSetName) if(!lUVElement) continue only support mapping mode eByPolygonVertex and eByControlPoint if( lUVElement gt GetMappingMode() ! FbxGeometryElement eByPolygonVertex amp amp lUVElement gt GetMappingMode() ! FbxGeometryElement eByControlPoint ) return index array, where holds the index referenced to the uv data const bool lUseIndex lUVElement gt GetReferenceMode() ! FbxGeometryElement eDirect const int lIndexCount (lUseIndex) ? 
lUVElement gt GetIndexArray().GetCount() 0 FbxVector2 lUVValue get the index of the current vertex in control points array int lPolyVertIndex pMesh gt GetPolygonVertex(i,j) the UV index depends on the reference mode int lUVIndex pMesh gt GetTextureUVIndex(i, j) lUVValue lUVElement gt GetDirectArray().GetAt(lUVIndex) current model vertex index .tu (float)lUVValue.mData 0 current model vertex index .tv (float)lUVValue.mData 1 vertex index float v1 3 , v2 3 , v3 3 v1 0 current model vertex index 3 .x v1 1 current model vertex index 3 .y v1 2 current model vertex index 3 .z v2 0 current model vertex index 2 .x v2 1 current model vertex index 2 .y v2 2 current model vertex index 2 .z v3 0 current model vertex index 1 .x v3 1 current model vertex index 1 .y v3 2 current model vertex index 1 .z collision model gt addTriangle(v1,v2,v3)
glGetUniformLocation returns 23724032 as the title says, I have a problem using glGetUniformLocation call that returns, for the following code, the value 23724032. I'm writing a little engine and the draw function of my models is void Model draw() enable the shader program m material gt enable() set the current value of the ModelViewProjection matrix m material gt getShaderProgram() gt uniform("u mvp", getPose()) draw the mesh using VAO m mesh gt draw() disable the shader program m material gt disable() where void ShaderProgram uniform(std string i varName, glm mat4 amp i value) get the location of the uniform variable in the shader program GLint loc glGetUniformLocation(m programId, i varName.c str()) if(loc ! 1) glUniformMatrix4fv(loc, 1, GL FALSE, glm value ptr(i value)) My shaders are very simple vertex shader version 330 uniform mat4 u mvp attribute vec4 i position attribute vec4 i normal attribute vec2 i texcoords void main(void) gl Position u mvp i position and fragment shader version 330 precision highp float void main(void) gl FragColor vec4(1.0, 0.0, 0.0, 1.0) So, I have a question why I get this weird value from the glGetUniformLocation call? I found this topic (link) from the OpenGL discussion board, but I still don't understand why my code doesn't draw anything. If the Intel integrated video card uses its own ids for attributes and uniforms, why is this a problem? What I have to do to fix it? Note that if I run my application on high performance (with the dedicated video card), the function returns 0 as expected. My notebook is a HP Pavilion dv7 6c90el with a ATI Radeon HD 7690M XT and a Intel 3000. Thanks.
How to manage different shaders dynamically? Currently I have only a really basic shader and a shader class. My question is that if I want to make different shaders (with different uniforms, inputs, etc) how should the architecture look like? Edit I want it in a way that I can add remove modify shaders without recompiling. ( 1, 2, 3 were written before this edit, so I'm sorry if they are irrelevant now) (I wrote 4 just now, and currently I am very happy with it, but I would like to hear your opinion about it.) I have a few ideas (1.) Should I make an abstract shader class, and a derived class for every invidual shader program? (So the abstract class handles the general things, and the derived class handles the specific things, like uniforms, inputs, etc) But in my opinion this isn't a good practice, because for every shader I create, I have to create a new class too, which means that if I want to add a new shader, or modify one's source, I have to recompile the engine. (2.) Should I register the shader specific things (location of inputs, uniforms, etc.), and pass EVERY possible required data to the shader, which will use only the ones which were registered during the shader's compile time? Like Shader A needs position and normal, Shader B needs position and UV. position, normal and UV will be passed to the active shader object, and it will buffer only those which it needs (based on the registered inputs, uniforms, etc) and won't care about the others. (3.) Having every possible input, uniform etc in all the shaders. The active shader will receive every required input, uniform, etc. So a shader is only unique in its main(). Like Shader A uses input 1,2,3, shader B uses input 1,2,5. (But both of them will receive all inputs) These are the way I thought of, but these aren't very good in my opinion, there must be a good way for this. Edit (4.) What I most recently thought of Only one shader class. 
has a field unordered map lt name, location at shader During shader initialization, I store the input, uniforms, etc in the previously mentioned map. When I want to render the Object, I call the active shader's SetInput(...) or SetUniform(...) method These methods receive a string and a data (like vec3, vec4, mat4, etc) In these methods, I check if the string is in the map, and if it is, I buffer the data to the location. I think this is good, because there will be a few method calls for every object, and then finding a key in an unordered map is really fast. What do you say?
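Idea (4) can be sketched as a small cache class. Here the lookup callback stands in for glGetUniformLocation (so the logic is testable without a GL context); all names are hypothetical. A location of -1 simply means "this shader has no such uniform", and the caller skips the upload, which is exactly the behaviour that lets one shader class serve many different shader programs:

```cpp
#include <string>
#include <unordered_map>
#include <functional>
#include <cassert>

// Caches uniform locations by name; queries the lookup function only on
// first use, then serves repeated SetUniform() calls from the map.
class UniformCache {
public:
    explicit UniformCache(std::function<int(const std::string&)> lookup)
        : m_lookup(std::move(lookup)) {}

    int location(const std::string& name)
    {
        auto it = m_cache.find(name);
        if (it != m_cache.end())
            return it->second;
        int loc = m_lookup(name);   // e.g. glGetUniformLocation(program, ...)
        m_cache.emplace(name, loc); // cache even -1, so misses are cheap too
        return loc;
    }

private:
    std::function<int(const std::string&)> m_lookup;
    std::unordered_map<std::string, int> m_cache;
};
```

A refinement worth considering: instead of populating the map lazily, enumerate all active uniforms once after linking (glGetActiveUniform), so typos in uniform names can be logged up front.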
OpenGL Stack overflow if I do, Stack underflow if I don't! I'm in a multimedia class in college, and we're "learning" OpenGL as part of the class. I'm trying to figure out how the OpenGL camera vs. modelview works, and so I found this example. I'm trying to port the example to Python using the OpenGL bindings it starts up OpenGL much faster, so for testing purposes it's a lot nicer but I keep running into a stack overflow error with the glPushMatrix in this code def cube() for x in xrange(10) glPushMatrix() glTranslated( positionx x 1 10, 0, positionz x 1 10) translate the cube glutSolidCube(2) draw the cube glPopMatrix() According to this reference, that happens when the matrix stack is full. So I thought, "well, if it's full, let me just pop the matrix off the top of the stack, and there will be room". I modified the code to def cube() glPopMatrix() for x in xrange(10) glPushMatrix() glTranslated( positionx x 1 10, 0, positionz x 1 10) translate the cube glutSolidCube(2) draw the cube glPopMatrix() And now I get a buffer underflow error which apparently happens when the stack has only one matrix. So am I just waaay off base in my understanding? Or is there some way to increase the matrix stack size? Also, if anyone has some good (online) references (examples, etc.) for understanding how the camera model matrices work together, I would sincerely appreciate them! Thanks! EDIT Here is the pastebin of the full code http pastebin.com QXxNisuA
How is forward rendering done using OpenGL? I recently came across the term forward rendering. I'm curious how this can be done in OpenGL. I have searched a lot on this, but the majority of the results I get cover the theory rather than code. Is there any example implementation of this rendering technique in OpenGL?
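In essence, forward rendering is just the "default" pipeline: each object is drawn once, and its fragment shader computes the full lighting for all lights in that same pass (as opposed to deferred rendering, which first writes geometry attributes to a G-buffer and shades later). A hedged fragment-shader sketch of a forward light loop, with assumed uniform and varying names and simple diffuse-only lighting:

```glsl
#version 330 core
// Forward shading sketch: all lights applied while the geometry is drawn.
const int MAX_LIGHTS = 4;
uniform vec3 lightPos[MAX_LIGHTS];
uniform vec3 lightColor[MAX_LIGHTS];
in vec3 worldPos;
in vec3 worldNormal;
out vec4 fragColor;

void main()
{
    vec3 n = normalize(worldNormal);
    vec3 result = vec3(0.0);
    for (int i = 0; i < MAX_LIGHTS; ++i) {
        vec3 l = normalize(lightPos[i] - worldPos);
        result += max(dot(n, l), 0.0) * lightColor[i]; // Lambert term
    }
    fragColor = vec4(result, 1.0);
}
```

On the CPU side there is nothing special: bind the program, upload the light uniforms, and draw each mesh once per frame.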
How to modify VBO data I am learning LWJGL so I can start working on my game. To learn it, I got the idea of implementing the map builder first, so I can get comfortable with graphics programming. For the map creation tool I need to draw new elements, or redraw the old ones with different coordinates. Let me explain: my game will be a 2D scroller. The map will consist of multiple rectangles (2-triangle strips). When I press the left mouse button I want to start the rectangle, and when I release it I want the rectangle's bottom-right corner to stop at that position. Since I want to use VBOs, I want to know how to modify data inside the VBO based on user input. Should I keep a copy of the vertex array and upload the whole array to the VBO on each user input? How is a VBO update usually implemented?
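The usual pattern is: keep the vertices in a CPU-side array, allocate the VBO once with GL_DYNAMIC_DRAW, and on change upload only the modified byte range with glBufferSubData. A sketch of the dirty-range bookkeeping as pure logic (shown in C++ for illustration; the same idea applies with LWJGL's glBufferSubData; class and method names are made up):

```cpp
#include <vector>
#include <cstddef>
#include <cstdint>
#include <algorithm>
#include <cassert>

// Tracks which contiguous float range changed since the last upload,
// so each frame only that range is pushed with glBufferSubData.
class DynamicVertexData {
public:
    void set(std::size_t index, float value)
    {
        if (index >= m_data.size())
            m_data.resize(index + 1, 0.0f);
        m_data[index] = value;
        m_dirtyBegin = std::min(m_dirtyBegin, index);
        m_dirtyEnd   = std::max(m_dirtyEnd, index + 1);
    }

    // Fills the byte offset/size to pass to glBufferSubData.
    // Returns false (and uploads nothing) when nothing changed.
    bool flush(std::size_t& offsetBytes, std::size_t& sizeBytes)
    {
        if (m_dirtyBegin >= m_dirtyEnd)
            return false;
        offsetBytes = m_dirtyBegin * sizeof(float);
        sizeBytes   = (m_dirtyEnd - m_dirtyBegin) * sizeof(float);
        m_dirtyBegin = SIZE_MAX;
        m_dirtyEnd   = 0;
        return true;
    }

    const float* data() const { return m_data.data(); }

private:
    std::vector<float> m_data;
    std::size_t m_dirtyBegin = SIZE_MAX;
    std::size_t m_dirtyEnd   = 0;
};
```

For a rectangle being dragged with the mouse, only its own few vertices dirty each frame, so the per-frame upload stays tiny even with a large map.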
What are the implications of using multiple OpenGL Contexts on a single thread? I'm trying to integrate two third party OpenGL rendering pipelines into the same application, namely Cinder's OpenGL API for 3D drawing and backbuffer rendering, and Google Skia's API for 2D drawing. Unfortunately, Skia tends to trigger a lot of GL state changes in general use, and offers no functionality to reset the GL state. To make matters worse, Cinder (GLNext branch) tries to keep an internal record of all of its GL state changes so that they can be easily 'unwound', but bad things can happen if its internal representation becomes different from the actual GL state. The easiest way to alleviate these problems was to create a new OpenGL context for use exclusively by Skia, performing context switches only when 2D updates were necessary. However, I've noticed some weird behaviours when I'm required to switch context more frequently, like certain draw calls failing or blend states flickering. Everything I've read about GL contexts indicates that they're meant for use in multiple threads or multiple windows. I've also read about context switches failing in certain circumstances? Is there anything terribly wrong with switching context in a single threaded, single windowed application?
Libgdx Transparent color over texture I am attempting to tint a texture a color but I want the texture to show under the tint. For example, I have a picture of a person but I want to tint them a light green and not change the transparency of the actual person itself. So far I have attempted to use the SpriteBatch method setColor which takes rgba values. When I set the alpha value to .5 it will render the tinting and the texture with that alpha value. Is there any way to separate the alpha values of the tint and the texture? I know I could draw another texture on top of it but I don't want to have two draw passes for the one texture because it will be inefficient. If there's anyway to do it in raw OpenGL that'd be great too.
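One single-pass approach in raw GLSL is to do the tinting in the fragment shader: blend the texel's rgb toward the tint color by the tint's alpha, while passing the texel's own alpha through untouched. A hedged sketch; the uniform and varying names below are assumptions for illustration, not libGDX's built-in shader names:

```glsl
#version 120
// Tint rgb only; the sprite's own transparency (texel.a) is preserved.
uniform sampler2D u_texture;
uniform vec4 u_tint;        // rgb = tint color, a = tint strength (0..1)
varying vec2 v_texCoords;

void main()
{
    vec4 texel = texture2D(u_texture, v_texCoords);
    vec3 tinted = mix(texel.rgb, u_tint.rgb, u_tint.a);
    gl_FragColor = vec4(tinted, texel.a);
}
```

With this, a single draw call per sprite suffices, and the tint strength is independent of the texture's alpha channel.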
Rendering up close, OpenGL performance issues I have been developing a massive RTS for the past 18 months, which is coming together nicely! Now I have an issue I can't really wrap my head around. If I render 100 models from a distance where the camera can see them all, everything works perfectly. If I then render them with the camera up close, my GPU usage goes to 100% and of course everything starts to lag. I don't really get the difference, the math is the same. It works on an Nvidia M1000M and burns up an AMD RX 5700. OpenGL version 4.1. CPU, system memory and GPU memory usage stay the same, so I don't suspect any leaks. I have tried debugging with glGetError() and CodeXL, with no luck. Am I missing something simple in OpenGL?
Best way to initialize values on 32 bit FP framebuffer in OpenGL I have a framebuffer bound to 32 bit FP texture glGenTextures(1, amp texColor) glBindTexture(GL TEXTURE 2D, texColor) glTexImage2D(GL TEXTURE 2D, 0, GL R32F, w, h, 0, GL RGBA, GL FLOAT, (GLvoid )NULL) glBindTexture(GL TEXTURE 2D, 0) glGenFramebuffers(1, amp fbo) glBindFramebuffer(GL FRAMEBUFFER, fbo) glFramebufferTexture2D(GL FRAMEBUFFER, GL COLOR ATTACHMENT0, GL TEXTURE 2D, texColor, 0) What's the best way to initialize framebuffer to a value 1000.0? The problem is that glClearColor(1000.0f, 0.0f, 0.0f, 0.0f) glClear(GL COLOR BUFFER BIT) clamps values to 0,1 interval, and clearing the texture texColor would require memory allocation of w h sizeof(float) bytes of memory and manual filling of that buffer with value 1000.0 which is slow (I would inject it using glTexImage2D). I'm using OpenGL 3.2 Core profile. Thanks, Josip
Opengl Quad Tessellation Control Shader I have the generic tessellation evaluation shader for triangles but I need to make it work for quads. Is there any chance someone could explain what is happening here and point me in the right direction? layout(triangles, equal spacing, cw) in in vec3 tcPosition out vec3 tePosition out vec3 tePatchDistance uniform mat4 Projection uniform mat4 Modelview void main() vec3 p0 gl TessCoord.x tcPosition 0 vec3 p1 gl TessCoord.y tcPosition 1 vec3 p2 gl TessCoord.z tcPosition 2 tePatchDistance gl TessCoord tePosition normalize(p0 p1 p2) gl Position Projection Modelview vec4(tePosition, 1) I understand that they're multiplying the control points (the original patch vertices) by the components of the new vertex (I guess relative to the patch verts?) and then normalizing the sum of these vectors to get the actual position of the new vertex which conforms to a nice unit sphere when applied to a cube of patches. I want to extend this to work on quads layout(quads, ...) in and add the tcPosition 3 vector but I don't really know how to get the same behavior with the fourth vertex.
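For context on what the triangle version does: with layout(triangles), gl_TessCoord holds barycentric coordinates, so the weighted sum of the three control points is the new vertex's position on the patch, and normalizing it pushes it onto the unit sphere. In the quad domain, gl_TessCoord.xy is instead a (u, v) pair in [0,1]^2, so the four corners are combined bilinearly. A hedged sketch of the quad version; note the corner ordering (whether the fourth vertex is across the diagonal or adjacent) depends on how you feed the patch, so the 0/1/3/2 indexing below is an assumption to check against your vertex order:

```glsl
#version 400 core
layout(quads, equal_spacing, cw) in;
in vec3 tcPosition[];
out vec3 tePosition;
uniform mat4 Projection;
uniform mat4 Modelview;

void main()
{
    float u = gl_TessCoord.x;
    float v = gl_TessCoord.y;
    vec3 a = mix(tcPosition[0], tcPosition[1], u); // bottom edge
    vec3 b = mix(tcPosition[3], tcPosition[2], u); // top edge
    tePosition = normalize(mix(a, b, v));          // project onto unit sphere
    gl_Position = Projection * Modelview * vec4(tePosition, 1.0);
}
```

If the sphere comes out with bowtie-shaped patches, swap tcPosition[2] and tcPosition[3]; that is the classic symptom of the other winding.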
OpenGL Cubemap skybox edge issue I implemented a skybox into my program using a tutorial, and using the provided 6 textures from that tutorial to make a cube map texture, my skybox looked fine. However, ever since then every other skybox texture set I have tried to add has had issues with the edges not blending together. Here is generally how they always end up looking here is my code for for loading the textures and the parameters glActiveTexture(GL TEXTURE0) textureID SOIL load OGL cubemap(textures 0 , textures 1 , textures 2 , textures 3 , textures 4 , textures 5 , 0, 0, SOIL LOAD RGB) glBindTexture(GL TEXTURE CUBE MAP, textureID) if (textureID 0) glGenTextures(1, amp textureID) int width, height unsigned char image glBindTexture(GL TEXTURE CUBE MAP, textureID) for (GLuint i 0 i lt 6 i ) GLuint sID SOIL load OGL texture(textures i , 0, 0, 0) image SOIL load image(textures i , amp width, amp height, 0, SOIL LOAD RGB) glTexImage2D(GL TEXTURE CUBE MAP POSITIVE X i, 0, GL RGB, width, height, 0, GL RGB, GL UNSIGNED BYTE, image) SOIL free image data(image) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexParameteri(GL TEXTURE CUBE MAP, GL TEXTURE WRAP R, GL CLAMP TO EDGE) glBindTexture(GL TEXTURE CUBE MAP, 0) And code for rendering the skybox glDepthMask(GL FALSE) shader gt bind() glBindTexture(GL TEXTURE CUBE MAP, textureID) viewMatrix.r4.x 0 viewMatrix.r4.y 0 viewMatrix.r4.z 0 shader gt loadMatrix("viewMatrix", amp viewMatrix.m11) glBindVertexArray(vao) glEnableVertexAttribArray(0) glDrawArrays(GL TRIANGLES, 0, 36) glBindVertexArray(0) glBindTexture(GL TEXTURE CUBE MAP, 0) shader gt unbind() glDepthMask(GL TRUE) And the shader code const char vertexShader " version 330 r n" "in vec3 position r n " "out vec3 texCoord r n" 
"uniform mat4 projectionMatrix r n" "uniform mat4 viewMatrix r n" "void main() r n" " r n" "gl Position projectionMatrix viewMatrix vec4(position, 1) r n" "texCoord position r n" " " const char fragmentShader " version 330 r n" "in vec3 texCoord r n" "out vec4 color r n" "uniform samplerCube sampler r n" "void main() r n" " r n" "color texture(sampler, texCoord) r n" " " Does anyone see something that I might be missing that's causing my skybox to not render properly for most cube maps?
alpha test shader 'discard' operation not working GLES2 I wrote this shader to illustare alpha test action in GLES2 (Galaxy S6). I think is not working at all cause I don't see any change with or without it. Is there anything Im missing? Any syntax error? I know its better not using if in shader but for now this is the solution I need. precision highp float precision highp int precision lowp sampler2D precision lowp samplerCube 0 CMPF ALWAYS FAIL, 1 CMPF ALWAYS PASS, 2 CMPF LESS, 3 CMPF LESS EQUAL, 4 CMPF EQUAL, 5 CMPF NOT EQUAL, 6 CMPF GREATER EQUAL, 7 CMPF GREATER bool Is Alpha Pass(int func,float alphaRef, float alphaValue) bool result true if (func 0) result false break if (func 1) result true break if (func 2) result alphaValue lt alphaRef break if (func 3) result alphaValue lt alphaRef break if (func 4) result alphaValue alphaRef break if (func 5) result alphaValue ! alphaRef break if (func 6) result alphaValue gt alphaRef break if (func 7) result alphaValue gt alphaRef break return result void FFP Alpha Test(in float func, in float alphaRef, in vec4 texel) if (!Is Alpha Pass(int(func), alphaRef, texel.a)) discard
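One likely reason nothing changes: GLSL does not allow `break` outside of loops and switch statements, and the comparisons need `==` rather than `=`, so a shader written that way should fail to compile (check the shader info log), leaving the previous program in effect. A corrected sketch of the same function, returning directly instead of breaking, with the same 0-7 comparison-function codes:

```glsl
// 0 ALWAYS_FAIL, 1 ALWAYS_PASS, 2 LESS, 3 LESS_EQUAL,
// 4 EQUAL, 5 NOT_EQUAL, 6 GREATER_EQUAL, 7 GREATER
bool isAlphaPass(int func, float alphaRef, float alphaValue)
{
    if (func == 0) return false;
    if (func == 1) return true;
    if (func == 2) return alphaValue <  alphaRef;
    if (func == 3) return alphaValue <= alphaRef;
    if (func == 4) return alphaValue == alphaRef;
    if (func == 5) return alphaValue != alphaRef;
    if (func == 6) return alphaValue >= alphaRef;
    return alphaValue > alphaRef;
}

// usage in main(), as in the question:
// if (!isAlphaPass(int(func), alphaRef, texel.a)) discard;
```

Note that exact float equality (codes 4 and 5) rarely triggers in practice; comparing against a small epsilon is the usual workaround.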
OpenGL SDL textures... game shuts down I'm going to create a game in C with SDL amp openGL but adding textures won't work. the code is in some different classes. here's the main file include "ZOMBOX.h" ZOMBOX ZOMBOX() isRunning true int ZOMBOX Execute() Init() bool mainloop false SDL Event event Create an texture unsigned int Ball texture 0 Load the image into the texture using the function Ball texture loadTexture("Smile.png") std cout lt lt "OpenGL is running n" while(isRunning) while(SDL PollEvent( amp event)) Event( amp event) if(mainloop false) std cout lt lt "Main loop has started n" mainloop true Logic() Render() Clear() SDL Quit() return 0 int main(int argc, char argv ) ZOMBOX theApp return theApp.Execute() Ok, now the rendering include "ZOMBOX.h" void ZOMBOX Render() extern float ballX extern float ballY extern float ballWH extern int vellX extern int vellY extern unsigned int Ball texture Ball texture loadTexture("Smile.png") RENDERING to the screen Enable textures when we are going to blend an texture glClear(GL COLOR BUFFER BIT) glPushMatrix() Start rendering phase glOrtho(0,800,600,0, 1,1) Set the matrix glColor4ub(0,0,0,255) White color glEnable(GL TEXTURE 2D) glBindTexture(GL TEXTURE 2D, Ball texture) glBegin(GL QUADS) Start drawing the pad We set the corners of the texture using glTexCoord2d glTexCoord2d(0,0) glVertex2f(ballX,ballY) Upper left corner glTexCoord2d(1,0) glVertex2f(ballX ballWH,ballY) Upper right corner glTexCoord2d(1,1) glVertex2f(ballX ballWH,ballY ballWH) Down right corner glTexCoord2d(0,1) glVertex2f(ballX,ballY ballWH) Down left corner glEnd() End drawing Disable textures when we are done using them glDisable(GL TEXTURE 2D) glPopMatrix() End rendering phase SDL GL SwapBuffers() SDL Delay(1) i know the comments are not right but i understand it... 
now the texture here's the fault i think include "ZOMBOX.h" Function for loading an image into an texture GLuint ZOMBOX loadTexture( const std string amp fileName ) SDL Surface image IMG Load( fileName.c str() ) SDL DisplayFormatAlpha(image) unsigned object(0) glGenTextures(1, amp object) glBindTexture(GL TEXTURE 2D, object) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) glTexParameterf(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP S, GL CLAMP TO EDGE) glTexParameterf(GL TEXTURE 2D, GL TEXTURE WRAP T, GL CLAMP TO EDGE) glTexImage2D(GL TEXTURE 2D, 0, GL RGBA, image gt w, image gt h, 0, GL RGBA, GL UNSIGNED BYTE, image gt pixels) Free surface SDL FreeSurface(image) return object I use variables you cant see but that's no problem thus they are not nessecary(i think). Sorry for my bad English im just a dutch highschool student.
Adding 2d hud gui to a 3d game engine which has no 2d features I have been following the 3d game engine tutorial series made by theBennyBox for a few months now, and have finally decided to create a game with it. My problem is that it is a 3d only engine which does not have an easy way to draw in 2d. The code for it is here (github) I've figured that I need to change the camera class to take in a generic matrix, so The old camera class was this public class Camera extends GameComponent private Matrix4f projection public Camera(float fov, float aspect, float zNear, float zFar) this.projection new Matrix4f().initPerspective(fov, aspect, zNear, zFar) public Matrix4f getViewProjection() Matrix4f cameraRotation getTransform().getTransformedRot().conjugate().toRotationMatrix() Vector3f cameraPos getTransform().getTransformedPos().mul( 1) Matrix4f cameraTranslation new Matrix4f().initTranslation(cameraPos.getX(), cameraPos.getY(), cameraPos.getZ()) return projection.mul(cameraRotation.mul(cameraTranslation)) Override public void addToEngine(CoreEngine engine) engine.getRenderingEngine().addCamera(this) To this public class Camera extends GameComponent private Matrix4f projection public Camera(Matrix4f projection) this.projection projection public Matrix4f getViewProjection() Matrix4f cameraRotation getTransform().getTransformedRot().conjugate().toRotationMatrix() Vector3f cameraPos getTransform().getTransformedPos().mul( 1) Matrix4f cameraTranslation new Matrix4f().initTranslation(cameraPos.getX(), cameraPos.getY(), cameraPos.getZ()) System.out.println("Location " cameraPos.getX() " " cameraPos.getY() " " cameraPos.getZ()) return projection.mul(cameraRotation.mul(cameraTranslation)) Override public void addToEngine(CoreEngine engine) engine.getRenderingEngine().setMainCamera(this) But now I don't know what to do. I can't work out how to switch between 2 cameras and draw in 2d. How would I achieve this? Thanks!!
How do I add a border to rectangles using a shader? I want to draw some rectangles with a border. Currently I render the fill with glDrawArrays(Triangles, ...) and the border with glDrawArrays(LineLoop, ...). Is there a neater way? I'm targeting OpenGL 3.3. Details Illustrative excerpt vertexBuffer gt VertexAttribArray(0) colorBuffer gt VertexAttribArray(1) texCoordBuffer gt VertexAttribArray(2) GL.DrawArrays(PrimitiveType.Triangles, 0, 12) 2 test rectangles GL.DisableVertexAttribArray(1) Disable color array GL.DisableVertexAttribArray(2) Disable texture array GL.DrawArrays(PrimitiveType.LineLoop, 0, 6) First rectangle border GL.DrawArrays(PrimitiveType.LineLoop, 6, 6) Second rectangle border Lots of draw calls for every rectangle, if there are more than the 2 test rectangles... This works fine, but I don't like having to draw the additional LineLoops for every rectangle. How can I improve this? Could I do this elegantly with a shader? Something like this VertexArributes GL.DrawArrays(Triangles, 0, vertices.Length) Shader does the border DisableVertexAttributes My current shaders Vertex version 330 core layout(location 0) in vec4 position layout(location 1) in vec4 color layout(location 2) in vec2 texCoord uniform mat4 projMatrix uniform mat4 worldMatrix out vec4 vColor out vec2 texCoords void main() gl Position projMatrix worldMatrix position texCoords 0 texCoord vColor color Fragment version 330 core in vec4 vColor in vec2 texCoords uniform sampler2D tex out vec4 fColor void main(void) fColor texture2D(tex, texCoords 0 .st) vColor
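Since the quad already carries texture coordinates in [0,1], the border can indeed be drawn in the fragment shader, which removes the extra LineLoop calls entirely. A hedged sketch extending the question's fragment shader; u_borderWidth and u_borderColor are assumed new uniforms, and the border width is in texture-coordinate units (so a constant pixel width would require scaling it by the rectangle's size):

```glsl
#version 330 core
in vec4 vColor;
in vec2 texCoords;
uniform sampler2D tex;
uniform float u_borderWidth;   // e.g. 0.05 in texcoord units
uniform vec4 u_borderColor;
out vec4 fColor;

void main(void)
{
    // A fragment is "on the border" when its texcoord is within
    // u_borderWidth of any of the four edges of the quad.
    bool onBorder = texCoords.x < u_borderWidth
                 || texCoords.x > 1.0 - u_borderWidth
                 || texCoords.y < u_borderWidth
                 || texCoords.y > 1.0 - u_borderWidth;
    fColor = onBorder ? u_borderColor
                      : texture(tex, texCoords) * vColor;
}
```

With this shader, the whole batch of rectangles renders with the single glDrawArrays(PrimitiveType.Triangles, ...) call, border included.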
Memory leak with glfwSetWindowTitle? I am using GLFW for a game, and I have a function which allows me to set the window title. PROBLEM WITH INCREASING MEMORY USAGE void WindowSystem setTitle( const string amp title ) glfwSetWindowTitle( window, title.c str() ) This function is called once during every iteration of the main program loop, so that I can have a basic FPS counter. Under Xcode the program memory usage increases faster and faster. If I replace title.c str() with a literal, the problem does not happen, i.e., the following does not cause the same memory usage growth. DOESN'T CAUSE PROBLEM void WindowSystem setTitle( const string amp title ) glfwSetWindowTitle( window, "Hello world!" ) This where the function is called. int SystemControl update() double time glfwGetTime() double timeStep time control gt lastUpdate control gt lastUpdate time float frameRate 1.0f timeStep string windowTitle std to string( frameRate ) control gt windowSystem gt setTitle( windowTitle ) control gt physicsSystem gt update( timeStep ) return OK The string title is allocated on the stack, and should be freed at the end of update. Could there possibly be a memory leak? I understand that the pointer for c str() is internally managed, and does not need to be freed.
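Independent of whether the driver is leaking, updating the title once per frame is far more often than a human-readable FPS counter needs; throttling the glfwSetWindowTitle call to a few times per second both sidesteps driver-side string churn and makes the number legible. A sketch of the throttle as pure logic (class and method names are made up; `now` would come from glfwGetTime()):

```cpp
#include <cassert>

// Gate an expensive per-frame action (like setting the window title)
// to at most once per `interval` seconds.
class TitleThrottle {
public:
    explicit TitleThrottle(double interval) : m_interval(interval) {}

    // Call every frame with the current time; true means "update now".
    bool shouldUpdate(double now)
    {
        if (now - m_last < m_interval)
            return false;
        m_last = now;
        return true;
    }

private:
    double m_interval;
    double m_last = -1e9;  // far in the past so the first call fires
};
```

In the update loop this becomes: build the title string and call setTitle only when shouldUpdate(glfwGetTime()) returns true.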
OpenGL get the outline of multiple overlapping objects I just had an idea for my ongoing game, made with OpenGL in C. I'd like to draw a thick outline (5-6 pixels) around multiple overlapping objects when the player wins something. I thought the best way would be to use the stencil buffer, but I've spent a few hours trying to do an off-screen render of the stencil buffer and I can't achieve any kind of result, so there are probably other techniques! This is what I want to get. Any ideas?
Issue translating objects by one unit in the y direction I'm currently making Space Invaders (in OpenGL/SDL) and I'm running into an issue with the movement of the aliens. I have a 2D vector of aliens (5 rows of 11) and I'm translating each in the x direction by a constant factor times elapsed time. If the right- or leftmost alien in the vector hits either side of the screen, I decrement the y position of every alien. For some reason, when it collides with the side of the screen, it translates downward in the y direction indefinitely. I thought that this would not happen since I wasn't multiplying the y position by elapsed time, but I think it's probably happening because the update() method is called once per frame. Here is the code for the update function (inside the enemies class, which holds the 2D array):

void update(float elapsed) {
    // Update positions
    float minX = -5.2;
    float maxX = 5.2;
    bool hitRight = false;
    bool hitLeft = false;
    // if the enemies haven't hit the right side
    if ((enemyVect[0][rightmost]->x) + ((enemyVect[0][rightmost]->width) / 2.0) < maxX) {
        for (int i = 0; i < enemyVect.size(); i++)
            for (int j = 0; j < enemyVect[i].size(); j++)
                enemyVect[i][j]->x += elapsed * 0.4 * xDirect;
    }
    // if they've hit the right side
    else if ((enemyVect[0][rightmost]->x) + ((enemyVect[0][rightmost]->width) / 2.0) > maxX) {
        xDirect = -1;
        hitRight = true;
        for (int i = 0; i < enemyVect.size(); i++)
            for (int j = 0; j < enemyVect[i].size(); j++)
                if (hitRight) enemyVect[i][j]->y -= 0.6; // move each row of enemyVect down to the next level
        hitRight = false;
    }
    // if the enemies haven't hit the left side
    if ((enemyVect[0][leftmost]->x) - ((enemyVect[0][leftmost]->width) / 2.0) > minX) {
        for (int i = 0; i < enemyVect.size(); i++)
            for (int j = 0; j < enemyVect[i].size(); j++)
                enemyVect[i][j]->x += elapsed * 0.4 * xDirect;
    }
    // if they've hit the left side
    else if ((enemyVect[0][leftmost]->x) - (enemyVect[0][leftmost]->width / 2.0) < minX) {
        xDirect = 1;
        for (int i = 0; i < enemyVect.size(); i++)
            for (int j = 0; j < enemyVect[i].size(); j++)
                enemyVect[i][j]->y -= 0.6; // move each row of enemyVect down to the next level
    }
}

How should I go about making it so that the y position of each alien only gets decremented once, right after the collision with either side of the screen?
How To Resize Existing Texture In OpenGL 4.3 I would like to know how to resize an existing OpenGL 4.3 texture while keeping its current contents. I'm using glTexImage2D(). Do I simply re-call glTexImage2D() with nullptr for the data parameter?
OpenGL rendering to multiple windows, having 1 main loop for each window I have written a little OpenGL framework in the past year that I would like to extend to support multiple windows in the near future. I have an idea about what I would like to do, but I am not sure if that is possible so far. In a nutshell, the inner working of my framework is as follows: A window is created. A render context for that window is created. An application deriving from an interface is created. window.run() is called, passing the application running in the window and the render context used as parameters.

sglr::W32Window w(nCmdShow);
w.create(L"DemoApp", sglr::DISPLAY_MODE_ENUM::WINDOWED, sglr::Resolution(800, 800));
sglr::W32Rendercontext r;
r.create(w, 4, 0, sglr::CONTEXT_TYPE_ENUM::DEBUG);
TestApp app;
return w.run(app, r);

The run method in the Win32 implementation looks like the following (just some of the code):

mRunningApp->onInit();
// start message loop
while (1) {
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    // if message is WM_QUIT, exit loop
    if (msg.message == WM_QUIT)
        break;
    mRunningApp->onUpdate(mDt);
    mRenderContext->setAsCurrent();
    mRunningApp->onRender();
    mRenderContext->disableAsCurrent();
}

What I would like to do is to create a separate window, render context and application, and start a separate main loop by calling run(), just like with the first window. I already have a separate render context for each window and know about setting it active before rendering to each window. What possible solutions would exist to do that? Just running each run() method in a separate thread sounds like a kind of naive approach to me. (I admit I already tried it, with failure :D)
Simple mouse-ray picking in OpenGL I've been looking at tutorials and trying to figure out how to do basic ray picking. But I'm stuck at figuring out what space to do the distance calculations in. What space does glm::unProject() lead to exactly? This is what I'm doing: first I get the mouse unprojected, like so:

// mouse ray start
vec3 m_uproj = glm::unProject(
    vec3(mouse_xy.x * glutGet(GLUT_WINDOW_WIDTH), mouse_xy.y * glutGet(GLUT_WINDOW_HEIGHT), 0.0f),
    workshop.access_gui()->view_mat(),
    workshop.access_gui()->proj_mat(),
    glm::ivec4(0, 0, glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT)));
// end of ray
vec3 m_uproj2 = glm::unProject(
    vec3(mouse_xy.x * glutGet(GLUT_WINDOW_WIDTH), mouse_xy.y * glutGet(GLUT_WINDOW_HEIGHT), 1.0f),
    workshop.access_gui()->view_mat(),
    workshop.access_gui()->proj_mat(),
    ivec4(0, 0, glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT)));

Then I find its direction, mray, like so:

vec3 mouse_ray = normalize(m_uproj2 - m_uproj); // get mray direction

And I'm expecting to find the closest point to some object by using this calculation:

vec3 closest_point = mouse_ray * glm::dot(locations[i], mouse_ray);

But locations seems to be in the wrong space? Or am I thinking about this the wrong way? I've been looking around, but I can't find anywhere that explains just this part that I must be misunderstanding. The idea is to compare the distance between closest_point and locations[i], but the results are incorrect. I'm getting something like this, where it should be red only if the cursor is over the square. What space does glm::unProject() put my ray in anyway? And in what space should I put the objects that I want to pick/highlight?
using DirectX to generate a sprite sheet I am building a site in HTML5 for my client and it must run on the iPad/iPhone (i.e. Safari on iOS). They want a 3D effect where they have a simple, yet specific, product they want to show on the site in 3D, with user-generated data as the texture. Normally, I would use either Flash, Silverlight, or WebGL, but these are not supported on iOS. Also, CSS 3D transforms are close, but since I need a specific shape and size, creating hundreds of div elements and transforming them just won't work in the long run. My idea is to use server-side code to generate a sprite sheet that could be used on the site to fake the 3D-ness of the object. What I really lack is the knowledge around DirectX/OpenGL in order to automate this process. My question is, can I use DirectX/OpenGL from a command line (or programmatically) to pass in a model and textures to generate a PNG/JPG sprite sheet of a rotating object?
OpenGL texture2d image sampling issue. Strange artifacts in texture I have an issue when using textures in OpenGL: strange artifacts occur where geometry overlaps, but not always. Video Reference. I am using a GL_TEXTURE_2D with GL_ARB_image_load_store to make a custom depth-test shader that stores material data for opaque and transparent geometry. The video shows the artifacts occurring where the support structure for a table is occluded behind the top of the table but, strangely, not where the base of the table is occluded by the support.

#version 450 core

in VS_OUT {
    vec3 Position;
    vec3 Normal;
    vec2 TexCoords;
    mat3 TanBitanNorm;
} fs_in;

// Material data
uniform sampler2D uAlbedoMap;
uniform sampler2D uNormalMap;
uniform sampler2D uMetallicMap;

// Material info out
layout(rgba16f) coherent uniform image2D uAlbedoDepthOpaque;
layout(rgba16f) coherent uniform image2D uNormalMetallicOpaque;
layout(rgba16f) coherent uniform image2D uAlbedoDepthTransparent;
layout(rgba16f) coherent uniform image2D uNormalAlphaTransparent;

// Depth info in/out
layout(r8) uniform image2D uDepthBufferOpaque;
layout(r8) uniform image2D uDepthBufferTransparent;

void main() {
    vec3 n_tex = texture(uNormalMap, fs_in.TexCoords).xyz;
    n_tex = n_tex * 2.0f - 1.0f;
    ivec2 tx_loc = ivec2(gl_FragCoord.xy);

    const float opaque_depth = imageLoad(uDepthBufferOpaque, tx_loc).r;       // Stored depth of opaque
    const float trans_depth = imageLoad(uDepthBufferTransparent, tx_loc).r;   // Stored depth of transparent

    // Depth processing
    if (gl_FragCoord.z > opaque_depth) {
        bool tran = false;
        if (trans_depth > opaque_depth)
            tran = trans_depth > gl_FragCoord.z;
        else
            tran = true;

        // Transparent
        if (texture(uAlbedoMap, fs_in.TexCoords).a < 1.0f && tran) {
            imageStore(uDepthBufferTransparent, tx_loc, vec4(gl_FragCoord.z));
            imageStore(uAlbedoDepthTransparent, tx_loc,
                vec4(texture(uAlbedoMap, fs_in.TexCoords).rgb, gl_FragCoord.z));
            imageStore(uNormalAlphaTransparent, tx_loc,
                vec4(abs(length(n_tex) - 1.0f) > 0.1f ?
                        fs_in.Normal : normalize(fs_in.TanBitanNorm * n_tex),
                    texture(uAlbedoMap, fs_in.TexCoords).a));
        }
        // Opaque
        else {
            imageStore(uDepthBufferOpaque, tx_loc, vec4(gl_FragCoord.z));
            imageStore(uAlbedoDepthOpaque, tx_loc,
                vec4(texture(uAlbedoMap, fs_in.TexCoords).rgb, gl_FragCoord.z));
            imageStore(uNormalMetallicOpaque, tx_loc,
                vec4(abs(length(n_tex) - 1.0f) > 0.1f ?
                        fs_in.Normal : normalize(fs_in.TanBitanNorm * n_tex),
                    texture(uMetallicMap, fs_in.TexCoords).r));
        }
    }

    if (opaque_depth == 0.0f)
        imageStore(uDepthBufferOpaque, tx_loc, vec4(0.125f));
    else
        imageStore(uDepthBufferOpaque, tx_loc, vec4(0.125f + opaque_depth));
}

Rendering with overlapping geometry shows that artifacts still occur outside of reading from the texture. Also in the video, I move the camera back and forth (with orthographic projection) and the artifacts become brighter and darker. Rendering with overlapping geometry without depth processing shows that the brighter/darker values were from the depth test. Any ideas on why this occurs, and how can I fix it?
Texture Not Rendering in C++ SDL OpenGL (glut) I don't understand why my texture ("texture.bmp") is not showing on the screen. Please help me. The whole drawing function, where everything is being rendered:

void Main::display() {
    int startTime = GetTickCount();
    // BACKGROUND COLOR
    glClearColor(0.3f, 0.3f, 2.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // LOAD THE IDENTITY MATRIX FOR THE DRAWING
    glLoadIdentity();
    // SO EVERYTHING HAS THE CORRECT SIZE
    glTranslatef(0.0f, 0.0f, -5.0f);
    // CAMERA MOVEMENT
    glTranslatef(-cameraX, 0.0f, 0.0f);
    // THE SUN
    glBegin(GL_QUADS);
    glColor3f(1.0f, 1.0f, 0.0f);
    glVertex2f(-3.0f, 2.0f);   // The bottom left corner
    glVertex2f(-3.0f, 2.5f);   // The top left corner
    glVertex2f(-2.5f, 2.5f);   // The top right corner
    glVertex2f(-2.5f, 2.0f);   // The bottom right corner
    glEnd();
    // THE GROUND
    glBegin(GL_QUADS);
    glColor3f(0.5f, 2.0f, 0.5f);
    glVertex2f(-20.0f, -4.0f);
    glVertex2f(-20.0f, -1.0f);
    glVertex2f(130.0f, -1.0f);
    glVertex2f(130.0f, -4.0f);
    glColor3f(0.0f, 1.0f, 0.0f);
    for (float i = 0.0f; i < 100.0f; i += 5.0f) {
        glVertex2f(i, -1.0f);
        glVertex2f(i, ranFloat[static_cast<int>(i / 4.0f)]);
        glVertex2f(i + 1.0f, ranFloat[static_cast<int>(i / 4.0f)]);
        glVertex2f(i + 1.0f, -1.0f);
    }
    glEnd();
    // THE GRASS
    glBegin(GL_QUADS);
    glColor3f(0.0f, 0.5f, 0.0f);
    glVertex2f(-20.0f, -1.2f);
    glVertex2f(-20.0f, -1.0f);
    glVertex2f(130.0f, -1.0f);
    glVertex2f(130.0f, -1.2f);
    glEnd();
    // THE PLAYER
    glTranslatef(playerX, playerY, 0.0f);
    glBegin(GL_QUADS);
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex2f(-3.0f, -1.0f);
    glVertex2f(-3.0f, 0.0f);
    glVertex2f(-2.8f, 0.0f);
    glVertex2f(-2.8f, -1.0f);
    glEnd();
    // TEXTURE
    glBindTexture(GL_TEXTURE_2D, texture);
    glEnable(GL_TEXTURE_2D);
    int X = 0;
    int Y = 0;
    int Width = 1;
    int Height = 1;
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(X, Y, 0);
    glTexCoord2f(0, 1); glVertex3f(X + Width, Y, 0);
    glTexCoord2f(1, 1); glVertex3f(X + Width, Y + Height, 0);
    glTexCoord2f(1, 0); glVertex3f(X, Y + Height, 0);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    // CLOUDS
    glBegin(GL_QUADS);
    glColor3f(1.0f, 1.0f, 1.0f);
    for (float i = 0.0f; i < 100.0f; i += 5.0f) {
        glVertex2f(i, 3.0f);
        glVertex2f(i, 4.0f);
        glVertex2f(i + 4.0f, 3.0f);
        glVertex2f(i + 4.0f, 4.0f);
    }
    glEnd();
    if (Input::flying)
        Input::returnFly(0.007f);
    elapsedMS = GetTickCount() - startTime;  // Time since start of loop
    std::cout << "FPS " << elapsedMS << std::endl;
    // THE ACTUAL MOVEMENT
    playerY += playerFly * elapsedMS;
    playerX += cameraSpeed * 1 * elapsedMS;
    cameraX += cameraSpeed * elapsedMS;
    // SWAP THE BUFFERS FOR SMOOTH RENDERING
    glutSwapBuffers();
}

The rest of my code has nothing to do with the texture loading. The loading is not the problem, because I didn't get any errors.
How to properly delete unwanted models I am implementing bullets into my game, and each bullet has a lifetime. Whenever the bullet has existed for a set amount of time or collides with an object, the bullet should be deleted. I implemented the timer properly, and whenever the bullet reaches the end of its life, it gets deleted. The problem occurs whenever I shoot another bullet. This instantly causes a crash with the code: Exception thrown at 0x044765D7 (nvoglv32.dll) in gravityGame.exe: 0xC0000005: Access violation reading location 0x00000000. After some research, I've found that this usually means that I am trying to render something that is a nullptr. My thinking is that I am not deleting the previous bullet properly and this affects the new one. A single bullet object is created on startup, and every new bullet is copied using the following line of code:

std::shared_ptr<Bullet> bullet = std::make_shared<Bullet>(*std::dynamic_pointer_cast<Bullet>(modelsLoaded.at(i)));

The reason for the dynamic_pointer_cast is because it is in a vector of the base class that it inherits from. Whenever the bullet needs to get deleted, it is erased from the bullets vector using bullets.erase(bullets.begin() + index), and the object it is pointing to should go out of scope, since there is nothing else pointing to it. I feel like somehow having that object deleted has an adverse effect on the other bullets in the vector, but I do not know how. A new object is created and all the data from the original bullet gets copied over. How could deleting a copy of an object mess with the other copied objects? The functions to create and render the models are below.
// Generate Model Function
glGenBuffers(1, &VBO);
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &EBO);
glGenTextures(1, &texture);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, verticesSizeTexture * 8 * sizeof(float), verticesTexture, GL_STATIC_DRAW);
// position attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// color attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesSizeTexture * 4, indicesTexture, GL_STATIC_DRAW);
// texture
glBindTexture(GL_TEXTURE_2D, texture);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);

// Rendering Function
unsigned int transformLoc = glGetUniformLocation(shader->ID, "location");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(trans));
glBindTexture(GL_TEXTURE_2D, texture);
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, indicesSizeTexture, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);

// Destructor
glDeleteVertexArrays(1, &VAO);
glDeleteBuffers(1, &VBO);
glDeleteBuffers(1, &EBO);

Update: After removing the contents of the destructor, the problem goes away. However, I would like to have some sort of destructor to allow for the proper freeing of memory.
Libgdx, TiledMap, and tearing My TiledMap tears when the window is certain sizes, is resized, or the camera moves. The tileset is padded by 2px, and I have tried as high as 10px. Each tile is 70x70. Here are the pack.json settings:

"paddingX": 2, "paddingY": 2, "bleed": true, "edgePadding": true, "duplicatePadding": true, "maxWidth": 4096, "maxHeight": 4096, "filterMin": "Nearest", "filterMag": "Nearest", "ignoreBlankImages": true, "wrapX": "ClampToEdge", "wrapY": "ClampToEdge", "grid": true, "fast": false

Here is the code:

Parameters params = new Parameters();
params.textureMinFilter = TextureFilter.Nearest;
params.textureMagFilter = TextureFilter.Nearest;
TiledMap map = new TmxMapLoader().load(file.getAbsolutePath(), params);

So what am I doing wrong? If nothing, does that mean libgdx absolutely cannot handle tile maps of any sort?
OpenGL not rendering textures I'm using OpenGL 2.1 with SDL 2.0 and I'm trying to render a texture, using these steps. Load the image:

Image* image = new Image();
image->image = SDL_LoadBMP(path.c_str());
if (!image->image)
    throw std::runtime_error(SDL_GetError());
return image;

and then generate a texture for it:

Texture2D* Texture2D::fromImage(const Image* image) {
    GLint format;
    Texture2D* t2d = new Texture2D();
    glGenTextures(1, &t2d->tid);
    glBindTexture(GL_TEXTURE_2D, t2d->tid);
    t2d->size.setWidth(image->getWidth());
    t2d->size.setHeigth(image->getHeight());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    switch (image->getBPP()) {
    case 24: format = GL_RGB; break;
    case 32: format = GL_RGBA; break;
    default: throw std::runtime_error("Unsupported pixel format!"); break;
    }
    glTexImage2D(GL_TEXTURE_2D, 0, format, image->getWidth(), image->getHeight(), 0, format, GL_UNSIGNED_BYTE, image->getPixels());
    return t2d;
}

and then I try to render it:

void Painter::drawTexture(Texture2D* texture, const SizeF& size, const PointF& position) {
    camera.set(camera.getWidth(), camera.getHeight());
    texture->bind();  // only calls glBindTexture(GL_TEXTURE_2D, tid)
    glBegin(GL_QUADS);
    glColor4f(color.getR(), color.getG(), color.getB(), color.getA());
    glTexCoord2f(0.0f, 1.0f); glVertex2f(position.getX(), position.getY());
    glTexCoord2f(1.0f, 1.0f); glVertex2f(position.getX() + size.getWidth(), position.getY());
    glTexCoord2f(1.0f, 0.0f); glVertex2f(position.getX() + size.getWidth(), position.getY() + size.getHeight());
    glTexCoord2f(0.0f, 0.0f); glVertex2f(position.getX(), position.getY() + size.getHeight());
    glEnd();
}

but all I get is a box with the defined color. I don't understand why; I have followed these steps before to render a texture, but this time it is not working. I'm not using glEnable(GL_TEXTURE_2D) because I read that it is not necessary, but when I do call it I get error code 1282. I got no errors during texture creation, though. Can someone shed some light here? Thanks.
LWJGL OpenGL Texture shading I want to use LWJGL to create a shader that does nothing but change the color of a given texture. For example, I tell it to draw the letter A using a sprite sheet, then I can tell the shader to draw the letter in a certain color. How would you do something like this without needing to create differently colored letter sprite sheets? Task for the shader: simply change all pixels in the texture to a certain color. Input: color, texture. Output: it draws the newly colored texture onto the screen. How do I accomplish such a thing?
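The usual approach is a fragment shader that samples the glyph texture and combines it with a tint uniform. A minimal sketch for an older GLSL 1.20-style pipeline (the uniform and varying names are mine, not from any particular tutorial):

```glsl
#version 120

uniform sampler2D uTexture;  // the sprite-sheet page
uniform vec4 uTint;          // the color to draw the glyph in

varying vec2 vTexCoord;      // passed through from the vertex shader

void main() {
    vec4 texel = texture2D(uTexture, vTexCoord);
    // Replace the RGB entirely with the tint, but keep the texture's
    // alpha so antialiased glyph edges survive. With a white source
    // sprite, multiplying (texel * uTint) gives the same result.
    gl_FragColor = vec4(uTint.rgb, texel.a * uTint.a);
}
```

On the LWJGL side you would set `uTint` once per draw with a `glUniform4f` call; one white sprite sheet then serves every color.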
How should I organize my matrices in a 3D game engine? I'm working with a group of people from around the world to create a game engine (and hopefully a game with it) over the next few years. My first task is to write a camera class for the engine, used to add cameras to the scene, with position and follow points. The problem I have is with using matrices for transformations in the class: should I keep matrices separate to each class, such as having the model matrix in the model class and the camera matrix in the camera class, or have all matrices placed in one central class? I can see pros and cons for each method, but I wanted to hear some input from a more professional standpoint.
OpenGL RayCasting and Intersection with plane I have been trying for a couple of days to raycast. I have a Renderable/Texture/Primitive/Whatever placed in the WORLD at (0.0f, 0.0f, 0.0f), and when I click the mouse I want to know at what X,Y coordinates the primitive was "clicked". We can safely assume that the primitive will always be at Z = 0. Basically, I should do raycasting, but it does not work. I tried finding the 3D positions of the ends of the ray (near/far plane) and then computing the intersection, but it does not work. I tried getting Z from the DEPTH BUFFER, but I always get 0 and it still does not work. Does anyone have any fast, simple method for this?
Is OpenGL still supported in C? I'm reading the OpenGL Superbible, third edition (well, I plan to), and it's all in C. Since this book was written in 2004 and all the later editions are written in C++, I was wondering whether or not OpenGL is still supported in C, and whether the relevant required downloads will still be available. I want to do it this way because I'm more comfortable with C, and I'm still learning C++.
matrix 4x4 position data I understand that a 4x4 matrix holds rotation and position data. The rotation data is held in the 3x3 sub-matrix at the top left of the matrix. The position data is held in the last column of the matrix, e.g.:

glm::vec3 vParentPos(mParent[3][0], mParent[3][1], mParent[3][2]);

My question is: am I accessing the parent matrix correctly in the example above? I know that OpenGL uses a different matrix ordering than DirectX (row order instead of column order, or something), so should mParent be accessed as follows instead?

glm::vec3 vParentPos(mParent[0][3], mParent[1][3], mParent[2][3]);

Thanks!
Why am I experiencing weird texture warping? I'm using cubemaps to render a skybox in my game. Thinking this would be a simple task, I threw some stuff together using the tutorials I found online, particularly this one. Taking all of this together, I then adjusted the program to compensate for my particularly junky Linux laptop by converting the GLSL to version 120. However, this turned sour when I discovered that the texture I was loading, which is just a straight black line meant to test this very issue, is being horribly bent by my program. A short clip of my problem is shown on YouTube, here. In desperation, I asked this question on gamedev.net, without much success. My shader code looks like this.

Vertex Shader:

#version 120
attribute vec3 verPos;
uniform mat4 MVP;
uniform vec3 camPos;
varying vec3 texCoord;
void main() {
    vec4 vertexPosition = vec4(verPos, 1.0);
    texCoord = verPos;
    gl_Position = MVP * vertexPosition;
}

Fragment Shader:

#version 120
varying vec3 texCoord;
uniform samplerCube CubeMap;
void main() {
    gl_FragColor = textureCube(CubeMap, texCoord);
}

Rendering Code: To avoid being spammy, I'll just assume that I loaded the images correctly, and not show off that particular mess. I'm using IMAGE_Load from SDL2 to handle the texture loading, and I made the cube map clamp to edge and such.

void SkyBox::Draw() {
    if (!loadedCheck)
        return;
    glUseProgram(myProgID);
    glDisable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glm::mat4 perspect = glm::perspective(45.0f, float(640) / float(480), 0.1f, 3000.0f);
    vector3f playerPos = player.myNPC.myEnt.myObject.getPosition();
    float playerAngY = player.myNPC.myEnt.myObject.myAngle.y;
    camPos = glm::vec3(15.0f * float(cos(playerAngY * 1.62)), 0, 15.0f * float(sin(playerAngY * 1.62)));
    // Convert my vector3f class to glm::vec3
    glm::vec3 playerGLM = glm::vec3(playerPos.x, playerPos.y, playerPos.z);
    float playerAngX = player.myNPC.myEnt.myObject.myAngle.x;
    // camPos is the position of the camera relative to the player's position
    glm::mat4 view = glm::lookAt(camPos + playerGLM, playerGLM, glm::vec3(0, 1, 0));
    view = glm::mat4(glm::mat3(view));
    // To get rid of the translation matrix
    view[3][0] = 0;
    view[3][1] = 0;
    view[3][2] = 0;
    // comb is equivalent to an MVP matrix
    glm::mat4 comb = perspect * view;
    GLuint matrixID = glGetUniformLocation(myProgID, "MVP");
    glUniformMatrix4fv(matrixID, 1, GL_FALSE, glm::value_ptr(comb));
    GLuint camID = glGetUniformLocation(myProgID, "camPos");
    GLuint sampler = glGetUniformLocation(myProgID, "CubeMap");
    float camP[3] = { camPos.x, camPos.y, camPos.z };
    glUniform3fv(camID, 3, camP);
    glActiveTexture(GL_TEXTURE0 + 0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, myTexture);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glEnableVertexAttribArray(verAttrib);
    glVertexAttribPointer(verAttrib, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glDrawArrays(GL_TRIANGLES, 0, 36);
    glDepthFunc(GL_LESS);
    glEnable(GL_DEPTH_TEST);
}

All in all, I feel like this should be working. Why am I experiencing weird texture warping?
Moving a camera without rolling (tumble, not arcball rotation) I'm trying to move my camera around using mouse/keyboard. I'm looking for a specific camera behaviour, however, and I can't figure out how to get it right. What I've found is that to get a Maya-style tumble camera, I need to multiply my x-rotation in world space, then my y-rotation. Reversing the order, it looks like this:

view = cam_move        // wasd in cam space, this works correctly
     * yrot            // before: world to camera space
view = xrot * view     // after: model to world space

This means my x-rotation (world) doesn't interfere with my y-rotation (eye), which avoids unwanted roll induced by regular cam-space x rotation. My problem, however, is that I want the x-rotation to happen around a point (as if done in eye space) but without creating unwanted roll. I do not know how to get that result. According to my sources it should be as easy as translating inversely before x-rotating, then translating back after, but it's just not working. Perhaps someone knows how I can get my cam to spin (yaw) around a point without causing the unwanted roll? SOLUTION: Keep track of pitch and yaw (mouse_move.xy) separately, as well as the camera location, and make sure to multiply the yaw before the pitch. Pitching will create unwanted roll, but yawing won't create unwanted pitch, so it's a matter of which you do first here.

To move the camera in eye space, I do this:

// make a mat3 rotation matrix and multiply it with the current movement
if (keys['A'] || keys['D'] || keys['W'] || keys['S'])
    camlocation +=  // rotated then moved
        (glm::normalize(vec3(keys['A'] - keys['D'], 0, keys['W'] - keys['S'])) * move_speed)
        * mat3(glm::rotate(mat4(1.0f), pitch, vec3(1, 0, 0)))
        * mat3(glm::rotate(mat4(1.0f), yaw, vec3(0, 1, 0)));
// which will be a value to add in world space

Then under the camera update, where I build the view matrix, I do this:

view = glm::rotate(mat4(1.0f), pitch, vec3(1, 0, 0))  // pitch: radian float
     * glm::rotate(mat4(1.0f), yaw, vec3(0, 1, 0))    // yaw: also in radians
     * glm::translate(mat4(1.0f), camlocation);

Just remember that in OpenGL/GLM the matrix transformations are in reverse order, due to how OpenGL does column/row matrix computations. (Compared to DirectX, it saves a transpose operation in the shader, apparently.) Making sure to yaw before pitching, and both after translating (reversed on paper), avoids the unwanted roll, which yields the desired tumble/Maya effect!
Sorting a vector using array version of quick sort I'm coding a simple rendering engine using OpenGL and I wrote a class that manages a particle system. The class creates the particles and pushes them into a vector to be used by a renderer to draw them on screen. I need to order the particles from the farthest to the nearest to the camera (in order to solve problems with alpha blending and depth, having disabled depth testing for transparency), so I tried the following code:

void ParticleRenderer::render() {
    // ... prepare renderer ...
    // update particle state (position, etc.) and remove it if necessary;
    // update returns a bool indicating if the particle is still alive
    while (it != mParticles.end()) {
        if (!(*it)->update()) {
            delete *it;
            it = mParticles.erase(it);
            if (it == mParticles.end())
                break;
            else
                continue;
        }
        ++it;
    }
    if (mParticles.size() > 0)
        sortParticles(&mParticles[0], 0, mParticles.size() - 1);
    for (Particle* particle : mParticles) {
        // ... render particle ...
        std::cout << getCameraDistance(particle) << std::endl;
    }
    std::cout << " " << std::endl;
    mShader.unuse();
    glDisable(GL_BLEND);
}

void ParticleRenderer::sortParticles(Particle** particles, int low, int high) {
    if (low >= high)
        return;
    Particle* temp = particles[low];
    while (true) {
        while (compareParticles(temp, particles[high]) >= 0 && low < high)
            high--;
        if (low == high)
            break;
        else
            particles[low] = particles[high];
        while (compareParticles(temp, particles[low]) <= 0 && low < high)
            low++;
        if (low == high)
            break;
        else
            particles[high] = particles[low];
    }
    particles[high] = temp;
    int middle = high;
    sortParticles(particles, low, middle - 1);
    sortParticles(particles, middle + 1, high);
}

double ParticleRenderer::compareParticles(Particle* particle1, Particle* particle2) {
    return getCameraDistance(particle1) - getCameraDistance(particle2);
}

double ParticleRenderer::getCameraDistance(Particle* particle) {
    glm::vec3 distance = mCamera.getPosition() - particle->getPosition();
    return glm::length(distance);
}

But what I get on the console are all unsorted particles (the cout statement), and of course I have problems when the particles are rendered on screen. Am I missing something? I know I could use std::sort and the STL algorithms, but since I'm learning I would like to know what I am doing wrong.
Heightmap terrain picking I've implemented an OpenGL-based terrain using a tessellation shader to divide each 'terrain cell' into the desired tiles. The heightmap is uploaded to the GPU and applied in the shader. When it comes to picking, I don't really know what to do. Testing triangles for intersection with the pick ray is not possible because only the GPU 'knows' the mesh. Is render-to-texture with a unique color the way to go? Or holding an additional mesh in RAM, only for picking/collision?
C++ OpenGL loading models fast I'm looking for a solution that will allow me to load objects faster with OpenGL; currently, loading, parsing and uploading the data on my main thread takes a while... Would multi-threading be a reasonable solution to this, and if so, how would I go about it? Would I make a second thread with the same context and save the data once it has loaded, or would I create a second thread to load and parse the data, then pass it back to the first thread to save to the GPU? This is the limit of my knowledge of this subject at the minute; if anyone has any better solutions please drop me a comment.
How to draw the simplest grid map OpenGL 1.0 I want to draw a simple black & white grid map, like that. I have been searching for a way to generate a tile and a tile map, and I want to draw this map, and that's all. I mean that I want to draw it only once, so it should be easy. I use OpenGL 1.0. How can I draw it, and how should I draw it: should I draw an image and place it like a map, or should I draw a single tile and repeat it?
Difference and interaction between Viewport and Camera I'm very, very confused about some basic concepts in graphical game development. The interaction between the individual components (camera, viewport, window sizes, game-world sizes, ...) when rendering graphical things still confuses me. As far as I understand: a camera is "filming"/viewing something as long as it is between the near and the far clipping planes; the camera has a width and height which define the area of what will get "viewed"...? The viewport can be connected to a camera; the viewport is the area which will get shown within the window; the camera's view will get projected onto the viewport, so if the camera has a size of 1024x768 and the viewport is only 300x400, the 1024 pixels will get squeezed onto 300 "real" pixels. As you may have recognized, I'm very unsure about this. Maybe you can help me with that. I want to run the test app below in a way that the picture will not get stretched. Also, I'm wondering what "game coordinates" are in that context. For demo cases and learning I've just created this piece of source:

package com.mygdx.game;

import ...;

public class MyGdxGame extends ApplicationAdapter {
    public PerspectiveCamera cam;
    public Model model;
    public ModelBatch modelBatch;
    public ModelInstance instance;
    private FitViewport viewport;

    @Override
    public void create() {
        modelBatch = new ModelBatch();
        // creating a camera; gets width and height from the window so it will "fit" best in the current window
        cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        cam.position.set(10f, 10f, 10f);
        cam.lookAt(0, 0, 0);
        cam.near = 1f;
        cam.far = 300f;
        cam.update();
        // creating viewport; width 'n' height are not really necessary since they get new values within the resize() method, correct?
        viewport = new FitViewport(1920, 1080, cam);
        ModelBuilder modelBuilder = new ModelBuilder();
        model = modelBuilder.createBox(5f, 5f, 5f, new Material(ColorAttribute.createDiffuse(Color.GREEN)), Usage.Position | Usage.Normal);
        instance = new ModelInstance(model);
    }

    @Override
    public void render() {
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        // what is happening here?
        cam.update();
        modelBatch.begin(cam);
        modelBatch.render(instance);
        modelBatch.end();
    }

    @Override
    public void dispose() {
        model.dispose();
    }

    @Override
    public void resize(int w, int h) {
        // putting new values into the viewport and updating it will "resize" the camera ... ?
        float aspectRatio = (float) w / (float) h;
        // viewport.update((int) (h * aspectRatio), h);
        // viewport.update(1920, 1080);
        viewport.update(w, h);
    }
}
How to reduce image size without pixelation? I see lots of games with smooth-edged characters and high-res images; however, when I try to reduce images to, say, 64x64 for my character, I just get a pixelated mess. Even if I start with a 64x64 canvas I get pixelated edges. Should I be scaling with OpenGL? Or is there some technique, perhaps with Photoshop or Pixen, that I am unaware of?
GLSL shader without a vertex array OK, so I have an idea for a neat GPU-driven curve renderer, and I realised that the vertex shader can be hardwired to generate the points of a curve segment (to be rendered as a line strip) without sending any vertex positions: gl_Position could be set completely procedurally. That said, I'd still need to specify a "t" value per point via vertex attributes. Is it possible to specify attributes (i.e. via glVertexAttribPointer) without specifying vertices? Or does GL need "space" in the buffers for vertices even if they aren't initialized?
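If GLSL 1.30 or later can be targeted, even the "t" attribute is unnecessary: the built-in gl_VertexID gives each invocation its index, so t can be derived in the shader and glDrawArrays only needs a count. A sketch (the uniform names and the quadratic-Bezier choice are mine):

```glsl
#version 330 core

uniform int  uSegments;       // number of points in the line strip
uniform vec2 uP0, uP1, uP2;   // control points of a quadratic Bezier

void main() {
    // Derive t from the vertex index alone; no attributes needed.
    float t = float(gl_VertexID) / float(uSegments - 1);
    // de Casteljau evaluation of the quadratic curve
    vec2 a = mix(uP0, uP1, t);
    vec2 b = mix(uP1, uP2, t);
    vec2 p = mix(a, b, t);
    gl_Position = vec4(p, 0.0, 1.0);
}
```

Driven with `glDrawArrays(GL_LINE_STRIP, 0, uSegments)`. One caveat: a core-profile context still requires *some* VAO to be bound for the draw call, even though it references no buffers.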
How does GL_INT_2_10_10_10_REV work for color data? Can anybody tell me how exactly to use GL_INT_2_10_10_10_REV as the type parameter in glVertexAttribPointer()? I am trying to pass color values using this type. What is the significance of the "REV" suffix in this type? Does it require any special treatment in the shaders?
Detect whether randomly moving sprites are inside a drawn shape in Cocos2d I have searched a lot on the web and found some helpful links, but none solve my question. Link 1 Link 2 I have written some code to draw lines with the touch methods, but I still have questions. Here is what I need help understanding. 1. The line should be cleared after drawing completes (this might go in the ccTouchesEnded method / UIPanGesture state End); I am not sure how to draw custom shapes with UIPanGesture or the touch methods. 2. Is there any difference between capturing the whole game scene for collision, as in a scheduleUpdate method, and how do I find out if objects are colliding (they are randomly moving sprites and they have random shapes)? 3. How do I check whether a user-drawn shape is closed or open (a convex shape or not)? 4. How do I check whether the randomly moving sprites are inside or outside the drawn shape? Here I have some code, but I am not sure whether it's useful or not. This code is in the init method:

linePoints = [[NSMutableArray alloc] init];
// create a simple render texture node and clear it with the color white
rtx = [CCRenderTexture renderTextureWithWidth:winSize.width height:winSize.height];
rtx.position = CGPointMake(winSize.width / 2, winSize.height / 2);
[self addChild:rtx z:6 tag:101];
brush = [[CCSprite spriteWithFile:@"Line1.png"] retain];
brush.scale = 5.0f;

And the line is drawn in the touch-move method:

- (void)ccTouchesMoved:(NSSet *)touchess withEvent:(UIEvent *)event
{
    UITouch *touchMyMinge = [touchess anyObject];
    CGPoint start = [self convertTouchToNodeSpace:touchMyMinge];
    CGPoint end = [touchMyMinge previousLocationInView:touchMyMinge.view];
    end = [[CCDirector sharedDirector] convertToGL:end];
    end = [self convertToNodeSpace:end];
    NSValue *value = [NSValue valueWithCGPoint:start];
    [linePoints addObject:value];
    [rtx begin];
    float distance = ccpDistance(start, end);
    for (int i = 0; i < distance; i++)
    {
        float difx = end.x - start.x;
        float dify = end.y - start.y;
        float delta = (float)i / distance;
        [brush setPosition:ccp(start.x + (difx * delta), start.y + (dify * delta))];
        [brush visit];
    }
    [rtx end];
}

I have attached two screenshots for more clarification. Please help me with my technical query.
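For question 4, a standard CPU-side approach (independent of Cocos2d; a sketch with illustrative names) is a ray-casting point-in-polygon test against the points collected in linePoints:

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: cast a ray toward +x and count edge crossings.

    polygon is a list of (x, y) points, e.g. the touch positions
    collected while the user drags; odd crossing count = inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):                      # edge spans the ray's y
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
assert point_in_polygon(5, 5, square) is True
assert point_in_polygon(15, 5, square) is False
```

For question 3, one simple heuristic is to treat the stroke as closed when the distance between its first and last recorded point falls below some threshold.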
What's wrong with this camera implementation? I'm using WebGL and glMatrix, and I implemented a camera. When I move backward, no problem. But when I move to the side, and particularly forward, the camera becomes all glitchy. I implemented almost exactly the same camera before in C and it worked perfectly. I put the whole code on jsFiddle (glMatrix is inlined, because it's not in the library list and I didn't find a URL, sorry): https://jsfiddle.net/ydx0Lr1v/14 Click and move the mouse to move around. Also, the more I decrease the speed of the camera movement, the better it works; I put a high speed to dramatize the effect. I know I should make the speed of the camera depend on the time between frames, but I haven't implemented an FPS counter yet. Thanks! Edit: Hmm, that's strange. In jsFiddle the camera seems to work better when moving forward.
interpolating frames in a vertex shader My models are stored as a set of meshes, each with a vertex list and normal list per key frame, and indices for GL TRIANGLES which is shared for all frames. Each frame I lerp between two adjacent key frames to generate the vertices for that frame on the CPU and then draw it. How can I move this into a GLSL vertex shader? Can a shader interpolate between two sets of vertices, and how can I store those vertices on the GPU?
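A common way to do this (a sketch; the attribute and uniform names are mine) is to bind the two key frames as two separate attribute streams and blend them in the vertex shader:

```glsl
#version 330 core

// Two key frames bound as separate attribute streams
// (e.g. two VBOs, or two offsets into one VBO).
layout (location = 0) in vec3 positionA;
layout (location = 1) in vec3 normalA;
layout (location = 2) in vec3 positionB;
layout (location = 3) in vec3 normalB;

uniform float uBlend;   // 0.0 = frame A, 1.0 = frame B
uniform mat4  uMVP;

out vec3 vNormal;

void main()
{
    vec3 p  = mix(positionA, positionB, uBlend);
    vNormal = normalize(mix(normalA, normalB, uBlend));
    gl_Position = uMVP * vec4(p, 1.0);
}
```

On the storage question: all key frames can live in one static VBO uploaded once; each frame you re-point the attribute locations at the two relevant key frames via glVertexAttribPointer offsets and update only the blend uniform, so no vertex data moves over the bus per frame.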
LibGDX Max number of textures? I've been developing a game targeted at Android. I know not to think about optimization until the project is finished, but I have to wonder how many textures most phones can handle safely without melting or murdering the battery. I've seen that the max SIZE of a texture should be 1024 or 2048 pixels squared (I've been combining sprite sheets into 1024x1024 atlases), but I can't help but wonder if there is a limit to how many of these I can use at runtime. I understand it should depend on the GPU, but what spec determines it?
Directional shadow mapping view projection matrices I've got shadow mapping with directional lights working, but I believe I am constructing the view projection matrices wrong. Here's how I build them:

Mat4 viewMatrix = LookAt(lighting.mCameraPosition - (directionalLight.mLightDirection * Z_FAR / 2.0f), directionalLight.mLightDirection, Vec3(0.0f, 1.0f, 0.0f));
Mat4 lightVP = CreateOrthographicMatrix(-Z_FAR, Z_FAR, -Z_FAR, Z_FAR, Z_NEAR, Z_FAR) * viewMatrix;

The problem is the shadows become skewed and change depending on the camera position, which is bad. What is the proper way of doing this? EDIT 1) No, that is what I'm not sure about: currently, as you said, it is the camera position, and thus the shadows move around when I move the camera, which is bad. What should the actual position be, since directional lights have no position? 2) Also, using Z_FAR DOES make it pixelated, which is my second problem, but mustn't the boundaries be Z_FAR in all directions? Otherwise objects in the view range may not cast shadows.
Is there an efficient, readily available algorithm for converting a matrix of squares into a triangle strip? Imagine having a 4x4 square with 16 smaller squares inside of it, and associated data on what the squares should look like (i.e. opacity, colour, etc.). Is there an existing, efficient algorithm for converting this set of squares into an OpenGL compatible triangle strip?
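For a regular grid like this, one common construction (a sketch, not tied to any particular library) walks the vertex rows and stitches them with repeated "degenerate" indices so the whole grid renders as one strip:

```python
def grid_triangle_strip(cols, rows):
    """Index a (cols x rows) grid of vertices as one triangle strip.

    Rows are stitched with repeated (degenerate) indices so the whole
    grid draws in a single glDrawElements call.
    """
    idx = []
    for r in range(rows - 1):
        if r > 0:
            idx.append(r * cols)                   # degenerate link into new row
        for c in range(cols):
            idx.append(r * cols + c)               # top vertex of the column
            idx.append((r + 1) * cols + c)         # bottom vertex of the column
        if r < rows - 2:
            idx.append((r + 1) * cols + cols - 1)  # degenerate link out of the row
    return idx

# A 3x3 grid of vertices (2x2 cells); the 4x4-square case is 5x5 vertices.
strip = grid_triangle_strip(3, 3)
assert strip == [0, 3, 1, 4, 2, 5, 5, 3, 3, 6, 4, 7, 5, 8]
```

The degenerate triangles (zero area, since two of their indices coincide) are rejected by the rasterizer, so they cost index bandwidth but draw nothing. Per-square data such as opacity or colour goes into the vertex attributes at those indices.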
Rendering text with SDL2 and OpenGL I've been trying to render text in my OpenGL scene using SDL2. The tutorial I came across is this one: Rendering text. I followed the same code, and I get text rendering fine. However, the issue I'm having is that there is obviously a "conflict" between the OpenGL rendering and the SDL_Renderer used in the code. When I run my scene the text displays, but everything else keeps flickering. The following gif shows the issue. Any idea on how to overcome this while still just using SDL2 and OpenGL, without another library? Thanks
Playing movies with OpenGL in Java I am trying to play a movie file into an OpenGL texture in a Java application. I am using JOGL and have a basic OpenGL scene, but I have no idea how to play a movie into a texture. The only thing I could find was this http paulo.ragonha.me blog 2008 08 java movie playback jogl fobs4jmf.html It is quite old and uses JOGL 1.1 and Fobs4JMF, which is no longer maintained. I managed to get it to build in eclipse but it wasn't able to read my movie clip. I could probably convert the movie clip to an older codec to see if that works, but I would rather have a modern solution. I am a professional game developer, so the OpenGL part is no problem, but I am new to Java (coming from a C background). Is there any modern library that wraps this functionality in an easy to use package?
What advantages does multisampling have over supersampling? I never really fully understood this, or found an article that explained all the steps in a friendly way. I'll start with what I do know already (which I hope does not contain misconceptions). I'm pretty sure allocating a multisampled frame buffer requires as many times the memory (of a regular buffer) as the number of samples (N). This makes sense, because each pixel may be sampled up to N times. During rasterization, the GPU generates a fragment for the MS frame buffer by testing whether each sample is inside the geometry being drawn. This is what provides edge anti-aliasing. Each sample produces a fragment. I'm unsure about what occurs when all samples of a pixel are inside the geometry. How many fragments are generated? Is this configurable? What if I want to sample the "inside" pixels 4 times and the edge pixels 16 times? That would require a 16x MS frame buffer. Are there other differences? It seems like if the fragment shader is run once on each sample, then we are left with something not much different from basic supersampling, with the exception of jittered sample locations. Actually, I'm also a bit unsure about what a fragment really is. It seems like a fragment shader gets (or can get) executed more than once per pixel in a multisampled scene; however, this doesn't necessarily mean that a fragment is more related to the sample than to the pixel. Is a fragment best thought of as a sample, a pixel, or something else?
How to handle wildly varying rendering hardware getting a baseline I've recently started with mobile programming (cross platform, also with desktop) and am encountering wildly differing hardware performance, in particular with OpenGL and the GPU. I know I'll basically have to adjust my rendering code, but I'm uncertain of how to detect performance and what reasonable default settings are. I notice that certain shader functions are basically free on a desktop implementation but can be unusable on a mobile device. The problem is I have no way of knowing which features will cause which performance issues on all the devices. So my first issue is that even if I allow configurable options, I'm uncertain which options I have to make configurable. I'm also wondering whether one just writes one very configurable pipeline, or whether I should have two distinct options (high/low). I'm also unsure of where to set the default. If I set it to the poorest performer, the graphics will be so minimal that any user with a modern device would dismiss the game. If I set them even at some moderate point, the low end devices will basically become a slide show. I was thinking perhaps that I just run some benchmarks when the user first installs and guess what works, but I've not seen a game do this before.
OpenGL HUD is flickering when rotating the camera I draw game HUD after drawing 3D scene. When I move the camera quickly, HUD starts flickering. I noticed that it happens only if the distance between the camera and an object its looking at changes after camera's position transformation.It does not happen if there is no mesh in front of the camera. I use separate shader program for HUD rendering. Have tried to enable disable depth test, blending already. Using VBOs to draw stuff. Render code def render() glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glEnable(GL DEPTH TEST) glDepthFunc(GL LESS) glEnable(GL CULL FACE) Shader.setCurrentShader(Shader.SHADER VOXEL) Shader.currentShader().enable() set3DMatrices() updateShaderVOXEL() val world WorldRegistry.getWorldByID(0) world.renderWorld() Shader.currentShader().disable() Shader.setCurrentShader(Shader.SHADER SIMPLE) Shader.currentShader().enable() set3DMatrices() ManagerRegistry.renderManagerSceneBoundingBox.render() ManagerRegistry.renderManagerSceneRectangle.render() ManagerRegistry.renderManagerSceneLine.render() Shader.currentShader().disable() glDepthMask(false) glDisable(GL DEPTH TEST) glDisable(GL CULL FACE) Shader.setCurrentShader(Shader.SHADER HUD) Shader.currentShader().enable() set2DMatrices() GL13.glActiveTexture(GL13.GL TEXTURE0) Texture unit 0 GL11.glBindTexture(GL11.GL TEXTURE 2D, Texture.getTexture("screen cursor default.png")) HUDRenderRegistry.HUDRender.render() Shader.currentShader().disable() glDepthMask(true) glfwSwapBuffers(window) EDIT I made a video(captured on the phone, so the quality is bad, sorry) and the rectangle in it is actually fully red, not yellow. https youtu.be HhHEfrgsfiU
What is a fast way to darken the vertices I'm rendering? To make a lighting system for a voxel game, I need to specify a darkness value per vertex. I'm using GL_COLOR_MATERIAL and specifying a color per vertex, like this:

glEnable(GL_COLOR_MATERIAL);
glBegin(GL_QUADS);
glColor3f(0.6f, 0.6f, 0.6f); glTexCoord2f(...); glVertex3f(...);
glColor3f(0.3f, 0.3f, 0.3f); glTexCoord2f(...); glVertex3f(...);
glColor3f(0.7f, 0.7f, 0.7f); glTexCoord2f(...); glVertex3f(...);
glColor3f(0.9f, 0.9f, 0.9f); glTexCoord2f(...); glVertex3f(...);
glEnd();

This is working, but with many quads it is very slow. I'm using display lists too. Any good ideas on how to make vertices darker?
Generating geometry when using VBOs Currently I am working on a project in which I generate geometry based on the player's movement: a glorified, very long trail composed of quads. I am doing this by storing the vertices in a std::vector, removing the oldest vertices once enough exist, and then calling glDrawArrays. I am interested in switching to a shader based model; in the examples I usually see, the VBO is generated at start-up and then that's basically it. What is the best route to go about creating geometry in real time using a shader/VBO approach?
How to draw a mini map (OpenGL / OpenGL ES)? I'm trying to draw a mini map. I succeeded in putting the current screen into a smaller screen (mini map) via an FBO, but I do not know how to make the mini screen brighter on hover. You can imagine that the real screen is very large, and you have a mini screen at the bottom left (like StarCraft), and when you hover over it, it's brighter than the rest. I'm trying to use multiple attachments in the FBO but it doesn't work. In fact, I don't know how multiple attachments work or what they are used for. My way: draw the map with attachment 0 first, draw a hovering rectangle with attachment 1, then bind and draw all of the attachment textures in sequence (NOT WORKING). I mean, the first draws a mini "large screen", the second draws a brighter rectangle on hover. Can anyone show me how to do this or give me any suggestion? Thank you very much!
With what projection matrix should I render a portal to a texture? I'm using OpenGL. I have problem with my engine's portal implementation. To create the first portal I do create a virtual camera with the position of the second portal and the correct orientation render whole scene from virtual camera to the texture using FBO render the first portal using the texture created before. The problem The virtual camera is rendering to the texture with the main projection view, so when I use this texture to render the first portal, everything is rescaled. The virtual camera should only render some part of the view to prevent scaling. The problem could perhaps be solved by changing the virtual camera's projection matrix, but I don't know how to calculate such a matrix. Here's how it looks This picture shows one portal (the texture is a little bit deformed on bottom and top, but that's not important now) and some other objects. The second portal is behind the camera, and the camera is in the middle between the two portals. We can clearly see that the portal texture is rescaled down to the portal size. As I said before, I think the virtual camera should have another projection matrix, but how do I calculate it? Or is there a better way?
How to implement translation, scale, rotation gizmos for manipulating 3D object's transforms? I am in the process of developing a basic 3D editor. It uses OpenGL for rendering a 3D world. Right now my scene is just a few boxes of different sizes and I am at the stage where I want to be able to select each box and then move scale rotate it to achieve any transform I want. How can I solve the problem of implementing both the rendering of these tool's gizmos(or handles, or how people usually call them), and also picking them on each axis to perform the change in the transform with my mouse? For clarity My research so far suggested the cleanest approach is to have an axis aligned bounding box per arrow in the gizmo and another one per square (the ones that move the object in a plane rather than a single axis) and then cast a ray from the mouse position and see what it collides with. But this is still a bit too abstract for me, I would appreciate further guidance in how this algorithm would go (pseudocode is more than enough)
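The ray-versus-AABB part of that approach is usually done with the "slab" method. A minimal sketch (plain Python, illustrative names):

```python
def ray_aabb(origin, direction, box_min, box_max):
    """Slab-method ray/AABB test; returns the hit distance or None.

    Intersect the ray with each axis-aligned pair of planes (a 'slab')
    and keep the overlapping parameter interval; empty interval = miss.
    """
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:            # parallel to the slab and outside it
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far or t_far < 0:  # interval empty or box behind ray
                return None
    return t_near

# Ray along +z from the origin hits a unit box centered at z = 5.
assert ray_aabb((0, 0, 0), (0, 0, 1), (-0.5, -0.5, 4.5), (0.5, 0.5, 5.5)) == 4.5
```

The whole picking pass then goes: unproject the mouse position into a world-space ray, transform that ray into the gizmo's local space, run this test against each handle's box (three arrows, three plane quads), and keep the closest hit as the active handle.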
GLSL Sphere from Vertex I am working on a particle simulation where we have a lot of spheres which can have different radii. Using this tutorial http mmmovania.blogspot.de 2011 01 point sprites as spheres in opengl33.html (see also code below) I was able to create a sphere from a point, but they all have the size from glPointSize(). Is it possible to extend this with a radius? version 330 out vec4 vFragColor uniform vec3 Color uniform vec3 lightDir void main(void) calculate normal from texture coordinates vec3 N N.xy gl PointCoord 2.0 vec2(1.0) float mag dot(N.xy, N.xy) if (mag gt 1.0) discard kill pixels outside circle N.z sqrt(1.0 mag) calculate lighting float diffuse max(0.0, dot(lightDir, N)) vFragColor vec4(Color,1) diffuse Maybe of interest I am using version 120 for my shaders. Or, is there a better way to do this? So, combining the two answers I now have void Draw setGeometry(float geometry, float velocity, float radius, GLuint size) this gt size size glGenBuffers(1, amp vertexData) glBindBuffer(GL ARRAY BUFFER, vertexData) glBufferData(GL ARRAY BUFFER, size 3 sizeof(float), geometry, GL STATIC DRAW) glBindBuffer(GL ARRAY BUFFER, 0) glGenBuffers(1, amp velocityData) glBindBuffer(GL ARRAY BUFFER, velocityData) glBufferData(GL ARRAY BUFFER, size 3 sizeof(float), velocity, GL STATIC DRAW) glBindBuffer(GL ARRAY BUFFER, 0) glGenBuffers(1, amp radiusData) glBindBuffer(GL ARRAY BUFFER, radiusData) glBufferData(GL ARRAY BUFFER, size sizeof(float), radius, GL STATIC DRAW) glBindBuffer(GL ARRAY BUFFER, 0) And the drawing method void Draw paint(GLenum mode) glBindBuffer(GL ARRAY BUFFER, vertexData) glEnableVertexAttribArray(0) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 0, 0) glEnableVertexAttribArray(1) glVertexAttribPointer(1, 3, GL FLOAT, GL FALSE, 0, 0) glEnableVertexAttribArray(2) glVertexAttribPointer(2, 1, GL FLOAT, GL FALSE, 0, 0) glDrawArrays(mode, 0, size 3 sizeof(float)) glDisableVertexAttribArray(0) And the Vertex Shader version 130 attribute vec3 
position attribute vec3 velocity attribute float size uniform mat4 MVP uniform mat4 MV void main() gl PointSize size gl Position MVP vec4(position, 1.0) The problem is that when gl PointSize 15 is set, they are drawn well. When I make it equal with size, then it looks like this http picpaste.com Big particles daIBnJCG.png Works better now. Particles still getting bigger as they move along.
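One way to make the point size perspective-correct (a sketch; the uniform names are mine, and it assumes a standard right-handed view where eye-space z is negative in front of the camera) is to convert the world-space radius to pixels using the projection's vertical scale and the eye-space depth:

```glsl
#version 130

in vec3  position;
in float radius;              // world-space radius per particle

uniform mat4  MVP;
uniform mat4  MV;
uniform float viewportHeight; // in pixels
uniform float projScaleY;     // Projection[1][1] = 1 / tan(fovy / 2)

void main()
{
    vec4 eyePos = MV * vec4(position, 1.0);
    gl_Position = MVP * vec4(position, 1.0);

    // World radius -> pixel diameter, shrinking with eye-space depth.
    gl_PointSize = radius * projScaleY * viewportHeight / -eyePos.z;
}
```

Dividing by the eye-space depth is what stops the sprites from growing as particles move around; on desktop GL you also need glEnable(GL_PROGRAM_POINT_SIZE) for a shader-written gl_PointSize to take effect.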
Texture being rendered to main frame buffer? I'm using Ogre 1.10.12 (openglES2 as render system) to create a manual texture like this rtt texture Ogre TextureManager getSingleton().createManual("RttTex", Ogre ResourceGroupManager DEFAULT RESOURCE GROUP NAME, Ogre TEX TYPE 2D, m size.width(), m size.height(), 0, Ogre PF A1R5G5B5, Ogre TU RENDERTARGET) m renderTexture static cast lt Ogre GLES2FBORenderTexture gt (rtt texture gt getBuffer() gt getRenderTarget()) m renderTexture gt addViewport(m camera) m renderTexture gt getViewport(0) gt setClearEveryFrame(true) m renderTexture gt getViewport(0) gt setBackgroundColour(Ogre ColourValue Red) m renderTexture gt getViewport(0) gt setOverlaysEnabled(false) then, I bind the texture to the FBO and retrieve the FBO's ID like Ogre GLES2FrameBufferObject ogreFbo 0 m renderTexture gt getCustomAttribute("FBO", amp ogreFbo) Ogre GLES2FBOManager manager ogreFbo gt getManager() manager gt bind(m renderTexture) GLint id glGetIntegerv(GL FRAMEBUFFER BINDING, amp id) My concern is that id is 0 so I cannot render this texture outside my display, it's getting visible which I don't want it to be. Shouldn't be ogre creating an unused frame buffer object when creating a manual texture with the TU RENDERTARGET parameter?
Why would I want to set a minimal OpenGL version before creating the context? Typical GLFW applications have these lines after glfwInit() glfwWindowHint(GLFW CONTEXT VERSION MAJOR, 3) glfwWindowHint(GLFW CONTEXT VERSION MINOR, 3) glfwWindowHint(GLFW OPENGL PROFILE, GLFW OPENGL CORE PROFILE) glfwWindowHint(GLFW OPENGL FORWARD COMPAT, GL TRUE) And similarly glfwMakeContextCurrent(window) I get that glfwMakeContextCurrent is needed before sending calls to the OpenGL API, but the doc says You can require a minimum OpenGL version by setting the GLFW CONTEXT VERSION MAJOR and GLFW CONTEXT VERSION MINOR hints before creation But why would I want that? Why would I cut options for OpenGL versions? I did not understand that, if I am interpreting it correctly in the first place
OpenGL Applications Bring Computer to a Halt Whenever I run any application that uses the OpenGL interface, my entire computer comes to a halt, but it doesn't do this with DirectX applications. I run both Linux (Ubuntu 15.10) and Windows 10, so this isn't caused by the operating system. I'm running the latest drivers from NVidia and both OSes are completely up to date. This is happening on a Dell Precision M6300 laptop (Core 2 Duo 2.5 GHz, NVidia Quadro FX 1600M, 4 GB RAM), and although it's a bit old it should be completely capable of rendering a blank OpenGL window using GLFW. However, it slows down my entire computer (every application starts freezing to the point of becoming unusable until the application is closed). This happens in games like Left4Dead, Half-Life 2, etc., but also in my own OpenGL programs. The same programs and games do not have the same effect on my desktop (although it has much better hardware, a blank OpenGL window shouldn't matter). Any help would be greatly appreciated, thank you. Also my apologies if I left out any vital information or made a confusing question; just ask me to clarify or add something and I shall. Added the code for the blank OpenGL window in question:

#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <glm/glm.hpp>

using namespace glm;

int main(int argc, char *argv[])
{
    if (!glfwInit())
    {
        fprintf(stderr, "Failed to initialize GLFW\n");
        return 1;
    }
    glfwWindowHint(GLFW_SAMPLES, 4);               // 4x AA
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); // GL 3.3
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow *window;
    window = glfwCreateWindow(1024, 768, "OpenGL Tutorial", NULL, NULL);
    if (window == NULL)
    {
        fprintf(stderr, "Failed to create OpenGL Window.");
        return 1;
    }
    glfwMakeContextCurrent(window);
    glewExperimental = true;
    if (glewInit() != GLEW_OK)
    {
        fprintf(stderr, "Failed to initiate the glew context!");
        return 1;
    }
    glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
    do
    {
        glfwSwapBuffers(window);
        glfwPollEvents();
    } while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
             glfwWindowShouldClose(window) == 0);
}
glTranslate, how exactly does it work? I have some trouble understanding how does glTranslate work. At first I thought it would just simply add values to axis to do the transformation. However then I have created two objects that would load bitmaps, one has matrix set to GL TEXTURE public class Background float vertices new float 0f, 1f, 0.0f, 4f, 1f, 0.0f, 0f, 1f, 0.0f, 4f, 1f, 0.0f .... private float backgroundScrolled 0 public void scrollBackground(GL10 gl) gl.glLoadIdentity() gl.glMatrixMode(GL10.GL MODELVIEW) gl.glTranslatef(0f, 0f, 0f) gl.glPushMatrix() gl.glLoadIdentity() gl.glMatrixMode(GL10.GL TEXTURE) gl.glTranslatef(backgroundScrolled, 0.0f, 0.0f) gl.glPushMatrix() this.draw(gl) gl.glPopMatrix() backgroundScrolled 0.01f gl.glLoadIdentity() and another to GL MODELVIEW public class Box float vertices new float 0.5f, 0f, 0.0f, 1f, 0f, 0.0f, 0.5f, 0.5f, 0.0f, 1f, 0.5f, 0.0f .... private float boxScrolled 0 public void scrollBackground(GL10 gl) gl.glMatrixMode(GL10.GL MODELVIEW) gl.glLoadIdentity() gl.glTranslatef(0f, 0f, 0f) gl.glPushMatrix() gl.glMatrixMode(GL10.GL MODELVIEW) gl.glLoadIdentity() gl.glTranslatef(boxScrolled, 0.0f, 0.0f) gl.glPushMatrix() this.draw(gl) gl.glPopMatrix() boxScrolled 0.01f gl.glLoadIdentity() Now they are both drawn in Renderer.OnDraw. However background moves exactly 5 times faster. If I multiply boxScrolled by 5 they will be in sinc and will move together. If I modify backgrounds vertices to be float vertices new float 1f, 1f, 0.0f, 0f, 1f, 0.0f, 1f, 1f, 0.0f, 0f, 1f, 0.0f It will also be in sinc with the box. So, what is going under glTranslate?
I can't seem to figure out what's causing this bug with the "layout" keyword in GLSL I have a GL shader file whose first few lines currently look like this:

#version 120
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 vertexUV;

When I try to compile this shader when running the program, it returns this error: 0:2(1): error: syntax error, unexpected NEW_IDENTIFIER. From what I know, this means that the word layout is giving it trouble. What am I doing wrong? Do I need to provide more information?
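For what it's worth, layout (location = ...) on vertex inputs is only valid from GLSL 3.30 (or with the ARB_explicit_attrib_location extension), so under #version 120 the compiler sees layout as an unknown identifier, which matches the error text. A sketch of the usual fix:

```glsl
// With GLSL 3.30+ the 'layout' qualifier is valid
// and 'in' replaces the old 'attribute' keyword.
#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 vertexUV;
```

If you have to stay on #version 120, the alternative is to drop the layout qualifiers, declare the inputs with attribute, and assign the locations from application code with glBindAttribLocation(program, 0, "position") before linking (or query them afterwards with glGetAttribLocation).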
Why should I set glClearColor and setProjectionMatrix in the render method every frame in LibGDX?

import com.badlogic.gdx.Game;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class MyGame extends Game {
    public OrthographicCamera camera;
    public SpriteBatch batch;
    ...
    @Override
    public void create() {
        ...
        batch = new SpriteBatch();
        Gdx.gl.glClearColor(0, 0, 0, 1);
        batch.setProjectionMatrix(camera.combined);
    }

    @Override
    public void render(float delta) {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        ...
    }
}

In the LibGDX docs, Gdx.gl.glClearColor(0, 0, 0, 1) and batch.setProjectionMatrix(camera.combined) are called in the render() method every frame. I call them once in the create() method, and it runs perfectly. So why should I use these calls in the render() method? Isn't it a performance loss?
OBJ file, face materials and drawing them with OpenGL I'm implementing a model class which loads OBJ and MTL files, and ran into an issue or question with face materials. Consider the following example It's a cube with 5 sides Gray and 1 side Green. mtllib Materials.mtl o Cube v 1.000000 1.000000 1.000000 v 1.000000 1.000000 1.000000 v 1.000000 1.000000 1.000000 v 1.000000 1.000000 1.000000 v 1.000000 1.000000 1.000000 v 1.000000 1.000000 1.000000 v 1.000000 1.000000 1.000000 v 1.000000 1.000000 1.000000 usemtl Green f 7 8 4 3 usemtl Gray f 5 8 2 1 f 6 7 3 2 f 8 5 1 4 f 1 2 3 4 f 8 7 6 5 The OBJ file obviously shares same vertices for adjacent faces even if the faces get different materials. When drawing such a cube with OpenGL however, I would need to duplicate the vertices of the adjacent faces with different materials, since every vertex specifies the color it has and I have two different colors for the same vertices here, depending on the face and the different material it has. If I wouldn't do that, the adjacent faces would smooth over to the different materials. First Did I get this correctly or am I missing an OpenGL "feature" here which lets me "specify" the color of a whole triangle face rather than a single vertex? Second If not, is there an easier way to keep track of the faces materials and duplicating the adjacent vertices if they get different materials? What would be the easiest way to implement different materials on one object?
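To the first question: fixed-function colors and materials are applied per vertex, so adjacent faces with different materials do need their shared vertices duplicated (with shaders there is also the flat interpolation qualifier, which applies the provoking vertex's value to the whole triangle, but duplication is the common route). For the bookkeeping, one simple scheme (a deliberately minimal Python sketch) groups faces under the most recent usemtl and builds one draw batch per material, duplicating shared corners as each group's buffer is built:

```python
def group_faces_by_material(obj_lines):
    """Group OBJ 'f' lines under the most recent 'usemtl'.

    Returns {material: [[vertex indices], ...]}; each group becomes one
    draw call, so corner vertices shared between materials are simply
    duplicated into each material's vertex buffer when buffers are built.
    """
    groups, current = {}, None
    for line in obj_lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "usemtl":
            current = parts[1]
        elif parts[0] == "f":
            # keep only the position index of each v/vt/vn triple
            indices = [int(p.split("/")[0]) for p in parts[1:]]
            groups.setdefault(current, []).append(indices)
    return groups

cube = ["usemtl Green", "f 7 8 4 3", "usemtl Gray", "f 5 8 2 1", "f 6 7 3 2"]
faces = group_faces_by_material(cube)
assert faces["Green"] == [[7, 8, 4, 3]]
assert len(faces["Gray"]) == 2
```

Since each group owns its own vertices, the hard edge between the green and gray faces falls out naturally; no smoothing happens across the material boundary.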
Adjusting the view matrix when the glViewport resizes I have an issue when my window resizes. I have a simple MVP shader like this: gl_Position = projection * view * model * vec4(in_Position, 1.0); to render a map in my game scene. The map is extremely simple: it just provides a lane's vertices, and all I need to do is draw it inside the window. So I set up my MVP matrices individually. The view and model matrices remain the same, because all I want to do is draw a static map; the perspective matrix is the only one that needs to change, since I want a zoom feature where I can zoom in/out by changing fovy. The perspective matrix is updated by calling glm::perspective(glm::radians(fovy), (float)width / height, near, far), and my view matrix is calculated by glm::lookAt(camera_pos, camera_pos + camera_front, camera_up). However, when I resize the viewport, I find everything changes. I mean, by default the resolution is going to drop when my window gets larger if I don't do anything (like updating glViewport()). But the behavior I want is "I can see more of the map when I make my window larger". Therefore I believe I should keep the viewport matched to the window (glViewport(0, 0, screen_width, screen_height)), update the perspective matrix with the new aspect ratio when the new window width and height come in, and then, as a final step, update the camera's position to update the view matrix. However, I've been stuck here for a while because I don't know how to adjust it. I believe this is a common issue, but I cannot even find a source talking about it. Am I thinking about it the wrong way? Do I need to change the camera's position to update my view matrix to achieve the resize behavior? Platform: Ubuntu 16.04, OpenGL 3.3, C++14
Making a camera in a 2D game (glOrtho) I'm trying to make a camera that follows my character, and it seems I've managed. However, I don't know how to stop the camera from following me when my character reaches the boundaries of the window (it's ugly to see black space beyond my tile map). So, this is my code:

public class Camera {
    public void update(Vector2f spritePos) {
        Vector2f pos = new Vector2f(spritePos.getX() - 368, spritePos.getY() - 268);
        if ((pos.getX() + 368 > 368) && (pos.getY() + 268 > 268)) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(pos.getX(), pos.getX() + Display.getDisplayMode().getWidth(),
                    pos.getY() + Display.getDisplayMode().getHeight(), pos.getY(), 1, -1);
            glMatrixMode(GL_MODELVIEW);
        }
    }
}

I subtract 368 from the X position because I want to place my character at the center of my camera; same with 268 for Y. This is a bad way to achieve it, because my camera "jumps" roughly to the position of my character. Thank you very much.
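A more robust pattern than gating the projection update with an if is to center on the player and clamp each camera axis independently every frame; that also removes the "jump". A sketch of the math (plain Python; the parameter names are mine):

```python
def clamp_camera(target_x, target_y, half_w, half_h, map_w, map_h):
    """Center the camera on the player, clamped to the map bounds.

    half_w/half_h are half the viewport size (e.g. 368 and 268 in the
    question); the return value is the glOrtho left/top corner.
    """
    cam_x = min(max(target_x - half_w, 0), map_w - 2 * half_w)
    cam_y = min(max(target_y - half_h, 0), map_h - 2 * half_h)
    return cam_x, cam_y

# 2000x1000 map, 736x536 viewport: player near the left edge, so the
# camera stops at x = 0 instead of showing space beyond the map.
assert clamp_camera(100, 500, 368, 268, 2000, 1000) == (0, 232)
```

The same two clamped values feed straight into glOrtho(cam_x, cam_x + viewport_w, cam_y + viewport_h, cam_y, 1, -1), and because clamping happens per axis, the camera glides along an edge instead of freezing entirely.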
OpenGL: create tunnel triangles and texture coords on a 3D path I have a path, let's say a Bezier curve, or a circle. I'd like to make a continuous 3D cylinder (a tunnel) along it (R = 0.5f). How can I calculate the triangle (GL_TRIANGLE_STRIP) coordinates and the texture coords correctly?
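The usual recipe: at each sample point of the path, build an orthonormal frame around the tangent, place a ring of vertices on the circle of radius R in that frame, then stitch consecutive rings into a triangle strip. Texture coordinates fall out naturally as (j / segments) around the ring and accumulated arc length along the path. A sketch of the ring construction (plain Python, illustrative names):

```python
import math

def ring(center, tangent, radius, segments):
    """Vertices of one circular cross-section perpendicular to 'tangent'.

    Builds an orthonormal frame (side, binormal) around the unit tangent;
    consecutive rings along the path get stitched into a triangle strip.
    """
    tx, ty, tz = tangent
    # Pick a helper axis that is not parallel to the tangent.
    up = (0.0, 0.0, 1.0) if abs(tz) < 0.9 else (1.0, 0.0, 0.0)
    # side = normalize(cross(up, tangent))
    sx, sy, sz = (up[1]*tz - up[2]*ty, up[2]*tx - up[0]*tz, up[0]*ty - up[1]*tx)
    n = math.sqrt(sx*sx + sy*sy + sz*sz)
    sx, sy, sz = sx/n, sy/n, sz/n
    # binormal = cross(tangent, side)
    bx, by, bz = (ty*sz - tz*sy, tz*sx - tx*sz, tx*sy - ty*sx)
    pts = []
    for j in range(segments):
        a = 2.0 * math.pi * j / segments
        c, s = math.cos(a), math.sin(a)
        pts.append((center[0] + radius * (c*sx + s*bx),
                    center[1] + radius * (c*sy + s*by),
                    center[2] + radius * (c*sz + s*bz)))
    return pts

r = ring((0, 0, 0), (0, 0, 1), 0.5, 4)
assert all(abs(x*x + y*y - 0.25) < 1e-9 and abs(z) < 1e-9 for x, y, z in r)
```

One caveat: picking a fixed helper "up" per ring can make the frame twist or flip on curvy paths; parallel-transporting the frame from one ring to the next (rotating the previous frame by the angle between consecutive tangents) avoids that.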
OpenGL ES 2.0: Controlling Transparency in a Fragment Shader The following is the simple OpenGL ES 2.0 GLSL fragment shader I use to place textures on polygons to render 2D sprites:

varying mediump vec2 TextureCoordOut;
uniform sampler2D Sampler;
void main()
{
    gl_FragColor = texture2D(Sampler, TextureCoordOut);
    // gl_FragColor = vec4(texture2D(Sampler, TextureCoordOut).xyz, TextureCoordOut.w * 0.5);
}

The fragment shader places texels with alpha information taken from the source 2D texture (a .png image). Apart from that alpha information, I need to control the overall polygon sprite transparency to achieve fade in/fade out effects. Could you show me, please, how to modify the above shader to control the overall transparency, besides the alpha information? Note: the commented out line is from my attempts to achieve the transparency. I wish to combine the alpha information with the overall polygon sprite transparency. Thanks.
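One way to do it (a sketch; the Opacity uniform name is mine) is to keep the texture's own alpha and scale it by a per-sprite uniform:

```glsl
varying mediump vec2 TextureCoordOut;
uniform sampler2D Sampler;

// 1.0 = fully visible, 0.0 = invisible; animated from the CPU
// via glUniform1f to fade the whole sprite in or out.
uniform lowp float Opacity;

void main()
{
    lowp vec4 texel = texture2D(Sampler, TextureCoordOut);
    // Combine the texture's per-pixel alpha with the sprite opacity.
    gl_FragColor = vec4(texel.rgb, texel.a * Opacity);
}
```

This assumes blending is enabled with the usual glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); if your atlas uses premultiplied alpha, multiply texel.rgb by Opacity as well.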
Are interleaved vertex data formats better than non interleaved formats? I have been reading up on data formatting for 3D objects so that I can render my meshes as fast as possible in OpenGL. I am quite new to OpenGL so bear with me. The format for interleaving your meshes goes something like position, normal, texture1, texture2 this I understand Most formats for 3D meshes, however, don't use this structure. From my understanding the interleaved format is fast for execution but isn't necessarily the best in terms of size, since you never really have as many unique normals as you do vertices, since (if you are using hard shading) all normals for a face would be the same. A hard shaded basic cube for example has 8 unique vertices and 6 unique normals. So my question is, is it worth it to set up an interleaved format like this despite the fact that if you had 3 separate buffers you would use MUCH less data? Also (remember I'm new) is it somehow possible to do pack your vertex information like all positions, all normals, all tex1, all tex2 so all of one type in a sequence and have different sets of indices for each type? Or is that just dumb?
Lightmap not moving properly with camera movement I ve implemented a 2d lighting system (also with support of 2d shadows). Everything was fine until today when I realised that its not working when moving the camera, as it looks like the lightmap has still some offset than the camera position (for example, if camera.x is 100, the lightmap is x 200) Little video showing the problem Video I can post any piece of code, for now Im sending you my light shader(fragment) version 440 in vec4 fragmentColor in vec2 TexCoords out vec4 color uniform float ambientStrength uniform vec2 resolution uniform sampler2D textureSampler uniform sampler2D lightMapTexture uniform bool textureON void main() Calc the ligthMap vec2 lightCoord (gl FragCoord.xy resolution) vec4 lightMap texture(lightMapTexture,lightCoord) if (textureON) vec4 textureColor texture(textureSampler, TexCoords) vec4 finalColor lightMap textureColor fragmentColor color vec4(finalColor.rgb,textureColor.a fragmentColor.a) else color fragmentColor Vertex version 440 in vec3 vertexPosition in vec4 vertexColor in vec2 vertexUV out vec4 fragmentColor out vec2 TexCoords uniform mat4 Projection uniform mat4 View uniform mat4 Model void main() gl Position Projection View Model vec4(vertexPosition.xy,0,1) fragmentColor vertexColor TexCoords vec2(vertexUV.x, 1.0 vertexUV.y) At first I thought that its a problem with resolution, but everything is good. Everything is working fine until I move the camera(or change its position from 0,0 to something else all other objects have good position, also mouse coords)
Why does this geometry shader slow down my program so much? I have an OpenGL program, and I'm rendering a terrain mesh. I displace the vertices in the vertex buffer and don't really color them in the fragment shader yet. I'm adding a geometry shader one part at a time. Before I added the geometry shader, when I was just programming the fragment and vertex shading steps of the pipeline, I was getting framerates of about 30; enough that I couldn't notice any choppiness. After adding the geometry shader, I get around 5 frames per second. Why? This is the entirety of the geometry shader:

#version 420
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}

Isn't this exactly what OpenGL was doing without the geometry shader?
HDR Tone Mapping: choosing parameters I implement HDR in my graphics engine (deferred rendering) based on this document: link. I save luminance in a texture (RGBA16F) this way:

const float delta = 1e-6;
vec3 color = texture(texture0, texCoord).xyz;
float luminance = dot(color, vec3(0.2125, 0.7154, 0.0721));
float logLuminance = log(delta + luminance);
fragColor = vec4(logLuminance, logLuminance, 0.0, 0.0); // first channel stores max luminance during a minification process

Then I can calculate an average luminance and find a max luminance. The delta is 1e-6. Is that a good choice? Can I calculate "a" (equation 2) dynamically to achieve better results?
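On the delta question: it only exists to keep log() finite for pure-black pixels, so any value well below the darkest meaningful luminance (roughly 1e-6 up to 1e-4) behaves the same in practice. A CPU-side sketch of what the log-luminance minification chain computes:

```python
import math

def log_average_luminance(luminances, delta=1e-6):
    """Reinhard-style log average: exp(mean(log(delta + L))).

    delta only has to keep log() away from zero, so anything well
    below the darkest meaningful luminance works equally well.
    """
    s = sum(math.log(delta + lum) for lum in luminances)
    return math.exp(s / len(luminances))

# The log average is far less dominated by bright outliers than the
# plain arithmetic mean, which is why tone mapping uses it.
lums = [0.01, 0.01, 0.01, 100.0]
assert log_average_luminance(lums) < sum(lums) / len(lums)
```

As for computing "a" dynamically: a known heuristic (from Krawczyk et al.'s perceptual tone-mapping work, if I remember correctly) derives the key from the log average itself, e.g. a = 1.03 - 2 / (2 + log10(L_avg + 1)), which makes dark scenes use a lower key and bright scenes a higher one.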