Replace glTranslatef and glRotatef with matrices I'm not an OpenGL expert, and as a novice I prefer to practice a little with the old OpenGL first, just to be sure I correctly understand the basic concepts of computer graphics before dealing with shaders and modern OpenGL (3.x). I don't want to start a flame war with this, so I'll get straight to my question; I know that what I'm using is deprecated. What I want to render is this, and I'm drawing it using this piece of code:

```
// draw grid
drawGrid(10, 1);
// draw a teapot
glPushMatrix();
glTranslatef(modelPosition[0], modelPosition[1], modelPosition[2]);
glRotatef(modelAngle[0], 1, 0, 0);
glRotatef(modelAngle[1], 0, 1, 0);
glRotatef(modelAngle[2], 0, 0, 1);
drawAxis(4);
drawTeapot();
glPopMatrix();
```

Now I'd like to replace the last glTranslatef and glRotatef calls with matrices, and I'm doing it this way:

```
Matrix4 matrixModel;
matrixModel.identity();
matrixModel.translate(modelPosition[0], modelPosition[1], modelPosition[2]);
matrixModel.rotateX(modelAngle[0]);
matrixModel.rotateY(modelAngle[1]);
matrixModel.rotateZ(modelAngle[2]);
glLoadMatrixf(matrixModel.getTranspose());
```

And I don't see the teapot anymore. So I thought this matrixModel is incomplete, because it is only the model matrix and I need a modelview matrix, so I have to multiply it with the projection matrix, but... this is what I get. Where am I wrong?
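One detail worth illustrating here: glLoadMatrixf overwrites the whole current matrix, so any view transform that the glTranslatef/glRotatef version combined with is thrown away. A minimal sketch of the usual pattern, reusing the question's hypothetical Matrix4 helper, is a sketch only, not a verified fix:

```
glMatrixMode(GL_MODELVIEW);                 // edit the modelview stack, not projection
glPushMatrix();                             // keep the view matrix already loaded
glMultMatrixf(matrixModel.getTranspose());  // append the model matrix instead of replacing
drawAxis(4);
drawTeapot();
glPopMatrix();
```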
What is actually drawn when glDrawArrays and glDrawElements are called? In my journey out of immediate mode I've come across a snag that I haven't been able to find a decent answer for in any tutorial or API reference, namely: which data structures are actually used when I make a call to a glDraw function in OpenGL 3.3? For example, if I want to draw two 3D models and I've put their vertex data in two different VBOs, does invoking glDrawArrays draw everything set under the current VAO? Or does it only draw the currently bound VBO, sampling from the currently bound texture? I basically understand VAOs and VBOs conceptually, but it's at the implementation level that I'm running into problems; it's a big jump to go from the "stability" of immediate mode to the asynchronicity of modern OpenGL.
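For concreteness, a short sketch of the state involved (standard GL 3.3 calls; names are made up): a VAO records, per attribute, which VBO it reads from, so a draw call uses whatever the currently bound VAO references, while the texture comes from the currently bound texture unit:

```
// setup: the VAO captures the attribute -> VBO association
glBindVertexArray(vaoA);
glBindBuffer(GL_ARRAY_BUFFER, vboA);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// per frame: bind the VAO (restores vboA's setup) and the texture, then draw
glBindVertexArray(vaoA);
glBindTexture(GL_TEXTURE_2D, textureA);
glDrawArrays(GL_TRIANGLES, 0, vertexCountA);
```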
Should texture region sizes be powers of two? Is it advisable for the individual texture regions inside this texture to be powers of two, or won't that make any difference? I've read in several places that it's recommended to use power-of-two sized textures (e.g. "Should I use textures not sized to a power of 2?").
Shader color blending I would like to know how to blend colors in a specific way. Let's imagine that I have a color (A) and another color (B). I would like to blend them in such a way that if I choose white for (B), then the output color is (A), but if I have any other color for (B), it outputs a blend of (A) and (B). I've tried addition, but it doesn't give the expected result. I've tried multiplicative blending; it's quite good for a white (B) value, but it fails for a blue (B) and a red (A). Any idea how to do that? Thanks a lot.
1 | "Render" unsigned integer values to texture without clamping to 0,1 I am trying to render to an unsigned integer texture, with a blending function enable, so that in the end, the value in each texel will be the number of objects rendered on the texel. So I assume I cannot use normalized integers. For example, if I have a 2x2 texture, and I render (0,0) (0,0) (0,1) (1,0) , the texture result should be, 2 1 1 0 . I am trying to set up the frameBuffer, but the call to glTexImage2D, causes the GL FRAMEBUFFER INCOMPLETE ATTACHMENT when I use the formats I assume I need. this works glTexImage2D(GL TEXTURE 2D, 0, GL RGB, 1024, 768, 0, GL RGB, GL UNSIGNED INT, 0) this causes GL FRAMEBUFFER INCOMPLETE ATTACHMENT glTexImage2D(GL TEXTURE 2D, 0, GL RGB32UI, width, height, 0, GL RGB INTEGER, GL UNSIGNED INT, 0 ) Is what I want to do even possible? If so, which settings do I need to use to accomplish this? |
How can I consistently map mouse movements to camera rotation? I am writing an OpenGL game engine and also an editor for the engine. In my editor, I can import 3D models from FBX/Collada as a scene graph. Now I want to implement the option for the user to rotate the camera in the viewport using the mouse. I found many links about rotating the camera by some angle based on the delta x and delta y of the mouse. This is fine. But my problem is selecting the axis of rotation. For example, if the user moves the mouse along the x axis, I change the camera's local rotation angle around the y axis (the up axis). But this does not always work. If the camera node's parent node is rotated 90 degrees around the x axis, then when I change the camera's local y-axis rotation angle, the scene rotates in the wrong direction; in that case I have to rotate around the camera's z axis instead. This is my problem. So how can I ensure the camera always rotates left and right (whatever the parent nodes' angles are) when the user moves the mouse horizontally? I also want to mention that when the user moves the mouse up and down, I want the camera to always rotate up and down.
Rendering performance: num draw calls vs. num texture bindings I'm making a game with LibGDX 1.6.4 and experienced some lag issues on an iPhone 4, and then discovered GLProfiler.enable() in the constructor and, in the render method:

```
Gdx.app.debug("draw calls", "" + GLProfiler.drawCalls);
Gdx.app.debug("texture bindings", "" + GLProfiler.textureBindings);
GLProfiler.reset();
```

What the log shows is that the number of draw calls is always equal to the number of texture bindings. Do you know if this is OK? It seems strange, because I have all images as TextureRegions in only one TextureAtlas of size 1024x512. Isn't it supposed to be that I will have, for example, 50 draw calls and only 1-2 texture bindings, instead of 50 draw calls and 50 texture bindings? Can this be the source of the lag? Another clue may be that I use a SpriteBatch and a Scene2d Stage at the same time, but they both use the same TextureAtlas. The Scene2d Skin loads the TextureAtlas, and the SpriteBatch draws TextureRegions from the skin's regions. Thank you for any help. Update: The main SpriteBatch (there is only one) makes 9 totalRenderCalls. The value of maxSpritesInBatch is 26. These values seem normal from what I read in the docs, and the frame rate while rendering with the SpriteBatch is 60 FPS, so no problem there. The lag appears when I use a Scene2D Stage to display a Dialog above the GameScreen. The Stage has its own batch, which I don't manipulate directly. When the Dialog is displayed, the frame rate drops to 49-50 and the animations in the Dialog are choppy. I guess the problem has something to do with texture binding between the main SpriteBatch and the Stage's SpriteBatch.
Orthographic viewport zooming I'm a bit new to OpenGL, so please bear with me. I have a viewport window like this. I have implemented zoom in/out using the arrow keys. The problem is that it zooms the whole image. Can anyone point me in the right direction as to how I can make the GUI on the sides stay in place while making my viewport (the white rectangle) zoom in and out? Thank you!
How to render separate blocks of a cone in OpenGL I want to render a cone in parts, as in the image. My problem is calculating the arc of each block in 3D space. Does someone have an idea how to handle this?
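A sketch of the underlying math, assuming a cone with its axis along +Y, base radius r and height h (all names here are assumptions): every block corner lies on a circle whose radius shrinks linearly with height, so each block's arcs fall out of a simple parameterization:

```
// point on the cone at height fraction t (0 = base, 1 = apex) and angle theta;
// sweeping theta over [theta0, theta1] at two t values traces one block's arcs
float radius = r * (1.0f - t);
float x = radius * cosf(theta);
float y = h * t;
float z = radius * sinf(theta);
```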
WebGL transparent gradient mask at edge I need to implement a feature in WebGL where I have a horizontal list of meshes (for example 20), only 3 are shown, and the 2 at the edges fade in/out. The list slowly animates from one side to another (sliding left or right). Here is a sample of how it is done using CSS: https://jsfiddle.net/jqsLh4vg/91/. What would be the possible options to achieve this kind of effect? My current approach (probably not the most performant solution): calculate the visible area; calculate what percentage of each mesh is inside the visible area on each render; draw a gradient at the edges using the fragment shader (this is where I got stuck and can't figure out the solution). But maybe there is an easier approach for this kind of effect? (Imagine if I had 100 elements and needed to manually calculate their visible areas on every render.)
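For the gradient step, a minimal GLSL fragment sketch; v_stripPos and fadeWidth are assumed inputs (the fragment's normalized horizontal position inside the visible strip, and the fade band width):

```
varying float v_stripPos;          // 0.0 at the strip's left edge, 1.0 at the right
uniform float fadeWidth;           // e.g. 0.2
uniform vec4 u_color;

void main() {
    float edge  = min(v_stripPos, 1.0 - v_stripPos);   // distance to the nearest edge
    float alpha = smoothstep(0.0, fadeWidth, edge);    // 0 at the edges, 1 inside
    gl_FragColor = vec4(u_color.rgb, u_color.a * alpha);
}
```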
How do particle systems work? I want to implement a particle system in my game, but I've never programmed one and don't know where to start. I only want to display pixels (GL_POINTS) with different sizes in different places, something like Terraria or Minecraft when you hit a block, without the texture. Just a dead-simple example. Google search results for this tend to be very complex, don't explain the concepts, or are written in hard-to-read C without OOP; I'm using Java and OpenGL.
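The core idea is tiny: a pool of particles, each with position, velocity and remaining lifetime; every frame you integrate and draw the live ones. A dead-simple Java sketch (all names are made up):

```
class Particle {
    float x, y;       // position
    float vx, vy;     // velocity
    float life;       // seconds left; <= 0 means the slot is free for reuse
    float size;
}

void update(Particle[] pool, float dt) {
    for (Particle p : pool) {
        if (p.life <= 0) continue;   // dead slot
        p.life -= dt;
        p.vy   -= 9.8f * dt;         // gravity
        p.x    += p.vx * dt;
        p.y    += p.vy * dt;
    }
}
// spawning = re-initializing a dead slot with a random velocity around the impact
// point; drawing = one GL_POINTS vertex per live particle, sized by p.size
```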
OpenGL FBO to OpenCV image I am trying to figure out the best way to share an image between the OpenGL and OpenCV libraries. I perform a render-to-texture in OpenGL, so I have an FBO texture that I then want to pass to OpenCV, where (ideally) I will do some GPU work with it. I know a naive way would be glReadPixels, but this is obviously way too slow: it requires copying the data from GPU memory to host memory and then loading it through cvLoadImage or similar in OpenCV. I am pretty sure there must be a way to bind or map the memory from OpenCV, so I can "load" an image by just accessing its FBO pointer or something like that... but I cannot figure out how. I have googled a lot about it, and so far I could only find this answer, which I don't quite understand. What do they mean by glbuffer.bind()? Any guess? Thanks.
Creating map files for a 3D game I've created plenty of 2D games, and now that I've gotten my hands dirty working with 3D in OpenGL, I want to start a game. The issue is that I don't know how to store all the map data: not only the terrain, but also the textures and objects in the world, lighting, what's interactable, etc. It seems that it'll get way too big if I don't have an efficient system for storing and loading maps. So what methods or articles can you suggest on this?
Tweening colors in OpenGL I'm making a sky gradient in OpenGL by drawing with glColorPointer and glDrawArrays. I would like to be able to change the sky colour from morning to daytime to evening, etc. I can either make a number of sprites and fade them in with my framework, or somehow tween the color vector in OpenGL over time and use a single sprite. The second one seems like the more efficient option, but my framework doesn't pass the time delta into the draw method for me to decide how far I've progressed into the fade. Here's my current code:

```
glDisable(GL_BLEND);
glDisable(GL_DITHER);
glDisable(GL_FOG);
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH);

CGSize size = [[CCDirector sharedDirector] winSize];
float w = size.width;
float h = size.height;

const GLfloat vertices[] = {
    0, 0,     w, 0,
    0, h/3,   w, h/3,
    0, h*2/3, w, h*2/3,
    0, h,     w, h,
};
const GLubyte colors[] = {
    254,255,134,255, 254,255,134,255,
    230,157,0,255,   230,157,0,255,
    230,60,0,255,    230,60,0,255,
    167,0,86,255,    167,0,86,255,
};

glVertexPointer(2, GL_FLOAT, 0, vertices);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);
glEnableClientState(GL_COLOR_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glEnable(GL_TEXTURE_2D);
```

Which gives me
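For the tweening itself, a small sketch (plain C; palette names are assumptions): keep two key palettes and interpolate the color array each frame with your own accumulated clock, since the framework doesn't hand over a delta:

```
GLubyte lerpByte(GLubyte a, GLubyte b, float t) {   // t in [0, 1]
    return (GLubyte)(a + (b - a) * t);
}

// before glColorPointer: blend the morning palette toward the day palette
for (int i = 0; i < 32; i++)                        // 8 vertices * RGBA
    colors[i] = lerpByte(morningColors[i], dayColors[i], t);
```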
How do I store an FBO's output as a cube map? Lately I've been trying to implement cubemaps in my engine, and I have managed to get the rendering side working. Currently, I'm trying to implement a function for creating them, but I've somehow hit a roadblock. The way I'm trying to implement this is by rendering my scene into an FBO from the perspective of the cubemap node (six times, once in each direction), cycling the target of the texture bound to the color attachment through GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, etc. For whatever reason, I haven't been getting any expected results. All I want to do is take snapshots from the camera's perspective, but I need them in texture units I can use across multiple frames. I've tried so many ways that I couldn't possibly recount them all here, but the general gist of how I've been trying to do this is:

```
DTextureCubemapComponent* cubemap = getTexture();    // my texture component
GLuint target = cubemap->textureObj();               // cubemap texture handle
glBindTexture(GL_TEXTURE_CUBE_MAP, target);          // make cubemap active
for (int x = 0; x < 6; x++) {                        // render from 6 different angles
    gbuffercube->swapFinalFormat(GL_TEXTURE_CUBE_MAP_POSITIVE_X + x);
    gbuffercube->StartFrame();                       // my FBO class, binds itself, etc.
    // ... RENDER SCENE ... draw into my FBO, etc.
    gbuffercube->BindForFinalPass();                 // read buffer = my FBO
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
```

Where swapFinalFormat() is:

```
void GBuffer::swapFinalFormat(GLuint format) {
    fmt = format;
    glTexImage2D(format, 0, GL_RGBA32F, size.width(), size.height(), 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT6,
                           format, m_finalTexture, 0);
}
```

I know that my FBO class (labeled GBufferCube above) works, as it is the same class I use normally for my deferred shading system.
How to implement a pixelated screen-transition shader? I am interested in recreating a screen transition seen in a lot of retro games. The transition is a kind of pixelated distortion that increases or decreases in granularity over time. The effect is present in Super Mario World, and can be seen recreated in this clip. Also, here is an image depicting the transition (ignore the gradual lightening of the screen). I want to achieve this by animating the uniforms of a GLSL shader. Unfortunately, I don't know how to design the shader. I know how to sample gradient textures to create various screen wipes, as well as how to sample noise textures to create simple distortion effects, but I can't figure out exactly how to create this effect. Any advice on how to set up a shader to achieve it?
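One way to think about it, sketched under the assumption that a plain mosaic is the core of the effect: snap the texture coordinates to an ever-coarser grid driven by an animated uniform, so each block samples one representative scene pixel (uniform names are assumptions):

```
uniform sampler2D sceneTex;    // the rendered frame
uniform vec2  resolution;      // screen size in pixels
uniform float blockSize;       // animate 1.0 -> 32.0 and back over time

void main() {
    vec2 snapped = floor(gl_FragCoord.xy / blockSize) * blockSize
                 + 0.5 * blockSize;                     // center of the block
    gl_FragColor = texture2D(sceneTex, snapped / resolution);
}
```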
Combining per-triangle and per-vertex variables in OpenGL: how to? Let's say I have 2 triangles sharing an edge defined by 4 vertices, but the normals for these 2 triangles are unique to each triangle and are defined per vertex. So I have 4 vertices and 6 normals (as shown in the figure below). I would like to know how this can be dealt with in OpenGL. I have been using glDrawElements so far, but I understand that this only works if I have 4 normals (1 normal per vertex), and thus it wouldn't allow me to define 3 normals per triangle. What's the most common and efficient way of dealing with this case? Do I need to duplicate the vertices (3 unique vertices per triangle) so that I have as many vertices as I have normals, or is there a cleverer solution?
Loading screen in OpenGL with the GLUT API I'm trying to make a loading screen. My idea is to have two threads working: one in the background loading the 'game' scene, and one displaying the 'load' scene. When building the 'game' scene is finished, it is displayed instead of the 'load' scene. Before threading, I want to make it work the usual way: first display the 'load' scene, then the 'game' scene. Both GameScene and LoadScene inherit from an abstract class Scene. Here's what my main looks like:

```
LoadScene* load_scene;
GameScene* game_scene;
Scene* scene;

int main(int argc, char** argv)
{
    // Init GLUT and create window
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA | GLUT_STENCIL);
    glutInitWindowPosition(initWindowX, initWindowY);
    glutInitWindowSize(windowWidth, windowHeight);
    glutCreateWindow("OpenGL");

    // Register callback functions for change in size and rendering.
    glutDisplayFunc(renderScene);
    glutReshapeFunc(changeSize);
    glutIdleFunc(renderScene);

    // Register input callback functions.
    // 'Normal' keys processing
    glutKeyboardFunc(processNormalKeys);
    glutKeyboardUpFunc(processNormalKeysUp);
    // Special keys processing
    glutSpecialFunc(processSpecialKeys);
    glutSpecialUpFunc(processSpecialKeysUp);
    // Mouse callbacks
    glutMotionFunc(processActiveMouseMove);
    glutPassiveMotionFunc(processPassiveMouseMove);
    // void glutMouseFunc(void (*func)(int button, int state, int x, int y))
    glutMouseFunc(processMouseButtons);
    glutMouseWheelFunc(processMouseWheel);

    // Position mouse in centre of window before main loop (window not resized yet)
    glutWarpPointer(windowWidth / 2, windowHeight / 2);
    // Hide mouse cursor
    glutSetCursor(GLUT_CURSOR_NONE);

    // Initialise input and scene objects.
    input = new Input();
    scene = new LoadingScene(input);
    load_scene = new LoadScene();
    game_scene = new GameScene(input);
    // load_scene: create GameScene(), create LoadingScene(GameScene*),
    // delete LoadingScene(); the LoadingScene thread tells the game to
    // start loading assets

    // Enter GLUT event processing cycle
    glutMainLoop();
    return 1;
}
```

The game scene reassignment must be called in the changeSize function. I thought it should be in the renderScene function, but this is the only way it works. Here's what it looks like:

```
void changeSize(int w, int h)
{
    scene = load_scene;
    scene->resize(w, h);
}
```

Any ideas how I could go about it? So far, if I assign the 'load_scene' object to the 'scene' pointer, it displays the 'load' scene; same goes for the 'game' scene. Is it even possible to achieve a transition between scenes without threading?
Efficiently color a procedural mesh? I'm creating a procedural world with LWJGL and GLSL. I want to better visualize the biome map being produced and the height map it creates, but my attempts so far have been very inefficient. My first idea was to store a vec3 holding the biome color for each vertex, but that very quickly took the game from 150 fps down to 10-15. The time to render a 512x512 chunk went from around 3 per second to one per five seconds. As you might expect, there was also a problem with it very quickly using up 12 GB of RAM. I then thought about making an int array with one entry per vertex, which each point of the mesh would use as a dictionary key to grab a vec3 for the color: 512*512 integers in an array, indexing into an array of 15 colors, instead of a single array of 512*512 vec3s. Somehow this seemed even worse in both FPS and chunk rendering time. To summarize: I'm trying to color each vertex with its own color to visually show the biomes. What is a more efficient way to do this?
Speed of procedural geometry I'm evaluating solutions for creating a VR video game that should run on cellphones. The assets will be streamed from our servers, and that could be a problem on devices with limited capabilities. Since the geometries are not going to be too complex (mostly boxes and wardrobes), I was thinking about procedurally generating them. Is generating geometry procedurally usually faster or slower than just streaming a bunch of vertex data?
Do I lose or gain performance by discarding pixels even if I don't use depth testing? When I first searched for the discard instruction, I found experts saying that using discard will drain performance. They said discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to first run the fragment shader for both objects to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth test and depth write. I draw all objects sorted by their depth, and that's all; no need for the GPU to do fancy things. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?
1D vs 2D vs 3D texture performance I'm working on a 3D game with large, blocky, untextured but individually colored voxels (similar to Cube World). For rendering, right now I naively convert all visible faces ahead of time to triangles, with the color information stored in the vertex attributes. The screenshot below is an example of what I'm currently doing. There's a lot of room to improve here, and I want to switch to a system that uses fewer vertices and stores color information in a texture. What I'd like to do is combine adjacent triangles with the same normals and simplify the mesh so I'm left with a fraction of the vertices. My big hang-up is color information. It's easy to add UVs, but the correct choice for texturing isn't obvious. I'm trying to figure out the relative performance of these options: use a 1D texture (compact, but I'm concerned that large 1D textures won't play nicely with texture-mapping units optimized for 2D textures); use a 2D texture (simple, but it'll end up with gaps); use a 3D texture with the full set of voxels. Does anyone have insight into the relative performance of these options? No mipmapping or other filtering is needed here, so all three should be viable. For reference, I'm writing this in C using OpenGL on Windows. My target GPU architectures are GTX 780 Ti and higher, or the AMD equivalent.
GLSL: rewriting shaders from 330 to 130 I recently created a game (LD21) that uses a geometry shader to convert points into textured triangles, with culling. Since I was under the impression that support for 330 was widespread, I only wrote 330 shaders, but it seems that a lot of not-so-old hardware only supports 130 (according to GLView). Now, since I'm only familiar with the 330 core functionality, I am having trouble rewriting my shaders to 130. The fragment shader was quite trivial to rewrite, but I've only managed to get my vertex and geometry shaders down to 150. So, is it possible to rewrite the shaders, or would it require a lot of changes in my rendering engine? Geometry shader:

```
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 oMatrix;

in VertexData {
    vec4 position;
    vec4 texcoord;
    vec4 size;
} vert[];

out vec2 gTexCoord;

void main() {
    if (vert[0].position.x > -4f && vert[0].position.x < 4f &&
        vert[0].position.y > -2f && vert[0].position.y < 2f) {
        gTexCoord = vec2(vert[0].texcoord.z, vert[0].texcoord.y);
        gl_Position = vert[0].position + vec4(vert[0].size.x, vert[0].size.y, 0, 0);
        EmitVertex();
        gTexCoord = vec2(vert[0].texcoord.x, vert[0].texcoord.y);
        gl_Position = vert[0].position + vec4(0.0, vert[0].size.y, 0, 0);
        EmitVertex();
        gTexCoord = vec2(vert[0].texcoord.z, vert[0].texcoord.w);
        gl_Position = vert[0].position + vec4(vert[0].size.x, 0.0, 0, 0);
        EmitVertex();
        gTexCoord = vec2(vert[0].texcoord.x, vert[0].texcoord.w);
        gl_Position = vert[0].position;
        EmitVertex();
        EndPrimitive();
    }
}
```

Vertex shader:

```
#version 150
#extension GL_ARB_explicit_attrib_location : enable

layout (location = 0) in vec2 position;
layout (location = 1) in vec4 textureCoord;
layout (location = 2) in vec2 size;

uniform mat4 oMatrix;
uniform vec2 offset;

out VertexData {
    vec4 position;
    vec4 texcoord;
    vec4 size;
} outData;

void main() {
    outData.position = oMatrix * vec4(position.x + offset.x, position.y + offset.y, 0, 1);
    outData.texcoord = textureCoord;
    outData.size = oMatrix * vec4(size.x, size.y, 0, 0);
}
```
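For the vertex-shader half, a sketch of what a 130-compatible version might look like: GLSL 1.30 has no interface blocks and no layout qualifiers without an extension, so plain out variables and glBindAttribLocation on the application side stand in for them. Note, too, that geometry shaders only became core with 150, so on 130-class hardware the point expansion would need GL_EXT_geometry_shader4 or a CPU-side quad expansion:

```
#version 130
// attribute locations set from the application with glBindAttribLocation
in vec2 position;
in vec4 textureCoord;
in vec2 size;

uniform mat4 oMatrix;
uniform vec2 offset;

out vec4 v_texcoord;   // plain varyings replace the interface block
out vec4 v_size;

void main() {
    gl_Position = oMatrix * vec4(position + offset, 0.0, 1.0);
    v_texcoord  = textureCoord;
    v_size      = oMatrix * vec4(size, 0.0, 0.0);
}
```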
Calculate the offset between a quad and the mouse position (OpenGL) I am working on a very simple "GUI framework" for my game. As you can see from the GIF, I have a window object that can be selected when the mouse is inside it, and dragged while the left mouse button is pressed. I tried to solve the problem this way: basically, I calculate the x and y components of the distance between the center of the window and the mouse cursor, then add those component values back when setting the window position, which looks something like:

```
// mousePosition is a Vector2 that stores the mouse position
// x_component is a float, calculated by doing:
float x_component = mousePosition.x - window.getPosition().x;
// same for the y_component variable
// setPosition takes a Vector2
window.setPosition(Vector2(mousePosition.x - x_component, mousePosition.y - y_component));
```

But for some reason this is not working. I think it's because we are not taking into consideration the fact that the x or y component could be negative, depending on where we press the mouse inside the window. Can someone help me?
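A sketch of the usual drag pattern, under the assumption that the offset in the snippet above is being recomputed every frame (which makes the two subtractions cancel, so the window never moves): capture it once on the press, then reuse it while the button is held. This assumes Vector2 supports component-wise subtraction:

```
// on mouse-button press, once:
Vector2 dragOffset = mousePosition - window.getPosition();

// every frame while the button is held:
window.setPosition(mousePosition - dragOffset);
```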
OpenGL ES: screen to world coordinates I am currently attempting to convert my screen coordinates to world coordinates, to be able to interact with objects. I am using glm and unProject to try and achieve this; so far this is my code:

```
glm::vec4 viewPort = glm::vec4(0.0f, 0.0f, width, height);
glm::mat4 tmpView = sceneCamera->updateView();
glm::mat4 tmpProj = sceneCamera->updateProjection();
glm::vec3 screenPos = glm::vec3(touchPosition.x, height - touchPosition.y - 1.0f, 1.0f);
glm::vec3 worldPos = glm::unProject(screenPos, tmpView, tmpProj, viewPort);
Renderer->SceneObjects[120]->translateX(worldPos.x);
Renderer->SceneObjects[120]->translateY(worldPos.y);
```

I am trying to get a sprite to land at the position where I tap. The issue is that the further I tap down the screen, the further the sprite overshoots, and the same horizontally. So if I tap 2/3 of the way down the screen, the sprite will overshoot the bottom of the screen.
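One thing the snippet above pins down is the depth: z = 1.0 unprojects onto the far plane, which scales the result up the further the tap is from the screen center. A common sketch is to unproject at both planes and intersect the resulting ray with the plane the sprite lives on (here z = 0 in world space; sx/sy are the flipped screen coordinates from above):

```
glm::vec3 nearPt = glm::unProject(glm::vec3(sx, sy, 0.0f), tmpView, tmpProj, viewPort);
glm::vec3 farPt  = glm::unProject(glm::vec3(sx, sy, 1.0f), tmpView, tmpProj, viewPort);
float t = -nearPt.z / (farPt.z - nearPt.z);   // ray parameter where z == 0
glm::vec3 hit = nearPt + t * (farPt - nearPt);
```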
How to pass arrays of dozens of floats to an OpenGL 3.0 vertex shader? I use PyOpenGL and Python 3. I have 50 thousand vertices. The position of each vertex could be calculated in the vertex shader as:

```
#version 300 es
uniform float coefficients_weights[COEFFICIENTS_AMOUNT];
in vec3 v_coefficients[COEFFICIENTS_AMOUNT];
in vec3 v_initial_position;
out vec3 v_position;

void main() {
    v_position = v_initial_position + v_coefficients * coefficients_weights;
}
```

The number of coefficients varies between 0 and 199; it's not a problem for me to generate 200 vertex shaders to cover each situation. I need to change the positions of the vertices tens of thousands of times per application run, as fast as possible, so I cannot calculate them once at start. However, I cannot provide arrays for each vertex, because the documentation says: "Attribute variables cannot be declared as arrays or structures." I see the following solutions to this issue: (will not work) hardcode suffixes of variable names to emulate arrays, using vectors with names like coefficients_000, coefficients_001, ..., coefficients_199; use matrices to pack 4 vectors into each variable (will it be better than the previous solution with vectors? maybe one matrix product is faster than 4 vector products?); calculate the vertex positions myself using C/NumPy; calculate the vertex positions myself in parallel using OpenCL; (will not work) store all coefficients in `uniform int v_coefficients[COEFFICIENTS_AMOUNT * VERTICES_AMOUNT]` and access the needed ones according to the vertex number, stored in `in int vertex_id`; store the coefficients in textures (proposed by HolyBlackCat). Are there any other solutions to my issue? Is there an optimal one among those proposed? Will I run out of memory if I use 200 vec3s per vertex? I imagine the solution with hardcoded indices as:

```
#version 300 es
uniform float coefficients_weights[COEFFICIENTS_AMOUNT];
in vec3 v_coefficients_000;
in vec3 v_coefficients_001;
// ...
in vec3 v_coefficients_199;
in vec3 v_initial_position;
out vec3 v_position;

void main() {
    v_position = v_initial_position
               + v_coefficients_000 * coefficients_weights[0]
               + v_coefficients_001 * coefficients_weights[1]
               // ...
               + v_coefficients_199 * coefficients_weights[199];
}
```
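For the texture option mentioned above, a GLSL-side sketch (names are assumptions): the coefficients live in a float texture with one row per vertex and one column per coefficient, fetched by gl_VertexID, which ES 3.0 provides:

```
#version 300 es
uniform sampler2D coeff_tex;   // RGB32F, COEFFICIENTS_AMOUNT wide, VERTICES_AMOUNT tall
uniform float coefficients_weights[COEFFICIENTS_AMOUNT];
in vec3 v_initial_position;
out vec3 v_position;

void main() {
    vec3 p = v_initial_position;
    for (int i = 0; i < COEFFICIENTS_AMOUNT; ++i)
        p += coefficients_weights[i] * texelFetch(coeff_tex, ivec2(i, gl_VertexID), 0).xyz;
    v_position = p;
}
```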
Unpacking a sprite sheet into a 2D texture array I am using WebGL 2 (a tag for it does not exist, but it should). I have a 10x10 sprite sheet of squares that are 16x16 pixels in size (all in one PNG image). I'd like to create a 2D texture array out of them, where each 16x16 square gets its own unique Z depth value.

```
let texture = gl.createTexture();
let image = new Image();
image.onload = function() {
    gl.bindTexture(gl.TEXTURE_2D_ARRAY, texture);
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false);
    gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 5, gl.RGBA8, 16, 16, NUM_IMAGES);
    // Now what? gl.texSubImage3D doesn't let me copy in a section of the src image
};
image.src = "https://source-url.fake/image.png";
```

I know that gl.texSubImage3D exists, but does it only accept an entire image as a source? glTexSubImage3D: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glTexSubImage3D.xml
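A sketch of one way this is commonly done in WebGL 2 (worth verifying against the spec for image-element sources): the UNPACK_ROW_LENGTH / UNPACK_SKIP_PIXELS / UNPACK_SKIP_ROWS pixel-store parameters let texSubImage3D read a 16x16 window out of the full 160x160 sheet:

```
gl.pixelStorei(gl.UNPACK_ROW_LENGTH, 160);                         // full sheet width
for (let i = 0; i < NUM_IMAGES; i++) {
    gl.pixelStorei(gl.UNPACK_SKIP_PIXELS, (i % 10) * 16);          // tile column offset
    gl.pixelStorei(gl.UNPACK_SKIP_ROWS, Math.floor(i / 10) * 16);  // tile row offset
    gl.texSubImage3D(gl.TEXTURE_2D_ARRAY, 0, 0, 0, i, 16, 16, 1,
                     gl.RGBA, gl.UNSIGNED_BYTE, image);            // layer i
}
```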
OpenGL VBO additional attributes If I have a buffer with my vertices, normals and texture coordinates, and I use glDrawArrays to draw the VBO to the screen, how can I send per-vertex attributes that I placed in an array to the shader program? I'm making a voxel terrain, and every chunk is one VBO. This all works great, but I want to send the light value of each block (just an array filled with ints) to the shader as well; how would I do that? Edit: after reading some comments, some clarification. I'm trying to avoid the deprecated glVertexPointer etc., so I'm using vertex arrays. Here is the layout of my vertex array: X, Y, Z, NX, NY, NZ, X, Y, Z, NX, NY, NZ, X, Y, Z, NX, NY, NZ. I'm using this to draw it (of course with the array buffer bound):

```
glVertexAttribPointer(0, 3, GL_FLOAT, false, 12, 0);
glVertexAttribPointer(1, 3, GL_FLOAT, false, 12, 12);
glDrawArrays(GL_TRIANGLES, 0, arraySize / 6);
```

However, I get some weird shapes on my screen. Apart from the light problem, which is essentially the same thing, I don't get why this doesn't just work. When I remove the normals and use just one glVertexAttribPointer, it works fine. Yes, the attributes are bound to in_position and in_normal, and the vertex shader is nothing more than gl_Position = gl_ModelViewProjectionMatrix * vec4(in_position, 1.0); Thanks! This is what I get (these are 6x6 chunks, so this code is called 6*6 times).
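For reference, a sketch of the stride arithmetic for the interleaved layout described above: six floats per vertex means a stride of 6 * 4 = 24 bytes, with the normal starting 12 bytes in. A stride of 12, as in the snippet, makes each attribute read into the neighboring vertex's data:

```
// 6 floats per vertex (position + normal), 4 bytes per float
glVertexAttribPointer(0, 3, GL_FLOAT, false, 24, 0);   // position at byte 0
glVertexAttribPointer(1, 3, GL_FLOAT, false, 24, 12);  // normal at byte 12
glDrawArrays(GL_TRIANGLES, 0, arraySize / 6);          // one vertex per 6 floats
```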
OpenGL: using gluSphere I have OpenGL code that currently draws several spheres at different locations. I generate the vertex buffer data (and normal data) myself. However, to simplify the code and increase efficiency, I am now trying to use gluSphere instead. The problem is that if I want to draw several spheres at different locations, I think I will need several different model matrices, and thus several different MVP matrices (because I am using OpenGL 4.3 and there is no glTranslate). But if I want to rotate the whole scene, I will need to rotate all of these model matrices, instead of just one as before. Is there a workaround for this? Is there a simple way to draw several different things with gluQuadrics objects at different locations?
Any idea why my OpenGL camera is stuttering only on the X axis, and only when using Nvidia drivers? As part of my assignment, I've been building a scene-graph framework in C++/OpenGL. My camera works absolutely fine going backwards and forwards, but when I strafe (move on the X axis), the camera starts stuttering. It doesn't make much sense to me, because it's the same principle to move back/forward as it is to move right/left. Everything is also normalised and scaled by deltaTime, so that's not the issue. Interestingly, this didn't happen when I was using my Intel integrated GPU; it only started occurring when I forced my Nvidia card to handle the program. Here is a video of the aforementioned stutter: https://www.youtube.com/watch?v=DyTIv0yiaP0. My scene graph is available for viewing on my GitHub. The repo: https://github.com/charliegillies/fullmetal. The camera: https://github.com/charliegillies/fullmetal/blob/master/fullmetal.cpp (lines 503-730, mostly get/set methods). The camera controller: https://github.com/charliegillies/fullmetal/blob/master/fullmetal_helpers.cpp (lines 14-204). Please note that this framework is still in its early days. Let me know what you think, and I'll be sure to post a fix if I find it.
.md5mesh normals are not smooth I'm currently working on a project that requires me to load the .md5mesh format and draw it. Following this link, I've managed to load the mesh into the engine successfully, but a problem arises when calculating normals: they just don't seem to smooth. To verify that it was not my rendering or shader code that was the problem, I loaded a model in .OBJ format, and that lights smoothly. The mesh is calculated correctly too, as I am able to load complex models with multiple joints and mesh parts. Here's a screenshot of the lighting. And here is how I currently calculate the normals (all normals are set to zero before computing). Edit: amended the pseudocode to be more accurate to what I have; the original may have been confusing.

```
for (unsigned int i = 0; i < NumberOfTriangles; i++) {
    Math3::vec3 r, s, result;
    Math3::vec3 p1, p2, p3;
    p1 = Triangle[i].Vertex[0];
    p2 = Triangle[i].Vertex[1];
    p3 = Triangle[i].Vertex[2];
    r = p2 - p1;
    s = p3 - p1;
    result = Math3::Cross(s, r);
    // Add the triangle's face normal to each vertex.
    // The vertices are not local to the triangles; Triangle[i].Vertex[j] is just an index.
    Vertex[Triangle[i].Vertex[0]].normal += result;
    Vertex[Triangle[i].Vertex[1]].normal += result;
    Vertex[Triangle[i].Vertex[2]].normal += result;
}
```

After the loop I normalise each vertex normal to find the average normal. Edit: here is how I find the average normal:

```
for (unsigned int i = 0; i < NumberOfVerts; i++) {
    float nx = Vertex[i].normal.x;
    float ny = Vertex[i].normal.y;
    float nz = Vertex[i].normal.z;
    float len = sqrt(nx*nx + ny*ny + nz*nz);
    Vertex[i].normal.x /= len;
    Vertex[i].normal.y /= len;
    Vertex[i].normal.z /= len;
}
```

As you can see in the image, the .md5mesh model is shading flat instead of shading smoothly like the .OBJ model. So what am I missing?
Loading Wavefront data into a VAO and rendering it I have successfully loaded a triangulated Wavefront (.obj) file into 6 vectors; the first 3 vectors contain the locations for vertices, UV coords, and normals. The last three have the indices stored for each of the faces. I have been looking into using VAOs and VBOs to render, and I'm not quite sure how to load and render the data. One of my biggest concerns is the fact that indexed rendering only allows you to have one array of indices, meaning I somehow have to make all of the first three vectors the same size. The only way I thought of doing this is to make 3 new vectors of equal size and load in the data for each face, but that would completely defeat the purpose of indexing. Any help would be appreciated.
Running Slick2D outside of the IDE I am looking for advice on how to solve an exception I am getting. I've looked around and seen people get the same error, but most of them seemed to be running old graphics drivers, and once they fixed that it worked for most of them. I doubt that's the reason in my case, since I have the latest Nvidia drivers and the game runs well from the IDE. I also created a jar of the game before I implemented Slick2D, and that ran without exceptions. I used to get another exception, that it couldn't find the right natives for LWJGL, so I removed the natives from the build, made a fat jar with JarSplice, and added them during that process; that's when I ended up at this point. I've tried and tried for days, and it's probably something really stupid and easy to fix, but I haven't built many projects to be run outside Eclipse, so any help would be greatly appreciated. Some basic info on my rig: OS: Windows 7 Enterprise; CPU: Intel Core i7-2600K; GPU: Nvidia GTX 980; RAM: 8192 MB (forgot the brand and make, it's old lol).
Dynamic VBO update possibly corrupting data? I want to draw a line between two vertices. On a mouse click the vertex data will change, and I want to update the line to use the new values. I am using a VBO for this, and it looks like the update process is somehow corrupting the VBO data: the line is not drawn in the direction I would expect, given the values of the vertices. There are four lines in total, and I am updating only the last two vertices. The vertex data is initialized as:

```
float[] vertLineBuffer = {
    1.0f, 0.0f, 0.0f,   // right
    0.0f, 1.0f, 0.0f,   // top
    0.0f, 0.0f, 0.0f,   // center
    1.0f, 0.0f, 0.0f,   // START
    0.0f, -1.0f, 0.0f   // END
};
```

The init routine:

```
public void setLineVBOs(GL4 gl) throws Exception {
    vboLineHandles = new int[2];
    gl.glGenBuffers(2, vboLineHandles, 0);

    // populate the position buffer
    FloatBuffer fbData = Buffers.newDirectFloatBuffer(vertLineBuffer);
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboLineHandles[VERTICES_LINE_IDX]); // the vertex data
    gl.glBufferData(GL4.GL_ARRAY_BUFFER, fbData.capacity() * 4, fbData, GL4.GL_STATIC_DRAW);
    fbData.clear(); // don't need this anymore

    // populate the color buffer
    FloatBuffer fbColor = Buffers.newDirectFloatBuffer(vertLineColor);
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboLineHandles[VERTICES_LINECOLOR_IDX]);
    gl.glBufferData(GL4.GL_ARRAY_BUFFER, fbColor.capacity() * 4, fbColor, GL4.GL_STATIC_DRAW);
    fbColor.clear(); // don't need this anymore

    // set vertex array index
    IntBuffer intBuffer = BufferUtil.newIntBuffer(1);
    gl.glGenVertexArrays(1, intBuffer);
    iLineVao = intBuffer.get(0);
    gl.glBindVertexArray(iLineVao);
    gl.glEnableVertexAttribArray(VERTICES_LINE_IDX);
    gl.glEnableVertexAttribArray(VERTICES_LINECOLOR_IDX);
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboLineHandles[VERTICES_LINE_IDX]);
    // associate vertex attribute 0 with the last bound VBO
    gl.glVertexAttribPointer(0, 3, GL2ES2.GL_FLOAT, false, 0, 0);
    // color
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboLineHandles[VERTICES_LINECOLOR_IDX]);
    gl.glVertexAttribPointer(1, 3, GL2ES2.GL_FLOAT, false, 0, 0);
}
```

And then I attempt to update the VBO on an event (mouse click) like this:

```
public void updateLine(GL4 gl, float[] pos) { // pos contains the new vertex data
    vertLineBuffer[9]  = pos[0];
    vertLineBuffer[10] = pos[1];
    vertLineBuffer[11] = pos[2];
    vertLineBuffer[12] = pos[3];
    vertLineBuffer[13] = pos[4];
    vertLineBuffer[14] = pos[5];
    FloatBuffer fbData = Buffers.newDirectFloatBuffer(vertLineBuffer);
    gl.glBufferSubData(GL4.GL_ARRAY_BUFFER, 0, fbData.capacity() * 4, fbData);
    gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboLineHandles[VERTICES_LINE_IDX]);
    fbData.clear();
}
```

The vertex shader:

```
#version 430
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexColor;
out vec3 Color;
void main(void) {
    Color = vertexColor;
    gl_Position = vec4(vertexPosition, 1.0);
}
```

The frag shader:

```
#version 430
in vec3 Color;
layout (location = 0) out vec4 FragColor;
void main(void) {
    FragColor = vec4(Color, 1.0);
}
```

Does this look right? Can anyone see where this might damage the contents of the VBO?
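One ordering detail worth sketching (an observation about how the API works, not a verified fix for this program): glBufferSubData writes into whichever buffer is bound to GL_ARRAY_BUFFER at call time, so binding after the upload, as updateLine does above, sends the data to whatever buffer happened to be bound. The usual order is:

```
gl.glBindBuffer(GL4.GL_ARRAY_BUFFER, vboLineHandles[VERTICES_LINE_IDX]);    // bind first
gl.glBufferSubData(GL4.GL_ARRAY_BUFFER, 0, fbData.capacity() * 4, fbData);  // then upload
```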
Searching for a "university-kind" free online course about OpenGL I know there are a lot of free university courses, but I'm trying to find one about OpenGL. Do you know where I can find one online?
What to do with unused vertices? Imagine a vertex array in OpenGL representing blocks in a platform game, where some vertices may go unused. The environment is dynamic, so at any moment some vertex may suddenly become invisible. What is the best way to avoid drawing them? Graphics cards are complicated, and it's hard to predict which approach is best. The best ways I can think of: delete the vertex and move all vertices after it to fill the freed space (sounds extremely inefficient); set its position to 0; set its transparency to maximum. I could of course benchmark, but what works faster on my computer doesn't have to on others.
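A fourth option worth sketching, since it sidesteps moving vertex data entirely: keep the vertex buffer as-is and rebuild only the (much smaller) index buffer, uploading indices for visible blocks only. Block and quadIndices are assumed names:

```
// indices[] gets 6 entries per visible quad; hidden quads are simply skipped
std::vector<GLuint> indices;
for (const Block& b : blocks)
    if (b.visible)
        indices.insert(indices.end(), b.quadIndices, b.quadIndices + 6);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
             indices.data(), GL_DYNAMIC_DRAW);
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr);
```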
3.3x OpenGL camera: how to do 3D rotation? I'm trying my hardest to understand what this code is doing. So far, I think the sin(theta) and cos(theta) in the code represent a point at an angle. This point at an angle allows the xmouse/16 and ymouse/16 values to rotate or translate around it, because of their placement in the glm::lookAt() function (the view/camera matrix). The result of doing this is that the view matrix rotates left and right on the x axis when I move the mouse left and right, and up and down on the y axis when I move the mouse up and down. Is this correct? It took a lot of work to understand this, and if I am correct, does anyone know how to make the mouse movement be measured from the centre of the screen (so that moving the cursor left, right, up and down from the centre moves the camera accordingly) rather than from a small region in the top-left corner? Thanks! N.B. I'm using a model matrix and a perspective projection matrix with this in the middle, so Model-View-Perspective. C++ / OpenGL 3.3x / GLSL 3.30 core / Windows 10. Code:

```
GLuint OpenGL_Engine::SetupViewCamera(GLFWwindow* RenderWindow, glm::vec3 eyepos,
    glm::vec3 originpos, glm::vec3 viewrotationpos, GLuint program, GLfloat RotAng)
{
    double xmouse = 0.0f, ymouse = 0.0f;
    glfwGetCursorPos(RenderWindow, &xmouse, &ymouse);
    double theta = glm::radians(RotAng);
    glm::vec3 camera(cos(theta), 0.0f, sin(theta));
    camera *= 5;
    glm::mat4 ViewMatrix = glm::lookAt(camera,
        glm::vec3(xmouse / 16, ymouse / 16, 0.0),
        glm::vec3(0.0, 1.0, 0));
    glm::mat4 IViewMatrix = glm::inverse(ViewMatrix);
    GLint Viewmatrixloc = glGetUniformLocation(program, "VM");
    glUniformMatrix4fv(Viewmatrixloc, 1, GL_FALSE, &IViewMatrix[0][0]);
    return 80;
}
```
Is it possible to get vertices after the primitive assembly step (after face culling)? I would like to check which vertices will be rendered from direction A (with face culling enabled), and then render them from another direction to visualize the effect of face culling. I have found information that using transform feedback I can receive data from the vertex or geometry shader. My question is whether it is possible to get data from the primitive assembly step and use it to render the scene from another direction, or whether I should calculate by myself which faces should not be rendered. Any help and information would be greatly appreciated.
Issues with percentage-closer filtering I'm trying to code a simple game with OpenGL/C++. I've implemented a point light, a parallax effect with textures, objects from Blender, etc., but I seem to have an issue with my percentage-closer filtering implementation. The shadows are cast onto a floor, which is just a rendered texture below my objects. I'm pretty new to how shadows work in OpenGL. This is my attempt at PCF in the fragment shader:

```
in vec4 shadowMapCoord;
layout(binding = 10) uniform sampler2D shadowMapTex;

// area around each fragment that will be sampled
const int pcfCount = 10;

void main() {
    float shadowFactor = 0.0f;
    for (int x = -pcfCount; x <= pcfCount; x++) {       // x offset from inner pixel
        for (int y = -pcfCount; y <= pcfCount; y++) {   // y offset from inner pixel
            shadowFactor += textureProjOffset(shadowMapTex, shadowMapCoord, ivec2(x, y));
        }
    }
    float visibility = shadowFactor;
}
```

Does someone maybe spot an issue with my use of textureProjOffset(), or can I achieve PCF with another kind of implementation?
Rendering only a part of the screen in high detail If graphics are rendered for a large viewing angle (e.g. a very large TV or a VR headset), the viewer can't actually focus on the entire image, just a part of it (actually, this is the case for regular-sized screens as well). Combined with a way to track the viewer's eyes (which is mostly viable in VR, I guess), you could theoretically exploit this and render the graphics away from the viewer's focus with progressively less detail and resolution, gaining performance without losing perceived quality. Are there any techniques for this available or under development today?
Does OpenGL perform visibility algorithms based on z-index? Does OpenGL perform visibility algorithms based on z-index, or do we have to write our own? Mainly I'm referring to the z-buffer algorithm: is it built in?
Free camera rotation in 3D space I'm using a view matrix for camera movement and rotation:

```
viewMatrix = new Matrix4f();
viewMatrix.setIdentity();
Matrix4f.rotate((float) Math.toRadians(rx), new Vector3f(1, 0, 0), viewMatrix, viewMatrix);
Matrix4f.rotate((float) Math.toRadians(ry), new Vector3f(0, 1, 0), viewMatrix, viewMatrix);
Matrix4f.rotate((float) Math.toRadians(rz), new Vector3f(0, 0, 1), viewMatrix, viewMatrix);
Matrix4f.translate(new Vector3f(x * (-1), y * (-1), z * (-1)), viewMatrix, viewMatrix);
```

I'm manipulating the rotation values around each axis here:

```
// left and right
public void rotateY(float amount) {
    ry += amount;
}

// up and down
public void rotateUp(float amount) {
    rx += amount;
    if (rx < -90) rx = -90;
    if (rx > 90) rx = 90;
}
```

This all works fine. Now I'm trying to create a "freer" camera, so that I always turn left and right relative to my current orientation and don't simply rotate around the Y axis. For the rotation up and down, you can simply remove the if clauses. But for the sideways rotation, you need to split the rotation between the x and z axes using sin and cos. That's where I get confused. How can I rotate sideways correctly?
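A sketch of the standard alternative: instead of accumulating Euler angles, keep the view matrix itself as the state and apply each frame's small turn about the camera's current local axes by left-multiplying. Names follow the LWJGL utility classes used above; treat this as a starting point rather than a drop-in replacement:

```
// per frame: build this frame's incremental turn from the mouse deltas
Matrix4f rotation = new Matrix4f();   // identity
Matrix4f.rotate((float) Math.toRadians(yawAmount),   new Vector3f(0, 1, 0), rotation, rotation);
Matrix4f.rotate((float) Math.toRadians(pitchAmount), new Vector3f(1, 0, 0), rotation, rotation);
// left-multiplying the view matrix applies the turn in the camera's own frame
Matrix4f.mul(rotation, viewMatrix, viewMatrix);
```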
How do I generate a 3D race track from a spline? I want to generate a 3-dimensional race track around a spline that describes its shape. Here's an illustrative video. The track should be an endless tunnel that sweeps along a 3D spline, with some obstacles thrown in. My best idea so far has been to loft a circle shape along the spline. I would also like some pointers on how to manage the geometry, i.e. how to create it from the loft operation, manage its 'lifecycle' in memory, collision, and texturing.
Flipped model has wrong triangle order I have a list of models and a transform matrix for each of them. Some of the models are flipped (mirrored) along the X, Y or Z axis. These meshes render wrongly: the back face is rendered instead of the front. I tried glFrontFace(GL_CW), which fixes the flipped models, but then the previously correct models are wrong. How can I fix this? Can I detect from the transform matrix which models need the glFrontFace switch? The right cube and sphere are normal, and the left ones are flipped.
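A sketch of the usual determinant test: an odd number of mirror flips makes the determinant of the transform's upper-left 3x3 negative, which is exactly when the winding order reverses (glm shown; any matrix library with a determinant works):

```
// true -> render this model with glFrontFace(GL_CW), otherwise GL_CCW
bool isMirrored(const glm::mat4& m) {
    return glm::determinant(glm::mat3(m)) < 0.0f;
}
```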
Trying to get the fragment shader to output a list I am trying to figure out a way to get the fragment shader to output a list of gl_VertexIDs. I want to use the GPU to get a list of vertices in the viewing frustum. Is there any way to get a per-fragment-shader-invocation variable, or something else that could be used as an index into a texture? If that is not possible, is there a possibility of rendering into a sparse texture, using gl_VertexID as the index? I will be rendering about one or two million vertices at a time (using GL_POINTS), and the list should be tightly packed to be of use. Some work may be offloaded to the CPU, but this is part of a real-time render engine, so execution times need to stay under 10 ms. I am using OpenGL 4.4.
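One GL 4.x pattern that matches "tightly packed list written from a shader" is an atomic-counter append into an SSBO, sketched here in the vertex stage (bindings and names are assumptions, and vertex-stage atomic counter support varies by hardware, so check GL_MAX_VERTEX_ATOMIC_COUNTERS):

```
#version 440
layout(location = 0) in vec3 position;
uniform mat4 mvp;
layout(binding = 0) uniform atomic_uint visibleCount;
layout(std430, binding = 1) buffer VisibleList { uint ids[]; };

void main() {
    vec4 clip = mvp * vec4(position, 1.0);
    gl_Position = clip;
    // clip-space frustum test: |x|,|y|,|z| <= w
    if (all(lessThanEqual(abs(clip.xyz), vec3(clip.w)))) {
        uint slot = atomicCounterIncrement(visibleCount);  // packed append index
        ids[slot] = uint(gl_VertexID);
    }
}
```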
Is it a good idea to perform all matrix operations on the GPU? I was wondering whether it is an improvement to use OpenGL for matrix calculations instead of the CPU. And if it is an improvement, is it worth changing the math class to use OpenGL?
Any GL transformation not working Today I was trying to make a test camera with a new method (I usually use gluLookAt), and I ran into a problem:

```
void GameDraw() {
    glPushMatrix();
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90, WIDTH / HEIGHT, 0.001, 10000.0);
    pitch += 1;
    glRotatef(pitch, 1.0, 0.0, 0.0);
    glRotatef(yaw, 0.0, 1.0, 0.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    s->bind();
    test->drawMesh();
    glPopMatrix();
}
```

And my draw function is glDrawElements(GL_TRIANGLES, indices, GL_UNSIGNED_INT, nullptr). If it isn't working with glDrawElements, is there any alternative to use? Edit: disabling the shader makes it work fine, but how can I use this with a shader? Please don't use glm examples, because I don't like to use it; I feel more comfortable using my own math.
Having problems making a button class (checkIfClick) in OpenGL Basing this on how SFML does it: I get the global bounds of the object and then check whether the mouse position is within the object. I tried to do something like that, but I don't know exactly how to do it with OpenGL. I tried changing the projection from -1..1 coords to the size of the application, in the hope I could just use the position of the object, but when printing with std::cout it still shows the position in -1..1 coords. So my question is: "How would I convert my mouse position to -1..1, or vice versa?" (It's actually 0 to 2 and not -1 to 1.) So far I have this. I don't know exactly how to get the top/left/right with OpenGL. With SFML, I would use the position and subtract half of the width and height to get it, but like I said, I don't know how to do that in OpenGL. I tried doing that, but the object position is still in -1..1 even though I passed in glm::mat4 proj = glm::ortho(0, width, 0, height, -1, 1). I know my projection is doing something, because when I enter any big number my object gets really small. FYI, just because I see these types of answers a lot: I'm not asking you to write me a fully functional button class. I just want a general idea of how it would be done, or some article that could help me do that (or a very basic example).

```
void contain(glm::vec2 position) {
    if (position.x > button.left && position.x < button.right &&
        position.y > button.top && position.y < button.bottom) {
        std::cout << "True\n";
    } else {
        std::cout << "False\n";
    }
}
```
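A sketch of the conversion in the direction the question asks for, window pixels to normalized device coordinates (windowWidth/windowHeight are assumed); one extra thing worth double-checking in the snippet above is that glm::ortho deduces its type from its arguments, so integer literals like 0 and -1 are worth replacing with 0.0f etc.:

```
// mouse (pixels, origin top-left) -> NDC (-1..1, origin center, +Y up)
glm::vec2 toNDC(glm::vec2 mouse) {
    return glm::vec2( mouse.x / windowWidth  * 2.0f - 1.0f,
                     -(mouse.y / windowHeight * 2.0f - 1.0f));
}
// with this, button.left/right/top/bottom can stay in NDC and contain() works as-is
```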
GLSL shader error on different computers I recently discovered a strange error with my fragment shader:

```
#version 330 core
in vec4 v_color;
in vec2 v_texCoords[10];
out vec4 frag_color;

uniform int actual_textures;
uniform sampler2D u_texture[10];

void main() {
    vec4 final_color = texture2D(u_texture[0], v_texCoords[0]);
    for (int i = 1; i < actual_textures; i++) {
        vec4 tex = texture2D(u_texture[i], v_texCoords[i]);
        final_color = tex * tex.a + final_color * (1 - tex.a);
    }
    frag_color = v_color * final_color;
}
```

On my PC, with a GTX 960 and OpenGL 4.5.1, there is no error at all. But on my laptop, with an Intel HD 3000 and OpenGL 3.3, I get a compile error saying that the texture2D function is deprecated and that the sampler2D array size is too big; I only pass 3 textures at the moment, but I want the array to be able to handle up to 10 textures. On every other computer I tested, the error didn't occur, but they all had Nvidia or AMD GPUs. Why do I keep getting this error on my laptop?
3D sphere generation I have found a 3D sphere in a closed-source game engine that I really like the look of, and I would like to have something similar in my own 3D engine. This is how the sphere looks when it is created in the game engine, at program/game start. At program start, a function named CreateSphere is called, and the user has the option to choose a 3D position and a radius for the sphere. That's all I know about the function, since the engine is closed source. Does anyone have any idea how this sphere might be generated programmatically? I have checked other posts and sites discussing spheres, but none of them has the look of the sphere in the image. Edit: removed some unnecessary information to get to the point of what I need help with.
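For reference, a minimal UV-sphere sketch (stacks and slices); whether it matches the pictured look can't be told from here, but it is the usual starting point for a CreateSphere(center, radius) style function. `c` is an assumed center struct with x/y/z fields:

```
// vertices of a UV sphere of radius r centered at c, `stacks` x `slices` resolution
for (int i = 0; i <= stacks; ++i) {
    float phi = (float)M_PI * i / stacks;          // 0..pi, pole to pole
    for (int j = 0; j <= slices; ++j) {
        float theta = 2.0f * (float)M_PI * j / slices;
        float x = c.x + r * sinf(phi) * cosf(theta);
        float y = c.y + r * cosf(phi);
        float z = c.z + r * sinf(phi) * sinf(theta);
        // store (x, y, z); indices connect ring (i, j) to (i+1, j) and (i, j+1)
    }
}
```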
Selection with a region (when rendering with shaders and VAOs) I currently render my geometry using the "modern OpenGL" approach (with shaders and buffers). I have already implemented picking of single primitives using glReadPixels. Now I am faced with the problem of how to implement selection of multiple objects. I cannot use glReadPixels, because the maximum (and, it seems, meaningful) number of objects I could select with a 10x10 box is 100, and it is really necessary for my purposes to be able to select thousands of objects in a small selection area. I have already read an answer to a similar question where it was recommended to test each point against the selection rectangle. But that raises a new problem: all my vertices are already stored in buffers, and testing each point on the CPU looks like a huge performance cost. Could someone help with this issue?
Issues with OpenGL rotation matrices and shaders I am having an issue with the rotations in my OpenGL shaders. My program works fine before I add the rotation matrices (here is an example on YouTube). After I add rotations to my shaders, there are issues (here is an example). Here is my vertex shader:

```
#version 130
in vec3 position;
in vec4 color;
in vec2 texcoord;

uniform mat4 view_array;
uniform mat4 end_array;
uniform mat4 projection_matrix;
uniform mat4 trans_matrix;
uniform mat4 x_rotation_matrix;
uniform mat4 y_rotation_matrix;
uniform mat4 z_rotation_matrix;

out vec4 pos;
out vec2 texture_coordinate;
out vec4 color_value;

void main() {
    mat4 vw_matrix = mat4(1.0);
    mat4 rotation_matrix = x_rotation_matrix * y_rotation_matrix;
    mat4 model_matrix = trans_matrix * rotation_matrix;
    pos = projection_matrix * vw_matrix * model_matrix * vec4(position, 1.0);
    gl_Position = pos;
    texture_coordinate = texcoord;
    color_value = color;
}
```

Here are the values for my rotation matrices:

```
x_rotation_matrix[0] = 1;  x_rotation_matrix[1] = 0;  x_rotation_matrix[2] = 0;  x_rotation_matrix[3] = 0;
x_rotation_matrix[4] = (float) cos(yaw * (PI / 180));
x_rotation_matrix[5] = (float) sin(yaw * (PI / 180));
x_rotation_matrix[6] = 0;
x_rotation_matrix[7] = (float) sin(yaw * (PI / 180));
x_rotation_matrix[8] = (float) cos(yaw * (PI / 180));
x_rotation_matrix[9] = 0;  x_rotation_matrix[10] = 1; x_rotation_matrix[11] = 0;
x_rotation_matrix[12] = 0; x_rotation_matrix[13] = 0; x_rotation_matrix[14] = 0; x_rotation_matrix[15] = 1;

y_rotation_matrix[0] = (float) cos(pitch * (PI / 180));
y_rotation_matrix[1] = 0;
y_rotation_matrix[2] = (float) sin(pitch * (PI / 180));
y_rotation_matrix[3] = 0;
y_rotation_matrix[4] = 0;  y_rotation_matrix[5] = 1;
y_rotation_matrix[6] = (float) sin(pitch * (PI / 180));
y_rotation_matrix[7] = 0;
y_rotation_matrix[8] = (float) cos(pitch * (PI / 180));
y_rotation_matrix[9] = 0;  y_rotation_matrix[10] = 1; y_rotation_matrix[11] = 0;
y_rotation_matrix[12] = 0; y_rotation_matrix[13] = 0; y_rotation_matrix[14] = 0; y_rotation_matrix[15] = 1;

z_rotation_matrix[0] = (float) cos(pitch * (PI / 180));
z_rotation_matrix[1] = (float) sin(pitch * (PI / 180));
z_rotation_matrix[2] = 0;
z_rotation_matrix[3] = (float) sin(pitch * (PI / 180));
z_rotation_matrix[4] = (float) cos(pitch * (PI / 180));
z_rotation_matrix[5] = 1;  z_rotation_matrix[6] = 0;  z_rotation_matrix[7] = 0;
z_rotation_matrix[8] = 0;  z_rotation_matrix[9] = 0;  z_rotation_matrix[10] = 1; z_rotation_matrix[11] = 0;
z_rotation_matrix[12] = 0; z_rotation_matrix[13] = 0; z_rotation_matrix[14] = 0; z_rotation_matrix[15] = 1;

GL20.glUniformMatrix4fv(GL20.glGetUniformLocation(programID, "x_rotation_matrix"), true, x_rotation_matrix);
GL20.glUniformMatrix4fv(GL20.glGetUniformLocation(programID, "y_rotation_matrix"), true, y_rotation_matrix);
GL20.glUniformMatrix4fv(GL20.glGetUniformLocation(programID, "z_rotation_matrix"), true, z_rotation_matrix);
```

Here is my transformation matrix:

```
trans_matrix[0] = 1;
trans_matrix[5] = 1;
trans_matrix[10] = 1;
trans_matrix[12] = x;
trans_matrix[13] = y;
trans_matrix[14] = z;
trans_matrix[15] = 1;
```

What am I doing wrong?
OpenGL filtering/antialiasing textures in a 2D game I'm working on a 2D game using OpenGL 1.5 that uses rather large textures. I'm seeing aliasing effects and am wondering how to tackle them. I'm finding lots of material about antialiasing in 3D games, but I don't see how most of that applies to 2D games; e.g. anisotropic filtering seems to make no sense, and FSAA doesn't sound like the best bet either. I suppose this means texture filtering is my best option? Right now I'm using bilinear filtering, I think:

```
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

From what I've read, I'd have to use mipmaps to use trilinear filtering, which would drive memory usage up, so I'd rather not. I know the final sizes of all the textures when they are loaded, so can't I somehow size them correctly at that point (using some form of texture filtering)?
Handling 3D perspective for different aspect ratios I have a problem supporting different aspect ratios while developing a mobile game. I was developing on a 2:3 screen, which is the iPhone 4 (Retina). When I use the screen's width and height and a fixed FOV, most cases look OK, but on 3:4 (which is the iPad) the user can "see more" horizontally. If I force the aspect ratio to 2:3, it looks squeezed. I am lost adjusting other values like moving the camera or changing the FOV. Is there a proper way, or any tips? Or: what is a good FOV for a portrait mobile screen that looks OK across most aspect ratios? I am currently using 45 and forcing the aspect ratio to 2:3.
GLSL shader performance reduced by loop? I have a fragment shader like this:

```
#version 330 core
in vec4 v_color;
in vec2 v_texCoords[10];
out vec4 frag_color;

uniform int actual_textures;
uniform sampler2D u_texture[10];

void main() {
    vec4 final_color = texture(u_texture[0], v_texCoords[0]);
    for (int i = 1; i < actual_textures; i++) {
        vec4 tex = texture(u_texture[i], v_texCoords[i]);
        final_color = tex * tex.a + final_color * (1 - tex.a);
    }
    frag_color = v_color * final_color;
}
```

But somehow this slows my game down by 30-40 fps in comparison to this approach:

```
#version 330 core
in vec4 v_color;
in vec2 v_texCoords[10];
out vec4 frag_color;

uniform sampler2D u_texture[10];

void main() {
    vec4 final_color = texture(u_texture[0], v_texCoords[0]);
    vec4 tex = texture(u_texture[1], v_texCoords[1]);
    final_color = tex * tex.a + final_color * (1 - tex.a);
    tex = texture(u_texture[2], v_texCoords[2]);
    final_color = tex * tex.a + final_color * (1 - tex.a);
    // ...
    frag_color = v_color * final_color;
}
```

Why does a single loop slow the GPU down so much?
Do UV coordinates need correction for a moving object? The image above is a static capture of a dynamic OpenGL project I created, in which I wrapped a NASA albedo image (i.e., sans clouds) onto an OpenGL-generated sphere. In doing so, I also generated the UV coordinates associated with each vertex position. This was an incremental learning effort in which I had already applied model-matrix corrections to the vertex positions for the rotating and translating (orbiting) "Earth". I was surprised to find that I did not have to apply a model-matrix correction to the UV coordinates. I have tentatively concluded that once the JPG image coordinates are associated with the corresponding vertex positions via UV coordinates in the range [0, 1], they are fixed and need no further correction. Does that sound correct, or is there more to the situation?
1 | Texture rotation inside a quad How can I rotate a texture inside a quad without rotating the quad? Here's the code but I'm rotating the whole quad here. glBindTexture(GL TEXTURE 2D, texName5) glMatrixMode(GL TEXTURE) glLoadIdentity() glMatrixMode(GL MODELVIEW) glPushMatrix() glTranslatef(12.8, 10, 10) glRotatef(rotateAd, 1, 0, 0) glBegin(GL QUADS) glNormal3f(1, 0, 0) glTexCoord2f(0.0, 0.0) glVertex3f(0, 3, 3) glTexCoord2f(0.0, 1.0) glVertex3f(0, 3, 3) glTexCoord2f(1.0, 0.0) glVertex3f(0, 3, 3) glTexCoord2f(1.0, 1.0) glVertex3f(0, 3, 3) glEnd() glPopMatrix() Thanks. |
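Editor's note: one way, sticking to the fixed-function pipeline already used here, is to rotate the texture coordinates with the texture matrix (pivoted at the texture's centre, 0.5/0.5) and leave the modelview rotation out; a sketch:

    glBindTexture(GL_TEXTURE_2D, texName5);
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(0.5f, 0.5f, 0.0f);        // move the pivot to the texture centre
    glRotatef(rotateAd, 0.0f, 0.0f, 1.0f); // spin the texture, not the quad
    glTranslatef(-0.5f, -0.5f, 0.0f);
    glMatrixMode(GL_MODELVIEW);
    // ...then draw the quad exactly as before, without the modelview glRotatef...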
1 | What is the correct multiplication order for a 2D matrix? I'm currently trying to create a camera and entity model matrix for my 2D game similar to that of Unity3D. I've already tried to find answers to this question on Stack Overflow and gamedev but I couldn't find an answer that explains how to center the camera and image. Goal Have a camera matrix which is centered at the position of the camera. Have an entity matrix which draws everything centered Current implementation camera matrix Matrix.translation(Screen.width 0.5, Screen.height 0.5) Matrix.rotation(camera.rotation radians) Matrix.scale(camera.scale.x, camera.scale.y) Matrix.translation(camera.x, camera.y) entity matrix Matrix.rotation(entity.rotation radians) Matrix.scale(entity.scale.x, entity.scale.y) Matrix.translation(entity.x, entity.y) image component draw function setMatrix(camera.matrix entity.matrix) drawImage(x 0, y 0, image) position is already included in the entity matrix Results Image rotates around the upper left corner of the screen, not around its center Image is not centered. The image's origin is at its upper left corner. The camera matrix seems to be correct. Questions What is the correct multiplication order for the entity model matrix? Can I use a single matrix for all components of an entity, or do I need to factor in the width height of the image text animation component?
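Editor's note: with column-vector math (as in GLM and most GL-style libraries), the rightmost factor applies first, so a centred entity matrix is translate(position) * rotate * scale * translate(-size/2). A sketch in GLM terms (the entity fields e.x, e.y, e.width, e.height are assumptions):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 entityMatrix =
          glm::translate(glm::mat4(1.0f), glm::vec3(e.x, e.y, 0.0f))          // place in the world
        * glm::rotate(glm::mat4(1.0f), e.rotation, glm::vec3(0, 0, 1))        // pivots about the centre
        * glm::scale(glm::mat4(1.0f), glm::vec3(e.scale.x, e.scale.y, 1.0f))
        * glm::translate(glm::mat4(1.0f),
                         glm::vec3(-e.width * 0.5f, -e.height * 0.5f, 0.0f)); // applied first: centring offset

The trailing translation moves the pivot to the image centre; since it depends on the drawn size, each image/text/animation component supplies its own half-size offset (or the shared entity matrix omits it and each component prepends it).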
1 | Pretty Sure I have Support for OpenGL 4, but it's not running. What can I do? Many Game Engines require OpenGL to run. I have one of those. I've confirmed that the program and any benchmarks for OpenGL above OpenGL 2 fail to run. Is there, like, a way to confirm I have dependencies or something to that effect? Is it installed or is it just a library included in other programs? Specifically using Ubuntu Linux 17.10, computer specs are fairly low but I've run the Engine on Windows on the same machine before. Linux should have more up to date graphics drivers, so I'm not sure what the root of the problem might be. |
1 | Using compressed (ETC1) textures in LibGDX I use the standard Android tool for compressing a PNG texture and archiving it with gzip android sdks tools etc1tool texture.png encodeNoHeader gzip texture.pkm Then I try to load it FileHandle file ... ETC1.ETC1Data data new ETC1.ETC1Data(file) ETC1TextureData td new ETC1TextureData(data, false) Texture texture new Texture(td) But I get a java.nio.BufferOverflowException inside the ETC1Data (FileHandle pkmFile) constructor in new DataInputStream(new BufferedInputStream(new GZIPInputStream(pkmFile.read()))) int fileSize in.readInt() compressedData BufferUtils.newUnsafeByteBuffer(fileSize) int readBytes 0 while ((readBytes in.read(buffer)) ! 1) compressedData.put(buffer, 0, readBytes) Exception occurs here compressedData.position(0) compressedData.limit(compressedData.capacity()) How do I fix it? Thanks. P.S. If it's important, the unarchived pkm file size is 33 KB.
1 | Creating a movable camera using glm lookAt() I came across this tutorial on how to create a movable camera in OpenGL using glm lookAt(glm vec3 position, glm vec3 target, glm vec3 up). In the tutorial, in order to keep the camera always facing in one direction while moving, the view matrix is created as such view glm lookAt(cameraPos, cameraPos cameraFront, cameraUp) , where cameraPos, cameraFront, and cameraUp are all glm vec3 type. What I would like to ask is why does the second argument have to be cameraPos cameraFront? If the camera position moved to the right without changing cameraFront, wouldn't cameraPos cameraFront have an effect of rotating to the right as opposed to staying in the same direction (which I think is what should be needed)? |
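Editor's note: it does not rotate, because glm::lookAt expects a target point, not a direction; adding cameraFront to the updated position re-derives a point one unit ahead of the camera, so the view direction stays fixed. A small sketch:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::vec3 cameraPos(0.0f), cameraFront(0.0f, 0.0f, -1.0f), cameraUp(0.0f, 1.0f, 0.0f);
    cameraPos += glm::vec3(1.0f, 0.0f, 0.0f); // strafe right
    // The target follows the camera, so the view direction is still (0, 0, -1):
    glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);

Passing a fixed target point instead would produce exactly the unwanted rotation the question describes.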
1 | Creating a voxel chunk with a VBO How to translate the coordinates of each block and add it to the VBO chunk? I'm trying to make a voxel engine similar to Minecraft as a little learning experience and a way to learn some OpenGL. I have created a chunk class and I want to put all of the vertices for the whole chunk into a single VBO. I was previously only putting each block into a vbo and making a call to render each block. Anyways, I am a bit confused about how I can translate the coordinates of each block in the chunk when I'm putting all vertices into one vbo. This is what I have at the moment. public void putVertices(float tx, float ty, float tz) float l length 1.0f float l height 1.0f float l width 1.0f vertexPositionData.put(new float xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty,zOffset l width tz, xOffset l length tx, l height ty,zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty,zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty,zOffset l width tz, xOffset l length tx, l height ty,zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz, xOffset l length tx, l height ty, zOffset l width tz ) public void createChunk() vertexPositionData BufferUtils.createFloatBuffer((24 3) activateBlocks) Random random new Random() for (int x 0 x < CHUNK SIZE x ) for (int y 0 y < CHUNK SIZE y ) for (int z 0 z < CHUNK SIZE z ) if(blocks x y z .getActive()) putVertices(x 2.0f, y 2.0f, z 2.0f) What's an easy way to translate the vertices of each block into its correct position? I was previously using glTranslatef with each call to render block but this won't work now. What I am doing now also does not work, the blocks all render in stacks on top of each other and it looks like this Thanks
1 | I want to upload a 2 GB model animation to VRAM with the VBO technique, but I am getting this error with NVIDIA (with AMD, no error). It works with an AMD HD 6870 and an ATI X1550, but when I try it with a GTX 950 it gives this error. If I upload less than 2 GB, for example 300 MB, then it works, but I want to upload 2 GB. What can I do? Sorry for my bad English.
1 | OpenGL ES 2.0 Point Sprites Size I am trying to draw point sprites in OpenGL ES 2.0, but all my points end up with a size of 1 pixel...even when I set gl PointSize to a high value in my vertex shader. How can I make my point sprites bigger? |
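Editor's note: in ES 2.0 there is no glPointSize; the point size must come from gl_PointSize written in the vertex shader, and it is clamped to the implementation's GL_ALIASED_POINT_SIZE_RANGE. A minimal vertex shader sketch, with u_pointSize as an assumed uniform name:

    const char* vertexSrc =
        "attribute vec4 a_position;          \n"
        "uniform float u_pointSize;          \n"
        "void main() {                       \n"
        "    gl_Position = a_position;       \n"
        "    gl_PointSize = u_pointSize;     \n" // e.g. 32.0 for 32-pixel sprites
        "}                                   \n";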
1 | How can I adjust the origin for 2D UI in OpenGL? As I am quite used to screen geometry, where 0,0 is at the top left corner of the screen (as it is for just about every desktop app and Web app in the world), I find it very hard to actually implement a UI in OpenGL, since its coordinate system is vertically inverted from what I have always worked with. I'm trying to implement a GUI for a plugin for the game Rising World, and this is driving me crazy. The added restriction that I have is that I do not have access to the actual screen resolution, but can only achieve full screen width and height using relative positioning and size (from 0 to 1). Sure, I can use pixels to position shapes and controls, but I cannot say, for example place this at 50 20 pixels it is either one or the other per shape. I am Googling for things I can read about UI design with "inverted" Y coordinates (i.e. since we read from top to bottom, it is inverted to me), but I can't find anything relevant. And since I cannot use any third party GUI library but what the game is offering, I am restricted to basic shapes and positioning (i.e. doing things 100% manually). Is there a way in this context that I can author my UI in a coordinate system where the origin is the top left?
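Editor's note: when you do control the projection, the usual trick is to flip it, e.g. glOrtho(0, width, height, 0, -1, 1), so y grows downward. Since this plugin only exposes relative [0,1] coordinates, the equivalent is to flip each element's y by hand; a sketch with hypothetical names:

    // Convert a top-left-anchored relative y to the engine's bottom-left origin.
    float toBottomLeftY(float yTopLeft, float heightRel) {
        return 1.0f - yTopLeft - heightRel; // same span, measured from the bottom
    }

Authoring all positions top-left and passing them through one such helper at draw/layout time keeps the rest of the UI code in the familiar coordinate system.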
1 | OpenGL Maximal amount of debug string reached I was trying to check my current OpenGL code in gDEBugger, everything works fine. But I get these messages Debug String >>>>>>>>> 1 Debug String >>>>>>>>> 2 Debug String >>>>>>>>> 3 ... Debug String >>>>>>>>> 497 Debug String Maximal amount of debug string reached (500 strings). Additional debug strings will be ignored What are these messages? Everything in my code runs fine, but it's troubling to see these debug strings.
1 | OpenGL What to do after running glBufferData? I am interested in understanding a bit more behind how OpenGL does its memory management and what are some good practices before I start heavily coding and back myself into a corner. The real question I have is regarding memory. I have a class called Mesh that consists of a pointer to some vertex data, a pointer to the face data, as well as the number of vertices and faces. I am familiar with one possible process that OpenGL uses to get these drawn on the screen generate a buffer for vertices and a buffer for faces, bind them as GL ARRAY BUFFER and GL ELEMENT ARRAY BUFFER, call glBufferData to move the data to the buffers, and lastly call glDrawElements to draw them. From here I can understand how I can load several Mesh objects and draw them on screen, and I have no issue there. My question then lies in where the memory is stored and what I should be doing in order to conserve memory. My assumption is that after the call to glBufferData, the data is copied to the GPU (or to some other register to later be copied to the GPU). Is it safe for me to delete the original Mesh object that was used with the buffer calls? In my head, the mesh data is retained inside the GPU so the data is not lost, and so long as I do not need to reload this data using OpenGL later then I won't miss this data at any point. Also in my head, this frees up RAM space for other uses, so my Mesh, in some sense, now totally lives in GPU memory and nowhere else, and I can do my drawing solely by making OpenGL calls to the correct buffers (so I obviously need to save the generated buffer IDs, but that part goes without saying). If someone could tell me if this is correct incorrect logic, as well as good bad game dev practice? |
1 | Android Hardware Scaler I was reading through this using hardware scaler for performance and am a little confused by it. It says all you need to do to invoke the scaler is to set it like so surfaceView new GLSurfaceView(this) surfaceView.getHolder().setFixedSize(1280, 720) It also says to use a fixed size for all devices. On my Google Nexus 10 (which has a resolution of 2560 x 1600), I've used 1280 x 800. It then goes on to say that the system will scale it to match the device's actual resolution. So this is what I get. What am I missing here? Unfortunately the above link says that examples will be added soon but as yet there is nothing.
1 | GLSL Efficient Point inside Box Check I'm attempting to improve the performance of a shader that changes the colour of a region of the world that is inside a "zone". I am using a deferred lighting system, so the colour and world space position of each pixel on the screen are stored in two separate textures, gColor and gPosition. The zonePos uniform stores the two corners of each zone. The zoneColor uniform stores the colour of each zone. The totalZones uniform stores the amount of zones in the current game. Here is the fragment shader out vec4 FragColor in vec2 texCoords layout (binding 0) uniform sampler2D gColor layout (binding 1) uniform sampler2D gPosition uniform vec3 zonePos 10 uniform vec3 zoneColor 5 uniform int totalZones void main() vec3 FragPos texture(gPosition, texCoords).rgb vec3 Albedo texture(gColor, texCoords).rgb vec3 j vec3 k for (int i totalZones i > 0 i ) j zonePos i 2 k zonePos i 2 1 if (FragPos.x > j.x && FragPos.x < k.x && FragPos.y > j.y && FragPos.y < k.y && FragPos.z > j.z && FragPos.z < k.z) Albedo zoneColor i FragColor vec4(Albedo, 1.0) The changes I have made so far are Using j and k, rather than accessing zonePos 6 times in the if statement Looping down from totalZones to 0, as a comparison against 0 is faster Any suggestions are greatly appreciated.
1 | Fog shader camera problem I have some difficulties with my vertex fragment fog shader in Unity. I have a good visual result but the problem is that the gradient is based on the camera's position, it moves as the camera moves. I don't know how to fix it. Here is the shader code. struct v2f float4 pos SV POSITION float4 grabUV TEXCOORD0 float2 uv depth TEXCOORD1 float4 interpolatedRay TEXCOORD2 float4 screenPos TEXCOORD3 v2f vert(appdata base v) v2f o o.pos mul(UNITY MATRIX MVP, v.vertex) o.uv depth v.texcoord.xy o.grabUV ComputeGrabScreenPos(o.pos) half index v.vertex.z o.screenPos ComputeScreenPos(o.pos) o.interpolatedRay mul(UNITY MATRIX MV, v.vertex) return o sampler2D GrabTexture float4 frag(v2f IN) COLOR float3 uv UNITY PROJ COORD(IN.grabUV) float dpth UNITY SAMPLE DEPTH(tex2Dproj( CameraDepthTexture, uv)) dpth LinearEyeDepth(dpth) float4 wsPos (IN.screenPos dpth IN.interpolatedRay) Here is the problem but how to fix it float fogVert max(0.0, (wsPos.y Depth) ( DepthScale 0.1f)) fogVert fogVert fogVert (exp ( fogVert)) return fogVert Thanks a lot ! |
1 | How do you create a fractal cube map? I want to create a map similar to how Minecraft and other related games do. I just haven't the faintest clue on how to do so. Can anyone point me to a decent tutorial or give me a decent run through? I program in Java and use OpenGL.
1 | Should I batch up debug primitives for rendering in modern OpenGL? I've recently started porting some old rendering demos I did to modern OpenGL. I had a debug drawing class in my old code which used immediate mode glBegin(), glEnd() etc. for rendering debug objects such as triangles, cubes and spheres etc. Originally I replaced the code in these functions with code to generate the VAO and VBOs (for position and colour), bind them, assign the vertex data, render, then disable and delete the VBOs and VAO. Is there anything wrong with this? I've been thinking about modifying the class so that every time it gets a draw primitive call it generates the VAO and VBOs and assigns the vertex data but doesn't actually do the rendering until after all the other rendering has been completed. Then it renders all the debug prims that have been stored at once. I'm not sure of the benefits of either way of doing this and was just wondering what people thought. |
1 | How can I rotate a quad around its center? My code is GL11.glPushMatrix() rotate around center GL11.glTranslatef( 200 2, 200 2, 0) GL11.glRotatef(30, 0.0f, 0.0f, 1) GL11.glTranslatef(200 2, 200 2, 0) draw quad GL11.glBegin(GL11.GL QUADS) GL11.glTexCoord2f(0f,0.5f) GL11.glVertex3f(50,50,1) GL11.glTexCoord2f(0.5f,0.5f) GL11.glVertex3f(50 200,50,1) GL11.glTexCoord2f(0.5f,0f) GL11.glVertex3f(50 200,50 200,1) GL11.glTexCoord2f(0f,0f) GL11.glVertex3f(50,50 200,1) GL11.glEnd() GL11.glPopMatrix() and it's not rotating around the center. Am I doing this correctly? If not, how would I?
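Editor's note: two things need adjusting. The pivot must be the quad's actual centre, and with OpenGL's matrix stack the transform specified closest to the vertices applies first, so the order is translate(+centre), rotate, translate(-centre). The quad here spans (50,50)-(250,250), so the centre is (150,150), not half the width/height. A sketch with raw GL calls (the order is identical through LWJGL's GL11 wrappers):

    float cx = 50.0f + 200.0f * 0.5f; // 150: the quad's actual centre
    float cy = 50.0f + 200.0f * 0.5f;
    glPushMatrix();
    glTranslatef(cx, cy, 0.0f);            // move pivot to the quad's centre
    glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
    glTranslatef(-cx, -cy, 0.0f);          // move back before drawing
    // ...same glBegin(GL_QUADS)/glEnd() block as above...
    glPopMatrix();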
1 | How do I calculate a vertex's position on the CPU? I'm creating a light processor and I need the light position and vertex position after translating, rotating and scaling (to calculate the distance between them, to check if the light is affecting my vertex somehow). But my question is how to properly multiply my position in a Vector3 with the transformation matrix to get the real vertex position in the world. To calculate it in GLSL I just simply use something like this vec4 vertexInWorldPosition transformMatrix vec4(vertexPosition,1.0) But how do I properly do that with my vertex (stored in a Vector3) and Matrix4 in C++ code? Do I need to multiply each row or column by my original vertexPosition stored in a Vector4? I have no idea what to do.
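Editor's note: GLM overloads operator* to perform the full matrix-vector product, exactly like the GLSL line quoted, so there is no per-row or per-column work to do by hand; a sketch:

    #include <glm/glm.hpp>

    glm::vec3 worldPosition(const glm::mat4& transformMatrix, const glm::vec3& vertexPosition) {
        // w = 1 so the matrix's translation column applies to the point
        glm::vec4 p = transformMatrix * glm::vec4(vertexPosition, 1.0f);
        return glm::vec3(p); // for a pure model matrix w stays 1, so no divide is needed
    }

For directions (e.g. normals) use w = 0 instead, so the translation column is ignored.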
1 | Tips for writing 3D collision detection with OpenGL I would like any tips, articles, or tutorials on how to write collision detection using OpenGL and C++ in 3D mainly just simple box collisions etc., but if there are any advanced resources that would be great too. I also don't want to use any external libraries if possible. Thanks.
1 | OpenGL calculate UV sphere vertices I am trying to implement a class Sphere in C . Therefore I want to calculate the vertices in the constructor of the class (or in a seperate function..). Although I read tons of articles about creating spheres in different ways (UV Sphere, Quad Sphere, Icosphere etc.) I did not understand how to create the vertices for my buffer object. I decided to use the UV Sphere since it is easy to map a texture on it. A "UV sphere" in this sense is one where the (quad) edges run like lines of latitude and longitude, and the texture is mapped to the sphere like an equirectangular projection. (Equirectangular Earth texture by Strebe via Wikimedia Commons, CC BY SA 3.0) But how can I calculate all the vertex positions and texture coordinates? For my vertices I use a Vertex struct struct Vertex glm vec3 position glm vec2 texture |
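Editor's note: a minimal sketch of the usual stacks/sectors construction, reusing the Vertex struct above: latitude (phi) runs pole to pole, longitude (theta) wraps around, and the texture coordinates fall straight out of the two loop fractions. The seam column is duplicated (j runs to sectors inclusive) so u can reach 1.0:

    #include <cmath>
    #include <vector>
    #include <glm/glm.hpp>

    std::vector<Vertex> makeSphere(float radius, int stacks, int sectors) {
        const float PI = 3.14159265f;
        std::vector<Vertex> verts;
        for (int i = 0; i <= stacks; ++i) {
            float v = float(i) / float(stacks);      // texture v, 0 at the north pole
            float phi = v * PI;                      // polar angle
            for (int j = 0; j <= sectors; ++j) {
                float u = float(j) / float(sectors); // texture u wraps around
                float theta = u * 2.0f * PI;
                Vertex vert;
                vert.position = glm::vec3(radius * std::sin(phi) * std::cos(theta),
                                          radius * std::cos(phi),
                                          radius * std::sin(phi) * std::sin(theta));
                vert.texture  = glm::vec2(u, 1.0f - v);
                verts.push_back(vert);
            }
        }
        return verts;
    }

Indices then pair row i with row i+1: each quad (i, j) becomes two triangles over a row stride of sectors + 1.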
1 | How can I make this tile map correctly? I'm having the following problem creating this hexa-tiled map. This is what I wanted. EDIT: OK, I've managed to do this const float scaleX ((float)tileWidth) 10000, scaleY ((float)tileHeight ) 10000 const float hexagon r scaleY 2 const float hexagon dx hexagon r cosf(30 3.14f 180.0) const float hexagon dy hexagon r sinf(30 3.14f 180.0) const float hexagon gx 2.0 hexagon dx const float hexagon gy 2.0 hexagon dx sinf(60 3.14f 180.0) Vector2 auxPos auxPos.x 0 auxPos.y 0 float j 0 for (unsigned int y 0 y < layer->layerHeight y , j hexagon gx 2) auxPos.y y (hexagon gy 2 ) auxPos.x j for (unsigned int x 0 x < layer->layerWidth x ) auxPos.x hexagon gx 2 float angle 90 3.14f 180 auxPos.y hexagon gy 2 SetData(stack->vData, stack->vCounter 0, auxPos.x (( scaleX) cosf(angle) ( scaleY) sinf(angle))) SetData(stack->vData, stack->vCounter 1, auxPos.y (( scaleX) sinf(angle) ( scaleY) cosf(angle))) SetData(stack->vData, stack->vCounter 2, layer->layerDepth) SetData(stack->vData, stack->vCounter 3, auxPos.x ((scaleX) cosf(angle) ( scaleY) sinf(angle))) SetData(stack->vData, stack->vCounter 4, auxPos.y ((scaleX) sinf(angle) ( scaleY) cosf(angle))) SetData(stack->vData, stack->vCounter 5, layer->layerDepth) SetData(stack->vData, stack->vCounter 6, auxPos.x ((scaleX) cosf(angle) (scaleY) sinf(angle))) SetData(stack->vData, stack->vCounter 7, auxPos.y ((scaleX) sinf(angle) (scaleY) cosf(angle))) SetData(stack->vData, stack->vCounter 8, layer->layerDepth) SetData(stack->vData, stack->vCounter 9, auxPos.x (( scaleX) cosf(angle) (scaleY) sinf(angle))) SetData(stack->vData, stack->vCounter 10, auxPos.y (( scaleX) sinf(angle) (scaleY) cosf(angle))) SetData(stack->vData, stack->vCounter 11, layer->layerDepth) I can't seem to make this work...
1 | Create 2D sprites with libGdx using a shape and a texture separately I am creating a 2D game with LibGdx that will have creatures that are generated from dozens of characteristics with potentially millions of unique combinations. For each segment of each creature, I want to use two sprites so that I can mix shapes and colors and cut down on my resources. I would like one sprite to be a black and white (or grayscale) of the shape of the body part while the second sprite is just a color pattern. I think I may need to write a shader that splices the color pattern onto my shape to give me the sprite I need. I just need some guidance on how to get started. Another way of explaining this is with the example of clothing. I want to be able to draw a shirt shape, and pants shape, a socks shape, etc. and also draw different fabric patterns. Then I would mix and match the clothing with the pattern instead of drawing every possible combination. How can I accomplish this with LibGdx? |
1 | SDL, SFML, OpenGL, or just move to Java I recently started a new project, and I'm wondering if I should change the technology now before it's too late. I'm using SDL with C++. I have around 6 classes, and the game is going alright, but I got to this point where I have to rotate my sprite to make it point to the coordinates of the mouse. It's a 2D top-down game, so if I pre-cached the images, I'd have to load 360 images, which is bad for memory. If I used SDL glx, I could do the rotation in real time, but I heard it'd drop my frame rate very drastically, and I don't want my game to be slow. I also read I could use OpenGL, which is faster at these things. The problem is, how much would I have to change my code if I moved to OpenGL with SDL? So, I considered moving to a completely different language Java which is much simpler, and would allow me to focus on the game and handle networking much more easily than with C++. I am very confused, and would like to hear your opinion. Thank you!
1 | Passing an array to a uniform in GLSL error Here are my attempts to pass an array to a uniform array struct Vector float x,y,z float threshold 2 0.5, 0.25 Vector kernel new Vector kernel size kernel size 16 fill kernel glUniform1fv(glGetUniformLocation( program, "t"), 2, threshold) glUniform3fv(glGetUniformLocation( program, "kernel"), kernel size, (const GLfloat )kernel) because Vector contains only 3 float fields this kind of casting should be ok shader uniform float t 2 uniform vec3 kernel 16 And the results are weird. Only the first float or first vector is filled with proper values. For example t 0.5, 7.1830559e 042 Even when I try to change only one value (name "t 1 ") it doesn't work. I checked this on AMD CodeXL. My graphics card is a Radeon HD 5770 and I have the newest drivers. I'm using OpenGL 3.3 and GLSL 330. What am I doing wrong?
1 | Practice OpenGL or learn a specific engine? Possible Duplicate Should I use Game Engines to learn to make 3D games? I am a university student. I want to work in the game industry. Now I am thinking about either practicing my OpenGL skills or learning a complete new game engine during my time in school. I will do this by developing a smartphone game. I am debating over using just OpenGL or using a game engine. If I learn a game engine, and the company I want to work for does not use that game engine, wouldn't it be a waste of time to learn that game engine now instead of solidifying my OpenGL skills? So, OpenGL or game engine? Thanks.
1 | JME3 Fragment Shader not compiling Here is the fragment shader code (MyShaders Shader1.frag) void main() gl FragColor vec4(1.0,1.0,1.0,1.0) And the vertex shader code (MyShaders Shader1.vert) void main(void) gl Position vec4(1.0,1.0,1.0,0.0) And the .j3md material code MaterialDef Shader1 MaterialParameters Technique VertexShader GLSL100 GLSL150 MyShaders Shader1.frag FragmentShader GLSL100 GLSL150 MyShaders Shader1.vert The error stack trace is WARNING Bad compile of 1 version 110 2 define FRAGMENT SHADER 1 3 void main(void) 4 5 6 gl Position vec4(1.0,1.0,1.0,0.0) 7 8 May 01, 2018 3 35 57 PM com.jme3.app.LegacyApplication handleError SEVERE Uncaught exception thrown in Thread jME3 Main,5,main com.jme3.renderer.RendererException compile error in ShaderSource name MyShaders Shader1.vert, defines, type Fragment, language GLSL100 ERROR 0 6 Use of undeclared identifier 'gl Position' And here is the code for instantiating the material Material mat new Material(assetManager,"MyShaders Shader1.j3md") I think that the bug is somewhere in me not passing any gl Position parameters into the shader, but how do I do that? I am using JME3 and Java. |
1 | OpenGL Compute Shaders support? I have a question about compute shaders. My GPU is an AMD Mobility Radeon 6490, which, as the AMD website says, supports OpenGL 4.1. However, when I check for my compatibility version via 'glGetString(GL VERSION)' I get 4.3.12618 Compatibility Profile Context 13.250.18.0. Does that mean I should be able to use compute shaders as they were introduced in OpenGL 4.3? If I try to create a compute shader in my program, the GLenum type GL COMPUTE SHADER is unknown, but I'm not sure if the newest headers are included. My GPU also supports the 'GL ARB compute shader' extension. So is there a special way to enable this extension, or should I try to somehow include newer headers? Or will I not be able to use compute shaders at all (I don't want to use OpenCL)? Thanks for the answer.
1 | Giving values to uniforms in OpenGL First thing is that I know how to give values to uniforms in OpenGL. Second thing is that it is a question related to optimization and performance. The way we usually prefer to change uniforms is like this Consider that we have shaders having 'aValue' as uniform During initialization GLint loc glGetUniformLocation(program, "aValue") During a loop or when required to change uniform's value if (loc ! 1) glUniform1f(loc, 0.75) But what if it is like this 'program' is a global variable Can be anywhere but not after destructing 'program' void updateUniformf( const char name, float value ) glUniform1f(glGetUniformLocation(program, name), value) During a loop or when required to change uniform's value updateUniformf("aValue", 0.75) How much would such an approach decrease performance? Or would this approach even affect performance at all? It would be great to have some measurements or a practical example rather than just theory. Of course, I need to know the reasons as well. Thanks for answering this question!
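Editor's note: as a middle ground, the lookup can be cached so the convenient string-based API keeps a per-frame cost close to the first version; a sketch (the cache itself is an assumption, not part of the question's code):

    #include <string>
    #include <unordered_map>

    static std::unordered_map<std::string, GLint> uniformCache;

    GLint cachedLocation(const char* name) {
        auto it = uniformCache.find(name);
        if (it != uniformCache.end()) return it->second;
        GLint loc = glGetUniformLocation(program, name); // driver-side string lookup, done once
        uniformCache.emplace(name, loc);
        return loc;
    }

    void updateUniformf(const char* name, float value) {
        glUniform1f(cachedLocation(name), value);
    }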
1 | How do I ensure my skybox is always in the background, with OpenGL? I created a skybox in OpenGL (through LWJGL), but the only way I found to render it behind all objects was to make it very big. This leads to ugly edges between the 6 skybox planes. Optimally, I would draw a small skybox and disable DepthTesting on all other objects, but since I want those to display in the right depth order, that isn't possible. How can I do this? |
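Editor's note: a common approach that avoids both the huge box and disabling depth testing on everything else is to keep the skybox small and centred on the camera (strip the translation from the view matrix), draw it first each frame with depth writes off, and let the scene paint over it. Because the skybox leaves nothing in the depth buffer, every object passes its depth test regardless of distance, while scene objects still sort correctly among themselves. A sketch:

    glDepthMask(GL_FALSE);   // skybox fragments never write depth
    drawSkybox();            // assumption: a small cube kept centred on the camera
    glDepthMask(GL_TRUE);
    // ...render the rest of the scene normally; everything passes the depth
    // test against the cleared buffer and simply paints over the skybox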
1 | Having problems with making a button class (checkIfClick) OpenGL Just basing this on how SFML does it, I get the global bounds of the object and then check if the mouse position is within the object. I tried to do something like that but I don't know exactly how to do it with OpenGL. I tried changing the projection from -1 to 1 coords to the size of the application in the hope I could just get the position of the object, but when doing std cout, it still shows the position in -1 to 1 coords. So my question is "How would I convert my mouse position to -1 to 1 or vice versa?" It's actually 0 to 2 and not -1 to 1 So far I have this I don't know exactly how to get the top left right with OpenGL I know with SFML. I would use the position and subtract half of the width and height to get it but like I said, I don't know how to do that in OpenGL. I tried doing that but the object position is still -1 to 1 even though I passed in the glm mat4 proj glm ortho(0,width, 0, height, 1,1) I know my projection is working, because when I enter any big number, my object gets really small. FYI, just because I see these types of answers a lot I'm not asking you to write me a fully functional button class. I just want a general idea on how it would be done or some article that could help me do that (or a very basic example). void contain(glm vec2 position) if (position.x > button.left && position.x < button.right && position.y > button.top && position.y < button.bottom) std cout << "True n" else std cout << "False n"
1 | Do shader program compilers optimise divide by PoT constants to bitshift operations? So just to restate that, let's say we have this float f g 2 Given the divisor is a constant, will the shader compiler auto optimise this to a bitshift operation, as some language compilers are known to do? Where would I find more info on shader program compiler implementation details? (I'm guessing there will be different answers for OpenGL & DirectX.)
1 | OpenGL ES 2.0 Controlling Transparency in Fragment Shader The following is the simple OpenGL ES 2.0 GLSL fragment shader I use to place textures on polygons, to render 2D sprites. varying mediump vec2 TextureCoordOut uniform sampler2D Sampler void main() gl FragColor texture2D(Sampler, TextureCoordOut) gl FragColor vec4(texture2D(Sampler, TextureCoordOut).xyz, TextureCoordOut.w 0.5) The fragment shader places voxels with alpha information taken from the source 2D texture (.png image). Apart from the alpha information, I need to control overall polygon sprite transparency to achieve Fade In Fade Out effects. Could you show me, please, how to modify the above shader to control the overall transparency, besides the alpha information? Note: The commented-out line is from my attempts to achieve the transparency. I wish to combine the alpha information with the overall polygon sprite transparency. Thanks.
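Editor's note: the commented-out attempt fails because TextureCoordOut is a vec2 and has no .w component. A uniform fade factor multiplied into the texel's alpha does what is asked, provided blending is enabled (e.g. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)); a sketch with u_fade as an assumed uniform name:

    const char* fragmentSrc =
        "varying mediump vec2 TextureCoordOut;                       \n"
        "uniform sampler2D Sampler;                                  \n"
        "uniform mediump float u_fade; // 1.0 = opaque, 0.0 = gone   \n"
        "void main() {                                               \n"
        "    mediump vec4 tex = texture2D(Sampler, TextureCoordOut); \n"
        "    gl_FragColor = vec4(tex.rgb, tex.a * u_fade);           \n"
        "}                                                           \n";
    // CPU side, per frame: glUniform1f(glGetUniformLocation(prog, "u_fade"), fade);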
1 | Why is the size of glm's vec3 struct 12 bytes? When trying to determine the size of glm vec3 (from the GLM math library) by using the sizeof operator like so sizeof(glm vec3) I get 12 returned. When I look at the definition of a vec3 struct I see this template <typename T, precision P defaultp> struct tvec3 Implementation detail typedef tvec3<T, P> type typedef tvec3<bool, P> bool type typedef T value type ifdef GLM META PROG HELPERS static GLM RELAXED CONSTEXPR length t components 3 static GLM RELAXED CONSTEXPR precision prec P endif GLM META PROG HELPERS Data if GLM HAS ANONYMOUS UNION union struct T x, y, z struct T r, g, b struct T s, t, p ifdef GLM SWIZZLE GLM SWIZZLE3 2 MEMBERS(T, P, tvec2, x, y, z) GLM SWIZZLE3 2 MEMBERS(T, P, tvec2, r, g, b) GLM SWIZZLE3 2 MEMBERS(T, P, tvec2, s, t, p) GLM SWIZZLE3 3 MEMBERS(T, P, tvec3, x, y, z) GLM SWIZZLE3 3 MEMBERS(T, P, tvec3, r, g, b) GLM SWIZZLE3 3 MEMBERS(T, P, tvec3, s, t, p) GLM SWIZZLE3 4 MEMBERS(T, P, tvec4, x, y, z) GLM SWIZZLE3 4 MEMBERS(T, P, tvec4, r, g, b) GLM SWIZZLE3 4 MEMBERS(T, P, tvec4, s, t, p) endif GLM SWIZZLE other code...... For which I see three structs, each with three member variables of templated type T, which in GLM defaults to float type. My question is why is the sizeof() operator returning 12 bytes as the size of glm vec3 when it looks like it should be 36 bytes: 3 structs, with 3 float members each, 3 x 3 x 4 (bytes in a float) = 36.
1 | Rotate a plane defined by its normal and its distance First, apologies for the amount of pictures, it's a bit hard trying to explain my problem without pictures. Hope I've provided all the relevant code. If you feel you want to know about how I am doing something else, please tell me and I will include it. I've been trying to work around rotating a plane in 3D space, but I keep hitting dead ends. The following is the situation I have a physics engine where I simulate a moving sphere inside a cube. To make things simpler, I have only drawn the top and bottom plane and moved the sphere vertically. I have defined my two planes as follows CollisionPlane p new CollisionPlane(glm vec3(0.0, 1.0, 0.0), 5.0) CollisionPlane p2 new CollisionPlane(glm vec3(0.0, 1.0, 0.0), 5.0) Where the vec3 defines the normal of the plane, and the second parameter defines the distance of the plane from the normal. The reason I defined their distance as 5 is because I have scaled the model that represents my two planes by 10 on all axes, so now the distance from the origin is 5 to top and bottom, if that makes any sense. To give you some reference, I am creating my two planes as two line loops, and I have a model which models those two line loops, like the following top plane std shared ptr<Mesh> f1 std make shared<Mesh>(GL LINE LOOP) std vector<Vertex> verts Vertex(glm vec3(0.5, 0.5, 0.5)), Vertex(glm vec3(0.5, 0.5, 0.5)), Vertex(glm vec3( 0.5, 0.5, 0.5)), Vertex(glm vec3( 0.5, 0.5, 0.5)) f1->BufferVertices(verts) bottom plane std shared ptr<Mesh> f2 std make shared<Mesh>(GL LINE LOOP) std vector<Vertex> verts2 Vertex(glm vec3(0.5, 0.5, 0.5)), Vertex(glm vec3(0.5, 0.5, 0.5)), Vertex(glm vec3( 0.5, 0.5, 0.5)), Vertex(glm vec3( 0.5, 0.5, 0.5)) f2->BufferVertices(verts2) std shared ptr<Model> faceModel std make shared<Model>(std vector<std shared ptr<Mesh>> f1, f2 ) And like I said I scale the model by 10. Now I have a sphere that moves up and down, and collides with each face, and the collision response is implemented as well. The problem I am facing is when I try to rotate my planes. It seems to work fine when I rotate around the Z axis, but when I rotate around the X axis it doesn't seem to work. The following shows the result of rotating around Z However, if I try to rotate around X, the ball penetrates the bottom plane, as if the collision plane has moved down The following is the code I've tried to rotate the normals and the planes for (int i 0 i < m entities.size() i) glm mat3 normalMatrix glm mat3 cast(glm angleAxis(glm radians(6.0f), glm vec3(0.0, 0.0, 1.0))) CollisionPlane p (CollisionPlane )m entities i ->GetCollisionVolume() glm vec3 normalDivLength p->GetNormal() glm length(p->GetNormal()) glm vec3 pointOnPlane normalDivLength p->GetDistance() glm vec3 newNormal normalMatrix normalDivLength glm vec3 newPointOnPlane newNormal (normalMatrix (pointOnPlane glm vec3(0.0)) glm vec3(0.0)) p->SetNormal(newNormal) float newDistance newPointOnPlane.x newPointOnPlane.y newPointOnPlane.z p->SetDistance(newDistance) I've done the same thing for rotating around X, except changed the glm vec3(0.0, 0.0, 1.0) to glm vec3(1.0, 0.0, 0.0) m entities are basically my physics entities that hold the different collision shapes (spheres planes etc). I based my code on the answer here Rotating plane with normal and distance I can't seem to figure out why it works when I rotate around Z, but not when I rotate around X. Am I missing something crucial?
1 | OpenGL Texture disappears I'm making a simple program with C++, SDL and GLEW. So far it is going great but I ran into a weird problem. One of my four textures would not show up on screen even though it used the same code as the other three. EXAMPLE https www.dropbox.com s 4038p0yv041mhdy Sk C3 A4rmdump 202014 09 10 2021.37.48.png?dl 0 HOW IT SHOULD BE https www.dropbox.com s huhln17zjip1pjb Sk C3 A4rmdump 202014 09 10 2021.39.04.png?dl 0 But the weird thing is how I fixed it. NOT WORKING sprite 0 .init(0.0f, 0.0f, 1.0f, 1.0f, "kitten.png") sprite 1 .init( 1.0f, 1.0f, 1.0f, 1.0f, "kitten1.png") sprite 2 .init(0.0f, 1.0f, 1.0f, 1.0f, "kitten1.png") sprite 3 .init( 1.0f, 0.0f, 1.0f, 1.0f, "kitten.png") WORKING sprite 1 .init(0.0f, 0.0f, 1.0f, 1.0f, "kitten.png") Switched around the 0 and the 1 sprite 0 .init( 1.0f, 1.0f, 1.0f, 1.0f, "kitten1.png") sprite 2 .init(0.0f, 1.0f, 1.0f, 1.0f, "kitten1.png") sprite 3 .init( 1.0f, 0.0f, 1.0f, 1.0f, "kitten.png") I only changed the index when the init function was called and all of a sudden it started to work again. So my question is Why does the texture not display when I'm using index 1 but display when I'm using index 0? Nothing happened when I only changed the draw function. INIT FUNCTION void Sprite init(float x, float y, float w, float h, std string image) Loads image, initialize the variables and generate the Vertex Buffer Object (VBO) texture ResourceManager getTexture(image) x x y y w w h h if( vboid 0) glGenBuffers(1, & vboid) if( eboid 0) glGenBuffers(1, & eboid) The vertex data Vertex vertexData 4 Specify which vertices to reuse GLuint elementdata 6 0, 1, 2, 2, 3, 0 Top right corner position vertexData 0 .setPosition( x w, y h) vertexData 0 .setTexCoord(1, 0) Top left corner position vertexData 1 .setPosition( x, y h) vertexData 1 .setTexCoord(0, 0) Bottom left corner position vertexData 2 .setPosition( x, y) vertexData 2 .setTexCoord(0, 1) Bottom right corner position vertexData 3 .setPosition( x w, y) vertexData 3 .setTexCoord(1, 1) for(int i 0 i < 4 i ) vertexData i .setColor(255, 255, 255, 255) Let's set some special color Binds the buffer, then upload data to the GPU and the last thing it does is unbind the buffer glBindBuffer(GL ARRAY BUFFER, vboid) glBufferData(GL ARRAY BUFFER, sizeof(vertexData), vertexData, GL DYNAMIC DRAW) glBindBuffer(GL ARRAY BUFFER, 0) glBindBuffer(GL ELEMENT ARRAY BUFFER, eboid) glBufferData(GL ELEMENT ARRAY BUFFER, sizeof(elementdata), elementdata, GL DYNAMIC DRAW) glBindBuffer(GL ELEMENT ARRAY BUFFER, 0)
1 | How can I port my OpenGL game to Linux? I made a game with OpenGL 4.3 (core profile) and C++. I used GLFW3 for window and context management. I am also using a bunch of third party libraries which are also available for Linux. What things do I need to consider if I want to port the game, and how can I make it support both Windows and Linux?
1 | 3d Picking under reticle I'm currently trying to work out some 3d picking code that I started years ago, but then lost interest once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking, I'm just using the position in 3d space and a ray directly out from there. A small hitch though is that I want to use a cone and not a ray. Here are the variables I'm using float iReticleSlope 95 3000 inverse reticle slope float baseReticle 1 radius of the reticle at z 0 float maxRange 3000 max range to target Quaternion orientation the camera's orientation Vector3d position the camera's position Then I loop through each object in the world Vector3d transformed object position after transformations float d, r holder variables for(i 0 i < objects.length i ) transformed objects i .position transform the position relative to camera orientation.multiply(transformed) orient the object relative to the camera if(transformed.z < 0) d sqrt(transformed 0 transformed 0 transformed 1 transformed 1 ) r transformed 2 iReticleSlope objects i .radius if(d < r && transformed 2 objects i .radius < maxRange) the object is under the reticle else the object is not under the reticle else the object is not under the reticle Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?
1 | Stencil buffer mask calculation I don't understand exactly how the stencil buffer works in OpenGL. One aspect that confuses me is why we use a bit operator there in glStencilFunc. Some texts say that it is used to achieve multiple stencil planes, but I am not sure how that can be done. Can anybody please explain the whole procedure of stencil buffer calculation during the rendering cycle? Thank you.
1 | glReadPixels with GL DEPTH COMPONENT into PBO is slow I need to read the depth buffer back to CPU memory. It may be a few frames old, so I use glReadPixels with a buffer bound to GL PIXEL PACK BUFFER. I use several buffers and ping pong them. Finally, I read from the PBO with glGetBufferSubData. I have tried creating the buffers with GL STATIC READ, GL DYNAMIC READ and GL STREAM READ all with the same results. Unfortunately, all this is still horribly slow. class DepthBuffer private static const uint32 PboCount 2 Buffer buffer uint32 index uint32 pbo PboCount uint32 w PboCount , h PboCount public DepthBuffer() DepthBuffer() void performCopy(uint32 fbo, uint32 w, uint32 h) returns 0..1 in logarithmic depth float valuePix(uint32 x, uint32 y) pix in 0..(cw ch 1) float valueNdc(float x, float y) ndc in -1..1 DepthBuffer DepthBuffer() index(0), pbo 0, 0 , w 0, 0 , h 0, 0 glGenBuffers(PboCount, pbo) DepthBuffer DepthBuffer() glDeleteBuffers(PboCount, pbo) void DepthBuffer performCopy(uint32 fbo, uint32 paramW, uint32 paramH) copy framebuffer to pbo glBindBuffer(GL PIXEL PACK BUFFER, pbo index ) if (w index ! paramW h index ! paramH) glBufferData(GL PIXEL PACK BUFFER, paramW paramH sizeof(float), nullptr, GL STREAM READ) w index paramW h index paramH glBindFramebuffer(GL READ FRAMEBUFFER, fbo) CHECK GL FRAMEBUFFER(GL READ FRAMEBUFFER) glReadPixels(0, 0, w index , h index , GL DEPTH COMPONENT, GL FLOAT, 0) CHECK GL("read the depth (framebuffer to pbo)") copy gpu pbo to cpu buffer index (index 1) PboCount float depths nullptr uint32 reqsiz w index h index sizeof(float) if (buffer.size() < reqsiz) buffer.allocate(reqsiz) depths (float )buffer.data() glBindBuffer(GL PIXEL PACK BUFFER, pbo index ) glGetBufferSubData(GL PIXEL PACK BUFFER, 0, w index h index sizeof(float), depths) glBindBuffer(GL PIXEL PACK BUFFER, 0) CHECK GL("read the depth (pbo to cpu)") float DepthBuffer valuePix(uint32 x, uint32 y) if (w index h index 0) return nan1() assert(x < w index && y < h index ) return ((float )buffer.data()) x y w index float DepthBuffer valueNdc(float x, float y) assert(x >= -1 && x <= 1 && y >= -1 && y <= 1) return valuePix((x 0.5 0.5) (w index 1), (y 0.5 0.5) (h index 1)) The glReadPixels is taking all the time (on the CPU). Am I doing something wrong? I thought that reading into the PBO should be asynchronous. Thanks.
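Editor's note: two things commonly cause the stall here. First, asking for GL_FLOAT from a depth attachment whose native format differs (e.g. a 24-bit depth buffer) can force a conversion and a synchronization on some drivers, so requesting a type that matches the buffer's actual format may help. Second, glGetBufferSubData performs a blocking client-side copy; mapping the buffer avoids the copy, and the existing ping-pong index already gives the transfer a frame to complete (a glFenceSync/glClientWaitSync pair can make that latency explicit). A sketch using the member names above:

    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
    const float* depths = static_cast<const float*>(glMapBufferRange(
        GL_PIXEL_PACK_BUFFER, 0,
        (size_t)w[index] * h[index] * sizeof(float),
        GL_MAP_READ_BIT));
    if (depths) {
        // ...consume the depth values directly from the mapped pointer...
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);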
1 | Using same buffer for vertex and index data? Is it possible to use the same buffer for both GL ARRAY BUFFER and GL ELEMENT ARRAY BUFFER? I load both vertex data and index data into a big slab of memory, so it would be easier for me to just load it all into a single buffer. So naturally, I do it like this glBindBuffer(GL ARRAY BUFFER, vboId) glBufferData(GL ARRAY BUFFER, dataSize, data, usage) glBindBuffer(GL ARRAY BUFFER, 0) Is it legal to, during rendering, simply use it as both? glBindBuffer(GL ARRAY BUFFER, vboId) glBindBuffer(GL ELEMENT ARRAY BUFFER, vboId) glVertexAttribPointer(...) ... glDrawElements(mode, count, dataType, (void )indexOffset) I can't find anything in the spec saying it's ok to do so, but I can't find anything that says that I can't either. Googling doesn't turn up much either, but I might be looking in the wrong places.
1 | Accessing struct in glut I have to write a game for a uni project using OpenGL glut. I am just going to use a generic example code as I'm just looking for a solution to one specific problem Using glut, I have to call a function 'glutDisplayFunc(display)' in the main function. I am also using an IdleFunc(update). The problem is that, as described here, the 'display' function cannot be passed any arguments. I have some structs outside my main that I wish to be initialized in the main, and be accessible by display and update. Hopefully some code will explain my problem better include <gl glut.h> struct Player GLfloat x GLfloat y GLfloat z int score ... function prototypes (showing how I would normally pass the struct) void InitPlayer (Player &player) void DrawPlayer (Player &player) void UpdatePlayer (Player &player) void main (int argc, char argv) Player player InitPlayer(player) ... glut openGL initialisation code left out ... glutDisplayFunc (display) glutReshapeFunc (reshape) glutIdleFunc (update) glutMainLoop() void display() DrawPlayer(player) void update () UpdatePlayer(player) glutPostRedisplay () end The above code doesn't work, but I hope it demonstrates what I would like to do access the struct 'player' in 'display' and 'update', having the same values stored globally. How would I go about it?
1 | Underwater Shader Animation Help I found an underwater (distort) effect and I got it to work, but somehow I cannot make it animate using the offset. Here is the fragment shader code uniform sampler2D fbo texture uniform float offset varying vec2 f texcoord void main(void) vec2 texcoord f texcoord texcoord.x sin(texcoord.y 4 2 3.14159 offset) 100 gl FragColor texture2D(fbo texture, texcoord) Original source here
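Editor's note: the shader itself is fine; the distortion only animates if the offset uniform changes every frame, e.g. driven by elapsed time so the sin() phase keeps advancing. A constant offset yields a frozen wave. A sketch, where the timer function is an assumption:

    GLint offsetLoc = glGetUniformLocation(program, "offset");
    glUseProgram(program);
    glUniform1f(offsetLoc, elapsedSeconds()); // grows each frame, so the sine scrolls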
1 | Geometric Transformations on the CPU vs GPU I've noticed that many 3d programs normally do vector matrix calculations as well as geometric transformations on the CPU. Has anyone found an advantage in moving these calculations into vertex shaders on the GPU? |