1
Weird problem with advect program in fluid simulator. I implemented a 2D fluid simulator. The solver runs entirely on the GPU. Everything works fine... on my work PC. But on my home PC I get some awful glitches, and I can't understand how to fix them. Empirically I discovered that the problem is localized somewhere in the advect program. This is very strange, because at work I have integrated video and at home an NVIDIA GeForce 9800 GT. Here is the GLSL source of the advect program (some lines were dropped for clarity):
    #version 130
    out vec3 value;
    uniform sampler2D q;
    uniform sampler2D velocity;
    uniform float dt;
    uniform float inverseSize;
    void main() {
        vec2 p = gl_FragCoord.xy * inverseSize;
        vec2 np = p - dt * texture(velocity, p).xy;
        value = texture(q, np).xyz;
    }
And some screenshots: Work PC / Home PC.
1
Opengl texture rendered as solid color,why? her is my vertex array and texture coordinate.. float POS 0.5, 0.5, 1.0, 0.5, 0.5, 1.0, 0.5,0.5, 1.0, 0.5,0.5, 1.0 float texCoords 0.0,0.0, 1.0,0.0, 1.0,1.0, 0.0,1.0 here is my texture creation stbi set flip vertically on load(1) openglTexBuff stbi load("lena.jpg", amp w, amp h, amp c,0) glGenTextures(1, amp TexID) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D,TexID) glTexImage2D(GL TEXTURE 2D,0,GL RGB,w,h,GL FALSE,GL RGB,GL UNSIGNED BYTE,openglTexBuff) glGenerateMipmap(GL TEXTURE 2D) glTexParameteri(GL TEXTURE 2D,GL TEXTURE WRAP S,GL REPEAT) glTexParameteri(GL TEXTURE 2D,GL TEXTURE WRAP T,GL REPEAT) glTexParameteri(GL TEXTURE 2D,GL TEXTURE MAG FILTER,GL NEAREST) glTexParameteri(GL TEXTURE 2D,GL TEXTURE MIN FILTER,GL NEAREST) glBindTexture(GL TEXTURE 2D,0) stbi image free(openglTexBuff) ' and these are my shaders shader vertex version 330 core layout(location 0) in vec4 positions layout(location 1) in vec2 texCoords uniform mat4 translation matrix inactive shader variables.. uniform mat4 rotation matrix mat4 model matrix mat4 projection matrix out vec4 pos out vec2 frag TexCoord void main() rotation matrix gl Position rotation matrix translation matrix positions translation matrix operated with POS vector.. pos positions frag TexCoord texCoords translation matrix shader fragment version 330 core layout(location 0) out vec4 color in vec4 pos in vec2 frag TexCoord rag TexCoord uniform float col uniform sampler2D a texture void main() col a texture color texture(a texture,frag TexCoord) when i use (solid)color instead of texture in shader..everything works fine but when i use shader it gives solid color instead of texture..It seems like my whole tex is super zoomed to output single color here is draw call glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) box.shader.setUniformat4x4("translation matrix",translation) box.shader.setUniformat4x4("rotation matrix",rotation arr) box.shader.setUniform1f("col",1.0) glBindTexture(GL TEXTURE 2D,TexID) box.shader.setUniform1i("a texture",0) box.render() box.render void render() vertices.bind() indices.bind() shader.bind() glDrawElements(render As,vertices.getSize(),GL UNSIGNED INT,NULL) vertices.unbind() indices.unbind()
1
Difference between the terms Material / Effect. I'm making an effect system right now (I think, because it may be a material system... or both!). The effects system follows the common (e.g. COLLADA, DirectX) effect framework abstraction: Effects have Techniques, Techniques have Passes, Passes have States & Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.) or to the in-game situation (e.g. night/day, power-up mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, or Phong "material types" / shaded surfaces, or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects stored in a collection, & each effect instance object had its own parameter settings, then there is no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
1
What is causing this problem with my object rotation? I'm having an issue here with rotation in OpenGL. Before I changed my rendering function the rotation I had for my object worked fine, but now it seems to be messed up. I changed my rendering function because I wanted to be able to clip the image (render frames of the image), whereas before I was just rendering things on a 1 object, 1 image basis. The rendering is fine, just the rotation to clarify that. Here's my current rendering code void DrawRotateAdv(int x, int y, int width, int height, float sourceX, float sourceY, float imageWidth, float imageHeight, GLuint texture, float angle, bool blendFlag) glTranslatef((GLfloat) x (width 2), (GLfloat) y (height 2), 0.0) glRotatef(angle, 0.0, 0.0, 1.0) if (blendFlag) glEnable(GL BLEND) glBlendFunc(GL SRC ALPHA, GL ONE MINUS SRC ALPHA) glBindTexture(GL TEXTURE 2D, texture) float texscaleX 1.0f (float)(imageWidth width) float texscaleY 1.0f (float)(imageHeight height) sourceX sourceX (float)imageWidth sourceY sourceY (float)imageHeight glBegin(GL QUADS) Top left vertex (corner) glTexCoord2f(sourceX, sourceY) glVertex2i( ( width 2), ( height 2)) Bottom left vertex (corner) glTexCoord2f(sourceX, sourceY texscaleY) glVertex2i( (width 2), ( height 2)) Bottom right vertex (corner) glTexCoord2f(sourceX texscaleX, sourceY texscaleY) glVertex2i( (width 2), (height 2)) Top right vertex (corner) glTexCoord2f( sourceX texscaleX, sourceY) glVertex2i( ( width 2) , (height 2)) glEnd() glLoadIdentity() I think that perhaps my translation is wrong? Unsure though, guess that's a stab in the dark on my part. As for the getting the angle angle atan2((cEnemy gt position.y (float)position.y), cEnemy gt position.x (float)position.x) 3.14159265f 180 But I don't think that's the issue, like I said it worked before hand perfectly. Screenshot of the output
1
LWJGL loading textures of various types. I googled around a bit and nobody seems to have asked this question. I have images in multiple color formats (all of them are PNGs). Most of them are ARGB, but my bitmap fonts are grayscale, and I would like them to stay that way. All I want to do is find out what format BufferedImage uses to store my pixel data and then use that information with glTexImage2D. Java, in all its wisdom, seems to be determined to hide that information from me at all costs... I also need to know how BufferedImage aligns its pixel data in both of these formats (glTexImage2D cares). Could someone please tell me how to: 1. Determine the pixel format of my BufferedImage. If it is ARGB32, I'm going to have to reorder the bytes and use GL_RGBA. If it's grayscale, I will be using GL_INTENSITY. 2. Extract the actual bytes from the image. I have seen a few examples on the web that use BufferedImage.getRaster().getDataBuffer(). This is nonsensical. Why are there different types of buffers like DataBufferInt? Because of Java's strong typing, I need DataBufferByte. If this is the only way, could somebody give me specific directions to use the different types of buffers with glTexImage2D? 3. Figure out how the aforementioned image data is aligned. I will use this information with glPixelStorei. In addition, I come from C and C++ programming. In C this was 100 lines of simple libPNG and GL calls. Should I expect more trouble like this in the future?
1
Why does GLM only have a translate function that returns a 4x4 matrix, and not a 3x3 matrix? I'm working on a 2D game engine project, and I want to implement matrices for my transformations. I'm going to use the GLM library. Since my game is only 2D, I figured I only need a 3x3 matrix to combine the translation, rotation and scale operations. However, glm::translate is only overloaded to return a 4x4 matrix, and never a 3x3. I thought a translation could be performed using a 3x3 matrix, so why does GLM only have a translate function that returns a 4x4 matrix, and not a 3x3 matrix?
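For reference, the 3x3 homogeneous translation the question assumes does indeed work; GLM just ships no helper for it. A minimal sketch of building one by hand, assuming column-major glm::mat3 and points represented as (x, y, 1) (the helper name translate2D is hypothetical, not part of GLM):
    #include <glm/glm.hpp>

    // Hypothetical helper: build a 2D translation as a 3x3 homogeneous matrix.
    glm::mat3 translate2D(const glm::vec2& t) {
        glm::mat3 m(1.0f);          // identity
        m[2] = glm::vec3(t, 1.0f);  // third column holds the translation
        return m;
    }

    // Usage: a point has w = 1, a direction would have w = 0 and stay untranslated.
    glm::vec3 p = translate2D(glm::vec2(3.0f, 4.0f)) * glm::vec3(1.0f, 2.0f, 1.0f); // (4, 6, 1)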
1
JME3 Fragment Shader not compiling. Here is the fragment shader code (MyShaders/Shader1.frag):
    void main() {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
And the vertex shader code (MyShaders/Shader1.vert):
    void main(void) {
        gl_Position = vec4(1.0, 1.0, 1.0, 0.0);
    }
And the .j3md material code:
    MaterialDef Shader1 {
        MaterialParameters {
        }
        Technique {
            VertexShader GLSL100 GLSL150:   MyShaders/Shader1.frag
            FragmentShader GLSL100 GLSL150: MyShaders/Shader1.vert
        }
    }
The error stack trace is:
    WARNING: Bad compile of
    1  #version 110
    2  #define FRAGMENT_SHADER 1
    3  void main(void)
    4  {
    5
    6      gl_Position = vec4(1.0, 1.0, 1.0, 0.0);
    7
    8  }
    May 01, 2018 3:35:57 PM com.jme3.app.LegacyApplication handleError
    SEVERE: Uncaught exception thrown in Thread[jME3 Main,5,main]
    com.jme3.renderer.RendererException: compile error in: ShaderSource[name=MyShaders/Shader1.vert, defines, type=Fragment, language=GLSL100]
    ERROR: 0:6: Use of undeclared identifier 'gl_Position'
And here is the code for instantiating the material:
    Material mat = new Material(assetManager, "MyShaders/Shader1.j3md");
I think that the bug is somewhere in me not passing any gl_Position parameters into the shader, but how do I do that? I am using JME3 and Java.
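Worth noting: in the Technique block shown above, Shader1.frag is assigned to VertexShader and Shader1.vert to FragmentShader, and the driver error (gl_Position used inside something compiled as a fragment shader) is consistent with the two paths being swapped. A corrected block would presumably read:
    Technique {
        VertexShader GLSL100 GLSL150:   MyShaders/Shader1.vert
        FragmentShader GLSL100 GLSL150: MyShaders/Shader1.frag
    }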
1
Taking cube from camera space to clip space, error in my math? I'm watching Ken Joy's Computer Graphics lectures on YouTube. One thing I'm confused about is that after he takes the cube from camera space to clip space, from my calculations the cube doesn't look like that. I expected the cube to look like the pink parallelogram in my picture: if we assume the Z of the front face of the cube to be 4/3 and the back face to be 2, then the Ws come out to be 4/3 and 2 respectively. So can someone explain how, after multiplying by the viewing matrix, the cube comes out to look like how Ken has it? (Images: Ken's view matrix; the cube after the view matrix has been applied; what I think the side of the cube should look like, the pink parallelogram, after the view matrix has been applied.) My reasoning is: after the perspective divide by W, the blue and green vectors should get truncated to create that pink parallelogram. So I'm struggling to understand this. Thanks in advance.
1
Providing texture coordinates and using indexed drawing at the same time. Please consider the following vertex structure:
    struct vertex { vec3 posL, normalL; };
Using this vertex layout, we can provide the vertex data in an interleaved way, i.e. (posL, normalL), (posL', normalL'), ... This works pretty fine in combination with indexed drawing via glDrawElements(). However, if we add texture coordinates to our vertex structure, i.e. consider
    struct vertex { vec3 posL, normalL; vec2 tex; };
things get more complicated. Since one and the same vertex (meaning its position) may participate in a variety of different faces with varying texture coordinates, I wondered how one would provide such vertex data to the vertex buffer. One solution would be to quit storing the data interleaved and provide it in a linear way, i.e. (posL, posL', ...), (normalL, normalL', ...), (tex, tex', ...), where each tuple has the same length. Doing so, we would hold related things together (i.e. the k-th element of each tuple forms exactly one input the vertex shader sees). But we would push much more data to the pipeline than necessary. So, what's the "ideal" solution to this problem? If one suggests using different buffers, how does the vertex shader know that texture coordinate tex at position i in Buffer 2 corresponds to the tuple (posL, normalL) at position j in Buffer 1?
1
How to save am image of a screen using JOGL Hope this is a better place to ask things like this. I have a 2D scene with some sprites drawn in Swing frame. I need them to be saved as an image. The problem is every tutorial I found seem to be obsolete. I found using glReadPixels and putting result to a BufferedImage should help but all those snippets look stale, API has changed. I'm using JOGL 2.1.5 that is current at the moment. I have a feeling this should be a widely known use case for professionals. Any help would be appreciated! Updated I adopted some stale snippets (please, don't tell me API hasn't changed significantly, there're dozens of movings and renamings) No the problem is I'm getting black screen in png file. It comes from byte buffer being read to with zeros only. public static void main(String args) TestPNGSaver saver new TestPNGSaver() saver.run() private void writeBufferToFile(GLAutoDrawable drawable, File outputFile) int width drawable.getWidth() int height drawable.getHeight() ByteBuffer pixelsRGB Buffers.newDirectByteBuffer(width height 3) GL2 gl drawable.getGL().getGL2() gl.glReadBuffer(GL.GL BACK) gl.glPixelStorei(GL.GL PACK ALIGNMENT, 1) gl.glReadPixels(0, 0, width, height, GL.GL RGB, GL.GL UNSIGNED BYTE, pixelsRGB) int pixels new int width height int firstByte width height 3 int sourceIndex int targetIndex 0 int rowBytesNumber width 3 for (int row 0 row lt height row ) firstByte rowBytesNumber sourceIndex firstByte for (int col 0 col lt width col ) if (pixelsRGB.get(sourceIndex) ! 0) System.out.println(sourceIndex) int iR pixelsRGB.get(sourceIndex ) int iG pixelsRGB.get(sourceIndex ) int iB pixelsRGB.get(sourceIndex ) pixels targetIndex 0xFF000000 ((iR amp 0x000000FF) lt lt 16) ((iG amp 0x000000FF) lt lt 8) (iB amp 0x000000FF) BufferedImage bufferedImage new BufferedImage(width, height, BufferedImage.TYPE INT ARGB) bufferedImage.setRGB(0, 0, width, height, pixels, 0, width) try ImageIO.write(bufferedImage, "PNG", outputFile) catch (Exception e) e.printStackTrace() Override public void display(GLAutoDrawable drawable) GL2 gl drawable.getGL().getGL2() gl.glClear(GL.GL COLOR BUFFER BIT) gl.glBegin(GL.GL TRIANGLES) gl.glColor3f(1, 0, 0) gl.glVertex2f( 1, 1) gl.glColor3f(0, 1, 0) gl.glVertex2f(0, 1) gl.glColor3f(0, 0, 1) gl.glVertex2f(1, 1) gl.glEnd() private void run() GLProfile.initSingleton() GLProfile glp GLProfile.getDefault() GLCapabilities caps new GLCapabilities(glp) final GLCanvas canvas new GLCanvas(caps) canvas.addGLEventListener(this) Frame frame new Frame() frame.setSize(400, 400) frame.add(canvas) frame.setVisible(true) frame.addWindowListener(new WindowAdapter() public void windowClosing(WindowEvent e) writeBufferToFile(canvas, new File(" whatever text.png")) System.exit(0) )
1
How do I apply skeletal animation from a .x (Direct X) file? Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supplies the transformations to transform the bind pose to the pose in the animated frame. As small example Animation Armature 001 Bone AnimationKey 2 Position 121 number of frames 0 3 0.000000, 0.000000, 0.000000 , 1 3 0.000000, 0.000000, 0.005524 , 2 3 0.000000, 0.000000, 0.022217 , ... AnimationKey 0 Quaternion Rotation 121 0 4 0.707107, 0.707107, 0.000000, 0.000000 , 1 4 0.697332, 0.697332, 0.015710, 0.015710 , 2 4 0.684805, 0.684805, 0.035442, 0.035442 , ... AnimationKey 1 Scale 121 0 3 1.000000, 1.000000, 1.000000 , 1 3 1.000000, 1.000000, 1.000000 , 2 3 1.000000, 1.000000, 1.000000 , ... So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix (call it Transform A) from them and apply that matrix the vertices controlled by Armature 001 Bone at their weights. So I'd stuff TransformA into my shader and transform the vertex. Something like vertexPos vertexPos bones int(bfs BoneIndices.x) bfs BoneWeights.x Where bfs BoneIndices and bfs BoneWeights are values specific to the current vertex. When loading in the mesh vertices, I transform them by the rootTransform and the meshTransform. This ensures they're oriented and scaled correctly for viewing the bind pose. The problem is when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies, still no dice. Basically I end up with some twisted models. It should also be noted that I'm working in openGL, so any matrix transposes that might need to be applied should be considered. What data do I need and how do I combine it for applying .x animations to models? I've made some mock ups of what this looks like, in case that's useful. First I wanted to just test the translation, this is a bobbing head, what it looks like in Blender http i.stack.imgur.com NAc4B.gif And what it looks like in game (don't mind the colors) http i.stack.imgur.com nj2du.gif Then for just rotation, the animation was the head rotating 360 degrees around the vertical axis. This is what that looks like in game http i.stack.imgur.com gVyUW.gif Note, there should be no tilt, only rotation like a merry go round. Update I have the translation part of the animation working. But it feels kind of hacky and I don't see how to apply it to the rotation. The translation works by taking these steps Take the position from the animation frame and swap the y and z values Translate the transformation matrix by the altered position Transpose the transformation matrix Apply the transformation matrix to the vertices So that's how it can work, but how is it supposed to work generally for position, scale and rotation?
1
glDrawElements not acting as expected code Vertices are created by an .obj file. (loading OBJFile.java) I draw a cube perfectly fine with glDrawArrays. (VertexModel.java) created like this new VertexModel(square.getIndexedVertexBuffer()) However, when I try to do the same with glDrawElements i get a distorted cube.. (IndexedVertexModel.java) The IndexedVertexModel is created like this new IndexedVertexModel(square.getVertexBuffer(), square.getVertexIndecies()) The result can be seen here. I've tried to figure out what's wrong, but i really have no idea what causes it to behave this way. The OBJ file I'm using looks like this v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 v 1.00000000000000 1.00000000000000 1.00000000000000 f 1 4 1 2 3 1 4 2 1 f 3 4 3 4 3 3 6 2 3 f 5 4 4 6 3 5 8 2 4 f 7 4 6 8 3 6 2 2 6 f 2 4 7 8 3 7 6 2 7 f 7 4 8 1 3 8 3 2 8 f 1 4 1 4 4 1 3 3 1 f 3 4 3 6 4 3 5 3 3 f 5 4 4 8 4 4 7 3 4 f 7 4 6 2 4 6 1 3 6 f 2 4 7 6 4 7 4 3 7 f 7 4 8 3 4 8 5 3 8
1
Looking for literature about graphics pipeline optimization. I am looking for some books, articles or tutorials about graphics architecture and graphics pipeline optimizations. They shouldn't be too old (2008 or newer); the newer, the better. I have found something in "Optimising the Graphics Pipeline" (NVIDIA, Koji Ashida; too old), "Real-Time Rendering" (Akenine-Möller), "OpenGL Bindless Extensions" (NVIDIA, Jeff Bolz), "Efficient multifragment effects on graphics processing units" (Louis Frederic Bavoil), and some internet discussions. But there is not too much information and I want to read more. It should contain something about application, driver, memory and shader unit communication and data transfers, about vertices and attributes, and also about the pre- and post-T&L caches (if they still exist in nowadays architectures), etc. I don't need anything about textures, frame buffers and rasterization. It can also be about OpenGL (not about DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex buffer unified memory).
1
How do I get GLEW to use my Nvidia GPU instead of an integrated Intel card? I'm new to graphics programming, though not to coding generally. I've been learning OpenGL from the OpenGL Superbible, 6th Edition. I was trying out the shader examples from the book but couldn't get them to work. The logged error says approximately "incorrect version". The book uses GLSL 4.3 core. I ran glewinfo.exe and the generated text file said that the core OpenGL version on my system was OpenGL 4.0, but those were the stats of my integrated Intel GPU, not my Nvidia GT720m GPU. I'm pretty sure the Nvidia GPU supports a higher OpenGL version. How do I get GLEW to detect the high-performance GPU and make calls to the Nvidia drivers instead of the integrated Intel HD GPU? A similar question has been asked here, but it concerned only running the program on the GPU. How do I tell which version of OpenGL the Nvidia GPU supports?
1
Texture not visible on particles This is the first time I am working with particles (GL POINTS) I am using kinematic equations and controlling their movement in vertex shader. I am following an example given in OpenGL 4.0 Cookbook by David Wolff. I am not able to see the texture which I am loading in the particle. However, I can see the texture if I put it to some other object, say quad. I have increased the size of POINT using glPointSize just so I can see if the texture is really visible. In the image above you can that I am loading and rendering the same texture for quad, but when I use the same texture for particles, I get different results. PS The color which you're seeing for the particles is from the texture itself. If i choose a texture having white color, the same is seen for the particles. Here is a code from my fragment shader. I figured if the texture is being displayed correctly for quad, then the issue may not be with the loading of the texture. if(rendPlane false) vs fragColor vec4(.3f,.3f,0.6f,1.00f) vs fragColor texture(u particlesTex, gl PointCoord) vs fragColor.a vs transp else plane... vs fragColor texture(u particlesTex, vs texCoord)
1
Does glScissor affect stencil and depth buffer operations? I know glScissor() affects glColorMask() and glDepthMask(), but does it affect the stencil and depth buffers? For example:
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_SCISSOR_TEST);
    glEnable(GL_STENCIL_TEST);
    glScissor(X, Y, W, H);
    // Is this color mask set only for the scissor area?
    glColorMask(TRUE, TRUE, TRUE, TRUE);
    // Does this stencil function only work within the scissor area?
    glStencilFunc(GL_ALWAYS);
    // Does the stencil op only work within the scissor area?
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    // Is this depth mask set only for the scissor area?
    glDepthMask(GL_TRUE);
    // Does this depth function only work within the scissor area?
    glDepthFunc(GL_ALWAYS);
1
How does Windows GDI work compared to DirectX/OpenGL? OpenGL and DirectX, if we are talking about graphics, are somehow built into a graphics card, and you can access them using the appropriate environment and draw some graphics. After reading the wiki, I came to the thought that even though GDI doesn't use the hardware directly like DirectX/OpenGL, it still gets access to the graphics card after some manipulations. What is GDI, and how does it work compared to these two?
1
Is it ok to use polymorphism as a way of "interfacing" with objects in a scene? I'm designing a class that holds values representing a 3D scene. This includes lights, cameras, meshes, materials, etc. The way how I'm setting it up is that each "thing" has a name. A camera has a name, a light has a name, etc. But they also have an "interface" to access the object via pointer instead of having to name it each time something needs to be changed. For example Light light scene.getLight("light2") light gt setBrightness(2.0f) The problems is what Light is. Ideally, it's "proper C " where it's a pointer directly to the internal object. However, these classes are closely tied to how the scene class works, which makes maintainability a mess. It's seems much easier to just have private implementations that do all the dirty work and only let the user receive pointers to an interface. What I want to do is make Light an interface (struct with virtual functions, no data, no non virtual functions except empty constructor) and implement the internal details as a private class inside the scene class. While this makes it more maintainable, it comes at the cost of using polymorphism where I'm not dealing with more than one implementation. Any thoughts or ideas are helpful.
1
Some maths about camera in 3D So after finishing a 2d game for my school project, I decided to dive into 3d world by using raylib instead of Unity. Raylib does provide a simple implementation of 3d camera without you doing maths, but instead I want to challenge myself to understand trigonometry in 3d.Here, I am trying to make the camera rotate upside down and left right(like it's orbiting). After implementing the math, it seems like it only works when the target is at Vector3 0.0f, 0.0f, 0.0f position. If the target position gets further and further, the rotation gets weird. Here is the demonstration video that I uploaded to youtube. https www.youtube.com watch?v MpXkhGy60HA amp feature youtu.be And below is my code for math Vector3 cubePosition 0.0f, 0.5f, 0.0f Camera3D camera 0 camera.position 0.0f, 10.0f, 10.0f Camera position camera.target 0.0f, 0.0f, 0.0f Camera looking at point camera.up 0.0f,2.0f, 0.0f Camera up vector (rotation towards target) camera.fovy 45.0f Camera field of view Y float distance 15.0f Set the distance from camera.target to camera.position float verticalAngle 45.0f The Y XZ angle float horizontalAngle 90.0f The X Z angle, y is considered as 0 float horizontalDistance distance cosf(verticalAngle PI 180.0f) Horizontal distance, the magnitude camera.position.x horizontalDistance cosf(horizontalAngle PI 180.0f) Calculate the position of camera.position x based on distance etc.. camera.position.z horizontalDistance sinf(horizontalAngle PI 180.0f) camera.position.y distance sinf(verticalAngle PI 180.0f) Vector3 selectedTarget cubePosition And for the main loop if (IsKeyDown(KEY A)) camera.position.x horizontalDistance cosf(horizontalAngle PI 180.0f) camera.position.z horizontalDistance sinf(horizontalAngle PI 180.0f) horizontalAngle 1.0f else if (IsKeyDown(KEY D)) camera.position.x horizontalDistance cosf(horizontalAngle PI 180.0f) camera.position.z horizontalDistance sinf(horizontalAngle PI 180.0f) horizontalAngle 1.0f else if (IsKeyDown(KEY W) amp amp verticalAngle lt 88.0f) horizontalDistance distance cosf(verticalAngle PI 180.0f) camera.position.x horizontalDistance cosf(horizontalAngle PI 180.0f) camera.position.z horizontalDistance sinf(horizontalAngle PI 180.0f) camera.position.y distance sinf(verticalAngle PI 180.0f) verticalAngle 1.0f else if (IsKeyDown(KEY S) amp amp verticalAngle gt 2.0f) horizontalDistance distance cosf(verticalAngle PI 180.0f) camera.position.x horizontalDistance cosf(horizontalAngle PI 180.0f) camera.position.z horizontalDistance sinf(horizontalAngle PI 180.0f) camera.position.y distance sinf(verticalAngle PI 180.0f) verticalAngle 1.0f if (IsKeyPressed(KEY RIGHT)) camera.position.x 1.0f selectedTarget.x 1.0f else if (IsKeyPressed(KEY LEFT)) camera.position.x 1.0f selectedTarget.x 1.0f if (IsKeyPressed(KEY UP)) camera.position.z 1.0f selectedTarget.z 1.0f else if (IsKeyPressed(KEY DOWN)) camera.position.z 1.0f selectedTarget.z 1.0f if (IsKeyPressed(KEY SPACE)) cubeList.push back(selectedTarget) camera.target selectedTarget Since raylib is using opengl so i tried to use the following link and picture to understand the camera system. https learnopengl.com Getting started Camera This is the picture I used to calculate my camera position. https uploads.disquscdn.com images 58fa1f1a3dd8d736a9345b3b168dd55caf0f14d485e9dae7e06b8e185348a42a.png Any help would be appreciated. Thank you..
1
Why would OpenGL ignore GL DEPTH TEST setting? I cannot figure out why some of my objects are being rendered on top of each other. I have Depth testing on. glEnable(GL DEPTH TEST) glDepthFunc(GL LEQUAL) Do I need to draw by order of what is closest to the camera? (I thought OpenGL did that for you.) Setup code private void setUpStates() glShadeModel(GL SMOOTH) glEnable(GL DEPTH TEST) glDepthFunc(GL LEQUAL) glEnable(GL LIGHTING) glEnable(GL LIGHT0) glLightModel(GL LIGHT MODEL AMBIENT, BufferTools.asFlippedFloatBuffer(new float 0, 0f, 0f, 1f )) glLight(GL LIGHT0, GL CONSTANT ATTENUATION,BufferTools.asFlippedFloatBuffer(new float 1, 1, 1, 1 ) ) glEnable(GL COLOR MATERIAL) glColorMaterial(GL FRONT, GL DIFFUSE) glMaterialf(GL FRONT, GL SHININESS, 50f) camera.applyOptimalStates() glEnable(GL CULL FACE) glCullFace(GL BACK) glEnable(GL TEXTURE 2D) glClearColor(0.0f, 0.0f, 0.0f, 0.0f) glEnableClientState(GL VERTEX ARRAY) glEnableClientState(GL COLOR ARRAY) glEnableClientState(GL NORMAL ARRAY) glHint(GL PERSPECTIVE CORRECTION HINT, GL NICEST) Render Code private void render() Clear the pixels on the screen and clear the contents of the depth buffer (3D contents of the scene) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) Reset any translations the camera made last frame update glLoadIdentity() Apply the camera position and orientation to the scene camera.applyTranslations() glLight(GL LIGHT0, GL POSITION, BufferTools.asFlippedFloatBuffer(500f, 100f, 500f, 1)) glPolygonMode(GL FRONT AND BACK, GL LINE) for(ChunkBatch cb InterthreadHolder.getInstance().getBatches()) cb.draw(camera.x(), camera.y(), camera.z()) The draw method in ChunkBatch public void draw(float x, float y, float z) shader.bind() shader.setUniform("cameraPosition", x,y,z) for(ChunkVBO c VBOs) glBindBuffer(GL ARRAY BUFFER, c.vertexid) glVertexPointer(3, GL FLOAT, 0, 0L) glBindBuffer(GL ARRAY BUFFER, c.colorid) glColorPointer(3, GL FLOAT, 0, 0L) glBindBuffer(GL ARRAY BUFFER, c.normalid) glNormalPointer(GL FLOAT, 0, 0L) glDrawArrays(GL QUADS, 0, c.visibleFaces 6) ShaderProgram.unbind()
1
Do game engines put all interpolation onto the GPU? Is it considered standard to push all the vertices and an interpolation of the next position onto the GPU? Suppose a sector is moving up every game tick at 50 units per second. You could put something like this in the shader (good old parametric-equation-like formulas):
    pos.x = a * x1 + (1 - a) * x0;
where you'd calculate a as a uniform and only send this value to the GPU in an uncapped fashion, like so:
    while (true) {
        a = /* percentage of the way between frames, in range from 0.0 to 1.0 */;
        updateGLUniform(..., a);
        render();
        // Rest of stuff
        delay();
    }
The other way of doing it was to manually set each entity and sector every single time you loop, but then you might have to update the position of a lot of things every game cycle. Just setting one value seems way more efficient. However, I don't know if there are any downsides to this. Are there any negatives to shoving interpolation off to the GPU to save computation time in the game? Or is there something that I'm missing? What do major games do?
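For concreteness, the scheme described above is usually wired up something like the following vertex shader sketch; the names u_alpha, a_prevPos and a_nextPos are placeholders, not taken from the question:
    #version 330 core
    layout(location = 0) in vec3 a_prevPos;  // position at the previous tick
    layout(location = 1) in vec3 a_nextPos;  // position at the next tick
    uniform float u_alpha;                   // 0.0 .. 1.0, fraction of the tick elapsed
    uniform mat4 u_mvp;
    void main() {
        vec3 pos = mix(a_prevPos, a_nextPos, u_alpha); // a*next + (1 - a)*prev
        gl_Position = u_mvp * vec4(pos, 1.0);
    }
The CPU then only has to update u_alpha every frame (and the per-tick positions once per tick), which is exactly the trade-off the question is weighing.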
1
How to create an OpenGL texture using primitives? I'm currently rendering a large proportion of the graphics of my game using custom OpenGL primitive functions (e.g. DrawCircleFilled(), DrawPolygonFilled(), etc.), drawing procedurally generated objects (e.g. trees). Typically the core of these functions is a glBegin/glEnd loop using GL_TRIANGLE_FAN, GL_LINE_LOOP, GL_LINES, or GL_POLYGON. While this generally works, I'm seeing some performance issues if I have too many objects using these functions on screen at once. For instance, I could render a brick wall by drawing 128 small brick-coloured rectangles with an outline. Alternatively, I could render a single OpenGL texture with these bricks pre-drawn, which takes up more video memory but seems to be less processor intensive. With this in mind, my question is: how do I take the output of the primitive functions and store it in an OpenGL texture at runtime? I'm aware of how to do this with an SDL_Surface, but I don't have access to the primitive drawing functions in SDL, and SDL primitive drawing is very slow (one of the reasons why I switched to raw OpenGL in the first place). Advice appreciated. Thanks, Nathan
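The usual route for this is an offscreen framebuffer object: draw the primitives into a texture once, then reuse that texture. A minimal sketch, assuming an OpenGL 3.0+ (or ARB_framebuffer_object) context and a 256x256 target size (both assumptions, not from the question):
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        glViewport(0, 0, 256, 256);
        // ... issue the same DrawCircleFilled()/DrawPolygonFilled() calls here ...
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the default framebuffer
    // Remember to restore the viewport; 'tex' now holds the pre-drawn primitives.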
1
Black area in skybox in OpenGL. Hi, I am using OpenGL and I am having a problem: if I move the camera further from the skybox I get black areas. A smaller radius for the skybox makes the camera too close to the object to stay inside the skybox, and increasing it obviously doesn't help. Here is the code for the skybox:
    glEnable(GL_TEXTURE_2D);
    glPushMatrix();
    qobj = gluNewQuadric();
    glTranslated(50, 0, 0);
    glBindTexture(GL_TEXTURE_2D, underwater);
    gluQuadricTexture(qobj, true);
    gluQuadricNormals(qobj, GL_SMOOTH);
    gluSphere(qobj, 100, 100, 100);
    glPopMatrix();
1
How do I render a word in the middle of the screen using FreeType and OpenGL? I'm following this tutorial series on making a simple 2D clone of the classic Breakout game, but its author doesn't mention how to render a whole word or sentence in the middle of the screen. He simply uses hard coded X and Y coordinates that obviously work for his screen, but not for mine (hence why the mere suggestion of using hard coded values for something like this makes my skin crawl). For more details on how he accomplishes the task, see this page of the aforementioned tutorial. My version of his game has two major differences I am rendering the GLFW window in fullscreen mode instead of a 800x600 window. I suspect that even if I were to use a 800x600 window, things still wouldn't look consistent across every device. As such I have made many changes to hard coded values across the entire code base. I'm a perfectionist so even though no one else will probably ever run my game, I'm trying to eliminate the possibility of different results on different resolutions. But to do that has turned out to be a massive undertaking! Here is the code that I'm having particular trouble with void TextRenderer RenderText(string text, GLfloat x, GLfloat y, GLfloat scale, vec3 color) Activate corresponding render state shader.Use() shader.SetVector3f("textColor", color) glActiveTexture(GL TEXTURE0) glBindVertexArray(vao) Iterate through all characters for (auto c text.begin() c ! text.end() c ) auto ch characters c auto xPos x ch.bearing.x scale auto yPos y (characters 'H' .bearing. y ch.bearing.y) scale auto w ch.size.x scale auto h ch.size.y scale Update VBO for each character GLfloat vertices 6 4 xPos, yPos h, 0.0, 1.0 , xPos w, yPos, 1.0, 0.0 , xPos, yPos, 0.0, 0.0 , xPos, yPos h, 0.0, 1.0 , xPos w, yPos h, 1.0, 1.0 , xPos w, yPos, 1.0, 0.0 Render glyph texture over quad glBindTexture(GL TEXTURE 2D, ch.textureID) Update content of VBO memory glBindBuffer(GL ARRAY BUFFER, vbo) glBufferSubData(GL ARRAY BUFFER, 0, sizeof(vertices), vertices) glBindBuffer(GL ARRAY BUFFER, 0) Render quad and advance cursors for next glyph glDrawArrays(GL TRIANGLES, 0, 6) Bitshift by 6 to get value in pixels (1 64th times 2 6 64) x (ch.advance gt gt 6) scale glBindVertexArray(0) glBindTexture(GL TEXTURE 2D, 0) I have tried adding a bool centreX and bool centreY param to the function, as well as another for loop before the main one like this if (centreX centreY) auto textWidth 0.0f auto textHeight 0.0f for (auto c text.begin() c ! text.end() c ) auto ch characters c if (centreX) textWidth (ch.bearing.x scale) (ch.size. x scale) ((ch.advance gt gt 6) scale) if (centreY) textHeight (y (characters 'H' .bearing.y ch.bearing.y) scale) (ch.size.y scale) if (centreX) auto lastCh text.end() 1 textWidth (characters lastCh .advance gt gt 6) scale But for the sentence, font and font size used I'm getting a textWidth value of 1952 while a screenshot of the rendered text is 1143 px wide S Where has my math or understanding of the original code gone wrong? My display is a high DPI one and I have my Windows scaling settings at 200 could either of those things be the culprit? MTIA!
1
How can I simulate a limited (256) color palette in OpenGL? On Twitter, I found this screenshot of a game in development. The image on top seems to be without any color limitation, but the two other pictures at the bottom have a 256 color palette. I want to achieve a similar effect in my game (I am using OpenGL). How can I do so?
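One common approach (a sketch, not necessarily the technique used in that screenshot) is a full-screen post-processing pass that quantizes the scene colors; 8 levels of red, 8 of green and 4 of blue gives exactly 256 representable colors:
    #version 330 core
    in vec2 v_uv;
    out vec4 fragColor;
    uniform sampler2D u_scene;                 // the already-rendered frame
    const vec3 levels = vec3(8.0, 8.0, 4.0);   // 8 * 8 * 4 = 256 colors
    void main() {
        vec3 c = texture(u_scene, v_uv).rgb;
        c = floor(c * (levels - 1.0) + 0.5) / (levels - 1.0); // snap each channel to its nearest level
        fragColor = vec4(c, 1.0);
    }
For an arbitrary hand-picked palette you would instead search a small palette texture for the nearest color per pixel, which is more expensive but gives the authored look.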
1
How do I better organise rendering in an entity component system? I'm developing an isometric RPG, with 3D characters in 2D level. I was using a standard object orientated programming paradigm, but was faced with a lot of issues. Recently, I learned about entity component systems, and am currently rewriting my engine using this paradigm. I use OpenGL, and have two rendering systems one for drawing in 2D, and one for drawing in 3D. Each system consists of one VBO id, with the appropriate geometry, shader program id, id of texture unit binded to the texture atlas (which accommodates all tilesets or all model textures) and other information like that. The alternate systems provide different logic for drawing in 2D and 3D. When it comes to the components for these systems, I am stuck. Say I have component, defined by the buffer offset from the beginning of VBO buffer and the primitive count that needs to be rendered. Let's pretend that I have two instances of these components. The first instance belongs to the level floor entity, and describes the bunch of quad tiles. The second instance is a character mesh that belongs to the player entity. The first component instance should use the 2D rendering system, and second instance should use the 3D rendering system, but the data structure of these instances are identical. It must be said that I use EntityX and the components gets in the systems, automatically depending on their type. Both systems require position and craphics components, so both systems will take both component instances, and one of these component will be processed improperly, what with a 2D graphics component in a 3D rendering system, and vice versa. I'm thinking about creating the "hollow" components, like sprite and model, which will indicate what graphics components in the entity mean. Since these component will not contain any properties, and would be used only as flags, this solution seems clumsy. How do I better organise rendering in an entity component system?
1
How do I align the cube in which shadows are computed with the view frustum? ("View Space aligned frustum") Short and concise: given 8 world-space positions that form a cube of arbitrary size, position and orientation, and given an arbitrary light direction, how do I compute the view and projection matrix for a directional light such that the shadows are computed in exactly and only this cube? Edit/Clarification: I want to create shadow maps in the way that Crytek calls "View Space aligned" (http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf, pages 54-55), so I want the cube in which shadows are computed to be aligned with the view frustum. How do I achieve that, when I already have the 8 world-space positions of a cube around the view frustum?
1
Peer to peer first person shooter I've been developing a first person shooter massive multiplayer online roleplaying game for my small business, and was wondering if it would be feasible to use peer to peer technology to communicate between players, without the use of an intermediary server (as our company does not have enough funds for a high speed connection to run a game server on). How would such Peer to peer technology be best implemented in such a game? Here's a few of the options I've been considering a) Divide the game into segments, and have each segment be played on a separate P2P "mesh" (currently named XRevolution) b) Have one mesh for all portions of the game, and place all of the players on a single grid c) Some other P2P solution I've already used option A (considering only one area of the game is done) on a small scale (4 computers), but am wondering how well that would scale when thousands of computers would potentially be bound to the same mesh. Here's a list of possible concerns with P2P technology During penetration testing, it was identified that a client could spoof packets to lie about the state of the game to other peers, allowing people to cheat. This is identified as "medium" risk, as it could be prevented to a certain extent through various security routines that would be integrated within the game (such as encryption, movement prediction, double checking state with other peers, AI, etc.) Lag was originally a problem on slower networks, but now movement prediction and other techniques are used to reduce the effect of lag, and disconnect peers whose lag is too high. However, a problem which remains is the amount of time it would take on a larger scale network to find peers (other players), and establish connections. Due to the amount of time it would take to connect to so many peers, solution A would seem optimal to solve this particular problem A problem with solution A, would be in game chat between meshes (as no two peers in different meshes can communicate with each other, except for the purpose of relaying), however, this problem could be potentially solved by using something in between solutions A and B (such as creating a separate mesh for inter mesh communication, which would be specific to each player or something) What's the best way to do something like this? Solution A, or B, something in between the two, or an entirely different solution altogether. Due to our budget, using a server to facilitate communication between players is completely out of the question, so it would be nice to use a P2P solution, otherwise the project would have to be abandoned.
1
How to achieve light that changes color mid-way? I thought of creating light sources and some colored windows. Now, the windows are semi-transparent. How could I make it so that when the light (say, pure white) hits the glass it continues through it, but changes color to the same color as the glass it passed through? I know the effect described here can be faked by using area lights on the "colored" side of the window, but what if I just wanted to have one white point light?
1
Reason for white lines in tile based game. My game is tile based. I am getting these lines between two tiles. They don't look nice. How can I resolve this issue?
1
Shadow mapping with directional light? I'm doing shadow mapping in my OpenGL 4.3 deferred renderer, and I'm starting with directional lights, believing them to be the easiest. What I do not understand is how the view-projection matrix is to be constructed for the shadow mapping depth pass. I mean, all I have is the light's direction, and it affects the entire scene, so how do I construct the view matrix from only the direction, and what of the projection matrix? Thanks
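For reference, the usual construction pairs an orthographic projection with a view matrix built by picking an arbitrary eye point along the light direction; in the sketch below the distance and box extents are placeholder values, not from the question:
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // lightDir: normalized direction the light travels in, world space.
    glm::mat4 directionalLightViewProj(const glm::vec3& lightDir, const glm::vec3& sceneCenter) {
        // Place a virtual "eye" behind the scene, looking along the light direction.
        glm::vec3 eye = sceneCenter - lightDir * 50.0f;               // 50.0f: placeholder distance
        // Pick a different up vector if lightDir is (nearly) vertical.
        glm::mat4 view = glm::lookAt(eye, sceneCenter, glm::vec3(0, 1, 0));
        // Orthographic box that must enclose the shadow casters and receivers.
        glm::mat4 proj = glm::ortho(-25.0f, 25.0f, -25.0f, 25.0f, 0.1f, 100.0f);
        return proj * view;
    }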
1
OpenGL fix objects to camera I'm developing some game in order to learn OpenGL. I can make an object follow me around when I move forward, backward, left, right, up or down, by setting it's position right infront of the camera. Problem starts when the camera rotates tilts (looking up, down, left, right...). The object is indeed sticks infront of the camera position (because I calculate a vector going out from the camera position and place it there), however I can't make the angle of the object to rotate tilt along with the camera, so it is just fixed. I think I might have to set the object coordinate system to correlate with the camera's coordinate system, but I can't figure out how. Maybe I'm wrong and there's another way to do that. My game implementation I have 4 vectors for the camera and 4 vectors for each object origin, x, y and z (to represent the objects coordinate system). To move the camera, I substract z axis from origin. Same for other directions. In order to tilt the camera, for example to the right, I just calculate rotation around Z axis and apply to X,Y axes. Note Movement in the world works good, in all directions and angles. Here's the (the relevant) code I use to position objects in the world. Before drawing them I first put them in their origin (recall I save an origin and x,y,z axes to all my objects), then do some scale to fit my desired size translate to position gl.glTranslatef((float)origin.x, (float)origin.y, (float)origin.z) scale to size gl.glScalef((float)size.x, (float)size.y, (float)size.z) Any ideas?
1
glGenVertexArrays causes crash. My code keeps crashing at runtime. I have done some creative debugging and determined that it was glGenVertexArrays that was causing the crash. I've looked around and come across some answers that told me to enable experimental mode in GLEW, but that didn't work. As far as I can tell my graphics card supports it; my OpenGL version is 3.1. I'm using freeGLUT and GLEW. Here's the code, the line in question is 45: http://hastebin.com/rekizejuza.cpp
    std::cout << "made it here\r\n";
    glGenVertexArrays(1, &meshID);
    std::cout << "not here here\r\n";
    glBindVertexArray(meshID);
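For context, a crash at the call itself usually means the function pointer was never loaded. A sketch of the usual GLEW setup order, assuming the freeGLUT window (and therefore the GL context) has already been created before this runs:
    // After glutCreateWindow(...), so that a current GL context exists:
    glewExperimental = GL_TRUE;   // ask GLEW to load core/extension entry points aggressively
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        fprintf(stderr, "GLEW init failed: %s\n", (const char*)glewGetErrorString(err));
        exit(EXIT_FAILURE);
    }
    // Only after a successful glewInit is glGenVertexArrays safe to call.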
1
Lighting in a Minecraftian World Minecraft is a game that is largely based on a heightmap and uses that heigtmap information to flood the world with light. From my understanding the highest point in the heightmap is the end of the sunlight influenced area. Everything above that is lit by sunlight, everything below that just is influenced by light nearby in a radius of 8 blocks. Thus if you have a floating island on the top of your world everything below that will be seen essentially as a cave. When two lights influence the same point the brighter light wins (unsure about that). Either way there are a couple of problems with minecrafts lighting model first of all, if your world does not have a heightmap it becomes trickier to figure out what exactly is supposed to emit sunlight and what not. A simple way would be to assume that the world is (in my case) a floating rock and then traverse each axis from both directions and figure out where the rock starts and ends. But this does not fully eliminate the problem as dents in the rock are not supposed to be in darkness. Minecraft itself will cache the light information in its chunks together with the information about the material of a block. Thus only if the world is modified the lighting has to update. Unfortunately that process is still pretty slow on updates and on quick light changes one can see the lighting lag behind. That's especially true if a lot of blocks change (TNT, sunset etc.) and you're not running the fastest computer (Or Java on Mac). From my still limited understanding of 3D graphics lighting a world like minecraft shouldn't be the biggest issue. How would you tackle the problem? I think the basic requirements for lighting in a voxel world would be update fast enough that it could happen in a single frame. One might be able to do the lighting in the graphics device and download the changed light information to the main RAM. light information must be quickly available for the main game logic so not entirely based on the graphics device reasoning light affects the growth of grass, spawning of monsters etc. light updates would have to be local to a chunk or have some other limit so that one does not have to relight the whole world which might be very large in size. The main idea would be to make the light updates fast, not necessarily more beautiful. For general light rendering performance improvements one could easily add SSAO on top of that which should result in much nicer worlds.
1
Help with dual quaternion skinning I'm trying to convert my code to use dual quaternion skinning instead of matrix skinning because i just can't get the skinning matrix created correctly from bones weights using matrices. Edit just to clarify, if each vert has only one bone the matrix version did skin correctly. But it's just not working and i really don't understand why, i'm using code i found here http www.chinedufn.com dual quaternion shader explained http donw.io post dual quaternion skinning I've looked at the assorted papers often brought up in the answers to these questions, but for the most part i just don't understand them. I'm not even entirely sure how the x, y, z, w of a glm quat correspond to the 1, i, j, k of a quaternion I think that w might be the scalar because it goes first in the actual byte order of the glm quat struct, but i'm not sure. Anyway, in my shader bindings i have void gltfShader bindBones(const std vector lt glm mat4 gt amp bones) std vector lt glm fdualquat gt dual quats(std min lt int gt (bones.size(), 64)) glm quat r glm vec3 t, s, sk glm vec4 pr glm fdualquat dq for(size t i 0 i lt dual quats.size() i) glm decompose(bones i , s, r, t, sk, pr) dq 0 r dq 1 glm quat(t.x, t.y, t.z, 0) r .5f dual quats i dq glUniformMatrix2x4fv(u bones, dual quats.size(), GL FALSE, (float ) amp dual quats 0 ) In the vertex shader mat2x4 GetBoneTransform(ivec4 joints, vec4 weights) float sum weight weights.x weights.y weights.z weights.w Fetch bones mat2x4 dq0 u bones joints.x mat2x4 dq1 u bones joints.y mat2x4 dq2 u bones joints.z mat2x4 dq3 u bones joints.w Ensure all bone transforms are in the same neighbourhood weights.y sign(dot(dq0 0 , dq1 0 )) weights.z sign(dot(dq0 0 , dq2 0 )) weights.w sign(dot(dq0 0 , dq3 0 )) Blend mat2x4 result weights.x dq0 weights.y dq1 weights.z dq2 weights.w dq3 result 0 3 int(sum weight lt 1) (1 sum weight) Normalise float norm length(result 0 ) return result norm mat4 GetSkinMatrix() mat2x4 bone GetBoneTransform(a joints0, a weights0) vec4 r bone 0 vec4 t bone 1 return mat4( 1.0 (2.0 r.y r.y) (2.0 r.z r.z), (2.0 r.x r.y) (2.0 r.w r.z), (2.0 r.x r.z) (2.0 r.w r.y), 0.0, (2.0 r.x r.y) (2.0 r.w r.z), 1.0 (2.0 r.x r.x) (2.0 r.z r.z), (2.0 r.y r.z) (2.0 r.w r.x), 0.0, (2.0 r.x r.z) (2.0 r.w r.y), (2.0 r.y r.z) (2.0 r.w r.x), 1.0 (2.0 r.x r.x) (2.0 r.y r.y), 0.0, 2.0 ( t.w r.x t.x r.w t.y r.z t.z r.y), 2.0 ( t.w r.y t.x r.z t.y r.w t.z r.x), 2.0 ( t.w r.z t.x r.y t.y r.x t.z r.w), 1) EDIT video of the results https youtu.be 8jIt Xhhffk The one on the left is supposed to be a walk cycle, the one on the right is supposed to be this https github.com KhronosGroup glTF Sample Models tree master 2.0 Monster
1
How to implement camera pan like in Maya? I am trying to implement camera pan like the one in Maya. I've got it almost working. The problem is that the mouse cursor is moving faster than the 3d mesh (in fact I am moving the camera but I said mesh, because it is more intuitive), while in Maya the distance between the mouse position and the mesh stays always the same. This is what my code looks like glm vec3 dist cameraPos modelPos float len glm length(dist) float clipX startX 2.0 width 1.0 float clipY 1.0 startY 2.0 height float clipEndX lastX 2.0 width 1.0 float clipEndY 1.0 lastY 2.0 height convert begin and end mouse positions into world space glm mat4 inverseMVP glm inverse(projection view modelMatrix) glm vec4 outVector inverseMVP glm vec4(clipX, clipY, 0.0, 1.0) glm vec3 worldPos(outVector.x outVector.w, outVector.y outVector.w, outVector.z outVector.w) glm vec4 outEndVec inverseMVP glm vec4(clipEndX, clipEndY, 0.0, 1.0) glm vec3 worldPos2(outEndVec.x outEndVec.w, outEndVec.y outEndVec.w, outEndVec.z outEndVec.w) glm vec3 dir worldPos2 worldPos glm vec3 offset glm length(dir) glm normalize(dir) angleFovRatio len 2.f cameraPos offset cameraTarget offset Where angleFovRatio std tan(cameraFov 0.5f) I am using glm lookAt(cameraPos, cameraTarget, cameraUp) for my view matrix I will add picture of what my problem is As you can see, when I drag to the right, the red dots are my begin and end mouse positions. But the teapot is on the blue dot, which is not what I want. It must be centered on the second red dot (mouse end position, just like in maya).
1
Can I use multiple OpenGL versions together? I want to use GLSL but keep my current OpenGL 1.1 setup. The thing is: can I use OpenGL 2.0 shaders on OpenGL 1.1 renders?
1
Access alpha value of opengl texture pixels I'm trying to get the alpha values of each pixel of an image that I'm loading onto a texture so i can use it to check for per pixel collision. I have a texture class struct GLTexture std string filePath "" GLuint id int width int height std vector lt std vector lt int gt gt alphaPixels When i bind the texture i want to do something like this Generate the openGL texture object glGenTextures(1, amp (texture.id)) Bind the texture object glBindTexture(GL TEXTURE 2D, texture.id) Upload the pixels to the texture glTexImage2D(GL TEXTURE 2D, 0, GL RGBA, width, height, 0, GL RGBA, GL UNSIGNED BYTE, amp (out 0 )) for(int i 0 i lt width i ) for(int j 0 j lt height j ) alphaPixels i j getAlpha(GL TEXTURE 2D) lt something like this line but i don't know how to access to alpha value How can i get the alpha value of each pixel of the image? Is this even a feasible thing to do or is there a better way to do this?
1
How can I generate a view or projection matrix for OpenGL 3+? I'm transitioning from OpenGL 2 to OpenGL 3+ and to GLSL 1.5, and I'm trying to avoid using the deprecated features. My question: how do we now generate the view or projection matrix? I was using the matrix stack to calculate the projection matrix for me:
    GLfloat ptr[16];
    gluPerspective(...);
    glGetFloatv(GL_MODELVIEW_MATRIX, ptr);
    // then pass ptr via a uniform to the shader
But obviously the matrix stack is deprecated, so this approach is not the best option going forward. I have the 'Red Book', 7th ed, which covers 3.0 & 3.1, and it still uses the deprecated matrix functions in its examples. I could write some utility code myself to generate the matrices, but I don't want to re-invent this particular wheel, especially when this functionality is required for every 3D graphics program. What is the accepted way to generate world, view & projection matrices for OpenGL? Is there an emerging 'standard' library for this? Or is there some other hidden (to me) functionality in OpenGL/GLSL which I have overlooked?
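A host-side math library such as GLM has since become the de-facto answer here; its functions mirror gluPerspective/gluLookAt. A minimal sketch, assuming a recent GLM (angles in radians) and an existing shader program handle named program with a uniform called u_viewProj (both names are placeholders):
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0, 2, 5),   // eye
                                 glm::vec3(0, 0, 0),   // target
                                 glm::vec3(0, 1, 0));  // up
    GLint loc = glGetUniformLocation(program, "u_viewProj");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(proj * view));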
1
How should I handle a modelview stack with multiple shader programs involved? I'm building a framework where each object has an associated program and each object has a 'draw' method. What is the best choice: to have a single modelview stack handled by a Renderer class, or to have a modelview stack for each program? In the preferred case, how can I optimally handle a "world" transformation that affects all the objects independently of the program (i.e. moving the camera)?
1
Backface culling without light leaking through I want to be able to see through walls, so to do this I used planes for the walls, and enabled backface culling. However with shadow mapping I have a lot of light leaking through I read that using a thick wall solves this, so I did just that. This solved the leaking light but now I cannot see through the walls (camera is within the room in the above screen), all the backfaces are inward. How could I prevent light leaking, as well as enable seeing through the walls?
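One way out (a sketch of the idea rather than the only solution): keep two representations of the walls — the thin, inward-facing planes that the camera renders, and thick or closed wall geometry that only the shadow-map pass renders. The camera pass still culls the walls you should see through, while the light always sees solid geometry and cannot leak. The helper names below are hypothetical:
// Shadow-map pass: draw only the thick/closed wall geometry
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);            // optional: store back-face depth to reduce shadow acne
drawThickWalls();                // hypothetical helper for the solid version of the walls
glCullFace(GL_BACK);

// Camera pass: draw only the thin, inward-facing planes with normal back-face culling
drawWallPlanes();                // hypothetical helper; culled automatically when seen from outside
Separating "what the light sees" from "what the camera sees" is a fairly common way to get both behaviours at once.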
1
How do I deallocate release delete a texture created with glTexStorage2D? I tried to find this but found nothing. Does anyone know how to 'deallocate' a texture and its mipmaps allocated with glTexStorage2D once you don't need them anymore?
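The storage created by glTexStorage2D is immutable for the lifetime of the texture object, so the way to free it is to delete the texture object itself (a minimal sketch):
glDeleteTextures(1, &textureId);   // frees the texture object and all of its mipmap storage
textureId = 0;
// If a different size or format is needed later, generate a new texture and call glTexStorage2D again.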
1
OpenGL Core profile Array of arrays in glBufferData for VBOs I want to send each face as VBO, and I structured the data as this facevbo 0 x x x x x x x,y,z,r,g,b,s,t facevbo 1 x x x x x facevbo 2 x x x x x x x facevbo 3 x x x x . . . facevbo numfaces x x x x Basicly I have dynamic 2d array with different column size GLfloat faceVBO new GLfloat m numOfFaces faceVBO dfaceIndex new GLfloat pFace gt numOfVerts 8 When I print all the faces, everything is fine, but when im trying to put them to use, i have problem at this line (ideally I would iterate trough all the faces to bind each face, but lets take the first column) glBufferData(GL ARRAY BUFFER, 4 oindices, faceVBO 0 , GL STATIC DRAW) the third parameter (faceVBO 0 ) "argument of type GLfloat is incompatible with param. of type const void " but if i tead from array, like GLfloat facevbo x,x,x,x... and glBufferData(GL ARRAY BUFFER, 4 oindices, faceVBO, GL STATIC DRAW) it works OK. How should i send the data in this case? Thanks
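glBufferData needs one contiguous block of memory, and an array of per-row pointers is not contiguous. A sketch of the usual fix, keeping one std::vector<GLfloat> per face (a single big VBO with per-face offsets is usually even better, but this matches the per-face layout above):
#include <vector>

std::vector<std::vector<GLfloat>> faceVBO(numFaces);
faceVBO[f].resize(numOfVertsInFace * 8);              // x, y, z, r, g, b, s, t per vertex
// ... fill faceVBO[f] ...

glBindBuffer(GL_ARRAY_BUFFER, vbo[f]);
glBufferData(GL_ARRAY_BUFFER,
             faceVBO[f].size() * sizeof(GLfloat),     // size is in bytes, not element count
             faceVBO[f].data(),                       // contiguous pointer glBufferData expects
             GL_STATIC_DRAW);
Also note that the second argument must be a byte count; "4 * oindices" only works if oindices happens to be the number of floats being uploaded.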
1
Dynamic Terrain Triangulation Is there someone who know have an algorithm which can perform terrain triangulation like on the example image right under (there is a secondary image as well). The reason I say "Dynamic" is because I want it to support Dynamic changes to the terrain, like if one was to dig into the ground or into the side of a mountain, then it should be capable of triangulating the new changes without destroying the old terrain which doesn't need changes. Ignore the things like the trees, etc. This is only about the style of the terrain itself. Click here for fullscreen picture Secondary Image I've tried using the Marching Cubes Algorithm, though it didn't give me that triangle'ish feeling, like in the image(s) above (While ignoring the normal rendering in my example). Bottom Line Does anybody know have an algorithm, or any idea of how to... Triangulate terrain so it have that style like in the above example image(s). Support dynamic changes like Marching Cubes does, but with this triangle'ish style. Note All the "points" would be in an unsorted list, where the "points" of course is all the vertices, which should be used to generate all the triangles from. Disclaimer I usually answer and sometimes ask question on Stack Overflow, so if anything is wrong with my question here I apologize, though comment and I will fix whatever is wrong.
1
Why does the pitch affect the x component of the front vector? In every tutorial for implementing a camera in OpenGL, the front vector is calculated with something like this front.x cos(pitch) cos(yaw) front.y sin(pitch) front.z cos(pitch) sin(yaw) What I don't understand is why the pitch affects the x component? Shouldn't it just be front.x cos(yaw) Also, for the z component, why do we multiply cos(pitch) with sin(yaw). I understand that the pitch and the yaw both affect the z component, but why multiply? Why not add or something else?
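A short derivation may help (a sketch using standard spherical coordinates: yaw rotates around the world Y axis, pitch lifts the vector out of the XZ plane). With a unit-length front vector, first take the horizontal direction given by the yaw alone: (cos(yaw), 0, sin(yaw)). Pitch then tilts the vector toward +Y: the vertical component becomes sin(pitch), and since the whole vector must stay length 1, the horizontal part must shrink so that |horizontal|^2 + sin^2(pitch) = 1, i.e. |horizontal| = cos(pitch). Scaling the horizontal direction by that length gives
front = ( cos(pitch) * cos(yaw),  sin(pitch),  cos(pitch) * sin(yaw) ).
So cos(pitch) is not "mixed into" the yaw; it is the length of the entire horizontal projection, which is why it multiplies both the x and z components. Multiplication (not addition) appears because the two rotations compose: the yaw picks the direction within the XZ plane, and the pitch scales how much of the vector lies in that plane at all. At pitch = 0 the formula collapses back to front.x = cos(yaw), front.z = sin(yaw), which is the case the simpler intuition covers.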
1
How to draw a plane equivalent to a given btStaticPlaneShape using OpenGL With all other shapes in bullet, you can easily get the transform from the MotionState which holds the origin, the scale, and the rotation. There seems to be no way to get the same information about a btStaticPlaneShape. How can I get this information, so that I can build a rectangle with 4 vertices that will act as a representation in OpenGL?
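btStaticPlaneShape stores the plane as a normal and a constant rather than a transform, so the usual approach is to build a large quad yourself from two tangent vectors. A sketch, assuming Bullet's getPlaneNormal()/getPlaneConstant() accessors:
const btStaticPlaneShape* plane = static_cast<const btStaticPlaneShape*>(shape);
btVector3 n = plane->getPlaneNormal().normalized();
btScalar  d = plane->getPlaneConstant();          // plane equation: dot(n, x) = d
btVector3 origin = n * d;                         // a point on the plane

// Any vector not parallel to n works as a helper for building the tangent basis
btVector3 helper = btFabs(n.y()) < btScalar(0.99) ? btVector3(0, 1, 0) : btVector3(1, 0, 0);
btVector3 u = n.cross(helper).normalized();
btVector3 v = n.cross(u);

const btScalar size = 100.0f;                     // the "infinite" plane drawn as a big quad
btVector3 verts[4] = {
    origin + ( u + v) * size,
    origin + ( u - v) * size,
    origin + (-u - v) * size,
    origin + (-u + v) * size,
};
// upload verts as two triangles and draw them with your usual OpenGL pipeline
Remember to apply the body's world transform from its motion state on top of this, since the normal/constant are defined in the shape's local space.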
1
Deferred Shading and material ID I am implementing a deferred rendering framework, and I want to allow programmers to write custom materials. However, I have not yet worked out how to handle different materials. For now, a material is just a set of uniforms to me. I wanted to write the material ID into the G-buffer, but I don't see the point of doing so, since the lighting pass uses the same shader for every fragment (obviously). In the Frostbite paper (page 15), they explain that they use a Material ID and a MatDat parameter, but I did not really understand what these can be and how they are handled during the lighting pass. Is the material ID used in a simple branching condition? In that case, when a new material is created, do I have to rebuild the lighting-pass uber shader by adding this branch and the code to execute? UE4 seems to build GLSL code from the material editor, which is not hard in itself, but how do they handle it with the deferred renderer? Do they just plug the GLSL code into the uber shader?
1
How can I animate a portion of the textures on a model? I have a model to which I have attached multiple textures. Both textures are currently static, but if I want to move (or slide) the texture which is on the top (in UV space), is that possible? Maybe by moving the texture coordinates or something?
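Yes — the usual trick is to leave the mesh's UVs alone and add a time-based offset only to the coordinates used by that one texture. In the fixed-function pipeline this can be done with the texture matrix; a minimal sketch (offsetU/offsetV are whatever you advance per frame, and the draw helper is hypothetical):
glMatrixMode(GL_TEXTURE);               // affects the currently active texture unit
glLoadIdentity();
glTranslatef(offsetU, offsetV, 0.0f);   // slides the texture across the surface
glMatrixMode(GL_MODELVIEW);
drawModelPartWithSlidingTexture();
// reset the texture matrix afterwards so other textures are unaffected
With shaders, the equivalent is adding a uniform vec2 offset to the UV before sampling that texture. Either way, only the top texture's coordinates move; the other textures keep their own UVs.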
1
How can glass breaking effect from Smash Hit be achieved? I saw Smash Hit the other day and was amazed by the physics of the game, specially the shattered glass effect I've read other posts about this subject but I still feel that they don't share enough details to let me get started on implementing this on my own with OpenGL GLSL. Is it possible for somebody with an enhanced perception and graphics understanding to watch the gameplay and give some pointers on how this effect could be replicated? I rather not use 3rd party physics engine and do the entire thing on my own for educational purposes, so could you mention some of the physics that goes behind this as well? References to other documents and demos are highly appreciated.
1
What version of OpenGL should I code for, given compatibility and performance considerations? When the OpenGL spec is updated, they only ever add features. So in theory, the latest and greatest hardware with support for the Core and Compatibility profiles should run super old OpenGL 1.1 code just fine. This has turned out to be true. I've spent 12 months learning OpenGL 1.1 and have a fair grasp of it. I read a few chapters of one of those fancy new OpenGL 4.2 books, and almost all of the features I rely on have been deprecated (like display lists), which lets me assume there are better ways of doing all these things. Let's grant that 1.1 is likely to be supported, in full, by ALL modern hardware — 1.1 was released in 1992, and I'm not coding the hard way just to support 20-year-old PCs. I think it's reasonable to assume most gamers are running hardware that bottoms out at about 5-year-old mid-range. I think the newer methods are designed to be one of two things: better performing, or easier to code. I read somewhere that it's never both, though! What version of OpenGL is most widely supported by roughly 5-year-old hardware? What version makes most sense to use, given these considerations?
1
Transparency problem, phong model It's the first time I'm trying to implement the Phong lighting model. I'm pretty sure everything is working fine. I was experimenting using different materials, meaning I played with Kd,Ks,Ka and Ns values when I came across this problem. Whenever I use a material with alpha value less than one I get this weird result At first I thought it has to do with the normals of the model since they are all supposed to point away from the model but it doesn't seem to be the case. Any ideas?
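If the alpha is meant to make the object translucent, the result in the screenshot is typical of drawing transparent triangles with depth writes on and in arbitrary order. A hedged sketch of the usual setup (helper names are illustrative):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// 1. draw all fully opaque geometry normally
drawOpaqueObjects();

// 2. then draw transparent objects, ideally sorted back to front,
//    with depth testing still on but depth writes off:
glDepthMask(GL_FALSE);
drawTransparentObjects();
glDepthMask(GL_TRUE);
Without the sorting / depth-mask step, triangles of the same model write their depth first and then occlude the triangles behind them before those are ever blended, which produces exactly this patchy look.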
1
GLM Quaternion SLERP Interpolation I wish to interpolate two quaternion values. As I still can not get working results, can I kindly ask you to verify my function calls? The code below supports GLM (OpenGL Mathemathics) library, so this questions might be for those, who know it. Firstly, I perform Quaternion intialization from Euler Angles glm quat myAxisQuat(pvAnimation gt at(nFrameNo).vecRotation) glm quat myAxisNextQuat(pvAnimation gt at(nFrameNo 1).vecRotation) Secondly, I interpolate between the two input quaternions. The variable fInterpolation contains value in the range 0.0f 1.0f. myInterpolatedRotQuat glm mix(myAxisQuat, myAxisNextQuat, fInterpolationTime) Thirdly, I convert my interpolated quaternion back to Euler Angles vecInterpolatedRot glm gtx quaternion eulerAngles( myInterpolatedRotQuat) At the end, the values in vecInterpolatedRot do not represent the interpolated EulerAngles. It is difficult to understand the Quaternion values after conversion from Euler Angles, so I would like to ask you for your help, please. What can be wrong, please? I double and tripple checked input variables, I tried various approaches, and the only issue, at this moment might be with the third Aplha parameter in glm mix() Update To provide you with more information, the returned values in vecInterpolatedRot are extremely low. At the end of the interpolation, I would expect valid Euler angles. This is random sequence of interpolated values, as the object moves according to predefined animation path. rotX 1.7451 rotY 1.7993 rotZ 0.854642 rotX 1.06451 rotY 1.18485 rotZ 0.694015 rotX 0.254822 rotY 0.437004 rotZ 0.942035 rotX 0.578816 rotY 0.335103 rotZ 0.716057 rotX 1.53934 rotY 1.07602 rotZ 1.0182 rotX 2.5582 rotY 1.87737 rotZ 0.759468 rotX 2.58259 rotY 2.47432 rotZ 1.06071 rotX 1.35049 rotY 3.11548 rotZ 0.81839 rotX 0.0106472 rotY 2.78129 rotZ 1.04353 rotX 1.46636 rotY 2.33968 rotZ 0.879188 rotX 0.0289322 rotY 2.31166 rotZ 0.91746 rotX 1.47901 rotY 2.37235 rotZ 0.938591 rotX 2.59482 rotY 2.89469 rotZ 1.15554 rotX 2.47283 rotY 2.76131 rotZ 0.992493 rotX 1.73065 rotY 1.53285 rotZ 1.27898 rotX 0.85806 rotY 0.176976 rotZ 1.03487 rotX 0.452009 rotY 1.14604 rotZ 0.927788 rotX 0.0604701 rotY 2.12479 rotZ 1.05684 rotX 0.107648 rotY 2.07785 rotZ 1.05071 rotX 0.154894 rotY 2.03083 rotZ 1.04569 rotX 0.809623 rotY 2.14456 rotZ 1.31262 rotX 1.15268 rotY 0.332553 rotZ 0.983604 rotX 2.16299 rotY 0.545458 rotZ 1.11758 rotX 2.95376 rotY 1.2008 rotZ 0.846527 rotX 2.94892 rotY 0.892473 rotZ 1.17334 rotX 1.89716 rotY 1.30162 rotZ 1.53247 rotX 0.804938 rotY 1.93659 rotZ 1.37281 rotX 0.653453 rotY 1.73722 rotZ 1.14364 rotX 2.24713 rotY 0.658935 rotZ 1.03684 rotX 2.97528 rotY 0.508203 rotZ 0.559124 rotX 2.49988 rotY 0.640482 rotZ0.0117903 rotX 1.57379 rotY 1.16303 rotZ0.288639 rotX 1.4928 rotY 1.17794 rotZ0.902059 rotX 0.667796 rotY 1.94995 rotZ1.49074 rotX 2.12971 rotY 1.85782 rotZ0.904871 rotX 2.36951 rotY 2.03682 rotZ0.189242 rotX 1.5574 rotY 2.92156 rotZ 0.450418 rotX 1.6256 rotY 2.29519 rotZ 1.46659 rotX 2.85414 rotY 2.11303 rotZ 0.42888 rotX 2.48503 rotY 2.96942 rotZ0.189887 rotX 1.55656 rotY 3.00852 rotZ0.675669
1
WinAPi OpenGL Borderless Fullscreen Alt Tab problems I'm implementing a borderless fullscreen functionality and managed to make something that almost works. Using C , Winapi, OpenGL 3.3. When in fullscreen borderless mode, if alt tab the game window loses focus but is still on top, not even the alt tab window shows. I managed to track it down to SwapBuffers. If i disable it, no problems but also no rendering. If i force the window to hide when it loses focus, the entire screen flickers. And if i alt tab back, only the game window flickers. If i put the game in windowed mode, it works as normal. Window Styles are first set to(windowed mode stops here) WS OVERLAPPED WS MINIMIZEBOX WS SYSMENU WS VISIBLE And then are set to(if fullscreen) GetWindowLong(window, GWL STYLE) amp WS OVERLAPPEDWINDOW My gameloop while(isRunning) loopNewTime timeGetTime() loopDuration (r32)(loopNewTime loopOldTime) loopTime Clamp(0.0f, (loopDuration sleepTime), 250.0f) loopOldTime loopNewTime loopAccumulator loopTime sleepTime while(loopAccumulator gt loopFixedDeltaTime) if(isFocused canRunWithoutFocus) DoStuff() loopAccumulator loopFixedDeltaTime if(isFocused canRunWithoutFocus) DoStuff() glClearColor(0.0f, 0.0f, 0.0f, 1.0f) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT GL STENCIL BUFFER BIT) DrawStuff() SwapBuffers(deviceContext) framerateTimeThreshhold loopTime sleepTime framerateCount if(framerateTimeThreshhold gt 1) framerate framerateCount framerateCount 0 framerateTimeThreshhold sleepTime Max(timeStep loopTime, 0.0f) Sleep((u32)sleepTime) So, how can i solve this problem? Thanks in advance.
1
Adjusting view matrix when glViewPort resizing I have an issue when my window resizes. I have a simple MVP shader like this gl Position projection view model vec4(in Position, 1.0) to render a map in my game scene. The map is extremely simple that it simply offers a lane's vertex. And what I need to do is drawing it inside the window. So I set up my MVP matrix individually, the "view" and "model" matrix remains the same because all I want to do is drawing a static map. "perspective" is the only matrix need to change since I want to have a zoom feature that I can zoom in out by changing fovy. "perspective" is updated by calling glm perspective(glm radians(fovy), (float)width height, near, far) and my view matrix is calculated by glm lookAt(camera pos, camera pos camera front, camera up) However, when I resize the viewport, I find everything changed. I mean by default, the resolution gonna drop when my window goes larger if I didn't do anything(like updating the glViewPort()). But the behavior I wanna achieve is like "I could see more part of the map when I made my window size larger". Therefore I believe that I should keep my viewport resolution same (glViewPort(0, 0, screen width, screen height)). And updating perspective matrix by changing the ratio when new window's width and height comes in. Then the final step is to update the camera's position to update view matrix. However I stuck here for a while coz I don't know how to adjust it. I believe this is a common issue but I cannot even find a source talking about it. Do I think it the wrong way? Do I need to change Camera's position to update my view matrix to achieve the resize behavior? Platform Ubuntu 16.04, OpenGL3.3 , C 14
1
Why is texturing interfering with my ID based picking implementation? Today I came across a tutorial about color picking and I implemented it on my machine. But there is a problem when I disable the texture and draw the object I want to pick with its picking color, it turns to black (the first colorID) and then switches to the texture which I am placing on pick. That is the case when I enable the textures after rendering the object. When I enable texturing before rendering my object, as in void MenuButton Render(double windowWidth, double windowHeight) glEnable(GL TEXTURE 2D) gui OrthogonalStart(windowWidth, windowHeight) TextureManager Inst() gt BindTexture(gui TextureID) glBegin(GL QUADS) glTexCoord2f(0.0f, 1.0f) glVertex2f(gui Position.GetX(), gui Position.GetY() 25.0f) glTexCoord2f(1.0f, 1.0f) glVertex2f(gui Position.GetX() 100.0f, gui Position.GetY() 25.0f) glTexCoord2f(1.0f, 0.0f) glVertex2f(gui Position.GetX() 100.0f, gui Position.GetY()) glTexCoord2f(0.0f, 0.0f) glVertex2f(gui Position.GetX(), gui Position.GetY()) glEnd() gui OrthogonalEnd() Then the texture cannot be disabled and the color picking isn't working. I hope I explained it well. If the code provided in the link isn't enough tell me and I will post more. Can this problem be solved in some way?
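For colour picking, the ID pass has to be rendered with everything that can alter the final colour switched off, and those states re-enabled only for the normal pass. A minimal sketch of the picking pass (helper names are illustrative, and glDisable(GL_LIGHTING) only matters if lighting is on):
// Picking pass: plain, unlit, untextured colours only — never shown on screen
glDisable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
glDisable(GL_BLEND);
glColor3ub(idR, idG, idB);        // the object's unique picking colour
drawObjectGeometry();             // same geometry, but no glColor/texcoord calls inside
// read the pixel under the mouse here with glReadPixels; do NOT swap buffers

// Normal pass: restore state and draw as usual
glEnable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);
drawObjectTextured();
The key points are that the picking render is a separate, invisible frame, and that texturing is disabled before it and restored afterwards rather than left in whatever state the last draw happened to use.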
1
Raycasting mouse coordinates to rotated object? I am trying to cast a ray from my mouse to a plane at a specified position with a known width and length and height. I know that you can use the NDC (Normalized Device Coordinates) to cast ray but I don't know how can I detect if the ray actually hit the plane and when it did. The plane is translated 100 on the Y and rotated 60 on the X then translated again 100. Can anyone please give me a good tutorial on this? For a complete noob! I am almost new to matrix and vector transformations.
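Once you have the ray in world space (origin and direction from the NDC unprojection), hitting an oriented plane is a two-step test: intersect the infinite plane, then check the hit point against the quad's extents. A sketch with GLM, assuming the plane's model matrix contains the translate/rotate you describe and that the quad lies in its local XY plane with +Z as normal (adjust the axes if yours differs):
// Plane point and normal taken from the model matrix
glm::vec3 planePoint  = glm::vec3(model * glm::vec4(0, 0, 0, 1));
glm::vec3 planeNormal = glm::normalize(glm::vec3(model * glm::vec4(0, 0, 1, 0)));

float denom = glm::dot(rayDir, planeNormal);
if (glm::abs(denom) > 1e-6f) {                       // ray is not parallel to the plane
    float t = glm::dot(planePoint - rayOrigin, planeNormal) / denom;
    if (t >= 0.0f) {                                 // hit lies in front of the ray origin
        glm::vec3 hit = rayOrigin + t * rayDir;
        // bring the hit into the plane's local space and test against its half-extents
        glm::vec3 local = glm::vec3(glm::inverse(model) * glm::vec4(hit, 1.0f));
        bool inside = glm::abs(local.x) <= halfWidth && glm::abs(local.y) <= halfLength;
    }
}
The parameter t also tells you "when" the ray hits: it is the distance along the (normalized) ray direction to the intersection point.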
1
Averaging normals, or tangents I am using a library to load an obj but it doest compute the tangets for each vertex, which I need for normal mapping and pom. I computed my tangets, and bitangents, everything appears to be fine, but I read that I need to average tangents if the vertices are the same (same position, uv, normal). So for example if there is a vertex v that has the same pos,uv amp norms as vertex v1 and v2, then the tangent for vertex v, v1, and v2 is normalize(v.t v1.t v2.t)? I want to make sure I have the correct understanding before implementing.
1
Map and fill texture using PBO (OpenGL 3.3) I'm learning OpenGL 3.3 trying to do the following (as it is done in D3D)... Create Texture of Width, Height, Pixel Format Map texture memory Loop write pixels Unmap texture memory Set Texture Render Right now though it renders as if the entire texture is black. I can't find a reliable source for information on how to do this though. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does... (In this case it is generating an 1 byte Alpha Only texture but it is rendering it as the red channel for debugging) GLuint pixelBufferID glGenBuffers(1, amp pixelBufferID) glBindBuffer(GL PIXEL UNPACK BUFFER, pixelBufferID) glBufferData(GL PIXEL UNPACK BUFFER, 512 512 1, nullptr, GL STREAM DRAW) glBindBuffer(GL PIXEL UNPACK BUFFER, 0) GLuint textureID glGenTextures(1, amp textureID) glBindTexture(GL TEXTURE 2D, textureID) glTexImage2D(GL TEXTURE 2D, 0, GL R8, 512, 512, 0, GL RED, GL UNSIGNED BYTE, nullptr) glBindTexture(GL TEXTURE 2D, 0) glBindTexture(GL TEXTURE 2D, textureID) glBindBuffer(GL PIXEL UNPACK BUFFER, pixelBufferID) void Memory glMapBuffer(GL PIXEL UNPACK BUFFER, GL WRITE ONLY) Memory copied here, I know this is valid because it is the same loop as in my working D3D version glUnmapBuffer(GL PIXEL UNPACK BUFFER) glBindBuffer(GL PIXEL UNPACK BUFFER, 0) And then here is the render loop. This chunk left in for completeness glUseProgram(glProgramId) glBindVertexArray(glVertexArrayId) glBindBuffer(GL ARRAY BUFFER, glVertexBufferId) glEnableVertexAttribArray(0) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 20, 0) glVertexAttribPointer(0, 2, GL FLOAT, GL FALSE, 20, 12) GLuint transformLocationID glGetUniformLocation(3, 'transform') glUniformMatrix4fv(transformLocationID , 1, true, somematrix) Not sure if this is all I need to do glBindTexture(GL TEXTURE 2D, pTex gt glTextureId) GLuint textureLocationID glGetUniformLocation(glProgramId, "texture") glUniform1i(textureLocationID, 0) glDrawArrays(GL TRIANGLES, Offset 3, Triangles 3) Vertex Shader version 330 core in vec3 Position in vec2 TexCoords out vec2 TexOut uniform mat4 transform void main() TexOut TexCoords gl Position vec4(Position, 1.0) transform Pixel Shader version 330 core uniform sampler2D texture in vec2 TexCoords out vec4 fragColor void main() Output color fragColor.r texture2D(texture, TexCoords).r fragColor.g 0.0f fragColor.b 0.0f fragColor.a 1.0
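One step appears to be missing: mapping and unmapping the PBO only fills the buffer object, it never copies anything into the texture. After glUnmapBuffer, with the PBO still bound, there has to be a glTexSubImage2D whose data argument is interpreted as a byte offset into the PBO. A sketch of the upload step:
glBindTexture(GL_TEXTURE_2D, textureID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID);

void* mem = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
// ... write 512 * 512 bytes ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// Missing piece: copy from the bound PBO into the texture.
// While a PBO is bound, the last argument is a byte offset into the buffer, not a pointer.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // safe for tightly packed 1-byte-per-pixel rows
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RED, GL_UNSIGNED_BYTE, (void*)0);

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
Two smaller things also look worth checking: glGetUniformLocation takes the program object (not a literal 3), and GLSL 3.30 core uses texture(...) rather than texture2D(...).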
1
Reason for white lines in tile based game My game is tile based. I am getting these white lines between tiles. They don't look nice. How can I resolve this issue?
1
Differences between OpenGL 3 and OpenGL 4 I'm just getting started with game programming and I want to start learning OpenGL. I found a great from-scratch tutorial for OpenGL 3, and I'm wondering if there is a big difference between OpenGL 3 and OpenGL 4. Or, to put it another way: does OpenGL 4 make OpenGL 3 obsolete, or can I start with OpenGL 3 and then move to OpenGL 4?
1
Matrix operations to flip 3d models front to back, without changing position Multiple 3D objects look to be in the correct screen position, and I can move around them and this continues to be true, but each one looks like it is flipped back to front (I can see the backside of the object as if the camera were on the other side of it). Any clues to why this may be happening in addition to correcting it would be welcome also I'm getting a transform matrix from one api what is documented as OpenGL 4x4 matrix format, but my underlying api (bgfx) is wrapping opengl so may be assuming another format. I can probably go get the transform in different format, but now I'd really like to know how to correct this also if there was no option but to start with the opengl matrix.
1
What happens if I don't call glNormal for some vertices? Let's say I have an object where some vertices will have the same normal and some won't. Are these two equivalent? Note I'm working with an old OpenGL version (2.3 I think). Option 1 glNormal3f(0.0, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.1, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.2, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.3, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) Option 2 glNormal3f(0.0, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.0, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.0, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.0, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.1, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.2, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0) glNormal3f(0.3, 0.0, 1.0) glVertex3f( 1.0, 1.0, 1.0)
1
How to remove seams from a tile map in 3D? I am using my custom OpenGL engine to render a tilemap made with Tiled, using a widely used tileset from the web. There is nothing fancy going on. I load the TMX file from Tiled and generate vertex arrays and index arrays to render the tilemap. I am rendering this tilemap as a wall in my 3D world, meaning that I move around with a fly camera and at Z = 0 there is a plane showing me my tiles. Everything is working correctly but I get ugly seams between the tiles. I've tried orthographic and perspective cameras, and with either I found particular sets of parameters for the projection and view matrices where the artifacts did not show, but otherwise they are there 99% of the time in multiple patterns, depending on the zoom and camera parameters like field of view. Here's a screenshot of the artifact being shown http i.imgur.com HNV1g4M.png Here's the tileset I am using (which Tiled also uses and renders correctly) http i.imgur.com SjjHK4q.png My tileset has no mipmaps and uses GL_NEAREST and GL_CLAMP_TO_EDGE. I've looked through many articles on the internet and nothing helped. I tried UV correction so the UVs fall half a texel in, rather than at the edge of the texel, to prevent interpolating with the neighbouring value (which is transparency). I tried debugging my geometry and verified that with no texture and a random colour in each tile, I don't see any seams. All vertices have integer coordinates, i.e., the first tile is a quad from (0,0) to (1,1) and so on. I tried adding a little offset both to the UVs and to the vertices to see if the gaps would disappear. I disabled multisampling too. Nothing has fixed it so far. Thanks.
1
How can I easily determine the z ordering of objects in multiple glDraw calls? I have a simple 2D tile based scene that I'm rendering with OpenGL. On each frame I first draw the static background, and then draw the dynamic elements (monsters, etc.). This way it seems that the later drawn elements "paint over" the earlier drawn ones (using separate glDraw calls). However, when asking around, it seems that if I used a depth buffer the order in which things are drawn doesn't necessarily matter, as the depth buffer can be used to determine visibility. But what happens when some things share the same z value when using a depth buffer? Is the newer one "painted over" the older one? If so, should I rely on this behavior, or rather use different z values for my "layers"?
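With the default depth function (GL_LESS), a fragment whose depth is exactly equal to the stored value fails the test, so the earlier-drawn surface wins and the later one is not painted over it. If you want "last drawn wins at equal depth", the test can be made inclusive; a minimal sketch:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);   // fragments at exactly the same depth as the stored value now pass
Relying on draw order at identical depths is still fragile (floating-point depths rarely come out exactly equal after transformation), so giving each layer its own small z offset is usually the safer approach for 2D layering.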
1
Working towards going 3D I am a beginner intermediate C Java programmer and my long term goal is to be a game programmer. I have decided to start off with 2D and work my way towards 3D. I would like to use SDL to start off with, but I am wondering if it is maybe not such a great idea. Given the fact that I am working towards 3D, would it be advisable to use SDL or jump into OpenGL without the Z axis?
1
GL EXT draw instanced vs VBO's I'm having trouble understanding the benefit of the newer GL EXT draw instanced over traditional VBO's. Don't both keep geometry cached on the gpu for faster redrawing? VBO's seem much more flexible.
1
Left handed version of the reversed depth projection with infinite far plane Let's say that I have standard left handed WorldToView and ViewToScreen row major matrices in the same vein as the DirectX D3DXMatrixPerspectiveLH functions, like so EXVector vView NormalizeVector(LookAt() m Position) float fDotProduct DotProduct(m WorldUp, vView) EXVector vUp NormalizeVector(m WorldUp fDotProduct vView) Generate x basis vector (crossproduct of y amp z basis vectors) EXVector vRight CrossProduct(vUp, vView) Generate the matrix row0.Set(vRight.x, vUp.x, vView.x, 0) row1.Set(vRight.y, vUp.y, vView.y, 0) row2.Set(vRight.z, vUp.z, vView.z, 0) row3.Set( DotProduct(m Position, vRight), DotProduct(m Position, vUp), DotProduct(m Position, vView), 1) EXMatrix mWorldToView(row0, row1, row2, row3) Here's the left handed mViewToScreen projection matrix float zNear 0.001f float zFar 1000.000f float Aspect DisplayHeight() DisplayWidth() float sinfov, cosfov qsincosf(VFOV() 2, sinfov, cosfov) float cotfov cosfov sinfov cos(n) sin(n) 1 tan(n) float w Aspect (cotfov) float h 1.0f (cotfov) float Q zFar (zFar zNear) float R Q zNear row0.Set(w, 0, 0, 0) row1.Set(0, h, 0, 0) row2.Set(0, 0, Q, 1) row3.Set(0, 0, R, 0) EXMatrix mViewToScreen(row0, row1, row2, row3) Many things like map object culling depend on this fact View to cull matrix Q ( zFar zNear) (zFar zNear) R (2.0f zFar zNear) (zFar zNear) row0.Set(w, 0, 0, 0) row1.Set(0, h, 0, 0) row2.Set(0, 0, Q, 1) row3.Set(0, 0, R, 0) EXMatrix mViewToCull(row0, row1, row2, row3) Does anyone know how to derive the infamous "reverse depth with infinite far plane" from the original blog post for mViewToScreen and mViewToCull? Every single reference I have found (nlguillemot and nVidia, for example) is in the right handed space. Normally I am able to figure things out visually, but 4D matrices are a headache.
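Taking the limit zFar -> infinity of the reversed mapping (near -> 1, far -> 0) in the same row-vector, left-handed convention used above gives a particularly simple matrix. For reversed depth in [0,1], Q = zNear / (zNear - zFar) and R = zNear * zFar / (zFar - zNear); as zFar grows without bound, Q -> 0 and R -> zNear. A sketch (derived here rather than copied from a reference, so worth sanity-checking against a known point):
row0.Set(w, 0, 0,     0);
row1.Set(0, h, 0,     0);
row2.Set(0, 0, 0,     1);   // the z contribution vanishes in the limit
row3.Set(0, 0, zNear, 0);
EXMatrix mViewToScreenReversedInf(row0, row1, row2, row3);

// Sanity check: for view-space depth z this yields z' = zNear / z,
// which is 1 at z = zNear and approaches 0 as z -> infinity (reversed depth).
For this to pay off, the depth buffer needs a floating-point format, the depth test flipped to GL_GREATER (or GREATER_EQUAL), and the depth clear value set to 0. For the -1..1 cull-matrix convention, the same limit works out to a z term of -1 with a translation of 2 * zNear (z' = -1 + 2 * zNear / z, i.e. 1 at the near plane and -1 at infinity), but it is worth re-deriving for your exact sign convention before relying on it.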
1
Why is the projection window between -1 and 1? Is it a convention? What do we achieve with this? I am reading about how the perspective and orthographic matrices are calculated, and everyone normalizes the homogeneous coordinates to the range [-1, 1]. Thank you! Orthographic Projection
1
How do I flip upside down fonts in FTGL I just use FTGL to use it in my app. I want to use the version FTBufferFont to render font but it renders in the wrong way. The font(texture?buffer?) is flipped in the wrong axis. I want to use this kind of orthographic settings void enable2D(int w, int h) winWidth w winHeight h glViewport(0, 0, w, h) glMatrixMode(GL PROJECTION) glLoadIdentity() I don't even want to swap the 3rd and 4th param because I like to retain the top left as the origin glOrtho(0, w, h, 0, 0, 1) glMatrixMode(GL MODELVIEW) I render the font like this No pushing and popping of matrices No translation font.Render("Hello World!", 1, position, spacing, FTGL RenderMode RENDER FRONT) On the other forums, they said, just scaled it down to 1, but it wont work in mine. Example font is above the screen I can't see relevant problem like in mine in google so I decide to ask this here again. How can I invert texture's v coordinate without modifying its source code ? (assume its read only)
1
Opengl textures change color of everything Whenever I render textures, all shapes get the color of the texture. I'm not sure why, but I think it's something to do with the way I render textures. Here's what I use to draw textures public void drawTex(Texture t, int x, int y, int width, int height) t.bind() glBegin(GL QUADS) glTexCoord2f(0,0) glVertex2f(x,y) glTexCoord2f(1,0) glVertex2f(x t.getTextureWidth(),y) glTexCoord2f(1,1) glVertex2f(x t.getTextureWidth(),y t.getTextureHeight()) glTexCoord2f(0,1) glVertex2f(x,y t.getTextureHeight()) glEnd() Now I draw shapes like this public void drawQuad(int x, int y, int width, int height, float r, float g, float b) glBegin(GL QUADS) glColor3f (r, g, b) glVertex2f(x, y) glVertex2f(x width, y) glVertex2f(x width, y height) glVertex2f(x, y height) glEnd()
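In the fixed-function pipeline, glColor is sticky state: after drawQuad runs, the last colour set stays current and modulates every texel of the next textured quad, and texturing likewise stays enabled when you go back to drawing plain shapes. A hedged sketch of resetting both pieces of state (plain GL calls, which map one-to-one to the LWJGL bindings):
// before drawing a textured quad
glEnable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);   // reset to white so the texture is not tinted by the last glColor
// ... bind texture and draw the textured quad as before ...
glDisable(GL_TEXTURE_2D);      // so the following flat-coloured quads use only their glColor
// ... drawQuad(...) calls now render with their own colour again ...
In short: re-enable texturing and reset the colour to white at the start of drawTex, and disable texturing at the end (or at the start of drawQuad), so neither call leaks state into the other.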
1
Should all primitives be GL_TRIANGLES in order to create large, unified batches? Optimizing modern OpenGL relies on aggressive batching, which is done by calls like glMultiDrawElementsIndirect. Although glMultiDrawElementsIndirect can render a large number of different meshes, it assumes that all these meshes are made of the same primitive type (e.g. GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_POINTS). In order to batch rendering most efficiently, is it wise to force everything to be GL_TRIANGLES (ignoring possible optimizations with GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN) so that more meshes can be grouped together? This thought comes from reading the Approaching Zero Driver Overhead slides, which suggest drawing everything (or, presumably, as much as possible) in a single glMultiDrawElementsIndirect call.
1
How would I go about merging two models together in game to create a new model? Let's say I have two models: model 1 is a hilt and model 2 is a blade. How would I merge these two models together to create a sword?
1
VBOs no longer renger when gluPerspective applied I've written a basic program so I can make sure I'm properly learning VBOs before converting my 3d game's rendering to them. Essentially, this question is why changing the perspective is the GL setup code stops my triangle from rendering GLU.gluPerspective(60f, (width height), 0.1f, 200.0f) Wihout that code, the triangle renders as expected. With it, only an empty screen is visible. I tried changing the triangle vertices but nothing worked. This may help explain why the VBO rendering isn't working in my game. I've spent more time reading about gluPerspective but I'm not catching why it prevents the images from rendering. My display gl init code Init display Display.setResizable(true) Display.setTitle( Game.name ) Display.setVSyncEnabled(true) Display.setDisplayMode(new DisplayMode(Game.GAME WIDTH, Game.GAME HEIGHT)) Set viewport PixelFormat pixelFormat new PixelFormat() ContextAttribs contextAtrributes new ContextAttribs(3, 0) Display.create(pixelFormat, contextAtrributes) glViewport(0, 0, Display.getDisplayMode().getWidth(), Display.getDisplayMode().getHeight()) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT GL STENCIL BUFFER BIT) glViewport(0, 0, Display.getWidth(), Display.getHeight()) glMatrixMode(GL PROJECTION) glLoadIdentity() GLU.gluPerspective(60f, ((float) Display.getWidth() (float) Display.getHeight()), 0.1f, 200.0f) glMatrixMode(GL MODELVIEW) glLoadIdentity() I have a simple triangle drawing class that uses and interleaved VBO triangleTesselator.addVertex( 0.5f, 0.5f, 0f) triangleTesselator.addColor(1, 0, 0) triangleTesselator.addVertex( 0.5f, 0.5f, 0f) triangleTesselator.addColor(0, 1, 0) triangleTesselator.addVertex( 0.5f, 0.5f, 0f) triangleTesselator.addColor(0, 0, 1) triangleTesselator.render() I also tried several different vertex coords thinking the 0 was behind the clipping (near) plane of the perspective but that didn't work. Update I've added a basic camera system to this rendering test app, and after looking around a bit I've found the rendered cube behind me...
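With gluPerspective, the visible volume starts 0.1 units in front of the camera, and the camera sits at the origin looking down -Z, so a triangle whose vertices have z = 0 lies at the eye, in front of the near plane, and is clipped. The identity modelview that worked before only appeared to work because there was no real projection. A minimal sketch of pushing the geometry into the frustum (plain GL calls, which map directly to the LWJGL static imports used above):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -3.0f);   // move the scene 3 units in front of the camera
// draw the triangle: z = 0 in model space now ends up at z = -3 in eye space,
// safely between the near (0.1) and far (200) planes
Alternatively, keep the modelview as identity and give the triangle vertices a negative z such as -3 instead of 0; either way the geometry has to land between the near and far planes in eye space. This also matches the later observation of finding the cube "behind" the camera once a camera system was added.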
1
Point Sprite Size and Coloring OpenGL ES 2.0 I am trying to render point sprites with with variable color, but they are all black. Before I added gl PointSize 5.0 they had color. The environment is Android with C , I believe OpenGL ES 2.0. I have tried to work from point sprite size and applying color to point sprite, but have had no luck. Vertex Shader precision mediump float precision mediump int attribute vec4 vertex uniform mat4 mvp varying vec4 v color void main() gl Position mvp vertex v color vertex gl PointSize 5.0 Fragment Shader precision mediump float precision mediump int varying vec4 v color uniform sampler2D tex void main() gl FragColor vec4(v color) gl FragColor vec4(1.0, 0.0, 0.0, 1.0) gl FragColor texture2D(tex, gl PointCoord) gl FragColor vec4(v color.rgb, texture2D(tex, gl PointCoord).a) All of the gl FragColor assignments both commented and active lead to black points. How do I give these points color?
1
GLSL Efficient Point inside Box Check I'm attempting to improve the performance of a shader that changes the colour of a region of the world that is inside a "zone". I am using a deferred lighting system, so the colour and world space position of each pixel on the screen are stored in two separate textures, gColor and gPosition. The zonePos uniform stores the two corners of each zone. The zoneColor uniform stores the colour of each zone. The totalZones uniform stores the amount of zones in the current game. Here is the fragment shader out vec4 FragColor in vec2 texCoords layout (binding 0) uniform sampler2D gColor layout (binding 1) uniform sampler2D gPosition uniform vec3 zonePos 10 uniform vec3 zoneColor 5 uniform int totalZones void main() vec3 FragPos texture(gPosition, texCoords).rgb vec3 Albedo texture(gColor, texCoords).rgb vec3 j vec3 k for (int i totalZones i gt 0 i ) j zonePos i 2 k zonePos i 2 1 if (FragPos.x gt j.x amp amp FragPos.x lt k.x amp amp FragPos.y gt j.y amp amp FragPos.y lt k.y amp amp FragPos.z gt j.z amp amp FragPos.z lt k.z) Albedo zoneColor i FragColor vec4(Albedo, 1.0) The changes I have made so far are Using j and k, rather than accessing zonePos 6 times in the if statement Looping down from totalZones to 0, as a comparison against 0 is faster Any suggestions are greatly appreciated.
1
OpenGL Colour Issues I am using a single VBO to store vertices in the follow format v1X, v1Y, v1Z, v1R, v1G, v1B, v2A, v2X, ... Vertex positioning is fine, shapes show up where expected, however instead of using the colour provided, all shapes show up red. The code given below simply draws two triangles to form one square ground shape. Buffer data preparation method public void prepare(float data) glBindBuffer(GL ARRAY BUFFER, vboID) FloatBuffer dataBuffer RenderUtils.fArrayToBuffer(data) if(dataLength ! data.length) glBufferData(GL ARRAY BUFFER, dataBuffer, GL STATIC DRAW) dataLength data.length else glBufferSubData(GL ARRAY BUFFER, 0, dataBuffer) glVertexAttribPointer(0, 3, GL FLOAT, false, 7 4, 0) glVertexAttribPointer(3, 4, GL FLOAT, false, 7 4, 3 4) Render code floorObj.prepare(new float 5, 0, 5, 1, 0, 0, 1, 5, 0, 5, 0, 1, 0, 1, 5, 0, 5, 0, 0, 1, 1, 5, 0, 5, 1, 0, 0, 1, 5, 0, 5, 0, 1, 0, 1, 5, 0, 5, 0, 0, 1, 1, ) glDrawArrays(GL TRIANGLES, 0, 6) Vertex shader version 330 core in vec3 position in vec4 i color out vec4 color uniform mat4 transform void main() gl Position transform vec4(position, 1) color i color Fragment shader version 330 core in vec4 color out vec4 f color void main() f color color As previously stated, vertex positions work fine, however colour does not. Just ask if any other code would be useful to determine the problem. Thanks, Jasper
1
How do I align the cube in which shadows are computed with the view frustrum? ("View Space aligned frustum") Short and concise Given 8 world space positions that form a cube of arbitrary size, position and orientation and given an arbitrary light direction. How do I compute the View and Projection matrix for a directional light, such that the shadows are computed in exactly and only this cube? Edit Clarification I want to create Shadow Maps, in the way, that Crytek calls "View Space aligned" (http www.crytek.com download Playing 20with 20Real Time 20Shadows.pdf page 54 55), so I want that the cube in which shadows are computed is aligned with the view frustrum. How do I achieve that, when I have already have the 8 world space positions of a cube, around the view frustrum?
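A common way to do this (a sketch with GLM; cube[8] holds your world-space corners and lightDir the direction the light travels): build a rotation-only view matrix for the light, transform the eight corners into that light space, take their axis-aligned min/max, and feed the result to an orthographic projection. The resulting box is exactly your cube, just expressed in the light's axes.
glm::vec3 up = glm::abs(lightDir.y) < 0.99f ? glm::vec3(0, 1, 0) : glm::vec3(1, 0, 0);
glm::mat4 lightView = glm::lookAt(glm::vec3(0.0f),   // position is irrelevant for a directional light
                                  glm::normalize(lightDir),
                                  up);

glm::vec3 mn( 1e30f);
glm::vec3 mx(-1e30f);
for (int i = 0; i < 8; ++i) {
    glm::vec3 p = glm::vec3(lightView * glm::vec4(cube[i], 1.0f));
    mn = glm::min(mn, p);
    mx = glm::max(mx, p);
}

// Light space looks down -Z, so the near/far distances come from the negated z range
glm::mat4 lightProj = glm::ortho(mn.x, mx.x, mn.y, mx.y, -mx.z, -mn.z);
glm::mat4 lightViewProj = lightProj * lightView;
Since the eight points are the camera frustum's corners (or a per-cascade slice of them), shadows end up being computed in, and only in, that view-aligned volume — which is what the Crytek slides describe.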
1
Changing Yaw, Pitch And Roll OpenGL I'm using LWJGL and OpenGL 1.1 at this time and I was wondering what command is used to change the yaw, pitch and roll?
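In OpenGL 1.1 there is no single yaw/pitch/roll command; you compose them yourself with glRotatef on the modelview matrix. A minimal sketch of a first-person-style camera (the camera transform is the inverse, hence the negated angles and the translate at the end; angles are in degrees):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-roll,  0.0f, 0.0f, 1.0f);   // roll  around Z
glRotatef(-pitch, 1.0f, 0.0f, 0.0f);   // pitch around X
glRotatef(-yaw,   0.0f, 1.0f, 0.0f);   // yaw   around Y
glTranslatef(-camX, -camY, -camZ);
// ... draw the scene ...
To rotate an object rather than the camera, use the positive angles (typically in the reverse order). The order of the three rotations matters; the one above is a common choice, not the only valid one.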
1
What is the purpose of glScissor? I know that it is more efficient than stencil test, but am I right assuming that the same functionality could be achieved using projection transformations with viewport?
1
How to snap a 2D Quad to the mouse cursor using OpenGL 3.0? I've been having issues trying to snap a 2D Quad to the mouse cursor position I'm able 1.) To get values into posX, posY, posZ 2.) Translate with the values from those 3 variables But the quad positioning I'm not able to do correctly in such a way that the 2D Quad is near the mouse cursor using those values from those 3 variables eg."posX, posY, posZ" I need the mouse cursor in the center of the 2D Quad. I'm hoping someone can help me achieve this. I've tried searching around with no avail. Heres the function that is ment to do the snapping but instead creates weird flicker or shows nothing at all only the 3d models show up void display() glClearColor(0.0,0.0,0.0,1.0) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) for(std vector lt GLuint gt iterator I cube.begin() I ! cube.end() I) glCallList( I) if(DrawArea true) glReadPixels(winX, winY, 1, 1, GL DEPTH COMPONENT, GL FLOAT, amp winZ) cerr lt lt winZ lt lt endl glGetDoublev(GL MODELVIEW MATRIX, modelview) glGetDoublev(GL PROJECTION MATRIX, projection) glGetIntegerv(GL VIEWPORT, viewport) gluUnProject(winX, winY, winZ , modelview, projection, viewport, amp posX, amp posY, amp posZ) glBindTexture(GL TEXTURE 2D, DrawAreaTexture) glPixelStorei(GL UNPACK ALIGNMENT, 1) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL NEAREST) glTexParameteri(GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL NEAREST) glTexEnvf(GL TEXTURE ENV, GL TEXTURE ENV MODE, GL DECAL) glTexImage2D (GL TEXTURE 2D, 0, GL RGB, DrawAreaSurface gt w, DrawAreaSurface gt h, 0, GL RGBA, GL UNSIGNED BYTE, DrawAreaSurface gt pixels) glEnable(GL TEXTURE 2D) glBindTexture(GL TEXTURE 2D, DrawAreaTexture) glTranslatef(posX , posY, posZ) glBegin(GL QUADS) glTexCoord2f (0.0, 0.0) glVertex3f(0.5, 0.5, 0) glTexCoord2f (1.0, 0.0) glVertex3f(0, 0.5, 0) glTexCoord2f (1.0, 1.0) glVertex3f(0, 0, 0) glTexCoord2f (0.0, 1.0) glVertex3f(0.5, 0, 0) glEnd() SwapBuffers(hDC) I'm using OpenGL 3.0 WIN32 API C GLSL if you really want the full source here it is http pastebin.com 1Ncm9HNf , Its pretty messy.
1
Camera follow 3D Object OpenGL I'm trying to make my camera follow a 3D object. The X and Y values of the camera position are the same as the object's, but the Z axis is not: it has an offset of 13.0f so the camera looks at the object from a distance. The problem is that the higher the object goes, the more it disappears off the screen. It's as if the camera is following it slowly, or as if, since the camera is far away, the object moves up faster than the camera. How do I keep the object centered in the camera's view? My current code for the camera's view matrix is: view = glm::lookAt(position, position + forward, up)
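If you build the view with lookAt, the simplest way to keep the object centered is to aim the camera at the object's actual position every frame instead of at position + a fixed forward vector. A minimal sketch (objectPos is assumed to be the object's world position, updated before the camera each frame):
glm::vec3 offset(0.0f, 0.0f, 13.0f);                       // keep the camera 13 units behind on Z
glm::vec3 cameraPos = objectPos + offset;
glm::mat4 view = glm::lookAt(cameraPos, objectPos, up);    // target = the object itself
As long as the offset is constant and the target is the object, the object projects to the screen center no matter how fast it moves; a lag only appears if you deliberately smooth or interpolate the camera position.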
1
Can instantiated objects have different material texture? While I have some experience with simple 2D games, I am new to more process demanding 3D games. One basic question that has been concerning me recently and for which I am having difficulties to find a proper detailed answer, is the following. I understand that when we instantiate an object, we are saving memory because the instantiated copy does not have to save mesh memory (like vertices' position, UV cood and normals). So, the instantiated copy only needs to save a transformation matrix in order to properly position, scale and rotate the mesh structure they share with the original mesh. But can the instantiated copies have different texture or material from the original mesh from which they were instantiated? If so, then it means the instantiated objects actually save with themselves more than transformation matrices. I would love to read more on that, so suggestions are welcome.
1
Are there any reasons to use Legacy (2.X) OpenGL? The benefits are well documented of the Modern OpenGL 3.X amp 4.X API's, but I'm wondering if there are ANY benefits to keeping with the old OpenGL, Or if learning OpenGL 2.X is a complete waste of time now no matter what? Particularly I've wondered if using the OpenGL 2.X API is appropriate if the target platform had graphics hardware capable of only up to OpenGL 2.X. Would a driver update on said target platform allow programs compiled using the Modern OpenGL API's to be released on this old platform? If they both work, which would be faster? Thanks
1
How to draw only visible tiles? I have a big map with isometric tiles(3d camera), how can i draw only visible tiles ? Whats the best way to do that ? space partitionning (octrees etc...)?
1
How to draw alpha-masked fragments' depth to the depth buffer? I feel like an absolute idiot for asking this, but how exactly do I safely draw the depth of a fragment featuring an alpha-masked (binary alpha) texture on its surface? So far I've literally been doing the depth test only on truly opaque geometry. Here's a logarithmic Z buffer from GTA. And yes... it's strange that I know that and how to do it... but not an alpha-mask depth. EDIT: From this, it looks like it's actually possible to write the solid texels to the depth buffer and ignore the binary transparency. Here's an image that's an example of the problem I would like to solve.
2
Implementing Achievement System in Javascript I understand that for more complex achievements, specific code can't be avoided, but for simple ones like (player clicked 100 times) or (player earned 30 bajillion cookies) or (player played for 1 hour) it should be as simple as abstracting the achievements to something like a list of properties values and having some event handler. My problem is that since I'm using Javascript and there's no really clean way to do pointers, how am I supposed to create an achievement class that can be iterated through procedurally? Ideally I'd like something like this Achievement class function Achievement(name, property, value, text, points) this.name name this.property property this.value value this.text text And then I'd add an achievement like so achievements.push(new Achievement("Clicktastic", amp numClicks, 1000, "You clicked 1000 times!") And have some kind of function bound to a Window.setInterval that was constantly looping through all achievements and for these simple types doing something like for (var i 0 i lt achievements.length i ) if ( achievements i .property gt achievements i .value) deal with earning achievement I'm just using some C style pointer dereferencing notation here. Basic idea is that I'm not sure how to create a generic achievement that can arbitrarily care about specific properties without pointers.
2
Cocos2D JS Windows App in fullscreen mode (Win32) I've been looking for a way to make my Cocos2D JS app go into fullscreen mode, but there doesn't seem to be much documentation on Cocos2D JS (or Cocos2D x js as some seem to call it.) This app is to be exported into a Windows Desktop platform (Win32), and so fullscreen mode should work there as well. I checked the Cocos2D JS Online API and found cc.screen, which has a method called requestFullScreen, so I tried this cc.screen.requestFullScreen(document.documentElement, function() cc.log('Requesting full screen...') ) However, this didn't work on either of the platforms I tested it on (web browser and win32). Since document does not exist when you export the app into win32, I then tried changing it for cc. canvas cc.screen.requestFullScreen(cc. canvas, function() cc.log('Requesting full screen...') ) But this did not work either. Is there a way to make the app run in fullscreen mode? PD My apologies for any wrong tags I may have included to this question. I added the ones I thought were related to this problem.
2
HTML5 Jump and Run Game Performance issues we're developing a HTML5 Javascript jump and run game and therefore we have developed our own gameframework. It consists of following most important structures Stage Scene Layer divs ObjectEntites Rectangle Sprite canvas In the gameloop things get moved by calculating an offset value e.g. if we place a coin and this coin moves through the screen, it gets a new offset in each gameloop call. The dirty area of the coin is cleared in the canvas and the coin is repainted. We pay attention that we just draw at int values (no floating point). Unfortunately we have some performance issues now and can't figure out why our macbook pro's are running hot while playing the game. Are the drawing operations so crucial? We have layered Canvas, so just moving stuff gets repainted and even not the whole canvas but just dirty areas get cleared. IStat shows following values while playing the game Memory module A1 and Heatsink B get up to 60 and the rpm Value of the Fans increases up to 3000 4000. In firefox we see these values And chrome shows us this, where we also see how cpu increases Do you have any idea how to check performance of our game, how to find bottlenecks or general tips? We have searched for canvas performance, used firebug profiler... We'd be grateful for any advice!
2
Strange quaternion camera streching in WebGL So after a lot of researching about quaternions I almost got the quaternion camera working. Almost, cause it rotates in a proper way only in a vertical axis. Other rotations stretches and deforms the view (like on a picture below). Wobbling horizontal moves makes only Z axis stretched I don't like to post topics like "Please, debug my brain", but I'm in a dead end right now, and I don't know how to fix it. My implementation looks like this I Convert an axis angle rotation to a quaternion and normalize it. Then multiply it with a camera identity quaternion. And then multiply a position of a camera (which is a 4x4 model view matrix) by a product of above multiplication converted to matrix. My free quaternion camera class looks like this (for now) function QuaternionCamera() Fake 4x4 Matrix this.position new Matrix4x3() this.rotation new Quaternion() QuaternionCamera.prototype verticalRotation function(angle) this.rotation.multiply(new Quaternion().axisToQuaternion(angle, 0, 1, 0)) this.position.multiply(this.rotation.quaternionToMatrix()) this.rotation.makeIdentity() , horizontalRotation function(angle) this.rotation.multiply(new Quaternion().axisToQuaternion(angle, 1, 0, 0)) this.position.multiply(this.rotation.quaternionToMatrix()) this.rotation.makeIdentity() And on mouse movement, it runs this camera.verticalRotation(degToRad(deltaX 6)) camera.horizontalRotation(degToRad(deltaY 6)) I believe that this is a proper way to do it, so I'll just dump my math library, maybe you could find a typo in there (cause I have checked it a hunderd of times and found nothing) http pastebin.com 0zYLaR1V PS How to get rid off an additional little roll when rotating camera around, so it will move in a game fashion way? PS2 And where did the imaginery numbers gone when implementing quaternion math to programming language?? This is a true mystery for me.
2
Are there any open-source game frameworks for the browser? I have been looking but haven't found any.
2
Uncaught TypeError Cannot read property 'Mesh' of undefined I would like to use dragonBones with pixi.js. I got a problem. When the following line executes in dragonBones, I am getting an error slot.init(slotData, displays, new PIXI.Sprite(), new PIXI.mesh.Mesh(null, null, null, null, PIXI.mesh.Mesh.DRAW MODES.TRIANGLES)) The error says Uncaught TypeError Cannot read property 'Mesh' of undefined.. And yeah, while debugging, I noticed that PIXI.mesh is really undefined. What can I undertake against the error? I am using this file for a dragonbones. May it be that the problem is in the fact that I am using pixi.js v4.5.6?
2
Object have the same "speed" all time JavaScript I try to create a 2D game with a big map and multiple objects. The player is always in the middle of the screen and all the other objects is moving around so it feels like a big map. I want the player to always move after the mouse location. I have modified the mouse location to (0,0) by this var mouseX 0 var mouseY 0 if (e.pageX lt window.innerWidth 2) mouseX (window.innerWidth 2 e.pageX) else mouseX e.pageX window.innerWidth 2 if (e.pageY lt window.innerHeight 2) mouseY (window.innerHeight 2 e.pageY) else mouseY e.pageY window.innerHeight 2 That is working just fine. When it comes to the moving part i can't relly get it to move with the same "speed". I have made some animations to explain how it is now and how i want it to be. How it is now DC is the speed that my player would go and as you can see it is moving the most when its closest to x 0 and y 0, i have done it like this so far var all mouseX mouseY clientsID key 'x' mouseX all speed clientsID key 'y' mouseY all speed But as i know it is not working well. Here is a animation to show how i want it to be DA would be the speed and it's always 1 because it's radius of the circle. But as i don't know how far the mouse is from the center (mouse is E on the last animation) i have to scale it down somehow. Does anyone know how to get how much i have to add x and y so the speed always will be the same? Thank you EDIT See the solution under here
2
Top down four directional movement snap to tiles, not feel like crap I want to make a topdown rpg movement, I need to update these parameters based on user input User input four directions . Update parameters Position x, y, Flip sprite flipx, flipy, Sprite number s, eg 1 for horizontal sprite, 17 for vertical sprite Sprite Animation si, eg 1 to 4 for walking animation This is my crappy implementation function init() p t 0, x 0, y 0, dx 0, dy 0, mtimer 0, s 1, si 0, flipx false, flipy false end function update() local input x 0 local input y 0 if btnp( ) then input y 1 elseif btnp( ) then input y 1 end if btnp( ) then input x 1 elseif btnp( ) then input x 1 end if p.dx 0 and p.dy 0 then if input x ! 0 then p.dx input x p.mtimer 4 elseif input y ! 0 then p.dy input y p.mtimer 4 end end if p.mtimer gt 0 then p.mtimer 1 p.t 2 p.x p.dx 2 p.y p.dy 2 if p.mtimer 0 then p.dx 0 p.dy 0 end end p.t 0.5 if p.dx ! 0 then p.s 1 elseif p.dy ! 0 then p.s 17 end p.flipx p.dx 0 and p.flipx or p.dx lt 0 p.flipy p.dy 0 and p.flipy or p.dy lt 0 p.si flr(p.t 30 30 4) end function draw() spr(p.s p.si, p.x, p.y, 1,1, p.flipx, p.flipy) end This is the end result As you can see it's so bad looking, I want you to implement this movement so it's fun and juicy. Or what can I do to improve the looks and feel of this.
2
player keeps on firing (Javascript Canvas) I'm brand new to game development (with Javascript)... I'm pretty sure my question is very simple, but I have a hard time to figure out myself... I'm starting at the very beginning, so this is pretty simplistic. This is my whole code in javascript var fire false function keyUp(evt) switch (evt.keyCode) case SPACE fire false break ... function keyDown(evt) switch (evt.keyCode) case SPACE fire true break ... function update() ... if (fire) playerHasShot true shot.x player.x player.w shot.y player.y (player.h 2) shot.vx 2 if (!fire) playerHasShot false ... shot.x shot.vx ... Bullet if (playerHasShot) playerHasShot false if (!playerHasShot) playerHasShot false ... if (shot.vx gt 4) shot.vx 4 ... if (shot.x gt canvas.width) shot.x player.x player.w shot.y player.y (player.h 2) playerHasShot false ... render() renderShot() I hit the spacebar once, and that guy can't stop shooting... Would any of you mind telling me how to stop him, please ? I'm sure I'm close, but there's something I'm missing somewhere.... Thanks! )
2
Box2D Adjusting character position for side scrolling In my side scroller I would like to give the illusion of the character walking forward while his sprite is actually being held in place. If he walks near the edge of the screen, I would like to adjust his position at each step so that he is remaining still while the background moves backwards at the same rate he would have been moving. I think you get what I mean. I figured that a simple case of "position position velocity" would do the trick, but then I realized it is not quite that easy with box2d. Velocity is expressed not at distance per frame, but distance per second. So instead I am essentially using position position velocity 60 since my game refreshes (or attempts to refresh at) 60 frames per second. But I can still see some lilting in the character's movement, and if I monitor his position, it shows that it indeed changes, most likely due to variations in frame rate. Is velocity 60 the best I can hope for, save for hard coding the stop positions? I am using the newest release of Box2Dweb. Yes, the javascript one. Here's a sample of the code var posX this.body.GetPosition().x var posY this.body.GetPosition().y var velX this.body.GetLinearVelocity().x if(posX gt 21 posX lt 12) screen.pos velX this.body.SetPosition(new b2Vec2(posX (velX 60),posY)) I realize I can hardcode the positions like this if(posX gt 21) screen.pos velX this.body.SetPosition(new b2Vec2(21,posY)) else if(pos lt 12) screen.pos velX this.body.SetPosition(new b2Vec2(12,posY)) But I'm not super excited about how decidedly undynamic that option is.
2
Prevent cheating in html Javascript game I have made a Javascript html game. Now the problem I have is anyone can edit the client code and cheat in game for example there is a man shooting a enemy. Man HP nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp 100 Enemy HP nbsp nbsp nbsp nbsp nbsp 50 lt div id "man" hp "100" gt lt div gt Now if someone changes the hp attribute to a very high value then he becomes literally immortal making the game easy. Now how can I prevent this type of thing ?
2
Tagging "regions" in rot.js map generator I'm working on a small roguelike and I'd like to be able to tag the various quot regions quot generated via the Map generators so that I can use it as a lookup to environment descriptions. I see that there are quot Rooms quot that are generated in the dungeon generator algorithms, however I didn't really see anything similar in say, the cellular automata generator. Is there a way to leverage the generator in rot.js, or am I stuck with manually determining if I'm in a particular open area (and breaking encapsulation of internal, private variables along the way)? Basically, I'd like to be able to link up my player's location to the text you see at the bottom of , but can't seem to figure out how to do it generically. I'm fairly certain I could use the 'Rooms' object in the first 2 screenshots, and everything else is a hallway, but I'm not clear on how I could get a particular 'region' in a cellular automata generator (or one of the maze generators, for instance).
2
Am I doing this node.js game loop wrong? I have 2 arrays of JSON objects, actions and game objects. At any time a user can make a request from the client which can add an action to the actions array. I have a setInterval(function() , 1000) that runs every second and first loops through all of the actions, doing them in order then dumping emptying the array. Then it for loops (on the length of the array) through all the objects and does change processing such as healing, repairing, consuming resources, updating location of travelling objects, etc. Each game object is tied to a user and when it is done being modified it will push out to them using socket.io all the data they need for their current view (assuming they are in an active session still). Is this right? Am I not taking advantage of the node.js event loop as well as I could for this?
2
Setting a jump counter in my 2d platformer built with phaser I have a platforming game, written in javascript with the Phaser.js framework, where my character collects objects. I thought trying to collect all objects in the least amount of jumps would be interesting, so I tried to implement a jump counter. Relevant pieces of code var jumpCount 0 var jumpText function create () jumpText this.add.text(16, 48, 'Jumps Used 0', fontsize '32px', fill ' 000' ) function update (jumpCount) if (cursors.space.isDown amp amp player.body.touching.down) player.setVelocityY( 630) jumpCount jumpText.setText('Jumps Used ' jumpCount) This is displaying the counter and incrementing it sort of correctly, but it seems to count the amount of time that the spacebar is held down and is leading to jump counts that are floats well into the thousands. Is there a more reasonable way to handle counting the number of times a player has manually jumped? Thanks!