_id (int64, 0-49) | text (string, lengths 71-4.19k)
---|---|
8 | What are these odd distortions on far away textures? I'm currently writing a simple voxel engine just to get some practice in, and I'm coming across some odd issues. In order to generate terrain, I create a mesh and then assign UV coordinates on a texture atlas to each vertex. Whenever I run the game, I get strange distortions like these. I've tried a few things so far: generating mip maps for the texture atlas, playing around with the max size of the texture atlas, and playing around with the format of the texture atlas. Unfortunately, none of these have worked so far. As of right now, the import settings on the texture atlas itself look like this. Is there any way I can get rid of these obnoxious distortions? |
8 | How can I animate a colour change over hue saturation value instead of RGB in Blender? If I create a keyframe for a colour in Blender (I'm using 2.6) it always seems to traverse the RGB values for the animation, even if I set the colour using HSV values. I do this by setting the colour, using HSV, then pressing i over it. How can I animate the HSV values rather than the RGB values? |
8 | Do games depend on the OS to scale resolution? I'm having issues with my computer related to scaling when using resolutions other than the native one. So I started to wonder: in PCs, is scaling handled by the card driver or the game engine? Does it depend on what engine we are talking about? Do engines somehow bypass the scaling algorithm from the driver? PS: I'm talking about regular PC games that use DirectX or OpenGL. |
8 | Clear reference implementations of MLAA? Does anyone know of a clearly written reference implementation of morphological antialiasing (MLAA)? Intel provide a paper and reference implementation at the following address, but I find the code very opaque: http://visual-computing.intel-research.net/publications/publications.htm |
8 | How do I replicate the warp effect from Geometry Wars? I'm trying to achieve the warp effect that you see in games like Geometry Wars. Can someone help explain what is going on here? I feel like the grid's z axis is manipulated in some way by the programmer, but is there some known mathematical equation to really get the correct warping effect? Edit: To be more specific, I mean the warping of the grid: how everything looks pulled towards the negative z axis and then bounces. |
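Effects like this grid are commonly reproduced as a spring grid: every grid point is tethered to its rest position by a damped spring, and a warp simply shoves nearby points away, after which they spring back and overshoot, which is the "bounce". Below is a minimal sketch of that idea, with all names invented here; the same displacement can be applied to a z coordinate instead of x/y if you want the pulled-into-the-screen look.

```cpp
#include <vector>

// One point of the warp grid, tethered to its rest position by a damped spring.
struct GridPoint {
    float x, y;          // current position
    float restX, restY;  // undisturbed position
    float vx = 0.0f, vy = 0.0f;
};

// Shove points radially away from (cx, cy); force falls off with distance squared.
void ApplyWarp(std::vector<GridPoint>& grid, float cx, float cy, float force) {
    for (GridPoint& p : grid) {
        float dx = p.x - cx, dy = p.y - cy;
        float d2 = dx * dx + dy * dy + 1.0f;  // +1 avoids a divide by zero at the centre
        p.vx += force * dx / d2;
        p.vy += force * dy / d2;
    }
}

// Each frame: spring back toward rest, damped so the grid overshoots and then settles.
void UpdateGrid(std::vector<GridPoint>& grid, float dt) {
    const float stiffness = 30.0f;  // made-up tuning values
    const float damping   = 4.0f;
    for (GridPoint& p : grid) {
        p.vx += ((p.restX - p.x) * stiffness - p.vx * damping) * dt;
        p.vy += ((p.restY - p.y) * stiffness - p.vy * damping) * dt;
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}
```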
8 | Opengl in 500 lines barycentric calculation question https://github.com/ssloy/tinyrenderer/wiki/Lesson-2:-Triangle-rasterization-and-back-face-culling I cannot figure out how we go from u·AB + v·AC + PA = 0 to the linear system with those x and y subscripts. Is there another way of explaining how we take three vectors, split them into x and y components, and produce a cross product that can be used to find the barycentric coords (u, v, w)? Also, I am not sure why there is a division by the z component of the cross product in the last line of the barycentric function. Maybe the answer to my first question will make this more obvious. Below is a picture of the section of the tutorial where I am stuck. Link to the full page is above. |
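For what it's worth, the step in question is just the observation that one 2D vector equation is two scalar equations. Restating the tutorial's derivation:

$$u\,\vec{AB} + v\,\vec{AC} + \vec{PA} = \vec{0}
\;\Longleftrightarrow\;
\begin{cases}
u\,AB_x + v\,AC_x + PA_x = 0\\
u\,AB_y + v\,AC_y + PA_y = 0
\end{cases}$$

In other words, the vector $(u, v, 1)$ is orthogonal to both $(AB_x, AC_x, PA_x)$ and $(AB_y, AC_y, PA_y)$, and a vector orthogonal to two given vectors is parallel to their cross product $\vec{n}$. Dividing $\vec{n}$ by its $z$ component rescales it so its last coordinate is exactly $1$, matching the form $(u, v, 1)$; that is the division at the end of the barycentric function. When $n_z = 0$, the triangle is degenerate. The barycentric coordinates are then $(1 - u - v,\; u,\; v)$.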
8 | How does cube mapping work? Based on my reading of cube mapping tutorials so far, my understanding is that you need a direction vector, and from the direction vector we can determine the point of intersection with one of the six planes. Is the direction vector the player's view direction? Besides that, I do not understand how having only one ray will give us the texture to fill the whole screen. Furthermore, what if we are looking at one of the edges of the cube? Do we then have to sample the texture from two faces? Can someone please explain what I am missing here? |
8 | Why is Y up in many Games? I learned at school that the z axis is up. It is the same in modeling software like Blender. However, in many games the y axis is up. What is the reason? |
8 | How to make proper animations for a html5 game? I am working on a web based MMORPG in html5. Therefore, I am thinking about using Phaser.io as the main engine of the game on the client side. I want the game to be isometric and I am wondering how to make proper sprite animations gfx for the game. I don't want my graphic team to have too much pain. Would it be possible, for example, to use some kind of "vectorial spritesheet" so that I can zoom in and have a better look at the character? On the other hand, I don't want the "stuff" (hat, armor, etc.) to be too hard to "plug" onto the character in all the different orientations. I thought about using some kind of flash like system (my game is based on the French MMO Dofus, which only uses Flash for its client side, not web based technology). But I don't know anything about Flash: can I integrate it inside a canvas, and does it mix well with raster images like png or jpg? What alternative could I use? |
8 | Fastest way to create a simple particle effect I am looking for the fastest way to create a really simple particle effect that will be spammed like hell in the game... Basically, my game looks like a Vectrex game, made mostly of lines... I want to make some small explosions that will be really common. Is there something faster than just moving around some points and rendering them with GL_POINTS? |
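For effects this simple, the usual answer is less about the math and more about batching: keep a pre-allocated pool of particles and stream every live particle into one vertex buffer, so the whole effect is a single draw call. A sketch under those assumptions (extension loading, attribute setup and shader binding omitted; all names invented here):

```cpp
#include <GL/gl.h>
#include <vector>

struct Particle { float x, y, vx, vy, life; };

std::vector<Particle> pool(4096);  // pre-allocated, recycled, never resized per frame
std::vector<float> verts;          // interleaved x,y pairs streamed to the GPU
GLuint vbo;                        // created once at startup with glGenBuffers

void UpdateParticles(float dt) {
    verts.clear();
    for (Particle& p : pool) {
        if (p.life <= 0.0f) continue;  // dead slots are simply skipped, not erased
        p.life -= dt;
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        verts.push_back(p.x);
        verts.push_back(p.y);
    }
}

void DrawParticles() {
    // One upload and one draw call for every particle on screen.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                 verts.data(), GL_STREAM_DRAW);  // STREAM: rewritten every frame
    glDrawArrays(GL_POINTS, 0, (GLsizei)(verts.size() / 2));
}
```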
8 | SDL deleting an image from the screen? Well, I'm sort of a beginner to SDL, and I was wondering how one would go about deleting an image from the screen and replacing it with another? I attempted to do this, but it didn't seem to change it; how would someone with more experience than me go about doing it? |
8 | Displaying whole screen image on multiple devices without stretching In my (Android) game, all of my sprites are scaled against a particular ratio (this 'guide' ratio stays the same regardless of the actual ratio of the screen on which they are to be displayed), and these sprites are then displayed on screens of different ratios. This way, the game can run full screen on any device; it simply means that more or less of the game is visible on some devices than on others. However, I can't work out how to display a large (full screen) image on different devices. The only thing I can think of is to simply create the original image for the larger display and then crop it down to fit other ratios. Something like this: here, the image on the right is how the picture will show on the original device (full screen with no cropping), while the device on the left has a smaller width. But what if this runs on a device with a larger screen? In that case, is it simply a matter of (uniformly) stretching the image until it fills the screen? I would be grateful for some guidance. |
8 | How did they make the screen move in Dangerous Dave? I made games in BASIC when I was a child, and I was curious about how the graphics were done in the 1988 version of Dangerous Dave, made in C, especially because they didn't have any worthwhile graphics packages in those days. Remember how, when Dave reached the edge of the screen, the entire screen graphic used to move leftward in a sweeping motion? I remember reading that Romero had used a special technique to do that. I've been wanting to create something like Dave, and was wondering what graphics package or method they used, and how to make the entire screen graphic move like they did? |
8 | Rendering scaled down card images I have high quality SVG card images, but they drastically lose their quality when I downsize them. I have tried two ways of rendering cards (using Inkscape and ImageMagick): 1) Render the SVG to a high res PNG and then resize it: inkscape -D --export-png=QS1024.png --export-width=1024 QS.svg, then convert QS1024.png -filter Lanczos -sampling-factor 1x1 -resize 71x QS71.png. 2) Render the SVG to an image of the proper size at once: inkscape -D --export-png=QS71.png --export-width=71 QS.svg. Both approaches generate blurry card images, which look even worse than the old Windows cards. What is the best way to generate smaller card images from SVG sources without losing much of their quality? UPDATE: I am using Inkscape to render SVG to PNG, and then ImageMagick to downsize the PNG. I've tried using convert's resize with a couple of filters (Lanczos, Mitchell, etc.), but the result was pretty much the same. Original 71x raster: |
8 | Single texture or one texture per light for shadow mapping The basic implementations of shadow mapping that I have seen create a depth texture for each light source affecting the scene. I'm just curious as to why this might be done rather than using and reusing a single texture for all of the lights in the scene so as to save on memory. To clarify, is there any benefit in the following (one texture object per light): for light in lights { render scene depth to light.shadowTexture }; for light in lights { render scene with light (blended) }; as opposed to the following (one texture object, written to by all lights): new Texture shadowMap; for light in lights { render scene depth to shadowMap; render scene with light (blended) }? |
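To make the second variant concrete, here is a sketch; the helper functions are hypothetical and stand in for whatever the real API provides. The single map can be reused because each lighting pass samples the depth texture immediately, before the next light overwrites it; the trade-off is that nothing can be cached across frames and the lights must be processed strictly one at a time.

```cpp
#include <vector>

struct Light   { /* the light's view/projection data would live here */ };
struct Texture { /* opaque GPU handle */ };

// Assumed helpers, standing in for the real API.
Texture* CreateDepthTexture(int w, int h);
void RenderSceneDepth(const Light& light, Texture* target);        // depth from the light
void RenderSceneLit(const Light& light, const Texture* shadowMap); // additive blended pass

void RenderAllLights(const std::vector<Light>& lights) {
    static Texture* shadowMap = CreateDepthTexture(2048, 2048);  // one shared map
    for (const Light& light : lights) {
        RenderSceneDepth(light, shadowMap);  // overwrite the shared map...
        RenderSceneLit(light, shadowMap);    // ...and consume it before the next light does
    }
}
```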
8 | How to create the "drunk camera" effect in GTA 4? If you have played GTA 4, then you have probably been drunk at some stage. This is one of the best intoxicated simulators I have ever seen. It is actually hard or sometimes almost impossible to drive a vehicle while drunk in this game. I know that a lot of the things that make up this drunk experience are changes to the controls or having random changes in direction, but how would the camera be done? The camera seems to wave in and out and side to side. It also has changing blur, and other effects going on. Would this be some kind of shader? What are the effects that are used on top of each other to create the experience? |
8 | PBR How to correctly use standard lighting and IBL I'm creating a physically based renderer, but I am a bit confused about how to put together standard lighting with IBL, since the way I'm doing it now seems wrong. Right now, for each light, I evaluate its contribution to the scene lighting combined with IBL lighting (I use both the light contribution and the diffuse and specular coming from the IBL), but this way I sum the IBL contribution once per light, and I don't think that's right. To put together standard lighting with IBL, do I need to process all the lights alone and then, in another step, bake the IBL into the scene? I think this would be more correct. |
8 | Rendering 2d sprites into a 3d world? How do I render 2d sprites in OpenGL, given that I have a png of the sprite? See the images as an example of the effect I'd like to achieve. I would also like to overlay weapons on the screen, like the rifle in the bottom image. Does anyone know how I would achieve the two effects? Any help is greatly appreciated. |
8 | Correctly bitmasking path tiles based on existing paths Currently, I have a bitmasking implementation that sometimes incorrectly bitmasks the tiles. Conventionally, it is done correctly, and the math etc. is sound, but it achieves results such as the following. This is similar to what you might expect, and looks fine when doing simple x shaped crossings. However, I would like the images to remain unchanged in some of these cases. For instance, the parallel paths in the third image would remain parallel paths instead of being bitmasked into many crossings. In the first and second images, only the parts of the path highlighted in the following images would be bitmasked (different parts of the path are labelled for clarity). If I only update the tiles that make up the newest path, I again get good results in simple x shaped crossings; however, I get results such as those demonstrated in the following image. What could I do to achieve results where this would not occur? I have thought about adding some sort of flag to the tile to indicate that it is a crossing, but how could I detect that? |
8 | Algorithm to generate multifaced cube? Is there an elegant solution to generate a simple six sided cube, where each side is made out of more than one face? The method I have used ended up a horrible and complicated mess of logic that is impossible to follow and, most likely, to maintain. The algorithm should not generate redundant vertices, and should output the index list for the mesh as well. The reason I need this is that the cube's vertices will be deformed depending on various factors, meaning that a simple six faced cube will not do. |
8 | Opengl in 500 lines point in triangle question https://github.com/ssloy/tinyrenderer/wiki/Lesson-2:-Triangle-rasterization-and-back-face-culling I am on lesson 2 of the "Opengl in 500 lines" tutorial. I follow the part of the lesson in "The method I adopt for my code", but I don't understand the leap from P = A + u·AB + v·AC to 0 = u·AB + v·AC + PA. |
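The leap is a single rearrangement: subtract $P$ from both sides and use the definition $\vec{PA} = A - P$:

$$P = A + u\,\vec{AB} + v\,\vec{AC}
\;\Longrightarrow\;
\vec{0} = (A - P) + u\,\vec{AB} + v\,\vec{AC} = u\,\vec{AB} + v\,\vec{AC} + \vec{PA}.$$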
8 | Is there a game oriented graphics or image editor? I remember looking a few times over the years for an alternative to Photoshop for editing images. I've tried using Gimp too, and that's also not quite sufficient. There are some problems that have only been solvable by breaking out into an asset compiling stage. For example, none of the graphics packages come with premultiplied alpha options, and none come boxed with previews of what the image will look like once it's been compressed for hardware. I've never come across a package that lets you modify the diffuse, specular, ambient and normal maps all at the same time with the same tools. Is there an image manipulation package out there that can satisfy things like this, allow for textures that have more than just 4 channels, export with preview to the final format, maybe even rendering them in some coherent way? Items on the wishlist include: batch conversion; scriptable pixel shader transform; view diffuse alpha over a background; allow alpha2coverage preview; render depth mapping via height or normal map; show mip maps and allow editing them. |
8 | Why are my exported illustrator graphics blurry and messy in game when scaling? Non scaled image: ... And this is 2x scaled: ... I use 300 dpi and ticked the anti aliasing setting when exporting the PNG-24 image for my game in Illustrator. But it still looks blurry when scaled. Could you please tell me what's wrong here? What is the best way to get better art in my case? Thank you in advance! |
8 | Monogame Linux cannot resize screen While working on a Monogame project on Linux (Arch to be exact), I found that I could not change the screen height away from the default 800 px, while I could easily, perfectly change the width to whatever I want. Has anyone ever had this issue, and is there a fix for it? I am going to try the solution that Jon explained in this post, but I doubt it will work since this seems to be a Linux only problem. I have installed the latest version of OpenTK and built the latest build of Monogame as of a few days ago, and have looked through the source for the Monogame OpenTKGameWindow and GraphicsDevice classes, as well as the GraphicsDeviceManager. I don't see the issue in any of those, but I might be missing something. |
8 | What does the term "channel" mean when used in regards to computer graphics? I was studying terminology for computer graphics, and this statement came up that confused me. The image can have alpha channels for transparency. I tried searching for the meaning of the term "alpha channel," but I got really confused by the definition which used another concept called a "channel." I'm not really sure what this means, so could someone be kind enough to please explain this term to me? |
8 | How come the 3d graphics and animations of MMORPGs are usually worse than non online 3d games? I have noticed that, in general, the 3d graphics and animations for MMOs and MMORPGs seem not as seductive and polished as the graphics for normal, non online 3d games. Why is this the case, or is my judgement inaccurate? If my judgement is inaccurate, please provide examples of MMORPGs that render 3d graphics and animations that are superior to normal, non online 3d games. |
8 | What's the best way to generate an NPC's face using web technologies? I'm in the process of creating a web app. I have many randomly generated non player characters in a database. I can pull a lot of information about them their height, weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed with text in the nicest way possible, but I believe it's worth generating these faces for a more... human experience. Problem is, I don't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice. I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .PNGs to represent various aspects of the player's body their armor, the face, the skin color. However, these solutions weren't very dynamic and felt very amateur. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestion on how to pull this off well? |
8 | How to create a decent strategy game without becoming an artist? I love to make games, but I am not an artist. I never learned to make 2d or 3d models; I have spent most of my time on programming. I know there are game engines for non programmers. But I am looking for a way to create a decent strategy game (medieval theme) without spending time learning to make 2d or 3d models, which is more time consuming than it would be for an artist to learn programming. |
8 | What does the term 'photorealistic' really mean? I was wondering about the term 'photorealistic' in regards to rendering and was wondering how it is used. Is it used to describe a shader (or set of shaders) that have certain quantifiable features? Or any rendering that's not meant to be abstract, like the cartoon effect seen in Borderlands? Or is it just a subjective term meaning 'really really realistic'? |
8 | 2D platformers why make the physics dependent on the framerate? "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly fps/60 as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing. |
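The decoupled alternative the question alludes to is usually implemented as a fixed-timestep accumulator loop (popularised by Glenn Fiedler's "Fix Your Timestep" article): physics always advances in exact 1/60 s increments no matter how fast or slow rendering runs. A sketch, with Running, UpdatePhysics and Render standing in for whatever the engine actually provides:

```cpp
#include <chrono>

bool Running();                 // placeholders for the engine's own functions
void UpdatePhysics(double dt);
void Render(double alpha);      // alpha: fraction into the next step, for interpolation

const double STEP = 1.0 / 60.0; // physics always advances in exact 1/60 s ticks

void GameLoop() {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    double accumulator = 0.0;
    while (Running()) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;
        while (accumulator >= STEP) {  // slow frame? run several catch-up steps;
            UpdatePhysics(STEP);       // fast frame? maybe run none this time
            accumulator -= STEP;
        }
        Render(accumulator / STEP);    // render rate is now independent of physics
    }
}
```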
8 | How do mipmapping, anti aliasing and anisotropic filtering contribute to rendering quality? To my understanding, mipmaps exist to save on computations and memory when textures are far away. In an ideal world, we wouldn't want them. We'd just use the same high quality textures far away and not care about wasted resources. But in reality, we need anisotropic filtering to smooth the disconnect between varying LOD mipmaps (or so I think). However, do images using mipmaps with anisotropic filtering look better than no mipmaps without anisotropic filtering? See here (the first image looks worse than the third even though the first has no mipmaps). Is it because anisotropic filtering has anti aliasing built in? And if we didn't have mipmaps, but added anti aliasing, would the resulting image look better? |
8 | Shader compile log depending on hardware I'm done with the core of my graphics engine and I'm testing it on every platform I can get my hands on. Now, what I noticed is that different drivers return different shader and program compile log content. For example, on my friend's laptop if you successfuly compile a shader then the log is simply empty. However on my PC I get some useful information along with it. So if I compile a vertex shader, I'll get Vertex shader was successfully compiled to run on hardware. Which isn't that impressive, but is what happens when I compile a program. On my friend's computer the log is empty, since the program compiles. However on my own computer I get Vertex shader(s) linked, fragment shader(s) linked. Which is awesome, because I'm attaching a geometry shader with 0 (I have a geometry shader file with trash, so it doesn't compile and the pointer is set to 0), and the compiler just tells me which shaders linked. Now it got me thinking, if I was going to buy a graphics card, is there a way for me to get the information about whether or not I'll get this "extended" compile information? Maybe it's vendor specific? Now I don't expect an answer TBH, this seems a bit obscure, but maybe somebody has any experience with this and could post it. |
8 | Where to hire graphic designers for mobile games? I need to find someone talented to create a series of 2D graphics à la carte for my mobile games. My graphics are just not impressive enough, but the game play is solid and fun. This is sort of a multi part question: Where do I start to find someone? What should I expect to pay for custom sprites and backgrounds? What kinds of terms should I use so that I receive exclusive, royalty free ownership of the artwork when it's finished? |
8 | Where can I find articles or books on 2D graphics (how it works etc.)? After I have been through more intensive C++ studies, I am thinking of giving computer game programming a start in my practice. Therefore, I think 2D knowledge (processing images, animating frames, etc.) would be a good place to start. However, I don't seem to find many articles on 2D (mostly 3D), and when I find one, it is mostly stuff like how to use a 2D game engine. I really want something much more low level, or "from scratch" ideas. I want articles or books that would answer something like: How would I import images into C++? (Let's say I want to import a jpeg or bmp file into raw C++ code; what would I do?) Basically, I just want to have enough low level knowledge about computer graphics to be able to create something like a 2D game engine from scratch. |
8 | How many polygons for smartphone hardware starting from 2015? I already know that the answer is "it depends": it depends on whether the hardware is low, mid, or high end. So, let me reformulate: is a 100K concurrent polycount in a 3d game for smartphones too much? What is a good compromise? Thanks |
8 | How do I get hardware accelerated graphics and shaders in PyGame? I have created several simple games with PyGame, but until now, I have focused on mechanics rather than graphics. The graphics in my games have been extremely rudimentary, consisting of basic shapes on black backgrounds. I've recently decided to change that and create a 2D graphics engine supporting proper texturing, some animation, and most importantly lighting. While researching these, especially the last, I've started doubting whether PyGame is the right tool for this. Many tutorials concerning the implementation of lighting, shadows, et cetera recommend using techniques such as GPU shaders and pixel level manipulation. Looking at the documentation of PyGame, I don't see anything that would let me implement such things with any degree of efficiency. I can access bitmaps directly with PixelArray, but doing any significant processing this way seems like it would be a performance nightmare. How can I get hardware accelerated graphics, vertex and pixel shaders when making a game with PyGame? |
8 | SDL2 linux fullscreen issue at lower than desktop resolution Having a problem trying to get proper fullscreen in linux. I'm using 1440x900 on the desktop. When I set SDL to use 1280x720 as fullscreen, it does change the screen resolution. But if I drag the mouse cursor to the bottom or right edge of the game screen, it "scrolls" the screen beyond the game surface and makes part of the desktop visible. Here's how I set the window screen: gameWindow = SDL_CreateWindow("SDL Tutorial", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, screenWidth, screenHeight, SDL_WINDOW_FULLSCREEN | SDL_WINDOW_BORDERLESS). Am I missing some flag(s) perhaps? Is this a common problem? I'm on Linux Mint MATE (Rosa). This problem also occurs when trying to run the same build in an Openbox session. Using nVidia drivers (x64, v340.96) from nvidia.com, on 9600 GT card(s). No twin/dual screen or second Workspaces. Any good tips on how to avoid or work around this issue? |
8 | How can I replicate the look and limitations of the Super NES? I am looking to produce graphics with the same limitations and look as in the Super NES era. I am specifically looking for graphics similar to Chrono Trigger or FF6. It would be a lot easier to do if I had an idea of the resolution and dpi I am supposed to use. I found that the technical specs for the SNES are: Progressive: 256×224, 512×224, 256×239, 512×239; Interlaced: 512×448, 512×478. But even using these resolutions is pointless if I set it at 72dpi, as I will still have possibly very detailed graphics (that is the main thing: I don't want detailed graphics, I want to go pixelated). I figured it might be related to the sprite size limit, i.e. sprites can be 8×8, 16×16, 32×32, or 64×64 pixels, each using one of eight 16 color palettes and tiles from one of two blocks of 256 in VRAM, and up to 32 sprites and 34 8×8 sprite tiles may appear on any one line. This would work for sprites (characters, objects), but what about maps? Are they built entirely from 8×8 tiles? And then, at what resolution is the end result displayed? It might seem like I am giving the question and answers at the same time, but all of these are suppositions I am making, so could someone confirm or correct them? |
8 | How do I change color of one part of sprite in Game Maker? I want to create a competitive game like Age of Empires, where each player has access to a variety of units and each player can be distinguished by a color on each unit. Instead of creating 8 differently colored versions of each sprite, I would have only one, with a mark where that color should be changed, for example. How do I change the color of one part of a sprite in Game Maker? |
8 | Bounding Box in Monogame for mouse picking Ray perspective My mouse ray is screwing up precision. I don't really know how to fix it; maybe you guys can help. Problem (5.6mb gif): https://www.dropbox.com/s/v0z67afso88hsd1/perspective-ray.gif How I create the mouse ray: private Ray GetMouseRay(GraphicsDevice gd, ref Matrix view, ref Matrix proj) { // create source positions; I don't really understand why the 0 and the 1, since the near/far clip planes are totally different, but from experimentation this is a must: Vector3 nearsource = new Vector3((float)MousePosition.Value.X, (float)MousePosition.Value.Y, 0.0f); Vector3 farsource = new Vector3((float)MousePosition.Value.X, (float)MousePosition.Value.Y, 1.0f); Console.WriteLine("nearsource " + nearsource.ToString() + " farsource " + farsource.ToString()); // the matrices needed are the view, the proj, and this world; we are positioning the mouse ray at the origin (model origin; it's a 3D-space ray): Matrix world = Matrix.CreateTranslation(0, 0, 0); // unproject the mouse position on the clipping planes: Vector3 nearPoint = gd.Viewport.Unproject(nearsource, proj, view, world); Vector3 farPoint = gd.Viewport.Unproject(farsource, proj, view, world); Console.WriteLine("nearPoint " + nearPoint.ToString() + " farpoint " + farPoint.ToString()); // create a ray from the near clip plane to the far clip plane: Vector3 direction = farPoint - nearPoint; direction.Normalize(); return new Ray(nearPoint, direction); } How I am drawing the ray: CDebugShapeRenderer.AddLine(mouseRay.Position, mouseRay.Position + mouseRay.Direction * 1000, Color.Red). How I am calculating the OBB: see the question "Bounding Box in Monogame for mouse picking". How I am calculating the collision: line 349 of https://github.com/CartBlanche/MonoGame-Samples/blob/master/CollisionSample/BoundingOrientedBox.cs#L349 So how can I create a mouse ray that is accurate, or remove that perspective somehow? Roger. Edit: I forgot to add the matrices used: proj = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, this.GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000.0f); view = Matrix.Identity; |
8 | What is render path? What is a "render path"? Who invented this term? I suppose it means one of the things from the list below: the thing which shows which rendering technique should be used (e.g. forward or deferred rendering); a module for rendering lines and curves; a description of how shaders should be connected in a multipass render to achieve some render FX. What does this term stand for? Thanks |
8 | Unreal Engine Player Quality I had to force Unreal to use my integrated graphics card because VR wasn't working with my NVidia card. When I did this, the framerate dropped terribly and it popped up a dialogue "Frame rate's bad, would you like to lower quality?" Which I said yes to. Now, I've set it to use my NVidia card again, but it's stuck in this low quality state. It seems to be tied to the engine, not the project. All my 4.9 projects run in this garbage state, but my 4.10 projects are fine. How can I reset the engine player settings for 4.9? |
8 | How do games deal with Z sorting partially transparent foliage textures? I was busy implementing basic transparency in a prototype I'm working on when something occurred to me. In order for a given texture's transparency to work as expected, the (semi )transparent texture must be drawn after whatever is behind it, right? Well, if we take for example a tree or shrub in a game like Skyrim, the texture(s) that make up the foliage on that tree or shrub must include some transparency somewhere, right? A vertex perfect leaf model would be far too resource intensive. But the player can move around, and sometimes through, any plant at will, thus changing the relative position of all textures to the camera. Doesn't this mean that the game has to constantly Z sort all textures, both between models and even within a single plant model, whenever the player moves (so potentially every single frame)? Isn't that very resource intensive? How do games with lots of partially transparent textures deal with this? |
8 | What do you use to create sprite graphics? Possible Duplicate: What tools do you use for 2D art sprite creation? What do you folks suggest for creating sprite graphics and sprite sheets? I fiddle with Pixelformer and Tile Studio. Pixelformer has a kicken interface; it is quick and easy to make graphics, but a bit cumbersome if you want to make a spritemap. Tile Studio is an interesting mix of tiles and maps, but it is a bit buggy and basic. The Adobe series just doesn't really seem to handle tiny graphics well. (There is a previous posting of this question, but it is a year old and I was hoping for further updated input from the community.) |
8 | Creating graphics at different angles for sprites I am developing a Java game that uses sprites for the graphics. It's just a top down shooter, and our ship sprites look like the following. These work fine, but they're hard to create, and the artist who created them is no longer working on the game. I could easily create a top down view of a ship in Photoshop myself, but I'm not sure how to get all the angles. What do you think the best program or approach would be for me to create more? |
8 | High Performance Vector Graphics Solutions I'm looking for a high performance vector graphics library I can use in my games. I'm thinking along the lines of vector graphics such as those that can be made with SVG. I'll consider any language at the moment (but must run on Windows). A solution that takes advantage of GPU hardware would be great. Thanks in advance. |
8 | text wrapping on a texture applied to a 3D model How would I create and implement a texture of text that wraps around a 3D model? The texture will just be white and you should be able to add text to it, but I need to create this so that when the texture is wrapped around a model of a person, the person is then composed of lines of text and the lines should not be distorted. In my head the way I would do it is to put the flattened texture in a file, and draw text on to it. Is this the best way? are there any issues that I'm unaware of that I might come across? |
8 | Algorithm for drawing asteroids from, er... Asteroids game? What would the algorithm be for generating and drawing the asteroid shapes from the original Asteroids game? Is it even an algorithm? Or would they be hard coded shapes? Here is a screenshot to jog your memory: http://www.heinzwerner.de/emu/asteroids.jpg EDIT: Also found this: http://www.next-gen.biz/files/images/feature_article/2009/05/asteroids4.jpg from the article Edge: The Making of Asteroids. Interesting read, but it does not mention any specifics. |
8 | How to combine objects in a rendered scene with and without bloom effect? I'm working on a game and engine as I go, under OpenGL. I've had a bloom effect in place for a while that works as a post processing effect on the entire screen. It's been fine up until now. I have an object that is made up of a few meshes, and one of these meshes is mostly white. The result is that it blooms like crazy, but I really don't want it to. The only way around it I can see is that the objects I do want to have bloom would need to be rendered to a bloom framebuffer, from which I can bloom (and store the depths), to then mix back in with the rest of my scene. Is this the only way to go about it? It seems like a lot of work to simply isolate something from a screen wide effect. |
8 | Find point along x axis where yz plane flips to back facing after perspective projection. Have a solution but don't know why it works I am trying to find the point along the x axis where a plane with normal pointing along the x axis would flip to back facing after perspective projection. Essentially the red line in this image. In my first attempt to find it, I just sampled along the x axis, checking if a triangle there was back facing after projection. This only gave me an approximation, but I could make the approximation as accurate as I wanted by just sampling more planes, so I used this as a baseline to verify other solutions. I then tried the following: vec4 center = projection.inverse() * vec4(0,0,0,1); center /= center.w; float d = dot(center.xyz, vec3(1,0,0)); vec3 point = vec3(1,0,0) * d. But this did not work; it was close but not right. See this image: the red line is the sampled point and the blue is the one calculated from that. I then tried changing the calculation for center to projection.inverse() * vec4(0,0,-1,1) to see what that would look like. With the sampled point in red, the blue line being the previous version and the green the new version, I got the following image, which as you can see is still not correct. But I noticed that the green was always directly between the blue and the red, so I tried the following: vec4 c1 = projection.inverse() * vec4(0,0,0,1); c1 /= c1.w; float d1 = dot(c1.xyz, vec3(1,0,0)); vec4 c2 = projection.inverse() * vec4(0,0,-1,1); c2 /= c2.w; float d2 = dot(c2.xyz, vec3(1,0,0)); float d = (d2 - d1) / 2 + d1; vec3 point = vec3(1,0,0) * d. This seems to be correct, but I have no idea why it works. It is always extremely close to the sampled point, so I assume it is correct. My question is: why does this work? |
8 | How to project textures onto an animated model? I'm trying to figure out exactly how to project textures onto an animated model. I've taken a look at the white paper on L4D2's wounds, but their method doesn't exactly explain how they went about this. I've tried using the old school method of creating a mesh and attaching it to the object, but that would require the GPU to store that data for a long period of time and recall it correctly for the animated model. On top of that, there's the Z fighting problem. I've tried the deferred shading method, but I can't get that to work correctly either. My setup requires some form of filter in order to prevent the decal from projecting onto other models. And then, should a moving body part cross over the volume, it gets rendered onto that as well, which is not desired. |
8 | Effect of graphics card on game programming I know the question can be pretty funny to some, but I have been thinking: does using a gaming graphics card like a GTX versus a professional graphics card like a Quadro have any effect on game development during texture rendering, materials, importing assets, shader compilation, etc.? |
8 | Which image format is more memory efficient PNG, JPEG, or GIF? Which image format is more efficient to save memory? PNG, JPEG, or GIF? |
8 | Would seam carving liquid rescale make changing aspect ratios easier? Seam carving is an algorithm which allows for resizing images without major distortions. I think it might help to make games which would adapt to different aspect ratios resolutions much easier. But am I right? You can watch this presentation to see how seam carving works for videos. |
8 | How to improve my graphics on HTML5 canvas? I am making my own army fighting simulator. Our designer made a 3D model with Blender, textured it, and made animations. I have made sprites from that model and did the programming. But the result is not as good as I expected. You can look at armyfight.com/simulator.html (refresh until it loads all images properly if it didn't the first time), and when you fully maximize, you can see there are units moving. But I want them to be clearly visible, with details, high resolution like in flash games, and also understandable when the units are really small. Is that possible on canvas? Or is it my mistake, and must I somehow make more HD sprites? I'm stuck now, and I don't know how to improve the graphics. |
8 | How can I create animated card graphics like in Hearthstone? In the game Hearthstone, there are cards with animated images on them. A few examples: http://www.hearthhead.com/card=281/argent-commander http://www.hearthhead.com/card=469/blood-imp The animations seem to be composed of multiple effects: particle systems; fading sprites in and out, rotating them; simple scrolling textures; a distortion effect, very evident in the cape and hair of example 1; swirling smoke effects, the light in example 1 and the green purple glow in example 2. The first three elements are trivial; what I'd like to know is how the last two could be done. Can this even be done in realtime in a game, or are they pre rendered animations? |
8 | Glitch free cross fades in HTML5 In my HTML5 canvas game, I need to cross fade two sprites which have some glow around them. (The glow is baked into the sprites.) Initially, the first sprite is visible. During the cross fade, the first sprite should vanish and be replaced with the second one. How exactly the cross fade is done does not matter, as long as it is smooth and there are no visual glitches. I've tried two techniques. During the cross fade I simultaneously interpolate alpha of the first sprite from 1.0 to 0.0, and alpha of the second sprite from 0.0 to 1.0. With this technique I can see the background in the middle of the cross fade; that's because both sprites are semi transparent most of the time. During the cross fade I first interpolate alpha of the second sprite from 0.0 to 1.0 (first sprite alpha is at 1.0), and then interpolate alpha of the first sprite from 1.0 to 0.0. With this technique the background is not seen, but the glow around the sprites flashes during the cross fade when both sprites are near full visibility. In a non HTML5 game I'd use shaders to do the cross fade separately in the RGB and alpha channels. Is there a trick to do the cross fade I need in HTML5 without visual glitches? |
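Both glitches fall directly out of the source-over compositing algebra, which may be worth writing down. With effective per-pixel alphas $a_1$ and $a_2$, drawing one sprite over the other leaves total coverage $1-(1-a_1)(1-a_2)$ over the background. For technique 1:

$$a_1 = 1-t,\quad a_2 = t \;\Longrightarrow\; \alpha_{\text{total}} = 1 - t(1-t) < 1 \;\text{ for } 0<t<1,$$

so at $t=0.5$ a quarter of the background shows through. For technique 2, in a glow pixel of intrinsic alpha $g$ with the top sprite drawn at global alpha $t$:

$$\alpha_{\text{total}} = 1-(1-g)(1-tg) = g + tg(1-g) \ge g,$$

so the stacked glows are more opaque than either sprite alone, which is the flash. The glitch-free target is the per-pixel linear interpolation $\alpha = (1-t)\,\alpha_1 + t\,\alpha_2$, which source-over of two separately faded sprites can never produce; the usual workaround is to composite the two sprites into an offscreen canvas first, so the interpolation happens before the result is blended with the background.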
8 | Where can I find good designers animators for the graphics of a 2d game? Possible Duplicate: Where to hire 2D sprite artists? I've got a little bit of experience in this field already, and I noticed that it's way better to look for animators than graphic designers, because games need explosions, looping sprite sheets, etc. But it would be even better if I could go somewhere and find people with experience in making graphic assets FOR GAMES, not for general markets. Is there some specialized site or something? |
8 | How to make some monsters appear more dangerous than others? My game is an open world co op MMO with retro graphics, permadeath, and no leveling system. The problem I'm facing is that I don't know how to make some monsters appear harder than others. Since nothing has a level, and characters can only get stronger by acquiring better items, it's difficult for me to "warn" players that a monster is difficult for them. Fortunately, the whole world is designed by hand and we know which areas should be more difficult than others. So this means I as the developer know which monsters are stronger than others. But new players won't! I don't want them to get frustrated by walking into the wrong monsters reserved for stronger players. So then, how can I make some monsters appear more dangerous than others? Some things I thought of: Increasing the size of the monsters. This could work, but it's not a valid rule for everything; for instance, the end boss has small minions which are also powerful. Use of particles. I remember from playing WoW that some stronger monsters excessively used particles to make them appear more powerful. Dividing the world into danger levels. Here I would add a danger level or something to the HUD that displays whether the player is in a safe zone or somewhere dangerous. It would change and display a notice whenever the player moves to a different danger level. Doable, although I have no clue how to portray it on the HUD. |
8 | What platform were old TV video games developed on? I am very eager to know how the TV video games we all played in our childhood were developed, and on which platform. I know how games are developed for mobile devices, Windows PCs and Macs, but I don't understand how, in those days, Contra, Duck Hunt and all those games were developed, given that they had rich graphics and a large number of stages. How did they manage to develop games in such a small environment and on such low spec platforms? |
8 | Generating Normal map from an Image with a given Albedo map I am working on a research problem, part of which involves generating a normal map from a given image of a rusted object. I searched the internet for techniques to achieve the above, and apparently CrazyBump is mentioned a lot. I tried it, but it didn't produce the desired effects. Also, I am looking for a method which draws inspiration from an existing research paper, not some closed source software. I turned my attention to the technique described in this paper. Results from this technique are satisfactory for normal objects, because of bias in the training data, but it doesn't work very well in the case of rusted objects. After this I focussed my attention on generating an albedo map (the above problem would become more solvable if an albedo map were obtained). Fortunately, I am able to generate pretty good albedo maps for images of rusted objects; I used this paper's approach to generate them. Now I want to know a good technique to get a normal map given an image and its corresponding albedo map. To give you an idea of what kind of images I am working with, I am attaching a sample. Links to research material would be really appreciated. Thanks! |
8 | sprite animation individual framerate When animating sprites I am taking the delta difference between frames and locking the rendering frame rate of the sprite animation to the delta time: float delay = 1000.0f / FPS; float now = SDL_GetTicks(); if (now - sprite->last_update > delay) { render sprite frame }. This is causing flickering, because the background is being drawn faster than the sprite (my guess). If I draw as fast as the frame rate, the flickering goes away. If I slow down the background rendering, then the background flickers. By slow down I mean set the FPS to 30. I want to be able to slow down the animation for a single sprite. I need a better way to lock the frame rate for individual sprites. I can slow the entire frame rate down by delaying in the game loop, but I want to be able to slow down a single sprite. Is this possible? |
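The usual cure is to separate drawing from frame advancement: every sprite (and the background) is redrawn every game frame, and the per-sprite timer only decides which animation frame to blit. A sketch along those lines (SDL style, with invented field names):

```cpp
#include <SDL.h>

// Each sprite carries its own animation clock; the timer gates which frame is
// shown, never whether the sprite is drawn (skipping draws is what flickers).
struct AnimSprite {
    int frame = 0;
    int frameCount = 1;
    Uint32 frameDelayMs = 100;   // e.g. 1000 / animation FPS, set per sprite
    Uint32 lastUpdate = 0;
};

void AdvanceAnimation(AnimSprite& s, Uint32 nowMs) {
    if (nowMs - s.lastUpdate >= s.frameDelayMs) {
        s.frame = (s.frame + 1) % s.frameCount;  // wrap around the sheet
        s.lastUpdate = nowMs;
    }
}

// In the main loop, every frame: draw the background, then for each sprite
// call AdvanceAnimation(sprite, SDL_GetTicks()) and blit sprite.frame
// unconditionally.
```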
8 | Art asset creation workflow question Are there any neat ways to save your work and automatically export it to the desired format? To be specific: I have used GraphicsGale for pixel art sprites. The program saves its files in its own format (.gal, I believe), but the images in my game have to be .png. Saving to .png loses information for GraphicsGale, like the palette, and manually keeping two separate files impairs the workflow. Ideal would be that after every save, the original file is saved in one folder and a .png export in another, awaiting runtime and packing. Any suggestions or thoughts? |
8 | Geometry Shader and Stream Output with Directx11 I am having trouble trying to send vertices generated in the Geometry Shader to Stream Output. What I am trying to accomplish is to generate vertices from the Geometry Shader and store them in a vertex buffer so that I can use the vertex buffer to draw later. I read that I have to use the CreateGeometryShaderWithStreamOutput function to create a Geometry Shader that can send vertices to Stream Output instead of the rasterization stage. This is how I am trying to use it: device->CreateGeometryShaderWithStreamOutput(this->mGSBlobSO->GetBufferPointer(), this->mGSBlobSO->GetBufferSize(), so_decl, 1, &stride, 1, D3D11_SO_NO_RASTERIZED_STREAM, NULL, &this->mGeometryShaderSO). I am getting an E_INVALIDARG at this line. I am specifying D3D11_SO_NO_RASTERIZED_STREAM because I think this means that I do not want to send data to the rasterizer, but I am not sure. When I replace D3D11_SO_NO_RASTERIZED_STREAM with a 0, I do not get this runtime error, but I do not get the result I want. How can I set up the geometry shader to store vertices to a vertex buffer in Stream Output? |
8 | How can I improve or replace my programmer art? Say I'm a programmer who has done his own sprites or 3d models which would fall into programmer ish kind of art. What steps can I take in order to improve or replace my own art? |
8 | Which camera angle is used for these sprites? I am trying to replicate these sprites, but I am not managing to make sense of how it is done. The character is facing forward (<), but you can see the front of its body instead of just the side. Why? Is the camera positioned diagonally? Or is the body bent, as in a fighter stance? Or a combination of both? Exactly which positioning, perspective, and view should I use to replicate it correctly? |
8 | How do you make use of all texture units on today's graphics cards? I saw a review of the GeForce GTX 460 graphics card. It has 56 texture units. I'm not that knowledgeable about graphics effects. But, the ones I know use around 3 or 4 texture units. In this graphics card case, that would leave a lot of texture units idle. How are these graphics cards with so many texture units used? |
8 | How to determine the maximum supported level of anisotropic filtering for a graphics card In software, at runtime (C if it matters), is there a way to find out the maximum level of anisotropic filtering supported by the graphics card? |
8 | Graphics for non Graphics Designers Are there resources for programmers with little to no graphics design talent, but still want to make good enough graphics for their own programs? I'm interested in both 2d and 3d computer generated graphics techniques, as well as free graphics repositories. |
8 | How to render infinite universe? I'm curious what the best practices are in the game development industry for rendering a 3D universe. To be more specific: the data points are given and static; each point has a position, color and size; the entire data set is much larger than available memory; and the user should be able to "zoom out" to see the bigger picture at once. The most naive approach would be to split the universe into cubes and render only what's visible, but I'm not sure how I should implement the "zoom out" in this scenario. Should I precompute cubes for each possible zoom level? Or maybe there are better approaches? I'm looking for a technology agnostic solution. |
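One technology-agnostic pattern for exactly this shape of problem is an octree (or any spatial hierarchy) in which every interior node stores a small, fixed-budget downsample of the points beneath it: coarse nodes answer the zoomed-out view, and zooming in descends only into visible cells, streaming them from disk on demand. A sketch, with the culling and projection helpers left as declared-but-hypothetical:

```cpp
#include <cstdint>
#include <vector>

struct Star { float x, y, z, size; uint32_t color; };
struct Camera;  // whatever the engine uses

struct OctreeNode {
    float center[3];
    float halfSize;
    std::vector<Star> representatives;  // fixed budget, e.g. ~1000 points per node
    int32_t children[8];                // indices into the node pool, -1 if absent
};

// Hypothetical helpers: frustum test, on-screen size estimate, point submission.
bool Visible(const OctreeNode& n, const Camera& cam);
float ProjectedSizePx(const OctreeNode& n, const Camera& cam);
void DrawPoints(const std::vector<Star>& pts);

void RenderNode(const std::vector<OctreeNode>& pool, int32_t idx, const Camera& cam) {
    const OctreeNode& n = pool[idx];
    if (!Visible(n, cam)) return;              // skip whole subtrees outside the view
    if (ProjectedSizePx(n, cam) < 64.0f) {     // node is small on screen, so the
        DrawPoints(n.representatives);         // coarse sample is indistinguishable
        return;
    }
    for (int32_t c : n.children)               // otherwise refine into the children,
        if (c >= 0) RenderNode(pool, c, cam);  // which can be streamed in lazily
}
```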
8 | Is there a sprite sheet creator that satisfies these requirements? I'm looking for a decent sprite sheet packer. Features: Command line interface for Linux. Effective packing algorithm. Configurable padding between sprites in the sheet. Configurable fixed sprite sheet size (i.e. pack into many sheets). Duplicate sprite detection, preferably with some tolerance to allow for FP errors when packing rasterized vector images. Auto crop by transparent pixels (i.e. the sprite is cropped when packed and this is correctly reflected in the data). Support for web graphics formats (jpg, png at least). Reasonable data output format (sprite sheet index). Paid software is okay as long as the price is reasonable. |
8 | What's the difference between using hardware accelerated APIs and the OS's drawing API? On Windows, I can do drawing with the OS API without OpenGL or D3D. The code I am writing will make calls to a device driver and tell the GPU what to do regardless, right? How is using OpenGL different exactly? Do these libraries have code that will interact with the GPU differently than just the Windows API does? |
8 | what is order of implementation of matrix transforms? What is the meaning of Tr·Sh·Ro·Sc = M? How is the matrix M written to form the graphic transformation: 1st Sc (Scale), 2nd Ro (Rotate), 3rd Sh (Shear), 4th Tr (Translate)? Is the matrix M above written properly in a "post multiplicative" format T·Sh·R·Sc, but executed Scale, Rotate, Shear, and finally Translate? |
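Assuming the column-vector convention ($v' = M\,v$), the answer is yes: the factor written nearest the vector is applied first, so the left-to-right writing order is the reverse of the execution order:

$$v' = M\,v = T\,\bigl(Sh\,\bigl(R\,(Sc\,v)\bigr)\bigr)
\qquad\text{with}\qquad M = T \cdot Sh \cdot R \cdot Sc,$$

so the vector is scaled first, then rotated, then sheared, and finally translated. With the row-vector convention ($v' = v\,M$) the same execution order requires the reversed product $M = Sc \cdot R \cdot Sh \cdot T$.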
8 | Gravity independent of game updates per second Edit: Just for clarification, my sprite's 'movement' isn't the problem. If I set my Time variable to 4 seconds, then it will cross the screen in exactly 4 seconds regardless of logic update rate per second, rendering rate per second, or screen resolution. So I am pretty sure I'm scaling the sprite's movement correctly. What I'm pretty sure I'm not doing correctly is scaling acceleration. Original Question: I'm trying to implement gravity in my 2d platformer and am having a few problems understanding how to keep it consistent when I change my updates per second. Here's what I have. My Gameloop overview: Currently, my gameloop renders at the maximum rate allowed by the device and the updates are 'clamped' to an upper limit (at the moment, 60 per second). I am working on the assumption that most of the time my game will have no problem hitting this, even if the actual rendering dips. Thus I am doing all of my calculations based on a Delta Time derived from this fixed 'ticksPerSecond' value. I don't know for sure that this will remain at 60; I may decide at some point during development to lower this upper limit. My gravity variable declarations and initial values: At the moment, I have float spriteYTime = 7f (the initial amount in seconds that this sprite will take to move from the top of the screen to the bottom); float fallAccel = .5f (the value that will be subtracted from the sprite's fall time, to make it fall ever faster); float terminalVelocity = 1.5f (cap speed at this rate, 1.5 seconds). My sprite's position is worked out from dt as follows. Delta time: float ticksPerSecond = 60; float dt = 1f / ticksPerSecond. Velocity: spriteYVel = 1 / spriteYTime. Update position: spriteYReal = spriteYReal + (spriteYVel * dt). Convert to screen coordinates (will be drawn at this Y coordinate): spriteyScreen = (int) (furmanYReal * height). My gravity code: if my sprite's state is 'f' (meaning falling), then apply gravity: if (sprite.getState('f') == true) { spriteYReal = spriteYReal + (spriteYVel * dt); sprite.yScreen = (int) (spriteYReal * r.height); spriteYTime -= fallAccel; spriteYVel = 1 / spriteYTime; if (spriteYTime < terminalVelocity) spriteYTime = terminalVelocity; } (reduce the time by the fallAccel amount and update the velocity value based on the new time value, so the sprite falls slightly faster this frame compared to the last, then check that the speed isn't faster than terminal velocity). The problem: Now, this does work, but if I change my ticksPerSecond value, it goes wrong (it falls at different rates). I know that 'Earth Normal' gravity is approximately 9.8 meters per second per second, but this is measured 'per second' whereas I (think) I need to work with 'per frame'. This is where I'm getting confused. So, take this example: if I set my initial Time value to 4 seconds, then if it remained constant at this speed, it would take 4 seconds to reach the bottom; if I changed my ticksPerSecond, because the time value is worked out using delta, it would still take 4 seconds. But if I apply 'fallAccel' to the Time value (i.e. subtract it), it goes wrong when I change the ticksPerSecond value. Why? How can I get this to fall at the same rate regardless of the value of 'ticksPerSecond'? Any help in understanding this would be appreciated. |
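For what it's worth, the tick-rate dependence comes from subtracting a fixed fallAccel once per update: run twice as many updates per second and the sprite accelerates twice as fast. Acceleration is "per second per second", so it needs one factor of dt when it changes the velocity, and the velocity needs another factor of dt when it changes the position. A sketch in that style; the constants are made-up tuning values in the question's normalized screen-height units:

```cpp
// Units: position in screen heights (0..1), velocity in screen heights per
// second, gravity in screen heights per second squared.
const float GRAVITY      = 2.0f;   // made-up tuning value
const float TERMINAL_VEL = 1.5f;   // made-up tuning value

// Called once per logic tick; behaves the same at 30, 60, or 144 ticks per
// second because every rate of change is scaled by dt, never applied per tick.
void Fall(float& posY, float& velY, float dt) {
    velY += GRAVITY * dt;          // dt here (acceleration -> velocity)...
    if (velY > TERMINAL_VEL) velY = TERMINAL_VEL;
    posY += velY * dt;             // ...and dt here (velocity -> position)
}
```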
8 | How can SpriteBatch use a single texture asset as multiple independent object instances I'm using LibGDX to create a game, but I'm encountering a problem with SpriteBatch. Whenever two objects that use the same image for their sprite come onto the screen, the new object replaces the old object. So, for example, one ship will come onto the screen, and when a ship with the same sprite comes onto the screen, the old ship will disappear and the new ship will have all the damage and other characteristics of the old ship. I could always make multiple copies of the same image and put them in the assets folder, but that seems unnecessary. Does anyone have any ideas? Am I using SpriteBatch incorrectly? Edit Here are a couple of examples of rendering methods in the game loop. The method for rendering ships private void renderDoodad(Doodad doodad) if (!doodad.isDisabled()) batch.begin(); if (doodad instanceof Enemy) batch.draw(doodad.getSprite(), doodad.getX(), doodad.getY()); Followed by some code about what to do if the ship is the player's ship. But the player's ship is rendering fine, so I left that out. Then there's the method for rendering shots private void renderShot(Shot shot) if (!shot.isDisabled()) batch.begin(); batch.draw(shot.getSprite(), shot.getX(), moveDoodadY(shot, shot.getDirection().getVal() * getApp().getGraphics().getDeltaTime())); batch.end();
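For reference, a single libGDX Texture can back any number of independent Sprite instances; a minimal sketch (class name and asset path are made up):

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.Sprite;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    public class TwoShips extends ApplicationAdapter {
        private SpriteBatch batch;
        private Sprite shipA, shipB;   // two independent sprites, one shared texture

        @Override public void create() {
            batch = new SpriteBatch();
            Texture shipTexture = new Texture("ship.png");  // hypothetical asset
            shipA = new Sprite(shipTexture);                // each Sprite keeps its
            shipB = new Sprite(shipTexture);                // own position/rotation
            shipA.setPosition(100, 100);
            shipB.setPosition(300, 200);
        }

        @Override public void render() {
            batch.begin();              // begin/end once per frame,
            shipA.draw(batch);          // draw every object in between
            shipB.draw(batch);
            batch.end();
        }
    }

If two ships still swap state with this pattern, the cause is likely shared mutable state (e.g. both ships holding a reference to the same Sprite or model object) rather than SpriteBatch itself.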
8 | Planet sized quadtree terrain precision I've been working on a planet sized and shaped terrain lib for a while (in WebGL and Unity) and have a couple of working implementations a quadtree cubesphere and circular clipmaps. But both of them suffer from the same problem. In the GPU vertex shader I move the vertices to their correct location on the planet. But 16 bit floats can't handle values that go into the millions. So when you get close to any vertex in the terrain it jiggles from the GPU rounding errors. With the quadtree solution this is how I put vertices where they belong in the shader vec3 newPosition = normalize(StartPosition.xyz + (WidthDir * position.x + HeightDir * position.y) * Width) * Radius; In the circular clipmaps I create a point at 0,0,1 and use two quaternions to rotate it to where it belongs (one for where the point is in the clipmap and one for where the clipmap should appear on the planet), then multiply it by the planet radius. But both implementations require the multiplication by the planet's radius; the clipmap also needs one quaternion with precision far higher than a 16 bit float (at ground level it needs a precision of about 1/6,000,000 of PI). So does anyone know how to handle this problem on the GPU so that at least the points closest to the camera have position values close to zero and no large number multiplication? Or is the only solution to set the positions on the CPU and just do some math work to make sure the vertices on the side of the planet closest to the camera always have values close to zero?
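One common approach (not necessarily the only one) is camera-relative rendering: keep absolute positions in doubles on the CPU, subtract the camera position there, and upload only small offsets so the GPU never sees planet-scale magnitudes. A minimal sketch with illustrative numbers:

    // Camera-relative rendering sketch: subtract in double precision FIRST,
    // then narrow to float; the difference is small, so the float conversion
    // loses almost nothing even when absolute coordinates are in the millions.
    public class CameraRelative {
        public static void main(String[] args) {
            double[] vertexWorld = { 6_371_000.0, 12.5, -3.2 };  // absolute position, metres
            double[] cameraWorld = { 6_371_000.0, 10.0,  0.0 };  // camera in the same frame
            float[] vertexRelative = new float[3];
            for (int i = 0; i < 3; i++) {
                vertexRelative[i] = (float) (vertexWorld[i] - cameraWorld[i]);
            }
            // Upload vertexRelative and render with a view matrix whose
            // translation is zero (the camera sits at the origin of this frame).
            System.out.printf("offset = (%f, %f, %f)%n",
                    vertexRelative[0], vertexRelative[1], vertexRelative[2]);
        }
    }

The same idea applies per quadtree patch: store each patch's origin relative to the camera (or to a floating origin that is periodically re-centred) and let the shader work only with the small local offsets.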
8 | Project 2d click touch onto 3d plane I have a 3d scene that contains an infinite plane that is NOT parallel to the camera (so every screen coordinate corresponds to a point on this plane; in other words, there are no possible invalid clicks touches). I have all the information you'd expect to work with camera pos, dir, fov, aspect ratio, the 3d points defining the plane (either as a triangle, or as a normal distance), the view projection matrices that project it onto the screen, etc. I haven't found anything online that answers this in a straightforward way. Please no "use this or that library". Also, if you're going to say "invert the matrix", please give a way to go about doing that (I don't think the matrix is easily invertible, as the "project to 2d" process involves the divide by w step, rendering the final translation between 3d and 2d non affine).
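A sketch of the usual approach, assuming you first unproject the click into a world-space ray (the divide by w is undone by unprojecting the click at two depths, near and far, with the inverse view-projection matrix and dividing each result by its own w); names here are hypothetical:

    public class RayPlane {
        // Intersect ray (origin + t*dir) with the plane n.p + k = 0.
        // Returns the world-space hit point, or null if the ray is parallel.
        static double[] intersect(double[] origin, double[] dir, double[] n, double k) {
            double denom = n[0]*dir[0] + n[1]*dir[1] + n[2]*dir[2];
            if (Math.abs(denom) < 1e-9) return null;       // parallel: no hit
            double t = -(n[0]*origin[0] + n[1]*origin[1] + n[2]*origin[2] + k) / denom;
            return new double[] {
                origin[0] + t * dir[0],
                origin[1] + t * dir[1],
                origin[2] + t * dir[2],
            };
        }

        public static void main(String[] args) {
            // Ground plane y = 0 (n = up, k = 0), ray pointing down from (0, 5, 0).
            double[] hit = intersect(new double[]{0, 5, 0}, new double[]{0, -1, 0},
                                     new double[]{0, 1, 0}, 0);
            System.out.println(hit[0] + ", " + hit[1] + ", " + hit[2]); // 0.0, 0.0, 0.0
        }
    }

Note that only the full view-projection matrix needs inverting, which is a standard 4x4 inverse; the non-affine part lives entirely in the per-point divide by w.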
8 | What is "ROAM" related to terrain rendering? I saw it mentioned on this question, but no one explained what it is. |
8 | Where can I find good designers animators for the graphics of a 2d game? Possible Duplicate Where to hire 2D sprite artists? I've got a little bit of experience in this field already, and I noticed that it's way better to look for animators than graphic designers, because games need explosions, looping sprite sheets, etc. But it would be even better if I could go somewhere and find people with experience in making graphic assets FOR GAMES, not for general markets. Is there some specialized site or something? |
8 | What's the difference between using hardware accelerated APIs and the OS's drawing API? On Windows, I can do drawing with the OS API without OpenGL or D3D. The code I am writing will make calls to a device driver and tell the GPU what to do regardless, right? How is using OpenGL different exactly? Do these libraries have code that will interact with the GPU differently than just the Windows API does? |
8 | What do you use to create sprite graphics? Possible Duplicate What tools do you use for 2D art sprite creation? What do you folks suggest for creating sprite graphics and sprite sheets? I fiddle with Pixelformer and Tile Studio. Pixelformer has a kickin' interface; it is quick and easy to make graphics, but a bit cumbersome if you want to make a sprite map. Tile Studio is an interesting mix of tiles and maps, but it is a bit buggy and basic. The Adobe tools just don't really seem to handle tiny graphics well. (There is a previous posting of this question, but it is a year old and I was hoping for further updated input from the community.)
8 | Transformations between coordinate systems In a graphics engine, I have three three dimensional orthogonal coordinate systems, O, A and B. A and B are the result of two different transformations from O. I now want to calculate the transformation matrix R, which takes you from A to B. R should be the rotation and translation with respect to coordinate system A, not the original coordinate system O. Which of the following is correct? (1) B = A R or (2) B = R A By doing some simple worked examples, it seems that (1) is correct. However, my intuition says that (2) is correct, because the transformation to A should be applied before the transformation to B, via R. Which is it? Thanks :)
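A short worked check, assuming column vectors and that each matrix maps its system's local coordinates into O (so a point p expressed in A's frame lands at A p in O, and R maps B's frame into A's frame):

    \[ p_O = B\,p_B = A\,(R\,p_B) \;\Rightarrow\; B = A\,R, \qquad R = A^{-1}B \]

So (1) matches the stated requirement that R is expressed relative to A; intuition (2) would correspond to an R expressed in O's frame instead, which is why the worked examples and the intuition disagree.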
8 | Including sprite file for mobile games I'm making a simple online RPG for Android & iOS using HTML5 & PhoneGap, and was wondering should I include the sprite file with the game download (because of bandwidth)? What should I do when I modify the sprite file in the future, i.e. a forced update?
8 | How do I fix this unintentional chromatic aberration on a lightmap? Some strange rainbow halos (front and left) appear after computation of a lightmap for direct light. The formula is dot * dot2 / (dist * dist) * dl > intensity, with gamma correction; the lightmap is computed as a vec3 and ultimately cropped to 3 bytes. However, the brightness of the region in question fits into 255 nicely, and even calculating in bytes instead of floats beforehand results in just a subtle "oil painting" effect and does not produce such an artifact across the entire image. My best guess is that this artifact appears from a base change in the floating point math, but I don't know for sure, nor do I understand what exactly is happening.
8 | What does the term 'photorealistic' really mean? I was wondering about the term 'photorealistic' in regard to rendering, and how it is used. Is it used to describe a shader (or set of shaders) that has certain quantifiable features? Or any rendering that's not meant to be abstract, like the cartoon effect seen in Borderlands? Or is it just a subjective term meaning 'really, really realistic'?
8 | How can I include vertex color information in .OBJ files? The .obj files I export are missing data for vertex colors. Is there a way to include color information in the .obj file? If not, what are the alternatives? |
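For what it's worth, the official OBJ spec has no vertex-color field, but there is a widely used unofficial extension that appends RGB values (in 0..1) to each v line; tools such as MeshLab, and some versions of Blender's importer, accept it. A sample with illustrative values:

    # vertex position followed by RGB in [0,1] (unofficial extension)
    v 0.0 0.0 0.0 1.0 0.0 0.0
    v 1.0 0.0 0.0 0.0 1.0 0.0
    v 0.0 1.0 0.0 0.0 0.0 1.0
    f 1 2 3

If the consuming tool doesn't read this extension, the usual alternative is exporting to a format with first-class vertex colors, such as PLY.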
8 | Correctly bitmasking path tiles based on existing paths Currently, I have a bitmasking implementation that sometimes incorrectly bitmasks the tiles. Conventionally it is done correctly, and the math etc. is sound, but it achieves results such as the following This is similar to what you might expect, and looks fine when doing simple x shaped crossings. However, I would like the images to remain unchanged in some of these cases. For instance, the parallel paths in the third image would remain parallel paths instead of being bitmasked into many crossings. In the first and second images, only the parts of the path highlighted in the following images would be bitmasked (different parts of the path labelled for clarity). If I purely update only the tiles that make up the newest path, I again get good results in simple x shaped crossings, however I get results such as those demonstrated in the following image What could I do to achieve results where this would not occur? I have thought about adding some sort of flag to the tile to indicate that it is a crossing, but how could I detect that?
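A minimal sketch of the 4-bit neighbour mask this kind of autotiling usually computes, with the per-path restriction left as the open design question; all names are hypothetical:

    // 4-bit autotile mask from the four orthogonal neighbours; the value
    // 0..15 indexes the tile sheet. To keep parallel paths independent,
    // one option is to pass in only the boolean grid of the path being
    // edited, and mark a cell as a crossing only where two DIFFERENT
    // paths occupy the same cell.
    class Autotile {
        static int maskAt(boolean[][] path, int x, int y) {
            int m = 0;
            if (y > 0                  && path[x][y - 1]) m |= 1; // north
            if (x < path.length - 1    && path[x + 1][y]) m |= 2; // east
            if (y < path[0].length - 1 && path[x][y + 1]) m |= 4; // south
            if (x > 0                  && path[x - 1][y]) m |= 8; // west
            return m;
        }
    }

Under that scheme, detecting a crossing becomes a membership test (does this cell belong to more than one path id?) rather than something inferred from the merged tile grid.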
8 | Precomputing Visibility Having noticed that UDK (Unreal) and Unity 3 include similar pre computed visibility solutions that unlike Quake are not dependent on level geometry, I've been trying to figure out how the calculation is done. The original Quake system is well documented you divide the world into convex volumes that limit both the camera and the geometry. Each volume has a list of all the other volumes that are visible from it. Visibility would be computed by firing rays at some random distribution of points in the target volume and seeing if any hit. And because the position of the camera in the source volume could have an effect, those thousands of rays would have to be fired from multiple places in the source cell. So what I'm wondering is if there's been any fundamental change to this basic scheme in the intervening 15 or so years? I can see how to adapt it to a UDK Unity scheme that has regular source volumes and deals mostly with arbitrary meshes as the targets, but is there a better way than stochastic ray testing?
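For concreteness, a sketch of the stochastic cell-to-cell test described above; Volume and World are hypothetical interfaces standing in for an engine's own types:

    import java.util.Random;

    // Sample random point pairs between two volumes and declare the pair
    // mutually visible on the first unoccluded ray. The "not visible"
    // answer is only correct up to sampling error, which is why these
    // bakes fire thousands of rays per cell pair.
    interface Volume { double[] randomPointInside(Random rng); }
    interface World  { boolean segmentBlocked(double[] from, double[] to); }

    class Pvs {
        static boolean cellSeesCell(Volume source, Volume target, World world, int samples) {
            Random rng = new Random();
            for (int i = 0; i < samples; i++) {
                double[] from = source.randomPointInside(rng);
                double[] to   = target.randomPointInside(rng);
                if (!world.segmentBlocked(from, to)) return true;  // one clear ray suffices
            }
            return false;
        }
    }

The interesting engineering is mostly around this loop (cell generation, ray budgets, conservative rasterization-based alternatives), not in the loop itself.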
8 | How do games deal with Z sorting partially transparent foliage textures? I was busy implementing basic transparency in a prototype I'm working on when something occurred to me. In order for a given texture's transparency to work as expected, the (semi )transparent texture must be drawn after whatever is behind it, right? Well, if we take for example a tree or shrub in a game like Skyrim, the texture(s) that make up the foliage on that tree or shrub must include some transparency somewhere, right? A vertex perfect leaf model would be far too resource intensive. But the player can move around, and sometimes through, any plant at will, thus changing the relative position of all textures to the camera. Doesn't this mean that the game has to constantly Z sort all textures, both between models and even within a single plant model, whenever the player moves (so potentially every single frame)? Isn't that very resource intensive? How do games with lots of partially transparent textures deal with this? |
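For context, the common split (hedged, as engines differ): foliage is usually rendered with alpha testing, a hard per-pixel discard threshold that writes depth and therefore needs no sorting at all, while genuinely semi-transparent surfaces are sorted back to front per object rather than per texture or per triangle. A sketch of that per-object sort:

    import java.util.Comparator;
    import java.util.List;

    // Sort transparent object positions back to front by squared distance
    // to the camera. Per-object sorting is usually "good enough"; exact
    // per-triangle ordering is rarely paid for, which is exactly why
    // leafy vegetation prefers alpha testing instead.
    class TransparencySort {
        static void sortBackToFront(List<double[]> positions, double[] cam) {
            positions.sort(Comparator.comparingDouble((double[] p) -> {
                double dx = p[0] - cam[0], dy = p[1] - cam[1], dz = p[2] - cam[2];
                return dx * dx + dy * dy + dz * dz;
            }).reversed());   // farthest first
        }
    }

So a Skyrim-style shrub typically costs no sorting at all: its leaf cards are alpha-tested cutouts, and only soft effects like glass or smoke take the sorted transparent path.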
8 | How to generate a star onto a render texture with spherical warping How would one procedurally generate a star in a compute shader that looks like one of these two at any size needed? Also, any way to transfer this onto a spherical map would be appreciated. The goal is to create a spherical skybox of stars (stars are pre generated, and not just decoration). Also, I have searched about everywhere for such equations and or tutorials but have not found anything so far. So far I've got accurate positioning of the stars on the spherical skybox, but lack the equations to get them looking like I want, preferably the one on the right. This below is what I currently have 5 to 15 ms processing time, a little over 30k stars. Using Unity 2019.3.1f1; needs to be compute shader compatible (if not, I will convert it somehow). Render Texture output.
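One possible shape function (a guess at the look, not a known recipe): combine an inverse-square core glow with thin horizontal and vertical streaks that fade with radius. Sketched on the CPU for clarity; all constants are illustrative tuning knobs, and dx, dy are offsets from the star centre in normalized texture units:

    class StarShape {
        // Intensity in [0,1] at offset (dx, dy) from the star centre.
        static float starIntensity(float dx, float dy, float size) {
            float r2 = dx * dx + dy * dy;
            float core = size / (r2 + 1e-4f);                    // inverse-square glow
            float spikes = size * 0.05f *
                    (1f / (Math.abs(dx) + 1e-3f) + 1f / (Math.abs(dy) + 1e-3f))
                    * (float) Math.exp(-r2 * 4f);                // streaks fade with radius
            return Math.min(1f, core + spikes);                  // clamp for display
        }
    }

The same arithmetic ports directly to a compute shader kernel; for the spherical map, evaluating it in the star's local tangent plane (rather than raw UV space) avoids the stretching near the poles of an equirectangular target.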
8 | Why is Y up in many Games? I learned at school that the z axis is up. It is the same in modeling software like Blender. However in many games the y axis is up. What is the reason? |
8 | Why does clipping take place after illumination? Can you explain why clipping takes place after the illumination process in the rendering pipeline? Wouldn't it be cheaper to clip first and then do the illumination?
8 | How do I generate a random curve for landscape (like Worms)? Possible Duplicate How do I generate terrain like that of Scorched Earth? How can I generate Worms style terrain? I must build a random curve line for a 2D game on a bitmap (like in Worms, viewed from the side). My teacher said that I should do it using terrain generation through recursion (I work in Delphi 7). I understand the main principle, but I don't know how to express it as code. All measurements are according to the screen resolution.
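The recursive scheme the teacher likely means is midpoint displacement: split each segment, nudge the midpoint vertically, and halve the jitter at each level. A minimal sketch in Java (the asker's Delphi 7 version would follow the same structure; array size and endpoint heights are illustrative):

    import java.util.Random;

    // Midpoint displacement: recursively split each segment and displace the
    // midpoint; halving the jitter per level keeps the curve coherent.
    public class Terrain {
        static void midpoint(double[] h, int lo, int hi, double jitter, Random rng) {
            if (hi - lo < 2) return;                  // nothing left to split
            int mid = (lo + hi) / 2;
            h[mid] = (h[lo] + h[hi]) / 2 + (rng.nextDouble() * 2 - 1) * jitter;
            midpoint(h, lo, mid, jitter / 2, rng);
            midpoint(h, mid, hi, jitter / 2, rng);
        }

        public static void main(String[] args) {
            double[] heights = new double[129];       // 2^7 + 1 samples across the screen
            heights[0] = 60; heights[128] = 80;       // endpoint heights (illustrative)
            midpoint(heights, 0, 128, 40, new Random());
            // heights[x], scaled to screen width/height, gives the terrain
            // surface to draw onto the bitmap column by column.
        }
    }

The initial jitter controls how mountainous the result is, and the halving factor (here 2) controls roughness; values other than 2 give rougher or smoother fractal curves.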
8 | Why do games ask for screen resolution instead of automatically fitting the window size? It seems to me that it would be more logical, reusable and user friendly to implement flexible, responsive UI layout over a 3d or 2d screen, which can then be run on any screen resolution. Some modern games auto detect screen resolution and adjust the game to that, but the option to change the resolution still remains in the settings. Why is this option there? My guess is that this is one way to provide older machines with a way to improve performance by rendering less graphics and just stretching them across the screen, but surely there are better ways of improving performance (choosing different texture and model qualities, for example). |