Have both Cubemaps and TextureArray in a single texture register? I want to have both Point Lights as well as Spot Lights. For the Point Lights I have a TextureCubeArray and for the Spot Lights I have a Texture2DArray. Is it possible to combine these two into a single Texture2DArray and treat the Point Light shadow maps as cubemaps, and treat the Spot Light shadow maps as single textures? I would like to bind only one array instead of multiple ones.
How to best utilize depth buffer precision Are there strategies to minimize depth buffer precision problems with hyperbolic depth buffers, such as the ones resulting from perspective projection matrices, or with depth buffers in general? For example, graphics APIs usually give an option to change the depth range, which might influence precision. It's possible to linearize non-linear depth buffers, for whatever reason. There's the option of floating point depth buffers versus non floating point depth buffers. It's possible that changing the values in the projection matrix has an effect on the resulting range & precision of the depth buffer. How do all of these things interact with the resulting range & precision, or with each other, and how do I get the maximum out of my depth buffer? Are there general good practices one should adopt, regardless of project specifics?
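As an illustration of the kind of knobs the question refers to, here is a minimal OpenGL-flavoured C++ sketch of one common strategy (reversed-Z with a floating point depth buffer). The setup below is an assumption for illustration, not something taken from the question.

    // Reversed-Z sketch: map the far plane to depth 0 and the near plane to depth 1,
    // so a float depth format keeps its precision where the hyperbolic mapping is worst.
    #include <GL/glew.h>

    void setupReversedZ()
    {
        // Use a [0,1] clip-space depth range instead of the default [-1,1] (OpenGL 4.5+).
        glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

        // Clear to 0 (the "far" value under reversed-Z) and flip the depth test.
        glClearDepth(0.0);
        glDepthFunc(GL_GREATER);

        // The depth attachment would be created with a float format, e.g. GL_DEPTH_COMPONENT32F,
        // and the projection matrix built with near/far swapped.
    }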
Unity particles rendering on top of camera space UI I'm using LWRP in Unity 2019.1.12f1. The UI is in Screen Space - Camera mode. The shader used for the particles is Lightweight Render Pipeline/Particles/Unlit (Transparent, Premultiply). The particles are drawn on top of the UI. Any ideas?
How many LoD versions of a model should I have? Many games facilitate better performance by increasing or decreasing the number of triangles/polygons that are drawn, depending on how close the camera is to said object. Mountains, viewed from far away, could become literally flat. Models would steadily lose resolution as you walk further away from them, etc. If I want to implement such a system, how many different permutations of the same model should I have? Suppose the model is 20,000 triangles when viewed up close, but then I halve that when the camera moves away, so now it is 10,000. Then again, when the camera is even further away, making it 5,000 triangles. Would three versions be enough? Or is this really just an arbitrary implementation choice? Does it depend on the game itself?
How to prepare a sprite for a game? For example, I created my character drawing in Paint.NET using the Pencil tool and now it's a sprite-like asset. If I want him to move, do I need to create many different poses for him and render them according to a timer? Or do I need to have each pose side by side in a single image, then render the rectangle I need, thus loading the picture only once? Thanks.
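The second option described above is usually called a sprite sheet. A minimal C++/SDL2-style sketch of picking the source rectangle for the current frame, assuming equally sized poses laid out in a single row (the function name and layout are assumptions for illustration):

    #include <SDL.h>

    // Assumed layout: all poses are the same size and placed side by side in one image.
    SDL_Rect frameRect(int frameIndex, int frameWidth, int frameHeight)
    {
        SDL_Rect src;
        src.x = frameIndex * frameWidth;  // walk along the row
        src.y = 0;
        src.w = frameWidth;
        src.h = frameHeight;
        return src;
    }

    // Usage: advance frameIndex from a timer, then
    //   SDL_RenderCopy(renderer, sheetTexture, &src, &dst);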
How much slower is it to draw on "half pixels"? I've noticed that games like Diep.io are using fractional coordinates for thin stroke lines on the grid. I have even tried this myself, by adding 0.5 to all of the positions for the grid lines to make the lines thinner. I heard from a friend that drawing on half pixels causes the GPU to do more work to smooth it out, like anti-aliasing. I am really trying to make my game look nice by making the smoothest lines I can. How much slower is it really, and should I use it in an online competitive 2D game using Canvas?
Scale screen space quads (used in font rendering) I have quads positioned in <0, 1> x <0, 1> coordinates. I use this system for font rendering. In the vertex shader I have gl_Position.xy = 2.0 * POSITION.xy - 1.0, which brings the position to screen space <-1, 1> x <-1, 1>. My quads are created from two triangles ABD, BCD:

    D C
    A B

Now, I want to scale the quad by a scale factor. Can I achieve this with this data? I have tried to change the geometry and store the quad center and half sizes, then calculate the final position as gl_Position.xy = 2.0 * (CENTER + POSITION.xy * HALF_SIZE.xy) - 1.0. This way, I can translate points to the origin, scale them and translate them back. However, this scales a single quad correctly, but two closely neighboring quads are no longer closely neighboring, since the space between them stays constant. So the final question: how can I scale quads used for font rendering, so that the text is still readable, without large gaps between glyphs?
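One way to look at the gap problem is to scale every glyph quad about a single shared origin (e.g. the start of the text line) instead of each quad's own center, so that the advances between glyphs scale along with the glyphs. A small C++ sketch of that idea; the types and the textOrigin parameter are assumptions for illustration.

    struct Vec2 { float x, y; };

    // Scale a glyph-quad corner about a shared text origin so glyph positions
    // and the spacing between glyphs shrink or grow together.
    Vec2 scaleAboutOrigin(Vec2 corner, Vec2 textOrigin, float scale)
    {
        Vec2 out;
        out.x = textOrigin.x + (corner.x - textOrigin.x) * scale;
        out.y = textOrigin.y + (corner.y - textOrigin.y) * scale;
        return out;
    }

    // Applying this to all four corners of every quad (or doing the equivalent in the
    // vertex shader with textOrigin as a uniform) keeps the glyph layout intact.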
LWJGL Text Rendering Currently in my project I am using LWJGL and the Slick2D library to render text onto the screen. Here is a more specific example:

    Font f = new Font("Times New Roman", Font.BOLD, 18);
    font = new UnicodeFont(f);
    font.getEffects().add(new ColorEffect(Color.white));
    font.addAsciiGlyphs();
    try {
        font.loadGlyphs();
    } catch (SlickException e) {
        e.printStackTrace();
    }

Then I use font.drawString to write onto the screen. This is a quick and easy way, but it has a lot of disadvantages. For example, font.loadGlyphs takes a very long time, 1-3 seconds, so when I want to change a color or font type I have to wait 1-3 seconds, which means I cannot do it while rendering (i.e. I can't have different colored text on the same screen). My question is: what is a better way of drawing multicolored text onto the screen? I use Slick2D only for the text rendering, so maybe I can fully get rid of the library and draw text some other way... If you have an answer please leave a quick short example. Thanks!
How can I keep straight alpha during rendering particles? Recently, I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas go lighter and even totally white. I mainly focus on additive and blend mode, that is ADDITIVE: srcAlpha, 1 versus BLEND: srcAlpha, 1 - srcAlpha. I tried a technique called premultiplied alpha. This technique gets you the right RGB value, which is all you need on screen. As for the alpha value, it worked well with BLEND mode, but not ADDITIVE mode. As you can see from the parameters, BLEND mode always keeps its value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it ends up too big or too small compared to the RGB. Now what can I do? Any help would be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial software called "Particle Illusion". You can create various particles and then save the scene to a texture, where you can choose to remove the background of the particles. Now I have changed the title. For software like Maya or AE, what I want is called straight alpha.
How can I render cloud patterns like in these examples? I have a quick question. I see in many games, usually in the menu, some moving "clouds" in the background, apparently additively blended into each other, which does a really nice job immersing the player, in my opinion. A couple of examples that jump to mind right now are the Far Cry 3 main menu background (https://www.youtube.com/watch?v=B5XcMx3GjPA) and the Plague Inc main menu background (https://www.youtube.com/watch?v=RQv60ywrLxU, the first seconds show it). These cloudy patterns seem like some kind of noise to me, like Perlin or other. So, how would you proceed to achieve that kind of blurred cloud effect with vivid colors? More specifically, would you pre-generate sets of clouds and include them in the game package? Or generate them on the fly? On the CPU as a regular texture, or on draw in the GPU shader program? I am interested in mastering this kind of visual effect, and as such any help pointing me in the right direction would be appreciated. Thanks.
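Cloud-like patterns of this sort are commonly built from fractal ("fBm") noise: several octaves of a smooth noise function summed at increasing frequency and decreasing amplitude. A minimal C++ sketch of that general technique using a simple hash-based value noise; the hash constants and octave count are arbitrary illustrative choices, not taken from any particular game.

    #include <cmath>
    #include <cstdint>

    // Cheap hash -> [0,1) pseudo-random value per integer lattice point.
    static float hash2(int x, int y)
    {
        uint32_t h = static_cast<uint32_t>(x) * 374761393u + static_cast<uint32_t>(y) * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h & 0xFFFFFF) / 16777216.0f;
    }

    // Smoothly interpolated value noise in [0,1).
    static float valueNoise(float x, float y)
    {
        int xi = static_cast<int>(std::floor(x)), yi = static_cast<int>(std::floor(y));
        float xf = x - xi, yf = y - yi;
        float u = xf * xf * (3.0f - 2.0f * xf);   // smoothstep fade
        float v = yf * yf * (3.0f - 2.0f * yf);
        float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
        float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
        float top = a + (b - a) * u;
        float bot = c + (d - c) * u;
        return top + (bot - top) * v;
    }

    // Fractal sum ("fBm"): layering octaves gives the soft, cloudy look.
    float clouds(float x, float y, float time)
    {
        float sum = 0.0f, amp = 0.5f, freq = 1.0f;
        for (int octave = 0; octave < 5; ++octave) {
            sum += amp * valueNoise(x * freq + time, y * freq);  // scroll with time to animate
            amp *= 0.5f;
            freq *= 2.0f;
        }
        return sum;  // roughly in [0,1); map to color / alpha as desired
    }

The same loop ports directly to a fragment shader if you want to generate the pattern on the GPU instead of baking it to a texture.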
How can I render 3D pictures synced frame by frame to a keyed video? I'm developing software that aims to combine a keyed video with a 3D rendered one, synchronously frame by frame. Each frame of video should be combined with a timely corresponding frame of the 3D picture, using a matching camera position parameter. How can I perform this rendering and ensure it stays in sync?
What is the best way to render electric wires (like in GTA 4)? Off the top of my head, I see a few ways to do it: a classic mesh (but that's likely to be a lot of tris for little screen space); billboards (but the placement of the billboards may be tricky; still looks like the best solution); a box with a shader (the parametric shape would be in the shader, seems tricky). Any ideas? refs: Screenshot1, Screenshot2
SDL2: SDL_BLENDMODE_BLEND way faster. Why? I was just tinkering around with a simple ECS when I noticed drawing around 20,000 rectangles killed my framerate (~10 FPS). I thought ok, maybe it's just too many. Later I wanted to draw them transparent and set the blend mode to SDL_BLENDMODE_BLEND. Now I can draw around 60,000 of them at 60 FPS. I figured maybe it's because of pixel format conversion or something alike. When I use getWindowPixelFormat I get RGB888. Now I'm even more confused because of the lacking alpha channel. Code (abbreviated):

    SDL_Window *window = SDL_CreateWindow("Test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720, wndFlags);
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_ACCELERATED);

    while (running) {
        SDL_RenderClear(renderer);
        SDL_SetRenderDrawColor(renderer, 0xFF, 0x00, 0xFF, 0xFF);
        SDL_RenderFillRect(renderer, &rect);
        SDL_RenderPresent(renderer);
        SDL_SetRenderDrawColor(renderer, 0x00, 0x00, 0x00, 0xFF);
    }

Could someone enlighten me please?
Create multiple viewports in Minecraft? I'm new to Minecraft modding, but I'm curious about the possibility of creating a mod such that players could set up a 'security camera' with its own viewport, so that the player could 'see what the camera sees'. For example, imagine a minimap-style window that displays the viewport of the camera you've set up. Specific questions: can you have a second viewport that isn't tied to the player? Can you programmatically access a viewport? E.g. store the pixel values to a variable rather than directly rendering to the screen.
Drawing a line between 2 vectors I was trying to implement a simple mechanic by drawing a line between the sprite and the mouse, but it's not working that well:

    extends KinematicBody2D

    onready var player = $CollisionShape2D
    var pos_two
    var pos

    func _physics_process(delta):
        var vel = Vector2()
        pos_two = player.get_position()
        pos = get_global_mouse_position()
        look_at(pos)
        if Input.is_action_pressed("movethere"):
            vel = Vector2(400, 0).rotated(rotation)
        vel = move_and_slide(vel)

    func _draw():
        draw_line(pos_two, pos, Color(255, 0, 0))
SVD vs normalized decomposition for BRDF compression In the slides over here by NVIDIA, they describe methods for BRDF compression. They first create a BRDF matrix where each column (or row) corresponds to a single light direction (or outgoing view direction). This matrix is then compressed by decomposing it, either using SVD or normalized decomposition. My question is that they claim SVD gives better results than normalized decomposition for similar compression sizes. Does anyone know what the possible reason for this could be?
16-bit triangle lists vs 32-bit triangle strips When drawing geometry we may use indexed drawing, where we pass the index of the vertex we want to draw in an array. In this case we need to pick a topology for our geometry and the type of indices. Popular topologies are triangle lists, where we have to specify 3 indices for each triangle, and triangle strips, where each new triangle shares its first 2 vertices with the last 2 vertices of the previous triangle. As for index types, we have 16-bit indices, which allow for 65k vertices per model, and 32-bit indices, which allow 4 billion vertices. My GPU (1050 Ti) fetches 32-bit indices at half the rate of 16-bit indices: 16-bit indices with triangle list topology run at maximum throughput, but so do 32-bit triangle strips. As my GPU isn't that old, I expect many GPUs to be quite alike in this manner. So, is the inconvenience of having to specify geometry in strips worse than having a limit of 65k vertices per model?
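A quick back-of-the-envelope comparison of index bandwidth per triangle may help frame the trade-off. The byte sizes just restate the 16-bit/32-bit figures from the question; the "about 1 new index per triangle" figure for long strips is the usual amortized assumption.

    // Approximate index-buffer cost per triangle (ignoring strip restarts / degenerate tris).
    constexpr int bytesPerTriangleList16  = 3 * 2;  // 3 indices/tri * 2 bytes  = 6 bytes
    constexpr int bytesPerTriangleList32  = 3 * 4;  // 3 indices/tri * 4 bytes  = 12 bytes
    constexpr int bytesPerTriangleStrip16 = 1 * 2;  // ~1 new index/tri in a long strip = 2 bytes
    constexpr int bytesPerTriangleStrip32 = 1 * 4;  // ~1 new index/tri in a long strip = 4 bytes
    // So 32-bit strips still move less index data per triangle than 16-bit lists,
    // consistent with the observation that both run at full throughput on the GPU in question.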
Gameobjects disappearing when changing renderer.material.color I'm generating an array of objects using this code:

    while (currentPosition.z < poolSize)
    {
        var ringObj = GameObject.Instantiate(ring) as GameObject;
        ringObj.transform.position = currentPosition;
        ringObj.GetComponent<Renderer>().material.color = new Color(1.0f, 0f, 0f);
        // Adds the ring to the pool.
        pool[(int)currentPosition.z] = ringObj;
        updateCurrentPosition();
    }

This works fine without this line:

    ringObj.GetComponent<Renderer>().material.color = new Color(Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f), Random.Range(0.0f, 1.0f));

which should randomize the color of the object. The problem is that instead of doing this, the code generates only one object (I can see it in the hierarchy view) with the standard material color and then stops the generation. Also, the material is not changing color. I can't understand why this happens. Can someone explain this to me, please? I'm using Unity 5.3.4f1. Thank you.
How can I render fonts in a game with correct hinting? I've used AngelCode's bitmap font generator quite a bit. Though it's very good, I wonder if there would be a way to use the hinting information to provide a better and more readable result, by using hinting to provide differing thickness based on size / pixel coverage. I imagine any solution would have to use the distance field tech presented in the Valve paper on smoothing fonts while maintaining or reducing asset size. I haven't found any demos of it being used with hinting information turned on, or included in the field gradients in any way. Another way of looking at this is whether there are any font bitmap generators that will output mipmaps that still maintain their readability in the face of shrinking pixel size. I think the lower mip levels would try to guarantee fill and space where necessary, maintaining readability topology over maintaining style form, which is the point of hinting. Is there a reason you can't just render the size you want? The problem lies in the fact that font rasterisers currently don't render in 3D, and hinting information would be important in different amounts, due to the pixel density being different along different axes, even differing in importance along the length of a string due to the size reducing over distance. For example, I only want horizontal hinting in a texture that is viewed from the side, and only really want vertical hinting in a font that is viewed from below or above. This isn't meant to be a renderer that tries to render a perfect outline as accurately as possible, as hinting distorts the reality of the font; instead, this is meant to be a rendering solution for static scenes, where the scenes use 3D transformed and warped text layout. In this case, legibility is more important than the accuracy of representation of the polygon shape.
Rendering models in isometric view How do I set up the rendering and camera for an isometric game-world projection? And specifically, how do I get the images exactly the right size? What angles do I use to get the exact 2:1 isometric view? What methods place the camera in the right position? Which options should I set, like anti-aliasing off? I have tried many things: 45, 30, 35.264 degree angles. What I do is set the angle of the camera, then place the camera in front of the model, then use dolly/FOV/lens settings to get the left and right edges of the model lined up with the safe frame. Then I adjust the camera height so the bottom lines up with the bottom of the safe frame. But I keep getting jagged edges and not the isometric-style 2-width-to-1-height look.
How can I create character graphics similar to Valkyrie Profile? I want to create a game with character graphics similar to Valkyrie Profile. I don't know whether I should make my game character sprites pre-rendered or hand drawn, or which technique would let me create a similar look. Are these created pixel by pixel? What throws me off is the dither found in a lot of the images; it kind of looks like the characters from Treasure Hunter G or Harvest Moon: Friends of Mineral Town. Edit: Added the screenshots from Treasure Hunter G and Harvest Moon. I know these are pre-rendered sprites that are later incorporated into the game, just for comparison against the Valkyrie Profile ones. Also, Treasure Hunter G is a SNES game and Valkyrie Profile is a PSX one, so I expect an upgrade in graphics.
Manual per glyph rendering with SDL_ttf I'm trying to create a font atlas with SDL_ttf. My idea was to create an SDL_Surface for every character using TTF_RenderGlyph_Blended() and use TTF_GlyphMetrics() to get glyph metrics so that I can position the surface relative to a baseline. I'm struggling with correct positioning of a glyph. It looks like SDL_ttf creates a surface whose height is at least the height of the font and whose width is at least the advance of the glyph, so the surface is bigger than just the glyph. How do I extract the glyph's rectangle from the created surface? How do I position the glyph correctly relative to the baseline on the screen? I thought the y position of the glyph should be baselineY - maxy (where maxy is from TTF_GlyphMetrics), but that places the glyph too high above the baseline.
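For reference, a small C++/SDL_ttf sketch of the quantities involved. It assumes the rendered glyph surface spans the full font height, which is the crux of the question; whether you blit the whole surface at baseline minus ascent or crop the glyph box first is what distinguishes the two placements.

    #include <SDL_ttf.h>

    // Illustrates the metrics involved when placing one glyph on a baseline.
    void glyphPlacement(TTF_Font* font, Uint16 ch, int penX, int baselineY)
    {
        int minx, maxx, miny, maxy, advance;
        TTF_GlyphMetrics(font, ch, &minx, &maxx, &miny, &maxy, &advance);

        int ascent = TTF_FontAscent(font);  // baseline to top of the font box

        // Blitting the whole TTF_RenderGlyph_Blended() surface (font-height tall):
        // its top-left goes to (penX, baselineY - ascent), not baselineY - maxy.
        SDL_Rect wholeSurfaceDst = { penX, baselineY - ascent, 0, 0 };

        // Cropping just the glyph box out of that surface: inside the surface the glyph's
        // top edge sits at (ascent - maxy), and the crop is drawn at (penX + minx, baselineY - maxy).
        SDL_Rect croppedSrc = { minx, ascent - maxy, maxx - minx, maxy - miny };
        SDL_Rect croppedDst = { penX + minx, baselineY - maxy, maxx - minx, maxy - miny };

        (void)wholeSurfaceDst; (void)croppedSrc; (void)croppedDst; (void)advance;
    }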
Blender object appearing gray when all lights are off I have an issue with Blender where, when I turn my only light off (a sun lamp) and render the image, my object appears gray rather than black (and thus should not appear to the camera). I can't figure out why this is happening. Here's what I just did in my scene: added a new UV Sphere mesh (making a total of two spheres), made it visible to the camera, turned off the sun lamp (by setting energy to 0), and rendered. The result I obtained is below. I discovered this when attempting to render the first sphere with a material texture on it and it was too bright. The materials on the spheres (which are different) are very basic: there's no emit, and diffuse and specular are at default values. Could there be an issue with the way my camera is set up? Thanks in advance!
SetColorMod() with mini delay when the texture is rendered for the first time I have one image.png that I load using SDL_image. I load it by creating a surface, then I create a texture using CreateTextureFromSurface() and free the original surface. I repeat the same process, but now at the end I apply SetColorMod(128, 128, 0) to this new texture. This changes the color of the image as I wanted! Cool! This whole process is done way before the images are rendered to the screen. The original texture (without the color change) is rendered normally. The texture with the color change causes a mini delay when rendered for the first time. After that it keeps rendering without any delay. I'm using SDL 2.0.2 (on Ubuntu 14.04). A quick preview of the code to give an idea of what is happening:

    // load function
    func load() {
        newSurface, err := img.Load(fname)
        defer newSurface.Free()
        newTexture, err := renderer.CreateTextureFromSurface(newSurface)
        return newTexture
    }

    // pre-load
    func init() {
        sprite1 = load()
        sprite2 = load()
        sprite2.SetColorMod(128, 128, 0)
    }

    // render...
    func render() {
        Renderer.Clear()
        // loop for each object
        obj1 := sprite1
        obj2 := sprite2
        Renderer.CopyEx(obj1, &crop, dst, render.Angle, render.Center, render.Flip)
        // LAG HERE
        Renderer.CopyEx(obj2, &crop, dst, render.Angle, render.Center, render.Flip)
        Renderer.Present()
    }
Deferred Shading and Transparency Clarification? So, this is a bit of a simple question, but I can't seem to find a real answer anywhere. I've been looking into implementing deferred rendering using MRT in my in-progress render engine, but I'm a bit hesitant to for the following reason. I've read in a few places now that deferred shading does not support transparency natively, and if I wanted to achieve transparency, I would need to resort to forward shading of these transparent objects. But the idea of "transparency" seems a bit vague, because it seems as if there might (correct me if I'm wrong) be two types of transparency. One type is the sort of overlay transparency where I, say, tile a wall with a brick texture, then I want the wall to be 50% transparent, so I apply that on top of the already textured object. This appears to be one "mode" of transparency in which the engine, as part of its rendering pipeline, inserts this transparency manually. The next type in my mind is using a texture that already has an alpha value associated with it, i.e. a 256x256 image of a person where everything that isn't the person has an alpha value of zero. Then, if I were to billboard that image of the man onto a simple quad (which is what I'm actually trying to do in my game, for reference), would that sort of "native" transparency on the parts of the image with zero alpha still render as transparent, even in a deferred rendering situation? Does such a distinction exist between these two types of transparency, or are they merely two sides of the same coin when it comes to deferred rendering?
Could frame interpolation like that used by SmoothVideo Project be an option to increase the framerate of games without as big a performance hit? The SmoothVideo Project uses frame interpolation to increase the fps of video from 24 to 60. The results are pretty impressive. I was wondering if this could be applied to video games, and whether it would look good? It uses much less resources than rendering all the frames, so it would allow lower end rigs to render at the quality of much better rigs at some level of compromise. I know it won't be as accurate, and would slightly increase input latency, as it needs to hold on to the newest frame to be able to generate and insert the interpolated one. It's not as bad as a full frame though: by my reasoning the lag would only be the interpolation time plus half the original frame time. So for 30 fps it would be 33 ms / 2 + interpolation time. Maybe this lag would make it unsuitable for fast-paced first person games, but I doubt it would be a hindrance in slower paced games. The lag becomes lower at higher starting rates, so I would think it would certainly be worth it when going from 60 fps to 100 fps, which improves the experience (though increasingly marginally) while being extremely taxing on the system.
What is a texture atlas? I've heard about this concept, but what is it?
Forward shading with multiple shadow-casting lights I am currently thinking about how to organize shadowing and lighting. We use forward rendering, and currently our algorithm looks like this: collect all items that are visible in the view; for each item, collect a list of lights whose attenuation radius the item is in (each item keeps a list of lights); determine the shadow light by distance from the main character (only one light can currently cast a shadow); render the scene using a constant buffer for the currently processed item to shade it (each item is rendered with a constant buffer which contains light properties; the number of lights per item is predefined, so we have a Light[16] array and numLights in the constant buffer). How would I do multiple shadow-casting lights in an organizational way? We do not want to go the deferred route, since we don't want to limit ourselves to G-buffers.
How to render Viva Pinata fur In the game Viva Pinata, cute virtual animals have color-changing, paper-cut-like fur. It didn't seem like shell rendering was used, because there are LOTS of animals in a scene, and shell rendering every one of them to render this fur sounded like a daunting process for a game. I tried to build a 3D model with each triangle, but that didn't seem like the right solution either. I am out of tricks.
What are some of the more commonly used projectile rendering techniques? I couldn't find a duplicate question (a bit surprising to me), but anyhow, I'm starting to get near implementing the rendering of projectiles for my game. My question is: what are some good techniques for efficiently rendering projectiles? I would like emphasis on techniques that leave room for the projectiles to be "rich" and dynamic (cool to look at!). I'm also using DX11 for my rendering engine, so bleeding edge techniques that can make use of that would be much appreciated too. Thanks!
Seamless tiling with TexturePacker and Marmalade IwGX I'm looking for a way to get seamless tiling working, where the tiles are sprites off a TexturePacker sprite sheet, and the rendering is done with Marmalade IwGX's streams. I also need to render the tiles at multiple scales at the same time. Even with the TP settings "reduce border artifacts" off and "inner padding" 0, there are very noticeable, pixel-wide gaps between the tiles. If I move the tiles close together, they look fine at a scale of 1, but anything smaller or larger yields gaps. If I use one tile per texture, I get no gaps, but it means that I either can't have a variety, or each tile will take one draw call / material switch, which is not only slow, but brings the gaps back. Any tips?
Card Masking in Phaser Good day, I'm new to Phaser and still learning about it. I am challenging myself to create a simple animation of a card flip, somewhat realistic rather than making use of a scale. This is something that I already started, but I'm having trouble attaining what I need. See this link: http://sopronioli713.github.io (card masking). Here is my source code:

    <html>
    <head>
        <title>Card Masking Flipping Card</title>
    </head>
    <div class="canvas"></div>
    <body>
        <script type="text/javascript" src="jquery-2.1.3.min.js"></script>
        <script type="text/javascript" src="phaser.js"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            var game = new Phaser.Game(1024, 768, Phaser.AUTO, 'canvas', { init: init, preload: preload, create: create, update: update });

            function init() { }

            function preload() {
                game.load.image('back', ' .png');
                game.load.image('front', 'XRC.png');
                game.load.image('de', 'de.jpg');
            }

            function create() {
                game.add.sprite(550, 100, 'de');

                var cardfront = game.add.sprite(400, 250, 'front');
                cardfront.anchor.setTo(0.5, 0.5);
                var cardback = game.add.sprite(150, 250, 'back');
                cardback.anchor.setTo(0.5, 0.5);

                var tlx = cardback.x - (cardback.width / 2);
                var tly = cardback.y - (cardback.height / 2);
                var blx = tlx;
                var bly = cardback.y + (cardback.height / 2);
                var rtx = cardback.x + (cardback.width / 2);
                var rty = tly;
                var rbx = cardback.x + (cardback.width / 2);
                var rby = bly;

                var mask = game.add.graphics(tlx, tly);
                mask.beginFill(0xFF3300);
                mask.lineStyle(2, 0xffd900, 1);
                // draw a shape
                mask.lineTo(0, cardback.height - 100);
                mask.lineTo(100, cardback.height);
                mask.lineTo(cardback.width, cardback.height);
                mask.lineTo(cardback.width, 0);
                mask.endFill();
                cardback.mask = mask;

                // mask for front card
                var draw_front = game.add.graphics(tlx, tly + (cardback.height - 100));
                draw_front.beginFill(0xFF3300);
                draw_front.lineStyle(2, 0xffd900, 1);
                draw_front.lineTo(100, 0);
                draw_front.lineTo(100, 100);
                draw_front.endFill();

                var txt = game.add.text(5, 0, 'What I want to attain is that\ni want to draw a portion of the front card\nto the orange area', { font: "24px arial", fill: "#FFF", align: 'right', fontWeight: 'bold', anchor: '0.5,0.5' });
                var txt = game.add.text(600, 50, 'Output that I want to attain', { font: "24px arial", fill: "#FFF", align: 'right', fontWeight: 'bold', anchor: '0.5,0.5' });
            }

            function update() { }
        });
        </script>
    </body>
    </html>
Converting Cube Maps I have cube maps in lat-long format, and I need to convert them to horizontal/vertical cross, and to individual cross images. Is there a utility to do that?
How can state changes be batched while adhering to opaque front-to-back, alpha-blended back-to-front? This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world: draw all opaque objects front to back; draw all alpha blended objects back to front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regards to batching. I see one possibility to batch while still adhering to the above two rules. Opaque objects can still be drawn out of depth order, because drawing them front to back is merely a fillrate optimization, and meanwhile state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects, those that require alpha blending at least, must be drawn back to front in order to avoid rendering artifacts. Is the loss of the fillrate optimization for opaques worth the state batching optimization?
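The trade-off described above is often expressed as two differently sorted draw lists: opaques sorted primarily by state (with depth as a tie-breaker), transparents sorted strictly by depth. A minimal C++ sketch of that idea; the DrawItem fields and key layout are illustrative assumptions.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct DrawItem {
        uint32_t stateKey;   // hash of shader / texture / buffer bindings
        float    viewDepth;  // distance from the camera
    };

    void sortForSubmission(std::vector<DrawItem>& opaque, std::vector<DrawItem>& transparent)
    {
        // Opaque: group by state first to minimize switches; front-to-back only breaks ties,
        // trading some early-z benefit for fewer state changes.
        std::sort(opaque.begin(), opaque.end(), [](const DrawItem& a, const DrawItem& b) {
            if (a.stateKey != b.stateKey) return a.stateKey < b.stateKey;
            return a.viewDepth < b.viewDepth;
        });

        // Transparent: correctness first, strictly back-to-front regardless of state.
        std::sort(transparent.begin(), transparent.end(), [](const DrawItem& a, const DrawItem& b) {
            return a.viewDepth > b.viewDepth;
        });
    }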
Why am I getting white lines in my tiled map when my character moves? I followed the tutorial here: http://mainroach.blogspot.com/2013/02/fast-html5-canvas-rendering-tiled-maps.html. I can't really figure out why it's happening. At certain times the map looks fine, and then when I move my character the map starts to look like this:
Rendering tiles at a larger size for a detailed look I'm trying to make my 32-bit tiled map look more "zoomed" in. However, I experience an error that I can't resolve. Firstly, I followed this post on how to render the tiles larger. This is my code (close to identical to the code in the other post):

    class TiledMap:
        def __init__(self, filename):
            tm = pytmx.load_pygame(filename, pixelalpha=True)
            self.width = tm.width * tm.tilewidth
            self.height = tm.height * tm.tileheight
            self.tmxdata = tm

        def render(self, surface):
            ti = self.tmxdata.get_tile_image_by_gid
            for layer in self.tmxdata.visible_layers:
                if isinstance(layer, pytmx.TiledTileLayer):
                    for x, y, gid in layer:
                        tile = ti(gid)
                        tile = pg.transform.scale(tile, (64, 64))  # <- line that gives the error
                        if tile:
                            surface.blit(tile, (x * 64, y * 64))

        def make_map(self):
            temp_surface = pg.Surface((self.width, self.height))
            self.render(temp_surface)
            return temp_surface

The error that is produced is:

    tile = pg.transform.scale(tile, (64, 64))
    TypeError: argument 1 must be pygame.Surface, not None

I suspect that some tile it's trying to render is simply empty. I've tried a separate smaller map but the issue persists. Is it the tileset? The tiles have all been processed through the program Tiled, where they were initially part of one PNG. If anyone knows a possible solution I'll gladly try it. Or any other methods that could make the map look more zoomed in...
What causes aliasing? I always hear about aliasing and anti-aliasing and I know what it looks like, but what I don't understand is what causes it. Is it a physical phenomenon? Or a numerical one? If it helps to explain, I have some programming knowledge, but not in video games or graphics.
LibGDX: drawing a portion of a tmx map I am trying to make a tile based game in LibGDX and I have run into a problem. In earlier versions of LibGDX you were able to draw a certain section of a .tmx map instead of the whole thing, and even a section of a layer. However, in the nightly builds I cannot find anything about these methods; the only thing I have found was drawing a whole map or one of its layers. Ideally I would like to draw a portion of a map, something like: draw now from coords (5,5) to (25,25), meaning only the square of 20x20 tiles starting from the tile in the 5th row and column at position no. 5. Is it even doable in the newer builds?
Drawing entities in an isometric engine I am having some problems drawing my entities in an isometric map. Tiles are drawn using the painter's algorithm to do the z sorting, which works great for the tiles alone. Entities are parented to a particular tile and have an offset within it. They are drawn immediately after their parent tile. The problem is that when the entity is too far right or too far down, the next tile (horizontally to the right, or vertically below) is drawn over parts of it, like so: (Note: currently my player has his registration point (red circle) a little higher than right at the bottom, just to better approximate the centre of his feet.) A couple of ideas I had to try and remedy this: 1) Simply offset the position of entities so that they always draw in a place that won't be drawn over later. I really don't like adding strange offsets that must be compensated for all over the place, but this seems like a viable option. 2) Draw everything, tiles and entities, using the painter's algorithm. Entities would not need to be parented to a tile anymore, but every renderable graphic would need its drawing position offset. (Tiles would have their point at the very top, entities at the very bottom, so that the painter's algorithm orders things correctly.) 3) Implement some kind of layering system so that all floors are drawn first, then things behind entities, etc. This seems complex and would change from scenario to scenario. As my game will have randomly generated levels, I feel this isn't the right solution. So, do any of the above have merit? Do I have it all completely wrong and there's another solution I've missed?
How to render a field of view of over 180 degrees? In most 3D renderings, a view frustum is used. This has the problem that things get stretched out towards the edges. At a "normal" FOV (field of view) of about 60 degrees, the effect is not very noticeable unless compared side by side (example), but at a higher FOV of something like 120 degrees it's really noticeable, making things near the edge look way larger/closer (example). A FOV of anything near or over 180 degrees breaks the rendering completely. So far, I couldn't find anything on how to render in a way that fixes this, or even what it's called. The ultimate goal would be to be able to render up to 360 degrees (ignoring the hardware needed to display it) without any distortion relative to other parts of the image. So, how would I approach rendering that? Is there already some open source software I can take a look at?
Is it okay to update the graphical handlers in the same function where you render them? I have a class called 'GraphicalHandler' that all my graphical handlers in the main game loop inherit from. The class manages a static vector of pointers to all instances of itself. I can then call the member function GraphicalHandler::drawAllInstances() to draw everything to the screen. But since my graphical handlers need to update their graphical objects as well as render them, I figured I would change the function to updateAndDrawAllInstances(). I know that game logic and rendering should be as separated as possible, but since updating the graphical handlers doesn't seem like game logic, I figured it would be valid. So my question is whether this is 'accepted' in game development, i.e. to update and render the graphical objects in the same function.
Rendering 10M points that change position (individually) at 30 Hz I'm able, with OpenTK (VB.NET), to draw 10M points and render them. The problem is that the bind operation takes 114 ms on every frame refresh. I need to refresh the positions at 30 Hz, so what can I use to speed up this operation?
Why does my game look different on my mobile device than in the editor? I am new to Unreal Engine. My game on mobile looks totally different than in the Unreal Engine editor. In Editor: (screenshot). On Mobile: (screenshot). Is this a symptom of something obvious? How can I make it so that what I see in the editor is the same thing that I see on my mobile device?
How to improve my MAXScript random blood generation code? The idea was simple: draw basic blood drop meshes, shuffle them with a random generator to get the final drops, then export and render them in game like other meshes. The problem is that the result doesn't look like blood drops, but like rocks. What is the best way to improve this? Redraw the basic drops? Apply some modifier to the final drop mesh? But what modifier? How do I smooth the borders in the final drop mesh? I am totally stuck with this. The improvement must be codeable in MAXScript, not a manual correction of the mesh, because I am a programmer, not a 3D modeller. The generator's final result screenshot: Final blood drop. P.S. For those who would like to test the generator manually, here is the proof link. Works nicely with the 2010 version.
How to automatically select a graphical quality level? I'm making a game that lets the users choose their graphical settings, with the usual categories: low, medium, high. I want to implement an "auto" category and decide what the appropriate values are on startup. But I'm not sure how to do that. For example in OpenGL, you could check the OpenGL version or query the memory through extensions (if provided) to guess at it, but that seems fickle, since it may be the case that a low end graphics card has a high OpenGL version but bad hardware. Checking memory might be a bit better, but I assume it still is not the most accurate way of checking things. A more robust approach would be to render in the background and check the FPS, but that could require setting up a whole scene, rendering it, etc. Most games seem to do this very quickly, so I don't know how they could set up a whole scene and do a bunch of quick tests. Is there a fast way to do this?
Why would you want multiple render targets? In D3D11, you can bind multiple render targets with ID3D11DeviceContext::OMSetRenderTargets. But why would you want to do this?
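The classic use is writing several per-pixel attributes in one geometry pass, for example a G-buffer for deferred shading. A small C++/D3D11 sketch of binding more than one target; the context and view variables are assumed to have been created elsewhere.

    // Assumes context, gbufferAlbedoRTV, gbufferNormalRTV and depthDSV were created beforehand.
    ID3D11RenderTargetView* rtvs[2] = { gbufferAlbedoRTV, gbufferNormalRTV };
    context->OMSetRenderTargets(2, rtvs, depthDSV);

    // The pixel shader then writes one value per target:
    //   struct PSOut { float4 albedo : SV_Target0; float4 normal : SV_Target1; };
    // so a single draw fills both textures, which a later pass can read for lighting.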
What is the state of the art of ray tracing on the GPU? I think ray-traced rendering had to be done on the CPU for a long time. But since we have compute shaders in OpenGL 4.3 now, it might be possible to move the computations to the GPU and perform passable real time rendering. What approaches for GPU based ray tracing are there already? Can it compete with rasterization rendering nowadays?
Deterministic random number sequence I have an array of lights that I need to sample from randomly, but deterministically, so that all the lights are sampled. Currently I use random numbers to pick a light and then put processed lights into another list, but I'm wondering if there is a way to use something like a Hammersley or Halton sequence to do this instead. A biased solution would be ok.
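One simple deterministic scheme that still touches every light is to shuffle the index list with a fixed seed and walk it in order. A minimal C++ sketch; the seed value and the idea of processing a batch per frame are arbitrary illustrative choices.

    #include <algorithm>
    #include <cstdint>
    #include <numeric>
    #include <random>
    #include <vector>

    // Returns light indices in a pseudo-random but fully reproducible order;
    // iterating the result guarantees every light gets processed exactly once.
    std::vector<uint32_t> shuffledLightOrder(uint32_t lightCount, uint64_t seed = 12345)
    {
        std::vector<uint32_t> order(lightCount);
        std::iota(order.begin(), order.end(), 0u);      // 0, 1, 2, ...
        std::mt19937_64 rng(seed);                      // fixed seed -> deterministic
        std::shuffle(order.begin(), order.end(), rng);
        return order;
    }

    // Usage: process a few entries of the returned order per frame, wrapping
    // (and optionally reshuffling with a new seed) when the end is reached.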
Vertex buffers: interleaved or separate? Interleaved: all vertex data (position, normal, texcoord...) kept in 1 vertex buffer. Separate: each vertex attribute is kept in a separate vertex buffer (1 for positions, 1 for normals...). I know this question came up many times and I also know there's no 1 right answer (sadly). But I'd like to try to list the main advantages and disadvantages of both. Or maybe you have some general rules of thumb for when to use each. Advantages of interleaved: faster? (all data for 1 vertex is fetched at once? something about the cache working better?); fewer API calls (for creating and setting buffers, but that's probably a very small difference). Advantages of separate: when different shaders need different vertex attributes (e.g. one shader needs only position and another needs position, normal and texcoord), it's possible to provide each shader only the data it needs and there's no data duplication; when updating only, e.g., the positions of vertices, it's not necessary to resend the other attribute data (e.g. normals and texcoords). If you see any other differences please write them. From the above it generally looks like a struggle between memory and performance optimisation. But maybe I'm wrong? Maybe one is better or worse in most cases? Edit: One more concern: with interleaved buffers I could end up sending unnecessary data to the GPU, and data bandwidth is a big bottleneck in today's cards. Should I worry about that?
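For concreteness, a short C++/OpenGL-style sketch of the two layouts the question compares; the attribute locations and the Vertex struct are illustrative assumptions, and glEnableVertexAttribArray calls are omitted for brevity.

    #include <GL/glew.h>
    #include <cstddef>

    // Interleaved: one buffer, stride = sizeof(Vertex), per-attribute byte offsets.
    struct Vertex { float pos[3]; float normal[3]; float uv[2]; };

    void bindInterleaved(GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));
    }

    // Separate: one tightly packed buffer per attribute; updating positions touches
    // only posVbo, and a depth-only shader can bind posVbo alone.
    void bindSeparate(GLuint posVbo, GLuint normalVbo, GLuint uvVbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, posVbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glBindBuffer(GL_ARRAY_BUFFER, uvVbo);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
    }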
Rendering skybox in first person shooter I am trying to get a skybox rendered correctly in my first person shooter game. I have the skybox cube rendering using GL_TEXTURE_CUBE_MAP. I author the cube with extents of -1 and 1 along X, Y and Z. However, I can't wrap my head around the camera transformations that I need to apply to get it right. My render loop looks something like this:

    mp_Camera->ApplyTransform()  // takes the current player transformation, inverts it, and pushes that onto the modelview stack
    // Draw GameObjects
    // Draw Skybox

DrawSkybox does the following:

    glEnable(GL_TEXTURE_CUBE_MAP);
    glDepthMask(GL_FALSE);
    // draw the cube here with extents
    glDisable(GL_TEXTURE_CUBE_MAP);
    glDepthMask(GL_TRUE);

Do I need to translate the skybox by the translation of the camera? (By the way, that didn't help either.) EDIT: I forgot to mention: it looks like a small cube with unit extents. Also, I can strafe in and out of the cube. Screenshot
Drawing pixels with SDL2: advice, is it fine to draw them one by one on the CPU? I want to make a raycaster, and I figured out how to open a window and draw pixels to it with SDL2, but the way I am doing it now just doesn't seem very efficient; I figured it out from briefly reading the documentation. For every pixel in the window, I go through them one by one, setting the render color with SDL_SetRenderDrawColor(), then drawing the pixel with SDL_RenderDrawPoint() based on an array. It's surprisingly fast doing this, but is there some way I can send a pixel buffer to the GPU or something? I see there is an SDL_RenderDrawPoints (plural version) function, but it only takes positions, meaning all the pixels drawn with it would be the same color? I'm looking for a similar function, but one that either also takes the color data or is just a buffer of color data whose index determines the position. Rendering a bunch of pixels that are all the same color doesn't seem very useful to me. I later tested it by fading between colors, and with window sizes of more than 600 x 400 the performance is horrible with my method; at 1200 x 800 it is like 5 fps.
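A common SDL2 pattern for this is a streaming texture: fill a CPU-side pixel buffer once per frame and upload it in a single call instead of issuing one draw per pixel. A minimal C++ sketch, assuming a window-sized ARGB8888 buffer and an SDL_Renderer created earlier.

    #include <SDL.h>
    #include <vector>

    // Assumes 'renderer' was created earlier with SDL_CreateRenderer.
    void presentFrame(SDL_Renderer* renderer, SDL_Texture* tex,
                      std::vector<Uint32>& pixels, int W, int H)
    {
        // ... write raycaster output into pixels[y * W + x] as 0xAARRGGBB ...
        SDL_UpdateTexture(tex, nullptr, pixels.data(), W * (int)sizeof(Uint32)); // one upload per frame
        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, tex, nullptr, nullptr);                         // one draw per frame
        SDL_RenderPresent(renderer);
    }

    // Setup (once):
    //   SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
    //                                        SDL_TEXTUREACCESS_STREAMING, W, H);
    //   std::vector<Uint32> pixels(W * H);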
Efficiently rendering a tiled map on OS X I'm writing an original (top down) SimCity clone in Swift and attempting to use SpriteKit as the basis for the game. However, I am running into performance issues when rendering and animating the tile map which represents the city map. I'm rendering a 44x44 tile map with each tile being 16x16 pixels. Tile animation could happen on any arbitrary tile and is implemented by having a separate image for each frame of the animation. The map is dynamic (naturally), since the player will draw on it and tiles can be animated (roads, etc). I have tried several implementations to render the map to the screen, and each has had its own performance issues. What I've tried: (1) Each tile is an SKSpriteNode with its texture loaded from a texture atlas; textures were swapped for tiles that were dirty (needed redraw); I disabled physics simulations and physics bodies on the nodes. Pros: minimal draws (due to the texture atlas); code wasn't very complex. Cons: redrawing the entire map destroyed performance; swapping textures to animate was inefficient. (2) Map was rendered using NSView, drawRect, etc. Pros: drawing was relatively efficient; dirty rectangle drawing was easy to implement. Cons: not really suitable for animation (really bad performance). My question is: what is the most efficient way of rendering and animating my tile map? Is there an optimization I can make to SpriteKit to speed this up? Or do I need to use something lower level (OpenGL/GLSL) to draw and animate the map efficiently? Any help would be greatly appreciated!
3D huge mesh rendering I am writing a program where, as input, I have a huge (10^6 elements) 3D mesh (with hexagonal shaped elements), and I want to render it in real time, but not as real time as a game. It just needs to show the scene and rotate, zoom and pan. The most important point is, I don't need any special lighting or any shadows. Also, the objects to render are static and do not move. My object doesn't have any textures. I've read about ray tracing methods, but I don't know if there are any good libraries for this purpose, or whether I have to implement everything by myself. Thanks a lot.
Rotate around a 3D object (software renderer) I have a simple 3D software renderer (SDL, C++) which can load a 3DS model, render it (shaded) and rotate it around the X/Y/Z axes. Now I would like to rotate and move around the object, meaning the object itself stands still and everything in the scene is displayed from the angle of the view. Is that what a camera in 3D engines does, and how would I implement something like this? (No OpenGL/DirectX is used, just plain software.)
Render text and still have it interactive with minimum effort, with hit testing? The background of the situation: I need to render text. Text will be moving around the screen, and the user will have the ability to interact with it by erasing it with a finger, shifting the order of the words, deleting letters and so on. Text will be in many different languages. Kerning and other typographic tricks should be implemented when rendering. So the problem for me: I don't know the most efficient way to render all the text AND at the same time have information about the location of each and every letter on the screen; plus, each letter should know the word it belongs to. Now, if I make a big class of strings, where each letter is a child of each word, and words are the children of sentences, then the interaction part seems trivial to implement, but rendering becomes a problem, since either I render each letter to the pipeline individually and store its location in screen coordinates, or I prepare one big string by gathering letters from all the above mentioned classes and render it in one run, but I lose the ability to store the coordinates of each letter in screen coordinates. Is there a way to have both: fast rendering in one string and the coordinates of each letter stored?
Fastest way to render image data from a buffer Currently I am doing my rendering by using a 3D array (window width x window height x RGB) as a buffer, then looping through the buffer and plotting pixels on screen using SDL2 (SDL_RenderDrawPoint). I know this is horrible and stupidly slow, but I am not well experienced in graphics techniques. What is the better way to do this?
How to use a mask texture with Kobold2D I am an iOS developer but I'm new to cocos2d. I'm working on a new game; I use Kobold2D and have cocos2d installed too, and I want to make this effect: I know how it's done with Flash, but I can't make it with Kobold2D. There are 2 images with the same size: one is a low-res image for the background, and the second is a hi-res one over the first. When the "reticle" mask moves, it reveals the second image inside the circle, and the background is visible outside only. I googled with no success; I saw some Ray Wenderlich projects but they weren't helpful.
Is it a useful strategy for mobile VR titles to render faster than their simulation loop? For example: if a title had a very heavy simulation loop (say 20 ms), is it desirable to render at a greater rate, say 90 Hz? This would always present a head-pose-correct view without being stalled on simulation. Or does this result in discomfort for the player, and instead render and sim should stay in lockstep?
What resolution should I render art for a 3D game for PC? I hope this is not too stupid a question. I'm making my first game for PC and I'm wondering what resolution I should render the artwork at. It will be a fixed perspective game, so I'm using a 2D background with 3D characters moving around. How big should I render the background? Is 1920x1080 enough these days or should I go even higher? How big should the character textures be? Thanks a lot!
Isometric painter's algorithm problem Note: I don't use tiles, I use 3D polygons :) I'm currently working on a real time renderer for scanned real life objects. My main goal is to have an isometric viewer with the simple ability to rotate the object. This alone is a simple problem. I basically had the full solution for it, but I wasn't happy with the performance at all. As you might know, software rendering based on the CPU is slow (but I don't want to code a game engine, just a small viewer which should be okay with a low poly count like <10000), but for what I need it works surprisingly well. The only part where my renderer is too slow is the z buffer (multiple seconds for a 3000 poly object), mainly because it loops over x * y * the objects' polygons. Currently I'm using some sort of painter's algorithm to draw polygons one after another, which brings me some problems: https://youtu.be/PDq4xtrgoi8. As you can see, the distances per poly are calculated very poorly, just from the center of the poly to the world point (1000, 1000, 1000). My question is not just about the painter's algorithm, but I'd love to use a minimally failing method of drawing the polygons in a specific order rather than calculating every pixel by z buffering. If you have other simple methods which work fast on a CPU, I'd really appreciate hearing about them! :)
Synchronization between game logic thread and rendering thread How does one separate game logic and rendering? I know there already seem to be questions on here asking exactly that, but the answers are not satisfactory to me. From what I understand so far, the point of separating them into different threads is so that game logic can start running for the next tick immediately, instead of waiting for the next vsync where rendering finally returns from the swapbuffer call it's been blocking on. But specifically, what data structures are used to prevent race conditions between the game logic thread and the rendering thread? Presumably the rendering thread needs access to various variables to figure out what to draw, but game logic could be updating these same variables. Is there a de facto standard technique for handling this problem? Maybe something like copying the data needed by the rendering thread after every execution of the game logic. Whatever the solution is, will the overhead of synchronization be less than just running everything single threaded?
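The "copy the data the renderer needs" idea from the question is often implemented as a double-buffered snapshot: the logic thread fills one snapshot, then swaps it under a short lock, so the render thread never reads a half-written state. A minimal C++ sketch; RenderState and its fields are placeholders.

    #include <mutex>
    #include <vector>

    struct RenderState {                 // plain data the renderer needs, nothing else
        std::vector<float> positions;    // placeholder field
    };

    class StateExchange {
    public:
        // Logic thread: publish a finished snapshot (cheap move under the lock).
        void publish(RenderState&& s) {
            std::lock_guard<std::mutex> lock(m_);
            pending_ = std::move(s);
            hasNew_ = true;
        }
        // Render thread: grab the latest snapshot if there is one, then render from its own copy.
        bool acquire(RenderState& out) {
            std::lock_guard<std::mutex> lock(m_);
            if (!hasNew_) return false;
            out = std::move(pending_);
            hasNew_ = false;
            return true;
        }
    private:
        std::mutex m_;
        RenderState pending_;
        bool hasNew_ = false;
    };

    // Only the swap is serialized; simulation of tick N+1 can overlap rendering of tick N.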
What is the relationship between clipping and the fog of war concept? I'm currently developing a 2D top down game and recently implemented clipping. I understand clipping in a 2D top down game as a rectangle or any other geometrical form which defines a viewport for the player: what exactly he sees and what is technically rendered by the engine. As I'm centering the clipping area around the player, I recognized that it is similar to the fog of war concept, where the player has a limited view depending on his current position. My question is: what is the concrete difference from the fog of war concept? Does that concept usually use clipping? I often noticed that, for example, the map is rendered but simply not the objects which are on that map. Are these objects rendered and simply invisible, or are they not rendered at all because of the clipping? Could clipping be defined as a way to achieve fog of war? It would be cool if anyone could shed some light on this topic.
Triangle strips of a tetrahedron I am confused about the triangle strip representation of a closed mesh. The vertex buffer for the triangle strip representation of the figure is shown below: A(0,1), B(0,0), C(1,1), D(1,0), E(2,1); vertex buffer: A, B, C, D, E, where T1 = (A,B,C), T2 = (B,C,D), T3 = (C,D,E). Now, if I have a tetrahedron with four vertices A(0,0,0), B(1,0,0), C(0,1,0) and D(0,0,1), then what would its vertex buffer representation be? Thank you very much.
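For what it's worth, here is one possible strip ordering that covers all four faces of a tetrahedron, written as a C++ index buffer. Winding alternates along a strip, so whether each face ends up oriented outward still has to be checked against the actual vertex positions.

    // Vertices A, B, C, D stored at indices 0..3.
    // Strip A,B,C,D,A,B expands to the triangles
    //   (A,B,C), (B,C,D), (C,D,A), (D,A,B)
    // which are exactly the four faces ABC, BCD, ACD, ABD.
    const unsigned short tetraStripIndices[6] = { 0, 1, 2, 3, 0, 1 };
    // Drawn with a triangle-strip topology using 6 indices (4 triangles).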
When a render pass decides what textures it needs, how are shaders written? I am studying render graph architectures (I've seen the Frostbite presentation). A RenderPass has outputs (i.e. textures you draw to) and inputs. How are these inputs bound to the internal pipeline? Let's say I have an AO pass and it has normals and depth as the input. Do I just bind the texture to a register, and that's it, and sample it? What about the actual shaders (for drawing geometry) that also use textures?
Precision loss when transforming from cartesian to isometric My goal is to display a tile map in isometric projection. This tile map is 25 tiles across and 25 tiles down. Each tile is 32x32. See below for how I'm accomplishing this. World space. World space to screen space rotation (45 degrees): using a 2D rotation matrix, I use the following:

    double rotation = Math.PI / 4;
    double rotatedX = (tileWorldX * Math.Cos(rotation)) - (tileWorldY * Math.Sin(rotation));
    double rotatedY = (tileWorldX * Math.Sin(rotation)) + (tileWorldY * Math.Cos(rotation));

World space to screen space scale (Y axis reduced by 50%): here I simply scale down the Y value by a factor of 0.5. Problem: and it works, kind of. There are some tiny 1px-2px gaps between some of the tiles when rendering. I think there's some precision loss somewhere, or I'm not understanding how to get these tiles to fit together perfectly. I'm not truncating or converting my values to non-decimal types until I absolutely have to (when I pass them to the render method, which only takes integers). I'm not sure how to guarantee pixel-perfect rendering precision when I'm rotating and scaling at a higher level of precision. Any advice? Do I need to supply more information?
13
Where do the buffer values come from when rendering? The textbook I am reading talks about fragment tests that are performed when rendering. All of these tests involve comparing the current fragment's x value (where x can be alpha, color, etc.) with a corresponding buffer value, and doing something if the test passes. The test is usually a comparison between those two values (for example, =, <, etc.). What I cannot understand is where these buffer values come from in the first place. Are they previous values? If so, what do the current values have to do with previously calculated values? I don't even know what to search Google for on this topic. Sorry if it is a total starter question; I am currently reading about rendering for the first time.
13
3D BSP rendering for maps made in 2D platform style I wish to render a 3D map which is always seen from the top: the camera is in the sky, always looking down at the ground. (Sample of a floor layout.) I don't think I need complex structures like BSP trees to render this. I mean, I can divide the map into a grid and render cells the way 2D platform games do. I just want to know whether this is a good idea and what may go wrong if I don't choose BSP tree rendering here. Please also mention if any better known rendering techniques are available for such situations.
13
How can I create a "cracked glass" material? I'm trying to figure out how the cracked and chipped glass effect in Bioshock Infinite: Burial at Sea Episode 2 works. My current guess is that it is essentially a transparent shader with gloss. It would have a map defining the direction of environment reflections, with the cracks differing significantly from the mesh's normals. It would also include some model of the angular dependence of refraction/transmission versus reflection, so that it can roughly approximate Fresnel's equations. It doesn't appear to be a full refractive model, so I'm wondering how exactly this is implemented. Am I right with what I have said above?
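For what it's worth, here is a minimal GLSL sketch of the kind of shader being described: a crack normal map perturbs the reflection vector, and Schlick's approximation of Fresnel blends between a transmitted colour and an environment reflection. This is only an illustration of the idea, not how the game actually implements it, and all texture and uniform names are placeholders.

    #version 330 core
    uniform samplerCube uEnvMap;      // environment reflection
    uniform sampler2D   uCrackNormal; // tangent-space normals of the crack pattern

    in vec3 vNormalWS;
    in vec3 vTangentWS;
    in vec3 vBitangentWS;
    in vec3 vViewDirWS;               // from the surface point toward the camera
    in vec2 vUV;
    out vec4 oColor;

    void main()
    {
        // Perturb the mesh normal with the crack normal map.
        vec3 nTS = texture(uCrackNormal, vUV).xyz * 2.0 - 1.0;
        mat3 tbn = mat3(normalize(vTangentWS), normalize(vBitangentWS), normalize(vNormalWS));
        vec3 n   = normalize(tbn * nTS);

        vec3 v = normalize(vViewDirWS);
        vec3 r = reflect(-v, n);

        // Schlick's approximation of Fresnel for glass (F0 about 0.04).
        float f0      = 0.04;
        float fresnel = f0 + (1.0 - f0) * pow(1.0 - max(dot(n, v), 0.0), 5.0);

        vec3 reflection   = texture(uEnvMap, r).rgb;
        vec3 transmission = vec3(0.0);   // whatever is behind the glass (placeholder)

        oColor = vec4(mix(transmission, reflection, fresnel), max(fresnel, 0.1));
    }

Because the crack normals diverge from the face normal, the cracks catch reflections at grazing angles and read as bright lines without any real refraction.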
13
Partial mesh culling by checking against the AABB tree of an object's vertices instead of only the AABB of the whole object First, this is more of a conceptual question than an implementation oriented one, but implementation tips are still very welcome if you happen to have any (although I have some experience programming different parts of games, graphics are certainly my weak spot, as you will see). I have an application in which all high poly objects have their vertices grouped in an object specific AABB tree to speed up the narrow phase of collision detection. Since that structure is already in place, would it be possible to use it for culling parts of objects, instead of the usual all or nothing approach of frustum and occlusion culling? The idea is simple in concept: instead of testing visibility only against the whole object's AABB, I would do that first, and in the positive cases proceed to visibility checks of the sub AABBs containing that object's vertices. Once the visible sub AABBs are identified, only the triangles contained in them would be sent to the GPU for rendering. So, put more systematically, my three related questions are: 1) is such an approach even possible with regard to the way the GPU receives and processes mesh geometry pulled from the CPU? 2) given that the CPU would have to split the meshes somehow and pass the GPU only the vertices identified as being in the visible parts of visible objects, wouldn't that add enough processing time that the cost outweighs the gains? 3) most importantly, could passing the GPU only some triangles of a mesh cause graphically distorted results when the shader and texture are applied to that partially rendered object? I searched quite a bit for academic references on this subject but came up almost empty. I would gladly welcome reading suggestions of any sort.
13
How to use the WorldToScreenPoint function in a texture context? I am rendering a scene into a RenderTexture and I have a set of 3D points. I want to convert these points from 3D into the texture's coordinate frame. It worked when I rendered to the screen and used the camera.WorldToScreen() function. Now I have to render into a texture. Is there any way to do the same thing WorldToScreen() does, but for the 2D render texture?
13
When using a deferred rendering technique, in what space should my normals be, and why? I'm implementing deferred shading and the following question arose: when storing the normals, should I transform them to view space, or may I keep them in world space? Why? Is either alternative better than the other for calculating lighting?
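Either space works as long as the lighting pass uses the same one; the usual requirement is just to transform the normal with the inverse transpose of whichever matrix you pick so non uniform scale doesn't skew it. A minimal GLSL sketch of storing view space normals in the G buffer (names are placeholders):

    // --- geometry (G-buffer) pass, vertex shader ---
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;

    uniform mat4 uModel;
    uniform mat4 uView;
    uniform mat4 uProj;
    uniform mat3 uNormalMatrix; // transpose(inverse(mat3(uView * uModel)))

    out vec3 vNormalVS;

    void main()
    {
        vNormalVS   = uNormalMatrix * aNormal;   // view-space normal
        gl_Position = uProj * uView * uModel * vec4(aPosition, 1.0);
    }

    // --- geometry pass, fragment shader (sketch) ---
    // in vec3 vNormalVS;
    // layout(location = 1) out vec4 oNormal;    // one of the G-buffer attachments
    // void main() { oNormal = vec4(normalize(vNormalVS) * 0.5 + 0.5, 0.0); }

World space is just as valid and is convenient for effects that need world positions (for example cubemap reflections); view space keeps the numbers small and makes the camera relative vectors used in lighting trivial. Pick one and keep every consumer of the G buffer consistent.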
14
What is an efficient way to manage uniforms in a game? Most engines on the market have their drawbacks, and it's difficult to find a simple, lightweight one that's open source and doesn't force you through a rather complex learning process. Writing one is a difficult task on its own, but it might not be a bad idea if what you want the engine to do is support a specific kind of game (e.g. 2.5D games on mobile devices). So, in designing a good game engine architecture, I've hit a few logistical issues. Consider this scenario: each object is composed of two principal structures of render information, a model (mainly geometry) and a material (which tells the object what textures and shaders to use). Naturally, an object should be allowed to switch its material definition on the fly. But a material encapsulates the shaders, so it drags along slots for uniforms and vertex attributes. Since a uniform can be object specific (color, specular exponent, etc.), global or super global (lights, weather conditions such as fog and wind, etc.), or specific to a group of objects (they all share, say, a reflectivity factor), it seems wrong to put them either in the object's properties or in the material's properties. It's clear that both uniforms and attributes are always declared in the shader sections of a material, but where their values come from is an enigma for a reasonably general rendering engine. You have to allow for numerous types (by semantics!) of uniforms: position, colour, bone matrices, indices, lighting parameters, etc. The big question: how would you suggest organizing and managing uniforms? (Especially the information flow: they're declared by shaders, but their values are supplied by apparently different kinds of entities, some renderable, some more abstract controllers or managers.)
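One common pattern (a sketch, not the only answer): group uniforms by who owns them and how often they change, and express those groups as uniform blocks that different systems fill in. Block and field names below are placeholders.

    #version 330 core

    // Filled once per frame by the renderer / scene system.
    layout(std140) uniform PerFrame {
        mat4 uView;
        mat4 uProj;
        vec4 uCameraPosWS;
        vec4 uFogParams;        // global "weather" style data
    };

    // Filled by whoever owns the material (shared by every object using it).
    layout(std140) uniform PerMaterial {
        vec4  uBaseColor;
        float uSpecularExponent;
        float uReflectivity;
    };

    // Filled per draw call by the object / transform system.
    layout(std140) uniform PerObject {
        mat4 uModel;
    };

    layout(location = 0) in vec3 aPosition;

    void main()
    {
        gl_Position = uProj * uView * uModel * vec4(aPosition, 1.0);
    }

With this split, the shader only declares which blocks it consumes, and each engine subsystem (camera, lighting, material cache, per object transforms) updates the buffer behind "its" block without knowing anything about the others. That resolves the "who supplies the value" question by semantics rather than by ownership of the uniform.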
14
Linear gradient shader ( Photoshop like) I'm searching a way to implement a linear gradient shader that behaves as the linear gradient in Photoshop (only the vertical case is necessary). It will be applied to 2D sprites. Currently I'm passing to the pixel shader these parameters StartColor. EndColor. Offset an offset applied to the gradient starting point. Length the gradient length (the range inside where the colors will be interpolated). isFixed a boolean parameter that indicates if the gradient must be influenced by the camera position or not. Here a first attempt of the vertex shader and the pixel shader that I've implemented struct VsInput float3 position VES POSITION float2 uv VES TEXCOORD0 float4 color VES COLOR struct VsOutput float4 position HPOS float2 uv TEXCOORD0 float4 color COLOR0 float4 outPos TEXCOORD1 VsOutput main(VsInput input) VsOutput vsOutput vsOutput.position MUL(float4(input.position, 1.0f), WorldViewProjection) vsOutput.uv input.uv vsOutput.color input.color vsOutput.outPos vsOutput.position return vsOutput struct PsInput float4 Position HPOS float2 UV TEXCOORD0 float4 Color COLOR0 float4 outPos TEXCOORD1 float4 startColor float4 endColor float offset float len bool isFixed float4 main(PsInput psInput) COLOR psInput.outPos psInput.outPos psInput.outPos.w float yScreenSize 900.0f float yPixelCoordinate 0.0f if (isFixed) yPixelCoordinate 0.5f (1.0f psInput.UV.y) yScreenSize else yPixelCoordinate 0.5f (psInput.outPos.y 1.0f) yScreenSize float gradient (yPixelCoordinate offset) len gradient clamp(gradient, 0.0f, 1.0f) return lerp(startColor, endColor, gradient) When isFixed is false I want the gradient influenced by the camera position. But my shader is wrong, since the starting point of the gradient is the bottom of the window instead of the bottom of the sprite. The question is how can I modify the shader in order to have the gradient starting from the bottom of the sprite? Maybe I need the size of the sprite in pixel? Or there are other convenient ways? The other question regards the "fixed gradient" if I want the gradient not influenced by the camera position, what is the convenient way? It's possible to have these two behaviors in the same shader? Thanks. Edit more details based on DMGregory suggestion. Maybe the sentence gradient influenced by the camera, it is not correct. I'll try to be more clear with a few images (taken using the shaders above). In the first image you can see a base sprite (in black) with an orange gradient applied to it. On top of the base you can see the player sprite the camera is attached to this one. In the second image the player has jumped and the camera follows him the isFixed attribute is false and, as you can see, the gradiend moves with the player and the camera. The third image shows the same situation of the second image, but the isFixed attribute is true and, as you can see, the gradient don't moves with the player and the camera. I hope it's a bit more clear what I'm looking for.
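One way to get the gradient anchored to the sprite rather than the window (a sketch under the assumption that the quad's UVs run 0 to 1 from the bottom to the top of the sprite): compute the gradient factor from the interpolated texture coordinate instead of from the projected position, so it is independent of both the camera and the screen size. It is written here in GLSL terms with placeholder uniform names; the HLSL version is analogous.

    // fragment shader
    uniform vec4  uStartColor;
    uniform vec4  uEndColor;
    uniform float uOffset;   // gradient start, in sprite-relative units (0..1)
    uniform float uLength;   // gradient length, in sprite-relative units

    varying vec2 vUV;        // 0..1 across the sprite quad

    void main()
    {
        // 0 at the bottom of the sprite, 1 at the top, regardless of where the
        // sprite is on screen or where the camera is.
        float t = clamp((vUV.y - uOffset) / uLength, 0.0, 1.0);
        gl_FragColor = mix(uStartColor, uEndColor, t);
    }

For the screen anchored variant, the screen space computation already in the question applies; the two behaviours can live in one shader by branching on a flag and choosing either the UV based or the position based factor.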
14
How to get the texture coordinate of a neighbouring pixel for a blur shader? I'm still having some trouble to get my head around fragment shaders and doing some image processing on textures. The context is a 2D sprite a simple texture painted on a quad. All done with OpenGL ES 2.0. My very basic goal is a simple blur filter using a 3x3 Kernel with average weights every pixel used is weighed 1 9th and summed up. Besides many ways to improve the performance of the fragment shader(code below) so far I'm still having some difficulties to find the right texture coordinate for the kernel. My approach so far is to use the actual size of the quad on the screen on which the texture is painted and pass those two values to the shader. This is done outside the shader and passed as a uniform to the shader program. glUniform2f( offset, 1 spriteWidth, 1 spriteHeight) This should result in the step in both directions to calculate a texture coordinate in the 0 to 1 space. The result is kind of looking good. BUT I am still struggling if this is something that could be done within the shader. Is there a way to get the size of the texture within the fragment shader? If we would be only doing this on a bitmap, I'll just go from pixel to pixel and read the color of the surrounding pixels. I am wondering if my understanding of a fragment shader is quite right It's run per rendered pixel on the screen. I found some examples for the GLSL to do this but I wasn't able to port it to OpenGL ES, so I had to start from scratch. For the sake of readability I write a bit more code in hope it's easier to understand the fragment shader varying vec2 v texCoord uniform vec2 u offset uniform sampler2D u texture const int size 3 const int KernelSize size size void main() int i, j vec4 sum vec4(0.0) vec4 intensityOfPixel vec2 texCoordForKernel for (i 0 i lt size i ) for (j 0 j lt size j ) texCoordForKernel vec2(v texCoord.x (float(i) 1.0) u offset.x, v texCoord.y (float(j) 1.0) u offset.y) intensityOfPixel texture2D(u texture, texCoordForKernel) sum intensityOfPixel 1.0 float(KernelSize) gl FragColor sum Thanks alot in advance!
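Regarding getting the texture size inside the fragment shader: desktop GLSL 1.30+ and GLSL ES 3.00 expose textureSize(), so the texel step can be derived without a uniform; GLSL ES 2.0 (what the question targets) does not have it, so passing 1/width and 1/height as a uniform, as done above, is the usual approach there. A sketch of the ES 3.00 style version, with placeholder names:

    #version 300 es
    precision mediump float;

    uniform sampler2D uTexture;
    in vec2 vTexCoord;
    out vec4 oColor;

    void main()
    {
        // Size of mip level 0 in texels -> one-texel step in UV space.
        vec2 texel = 1.0 / vec2(textureSize(uTexture, 0));

        vec4 sum = vec4(0.0);
        for (int i = -1; i <= 1; ++i)
            for (int j = -1; j <= 1; ++j)
                sum += texture(uTexture, vTexCoord + vec2(float(i), float(j)) * texel);

        oColor = sum / 9.0;
    }

The understanding in the question is correct: the fragment shader runs once per rasterized pixel, and each invocation only reads its own 3x3 neighbourhood.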
14
Doubts with results of per vertex lighting shader I'm researching simple shaders to add to my DirectX 11 project such as a per vertex diffuse shader plus specular reflection component. I'll begin with the results Seems specular reflection is working, but do you see the border and upper objects? They don't seem to be lit well, as a shader based on this lighting equation is not supposed to gray scale the diffuse component (excepting when dot product lt 0 in backfaces,etc). My HLSL vertex shader rationale is as follows, I dont know at this point of the day, if i'm missing something. Vertex shader input Output structs are defined as Output and Input Structs struct VS OUTPUT float4 pos SV POSITION float4 color COLOR struct VS INPUT float4 pos POSITION float4 norm NORMAL 1) Transform the input position to world coordinates vo.pos mul(vi.pos, worldMatrix) 2) Transform normal to world and normalize float4 N normalize(mul(vi.norm, worldMatrix)) 3) Calculate the vector from the light position to the current vertex position. Lighting is already in world coordinates. w 1.0, so float4 L float4(normalize(float4(light 0 .pos.xyz,1.0f) vo.pos)) I have camera position in world coordinates passed through Cbuffer, so camera minus vertex position yields V vector. float4 V normalize(cameraPos vo.pos) 4) Use reflect intrinsic to calculate R, negating L since it's pointing towards light. float4 R reflect( L, N) 5) Finally I calculate the color to pass to the pixel shader, and transform the vertex position with the view projection matrices. vo.color ka ke kd saturate(dot(N, L)) (ks pow(saturate(dot(V, R)), specp)) vo.pos mul(mul(vo.pos, viewMatrix), projMatrix) return vo Just for completion, here are the constant buffers define MAX LIGHTS 16 cbuffer bufWorldMatrix register (b0) float4x4 worldMatrix cbuffer bufViewProjMatrix register (b1) float4x4 viewMatrix float4x4 projMatrix float4 cameraPos cbuffer materialProperties register (b2) float4 kd float4 ka float4 ks float4 ke float specp Specular power (shininess) struct LightBase float3 pos float3 color float intensity float isOn THanks, any help is appreciated. I'm sure normals are OK, but I can recheck in case. I can post additional debugging code if it's required.
14
Shading a concave cube as a convex cube with forced perspective Context I'm building a graphics pipeline for voxel volumes. I'm using an existing game engine (Bevy) which provides a way to put an object in 3D space. In my application, the voxel volumes can be oriented arbitrarily and are not necessarily axis aligned with respect to the world space. The pipeline does front face culling of these voxel volumes to allow the camera to go inside the bounds of the volume without clipping (imagine a player walking into a sparsely populated volume and looking around). The voxel volumes are rendered using ray marching, starting from a ray's contact point with the front face of the voxel volume. The Problem Now of course, this works if there are front faces, but as I said I'm culling them. So what really happens is my frag shader is shading the near side of the cube's back faces. What I need to do is given the point of contact with the cube's back face, and the direction to the camera, find the point that would have been hit on the front face. This will allow the rest of the program to ray march as though the ray hit a quot convex quot cube. I've spent quite a while looking at this answer but so far I've been unable to implement it into my shader I think it's relevant https stackoverflow.com questions 4248090 finding the length of a ray within a cube Assumptions Voxel volume axes are perpendicular to each other, but are not necessarily the same length A ray from the camera to a back face can hit any one of the 3 back faces, and will only pass through any one of the 3 front faces Expected Result Here's an example of what I expect. Below (first image) is what is rendered without quot simulating quot the front face positions. The second image is what would be rendered with correct front face simulation. Partial Code I've removed the irrelevant code. Right now what is rendered for a voxel volume size (16, 16, 16) is 3 16x16 walls along the axes. Of course what should be seen is a 16x16x16 voxel cube. voxel.vs version 450 layout(location 0) in vec3 Vertex Position layout(location 1) in vec3 Vertex Normal layout(location 2) in vec2 Vertex Uv layout(location 0) out vec3 v Position layout(location 1) out vec3 v Normal layout(location 2) out vec2 v Uv layout(set 0, binding 0) uniform Camera mat4 ViewProj mat4 View layout(set 1, binding 0) uniform Transform mat4 Model void main() v Normal Vertex Normal v Position Vertex Position v Uv Vertex Uv gl Position ViewProj vec4((Model vec4(Vertex Position, 1.0)).xyz, 1.0) voxel.fs version 450 layout(location 0) in vec3 v Position layout(location 1) in vec3 v Normal layout(location 2) in vec3 v Uv layout(location 0) out vec4 o Target layout(set 0, binding 0) uniform Camera mat4 ViewProj mat4 View layout(set 1, binding 0) uniform Transform mat4 Model layout(set 3, binding 0) buffer VoxelVolume vec3 voxel volume size void main(void) vec3 Normal mat3(Model) v Normal vec3 scale voxel volume size 16.0 mat4 InverseView inverse(View) vec3 CameraPosition (Model vec4(vec3(InverseView 3 ), 0.)).xyz vec3 BackFacePosition v Position vec3 BackFaceModelPosition (Model vec4(BackFacePosition, 1.0)).xyz vec3 BackFaceRayDirection normalize(BackFaceModelPosition CameraPosition) vec3 FrontFacePosition ? vec3 FrontFaceRayDirection ? 
Differs from BackFaceRayDirection when front face is perpendicular to back face vec3 ScaledPosition ((FrontFacePosition (scale 2.0)) scale) voxel volume size o Target vec4(floor(ScaledPosition) voxel volume size, 1.0) I would be extremely grateful if someone could help me fill in those missing variables! Thanks.
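For reference, the standard way to recover the front face entry point is a ray vs. AABB "slab" test done in the volume's local space: intersect the camera-to-fragment ray with the box and take the largest of the per axis entry distances. Below is a hedged GLSL sketch; boxMin/boxMax are placeholders for however the volume bounds are expressed in your local space, and the camera position must be in the same space as the fragment position before intersecting.

    // Ray/AABB slab intersection: returns the distance along the ray at which it
    // enters the box (clamped to 0 if the ray starts inside the box).
    float enterBox(vec3 rayOrigin, vec3 rayDir, vec3 boxMin, vec3 boxMax)
    {
        vec3 invDir = 1.0 / rayDir;          // fine as long as no component is exactly zero
        vec3 t0 = (boxMin - rayOrigin) * invDir;
        vec3 t1 = (boxMax - rayOrigin) * invDir;
        vec3 tNear = min(t0, t1);
        float tEnter = max(max(tNear.x, tNear.y), tNear.z);
        return max(tEnter, 0.0);
    }

    // Usage sketch, conceptually following the variables in the question:
    //   vec3  rayDir  = normalize(BackFacePosition - CameraPositionLocal);
    //   float tEnter  = enterBox(CameraPositionLocal, rayDir, boxMin, boxMax);
    //   vec3  FrontFacePosition   = CameraPositionLocal + rayDir * tEnter;
    //   vec3  FrontFaceRayDirection = rayDir;

Note that if both points lie on the same camera ray, the direction does not actually change between the front and back face; only the starting point of the march does.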
14
Curved Meters and Gauges I'm wondering how people here on GameDev stack exchange would handle curved meter GUI elements for things such as life bars or energy bars. My thought on the matter was that you could use a shader with a cutoff value and an image which has one channel dedicated to masking the image (alpha) and one that also has a gradient which I compare to a uniform float to determine whether or not the fragment should be fully transparent. However, I've been running into some strange behavior when writing this shader. Specifically, the shader's output has a weird artifact where the cutoff begins that looks almost like ripped paper the line that indicates the end of the meter has a sloppy contour. This image has some distortion effect going on with the pixels where the lifebar is supposed to end via the cutoff uniform. There's definitely got to be a better way of doing this same thing in a more tactful way. Shader code is below CGPROGRAM pragma vertex vert pragma fragment frag sampler2D MainTex float4 Color float Cutoff struct Vert IN float4 loc POSITION float4 texcoords TEXCOORD0 struct Frag IN float4 pos SV POSITION float4 uv TEXCOORD0 Frag IN vert( Vert IN input ) Frag IN output output.uv input.texcoords output.pos mul( UNITY MATRIX MVP, input.loc ) return output float4 frag( Frag IN input ) COLOR float4 value Color float4 valueFromMask tex2D( MainTex, input.uv.xy ) value.w valueFromMask.w float desiredTransparency step( valueFromMask.x, Cutoff ) value.w min( value.w, desiredTransparency ) return value ENDCG (Again, the red channel of this image is actually a gradient mask used to determine where the cutoff point should be. Other channels were going to be used for something else (like special scrolling patterns or what not) ) Example of the asset used in the shader I'm assuming there's probably something I'm missing when using the step function that could help ease the fade dropoff in order to make a better looking end result? What do you think is a good method for making curved meter GUI elements? Is there something wrong with the shader code presented above that causes this strange page tearing artifact?
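Regarding the ragged edge at the cutoff: a hard step() against a gradient that changes only slightly per pixel aliases badly. A common fix is to soften the threshold over roughly one screen pixel using derivatives, sketched here in GLSL terms (fwidth is also available in Cg/HLSL):

    // Anti-aliased cutoff: 1 where the bar is filled, fading to 0 over about one
    // pixel at the cutoff line. 'mask' is the gradient channel, 'cutoff' the fill amount.
    float meterAlpha(float mask, float cutoff)
    {
        float w = fwidth(mask);   // how much 'mask' changes per screen pixel
        return 1.0 - smoothstep(cutoff - w, cutoff + w, mask);
    }

In the question's shader this would replace the step()/min() pair, e.g. value.w = valueFromMask.w * meterAlpha(valueFromMask.x, Cutoff). If derivatives are unavailable on the target, a small fixed softness constant instead of fwidth() gives a similar, slightly less consistent result.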
14
FXC Error X3501 'main' entrypoint not found I am trying to compile a vertex shader using VS2013, but every time I try, FXC returns the following error Error error X3501 'main' entrypoint not found I've reduced the vertex shader to its simplest form and yet I'm still getting the same result DefaultVS.hlsl include "Include.hlsl" cbuffer CameraTransform float4x4 ViewProjMat VS OUT main(VS IN input) VS OUT result result.Position mul(input.Position, mul(input.WorldMat, ViewProjMat)) return result Include.hlsl struct VS IN float4 Position POSITION float4x4 WorldMat INSTANCE TRANSFORM struct VS OUT float4 Position SV POSITION And the properties of both files Zi E "main" Od Fo "Path To Output DefaultVS.cso" vs "5 0" nologo Zi Od Fo "Path To Output Include.cso" nologo
14
How to implement color changing fragment shader? I have a background of a given size and filled with a given color. I want to change it with an animation effect, starting from the center and spread out until it extends the whole background. The new color should fade blend smooth into the existing color from the background. Some kind of radial gradient that changes the color and then spreads out over the whole background. I am working with SpriteKit on iOS and I am really sure that the best way to implement this is to do this with fragment shaders which are new to iOS 8 SpriteKit SDK. I have done some work with shaders and understand how they work but I am asking for help more on the mathematics behind this.
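A minimal fragment shader sketch of the effect being described (generic GLSL style; the uniform names are placeholders to be wired up to whatever the framework provides, for example a time driven radius set from game code): the new colour spreads outward from the centre as a radius grows, with a soft feathered edge.

    uniform vec4  uOldColor;
    uniform vec4  uNewColor;
    uniform float uRadius;    // animate this from 0 up past the farthest corner
    uniform float uFeather;   // width of the soft blend band
    uniform vec2  uAspect;    // e.g. (width/height, 1.0) so the spread stays circular

    varying vec2 vUV;         // 0..1 across the background

    void main()
    {
        float dist = length((vUV - vec2(0.5)) * uAspect);
        // 0 inside the growing circle, 1 outside, blended over 'uFeather'
        float outside = smoothstep(uRadius, uRadius + uFeather, dist);
        gl_FragColor = mix(uNewColor, uOldColor, outside);
    }

Animating uRadius from 0 up past the distance to the farthest corner makes the new colour take over the whole background.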
14
2d game view camera zoom, rotation offset using 'Filter' 'Shader' processing? I wish to add the ability to zoom in, zoom out, rotate and move the view in a top down view over a collection of points and lines in a large 2d map. I split the map into a grid so I only need to render the points that are 'near' the camera. My question is, how do I render a point A(Xp,Yp) assuming the following details Offset of the camera pov from the origin of the map is Xc, Yc Meaning the camera center is positioned on top of that point. If there's a point in Xc, Yc it is positioned in the center of the screen. The rotation angle is alpha The scale is S Read my answer first. I am thinking there is more optimized solution, thanks. My question is how to include the following improvement I read in the AS3 Bible book that In regards to ShaderInput, You can use these methods to coerce Pixel Bender to crunch huge sets of data masquerading as images, without doing too much work on the ActionScript side to make them look like images. Meaning if I am performing the same linear function on a lot of items, I can do it all at once if I use Shaders correctly and save processing time. Does anyone know how that is accomplished? Here is a sample of what I mean http wonderfl.net c eFp0
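For reference, the per point transform being described is the usual 2D view transform. With camera centre $(X_c, Y_c)$, rotation $\alpha$, scale $S$, screen centre $(W/2, H/2)$, and $dx = X_p - X_c$, $dy = Y_p - Y_c$, a point lands at $x_s = S\,(dx\cos\alpha - dy\sin\alpha) + W/2$ and $y_s = S\,(dx\sin\alpha + dy\cos\alpha) + H/2$ (the sign of $\alpha$ depends on whether you rotate the world or the camera). Since this is one fixed linear operation applied to every point, it is exactly the kind of work that can be batched, which is what the quoted "data masquerading as images" idea is getting at: pack the point coordinates into an image like buffer and let one kernel apply the same transform to all of them.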
14
What coordinates are we passing to the pixel shader from the vertex shader? I have read articles about shader programming and understand the very basics. One thing that always confuses me is texture mapping. What I pass (output) from the VS to the PS is the vertex position and texture coordinates. I understand that in the VS we can simply pass the vertex position through as is, or manipulate it there (and maybe something more), so what we ultimately pass is the position of the vertex. But it is still not clear to me what exactly the thing we call texture coordinates, or UV, actually is. If I understand correctly, if my model is a simple triangle consisting of 3 vertices, the VS is run three times, once for each vertex, and I pass along the position of each vertex. That's straightforward. But what about the texture? Say I am using a 50x50 jpg image as a texture: how are the 3 vertices mapped to this 50x50 pixel texture? The book I started reading a couple of days ago explains that the rasterizer is the part that groups the vertices to form a triangle and calculates the number of pixels inside it. But again, it's not clear to me what we are passing as texture coordinates to the VS.
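To make this concrete, here is a minimal GLSL pair (names are placeholders). UVs are just a second per vertex attribute, in the 0 to 1 range independent of the texture's pixel size; the vertex shader passes them through, the rasterizer interpolates them across the triangle, and the pixel/fragment shader receives one interpolated UV per covered pixel and uses it to sample the texture.

    // vertex shader
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec2 aUV;   // e.g. (0,0), (1,0), (0,1) for the 3 vertices
    out vec2 vUV;
    void main()
    {
        vUV = aUV;                      // just forwarded; the rasterizer interpolates it
        gl_Position = vec4(aPosition, 1.0);
    }

    // fragment shader (sketch)
    // in vec2 vUV;                     // interpolated per pixel
    // uniform sampler2D uTexture;      // the 50x50 image; 0..1 UVs map to it regardless of size
    // out vec4 oColor;
    // void main() { oColor = texture(uTexture, vUV); }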
14
Which part of the rendering pipeline compiles shaders? Generally, video games and other graphical software that use vertex and fragment shaders ship those shaders as actual shader source code (typically GLSL or HLSL) and compile them for the display hardware at runtime. (It is my understanding that shaders cannot really be precompiled into the game binary, because the shader machine code the display hardware uses depends on the graphics card brand and even the model generation, and these may well be incompatible with each other.) Even though I myself work in the game programming field and use shaders, I don't actually know which part of the graphics pipeline compiles the shaders into the machine code the graphics card uses. Compiling shaders is essentially a black box: throw the shader source code at whatever library you are using, and it does its magic behind the scenes. I would be interested in knowing exactly where this compilation happens. Which exact part of the entire system takes the shader source code and produces shader machine code from it? My (wild) guess is that this is done by the graphics card driver (because the driver knows what kind of machine code it should output, and how to optimize it, for the particular brand and model of graphics card). If this is indeed so, it would at least partially explain why the graphics driver is so crucial to how efficiently games run, since its optimization of the shader machine code would have a great impact on rendering speed. Are there any resources out there where I could find more info, in general, about these things?
14
How many active shaders at one frame in the game should I typically use? 5 or more like 100? How many shaders are usually active, at the same time in one scene, in modern games? I know that multiple shaders are being used, with the games switching between them in each frame, and it's common to draw objects via the shader Draw all objects with shader one Change from shader one to shader two Draw all objects with shader two Still, I know it's not as simple, especially with effects like a glow effect for whole scene, render to texture, etc., but I guess we can assume it works that way most of the time, right? The "group by shader" approach is good, because switching shaders is an expensive operation. From one side, you cannot have too many shaders, because you want to render the scene fast. On the other hand, you need many different shaders (or uber shader with branches quite similar) for skin, metal, water etc. How many (and which) different shaders would the theoretical, modern, third person, 3D detective game for PC (DirectX 11, if it matters) use? It would be 5, 20 or more like 100 active shaders, counting only active, at some "frame X"? I know it's not one number, but I wonder what scale and factors are important, in consideration for a PC game. In my sample game, I would use about 9 11 per frame (count it as different, small shaders or one uber shader doesn't matter now) Skin shader Eye shader (not too much? but they are different) Metal shader Ground shader Snow rain shader (if required) Water shader (if the water exists in scene) Glow shader (only when some special effects are involved) Light emiter shader (street lamps etc.) Standard shader (for all other, just standard shading) Standard shader with normal maps 2D shader (for GUI etc.) Is it "much" or "not many"? Did I forget about some important shaders that I would need?
14
shader coding calculate screen coordinates of fragment Good morning, I'm new to shader coding and trying to implement some visual effects code in shaders using billboards. (Yes, I couldn't have picked anything harder to start with, but I'm lucky that way) Setup I have rendered the full screen z depth to an array of floats in a previous pass. In the fragment shader I need the scene depth where the rendered fragment is displayed (to see if it's occluded). I can use tex2d() to get the depth value if I have the screen coordinates of the point being rendered in the fragment shader. Question In the fragment shader how do you calculate the screen coordinates of the pixel (in the range 0 1.0)? Is the position passed to the fragment shader a pixel offset? If so, I guess it would be float2( position.x screen width, position.y screen height ) Thanks for any help
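In GLSL the fragment shader already receives the window space pixel position as gl_FragCoord, so the 0..1 coordinate is just a division by the screen size (passed as a uniform with a placeholder name below); in HLSL shader model 4+, the SV_Position input to the pixel shader plays the same role.

    uniform vec2 uScreenSize;      // (width, height) in pixels
    uniform sampler2D uSceneDepth; // depth rendered in the earlier pass

    void main()
    {
        // gl_FragCoord.xy is the pixel centre, e.g. (0.5, 0.5) for the first pixel.
        vec2 screenUV = gl_FragCoord.xy / uScreenSize;

        float sceneDepth = texture2D(uSceneDepth, screenUV).r;
        // ... compare sceneDepth against this fragment's depth to decide occlusion ...

        gl_FragColor = vec4(vec3(sceneDepth), 1.0);   // placeholder output
    }

Depending on how the depth texture was written, the y coordinate may need flipping before the lookup.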
14
Phaser Shader Chain I want to implement lighting via shadowmaps. I see process as 1) render something to RenderTexture1(size as game) 2) create RenderTexture2 (custom size) 3) add it to Image2 (custom size) 4) apply "generate lightmap" shader to that image, with RenderTexture1 as additional channel 5) create RenderTexture3(size as game) 6) add it to Image3(size as game) 7) Apply "generate light from lightmap" shader, with rendered lightmap as additional channel I have a problem on last step, seems texture passing as additional channel is not rendered with shader. however it's ok in image. code create() ... this.sourceRT this.game.make.renderTexture(this.game.width, this.game.height) this.shadowMapRT this.game.make.renderTexture(this.SHADER SIZE, this.SHADER SIZE) this.shadowMapImage this.game.add.image(0, 0, this.shadowMapRT) this.shadowMapImage.filters this.shadowTexureShader this.lightingRT this.game.make.renderTexture(this.game.width, this.game.height) this.lightingImage this.game.add.sprite(200, 0, this.lightingRT) this.lightingImage.filters this.shadowCastShader this.shadowTexureShader.uniforms.iChannel0.value this.sourceRT this.shadowCastShader.uniforms.iChannel0.value this.shadowMapRT update() this.light.x this.game.input.activePointer.x this.light.y this.game.input.activePointer.y this.sourceRT.renderRawXY(this.renderGroup, 0, 0, true) this.renderGroup is a group with shadow casters this.shadowTexureShader.uniforms.uLightPosition.value this.light.x, this.light.y this.shadowCastShader.uniforms.uLightPosition.value this.light.x, this.light.y Additional re explanation with picture I take texture0, apply shader to it (texture1 as an additional channel), to get shadowmap. Then i want to take texture1, apply shader2 to it, with texture2 (given by shader as additional channel) to get shadows from objects. Problem at the question mark instead of getting texture2 (processed by shader1) i get texture0 (still unprocessed). I want to get rendered texture2 in my shader2.
14
Early Z culling in Ogre For Ogre experienced people, but also experts in the field: early Z culling is sometimes quite desirable, and that's what I tried to do in Ogre by using a two pass material. The first pass writes to the Z buffer, but not to the frame buffer. This is what it looks like: pass EarlyZ texture unit TU0 ambient diffuse texture texture TU0 TEXTURE tex coord set 0 filtering trilinear cull software none cull hardware none lighting off colour write off shading flat scene blend alpha blend alpha rejection greater equal 200 depth bias 5 5 (an ugly hack; without it, objects tend to flicker). The biggest problem I have is with alpha objects and shadows. For example, I now can't get tree impostors to cast correct shadows instead of blocks. Although they are rendered correctly, the PSSM isn't working correctly, so the shadows tend to look like stencil shadows. Any ideas on how to fix it? And, as many people have asked: is it possible to perform early Z culling and still have transparent objects in the scene? If yes, any hints on how to do it in Ogre? Here are some screenshots.
14
How to read neighbor pixels in GLSL? I'm using SFML 2.1, it's much more straightforward for me so I can jump directly to learning the shading language. I'm trying to do something similar to conway's game of life. I already learned that I will need to use 2 textures since I need to read and write at the same time. My questions How do I read neighbor pixels ? Do I need to pass a vec2 from the vertex shader to the fragment shader like this ? vertex shader out vec2 pixelpos void main() pixelpos gl Position fragment shader in sampler2d mytexture in vec2 pixelpos void main() if(mytexture pixelpos.x pixelpos.y .r gt 0.5) How do I write to a texture ?
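Here is a hedged GLSL (ES 2.0 style) sketch of reading the eight neighbours: you don't read them by pixel index but by offsetting the interpolated texture coordinate by one texel, with the texel size passed in as a uniform (placeholder names below). Writing to a texture is not done from the fragment shader itself; you render this shader into a second render target (for example an sf::RenderTexture in SFML) and swap the two textures each generation (ping ponging).

    uniform sampler2D uState;     // current generation
    uniform vec2 uTexelSize;      // (1.0 / width, 1.0 / height)
    varying vec2 vTexCoord;       // 0..1, passed from the vertex shader

    void main()
    {
        float alive = step(0.5, texture2D(uState, vTexCoord).r);

        float neighbours = 0.0;
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
                if (dx != 0 || dy != 0)
                    neighbours += step(0.5, texture2D(uState,
                        vTexCoord + vec2(float(dx), float(dy)) * uTexelSize).r);

        // Conway's rules: survive with 2 or 3 neighbours, birth with exactly 3.
        float next = (alive > 0.5)
            ? ((neighbours > 1.5 && neighbours < 3.5) ? 1.0 : 0.0)
            : ((neighbours > 2.5 && neighbours < 3.5) ? 1.0 : 0.0);

        gl_FragColor = vec4(vec3(next), 1.0);
    }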
14
How is this particular HLSL condition treated with respect to compile or run time evaluation? Let's say I have this very simple pixel shader (cbuffers and other stuff omitted) float4 PS(VertexOut pin, uniform bool useLighting) SV Target float4 retColor gDiffuseMap.Sample( sampler0, pin.Tex ) if (useLighting) retColor retColor float4(gAmbientLight, 1.0f) return retColor and two techniques such as technique11 TexTech pass P0 SetVertexShader( CompileShader( vs 4 0, VS())) SetGeometryShader(NULL) SetPixelShader(CompileShader( ps 4 0, PS(false))) technique11 TexLitTech pass P0 SetVertexShader( CompileShader(vs 4 0, VS())) SetGeometryShader(NULL) SetPixelShader(CompileShader(ps 4 0, PS(true))) The way I understand it, the useLighting condition is evaluated during compile time and each technique will have its own version of the pixel shader function without any branching. That means the useLighting condition wouldn't have any runtime penalties. Is that correct? So it's kind of like C preprocessing? Why can the pin variable just be left out like that in the CompileShader call? It makes sense, of course, I'm just wondering if this is some special HLSL or Effect Framework syntax?
14
Where should shaders and lights be in a component based entity system? Where should I put the shader and the light/shadow calculation? Should those be components too? And should the rendering system know how to handle them, or should there be a separate light system? I'm specifically talking about a 2D system, but it should be the same for 3D, I think.
14
Different ways to mark the status of an enemy I have a 3D game with an "eagle view". I have enemies, and I have magic that affects the status of the enemies. I want to show the enemies' status (frozen, on fire (DoT damage), slowed, etc.). I'm thinking about two possibilities: put a 2D image on the enemy that shows the status, or change the color of the enemy with vertex shaders. Is there another, or more correct, way? In the shader case, how do I change the color of the enemy without losing all the details (something like a filter)?
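A tint that preserves detail is usually done in the fragment (pixel) shader rather than the vertex shader: sample the enemy's texture as normal and blend it toward a status colour, so the texture detail still shows through. A minimal GLSL style sketch, with placeholder uniform names:

    uniform sampler2D uDiffuse;
    uniform vec3  uStatusColor;   // e.g. icy blue for frozen, orange for burning
    uniform float uStatusAmount;  // 0 = no effect; around 0.4 keeps details visible
    varying vec2 vUV;

    void main()
    {
        vec4 base = texture2D(uDiffuse, vUV);
        // Blend toward the status colour while keeping the original detail.
        vec3 tinted = mix(base.rgb, base.rgb * uStatusColor, uStatusAmount);
        gl_FragColor = vec4(tinted, base.a);
    }

A floating 2D status icon and a shader tint also combine well: the icon reads at a glance from a top down camera, while the tint keeps the affected unit obvious even when icons overlap.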
14
Make part of the albedo transparent I have a shader which creates a circle inside a plane mesh. I would like to get rid of the parts around the circle, which are the r and b parts of the ALBEDO, but I can't figure out how to do it. The only thing I've found is ALPHA, but that seems to change the transparency of the entire material and not just parts of it. shader type spatial float circle(vec2 position, float radius, float feather) return smoothstep(radius, radius feather, length(position vec2(0.5))) void fragment() ALBEDO vec3(0, circle(UV vec2(0), 0.5, 0.005), 0) Which currently looks like
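ALPHA is assigned per fragment, so it can in fact vary across the surface: feed the same circle value into ALPHA and the area outside the circle becomes transparent while the circle stays opaque. A sketch in Godot's spatial shader language, using a circle() helper that is assumed to return about 1.0 inside the circle and 0.0 outside (flip it with 1.0 - c if your version is inverted):

    shader_type spatial;

    // ~1.0 inside the circle, fading to 0.0 at the edge.
    float circle(vec2 position, float radius, float feather) {
        return 1.0 - smoothstep(radius - feather, radius, length(position - vec2(0.5)));
    }

    void fragment() {
        float c = circle(UV, 0.5, 0.005);
        ALBEDO = vec3(0.0, c, 0.0);
        ALPHA = c;   // transparent where c is 0, opaque where it is 1
    }

Writing to ALPHA also moves the material into the transparent pipeline, which is what allows the per fragment cutout in the first place.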
14
What is ramp shading or lighting? What is ramp shading (or ramp lighting) and how does it work? Is it different from toon shading, or is it the same concept? How is specularity calculated differently for ramp shading versus Blinn Phong or Lambert?
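For context, the core of ramp shading is a one line change to a standard diffuse shader: instead of using N dot L directly, it is remapped through a 1D "ramp" texture (or curve) chosen by the artist. A hedged GLSL sketch with placeholder names:

    uniform sampler2D uRamp;      // 1D ramp stored as an Nx1 texture
    uniform vec3 uLightDir;       // normalized, pointing toward the light
    varying vec3 vNormal;

    void main()
    {
        // Standard Lambert term, remapped from [-1, 1] to [0, 1]...
        float ndl = dot(normalize(vNormal), uLightDir) * 0.5 + 0.5;

        // ...then used as a lookup into the ramp instead of being used directly.
        vec3 shade = texture2D(uRamp, vec2(ndl, 0.5)).rgb;

        gl_FragColor = vec4(shade, 1.0);
    }

A ramp with a few hard bands gives the classic toon look, so toon shading can be seen as one particular choice of ramp, while a smooth ramp just reshapes ordinary diffuse falloff. Specular can be treated the same way by feeding the Blinn Phong specular term through its own ramp instead of pow().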