_id: int64 (0 to 49); text: string (lengths 71 to 4.19k)
13
Converting Cube Maps I have cube maps in lat-long format, and I need to convert them to horizontal/vertical cross and individual cross images. Is there a utility to do that?
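A minimal sketch of the resampling step at the heart of any such converter, assuming an equirectangular (lat-long) source: every output pixel of a cube face (or cross cell) is turned into a direction, and that direction is mapped back to lat-long texture coordinates. The function name and the longitude convention below are illustrative, not taken from any particular tool.

    #include <cmath>

    // Map a 3D direction to (u, v) in an equirectangular (lat-long) image,
    // with u covering longitude [0, 1) and v covering latitude [0, 1].
    void directionToLatLongUV(float x, float y, float z, float& u, float& v)
    {
        const float pi  = 3.14159265358979f;
        const float len = std::sqrt(x * x + y * y + z * z);
        float longitude = std::atan2(x, -z);   // one common convention: -Z is "forward"
        float latitude  = std::asin(y / len);
        u = (longitude + pi) / (2.0f * pi);
        v = (latitude + 0.5f * pi) / pi;
    }

Looping over the six faces, computing each pixel's direction from the face orientation, and writing the sampled color into the corresponding cell of the cross layout produces the horizontal or vertical cross image.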
13
How do I render text in a view in SFML 2.5.1 without artifacts? I'm using the following code to render text in a letterboxed view.

#include <iostream>
#include <random>
#include <math.h>
#include <cstring>
#include <ostream>
#include <SFML/Graphics.hpp>

const float FIXED_UPDATE_TIME = 16.67f; // 60fps
const unsigned int DEFAULT_CAMERA_WIDTH = 1280;
const unsigned int DEFAULT_CAMERA_HEIGHT = 720;

// Compares the aspect ratio of the window to the aspect ratio of the view,
// and sets the view's viewport accordingly in order to achieve a letterbox effect.
// A new view (with a new viewport set) is returned.
sf::View get_letterbox_view(sf::View view, int window_width, int window_height)
{
    float window_ratio = window_width / (float) window_height;
    float view_ratio = view.getSize().x / (float) view.getSize().y;
    float sizeX = 1;
    float sizeY = 1;
    float posX = 0;
    float posY = 0;

    bool horizontal_spacing = true;
    if (window_ratio < view_ratio)
        horizontal_spacing = false;

    // If horizontal_spacing is true, the black bars will appear on the left and right side.
    // Otherwise, the black bars will appear on the top and bottom.
    if (horizontal_spacing)
    {
        sizeX = view_ratio / window_ratio;
        posX = (1 - sizeX) / 2.f;
    }
    else
    {
        sizeY = window_ratio / view_ratio;
        posY = (1 - sizeY) / 2.f;
    }

    view.setViewport( sf::FloatRect(posX, posY, sizeX, sizeY) );
    return view;
}

int main()
{
    sf::RenderWindow window(sf::VideoMode(DEFAULT_CAMERA_WIDTH, DEFAULT_CAMERA_HEIGHT), "Adventure of Jaggy Font Edges");
    window.setMouseCursorVisible(false);

    // Set up resource
    sf::Font fnt;
    fnt.loadFromFile("res/fnt/Ruda Regular.ttf");

    // Set up text
    sf::Text text;
    text.setFont(fnt);
    text.setString("TEXT RENDERING");
    text.setCharacterSize(128);
    text.setFillColor(sf::Color::White);
    text.setOutlineColor(sf::Color::Black);
    text.setOutlineThickness(3.f);

    // Center text (in world space, I believe?)
    sf::FloatRect text_rect = text.getLocalBounds();
    text.setOrigin( floorf(text_rect.left + text_rect.width / 2), floorf(text_rect.top + text_rect.height / 2) );
    text.setPosition(0, 0);

    // Set up view, and center it on the text
    sf::View camera_view(sf::FloatRect(0.f, 0.f, DEFAULT_CAMERA_WIDTH, DEFAULT_CAMERA_HEIGHT));
    camera_view.setCenter(0, 0);

    sf::Clock clock;
    while (window.isOpen())
    {
        // only update at 60fps
        sf::Time elapsed_time = clock.getElapsedTime();
        if (elapsed_time.asMilliseconds() < FIXED_UPDATE_TIME)
            continue;

        // handle events
        sf::Event event;
        while (window.pollEvent(event))
        {
            // window closed event
            if (event.type == sf::Event::Closed)
                window.close();
            // window resized event
            else if (event.type == sf::Event::Resized)
                camera_view = get_letterbox_view(camera_view, event.size.width, event.size.height);
        }

        window.clear(sf::Color(255, 255, 255, 255));
        window.setView(camera_view);
        window.draw(text);
        window.display();
    }

    return EXIT_SUCCESS;
}

At the default resolution, it looks like this. However, the moment I change the window size, we get artifacts.
13
Should the frame rate be consistent or is it normal for it to jump around? As I commented on a post, I started wondering if it was correct or not (the comment, not the answer), and I'd like to straighten that out. Although not needed as info, here's the link to the question: Scene management for 3D editor. My comment: "Wouldn't mind that much about the framerate, as long as it stays above 60. You can get away with 30 even." Still, I think if the frame rate is over 30, the real-time rendering stays a good animation. But sometimes I read that people restrict the render() to 60fps or 30fps for performance. This would remove all unnecessary calculation which we can't see anyway. Although true, they (sometimes) claim it is an optimization. When a scene renders more objects, the fps will drop; I think that is normal. So what's the deal with fps? Should it stay more or less the same even when you turn the camera 90 or 180 degrees, or only when keeping (almost) the same view?
13
Do multiple materials on the same mesh cause low performance? I'm making something like a "low poly" style game. For that, I'm using multiple materials for each mesh, without UV mapping and texturing. Luckily, only the weapons do this, not the other objects. Also, it's a first-person shooter, so most of the time only 7-8 materials are required for rendering. However, I heard that lots of materials cause more rendering work for the GPU, which can cause a performance issue. My target platform is Android, and that could be a big problem. Should I change all my weapons to have a single material each? This is a screenshot of my game. As you can see, a single weapon uses 6 materials.
13
What causes aliasing? I always hear about aliasing and anti aliasing and I know what it looks like but what I don't understand is what causes it. Is it a physical phenomenon? Or a numerical one? If it helps to explain, I have some programming knowledge, but not in video games or graphics.
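One way to make the cause precise is the sampling view of rendering: the screen is a grid of samples of a continuous image, so aliasing is a numerical (sampling) phenomenon rather than a physical one. A component of spatial frequency $f$ can only be reproduced faithfully when it lies below the Nyquist limit of the sample rate $f_s$:

$$ f < \frac{f_s}{2} $$

Detail above that limit does not disappear; it folds back into lower frequencies, which is what shows up as jaggies and shimmering. Anti-aliasing therefore either raises the effective sample rate (supersampling, MSAA) or filters out the too-high frequencies before sampling (mipmapped textures, pre-blurred edges).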
13
Drawing entities in an isometric engine I am having some problems drawing my entities in an isometric map. Tiles are drawn using Painter's algorithm to do the z sorting which works great for the tiles alone. Entities are parented to a particular tile and have an offset within it. They are drawn immediately after their parent tile. The problem is that when the entity is too far right or too far down the next tile(horizontally to the right, or vertically below) is drawn over parts of it like so (Note, currently my player has his registration point (red circle) a little higher than right at the bottom, just to better approximate the centre of his feet.) A couple of ideas I had to try and remedy this 1) To simply offset the position of entities to move them so that they will always draw in a place that won't be drawn over later. I really don't like to add in strange offsets that must be compensated for all over the place but this seems like a viable option. 2) To draw everything tiles and entities, using painter's algorithm. Entities would not need to be parented to a tile anymore but every renderable graphic would need it's drawing position offset. (Tiles would have their point at the very top, entities at the very bottom so that painter's orders things correctly) 3) Implement some kind of layering system so that all floors are drawn first, then things behind entities etc etc. This seems complex and would change from scenario to scenario. As my game will have randomly generated levels, I feel this isn't the right solution. So, do any of the above have merit? Do I have it all completely wrong and there's another solution I've missed?
13
How do I use tiles and sprites together in an isometric scene? I'm trying to write a 2D isometric scene. Rendering order is complex, since both tilemaps and sprites are different concepts. Rendering one of the 2 before the other will draw the scene incorrectly. Tiles and Sprites both have some common data that can be used as render information. I thought of creating an extra object which simply holds the coordinates and texture data. However, this also meant having to couple a tile or sprite to a render info object. (something I haven't figured out). This adds complexity. However, I thought this way I could abstract over any renderable object. This tutorial defines tiles as any other sprite and then uses a topology algorithm to sort the scene every frame. I was actually thinking of using the pigeon hole algorithm by sorting the render info objects on their depth property. How is this usually done? I can't wrap my head around it. Bear in mind that I have no actual z depth to work with. Everything relies on the artificial depth from the x and y coordinates.
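A common way to do this is exactly the flat render-info list described above, with the sort key derived from the map coordinates. A minimal sketch follows; the field names are illustrative, std::sort is used for brevity, and a counting (pigeonhole) sort over the same integer key works just as well once depths are bucketed.

    #include <algorithm>
    #include <vector>

    // One flat list of renderables (tiles and sprites alike); the artificial
    // depth key comes from the map coordinates, so no real z value is needed.
    struct Renderable
    {
        float mapX, mapY;   // position in tile/world coordinates
        // ... texture handle, screen position, etc.
    };

    void sortForIsometricDraw(std::vector<Renderable>& items)
    {
        std::sort(items.begin(), items.end(),
                  [](const Renderable& a, const Renderable& b)
                  { return (a.mapX + a.mapY) < (b.mapX + b.mapY); }); // back to front
    }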
13
What exactly IS a model? I'm trying to actually learn OpenGL after having experimented with it for a while, and so I'm trying to build a rudimentary rendering engine (not a game engine) to lead me through it. I've come to a conceptual obstacle, however: I don't know exactly what a model really is. I mean obviously, it is composed of one or more meshes and textures, and other important data. But how much data? Should shaders be associated with models directly, with each one holding some sort of reference to the shader they require? Or should that be handled in another layer? I think it makes sense to incorporate the shader program and the necessary parameters along with the meshes and textures as part of a model, but I wanted a more experienced opinion.
13
SDL2 Rendering based on frames Is it possible to render a particular object on the screen for a certain number of frames without having to delay the rendering? Thanks
13
How to create a vertex buffer that provides this pattern? I have a series of vertices that I want to layout with the following configuration, but I haven't been able to find out how to do this with the square and X pattern. Most of the time I have generally seen a quad split into two triangles usually using triangle lists. I suspect this is using triangle strips but I am not exactly sure how to do it. Could anyone help?
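For reference, here is one way such a "square and X" pattern is commonly indexed, assuming the picture shows a quad subdivided into four triangles that meet at a center vertex: five vertices (four corners plus the center) and a twelve-entry triangle list. The corner ordering below is an assumption (counter-clockwise), and a triangle fan starting at the center vertex expresses the same thing without an index buffer.

    #include <cstdint>

    // Vertices 0..3 are the quad corners in counter-clockwise order, 4 is the center.
    // Four triangles, all sharing the center vertex, drawn as a triangle list.
    const uint16_t kQuadCrossIndices[12] = {
        4, 0, 1,
        4, 1, 2,
        4, 2, 3,
        4, 3, 0
    };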
13
What do game developers do when they have to port their DirectX games to PS4/Switch? I'm not a game developer (hope to be one day though!) and was wondering how devs handle porting their games that use DirectX to PS4/Switch? From what I know, neither supports DirectX, only OpenGL and Vulkan. Do the devs just have to recreate their game with OpenGL or Vulkan? That seems like a lot of work. I'd imagine it would probably be a fairly common scenario, especially with AAA games, so I'd assume that there would be something somebody has made to make it easier?
13
Blender Object Appearing Gray when all Lights are Off I have an issue with Blender where, when I turn my only light off (a sun lamp) and render the image my object appears gray rather than black (and thus, not appear to the camera). I can't figure out why this is happening. Here's what I just did in my scene Added a new UV Sphere mesh (to make a total of two spheres), made it visible to the camera, turned off the sun lamp (by setting energy to 0), and rendered. The result I obtained is below. I discovered this when attempting to render the first sphere with a material texture on it and it was too bright. The material on the spheres (which are different) are very basic, there's no emit, diffuse and specular are at default values. Could there be an issue with the way my camera is setup? Thanks in advance!
13
Changing rendering in UE4 Is it possible to have UE4 render the graphics with a black line around everything, similar to a cel-shaded style, or would that require models specifically prepared for that purpose?
13
Adding lighting to custom vertex shader in Unity What's the simplest way to add shadow functionality based on the modified vertices, nothing I've found online seems to work Shader quot Unlit HyperUnlit quot Properties MainTex ( quot Texture quot , 2D) quot white quot Color( quot Color quot , Color) (1,1,1,1) SubShader Tags quot RenderType quot quot Opaque quot LOD 100 Pass CGPROGRAM pragma vertex vert pragma fragment frag include quot UnityCG.cginc quot include quot Complex.cginc quot struct appdata float4 vertex POSITION float2 uv TEXCOORD0 struct v2f float2 uv TEXCOORD0 UNITY FOG COORDS(1) float4 vertex SV POSITION sampler2D MainTex float4 MainTex ST fixed4 Color Params float Tscale float3 Tpos float klein float Rscale Mobius Transformation (a,b,c,d) float2 a float2 b float2 c float2 d v2f vert (appdata v) v2f o Reduce to float3 float3 worldPos mul(unity ObjectToWorld, float4(v.vertex.xyz, 1)).xyz 2. Apply linear transformations worldPos Tscale worldPos Tpos 3. Apply mobius transformation worldPos MobiusXZ(a,b,c,d, worldPos) 4. Klein? if (klein 1) worldPos PoincareToKlein(worldPos) 5. Scale to disk radius worldPos Rscale Convert back to ? space o.vertex mul(UNITY MATRIX VP, float4(worldPos, 1)) transform position to clip space o.vertex UnityObjectToClipPos(v.vertex) o.uv TRANSFORM TEX(v.uv, MainTex) return o fixed4 frag (v2f i) SV Target sample the texture fixed4 col tex2D( MainTex, i.uv) Color return col ENDCG UsePass quot Legacy Shaders VertexLit SHADOWCASTER quot
13
Why not draw a custom font with lines and or polygons? Reasons advantages I see More flexible procedural animation. Completely custom font. Performance (no texturing or high poly)? No assets (unless data driven). Multi resolution compared to sprite fonts sprites. Downsides I see Effort and time to define every character. Effort and time to implement the general code. Future localization pain. Example https youtu.be Tz7L 92Bo M?t 2m13s Is this unreasonable and why? Will I get a rendering performance win compared to sprite fonts on common multi core mobile devices?
13
DIY hologram experiments split screen I am currently experimenting with DIY holograms. Does anyone know how to split my window into 4 perpendicular identical displays like this, but for Windows? I am using Unreal Engine and I want to make something like this, but the user should be able to rotate the camera.
13
Vulkan PushDescriptorSetTemplates (Can only parts of the UpdateTemplate be updated?) I'm exploring VkDescriptorUpdateTemplate usage and really like the efficiency in coding this versus individual DescriptorSets, except I have one concern. When making an UpdateTemplate for each shader "program" with multiple UBOs, SSBOs, combined image samplers, etc. and attempting to update a single buffer/sampler, I have to update them all based on the template. In reading the specification and googling (not much information on descriptor templates), it isn't clear if there is a way to update only part of the DescriptorTemplate and allow the remainder of the bindings to remain. Quick code example (updating all three buffer/sampler bindings). This works, but in changing the sampler, I have to rebind the other buffers as well.

FDescriptorInfo Sampler(LinearSampler, Tex->ImageView, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
FDescriptorInfo Desc[] = { VertexBuffer, CameraBuffer, Sampler };
vkCmdPushDescriptorSetWithTemplateKHR(Vulkan->CommandBuffer, Vulkan->MeshProgram.UpdateTemplate, Vulkan->MeshProgram.Layout, 0, Desc);

If I just want to update the combined image sampler, is there a way to only update the 3rd binding? The example below gives validation errors, along with attempting to just pass a single DescriptorInfo, as there is no way to tell PushDescriptor which binding I want to update versus all of them in the template.

FDescriptorInfo Sampler(LinearSampler, Tex->ImageView, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);
FDescriptorInfo Desc[] = { 0, 0, Sampler };
FDescriptorInfo Desc[] = { Sampler };  // also tried this, but there is no way to tell vkCmdPushDescriptorSetWithTemplate I'm only updating the 3rd binding
vkCmdPushDescriptorSetWithTemplateKHR(Vulkan->CommandBuffer, Vulkan->MeshProgram.UpdateTemplate, Vulkan->MeshProgram.Layout, 0, Desc);

Is it possible to reduce re-binding/re-pushing other buffers when utilizing the DescriptorSetTemplates? Thanks in advance for any advice.
13
How is the Unreal rendering pipeline implemented? I know this seems like a very broad question, but I am just interested in getting to know what the Unreal engine's rendering pipeline looks like, especially how it handles materials, different types of lighting, and post processing. I know that Unreal 4 uses deferred shading, but how does it handle things like translucency, reflections, etc.? I do not have access to the source code, that's why I asked this question here. You could even direct me to the source of the rendering part if available.
13
How should I set camera in Blender for example to render a sprite which can be used in an isometric map? I am trying to render some 3D model in blender and use them as a texture sprite in a game which uses Apple s SpriteKit. I have an isometric map with tiles size 32 as width and 24 as height, Since I use same size tile whole over the map (isometric) I need to use an orthographic projection I think! please tell me if I am wrong but I was pretty sure! anyway... I am using Blender to render my sprites (3D models) but I can not set the camera direction to have the rendered image really fit with the map. The attached picture shows the problem, take a look at the building. I have also attached the geometric math that I have used to create my render script in Blender. I found out that if I use a 32,24 tile I need to look at from 29Deg 29Min 28Sec but it seems not alright as the iOS simulator shows! Can anyone help me on this? How do I have to prepare my sprite to fit well on my isometric map? Those who have experience with isometric map games are welcome to answer and I really appreciate it. Here is the code I wrote in python to render the 3D Model, I have also set the camera to Orthographic manually. import bpy cam bpy.data.objects "Camera" def look at(cam, point) loc camera cam.matrix world.to translation() direction point loc camera point the cameras ' Z' and use its 'Y' as up rot quat direction.to track quat(' Z', 'Y') assume we're using euler rotation cam.rotation euler rot quat.to euler() meshObj bpy.data.objects "plate" meshObj.rotation mode 'XYZ' meshObj.rotation euler (0,0,0) d meshObj.dimensions Finding maximum dim of the object objectScale 1 if d 1 gt d 2 objectScale d 1 16 elif d 2 gt d 1 objectScale d 2 16 cam.rotation mode 'XYZ' cam.location (20.416 objectScale,20.416 objectScale,20.416 objectScale) cam.rotation euler (0.7853,0.7853,0.5147) cam.rotation euler (0.7853,0.7853,0) look at(cam, meshObj.matrix world.to translation()) alamp bpy.data.objects "Lamp" alamp.location (17.416 objectScale 4,17.416 objectScale 4,20 objectScale 4) print("Scaling ",objectScale) bpy.context.scene.render.filepath " Users iman Documents Render ISOBUILDING.png" bpy.ops.render.render(write still True, use viewport True, scene "Camera )
13
How to sort tiled decal list? I have a tiled forward render pipeline (also called forward ). It assigns a list of lights for every 16 16 block of pixels (tiles) on the screen. Lights are accumulated additively so their order doesn't matter inside the list. The problem comes when I add decals to the light list. Decals need to be evaluated before the lights (because they can modify surface properties), and even their order among themselves matter because their texture is alpha blended on top of each other. Because the lights and decals are culled in parallel, and added to the list by keeping a shared pointer to the end of the list which is incremented atomically when a light is added, the order of them can not be guaranteed. This is how I add a light to the tile's light list from a thread I have a numthreads(16,16,1) thread group for (uint i groupIndex i lt g nNumLights i 16 16) Light light GlobalLightArray i if(IsLightVisible(light)) uint index InterlockedAdd(LightCount, 1, index) LightCount is a groupshared uint (the whole threadgroup can see it) LightArray index i A light is always added to the back of the per tile light array I thought I should try to sort the light list after the culling and the creation of the list has ended, but I have no experience with highly parallel sorting algorithms. I however came across a nice presentation Holy smoke! Faster Particle Rendering using Direct Compute by Gareth Thomas . From the 8th slide it has a brief explanation of the bitonic sort algorithm with pseudocode and nice pictures. I get the idea, but I have no idea how to implement it or if it could be even done efficiently for my case because my light list is variable in size for each tile. Maybe it could be done in the same dispatch when the culling occurs, on this array (which hold a variable amount of light indices lt 1024) groupshared uint LightArray 1024 I have also seen a DirectX11 example of the algorithm of this kind of sorting, but on a different kind of dataset and across multiple dispatches and other post processing transposing which is not clear to me at all. Or maybe there is a completely different approach that I should take which doesn't involve sorting? Maybe when adding lights to the back of the array, the order of addition could be enforced somehow? I have also learnt that the new Doom game also renders its decals this way.
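For reference, the compare-exchange structure of the bitonic network described in that presentation is easy to see on the CPU first; a minimal C++ sketch is below (the array length must be padded to a power of two, e.g. with 0xFFFFFFFF sentinels for the unused slots of a variable-sized list). In the compute-shader version, the inner loop over i becomes one thread per element operating on the groupshared LightArray, with a group barrier between each (k, j) pass, which is how it is typically run on groupshared data in the same dispatch after culling.

    #include <cstdint>
    #include <utility>
    #include <vector>

    // CPU sketch of a bitonic sorting network. a.size() must be a power of two;
    // pad a variable-sized list with 0xFFFFFFFF so the padding sinks to the end.
    void bitonicSort(std::vector<uint32_t>& a)
    {
        const size_t n = a.size();
        for (size_t k = 2; k <= n; k <<= 1)          // size of the bitonic sequences
        {
            for (size_t j = k >> 1; j > 0; j >>= 1)  // compare distance (one GPU pass + barrier)
            {
                for (size_t i = 0; i < n; ++i)       // on the GPU: one thread per element
                {
                    size_t partner = i ^ j;
                    if (partner > i)
                    {
                        bool ascending = ((i & k) == 0);
                        if ((a[i] > a[partner]) == ascending)
                            std::swap(a[i], a[partner]);
                    }
                }
            }
        }
    }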
13
Seamless tiling with TexturePacker and Marmalade IwGX I'm looking for a way to get seamless tiling working, where the tiles are sprites off a TexturePacker sprite sheet, and the rendering is done with Marmalade's IwGX's streams. I also need to render the tiles in multiple scales at the same time. Even with TP settings "reduce border artifacts off" and "inner padding 0", there are very noticeable, pixel wide gaps between the tiles. If I move the tiles close together, they look fine at a scale of 1, but anything smaller or larger yields gaps. If I use one tile per texture, I get no gaps, but it means that I either can't have a variety, or each tile will take one draw call material switching, which is not only slow, but brings the gaps back. Any tips?
13
How do I efficiently render the tiles of a tile map? I've been working on a game, and I've been using Python with Pyglet to create it. I had an issue on how I could do effective tile based rendering. I tried cocos2d, however the API is very efficient and doesn't support maps which are larger than 1000x1000 tiles. I did some further research, and found you could make it so when you triggered an update, only sprites on the edge of the screen would update, however it is still buggy (it's not fully working). Here is my code, and here's some relevant rendering code self.camera 0 amount for a in range(self.window.height self.block size) self.sprites (self.camera 0 self.block size) 1 a (self.camera 1 self.block size) .batch render self.sprites self.window.width self.block size (self.camera 0 self.block size) 1 a (self.camera 1 self.block size) .batch None What approach should I use to render maps larger than 1000x1000 tiles?
13
Pygame performance issue for many images I've made a script for generating a game world based off of image pixel data. Trying to make a python map editor My progress so far has resulted in a program which loads an image and draws sprites in positions correlating with the map, like this Now the problem is that even for small levels, I'm noticing a drop in frame rate. The provided example level will make pygame drop from 100 fps to 80 fps. If the map is 40 x 40 the frame rate will be around 17. This is concerning, for two reasons. The program is not doing anything else than drawing images right now, how bad will the frame rate be when game logic is also being calculated every frame? Second, the aim is to ultimately make a two player strategy game, in which the levels would be significantly larger than the example, and in that case 20 frames per second is way below acceptable parameters. Every sprite in the game is a 48x48 pixel png image with some portion of transparency. Everything except rendering the sprites is done once when the game starts. This is the only code being run on every frame def draw(self) self.screen.fill((0,0,0)) for e in self.map layout.fixed list self.screen.blit(e.picture, e.rect) I'm guessing pygame's method for rendering images does not scale very well. Windows task manager shows my CPU utilisation at 2 with a map of size 40x40. My question is What can I do to improve frame rate? I don't mind switching from (or complementing) pygame to something else, so long as the transition is not very difficult for a novice programmer like myself.
13
Can an unmodifiable 3D map be one large model or many small models? Unmodifiable meaning that any ingame actions have no result on the terrain. Like in Source games. Question Should I export one large model of the map, or should I split it into smaller chunks? If so, how?
13
How do I refer to the two types of loading screen? I'm troubleshooting some issues beyond the scope of this forum, and I'd like to be able to talk about certain parts of the gaming experience, specifically the two types of loading screen I see in Prey. Both are shown in this short video I made. The first type with the loading bar is what I usually think of as the "loading screen", but what do I call the second "loading screen" that happens right after with the moving squares in the bottom right? To be clear, I'd like a term to be used with non game devs.
13
How can I render cloud patterns like in these examples? I have a quick question. I see in many games, usually in the menu, some moving "clouds" in the background, apparently additive blended into each other, which does a really nice job immersing the player, in my opinion. A couple of examples that jump to mind right now are the Far Cry 3 main menu background https www.youtube.com watch?v B5XcMx3GjPA and the Plague Inc main menu background https www.youtube.com watch?v RQv60ywrLxU (the first seconds show it) These cloudy patterns seem like some kind of noise to me, like Perlin or other. So, how would you proceed to achieve that kind of blurred cloud effect with vivid colors? More specifically, would you pre generate sets of clouds and include them in the game package? Or generate them on the fly? On the CPU as a regular texture or on draw in the GPU shader program? I am interested in mastering this kind of visual effect, and as such any help would be appreciated pointing me in the right way. Thanks.
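Those backgrounds are usually a few octaves of low-frequency noise (fractal Brownian motion), scrolled over time and tinted or additively blended, and they can be generated either offline into textures or per pixel in a shader. A minimal CPU sketch of the noise itself is below; nothing here is taken from the games mentioned, and the hash constants and octave count are arbitrary.

    #include <cmath>
    #include <cstdint>

    // Cheap hash-based 2D value noise with smooth interpolation, plus an fBm
    // accumulator. Scrolling the sample coordinates over time and blending a
    // couple of differently-colored layers gives the drifting "cloud" look.
    static float hash2(int x, int y)
    {
        uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return float(h & 0xFFFFFF) / float(0xFFFFFF);   // [0, 1]
    }

    static float valueNoise(float x, float y)
    {
        int xi = int(std::floor(x)), yi = int(std::floor(y));
        float tx = x - xi, ty = y - yi;
        tx = tx * tx * (3.0f - 2.0f * tx);              // smoothstep
        ty = ty * ty * (3.0f - 2.0f * ty);
        float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
        float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
        float top    = a + (b - a) * tx;
        float bottom = c + (d - c) * tx;
        return top + (bottom - top) * ty;
    }

    float fbm(float x, float y, int octaves = 5)
    {
        float sum = 0.0f, amp = 0.5f, freq = 1.0f;
        for (int i = 0; i < octaves; ++i)
        {
            sum  += amp * valueNoise(x * freq, y * freq);
            freq *= 2.0f;
            amp  *= 0.5f;
        }
        return sum;                                     // roughly in [0, 1)
    }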
13
Portal frustum culling Our Levels consists of rooms. Each room has an arbitrary number of portals into other rooms. We frustum cull rooms by testing whether all portal vertices (a portal consists of a single quad) are outside of the frustum this is done recursively until no room is found. This however has one flaw. if a portal is visibly occluded, it is still taken into account because it is within the frustum. Is there an easy way to cull the rooms which are in the frustum but occluded? I had Hardware occlusion culling in mind, but I am not sure whether this is too overkill.
13
How do you simulate UV light and materials with UV reflective fluorescent properties within a PBR renderer? I have a PBR setup, say, in Unity (or Unreal, doesn't really matter, I'm asking about the general principle, not specific implementation), and I would like to add a UV light source, and have control over how my materials respond to it, but still ideally have it conform to the rules of physically based rendering. Question is what do I need to extend with what kinds of properties, and what are the equations to wire them together, and where do I plug those? Do I need to actually do modifications like this, or does the fact that it already is PBR mean that I only need to plop in a light source with correct combination of settings, and correct responses to that UV source just naturally happen? Edit I am asking about fluorescence, where I want certain materials to glow in particular visible light colours in places where the invisible UV light hits them.
13
How do I efficiently determine what objects are visible to a camera? I want to call the rendering methods of only the game objects that are visible. How can I efficiently determine which objects or tiles are within the camera's rendered region?
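For a 2D camera the core test is just a rectangle overlap, which can then be accelerated with a spatial structure (uniform grid, quadtree) when there are many objects; a minimal sketch:

    struct Rect { float x, y, w, h; };   // axis-aligned, world space

    // True if an object's bounds overlap the camera's view rectangle;
    // only objects passing this test need their render methods called.
    bool isVisible(const Rect& object, const Rect& camera)
    {
        return object.x < camera.x + camera.w &&
               object.x + object.w > camera.x &&
               object.y < camera.y + camera.h &&
               object.y + object.h > camera.y;
    }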
13
Isometric rendering and picking? I've been looking for a formula to plot (world to screen) and mouse pick (screen to world) isometric tiles in a diamond-shaped world. The ones I've tried always seem to be, well, off. What's the usual correct way to do this?
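The usual diamond-map mapping and its exact inverse look like this; a sketch where the tile size and any camera/scroll offset are assumptions you would substitute with your own values:

    // tileW, tileH are the on-screen diamond dimensions (e.g. 64 x 32).
    // (tx, ty) are tile coordinates in the diamond-shaped map.
    void tileToScreen(float tx, float ty, float tileW, float tileH, float& sx, float& sy)
    {
        sx = (tx - ty) * (tileW * 0.5f);
        sy = (tx + ty) * (tileH * 0.5f);
    }

    // Inverse mapping, used for mouse picking; floor the results to get the
    // integer tile index, and add camera offsets before calling as needed.
    void screenToTile(float sx, float sy, float tileW, float tileH, float& tx, float& ty)
    {
        tx = (sy / tileH) + (sx / tileW);
        ty = (sy / tileH) - (sx / tileW);
    }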
13
How to create realism similar to scenes rendered in Vray? I am tasked to make a game which requires very realistic interior scenes. Unity's bedroom demo(http blogs.unity3d.com 2015 11 10 bedroom demo archviz with ssrr ) really caught my eye. See I know how to code but I am not good at modelling in 3DS. I see some amazing images only generated in 3DS using vray. But those are static images and the moment the same models are imported into Unity, they end up looking unrealistic. But somehow in the demo, Unity managed to create a very realistic looking scene. So here are my questions How do I create realistic game scenes like the ones rendered by vray in 3DS Max? Is it possible to bake the image rendered by vray into a UV map? I couldnt find how to do this. Also, pls refer to the image in this link, from evermotion http www.evermotion.org files tutorials content uploads nr AI44 002 cam 001 pp 051.jpg and also this image http www.evermotion.org files tutorials content uploads nr ae44s02 0041 Layer 2 022.jpg. The second pic is very unrealistic but the first pic after vray rendering looks amazing. I am sure the bedroom demo assets looked the same and assuming the answer to my first question is it cannot be done, then how do I go about creating the same level of realism as in the first pic? Guys this is a really long question and might not be clear. So sorry, tried to describe it the best I can. Appreciate any answer, tips or advice. Thanks guys
13
Efficiently rendering tiled map on OS X I'm writing an original (top down) SimCity clone in Swift and attempting to use SpriteKit as the basis for the game. However, I am running into performance issues when rendering and animating the tile map which represents the city map. I'm rendering a 44x44 tile map with each tile being 16x16 pixels. Tile animation could happen on any arbitrary tile and is implemented by having a separate image for each frame of the animation. The map is dynamic (naturally) since the player will draw on it and tiles can be animated (roads, etc). I have tried several implementations to render the map the screen and each has had its own performance issues. What I've tried Each tile is an SKSpriteNode with its texture loaded from a texture atlas. Textures were swapped for tiles that were dirty (needed redraw). I disabled physics simulations and physics bodies on the nodes. Pros Minimal draws (due to the Texture atlas) Code wasn't very complex Cons Redrawing the entire map destroyed performance Swapping textures to animate was inefficient Map was rendered using NSView, drawRect, etc. Pros Drawing was relatively efficient Dirty rectangle drawing was easy to implement Cons Not really suitable for animation (really bad performance) My question is What is the most efficient way of rendering animating my tile map? Is there an optimization I can make to SpriteKit to speed this up? Or do I need to use something lower level (OpenGL GLSL) to draw animate the map efficiently? Any help would be greatly appreciated!
13
What is the relationship between clipping and the fog of war concept? I'm currently developing a 2D top down game and recently implemented clipping. I understand clipping in a 2D top down game as a rectangle or any other geometric form which defines a viewport for the player: what exactly he sees and what is technically rendered by the engine. As I'm centering the clipping area around the player, I recognized that it is similar to the fog of war concept. So the player has a limited view perspective depending on his current position. My question is: what is the concrete difference from the fog of war concept? Does that concept usually use clipping? I often noticed that, for example, the map is rendered but simply not the objects which are on that map. Are these objects rendered and simply invisible, or are they not rendered at all because of the clipping? Could clipping be defined as a way to achieve fog of war? Would be cool if anyone could shed some light on this topic.
13
Isometric painter's algorithm problem Note: I don't use tiles, I use 3D polygons :) I'm currently working on a real-time renderer for scanned real-life objects. My main goal is to have an isometric viewer with the simple ability to rotate the object. This alone is a simple problem. I basically had the full solution for it, but I wasn't happy with the performance at all. As you might know, software rendering on the CPU is slow (but I don't want to code a game engine, just a small viewer which should be okay with a low poly count like < 10000), but for what I need it works surprisingly well. The only part where my renderer is too slow is the z-buffer (multiple seconds for a 3000-poly obj), mainly because it loops over x × y × objects × polygons. Currently I'm using some sort of painter's algorithm to draw polygons one after another, which brings me some problems https youtu.be PDq4xtrgoi8 As you can see, the distances per poly are calculated very poorly, just the center of the poly to the world point 1000,1000,1000. My question is not just about the painter's algorithm, but I'd love to use a minimally failing method of drawing the polygons in a specific order rather than calculating every pixel by z-buffering. If you have other simple methods which work fast on a CPU, I'd really appreciate hearing about them! :)
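For ordering polygons without a z-buffer, the usual painter's-style baseline is to sort back to front by each polygon's depth along the view direction (for example its centroid's view-space z) rather than by distance to an arbitrary world point. A sketch, assuming positive z points away from the camera after the view transform:

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Polygon3 { std::vector<Vec3> verts; };   // vertices already in view space

    // Sort polygons back to front by centroid depth, then draw them in order.
    void sortBackToFront(std::vector<Polygon3>& polys)
    {
        auto centroidZ = [](const Polygon3& p)
        {
            float z = 0.0f;
            for (const Vec3& v : p.verts) z += v.z;
            return z / float(p.verts.size());
        };
        std::sort(polys.begin(), polys.end(),
                  [&](const Polygon3& a, const Polygon3& b)
                  { return centroidZ(a) > centroidZ(b); });  // larger z = farther, drawn first
    }

This is O(n log n) per frame and avoids the per-pixel z-buffer loop, but it can still mis-order long, interpenetrating, or cyclically overlapping polygons, which is the classic limitation of the painter's algorithm.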
13
Techniques to mitigate microstuttering when the FPS is above 60fps and vsync is off The problem with first person shooters is that your input is important, and coupling it with VSync ruins it. The following frames per second data when rendering with OpenGL shows 60 100 fps annoying jitter 140 fps minor jitter, but certainly noticeable 200 fps very minor small jitter sometimes 260 fps either generally fine or very very small 300 fps perfectly fine The problem is other games I play that use similar rendering algorithms only stutter in the range of 60 100fps, whereas my range is 60 250fps (albeit it gets better when you get into the 200 range but you can still notice it if paying attention). At first I thought my game loop may have been wrong, but after going through it in great detail it is working just fine (and vsync works perfectly smooth with it too, if it was broken this would be an issue the update rate and the vsync monitor refresh rate are not divisible by each other). 60 100 fps is 62.5 10ms 100 250 fps is 10 4ms Not being able to go above 4ms would be pretty constraining! Because I'm using C (note I have zero GC, it doesn't run), I don't know if this means I'm somehow running into higher variability with the VM than I would when I did stuff in C . I don't know if being in the 4 10ms range means that sometimes it will spike. Now I did profile this, and this is what I've seen Ignoring the green line, the blue line is the uncapped version and orange is vsync. Since the vsync line has massive variability but seems to render fine, whereas the blue one has spikes but are all generally below the vsync line (and definitely not greater than the spikes from vsync), I don't understand whether I'm actually seeing those blue spikes and it's because of some kind of intermediate frame tear or not. It doesn't look like a frame tear to me, it just looks like a frame is lost and I lurch forward a bit more than normal like I was lagging online. So the question is... what can I do to diagnose this issue? Or is this normal? I was thinking of rendering to a framebuffer and just drawing that every 1 60 seconds in an attempt to smooth things out and emulate vsync but this comes with it's own drawbacks. Problem is... my back is against the wall and I'm worried I'll be doing potentially stupid and wasteful stuff. EDIT 1 Ticker code that does what I do (int, double) timerFunc() long currentTime GetCurrentTime() accumulation currentTime lastTimeSeen lastTimeSeen currentTime If it returns 1.27, then we have to run 1 tick, and interpolate at t 0.27 double tickFraction accumulation timePerGametick int ticksToRun floor(tickFraction) double fraction tickFraction ticksToRun if (ticksToRun gt 0) accumulation (ticksToRun timePerGametick) return (ticksToRun, fraction) EDIT 2 Game loop and basic logic (uses timerFunc data from above) void update(int ticksToRun) for i in ticksToRun foreach plane in planes that should move plane.Tick() foreach entity in entities entity.Tick() interpolation will be in 0.0, 1.0) void render(double interpolation) foreach plane in planes planeDelta (plane.current plane.prev) planeInterpolated plane.prev (planeDelta interpolation) render(planeInterpolated) foreach entity in entities entityDelta (entity.current entity.prev) entityInterpolated entity.prev (entityDelta interpolation) render(entityInterpolated) swapBuffers() void gameLoop() while (true) (int ticksToRun, double fraction) timerFunc() pollInput() if (ticksToRun gt 0) update(ticksToRun) render(fraction)
13
How to compute the 2D equations of 3D circular arcs? I'd like to obtain these equations for the ellipses produced by the perspective projections of (3 dimensionally transformed) circles. This is useful for rendering in 2D contexts which provide curve primitives. I'm using HTML5's canvas, so I get Beziers, arcs, and quadratic curves. See here The projection of a sphere outside of the plane of projection is an ellipse because the view is a cone (silhouette of a sphere is a circle). However if I want to draw my sphere using circular wireframes, that projection cone is no longer a circular cone. So it's not your traditional conic section anymore. How to deal with this?
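A sketch of the parametric route, assuming a pinhole camera at the origin with the image plane at $z = f$ (substitute your own view transform): a 3D circle with center $C$, radius $r$ and orthonormal in-plane axes $U$, $V$ projects point by point as

$$P(\theta) = C + r\cos\theta\,U + r\sin\theta\,V, \qquad s(\theta) = \left(\frac{f\,P_x(\theta)}{P_z(\theta)},\; \frac{f\,P_y(\theta)}{P_z(\theta)}\right).$$

As long as the circle stays entirely on one side of the eye plane $z = 0$, $s(\theta)$ traces a conic (an ellipse for a closed silhouette), so you can either sample $s(\theta)$ and fit canvas Beziers to the samples, or expand the expression into the implicit conic $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$ and recover the ellipse center, axes and rotation from those coefficients. For arcs, the same parametrization restricted to the visible range of $\theta$ gives the endpoints and intermediate control points.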
13
How can I create my own sky maps? What are the methods tools for generating realistic skies with clouds and atmospheric shading? FOSS alternatives and spherical projections get extra points.
13
Is it a useful strategy for Mobile VR titles to render faster than their simulation loop? For example, if a title had a very heavy simulation loop (say 20 ms), is it desirable to render at a greater rate, say 90 Hz? This would always present a head-pose-correct view, without being stalled on the simulation. Or does this result in discomfort for the player, and should render and sim instead stay in lockstep?
13
Drawing game objects that are bigger than tiles Problem: I have 32x32 world tiles, and a 64x64 object. I am only drawing visible tiles around the player. The object has its x and y coordinates in the tile world. I am drawing the object after I've drawn the visible area. I'm drawing the object like this:

for (int y = firstY; y < lastY; y++)
    for (int x = firstX; x < lastX; x++)
        if (object.visible(x, y)) object.draw();

The visible(x, y) method:

public boolean visible(int x, int y) {
    if (this.x == x && this.y == y) return true;
    return false;
}

Now what happens is this: Image 1. Character stands alongside the object, everything is OK. Image 2. Character triggers screen move left, half of the object gets drawn outside the map. Image 3. Character triggers screen move right, half of the object should be visible but it's not. I know that this happens because I'm checking only if the upper left corner of the object is visible. I'm stuck. What should I do to remedy this? Thank you for any ideas.
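A common fix is to make the visibility test cover the object's whole tile footprint rather than just its anchor tile, so it still draws while any part of it overlaps the visible range. A C++ sketch of that test (the same comparison translates directly to the Java code above); the tilesWide/tilesHigh fields are illustrative names, not from the original code:

    // Test the object's full tile span against the visible tile range.
    struct BigObject { int tileX, tileY, tilesWide, tilesHigh; };

    bool overlapsVisibleTiles(const BigObject& o,
                              int firstX, int lastX, int firstY, int lastY)
    {
        return o.tileX <= lastX && o.tileX + o.tilesWide - 1 >= firstX &&
               o.tileY <= lastY && o.tileY + o.tilesHigh - 1 >= firstY;
    }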
13
Tiled Map Editor Isometric View Problem I'm using the latest version of Tiled Map Editor (0.9.1) to create isometric maps. I have objects that are larger than my tile size (64 x 32), so I am breaking them up into two tiles of the correct size. Below I show that the blue blocks are made of a top and a bottom (both of which are 64 x 32). You can also see that when I am placing these blocks side by side, there is some strange rendering overlap. Shouldn't the foreground be showing if it is rendered in the correct order? In that picture you can see 4 blocks stacked side by side with the problem, 2 blocks side by side with the problem, 1 block by itself without the problem, and 1 dissected block in its two components. Anyone know what's up? Note this is the view from within the Tiled Map Editor itself, not from within my game.
13
Self occluding object and alpha blending Look at the object I've rendered with my app. It's the same screen twice: above the original, and below I've drawn (by hand :P) the shape of the mesh of one of the plant's leaves. You can clearly see where the problem is. From what I understand, this leaf is drawn before the other leaves, writing a higher value to the depth buffer but not changing the pixel color (as it's transparent in that particular place). When the other leaves are drawn, their pixels in that place in the buffer are discarded since they fail the depth test (they're farther away from the camera). Now, while I understand what the problem is, I don't know how to solve it. This whole plant is one object so I can't sort by depth. What should I do?
13
What is the state of the art of ray tracing on the GPU? I think ray trace rendering had to be done on the CPU for a long time. But since we have compute shaders in OpenGL 4.3 now, it might be possible to move the computations on the GPU and perform passable real time rendering. What approaches for GPU based ray tracing are there already? Can it compete with rasterization rendering nowadays?
13
Drawing thousands of isometric tiles per frame with a high FPS This question seems similar to other questions but no other topics I saw helped. I'm making a game in GML (GameMaker Language). Regardless of the language software, this should be a universal topic. Okay, so I currently barely exceed 60FPS on a game with no code in the tick (step? frame? iteration?) so it should most definitely not be a CPU issue. My computer is fantastic and I can run nearly all (if not, all) games at ultra and high settings. Anyway, my game is isometric and tiles are 32x16. The terrain data ds grid (like a 2D array, but can't be jagged) is never changed except when loading other maps. I need the game to be played in a max of 1080p, and higher resolutions won't be implicitly supported. So, I have a nested for loop, to loop through a calculated minx maxx and miny maxy, basically defining the rectangular area in the terrain data ds grid that I need to render. However, at the moment, in 1080p, after applying a variable called count to increment every iteration of the nested loop, I have nearly 8000 draw calls every frame, rendering every tile near the view. Remember, the tiles are both isometric and small, so there are a lot of things needed to be rendered. There are multiple grass tiles to reduce texture repeating. There is also a water tile, and I may add other terrain tiles. So basically, I need a method of rendering tons of tiles (that don't change) with a much higher (preferably at least 300) FPS. MORE INFO in GM, a ds grid is a data structure that is very similar to a 2D array, where the main differences are that it cannot be jagged (it's similar to a table) and any cell can hold any data type, so I could have strings and real numbers etc. all in the same ds grid. My ds grid contains enumeration values, such as tile.grass or tile.water. The terrain does not fit within the screen borders, and resolution directly affects view size.
13
CPU GPU memory data flow I'm a newbie graphics programmer and I've been wondering recently how does model data (meshes and materials) flow from application (CPU memory) to graphics card (GPU memory?)? Say I have a static model (e.g. a building) that I load and setup once and don't change throughout the app lifespan. Does its data get sent to GPU memory only once and sit there forever? When the model gets actually rendered each frame do GPU processors have to fetch its data each time from GPU memory? What I mean is if I had 2 models rendered multiple times each would it matter if I first rendered the first one multiple times and then the second one multiple times or if I rendered the first one just once, the second one just once and kept interleaving it like that? I could call this question "internal GPU data flow" in this sense. Obviously graphics cards have limited RAM when it can't hold all the model data necessary for rendering 1 frame I guess it keeps fetching (some of) it from CPU RAM each frame, is that correct? I know there's a lot of books and stuff about this on the internet but maybe you have some quick general guidelines as to how to manage this data flow (when to send what and how much, when and how to render)? Edit I forgot to make one distinction there's sending the data to the GPU and there's setting binding the buffers as current. Does the latter cause any data flow? Edit2 After reading Raxvan's post I'd like to distinct a few actions buffer creation with initialisation (as he said I can store the data in either CPU ram or GPU one) buffer data update (which I believe is straightforward when the data is kept in CPU ram and requires fetching from GPU to CPU ram (and then back) when it's kept in GPU ram) binding the buffer as active (is it just a way to tell the API that I want this buffer to be rendered in the next draw call and it doesn't do anything by itself?) API draw call (here I'd like to hear from you what actually happens there)
13
SetColorMod() with mini delay when the texture is rendered for the first time I have one image.png that I load using SDL IMG. I load by creating a surface. Then I create a texture using CreateTextureFromSurface() and I free the original surface. I repeat the same process but now at the end I use SetColorMod(128, 128, 0) to this new texture. This changes the color of the image as I wanted! Cool! This whole process is done way before the images are rendered to the screen. The original texture (without the color change) is rendered normally. The texture with the color change creates a mini delay when rendered for the first time. After that it keeps rendering without any delay. I m using SDL 2.0.2 (I m using ubuntu 14.04) A quick preview of the code to give an idea of what is happening load function func load() newSurface, err img.Load(fname) defer newSurface.Free() newTexture, err renderer.CreateTextureFromSurface(newSurface) return newTexture pre load func init() sprite1 load() sprite2 load() sprite2.SetColorMod(128, 128, 0) render... func render() Renderer.Clear() loop for each object obj1 sprite1 obj2 sprite2 Renderer.CopyEx(obj1, amp crop, dst, render.Angle, render.Center, render.Flip) LAG HERE Renderer.CopyEx(obj2, amp crop, dst, render.Angle, render.Center, render.Flip) Renderer.Present()
13
Dynamically Deformable Terrain In Game Engine I am looking for a game engine that is open to the public, for free or at a paid price, that allows for any reasonable way of doing deformable terrain over a network. The closest I have found to this is in UDK, where one can build a terrain in 3ds, cut it up, import different chunks into UDK, and fracture them. Unfortunately, after a few hours' work I discovered that this doesn't seem to work too well for what I am trying to do. Can anyone recommend a game engine, or even a rendering engine, that supports this? Programming other features into a rendering engine is not an issue for me.
13
Books that discuss practical rendering techniques? I'm looking for some books that discuss practical rendering topics like, say, rendering a BSP level or an MD2/MD3 model, with making a little Quake-like game as the goal, or something to that effect. Any recommendations would be much appreciated.
13
How to maintain char widths of non monospace fonts? Having a font via spritesheet (as PNG), the easiest way to render fonts from it is just showing chars as monospace, but as you can imagine, that doesn't look pretty with chars like l, i, and so on. Is there a slick way to maintain the width of every char? I already thought of storing it in an extra file, telling the width of each char in pixels. Pro: Fast rendering. Con: That's a load of work for about 100 chars. I also gave "counting" the pixel width of each char on rendering it a thought. Pro: Once that algorithm does its job there's no work with that afterwards. Con: As there is a lot of font rendering going on each frame, this is plain bullshit performance-wise. I'm open to suggestions and/or known algorithms for this problem. EDIT: TTF or some other real font is not an option, because they render way too pixelated at the small sizes needed. EDIT: Thanks to lorenzo gatti, I made simple marker pixels in the spritesheet like so. The distance between these gets counted on startup of the game and the markers are replaced with transparent pixels. So there is no heavy additional logic in render which would slow things down, and startup time is not really slower than before, thanks!
13
What problem does double or triple buffering solve in modern games? I want to check if my understanding of the reasons for using double (or triple) buffering is correct. A 60 Hz monitor refreshes the display 60 times per second. When the monitor refreshes the display, it updates it pixel by pixel and line by line, requesting the color values for the pixels from video memory. If I now run a game, that game is constantly manipulating this video memory. If the game doesn't use a buffering strategy (double buffering etc.), the following problem can happen: the monitor is refreshing its display and has already refreshed the first half of it. At the same time, the game writes new data into the video memory. The monitor then reads this newly manipulated data for the second half of the display. The result can be tearing or flickering. Is my understanding of the reasons for using a buffering strategy correct? Are there other reasons?
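That is essentially the problem (tearing) that buffering addresses: the game draws into a back buffer the monitor never reads, and the finished frame is exposed in one swap. A minimal, API-free sketch of the idea; real APIs hide this behind a swap/present call, and triple buffering simply adds one more buffer so the renderer never has to wait for the swap:

    #include <cstdint>
    #include <vector>

    struct Framebuffer { std::vector<uint32_t> pixels; };

    // Conceptual double buffer: rendering always targets back(), and present()
    // flips which buffer the display scans out, so a half-drawn frame is never shown.
    struct DoubleBuffer
    {
        Framebuffer buffers[2];
        int backIndex = 0;

        Framebuffer& back() { return buffers[backIndex]; }

        // Called once per frame, after rendering into back() has finished.
        void present()
        {
            backIndex = 1 - backIndex;   // the just-finished buffer becomes the front buffer
        }
    };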
13
Camera rotation deforms objects I'm working on a ray tracer in C with an adjustable camera for a university assignment. I'm entirely new with graphics and I'm struggling with one thing. I have a matrix as a mat4 object (4x4 matrix with each cell saved in a 1D array) that I save in a Camera object that handles transformations of every point in the scene (which is currently just two spheres). Translations seem to be working fine, but when I attempt rotations, things break. The spheres deform, and it only gets worse the more I attempt to rotate the camera. This is where I set the camera matrix mat4 Renderer setCameraMatrix(const Camera amp camera) mat4 cameraToWorld mat4() Initialise 4x4 matrix. Translation. cameraToWorld.cell 3 camera.pos.x cameraToWorld.cell 7 camera.pos.y cameraToWorld.cell 11 camera.pos.z Rotation. cameraToWorld.cell 0 camera.orientmat.cell 0 , cameraToWorld.cell 1 camera.orientmat.cell 1 , cameraToWorld.cell 2 camera.orientmat.cell 2 cameraToWorld.cell 4 camera.orientmat.cell 4 , cameraToWorld.cell 5 camera.orientmat.cell 5 , cameraToWorld.cell 6 camera.orientmat.cell 6 cameraToWorld.cell 8 camera.orientmat.cell 8 , cameraToWorld.cell 9 camera.orientmat.cell 9 , cameraToWorld.cell 10 camera.orientmat.cell 10 return cameraToWorld This matrix is then used to set the origin and direction of the camera in the world space later on in the code vec3 orig transformpoint(vec3(0), cameraToWorld) ... vec3 dir transformvector(vec3(x, y, 1), cameraToWorld) orig dir.normalize() The transformpoint and transformvector functions basically multiply a vector v by a matrix M (M.v), except the former takes into consideration translations, and the latter doesn't. Their code can be found here. The orientmat matrix is calculated from the three separate orientation angles (orient.x, orient.y, orient.z) in this line of code cam gt orientmat mat4().rotatexyz(cam gt orient.x, cam gt orient.y, cam gt orient.z) Using this function mat4 mat4 rotatexyz(const float a, const float b, const float c) mat4 M const float ca cosf(a), sa sinf(a) const float cb cosf(b), sb sinf(b) const float cc cosf(c), sc sinf(c) M.cell 0 cb cc, M.cell 1 1 cb sc, M.cell 2 sb M.cell 4 sa sb cc ca sc, M.cell 5 1 sa sb sc ca cc, M.cell 6 1 sa cb M.cell 8 1 ca sb cc sa sc, M.cell 9 ca sb sc sa cc, M.cell 10 ca cb return M (This function basically creates this transformation matrix from the three camera orientation angles.) I've tried to keep this question as compact as possible but I can provide more code if needed. I'm completely baffled about why I'm getting this kind of deformation on my spheres, and some insight would be invaluable! Thanks.
13
How can I create my own sky maps? What are the methods and tools for generating realistic skies with clouds and atmospheric shading? FOSS alternatives and spherical projections get extra points.
13
Can global Illumination via path tracing replace all other current lighting techniques? In the sense that you currently have algorithms like HDR, shadows, reflections, caustics, motion blur and so on, does complete path tracing take care of all these effects, or would you still have to implement these effects separately?
13
How can I create explosions like in Zelda Breath of the Wild? The explosions in BotW look spectacular. Here's a video of them in slow motion https www.youtube.com watch?v bZtaNoUOSDQ It looks like it's not all billboards, but also some volumetric geometry it's hard to see all the details. Ideally, an answer would be API agnostic, but I am using OpenGL with a homemade renderer, if that helps. Which techniques are combined to create such a fantastic effect?
13
How many LoD versions of a model should I have? Many games improve performance by increasing or decreasing the number of triangles/polygons that are drawn, depending on how close the camera is to a given object. Mountains viewed from far away could become literally flat, models steadily lose resolution as you walk further away from them, and so on. If I want to implement such a system, how many different versions of the same model should I have? Suppose the model is 20,000 triangles when viewed up close, but I halve that when the camera moves away, so now it is 10,000, and halve it again when the camera is even further away, making it 5,000 triangles. Would three versions be enough? Or is this really just arbitrary, and does it depend on the game itself?
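A common pattern is to author a small fixed set of LOD meshes (two to five is typical) with hand-tuned switch distances and pick one per frame. A sketch, with made-up types:

    #include <vector>

    struct Mesh { /* vertex and index data */ };

    struct LodModel {
        std::vector<Mesh>  levels;     // levels[0] = full detail ... back() = coarsest
        std::vector<float> distances;  // distances[i] = camera distance at which levels[i+1] takes over
                                       // invariant assumed: levels.size() == distances.size() + 1
    };

    const Mesh& selectLod(const LodModel& model, float distanceToCamera) {
        std::size_t i = 0;
        while (i < model.distances.size() && distanceToCamera > model.distances[i])
            ++i;                                   // walk out to the coarsest level we are allowed to use
        return model.levels[i];
    }

In practice the number of levels matters less than where the switch distances sit; they are usually tuned per asset until the pop is not noticeable, and hysteresis or cross-fading is added so a model does not flicker between two levels at the boundary.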
13
Is it feasible to stream your game when the screen is small? My game renders to a very small screen (256x256). Behind that screen, though, there is a lot of very expensive ray tracing going on, which requires a powerful GPU. Moreover, all clients see a variation of the same scene, so a single GPU could render the whole scene for everyone. Given that scenario, I had the idea of rendering the screen on the server and streaming it to the clients; that way, my game could run even on very weak machines. But I suspect the bandwidth could make it impractical: is it possible to stream a 256x256 screen at 30 fps without delay and in a lossless format?
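A quick worked number, assuming raw uncompressed 24-bit RGB (your actual format may differ): 256 x 256 pixels x 3 bytes x 30 fps is roughly 5.9 MB/s, or about 47 Mbit/s per client before any protocol overhead, which is why streamed games normally rely on lossy video codecs rather than lossless frames. In code:

    #include <cstdio>

    int main() {
        const double width = 256, height = 256, bytesPerPixel = 3;   // raw 24-bit RGB, no compression
        const double fps = 30;
        const double bytesPerSecond    = width * height * bytesPerPixel * fps;  // ~5.9 MB/s
        const double megabitsPerSecond = bytesPerSecond * 8.0 / 1e6;            // ~47 Mbit/s
        std::printf("%.1f MB/s, %.1f Mbit/s per client (uncompressed)\n",
                    bytesPerSecond / 1e6, megabitsPerSecond);
        return 0;
    }

Lossless compression (PNG-style or a lossless video codec) would reduce that, but by an amount that depends heavily on the content, so the raw figure is the safe upper bound to plan around.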
13
Rendering a selected part of the terrain with different shading I'm trying to understand how to modify the shading of a section of the terrain based on some user selection. In a game like Civ5, when a user selects a city or unit, that area is lit up more than the other parts of the terrain, and I'm trying to implement this effect. In the Civ 5 example the selected tile has a circle outline and the selected city has a transparent overlay that follows the height of the terrain nicely. My current pipeline is still primitive: I render a quad and texture it with a grass texture, then I draw a rectangular grid on top of this (a set of lines with a slightly higher height value than the background quad). Upon selection (i.e. a mouse click inside the grid area), I calculate the appropriate grid index by converting mouse coordinates to world coordinates and then to grid coordinates, and highlight the appropriate grid rectangle by changing the color of that particular rectangle. But my approach breaks the moment I get a height map (as it assumes constant height), and it looks pretty bad, hence this question. EDIT: Added my pipeline description.
13
How much slower is it to draw on "half pixels"? I've noticed that games like Diep.io use fractional coordinates for the thin stroke lines on the grid. I have tried this myself, adding 0.5 to all of the grid line positions to make the lines look thinner. I heard from a friend that drawing on half pixels causes the GPU to do more work to smooth the result out, like anti aliasing. I am really trying to make my game look nice by drawing the smoothest lines I can. How much slower is it really, and should I use it in an online competitive 2D game using Canvas?
13
What's the use and difference between Forward, Deferred and Physically Based Rendering? Maybe I'm still confused about my own understanding. Forward rendering shades an object according to the light sources in the scene as it is drawn (e.g. Phong). Deferred rendering first renders the scene to get geometric information, stores it in a buffer, and then applies lighting. PBR treats light in the scene the way it behaves in the real world (e.g. ray tracing, photon mapping). I don't see how forward and deferred rendering would give a very different-looking scene. Why do we need to store the geometric values in deferred rendering before applying lighting? Why don't we just compute lighting right away, as in forward rendering? Also, for physically based rendering, computing each light takes a lot of time and is almost impossible in real time, so why is this technique always considered the best?
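As a structural sketch only (every type and function name below is made up, not from any engine): forward rendering evaluates lighting while each object is rasterised, so the cost grows roughly with objects times lights, while deferred rendering first writes per-pixel attributes to a G-buffer and then runs a lighting pass per light over the covered pixels.

    #include <vector>

    struct Object {}; struct Light {};                      // stand-ins for real scene types

    // Stubs so the sketch compiles; a real engine provides these.
    std::vector<Light> lightsAffecting(const Object&);
    void drawWithLighting(const Object&, const std::vector<Light>&);
    void drawToGBuffer(const Object&);
    void drawLightVolume(const Light&);

    void forwardPass(const std::vector<Object>& objects) {
        // Lighting is evaluated while each object is rasterised: cost ~ objects x lights.
        for (const Object& o : objects)
            drawWithLighting(o, lightsAffecting(o));
    }

    void deferredPasses(const std::vector<Object>& objects, const std::vector<Light>& lights) {
        // 1) Geometry pass: write albedo, normal, depth, ... into the G-buffer, no lighting yet.
        for (const Object& o : objects)
            drawToGBuffer(o);
        // 2) Lighting pass: for each light, shade only the screen pixels it can reach,
        //    reading the stored attributes back; cost ~ lights x covered pixels.
        for (const Light& l : lights)
            drawLightVolume(l);
    }

The two can produce the same image; the difference is how the cost scales with many lights. PBR is orthogonal to both: it is about the shading model (energy conservation, microfacet BRDFs), and real-time engines run it inside either a forward or a deferred pipeline rather than via offline path tracing.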
13
How can I create a "cracked glass" material? I'm trying to figure out how the cracked and chipped glass effect in a Bioshock Infinite Burial at Sea Episode 2 works. My current guess is that it is essentially a transparent shader with gloss. It would have a map defining the direction of reflections from the environment, with the cracks being significantly different from the mesh's normals. It would also include some kind of model of the angular dependence of the amount of refraction transmission to reflection so that it can roughly approximate Fesnel's equations. It doesn't appear to be a full refractive model, so I'm wondering how exactly this is implemented? Am I right with what I have said above?
13
Forward Shading with multiple shadow casting lights I am currently thinking about how to organise shadowing and lighting. We use forward rendering and currently our algorithm looks like this: collect all items that are visible in the view; for each item, collect a list of lights whose attenuation radius contains the item (each item keeps a list of lights); determine the shadow-casting light by its distance to the main character (only one light can currently cast a shadow); render the scene using a constant buffer per item to shade it (each item is rendered with a constant buffer which contains the light properties; the number of lights per item is predefined, so we have a Light[16] array and numLights in the constant buffer). How would I organise multiple shadow casting lights? We do not want to go the deferred route, since we don't want to limit ourselves to G-buffers.
13
Load all sprites up front, or stream them? I'm currently in the starting phase of my first large game in Swift and SpriteKit. It will be a Super Mario like platformer game where each level is about two minutes long. Since I've never made a game like this one before, I don't have much experience about how to make games effective and RAM friendly. I'm wondering if I should load every single SpriteNode immediately as the player launches a level, or if I should render these as the player advances through the level. Loading everything instantly seems easier and more straightforward, but will this have a large effect on performance?
13
How can I render 3d pictures synched frame by frame to a keyed video? I'm developing software that aims to combine a keyed video with a 3D rendered one, synchronously frame by frame. Each frame of video should be combined with the corresponding frame of the 3D picture, using a matching camera position parameter. How can I perform this rendering and ensure it stays in sync?
13
Why not draw a custom font with lines and/or polygons? Reasons/advantages I see: more flexible procedural animation; a completely custom font; performance (no texturing or high poly)?; no assets (unless data driven); multi-resolution compared to sprite fonts/sprites. Downsides I see: effort and time to define every character; effort and time to implement the general code; future localization pain. Example https youtu.be Tz7L 92Bo M?t 2m13s Is this unreasonable, and why? Will I get a rendering performance win compared to sprite fonts on common multi core mobile devices?
13
Learning how to make imposters manually in Unity how to render an object to a texture, not what the camera sees? I am trying to learn the most performant way to make an imposter in Unity manually, not using pre-fabricated solutions. My first guess is that it should be achieved by using a RenderTexture and a second camera, and then taking "snapshots" of the object from different angles. If there are better alternatives in terms of performance (even if more difficult to implement), I would love to hear about them. In case that is the only path: I have learned how to render from a camera to a texture via script, but I still couldn't figure out how to do the following, which is what I need to do first: how do I render a specific object to a texture without the camera's background (i.e. with the rest of the texture being transparent) in a performant manner? Pointers, suggestions or at least link recommendations would be great. Thanks in advance.
13
LWJGL Text Rendering Currently in my project I am using LWJGL and the Slick2D library to render text onto the screen. Here is a more specific example:

    Font f = new Font("Times New Roman", Font.BOLD, 18);
    font = new UnicodeFont(f);
    font.getEffects().add(new ColorEffect(Color.white));
    font.addAsciiGlyphs();
    try {
        font.loadGlyphs();
    } catch (SlickException e) {
        e.printStackTrace();
    }

Then I use font.drawString to write onto the screen. This is a quick, easy way, but it has a lot of disadvantages. For example, font.loadGlyphs takes a very long time (1-3 seconds), so when I want to change a color or font type I have to wait 1-3 seconds, which means I cannot do it while rendering (i.e. I can't have different colored text on the same screen). My question is: what is a better way of drawing multicolored text onto the screen? I use Slick2D only for the text rendering, so maybe I can fully get rid of the library and draw text some other way... If you have an answer please leave a quick short example. Thanks!
13
Ray casting through terrain mesh Octree (Mesh collision) I'm currently developing a terrain editor which I will use for my game. One thing that is partially blocking my development is that I don't have collision detection implemented between the mouse ray cast and the terrain mesh. The terrain consists of triangles and can be lowered and raised; the x and y values of the mesh don't change. What I need to do is cast a ray through the mouse position and get the intersected triangle; I would then get the point on the triangle using barycentric coordinates. I'm just not quite sure how an octree comes into play here. As far as I understand it, it works like a 3D quadtree, splitting a box into smaller boxes that the ray intersects. But which box do I choose? How do I know which box is the closest one to the terrain, whether the terrain is raised or lowered? How many subdivisions would I have? I was thinking of having as many as I have tiles (basically the last level of subdivisions would correspond to a x 3 box). Right now I'm just doing collision against the z = 0 plane, and it works okay, but if the terrain is raised this can give faulty results. Two other methods I found are to iterate through the whole mesh, which of course is insanely inefficient, and to project the terrain in reverse and then do some checking with the mouse coordinates (I think it's called z-buffer checking or something). If anyone has any other ideas I'm all ears. Just note that the algorithm should be efficient (of course this depends on the implementation) and accurate. Thanks!
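For what it's worth, the usual octree answer to "which box do I choose" is: test the ray against every child box it touches, but visit them nearest-first and stop as soon as a confirmed triangle hit is closer than the next box's entry distance. A rough C++ sketch, with the ray/box and ray/triangle tests assumed rather than shown:

    #include <algorithm>
    #include <cfloat>
    #include <utility>
    #include <vector>

    struct Ray  { /* origin, direction */ };
    struct AABB { /* min, max */ };
    struct Hit  { float t = FLT_MAX; /* triangle index, barycentrics, ... */ };

    struct OctreeNode {
        AABB box;
        std::vector<OctreeNode> children;   // empty => leaf
        std::vector<int>        triangles;  // triangle indices stored in a leaf
    };

    // Assumed helpers (not shown): slab test for ray vs box, and ray vs triangle with
    // "keep only if closer than best.t" behaviour.
    bool intersectAABB(const Ray&, const AABB&, float& tEnter);
    bool intersectTriangle(const Ray&, int triIndex, Hit& best);

    bool traverse(const OctreeNode& node, const Ray& ray, Hit& best) {
        float tEnter;
        if (!intersectAABB(ray, node.box, tEnter) || tEnter > best.t) return false;
        if (node.children.empty()) {                        // leaf: test its triangles
            bool hit = false;
            for (int tri : node.triangles) hit |= intersectTriangle(ray, tri, best);
            return hit;
        }
        // Visit children nearest-first so the search can stop early.
        std::vector<std::pair<float, const OctreeNode*>> order;
        for (const OctreeNode& c : node.children) {
            float t;
            if (intersectAABB(ray, c.box, t)) order.push_back({t, &c});
        }
        std::sort(order.begin(), order.end(),
                  [](const auto& a, const auto& b) { return a.first < b.first; });
        bool hit = false;
        for (const auto& [t, child] : order) {
            if (t > best.t) break;                          // everything further is behind the hit
            hit |= traverse(*child, ray, best);
        }
        return hit;
    }

Since your terrain never moves in x and y, a flat grid (or quadtree) over the tiles with a min/max height per cell is often simpler than a full octree and gives the same early-out behaviour.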
13
What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title? In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (promise I won't accept it as the answer), but I'd like to hear others. Specifically how do you go about setting a sensible budget for your content team. Usually one of the very first questions asked in a development is what's our polygon budget? Of course, these days it's rare that vertex poly count alone is the limiting factor, instead shader complexity, fill rate, lighting complexity, all come into play. What the content team want are some hard numbers limits to work to such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that 'it depends' isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being a) misleading, or b) wrong.
13
Portal frustum culling Our levels consist of rooms. Each room has an arbitrary number of portals into other rooms. We frustum-cull rooms by testing whether all portal vertices (a portal consists of a single quad) are outside of the frustum; this is done recursively until no further room is found. This however has one flaw: if a portal is visibly occluded, it is still taken into account because it is within the frustum. Is there an easy way to cull the rooms which are inside the frustum but occluded? I had hardware occlusion culling in mind, but I am not sure whether this is overkill.
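If you do try hardware occlusion culling, one common pattern (sketched here with standard OpenGL occlusion queries; the drawPortalQuad call is a placeholder for your own portal geometry) is to rasterise each portal quad invisibly and only recurse into the room behind it if any samples passed the depth test:

    // Sketch only: OpenGL occlusion query around a portal quad (GL 3.3+).
    #include <GL/glew.h>

    GLuint issuePortalQuery() {
        GLuint q = 0;
        glGenQueries(1, &q);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // portal proxy must be invisible
        glDepthMask(GL_FALSE);                                  // ...and must not write depth
        glBeginQuery(GL_ANY_SAMPLES_PASSED, q);
        // drawPortalQuad();                                     // your portal quad goes here
        glEndQuery(GL_ANY_SAMPLES_PASSED);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        return q;
    }

    bool portalVisible(GLuint q) {
        GLuint passed = 0;
        glGetQueryObjectuiv(q, GL_QUERY_RESULT, &passed);       // blocking read; in practice fetch a frame late
        return passed != 0;                                     // zero samples passed => fully occluded room
    }

A cheaper, purely CPU-side alternative is to clip the view frustum against each visible portal as you recurse, so rooms behind a portal are only tested against the narrowed frustum rather than the full one; that handles portal-behind-wall cases without any GPU readback.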
13
What is the best PBR real time Fresnel function? I'm working on a physically based renderer, and I've come to a sort of crossroads regarding the Fresnel factor: I'm having trouble finding the best way to represent it. I know that Schlick's Fresnel approximation is based on IOR, but IORs can go up to 38.6 for a metamaterial and 4.05 for a natural element, which makes representing them in a 0-1 image difficult and confusing. I also noticed that no one really uses IOR maps. I also read a paper on Unreal's PBR integration, and I discovered that they initially wanted to use an F0 of 0.4 for non-metals. What would the F0 for metals be in this case, and isn't a static value of 0.4 worth the limitations for that tiny bit of memory? I believe the F0 tends towards the base color as the material becomes more metallic, but I'd like confirmation. Finally, there's reflectivity or specular, as used in modern PBR equations. Is there a standard for this with regard to getting an F0? It seems arbitrary: is it a float value that directly maps to F0? I am not sure there are any real reasons not to combine specular color and base color, as Unreal has done before; I can't think of a single real reason, even for more stylized implementations. What is the best PBR real-time Fresnel function?
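For reference, the relation most PBR write-ups use to turn an IOR into a normal-incidence reflectance, plus Schlick's approximation, might look like this in C++ (scalar version; metals typically use an RGB F0 taken from the base color instead, and the exact constants engines pick for their "specular" slider vary):

    #include <algorithm>
    #include <cmath>

    // F0 (normal-incidence reflectance) from index of refraction: F0 = ((n - 1) / (n + 1))^2.
    // e.g. n = 1.5 (typical dielectric) -> F0 ~= 0.04;  n = 4.05 -> F0 ~= 0.36.
    float f0FromIor(float n) {
        const float r = (n - 1.0f) / (n + 1.0f);
        return r * r;
    }

    // Schlick's approximation: F(theta) = F0 + (1 - F0) * (1 - cos(theta))^5.
    float fresnelSchlick(float f0, float cosTheta) {
        const float m = std::clamp(1.0f - cosTheta, 0.0f, 1.0f);
        return f0 + (1.0f - f0) * m * m * m * m * m;
    }

Because most dielectrics cluster around F0 of 0.02 to 0.08, authoring a 0-1 "specular" value that is remapped into that narrow range (and letting the metallic path take F0 from the base color) is the common compromise; storing raw IORs in a texture is rarely worth it.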
13
Vertices displaced when smoothing in Maya The issue below appears when smoothing in maya. I would prefer to solve the problem normally as opposed to import export obj. Below the image is a .ma file you can download yourself. File https www.dropbox.com s nhtc1uj2lvakjwf reload.ma?dl 0 Steps Taken To Isolate Problem Issue does not occur when doing a smooth mesh preview with the EXCEPTION of 1 face Deleted face. Error moves up another face. Continued to delete the face ring one face at a time and noticed the error moves up. Selected face ring, delete and rebridged. Fixed. Infered that the root cause of the problem has something to do with the way I created face ring or edge ring. The issue only occurs in this one mesh object. Deleted all items in the scene except the problem mesh and saved an isolated file. Deleted UVs. Deleted history. Optimized scene. Cleaned mesh with the following settings. Select matching polygons. Apply to selected objects, Keep Construction History. Lamina faces. Nonmanifold geometry, Normal and geometry. Merged all vertices with a tolerance of 0.01. Unlocked normals. Ensured normal directions were correct. Ensured there were no faces inside of edges. Issue ONLY occurs with 1 subdivision. Hardened all the edges. Exported and reimported OBJ. Again, I do not want the lazy solution, I want the correct one. I did this so I can infer that the root cause is not mesh related. Combined with cube Deleted history (again) Optimized scene (again) Cleaned mesh (again) Specs Maya Version Maya 2016.5 Ext 2 (64bit) OS Windows 10 Pro (Build Latest) Video Card Nvidia GeForce 980 X (Driver Latest) CPU Intel Core i7 3930K Mirror Threads http polycount.com discussion 198641 maya 2 vertices displaced when smoothing p1?new 1
13
How can I render all objects behind a plane with a specific transparency value? I have a game where there are multiple floor levels between which the player switches. The floor is not present everywhere, so you can look through it. When 'working' (playing, building stuff, etc.) on a specific floor I want to add a transparency value to everything that is NOT on the current floor. How can I accomplish that, and what is the most efficient way to do it?
13
How do you programmatically generate a sphere? Could someone please explain how it would be possible to create a sphere's vertices, indices and texture coordinates? There is a surprising lack of documentation on how to do so and it is something that I am interested in learning. I have tried the obvious: googling, looking on gamedev.net, etc. However, nothing covers the generation of spherical points, indexing them, and texturing.
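A minimal UV-sphere generator in C++ might look like the sketch below (the Vertex layout is just an example; adapt it to your own vertex format). It builds stacked rings of vertices from pole to pole, duplicates the seam column so the texture wraps cleanly, and emits two triangles per quad:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

    // 'stacks' = rings from pole to pole, 'slices' = segments around the equator.
    void buildSphere(float radius, int stacks, int slices,
                     std::vector<Vertex>& vertices, std::vector<std::uint32_t>& indices)
    {
        const float PI = 3.14159265358979f;
        for (int i = 0; i <= stacks; ++i) {
            const float v   = float(i) / stacks;       // 0..1 from top pole to bottom pole
            const float phi = v * PI;                   // polar angle
            for (int j = 0; j <= slices; ++j) {         // <=: duplicate the seam column for clean UVs
                const float u     = float(j) / slices;  // 0..1 around the sphere
                const float theta = u * 2.0f * PI;      // azimuth
                const float x = std::sin(phi) * std::cos(theta);
                const float y = std::cos(phi);
                const float z = std::sin(phi) * std::sin(theta);
                // For a unit sphere the normal equals the position direction.
                vertices.push_back({ radius * x, radius * y, radius * z,  x, y, z,  u, v });
            }
        }
        for (int i = 0; i < stacks; ++i) {
            for (int j = 0; j < slices; ++j) {
                const std::uint32_t a =  i      * (slices + 1) + j;   // current ring
                const std::uint32_t b = (i + 1) * (slices + 1) + j;   // next ring
                indices.insert(indices.end(), { a, b, a + 1 });       // two triangles per quad
                indices.insert(indices.end(), { a + 1, b, b + 1 });
            }
        }
    }

The triangles touching the poles are degenerate slivers, which is normal for a UV sphere; an icosphere (subdividing an icosahedron) avoids that at the cost of less convenient texture coordinates.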
13
How to compute the 2D equations of 3D circular arcs? I'd like to obtain these equations for the ellipses produced by the perspective projections of (3 dimensionally transformed) circles. This is useful for rendering in 2D contexts which provide curve primitives. I'm using HTML5's canvas, so I get Beziers, arcs, and quadratic curves. See here The projection of a sphere outside of the plane of projection is an ellipse because the view is a cone (silhouette of a sphere is a circle). However if I want to draw my sphere using circular wireframes, that projection cone is no longer a circular cone. So it's not your traditional conic section anymore. How to deal with this?
13
How to improve my maxscript random blood generation code? The idea was simple: draw basic blood drop meshes, shuffle them with a random generator to get final drops, then export and render them in game like other meshes. The problem is that the result doesn't look like blood drops, but like rocks. What is the best way to improve this? Redraw the basic drops? Apply some modifier to the final drop mesh? But which modifier? How do I smooth the borders in the final drop mesh? I am totally stuck with this. The improvement must be codeable in MAXScript, not a manual correction of the mesh, because I am a programmer, not a 3D modeller. The generator final result screenshot. Final blood drop. P.S. For those who would like to test the generator manually, here is the prooflink. Works nicely in the 2010 version.
13
Kerning between glyphs using SDL2_ttf I'm loading a number of characters (using SDL2_ttf) into a texture atlas to improve performance. The glyphs render correctly on their own; however, how can you find the kerning distance between two characters that will sit next to each other? As a follow-up, are there any other variables that impact font rendering?
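Assuming SDL_ttf 2.0.15 or newer (TTF_GetFontKerningSizeGlyphs was added around that version), a sketch of computing the pen advance between two atlas glyphs could look like this:

    #include <SDL_ttf.h>

    // Horizontal pen advance from glyph 'prev' to glyph 'curr' when laying out atlas glyphs.
    int advanceBetween(TTF_Font* font, Uint16 prev, Uint16 curr)
    {
        int minx, maxx, miny, maxy, advance;
        TTF_GlyphMetrics(font, prev, &minx, &maxx, &miny, &maxy, &advance);

        // Kerning adjustment for this specific pair; usually zero or a small negative value.
        const int kerning = TTF_GetFontKerningSizeGlyphs(font, prev, curr);
        return advance + kerning;
    }

Other variables that affect the final look include whether kerning is enabled at all (TTF_SetFontKerning), the hinting mode (TTF_SetFontHinting), outline and style settings, and whether you rasterise the glyphs with TTF_RenderGlyph_Blended rather than the Solid or Shaded variants.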
13
How can I keep straight alpha during rendering particles? Recently I was trying to save textures of 3D particles so that I can reuse them in 2D rendering. Now I have a problem with the alpha channel. An artist told me that my textures should have an unpremultiplied alpha channel. When I try to get the RGB values back, I get strange results: some areas become lighter and even totally white. I mainly focus on additive and blend modes, that is ADDITIVE: srcAlpha vs 1, and BLEND: srcAlpha vs 1 - srcAlpha. I tried a technique called premultiplied alpha. This technique gives you the right RGB values, which is all you need on screen. As for the alpha value, it works well with BLEND mode, but not ADDITIVE mode. As you can see from the factors, BLEND mode always keeps the value within 1, while ADDITIVE mode cannot guarantee that. I want a proper alpha, but it just gets too big or too small relative to the RGB. What can I do? Any help will be greatly appreciated. PS: If you don't understand what I am trying to do, there is a commercial software called "Particle Illusion". You can create various particles and then save the scene to a texture, where you can choose to remove the background of the particles. I have now changed the title; in software like Maya or AE, what I want is called straight alpha.
13
How do 3D rendering pipelines render light effects? How does 3D software like Blender, Maya, Unity, or Houidini etc. render the effects of lights illuminating a 3D scene?
13
Generating a Sprite to cover given area I have an area, say a x b cells. Currently I render every single cell in this matrix with a tile representing floor, wall, etc. What I'm trying to achieve is to replace this tiling system with a single tile covering the whole a x b area. I tried playing with creating a single GameObject and then adding a SpriteRenderer component, with the basic idea of sticking in a sprite of the desired size so that the image (whatever it is) gets stretched to fit it, but apparently all the suitable fields which could be involved in this are read-only (I considered the rect and the bounds attributes). In other words, I feel like I'd need to attach a SpriteRenderer with a dynamic size based on given dimensions. How could I reach this goal?
13
How to render Viva Pinata fur In the game Viva Pinata, cute virtual animals have color-changing, paper-cut-like fur. It didn't look like shell rendering to me, because there are LOTS of animals in a scene and shell rendering every one of them to draw this fur sounds like a daunting process for a game. I tried to build a 3D model with each triangle, but that didn't seem like the right solution either. I am out of tricks in my pocket.
13
Vulkan dynamic UBO weird numbers Why these numbers?? I have created a dynamic UBO and two objects, one for view and proj and the other for model. Here I update the objects:

    Render::UniformBufferObject ubo;
    Render::UniformBufferObject2 ubo2;
    ubo2.model = &model->transform.GetWorldMatrix();
    ubo.view = &EditorCamera->transform.GetWorldMatrix().Inverse();
    ubo.proj = &current_camera->GetProjectionMatrix();
    // map memory without a staging buffer
    void* data;
    auto memory = uniform_buffer->get_ubo_memory();
    Render::VulkanMemoryAllocator::get_instance()->map_buffer_allocation(memory[current_image_], &data);
    memcpy(data, &ubo, sizeof(ubo));
    memcpy(data, &ubo2, sizeof(ubo2));
    Render::VulkanMemoryAllocator::get_instance()->unmap_buffer_allocation(memory[current_image_]);

Here are the binding and draw commands:

    std::array<uint32_t, 2> dynamicOffsets = {
        i * static_cast<uint32_t>(Render::StandardUniformBuffer::required_alignment(sizeof(Maths::Matrix4) * 2)),
        i * static_cast<uint32_t>(Render::StandardUniformBuffer::required_alignment(sizeof(Maths::Matrix4)))
    };
    cmd.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, pipeline->get_pipeline_layout(),
                           0, 2, descriptor_set->get_descriptor_sets()->data(), 2, dynamicOffsets.data());
    cmd.drawIndexed(static_cast<uint32_t>(index_buffer->get_primative_indices_size()), 1, 0, 0, 0);
13
SDL2 doesn't render a window A tutorial's code from LazyFoo (wonderful place) 01 hello SDL page doesn't show a white window, but the desktop behind it, like a screenshot. Something like: The code:

    //Using SDL and standard IO
    #include <SDL2/SDL.h>
    #include <stdio.h>

    //Screen dimension constants
    const int SCREEN_WIDTH = 640;
    const int SCREEN_HEIGHT = 480;

    int main( int argc, char* args[] )
    {
        //The window we'll be rendering to
        SDL_Window* window = NULL;
        //The surface contained by the window
        SDL_Surface* screenSurface = NULL;
        //Initialize SDL
        if( SDL_Init( SDL_INIT_VIDEO ) < 0 )
        {
            printf( "SDL could not initialize! SDL Error: %s\n", SDL_GetError() );
        }
        else
        {
            //Create window
            window = SDL_CreateWindow( "SDL Tutorial", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN );
            if( window == NULL )
            {
                printf( "Window could not be created! SDL Error: %s\n", SDL_GetError() );
            }
            else
            {
                //Get window surface
                screenSurface = SDL_GetWindowSurface( window );
                //Fill the surface white
                SDL_FillRect( screenSurface, NULL, SDL_MapRGB( screenSurface->format, 0xFF, 0xFF, 0xFF ) );
                //Update the surface
                SDL_UpdateWindowSurface( window );
                //Wait two seconds
                SDL_Delay( 2000 );
            }
        }
        //Destroy window
        SDL_DestroyWindow( window );
        //Quit SDL subsystems
        SDL_Quit();
        return 0;
    }

Putting SDL_PumpEvents() or while(SDL_PollEvent(&event))... didn't work, unfortunately. The machine's characteristics: CPU i7 M620, GPU Intel Core Processor Integrated Graphics Controller, GPU driver i915, RAM 8GB, OS KDE Neon 5.15.4 64bit, kernel 4.15.0-47, fully updated.
13
SVG rendering performance I have created a jump'n'run browser game based on SVG. The world grew large (80px x 20000px, before being scaled to the viewport height) and rendering became slow. In consequence I included a range searching algorithm to exclude elements from the render tree if they are out of the visible area. To get all elements inside the viewport, I am using a range query on an AVL tree. Basically the approach was very effective and increased the performance by up to 40%, but unfortunately it also results in flickering of the graphics when the player is in motion. The initialization looks like this, for each element of type path, rect, polygon, line, circle, polyline, use, text or tspan:

    var box = bbox.call(this, element, mtr),
        data = {
            style: element.style,
            x1: box[0].x,
            x2: box[1].x,
            visible: false
        };
    this.tree.insert(box[0].x, data);
    this.tree.insert(box[1].x, data);
    element.style.display = 'none';

whereby bbox() gets the bounding box in global coordinates. Each time at rendering the following is done:

    var x1 = this.viewBox.x - 1,
        x2 = this.viewBox.x + this.viewBox.width + 1,
        result = this.tree.query(x1, x2),
        visible = this.visible,
        i = 0,
        n = result.length,
        data;
    for (; i < n; i++) {
        data = result[i];
        if (!data.visible) {
            data.style.display = 'block';
            data.visible = true;
        }
    }
    for (i = 0, n = visible.length; i < n; i++) {
        data = visible[i];
        if (data.x1 > x2 || data.x2 < x1) {
            data.style.display = 'none';
            data.visible = false;
        }
    }
    this.visible = result;

As mentioned above, the framerate increased by between 20% and 40% on each platform, but flickering and a kind of sloppy rendering began. What could I do to gain a more fluent rendering?
13
Lost transparency in SDL surfaces drawn manually I want to create SDL_Surface objects for each layer of my 2D tile-based map so that I only have to render one surface per layer rather than very many tiles. With normal tiles which do not have transparent areas this works well; however, I am not able to create an SDL_Surface with transparent pixels everywhere, onto which I can draw tiles only where something should be visible (I do NOT want the whole surface to appear with a specific opacity; I want overlay tiles where one can look through the empty parts). Currently I am creating my layers like this to draw on them with SDL_BlitSurface:

    SDL_Surface* layer = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_SRCALPHA, layerWidth, layerHeight, 32, 0, 0, 0, 0);

If you have a look at the screenshot I have provided here, you can see that the bottom layer with no transparent parts gets rendered correctly. However the overlay with the tree tile (which is transparent in the top left corner) is drawn on its own surface, which is black and not transparent as expected. The expected result (concerning the transparency) can be seen here. Can anyone explain how to handle surfaces which are actually transparent, rather than drawing all my overlay tiles separately?
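The usual cause is creating the layer without an alpha mask: with all four masks set to zero you get an opaque RGB surface, so untouched pixels come out black. A sketch of creating a layer with a real alpha channel (written SDL 1.2 style to match the question; in SDL2 the flags argument is ignored and you would pass 0):

    #include <SDL.h>

    // A per-pixel-alpha layer surface: the masks (especially Amask) must be non-zero.
    SDL_Surface* createLayer(int w, int h)
    {
        const Uint32 rmask = 0x00ff0000, gmask = 0x0000ff00, bmask = 0x000000ff, amask = 0xff000000;
        SDL_Surface* layer = SDL_CreateRGBSurface(SDL_SRCALPHA, w, h, 32,
                                                  rmask, gmask, bmask, amask);
        // Clear to fully transparent before blitting tiles onto it.
        SDL_FillRect(layer, NULL, SDL_MapRGBA(layer->format, 0, 0, 0, 0));
        return layer;
    }

When blitting RGBA tiles onto this layer you may also need to disable blending on the source for that copy (SDL_SetAlpha in SDL 1.2, or SDL_SetSurfaceBlendMode(src, SDL_BLENDMODE_NONE) in SDL2), otherwise the layer's alpha channel never receives the tile's alpha values.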
13
Where do the buffer values come from when rendering? In the textbook I am reading, it talks about fragment tests that are performed when rendering. All of these tests involve comparing the current fragment's x value (where x can be alpha, color, etc.) with a corresponding buffer value, and doing something if the test passes. The test is usually a comparison between those two values (for example ==, <, etc.). What I cannot understand is where these buffer values come from in the first place. Are they previous values? If so, what do the current values have to do with previously calculated values? I don't even know what to search on Google for this topic. Sorry if it is a total beginner question; I am currently reading about rendering for the first time.
13
In SDL2 what is the fastest most efficient way to draw pixels onto the screen? Using SDL_RenderDrawPoint or SDL_RenderCopy with a Texture? I want to add pixel splatters and particle effects to a game. For this my options were to have a bunch of pre-made animations in the form of textures, OR create a particle engine. For the particle engine approach, my options as far as I know are to create pixels and render them either using 1) SDL_RenderDrawPoint(s) (using SDL_SetRenderDrawColor), or 2) SDL_RenderCopy (using textures with SDL_SetTextureColorMod). I'm not sure which is the better approach. 2 is probably easier, as I have a whole system built around fast texture rendering (not necessarily faster than 1, but seemingly easier to implement with my current system). 1 seems ideal because it's in the SDL2 library for a reason, I'd think, but really I'm unsure which approach I should go with.
13
How to dent/deform damaged objects in Unreal Engine? I am trying to simulate damage to cars in Unreal Engine when a body collides with them, like it is done in this video: https vimeo.com 162729920. I am trying to do it in the first person shooter game template, so that I can shoot projectiles at the object, but the object just moves and doesn't dent. I tried creating a destructible mesh, but that causes the object to break into pieces. I want it to dent when I apply a force to it. Could someone point me in the right direction?
13
How to draw geometric shapes to a texture? How can I create a texture containing a geometric figure using SDL2? I saw there is a way to convert a Surface to a Texture, and I also saw that there is a way to draw directly to the screen using the DrawRect()/FillRect() functions... I'm not sure whether there is something similar for other geometry. Ultimately I want to have a texture to render. The way my engine draws is to read a game object's texture and copy it to the renderer using render.Copy(texture, rect1, rect2). This makes me wonder if there's a way for my game object to hold a texture containing a rect, circle, or triangle. I couldn't find it in the SDL wiki or tutorials online.
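SDL2 can do this directly with a render-target texture: create the texture with SDL_TEXTUREACCESS_TARGET, point the renderer at it, draw, then point the renderer back at the window (the renderer must support render targets; check SDL_RenderTargetSupported or create it with the SDL_RENDERER_TARGETTEXTURE flag). A sketch:

    #include <SDL.h>

    // Draw primitives into a texture once, then reuse it like any other texture.
    SDL_Texture* makeShapeTexture(SDL_Renderer* renderer, int w, int h)
    {
        SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                             SDL_TEXTUREACCESS_TARGET, w, h);
        SDL_SetTextureBlendMode(tex, SDL_BLENDMODE_BLEND);      // keep untouched pixels transparent

        SDL_SetRenderTarget(renderer, tex);                      // redirect drawing into the texture
        SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
        SDL_RenderClear(renderer);                               // transparent background

        SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255);
        SDL_Rect r = { 8, 8, w - 16, h - 16 };
        SDL_RenderFillRect(renderer, &r);                        // any draw call works here

        SDL_SetRenderTarget(renderer, NULL);                     // back to drawing on the window
        return tex;
    }

The returned texture can then be passed to your existing render.Copy path like any sprite texture; circles and triangles are not built into SDL2's renderer, so for those you would either plot them with SDL_RenderDrawPoint/Line inside this function or pull in a helper library such as SDL2_gfx.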
13
Is it a useful strategy for Mobile VR titles to render faster than their simulation loop? For example If a title had a very heavy simulation loop (say 20ms), is it desirable to render at a greater rate, say 90hz? This would always a present head pose correct to the view, without being stalled on simulation. Or does this result in discomfort for the player, and instead render and sim should stay in lockstep?
13
GPU particle system using vertex texture fetch in Direct3D9 I've been reading up on particle systems amongst other things, and one interesting approach uses rendertargets to store a particle's position, velocity, lifetime, etc. A pretty neat summary is given here, for example. However, I'm wondering how to use such a system effectively. How would you create new particles, upload them to the GPU, and how would you determine which particles to draw? I mean, if you've got a 512x512 rendertarget storing particles, would you really just send a vertex buffer with 262144 quads to the GPU and let the shader figure out which particles to draw and which to ignore? Right now this sounds like a good idea for bigger, continuous particle effects like weather: particle systems which don't change often, run for a while, or where particle generation is continuous. I don't really see how they'd be performant enough to power dynamic one-off effects like explosions, because uploading new particles would basically require locking the rendertarget(s) and manipulating their pixels, right?
13
Rendering an image with some transparency has removed all black pixels and makes the full texture transparent I am writing a program where, with DirectX 11, I am rendering a texture onto a flat rectangle, something along the lines of a 2D engine. Now parts of this image need to be transparent. To this effect I looked into alpha blending and it seemed to work, until I changed textures and realized that it had removed all black pixels and made the whole image transparent, not just the pixels that are blank. So how, in DirectX 11, do you preserve the transparency of the original image? I am using a ShaderResourceView that is a loaded PNG for the texture.

    // Setup Blend State
    D3D11_BLEND_DESC BlendStateDescription;
    ZeroMemory(&BlendStateDescription, sizeof(D3D11_BLEND_DESC));
    BlendStateDescription.RenderTarget[0].BlendEnable = TRUE;
    BlendStateDescription.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    BlendStateDescription.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
    BlendStateDescription.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC1_COLOR;
    BlendStateDescription.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
    BlendStateDescription.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
    BlendStateDescription.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
    BlendStateDescription.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;

    dev->CreateBlendState(&BlendStateDescription, &blend);
    float blendFactor[4] = { 0, 0, 0, 0 };
    UINT sampleMask = 0xffffffff;
    devcon->OMSetBlendState(blend, blendFactor, sampleMask);
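A sketch of the conventional straight-alpha setup, reusing the dev/devcon variables from the question. The suspicious value above is DestBlend: D3D11_BLEND_INV_SRC1_COLOR is a dual-source blending factor, whereas ordinary texture transparency normally wants D3D11_BLEND_INV_SRC_ALPHA. Whether this alone fixes it also depends on the PNG loader actually filling the texture's alpha channel.

    #include <d3d11.h>

    // Conventional straight-alpha blending: out = src.rgb * src.a + dst.rgb * (1 - src.a).
    D3D11_BLEND_DESC bd = {};
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* blendState = nullptr;
    dev->CreateBlendState(&bd, &blendState);            // dev/devcon are the device and context from the question
    float blendFactor[4] = { 0, 0, 0, 0 };
    devcon->OMSetBlendState(blendState, blendFactor, 0xffffffff);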