Why is it that when I render a basic cube, my editor's grid changes too? I have one HLSL file for DirectX 11 that only has an input layout for color and position, and another HLSL file for the simple cube that has position, normal and texture coordinates. What I've noticed is that when I render the simple cube, the grid also changes and doesn't remain pure white. They have different pixel shaders, vertex shaders, constant buffers and different input layout descriptions. Would anyone like to chip in and help get this resolved? This has been puzzling me for a day!
Why is the texture coordinate of a flat-surface reflection calculated like this in the fragment shader? The whole project is here: https://developer.apple.com/library/mac/samplecode/GLEssentials/Introduction/Intro.html. The rendering flow is like this: render the character upside down to a texture; render the character normally; render the flat reflection surface below the character using the texture from step 1. I don't understand the texture coordinate calculation in the fragment shader, specifically the block marked with ??? below:

    precision highp float;

    // Color of tint to apply (blue)
    const vec4 tintColor = vec4(0.0, 0.0, 1.0, 1.0);

    // Amount of tint to apply
    const float tintFactor = 0.2;

    varying vec3 varNormal;
    varying vec3 varEyeDir;

    uniform sampler2D diffuseTexture;

    void main (void)
    {
        // Compute reflection vector
        vec3 reflectDir = reflect(varEyeDir, varNormal);

        // Compute altitude and azimuth angles
        vec2 texcoord;
        texcoord.t = normalize(reflectDir).y;

        // ???????????????????????????????????????????????
        reflectDir.y = 0.0;                          // Why clear reflectDir.y?
        texcoord.s = normalize(reflectDir).x * 0.5;  // Why times 0.5?

        // Translate index values into proper range
        if (reflectDir.z >= 0.0)
            texcoord = (texcoord + 1.0) * 0.5;
        else
        {
            texcoord.t = (texcoord.t + 1.0) * 0.5;
            texcoord.s = (-texcoord.s) * 0.5 + 1.0;  // Why is the translation of s different from t?
        }
        // ???????????????????????????????????????????????

        // Do a lookup into the environment map.
        vec4 texColor = texture2D(diffuseTexture, texcoord);

        // Add some blue tint to the image so it looks more like a mirror or glass
        gl_FragColor = mix(texColor, tintColor, tintFactor);
    }

Thanks in advance to anyone who can demystify this for me!
How do I create differently sized texture atlases for different screen sizes? I am beginning game development and using texture atlases. I've created textures based on a resolution of 1920x1080, so I created a 1024x1024 texture atlas for storing multiple graphics. If the game is played on an 800x480 device, the atlas will be too big to load into memory; an atlas of 512x512 would be enough, and on devices with a 480x320 resolution the game might not even work due to the different texture size. How can I resize the atlas to save memory? Can I use different texture atlases for different screen sizes? I just want to know how other game devs do it.
How can I determine the extreme color values in a texture? I am looking for a way to determine the most extreme color values for all of the texels in a texture. So for a texture consisting only of black and white texels, the extreme values should be (0,0,0) and (1,1,1) in RGB. For a color gradient from red to green, I should get the values (1,0,0) and (0,1,0). Now obviously I could do this on the CPU by iterating over all the texels of the texture and keeping track of the color values found to be furthest apart, but this is probably relatively slow, so I am looking for a way to do this on the GPU using shaders. Is this possible? I am not experienced with GPGPU, so a solution in HLSL/GLSL would be preferred. Or maybe there is a fast algorithm I could use on the CPU?
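The usual GPU approach for this kind of query is a parallel reduction: repeatedly render into a half-resolution target, each output pixel combining a 2x2 block of the previous level, until a 1x1 texture holds the answer. Below is a minimal GLSL sketch of one min-reduction pass; running the same shader with max() gives the other extreme, and per-channel min/max matches the black/white and red/green examples above. The uniform names and the ping-pong framebuffer setup around it are assumptions, not code from the question.

```glsl
// One reduction pass: the output is half the source resolution, and each
// output pixel is the per-channel minimum of the 2x2 source block it covers.
precision mediump float;
uniform sampler2D srcTex;   // previous reduction level (assumed name)
uniform vec2 srcTexelSize;  // 1.0 / source resolution (assumed name)

void main() {
    // UV of the top-left texel of the 2x2 block this output pixel covers.
    vec2 base = (floor(gl_FragCoord.xy) * 2.0 + 0.5) * srcTexelSize;
    vec3 a = texture2D(srcTex, base).rgb;
    vec3 b = texture2D(srcTex, base + vec2(srcTexelSize.x, 0.0)).rgb;
    vec3 c = texture2D(srcTex, base + vec2(0.0, srcTexelSize.y)).rgb;
    vec3 d = texture2D(srcTex, base + srcTexelSize).rgb;
    // Swap min for max in a second shader (or second target) for the maximum.
    gl_FragColor = vec4(min(min(a, b), min(c, d)), 1.0);
}
```

For a 1024x1024 texture this takes ten passes and a one-pixel readback per extreme, which is far cheaper than a full CPU scan.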
How do I use multiple textures in OpenGL ES 2.0? I load two PNG images as my textures with libSOIL. I need to use one of them as the texture for the background, and the other as the texture of a rotating cube. In OpenGL ES 2.0, texturing is done in the shader, and I don't know how to apply the different textures to the different objects there. How do I do that?
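A minimal sketch of the two standard options, with assumed uniform and varying names. The common case is to keep a single sampler2D in the shader and simply bind a different texture object to unit 0 before each draw call; two samplers in one shader are only needed when both images contribute to the same pixels. The fragment shader below shows the two-sampler form, selected per draw with a uniform:

```glsl
// Two texture units in one GLES2 fragment shader. CPU-side binding would be:
//   glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, bgTex);
//   glUniform1i(glGetUniformLocation(prog, "uBackground"), 0);
//   glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, cubeTex);
//   glUniform1i(glGetUniformLocation(prog, "uCube"), 1);
precision mediump float;
uniform sampler2D uBackground; // unit 0 (assumed names)
uniform sampler2D uCube;       // unit 1
uniform float uUseCube;        // 0.0 when drawing the background, 1.0 for the cube
varying vec2 vUV;

void main() {
    vec4 bg   = texture2D(uBackground, vUV);
    vec4 cube = texture2D(uCube, vUV);
    gl_FragColor = mix(bg, cube, uUseCube);
}
```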
Texture spreads out when animating in 3ds Max. I have a 3D model of a human biped; I built the skeleton and attached it to the model with a Skin modifier. But when I move the arms, part of the trunk comes along too. Is it possible to fix this?
How do I give one scene different quality settings than the other scenes in Unity? In the Unity game I am creating there are four scenes. Three of them rely on a backdrop because they are the main menu, the death screen and the escape screen (for when you escape the place), so they need a high-resolution image. However, the other scene (the main game scene) is incredibly laggy unless the texture quality setting is "Eighth Res". How can I make the texture quality eighth res for just my game scene and normal res for the rest? This might help: my build order is main menu 0, game 1, death scene 2, escape scene 3. Thanks in advance.
What technology does Starcraft 2 use to render its maps? I've got a map that is being procedurally generated at runtime, and I'm currently investigating methods of rendering it. I've taken an interest in the look of Starcraft 2 and I'd like some advice on what methods it employs to achieve it. Secondarily, I'd like to see any tutorials, articles, or even source code examples if possible. There are a couple of main things I'd like advice on, but please feel free to suggest anything else that could help. Snappable tilesets: a typical Starcraft map seems to consist of a tileset of models that one can snap together to create the cliffs, ramps and other elevated terrain. What methods do they employ to make them look so natural? What I mean is, it's very hard to spot repetition. Terrain textures: the terrain textures are so varied and even dynamic (Zerg creep). What methods are used to do this? Thanks.
Unity terrain: texturing roads. The way I would texture a road within a given terrain is with a separate mesh and a horizontally tileable texture such as this, then make sure the texture at the edge of the road and the terrain bordering the road look similar. That would, however, always create a visible seam, such as in the image below taken from Half-Life 2: Episode 2. Now the same game seems to blend a horizontally tileable texture (the road) into a terrain made up of horizontally and vertically tileable textures (grass & dirt). How is this possible without a separate mesh/material and without any visible seams, especially since the road is twisty, meaning the texture has been rotated? I am completely baffled as to how this has been created; this is clearly beyond standard blend maps.
Why are my explosions not showing? I'm creating a space shooter using TypeScript and I don't understand why explosions won't work, meaning that nothing is drawn on the screen, and I don't understand what I'm doing wrong.

    for (var i = 0; i < projectileList.length; i++) {
        if (this.y > projectileList[i].y) {
            let projDist = this.x - projectileList[i].x;
            if (projDist < 0) projDist = projDist * (-1);
            if (projDist > 0 && projDist < 30) {
                this.x = Math.floor(Math.random() * 600) + 1;
                this.y = 20;
                projectileList[i].x = 2000;
                for (var j = 1; j < 9; j++) {
                    this.sprite = "images/explode" + j + ".png";
                    this.img.src = this.sprite;
                    this.gameContext.drawImage(this.img, 200, 200);
                }
            }
        }
    }

Basically, when there is a collision between projectiles and asteroids, I'm just trying to draw some images on the screen, but nothing is happening and the console doesn't give any errors :( Do you guys know what is wrong? Thanks for the help!
Repeating a texture in a scrolling game. The bottom region of the screen is filled by a repeated texture:

    grassTexture = new Texture(Gdx.files.internal("images/grass.png"));
    grassTexture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
    grassSprite = new Sprite(grassTexture, 0, 0, width, grassTexture.getHeight());

However, the grass should also be scrolled to the left (looping the texture from edge to edge). I have this in the render() method:

    scrollTimer += 0.0007f;
    if (scrollTimer > 1.0f)
        scrollTimer = 0.0f;
    grassSprite.setU(scrollTimer);
    grassSprite.setU2(scrollTimer + 1);
    grassSprite.draw(spriteBatch);

The scrolling part now works. However, the grass texture is not repeated anymore. It looks like this:
Merging several textures into one using the RGB channels. Would it be possible to place a texture in each RGB channel? For example: Red = wood.png, Blue = tiles.png, Green = metal.png. The advantages I could see are saving space, memory and draw calls, and the resolution might not suffer (unless I'm missing something). I've seen people pack many textures into one, but they have to be scaled to fit, which lowers resolution.
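One caveat worth stating up front: this only works if each packed map is grayscale (a mask, gloss map, height map, and the like), since each channel can hold only one 8-bit intensity; three full-color textures cannot share one RGB image. A minimal GLSL sketch of reading one packed channel back out, with illustrative names:

```glsl
// Unpack one grayscale map from a channel-packed RGB texture.
precision mediump float;
uniform sampler2D uPackedTex;  // R = wood, G = metal, B = tiles
uniform vec3 uChannelSelect;   // (1,0,0) wood, (0,1,0) metal, (0,0,1) tiles
varying vec2 vUV;

void main() {
    vec3 maps = texture2D(uPackedTex, vUV).rgb;
    // A dot product with a one-hot vector selects a single channel.
    float intensity = dot(maps, uChannelSelect);
    gl_FragColor = vec4(vec3(intensity), 1.0);
}
```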
Unreal Engine Niagara: write to a texture or data structure. In a Niagara system, is there any way to write to a texture? I want to write the position of each particle into a texture, so that the 2D grayscale texture's intensity at a pixel is the number of particles in that pixel. Or is there any data structure I can use to store the particle positions, something like a 2D or 3D array?
Is point texture filtering the same as nearest (different names for the same technique)?
Texturing spherical terrain: seemingly arbitrary distortion. I'm using this algorithm to find texture coordinates on a sphere: Wikipedia, UV Coordinates. I get the famous seam due to the wrong interpolation, but I also get odd distortions at certain points on the sphere. A picture follows where you can see about five star-shaped distortions. What could cause this?
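For reference, here is that spherical mapping computed per fragment rather than per vertex, as a sketch with assumed names. Star-shaped pinching typically appears where triangles near the poles interpolate UVs linearly across a region where the mapping changes rapidly; deriving the coordinates from the interpolated sphere position removes both that and the seam (at the cost of a derivative discontinuity at the wrap line, which can show a one-pixel artifact when mipmapping):

```glsl
// Spherical UVs derived per fragment from the interpolated sphere position.
precision highp float;
uniform sampler2D uTerrainTex;
varying vec3 vSpherePos;  // model-space position on the sphere (assumed name)

const float PI = 3.141592653589793;

void main() {
    vec3 p = normalize(vSpherePos);
    float u = 0.5 + atan(p.z, p.x) / (2.0 * PI);  // azimuth
    float v = 0.5 - asin(p.y) / PI;               // altitude
    gl_FragColor = texture2D(uTerrainTex, vec2(u, v));
}
```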
Can I make a mirror with Unity Free? If I understand correctly, render textures are a Unity Pro feature, and they are the best way to create mirrors, TV screens, and so forth. If I am wrong, please tell me. What I would like to know is whether there is another, probably less convenient, way to create a mirror with the free version of Unity. If I need to write a custom script, I would appreciate a basic outline of how to write it. Even better, if you know of a free script package that does this, please tell me.
HLSL voxel texturing. I'm currently trying to develop a voxel engine using Direct3D 9 and C++. To keep memory usage low, I'm only passing the position, the orientation, and the offset of the current voxel's texture in the texture atlas for each vertex to the vertex shader. The vertex shader then calculates the normal and passes it to the pixel shader. I found an article which covers how to texture voxels in GLSL with just their position and normal. This is the part that calculates the texture coordinates in my pixel shader (SM3):

    float2 tileUV = float2(dot(input.normal.zxy, input.pos3D),
                           dot(input.normal.yzx, input.pos3D));
    float2 texcoord = input.texOffset + tileSize * frac(tileUV);

This code works fine for faces that point in the negative z direction (normal (0,0,-1)); however, the back is flipped by 180° and the sides and top/bottom faces are flipped by 90°/270°. I am not sure if this is correctly translated from GLSL, because this behaviour would be the expected one in HLSL if I calculate it by hand. Is there anything I have overlooked, or should I aim for a different approach?

Edit: I have now managed to successfully texture the faces by replacing the previous calculation with the following:

    if (input.normal.y != 0.0f)
    {
        // handle top/bottom surfaces as front/back faces
        input.pos3D.y = input.pos3D.z;
        input.normal.z = input.normal.y;
        input.normal.y = 0.0f;
    }
    texcoord.x = input.texOffset.x + tileSize * frac(float3(1.0f, 1.0f, 1.0f) * cross(frac(input.pos3D), input.normal)).y;
    texcoord.y = input.texOffset.y + tileSize * (1.0f - frac(input.pos3D.y));

Is there any way I can simplify or optimize the equation? I should also mention that the voxels are all axis-aligned and clamped to integer coordinates.

Edit 2: This is the modified formula from zogi's answer, which works as expected:

    float3 n = abs(normal.xyz);
    float2 texcoord = float2(input.texOffset.x + tileSize * dot(n, frac(input.pos3D.zxx)),
                             input.texOffset.y + tileSize - tileSize * dot(n, frac(input.pos3D.yzy)));
Two graphical entities, smooth blending between them (e.g. asphalt and grass). Suppose a scenario contains, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture, and its model is a long triangle strip that might bifurcate at some point into other, smaller strips; suppose the meadow is covered with grass. What can be done to make the two graphical entities seem less like they were cut out of a photo and pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt and a plane covered with grass. The grass texture should also "enter" the tarmac strip a little at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, which involves a serious restriction in how the strip is modeled and its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light, which could fail terribly to achieve correct results. So, is there a better, or at least a more obvious, way that's widely used in the game-dev industry? UPDATE: This is a conceptual question; I do not require a specific shading language or framework. In real life, I use Cg with Ogre for rendering experiments. If I supply two textures and two different texture coordinate sets for the tarmac entity, I can analyze the second texture coordinate set in the fragment shader and do some blending depending on the .y component of the UV pair, but that requires the artist to hand-craft every asphalt runway in the scenario, and it looks to me like this is probably the wrong way to tackle the issue.
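The widely used approach is a splat (blend) map: an extra low-resolution mask over the terrain whose value says how much road versus grass each point gets, so the feathering band is painted or generated rather than modeled into the mesh. A minimal fragment-shader sketch of the idea, written in GLSL for illustration (it maps directly to Cg); all names are assumed:

```glsl
// Splat-map blend between two tiling detail textures.
precision mediump float;
uniform sampler2D uAsphaltTex;  // tiling asphalt detail
uniform sampler2D uGrassTex;    // tiling grass detail
uniform sampler2D uBlendMask;   // low-res mask over the terrain patch
varying vec2 vDetailUV;         // tiled UVs for the detail textures
varying vec2 vMaskUV;           // 0..1 UVs over the whole patch

void main() {
    vec4 asphalt = texture2D(uAsphaltTex, vDetailUV);
    vec4 grass   = texture2D(uGrassTex, vDetailUV);
    float t = texture2D(uBlendMask, vMaskUV).r;  // 0 = grass, 1 = asphalt
    gl_FragColor = mix(grass, asphalt, t);
}
```

Because the mask is sampled with bilinear filtering, even a very low-resolution mask yields a soft feathered transition at the road edges.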
Premultiplied alpha: yea or nay? Should I be using premultiplied alpha? Reading Tom Forsyth's article, it seems like a no-brainer: premultiplied alpha (which can also express additive blending) is superior in every sense. However, premultiplied alpha seems to have one major drawback: it's not well supported by tools. Have you used premultiplied alpha? How is the tool support? In particular, I don't think GIMP will edit premultiplied alpha images well. Are there any GIMP plugins to export premultiplied alpha? How about an external tool?
What is the difference between these options when saving an image as a PNG in Photoshop for my game? I'm creating sprites and tilemaps for 2D games. I might just use Unity. What is the impact of choosing different options when saving as a .png?
Replicating many sprites without the app slowing down and crashing. I'm working on a simple drag-and-drop puzzle game where the only goal is to stack as many objects as you can before they fall off the platform. Does creating a new sprite (with a texture as a constructor parameter) every time the player drags and drops another object make the game slower and deplete memory until it crashes, or is it constructing a new texture each time that causes the slowdown? Here is what the dry run looked like: I set it to debug mode so that the platform completely blocks the pit for the replication test. When the count reaches the hundreds to thousands, the game gets slower. When the game crashes, here is the result via Logcat:

    12-11 16:38:53.953 E/AudioTrack(25701): AudioFlinger could not create track, status: -22
    12-11 16:38:53.953 E/SoundPool(25701): Error creating AudioTrack
    12-11 16:38:53.993 D/dalvikvm(25701): GC_EXPLICIT freed 947K, 14% free 14414K/16672K, paused 2ms+4ms, total 96ms
    12-11 16:39:16.363 E/AudioTrack(25701): AudioFlinger could not create track, status: -22
    12-11 16:39:16.363 E/SoundPool(25701): Error creating AudioTrack
    12-11 16:39:16.433 D/dalvikvm(25701): GC_FOR_ALLOC freed 2120K, 15% free 14418K/16848K, paused 47ms, total 47ms

By the way, I'm using the BodyEditor library for Box2D (a physics engine) and rendering.
DirectX 11: does GenerateMips only work correctly with premultiplied alpha? The GenerateMips method on ID3D11DeviceContext allows generation of mipmaps at runtime, which is fine for fully opaque textures. However, when this method is used with transparent textures that do not have premultiplied alpha, the resulting mip levels tend to have black outlines at the transparency edges. For instance, a PNG texture loaded through WIC and processed with GenerateMips produces mipmaps with edge pixels that are way too dark. My question: is there any way to specify that this texture uses non-premultiplied alpha, so that DirectX can generate more correct mip levels?
How do I write depth to a Texture2D and then read it in the next pass of a shader in DirectX 11? I'm programming a two-pass effect in DirectX 11 (SharpDX). It's supposed to write the depth to a texture in the first pass and then use that texture to extract data in the pixel shader of the second pass. What I get is a white screen, with nothing but the interface, and I don't know why nothing is being printed. What could be the problem? I would say I should get at least something from the depth texture. This is how I'm setting up the depth texture:

    this.depthBuffer = new Texture2D(device, new Texture2DDescription()
    {
        Format = Format.R32_Typeless,
        ArraySize = 1,
        MipLevels = 1,
        Width = (int)host.ActualWidth,
        Height = (int)host.ActualHeight,
        SampleDescription = new SampleDescription(1, 0),
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.DepthStencil | BindFlags.ShaderResource,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
    });

    this.depthBufferShaderResourceView = new ShaderResourceView(this.device, this.depthBuffer, new ShaderResourceViewDescription()
    {
        Format = Format.R32_Float,
        Dimension = ShaderResourceViewDimension.Texture2D,
        Texture2D = new ShaderResourceViewDescription.Texture2DResource()
        {
            MipLevels = 1,
            MostDetailedMip = 0,
        }
    });

    var depthStencilDesc = new DepthStencilStateDescription()
    {
        DepthComparison = Comparison.LessEqual,
        DepthWriteMask = global::SharpDX.Direct3D11.DepthWriteMask.All,
        IsDepthEnabled = true,
    };

And here is how I sample the depth in the .fx file:

    int3 posTex = int3(input.p.xy, 0);
    float depthPixel = DepthTexture.Load(posTex);
    float4 color = float4(depthPixel, depthPixel, depthPixel, 1.0f);
    return color;

And here is the way I'm now setting the depth-stencil view as a render target across the two passes. In the first I try to set the depth-stencil view as a target; in the second pass I try to set the depth texture as a shader resource to read from it:

    this.device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(this.vertexBuffer, LinesVertex.SizeInBytes, 0));

    // PASS 0
    this.device.ImmediateContext.OutputMerger.SetTargets(depthBufferStencilView);
    this.device.ImmediateContext.ClearDepthStencilView(this.depthBufferStencilView, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);
    this.technique.GetPassByIndex(0).Apply(this.device.ImmediateContext);
    this.device.ImmediateContext.DrawIndexed(this.geometry.Indices.Length, 0, 0);

    // PASS 1
    this.device.ImmediateContext.OutputMerger.ResetTargets(); // unbinding the depthStencilView
    this.device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(this.vertexBuffer, LinesVertex.SizeInBytes, 0));
    this.depthStencilShaderResourceVariable = effect.GetVariableByName("DepthTexture").AsShaderResource();
    this.depthStencilShaderResourceVariable.SetResource(this.depthBufferShaderResourceView);
    this.technique.GetPassByIndex(1).Apply(this.device.ImmediateContext);
    this.device.ImmediateContext.DrawIndexed(this.geometry.Indices.Length, 0, 0);

Finally, this is how I set up the two passes in the .fx file:

    technique11 RenderMyTechnique
    {
        pass P0
        {
            SetDepthStencilState( DSSDepthLessEqual, 0 );
            SetVertexShader ( CompileShader( vs_4_0, VShader() ) );
            SetHullShader ( NULL );
            SetDomainShader ( NULL );
            SetGeometryShader ( NULL );
            SetPixelShader ( NULL );
        }
        pass P1
        {
            SetDepthStencilState( DSSDepthLessEqual, 0 );
            SetVertexShader ( CompileShader( vs_4_0, VShader() ) );
            SetHullShader ( NULL );
            SetDomainShader ( NULL );
            SetGeometryShader ( CompileShader( gs_4_0, GShader() ) );
            SetPixelShader ( CompileShader( ps_4_0, PShader() ) );
        }
    }
Which game engine can be used for pixel manipulation? Here is the gist of where I am going with this: I work for a printer company, and basically all of our input files (or output files, for virtual printers) are bitmap files. We currently have a tool built in Java over 10 years ago that works, but it chokes on our high-end 1600x1600 files. So I was thinking: why not use a game engine's built-in ability to harness the power of a GPU? I am thinking all I would have to do is create a 1600x1600 plane, feed it the image file and draw the scene. I figure this is the easy part. The hard part is, for example, reaching into the image and clicking the mouse on a particular pixel: getting RGB data, converting to CMYK, making changes, converting back to RGB and then redrawing the scene. Not only that, but also pulling the modified texture out of memory and saving it back to a bitmap. Is this even possible? I haven't used a game engine since college, and that was OpenGL 2.0. Just figured this looked like a good place to ask. Thanks.
Sprite sheet resolutions and tile maps. I am making a game using Cocos2d-x and want to support multiple mobile phone resolutions and sizes. Right now I have made my sprite sheets at the iPad Retina resolution of 2048x1536, and will use the Tiled map editor to design my game levels. My question is: on what basis do I scale my sprite sheets for different resolutions (iPhone, etc.) so that my tile map's design is not affected when used in the game?
Why is Metal rendering textures slightly darker than they should be? I already checked this question. Using MTKTextureLoader.Option.SRGB: false in the texture loader options doesn't work; all that happens is that with MTKTextureLoader.Option.SRGB: true, it's a lot darker instead of slightly darker. There must be something else. Here is an example of the unwanted behavior: I am using the mouse to hold the original packed-ice texture over the rendered one, and the original is considerably lighter, even with MTKTextureLoader.Option.SRGB: false. Here is my fragment function:

    fragment half4 TextureFragment(VertexOut VertexIn [[stage_in]],
                                   texture2d<float> Texture [[texture(0)]])
    {
        constexpr sampler ColorSampler(mip_filter::nearest, mag_filter::nearest, min_filter::nearest);
        float4 Color = Texture.sample(ColorSampler, VertexIn.TexCoord.xy / VertexIn.TexCoord.z);
        return half4(Color.r, Color.g, Color.b, 1);
    }

How can I fix the problem? How can I render the actual texture instead of a darker version? Note: I must fix the problem, not work around it; a workaround would be something like returning adjusted color values from the fragment shader. I must fix the problem at the root.
Getting the correct texture from an atlas. I'm writing an OpenGL ES (2.0) app and have a custom Quad class which draws a textured quad for my sprites. Currently, I put the textures of any sprites of the same size into a single file, or atlas; then, when drawing, I can simply specify the frame I want, and it will render the correct frame based on how many textures are in the atlas, how many total frames there are, and so on. Something like:

    mySprite.x = 0;
    mySprite.y = 0;
    mySprite.frame = 1;
    drawSprite(mySprite);

The texture atlas would look something like this (simplified for the purposes of this question; in the real project there are over 200 textures arranged in neat rows and columns, but the important thing is that there is order, and every texture is the same size, as this is necessary for the calculations). So, in the example above, I can say there are 2 columns, 1 row and a total of 2 textures. So if I specify 1 as my texture (the 2nd texture), my sprite class can work out (using the columns, rows, total textures, etc.) which texture I want and how to get it. As I said, this is simplified; the calculations are a little more involved when there are multiple rows. But it makes animations nice and simple, as I can just increment or decrement the current frame. However, I keep seeing sprite sheets where there is no order: different-sized textures seemingly thrown into an atlas, like so. I really would like to go down this route, as it means I wouldn't have to separate my sprite sheets by object size, and could therefore vastly reduce the number of separate sheets I have to use. I can't, however, work out how one would write a calculation to get the correct texture just by specifying the texture number to render. So how does this work? Do I need to work out the texel coordinates of each and every object (and indeed frame) manually (and maybe store these values in an array, associated with frame numbers), or am I missing a trick here somewhere?
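For packed atlases with mixed sprite sizes there is no closed-form calculation; the standard practice is exactly the array idea at the end of the question. A packing tool (TexturePacker, for example) writes a data file with one UV rectangle per frame, the app loads those rectangles into an array indexed by frame number, and the shader only ever sees the current rectangle. A GLSL sketch with illustrative names:

```glsl
// Sample a frame out of a packed atlas using a per-frame UV rectangle
// looked up on the CPU: uFrameRect = frames[mySprite.frame].
precision mediump float;
uniform sampler2D uAtlas;
uniform vec4 uFrameRect;  // (u0, v0, width, height) in normalized UV space
varying vec2 vUV;         // 0..1 across the quad

void main() {
    vec2 atlasUV = uFrameRect.xy + vUV * uFrameRect.zw;
    gl_FragColor = texture2D(uAtlas, atlasUV);
}
```

Animation stays simple: incrementing the frame number just selects the next rectangle from the array, regardless of where the packer placed it.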
Best way to load multiple TextureRegions from multiple bitmaps in AndEngine. I've seen a lot of examples of AndEngine usage where some bitmaps are mapped to TextureRegions and loaded into an atlas. But what if I have a lot of bitmaps, where each bitmap is a sprite sheet with several sprites for a given entity? How can I extract multiple texture regions from each bitmap and load all of that into a single atlas?
TexturePacker ignores extensions. I'm using TexturePacker in one of my games, but when packing a bunch of textures, their extension is kept in the data file. So when I want to find a texture I need to search for "image.png" instead of just "image". Is there an option to make TexturePacker ignore the extensions of my source images in the data file? Solved: in case anyone else wants this, here's the exporter I made: https://www.box.com/s/bf12q1i1yc9jr2c5yehd. Just extract it into "C:\Program Files (x86)\CodeAndWeb\TexturePacker\bin\exporters\UIToolkit No Extensions" (or something similar) and it should show up as an exporter.
Is it possible to use unnormalized texture coordinates from a GLES2 GLSL fragment shader? I want to look up a texel from my GLES2 GLSL fragment shader using unnormalized texture coordinates (0..w, 0..h instead of 0..1, 0..1). The reason is that this texture is used as a look-up table, and I get precision problems with normalized coordinates. I see that GL_TEXTURE_RECTANGLE is not supported without extensions, and neither is texelFetch(), so I have ruled out those options. Thanks!
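A common workaround is to keep integer texel coordinates in the shader and normalize only at the last moment, adding the half-texel offset so that NEAREST filtering always lands on a texel center; this removes most LUT precision trouble. A sketch with assumed uniform names (it relies on highp in the fragment shader, which GLES2 only guarantees where GL_FRAGMENT_PRECISION_HIGH is defined):

```glsl
// Emulated unnormalized LUT fetch for GLES2.
precision highp float;
uniform sampler2D uLUT;
uniform vec2 uLUTSize;  // texture dimensions, e.g. vec2(256.0, 16.0)

vec4 fetchTexel(vec2 texelCoord) {
    // texelCoord in [0..w), [0..h); +0.5 hits the texel center exactly.
    return texture2D(uLUT, (texelCoord + 0.5) / uLUTSize);
}

void main() {
    gl_FragColor = fetchTexel(vec2(10.0, 3.0));
}
```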
Renaming Texture 2D nodes in Unity's Shader Graph? I am new here, so thanks for having me. I have a query regarding renaming the Texture 2D nodes I have created in my first Shader Graph. I have set up a shader graph with a normal map, base colour, smoothness and AO. These show up as exposed parameters, which is just what I wanted, so I can drag and drop new texture maps for the other materials easily (please see the attached screenshot). I would love to be able to name each of those Texture 2D maps accordingly, so it is clear where each map from the assets folder needs to be dragged, e.g. the texture node that contains the normal map is simply called 'Normal' instead of 'Texture 2D', the base colour map is named 'Base Colour', and so on. At the moment I have 4 exposed parameters, all called Texture 2D. Not the most serious of issues, I agree, but I feel like I must be missing something: there's no sign of a 'rename' option for those nodes. Any help very much appreciated. Cheers, Nick
How do I produce a texture to represent a vector field? I would like to be able to produce a texture that essentially represents a 2D vector field (e.g. red represents the vector (1, 0)). What I would like is to draw a path, and the vector at each point on the path should be tangential to the path. Ideally, I would like the path to be thick, i.e. the points on the path get the tangent colour, and all points within a radius r perpendicular to the path at that point get a smaller version of that vector. All other points would have the zero vector. So, for example, if I had a circular path, the vectors would push a particle around the circle. How would I easily produce this texture? I am currently considering writing a tool that parses an SVG file and produces a usable bitmap, but I'm wondering if there are existing tools that do this already. NB: I tried to tag this with "vector fields", as I believe this could be a common theme (since UE4 supports using them with particles), but I don't have enough rep to create the tag.
How do I correctly export UV coordinates from Blender? Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now, and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the .obj file, because the texture is mapped differently in my game engine. In Blender I've played with the texture panel and its mapping options, and have noticed they don't appear to affect the exported .obj file's UV coordinates. So I guess my question is: is there something I need to do prior to exporting in order to bake the correct UV coordinates into the .obj file? Or something else that needs to be done to massage the texture coordinates for sampling? Or any thoughts at all on what could be going wrong? (Here is a screenshot of my diffuse texture in Blender and in the game engine. As you can see in the image, I have the same problem with a simple test cube not getting correct UVs either.) Edit: I've added my geometry-pass shader source code to show how I'm rendering and sampling the diffuse texture. I'm simply using the UV coordinates provided by the .obj file and an anisotropic sampler.

    Texture2D diffuseTexture : register(t0);
    SamplerState textureSampler : register(s0);

    cbuffer ObjectTransformBuffer : register(b0)
    {
        float4x4 worldTransformMatrix,   // Translates to world space.
                 cameraTransformMatrix;  // Translates to camera space (not including rotation).
    };

    cbuffer ScreenTransformBuffer : register(b1)
    {
        float4x4 viewProjectionMatrix;   // Rotates to camera space and then projects to screen space.
    };

    cbuffer MaterialBuffer : register(b2)
    {
        float3 materialDiffuseAlbedo;
        float materialSpecularExponent;
        float3 materialSpecularAlbedo;
        bool isTextured, isLighted;
    };

    struct vsInput
    {
        float3 positionLS : POSITION;
        float3 textureLS : TEXTURE;  // Input signature uses a uvw coordinate but w is 0 and not needed for any geo pass textures.
        float3 normalLS : NORMAL;
    };

    struct vsOutput
    {
        float4 positionCS : SV_POSITION;
        float2 textureLS : TEXTURE;
        float3 normalWS : NORMAL;
        float3 positionWS : POSITION;
    };

    struct psOutput
    {
        float4 positionWS : SV_Target0;     // Surface positions.
        float4 normalWS : SV_Target1;       // Surface normals.
        float4 diffuseAlbedo : SV_Target2;  // Surface diffuse albedo.
        float4 specularAlbedo : SV_Target3; // Surface specular albedo.
    };

    vsOutput VS(in const vsInput _in)
    {
        vsOutput _out;
        _out.positionCS = mul(float4(_in.positionLS, 1.0f), mul(cameraTransformMatrix, viewProjectionMatrix));
        _out.positionWS = mul(float4(_in.positionLS, 1.0f), worldTransformMatrix).xyz;
        _out.normalWS = mul(float4(_in.normalLS, 0.0f), worldTransformMatrix).xyz;
        _out.textureLS = _in.textureLS.xy;  // w coordinate is 0 and not needed.
        return _out;
    }

    psOutput PS(in vsOutput _in)
    {
        psOutput _out;

        // Use the alpha channel to indicate specular light intensity.
        _out.normalWS = float4(normalize(_in.normalWS), materialSpecularExponent);

        float lightEffectModifier;
        if (isLighted)
            lightEffectModifier = 1.0f;
        else
            lightEffectModifier = 0.0f;

        // Use the alpha channel to indicate whether the surface is affected by light for the light pass.
        _out.positionWS = float4(_in.positionWS, lightEffectModifier);

        _out.diffuseAlbedo = float4(materialDiffuseAlbedo, 1.0f);
        if (isTextured)
            _out.diffuseAlbedo = diffuseTexture.Sample(textureSampler, _in.textureLS.xy);

        _out.specularAlbedo = float4(materialSpecularAlbedo, 1.0f);
        return _out;
    }
Unity 5: imported models show no texture. I've got a problem with Unity materials. I'm a beginner, so I used Wings3D for creating 3D models. But there's a little problem: both objects in the picture below have the same material (a Standard material with 0 smoothness, 0 metallic and only an albedo picture), but obviously the right one has no texture, and that's the problem. First I ignored it and used a custom shader ("Custom/WorldCoord Diffuse"), which I found in a package, to fix it. Unfortunately, this shader doesn't support normal or height maps, and it strangely slows down my game extremely (my scene with only Standard shaders: 80 FPS; my scene with this strange shader: 7 FPS). I don't know how to write my own shaders, I don't know Blender, and I don't have much time to fix this.
Unity 3D: animating the offset on a quad causes stretching, but only on iOS devices (wrap mode Repeat). I added a 3D Object > Quad and gave it a graphic with the wrap mode set to Repeat. I'm animating the offset to scroll the repeating image for a parallax effect. Everything works perfectly in the editor and on Android devices. On iOS devices, the image shows fine and then stretches as the offset changes. I'm attaching two screenshots of the image import settings and quad settings, and of the issue on the iPhone. Below is my code:

    public float scrollSpeed = 0.05f;
    private Vector2 savedOffset;
    Rigidbody2D player;

    void Start () {
        GameObject player_go = GameObject.FindGameObjectWithTag("Player");
        if (player_go == null) {
            Debug.LogError("Couldn't find an object with tag 'Player'");
            return;
        }
        player = player_go.GetComponent<Rigidbody2D>();
        savedOffset = GetComponent<Renderer>().sharedMaterial.GetTextureOffset("_MainTex");
    }

    void FixedUpdate() {
        if (player.GetComponent<BirdMovement>().dead)
            return;

        float vel = Time.fixedTime * scrollSpeed;
        float y = Mathf.Repeat(vel, 1);
        Vector2 offset = new Vector2(y, savedOffset.x + 1);
        offset.x = (float)System.Math.Round(offset.x, 3); // Mathf.Round(offset.x * 10) / 10
        Debug.Log("Offset " + offset.x);
        if (offset.x >= 1)
            offset.x = 0;
        GetComponent<Renderer>().sharedMaterial.SetTextureOffset("_MainTex", offset);
    }

    void OnDisable() {
        GetComponent<Renderer>().sharedMaterial.SetTextureOffset("_MainTex", savedOffset);
    }
Total texture memory size on iOS (OpenGL ES). My team is running into an issue where the amount of texture memory allocated via glTexImage2D is high enough that it crashes the app (at about 400 MB on iPhone 5). We're taking steps to minimize the texture allocation (via compression, using fewer bits per channel, doing procedural shaders for VFX, etc.). Since the app crashed in glTexImage2D, I feel like it's running out of texture memory (as opposed to virtual memory). Is there any documentation or guideline on the recommended texture memory usage for an app (beyond "optimize your texture memory")? AFAIK, on iOS devices (and many Android devices) there's no dedicated VRAM, and our app's process is still well within the virtual memory limit. Is this somehow related to the size of physical RAM? My searches so far have only turned up info on maximum texture size and tricks for optimizing texture usage and such. Any information is appreciated.
Is it possible to completely avoid copying image data when uploading textures to the GPU on iOS? I am not a game developer, but I have been doing iOS software engineering for many years. I have a particular interest in graphics and animation, but the finer details are still a little foreign to me. Here is the scenario I am working with: I have an extremely memory-constrained iOS environment (i.e., no more than 16–20 MB of active memory usage) where I am trying to display many high-resolution sprite frames in rapid succession to show an animation. Each frame might be up to 1210x1210 pixels, and I can have up to 200 frames in a single animation. Obviously, these constraints mean I have to completely optimize my memory usage. There's no way I can even consider having more than one frame of an animation inflated and in use by my application at any given point, yet I need images to load super fast. I had considered trying to pack a bunch of frames together into a PVRTC2 or PVRTC4 image file. The quality and disk space usage of PVRTC files is superior to other formats, considering they're only using 2 or 4 bpp. I attempted to use SpriteKit to read the texture atlas I created and display the image frames. This "worked" in that it did what I wanted, but the memory usage was insane: around 600–700 MB. Apparently, SpriteKit is always going to make copies of texture data for its own use, which is unfortunate. My question is whether it's possible to completely bypass copying of image data by my application before sending it to the GPU for rendering. My understanding was that the whole point of the PVR image formats was to be an exact on-disk representation of the uncompressed image data that the GPU can use directly, without the need for copying the image into its own format. (As far as I know, PVR is the format the GPU needs to render the texture.) Indirectly, I was able to mostly achieve what I wanted using a framework I helped develop for iOS called Fast Image Cache. Basically, it creates image tables on disk that are fully uncompressed and page-aligned in advance, such that Core Animation can use the image data directly without doing a single memcpy. Fast Image Cache uses mapped memory to further avoid any copies whatsoever. In essence, data goes directly from disk to being rendered on the display without any copies being made (at least, no copies made within the context of your application, which could be held against your memory usage limits). The problem with FIC is that it is a very naive image table: it doesn't do any sort of texture packing. For my testing, I loaded each frame of the animation as a separate image inside FIC, and there was a ton of disk I/O overhead. In addition, FIC only supports 32 bpp (with or without alpha) and 16 bpp bitmap formats, which means the image tables are huge. Is there a way for me to achieve something similar to FIC using OpenGL and PVRTC images? If I need to build my own simple texture atlas support to map regions of PVRTC images to individual animation frames, I'll do that, though it'd be nice if something besides SpriteKit could do this for me. The crucial point is being able to quickly load fairly large image data to be rendered without impacting my app's memory footprint.
Example of texture mapping in pyglet. Using pyglet, I am trying to create a UV-sphere mesh, and on its surface I would like to display a Mercator-projection map. In researching examples of pyglet texture mapping, I have found: NeHe tutorial 6 in pyglet (however, this uses immediate mode and shows the entire texture on each cube face); pyglet-obj-test, which looks like it includes texture maps (however, instead of showing the chosen texture, it shows an even color); a Stack Overflow question which looks like it had an answer with a solution (however, the referenced gists showing working code have disappeared); and another implementation referenced in the pyglet Google group. Are there any simple examples of a working mesh texture map in pyglet? Edit: I found out the pyglet-obj-test code looks for lines in the .obj file starting with vt in order to create texture mapping coordinates. However, the Wavefront exporter in Blender apparently has no option to export this information. I exported the same mesh as each of the other available types: COLLADA (.dae), Stanford (.ply), Stl (.stl), 3D Studio (.3ds), Autodesk FBX (.fbx), X3D Extensible 3D (.x3d). Perhaps there's a pyglet importer for one of these formats... Edit 2: I clearly had a poor understanding of the Wavefront format. It seems vt lines are not necessary; what I was missing is full f maps. For instance, to simplify things way down, I created a 3D mesh that is actually a single square with a texture mapped onto it. Blender shows the square with a distorted texture, but the built-in Wavefront exporter produces a single line of texture mapping: f 2//1 1//1 3//1 4//1. These should apparently instead be something like f 2/0/1 1/0/1 3/0/1 4/0/1 (note there should be a number between the two slashes). Apparently Blender only does a full (proper?) export of these lines if a texture is UV-mapped. Applying a texture without a UV map renders properly in Blender, but I guess the Wavefront exporter doesn't properly handle it. Perhaps it's time for me to submit a Blender bug report...
LibGDX: rendering sprites in a tiled-map format (converting a Texture to a Sprite). I am creating a game in which, until now, I have used textures to create a 2D tiled game. I have created a matrix which converts touch coordinates into an array index, and according to that index number a specific tile gets selected. The problem is that I want to emulate the same behaviour with sprites instead of textures, but I get stuck on what to give as the coordinates for the sprite position. I want to use sprites instead of textures because sprites are easier to animate. No matter which values I give, the screen remains white, and sometimes a different tile's texture gets rendered at a different position. What am I doing wrong?

    // Function returns the position of the tile touched
    int getPos(int screenX, int screenY) {
        x = screenX / (int)xSize;
        y = screenY / (int)ySize;
        y = Squares_Y - 1 - y;
        int result = y * Squares_X + x;
        return result;
    }

Previous render function:

    public void render(float delta) {
        batch.begin();
        Tile current = initial_layout;
        if (current.topLink != null) {
            for (int Pos = 0; Pos < Squares_X; Pos++) {
                for (int ypos = 0; ypos < Squares_Y; ypos++) {
                    batch.draw(bg_texture, Pos * xSize, ypos * ySize, xSize, ySize);
                    if (current != null) {
                        batch.draw(current.getTexture(), Pos * xSize, ypos * ySize, xSize, ySize);
                        current = current.topLink;
                    }
                }
            }
        }
        if (Gdx.input.justTouched()) {
            if (win == false) {
                int x = Gdx.input.getX();
                int y = Gdx.input.getY();
                touchDown(getPos(x, y));
            } else {
                System.out.print("\nLevel Completed !! " + utils.level);
                game.dispose();
            }
        }
        batch.end();
    }

New render function:

    public void render() {
        batch.begin();
        Tile current = initial_layout;
        if (current.topLink != null) {
            for (int Pos = 0; Pos < Squares_X; Pos++) {
                for (int ypos = 0; ypos < Squares_Y; ypos++) {
                    batch.draw(bg_texture, Pos * xSize, ypos * ySize, xSize, ySize);
                    if (current != null) {
                        spriteBatch.begin();
                        texture = current.getTexture();
                        sprite = new Sprite(texture, (int)xSize, (int)ySize);
                        sprite.setPosition(Pos * xSize, ypos * ySize);
                        batch.draw(current.getTexture(), Pos * xSize, ypos * ySize, xSize, ySize);
                        sprite.draw(spriteBatch);
                        current = current.topLink;
                        spriteBatch.end();
                    }
                }
            }
        }
        if (Gdx.input.justTouched()) {
            if (win == false) {
                int x = Gdx.input.getX();
                int y = Gdx.input.getY();
                touchDown(getPos(x, y));
            } else {
                System.out.print("\nGame WON for Level " + utils.level);
                game.dispose();
            }
        }
        batch.end();
    }
How do I generate coins in a pattern? In my game you collect coins (surprise!). At the moment I generate them like this:

- Find a random position within a given rectangle (e.g. the screen size) and generate a coin.
- The possible positions for the next coin are left and right of this coin. If both are available, choose at random; if a coin exists in one position, use the other; if neither is available, skip this step.
- Do the same for the positions above and below the new coin.
- Repeat this sequence for all available coins.

This works fine, but I would like to create custom shapes with my coins, like arrows, stars, etc. This got me thinking about how I could achieve it. One way I thought of was to use a small texture where each coloured pixel represents the position of a coin, so that a picture like the one shown can be used to generate an array of coin position coordinates in any framework supporting textures. I'm pretty sure this can be done, but I was wondering if anyone has tried this, or something totally different, for generating coins or any other objects in a game. Ideally, the game would involve several different textures, choose between them at random, and combine this with the random scattering from simple algorithms such as the one I have above.
Repeat or wrap a texture (DirectX 9). Surely there's something I'm missing about wrap/repeat textures in D3D9. I've tried setting the sampler in the shader, i.e.

    sampler DiffuseSampler = sampler_state
    {
        Texture = <DiffuseMap>;
        MipFilter = NONE;
        MinFilter = POINT;
        MagFilter = POINT;
        AddressU = WRAP; // Repeat on X
        AddressV = WRAP; // Repeat on Y
    };

and I've also tried doing it in code before I render the object:

    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
    device->SetSamplerState(D3DVERTEXTEXTURESAMPLER0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);

I thought there might be something screwy with my texture coordinates too, so I hard-coded them to be (0.0, 0.0) at the top left and (2.0, 2.0) at the bottom right. I expected a 2x repeat but got a result similar to the screenshot: the texture at the top left and then what looks like clamping across the rest of the image. What mistake have I made here? (Note that the screenshot isn't the (0.0, 0.0)–(2.0, 2.0) experiment; it's the general case I get as I'm panning my image around.)
Rendering a textured quad to the screen or an FBO (OpenGL ES). I need to render a texture to the iOS device's screen or to a render-to-texture framebuffer object, but it does not show any texture: it's all black. (I am loading the texture from an image myself for testing purposes.)

    // Load texture data
    UIImage *image = [UIImage imageNamed:@"textureImage.png"];
    GLuint width = FRAME_WIDTH;
    GLuint height = FRAME_HEIGHT;

    // Create context
    void *imageData = malloc(height * width * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Prepare image
    CGContextClearRect(context, CGRectMake(0, 0, width, height));
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

Here is the simple textured-quad drawing code mentioned (bind the texture, bind the render-to-texture FBO, then draw the quad):

    const float quadPositions[] = {  1.0,  1.0, 0.0,
                                    -1.0,  1.0, 0.0,
                                    -1.0, -1.0, 0.0,
                                    -1.0, -1.0, 0.0,
                                     1.0, -1.0, 0.0,
                                     1.0,  1.0, 0.0 };
    const float quadTexcoords[] = { 1.0, 1.0,
                                    0.0, 1.0,
                                    0.0, 0.0,
                                    0.0, 0.0,
                                    1.0, 0.0,
                                    1.0, 1.0 };

    // stop using VBO
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // setup buffer offsets
    glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), quadPositions);
    glVertexAttribPointer(ATTRIB_TEXCOORD0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), quadTexcoords);

    // ensure the proper arrays are enabled
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glEnableVertexAttribArray(ATTRIB_TEXCOORD0);

    // Bind texture and render-to-texture FBO.
    glBindTexture(GL_TEXTURE_2D, GLid);
    // Actually wanted to render to the render-to-texture FBO, but for now testing directly on the default FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, textureFBO[pixelBuffernum]);

    // draw
    glDrawArrays(GL_TRIANGLES, 0, 2 * 3);

What am I doing wrong in this code? P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now.
NPOT textures and video memory usage. I read in this Q&A that an NPOT texture will take as much memory as the next-larger POT-sized texture. That means it gives no memory benefit over a POT texture with proper management (maybe it's even worse, because NPOT should be slower!). Is this true? Does an NPOT texture take, and waste, the same memory as a POT texture? I am considering NPOT textures for post-processing, so if they don't save memory, using them is meaningless to me. The answer may differ per platform; I am targeting mobile devices such as iPhones and Androids. Do NPOT textures take the same amount of memory on mobile GPUs?
Is it OK to use a 6–7k texture map in Unity? I want to unwrap my whole car onto one texture (interior, wheels, etc.), so for better quality I need to use one big 7k texture. Is that OK, or is the better choice to make smaller textures for groups of parts?
Is there any difference between storing textures and baked lighting for environment meshes? I assume that when texturing environments, one or several textures will be used, and the UVs of the environment geometry will likely overlap on these textures, so that e.g. a tiling brick texture can be used by many parts of the environment, rather than UV-unwrapping the entire thing and having several areas of the texture be identical. If my assumption is wrong, please let me know! Now, when thinking about baking lighting, clearly this can't be done the same way: lighting in general is unique to every face, so the environment must be UV-unwrapped without overlap, and lighting must be baked onto unique areas of one or several textures, to give each surface its own texture space to store its lighting. My questions are: Have I got this wrong? If so, how? Isn't baking lighting going to use a lot of texture space? Will the geometry need two UV sets, one for the colour/normal texture and one for the lighting texture? Anything else you'd like to add? :)
Painting terrain with a selection circle. I don't mean putting a circle under a unit like in an RTS; that's fairly simply done with a scenegraph(-like) structure. What I'm referring to is selection circles for things like areas of effect that are painted onto the terrain. It seems relatively simple until I think about it. Here are some images for reference: http://i.stack.imgur.com/iXWXB.jpg and http://im.tech2.in.com/images/2009/jan/img_116282_initialthreat_640x360.jpg. The first image gets the idea across, but not the full scope. Basically, the floor is painted with a decal detailing the region that will be affected by the spell. The second image gets it across much better: it's not just a flat painted circle, it wraps along the mesh. At first I figured I'd use some sort of object selection and then, in the vertex shader, draw the texture on the correct vertices based on... some criteria that I'm still working out, most likely distance and angle from the intersection point. The difficulty seems to be that these methods wrap onto objects that are completely different meshes. I can't find an image that shows it (and I don't play WoW anymore, so I can't take a screenshot), but say you have a terrain mesh and a house on top of it, and I select a point very close to the house's wall: instead of continuing on the terrain mesh, the selection circle will go as far along the terrain mesh as it can and then wrap up onto the wall of the house. It also ignores certain objects (players, etc.), but that's pretty easy to handle with correct collision tags/callbacks. It's not so much that I can't think of a way to do this, but all the ways I can think of would probably lag your game out or turn your processor into slag. Somehow I have to:

- Draw a texture
- With no breaks
- Wrapping onto multiple objects according to the terrain
- Centered at a given point
- Completely dynamically (so no pre-baking it into a bunch of dummy textures)
- Quickly enough to be computed every frame

Is there a standard method for this? Am I just overthinking it? My best guess is that, rather than doing one raycast and selecting that point as the "center", I do a bunch of casts, and in each case select the nearest vertex and bind the appropriate UV coordinate to that vertex. But it seems messy and potentially performance-heavy to select that many vertices that way on the CPU, frame by frame, especially since raycasts in general are expensive, even with a good physics engine. It also doesn't seem particularly portable to anything other than a selection circle. A similar case is drawing the movement arrow over the terrain in Total War games: a thick arrow is drawn between the unit and its destination (see here: http://webguyunlimited.com/pixelperfectgaming/wp-content/gallery/total-war-shogun-2-rise-of-the-samurai/total-war-rots-screenshot-014.jpg). This would seem to be a very similar technique to the AoE selection circle, but the raycast method I mentioned wouldn't really work for it. There are, of course, numerous other examples too (such as trajectory markers).
How do I control texture appearance based on gameplay events? In an FPS I am developing, I procedurally alter the appearance of textures based on gameplay events, and I would like to know if the way I have implemented this is a good idea. Right now, on each frame, I grab a pointer to the uncompressed image and write to it using memset. It's pretty straightforward to have gameplay events affect the texture's appearance, because all I have to do is tie whatever attribute I like from any game object into the function writing the image data. For example, I can get the player's x,y position and draw an X directly onto a map texture as the player moves around. Is there a better way to implement this functionality in general? My first thought would be to use shaders instead, but I haven't been able to find anyone doing anything like this, and I know almost nothing about shaders. I feel like I could be missing something obvious. I am trying to keep this question as engine-independent as possible, but if relevant, I am using Source Engine procedural materials.
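For overlays that can be recomputed every frame, the shader route is straightforward: pass the game state in as uniforms and composite the marker over the base texture at sample time, leaving the texture itself untouched. A minimal GLSL sketch with illustrative names (a persistent effect such as a trail would instead need a render target that is drawn into and accumulated across frames):

```glsl
// Composite a marker over a map texture from game state, no CPU texel writes.
precision mediump float;
uniform sampler2D uMapTex;
uniform vec2 uPlayerPos;     // player position mapped into the map's 0..1 UV space
uniform float uMarkerRadius; // marker size in UV units, e.g. 0.01
varying vec2 vUV;

void main() {
    vec4 map = texture2D(uMapTex, vUV);
    float d = distance(vUV, uPlayerPos);
    // 1 inside the marker disc, fading to 0 at its edge.
    float marker = 1.0 - smoothstep(uMarkerRadius * 0.8, uMarkerRadius, d);
    gl_FragColor = mix(map, vec4(1.0, 0.0, 0.0, 1.0), marker);
}
```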
LibGDX: stretching 2D graphics. I have a problem with stretching sprites in LibGDX. I have one PNG file for the sky in the game (1 x 1000 px) and I want to render it as a 1000 x 1000 px sprite: sprite.setSize(pixelWidth, pixelHeight). I want to stretch this 1 px wide sky image, which has a slight gradient from top to bottom, as you can see, to 1000 px in width. But I get this strange result: there is something like a horizontal gradient, even with banding lines, though there's no apparent reason why that's happening, since I'm scaling a 1 px wide image! The strange thing is that everything looks fine on other PCs and mobile phones, but I don't think the problem is with my PC!? I don't know what to try. Any ideas?
Difference between reflection mapping and ray tracing. Both methods involve casting a ray onto the reflecting object, where it is reflected. In reflection mapping, the intersection of this reflected ray with the environment map (texture) yields the result. Isn't it the same for ray tracing, though? Or is ray tracing classified as ray tracing only if it works with the real 3D geometry in the scene (not pre-computed images)?
Resolution independence: resize on the fly or ship all sizes? My game relies heavily on textures of various sizes, with some being full-screen, and it is targeted at multiple resolutions. I found that resizing textures (downsizing) works quite well for this game's art style (it's not pixel art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely overflow off-screen; this means that aspect ratio is not an issue. So with no aspect-ratio issues, I figured that I would simply ask my artist to create assets in very high resolution and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all the assets in the final product (increasing the size of the download the user has to deal with), then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target, draw all the downsized assets to it, and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?
34 | What are Textures in games development? At any time, a game will likely have many objects rendering textures. So what is a texture? An image, for example?
34 | Capturing small details in textures without making a huge map Ex. Nuts and bolts on a cabinet. Said items are tiny relative to the cabinet, and on a texture map looks like an indistinguishable dot. Is it worth it to separate these tiny details into a mesh of their own? Obvious cons are increasing poly count and draw calls, disregarding how much of an increase that would be. Is there a generally accepted way of going about this? |
34 | OpenGL ES 2.0 Sprite Sheet Animation I've found a bunch of tutorials on how to make this work in OpenGL 1 & 1.1, but I can't find it for 2.0. I would work it out by loading the texture and using a matrix in the vertex shader to move through the sprite sheet. I'm looking for the most efficient way to do it. I've read that when you do the thing I'm proposing you are constantly changing the VBOs, and that that is not good. Edit: Been doing some research myself. Came upon these two: Updating Texture and, referring to the one before, PBOs. I can't use PBOs since I'm using the ES version of OpenGL, so I suppose the best way is to make FBOs. But what I still don't get is whether I should create a sprite atlas/batch and make an FBO/load a texture for each frame, or if I should load every frame into the buffer and change just the texture coordinates.
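To make the idea concrete, here's a minimal sketch of what I mean by moving through the sheet in the vertex shader — only a uniform changes per frame, so the VBO stays untouched (u_frame/u_frameCount are my own names, and I'm assuming frames laid out in one horizontal row):

    // Vertex shader: shift the quad's base UVs to the current frame.
    // u_frame / u_frameCount are hypothetical uniforms set once per draw.
    attribute vec4 a_position;
    attribute vec2 a_texCoord;   // UVs for frame 0, in [0, 1/frameCount] x [0, 1]
    uniform float u_frame;       // current frame index
    uniform float u_frameCount;  // number of frames in the sheet
    varying vec2 v_texCoord;

    void main() {
        gl_Position = a_position;
        v_texCoord = a_texCoord + vec2(u_frame / u_frameCount, 0.0);
    }

If that's sound, then updating one float uniform per sprite per frame should be far cheaper than re-uploading vertex data — is it?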
34 | Why is my texture displaying incorrect colours using DX11? I am trying to load my data from a binary file (PPM) and create a texture using this data. It is important that I learn to do it this way, as I am eventually going to be packing all of my textures into a single binary file and then index them, so creating the texture with pure binary data is something that I need to be able to do. It seems that the texture is drawing correctly, but the colours are incorrect. I saved my image as .ppm just for this test application. Here is the code to load my data:

    ppm ppm;
    ppm.read(std::string("textureppm.ppm"));

    // just to ensure the data is correct
    uint32_t val = ppm.pixels[0];
    unsigned char r = (val & 0xFF000000) >> 24;
    unsigned char g = (val & 0x00FF0000) >> 16;
    unsigned char b = (val & 0x0000FF00) >> 8;
    unsigned char a = (val & 0x000000FF);

    ID3D11ShaderResourceView* texSRV = nullptr;

    D3D11_SUBRESOURCE_DATA initData = { &ppm.pixels, ppm.width * sizeof(uint32_t), 0 };

    D3D11_TEXTURE2D_DESC desc;
    desc.Width = ppm.width;
    desc.Height = ppm.height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* tex;
    HRESULT hr = getDevice()->CreateTexture2D(&desc, &initData, &tex);
    if (SUCCEEDED(hr))
    {
        D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc;
        SRVDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        SRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        SRVDesc.Texture2D.MipLevels = 1;
        hr = getDevice()->CreateShaderResourceView(tex, &SRVDesc, &texSRV);
        if (FAILED(hr))
            throw 0;
        else
            setTexture(texSRV);
    }

I have packed each byte into a uint32_t as it seems that is the format that is required: DXGI_FORMAT_R8G8B8A8_UNORM. Here is the packing:

    uint32_t ppm::CreateRGBA(unsigned char r, unsigned char g, unsigned char b, unsigned char a)
    {
        uint32_t value = 0;
        int r2 = (r & 0xff) << 24;
        int g2 = (g & 0xff) << 16;
        int b2 = (b & 0xff) << 8;
        int a2 = (a & 0xff);
        value = r2 | g2 | b2 | a2;
        return value;
    }

This code produces the following texture: When the original texture is: Does anyone know what I am doing wrong?
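One thing I'm now suspecting (an assumption, not something I've confirmed): DXGI_FORMAT_R8G8B8A8_UNORM describes the byte order in memory, so on a little-endian machine R belongs in the lowest byte of the uint32_t, not the highest. A packing consistent with that reading would be:

    // DXGI_FORMAT_R8G8B8A8_UNORM reads bytes in memory as R,G,B,A.
    // On little-endian x86 that means R goes in the *lowest* byte.
    uint32_t CreateRGBA(unsigned char r, unsigned char g,
                        unsigned char b, unsigned char a)
    {
        return (uint32_t)r
             | ((uint32_t)g << 8)
             | ((uint32_t)b << 16)
             | ((uint32_t)a << 24);
    }

With my current packing, the bytes land in memory as A,B,G,R, which would explain swapped channels — can anyone confirm?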
34 | How to expose a child node's texture from the parent in Godot I built a simple scene that I instantiate at will. Simply a KinematicBody2D as root, containing a Sprite and a CollisionShape2D. There is a script attached to the root node that basically exposes a few properties to describe an orbit and makes it move in orbit around its parent. The point being to make a very basic solar system simulation, or anything that requires a node to move around another in a circular fashion. The movement works fine and I've been playing around my scene tree, instancing a bunch and watching them move around as moons of planets, etc. But the texture is all the same. If I make children editable, I can go change the texture of the sprite, but I'd like to expose that property from the root node so I can access it directly from the editor. I tried adding an exported property with a setter and a getter to assign and retrieve the texture, but it seems to crash Godot completely with this code:

    export(Texture) onready var texture setget texture_set, texture_get

    func texture_set(newtexture):
        $Sprite.texture = newtexture

    func texture_get():
        return $Sprite.texture

Is there a solution for that? Or simply another way?
34 | Texture2DArray in DirectX 11 with different formats I'm trying to create a Texture2DArray from multiple images, each of which has a different format, and I got the following error from the DirectX 11 debug layer: D3D11 ERROR: ID3D11DeviceContext::CopySubresourceRegion: Cannot invoke CopySubresourceRegion when the Formats of each Resource are not the same or at least castable to each other, unless one format is compressed. I want to ask if there is any way of creating a Texture2D array in DirectX 11 with different formats?
34 | Making a radial gradient visual range I am currently working on a project in p5.js. Let's say I've got a texture, and on top of it a radial gradient that overlaps the texture. I want to turn this into a cool 2D vision-range effect. I believe you can imagine this situation. So how should I start on this? Should I use another framework to do this?
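To show what I have in mind, here's a minimal p5.js sketch using the underlying canvas context's radial gradient to darken everything outside the vision range (groundTexture, plus the viewX/viewY/range parameters, are my own placeholder names):

    // p5.js sketch: darken everything outside a circular vision range.
    function drawVision(viewX, viewY, range) {
      const ctx = drawingContext; // p5's underlying canvas 2D context
      const g = ctx.createRadialGradient(viewX, viewY, 0, viewX, viewY, range);
      g.addColorStop(0.0, 'rgba(0,0,0,0)');   // fully visible at the center
      g.addColorStop(0.7, 'rgba(0,0,0,0)');   // clear core
      g.addColorStop(1.0, 'rgba(0,0,0,0.9)'); // fades into darkness
      ctx.fillStyle = g;
      ctx.fillRect(0, 0, width, height);      // gradient extends past its radius
    }

    function draw() {
      image(groundTexture, 0, 0);             // the texture underneath
      drawVision(mouseX, mouseY, 150);        // vision follows the mouse
    }

Since p5's drawingContext exposes the raw canvas API, maybe no extra framework is needed?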
34 | How can I handle shadowing of a planet's rings by the planet itself? I assume the most straightforward way to draw planetary rings (such as those around Saturn) is to use a texture that is transparent everywhere except for the rings, and then place the planet in the middle of the texture and that's it. The problem with this approach is that I don't know how to shadow the area of the rings texture that is behind the planet (which isn't lit by the sun). At the moment I have a rings texture that is permanently lit and a planet that darkens in the areas where it's not lit by the sun. The planet revolves around the sun, so the lit shadowed areas are constantly changing. Any suggestions on how to apply shadow on the rings? |
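In case it helps frame an answer, this is the analytic test I've been sketching for the ring fragment shader — treat the planet's shadow as a cylinder extending away from the sun and darken ring texels inside it (all names are mine; it assumes a directional sun and the planet at the origin):

    uniform vec3 u_sunDir;         // unit vector from the planet toward the sun
    uniform float u_planetRadius;  // planet radius in world units
    varying vec3 v_ringPos;        // ring fragment position, planet at origin

    float ringLight() {
        float along = dot(v_ringPos, u_sunDir);
        if (along >= 0.0) return 1.0;          // sun side of the planet: lit
        // Distance of the fragment from the sun-planet axis.
        vec3 perp = v_ringPos - along * u_sunDir;
        return length(perp) < u_planetRadius ? 0.0 : 1.0; // inside the cylinder: shadowed
    }

Multiplying the ring texel's color by this factor would darken exactly the arc behind the planet — is this the usual approach, or is there something cheaper/softer?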
34 | Does a detailed texture take longer to render? Let's say I want to render a square whose texture is "square.png". Is it easier for the computer to render it if the texture is just a plain color? And what about if it is a very noisy texture with completely random colors here and there? Or what if that texture is noisy in the sense that every pixel in it is different from its neighbors, but only by a tiny bit?
34 | How does Megatexture work? I've been thinking about developing a small engine, not only to develop small experimental games, but also to serve as a base to test various rendering techniques and things like that. Right now I've been thinking a lot about how to handle textures, and I stumbled on megatexture, but this is something that is a bit puzzling. There is a lot of talk of it being better than the traditional approach of having a bunch of textures around and loading them as needed, but how does megatexture avoid this? I've read around that they use streaming and you can just stream bits and pieces of it, as opposed to loading each texture individually, but how does that offer better performance, and isn't that just another form of tiling? How do we sample such a texture in a shader? Do we stream part of it into memory and then work on it? I've seen the latest videos of Rage and the textures do look great, but is that just the result of great artists, or does the tech come into play? To sum up, how does it work, why is it great, and how could I do something similar?
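From what I've pieced together so far (this is my assumption of the scheme, not a confirmed description of id's implementation), sampling goes through a small indirection texture that redirects into a physical cache of streamed-in pages:

    uniform sampler2D u_pageTable;     // tiny: one texel per virtual page
    uniform sampler2D u_physicalCache; // atlas the streamer fills with pages
    uniform vec2 u_pageCount;          // number of virtual pages per axis

    vec4 sampleVirtual(vec2 virtUV) {
        // Which physical slot currently holds this virtual page?
        vec4 page = texture2D(u_pageTable, virtUV); // xy: slot origin, z: slot scale
        vec2 inPage = fract(virtUV * u_pageCount);  // position inside the page
        return texture2D(u_physicalCache, page.xy + inPage * page.z);
    }

If that's right, the streamer only ever updates the physical cache and the little page table as the camera moves — is that where the win over loading whole textures comes from?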
34 | Apply a texture on a Box jMonkeyEngine How can I wrap a texture (e.g. a 4x4 pixel one) on a box (1x1x1) in such a way that the texture is repeated and not enlarged to cover the whole surface? Geometry box = new Geometry("Box", new Box(1f, 1f, 1f));
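Here's the jME3-style sketch I've pieced together from the docs — set the texture's wrap mode to Repeat and scale the mesh's texture coordinates above 1 (the asset path and scale values are my own guesses):

    // Inside simpleInitApp() of a SimpleApplication:
    Geometry box = new Geometry("Box", new Box(1f, 1f, 1f));

    Texture tex = assetManager.loadTexture("Textures/pattern4x4.png");
    tex.setWrap(Texture.WrapMode.Repeat);   // tile instead of stretching

    Material mat = new Material(assetManager,
            "Common/MatDefs/Misc/Unshaded.j3md");
    mat.setTexture("ColorMap", tex);
    box.setMaterial(mat);

    // Scale the UVs so the tiny texture repeats across each face.
    box.getMesh().scaleTextureCoordinates(new Vector2f(8f, 8f));
    rootNode.attachChild(box);

Is that the right pair of calls, or is there a cleaner way?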
34 | Why don't we use the whole color depth for normal maps? All normal maps I've seen are pinkish or bluish. It seems like only a small color range is used. Why do game developers give away so much precision? |
34 | does cocos2d cache texture automatically I know that if we want textures cached we can use the shared texture manager to cache them, but since this is on a mobile platform, why doesn't cocos2d-x just do this for all texture loads? When creating sprites with images, are those textures cached as well?
34 | Why is Metal rendering textures SLIGHTLY darker than they should be? I already checked this question. Using MTKTextureLoader.Option.SRGB: false in the texture loader options doesn't work. All that happens is that when I do MTKTextureLoader.Option.SRGB: true, it's a lot darker instead of slightly darker. There must be something else. Here is an example of this unwanted behavior. As you can see, I am using the mouse to hold the original packed ice texture over the rendered one. It's considerably lighter, even with MTKTextureLoader.Option.SRGB: false. Here is my fragment function:

    fragment half4 TextureFragment(VertexOut VertexIn [[stage_in]],
                                   texture2d<float> Texture [[texture(0)]])
    {
        constexpr sampler ColorSampler(mip_filter::nearest,
                                       mag_filter::nearest,
                                       min_filter::nearest);
        float4 Color = Texture.sample(ColorSampler, VertexIn.TexCoord.xy / VertexIn.TexCoord.z);
        return half4(Color.r, Color.g, Color.b, 1);
    }

How can I fix the problem? How can I render the actual texture instead of a darker version? Note: I must fix the problem, not work around it. A workaround would be something like brightening the color before returning it from the fragment shader. I must fix the problem from the root.
34 | GLSL Noise via texture I am trying to access a texture in a fragment shader to overlay this texture over a certain area.

    varying vec4 v_color;
    varying vec2 v_texCoord0;
    uniform sampler2D u_sampler2D;
    uniform vec4 u_oldcolor;
    uniform vec4 u_newcolor;
    uniform vec3 u_noise;

    void main()
    {
        vec4 color = texture2D(u_sampler2D, v_texCoord0);
        float threshold = 0.005;
        if (color.r < (u_oldcolor.r + threshold) && color.g < (u_oldcolor.g + threshold) &&
            color.b < (u_oldcolor.b + threshold) && color.r > (u_oldcolor.r - threshold) &&
            color.g > (u_oldcolor.g - threshold) && color.b > (u_oldcolor.b - threshold))
        {
            color.rgb = u_newcolor.rgb + vec3(v_texCoord0, 0.1);
        }
        gl_FragColor = color;
    }

For every pixel, the shader checks for a certain color and replaces it with a new color plus v_texCoord0. Now I want to bring in a third component — a noise texture — to make it look like this. I searched the web for a solution but I could not find anything helpful. My questions: 1. Is this even possible to accomplish via a shader? 2. How do I access the texture? I hope that my questions are clear and proper for this forum :)
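Regarding question 2, this is the pattern I've gathered for a second texture (my assumption — u_noiseTex and the unit number are placeholders):

    uniform sampler2D u_noiseTex;   // bound to texture unit 1 on the CPU side

    // ...inside main(), after the color-match branch:
    float n = texture2D(u_noiseTex, v_texCoord0).r;  // 0..1 noise value
    color.rgb = mix(color.rgb, u_newcolor.rgb, n);   // noisy blend, not a flat fill

On the CPU side I'd expect something like glActiveTexture(GL_TEXTURE1), glBindTexture(GL_TEXTURE_2D, noiseTex) and glUniform1i(location_of_u_noiseTex, 1) — again, names are mine. Is that right?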
34 | Should we avoid a padding of transparent pixels around an image when creating Assets to be used in 3D Space? Assuming we are creating assets to be used on iOS in order to create textures that are then mapped to the surface of quads in 3D space, are the recommendation below and its reasoning appropriate? Try to avoid unnecessary transparent pixels whilst ensuring a power-of-2 texture size. Since the texture will occupy the same amount of RAM whether it has its content spread across its entire surface or not, try to use as much of the available surface for the content as possible. This will improve visual quality when downsizing the texture (e.g. to generate mipmaps). Example (for illustrative purposes, transparency is represented by a gray color): e.g. a padding of transparent pixels around an image could be considered unnecessary if the texture will be resized anyway.
34 | Repeating UVs won't convert to texture in Maya I have designed a material in Maya which I want to convert to a JPG texture; however, I have repeating UVs on the material. When I convert the material to a JPG, the repeating UVs don't convert across. Is there any way I can get them to?
34 | One single texture atlas? When I want to use texture atlasing, should I try to place all of my sprites into one single texture atlas? Or are there times when I should have multiple smaller texture atlases? |
34 | Texture prefetching in GLSL I have a fragment shader which needs lots of semi random access to 32x32 texture patch. Fortunately, the patch is constant for each poly, so there should be no issue storing the whole thing to the texture cache. So my question is, is there any way in GLSL to prefetch texture patches? Normally, I would be fine letting the FS load the texture naturally, but with the nondeterministic access pattern, I'm afraid that could be producing a ton of unnecessary thrashing while it sorts itself out. |
34 | How to shade a texture two different colors? To give an example of what I'm asking about, I'll use Saints Row 3 since I've been playing that lately. In that game you can customize your looks and your car's appearance a lot. Your coat can have a primary color and a trim color. Your car can have a primary color and a stripe color, etc. Is there just a single coat texture that is being shaded two different colors somehow or are they overlaying a transparent second texture for the trim stripes that gets shaded differently? If it's just one texture I'd like to know how it's done. If it's two different textures it seems like it's a waste of space. The second texture would be the same size as the first one but mostly transparent if you just wanted to lay it on top of the first one. Or are they just carefully positioning a second, smaller texture so that it aligns properly with the first one? |
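To make the single-texture idea concrete, here's the kind of mask-based pixel shader I imagine — the channel layout (luminance in R, a primary-versus-trim mask in G) is purely my assumption:

    Texture2D    tex  : register(t0);
    SamplerState samp : register(s0);
    float3 primaryColor;   // e.g. the coat color the player picked
    float3 trimColor;      // e.g. the trim/stripe color

    float4 main(float2 uv : TEXCOORD0) : SV_Target
    {
        float4 t = tex.Sample(samp, uv);
        // The mask channel decides which tint applies to this texel.
        float3 tint = lerp(primaryColor, trimColor, t.g);
        return float4(t.r * tint, 1.0);   // tint the stored luminance
    }

If that's roughly how it works, the "second texture" is just one extra channel of the first, so nothing is wasted — but I'd like confirmation of what games actually do.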
34 | What is the difference between a Sprite and a Texture? I have an assignment due for university, and my task is to discuss textures used within video games and how they have evolved. My main question is: what is the fundamental difference between using a sprite and using other texture methods? Having done a little research myself, I'm inclined to think that sprites store images in a single file and can be used for animations etc., and were commonly used with older video games, which generally used sprites for all of their game visuals. Now, with modern games, I believe sprites tend to be used less as technology advances and other texture techniques become available, such as bump mapping. Although sprites are still used today to accommodate features such as a health bar or long-distance textures. But what are the main advantages of using textures over sprites?
34 | How to update a dynamic texture I'm using Ogre 1.10. I have a dynamic texture assigned to a material, and I need to update its buffer with a new image every few seconds. How can I transfer pixel data from an image to my dynamic texture? I've created a manual texture like:

    // Create the texture
    Ogre::TexturePtr texture = Ogre::TextureManager::getSingleton().createManual(
        "dyn_texture",                                           // name
        Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        Ogre::TEX_TYPE_2D,                                       // type
        256, 256,                                                // width & height
        0,                                                       // number of mipmaps
        Ogre::PF_BYTE_BGRA,                                      // pixel format
        Ogre::TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);

And in my update function I have the image that I want to transfer:

    Ogre::Image img;
    img.load(basename.toStdString(), "resources");
    // Copy pixels from img to texture??

I've already tried doing:

    Ogre::HardwarePixelBufferSharedPtr pixelBuffer = texture->getBuffer();
    pixelBuffer->blitFromMemory(img.getPixelBox());

It works, but it's quite slow; the GUI freezes when updating like that.
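For reference, this is the lock-and-copy path I'm considering instead of blitFromMemory — mapping the buffer with HBL_DISCARD should avoid stalling on the GPU (a sketch that assumes the loaded image is already 256x256 BGRA; Ogre::PixelUtil::bulkPixelConversion could replace the manual loop if a format conversion is needed):

    #include <cstring>  // std::memcpy

    Ogre::HardwarePixelBufferSharedPtr buf = texture->getBuffer();
    buf->lock(Ogre::HardwareBuffer::HBL_DISCARD);        // discard old contents
    const Ogre::PixelBox& dst = buf->getCurrentLock();
    const Ogre::PixelBox  src = img.getPixelBox();

    Ogre::uint8*       d = static_cast<Ogre::uint8*>(dst.data);
    const Ogre::uint8* s = static_cast<const Ogre::uint8*>(src.data);

    // Copy row by row; the locked buffer may be padded, and rowPitch is
    // measured in pixels (4 bytes each for BGRA).
    for (size_t y = 0; y < 256; ++y)
        std::memcpy(d + y * dst.rowPitch * 4, s + y * 256 * 4, 256 * 4);

    buf->unlock();

Would this be noticeably faster, or is the freeze coming from somewhere else (e.g. the image load itself)?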
34 | Problem with transparent textures in SFML I have been told that this is kind of a common problem with transparent textures, but didn't get any further information on how to solve it. In my code, I'm trying to render this texture (a square with rounded corners, where rounded corners have some alpha) What I get instead is this Notice those greyish places on the corners of the textures where rounded corners are supposed to be. What could be causing this? I have a pure white texture, so I don't expect a single pixel to get any darker than the background. All pixels should have at least the color of background, but as you can see, there is something darker. Zoomed even more Any help would be highly appreciated. |
34 | Other procedural material generators for Unity? In the new version of Unity 3.4, Unity announced that they would now support procedural materials (which is awesome, by the way). While I was researching it, I found this in the manual: Allegorithmic's Substance Designer can be used to create Procedural Materials, but there are other applications (3D modeling apps, for example) that incorporate the Substance technology and work just as well with Unity. I don't have Allegorithmic's Substance Designer and don't plan on buying it soon. What other applications or 3D modeling apps can make procedural materials that work in Unity? Edit: I found that Allegorithmic has a program called Player. But it is Windows-only, and I'm on Mac.
34 | Unstable sampled color in GLSL shader It looks like sampling the same exact color data from different areas of a texture atlas results in different vec4s in the GLSL fragment shader. In other words, texture2D is directly dependent on the specified texture coordinates, and not only transitively because of the color of the texel at those coordinates. To say that with an example, I am getting the impression that sampling a red pixel surrounded by blue pixels from, say, (0.1, 0.2) may give a different result than sampling the same shade of red surrounded by the same shade of blue at, say, (0.7, 0.3). Is this non-deterministic behavior that I am observing likely to be real? Additional details: I have an atlas that we use to texture multiple screen-aligned quads. Each vertex has an xy position and uv texture coordinates. The allotted rectangles in the atlas do not overlap; furthermore, a 2-pixel margin exists between adjacent rectangles. We are using linear filtering and no mipmapping. The atlas is populated dynamically using sprites downloaded from the web. Two different runs of the program may have the atlas populated in different orders, even if user input is exactly the same, due to the fact that some HTTP requests may complete before others. So now imagine that on a run we have a Princess Peach sprite in the middle of the screen; the texture is coming from the upper left corner of the atlas, because Princess Peach was the first sprite added to the scene. In another run, there have been a few sprites loaded, but now their quads are not on the screen, and again we have Princess Peach in the middle, in the same exact position as the first run. But this time the texture is coming from a different region of the atlas. Problem: rendered images are not the same. The differences are imperceptible and usually consist of only a few pixels near the edges of the rendered sprites. The differences are detected by a screenshot-testing framework. Where is the source of randomness? Even more details: It's a WebGL project. It happens on both GeForce and Quadro cards. Using nearest-neighbor sampling leads to more stable (and ugly) renderings, but it does not fully eliminate the problem. Enforcing a strict order when copying the sprites to the atlas fully solves the problem, but it is not an acceptable solution. I inspected the GL state using an inspector and I found two frames that should have been the same in two different runs of the program: the state is the same in both cases, the only difference being a different arrangement of sprites in the atlas, and consequently different texture coordinates in the array buffer. I know that I could test using some sort of fuzzy thresholded comparison, but first I would like to rule out bugs on my side.
34 | How to read a color of a pixel from texture (cocos2d js)? How to read a color of a pixel at x,y from a texture (in cocos2d js)? |
34 | Should I bake all textures to a texture atlas in a game level? I'm writing my own game engine using Java OpenGL. I'm creating levels in Blender 3D and exporting them in a custom file format. I'm not going to be auto generating any terrain everything is going to be exported directly from Blender. I'm trying to figure out the best way to deal with textures for the static objects in my scene (including very large objects like long ridges of cliffs, grassy fields, canyon floors, etc). One possibility is to just use the textures at they are applied in the editor. I have a lot of tiling textures for grass and dirt and such that are not too big and which cover large meshes. This doesn't take up much texture memory space, but is a little inflexible from an art design standpoint. Also, if I want to do something special like create a blend shader to have a dirt path run through my field I'll need to code a unique shader to handle this case. (To complicate things further, Blender doesn't provide support for custom GLSL shaders, so exporting unique things like this will be non trivial). Alternately, I can create a single texture atlas for the entire level and bake everything onto it. This way I don't have to worry about special purpose shaders or multiple tiling textures. The down side is that this texture would be enormous. I'm worried such a large texture might be bigger than my device can handle. Anyhow, I'm not sure what the best practice is in these cases. Should I bake everything out, or have each model with its own texture? |
34 | How should I prepare for migration from D3D9 to D3D10 or D3D11? I'm considering a Direct3D9 texture streaming solution, but my company will eventually be going to D3D11 in the future. I wanted to make my solution as forward compatible as possible, so I was hoping that someone could give me some feedback on exactly what D3D11's capabilities were and what I should be looking for when preparing such a migration? For reference, here's what I'm considering on D3D9 Load low res mip maps for all meshes at load time Create bounding boxes around each of my objects and detect when I'm inside any given bounding box For any bounding box that I'm inside of I will load the high res portion of the mip map Any bounding box that I've left I'll unload the texture from I've also got to cook up some scheme to manage fragmentation of the GPU memory, initially I'll probably just cycle the GPU memory whenever the camera and my objects are still |
34 | Why do my sprites have a dark shadow line frame surrounding the texture? I'm starting OpenGL with Apple's GLKit and I'm having some trouble getting my sprites displayed properly. The problem is that they are all surrounded by thin dark lines. The screen shot below shows two rectangles with PNG image textures containing transparency (obviously). The black shadows surrounding them are definitely not part of the PNGs. The green PNG is done without anti-aliasing; the blue one has an anti-aliased border. The black border is also apparent if I draw only one sprite. The relevant part (hope so...) of the code is:

    // render the scene
    - (void)render {
        glClearColor(69./255., 115./255., 213./255., 1.);
        glClear(GL_COLOR_BUFFER_BIT);
        [shapes enumerateObjectsUsingBlock:^(AAAShape *shape, NSUInteger idx, BOOL *stop) {
            [shape renderInScene:self];
        }];
    }

    // creating and storing the effect inside the shape class
    - (GLKBaseEffect *)effect {
        if (!effect) {
            effect = [[GLKBaseEffect alloc] init];
        }
        return effect;
    }

    // rendering the shape (including effect configuration)
    - (void)renderInScene:(AAAScene *)scene {
        // TODO: Storing vertices in Buffer
        self.effect.transform.projectionMatrix = scene.projectionMatrix;
        self.effect.transform.modelviewMatrix = self.objectMatrix;
        if (texture) {
            self.effect.texture2d0.enabled = GL_TRUE;
            self.effect.texture2d0.envMode = GLKTextureEnvModeReplace;
            self.effect.texture2d0.target = GLKTextureTarget2D;
            self.effect.texture2d0.name = texture.name;
        }
        [self.effect prepareToDraw];
        if (texture) {
            glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
            glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, self.textureCoordinates);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
        glEnableVertexAttribArray(GLKVertexAttribPosition);
        glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, self.vertices);
        glDrawArrays(GL_TRIANGLE_FAN, 0, self.vertexCount);
        glDisableVertexAttribArray(GLKVertexAttribPosition);
        if (texture) {
            glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
            glDisable(GL_BLEND);
        }
    }

Any ideas anyone? Thank you.
34 | What would be the benefit of placing all of a room's textures on an object outside of the camera's view? Pokémon XD: Gale of Darkness does an odd thing with textures. As described in this video, all textures for a room are placed on an object outside of the camera's view: This kind of caught me off guard. I wasn't really expecting anything to be on the other side of this room, but when I turned the camera around, it was all the textures that were inside this room plastered into one spot. Now I can't explain this whatsoever; I've never seen this in any other game that I've covered. But what's really odd is that this isn't an isolated case, this happens periodically throughout Pokémon XD. Now it's not in every room and city, but from time to time, you will see its textures on some type of object in an area that's off camera. What would be the benefit of doing this? The scene is shown to the player all at once after loading, so I'm not seeing how it would help with loading times or anything.
34 | Game Asset Size Over Time The size (in bytes) of games have been growing over time. There are probably many factors contributing to this trailer cut scene videos being bundled with the game, more and higher quality audio, multiple levels of detail being used, etc. What I'd really like to know is how the size of 3D models and textures that games ship with have changed over time. For example, if one were to look at the size of meshes and textures for Quake I (1996), Quake II (1997), Quake III Arena (1999), Quake 4 (2005), and Enemy Territory Quake Wars (2007), I'd imagine a steady increase in file size. Does anyone know of a data source for numbers like this? |
34 | Implement spherical mapping for texture coordinates I am using a texture of a world map and I am trying to put that image on a sphere made up of many triangles. Each triangle has points a, b, c with their own (x, y, z) coordinates. I am trying to use the coordinate system conversion formulas from Wikipedia. This is my world-to-spherical-coordinates function:

    function worldToSpherical(p) {
      const r = Math.sqrt(Math.pow(p[0], 2) + Math.pow(p[1], 2) + Math.pow(p[2], 2));
      const u = Math.atan2(p[1], p[0]);
      const i = Math.atan2(Math.sqrt(Math.pow(p[0], 2) + Math.pow(p[1], 2)), p[2]);
      const s = r * Math.sin(i) * Math.cos(u);
      const t = r * Math.sin(i) * Math.sin(u);
      return [s, t];
    }

But this is what my output looks like: It seems to be wrapping around twice. Am I using the wrong formula, or using it wrong?
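For comparison, here is the longitude/latitude mapping I believe equirectangular world maps are normally sampled with — the angles themselves are normalized into [0,1], rather than converted back into Cartesian coordinates the way my s/t are (a sketch; the axis conventions may need swapping for my mesh):

    function worldToUV(p) {
      const r = Math.hypot(p[0], p[1], p[2]);
      const u = 0.5 + Math.atan2(p[1], p[0]) / (2 * Math.PI); // longitude -> [0,1]
      const v = 0.5 - Math.asin(p[2] / r) / Math.PI;          // latitude  -> [0,1]
      return [u, v];
    }

I suspect returning the Cartesian s/t instead of the angles themselves is part of my problem, but I'd appreciate confirmation.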
34 | Unity 5 Textures look strange in Deferred Rendering I've created an exterior terrain scene in Unity 5 which features lots of trees and grass but some textures look awful and I'm wondering why. Please see the attached image This texture problem is especially obvious at night (I'm using Time of Day unity asset for Day Night cycle) but also during day light it doesn't look good. The tree bark looks like plastic and also the ground textures are too bright and lack contrast. Stone texture on the other hand look perfectly fine. Can somebody point me into the right direction of what I'm doing wrong here with my lighting and rendering? |
34 | What is the difference between these options when saving an image as a PNG in Photoshop for my game? I'm creating sprites and tilemaps for 2D games. I might just use Unity. What is the impact of choosing different options when saving as a .png? |
34 | projected textures not appear on the "back" of the mesh as well? I want to create blood wounds on my character's bodies by using projected textures. I've watched some commentaries on games like Left 4 Dead and they say they use projected textures for the blood. But the way projected textures work is that if you project a texture on a rigged character, say his chest, it will also appear on his back. So what's the trick? How to get projected textures appear only on one "side" of the mesh? I use the Panda3D game engine, if that will help. |
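One idea I've been sketching while waiting for an answer — cull the projection by surface normal in the fragment shader, so triangles facing away from the projector get no blood (all names are mine, and I don't know how Panda3D exposes this):

    uniform sampler2D u_decalTex;  // the wound/blood texture
    uniform vec3 u_projDir;        // unit direction the decal is projected along
    varying vec3 v_normal;         // surface normal, same space as u_projDir
    varying vec4 v_projCoord;      // projective texture coordinates

    void main() {
        // Back-facing surfaces (e.g. the character's back) get no decal.
        if (dot(normalize(v_normal), -u_projDir) <= 0.0)
            discard;
        gl_FragColor = texture2DProj(u_decalTex, v_projCoord);
    }

Is a normal test like this the standard trick, or do engines clip the projector frustum's depth instead?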
34 | Where can I learn about Substance maps for 3ds Max 2012? A new feature in 3ds max 2012 is Substance procedural textures. Are there any good online libraries or resources for substance maps? |
34 | MonoGame renders texture in an almost compressed-looking way I was working on a game in MonoGame, which I have been doing for quite some time now. As I was implementing UI, I noticed some sort of weird scanlines. Investigating further, to my surprise my whole scene was covered in them! Notice how the texture, when zoomed in, has an almost JPEG-like compressed feel to it. I have no idea what is causing this. We were using a RenderTargetTexture instead of the backbuffer, so I figured I'd try it without, but the same thing happens. I checked if we were doing any weird Matrix transformations; we were not — in fact, I disabled them just to test it out. After that I thought "half pixel offset" (even though it should not really apply anymore post-DX9), but that did not solve anything either. Last but not least I thought maybe a MipMapping issue, but that would be weird since the exact same thing happens with the normal backbuffer. My Question: Does anyone here recognize this "effect" and have any clue what might be causing it? I'm basically just rendering a 1920x1080 image to a 1920x1080 backbuffer, but I can assure you it's not the image. When I, for example, draw a cursor and make it follow the mouse, at the points where pixels "go missing" even the cursor texture deforms. It might not be as clear in this picture, but notice how both the cursor and the orange curve basically "skip" a row of pixels; when I move the cursor to another location, it turns back to normal.
34 | HLSL Voxel texturing I'm currently trying to develop a voxel engine using Direct3D 9 and C++. To keep the memory usage low, I'm only passing the position, the orientation and the offset of the current voxel's texture in the texture atlas of each vertex to the vertex shader. The vertex shader then calculates the normal and passes it to the pixel shader. I found this article which covers how to texture voxels with just their position and normal in GLSL. This is the part that calculates the texture coordinates in my pixel shader (SM3):

    float2 tileUV = float2(dot(input.normal.zxy, input.pos3D),
                           dot(input.normal.yzx, input.pos3D));
    float2 texcoord = input.texOffset + tileSize * frac(tileUV);

This code works fine for faces that point in the negative z direction (normal (0, 0, -1)); however, the back is flipped by 180° and the sides and top/bottom squares are flipped by 90°/270°. I am not sure if this is correctly translated from GLSL, because this behaviour should be the expected one in HLSL if I calculate it by hand. Is there anything that I have overlooked, or should I aim for a different approach? Edit: I have now managed to successfully texture the faces by replacing the previous calculation with the following:

    if (input.normal.y != 0.0f)
    {
        // handle top/bottom surfaces as front/back faces
        input.pos3D.y = input.pos3D.z;
        input.normal.z = input.normal.y;
        input.normal.y = 0.0f;
    }
    texcoord.x = input.texOffset.x + tileSize * frac(float3(1.0f, 1.0f, 1.0f) - cross(frac(input.pos3D), input.normal)).y;
    texcoord.y = input.texOffset.y + tileSize * (1.0f - frac(input.pos3D.y));

Is there any way that I can simplify/optimize the equation? I may also mention that the voxels are all axis-aligned and clamped to integer coordinates. Edit2: This is the modified formula of zogi's answer, which works as expected:

    float3 n = abs(normal.xyz);
    float2 texcoord = float2(
        input.texOffset.x + tileSize * dot(n, frac(input.pos3D.zxx)),
        input.texOffset.y + tileSize - tileSize * dot(n, frac(input.pos3D.yzy)));
34 | "RENDER WARNING there is no texture bound to the unit 0" When not Rendering Texture I have a webgl program that sometimes renders textures, but mostly renders triangles or lines. When I am not rendering a texture, webgl gives the warning, "RENDER WARNING there is no texture bound to the unit 0". Here is some code from my rendering loop if (thing.hasOwnProperty('texCoords')) gl.uniform1i(shaderProgram.useTexture, true) gl.bindBuffer(gl.ARRAY BUFFER, thing.texCoords) gl.activeTexture(gl.TEXTURE0) gl.bindTexture(gl.TEXTURE 2D, thing.texture) gl.uniform1i(shaderProgram.tex uniform, 0) gl.enableVertexAttribArray(shaderProgram.vertexPositionTex) gl.vertexAttribPointer(shaderProgram.vertexPositionTex, 2, gl.FLOAT, false, 0, 0) else gl.uniform1i(shaderProgram.useTexture, false) gl.disableVertexAttribArray(shaderProgram.vertexPositionTex) If a thing doesn't render a texture, there is code to set a flag in my shaders not to consult the sampler2d for texture values and instead just draw polygons. I suppose "there is no texture bound to the unit 0" is correct there isn't a texture bound since the last call to drawArrays, but why would that issue a warning when I don't try to access TEXTURE0? |
34 | How to give a certain scene different preference settings than other scenes (Unity) In my Unity game there are 4 scenes. 3 of those scenes rely on a backdrop because they are the main menu, the death screen and the escape screen (for when you escape the place); therefore they need a high-resolution image. However, the other scene (the main game scene) is incredibly laggy when the texture quality setting isn't "eighth res". How can I make it so that the texture quality for only my game scene is eighth res and the rest is normal res? This might help: my build-order values are main menu 0, my game 1, my death scene 2 and my escape scene 3. Thanks in advance.
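A sketch of what I'm imagining, assuming QualitySettings.masterTextureLimit is the right knob (0 = full resolution, each step halves it, so 3 = eighth res) — untested:

    using UnityEngine;
    using UnityEngine.SceneManagement;

    // Attach to a persistent object; drops texture resolution only in the
    // main game scene (build index 1) and restores it everywhere else.
    public class SceneTextureQuality : MonoBehaviour
    {
        void OnEnable()  { SceneManager.sceneLoaded += OnSceneLoaded; }
        void OnDisable() { SceneManager.sceneLoaded -= OnSceneLoaded; }

        void OnSceneLoaded(Scene scene, LoadSceneMode mode)
        {
            // 0 = full res, 1 = half, 2 = quarter, 3 = eighth.
            QualitySettings.masterTextureLimit = (scene.buildIndex == 1) ? 3 : 0;
        }
    }

Would switching this at scene load cause a hitch while mips are discarded/reloaded?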
34 | Sprite sheet textures picking up edges of adjacent texture I have a custom sprite routine (OpenGL 2.0) which uses a simple sprite sheet (my textures are arranged horizontally next to each other). So, for example, here is a test sprite sheet with 2 simple textures. Now, what I do when creating my OpenGL sprite object is specify the total number of frames in its atlas and, when drawing, specify which frame I want to draw. It then works out where to grab the texture from by: dividing the required frame number by the total number of frames (to get the left coordinate), and then dividing 1 by the total number of frames and adding the result to the left-hand coordinate calculated above. This does seem to work, but sometimes I get problems. Say, for example, I want to draw the X below and I get........... I've heard about putting a 'padding' of 1 px between each texture, but could someone explain exactly how this works? I mean, if I do this it will surely throw off the calculations for getting the texture. If I simply include the padding in the texture picked up (so the sprite is drawn with a blank border), then surely this will cause problems with collision detection? (i.e. sprites may appear to collide when their transparent parts overlap, when using bounding boxes). Would appreciate it if someone could explain.
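From what I can gather (so treat the math below as my assumption of how padded atlases work): each frame gets a 1 px border that duplicates the frame's edge pixels, and the UVs are inset past that border, so linear filtering bleeds into the padding instead of into the neighbouring frame. The quad is still drawn at the sprite's logical size, so bounding boxes for collision are unaffected. With frame, frameWidth and sheetWidth as my own names:

    // Atlas layout: [pad | frame | pad][pad | frame | pad]...
    // UVs address only the real frame, never the shared edge.
    float pad  = 1f;                     // padding in texels
    float cell = frameWidth + 2f * pad;  // stride between frames
    float u0 = (frame * cell + pad) / sheetWidth;               // left
    float u1 = (frame * cell + pad + frameWidth) / sheetWidth;  // right

Is that the right way to adjust the calculation, or is there more to it?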
34 | PVRTC Texture Format with glTexStorage2D on OpenGL ES 3.0 How? I've looked everywhere and I can't seem to find the answer. How can I use glTexStorage2D with PVRTC textures? I've done this:

    #define GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG 0x8C02
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                   m_uPixelsWide, m_uPixelsHigh);

The GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG internal format works for glTexImage2D, but when I run that TexStorage command, my PVR textures come out black. There are no errors posted, and when I query glGetTexParameteriv, my texture is indeed set to immutable. Does PVRTC not work for OpenGL ES 3.0?
34 | How to adjust texture on a sphere? As seen, the texture is not adjusted to the sphere. I tried to change alignment, but it still displays multiple sides. How can I fix this? |
34 | Rendering Texture Quad to Screen or FBO (OpenGL ES) I need to render the texture on the iOS device's screen or a render-to-texture frame buffer object. But it does not show any texture. It's all black. (I am loading the texture with an image myself for testing purposes.)

    // Load texture data
    UIImage *image = [UIImage imageNamed:@"textureImage.png"];
    GLuint width = FRAME_WIDTH;
    GLuint height = FRAME_HEIGHT;

    // Create context
    void *imageData = malloc(height * width * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Prepare image
    CGContextClearRect(context, CGRectMake(0, 0, width, height));
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

Simple texture-quad drawing code mentioned here. Bind the texture, bind the render-to-texture FBO and then draw the quad:

    const float quadPositions[] = {  1.0,  1.0, 0.0,
                                    -1.0,  1.0, 0.0,
                                    -1.0, -1.0, 0.0,
                                    -1.0, -1.0, 0.0,
                                     1.0, -1.0, 0.0,
                                     1.0,  1.0, 0.0 };
    const float quadTexcoords[] = { 1.0, 1.0,
                                    0.0, 1.0,
                                    0.0, 0.0,
                                    0.0, 0.0,
                                    1.0, 0.0,
                                    1.0, 1.0 };

    // stop using VBO
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // setup buffer offsets
    glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), quadPositions);
    glVertexAttribPointer(ATTRIB_TEXCOORD0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), quadTexcoords);

    // ensure the proper arrays are enabled
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glEnableVertexAttribArray(ATTRIB_TEXCOORD0);

    // Bind Texture and render-to-texture FBO.
    glBindTexture(GL_TEXTURE_2D, GLid);
    // Actually wanted to render it to the render-to-texture FBO, but now testing directly on the default FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, textureFBO[pixelBuffernum]);

    // draw
    glDrawArrays(GL_TRIANGLES, 0, 2 * 3);

What am I doing wrong in this code? P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now.
34 | 2D RGBA texture is gray in GLSL shader I have a shader to perform LUT coloring of a texture, and a C/C++ program that calls it, which work perfectly when I use a 1D texture. Since I need to support computers that don't support the 1D texture in the shader, I thought the easiest would be to convert the 1D texture to a 2D texture with height 1. After the conversion (1D → 2D), when I run the program it seems that the LUT 2D texture is received gray in the shader. I render the exact same texture in another place (not using a shader) in the program and the texture looks fine (same as it did when I used a 1D texture). The shader code:

    uniform sampler2D FluorImage;
    uniform sampler2D FluorColorLutRGBA;

    void main(void)
    {
        vec3 FluorPix = texture2D(FluorImage, gl_TexCoord[0].st).rgb;
        vec2 Coord = vec2(FluorPix.g, 0.0);
        vec4 NewColorFluor = texture2D(FluorColorLutRGBA, Coord).rgba;
        gl_FragColor.rgb = NewColorFluor.rgb;
    }

The texture creation code:

    glGenTextures(1, &m_FluorShadersParams[fpsFused].m_LutTexture);
    glBindTexture(GL_TEXTURE_2D, m_FluorShadersParams[fpsFused].m_LutTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glActiveTexture(FluorTextureIndexes[fpsFused]);
    glBindTexture(GL_TEXTURE_2D, m_FluorShadersParams[fpsFused].m_LutTexture);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, 0);

Filling the texture with data:

    glActiveTexture(a_FluorTextureIndexes[fpsFused]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, i, 256, 1, GL_RGBA, GL_UNSIGNED_BYTE, pLUT);
    glActiveTexture(GL_TEXTURE0);

Calling the shader:

    glUseProgram(m_progObj[fsFluorFused]);
    glActiveTexture(FluorTextureIndexes[fpsFused]);
    glBindTexture(GL_TEXTURE_2D, FluorTextureIndexesNum[fpsFused]);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, inputTextures[eftIR]); // Fluor image
    glBindFramebuffer(GL_FRAMEBUFFER, outputFBO.m_FBO[eftFused]);

    uniformLoc = glGetUniformLocation(m_progObj[fsFluorFused], "FluorImage");
    if (uniformLoc != -1)
        glUniform1i(uniformLoc, 1);
    uniformLoc = glGetUniformLocation(m_progObj[fsFluorFused], "FluorColorLutRGBA");
    if (uniformLoc != -1)
        glUniform1i(uniformLoc, FluorTextureIndexesNum[fpsFused]);

    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_QUADS);
    glTexCoord2f(0, fTextureRatioY);              glVertex2f(0, fTextureRatioY);
    glTexCoord2f(fTextureRatioX, fTextureRatioY); glVertex2f(fTextureRatioX, fTextureRatioY);
    glTexCoord2f(fTextureRatioX, 0);              glVertex2f(fTextureRatioX, 0);
    glTexCoord2f(0, 0);                           glVertex2f(0, 0);
    glEnd();
    glUseProgram(0);
    glActiveTexture(GL_TEXTURE0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);

The strange thing is that if I change all the places in the code back from Texture2D into Texture1D, and the sampler in the shader to 1D, everything works great. Anyone has any idea where the issue can be?
34 | NPOT texture and video memory usage I read in this Q&A that an NPOT texture will take as much memory as the next POT-sized texture. That means it doesn't give any benefit over a POT texture with proper management (maybe even worse, because NPOT should be slower!). Is this true? Does an NPOT texture take and waste the same memory as a POT texture? I am considering NPOT textures for post-processing, so if they don't give a memory-space benefit, using NPOT textures is pointless for me. Maybe the answer is different for each platform. I am targeting mobile devices, such as iPhone or Androids. Does an NPOT texture take the same amount of memory on mobile GPUs?