_id (int64, 0–49) | text (string, 71–4.19k characters) |
---|---|
34 | Sprite sheet resolutions and Tile Maps I am making a game using Cocos2d-x and want to support multiple mobile phone resolutions and sizes. Right now I have made my game sprite sheets at the iPad Retina resolution of 2048x1536, and I will then use the Tiled map editor to design my game levels. My question is: on what basis do I scale my sprite sheets for different resolutions (iPhone, for example) so that my Tiled map's design does not get affected when used in the game? |
34 | CS:GO workshop AWP skin scrambles I was starting to make my AWP skin, but I noticed that my skin in CS:GO looks a bit different than it does in Photoshop. So I came here for help. I will show just a test skin (fully red). First I saved my skin from Photoshop in .tga format (32 bits per pixel). Then I imported it into the VTF editor without changing any of the settings; they are at their defaults. After that I checked only "No Level Of Detail", "No Mipmap" and "SRGB". Opening CS:GO and the workshop, the AWP is not fully red; there are some scrambled areas. I hope for a solution. :) |
34 | Texture disappearing from landscape I am using Unreal Engine 4.25, and I have an issue with adding textures. I have some textures on my landscape right now, but lately, when I add a texture to the landscape, suddenly all textures disappear from it; it looks like I did not add any textures at all. How can I fix this issue? |
34 | DirectX 11 GenerateMips only works with premultiplied alpha? The GenerateMips method in ID3D11DeviceContext allows generation of mipmaps at runtime, which is fine for fully opaque textures. However, when this method is used with transparent textures that do not have premultiplied alpha, the resulting mip levels tend to have black outlines at the transparency edges. For instance, this PNG texture, loaded through WIC and processed with GenerateMips, produces mipmaps with edge pixels that are way too dark. My question: is there any way to specify that this texture uses non-premultiplied alpha, so that DirectX can generate more correct mip levels? |
34 | Why is my texture displaying incorrect colours using DX11? I am trying to load my data from a binary file (ppm) and create a texture using this data. It is important that I learn to do it this way, as I am eventually going to be packing all of my textures into a single binary file and then index them, so creating the texture from pure binary data is something that I need to be able to do. It seems that the texture is drawing correctly, but the colours are incorrect. I saved my image as .ppm just for this test application. Here is the code to load my data: ppm ppm; ppm.read(std::string("textureppm.ppm")); just to ensure the data is correct: uint32_t val = ppm.pixels[0]; unsigned char r = (val & 0xFF000000) >> 24; unsigned char g = (val & 0x00FF0000) >> 16; unsigned char b = (val & 0x0000FF00) >> 8; unsigned char a = (val & 0x000000FF); ID3D11ShaderResourceView* texSRV = nullptr; D3D11_SUBRESOURCE_DATA initData = { &ppm.pixels, ppm.width * sizeof(uint32_t), 0 }; D3D11_TEXTURE2D_DESC desc; desc.Width = ppm.width; desc.Height = ppm.height; desc.MipLevels = 1; desc.ArraySize = 1; desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; desc.SampleDesc.Count = 1; desc.Usage = D3D11_USAGE_IMMUTABLE; desc.BindFlags = D3D11_BIND_SHADER_RESOURCE; ID3D11Texture2D* tex; HRESULT hr = getDevice()->CreateTexture2D(&desc, &initData, &tex); if (SUCCEEDED(hr)) { D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc; SRVDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; SRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D; SRVDesc.Texture2D.MipLevels = 1; hr = getDevice()->CreateShaderResourceView(tex, &SRVDesc, &texSRV); if (FAILED(hr)) throw 0; else setTexture(texSRV); } I have packed each byte into a uint32_t as it seems that is the format required by DXGI_FORMAT_R8G8B8A8_UNORM. Here is the packing: uint32_t ppm::CreateRGBA(unsigned char r, unsigned char g, unsigned char b, unsigned char a) { uint32_t value = 0; int r2 = (r & 0xff) << 24; int g2 = (g & 0xff) << 16; int b2 = (b & 0xff) << 8; int a2 = (a & 0xff); value = r2 \| g2 \| b2 \| a2; return value; } This code produces the following texture, when the original texture is shown below it. Does anyone know what I am doing wrong? (See the sketch after the table.) |
34 | How can I determine the extreme color values in a texture? I am looking for a way to determine the most extreme color values for all of the texels in a texture. So for a texture consisting only of black and white texels, the extreme values should be (0,0,0) and (1,1,1) expressed in RGB format. For a color gradient from red to green I should get the values (1,0,0) and (0,1,0). Now obviously I could do this on the CPU by iterating over all the pixels/texels of the texture and keeping track of the color values found to be farthest apart from each other, but this is probably relatively slow, so I am looking for a way to do this using the GPU/shaders. Is this possible using shaders? I am not experienced with GPGPU, so a solution in HLSL/GLSL would be preferred. Or maybe there is a fast algorithm I could use on the CPU? (See the sketch after the table.) |
34 | Monogame Texture anti aliasing results in transparent edges This is a follow up question on a previous issue I faced with a game I'm trying to make. It's in a 2D isometric perspective and the levels are created from individual tiles. A simple 3x3 map can be seen here. The tiles are individually added to the sprite batch, and (at least, this is what I think is happening) each one gets anti aliased on its edges, resulting in a small blur which creates the thin edges that you can see in the picture. To make matters worse, I actually want to provide a bit more complex tiles in my game than what is displayed above. For example, I want to be able to show roads. To do so, I wanted to layer multiple sprites on top of each other, resulting in the picture below. I think you can see where I'm going here. The same problem that I described with the aliasing is now occurring around each texture. So sad. I know I can use point sampling to make the textures align pixel perfect, but even though this does work, the (relatively) high resolution textures really do not seem fit for such a setting; I mean, these edges simply need to be anti aliased. So, my question to you all is: is there a way to stitch my tiles (and tile pieces) together in a way so that anti aliasing does not ruin it with those annoying edges, while still being able to produce a nicely anti aliased result? I hope there is, and that someone here can show me how :) Thanks, Rutger Edit 1: Changed the title to be more representative of the problem (I hope?) |
34 | What is the name for the technique to use different palettes for different tiles of a screen? I've found the keywords "subpalette" and "PPU palette", but they seem to be NES specific. Though the idea of using a small palette table and using 2 bits per pixel in an image (plus a few bits referencing a palette per tile) looks like a cool retro compression method with low decompression drawbacks, so I guess it was used more widely. Does this technique (of using different palettes for different blocks of pixels) have a name? Does it have a usage apart from the NES? |
34 | How to create a Texture2D from an array of pixels in DirectX11 I currently have a ray casting camera that holds an array of pixel colors (XMFLOAT4 color) that is 1024x768. I am trying to create a Texture2D in DirectX11 that is also 1024x768 and simply copy the pixels over to the texture. I then want to apply the texture to a fullscreen quad to essentially display what geometry my rays are intersecting with. D3D11_TEXTURE2D_DESC textureDesc = { 0 }; textureDesc.Width = 1024; textureDesc.Height = 768; textureDesc.MipLevels = 1; textureDesc.ArraySize = 1; textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; textureDesc.SampleDesc.Count = 1; textureDesc.SampleDesc.Quality = 0; textureDesc.Usage = D3D11_USAGE_DEFAULT; ID3D11Texture2D* fullScreenTexture = nullptr; ThrowIfFailed(mGame->Direct3DDevice()->CreateTexture2D(&textureDesc, nullptr, &fullScreenTexture), "ID3D11Device::CreateTexture2D() failed."); Assuming I have XMFLOAT4 pixels[1024 * 768] filled in appropriately with colors, I'm not sure where/how I would take my pixel data and apply it to the texture. Any help would be greatly appreciated. (See the sketch after the table.) |
34 | UV Mapping in Autodesk Maya 2011 goes wrong I'm always struggling with UV mapping, I'm still a student though. When I am creating a UV map the texture on the object always seems to be very low resolution... does anybody know a very good tutorial, or can somebody tell me what I am doing wrong? Thank you. |
34 | How do I convert a cube map to an equirectangular projection? I have a cube map texture showing a surrounding area, which I want to pass to a program that only works with latitude/longitude maps. How can I do this? In other words, I need to turn this into this (I think it's additionally rotated 90° over the x axis). (See the sketch after the table.) |
34 | PVRTC Texture Format with glTexStorage2D on OpenGL ES 3.0 How? I've looked everywhere and I can't seem to find the answer. How can I use glTexStorage2D with PVRTC textures? I've done this: #define GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG 0x8C02 glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, m_uPixelsWide, m_uPixelsHigh) where the GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG internal format works for glTexImage2D, but when I run that TexStorage command, my PVR textures come out black. There are no errors posted, and when I query glGetTexParameteriv, my texture is indeed set to Immutable. Does PVRTC not work for OpenGL ES 3.0? |
34 | Is it possible to look up a texel from a texture in a GLES2 GLSL fragment shader without using a sampler? Is there some way I can directly access texture memory from a fragment shader in GLES2 GLSL? I don't need the sampler to be involved, since I am just using it as a look up table. |
34 | How to print Depth to a Texture2D and then read it in the next pass on a shader in DirectX11 I'm programming a two pass effect in DirectX 11 (SharpDX). It's supposed to write the depth to a texture in the first pass and then use that texture to extract data in the second one in the pixel shader. What I get is a white screen, with nothing but the interface, and I don't know why nothing is being printed. What could be the problem? I would say I should get at least something from the depth texture. This is how I'm setting the depth texture values: this.depthBuffer = new Texture2D(device, new Texture2DDescription() { Format = Format.R32_Typeless, ArraySize = 1, MipLevels = 1, Width = (int)host.ActualWidth, Height = (int)host.ActualHeight, SampleDescription = new SampleDescription(1, 0), Usage = ResourceUsage.Default, BindFlags = BindFlags.DepthStencil \| BindFlags.ShaderResource, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, }); this.depthBufferShaderResourceView = new ShaderResourceView(this.device, this.depthBuffer, new ShaderResourceViewDescription() { Format = Format.R32_Float, Dimension = ShaderResourceViewDimension.Texture2D, Texture2D = new ShaderResourceViewDescription.Texture2DResource() { MipLevels = 1, MostDetailedMip = 0, } }); var depthStencilDesc = new DepthStencilStateDescription() { DepthComparison = Comparison.LessEqual, DepthWriteMask = global::SharpDX.Direct3D11.DepthWriteMask.All, IsDepthEnabled = true, }; And here is how I sample the depth in the .fx file: int3 posTex = int3(input.p.xy, 0); float depthPixel = DepthTexture.Load(posTex); float4 color = float4(depthPixel, depthPixel, depthPixel, 1.0f); return color; And here is the way I'm now setting the depth buffer stencil view as a render target in 2 passes. In the first I try to set the depth stencil view as a target. In the second pass I'm trying to set the depth texture as a shader resource to read from it. this.device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(this.vertexBuffer, LinesVertex.SizeInBytes, 0)); PASS 0: this.device.ImmediateContext.OutputMerger.SetTargets(depthBufferStencilView); this.device.ImmediateContext.ClearDepthStencilView(this.depthBufferStencilView, DepthStencilClearFlags.Depth \| DepthStencilClearFlags.Stencil, 1.0f, 0); this.technique.GetPassByIndex(0).Apply(this.device.ImmediateContext); this.device.ImmediateContext.DrawIndexed(this.geometry.Indices.Length, 0, 0); PASS 1: this.device.ImmediateContext.OutputMerger.ResetTargets(); // unbinding the depthStencilView this.device.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(this.vertexBuffer, LinesVertex.SizeInBytes, 0)); this.depthStencilShaderResourceVariable = effect.GetVariableByName("DepthTexture").AsShaderResource(); this.depthStencilShaderResourceVariable.SetResource(this.depthBufferShaderResourceView); this.technique.GetPassByIndex(1).Apply(this.device.ImmediateContext); this.device.ImmediateContext.DrawIndexed(this.geometry.Indices.Length, 0, 0); Finally, this is how I set the two passes in the .fx file: technique11 RenderMyTechnique { pass P0 { SetDepthStencilState( DSSDepthLessEqual, 0 ); SetVertexShader ( CompileShader( vs_4_0, VShader() ) ); SetHullShader ( NULL ); SetDomainShader ( NULL ); SetGeometryShader ( NULL ); SetPixelShader ( NULL ); } pass P1 { SetDepthStencilState( DSSDepthLessEqual, 0 ); SetVertexShader ( CompileShader( vs_4_0, VShader() ) ); SetHullShader ( NULL ); SetDomainShader ( NULL ); SetGeometryShader ( CompileShader( gs_4_0, GShader() ) ); SetPixelShader ( CompileShader( ps_4_0, PShader() ) ); } } |
34 | How to compute tangent and bitangent vectors I have a texture loaded in three.js, then passed to the shaders. In the vertex shader I compute the normal, and I save the uv vector into a variable. <script id="vertexShader" type="x-shader/x-vertex"> varying vec3 N,P; varying vec2 UV; void main() { gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0); P = position; N = normalMatrix * vec3(normal); UV = uv; } </script> <script id="fragmentShader" type="x-shader/x-fragment"> varying vec3 N,P; varying vec2 UV; uniform sampler2D texture; void main() { gl_FragColor = texture2D(texture,UV); } </script> How do I compute the T and B vectors? |
34 | Wrong UV mapping when using small textures and positions I am using a really small texture (64x64), and when I use it as a UV map on my mesh, there are some incomplete lines from the color of the other side of the UV map rendered as well (in this case, white). The model was made in Blender, and there it is rendered without this error. I set absolute coordinates, so the UV coordinates are multiples of 1.0 (1.0, 2.0, 3.0, 4.0, etc.). I am using Ogre as the renderer. This is the UV map: Image with error: |
34 | WebGL send part of texture to GPU I have a matrix of pixels in RAM, e.g. 1000x1000 pixels. I send it to the GPU using gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1000,1000, 0, gl.RGBA, gl.UNSIGNED_BYTE, pixels), which works great. But then I change some pixels, located in the rectangle x 50, y 50, width 100, height 100. Sending the whole 1000x1000px picture again takes too much time; I would like to send only the changed sub area to the GPU. I have tried gl.texSubImage2D(gl.TEXTURE_2D, 0, 50,50, 100,100, gl.RGBA, gl.UNSIGNED_BYTE, pixels), but it seems that the parameter "pixels" should be the sub area of 100x100 pixels. Copying out a sub area from a large picture also takes too much time. Can I somehow specify a sub rectangle for the source pixels in RAM? Maybe somehow using gl.pixelStorei()? UPDATE: I discovered that it is possible in WebGL 2.0 (OpenGL ES 3.0) using gl.pixelStorei() by changing the UNPACK_ROW_LENGTH, UNPACK_SKIP_ROWS, UNPACK_SKIP_PIXELS parameters. Too bad WebGL 2.0 is not supported on most devices right now :( (See the sketch after the table.) |
34 | 2 Images, same object: tone difference and alignment I would like to know of software which helps in post production to calculate the general color tone difference between 2 images. The images contain the same object, but aren't shot from the same position. One image might be sunny, the other cloudy, so one is more orange/yellow, the other more grey/blue, but it is the same object. How would you align the tone of the images? Background info: I shoot a couple of hundred images to produce textures for my models; the objects are outside, so weather conditions can change at any time, and shooting must continue. I do photogrammetry, so I use the images to project them onto the geometry and bake them into a UV map. So in general I have a chunk of images made in sunny conditions and another in cloudy. I would like to apply a tone to all cloudy images so that they have a tone similar to the sunny images. It's not about HDR. |
34 | How to load and display an image in OpenGL ES 3.0 using C I'm trying to make a simple app in Android Studio using the NDK and JNI to call C code that loads and displays an image. I have managed to create the surface and draw a simple triangle already. Now, I'm looking for a way to load and display an image in OpenGL ES 3.0 using C. I have searched around, but all of the examples are either missing some important function or written in Java. It would be great if someone could guide me with a simple example, thanks in advance :) |
34 | Optimizing Texture Text Rendering? I have an implementation currently that stores each letter as its own separate texture, usually only a couple of pixels in width and height. This has some problems: I am unable to do a batch render; I can only batch render the same characters. I am stuck with this format; all I can do is try to process it into something better at runtime. I was thinking of creating a single larger texture from all the little ones instead. What algorithm can I use to sort the small images into a larger one? How big can this texture get without losing pixel precision with single floating point precision UV mapping (texture width/height normalization into 0.0 to 1.0)? (See the sketch after the table.) |
34 | OpenGL 3.0 framebuffer to texture/images I need a way to capture what is rendered on screen. I have read about glReadPixels, but it looks really slow. Can you suggest a more efficient, or just an alternative, way to copy what is rendered by OpenGL 3.0 to local RAM, and in general to output this to an image or a data stream? How can I achieve the same goal with OpenGL ES 2.0? EDIT: I just forgot: with these OpenGL functions, how can I be sure that I'm actually reading a complete frame, meaning that there is no overlapping between 2 frames or any nasty side effect, and that I'm actually reading the frame that comes right after the previous one, so I do not lose frames? (See the sketch after the table.) |
34 | Sprite Tile Sheets Vs Single Textures I'm making a race circuit which is constructed using various textures. To provide some background, I'm writing it in C and creating quads with OpenGL to which I assign a loaded .raw texture. Currently I use 23 500px x 500px textures, which are all loaded and freed individually. I have now combined them all into a single sprite tile sheet, making it 3000 x 2000 pixels, since the number of texture tiles I'm using is increasing. Now I'm wondering if it's more efficient to load them individually or write extra code to extract a certain tile from the sheet? Is it better to load the sheet, then extract 23 tiles and store them from one sheet, or load the sheet each time and crop it to the correct tile? There seems to be a number of ways to implement it... (See the sketch after the table.) Thanks in advance. |
34 | How to texture a VBO? I'm not sure which way is the best way to do this for my purposes. My purposes being having a textured (with an image) cube. Also, I've been following a blend of tutorials, so I'm not sure which way is the most right and what I'm doing wrong (currently I haven't had the texture fit right). Do I use glDrawArrays or glDrawElements? For pointers, do I say gl.glVertexPointer(3, GL.GL_FLOAT, 0, vertices) and gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, textureData) or gl.glVertexPointer(3, GL.GL_FLOAT, 0, 0) and gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, 0)? When do I bind/unbind the buffers? (gl.glBindBuffer(GL.GL_ARRAY_BUFFER, VBOVertices), gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboTextureCoordHandle)) Before I establish the pointers, or not in my drawing method at all? Do I need to specify the UV coordinates/texture coordinates in the shader? If I'm getting a weirdly stretched texture, does that mean my fragment shader and setting the uniforms is working correctly? Thank you! I can supply more code, but I think I just need more information on the basics. |
34 | How can I render images to multiple windows without creating double textures in SDL2? Simply put: what is the simplest way to render an image to two different SDL2 windows? I'm pretty new to SDL2, but I have a nice little level editor working. I then decided to move the area with edit tools to a separate window. I immediately got the following error: Texture was not created with this renderer. Alas, it looks like each window needs to have its own SDL_Renderer, and each renderer needs to create its own textures for the images I want to display. IMG_LoadTexture() needs a renderer and will only render the resulting texture with that renderer. SDL_CreateRenderer() then needs a window as a parameter and doesn't seem to be able to ever render to another output window. So does this really mean I have to create separate textures of each of my images for each and every renderer/window? Or is there a way to load graphics into textures that can be used by any renderer, or on any window? |
34 | Doubt about texture waves in CG Ocean Shader I'm new to graphics programming, and I'm having some trouble understanding the Ocean Shader described in "Effective Water Simulation from Physical Models" from GPU Gems. The source code associated with this article is here. My problem has been understanding the concept of texture waves. First of all, what is achieved by texture waves? I'm having a hard time trying to figure out their usefulness. In section 1.2.4 of the article, it says that the waves summed into the texture have the same parametrization as the waves used for vertex positioning. Does that mean that I can't use the texture provided by the source code if I change the parameters of the waves, or add more waves to the sum? And in section 1.4.1, it is said that we can assume that there is no rotation between texture space and world space if the texture coordinates for our normal map are implicit. What does it mean that the "normal map is implicit"? And why do I need a rotation between texture and world spaces if the normal map is not implicit? I would be very grateful for any help on this. |
34 | What is the difference between a Sprite and a Texture? I have an assignment due for university, and my task is to discuss textures used within video games, and how they have evolved. My main question is: what is the fundamental difference between using a sprite and using other texture methods? Having done a little research myself, I'm inclined to think that sprites store images in a single file, can be used for animations etc., and were commonly used with older video games, which generally used sprites for all of their game visuals. Now, with modern games, I believe sprites tend to be used less as technology advances and other texture methods become available, such as bump mapping, although sprites are still used today to accommodate features such as a health bar or long distance textures. But what are the main advantages of using textures over sprites? |
34 | What is causing my skybox texturing issue? I am using the NVIDIA cubemap tool to generate my .dds file in Photoshop, but I am getting unexpected results when the texture is loaded into my application. I'm unsure as to whether this is caused by my code or by the texture generation in Photoshop. Does anyone know why this is happening? The texture is being applied to a sphere and it seems to only be stretching down the centre of the sphere; the rest is ok. The code I am using to generate the resource view: D3DX11_IMAGE_LOAD_INFO loadSMInfo; loadSMInfo.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE; loadSMInfo.BindFlags = D3DX11_DEFAULT; loadSMInfo.Depth = D3DX11_DEFAULT; loadSMInfo.Filter = D3DX11_DEFAULT; loadSMInfo.FirstMipLevel = D3DX11_DEFAULT; loadSMInfo.MipFilter = D3DX11_DEFAULT; loadSMInfo.MipLevels = D3DX11_DEFAULT; ID3D11Texture2D* SMTexture = 0; hr = D3DX11CreateTextureFromFile(pd3dDevice, L"assets/textures/skybox/sky.dds", &loadSMInfo, 0, (ID3D11Resource**)&SMTexture, 0); D3D11_TEXTURE2D_DESC SMTextureDesc; SMTexture->GetDesc(&SMTextureDesc); D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc; SMViewDesc.Format = SMTextureDesc.Format; SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE; SMViewDesc.TextureCube.MipLevels = SMTextureDesc.MipLevels; SMViewDesc.TextureCube.MostDetailedMip = 0; ID3D11ShaderResourceView* smrv; hr = pd3dDevice->CreateShaderResourceView(SMTexture, &SMViewDesc, &fullcubemap); Edit: I have changed the sphere to a cube and the texture is looking better quality on two of the faces, but the other faces are stretched and distorted. I'm going to try another texture. https://i.gyazo.com/6d4556bd79b0e3adce156e631fb2f514.mp4 Edit: I believe the problem is in my shader, forgetting to set the w value of the position to 1.0f. https://i.gyazo.com/e1fd41679e2257920aebff8d4c766118.mp4 Perhaps the only problem left is a wrong texture image? |
34 | Ray tracing texture and Phong lighting Another question related to my ray tracer implementation for iPad. If I have a polygon that has a texture and a material, how do I calculate the color using the Phong lighting model? Is the texture used as a substitute for some of the components (diffuse?)? Or, if there's a texture, do I just ignore the material and use only the texture color? |
34 | How to achieve highly detailed textures on buildings architecture? I have a bit of experience when it comes to 3D modeling, but something I never really wrapped my head around is environment modeling. You know, you walk down a road and you see lots of buildings, they all look good, with great texture details, and yet they don't go over the texture budget and the framerate is nice. Now I'm working on a little prototype, and I wanted to test how to make assets for it. This is supposed to be one floor of a fantasy tower. I did the modeling in Blender (with a rough UV), then I exported it into Quixel, I made a simple texture, 4K resolution because why not at this stage. The problem is, when I look at it ingame or in blender, it's quite awful. This is a single texture and a single material for the whole mesh. I was expecting this result, because the wall is some 20 meters long, so the texel density is really low. But this is already a 4K texture, when I should be using a 512 or 1k texture really. I mean, when you look at modern games, stuff like this looks sharp enough, with lower res textures than what I'm using now. So, basically, what is the right approach to something like this? I was thinking about something like this Have the "core" with a low res tiling texture, and then details as separate meshes that I would build in the engine (UE4 in this case). But I'm kinda worried about performance. I mean one floor would be at least 15 meshes, so 15 draws calls, so for ten floors it would be 150 drawcalls or something, I could barely make a village this way. I know some games do fully modular buldings, like Fallout 4, they have a bunch of "kits" made of 2x2m blocks that snap together and each block has its texture and it looks good. But I don't really need that level of granularity, and I want more individual looking buildings, that don't look really copy pasted. And even if I did go that way, if one building is like 100 pieces alone, that's some 100 draw calls, how does that even work ingame? An example from RDR2, how would go about making something like this? It can't possibly be one single mesh with just one texture. How can I achieve this kind of result? |
34 | How to apply saturn's ring texture in Unreal Engine 4? I'm working on a "solar system" model project. While trying to apply Saturn's ring texture, which is this one, it ended up looking like this. I'm new to UE4 and this branch in general, so I have no idea how to fix this. Your help would be appreciated. |
34 | How does Megatexture work? I've been thinking about developing a small engine, not only to develop small experimental games, but also to serve as a base to test various rendering techniques and things like that. Right now I've been thinking a lot about how to handle textures and stumbled on megatexture, but this is something that is a bit puzzling. There is a lot of talk of it being better than the traditional approach of having a bunch of textures around and loading them as needed, but how does megatexture avoid this? I've read that it uses streaming and you can just stream bits and pieces of it as opposed to loading each texture individually, but how does that offer better performance, and isn't that just another form of tiling? How do we sample such a texture in a shader? Do we stream part of it into memory and then work on it? I've seen the latest videos of Rage and the textures do look great, but is that just the result of great artists, or does the tech come into play? To sum up: how does it work, why is it great, and how could I do something similar? |
34 | How are minimaps rendered for open worlds? When I've created minimaps for smaller to medium levels before I would use a large texture mapped to a plane for the minimap and world map. The minimap world map camera would then view this texture from a top down view. For larger, open world style environments though I'm facing a performance vs fidelity issue. For example, a large environment requires a large texture to fit everything in, but at the same time the texture also needs to be a high enough resolution for when the player decides to zoom in onto a specific point in the map. I can make the map high resolution with enough detail up close but this creates quite a large texture size. Or I can make a lower res map which is more efficient but it becomes blurry pixelated when zooming in. Is this just a balance that I have to play with, or is there a better way to handle this? For example, creating a simple 3d mesh whose shape would act as the map. Or instead of using a texture using a vector graphic. How is this usually done? |
34 | Why is the standard for displaying missing textures a black and purple checkerboard? In many different games, such as Minecraft and TF2, the default texture is a black and purple checkerboard. Even games that don't use that exact texture incorporate elements of it. Borderlands 2 uses a white and purple checkerboard, and Skyrim uses a solid purple texture. Why is the black and purple checkerboard so prevalent as a default texture? |
34 | What technology does Starcraft 2 use to render its maps? I've got a map that is being procedurally generated at run time and I'm currently investigating methods of rendering this map. I've taken an interest in the look of Starcraft 2 and I'd like some advice on what methods it employs to achieve it. Secondarily, I'd like to see any tutorials, articles, or even source code examples if possible. There are a couple of main things I'd like to get some advice on, but please also feel free to suggest anything else that could help me. Snappable Tilesets: A typical Starcraft map seems to consist of a tileset of models that one can snap together to create the cliffs, ramps and other elevated terrain. What methods do they employ to make them look so natural? What I mean is, it's very hard to spot repetition. Terrain Textures: The terrain textures are so varied and even dynamic (Zerg creep). What methods are used to do this? Thanks. |
34 | Would it be possible to edit character models of a PS1 game, to be played on an emulator? There are some abandoned PS1 oldies out there that I would love to see revamped with improved assets. I have some skills with some modelling programs, so I was wondering if it would be possible to rip some models, with or without their textures, so that I could try to update them, and experience them on an emulator. Is this possible? Even if the edited copies cannot be reinserted into the game, would it still be possible to rip the models out so that I could edit them? |
34 | How to extract the texture from an image onto a mesh? (preamble I think this fits either here or SO with computer vision tag, I chose gamedev because I think you guys are probably really good with textures and meshes, but tell me if my decision was wrong.) I want to accomplish the following Given an image with an object, a 3D mesh very similar to that object, and rendering parameters that render that mesh to the exact location of this object. I want to "extract" the original texture from the image onto the mesh. (So in a later step, I could re render the mesh from another viewpoint (of course some of the texture would be invisible black)). My mesh has uv coordinates for each triangle so I guess I could either "store the texture on the mesh" somehow, or directly backmap it to a 2D texture map. So I guess what I want to do is kind of texture backmapping remapping, kind of the inverse of what is done in game dev when texturing objects. I was having a lot of trouble finding any useful information on Google about what I want to do, so I thought I'd ask here. Maybe I haven't found the right word for it yet. I think there's probably quite a lot of "stuff" involved because pixel in the original image won't exactly correspond to a location on the mesh. |
34 | Texture filtering: is the minification or the magnification filter used when rendering at the exact texture size? Suppose you have a texture where the minification filter is linear, but the magnification filter is nearest neighbor (point filtering). If the texture is rendered at exactly 1:1 pixels, but at a non whole number pixel position, it is neither minified nor magnified. Is there a convention for whether the min or mag filter will be used? What is the justification? |
34 | SDL2 Textures bleeding 1px border around tile maps SDL_RenderCopyEx taking integer arguments: https://forums.libsdl.org/viewtopic.php?t=9486 This post gives a good indication of my question. Basically, if you set SDL2's logicalScale or otherwise and render textures at the native window resolution, they appear fine. However, with tile maps, if you resize the window in any way, you get a bleed where an integer rounding issue creates a 1px border around certain tiles. Is my only option to create a 1px border around all my images to stop this bleed/rounding error? Or a semi transparent border with the main color? What are my options? Is this solved in any of the latest SDL 2.x.y releases? EDIT: A simpler method I have used is reducing my images from 64x64px to 62x62px in SDL2 (not the actual sprite) and using its own sprite as a 1px border, and using render scaling to scale up that 1px, which stops the bleed. It reduces the quality of background images ever so slightly, but it requires no tweaking of any code or sprites... but again, I'm wondering if there's a more elegant solution. |
34 | Does cocos2d cache textures automatically? I know if we want the textures cached we can use the shared texture manager to cache them, but since this is on a mobile platform, why doesn't cocos2d-x just do this for all texture loads? When creating sprites with images, are these textures cached as well? |
34 | DDS load texture and cubemaps I'm trying to render a skybox with a cubemap on it, and to do so I'm using the DDS Texture Loader from the DirectXTex library. I use texassemble to generate the cubemap (texture array of 6 textures) into a DDS file that I load at runtime. I generated a cube "dome" and sample the texture using the position vector of the vertex as the sample coordinates (so far so good), but I always get the same face of the cubemap mapped on the sky. As I look around I always get the same face (and it wobbles a bit if I move the camera). My code: Texture.cpp Texture::Texture(const wchar_t* textureFilePath, const std::string& textureType) : mType(textureType) { CreateDDSTextureFromFile(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(), textureFilePath, &mResource, &mShaderResourceView); CreateDDSTextureFromFileEx(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(), textureFilePath, 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0, D3D11_RESOURCE_MISC_TEXTURECUBE, false, &mResource, &mShaderResourceView); } SkyBox.cpp void SkyBox::Draw() { // set cube map ID3D11ShaderResourceView* resource = mTexture.GetResource(); Game::GetInstance()->GetDeviceContext()->PSSetShaderResources(0, 1, &resource); // set primitive topology Game::GetInstance()->GetDeviceContext()->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST); mMesh.Bind(); mMesh.Draw(); } Vertex Shader: cbuffer Transform : register(b0) { float4x4 viewProjectionMatrix; } float4 main(inout float3 pos : POSITION) : SV_POSITION { return mul(float4(pos, 1.0f), viewProjectionMatrix); } Pixel Shader: SamplerState cubeSampler; TextureCube cubeMap; float4 main(in float3 pos : POSITION) : SV_TARGET { float4 color = cubeMap.Sample(cubeSampler, pos.xyz); return color; } I tried both functions from the DDS loader, but I keep getting the same result. All results I found on the web are about the old SDK toolkits, but I'm using the new DirectXTex lib. |
34 | Why is it that when I render a basic cube, my editor's grid changes too? I have one HLSL file for DirectX11 that only has an input layout for color and position. Then another HLSL file for the simple cube that has position, normal and textures. What I noticed is that when I render the simple cube, the grid also changes and doesn't remain a pure white color. They have different pixel shaders, vertex shaders, constant buffers and different input layout descriptions. Would anyone like to chip in and help get this resolved? This has been puzzling me for a day! |
34 | Rendering Texture Quad to Screen or FBO (OpenGL ES) I need to render the texture to the iOS device's screen or to a render to texture frame buffer object. But it does not show any texture; it's all black. (I am loading the texture from an image myself for testing purposes.) Load texture data: UIImage* image = [UIImage imageNamed:@"textureImage.png"]; GLuint width = FRAME_WIDTH; GLuint height = FRAME_HEIGHT; Create context: void* imageData = malloc(height * width * 4); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast \| kCGBitmapByteOrder32Big); CGColorSpaceRelease(colorSpace); Prepare image: CGContextClearRect(context, CGRectMake(0, 0, width, height)); CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage); glGenTextures(1, &texture); glBindTexture(GL_TEXTURE_2D, texture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); Simple texture quad drawing code mentioned here: bind texture, bind render to texture FBO and then draw the quad. const float quadPositions[] = { 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0 }; const float quadTexcoords[] = { 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0 }; stop using VBO: glBindBuffer(GL_ARRAY_BUFFER, 0); setup buffer offsets: glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), quadPositions); glVertexAttribPointer(ATTRIB_TEXCOORD0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), quadTexcoords); ensure the proper arrays are enabled: glEnableVertexAttribArray(ATTRIB_VERTEX); glEnableVertexAttribArray(ATTRIB_TEXCOORD0); Bind texture and render to texture FBO: glBindTexture(GL_TEXTURE_2D, GLid); Actually wanted to render it to the render to texture FBO, but now testing directly on the default FBO: glBindFramebuffer(GL_FRAMEBUFFER, textureFBO[pixelBuffernum]); draw: glDrawArrays(GL_TRIANGLES, 0, 2 * 3); What am I doing wrong in this code? P.S. I'm not familiar with shaders yet, so it is difficult for me to make use of them right now. |
34 | Automated texture mapping I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties: No stretching (all textures tile appropriately, so there is no stretching or shearing of the texture). The textures display on the correct axis relative to the model they're mapped to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should be matching up with the y axis of the object. Some other faces have upside down letters too). The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off). The model is closed (all faces are completely enclosed by other faces). A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand, and it's probably a hard problem to automatically map textures in general, but since these textures are seamless and I just need uniform coverage, it seems like an easier problem. I'm looking for an algorithmic approach to this that I can apply in general, not a tool that does it. What approach would work for this? Is there an existing one? (I assume so.) |
34 | Faster way to do multi texture blending (HLSL, triplanar texturing and overhang mapping) Right now I'm doing a lot of lerping to blend my textures based on height (under water, mid ground and high ground), normal.y < 0.0 for the under side of the terrain, a fractal blend map for a marble effect, and then triplanar mapping for the slopes. It looks good and runs ok, and I'm getting just the right blending effect where I need it, but I'm just thinking that all the lerping could be bad? I end up with this in my terrain shader: Find the blend value for the blend zones float IsLowGround saturate(max(distance(min(input.WSPosition.y, 350), 350) 300, 0)) float IsHeighGround saturate(max(distance(min(input.WSPosition.y, 1500), 1500) 1250, 0)) Find the flip value float3 flip saturate(float3(normal.x < 0.0, normal.y > 0.0, normal.z < 0.0)) triplanar float3 blend_weights max((abs(normal) 0.2) 0.5f, 0.0f) blend_weights pow(blend_weights, 2.0f) float total (blend_weights.x blend_weights.y blend_weights.z) blend_weights total float2 coord1 float2(input.WSPosition.zy 750) float2 coord2 float2(input.WSPosition.zx 150) float2 coord3 float2(input.WSPosition.xy 700) Marble blend maps float Blend BlendMap.Sample(colorSampler, input.WSPosition.xz 2500).r flat blend float Blend2 BlendMap.Sample(colorSampler, input.WSPosition.zy 6000).r slope blend Color and Normal blending Slope Color With Marble Color lerp(ColorMap.Sample(colorSampler, coord1) blend_weights.x, ColorMap6.Sample(colorSampler, coord1 3) blend_weights.x, Blend2) Color lerp(ColorMap2.Sample(colorSampler, coord3) blend_weights.z, ColorMap6.Sample(colorSampler, coord3 3) blend_weights.z, Blend2) Slope Normal with marble NormalM lerp(NormalMap.Sample(colorSampler, coord1).rgb blend_weights.x, NormalMap6.Sample(colorSampler, coord1 3).rgb blend_weights.x, Blend2) NormalM lerp(NormalMap2.Sample(colorSampler, coord3).rgb blend_weights.z, NormalMap6.Sample(colorSampler, coord3 3).rgb blend_weights.z, Blend2) Height based Color Blending with marble eg. (mid > height) > marble float4 GroundColor lerp(lerp(ColorMap5.Sample(colorSampler, coord2 1.5) blend_weights.y, ColorMap1.Sample(colorSampler, coord2) blend_weights.y, IsHeighGround), ColorMap7.Sample(colorSampler, coord2) blend_weights.y, Blend) Height based Normal Blending with marble eg. (mid > height) > marble float3 GroundNormal lerp(lerp(NormalMap5.Sample(colorSampler, coord2 1.5).rgb blend_weights.y, NormalMap1.Sample(colorSampler, coord2).rgb blend_weights.y, IsHeighGround), NormalMap7.Sample(colorSampler, coord2).rgb blend_weights.y, Blend) Last Color Blending with underside flip eg. (underwater > Ground) > flip Color lerp(ColorMap4.Sample(colorSampler, coord2 2) blend_weights.y, lerp(GroundColor, ColorMap3.Sample(colorSampler, coord2 1.2) blend_weights.y, IsLowGround), flip.y) Last Normal Blending with underside flip eg. (underwater > Ground) > flip NormalM lerp(NormalMap4.Sample(colorSampler, coord2 2).rgb blend_weights.y, lerp(GroundNormal, NormalMap3.Sample(colorSampler, coord2 1.2).rgb blend_weights.y, IsLowGround), flip.y) Here is what it looks like when rendered. |
34 | Texturing an asteroid I'm trying to texture this asteroid so it looks reasonable. I'm missing something though. Following this tutorial, I got this so far. Now I don't understand the next step how do you resolve the seam problems? This icosahedral sphere has a seam running through it that cannot be avoided. Ok. How do I paint the texture so it looks seamless when applied to the object? How do I know which edge connects to what side? I guess it will just be symmetrical? This seems awfully hard to texture a simple sphere. Am I missing some technique here? |
34 | Problem with transparent textures in SFML I have been told that this is kind of a common problem with transparent textures, but I didn't get any further information on how to solve it. In my code, I'm trying to render this texture (a square with rounded corners, where the rounded corners have some alpha). What I get instead is this. Notice those greyish places on the corners of the textures where the rounded corners are supposed to be. What could be causing this? I have a pure white texture, so I don't expect a single pixel to get any darker than the background. All pixels should have at least the color of the background, but as you can see, there is something darker. Zoomed in even more: Any help would be highly appreciated. |
34 | How can I render something like the swirling clouds of Jupiter? The effect should be as similar as possible to this, though it does not need to be at planetary scale. I was thinking that it could be done offline via particles, perhaps by directing their motion in some way. Then rendering the frames to a texture and playing these back in a loop for real time rendering? But I wonder if there is some other real time way... maybe mesh patches shaped like swirls can be moved around, after being textured carefully, to imitate the motion of swirls of gaseous matter? |
34 | How can I achieve UnrealEd's per face texturing in Blender 2.6? Using UnrealEd I can create geometry and assign a material to each face of that geometry. Each face can have its own UV settings. How can I achieve the same functionality using Blender? I've seen the "Texture Face" option mentioned but that seems to be gone in Blender 2.59 ? |
34 | What is the Workflow? Blender Substance Painter Unreal Engine 4 My name is Jose and I'm pretty new to Game Art Development. I know some intermediate UE4 (both C++ and Blueprints), but my weakest point has always been the workflow to import assets from Blender and Substance. Up to this point I have always bought external 3D assets. As far as I know, the workflow is this (but I don't know if I'm correct): BLENDER Create a 3D model in Blender. Create the materials I need for each part of the mesh. UV unwrap it. Export as .fbx SUBSTANCE PAINTER Import the .fbx file into Substance Painter. Paint the 3D model. Export the .obj file AND the texture maps (albedo, roughness, metallic, normal, etc.) UNREAL ENGINE 4 Import the .obj file AND the texture maps. Create a material based on these maps. Apply the material to the model. I have a few questions now: Is this workflow correct? Or is there something I'm misunderstanding? What model do I have to import into UE4? The .fbx from Blender? Or the .obj from Substance? Is there any difference between these two? When exporting the textures from Substance Painter, I can choose from a dropdown called config which game engine I'm exporting these textures to (Unity, Unreal, Lumberyard, Cryengine, etc.). What exactly does this option do? What is the fundamental difference between exporting textures to Unreal and to Unity, for example? |
34 | Is quadrilinear texture sampling hardware supported? If you have a volume texture with mipmaps, GL_LINEAR_MIPMAP_LINEAR texture sampling will perform quadrilinear texture sampling. Is that implemented in hardware like bilinear texture sampling is? Or does the driver just do two trilinear texture samples and interpolate those results for you? Is trilinear sampling even supported by hardware? |
34 | How to deal with texture coordinates without range? I have a set of given texture coordinates (u,v coordinates), but they range over (-inf, inf), contradicting the [0, 1] convention. I tried to do a rescaling by (value - min) / (max - min). But suppose I have a rectangle composed of two triangles and four vertices, whose u,v coordinates are 0.260944 0.490887, 3.619507 0.490887, 3.619507 3.043434, 0.260944 3.043434. After scaling, the coordinates would be mapped to exactly 0 and 1, resulting in the wrong texture. So how should I deal with this kind of texture coordinates? (See the sketch after the table.) |
34 | How do I get an instance of KX_PolygonMaterial in Blender? I've got a question concerning using Python in Blender's Game Engine. Before I start, I want to state that I'm trying to change the color of an object in Blender's game engine. To do this, I'm attempting to find a way to update the texture of the object (I basically want two or three states: red, (yellow), green). What I'm doing right now is: scene = GameLogic.getCurrentScene() pingMeter = scene.objects['Ping Meter'] mesh = pingMeter.meshes materials = mesh[0].materials material = materials[0] However, when I do print(material.__class__.__name__) it outputs KX_BlenderMaterial. Shouldn't I be getting KX_PolygonMaterial if I'm running the Blender Game Engine? Is there any way to change color or texture with KX_BlenderMaterial, because I can't find anything in the documentation? Can I get an instance of KX_PolygonMaterial out of the code above? ...or should I just take a different approach altogether? Thanks! EDIT: I'm using Blender 2.65, which uses Python 3, in case anyone is wondering. |
34 | Draw Ordering with Glowing Objects This is mostly a general design theory question, but for reference the bulk of my game is in JavaScript. I can draw images to the screen that layer on top of each other in the order that I please. One of the layers, I call it the light layer, is simply a series of black squares that change opacity based on the light levels in the game. During night, these black squares are almost opaque (unless a light source is affecting them) and during the day they are nearly invisible. Glowing things, like bulbs or radar console screens, are drawn above the light layer to make it look as if they're exempt from the light effect. (Glowing) However, the turret head is a non glowing object that needs to appear above the glowing objects. For example, it might partially cover the radar console screen or light bulbs. However, it isn't meant to glow. This is where the paradox occurs. The turret head needs to be below the light layer, but also above the glowing elements. Any ideas of how I could solve such a paradox? |
34 | Display dynamic texture image on inside of sphere I'm displaying a ring wherever the camera 'looks'. When the camera faces the ground (green) the ring image always looks correct, see image. However, my scene is within a sphere (purple)... and when the ring reaches the sphere it does not display correctly. See image. In this second image, the ring should be facing the camera, since it's on a surface that is in front of the camera (not on the ground as the image appears). Sorry if my images are poor representations. I basically have a flat plane which is within a sphere. The sphere has flipped normals to be able to display from within the sphere. I need the ring to "attach" or bend to the shape of the object it is being drawn against. Any help is greatly appreciated. EDIT (more details): You can't raycast to a collider from within the same collider, so I'm using a raycast in the opposite direction. Code: Ray ray = new Ray(m_Camera.position, m_Camera.forward) // get ray Vector3 forwardVector = ray.GetPoint(100) // get a point 100 away ray = new Ray(forwardVector, m_Camera.forward) // create new ray from this point ray.direction = -ray.direction // reverse ray direction if (Physics.Raycast(ray, out hit, m_RayLength, m_ExclusionLayers)) imageTransform.rotation = Quaternion.FromToRotation(Vector3.forward, hit.normal) |
34 | TexturePacker ignores extensions I'm using TexturePacker in one of my games, though when packing a bunch of textures their extension is kept in the data file. So when I want to find a texture I need to search for "image.png" instead of just "image". Is there an option to let TexturePacker ignore the extensions of my source images in the data file? Solved: So if anyone else wants this, here's the exporter I made: https://www.box.com/s/bf12q1i1yc9jr2c5yehd Just extract it into "C:\Program Files (x86)\CodeAndWeb\TexturePacker\bin\exporters\UIToolkit No Extensions" (or something similar) and it should show up as an exporter. |
34 | Is it possible to use unnormalized texture coordinates from a GLES2 GLSL fragment shader? I want to look up a texel from my GLES2 GLSL fragment shader using unnormalized texture coordinates (0..w, 0..h instead of 0..1, 0..1). The reason is that this texture is used as a look up table and I get precision problems with normalized coordinates. I see that GL_TEXTURE_RECTANGLE is not supported without extensions and neither is texelFetch(), so I have ruled out those options. Thanks! |
34 | Rasterization Rules and States This thread directly concerns lightmap generation however, indirectly, the rasterization of polygons by the GPU. I am currently generating lightmaps using a pixel shader. To the shader I send 3 lightmap UV coordinates per mesh face. Those UV coordinates are directly rendered onto a lightmap texture( by setting the lightmap as the render target ). The vertex shader looks like this Vertex Declaration struct VS INPUT Std float4 Position POSITION0 float3 Normal NORMAL0 float2 tUV TEXCOORD0 diffuse UV coordinates float2 lUV TEXCOORD1 lightmap UV coordinates LightMap Pass PS INPUT Std LightMapPass Vertex Shader( VS INPUT Std Input ) PS INPUT Std Output ( PS INPUT Std )0 Output.Position float4( ( Input.lUV.x 2 ) 1, ( Input.lUV.y 2 ) 1, 0, 1 ) Output.Normal mul( Input.Normal, mW ) Output.Pos3 mul( Input.Position, mW ) Output.tUV Input.tUV Output.lUV Input.lUV return Output The pixel shader returns a color based on light contribution. The result of the lightmap generation for this test mesh looks like this The light for this scene is floating in the center of the room... The problem is that the lightmap has 'cracks' due to the rasterization of polygons by the GPU. It appears as though that for a given face lightmap pixels are not included during rasterization because their pixel centers do not fall within the bounds of the face's UV coordinates despite the fact that the face overlaps those pixels. As a result a black( unset ) pixel is rendered and effectively blackens the diffuse color of the mesh during texture modulation. Here is a screenshot Here is a screenshot of an auxiliary window which shows both the lightmap and a selected mesh face. This is generated by rendering the UV coordinates of the mesh face over a screen quad of the lightmap As you can see, the mesh face( green ) extends beyond the pixels rendered during generation of the lightmap. It seems as though those pixels were excluded during rasterization. Most of the problem faces are narrow in dimension. I thought that I could fix this by increasing the size of the lightmap, however, this doesn't always work... Here is the entire lightmap Can I change a rasterization state to allow pixels under certain conditions? EDIT Lightmap and Deferred Shading Cube Pics |
34 | How to read BC4 texture in GLSL? I'm supposed to receive a texture in BC4 format. In OpenGL, I guess this format is called GL_COMPRESSED_RED_RGTC1. The texture is not really a "texture", more like data to handle in the fragment shader. Usually, to get colors from a texture within a fragment shader, I do: uniform sampler2D TextureUnit; void main() { vec4 TexColor = texture2D(TextureUnit, vec2(gl_TexCoord[0])); (...) } the result of which is obviously a vec4, for RGBA. But now, I'm supposed to receive a single float from the read. I'm struggling to understand how this is achieved. Should I still use a texture sampler, and expect the value to be in a specific position (for example, within TexColor.r?), or should I use something else?
34 | Texture atlas vs. array texture how differently are they handled by CPU and GPU and how that impacts performance? Unity 5.4 (currently in beta) will bring a much awaited feature (since 2013), that is, array textures, in the same vein as OpenGL's ArrayTexture. However, after doing some reading about array textures and texture atlases, I still can't quite understand the technical differences in their usage by CPUs and GPUs. So, to be more specific, I would like to ask for an explanation of the main differences between how texture atlases and texture arrays are dealt with by the CPU and GPU and, most importantly, how such differences can impact performance and memory handling (e.g. how texture arrays can be more performant than texture atlases, etc). If technical details on Unity's implementation are lacking due to its unfortunate closed-sourceness, I would be happy enough with an answer regarding OpenGL's ArrayTexture.
34 | Texture to Painted Vertex Algorithm I want to convert some game textures into coloured vertex arrays. Is there a known algorithm to transform texture bitmap data into an optimised vertex array? Much like how the Homeworld skyboxes were made. EDIT I don't need the dome part; I would like to do these flat.
34 | Awesomium custom surface scroll I have a custom Awesomium Surface implementation, and I need to create a Scroll function for it. The function asks me to manually move some pixels of my texture by the indicated X and Y offset. How can I move/access pixels inside an SDL_Texture, since SDL_LockTexture() provides write-only pixels?
34 | How do AAA games use this texture in the tone mapping shader? I found that Battlefield 3 as well as Saint's Row the Third use this texture in their final tone mapping stage. Can anyone share a link to an article about how this texture is used? UPDATE As there are hardly any examples of Color Grading implementations on the net, I will post my quick and dirty sample for XNA 4.0. This does nothing but build the default 2D colormap from scratch on the GPU, copy its contents into a 3D texture on the CPU and perform the actual color correction with the shader code from the article provided in the answer. XNA 4.0 Color Correction Sample If you want to know what you can do with this, look here: http://the-witness.net/news/2012/08/fun-with-in-engine-color-grading/ In short, the author explains there how you can basically take a screenshot of your normal in-game image with the default color grading palette on top, perform screenshot color correction in Photoshop, cut and copy the deformed color correction palette into the engine, and have that color correction applied to every in-game scene automatically.
34 | Are global shader variables slower than texture look ups? I want to send quite a bit of data to the GPU; the data will never change (or will change very rarely) after it has been created. Is there a performance impact to using global shader variables, or should I pack the data into a texture and perform look-ups?
34 | Faster way to do multi texture blending(HLSL, triplanar texturing and overhang mapping) Right now I'm doing a lot of lerping to blend my textures based on height(under water, mid ground and height ground), normal.y 0.0 for the under side of the terrain, fractal blend map for a marble effect and then triplanar mapping for the slopes. It looks good and runs ok and I'm getting just the right blending effect where I need but I'm just thinking that all the lerping could be bad? I end up with this in my terrain shader Find the blend value for the blend zones float IsLowGround saturate(max(distance(min(input.WSPosition.y, 350), 350) 300, 0)) float IsHeighGround saturate(max(distance(min(input.WSPosition.y, 1500), 1500) 1250, 0)) Find the flip value float3 flip saturate(float3(normal.x lt 0.0, normal.y gt 0.0, normal.z lt 0.0)) triplanar float3 blend weights max((abs(normal) 0.2) 0.5f, 0.0f) blend weights pow(blend weights,2.0f) float total (blend weights.x blend weights.y blend weights.z) blend weights total float2 coord1 float2(input.WSPosition.zy 750) float2 coord2 float2(input.WSPosition.zx 150) float2 coord3 float2(input.WSPosition.xy 700) Marble blend maps float Blend BlendMap.Sample(colorSampler, input.WSPosition.xz 2500).r flat blend float Blend2 BlendMap.Sample(colorSampler, input.WSPosition.zy 6000).r slope blend Color and Normal blending Slope Color With Marble Color lerp(ColorMap.Sample(colorSampler, coord1) blend weights.x, ColorMap6.Sample(colorSampler, coord1 3) blend weights.x,Blend2) Color lerp(ColorMap2.Sample(colorSampler, coord3) blend weights.z, ColorMap6.Sample(colorSampler, coord3 3) blend weights.z, Blend2) Slope Normal with marble NormalM lerp(NormalMap.Sample(colorSampler, coord1).rgb blend weights.x, NormalMap6.Sample(colorSampler, coord1 3).rgb blend weights.x, Blend2) NormalM lerp(NormalMap2.Sample(colorSampler, coord3).rgb blend weights.z, NormalMap6.Sample(colorSampler, coord3 3).rgb blend weights.z, Blend2) Height based Color Blending with marble eg. (mid gt height) gt marble float4 GroundColor lerp(lerp(ColorMap5.Sample(colorSampler, coord2 1.5) blend weights.y, ColorMap1.Sample(colorSampler, coord2) blend weights.y, IsHeighGround), ColorMap7.Sample(colorSampler, coord2) blend weights.y, Blend) Height based Normal Blending with marble eg. (mid gt height) gt marble float3 GroundNormal lerp(lerp(NormalMap5.Sample(colorSampler, coord2 1.5).rgb blend weights.y, NormalMap1.Sample(colorSampler, coord2).rgb blend weights.y, IsHeighGround), NormalMap7.Sample(colorSampler, coord2).rgb blend weights.y, Blend) Last Color Blending with underside flip eg. (underwater gt Ground) gt flip Color lerp(ColorMap4.Sample(colorSampler, coord2 2) blend weights.y, lerp(GroundColor, ColorMap3.Sample(colorSampler, coord2 1.2) blend weights.y, IsLowGround), flip.y) Last Normal Blending with underside flip eg. (underwater gt Ground) gt flip NormalM lerp(NormalMap4.Sample(colorSampler, coord2 2).rgb blend weights.y, lerp(GroundNormal, NormalMap3.Sample(colorSampler, coord2 1.2).rgb blend weights.y, IsLowGround), flip.y) Here is what it looks like when rendered |
34 | How to load and display an image in OpenGL ES 3.0 using C++ I'm trying to make a simple app in Android Studio using the NDK and JNI to call C++ code that loads and displays an image. I have managed to create the surface and draw a simple triangle already. Now, I'm looking for a way to load and display an image in OpenGL ES 3.0 using C++. I have searched around, but everything I found is either missing some important function or written in Java. It would be great if someone could guide me with a simple example, thanks in advance ).
34 | How to correctly export UV coordinates from Blender Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the obj file because the texture is mapped differently in my game engine. And in Blender I've played with the texture panel and it's mapping options and have noticed it doesn't appear to affect the exported obj file's uv coordinates. So I guess my question is, is there something I need to do prior to exporting in order to bake the correct UV coordinates into the obj file? Or something else that needs to be done to massage the texture coordinates for sampling. Or any thoughts at all of what could be going wrong? (Also here is a screen shot of my diffused texture in blender and the game engine. As you can see in the image, I have the same problem with a simple test cube not getting correct uv's either) Edit Added my geometry pass shader source code to show how I'm rendering and sampling the diffuse texture. I'm simply using the UV coordinates provided by the obj file and an anisotropic sampler. Texture2D diffuseTexture register(t0) SamplerState textureSampler register(s0) cbuffer ObjectTransformBuffer register(b0) float4x4 worldTransformMatrix, Translates to world space. cameraTransformMatrix Translates to camera space. (not including rotation) cbuffer ScreenTransformBuffer register(b1) float4x4 viewProjectionMatrix Rotates to camera space and then projects to screen space. cbuffer MaterialBuffer register(b2) float3 materialDiffuseAlbedo float materialSpecularExponent float3 materialSpecularAlbedo bool isTextured, isLighted struct vsInput float3 positionLS POSITION float3 textureLS TEXTURE Input signature uses a uvw coordinate but w is 0 and not needed for any geo pass textures. float3 normalLS NORMAL struct vsOutput float4 positionCS SV POSITION float2 textureLS TEXTURE float3 normalWS NORMAL float3 positionWS POSITION struct psOutput float4 positionWS SV Target0 Surface positions. float4 normalWS SV Target1 Surface normals. float4 diffuseAlbedo SV Target2 Surface diffuse albedo. float4 specularAlbedo SV Target3 Surface specular albedo. vsOutput VS( in const vsInput in ) vsOutput out out.positionCS mul(float4( in.positionLS, 1.0f), mul(cameraTransformMatrix, viewProjectionMatrix)) out.positionWS mul(float4( in.positionLS, 1.0f), worldTransformMatrix).xyz out.normalWS mul(float4( in.normalLS, 0.0f), worldTransformMatrix).xyz out.textureLS in.textureLS.xy w coordinate is 0 and not needed. return out psOutput PS( in vsOutput in ) psOutput out Use the alpha channel to indicate specular light intensity. out.normalWS float4(normalize( in.normalWS), materialSpecularExponent) float lightEffectModifier if (isLighted) lightEffectModifier 1.0f else lightEffectModifier 0.0f Use the alpha channel to indicate whether the surface is affected by light for the light pass. out.positionWS float4( in.positionWS, lightEffectModifier) out.diffuseAlbedo float4(materialDiffuseAlbedo, 1.0f) if (isTextured) out.diffuseAlbedo diffuseTexture.Sample(textureSampler, in.textureLS.xy) out.specularAlbedo float4(materialSpecularAlbedo, 1.0f) return out |
34 | Spherical fractal noise generator in shader I have a growing sphere in space, and I thought of having a procedurally generated texture over it. Since it is growing, I thought a fractal would be a great choice, because more details would be visible the larger the sphere gets (and I could mess with some parameter over time to have it animated). A quick Mandelbrot implementation in GLSL showed it would be too expensive on the devices I am targeting; also, I don't know how to map a cool looking fractal over the complex plane onto a sphere without distortions (I expect the players to fly around this sphere in every direction, so there should be no "glued" edges or collapsed points), nor do I have the background to devise or project a fractal over the spherical surface myself (it has probably been done before, but I could not find it). So, weighing the requisites of the procedural texture (fast to run on low end mobile GPUs; defined over a spherical surface domain; growing in detail as it grows in size; possible to animate; and, as a bonus, cool looking, of course), I thought it might be impossible within the constraints. But since I am no expert in this fractal thing, I thought I could ask here first before scrapping the idea. Maybe it is really not a fractal I need, and there is some other kind of noise with growing details I could use. Do you know of such a noise generation procedure? Do you know of any noise generator with uniform distribution over spherical surfaces, or any fractals whose domain is a sphere? Can you suggest any alternatives for my situation?
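One standard way to satisfy the "no seams on a sphere" requirement above is to evaluate a 3D noise function at the surface point itself and sum octaves (fractal Brownian motion), so detail grows simply by adding octaves as the sphere grows. The sketch below only illustrates the octave loop; noise3() stands in for whatever 3D value/simplex noise implementation is available and is an assumed function, not a specific library call.

    struct Vec3 { float x, y, z; };

    // Assumed to exist: any 3D gradient/value noise returning roughly [-1, 1].
    float noise3(const Vec3& p);

    // Fractal Brownian motion evaluated directly at a point on the sphere surface,
    // so there are no texture seams and no pole pinching.
    float fbmOnSphere(const Vec3& surfacePoint, int octaves, float lacunarity, float gain)
    {
        float sum = 0.0f;
        float amplitude = 1.0f;
        float frequency = 1.0f;
        for (int i = 0; i < octaves; ++i)
        {
            Vec3 p { surfacePoint.x * frequency, surfacePoint.y * frequency, surfacePoint.z * frequency };
            sum += noise3(p) * amplitude;
            frequency *= lacunarity;   // each octave adds finer detail
            amplitude *= gain;         // and contributes less overall
        }
        return sum;
    }

The same loop ports to a GLSL fragment shader almost verbatim, and animating the result can be as simple as feeding a time-offset point into noise3().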
34 | Texture filtering Is the minification or the magnification filter used when rendering at the exact texture size? Suppose you have a texture where the minification filter is linear, but the magnification filter is nearest neighbor (point filtering). If the texture is rendered at exactly 1:1 pixels, but at a non-whole-number pixel position, it is being neither minified nor magnified. Is there a convention for whether the min or mag filter will be used? What is the justification?
34 | DirectX11 Texturing Terrain Mesh with Shared Vertices I'm trying to create and texture a terrain mesh in DirectX11. I've managed to create the mesh itself, but I don't know how I should do the texturing. Let me start by explaining what I'm doing so far. I have a vertex structure that looks like this: struct Vertex { XMFLOAT3 position; XMFLOAT4 color; }; Then I create all the vertices for the terrain mesh: for (int z = 0, index = 0; z < terrainHeight; z++) { for (int x = 0; x < terrainWidth; x++, index++) { vertices[index].position = XMFLOAT3(x, 0.0f, z); vertices[index].color = XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f); } } Then I create all the triangles for the terrain mesh with indices: for (int z = 0, index = 0; z < terrainHeight - 1; z++) { for (int x = 0; x < terrainWidth - 1; x++) { indices[index++] = (z) * terrainWidth + (x); indices[index++] = (z + 1) * terrainWidth + (x); indices[index++] = (z + 1) * terrainWidth + (x + 1); indices[index++] = (z) * terrainWidth + (x); indices[index++] = (z + 1) * terrainWidth + (x + 1); indices[index++] = (z) * terrainWidth + (x + 1); } } With 256x256 vertices I get this result (rendering in wireframe) I'd now like to change the color to a texture, so I change the Vertex structure to this: struct Vertex { XMFLOAT3 position; XMFLOAT2 texture; }; /* Changed the color to a texture coordinate. */ And this is where I get stuck. Since each vertex now has a texture coordinate (U, V) I'd like to set it up like this But each vertex can only hold one texture coordinate. As you can see, the 2 vertices shared between square A and B need 2 different texture coordinates each in order to map it correctly. If I loop through each pair of triangles, then when I reach square B I'll overwrite the texture coordinates that were correct for square A. So therefore I have some questions: Am I doing this in the "right" way? How do modern games do it? Do they texture each square or the entire mesh? If I'd like to texture each square, how can I solve this problem?
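One common way around the shared-vertex dilemma described above is not to assign per-square 0..1 UVs at all, but to give every vertex a single UV derived from its grid position and let the sampler's wrap (tiling) address mode repeat the texture across the terrain. The sketch below is only an illustration of that idea, not the asker's code; the tileFactor value and struct layout are assumptions.

    // Assumed vertex layout from the question, with a UV instead of a color.
    struct Vertex { XMFLOAT3 position; XMFLOAT2 texture; };

    // One UV per vertex: normalized grid position scaled by a tiling factor.
    // With a sampler using D3D11_TEXTURE_ADDRESS_WRAP for AddressU/AddressV,
    // the texture repeats tileFactor times across the whole terrain.
    const float tileFactor = 32.0f; // hypothetical value, tune to taste
    for (int z = 0, index = 0; z < terrainHeight; z++)
    {
        for (int x = 0; x < terrainWidth; x++, index++)
        {
            vertices[index].position = XMFLOAT3((float)x, 0.0f, (float)z);
            vertices[index].texture  = XMFLOAT2((float)x / (terrainWidth  - 1) * tileFactor,
                                                (float)z / (terrainHeight - 1) * tileFactor);
        }
    }

Because neighbouring quads share the same vertex UVs, the texture lines up seamlessly across square boundaries; mapping each square to the full 0..1 range is only needed if every quad must show the entire texture, which would indeed require duplicated vertices.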
34 | Is point texture filtering the same as nearest? Is point texture filtering the same as nearest (different names for the same technique)? |
34 | How do I generate coins in a pattern? In my game you collect coins (surprise!). At the moment I generate them like this: find a random position given a rectangle (e.g. the screen size) and generate a coin; possible positions for our next coin are left and right of this coin; if both are available, choose at random; if a coin exists in one position, use the other; if neither is available, skip this step; do the same for the up and down positions of the new coin; repeat this sequence for all available coins. This works fine, but I would like to create custom shapes with my coins, like arrows, stars etc. This got me thinking about how I could achieve this. One way I thought of was to use a small texture where each coloured pixel represents the position of a coin. So that a picture like this can be used to generate an array of coin position coordinates in any framework supporting textures. I'm pretty sure this can be done, but was wondering if anyone has tried this or something totally different for generating coins or any other objects in a game. Ideally the game would involve several different textures and choose them at random, and combine this with the random scattering from simple algorithms such as the one I have above.
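A rough sketch of the "texture as coin layout" idea floated above, assuming the image has already been decoded into a tightly packed RGBA8 pixel array; the Coin struct and the cellSize spacing are made up for illustration.

    #include <cstdint>
    #include <vector>

    struct Coin { float x, y; };

    // Scan an RGBA8 image; every pixel with non-zero alpha becomes a coin,
    // placed on a grid spaced cellSize world units apart.
    std::vector<Coin> CoinsFromImage(const std::uint8_t* rgba, int width, int height, float cellSize)
    {
        std::vector<Coin> coins;
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                const std::uint8_t alpha = rgba[(y * width + x) * 4 + 3];
                if (alpha > 0)
                    coins.push_back({ x * cellSize, y * cellSize });
            }
        }
        return coins;
    }

The colour channels remain free to encode extra data (coin value, type, which variant to spawn), so the pattern files stay easy to author in any image editor.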
34 | DragonBones import question Most DragonBones tutorials I've found seem to have you importing separate files at the start of a project, but the example projects use a single PNG texture containing all the parts of the final armature, along with an accompanying JSON file that says how to chop up the texture to get at each part. How would you create this pair of files? |
34 | How do I create differently sized texture atlases for different screen sizes? I am beginning game development and using texture atlases. I've created textures based on the resolution 1920x1080, so I created a 1024x1024 texture atlas for storing multiple graphics. If the game is played on an 800x480 device, the atlas will be too big to load in memory. An atlas of 512x512 would be enough, and on devices with 480x320 resolution the game might not even work due to the different texture size. How can I resize the atlas to save memory? Can I use different texture atlases for different screen sizes? I just want to know how other game devs do it.
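A common answer is to export the same atlas at several scales (most packing tools can emit scaled variants automatically) and pick one at load time based on the actual screen size. The snippet below is only a sketch of that selection logic; the file names and width thresholds are assumptions, not a specific tool's convention.

    #include <string>

    // Pick an atlas variant based on the screen width the game actually runs at.
    // The art is authored for 1920x1080, which maps to the full-size 1024 atlas.
    std::string SelectAtlas(int screenWidth)
    {
        if (screenWidth >= 1536) return "atlas_1024.png"; // e.g. 1920x1080 and up
        if (screenWidth >= 720)  return "atlas_512.png";  // e.g. 800x480
        return "atlas_256.png";                           // e.g. 480x320
    }

As long as the atlas data file stores coordinates normalized to 0..1 (or the loader rescales pixel coordinates by the variant size), the same layout data works for every variant; only the pixel dimensions change.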
34 | reading from texture2d resource in directx11 Hi, I am trying to read data from a resource, which I used to do without any problems, but suddenly it is not working. First I made an immutable resource that has data in it, which here is XMFLOAT4(1,1,1,1). Next I made a staging resource for reading. Lastly, I called Map/Unmap to read and store the data into outputArr. (All HRESULTs are checked already.) int WIDTH = 10, HEIGHT = 2; ID3D11Texture2D* resource; /* create texture */ D3D11_TEXTURE2D_DESC texDesc; texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE; texDesc.Usage = D3D11_USAGE_IMMUTABLE; texDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; texDesc.Width = WIDTH; texDesc.Height = HEIGHT; texDesc.CPUAccessFlags = 0; texDesc.ArraySize = 1; texDesc.MipLevels = 1; texDesc.SampleDesc.Count = 1; texDesc.SampleDesc.Quality = 0; texDesc.MiscFlags = 0; XMFLOAT4* initValues = new XMFLOAT4[WIDTH * HEIGHT]; for (int i = 0; i < WIDTH * HEIGHT; i++) initValues[i] = XMFLOAT4(1,1,1,1); D3D11_SUBRESOURCE_DATA data; data.pSysMem = initValues; data.SysMemPitch = sizeof(XMFLOAT4) * WIDTH; data.SysMemSlicePitch = 0; device->CreateTexture2D(&texDesc, &data, &resource); ID3D11Texture2D* staging; /* create texture for reading */ D3D11_TEXTURE2D_DESC stgDesc; stgDesc.BindFlags = 0; stgDesc.Usage = D3D11_USAGE_STAGING; stgDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; stgDesc.Width = WIDTH; stgDesc.Height = HEIGHT; stgDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ; stgDesc.ArraySize = 1; stgDesc.MipLevels = 1; stgDesc.SampleDesc.Count = 1; stgDesc.SampleDesc.Quality = 0; stgDesc.MiscFlags = 0; device->CreateTexture2D(&stgDesc, nullptr, &staging); XMFLOAT4* outputArr = new XMFLOAT4[WIDTH * HEIGHT]; /* READ */ dContext->CopyResource(staging, resource); D3D11_MAPPED_SUBRESOURCE mappedResource; ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE)); dContext->Map(staging, 0, D3D11_MAP_READ, 0, &mappedResource); outputArr = reinterpret_cast<XMFLOAT4*>(mappedResource.pData); std::vector<XMFLOAT4> testV; for (int y = 0; y < HEIGHT; y++) { for (int x = 0; x < WIDTH; x++) { int idx = y * WIDTH + x; testV.push_back(outputArr[idx]); } } dContext->Unmap(staging, 0); And it turns out that only when WIDTH is a multiple of 16 (HEIGHT doesn't seem to matter here) does it copy the data correctly into ALL elements of the array; otherwise it fills the array with 0 until the next multiple of 16. For example, if width/height is 10/2, the first 10 elements of outputArr have proper data and the next 6 elements are just 0, then another 10 elements with data and 6 elements with 0, and so on. I haven't had any problems dealing with resources before, and I'm still struggling. My humble assumption is that there might be a specific alignment requirement on the width of the resource that I'm missing, or a silly mistake in my process. I hope someone can find something in this question. Thanks
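For what it's worth, the symptom described above (the data only lines up when the width is a multiple of 16) is exactly what happens when the row pitch returned by Map is ignored: the driver may pad each row of a staging texture, so D3D11_MAPPED_SUBRESOURCE::RowPitch can be larger than WIDTH * sizeof(XMFLOAT4). A hedged sketch of a pitch-aware read-back, reusing the names from the post:

    // Read back row by row, honouring the driver-chosen RowPitch.
    dContext->CopyResource(staging, resource);

    D3D11_MAPPED_SUBRESOURCE mappedResource = {};
    dContext->Map(staging, 0, D3D11_MAP_READ, 0, &mappedResource);

    const BYTE* src = reinterpret_cast<const BYTE*>(mappedResource.pData);
    std::vector<XMFLOAT4> testV(WIDTH * HEIGHT);
    for (int y = 0; y < HEIGHT; ++y)
    {
        // Each row starts RowPitch bytes after the previous one,
        // which is not necessarily WIDTH * sizeof(XMFLOAT4) bytes.
        const BYTE* row = src + y * mappedResource.RowPitch;
        memcpy(&testV[y * WIDTH], row, WIDTH * sizeof(XMFLOAT4));
    }
    dContext->Unmap(staging, 0);

With 16-byte texels and a width of 10, a row pitch padded to 256 bytes would produce exactly the observed pattern of 10 valid values followed by 6 zeros.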
34 | When should a bullet texture be loaded in XNA? I'm making a SpaceWar! esque game using XNA. I want to limit my ships to 5 active bullets at any time. I have a Bullet DrawableGameComponent and a Ship DrawableGameComponent. My Ship has an array of 5 Bullet. What is the best way to manage the Bullet textures? Specifically, when should I be calling LoadTexture? Right now, my solution is to populate the Bullet array in the Ship's constructor, with LoadTexture being called in the Bullet constructor. The Bullet objects will be disabled not visible except when they are active. Does the texture really need to be loaded once for each individual instance of the bullet object? This seems like a very processor intensive operation. Note This is a small scale project, so I'm OK with not implementing a huge texture management framework since there won't be more than half a dozen or so in the entire game. I'd still like to hear about scalable solutions for future applications, though. |
34 | Very slow direct3D texture sampling So I'm writing a small game using Direct3D 9 and I'm using multitexturing for the terrain. All I'm doing is sampling 3 textures and a blend map and getting the overall color from the three textures based on the color channels from the blend map. Anyway, I am getting a massive frame rate drop when I sample more than 1 texture; I'm going from 120 fps to just under 50. This is the HLSL code responsible for the slowdown: float3 ground = tex2D(GroundTex, multiTex).rgb; float3 stone = tex2D(StoneTex, multiTex).rgb; float3 grass = tex2D(GrassTex, multiTex).rgb; float3 blend = tex2D(BlendMapTex, blendMap).rgb; Am I doing it wrong? If anyone has any info or tips about texture sampling or anything, that would be nice. Thanks.
34 | Issue with Mapping Textures to Models in Blender I've been trying to texture a model using Blender, but when I draw on the UV Editor it doesn't show up on the model, and I can't draw on the model itself. I've tried saving the image and the 3D View is set to Texture. Everything seems to be in order and I've followed several tutorials, but none of them seem to work with the version I'm using (2.64 update was necessary for import plugin) and I'm absolutely stumped. How can I draw textures to the model? If not within Blender itself, how do I export import the textures? EDIT Vertex Paint works, though it is insufficient for my purposes. In addition, moving to the rendered view produces a solid color model with none of the applied textures. |
34 | How does Texture Mapping work? I read the answers from What exactly is UV and UVW Mapping? and How does UVW texture mapping work?, which are quite nice, but I am still not 100% sure if I understand correctly. Let's start with a 2D example. So, say I have a triangle (obviously described by 3 vertices). Now my question is, how do I convert an (x,y) coordinate to the (u,v) coordinate of my texture? Since x,y could be any value between 0 and n, with n being any real number, considering that it is in object space. But my texture coordinates are between 0 and 1. How do I know how to map, let's say, (3,4) to (u,v)? If I know how to map the object coordinates to the texture coordinates, it is easy to interpolate the values, I assume (either using bilinear interpolation or barycentric interpolation). And then how would this work for 3D? Let's say in this case we have a pyramid with 5 vertices (4 bottom, 1 tip). I guess the procedure would be similar, with the exception that I now have an additional depth value. But how does the mapping of a 2D texture work on a 3D object when I don't have nice flat surfaces like on a pyramid, but instead have a curved surface, like a teapot? I hope I'm clear in my questions. I'm still a little confused myself. I just don't quite get the mathematical background of texture mapping. It would be enough if you could point me to some website with a good explanation, maybe with clear graphics and a step by step description. Thanks for your time!
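For the 2D triangle case asked about above, the usual answer is that a (u,v) is not computed from the raw (x,y) at all: each vertex stores its own texture coordinate, and any interior point gets its (u,v) by barycentric interpolation of the vertex values. A small worked formulation (standard rasterization math, not tied to any particular engine):

    P = \lambda_1 V_1 + \lambda_2 V_2 + \lambda_3 V_3, \qquad \lambda_1 + \lambda_2 + \lambda_3 = 1, \quad \lambda_i \ge 0

    (u,v)_P = \lambda_1 (u_1,v_1) + \lambda_2 (u_2,v_2) + \lambda_3 (u_3,v_3)

So a point such as (3,4) only has a texture coordinate if it lies inside (or on) some triangle, and that coordinate comes from the barycentric weights of the surrounding vertices, not from the point's absolute position. In 3D under perspective projection the same idea is applied to u/w, v/w and 1/w and divided back per pixel (perspective-correct interpolation), and a curved surface like a teapot is just many small triangles, each carrying per-vertex UVs that were authored during UV unwrapping.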
34 | Using tex2Dlod to determine the appropriate size of the texture We have a top down game with a fixed camera position. We also use an orthographic projection, so every model in the game will always be viewed from the very same angle and from the very same distance. The question we want to answer is what size our textures should be for each model. I tried turning off mipmapping and clearly saw that the game does not use the highest mip level when drawing our stuff. I also know that tex2Dlod can be used to specify which mip texture level to use for sampling, so my plan is to gradually change this value to manually specify lower and lower mip levels until I see that decreasing it further reduces the quality of the picture. The problem is I cannot figure out how to set the mip level. The value passed to the tex2Dlod function is a float4, and as I understood it, x and y hold the texture coordinates and w holds the mip level. But it should be set in fractional representation and I cannot know what the step size is. So basically I need to know what number I need to send into the function to gradually switch mip levels.
34 | iOS pass UIImage to shader as texture I am trying to pass a UIImage to a GLSL shader. The fragment shader is: varying highp vec2 textureCoordinate; uniform sampler2D inputImageTexture; uniform sampler2D inputImageTexture2; void main() { highp vec4 color = texture2D(inputImageTexture, textureCoordinate); highp vec4 color2 = texture2D(inputImageTexture2, textureCoordinate); gl_FragColor = color * color2; } What I want to do is send images from the camera and do a multiply blend with a texture. When I just send data from the camera, everything is fine. So the problem should be with sending another texture to the shader. I am doing it this way: - (void)setTexture:(UIImage *)image forUniform:(NSString *)uniform { CGSize sizeOfImage = [image size]; CGFloat scaleOfImage = [image scale]; CGSize pixelSizeOfImage = CGSizeMake(scaleOfImage * sizeOfImage.width, scaleOfImage * sizeOfImage.height); /* create context */ GLubyte *spriteData = (GLubyte *)malloc(pixelSizeOfImage.width * pixelSizeOfImage.height * 4 * sizeof(GLubyte)); CGContextRef spriteContext = CGBitmapContextCreate(spriteData, pixelSizeOfImage.width, pixelSizeOfImage.height, 8, pixelSizeOfImage.width * 4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast); /* draw image into context */ CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, pixelSizeOfImage.width, pixelSizeOfImage.height), image.CGImage); /* get uniform of texture */ GLuint uniformIndex = glGetUniformLocation(programPointer, [uniform UTF8String]); /* generate texture */ GLuint textureIndex; glGenTextures(1, &textureIndex); glBindTexture(GL_TEXTURE_2D, textureIndex); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); /* create texture */ glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixelSizeOfImage.width, pixelSizeOfImage.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, textureIndex); /* "send" to shader */ glUniform1i(uniformIndex, 1); free(spriteData); CGContextRelease(spriteContext); } The uniform for the texture is fine; the glGetUniformLocation function does not return -1. The texture is a PNG file with a resolution of 2000x2000 pixels. PROBLEM When the texture is passed to the shader, I get a "black screen". Maybe the problem is the parameters of the CGContext or the parameters of the glTexImage2D call? Thank you
34 | How to create a 32 bit red texture byte buffer I want to make a red texture image buffer. Would anyone help me make it the right way? I have tried the following: std::vector<BYTE> redTexture(w * h * 4); const auto stride = w * 4; BYTE* buf = redTexture.data(); for (int i = 0; i < h; i++) { const auto redValue = Gdiplus::Color::Red; memcpy(buf, &redValue, stride); buf += stride; }
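The memcpy in the attempt above copies stride bytes out of a single 4-byte colour value, which reads far past redValue. A minimal per-pixel fill (assuming the consumer expects tightly packed 8-bit RGBA, i.e. 32 bits per pixel) could look like the sketch below; if the target API expects BGRA instead, swap the first and third channels.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Fill a w x h, 32-bit-per-pixel buffer with opaque red (RGBA order).
    std::vector<std::uint8_t> MakeRedTexture(int w, int h)
    {
        std::vector<std::uint8_t> texture(static_cast<std::size_t>(w) * h * 4);
        for (std::size_t i = 0; i < texture.size(); i += 4)
        {
            texture[i + 0] = 255; // R
            texture[i + 1] = 0;   // G
            texture[i + 2] = 0;   // B
            texture[i + 3] = 255; // A
        }
        return texture;
    }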
34 | Unity5 imported models show no texture I've got a problem with Unity materials. I'm a beginner so I used Wings3D for creating 3D models. But there's a little problem. Both objects in the picture below have the same material (a Standard Material with 0 smoothness, 0 metallic and only an Albedo picture), but obviously the right one has no texture (and that's the problem). First I ignored it and used a custom shader ("Custom WorldCoord Diffuse") which I found in a package, to fix it. Unfortunately this shader doesn't support Normal or Height maps and strangely slows down my game extremely (my scene with only Standard shaders: 80 FPS; my scene with this strange shader: 7 FPS). I don't know how to write my own shaders, I don't know Blender, and I don't have much time to fix this.
34 | Problem with transparent textures in SFML I have been told that this is kind of a common problem with transparent textures, but didn't get any further information on how to solve it. In my code, I'm trying to render this texture (a square with rounded corners, where rounded corners have some alpha) What I get instead is this Notice those greyish places on the corners of the textures where rounded corners are supposed to be. What could be causing this? I have a pure white texture, so I don't expect a single pixel to get any darker than the background. All pixels should have at least the color of background, but as you can see, there is something darker. Zoomed even more Any help would be highly appreciated. |
34 | How to make a texture appear the same way in a cube? My problem seems so simple, that before asking the question here I did a search on google and this site https forums.unrealengine.com development discussion content creation 33665 applying and manipulating textures on actors https www.youtube.com watch?v p6O4AgSwTmQ Trouble applying a texture to a cube How to produce a texture to represent a vector field Why do I get a blank material in Unreal Engine 4? How to create smoke that spreads outward in all directions? But nothing seemed to be related to my problem. I imported this image.png I created a material and added the image to it But I did not get the expected result in the game (I wish this yellow line were under that pink arrow that I drew) Innocently I thought the solution was obvious, so I added another image.png and added it to the material To my surprise, the result was worse, the yellow line did not even appear I realized this happens because the floor of my game is a "stretched" cube, so the image I added is appearing correctly on one of its faces, not just the one I want. I'd like to know how to make it look the same on all faces, or some value that I can change so that it appears correctly on the face I want. EDIT 1 (attempt using rotator) I put the input time to see if any value would be compatible with what I want, but none is. I tried with both texture images, but both presented the same behavior See that spinning looks like the shaft rotates around the top left of the floor. For some moments the yellow line will disappear. |
34 | So, I need to create varied animal textures for my game. How would I go about doing that? For example, one of the creatures planned is an isopod like animal. Would I need to get pictures of real animals and adjust them to my needs? |
34 | How should I prepare for migration from D3D9 to D3D10 or D3D11? I'm considering a Direct3D9 texture streaming solution, but my company will eventually be going to D3D11 in the future. I wanted to make my solution as forward compatible as possible, so I was hoping that someone could give me some feedback on exactly what D3D11's capabilities were and what I should be looking for when preparing such a migration? For reference, here's what I'm considering on D3D9 Load low res mip maps for all meshes at load time Create bounding boxes around each of my objects and detect when I'm inside any given bounding box For any bounding box that I'm inside of I will load the high res portion of the mip map Any bounding box that I've left I'll unload the texture from I've also got to cook up some scheme to manage fragmentation of the GPU memory, initially I'll probably just cycle the GPU memory whenever the camera and my objects are still |
34 | Does iOS support BC4 compressed texture? I've been designing a new OpenGL image algorithm using BC4 textures at its core. It works well across Windows and Mac, my main targets up to now. But today, the customer added a new requirement: it should work on iOS too! I'm starting to worry: is the BC4 texture format (named GL_COMPRESSED_RED_RGTC1 in OpenGL) supported on iOS devices (tablets & iPhones alike)? I've read that iOS is OpenGL ES 2.0 capable, but I'm unable to find out which texture formats are supported under this API (except the PowerVR ones, which are not compatible with PCs & Macs...)
34 | How to read a color of a pixel from texture (cocos2d js)? How to read a color of a pixel at x,y from a texture (in cocos2d js)? |
34 | Best way to organize models with multiple meshes and textures? I have character models which have different parts, e.g. upper body, head, lower body; each part is a separate mesh in the same model. Each part has several textures associated with it (diffuse, normal, etc). I'm wondering if there is a best practice for associating the textures with the meshes, given a .obj file as the model, for example, and .tga files for the textures. So, for instance, making sure the head textures get mapped onto the head object. One way would be having a separate file for each mesh, and using the file names to associate them with their textures, but that seems impractical. Is there a nice, clean way to do this, which is both easy to program (importing and rendering) and easy for the artist?
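One data-driven layout that tends to keep both the importer and the artist happy is to keep every part in one model file, give each mesh a material index, and describe the materials (with their texture file names) alongside, the way .obj/.mtl already does with usemtl groups. A rough sketch of such an in-memory layout; all names here are illustrative, not a specific engine's API.

    #include <string>
    #include <vector>

    struct Material
    {
        std::string diffuseMap;   // e.g. "head_diffuse.tga" (hypothetical file name)
        std::string normalMap;    // e.g. "head_normal.tga"
    };

    struct Mesh
    {
        // vertex/index buffers omitted for brevity
        int materialIndex = -1;   // which material this part uses
    };

    struct Model
    {
        std::vector<Mesh>     meshes;    // head, upper body, lower body, ...
        std::vector<Material> materials; // referenced by index from the meshes
    };

With this shape, the association comes from the model file itself (usemtl/.mtl for .obj, material slots in FBX or glTF), so neither the artist nor the loader has to rely on file-name conventions.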
34 | How can I handle shadowing of a planet's rings by the planet itself? I assume the most straightforward way to draw planetary rings (such as those around Saturn) is to use a texture that is transparent everywhere except for the rings, and then place the planet in the middle of the texture and that's it. The problem with this approach is that I don't know how to shadow the area of the rings texture that is behind the planet (which isn't lit by the sun). At the moment I have a rings texture that is permanently lit and a planet that darkens in the areas where it's not lit by the sun. The planet revolves around the sun, so the lit shadowed areas are constantly changing. Any suggestions on how to apply shadow on the rings? |
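A straightforward way to get the ring shadow described above is, for each ring point (in the pixel shader, or when baking vertex colours), to cast a ray toward the sun and darken the point if the planet sphere blocks it. Below is a small, hedged C++-style sketch of that occlusion test; the same math drops into HLSL/GLSL almost unchanged, and the Vec3 helper is only for illustration.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns true if the planet (center C, radius R) blocks the sun as seen from
    // ring point P. toSun must be a normalized direction from P toward the sun.
    bool InPlanetShadow(const Vec3& P, const Vec3& toSun, const Vec3& C, float R)
    {
        const Vec3 m { P.x - C.x, P.y - C.y, P.z - C.z };
        const float b = Dot(m, toSun);       // projection of (P - C) onto the ray
        const float c = Dot(m, m) - R * R;   // > 0 while P is outside the planet
        if (c <= 0.0f) return true;          // degenerate case: point inside the planet
        const float disc = b * b - c;        // ray-sphere discriminant
        return disc >= 0.0f && b < 0.0f;     // an intersection exists in front of P
    }

Because the planet orbits the sun, the shadowed arc of the rings moves automatically as the sun direction changes; for a soft penumbra, fade by how close the ray passes to the planet's surface instead of using the hard boolean.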
34 | What's wrong with my method of getting intermediate rendering to my postprocessing shader? I'm working on a project in OpenGL. Earlier this week, I successfully implemented Deferred Shading, but I'm not sure how to pass the information from the Deferred Shader to the Post Processing Shader. glBindFramebuffer(GL FRAMEBUFFER, fboId) glViewport(0, 0, int(ScreenSize.x), int(ScreenSize.y)) Render on the whole framebuffer, complete from the lower left corner to the upper right glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) ...Drawing for Opaque objects, pass all transparent objects to a seperate texture after having drawn them using Forward Shading glBindFramebuffer(GL FRAMEBUFFER, 0) glViewport(0, 0, int(ScreenSize.x), int(ScreenSize.y)) Clear the screen glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) Use our shader glUseProgram(quad programID) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, gBuffTextures 0 ) glUniform1i(gBuffPosnID, 0) More Textures Uniforms glEnableVertexAttribArray(0) glBindBuffer(GL ARRAY BUFFER, quad vertexbuffer) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 0, (void )0 ) glDrawArrays(GL TRIANGLE STRIP, 0, 4) 2 3 indices starting at 0 gt 2 triangles glDisableVertexAttribArray(0) So far this works just fine. Now here's the difficult part passing the final image to the post processing shader. glBindFramebuffer(GL FRAMEBUFFER, 0) glViewport(0, 0, int(ScreenSize.x), int(ScreenSize.y)) glClear(GL COLOR BUFFER BIT GL DEPTH BUFFER BIT) glUseProgram(post programID) glActiveTexture(GL TEXTURE0) glBindTexture(GL TEXTURE 2D, gBuffTextures 0 ) glUniform1i(UniformPostImage, 0) glActiveTexture(GL TEXTURE1) glBindTexture(GL TEXTURE 2D, gBuffDepthTextures) glUniform1i(UniformDepthImage, 1) glActiveTexture(GL TEXTURE2) glBindTexture(GL TEXTURE 2D, dirt) glUniform1i(UniformDirtImage, 2) glUniform1f(UniformTime, (float)currentTime) glEnableVertexAttribArray(0) glBindBuffer(GL ARRAY BUFFER, quad vertexbuffer) glVertexAttribPointer(0, 3, GL FLOAT, GL FALSE, 0, (void )0 ) glDrawArrays(GL TRIANGLE STRIP, 0, 4) 2 3 indices starting at 0 gt 2 triangles glDisableVertexAttribArray(0) I'm loading the image from the deferred shader normally, and passing it using layout(location 0) out vec3 image All I recieve is the first texture I pass, the position shader. Can anyone tell me how to pass the image from the deferred shader to the post processing shader? |
34 | Multi texture obj files? I'm downloading some free 3D models off the internet to test things in my game. When I download the package, it gives me a bunch of texture files and the obj model. What I don't understand is what to do with those textures. Are they used as source image files within the object? If so, how can I export the UV map of the model with all these textures on it? I've been trying to get a working example of this and haven't been able to. Can someone guide me in the right direction? I edit my files with Blender, and realize this question is a little bit off topic, but I'm lost! EDIT Example of the textures I am using now
34 | OpenGL texture on sphere I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this function I can create a sphere: public Ball(GL10 gl, float radius) { ByteBuffer bb = ByteBuffer.allocateDirect(40000); bb.order(ByteOrder.nativeOrder()); sphereVertex = bb.asFloatBuffer(); points = build(); } private int build() { double dTheta = STEP * Math.PI / 180; double dPhi = dTheta; int points = 0; for (double phi = -(Math.PI / 2); phi < Math.PI / 2; phi += dPhi) { for (double theta = 0.0; theta < (Math.PI * 2); theta += dTheta) { sphereVertex.put((float)(radius * Math.sin(phi) * Math.cos(theta))); sphereVertex.put((float)(radius * Math.sin(phi) * Math.sin(theta))); sphereVertex.put((float)(radius * Math.cos(phi))); points++; } } sphereVertex.position(0); return points; } public void draw() { texture.bind(); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex); gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } My problem now is that I want to use this texture for the sphere, but then only a black ball is created (of course, because the top right corner is black). I use these texture coordinates because I want to use the whole texture: 0 0, 0 1, 1 1, 1 0. That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?