Downscaling texture via mipmap

Copied from Computer Graphics SE. I am implementing a post-processing effect in my DirectX 11 pet renderer. The post-processing pass is implemented by rendering a full-screen quad covered with a texture containing the original rendered image, which works as it should, but I have problems with downscaling the texture. The non-processed testing scene looks like this (three very bright emissive spheres). I see no problem at this stage, but when I run the first post-processing pass, which just downscales the image by a factor of 8 using the texture sampler, the result is very flickery (upscaled for clarity). I expected a mipmap would solve, or at least reduce, the flickering, but it didn't change a thing. What am I doing wrong?

RenderDoc update: after investigating the issue using RenderDoc I found that the mipmap is being generated successfully, and its third level looks fine. However, the output of the downscaling pass looks as if the sampler didn't use the mipmap at all. (Don't be distracted by the coloured objects instead of the almost white ones; I lowered the sphere brightness a bit while investigating the bug.) Even if I choose the mipmap level explicitly,

```hlsl
float4 vColor = s0.SampleLevel(LinearSampler, Input.Tex, 3);
```

it changes nothing. RenderDoc also says "LOD Clamp 0 0" for the used sampler. What is it? Couldn't this be the problem?

DirectX details. Samplers:

```cpp
D3D11_SAMPLER_DESC descSampler;
ZeroMemory(&descSampler, sizeof(descSampler));
descSampler.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
descSampler.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
descSampler.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
descSampler.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
mDevice->CreateSamplerState(&descSampler, &mSamplerStateLinear);
descSampler.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
hr = mDevice->CreateSamplerState(&descSampler, &mSamplerStatePoint);
```

...are set right before rendering the screen quad:

```cpp
ID3D11SamplerState* aSamplers[] = { mSamplerStatePoint, mSamplerStateLinear };
mImmediateContext->PSSetSamplers(0, 2, aSamplers);
```

...and used within the downscaling PS shader:

```hlsl
SamplerState PointSampler  : register(s0);
SamplerState LinearSampler : register(s1);
Texture2D s0 : register(t0);

float4 Pass1PS(QUAD_VS_OUTPUT Input) : SV_TARGET
{
    return s0.Sample(LinearSampler, Input.Tex);
}
```

Texture:

```cpp
D3D11_TEXTURE2D_DESC descTex;
ZeroMemory(&descTex, sizeof(D3D11_TEXTURE2D_DESC));
descTex.ArraySize = 1;
descTex.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
descTex.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
descTex.Usage = D3D11_USAGE_DEFAULT;
descTex.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
descTex.Width = width;
descTex.Height = height;
descTex.MipLevels = 0;
descTex.SampleDesc.Count = 1;
device->CreateTexture2D(&descTex, nullptr, &tex);
```

...its render target view:

```cpp
D3D11_RENDER_TARGET_VIEW_DESC descRTV;
descRTV.Format = descTex.Format;
descRTV.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
descRTV.Texture2D.MipSlice = 0;
device->CreateRenderTargetView(tex, &descRTV, &rtv);
```

...its shader resource view:

```cpp
D3D11_SHADER_RESOURCE_VIEW_DESC descSRV;
ZeroMemory(&descSRV, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
descSRV.Format = descTex.Format;
descSRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
descSRV.Texture2D.MipLevels = (UINT)-1;
descSRV.Texture2D.MostDetailedMip = 0;
device->CreateShaderResourceView(tex, &descSRV, &srv);
```

Explicit generation of the mipmap is called after the scene was rendered into the texture and another texture was set as a render target:

```cpp
ID3D11RenderTargetView* aRTViews[1] = { mPass1Buff.GetRTV() };
mImmediateContext->OMSetRenderTargets(1, aRTViews, nullptr);

mImmediateContext->GenerateMips(mPass0Buff.GetSRV());

ID3D11ShaderResourceView* aSRViews[1] = { mPass0Buff.GetSRV() };
mImmediateContext->PSSetShaderResources(0, 1, aSRViews);
```

The code is compiled in debug, the D3D device was created with the D3D11_CREATE_DEVICE_DEBUG flag, and I get no runtime errors on the console.
Borderless windowed (fake fullscreen) mode doesn't cover the entire screen

I'm using Direct3D 11 running on Windows 10 20H2, but have seen this problem going back to Windows 7. I'm adding borderless windowed (fake fullscreen) mode support, and all of the online resources I can find suggest that it's simple: just set up a windowed mode the same size as your display resolution, use WS_POPUP, and that's all you need to do. The problem is that it doesn't actually work. Instead, I get (on a 1920x1080 display) a 1920x1060 window, with the bottom 20-pixel-high portion of the taskbar still visible. The odd thing is, the swapchain backbuffer is sized correctly, at 1920x1080, and adding some debug output shows that my WM_SIZE message is coming through at 1920x1080; it's as if something internal were preventing the window from covering the full screen. Fullscreen exclusive modes work correctly. The workflow I'm using to set this mode goes like this:

- Determine if the window will be the same size as the display. If so, set WS_POPUP (I'm not sure what to use as the ExStyle, so I'm leaving it at 0 for now, but I've also tested WS_EX_TOPMOST with no success). If not, set the usual array of styles.
- Call SetWindowPos with SWP_FRAMECHANGED.
- Pump my message loop to make sure that everything is brought up to date.
- Call IDXGISwapChain::ResizeTarget to do the actual window resizing.
- Respond to WM_SIZE by calling IDXGISwapChain::ResizeBuffers, creating my views, etc.

Of note is that I don't change the window size using SetWindowPos; I let IDXGISwapChain::ResizeTarget do that for me, and that works for every other combination of windowed-to-windowed, windowed-to-fullscreen, fullscreen-to-windowed and fullscreen-to-fullscreen transition (where "fullscreen" here is fullscreen exclusive). The only thing that does not work is borderless windowed. What sorcery am I missing?
Fix Pixel Shader "Stage did not run. No output"

I'm trying to set up a minimal D3D11 renderer but fail to get the pixel shader stage to run. The available answers here, and the ones I found through Google, couldn't help me, unfortunately. Using Visual Studio's graphics debugger I could verify that my vertices are set correctly and the vertex shader also runs as expected. The debug layer doesn't report any issues either. As a test I disabled depth testing, stencil testing and backface culling, as shown below, with no change. I made sure that the viewport is set correctly as well, as this seems to be a common source of this problem. At this moment I can't think of any other cause for this issue and would be happy about any advice on what else to look out for.

```cpp
// Disable depth/stencil test
D3D11_DEPTH_STENCIL_DESC dsstate_desc;
dsstate_desc.DepthEnable = false;
dsstate_desc.StencilEnable = false;
dsstate_desc.DepthFunc = D3D11_COMPARISON_ALWAYS;
dsstate_desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsstate_desc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
dsstate_desc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
dsstate_desc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
dsstate_desc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
dsstate_desc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
dsstate_desc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
d3d11device->CreateDepthStencilState(&dsstate_desc, &this->ds_state);
d3d11context->OMSetDepthStencilState(this->ds_state, 0);

// Disable backface culling
D3D11_RASTERIZER_DESC rasterizer_desc;
rasterizer_desc.AntialiasedLineEnable = false;
rasterizer_desc.CullMode = D3D11_CULL_NONE;
rasterizer_desc.DepthBias = 0;
rasterizer_desc.DepthBiasClamp = 0.0f;
rasterizer_desc.DepthClipEnable = false;
rasterizer_desc.FillMode = D3D11_FILL_SOLID;
rasterizer_desc.FrontCounterClockwise = true;
rasterizer_desc.MultisampleEnable = false;
rasterizer_desc.ScissorEnable = false;
rasterizer_desc.SlopeScaledDepthBias = 0.0f;
d3d11device->CreateRasterizerState(&rasterizer_desc, &this->rasterizer_state);
d3d11context->RSSetState(this->rasterizer_state);
```

(Screenshots: pipeline view in the debugger; the vertex shader transformation.)

Vertex shader:

```hlsl
struct VertexIn
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float4 color    : COLOR;
};

struct VertexOut
{
    float4 color    : COLOR;
    float4 position : SV_POSITION;
};

VertexOut main(VertexIn i)
{
    VertexOut o;
    o.position = float4(i.position, 1.0f);
    o.color = i.color;
    return o;
}
```

Pixel shader:

```hlsl
struct FragmentIn
{
    float4 color : COLOR;
    float4 pos   : SV_POSITION;
};

float4 main(FragmentIn i) : SV_TARGET
{
    return float4(1.0f, 0.0f, 0.0f, 1.0f);
}
```
HLSL Buffer Data Type

I'm working on converting a DX11 shader from a .fx file for use in Unity3D, and I'm a little puzzled by the HLSL `Buffer<type>` declared in the shader. More specifically, what are these, and how can I implement them in Unity? I'm aware of the Structured, Append, and Consume buffers, but those appear to be different from this, and the Microsoft documentation wasn't too helpful. Is it just like an array that is populated and sized from code before getting assigned to the shader? Are they read-only, or writable as well? So far I'm thinking the closest approximation I can use is a StructuredBuffer, but the .fx file has its own declaration for that as well, so I'm not entirely sure I should go that route. Example:

```hlsl
Buffer<float4> g_someData : register(t18);
```
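For reference, a `Buffer<T>` in HLSL is a read-only *typed* buffer view (bound as an SRV, hence the `t` register): each element is stored in a DXGI format and format-converted on read, and you fetch it with `Load` or the index operator rather than a sampler. A minimal sketch of how it is declared and read (names are illustrative):

```hlsl
// Typed, read-only buffer bound to SRV slot t18 - conceptually a 1D
// texture without filtering. Each element converts to float4 on read.
Buffer<float4> g_someData : register(t18);

float4 ReadElement(uint index)
{
    // Load performs the typed fetch; out-of-range reads return zero.
    return g_someData.Load(index);   // equivalent: g_someData[index]
}
```

Unlike `StructuredBuffer<T>` (whose elements are raw structs with no format conversion), a `Buffer<T>` is element-format based, and the writable counterpart is `RWBuffer<T>` bound as a UAV; from the shader's side, plain `Buffer<T>` is strictly read-only.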
DXGI: Frame rate drops from 8000 FPS to 1500 FPS when switching to full-screen mode

I've created a simple app with a DirectX 11 device and swap chain (IDXGISwapChain). All it does is clear the screen with a color and call Present(0, 0) on the swap chain. The app handles the full-screen/windowed mode transition by itself (I passed DXGI_MWA_NO_WINDOW_CHANGES to the IDXGIFactory object). After switching from windowed mode to full screen, the VS graphics profiler shows a drop in the frame rate, from 8000 FPS (windowed mode) to 1500 FPS (full-screen mode). I think I'm resizing my buffers properly (in response to WM_SIZE), and I'm not getting any warnings about presentation inefficiencies in the debug window. The mode used to create the swap chain is obtained by enumerating the supported modes of the output device and selecting the proper resolution info. Isn't full-screen mode supposed to be more efficient? (As I understand it, it can simply do a flip instead of a bit blit.) The code is here in case you want to take a look.
ID3D11Buffer and std::array: buffer looks empty

I am having trouble rendering vertices stored in a std::vector.

```cpp
// Create and initialize the vertex buffer.
D3D11_BUFFER_DESC vertexBufferDesc;
ZeroMemory(&vertexBufferDesc, sizeof(D3D11_BUFFER_DESC));
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.ByteWidth = sizeof(VertexData) * this->vertex_data.size();
vertexBufferDesc.CPUAccessFlags = 0;
vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;

D3D11_SUBRESOURCE_DATA resourceDataVertex;
ZeroMemory(&resourceDataVertex, sizeof(D3D11_SUBRESOURCE_DATA));
resourceDataVertex.pSysMem = &(this->vertex_data);

HRESULT hr = pDevice->CreateBuffer(&vertexBufferDesc, &resourceDataVertex, &(this->pVertexBuffer));
if (FAILED(hr))
    return false;
```

`this->vertex_data` being a `std::vector<VertexData>`, with VertexData being:

```cpp
// Vertex data
struct VertexData
{
    XMFLOAT3 v;
    XMFLOAT2 vt;
    XMFLOAT3 vn;

    VertexData(XMFLOAT3 v, XMFLOAT2 vt, XMFLOAT3 vn)
        : v(v), vt(vt), vn(vn) {}
};
```

Now, I'm not sure doing `resourceDataVertex.pSysMem = &(this->vertex_data)` is a good idea. I can render another mesh without problem just by changing the ID3D11Buffer vertex buffer (said mesh being known at compile time and stored in a C array). This time around I simply get no render at all. I triple-checked that my object is in sight and that vertex_data is fully populated.
How many views can be bound to a 2D texture at a time?

I am a newbie trying to learn DX11.x. While reading about resources and views on MSDN, I thought of this question: for a given 2D texture created with the ID3D11Texture2D interface (or, for that matter, any kind of resource), how many of the following views can be bound to it?

1) DepthStencilView
2) RenderTargetView
3) ShaderResourceView
4) UnorderedAccessView

Thanks in advance. PS: I know the answer would be app specific, but any insight into this would still be helpful.
Directional light and finding relevant shadow casters

Right now, when culling the models to render for the directional light's shadow map pass, I just do view frustum culling using the main camera. At some angles, objects will be outside the view but should still, obviously, cast shadows. My question is: how do you cull/gather all the relevant shadow casters for a directional light?
Full screen quad in HLSL / DirectX 11

I want to create a full-screen triangle/quad so I can blur the box that is the quad I made. I want to do this in the vertex buffer. I tried this code:

```hlsl
struct VSQuadOut
{
    float4 position : SV_POSITION;
    float2 uv : TEXCOORD;
};

// Outputs a full screen triangle with screen space coordinates.
// Input: three empty vertices.
VSQuadOut VSQuad(uint vertexID : SV_VertexID)
{
    VSQuadOut result;
    result.uv = float2((vertexID << 1) & 2, vertexID & 2);
    result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return result;
}
```

I want something like this. Any ideas?
Realtime local reflections of a particle system

I'm finding my way around CryEngine 3 (through Steam) and want to create a simple effect where a fire on shore is reflected in a body of water. For testing purposes, I've made the water dead calm... (Note: DX11 on the second line of the debug info.) As you can see, the terrain is reflected properly, but the flame particles aren't reflected. It's my understanding that this should be possible. NB: I've created an appropriate cubemap for the water environment, although I don't believe it comes into play here. I've seen a number of posts saying a Glossiness of 50 (or more) and a light specular color are required; I've got 100 and white... And, for completeness, the water volume properties... Can someone please tell me what I need to enable to get this working? Thanks.
How do I draw depth complexity (overdraw) in DirectX 11?

I want to read the stencil value in the shader so I can set colors for different depths. What I understand: make a loop after the scene is rendered but before it is presented, i.e. a loop over the number of colors (k), and in the loop use

```cpp
md3dImmediateContext->OMSetDepthStencilState(RenderStates::DrawDepthDSS, k);
```

This is what I know. What are the other steps, like sending the depth to the shader? I have this code for the shader:

```hlsl
Texture2D<uint> txStencil : register(t0); // set the corresponding register

float4 PSStencil(float4 pos : SV_Position) : SV_Target
{
    uint stencil = txStencil.Load(int3(pos.xy, 0));
    // debug output
    if (stencil == 1)
        return float4(0, 1, 0, 1);
    else
        return float4(0, 0, 0, 0);
}
```

But how do I send the shader the stencil value?
How best to handle ID3D11InputLayout in rendering code?

I'm looking for an elegant way to handle input layouts in my DirectX 11 code. The problem: I have an Effect class and an Element class. The Effect class encapsulates shaders and similar settings, and the Element class contains something that can be drawn (3D model, landscape, etc.). My drawing code sets the device shaders etc. using the specified Effect and then calls the draw function of the Element to draw the actual geometry contained in it. The problem is this: I need to create an ID3D11InputLayout somewhere. This really belongs in the Element class, as it's no business of the rest of the system how that element chooses to represent its vertex layout. But in order to create the object, the API requires the bytecode of the vertex shader that will be used to draw the object. In DirectX 9 it was easy; there was no such dependency, so my Element could contain its own input layout structures and set them without the Effect being involved. But the Element shouldn't really have to know anything about the Effect it's being drawn with; that's just render settings, and the Element is there to provide geometry. So I don't really know where to store, and how to select, the input layout for each draw call. I mean, I've made something work, but it seems very ugly. This makes me think I've either missed something obvious, or else my design of having all the render settings in an Effect, the geometry in an Element, and a third party that draws it all is just flawed. I'm just wondering how anyone else handles their input layouts in DirectX 11 in an elegant way.
Flipped Normals On Back Faces

I'm trying to create grass quads and have therefore disabled backface culling when rendering them. With each vertex normal set upwards (0.f, 1.f, 0.f), all front faces are lit correctly, but back faces are black. If I change the Y normal to -1.f, the back faces light up and the front faces go black.

```cpp
D3D11_RASTERIZER_DESC desc;
desc.AntialiasedLineEnable = false;
desc.CullMode = D3D11_CULL_NONE;
desc.DepthBias = 0;
desc.DepthBiasClamp = 0.f;
desc.DepthClipEnable = true;
desc.FillMode = D3D11_FILL_SOLID;
desc.FrontCounterClockwise = false;
desc.MultisampleEnable = false;
desc.ScissorEnable = false;
desc.SlopeScaledDepthBias = 0.f;
```

I'm not so sure there's a problem here. Any ideas?
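This behaviour is expected for two-sided geometry: on the back side, the stored normal faces away from the light, so the lambert term goes to zero. A common fix (a sketch of the usual technique, not code from the question) is to flip the normal per pixel using the `SV_IsFrontFace` system value:

```hlsl
// Pixel shader input; isFrontFace is filled in by the rasterizer.
// lightDir is a hypothetical light direction - substitute your own lighting.
float4 main(float3 normal : NORMAL, bool isFrontFace : SV_IsFrontFace) : SV_TARGET
{
    const float3 lightDir = normalize(float3(0.3f, 1.0f, 0.2f));

    // Flip the interpolated normal when shading the back side of the quad,
    // so both sides are lit as if the surface faced the viewer.
    float3 n = normalize(normal) * (isFrontFace ? 1.0f : -1.0f);

    float diffuse = saturate(dot(n, lightDir));
    return float4(diffuse.xxx, 1.0f);
}
```

For thin foliage like grass, another frequently used option is to skip the flip entirely and light with `abs(dot(n, lightDir))`, treating the blade as translucent.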
Why does PIX crash while creating render target views in my D3D11 application?

I'm trying to use PIX to debug my Direct3D 11 application. PIX crashes and gives the following stack trace:

```
Frame 000001 ........PRE: <this=0x054338e0> IDXGISwapChain::GetBuffer(0, IID_ID3D11Texture2D, 0x00B3E0F0)
Frame 000001 ........POST: <S_OK> <this=0x054338e0> IDXGISwapChain::GetBuffer(0, IID_ID3D11Texture2D, 0x00B3E0F0)
Frame 000001 ........PRE: <this=0x05433978> ID3D10Texture2D::Map(11788512, Unknown D3D10_MAP, 1843764804, 0x00B3EA08)
Frame 000001 ........POST: <E_INVALIDARG> <this=0x05433978> ID3D10Texture2D::Map(11788512, Unknown D3D10_MAP, 1843764804, 0x00B3EA08)
Frame 000001 ........PRE: <this=0x054b70d0> ID3D11Device::CreateRenderTargetView(0x05433978, NULL, 0x00B3E0F4)
Frame 000001 ........POST: <E_INVALIDARG> <this=0x054b70d0> ID3D11Device::CreateRenderTargetView(0x05433978, NULL, 0x00B3E0F4)
```

What can I do to get around this problem so that I can debug my application?
DirectX Tessellation Cracks

I have the following simple patch function in DX11, but I keep getting rips, and when I look at the wireframe it's clear that adjacent edges are not getting the same tessellation factor. The CalcTessFactor() function just computes a distance from the camera to the point passed in, so it should always give the same value for the same edge center that I pass in.

```hlsl
PatchTess patchFunction_Far(InputPatch<VertexToPixel_Far, 3> patch, uint patchID : SV_PrimitiveID)
{
    PatchTess pt;

    // Compute midpoint on edges, and patch center
    float3 e0 = 0.5f * (patch[0].WorldPosition + patch[1].WorldPosition);
    float3 e1 = 0.5f * (patch[1].WorldPosition + patch[2].WorldPosition);
    float3 e2 = 0.5f * (patch[2].WorldPosition + patch[0].WorldPosition);
    float3 c = (patch[0].WorldPosition + patch[1].WorldPosition + patch[2].WorldPosition) / 3.0f;

    pt.EdgeTess[0] = CalcTessFactor(e0);
    pt.EdgeTess[1] = CalcTessFactor(e1);
    pt.EdgeTess[2] = CalcTessFactor(e2);
    pt.InsideTess = CalcTessFactor(c);

    return pt;
}
```

My patches are triangles. Is there something I'm doing trivially wrong here, like assuming that EdgeTess[0] corresponds to edge 0-1 when it is actually edge 2-0, for instance? It's a wild guess...
Is there something similar to XMFLOAT2 that has its operators overloaded?

Since XMFLOAT2 is just a structure, I'm sure it does not have operator overloading, which is what I need to make things a lot simpler. Is there something like XMFLOAT2 where I can add two together (a + b)? I also need to use the other operators. Thanks.
D3D11 Deferred Context CommandList Reset

A rather quick question: I am starting to implement rendering with deferred contexts in my game engine, and I came across a heavy memory leak when recording command lists on my deferred contexts. Because every frame I would go like this:

```cpp
DeferredContext[THREAD]->FinishCommandList(0, &commandLists[THREAD]);
```

it would eat up RAM, so I Release them every time, like:

```cpp
if (commandLists[THREAD]) {
    commandLists[THREAD]->Release();
    commandLists[THREAD] = NULL;
}
```

I previously only released objects when they wouldn't be used any more, so I thought there would be a more optimized way of handling this? (Like mapping dynamic ID3D11Buffers, for example?)
Is non-indexed, non-instanced rendering useful anymore?

I'm adding batched rendering to my game engine, and I'm wondering: should I support non-indexed, non-instanced batches, or just indexed and/or instanced ones? It's my understanding that the concept of indexed rendering was invented after pure "vertex only" drawing. That said, is supporting vertex-only rendering useful anymore? Is there a modern use case for it?
Restricting movement to a 3D axis

A cube is constructed in 3D and can be rotated to view any side. I want to be able to drag the cube to any position with the mouse while keeping the Y coordinate the same. I used project and unproject functions to map the mouse to the 3D world. This method seems to work fine when the green face is parallel to the camera. However, when I rotate the cube, say, 45 degrees around the X axis, and the cube face is now at an angle to the camera, the Y coordinates do not track the mouse. When the camera is parallel to the red face, the tracking in the Y axis is almost nothing. When I don't fix the Y coordinate, the cube tracks the mouse perfectly; however, the Y coordinates are changed as well. How can I go about keeping the Y axis fixed and yet have the cube track the mouse?
How do I use _com_ptr_t with RenderTargetView and DepthStencilView?

I have successfully used _com_ptr_t with ID3D11Device and IDXGISwapChain, but when applying the same reasoning to the RenderTargetView and DepthStencilView, the call m_spD3DImmediateContext->OMSetRenderTargets(...) sets the m_spRenderTargetView smart COM pointer to null! Then subsequent draw calls fail on ClearRenderTargetView and ClearDepthStencilView. Is it because I am passing the smart pointer incorrectly?

```cpp
HR(m_spD3DDevice->CreateRenderTargetView(pBackBuffer, 0, &m_spRenderTargetView));
...
HR(m_spD3DDevice->CreateTexture2D(&stDepthStencilDesc, 0, &m_spDepthStencilBuffer));
...
m_spD3DImmediateContext->OMSetRenderTargets(1, &m_spRenderTargetView, m_spDepthStencilView);
assert(m_spRenderTargetView); // <- FAIL
```

I think the smart pointer overloads operator& so that it returns an Interface** (see "Extractors" in the _com_ptr_t class documentation).
Memory allocation strategy for the vertex buffers (DirectX 10/11)

I'm writing a CAD system. I have a 3D scene with many different objects (walls, doors, windows and so on), and the user can add or delete objects. The question is: how do I keep track of all the vertices for all my objects? I could create a vertex buffer for every object, but I think switching from one buffer to another while drawing would carry a performance penalty. Alternatively, I could create several big buffers, one per object type, but I don't understand how to update such buffers. They are too big to update as a whole (for example, a buffer holding all walls), and what would I do if I wanted to delete an object that's in the middle of the buffer? I have a similar question here on Stack Overflow. Most examples I've found work with static models; they tend to create a single vertex buffer with their list of points, which is then just manipulated by matrix transformations. I, on the other hand, will be updating the scene very often. So what's the best way to keep track of and store this information?
Adding mesh objects to procedural isosurface terrain

Thanks again for reading! Following on from my last question, I have my fully working isosurface terrain, and now it's time to add my trees, grass and whatever else to the world. The old way I did it was to cast rays in a grid facing down over the terrain; I would then read the normal from the ray hit and place things if the normal was below some value. I would do this at different grid resolutions for each plant/tree/grass type, offset their x and z and scale by some random value, and then add the object to my 2D BV tree. Because I was only working with heightmap terrain, I only had one row of bounding volumes in the tree. I can still use the above method, but as you know it will only place objects on parts of the terrain that can see the sky. Part of building my new terrain system involved making a real octree, so I know that I will use that to store my objects, but what is the best way to generate my object positions? From what I have read around, I should somehow use the voxels I build, but I'm not really sure how I would go about it.
How do I change rasterizer state properly?

To set the rasterizer state I have to call ID3D11Device::CreateRasterizerState() and then ID3D11DeviceContext::RSSetState(), and then I should ID3D11RasterizerState::Release() it, right? How about when I want to change the state; do I follow the above three steps again? When I want to change just one setting (e.g. only CullMode), do I still have to fill and set the whole structure? Also, do I have to create and release the state object each time (assuming I don't want to set the exact same state again in the future)? How about performance? If I set the exact same state again, does it do anything? Is there a difference between changing only one setting versus most or all of them?
Copying an ID3D11Texture2D created by one device without the D3D11_RESOURCE_MISC_SHARED flag to another device

I'm writing a native plugin for Unity that is responsible for presenting the rendered Unity scene in a separate window, with its own swap chain and an associated device and context, all owned by the plugin. I'm rendering the Unity scene to a texture and passing the native pointer of the texture to the plugin. Had Unity's texture been created with the D3D11_RESOURCE_MISC_SHARED flag enabled, I could have simply accessed the texture using OpenSharedResource and created an ID3D11ShaderResourceView bound to it. But since Unity does not specify that flag when creating its texture, I need to copy that texture to another texture which is created with the D3D11_RESOURCE_MISC_SHARED flag enabled. I'm facing issues while doing this copy. To test this pipeline, I'm working with the SpriteBatch example in DirectXTK, tweaking it a little bit: create a texture using one device and pass it on to another device (the original example uses the same device to create the first texture), which is responsible for rendering it on the screen. I create the first texture from an image file using the first ID3D11Device1 without the D3D11_RESOURCE_MISC_SHARED flag, and create a second, empty texture using the second ID3D11Device1 with the flag enabled. Then I copy the contents of the first texture to the second texture and create an ID3D11ShaderResourceView bound to the second texture.

```cpp
ComPtr<ID3D11Resource> textureResource;
DX::ThrowIfFailed(CreateWICTextureFromFile(m_d3dDevice.Get(), L"cat.png", textureResource.GetAddressOf(), nullptr));

ComPtr<ID3D11Texture2D> cat;
DX::ThrowIfFailed(textureResource.As(&cat));

CD3D11_TEXTURE2D_DESC catDesc;
cat->GetDesc(&catDesc);

ComPtr<ID3D11Texture2D> spriteBatchTexture;
D3D11_TEXTURE2D_DESC sharedTextureDesc = catDesc;
sharedTextureDesc.Usage = D3D11_USAGE_DEFAULT;
sharedTextureDesc.Format = catDesc.Format;
sharedTextureDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
sharedTextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

// Create a texture which is a copy of cat, that belongs to the wicDevice, but with a SHARED flag
DX::ThrowIfFailed(wicDevice->CreateTexture2D(&sharedTextureDesc, nullptr, spriteBatchTexture.ReleaseAndGetAddressOf()));
wicContext->CopyResource(spriteBatchTexture.Get(), cat.Get());

IDXGIResource* sharedResource(nullptr);
HANDLE sharedHandle;
DX::ThrowIfFailed(spriteBatchTexture->QueryInterface(__uuidof(IDXGIResource), (void**)&sharedResource));
DX::ThrowIfFailed(sharedResource->GetSharedHandle(&sharedHandle));
sharedResource->Release();

DX::ThrowIfFailed(m_d3dDevice->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (void**)(spriteBatchTexture.GetAddressOf())));

D3D11_SHADER_RESOURCE_VIEW_DESC sbSRVDesc;
sbSRVDesc.Format = catDesc.Format;
sbSRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
sbSRVDesc.Texture2D.MipLevels = 1;
sbSRVDesc.Texture2D.MostDetailedMip = 0;
DX::ThrowIfFailed(m_d3dDevice->CreateShaderResourceView(spriteBatchTexture.Get(), &sbSRVDesc, m_texture.ReleaseAndGetAddressOf()));
```

m_texture is the ID3D11ShaderResourceView that is finally used by the example to render the image onto the screen. The code goes through fine, but no image is rendered onto the window when I do this. Note: it does not seem to matter which device (m_d3dDevice or wicDevice) I choose as the creator of the spriteBatchTexture. As long as OpenSharedResource is used properly, the code does not break, but the end result is the same blank screen.
Simple switch to instanced draws causes consistent, but incorrect, results

I have dumbed the following code down to "stupid simple" for DirectX and still cannot get any cooperation:

```cpp
g_d3dContext->OMSetRenderTargets(1, g_renderTargetWorld->ColorRenderTargetView.GetAddressOf(), g_depthStencilView.Get());
g_d3dContext->OMSetDepthStencilState(g_States->DepthNone(), 0);
g_d3dContext->OMSetBlendState(g_States->AlphaBlend(), NULL, 0xffffffff);
g_d3dContext->IASetVertexBuffers(0, 2, m_segments->GetLineVertexBuffers(), s_strides, s_offsets);
g_d3dContext->IASetIndexBuffer(m_segments->GetLineIndexBuffer(), DXGI_FORMAT_R16_UINT, 0);
g_d3dContext->IASetInputLayout(s_IL_Polyline.Get());
g_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST_ADJ);
g_d3dContext->VSSetShader(g_World_Polyline_VS_Segment.Get(), NULL, 0);
g_d3dContext->GSSetShader(g_World_Polyline_GS_Segment.Get(), NULL, 0);
g_d3dContext->PSSetShader(g_World_Polyline_PS_Segment.Get(), NULL, 0);
g_d3dContext->RSSetState(g_States->CullNone());
g_d3dContext->DrawIndexed(4 * m_segments->LineCount, 0, 0);
g_d3dContext->DrawIndexedInstanced(4, m_segments->LineCount, 0, 0, 0);
g_d3dContext->GSSetShader(nullptr, NULL, 0);
```

The DrawIndexed call draws all segments. The DrawIndexedInstanced call draws only the first segment. Creating a second instance of this class, DrawIndexedInstanced draws only the first segment of each compound line. Although it is correctly created, filled, and set, I am not even using the per-instance data yet. Manually manipulating the StartIndexLocation parameter causes different (but still only single) line segments to be drawn. PIX is worthless, as usual, but at least allowed me to inspect the buffer contents. All of the data is correct and in place, but I cannot get the "gosh darn" thing to work. After 18 hours of searching and getting nowhere, I'm livid, giving up, and going to bed. Please help me explain why (4, 2) != (4, 0) + (4, 1).

Edit: This, stupidly, works:

```cpp
for (char i = 0; i < m_segments->LineCount; i++)
    g_d3dContext->DrawIndexedInstanced(4, 1, i * 4, 0, i * 4);
```

So now (4, 0) + (4, 1) works, but (4, 2) != (4, 0) + (4, 1). Verifying, again, that the data is all where it needs to be... Stumped. It's got to be something so obvious that it's not. I feel like I'm already overkill on setting every possible state parameter there is, and if I commented out any one line it would switch from "doesn't work" to "shouldn't work", and I can't debug it.
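One thing worth checking (an observation about the API, not a confirmed diagnosis of this code): DrawIndexedInstanced(4, N, 0, 0, 0) replays the same four indices, starting at index 0, for every instance. Unless something varies per instance, all N instances land on the first segment's vertices, stacked on top of each other, which matches "only the first segment" being visible. Per-instance variation comes either from a second vertex buffer whose input-layout elements are marked D3D11_INPUT_PER_INSTANCE_DATA, or from SV_InstanceID in the vertex shader; a sketch of the latter (the buffer and layout are hypothetical):

```hlsl
// Per-vertex input plus system values; layout assumed for illustration.
struct SegmentVertex
{
    float3 position : POSITION;
    uint instanceID : SV_InstanceID;  // which instance this invocation belongs to
};

// Hypothetical buffer holding one transform per line segment.
StructuredBuffer<float4x4> g_segmentTransforms : register(t0);

float4 main(SegmentVertex v) : SV_POSITION
{
    // Without reading instanceID (or per-instance vertex data), every
    // instance re-draws the same four base vertices in the same place.
    float4x4 world = g_segmentTransforms[v.instanceID];
    return mul(float4(v.position, 1.0f), world);
}
```

This would also explain why the loop works: each iteration advances StartIndexLocation (and StartInstanceLocation), so the geometry changes even though each call draws only one instance.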
Real-Time Terrain Deformation

I can't really find anything at all on this topic. There are a bunch of YouTube videos that show people doing it, but there aren't any articles that I can find explaining the mechanics of it. In my game, terrain is loaded from a .RAW heightmap and saved to a .RAW heightmap, but the player will be able to modify the terrain. I don't really understand how this terrain deformation works; so far, I've only created things using vertex buffers, which as far as I know can't be changed efficiently. Here and there I've read a few things about GPU tessellation and compute shaders. I understand tessellation and I can do that, but there's very little I can find about compute shaders. Are there any good books or websites that explain these things in detail? I'm not really looking for tutorials, because they tend to be pretty specific. I really just want anything to bring me out of the dark on this subject.
Occlusion culling of BV tree nodes behind terrain So I have a bounding volume tree, almost an octree but not quite. Anyway, I'm trying to optimize my drawing. Right now I have a few different culling frustums that I use to cull different ranges of the BV tree; it's very fast, I can cull all my world data in less than 1 ms. But when it comes to culling the grass and the other mesh data, I'm thinking that because I have lots of mountains and hills, I'm rendering stuff that is inside the frustum but occluded by hills and mountains. What would be the right way to cull this? Should I do a ray cast against my terrain from each corner of the bounding boxes to the player and cull anything that gets a hit? Or should I find some fancy way to do it on the GPU? I only have to test maybe 1k boxes x 8 corners, so the ray casting shouldn't take too long.
How to handle normal vector when duplicating vertex? I'm currently developing a UV mapping feature, such as the UVW Map modifier in 3ds Max (not the Unwrap UVW modifier). I split the vertices of the model in the form of a primitive shape (Box, Plane, etc.) and set UV coordinates. However, in the process of duplicating the normal vector, I ran into a problem. For example, there is a box model with 8 vertices. We need to split the vertices into 24 in order to map the texture to the 6 sides of the box and set normal vectors separately. In the case of the box, the normal vector calculation is simple: after splitting the vertices, calculate the normal vector for each triangle. The problem is when this algorithm is applied to the sphere model. Split the vertices of the sphere model into a Box shape (Box is one of the mapping shapes of the UVW Map modifier). Setting the UV coordinates is fine. The problem is the normal vector setting of the seam vertices. Simply calculating the normal vector for each triangle, the normal vectors of the vertices of the seam are different (duplicated vertices have the same position). If the normal vectors are different, the lighting is not applied properly. Of course, this is solved by calculating the normal vector of the sphere model before splitting the vertices. However, this method cannot be applied to a box model with 8 vertices: the three split vertices of the box must each have a different normal vector. In summary, split vertices may have different normal vectors or may have the same normal vector. How would you solve this? (I used the Box and Sphere models as examples, but in practice it should be applicable to a variety of models.)
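One common way to unify the two cases is a smoothing-angle threshold, as modeling packages do: coincident vertices share an averaged normal only when their face normals differ by less than the threshold, so a sphere's seam gets smoothed while a box's corners stay hard. A minimal sketch (names are mine, not from any SDK):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Decide whether two coincident (same-position) vertices should share a
// smoothed normal. If the angle between their face normals is below the
// smoothing threshold, both get the averaged normal (sphere-like seams);
// otherwise each keeps its own face normal (hard box edges).
bool SmoothIfBelowThreshold(Vec3& na, Vec3& nb, float smoothingAngleRadians)
{
    float cosThreshold = std::cos(smoothingAngleRadians);
    if (Dot(na, nb) >= cosThreshold)
    {
        Vec3 avg = Normalize({ na.x + nb.x, na.y + nb.y, na.z + nb.z });
        na = avg;
        nb = avg;
        return true;
    }
    return false; // keep the crease
}
```

In practice you would group split vertices by position (hash on the position) and run this pairwise or over the whole group with one accumulated average.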
Is there something similar to XMFLOAT2 that has its operators overloaded? Since XMFLOAT2 is just a structure, I'm sure it does not have operator overloading, which is what I need to make things a lot simpler. Is there something like XMFLOAT2 where I can add two, as in (a) + (b)? I also need to use the other arithmetic operators. Thanks
Proper vertex buffer use How are you supposed to use vertex buffers? Say you have 500 distinct deformable shapes/models in the world (i.e. you want to be able to change/delete vertices from the models somewhat arbitrarily as the game progresses). That requires you to refresh the vertex buffers in the frames where a model has become dirty, at least. So how should you handle your vertex buffers, assuming D3D11 interfaces (so vertex buffers are your only option to draw anything)? (1) Store model vertices in CPU RAM, create one vertex buffer at program start, and for each model copy the vertices into the single vertex buffer, then render. (2) Create 500 vertex buffers, update each when necessary, render.
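For what it's worth, option (1) is usually implemented as a ring over one big dynamic buffer: append each model's vertices with D3D11_MAP_WRITE_NO_OVERWRITE and wrap with D3D11_MAP_WRITE_DISCARD, so the driver renames the buffer instead of stalling on the GPU. A CPU-side sketch of just the allocation logic (names are hypothetical):

```cpp
#include <cstddef>

// Ring allocator over one large dynamic vertex buffer. Each model's
// data is appended; when the end is reached, wrap to offset 0 and tell
// the caller to map with DISCARD instead of NO_OVERWRITE so the GPU's
// in-flight copy is not overwritten.
struct RingAllocator
{
    size_t capacity;
    size_t head = 0;

    // Returns the byte offset to write at; 'needsDiscard' tells the
    // caller which D3D11_MAP flag to use on this allocation.
    size_t Allocate(size_t bytes, bool& needsDiscard)
    {
        needsDiscard = false;
        if (head + bytes > capacity)
        {
            head = 0;
            needsDiscard = true;
        }
        size_t offset = head;
        head += bytes;
        return offset;
    }
};
```

The returned offset then becomes the StartVertexLocation (or the IASetVertexBuffers offset) for that model's draw call.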
Single pass separable gaussian blur problem I created a single-pass gaussian blur using an HLSL compute shader. I also want it to be separable, which means that first I perform the blur along the horizontal direction, write out the result to the texture, then perform the vertical blur with the horizontally blurred data. I do this by placing DeviceMemoryBarrier() calls before and after writing out the blur results to the globallycoherent Texture2D. This is my shader:

Texture2D<float4> input : register(t0);
globallycoherent RWTexture2D<float4> input_output : register(u0); // Note: shader requires feature: Typed UAV additional format loads!

[numthreads(16, 16, 1)]
void main(uint3 DTid : SV_DispatchThreadID)
{
    // Query the texture dimensions (width, height)
    uint2 dim;
    input_output.GetDimensions(dim.x, dim.y);

    // Determine if the thread is alive (it is alive when the dispatch thread ID can directly index a pixel)
    if (DTid.x < dim.x && DTid.y < dim.y)
    {
        // Do bilinear downsampling first and write it out
        input_output[DTid.xy] = input.SampleLevel(sampler_linear_clamp, ((float2)DTid + 0.5f) / (float2)dim, 0);
        DeviceMemoryBarrier();

        uint i = 0;
        float4 sum = 0;

        // Gather samples in the X (horizontal) direction
        [unroll]
        for (i = 0; i < 9; ++i)
        {
            sum += input_output[DTid.xy + uint2(gaussianOffsets[i], 0)] * gaussianWeightsNormalized[i];
        }

        // Write out the result of the horizontal blur
        DeviceMemoryBarrier();
        input_output[DTid.xy] = sum;
        DeviceMemoryBarrier();

        sum = 0;

        // Gather samples in the Y (vertical) direction
        [unroll]
        for (i = 0; i < 9; ++i)
        {
            sum += input_output[DTid.xy + uint2(0, gaussianOffsets[i])] * gaussianWeightsNormalized[i];
        }

        // Write out the result of the vertical blur
        DeviceMemoryBarrier();
        input_output[DTid.xy] = sum;
    }
}

The problem is that the result flickers a bit and has some errors in the image, too. It seems a bit like a thread group can't see writes by other groups. But there is a globallycoherent modifier before the RWTexture2D, which should flush the entire resource so that writes are visible in every thread group (MSDN). Indeed, if I remove that modifier, then the flickering becomes a whole lot worse than if I leave it there. Here is a screenshot of the problem (notice the lines on the windmill; it also flickers on the whole image from time to time, which is not visible in a still shot). Anyone here have an idea what I can do about it? (PS: the blur is performed when creating mipmaps, so I very much want to avoid multiple passes because it is already one pass for each mip.)
Resizing D3D Buffers within a frame I have a particle system. So far it worked like this: I have a dynamic vertex buffer for a system, which is created with a size that can hold, for example, 100 000 particles. I map/unmap this and write the new data into it every frame. But what if the particle count gets bigger than the buffer can hold? I thought of recreating the vertex buffer with double its previous capacity (then map/unmap into it). Is this the right direction, or should I solve it in a different way? A short example:

ID3D11Buffer* buffer;
ID3D11Device* graphicsDevice;
// ...
D3D11_BUFFER_DESC desc;
buffer->GetDesc(&desc);
if (dataSize > (int)desc.ByteWidth)
{
    // data can't fit, so destroy and recreate
    buffer->Release();
    desc.ByteWidth *= 2;
    graphicsDevice->CreateBuffer(&desc, nullptr, &buffer); // returns S_OK
}

Update: I want to use it for other things, not just particles, but for example instanced meshes. If I spawn a couple of instances, I'd only like to resize a buffer, without creating another one.
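Geometric (doubling) growth like this is the standard amortized strategy, the same one std::vector uses: a sketch of just the capacity computation, separated out so a single buffer can serve particles and instances alike (function name is mine):

```cpp
#include <cstdint>

// Compute the new byte width for a dynamic buffer that must hold
// 'required' bytes. Doubling from the current size gives amortized
// O(1) growth, so the buffer is recreated only O(log n) times over the
// lifetime of the system instead of once per overflow.
uint32_t GrowByteWidth(uint32_t current, uint32_t required)
{
    uint32_t width = (current == 0) ? 1 : current;
    while (width < required)
        width *= 2;
    return width;
}
```

A single doubling (ByteWidth *= 2) is not always enough if the data more than doubled in one frame; looping until the requirement fits, as above, avoids recreating the buffer twice in the same frame.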
DX11 application running on Windows XP using only DX9? I'm developing an application that utilizes DX11. I know that DX11 is only available on Windows 7 (and Vista with a service pack). I wonder if there is some way to run the application on Windows XP and use only the old DX9? I need to prevent the loading of DX11 DLLs when on Windows XP. How can this be done? I've got an old renderer that runs entirely on DX9 and a new renderer that uses DX11. My idea is to run the old renderer under Windows XP and load only DX9 DLLs, and to run the new renderer on Windows Vista/7 and load the DX11 stuff as well. I've heard about a LoadLibrary() function that loads libraries at runtime. So far I have my Visual Studio project with all the DX11 .lib files in Additional Dependencies. How should I change this to load them completely at runtime? Do I need to define all symbols in the DX11 .dlls manually?
Performance of ClearRenderTargetView While profiling GPU usage in VS2017, I've noticed a strange disproportion in performance of ClearRenderTargetView compared to ClearDepthStencilView. What can cause this four-orders-of-magnitude difference? My first idea was that the 1.0f I'm writing to the depth buffer is a special optimized value, so I've tried 0.0f and 0.4f, but it seems to affect nothing. Then I tried to change the clear color from (0,0,0,1) to (0,0,0,0), to no avail. Googling for ClearRenderTargetView performance brings up nothing. The only thing I can think of is that the target texture is somehow affected by the D3D9 interop it participates in:

private void InitRenderTarget(int width, int height)
{
    var renderTargetDescr = new Texture2DDescription()
    {
        Width = width,
        Height = height,
        MipLevels = 1,
        ArraySize = 1,
        SampleDescription = new DxgiSampleDescription(1, 0),
        Usage = ResourceUsage.Default,
        CpuAccessFlags = CpuAccessFlags.None,
        // The following are mandatory for WPF interop
        Format = DxgiFormat.B8G8R8A8_UNorm,
        BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
        OptionFlags = ResourceOptionFlags.Shared
    };
    renderTarget = new Texture2D(device, renderTargetDescr);
    var renderTargetViewDescr = new RenderTargetViewDescription
    {
        Format = renderTargetDescr.Format,
        Dimension = RenderTargetViewDimension.Texture2D
    };
    renderTargetView = new RenderTargetView(device, renderTarget, renderTargetViewDescr);
}

private void SetupWpfInterop()
{
    surfaceD3D9 = d3d9.DXDevice.GetSharedD3D9(renderTarget).GetSurfaceLevel(0);
    targetImage.Lock();
    targetImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfaceD3D9.NativePointer);
    targetImage.Unlock();
}

But then, calls to DrawIndexed would likely also be affected, which doesn't seem to be the case (or I just misinterpret the profiler diagrams).
Fbx SDK Importer issue (texture uv related) I am using the latest Autodesk FBX Importer SDK, but whatever I do, I am unable to get the UVs right. Some parts are textured properly while others are not. I am using Direct3D9 and Direct3D11 (same result in both). 360 image: https://i.gyazo.com/5a2e5f6e127521915508c9c300eb03e5.mp4 The model uses a single texture and a single material shared among 4 meshes. Is there someone who sees immediately what the problem could be? Or is there someone who can replicate the issue for me and figure out what I am missing? FBX test file: http www.4shared.com rar o WG0Crpce Peach64FBX.html My UV reading method:

int vertexCounter = 0;
for (int j = 0; j < nbPolygons; ++j)
{
    for (int k = 0; k < 3; ++k)
    {
        int vertexIndex = pFbxMesh->GetPolygonVertex(j, k);
        Vector2 uv;
        readUV(pFbxMesh, vertexIndex, pFbxMesh->GetTextureUVIndex(j, k), uv);
        pVertices[vertexIndex].uv.x = uv.x;
        pVertices[vertexIndex].uv.y = 1.0 - uv.y;
        ++vertexCounter;
    }
}

void readUV(fbxsdk::FbxMesh* pFbxMesh, int vertexIndex, int uvIndex, Vector2& uv)
{
    fbxsdk::FbxLayerElementUV* pFbxLayerElementUV = pFbxMesh->GetLayer(0)->GetUVs();
    if (pFbxLayerElementUV == nullptr)
        return;
    switch (pFbxLayerElementUV->GetMappingMode())
    {
    case FbxLayerElementUV::eByControlPoint:
        switch (pFbxLayerElementUV->GetReferenceMode())
        {
        case FbxLayerElementUV::eDirect:
        {
            fbxsdk::FbxVector2 fbxUv = pFbxLayerElementUV->GetDirectArray().GetAt(vertexIndex);
            uv.x = fbxUv.mData[0];
            uv.y = fbxUv.mData[1];
            break;
        }
        case FbxLayerElementUV::eIndexToDirect:
        {
            int id = pFbxLayerElementUV->GetIndexArray().GetAt(vertexIndex);
            fbxsdk::FbxVector2 fbxUv = pFbxLayerElementUV->GetDirectArray().GetAt(id);
            uv.x = fbxUv.mData[0];
            uv.y = fbxUv.mData[1];
            break;
        }
        }
        break;
    case FbxLayerElementUV::eByPolygonVertex:
        switch (pFbxLayerElementUV->GetReferenceMode())
        {
        // Always enters this part for the example model
        case FbxLayerElementUV::eDirect:
        case FbxLayerElementUV::eIndexToDirect:
            uv.x = pFbxLayerElementUV->GetDirectArray().GetAt(uvIndex).mData[0];
            uv.y = pFbxLayerElementUV->GetDirectArray().GetAt(uvIndex).mData[1];
            break;
        }
        break;
    }
}

I am doing v = 1.0 - uv.y because I am using Direct3D11. NOTE: the MappingMode is always eByPolygonVertex and the ReferenceMode is always eIndexToDirect. Rendering info: rendered as a triangle list; UV wrapping mode: Wrap (Repeat); culling: None.
How can I split my terrain into quads so that each quad would have a renderable vertex index buffer? (DirectX11, C ) I am creating a quad tree to store my terrain in chunks and currently have the implementation working to an extent. I am currently starting with a grid of triangle pairs that make squares and splitting this down into quads. I have my vertices split into quads, but I'm struggling with the indices. Here is how I generate the plane snip This gives me one list of indices, and one list of vertices. This is taken by the quadtree and split into different vertex lists and stored in different nodes. This works fine and the vertices seem to be correct, however I'm not sure how to create the indices for each vertex list as before I was doing it based on the height and width of the whole terrain. How would I create an indices list for each node in the quadtree or is there a better way to do this? If I just use the whole list of indices for each node, I get weird results Edit There is an issue generating my plane, I am fixing this and seeing if it resolves my problem. Edit I have edited the topic title to something more suitable. I have fixed my issue with the grid generation and I am using a smaller grid to make the issues more clear. I am using a 2x2 grid (9 verts, 24 indices, 8 tris) which is split into 4 quads (4 verts per quad, 2 tris per quad) and it's almost working, just a small issue with one of the sides. I know this is potentially duplicating verts on a small scale, but on a larger scale with culling, this should actually save performance. Can anyone spot what's going on? Vertex issue? When splitting the quads, should it be by triangle, vertices, or indices? What happens if we split by triangle and a triangle overlaps 2 quads (although this shouldn't happen)
HLSL Shader Optimization with mad(m,a,d) I understand that the expression x = m * a + d is most efficiently written as x = mad(m, a, d), because at the assembly level only one instruction is needed rather than a separate multiply and add. My question regards optimally writing the expression x += m * a. Should this be written as x = mad(m, a, x), or left as x += m * a? The difference is too subtle to profile, but I'm wondering if anyone can see the difference at the assembly level. (I don't know how to view the assembly code.)
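For intuition, the C++ analogue of HLSL's mad() is std::fmaf, which computes m * a + x as one fused operation, the same thing a GPU's MAD/FMA unit does in a single instruction (and, in my experience, HLSL compilers typically fuse x += m * a into a mad on their own, so the two spellings usually produce identical assembly; that is an observation, not a guarantee):

```cpp
#include <cmath>

// C++ analogue of HLSL's mad(): x += m * a as one fused multiply-add.
// std::fmaf(m, a, x) computes m * a + x with a single rounding, which
// can also produce a slightly different result than the separate
// multiply-then-add (one rounding step instead of two).
float AccumulateFused(float x, float m, float a)
{
    return std::fmaf(m, a, x);
}
```

To actually see the difference, compile the shader with fxc /Fc (or dxc -Fc) and read the disassembly listing for mad vs. mul+add.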
Create Render Target View errors 138/140, need to bind the back buffer to the render target view (DirectX 11) This is the error:

D3D11 ERROR: ID3D11Device::CreateRenderTargetView: A RenderTargetView cannot be created of a Resource that did not specify the RENDER_TARGET BindFlag. [STATE_CREATION ERROR #138: CREATERENDERTARGETVIEW_INVALIDRESOURCE]
D3D11 ERROR: ID3D11Device::CreateRenderTargetView: Returning E_INVALIDARG, meaning invalid parameters were passed. [STATE_CREATION ERROR #140: CREATERENDERTARGETVIEW_INVALIDARG_RETURN]

So as I see here, I need to set the BindFlags of the back buffer to D3D11_BIND_RENDER_TARGET, but in all the tutorials I saw, nobody is doing it; they all create the render target view using this code:

hr = m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&m_backBuffer));
if (FAILED(hr)) MessageBox(0, L"Failed get buffer", 0, 0);
hr = m_device->CreateRenderTargetView(m_backBuffer, NULL, &m_renderTargetView);
if (FAILED(hr)) MessageBox(0, L"Failed to create rendertargetview", 0, 0);

And nobody, nowhere, is doing what the MSDN site says: that the render target view must be created with this specific flag, D3D11_BIND_RENDER_TARGET. So I tried doing it by myself. I created a texture2d desc like this for the back buffer:

D3D11_TEXTURE2D_DESC backBufferDescription;
backBufferDescription.BindFlags = D3D11_BIND_RENDER_TARGET;
backBufferDescription.ArraySize = 1;
backBufferDescription.CPUAccessFlags = 0;
backBufferDescription.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
backBufferDescription.Height = 600;
backBufferDescription.Width = 800;
backBufferDescription.MipLevels = 1;
backBufferDescription.MiscFlags = 0;
backBufferDescription.SampleDesc.Quality = 0;
backBufferDescription.SampleDesc.Count = 1;
backBufferDescription.Usage = D3D11_USAGE_DEFAULT;
hr = m_device->CreateTexture2D(&backBufferDescription, NULL, &m_backBuffer);
if (FAILED(hr)) MessageBox(0, L"Failed back buffer desc", 0, 0);

And it doesn't work all the same. Help please.
Is object space the same as local space? I was working in DirectX 11 and was wondering: is local space the same as object space, and if not, what is object space?
What is the AlphaToCoverage blend state useful for? Alright, just finished most of my early UI stuff and I wanted the windows to have some transparency. So I expanded my application to initialize and bind blend states so that my UI shader could implement alpha blending. I was initially having no luck, but I got the blend state configured to achieve my purpose; however, the way it is configured doesn't make sense to me, so I must be missing something. Here is how I initialize my blend state:

bool BlendState::StartUp()
{
    D11DeviceManager* pDeviceManager = D11DeviceManager::GetSingleton();
    D3D11_BLEND_DESC desc;
    desc.AlphaToCoverageEnable = false;
    desc.IndependentBlendEnable = false;
    desc.RenderTarget[0].BlendEnable = true;
    desc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_COLOR;
    desc.RenderTarget[0].DestBlend = D3D11_BLEND_DEST_COLOR;
    desc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_DEST_ALPHA;
    desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = 7;
    return !FAILED(pDeviceManager->GetDevice()->CreateBlendState(&desc, &pBlendState));
}

Right now my alpha blending only works when the AlphaToCoverage bool is set to false... I thought this was just a bool enabling the pixel fragment blending (which upon retrospection would be redundant considering the BlendEnable flags...), but looking at the MSDN documentation, it appears to do this: "You can use the AlphaToCoverageEnable member of D3D11_BLEND_DESC1 or D3D11_BLEND_DESC to toggle whether the runtime converts the .a component (alpha) of output register SV_Target0 from the pixel shader to an n-step coverage mask (given an n-sample RenderTarget). The runtime performs an AND operation of this mask with the typical sample coverage for the pixel in the primitive (in addition to the sample mask) to determine which samples to update in all the active RenderTargets." Could someone explain this to me and how it differs from when the AlphaToCoverageEnable bool is false? I see that it adds a new mask to the fragment, but I don't quite follow what the mask exactly is.
How to blend multiple normal maps? I want to achieve a distortion effect which distorts the full screen. For that I spawn a couple of images with normal maps. I render their normal map part on some camera-facing quads onto a render target which is cleared with the color (127,127,255,255). This color means that there is no distortion whatsoever. Then I want to render some images like this one onto it. If I draw one somewhere on the screen, it looks correct because it blends in seamlessly with the background (which is the same color that appears on the edges of this image). If I draw another one on top of it, then it is no longer a seamless transition. For this I created a blend state in DirectX 11 that keeps the maximum of two colors, so it is now a seamless transition, but this way, colors lower than 127 (0.5f normalized) do not contribute. I am not making a simulation, and the effect looks quite convincing and nice for a game, but in my spare time I am thinking about how I could achieve a nicer or more correct effect with a blend state, maybe averaging the colors somehow? If I did it with a shader, I would add the colors and then normalize them, but I need to combine an arbitrary number of images onto a render target. This is my blend state now, which blends them seamlessly but not correctly:

D3D11_BLEND_DESC bd;
bd.RenderTarget[0].BlendEnable = true;
bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_MAX;
bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_MAX;
bd.RenderTarget[0].RenderTargetWriteMask = 0x0f;

Is there any way of improving upon this? (PS: I considered rendering each one with a separate shader incrementally on top of each other, but that would consume a lot of render targets, which is unacceptable.)
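One well-known alternative (often called partial-derivative blending) is to accumulate the signed xy slopes additively and renormalize once in the resolve shader: render n.x/n.z and n.y/n.z into a float target cleared to zero with additive blending (SrcBlend and DestBlend both D3D11_BLEND_ONE, BlendOp ADD), then reconstruct the normal when sampling. A CPU sketch of the math (names are mine):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Combine several tangent-space normals by summing their xy slopes:
// the flat normal (0,0,1) contributes exactly zero, so any number of
// maps can be stacked in any order and normalized once at the end.
Vec3 BlendNormals(const std::vector<Vec3>& normals)
{
    float x = 0.0f, y = 0.0f;
    for (const Vec3& n : normals)
    {
        // z is the "up" component; x/z and y/z are the distortions.
        x += n.x / n.z;
        y += n.y / n.z;
    }
    float len = std::sqrt(x * x + y * y + 1.0f);
    return { x / len, y / len, 1.0f / len };
}
```

This maps to a single additive blend state plus one normalize in the final distortion shader, so below-0.5 channels contribute correctly (they become negative slopes) and no extra render targets are needed.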
How do I use _com_ptr_t with RenderTargetView and DepthStencilView? I have successfully used _com_ptr_t with the ID3D11Device and IDXGISwapChain, but when applying the same reasoning to the RenderTargetView and DepthStencilView, the function m_spD3DImmediateContext->OMSetRenderTargets(...) sets the m_spRenderTargetView smart COM pointer to null! Then, subsequent draw calls fail on ClearRenderTargetView and ClearDepthStencilView. Is it because I am passing the smart pointer incorrectly?

HR(m_spD3DDevice->CreateRenderTargetView(pBackBuffer, 0, &m_spRenderTargetView));
// ...
HR(m_spD3DDevice->CreateTexture2D(&stDepthStencilDesc, 0, &m_spDepthStencilBuffer));
// ...
m_spD3DImmediateContext->OMSetRenderTargets(1, &m_spRenderTargetView, m_spDepthStencilView);
assert(m_spRenderTargetView); // <- FAIL

I think the smart pointer overloads operator& so that it returns an Interface** (see Extractors in the _com_ptr_t class).
How can I check the shader model capabilities of an adapter? I'm writing an application that targets Direct3D11 (through SlimDX) and shader model 5. When I'm running it on a system that doesn't have SM5 capable hardware, I will get a NullReferenceException when trying to access the techniques in the compiled effect instance. How can I check if the adapter is capable of this before I even attempt to use any of these features?
Adding mesh objects to procedural isosurface terrain Thanks again for reading! So following on from my last question, I have my fully working isosurface terrain, and now it's time to add my trees and grass and whatever to the world. The old way I was doing it was to cast rays in a grid facing down over the terrain; I would then read the normal from the ray hit and place things if the normal was less than a value. I would do this at different grid resolutions for each plant/tree/grass type, offset their x, z and scale by some random value, then add the object to my 2D BV tree, because I was only working with heightmap terrain, so I only had one row of bounding volumes for the tree. I can still use the above method, but as you know, it will only place objects on parts of the terrain that can see the sky. Part of building my new terrain system involved making a real octree, so I know that I will use that to store my objects, but what is the best way to generate my object positions? From what I have read around, I should somehow use the voxels I build, but I'm not really sure how I would go about it.
Fast fullscreen quad rendering in Direct3D 11? For the last few weeks, I've been trying to port a DX9 implementation of HDR rendering (tone mapping, bloom, stars, etc.) over to DX11. I believe I've got all features working but I'm not getting good enough performance. I'd like to be able to render the whole effect in under 4ms on a fairly low powered GPU, but using D3D11 Queries I'm noticing that it takes 0.5ms to just render a fullscreen quad with a solid color, and 1.0ms to render a fullscreen texture! And because tone mapping is the only part of the effect that uses a fullscreen texture, this makes it the most expensive! I'm already doing some optimisations with my limited graphics knowledge, I've disabled blending and depth testing, I make sure that the texture sampler uses sensible filtering settings, and I'm pretty sure that the effects of any state changes are negligible. I've heard that rendering 1 oversized triangle instead of 2 can yield some improvements, but I'm not sure if that will help me in this situation. Basically, does anyone have any suggestions to speed up rendering of a textured quad?
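The single oversized triangle mentioned above is usually generated straight from SV_VertexID with no vertex buffer bound at all, which also removes the diagonal seam of a two-triangle quad (pixels along the seam get quad-edge overhead twice). A CPU sketch of the three clip-space corners it produces:

```cpp
// Clip-space positions for the classic single fullscreen triangle,
// as a vertex shader would compute them from SV_VertexID alone:
//   id 0 -> (-1,-1), id 1 -> (-1, 3), id 2 -> ( 3,-1)
// The triangle overhangs the viewport and the rasterizer clips it,
// so the whole screen is covered by one primitive with no seam.
struct Float4 { float x, y, z, w; };

Float4 FullscreenTrianglePosition(unsigned vertexId)
{
    float x = (vertexId == 2) ? 3.0f : -1.0f;
    float y = (vertexId == 1) ? 3.0f : -1.0f;
    return { x, y, 0.0f, 1.0f };
}
```

The draw is then just Draw(3, 0) with a null input layout; whether it beats the quad measurably depends on the GPU, but it costs nothing to try.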
How can I prevent other applications from interrupting my game's exclusive fullscreen mode? I am developing a game using D3D 11. When I got a pop up message from a chat client (HipChat), my game's full screen mode is disabled because IDXGISwapChain Present returns DXGI STATUS OCCLUDED. How can I avoid or prevent this? I don't want my game's exclusive full screen access interrupted.
Restricting movement to 3D axis A cube is constructed in 3D and can be rotated to view any side. I want to be able to drag the cube to any position with the mouse while keeping the Y coordinate the same. I used project and unproject functions to map the mouse to the 3D world. This method seems to work fine when the green face is parallel to the camera. However, when I rotate the cube, say, 45 degrees around the x axis and the cube face is now at an angle to the camera, the Y coordinate does not track the mouse. When the camera is parallel to the red face, the tracking in the Y axis is almost nothing. When I don't fix the Y coordinate, the cube tracks the mouse perfectly; however, the Y coordinate is changed as well. How can I keep the Y axis fixed and yet have the cube track the mouse?
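A common fix is to stop using the unprojected point directly and instead intersect the mouse ray (built from unprojecting at near and far depth) with the horizontal plane y = the cube's current Y; the hit point then tracks the mouse at any camera angle. A sketch with hypothetical names:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Drag on a fixed-Y plane: intersect the mouse ray origin + t * dir
// with the plane y = planeY. Returns false when the ray is parallel
// to the plane or the plane is behind the camera.
bool IntersectRayWithYPlane(const Vec3& origin, const Vec3& dir,
                            float planeY, Vec3& hit)
{
    if (std::fabs(dir.y) < 1e-6f)
        return false;
    float t = (planeY - origin.y) / dir.y;
    if (t < 0.0f)
        return false; // intersection is behind the ray origin
    hit = { origin.x + t * dir.x, planeY, origin.z + t * dir.z };
    return true;
}
```

Here origin would be the unprojected near-plane point and dir the normalized difference between the unprojected far- and near-plane points for the same mouse position.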
Clipping wrong results in C++ / DirectX 11 I'm trying to do some clipping but having some weird results. The triangle is not getting clipped the way it should be. Here are some images; the second image is how it should look. Video of what is happening: VIDEO OF WHAT IS HAPPENING. What is wrong with my code that is making the triangle clip the wrong way?

// CLIPPING ALGORITHM
void ClipVertex(const Vector3& end)
{
    float BCend = mPlane.Test(end);
    bool endInside = (BCend > 0.0f);
    if (!mFirstVertex)
    {
        // if one of the points is inside
        if (mStartInside || endInside)
        {
            // if the start is inside, just output it
            if (mStartInside)
                mClipVertices[mNumVertices++] = mStart;
            // if one of them is outside, output clip point
            if (!(mStartInside && endInside))
            {
                float t;
                Vector3 output;
                if (endInside)
                {
                    t = BCend / (BCend - mBCStart);
                    output = end - t * (end - mStart);
                }
                else
                {
                    t = mBCStart / (mBCStart - BCend);
                    output = mStart + t * (end - mStart);
                }
                mClipVertices[mNumVertices++] = output;
            }
        }
    }
    mStart = end;
    mBCStart = BCend;
    mStartInside = endInside;
    mFirstVertex = false;
}

// PLANE EQUATION
Plane clipPlane(1.0f, 0.0f, 0.0f, 0.0f);

In DirectX 11, in a left-handed system.
Downscaling texture via mipmap Copied from Computer Graphics SE. I am implementing a post processing effect in my DirectX 11 pet renderer. The post processing pass is implemented by rendering a full screen quad covered with texture containing original rendered image, which works as it should, but I have problems with downscaling the texture. The non processed testing scene looks like this (three very bright emmissive spheres) I see no problem at this stage, but when I run the first post processing pass, which just down scales the image by the factor of 8 using the texture sampler, the result is very flickery (up scaled for clarity) I expected a mipmap would solve or at least reduce the flickering, but it didn't change a thing. What am I doing wrong? RenderDoc Update After investigating the issue using RenderDoc I found that the mipmap is being generated successfully and it's third level looks like this However, the output of the down scaling pass looks like this As if the sampler didn't use the mipmap at all. Don't get distracted by coloured object instead almost white ones. I lowered the sphere brightness a bit while investigating the bug. Even if I choose the mipmap level explicitly float4 vColor s0.SampleLevel(LinearSampler, Input.Tex, 3) it changes nothing RenderDoc also says "LOD Clamp 0 0" for the used sampler. What is it? Couldn't this be the problem? 
DirectX details. Samplers:

    D3D11_SAMPLER_DESC descSampler;
    ZeroMemory(&descSampler, sizeof(descSampler));
    descSampler.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
    descSampler.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
    descSampler.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
    descSampler.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    mDevice->CreateSamplerState(&descSampler, &mSamplerStateLinear);
    descSampler.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
    hr = mDevice->CreateSamplerState(&descSampler, &mSamplerStatePoint);

...are set right before rendering the screen quad:

    ID3D11SamplerState* aSamplers[] = { mSamplerStatePoint, mSamplerStateLinear };
    mImmediateContext->PSSetSamplers(0, 2, aSamplers);

...and used within the downscaling pixel shader:

    SamplerState PointSampler  : register(s0);
    SamplerState LinearSampler : register(s1);
    Texture2D s0 : register(t0);

    float4 Pass1PS(QUAD_VS_OUTPUT Input) : SV_TARGET
    {
        return s0.Sample(LinearSampler, Input.Tex);
    }

Texture:

    D3D11_TEXTURE2D_DESC descTex;
    ZeroMemory(&descTex, sizeof(D3D11_TEXTURE2D_DESC));
    descTex.ArraySize = 1;
    descTex.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    descTex.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
    descTex.Usage = D3D11_USAGE_DEFAULT;
    descTex.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    descTex.Width = width;
    descTex.Height = height;
    descTex.MipLevels = 0;
    descTex.SampleDesc.Count = 1;
    device->CreateTexture2D(&descTex, nullptr, &tex);

...its render target view:

    D3D11_RENDER_TARGET_VIEW_DESC descRTV;
    descRTV.Format = descTex.Format;
    descRTV.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
    descRTV.Texture2D.MipSlice = 0;
    device->CreateRenderTargetView(tex, &descRTV, &rtv);

...its shader resource view:

    D3D11_SHADER_RESOURCE_VIEW_DESC descSRV;
    ZeroMemory(&descSRV, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
    descSRV.Format = descTex.Format;
    descSRV.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    descSRV.Texture2D.MipLevels = (UINT)1;
    descSRV.Texture2D.MostDetailedMip = 0;
    device->CreateShaderResourceView(tex, &descSRV, &srv);

Explicit generation of the mipmap is called after the scene has been rendered into the texture and another texture has been set as the render target:

    ID3D11RenderTargetView* aRTViews[1] = { mPass1Buff.GetRTV() };
    mImmediateContext->OMSetRenderTargets(1, aRTViews, nullptr);

    mImmediateContext->GenerateMips(mPass0Buff.GetSRV());

    ID3D11ShaderResourceView* aSRViews[1] = { mPass0Buff.GetSRV() };
    mImmediateContext->PSSetShaderResources(0, 1, aSRViews);

The code is compiled in debug, the D3D device was created with the D3D11_CREATE_DEVICE_DEBUG flag, and I get no runtime errors on the console.
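For what it's worth, one thing worth double-checking in the setup above is the SRV: with descSRV.Texture2D.MipLevels = 1, the view exposes only mip 0, so Sample and SampleLevel can never reach the generated levels (SampleLevel clamps to the view's mip range, which would also explain the "LOD Clamp 0 0" RenderDoc reports). Setting it to (UINT)-1 exposes the whole remaining chain. As a sanity check, this sketch (helper name is illustrative) computes how many levels MipLevels = 0 in the texture desc actually requests:

```cpp
#include <algorithm>

// Number of mip levels D3D11 allocates when the texture desc asks for a
// full chain (MipLevels = 0): halve the largest dimension down to 1,
// counting the base level as level one.
unsigned FullMipChainLength(unsigned width, unsigned height)
{
    unsigned levels = 1;
    unsigned size = std::max(width, height);
    while (size > 1)
    {
        size /= 2;
        ++levels;
    }
    return levels;
}
```

So a 1024x768 render target gets 11 mip levels, and an SRV limited to one level hides ten of them.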
40
DX11 swap chain is 1 frame behind when presenting to screen and using multisampling

After adding multisampling to a DirectX 11 project, I noticed that the screen was no longer updating when calling IDXGISwapChain::Present. Further testing showed that it was in fact updating the screen, but it was always 1 frame behind: if I added a line to the scene and presented it, I wouldn't see that line until the next Present call. This behavior is specific to the use of multisampling; if I turn multisampling off, the behavior goes away. It is also specific to my Intel display adapter; if I use my Nvidia display adapter, the behavior goes away. The multisampling settings are determined using ID3D11Device::CheckMultisampleQualityLevels, and the creation of the back buffer and depth stencil works, so I don't think the settings themselves are incorrect in any way. The multisampling also works; the end result is properly multisampled. I have played with various multisampling settings within the available range, and all settings produce the same behavior. Just in case I'm doing something wrong, I tried using the debug layer, but got no warnings or errors. Through hours of testing, I found two workarounds.

1) Call Present twice:

    HRESULT result = swapChain->UnmanagedPointer->Present(0, 0);
    if (result != S_OK)
        ExceptionThrower::Throw(result, "Failed to present the swap chain buffer.");
    result = swapChain->UnmanagedPointer->Present(0, 0);
    if (result != S_OK)
        ExceptionThrower::Throw(result, "Failed to present the swap chain buffer.");

When I do this, the second Present call puts the correct information on screen.

2) Call ID3D11DeviceContext::Flush after Present:

    HRESULT result = swapChain->UnmanagedPointer->Present(0, 0);
    if (result != S_OK)
        ExceptionThrower::Throw(result, "Failed to present the swap chain buffer.");
    d3dDeviceContext->Flush();

After the Flush call, the correct information is on screen.
As I understand it, both of these workarounds incur a significant penalty, so I would rather find a better solution (if one exists). I especially dislike these workarounds because they penalize everyone; there is no way for me to know when this behavior is happening. I have a feeling that I may be dealing with a driver bug, since this is only an issue with the Intel adapter, and I hate to force all scenarios to add this overhead when, for most, it won't be necessary. Oh, and in case someone asks: I have updated the Intel drivers to the most recent, but it's an old adapter, so these drivers date back to 2012. Has anyone seen this type of behavior before? Is there a better workaround, or perhaps a solution I'm not aware of? If forced to use one of the two workarounds, which would be the best? I'm leaning towards the second (the Flush call), but I'm not sure. In case it's helpful, here is my swap chain creation code:

    DXGI_SWAP_CHAIN_DESC desc;
    ZeroMemory(&desc, sizeof(DXGI_SWAP_CHAIN_DESC));
    desc.BufferDesc.Width = width;
    desc.BufferDesc.Height = height;
    desc.BufferDesc.RefreshRate.Numerator = 60;
    desc.BufferDesc.RefreshRate.Denominator = 1;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    desc.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 1;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.OutputWindow = outputWindow;
    desc.Windowed = true;
    desc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
    desc.Flags = 0;
    if (useMultiSampling)
    {
        desc.SampleDesc.Count = multiSampleCount;
        desc.SampleDesc.Quality = multiSampleQuality - 1;
    }
40
DX11 Handle Device removed

I get the DXGI_ERROR_DEVICE_REMOVED error on some machines. According to this MSDN article (https://msdn.microsoft.com/en-us/windows/uwp/gaming/handling-device-lost-scenarios), this can happen and should be handled by your application. I've managed to recreate the device, but I'm unsure how to handle all the content. It seems I have to create all vertex buffers and textures again, which essentially means I have to reload almost the entire scene. Is this really the correct way?
40
How can I render a 2D image to my screen using DirectX 11?

I am trying to render an image to my window using DirectX, but I don't really want a "3D" world as such, and I don't really want to set up all the vertex/index buffers, as I won't need them. I am performing all of my calculations and operations using compute shaders, which return me an image. As the file I/O is quite a bottleneck, I am looking to just render the image directly to the screen to increase performance. My question is: how would I go about this using DirectX 11? I have looked at some examples already, such as RasterTek and other questions on SO, but they are either overcomplicating the matter or they want a solution for font drawing etc. At the moment I am able to produce a blank screen using the following code, and the screen size is the size of the image retrieved from the GPU. I am looking to draw this entire image to my back buffer or some render target, and then display it.

    void Application::Draw()
    {
        float ClearColor[4] = { 0.5f, 0.125f, 0.3f, 1.0f };
        context->ClearRenderTargetView(pRenderTargetView, ClearColor);

        // compute shader operations, retrieve image data from gpu
        // draw image to back buffer / render target, then present

        swapChain->Present(0, 0);
    }

From "Why can't I write to my render targets?", I have tried the solution using:

    ID3D11Resource* backBufferResource;
    pRenderTargetView->GetResource(&backBufferResource); // backbuffer render target
    context->CopyResource(backBufferResource, gpuResource);

This doesn't seem to throw any errors, but it doesn't change the output on the screen. It is mentioned that the back buffer must be the exact same format as the image we're trying to copy to it, so this may be the issue.
My swap chain desc is as follows:

    DXGI_SWAP_CHAIN_DESC sd;
    ZeroMemory(&sd, sizeof(sd));
    sd.BufferCount = 1;
    sd.BufferDesc.Width = WindowWidth;
    sd.BufferDesc.Height = WindowHeight;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // FORMAT DIFFERENT
    sd.BufferDesc.RefreshRate.Numerator = 60;
    sd.BufferDesc.RefreshRate.Denominator = 1;
    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.OutputWindow = hWnd;
    sd.SampleDesc.Count = 1;
    sd.SampleDesc.Quality = 0;
    sd.Windowed = TRUE;

and my image format is DXGI_FORMAT_R32G32B32A32_FLOAT. If I try to change the swap chain format, it throws a memory access violation.
40
Send empty vertex buffer data but keep Vertex Shader Input Structure?

Let's say I have the following structure defined in a header (for reuse):

    struct VertexShaderInput
    {
        float3 Position : POSITION;
        float3 Normal   : NORMAL;
        float2 UV       : TEXCOORD;
        float4 Color    : COLOR;
    };

However, some meshes don't have normals or colors, and some passes (such as a depth pre-pass) don't need Normal or Color. Is there a way to keep the struct definition above in my shaders and simply send "zero" bytes for Normals and Colors?
40
The steps in implementing Bézier triangle patches

What are the steps in creating Bézier triangle patches? What would you do in order to create this in DirectX 11? Say I just input 3 vertices and create a simple triangle: is this enough? Or should I create a triangle with 9 vertices, all of them at different heights so it would make a bumpy triangle, and then apply Bernstein's formulas to make it smooth — so I get a smooth triangle, not an all-bumpy one? My book says:

    Research and implement Bézier triangle patches. Luna, Frank D. (2012-05-21). Introduction to 3D Game Programming with DirectX 11 (Kindle Location 11901). Mercury Learning and Information. Kindle Edition.

So what are the steps you would do in order to accomplish it? Please, no "coulds".
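One common recipe (e.g. the PN-triangles flavour) is: feed the plain 3-vertex triangle to the pipeline as a 3-point control patch, build the 10 control points of a cubic Bézier triangle in the hull shader from the corner positions and normals, and evaluate the surface in the domain shader by weighting the control points with the Bernstein basis over the barycentric domain coordinates. A minimal sketch of that evaluation step, written in C++ so it can be checked (the HLSL version is the same arithmetic):

```cpp
#include <cmath>

// Bernstein basis for a cubic Bézier triangle: B_ijk(u,v,w) with i+j+k = 3,
// where (u,v,w) are barycentric coordinates satisfying u + v + w = 1.
double Bernstein3(int i, int j, int k, double u, double v, double w)
{
    static const int fact[4] = { 1, 1, 2, 6 };
    double coeff = fact[3] / double(fact[i] * fact[j] * fact[k]); // 3!/(i!j!k!)
    return coeff * std::pow(u, i) * std::pow(v, j) * std::pow(w, k);
}

// Sum of all ten basis functions -- must be 1 at any barycentric point,
// which is why the surface interpolates a convex combination of the
// ten control points (the three corners are hit exactly).
double BasisSum(double u, double v, double w)
{
    double s = 0.0;
    for (int i = 0; i <= 3; ++i)
        for (int j = 0; j <= 3 - i; ++j)
            s += Bernstein3(i, j, 3 - i - j, u, v, w);
    return s;
}
```

In the domain shader you would replace BasisSum with a weighted sum of the 10 control point positions using exactly these weights.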
40
Why does my game not update unless I'm moving the mouse?

I'm pretty confused by what's happening. Now that I've finally put something moving on my screen, I notice it doesn't update unless I move the mouse, press a key, or trigger other events. Using PIX, the frame counter only goes up when I fire these events. This is all that happens in my game loop:

    while (GetMessage(&msg, NULL, 0, 0) > 0 && m_isRunning)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
        m_gameGraphics->BeginRender();
        m_gameGraphics->EndRender();
    }

BeginRender clears the screen, and EndRender presents to the swap chain. I thought maybe it was a problem with WndProc, but comparing it to other DirectX 11 game WndProcs, I don't see any major differences. I'm pretty confused; I've never seen this problem before, and I have no idea what causes it. I'm just hoping maybe someone will have some insight on why this might be happening.
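For reference, GetMessage blocks until a window message arrives, so a loop built on it only iterates on input. The usual game-loop fix is PeekMessage, which returns immediately whether or not a message is pending; a sketch (assuming the same m_isRunning and m_gameGraphics members as above):

```cpp
MSG msg = {};
while (m_isRunning)
{
    // Drain all pending messages without blocking.
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        if (msg.message == WM_QUIT)
            m_isRunning = false;
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    // Render a frame every iteration, input or not.
    m_gameGraphics->BeginRender();
    m_gameGraphics->EndRender();
}
```

This fragment is not self-contained (it relies on the Win32 message API and the application's own members); it only shows the loop shape.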
40
Data overwritten in MapSubresource() method

I am trying to dynamically update the vertex buffer in a UWP project using SharpDX, once every time I call the following method, where context is the device context member:

    public void UpdateVertexBuffer(ScatterVertex data)
    {
        DataBox dataBox = this.context.MapSubresource(
            scatterPointVertexBuffer,
            0,
            D3D11.MapMode.WriteNoOverwrite,
            D3D11.MapFlags.None
        );
        var pointer = dataBox.DataPointer;
        pointer = Utilities.WriteAndPosition(pointer, ref data);
        this.context.UnmapSubresource(scatterPointVertexBuffer, 0);
    }

I am expecting to keep the old data during the update process. However, each time I call this method, the previous data is overwritten. I checked the pointer of dataBox.DataPointer and it remains the same value in every call. Using DataStream as output doesn't help either. In either case, if I check the vertex buffer I get only one vertex. But shouldn't the MapSubresource method protect the old data if I choose the WriteNoOverwrite mode? What should I do to keep the previous data during the update?
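A note on the semantics, in case it helps frame answers: WriteNoOverwrite is only a promise to the driver that you will not touch data the GPU may still be reading; Map still returns a pointer to the start of the buffer every time, and nothing remembers where your last write ended. The caller has to keep its own append offset, write at that offset, and fall back to a WriteDiscard map when the buffer is full. A checkable sketch of that bookkeeping (all names are made up for illustration):

```cpp
#include <cstddef>

// Hypothetical append cursor for a dynamic vertex buffer mapped with
// WriteNoOverwrite: the caller writes at the returned byte offset; when
// the buffer would overflow, the caller should map with WriteDiscard
// instead and restart at offset 0.
struct RingCursor
{
    std::size_t capacity = 0;
    std::size_t offset = 0;

    // Returns the byte offset to write at, and sets needsDiscard when the
    // caller must switch this map call to WriteDiscard.
    std::size_t Allocate(std::size_t bytes, bool& needsDiscard)
    {
        needsDiscard = (offset + bytes > capacity);
        if (needsDiscard)
            offset = 0;
        std::size_t writeAt = offset;
        offset += bytes;
        return writeAt;
    }
};
```

In the SharpDX method above, the equivalent fix is to add such an offset to dataBox.DataPointer before calling Utilities.WriteAndPosition, advancing it by the vertex size on every call.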
40
The Pixel Shader unit expects a Sampler configured for default filtering to be set at Slot 0 ...

I don't understand this error. The full output being:

    The Pixel Shader unit expects a Sampler configured for default filtering to be set at Slot 0, but the sampler bound at this slot is configured for comparison filtering.

Here is how I create the sampler state:

    // Skybox sampler description
    D3D11_SAMPLER_DESC skyboxSamplerDesc;
    ZeroMemory(&skyboxSamplerDesc, sizeof(D3D11_SAMPLER_DESC));
    skyboxSamplerDesc.Filter = D3D11_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
    skyboxSamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
    skyboxSamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
    skyboxSamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
    skyboxSamplerDesc.MipLODBias = 0.0f;
    skyboxSamplerDesc.MaxAnisotropy = 16;
    skyboxSamplerDesc.ComparisonFunc = D3D11_COMPARISON_EQUAL;
    skyboxSamplerDesc.MinLOD = 0;
    skyboxSamplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

    // Create the skybox texture sampler state
    hr = g_d3dDevice->CreateSamplerState(&skyboxSamplerDesc, &g_SkyboxSamplerState);
    if (FAILED(hr))
        return false;

The bindings:

    ID3D11SamplerState* samplerStates[2];
    samplerStates[0] = g_SkyboxSamplerState;
    samplerStates[1] = g_PixelDepthSamplerState;
    g_d3dDeviceContext->PSSetSamplers(0, 2, samplerStates);

HLSL side:

    SamplerState sbSamplerState : register(s0)
    {
        Filter = MIN_MAG_LINEAR_MIP_POINT;
        AddressU = CLAMP;
        AddressV = CLAMP;
        AddressW = CLAMP;
        ComparisonFunc = EQUAL;
    };
40
Implementing a Deferred Renderer (Basic Understanding)

I am trying to implement a deferred renderer in Direct3D 11. I am fairly new to this. I already bought a book, Practical Rendering & Computation with Direct3D 11; however, this book doesn't answer many of my questions. The book just says "call one of the Draw commands to execute the pipeline". In the context of a deferred renderer, I would like to know how I can actually render the different G-buffers, merge them, and apply actual lighting to my scene. Let's say my G-buffers should represent diffuse, specular, and normals. I understand that vertex shaders have constant buffers that represent my camera through matrices, and that vertices get transformed in shaders into view space. How do I get my diffuse/specular/normal information out of that? Do I have to execute the rendering pipeline for every G-buffer? Technically, do I just need to transform my vertices once in a VS and then execute my different G-buffer pixel shaders? The context object offers functions like OMSetRenderTargets — the output merger, however, is the last stage of the pipeline, not the first... The book itself just calls Present(0, 0) exactly once and doesn't explain how you actually put things together. Sorry, quite a lot of different questions!
40
Texture coordinate into texel form?

How do I convert a texture coordinate, like (1, 0), into texel form (a single value)? Like in the image below, what is the value of Q22 in texels? I'm in DirectX 11.
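In the D3D11 convention, texel i of a width-w texture has its center at u = (i + 0.5) / w, so going the other way is a multiply plus a half-texel shift. A small sketch (helper names are hypothetical):

```cpp
#include <cmath>

// Which texel a normalized coordinate u falls in, for a texture of the
// given width (D3D11 convention: texel i spans [i/w, (i+1)/w)).
int TexelIndex(float u, int width)
{
    return (int)std::floor(u * width);
}

// Continuous texel-space coordinate, as used by bilinear filtering:
// an integer value here means "exactly on a texel center".
float TexelCoord(float u, int width)
{
    return u * width - 0.5f;
}
```

So for the Q22 sample in a bilinear-filtering diagram, the texel value of a coordinate is TexelCoord(u, width) along each axis, and the four neighbouring texel centers are its floor and floor + 1.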
40
What are the valid DepthBuffer Texture formats in DirectX 11? And which are also valid for a staging resource?

I am trying to read the contents of the depth buffer into main memory so that my CPU-side code can do some stuff with it. I am attempting to do this by creating a staging resource which can be read by the CPU, into which I will copy the contents of the depth buffer before reading it. I keep encountering errors, however, because of what I believe are incompatibilities between the resource format and the view formats. Threads like these lead me to believe it is possible in DX11 to access the depth buffer as a resource, and that I can create a resource with a typeless format and have it interpreted in the view as another format, but I cannot get it to work. What are the valid formats for a resource to be used as the depth buffer? Which of these are also valid for a CPU-accessible staging resource?
40
Is there something similar to XMFLOAT2 that has its operators overloaded?

Since XMFLOAT2 is just a structure, I'm sure it does not have operator overloading, which is what I need to make things a lot simpler. Is there something like XMFLOAT2 where I can add two, like (a) + (b)? I also need to use the * operator. Thanks
40
How can you make custom Direct3D 11 calls in Unreal Engine 4?

I have some custom code that renders to a Direct3D 11 texture. Is it possible to use this texture on an object in Unreal 4? Or, alternatively, is it possible to draw custom geometry directly to the scene in Unreal 4 using raw Direct3D? So far, it seems like I need to make a custom instance of UPrimitiveComponent which creates and returns an FPrimitiveSceneProxy. In FPrimitiveSceneProxy::CreateRenderThreadResources I can call RHIGetNativeDevice, cast its return value to ID3D11Device, and use it to create Direct3D instances; however, I'm not sure where to put my per-frame draw calls.
40
Frustum culling instancing I've implemented instancing in my app and right now I have an instance buffer holding position data for all instances. But I'd like to also implement frustum culling which would cull some of the instances depending on the camera view. Now, how can I render only some of the instances in the buffer but not all? Do I have to rebuild the buffer every frame (so that it only holds visible instances) or is there a way in DirectX 11 to "tell" API which instances from the buffer I want drawn?
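There is no way to tell a plain DrawInstanced call to skip arbitrary elements of the bound instance buffer, so the usual approach is exactly the rebuild you describe: cull on the CPU, write only the survivors into a D3D11_USAGE_DYNAMIC instance buffer via Map with D3D11_MAP_WRITE_DISCARD, and draw with the visible count. (GPU-side alternatives exist too, e.g. a compute-shader cull into an append buffer followed by DrawInstancedIndirect.) A checkable sketch of the CPU cull-and-compact step, using a sphere-vs-plane test against inward-pointing frustum planes (all names are illustrative):

```cpp
#include <vector>

struct Plane { float nx, ny, nz, d; };        // n.p + d >= 0 is "inside"
struct Instance { float x, y, z, radius; };   // per-instance bounding sphere

// Keep only instances whose bounding sphere is at least partly inside all
// frustum planes; the survivors would then be written into the dynamic
// instance buffer with Map(WRITE_DISCARD) before DrawIndexedInstanced.
std::vector<Instance> CullInstances(const std::vector<Instance>& in,
                                    const Plane* planes, int planeCount)
{
    std::vector<Instance> visible;
    for (const Instance& inst : in)
    {
        bool inside = true;
        for (int p = 0; p < planeCount; ++p)
        {
            float dist = planes[p].nx * inst.x + planes[p].ny * inst.y +
                         planes[p].nz * inst.z + planes[p].d;
            if (dist < -inst.radius) { inside = false; break; }
        }
        if (inside)
            visible.push_back(inst);
    }
    return visible;
}
```

Rebuilding a few thousand instances this way each frame is normally cheap compared to drawing them.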
40
Game only runs at 60fps in windowed mode with a 120Hz monitor?

In fullscreen the game will run at 120fps fine, the correct refresh rate for the monitor, but in windowed mode it only runs at 60fps. If I disable VSync then it runs at thousands of fps, so it's not a case of a lack of performance. I've correctly set the refresh rate in the ModeDescription.
40
HLSL Buffer Data Type

I'm working on converting a DX11 shader from a .fx file for use in Unity3D, and I'm a little puzzled by the HLSL Buffer<type> declared in the shader. More specifically, what are these, and how can I implement them in Unity? I'm aware of the Structured, Append, and Consume buffers, but those appear to be different than this, and the Microsoft documentation wasn't too helpful. Is it just like an array that is populated and sized from code before getting assigned to the shader? Are they read-only, or writable as well? So far I'm thinking the closest approximation I can use is a StructuredBuffer, but the .fx file has its own declaration for that as well, so I'm not entirely sure I should go that route. Example:

    Buffer<float4> g_someData : register(t18);
40
FormatMessage not working for HRESULTs returned by Direct3D 11

I am using Windows 7 x64 and Visual Studio 17 (v15.9.7). Say I try to create a swap chain using IDXGIFactory2::CreateSwapChainForHwnd and pass in DXGI_SCALING_NONE. I will get the following message in the debug output (if I have enabled Direct3D debugging):

    DXGI ERROR: IDXGIFactory::CreateSwapChain: DXGI_SCALING_NONE is only supported on Win8 and beyond. DXGI_SWAP_CHAIN_DESC: SwapChainType = ... HWND, BufferDesc = DXGI_MODE_DESC1: Width = 816, Height = 488, RefreshRate = DXGI_RATIONAL: Numerator = 0, Denominator = 1, Format = B8G8R8A8_UNORM, ScanlineOrdering = ... UNSPECIFIED, Scaling = ... UNSPECIFIED, Stereo = FALSE, SampleDesc = DXGI_SAMPLE_DESC: Count = 1, Quality = 0, BufferUsage = 0x20, BufferCount = 2, OutputWindow = 0x0000000000290738, Scaling = ... NONE, Windowed = TRUE, SwapEffect = ... FLIP_SEQUENTIAL, AlphaMode = ... UNSPECIFIED, Flags = 0x0 [MISCELLANEOUS ERROR #175]

The function returns 0x887a0001 in the form of an HRESULT. If I put "err,hr" in the watch window, I get a nice error message there: ERROR_MOD_NOT_FOUND, "The specified module could not be found." However, if I pass this HRESULT to FormatMessage, it just puts NULL in the output and returns 0. "err,hr" helpfully informs me that the new error is ERROR_MR_MID_NOT_FOUND, "The system cannot find message text for message number 0x%1 in the message file for %2." My questions are: Why is FormatMessage not giving me the right error string (the one starting with ERROR_MOD_NOT_FOUND)? Where is Visual Studio getting these pretty error strings from? Can I get them too? Who do I pay? PS: I am using the Windows 10 SDK version of DX11, not the older DirectX SDK version; thus, I can't really link to dxerr.lib either. This is the code that is used to print the error message:

    LPTSTR error_text = NULL;
    FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM |
                  FORMAT_MESSAGE_ALLOCATE_BUFFER |
                  FORMAT_MESSAGE_IGNORE_INSERTS,
                  NULL,
                  hr,
                  MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
                  (LPTSTR)&error_text,
                  0,
                  NULL);
40
DirectX Tessellation Cracks

I have the following simple patch constant function in DX11, but I keep getting rips, and when I look at the wireframe it's clear that adjacent edges are not getting the same tessellation factor. The CalcTessFactor() function just computes a distance from the camera to the point passed, so it should always give the same value for the same edge center that I pass in.

    PatchTess patchFunction_Far(InputPatch<VertexToPixel_Far, 3> patch, uint patchID : SV_PrimitiveID)
    {
        PatchTess pt;

        // Compute midpoints of edges, and the patch center
        float3 e0 = 0.5f * (patch[0].WorldPosition + patch[1].WorldPosition);
        float3 e1 = 0.5f * (patch[1].WorldPosition + patch[2].WorldPosition);
        float3 e2 = 0.5f * (patch[2].WorldPosition + patch[0].WorldPosition);
        float3 c  = (patch[0].WorldPosition + patch[1].WorldPosition + patch[2].WorldPosition) / 3.0f;

        pt.EdgeTess[0] = CalcTessFactor(e0);
        pt.EdgeTess[1] = CalcTessFactor(e1);
        pt.EdgeTess[2] = CalcTessFactor(e2);
        pt.InsideTess  = CalcTessFactor(c);

        return pt;
    }

My patches are triangles. Is there something I'm doing trivially wrong here, like assuming that EdgeTess[0] corresponds to edge 0-1 rather than edge 2-0, for instance? It's a wild guess...
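One thing to verify: in the tri domain, the usual reading is that SV_TessFactor[i] applies to the edge on which the i-th barycentric coordinate is zero, i.e. the edge opposite patch[i] — which is not the pairing the code above assumes. If the factor lands on the wrong edge slot, two triangles sharing an edge disagree and you get exactly these cracks. The distance function itself cannot be the culprit: since it depends only on the edge midpoint, identical midpoints always yield identical factors, as this checkable sketch of a typical distance-based CalcTessFactor (parameters are hypothetical) illustrates:

```cpp
// Hypothetical distance-based tessellation factor, mirroring what a
// CalcTessFactor HLSL helper typically does: lerp between a max and min
// factor by normalized distance from the camera, clamped to [0, 1].
float CalcTessFactor(float dist, float minDist, float maxDist,
                     float minTess, float maxTess)
{
    float t = (dist - minDist) / (maxDist - minDist);
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    return minTess + (1.0f - t) * (maxTess - minTess);
}
```

Because the input is a pure function of the shared midpoint, any mismatch between neighbours must come from which edge slot the factor is written to, not from the factor computation.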
40
Can't get Direct3D11 depth buffer to work

I can't get the depth buffer to work correctly. I am rendering 2 cubes in a single Draw function, and from one angle it looks great. But swing the camera around to view the opposite sides, and I discover it's just painter's algorithm. This is my code to set up the depth stencil buffer:

    void Graphics::CreateDepthStencilBuffer(ID3D11Texture2D* backBuffer)
    {
        D3D11_TEXTURE2D_DESC dsTextureDesc;
        backBuffer->GetDesc(&dsTextureDesc);
        dsTextureDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        dsTextureDesc.Usage = D3D11_USAGE_DEFAULT;
        dsTextureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
        dsTextureDesc.MipLevels = 1;
        dsTextureDesc.ArraySize = 1;
        dsTextureDesc.CPUAccessFlags = 0;
        dsTextureDesc.MiscFlags = 0;

        Microsoft::WRL::ComPtr<ID3D11Texture2D> dsBuffer;
        ASSERT_SUCCEEDED(g_Device->CreateTexture2D(&dsTextureDesc, NULL, dsBuffer.ReleaseAndGetAddressOf()));

        D3D11_DEPTH_STENCIL_DESC dsDesc;
        dsDesc.DepthEnable = true;
        dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
        dsDesc.DepthFunc = D3D11_COMPARISON_LESS;
        // ... snipped stencil properties ...
        ASSERT_SUCCEEDED(g_Device->CreateDepthStencilState(&dsDesc, g_DepthStencilState.GetAddressOf()));
        g_pDevCon->OMSetDepthStencilState(g_DepthStencilState.Get(), 1);

        D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
        depthStencilViewDesc.Format = dsTextureDesc.Format;
        depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
        depthStencilViewDesc.Texture2D.MipSlice = 0;
        ASSERT_SUCCEEDED(g_Device->CreateDepthStencilView(dsBuffer.Get(), &depthStencilViewDesc, g_depthStencilView.GetAddressOf()));
    }

And I'm calling

    g_pDevCon->OMSetRenderTargets(1, g_renderTargetView.GetAddressOf(), g_depthStencilView.Get());

after every Present call. I've been scratching my head for ages wondering what the problem is. Any clues will be much appreciated!

Edit: I used the graphics debugger, and apparently the output merger is doing its job, as seen in the screenshot below, but that isn't what I am seeing on screen.
At the top of the picture is the state of the depth buffer, but I can't make sense of it to determine whether it's correct or not.
40
Fbx SDK Importer issue (texture uv related)

I am using the latest Autodesk FBX Importer SDK, but whatever I do, I am unable to get the UVs right. Some parts are textured properly while others are not. I am using Direct3D 9 and Direct3D 11 (same result in both). 360 video of the result: https://i.gyazo.com/5a2e5f6e127521915508c9c300eb03e5.mp4 The model uses a single texture and a single material shared among 4 meshes. Is there someone who sees immediately what the problem could be? Or is there someone who can replicate the issue for me and figure out what I am missing? FBX test file: http www.4shared.com rar o WG0Crpce Peach64FBX.html

My UV reading method:

    int vertexCounter = 0;
    for (int j = 0; j < nbPolygons; j++)
    {
        for (int k = 0; k < 3; k++)
        {
            int vertexIndex = pFbxMesh->GetPolygonVertex(j, k);
            Vector2 uv;
            readUV(pFbxMesh, vertexIndex, pFbxMesh->GetTextureUVIndex(j, k), uv);
            pVertices[vertexIndex].uv.x = uv.x;
            pVertices[vertexIndex].uv.y = 1.0 - uv.y;
            vertexCounter++;
        }
    }

    void readUV(fbxsdk::FbxMesh* pFbxMesh, int vertexIndex, int uvIndex, Vector2& uv)
    {
        fbxsdk::FbxLayerElementUV* pFbxLayerElementUV = pFbxMesh->GetLayer(0)->GetUVs();
        if (pFbxLayerElementUV == nullptr)
            return;

        switch (pFbxLayerElementUV->GetMappingMode())
        {
        case FbxLayerElementUV::eByControlPoint:
            switch (pFbxLayerElementUV->GetReferenceMode())
            {
            case FbxLayerElementUV::eDirect:
            {
                fbxsdk::FbxVector2 fbxUv = pFbxLayerElementUV->GetDirectArray().GetAt(vertexIndex);
                uv.x = fbxUv.mData[0];
                uv.y = fbxUv.mData[1];
                break;
            }
            case FbxLayerElementUV::eIndexToDirect:
            {
                int id = pFbxLayerElementUV->GetIndexArray().GetAt(vertexIndex);
                fbxsdk::FbxVector2 fbxUv = pFbxLayerElementUV->GetDirectArray().GetAt(id);
                uv.x = fbxUv.mData[0];
                uv.y = fbxUv.mData[1];
                break;
            }
            }
            break;
        case FbxLayerElementUV::eByPolygonVertex:
            switch (pFbxLayerElementUV->GetReferenceMode())
            {
            // Always enters this part for the example model
            case FbxLayerElementUV::eDirect:
            case FbxLayerElementUV::eIndexToDirect:
                uv.x = pFbxLayerElementUV->GetDirectArray().GetAt(uvIndex).mData[0];
                uv.y = pFbxLayerElementUV->GetDirectArray().GetAt(uvIndex).mData[1];
                break;
            }
            break;
        }
    }

I am doing v = 1.0 - uv.y because I am using Direct3D 11. NOTE: the MappingMode is always eByPolygonVertex and the ReferenceMode is always eIndexToDirect. Rendering info: rendered as a triangle list; UV wrapping mode: Wrap (Repeat); culling: none.
40
Direct3D11 HLSL ConstantBuffer Driver Stopped Responding

I have a simple HLSL shader with a few constant buffers, one of them holding an array of Light structs, which causes my display driver to crash when the array size goes over 3. Does anybody have an idea what causes my display driver to crash when MAX_LIGHTS is higher than 3? I don't believe I am exceeding the maximum allowed number of elements, right? (That is supposed to be somewhere near 4096 elements.) I also don't think I have set up my constant buffer wrong; it is a multiple of 16. (I have excluded the other 3 constant buffers because these are not so interesting; they only contain world/view/proj matrices.) I don't mind posting more code, if required.

    #define MAX_LIGHTS 4   // Black Geometry / Crash
    //#define MAX_LIGHTS 3 // Properly Shaded & Textured Geometry / No Crash

    struct Light
    {                              // running byte offset
        float4 lightPosition;      // 16
        float4 lightDirection;     // 32
        float4 lightColor;         // 48
        float  lightRange;         // 52
        float  lightIntensity;     // 56
        float  lightIsEnabled;     // 60
        uint   lightType;          // 64
    };

    cbuffer CBLights : register(b3)
    {
        Light lights[MAX_LIGHTS];
        uint  numLights;
        float cbLightsPadding1;
        float cbLightsPadding2;
        float cbLightsPadding3;
    };
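For the record, the packing math itself checks out: under HLSL cbuffer rules each Light occupies exactly four 16-byte registers (three float4s, plus one register shared by the three floats and the uint), so even MAX_LIGHTS 4 is only 272 bytes — nowhere near the 4096-vector cbuffer limit. That points away from buffer size and toward something else (e.g. a driver timeout while the lighting loop executes, or a CPU-side struct that disagrees with this layout). A checkable sketch of the arithmetic, with constants mirroring the struct above:

```cpp
// Each float4 is one 16-byte register; lightRange, lightIntensity,
// lightIsEnabled, and lightType pack together into a fourth register.
constexpr int kLightBytes = 4 * 16; // 64 bytes per Light

// Array elements are register-aligned; the trailing uint + three floats
// add one more 16-byte register.
constexpr int CBLightsBytes(int maxLights)
{
    return maxLights * kLightBytes + 16;
}
```

Comparing CBLightsBytes(MAX_LIGHTS) against the size of the CPU-side structure you upload is a quick way to rule out a layout mismatch.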
40
Figure out why vertices are clipped

I am trying to figure out why, at sharp cutoffs in geometry, some vertices seem to be culled for some reason. I have a heightmap for demonstrating it, like this. And this is the result I get; see the red rectangles where the geometry is just cut off and the skybox is shown behind. I am not quite sure where to start looking for the problem. I know for a fact it's not colliding with the z-near value; any non-steep geometry is fine. Any pointers? EDIT: at certain angles, re-orienting the camera ever so slightly changes things, like this.
40
D3D11InfoQueue Isn't filtering out messages

I have followed the Coordinator's code advice from this page on how to query and filter the messages in the debug layer, but it doesn't seem to be working. You can see from the following code that I try to filter out 2 messages, but they still keep zinging by in the debug output. Does anyone know how to do this correctly? Here is the code:

    void CDebugLayer::CreateD3DDebugLayer(void)
    {
        if (SUCCEEDED(s_pApplication->m_spEngine->GetD3DObj()->m_spD3DDevice.Get()->QueryInterface(__uuidof(ID3D11Debug), (void**)s_spD3DDebug.GetAddressOf())))
        {
            if (SUCCEEDED(s_spD3DDebug.Get()->QueryInterface(__uuidof(ID3D11InfoQueue), (void**)s_spD3DInfoQueue.GetAddressOf())))
            {
    #ifdef _DEBUG
                s_spD3DInfoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_CORRUPTION, true);
                s_spD3DInfoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_ERROR, true);
                s_spD3DInfoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_WARNING, true);
                s_spD3DInfoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_INFO, true);
                s_spD3DInfoQueue->SetBreakOnSeverity(D3D11_MESSAGE_SEVERITY_MESSAGE, true);
    #endif
                D3D11_MESSAGE_ID stHide[] =
                {
                    D3D11_MESSAGE_ID_SETPRIVATEDATA_CHANGINGPARAMS,
                    D3D11_MESSAGE_ID_OFFERRELEASE_NOT_SUPPORTED
                    // Add more message IDs here as needed
                };

                D3D11_INFO_QUEUE_FILTER stFilter;
                memset(&stFilter, 0, sizeof(stFilter));
                stFilter.DenyList.NumIDs = _countof(stHide);
                stFilter.DenyList.pIDList = stHide;
                HRESULT hr = s_spD3DInfoQueue->AddStorageFilterEntries(&stFilter);
                assert(hr == S_OK);

                D3D11_INFO_QUEUE_FILTER stFilter2;
                memset(&stFilter2, 0, sizeof(stFilter2));
                SIZE_T ByteLength;
                // Perform debug tracking to confirm what is being stored
                uiNumMsgs = s_spD3DInfoQueue->GetNumMessagesAllowedByStorageFilter();   // 7
                hr = s_spD3DInfoQueue->GetStorageFilter(&stFilter2, &ByteLength);
                assert(hr == S_OK);
                uiNumMsgs = s_spD3DInfoQueue->GetNumMessagesDeniedByStorageFilter();    // 0

                // Pass copy to Application, then store in CEngine for global engine access
                s_pApplication->m_spEngine->SetDebug(s_spD3DDebug.Get());
            }
        }
    } // CreateD3DDebugLayer

EDIT: This is the message I'm trying to filter out:

    D3D11 INFO: ID3D11DeviceContext::OfferResources: OfferResources is not supported on operating systems older than Windows 8. It is valid to call OfferResources, and an offered resource must still be acquired with ReclaimResources before using it, however there is no benefit to calling OfferResources on this operating system besides testing the code path for Windows 8. [EXECUTION INFO #3146071: OFFERRELEASE_NOT_SUPPORTED]

Caused by ID2D1DeviceContext->EndDraw() on Windows 7 SP1 x64. Using Windows SDK 8.1.
40
Screen point to world space conversion

I have a 3D cube that can be rotated with the mouse to show any of its sides. I want to be able to click a point on the cube and draw a circle around that point at a fixed height (so I don't need depth information), but I run into a problem where I can't translate the screen coordinates to world-space coordinates. What I have attempted: get the mouse click position (x, y); normalise the screen position (-1 to 1); create a vector with the z coordinate as 1; multiply the vector by (inverseProjection x inverseView). However, this does not give me the world coordinates as I would have expected. Where am I going wrong? The depth information is not needed for this; I only need to map the mouse click point (x, y) to the world point (x, y, z) where z is fixed.

    DirectX::XMMATRIX projection = projectionmatrix();
    DirectX::XMMATRIX view = viewmatrix();
    // invViewProjection = invView * invProjection
    DirectX::XMMATRIX invProjectionView = DirectX::XMMatrixInverse(&DirectX::XMMatrixDeterminant(view * projection), (view * projection));

    float x = (((2.0f * mouseX) / viewport.Width) - 1);
    float y = (((2.0f * mouseY) / viewport.Height) - 1);

    DirectX::XMVECTOR mousePosition = DirectX::XMVectorSet(x, y, 1.0f, 0.0f);
    mouseInWorldSpace = DirectX::XMVector3Transform(mousePosition, invProjectionView);

Edit: The fixed axis is the y axis; the z and x axes are the changing axes. Edit: I have revised the question after realising my mistakes in the initial version. The x, y, and z are positioned as shown above. I want to be able to click the cube surface and draw the shape at a point with a fixed y offset from the click point. I am not able to draw a plane on the surface of the cube due to the nature in which the cube is generated, hence the need for the screen-to-world coordinates.
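One concrete issue in the snippet above: x and y use the same formula, but window y grows downward while NDC y grows upward, so y needs a flip. Also, after multiplying by the inverse view-projection matrix you must divide by w (XMVector3TransformCoord does this; XMVector3Transform does not), and the result is a point on a ray from the camera, which you then intersect with your fixed-height plane. The NDC conversion as a checkable sketch:

```cpp
// Convert a window-space mouse position to normalized device coordinates.
// Note the y flip: window y grows downward, NDC y grows upward.
void MouseToNdc(float mouseX, float mouseY, float width, float height,
                float& ndcX, float& ndcY)
{
    ndcX = 2.0f * mouseX / width - 1.0f;
    ndcY = 1.0f - 2.0f * mouseY / height;
}
```

With the corrected NDC point, unprojecting z = 0 and z = 1 (with the homogeneous divide) gives two world-space points defining the pick ray to intersect with the fixed-y plane.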
SlimDX 11 Setting Multiple Render Targets I'm using SlimDX 11 in my managed Direct3D application. I would like to implement deferred shading. I'm having trouble when I try to set a depth stencil surface and multiple render targets at the same time. Here is my code:
// depth stencil view
DepthStencilView dsv = GetD3D11DepthStencilView();
// array of 3 render target views
RenderTargetView[] rtv = GetPositionDiffuseEmissiveRenderTargetViewArray();
// device context
DeviceContext context = GetD3D11DeviceContext();
context.OutputMerger.SetTargets(dsv, rtv);
EDIT:
// create depth stencil state
context.OutputMerger.DepthStencilState = DepthStencilState.FromDescription(GetDevice(), new DepthStencilStateDescription() {
    IsDepthEnabled = true, DepthComparison = Comparison.Less, DepthWriteMask = DepthWriteMask.All,
    IsStencilEnabled = false, StencilReadMask = 0xff, StencilWriteMask = 0xff,
    BackFace = new DepthStencilOperationDescription() { Comparison = Comparison.Always, DepthFailOperation = StencilOperation.IncrementAndClamp, FailOperation = StencilOperation.DecrementAndClamp, PassOperation = StencilOperation.Keep },
    FrontFace = new DepthStencilOperationDescription() { Comparison = Comparison.Always, PassOperation = StencilOperation.Keep, DepthFailOperation = StencilOperation.IncrementAndClamp, FailOperation = StencilOperation.Keep }
});
// clear depth stencil
context.ClearDepthStencilView(dsv, DepthStencilClearFlags.Depth, 1.0f, 0);
// draw as usual ...
When I set the depth stencil and render targets together, nothing is rendered to the screen. When I set only the render targets, everything renders normally. Why might this occur?
Using DirectWrite to write text on a IDirect3DTexture9 I'm trying to write text using the DirectWrite API on a IDirect3DTexture9. I've heard that people have been able to do this, but I do not see how to glue these two APIs together. It seems I need an IDXGISurface to create a valid ID2D1RenderTarget (with which I can draw the text). I can get this by creating a surface for the window, but I think the idea is to use the IDirect3DTexture9 as the surface for the render target, so that I can use whatever is drawn on it in my application. I can get the underlying IDirect3DSurface9 from the IDirect3DTexture9, but then I have to convert it somehow to an IDXGISurface, and this is where I'm stuck. Does anybody have a clue?
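One known route (a hedged sketch, not tested here, assuming Windows Vista or later and a D3D9Ex device) is to share the texture memory between the two APIs: create the D3D9 texture with a shared handle, open that handle on a D3D10.1/11 device, query `IDXGISurface` from the result, and wrap that for Direct2D/DirectWrite:

```cpp
// 1) D3D9Ex texture created with a shared handle (D3DPOOL_DEFAULT is required).
HANDLE shared = nullptr;
IDirect3DTexture9* tex9 = nullptr;
d3d9ExDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                            D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex9, &shared);

// 2) Open the same memory on the DXGI-based device and get a surface view.
ID3D11Texture2D* tex11 = nullptr;
d3d11Device->OpenSharedResource(shared, __uuidof(ID3D11Texture2D), (void**)&tex11);
IDXGISurface* dxgiSurface = nullptr;
tex11->QueryInterface(__uuidof(IDXGISurface), (void**)&dxgiSurface);

// 3) Wrap the surface for Direct2D; DirectWrite draws through this target.
ID2D1RenderTarget* d2dRT = nullptr;
d2dFactory->CreateDxgiSurfaceRenderTarget(dxgiSurface,
    D2D1::RenderTargetProperties(D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED)),
    &d2dRT);
```

Caveat: synchronization between the two APIs is your responsibility (e.g. via a keyed mutex where available), and all error handling is omitted above.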
Effect version is unrecognized. This runtime supports fx_5_0 to fx_5_0 I'm reading "Introduction to 3D Game Programming with DirectX 11" (Frank Luna) and I'm having problems with lighting. I get this error: "Effect version is unrecognized. This runtime supports fx_5_0 to fx_5_0." When I use the D3DX11CreateEffectFromMemory function I get that error and an exception (this mFX was nullptr). Can you help me? Thanks. EDIT: I have no D3DCompileFromFile.
std::ifstream fin("Ligthing.fx", std::ios::binary);
fin.seekg(0, std::ios_base::end);
int size = (int)fin.tellg();
fin.seekg(0, std::ios_base::beg);
std::vector<char> compiledShader(size);
fin.read(&compiledShader[0], size);
fin.close();
hr = D3DX11CreateEffectFromMemory(&compiledShader[0], size, // ERROR IS HERE
    0, device, &mFX);
if (FAILED(hr)) MessageBox(0, L"err", 0, 0);
Clipping wrong results in C++ / DirectX 11 I'm trying to do some clipping but I'm getting weird results: the triangle is not getting clipped the way it should be. Here are some images; the second image is how it should look. VIDEO OF WHAT IS HAPPENING. What is wrong with my code that makes the triangle clip the wrong way?
CLIPPING ALGORITHM
void ClipVertex(const Vector3& end)
{
    float BCend = mPlane.Test(end);
    bool endInside = (BCend >= 0.0f);
    if (!mFirstVertex)
    {
        // if one of the points is inside
        if (mStartInside || endInside)
        {
            // if the start is inside, just output it
            if (mStartInside)
                mClipVertices[mNumVertices++] = mStart;
            // if one of them is outside, output clip point
            if (!(mStartInside && endInside))
            {
                float t;
                Vector3 output;
                if (endInside)
                {
                    t = BCend / (BCend - mBCStart);
                    output = end - t * (end - mStart);
                }
                else
                {
                    t = mBCStart / (mBCStart - BCend);
                    output = mStart + t * (end - mStart);
                }
                mClipVertices[mNumVertices++] = output;
            }
        }
    }
    mStart = end;
    mBCStart = BCend;
    mStartInside = endInside;
    mFirstVertex = false;
}
PLANE EQUATION
Plane clipPlane(1.0f, 0.0f, 0.0f, 0.0f);
In DirectX 11, in a left-handed system.
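For reference, here is a self-contained single-plane polygon clip (the classic Sutherland–Hodgman step) with plain structs — names are mine — that the incremental version above can be diffed against:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Signed distance to the plane n.p + d = 0.
static float PlaneTest(const Vec3& n, float d, const Vec3& p)
{
    return n.x * p.x + n.y * p.y + n.z * p.z + d;
}

// Clip a polygon against one plane, keeping the side where the test is >= 0.
std::vector<Vec3> ClipAgainstPlane(const std::vector<Vec3>& poly,
                                   const Vec3& n, float d)
{
    std::vector<Vec3> out;
    size_t count = poly.size();
    for (size_t i = 0; i < count; ++i)
    {
        const Vec3& start = poly[i];
        const Vec3& end = poly[(i + 1) % count];
        float bcS = PlaneTest(n, d, start);
        float bcE = PlaneTest(n, d, end);
        bool sIn = bcS >= 0.0f, eIn = bcE >= 0.0f;
        if (sIn) out.push_back(start);   // keep inside start points
        if (sIn != eIn)                  // edge crosses: emit the intersection
        {
            float t = bcS / (bcS - bcE);
            out.push_back({ start.x + t * (end.x - start.x),
                            start.y + t * (end.y - start.y),
                            start.z + t * (end.z - start.z) });
        }
    }
    return out;
}
```

Common sources of the symptom in the video: an inconsistent inside test (`>` on one path, `>=` on the other), the lerp running from the wrong endpoint, or the first vertex never being re-fed at the end of the polygon loop so the closing edge is never clipped.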
How can I check the multisample quality level count? I can check it with ID3D11Device CheckMultisampleQualityLevels(). So to use it I need to create the device D3D11CreateDeviceAndSwapChain(). To call it I need to fill the DXGI SWAP CHAIN DESC which, among others, specifies... the multisample quality level. Vicious circle. Do I have to split D3D11CreateDeviceAndSwapChain() into creating the device and the swap chain separately? Or what?
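Yes — split it. Create the device alone with `D3D11CreateDevice`, query the quality level, then build the swap chain from the factory that created the device. A hedged sketch (error handling omitted):

```cpp
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                  nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

// Now the device exists, so the query is possible before any swap chain.
UINT quality = 0;
device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, 4, &quality);

// Use the factory that owns this device so the swap chain matches it.
IDXGIDevice* dxgiDevice = nullptr;
device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
IDXGIAdapter* adapter = nullptr;
dxgiDevice->GetAdapter(&adapter);
IDXGIFactory* factory = nullptr;
adapter->GetParent(__uuidof(IDXGIFactory), (void**)&factory);

DXGI_SWAP_CHAIN_DESC scd = {};   // fill in; valid Quality values are 0 .. quality - 1
IDXGISwapChain* swapChain = nullptr;
factory->CreateSwapChain(device, &scd, &swapChain);
```

Note that `CheckMultisampleQualityLevels` returns the *count* of levels, so the highest valid `SampleDesc.Quality` is `quality - 1` (and a count of 0 means the sample count is unsupported).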
DX11 Handle Device removed I get the DXGI_ERROR_DEVICE_REMOVED error on some machines. According to this MSDN article (https://msdn.microsoft.com/en-us/windows/uwp/gaming/handling-device-lost-scenarios) this can happen and it should be handled by your application. I've managed to recreate the device, but I'm unsure how to handle all the content. It seems I have to create all vertex buffers and textures again, which essentially means I have to reload almost the entire scene. Is this really the correct way?
Why can't I create an unordered access view of an R32G32B32_UINT buffer? I'm trying to create an unordered access view for a buffer with three-component elements, but it fails with this warning: D3D11 ERROR: ID3D11Device::CreateUnorderedAccessView: The format (0x7, R32G32B32_UINT) cannot be used with a Typed Unordered Access View. [STATE_CREATION ERROR #2097343: CREATEUNORDEREDACCESSVIEW_INVALIDFORMAT]. This is my code:
void test(ID3D11Device* dev_ctx)
{
    D3D11_BUFFER_DESC buff_desc;
    memset(&buff_desc, 0, sizeof(buff_desc));
    buff_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
    buff_desc.ByteWidth = 1024 * 12;
    buff_desc.CPUAccessFlags = 0;
    buff_desc.MiscFlags = 0;
    buff_desc.StructureByteStride = 12;
    buff_desc.Usage = D3D11_USAGE_DEFAULT;
    ID3D11Buffer* buff_ptr = nullptr;
    dev_ctx->CreateBuffer(&buff_desc, nullptr, &buff_ptr);
    if (!buff_ptr) return;
    D3D11_UNORDERED_ACCESS_VIEW_DESC uav_desc;
    uav_desc.Format = DXGI_FORMAT_R32G32B32_UINT;
    uav_desc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
    uav_desc.Buffer.FirstElement = 0;
    uav_desc.Buffer.Flags = 0;
    uav_desc.Buffer.NumElements = 1024;
    ID3D11UnorderedAccessView* uav_ptr = nullptr;
    dev_ctx->CreateUnorderedAccessView(buff_ptr, &uav_desc, &uav_ptr);
    if (!uav_ptr) return;
}
It fails for any resource with three components (R32G32B32_*), but it works if the resource has 1, 2, or 4 components. So what could be the problem?
Bilinear filtering of output image in Direct3D 11 I'd like to render something at one resolution, but display it in a window at another resolution (e.g. render the scene at 640x480 and stretch it to a 1024x768 window). Simply resizing the window after creation allows this to work however, the image is filtered using nearest neighbor. Is there a way to change the filtering algorithm to e.g. bilinear?
Realtime local reflections of particle system I'm finding my way around CryEngine 3 (through Steam) and want to create a simple effect where a fire on shore is reflected in a body of water. For testing purposes, I've made the water dead calm... (Note Dx11 on 2nd line of debug info) As you can see, the terrain is reflected properly, but the flame particles aren't reflected. It's my understanding that this should be possible. NB I've created an appropriate cubemap for the water environment, although I don't believe it comes into play here. I've seen a number of posts saying Glossiness 50 (or 50 ) and a light specular color are required. I've got 100 and white... And for completeness, the water volume properties... Can someone please tell me what I need to enable to get this working? Thanks.
Why does my game not update unless I'm moving the mouse? I'm pretty confused by what's happening. Now that I've finally put something moving on my screen, I notice it doesn't update unless I move the mouse, or press a key, or trigger other events. Using PIX, the "Frame" counter only goes up when I fire these events. This is all that happens in my game loop:
while (GetMessage(&msg, NULL, 0, 0) > 0 && m_isRunning)
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
    m_gameGraphics->BeginRender();
    m_gameGraphics->EndRender();
}
BeginRender clears the screen, and EndRender presents to the swap chain. I thought maybe it was a problem with WndProc, but comparing it to other DirectX 11 game WndProcs, I don't see any major differences. I'm pretty confused, never seen this problem before, and I have no idea what causes it. I'm just hoping maybe someone will have some insight on why this might be happening.
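This is the defining behaviour of `GetMessage`: it blocks until a message arrives, so the loop body (including the render calls) only runs when input comes in. A real-time loop normally drains the queue with `PeekMessage` and renders otherwise — the classic pattern (sketch):

```cpp
MSG msg = {};
while (m_isRunning)
{
    // Drain all pending messages without blocking.
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        if (msg.message == WM_QUIT) { m_isRunning = false; break; }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    // Queue empty: render a frame regardless of input.
    m_gameGraphics->BeginRender();
    m_gameGraphics->EndRender();
}
```

`PeekMessage` with `PM_REMOVE` returns immediately whether or not a message exists, which is exactly the difference that keeps the frame counter moving.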
Tessellation Texture Coordinates Firstly some info: I'm using DirectX 11 and C++; I'm a fairly good programmer, but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but I have reached a snag. My current system produces a terrain model from a height map, complete with multiple texture coordinates, normals, binormals and tangents for rendering. When I was using a simple vertex and pixel shader combination everything worked perfectly, but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high-detail model, but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader, then through the hull into the domain, and finally into the pixel shader for rendering. My only thought is that in my hull shader I pass the information into the domain shader per patch, and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off, but not as visibly as the texturing. Below is a copy of my hull shader that does not work correctly, because I think the way that I am passing the data through is incorrect. If anyone can help me out by suggesting an alternative way to get the required data into the pixel shader, or by showing me the correct way to handle the data in the hull shader, I'd be very thankful!
cbuffer TessellationBuffer
{
    float tessellationAmount;
    float3 padding;
};
struct HullInputType
{
    float3 position : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float2 tex2 : TEXCOORD1;
};
struct ConstantOutputType
{
    float edges[3] : SV_TessFactor;
    float inside : SV_InsideTessFactor;
};
struct HullOutputType
{
    float3 position : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float2 tex2 : TEXCOORD1;
    float4 depthPosition : TEXCOORD2;
};
ConstantOutputType ColorPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
{
    ConstantOutputType output;
    output.edges[0] = tessellationAmount;
    output.edges[1] = tessellationAmount;
    output.edges[2] = tessellationAmount;
    output.inside = tessellationAmount;
    return output;
}
[domain("tri")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("ColorPatchConstantFunction")]
HullOutputType ColorHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
{
    HullOutputType output;
    output.position = patch[pointId].position;
    output.tex = patch[pointId].tex;
    output.tex2 = patch[pointId].tex2;
    output.normal = patch[pointId].normal;
    output.tangent = patch[pointId].tangent;
    output.binormal = patch[pointId].binormal;
    return output;
}
Edited to include the domain shader:
[domain("tri")]
PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
{
    float3 vertexPosition;
    PixelInputType output;
    // Determine the position of the new vertex.
    vertexPosition = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position;
    output.position = mul(float4(vertexPosition, 1.0f), worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);
    output.depthPosition = output.position;
    output.tex = patch[0].tex;
    output.tex2 = patch[0].tex2;
    output.normal = patch[0].normal;
    output.tangent = patch[0].tangent;
    output.binormal = patch[0].binormal;
    return output;
}
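Note that the domain shader above barycentrically blends the *position* with `uvwCoord`, but then copies `tex`, `normal`, etc. straight from `patch[0]` — so every generated vertex in a patch gets corner 0's attributes, which is exactly what produces flat single-colour patches. Those attributes need the same blend. In C++ terms (plain structs, names mine) the interpolation each attribute should get is:

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Barycentric blend of the three control-point values, the same weighting
// the domain shader already applies to the position via uvwCoord.
Vec2 BaryLerp(const Vec2& a, const Vec2& b, const Vec2& c,
              float u, float v, float w)
{
    return { u * a.x + v * b.x + w * c.x,
             u * a.y + v * b.y + w * c.y };
}
```

In the HLSL itself that means e.g. `output.tex = uvwCoord.x * patch[0].tex + uvwCoord.y * patch[1].tex + uvwCoord.z * patch[2].tex;`, and likewise for normal/tangent/binormal (renormalized after blending).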
How can I use a different upscaling method in DX11? I need to upscale images using the 'box' scaling algorithm, as the pixel-art textures in my game don't scale well with bilinear filtering because of the blur. Either that, or use high-res pictures, which would be undesirable and would inflate the size many times over.
PN triangles are Bézier triangles, right? Just wondering: is the term "PN triangles" the same thing as a Bézier triangle patch?
Why does specular reflection only work in the center of the virtual scene? How should this specular reflection be calculated? HLSL:
void calculateSpecular(in float4 Normal, in float4 SunLightDir, inout float4 Specular)
{
    Specular = specularLevel * pow(saturate(dot(reflect(normalize(abs(eyePosition)), Normal), SunLightDir)), specularExponent);
}
// in pixel shader
float4 Specular = float4(0.f, 0.f, 0.f, 1.f);
calculateSpecular(input.normal, sunLightDir, Specular);
sunLightDir is just the camera position, vec3(0,10,0). In the vertex shader:
output.normal = mul(float4(input.normal, 0.f), World);
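`abs(eyePosition)` is a constant per frame, so the "eye vector" fed into `reflect` never varies across the surface — which is why the highlight only lines up near the origin. The reflection needs the per-pixel view direction, i.e. eye position minus the pixel's world position. In plain C++ (a sketch with my names) the intended computation is:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 Normalize(Vec3 v) { return Scale(v, 1.0f / std::sqrt(Dot(v, v))); }
// HLSL-style reflect: i - 2 * dot(i, n) * n.
static Vec3 Reflect(Vec3 i, Vec3 n) { return Sub(i, Scale(n, 2.0f * Dot(i, n))); }

// Phong specular with a per-pixel view vector, not a constant eye position.
float SpecularTerm(Vec3 eyePos, Vec3 worldPos, Vec3 normal,
                   Vec3 lightDir, float exponent)
{
    Vec3 toEye = Normalize(Sub(eyePos, worldPos));   // varies per pixel
    Vec3 r = Reflect(Scale(toEye, -1.0f), normal);   // reflect the incident dir
    float s = Dot(r, Normalize(lightDir));
    return s > 0.0f ? std::pow(s, exponent) : 0.0f;
}
```

So the shader needs the interpolated world position as a pixel-shader input, and the light direction should be an actual direction toward the light, not a reused camera position.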
Rendering to a specific face of a cubemap I have working omnidirectional shadow maps for point lights. Rendering a shadow map for one point light consists of rendering the scene to 6 separate render targets, then sending those render target textures (shader resource views) to a pixel shader which shadows the scene by projective texturing. Switching render targets 6 times for each point light is quite costly, however, and so is sampling from them in the pixel shader later. My question is: how would I access a single face of the cubemap to render to? Right now I am doing ID3D11DeviceContext::OMSetRenderTargets(1, NULL, &depthtarget). My depth target is a Texture2D which is bound to a depth stencil view and a shader resource view which is set up to be a cubemap. Can I even set an individual face to render to, or is the process completely different?
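To the API a cubemap is just a six-slice Texture2DArray, so one workable route (a hedged sketch; it assumes the depth texture was created with ArraySize = 6 and D3D11_RESOURCE_MISC_TEXTURECUBE) is one DSV per face, each pointed at a single array slice:

```cpp
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;                   // match your depth format
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
dsvDesc.Texture2DArray.MipSlice = 0;
dsvDesc.Texture2DArray.ArraySize = 1;                     // view exactly one face
for (UINT face = 0; face < 6; ++face)
{
    dsvDesc.Texture2DArray.FirstArraySlice = face;        // +X, -X, +Y, -Y, +Z, -Z
    device->CreateDepthStencilView(cubeDepthTex, &dsvDesc, &faceDSV[face]);
}
```

The single-pass alternative is to bind a view covering all six slices at once and route each triangle to its face with `SV_RenderTargetArrayIndex` from a geometry shader; the sampling side then reads the whole thing as a TextureCube, avoiding the six separate SRVs entirely.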
Proper vertex buffer use How are you supposed to use vertex buffers? Say you have 500 distinct deformable shapes/models in the world (i.e. you want to be able to change or delete vertices from the models somewhat arbitrarily as the game progresses). This requires you to refresh the vertex buffers in the frames where a model has become dirty, at least. So how should you handle your vertex buffers, assuming D3D11 interfaces (so vertex buffers are your only option to draw anything)? 1. Store model vertices in CPU RAM, create one vertex buffer at program start, and for each model copy its vertices into the single vertex buffer and render. 2. Create 500 vertex buffers, update each when necessary, and render.
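Both options can work, but per-model dynamic buffers with discard updates are the common middle ground: on `D3D11_MAP_WRITE_DISCARD` the driver "renames" the buffer memory instead of stalling on GPU reads, so only dirty models pay an upload cost. A hedged sketch of one refresh (names mine):

```cpp
// Requires the buffer to be created with Usage = D3D11_USAGE_DYNAMIC
// and CPUAccessFlags = D3D11_CPU_ACCESS_WRITE.
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(model.vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
memcpy(mapped.pData, model.cpuVertices.data(),
       model.cpuVertices.size() * sizeof(Vertex));
context->Unmap(model.vertexBuffer, 0);
```

The single shared buffer of option 1 is essentially this pattern with extra bookkeeping (sub-ranges per model and `D3D11_MAP_WRITE_NO_OVERWRITE` between discards); it mainly pays off when the models are tiny and draw-call batching matters more than update granularity.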
How does Direct3D know if a constant buffer is for the vertex or pixel shader? I have a question about constant buffers in DirectX 11. They really confuse me, and after searching on Google most sites simply supply sample code without explaining how it works. I am probably overlooking something, but as of now they appear to me to work by sheer magic. Say I have two constant buffers in an HLSL file:
struct Light
{
    float3 dir;
    float4 ambient;
    float4 diffuse;
};
cbuffer cbPerFrame
{
    Light light;
};
cbuffer cbPerObject
{
    float4x4 WVP;
    float4x4 World;
};
And I create an equivalent buffer in my C++ program like so:
// Create the buffer to send to the cbuffer in effect file
D3D11_BUFFER_DESC cbbd;
ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));
cbbd.Usage = D3D11_USAGE_DEFAULT;
// cbPerObject is a struct with same layout as cbPerObject in HLSL file
cbbd.ByteWidth = sizeof(cbPerObject);
cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbbd.CPUAccessFlags = 0;
cbbd.MiscFlags = 0;
hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer);
// Create the buffer to send to the cbuffer per frame in effect file
ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));
cbbd.Usage = D3D11_USAGE_DEFAULT;
// cbPerFrame is struct with same layout as in cbPerFrame in effect file
cbbd.ByteWidth = sizeof(cbPerFrame);
cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbbd.CPUAccessFlags = 0;
cbbd.MiscFlags = 0;
And then I update one like this:
d3d11DevCon->UpdateSubresource(cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0);
d3d11DevCon->VSSetConstantBuffers(0, 1, &cbPerObjectBuffer);
hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerFrameBuffer);
How does DirectX know which constant buffers in my shader are going to be used for the VS and which for the PS, and which buffers in my code correspond to which buffers in my HLSL? For example, I call VSSetConstantBuffers above, pass the arguments, and it automatically knows to put that data in the per-object buffer as opposed to the per-frame buffer.
As far as I can tell, when creating cbPerObjectBuffer I never explicitly bound it to the cbPerObject buffer in my HLSL. Is the slot number dependent on the order the buffers appear in the HLSL? The only way I can figure it is that the data structures simply define an interface to the buffer memory on the card, and it is up to the programmer to use the correct one. For example, I COULD attempt to use cbPerFrame in my vertex shader, and indeed the same data can be accessed that way, but in order to get correct results you would have to access it using offsets into the light object. Does what I am asking make any sense? lol
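A note that may untangle this: constant buffers are not "for" the VS or PS by themselves. Each pipeline stage has its own independent array of b# slots, and `VSSetConstantBuffers(0, 1, &buf)` simply fills slot b0 of the vertex stage; the runtime never matches buffers by name or layout, only by slot number. Which cbuffer the compiler assigns to which slot defaults to declaration order among the buffers actually used, so it is safer to pin them down explicitly:

```hlsl
cbuffer cbPerFrame  : register(b0) { Light light; };
cbuffer cbPerObject : register(b1) { float4x4 WVP; float4x4 World; };
```

With that, `VSSetConstantBuffers(1, 1, &cbPerObjectBuffer)` binds the per-object data to b1 of the vertex stage, and `PSSetConstantBuffers(0, 1, &cbPerFrameBuffer)` binds the light data to b0 of the pixel stage — the intuition at the end of the question is right: the cbuffer declaration is just a typed view over whatever buffer happens to sit in that slot.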
FormatMessage not working for HRESULTs returned by Direct3D 11 I am using Windows 7 x64 and Visual Studio 2017 (v15.9.7). Say I try to create a swap chain using IDXGIFactory2::CreateSwapChainForHwnd and pass in DXGI_SCALING_NONE. I will get the following message in the debug output (if I have enabled Direct3D debugging): DXGI ERROR: IDXGIFactory::CreateSwapChain: DXGI_SCALING_NONE is only supported on Win8 and beyond. DXGI_SWAP_CHAIN_DESC: SwapChainType = ... HWND, BufferDesc = DXGI_MODE_DESC1: Width = 816, Height = 488, RefreshRate = DXGI_RATIONAL: Numerator = 0, Denominator = 1, Format = B8G8R8A8_UNORM, ScanlineOrdering = ...UNSPECIFIED, Scaling = ...UNSPECIFIED, Stereo = FALSE, SampleDesc = DXGI_SAMPLE_DESC: Count = 1, Quality = 0, BufferUsage = 0x20, BufferCount = 2, OutputWindow = 0x0000000000290738, Scaling = ...NONE, Windowed = TRUE, SwapEffect = ...FLIP_SEQUENTIAL, AlphaMode = ...UNSPECIFIED, Flags = 0x0 [MISCELLANEOUS ERROR #175]. The function returns 0x887a0001 in the form of an HRESULT. If I put err,hr in the watch window, I get a nice error message there: ERROR_MOD_NOT_FOUND: The specified module could not be found. However, if I pass this HRESULT to FormatMessage, it just puts NULL in the output and returns 0. err,hr helpfully informs me that the new error is ERROR_MR_MID_NOT_FOUND: The system cannot find message text for message number 0x%1 in the message file for %2. My questions are: Why is FormatMessage not giving me the right error string (the one starting with ERROR_MOD_NOT_FOUND)? Where is Visual Studio getting these pretty error strings from? Can I get them too? Who do I pay? PS. I am using the Windows 10 SDK version of DX11, not the older DirectX SDK version. Thus, I can't really link to dxerr.lib either. This is the code that is used to print the error message:
LPTSTR error_text = NULL;
FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_IGNORE_INSERTS,
    NULL, hr, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
    (LPTSTR)&error_text, 0, NULL);
How can I prevent other applications from interrupting my game's exclusive fullscreen mode? I am developing a game using D3D 11. When I got a pop up message from a chat client (HipChat), my game's full screen mode is disabled because IDXGISwapChain Present returns DXGI STATUS OCCLUDED. How can I avoid or prevent this? I don't want my game's exclusive full screen access interrupted.
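You cannot really "prevent" it — another window has covered the exclusive-mode output, and DXGI reports that as the (non-error) status code DXGI_STATUS_OCCLUDED. The usual handling (a sketch of the pattern described in the DXGI documentation) is to drop into a low-power test loop and re-acquire full screen once the status clears:

```cpp
// DXGI_STATUS_OCCLUDED is a status, not a failure: stop drawing and poll.
if (swapChain->Present(1, 0) == DXGI_STATUS_OCCLUDED)
{
    while (swapChain->Present(1, DXGI_PRESENT_TEST) == DXGI_STATUS_OCCLUDED)
        Sleep(100);                                  // still occluded, wait

    swapChain->SetFullscreenState(TRUE, nullptr);    // take exclusive mode back
}
```

`DXGI_PRESENT_TEST` checks occlusion without actually presenting, so the loop costs almost nothing while the pop-up is up; alternatively, a borderless-fullscreen window with a flip-model swap chain sidesteps the exclusive-mode loss entirely.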
Swapping out Vertex Buffer Necessary for Terrain LOD? I am using DirectX 11 to try to implement terrain level of detail. I am trying the idea presented in this tutorial: http://www.rastertek.com/tertut18.html — split your terrain into nodes, and render different-quality nodes depending on how far away they are from you. Each of my nodes has its own vertex buffer (bad idea?), and I am trying to Map/Unmap when I swap out a particular node for its higher-quality version. I noticed that unless my vertex buffer is the size of the high-quality version, surrounding nodes get affected; I believe this is to be expected, since I am essentially overwriting memory in the vertex buffers of the other nodes. So is the only way to go about Map/Unmap to make each node have the space for the high-quality terrain? If so, then I can't do Map/Unmap because I run out of memory. What's another approach then? Making a new vertex buffer with the desired size whenever I swap? Won't this be too slow?
IsoSurface Normals/Texture Coords Problems (DX11/SharpDX) Hi and thanks for your time! Over the past few days I have been playing with isosurface construction from volume textures. I have it running on the CPU and all is well from a "just got it working" standpoint — surface reconstruction works and I have my really cool voxel mesh drawing. I have based my code (converted to VB.NET) on Paul Bourke's work here: Polygonising a scalar field. Like I said, it works well apart from the fact that my normals are wrong and I have these strange fully-black-on-both-sides triangles. I think I'm doing my normals wrong, but the way they are now is the best I can get; I tried a few ways to build the normals and this one is the only way I could get smooth normals. Here you can see how my normals are wrong: green should be up, gray/dark should be unlit, and the hay'ish colour is the lit side. I have tried to flip and invert them, but as you can see they are not wrong all the same way.
HLSL Normals 1st try:
float3x3 cotangent_frame(float3 N, float3 p, float2 uv)
{
    // get edge vectors of the pixel triangle
    float3 dp1 = ddx(p);
    float3 dp2 = ddy(p);
    float2 duv1 = ddx(uv);
    float2 duv2 = ddy(uv);
    // solve the linear system
    float3 dp2perp = cross(dp2, N);
    float3 dp1perp = cross(N, dp1);
    float3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    float3 B = dp2perp * duv1.y + dp1perp * duv2.y;
    // construct a scale-invariant frame
    float invmax = rsqrt(max(dot(T, T), dot(B, B)));
    return float3x3(T * invmax, B * invmax, N);
}
HLSL Normals 2nd try:
NormalData CalcWS_Normal(float2 WS_TexCoord, float3 WS_Pos)
{
    NormalData dout;
    float3 dp1 = ddx(WS_Pos);
    float3 dp2 = ddy(WS_Pos);
    float2 duv1 = ddx(WS_TexCoord);
    float2 duv2 = ddy(WS_TexCoord);
    float3x3 M = float3x3(dp1, dp2, cross(dp1, dp2));
    float2x3 inverseM = float2x3(cross(M[1], M[2]), cross(M[2], M[0]));
    float3 t = mul(float2(duv1.x, duv2.x), inverseM);
    float3 b = mul(float2(duv1.y, duv2.y), inverseM);
    float3 normal = normalize(cross(normalize(b), normalize(t)));
    dout.Normal = normal;
    dout.Tang = t;
    dout.BiTang = b;
    return dout;
}
Here is how I'm doing it now; it takes a vertex and then samples the volume texture. VB.NET:
Private Function vGetNormal(fX As Single, fY As Single, fZ As Single, fScale As Single) As Vector3
    Dim rfNormal As Vector3
    rfNormal.X = fSample1(fX - fScale, fY, fZ) - fSample1(fX + fScale, fY, fZ)
    rfNormal.Y = fSample1(fX, fY - fScale, fZ) - fSample1(fX, fY + fScale, fZ)
    rfNormal.Z = fSample1(fX, fY, fZ - fScale) - fSample1(fX, fY, fZ + fScale)
    Return Vector3.Normalize(rfNormal)
End Function
Private Function vGetNormal(fX As Single, fY As Single, fZ As Single, fScale As Vector3) As Vector3
    Dim rfNormal As Vector3
    rfNormal.X = fSample1(fX - fScale.X, fY, fZ) - fSample1(fX + fScale.X, fY, fZ)
    rfNormal.Y = fSample1(fX, fY - fScale.Y, fZ) - fSample1(fX, fY + fScale.Y, fZ)
    rfNormal.Z = fSample1(fX, fY, fZ - fScale.Z) - fSample1(fX, fY, fZ + fScale.Z)
    Return Vector3.Normalize(rfNormal)
End Function
Volume sampling code:
Public Function GetVolumeData(_pos As Vector3) As Half
    Dim x2 As Single = Math.Abs((_pos.X / CellsPerPatch * 32) Mod mWidth)
    Dim y2 As Single = Math.Abs((_pos.Y / CellsPerPatch * 8) Mod mDepth)
    Dim z2 As Single = Math.Abs((_pos.Z / CellsPerPatch * 32) Mod mHeight)
    Return mScalars(CInt(Math.Truncate(x2)) + (CInt(Math.Truncate(z2)) * mWidth) + (CInt(Math.Truncate(y2)) * mWidth * mWidth))
End Function
The rest of the isosurface construction code is just a copy/convert/paste of Paul Bourke's C code. Just to make sure everyone knows what I'm asking: can anyone tell me what I'm doing wrong? Is it a problem with my normals, with the isosurface stuff, or both? All of the above is just so you have some context; I'm happy to post more code if people need to see it. EDIT: The black squares come from having no depth info at that point, so my deferred renderer hates those spots. This leads back to maybe a problem with the surface reconstruction? But why is it only happening in those spots? One would think it would happen everywhere.
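For a density field the standard isosurface normal is the negated normalized gradient taken by central differences, and getting the sign consistent with your triangle winding is usually the whole battle. The same computation as the VB version, in plain C++ against a callable field (names mine):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// Normal of an isosurface of density field f at p: -normalize(grad f).
// If triangles come out lit on the wrong side, this sign (or the mesh
// winding) is the first thing to flip -- consistently, everywhere.
Vec3 GradientNormal(const std::function<float(float, float, float)>& f,
                    Vec3 p, float h)
{
    Vec3 g = { f(p.x + h, p.y, p.z) - f(p.x - h, p.y, p.z),
               f(p.x, p.y + h, p.z) - f(p.x, p.y - h, p.z),
               f(p.x, p.y, p.z + h) - f(p.x, p.y, p.z - h) };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    return { -g.x / len, -g.y / len, -g.z / len };
}
```

Two things worth checking against the question's code: the `Math.Abs(... Mod ...)` wrapping in `GetVolumeData` makes the sampled field discontinuous at tile boundaries, which produces exactly the kind of isolated bad-normal patches shown; and mixed `f(x-h) - f(x+h)` vs `f(x+h) - f(x-h)` conventions between the scalar and Vector3 overloads would flip normals only in some places.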