Are 3D textures used for ground tiles in 2.5D games? In an isometric system, each tile can be painted with a texture. Normally (if I recall the technical term correctly), the fastest method to paint these ground tiles with the appropriate texture is to use something like UV mapping. However, that is not what I have seen in most isometric tile-based games (such as Final Fantasy Tactics). For example, if I map the texture onto each grid cell, the texture appears flat. In modern games, each tile may have some part sprouting out of it (for example, grass). What I want to ask is: how should I implement the isometric system so that this feature is supported? The next problem is that if I implement such a feature, I have to change the art process too; UV texture mapping will not work anymore. What changes would I have to make to the art process? How would an artist color a 3D isometric tile? Basically, I just want to know how this isometric system would work out for both programmers and artists. The grid system seems to require a high degree of coordination between programmer and artist. EDIT: Rephrased the question.
How can I get the width and height of a picture? I'm trying to make a script in RPG Maker XP that needs the width and height of an image to do some calculations. I tried picture.bitmap.width and picture.bitmap.height, but they returned the width and height of the image as it is shown on the screen. For instance, if the image is 800 pixels high and the image takes up 400 pixels on the screen, picture.bitmap.height will only return 400. This is the code I'm using:

    picture = Sprite.new
    picture.bitmap = RPG::Cache.picture("picture")
    pictureWidth = picture.bitmap.width
    pictureHeight = picture.bitmap.height
In AGK2 Basic, how would I attach a camera to the player character? I've started making a game in AppGameKit and I'm using Tier 1 AGK2 Basic. I'm trying to make a sort of endless runner. I've made it so that the player stays still and the rest of the world moves around them, to create the illusion that the player is the one running forward. I then use a global variable to control all movement relative to the player. There must be a better way to do this, though, like attaching a camera to my player character. How would I do something like that?
Making a 2D game: figuring out sprite sizes? I'm currently making a 2D game as a school project using Game Maker and its language. My concern, however, is around the spriting and art. I can't figure out how to correctly size everything so it shows up correctly in the end product. I originally used a temporary 32x32 character, but it looks very small when I have the "Room Size" in Game Maker set to 720p. Is this something that I need to change size-wise in Game Maker, such as changing how far in the perspective camera is, or do I have to set actual dimension rules? I'm unsure of how to do this properly.
Designing for multiple screen resolutions and aspect ratios: target one resolution then scale to meet others, or use normalized values? I feel like what I'm currently doing is stupid but would just like reaffirmation that it is. Basically, I'm writing games which target a certain screen resolution (800x480), so all positions, widths and heights I set for sprites are for that resolution. On the engine side I have special scalers which change the width and height of what is being drawn to match the current platform's resolution. For example, if the platform's screen resolution is 1280x720, I stretch a sprite by a factor of (1280 / 800.0f, 720 / 480.0f). Positions are also offset and manipulated in a similar way. I really feel I should move to a normalized system for widths and positions (i.e. 0 to 1), but at the same time this feels more intuitive, since the first platform I target is 800x480 and I already have a good understanding of dimensions at that size. So my question is: is it really that bad to target a specific screen resolution and then, on the engine side, scale to meet different platforms?
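For reference, here is a minimal sketch (plain Python, hypothetical names) of the scale-factor approach described above, mapping coordinates authored for 800x480 to an arbitrary device resolution:

    # Author everything for a base resolution, then scale per device at draw time.
    BASE_W, BASE_H = 800.0, 480.0

    def make_scaler(device_w, device_h):
        sx = device_w / BASE_W   # e.g. 1280 / 800 = 1.6
        sy = device_h / BASE_H   # e.g.  720 / 480 = 1.5
        def scale_rect(x, y, w, h):
            return (x * sx, y * sy, w * sx, h * sy)
        return scale_rect

    scale = make_scaler(1280, 720)
    print(scale(100, 50, 64, 64))  # -> (160.0, 75.0, 102.4, 96.0)

Note that scaling x and y independently distorts the aspect ratio when it differs from the base; letterboxing with a single uniform factor is the usual alternative.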
Acceleration and deceleration during rotation. I'm attempting to have a sprite rotate in a way where its rotation speed increases until it has reached the halfway point of its rotation, at which point it starts to slow down. I'm currently finding the halfway point by calculating the amount of time it should take to rotate the sprite and then checking whether a current-time variable is larger. The code works rather well, except for the fact that I can't figure out how currentTime should be incremented each update. Currently I'm using delta time, but that doesn't work. I'm thinking it needs to be some combination of the calculated time to rotate and delta, but I haven't gotten it yet (clearly). EDIT: Specifically, currentTime shouldn't be incremented solely by delta, but I have no idea what it should be in place of delta.

    if (currentTime < timeToRotate) {
        rotationSpeed += 5f * delta;
        currentTime += delta;
    } else if (rotationSpeed > 5f) {
        rotationSpeed -= 5f * delta;
    } else {
        rotationSpeed = 5f;
        currentTime += delta;
    }
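One way to sidestep this bookkeeping entirely is to drive the rotation from normalized time rather than integrating a speed. A minimal Python sketch (the 90-degree target and timings are made up for illustration):

    import math

    # Ease-in/ease-out rotation driven by normalized time t in [0, 1].
    # The sprite accelerates until the halfway point, then decelerates
    # symmetrically, and lands exactly on the target angle.
    def eased_angle(start_angle, end_angle, elapsed, time_to_rotate):
        t = min(elapsed / time_to_rotate, 1.0)
        # smoothstep: slope is 0 at t=0 and t=1, maximal at t=0.5
        eased = t * t * (3.0 - 2.0 * t)
        return start_angle + (end_angle - start_angle) * eased

    # per-frame usage: accumulate elapsed += delta, then set the rotation
    for frame in range(5):
        print(eased_angle(0.0, 90.0, frame * 0.25, 1.0))

With this structure there is nothing to "correct" in the increment: elapsed really is just the sum of deltas, and the easing curve handles the speed-up and slow-down.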
MonoGame sprites not consistently drawing. Please don't mark this as a duplicate of my previous question, which was asked anonymously; this is a reattempt because I cannot edit the first attempt. What I have so far: a randomly generated tile set with items on SOME tiles; one creature spawner that spawns up to 5 creatures, which can be controlled by the player; and the ability for creatures to pick up items, bring them back to the spawn, and store them for later use. What I am noticing is that when I launch the game, it doesn't always show all the sprites. Sometimes the spawner is missing, or the creatures, or the GUI. And when things DO appear, if I tell the creatures to pick up anything, other items on the ground randomly appear and disappear. Even the spawn sometimes disappears! What I do notice is that when I have multiple creatures, some will disappear when crossing invisible lines, but not all of them. I forced two creatures to go east at the same speed and time, and one vanished while the other didn't. If you have any ideas as to what might be causing this, please let me know. If you have any questions, I'll try to edit to appease you. Thanks for reading! I have my Draw method like this:

    GraphicsDevice.Clear(Color.CornflowerBlue);
    // TODO: Add your drawing code here
    spriteBatch.Begin(SpriteSortMode.BackToFront, null, null, null, null, null, camera.GetTransformation());
    worldEntity.Draw(spriteBatch);
    guiManager.DrawGUI(spriteBatch);
    spriteBatch.End();
    base.Draw(gameTime);

As for the world entity, its draw method systematically goes through each tile in the world and calls its draw method. The tile draws itself using:

    Vector2 topLeftOfSprite = new Vector2(this.X, this.Y);
    Color tintColor = Color.White;
    spriteBatch.Draw(tileTexture, topLeftOfSprite, tintColor);

and then proceeds to call the draw method of any items on it. Creatures and items have very similar draw methods. The GUI also draws itself in a similar fashion. What I DO notice is that I've NEVER seen ground tiles vanish, only the items on them or creatures in the world. Could it be their Z isn't set appropriately?
Concerns about how to efficiently implement a sprite atlas. I currently transform (translate, rotate, scale) a bunch of vertices in my own Java code, then populate an mPositions array and an mTextureCoordinates array, which draws a bunch of differently textured sprites to the screen in one GL draw command. Works great. However, I now wish to move the transformation process away from native Java code and over to the vertex shader, and so I will need to pass into the shader transformation matrices which encode the translation, rotation, and scaling operations for each sprite. Given my current approach, the naive and obvious choice is to introduce another array, mTransformations, passed into the shader via a GLES20.glVertexAttribPointer command, which contains a matrix for each and every vertex. But this way seems a little wasteful, for two reasons: I will have to add the same transformation matrix 6 times per sprite to the mTransformations array, since each square sprite is made of 2 triangles (3 + 3 vertices); and since I'm now going to use the shader to perform the transformations, the 6 canonical coordinates of each square sprite (two triangles) will be the same for every sprite. In effect I'd have to populate mPositions with the same coordinates over and over for each sprite. Is there a more efficient way to achieve what I want?
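One common ES 2.0-era pattern (ES 2.0 has no instancing, so this is a workaround, not the only answer): keep the per-sprite matrices in a uniform array such as uniform mat4 u_transforms[MAX_SPRITES], and give each vertex a single float attribute indexing into it; the shader then does u_transforms[int(a_spriteIndex)] * position. This is limited by the uniform budget, so big batches must be split. A Python sketch of the CPU-side buffer layout (all names hypothetical):

    # The 6 corner positions are still duplicated per sprite (unavoidable
    # without instancing), but each vertex carries only a 1-float sprite
    # index instead of a 16-float matrix; matrices are uploaded once per
    # batch as a mat4 uniform array.
    CORNERS = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
    QUAD = [0, 1, 2, 0, 2, 3]  # two triangles per sprite

    def build_batch(num_sprites):
        positions, sprite_index = [], []
        for sprite_id in range(num_sprites):
            for corner in QUAD:
                positions.extend(CORNERS[corner])      # same 6 corners every sprite
                sprite_index.append(float(sprite_id))  # a_spriteIndex attribute
        return positions, sprite_index

    pos, idx = build_batch(2)
    print(len(pos) // 2, "vertices,", len(set(idx)), "distinct sprite indices")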
33
Slicing irregular spritesheet (automatically?) If found a cool sprite sheet on the internet but its irregular. Is there any way besides manually cutting sprites to extract the separate pngs? I need to pack them then again into the Texture Atlas under proper names. Maybe theres some smart online tool that uses deep learning or something like that?
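No machine learning is needed in most cases: connected regions of non-transparent pixels can be found with a plain flood fill, as long as the sheet has a transparent background and sprites don't touch. A sketch with Pillow and NumPy (file names are placeholders), saving each region's bounding box as its own PNG:

    from collections import deque
    import numpy as np
    from PIL import Image

    # Flood-fill connected opaque regions; save each bounding box as a PNG.
    img = Image.open("sheet.png").convert("RGBA")
    alpha = np.array(img)[:, :, 3] > 0
    seen = np.zeros_like(alpha, dtype=bool)
    h, w = alpha.shape
    n = 0
    for sy in range(h):
        for sx in range(w):
            if alpha[sy, sx] and not seen[sy, sx]:
                queue = deque([(sy, sx)])
                seen[sy, sx] = True
                x0, y0, x1, y1 = sx, sy, sx, sy
                while queue:
                    y, x = queue.popleft()
                    x0, y0 = min(x0, x), min(y0, y)
                    x1, y1 = max(x1, x), max(y1, y)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and alpha[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                img.crop((x0, y0, x1 + 1, y1 + 1)).save(f"sprite_{n}.png")
                n += 1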
How can I remove a sprite from the screen using pygame? I am having trouble removing sprites from my screen using pygame. I am killing my sprite with sword.kill(), which should remove it from the all_sprites group, so it should disappear, right? Here is where I blit my images:

    def draw(self):
        for sprite in self.all_sprites:
            self.screen.blit(sprite.image, self.camera.apply(sprite))
        pg.display.update()

Here is my sword class:

    class Sword(pg.sprite.Sprite):
        def __init__(self, game, x, y, entity):
            self.groups = game.all_sprites
            pg.sprite.Sprite.__init__(self, self.groups)
            self.game = game
            self.image = self.game.sword
            self.image = pg.transform.scale(self.image, (TILESIZE, TILESIZE))
            self.image.set_colorkey(WHITE)
            self.rect = self.image.get_rect()
            self.x = x
            self.y = y
            self.rect.x = x
            self.rect.y = y
            if entity.direction == 'down':
                self.image = pg.transform.rotate(self.image, 90)

        def update(self):
            self.kill()

And here is where I create my sword object:

    def get_keys(self):
        self.vx, self.vy = 0, 0
        keys = pg.key.get_pressed()
        if keys[pg.K_LEFT] or keys[pg.K_a]:
            self.game.walk_sound.play(-1)
            now = pg.time.get_ticks()
            self.vx = -200
            self.direction = 'left'
            if now - self.last_update > self.frame_rate:
                self.last_update = now
                try:
                    self.game.player_img = self.game.walkleft[self.frame]
                    self.frame += 1
                except IndexError:
                    self.frame = 0
            self.swing = False
        elif keys[pg.K_RIGHT] or keys[pg.K_d]:
            self.game.walk_sound.play(-1)
            now = pg.time.get_ticks()
            self.vx = 200
            self.direction = 'right'
            if now - self.last_update > self.frame_rate:
                self.last_update = now
                try:
                    self.game.player_img = self.game.walkright[self.frame]
                    self.frame += 1
                except IndexError:
                    self.frame = 0
            self.swing = False
        elif keys[pg.K_UP] or keys[pg.K_w]:
            self.game.walk_sound.play(-1)
            now = pg.time.get_ticks()
            self.vy = -200
            self.direction = 'up'
            if now - self.last_update > self.frame_rate:
                self.last_update = now
                try:
                    self.game.player_img = self.game.walkup[self.frame]
                    self.frame += 1
                except IndexError:
                    self.frame = 0
            self.swing = False
        elif keys[pg.K_DOWN] or keys[pg.K_s]:
            self.game.walk_sound.play(-1)
            now = pg.time.get_ticks()
            self.direction = 'down'
            self.vy = 200
            if now - self.last_update > self.frame_rate:
                self.last_update = now
                try:
                    self.game.player_img = self.game.walkdown[self.frame]
                    self.frame += 1
                except IndexError:
                    self.frame = 0
            self.swing = False
        elif keys[pg.K_SPACE]:
            self.attack = True
            if self.direction == 'down':
                print(self.rect.centery)
                self.sword = Sword(self.game, self.rect.centerx - 7, self.rect.bottom, self)
        elif self.vx != 0 and self.vy != 0:
            self.vx *= 0.7071
            self.vy *= 0.7071
            self.swing = False
        elif keys[pg.K_t]:
            self.x = 1 * TILESIZE
            self.y = 1 * TILESIZE
            self.swing = False
        elif self.attack and not keys[pg.K_SPACE]:
            self.sword.update()
            self.attack = False

Any help would be appreciated. Thanks!
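For comparison, here is a self-contained sketch of the kill() mechanism in isolation (pygame only; all names hypothetical). kill() removes the sprite from every group it belongs to, so it stops being drawn the next time the group is iterated, but only if the sprite's update() actually gets called each frame:

    import pygame as pg

    # A sword that removes itself after a fixed lifetime in milliseconds.
    class Sword(pg.sprite.Sprite):
        def __init__(self, groups, pos, lifetime_ms=150):
            super().__init__(groups)
            self.image = pg.Surface((16, 16))
            self.image.fill((200, 200, 200))
            self.rect = self.image.get_rect(topleft=pos)
            self.die_at = pg.time.get_ticks() + lifetime_ms

        def update(self):
            if pg.time.get_ticks() >= self.die_at:
                self.kill()  # removed from all groups, so it stops drawing

    # In the game loop, all_sprites.update() must run every frame;
    # otherwise Sword.update() (and thus kill()) is never reached.

A common pitfall with code like the question's: the group's update() is never invoked (or the key-state branch that calls sword.update() is never entered), so the kill() never happens even though the logic looks right.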
Pixel art tree using 3/4 perspective. Is there a correct way to calculate the exact proportions for drawing a tree sprite for an RPG? These are the types of trees I am aiming to create (sorry for the huge picture, I don't know how to scale it down). The tutorial at http://cyangmou.deviantart.com/art/Pixel-Art-Tutorial-3-The-perfect-crate-311089437 perfectly describes how to create an object using the 3/4 perspective, but when I apply this method to drawing a tree, it doesn't quite work. Is there a different method that I could use for creating non-rectangular objects like a tree or other strangely shaped objects, or am I going about it wrong? Thank you in advance. :D
Sprite animation software that works with Cocos2D. So the title basically says it all. I'm looking for software that would help me create sprite animations that I could use with cocos2d, by allowing me to pose a sprite and have the software create the key frames between poses (basic tweening). Any ideas?
Finding a relative x and y value on a sprite (GameMaker). I'm having a little conundrum that may just result from a lack of knowledge of GameMaker's functionality. I've attached two images to aid the explanation. I have a sprite of a turret (with a gun barrel) attached to a turret object, and at certain points in gameplay this object will spawn another object on top of it (let's call it the 'bullet object'). I would like the bullet object to spawn at the x and y coordinates of the end of the gun barrel. This would be easy to find as a pair of coordinates if the sprite were always stationary, in this configuration. Alas, it is not. It rotates like billy-o. This means that the x and y coordinates of the end of the gun barrel are constantly different. How do I find this constantly changing x and y coordinate? I imagine (though am most likely wrong) that the initial x and y coordinates of the sprite are saved and can be found even when rotated. Is there a function that does this? Or do I need to write a script and then call it every time I want to spawn the bullet object? Thanks for your help.
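The underlying math is just rotating the barrel's local offset by the turret's current angle; GameMaker ships lengthdir_x and lengthdir_y for exactly this. A language-agnostic sketch in Python, with made-up numbers:

    import math

    # The barrel tip is a fixed offset in the turret's local space; rotate
    # that offset by the current angle each time a bullet spawns.
    # (GameMaker: x + lengthdir_x(length, dir), y + lengthdir_y(length, dir).)
    def barrel_tip(turret_x, turret_y, barrel_length, angle_degrees):
        a = math.radians(angle_degrees)
        return (turret_x + barrel_length * math.cos(a),
                turret_y - barrel_length * math.sin(a))  # screen y points down

    print(barrel_tip(100, 100, 32, 0))   # -> (132.0, 100.0)
    print(barrel_tip(100, 100, 32, 90))  # -> (100.0, 68.0)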
How can I implement the ability to 'cut' sprites into arbitrary shapes? I want to create a game where you have scissors that cut a sprite along a shape that you create by dragging your finger on the screen. My problem is that I can't figure out how to cut my sprite into the specific shape that the user creates with their finger. So far, the only solution I found searching the web is to create my own shader that lets me make a specific part of a texture transparent, but I don't understand how I can apply that to my scenario.
Triangular grid and rendering to a texture (Direct3D 11). Is it possible to render vertex data from a triangular grid into a texture of the same size (i.e. 3x3 vertices to 3x3 pixels, each pixel representing one vertex)? Consider the following situation. I have a grid (3x3 to keep it simple), and in a vertex shader there is a computation of the Z coordinate (imagine it like some wave model of a water surface). So each vertex has some Z value, and I need to store these values in a texture (for comparison with the next frame). The texture is set as a render target and has the same size as the grid. How do I render the precise Z value (coordinate) of a vertex into the appropriate pixel? I mean, my dilemma is the fact that the rasterizer gets three vertices and interpolates their values by default, so the values stored in pixels are not precise at all.
Renaming Texture 2D nodes in Unity's Shader Graph? I am new here, so thanks for having me. I have a query regarding renaming the Texture 2D nodes I have created in my first Shader Graph. I have set up a shader graph with a normal map, base colour, smoothness and AO. These are showing up as exposed parameters, which is just what I wanted, so I can drag and drop new texture maps for the other materials easily (please see the screenshot attached). I would love to be able to name each of those Texture 2D maps accordingly, so it is clear where each map from the assets folder needs to be dragged. E.g. the texture node that contains the normal map would simply be called 'Normal' instead of 'Texture 2D', the base colour map named 'Base Colour', etc. At the moment I have 4 exposed parameters, all called Texture 2D. Not the most serious of issues, I would agree, but I feel like I must be missing something; there is no sign of a 'rename' option for those nodes. Any help very much appreciated. Cheers, Nick
TexturePacker ignores extensions. I'm using TexturePacker in one of my games, but when packing a bunch of textures, their extension is kept in the data file. So when I want to find a texture I need to search for "image.png" instead of just "image". Is there an option to make TexturePacker ignore the extensions of my source images in the data file? Solved: if anyone else wants this, here's the exporter I made: https://www.box.com/s/bf12q1i1yc9jr2c5yehd. Just extract it into "C:\Program Files (x86)\CodeAndWeb\TexturePacker\bin\exporters\UIToolkit No Extensions" (or something similar) and it should show up as an exporter.
Using texture() in combination with JBox2D. I'm having some trouble using the texture() method inside a beginShape()/endShape() clause. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object as follows:

    void display() {
        Vec2 pos = level.getLevel().getBodyPixelCoord(body);
        float a = body.getAngle(); // needed for rotation
        pushMatrix();
        translate(pos.x, pos.y);
        rotate(-a);
        fill(temp); // temp is a color defined in the constructor
        stroke(0);
        beginShape();
        vertex(-w/2, -h/2);
        vertex(w/2, -h/2);
        vertex(w/2, h - h/2);
        vertex(-w/2, h - h/2);
        endShape(CLOSE);
        popMatrix();
    }

Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put texture(img) (img is a PImage defined in the constructor), the stroke gets drawn but the bar isn't filled, and I get the warning:

    texture() is not available with this renderer

What can I do in order to use textures anyway? I don't even understand the error message, since I don't know much about the different renderers.
Why do my sprites have a dark shadow line/frame surrounding the texture? I'm starting OpenGL with Apple's GLKit, and I'm having some trouble getting my sprites displayed properly. The problem is that they are all surrounded by thin dark lines. The screenshot below shows two rectangles with PNG image textures containing transparency (obviously). The black shadows surrounding them are definitely not part of the PNGs. The green PNG is done without anti-aliasing; the blue one has an anti-aliased border. The black border is also apparent if I draw only one sprite. The relevant part (I hope...) of the code is:

    // render the scene
    - (void)render
    {
        glClearColor(69./255., 115./255., 213./255., 1.);
        glClear(GL_COLOR_BUFFER_BIT);
        [shapes enumerateObjectsUsingBlock:^(AAAShape *shape, NSUInteger idx, BOOL *stop) {
            [shape renderInScene:self];
        }];
    }

    // creating and storing the effect inside the shape class
    - (GLKBaseEffect *)effect
    {
        if (!effect) {
            effect = [[GLKBaseEffect alloc] init];
        }
        return effect;
    }

    // rendering the shape (including effect configuration)
    - (void)renderInScene:(AAAScene *)scene
    {
        // TODO: store vertices in a buffer
        self.effect.transform.projectionMatrix = scene.projectionMatrix;
        self.effect.transform.modelviewMatrix = self.objectMatrix;
        if (texture) {
            self.effect.texture2d0.enabled = GL_TRUE;
            self.effect.texture2d0.envMode = GLKTextureEnvModeReplace;
            self.effect.texture2d0.target = GLKTextureTarget2D;
            self.effect.texture2d0.name = texture.name;
        }
        [self.effect prepareToDraw];
        if (texture) {
            glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
            glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, self.textureCoordinates);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
        glEnableVertexAttribArray(GLKVertexAttribPosition);
        glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, self.vertices);
        glDrawArrays(GL_TRIANGLE_FAN, 0, self.vertexCount);
        glDisableVertexAttribArray(GLKVertexAttribPosition);
        if (texture) {
            glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
            glDisable(GL_BLEND);
        }
    }

Any ideas, anyone? Thank you.
How to extract the texture from an image onto a mesh? (Preamble: I think this fits either here or SO with a computer-vision tag; I chose gamedev because I think you guys are probably really good with textures and meshes, but tell me if my decision was wrong.) I want to accomplish the following: given an image of an object, a 3D mesh very similar to that object, and rendering parameters that render that mesh to the exact location of the object in the image, I want to "extract" the original texture from the image onto the mesh. (So in a later step, I could re-render the mesh from another viewpoint; of course, some of the texture would be invisible/black.) My mesh has UV coordinates for each triangle, so I guess I could either "store the texture on the mesh" somehow, or directly back-map it to a 2D texture map. So I guess what I want to do is a kind of texture back-mapping/remapping, essentially the inverse of what is done in game dev when texturing objects. I was having a lot of trouble finding any useful information on Google about this, so I thought I'd ask here. Maybe I haven't found the right word for it yet. I think there's probably quite a lot of "stuff" involved, because a pixel in the original image won't exactly correspond to a location on the mesh.
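A sketch of the per-texel version of this idea (Python with NumPy; project() stands in for whatever camera model renders the mesh onto the photo and is a placeholder). For each texel of the output map, barycentric weights in UV space give the corresponding 3D surface point, which is projected into the photo to fetch a color:

    import numpy as np

    def barycentric(p, a, b, c):
        # Solve p = a + u*(b-a) + v*(c-a) for (u, v) in 2D UV space.
        m = np.array([[b[0] - a[0], c[0] - a[0]],
                      [b[1] - a[1], c[1] - a[1]]])
        u, v = np.linalg.solve(m, np.asarray(p, float) - a)
        return 1.0 - u - v, u, v

    def bake_triangle(texture, photo, uv, xyz, project):
        h, w, _ = texture.shape
        a, b, c = [np.array([u * (w - 1), v * (h - 1)]) for u, v in uv]
        lo = np.maximum(np.floor(np.minimum(np.minimum(a, b), c)), 0).astype(int)
        hi = np.minimum(np.ceil(np.maximum(np.maximum(a, b), c)), [w - 1, h - 1]).astype(int)
        for ty in range(lo[1], hi[1] + 1):
            for tx in range(lo[0], hi[0] + 1):
                w0, w1, w2 = barycentric((tx, ty), a, b, c)
                if min(w0, w1, w2) < 0:
                    continue  # texel lies outside this UV triangle
                point = w0 * xyz[0] + w1 * xyz[1] + w2 * xyz[2]
                px, py = project(point)  # your known rendering parameters
                texture[ty, tx] = photo[int(py), int(px)]

A real implementation would also need visibility testing (a depth buffer from the camera), so texels on back-facing or occluded triangles are not filled with wrong pixels.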
Unreal Engine Niagara: write to a texture or data structure? In a Niagara system, is there any way to write to a texture? I want to write the position of each particle into a texture, so that a 2D grayscale texture stores, as each pixel's intensity, the number of particles that fall in that pixel. Or is there any data structure I can use to store the particle positions, something like a 2D or 3D array?
Sprite tile sheets vs. single textures. I'm making a race circuit which is constructed from various textures. To provide some background: I'm writing it in C and creating quads with OpenGL, to which I assign a loaded .raw texture. Currently I use 23 textures of 500x500 px, which are all loaded and freed individually. I have now combined them all into a single sprite tile sheet of 3000x2000 pixels, since the number of texture tiles I'm using is increasing. Now I'm wondering: is it more efficient to load them individually, or to write extra code to extract a certain tile from the sheet? Is it better to load the sheet, then extract the 23 tiles and store them from one sheet, or to load the sheet each time and crop it to the correct tile? There seem to be a number of ways to implement it. Thanks in advance.
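With a sheet, you normally don't crop anything at runtime: you keep the one texture bound and select a tile purely through texture coordinates on the quad. A sketch of the arithmetic (Python for illustration, but it is the same in any language):

    # UV sub-rectangle for tile (col, row) in a regular grid atlas.
    # Rendering a quad with these coordinates "extracts" the tile on the
    # GPU; the sheet is uploaded once and never cropped in memory.
    def tile_uvs(col, row, tile_w, tile_h, sheet_w, sheet_h):
        u0 = col * tile_w / sheet_w
        v0 = row * tile_h / sheet_h
        u1 = (col + 1) * tile_w / sheet_w
        v1 = (row + 1) * tile_h / sheet_h
        return (u0, v0), (u1, v0), (u1, v1), (u0, v1)

    # 23 tiles of 500x500 packed into a 3000x2000 sheet (a 6 x 4 grid):
    print(tile_uvs(2, 1, 500, 500, 3000, 2000))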
How to create 2D textures from footage? I am new to game development, but I heard that already in the '80s, with Prince of Persia, indie developers were capable of creating textures from footage. I am developing a game, and so far I get an illustrator to draw the things I need. I'd like to explore the option of taking some video footage of complex movements and creating some basic textures from it (as I guess Jordan Mechner did), and then "feeding" them to a digital artist to draw over them, to make them cartoonish and coherent with the game's pre-existing style. Could anyone also explore the answer for mobile game development? Any crazy mobile developer that has done so already for a 2D game? :)
How do I create differently sized texture atlases for different screen sizes? I am beginning game development and using texture atlases. I've created textures based on a resolution of 1920x1080, so I created a 1024x1024 texture atlas for storing multiple graphics. If the game is played on an 800x480 device, the atlas will be too big to load into memory; an atlas of 512x512 would be enough. And on devices with a 480x320 resolution, the game might not even work due to the different texture size. How can I resize the atlas to save memory? Can I use different texture atlases for different screen sizes? I just want to know how other game devs do it.
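One common arrangement (a sketch of one approach, not the only one): export the same atlas at several scales at build time, pick one file at startup based on the screen, and keep all region coordinates normalized (0 to 1) so they work unchanged with every variant. Hypothetical names throughout:

    # Choose an atlas variant by comparing device width with the width the
    # art was authored for. Normalized UVs stay valid for every variant.
    ATLAS_VARIANTS = {1.0: "atlas_1024.png", 0.5: "atlas_512.png", 0.25: "atlas_256.png"}

    def pick_atlas(device_w, base_w=1920):
        scale = device_w / base_w
        # smallest variant that still covers the needed scale
        best = min((s for s in ATLAS_VARIANTS if s >= scale),
                   default=max(ATLAS_VARIANTS))
        return ATLAS_VARIANTS[best]

    print(pick_atlas(1920))  # atlas_1024.png
    print(pick_atlas(800))   # 800/1920 = 0.42 -> atlas_512.png
    print(pick_atlas(480))   # 0.25 -> atlas_256.png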
Is there a better way than render targets for a "glowing" effect? I am making glyphs on a wall "glow" by changing how much of the overlay texture's light intensity is blended into the diffuse map. The basic approach is to capture the diffuse map into a render target, and then spriteBatch.Draw (additive) the light-intensity texture on top of it. This worked great for smaller textures. Now I want to do the same for a larger scene that has massive texture maps. It doesn't make sense to capture all of that just to make some portions "glow". Is there a better way than using render targets for a "glowing" effect?
Does cocos2d cache textures automatically? I know that if we want textures cached, we can use the shared texture manager to cache them; but since this is on a mobile platform, why doesn't cocos2d-x just do this for all texture loads? When creating sprites with images, are those textures cached as well?
Ray tracing: textures and the Phong lighting model. Another question related to my ray tracer implementation for iPad. If I have a polygon that has both a texture and a material, how do I calculate the color using the Phong lighting model? Is the texture used as a substitute for some of the components (diffuse?)? Or, if there's a texture, do I just ignore the material and use only the texture color?
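The usual convention (sketched below in Python, with the dot products precomputed as scalars) is that the texture sample modulates the material's diffuse, and often ambient, color, while the specular term stays untextured unless there is a dedicated specular map. You don't discard the material; you combine them:

    # Phong with a texture: tex_rgb replaces/tints the diffuse and ambient
    # base color; specular uses only the light color and shininess.
    def shade(tex_rgb, n_dot_l, r_dot_v, material, light_rgb):
        ambient = [material["ka"] * t for t in tex_rgb]
        diffuse = [material["kd"] * max(n_dot_l, 0.0) * t * l
                   for t, l in zip(tex_rgb, light_rgb)]
        spec = material["ks"] * max(r_dot_v, 0.0) ** material["shininess"]
        specular = [spec * l for l in light_rgb]
        return [a + d + s for a, d, s in zip(ambient, diffuse, specular)]

    mat = {"ka": 0.1, "kd": 0.8, "ks": 0.4, "shininess": 32}
    print(shade((1.0, 0.5, 0.2), 0.7, 0.9, mat, (1.0, 1.0, 1.0)))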
How to update a dynamic texture? I'm using Ogre 1.10. I have a dynamic texture assigned to a material, and I need to update its buffer with a new image every few seconds. How can I transfer pixel data from an image to my dynamic texture? I've created a manual texture like this:

    // Create the texture
    Ogre::TexturePtr texture = Ogre::TextureManager::getSingleton().createManual(
        "dyn_texture",                                        // name
        Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        Ogre::TEX_TYPE_2D,                                    // type
        256, 256,                                             // width & height
        0,                                                    // number of mipmaps
        Ogre::PF_BYTE_BGRA,                                   // pixel format
        Ogre::TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);

And in my update function I have the image that I want to transfer:

    Ogre::Image img;
    img.load(basename.toStdString(), "resources");
    // Copy pixels from img to texture??

I've already tried doing:

    Ogre::HardwarePixelBufferSharedPtr pixelBuffer = texture->getBuffer();
    pixelBuffer->blitFromMemory(img.getPixelBox());

It works, but it's quite slow; the GUI freezes when updating like that.
Repeating UVs won't convert to a texture in Maya. I have designed a material in Maya which I want to convert to a JPG texture; however, I have repeating UVs on the material. When I convert the material to a JPG, the repeating UVs don't convert across. Is there any way I can get them to?
Compressed vs. uncompressed textures: what are the differences? As far as I know, compressed textures speed things up because the PCIe bus has to transfer less data, and the interconnect is the main latency issue with GPUs. Of this I'm certain. What I'm not sure of, and what I mainly want an answer to, is this: does the GPU decompress the textures on the fly, or does it display the uncompressed textures? In other words, is there any visual impact of using compressed textures? Does the GPU decompress them automatically? Does this vary with the compression algorithm used, or is it GPU-specific? Thanks in advance.
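For scale, here is the memory arithmetic for the common block-compressed (DXT/BC) formats. These are fixed-rate formats that GPU texture units sample directly in hardware, so there is no separate decompression pass; the cost is image quality, since the formats are lossy:

    # Memory for a 1024x1024 texture, uncompressed vs. block-compressed.
    width = height = 1024
    rgba8 = width * height * 4               # 4 bytes per texel
    dxt1 = width * height // 2               # 8 bytes per 4x4 block = 0.5 B/texel
    dxt5 = width * height                    # 16 bytes per 4x4 block = 1 B/texel
    print(rgba8 // 1024, "KiB uncompressed RGBA8")  # 4096 KiB
    print(dxt1 // 1024, "KiB as DXT1/BC1")          # 512 KiB
    print(dxt5 // 1024, "KiB as DXT5/BC3")          # 1024 KiB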
Why is the texture all wrong on my model? Why is this happening, and if possible, I'd like to know what this particular problem is called. The model is a simple cube made in Blender and exported with normals and an MTL. The texture is the cube unwrapped, with a crate image on each side. Are the UV coordinates wrong?
How can I draw a perspective-correct quad? I'm trying to draw a quad in 2D (in SharpDX, but that is basically XNA), but texture correction is not working and I'm getting only an affine-textured quad. I'm using BasicEffect to render it:

    BasicTextureEffect = new BasicEffect(Device)
    {
        Alpha = 1.0f,
        TextureEnabled = true,
        LightingEnabled = false,
        VertexColorEnabled = true,
        Projection = Matrix.OrthoOffCenterLH(0.0f, ScreenWidth, ScreenHeight, 0.0f, 0.0f, 1.0f),
    };

It is a 2D isometric game. I have pseudo-3D coordinates in the isometric world (it's a shadow on the ground) converted to screen space and then rendered using DrawQuad. Do I need to set up the view (or projection?) somehow to real 3D (emulating the isometric camera), and then draw this quad in 3D coordinates instead? How, exactly? Or is there a way to correct this in 2D? Update: I've uploaded the actual partial screenshot (Figure A), for you to see it's almost the same. (I cut it a bit, but it goes all the way to the corner.) [Figure A] [Figure B] Update 2: I can confirm now that plain SharpDX (XNA) BasicEffect does this with just plain UV mapping (4 corners of a square) and plain DrawQuad. I've temporarily changed it to include a center point, and I now draw 4 triangles instead of 2 (Figure B); this reduces the effect to a minimum, but it's still there. It still needs to be solved, because I won't be using it only on the shadow.
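For intuition, here is the difference in one dimension (a Python sketch; the w values are made up). Affine interpolation mixes u linearly across the quad; perspective-correct interpolation mixes u/w and 1/w and divides per pixel, which the GPU only does when each vertex carries a real post-projection w (an orthographic 2D draw gives every vertex w = 1, so nothing is corrected):

    def interpolate_u(u0, w0, u1, w1, t):
        affine = u0 + (u1 - u0) * t
        # perspective-correct: interpolate u/w and 1/w, divide per pixel
        inv_w = (1 - t) / w0 + t / w1
        u_over_w = (1 - t) * u0 / w0 + t * u1 / w1
        return affine, u_over_w / inv_w

    # near vertex (w=1) to far vertex (w=4), halfway across the edge:
    print(interpolate_u(0.0, 1.0, 1.0, 4.0, 0.5))  # (0.5, 0.2)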
Godot 3.2: AtlasTexture not creating any texture atlas. When I try to use the new Godot feature that lets you import sprites as an AtlasTexture, I don't get any results.
How to make a hand-drawn square texture periodic with as little modification as possible? The obvious part is drawing it as close to periodic as possible, but hand drawing is never that perfect. So what kinds of modifications are possible that do not make the textures lose their hand-drawn character?
Proper use of texture units. I'm a beginner using OpenGL v3.3 in C# with SharpGL. I have a simple scene with a skybox and some OBJ models. Both the skybox and the models have multiple textures. Currently, I load all the textures used in the scene into different texture units once at program launch, and then, while rendering each element in the scene, I just change a uniform variable to reflect the correct texture unit and render the vertices. Doing this could get me into trouble if there are more textures in my scene than texture units on the GPU, so I'm not sure this is the right approach. I would like to know the standard practice for such a scenario. Do you just (re)bind the texture for each element to, say, fixed texture unit 0 on every draw call, or what?
Texture coordinates do not map correctly in a Direct3D 11 game engine. I beg your pardon if this question has already been answered elsewhere or if this is the wrong site, but I have a serious issue with rendering textures in Direct3D 11. Using Cinema 4D R17, I created a simple cube, triangulated all polygons, and UV-mapped a texture, and it renders correctly there. Next, I exported the file to .x, because I created a simpler mesh format and .x is ideal for getting vertices, indices, normals, texture coordinates, etc. This conversion is perfect: all data from the source file is successfully transferred to my file (I checked and double-checked it). However, when I load the file in my game engine, the cube is rendered incorrectly, with a strange pattern forming. This is the original .x file (I included only the relevant parts):

    Mesh CINEMA4D_Mesh {
        8;  // Cube. These are the vertices.
        0.01 0.01 0.01, 0.01 0.01 0.01, 0.01 0.01 0.01, 0.01 0.01 0.01,
        0.01 0.01 0.01, 0.01 0.01 0.01, 0.01 0.01 0.01, 0.01 0.01 0.01;
        12;  // Cube. These are the indices; '3' means the face contains 3 vertices.
        3 0,1,3, 3 2,3,5, 3 4,5,7, 3 6,7,1, 3 1,7,5, 3 6,0,2,
        3 0,3,2, 3 2,5,4, 3 4,7,6, 3 6,1,0, 3 1,5,3, 3 6,2,4;

        MeshNormals {
            8;  // Cube
            0.408 0.408 0.816, 0.667 0.667 0.333, 0.667 0.667 0.333, 0.408 0.408 0.816,
            0.408 0.408 0.816, 0.667 0.667 0.333, 0.667 0.667 0.333, 0.408 0.408 0.816;
            12;  // Cube
            3 0,1,3, 3 2,3,5, 3 4,5,7, 3 6,7,1, 3 1,7,5, 3 6,0,2,
            3 0,3,2, 3 2,5,4, 3 4,7,6, 3 6,1,0, 3 1,5,3, 3 6,2,4;
        }

        MeshTextureCoords {
            8;  // Cube
            1.0 1.0, 0.0 1.0, 1.0 0.0, 1.0 1.0, 1.0 1.0, 1.0 0.0, 0.0 1.0, 1.0 0.0;
        }

        MeshMaterialList {
            2; 12;
            1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1;
            Material C4DMAT_NONE {
                1.0 1.0 1.0 1.0; 1.0; 0.0 0.0 0.0; 0.0 0.0 0.0;
            }
            Material C4DMAT_Mat {
                1.0 1.0 1.0 1.0; 1.0; 0.0 0.0 0.0; 0.0 0.0 0.0;
                TextureFilename { "tex.bmp"; }
            }  // C4DMAT_Mat
        }
    }

I also invert the v texture coordinate (v = 1 - v). This is the vertex data I extract:

    8  // Number of vertices.
    0.01 0.01 0.01
    0.01 0.01 0.01
    0.01 0.01 0.01
    0.01 0.01 0.01
    0.01 0.01 0.01
    0.01 0.01 0.01
    0.01 0.01 0.01
    0.01 0.01 0.01  // Vertices stored as X, Y, Z.
    1 0  0 0  1 1  1 0  1 0  1 1  0 0  1 1  // Texture coordinates stored as U, V.
    36  // Index count.
    0 1 3  2 3 5  4 5 7  6 7 1  1 7 5  6 0 2
    0 3 2  2 5 4  4 7 6  6 1 0  1 5 3  6 2 4  // Indices.
    tex.bmp  // Texture file name.

I read that there might be a conflict between the texture coordinates of vertices that are shared among different faces, and that resolving this would imply not using index buffers. EDIT: I did what wondra said (replaced the texture coordinates with 0 0, 0 1, 1 0, 1 1, 0 1, 0 0, 1 1, 1 0), and it does look better indeed, but I am still not sure what is wrong here. Could anybody help me? Thank you. UPDATE: The Cinema 4D .obj exporter is broken. Use Blender or anything else.
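The shared-vertex conflict mentioned above has a standard fix that keeps index buffers: duplicate a position once for each distinct UV it is used with, then re-index the faces. A small sketch of the idea in Python (hypothetical data):

    # faces: list of triangles, each corner given as (position index, uv index).
    # Output: one vertex per unique (position, uv) pair plus a new index list.
    def split_vertices(positions, uvs, faces):
        remap, out_pos, out_uv, out_idx = {}, [], [], []
        for tri in faces:
            for pos_i, uv_i in tri:
                key = (pos_i, uv_i)
                if key not in remap:
                    remap[key] = len(out_pos)
                    out_pos.append(positions[pos_i])
                    out_uv.append(uvs[uv_i])
                out_idx.append(remap[key])
        return out_pos, out_uv, out_idx

    # one cube corner used by three faces with three different UVs
    # becomes three output vertices:
    pos = [(0, 0, 0)]
    uv = [(0, 0), (1, 0), (0, 1)]
    print(split_vertices(pos, uv, [[(0, 0), (0, 1), (0, 2)]]))

For a fully UV-mapped cube this typically yields 24 vertices instead of 8, which is exactly what indexed rendering expects.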
Is there a way to easily make 3D models and textures for people with no prior experience? I want to make a game, but I need assets. I am wondering if I should make them myself, even though I have never made any textures or 3D models before. Are there technologies that allow you to easily make them? What would you recommend I do?
SDL2 textures bleeding a 1px border around tile maps. SDL_RenderCopyEx takes integer arguments: https://forums.libsdl.org/viewtopic.php?t=9486. This post gives a good indication of my question. Basically, if you set SDL2's logical scale (or otherwise) and render textures at the native window resolution, they appear fine. However, with tile maps, if you resize the window in any way, you get a bleed where an integer rounding issue creates a 1px border around certain tiles. Is my only option to create a 1px border around all my images to stop this bleed/rounding error? Or a semi-transparent border with the main color? What are my options? Is this solved in any of the latest SDL 2.x.y releases? EDIT: A simpler method I have used is reducing my images from 64x64 px to 62x62 px in SDL2 (not the actual sprite), using the sprite's own edge as a 1px border, and using render scaling to scale up that 1px, which stops the bleed. It reduces the quality of background images ever so slightly, but it requires no tweaking of any code or sprites... but again, I'm wondering if there's a more elegant solution.
How to load data for a specific level at runtime? I'm trying to create a game with many levels loaded from XML files. In my game I have many objects in each level. At present my game contains 20 levels, and I load all the textures at once on startup. But I think the correct way to do it is to load only the textures used in the current level. I don't know how to do that, so please explain it, ideally with some example code. At present I create a class for each type of entity by extending my Sprite class, and this subclass loads the appropriate image. I know this is not the best way to do things. Basically, I want to know how to load large levels efficiently in AndEngine. What is the proper method for loading textures, level data and background images from files when the level is run?
How would you animate a 3D character's face (with textures)? I'd like to use a low-poly style of modeling, and one of the corners I'd like to cut is on head geometry; specifically, making the details of the face a texture (like a Mii, or a character in Animal Crossing, for example). How would you go about animating the face, and getting it onto the model in the first place? I'm using Unreal Engine to learn, and working on my 3D modeling skills at the same time, so knowing a general workflow would give me some structure to work with.
Is it possible to look up a texel from a texture in a GLES2 GLSL fragment shader without using a sampler? Is there some way I can directly access texture memory from a fragment shader in GLES2 GLSL? I don't need the sampler to be involved, since I am just using the texture as a lookup table.
Why are normal maps in tangent space but not in normal space? I want to implement normal mapping in my little game engine. When getting into normal mapping, I wondered why normal maps are typically in tangent space and not in "normal space". That normal maps in object space would be complicated to handle is clear to me. But if they were relative to the normal, couldn't they simply be added to the vertex normal? It seems that tangent space requires constructing a transformation matrix for each pixel.
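For reference, here is the transform that tangent space implies (a NumPy sketch assuming an orthonormal basis). Simply adding the map value to the vertex normal cannot work, because the map's (0, 0, 1) means "along the surface normal", which is a different world direction at every point; the TBN matrix supplies exactly that missing frame:

    import numpy as np

    # Decode a tangent-space normal and rotate it into world space.
    def decode_normal(rgb, tangent, bitangent, normal):
        n_ts = np.array(rgb) * 2.0 - 1.0          # [0,1] -> [-1,1]
        tbn = np.column_stack([tangent, bitangent, normal])
        n_world = tbn @ n_ts
        return n_world / np.linalg.norm(n_world)

    t = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 0.0, 1.0])
    n = np.array([0.0, 1.0, 0.0])                 # surface faces +Y
    # a "flat" texel (0.5, 0.5, 1.0) decodes to the surface normal itself:
    print(decode_normal([0.5, 0.5, 1.0], t, b, n))  # -> [0. 1. 0.]

In practice the basis is computed per vertex and interpolated, not rebuilt per pixel, so the per-pixel cost is one 3x3 multiply.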
Draw a textured 2D circle shape in SceneKit. I have the following code to create a circle path and an SCNShape from this path. orbit_gradient.png is a horizontal, 1px-high image that represents a gradient.

    let material = SCNMaterial()
    material.isDoubleSided = true
    material.lightingModel = .constant
    material.diffuse.contents = UIImage(named: "art.scnassets/orbit_gradient.png")
    let shapePath = Path.circle(radius: radius, segments: 512)
    let orbitShape = SCNShape(shapePath)
    orbitShape.materials = [material]
    self.orbitNode.geometry = orbitShape

The problem I have is that when applying the texture to this geometry, I'm just left with a white circle. If I set the diffuse to UIColor.red, it displays as red. What I want to accomplish is a stroked circle that appears to gradually fade, creating a rotating-trail effect.
Skysphere to skybox texture conversion. I am working on a hobby project and implemented a skybox, no big deal. Now I have bought (licensed) a few really nice sky textures. Unfortunately, the textures are projected for a skysphere. I could implement skysphere rendering, but I figure rendering a box is more efficient. (That may not actually matter on modern devices, though.) So I decided to transform the textures into 6 textures for a skybox. My hodge-podge solution for converting the textures is to create a scene in Blender and render out the appropriate 6 views; if you ask Google, that is apparently the common solution. Is there a tool that can do the conversion in a more automated way?
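The conversion itself is simple enough to script: for every pixel of a cube face, compute the direction that pixel looks along, convert it to spherical angles, and sample the equirectangular source there. A Python sketch for one face (src_pixel is a placeholder for your sampler, and the face orientation convention is one of several; flip a or b if a face comes out mirrored):

    import math

    def plus_x_face(src_pixel, size):
        face = [[None] * size for _ in range(size)]
        for y in range(size):
            for x in range(size):
                # face coordinates in [-1, 1]
                a = 2.0 * (x + 0.5) / size - 1.0
                b = 2.0 * (y + 0.5) / size - 1.0
                dx, dy, dz = 1.0, -b, -a          # direction for the +X face
                inv_len = 1.0 / math.sqrt(dx*dx + dy*dy + dz*dz)
                dx, dy, dz = dx*inv_len, dy*inv_len, dz*inv_len
                # equirectangular lookup
                u = 0.5 + math.atan2(dz, dx) / (2.0 * math.pi)
                v = 0.5 - math.asin(dy) / math.pi
                face[y][x] = src_pixel(u, v)      # ideally bilinear-filtered
        return face

The other five faces differ only in how (a, b) map to (dx, dy, dz).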
Example of texture mapping in pyglet? Using pyglet, I am trying to create a UV sphere mesh, and on its surface I would like to display a Mercator-projection map. In researching examples of pyglet texture mapping, I have found: NeHe tutorial 6 in pyglet, but this uses immediate mode and shows the entire texture on each cube face; pyglet-obj-test, which looks like it includes texture maps, but instead of showing the chosen texture it shows an even color; a Stack Overflow question which looked like it had an answer with a solution, but the referenced gists showing working code have disappeared; and another implementation, referenced in the pyglet Google group. Are there any simple examples of a working mesh texture map in pyglet? Edit: I found out the pyglet-obj-test code looks for lines in the .obj file starting with vt in order to create texture-mapping coordinates. However, the Wavefront exporter in Blender apparently has no option to export this information. I exported the same mesh with each of the other available types: COLLADA (.dae), Stanford (.ply), Stl (.stl), 3D Studio (.3ds), Autodesk FBX (.fbx), X3D Extensible 3D (.x3d). Perhaps there's a pyglet importer for one of these formats... Edit 2: I clearly had a poor understanding of the Wavefront format. It seems vt lines are not strictly necessary; what I was missing is full f maps. For instance, to simplify things way down, I created a 3D mesh that is actually a single square with a texture mapped onto it. Blender shows the square with the distorted texture, but the built-in Wavefront exporter produces a single texture-mapping line, f 2//1 1//1 3//1 4//1. This should apparently instead be something like f 2/0/1 1/0/1 3/0/1 4/0/1 (note there should be a number between the two slashes). Apparently Blender only does a full (proper?) export of these lines if a texture is UV-mapped. Applying a texture without a UV map renders properly in Blender, but I guess the Wavefront exporter doesn't properly handle it. Perhaps it's time for me to submit a Blender bug report...
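The face syntaxes mentioned in the edit are easy to handle uniformly when parsing: an OBJ face corner is "v", "v/vt", "v//vn", or "v/vt/vn", with 1-based indices. A tiny Python helper:

    # Parse one corner token of an OBJ "f" line; missing fields are None.
    def parse_face_corner(token):
        parts = token.split("/")
        v = int(parts[0])
        vt = int(parts[1]) if len(parts) > 1 and parts[1] else None
        vn = int(parts[2]) if len(parts) > 2 and parts[2] else None
        return v, vt, vn

    print(parse_face_corner("2//1"))   # (2, None, 1): no texture coordinate
    print(parse_face_corner("2/5/1"))  # (2, 5, 1): UV-mapped export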
Would it be possible to edit the character models of a PS1 game, to be played on an emulator? There are some abandoned PS1 oldies out there that I would love to see revamped with improved assets. I have some skills with some modelling programs, so I was wondering if it would be possible to rip some models, with or without their textures, so that I could try to update them and experience them on an emulator. Is this possible? Even if the edited copies cannot be reinserted into the game, would it still be possible to rip the models out so that I could edit them?
Projecting a moon texture onto the background. I want to display a 2D moon image on the background. I have a normalized direction for the center of my moon texture, and I also have the normalized view direction from the camera for each pixel. I'm working in HLSL, and I need to find the UV for the texture. How can I do this without the texture being distorted? I tried a variety of spherical UV-mapping equations, but none of them worked. This post works, but only if d is (0,0,1) in a right-handed, Y-up coordinate system: "How can I calculate the U,V texture coordinates on a disk at infinity given only a view vector and a vector pointing to the disk's center?" It seems like you have to rotate the up/right vectors with the moon's location, but how?
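A sketch of exactly that rotation (NumPy; this uses a small-angle approximation, which is fine for a disk a few degrees across): build an orthonormal right/up pair around the moon direction itself, then measure each view ray's offset along those two axes:

    import numpy as np

    def moon_uv(view_dir, moon_dir, angular_radius, world_up=(0.0, 1.0, 0.0)):
        d = np.asarray(moon_dir, float)
        right = np.cross(world_up, d)
        right /= np.linalg.norm(right)   # degenerate if the moon is at the zenith
        up = np.cross(d, right)
        v = np.asarray(view_dir, float)
        if np.dot(v, d) <= 0.0:
            return None                  # looking away from the moon
        u = 0.5 + np.dot(v, right) / (2.0 * angular_radius)
        w = 0.5 + np.dot(v, up) / (2.0 * angular_radius)
        return (u, w) if 0.0 <= u <= 1.0 and 0.0 <= w <= 1.0 else None

    # looking straight at the moon samples the texture centre:
    print(moon_uv((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 0.1))  # (0.5, 0.5)

The same three dot products translate directly to HLSL; the zenith-singularity case needs a fallback up vector.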
HLSL voxel texturing. I'm currently trying to develop a voxel engine using Direct3D 9 and C++. To keep memory usage low, I'm only passing the position, the orientation, and the offset of the current voxel's texture in the texture atlas for each vertex to the vertex shader. The vertex shader then calculates the normal and passes it to the pixel shader. I found an article which covers how to texture voxels with just their position and normal in GLSL. This is the part that calculates the texture coordinates in my pixel shader (SM3):

    float2 tileUV = float2(dot(input.normal.zxy, input.pos3D),
                           dot(input.normal.yzx, input.pos3D));
    float2 texcoord = input.texOffset + tileSize * frac(tileUV);

This code works fine for faces that point in the negative z direction (normal (0,0,-1)); however, the back is flipped by 180 degrees, and the sides and top/bottom squares are flipped by 90/270 degrees. I am not sure if this is correctly translated from GLSL, because this behaviour should be the expected one in HLSL if I calculate it by hand. Is there anything that I have overlooked, or should I aim for a different approach? Edit: I have now managed to successfully texture the faces by replacing the previous calculation with the following:

    if (input.normal.y != 0.0f)
    {
        // handle top/bottom surfaces as front/back faces
        input.pos3D.y = input.pos3D.z;
        input.normal.z = input.normal.y;
        input.normal.y = 0.0f;
    }
    texcoord.x = input.texOffset.x + tileSize * frac(float3(1.0f, 1.0f, 1.0f) - cross(frac(input.pos3D), input.normal)).y;
    texcoord.y = input.texOffset.y + tileSize * (1.0f - frac(input.pos3D.y));

Is there any way I can simplify or optimize the equation? I should also mention that the voxels are all axis-aligned and clamped to integer coordinates. Edit 2: This is the modified formula from zogi's answer, which works as expected:

    float3 n = abs(normal.xyz);
    float2 texcoord = float2(input.texOffset.x + tileSize * dot(n, frac(input.pos3D.zxx)),
                             input.texOffset.y + tileSize - tileSize * dot(n, frac(input.pos3D.yzy)));
Skybox development help. I'm a freelance game designer, and I can't afford to hire someone to do skyboxes for me, but I am unable to do them myself. Let's say I have a texture that can be tiled on four sides. If I put that very texture on each side of the skybox, it just comes out looking like a cube. I'd like to know how exactly I need to "warp" a texture so that, when applied to a cube on all four sides, it looks spherical. Thanks! G
How to load a PNG file without alpha premultiplication on iOS? I am making a game in which I use the alpha value, in some cases for transparency, in others for occlusion, and for HDR in others. I'm loading the images like this (this is MonoTouch, but Objective-C can easily be converted to C#):

    CGImage bitmap = new UIImage(resourcePath).CGImage;
    width = bitmap.Width;
    height = bitmap.Height;
    IntPtr pixels = Marshal.AllocHGlobal(bitmap.Width * bitmap.Height * 4);
    using (var context = new CGBitmapContext(pixels, bitmap.Width, bitmap.Height, 8, bitmap.Width * 4,
        CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedLast))
    {
        var dest = new RectangleF(0, 0, bitmap.Width, bitmap.Height);
        context.ClearRect(dest);
        context.DrawImage(dest, bitmap);
    }
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, width, height, 0,
        PixelFormat.Rgba, PixelType.UnsignedByte, pixels);
    Marshal.FreeHGlobal(pixels);
    bitmap.Dispose();

However, this butchers all my pixels by premultiplying them by the alpha value. The reason is that I'm creating the CGBitmapContext with CGImageAlphaInfo.PremultipliedLast. I tried CGImageAlphaInfo.Last, but creating the context fails; Core Graphics only works with premultiplied alpha, as explained here. I tried using CGDataProviderCopyData, as well as pngcs, but they both simply decode the bitmap and don't convert the pixels to RGBA, which matters when you're storing 8-bit or 24-bit PNGs, or indexed PNGs (I have lots of these, the result of optimizing my PNG files for size). So I'm lost. I need a way to decode PNG files such that they are converted to RGBA without premultiplying my colors by the alpha value. Is that too much to ask? GDI, GDI+, System.Drawing, DirectX, Mono for Android, as well as PSM all do it just fine... Also, why does iOS development have to be so picky?
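If you stay with Core Graphics, one workaround, with the caveat that it loses precision at low alpha and that pixels with alpha 0 are unrecoverable, is to undo the premultiplication after drawing. A NumPy sketch of the per-pixel math (the same divide works in any language over the raw buffer):

    import numpy as np

    # Un-premultiply: rgb' = rgb / alpha. Lossy where the original
    # multiply already rounded to 8 bits; alpha == 0 pixels stay as-is.
    def unpremultiply(rgba):
        out = rgba.astype(np.float32)
        alpha = out[..., 3:4] / 255.0
        np.divide(out[..., :3], alpha, out=out[..., :3], where=alpha > 0)
        return np.clip(out + 0.5, 0, 255).astype(np.uint8)

    px = np.array([[[64, 32, 16, 128]]], dtype=np.uint8)  # premultiplied
    print(unpremultiply(px))  # -> [[[128  64  32 128]]]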
How do I get an instance of KX_PolygonMaterial in Blender? I've got a question concerning using Python in Blender's game engine. Before I start, I want to state that I'm trying to change the color of an object in Blender's game engine. To do this, I'm attempting to find a way to update the texture of the object (I basically want two or three states: red, (yellow), green). What I'm doing right now is:

    scene = GameLogic.getCurrentScene()
    pingMeter = scene.objects['Ping Meter']
    mesh = pingMeter.meshes
    materials = mesh[0].materials
    material = materials[0]

However, when I do print(material.__class__.__name__), it outputs KX_BlenderMaterial. Shouldn't I be getting KX_PolygonMaterial if I'm running the Blender game engine? Is there any way to change the color or texture with KX_BlenderMaterial? I can't find anything in the documentation. Can I get an instance of KX_PolygonMaterial out of the code above, or should I just take a different approach altogether? Thanks! EDIT: I'm using Blender 2.65, which uses Python 3, in case anyone is wondering.
FPS drop after moving from Textures to a TextureAtlas in libGDX. I started using libGDX some time ago, and I was making a test project to get used to the library. I created some images, added them to the assets folder, and loaded each image as a Texture using the AssetManager. Everything was working fine and I had 60 FPS. I wanted to work more efficiently, so I packed all my images into an atlas using the TexturePacker tool. I loaded the atlas using the AssetManager again and started using TextureRegions instead of Textures. After this change, I started to notice sudden drops in FPS, from 60 to 50, and even to 30 once. I've tried changing the pixel format to RGBA4444, and I've made sure that the min and mag filters were both set to Nearest, but I still see those annoying frame drops. I'm not doing anything heavy in the game itself; it's currently some actors on a stage. I have some MoveActions and an Animation, but nothing special yet. Does anyone have a clue what could cause the FPS drop? Thanks.
SDL_CreateTextureFromSurface not working. Code:

    SDL_Surface* tmpSurface = IMG_Load("Assets/Player.png");
    playerTex = SDL_CreateTextureFromSurface(renderer, tmpSurface);
    SDL_FreeSurface(tmpSurface);

Rendering the texture:

    SDL_RenderCopy(renderer, playerTex, NULL, NULL);

I did check whether any of them returned null, and playerTex is still null even though it is being assigned a texture. I'm not sure why; can anyone help?
Why are examples of game textures often shown on a sphere model? I recently started working with Unity and was looking for textures for the background (castle walls and similar pieces) of a game that I'm thinking about eventually building. I noticed that textures are often presented on a ball shape, as in the sample image here. Why are the texture patterns not shown as simply flat squares? I guess there is a reason behind this that has to do with mapping to 3D surfaces, but I wasn't sure, so I wanted to check here: why are textures often shown on spherical shapes in examples and previews?
How can I achieve UnrealEd's per-face texturing in Blender 2.6? Using UnrealEd, I can create geometry and assign a material to each face of that geometry, and each face can have its own UV settings. How can I achieve the same functionality using Blender? I've seen the "Texture Face" option mentioned, but that seems to be gone in Blender 2.59+.
libGDX texture edge blending problem. I have two completely white bitmaps here (they're there, trust me). When I put one on top of the other and scale them down with TextureFilter.Linear, I get dark pixels along the edges. How do I get rid of the dark pixels?
Other procedural material generators for Unity? In the new version of Unity, 3.4, Unity announced that they would now support procedural materials (which is awesome, by the way). While I was researching it, I found this in the manual: "Allegorithmic's Substance Designer can be used to create Procedural Materials, but there are other applications (3D modeling apps, for example) that incorporate the Substance technology and work just as well with Unity." I don't have Allegorithmic's Substance Designer and don't plan on buying it soon. What other applications or 3D modeling apps can make procedural materials that work in Unity? Edit: I found that Allegorithmic has a program called Player, but it is Windows-only, and I'm on Mac.
What is the name for a single 2D polygon on a UV-mapped texture? When UV-mapping a polygon's texture, is there a name that describes a single polygon on the texture? Does this name differentiate it from the same geometry once it has been mapped to a face? Would something like "UV tile" or "UV face" be clear?
Difference between texture arrays and multiple single textures? I've just learnt that DirectX 10 and above have a feature called "texture arrays", which, basically, is just a normal array of textures (shader resources), declared in a shader like this:

    Texture2D myTextures[2];

What I've been using so far is multiple separate textures:

    Texture2D myFirstTexture;
    Texture2D mySecondTexture;

Is there any practical (performance, memory, etc.) difference between the two?
How do I generate mipmap .png and model .obj files for libGDX? I'm playing a bit with libGDX (an OpenGL ES 2.0 wrapper for Android). I found some sample code which used prepared files to load models and mipmapped textures for them; e.g., https://github.com/libgdx/libgdx/blob/master/demos/invaders/gdx-invaders/src/com/badlogic/gdxinvaders/RendererGL20.java reads an .obj file for the model and an RGB565-format .png file to apply a mipmapped texture to it. What is the best or easiest way for me to create these files? I understand .obj files are generated by a bunch of tools (I installed Blender, Wings3D and Kerkythea so far), but which ones will be the most user-friendly for someone unfamiliar with 3D modelling? More importantly, how do I produce a .png file with the mipmapped texture? The .png I looked at (https://github.com/libgdx/libgdx/blob/master/demos/invaders/gdx-invaders/data/ship.png) seems to include different textures for each of the faces, probably created with some tool. I browsed through the menus of the 3 tools I have installed but didn't find an option to export such a .png file. What am I missing?
Crash when creating RenderTargetViews for a cubemap. I want to create a cubemap, render to it, and later sample from it for reflections. However, it crashes when I try to create the render target views for the cubemap faces. Here is my code:

    RenderTargetCube::RenderTargetCube(ID3D11Device* device, int resolution, DXGI_FORMAT format)
    {
        D3D11_TEXTURE2D_DESC desc;
        desc.Width = resolution;
        desc.Height = resolution;
        desc.MipLevels = 1;
        desc.ArraySize = 6;
        desc.Format = format;
        desc.SampleDesc.Count = 1;
        desc.SampleDesc.Quality = 0;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
        desc.CPUAccessFlags = 0;
        desc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
        HRESULT res = device->CreateTexture2D(&desc, NULL, &Texture);
        throwIfFailed(res);

        D3D11_RENDER_TARGET_VIEW_DESC viewDesc;
        viewDesc.Format = desc.Format;
        viewDesc.Texture2DArray.ArraySize = 6;
        viewDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
        for (int i = 0; i < 6; i++)
        {
            viewDesc.Texture2DArray.FirstArraySlice = D3D11CalcSubresource(0, i, 1);
            res = device->CreateRenderTargetView(Texture.Get(), &viewDesc, RenderTargetView[i].GetAddressOf()); // crash here
            throwIfFailed(res);
        }
    }

And here is the message I receive:

    D3D11 ERROR: ID3D11Device::CreateRenderTargetView: The Dimensions of the View are invalid due to at least one of the following conditions. MipSlice (value = 0) must be between 0 and MipLevels-1 of the Texture Resource, 0, inclusively. FirstArraySlice (value = 1) must be between 0 and ArraySize-1 of the Texture Resource, 5, inclusively. With the current FirstArraySlice, ArraySize (value = 6) must be between 1 and 5, inclusively, or -1 to default to all slices from FirstArraySlice, in order that the View fit on the Texture. [STATE_CREATION ERROR #137: CREATERENDERTARGETVIEW_INVALIDDIMENSIONS]

I don't quite understand the error message, especially the last part about ArraySize: why does it want it to be between 1 and 5 and not 0 to 5?
Applying a texture to the inside of an icosphere? I have implemented code to generate an icosphere procedurally in Unity 5.3.4f, which works pretty well. However, what I need to do now is tweak the code to make the sphere receive the texture only on its inside triangle faces, not outside. Searching through the Unity forums, many have said that it's simply a case of flipping the normals of all triangles (i.e. multiplying them by -1). I tried it, and that is not the case. What is the correct way of changing the triangles (not through shaders) so the icosphere mesh is only textured inside (e.g. as a skydome)? EDIT: After Kromster's answer, I did some research and ended up implementing the following code:

    Mesh InvertMeshFaces(Mesh mesh)
    {
        // Flip the triangles
        int[] triangles = mesh.triangles;
        int numtriangles = triangles.Length / 3;
        for (int t = 0; t < numtriangles; t++)
        {
            int temptriangle = triangles[t * 3];
            triangles[t * 3] = triangles[(t * 3) + 2];
            triangles[(t * 3) + 2] = temptriangle;
        }

        // Resetting the UVs
        Vector2[] uvs = mesh.uv;
        for (int uvnum = 0; uvnum < uvs.Length; uvnum++)
        {
            uvs[uvnum] = new Vector2(1 - uvs[uvnum].x, uvs[uvnum].y);
        }

        // Resetting the normals
        Vector3[] normals = mesh.normals;
        for (int normalsnum = 0; normalsnum < normals.Length; normalsnum++)
        {
            normals[normalsnum] = -normals[normalsnum];
        }

        // Setting the new values of triangles, normals and UVs
        mesh.triangles = triangles;
        mesh.uv = uvs;
        mesh.normals = normals;
        return mesh;
    }

So far, it works only partially. Applying that function to my icosphere mesh, it is now possible to see the sphere rendered when the camera is inside it. However, oddly enough, it's still also possible to see the sphere from outside. That means both the inside and the outside are visible; the outside is not culled as expected.
34
How can I scale a subset of my game view during rendering? I am developing a 2D game which has a lateral panel and a main panel with the scenario. I want to scale the main scenario to apply a zoom (only the main panel portion of the screen). I think that if I use SDL_SetRenderTarget I will be able to render to textures and then zoom them at runtime. I have to use only the SDL2 library or some extension; I can't use OpenGL. How can I accomplish this?
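A hedged sketch of that approach (identifiers like camX, zoom and panelW are illustrative; the renderer must have been created with the SDL_RENDERER_TARGETTEXTURE flag): draw the scenario unscaled into a target texture, then copy a sub-region of it into the main panel. The size mismatch between the source and destination rectangles is the zoom:

    // Render the scenario at native resolution into a target texture...
    SDL_Texture* scene = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                           SDL_TEXTUREACCESS_TARGET, sceneW, sceneH);
    SDL_SetRenderTarget(renderer, scene);
    SDL_RenderClear(renderer);
    // ... draw the scenario here, unscaled ...
    SDL_SetRenderTarget(renderer, NULL);

    // ...then stretch the visible region into the main panel.
    SDL_Rect src = { camX, camY, (int)(panelW / zoom), (int)(panelH / zoom) };
    SDL_Rect dst = { panelX, panelY, panelW, panelH };
    SDL_RenderCopy(renderer, scene, &src, &dst);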
34
How to deal with texture coordinates without range? I have a set of given texture coordinates (u,v coordinates), but they range over (-inf, inf), contradicting the [0,1] convention. I tried rescaling them by (value - min) / (max - min). But suppose I have a rectangle composed of two triangles and four vertices, with these u,v coordinates: 0.260944 0.490887, 3.619507 0.490887, 3.619507 3.043434, 0.260944 3.043434. After scaling, the coordinates are mapped to exactly 0 and 1, resulting in a wrong texture. So how should I deal with this kind of texture coordinates?
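A hedged note with a sketch: coordinates outside [0,1] usually mean the texture is intended to tile (the quad above is meant to repeat the texture roughly 3.36 times in u and 2.55 times in v), so rescaling destroys the authored repetition. The usual fix is to leave the coordinates alone and enable repeat wrapping on the sampler:

    #include <cmath>

    // Equivalent CPU-side wrap, if coordinates must land in [0,1):
    float wrap(float u) { return u - std::floor(u); }  // 3.619507 -> 0.619507

    void setRepeatWrap(unsigned tex)
    {
        // Under repeat wrapping only the fractional part of a coordinate matters.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    }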
34
Draw Ordering with Glowing Objects This is mostly a general design-theory question, but for reference, the bulk of my game is in JavaScript. I can draw images to the screen so that they layer on top of each other in the order I please. One of the layers, which I call the light layer, is simply a series of black squares that change opacity based on the light levels in the game. During the night these black squares are almost opaque (unless a light source is affecting them), and during the day they are nearly invisible. Glowing things, like bulbs or radar console screens, are drawn above the light layer to make it look as if they're exempt from the light effect. However, the turret head is a non-glowing object that needs to appear above the glowing objects. For example, it might partially cover the radar console screen or light bulbs, yet it isn't meant to glow. This is where the paradox occurs: the turret head needs to be below the light layer, but also above the glowing elements. Any ideas of how I could solve such a paradox?
34
GLSL Noise via texture I am trying to access a texture in a fragment shader to overlay this texture over a certain area.

    varying vec4 v_color;
    varying vec2 v_texCoord0;
    uniform sampler2D u_sampler2D;
    uniform vec4 u_oldcolor;
    uniform vec4 u_newcolor;
    uniform vec3 u_noise;

    void main()
    {
        vec4 color = texture2D(u_sampler2D, v_texCoord0);
        float threshold = 0.005;
        if (color.r < (u_oldcolor.r + threshold) && color.g < (u_oldcolor.g + threshold) && color.b < (u_oldcolor.b + threshold) &&
            color.r > (u_oldcolor.r - threshold) && color.g > (u_oldcolor.g - threshold) && color.b > (u_oldcolor.b - threshold))
        {
            color.rgb = u_newcolor.rgb + vec3(v_texCoord0, 0.1);
        }
        gl_FragColor = color;
    }

For every pixel, the shader checks for a certain color and replaces it with a new color plus v_texCoord0. Now I want to bring in a third component, a noise texture, to make it look like this: I searched the web for a solution but could not find anything helpful. My questions: 1. Is this even possible to accomplish via a shader? 2. How do I access the noise texture? I hope that my questions are clear and proper for this forum :)
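A hedged sketch of the application side, assuming a plain C++/OpenGL host (identifiers like prog, mainTex and noiseTex are illustrative): yes, it is possible. A second texture is just another sampler uniform bound to a different texture unit, so u_noise would become uniform sampler2D u_noise; in the shader and be sampled with texture2D(u_noise, v_texCoord0):

    // Bind the main texture to unit 0 and the noise texture to unit 1.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mainTex);
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, noiseTex);

    // Tell the shader which unit each sampler reads from (can be set once after linking).
    glUseProgram(prog);
    glUniform1i(glGetUniformLocation(prog, "u_sampler2D"), 0);
    glUniform1i(glGetUniformLocation(prog, "u_noise"), 1);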
34
Texture being stretched when using D3DTADDRESS_CLAMP I'm trying to create a skybox using a cube and one of these textures: http://forum.unity3d.com/threads/mundus-skybox-pack-01-released.202748 I've got it working well, but I noticed there are some bugs at the edges of the cube, as can be seen here: I found that this could be fixed using clamping, so I set D3DSAMP_ADDRESSU and D3DSAMP_ADDRESSV to D3DTADDRESS_CLAMP. This, however, produced the following result: half of the cube wasn't being shown, and the other half was only showing a one-pixel-wide texture region. I saw this similar question (which uses wrapping, though), but I'm already using 4 vertices per face. So, what could be causing this?
34
What would be the benefit of placing all of a room's textures on an object outside of the camera's view? Pokémon XD: Gale of Darkness does an odd thing with textures. As described in this video, all textures for a room are placed on an object outside of the camera's view: "This kind of caught me off guard. I wasn't really expecting anything to be on the other side of this room, but when I turned the camera around, it was all the textures that were inside this room plastered into one spot. Now I can't explain this whatsoever; I've never seen this in any other game that I've covered. But what's really odd is that this isn't an isolated case, this happens periodically throughout Pokémon XD. Now it's not in every room and city, but from time to time, you will see its textures on some type of object in an area that's off camera." What would be the benefit of doing this? The scene is shown to the player all at once after loading, so I'm not seeing how it would help with loading times or anything.
34
Game Asset Size Over Time The size (in bytes) of games has been growing over time. There are probably many factors contributing to this: trailer and cut-scene videos being bundled with the game, more and higher-quality audio, multiple levels of detail being used, etc. What I'd really like to know is how the size of the 3D models and textures that games ship with has changed over time. For example, if one were to look at the size of meshes and textures for Quake I (1996), Quake II (1997), Quake III Arena (1999), Quake 4 (2005), and Enemy Territory: Quake Wars (2007), I'd imagine a steady increase in file size. Does anyone know of a data source for numbers like this?
34
How does Texture Mapping work? I read the answers from "What exactly is UV and UVW Mapping?" and "How does UVW texture mapping work?", which are quite nice, but I am still not 100% sure I understand correctly. Let's start with a 2D example. Say I have a triangle (obviously described by 3 vertices). Now my question is, how do I convert an (x,y) coordinate to the (u,v) coordinate of my texture? x,y could be any value in [0,n], with n being any real number, since it is in object space, but my texture coordinates are in [0,1]. How do I know how to map, let's say, (3,4) to (u,v)? If I know how to map the object coordinates to the texture coordinates, interpolating the values is easy, I assume (either using bilinear interpolation or barycentric interpolation). And then, how would this work for 3D? Let's say in this case we have a pyramid with 5 vertices (4 bottom, 1 tip). I guess the procedure would be similar, with the exception that I now have an additional depth value. But how does the mapping of a 2D texture work on a 3D object when I don't have nice flat surfaces like on a pyramid, but instead a curved surface, like a teapot? I hope my questions are clear; I'm still a little confused myself. I just don't quite get the mathematical background of texture mapping. It would be enough if you could point me to some website with a good explanation, maybe with clear graphics and a step-by-step description. Thanks for your time!
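A hedged sketch of the key idea: there is no global (x,y) to (u,v) formula. UVs are authored per vertex (by the artist or an unwrapping tool), and any point inside a triangle inherits the same barycentric blend of the three vertex UVs that describes its position, so (3,4) maps to whatever the surrounding vertices dictate. Curved surfaces like a teapot are just many small triangles, each with its own vertex UVs:

    struct Vertex { float x, y;    // object-space position
                    float u, v; }; // authored texture coordinate

    // Barycentric weights of point (px, py) in triangle (a, b, c).
    void barycentric(const Vertex& a, const Vertex& b, const Vertex& c,
                     float px, float py, float& wa, float& wb, float& wc)
    {
        float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
        wa = ((b.y - c.y) * (px - c.x) + (c.x - b.x) * (py - c.y)) / d;
        wb = ((c.y - a.y) * (px - c.x) + (a.x - c.x) * (py - c.y)) / d;
        wc = 1.0f - wa - wb;
    }

    // The UV of an interior point is the same blend of the vertex UVs.
    void interpolatedUV(const Vertex& a, const Vertex& b, const Vertex& c,
                        float px, float py, float& u, float& v)
    {
        float wa, wb, wc;
        barycentric(a, b, c, px, py, wa, wb, wc);
        u = wa * a.u + wb * b.u + wc * c.u;
        v = wa * a.v + wb * b.v + wc * c.v;
    }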
34
Why are some games using a dithering pattern instead of traditional alpha for transparency? Recently, I have seen some 3D games (e.g. GTA IV) use some kind of ordered dithering to simulate transparency alpha. The polygons are not transparent as usual, but instead render a dither pattern that gradually shifts from left to right to simulate transparency: The effect is not unpleasant, but rather surprising to see at first (vs. traditional alpha blending): Aside from the visual aesthetic it produces, is there any reason some games do this (better performance, saving bandwidth, or anything else I am not thinking of)? I have searched the web but could not find anything about this technique.
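A hedged sketch of the usual mechanism, often called screen-door or stipple transparency: each fragment compares its alpha against an ordered-dither threshold and is either fully kept or fully discarded. Because no blending happens, it keeps depth writes working, needs no back-to-front sorting, and plays well with deferred shading, which is the usual motivation; it is also popular for LOD cross-fades:

    // 4x4 Bayer matrix of thresholds in [0,1).
    static const float bayer4[4][4] = {
        {  0/16.f,  8/16.f,  2/16.f, 10/16.f },
        { 12/16.f,  4/16.f, 14/16.f,  6/16.f },
        {  3/16.f, 11/16.f,  1/16.f,  9/16.f },
        { 15/16.f,  7/16.f, 13/16.f,  5/16.f },
    };

    // Keep a fragment at screen position (x, y) with coverage `alpha`?
    bool keepFragment(int x, int y, float alpha)
    {
        return alpha > bayer4[y & 3][x & 3];  // discard otherwise
    }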
34
Merging several textures into one using RGB channels Would it be possible to place a texture into each RGB channel? Example: Red = wood.png, Blue = tiles.png, Green = metal.png. The advantages I could see are saving space, memory and draw calls, and the resolution might not suffer (unless I'm missing something). I've seen people pack many textures into one, but those have to be scaled to fit, which lowers resolution.
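A hedged sketch, with one important caveat: each channel can hold only a single 8-bit greyscale value per pixel, so this packing suits greyscale maps (masks, blend weights, roughness, AO) rather than full-colour textures. Assuming wood, metal and tiles below are greyscale versions of the question's images, the offline packing is a straight channel copy:

    #include <vector>
    #include <cstdint>

    // Pack three greyscale maps of equal size into one RGB image.
    std::vector<uint8_t> packRGB(const std::vector<uint8_t>& wood,   // -> R
                                 const std::vector<uint8_t>& metal,  // -> G
                                 const std::vector<uint8_t>& tiles,  // -> B
                                 int w, int h)
    {
        std::vector<uint8_t> rgb(static_cast<size_t>(w) * h * 3);
        for (int i = 0; i < w * h; ++i) {
            rgb[i * 3 + 0] = wood[i];
            rgb[i * 3 + 1] = metal[i];
            rgb[i * 3 + 2] = tiles[i];
        }
        return rgb;
    }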
34
Basic terrain shader without using external texture I have this: (Right now I have the height map in an x-by-x sized 2D array and a 1D vector too.) What I am trying to achieve is something like this: Without using any textures, only plain colors. So basically smooth transitions and some shadow (using shaders). My vertex shader looks like this:

    #version 330

    layout (location = 0) in vec3 Position;
    layout (location = 1) in vec3 Normal;
    layout (location = 2) in vec3 Color;

    out vec3 fragmentNormal;
    out vec4 ex_color, pos;
    out vec3 N;
    out vec3 v;

    void main()
    {
        pos = vec4(Position, 1);
        ex_color = vec4(Color, 1);
        fragmentNormal = Normal;
        v = vec3(gl_ModelViewMatrix * pos);
        N = normalize(gl_NormalMatrix * Normal);
        gl_Position = gl_ModelViewProjectionMatrix * vec4(Position, 1);
    }

I have normals for all the vertices. Color is set simply in the C++ code based on height. Here is the fragment shader:

    in vec3 N;
    in vec3 v;
    in vec4 ex_color;

    void main(void)
    {
        vec3 L = normalize(gl_LightSource[0].position.xyz - v);
        vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(N, L), 0.0);
        Idiff = clamp(Idiff, 0.0, 1.0);
        gl_FragColor = Idiff * ex_color;
    }

So I guess my problem is: what formula should I use to mix the colors? I think I don't need to set the colors in the C++ code but in the shaders. Update: here is the wireframe of the terrain. Update 2: based on Babis' answer, the result is: So the gradient is not "projected" onto the surface as I would like. What could cause this? Maybe my question wasn't clear.
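A hedged sketch of the usual mixing formula (the colour stops below are illustrative): a plain linear interpolation between a few colour stops, driven by the height normalised to [0,1]. Whether evaluated on the CPU per vertex as here, or in the fragment shader with GLSL's mix(), the blend is the same; the rasterizer then interpolates the result smoothly across each triangle:

    struct Color { float r, g, b; };

    Color mixColor(const Color& a, const Color& b, float t)
    {
        return { a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t };
    }

    // Smooth colour ramp for a height h normalised to [0,1].
    Color heightColor(float h)
    {
        const Color grass = { 0.25f, 0.55f, 0.20f };
        const Color rock  = { 0.45f, 0.40f, 0.35f };
        const Color snow  = { 0.95f, 0.95f, 0.95f };
        if (h < 0.5f) return mixColor(grass, rock, h / 0.5f);
        return mixColor(rock, snow, (h - 0.5f) / 0.5f);
    }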
34
How can I view an R32G32B32 texture? I have a texture of R32G32B32 floats. I create this texture in-program in D3D11, using DXGI_FORMAT_R32G32B32_FLOAT. Now I need to see the texture data for debugging purposes, but it will not save to anything but DDS, showing this error in the debug output: "Can't find matching WIC format, please save this file to a DDS". So I write it to DDS, but now I can't open it! The DirectX Texture Tool says "An error occurred trying to open that file". I know the texture is working, because I can read it on the GPU and the colors seem correct. How can I view an R32G32B32 texture in an image viewer?
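A hedged workaround sketch: the three-channel 96-bit format is poorly supported by viewers, so one option is to copy the texture to a CPU-readable staging texture (D3D11_USAGE_STAGING, mind RowPitch when reading the mapped data), expand each texel to four channels, and save the result as DXGI_FORMAT_R32G32B32A32_FLOAT, which tools handle much better. Names like rgb, w and h below are illustrative:

    #include <vector>

    // Expand tightly packed RGB float texels to RGBA (alpha = 1).
    std::vector<float> expandToRGBA(const float* rgb, int w, int h)
    {
        std::vector<float> rgba(static_cast<size_t>(w) * h * 4);
        for (int i = 0; i < w * h; ++i) {
            rgba[i * 4 + 0] = rgb[i * 3 + 0];
            rgba[i * 4 + 1] = rgb[i * 3 + 1];
            rgba[i * 4 + 2] = rgb[i * 3 + 2];
            rgba[i * 4 + 3] = 1.0f;
        }
        return rgba;
    }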
34
How do I make parts of a texture transparent in Max? I am applying an eye.psd eye texture to my character model's face, but the results aren't what I'm expecting. Here's what it looks like (observe the eye on the right; ignore the other one): The image is attached to a quad. I want to either change the color of the blank area to the nearby skin color, or make the image the shape of the eye instead of a square. As shown, my image is a square, and parts of it show as white even though I erased them in Photoshop; I expected this area to be skin coloured. How can I make this work right?
34
Texture Tiling and Texture Coordinates in Unreal Engine I've just started to learn how to develop games, and everything is new to me. I didn't know where to start, so I decided to start with textures, and there is something I find difficult to understand: texture tiling. I'm not a native English speaker, and I have looked up the meaning of "tile" and "tiling" in my own language (Spanish); I think "tiling" is a way of saying the texture is repeated across the model instead of stretched over it. I found the PolyCount wiki, and this is how they explain tiling: "Textures can be repeated across a model, by Tiling the TextureCoordinates." My problem is that I don't understand the meaning of "Tiling" here. Can I replace "Tiling" with "Repeating" without changing the meaning of the sentence? My other question is about texture coordinates. The PolyCount wiki says that texture coordinates are measured on a scale of 0.0 to 1.0, with 0.0 and 1.0 at opposite sides of the texture, and that when a model has a UV distance greater than 1 (for example, UV goes from -1 to 2) the texture will tile across the model. How can I know if a model has a UV distance greater than 1? By the way, I'm using Unreal Engine, and in its TextureCoordinate material node I don't understand the meaning of the amount of tiling in the UTiling and VTiling properties.
34
MonoGame renders texture in an almost compressed looking way I was working on a game in MonoGame, which I have been doing for quite some time now. As I was implementing UI, I noticed some sort of weird scanlines. Investigating further, to my surprise my whole scene was covered in them! Notice how the texture, when zoomed in, has an almost JPEG-like, block-compressed feel to it. I have no idea what is causing this. We were using a render target texture instead of the backbuffer, so I figured I'd try it without, but the same thing happens. I checked whether we were doing any weird matrix transformations; we were not, and in fact I disabled them just to test. After that I thought "half-pixel offset" (even though that should not really apply anymore post-DX9), but that did not solve a thing either. Last but not least, I thought it might be a mipmapping issue, but that would be weird since the exact same thing happens with the normal backbuffer. My question: does anyone here recognize this "effect" and have any clue what might be causing it? I'm basically just rendering a 1920x1080 image to a 1920x1080 backbuffer, but I can assure you it's not the image. When I, for example, draw a cursor and make it follow the mouse, at the points where pixels "go missing" even the cursor texture deforms. It might not be as clear in this picture, but notice how both the cursor and the orange curve basically "skip" a row of pixels; when I move the cursor to another location, it turns back to normal.
34
How can I draw a perspective correct quad? I'm trying to draw a quad in 2D (in SharpDX, which is basically XNA). But texture correction is not working, and I'm getting only an affine-textured quad. I'm using BasicEffect to render it:

    BasicTextureEffect = new BasicEffect(Device)
    {
        Alpha = 1.0f,
        TextureEnabled = true,
        LightingEnabled = false,
        VertexColorEnabled = true,
        Projection = Matrix.OrthoOffCenterLH(0.0f, ScreenWidth, ScreenHeight, 0.0f, 0.0f, 1.0f),
    };

It is a 2D isometric game. I have pseudo-3D coordinates in the isometric world (it's a shadow on the ground) converted to screen space, and then rendered using DrawQuad. Do I need to set up the view (or projection?) somehow to real 3D (emulating the isometric camera), and then draw this quad in 3D coordinates instead? How, exactly? Or is there a way to correct this in 2D? Update: I've uploaded the actual partial screenshot (Figure A), for you to see it's almost the same. (I cut it a bit, but it goes all the way to the corner.) Figure A / Figure B. Update 2: I can confirm now that plain SharpDX (XNA) BasicEffect does this with just plain UV mapping (4 corners of a square) and plain DrawQuad. I've temporarily changed it to include a center point, so I now draw 4 triangles instead of 2 (Figure B); this reduces the effect to a minimum, but it's still there. It still needs to be solved, because I won't be using it only on shadows.
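A hedged sketch of what is likely going on: hardware interpolates attributes perspective-correctly only when vertices carry a meaningful W. Pre-projected 2D vertices under an orthographic projection all end up with W = 1, so interpolation degenerates to affine. Either run the quad through a real perspective projection so each corner gets its own W, or interpolate u/w and 1/w manually and divide (the classic software fix, shown for one edge; w0 and w1 would come from the isometric depth of each corner):

    float lerpf(float a, float b, float t) { return a + (b - a) * t; }

    // Perspective-correct u between two vertices at parameter t.
    float perspectiveU(float u0, float w0, float u1, float w1, float t)
    {
        float oneOverW = lerpf(1.0f / w0, 1.0f / w1, t);  // interpolate 1/w
        float uOverW   = lerpf(u0 / w0, u1 / w1, t);      // interpolate u/w
        return uOverW / oneOverW;                         // recover u
    }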
34
What happened to procedurally generated textures? I recall that some time ago procedurally generated textures were becoming a big deal that a lot of people and companies were really interested in, with some serious benefits (smaller deployments, potentially faster loading, higher quality, scalable textures, potentially cheaper to produce, etc.). From what I can tell, the buzz is dead and no games on my radar are using them. What happened? I was hoping I'd see procedural textures go the way that NaturalMotion's stuff has (slow but steady adoption).
34
OSX Based Equivalents for Stacking Normal Maps? I've been looking for an OSX-based alternative to the Windows-based xNormal and NVIDIA texture tools for stacking normal maps in Photoshop. Can anyone recommend something, whether it's a standalone package or a Photoshop plugin?
34
How to extract the texture from an image onto a mesh? (Preamble: I think this fits either here or on SO with a computer-vision tag; I chose gamedev because I think you guys are probably really good with textures and meshes, but tell me if my decision was wrong.) I want to accomplish the following: given an image with an object, a 3D mesh very similar to that object, and rendering parameters that render that mesh to the exact location of this object, I want to "extract" the original texture from the image onto the mesh. (So in a later step I could re-render the mesh from another viewpoint; of course some of the texture would be invisible/black.) My mesh has UV coordinates for each triangle, so I guess I could either "store the texture on the mesh" somehow, or directly back-map it to a 2D texture map. So I guess what I want to do is a kind of texture back-mapping or remapping, kind of the inverse of what is done in game dev when texturing objects. I was having a lot of trouble finding any useful information on Google about what I want to do, so I thought I'd ask here. Maybe I haven't found the right word for it yet. I think there's probably quite a lot of "stuff" involved, because a pixel in the original image won't exactly correspond to a single location on the mesh.
34
Sprite sheet textures picking up edges of adjacent texture I have a custom sprite routine (OpenGL 2.0) which uses a simple sprite sheet (my textures are arranged horizontally next to each other). So, for example, here is a test sprite sheet with 2 simple textures: Now, what I do when creating my OpenGL sprite object is specify the total number of frames in its atlas and, when drawing, specify which frame I want to draw. It then works out where to grab the texture from by: dividing the required frame number by the total number of frames (to get the left coordinate), and then dividing 1 by the total number of frames and adding the result to the left-hand coordinate calculated above. This does seem to work, but sometimes I get problems. Say, for example, I want to draw the X below, and I get... I've heard about putting a 'padding' of 1 px between each texture, but could someone explain exactly how this works? I mean, if I do this, it will surely throw off the calculations for picking the texture. If I simply include the padding in the region picked up (so the sprite is drawn with a blank border), then surely this will cause problems with collision detection? (i.e. sprites may appear to collide when using bounding boxes when the transparent parts collide). Would appreciate it if someone could explain.
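A hedged sketch of the usual two-part fix: first, inset the sampled UVs by half a texel so bilinear filtering can never reach into the neighbouring frame. Second, if padding is added, fill it with duplicated edge pixels (a "gutter") rather than transparency; the padding then repeats the frame's own border colours, so nothing visibly changes and collision boxes are unaffected, while the sampled region still stays inside the frame:

    // UVs for frame `frame` of `numFrames`, inset by half a texel.
    void frameUVs(int frame, int numFrames, int sheetWidthPx,
                  float& u0, float& u1)
    {
        float frameWidth = 1.0f / numFrames;
        float halfTexel  = 0.5f / sheetWidthPx;
        u0 = frame * frameWidth + halfTexel;
        u1 = (frame + 1) * frameWidth - halfTexel;
    }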
34
SDL render to a texture I am trying to create a button for my game. There are lots of similar buttons, so using an image for each one is a bit inefficient. Instead, I want to load an image, copy some text over it, and render that to the screen. But it just doesn't render anything; it just gives me a transparent surface.

    SDL_Texture* button = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA888, SDL_TEXTUREACCES_TARGET, 200, 50);
    SDL_SetRenderTarget(renderer, button);
    SDL_Texture* border = IMG_LoadTexture(renderer, "border.png");
    SDL_Texture* text = SDL_CreateTextureFromSurface(renderer, TTF_RenderText_Blended(font, "new race", color));
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, border, NULL, NULL);
    SDL_RenderCopy(renderer, text, NULL, &text_centered);
    SDL_SetRenderTarget(renderer, NULL);

Then I do some extra stuff, and finally:

    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, background, NULL, NULL);
    SDL_RenderCopy(renderer, button, NULL, &position);
    SDL_RenderPresent(renderer);

But it just rendered the background, not the button. That indicates that the textures I tried to render to button were never drawn into it and it is still just a blank texture. I already set SDL_RENDERER_TARGETTEXTURE, but it just doesn't work. UPDATE: I found out that the problem was that the format and the access were passed in the wrong way. I corrected it, and now when I run it there is an access violation. This happens because I pass SDL_TEXTUREACCESS_TARGET as the access argument; if I pass SDL_TEXTUREACCESS_STATIC, there is no such violation. The problem is that I can't use SDL_SetRenderTarget if it is static; it needs to be target (which causes the access violation error).
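A hedged sketch of a working setup, with two observations: SDL_PIXELFORMAT_RGBA888 is not an SDL constant (the 32-bit RGBA one is SDL_PIXELFORMAT_RGBA8888), and an access violation on a target texture often means SDL_CreateTexture returned NULL unchecked, typically because the renderer was created without target-texture support. Checking the return value and SDL_GetError() should confirm:

    // The renderer itself must advertise target-texture support.
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE);

    SDL_Texture* button = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                            SDL_TEXTUREACCESS_TARGET, 200, 50);
    if (!button) {
        SDL_Log("CreateTexture failed: %s", SDL_GetError());
    }

    SDL_SetRenderTarget(renderer, button);
    SDL_RenderClear(renderer);
    // ... SDL_RenderCopy the border and the text here ...
    SDL_SetRenderTarget(renderer, NULL);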
34
Proper use of texture units I'm a beginner using OpenGL v3.3 with C# and SharpGL. I have a simple scene with a skybox and some OBJ models. Both the skybox and the models have multiple textures. Currently what I do is load all the textures used in the scene into different texture units at program launch; then, while rendering each element in the scene, I just change a uniform variable to reflect the correct texture unit and render the vertices. Doing this could get me into trouble if there are more textures in my scene than texture units on the GPU, so I'm not sure this is the right approach. I would like to know the standard practice for such a scenario. Do you just (re)load the texture for each element into, say, fixed texture unit 0 on every draw call, or what?
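A hedged sketch of the common pattern (shown in plain C++/OpenGL; uDiffuse and the unit roles are illustrative): texture units are binding points, not storage, so the usual practice is to assign roles to a small fixed set of units (0 = diffuse, 1 = normal map, and so on), point each sampler uniform at its unit once, and rebind whichever textures the current material needs before every draw. Uploading happens once at load time; only the cheap bind changes per draw:

    void drawWithTexture(GLuint program, GLuint diffuseTex)
    {
        glUseProgram(program);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, diffuseTex);    // rebind per draw
        glUniform1i(glGetUniformLocation(program, "uDiffuse"), 0); // could be set once at init
        // ... issue the draw call for this element ...
    }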
34
Animation Texture spread out when animating in 3ds Max I have a 3D model of a human biped. I made the skeleton and attached it to my 3D model with a Skin modifier. But when I move his arms, a part of the trunk comes with them. Is it possible to fix this?
34
Different textures inside and outside a cube in DirectX 11 I want different textures for the outside and inside of a box (cube). The image below shows what I want to achieve: it is a kind of open box, where we can see a different texture inside. I was trying to solve it by checking whether the normal of the face points towards the viewer or away from it, that is, whether we are seeing the "back" of the face. I was also looking into cube mapping, but it does not seem to be the way. What method should I use to achieve that result?
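A hedged sketch of one straightforward approach: draw the cube twice with opposite cull modes, binding a different texture each pass, so back-face culling does the inside/outside test for you. (A one-pass alternative is to branch on SV_IsFrontFace in the pixel shader.) Identifiers like outsideSRV, insideSRV and indexCount are illustrative:

    Microsoft::WRL::ComPtr<ID3D11RasterizerState> cullBack, cullFront;

    D3D11_RASTERIZER_DESC rd = {};
    rd.FillMode = D3D11_FILL_SOLID;
    rd.DepthClipEnable = TRUE;

    rd.CullMode = D3D11_CULL_BACK;   // pass 1 keeps only outward-facing triangles
    device->CreateRasterizerState(&rd, cullBack.GetAddressOf());
    rd.CullMode = D3D11_CULL_FRONT;  // pass 2 keeps only inward-facing triangles
    device->CreateRasterizerState(&rd, cullFront.GetAddressOf());

    // Pass 1: outside texture on the outward faces.
    context->RSSetState(cullBack.Get());
    context->PSSetShaderResources(0, 1, outsideSRV.GetAddressOf());
    context->DrawIndexed(indexCount, 0, 0);

    // Pass 2: inside texture on the inward faces.
    context->RSSetState(cullFront.Get());
    context->PSSetShaderResources(0, 1, insideSRV.GetAddressOf());
    context->DrawIndexed(indexCount, 0, 0);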
34
Spherical fractal noise generator in shader I have a growing sphere in space, and I thought of having a procedurally generated texture over it. Since it is growing, I thought a fractal would be a great choice, because more detail would become visible the larger the sphere gets (and I could tweak some parameter over time to have it animated). A quick Mandelbrot implementation in GLSL showed it would be too expensive for the devices I am targeting. Also, I don't know how to map a cool-looking fractal over the complex plane onto a sphere without distortions (I expect the players to fly around this sphere in every direction, so there should be no "glued" edges or collapsed points), nor do I have the background to devise a projection of a fractal onto the spherical surface myself (it was probably done before, but I could not find it). So, weighing the requisites of the procedural texture: fast to run, for low-end mobile GPUs; defined over a spherical surface domain; growing in detail with growing size; possible to animate; and (BONUS) cool looking, of course. I then thought it might be impossible within the constraints. But since I am no expert on fractals, I thought I would ask here first before scrapping the idea. Maybe it is really not a fractal I need, and there is some other kind of noise with growing detail I could use. Do you know of such a noise-generation procedure? Do you know of any noise generator with uniform distribution over spherical surfaces, or any fractals whose domain is a sphere? Can you suggest any alternatives for my situation?
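A hedged sketch of one common answer to exactly these constraints (constants are illustrative): evaluate 3D fractal Brownian motion (fBm) at points on the sphere's surface. Because the noise lives in a 3D domain, there are no seams, glued edges or poles; scaling the sample position by the radius makes detail grow with size; adding a time offset to the sample position animates it; and value noise like this is cheap enough to bake on the CPU or port to a small shader:

    #include <cmath>

    // Hash a 3D lattice point to [0,1).
    static float hash3(int x, int y, int z)
    {
        unsigned h = (unsigned)x * 374761393u + (unsigned)y * 668265263u
                   + (unsigned)z * 1440662683u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (float)(h ^ (h >> 16)) / 4294967296.0f;
    }

    static float lerpf(float a, float b, float t) { return a + (b - a) * t; }
    static float smooth(float t) { return t * t * (3.0f - 2.0f * t); }

    // Trilinearly interpolated value noise at a 3D point.
    static float valueNoise(float x, float y, float z)
    {
        int xi = (int)std::floor(x), yi = (int)std::floor(y), zi = (int)std::floor(z);
        float tx = smooth(x - xi), ty = smooth(y - yi), tz = smooth(z - zi);
        float x00 = lerpf(hash3(xi, yi, zi),         hash3(xi + 1, yi, zi),         tx);
        float x10 = lerpf(hash3(xi, yi + 1, zi),     hash3(xi + 1, yi + 1, zi),     tx);
        float x01 = lerpf(hash3(xi, yi, zi + 1),     hash3(xi + 1, yi, zi + 1),     tx);
        float x11 = lerpf(hash3(xi, yi + 1, zi + 1), hash3(xi + 1, yi + 1, zi + 1), tx);
        return lerpf(lerpf(x00, x10, ty), lerpf(x01, x11, ty), tz);
    }

    // fBm sampled at a unit direction (nx, ny, nz) on the sphere, scaled by
    // its radius: a growing radius raises the frequency, revealing more detail.
    float sphereNoise(float nx, float ny, float nz, float radius, int octaves)
    {
        float sum = 0.0f, amp = 0.5f, freq = radius;
        for (int i = 0; i < octaves; ++i) {
            sum += amp * valueNoise(nx * freq, ny * freq, nz * freq);
            amp *= 0.5f;
            freq *= 2.0f;
        }
        return sum;
    }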
34
How does Megatexture work? I've been thinking about developing a small engine, not only to develop small experimental games, but also to serve as a base to test various rendering techniques and things like that. Right now I've been thinking a lot about how to handle textures, and I stumbled on megatexture, but it's a bit puzzling. There is a lot of talk of it being better than the traditional approach of having a bunch of textures around and loading them as needed, but how does megatexture avoid this? I've read that it uses streaming, and that you can stream bits and pieces of it as opposed to loading each texture individually, but how does that offer better performance, and isn't that just another form of tiling? How do we sample such a texture in a shader? Do we stream part of it into memory and then work on it? I've seen the latest videos of Rage, and the textures do look great, but is that just the result of great artists, or does the tech come into play? To sum up: how does it work, why is it great, and how could I do something similar?
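A hedged, much-simplified sketch of the core idea (real implementations do this lookup per fragment through an indirection texture on the GPU, plus a feedback pass that tells the streamer which pages the camera needs): the megatexture is one enormous, uniquely painted address space that exists only on disk. A page table maps each small page to a slot in a modest resident cache in VRAM. Unlike tiling, no texel is reused anywhere, yet memory only ever holds the visible working set:

    // The huge texture lives on disk; only visible pages occupy VRAM.
    struct PageEntry { int cacheX, cacheY; bool resident; };

    const int PAGE = 128;                  // page size in texels
    static PageEntry pageTable[256][256];  // covers a 32768 x 32768 virtual texture

    // Translate a virtual texel address into the physical cache texture.
    bool lookupTexel(int vx, int vy, int& px, int& py)
    {
        const PageEntry& e = pageTable[vy / PAGE][vx / PAGE];
        if (!e.resident)
            return false;  // queue the page for streaming; sample a lower mip meanwhile
        px = e.cacheX * PAGE + vx % PAGE;
        py = e.cacheY * PAGE + vy % PAGE;
        return true;
    }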
34
How can I texture a large terrain mesh in OpenGL? I want to apply a texture to a large terrain mesh in my game. I think it is just not acceptable to have a gigantic UV map spanning 4096 x 4096 pixels. What's a better way to apply a texture to it?
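A hedged sketch of the common alternative, splat mapping (thresholds below are illustrative): tile a handful of small detail textures many times across the terrain (UVs far beyond 1, with GL_REPEAT wrapping) and blend them in the shader by low-resolution weights derived from height and slope, stored per vertex or in a small RGBA control map. No gigantic unique texture is needed:

    #include <algorithm>

    // Blend weights for tiling detail textures, from normalised height and slope.
    void splatWeights(float height01, float slope01, float w[3])
    {
        w[0] = std::clamp(1.0f - slope01 * 3.0f, 0.0f, 1.0f);   // grass on flats
        w[1] = std::clamp(slope01 * 3.0f - 0.5f, 0.0f, 1.0f);   // rock on slopes
        w[2] = std::clamp(height01 * 2.0f - 1.0f, 0.0f, 1.0f);  // snow up high
        float s = w[0] + w[1] + w[2] + 1e-5f;
        for (int i = 0; i < 3; ++i) w[i] /= s;                  // normalise
    }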
34
Sprite brighter than texture I have imported a new texture into UE and have created a new sprite. I have set up the material of the sprite as follows: I have also set my texture as the texture parameter. But my sprite renders brighter than its texture. Why? What am I doing wrong? Thank you in advance.
34
Unity5 imported models show no texture I've got a problem with Unity materials. I'm a beginner, so I used Wings3D to create 3D models. But there's a little problem. Both objects in the picture below have the same material (a Standard material with 0 smoothness, 0 metallic and only an albedo picture), but obviously the right one has no texture (and that's the problem). At first I ignored it and used a custom shader ("Custom/WorldCoord Diffuse") which I found in a package to fix it. Unfortunately, this shader doesn't support normal or height maps and strangely slows down my game extremely (my scene with only Standard shaders: 80 FPS; my scene with this strange shader: 7 FPS). I don't know how to write my own shaders, I don't know Blender, and I don't have much time to fix this.
34
How to adjust texture on a sphere? As seen, the texture is not adjusted to the sphere. I tried to change alignment, but it still displays multiple sides. How can I fix this?
34
Avoid double compression of resources I am using .pngs for my textures and a virtual file system in a .zip file for my game project. This means my textures are compressed and decompressed twice. What are the solutions to this double-compression problem? One solution I've heard about is to use .tgas for textures, but it seems like ages since I heard that. Another solution is to implement decompression on the GPU and, since that is fast, forget about the overhead.
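One hedged sketch of a simple fix, assuming zlib's contrib/minizip API: store already-compressed assets such as .png in the archive with method 0 ("stored"), so the zip layer doesn't deflate them a second time for essentially no size gain. Reading them back then costs only a copy (the PNG decode itself is unavoidable regardless, since the GPU needs raw pixels, unless GPU-native formats like BCn in DDS are used instead):

    #include "zip.h"  // minizip (zlib/contrib)

    // Add an entry without compression (method 0 = stored, level ignored).
    void addStored(zipFile zf, const char* name, const void* data, unsigned size)
    {
        zip_fileinfo zi = {};
        zipOpenNewFileInZip(zf, name, &zi, nullptr, 0, nullptr, 0, nullptr, 0, 0);
        zipWriteInFileInZip(zf, data, size);
        zipCloseFileInZip(zf);
    }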
34
How do I apply a texture over a material in OpenGL ES? How can I apply a texture over a material? I've set up an object with the following material:

    GLKBaseEffect *effect;
    effect.colorMaterialEnabled = false;
    effect.material.ambientColor = GLKVector4Make(251.0/255.0, 95.0/255.0, 96.0/255.0, 1.0);
    effect.material.diffuseColor = GLKVector4Make(251.0/255.0, 95.0/255.0, 96.0/255.0, 1.0);

and it is correctly rendered with a pink color. However, when I apply a texture with

    GLKTextureInfo *info = [GLKTextureLoader textureWithCGImage: ... ];
    effect.texture2d0.name = info.name;
    effect.texture2d0.enabled = true;

the material seems to disappear and I just see the texture: the object is transparent where the texture is not visible, instead of being pink. Here's an example: The texture is created with the following code:

    - (UIImage *)generateTexturefromText:(NSString *)text {
        NSError *error;
        UILabel *myLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
        myLabel.text = text;
        myLabel.font = [UIFont fontWithName:@"Helvetica" size:50];
        myLabel.textAlignment = NSTextAlignmentLeft;
        myLabel.textColor = [UIColor colorWithRed:0/255.0 green:0/255.0 blue:0/255.0 alpha:1];
        myLabel.backgroundColor = [UIColor clearColor];
        myLabel.numberOfLines = 1;
        myLabel.lineBreakMode = NSLineBreakByCharWrapping;

        UIGraphicsBeginImageContextWithOptions(myLabel.bounds.size, NO, 1.0);
        CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, 0);
        CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, 1.0);
        [myLabel.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *layerImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        if (error) {
            NSLog(@"Error loading texture from image: %@", error);
            return nil;
        }
        return layerImage;
    }

Why is no color visible where the texture is transparent?
34
How are Volumetric Effects made within games? (e.g. Smoke Fire) For computer-generated volumetric elements such as clouds, fire, and whitewater: how are they made? Are they just collections of particles rendered to pixels and driven by physics equations, or are they more like textures? How are they applied? I would assume not via UV mapping. (Image caption: debris, smoke and fire come from the vehicle.)
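A hedged sketch of the most common real-time approach (constants are illustrative; bigger-budget titles may instead ray-march true volumes): each puff of smoke or lick of fire is a camera-facing textured quad (a "billboard") spawned by an emitter, with an animated flipbook texture and alpha that fades with age. So it is both: simple physics drives many particles, and textures give each one its look:

    // One smoke/fire particle; an emitter spawns many and integrates them.
    struct Particle {
        float x, y, z;     // position
        float vx, vy, vz;  // velocity
        float age, life;   // seconds lived / total lifetime
        float size;       // quad half-extent, grows as smoke expands
    };

    void update(Particle& p, float dt)
    {
        p.vy += 1.5f * dt;            // buoyancy: hot gas rises
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
        p.size += 0.4f * dt;          // expand over time
        p.age += dt;                  // drives fade-out and the flipbook frame
    }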
34
Which gaming engine can be used for pixel manipulation? Here is the gist of where I am going with this. I work for a printer company, and basically all of our input files (or output files, for virtual printers) are bitmap files. We currently have a tool, built in Java over 10 years ago, that works but chokes on our high-end 1600x1600 files. So I was thinking: why not utilize a game engine's built-in ability to harness the power of a GPU? I figure all I would have to do is create a 1600x1600 plane, feed it the image file, and draw the scene. That should be the easy part. However, the hard part is reaching into the image: for example, clicking the mouse on a particular pixel, getting RGB data, converting to CMYK, making changes, converting back to RGB and then redrawing the scene. Not only that, but also pulling the modified texture out of memory and saving it back into a bitmap. Is this even possible? I haven't used a game engine since college, and that was OpenGL 2.0. I just figured this looked like a good place to ask. Thanks.