Minecraft: What is a reasonable face/vertex limit for custom models? When adding custom models to function as tile entity representations, what is a reasonable ceiling, based on performance, to maintain for individual models? Assume that the player will have the possibility to place as many of these as desired, but functionally it would also be reasonable to assume no more than 20 would have to be rendered at once. If the answer has changed between significant versions of the game, it would be nice if that could be reflected.
About floating point precision, and why do we still use it? Floating point has always been troublesome for precision on large worlds. This article explains what happens behind the scenes and offers the obvious alternative: fixed point numbers. Some facts are really impressive, like "Well, 64 bits of precision gets you to the furthest distance of Pluto from the Sun (7.4 billion km) with sub-micrometer precision." Sub-micrometer precision is more than any FPS needs (for positions and even velocities), and it would enable you to build really big worlds. My question is: why do we still use floating point if fixed point has such advantages? Most rendering APIs and physics libraries use floating point (and suffer its disadvantages, so developers need to work around them). Are they so much slower? Additionally, how do you think scalable planetary engines like Outerra or Infinity handle the large scale? Do they use fixed point for positions, or do they have some space-dividing algorithm?
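As a quick sanity check on the quoted claim (a back-of-the-envelope calculation, not from the article): splitting a signed 64-bit fixed-point coordinate across Pluto's 7.4 billion km gives a resolution of

$$\frac{7.4\times10^{12}\ \text{m}}{2^{63}} \approx \frac{7.4\times10^{12}}{9.2\times10^{18}}\ \text{m} \approx 8\times10^{-7}\ \text{m},$$

i.e. about 0.8 micrometers, which indeed matches the article's sub-micrometer figure.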
What is the purpose of multiple windows in games? A lot of game development APIs recently got support for multiple windows (such as SDL 2 and GLFW 3). But why did they add that feature? I've never seen a game in my life use multiple windows (with the exceptions of a launcher or a message box). Is there something I'm missing here? Is there an actual game development purpose to it? I'm very confused about why they did that.
Use 2 values of a mask. Let's say one has a black/gray/white mask and wants to draw picture A over the white-to-gray values and picture B over the black-to-gray values (the blacker, the more of picture B shows). The best I've come up with would be something like: set render target A, draw the mask, draw picture A over the mask (using a DepthStencilState); set render target B, draw the mask, draw picture B over the mask; then draw both render targets over each other somehow. But that just seems a) overly complicated and b) both render targets still have the mask drawn on them, so it wouldn't have worked anyway. So how would one achieve this?
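For what it's worth, what the question describes reduces to a single linear interpolation per pixel, with the mask value as the blend weight. A minimal CPU-side sketch of that idea (the Color type is illustrative; in practice this would be one lerp in a pixel shader sampling three textures):

    struct Color { float r, g, b; };

    // m is the mask value in [0, 1]: 1 (white) shows picture A,
    // 0 (black) shows picture B, gray blends the two.
    Color blend(const Color& a, const Color& b, float m) {
        return { m * a.r + (1.0f - m) * b.r,
                 m * a.g + (1.0f - m) * b.g,
                 m * a.b + (1.0f - m) * b.b };
    }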
What is "ROAM" in relation to terrain rendering? I saw it mentioned in this question, but no one explained what it is.
3D theory before graphics APIs? I'm a software engineer and I'm hoping to move my career towards game development. I'm reading a book right now on 2D using C++ and DirectX. When I get into 3D, I know I want to do it correctly. For example, I know nothing about 3D space. If I learn only an API, I might know the API, but I don't know if I could develop an interactive mini 3D world with it. I wouldn't call myself successful just having a rotating crate with the latest shaders, etc. My math skills are up to trig/linear algebra, and more math is still to come in college. Should I be reading 3D theory books before picking up OpenGL/Direct3D, or do you have other suggestions? I just know an API isn't going to teach 3D game development, and I don't want to be lost afterward. I'm very book-oriented, so book suggestions are fine too. Thoughts are welcome. Thanks!
How do mobile MMOs deal with graphical resources? I am creating an MMO (it probably won't be so massive), and was wondering how to deal with the graphical resources. Obviously, I can't have what could possibly be a few gigs of images and animations loaded up in the client, so I need another way of doing it. I have tried having a PHP webserver that updates itself and writes to a file that the client draws, however this seems slow. How can I speed up the process of updating the graphics while not putting too much into the client?
Asset library class, or passing assets through constructors? I am at the point of designing UI logic and I was wondering what a good way to pass asset information to these objects would be. What I've tried so far is passing assets to the objects representing UI elements through their constructors, leading to code where I pass between 1 and 10 assets (like "button idle", "button being hovered", "button active", etc.) to the objects, or having to create "metaclasses" that define the look and feel of these objects and that need to be passed to the UI elements to handle the rendering. A "dirty" way I thought of handling this was to create a singleton asset database that is globally accessible. As every button needs its own callback function on click, it can also get its own rendering function, from which the button can pick its assets out of the accessible assets. What is a common way to handle this?
What is a good alternative to a unified shader for shadows? Most shadow systems I have seen use a unified shader system for shadowing techniques, resulting in an über-shader for the project. What alternatives do you find work well, or is the unified shader the best approach?
Small 3D scene graph. I'm looking for a 3D graphics library (not a complete game engine), preferably a scene graph. Something small (unlike jME, XNA or Unity) that I can easily expand and change. Preferred features: cross platform; written in Java/Scala (JOGL or LWJGL), C# (preferably OpenTK), Python, or JavaScript/WebGL; support for OpenGL is a must, Direct3D is optional; some material system; full support for some model format with full animation support (preferably COLLADA); level of detail (LOD) support; lighting support. Shaders, GUI, input, and terrain/water support are also preferred, but not required. Thanks!
What steps can developers take to improve the graphics of their games? My understanding is that the difference in graphic quality between games for systems like Xbox One and PS4 and Pixar-style movies stems from the former being rendered in real time. But even with real-time rendering, it looks like some game makers can get rather nice-looking results. An example is the new Gears of War game: the image above is from a YouTube video posted by GameSpot, so admittedly it could be from a video (e.g. a promo) that has been rendered better than the actual game. Anyway, what I wanted to ask is: are games solely dependent on the real-time rendering power of the target system, or are there steps developers can take (besides the obvious "start with as good graphics as possible") to improve the graphics in their game? Edit: here I am referring strictly to in-game graphics.
Why no night sky with realistic star constellations? As an amateur stargazer, I noticed that many games with night scenarios use textures for the night sky where the stars seem to be arranged entirely randomly. It seems like they were created by an artist from scratch without looking at a star chart. Why don't they use a night sky texture where the stars are arranged as in the real night sky, so you can make out well-known constellations? Games which take place in a fantasy or science fiction scenario are obviously excused, but why do games which take place on Earth spend so much work on realism, yet neglect this one aspect, even though there are plenty of public domain resources which could be used to create a realistic night sky?
How do you handle large triangles with frustum culling? I am rendering the Sponza scene to test my frustum implementation. My current approach is simple: test whether the vertices of the triangle are within the frustum (via the angle between the camera's looking direction and the position of the vertex relative to the camera). If any of the vertices are visible, then the triangle is considered visible. The issue is that the Sponza scene has some really large triangles, so with my approach it is possible to be looking directly at a triangle and see nothing, because all 3 vertices are outside the camera frustum.
Do SpriteBuilder's Smart Sprite Sheets need to be loaded into memory? When I create a Smart Sprite Sheet folder using SpriteBuilder and publish it, must I load that sprite sheet into the frame cache in code, or does SpriteBuilder do this automatically (so I just have to access any image inside the sprite sheet)?
Geometry shader and stream output with DirectX 11. I am having trouble trying to send vertices generated in the geometry shader to stream output. What I am trying to accomplish is to generate vertices in the geometry shader and store them in a vertex buffer so that I can use that vertex buffer to draw later. I read that I have to use the CreateGeometryShaderWithStreamOutput function to create a geometry shader that can send vertices to stream output instead of the rasterization stage. This is how I am trying to use it:

    device->CreateGeometryShaderWithStreamOutput(
        this->mGSBlobSO->GetBufferPointer(), this->mGSBlobSO->GetBufferSize(),
        soDecl, 1, &stride, 1,
        D3D11_SO_NO_RASTERIZED_STREAM, NULL, &this->mGeometryShaderSO);

I am getting an E_INVALIDARG at this line. I am specifying D3D11_SO_NO_RASTERIZED_STREAM because I think this means that I do not want to send data to the rasterizer, but I am not sure. When I replace D3D11_SO_NO_RASTERIZED_STREAM with a 0, I do not get this runtime error, but I do not get the result I want. How can I set up the geometry shader to store vertices in a vertex buffer via stream output?
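For reference, a minimal sketch of a complete stream-output declaration and creation call (the POSITION semantic, float4 stride, and blob names are assumptions carried over from the question, not a known-good fix for the E_INVALIDARG; enabling the debug layer via D3D11_CREATE_DEVICE_DEBUG will name the offending argument):

    #include <d3d11.h>

    // Creates a stream-output geometry shader writing one float4 POSITION per vertex.
    HRESULT createSOGeometryShader(ID3D11Device* device, ID3DBlob* gsBlob,
                                   ID3D11GeometryShader** gsOut) {
        D3D11_SO_DECLARATION_ENTRY soDecl[] = {
            // Stream, SemanticName, SemanticIndex, StartComponent, ComponentCount, OutputSlot
            { 0, "POSITION", 0, 0, 4, 0 },
        };
        UINT stride = 4 * sizeof(float);    // bytes per vertex in the SO buffer

        return device->CreateGeometryShaderWithStreamOutput(
            gsBlob->GetBufferPointer(), gsBlob->GetBufferSize(),
            soDecl, ARRAYSIZE(soDecl),
            &stride, 1,
            D3D11_SO_NO_RASTERIZED_STREAM,  // requires feature level 11_0; pass 0 on 10.x
            nullptr, gsOut);
    }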
Transformations between coordinate systems. In a graphics engine, I have three three-dimensional orthogonal coordinate systems: O, A and B. A and B are the result of two different transformations from O. I now want to calculate the transformation matrix R which takes you from A to B. R should be the rotation and translation with respect to coordinate system A, not the original coordinate system O. Which of the following is correct: (1) B = A R, or (2) B = R A? By doing some simple worked examples, it seems that (1) is correct. However, my intuition says that (2) is correct, because the transformation to A should be applied before the transformation to B, via R. Which is it? Thanks :)
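Writing the two candidates out makes the difference concrete (a sketch using the column-vector convention $p_{world} = M\,p_{local}$; row-vector engines flip the order):

$$B = A\,R \;\Rightarrow\; R = A^{-1}B \qquad\qquad B = R\,A \;\Rightarrow\; R = B\,A^{-1}$$

In this convention, multiplying on the right composes in the local frame, so $R = A^{-1}B$ is the motion expressed relative to A, while $B\,A^{-1}$ expresses the same motion in O's axes; that is consistent with the worked examples favouring (1).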
Do I really need to use a graphics API? Is it necessary to use a graphics API to get hardware acceleration in a 3D game? To what degree is it possible to be free of dependencies on graphics card APIs like OpenGL, DirectX, CUDA, OpenCL or whatever else? Can I make my own graphics API or library for my game? Even if it's hard, is it theoretically possible for my 3D application to independently contact the graphics driver and render everything on the GPU?
Geometric clipping of triangles intersecting the near plane. I was reading about how rasterization works, but there is one topic I can't quite understand. After perspective projection, our point is in clip space, as I understand it. Now we test all points in a triangle like so:

    rejected = true
    for every point:
        if (abs(point.x) < abs(point.w) && abs(point.y) < abs(point.w) &&
            abs(point.z) < abs(point.w) && point.w > 0)
            rejected = false

So the triangle is only rejected if all points are behind the near clipping plane or fall outside the boundaries, but it can happen that one point is behind the near clipping plane while the others are in front. Now there is a part I don't understand. To deal with this, we somehow clip points in 3D space by constructing new triangles. How exactly does this work? And why can't we just interpolate our z values to test whether the point is behind or in front of the near plane?
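For the near-plane part specifically, the standard construction (a generic sketch of homogeneous clipping, independent of any particular rasterizer) finds where each crossing edge intersects the plane by interpolating in clip space before the perspective divide; the triangle is then rebuilt from the surviving vertices plus the intersection vertices, yielding one or two triangles:

    struct Vec4 { float x, y, z, w; };

    static Vec4 lerp(const Vec4& a, const Vec4& b, float t) {
        return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z), a.w + t * (b.w - a.w) };
    }

    // Clip edge p0 -> p1 against the near plane z = -w (OpenGL clip-space
    // convention). d = z + w is positive on the visible side, so the edge
    // crosses the plane where d interpolates to zero.
    Vec4 clipEdgeToNearPlane(const Vec4& p0, const Vec4& p1) {
        float d0 = p0.z + p0.w;
        float d1 = p1.z + p1.w;
        float t  = d0 / (d0 - d1);   // assumes d0 and d1 have opposite signs
        return lerp(p0, p1, t);
    }

Interpolating after the divide does not work because the divide by w is undefined, or sign-flipping, for points at or behind the eye (w <= 0), which is exactly the case a near-plane crossing involves; clipping while still in homogeneous coordinates avoids that.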
How do I load chunks of data from an asset manager during a loading screen? I'm developing an Android game. Basically, I want to preload all graphics and sounds when the app is first loaded, but I would also like to show a progress bar as this is happening. Here is a snippet of Java/Android code to display a progress bar while doing some work:

    public class LoadingScreen extends Activity {
        private static int progress = 0;
        private ProgressBar progressBar;
        private int progressStatus = 0;
        private Handler handler = new Handler();

        // Called when the activity is first created.
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.loadingscreen);
            progressBar = (ProgressBar) findViewById(R.id.progress);

            // do some work in a background thread
            new Thread(new Runnable() {
                public void run() {
                    while (progressStatus < 100) {
                        progressStatus += doSomeWork();
                        // Update the progress bar
                        handler.post(new Runnable() {
                            public void run() {
                                progressBar.setProgress(progressStatus);
                            }
                        });
                    }
                    // hides the progress bar
                    handler.post(new Runnable() {
                        public void run() {
                            progressBar.setVisibility(8); // 8 == View.GONE
                        }
                    });
                }

                private int doSomeWork() {
                    try {
                        // simulate doing some work
                        Thread.sleep(50);
                    } catch (InterruptedException e) {
                    }
                    return ++progress;
                }
            }).start();
        }
    }

My question is: how do I design my asset manager to load "chunks" of data (bitmaps, sounds, music) in the doSomeWork() function? Would it be some sort of stack from which I keep popping the next asset to load? How do I determine how much of the progress bar to advance after loading a "chunk" of data? Is there a better way to approach this, or is this the correct way?
How do I simulate a spinning helicopter rotor visual effect programmatically? I want to display an animation that conveys to the viewer that a narrow flat surface is spinning really fast (such as a helicopter's rotor blade). Does anyone have experience implementing this effect and could provide an implementation? Please remember that a rotor on video looks different than it does in real life. I need to make the rotor look like it does in real life, not on video.
Gravity independent of game updates per second. Edit: Just for clarification, my sprite's movement isn't the problem. If I set my time variable to 4 seconds, then it will cross the screen in exactly 4 seconds regardless of logic update rate, rendering rate or screen resolution. So I am pretty sure I'm scaling the sprite's movement correctly. What I'm pretty sure I'm not doing correctly is scaling acceleration. Original question: I'm trying to implement gravity in my 2D platformer and am having a few problems understanding how to keep it consistent when I change my updates per second. Here's what I have. My game loop overview: currently, my game loop renders at the maximum rate allowed by the device and the updates are clamped to an upper limit (at the moment, 60 per second). I am working on the assumption that most of the time my game will have no problem hitting this, even if the actual rendering dips. Thus I am doing all of my calculations based on a delta time derived from this fixed ticksPerSecond value. I don't know for sure that this will remain at 60; I may decide at some point during development to lower this upper limit. My gravity variable declarations and initial values:

    float spriteYTime = 7f;        // (initial) seconds for this sprite to move from the top of the screen to the bottom
    float fallAccel = .5f;         // subtracted from the sprite's fall time (to make it fall ever faster)
    float terminalVelocity = 1.5f; // cap speed at this rate (1.5 seconds)

My sprite's position is worked out from dt as follows:

    // Delta time
    float ticksPerSecond = 60;
    float dt = 1f / ticksPerSecond;
    // Velocity
    spriteYVel = 1 / spriteYTime;
    // Update position
    spriteYReal = spriteYReal + (spriteYVel * dt);
    // Convert to screen coordinates (will be drawn at this Y coordinate)
    spriteyScreen = (int) (furmanYReal * height);

My gravity code:

    // If my sprite's state is 'f' (meaning falling) then apply gravity
    if (sprite.getState('f') == true) {
        // Calculate new position
        spriteYReal = spriteYReal + (spriteYVel * dt);
        // Convert to screen coordinates
        sprite.yScreen = (int) (spriteYReal * r.height);
        // Reduce time by fallAccel amount & update velocity based on the new time
        // value (so the sprite falls slightly faster this frame than the last)
        spriteYTime -= fallAccel;
        spriteYVel = 1 / spriteYTime;
        // Check that speed isn't faster than terminal velocity
        if (spriteYTime < terminalVelocity) spriteYTime = terminalVelocity;
    }

The problem: now, this does work, but if I change my ticksPerSecond value, it goes wrong (the sprite falls at different rates). I know that "Earth normal" gravity is approximately 9.8 meters per second per second, but this is measured "per second" whereas I (think) I need to work "per frame". This is where I'm getting confused. So, take this example: if I set my initial time value to 4 seconds, then if it remained constant at this speed, the sprite would take 4 seconds to reach the bottom; if I changed my ticksPerSecond, because the time value is worked out using delta time, it would still take 4 seconds. But if I apply fallAccel to the time value (i.e. subtract it), it goes wrong when I change the ticksPerSecond value. Why? How can I get this to fall at the same rate regardless of the value of ticksPerSecond? Any help in understanding this would be appreciated.
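The asymmetry the question describes comes from subtracting fallAccel once per tick: that makes the acceleration proportional to the tick rate, while the velocity integration is already dt-scaled. The usual fix is to express acceleration in units per second squared and multiply it by dt as well. A minimal sketch of that (illustrative C++, not the question's codebase; values are placeholders):

    // Units: position in screen-heights, velocity in screen-heights/second,
    // acceleration in screen-heights/second^2. Because every per-tick change
    // is scaled by dt, changing ticksPerSecond no longer changes the motion.
    struct Sprite {
        float y  = 0.0f;    // position (0 = top, 1 = bottom of screen)
        float vy = 0.25f;   // current fall speed
    };

    const float gravity          = 0.5f;   // per second^2 (illustrative value)
    const float terminalVelocity = 1.0f;   // per second

    void applyGravity(Sprite& s, float dt) {
        s.vy += gravity * dt;                            // accelerate
        if (s.vy > terminalVelocity) s.vy = terminalVelocity;
        s.y  += s.vy * dt;                               // move
    }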
What is the name of the perspective style of games like Final Fantasy 6, Chrono Trigger or Alundra? What is the actual term for the "top down" style of Super Nintendo games such as Final Fantasy 6, Chrono Trigger, et cetera? I'm looking for the equivalent of terms like "isometric" (Jagged Alliance, Final Fantasy Tactics, Fallout), "first person" (Half-Life, Doom, Quake), "bird's eye" (Grand Theft Auto). ? insert term here ? (Final Fantasy 6, Chrono Trigger)
2D platformers: why make the physics dependent on the framerate? Super Meat Boy is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60 fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly fps/60 as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60 fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.
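For reference, the "physics loop running at a constant rate" that the question proposes is usually structured as a fixed-timestep accumulator, decoupling simulation rate from render rate (a generic sketch in the spirit of Glenn Fiedler's "Fix Your Timestep" article, not Super Meat Boy's actual code; the stubs are placeholders):

    #include <chrono>

    static int framesLeft = 300;                  // stub: stop after 300 iterations
    bool running() { return framesLeft-- > 0; }
    void updatePhysics(double /*dt*/) { /* advance simulation one fixed step */ }
    void render() { /* draw current state */ }

    int main() {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 60.0;             // fixed simulation step
        double accumulator = 0.0;
        auto previous = clock::now();

        while (running()) {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            while (accumulator >= dt) {           // catch up in whole fixed steps
                updatePhysics(dt);
                accumulator -= dt;
            }
            render();                             // render rate independent of physics rate
        }
    }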
How to keep a Minecraft-like map from blending together. I am making a Minecraft-like world, with some differences. For instance, the camera will always face the same direction. I am having trouble with the visuals. Here is an example: as you can see, when looking at a "step", as long as you can see the wall between the two levels (kind of like looking at it from above), it is easy to identify. But when the wall disappears (kind of like when you are looking down at it), the step seems to blend in, to the point where it is hard to tell where the step begins. I thought of adding outlines, but due to the heavy palette of tiles that I am going to have, it would require rethinking a lot of the logic for choosing the correct tile, and I was hoping to find an easier solution. What could I do to solve this problem?
What makes a game a 3D game? This may sound pretty obvious to game devs here, but I'm wondering if a game can be called 3D if it has 3D assets but a static, isometric-like camera view that doesn't rotate at all. I tried searching the different types of video game graphics, but I only found the inverse, which is called 2.5D because it tries to fake 3D using 2D isometric assets. My questions are: 1) What really makes a game a 3D game? 2) If a game with 3D assets and a static, non-rotating isometric camera view is not called a 3D game, then what is it?
What weight should each frame have in motion blur? I am working on a game and would like to add motion blur to videos that I export from the engine. Right now, I play back gameplay at 1/16th the original speed, giving me 16x more frames (e.g. instead of 60 frames per second, it's 960 frames per second), and average the frames. I am curious though: is this the proper weighting, or should I give frame samples closer to the middle more alpha? Also, should the frames be sampled uniformly?
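Written out, the current scheme is the uniform average

$$I_{blur} = \frac{1}{N}\sum_{k=1}^{N} I_k, \qquad N = 16.$$

Uniform weights over uniformly spaced samples approximate integrating the image over the whole frame interval, i.e. a shutter that stays open for the entire frame; a weighting that favours the middle samples instead models a different shutter response (one open for only part of the frame, or opening and closing gradually). So the choice of weights is effectively a choice of simulated shutter.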
How do I render realistic ice? I am trying to write an ice shader in Unity that looks good and is at least semi-realistic. If the following shot (found on Google) were CG, what would its shader include (the foreground cave)? I might be wrong, but it looks like it even has a different lighting model than the default diffuse.
Using Unreal Engine 4 to model Earth. I could not find a game that has modeled Earth completely (i.e., the entire planet as a full round model). Does UE4 have a limitation, such as maximum map size, that prevents modeling the entire Earth? Edit (context): the goal is to create a flight simulator using UE4 with centimeter granularity for modeling Earth.
How can I modify how my scene is drawn to make it look like it's taking place at different times of day? My game associates a time of day with the scene, so the morning light should be different from the afternoon and the night, but I don't know how to create this difference. Here is an example from a Pokemon game. I have an idea: put a layer on top of the game with an opacity of about 10-20%, and change its color depending on the time, but I don't know which colors are good. Are there other mechanisms for doing some kind of day/night cycle in a game of this style?
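The overlay idea in the question is ordinary alpha blending: with $\alpha$ the layer opacity (e.g. 0.1 to 0.2) and $C_{tint}$ the time-of-day color,

$$C_{out} = (1-\alpha)\,C_{scene} + \alpha\,C_{tint}.$$

A multiplicative tint, $C_{out} = C_{scene}\cdot C_{tint}$, is another common choice for night scenes, since it darkens rather than washes out; either way, the tint color can be interpolated smoothly as the in-game clock advances.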
Algorithm for drawing asteroids from, er... the Asteroids game? What would the algorithm be for generating/drawing the asteroid shapes from the original Asteroids game? Is it even an algorithm? Or would they be hard-coded shapes? Here is a screenshot to jog your memory: http://www.heinzwerner.de/emu/asteroids.jpg EDIT: Also found this image: http://www.next-gen.biz/files/images/feature_article/2009/05/asteroids4.jpg from the article Edge: The Making of Asteroids. Interesting read, but it does not mention any specifics.
Ground contact point for animated character sprites in an isometric game? I'm working on an isometric game. I have 3D pre-rendered sprites: directions per state (walking, running, etc.), per frame. Now I need to know where a particular character stands on the ground to keep it synchronized. But this "ground contact point" (GCP) changes from frame to frame, and is basically (slightly) different for every combination. At this moment I use a rough average, but the sprite wiggles a bit during animation. It's rendered correctly (1024x768), but the cropping of the frames, even though it correctly removes empty borders, also removes the "offset" information, which is then no longer there. So I basically need to remove empty borders while taking into account all the frames in all the animations, so that the character in the sprite animation stays stabilized but unneeded space is removed. (Example images omitted.) Is there any software that does this, or is there some technique to do it?
How do I make an image bigger than the screen slideable in MonoGame for Windows Phone 8? (I don't know if my title is correct, because when I Google it, there are no related results.) I am not sure how to explain it correctly, but I am making a plain 2D, tile-based tactics game for Windows Phone 8 using MonoGame. I want to make my map "slideable". By "slideable" I mean I can draw images larger (in total) than my screen and then slide around to view a certain area of the drawn images. Example: I have a screen whose dimensions are 1280x720. I have a 1500x1500 px image, consisting of 15x15 tiles of 100x100 px each, where each tile is redrawn each time Draw is called. Since the image is larger than the screen, the displayed area is trimmed, which of course leaves a 220x780 px area that cannot be seen. The only way to see all of it is by "sliding" the view around. My question is: how do I make that happen? By default, the screen is not slideable and the image remains trimmed. Sorry if my question and explanation are not clear enough; clarify them as much as you like. Thank you.
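What the question describes is usually achieved by keeping a camera offset and subtracting it when drawing, rather than moving the image itself. A minimal sketch of the bookkeeping (illustrative C++ using the question's dimensions; input handling is out of scope):

    #include <algorithm>

    struct Camera { float x = 0, y = 0; };   // top-left of the visible window, in map pixels

    // Pan by (dx, dy), clamped so the 1280x720 view never leaves the 1500x1500 map.
    void pan(Camera& c, float dx, float dy) {
        c.x = std::min(std::max(c.x + dx, 0.0f), 1500.0f - 1280.0f);
        c.y = std::min(std::max(c.y + dy, 0.0f), 1500.0f - 720.0f);
    }

    // A tile whose map position is (mx, my) is then drawn at screen position
    // (mx - c.x, my - c.y); tiles that fall outside the screen can be skipped.

In MonoGame specifically, the same offset is often applied in one place by passing a translation matrix to SpriteBatch.Begin, so individual Draw calls can keep using map coordinates.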
Why are huge polygon amounts bad? It is always said that the polygon count of a single model must be as low as possible when it comes to real-time simulations such as computer games (or at least lower than when rendering a movie). I am fully aware that this must be done to save performance. But aside from that, I cannot find out why huge polygon counts must be avoided. In short: I know that polygons eat performance; I want to know why they eat performance. So my question would be: what happens when a frame is rendered? The polygons are surely somehow processed by the graphics card. What happens there? If possible, I would also like some links to sites containing this information.
Mobile game development: supporting multiple screen resolutions on iOS. Questions: when designing the graphics of a game (e.g. for iPhone), what resolution should I base it upon? The smaller resolution (the iPhone 4 screen resolution), so that I can scale it up for bigger-resolution devices later, or the bigger resolution (iPhone 6), so that I can scale it down for smaller-resolution devices later? Which solution would be best to implement without distorting the graphics? I am creating a game now (an endless-runner-type game) and I am using parallax backgrounds. What is the best option for supporting multiple resolutions? Scale everything so that it looks the same on every device, which might distort the image or reduce the viewing area when scaled? Or, alternatively, let more of the background be seen on larger-resolution devices, leaving the graphics as they are but letting users with larger-resolution devices see objects (I am talking about the next part of the parallax background) that might not be seen on smaller devices? My question pertains to iPhone devices (iPhone 4/4s, iPhone 5, iPhone 5s, iPhone 5c, iPhone 6, iPhone 6 Plus) for now.
Should I make my game's graphics lower quality so everyone can play it? The game I'm working on targets users with machines that have lower-end graphics capabilities. Should I make my game's graphics lower quality so that everyone can play it, or make the graphics high quality so that people must go and buy a new GPU to play it?
Why isn't particle hair more popular in games? Blender 2.80 introduced EEVEE, a new realtime renderer, and after interacting with it for a while I was impressed by how well it handles particle hair, in terms of both performance and visuals. It even manages to reflect light through SSR. (Example: a sphere with 200 parent hair particles and 50 child particles per parent.) Blender isn't a game, and its developers probably have different concerns and priorities regarding the performance of the application, so this might be a good reason not to see this implemented in games right now. But still, seeing how well EEVEE manages it, and knowing that games have user-customizable graphics settings, I don't understand why we're still using planes with textures and other techniques like that as our best effort for realtime hair in games. Of course, having one million strands makes even Blender struggle to keep a reasonable framerate, but a game probably wouldn't need such dense hair to look convincing, and could cheat by baking hair textures onto the scalp in case gaps become visible close up. A friend of mine said Nvidia HairWorks accomplishes just this, but looking at some screenshots I doubt that's the case (if it is, then maybe the technology has evolved a lot lately and Nvidia has some catching up to do? I don't know, but from what I've seen it doesn't look that good). So, why isn't particle hair more popular in games?
How do I make a sprite using Core Graphics (iOS, cocos2d-iphone)? I'm trying to make a sprite that uses graphics made with Core Graphics. I can't seem to find anything explaining how to make shapes using Core Graphics to create a sprite for use in cocos2d-iphone. Thanks for any insights and help!
What are good puzzle game interface design principles? I'm building a prototype puzzle game at the moment, and I'm curious about the principles behind building graphical interfaces within games. At the minute I'm looking at designing the graphics for an interface in something like Illustrator. I'm looking at one graphic (650x650 px) that incorporates the play area (400x550 px) as a blank square, with a nice border on the right-hand side (vertically down the screen) and a border along the bottom too. I'm leaving a blank rectangle on the vertical border to incorporate a timer, and three blank squares along the bottom to incorporate icons representing lives or power-ups or something, which I'll design and animate later. Is this an acceptable, sensible approach?
Any interesting thesis topics? I study Computer Science at the Technical University of Lodz (in Poland) with a Computer Game and Simulation Technology specialization. I'm going to defend my BSc thesis next year and I was wondering what topic I could choose, but nothing really interesting is coming to mind. Maybe you could help me and suggest some subjects related to programming graphics, games or simulations (or maybe something else that is interesting enough :))? I would be very grateful for any suggestion!
What's the best way to generate an NPC's face using web technologies? I'm in the process of creating a web app. I have many randomly generated non-player characters in a database. I can pull a lot of information about them: their height and weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed with text in the nicest way possible, but I believe it's worth generating these faces for a more... human experience. Problem is, I don't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice. I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .PNGs to represent various aspects of the player's body: their armor, the face, the skin color. However, that solution wasn't very dynamic and felt very amateur. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestions on how to pull this off well?
Vector clothes and haircuts that adapt to any character size: how is it done? Rimworld characters are simple but very nice to watch. They have different sizes, haircuts, clothes... and everything seems to adapt to everyone. How does that work technically? For example, any vest will adapt to every character's size. I highly doubt that the artists designed every vest for every body shape, as that would be a tremendous amount of work. I think they use vectors, and somehow get it to work. But how? See Veli & Baldwin, or Tau & Kish below. Or Flebe and Maria below; they both wear a lightleather parka.
How does hidden surface removal work? Lately I've been learning some OpenGL for fun, and I've been thinking about hidden surface removal. Say you have a high-poly-count static scene, with nothing that moves, no bones, physics, etc., just static models to be drawn. I was thinking that you could put all the faces in the scene into a quadtree or octree. Then, with the camera position and the dimensions of the view frustum, you could traverse the tree to construct an index buffer of all the faces that are within the view frustum (facing towards you, of course). You could then update the index buffer with glBufferSubData() and draw with glDrawElements(), doing this every frame, and if my limited understanding is correct, this should theoretically draw only the faces you see, right? So is this a good, valid way to do occlusion culling? If not, how else can you do it?
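For reference, the per-node visibility test such a traversal typically performs is a conservative box-vs-frustum check against the six frustum planes (a standard sketch, not tied to any engine; plane normals are assumed to point inward):

    #include <cmath>

    struct Plane { float nx, ny, nz, d; };           // inside if n.p + d >= 0
    struct AABB  { float cx, cy, cz, ex, ey, ez; };  // center and half-extents

    // Returns false only if the box is entirely outside one plane; a box that
    // merely intersects the frustum boundary is conservatively kept.
    bool intersectsFrustum(const AABB& b, const Plane planes[6]) {
        for (int i = 0; i < 6; ++i) {
            const Plane& p = planes[i];
            // projected "radius" of the box onto the plane normal
            float r = b.ex * std::fabs(p.nx) + b.ey * std::fabs(p.ny) + b.ez * std::fabs(p.nz);
            float s = p.nx * b.cx + p.ny * b.cy + p.nz * b.cz + p.d;  // signed distance of center
            if (s + r < 0) return false;
        }
        return true;
    }

Note that this is view-frustum culling: faces inside the frustum but hidden behind other faces would still be submitted, and are normally resolved by the depth buffer or by a separate occlusion-culling pass.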
How do I scale down pixel art? There are plenty of algorithms to scale up pixel art (I prefer hqx, personally), but are there any notable algorithms to scale it down? My game is designed to run at a resolution of 1280x720, but if it is played at a lower resolution I still want it to look good. Most pixel art discussions center around 320x200 or 640x480 and upscaling for use in console emulators, but I wonder how modern 2D games like the Monkey Island remake could look good at lower resolutions? (Ignoring, of course, the option of having multiple versions of assets, i.e. mipmapping.)
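As a baseline for comparison (plain area averaging, not one of the specialized pixel-art filters the question is asking about): for integer factors, each output pixel is the unweighted mean of a k x k block, which is roughly what generic image libraries do. A sketch:

    #include <cstdint>
    #include <vector>

    // Downscale a w x h RGB image (0xRRGGBB per pixel) by integer factor k.
    std::vector<uint32_t> boxDownscale(const std::vector<uint32_t>& src,
                                       int w, int h, int k) {
        int ow = w / k, oh = h / k;
        std::vector<uint32_t> dst(ow * oh);
        for (int y = 0; y < oh; ++y)
            for (int x = 0; x < ow; ++x) {
                unsigned r = 0, g = 0, b = 0;
                for (int j = 0; j < k; ++j)
                    for (int i = 0; i < k; ++i) {
                        uint32_t c = src[(y * k + j) * w + (x * k + i)];
                        r += (c >> 16) & 0xFF;
                        g += (c >> 8) & 0xFF;
                        b += c & 0xFF;
                    }
                unsigned n = k * k;
                dst[y * ow + x] = ((r / n) << 16) | ((g / n) << 8) | (b / n);
            }
        return dst;
    }

The pixel-art objection to this baseline is that averaging erodes deliberate hard edges, which is exactly what an edge-aware downscaler would need to preserve.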
Why are my exported Illustrator graphics blurry and messy in-game? Currently I use Illustrator to create artwork for my Android games. My art size is pretty small, just 32x32. It looks OK in Illustrator, but when exported, my art is blurry and looks messy! Could you give some advice? How can you get vector-like art? And which is better for small images, pixel art or vector? (Screenshots: my large-size export, and my exported result, where the pixels look bad when zoomed.)
Graphics messed up when I try to reskin a game in Android Studio. I need a little help. I'm new to Android Studio, and I have a game whose design I want to reskin. First of all, when I install it on my phone with the original files, the game runs perfectly without problems. When I reskin the photos in my assets folder, save, build the APK file, and install the game, the graphics are messed up. Here is a screenshot of the game after the reskin. Sorry for my bad English...
SDL2 Linux fullscreen issue at lower-than-desktop resolution. I'm having a problem trying to get proper fullscreen in Linux. I'm using 1440x900 on the desktop. When I set SDL to use 1280x720 as fullscreen, it does change the screen resolution. But if I drag the mouse cursor to the bottom or right edge of the game screen, it "scrolls" the screen beyond the game surface and makes part of the desktop visible. Here's how I set up the window:

    gameWindow = SDL_CreateWindow(
        "SDL Tutorial",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        screenWidth, screenHeight,
        SDL_WINDOW_FULLSCREEN | SDL_WINDOW_BORDERLESS);

Am I missing some flag(s), perhaps? Is this a common problem? I'm on Linux Mint MATE (Rosa). This problem also occurs when trying to run the same build in an Openbox session. I'm using nVidia drivers (x64, v340.96) from nvidia.com, on 9600 GT card(s). No twin/dual screen or second workspaces. Any good tips on how to avoid or work around this issue?
What's the difference between using hardware-accelerated APIs and the OS's drawing API? On Windows, I can draw with the OS API without OpenGL or D3D. The code I am writing will make calls to a device driver and tell the GPU what to do regardless, right? How exactly is using OpenGL different? Do these libraries have code that interacts with the GPU differently than the Windows API does?
Cascaded shadow maps: clipping against scene bounds. I just implemented an initial version of CSMs based on the MSDN articles "Common Techniques to Improve Shadow Depth Maps" and "Cascaded Shadow Maps". I fit the shadow projection to the view frustum and do not clip against the scene bounds, because in "Common Techniques to Improve Shadow Depth Maps" it is stated: "It is also possible to clip the frustum to the scene AABB to get a tighter bound. This is not advised in all cases because this can change the size of the light camera's projection from frame to frame. Many techniques, such as those described in the section Moving the Light in Texel-Sized Increments, give better results when the size of the light's projection remains constant in every frame." However, in the paper from the Nvidia SDK, the frustum is further clipped against the scene bounds to make maximal use of the shadow map's area. This confuses me. Did the folks at MSDN miss something, is the Nvidia paper simply older, or am I not understanding the Nvidia approach correctly? Edit: I was right in not understanding the Nvidia approach, basically not at all. I confused the crop matrix with clipping against scene bounds.
Realistic metal shader. How do you create a good metal shader? For different metals, and, say, more or less eroded, rusty, and so on. I know that one difference from ordinary materials is that metal should "colour" the specular light, but when I do that with gold, for example, it just looks yellow, not metallic at all. Any help appreciated!
Drawing the same mesh or drawing the same material? I was wondering: suppose I have 1000 grass meshes. They all have the same material, but I create them separately, because they look slightly different, having different heights. Does my GPU speed up if I only draw one mesh over and over again? Or is the material switching and uniform setting the main cost? So: should I consider going to only one mesh drawn 1000 times, or is it OK to have a lot of different meshes sharing the same material?
How do I get real-time (blood) traces? https://www.youtube.com/watch?v=Tzf3zjPJYw4 In this game, Ink, color blobs fall and then create pretty color spots. The color traces are unlike any other game's. For comparison, in Super Meat Boy (https://www.youtube.com/watch?v=snaionoxjos) the red traces are more like sprites aligned across the surfaces of the objects. The traces in the game Ink differ from those traditional traces because they stack infinitely and come in many different shapes. The approaches I can think of are: 1) prepare many different shape sprites, then place them where traces should be; 2) assign each block its own texture, then draw the traces into that texture. The issues I see, however, are: the first solution leads to MANY sprites being generated to make it look as if the traces are drawn on the object; the second means heavy traffic between CPU and GPU, continually drawing traces while uploading the newly drawn texture to the GPU to be rendered, significantly slowing down the game. I don't have a good computer; it cannot handle either approach 1 or 2, yet it can run the game Ink. So I am assuming Ink must use some clever approach to handle this crucial visual aspect of the game, but I am out of ideas.
Blender 2.6: how to undo 3D cursor movement. I sometimes place the 3D cursor at the wrong location with an unintended click of the left mouse button. But when I press Ctrl+Z to undo the accidental 3D cursor movement, it doesn't revert. So my question is: is there an easy way to get Blender's (2.6) 3D cursor back to its previous location? Thanks.
Bounding box in MonoGame for mouse picking: ray perspective. My mouse ray is screwing up precision. I don't really know how to fix it; maybe you guys can help. Problem (5.6 MB gif): https://www.dropbox.com/s/v0z67afso88hsd1/perspective-ray.gif How I create the mouse ray:

    private Ray GetMouseRay(GraphicsDevice gd, ref Matrix view, ref Matrix proj)
    {
        // create source positions. I don't really understand why the 0 and the 1,
        // since the near/far clip planes are totally different, but from
        // experimentation, this is a must
        Vector3 nearsource = new Vector3((float)MousePosition.Value.X, (float)MousePosition.Value.Y, 0.0f);
        Vector3 farsource  = new Vector3((float)MousePosition.Value.X, (float)MousePosition.Value.Y, 1.0f);
        Console.WriteLine("nearsource " + nearsource.ToString() + " farsource " + farsource.ToString());

        // Matrices needed are the view, proj and this world; we are positioning
        // the mouse ray at the origin (model origin, it's a 3D-space ray)
        Matrix world = Matrix.CreateTranslation(0, 0, 0);

        // unproject the mouse position onto the clipping planes
        Vector3 nearPoint = gd.Viewport.Unproject(nearsource, proj, view, world);
        Vector3 farPoint  = gd.Viewport.Unproject(farsource, proj, view, world);
        Console.WriteLine("nearPoint " + nearPoint.ToString() + " farpoint " + farPoint.ToString());

        // Create a ray from the near clip plane to the far clip plane.
        Vector3 direction = farPoint - nearPoint;
        direction.Normalize();
        return new Ray(nearPoint, direction);
    }

How I am drawing the ray:

    CDebugShapeRenderer.AddLine(mouseRay.Position, mouseRay.Position + mouseRay.Direction * 1000, Color.Red);

How I am calculating the OBB: see the linked question "Bounding Box in Monogame for mouse picking". How I am calculating the collision: line 349 of https://github.com/CartBlanche/MonoGame-Samples/blob/master/CollisionSample/BoundingOrientedBox.cs#L349 So how can I create a mouse ray that is accurate, or remove that perspective effect somehow? Roger. Edit: I forgot to add the matrices used:

    proj = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, this.GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000.0f);
    view = Matrix.Identity;
How do I make a game appear to run faster? I believe I read somewhere that there is a technique which will make games appear smoother than they are. I believe it is some visual trick, but I don't remember which one. (It is something like "you perceive the game to be more fluid if there are good shadows".) I may be wrong and there is no such thing.
Web site for XAML snippet sharing? I am making a game with WPF and C#, and I frequently need artwork for it. Some things could be easily shared, such as explosion animations, exhaust plumes, etc. I was wondering if there are any good web sites for this? All I can find are tutorials and the infrequent sample, but I imagine there is a site out there that I am missing. I would also like to clarify that I am looking for pure vector-based XAML snippets. I know there are a lot of sites out there for sprites and textures, but I am looking for XAML components that I can tweak.
Object transparency dithering (as shown in Super Mario Odyssey). A couple of games I've been playing recently share the goal of dithering objects as they approach the near clip plane. Super Mario Odyssey applies this near-clip dithering effect along with object-intersection dithering for objects like Mario; I include examples from Super Mario Odyssey later in this post. How would one go about creating this effect in their graphics solution of choice? (The gifs might take a little while to get started; I couldn't get them each under 30 MB.) Examples in Super Mario Odyssey: the balloon of the Odyssey doesn't "exit dithering" until the camera is a good distance away from it; this could mean the dithering is calculated based on the object's distance from the camera instead of the surface's distance. The flag starts out with a small dither density, and Mario has no dithering applied to him. Once the flag is fully opaque, Mario becomes dithered, but only in the portions that are covered. Also note Mario's coat tail has the "transparency dithering" applied to it. How are the objects' shaders structured to allow this compound effect? Below is a blown-up image of Mario being dithered. Notice how dithering appears in intersection regions and when behind objects!
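The look in question is commonly implemented as screen-door transparency: per pixel, compare the object's fade factor against an ordered-dither (Bayer) threshold and drop the pixel if it loses. A minimal sketch of that test (a generic illustration, not Nintendo's actual shader; in a real renderer the false case would be a fragment discard):

    // 4x4 Bayer matrix, thresholds in [0, 1).
    const float bayer4[4][4] = {
        {  0/16.0f,  8/16.0f,  2/16.0f, 10/16.0f },
        { 12/16.0f,  4/16.0f, 14/16.0f,  6/16.0f },
        {  3/16.0f, 11/16.0f,  1/16.0f,  9/16.0f },
        { 15/16.0f,  7/16.0f, 13/16.0f,  5/16.0f },
    };

    // opacity in [0,1]: 1 keeps every pixel, 0 drops every pixel, and
    // intermediate values keep a matching fraction in a stable screen pattern.
    bool keepPixel(int x, int y, float opacity) {
        return opacity > bayer4[y & 3][x & 3];
    }

Driving opacity from camera distance would give the near-plane fade; a second fade factor for intersections (for example, from comparing against the depth of occluding geometry) could simply be multiplied in before the test, which would produce the compound behaviour described.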
Would seam carving (liquid rescale) make changing aspect ratios easier? Seam carving is an algorithm which allows resizing images without major distortions. I think it might make it much easier to build games that adapt to different aspect ratios and resolutions. But am I right? You can watch this presentation to see how seam carving works for videos.
How do I distort graphics in a wave-like way? I've seen a certain graphics distortion effect in multiple games over time and am interested in doing it in 2D. It is a wave-like texture distortion; I am not entirely sure what it is actually called. Super Mario 64 had an effect like this when you enter a painting, varying with your position (https://www.youtube.com/watch?v=H6r5oF73gNI&t=72), and Braid has one, even more similar to what I am trying to achieve, when you use the time ring (https://www.youtube.com/watch?v=uqtSKkyJgFM&t=38). I would really like to know how those effects were implemented, and whether there is support for something like that in LWJGL (or even Slick2D). So far, I have found absolutely nothing explaining how it was done.
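Both clips look like a sine-based UV displacement: when producing each output pixel, offset the coordinates used to sample the source image by a sine of the other axis plus time. A minimal sketch of the sampling offset (an illustration of the idea, not either game's actual code; in practice this runs per pixel in a fragment shader, which LWJGL's OpenGL bindings expose):

    #include <cmath>

    // Given output coordinates (u, v) in [0,1] and a time in seconds, compute
    // where to read from in the source image. amplitude around 0.01-0.05 and
    // frequency around 10-40 give gentle, wave-like wobbles (illustrative values).
    void waveDistort(float u, float v, float time,
                     float amplitude, float frequency,
                     float& srcU, float& srcV) {
        srcU = u + amplitude * std::sin(frequency * v + time);
        srcV = v + amplitude * std::sin(frequency * u + time);
    }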
Precomputing visibility. Having noticed that UDK (Unreal) and Unity 3 include similar precomputed visibility solutions that, unlike Quake's, are not dependent on level geometry, I've been trying to figure out how the calculation is done. The original Quake system is well documented: you divide the world into convex volumes that bound both the camera and the geometry. Each volume has a list of all the other volumes that are visible from it. Visibility would be computed by firing rays at some random distribution of points in the target volume and seeing if any hit. And because the position of the camera in the source volume could have an effect, those thousands of rays would have to be fired from multiple places in the source cell. So what I'm wondering is whether there's been any fundamental change to this basic scheme in the intervening 15 or so years? I can see how to adapt it to a UDK/Unity scheme that has regular source volumes and deals mostly with arbitrary meshes as the targets, but is there a better way than stochastic ray testing?
Why are the 3D graphics and animations of MMORPGs usually worse than those of non-online 3D games? I have noticed that, in general, the 3D graphics and animations of MMOs and MMORPGs seem not as seductive and polished as the graphics of normal, non-online 3D games. Why is this the case? Or is my judgement inaccurate? If my judgement is inaccurate, please provide examples of MMORPGs that render 3D graphics and animations superior to normal, non-online 3D games.
Pixmaps, ByteBuffers, and Textures... oh my. My ultimate goal is to take a specific region of the screen and redraw it somewhere else. For example, take a square from the upper-left-hand corner of the screen and redraw it in the lower-right-hand corner, so that it is basically a copy of that screen section, kind of like a minimap but at the same scale as the original. I have looked into pixmaps and ByteBuffers, and also maybe copying that region from the back buffer somehow. I'm wondering the best way to go about this. Any help is appreciated. I am using OpenGL ES and libGDX, for what it's worth.
How do I project textures onto an animated model? I'm trying to figure out exactly how to project textures onto an animated model. I've taken a look at L4D2's wounds white paper, but their method doesn't exactly explain how they went about this. I've tried the old-school method of creating a mesh and attaching it to the object, but that would require the GPU to store that data for a long period of time and recall it correctly for the animated model; on top of that, there's the Z-fighting problem. I've tried the deferred shading method, but I can't get that to work correctly either. My setup requires some form of filter to prevent the decal from projecting onto other models, and should a moving body part cross over the volume, the decal gets rendered onto that part as well, which is not desired.
Graphics programming: replicating the transition from Chrono Trigger inside the gate. I started playing this game for the first time lately, and this really piqued my interest. You can see the transition in motion starting at 12:08. There seems to be some interesting math taking place. It would appear that there are two different views combined together: the blue ripples are rendered first in the background, and then the purple oscillating waves appear to be rendered with some sort of pseudo-3D projection. How exactly they achieved this result, however, is beyond me, and curiosity has got the better of me. I don't expect a lot of people to know how this was done, but if anyone has done some extensive graphics programming, perhaps they could enlighten me.
What options do I have for rendering "large" terrains? I am trying to design a game with some interesting features, but one question I have is regarding terrain. I want terrain that makes for a very large game world, and I want to be able to have features such as "roads" and "rivers". What options do I have for terrain, and what will give me the best-looking result? Obviously I also want to keep the game playable with a reasonable frame rate, but this game will specifically target higher-end hardware.
"OpenGL in 500 lines": point-in-triangle question. https://github.com/ssloy/tinyrenderer/wiki/Lesson-2-Triangle-rasterization-and-back-face-culling I am on lesson 2 of the "OpenGL in 500 lines" tutorial. I follow the part of the lesson in "The method I adopt for my code", but I don't understand the leap from $P = A + u\,\overrightarrow{AB} + v\,\overrightarrow{AC}$ to $\vec{0} = u\,\overrightarrow{AB} + v\,\overrightarrow{AC} + \overrightarrow{PA}$.
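For what it's worth, the leap is a single substitution: by definition $\overrightarrow{PA} = A - P$, and the first equation rearranges to $u\,\overrightarrow{AB} + v\,\overrightarrow{AC} = P - A$, so

$$u\,\overrightarrow{AB} + v\,\overrightarrow{AC} + \overrightarrow{PA} = (P - A) + (A - P) = \vec{0}.$$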
PBR: how do I correctly combine standard lighting and IBL? I'm creating a physically based renderer, but I am a bit confused about how to put together standard lighting with IBL, since the way I'm doing it now seems wrong. Right now, for each light, I evaluate its contribution to the scene lighting combined with the IBL lighting (I use both the light's contribution and the diffuse and specular coming from the IBL), but this way I sum the IBL contribution once per light, and I don't think that's right. To put together standard lighting with IBL, do I need to process all the lights alone and then, in a separate step, bake the IBL into the scene? I think this would be more correct.
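For reference, one common arrangement (a sketch of typical renderer structure, with hypothetical stand-in functions; not a statement about any specific engine) is to treat the IBL as the ambient/environment term, evaluated exactly once per shaded point, with the analytic lights summed on top:

    struct Vec3 { float x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    // Hypothetical stand-ins for the real evaluations.
    static Vec3 directLight(int /*lightIndex*/) { return { 0.1f, 0.1f, 0.1f }; } // one light's BRDF term
    static Vec3 iblDiffuse()  { return { 0.02f, 0.02f, 0.02f }; }  // prefiltered irradiance lookup
    static Vec3 iblSpecular() { return { 0.03f, 0.03f, 0.03f }; }  // prefiltered environment lookup

    Vec3 shade(int lightCount) {
        Vec3 color = add(iblDiffuse(), iblSpecular());   // image-based term, added once
        for (int i = 0; i < lightCount; ++i)
            color = add(color, directLight(i));          // analytic lights summed on top
        return color;
    }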
How can I tell a fragment shader not to write a particular pixel? In WebGL, I'd like to send a screen-space quad through the pipeline, have it processed by a fragment shader, but have the fragment shader only write out a pixel under certain conditions (say, that it is within a circle, or that the pixel belongs to the positive side of a half-space defined by a curve equation, or something). Is it possible within a fragment shader to say "don't write a pixel"? I know this could be accomplished using various other methods, like alpha blending, rendering this first and putting the background color where it shouldn't draw a pixel, or maybe doing some trick with the depth or stencil buffers. I also know I could create a bunch of geometry to match what I want to render. Is there a way, though, to make the fragment shader choose not to write a pixel at all?
8
Random generation of realistic human faces What would be a practicable way of generating huge numbers of realistic-looking human faces? Randomizing 3D models and rendering them would require a lot of computing power, especially as I need them on an ad hoc basis. Layering individually drawn 2D parts requires some artistic talent, which I'm definitely lacking. Also, I'd like to parameterize as many aspects of the generation process as possible. Do you have any suggestions on how to go about this?
8
How to calculate texture coordinates for a cube? I was reading about texture mapping on 3D shapes in JavaFX when I came across this code (reformatted below; the arithmetic operators and the +/- signs on the cube corners were lost in the original paste, so they are reconstructed following the standard corner layout):

```java
static TriangleMesh createMesh(float w, float h, float d) {
    if (w * h * d == 0) {
        return null;
    }

    float hw = w / 2f;
    float hh = h / 2f;
    float hd = d / 2f;

    // The texture is unwrapped as a 4-wide, 3-tall grid of faces,
    // hence the quarter steps in x and third steps in y.
    float x0 = 0f;
    float x1 = 1f / 4f;
    float x2 = 2f / 4f;
    float x3 = 3f / 4f;
    float x4 = 1f;
    float y0 = 0f;
    float y1 = 1f / 3f;
    float y2 = 2f / 3f;
    float y3 = 1f;

    TriangleMesh mesh = new TriangleMesh();
    mesh.getPoints().addAll(
        -hw, -hh, -hd,  // point A
         hw, -hh, -hd,  // point B
        -hw,  hh, -hd,  // point C
         hw,  hh, -hd,  // point D
        -hw, -hh,  hd,  // point E
         hw, -hh,  hd,  // point F
        -hw,  hh,  hd,  // point G
         hw,  hh,  hd   // point H
    );
    mesh.getTexCoords().addAll(
        x1, y0,  x2, y0,
        x0, y1,  x1, y1,  x2, y1,  x3, y1,  x4, y1,
        x0, y2,  x1, y2,  x2, y2,  x3, y2,  x4, y2,
        x1, y3,  x2, y3
    );
    mesh.getFaces().addAll(
        0, 10,  2,  5,  1,  9,   // triangle A-C-B
        2,  5,  3,  4,  1,  9,   // triangle C-D-B
        4,  7,  5,  8,  6,  2,   // triangle E-F-G
        6,  2,  5,  8,  7,  3,   // triangle G-F-H
        0, 13,  1,  9,  4, 12,   // triangle A-B-E
        4, 12,  1,  9,  5,  8,   // triangle E-B-F
        2,  1,  6,  0,  3,  4,   // triangle C-G-D
        3,  4,  6,  0,  7,  3,   // triangle D-G-H
        0, 10,  4, 11,  2,  5,   // triangle A-E-C
        2,  5,  4, 11,  6,  6,   // triangle C-E-G
        1,  9,  3,  4,  5,  8,   // triangle B-D-F
        5,  8,  3,  4,  7,  3    // triangle F-D-H
    );
    mesh.getFaceSmoothingGroups().addAll(
        0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5
    );
    return mesh;
}
```

How did they calculate and set up the texture coordinates for the cube?
8
Questions about rendering access in UDK I also asked about this over on the UDK forums, but haven't had much luck getting any responses. Basically, I have some experience with UT3 modding, but I'm just getting started with the UDK, and I have a few questions about the degree of control you have over the rendering. I gather that, despite the presence of several HLSL shader files in the UDK distribution (with the extension .usf), there is no means of implementing your own shaders outside of the material editor. Is this correct? (I know about the Custom node in the material editor, but it's very limited, and unwieldy for all but the simplest logic.) I understand that UE3 employs deferred rendering. I know you can access the color and depth at the current pixel in a post process. However, is there any way to access these or other G-buffer attributes in a more general way? (Normals, position, values at neighboring pixels...) Are render targets supported in a general way? For the sake of argument, would it be possible to set up a camera to render depth from an alternate POV, then do shadow-map-style depth comparisons while rendering the main view? Is it possible to override all the materials on the client (en masse), such as those being used for the terrain or BSP geometry in the current level? (For implementing alternate vision modes and things of that nature.) The tools that come with the UDK are of course very polished, and it's hard to beat free Scaleform and SpeedTree, but I'm starting to think the platform is a terrible fit for anyone who wants to go above and beyond drag-and-drop material editing in terms of graphics. I feel like I have much more control over the rendering in a Source engine mod, for example.
8
SDL zooming upscaling without images becoming blurry? I'm working on a game with 32px tiles, and I have a question about scaling. When I run my game fullscreen, the image becomes blurry. I remember that when NES games were scaled, you saw the pixels as big blocks, but they were still as crisp as when they were small. I'm wondering how I can do this in SDL.
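Blurry scaling usually means the textures are being sampled with linear filtering; with SDL2's renderer you can request nearest-neighbour sampling instead, which keeps the big-blocky-pixels look. A sketch, assuming SDL2 (the 320x240 base resolution is just an example):

```cpp
// With SDL2's renderer, ask for nearest-neighbour sampling so scaled pixels
// stay crisp blocks instead of being smoothed by linear filtering.
#include <SDL.h>   // SDL2

SDL_Renderer* createCrispRenderer(SDL_Window* window)
{
    // Must be set before the textures you want affected are created.
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "nearest");

    SDL_Renderer* renderer =
        SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

    // Render at the game's native resolution and let SDL scale the result
    // up to the window/fullscreen size.
    SDL_RenderSetLogicalSize(renderer, 320, 240);  // example base resolution
    return renderer;
}
```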
8
Why haven't graphics cards been designed for voxel rendering? I stumbled across the following article on Wikipedia regarding voxels that says One such problem cited by Carmack is the lack of graphics cards designed specifically for such rendering requiring them to be software rendered, which still remains an issue with the technology to this day. Does anyone know why this is the case and why voxels are not supported or optimized on a hardware level by graphics cards?
8
What platform were old TV video games developed on? I am very eager to know how the TV video games we all played in our childhood were developed, and on which platform. I know how games are developed for mobile devices, Windows PCs and Macs, but I don't understand how (in those days) Contra, Duck Hunt and all those games were developed. They had impressive graphics and a large number of stages, so how did developers manage to build games within such small size limits and on such low-powered hardware?
8
Unity Editor windows went black I recently bought a new laptop with an Intel i5-5200U (HD Graphics 5500), an AMD Radeon R5 M230 2GB, and 8GB RAM, and here's what happens with my Unity: if I open it, minimize it, and then maximize it again, it renders weird black rectangles in random areas of the editor, like this: But if, instead of minimizing Unity, I just open another window, then when I go back to Unity nothing happens, no weird black squares. So this only happens when I minimize Unity itself and then maximize it; if I maximize another program over Unity and then open Unity again, it does not happen. Can anyone tell me why this happens? I always have to restart Unity. Is this a problem with my video card?
8
What is the difference between "offline" and "real time" rendering? I have a rough idea: real time is approximated, with little or no global illumination. But how would you otherwise explain why offline rendering takes so much longer? You hear things like "number of passes", et cetera... Can you explain the difference in simple terms?
8
top down game checking, drawing enemy's line of sight area with obstacles Examples of what I'm going to need: I'm using cocos2d to draw a CCTMXTiledMap, and on those tiles I'll have to draw the LOS cone. How would I test whether the player is within that cone, taking obstacles into account? How would I draw the line-of-sight area on the map tiles, again taking obstacles into account, like in the examples above? This is a re-post of a question I've put on Stack Overflow.
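For the first half (testing whether the player is inside the cone), a distance check plus a dot-product angle check, followed by a ray walk over the tile grid for occlusion, is the standard approach. A sketch in C++ (the hasLineOfSight stub stands in for a Bresenham/DDA walk over the tile map's blocking tiles; all names are mine):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Placeholder: walk the tile grid from 'from' to 'to' (Bresenham / DDA)
// and return false as soon as a blocking tile is crossed.
bool hasLineOfSight(Vec2 from, Vec2 to);

// facingDir must be normalized; halfAngleRad is half the cone's opening angle.
bool canSeePlayer(Vec2 enemyPos, Vec2 facingDir, float halfAngleRad,
                  float viewDistance, Vec2 playerPos)
{
    Vec2 toPlayer = { playerPos.x - enemyPos.x, playerPos.y - enemyPos.y };
    float dist = std::sqrt(dot(toPlayer, toPlayer));
    if (dist > viewDistance) return false;
    if (dist < 1e-6f) return true;               // standing on the enemy

    // Inside the cone iff the angle to the player is within the half angle;
    // compare cosines via the dot product (no atan2 needed).
    float cosAngle = dot(facingDir, toPlayer) / dist;
    if (cosAngle < std::cos(halfAngleRad)) return false;

    return hasLineOfSight(enemyPos, playerPos);  // obstacle check
}
```

For drawing the visible area, one common approach is to cast a fan of such rays across the cone's angular range, shorten each ray at its first blocking tile, and fill the resulting triangle fan.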
8
How do I generate a random curve for landscape (like Worms)? Possible duplicates: How do I generate terrain like that of Scorched Earth? How can I generate Worms style terrain? I need to build a random curve for the landscape of a 2D game, drawn onto a bitmap (like in Worms, viewed from the side). My teacher said I should do it using terrain generation through recursion (I work in Delphi 7). I understand the main principle, but I don't know how to express it as code. All measurements are relative to the screen resolution.
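Since the assignment calls for recursion, the classic fit is midpoint displacement: recursively subdivide the segment spanning the screen, offsetting each midpoint by a random amount that shrinks with depth. A sketch in C++ rather than Delphi (names are mine; the recursion translates to Delphi directly):

```cpp
// Midpoint displacement: subdivide a segment, offsetting each midpoint by a
// random amount that halves with every recursion level.
#include <cstdlib>
#include <vector>

// heights[i] is the terrain height at screen column i.
void midpointDisplace(std::vector<float>& heights, int left, int right,
                      float roughness)
{
    if (right - left < 2) return;            // nothing left to subdivide
    int mid = (left + right) / 2;

    float average = (heights[left] + heights[right]) / 2.0f;
    float offset  = ((std::rand() / (float)RAND_MAX) - 0.5f) * roughness;
    heights[mid]  = average + offset;

    // Halve the displacement each level so the curve stays coherent.
    midpointDisplace(heights, left, mid, roughness / 2.0f);
    midpointDisplace(heights, mid, right, roughness / 2.0f);
}
```

Seed heights[0] and heights[heights.size() - 1] with the desired edge heights, call midpointDisplace once over the whole range with a roughness proportional to the screen height, then draw each column of the bitmap from heights[i] down to the bottom edge.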
8
Algorithms for Physically based shading models For real-time interactive graphics, it now seems practical to use physically based shading models. I want to write some shaders to incorporate into an OpenGL-based game engine that I'm writing, and I wasn't sure which algorithms to read up on. Which algorithms and models seem practical and useful for real-time graphics as of 2014?
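For reference, the combination most real-time engines had converged on around 2014 (see e.g. the SIGGRAPH "Physically Based Shading in Theory and Practice" course notes) is a Lambertian diffuse term plus the Cook-Torrance microfacet specular BRDF, typically with the GGX (Trowbridge-Reitz) normal distribution D, a Smith-style geometry term G, and Schlick's Fresnel approximation F:

$$ f_{\text{spec}}(l, v) = \frac{D(h)\, F(v, h)\, G(l, v)}{4\,(n \cdot l)(n \cdot v)}, \qquad D_{\text{GGX}}(h) = \frac{\alpha^2}{\pi \left( (n \cdot h)^2 (\alpha^2 - 1) + 1 \right)^2}, \qquad F(v, h) \approx F_0 + (1 - F_0)(1 - v \cdot h)^5 $$

where h is the half-vector between the light direction l and view direction v, and alpha is a roughness parameter (often the artist-facing roughness squared).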
8
How can I create a .ico file with all the various resolution sizes within one container? I was looking at this GOG.ico in ffmpeg, and was surprised to see all the streams for it, indicating it was holding multiple images in the .ico container. Or does .ico have some kind of self-creation function where it generates the resolutions from a single image on demand? Or maybe I'm overthinking it and it's just a container with 7 different image variants stored inside? Regardless, I was curious what the method of creation was for this many images in one .ico, and how to replicate it. I'm already familiar with how to create a single-image .ico using GIMP and/or ffmpeg (GIMP is probably using ffmpeg itself). If it's 7 images, those can be mapped to various streams of the .ico container using a tool like ffmpeg. What's a more automated workflow people use to create these kinds of elaborate .ico files, though?
8
Basic Collision Detection Math First, a bit of background: I have yet to read a book on game development, though I do plan on picking one up sometime. A long time ago I made a simple Pong game, followed by a simple Arkanoid-type game. In both games I handled collision detection by comparing the x, y, and z of the ball to the paddle. I did this calculation for each side of the ball and each side of the paddle. It was the only way I could think of at the time. It was something along the lines of if (thing.x > otherthing.x) if (thing.y > otherthing.y) and so on. Is this the normal way of doing collision detection? Did I overcomplicate it, or is this the basic way it's done?
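For reference, the standard formulation of this idea for rectangles is the axis-aligned bounding box (AABB) test: two boxes intersect exactly when their intervals overlap on every axis, which collapses all the per-side comparisons into one condition:

```cpp
// Axis-aligned bounding box overlap: boxes intersect iff they overlap on
// every axis. This is the per-side comparison from above, collapsed.
struct AABB { float x, y, w, h; };   // top-left corner plus size

bool intersects(const AABB& a, const AABB& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&   // overlap on x
           a.y < b.y + b.h && b.y < a.y + a.h;     // overlap on y
}
```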
8
Quick sprite contouring solutions? I want to be able to quickly outline sprites (mainly with black, but other colours too) without doing it manually, as pictured. What would be a good programmatic solution for this? Edit: I don't want it to happen at runtime; I just want to end up with the second version, and I can scrap the first.
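If a scriptable batch step is acceptable, the outline itself is simple to compute: every transparent pixel that touches an opaque pixel becomes the outline colour. A sketch over a raw RGBA pixel buffer (C++, assuming 8-bit channels with alpha in the top byte; adapt to however your image tool exposes pixels):

```cpp
#include <cstdint>
#include <vector>

// Paints a 1px contour: any transparent pixel with an opaque 4-neighbour
// is set to outlineColor. Reads from a copy so the outline doesn't grow.
void outlineSprite(std::vector<uint32_t>& rgba, int w, int h,
                   uint32_t outlineColor)
{
    std::vector<uint32_t> src = rgba;
    auto alpha = [&](int x, int y) -> uint8_t {
        if (x < 0 || y < 0 || x >= w || y >= h) return 0;
        return src[y * w + x] >> 24;   // assumes alpha in the top byte
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (alpha(x, y) != 0) continue;     // only fill transparent pixels
            if (alpha(x - 1, y) || alpha(x + 1, y) ||
                alpha(x, y - 1) || alpha(x, y + 1))
                rgba[y * w + x] = outlineColor;
        }
}
```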
8
rendering a reflection on a texture I want to render a reflection on a planar surface, but the reflecting surface has a texture mapped onto it. Would the usual technique of using the stencil buffer and then blending the reflected image with the reflecting surface work in this case? If not, could someone suggest how to render a reflection on a surface that has a texture mapped onto it? Thanks
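As far as I know, the texture on the plane doesn't change the stencil technique; it only affects the final blend, where the textured plane is drawn over the mirrored scene with an alpha that controls reflectivity. A sketch using legacy OpenGL calls (drawPlane and drawScene are placeholders; the stencil buffer is assumed cleared to 0 and the plane assumed to be y = 0):

```cpp
#include <GL/gl.h>

void drawPlane();   // placeholder: the textured floor quad
void drawScene();   // placeholder: everything that should be reflected

void renderReflectiveFloor()
{
    // 1. Mark the floor's pixels in the stencil buffer (no color/depth).
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    drawPlane();

    // 2. Draw the scene mirrored about the plane, only where stencil == 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glPushMatrix();
    glScalef(1.0f, -1.0f, 1.0f);   // mirroring flips winding order...
    glFrontFace(GL_CW);            // ...so flip the front-face convention
    drawScene();
    glFrontFace(GL_CCW);
    glPopMatrix();
    glDisable(GL_STENCIL_TEST);

    // 3. Blend the textured floor on top; its alpha sets the reflectivity.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawPlane();
    glDisable(GL_BLEND);
}
```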
8
How do you make use of all texture units on today's graphics cards? I saw a review of the GeForce GTX 460 graphics card. It has 56 texture units. I'm not that knowledgeable about graphics effects, but the ones I know of use around 3 or 4 texture units. In this card's case, that would seem to leave a lot of texture units idle. How do graphics cards with so many texture units get used?
8
text wrapping on a texture applied to a 3D model How would I create and implement a texture of text that wraps around a 3D model? The texture will just be white, and you should be able to add text to it. I need to create this so that when the texture is wrapped around a model of a person, the person appears composed of lines of text, and the lines should not be distorted. In my head, the way I would do it is to put the flattened texture in a file and draw text onto it. Is this the best way? Are there any issues I'm unaware of that I might come across?
8
Should I make my game's graphics lower quality so everyone can play it? The game I'm working on targets users with lower-end graphics hardware. Should I make my game's graphics lower quality so that everyone can play it, or make them high quality so that people must go and buy a new GPU to play it?
8
Precomputing Visibility Having noticed that UDK (Unreal) and Unity 3 include similar precomputed visibility solutions that, unlike Quake's, are not dependent on level geometry, I've been trying to figure out how the calculation is done. The original Quake system is well documented: you divide the world into convex volumes that limit both the camera and the geometry. Each volume has a list of all the other volumes that are visible from it. Visibility would be computed by firing rays at some random distribution of points in the target volume and seeing if any hit. And because the position of the camera in the source volume could have an effect, those thousands of rays would have to be fired from multiple places in the source cell. So what I'm wondering is if there's been any fundamental change to this basic scheme in the intervening 15 or so years? I can see how to adapt it to a UDK/Unity scheme that has regular source volumes and deals mostly with arbitrary meshes as the targets, but is there a better way than stochastic ray testing?
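For concreteness, the stochastic scheme described above is simple to sketch (all types and names here are hypothetical placeholders); the interesting engineering is in the ray casting and in bounding the number of samples:

```cpp
struct Vec3 { float x, y, z; };

// Placeholder: a convex cell that can produce random interior points.
struct Cell { Vec3 randomInteriorPoint() const; };

// Placeholder: true if the segment from a to b hits no occluder geometry.
bool rayUnoccluded(Vec3 a, Vec3 b);

// Fire rays between random points in both cells. One unoccluded ray proves
// visibility; "not visible" is only probabilistic and may be a false
// negative if the sampling missed a narrow sightline.
bool cellsMaybeVisible(const Cell& from, const Cell& to, int samples = 1024)
{
    for (int i = 0; i < samples; ++i)
        if (rayUnoccluded(from.randomInteriorPoint(),
                          to.randomInteriorPoint()))
            return true;
    return false;
}
```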
8
Which camera angle is used for these sprites? I am trying to replicate these sprites, but I am not managing to make sense of how it is done. The character is facing forward (<), but you can see the front of its body instead of just the side. Why? Is the camera positioned diagonally? Or is the body bent, as in a fighter's stance? Or a combination of both? Exactly which positioning/perspective view should I use to replicate it correctly?
8
Does projection take place before clipping in the rendering pipeline? At first I thought clipping happens before projection, since new vertices may be added and the output of projection is in NDC, which is 2D. However, after a lot of googling, I found that some articles, presentations and images indicate that projection takes place before clipping, which contradicts other articles I've read and my initial thoughts. Could anyone tell me which one is correct and why?
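For what it's worth, the usual resolution of this confusion (assuming OpenGL-style conventions) is that "projection" covers two steps: the projection matrix is applied first, clipping is then done in homogeneous clip space, where new vertices can be created safely, and only the final perspective divide produces NDC (which is 3D, not 2D, since depth is kept). The clip-space test and the divide are:

$$ -w_c \le x_c \le w_c, \qquad -w_c \le y_c \le w_c, \qquad -w_c \le z_c \le w_c $$

$$ (x_n, y_n, z_n) = \left( \frac{x_c}{w_c}, \frac{y_c}{w_c}, \frac{z_c}{w_c} \right) $$

Clipping before the divide also avoids the division-by-zero and sign-flip problems of points at or behind the eye plane.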
8
Directional light view frustum stabilisation Calculating the directional light's view frustum based on scene objects (a bounding frustum) results in noticeable shadow jittering on camera movement, in both the CSM and non-CSM rendering code paths. What techniques can reduce or eliminate these jittering artifacts?
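One widely used technique (described, for example, in Microsoft's "Common Techniques to Improve Shadow Depth Maps" article) is to stabilize the light's orthographic projection: keep its size constant, for instance by fitting it to a bounding sphere of the camera sub-frustum so rotation doesn't resize it, and snap its origin to whole shadow-map texels so the rasterization grid doesn't slide as the camera moves. A sketch (names are mine):

```cpp
// Snap an orthographic cascade's bounds to shadow-map texel boundaries.
// The extent stays fixed; only the origin moves, in whole-texel steps.
#include <cmath>

struct OrthoBounds { float minX, minY, maxX, maxY; };

OrthoBounds stabilize(OrthoBounds b, int shadowMapResolution)
{
    // World-space size of one shadow-map texel for this cascade.
    float worldUnitsPerTexel = (b.maxX - b.minX) / shadowMapResolution;

    // Snap the min corner to a texel boundary; keep the extent unchanged.
    float w = b.maxX - b.minX, h = b.maxY - b.minY;
    b.minX = std::floor(b.minX / worldUnitsPerTexel) * worldUnitsPerTexel;
    b.minY = std::floor(b.minY / worldUnitsPerTexel) * worldUnitsPerTexel;
    b.maxX = b.minX + w;
    b.maxY = b.minY + h;
    return b;
}
```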