_id: int64 (0 to 49)
text: string (lengths 71 to 4.19k)
24
FPS games: don't they have an unrealistic one-eyed view? What are the causes? I have played many FPS games and noticed that the camera perspective is similar to a 'one-eyed view' of the surroundings. It does not feel like binocular vision. Is it because of the single flat view of the screen? Or is it a problem that can be fixed by some research? PS: If you want to see what I mean, try to roam around with only one eye open.
24
What am I missing for this flycam to work correctly? Through reverse engineering, I've found an instruction to inject on which allows me to obtain the XYZ coordinates of the camera, as well as pitch and yaw. Thus, I have the following: camX (plane) is a float ranging from negative to positive; camY (plane) is a float ranging from negative to positive; camZ (up/down) is a float ranging from negative to positive; camH (horizontal yaw) is a float in degrees from -180 to 180; camV (vertical pitch) is a float in degrees clamped to -89 to 89. I've written a script in Lua which allows me to bind directions (forward, backward, left, right, up, and down) to keys (W, A, S, D, Q, and E); however, those simply add/subtract to/from the XYZ values along with a speed modifier, meaning if I'm facing forward and moving forward, everything is fine. Turning 180 degrees with the mouse, though, effectively means back is forward, left is right, etc. What I'm looking to achieve is having those directional keys apply relative to wherever I'm aiming the mouse pointer, so forward is always where I'm facing, etc. I've been piecing together the need for sin/cos in calculations related to this; however, I'm apparently in over my head with understanding everything I'm reading and watching. I've found a number of calculations for this based on degrees, radians, converting degrees to radians, etc., but I'm just completely lost, as nothing I've tried has panned out (even though I think I understand what I'm reading). I've ascertained that, where applicable, the respective games have sin/cos values stored somewhere in memory which I could use in lieu of calculating those values myself, but I'm honestly not sure what those values look like to seek out in the first place... or exactly what I need to do to make the calculations myself. They're probably staring me right in the face in nearby memory or on the stack as I trace through code, but I simply don't know. Are there multiple ways to calculate the appropriate angles via sin/cos? Can I use some combination of the 5 values I listed above, along with a speed modifier and Lua's math methods (math.rad, math.sin, math.cos), to get this flycam to work properly? Thank you for any guidance you can provide!
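A minimal sketch of the usual sin/cos construction, assuming yaw 0 looks along +X, yaw increases counter-clockwise, and Z is the up axis (matching the camZ description above); the actual game may use a different convention, so axes or signs may need swapping. It is written in C# for readability, but each call maps directly onto Lua's math.rad, math.sin and math.cos, and all names are illustrative.

using System;

static class FlyCamMath
{
    // Convert yaw/pitch (degrees) into a forward and a right vector, then move along them.
    public static void Move(ref float camX, ref float camY, ref float camZ,
                            float yawDeg, float pitchDeg, float speed,
                            float moveForward, float moveRight, float moveUp)
    {
        double yaw = yawDeg * Math.PI / 180.0;     // math.rad(camH) in Lua
        double pitch = pitchDeg * Math.PI / 180.0; // math.rad(camV) in Lua

        // Forward: a point on the unit sphere at (yaw, pitch).
        double fx = Math.Cos(pitch) * Math.Cos(yaw);
        double fy = Math.Cos(pitch) * Math.Sin(yaw);
        double fz = Math.Sin(pitch);

        // Right: the ground-plane forward direction rotated 90 degrees (pitch ignored).
        double rx = Math.Sin(yaw);
        double ry = -Math.Cos(yaw);

        // moveForward/moveRight/moveUp are +1, 0 or -1 depending on which keys are held.
        camX += (float)(fx * moveForward + rx * moveRight) * speed;
        camY += (float)(fy * moveForward + ry * moveRight) * speed;
        camZ += (float)(fz * moveForward) * speed + moveUp * speed;
    }
}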
24
How the game engine handles camera view in DirectX I am quite confused about how the game engine handles the camera view in DirectX. I know all the matrix stuff, but where the projection matrix finally ends up seems rarely mentioned. I looked at the sample project by Microsoft (https://msdn.microsoft.com/en-us/windows/uwp/gaming/tutorial-create-your-first-metro-style-directx-game). In the GameRenderer::CreateWindowSizeDependentResources() method, the projection process is implemented as: XMFLOAT4X4 orientation = m_deviceResources->GetOrientationTransform3D(); ConstantBufferChangeOnResize changesOnResize; XMStoreFloat4x4(&changesOnResize.projection, XMMatrixMultiply(XMMatrixTranspose(m_game->GameCamera()->Projection()), XMMatrixTranspose(XMLoadFloat4x4(&orientation)))); d3dContext->UpdateSubresource(m_constantBufferChangeOnResize.Get(), 0, nullptr, &changesOnResize, 0, 0); However, I can't figure out what the UpdateSubresource method has to do with projection (yet such a projection in the UpdateSubresource call exists in some other samples I found). And wouldn't it be absurd to change the entire buffer every frame for projection? Isn't the data in the buffer in world/view coordinates? The vertex shader also seems to take on the task of projection: PixelShaderInput output = (PixelShaderInput)0; output.position = mul(mul(mul(input.position, world), view), projection); I do think this one is more plausible for the task. So the questions are: Which one exactly is the process of projection, and how does it come to appear in two places? If it happens in UpdateSubresource, what should I do with a dynamic buffer? If it happens in the shader, where do the world, view, and projection matrices come from? How should I use this with SharpDX?
24
How to avoid gimbal lock I am trying to write code which rotates an object. I implemented it as: rotation about the X axis is given by the amount of change in the mouse's y coordinate, and rotation about the Y axis is given by the amount of change in the mouse's x coordinate. This method is simple and works fine until one of the axes coincides with the Z axis; in short, gimbal lock occurs. How can I utilize the rotation around the Z axis to avoid gimbal lock?
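A common fix, sketched below under the assumption that a quaternion type is available (System.Numerics here; names like ApplyMouseDelta are illustrative): store the whole orientation as a single quaternion and apply small incremental yaw/pitch rotations to it each frame, instead of accumulating separate X/Y Euler angles. Depending on the library's convention, the multiplication order may need to be flipped.

using System.Numerics;

static class OrbitRotation
{
    // Accumulate orientation as a quaternion instead of separate Euler angles.
    // Each frame, apply small incremental rotations driven by the mouse deltas;
    // because the full orientation lives in one quaternion, no axis can
    // "collapse" onto another the way chained Euler angles can.
    public static Quaternion ApplyMouseDelta(Quaternion current, float mouseDx, float mouseDy, float sensitivity)
    {
        // Yaw around the world up axis, pitch around the object's current right axis.
        Quaternion yaw = Quaternion.CreateFromAxisAngle(Vector3.UnitY, mouseDx * sensitivity);
        Vector3 right = Vector3.Transform(Vector3.UnitX, current);
        Quaternion pitch = Quaternion.CreateFromAxisAngle(right, mouseDy * sensitivity);

        // Compose and renormalize to fight floating-point drift
        // (swap the order if rotations come out mirrored in your convention).
        return Quaternion.Normalize(yaw * pitch * current);
    }
}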
24
XNA Placing a "field of view" rectangle on a minimap I'm building a top view 3ed person shooter. I have a HUD and in the HUD I have a minimap. I'm trying to draw on this minimap a rectangle which shows me my "visual field", meaning i see on the minimap which part of the map I'm currently seeing. I've nearly achieved this, except for a scaling problem. Here's my code float distX ScreenWidth Tools.MapToCamera(Camera.pos.X, Camera.pos.Y).X float distY ScreenHeight Tools.MapToCamera(Camera.pos.X, Camera.pos.Y).Y Vector2 topLeft new Vector2((calcX((Camera.pos.X distX) (Bg.Width Tools.minimap.Width) Tools.minimap.X)), (calcY((Camera.pos.Y distY) (Bg.Height Tools.minimap.Height) Tools.minimap.Y))) Vector2 rightBottom new Vector2 ((calcX((Camera.pos.X distX) (Bg.Width Tools.minimap.Width) Tools.minimap.X)), (calcY((Camera.pos.Y distY) (Bg.Height Tools.minimap.Height) Tools.minimap.Y))) Tools.DrawLine(topLeft, new Vector2 (rightBottom.X, topLeft.Y), Color.WhiteSmoke) Tools.DrawLine(topLeft, new Vector2 (topLeft.X, rightBottom.Y), Color.WhiteSmoke) Tools.DrawLine(new Vector2 (topLeft.X, rightBottom.Y), rightBottom, Color.WhiteSmoke) Tools.DrawLine(new Vector2 (rightBottom.X, topLeft.Y), rightBottom, Color.WhiteSmoke) ScreenWidth ScreenHeight are the current PreferredBackBuffer sizes. Bg means BackGround, contains height width, etc. of the map. calcX(float num) multiplies num by the current scale of the HUD. HUD scale is measured by hudScale ScreenWidth hudWidth calcY float calcY(float y) return y hudScale (ScreenHeight (hudHeight hudScale)) internal static V2 MapToCamera(float X,float Y) return new V2(X (Tools.w Bg.Width), Y (Tools.h Bg.Height)) public static void DrawLine(V2 start, V2 end, Color clr) sb.Draw(pointTex,start, null, clr,(float)Math.Atan2(end.X start.X, start.Y end.Y), new V2(0.5f, 1), new V2(1, (start end).Length()), SpriteEffects.None,0) pointTex new Texture2D(gd, 1, 1) (Color set White) These are all the functions in use. The end result is a rectangular frame around my position on the minimap. It follows me and i'm mostly in it's center. The problem is, this rectangle comes out about twice larger than it should be. Anyone care to help me with this? P.S I don't use any scaling on the map or the other sprites. Scaling is done only on the HUD and it's objects. Although, I use a zoom factor for my Camera class. Thanks in advance.
24
How to set "near" value properly? Usually, the camera's near value is set to sth like 0.1f. However, this (assuming 1.0f 1 metre) makes everything that's closer than 10 cm from the camera invisible. On the other hand, when working with big open terrains too small near value results in this (yes, that's a sandy coast and water P) instead of this (Apparently, when the difference in distance between too objects is too small, you get artifacts. Something to do with precision, I guess, and apparently the precision is dependent on the near camera value.) My question is is it possible to set near value to something like 0.001f and not get the above artifacts?
24
Realistic Camera Screen Shake from Explosion I'd like to shake the camera around a bit during an explosion, and I've tried a few different functions for rocking it around, and nothing seems to really give that 'wow, what a bang!' type feeling that I'm looking for. I've tried some arbitrary relatively high frequency sine wave patterns with a bit of linear attenuation, as well as a square wave type pattern. I've tried moving just one axis, two, and all three (although the dolly effect was barely noticeable in that case). Does anyone know of a good camera shaking pattern?
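One widely used pattern (often described as "trauma"-based shake) is to accumulate a 0–1 trauma value on each explosion, use its square as the shake magnitude, and drive the offsets with smooth noise instead of pure sine waves, so the motion feels violent but not periodic. A rough sketch, with illustrative tuning values and a cheap sine-based stand-in for proper Perlin/simplex noise:

using System;

class TraumaShake
{
    public float Trauma;                    // 0..1; add to this on impacts, e.g. Trauma += 0.6f

    public float MaxOffset = 0.5f;          // positional shake at full trauma
    public float MaxRollDegrees = 8f;       // a little roll sells the "bang" cheaply
    public float Frequency = 25f;           // how fast the shake wobbles
    public float DecayPerSecond = 1.5f;

    float _time;

    // Returns (offsetX, offsetY, rollDegrees) to add on top of the normal camera pose.
    public (float x, float y, float roll) Update(float dt)
    {
        _time += dt;
        Trauma = Math.Max(0f, Trauma - DecayPerSecond * dt);
        float shake = Trauma * Trauma;      // squaring makes big hits feel much bigger than small ones

        float n1 = Noise(_time * Frequency, 0f);
        float n2 = Noise(_time * Frequency, 17.3f);
        float n3 = Noise(_time * Frequency, 41.7f);

        return (MaxOffset * shake * n1, MaxOffset * shake * n2, MaxRollDegrees * shake * n3);
    }

    // Cheap smooth noise stand-in (two detuned sines); real Perlin/simplex noise is better.
    static float Noise(float t, float seed) =>
        (float)(Math.Sin(t + seed) * 0.6 + Math.Sin(t * 2.3 + seed * 1.7) * 0.4);
}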
24
Restrict camera movements I'm making a game with a target (orbit) camera. I want to limit the camera's movement when it is inside a room, so that the camera doesn't go through walls while moving around the target and along the ground. What are common approaches? Any good links with tutorials? I use Unity3D; nevertheless, I'm fine with a general solution to the problem.
24
Calculations for a camera system I am building an interactive application in Starling for which a camera system is required. Since there is no existing camera system in flash and the addons for starling do not meet our requirements, I am writing my own. Basically, it moves and scales all assets depending on the position of the camera. However, I am having trouble with zooming towards a specific point on the screen. Every object has a pivotpoint in the topleft corner so it scales relative to the topleft corner. For all kinds of reasons, I cannot change the pivotpoint of the objects. To compensate for this, the camera needs to move towards the point we want to zoom towards on the x y plane, but I cannot figure out how much it should move. Are there any known formulas for this? How do I calculate how the camera should move? EDIT I can actually move the pivotpoint to the center of the assets if that makes it much easier. EDIT This method is used on each sprite to update its position var zPos Number Number(this.z Game.cameraZ) var scale Number Game.focalLength (Game.focalLength zPos) var correctedScale Number scale (1 (Game.focalLength (Game.focalLength this.z))) this.scaleX this.scaleY correctedScale this.x Game.cameraX scale this.defaultX scale this.y Game.cameraY scale this.defaultY scale
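For the zoom-towards-a-point part: if the model is screenPos = (worldPos - cameraTopLeft) * zoom (which seems to match a top-left pivot), the camera position that keeps the point under the cursor fixed can be solved in closed form. A sketch, with illustrative names that are not from the Starling code above:

using System.Numerics;

static class ZoomMath
{
    // Keep the world point under the cursor fixed on screen while changing zoom.
    // Solve (p - c0)*oldZoom == (p - c1)*newZoom for the new camera position c1.
    public static Vector2 CameraForZoom(Vector2 cameraTopLeft, float oldZoom, float newZoom,
                                        Vector2 worldPointUnderCursor)
    {
        return worldPointUnderCursor - (worldPointUnderCursor - cameraTopLeft) * (oldZoom / newZoom);
    }
}

// Usage sketch, on each zoom step:
//   var worldUnderCursor = cameraTopLeft + screenCursor / oldZoom;
//   cameraTopLeft = ZoomMath.CameraForZoom(cameraTopLeft, oldZoom, newZoom, worldUnderCursor);
//   zoom = newZoom;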
24
Static Camera on Endless Runner I am developing a mobile game in ue4. For this project, I need to have a "static" camera so that the player can move within the frame not affecting camera location and rotation. But as this is an endless scroller I also need to have the camera move at a constant pace limiting the player's area of movement. Does anyone know how to do this?
24
Looking for online resources to understand game camera properties What are some good online resources to understand game camera properties, such as fov, aspect ratio and more?
24
Camera view projection issue I made a simple OpenGL program but I can figure out why the camera is not working, here it's a little fragment of the Camera class public Matrix4f getView() initializes the view matrix return new Matrix4f().lookAt( new Vector3f(0f, 0f, 1f), camera position at 0,0,1 new Vector3f(0f, 0f, 0f), camera target at 0,0,0 new Vector3f(0f, 1f, 0f)) up axis set to "worldUp" (0,1,0) public Matrix4f getProjection() return new Matrix4f().perspective( (float) Math.toRadians(fieldOfView), the fov has a value between 0f and 180f, by default I set it to 90 viewportAspectRatio, the aspect ratio is equal to 1024 960 (screen height screen width)... even if I've not understood what is it... 0.1f, 1000f) I've not really understood what near and far planes are... public Matrix4f getMatrix() with this function I obtain the final camera matrix return getView().mul(getProjection()) And it's how I handle the camera Matrix in GLSL, created using camera.getMatrix() gl Position camera model vec4(position, 1.0) Without the camera all is fine here's the program running using gl Position model vec4(position, 1.0) (Yeah, it's a cube) But using the camera in the way I showed you before, increasing the FOV, I get this Could anyone look at my code and tell me where I'm wrong? I would be really happy... D
24
Why does this work? camera rotation from screen space I do not understand the math behind the linked function or why it works. I would like to know this. https sidvind.com wiki Yaw, pitch, roll camera void Camera motion(int x,int y) theta x 200 Adjust this to control the sensitivity phi y 200 target.x cos(theta) sin(phi) target.y cos(phi) target.z sin(theta) sin(phi)
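What the linked snippet is doing is the spherical-coordinates parameterization of a unit sphere: phi is the polar angle measured from the up axis, theta is the azimuth in the XZ plane, and dividing the mouse deltas by a constant just converts pixels to radians. A small sketch of the same math, with a sanity check:

using System;

static class SphericalLook
{
    // The camera target is a point on the unit sphere around the camera position.
    public static (double x, double y, double z) TargetFromAngles(double theta, double phi)
    {
        double x = Math.Cos(theta) * Math.Sin(phi);
        double y = Math.Cos(phi);                  // phi = 0 looks straight up, phi = pi straight down
        double z = Math.Sin(theta) * Math.Sin(phi);
        return (x, y, z);                          // always unit length: x^2 + y^2 + z^2 = 1
    }
}

// Sanity check: cos^2(theta)sin^2(phi) + cos^2(phi) + sin^2(theta)sin^2(phi)
//             = sin^2(phi) + cos^2(phi) = 1,
// so moving the mouse only slides the look-at target around a unit sphere centered
// on the camera, which is exactly what a yaw/pitch camera is supposed to do.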
24
What game systems exist which use camera input? My group and I are in the middle of a semester project where we are researching which game systems use a camera as input or as an interactive medium. We would like some help listing game systems which use camera input, as it seems hard to find other examples. Currently we know that webcam browser games use camera input (Newgrounds webcam games), as does the Xbox Kinect. I know this question seems rather vague, though I still hope some people are capable of helping.
24
How do I properly implement zooming in my game? I'm trying to implement a zoom feature but I have a problem. I am zooming in and out a camera with a pinch gesture, I update the camera each time in the render, but my sprites keep their original position and don't change with the zoom in or zoom out. The Libraries are from libgdx. What am I missing? private void zoomIn() ((OrthographicCamera)this.stage.getCamera()).zoom .01 public boolean pinch(Vector2 arg0, Vector2 arg1, Vector2 arg2, Vector2 arg3) TODO Auto generated method stub zoomIn() return false public void render(float arg0) this.gl.glClear(GL10.GL DEPTH BUFFER BIT GL10.GL COLOR BUFFER BIT) ((OrthographicCamera)this.stage.getCamera()).update() this.stage.draw() public boolean touchDown(int arg0, int arg1, int arg2) this.stage.toStageCoordinates(arg0, arg1, point) Actor actor this.stage.hit(point.x, point.y) if(actor instanceof Group) ((LevelSelect)((Group) actor).getActors().get(0)).touched() return true Zoom In Zoom Out
24
How to prevent showing outside of the game map (Cocos2d-x) (scrolling game) I'm trying to make a tower defense game where you can zoom in/out and scroll over my world map. How do I scroll over the game, and how do I restrict it so it doesn't show anything outside of my map (the black area)? Below, I scroll over the map by using CCCamera, but I don't know how to restrict it. CCPoint tap = touch->getLocation(); CCPoint prev_tap = touch->getPreviousLocation(); CCPoint sub_point = tap - prev_tap; float xNewPos, yNewPos; float xEyePos, yEyePos, zEyePos; float cameraPosX, cameraPosY, cameraPosZ; // First we get the current camera position. GameLayer->getCamera()->getCenterXYZ(&cameraPosX, &cameraPosY, &cameraPosZ); GameLayer->getCamera()->getEyeXYZ(&xEyePos, &yEyePos, &zEyePos); // Calculate the new position xNewPos = cameraPosX - sub_point.x; yNewPos = cameraPosY - sub_point.y; GameLayer->getCamera()->setCenterXYZ(xNewPos, yNewPos, cameraPosZ); GameLayer->getCamera()->setEyeXYZ(xNewPos, yNewPos, zEyePos); And for zooming I used code such as: GameLayer->setScale(GameLayer->getScale() + 0.002); // zooming in
24
Unreal Engine custom camera I was looking at game engines and found Unreal Engine. I want to make a game where the player has to create a camera by crafting it, and can customize how many pixels the camera detects and what the camera detects (e.g. sound, light, energy, etc.). How could I do this?
24
Scale aware adaptive camera zooming I would like to implement an interactive zoom on a 3D object with the mouse wheel. A naive approach is simple: you just scale the camera matrix's Z position or modify the FOV. But I would like the zoom to be aware of the scale of the object and adapt its speed accordingly. This is standard behavior of CAD programs and tools like Maya and 3ds Max. In this demo the zoom scaling is performed by exponential growth of the value, but it would still be the same for models with a radius of 10 units and 100000 units. I am sure eventually I will probably roll my own solution, but I am interested to know if there are some industry standard techniques for this kind of interactivity. Also, I am sorry if this is not an appropriate place to ask; I thought game devs should have some experience in this area.
24
Ortbit camera rotation multiplication order with quaternions? So I switched my camera to use quaternions and the first thing I noticed is that things started to 'roll' even though I'm not using any roll values. I looked it up and I found that people suggest that using rotation horizontal rotation vertical solves the issues instead of rotation rotation horizontal vertical which I thought was the more intuitive thing from doing matrix multiplication. The explanations I read as to why one works and the other doesn't were pretty vague and unclear. My question is why does it work this way? what's wrong with 'rotation h v ? Sample code float Horizontal 0 float Vertical 0 if (Keys 'W' Key Held) Camera gt Position Camera gt Forward() CamMovSpeed DeltaTime if (Keys 'S' Key Held) Camera gt Position Camera gt Forward() CamMovSpeed DeltaTime if (Keys 'A' Key Held) Camera gt Position Camera gt Right() CamMovSpeed DeltaTime if (Keys 'D' Key Held) Camera gt Position Camera gt Right() CamMovSpeed DeltaTime if (Keys 'E' Key Held) Camera gt Position vec Up CamMovSpeed DeltaTime if (Keys 'F' Key Held) Camera gt Position vec Up CamMovSpeed DeltaTime mouse int dx Mouse gt x Mouse gt LastX Mouse gt LastX Mouse gt x int dy Mouse gt LastY Mouse gt y Mouse gt LastY Mouse gt y if (Mouse gt MMB Key Held) pan Camera gt Position (Camera gt Up() dy Camera gt Right() dx) SpeedMul 2 DeltaTime if (Mouse gt RMB Key Held) orbit Horizontal dx SpeedMul 10 DeltaTime Vertical dy SpeedMul 10 DeltaTime if (Mouse gt Wheel) zoom Camera gt Position Camera gt Forward() Mouse gt Wheel SpeedMul 10 DeltaTime if (Horizontal ! 0 Vertical ! 0) quat QVertical(vec Right, Vertical) quat QHorizontal(vec Up, Horizontal) Camera gt Rotation QHorizontal Camera gt Rotation QVertical this works Camera gt Rotation Camera gt Rotation QHorizontal QVertical this doesn't Camera gt Rotation.Normalize()
24
Given the size of far and near plane, find other parameters of a frustum I am trying to find the correct parameters for setting up my perspective camera. But I don't know the mathematics involved in that. Can somebody point me to the right direction? My near plane should have width 1920 and height 1080 while my far plane should have width 3840 and height 2160. Aspect ratio is 1.7 Now how do I get the appropriate fovy, near plane distance and the far plane distance so that I get the desired frustum? Edit The world size is 3840 x 2160, and the entire world is divided into four quadrants. The size of a quadrant is 1920 x 1080. The user can zoom out and look at all four quadrants (Eagle eye view) or zoom in and look at one quadrant at a time. The important thing is, the world size should always be twice the size of a quadrant (and the area should be four times the area of a quadrant). When the game is actually being played, I just modify the Z position of the camera to zoom in zoom out. Decrease the Z position to zoom in, increase the Z position to zoom out.
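Since a perspective frustum's cross-section grows linearly with distance, asking for a far plane exactly twice the size of the near plane fixes the ratio far = 2 × near for any FOV; one parameter (for example fovy) is still free to choose, and the distances follow from h(d) = 2·d·tan(fovy/2). A sketch with the numbers from the question (names illustrative):

using System;

static class FrustumFromPlaneSizes
{
    // Plane height at distance d for vertical FOV f is: h(d) = 2 * d * tan(f/2).
    // Wanting h(near) = 1080 and h(far) = 2160 forces far = 2 * near for any FOV,
    // so pick a FOV (or a near distance) and solve for the rest.
    public static (double nearDist, double farDist) DistancesForFov(double fovYDegrees,
                                                                    double nearHeight = 1080,
                                                                    double farHeight = 2160)
    {
        double halfFov = fovYDegrees * Math.PI / 180.0 / 2.0;
        double nearDist = (nearHeight / 2.0) / Math.Tan(halfFov);
        double farDist = (farHeight / 2.0) / Math.Tan(halfFov);   // = 2 * nearDist here
        return (nearDist, farDist);
    }
}

// Example: with fovY = 60 degrees, nearDist = 540 / tan(30 deg) ~= 935.3 and farDist ~= 1870.6.
// The aspect ratio is fixed by the plane sizes themselves: 1920/1080 = 3840/2160 ~= 1.78.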
24
How do I get the camera's local XYZ axis in Direct X 11? I'm currently working on quaternion rotation and I have them working, but it seems to be using the world XYZ axis to rotate around, which is an issue as the camera rotates itself around an axis. Similar to the last image, how do I rotate the camera around its local XYZ axis rather than the world XYZ? Edit The camera now rotates around its local axis, but when doing so, it does not change its values in world space. I can hold a key down and it will rotate around the correct axis, but once I release the key it will snap back to its original position before the rotation began. What must I do to allow the local changes to be applied to world space?
24
Implementing Screen Shake I'm looking to implement screen shake in a 2D top-down game. From an earlier question on this exchange, I learned that noise values could be a good candidate, so I intend to try that. But I wonder: is screen shake more about rotating the 2D top-down camera, or about jittering the position of that camera? Or does it really require both? Additionally, I would like to know if a good screen shake would require motion blur. Something like rendering your world in 4 small micro-steps for each 60fps frame, maybe? Or can you shake the camera at 60fps with normal rendering and still have it look good? The quality I am after is something like shown in this video. It's kind of hard to judge, but it seems that video does translational jitter only?
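A sketch of one reasonable answer to the "rotation or translation" part: do a little of both, driven by smoothed noise and a decaying intensity, rendered at the normal frame rate (sub-stepped motion blur is optional polish, not a requirement). Tuning values and names below are illustrative:

using System;

class Shake2D
{
    // Per-frame 2D shake: a small rotation plus a positional jitter, both driven by
    // smoothed noise and scaled by a decaying intensity.
    public float Intensity;                 // set to 1 on impact, decays toward 0
    public float MaxAngleDeg = 5f;
    public float MaxOffsetPixels = 12f;

    readonly Random _rng = new Random();
    float _angleNoise, _xNoise, _yNoise;

    public (float angleDeg, float dx, float dy) Update(float dt)
    {
        Intensity = Math.Max(0f, Intensity - 2.5f * dt);

        // Low-pass filtered random values behave like cheap smooth noise.
        _angleNoise += (NextSigned() - _angleNoise) * 0.5f;
        _xNoise     += (NextSigned() - _xNoise) * 0.5f;
        _yNoise     += (NextSigned() - _yNoise) * 0.5f;

        float s = Intensity * Intensity;
        return (MaxAngleDeg * s * _angleNoise, MaxOffsetPixels * s * _xNoise, MaxOffsetPixels * s * _yNoise);
    }

    float NextSigned() => (float)(_rng.NextDouble() * 2.0 - 1.0);
}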
24
How do I create a camera? I am trying to create a generic camera class for a game engine which works for different types of cameras (orbital, 6DoF, FPS), but I have no idea how to go about it. I have read about quaternions and matrices, but I do not understand how to implement them. Particularly, it seems you need "Up", "Forward" and "Right" vectors, a quaternion for rotations, and View and Projection matrices. For example, an FPS camera only rotates around the world Y axis and the camera's right axis; the 6DoF camera always rotates around its own axes; and the orbital camera just translates by a set distance and always looks at a fixed target point. The concepts are there; implementing this is not trivial for me. SharpDX seems to already have matrices and quaternions implemented, but I don't know how to use them to create a camera. Can anyone point out what I am missing or what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts.
24
How far back should I move my camera to fit a given GameObject in frame? In Unity with C , I want to calculate the minimum distance that my perspective camera has to be from a given GameObject (a procedurally generated mesh), so that the object is fully framed by the camera and leaving the least space possible around it in the screen. In other words, fully framed means that there is no part of the object outside the view area. To make things easier, my camera does not need to rotate or move it's fully static, with the exception of one axis to zoom in or out in order to frame the target object. So, in fact, we could summarize my problem as a zoom problem (not using FOV to zoom) where I need to frame a procedurally generated mesh whose size varies considerably. Would anyone be kind enough to point me in the right direction with code snippets, suggestions, etc?
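A minimal sketch for Unity, assuming the object is approximated by its bounding sphere and the camera looks straight at its center (a tighter fit would test the eight bounds corners against the frustum); names are illustrative:

using UnityEngine;

public static class CameraFraming
{
    // Minimum distance from the camera to fully frame a mesh, using its bounding sphere.
    public static float DistanceToFit(Camera cam, Renderer target)
    {
        float radius = target.bounds.extents.magnitude;

        // Vertical half-FOV in radians; derive the horizontal one from the aspect ratio.
        float vFov = cam.fieldOfView * Mathf.Deg2Rad * 0.5f;
        float hFov = Mathf.Atan(Mathf.Tan(vFov) * cam.aspect);

        // The sphere must fit inside both the vertical and horizontal wedge of the frustum,
        // so the smaller angle decides the distance; r / sin(angle) keeps the sphere tangent
        // to the frustum planes rather than poking through them.
        float limiting = Mathf.Min(vFov, hFov);
        return radius / Mathf.Sin(limiting);
    }
}

// Usage sketch (zooming along the camera's forward axis):
//   float d = CameraFraming.DistanceToFit(Camera.main, targetRenderer);
//   Camera.main.transform.position = targetRenderer.bounds.center - Camera.main.transform.forward * d;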
24
How can I convert an orthographic camera to a perspective camera? How can I calculate perspective camera parameters from orthographic camera parameters (left, right, top, bottom, near, far)? Specifically, I don't know how to calculate the FOV for a perspective camera from the orthographic camera.
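An orthographic camera has no FOV, so one extra choice is needed: the distance at which the perspective view should match the orthographic one exactly. Given that distance, the vertical FOV follows from the ortho height; a sketch (names illustrative):

using System;

static class OrthoToPerspective
{
    // Pick a distance d at which the perspective frustum should be exactly as tall as the
    // orthographic volume (top - bottom). Everything nearer then appears larger than in
    // ortho and everything farther smaller; with d chosen: fovY = 2 * atan((height/2) / d).
    public static double VerticalFovRadians(double top, double bottom, double matchDistance)
    {
        double halfHeight = (top - bottom) * 0.5;
        return 2.0 * Math.Atan(halfHeight / matchDistance);
    }
}

// Example: an ortho volume 10 units tall, matched at 20 units in front of the camera,
// gives fovY = 2*atan(5/20) ~= 28 degrees. The aspect ratio stays (right-left)/(top-bottom),
// and near/far can usually be kept as they are.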
24
Why doesn't my game start in landscape mode? I want my game to run on all Android devices in LANDSCAPE_FIXED mode, but this mode works only for some screen resolutions. PORTRAIT_FIXED works well on all devices. I'm testing this on an emulator. How can I achieve this? Here's my camera/engine initialization code: public EngineOptions onCreateEngineOptions() { cw = 480; ch = 320; camera = new Camera(0, 0, cw, ch); EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED, new RatioResolutionPolicy(cw, ch), this.camera); engineOptions.setWakeLockOptions(WakeLockOptions.SCREEN_ON); return engineOptions; }
24
How to figure out which tiles are within view, and where to draw them in the grid? I know there are a few questions on here similar to mine, but none of them seem to fit my needs. I'm using PyGame to implement a tile-based game similar to Final Fantasy or Zelda from the early Nintendo and Super Nintendo generation of consoles. The issue I'm running into is that I can't figure out a general way to render and position the tiles within view of the player camera. Once the camera starts moving, finding how much of an image is present in the camera view is very difficult, and positioning it correctly is even more complicated. Any suggestions on how I can figure out a general way to build the grid correctly?
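The core of the answer is usually two divisions: the camera offset divided by the tile size gives the first visible row/column, and the screen size divided by the tile size gives how many tiles to draw; each tile is then drawn at its world position minus the camera offset. A sketch of that index math (in C# here; the same integer arithmetic carries over to Python/PyGame), with illustrative names:

using System;

static class TileCulling
{
    // Which tile indices are visible for a camera whose top-left corner is at
    // (camX, camY) in world pixels, and where on screen each visible tile goes.
    public static void DrawVisibleTiles(int camX, int camY, int screenW, int screenH,
                                        int tileSize, int mapCols, int mapRows,
                                        Action<int, int, int, int> drawTile) // (col, row, screenX, screenY)
    {
        int firstCol = Math.Max(0, camX / tileSize);
        int firstRow = Math.Max(0, camY / tileSize);
        // +1 extra on each side covers the partially visible tiles at the screen edges.
        int lastCol = Math.Min(mapCols - 1, firstCol + screenW / tileSize + 1);
        int lastRow = Math.Min(mapRows - 1, firstRow + screenH / tileSize + 1);

        for (int row = firstRow; row <= lastRow; row++)
            for (int col = firstCol; col <= lastCol; col++)
                // Screen position is just the world position minus the camera offset,
                // so edge tiles simply start a few pixels off-screen.
                drawTile(col, row, col * tileSize - camX, row * tileSize - camY);
    }
}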
24
GLFW Camera rotation system I need to make a camera rotation system similar to the one here http madebyevan.com webgl path tracing To rotate, press the Left mouse button and drag. So far I figured a basic idea of how this could be implemented. Note that although it looks like the scene is rotating in front of the camera but in my case I want to rotate the camera instead. Since the scene contains hundreds of thousands of triangles and I wouldn't wanna apply a matrix operation to each of them. I have represented my camera by a 4x4 matrix. The columns from 1 to 4 are side, up, look at and eye respectively. So here's what I thought so far. 1) Subtract old cursor position from new cursor position. Doing this for X and Y we get a direction vector as to where the mouse is headed. 2) Invert this direction. And translate the camera a little in this direction. So if the mouse is headed right, the camera heads left. This is done by checking the components of the direction vector. if (dir.x lt 0) cam.eye side if(dir.x gt 0) cam.eye side if (dir.y lt 0) cam.eye up if(dir.x gt 0) cam.eye up Note the translations are inverse. If less than zero than add the side vector instead of subtracting it. 3) Now find a vector perpendicular to it. So if the mouse was headed left or right the vector perpendicular to it is the Y axis. I calculate this by taking the cross(vec(0,0,1), direction vector). Note this is not the inverse but the actual direction vector. 4) Rotate the camera a little along this perpendicular axis a little. If you imagine these operations for the X axis. This feels like the camera is moving in some sort of circle around the scene. That is, the camera translates a little, then rotates. Next time it translates by adding the side (which was changed when it first rotated) it moves a little diagonally. In this way it feels like it rotates in a circle. So this works somewhat fine for pure X and Y rotation and upto a certain extent. Fails when i move the mouse diagonally or in Pure X Y rotation when I go beyond a certain point probably 90 degrees. Any advice or suggestions? I am not using OpenGL instead doing this with GLFW and GLM. The renderer is a raytracer pathtracer.
24
Vertical camera rotation has reduced speed range when looking at the character from the left or right I completed a camera tutorial for Unity. I can rotate a camera around my figure around the Y axis (Vector3.up) and I wanted to extend that for a Fixed Free 3rd Person camera (What are the types of 3rd person camera called?) where I can tilt aswell but I am stuck with the wrong rotation. I expect the camera to go up down whenever I use the joystick up down and always go left right when I use the joystick left right. But instead of that I get always left right movement as I wish but up down is only working at the center. The further I go left or right (see images) the less I go up down. This sounds super complicated, please see the images. I want the same behaviour as shown when the camera is behind the object and I want to avoid the behaviour at the right side like in the first image. Bonus would be if I can limit the up and down movement to say 160 so the player can't look top down. The Code looks like this (only RotateCamera() and UpdateCameraPosition() are relevant) using System.Collections using System.Collections.Generic using UnityEngine public class PlayerCamera MonoBehaviour System.Serializable public class InputSettings System.Object public float inputThreshold 0.1f public string VERTICAL AXIS "CameraVertical" public string HORIZONTAL AXIS "CameraHorizontal" public string LOCK TARGET PYRAMID "LockToPyramid" public string LOCK TARGET AREA "LockToArea" SerializeField private Transform target SerializeField private float cameraSpeed 2 SerializeField private Vector3 lookAtPlayerOffset new Vector3(0f, 0.6f) private PlayerController playerController private float cameraHorizontal, cameraVertical private float lockPyramid, lockArea private InputSettings inputSettings private Vector3 cameraPlayerOffset void Start() SetCameraTarget(target) inputSettings new InputSettings() cameraPlayerOffset transform.position playerController.transform.position private void Update() GetInput() private void LateUpdate() RotateCamera() UpdateCameraPosition() private void RotateCamera() Quaternion cameraRotationXDelta Quaternion.AngleAxis(cameraHorizontal cameraSpeed, Vector3.up) Quaternion cameraRotationYDelta Quaternion.AngleAxis(cameraVertical cameraSpeed, Vector3.right) this does not work how I wish it would after rotating around a bit cameraPlayerOffset cameraRotationYDelta cameraRotationXDelta cameraPlayerOffset private void UpdateCameraPosition() prevent camera from clipping through floor Vector3 newPos playerController.transform.position cameraPlayerOffset newPos.y newPos.y lt 0.2f ? newPos.y 0.2f newPos.y transform.position newPos transform.LookAt(target.transform.position lookAtPlayerOffset) private void GetInput() cameraHorizontal Input.GetAxis(inputSettings.HORIZONTAL AXIS) interpolated cameraVertical Input.GetAxis(inputSettings.VERTICAL AXIS) interpolated lockPyramid Input.GetAxis(inputSettings.LOCK TARGET PYRAMID) interpolated lockArea Input.GetAxis(inputSettings.LOCK TARGET AREA) interpolated private void SetCameraTarget(Transform targetTransform) target targetTransform if (target ! null) if (target.GetComponent lt PlayerController gt ()) playerController target.GetComponent lt PlayerController gt () else Debug.LogError("The camera's target needs a player character controller.") else Debug.LogError("Your camera needs a target")
24
Camera (pole cam) implementation problem In my project I have simple scene graph to render whole scene and Bullet physics SDK to provide physics simulation. Each rendered object is represented as scene node. Camera always has target and located behind this target. Target can be any scene node. First, I want to describe my rendering pipeline. 1)When time to render whole scene, we calculate view matrix from target's world matrix. We take offset vector in order to locate camera behind targeted scene node and transform it to scene node world's coordinate. Then add position of target with transformed offset vector. Finally, get inverse matrix of scene node. This method always is called before rendering whole scene. HRESULT CameraNode SetViewTransform(Scene pScene) If there is a target, make sure the camera is rigidly attached right behind the target if(m pTarget) Mat4x4 mat m pTarget gt VGet() gt ToWorld() Vec4 at m CamOffsetVector Vec4 atWorld mat.Xform(at) Vec3 pos mat.GetPosition() Vec3(at) mat.SetPosition(pos) VSetTransform( amp mat) Set normal matrix and calculate inverse matrix m View VGet() gt FromWorld() Get inversed matrix pScene gt GetRenderer() gt VSetViewTransform( amp m View) return S OK 2)Then, when time to render particular scene node, we calculate projection matrix and send it to the vertex shader. Mat4x4 CameraNode GetWorldViewProjection(Scene pScene) Mat4x4 world pScene gt GetTopMatrix() Mat4x4 view VGet() gt FromWorld() Mat4x4 worldView world view return m Projection worldView I have next problem, when I calculate view matrix from target's world matrix and locate target on coordinate ( x 0 y 10 z 1) it started to fall due to physics and jerk twitch. 1 When I set view camera matrix only offset position. Scene node falls without jerking twitching. 2 How I can fix this jerking twitching when camera is following the scene node ? I suppose that it is problem of matrices multiplication, when Bullet SDK sets new coordinates to scene node. But I have no idea how it can be solved.
24
Get the up vector of the camera in ARKit I'm trying to get the four vectors that make up the boundaries of the frustum in ARKit, and the solution I came up with is as follows Find the field of view angles of the camera Then find the direction and up vectors of the camera Using these information, find the four vectors using cross products and rotations This may be a sloppy way of doing it, however it is the best one I got so far. I am able to get the FOV angles and the direction vector from the ARCamera.intrinsics and ARCamera.transform properties. However, I don't know how to get the up vector of the camera at this point. Below is the piece of code I use to find the FOV angles and the direction vector func session( session ARSession, didUpdate frame ARFrame) if xFovDegrees nil yFovDegrees nil let imageResolution frame.camera.imageResolution let intrinsics frame.camera.intrinsics xFovDegrees 2 atan(Float(imageResolution.width) (2 intrinsics 0,0 )) 180 Float.pi yFovDegrees 2 atan(Float(imageResolution.height) (2 intrinsics 1,1 )) 180 Float.pi let cameraTransform SCNMatrix4(frame.camera.transform) let cameraDirection SCNVector3( 1 cameraTransform.m31, 1 cameraTransform.m32, 1 cameraTransform.m33) I am also open to suggestions for ways to find the the four vectors I'm trying to get.
24
How are scripted camera usually managed? I noticed that many open games (think platformers, adventure, action...) have very complex cameras that appear to mix both automatic placement most of the time, and sort of pre placed camera locations. How are these usually implemented? Is it just some sort of zone trigger that then launches an interpolation between the last camera state and the scripted camera for the zone, or are there more special details?
24
How to avoid gimbal lock in Unreal Engine (c )? I created an orbit camera (sometimes called turntable camera similar to the one with the "use UE3 orbit controls" setting in a static mesh view). I attached the camera to a USpringArmComponent with a TargetArmLength set to 400. In the tick function, I rotate the arm with this simple method Simple, clamped version FRotator Rotation CameraSpringArm gt GetComponentRotation() Rotation.Yaw CameraInput.X CameraRotationSpeed Rotation.Pitch FMath Clamp(Rotation.Pitch CameraInput.Y CameraRotationSpeed, 85.0f, 85.0f) CameraSpringArm gt SetRelativeRotation(Rotation) I had to clamp the pitch to hide the gimbal lock problem. But this prevent users to rotate completely around objects. I don't understand why the Z rotation (the yaw) occurs on the world z axis ( FVector UpVector which is (0, 0, 1)) and not on the local z axis. It turns out that this is exactly what I want. I tried to solve this gimbal lock problem with this other method Taken from https answers.unrealengine.com questions 232923 how can i avoid gimbal lock in code.html FRotator RotationDelta(CameraInput.Y CameraRotationSpeed, CameraInput.X CameraRotationSpeed, 0.f) FTransform NewTransform CameraSpringArm gt GetComponentTransform() NewTransform.ConcatenateRotation(RotationDelta.Quaternion()) NewTransform.NormalizeRotation() CameraSpringArm gt SetWorldTransform(NewTransform) It works, but this time, the Z rotation (yaw) occurs on the local Z axis. How can I change it to rotate around the world Z axis, and the local Y axis, without gimbal lock? I tried this hybrid solution, but the gimbal lock is still there Hybrid FRotator RotationDelta(CameraInput.Y CameraRotationSpeed, 0.f, 0.f) FTransform Transform CameraSpringArm gt GetComponentTransform() FRotator Rotation CameraSpringArm gt GetComponentRotation() Rotation.Yaw CameraInput.X CameraRotationSpeed Transform.SetRotation(Rotation.Quaternion()) Transform.ConcatenateRotation(RotationDelta.Quaternion()) Transform.NormalizeRotation() CameraSpringArm gt SetWorldTransform(Transform) I did solve this problem (a long time ago) in OpenGL using quaternions, so I tried this version Quaternion FRotator Rotator CameraSpringArm gt GetComponentRotation() FQuat Quaternion Rotator.Quaternion() Rotate around the world Z axis Quaternion FQuat(FVector UpVector, FMath DegreesToRadians(CameraInput.X CameraRotationSpeed)) Rotate around the local Y axis Quaternion FQuat(Rotation.RotateVector(FVector RightVector), FMath DegreesToRadians(CameraInput.Y CameraRotationSpeed)) CameraSpringArm gt SetRelativeTransform(Quaternion) But this does not work. I also tried this Quaternion transform FTransform Transform CameraSpringArm gt GetComponentTransform() Transform.ConcatenateRotation(FQuat(FVector UpVector, FMath DegreesToRadians(CameraInput.X CameraRotationSpeed))) Transform.ConcatenateRotation(FQuat(Rotation.RotateVector(FVector RightVector), FMath DegreesToRadians(CameraInput.Y CameraRotationSpeed))) Transform.NormalizeRotation() CameraSpringArm gt SetWorldTransform(Transform) without success.
24
How to pan the camera on the XZ plane with different angles I have an orthographic camera whose position is (x: 0, y: 100, z: 0) and which is looking at (x: 0, y: 0, z: 0). In this setup, I'm able to capture the mouse movement and translate it into a correct pan: if the mouse goes 10 px down in y, I just have to move along Z in the 3D world. The problem is that I don't know how to calculate the pan when the camera sits at an angle, let's say position (x: 50, y: 50, z: 50) looking at (x: 0, y: 0, z: 0). I guess I have to use some trigonometry, but I'm very lost, to be honest. Any guidance would be very helpful.
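One way to handle the angled case: build the pan from the camera's own axes instead of the world axes, flattening the camera's forward vector onto the ground plane so vertical mouse movement always pans toward or away from the camera. A sketch (System.Numerics, illustrative names; signs may need flipping for your conventions):

using System.Numerics;

static class ScreenPan
{
    // Map a 2D mouse drag to a pan on the ground (XZ) plane for an arbitrarily angled camera.
    public static Vector3 PanDelta(Vector3 cameraPos, Vector3 lookAt,
                                   float dxPixels, float dyPixels, float unitsPerPixel)
    {
        Vector3 forward = Vector3.Normalize(lookAt - cameraPos);

        Vector3 flat = new Vector3(forward.X, 0f, forward.Z);
        if (flat.LengthSquared() < 1e-6f)                           // looking straight down:
            return new Vector3(dxPixels, 0f, -dyPixels) * unitsPerPixel; // plain axis mapping, as before

        Vector3 forwardOnGround = Vector3.Normalize(flat);
        Vector3 right = Vector3.Normalize(Vector3.Cross(Vector3.UnitY, forwardOnGround));

        // Screen-x moves along the camera's right axis, screen-y along the flattened forward axis.
        return (right * dxPixels + forwardOnGround * -dyPixels) * unitsPerPixel;
    }
}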
24
How does time-dependent camera rotation with a mouse work? The commonly used equation for camera rotation with a mouse does not involve time. This makes sense, since higher frame rates have smaller changes in mouse position and vice versa, so it all evens out. If time slows down or speeds up, however, camera rotation from the mouse does not adjust accordingly. Just as you move slower when time is slowed, logically I also want rotating to be slower. One option is to multiply the change in mouse position by the same multiplier I'm using on time, but shouldn't it be possible to have the change in rotation and the change in time in the same equation, independent of framerate?
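One hedged way to get both properties: leave delta time out (mouse deltas are already frame-rate independent) and instead scale the deltas by the game's time scale, so slow motion slows the look-around without reintroducing frame-rate dependence. Sketch with illustrative names:

static class MouseLook
{
    // Mouse deltas are already frame-rate independent (more frames means smaller deltas),
    // so multiplying by real delta time would break that. Multiplying by the game's
    // time scale (1 = normal, 0.5 = slow motion) keeps frame-rate independence while
    // making the camera honour slowed or sped-up time, which is what the question asks for.
    public static void Apply(ref float yawDeg, ref float pitchDeg,
                             float mouseDx, float mouseDy,
                             float sensitivity, float timeScale)
    {
        yawDeg   += mouseDx * sensitivity * timeScale;
        pitchDeg += mouseDy * sensitivity * timeScale;
        if (pitchDeg > 89f) pitchDeg = 89f;
        if (pitchDeg < -89f) pitchDeg = -89f;
    }
}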
24
Is it possible to record video, depth map and 6D pose of any games? I'm not a game developer, but a researcher working on improving video quality in various contexts. For my current problem, I need to collect videos which have both RGB data as well as depth data. Video can be monocular or stereo. However, if there is egomotion, I'll need the camera pose (rotation and translation) as well. Is there any way to record all these information in any kind of games? Games can be on any platform PC games, mobile games or VR games. In short, I want to record gameplay but with additional information such as depth map and camera pose.
24
Asymetric Perspectiv View I am building a addon component that will show synthesized 3D view of the world (kind of FLIR camera simulation). The problem I am facing is that currently I am using Symmetric Perspective View with an eye pointed always on the center of the screen. I can manipulate field of view in both vertical and horizontal dimension however what I cannot do is to move the eye point view from the center, so that I can see 15 degrees up and 30 down while my eyes are pointing at at the screen at point that is located at 1 3 from the top of the viewport. Below is the picture of what I want to achieve, assuming the eye point is on sea level looking straight ahead to the horizon. I tried several techniques, I tried to enlarge my viewport proportionally and then increase FOVY and clip the viewport to have again 45 degree view, then I tried to manipulate the camera to look 15 degrees down, but none of the results are satisfactory, because the horizon of my 3d world view was never in range of 15 degrees up and 30 down and this is my requirement I guess the correct way is to define the correct frustum. Please confirm this is the case and how to set it assuming that FOV 64 FOVY 45 ratio 1 5 near plane 1 far 500000000.0
24
WebGL (three.js) rendering object behind camera far plane I need to have one object that is always visible rendered (a lens flare, to be exact). I don't want to set the camera far plane to the maximum possible distance between camera and the sun (performance reasons). Are there any tricks to this? Is this even possible? I could obviously play math and update flare's position so it's always no further than a camera far plane at the correct angle but I also want it to change size depending on the distance as well. If there are some solutions to such a problem I would be very thankful for answers on how it can be done in three.js.
24
RTS camera acceleration I have an RTS game with a camera that can be controlled by edge pan, gamepad analog stick, or keyboard keys (WASD). The speed and snappiness are good for the general case, but sometimes feel too slow when travelling to far-away places on the map. What's a good algorithm to implement some form of acceleration on the camera without making it feel less snappy? Preferably, this algorithm should work best for the analog stick, since mouse and keyboard users can easily use the minimap to navigate as well.
24
How do I output a webcam stream to a 2D canvas with SharpDX in C#? I'm writing an application for the Oculus Rift, using the SharpOVR library to access HMD position data. I decided it was reasonable to also use SharpDX to output the webcam streams. Unfortunately, all SharpDX documentation is basically a copy-paste from the C++ documentation, and I only know basic C#; C++ uses a different style, class names, etc. The MediaFoundation samples are also C++ only. I'm trying to write a fairly simple part that renders 2 videos from 2 webcams. I've managed to get the list of webcams with this code, but I'm failing to initialize one; it currently fails at source.Start(descriptor, null, time). According to the exception, I need to bind all stream events first. Or am I missing something else I need to initialize? Can anyone help with the further steps from starting the camera to outputting it to the monitor? I know the code shouldn't be too hard; the problem is adapting it to C#.
24
DX11 Camera not working properly I'm currently trying to implement a ArcBall Camera which should fly around a given XMFLOAT3 position stored in a cb with a spezified distance to the position. I used the code from Allen Sherrod for this but the constant buffer that should pass the viewmatrix and the second one for the position to the shaders doesn't update. Any ideas? This should return the view mat XMMATRIX ArcCamera GetViewMatrix() XMVECTOR zoom XMVectorSet(0.0f, 0.0f, distance , 1.0f) XMMATRIX rotation XMMatrixRotationRollPitchYaw(xRotation , yRotation , 0.0f) zoom XMVector3Transform(zoom, rotation) XMVECTOR pos XMLoadFloat3( amp position ) XMVECTOR lookAt XMLoadFloat3( amp target ) pos lookAt zoom XMStoreFloat3( amp position , pos) XMVECTOR up XMVectorSet(0.0f, 1.0f, 0.0f, 1.0f) up XMVector3Transform(up, rotation) XMMATRIX viewMat XMMatrixLookAtLH(pos, lookAt, up) return viewMat The rotation (xRotation, yRotation) increases every frame. The shaders Texture2D colorMap register( t0 ) SamplerState colorSampler register( s0 ) cbuffer cbChangesEveryFrame register( b0 ) matrix worldMatrix cbuffer cbNeverChanges register( b1 ) matrix viewMatrix cbuffer cbChangeOnResize register( b2 ) matrix projMatrix cbuffer cbCameraData register( b3 ) float3 cameraPos struct VS Input float4 pos POSITION float2 tex0 TEXCOORD0 float3 norm NORMAL struct PS Input float4 pos SV POSITION float2 tex0 TEXCOORD0 float3 norm NORMAL float3 lightVec TEXCOORD1 float3 viewVec TEXCOORD2 PS Input VS Main( VS Input vertex ) PS Input vsOut ( PS Input )0 float4 worldPos mul( vertex.pos, worldMatrix ) vsOut.pos mul( worldPos, viewMatrix ) vsOut.pos mul( vsOut.pos, projMatrix ) vsOut.tex0 vertex.tex0 vsOut.norm mul( vertex.norm, (float3x3)worldMatrix ) vsOut.norm normalize( vsOut.norm ) float3 lightPos float3( 0.0f, 500.0f, 50.0f ) vsOut.lightVec normalize( lightPos worldPos ) vsOut.viewVec normalize( cameraPos worldPos ) return vsOut float4 PS Main( PS Input frag ) SV TARGET float3 ambientColor float3( 0.2f, 0.2f, 0.2f ) float3 lightColor float3( 0.7f, 0.7f, 0.7f ) float3 lightVec normalize( frag.lightVec ) float3 normal normalize( frag.norm ) float diffuseTerm clamp( dot( normal, lightVec ), 0.0f, 1.0f ) float specularTerm 0 if( diffuseTerm gt 0.0f ) float3 viewVec normalize( frag.viewVec ) float3 halfVec normalize( lightVec viewVec ) specularTerm pow( saturate( dot( normal, halfVec ) ), 25 ) float3 finalColor ambientColor lightColor diffuseTerm lightColor specularTerm return float4( finalColor, 1.0f ) Must I update all subresources and stuff before every time I call the context's draw method to render every my objects by the way?
24
How do I move a camera to a new X and Y position when the player collides with a certain object? The lowdown is this I'm making a 2D overhead game that takes place in separate screens in a room. Each screen is separated by a few blocks of black space (to give the impression of screens being separate rooms) and a rectangular "transition block" in doorways. My idea is, once a player touches this block, they are moved to the next adjacent screen along with the camera instantly. I can move a player to the next screen this way, the big hangup that I'm having is moving the camera as well. I want every collision box to have the coordinates needed to set the camera around the next screen once player box collision happens, similar to the setup I have with the player character.
24
Matrix camera, movement concept Someone told me that the movement concept in my 2D game is bad. When left or right arrow is pressed I'm scrolling the background which makes you feel that player avatar is moving (the avatar's X position remains the same). So... he told me something about matrix view. I should create all walls and platforms static and scroll only the camera and move the player's rectangle. I did a little research in Google, but found nothing. Can you tell me anything about it? How to start? Maybe links, books and resources?
24
Smooth Camera Offsets I am attempting to implement a sort of, smooth camera that has angle offsets from the player as they turn, creating a cinematic effect as well as visual feedback for when the player turns. Here is an example of the desired functionality. (Starts at 1 06) http youtu.be uJ6tD k RuA?t 1m6s One idea I was thinking of (if even feasible) is to store a "desired quaternion" of orientation of where the camera wants to look (preferably the player's quaternion of orientation) and a "current quaternion" of the cameras orientation and SLERP to the desired quaternion. Then possibly set a max angle between the two rotations to cap how large the angle offset could be. What are the ways of doing that?
24
glm::perspective isn't working? So I'm learning how to make games and program, and while trying to set up a projection camera using GLM with GLFW, this line of code refuses to work and I can't figure out why. The code is in the image below. Thanks!
24
How can I track a falling ball with a camera? I have been trying to get my camera to follow a falling ball but with no success. here is the code float cameraY (FrustumHeight 2) ((ball.getPosition().y) 2) (FrustumHeight 2) if (cameraY lt FrustumHeight 2 ) cameraY FrustumHeight 2 camera.position.set(0f,cameraY, 0f) Gdx.app.log("test",camera.position.toString()) camera.update() camera.apply(Gdx.gl10) batch.setProjectionMatrix(camera.combined) batch.begin() batch.draw(backgroundRegion, camera.position.x FrustumWidth 2, cameraY (FrustumHeight 2) , 320, 480) batch.draw(ballTexture, (camera.position.x FrustumWidth 2) ball.getPosition().x, cameraY ball.getPosition().y (FrustumHeight 2) , 32, 32) I'm sure I am doing this completely wrong what is the correct way to do this?
24
How can I emulate Diablo 3's isometric view using perspective? Using DX11 and SimpleMath, I am building an isometric game like Diablo 3 in 3D, and I want to use a perspective camera that emulates its top-down view. Projection = Matrix::CreatePerspectiveFieldOfView(1, width / height, 0.1f, 100.0f) But after this I am a bit unsure how I am supposed to rotate the camera. I assume I could just use p + Vector3(10, 10, 10) to get a 45-degree angle at any given p. How do I properly position and rotate the camera and point it at position p to mimic the Diablo 3 view?
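A sketch of the usual setup: keep the camera at a fixed offset above and behind the focus point and look back at it; Diablo-like cameras tend to pitch down steeply with a fairly narrow FOV so the perspective distortion stays subtle. The numbers and names below are illustrative, not Diablo 3's actual values:

using System;
using System.Numerics;

static class IsoCamera
{
    // Place a perspective camera at a pitched-back offset from the focus point and aim it
    // back at that point; moving the focus point moves the whole rig, so the angle never changes.
    public static Matrix4x4 ViewFor(Vector3 focus, float distance = 20f, float pitchDegrees = 55f)
    {
        float pitch = pitchDegrees * (float)Math.PI / 180f;
        // Offset straight back along -Z and up along +Y according to the pitch angle.
        Vector3 offset = new Vector3(0f, (float)Math.Sin(pitch), (float)Math.Cos(pitch)) * distance;
        Vector3 eye = focus + offset;
        return Matrix4x4.CreateLookAt(eye, focus, Vector3.UnitY);
    }
}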
24
Follow camera for car object I have a car object in my game that has a location like any other object. It is able to steer using bicycle kinematics, and it even rotates while it drives based on a car heading vector. Now I need the camera to follow the car. I put the camera position 10 up and 10 behind the car, but I'm confused about where to make the camera look. I tried this pseudocode: lookAtCamera = (carLocation.x + 5 * cos(carHeading), carLocation.y, carLocation.z + 5 * sin(carHeading))
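A common pattern is to look not at the car itself but at a point a little ahead of it along its heading (and slightly above it), which keeps the horizon steady while still framing the car; the pseudocode above is essentially this. A sketch with illustrative offset values, assuming heading is a yaw angle in radians in the XZ plane:

using System;
using System.Numerics;

static class ChaseCamera
{
    // Classic chase cam: sit a few units up and behind the car along its heading,
    // and look at a point slightly ahead of (and a little above) the car.
    public static (Vector3 eye, Vector3 lookAt) Follow(Vector3 carPos, float heading,
                                                       float back = 10f, float up = 10f, float ahead = 5f)
    {
        Vector3 forward = new Vector3((float)Math.Cos(heading), 0f, (float)Math.Sin(heading));
        Vector3 eye = carPos - forward * back + Vector3.UnitY * up;
        Vector3 lookAt = carPos + forward * ahead + Vector3.UnitY * 1f;   // aim slightly above the car
        return (eye, lookAt);
    }
}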
24
What are some good methods of implementing RTS style Box selection? I'm scouting around for different methods of implementing a typical RTS box selection. This is for 3D game so I'm looking for methods for finding units within selection box in 3D world. Here is a quick screenshot of what a box selection looks like in 2D Starcraft
24
GLM camera attached to a model moves in the opposite direction from the model I have been working on a component-based engine with nested game objects, each with their own transformations. Each game object calculates its position in the world from its parent's world transformation multiplied by its own local one (ModelsWorldTransform = ParentsWorldTransform * LocalTransform). This all works great until I attach the camera component to a model; in that scenario, as the model the camera is attached to moves away from the origin, the camera moves in the opposite direction to the movement, whilst still seeming to be in the correct position according to its mat4. All local transformations are stored in a mat4 rather than three vec3's for position, rotation and scale, to avoid having to generate new mat4's every update. Transformation component update code: m_world_transformation = m_local_transformation; // If we have a parent object if (m_parent_transform_component) m_world_transformation = m_parent_transform_component->GetWorldTransformation() * m_world_transformation; Camera position buffer being updated: BufferData.view = m_transformation_component->GetWorldTransformation(); m_camera_buffer->SetData(); I'm not sure what I'm missing; if anyone has any suggestions, that would be appreciated.
24
How do I move the camera sideways in Libgdx? I want to move the camera sideways (strafe). I had the following in mind, but it doesn't look like there are standard methods to achieve this in Libgdx. If I want to move the camera sideways by x, I think I need to do the following Create a Matrix4 mat Determine the orthogonal vector v between camera.direction and camera.up Translate mat by v x Multiply camera.position by mat Will this approach do what I think it does, and is it a good way to do it? And how can I do this in libgdx? I get "stuck" at step 2, as I have not found any standard method in Libgdx to calculate an orthogonal vector. EDIT I think I can use camera.direction.crs(camera.up) to find v. I'll try this approach tonight and see if it works. EDIT2 I got it working and didn't need the matrix after all Vector3 right camera.direction.cpy().crs(camera.up).nor() camera.position.add(right.mul(x))
24
Orthographic camera and z axis in 3D isometric view game I'm working on a game in Away3d. I'm using an orthographic camera and I'm attempting to simulate an isometric view (the camera has a 37 angle and the y axis is up down, x axis is left right, and z axis is orthographic). I hit a snag in understanding how to handle the z axis of the NPC's relative to the player so that, for example, when the player is below an NPC in Y coordinates, it doesn't appear to be "under" the NPC, but "besides in front" the NPC, as currently all the 3D models in this scene are Z axis coordinate 0. My common sense says that there should be a relation in the sense that if the Y axis coordinates of the NPC is lower than the Y axis coordinates of the player, then the z axis coordinates should update following an allegory is so that the apparent view of the NPC relative to the player is that the NPC is "behind" the player. But as I said, because all the z axis are zero for all the models, the players' apparent position is "under" the NPC, as opposed to "in front" of the NPC.
24
How to make TEXT RENDER stand facing the camera of my character and also for the camera of simulation? I have an actor (Minion) with a Text Render just above it mesh As you have seen, I defined your rotation as absolute, so the text does not rotate along with the actor. In the game the rotation setting is good and acceptable at various times At other times it is very bad An example of what I want is something like the life of the champions in the League of Legends I would also like to know how to do this when I am simulating I've made some attempts at the camera of my main character. And I got some of what I tried. Character Blueprint I know I could put on the Event Begin Play because the camera does not rotate. Minion Blueprint I did (partially) Ignoring the fact that the text is inverted. I tried Split Struct and added 180 to each of the 3 axes (ROLL, PITCH, YAW) and I could not solve it anyway. Simulating nothing works EDIT 1 (Attempt made based on a rrat's answer) Blueprint I think I tried all the combinations and none worked the way I would. The combination defined in the image made the Text Render spin. EDIT 2 (Correction of the attempt I made wrongly) Correction that I did With the answer I was given I noticed a marked improvement. Result 1 Besides the value is no longer inverted. Result 2 But I still have not got what I like. Result I am looking for I wish life would be displayed the same way anywhere on the map the character is on. EDIT 3 (Removing roll) Blueprint In game there was no difference. EDIT 4 (Trying to explain what I want to happen) I think that because I can not speak English and have to use programs to translate things for myself, I do not know the best way to explain my doubts, the best words, among other things. I think the best way then to try to explain what I want is with images. I want regardless of the location or angle life is seen in the same way. Image Image Image In the LEAGUE OF LEGENDS anywhere on the screen that the champion is, the life bar appears the same way. I would like to achieve this result.
24
Isometric Mouse Camera Panning I am building an isometric game environemnt and i want to be able to pan the camera around the map by holding the right mouse button, can someone talk me through the logic for this please, i have made an attempt and tried a number ways. I can get the camera to move correctly the first time but when i click and hold the second time the camera resets itself, thanks in advance for any help import java.awt.Dimension import java.awt.EventQueue import java.awt.Graphics import java.awt.event.ActionEvent import java.awt.event.ActionListener import java.awt.event.MouseEvent import java.awt.event.MouseListener import java.awt.event.MouseMotionListener import javax.swing.JFrame import javax.swing.JPanel import javax.swing.SwingUtilities import javax.swing.Timer public class CameraMovementTest extends JPanel private Timer timer private int DELAY 10 private CustomMouseListener mouseListener private int positionX 0, positionY 0 public CameraMovementTest() mouseListener new CustomMouseListener() this.addMouseListener(mouseListener) this.addMouseMotionListener(mouseListener) this.setSize(500,500) this.setVisible(true) Swing Timer timer new Timer(DELAY, new ActionListener() Override public void actionPerformed(ActionEvent arg0) update() repaint() validate() ) timer.start() private void update() if(mouseListener! null) positionX mouseListener.getX() positionY mouseListener.getY() Override public void paint(Graphics g) super.paint(g) g.fillRect( positionX, positionY, 300,300) public class CustomMouseListener implements MouseListener, MouseMotionListener private int positionX 0, positionY 0 private int mouseClickX, mouseClickY Override public void mousePressed(MouseEvent evt) if(SwingUtilities.isRightMouseButton(evt)) mouseClickX evt.getX() mouseClickY evt.getY() Override public void mouseDragged(MouseEvent evt) if(SwingUtilities.isRightMouseButton(evt)) positionX mouseClickX evt.getX() positionY mouseClickY evt.getY() Override public void mouseMoved(MouseEvent arg0) Override public void mouseClicked(MouseEvent arg0) Override public void mouseEntered(MouseEvent arg0) Override public void mouseExited(MouseEvent arg0) Override public void mouseReleased(MouseEvent arg0) public int getX() return positionX public int getY() return positionY public static void main(String args) EventQueue.invokeLater(new Runnable() Override public void run() JFrame f new JFrame() f.setVisible(true) f.setSize(new Dimension(500,500)) f.setContentPane(new CameraMovementTest()) )
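From the snippet, the likely cause of the reset is that each new drag computes the offset only from the latest press position, discarding where the camera ended up after the previous drag. Below is a minimal sketch of just the listener, with the offset accumulated across drags; the field names are illustrative and the rest of the panel code from the question is assumed unchanged:

    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import javax.swing.SwingUtilities;

    public class CustomMouseListener extends MouseAdapter {
        private int cameraX = 0, cameraY = 0;     // accumulated camera offset
        private int pressX, pressY;               // mouse position at drag start
        private int dragStartCamX, dragStartCamY; // camera offset at drag start

        @Override
        public void mousePressed(MouseEvent evt) {
            if (SwingUtilities.isRightMouseButton(evt)) {
                pressX = evt.getX();
                pressY = evt.getY();
                dragStartCamX = cameraX;          // remember where we already were
                dragStartCamY = cameraY;
            }
        }

        @Override
        public void mouseDragged(MouseEvent evt) {
            if (SwingUtilities.isRightMouseButton(evt)) {
                // Offset from the stored start, so the second drag continues
                // from wherever the first drag left the camera.
                cameraX = dragStartCamX + (evt.getX() - pressX);
                cameraY = dragStartCamY + (evt.getY() - pressY);
            }
        }

        public int getX() { return cameraX; }
        public int getY() { return cameraY; }
    }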
24
How to keep the camera view to within room dimensions
me = self
view_yview[0] += (me.y - (view_hview[0] / 2) - view_yview[0]) * 0.1
view_xview[0] += (me.x - (view_wview[0] / 2) - view_xview[0]) * 0.3
This is the code I use to smooth camera movement in GameMaker. But it's going beyond the room dimensions, showing stuff that shouldn't be shown, as the camera should stop following the player at a point. How do I achieve this?
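The usual fix is to clamp the view position after the smoothing step so it can never leave the room. GameMaker has a built-in clamp(), so it is a one-liner per axis there; the Java sketch below just spells out the math, and all the names are placeholders:

    public class ViewClamp {
        // Keep the view's top-left corner inside [0, roomSize - viewSize].
        static float clampView(float viewPos, float roomSize, float viewSize) {
            return Math.max(0f, Math.min(viewPos, roomSize - viewSize));
        }

        // Per step, after the smoothing:
        //   viewX = clampView(viewX, roomWidth,  viewWidth);
        //   viewY = clampView(viewY, roomHeight, viewHeight);
    }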
24
glm perspective isn't working? So I'm learning how to make games and program, and while trying to set up a projection camera using GLM in GLFW, this line of code refuses to work and I can't figure out why. The code is in the image below. Thanks!
24
How to draw chunks of a map within view of camera I want to draw chunks of a map in the screen when it's visible by the camera. I can draw a single chunk using map(celx, cely, screenx, screeny, 16, 16) This draws the chunk celx cely to the screen position screenx screeny. I can move this map where it's drawn using camera(camx, camy) This will offset the map by vector camx camy. This is basic PICO 8 api. For my problem, I define a world by chunks of the map pworld 0, 1, 2, 0, 3, 4, 0, 1, 2, 0, 3, 4, 0, 3, 4, 0, 3, 4, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2 World is laid out by chunks of 8x4 units. 0 means chunk where celx 0 cely 0 1 means chunk where celx 1 cely 0 and so on. Now when the camera is at say x 10,y 10 I have to draw the portion of the world indexes 1,2,9,10 I tried to calculate this using function v map(x, y) local i pworld flr(y 16) 8 flr(x 16) 1 local my flr(i 8) local mx i 8 return x mx,y my , x mx 1,y my , x mx,y my 1 , x mx 1,y my 1 end and draw using function draw() camera(cam.x, cam.y) local vm v map(camx, camy) map(vm 1 .x, vm 1 .y, 0, 0, 16, 16) map(vm 2 .x, vm 2 .y, 0, 0, 16, 16) map(vm 3 .x, vm 3 .y, 0, 0, 16, 16) map(vm 4 .x, vm 4 .y, 0, 0, 16, 16) end But this is wrong and I don't understand how this even should work. How do I know the indexes of the chunks to draw? How do I know at what screen position to draw the chunks? How does this work well with moving the camera?
24
Swapping Y and Z causes problems with Camera I am trying to swap the Y and Z axis in my program. Everything worked great when Y used to be the axis coming out of the plane. After having swapped y and x, I have been able to draw my terrain using the X Y as the plane and Z as the height, however when converting the code for the camera I am running into trouble. Here is the code that gives me the correct camera (so y is the axis coming out of the plane), along with an illustration. The code is in SharpDX which is a C wrapper for DirectX Setup the position of the camera in the world. Vector3 position new Vector3(PositionX, PositionY, PositionZ) Setup where the camera is looking forwardby default. Vector3 lookAt new Vector3(0, 0, 1.0f) Set the yaw (Y axis), pitch (X axis), and roll (Z axis) rotations in radians. float pitch RotationX 0.0174532925f float yaw RotationY 0.0174532925f float roll RotationZ 0.0174532925f Create the rotation matrix from the yaw, pitch, and roll values. Matrix rotationMatrix Matrix.RotationYawPitchRoll(yaw, pitch, roll) Transform the lookAt and up vector by the rotation matrix so the view is correctly rotated at the origin. lookAt Vector3.TransformCoordinate(lookAt, rotationMatrix) Vector3 up Vector3.TransformCoordinate(Vector3.UnitY, rotationMatrix) Translate the rotated camera position to the location of the viewer. lookAt position lookAt Finally create the view matrix from the three updated vectors. ViewMatrix Matrix.LookAtLH(position, lookAt, up) Here is an illustration of what this produces, I am able to successfully move the camera around with the mouse by having the mouse differences affect the x and y rotation Now if I swap all the X and Y in my terrain, without affecting the camera, I get this So now the camera is facing the terrain directly, and attempts to yaw will not produce the desired view. (the world will revolve around the camera when looking perpendicular to the terrain, and spin in place when looking at the horizon, the exact opposite of what we want.) Now I thought that maybe if I just change the lookAt to 0, 1.0f, 0 swap the y and z rotations, and change the Up vector to use Vector3.UnitZ, it would solve all my problems. Setup the position of the camera in the world. Vector3 position new Vector3(PositionX, PositionY, PositionZ) Setup where the camera is looking forwardby default. Vector3 lookAt new Vector3(0, 1.0f, 0) Set the yaw (Y axis), pitch (X axis), and roll (Z axis) rotations in radians. float pitch RotationX 0.0174532925f float yaw RotationZ 0.0174532925f float roll RotationY 0.0174532925f Create the rotation matrix from the yaw, pitch, and roll values. Matrix rotationMatrix Matrix.RotationYawPitchRoll(yaw, pitch, roll) Transform the lookAt and up vector by the rotation matrix so the view is correctly rotated at the origin. lookAt Vector3.TransformCoordinate(lookAt, rotationMatrix) Vector3 up Vector3.TransformCoordinate(Vector3.UnitZ, rotationMatrix) Translate the rotated camera position to the location of the viewer. lookAt position lookAt Finally create the view matrix from the three updated vectors. ViewMatrix Matrix.LookAtLH(position, lookAt, up) However, doing this, I get the same issue as before, the world will revolve around when looking perpendicular to the ground, but spin in place when looking parallel to the ground (so towards the horizon). I believe the camera now looks like this Which doesn't make sense to me, since I thought changing the Up vector to UnitZ would make Z face up and not parallel to the ground. 
Any suggestions on how to change the second code block are appreciated.
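One thing to watch with this kind of axis swap: DirectX's RotationYawPitchRoll always treats Y as the yaw axis (yaw about Y, pitch about X, roll about Z), so once Z becomes "up" the yaw ends up spinning about the wrong axis, which matches the behaviour described. A way to sidestep the helper entirely is to build the look direction straight from the two angles with Z as height. This is a hedged sketch; the angle conventions, and yaw = 0 pointing along +Y, are assumptions to adjust to the actual setup:

    public class ZUpCamera {
        // Forward vector for a yaw/pitch camera in a Z-up world.
        // Yaw spins around the Z (up) axis, pitch tilts toward/away from the
        // X-Y ground plane. Angles in radians.
        static float[] forwardZUp(float yaw, float pitch) {
            float cp = (float) Math.cos(pitch);
            return new float[] {
                (float) Math.sin(yaw) * cp,  // x
                (float) Math.cos(yaw) * cp,  // y
                (float) Math.sin(pitch)      // z (height)
            };
        }

        // lookAt = position + forwardZUp(yaw, pitch);
        // up stays (0, 0, 1) as long as no roll is needed.
    }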
24
Camera Control Techniques in Games I am an MPhil student in Computing Science working on the problem of camera control in graphics. Though the literature on camera control dates back to the end of the 1980s, the majority of it (to my knowledge) is mainly academic and rarely used in games. Now part of my thesis should be dedicated to camera control methods used in games. But the problem is that I have not implemented all the games in the world, so I can't speak about them. But I suppose there are some references that game developers usually use. Can anybody help me with this? Even if it is from your own experience rather than a book.
24
How do I create a bounding frustum from a view projection matrix? Given a left handed Projection matrix, a left handed View matrix, a ViewProj matrix of View Projection How do I create a bounding Frustum comprised of near, far, left, right and top, bottom planes? The only example I could find on Google (Tutorial 16 Frustum Culling) seems to not work for example, if the math is used as given, the near plane's distance is a negative. This places the near plane behind the camera...
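For reference, the usual recipe is the Gribb/Hartmann plane extraction. The sketch below assumes DirectX-style row-vector math (a point transforms as p * M) and clip-space depth in [0, 1]; with column-vector (OpenGL-style) matrices you combine rows of the matrix instead of columns, and a convention mismatch like that is a common reason the near plane comes out with a negative distance:

    public class FrustumPlanes {
        // m[r][c] = row r, column c of the combined view * projection matrix.
        // Returns 6 planes as {a, b, c, d} with a*x + b*y + c*z + d = 0 and the
        // normal pointing into the frustum; a point is inside when the plane
        // equation evaluates >= 0 for all six planes.
        static float[][] extract(float[][] m) {
            float[][] p = new float[6][4];
            for (int i = 0; i < 4; i++) {
                p[0][i] = m[i][3] + m[i][0]; // left
                p[1][i] = m[i][3] - m[i][0]; // right
                p[2][i] = m[i][3] + m[i][1]; // bottom
                p[3][i] = m[i][3] - m[i][1]; // top
                p[4][i] = m[i][2];           // near (clip z >= 0 in D3D)
                p[5][i] = m[i][3] - m[i][2]; // far
            }
            for (float[] plane : p) {        // normalize so d is a real distance
                float len = (float) Math.sqrt(plane[0] * plane[0]
                        + plane[1] * plane[1] + plane[2] * plane[2]);
                for (int i = 0; i < 4; i++) plane[i] /= len;
            }
            return p;
        }
    }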
24
How to avoid gimbal lock I am trying to write code which rotates an object. I implemented it so that rotation about the X axis is given by the amount of change in the mouse's y coordinate, and rotation about the Y axis is given by the amount of change in the mouse's x coordinate. This method is simple and works fine until one of the axes coincides with the Z axis; in short, gimbal lock occurs. How can I utilize rotation around the Z axis to avoid gimbal lock?
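The standard cure is to stop storing the orientation as two separate angles and instead accumulate it in a quaternion, feeding in small per-frame rotations built from the mouse deltas. A self-contained sketch follows; quaternions are stored as {w, x, y, z}, and the sensitivity value and the choice of world-space axes are assumptions:

    public class QuaternionRotation {
        static float[] fromAxisAngle(float ax, float ay, float az, float angle) {
            float s = (float) Math.sin(angle * 0.5);
            return new float[] { (float) Math.cos(angle * 0.5), ax * s, ay * s, az * s };
        }

        static float[] mul(float[] a, float[] b) { // Hamilton product a * b
            return new float[] {
                a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
                a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
                a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
                a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
            };
        }

        // Per frame, with "orientation" holding the object's current rotation:
        //   float[] yaw   = fromAxisAngle(0, 1, 0, mouseDeltaX * sensitivity);
        //   float[] pitch = fromAxisAngle(1, 0, 0, mouseDeltaY * sensitivity);
        //   orientation = mul(mul(yaw, pitch), orientation); // world-space tumble
        // and renormalize occasionally to fight drift. Because the increments
        // are applied to the full orientation, no axis can collapse onto another.
    }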
24
Design of a camera system Thinking about a common game, doesn't matter the type of the game, it's very likely that we need some camera type. For example Debug camera controlled by keyboard and mouse, with that we are able to move around in any place of our scene. Scripted camera with that we can instruct the camera to move around, following a determinate path. Player camera. ... Each of these camera types has its own update function. The easiest (and bad) system,is to have a camera manager class with a generic update function and specialized update functions for every camera type. Inside the generic update function we have a switch statement that, based on the camera type, calls the proper update function. Instead of this I've thought to another approach strategy pattern. We move each camera behavior (update method) in an appropriate class that implements a common interface. In the camera manager we have a member to that interface, and we can set dinamically any behavior we want. What do you think about that? What other systems do you suggest me? Thanks. Additional info there is the are real possibility that I need more than one camera active, for example for reflections. In short, I must take account also of that.
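The strategy approach reads well in practice, and it also covers the multiple-active-cameras case if the behavior is attached to the camera rather than to the manager, so a reflection camera and the player camera can run side by side. One possible shape, with all names illustrative:

    import java.util.ArrayList;
    import java.util.List;

    interface CameraBehavior {
        void update(Camera camera, float dt);
    }

    class DebugFlyBehavior implements CameraBehavior {
        public void update(Camera camera, float dt) { /* keyboard + mouse fly */ }
    }

    class FollowPlayerBehavior implements CameraBehavior {
        public void update(Camera camera, float dt) { /* track the player */ }
    }

    class Camera {
        private CameraBehavior behavior;
        // view/projection state lives here
        void setBehavior(CameraBehavior b) { behavior = b; }
        void update(float dt) { if (behavior != null) behavior.update(this, dt); }
    }

    class CameraManager {
        private final List<Camera> cameras = new ArrayList<>();
        void add(Camera c) { cameras.add(c); }
        void update(float dt) { for (Camera c : cameras) c.update(dt); }
    }

Swapping behaviors at runtime (debug toggle, cutscene start, and so on) is then a single setBehavior call, with no switch statement to grow.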
24
Given the size of far and near plane, find other parameters of a frustum I am trying to find the correct parameters for setting up my perspective camera. But I don't know the mathematics involved in that. Can somebody point me to the right direction? My near plane should have width 1920 and height 1080 while my far plane should have width 3840 and height 2160. Aspect ratio is 1.7 Now how do I get the appropriate fovy, near plane distance and the far plane distance so that I get the desired frustum? Edit The world size is 3840 x 2160, and the entire world is divided into four quadrants. The size of a quadrant is 1920 x 1080. The user can zoom out and look at all four quadrants (Eagle eye view) or zoom in and look at one quadrant at a time. The important thing is, the world size should always be twice the size of a quadrant (and the area should be four times the area of a quadrant). When the game is actually being played, I just modify the Z position of the camera to zoom in zoom out. Decrease the Z position to zoom in, increase the Z position to zoom out.
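The geometry here is friendlier than it looks: a symmetric frustum's cross-section at distance d has height 2 * d * tan(fovY / 2), and because the requested far plane is exactly twice the near plane in every dimension, the far distance is automatically twice the near distance for any field of view. So one parameter is free; pick a fovY (or a near distance) and derive the rest. A sketch of the arithmetic:

    public class FrustumFit {
        // Returns { nearDistance, farDistance } for a chosen vertical FOV so
        // that the near plane is 1080 world units tall and the far plane 2160.
        static double[] distancesForFov(double fovYDegrees) {
            double halfFov = Math.toRadians(fovYDegrees) / 2.0;
            double nearDist = (1080.0 / 2.0) / Math.tan(halfFov);
            double farDist  = 2.0 * nearDist;
            return new double[] { nearDist, farDist };
        }

        public static void main(String[] args) {
            double[] d = distancesForFov(60.0);
            // fovY = 60 degrees -> near ~935.3, far ~1870.6, aspect = 1920/1080
            System.out.printf("near = %.1f, far = %.1f%n", d[0], d[1]);
        }
    }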
24
Understanding how to go from a scene to what's actually rendered to screen in OpenGL? I want something that explains step by step how, after setting up a simple scene I can go from that 'world' space, to what's finally rendered on my screen (ie, actually implement something). I need the resource to clearly show how to derive and set up both orthographic and perspective projection matrices... basically I want to thoroughly understand what's going on behind the scenes and not plug in random things without knowing what they do. I've found lots of half explanations, presentation slides, walls of text, etc that aren't really doing much for me. I have a basic understanding of linear algebra matrix transforms, and a rough idea of what's going on when you go from model space screen, but not enough to actually implement it in code.
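As a compact reference while working through those resources, the whole trip from world space to pixels is a chain of matrix multiplies followed by a divide and a viewport remap. The sketch below writes it out for a single point using the column-vector convention that most OpenGL material uses; a model matrix would be applied before the view matrix for points given in object space:

    public class Pipeline {
        // world -> eye (view matrix) -> clip (projection matrix)
        // -> NDC (divide by w) -> pixels (viewport mapping). pWorld is {x, y, z, 1}.
        static float[] worldToScreen(float[] pWorld, float[][] view, float[][] proj,
                                     int screenW, int screenH) {
            float[] eye  = mul(view, pWorld);
            float[] clip = mul(proj, eye);
            float ndcX = clip[0] / clip[3];                     // perspective divide
            float ndcY = clip[1] / clip[3];
            float sx = (ndcX * 0.5f + 0.5f) * screenW;
            float sy = (1.0f - (ndcY * 0.5f + 0.5f)) * screenH; // flip Y for screen
            return new float[] { sx, sy };
        }

        static float[] mul(float[][] m, float[] v) {            // 4x4 times column vector
            float[] r = new float[4];
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++) r[i] += m[i][j] * v[j];
            return r;
        }
    }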
24
Smooth Camera Offsets I am attempting to implement a sort of, smooth camera that has angle offsets from the player as they turn, creating a cinematic effect as well as visual feedback for when the player turns. Here is an example of the desired functionality. (Starts at 1 06) http youtu.be uJ6tD k RuA?t 1m6s One idea I was thinking of (if even feasible) is to store a "desired quaternion" of orientation of where the camera wants to look (preferably the player's quaternion of orientation) and a "current quaternion" of the cameras orientation and SLERP to the desired quaternion. Then possibly set a max angle between the two rotations to cap how large the angle offset could be. What are the ways of doing that?
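Storing a desired and a current orientation and slerping between them is a workable plan. One way to phrase the update: rotate the current orientation toward the desired one by at most some turn rate per second, then, if the remaining offset still exceeds the cap, clamp it by slerping back from the desired orientation. A self-contained sketch with minimal quaternion helpers ({w, x, y, z} layout; the rates are placeholders):

    public class LaggingCamera {
        static float angleBetween(float[] a, float[] b) {        // radians
            float dot = Math.abs(a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]);
            return 2f * (float) Math.acos(Math.min(1f, dot));
        }

        static float[] slerp(float[] a, float[] b, float t) {
            float dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
            float sign = dot < 0 ? -1f : 1f;                     // take the shorter arc
            dot = Math.abs(dot);
            float theta = (float) Math.acos(Math.min(1f, dot));
            float sinTheta = (float) Math.sin(theta);
            float wa, wb;
            if (sinTheta < 1e-5f) { wa = 1 - t; wb = t * sign; } // nearly identical
            else {
                wa = (float) Math.sin((1 - t) * theta) / sinTheta;
                wb = (float) Math.sin(t * theta) / sinTheta * sign;
            }
            return new float[] { wa*a[0]+wb*b[0], wa*a[1]+wb*b[1],
                                 wa*a[2]+wb*b[2], wa*a[3]+wb*b[3] };
        }

        // Per frame:
        //   float angle = angleBetween(current, desired);
        //   float step  = Math.min(1f, turnRateRad * dt / Math.max(angle, 1e-5f));
        //   current = slerp(current, desired, step);
        //   float left = angleBetween(current, desired);
        //   if (left > maxOffsetRad)                    // cap how far it trails
        //       current = slerp(desired, current, maxOffsetRad / left);
    }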
24
How to panning camera on XZ axis with different angles I have an Orthographic camera where the position is x 0, y 100, z 0 and is pointing looking at x 0, y 0, z 0 . At this point, I'm able to capture the mouse movement and translate it to make the pan correctly. If the mouse goes 10 px down y I just have to move Z in the 3D world. The problem is that I don't know how to calculate if the camera position is in perspective, let's say position x 50, y 50, z 50 lookAt x 0, y 0, z 0 I guess I have to use some trigonometry, but I'm very lost, to be honest. Any guide would be very helpful.
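A trig-light way to think about it: take the camera's forward and right vectors, flatten them onto the ground plane by zeroing their Y components, normalize, and pan along those. That works for any camera angle, including the original straight-down case, as long as the degenerate "looking straight down" direction falls back to the flattened up vector. Sketched in Java for illustration; forward is normalize(lookAt - position), and the mouse-delta signs are assumptions:

    public class GroundPan {
        // Returns the new camera position {x, y, z}.
        static float[] panOnGround(float[] camPos, float[] forward, float[] up,
                                   float mouseDx, float mouseDy, float speed) {
            float[] flatForward = { forward[0], 0f, forward[2] };
            if (length(flatForward) < 1e-4f)             // looking straight down
                flatForward = new float[] { up[0], 0f, up[2] };
            float[] flatRight = cross(flatForward, new float[] { 0f, 1f, 0f });
            normalize(flatForward);
            normalize(flatRight);
            // Flip the signs if the pan feels inverted for your mouse convention.
            float dx = (flatRight[0] * mouseDx + flatForward[0] * mouseDy) * speed;
            float dz = (flatRight[2] * mouseDx + flatForward[2] * mouseDy) * speed;
            return new float[] { camPos[0] + dx, camPos[1], camPos[2] + dz };
        }

        static float[] cross(float[] a, float[] b) {
            return new float[] { a[1]*b[2] - a[2]*b[1],
                                 a[2]*b[0] - a[0]*b[2],
                                 a[0]*b[1] - a[1]*b[0] };
        }

        static float length(float[] v) {
            return (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        }

        static void normalize(float[] v) {
            float len = length(v);
            if (len > 1e-6f) { v[0] /= len; v[1] /= len; v[2] /= len; }
        }
    }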
24
RTS camera acceleration I have an RTS game with a camera that can be controlled by either edge pan, gamepad analog stick or keyboard buttons (WASD). The speed and snappiness are good for the general case, but it sometimes feels too slow when going to far-away places on the map. What's a good algorithm to implement some form of acceleration on the camera without making it feel less snappy? Preferably, this algorithm should work best for the analog stick, since mouse and keyboard users can easily use the minimap to navigate as well.
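One simple scheme that tends to keep short taps snappy: ramp the scroll speed from a base value up to a maximum over a second or so of continuous input, and drop back to the base the instant input stops. With an analog stick the deflection already scales the result, so the ramp only really kicks in on sustained full deflection. A sketch, with every number a tuning placeholder:

    public class CameraScroller {
        float baseSpeed = 20f, maxSpeed = 60f, rampTime = 1.5f; // tuning values
        private float heldTime = 0f;

        // inputX/inputY in [-1, 1]: stick deflection, or +/-1 for keys/edge pan.
        // Returns the camera translation for this frame.
        float[] step(float inputX, float inputY, float dt) {
            boolean active = Math.abs(inputX) > 0.01f || Math.abs(inputY) > 0.01f;
            heldTime = active ? Math.min(heldTime + dt, rampTime) : 0f;
            float t = heldTime / rampTime;
            float speed = baseSpeed + (maxSpeed - baseSpeed) * t * t; // ease in
            return new float[] { inputX * speed * dt, inputY * speed * dt };
        }
    }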
24
Camera horizontal shift I want to stretch my game across two monitors. I use Unity 5.3. This version supports multi display, but with one camera per monitor, so I have two perspective cameras (at the same position). I'm searching for a "camera horizontal shift" option to create the stretch effect, but I can't find one. Is there a trick? A solution?
24
How do I ensure that perspective and orthographic projection matricies show objects at the same size on the screen? I am working on a 3d scene editor and would like to show the scene in orthographic projection. My current problem is that I am not sure how to calculate the orthographic projection matrix such that the objects do not appear at a completely different size on the screen. I currently have the following 2 functions for calculating the camera projection matrix. Matrix4.createPerspective function(fov, aspect, near, far, result) if (typeof result 'undefined') result new Matrix4() var ymax near Math.tan(fov Math.PI 360) var ymin ymax var xmin ymin aspect var xmax ymax aspect Matrix4.createFrustum(xmin, xmax, ymin, ymax, near, far, out result) return result Matrix4.createOrthographic function(left, right, top, bottom, near, far, result) if (typeof result 'undefined') result new Matrix4() var re result.elements var w right left var h top bottom var p far near var x ( right left ) w var y ( top bottom ) h var z ( far near ) p re 0 2 w re 4 0 re 8 0 re 12 x re 1 0 re 5 2 h re 9 0 re 13 y re 2 0 re 6 0 re 10 2 p re 14 z re 3 0 re 7 0 re 11 0 re 15 1 return result And this is how they are being used updateProjectionMatrix function(entity) var camera entity.getComponent('Camera') if (camera.isDirty()) camera.aspect camera.width camera.height if (camera.isOrthographic()) Matrix4.createOrthographic( 0, camera.width, 0, camera.height, camera.near, camera.far, out camera. projectionMatrix) else Matrix4.createPerspective( camera.fov, camera.aspect, camera.near, camera.far, out camera. projectionMatrix) , Here is an example of what I am trying to accomplish inside Blender (F5 toggles between Ortho and Perspective). Note that both cubes appear to be about the same size on the viewports
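The missing link is a reference distance: a perspective camera shows a half-height of distance * tan(fovY / 2) at any given distance, so if the orthographic bounds are built from that half-height at the camera-to-target distance, objects at that distance keep the same on-screen size when switching. The math, written out in Java for illustration; fovDegrees matches the degree-based fov passed to createPerspective, and focusDistance is whatever distance should be preserved, typically the orbit target:

    public class OrthoMatch {
        // Returns { left, right, top, bottom } for createOrthographic so that
        // the orthographic view matches the perspective view at focusDistance.
        static float[] orthoBoundsMatchingPerspective(float fovDegrees, float aspect,
                                                      float focusDistance) {
            float halfH = focusDistance * (float) Math.tan(Math.toRadians(fovDegrees) / 2.0);
            float halfW = halfH * aspect;
            return new float[] { -halfW, halfW, halfH, -halfH };
        }
    }

Passing raw pixel bounds (0..width, 0..height) to createOrthographic is why the two projections currently disagree: the orthographic volume has to be measured in the same world units the perspective camera sees, and centered on the view axis.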
24
VR Camera and Hands height Using the Oculus Rift S, I just added OVRCameraRig and LocalAvatar (or OVRPlayerController), and when I press play I feel like my view and my hands are not as high as they should be. I changed the Y position of the objects to raise them, but that affects everything: my Guardian floor setup goes higher too and no longer matches my real floor. What can I do to solve this? What should I change or modify?
24
Can I position a player avatar behind the camera based on rotation? I'm currently reverse engineering a video game where what's rendered on the screen depends on the proximity of the player avatar to the camera. As such, for my freecam to work properly, I need to bring the player avatar along with the camera. Since I can't yet find how the player avatar is rendered to the screen, the next best thing I can think to do is to place the player avatar behind the camera at all times (I suppose essentially like making the player avatar orbit the camera, but always be behind it). I just can't quite pin down how to make this happen. The values I have are as follows Player Avatar X Y (Up Down) Z Camera X Y (Up Down) Z Pitch (In degrees natively, but converted to radians for my calculations) Yaw (In degrees natively, but converted to radians for my calculations) World Orientation To get my camera to move forward in the direction I'm pointing my mouse when pressing the Y key, these are the calculations I'm using (code is in Lua) Camera XYZ values local camCoordX readFloat(" cameraBase 990") Camera X local camCoordY readFloat(" cameraBase 994") Camera Y local camCoordZ readFloat(" cameraBase 998") Camera Z Camera pitch and yaw local pitch math.rad(readFloat(" cameraBase 1540")) Pitch local yaw math.rad(readFloat(" cameraBase 1544")) Yaw Sine and cosine calculations local sinOfYaw math.sin(yaw) Sine of Yaw local cosOfYaw math.cos(yaw) Cosine of Yaw local sinOfPitch math.sin(pitch) Sine of Pitch local cosOfPitch math.cos(pitch) Cosine of Pitch If Y key is pressed, write new camera XYZ values accordingly if isKeyPressed(VK Y) then writeFloat(" cameraBase 990", camCoordX (sinOfYaw speed)) writeFloat(" cameraBase 994", camCoordY (sinOfPitch speed)) writeFloat(" cameraBase 998", camCoordZ (cosOfYaw speed)) end Based on that information, is it feasible for me to accomplish placing the player avatar behind the camera at all times based on rotation? I'm open to other suggestions or options as well. Thank you for any help you can offer!
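Reusing the same forward vector the freecam already moves along, the avatar can simply be held a fixed distance on the opposite side of the camera each time the camera updates. The distance and the exact forward formula below are assumptions that just mirror the movement code in the question, written in Java for illustration; the writeFloat calls and address names in the usage comment stand in for the hypothetical equivalents in the Lua script:

    public class AvatarTether {
        // Same forward convention as the movement code in the question:
        // x from sin(yaw), y from sin(pitch), z from cos(yaw). The avatar is
        // held "distance" units on the opposite side of the camera.
        static float[] avatarBehindCamera(float camX, float camY, float camZ,
                                          float yawRad, float pitchRad, float distance) {
            float fx = (float) Math.sin(yawRad);
            float fy = (float) Math.sin(pitchRad);
            float fz = (float) Math.cos(yawRad);
            return new float[] { camX - fx * distance,
                                 camY - fy * distance,
                                 camZ - fz * distance };
        }

        // Each time the camera moves:
        //   float[] p = avatarBehindCamera(camCoordX, camCoordY, camCoordZ,
        //                                  yaw, pitch, 5.0f);
        //   writeFloat(avatarXAddress, p[0]); writeFloat(avatarYAddress, p[1]);
        //   writeFloat(avatarZAddress, p[2]);
    }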
24
What are some common FOV for third person games? I am trying to decide on a field of view (FOV) for a third person game and I really don't know where to start. It would be very helpful to have some points of reference from recent TPS games and the FOV they use by default. The trouble is many third person games (or at least those I have tried) do not have any settings or information about what their default FOV is. I have had a look at Uncharted 4, Hitman (2016), The Division and GTA 5 but cannot find information about their FOV anywhere. I understand that each game is different and the type of gameplay will determine the FOV choice here, but it would be very useful for me to have some existing examples as a starting point. Is there any good place to find this out?
24
Three.js recover camera values When I start the script, camera has starting values. When I will move it and click button to set up startign values it is never same. What values I missed? The best way, I suppose, it is to look at the example http jsfiddle.net VsWb9 6652 I used console.log for debbuging camera values. HTML lt button id "buttonTest" gt TEST lt button gt Please, move cube before click! lt div id "wrapper" gt lt div gt JS var camera, scene, renderer, geometry, material, mesh init() animate() function init() window.wrapper document.getElementById('wrapper') var buttonTest document.getElementById('buttonTest') buttonTest.addEventListener('click', function() test() ) scene new THREE.Scene() camera new THREE.PerspectiveCamera(50, window.innerWidth window.innerHeight, 1, 10000) camera.position.z 500 scene.add(camera) geometry new THREE.CubeGeometry(200, 200, 200) material new THREE.MeshNormalMaterial() mesh new THREE.Mesh(geometry, material) scene.add(mesh) renderer new THREE.WebGLRenderer( preserveDrawingBuffer true ) renderer.setPixelRatio(window.devicePixelRatio) renderer.setSize(window.innerWidth, window.innerHeight) renderer.setClearColor(new THREE.Color("hsl(193, 50 , 57 )")) wrapper.appendChild(renderer.domElement) controls new THREE.TrackballControls(camera, renderer.domElement) controls.rotateSpeed 4.0 controls.zoomSpeed 1.2 controls.panSpeed 0.1 controls.noZoom false controls.noPan false controls.staticMoving true controls.dynamicDampingFactor 0.3 controls.keys 65, 83, 68 controls.addEventListener( 'change', render ) console.log('camera default ' camera.position.x ', ' camera.position.y ', ' camera.position.z) console.log('quaternion default ' camera.quaternion.x ', ' camera.quaternion.y ', ' camera.quaternion.z ', ' camera.quaternion.w) function animate() requestAnimationFrame(animate) controls.update() render() function render() camera.lookAt(scene.position) renderer.render(scene, camera) function test() lines below shows actual settings console.log('camera now ' camera.position.x ', ' camera.position.y ', ' camera.position.z) console.log('quaternion now ' camera.quaternion.x ', ' camera.quaternion.y ', ' camera.quaternion.z ', ' camera.quaternion.w) window.setTimeout(function() this is recovering camera values like it was on the sart of script it is not enought, what I missed? camera.position.x 0 camera.position.y 0 camera.position.z 500 camera.quaternion.x 0.0 camera.quaternion.y 0.0 camera.quaternion.z 0.0 camera.quaternion.w 1.0 console.log('camera recover default ' camera.position.x ', ' camera.position.y ', ' camera.position.z) console.log('quaternion recover default ' camera.quaternion.x ', ' camera.quaternion.y ', ' camera.quaternion.z ', ' camera.quaternion.w) ,1500)
24
Unity 5 Camera Functions Usage Error How do I use functions like ScreenToRayPoint() in the Camera class? I have used this code: Ray ray = camera.ScreenPointToRay(Input.mousePosition) and it says Component.camera is obsolete; use GetComponent() instead. Then I tried these: Camera cam = GetComponent<Camera>(); Ray ray = cam.ScreenToRayPoint(Input.mousePosition) and it says Camera doesn't contain a definition of ScreenPointToRay and no extension method 'ScreenPointToRay' accepting a first argument of type 'Camera' could be found.
24
Citrus Engine How do I set up the camera? (Would be great if someone with greater rep created the Citrus Engine tag and add it to this question) I want to make a game where my inner starling stage dimensions are 320x240. My game will have three parts A vertical (left aligned 32x208) toolbar in Rect(0, 0, 32, 208) A horizontal (bottom aligned 320x32) toolbar in Rect(0, 208, 320, 32) The game content (right top aligned 288x208) in the remaining area Rect(32, 0, 288, 208). It will be handled by the state, view, and camera. My game would consist of only one playable state (which would render the level content by fetching the data from the server), and would be reset when the player goes to another map (other states are meaningless here since they are just form UI states). Please correct me if I'm wrong with these steps, and help me with the further I can set cameraLensWidth to 288px and cameraLensHeight to 208. I can set the inner center to 0.5 and 0.5 with no deadZone (my game will require it), and set the bounds to Rect(0, 0, myMapWidth tileWidth, myMapHeight tileHeight) to track player movement only inside the map, and ensuring the camera follows the target by focusing it in its center (as long as it can). What I don't know is how to ensure the camera top left corner position be at (0, 32). What must I do to ensure the last point?
24
GLFW Camera rotation system I need to make a camera rotation system similar to the one here http madebyevan.com webgl path tracing To rotate, press the Left mouse button and drag. So far I figured a basic idea of how this could be implemented. Note that although it looks like the scene is rotating in front of the camera but in my case I want to rotate the camera instead. Since the scene contains hundreds of thousands of triangles and I wouldn't wanna apply a matrix operation to each of them. I have represented my camera by a 4x4 matrix. The columns from 1 to 4 are side, up, look at and eye respectively. So here's what I thought so far. 1) Subtract old cursor position from new cursor position. Doing this for X and Y we get a direction vector as to where the mouse is headed. 2) Invert this direction. And translate the camera a little in this direction. So if the mouse is headed right, the camera heads left. This is done by checking the components of the direction vector. if (dir.x lt 0) cam.eye side if(dir.x gt 0) cam.eye side if (dir.y lt 0) cam.eye up if(dir.x gt 0) cam.eye up Note the translations are inverse. If less than zero than add the side vector instead of subtracting it. 3) Now find a vector perpendicular to it. So if the mouse was headed left or right the vector perpendicular to it is the Y axis. I calculate this by taking the cross(vec(0,0,1), direction vector). Note this is not the inverse but the actual direction vector. 4) Rotate the camera a little along this perpendicular axis a little. If you imagine these operations for the X axis. This feels like the camera is moving in some sort of circle around the scene. That is, the camera translates a little, then rotates. Next time it translates by adding the side (which was changed when it first rotated) it moves a little diagonally. In this way it feels like it rotates in a circle. So this works somewhat fine for pure X and Y rotation and upto a certain extent. Fails when i move the mouse diagonally or in Pure X Y rotation when I go beyond a certain point probably 90 degrees. Any advice or suggestions? I am not using OpenGL instead doing this with GLFW and GLM. The renderer is a raytracer pathtracer.
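An approach that avoids both the drift and the breakdown past 90 degrees is to stop nudging the camera matrix incrementally and instead keep just two accumulated angles, rebuilding the eye position from them every frame. This is the usual orbit/turntable camera; a hedged sketch follows, where the pitch clamp, the radius and the choice of +Y as world up are assumptions:

    public class OrbitCamera {
        float yaw = 0f, pitch = 0f, distance = 5f;
        float[] target = { 0f, 0f, 0f };

        void drag(float dxPixels, float dyPixels, float sensitivity) {
            yaw   += dxPixels * sensitivity;
            pitch += dyPixels * sensitivity;
            float limit = (float) Math.toRadians(89);  // stop short of the poles
            pitch = Math.max(-limit, Math.min(limit, pitch));
        }

        // Eye position on a sphere of radius "distance" around the target.
        // Feed eye, target and (0, 1, 0) into a lookAt to get the view matrix.
        float[] eye() {
            float cp = (float) Math.cos(pitch);
            return new float[] {
                target[0] + distance * cp * (float) Math.sin(yaw),
                target[1] + distance * (float) Math.sin(pitch),
                target[2] + distance * cp * (float) Math.cos(yaw)
            };
        }
    }

Because the basis is rebuilt from the angles each frame, diagonal drags just change both angles at once and nothing accumulates error.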
24
How to abstract mouse movement event streams for networked multiplayer? I'm building a mouse driven 3rd person shooter with client server network architecture. My input system is event based. Keyboard input is captured by keys.S.onUp() keys.S.onDown() events. When these events are triggered I immediately pass them across the network to be replicated on the server and other clients. Mouse input is captured by a mouse.onmove() event that receives data from screen coordinates x,y. This method is fired constantly as the mouse is moved producing a flood of data. This may be premature optimization, but I feel like its a bad idea to immediately pass this flood over the network. I currently have an abstraction that wraps the mouse.onmove() event and creates essentially a mouse.moveStart() and mouse.moveStop(). This significantly reduces the flood of mouse input to send on the network. Unfortunately, this abstraction produces a less responsive feel on the local client experience. I am setting a lookVelocity. When mouse.moveStart() is triggered the camera position and player rotation is updated per frame by the lookVelocity value. This produces a stiff fixed rate movement. If you rapidly move the mouse the camera movement is not reflected. Additionally, this is an isomorphic universal code base. Meaning I am using the same code between client and server. Ideally, I would like to reuse the same mouse event abstraction between both client and server.
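A common compromise that keeps the local feel intact: apply every raw mouse delta to the local camera immediately, but only accumulate it for the network and flush the accumulated rotation at a fixed tick, letting the server and remote clients interpolate between the received orientations. Since this class only shapes input, it can live in the shared code base with no-op send/apply hooks on the server. A sketch, with the tick rate and method names as placeholders:

    public class LookInputChannel {
        private float pendingYaw = 0f, pendingPitch = 0f;
        private float sendAccumulator = 0f;
        private final float sendInterval = 1f / 20f; // 20 network updates per second

        void onMouseMove(float dx, float dy, float sensitivity) {
            float yaw = dx * sensitivity, pitch = dy * sensitivity;
            applyToLocalCamera(yaw, pitch);          // immediate, full fidelity
            pendingYaw += yaw;                       // batched for the network
            pendingPitch += pitch;
        }

        void update(float dt) {
            sendAccumulator += dt;
            if (sendAccumulator >= sendInterval) {
                sendAccumulator -= sendInterval;
                sendLookDelta(pendingYaw, pendingPitch); // one small packet per tick
                pendingYaw = 0f;
                pendingPitch = 0f;
            }
        }

        void applyToLocalCamera(float yaw, float pitch) { /* rotate the local view */ }
        void sendLookDelta(float yaw, float pitch)      { /* enqueue on the socket */ }
    }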
24
Unreal Y Axis boundary on a Camera Bounding Box I was wondering if I could get some help with modifying a Blueprint I'm currently using for a project in Unreal Engine. I am currently attempting to make a side scrolling platformer game and I'm using a camera bounding box Blueprint to restrict where the camera can move when tracking the player's movement, based on the following series of tutorials on YouTube by JvtheWanderer How to make a Metroidvania 2D Camera Blueprint System on UE. The camera bounding box works as intended, in that it prevents the camera from moving beyond the limitations of the camera box placed within the scene, but I want to go even further than that and establish a limitation within the camera bounding box that not only limits the camera's movement but also prevents the player from moving on the Y axis beyond the box's boundaries. Pretty much every platformer from Super Mario to Sonic the Hedgehog to Castlevania, have such boundary limitations, usually at the very start of the level and also frequently at the end. The limitation on the Y axis boundary is also frequently used to lock the player into a particular area for forced enemy encounters, mini boss fights as well as proper boss battles, a limitation that lasts until the player has vanquished the threat. I wish to implement similar boundaries on the camera bounding boxes within my project to restrict where the player can go without having to place walls everywhere to keep the player from wandering beyond the level's limits. Using walls to restrict the player's movement is fine for indoor areas, but it ends up looking distractedly out of place in outdoor areas if used in excess. I've included screenshots below of the Construction Script amp Event Graph from the SideScrollerCamera Blueprint I'm using. I would also include the Viewport and the Level Blueprint as well, except that the time being, I am unable post more than two links. Construction Script Event Script Any help with this matter would be greatly appreciated.
24
How can I emulate Diablo 3's Isometric view using Perspective? Using DX11 and SimpleMath, I am building an isometric game like Diablo 3 in 3D and I want to use a perspective camera that emulates its top down view. Projection Matrix: CreatePerspectiveFieldOfView(1, width / height, 0.1f, 100.0f). But after this I am a bit unsure how I am supposed to rotate the camera. I assume I could just do p + Vector3(10, 10, 10) to get a 45 degree angle at any given p. How do I properly position and rotate the camera and point it at position p, to mimic the Diablo 3 view?
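One deterministic way to get that kind of fixed-angle camera: pick a pitch angle and a distance, derive the offset from them, and aim a look-at at the focus position p every frame; zooming is then just the distance and turning the view is just the yaw. The angles below are tuning guesses, not anything official from Diablo 3, and the look-at mentioned in the comment stands in for whatever helper SimpleMath exposes. Sketched in Java for illustration:

    public class IsoCamera {
        // Returns { eye, target, up }; feed these into your look-at helper.
        static float[][] place(float[] p, float pitchDeg, float yawDeg, float dist) {
            float pitch = (float) Math.toRadians(pitchDeg);
            float yaw   = (float) Math.toRadians(yawDeg);
            float[] eye = {
                p[0] + dist * (float) (Math.cos(pitch) * Math.sin(yaw)),
                p[1] + dist * (float) Math.sin(pitch),
                p[2] + dist * (float) (Math.cos(pitch) * Math.cos(yaw))
            };
            float[] up = { 0f, 1f, 0f };
            return new float[][] { eye, p, up };
        }
        // e.g. place(p, 55f, 45f, 30f): roughly 55 degrees of down-tilt, an
        // eighth-turn of yaw, 30 units away, always framing p the same way.
    }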
24
What is "simulated human operator" cutscene camera movement called? I have some questions related to the non smooth camera movement in cutscenes of quite a few games which appears to try and simulate the nervousness of a human operator holding it. This is opposed to camera shake in general gameplay when some event takes place, i.e. explosions, though they do also occur in the cutscenes I'm talking about. Examples of this effect are in many games, but the best example I have seen of what I am trying to describe is the opening cutscene (and many cutscenes) of Metal Gear Solid V Ground Zeroes. Be sure to choose 60 FPS to make the effect clearer. Is there a name for this "simulated human operator" effect, and do you know how this may have been achieved? My guess would be that the camera is being offset in both axes by some kind of random deviation function, which is always slightly elastically moving towards the origin to stop it wandering off wildly. Perhaps Brownian motion?
24
How to properly rotate towards a local point in Unity C#? (a local LookAt) I am having a nightmare trying to make a child object rotate towards a given point of its parent object, similar to what is possible at the world level when using LookAt. The problem is that most functions related to rotating in Unity C# do not work at the local level. The description of the transform.Rotate function gives the impression that it allows that, through passing the Space.Self parameter. However, to use that function one has to know the 3 angles between the 2 points of interest, and there is no function that allows such a calculation. Could anyone please help with implementing a local LookAt?
24
Min. Distance for Spotlights in Ogre3d? I have a simple scene with only one empty box. Within that box I have the camera attached to a scene node, which the user can move (translate/rotate). Additionally a spotlight is attached to the camera's scene node, facing in the same direction as the camera. So when the camera is moved, the light moves as well.
m_pCamera = m_pSceneMgr->createCamera("Camera");
m_pCamera->setNearClipDistance(0.01);
m_pCamNode->attachObject(m_pCamera);
Light* spotLight = m_pSceneMgr->createLight("spotLight");
spotLight->setType(Light::LT_SPOTLIGHT);
spotLight->setDiffuseColour(1, 1, 1);
spotLight->setSpecularColour(1, 1, 1);
spotLight->setSpotlightRange(Degree(50), Degree(100));
spotLight->setSpotlightNearClipDistance(0);
spotLight->setDirection(m_pCamera->getDirection());
m_pCamNode->attachObject(spotLight);
So, this basically works fine. There is one problem though: as soon as the user moves the camera very close to a side of the box, suddenly everything is black and I can't see anything. I am sure the camera is NOT moving through the box's side, it is still within the box. So the problem seems to be that the light does not "work" when it is too close to an object, as if the object (the box side in my case) is not reflecting the light any more. Any ideas?
24
How to determine what triangle in a mesh the cursor is pointing to? I have a game I'm working on in which items are "placed" and "selected". In order to do this, I need to determine what triangle in a mesh the cursor is pointing to. I have no trouble with getting the cursor position or the orientation position of the camera the problem is figuring out how they relate. I'm guessing that this involves ray picking, but I have no idea on how to convert a cursor position to a ray that I can test for intersection with a triangle. A little bit of background Game engine Irrlicht Language C
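Once the cursor has been unprojected into a world-space ray (Irrlicht's scene collision manager can build that ray from screen coordinates, if memory serves, and also offers triangle-picking helpers), the per-triangle test is usually Moller-Trumbore: run it against the candidate triangles and keep the hit with the smallest positive distance. A standalone sketch of that test, with vectors as float[3]:

    public class TrianglePicker {
        // Moller-Trumbore: returns the distance along the ray to the hit, or
        // null if the ray misses this triangle.
        static Float rayTriangle(float[] origin, float[] dir,
                                 float[] v0, float[] v1, float[] v2) {
            float[] e1 = sub(v1, v0), e2 = sub(v2, v0);
            float[] p = cross(dir, e2);
            float det = dot(e1, p);
            if (Math.abs(det) < 1e-7f) return null;    // ray parallel to triangle
            float inv = 1f / det;
            float[] t = sub(origin, v0);
            float u = dot(t, p) * inv;
            if (u < 0f || u > 1f) return null;
            float[] q = cross(t, e1);
            float v = dot(dir, q) * inv;
            if (v < 0f || u + v > 1f) return null;
            float dist = dot(e2, q) * inv;             // distance along the ray
            if (dist > 0f) return dist;
            return null;
        }

        static float[] sub(float[] a, float[] b) {
            return new float[] { a[0]-b[0], a[1]-b[1], a[2]-b[2] };
        }
        static float[] cross(float[] a, float[] b) {
            return new float[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
        }
        static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    }

For meshes of any real size, test against a spatial structure (octree, BVH) rather than every triangle, and form the ray itself by unprojecting the cursor at the near and far planes with the inverse view-projection matrix.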
25
If I release code under GPLv3, can I specify what parts of users' code can be released under a new license? I have a few questions about what users of my engine could copyright in their own project. The way the engine would work is that it would be a standalone program in Java that would load all of the scripts, assets, and data from the folder that the game lives in. The code could be written in Python or compiled to Java. On to my questions: the code that they write would access some parts of my engine through Jython, and so I assume that code would have to be released under the GPL. But what about the rest of the game other than the code? For example, could they copyright their own art? I guess what I really want is that modifications of the engine's code must be released under the GPL, but projects created using my engine can be released under any license, as long as they distribute the engine itself under the GPL.
25
Splitting profit Equal share vs equity investment (work money spent share) Coming up with an appropriate one liner question was so hard I wrote the body before the header... So we're a team of 4 developers two designers, two coders to put it simply (naturally we don't tie ourselves up with labels). We've come up with a great game design together, and we're prepared to spend a great bulk of our time making a prototype. Problem is, there's a major difference of opinion when it comes to sharing potential revenue made by the game. Investment Share Two of us would like to have everyone keep a rough log of hours spent. Converting that to a previously agreed upon per hour salary, we could add it up with real money invested, and from that determine each developer's 'stake' in the game, i.e. their share. Equal Share The other two do not agree with that, among other reasons because they think some pieces of work are worth more than others, like a really great idea for a feature. Because of such immeasurables, they think an equal share for all would be the easiest. Another argument said (paraphrasing) "logging hours would take away the fun. I'm serious about this project, but I want to work with it on hobbyist terms, not like a second job." We need to formalize an agreement before development has gone too far, but how can we get past these core differences of opinion? Is this a common problem among first time startups?
25
Do I still owe Epic royalties if I am not enrolled in the Unreal Engine subscription? I have just started using Unreal Engine 4 and I have had a very good experience, though I have left the subscription now. I've made an Android game with it and I am planning to release it, but I have some doubts about the license. Suppose I post my game on my own website for, say, , will I still have to give 5 of my income to Epic? How will they come to know that the game is made in Unreal Engine? I am very confused in taking this step and I'm not really understanding what they mean by "5 gross revenue."
25
Will there be legal issues in using cracked assets for internal development? While the artwork for my game is being finished, can I use cracked assets for internal prototyping or testing? I am not going publish anywhere outside my company. Does it attract any legal issues? By "cracked assets" here I mean extracting embedded graphic assets like images and .swf files from a browser cache or an encrypted URL.
25
What constitutes satire? For business reasons, I would like to know what constitutes satire in a game. Also, is there a better place to ask questions like this on the business side of game development?