_id | text
---|---
0 | Flickering issue on Linux build only I'm about to release my second game in the next couple of days. A tester reported that he has a flickering issue running the game on Linux. It's related to objects like the water, the hex tiles, and objects on top of the hex tiles. On Windows everything works like a charm, but on Linux all of those graphics flicker. I think it's a graphical problem related to the Z buffer, since the flickering objects all have the same world-space position. I'm using URP and created all the shaders with Unity Shader Graph. Does anyone know how I can fix the issue? This is how it looks. Any kind of help is appreciated! Thanks in advance! |
0 | Issue with interpolation on a burn shader (lerp and smoothstep) I'm trying to create a simple burn shader. See here for more info on the method I'm using. However, I don't get why replacing the smoothstep with a lerp gives completely different results. Am I missing something in my maths? As far as I know they should return similar values given the same parameters. // Clips materials, using an image as guidance. Use clouds or random noise as the slice guide for best results. Shader "Custom/Dissolving" { Properties { _MainTex ("Texture (RGB)", 2D) = "white" {} _BorderTex ("Border Texture (RGBA)", 2D) = "white" {} _EdgeColor ("Edge Color", Color) = (1,0,0) _EdgeSize ("Edge Size", float) = 0.1 _SliceGuide ("Slice Guide (RGB)", 2D) = "white" {} _SliceAmount ("Slice Amount", Range(0.0, 1)) = 0.2 [MaterialToggle] UseClipInstruction ("Use Clip Instruction", Float) = 0 } SubShader { Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" } Cull Off Blend SrcAlpha OneMinusSrcAlpha CGPROGRAM // if you're not planning on using shadows, remove "addshadow" for better performance #pragma surface surf Lambert alpha addshadow struct Input { float2 uv_MainTex : TEXCOORD0; float2 uv_SliceGuide : TEXCOORD1; float SliceAmount; }; sampler2D _MainTex; sampler2D _BorderTex; sampler2D _SliceGuide; half _SliceAmount; half _EdgeSize; half3 _EdgeColor; void surf (Input IN, inout SurfaceOutput o) { o.Alpha = 1; // Green is a good color for estimating grayscale values half fragSliceValue = tex2D(_SliceGuide, IN.uv_SliceGuide).g; #ifdef UseClipInstruction clip(fragSliceValue - _SliceAmount); #else // TODO: look for an alternative to this if (fragSliceValue - _SliceAmount < 0) o.Alpha = 0; #endif half rampX = smoothstep(_SliceAmount, _SliceAmount + _EdgeSize, fragSliceValue); half3 ramp = tex2D(_BorderTex, half2(rampX, 0)) * _EdgeColor.rgb; o.Emission = ramp * (1 - rampX); o.Albedo = tex2D(_MainTex, IN.uv_MainTex); } ENDCG } Fallback "Diffuse" } Left is the result of smoothstep (the provided code), right is what happens when I replace smoothstep with lerp. |
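The difference the asker observes follows directly from the two functions' definitions: `lerp(a, b, t)` treats its third argument as a blend factor and returns a point between `a` and `b`, while `smoothstep(edge0, edge1, x)` treats its third argument as a value to compare against the two edges, clamps, and returns an eased 0-to-1 value. A minimal sketch (Python standing in for the HLSL built-ins) makes the mismatch visible:

```python
def lerp(a, b, t):
    # Linear interpolation: t is the blend factor, not clamped.
    return a + (b - a) * t

def smoothstep(edge0, edge1, x):
    # x is compared against the edges, clamped to [0, 1], then eased.
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

# With edges 0.2 and 0.3 (slice amount and slice amount + edge size):
print(smoothstep(0.2, 0.3, 0.1))  # 0.0  -> fully below the lower edge
print(smoothstep(0.2, 0.3, 0.5))  # 1.0  -> fully above the upper edge
print(lerp(0.2, 0.3, 0.5))        # 0.25 -> just a point between the two edges
```

So swapping one for the other in the shader changes the meaning of the third parameter entirely, which is why the results look nothing alike.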
0 | Rotate bone on 1 axis to look at another object (Unity 3D) I'm new to game development and I'm trying to create a basic 3D Tower Defence game as my first project. I've created a basic turret in Blender and I'm now trying to rotate 2 different bones to make the turret follow the enemy. The first bone (which I have working) rotates the turret to face the direction of the enemy. The second, which I can't get right, needs to rotate the gun at the top of the turret to look down at the enemy. I need it to keep facing the same direction as the rest of the turret and don't want it to tilt to either side, so it needs to move on 1 axis only. I have tried various things, including examples from the Internet, but I can't figure it out. So far I've managed to get it to look down at its base, up at the sky, and spin like a wheel. Basically everything except where it's supposed to be looking. Here is the code I have at the moment: if (enemiesInRange.Count > 0) { // Rotate Platform - this section works fine Transform platform = transform.root.Find("Armature/BaseBone/PlatformBone"); Quaternion targetRotation; // Final rotation (i.e. facing the enemy) Vector3 turretPosition = platform.position; // Store the turret's current position turretPosition.y = 0; // 0 the Y axis to stop the turret tilting up or down Vector3 targetPosition = enemyManager.activeEnemies[enemiesInRange[0]].transform.position; targetRotation = Quaternion.LookRotation(targetPosition - turretPosition); float rotateSpeed = 2.0f * Time.deltaTime; platform.transform.rotation = Quaternion.Lerp(platform.transform.rotation, targetRotation, rotateSpeed); // Rotate GunBox - TODO: this part is not working Transform gunBox = transform.root.FindChild("Armature/BaseBone/PlatformBone/LowerArmBone/UpperArmBone/GunBoxBone"); Vector3 gunBoxPosition = gunBox.position; float angle = Vector3.Angle(gunBoxPosition, targetPosition); gunBox.RotateAround(gunBox.position, Vector3.left, angle); Debug.Log(angle); } Here's a picture of the turret to give you an idea what I'm working with. You can see in the screenshot that the bone's default X rotation is roughly 50. Enemies walk through the trench around the turret. The PlatformBone rotates everything from the silver platform upwards. I also don't understand why the line below makes the turret rotate instead of tilting it. I would expect to rotate the turret around the Y axis to keep it flat on the floor and face another direction, yet I seem to have to do the opposite (if I select the PlatformBone, its Y axis is facing up as expected): turretPosition.y = 0; // 0 the Y axis to stop the turret tilting up or down |
0 | Custom Update Function OR Multiple TimeScales I'm trying to create some interesting variations on timescales in my game. In essence, I'd like at least two, probably three or four separate timescales. Player: the player timescale is likely to be the game scene timescale always. Mobs: mob timescales can be altered (slower primarily, but possibly also faster) depending on cast spells or other effects in the game. Environment: like mobs, the environment may be slowed or sped up depending on some game effects that occur. This would control things like traps, torches, etc. I'm thinking about things such as a Slow Time spell that might slow most mobs, but maybe not others (bool flag), something like a time elemental or something of that nature. I'd also like to possibly slow environmental time so the player may have a single-use item that would allow them to easily navigate traps (but not have guaranteed success if they still don't see the pattern or such). Having controller scripts added to all objects that listen for calls to change a multiplier on Time.deltaTime could be an option, but it's clunky, would have to be in all of the scripts, would have to be accounted for in many areas (movement, combat, possibly related effects), and is just not an ideal option. Inversely, the same effect (a lich, for example, casting slow on a player) could require opposing things. If I had global settings that contained a multiplier for player/mob/object that were impacted by an effect, I could still always use a multiplier with Time.deltaTime, but I still think it would be clunky. Has anyone done this? Are there any good ideas? I've searched a bit, but didn't find anything terribly useful. I've considered custom Update() classes, though I'm not entirely sure how to implement them. Maybe extend the FixedUpdate() class that's built in to take other things into account?
I know that Time.timeScale is inherently for the current scene, but could I make instances of Time that are used by different scripts FixedUpdate()? Am I overthinking this? I feel like this shouldn't be quite so difficult. |
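The "global settings with a multiplier per group" idea the asker sketches is usually the cleanest approach: a small "time channel" object per group (player, mobs, environment), where scripts ask their channel for delta time instead of using the engine's directly. A language-agnostic sketch (Python; `TimeChannel` and its members are hypothetical names, not a Unity API):

```python
class TimeChannel:
    """A named time stream whose delta time is the real delta scaled by a multiplier."""
    def __init__(self, scale=1.0):
        self.scale = scale      # a Slow Time spell just writes this field
        self.elapsed = 0.0      # per-channel clock, analogous to Time.time

    def tick(self, real_dt):
        dt = real_dt * self.scale
        self.elapsed += dt
        return dt

# One channel per group; each script reads its own channel every frame.
player = TimeChannel(1.0)
mobs = TimeChannel(1.0)

mobs.scale = 0.5                 # "Slow Time" halves mob time only
player_dt = player.tick(0.02)    # 0.02
mob_dt = mobs.tick(0.02)         # 0.01
```

A mob immune to the spell (the asker's bool flag) would simply hold a reference to a different channel, so the immunity lives in wiring rather than in every movement script.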
0 | Why can't I duck reverb on the mix bus in Unity? I'm surprised there is so little documentation on this issue, but I'm trying to duck the audio on all the mix buses in Unity so that you can clearly hear the VO of the game. It works great, except that I have some reverb zones in the game, and for any sound that plays in these reverb zones, the actual reverb itself is NOT ducked. So the VO is accompanied by a huge wall of wishy-washy muddy noise. Has anyone dealt with this before? How can we deal with this? |
0 | How can I clamp the transform to a range? I am trying to create a level editor in Unity. When the position of an object is changed, I want that object to have its X and Y positions clamped between 1 and 10. So if the position is changed to 15, it will snap back to 10, and if the position is changed to -5, it will snap back to 1. I have tried using the OnValidate() method, but without success. It snaps everything correctly when I reload the script (make a change to the script and reopen Unity), but it will not snap it on the fly. void OnValidate() { float newX = Mathf.Clamp(child.position.x, 1, 10); float newY = Mathf.Clamp(child.position.y, 1, 10); transform.localPosition = new Vector2(newX, newY); } Any suggestions on how I can accomplish this? |
0 | Maintaining facing direction at the end of a movement I want to make my 2D top-down walking animation stop and revert to the idle animation while continuing to face in the same direction as the previous movement. For example, when I go up then stop, my character should continue to look up. If I go down, it should continue to look down. I use lastMoveX and lastMoveY floats for idle, and for walking I use moveX and moveY floats. moveX and moveY change when I move with the joystick, but lastMoveX and lastMoveY do not change, and I don't know how to fix this. Here is my code: using System.Collections; using System.Collections.Generic; using UnityEngine; public class PlayerController : MonoBehaviour { private Rigidbody2D myRB; private Animator myAnim; public Joystick joystick; public MoveByTouch controller; [SerializeField] private float speed; // Use this for initialization void Start () { myRB = GetComponent<Rigidbody2D>(); myAnim = GetComponent<Animator>(); } // Update is called once per frame void Update () { myRB.velocity = new Vector2(joystick.Horizontal, joystick.Vertical) * speed * Time.deltaTime; myAnim.SetFloat("moveX", myRB.velocity.x); myAnim.SetFloat("moveY", myRB.velocity.y); if (joystick.Horizontal == 1 || joystick.Horizontal == -1 || joystick.Vertical == 1 || joystick.Vertical == -1) { myAnim.SetFloat("lastMoveX", joystick.Horizontal); myAnim.SetFloat("lastMoveY", joystick.Vertical); } } } |
0 | Controlling Source Image From Script I am trying to make a health system where hearts display how much health you have. On each heart, I have an Image component. I am trying to control what source image the Image component uses from a script, but it is not letting me. I read the documentation for the Image component here and it has no sprite property. This is my code: using System.Collections; using System.Collections.Generic; using UnityEngine; using UnityEngine.UIElements; public class HeartScript : MonoBehaviour { public GameObject heart1; public GameObject heart2; public GameObject heart3; public Sprite halfHeart; public Sprite fullHeart; public Sprite emptyHeart; private double playerHealth = 3; // Update is called once per frame void Update() { if (playerHealth == 2.5) { heart1.GetComponent<Image>().image = fullHeart; heart2.GetComponent<Image>().image = fullHeart; heart3.GetComponent<Image>().image = halfHeart; } else if (playerHealth == 2) { heart1.GetComponent<Image>().image = fullHeart; heart2.GetComponent<Image>().image = fullHeart; heart3.GetComponent<Image>().image = emptyHeart; } else if (playerHealth == 1.5) { heart3.GetComponent<Image>().image = emptyHeart; heart2.GetComponent<Image>().image = halfHeart; heart1.GetComponent<Image>().image = fullHeart; } else if (playerHealth == 1) { heart3.GetComponent<Image>().image = emptyHeart; heart2.GetComponent<Image>().image = emptyHeart; heart1.GetComponent<Image>().image = fullHeart; } else if (playerHealth == .5) { heart3.GetComponent<Image>().image = emptyHeart; heart2.GetComponent<Image>().image = emptyHeart; heart1.GetComponent<Image>().image = halfHeart; } else if (playerHealth == 0) { heart3.GetComponent<Image>().image = emptyHeart; heart2.GetComponent<Image>().image = emptyHeart; heart1.GetComponent<Image>().image = emptyHeart; } } public void RemoveHealth() { playerHealth = playerHealth - 0.5; } } How do I change the sprite the heart uses from a script? |
0 | How to find sequences of consecutive cards of the same suit in a hand, including wild cards? (Indian Rummy) I have a hand of 13 cards (chosen from 2 standard decks of 52 cards + 4 jokers, so it's possible for the same card to appear multiple times). I want to search this hand to find sequences according to the rules of Indian Rummy: A sequence must contain at least 3 cards, and up to 10 cards (we need at least two sequences to win / minimize loss penalties, so at least 3 cards in the hand of 13 need to be left over to form another sequence). Cards in a sequence have consecutive rank: 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K, A (wild cards count as any rank). A sequence must be all of the same suit (wild cards can count as any suit). A sequence is called "pure" if it contains no wild cards standing in for other cards; e.g. if Jacks are wild, the sequence 10, J, Q of one suit is a pure sequence, while the (also valid) sequence 3, 4, 5, J, 7 is not a pure sequence. Here's what I've come up with so far, but it does not correctly find all sequences matching the rules above: private List<Card> FindSequences() { List<Card> sequence = new List<Card>(); int Count = 0; // sort cards in order cards = cards.OrderBy(i => i.FaceValue).ToList(); sequence.Add(cards[Count]); for (int i = 1; i < 9; i++) { if ((cards[i - 1].cardRank == cards[i].cardRank - 1) && (cards[i].cardRank == cards[i + 1].cardRank - 1) && (cards[i + 1].cardRank == cards[i + 2].cardRank - 1) && (cards[i + 2].cardRank == cards[i + 3].cardRank - 1)) { sequence.Add(cards[i - 1]); sequence.Add(cards[i]); sequence.Add(cards[i + 1]); } } // check: if sequence is less than 3, clear data if (sequence.Count < 3) { sequence.Clear(); wildCard.Clear(); } return sequence; } |
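One structural problem in the attempt above is that it scans the whole sorted hand at once, so runs can straddle suit boundaries. Setting wild cards aside, the core of the search is: group by suit first, then walk each suit's sorted ranks looking for consecutive runs. A sketch of that core in Python (the `(suit, rank)` tuple representation is hypothetical, not the asker's `Card` class):

```python
from itertools import groupby

def find_runs(hand, min_len=3):
    """Find runs of consecutive ranks within each suit.
    hand: list of (suit, rank) tuples, rank 2..14 (Ace high).
    Duplicates collapse via set(): a run can't use the same rank twice,
    even with two decks. Wild-card substitution is not handled here."""
    runs = []
    for suit, group in groupby(sorted(hand), key=lambda c: c[0]):
        ranks = sorted(set(r for _, r in group))
        run = [ranks[0]]
        for r in ranks[1:]:
            if r == run[-1] + 1:
                run.append(r)          # still consecutive: extend the run
            else:
                if len(run) >= min_len:
                    runs.append([(suit, x) for x in run])
                run = [r]              # gap: start a fresh run
        if len(run) >= min_len:
            runs.append([(suit, x) for x in run])
    return runs

hand = [("S", 3), ("S", 4), ("S", 5), ("H", 9), ("S", 9), ("H", 10)]
print(find_runs(hand))  # [[('S', 3), ('S', 4), ('S', 5)]]
```

Wild cards would then be a second pass: for each gap of length g inside a near-run, check whether g unused wild cards can fill it, and flag the result as impure.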
0 | Unity: problem in limiting rotation of object I am trying to make a game with Unity. In this game, the player uses the mouse to rotate a container or box around x and z (rotate to left and right, and also toward the camera and away). I used the code below to control the movement and limit it: float zRot = transform.eulerAngles.z; float xRot = transform.eulerAngles.x; float yRot = transform.eulerAngles.y; Debug.Log("xDir " + xRot.ToString() + " yDir " + yRot.ToString() + " zDir " + zRot.ToString()); if ((xRot > 330 || xRot < 30) && (zRot > 330 || zRot < 30)) { transform.Rotate(Input.GetAxis("Mouse Y"), 0, Input.GetAxis("Mouse X")); } else { Debug.Log(xRot.ToString() + " " + zRot.ToString()); transform.Rotate(transform.rotation.x, transform.rotation.y, transform.rotation.z); } The container rotates 60 degrees in x and z, and when it gets to the limits it stops, which is what I want. However, when I rotate in both directions to near the limits (330 or 30), the container becomes out of control and rotates freely and constantly. It's like the container has passed the limits. The questions are: Did I use a good approach to limit the rotation, and how do I stop the object completely when it gets to the limit of rotation? When and why does the object become out of control, exactly? |
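One source of trouble in the code above is that eulerAngles always reports angles in [0, 360), so the limit check near the wraparound needs the awkward `> 330 || < 30` form, and there is no direct way to clamp. A common trick (sketched in Python; the same arithmetic works in C#) is to remap to a signed angle first, after which limiting becomes a single symmetric comparison or a plain clamp:

```python
def signed_angle(deg):
    """Map an angle reported in [0, 360) (as eulerAngles does)
    to a signed angle in [-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0

# The 330/30 test collapses to one comparison against the limit:
print(signed_angle(330.0))               # -30.0
print(signed_angle(30.0))                # 30.0
print(abs(signed_angle(345.0)) <= 30.0)  # True: 345 is within +/-30 of zero
```

With a signed angle in hand, the object can also be stopped exactly at the limit by clamping the remapped value to [-30, 30] and writing it back, rather than skipping the rotation entirely once it is out of range.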
0 | How does one get sprites to hide/unhide using OnMouseDown() in Unity? I'm currently using Unity for a school project and I need to be able to hide and unhide sprites for "expansions". I've looked all over the internet for this and I can't seem to find an answer. Here is the script so far: using System.Collections; using Systems.Collections.Generic; using UnityEngine; [DisallowMultipleComponent] public class addBlock : MonoBehaviour { void OnMouseDown() { Debug.log(gameObject.name); } } |
0 | POST JSON method does not work when project has been exported to WebGL and uploaded to Firebase hosting I have a database set up on Firebase, and when I run my game in the Unity editor and as a standalone exported Windows .exe file, the call gets made correctly to the database and the JSON data gets posted correctly. However, when I export the exact same project to WebGL and upload it to my Firebase hosting server, the JSON call does not work. Here is my code, could someone please help? using System; using System.Collections; using System.Collections.Generic; using System.IO; using System.Text; using UnityEngine; using UnityEngine.Networking; using UnityEngine.SceneManagement; using UnityEngine.UI; public class GameOverMenu : MonoBehaviour { public InputField nameInputField; public void MainMenu() { SceneManager.LoadScene("Menu"); } public void SubmitScore() { StartCoroutine(SendScore()); SceneManager.LoadScene("Menu"); } // Use this for initialization void Start() { } // Update is called once per frame void Update() { } public IEnumerator SendScore() { Scoreboard s = new Scoreboard(nameInputField.text, PlayerPrefs.GetInt("Score"), PlayerPrefs.GetInt("Seed")); Dictionary<string, string> headers = new Dictionary<string, string>(); headers.Add("Content-Type", "application/json"); Hashtable data = new Hashtable(); data.Add("name", s.Name); data.Add("score", s.Score); data.Add("seed", s.Seed); UnityHTTP.Request postRequest = new UnityHTTP.Request("post", "https://alexclearythesisgame.firebaseio.com/Scoreboard.json", data); postRequest.Send(); yield return postRequest.isDone; } } |
0 | How to blur entire scene but a specific spot in Unity? What I want is basically A way to blur every object sprite on the scene, but have a "blur free" circular zone, that can move. And everything that's behind that circular zone won't have the blur effect applied to it. In a 2D mobile game, how would I do that, especially in a way that's not too heavy, performance wise(if possible). And if it's not possible to do that in a way that won't completely destroy my performance, I also have those sprites already "pre blurred" so maybe there's a way to have both blurred and "unblurred" objects at the same position, and only draw the right parts of them as they go through the scene and reach the blur free zone. If there's a way to do that, that'd also help immensely. Thanks for your time. |
0 | Tools or techniques for UV unwrapping texturing terrain with features such as roads? I'm trying to create a map similar to Outset island in Loz Wind Waker. Here is an example of what I'm trying to create As you can see, the roads more or less have dedicated tris. However, when I look at the UV data, it looks like a complete mess For reference, this is the texture without any overlays I have no idea how to go about creating a similar effect in my own game are there any tools I can use? What is this technique called? |
0 | (Unity) Serializing Data with Custom Object Stored by Reference I have a custom class that I serialize/deserialize to/from file(s), and it is not guaranteed to be identical each time the game runs (in this example, a language pack; it's possible to fix a typo manually). As a result, I do not derive it from UnityEngine.Object, and I deserialize it on each load. However, some classes that derive from UnityEngine.Object (MonoBehaviour) have references to it, and these references must persist (so that the language stays the same each load). As per the Unity scripting page, https://docs.unity3d.com/Manual/script-Serialization.html, this can cause undesired behavior: "If you store a reference to an instance of a custom class in several different fields, they become separate objects when serialized. Then, when Unity deserializes the fields, they contain different distinct objects with identical data." Naturally, I'd prefer this not be the case. Again from the Unity documentation linked above: "When you need to serialize a complex object graph with references, do not let Unity automatically serialize the objects. Instead, use the ISerializationCallbackReceiver interface to serialize them manually." I know how to implement the interface. How do I preserve the reference? (Disclaimer: not an expert in serialization, so if it's a simple fix, I'm sorry. Unity documentation doesn't provide an example for references.) |
0 | What happens when Time.time gets very large in Unity? As the documentation says: Time.time: The time at the beginning of this frame (Read Only). This is the time in seconds since the start of the game. And as far as I know, this time is stored in a float. So, my question is: what will happen when the value of time becomes very large? Can it overflow? Will it lose precision and cause bugs? Will the game just crash, or what? |
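This can be made concrete. A single-precision float will not overflow at any realistic session length, but the gap between adjacent representable values (one ulp) grows with magnitude, so frame-sized deltas degrade and eventually vanish. A Python sketch, round-tripping through 4 bytes to emulate float32:

```python
import struct

def as_float32(x):
    # Round-trip through 4 bytes to get IEEE-754 single precision,
    # the storage format in question.
    return struct.unpack("f", struct.pack("f", x))[0]

day = 86400.0        # one day of uptime, in seconds
step = 1.0 / 240.0   # a fast frame's delta time (~4 ms)

# After a day, the ulp is already 1/128 s: the ~4 ms step gets
# rounded up to a whole ulp instead of being added exactly.
print(as_float32(day + step) - day)   # 0.0078125

month = 30 * day
# After a month, the ulp is 0.25 s: the step rounds away entirely,
# so time-based animation driven by such a value would freeze.
print(as_float32(month + step) == as_float32(month))  # True
```

This is why long-running games (kiosks, servers, speedrun timers) accumulate time in a double or restart their clocks, rather than trusting a float far from zero.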
0 | Make handlers visible on scene load In my dev enviroment I have a couple custom handlers I want to always be visible. I have already figured out that I can do this by deriving a class from Editor, linking it to the desired component type, and then appending the render method to SceneView.duringSceneGui (in OnEnable(), as the target gameobject information will be available there). However, this appending now only happens once a specific gameobject carrying the component has been selected. Is there a way I could have the Editor's OnEnable() method properly trigger on opening a scene, for all relevant gameobjects, without manually selecting them first? |
0 | FPS How to handle accuracy with red dot sight on guns I have a gun which you can aim down the red dot sight. If you are moving while aimed down sight, there is some bobbing that comes into play. The arms are bobbing, not the camera itself. Currently when the player fires their gun, I fire a ray from the center of the camera and check for a hit on a player. The problem is, if the player is aiming and moving, the red dot is bobbing(since the arms are) but because the camera isn't, the ray always fires down the center which doesn't line up with the red dot. It Doesn't feel right in gameplay. I was hoping someone may have some ideas to offer on how to handle this.. |
0 | Unity: using raw mocap data without actually moving the character I'm playing around with animations and Unity's raw motion capture data asset pack. My problem is as follows: I have an idle, a walk-forward and a run-forward animation, and play them based on the keyboard input. But I realized that when I move my character backwards (using a movement script) while still playing the walk-forward animation, the character basically moves backward and forward at the same time. So how can I play the animation without it actually affecting the position of the character? Thank you |
0 | How to make effective permanent rubble in a voxel game made in Unity3D? I'm working on a semi-voxel game, and I have already optimized the voxel structures' meshes by combining and chunking them. When I destroy a voxel, I spawn a low-poly voxel fragment with a rigidbody and a collider, and then I add an explosion force. Even if I turn off rigidbodies when they stop moving, or disable physics overall, the number of separate meshes to render causes a huge fps loss after a few destroyed voxels. How could I achieve permanent rubble? |
0 | Why does my character move diagonally? So I've been trying to make the player face towards the direction I pressed. That part works just fine, but when I try to move the player it moves diagonally instead of left/right. Any clues? if (Input.GetKey(KeyCode.RightArrow)) { // moves the player rb2d.velocity = new Vector3(2.5f, 0, 0); transform.Translate(rb2d.velocity * Time.deltaTime); // flips the player right if (human.flipY == true) { human.transform.rotation = Quaternion.Euler(0, 0, 270); } else { transform.rotation = Quaternion.Euler(0, 0, 270); } } |
0 | Unity settings overlapping other settings error In my Unity game, I am using the amazing Unity asset Aura volumetric lighting, and when I add its script to my camera this happens: Yeah... I can't find anyone else who got these settings overlapping each other on Stack Exchange or anywhere else. Not only do they overlap, but I can actually change its settings and the camera settings if I click in the right areas! I restarted Unity but it still doesn't fix anything. Please help. Thanks in advance! |
0 | Method for Modifying Numbers based on Attributes in a Game A game that I'm working on in Unity has a diverse set of spells. The player character and NPCs all have attributes which modify these spells in different ways. Our design is very fluid, and the functions that modify these spells are continuously changing. As of now I've been hard-coding the modifying function into the spell class itself while leaving different variables open for designers to modify it. Example, for the amount of missiles fired: BaseQty + Strength * MissilesPerStrength, leaving the BaseQty and MissilesPerStrength attributes open in the inspector for designers. As our game balancing is changing so much, I feel as if only leaving certain variables open isn't enough, as I'll have to go back and change the formula itself if required. I've thought about allowing designers to script the function itself, and having the spell class get the number for (for example) missiles fired by interpreting the script, but I'm not sure how to go about doing this, and in my mind it feels a little needlessly complicated for the small team that we are. It's not too much of a hassle to go into the class and change the formula, it just feels wrong not to have it separate. I'm looking for critique and suggestions on my ideas if possible. Thanks for reading. |
0 | FindObjectsOfType returns null array I am making the game in this tutorial: https www.youtube.com watch?v OR0e 1UBEOU and at 2:53:23 he is coding a LevelController script. I noticed that mine wasn't working the way his was, so I went to try and debug the error. I found that the error was in the function FindObjectsOfType(). You can find my code below. using System.Collections; using System.Collections.Generic; using UnityEngine; public class LevelController : MonoBehaviour { private Enemy[] enemies; private void onEnable() { enemies = FindObjectsOfType<Enemy>(); } // Update is called once per frame void Update() { Debug.Log(enemies); } } When I run this, my debug log returns null values, which is confusing because I have 3 objects (Monster, Monster (1) and Monster (2)) which have the Enemy script assigned to them. What could be going wrong? |
0 | Moving Object on One Axis Hi, I'm trying to move some 3D text on one axis, but for some reason it moves on 2 with this code. Can someone explain why this is and how to fix it? Maybe this is a simple solution but I can't find it. using UnityEngine; using System.Collections; public class MoveTitle : MonoBehaviour { public bool ScrollDown = false; // Use this for initialization void Start () { } // Update is called once per frame void Update () { if (Input.GetKeyDown("up") && ScrollDown == false) { transform.Translate(0.0f, 1.2f, 0.0f); ScrollDown = true; } if (Input.GetKeyUp("up")) { ScrollDown = false; } if (Input.GetKeyDown("down") && ScrollDown == false) { transform.Translate(0.0f, -1.2f, 0.0f); ScrollDown = true; } if (Input.GetKeyUp("down")) { ScrollDown = false; } } } |
0 | How can I make a gate with two cubes that close/open nonstop? using System.Collections; using System.Collections.Generic; using UnityEngine; public class Gate : MonoBehaviour { public GameObject[] gateParts; // Start is called before the first frame update void Start() { } // Update is called once per frame void Update() { for (int i = 0; i < gateParts.Length; i++) { gateParts[i].transform.position = Vector3.MoveTowards(gateParts[i].transform.position, gateParts[i + 1].transform.position, Time.deltaTime * 3f); } } } I want to make both cubes move towards each other, and when they collide or touch each other by distance, move them away from each other again: closing and opening the gate nonstop. |
0 | Rotating quaternion smoothly with acceleration and deceleration I have this code for my linear acceleration and deceleration: var currentSpeed = velocity.translation.z; var decelerationDistance = ((currentSpeed * currentSpeed) / movementSettings.acceleration) * 0.5f; if (distanceFromTarget > decelerationDistance) { velocity.translation = float3(0f, 0f, min(currentSpeed + (movementSettings.acceleration * deltaTime), movementSettings.maxSpeed)); } else { velocity.translation = float3(0f, 0f, max(currentSpeed - (movementSettings.acceleration * deltaTime), 0f)); } Assuming the object is already pointed at the target, this accelerates an object up to its maximum speed at its acceleration rate, and then slows it down by the acceleration rate so that it comes to a stop exactly on the target. I'd like to have the same behaviour apply to rotation (which is a quaternion), i.e. assuming the object is stationary, smoothly rotate to a target rotation, obeying acceleration and max rotation speed limits. I have made a start at converting this code to work on quaternions: private static float Magnitude(Unity.Mathematics.quaternion quat) { return CalculateQuaternionDifference(Unity.Mathematics.quaternion.identity, quat); } private static float CalculateQuaternionDifference(Unity.Mathematics.quaternion a, Unity.Mathematics.quaternion b) { float dotProduct = dot(a, b); return dotProduct > 0.99999f ? 0f : acos(min(abs(dotProduct), 1f)) * 2f; } ... var currentRotSpeed = Magnitude(velocity.rotation); var rotDecelerationDistance = ((currentRotSpeed * currentRotSpeed) / movementSettings.acceleration) * 0.5f; if (distance > rotDecelerationDistance) { velocity.rotation = slerp(Unity.Mathematics.quaternion.identity, mul(rotationDiff, velocity.rotation), movementSettings.acceleration); } else { velocity.rotation = slerp(Unity.Mathematics.quaternion.identity, mul(inverse(rotationDiff), velocity.rotation), movementSettings.acceleration); } My questions are: Unity.Mathematics.math.slerp() doesn't seem to play well with values of t over 1. When acceleration and deltaTime are low it works, but if they increase too much it breaks in ways I don't understand. How can I multiply a quaternion by a scalar the same way I multiply the linear acceleration by deltaTime? This doesn't follow any max rotational speed limits or stop at 0. What is the equivalent of min/max() for a quaternion? rotationDiff is the rotational difference between the current orientation and the target orientation. As the current orientation gets closer to the target orientation, this distance gets smaller. I know I should be using some static value instead, but I'm not sure what. Is there a way to normalize rotationDiff so it always has a magnitude of 1 radian? |
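For reference, the CalculateQuaternionDifference helper above is computing the standard angular distance between two unit quaternions, 2·acos(|a·b|), where the absolute value of the dot product handles the double cover (q and -q represent the same rotation). A small Python check of that identity (the quaternion layout (x, y, z, w) and helper names are this sketch's own, not a Unity API):

```python
from math import acos, radians, degrees, sin, cos

def quat_about_y(deg):
    """Unit quaternion (x, y, z, w) for a rotation of `deg` degrees about Y."""
    h = radians(deg) / 2.0
    return (0.0, sin(h), 0.0, cos(h))

def quat_angle(a, b):
    """Angle in radians between two unit quaternions: 2 * acos(|a . b|)."""
    d = abs(sum(x * y for x, y in zip(a, b)))
    return 2.0 * acos(min(d, 1.0))   # clamp guards acos against rounding

identity = (0.0, 0.0, 0.0, 1.0)
q = quat_about_y(90.0)
print(round(degrees(quat_angle(identity, q)), 3))  # 90.0
```

This is also why this angle is the natural "magnitude" to scale: the usual approach is to extract the axis and this angle from rotationDiff, integrate the angular speed as a plain scalar exactly like the linear code, and rebuild a quaternion from axis plus the clamped angle, rather than looking for a quaternion min/max.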
0 | How can I use spectrum data to calculate energy values of certain frequency bands? I'm working on a rhythm game, and I need to detect beats in a piece of music. After all the research and reading through old posts in the Unity forum, reddit, etc., as far as I understood, I need to calculate the average energy, and if there is a sudden energy change/spike, I want to register it as a beat. So here's the thing, as far as I understood: public class something : MonoBehaviour { public AudioSource audioSource; public float[] freqData = new float[1024]; void Start () { audioSource = GetComponent<AudioSource>(); } void Update () { audioSource.GetSpectrumData(freqData, 0, FFTWindow.BlackmanHarris); } } Let's say I have this code in place. My sampling rate is 48000 Hz. The array size determines the frequency range of each element. So with a size of 1024, my frequency resolution is 23.4 Hz. So freqData[0] represents the frequencies between 0 and 23.4 Hz, freqData[1] represents 23.4 to 46.8 Hz, and so on and so forth. And I want to detect, let's say, bass beats between 60 Hz and 250 Hz. How can I translate this data into the average energy value for this frequency band? |
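Turning the spectrum into a band energy is then just a matter of converting the band edges into bin indices and summing (or averaging) that slice; the spectrum array spans 0 Hz up to half the sample rate, which is where the 23.4 Hz per bin figure comes from (24000 / 1024). A sketch of that step in Python (`band_energy` is a hypothetical helper, not a Unity call):

```python
def band_energy(spectrum, sample_rate, lo_hz, hi_hz):
    """Sum the magnitudes of the bins covering [lo_hz, hi_hz].
    `spectrum` covers 0 Hz .. sample_rate/2 across len(spectrum) bins."""
    bin_width = (sample_rate / 2.0) / len(spectrum)
    lo = int(lo_hz / bin_width)   # first bin containing lo_hz
    hi = int(hi_hz / bin_width)   # last bin containing hi_hz
    return sum(spectrum[lo:hi + 1])

# 48 kHz, 1024 bins -> 23.4375 Hz per bin; 60-250 Hz spans bins 2..10.
spectrum = [0.0] * 1024
spectrum[2] = 1.0    # inside the band (bin covers ~46.9-70.3 Hz)
spectrum[10] = 2.0   # inside the band (bin covers ~234.4-257.8 Hz)
spectrum[11] = 5.0   # above the band: excluded
print(band_energy(spectrum, 48000, 60, 250))  # 3.0
```

Beat detection then keeps a short running history of this value (for instance the last second's worth of frames) and flags frames where the instantaneous band energy exceeds the running average by some tuned factor.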
0 | Accessing vertex Input in surf directly without vert's out parameter In a tessellation shader I can get the vertex Input in the vertex shader as follows: o.worldPos = v.vertex.xyz; But how do I get worldPos directly without filling the out parameter in the vert function? I'm asking because the shader is a DX11 tessellation one, and the out parameter in the vert function is not available at all. Basically I want to initialize my Input struct and pass it to the surface shader in the vertex shader. I can do it, but it's different in a tessellation shader, so I need to access the vertex position in surf directly without vert's out parameter. Shader "Tessellation Sample" { Properties { _EdgeLength ("Edge length", Range(2,50)) = 15 _MainTex ("Base (RGB)", 2D) = "white" {} _DispTex ("Disp Texture", 2D) = "gray" {} _NormalMap ("Normalmap", 2D) = "bump" {} _Displacement ("Displacement", Range(0, 1.0)) = 0.3 _Color ("Color", color) = (1,1,1,0) _SpecColor ("Spec color", color) = (0.5,0.5,0.5,0.5) } SubShader { Tags { "RenderType"="Opaque" } LOD 300 CGPROGRAM #pragma surface surf BlinnPhong addshadow fullforwardshadows vertex:disp tessellate:tessEdge nolightmap #pragma target 4.6 #include "Tessellation.cginc" float _EdgeLength; float4 tessEdge (appdata_full v0, appdata_full v1, appdata_full v2) { return UnityEdgeLengthBasedTess(v0.vertex, v1.vertex, v2.vertex, _EdgeLength); } float _Displacement; struct Input { float4 position : POSITION; float3 worldPos : TEXCOORD2; // Used to calculate the texture UVs and world view vector float4 proj0 : TEXCOORD3; // Used for depth and reflection textures float4 proj1 : TEXCOORD4; // Used for the refraction texture }; void disp (inout appdata_full v, out Input o) { UNITY_INITIALIZE_OUTPUT(Input, o); o.worldPos = v.vertex.xyz; o.position = UnityObjectToClipPos(v.vertex); o.proj0 = ComputeScreenPos(o.position); COMPUTE_EYEDEPTH(o.proj0.z); o.proj1 = o.proj0; #if UNITY_UV_STARTS_AT_TOP o.proj1.y = (o.position.w - o.position.y) * 0.5; #endif } sampler2D _MainTex; sampler2D _NormalMap; fixed4 _Color; void surf (Input IN, inout SurfaceOutput o) { o.Albedo = float3(0,0,0); } ENDCG } FallBack "Diffuse" } but I have this error: Shader error in 'Tessellation Sample': 'disp': no matching 1 parameter function at line 173 (on d3d11) |
0 | GameObject stops moving or gets stuck when colliding with another GameObject. I'm currently making a brick-breaker style game. My issue is that when the ball is launched off the pad, it gets stuck between two blocks. I have all the blocks on a "Foreground" layer with "Order in Layer" set to 0. The ball is on "Foreground" with "Order in Layer" set to 1. But the ball gets stuck and quits moving. I've tried changing the orders of the layers and it didn't help. Any thoughts?
0 | Should my static class be derived from UnityEngine.MonoBehaviour instead? I am working on creating an object to maintain the state of the game, player data, etc. It is static (singleton lifecycle) through the life of my game. I started down the path of a static class with a static instance, like this:

public class PlayerState
{
    private static PlayerState DATA_INSTANCE = new PlayerState();
    public static PlayerState Instance { get { return DATA_INSTANCE; } }
}

Then I started looking around, and I see implementations that derive from UnityEngine.MonoBehaviour and handle the instance differently, adding the script to a game object and using DontDestroyOnLoad() to ensure it stays in memory. So I am wondering: what is the correct, idiomatic pattern in Unity for a singleton type? Do I really need all of the Unity functionality of a UnityEngine.MonoBehaviour-derived type (e.g. Update()) for my static type? I admit I am uneasy about static classes in general. In this case, I am equally uneasy adding another type that potentially gets into the frame cycle when it's not needed (I mean that Unity will call methods like Update() every cycle). Thanks, Matt
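For reference, the MonoBehaviour variant the question alludes to is commonly written roughly as follows. This is a sketch of the well-known pattern, with an assumed class name; it is only worth using when the singleton actually needs engine callbacks (Update, coroutines, scene events) — otherwise a plain static class like the one above is fine, and an empty Update() costs nothing if you simply don't define it.

```csharp
using UnityEngine;

// Sketch of the common MonoBehaviour singleton pattern (class name assumed).
public class GameState : MonoBehaviour
{
    public static GameState Instance { get; private set; }

    void Awake()
    {
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject); // enforce a single instance across scene loads
            return;
        }
        Instance = this;
        DontDestroyOnLoad(gameObject); // survive scene changes
    }
}
```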
0 | Assigning a function to a button in a foreach loop causes all buttons to have the same argument value. I have a script that generates buttons based on a list of assets. Each button is intended to add the asset's resource path to a variable called SpawnGlobals.CurrentSelection. When printing the asset resource name of each button within the loop, the different resource paths are shown. When printing the value in the select function, the value is always the last item in the asset register list. I do not understand why this is happening. Am I using the foreach loop or the lambda incorrectly?

void Start()
{
    // Add Foundry Assets here
    SpawnGlobals.asset_register.Add(new FoundryAsset("Test Sphere", "sphere_obj"));
    SpawnGlobals.asset_register.Add(new FoundryAsset("Building Block 10x10x10 01", "BuildingBlock10x10x10_01"));
    SpawnGlobals.asset_register.Add(new FoundryAsset("Building Block 10x10x10 02", "BuildingBlock10x10x10_02"));
    SpawnGlobals.asset_register.Add(new FoundryAsset("Building Block 10x10x10 03", "BuildingBlock10x10x10_03"));

    foreach (FoundryAsset asset in SpawnGlobals.asset_register)
    {
        SpawnGlobals.buttons.Add(Instantiate(button_prefab));
        GameObject current_button = SpawnGlobals.buttons.Last();
        current_button.transform.SetParent(master.transform, false);
        current_button.GetComponentInChildren<Text>().text = asset.AssetName;
        print(asset.AssetName);
        current_button.GetComponentInChildren<Button>().onClick.AddListener(delegate { select(asset.ResourcePath); });
    }
}

public void select(string selected)
{
    print(selected);
    SpawnGlobals.CurrentSelection = selected;
}
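The classic cause of this symptom is closure capture: on older C# compilers (such as those shipped with some Unity versions), the foreach iteration variable is effectively shared by every closure, so all listeners end up seeing the value of the final iteration. The standard fix is to copy the variable into a loop-local before capturing it, sketched below — `CreateButtonFor` is a hypothetical helper standing in for the button-creation lines above.

```csharp
// Sketch: capture a per-iteration copy so each listener sees its own asset.
foreach (FoundryAsset asset in SpawnGlobals.asset_register)
{
    FoundryAsset captured = asset;              // fresh variable per iteration
    Button button = CreateButtonFor(captured);  // hypothetical helper
    button.onClick.AddListener(() => select(captured.ResourcePath));
}
```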
0 | How can I improve the CharacterController for jumping? I have an issue with my character controller: when I jump, it sometimes lands on surfaces it shouldn't be able to land on, like very steep slopes (even though my slope limit is 45 degrees). In this case, Controller.isGrounded is still true. The controller doesn't fall off the steep slopes either; it just stays there and can jump again. I also notice that the controller can sometimes climb heights very awkwardly after jumping — it is barely touching the elevated ground surface, yet it can "push-move" its way up. Both problems occur with the standard implementation given by Unity: https://docs.unity3d.com/ScriptReference/CharacterController.Move.html See also https://i.imgur.com/AD6dVy2.png (here it can jump onto the steep slope and stay there; UCC is the example script from Unity, copy-pasted). How can I fix these things? Are there any existing solutions or guides for better character controllers?
0 | How do I move the arrow markers one unit only? I'm new to learning C#. How can I move an arrow one unit at a time? Here's the code I'm using now:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ArrowMovement : MonoBehaviour
{
    float directX;
    float directY;
    public float moveSpeed = 5f;
    private Rigidbody rb;

    private void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    private void FixedUpdate()
    {
        float h = Input.GetAxis("Horizontal") * 100;
        float v = Input.GetAxis("Vertical") * 100;
        Vector3 vel = rb.velocity;
        vel.x = h;
        vel.z = v;
        rb.velocity = vel;
    }
}

I only know how to do basic movement.
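A minimal sketch of one-unit stepping, assuming simple transform-based movement is acceptable (i.e. the Rigidbody physics above is not required): `GetKeyDown` fires once per key press, so each press moves the object exactly one unit.

```csharp
// Sketch: step exactly one unit per key press instead of setting a velocity.
void Update()
{
    if (Input.GetKeyDown(KeyCode.RightArrow)) transform.position += Vector3.right;
    if (Input.GetKeyDown(KeyCode.LeftArrow))  transform.position += Vector3.left;
    if (Input.GetKeyDown(KeyCode.UpArrow))    transform.position += Vector3.forward;
    if (Input.GetKeyDown(KeyCode.DownArrow))  transform.position += Vector3.back;
}
```

If the object must still interact with physics, `Rigidbody.MovePosition` with the same one-unit offsets is the equivalent physics-friendly call.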
0 | Unity UI circular clickable area I'm building some somewhat involved UI stuff using the Shapes vector package. I need to be able to detect clicks on a circle, but also need to be able to click on things right next to that circle in all directions (it's a large circle). I can limit the detected clicks to within the circle by simply using a square and checking distance of click from center of circle. But this still obscures clickable things behind the square but outside of the circle (in the corners of the square). How can I detect those? Thanks! |
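One mechanism worth sketching here is Unity UI's `ICanvasRaycastFilter`: a component implementing it can reject raycasts outside the circle, so clicks in the square's corners pass through to whatever is underneath. This is a hedged sketch with an assumed class name; it assumes the object has a `Graphic` with `raycastTarget` enabled (the filter is only consulted during UI raycasts) and a centered pivot.

```csharp
using UnityEngine;

// Sketch: only accept clicks inside the inscribed circle of this RectTransform.
public class CircularRaycastFilter : MonoBehaviour, ICanvasRaycastFilter
{
    public bool IsRaycastLocationValid(Vector2 screenPoint, Camera eventCamera)
    {
        RectTransform rt = (RectTransform)transform;
        Vector2 local;
        RectTransformUtility.ScreenPointToLocalPointInRectangle(rt, screenPoint, eventCamera, out local);
        float radius = rt.rect.width * 0.5f;
        return local.magnitude <= radius; // inside the circle -> clickable
    }
}
```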
0 | How can I move game objects using keyboard in the Unity scene editor? I know I can click on the "move" button and use my mouse to drag the object around. How can I do the same using my keyboard? |
0 | BoxCollider2D fails verification where CircleCollider2D does not. I have a 2D game object with no parent and a 2D box collider with the trigger option enabled. The dimensions are quite large. When I place the game object in the scene manually, it works just fine. However, when I instantiate the prefab, it doesn't work; in the inspector view, under the collider component, a notification appears: "The collider did not create any collision shapes as they all failed verification. This could be because they were deemed too small or the vertices were too close. Vertices can also become close under rotations or very small scaling." When I switch to scene view while the game is in play mode, I cannot see the collider boundaries. Surprisingly, everything works fine if I use the circle collider. Does anyone know what the problem could be?
0 | How to create a loading bar for asset bundles in Unity (without changing scene)? From the official documentation and the Unity forum I now know that there's an async version of LoadFromFile (and I will probably also call UnityWebRequest.GetAssetBundle later for updated assets). The problem is where I can call it, and how to get the .progress value, converting from this:

public AssetManager()
{
    if (!useAssetBundle) return;
    foreach (var bundle_name in bundle_names)
    {
        var full_path = Path.Combine(Application.streamingAssetsPath + "/Bundle/", bundle_name);
        var bundle = AssetBundle.LoadFromFile(full_path);
        if (bundle == null)
            LogHelper.Critical("Failed to load AssetBundle {0}", full_path);
        bundles[bundle_name] = bundle;
    }
}

to this (which is obviously wrong):

private List<AssetBundleCreateRequest> loadPro = new List<AssetBundleCreateRequest>();

private IEnumerator<AssetBundle> loadOneAssetBundle(string full_path)
{
    var creq = AssetBundle.LoadFromFileAsync(full_path);
    loadPro.Add(creq);
    yield return creq.assetBundle;
}

public AssetManager(MonoBehaviour parent)
{
    if (!useAssetBundle) return;
    foreach (var bundle_name in bundle_names)
    {
        var full_path = Path.Combine(Application.streamingAssetsPath + "/Bundle/", bundle_name);
        var bundle = parent.StartCoroutine(loadOneAssetBundle(full_path)).assetBundle; // NOPE T_T
        if (bundle == null)
            LogHelper.Critical("Failed to load AssetBundle {0}", full_path);
        bundles[bundle_name] = bundle; // NOPE 'n'
    }
}

public void Update()
{
    var progress = 0f;
    foreach (var pro in loadPro)
        progress += pro.progress;
    LogHelper.Debug("Progress {0:f}", progress);
}
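A coroutine cannot return the bundle directly the way the second snippet attempts; the usual shape is to do all the loading inside one coroutine and expose an aggregate progress value for the UI to read. Below is a hedged sketch with assumed field and class names (it does not reproduce the question's AssetManager API):

```csharp
using System.Collections;
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Sketch: load bundles sequentially, exposing 0..1 progress for a loading bar.
public class BundleLoader : MonoBehaviour
{
    public string[] bundleNames; // assumed to be set in the inspector
    public Dictionary<string, AssetBundle> bundles = new Dictionary<string, AssetBundle>();
    public float Progress { get; private set; } // read this from the UI each frame

    IEnumerator Start() // Start may itself be a coroutine in Unity
    {
        for (int i = 0; i < bundleNames.Length; i++)
        {
            string path = Path.Combine(Application.streamingAssetsPath, bundleNames[i]);
            AssetBundleCreateRequest req = AssetBundle.LoadFromFileAsync(path);
            while (!req.isDone)
            {
                Progress = (i + req.progress) / bundleNames.Length;
                yield return null; // let the UI redraw each frame
            }
            bundles[bundleNames[i]] = req.assetBundle;
        }
        Progress = 1f;
    }
}
```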
0 | How can I stop StartCoroutine when pressing the escape key once? I want to let the user skip a cut scene — in this case a splash screen — like in other games, by hitting the escape key.

using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine.Assertions.Must;
using UnityEngine.UI;
using UnityEngine.SceneManagement;

public class Splashes : UnityEngine.MonoBehaviour
{
    [Header("Splash Screen")]
    public bool useSplashScreen = true;
    public GameObject splashesContent;
    private List<Graphic> splashes = new List<Graphic>();
    public float splashStayDiration = 3f;
    public float splashCrossFadeTime = 1f;

    void Start()
    {
        // only if we use splash screens and we have splash screens
        if (!useSplashScreen || splashesContent.GetComponentsInChildren<Graphic>(true).Length <= 0)
            return;

        #region Get All Splashes
        // if you build on PC Standalone you can uncomment this
        // foreach (var splash in splashesContent.GetComponentsInChildren<Graphic>(true).Where(splash => splash != splashesContent.GetComponent<Graphic>()))
        //     splashes.Add(splash);
        for (var i = 0; i < splashesContent.GetComponentsInChildren<Graphic>(true).Length; i++)
        {
            var splash = splashesContent.GetComponentsInChildren<Graphic>(true)[i];
            if (splash != splashesContent.GetComponent<Graphic>())
                splashes.Add(splash);
        }
        #endregion

        // And start playing splashes
        StartCoroutine(PlayAllSplashes());
    }

    private IEnumerator PlayAllSplashes()
    {
        // Enable the splashes root transform
        if (!splashesContent.activeSelf)
            splashesContent.SetActive(true);

        // main loop for playing
        foreach (var t in splashes)
        {
            t.gameObject.SetActive(true);
            t.canvasRenderer.SetAlpha(0.0f);
            t.CrossFadeAlpha(1, splashCrossFadeTime, false);
            yield return new WaitForSeconds(splashStayDiration - splashCrossFadeTime);
            t.CrossFadeAlpha(0, splashCrossFadeTime, false);
            yield return new WaitForSeconds(splashCrossFadeTime);
            t.gameObject.SetActive(false);
        }

        // Smooth main menu enabling
        splashesContent.GetComponent<Graphic>().CrossFadeAlpha(0, 0.5f, false);
        yield return new WaitForSeconds(0.5f);
        splashesContent.gameObject.SetActive(false);
    }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Escape))
        {
        }
    }

    public void ExitGame()
    {
        Application.Quit();
    }
}

This splash is shown each time the game runs, so I want to give the player the chance to press the escape key to skip it.
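The usual way to make a coroutine skippable is to keep the `Coroutine` handle returned by `StartCoroutine` and pass it to `StopCoroutine` when Escape is pressed. A minimal sketch against the question's script (only the changed members are shown):

```csharp
// Sketch: keep a handle to the running coroutine so Escape can cancel it.
private Coroutine splashRoutine;

void Start()
{
    splashRoutine = StartCoroutine(PlayAllSplashes());
}

private void Update()
{
    if (Input.GetKeyDown(KeyCode.Escape) && splashRoutine != null)
    {
        StopCoroutine(splashRoutine);
        splashRoutine = null;
        splashesContent.SetActive(false); // hide the splash immediately
    }
}
```

Note that `StopCoroutine` only halts the coroutine; any state it was mid-way through changing (active splash objects, alpha values) must be cleaned up by hand, as the `SetActive(false)` line does here.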
0 | SECTR sectors global state For our game we decided to give SECTR (https www.assetstore.unity3d.com en ! content 15356) a try. SECTR loads scenes additively into the main scene, which enables us to work on it concurrently. It also unloads scenes when they aren't used. Time is very important in our game and objects have to perform actions at a certain time or have to be in a certain state at a time. We decided to have those actions events local to the sector. So now when the sector is unloaded, we lose the current progress of our action event. What is a good way to save this progress and how could we apply differences in time between unloading and loading to this progress? How do we handle events happening while the sector is unloaded? The obvious solution is to use global state for these events, but we'd like to keep it local to the sectors. |
0 | How can I make a character crouch? I tried to make a crouch system. In my script I change the .height and .center properties of the collider in the CharacterController component. But there are some problems. First, once I press the crouch button the collider changes its height and center position, but when I release the key the collider doesn't return to its previous state. Secondly, if I change its height and position by hand while the collider is under a block, it just gets stuck. How can I fix this? Here's my script:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class PlayerController : MonoBehaviour
{
    public float moveSpeed;
    public float jumpForce;
    public CharacterController controller;
    public GameObject myCamera;
    private Vector3 moveDirection = Vector3.zero;
    public float gravityScale;
    public Animator anim;

    // Use this for initialization
    void Start()
    {
        transform.tag = "Player";
        controller = GetComponent<CharacterController>();
    }

    // Update is called once per frame
    void Update()
    {
        Vector3 forwardDirection = new Vector3(myCamera.transform.forward.x, 0, myCamera.transform.forward.z);
        Vector3 sideDirection = new Vector3(myCamera.transform.right.x, 0, myCamera.transform.right.z);
        forwardDirection.Normalize();
        sideDirection.Normalize();
        forwardDirection = forwardDirection * Input.GetAxis("Vertical");
        sideDirection = sideDirection * Input.GetAxis("Horizontal");
        Vector3 directFinal = forwardDirection + sideDirection;
        if (directFinal.sqrMagnitude > 1)
            directFinal.Normalize();

        if (controller.isGrounded)
        {
            moveDirection = new Vector3(directFinal.x, 0, directFinal.z);
            moveDirection = moveDirection * moveSpeed;
            if (Input.GetButtonDown("Jump"))
                moveDirection.y = jumpForce;
        }

        if (Input.GetButtonDown("Crouch"))
        {
            controller.height = 1f;
            controller.center = new Vector3(0f, 0.5f, 0f);
            moveSpeed = 3f;
        }

        moveDirection.y = moveDirection.y + (Physics.gravity.y * gravityScale * Time.deltaTime);
        controller.Move(moveDirection * Time.deltaTime);
        anim.SetBool("isGrounded", controller.isGrounded);
        anim.SetFloat("Speed", Mathf.Abs(Input.GetAxis("Vertical")) + Mathf.Abs(Input.GetAxis("Horizontal")));
    }
}

Thanks for any help!
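The first problem follows from only handling `GetButtonDown`: nothing ever restores the collider. A hedged sketch of the missing half, which also addresses the second problem by checking for headroom before standing — the standing height, crouch height, and raycast distance are assumptions, not values from the question:

```csharp
// Sketch: restore the controller on key release, but only when there is room.
void UpdateCrouch()
{
    if (Input.GetButtonDown("Crouch"))
    {
        controller.height = 1f;
        controller.center = new Vector3(0f, 0.5f, 0f);
    }
    else if (Input.GetButtonUp("Crouch"))
    {
        // Check for a ceiling before standing: cast up from the feet.
        bool blocked = Physics.Raycast(transform.position, Vector3.up, 2f);
        if (!blocked)
        {
            controller.height = 2f; // assumed standing height
            controller.center = new Vector3(0f, 1f, 0f);
        }
    }
}
```

In a real game you would keep re-checking each frame while the key is released and the character is still crouched, so it stands up as soon as it walks clear of the block.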
0 | Particle system is not playing. I made a simple spaceship that has a particle system. When I press the space key, the spaceship should fly, and the particle system should be instantiated and play. But it's not playing — it appears in the hierarchy as a clone, but does not play. As you see, the particle effect is instantiated but not playing. It should play at the bottom of the spaceship. This is the code:

void FlyShip()
{
    if (Input.GetKey(KeyCode.Space))
    {
        rb.AddForce(Vector3.up * jumpForce);
        if (!takeoffSound.isPlaying)
        {
            // rocketJetParticle is a GameObject
            _rocketJetParticle = Instantiate(rocketJetParticle, new Vector3(transform.position.x, transform.position.y - 4, transform.position.z), transform.rotation);
            takeoffSound.Play();
        }
    }
    else
    {
        Destroy(_rocketJetParticle);
        takeoffSound.Stop();
    }
}
0 | How to set up bloom in Unity. The last time I used image effects in Unity was back when 3.0 was released; for bloom you used the object's alpha to indicate whether it should glow or not. Now in 4.2 I can't grasp how to apply the image effect properly. It seems to make the whole scene glow, and alpha does not affect it. There was no help from the Unity docs, which are pretty much just spec sheets for the effects, and there are no proper tutorials around. This leads to a couple of questions: How do you create something like the screenshots on the Bloom page in the docs? Only the car windows and metal parts glow, but there is no glow on the concrete. How do you make a light source like the second image at the above link? I tried different shaders and lights on a game object, but none of them produced anything like that. What does the HDR checkbox on the camera actually do?
0 | Unity3D problem: temporarily stop the player's movement when playing a specific animation. This is my first time in Unity3D and programming, so forgive me if I'm a bit vague. I'm working on my own project in Unity3D, but I couldn't get the player to stop moving when I press the key. I avoided using GetButton since it triggered the animation twice. After about 2 hours, I still can't find an answer to my problem. Can someone help me please? Here's part of the code of the player script:

if (Input.GetButtonDown("Fire1"))
{
    animator.SetTrigger("AttackSword");
    Debug.Log("Animation called");
}
if (!animator.GetBool("AttackSword"))
    Move(inputDir, running);

If you would like me to add the full code of the script, let me know.
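One likely issue in the snippet above is that `GetBool` is being asked about a trigger parameter, which resets almost immediately. A hedged alternative sketch is to gate movement on the animator's current state instead — this assumes "AttackSword" is the name of the attack state on layer 0, which the question does not confirm:

```csharp
// Sketch: block movement while the attack state is actually playing.
bool attacking = animator.GetCurrentAnimatorStateInfo(0).IsName("AttackSword");

if (Input.GetButtonDown("Fire1") && !attacking)
    animator.SetTrigger("AttackSword");

if (!attacking)
    Move(inputDir, running); // the question's own movement call
```

Note there can be a one-frame gap between setting the trigger and the state machine entering the attack state; if that matters, a plain `bool` flag set on attack start and cleared via an animation event is the more robust pattern.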
0 | Unity scaling and animations In my Unity project I have a script that programmatically changes the scaling of an object during the game. Recently I wanted to add a spawning animation clip to my object, which modifies the object's scaling (but only during a short delay). So I added an animation controller to the object, containing the spawning animation clip. I can now successfully run the spawning animation but the in game programmatic scaling changes don't work anymore, even when my animation is not running. Any idea? |
0 | Xcode build errors (Unity 4.7.2). These two errors are doing my head in. I'm new to Xcode, so please advise — thank you. FYI: I did not touch or modify anything in the Xcode project that was generated by Unity. Upon opening, there is a warning that says to update to the recommended settings. Other than that, I just set the provisioning file and the simulator device, then pressed play, but these two errors keep showing:

#include "Unity/GlesHelper.h" — file not found
#include "UI/Keyboard.h" — file not found
0 | How can I simulate tapping on touch for Android? I'm searching through the Input.Touch properties to find out how I can simulate tapping on touch for Android — something like Input.GetMouseButtonDown() on mobile.
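A minimal sketch of the usual pattern: a touch whose phase is `Began` this frame is the touch-screen analogue of `Input.GetMouseButtonDown(0)`.

```csharp
// Sketch: detect a tap (a touch that began this frame) on Android.
void Update()
{
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Began)
        {
            Vector2 tapPosition = touch.position; // screen-space tap location
            // handle the tap here
        }
    }
}
```

In the editor, where there is no touch screen, `Input.GetMouseButtonDown(0)` also fires, so many projects check both.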
0 | How can I get the Text component from TextMeshPro? I'm getting a null exception. At the top of the script:

public GameObject uiSceneText;
private TextMeshPro textMeshPro;

Then in the script at some point:

uiSceneText.SetActive(true);
if (textMeshPro.text != "")
    textMeshPro.text = "";

The NullReferenceException is thrown on the line `if (textMeshPro.text != "")` — textMeshPro is null. This screenshot shows the TextMeshPro in the hierarchy, and I see that the Scene Image object and its child Scene Text are both enabled. I want to get the text and add/replace a new text with the text already in the Text component, but textMeshPro is null. The complete script:

using System.Collections;
using System.Collections.Generic;
using TMPro;
using UnityEngine;

public class PlayerSpaceshipAreaColliding : MonoBehaviour
{
    public float rotationSpeed;
    public float movingSpeed;
    public float secondsToRotate;
    public GameObject uiSceneText;

    private float timeElapsed = 0;
    private float lerpDuration = 3;
    private float startValue = 1;
    private float endValue = 0;
    private float valueToLerp = 0;
    private Animator playerAnimator;
    private bool exitSpaceShipSurroundingArea = false;
    private bool slowd = true;
    private TextMeshPro textMeshPro;

    // Start is called before the first frame update
    void Start()
    {
        textMeshPro = uiSceneText.GetComponent<TextMeshPro>();
        playerAnimator = GetComponent<Animator>();
    }

    // Update is called once per frame
    void Update()
    {
        if (exitSpaceShipSurroundingArea)
        {
            if (slowd)
                SlowDown();
            if (playerAnimator.GetFloat("Forward") == 0)
            {
                slowd = false;
                LockController.PlayerLockState(false);
                uiSceneText.SetActive(true);
                if (textMeshPro.text != "")
                    textMeshPro.text = "";
                textMeshPro.text = "Here I will add the new text.";
            }
        }
        if (slowd == false)
        {
        }
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.name == "CrashLandedShipUpDown")
        {
            exitSpaceShipSurroundingArea = false;
            Debug.Log("Entered Spaceship Area !");
        }
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.name == "CrashLandedShipUpDown")
        {
            exitSpaceShipSurroundingArea = true;
            Debug.Log("Exited Spaceship Area !");
        }
    }

    private void SlowDown()
    {
        if (timeElapsed < lerpDuration)
        {
            valueToLerp = Mathf.Lerp(startValue, endValue, timeElapsed / lerpDuration);
            playerAnimator.SetFloat("Forward", valueToLerp);
            timeElapsed += Time.deltaTime;
        }
        playerAnimator.SetFloat("Forward", valueToLerp);
        valueToLerp = 0;
    }
}
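Two things about that null are worth sketching. Text placed under a Canvas is a `TextMeshProUGUI` component, not a `TextMeshPro` (the 3D world-space variant), so `GetComponent<TextMeshPro>()` returns null on a UI object. And in the hierarchy shown, the text sits on a child of `uiSceneText`, so the children need to be searched. A hedged fix, assuming the text is indeed UI text on a child object:

```csharp
using TMPro;

// Sketch: fetch the UI variant from the children (true = include inactive).
private TextMeshProUGUI sceneText;

void Start()
{
    sceneText = uiSceneText.GetComponentInChildren<TextMeshProUGUI>(true);
}
```

If both variants might appear, `GetComponentInChildren<TMP_Text>(true)` works too, since `TMP_Text` is the shared base class.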
0 | How do I sense when the player collides with a door in a GameController script? I want to be able to detect when the player collides with a door from my GameController script. The player is a public GameObject and doors are tagged as such. Here is pseudo-code for what I'm looking for:

public class GameController : MonoBehaviour
{
    public GameObject player;

    void Update()
    {
        // if (player collides with game object tagged "Door")
        //     do something
    }
}
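Collision callbacks arrive on the colliding objects themselves, not on an unrelated controller, so the common pattern is a small reporter component on the player that forwards the event. A hedged sketch — `PlayerCollisionReporter` and `OnDoorTouched` are hypothetical names, not from the question:

```csharp
using UnityEngine;

// Sketch: placed on the player; forwards door collisions to the controller.
public class PlayerCollisionReporter : MonoBehaviour
{
    public GameController controller; // assigned in the inspector

    void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.CompareTag("Door"))
            controller.OnDoorTouched(collision.gameObject); // assumed method
    }
}
```

If the doors are triggers rather than solid colliders, the same shape works with `OnTriggerEnter(Collider other)`.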
0 | Platformer crouching problems. I'm trying to implement a crouching system in my 2D platformer. The problem is, I don't want the character to be able to do anything except jump while it's crouching. I have many functions, but I think posting the movement function here will be enough. When I gain velocity and then press crouch, I skid across the platform and stop. First, my crouch function:

private enum State { idle, running, jumping, falling, crouching }

void Update()
{
    // crouch
    if (Input.GetButton("Crouch"))
    {
        animator.SetInteger("state", 4);
        Crouch();
    }
    else
    {
        boxCol.size = new Vector2(0.5454721f, 1.8065f);
    }
}

void Crouch()
{
    boxCol.size = new Vector2(0.6933689f, 1.44953f);
}

And here is my movement function. I tried to fix it by adding the if condition on top of it, but when I run and crouch, it slides a lot before stopping. What can I do to fix that? The character already has a physics material with zero friction. I know I could set the velocity to zero while the player is crouching, but that would just kill the gameplay; I need a smooth stop. I'm not sure how to write a deceleration and use it in my case.

public void Move()
{
    if (animator.GetInteger("state") != 4)
    {
        float x = Input.GetAxis("Horizontal");
        float moveBy = x * speed;
        rb.velocity = new Vector2(moveBy, rb.velocity.y);
        if (x > 0f)
        {
            transform.localScale = new Vector2(1, 1);
            isRight = 1;
            isLeft = 0;
        }
        else if (x < 0f)
        {
            transform.localScale = new Vector2(-1, 1);
            isLeft = 1;
            isRight = 0;
        }
    }
}
0 | Trapping asserts in Visual Studio. I'm writing a 2D game with some fairly complex off-screen logic, with plenty of room for bugs. My usual habit is to write code with a lot of asserts in it. When an assert gets hit, I drop into the debugger, have a look around, and find and fix the bug. Works for me. The problem is how to get Unity asserts to break into the Visual Studio debugger. So far they just seem to generate a log message in the editor console, and then Unity carries right on as if they didn't matter. What use is that? There is an 'experimental' option in the Unity VS settings, but it doesn't seem to do anything (good or bad). Any suggestions? I can think of a couple of really hackish things to try, but surely there must be a right way? After further experimentation: enabling Assert.raiseExceptions (only) makes Unity grind to a halt (instead of carrying on regardless), but otherwise there is no change — no trap into VS. Enabling the experimental exceptions mode (only) causes Unity to crash with a bug check on the first assert failure. Doing both is the same as the first alone. This is VS 2015, in case it matters.
0 | Framerate-dependent steering behaviour. I've been studying steering and flocking behaviours, not least from Craig Reynolds' work on Steering Behaviors For Autonomous Characters (http://www.red3d.com/cwr/steer/). I understand the basic concepts and how to apply forces, but the main issue with this is performance. Using Unity, ECS and the job system, I have been able to implement a multi-threaded implementation of some simple behaviours (seek, arrival, evade, etc.). The problem is simply that all of the implementations of steering behaviours that I find are based on the same principle, and they all seem to suffer from the same problem: framerate independence. For example, a simplified example of seeking:

// get a vector pointing directly at our target, at the maximum speed
desired_velocity = normalize(target - position) * max_velocity
// get the steering force direction
steering = desired_velocity - velocity
// ensure the steering force never exceeds our maximum steering force
steering = truncate(steering, max_steering_force)
// ensure the velocity never exceeds the max speed
velocity = truncate(velocity + steering, max_speed)
// apply our velocity
position = position + velocity * delta_time

When dealing with the final velocity, we use delta time to ensure that the velocity is applied in a framerate-independent manner. But the steering force is not framerate independent at all. If this runs at 120 fps, the steering reaches its maximum velocity much faster than if the game runs at 30 fps. Is there something more I should be doing here? I've experimented with using delta time with the steering, but that seems to make little difference.
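One common resolution, sketched below in the same pseudocode style as the seek example: treat the truncated steering vector as an acceleration and integrate it over delta time, exactly as the position update already does. This is a sketch of the standard approach, not a claim about what Reynolds' original code does.

```
// Sketch: integrate steering as an acceleration, per-frame-time
steering = truncate(desired_velocity - velocity, max_steering_force)
velocity = truncate(velocity + steering * delta_time, max_speed)
position = position + velocity * delta_time
```

With this change, `max_steering_force` becomes a rate (change of velocity per second), so turning takes the same wall-clock time at 30 fps and 120 fps; for full determinism across machines, running the whole update at a fixed timestep is the stronger guarantee.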
0 | Why is this if condition not short-circuiting? I have this script in Unity to check whether the hit RaycastHit2D variable has a game object with the "hole" tag, or a parent game object with the "hole" tag, when I raycast:

if (hit.collider != null && (hit.collider.gameObject.tag == "hole" || (hit.collider.gameObject.transform.parent.gameObject != null && hit.collider.gameObject.transform.parent.tag == "hole")))

Everything is fine up to `hit.collider != null && (hit.collider.gameObject.tag == "hole"`, but when I add `(hit.collider.gameObject.transform.parent.gameObject != null && hit.collider.gameObject.transform.parent.tag == "hole")` it throws a NullReferenceException. But why? It is supposed to short-circuit at `hit.collider.gameObject.transform.parent.gameObject != null &&` if there's no parent game object!
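The short circuit never gets a chance to fire because the exception happens while *evaluating* the left operand: for a root object, `transform.parent` itself is null, so appending `.gameObject` to it throws before the `!= null` comparison runs. A sketch of the guarded condition:

```csharp
// Sketch: test the parent Transform itself for null, then its tag.
// (.gameObject of a non-null Transform is never null, so that check was
// guarding the wrong link in the chain.)
bool hitHole =
    hit.collider != null &&
    (hit.collider.CompareTag("hole") ||
     (hit.collider.transform.parent != null &&
      hit.collider.transform.parent.CompareTag("hole")));
```

`CompareTag` is also the usual replacement for `.tag == "..."`, since it avoids a string allocation.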
0 | Unity2D: how to destroy my instantiated particle systems. I'm having trouble trying to destroy my instantiated particle systems; I can only destroy my game object.

using UnityEngine;
using System.Collections;

public class meteor_destruction : MonoBehaviour
{
    public GameObject meteor_d_effect;
    public GameObject meteor_explo_effect;
    public GameObject meteor_explo2_effect;
    public int pointsIncrement = 1;

    void OnCollisionEnter2D(Collision2D col)
    {
        var effect_explo = Instantiate(meteor_explo_effect, col.gameObject.transform.position, col.transform.rotation);
        var effect_explo2 = Instantiate(meteor_explo2_effect, col.gameObject.transform.position, col.transform.rotation);
        var effect_d = Instantiate(meteor_d_effect, col.gameObject.transform.position, col.transform.rotation);
        Destroy(gameObject);
        Destroy(effect_explo, 3);
        Destroy(effect_explo2, 3);
        Destroy(effect_d, 3);
    }
}

The effects are generated after the meteor collides, but those three particle effects keep piling up inside my Hierarchy panel.
1 | OpenGL draw functions and multi-threading: how do they work together? I want to apply multi-threading in a simple way to control and draw 4000 objects. I am using SDL and OpenGL. One thread would handle control (locations, collisions, calculations, etc.) and another would handle drawing (OpenGL draw functions: glDrawArrays, glDrawElements, etc.). For 4000 objects I am thinking of something like this — do you think it works?
1 | What are the benefits of capping frames per second? (If any.) I am working on a simple game in OpenGL, using SDL for display initialization and input, and in terms of timing I appear to have two options available to me. Number one is just sleeping for optimalTimePerFrame - theTimeTakenToRender, where optimalTimePerFrame in seconds = 1 / theFramerateCap. (I'm not entirely sure if this is necessary, as it is equivalent to using vsync, which may be more accurate. If my assumption is correct, then please tell me whether or not SDL/OpenGL has vsync by default, and if not, how to enable it.) The other is to measure the time since the last frame was rendered and merely adjust movement accordingly (the shorter the time taken to render a frame, the less far entities move in that frame). In terms of performance, and decreasing flicker etc., what are the advantages and disadvantages of these two methods?
1 | Calculate target angle to a certain vector. I'm developing a 3D game in LWJGL. I have a player class and a monster class, and I need the monster to chase the player when the player is within a certain distance. For that I have the monster's directional vector (let's call it monDirVector) and the directional vector that points from the monster to the player (let's call it dirToPlayer). The way my monster class works is that it increments its current rotation to match a target angle (this way I can simulate the walking/turning animation). Given this, I need to calculate the target angle so that monDirVector overlaps dirToPlayer, making the monster walk right towards the player. How would I go about this? Here's a diagram for a better understanding of what I need. I pretty much need to get the c angle and assign it as the target angle for the monster. a is the monster's current rotation and b is the angle between the two vectors; both of these are known. Please notice that the monster can be oriented any way and the player can be in any quadrant relative to the monster. This poses a problem because I can't simply subtract the angle towards the player from the current rotation of the monster. I would really appreciate it if someone helped me.
1 | How to store sprite data in a VBO? I'm planning on rendering many sprites in my games, but I am not sure which method of storing their data to use. I haven't tried all of them yet, but I want to see if I also have anything important missing. Right now my options are Store information such as position, color, rotation, scale, and texIndex in a VBO, and then let the geometry shader output the triangles. (Not sure if possible) Compute the vertices for all sprites in the CPU and put it all into one VBO to draw. (Sounds like a waste of CPU time, especially in a game) Store repeated copies of the same four vertices in a VBO, and then use uniforms? (This is just an idea, I have no idea if it works or makes sense) Apparently I can just use instancing for this. What design should I use to efficiently store sprite data? Do any of my ideas hold? Edit I would greatly appreciate if pseudocode could be provided for a solution, although certainly not necessary. |
1 | OpenGL ES 2.0 batching method, and not drawing inactive objects. I am researching "batching" objects into one big VBO to reduce draw calls. This is what I am doing now: I create an interleaved VBO based on texture or render state (like blending), so, for example, there will be two separate VBOs and draw calls if I want only some objects to have blending. For more batching, I add unused vertex data — e.g. object "A" doesn't need color data, but I add a white dummy color so it can be batched into the interleaved array. For hiding an object, I set its vertex positions to 0.0, so it stays in the array but cannot be seen (e.g. if it is a quad, all positions become 0.0). These are my questions: Should I create one giant VBO and issue draw calls based on offsets? If I do that, is there any limitation on glBufferData (or a performance drop if too much data goes in)? Is there any method by which a VBO can contain different types of interleaved vertices? Do multiple "offset" draw calls hurt performance compared to a single draw call? Should I bind multiple texture atlases to separate texture units and select via a uniform? Currently I am only binding one texture per draw. Is there a correct way to do the three methods above? (They feel incorrect.) I am updating all vertices every frame with glBufferData, even though some objects don't need to be updated — should I do multiple smaller update calls instead? Sorry for my wording, but please point out anything I am doing wrong. Thank you.
1 | A simple "beam tracing" simulation. I am trying to implement a simple beam-tracing simulation. Basically it models the path of a beam within a pipe. In the picture, the beam starts emitting from p1, hits the boundary at p2 and reflects to p3. I was thinking about the algorithm for such a system, assuming that the equations of the two blue curves are known: 1) Get the parametrized representation of the original line from p1, say p1(t). 2) Equate p1(t) with both curves. 3) Find the first intersection p2, determined by the lowest value of t. 4) Get the normal at the intersection and derive the equation of the reflected beam, p2(t). 5) Equate p2(t) with both curves, eventually solving for p3. 6) Repeat for as many iterations as needed. My question is: is there a more sophisticated way of doing this? Or is there a library out there that implements this kind of functionality directly? Currently I am using C#, and looking into WPF and also OpenGL, but I am afraid that OpenGL may be too low-level and require a lot of code to achieve this.
1 | Blending vs Texture Sampling Lets say I want to blend two textures together A texture containing the result of the SSAO calculation. A texture containing the rendered scene. I could do it in two ways Use a shader that samples both the SSAO and scene textures, blends them together and outputs the final color to a render target. Render to the texture containing the scene and use a blending mode to blend the SSAO texture on top of it. Only the SSAO texture will be sampled inside the shader. Is it possible to give a general answer about which version is faster, or is it highly hardware dependent? |
1 | How to invert background pixels' color I'm writing a game and map editor using Java and jMonkeyEngine. In the map editor, I've got a brush drawn as a wireframe sphere. My problem is that I want to make it visible everywhere, so I want to invert the color of the pixels "behind" this brush. I wanted to do it with a fragment shader (GLSL), but I don't know how to get the color of those pixels.
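One common workaround (this is plain OpenGL; whether jMonkeyEngine exposes this blend mode directly is an assumption): a fragment shader cannot read the framebuffer color it is writing over, but fixed-function blending can compute the inversion. Drawing the brush in white with glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO) evaluates out = src * (1 - dst), which for src = 1 is exactly the inverted background. The arithmetic the blender performs:

```python
def blend_one_minus_dst_color(src, dst):
    """Per-channel result of glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO):
    out = src * (1 - dst) + dst * 0."""
    return tuple(s * (1.0 - d) for s, d in zip(src, dst))
```

A white source over a dark background comes out bright, and over a bright background comes out dark, which is the "visible everywhere" behavior wanted here.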
1 | What is the correct way to draw layered sprites in modern OpenGL? So what do I mean by layered sprites? Layered sprites are sprites that consist of multiple layers, e.g. you have sprite sheets for the basic body, the head, clothes, weapons, etc. Now I want to know how to draw this in the most efficient way. Right now the idea is to use vertex buffer objects (VBOs), but I heard that it is expensive to change textures while drawing from a VBO... So what would be the correct way to draw them? My idea right now would be to draw the map tiles first (with a texture atlas), then draw all the basic bodies, then the heads, then the clothes, and then the weapons. Would that be the best way, or is there a more efficient way? I also thought of using vertex arrays instead of VBOs, but as far as I know they are deprecated...
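The pass-per-layer plan in the question is essentially the right batching strategy: it is texture switches, not VBO contents, that you want to minimize, and if all heads share one atlas, each layer is one texture bind plus one draw. The ordering can be expressed as a plain sort key (layer names taken from the question):

```python
LAYER_ORDER = {"map": 0, "body": 1, "head": 2, "clothes": 3, "weapon": 4}

def draw_order(sprites):
    """sprites: (layer, texture_atlas, sprite_id) tuples. Sort so layers
    draw back to front and, within a layer, sprites sharing an atlas sit
    next to each other: one texture bind (and one batch) per
    (layer, atlas) pair."""
    return sorted(sprites, key=lambda s: (LAYER_ORDER[s[0]], s[1]))
```

Rebuilding a dynamic vertex buffer in this order each frame (a common 2D sprite-batch design) keeps state changes to a handful per frame regardless of sprite count.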
1 | Getting started with OpenGL, little question I'm starting to learn OpenGL, and after 2 days of searching and trying hard, I finally installed all the libraries I need (lol). So, I'm following this tutorial, and it says to use glfwOpenWindowHint this way:
glfwOpenWindowHint(GLFW FSAA SAMPLES, 4)  4x antialiasing
glfwOpenWindowHint(GLFW OPENGL VERSION MAJOR, 3)  We want OpenGL 3.3
glfwOpenWindowHint(GLFW OPENGL VERSION MINOR, 3)
glfwOpenWindowHint(GLFW OPENGL PROFILE, GLFW OPENGL CORE PROFILE)  We don't want the old OpenGL
But glfwOpenWindow didn't work with these hints. So I did a little research and changed the PROFILE parameter to this:
glfwOpenWindowHint(GLFW OPENGL PROFILE, 0)
Ok, fine, now it works. My question is whether it is a big deal to change the OpenGL profile, or if this makes no difference. And why can't I use GLFW OPENGL CORE PROFILE instead?
1 | Hard edges with non-power-of-2 images in OpenGL When downscaling 2D images, Java2D does a great job at preserving hard edges, even when downscaling to non-power-of-2 sizes. However, in OpenGL, I have been unable to find a solution to this. I have tried using GL NEAREST to get hard edges, but it creates wonky edges as well. I cannot use GL LINEAR in this case either, but still, the linear interpolation looks wonky. Example (both rendering a 30x30 hard-edged image at 30x30 pixels). Please note I am aware that this occurs with images that aren't powers of two, like 30, but the main problem is, if I try to upscale or downscale an image that is a power of two (say 16x16) to something that isn't (say 20x20), I still get this effect. This scaling issue also occurs in Java2D, but it is far less noticeable there. My question is: what parameters do I need to use when rendering a non-power-of-two image, so I can get a similar effect to what I get in Java2D? I can post source code if needed. EDIT Original image (30 x 30 png). Turns out I wasn't scaling them at all! Both were being rendered at the original 30 x 30 pixels in both OpenGL and Java2D. Sample code: Java2D g.drawImage(heart, 0, 0, 30, 30, null) No rendering hints are being applied, just the bare minimum.
In OpenGL
Vertex Shader
version 400
in vec2 position
out vec2 textureCoords
uniform float width
uniform float height
uniform float xOffset
uniform float yOffset
uniform mat4 transformationMatrix
void main(void)
    gl Position transformationMatrix vec4(position.x, position.y, 0.0, 1.0)
    gl Position.x (xOffset) (width 2)
    gl Position.y (yOffset) (height 2)
    gl Position.y gl Position.y
    textureCoords vec2(position.x 1, position.y 1)
Fragment Shader
version 400
in vec2 textureCoords
out vec4 out Color
uniform sampler2D texture2d
uniform float transparency
void main()
    out Color texture2D(texture2d, vec2(textureCoords.x, textureCoords.y))
    if(textureCoords.y gt 1 textureCoords.y lt 0) discard
Rendering methods
public void makeWonkyImage()
    render(0, 0, 30, 30, heart)
public void render(float x, float y, float width, float height, int textureID)
    PostProcess.imageRenderer.render(x, Gfx.HEIGHT y, width 1000, height 1000, textureID)
public void render(float x, float y, float width, float height, int textureID)
    shader.start()
    prepare(textureID)
    Vector2f position new Vector2f(( Display.WIDTH (width 1000f)) , ( Display.HEIGHT (height 1000f)))
    Matrix4f matrix Maths.createTransformationMatrix(position, new Vector2f(width Gfx.WIDTH 1000f, height Gfx.HEIGHT 1000f))
    shader.loadTransformation(matrix)
    shader.loadScreenDimensions((float) Display.getWidth(), (float) Display.getHeight())
    shader.loadOffsets(x, y)
    GL11.glDrawArrays(GL11.GL TRIANGLES, 0, quad.getVertexCount())
    end()
    shader.stop()
public void prepare(int texture)
    GL30.glBindVertexArray(quad.getVaoID())
    GL20.glEnableVertexAttribArray(0)
    GL11.glEnable(GL11.GL BLEND)
    GL11.glBlendFunc(GL11.GL SRC ALPHA, GL11.GL ONE MINUS SRC ALPHA)
    GL11.glDisable(GL11.GL DEPTH TEST)
    GL13.glActiveTexture(GL13.GL TEXTURE0)
    GL11.glBindTexture(GL11.GL TEXTURE 2D, texture)
    GL11.glTexParameterf(GL11.GL TEXTURE 2D, GL11.GL TEXTURE MIN FILTER, GL11.GL LINEAR)
    GL11.glTexParameterf(GL11.GL TEXTURE 2D, GL11.GL TEXTURE MAG FILTER, GL11.GL LINEAR)
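For scale factors like 16 to 20, uneven duplication is inherent to GL_NEAREST: each destination pixel center maps back to exactly one source texel, and some texels get picked twice. A quick sketch (plain Python, just modelling the sampling math) shows which texels repeat, and also that a 1:1, pixel-aligned draw maps every texel exactly once, so any wonkiness at 30x30 points at a half-pixel offset in the quad or projection rather than at NPOT textures as such:

```python
def nearest_scale_indices(src, dst):
    """Source texel chosen by GL_NEAREST for each destination pixel when
    src texels are stretched across dst pixels (sampling at pixel centers,
    no sub-pixel offset)."""
    return [int((x + 0.5) * src / dst) for x in range(dst)]
```

For 16 source texels over 20 pixels, texels 2, 6, 10 and 14 are each sampled twice, which is the periodic "wonky" doubling; for pixel-art style rendering the usual fix is to draw at integer scale factors (or at 1:1 with the quad aligned to pixel centers) rather than to change filter parameters.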
1 | OpenGL object loads in reverse I am trying to load a model and it is loading in reverse. When I try to rotate it 180 degrees, it changes the lighting as well. I am not sure what I need to do to change the direction the model is facing when it is being loaded. This is the object loader:
if (!submarineShader gt load("BasicView", "glslfiles basicTransformations.vert", "glslfiles basicTransformations.frag"))
    cout lt lt "failed to load shader" lt lt endl
glUseProgram(submarineShader gt handle())  use the shader
glEnable(GL TEXTURE 2D)
cout lt lt " loading model " lt lt endl
if (objLoader.loadModel("submarine submarine v2 submarine5.obj", model))  returns true if the model is loaded, puts the model in the model parameter
    cout lt lt " model loaded " lt lt endl
    model.calcVertNormalsUsingOctree()
    model.initDrawElements()
    model.initVBO(submarineShader)
    model.deleteVertexFaceData()
else
    cout lt lt " model failed to load " lt lt endl
1 | What is the correct way to reset and load new data into GL ARRAY BUFFER? I am using an array buffer for color data. If I want to load different colors for the current mesh in real time, what is the correct way to do it? At the moment I am doing:
glBindVertexArray(vao)
glBindBuffer(GL ARRAY BUFFER, colorBuffer)
glBufferData(GL ARRAY BUFFER, SIZE, colorsData, GL STATIC DRAW)
glEnableVertexAttribArray(shader gt attrib("color"))
glVertexAttribPointer(shader gt attrib("color"), 3, GL FLOAT, GL TRUE, 0, NULL)
glBindBuffer(GL ARRAY BUFFER, 0)
It works, but I am not sure if this is a good and efficient way to do it. What happens to the previous data? Does it write on top of it? Do I need to call glDeleteBuffers(1, colorBuffer) glGenBuffers(1, colorBuffer) before transferring the new data into the buffer?
1 | What type of rotation is this and how do I apply it? I'm developing a 3D game in Java using LWJGL and JOML. In order to simplify player and NPC coding, I'm representing their orientation as two scalar quantities, "direction" and "angle" (maybe not the best names). "Direction" is the compass direction that the object is facing, and "angle" is the angle of the object relative to a horizontal plane. So for example if a player has a direction of 180 and an angle of 45, it will be looking south at a point halfway between the horizon and the zenith. First question: what is this kind of rotation called officially? I've already written code to handle calculations on points positions represented in this manner, but I don't know how to include the rotation in a transformation matrix for OpenGL. I have the following code to produce a transformation matrix for an object: Matrix4f world matrix new Matrix4f().identity().translate((float) position.x, (float) position.z, (float) position.y).rotateY((float) Math.toRadians( direction)) (same code, more readable) Matrix4f world matrix new Matrix4f() .identity() .translate((float) position.x, (float) position.z, (float) position.y) .rotateY((float) Math.toRadians( direction)) where position is the location of the object and direction is the aforedescribed "direction" of the object. Second question: how do I include angle in world matrix?
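This two-angle scheme is usually called yaw and pitch (in navigation terms, azimuth and elevation; mathematically, spherical coordinates with unit radius). For the matrix, pitch is simply one more rotation about the local x axis applied after the yaw, e.g. chaining .rotateY(...).rotateX(...) in JOML. The sketch below (plain Python; axis convention assumed: y up, direction 0 facing -z, so 180 faces +z "south") shows the same composition written out as an explicit look vector:

```python
import math

def look_vector(direction_deg, angle_deg):
    """Compass 'direction' (yaw about the vertical y axis) and horizon
    'angle' (pitch) converted to a unit look vector."""
    yaw = math.radians(direction_deg)
    pitch = math.radians(angle_deg)
    return (math.sin(yaw) * math.cos(pitch),
            math.sin(pitch),
            -math.cos(yaw) * math.cos(pitch))
```

Direction 180 with angle 45 gives a vector pointing south and halfway up, matching the example in the question. The exact signs of the two rotations depend on your axis conventions, so treat the ones here as an assumption to verify against your world.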
1 | What is the better approach to redraw a static background scene for a game? Take for example the background scene of a game, which should remain static for an interval. When redrawing the scene each frame, is it better to store the color buffer for every pixel on the viewport and redraw using the stored color buffer, to apply a texture to the whole background, or to draw the triangle quad meshes again?
1 | In an artillery game, how do I mask out the part of the terrain that was hit? Let's say I want to make a really simple artillery game, something like Gorillas. I don't have any experience in games, just some basic understanding of OpenGL. I want to do this for fun and to learn something. I figured out how to simulate gravity, generate simple terrain and do some collision detection. But I have no idea how they mask out the part of the terrain that is hit by a projectile, leaving a hole in the form of a circle. I tried enabling the stencil buffer and drawing a quad which has a circle texture on it, expecting to mask out only the circle part, but it masked out the whole quad (the hole was rectangle-shaped, not circle-shaped). Next I thought maybe some blending would do the trick, but I didn't figure out how. Or should I just draw a polygon with many edges to look like a circle and use that instead (with the stencil buffer)? I'm just curious how it is done. Can someone point me in the right direction?
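The stencil quad masked a rectangle because, by default, every fragment of the quad writes to the stencil buffer regardless of the texture's alpha; you would need alpha testing (or discard in a shader) for only the circle's texels to count. But Gorillas/Worms-style games usually skip stencils entirely: the terrain itself is a bitmap (a solidity mask plus its texture), and an impact erases a disc from it, after which the modified region is re-uploaded. A sketch of the erase step:

```python
def carve_hole(mask, cx, cy, r):
    """mask: 2D list of 0/1 terrain solidity. Clear every cell whose
    center lies inside the circle (cx, cy, r), i.e. punch a round hole."""
    r2 = r * r
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r2:
                mask[y][x] = 0
    return mask
```

The same mask doubles as collision data for projectiles, and on the GL side it can live in the terrain texture's alpha channel, with transparent pixels discarded when drawing.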
1 | Having trouble running OpenGL in MSVC I'm using the OpenGL Programming Guide, 8th Edition, with MSVC 2013, but I can't get the triangles.cpp file to run. These are the errors popping up: http puu.sh jAokn c07420cf46.png
1 | Animation in OpenGL using 3D models I have created a model in Blender. Now I want to read that 3D model in my C program. I figured out that a model can be exported to various file formats, e.g. .obj, .3ds or COLLADA, and then read in a C program. I have been searching the web for quite a while and found many tutorials, but I ran into issues with most of them. For example, in the NeHe tutorial they are using glaux, which I don't want to use in my program. And the remaining tutorials use md2, which is not supported by Blender. So can anyone guide me on which file format to use for exporting a 3D model and how to load it in my OpenGL program? Also, how can I animate that model? Is it possible to add extra effects like tone mapping after it has been loaded in the C program using OpenGL? If yes, then how? P.S. I am using Linux for game development.
1 | Models not rendering when far away I am making a game where objects are represented by colored cubes. I have files that mark the location and rotation of the cubes. I've been using Processing's 3D library, and the cubes disappear beyond a certain distance, like this: Why is this happening?
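Processing sets up a default perspective projection with a fairly close far clipping plane, and anything beyond it is clipped, which matches the screenshot. Per the perspective() reference, the defaults are fov = PI/3, near = cameraZ/10 and far = cameraZ*10, with cameraZ derived from the sketch height:

```python
import math

def processing_default_far(height):
    """Processing's default far clipping plane: cameraZ * 10, where
    cameraZ = (height/2) / tan(PI*60/360), per the perspective() docs."""
    camera_z = (height / 2.0) / math.tan(math.pi * 60.0 / 360.0)
    return camera_z * 10.0
```

For a 600-pixel-tall sketch that is only about 5196 world units. Calling perspective(PI/3.0, float(width)/height, near, far) yourself with a larger far value pushes the clip distance out, at some cost in depth-buffer precision.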
1 | Gaussian blur not working correctly So pretty much I tried to add a Gaussian blur to something I am making and it is acting oddly. The setup is like this:
All framebuffers are cleared to rgba(0,0,0,0), normal blend enabled.
Draw some solid black objects to framebuffer 0.
Apply the horizontal Gaussian pass from framebuffer 0 to framebuffer 1 (this works fine).
Apply the vertical Gaussian pass from framebuffer 1 to the backbuffer (the original black objects are removed here; that does not affect the process at all).
It seems like a standard Gaussian layout, but the problem happens in the final view (presumably the vertical pass). It appears that the vertical Gaussian pass dampened the previous horizontal one, making it a lot thinner (and making the box-shadow effect I was going for look weird). (Note, on the right image, one of the center rectangles for example: the top and bottom are blurred like they should be, but the sides aren't.) I just can't figure out why the vertical blur factor is at least 2x the horizontal one. I feel like this is because of something to do with alpha, because the horizontal pass displays like it should until the second step happens. One final note, this still happens if I flip the horizontal vertical order; it just happens on the other axis. I honestly can't figure this out, so any help would be nice. Also for more info, here's the fragment shader I use (all the data passed to it is correct):
in vec2 ftexpos
uniform sampler2D inputtex
uniform vec2 vector
float 15 gaussianraito float 15 ( 0.023089, 0.034587, 0.048689, 0.064408, 0.080066, 0.093531, 0.102673, 0.105915, 0.102673, 0.093531, 0.080066, 0.064408, 0.048689, 0.034587, 0.023089 )
void main()
    vec4 sum vec4(0.0)
    for (int i 7 i lt 7 i )
        sum texture(inputtex, vec2(ftexpos.x i vector.x, ftexpos.y i vector.y)) gaussianraito i 7
    gl FragColor sum
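Two sanity checks on the shader above, modelled in plain Python: the kernel sums to ~1, and a separable blur's two 1D passes are perfectly symmetric, so the passes themselves cannot thin one axis. That points the suspicion at what happens between the passes: with "normal blend enabled", the first pass's non-premultiplied RGBA output is blended against the (0,0,0,0)-cleared target, attenuating its color by its own alpha before the second pass ever samples it. A common fix (hedged, since it depends on your setup) is to disable blending entirely during the blur passes, or to work in premultiplied alpha, i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).

```python
KERNEL = [0.023089, 0.034587, 0.048689, 0.064408, 0.080066, 0.093531,
          0.102673, 0.105915, 0.102673, 0.093531, 0.080066, 0.064408,
          0.048689, 0.034587, 0.023089]  # the shader's gaussianraito table

def blur_1d(row, kernel):
    """One separable blur pass over a row, clamping at the edges."""
    n, half = len(row), len(kernel) // 2
    return [sum(w * row[min(max(x + i - half, 0), n - 1)]
                for i, w in enumerate(kernel))
            for x in range(n)]
```

Running the pass over a constant row leaves it unchanged (the weights sum to 1), and an impulse spreads symmetrically, so the asymmetry in the screenshot has to come from outside the kernel math.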
1 | Where to store OpenGL object ids When working with OpenGL, you often receive integer ids to keep track of OpenGL objects. For example, representing a simple mesh, you may have a number of references to objects like so: GLuint arrayBuffer GLuint elementArrayBuffer GLuint texture Having many meshes, the ids add up quickly. Therefore my problem is, where is it most logical and efficient to store them? Alternative 1: My first thought was to hand out the ids to the corresponding game entities, something along the lines of: class Entity public GLuint arrayBuffer GLuint elementArrayBuffer GLuint texture Other methods... Then having a render class: Create the buffers and textures and pass ids to entity. renderer.init(entity) Render with the corresponding buffer texture ids stored in entity. renderer.render(projection, view, entity) It is then easy enough, in the implementation of the render method, to just pick the ids and render with them. A disadvantage is that this is a leaky abstraction, where OpenGL concepts are "leaked" to game entities, which does not feel quite right. An advantage might be that the method is supposedly efficient: you do not have to look up ids, you know them for each rendering pass. Alternative 2: Another alternative would be to store the ids internally in the renderer. For example, an internal data struct: struct RenderData GLuint arrayBuffer GLuint elementArrayBuffer GLuint texture Then having an unordered map with references from each entity to its corresponding RenderData: std unordered map lt unsigned int, RenderData gt render data Then in the rendering method, the correct render data is fetched from the map by a unique id that is stored in each entity. An advantage of this approach is that it is less "leaky": only the entity id is passed to the renderer, to know what data to use. A disadvantage may be slightly less efficiency, because of lookups in the unordered map?
I would greatly appreciate inputs on this, what approach to prefer, or if there are any other approaches out there? |
1 | Does OpenGL perform visibility algorithms based on depth (z) values? Does OpenGL perform visibility algorithms based on depth values, or do we have to write our own? Mainly I'm referring to the z-buffer algorithm. Is it built in?
1 | Efficient skeletal animation I am looking at adopting a skeletal animation format (as prompted here) for an RTS game. The individual representation of each model on screen will be small but there will be lots of them! In skeletal animation e.g. MD5 files, each individual vertex can be attached to an arbitrary number of joints. How can you efficiently support this whilst doing the interpolation in GLSL? Or do engines do their animation on the CPU? Or do engines set arbitrary limits on maximum joints per vertex and invoke nop multiplies for those joints that don't use the maximum number? Are there games that use skeletal animation in an RTS like setting thus proving that on integrated graphics cards I have nothing to worry about in going the bones route? |
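On the arbitrary-influences question: engines commonly do cap influences per vertex (4 is the usual choice, matching a vec4 of indices and a vec4 of weights) and pad unused slots with weight 0; the "nop multiplies" are effectively free on GPU hardware, and MD5-style data is re-weighted at import time (drop the smallest influences, renormalize the rest). The blend itself, sketched with scalar (scale, offset) pairs standing in for full bone matrices so it stays testable:

```python
def skin_vertex(pos, bones, indices, weights):
    """Linear blend skinning with a fixed-size influence list: unused
    slots carry weight 0, so they contribute nothing to the sum."""
    x = 0.0
    for idx, w in zip(indices, weights):
        scale, offset = bones[idx]
        x += w * (scale * pos + offset)
    return x
```

Doing this per vertex in GLSL is the norm on anything with real vertex shaders, with the bone matrices uploaded as a uniform array per draw; CPU skinning remains the fallback for very old integrated chips, and many small RTS-scale characters are well within either budget.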
1 | Data structure for collecting entities for instanced rendering My game, a citybuilder, has many simple entities that are rendered via glDrawArraysInstanced. A large city has over 600,000 entities, but most of those entities are one of a few hundred meshes. Every frame, I need a way to collect all those entities so that every entity with the same mesh, texture, and shader pass can be rendered with one draw call. Since my game thread is separate from my render thread, I currently use a data structure, which I call the draw buffer, to collect a plan for what order to draw things in. This data structure has become a real problem. It's essentially a three-dimensional chain of pointers to dynamic arrays, with the first dimension being shader pass, the second mesh, and the third texture. Every frame, the game goes through all entities (which are not in any particular order), filters out the ones that aren't in the view frustum, and inserts each mesh's data into the data structure. The draw buffer takes up a lot of memory and is highly fragmented. I feel like this isn't the right approach, but I'm not sure what the right approach is. I'm trying to figure out if there is a way to do it without storing what I plan to do, but I do think it can be done without multiple passes over the entity list. Maybe I can keep a master version of the data structure without view frustum culling and compute view frustum culling somewhere else. Then I wouldn't have to rebuild the data structure every frame. Instead I would have to update the data structure every time something is added or removed; fortunately entities never change shader pass, texture or mesh, so entities will always stay in the same place in the master data structure. I hope this makes sense to someone, because it barely makes sense to me. The question is, can I solve this problem with a better data structure, or should I consider a different approach entirely? Thanks for any advice you guys have.
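A flatter take on the same idea (a sketch, not a drop-in replacement): key a single hash map by the (pass, mesh, texture) tuple and append instance transforms, so the frame's plan is one map of contiguous arrays rather than a chain of pointers. Keeping the map alive across frames and clearing (not freeing) its arrays removes most of the fragmentation and reallocation:

```python
def collect_batches(entities, in_frustum):
    """entities: dicts with 'pass', 'mesh', 'tex', 'xform' keys (names
    illustrative). All entities sharing (pass, mesh, tex) land in one
    list, i.e. one glDrawArraysInstanced call whose instance buffer is
    that list."""
    batches = {}
    for e in entities:
        if in_frustum(e):
            batches.setdefault((e["pass"], e["mesh"], e["tex"]), []).append(e["xform"])
    return batches
```

This also composes with the master-structure idea in the question: keep the unculled lists persistent (updated on add/remove), and per frame only compute a culled view of each list, rather than rebuilding the whole structure.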
1 | How do game engines manage to 'detect' the OpenGL version supported on a computer? I am learning OpenGL 3.3 and OpenGL 2.1. I want to make a game engine with both. So far I can do advanced stuff with both of them. I noticed that graphics engines like Irrlicht support OpenGL 1.x to 4.x. When I run an Irrlicht program on my computer, which supports OpenGL 3.3, it runs perfectly. And when I run it on a computer with OpenGL 2.1, it still runs perfectly. How does it know that my computer supports OpenGL 3.3 and adapt itself to it? Is it through trial and error, or how are they able to do this? That is a feature I want to incorporate in my game engine.
1 | An API independent way of managing video memory? I'm developing a game. The game architecture is very modular. I have a "Graphics Engine", which uses either a Direct3D or OpenGL renderer. However the user does not have access to the renderers directly. Instead the "Graphics Engine" class is used, which uses a renderer interface. I also have a resource management system. As of now, the renderers are responsible for allocating and releasing video memory. That couples my design in a bad way, since the "Graphics Engine" has undesirable logic for transferring resources between RAM and video RAM by sending signals to the "Renderer". I want the resource management system to be responsible for allocating video memory. I want to decouple the "Renderer" and "Graphics Engine" classes as much as possible, and make the renderer responsible only for rendering. Is there a way to obtain a universal pointer to a video RAM buffer and use it to render with both renderers? My problem is basically that I can only obtain OpenGL or Direct3D handles, which can be used only from the relevant API, and the "Graphics Engine" should not know which renderer it uses.
1 | glColor3f() 0.0 no intensity, 5.0 full intensity? I have very weird behavior of OpenGL's glColor right now. I've wasted hours trying to work out why my line colors were acting so strangely, until I found out that rgb values of 0.0 give zero intensity and 5.0 gives full intensity (yes, NOT 1.0, it's not clamping!). If I color a line with 1.0, 1.0, 1.0 I get very dark grey instead of white. If I color a line with 5.0, 5.0, 5.0 I get white. Here is my code:
gl.glDisable(GL2.GL BLEND)
gl.glDisable(GL2.GL TEXTURE 2D)
 We need to disable any shaders to get colored lines
gl.glUseProgramObjectARB(0)
gl.glColorMaterial(GL2.GL FRONT AND BACK, GL2.GL AMBIENT AND DIFFUSE )
gl.glEnable(GL2.GL COLOR MATERIAL)
gl.glBegin(GL2.GL LINES)
for (int lineIndex 0 lineIndex lt lineSegments.getNumberOfLines() lineIndex )
    gl.glColor3d(5, 0, 0)
    gl.glColor3dv(lineSegments.getLineColor(lineIndex).data(), 0)
    gl.glVertex3fv(lineSegments.getLineStartPoint(lineIndex).floatData(), 0)
    gl.glVertex3fv(lineSegments.getLineEndPoint(lineIndex).floatData(), 0)
gl.glEnd()
gl.glEnable(GL2.GL TEXTURE 2D)
gl.glEnable(GL2.GL BLEND)
Why is OpenGL acting so weird with glColor? Did I miss a GL state or something? I could scale up the colors to bypass this, but I want to find out what's wrong here. (The code example leads to a line that is 100% red.)
1 | Cube world rendering optimizations I'm making a Minecraft-like game and I would like to apply some optimizations. At first I didn't: I rendered the world using a VBO in which I stored a cube model and drew all blocks of the world, and of course it caused lag if there were too many blocks. Then I decided to draw the world face by face and check whether each face is hidden by another one. This concept is simple if the world is made only of cubes of the same size, like this: Blue and black are two blocks of the same size, and the red face is the face to ignore, since it is hidden (overlapped) by another opaque face. As I said, if I want to make blocks of different sizes (scale) position rotation, but still draw them as faces, the matter is different, since I can't know if a face is completely visible or not. Here's an image of a normal block with a "custom" block on it: The blue block is the custom block, the black a "normal" block, and the red is the face overlapped by the bigger one (so it should be ignored during rendering). Here the rotation is zero, but if there's no way to check rotated faces I can simply ignore them and draw all rotated blocks by default... that's not what I'd like to do, but if there's no way, or the way is too difficult... Anyway, finally: how can I check (using GL functions or doing it in my code) whether a face is hidden, so I can skip it during rendering? I forgot to say that another optimization I did is: if a block is hidden (has a block against each of its faces) by other blocks, I don't draw it at all.
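For the unrotated case above there is a cheap CPU-side test (hedged: it is sufficient, not necessary, so it only ever proves a face can be skipped; partially covered faces are simply drawn): project the two touching faces onto their shared plane and check whether the neighbor's opaque face fully contains ours:

```python
def face_hidden(face, neighbor_face):
    """face, neighbor_face: axis-aligned rects (x, y, w, h) on the plane
    the two blocks share. True only when the neighbor's opaque face fully
    contains ours, so the face can safely be skipped at mesh-build time."""
    fx, fy, fw, fh = face
    nx, ny, nw, nh = neighbor_face
    return nx <= fx and ny <= fy and nx + nw >= fx + fw and ny + nh >= fy + fh
```

This runs when (re)building a chunk's mesh, not per frame, so its cost is amortized; for rotated blocks, skipping the test and always drawing them (as the question suggests) is the common pragmatic choice, since the GPU's depth test still resolves the overdraw correctly.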
1 | How to create more vertices from within a shader in OpenGL? When rendering voxels in octrees, the only information necessary is the current octree level, position and colour texture. But one has to send eight vertices to the rendering pipeline in order to render a complete cube, and still needs the colour texture. Is there a better way? Since geometry shaders won't be widespread for quite some time, the "emit vertex" method in the geometry pass won't suffice (or am I wrong? I don't think Mesa currently supports geometry shaders). Also, afaik, backface culling is done before the geometry pass, so that would lead to additional overhead. My current idea is to send 'dummy vertices' and have the actual drawing data in between, but that really shouldn't be the solution. Kind regards
1 | More than 8 lights without deferred shading lighting I want to know if there is any (efficient) technique to use more than 8 lights in a scene made with OpenGL and GLSL, without making use of deferred shading lighting. I have not implemented those techniques because of their limitations: not being able to use transparency or antialiasing. If there is a good alternative, please describe it with an example. I use OpenGL 2.0. Thank you.
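The 8-light limit is a fixed-function limit (gl_LightSource and friends); a GLSL shader can take as many lights as fit in its uniforms and loop over them. A common forward-rendering pattern is therefore to select, per object and per frame, the K most relevant lights and upload only those as uniform arrays. Transparency and MSAA keep working because this is still ordinary forward rendering. A sketch of the selection step (nearest-K; attenuation- or intensity-weighted ranking is a refinement):

```python
def pick_lights(obj_pos, lights, n=8):
    """Per object, keep the n nearest lights (to upload as uniforms)
    and ignore the rest. lights: list of (x, y, z) positions."""
    def d2(p):
        return sum((a - b) ** 2 for a, b in zip(obj_pos, p))
    return sorted(lights, key=d2)[:n]
```

In the shader, the lighting loop then runs over a `uniform vec3 lightPos[K];` style array with a fixed K, which GLSL on OpenGL 2.0-class hardware handles fine as long as K stays modest.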
1 | How can I GL SELECT from a gluPerspective transformed scene? I'm using JOGL to access OpenGL methods on an old OpenGL version, because of school. I have written a method which is called before any objects are drawn; this method reports which object is picked. Before I call my "picking" method, I draw a camera. This is where the problems begin: I can't click precisely on the objects, because they don't know they are transformed by the camera. I have tried leaving the projection matrix unchanged, but this leads to other problems... I first draw the camera, then do picking, then draw the objects to the screen. What am I missing? My camera:
public void draw(GL2 gl)
    GLU glu GLU.createGLU(gl)
    gl.glMatrixMode(GL2.GL PROJECTION)
    gl.glLoadIdentity()
    glu.gluPerspective(viewingAngle, whRatio, 1, 1000)
    gl.glMatrixMode(GL2.GL MODELVIEW)
    glu.gluLookAt(xPos, yPos, distance, 0, 0, 0, 0, 1, 0)
My picking method:
private void picking(GL2 gl)
    if (currentClickPoint ! null)
        int buffSize 512
        int select new int buffSize
        int viewport new int 4
        IntBuffer selectBuffer newDirectIntBuffer(buffSize)
        gl.glGetIntegerv(GL2.GL VIEWPORT, viewport, 0)
        gl.glSelectBuffer(buffSize, selectBuffer)
        gl.glMatrixMode(GL2.GL MODELVIEW)
        gl.glRenderMode(GL2.GL SELECT)
        gl.glInitNames()
        gl.glPushName( 1)
        gl.glMatrixMode(GL2.GL PROJECTION)
        gl.glPushMatrix()
        gl.glLoadIdentity()
        glut.gluPickMatrix((double) currentClickPoint 0 , (double) (viewport 3 currentClickPoint 1 ), 10.0, 10.0, viewport, 0)
        gl.glOrtho(0.0, 8.0, 0.0, 8.0, 0.5, 2.5)
        gl.glMatrixMode(GL2.GL MODELVIEW)
        drawObjects(gl, GL2.GL SELECT)
        gl.glMatrixMode(GL2.GL PROJECTION)
        gl.glPopMatrix()
        gl.glMatrixMode(GL2.GL MODELVIEW)
        gl.glFlush()
        selectBuffer.get(select)
        interpretClicks(select, gl.glRenderMode(GL2.GL RENDER))
        resetClickPoint()
1 | Camera rolling when implementing pitch and yaw I am implementing a camera in OpenGL for an Android game and am having problems getting the pitch yaw roll of the camera correct. I have read many tutorials on quaternions and have implemented a basic camera based on this link. Everything works well except that when I pitch and yaw I get unwanted roll. I have read things that say I need to apply the pitch and yaw rotations together to prevent this, and other things that said not to track individual changes to angles, but I'm not sure what they mean or how to extend this implementation to mine. I am new to this type of math, so any help would be appreciated. My goal is to have a camera that can rotate similar to how a spaceship would rotate: I want the camera to be able to rotate 360 degrees in all directions and turn upside down, which is why I was rotating the upVector too. Any suggestions on what I can do so the camera doesn't roll when pitching and yawing? Thanks! Here is the code that translates user input into two angles and rotates the lookVector, upVector, and rightVector of the camera. My problem is the camera also rolls when I just pitch and yaw.
Java
 Camera uses a position vector, lookAt vector, and upVector eventually calls Matrix.setLookAtM()
Camera c mRenderer.theCamera
 PVector is a class that implements a 3 dimensional vector
float thetaX getTouchChange(...)  set by the user touching the screen
float thetaY getTouchChange(...)
set by the user touching the screen I keep 3 vectors in the camera class, one points up, one points right, and the 3rd is the point I am looking at Pitch rotate the up and look around right get up and look vectors PVector lookVector PVector.sub(c.getLookAtPoint(), c.getLocation()) PVector otherVector c.getUpDirection() get axis PVector rAxis c.getRightDirection() rAxis.normalize() rotate look PVector rotated Quaternion.rotateVector(lookVector, rAxis, thetaY) Add back camera location to convert vector to actual look points rotated.add(mRenderer.theCamera.getLocation()) c.setLookAtPoint(rotated) rotate up rotated Quaternion.rotateVector(otherVector, rAxis, thetaY) rotated.normalize() c.setUpDirection(rotated) Yaw rotate the look and right around up get up and look vectors lookVector PVector.sub(c.getLookAtPoint(), c.getLocation()) otherVector c.getRightDirection() get axis rAxis c.getUpDirection() rAxis.normalize() rotate look rotated Quaternion.rotateVector(lookVector, rAxis, thetaX) Add back camera location to convert vector to actual look points rotated.add(mRenderer.theCamera.getLocation()) c.setLookAtPoint(rotated) rotate right rotated Quaternion.rotateVector(otherVector, rAxis, thetaX) rotated.normalize() c.setRightDirection(rotated) In my rendering code I update the camera orientation by calling the following called in onSurfaceChanged Matrix.frustumM(m3DProjectionMatrix, 0, screenRatio, screenRatio, 1, 1, 1, 70) called in onDrawFrame Matrix.setLookAtM(m3DViewMatrix, 0, theCamera.getLocation().x, theCamera.getLocation().y, theCamera.getLocation().z, theCamera.getLookAtPoint().x, theCamera.getLookAtPoint().y, theCamera.getLookAtPoint().z, theCamera.getUpDirection().x, theCamera.getUpDirection().y, theCamera.getUpDirection().z) Matrix.multiplyMM(m3DMVPMatrix, 0, m3DProjectionMatrix, 0, m3DViewMatrix, 0) m3DMVPMatrix is eventually passed to the shader along with the shape data |
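The roll appears because each frame pitches about a right vector that earlier yaws have already moved, so error (and roll) accumulates in the stored vectors. The standard fix matches the advice quoted in the question: make the two angles the only state, accumulate thetaX into yaw and thetaY into pitch, and rebuild look/right/up from scratch every frame. A sketch of that rebuild (plain Python; axis convention assumed: y up, yaw 0 looking down -z). Note this gives an FPS-style no-roll camera; for true 6DOF spaceship controls, roll is genuinely part of the orientation, and there you would instead keep one orientation quaternion and multiply small local-space pitch/yaw deltas into it:

```python
import math

def basis_from_yaw_pitch(yaw, pitch):
    """Rebuild look/right/up from two angles (yaw about world y, then
    pitch about the resulting right axis). Because the angles are the
    state, not incrementally rotated vectors, roll can never accumulate."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    look = (sy * cp, sp, -cy * cp)
    right = (cy, 0.0, sy)
    up = (-sy * sp, cp, cy * sp)
    return look, right, up
```

The camera's lookAt point for Matrix.setLookAtM() is then position + look, and up goes in directly; the three vectors stay exactly orthonormal by construction, which the incremental version cannot guarantee.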
1 | Problem loading Collada DAE model using Assimp in OpenGL 4.4 I am loading a model in my OpenGL application using the Assimp library like this:
bool CGameObject LoadModelFromFile(char sZFilePath)
    std string fn sZFilePath
    std string td " "
    std string mfn getcwd(NULL, 0) td fn
    Assimp Importer importer
    const aiScene scene importer.ReadFile(mfn, aiProcess CalcTangentSpace aiProcess Triangulate aiProcess JoinIdenticalVertices aiProcess SortByPType)
    if (!scene) return false
    const int iVertexTotalSize sizeof(aiVector3D) 2 sizeof(aiVector2D)
    int iTotalVertices 0
    for(UINT i 0 i lt scene gt mNumMeshes i )
        aiMesh mesh scene gt mMeshes i
        int iMeshFaces mesh gt mNumFaces
        for (UINT f 0 f lt iMeshFaces f )
            const aiFace face amp mesh gt mFaces f
            for (UINT k 0 k lt 3 k )
                aiVector3D pos mesh gt mVertices face gt mIndices k
                Model.vertices.push back(pos.x)
                Model.vertices.push back(pos.y)
                Model.vertices.push back(pos.z)
     VAO
    glGenVertexArrays(1, amp m vao)
    glBindVertexArray(m vao)
     VBO
    glGenBuffers(1, amp m vbo)
    glBindBuffer(GL ARRAY BUFFER, m vbo)
    glBufferData(GL ARRAY BUFFER, this gt Model.vertices.size() sizeof(GLfloat), amp this gt Model.vertices 0 , GL STATIC DRAW)
    glVertexAttribPointer((GLuint)0, 3, GL FLOAT, GL FALSE, 0, 0)
    glEnableVertexAttribArray(0)
     unbind buffers
    glBindVertexArray(0)
    glBindBuffer(GL ARRAY BUFFER, 0)
     return success when everything is loaded
    return true
and then render it like this:
for (GameObject iterator i GameObjects.begin() i ! GameObjects.end() i)
    CGameObject pObj i
    glUniformMatrix4fv(MatrixID, 1, GL FALSE, GetMVPMatrix(pObj gt position))
    glBindVertexArray(pObj gt m vao)
    glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0)  set constant color attribute
    glDrawArrays(GL TRIANGLES, 0, pObj gt Model.vertices.size())
. . .
Function used to set the MVP matrix:
GLfloat GetMVPMatrix(glm vec3 position)
    glm mat4 Model glm translate(mat4(1.0), position)
    glm mat4 MVP3 Projection View Model
    return amp MVP3 0 0
I created and exported my model using Blender.
If I use the OBJ format then it is rendered correctly, but with the DAE format it is like this: My model has 5 meshes, and debugging in Visual Studio shows that all 5 meshes are loaded. Loading the DAE file in the assimp viewer shows the model file has been exported correctly. Where am I going wrong?
1 | How to display three-dimensional objects in a 2D game using OpenGL and orthographic projection? I am creating a 2D (2.5D) game using OpenGL and orthographic projection. It is simple to have relatively flat objects, e.g. characters: I simply use a quad with a texture of the character and move that about. However, what is the best way to draw big objects that have depth, e.g. a big house? Do I use one quad with a three-dimensional-looking representation of the house on it, or do I use multiple quads (e.g. front, side, top)? I prefer using one quad with a three-dimensional-looking texture on it. What are the drawbacks of this approach?
1 | Access vertex data stored in a VBO in the shader If I wanted to store extra data in a VBO for skinning (indices into an array of bone matrices, and floats for applying weights to those bones), how would I go about accessing that data in GLSL? I already know how to put the data in the buffer, interleaved with the rest of the data. For example I want each vertex of a model to contain: Vec3 position, Vec3 normal, Vec4 color, Vec2 texCoords, int boneCount, int[3] boneIndex, float[3] boneWeight. Then drawing like: bind(vertexBufferID); glVertexPointer(3, GL11.GL_FLOAT, stridePlus, 0 * 4); glNormalPointer(GL11.GL_FLOAT, stridePlus, 3 * 4); glColorPointer(4, GL11.GL_FLOAT, stridePlus, (3 + 3) * 4); glTexCoordPointer(2, GL11.GL_FLOAT, stridePlus, (3 + 3 + 4) * 4); glVertexPointer(7, GL11.GL_FLOAT, stridePlus, (3 + 3 + 4 + 2) * 4); glDrawArrays(GL11.GL_TRIANGLES, 0, VPNCTEnd); Then in my vertex shader I want to: get the indices of the bones, get the weights of the bones, get the bone transforms, and for each bone transform apply it to gl_Position at the specified weight. What is the GLSL required for the last part of the process? It seems like I need to set an attribute pointer to the data somehow, but I'm not sure where to do that when the data is stored in the VBO.
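The fixed-function pointers (glVertexPointer, glNormalPointer, ...) cannot carry bone data; the custom attributes have to go through generic slots via glVertexAttribPointer and be declared as `in`/`attribute` variables in the shader, with the bone matrices in a `uniform mat4 bones[MAX_BONES]` array. The math the vertex shader then performs is linear blend skinning: a weighted sum of each bone matrix applied to the position. Here is a hedged CPU reference of that sum, with hypothetical minimal types (the GLSL version is the same three lines with mat4/vec3):

```cpp
#include <array>

struct Vec3 { float x, y, z; };
using Mat4 = std::array<float, 16>; // column-major, as GLSL expects

// Transform a point by a column-major 4x4 matrix (w assumed 1).
Vec3 mulPoint(const Mat4& m, const Vec3& p) {
    return { m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
             m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
             m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14] };
}

// Linear blend skinning: sum of (boneMatrix * position), weighted per bone.
// Mirrors: uniform mat4 bones[N]; in ivec3 boneIndex; in vec3 boneWeight;
Vec3 skin(const Vec3& pos, const Mat4* bones,
          const int idx[3], const float w[3]) {
    Vec3 out{0, 0, 0};
    for (int i = 0; i < 3; ++i) {
        Vec3 t = mulPoint(bones[idx[i]], pos);
        out.x += w[i] * t.x;
        out.y += w[i] * t.y;
        out.z += w[i] * t.z;
    }
    return out;
}
```

The weights per vertex should sum to 1, otherwise the mesh visibly scales where coverage differs.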
1 | Problem with rendering multiple textures from the same image (LWJGL 3) I have started with LWJGL 3 and am trying to build a game engine. I'm stuck on generating terrains. I have one image that contains two textures. Now, for example, I want to generate terrain with only texture 2, so my texture coordinates look something like this: textures = (0.6f, 0f) (0.9f, 1f) ..... Everything is fine if the terrain is not long, but if I stretch the terrain on the z axis, the terrain uses texture 1, as you can see in the image. I just can't understand why it does that.
1 | Can you run OpenGL 2.0 on modern machines? I'm looking to get into 3D with OpenGL, using SDL to help with other stuff. I found plenty of good tutorials (Lazy Foo is a big one), but "Learning Modern 3D Graphics Programming" uses a newer version of OpenGL that I'm not able to run! I opened up the OpenGL Extensions Viewer, and I can only run OpenGL 2.0, whereas most tutorials target higher versions. I ask because I've heard that just about everything in 2.x got deprecated in the newer versions, so I'm worried that my code might not work. I'm looking at a few other tutorials, but I'm so used to SDL that those just confuse me... So should I bother trying to do graphics programming now, or should I just wait until I can upgrade my computer?
1 | Android OpenGL and non premultiplied textures As I understand it, by default all bitmaps get the alpha channel premultiplied in Android if loaded using BitmapFactory. My setup: I have a 32 bit bitmap (.bmp) where the alpha channel holds a gloss map and the RGB holds a specular map. Issue: I found out that I can load the bitmap without premultiplication using the bitmap option opts.inPremultiplied = false. Result: when I render the alpha channel of the texture like this in the shader: gl_FragColor.rgb = vec3(texture.a); the resulting mesh is all white and not shades of gray as expected. I upload the texture with this method: GLUtils.texImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image, 0). I have also tried the GLES20.glTexImage2D approach with the same result, although I might have done something wrong there. Can anyone spot where I have gone wrong or what else I need to do?
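One detail worth keeping in mind while debugging this: premultiplication scales only the color channels by alpha and leaves the alpha channel itself untouched, so an all-white `texture.a` points at the upload path (e.g. the GLUtils.texImage2D route discarding or saturating alpha) rather than at premultiplication itself. A minimal sketch of the two operations, to make the math concrete:

```cpp
struct RGBA { float r, g, b, a; };

// Premultiplication scales the color channels by alpha; alpha is unchanged.
RGBA premultiply(RGBA c) {
    return { c.r * c.a, c.g * c.a, c.b * c.a, c.a };
}

// Recover the straight (non-premultiplied) color where alpha is non-zero.
RGBA unpremultiply(RGBA c) {
    if (c.a == 0.0f) return { 0, 0, 0, 0 };
    return { c.r / c.a, c.g / c.a, c.b / c.a, c.a };
}
```

Filling a ByteBuffer yourself and uploading it with GLES20.glTexImage2D sidesteps any bitmap-side alpha handling entirely, which makes it a useful control experiment here.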
1 | OpenGL behaviour depending on the graphics card? This is something that never happened to me before. I have OpenGL code that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, etc. I wanted to check how the performance of my code improves using a faster graphics card (both the new and old cards are NVIDIA, always using the NVIDIA development drivers). But now I have found that once I run the code using the new graphics card, it behaves completely differently (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really looked into what has changed, but I am guessing that some OpenGL states are, by default, set differently. Is this possible? Have you ever found different OpenGL GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging the old one back in, pushing all OpenGL default states, and comparing with the ones I initially get using the new card.) Edit 1: The graphics cards are an NVIDIA Quadro GX7300 (on which my code works OK) and an NVIDIA GeForce GTX 560 Ti (on which the result changes fails). Edit 2: I have commented out a lot of my code, and apparently the strange behaviour has nothing to do with texture handling. A simple chessboard like floor looks different: the diagonal white lines did not appear using the old NVIDIA Quadro GX7300. Any guess what OpenGL related thing could be causing this? Figure 1. Edit 3: I have now fixed the issue from the previous edit, regarding the weird unwanted diagonal thin white lines. As I commented below, I had to remove glEnable(GL_POLYGON_SMOOTH), which was affecting the NVIDIA GeForce GTX 560 for whatever reason (probably due to reasons explained by mh01 in his answer). However, I am still facing a "texture blending" problem when using the NVIDIA GeForce GTX 560.
I have a 3D model that is textured by a shader blending 8 different images to compute the right texture, depending on where the camera is at that particular moment. The resulting texture is a combination of different images; ideally they are blended seamlessly, using a set of blending weights computed each time the camera moves. The blending weights are still computed correctly, but the resulting texture is wrong. I am guessing that the GL_BLEND function is somehow behaving differently, but I have checked it on both graphics cards and it is actually the same. I have no idea what else could be involved in producing this wrong result. As you can imagine, the black line is where two original textures are blended in order to get a seamless texture. If I use the exact same code and the NVIDIA Quadro GX7300, the shader works as expected. However, when I use the NVIDIA GeForce GTX 560, the texture blending goes wrong. Any guess? This is the GLSL shader I am using.
1 | Why is the projection window between -1 and 1? Is it a convention? What do we achieve with this? I am reading about how the perspective and orthographic matrices are calculated, and everyone normalizes the homogeneous coordinates to [-1, 1]. Thank you! Orthographic Projection
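It is indeed a convention: OpenGL clips against a fixed [-1, 1] cube in normalized device coordinates (Direct3D uses [0, 1] for depth), so that clipping and the viewport transform can be implemented once in hardware, independent of whatever units your scene uses. The projection matrix's job is just the linear remap into that cube. As a sketch, here is the remap an orthographic projection performs on one axis (this is exactly the first row of glOrtho's matrix):

```cpp
// Map x in [l, r] linearly to NDC [-1, 1].
float toNdc(float x, float l, float r) {
    return 2.0f * (x - l) / (r - l) - 1.0f;
}
```

The hardware then maps [-1, 1] to the pixel rectangle given by glViewport, so no stage after the vertex shader ever needs to know your world's scale.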
1 | How do you determine which object surface the user's pointing at with LWJGL? The title pretty much says it all. I'm working on a simple 'let's get used to LWJGL' project involving manipulation of a Rubik's cube, and I can't figure out how to tell which side square the user is pointing at.
1 | How can I orient a circle relative to the position of a 3D object? How can I orient a circle that has a position relative to a 3D object? If I have a 3D object which is a hemisphere, I would like the circle to be underneath that hemisphere, and whenever the hemisphere moves, the circle should move with the same angle and position, staying underneath it. In other words, I would like to attach a 2D circle to a hemisphere, parallel to it.
1 | Order independent transparency in a particle system I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: Each particle is a point sprite in a single mesh, and I can't use the scene graph's ability to sort transparent nodes (the system node itself should be properly sorted, though). Particle positions are computed in the shader from initial velocity, acceleration and time; in order to sort the system I would have to perform all these computations on the CPU, which is something I want to avoid. Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non transparent but "masked" textures. Still, it's not enough for semi transparent particles. How would you handle this?
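One commonly used escape hatch: if the particles can read as glowing (fire, sparks, magic), additive blending with glBlendFunc(GL_ONE, GL_ONE) is commutative, so draw order stops mattering and no sorting is needed at all; regular alpha blending, however, stays order-dependent. If sorting does turn out to be necessary, comparing squared distances avoids the square root per particle. A sketch of that CPU-side sort, with hypothetical minimal types:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Particle { Vec3 pos; };

// Squared distance is enough for ordering; no sqrt needed.
float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Back-to-front ordering for correct alpha blending. Only needed when an
// order-independent blend mode (e.g. additive GL_ONE, GL_ONE) won't do.
void sortBackToFront(std::vector<Particle>& ps, const Vec3& cam) {
    std::sort(ps.begin(), ps.end(),
              [&](const Particle& a, const Particle& b) {
                  return dist2(a.pos, cam) > dist2(b.pos, cam);
              });
}
```

Since the question computes positions on the GPU, this sort would require duplicating that motion math on the CPU, which is exactly the trade-off being weighed; the additive-blend route avoids it entirely.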
1 | In a shader, why does substituting a variable with the expression producing it cause different behaviour? I have this correctly working OpenGL shader producing Perlin noise: float left = lerp(fade(v), downleft, topleft); float right = lerp(fade(v), downright, topright); float result = lerp(fade(u), left, right); Then I tried plugging the definitions of right and left into result: float result = lerp(fade(u), lerp(fade(v), downleft, topleft), lerp(fade(v), downright, topright)); Surprisingly, this behaves completely differently, giving visible edges in my Perlin noise. Below are both results. My whole 30 line shader is here. What is the difference between those?
1 | What does the keyword "interface" mean in GLSL? interface vs_out { vec3 get_color(); }; struct vs_out_impl : vs_out { vec3 get_color() { return vec3(1,0,0); } }; Where is this behavior defined? None of the specifications uses the keyword interface; it is merely reserved in version 460. I myself accidentally discovered, in Visual Studio, that this keyword is recognized, thought about it, and remembered NVIDIA's language Cg (C for graphics), which has the ability to create interfaces. I tried it and it worked! This code does not fail with compilation errors; it compiles fine. To check, instead of the diffuse color I used the method vs_out_impl::get_color(), and the objects were painted red. Where can I find the specification of this functionality? On the Internet, a search combining the words interface and GLSL leads to articles on interface blocks, which is not the same thing.
1 | How can I render a shiny, metallic surface? I am creating a game in which I want to render a kind of shiny black metallic surface with some reflection properties. I know that there would be something related to masks involved. But somehow I am not able to get started. Could someone give me some insight into the matter, or a link to a good tutorial?
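The usual recipe for shiny black metal is environment mapping: sample a cube map along the reflected view vector (GLSL's reflect()) and modulate it by a Fresnel term so the reflection strengthens at grazing angles, which is what sells the metallic look. A hedged sketch of the Fresnel factor (Schlick's approximation), written here in C++ but identical line-for-line in GLSL:

```cpp
#include <cmath>

// Schlick's approximation of Fresnel reflectance: reflections get stronger
// as the view grazes the surface. f0 is the base reflectivity at normal
// incidence, which is high for metals even when the albedo is near black.
float fresnelSchlick(float cosTheta, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}
```

In the fragment shader you would mix the (near-black) base color with the cube map sample using this factor; a grayscale mask texture, as the question suspects, can then vary f0 across the surface so only some regions read as polished metal.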