_id: int64 (0 to 49)
text: string (lengths 71 to 4.19k)
30
What AI algorithm to use to find hidden shapes in a Battleships-like game I'm developing a game similar to Battleships, only using different shapes. A single type of shape will be used within a game session, by both the human player and the AI. There will be several objects spread out on the map (10x10 grid). I want to find a way to guess the location and orientation of the human player's "ships" after I have one or more hits. I'm aware that there will be a lot of randomness involved, but I want the AI to be smart enough to pose a challenge to the human player. I'm thinking about having the AI "sweep" the map and evaluate each possible shape placement, somehow assign a probability to each position, and maybe refine the guess with additional sweeps (thus a higher difficulty). But I need some ideas on what logic to use; I don't want to check every possible combination. Thank you.
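A minimal sketch of the "sweep" idea described above, in Python: for every legal placement of the known shape that is consistent with the shots so far, add weight to the cells it covers, then fire at the highest-weighted unknown cell. The shape, grid size, and cell states below are hypothetical placeholders, not part of the question.

    # Probability-heatmap targeting for a Battleships-like game (sketch).
    SIZE = 10
    SHAPE = [(0, 0), (0, 1), (0, 2)]                  # hypothetical 1x3 shape (row, col offsets)

    def normalize(cells):
        mr = min(r for r, _ in cells)
        mc = min(c for _, c in cells)
        return tuple(sorted((r - mr, c - mc) for r, c in cells))

    def placements(shape):
        """All translations of the shape and its 90-degree rotation that fit on the grid."""
        for rot in {normalize(shape), normalize([(c, -r) for r, c in shape])}:
            for dr in range(SIZE):
                for dc in range(SIZE):
                    cells = [(r + dr, c + dc) for r, c in rot]
                    if all(r < SIZE and c < SIZE for r, c in cells):
                        yield cells

    def best_shot(board, shape=SHAPE):
        """board[r][c] is None (untried), 'hit' or 'miss'. Returns the cell to fire at."""
        heat = [[0] * SIZE for _ in range(SIZE)]
        for cells in placements(shape):
            if any(board[r][c] == 'miss' for r, c in cells):
                continue                              # placement contradicted by a miss
            bonus = 1 + 10 * sum(board[r][c] == 'hit' for r, c in cells)
            for r, c in cells:
                if board[r][c] is None:
                    heat[r][c] += bonus               # reward placements that explain known hits
        untried = [(r, c) for r in range(SIZE) for c in range(SIZE) if board[r][c] is None]
        return max(untried, key=lambda rc: heat[rc[0]][rc[1]])

Repeating the sweep after every shot already gives the "refine the guess" behaviour mentioned above, since new hits and misses prune the consistent placements.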
30
Unreal AIs can't target buildings After following some of the Unreal tutorials, I was able to create an AI that tracks the player. Now I want it to target specific buildings. I have tried various methods of modifying what was demonstrated in the Unreal videos, but all I can get my AI controller to do is either stand in place or walk to the center of the nav mesh. Here is an example of what I have done. If anyone can help: how can I get my AI to attack the Armory? Thanks.
30
How can I obtain in-game data from Warcraft 3 from an external process? I am implementing a behavior algorithm and would like to test it with my lovely Warcraft III game to watch how it will fight against real players. The problem I'm having is that I don't know how to obtain information about the in-game state (units, structures, environment, etc.) from the running WC3 game. My algorithm needs access to the hard drive and possibly distributed computing, which is why JASS (WC3's editor language) isn't appropriate; I need to run my algorithm from a separate process. Direct3D hooking is one approach, but it hasn't been done for WC3 yet, and a significant drawback of that approach would be the inability to watch how the AI performs online, since it uses the viewport to issue commands. How can I read in-game data from WC3 from a separate process in the fastest and easiest way? Precisely what I need for Warcraft 3 already exists for StarCraft: BWAPI.
30
Behaviour tree code example? http://altdevblogaday.org/2011/02/24/introduction-to-behavior-trees Obviously the most interesting article I found on this website. What do you think about it? It lacks code examples; do you know of any? I also read that state machines are not very flexible compared to behaviour trees... On top of that, I'm not sure if there is a true link between state machines and the State pattern... is there?
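Since the linked article stops short of code, here is a minimal, hypothetical sketch of the core behaviour-tree node types in Python; the node names and the SUCCESS/FAILURE statuses are assumptions, not taken from the article.

    # Minimal behaviour-tree skeleton (sketch).
    SUCCESS, FAILURE = 'success', 'failure'

    class Sequence:
        """Ticks children in order; fails on the first failure, succeeds if all succeed."""
        def __init__(self, *children):
            self.children = children
        def tick(self, blackboard):
            for child in self.children:
                status = child.tick(blackboard)
                if status != SUCCESS:
                    return status
            return SUCCESS

    class Selector:
        """Ticks children in order; succeeds on the first success, fails if all fail."""
        def __init__(self, *children):
            self.children = children
        def tick(self, blackboard):
            for child in self.children:
                status = child.tick(blackboard)
                if status != FAILURE:
                    return status
            return FAILURE

    class Action:
        """Leaf node wrapping a plain function that returns a status."""
        def __init__(self, fn):
            self.fn = fn
        def tick(self, blackboard):
            return self.fn(blackboard)

    # Usage example: eat when hungry, otherwise wander.
    is_hungry = Action(lambda bb: SUCCESS if bb['hunger'] > 5 else FAILURE)
    eat       = Action(lambda bb: SUCCESS)
    wander    = Action(lambda bb: SUCCESS)
    tree = Selector(Sequence(is_hungry, eat), wander)
    print(tree.tick({'hunger': 7}))   # 'success' via the eat branch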
30
Need some general direction for turn based planning AI I'm planning to make a turn-based fleet battle game. While I think I can figure out most of the things I need, I have no idea about the AI handling this sort of game. I need some general direction about how I should deal with it. None of the things I describe below is currently done, except maybe the unit movement stuff. The combat resembles Frozen Synapse or Steambirds, in which you and your opponent plan your units' actions and then execute them to see how it resolves. Since the units are naval warships, their movement is restricted by current direction and speed, etc. They attack when you order them to do so (not automatically). I think I can give scores to each move (like a Chess AI), depending on its location, direction, possible movement area, known enemy positions and things like that. Then I can use the Minimax algorithm to let the AI select the best move. Now, I'm asking because I can't figure out which situation the score should be based on. Should the AI make a plan assuming the enemies are all moving straight, predict the best enemy plan based on it, and then write the actual plan based on the prediction? Is that going to demand a lot of processing power? Or is there a simpler way using Minimax? Unlike grid-based tactical games, the area in which a unit can move is a (semi-)continuous area. If I try to give a score to all possible positions, that might take too much processing power. Is there a way to handle this issue neatly? There might be more efficient/effective alternatives to Minimax for this particular game which I do not know. So... I'm asking for general direction about how I should handle the AI. Details will be filled in when I actually make it.
30
What data should be cached in a multiplayer server, relative to AI and players? In a virtual place, fully network driven, with an arbitrary number of players and an arbitrary number of enemies, what data should be cached in the server memory in order to optimize smooth AI simulation? Trying to explain: let's say player A sees players B to E, and enemies A to G. Each of those players sees player A, but not necessarily each other. The same applies to enemies. Think of this question from a top-down perspective, please. In many cases, for example, when a player shoots his gun, the server handles the sound as a radial "signal" that every other entity within reach "hears" and reacts upon. Doing these searches all the time for a whole area, containing possibly a lot of unrelated players and enemies, seems to be an issue when the budget for each AI agent is so small. Should every entity cache whatever enters and exits its radius of awareness? Is there a good way to trace the entities close by without flooding the memory with such caches? What about other AI related problems that may arise, after assuming the previous one works well? We're talking about environments with possibly hundreds of enemies, a swarm.
30
Are there any games that contain a machine learning AI? Can anybody here give a reference to commercial AAA games that implement a machine learning AI?
30
Implementing priority structure into a behavior tree, to react to player actions My goal is to have a behavior tree that can run alone (autonomously) but also react to input from the player. I'm making an AI for a hack and slash game where the AI will fight you, chase you, etc. and play defensively based on certain stats (health etc.), but I want to add an additional layer: events from the player. Say the player attacks; I want a random chance (maybe 5-10%) to trigger a dodge or step back from the AI. Currently I have that in its own selector, but that is not sufficient, as it just has the same priority, or triggers on the same level, as the other selectors/sequences in the tree. So how do I go about implementing a priority into the tree? I want the dodge to be labelled as, say, priority 1 if the event triggers; otherwise the tree should just go about its usual business.
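One common pattern, sketched below in Python with hypothetical node and blackboard names, is to put the reactive behaviours in the highest-priority branch of the root selector and re-evaluate that branch every tick, so a triggered dodge pre-empts whatever the lower branches were doing.

    import random

    SUCCESS, FAILURE = 'success', 'failure'

    def dodge_if_attacked(bb):
        """Priority-1 branch: only fires when the player attacked this tick, and then only sometimes."""
        if bb.get('player_attacked') and random.random() < 0.10:   # ~10% dodge chance (assumption)
            bb['action'] = 'dodge'
            return SUCCESS
        return FAILURE

    def normal_combat(bb):
        """Priority-2 branch: the usual chase/attack/defend logic goes here."""
        bb['action'] = 'fight'
        return SUCCESS

    def tick(bb):
        # Root selector, evaluated from highest to lowest priority every frame.
        for branch in (dodge_if_attacked, normal_combat):
            if branch(bb) == SUCCESS:
                return bb['action']

    print(tick({'player_attacked': True}))   # sometimes 'dodge', usually 'fight'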
30
Chess Artificial intelligence with python and pygame I created a chess game with python and pygame. Now I'm trying to make the artificial intelligence, but what is actually the best way to do it? Some tips, tricks, links? Thanks
30
Scripting a sophisticated RTS AI with Lua I'm planning to develop a somewhat sophisticated RTS AI (e.g. see BWAPI). I have experience programming, but none in game development, so it seems easiest to start by scripting the AI of an existing game I've played, Warhammer 40k: Dawn of War (2004). As far as I can tell, the game AI is scripted with some variant of Lua (judging by the file extensions .ai and .scar). The online documentation is sparse and the community isn't active anymore. I'd like to get some idea of the difficulty of this undertaking. Is it practical, with a scripting language like Lua, to develop an RTS AI that includes FSMs, decision trees, case based reasoning, and transposition tables? If someone has any experience scripting Dawn of War, that would also help.
30
Getting correct direction of the Car I am building a car racing game using a coordinate system in which I have the enemies' cars and the player's car. I want to ask: what is the best way to find the direction of the enemies' cars so that they face the player's car and move towards it?
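A minimal sketch of the usual approach, assuming a plain 2D coordinate system: take the vector from the enemy car to the player's car, derive the facing angle with atan2, and move along that vector. The names below (enemy_x, player_x, speed) are illustrative assumptions.

    import math

    def steer_towards(enemy_x, enemy_y, player_x, player_y, speed):
        """Return the facing angle (radians) and the per-frame velocity toward the player."""
        dx, dy = player_x - enemy_x, player_y - enemy_y
        angle = math.atan2(dy, dx)                       # angle the enemy car should face
        dist = math.hypot(dx, dy)
        if dist == 0:
            return angle, (0.0, 0.0)
        vx, vy = speed * dx / dist, speed * dy / dist    # normalized direction * speed
        return angle, (vx, vy)

    # Usage: move an enemy at (0, 0) toward a player at (10, 10) at 2 units per frame.
    print(steer_towards(0, 0, 10, 10, 2))

In practice the car's current facing would be rotated toward that angle by a limited amount per frame rather than snapped to it, so the motion looks natural.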
30
Approaches to partner trick taking card game AI I am creating an Android game out of a partner based, trick taking card game. What are some generic approaches to AI players for these types of games, and what are the advantages/disadvantages of each?
30
Snake AI Is a Hamiltonian approach valid for all grid sizes? So, as has been done many times before, I am designing an AI that can play Snake as effectively as possible. It didn't take me long to find this extremely useful thread here: How to find a safe path for an AI snake? where the top answer first and foremost recommends forming a Hamiltonian circuit for the grid and beginning by just having the snake follow this route. However, after attempting this, I realised it didn't work with my initial grid size (23x23), at least I don't think it does. My understanding may be incorrect, but from what I gather, with m rows and n columns, if m*n is odd, then no Hamiltonian circuit is possible. If this is the case, should I abandon this method? Or is there any way of implementing it in some cases?
30
How can I apply steering behaviors to a car controlled with turning and acceleration? I feel like I've got my head around steering behaviors, but I'm having trouble applying them to a car. The steering behaviors return forces that one could apply to an object that can move in any direction, but a car can essentially only move forward and turn. I'm having trouble determining how hard the car should turn or how much it should accelerate forward based on the steering force. How can I translate a steering force into the car's input?
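A common trick, sketched here in Python under the assumption of a simple 2D car with a heading angle: project the desired steering force onto the car's forward and sideways axes; the forward component drives the throttle and the lateral component drives the steering input. The max_force normalisation and the [-1, 1] control range are assumptions.

    import math

    def force_to_controls(heading_angle, force_x, force_y, max_force):
        """Convert a steering force into (throttle, steer) inputs in [-1, 1]."""
        fwd_x, fwd_y = math.cos(heading_angle), math.sin(heading_angle)   # car's forward axis
        side_x, side_y = -fwd_y, fwd_x                                    # car's sideways axis
        forward_amount = force_x * fwd_x + force_y * fwd_y                # dot product
        lateral_amount = force_x * side_x + force_y * side_y
        throttle = max(-1.0, min(1.0, forward_amount / max_force))
        steer = max(-1.0, min(1.0, lateral_amount / max_force))           # sign picks the turn direction
        return throttle, steer

    # Usage: car facing +x, steering force pointing forward and to the side.
    print(force_to_controls(0.0, 5.0, 5.0, 10.0))   # (0.5, 0.5)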
30
What should be created first in a video game? When starting developing a video game, should one focus on creating the environment (buildings, trees, mountains etc) first or the A.I. (Playable character, NPCs etc)?
30
Can anyone recommend an AI sandbox? I'm a passionate person who has been around AI for a long time, but never going deep enough into it. Now it's time! I've been really looking for some way to concentrate on AI coding, but couldn't succeed in finding an AI environment I can focus on. I just want an AI sandbox environment which would give me tools like: visibility information; a character controller; a way to easily define a level, with obstacles of course; physics/collider management; trigger management. It doesn't need to be a shiny, eye-candy graphical renderer; this is about pathfinding, tactical reasoning, etc. I have tried: Unreal Dev Kit (while the new release announcement is about C++ coding, that is about external tools and will be released in 2013); CryEngine (really interesting, as AI is present here, but coding with it appears to be hell, or did I get it wrong?); Half-Life Source, C4, Torque, DX Studio (either quite old, not very useful or costly; these imply digging into the documentation (when provided) to code everything, graphics included); Unity 3D (the most promising platform. While you also need to create your own environment, there are lots of examples. The disadvantage, in addition to spending time to get this environment working, is the language choice: C#, JavaScript or Boo. C# is not that hard, but this implies you'll always have to convert papers (I love those from Lars Linden), book code, or anything you can find on AiGameDev, which are most often in C++. This is extra work. I've looked at "Simple Path", the very good Aron Granberg work (but no source provided) and AngryAnt's work); AI Sandbox (this seems to be exactly what, as an AI coder, I want to use. I saw some previews, but from 2009; we still don't know what it will be about precisely, will it be open source or free (I strongly doubt it), will I be able to buy it? Will it really provide the tools I need to focus on AI?). That being said, what is the best environment to be able to focus on AI coding only? Is it even possible?
30
Card Game Suggestion I'm developing a game board for a 4-player card game well known in my region (like Bridge with 8 cards). I created a card-control class, all the deck methods and all the logic for card values and score counting. But I need some links or suggestions for the implementation of a "computer player" based on a few rules to follow. Thank you all for any help.
30
Why BlackBoardValue Says Invalid And False? UnrealEngine4 I think my AI does not move towards the sound because of the blackboard keys; one says Invalid and another says False, and I think that is the reason. The enemy AI should move to the location where the sound is heard; here is the photo of the Behaviour Tree. Look at those 2 keys named "InvestigatePosistion" and "HasHearNoise": they say False and Invalid. So how do I make them work?
30
How to demo Advanced Game AI as a portfolio piece? Basically every game company wants to see a portfolio that exhibits your skill set. If you're specializing in AI, though, what should you show off, and how? Some thoughts: Are nice graphics in an AI demo a must (to get past non-technical HR that don't understand AI; think 3D versus 2D)? Demo multi-featured AI, or a single focused example? Fundamental skills like pathfinding, HFSMs, planning, etc. are critical AI components, but do they really impress as a portfolio piece? A single AI entity, multiple entities, or large populations: is more always better? Also, as a portfolio piece, ideally there would be an executable, and videos which should show off whatever is being demonstrated within only a few minutes. Examples: I would say any of the skills exhibited in this video (Autodesk Kynapse) would make for a great portfolio piece, but are game companies really expecting this from a single person? AI Sandbox is another great example, but this was made from the work of many, many people. Any examples of good AI portfolios would be great.
30
What alternatives exist of how an agent can follow the path calculated by a path finding algorithm? What alternatives exist for how an agent can follow the path calculated by a pathfinding algorithm? I've seen that the simplest form is to go to one point and, when the agent has reached this point, discard it and go to the next point. I think that this approach has problems when the game has physics with dynamic objects that can block the travel between point A and point B; then the agent is pushed off its original trajectory, and sometimes going to the last destination point is not the most natural behavior. In the literature I have always read that the path is only a suggestion of where the agent has to go, but I don't know how this suggested path should be followed. Thanks.
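A minimal sketch of the usual "seek the next waypoint, with an arrival radius" loop, in Python; the radius value and the replanning hook mentioned in the comment are assumptions, not part of the question.

    import math

    ARRIVE_RADIUS = 0.5   # how close counts as "reached" (tuning assumption)

    def follow_path(pos, path, speed):
        """Advance one step along a list of waypoints; returns (new_pos, remaining_path)."""
        while path and math.dist(pos, path[0]) < ARRIVE_RADIUS:
            path = path[1:]                      # waypoint reached: discard it
        if not path:
            return pos, path                     # done
        tx, ty = path[0]
        dx, dy = tx - pos[0], ty - pos[1]
        d = math.hypot(dx, dy)
        step = min(speed, d)
        return (pos[0] + step * dx / d, pos[1] + step * dy / d), path

    # Usage: if a dynamic obstacle blocks the current segment (e.g. a blocked raycast),
    # request a new path from the planner instead of pushing blindly toward the waypoint.
    pos, path = (0.0, 0.0), [(2.0, 0.0), (2.0, 2.0)]
    for _ in range(10):
        pos, path = follow_path(pos, path, 0.5)
    print(pos, path)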
30
Player AI for games involving scoring goals like soccer, hockey, basketball? I've been working on a soccer game, and was thinking of different ways to program player AIs, and was wondering how they actually work. The concept is probably similar in all games where there are balls and goals, like soccer, hockey, basketball etc. Which algorithms do they use? I imagine if you have the ball, you can use A* to figure out a path past other "enemy" players and score. That seems simple to implement. But if you do not have the ball, then it becomes really complicated: you could either play "zone", in which case you stay within a particular region of the playing area and orient yourself towards the ball, or switch to playing "man", in which case you stick to some player on the other team. I was wondering what other algorithms people use, as machine learning for a simple mobile game seems like overkill.
30
State machine interpreters I wrote my own state machine tool in C and at this point I'm faced with two choices for specifying state machines: crafting a little language and writing an interpreter, or writing a compiler for that language. I know the advantages and disadvantages of each. I'd like to know what choices game programmers have made for their games. If you've used a state machine in your game in any form, I'd be interested in knowing how you did it.
30
Enemy evolution in shoot em ups Are there any shoot em ups in which the enemies actually evolve in response to their relative success against the player? By evolve, I mean that each enemy has some genetic information encoding its behaviour. Enemies which last longer or do more damage are allowed to mix their genes to create new enemies. This way the enemies would become better adapted to the particular player.
30
How to implement Boids steering efficiently? (i.e. find nearest neighbours of moving entities) As I understand it, modern pathfinding uses ideas such as Boids steering on top of traditional A* and Dijkstra's algorithm. It's easy to find recommendations for implementing Dijkstra's algorithm efficiently in, say, C++. That is, I can find out what the typical data structures to use are. With Boids steering I cannot find anything that definitively says what the recommended way to go is. You have these moving units and you need to be able to find, for each unit, all the nearby units to apply steering or whatever else. What kind of data structure can I use to perform these nearest neighbour queries efficiently?
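The usual answer is a uniform grid / spatial hash (or a quadtree or k-d tree), rebuilt or updated each frame, so each boid only tests entities in its own and adjacent cells. A small Python sketch, with the cell size chosen to match the neighbour radius (an assumption):

    from collections import defaultdict

    CELL = 10.0   # cell size ~= neighbour radius (tuning assumption)

    def build_grid(positions):
        """Bucket entity ids by integer cell coordinates."""
        grid = defaultdict(list)
        for i, (x, y) in enumerate(positions):
            grid[(int(x // CELL), int(y // CELL))].append(i)
        return grid

    def neighbours(i, positions, grid, radius):
        """All entity ids within radius of entity i, checking only the 3x3 surrounding cells."""
        x, y = positions[i]
        cx, cy = int(x // CELL), int(y // CELL)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    if j != i and (positions[j][0] - x) ** 2 + (positions[j][1] - y) ** 2 <= radius ** 2:
                        out.append(j)
        return out

    # Usage: rebuild the grid once per frame, then query per boid (O(1) expected per query).
    positions = [(1.0, 1.0), (4.0, 2.0), (25.0, 25.0)]
    grid = build_grid(positions)
    print(neighbours(0, positions, grid, 10.0))   # [1]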
30
Developing a Bot for Fanorona game I am currently developing a bot for a board game called Fanorona. The game is not as popular as other board games such as chess or backgammon, but it still brings a lot of fun. Some of you may know it from Assassin's Creed 3, specifically the game Connor and Achilles play together. http gasy fanorona.sourceforge.net docs fanorona rules.html In this link I provided the basic rules. I have studied what people usually use for chess bots, algorithms like Minimax with Alpha-Beta pruning. These algorithms require a scoring system in order to make decisions, and must take into account the branching factor, which in this game is hard to estimate. Additionally, the bot could be trained to recognize patterns. I have never worked with AI algorithms before, therefore I would like to find out your opinion on this subject. What course of action would you employ in order to build a bot for this kind of game?
30
Can a Neural Network play tic tac toe? Does this make any sense? I'm thinking about the theoretical possibility of playing tic tac toe with a neural net. Does this make any sense? Let's consider tic tac toe, which contains 3 rows and 3 columns (9 cells). OK, then the input vector consists of all our cells 0..8, where 0 is "O", 1 is "X" and 2 is an empty cell. But what do we have for the target vector? On every step of the NN we need to know some target and make our distance to the target shorter. But the target of this game is only a win (or maybe a draw); I don't understand how we can make our distance to a win shorter. Any suggestions? My goal is to learn more about neural networks in games; that's why I'm considering such an easy game. Maybe it's not the best choice for a NN and I need to consider some other game?
30
Negamax for turn based game I am working on a fast paced turn based game; each turn the player can move left, move right or stay put, and choose whether or not to fire. Each turn the shots move one block in the direction they are pointing. I would like to implement Negamax on this game for curiosity's sake. Could anyone recommend a brilliant tutorial they tried and found satisfactory? Thanks
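For reference, the core of negamax is only a few lines; here is a generic sketch in Python with alpha-beta pruning. The game-state interface (moves, apply, evaluate, is_terminal, and the depth cutoff) is hypothetical and would need to be filled in for the actual game.

    def negamax(state, depth, alpha=float('-inf'), beta=float('inf')):
        """Return the best achievable score for the player to move, from this state."""
        if depth == 0 or state.is_terminal():
            return state.evaluate()                  # score from the current player's point of view
        best = float('-inf')
        for move in state.moves():
            child = state.apply(move)
            score = -negamax(child, depth - 1, -beta, -alpha)   # the opponent's best is our worst
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break                                # alpha-beta cutoff
        return best

    def best_move(state, depth):
        return max(state.moves(), key=lambda m: -negamax(state.apply(m), depth - 1))

The only subtlety is that evaluate() must always score the position for whoever is about to move, which is what lets the single negated recursive call replace the separate min and max players of plain minimax.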
30
Dealing with multiple prerequisites in goal orientated action planning using A* I'm currently having difficulty reasoning about how multiple prerequisites are satisfied using the A* algorithm during planning. Assume the following actions (with prerequisites in brackets): Get Material; Make Gloves (needs material); Get Iron; Make Axe (needs iron). And the following goal (prerequisites in brackets): Chop Tree (needs gloves, needs axe). Now, assuming I am doing a backward search from the goal, as I understand it, I would start considering actions that have effects that directly correspond to the prerequisites of the node (this is where I think I'm going wrong). The problem with that is, the first 2 actions to consider are Make Gloves and Make Axe. However, for this goal to be satisfied they both need to be done. If I only build my graph by linking effects to prerequisites, I only consider each of those actions once, and only choose one. I.e. if I arbitrarily choose Make Axe, that leads me to Get Iron, and I don't have anything making me reconsider Make Gloves, as Get Iron has no prerequisites. For the actions and goal above there are many routes to solving it; below I have highlighted one to illustrate my point. As you can see, in this case I would need Get Iron to be done after Make Gloves even though their effects/prerequisites don't match in any way. Can someone tell me where I'm going wrong in my thinking?
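The usual fix is that each search node is not a single action but the whole set of still-unsatisfied conditions plus the plan built so far; applying an action backwards removes the conditions it provides and adds its own prerequisites. A small breadth-first sketch in Python (the action table mirrors the example above; BFS is used instead of A* just to keep it short):

    from collections import deque

    # action -> (preconditions, effects), mirroring the example in the question
    ACTIONS = {
        'Get Material': (set(),             {'material'}),
        'Make Gloves':  ({'material'},      {'gloves'}),
        'Get Iron':     (set(),             {'iron'}),
        'Make Axe':     ({'iron'},          {'axe'}),
        'Chop Tree':    ({'gloves', 'axe'}, {'tree chopped'}),
    }

    def plan_backwards(goal_conditions):
        """Each node is (frozenset of unsatisfied conditions, plan so far, stored back to front)."""
        start = (frozenset(goal_conditions), ())
        queue, seen = deque([start]), {start[0]}
        while queue:
            open_conds, plan = queue.popleft()
            if not open_conds:
                return list(plan)                      # nothing left to satisfy
            for name, (pre, eff) in ACTIONS.items():
                if eff & open_conds:                   # action provides something we still need
                    new_open = frozenset((open_conds - eff) | pre)
                    if new_open not in seen:
                        seen.add(new_open)
                        queue.append((new_open, (name,) + plan))
        return None

    print(plan_backwards({'tree chopped'}))
    # ['Get Iron', 'Make Axe', 'Get Material', 'Make Gloves', 'Chop Tree'] (one valid ordering)

Because the node keeps every open condition, choosing Make Axe does not discard the need for gloves; that requirement simply stays in the set until some later action satisfies it.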
30
What are the most common AI systems implemented in Tower Defense Games I'm currently in the middle of researching the various types of AI techniques used in tower defense type games. If someone could help me understand the different types of techniques and their associated advantages, that would be great. Using Google I already found several techniques: random map traversal; pathfinding, e.g. cost based traversal algorithms, i.e. A*. I have already found a great answer to this type of question with the link below, but I feel that that answer is tailored to FPS games. If anyone could add to this and make it specific to tower defense games then I would be truly grateful. How is AI most commonly implemented in popular games? Examples of such games would be Radiant Defense, Plants vs. Zombies (not truly intelligent, but there must be an AI system used, right?) and Fieldrunners. Edit: After further research I found an interesting book that may be useful: http://www.amazon.com/dp/0123747317/?tag=stackoverfl08-20
30
approach for the system that records player actions and imitates them I am trying to implement a system that would allow the AI to imitate the player's actions at certain points, in similar cases. Case example: the player (hp=10, mp=5) used the skill BasicAttack when in battle. When I set parameters like hp=10, mp=5 and state=attack for a bot, it should use the BasicAttack skill, because in the same case the player used that action. (I'd also like the ability to define similar cases, e.g. when hp=11, mp=6 the bot uses BasicAttack too, because it is similar to the hp=10, mp=5 case.) The question is: what are the basic approaches for such cases? Leaving aside snapshotting of the player's state, how can such a system be implemented? So far I have two ideas: 1. order player parameters and states by priority and create a tree where every node corresponds to a parameter value. For example, a node with name hp and value 10 will have a child node mp with value 5; thus I can create the required paths to actions. 2. I keep an event log with recorded player events, kept in order. I iterate over it and, using pattern matching, I compare player parameters and retrieve the corresponding player action. The first approach seems much faster, though. But I'd like to know if approaches for such systems already exist, in order not to reinvent the wheel as I used to do. Are there any?
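What is being described is essentially case-based reasoning: store (state, action) pairs and, at decision time, pick the action of the most similar stored case. A tiny weighted nearest-neighbour sketch in Python (the feature names and weights are assumptions):

    # Recorded cases: (state dict, action the player took)
    cases = [
        ({'hp': 10, 'mp': 5, 'in_battle': 1}, 'BasicAttack'),
        ({'hp': 2,  'mp': 8, 'in_battle': 1}, 'Heal'),
    ]

    WEIGHTS = {'hp': 1.0, 'mp': 1.0, 'in_battle': 10.0}   # how much each feature matters (assumption)

    def distance(a, b):
        return sum(w * (a[k] - b[k]) ** 2 for k, w in WEIGHTS.items())

    def choose_action(state):
        """Return the action recorded for the most similar past state."""
        _, best_action = min(cases, key=lambda c: distance(state, c[0]))
        return best_action

    print(choose_action({'hp': 11, 'mp': 6, 'in_battle': 1}))   # 'BasicAttack'

The hand-built parameter tree in idea 1 is effectively a faster index over the same data; the distance function above is the part that defines what "similar case" means.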
30
Single or Multiple Behavior Trees? I just finished coding a generic behavior tree structure for my games. My question is: when creating behaviors for enemy AIs, do I create one large behavior tree with every possible configuration as a node, or do I create multiple behavior trees and simply swap them in and out as I need them? To better understand my question, here are some examples. One large behavior tree might look something like this. Multiple smaller behavior trees might look like this. In the second case I would simply swap out different trees depending on when I need them. This is more apparent for things like evading and backing away from obstacles; I feel like that would need to be done alongside many other behaviors like chasing and evading. I hope that made sense.
30
How do client server cooperation based games like Diablo 3 work? Diablo 3 talks to Blizzard servers even during single player games. In fact, Blizzard has had problems with the game "melting their servers." I would like to ask: How do the client and the server communicate? What details does the client leave to the server, and vice versa? What details are redundant (both the client and the server know them), and how often do they disagree? The previous paragraph contains the important questions, but I have a few more for which I must explain my motivation. I am interested in the programming of botting. Ethical botting: I don't plan on actually abusing the automation to run 24/7. I just find it to be a great programming challenge to glean information from a game and then make decisions from that information. I am stuck in the starting gate. The unofficial questions from this post would be: How can I make a bot (language, tools, libraries)? Can I get information through the communication between client and server, rather than the brute force pixel detection easily used in more static games? There probably is a trust issue, and to that all I can say is that I promise not to abuse the answers. But please feel free to answer any of the questions you feel comfortable with. Thank you!
30
Best way of approach? Mob moving around a section of tiles Think the beginning stages of Metroid, where there would be a turtle-like mob that would move around a section of tiles, normally 1-3 in a line. What would be the best approach to accomplish this sort of AI? It seems so simple but it's giving me a headache. The solutions I've found have been to give it a fixed distance to move and change direction once the distance has been met. This isn't very OOP, though. Another would be to have a rectangle made of the section of tiles, and when the bounds of the mob are no longer contained by the rectangle, change direction. This is proving to be harder to implement and only works for rectangular sections, not for abstract "tetris"-like sections of tiles.
30
Space Invaders type game Keeping the enemies aligned with each other as they turn around? OK, so here's the lowdown of the problem I'm trying to solve. I'm developing a game in PyGame that's a cross between Space Invaders and Columns. I'm trying to make the motion of the enemies similar to that of the aliens in Space Invaders; that is, they're all clustered in a grid, and if even one hits the side of the screen, the entire formation moves down and turns around. However, the motion of these aliens is continuous (as continuous as a monitor can be, anyway), not on a discrete grid like in the original. The enemies are instances of an Enemy class, and in turn they're held by a 2D array in an enemysquadron module (which, if you don't use Python, is in this case essentially a singleton due to the way Python modules work). Inside the Enemy class I have a class-scope velocity vector that is reversed every time an Enemy object touches the edge of the screen. This won't do, though, because as time goes on the enemies just become disorganized and jumbled (i.e. not in a grid as planned). I haven't implemented the enemies going downward yet, so let's not worry about that right now. Any tips?
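The usual fix is to keep one shared direction for the whole squadron and flip it at most once per frame, after checking every enemy, instead of letting each enemy reverse the shared vector the moment it touches an edge. A sketch in Python (the Squadron structure and screen width are assumptions based on the description above):

    SCREEN_W = 800          # assumption
    STEP_X = 2              # horizontal speed, pixels per frame

    class Enemy:
        def __init__(self, x, y):
            self.x, self.y = x, y

    class Squadron:
        """Owns the shared direction so the whole grid moves (and flips) together."""
        def __init__(self, enemies):
            self.enemies = enemies
            self.direction = 1                      # +1 = right, -1 = left

        def update(self):
            # Move everyone with the *current* direction first...
            for e in self.enemies:
                e.x += STEP_X * self.direction
            # ...then decide once whether the formation as a whole must turn around.
            if any(e.x <= 0 or e.x >= SCREEN_W for e in self.enemies):
                self.direction *= -1                # (and step the whole formation down here)

    squad = Squadron([Enemy(10 + 40 * i, 50) for i in range(5)])
    for _ in range(3):
        squad.update()

Because every enemy moves by exactly the same amount each frame, the relative spacing of the grid is preserved no matter how many turnarounds happen.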
30
How can AI compute movement in 2D games I know how to create a basic AI for a game where the AI simply creates units that only march forward. The AI only decides when to create a new unit, taking into account a number of considerations. But now I'm working on a game where the user controls a spaceship that can move in any of 8 directions (up, down, left, right, and any of the 4 diagonals). The user is battling another spaceship that moves in the same way. The spaceships can shoot missiles. The missiles fly in the direction the shooting spaceship is facing. I need to program the AI, but have no idea how to make such an AI. Could you give me some general direction on how to make an AI that computes its movements on the screen? I have a very general idea; I would also like to know if this is a decent direction: the AI ship always tries to get inside a specific radius from the user's ship. Once it does, it moves up, down, left, or right to a point where it will be able to hit its opponent. Then it shoots a missile in the direction of the user's ship. Is this a decent direction? Any ideas or places where I can get started learning these things? My main concern is how the AI will compute its movements. EDIT: Both the enemy ship and the player's ship can face any direction. Thanks
30
Monster's AI in an Action RPG I'm developing an action RPG with some university colleagues. We've gotten to the monsters' AI design and we would like to implement a sort of "utility based AI", so we have a "thinker" that assigns a numeric value to each of the monster's possible decisions; we choose the highest (or the most appropriate, depending on the monster's IQ) and put it in the monster's collection of decisions (like a goal driven design pattern). One solution we found is to write a mathematical formula for each decision, with all the important parameters for evaluation (so for a spell decision we might have mp, distance from player, player's hp etc.). This formula also has coefficients representing some of the monster's behaviour (this way we can alter formulas by changing coefficients). I've also read how "fuzzy logic" works; I was fascinated by it and by the many ways it can be extended. I was wondering how we could use this technique to make our AI simpler, as in creating evaluations with fuzzy rules such as IF player far AND mp high AND hp high THEN very Desirable (for a spell having a high casting time and a high mp cost) and then defuzzifying it. This way it's also simple to create a monster behaviour by creating ad hoc rules for every monster IQ category. But is it correct to use fuzzy logic in a game with many parameters, like an RPG? Is there a way of merging these two techniques? Are there better AI design techniques for evaluating a monster's choices?
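A minimal sketch of the weighted-formula version of the "thinker" described above, in Python; the considerations, coefficients and the 0-1 normalisations are all illustrative assumptions.

    def score_cast_spell(mp, hp, dist, max_mp, max_hp, max_dist, w_mp=0.4, w_hp=0.3, w_dist=0.3):
        """Utility in [0, 1] for a long, expensive spell: better when far, healthy and full of mana."""
        return (w_mp * (mp / max_mp) +
                w_hp * (hp / max_hp) +
                w_dist * min(dist / max_dist, 1.0))

    def score_melee(dist, max_dist):
        """Utility for a melee attack: better the closer the player is."""
        return 1.0 - min(dist / max_dist, 1.0)

    def think(state):
        """The 'thinker': score every decision and pick the best one."""
        options = {
            'cast_spell': score_cast_spell(state['mp'], state['hp'], state['dist'], 100, 100, 50),
            'melee':      score_melee(state['dist'], 50),
        }
        return max(options, key=options.get)

    print(think({'mp': 90, 'hp': 80, 'dist': 40}))   # 'cast_spell'

A fuzzy rule like the IF/AND/THEN example above can be seen as another way of producing exactly this kind of 0-1 desirability score, which is why the two techniques merge fairly naturally.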
30
What behaviors should go into making a "non perfect" AI combatant? When making an NPC combatant, it's easy and obvious what to do to get a robotic death machine by optimizing combat tactics, timing and attack types, but harder (and more interesting in a fight) to get an idiosyncratic, unpredictable enemy. What behaviors (algorithms?) are useful for creating a more organic, unconventional enemy? Edit: My specific use case is with MMO like enemies, e.g. World of Warcraft, although with less emphasis on graphics. Note that this means both human and inhuman enemies (animals, monsters, etc.)
30
Collaborative Diffusion vs. A* for loose armies combat any clear winner? Collaborative Diffusion (CD) takes a lot of the work that A* does and combines (writes) it cheaply for multiple agents to read cheaply. This is because the majority of CD's processing works via a simple cellular-automaton diffusion approach that produces a single shared map for every agent to use on a given game update. Agents then perform hill climbing within that space, which is also very cheap. The primary downside is that the data structure created by CD must apply to all agents; that is, each agent's subjective view of the environment is identical. A*, on the other hand, needs a path calculation per agent, each frame. In spite of this, the relatively low cost associated with CD would seem to make it, on average, a far more suitable approach than A*, even when we must create unique views for agents (comments/experience on this are welcome). EDIT: Consider the following example using Collaborative Diffusion. Two armies are on a battlefield, each army emitting a uniform scent. They charge (climb the scent gradient), and as the two lines clash, each agent takes on the first, closest enemy agent on the opposing line. This happens because on each approach step for each unit, it checks whether it's yet adjacent to an enemy; if so it locks on and attacks indiscriminately. This is much freer than specific targeting, which is where A* would seem to be the better choice. Am I correct in these assumptions? Are there other downsides to CD as opposed to (insert your flavour of A*), for selecting ANY target between large groups?
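For reference, the diffusion step plus hill climbing described above is only a few lines; a sketch in Python on a small grid, where target cells emit scent and walls stay at zero (the grid contents, emission value and diffusion coefficient are illustrative assumptions):

    EMIT = 100.0      # scent injected at target cells each update (assumption)
    DIFFUSE = 0.25    # fraction of the neighbour average that a cell keeps (assumption)

    def diffuse(scent, targets, walls):
        """One collaborative-diffusion update over a 2D list `scent`."""
        h, w = len(scent), len(scent[0])
        new = [[0.0] * w for _ in range(h)]
        for r in range(h):
            for c in range(w):
                if (r, c) in walls:
                    continue                                # walls absorb scent
                if (r, c) in targets:
                    new[r][c] = EMIT
                    continue
                nbrs = [scent[r + dr][c + dc]
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < h and 0 <= c + dc < w]
                new[r][c] = DIFFUSE * sum(nbrs) / len(nbrs)
        return new

    def hill_climb(scent, r, c):
        """Each agent simply steps to the neighbouring cell with the highest scent."""
        h, w = len(scent), len(scent[0])
        moves = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0))
                 if 0 <= r + dr < h and 0 <= c + dc < w]
        return max(moves, key=lambda rc: scent[rc[0]][rc[1]])

    scent = [[0.0] * 8 for _ in range(8)]
    for _ in range(30):
        scent = diffuse(scent, targets={(0, 7)}, walls={(3, 3), (3, 4)})
    print(hill_climb(scent, 7, 0))    # the agent at the opposite corner steps toward the target

The per-frame cost is one grid sweep shared by every agent plus a constant-time lookup per agent, which is where the claimed advantage over per-agent A* comes from.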
30
Behavior trees Can sequences and selectors contain conditions? I can't wrap my head around this. Is it legal for parent nodes to contain additional logic ?
30
How does an AI determine the bearing to follow within a nav mesh? I've done some reading on nav meshes, and I understand how to generate a path of polygons to reach a goal. However, what I don't understand is how you determine the bearing to follow within each polygon. Without a central node to aim for, what do you aim for? I suppose you could cast a ray to the goal and then head to the point where that ray crosses into the next cell, but that would only work if the next cell is actually on your path. If your ray doesn't cross the edge into the next cell, do you instead plot a path to whichever corner of the edge is closest to the goal? I think that would get you the path shown in the third diagram, but would it work in all cases? http://udn.epicgames.com/Three/NavigationMeshReference.html
30
Interrupt on behaviour tree I'm using a custom behaviour tree library (not UDK or any other engine), so I'm wondering about the best way to cause an interrupt to a currently running node. I don't have decorators or parallel nodes in this library, so I'm looking for a different way to do it. I don't care about the specific reasons as to why the interrupt is needed. In general it just needs to tell the currently running node to stop running so the tree can be traversed again, and that traversal would find out the reason, as the main "threats" would be checked. I'm trying to think of a clean way to cause such interrupts in the tree. Generally the conditions that are already in the tree would be the reason for the interrupt (IsEnemyInRange, IsThirsty, IsHungry, etc.), but if a node is running over multiple frames these don't get checked. Any ideas, given the limitations I listed above?
30
How should I replan A*? I've got a pathfinding boss enemy that seeks the player using the A* algorithm. It's a pretty complex environment, and I'm doing it in Flash, so the search can get a bit slow when it's searching over long distances. If the player was stationary, I could just search once, but at the moment I'm searching every frame. This takes long enough that my framerate is suffering. What's the usual solution to this? Is there a way to "replan" A* without redoing the entire search? Should I just search a little less often (every half second or second) and accept that there will be a little inaccuracy in the path?
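A simple and widely used compromise is to keep following the old path and only rerun A* when the goal has actually moved to a different grid cell (or after a maximum interval); incremental replanners such as D* Lite go further, but this alone usually fixes the frame-rate problem. A sketch in Python, where the astar function, cell size and staleness limit are assumptions supplied from elsewhere:

    import time

    class Replanner:
        """Reuses the last A* path until the target changes cell or the path goes stale."""
        def __init__(self, astar, cell=32, max_age=1.0):
            self.astar, self.cell, self.max_age = astar, cell, max_age
            self.path, self.goal_cell, self.stamp = [], None, 0.0

        def get_path(self, start, goal):
            goal_cell = (int(goal[0] // self.cell), int(goal[1] // self.cell))
            stale = time.time() - self.stamp > self.max_age
            if goal_cell != self.goal_cell or stale or not self.path:
                self.path = self.astar(start, goal)        # the expensive full search
                self.goal_cell, self.stamp = goal_cell, time.time()
            return self.path

    # Usage (with a stand-in astar): the search only reruns when the player crosses a cell boundary.
    planner = Replanner(astar=lambda s, g: [s, g])
    print(planner.get_path((0, 0), (100, 100)))
    print(planner.get_path((5, 5), (101, 101)))   # same goal cell: the cached path is reused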
30
Python java framework for developing artificial intelligence for board games like Diplomacy Risk I am trying to pick a Python/Java framework for developing artificial intelligence for board games like Diplomacy or Risk, for research purposes. I am specifically looking for multiplayer, multiple round games without any dominant strategies, that require players to cooperate as well as not cooperate at certain times. The idea of the research is not to build the game itself or the AI bot, but to discover novel methods for game theory, specifically to deceive other players (bots or humans). This research is specifically based on Deceptive Artificial Intelligence. Please share any frameworks (game code with stubs for decision making) that you may know about. It would be a lot of help for my work. If you have any ideas about some other game that I could pick, please mention that as well. I have tried a few frameworks, but the lack of documentation is extremely annoying and I haven't been able to move forward in the process. Thanks and Regards!
30
Game Maker Studio Make objects avoid other instances of same object with A* pathfinding I have a game where there are multiple enemies who must chase a single player. I have pathfinding set up using the GML A* pathfinding with an mp_grid and a path. However, these enemies can walk on top of each other when seeking the player. To fix this, I also told the grid to treat enemies as obstacles with mp_grid_add_instances, but then they stop moving altogether because they see themselves as obstacles, thus trapping themselves within a bounding box. Is there a way I can add "all other enemies BUT self" with mp_grid_add_instances? Here's my grid creation code (in a CONTROLS class for initializing variables): global.zombie_ai_grid = mp_grid_create(0, 0, room_width/50, (room_height - sp_dashboard.sprite_height)/50, 50, 50); mp_grid_add_instances(global.zombie_ai_grid, obj_obstacle, false); This is my path initialization code (in the Zombie class): path = path_add(); alarm[0] = 5; This is my path creation code in Alarm 0 (the path updates every 2 seconds): mp_grid_path(global.zombie_ai_grid, path, x, y, nearest.x, nearest.y, true); path_set_kind(path, 1); path_set_precision(path, 8); path_end(); path_start(path, MAX_SPEED, 0, true); alarm[0] = room_speed*2;
30
Developing AI for Zatacka I am trying to develop a stronger AI for the popular game Zatacka. The basic aim of the game is to survive the longest. It's like the TRON game, but the characters can only turn smoothly, instead of making 90 degree turns. I am new to game AI. I have the following ideas in mind, but I don't know how they can be implemented. Collision prediction: if another player is about to come into the same area, then the player should turn and try to evade. Hole taking capability: how to detect holes, and how to distinguish holes from a normal empty area. Early detection of a dead end, and being able to turn around before it's too late. The problem with this: how do we determine a dead end? It could be a bit farther than our current position. Running a pathfinding algorithm and searching for the wall or a player's body might not be efficient enough, as this has to be done every step. Are there any standard algorithms for these problems? Can somebody throw some light on how I can approach them?
30
How do I make NPC pathfinding look believable? Is there an "academic" way to have NPCs walking randomly on a map, but with believable behaviour? The obvious scenario is an armed guard who is walking around a basement to secure it; it's quite easy to set up a "believable" path for that. What I'm looking for is, in fact, a way to simulate a crowd in a small town. How can I make their movement look like they aren't goalless robots?
30
When to use AI prediction in a Fighting Game I am making a fighting game AI that can predict the player's next move using an N gram predictor. Once I have the prediction, when do I use it? Do I wait till the player makes a move and then use the prediction? What about distance from the player? How do I make the use of my prediction look realistic?
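For context, here is a minimal N-gram predictor of the kind mentioned above, sketched in Python: it counts which move tends to follow the last N-1 moves and predicts the most frequent continuation. The window size and move names are assumptions.

    from collections import defaultdict, Counter

    class NGramPredictor:
        def __init__(self, n=3):
            self.n = n
            self.counts = defaultdict(Counter)   # (previous n-1 moves) -> Counter of next move
            self.history = []

        def observe(self, move):
            """Call every time the player performs a move."""
            if len(self.history) >= self.n - 1:
                key = tuple(self.history[-(self.n - 1):])
                self.counts[key][move] += 1
            self.history.append(move)

        def predict(self):
            """Most likely next move given the last n-1 moves, or None if unseen."""
            key = tuple(self.history[-(self.n - 1):])
            if self.counts[key]:
                return self.counts[key].most_common(1)[0][0]
            return None

    p = NGramPredictor(n=3)
    for m in ['low', 'low', 'throw', 'low', 'low', 'throw', 'low', 'low']:
        p.observe(m)
    print(p.predict())   # 'throw'

When to act on the prediction (immediately, only in counter range, or only above some confidence threshold) is exactly the design question the post is asking; the predictor itself only supplies the guess and its frequency count.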
30
What is the name of the AI algorithm used by most MMOs for monsters or mobs? Is there a standard name for the (fairly dumb) AI that most MMOs use where you 'aggro' a monster mob when you are within a certain radius of it, and the monster chases your character for a set amount of time or distance when you attempt to run away?
30
Client AI calculations vs. Server AI calculations I have been thinking about a game which would have an AI, and the AI would ideally do extensive calculations, thinking many turns in advance. I am curious if there is some way to put most of the burden of the AI calculations on the client while still preventing cheating. Obviously some processing has to be done on the server, but is there a way to find a happy middle ground?
30
Optimize algorithm finding all possible moves for a turn based game I am working on the AI for a turn based game. To illustrate my problem, these are the simplified rules of the game: the game takes place on a tiled map with obstacles (black quads) like this. The player has several tokens (like the two colored dots in the example picture). The player can move all his tokens in his turn; the tokens move in a straight line until they hit an obstacle, the border of the map or another token; each token can move two times in each turn; the player can move his tokens in any order he likes. The AI needs a list of all possible turns it could make from one game state. My first attempt was to recursively go through all tokens and move them in every possible direction and order. That works, of course, but the problem is that with just four tokens there are several million possible turns (if each token can move two times). Most of the outcomes of these turns are the same (the tokens end up in the same places). In the example above, the tokens could move like this: no matter in which order the four move actions (A, B, C, D) are made, the end positions of the tokens are the same. I am only interested in the possible end of turn situations. So I implemented a transposition table in the turn generation algorithm to cull all of the equal turns. That works, and in the end I have only several hundred possible turns with four tokens rather than several million. The problem is that the algorithm takes too much time because it has to calculate every possible turn. Does anybody have a hint how to prune the turn generation tree? Or any other idea how to calculate only the distinct possible turn outcomes? Note: in the real game the map is slightly bigger (30-40 free cells) and there are up to 6 tokens.
30
How to implement a Behavior Tree (preferably in Unity 3D) I have a state machine I want to implement as a behavior tree. I now have an understanding of how they work, but I can't seem to find a full implementation of one. I have implemented a simple one using a plug-in called Behavior Machine, but the problem I have with it is that I feel like it limits me, just as it did when I used it for making my FSM, until I implemented my own FSM; that is when I had the privilege of using my own functions to trigger my own state transitions and optimizing it so that it doesn't use a lot of resources. The other problem is that it is difficult to implement my own nodes and control which part executes (optimization). Any idea where I can find a good tutorial on how I can do this, preferably with Unity and in C# or Java? Or where I can find a good library with proper documentation, because the ones I found on GitHub or other sites are buggy or just some classes with no documentation. Your help will be greatly appreciated. So basically, what I want to do is implement my own Behavior Tree. This way I think I can have better control over how it executes, so that it can run on mobile devices smoothly.
30
Positive Evaluation Function Territory AI Help I am trying to implement some positive evaluation into my game. Let me explain. I have a series (anywhere from 5 to 25, might vary) of circles (three sizes: big, medium, small). Each player starts with a main circle which gains 1 every turn (like Risk; the circles gain at different speeds: the big one gains the fastest and the small one gains the slowest). You can use these numbers to take over other circles; if the "enemy" is on a circle it does a Risk-like fight (you fight to see who takes it over; whoever has more wins most of the time. I'm not getting into this logic as it is rather irrelevant). So this is a very basic overview of what occurs. Now, what I need some help with is how I can create an AI which will look at all available (and taken) circles and determine where it wants to move to. There would be circles close to it, circles in the enemy's area, circles available and circles taken over. It would need to determine where to move its forces next, and so on and so forth. So after some research I read a lot about positive evaluation, which takes all possible moves and determines the best one. So I am wondering if anyone could help me out or give me some direction as to where to go from here. I think what I need to do is take all the circles with any forces on them, throw them into a table/array with the coordinates of each circle and determine which would be best; then I would need to determine how many forces I want to send and where from. Help! ADDITION: Allow me to explain a bit. See the diagram at http://imm.io/4Xwy; let's assume the circles are sections of forces, and you gain a new force each "time interval". Let's say I want to take all my forces (I'm pink) to the circle directly below me and capture it; well, the AI would need to look at all available circles and pick, for example, the big green one in the top right (since big would make more troops and it's close to the "main"). I'm assuming what you said would still work? Weighted options?
30
Path tables or real time searching for AI? What is the more common practice in commercial games: path lookup tables or real time searches? I've read that in many games path lookup tables are pre-calculated and baked into each map, so to speak, and then steering behaviour is used to handle dynamic obstacles. Or is it better practice to use optimised hierarchical A* searches? I understand the pros and cons of each; I'm just curious as to what is most often used in the industry.
30
AI Agent realistic leap to a player in an MMO In the last month I have been struggling with an issue: movement synchronization of a leap of an AI agent in an MMO. I know some theory, and basic movement was not a problem with interpolation and such; the difficult part is implementing a leap move. My current setup has a server and multiple clients connecting to it. The movement of the AI agent and its behaviour is computed on the server; the AI client receives position updates and interpolates between the current position and the position received from the server. When the server decides that an AI has to attack, it sends an attack command to clients and, to make the thing more realistic, the clients stop their agents as soon as they receive the message and start an attack animation. The important part now: during this attack animation, the agent quickly (approx. 0.6 s) translates (the translation is not in the animation) toward the targeted player. The client of that player then sends the landing position to the server, which does some validation and hit checks and then resumes its behaviour computation, sending new attack commands or position updates to clients. This creates a very pleasing effect on the client of the player being attacked, but may cause weird translations (due to the error in the positions of the agents and the players) on other clients. I tried predicting the "landing position" on the server but it's always somehow too wrong, as it depends a lot on the target's direction from the agent, and small changes in both the agent's position and the target's position often lead to big errors. Having a good look and feel for the leap on the client is imperative. My questions are: is there a better way to handle this in my case? Is there a good practice to follow here that I am completely disregarding?
30
Prevent instances from overlapping gamemaker studio I have a game where multiple instances of an enemy move towards a player in their step event. There can be as many as 50 instances on screen. The issue is that the instances all end up in 1 big group as they follow the player. I would like to prevent them from getting within 2 pixels of each other. The code is as follows: if distance_to_object(player) < 160 { direction = point_direction(x, y, player.x, player.y); mp_potential_step(player.x, player.y, 2, true); } If I put a distance check of 2px in this code, the instance starts making jerky movements and spins. I would like the instances to continue following the player, but just not allow themselves to overlap.
30
AI algorithms for Strategy Game I am working on building a strategy game (animal colonies fighting each other). It is going to be a simple game targeting kids; the beta version will only support a single player playing against an AI opponent. I am going to hire algorithm programmers to build the AI algorithms of the opponent, BUT I am not sure what information I need to give the programmers in order for them to build these algorithms. I want to have different AI difficulty levels (easy, medium, hard), BUT if the AI opponent is too easy to defeat, the game is boring, while if it is too difficult to defeat, the game is too frustrating and the player will quit, so I need this issue to be addressed when building AI for the different levels. I am not sure if the AI opponent should behave like a human player. I mean, should the specification allow the AI opponent to have access to all game resources? Or should the AI behave like a human player and use scouts to collect information about resources and the enemy? I need to identify these points. I want to write a detailed specification so the algorithm programmers can do the work as expected. All I have at the moment is the colonies' design (characters, elements) and maps with objects (grass, rocks, hills, trees, etc.). So, what exactly do I need to give them in order to build the opponent AI?
30
Event Driven Behavior Tree deterministic traversal order with parallel I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event driven behavior trees, but I still have some doubts on how to implement them correctly using a scheduler. Just a quick recap. Standard Behavior Tree: each execution tick the tree is traversed from the root in depth first order; the execution order is implicitly expressed by the tree structure, so in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first. Event Driven BT: during the first traversal the nodes (tasks) are enqueued using a scheduler which is responsible for updating only the running ones every update; the first traversal implicitly produces a depth first ordered queue in the scheduler; non leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks will be enqueued in the scheduler. Without parallel nodes in the tree there will be up to 1 task running in the scheduler. Without parallel nodes, the tasks in the queue (excluding dynamic priority implementations) will always be ordered in a depth first order (is this right?). Now, some requirements I think need to be guaranteed by a correct implementation (I'm not sure though) are: the result of the traversal should be independent of which implementation strategy is used, and the traversal result must be deterministic. I'm struggling trying to guarantee both in the case of parallel nodes. Here's an example: Parallel 1 has two children, Sequence 1 (whose children are leaf A and leaf B) and leaf C. Considering a FIFO policy of the scheduler, before the leaf A node terminates the tasks in the scheduler are P1 (suspended), S1 (suspended), leaf A (running), leaf C (running). When leaf A terminates, leaf B will be scheduled (at the end of the queue), so the queue will become P1 (suspended), S1 (suspended), leaf C (running), leaf B (running). In this case leaf B will be executed after leaf C at every update, whereas with a non event driven traversal from the root node, leaf B would always be evaluated before leaf C. So I have a couple of questions: do I understand correctly how event driven BTs work? How can I guarantee that the depth first order is respected with such an implementation? Is this a common issue, or am I missing something?
30
Dealing with AI in a simple strategy game Hi, I am currently working on a small real time strategy game. The game consists in discovering and exploring planets to get knowledge and resource points. Every unit of time (UoT) in the game, those points are adjusted based on various factors. Basically, the more planets you have explored the more points are generated, yet the more planets you get the greater the expenses per UoT, so it is a balance. There are a variety of planet types: some allow you to discover planets in a greater area but produce few points, whilst others produce a lot of points. The longer the player lasts before running out of resource points, and the more planets they explore, the higher their score is. I have a decent game at the moment, but I wanted to add an AI to have other explorers competing against the player. I don't really know how to handle it. So far my thoughts are to add weights to various parameters (like the number of points provided by each planet, the cost in points to explore it, the fact that it has already been explored, etc.). Those weights would be adjusted based on a variety of conditions; say, if the player's resource points drop below x, then the weight attributed to the number of resource points provided would be greater than that of knowledge points. I was wondering if anyone had any thoughts on this; is it likely that I'm going to get myself tangled up in all of it if I do it this way? Also, one important thing to note is that the game space is randomly generated, i.e. every time a UoT passes or an explorer explores a planet there is a chance that 2 new planets are randomly generated. This means I can't really use methods based on calculating a few turns ahead to make the best decision. Finally, the game is relatively peaceful; I don't have a combat system, and the only risk of exploring a planet already explored by another explorer is that, if that explorer is nearby, he might just re-explore it as soon as you have left, meaning you have spent valuable resource points for no benefit, since all the point generation goes to the most recent explorer.
30
Card Game Suggestion I'm developing a gameboard for a 4 player card game well known in my region (like Bridge, but with 8 cards). I created a card control class, all the deck methods and all the logic for card values and score counting. But I need some links or suggestions for the implementation of a "computer player" based on a few rules to follow. Thank you all for any help.
30
How to change the speed of an NPC? I created a blueprint. I added several components to the blueprint. I made it follow a target point and it does so correctly. I'd like to make the NPC travel to the target point faster, but I do not know how to set the speed. How I Created Components of the NPC I've clicked on all components looking for a way to set their speed. I also clicked Class Settings and Class Defaults, but nothing resulted. Then I tried to find some function or command related to the speed, taking a reference from the NPC itself. I added values to transform, but it did not change the result in the game at all. Links that I clicked at the time I typed the title of the question: How do I change the speed of an object without changing path travelled? Changing speed of an object
30
Minimax for Bomberman I am developing a clone of the Bomberman game and I am experimenting with different types of AI. First I used searching through state space with A* and now I want to try a different approach with the Minimax algorithm. My problem is that every minimax article I found assumed the players alternate. But in Bomberman, every player makes some action at the same time. I think I could generate all possible states for one game tick, but with four players and 5 basic actions (4 moves and bomb placement) it gives 5^4 = 625 states at the first level of the game tree. That value will grow exponentially with every next level. Am I missing something? Are there any ways to implement it, or should I use a totally different algorithm? Thanks for any suggestions
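One common workaround (a sketch, not the only option) is the "paranoid" sequentialisation: treat each simultaneous tick as "I pick my action, then the enemies jointly pick the reply that is worst for me". Below, resolve_tick, evaluate and is_terminal are stubs standing in for the real Bomberman rules; even so, the 5 * 5^3 = 5^4 joint combinations per tick show why the search depth has to stay very small, or why you might model only the nearest opponent adversarially and treat the others as static.

from itertools import product

ACTIONS = ["up", "down", "left", "right", "bomb"]

def paranoid_value(state, my_id, enemy_ids, depth):
    # value of a simultaneous tick assuming enemies jointly pick the worst reply for us
    if depth == 0 or is_terminal(state):
        return evaluate(state, my_id)
    best = float("-inf")
    for my_act in ACTIONS:
        worst = float("inf")
        for enemy_acts in product(ACTIONS, repeat=len(enemy_ids)):
            moves = dict(zip(enemy_ids, enemy_acts))
            moves[my_id] = my_act
            child = resolve_tick(state, moves)          # apply all moves at once
            worst = min(worst, paranoid_value(child, my_id, enemy_ids, depth - 1))
            if worst <= best:                           # simple cutoff, alpha-beta style
                break
        best = max(best, worst)
    return best

# --- stubs standing in for the real game rules ---
def is_terminal(state):         return False
def evaluate(state, player):    return 0.0   # e.g. survival, enemies threatened, crates opened
def resolve_tick(state, moves): return state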
30
How important is a single player mode in a 2 player game? So say you have a 2 player game, taking Chess as an example (except it's an original game with no ready to go AI available). Let's say there's also a social aspect to the meta game, so let's say it's a Chess game on Facebook where you can challenge your friends. How important is it to have a single player mode, knowing that an AI will need to be created (I've done minimax AI for tic tac toe, but nothing too sophisticated)? Is it important enough that it should be in the initial launch of the game? Can it wait for a future iteration (knowing that being hosted on the web means the game can be updated at any time)?
30
Useful resources for beginning AI What resources are available, including both free articles/ebooks and physical books and the like, for game developers looking to begin simple AI programming and design? Note: I know of this question, but that's asking more about where to start on a specific topic; I'm asking more about resources in general.
30
Dealing with multiple prerequisites in goal orientated action planning using A* I'm currently having difficulty reasoning about how multiple prerequisites are satisfied when using the A* algorithm during planning. Assuming the following actions (with prerequisites in brackets): Get Material; Make Gloves (needs material); Get Iron; Make Axe (needs iron). And the following goal (prerequisites in brackets): Chop Tree (needs gloves, needs axe). Now, assuming I am doing a backward search from the goal, as I understand it I would start considering actions that have effects that directly correspond to the prerequisites of the node (this is where I think I'm going wrong). The problem with that is, the first 2 actions to consider are Make Gloves and Make Axe. However, for this goal to be satisfied they both need to be done. If I only build my graph by linking effects to prerequisites, I only consider each of those actions once, and only choose one. I.e. if I arbitrarily choose Make Axe, that leads me to Get Iron, and I don't have anything making me reconsider Make Gloves, as Get Iron has no prerequisites. For the actions and goal above there are many routes to solving it; below I have highlighted one to illustrate my point. As you can see, in this case I would need Get Iron to be done after Make Gloves even though their effects and prerequisites don't match in any way. Can someone tell me where I'm going wrong in my thinking?
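The usual fix (sketched below in Python; the action names are taken from the question, everything else is illustrative) is to make each search node carry the whole set of still-unsatisfied preconditions rather than a single one. Picking Make Axe then removes "axe" from that set, but "gloves" stays on the agenda, so Make Gloves is reconsidered later. Plain breadth first search is used here for clarity; A* just adds a heuristic on top of the same node representation.

from collections import deque

ACTIONS = {
    "Get Material": {"pre": set(),              "eff": {"material"}},
    "Make Gloves":  {"pre": {"material"},       "eff": {"gloves"}},
    "Get Iron":     {"pre": set(),              "eff": {"iron"}},
    "Make Axe":     {"pre": {"iron"},           "eff": {"axe"}},
    "Chop Tree":    {"pre": {"gloves", "axe"},  "eff": {"tree chopped"}},
}

def backward_plan(goal, world_state):
    # a node is the *set* of open preconditions, so 'gloves' and 'axe'
    # both stay on the agenda until some action provides them
    start = frozenset(ACTIONS[goal]["pre"] - world_state)
    frontier = deque([(start, [goal])])
    seen = {start}
    while frontier:
        open_conds, plan = frontier.popleft()
        if not open_conds:
            return list(reversed(plan))                 # plan was built goal-first
        for name, act in ACTIONS.items():
            if act["eff"] & open_conds:                 # achieves something we still need
                nxt = frozenset((open_conds - act["eff"]) | (act["pre"] - world_state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

print(backward_plan("Chop Tree", world_state=set()))
# ['Get Iron', 'Make Axe', 'Get Material', 'Make Gloves', 'Chop Tree']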
30
What are the most valuable research fields to explore nowadays concerning game AI? I'm having some trouble coming up with a subject for my master's degree thesis. I'd like to know what areas of research are valuable at this time (and in the near future) so I can narrow my search and idea creation. Thanks in advance.
30
Design pattern for AI cooperation I'd like to implement an AI for my game, which has agents that should be able to take care of themselves. The Sims use a system of smart objects that advertise their services, a design pattern that seems very nice since all the logic is hidden away in the object to be used, and it seems simple enough to get started with. I am currently stumped, though, on how to extend it to interactions between multiple people, and to people using objects together, especially when it doesn't make much sense to use an object together. (I don't want a character to set up a board game, wait while no one comes, and then leave again.)
30
Grid pathfinding with a lot of entities I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn based and tile based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding. Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in the screenshot), there can be doors that open and close and change the level's layout. Impressive. I've tried implementing pathfinding in my clone. First attempt was making every roach find a path to a yellow guy every turn, using a breadth first search algorithm. Obviously incredibly slow with more than a single roach, and it would get exponentially slower with more than a single yellow guy. Second attempt was making every yellow guy generate a pathmap (still breadth first search) every time he moved. Worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. Last attempt was implementing JPS (jump point search). Every entity would individually calculate a path to its target. Fast, but with a limited number of entities. Having fewer than half the entities in the screenshot would make the game slow. And also, I had to get the "closest" enemy by calculating distance, not shortest path. I've asked on the DROD forums how they did it, and a user replied that it was breadth first search. The game is open source, and I took a look at the source code, but it's C++ (not the language I'm using) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. And I believe that DROD generates global pathmaps, somehow, but I can't understand how every entity finds the best individual path to other entities that move every turn. What's the trick? This is a reply I just got on the DROD forums: Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room. One to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is "move towards nearest enemy/friendly"... just mark every tile with the number of moves it takes to the nearest target and have the entity choose the move that takes it to the tile with the lowest number. To be honest, I don't understand it that well.
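The reply quoted at the end describes what is sometimes called a flow field or Dijkstra map: one multi-source flood fill per faction per turn, not one search per entity. A rough Python sketch (4-directional here for brevity, while DROD allows diagonal moves; the grid and coordinate conventions are my own assumptions):

from collections import deque

def distance_map(grid, targets):
    # multi-source BFS: one flood fill from ALL targets at once
    # grid[y][x] is True for walkable tiles; targets is a list of (x, y)
    INF = float("inf")
    dist = [[INF] * len(grid[0]) for _ in grid]
    queue = deque()
    for x, y in targets:
        dist[y][x] = 0
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] and dist[ny][nx] == INF):
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

def step_towards(dist, x, y):
    # every entity just moves to the neighbouring tile with the smallest value
    best = (x, y)
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if (0 <= ny < len(dist) and 0 <= nx < len(dist[0])
                and dist[ny][nx] < dist[best[1]][best[0]]):
            best = (nx, ny)
    return best

Two such maps per room (distance to nearest enemy, distance to nearest friendly), rebuilt once per turn after everything has moved, cost roughly one BFS each regardless of how many roaches there are, which is presumably why DROD stays fast even with doors changing the layout.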
30
what can be done to improve the ultimate insane real time strategy game AI? I had this chat with a friend of mine about whether or not an AI can be created that could beat any human without resource cheating in a real time strategy game: an AI that would play almost perfectly. The AI in today's games has many areas that can be improved; most of them rely on resource cheating, a better early base development because of that, and attacking in waves. Still, the question is what would need to be done to improve on this to obtain the ultimate AI. Also, if you have any example of a game where a certain feature was used, that would be great. Edit There is little clarification I can provide for those who haven't read the title or the few paragraphs describing the problem. This is about real time strategy games and the ultimate AI. That means StarCraft, Warcraft, Generals, Red Alert, Age of Empires, AI War, etc.: games that have more than one difficulty level, but the focus here is on the ultimate challenge. tenpn has a fantastic post filled with great resources. Thank you, tenpn! I wish more people would contribute in that direction.
30
Can a Neural Network play tic tac toe? Does this make any sense? I'm thinking about the theoretical possibility of playing tic tac toe with a neural net. Does this make any sense? Let's consider tic tac toe, which has 3 rows and 3 columns (9 cells). OK, then the input vector consists of all our cells 0..8, where 0 is "O", 1 is "X" and 2 is an empty cell. But what do we have for the target vector? On every step the NN needs to know some target and make our distance to the target shorter. But the target of this game is only a win (or maybe a draw), and I don't understand how we can shorten our distance to a win. Any suggestions? My goal is to learn more about neural networks in games, which is why I'm considering such an easy game. Maybe it's not the best choice for an NN and I need to consider some other game?
30
Scripting a sophisticated RTS AI with Lua I'm planning to develop a somewhat sophisticated RTS AI (e.g. see BWAPI). I have experience programming, but none in game development, so it seems easiest to start by scripting the AI of an existing game I've played, Warhammer 40k Dawn of War (2004). As far as I can tell, the game AI is scripted with some variant of Lua (judging by the file extensions .ai and .scar). The online documentation is sparse and the community isn't active anymore. I'd like to get some idea of the difficulty of this undertaking. Is it practical with a scripting language like Lua to develop an RTS AI that includes FSMs, decision trees, case based reasoning, and transposition tables? If someone has any experience scripting Dawn of War, that would also help.
30
Complexity of defense AI I have an unreleased game, and currently it's only possible to play with another human being. As the game rules are made up by me, I think it would be great if new players could learn the basic gameplay by playing against an AI opponent. I mean, it's not like tennis, where the majority knows at least the fundamental rules. On the other hand, I'm a bit concerned that this AI implementation could be quite complex. I hope you can help me with a complexity estimate. I've tried to summarize the gameplay below. Is this defense AI very hard to do? Basic Defense Game Play The player Defender can move within his land, i.e. inside a random, non convex polygon. This land will also contain obstacles modeled as polygons, which Defender has to move around. The player Attacker also has a land, modeled as another such polygon. Assume that Defender shall defend against Attacker. Attacker will then throw a thingy towards Defender's land. To be rewarded, Attacker wants to hit Defender's land, and Defender will want to strike the thingy away from his land before it stops, to prevent Attacker from scoring. To feint Defender, Attacker might run around within his land before the throw, and based on these attacker movements Defender shall then continuously move to the best defense position within his land.
30
How to create a reasonable AI? I'm creating a logic game based on the Fox and Hounds game. The player plays the fox and the AI plays the hounds. (As far as I can see) I managed to make the AI perfect, so it never loses. Leaving it as such would not be much fun for human players. Now I have to dumb down the AI so a human can win, but I'm not sure how. The current AI logic is based on pattern matching; if I introduce random moves which take the board out of pattern space, the AI would most probably play dumb until the end of the game. Any ideas how to dumb down the AI in such a way that it does not go from "genius" to "completely dumb" in a single move?
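One approach that avoids the genius-to-dumb cliff is to keep the perfect logic as a move ranker, and then with a tunable probability play the second- or third-best continuation instead of the best one; the AI then makes plausible but exploitable mistakes rather than leaving the pattern space entirely. A sketch (Python; evaluate stands in for whatever score your pattern matcher can assign to a candidate move, which is an assumption on my part):

import random

def pick_move(legal_moves, evaluate, blunder_chance=0.25):
    # rank moves with the existing "perfect" evaluation, best first
    ranked = sorted(legal_moves, key=evaluate, reverse=True)
    if len(ranked) > 1 and random.random() < blunder_chance:
        # occasionally play a runner-up instead of the best move
        return random.choice(ranked[1:3])
    return ranked[0]

blunder_chance then doubles as a difficulty slider.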
30
How to demo Advanced Game AI as a portfolio piece? Basically every game company wants to see a portfolio that exhibits your skill set. If you're specializing in AI, though, what should you show off, and how? Some thoughts: Are nice graphics in an AI demo a must (to get past nontechnical HR that don't understand AI; think 3D versus 2D)? Demo multi featured AI or a single focused example? Fundamental skills like pathfinding, HFSMs, planning, etc. are critical AI components, but do they really impress as portfolio pieces? Single AI entity, multiple entities, or large populations: is more always better? Also, as a portfolio piece, ideally there would be an executable, and videos which should show off whatever is trying to be shown off within only a few minutes. Examples I would say any of the skills exhibited in this video would make for a great portfolio piece, but are game companies really expecting this from a single person? Autodesk Kynapse AI Sandbox is another great example, but this was made from the work of many many people. AI Sandbox Any examples of good AI portfolios would be great.
30
Better way to do AI Behavior in AS3 Flixel I'm making a game in Flixel and I need to program an NPC. It's rapidly turning more complex than I expected. I was wondering if there are any best practices, tutorials or examples that you can refer me to, to see how this is done. I can probably hack it together, which is what I always do, but it would be nice if I can make it maintainable and can add stuff later on. Here's a screenshot to give you an idea The butler will be an NPC that will follow you, or guide you, and talk to you the whole time. EDIT More specifically: What I have now is a long list of IF statements in the update loop of the butler (about 8 different cases), and all I have covered is his walking behavior. I want him to comment on things and sometimes switch his main behavior to be more aggressive or distant... Is there any way to keep track of this, or is complex code with many many nested if statements the way to go?
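The usual answer to a growing pile of nested ifs is a small state machine: each behaviour (following, guiding, commenting, ...) becomes a state object with enter/update/exit, and the update loop just delegates to the current state. A sketch in Python for brevity (the structure translates directly to one AS3 class per state); all thresholds, method names and dialogue below are invented:

class State:
    def enter(self, npc): pass
    def update(self, npc): pass
    def exit(self, npc): pass

class FollowPlayer(State):
    def update(self, npc):
        npc.move_towards(npc.player)
        if npc.distance_to(npc.player) < 20:
            npc.change_state(CommentOnRoom())

class CommentOnRoom(State):
    def enter(self, npc):
        npc.say("Ah, the drawing room...")
    def update(self, npc):
        if npc.distance_to(npc.player) > 60:
            npc.change_state(FollowPlayer())

class Butler:
    def __init__(self, player):
        self.player = player
        self.state = FollowPlayer()

    def change_state(self, new_state):
        self.state.exit(self)
        self.state = new_state
        self.state.enter(self)

    def update(self):                     # called from the Flixel update loop
        self.state.update(self)

    # stand-ins for the real movement and dialogue code
    def move_towards(self, target): pass
    def distance_to(self, target):  return 100
    def say(self, text):            print(text)

Mood (aggressive versus distant) can either be an extra field the states read, or a second, higher level state machine layered on top of this one.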
30
Pattern for performing game actions Is there a generally accepted pattern for performing various actions within a game? A way a player can perform actions, and also a way an AI might perform actions, such as move, attack, self destruct, etc. I currently have an abstract BaseAction which uses .NET generics to specify the different objects that get returned by the various actions. This is all implemented in a pattern similar to Command, where each action is responsible for itself and does all that it needs. My reasoning for being abstract is so that I may have a single ActionHandler, and the AI can just queue up different actions implementing the BaseAction. And the reason it is generic is so that the different actions can return result information relevant to the action (as different actions can have totally different outcomes in the game), along with some common beforeAction and afterAction implementations. So... is there a more accepted way of doing this, or does this sound alright?
30
Optimize algorithm finding all possible moves for a turn based game I am working on the AI for a turn based game. To illustrate my problem these are the simplified rules of the game: The game takes place on a tiled map with obstacles (black quads) like this. The player has several tokens (like the two colored dots in the example picture). The player can move all his tokens in his turn; the tokens move in a straight line until they hit an obstacle, the border of the map or another token; each token can move two times in each turn; the player can move his tokens in any order he likes. The AI needs a list of all possible turns it could make from one game state. My first attempt was to recursively go through all tokens and move them in every possible direction and order. That works of course, but the problem is that with just four tokens there are several million possible turns (if each token can move two times). Most of the outcomes of these turns are the same (the tokens end in the same place). In the example above the tokens could move like this: No matter in which order the four move actions (A,B,C,D) are made, the end positions of the tokens are the same. I am only interested in the possible end of turn situations. So I implemented a transposition table in the turn generation algorithm to eliminate all of the equal turns. That works, and in the end I have only several hundred possible turns with four tokens rather than several million. The problem is that the algorithm takes too much time because it has to calculate every possible turn. Does anybody have a hint how to prune the turn generation tree? Or any other idea how to calculate only the distinct possible turn outcomes? Note In the real game the map is slightly bigger (30 40 free cells) and there are up to 6 tokens.
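One thing that helps (sketched below; expand_token stands in for your existing straight-line move generator, and the tiny demo stub at the bottom is only there so the snippet runs) is to apply the transposition check while generating, not just on the finished turns: a state is the tuple of token positions plus each token's remaining moves, and a state that has already been expanded is never expanded again, so the many move orders that pass through the same intermediate arrangement are processed only once.

def all_outcomes(positions, moves_left, expand_token, memo=None, results=None):
    # positions: tuple of token coordinates, moves_left: remaining moves per token
    if memo is None:
        memo, results = set(), set()
    key = (positions, moves_left)
    if key in memo:
        return results
    memo.add(key)
    results.add(positions)   # drop this line if only "all moves spent" outcomes count
    for i, left in enumerate(moves_left):
        if left == 0:
            continue
        for new_pos in expand_token(positions, i):
            nxt_pos = positions[:i] + (new_pos,) + positions[i + 1:]
            nxt_left = moves_left[:i] + (left - 1,) + moves_left[i + 1:]
            all_outcomes(nxt_pos, nxt_left, expand_token, memo, results)
    return results

# tiny stand-in so the sketch runs: on an empty 3x3 board a token may stop on any free cell
def demo_expand(positions, i):
    occupied = set(positions)
    return [(x, y) for x in range(3) for y in range(3) if (x, y) not in occupied]

print(len(all_outcomes(((0, 0), (2, 2)), (2, 2), demo_expand)))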
30
How can I simulate limited AI vocabulary for a word game? I've got a small handful of competitive word games in progress, and while the preference is for (mostly asynchronous) play against other human opponents, I'd like to give players the option of playing against an AI. I have my dictionary and I can easily give the AI full dictionary knowledge while it's playing, but my concern is that having the AI regularly play words they're not familiar with will be a frustrating experience for players ('I would have won that game if it'd just used words I know!'), even if the AI's overall skill level is turned down. I'd rather create a weaker AI through a combination of (un)tuned play parameters and a weaker vocabulary, but I'm not sure how to limit that vocabulary to 'common' words. I've looked at several word frequency lists (for instance, the list of all words that appear in the Project Gutenberg books, sorted by number of occurrences) but they all have a number of false negatives: words that everyone knows that simply don't show up with any real frequency (for instance, CHEETAH shows up less frequently in the PG texts than VOCATIVE or SUTTEE). I've tried using search results to get estimates of a word's popularity, but they also tend to be prone to spurious mis estimates, and of course it's hard to get search results for an entire dictionary without running afoul of the terms of service on the search engines. Does anyone have suggestions on other good means of determining a rough frequency of word usage, or other ways of limiting word game AI that will feel natural to players?
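A simple mechanical way to get the "weaker vocabulary" half, once you do have a frequency list you trust, is to intersect the game dictionary with only the top N entries of that list and hand the AI the result; N then doubles as a difficulty knob. A sketch below (Python; the file format of one word per line, most frequent first, optionally followed by a count, is an assumption). As for the list itself, frequency lists built from film and TV subtitles (the SUBTLEX family, for example) are often said to track everyday spoken vocabulary better than classic literature corpora like Project Gutenberg, which may help with the CHEETAH versus VOCATIVE problem.

def build_ai_vocabulary(game_words, frequency_file, max_rank=20000):
    # keep only game-legal words that appear in the top max_rank of the frequency list
    common = set()
    with open(frequency_file, encoding="utf-8") as f:
        for rank, line in enumerate(f):
            if rank >= max_rank:
                break
            if line.strip():
                common.add(line.split()[0].upper())
    return [w for w in game_words if w.upper() in common]

# e.g. easy_ai_words = build_ai_vocabulary(dictionary, "frequencies.txt", max_rank=8000)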
30
PID controller error value for heading correction I'm using a PID controller in my AI to steer my NPCs to a desired heading (by adding torque). I've adapted the code from here http answers.unity3d.com questions 199055 addtorque to rotate rigidbody to look at a point.html The above example uses the cross product of the two headings as the error value for the PID controller. This works great if the angle between the desired and current headings is less than 90 degrees, but if the angle is greater than that, the PID controller corrects in the opposite direction to the one I want (as it tries to correct to zero)! What is a good error value between two vectors to pass into a PID controller that works no matter what the angle between the desired and current heading is? Note: because the PID controller returns the correction as a vec3, the error value must also be a vec3.
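One error value that keeps working past 90 degrees (a sketch with NumPy; it assumes, like the linked Unity answer, that the PID output is applied as a torque about the error vector's axis) is the axis-angle form: direction from the cross product, magnitude from atan2 of the cross's length and the dot product, which grows monotonically from 0 to pi instead of shrinking again after 90 degrees.

import numpy as np

def heading_error(current, desired):
    # axis-angle error: direction = rotation axis, length = angle in radians (0..pi)
    c = np.cross(current, desired)
    c_len = np.linalg.norm(c)
    angle = np.arctan2(c_len, np.dot(current, desired))
    if c_len < 1e-6:
        if np.dot(current, desired) > 0:
            return np.zeros(3)                  # already aligned
        # 180 degree case: any axis perpendicular to the heading will do
        axis = np.cross(current, np.array([0.0, 1.0, 0.0]))
        if np.linalg.norm(axis) < 1e-6:
            axis = np.array([1.0, 0.0, 0.0])
        return axis / np.linalg.norm(axis) * np.pi
    return c / c_len * angle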
30
What AI for a resource management combat game? I have a very simple game: 2 players (one human and one computer), and both have 1000 tokens. They must each build an army. Depending on the type of soldier, it costs from 10 to 100 tokens to train one. Eventually I want them to attack each other. What would be the AI to use in that case? Is there any example around that I could learn from? Thanks!
30
Is there a design flaw when an entity's state doesn't fully utilize enter(), execute() and exit()? I'm following Mat Buckland's Programming Game AI by Example, and I find that I don't always have use for enter(), execute() and exit() on an entity's state. For example, in an RPG, a weapon may have an equipped state, and I may use enter() and exit() for that state to add or subtract to a player's ability modifier, but there isn't really a need for execute(). Is this a design flaw in my engine, or a shortcoming of this approach?
30
Method to make character roam map isn't working I'm an absolute beginner in Unreal Engine, so sorry for any strange mistakes, formatting, etc. I'm trying to make a game for a project where an enemy chases the player through a maze when it spots the player, but when there's no player it roams around the maze. I have the following blueprint Every time I run it, it has varying outcomes. Sometimes the enemy will move around a few times, then randomly stop. It won't return a fail message or move. Other times, it will send a single fail message as soon as the program is run, then not return anything else. And other times, it won't send anything at all or move and just stand there. It'll also sometimes simply turn 95 degrees to the left then stop. There's also the issue that the enemy takes the same path every trial, even though it's supposed to choose a random point and move to it. The enemy easily chases the player through the maze and won't get stuck anywhere, but for some reason the enemy stops moving when it's executing the roam method. I tried adjusting the radius in the GetRandomPointInNavigableRadius part, but that didn't affect anything. I also tried making the character smaller in case it was getting stuck on certain turns, but that also didn't affect it. Removing the delay made it work a few times, but then it went back to not working after a few trials. Here's the rest of the code for reference and a picture of the maze and the enemy's width relative to the maze path Thank you for any help!
30
Is there a turn based strategy game that allows building the AI from scratch? I'm very interested in programming artificial intelligence for a turn based game as a project. However, I'm looking for a (somewhat modern) game and can't really find one I'm interested in. In particular, I'm looking at the Civilization franchise. I'd love to be able to program the AI of Civilization V, for instance. However, as of now, you can only slightly modify the settings of the AI http forums.civfanatics.com threads artificial unintelligence.536193 . Is there any game, released at least within the last 10 years, that allows programming the AI from (nearly) ground level?
30
AI for a mixed Turn Based Real Time battle system: is something "Gambit like" the right approach? This is maybe a question that's been asked 100 times in 1,000 different ways. I apologize for that. I'm in the process of building the AI for a game I'm working on. The game is a turn based one, in the vein of Final Fantasy, but also has a set of things that happen in real time (reactions). I've experimented with FSMs, HFSMs, and behavior trees. None of them felt "right" to me, and all felt either too limiting or too generic/big. The idea I'm toying with now is something like a "rules engine" that could be likened to the Gambit system from Final Fantasy 12. I would have a set of predefined personalities. Each of these personalities would have a set of conditions it would check on each event (turn start, time to react, etc). These conditions would be priority ordered, and the first one that returns true would be the action I take. These conditions can also point to a "choice" action, which is just an action that will make a choice based on some utility function. Sort of a mix of an FSM/HFSM and a utility function approach. So, a "gambit" with the personality of "Healer" may look something like this: (ON) Ally HP = 0 > Choose "Relife" spell; (ON) Ally HP < 50 > Choose Heal spell; (ON) Self HP < 65 > Choose Heal spell; (ON) Ally Debuff > Choose Debuff Removal spell; (ON) Ally Lost Buff > Choose Buff spell. Likewise, a "gambit" with the personality of "Aggressor" may look like this: (ON) Foe HP < 10 > Choose Attack skill; (ON) Foe any > Choose target > Choose Attack skill; (ON) Self Lost Buff > Choose Buff spell; (ON) Foe HP = 0 > Taunt the player. What I like about this approach is that it makes sense in my head. It also would be extremely easy to build an "AI Editor" with an approach like this. What I'm worried about is... would it be too limiting? Would it maybe get too complicated? Does anyone have any experience with AI in turn based games that could maybe provide me some insight into this approach... or suggest a different approach? Many thanks in advance!
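For what it's worth, the data structure behind such a system can be tiny, which is part of why gambit-style AI is attractive: a prioritized list of (condition, action) pairs evaluated top to bottom, first hit wins. A Python sketch (attribute names like hp, max_hp and cast are placeholders for whatever the battle system actually exposes):

def first(items, pred):
    for item in items:
        if pred(item):
            return item
    return None

class Rule:
    def __init__(self, condition, action):
        self.condition = condition      # callable(actor, world) -> target or None
        self.action = action            # callable(actor, target)

def run_gambits(actor, world, rules):
    # evaluate rules in priority order; the first condition that yields a target fires
    for rule in rules:
        target = rule.condition(actor, world)
        if target is not None:
            rule.action(actor, target)
            return True
    return False                        # no rule fired; fall back to a default action

healer_rules = [
    Rule(lambda a, w: first(w.allies, lambda ally: ally.hp == 0),
         lambda a, t: a.cast("Relife", t)),
    Rule(lambda a, w: first(w.allies, lambda ally: ally.hp < 0.5 * ally.max_hp),
         lambda a, t: a.cast("Heal", t)),
    Rule(lambda a, w: a if a.hp < 0.65 * a.max_hp else None,
         lambda a, t: a.cast("Heal", t)),
]

The same list can be evaluated from both the turn-start and reaction events, and an "AI Editor" then becomes little more than a UI over an ordered list.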
30
Good way to handle offscreen AI? For example's sake: let's say there are 10 rooms in the world, and let's say the world is inhabited by 10 entities. Each entity has its own "daily routine" where it performs certain actions in a room and may also navigate between the rooms. Given that the player can only be in one room at a time, what is a good way to keep track of the actions the other entities are performing in other rooms, offscreen? The most straightforward option is to check on each of the 10 entities every frame, check their position and state, and determine whether or not the entity should be in the room where the player is located at any given time. (This however feels really resource heavy, especially as the room and entity counts increase.) Another option is to keep track of the time that has passed since the start of the game; each entity then checks whether its pattern intersects with the room the player is in, and if it does, it checks against the time whether or not it is supposed to be in the same room at that particular moment. Entities whose patterns do not intersect with the current room the player is located in do nothing until the player enters a room which their pattern intersects, and only at that point calculate whether or not they should render. (But if they interact with the rooms, then they will always have to check the state of the rooms which intersect their route in order to determine their location at that point in time, which is not that great.) The third option I came to would be to first of all only look at the routes which intersect the player's location (as described previously), and secondly, upon entering a room, check if the player is in that room; if not, then only check the state of the room and how long it will take to proceed to the next room. For example a janitor NPC enters the room, checks the state of the room, sees that there is a spillage made by the player, calculates how much time it will take to clean that up, how long the pathing will take, etc. And until the time to enter the next room is due, we only check if the player is in the room. The exact location of the NPC, for rendering purposes, would only be calculated when the player enters the room. After brainstorming a while I came to the third option, but I was wondering if perhaps there is a known or better way to handle things like these?
30
Snake AI Is a Hamiltonian approach valid for all grid sizes? So, as has been done many times before, I am designing an AI that can play Snake as effectively as possible. It didn't take me long to find this extremely useful thread here How to find a safe path for an AI snake? where the top answer first and foremost recommends forming a Hamiltonian circuit for the grid and begin by just having the Snake follow this route. However, after attempting this, I realised it didn't work with my initial grid size (23x23), at least I don't think it does. My understanding may be incorrect, but from what I gather, with m rows and n columns, if mn is odd, then there is no Hamiltonian circuit possible. If this is the case, then should I abandon this method? Or is there any way of implementing it in some case?
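The parity reasoning holds: colour the grid like a checkerboard, and since a Hamiltonian cycle must alternate colours it needs equally many cells of each colour, which is impossible when rows*cols is odd, so a 23x23 grid indeed has no such cycle. A common workaround is to use a grid with at least one even dimension (or to accept a near-Hamiltonian path that skips one cell). For reference, a sketch of the standard serpentine construction, which works whenever the number of rows is even (swap the roles of rows and columns if only the column count is even):

def hamiltonian_cycle(rows, cols):
    # serpentine cycle; needs an even number of rows and at least 2 columns
    assert rows % 2 == 0 and cols >= 2
    path = [(0, c) for c in range(cols)]                    # across the top row
    for r in range(1, rows):
        cs = range(cols - 1, 0, -1) if r % 2 == 1 else range(1, cols)
        path.extend((r, c) for c in cs)                     # snake through columns 1..cols-1
    path.extend((r, 0) for r in range(rows - 1, 0, -1))     # back up column 0 to the start
    return path                                             # path[-1] is adjacent to path[0]

cycle = hamiltonian_cycle(24, 23)
assert len(cycle) == len(set(cycle)) == 24 * 23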
30
Doing a passable 4X game AI I am coding a rather "simple" 4X game (if a 4X game can be simple). It's indie in scope, and I am wondering if there's any way to come up with a passable AI without spending months coding it. The game has three major decision making portions: spending of production points, spending of movement points and spending of tech points (basically there are 3 different 'currencies'; currency unspent at end of turn is not saved). Spend Production Points: upgrade a planet (increase its tech and production) or build ships (3 types). Move ships from planet to planet (costing Movement Points): move to attack or move to fortify. Research Tech (a tech can be partially researched, as in Master of Orion). The plan for me right now is a brute force approach. There are basically 4 broad options for the player: upgrade planet(s) to increase their production and tech output; conquer as many planets as possible; secure as many planets as possible; get to a certain tech as soon as possible. For each decision, I will iterate through the possible options and come up with a score, and then the AI will choose the decision with the highest score. Right now I have no idea how to 'mix decisions', that is, for example, when the AI wishes to upgrade and conquer planets at the same time. I suppose I can have another layer of logic which does a brute force optimization on a combination of those 4 decisions... At least, that's my plan if I can't think of anything better. Is there any faster way to make a passable AI? I don't need a very good one, to rival Deep Blue or such, just something that has the illusion of intelligence. This is my first time doing an AI on this scale, so I dare not try something too grand either. So far I have experience with FSMs, DFS, BFS and A*.
30
Unit turning in navmesh based pathfinding I'm working on an RTS game, and I'm using navmeshes for unit pathfinding. I do know how to find a general path within a navmesh, but how do you determine whether a unit has enough space to turn? I have units of different shapes (mostly rectangles with different dimensions) and with different turn radii. Additionally, some units can turn in place, and some can move in reverse. So, how do I find a path which a unit can follow, considering that it cannot rotate easily?
30
How does pathfinding in RTS games work? (crossposted from Stack Overflow) In a game such as Warcraft 3 or Age of Empires, the ways that an AI opponent can move about the map seem almost limitless. The maps are huge and the positions of other players are constantly changing. How does the AI pathfinding in games like these work? Standard graph search methods (such as DFS, BFS or A*) seem impossible in such a setup.
30
How to ensure a condition in a behaviour tree when processing following nodes? Example tree (Source) As far as I understand, a sequencer iterates over its children until one fails or all succeed. If one child returns "running", the sequencer will resume processing from that child on the next tick. Let's say "Do I have food?" takes longer than one tick because the AI has to walk to the fridge. Once "Am I hungry?" was successful, it won't be processed anymore. Now, while walking to the fridge, the hunger magically disappears. How do I prevent the tree from processing the other nodes even though I am not hungry anymore? Should every following node check the condition again? That doesn't seem to fit the idea of a behaviour tree. How do I implement a condition that has to stay true while the following nodes are processed?
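One widely used answer (sketched below; the status strings and the abort hook are my own conventions) is a "reactive" or "dynamic" sequence: while a later child is still running, the sequence re-ticks its earlier children every update, so the moment "Am I hungry?" fails, anything still running to its right is aborted.

class ReactiveSequence:
    # re-runs earlier children every tick while a later child is still running
    def __init__(self, children):
        self.children = children

    def tick(self, blackboard):
        for i, child in enumerate(self.children):
            status = child.tick(blackboard)
            if status == "failure":
                for later in self.children[i + 1:]:
                    later.abort(blackboard)     # stop anything still running
                return "failure"
            if status == "running":
                return "running"
        return "success"

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, bb):
        return "success" if self.predicate(bb) else "failure"
    def abort(self, bb):
        pass

class WalkToFridge:
    def tick(self, bb):
        bb["steps"] = bb.get("steps", 0) + 1
        return "running" if bb["steps"] < 3 else "success"
    def abort(self, bb):
        bb["steps"] = 0

tree = ReactiveSequence([Condition(lambda bb: bb["hungry"]), WalkToFridge()])
bb = {"hungry": True}
print(tree.tick(bb))        # running
bb["hungry"] = False        # the hunger magically disappears mid-walk
print(tree.tick(bb))        # failure, and the walk is aborted

Event-driven trees often get the same effect with a guard or monitor decorator that observes the condition and aborts the running subtree when it changes, rather than re-ticking the condition every frame.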
30
When to use AI prediction in a Fighting Game I am making a fighting game AI that can predict the player's next move using an N gram predictor. Once I have the prediction, when do I use it? Do I wait till the player makes a move and then use the prediction? What about distance from the player? How do I make the use of my prediction look realistic?
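For reference, the predictor itself can be tiny; the sketch below (Python, move names invented) keeps counts of what followed each window of the last n-1 moves and reports a confidence alongside the prediction. A common way to use it, rather than a fixed rule, is to act only when that confidence and the current spacing make the counter worth attempting, and to commit during the startup frames of the predicted move so the reaction looks human rather than pre-cognitive.

from collections import Counter, defaultdict, deque

class NGramPredictor:
    # predicts the player's next move from the last (n - 1) observed moves
    def __init__(self, n=3):
        self.n = n
        self.history = deque(maxlen=n - 1)
        self.counts = defaultdict(Counter)

    def observe(self, move):
        if len(self.history) == self.n - 1:
            self.counts[tuple(self.history)][move] += 1
        self.history.append(move)

    def predict(self):
        # returns (most likely next move, confidence in 0..1), or (None, 0.0)
        key = tuple(self.history)
        if len(key) < self.n - 1 or not self.counts[key]:
            return None, 0.0
        move, count = self.counts[key].most_common(1)[0]
        return move, count / sum(self.counts[key].values())

predictor = NGramPredictor(n=3)
for m in ["low_kick", "low_kick", "throw", "low_kick", "low_kick"]:
    predictor.observe(m)
print(predictor.predict())   # ('throw', 1.0): the last two observed moves were low_kick, low_kick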