Log

7.4 – Layered Influence Map Architecture

7.4.0 – The idea

When I was at GDC last week, I was impressed by Dave Mark’s talk on using a layered Influence Map (IMap) to represent spatial knowledge in large-scale worlds. By querying the map instead of the environment, agents can position themselves effectively and efficiently through simple spatial reasoning. So I wanted to try it out.

7.4.1 – The design

IMaps have been around for a while. They are said to be used mainly in RTS games, which involve large-scale worlds and group behaviors. An IMap is first of all a form of spatial knowledge representation, which can then be used for spatial reasoning and behaviors. One common form of an IMap is a grid laid on top of the game world, which each actor registers itself to and queries information from. In this sense, an IMap has three main considerations:
1. define the “influence” values
2. define how the influence values are propagated throughout the map
3. define the way of querying this map

When it comes to including a richer set of actors, there is the Layered IMap, which is a collection of IMaps. The basic idea is, instead of putting all the influence values in one single map, to use separate IMaps for different types of actors or influences. Each actor is then responsible for combining the information that is relevant to it–and only relevant to it–from multiple IMaps to make sense of the world. Each actor maintains its own Interest Map, which applies a falloff function to a working copy of the combined IMaps.

With this information, actors are then able to find the best location through simple spatial reasoning.
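To make the idea concrete, here is a minimal sketch of the layered scheme, with names and the falloff function of my own choosing (not from Dave Mark's talk): each layer is a flat grid of floats, an actor stamps influence that decays linearly with distance, and an agent's interest map is a weighted sum of the layers it cares about (negative weights for "avoid", positive for "seek").

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

// One influence layer: a W x H grid of float influence values.
struct IMap
{
    int w, h;
    std::vector<float> cells;
    IMap(int _w, int _h) : w(_w), h(_h), cells(_w * _h, 0.f) {}

    float& at(int x, int y) { return cells[y * w + x]; }
    float  at(int x, int y) const { return cells[y * w + x]; }

    // Register an actor: stamp influence that falls off linearly with
    // (Chebyshev) distance from the actor's cell, out to 'radius'.
    void addInfluence(int cx, int cy, float strength, int radius)
    {
        for (int y = std::max(0, cy - radius); y <= std::min(h - 1, cy + radius); ++y)
            for (int x = std::max(0, cx - radius); x <= std::min(w - 1, cx + radius); ++x)
            {
                int d = std::max(std::abs(x - cx), std::abs(y - cy));
                at(x, y) += strength * (1.f - float(d) / (radius + 1));
            }
    }
};

// An agent's interest map: weighted sum of the layers relevant to it.
IMap combine(const std::vector<std::pair<const IMap*, float>>& layers)
{
    IMap out(layers[0].first->w, layers[0].first->h);
    for (const auto& lw : layers)
        for (int i = 0; i < out.w * out.h; ++i)
            out.cells[i] += lw.second * lw.first->cells[i];
    return out;
}
```

An agent would then scan its combined interest map for the highest-scoring cell and move there; keeping one layer per influence type is what lets different agents weight the same shared data differently.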

7.2 – Layered Influence Map Demo

7.1 – Layered Influence Map Literature Review

5.2 – Hybrid (BT + Utility) Architecture

5.2.0 – The Idea

At a certain point while I was implementing things, I started to realize that all these AI systems could probably give very similar, if not identical, results. It was just a matter of how hard each would be to design, tune, and debug, and that drew a clear line between the different systems. Behavior trees and FSMs were highly regarded for their simplicity–they would give you exactly what you designed–but they were mechanical and could grow unnecessarily complex. Utility systems and planners were great at producing organic behavior, and were simple system-wise, but they were hard to design and even harder to tune.

So there was an interesting balance between system complexity and design complexity. After all, the purpose of computing and artificial intelligence is to make our lives easier.

So far I have only been thinking from the system’s perspective, trying to figure out the potential and strengths of certain systems. But in reality, it is usually the other way around: the game comes first, and then the system is developed around it. The next phase of this project will focus on re-engineering.

5.2.1 – The design

A behavior tree is driven precisely by design. This is both its strength and its limitation. On one hand, it is intuitive and easy to tune. On the other hand, it does not adapt to changes in the environment and is not good at handling unexpected situations. This is because decisions are rarely binary: in many cases, the priority of behaviors cannot be clearly defined up front. Sure, you can work around the problem with a more sophisticated tree, but is that really necessary?

So the primary problem with behavior trees is that the order of behaviors is preset and cannot be changed at run-time. There are a couple of ways to change that. One is to use a utility system in the selector to score each child and re-order the children by priority.

5.2.2 – The utility behavior

Essentially, only the leaf nodes need to be scored–they are the actions and conditions. The composites just propagate the scores from their children up to the level above. The Behavior class needs to be extended to support scoring:

UtilityBehavior.h:

std::vector<Consideration*> m_Considerations; // considerations attached to this behavior

void addConsideration(Consideration* _consideration)
{
	m_Considerations.push_back(_consideration);
}
inline virtual float getScore(Blackboard* _blackboard)
{
	// Multiply all consideration scores together; a behavior with
	// no considerations defaults to a score of 1.
	float finalScore = 1.f;
	for (Consideration* c : m_Considerations)
	{
		finalScore *= c->getScore(_blackboard);
	}
	return finalScore;
}

5.2.3 – The utility selector

The onUpdate() of the selector needs to be modified to calculate the score of each child, reorder the children, and pick the one with the highest priority.

UtilitySelector.h

virtual Status onUpdate(Blackboard* _blackboard) override
{
	if (m_Utility.size() == 0)
	{
		// Query for child utility values
		for (auto it = m_Children.begin(); it != m_Children.end(); it++)
			m_Utility[it] = (*it)->getScore(_blackboard);

		// Sort children from highest to lowest
		m_SortedChildren.clear();
		for (auto it = m_Utility.begin(); it != m_Utility.end(); it++)
			m_SortedChildren.push_back(*it);
		std::sort(m_SortedChildren.begin(), m_SortedChildren.end(), []
			(const std::pair<Behaviors::iterator, float>& a,
			const std::pair<Behaviors::iterator, float>& b)
			{ return a.second > b.second; });

		// Set current to the highest
		m_CurrentChildInSorted = m_SortedChildren.begin();
	}
	// Search in utility order until a child behavior says it is running
	for (;;)
	{
		m_CurrentChild = m_CurrentChildInSorted->first;
		Status status = (*m_CurrentChild)->tick(_blackboard);
		if (status != Status::BH_FAILURE)
		{
			if (status == Status::BH_SUCCESS) m_Utility.clear();
			return status;
		}
		if (++m_CurrentChildInSorted == m_SortedChildren.end())
		{
			m_Utility.clear();
			return Status::BH_FAILURE;
		}
	}
}

6.3 – Buddy AI Demo 2 – Ambient Following, Taking Cover, Combat Utility

6.2 – Buddy AI Demo 1 – Ambient Following

2.6 – Utility Demo 2 – Target Selection

<Utility name="Utility_Test_3">
  <Action id="Investigate">
    <Consideration name="NotSeeEnemy">
      <ResponseCurve type="Step" slope="-1" exponent="1" xshift="0" yshift="1"/>
      <Input id="SeeEnemy"/>
    </Consideration>
    <Consideration name="HaveLKP">
      <ResponseCurve type="Step" slope="1" exponent="1" xshift="0" yshift="0"/>
      <Input id="HaveLKP"/>
    </Consideration>
    <Consideration name="CoolDown">
      <ResponseCurve type="Polynomial" slope="0.5" exponent="2" xshift="0" yshift="0"/>
      <Input id="Time"/>
    </Consideration>
  </Action>

  <Action id="MoveToEnemy">
    <Consideration name="SeeEnemy">
      <ResponseCurve type="Step" slope="1" exponent="1" xshift="0" yshift="0"/>
      <Input id="SeeEnemy"/>
    </Consideration>
  </Action>

  <Action id="Alert">
    <Consideration name="HitByEnemy">
      <ResponseCurve type="Polynomial" slope="-0.95" exponent="2" xshift="0" yshift="1"/>
      <Input id="HitByEnemy"/>
    </Consideration>
  </Action>
  
  <Action id="Attack">
    <Consideration name="HitByEnemy">
      <ResponseCurve type="Polynomial" slope="-0.95" exponent="2" xshift="0" yshift="1"/>
      <Input id="HitByEnemy"/>
    </Consideration>
    <Consideration name="NearEnemy">
      <ResponseCurve type="Polynomial" slope="-0.95" exponent="2" xshift="0" yshift="1"/>
      <Input id="Distance"/>
    </Consideration>
    <Consideration name="EnemyHealthIsLow">
      <ResponseCurve type="Polynomial" slope="-0.5" exponent="4" xshift="0" yshift="1"/>
      <Input id="EnemyHealth"/>
    </Consideration>
  </Action>
  
  <Action id="Search">
    <Consideration name="NotSeeEnemy">
      <ResponseCurve type="Step" slope="-0.1" exponent="1" xshift="0" yshift="0.1"/>
      <Input id="SeeEnemy"/>
    </Consideration>
  </Action>
  
  <Action id="Run">
    <Consideration name="HealthIsLow">
      <ResponseCurve type="Polynomial" slope="0.5" exponent="4" xshift="0" yshift="0"/>
      <Input id="Health"/>
    </Consideration>
    <Consideration name="NearEnemy">
      <ResponseCurve type="Polynomial" slope="0.5" exponent="4" xshift="0" yshift="0"/>
      <Input id="Distance"/>
    </Consideration>
  </Action>
</Utility>

2.5 – Utility Architecture Update – Target Selection

2.5.1 – Target Selection Overall

I thought about two ways to do target selection. One is to use a hierarchical system to select targets; the other is to calculate the scores for all targets simultaneously. Mike Lewis suggested that the hierarchical approach is more intuitive for designers and has a much smaller search space, but it has the drawback of targets locking each other out. The simultaneous approach takes M * N evaluations, which could have a significant performance impact in large-scale cases, but it can provide more reasonable and organic results.
In my implementation, I used the simultaneous approach.

2.5.2 – Add Targets

I used an approach similar to storing blackboard data to store the target information. The information is maintained by the AI actors and then passed to the utility system for updates. Action scores are calculated based not only on the current world state, but also on the targets. Instead of Action::getScore(Blackboard*), it is now Action::getScore(Blackboard*, Target*).
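A minimal sketch of the simultaneous M * N selection, with stand-in types and a toy scoring rule of my own (the real version multiplies Consideration response-curve scores): every action is scored against every target, and the best (action, target) pair wins.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical minimal stand-ins for the real Blackboard/Target/Action types.
struct Blackboard { /* world state omitted */ };
struct Target { std::string id; float distance; float health; };

struct Action
{
    std::string id;
    Action(std::string _id) : id(std::move(_id)) {}
    virtual ~Action() = default;
    virtual float getScore(Blackboard*, Target* t) const = 0;
};

struct AttackAction : Action
{
    AttackAction() : Action("Attack") {}
    // Toy scoring: prefer close, weak targets.
    float getScore(Blackboard*, Target* t) const override
    {
        float nearScore = 1.f - t->distance / 10000.f;  // closer is better
        float weakScore = 1.f - t->health / 100.f;      // weaker is better
        return nearScore * weakScore;
    }
};

// Simultaneous selection: evaluate all M actions against all N targets
// and return the best (action, target) pair -- O(M*N), but organic results.
std::pair<const Action*, Target*> selectBest(
    Blackboard* bb,
    const std::vector<const Action*>& actions,
    std::vector<Target>& targets)
{
    std::pair<const Action*, Target*> best{nullptr, nullptr};
    float bestScore = -1.f;
    for (const Action* a : actions)
        for (Target& t : targets)
        {
            float s = a->getScore(bb, &t);
            if (s > bestScore) { bestScore = s; best = {a, &t}; }
        }
    return best;
}
```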

Updated library is now on GitHub.

3.4 – GOAP Architecture

3.4.0 – The Design
Different from all the previous architectures, GOAP (Goal-Oriented Action Planning) is a planner–it not only responds to the current scenario, but also predicts future moves. To do that, it interprets the game world as small, atomic Variables. Variables make up a World State that describes the status of the world. Actions are planned with A* graph search, where World States serve as nodes and Actions as edges. Each Action has a cost. A*’s heuristic function calculates the “difference” between two World States–the number of Variables that differ. I use forward search in this example, from the current World State to the Goal–this differs from the more traditional method, where backward search is preferred to limit the search space. But if the design of the World State is simple enough, forward search works just fine.

3.4.1 – World State
A World State contains a list of Variables describing the world. Distance is calculated as the difference between two World States: a Variable is considered equivalent if it has the same value in both states.
WorldState.h

class WorldState
{
  std::map<int, bool> m_Variables;
public:
  int distanceToState(const WorldState& _other) const;
  bool meetGoal(const WorldState& _goal) const;
  bool onUpdate(const std::vector<Variable*> _variables);
};

Variables are imported from XML, with a unique integer key, a string id for debugging, a boolean value, and parameters that determine the boolean value. The XML for a Variable looks something like this:
<Variable id="EnemyInRange" key="0" let="200"></Variable> <!-- when less than or equal to 200, return true -->
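A sketch of how such a threshold Variable could be evaluated (the struct layout is my guess from the XML, not the actual implementation): a "let" Variable becomes true when the sensed value is less than or equal to the threshold, and false otherwise.

```cpp
#include <string>

// Hypothetical sketch of a GOAP Variable that turns a sensed numeric
// value into a boolean: "let" = true when value <= threshold,
// otherwise ("gt") = true when value > threshold.
struct Variable
{
    std::string id;   // e.g. "EnemyInRange", for debugging
    int key;          // unique integer key used in the World State
    bool useLet;      // true if the XML gave a "let" threshold
    float threshold;
    bool value;

    void onUpdate(float sensed)
    {
        value = useLet ? (sensed <= threshold) : (sensed > threshold);
    }
};
```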

3.4.2 – Action
An Action has Preconditions and Effects, both of which are just lists of Variables. Preconditions need to be met for an Action to be “valid”. Effects are applied during planning to forecast the future World State.
Action.h

class Action
{
  int m_iCost;
  std::map<int, bool> m_Preconditions;
  std::map<int, bool> m_Effects;
public:
  bool isValid(const WorldState& _state) const;
  WorldState proceed(const WorldState& _state) const;
};

3.4.3 – Planner
The Planner uses A* to find a path through World State nodes and Action edges, and returns a list of Actions. It keeps a list of open nodes and a list of closed nodes for the A* search. Nodes contain a World State and the necessary bookkeeping, such as the last edge, G, and H. In this example, once a node is closed it will not be re-opened.
Planner.h

class Planner
{
  std::vector<Node>::iterator inOpenList(const WorldState& _state);
  bool inClosedList(const WorldState& _state);
  int getHeuristic(const WorldState& _state1, const WorldState& _state2) const { return _state1.distanceToState(_state2); }
public:
  std::vector<Action*> plan(const WorldState& _start, const WorldState& _goal, const std::vector<Action*>& _actions);
};
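As a self-contained sketch of the forward A* search described above (internal names like Node and plan's return type are simplified; the real plan() returns Action pointers), using map<int, bool> as the state, the mismatch count as the heuristic, and a closed list that is never re-opened:

```cpp
#include <algorithm>
#include <map>
#include <vector>

// Minimal forward-searching GOAP planner sketch.
using State = std::map<int, bool>;  // variable key -> value

struct Action
{
    int id, cost;
    State pre, eff;
    bool isValid(const State& s) const
    {
        for (const auto& p : pre)
        {
            auto it = s.find(p.first);
            if (it == s.end() || it->second != p.second) return false;
        }
        return true;
    }
    State proceed(State s) const  // apply effects to forecast the next state
    {
        for (const auto& e : eff) s[e.first] = e.second;
        return s;
    }
};

// Heuristic: number of goal variables not yet satisfied.
static int distance(const State& s, const State& goal)
{
    int d = 0;
    for (const auto& g : goal)
    {
        auto it = s.find(g.first);
        if (it == s.end() || it->second != g.second) ++d;
    }
    return d;
}

std::vector<int> plan(const State& start, const State& goal,
                      const std::vector<Action>& actions)
{
    struct Node { State state; int g; std::vector<int> path; };
    std::vector<Node> open{{start, 0, {}}};
    std::vector<State> closed;
    while (!open.empty())
    {
        // Pop the open node with the lowest f = g + h.
        auto best = std::min_element(open.begin(), open.end(),
            [&](const Node& a, const Node& b)
            { return a.g + distance(a.state, goal) < b.g + distance(b.state, goal); });
        Node cur = *best;
        open.erase(best);
        if (distance(cur.state, goal) == 0) return cur.path;  // goal met
        closed.push_back(cur.state);
        for (const Action& a : actions)
        {
            if (!a.isValid(cur.state)) continue;
            State next = a.proceed(cur.state);
            if (std::find(closed.begin(), closed.end(), next) != closed.end())
                continue;  // closed nodes are not re-opened
            Node n{next, cur.g + a.cost, cur.path};
            n.path.push_back(a.id);
            open.push_back(n);
        }
    }
    return {};  // no plan found
}
```

Fed the Variables and Actions from the GOAP_Test_1 data below (EnemyInRange=0, EnemyLost=1, EnemyDead=2), this produces the expected Search, GoToEnemy, Attack chain.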

3.3 – GOAP Demo 1

<GOAP name="GOAP_Test_1">
  <VariableList> <!-- Variable List -->
    <Variable id="EnemyInRange" key="0" let="200"></Variable> <!-- true: let; false: gt-->
    <Variable id="EnemyLost" key="1"></Variable>
    <Variable id="EnemyDead" key="2"></Variable>
  </VariableList>

  <ActionList> <!-- Action List -->
    <Action id="Search" cost="1">
      <Precondition id="EnemyLost" value="true"></Precondition>
      <Effect id="EnemyLost" value="false"></Effect>
    </Action>
    <Action id="GoToEnemy" cost="1">
      <Precondition id="EnemyLost" value="false"></Precondition>
      <Precondition id="EnemyInRange" value="false"></Precondition>
      <Effect id="EnemyInRange" value="true"></Effect>
    </Action>
    <Action id="Attack" cost="4">
      <Precondition id="EnemyInRange" value="true"></Precondition>
      <Precondition id="EnemyDead" value="false"></Precondition>
      <Effect id="EnemyDead" value="true"></Effect>
    </Action>
  </ActionList>
  
  <InitialState> <!-- Initial world state -->
    <Variable id="EnemyLost" value="true"></Variable>
    <Variable id="EnemyInRange" value="false"></Variable>
    <Variable id="EnemyDead" value="false"></Variable>
  </InitialState>
  
  <GoalState> <!-- Goal world state -->
    <Variable id="EnemyDead" value="true"></Variable>
  </GoalState>
</GOAP>

3.2 – GOAP C++ Libraries

  • Goal oriented action planner
    • iteration 0 – use World State to represent the game world, use A* graph search for action planning, handle data loading from XML
    • iteration 1 – build as a static library and integrate with UE4, add another layer to translate Blackboard info to World State

3.1 – Planner Literature Review

Peter Higley and Chris Conway’s GDC 2015 Talk on GOAP (0:00, 20:00)
Jeff Orkin’s website on GOAP
Jeff Orkin’s Applying Goal-Oriented Action Planning to Games (2003)
Jeff Orkin’s Three States and a Plan: The AI of FEAR (GDC 2006)
Jeff Orkin’s Symbolic Representation of Game World State: Toward Real-Time Planning in Games
Glenn Fiedler’s GDC 2015 Talk on Physics for Game Programmers : Networking for Physics Programmers
Troy Humphreys’s Exploring HTN Planners through Example (Pro 1-12)
William van der Sterren’s Hierarchical Plan-Space Planning for Multi-unit Combat Maneuvers (Pro 1-13)
Éric Jacopin’s Optimizing Practical Planning for Game AI (Pro 2-13)
A* implementation

2.4 – Utility Architecture

2.4.0 – The design
Utility theory highlights the “motivation” behind every decision the AI makes. Each AI has a list of Actions. Each Action contains a list of Considerations. Each Consideration has a Response Curve and an Input channel. Via the response curve, a normalized input value is remapped to another normalized value. The resulting values are then multiplied together to get the score of each action. The AI searches through the list of actions, and the one with the highest score is selected to be performed next.

2.4.1 – Response curves
To make the remapping simpler, I created a preset of commonly used curves–Step, Linear, Polynomial, Logistic, Logit, Normal, Sin–based on Mike Lewis and Dave Mark’s GDC talks and articles. Four parameters–slope, exponent, xshift, yshift–are enough to build a wide variety of curves.

Step:

output = (_input - m_dXShift < 0.5) ? m_dYShift : m_dSlope + m_dYShift;

Linear:
output = (m_dSlope * (_input - m_dXShift)) + m_dYShift;

Polynomial:
output = (m_dSlope * pow(_input - m_dXShift, m_dExponent)) + m_dYShift;

Logistic:
output = (m_dSlope / (1 + exp(-10.0 * m_dExponent * (_input - 0.5 - m_dXShift)))) + m_dYShift;

Logit:
output = m_dSlope * log((_input - m_dXShift) / (1.0 - (_input - m_dXShift))) / 5.0 + 0.5 + m_dYShift;

Normal:
output = m_dSlope * exp(-30.0 * m_dExponent * (_input - m_dXShift - 0.5) * (_input - m_dXShift - 0.5)) + m_dYShift;

Sin:
output = 0.5 * m_dSlope * sin(2.0 * M_PI * (_input - m_dXShift)) + 0.5 + m_dYShift;

2.4.2 – Data
XML is used for data storage and loading. The structure of a Utility file looks like this:

  <Action id="Run">
    <Consideration name="Health">
      <ResponseCurve type="Linear" slope="-1" exponent="1" xshift="0" yshift="1"/>
      <Input id="Health"/>
    </Consideration>
  </Action>

I use a separate XML file to store input information, mainly for re-use.
  <Input id="Distance" min="0" max="10000"/>
  <Input id="EnemyHealth" min="0" max="100"/>
  <Input id="Health" min="0" max="100"/>
  <Input id="SeeEnemy" min="0" max="1"/>

2.3 – Utility Demo 1

<Utility name="Utility_Test_2">
  <Action id="Attack">
    <Consideration name="SeeEnemy">
      <ResponseCurve type="Step" slope="1" exponent="1" xshift="0" yshift="0"/>
      <Input id="SeeEnemy"/>
    </Consideration>
    <Consideration name="NearEnemy">
      <ResponseCurve type="Linear" slope="-1" exponent="1" xshift="0" yshift="1"/>
      <Input id="Distance"/>
    </Consideration>
    <Consideration name="HealthIsHigh">
      <ResponseCurve type="Linear" slope="1" exponent="1" xshift="0" yshift="0"/>
      <Input id="Health"/>
    </Consideration>
    <Consideration name="EnemyHealthIsLow">
      <ResponseCurve type="Linear" slope="-1" exponent="1" xshift="0" yshift="1"/>
      <Input id="EnemyHealth"/>
    </Consideration>
  </Action>
  <Action id="Search">
    <Consideration name="NotSeeEnemy">
      <ResponseCurve type="Step" slope="-1" exponent="1" xshift="0" yshift="1"/>
      <Input id="SeeEnemy"/>
    </Consideration>
    <Consideration name="HealthIsHigh">
      <ResponseCurve type="Linear" slope="1" exponent="1" xshift="0" yshift="0"/>
      <Input id="Health"/>
    </Consideration>
  </Action>
  <Action id="Run">
    <Consideration name="HealthIsLow">
      <ResponseCurve type="Linear" slope="-1" exponent="1" xshift="0" yshift="1"/>
      <Input id="Health"/>
    </Consideration>
    <Consideration name="NearEnemy">
      <ResponseCurve type="Linear" slope="-1" exponent="1" xshift="0" yshift="1"/>
      <Input id="Distance"/>
    </Consideration>
  </Action>
</Utility>

2.2 – Utility C++ Libraries

  • Utility system
    • iteration 0 – response curve functions based on Mike Lewis’s Choosing Effective Utility-Based Considerations (Pro 3-13) and Dave Mark’s GDC Talk Architecture Tricks: Managing Behaviors in Time, Space, and Depth (starting from 33:40), with data loaded from XML in runtime
    • iteration 1 – separate input from implementation. Add an input layer between UE4 and utility system
    • iteration 2 – build as a static library and integrate with UE4

1.4 – Behavior Tree Architecture

1.4.0 – The design

1.4.1 – Behavior
Behavior is the basic building block of the tree. Every node is essentially a subclass of the base class Behavior. The core functions are tick() as the single entry point, plus onInitialize(), onUpdate(), and onTerminate(). Each behavior has a current status: INVALID (not initialized), RUNNING, SUCCESS (after termination), or FAILURE (after termination).
Behavior.h

Status tick(Blackboard* _blackboard) /* single entry point for updating this behavior */
{
  if (m_eStatus != Status::BH_RUNNING)
    onInitialize(_blackboard);
  m_eStatus = onUpdate(_blackboard);
  if (m_eStatus != Status::BH_RUNNING)
    onTerminate(m_eStatus);
  return m_eStatus;
}
virtual void onInitialize(Blackboard* _blackboard) { }
virtual Status onUpdate(Blackboard* _blackboard) { return m_eStatus; }
virtual void onTerminate(Status _status) { }

1.4.2 – Decorator
Decorator is a behavior with only one child. So far I have created one type of decorator: Repeater, which repeatedly runs its child node a given number of times.
Repeater.h

Status onUpdate(Blackboard* _blackboard)
{
  for (;;)
  {
    m_pChild->tick(_blackboard);
    if (m_pChild->getStatus() == Status::BH_RUNNING) { return Status::BH_RUNNING; }
    if (m_pChild->getStatus() == Status::BH_FAILURE) { return Status::BH_FAILURE; }
    if (++m_iCounter == m_iLimit) { return Status::BH_SUCCESS; }
    m_pChild->reset();
  }
}

1.4.3 – Composite
A Composite has more than one child. A composite can be a Sequence, a Selector, or a Parallel. Sequence returns SUCCESS when every child returns SUCCESS, and FAILURE as soon as one child returns FAILURE. Selector returns SUCCESS as soon as one child returns SUCCESS, and FAILURE if all children return FAILURE. Parallel executes all children “at the same time”.
Sequence.h

virtual Status onUpdate(Blackboard* _blackboard) override
{
  for (;;)
  {
    Status status = (*m_CurrentChild)->tick(_blackboard);
    if (status != Status::BH_SUCCESS) { return status; }
    if (++m_CurrentChild == m_Children.end()) { return Status::BH_SUCCESS; }
  }
}

Selector.h
virtual Status onUpdate(Blackboard* _blackboard) override
{
  for (;;)
  {
    Status status = (*m_CurrentChild)->tick(_blackboard);
    if (status != Status::BH_FAILURE) { return status; }
    if (++m_CurrentChild == m_Children.end()) { return Status::BH_FAILURE; }
  }
}

Parallel.h
virtual Status onUpdate(Blackboard* _blackboard) override
{
  unsigned int iSuccessCount = 0, iFailureCount = 0;
  for (auto it: m_Children)
  {
    Behavior* behavior = it;
    if (behavior->isTerminated() == false) behavior->tick(_blackboard);
    if (behavior->getStatus() == Status::BH_SUCCESS)
    {
      iSuccessCount ++;
      if (m_eSuccessPolicy == PL_REQUIRE_ONE) { return Status::BH_SUCCESS; }
    }
    if (behavior->getStatus() == Status::BH_FAILURE)
    {
      iFailureCount ++;
      if (m_eFailurePolicy == PL_REQUIRE_ONE) { return Status::BH_FAILURE; }
    }
  }
  if (m_eSuccessPolicy == PL_REQUIRE_ALL && iSuccessCount == m_Children.size()) { return Status::BH_SUCCESS; }
  if (m_eFailurePolicy == PL_REQUIRE_ALL && iFailureCount == m_Children.size()) { return Status::BH_FAILURE; }
  return Status::BH_RUNNING;
}

1.4.4 – Data
XML is used for data loading.

<BehaviorTree name="BT_Test_2">
    <ActiveSelector>
        <Sequence name="FindSequence"> <!-- Attack the enemy if seen. -->
            <Condition id="SeeEnemy"/>
            <Action id="MoveToEnemy"/>
            <Action id="AttackEnemy"/>
        </Sequence>
    </ActiveSelector>
</BehaviorTree>

1.3 – Behavior Tree Demo 1

<BehaviorTree name="BT_Test_2">
    <ActiveSelector>
        <Filter name="Attack"> <!-- Attack the enemy if seen. -->
            <Condition id="SeeEnemy"/>
            <Action id="MoveToEnemy"/>
            <Action id="AttackEnemy"/>
        </Filter>

      <Filter name="Investigate"> <!-- Search near last known position for 3 times. -->
        <Condition id="HaveLastKnownPosition"/>
        <Repeater limit="3">
            <Sequence name="SearchNear">
                <Action id="Wait" time="1"/>
                <Action id="MoveToLastKnownPosition"/>
                <Action id="MoveToRandomPosition" range="400"/>
            </Sequence>
        </Repeater>
        <Action id="ClearLastKnownPosition"/>
      </Filter>

      <Sequence name="Patrol"> <!-- Randomly search. -->
        <Action id="Wait" time="1"/>
        <Action id="MoveToRandomPosition"/>
      </Sequence>
    
  </ActiveSelector>
</BehaviorTree>

0.5 – UE4 Level Setup


Top-down view for the initial level setup.

0.4 – Hierarchy FSM Architecture

0.4.0 – The design

After decades and decades, FSMs are still widely used in games for behavior selection. So it makes sense to actually build one for UE4. The general idea is to run an FSM from each actor, share knowledge through a customized Blackboard, and encapsulate States and Transitions so that they can be re-used and even shared between different FSMs.

0.4.1 – First iteration: FSM

State.h:

virtual void onEnter() = 0;
virtual void onUpdate(const float _deltaTime, Blackboard* _blackboard) = 0;
virtual void onExit(Blackboard* _blackboard) = 0;

Transition.h

virtual bool isValid(const Blackboard* _blackboard) const = 0;
virtual void onTransition() = 0;

StateMachine.h

virtual void setTransitionMap(const TransitionMap& _transitions, State* _initState);
virtual void update(const float _deltaTime, Blackboard* _blackboard);
virtual void setState(State* _state);

I also created a loader class that loads an XML file and builds the state machine at run-time. The XML file looks like:

<fsm name="FSM_Test_1">
    <state name="Patrol" initial="1">
        <transition input="EnemySpotted" name="Combat"/>
    </state>
    ...
</fsm>
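A minimal sketch of the update loop such a machine runs (types are simplified stand-ins; the real State/Transition/Blackboard interfaces are shown above): each update, the current state's outgoing transitions are checked against the blackboard, and the first valid one switches the state.

```cpp
#include <map>
#include <string>
#include <vector>

// Simplified blackboard: named boolean inputs such as "EnemySpotted".
struct Blackboard { std::map<std::string, bool> flags; };

struct Transition
{
    std::string input, target;
    bool isValid(const Blackboard& bb) const
    {
        auto it = bb.flags.find(input);
        return it != bb.flags.end() && it->second;
    }
};

struct StateMachine
{
    std::string current;
    std::map<std::string, std::vector<Transition>> transitions;

    void update(const Blackboard& bb)
    {
        // First valid transition out of the current state wins.
        for (const Transition& t : transitions[current])
            if (t.isValid(bb)) { current = t.target; break; }
        // ...the current state's onUpdate() would run here.
    }
};
```

This mirrors the FSM_Test_1 data: in "Patrol", the "EnemySpotted" input drives the machine into "Combat".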

0.4.2 – Second iteration: Hierarchy

MetaState.h

class MetaState : public State, public StateMachine
...
void onUpdate(const float _deltaTime, const Blackboard* _blackboard);
virtual void setTransitionMap(const TransitionMap& _transitions, State* _initState);

Now a state can also be a state machine. But what about transitions? A transition to an upper level is different from a transition within the same level or to a lower level. So in the data file, each transition is given a level based on its source and destination. Now the data file looks like:

<fsm name="FSM_Test_1">
    <state name="Patrol" initial="1">
        <transition input="EnemySpotted" name="Combat" level="0"/>
    </state>
    <state name="Combat" meta="1">
        <transition input="EnemyLost" name="Search" level="0"/>
        <state name="Alert" initial="1">
            <transition input="Timeout" name="Attack" level="0"/>
        </state>
        <state name="Attack">
            <transition input="Timeout" name="Search" level="1"/>
        </state>
    </state>
    <state name="Search">
        <transition input="EnemySpotted" name="Combat" level="0"/>
    </state>
</fsm>

StateMachine.h

typedef struct
{
  Transition* transition = nullptr;
  State* state = nullptr;
  int level = 0;
  StateMachine* stateMachine = nullptr;
} UpdateResult;
virtual void onUpdate(const float _deltaTime, Blackboard* _blackboard);
virtual UpdateResult update(const float _deltaTime, Blackboard* _blackboard);
void updateDown(State* state, int level, Blackboard* _blackboard);

1.2 – Behavior Tree C++ Libraries

  • Behavior tree
    • iteration 0 – simple behavior tree v1 with data loaded from XML in runtime
    • iteration 1 – separate implementation and structure
    • iteration 2 – build as a static library and integrate with UE4

1.1 – Behavior Tree Literature Review

  • Game AI Pro – 4, 6, 7 on BT

0.3 – Math and HFSM C++ Libraries

  • 3D Math for easy manipulation
    • iteration 0 – raw C++ headers
    • iteration 1 – build as an UE4 static library
  • Hierarchy FSM
    • iteration 0 – raw C++ headers for simple FSM with data loaded from XML in runtime, integrated with blackboard
    • iteration 1 – re-usable nodes and memory management
    • iteration 2 – hierarchy, downwards and upwards transition
    • iteration 3 – build as an UE4 static library

0.2 – AI and Framework Literature Review

  • C++ for game programmers (book)
  • Introduction to game AI (book)
  • Artificial Intelligence for games (book)
  • Game Engine Architecture – 4 on Math
  • Game AI Pro 3 – 4 on player perception, 7 on FSM debug, 8 on modular, 12 on FSM
  • Wisdom 2 – 5.1 on FSM
  • Game AI Pro – 4 on FSM, BT

0.1 – Project starts. Environment settings

Hello World!

The purpose of this project is to study the latest game AI techniques and build a general-purpose AI plugin for UE4.

  • UE: 4.18.3
  • Windows SDK: 10.0.16299.0
  • VS: Visual Studio 2017 (v141)
  • XCode: 11