Notes from the Nucl.ai conference stream

MPC - Creating the Epic Sky Battle in Guardians of the Galaxy

by Adam Davis, MPC

  • A.L.I.C.E proprietary framework
  • Get to know the ships (how they fly and manoeuvre)
  • Inspired by slow motion video of flies flying
  • For the final shot, tens of thousands of ships were used
  • Started with flocks and boids (minimal sketch after this list)
  • Then seeking behaviour
  • Finally added directable layer
  • Added background-foreground layer for camera depth
  • Then started moving camera forward to be more dynamic
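
A minimal Python sketch of the flocking-plus-seeking layering described above; an assumed re-creation for illustration, not MPC's A.L.I.C.E. code, and every weight and radius here is a made-up parameter.

```python
import numpy as np

def flock_step(positions, velocities, target, dt=0.033,
               neighbour_radius=5.0, max_speed=10.0,
               w_sep=1.5, w_ali=1.0, w_coh=1.0, w_seek=2.0):
    """positions, velocities: (N, 3) arrays; target: (3,) directable seek point."""
    steering = np.zeros_like(velocities)
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists > 0) & (dists < neighbour_radius)
        if mask.any():
            sep = -(offsets[mask] / dists[mask, None] ** 2).sum(axis=0)  # separation
            ali = velocities[mask].mean(axis=0) - velocities[i]          # alignment
            coh = positions[mask].mean(axis=0) - positions[i]            # cohesion
        else:
            sep = ali = coh = np.zeros(3)
        seek = target - positions[i]            # seeking / directable layer on top
        steering[i] = w_sep * sep + w_ali * ali + w_coh * coh + w_seek * seek
    velocities = velocities + steering * dt
    speeds = np.maximum(np.linalg.norm(velocities, axis=1, keepdims=True), 1e-6)
    velocities = np.where(speeds > max_speed, velocities / speeds * max_speed, velocities)
    return positions + velocities * dt, velocities
```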

New flight system:

  • flexible
  • performant
  • abstract
  • easy to use
  • lights and rendering

Technology available:

  • ALICE: artificial life crowd engine; not good at flying
  • Houdini/ICE/Custom

Flow:

  • Ship initialisation
  • Behaviour Injection
  • Simulator
  • Battle interaction
  • Visualisation

Usually there is not a large number of autonomous agents

Started integrating into ALICE (wings, flaps, thrusters, etc.). Animation is procedurally generated by combining different clips

The system gives information to the FX team (e.g. how many frames until an explosion). Also added a system to trigger events/effects, which resulted in nice decoupling

ALICE is getting "old" - it is hard to take advantage of multicore systems. Partnered with Fabric Engine to refactor the system

Drivatar and Machine learning racing skills in the Forza Series

by Jeffrey Schlimmer, Turn 10 Studios

Robotic AI:

  • Game AI became monotonous, almost just moving traffic cones
  • Limited difficulty settings
  • Started with ghosts, both your own and other players
  • No racing against though (no contact, overtake, etc.)

Drivatar: play with your friends anytime you want, even if they are not online. They drive the same car, have real names, etc.

Track what the player does and how precisely they do it; infer behaviour for other tracks, other cars, etc.

Process:

  • download the player's behaviour for the current track, car, ribbon, etc.
  • physics sim to understand how the car reacts and apply a percentage of that capability to simulate the player
  • track different parameters (e.g. speed in certain sections) to infer behaviour on similar sections of other tracks
  • average utilisation and variance for each segment of each track ribbon (sketch after this list)
  • average utilisation and variance for each turn type
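
A hedged sketch of the per-segment statistics above: keep a running mean and variance of the player's "utilisation" per track segment and per turn type, and sample from it when driving a similar segment elsewhere. The keys and the default value are assumptions, not Turn 10's data model.

```python
import random
from collections import defaultdict

class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def add(self, x):                      # Welford's online mean/variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

segment_stats = defaultdict(RunningStats)    # keyed by (track, ribbon, segment)
turn_type_stats = defaultdict(RunningStats)  # keyed by turn type, e.g. "hairpin"

def record_lap(samples):
    """samples: iterable of (segment_key, turn_type, utilisation in [0, 1])."""
    for segment_key, turn_type, u in samples:
        segment_stats[segment_key].add(u)
        turn_type_stats[turn_type].add(u)

def sample_utilisation(segment_key, turn_type):
    # Prefer stats for the exact segment; fall back to the turn type when the player
    # has never driven this track (the "infer behaviour" step above).
    stats = segment_stats.get(segment_key) or turn_type_stats.get(turn_type)
    if stats is None or stats.n == 0:
        return 0.8                           # assumed default capability fraction
    return min(1.0, max(0.0, random.gauss(stats.mean, stats.variance ** 0.5)))
```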

The system is designed to let designers improve gameplay and manage behaviours that might cause player frustration

Single player vs multiplayer is an important distinction: if the model is too accurate, people competing in single player get frustrated with Drivatars hitting them

Tactical Planning and Real-time MCTS in Fable Legends

by Gwaredd Mountain, Lionhead

Uses grid to evaluate rules in the tree

Problems (because of real time):

  • Continuous space
  • Nondeterministic
  • Simultaneous Actions
  • Planning Horizon
  • Limited CPU
  • Authorial control

Solution: define a model for the game to improve evaluation time. Turn it into a "board game" by using the navmesh and generating arrays of values to be used in the evaluation

Initially the generated plan is bad; need to analyse the search space. Issues:

  • Delayed reward (e.g. health change)
  • Score diffusion
  • HBF (high branching factor) actions

Solution: create "macro" actions to evaluate (not single orders). Also create action buckets

Priority quantisation, to avoid always picking the best solution
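
A minimal UCT/MCTS sketch of the approach above, assuming the simplified "board game" model exposes legal_actions(), apply() and rollout() methods (those names are assumptions); illustrative only, not the Fable Legends implementation.

```python
import math, random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.untried = [], list(state.legal_actions())
        self.visits, self.value = 0, 0.0

def uct_search(root_state, iterations=1000, c=1.4):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend with UCB1 until a node still has untried actions.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.value / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # Expansion: try one remaining action ("macro" actions / buckets in the talk).
        if node.untried:
            action = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.apply(action), node, action))
            node = node.children[-1]
        # Simulation: cheap rollout on the simplified board-game model.
        reward = node.state.rollout()
        # Backpropagation.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).action
```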

Data processing for novel input devices

by Chris Mackenzie and Vikram Saran, Opaque Multimedia

Building the Massive Crowds of Assassin's Creed Unity

by François Cournoyer and Antoine Fortier, Ubisoft Montreal

  • AC1 limited to ~100 characters

  • Pre-spawned NPCs (also pre-allocated) and add to it later

  • 3 LODs (called bulk, different LOD meshes). Also for animation and AI

  • Autonomous: 0-12 meters (~500us to 5ms)

  • Puppet: 12-40 meters (~150us)

  • Low Res: > 40 meters (~25us)

  • Camera culling (rendering, animation and AI)

  • No hand bones

  • For the collision system, use of a 2D partition map for queries

  • Always clamped on navmesh, simple crowd push behaviour

Legacy AI

  • no NPC interaction
  • no occlusion
  • no networking

Shepherds (AI directors)

  • unique ID for bulk (can be queried from other systems)
  • only static members
  • manually placed by designers (can manage count and density)
  • can edit specific positions

Wandering crowds

  • cover all of Paris
  • no pathfinding

How to manage LODs in the pool

  • tag to match real NPC mesh
  • match low res visual densities
  • the pool is constantly adjusting

Swapping:

  • best matching entity (colour, shape, etc.)
  • animation blending
  • swap when not in FOV (otherwise it pops)
  • also turn on full detailed AI, animation, etc.
  • Autonomous vs Puppet: only AI and animation
  • Puppet vs Low res: low level mesh
  • reset all modified variables
  • must transition in the correct AI state
  • swapping is costly (~5ms)
  • also swap/switch AI (e.g. dull bot -> shoots a guard -> swap with the correct behaviour)

Additional issues with networking (e.g. replication of full NPCs and events)

Memory usage:

  • pool is fixed
  • 160 spawned, 90 active
  • 230 MB for 2000 bulks

Multithreading:

  • Need good profiling tools
  • Good task scheduling
  • Lockless coding/Limit lock times
  • Try to remove all CPU idle

2D Map:

  • spatially repeating, double buffering: one map for the current frame, one for the next (sketch after this list)
  • lockless insertion, no remove
  • do insertion in one map, query in the other
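
A sketch of the double-buffered 2D partition map referenced in the list above: insertions go into the next-frame grid, queries read the current-frame grid, and the buffers flip once per frame. In Python the lockless aspect can't be shown faithfully, so this only illustrates the buffering pattern; it is not Ubisoft's code.

```python
from collections import defaultdict

class DoubleBufferedGrid:
    def __init__(self, cell_size=2.0):
        self.cell_size = cell_size
        self.current = defaultdict(list)   # read by queries this frame
        self.next = defaultdict(list)      # written by insertions this frame

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, npc_id, x, y):
        # Insert-only, never remove: the whole buffer is discarded on flip.
        self.next[self._cell(x, y)].append((npc_id, x, y))

    def query(self, x, y, radius):
        r_cells = int(radius // self.cell_size) + 1
        cx, cy = self._cell(x, y)
        hits = []
        for ix in range(cx - r_cells, cx + r_cells + 1):
            for iy in range(cy - r_cells, cy + r_cells + 1):
                for npc_id, px, py in self.current.get((ix, iy), ()):
                    if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                        hits.append(npc_id)
        return hits

    def flip(self):
        # Called once per frame: last frame's insertions become this frame's data.
        self.current, self.next = self.next, defaultdict(list)
```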

Future work:

  • Dithering (popping of animations, meshes, etc.)
  • Deterministic reactions
  • Support low res interactions
  • Armies (fighting)

Innovations in Search-based AI from MCTS to Evolutionary Algorithms

by Simon Lucas, University of Essex

Topics covered:

  • MCTS
  • Rolling Horizon Evolutionary Algorithms
  • Deep Neural Networks
  • Evolutionary Algorithms
  • Temporal Difference Learning

The evolution of crowds and AI in feature film animation

by Paul Kanyuk, Pixar Animation Studios

Animation is King - At Pixar the cycle animator is your boss. No shot goes to the director until the animation director approves

A bug's life

  • FSM
  • Animation splines
  • Procedural "look at"
  • still need manual input for more complex scenes

Ratatouille

  • Beyond FSMs to Agent Based Crowds (Massive)
  • Locomotion Brain (still an FSM)
  • Could be seen as Search and Ranking (sensor + fuzzy logic) + Data Flow (weighted average from logic nodes)
  • Major flaw: changes in weights might lead to unintentional behaviour

Got expanded for Up and WALL-E

  • more agents (~50)
  • more complex terrain
  • vision based collision avoidance (Paper from Siggraph 2010)
  • also tried flow fields (looked like water flowing, not sentient beings with a purpose)

Predictive understeering (signal filtering) to avoid sharp changes in direction; hysteresis

Instead of changing speed, blend different cycles

Cars 2

  • Behavioural agents
  • Subsumption architecture from Brooks
  • small independent modules that combined give complex behaviours
  • Reactions (overshoot, leaning, etc.) can use a spring system (harmonic motion); sketch after this list
  • Can also be used for animation (e.g. for limbs)
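
A minimal damped-spring sketch for the reaction layer mentioned above: the displayed value chases its target with harmonic motion, which gives overshoot and lean for free. An assumed illustration, not Pixar's implementation.

```python
def spring_step(value, velocity, target, dt, stiffness=120.0, damping=12.0):
    """One semi-implicit Euler step of a damped spring towards `target`."""
    accel = stiffness * (target - value) - damping * velocity
    velocity += accel * dt
    value += velocity * dt
    return value, velocity

# Example: a lean angle snapping to 15 degrees overshoots and settles.
lean, lean_vel = 0.0, 0.0
for _ in range(60):                    # one second at 60 fps
    lean, lean_vel = spring_step(lean, lean_vel, target=15.0, dt=1 / 60)
```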

Re-did the whole asset pipeline with Presto, but the crowd system didn't have a clear entry point anymore

Used FSM for Brave (mostly static crowds)

Monsters University ("ambient" crowds). Mostly FSM, but allowed for sketching (placement, tracks, etc.). Timeline-based system

Return of agent-based AI at Pixar, with a new system integrated in Houdini. Pixar has a lot of clips (20K+ in Finding Dory)

  • hierarchical connectivity (hierarchic GFSM) and procedural connectivity
  • Agent brain is coded in VEX
  • Pointcloud KD Tree plugin, then all VEX
  • "Easy" to integrate with Bullet Solver (for rigid body dynamics)

Scene is encoded using Universal Scene Description (USD)

Introduction

Case-based rules for player behaviour cloning in Killer Instinct

by Bruce Hayles, Iron Galaxy Studios

sorry, lost first half of the session :(

Main idea is to record shadow of player to be able to:

  • allow players to play against friends that aren't online
  • transition from single player to online
  • play against famous players

Core of the system:

  • track and evaluate stream of events

Heuristics

  • simple custom features that summarize world state history
  • recorded along with the rest of the world state
  • might have issues with long lasting actions and locomotions (i.e. moving forward). They sample at 200ms so that they can break it up for analysis
  • important to replay actions at the right frequency
  • must be reactable - can interrupt whatever current plan is, find a new plan and execute it

Scale

  • 400-700 patterns per match (mostly locomotions)
  • 50+ distance functions (distance between players)
  • 20+ filters
  • 40+ rules

Filtering actions

  • filter actions that are not possible
  • hard part is finding the right pattern
  • use of different distance functions based on context

Sequence modelling

  • many fighting game behaviours aren't combos, but require sequential ordering
  • how to clone high-level sequences: N-grams, bias the selection of the next action (sketch after this list)
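
A small sketch of the N-gram idea above: count observed action sequences from recorded matches, then bias the choice of the next action on the last N-1 actions. Illustrative only; the action names and smoothing are assumptions, not Iron Galaxy's data model.

```python
import random
from collections import defaultdict, Counter

class ActionNgram:
    def __init__(self, n=3):
        self.n = n
        self.counts = defaultdict(Counter)   # (previous actions) -> next-action counts

    def train(self, action_sequence):
        for i in range(len(action_sequence) - self.n + 1):
            context = tuple(action_sequence[i:i + self.n - 1])
            self.counts[context][action_sequence[i + self.n - 1]] += 1

    def sample_next(self, history, legal_actions):
        context = tuple(history[-(self.n - 1):])
        weights = [self.counts[context][a] + 1 for a in legal_actions]  # +1 smoothing
        return random.choices(legal_actions, weights=weights, k=1)[0]

# Usage: clone a simple poke-into-special habit.
model = ActionNgram(n=3)
model.train(["jab", "jab", "fireball", "jab", "jab", "fireball"])
print(model.sample_next(["jab", "jab"], ["fireball", "throw", "block"]))
```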

Adaptation during the match uses heuristics

Runtime evaluation

  • the status of the best plan returned from the search is evaluated at runtime
  • but the context might change (e.g. an attack expected to hit misses), so need to change the plan on the fly

Conclusion

  • system generalises pretty well
  • issue if the player doesn't perform all possible behaviours (even 40 matches might not be enough)
  • heuristics need to be created for shadows to adapt

Dynamic pricing in mobile games

by Bill Grosso, Scientific Revenue

Generating Global Cultures and Characters in Ultima Ratio Regum

by Mark Johnson, University of York

Roguelike that takes place in the 1600s

World generation

  • flag for nations
  • divided in different territories, civilizations, etc.
  • by zooming in you can get to settlements
  • zooming in further you can get to buildings (each nation has its own style)
  • nations vary in how they generate their city and town names (based on location, e.g. close to the mountains, sea, etc.)
  • also creates banners and mottos for families

Procedural religion generation

  • deity or religion name
  • holy book
  • name of religious buildings
  • rewards
  • penance
  • festivities
  • altar generation
  • cathedral generation (architectural grammars)
  • etc.

Generating Aesthetics

  • extend cultural generation and codification into household items
  • linked to the style of the nation
  • i.e. doors, vases, decorations, paintings
  • different styles of clothing based on nation, social status, etc.

Generating people

  • challenge was to make each face distinct
  • genetics will be linked to geographical location
  • clothing, hairstyle, beard style, etc. will depend on nation and/or culture
  • many nations are given facial markers

Crowds and Individuals

  • crowds follow roads until close to their targets (spawn/despawn with player movement and LOS)
  • crowds spawn in weighted percentages (distance from capital, ideologies of the nation, etc.)
  • important NPCs are stored and tracked

Decision making

  • pathfinding
  • in game choices
  • environmental behaviour
  • real world human decisions aren't optimal (contrary to what happens in most game AI)
  • will be influenced by political pressure and upbringing (people will behave/react differently when interacting with someone from a different social status, religion, etc.)
  • also influenced by religious belief, cultural assumptions, social norms (what is polite, what is rude, etc.)
  • layers of weighting to determine decisions and behaviours
  • might have issues with contrasting factors (e.g. social status vs religion)
  • better not to always pick the strongest factor, otherwise behaviour is very flat (weighting sketch after this list)
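
A sketch of the layered weighting described above: each cultural factor scores every option, the scores are combined with per-NPC weights, and the choice is sampled from the totals rather than taken by argmax so behaviour doesn't go flat. All names and numbers are illustrative assumptions.

```python
import random

def choose_reaction(options, factors, weights):
    """
    options: list of possible behaviours, e.g. ["greet", "ignore", "bow"]
    factors: dict of factor name -> {option: score in [0, 1]}
    weights: dict of factor name -> importance of that factor for this NPC
    """
    totals = []
    for option in options:
        score = sum(weights[f] * factors[f].get(option, 0.0) for f in factors)
        totals.append(max(score, 1e-6))
    # Sample proportionally to score instead of argmax, so contrasting factors
    # (e.g. social status vs religion) still produce varied behaviour.
    return random.choices(options, weights=totals, k=1)[0]

npc_weights = {"religion": 0.7, "social_status": 0.3}      # shaped by upbringing
factor_scores = {
    "religion":      {"greet": 0.2, "ignore": 0.8, "bow": 0.4},
    "social_status": {"greet": 0.9, "ignore": 0.1, "bow": 0.6},
}
print(choose_reaction(["greet", "ignore", "bow"], factor_scores, npc_weights))
```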

The diplomacy AI of Total War: Attila

by Csaba Toth, Creative Assembly

Campaign AI (CAI)

  • problems to solve: construction, units, etc.

Diplomacy AI

  • what deals would I accept? deal evaluation
  • what deals would I like to offer? deal generation
  • how do I negotiate it? deal negotiation

Deal evaluation

  • data-driven scoring logic (sketch after this list)
  • do I like this faction? -> stance value
  • would they make a strong ally/enemy? -> strategic value
  • what would others think if I signed this treaty? -> diplomatic value
  • how big is the benefit? -> economic value
  • strategic value might be influenced by current state of the game (multipliers applied and tweaked by designers)
  • also dynamic threat analysis for war and peace
  • easy to debug and to extend
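
A hedged sketch of the data-driven scoring above: a deal's score is a weighted sum of stance, strategic, diplomatic and economic value, with designer-tweaked multipliers that can depend on the current game state. All names and numbers are illustrative assumptions, not Creative Assembly's data.

```python
def evaluate_deal(component_values, multipliers):
    """component_values / multipliers: dicts keyed by 'stance', 'strategic',
    'diplomatic', 'economic' (the four questions listed above)."""
    breakdown = {name: multipliers.get(name, 1.0) * value
                 for name, value in component_values.items()}
    return sum(breakdown.values()), breakdown   # breakdown kept for easy debugging

def accepts(component_values, multipliers, threshold=0.0):
    score, _ = evaluate_deal(component_values, multipliers)
    return score >= threshold

# Example: a defensive alliance offered while we are at war, so the strategic
# component gets a designer-authored multiplier.
deal = {"stance": 2.0, "strategic": 5.0, "diplomatic": -1.0, "economic": 0.5}
wartime_multipliers = {"strategic": 1.5, "diplomatic": 0.8}
score, breakdown = evaluate_deal(deal, wartime_multipliers)
print(score, breakdown, accepts(deal, wartime_multipliers))
```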

Deal generation

  • objective: generate sensible diplomatic ideas
  • full list of possible diplomatic actions is too big
  • pre-filter list and generate plausible options
  • prioritise and evaluate list

Negotiation

  • AI has a list of diplomatic ideas
  • can counter-offer with own ideas
  • remembers failed offers and payment details (avoid presenting the same deal again)
  • random rule to avoid spamming the player
  • separate system to generate text for negotiations

Introducing AI personalities

  • each personality has different components and each component has different parameters
  • personality traits available to the player through UI
  • better to use very different values to make differences noticeable
  • use of personality groups and random selection from them to avoid the AI becoming predictable
  • each group has different personalities for different stages of the game

How to achieve final result

  • good programmers with good mentality
  • have a dedicated AI designer on the team: holds the vision, knows all AI systems, bridges between the design team and programmers, balances and tests these systems, helps with scheduling
  • listen to user feedback

Behaviour Trees and Blackboards in EVE Online

by Freyr Magnússon, CCP Games

Why new AI?

  • new NPCs (with new behaviours, etc.)
  • grow in the solar system
  • making changes to the old AI was becoming very hard and risky
  • old system had been around for 10+ years

Old AI

  • basically state machines
  • unclear ownership of the state

Approach for new system

  • start small
  • story driven
  • iterative development (later added more aggressive NPCs)
  • parallel development with existing system

Infrastructure

  • base entity (python wrapper around C++ engine object)
  • ~150 methods in base entity class
  • components
  • introduced entity wrapper
  • added behaviour entity
  • no longer single instance with all logic, but collection of loosely coupled components

Behaviour Trees

  • completely done in python and run on the server
  • event driven (tasks subscribe to event, then get suspended)
  • updated once a second
  • polling world state
  • task queue for running branch of the tree
  • problem: need to reset the branch state and pop up to a higher level
  • sometimes events shouldn't interrupt the running task; add a decorator to make the task non-interruptible
  • if there are too many resets, the event-driven benefit is diminished. Also need to be ready for tasks being interrupted
  • introduced monitors for monitoring tasks
  • it's the monitor that subscribes to events
  • keep monitoring after exit
  • remove monitor after cleanup
  • use of blackboards for agents' knowledge and to communicate between behaviours
  • a blackboard is a collection of message channels which you can subscribe to (sketch after this list)
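
A minimal Python sketch of the blackboard-as-message-channels idea above: a blackboard holds shared values plus named channels that monitors subscribe to on behalf of running tasks. Illustrative only, not CCP's server code; the channel name and monitor class are assumptions.

```python
from collections import defaultdict

class Blackboard:
    def __init__(self):
        self.values = {}                          # shared agent knowledge
        self.subscribers = defaultdict(list)      # channel name -> callbacks

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def unsubscribe(self, channel, callback):
        # Called by a monitor during cleanup, once its task has exited.
        self.subscribers[channel].remove(callback)

    def publish(self, channel, payload):
        self.values[channel] = payload
        for callback in list(self.subscribers[channel]):
            callback(payload)

class TargetMonitor:
    """Monitor that subscribes to events on behalf of a running task."""
    def __init__(self, blackboard, on_new_target):
        self.blackboard = blackboard
        self.on_new_target = on_new_target
        blackboard.subscribe("target_spotted", self._handle)

    def _handle(self, target):
        self.on_new_target(target)                # e.g. reset the running branch

    def remove(self):
        self.blackboard.unsubscribe("target_spotted", self._handle)

# Usage
bb = Blackboard()
monitor = TargetMonitor(bb, on_new_target=lambda t: print("re-plan against", t))
bb.publish("target_spotted", "hostile_frigate_42")
monitor.remove()
```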

Challenges

  • event handling
  • legacy systems

Next steps

  • parameterised authoring tools
  • more behaviour modules
  • larger scopes of cooperation
  • more clever about combat tactics

Building a Galaxy: procedural generation in No Man's Sky

by Hazel McKendrick, Hello Games

Project

  • 13 people, 7 programmers
  • own multi-platform engine in C++
  • run-time generation
  • fun over realism
  • artist directed
  • PCG is irrelevant to end user

Why PCG?

  • game world scale
  • small team
  • unexpected outputs
  • individual experiences

Structure

  • engine agnostic to content origin (able to change or mix generation techniques)
  • multi-layered generation
    • start with seed, then chain of different generators (solar system, planet, creature, plants, etc.)
  • generation data is an asset
  • be able to save/load (serialize for inspection and debugging)
  • manual editing

Generating a planet

  • cube sphere projection
  • sphere is bad for storing data (voxels), so they store in a cube
  • need to undergo series of re-projections to move between the two
  • memory for 1km^3 ~5.5 GB -> hardware limitation
  • LODs: overlapping areas around camera, increase region scale, reduce voxel density, non uniform
  • LOD structure: region octree

Region creation process

  • generation
  • polygonisation (dual contouring, and other techniques)
  • spherification
  • physics construction
  • AI knowledge construction
  • decoration

Generator requirements

  • directable and consistent
  • real-time
  • varied
  • easy to modify and add to
  • data local: methods only know about single voxel

Noise techniques

  • perlin/simplex noise
  • widely applicable and scalable, but repetitive and insufficient on its own (a good starting point)
  • noise fields: noise with a threshold gives paths, caves, lines, etc. (sketch after this list)
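
A sketch of the "noise with a threshold" item above: sample a smooth 3D field per voxel and carve out material where it crosses a threshold. The noise here is a cheap sum-of-sines stand-in for Perlin/simplex, purely for illustration; not Hello Games' generator.

```python
import math

def fake_noise(x, y, z):
    # Smooth, repeatable field roughly in [-1, 1]; replace with Perlin/simplex in practice.
    return (math.sin(0.9 * x + 1.3 * y) + math.sin(1.7 * y - 0.5 * z)
            + math.sin(0.8 * z + 2.1 * x)) / 3.0

def voxel_material(x, y, z, cave_threshold=0.55):
    density = 1.0 - 0.02 * y                  # simple "solid below, air above" base field
    carve = fake_noise(0.1 * x, 0.1 * y, 0.1 * z)
    if density <= 0.0 or carve > cave_threshold:
        return "air"                          # thresholded noise carves caves/tunnels
    return "rock"

# Example: sample a small column of voxels.
print([voxel_material(3, y, 7) for y in range(0, 60, 10)])
```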

Structured shapes

2D generation -> 3D generation (move beyond basic heightmap, add turbulence)

Voxel materials

  • density, sharpness
  • materials (air, base, rock, resource, etc.)
  • needed for polygonisation and shading
  • texture atlases, from voxel materials, slope textures
  • triplanar texturing: across the sphere, tight blending

Decoration

  • foliage, ecosystem
  • placement on the planet: terrain awareness, patterns
  • procedurally generated animations (i.e. bending trees)

In Game

  • component system: give context to objects (i.e. static, interactable, etc.)
  • everything done live: imposters, BTs, no baked lighting

Early Churn Prediction and Personalised Interventions in TOP11

by Miloš Milošević, Nordeus

  • machine learning algorithms to determine if a player will churn
  • intervention must happen at the right time, otherwise it is useless
  • features: registration, activity, game economy, clicks, game specifics
  • find those that correlate with churn

Models

  • support vector machines
  • logistic regression - easier to maintain and better for performance (sketch after this list)
  • gradient boosting trees
  • recurrent neural network
  • decision trees
  • random forest
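
A minimal churn-model sketch using logistic regression (called out above as the easiest to maintain). The feature columns are assumptions standing in for the registration/activity/economy features listed; not Nordeus' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [days_since_registration, sessions_last_week, in_game_currency_spent, clicks]
X = np.array([
    [2, 14, 500, 340],
    [30, 1, 0, 12],
    [5, 9, 120, 150],
    [45, 0, 0, 3],
    [10, 6, 60, 90],
    [60, 2, 10, 20],
])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = churned in the following weeks

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

new_player = np.array([[7, 3, 20, 40]])
churn_probability = model.predict_proba(new_player)[0, 1]
if churn_probability > 0.5:
    print("schedule a targeted intervention notification")
```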

How to intervene

  • look at the most used feature by the user to provide useful intervention messages

Measuring results

  • split users into 3 groups: control (no message), base (generic notification), test (targeted notification)
  • best to use click-through rates
  • base group achieved 30% better retention
  • test group achieved 40% better retention than baseline

Not just planning: STRIPS for Ambient NPC Interactions in Final Fantasy XV

by Hendrik Skubch, Square Enix

  • Goal: convey culture by filling the world with life
  • Emphasize differences between places

Problem with existing solution

  • smart objects: no relationship between objects
  • FSM or BTs: geared towards single agent behaviour
  • how to express causal and temporal relationships
  • tried to add gates to BTs to control, for example, conversations
  • still no relationship between agents (i.e. who am I talking to)

Goals of the system

  • model interaction in a simple way
  • coordination via first-order language elements
  • multi-agent perspective
  • reusability

Approach

  • equip smart objects with knowledge about multiple assets
  • communication between elements
  • put the smartness into a single object "zone"
  • STRIPS

Tuple space

  • a type of blackboard
  • a set of tuples that offer query and manipulation API

The Language

  • a list of STRIPS rules operating on a Tuple Space

Example

  • table with three chairs
  • two characters sit down and start chatting
  • a third character arrives, sits down and waits for his turn to speak

Role allocation

  • adding a waiter
  • role allocation is NP-hard, but in-game a greedy approach is fine
  • each iteration allocate role until satisfied or failure
  • satisfy task with the least remaining options
  • randomise the output; on failure, try again later (sketch after this list)
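
A sketch of the greedy role allocation above: repeatedly satisfy the role with the fewest remaining candidates, choose randomly among them, and give up (to retry later) if a role can't be filled. Illustrative, not Square Enix's implementation.

```python
import random

def allocate_roles(roles, candidates_for):
    """
    roles: list of role names, e.g. ["speaker_a", "speaker_b", "waiter"]
    candidates_for: dict role -> set of agent ids able to take that role
    Returns dict role -> agent id, or None on failure (caller retries later).
    """
    assignment, taken = {}, set()
    remaining = list(roles)
    while remaining:
        # Satisfy the task with the least remaining options first.
        remaining.sort(key=lambda r: len(candidates_for[r] - taken))
        role = remaining.pop(0)
        options = list(candidates_for[role] - taken)
        if not options:
            return None                      # failure: try again on a later tick
        agent = random.choice(options)       # randomised output, as in the talk
        assignment[role] = agent
        taken.add(agent)
    return assignment

print(allocate_roles(
    ["speaker_a", "speaker_b", "waiter"],
    {"speaker_a": {1, 2, 3}, "speaker_b": {2, 3}, "waiter": {3, 4}},
))
```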

Introduction

IK Rig: Procedural Pose Animation

by Alexander Bereznyak, Ubisoft Toronto

Objective: create cool stuff with technology that hasn't been done before

Motion fields

IK Rig: same animation can be played on different rigs, doing more with Mocap

Pipeline:

  • Convert source: mocap actor rig with all bones and animation; convert source data into IK chain proxy format
  • Adjust: change behaviour based on art inputs and those coming from the engine
  • Apply to target: apply the new motion to the source rig or anything else

Don't need to animate all bones (need only ~30)

Any rig can play any animation

Realtime constraints

  • added in engine
  • act at runtime
  • physics friendly
  • can be updated at any time

Most of the talk was examples of animations: https://www.youtube.com/watch?v=V4TQSeUpH3Q

Spawn Trees - Inside the Witcher 3 Communities

by Michał Słapa, CD Projekt RED

Single system that controls all the spawning

Open world community system that must be suitable for different scenarios (big city, forests, haunted swamp) and also for environmental storytelling

Huge world to populate: day-night cycle, reactive communities which interact with the player

Spawn community, control it, script it all in one place

Must integrate with AI to control NPCs

Must integrate with quest and gameplay systems

Witcher 2 system was designed for specific cases (towns and villages). Rigid structure, hard to modify, maintain and extend

Spawn tree

  • decision tree
  • volume based
  • logic processed at the branches (nodes)
  • leaves do the spawning (entries)
  • entries can be decorated with initializers (sketch after this list)
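
A minimal sketch of the spawn-tree structure above: branch nodes hold the logic, leaf entries spawn a creature definition, and initializers decorate entries to modify what gets spawned. An assumed re-creation for illustration, not CD Projekt RED's system.

```python
class Branch:
    """Branch node: logic lives here; gathers spawn results from its children."""
    def __init__(self, children):
        self.children = children
    def spawn(self, context):
        spawned = []
        for child in self.children:
            spawned.extend(child.spawn(context))
        return spawned

class Condition:
    """Conditional node: only spawns its subtree when the predicate holds."""
    def __init__(self, predicate, child):
        self.predicate, self.child = predicate, child
    def spawn(self, context):
        return self.child.spawn(context) if self.predicate(context) else []

class Entry:
    """Leaf entry: spawns a creature definition, decorated by initializers."""
    def __init__(self, creature_definition, count, initializers=()):
        self.creature_definition, self.count = creature_definition, count
        self.initializers = initializers
    def spawn(self, context):
        npcs = [{"definition": self.creature_definition} for _ in range(self.count)]
        for init in self.initializers:          # initializers modify spawned entities
            for npc in npcs:
                init(npc, context)
        return npcs

# Example: villagers during the day, guards at night.
tree = Branch([
    Condition(lambda ctx: ctx["is_day"],
              Entry("villager", 5, initializers=[lambda npc, ctx: npc.update(behaviour="work_idle")])),
    Condition(lambda ctx: not ctx["is_day"], Entry("guard", 2)),
])
print(tree.spawn({"is_day": True}))
```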

Tree structure

  • selector-like nodes
  • conditional
  • metanodes (i.e. subtree)

Entries

  • responsible for spawning a given creature definition
  • kept clean: just a few most general common properties

Initializers

  • modify spawn mechanism
  • modify spawned entities
  • also applied at higher tree levels for all entries of a given branch
  • extendable in scripts

AI integration

  • special node to inject idle behaviour
  • initializer could trigger higher level behaviours

Crowd control

  • initializers may modify an NPC when it is reattached at runtime

Spawning logic

  • range and visibility conditions
  • detached list is monitored to despawn entities
  • pooling used for bigger scenarios (e.g. a city)

Resource sharing

  • use of subtrees, can include trees within each other (reusability)
  • how to modify included tree? might lead to redundant data

Pros

  • scalable, extendable and customizable
  • connects with other systems (AI, quest, etc.)
  • resource reusing and iterative development

Cons

  • redundancy
  • flat structure

Solution

  • resource parameterization
  • extend upon resource sharing

Exploring the Relationship Between Gameplay and Animation

by Christopher Laubach, Riot Games

Role of animation

  • Character personality
  • Player Satisfaction

Constraints

  • small team
  • low-end spec resources
  • blend quickly into new spells (important for gameplay responsiveness)

If you have the opportunity to upgrade your animation system, do it!

Cascade blend: moving time window to blend over chain of joints

Intentional agreement between animation and gameplay

Turning a Chatbot into a Narrative Game: Language Interaction in Event[0]

by Sergey Mohov, Tequila Works

Read about the game concept at http://event0game.com

Build trust with the computer AI through chat

It's not "pure" AI, it tries to appear intelligent

How it works

  • emotion matrix
  • event-based context
  • input and output tags (dictionary with words classified by meaning)
  • semantic pattern compiler (tag-matching sketch after this list)
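
A tiny sketch of the tag-dictionary idea above: classify input words into tags, combine them with an emotion value, and pick a reply by tag. Purely illustrative of the "appears intelligent" approach; the tags, replies and emotion field are assumptions, not Event[0]'s compiler.

```python
INPUT_TAGS = {                      # dictionary with words classified by meaning
    "greeting": {"hello", "hi", "hey"},
    "door": {"door", "hatch", "airlock"},
    "please": {"please", "kindly"},
}
REPLIES = {
    ("door", "polite"): "Of course. Opening the airlock for you.",
    ("door", "annoyed"): "Why should I?",
    ("greeting", "polite"): "Hello! It is nice to have company.",
}

def classify(text):
    words = set(text.lower().split())
    return {tag for tag, vocab in INPUT_TAGS.items() if words & vocab}

def respond(text, emotion):
    tags = classify(text)
    mood = "polite" if emotion["trust"] > 0.5 or "please" in tags else "annoyed"
    for tag in tags:
        if (tag, mood) in REPLIES:
            return REPLIES[(tag, mood)]
    return "I am not sure what you mean."

print(respond("Please open the door", {"trust": 0.2}))
```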

Motion fields: Road to Next-Gen Animation

by Michael Büttner, Ubisoft Toronto

  • Start with set of clips
  • Arrange them in blend trees and state machines
  • cascading "pose - trajectory - event"
  • phase matching: blending different-length animations (time stretching)
  • blends and transitions need to be synced (insert markers)

Parametric blending

  • arrange family of animations based on parameters
  • doesn't address weight shift

State machines

  • blend trees mixed with state machines
  • states encapsulate distinct types of movement, constraints, etc.

Hierarchical blend trees

  • locomotion
  • strafe
  • slope

Problem statement

  • human movement != loops
  • reduce complexity
  • improve quality

New approach

  • Let's start with raw mocap data
  • Invert the problem: select clip based on desired trajectory (match trajectory curves)
  • measure error between trajectories
  • pose: joint transforms including a trajectory section
  • length of trajectory is our "planning horizon"
  • game code knows where the character should be, not how
  • it's possible to change the velocity at every integration step
  • approximation of acceleration or deceleration

How to find best matching pose

  • crazy idea? loop over all available poses and find the closest one
  • it would always select only the first frame of the animation
  • find best matching pose + best matching trajectory
  • only consider relevant joints depending on the movement (feet for walking, hand for climbing, etc.)
  • also need to match against rate of change (velocity)
  • also need to consider past matches
  • when matching poses, use lambda values to control the quality vs responsiveness ratio (brute-force sketch after this list)
  • system data flow looks like vertex/pixel shaders (most values are constant while evaluating poses)
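
A brute-force version of the pose search above (the kd-tree acceleration comes later in these notes): the cost mixes pose distance, velocity distance and a lambda-weighted trajectory distance, with a small bias towards continuing the current clip. The array layout is an assumption, not Ubisoft's data format.

```python
import numpy as np

def find_best_pose(db, current_pose, current_velocity, desired_trajectory,
                   current_index, lam=0.5, continue_bias=0.1):
    """db: dict of arrays with one row per candidate frame: 'pose' and 'velocity'
    hold only the relevant joints (feet for walking, hands for climbing),
    'trajectory' holds that frame's sub-sampled future trajectory."""
    pose_cost = np.linalg.norm(db["pose"] - current_pose, axis=1)
    velocity_cost = np.linalg.norm(db["velocity"] - current_velocity, axis=1)   # rate of change
    trajectory_cost = np.linalg.norm(db["trajectory"] - desired_trajectory, axis=1)
    cost = pose_cost + velocity_cost + lam * trajectory_cost  # lambda: quality vs responsiveness
    # "Consider past matches": bias towards simply continuing the current clip,
    # otherwise the search keeps jumping between near-identical frames.
    next_index = min(current_index + 1, len(cost) - 1)
    cost[next_index] -= continue_bias
    return int(np.argmin(cost))

# Example with random data: 1000 candidate frames, 8 relevant joints, 4 trajectory samples.
rng = np.random.default_rng(0)
db = {"pose": rng.random((1000, 24)), "velocity": rng.random((1000, 24)),
      "trajectory": rng.random((1000, 12))}
print(find_best_pose(db, rng.random(24), rng.random(24), rng.random(12), current_index=42))
```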

Optimize search (minimization)

  • online learning: remember winning candidate, problem is that you have to retrain the model when changes are made
  • use kNN (clustering)
  • can't build connectivity graph (too much memory)
  • best variable to find neighbours is trajectory
  • can't store full trajectory, sparse subsampling of the trajectory value (position and linear velocity)

Attempts

  • multi-dimensional scaling
  • kd-Trees work well, used for the final solution (sketch below)
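
A sketch of the kd-tree acceleration above: build a tree over the sparse trajectory features (sub-sampled positions and linear velocities) and only run the full pose cost on the k nearest candidates. The feature layout is assumed for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

# One row per animation frame: sparse trajectory feature, here assumed to be
# 2 future samples x (3D position + 3D velocity) = 12 floats.
trajectory_features = np.random.default_rng(1).random((20000, 12))
tree = cKDTree(trajectory_features)

def candidate_frames(desired_trajectory_feature, k=32):
    """Return the k frames whose stored trajectory best matches the desired one;
    the full pose cost is then evaluated only on these candidates."""
    _, indices = tree.query(desired_trajectory_feature, k=k)
    return indices

print(candidate_frames(np.random.default_rng(2).random(12))[:5])
```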

Epic AI Systems and Director in Fortnite

Requirements

  • client/server game
  • server runs almost all AI
  • use C++ over scripting
  • profile AI
  • levels procedurally generated and combined at load time using templates
  • allow to extend AI systems for future content

Procedural navigation

  • dynamic method of navigation
  • environments change frequently
  • navmesh (only on server) and navgraph
  • use flow fields to verify connectivity

Environmental query system (EQS)

  • collects domain knowledge
  • reasoning in a continuous space requires discrete sampling points (illusion of intelligence GDC talk)
  • find "interesting" points in real time, perform tests on the points and return a score for each (sketch after this list)
  • used to find goals, points to attack the player, spawn points for AI, etc.
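
A plain-Python sketch of the EQS flow above: a generator produces candidate points, weighted tests score each point, and the best-scoring points are returned. This re-creates the idea only; it is not Epic's UE4 EnvQuery API.

```python
import math

def run_query(generator, tests, top_n=5):
    """generator: () -> list of points; tests: list of (weight, point -> score in [0, 1])."""
    scored = []
    for point in generator():
        score = sum(w * test(point) for w, test in tests)
        scored.append((score, point))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for _, p in scored[:top_n]]

# Example: ring of points around the AI, preferring points far from the player
# but close to the AI's current position.
ai_pos, player_pos = (0.0, 0.0), (10.0, 0.0)
ring = lambda: [(5 * math.cos(a / 16 * 2 * math.pi), 5 * math.sin(a / 16 * 2 * math.pi))
                for a in range(16)]
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
tests = [
    (1.0, lambda p: min(dist(p, player_pos) / 20.0, 1.0)),   # keep distance from the player
    (0.5, lambda p: 1.0 - min(dist(p, ai_pos) / 10.0, 1.0)), # stay near the current position
]
print(run_query(ring, tests))
```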

Behaviour Trees

  • all AI uses BTs

Gameplay System Tools

  • real-time info about the game, world state, etc. used for debugging
  • visual logger (historical data about the state of the game and decisions)
  • functional testing: maintain functionality while making functionality changes

AI LOD

  • need to minimize load on servers
  • reduce network traffic, system updates, etc.

Gameplay Abilities

  • used by content creators to implement gameplay logic without programmers
  • actors can have stats or effects on them, can execute actions
  • abilities send messages to other abilities
  • heavy use of gameplay tags for context
  • AI needs to decide when to use abilities; tests must be done as fast as possible
  • AI needs to handle interruption gracefully

Goal Manager

  • provides AI with high level tactical decisions
  • goals drive AI behaviour in BT
  • any actor can be a target

AI Director

  • originally used pre-defined waves: too rigid, lot of work
  • create game pacing
  • analyse player performance
  • frequency of engagement
  • scale pacing but not difficulty
  • intensity is configurable via curves that can be chained together
  • balancing: historical analysis using visual logger, analyse playtesting

Inside Watson: How IBM is making computers understand language

by @dalelane, IBM

  • Difference from search: search matches words, it doesn't understand the question
  • NIST, TREC (text retrieval conference)