Musings upon turning 38. AI, Technological Advancement, And What I’ve Learned In My Career So Far.

I’m turning 38 today, which somehow seems both old and young to me.  How can I only be 38?  I feel much older.  And on the other hand… I still feel like a little kid, and wonder about all these white hairs.

Why I Stopped Writing These Types Of Journals

There’s a lot of personal navel-gazing I could do, but then this would get even more long-winded than I typically am.  But I do notice that I really have not done any personal blogging in the last decade or so.  When I was new in my career as a game developer, I felt like I had lots to say.  So why did all that suddenly stop?

Partly I got busy.  Partly I was very stressed out and very unhappy for a lot of those years.  Some of that was related to work, some of that was being in an unhappy marriage but being unwilling to admit it.  Some of it was… no longer feeling like the answers to things are quite so obvious.  Or at least not easy to explain.

My Older Journal-Style Writings

My original writings on the nature of a superior AI got slashdotted, made the front page of reddit, were top on hacker news, and so on back in 2009.  I was really proud of that.  People read my work, felt like they understood what I was saying even if they didn’t work in the field, and came away feeling satisfied and edified.

The problem is, a lot of it was pretty incorrect.  There WERE novel things in there, don’t get me wrong.  But the actual alchemy of what made the AI in the original AI War: Fleet Command really good was not something I myself had a good bead on.  You’d think I would have understood it, but I did not.

Where I Got Things Right

It’s true that I took a novel approach with the AI, and it’s true that it was highly performant code and gave really good results.  And a lot of the design maxims that I came up with are ones I still use:

  • Don’t try to pick truly optimal solutions or things become predictable.  (A minimal sketch of this one follows the list.)
  • Design your gameplay so that it is AI-friendly in the first place.
  • Have a lot of verbs for AIs to choose from.
  • Have a lot of agents so that players can’t follow exactly what the “group” is doing as a whole.
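
To make that first maxim concrete, here’s a minimal sketch in C++ of the general idea.  This is my own invented illustration, not actual AI War code: score your candidate actions, then draw randomly from the top few, weighted by score, so the best option is favored but never guaranteed.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Invented illustration, not Arcen code: a candidate action ("verb")
// with a score from whatever evaluation the AI already ran.
struct Option { std::string verb; double score; };

// Pick randomly among the top few options, weighted by score, instead of
// always taking the single best.  Predictability drops; quality barely does.
// Assumes at least one option, with non-negative scores.
Option pickSlightlySuboptimal(std::vector<Option> options, std::mt19937& rng) {
    std::sort(options.begin(), options.end(),
              [](const Option& a, const Option& b) { return a.score > b.score; });
    const std::size_t contenders = std::min<std::size_t>(3, options.size());

    std::vector<double> weights;
    for (std::size_t i = 0; i < contenders; ++i)
        weights.push_back(options[i].score);
    std::discrete_distribution<std::size_t> draw(weights.begin(), weights.end());
    return options[draw(rng)];
}
```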

But none of these were at the heart of what I described in 2009, and in fact I was overly focused on my particular methodology.

A Hybrid Approach Surpassed The Original

From 2010 to 2014, Keith LaMothe methodically went through the original AI War and upgraded large portions of the AI code to be more “traditional” (that is, loosely based on decision trees and the like, rather than flocking), and his results, blended with mine, yielded a superior whole.  When you think about it, that makes sense: blending two very different techniques should give more variability and more interesting results.

But Architectural Adjustments Surpass That

Then in 2015, I came up with a radical new approach for how the AIs should think in Stars Beyond Reach.  That game was never released, but it worked a lot like Civilization at a basic level.  Essentially each AI would be simple, and would run on a background thread.  It would make whatever decisions it could, blind to all the other AIs, and then they would all turn their results in at once.

I got the idea from bluffing/trick-taking card games, of which I am fond.  In those games, my family and I sit around a table, each wondering what the others are going to do, and we all put down a card that nobody else can see.  When everyone is ready, we turn over our cards and find out, all together, who read the table and their hand and the other players the best.

It turns out that when you have AIs do the same thing, they’re able to run really quickly and come up with results that are quite good.  When there are conflicts, like two units trying to move into the same space (which would be invalid), a coin toss decides who wins.
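
As a rough illustration of that turn structure, here’s a C++ sketch with invented names; it isn’t the actual Stars Beyond Reach code.  Each AI decides on its own background thread, blind to the others, everyone “reveals” at once, and when two moves claim the same space, a coin toss settles it.

```cpp
#include <future>
#include <map>
#include <random>
#include <vector>

// Invented illustration: one AI's chosen move for the turn.
struct Move { int aiId; int targetCell; };

// Each AI reasons only from its own view of the game state; the real
// decision logic is omitted here (we just claim an arbitrary cell).
Move decideBlind(int aiId) {
    return Move{aiId, aiId % 4};
}

std::vector<Move> runTurn(int aiCount, std::mt19937& rng) {
    // "Everyone puts a card face down": launch each AI on its own thread.
    std::vector<std::future<Move>> pending;
    for (int id = 0; id < aiCount; ++id)
        pending.push_back(std::async(std::launch::async, decideBlind, id));

    // "Turn the cards over": collect every decision at once, and resolve
    // collisions (two moves into one space) with a coin toss.
    std::map<int, Move> claimed;  // targetCell -> winning move
    for (auto& f : pending) {
        Move m = f.get();
        auto it = claimed.find(m.targetCell);
        if (it == claimed.end())
            claimed.emplace(m.targetCell, m);
        else if (std::bernoulli_distribution(0.5)(rng))
            it->second = m;
    }

    std::vector<Move> resolved;
    for (const auto& entry : claimed) resolved.push_back(entry.second);
    return resolved;
}
```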

The end result was that we were able to run something like 14 AIs in parallel, and have the between-turn processing complete in a fraction of a second where Civilization would groan and stutter for several seconds.  Granted, our simulation was doing less work than Civilization’s, but the savings would have remained if we had continued to build it out.  The approach is just fundamentally better, given modern hardware.

It’s worth noting that, beyond that original idea, I did none of the coding on that.

Back To AI War, Via AI War 2

When we decided to embark on AI War 2 in 2016, we ported over the new AI handling from Stars Beyond Reach.  The old general AI approach from AI War Classic was out, and Keith largely built a completely new one that was more traditional and didn’t have any real flocking going on.

Okay, there were a FEW flocking things, but those were largely for the individual ship combat in battles.  Not large strategic decisions.  And the result?  On par with AI War Classic, at least.

Then AI War 2 Evolved

Then Badger came along, and blew both Keith and me out of the water over the course of a couple of years.  Realizing that he had a LOT of time on the CPU with which to do calculations, and that he could store all sorts of working data about sub-groups and similar (fire teams, for example), he coded it to work based on… what I would describe as fairly traditional methods.

Seeing these things be so successful takes a lot of my notions about AI and kind of throws them out the window.  Badger’s code is great, and his strategies for the AI are clever, and in my opinion there is not a better AI opponent out there than what AI War 2 currently offers with its various NPC factions.

But here’s the thing: the actual code itself doesn’t look like something an alien wrote, or some inscrutable neural network that we can’t comprehend.  It looks like very well-organized AI code that you can find in most any game.  This is not a grand reinvention of AI as a concept, in other words.  If anything, compared to my supposed “grand reinvention” in 2009, this is a regression.  But it’s indisputably better.  So what gives?

It All Comes Down To Architecture

This is going to sound like I’m patting myself on the back and taking credit away from Badger, so I think it’s worth noting that I didn’t code any of the actual AI in AI War 2.  Or at least very little of it.  My brain doesn’t easily follow the sort of design patterns that Badger wrote, and I don’t tend to think of AI in the style that he implemented it.  So I’m not sure that I could have pulled off what he and Keith accomplished in the actual AI code.  That’s worth stating up front.

I might be able to now, with their example code already sitting there right in front of me.  But coming up with it out of whole cloth?  I’m not so sure.

It’s also worth noting that now we have NR SirLimbo coming in and making very complicated mods with notable AI, and he’s leaning on a lot of what Badger and Keith created, but also doing brand-new things all his own.

It’s ALSO worth noting that StarKelp, who is a relative novice as a programmer, came in and made a really fun and convincing faction — Civilian Industries — based around fairly simple rules and just all-around solid design.  He used the tools that were there, did not invent anything remotely new from a technical standpoint, but made one of the most fun factions by just thinking about how he wanted it to work.

So… what the heck?  Why is this sort of thing possible?

Four words: having time to think.

By which I mean the developers/modders, sure, but mainly the AIs themselves.

Technical Revolutions In General

It feels like computing has not really been all that exciting in the last decade, at least not compared to the decade before it.  And certainly not the decade before that.  But I would argue that the advances in computing in the past decade are just as significant, yet widely underappreciated and sometimes underused.

Let’s talk about the original DOOM as an example.  That was a first-person shooter drawn entirely in software.  There was no dedicated GPU.  So all of the calculations for drawing the scene on the screen had to take up CPU time, and had to run FAST.  There was only so much complexity that approach could ever render in an environment.

As we moved into the era of discrete GPUs, a whole array of new things became possible.  And it was very noticeable, because graphics are the easiest thing to see (obviously).  As the shader pipeline became a thing in the 2000s, suddenly we could spin off all these little mini-programs to hundreds or thousands of cores on a GPU, each one saying “take this vertex data and draw it like this.”  Later, the programs (shaders) got vastly more sophisticated, and a new era of Physically Based Rendering (PBR) was born.

So now there’s all this incredible art in tons of 3D games, and it can look pretty close to photo-real, or it can look intentionally anime-like.  You can learn how to make basic shaders on your own, and you can beat the absolute pants off the best graphic artists from 20 years ago.  The old team working on the original Unreal has nothing on the water physics coded by random students and hobbyists around the world today.  But… that’s not remotely taking anything away from the graphic artists of 20 years ago (or it shouldn’t be).

The bottom line is that now when someone goes to make a game, any game, they’re standing on the shoulders of decades of work by technical artists, technical art programmers, chip designers, and more in order to do even the most basic things.  So we have all that architecture to our advantage, all of us, and there’s a whole heck of a lot of things we just don’t have to think about anymore.  They “just work,” and so we can think about the actual content.  Sound familiar, compared to what I described with AI War 2?

The Multi-Core Revolution

I follow computing closely.  I like knowing how things work, I like building computers, I like seeing all the different layers of compiled code.  Yet somehow I completely missed just how significant the multi-core revolution was.  I stumbled into it over the past few years, and really only in the last year has it dawned on me just how critical it has been.

Here’s the thing: when you’ve got a CPU that can think about a lot of complex things, and where “a lot” of data in the form of several MB of information is trivial in the scope of RAM… you’re in a whole new world.  Calculations for AIs can run on something approaching human-level timescales, and algorithms can come to better decisions than humans would.

One area of the game that I did work on is targeting logic.  A volunteer/modder, WeaponMaster, also contributed heavily to this area.  This was one of the heaviest parts of the original AI War simulation, but I chose to break it out into its own background thread.  More specifically, I chose to have it NOT be part of the simulation at all.  Rather, it does its thinking, and later communicates its results back to the simulation to be integrated at the simulation’s convenience.

This means that instead of having 1-5 milliseconds to do the complex targeting for all the ships in a giant battle, we have… you know, whatever, I guess.  Take a few seconds if you really need it.  Maybe hand the data back to us in batches every few dozen milliseconds?
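
To show the shape of that, here’s a bare-bones C++ sketch with invented names; it’s not the real AI War 2 targeting code.  The worker lives entirely outside the simulation, thinks for as long as it needs, and posts finished batches to a queue that the sim drains whenever convenient.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

// Invented illustration: one ship-to-target pairing produced off-thread.
struct Assignment { int shipId; int targetId; };

class TargetingWorker {
public:
    void start() { worker_ = std::thread([this] { loop(); }); }
    void stop()  { running_ = false; if (worker_.joinable()) worker_.join(); }

    // Called from the sim thread between frames: integrate whatever has
    // arrived so far, without ever blocking on the worker.
    std::vector<Assignment> drainResults() {
        std::lock_guard<std::mutex> lock(mutex_);
        return std::exchange(ready_, {});
    }

private:
    void loop() {
        while (running_) {
            // Free to take seconds if needed: weigh ships against targets
            // using positions, ranges, bonuses, priorities, etc. (omitted).
            std::vector<Assignment> batch = computeExpensiveTargeting();
            {
                std::lock_guard<std::mutex> lock(mutex_);
                ready_.insert(ready_.end(), batch.begin(), batch.end());
            }
            // Hand results back every few dozen milliseconds, not every frame.
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    }
    std::vector<Assignment> computeExpensiveTargeting() { return {}; }

    std::vector<Assignment> ready_;
    std::mutex mutex_;
    std::thread worker_;
    std::atomic<bool> running_{true};
};
```

The key property is that the simulation never waits: it calls drainResults() when convenient and applies whatever showed up, and an empty batch just means the worker is still thinking.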

The sheer freedom that gives us is something I can’t overstate.  We’re talking about two, three, or even four orders of magnitude more time to think about things if we need it.  As a result?  The ship AIs are able to make EXTREMELY good decisions in the heat of battle.  Better than any human.  You think you can micro better than the ships in AI War 2?  Please.  Unless we have a particular oversight in our algorithm somewhere, the machine figured out a better solution in roughly the time it took you to understand the basics of what you’re even looking at.

There will always be edge cases, places where humans can come in and make a better choice by hand, and I actually really like that.  But there’s definitely not a Starcraft-like urgency for you to give every order to all your moron units, because your units are instead largely as smart as you within the limited scope of what they do.  So if you’re super-fast at clicking… well, there just aren’t that many corrections for you to make.  If there are that many corrections, you’re better off giving us a suggestion (or looking at the open source code yourself) and we’ll just make the algorithm better again.

None of this was possible in the original AI War, and it isn’t possible in most games in general.  You can’t take that much complex data (positions, bonuses, priorities, ranges, multiple weapon systems, etc) all into one algorithm and spit out a good result in a reasonable amount of time.  So we changed the definition of what “reasonable” means.  And that was made possible by having strong multi-core machines basically be ubiquitous.

The same is true for all of the fire team logic and similar that Badger wrote, or even the simpler logic for StarKelp’s civilian industries.  In any other game I’ve ever worked on, Stars Beyond Reach aside, we would always have been thinking “do we have enough CPU time available for this?”  And if we had a novice programmer joining us, and they made some slightly less optimal choices, we would have been royally annoyed because it was making things worse for everyone.

Oh, hey.  That’s not a thing anymore, either.  If a novice’s mod code runs 20% slower for some reason, mainly because they don’t have a deep multi-decade fascination with the internals of computing and the programming languages we use, then… no harm, no foul.  That’s all on its own thread, and isn’t blocking anything else, so the game is completely unaffected.

This is, frankly, revolutionary.  Most games have not caught on to this yet.  Maybe I shouldn’t mention it, and keep a competitive advantage.  But how you structure your AI code, and your simulation code as a whole, is essentially as important as what the actual AI code is, these days.  This structure, and optimizing it and refining it, is where I’ve spent the vast majority of my time over the past few years.

Not All Multi-Threading Is Equal

Right now we’re still a bit in the wild west of multithreading.  It reminds me of the “Web 2.0” days of the Internet, or the pre-PBR generation of shader programming.  There are not codified best practices for games as a whole, and any libraries that are out there (and there are a lot) are usually pretty task-specific.  We have not yet reached a state of general-purpose multi-core processing for games AI and simulations in the way that we have for game physics, game audio, or game graphics.

And I’m cool with that.  What I find most interesting is that I’m not sure how much people are even paying attention to this as a goal to achieve.  It’s certainly not on the list of any engine developers, near as I can tell.  They have indeed started making strides to make a lot of things use more cores in general, but they’re still very… task-oriented.  A lot of them follow my 2009 style of thinking, with a focus on individual agents (because in the case of these systems now, that actually is much simpler to do).

What I’m not seeing is a lot of large-scale AIs or multi-threaded simulations being developed where individual parts of the simulation or AI are allowed to run for seconds at a time before their data is reintegrated.  Being able to do that is like a superpower.

I Like The Wild West

When I originally started writing this post, I was going to talk about how I’m grateful for all the various wild west periods I’ve been able to participate in.  The early days of indie game development were a big one.  I was feeling a bit sad about how some of those wild wests have instead become populous, post-gold-rush settled civilizations.

I was thinking that, eventually, there will be no more frontiers to discover, and that’s a pretty big bummer.  But rather than feeling bummed out about it, I was feeling grateful to live in the time that I do now.  I was feeling a bit bad for people 100 years from now, who will have so many things mapped out for them, and so few constraints on resources, that they won’t get to innovate in certain ways that I’ve been fortunate to be able to.

But as I started to write, and as I compared my 2009 self to my 2020 self — and in particular as I thought about computer graphics and how those have changed from 1998 to 2020 — my perspective shifted.

I can make really awesome 3D scenes, on my own, these days.  I can do character art.  I can do my own motion capture, from my own body and face, in my home, for males and females and creatures.  These technological advancements give me the tools I need to make really interactive and believable cutscenes, if I were so inclined (I am inclined, I just don’t have time in my schedule).  Ten years ago, any of that would have been impossible, and I don’t feel sorry for myself now that I have this new power.  If I am to work on a cutscene, now I get to focus on the actual content, and not the technology or minutiae of it.

AI, sooner or later, is going to head that same direction.  Same with game simulations for games like AI War 2.  Right now I get to live on the bleeding edge and help do things that nobody else can do, like those developers for Unreal 20 years ago.  But 20 years from now, a novice just poking around at game development for fun will be able to casually craft something far more involved than anything I, Keith, or Badger can make in this moment.

Thinking of it in those terms, I’m okay with that.  There are a ton of games that I’d like to create, but that I can’t because they’d be too expensive (and thus too risky) to make.  I really do love being able to push the limits of technology, and there will probably always be some area in which I can do that.  But that won’t always be so directly coupled to the game itself.

A game like Stardew Valley offers nothing new on the technological stage, but is a revolution in design and personality and just plain fun.  Even with all our modern tools, it still took one developer a really long time to make that game.  This was someone focused entirely on the content and the artistry, not someone bogged down in the details of how to push the bleeding edge of technology. 

In my own way, in my own areas, that’s the sort of thing I also look forward to being able to do. 2021 is going to have a lot of that, I think.  I’ll still be slaying technical demons and pushing the edge of technology in still other areas, because that’s just plain one of my interests, but I won’t HAVE to in the way I was forced to for much of the last decade.  That’s a welcome thought.