AI Learns To Play Super Mario Kart On SNES | Retro Gaming News 24/7
(www.retronews.com)
The "learning" isn't the same kind of learning humans do. There's no abstraction or meta layer, only whether a sequence of inputs achieved an outcome a human deemed successful. Programs like these interact with the game essentially one static screenshot at a time. For any given configuration, the input most likely to lead to success (based on prior experience in the form of training) is reinforced so it becomes more likely, a bit like training a dog. Except a dog knows what a ball is.
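To make that concrete, here's a toy sketch of the per-screenshot reinforcement described above: each distinct screen is a "state", and whichever input led to success gets its preference nudged up. The state and action names are made up for illustration, not taken from any real emulator or RL library.

```python
import math
import random

ACTIONS = ("left", "right", "accelerate")

def update(policy, state, action, reward, lr=0.1):
    """Reinforce (or weaken) `action` in `state` by the observed reward."""
    prefs = policy.setdefault(state, {a: 0.0 for a in ACTIONS})
    prefs[action] += lr * reward

def pick(policy, state):
    """Sample an action with probability proportional to exp(preference)."""
    prefs = policy.setdefault(state, {a: 0.0 for a in ACTIONS})
    weights = [math.exp(p) for p in prefs.values()]
    return random.choices(list(prefs), weights=weights)[0]

policy = {}
for _ in range(50):                      # pretend 50 trials on the same screen
    update(policy, "screen_1", "right", reward=1.0)   # moving right worked
    update(policy, "screen_1", "left", reward=-1.0)   # moving left crashed
```

After those trials, `pick(policy, "screen_1")` returns "right" almost every time, but the table knows nothing about what a kart or a track is, which is the whole point of the comment above.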
This is similar to how Google's Go models (AlphaGo) worked. For any given configuration, a set of probabilities is generated from the weights in the model, which come from the training (the initial values are arbitrary). The main difference is that Google could simulate zillions of AI-vs-AI games at high speed. Anything with a live stream attached is mainly for entertainment value and subscriber count; otherwise you'd run the game at 1,000x speed so the computer could actually train faster.
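Rough numbers for the "run it at 1,000x" point: how long 10,000 practice races take in real time on a stream versus in a sped-up headless emulator. The race length and speedup factor here are illustrative assumptions, not measurements.

```python
RACE_SECONDS = 120        # assume ~2 minutes per Mario Kart race

def training_hours(races, speedup=1):
    """Wall-clock hours to play `races` races at a given emulation speedup."""
    return races * RACE_SECONDS / speedup / 3600

live_stream = training_hours(10_000)               # ~333 hours, watchable
headless = training_hours(10_000, speedup=1000)    # ~20 minutes, not watchable
```

Which is why serious training runs are headless and the live stream is basically marketing.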
But a side effect of this kind of training is that each level is a new experience. This is somewhat analogous to how infants learn to avoid holes while crawling, but then have to relearn that when they begin walking.
Yes, but if its first instinct is "go left" on 1-2, it's pretty apparent the reward function could use some tuning.
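A minimal example of the kind of reward tuning being suggested: score forward progress instead of raw survival, so "go left at the start" reads as negative immediately. The position values and terminal bonuses are hypothetical, not real game RAM addresses or values.

```python
def progress_reward(prev_x, curr_x, finished=False, crashed=False):
    """Positive for moving toward the goal, with terminal bonus/penalty."""
    if crashed:
        return -100.0
    if finished:
        return 1000.0
    return float(curr_x - prev_x)   # delta in horizontal position

# Going left from the start of a level now scores badly right away:
assert progress_reward(prev_x=40, curr_x=25) == -15.0
```

With a reward like this the agent never needs to reach a death or a timeout to learn that "left" was the wrong first move.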