Reading “The Problem of Good, in a Platform Game” got me thinking.
I doubt some of the assumptions on that page and I’d like to present my own here. It should also be possible to turn them into a simple agent for a game.
First, let’s assume that resources are finite and somewhat scarce. Changing your alignment (good/bad) needs resources, so you won’t do it on a whim. I’d even say that there is a hysteresis: you stay in your current alignment longer than you would in a perfect (linear) world.
So the question is: What changes your alignment?
My theory is that several forces influence your alignment (not necessarily in this order):
– Peer pressure
– Personal experience
– Prediction of the future
– Food/rest
Some comments on these forces:
1. If everyone around you is good, it’s hard to become evil, partly because they will fight this tendency, partly because you simply have no role model. We are all mimics. It’s hard to come up with something new on your own (again, scarce resources: you don’t have all the time in the world, nor can you change as often as you like).
You might argue that some people are born evil. I’d like to have proof of this, please.
2. Whenever you get into a situation X, you will rack your memory for similar situations to give you a guideline for how to respond (again, scarce resources). So if your experience tells you to be good in situation X (because that worked in the past), you will be good. Notice that only the outcome of the situation for *you* counts. So if you like to whine about being capitalized on, the outcome of being abused is “good” for you – no need to get your lazy bum up and change.
3. If the situation is new, you have to come up with a plan. Again, you can’t think for years; there is some pressure on you. So the plan is never perfect and you will know that. So depending on your confidence in your plan, you will change your alignment or stay on safe (known) ground.
4. Most people are only civilized as long as they are fed and well rested. Just imagine depriving someone of sleep for a day. They will get irritated much faster than a well-rested person.
Model:
0 is neutral, > 0 is good, < 0 is selfish
1. is fairly easy to model: just sum the influence of the people around you, perhaps multiplied by a factor depending on each person’s current psychological distance to you. That is, your role models will feel pretty close even if they are on the other side of the globe, while your neighbor might as well be on the far side of the moon – you couldn’t care less.
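A minimal sketch of that weighted sum, assuming a `Peer` record with an alignment value and a “psychological closeness” factor (both names and the 0..1 scale are my own illustration, not from the post):

```python
from dataclasses import dataclass

@dataclass
class Peer:
    alignment: float   # > 0 good, < 0 selfish, 0 neutral
    closeness: float   # psychological closeness, 0 (stranger) .. 1 (role model)

def peer_influence(peers):
    # Weighted sum: a distant role model (high closeness) counts far more
    # than a physically near but psychologically distant neighbor.
    return sum(p.alignment * p.closeness for p in peers)

peers = [
    Peer(alignment=1.0, closeness=0.9),   # admired role model on another continent
    Peer(alignment=-0.5, closeness=0.1),  # the neighbor you barely notice
]
print(peer_influence(peers))  # roughly 0.85: the role model dominates
```

The closeness factor is doing the real work here: it replaces physical distance with attention, which matches the “other side of the moon” remark above.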
2. For every game situation, you need to calculate the net result. Use several standard games (the prisoner’s dilemma and a couple more) and store the factors and the result in a per-agent memory. When the next situation comes up, compare the setting with the memory and have the agent change its alignment according to the expected outcome of the game. When this is done, the agent is ready to play the game. Update the memory afterwards.
If the result is off the scale, change the alignment accordingly.
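One way to sketch that per-agent memory is a nearest-neighbour lookup over stored situations: keep (features, payoff) pairs and predict the payoff of a new situation from the most similar remembered one. Everything here (the class name, the feature tuples, the distance measure) is a hypothetical illustration, not the post’s prescription:

```python
class AgentMemory:
    """Toy episodic memory: stores (situation features, payoff) pairs and
    predicts the payoff of a new situation from the most similar old one."""

    def __init__(self):
        self.episodes = []  # list of (features, payoff)

    def remember(self, features, payoff):
        self.episodes.append((features, payoff))

    def expected_payoff(self, features, default=0.0):
        if not self.episodes:
            return default  # no experience yet: assume a neutral outcome
        # Nearest neighbour by squared distance over the feature vector.
        def dist(episode):
            f, _ = episode
            return sum((a - b) ** 2 for a, b in zip(f, features))
        _, payoff = min(self.episodes, key=dist)
        return payoff

mem = AgentMemory()
mem.remember((1.0, 0.0), +1.0)   # cooperating in setting A paid off
mem.remember((0.0, 1.0), -0.5)   # cooperating in setting B backfired
print(mem.expected_payoff((0.9, 0.1)))  # closest to setting A, so +1.0
```

The agent would shift its alignment toward the predicted payoff before playing, then call `remember` with the actual result afterwards, as the paragraph above describes.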
3. For every new game, have the agent play a few rounds against itself. Use the result as the alignment for the game. If the outcome is vastly different from what the agent expected, multiply the alignment change by this “unsureness” factor (if we’re more insecure, we are more susceptible to influence).
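The self-play step above could look like this sketch: run a few rounds, take the mean result as the provisional alignment, and use the spread of the results as a crude “unsureness” factor. The round function, round count, and the max-minus-min spread measure are all assumptions of mine:

```python
import random

def self_play_alignment(play_round, rounds=20, seed=42):
    """Estimate an alignment for an unfamiliar game by letting the agent
    play a few rounds against itself. Returns (mean result, spread);
    the spread serves as an 'unsureness' factor that later scales how
    strongly outcomes shift the agent's alignment."""
    rng = random.Random(seed)
    results = [play_round(rng) for _ in range(rounds)]
    mean = sum(results) / len(results)
    spread = max(results) - min(results)  # crude unsureness measure
    return mean, spread

# A hypothetical noisy game in which cooperating pays off on average:
alignment, unsure = self_play_alignment(lambda rng: rng.uniform(-0.2, 1.0))
# 'alignment' seeds the agent's stance for the new game; a large 'unsure'
# value makes the agent more susceptible to outside influence.
```

Seeding the generator keeps the sketch reproducible; in a real game you would let each agent roll its own dice.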
4. Give the agent a head and a stomach. Let them rest and eat (and work late and starve). 0 means “normal”, < 0 means hungry/tired, > 0 means rested/well fed. Scale this accordingly (maybe this is even logarithmic: if you are very hungry, you’ll even eat another agent) and add it to the current alignment.
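A sketch of that scaling, under the assumption (mine, suggested by the paragraph above) that comfort helps only linearly while deprivation grows logarithmically in severity:

```python
import math

def physical_drive(satiation):
    """Map a hunger/rest level (0 = normal, > 0 = well fed/rested,
    < 0 = hungry/tired) onto an alignment contribution. Deprivation is
    amplified: mild hunger barely matters, severe hunger swamps
    everything else. The exact curve is a made-up assumption."""
    if satiation >= 0:
        return satiation            # comfort helps, but only linearly
    return -math.log1p(-satiation)  # grows fast as deprivation deepens

# Mildly hungry vs. starving: the second value pulls the alignment
# down much harder than five times the first would suggest.
print(physical_drive(-0.5), physical_drive(-10.0))
```

Adding this value to the current alignment reproduces the observation in point 4: a starved, sleep-deprived agent tips toward selfish behavior no matter how good its peers are.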
To map the linear alignment to the “good/selfish” alignment, use a hysteresis curve. The final result should show some “resistance” to changing the current alignment and a preference for returning to the current state (so if you’re currently selfish, being treated nicely won’t count as much as being treated badly).
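The simplest hysteresis I can think of for this mapping is a two-threshold switch: the label only flips when the linear signal pushes clearly past a threshold in the opposite direction. The threshold value and function shape are my assumptions, not the post’s:

```python
def discrete_alignment(linear, current, threshold=0.3):
    """Map the continuous alignment onto 'good'/'selfish' with hysteresis:
    the agent keeps its current label unless the signal clearly pushes
    past the threshold in the opposite direction."""
    if current == "good" and linear < -threshold:
        return "selfish"
    if current == "selfish" and linear > threshold:
        return "good"
    return current  # resistance: weak signals don't flip the state

state = "selfish"
state = discrete_alignment(0.2, state)  # nice treatment, but not enough
state = discrete_alignment(0.5, state)  # clearly past the threshold
print(state)  # good
```

Because the flip condition depends on the current state, being treated nicely counts less for a selfish agent than being treated badly does for a good one, which is exactly the asymmetry the paragraph above asks for.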
Nation and the 6th part of the Hitchhiker
1. January, 2010

Just finished reading two books: “Nation” (Terry Pratchett) and “And Another Thing …” by Eoin Colfer.
When I browse through my favorite book store here in Zurich, I’m always looking for something new by Terry Pratchett. I’m a huge fan of his Diskworld series and always torn when there still isn’t another volume out. On one hand, I really miss his witty way of looking at the world; on the other, a good thing takes time. So this time, I ambled into the other works of Pratchett, but after “The Carpet People” and “Johnny and the Bomb”, I wasn’t too thrilled. But I couldn’t walk away from “Just possibly the best book Pratchett has ever written” (Fantasy and Science Fiction).
And it is. It’s a hugely different setting than Diskworld but as witty and smart as you’d expect. It’s the story of a boy who sets out to become a man and becomes so much more. It’s about standing up against peril, evil and bullies. If you like Diskworld, you must read this, too.
Eoin Colfer was a similar case: Part 6 of the THHGTTG? You’ve got to be kidding! I loved the stories around Artemis Fowl, but The Hitchhiker? Is Eoin out of his mind? Luckily, he asked himself the same questions.
The net result: Definitely not a book by Douglas Adams but also definitely a book from the Hitchhiker series. Ever wondered where the animals come from that want to be eaten and can argue in their favor? There must be herds, right? There are. When Thor (the Norse god) needs some aiming practice, they “provide moo-ving targets”. Just like Adams, Eoin (pronounced Owen) likes to take things to the tip and I mean the utmost protruding electron. It’s a book about a world where all your wishes were granted. And you know the old saying. A fun read and at least one good laugh on every page. To put it another way: The worst thing about the book is its title.
If you’re still worrying whether you should dare to complete the trilogy with part 6, stop and buy.
Recommendation: Buy. Both. Now.