Back to the Storyworld

After struggling with a great many programming problems with the engine and the front end, I have at last returned to the real work of Siboot: getting the storyworld into shape. And I already have a pretty good topic: what to do when spurned by another? Here’s a sample textual version of the context:

Joe greets Mary with much goodness.
Mary greets Joe with slight distrust.
Joe asks Mary what’s new.
Mary says that she has no news.
Joe tells Mary that he doesn’t like Tom.
Mary says ‘I see’.

Currently, this situation forces Joe to say goodbye to Mary because he has no other options. After all, she obviously doesn’t want to talk to him. Should Joe accept this situation or should he be able to address her taciturnity? “Why won’t you talk to me?” he might ask. In which case she could respond with something like “Because I greatly distrust you.” But what happens then? Should Joe be able to plead with Mary? What kind of responses could he reasonably bring to bear? Here are some possibilities:

“Why do you distrust me?”
This suffers from the problems raised by backstory. Mary might distrust Joe because of an event that occurred before the game started. Of course, this could be a feature, not a bug. If I pack a lot of inter-acolyte interaction into the novel, then I can use that as the backstory and the basis for the relationships. I might even be able to build those interactions into the HistoryBook. But this is a lot of work and could lead nowhere.

“Jane trusts me” 
This is a defense based on the fact that somebody else has a positive feeling for Joe. It naturally leads into a comparison of how other people regard Joe, which could work in his favor. But why would Mary want to give Joe that information?

“Please, I’ll be nice to you” 
This is an open-ended promise to do something nice in the future. It opens up a whole can of worms. I already have specific actions that can be made part of a deal: trading auragon information, perhaps giving relics, perhaps promising not to attack. These are all specific tradeable items. Should I differentiate between deals with objects and favors? After all, giving anybody any information is a favor, but it is of a scale smaller than what we normally see in deals. Should favors be treated as fundamentally different from deals? If so, are they necessarily one-sided, or is there an expectation of reciprocity? This brings up another possibility:

[Do a favor for Mary] 
This fits more neatly into the structure: favors are smaller than deals, but they can still improve the mood of a situation. Joe freely tells Mary that Tom likes her, which should make Mary more favorably inclined towards Joe. She might still respond with a noncommittal “I see”, but she might be warming up; this is where facial expressions would be crucial. Perhaps if Joe plies her with information, she will warm up to him. I like this.
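
As a rough sketch of that mechanic (every name and number here is hypothetical, not engine code), a favor might simply nudge the recipient’s affinity for the giver, with each successive favor counting for less as affinity approaches its bound:

    # Hypothetical sketch: a favor nudges the recipient's affinity for the
    # giver. Affinity lives in [-1, 1]; the (1 - abs(affinity)) factor makes
    # each favor count for less as affinity nears its bound.
    def apply_favor(affinity, favor_size, sensitivity=0.5):
        nudged = affinity + sensitivity * favor_size * (1.0 - abs(affinity))
        return max(-1.0, min(1.0, nudged))

    # Joe tells Mary that Tom likes her: a small favor against slight distrust.
    mary_for_joe = apply_favor(-0.4, 0.2)   # -0.34: still cool, but warming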


Dishonesty
For the last year I have gone back and forth about lying, and I’m still on the fence. The inclusion of uncertainty statements in the transfer of information certainly makes lying both easier and more subtle. A liar could simply make his false statement highly uncertain; that way he can’t be caught. Of course, we can always have the degree of uncertainty affect the degree of weight assigned to a statement, which in turn affects its utility to the audience. 
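
A minimal sketch of that weighting, assuming invented names and a [0, 1] uncertainty scale:

    # Hypothetical sketch: the weight of a statement falls with the
    # speaker's declared uncertainty (0 = flat assertion, 1 = pure guess).
    def statement_weight(base_weight, uncertainty):
        return base_weight * (1.0 - uncertainty)

    # A liar who hedges heavily is hard to catch, but his statement also
    # carries little utility for the audience.
    print(statement_weight(1.0, 0.8))   # 0.2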

Of course, if there are no lies, then there’s no reason for distrust, so I suppose that lies are a necessary component of the design. That means I must devise algorithms for both creating lies and detecting them. The trick this time, though, will be that lies will never be constructed out of whole cloth. Instead, a lie will consist of a deliberately misrepresented quantity, either an auragon count or a pValue.
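
Constructing such a lie could then be as simple as shifting the true quantity and clamping it to its legal range. A sketch, with invented names and ranges:

    # Hypothetical sketch: a lie is a shifted true value, never an invention.
    def construct_lie(true_value, shift, lo=0.0, hi=1.0):
        return max(lo, min(hi, true_value + shift))

    overstated_count  = construct_lie(2, +3, lo=0, hi=8)   # claim 5 auragons
    understated_pGood = construct_lie(0.7, -0.4)           # claim a pGood of 0.3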

The two big questions are:

1. What motivates a lie?
When should an actor lie? That breaks down into four subquestions (a sketch pulling them together follows case d):

a. When should an actor overstate another’s auragon count?
There are two possible goals here. The first is to induce DirObject to attack 3Actor so as to reduce the chances of 3Actor winning. This would only be useful when DirObject doesn’t like 3Actor.

The other possible goal is to induce the DirObject to make a poor choice in dream combat. This would work best when Subject knows that 3Actor actually possesses zero auragons of that type. DirObject would therefore be more likely to use the wrong auragon against 3Actor. Subject would do this to block a victory by DirObject.

b. When should an actor understate another’s auragon count?
This would encourage DirObject to attack 3Actor with the wrong auragon, causing 3Actor to lose.

c. When should an actor overstate another’s pValue?
In the case of pGood, I can’t see a benefit.
In the case of pTruthful, it might induce DirObject to lie to 3Actor.
In the case of pPowerful, it might anger DirObject so that DirObject confronts 3Actor.

All in all, I don’t see much benefit here for Subject.

d. When should an actor understate another’s pValue?
In the case of pGood, it would undermine DirObject’s cooperation with 3Actor.
In the case of pTruthful, it would have little effect.
In the case of pPowerful, it would encourage DirObject to foolishly dominate 3Actor.
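
Here is the sketch promised above, pulling cases a through d into one decision. Every parameter name and the ordering of the tests are placeholders, not settled design:

    # Hypothetical sketch: which misrepresentation, if any, is Subject
    # motivated to tell DirObject about 3Actor?
    def choose_lie(dirobject_likes_3actor, known_auragon_count,
                   wants_3actor_to_lose, wants_to_isolate_3actor):
        # (a) Overstate an auragon count: goads an unfriendly DirObject into
        #     attacking 3Actor, or, when the true count is zero, steers
        #     DirObject toward the wrong auragon in dream combat.
        if not dirobject_likes_3actor or known_auragon_count == 0:
            return "overstate auragon count"
        # (b) Understate an auragon count: lures DirObject into attacking
        #     3Actor with the wrong auragon, so that 3Actor loses.
        if wants_3actor_to_lose:
            return "understate auragon count"
        # (d) Understate a pValue: a low pGood undermines cooperation with
        #     3Actor; a low pPowerful invites a foolish attempt to dominate.
        if wants_to_isolate_3actor:
            return "understate pValue"
        # (c) Overstating a pValue offers Subject little benefit.
        return None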


2. How are lies detected?
In general, at the time of the telling, each statement should be modulated by DirObject’s pTruthful value for Subject, but we would also like a way for DirObject to retroactively revise pTruthful based on subsequent information. This is messy stuff, though; boolean lies are much easier to detect. Proper handling would require a search of the HistoryBook and a comparison of past statements with current values. Another approach might be to tap into the p3Values. Indeed, this could provide a completely new algorithm.

We use the p3Values of each ActorTrait, weighting each reporting actor’s value by our own pTruthful value for that actor, to determine our pValue for that Trait. Then we adjust our pTruthful value for every other actor according to how far that actor’s reported value deviates from our newly calculated pValue.
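
A minimal sketch of that algorithm for a single ActorTrait, with invented names throughout (p3 holds what each reporter has told us; pTruthful holds our trust in each reporter):

    # Hypothetical sketch of the algorithm above; all names are placeholders.
    def update_beliefs(p3, pTruthful, rate=0.3):
        # 1. Consensus: a trust-weighted average of all reports.
        total = sum(pTruthful[r] for r in p3)
        pValue = sum(p3[r] * pTruthful[r] for r in p3) / total

        # 2. Revision: each reporter loses trust in proportion to how far
        #    his report deviates from the consensus.
        revised = {r: max(0.0, pTruthful[r] - rate * abs(p3[r] - pValue))
                   for r in p3}
        return pValue, revised

    # Mary and Tom roughly agree; Joe's outlying report costs him the most.
    p3        = {"Mary": 0.80, "Tom": 0.75, "Joe": 0.20}
    pTruthful = {"Mary": 0.90, "Tom": 0.70, "Joe": 0.60}
    pValue, pTruthful = update_beliefs(p3, pTruthful)

In this simple form pTruthful can only fall; a fuller version would presumably let a reporter’s trust recover when his statements keep matching the consensus.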

I like the generality of this algorithm, but I’m not sure that it will produce good results. Still, I think I’ll try it.