Mechanics of Lies

I have set aside work on the novel for a while to refresh my creative writing juices; in the meantime, I have gone back to work on the storyworld. My first task is to insert the option to lie when responding to a deal. So I duplicated the honest-answer verb tell auragon count, renamed it lie about auragon count, and set to work on it. The verb itself differs little from the original; it is presented and reacted to in the same way. The changes are in the options that previously included only the verb tell auragon count.

The WordSockets for the new option are almost exactly the same as those of the original verb; the only difference is the 6Quantifier WordSocket. In the original verb, it is meant to be correct; in the lying verb, it must be wrong. I have already solved the problem of how to make it wrong: simply add a small random number to the true number. 
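Here is a minimal sketch of that trick in Python; the function name, the offset range, and the rule that the offset must not be zero are my own assumptions, not the storyworld’s actual script:

```python
import random

def lied_auragon_count(true_count: int, max_offset: int = 2) -> int:
    """Report a deliberately wrong auragon count.

    A zero offset would accidentally tell the truth, so we redraw
    until the reported number actually differs from the true one.
    """
    offset = 0
    while offset == 0:
        offset = random.randint(-max_offset, max_offset)
    return max(0, true_count + offset)  # a count cannot go below zero
```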

What bothers me, though, is writing the desirable scripts for the two options. What should induce an actor to tell the truth, and what should induce him to lie? That’s a complicated calculation. 

Whenever A tells a lie to B, B reduces his trust in A by the degree to which A’s value differs from B’s own value. Thus, lying costs credibility, which in turn reduces the effectiveness of future lies. A nasty side effect is that A could tell B the truth, yet if B’s own value is far from the truth, B will still reduce his trust in A. 
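As a rough sketch of that rule (the names, the numeric range, and the sensitivity factor are my assumptions; the real script presumably works in BNumbers):

```python
def revise_trust(trust_in_a: float, a_stated: float, b_believes: float,
                 sensitivity: float = 0.5) -> float:
    """B lowers his trust in A in proportion to how far A's statement
    sits from B's own belief, whether or not A actually lied."""
    discrepancy = abs(a_stated - b_believes)
    return max(-1.0, trust_in_a - sensitivity * discrepancy)
```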

Perhaps I could cope with this by having B present his own value to A, so that A can respond to that. This gives both parties the opportunity to lie. It would also change the structure of deals, something I rather like. Instead of proposing different topics, the two parties agree on a single topic and share information about it. Thus, A might say to B, “Would you be willing to share information about X’s Y-value?” B responds with either “No” or his estimate of X’s Y-value, to which A must respond with his own estimate — which, of course, could be false. 

However, this approach gives A an advantage: he gets to respond to B’s value, whereas B must offer his value without knowing A’s value. Therefore, the sequence must be altered as follows:

A suggests a topic
B agrees to the suggestion
A states his value
B states his value

This means that A is the one going out on a limb. 
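In code, the revised exchange might look something like this sketch; the Party fields and the accepts_topic/stated_value callables are hypothetical placeholders for the eventual verbs:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Party:
    name: str
    accepts_topic: Callable[[str], bool]   # does this actor agree to discuss the topic?
    stated_value: Callable[[str], float]   # the value he chooses to report (possibly a lie)

def exchange_information(a: Party, b: Party, topic: str) -> Optional[Tuple[float, float]]:
    """A suggests a topic; if B agrees, A must state his value first, then B states his."""
    if not b.accepts_topic(topic):
        return None                         # B declines; nothing is exchanged
    a_value = a.stated_value(topic)         # A goes out on a limb
    b_value = b.stated_value(topic)         # B answers, already knowing A's claim
    return (a_value, b_value)
```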

Another approach is starker: an extension of gossip. Actor A simply declares his value in the expectation that B will state his value in return. However, that expectation could be dashed by B’s refusal to tattle on X, an entirely reasonable position. Therefore, I think it better to use the sequence above. 

This also plays well with the other verbs. A can first state his pGood value for X, in the expectation that B will respond in kind. If B’s pGood for X is low, A can then propose a deal with greater confidence that B will agree. 

How will this mesh with future verbs such as the one that permits you to trade a bead for some valuable piece of information? 

A offers a bead for X’s value of Y.
B tells A X’s value of Y.
A gives B the bead.

How will lying help the liar?
The only value of lying is the likelihood that the victim will make a bad choice in dream combat. Overstating the Y-value will induce the victim to refrain from combat with X; understating it will encourage the victim to attack X. Thus, an overstatement lie can help X, while an understatement lie can hurt X. Since the liar usually lies about an actor he wants to set back rather than help, this suggests that most lies should be understatement lies. 
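A toy decision rule makes the asymmetry visible; this is my own simplification, and the real dream-combat choice surely weighs more than a single number:

```python
def victim_attacks_x(reported_x_strength: float, victim_strength: float) -> bool:
    """The victim picks a fight with X only when X looks beatable.

    Overstating X's strength therefore shields X from attack;
    understating it invites an attack on X.
    """
    return reported_x_strength < victim_strength
```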

Here’s where the drama enters: actors will not behave with pure rationality; they will be less inclined to hurt those they honor. But should that inclination be based on pGood only or on the sum of pGood, pHonest, and pPowerful? I think the latter. Thus, you want to preserve the good will of others even while lying to them. 

Should I replace the two verbs tell auragon count and lie about auragon count with a single verb? That single verb would merely add a semi-random number to the true value it reports, and the magnitude of that semi-random number would increase with the inclination to lie. Can I actually design that calculation? 

the difference between (the sum of X's auragon counts) and (the sum of my own auragon counts)

That’s a measure of how much closer to winning X is than I am. In other words, it’s how dangerous X is, on a BNumber scale. I think that I should subtract from this the Esteem value (a custom operator that is the sum of my three pValues toward B) to determine the inclination to lie.  
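Pulling the pieces together, the calculation might look like this sketch; the normalization constant, the scale factor, and the use of a uniform random decrease are all my assumptions rather than settled design:

```python
import random

def threat(x_auragon_total: int, my_auragon_total: int, max_total: int = 12) -> float:
    """How much closer to winning X is than I am, squeezed onto a
    BNumber-like scale of roughly -1..+1. The max_total normalizer
    is a placeholder."""
    return (x_auragon_total - my_auragon_total) / max_total

def esteem(p_good: float, p_honest: float, p_powerful: float) -> float:
    """My overall regard for B: the sum of my three pValues toward him."""
    return p_good + p_honest + p_powerful

def reported_count(true_count: int, threat_of_x: float, esteem_for_b: float) -> int:
    """The single-verb version: understate X's count by a semi-random
    amount whose magnitude grows with my inclination to lie,
    i.e. Threat for X minus Esteem for B."""
    inclination = threat_of_x - esteem_for_b
    if inclination <= 0.0:
        return true_count                       # no reason to lie: report the truth
    max_decrease = round(3 * inclination)       # the scale factor 3 is arbitrary
    decrease = random.randint(0, max_decrease)  # the semi-random part
    return max(0, true_count - decrease)
```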

Conclusions
This has been a long and meandering meditation. I think that I shall take the following steps:

1. Add a new verb target bead that can be a response to ask target.
2. Create a custom operator Threat that reflects how much closer to winning X is than I am.
3. Subtract Esteem for B from Threat for X to determine the absolute amount by which to decrease the reported value.