What are the Costs of a Deal?

Currently I evaluate deals based solely on the magnitude of the benefits that the participant enjoys. I do not take into account the magnitude of the costs the participant pays in providing his half of the deal. The purpose of this design essay is to remedy that failure. 

Beads
The cost of delivering a bead is the same as the benefit of receiving a bead; I need only copy the formula for that.
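
In code, that symmetry is just this (benefit_of_receiving_bead is a hypothetical stand-in for the existing benefit formula, which I won't repeat here):

    def benefit_of_receiving_bead(bead):
        # Placeholder for the existing bead-benefit formula.
        ...

    def cost_of_delivering_bead(bead):
        # The cost to the giver is the same as the benefit to the receiver.
        return benefit_of_receiving_bead(bead)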

Promise no attack
The cost of making a promise not to attack depends on the desirability of making that attack. I suppose it's time for me to tackle that specification. It breaks down into (primarily) the likelihood of winning the battle. How do I calculate that? That's easy: it's just the negation of the sum of my uncertainties about his auragon counts:

Likelihood = -(URedAuragonCount + UGreenAuragonCount + UBlueAuragonCount)

This value is then blended with my goodwill towards him:

Goodwill = PGood + PHonest

So the final formula is:

Blend(Likelihood, Goodwill, -0.5)

I should probably make this a custom function.
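
Something like this, assuming Blend(a, b, w) is a weighted average in which w runs from -1 (all a) to +1 (all b); the names here are illustrative:

    def blend(a, b, w):
        # Weighted average: w = -1 yields all a, w = +1 yields all b.
        return a * (1 - w) / 2 + b * (1 + w) / 2

    def cost_of_promise_no_attack(u_red, u_green, u_blue, p_good, p_honest):
        # Likelihood of winning the battle: the negated sum of my
        # uncertainties about his auragon counts.
        likelihood = -(u_red + u_green + u_blue)
        # Goodwill towards him.
        goodwill = p_good + p_honest
        # Blend weighted towards likelihood, per the -0.5 factor above.
        return blend(likelihood, goodwill, -0.5)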

Tell Auragon Count
There are two costs here. The first is the degree to which this helps the other guy win a battle; I cannot know that, because it depends upon his uncertainty value, which I don't know. The second is the harm it does to the third party. That's proportional to the combination of my Goodwill towards him and the uncertainty of my own information.

Digression: should I revert to zero uncertainty?
In the original game, all auragon count values had zero uncertainty: you either knew the value or you didn't. That made the logic easy. But for this game, I am permitting uncertainties. In the old game, you knew one auragon count of each opponent perfectly, but knew nothing about the other two. This does run counter to the fact that you can draw inferences based on past knowledge. Nah, this is just too messy a deviation from the long-established plan.

Getting back to the cost of telling auragon counts, I think I'll discount the possibility that the victim of my blabbing finds out about it. That simplifies matters, so I'm back to a simple formula:

Cost of telling = Goodwill - UToldAuragonCount
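
As a sketch (again with illustrative names):

    def cost_of_telling(p_good, p_honest, u_told_auragon_count):
        # Goodwill towards the victim of the blabbing, less my own
        # uncertainty about the auragon count I am revealing.
        goodwill = p_good + p_honest
        return goodwill - u_told_auragon_count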

There’s still a bit of a problem: what if I honestly tell somebody my uncertain value, and it’s so far off that they conclude that I am dishonest with them? 

One other idea: what if the ‘information injection’ is nothing more than the knowledge of who lost what last night? We could even go so far as to say that mind combat is public information. Both the deployments and the results are public knowledge. Does that make matters too easy? Yes, I think it does. 

But it suggests another possible deal channel: what auragons another person has deployed. A tells B that C deployed a D-type auragon in a previous combat. Hmm, not very useful, is it?

I have two problems here: first, each night’s combat rapidly changes the auragon count values, making previous information obsolete; and second, we need some way to improve our knowledge of auragon counts. 

What if actors could reveal their own auragon counts — but with the option to lie about it? Hmm, that opens up all sorts of interesting possibilities. It’s a much more direct form of interaction. 

I’ll sleep on this.