The title does not refer to my self-confidence; it refers to the use of the uValues in the game. Not only does every actor have a pValue for every other actor’s traits; they also have an estimate of the uncertainty of that pValue. This permits greater emotional fidelity. However, it also has its downsides.
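In data-structure terms, that amounts to something like the following rough Python sketch (the trait names and value ranges here are placeholders of my own, not the game's):

```python
from dataclasses import dataclass, field

@dataclass
class Perception:
    pValue: float   # perceived value of the trait (placeholder range: -1.0 to +1.0)
    uValue: float   # estimated uncertainty of that perception (placeholder range: 0.0 to 1.0)

@dataclass
class Actor:
    name: str
    # perceptions[other_actor_name][trait_name] -> Perception
    perceptions: dict[str, dict[str, Perception]] = field(default_factory=dict)

    def perceive(self, other: str, trait: str, pValue: float, uValue: float) -> None:
        # Record (or overwrite) this actor's perception of another actor's trait.
        self.perceptions.setdefault(other, {})[trait] = Perception(pValue, uValue)
```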
The biggest downside is the cognitive load this places on the player. The player has to evaluate both the pValue and the uValue. These are expressed with exactly the same symbols, so they are easily confused. Moreover, the difference between a mean and an uncertainty is difficult for most people to appreciate.
Then there’s the realization I discussed yesterday, that I need to simplify this design. I need to take an axe to it, chop away the cognitive underbrush and the tangle of too-clever ideas, and cut it down to something simple and clear.
Thinking of it this way, I now wonder why I ever added uncertainty into the mix in the first place. It was definitely a dumb idea. Off with its head!
Later
The elimination of the uValues has raised a few problems. In several cases, I used uValues to determine an ideal course of action. For example, when an actor chooses which auragon he wishes to learn about in a deal, he simply picks the one with the highest uValue. This ensures that uValues will always converge downward. But now I can’t use that approach.
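To make that selection rule concrete, here is a minimal Python sketch (the auragon names and numbers are made up for illustration):

```python
def choose_auragon_to_learn(uValues: dict[str, float]) -> str:
    """Pick the auragon the actor is most uncertain about.

    Learning about it drives that uValue down, so repeated deals
    push all of an actor's uValues toward zero over time.
    """
    return max(uValues, key=uValues.get)

# Example with placeholder auragon names and uncertainties:
print(choose_auragon_to_learn({"red": 0.2, "blue": 0.7, "green": 0.4}))  # -> "blue"
```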
One possibility is to retain uValues secretly. The actors would keep these values and use them to make decisions, but never communicate them; instead, the uValues would be derived from discrepancies between intrinsic pValues and reported pValues. However, this would give computer actors a big advantage over the player, who can’t keep such precise records of events.
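If I did go down that road, the derivation might look something like this sketch (Python; the blending weight and update rule are arbitrary placeholders, not anything I've settled on):

```python
def update_hidden_uValue(old_uValue: float,
                         reported_pValue: float,
                         intrinsic_pValue: float,
                         weight: float = 0.3) -> float:
    """Fold the latest reported-vs-intrinsic discrepancy into a secret uValue."""
    discrepancy = abs(reported_pValue - intrinsic_pValue)
    # Blend the new evidence into the running estimate; the actor never reports this number.
    return (1.0 - weight) * old_uValue + weight * discrepancy

# Example: an actor reported 0.8 but the intrinsic value turned out to be 0.3,
# so the hidden uValue rises toward that 0.5 discrepancy.
print(update_hidden_uValue(old_uValue=0.2, reported_pValue=0.8, intrinsic_pValue=0.3))
```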