My lengthy peregrinations over the deal-making and information injection regarding auragon counts have slowly pushed me back to the original Siboot design. Does this mean that the original design was brilliant or that I have lost my creative powers?
It now appears that I must ditch the use of historical information in estimating auragon counts. Each day must be handled in isolation. So why not revert completely to the original design: at the beginning of each day, you get complete, correct knowledge of a single auragon count for each of your opponents.
In the original game, you could not lie; all statements were constrained to be true. Should I retain this constraint? I had long assumed that people could lie in the new game. Will that work?
I don’t see why not. But should I permit overlap in knowledge of auragon counts? With four players, each one knowing one auragon count for each of his opponents, there can be full knowledge of all auragon counts only if there is no overlap in that knowledge. I can certainly ensure that, and I think it tightens up the game.
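To convince myself that the no-overlap scheme works, here’s a rough sketch in Java. With three auragon colors and three opponents per actor, giving each opponent a different color means all three counts are known, each by exactly one opponent. All the names here are mine, not the engine’s:

    // Rough sketch: handing out non-overlapping knowledge of auragon counts.
    // Four actors, three auragon colors; all names are my own, not Siboot's.
    public class KnowledgeAssignment {
        static final int ACTORS = 4;
        static final int COLORS = 3;

        // knownColor[subject][observer]: which of subject's counts observer knows
        static int[][] assign() {
            int[][] knownColor = new int[ACTORS][ACTORS];
            for (int subject = 0; subject < ACTORS; subject++) {
                int nextColor = 0;
                for (int observer = 0; observer < ACTORS; observer++) {
                    if (observer == subject) {
                        knownColor[subject][observer] = -1; // nobody spies on himself
                    } else {
                        // each of the three opponents gets a different color, so all
                        // three counts are known, each by exactly one opponent
                        knownColor[subject][observer] = nextColor++;
                    }
                }
            }
            return knownColor;
        }

        public static void main(String[] args) {
            int[][] k = assign();
            for (int s = 0; s < ACTORS; s++)
                for (int o = 0; o < ACTORS; o++)
                    if (o != s)
                        System.out.println("Actor " + o + " knows actor " + s
                                + "'s count of color " + k[s][o]);
        }
    }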
Should uncertainties remain in the internal calculations? Probably so, because there is uncertainty based on the pHonest values. This suggests something new. You don’t want to pass on lies, so what if the ‘tell auragon count’ verb has WordSockets for the chain of information? Like so:
A; tell auragon count; B; C; auragon color; auragon count; source A.
A; tell auragon count; B; C; auragon color; auragon count; source D.
This protects A from the accusation by B that A lied when A is merely repeating what was told him by D.
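Roughly, the event might be a record like this; the field names are my own stand-ins for the WordSockets, not anything from the actual code:

    // Sketch of the 'tell auragon count' event with its chain-of-information
    // socket. Field names are my own stand-ins, not actual WordSockets.
    class Actor {
        String name;
        Actor(String name) { this.name = name; }
    }

    class TellAuragonCount {
        Actor speaker;     // A, the one doing the telling
        Actor listener;    // B, the one being told
        Actor subject;     // C, whose auragon count is being reported
        int auragonColor;  // which of C's counts
        int auragonCount;  // the reported value, which may be a lie
        Actor source;      // A himself for first-hand knowledge; D if A is relaying
    }

If B later discovers that the count was wrong, the accusation falls on the source, not on the speaker, unless the two are one and the same.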
Next question: can A lie about D as the source? I don’t think so; that’s getting too devious for me to handle with algorithms, although I’d love to include it in the game.
But doesn’t this mean that the player must remember the source of each bit of information? What if D tells him something early in the morning and then he is asked for it late in the afternoon after other interactions? I suppose that I could have the code auto-fill that information in for him; after all, if he can’t lie about it, there’s no need for the player to provide the information.
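A minimal sketch of that auto-fill, building on the event record sketched above; again, every name is hypothetical:

    // Sketch of the auto-fill: the engine remembers the source of each
    // received count (using the TellAuragonCount record sketched above)
    // and supplies it whenever the player relays the information.
    import java.util.HashMap;
    import java.util.Map;

    class TellMemory {
        // (subject, color) -> who originally vouched for that count
        private final Map<String, Actor> sourceOf = new HashMap<>();

        void remember(TellAuragonCount tell) {
            sourceOf.put(tell.subject.name + "#" + tell.auragonColor, tell.source);
        }

        // The player never fills in the source himself; since he can't
        // lie about it, the code looks it up for him.
        Actor autoFillSource(Actor subject, int auragonColor, Actor self) {
            Actor src = sourceOf.get(subject.name + "#" + auragonColor);
            return (src != null) ? src : self; // first-hand knowledge cites yourself
        }
    }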
OK, so I will permit lies, but they’ll be harder to get away with. But now we have a new problem. Actor A gets information from B; Actor A thinks that B is lying. Actor C asks A for the information. Does A simply relay the information coming from B directly, without mentioning his distrust? I suppose so.
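Here’s how I picture the relay in code, as a sketch; pHonest comes from the design, but the one-line confidence model and all the names are mine:

    // Sketch: relaying a report you distrust. The distrust, scaled by the
    // teller's pHonest, stays in A's private belief; the relayed event is
    // passed on verbatim with the original source attached.
    class Relay {
        // A's private confidence in the heard count; pHonest is from the
        // design, the one-line model is my own.
        static double confidence(double pHonestOfSource) {
            return pHonestOfSource;
        }

        // C asks A; A repeats what B said, citing the source, without
        // editorializing about his doubts.
        static TellAuragonCount relay(TellAuragonCount heard, Actor a, Actor c) {
            TellAuragonCount out = new TellAuragonCount();
            out.speaker = a;
            out.listener = c;
            out.subject = heard.subject;
            out.auragonColor = heard.auragonColor;
            out.auragonCount = heard.auragonCount;
            out.source = heard.source; // the blame stays with the original teller
            return out;
        }
    }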
But then there’s the matter of memory. The player gets some information from Actor A but doesn’t quite trust the information. The sense of distrust is based on the subtle cues he gets. But that night, when it comes time to make dream combat decisions, how is the player to recall those cues?
Perhaps the player should have a response to that information along the lines of “I don’t believe you.” This opens up a huge can of worms. The player could get into a long argument along the lines of:
{ You’re a liar! | I don’t believe you. | Are you sure? }
{ Don’t call me a liar! | I assure you. | Well, I’m not really certain. | I’m sorry, I don’t know what I was thinking. }
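If I did go down this road, the exchange might be encoded as data along these lines; the back-down rule and every name here are pure invention on my part:

    // Sketch of the argument as data: the accusation levels and replies
    // from the exchange above, with an invented back-down rule. None of
    // the names or logic comes from the actual design.
    class Argument {
        enum Accusation { YOURE_A_LIAR, I_DONT_BELIEVE_YOU, ARE_YOU_SURE }
        enum Reply { DONT_CALL_ME_A_LIAR, I_ASSURE_YOU, NOT_REALLY_CERTAIN, I_WAS_WRONG }

        // Whether the accused backs down depends on whether he actually
        // lied and on how hard he is being pressed.
        static Reply respond(Accusation a, boolean wasLying) {
            if (!wasLying)
                return (a == Accusation.YOURE_A_LIAR)
                        ? Reply.DONT_CALL_ME_A_LIAR : Reply.I_ASSURE_YOU;
            return (a == Accusation.ARE_YOU_SURE)
                    ? Reply.NOT_REALLY_CERTAIN : Reply.I_WAS_WRONG;
        }
    }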
I pursued an idea like this earlier and eventually discarded it. It hinges on the notion of uncertainty, which has dogged this entire design process. I’ve vacillated over the possibility of using uncertainty. On the one hand, it injects too much complexity into the game. On the other hand, it keeps popping up as a useful concept.