I've been puzzling over how to design an optimized system that runs well in the presence of humans, and I realize it is awfully hard to get humans to stick with the system. First, humans will inevitably hold differing opinions about just how optimal the system is, and will seek to "improve" it through actions ranging from suggestions to outright disobedience.
Nor can humans be trusted to administer the system in the way originally intended, because they are likely to form their own interpretations and opinions about the rules. These exercises of free will inevitably color and vary the implementation of the system. That is not to say the variance can never be an improvement. Unfortunately, if the system was carefully planned to achieve an optimized result, any attempted tweak is far more likely to produce a less-than-optimal outcome.
Realistically, one might approximate such a hypothetical system by factoring the probable suboptimal human responses into each of the rules, so that the net effect of the rules as actually followed stays in line with the optimized plan; a toy sketch of this idea follows below. Unfortunately, this may involve attaching certain extreme penalties to strongly nudge probable behaviors away from suboptimal directions...
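To make that a bit more concrete, here is a minimal Python sketch of what "factoring in" probable deviation might look like. Everything in it is an illustrative assumption, not a rigorous model: compliance with a rule is assumed to rise with the penalty for breaking it (with diminishing returns), the expected outcome of the rule is computed under that probable deviation, and we search for the smallest penalty that keeps the expected outcome on plan.

    import math

    def compliance_probability(penalty, sensitivity=0.5):
        """Toy assumption: the chance a human follows a rule rises with the
        penalty for breaking it, with diminishing returns."""
        return 1.0 - math.exp(-sensitivity * penalty)

    def expected_outcome(penalty, optimal_value, degraded_value, sensitivity=0.5):
        """Expected value of a rule once probable human deviation is factored in."""
        p = compliance_probability(penalty, sensitivity)
        return p * optimal_value + (1.0 - p) * degraded_value

    def minimal_penalty(target, optimal_value, degraded_value,
                        sensitivity=0.5, step=0.1, max_penalty=100.0):
        """Find the smallest penalty whose expected outcome meets the target.
        Returns None if even the maximum penalty falls short."""
        penalty = 0.0
        while penalty <= max_penalty:
            if expected_outcome(penalty, optimal_value, degraded_value,
                                sensitivity) >= target:
                return penalty
            penalty += step
        return None

    # Made-up numbers: a rule worth 10 when followed, 4 when ignored; we want
    # an expected value of at least 9, i.e. roughly 83% compliance.
    print(minimal_penalty(target=9.0, optimal_value=10.0, degraded_value=4.0))

Note that because compliance in this toy model has diminishing returns, pushing the target toward the full optimum makes the required penalty grow without bound, which is exactly the "extreme penalties" problem.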
Friday, November 12, 2010