(P, p) retraining policies (2007)

Abstract

Skills that are practiced infrequently need to be retrained. A retraining policy is optimal if it minimizes the cost of keeping the probability that the skill is learned within two bounds. The (P, p) policy is to retrain only when the probability that the skill is learned has dropped to just above the lower bound p, so that this probability is brought up to just below the upper bound P. Under minimal assumptions on the cost function, a set of two easy-to-check conditions involving the relearning and forgetting functions guarantees the optimality of the (P, p) policy. The conditions hold for the power functions proposed in the psychology of learning and forgetting, but not for exponential functions.
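
To make the policy concrete, the following is a minimal simulation sketch in Python. The power-law forgetting curve, the bound values, the time step, and the fixed per-session retraining cost are illustrative assumptions for this sketch, not quantities taken from the paper.

# Minimal sketch of a (P, p) retraining policy, assuming (not from the paper)
# a power-law forgetting curve and a fixed per-session retraining cost.

def forgetting(prob_after_training: float, t: float) -> float:
    """Power-law decay of the probability that the skill is learned,
    t time units after the last training session (illustrative form)."""
    return prob_after_training * (1.0 + t) ** -0.5


def simulate_Pp_policy(P: float = 0.95, p: float = 0.60,
                       horizon: float = 100.0, dt: float = 0.1,
                       retrain_cost: float = 1.0) -> float:
    """Simulate the (P, p) policy: retrain whenever the learning probability
    is about to drop below the lower bound p, restoring it to (just below)
    the upper bound P. Returns the total retraining cost over the horizon."""
    prob_at_training = P          # probability right after the last (re)training
    time_since_training = 0.0
    total_cost = 0.0
    t = 0.0
    while t < horizon:
        current = forgetting(prob_at_training, time_since_training)
        if current <= p:          # probability has reached the lower bound: retrain
            total_cost += retrain_cost
            prob_at_training = P  # retraining restores the probability toward P
            time_since_training = 0.0
        time_since_training += dt
        t += dt
    return total_cost


if __name__ == "__main__":
    print("Total retraining cost under (P=0.95, p=0.60):", simulate_Pp_policy())

In this discretized sketch, the retraining trigger fires when the probability first reaches p; in the paper's continuous formulation the trigger is just above p, so that the probability never leaves the interval (p, P).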

Bibliographic entry

Katsikopoulos, K. V. (2007). (P, p) retraining policies. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 37, 609-613.

Miscellaneous

Publication year: 2007
Document type: Article
Publication status: Published
Categories: Business, Probability, Memory
Keywords: dynamic programming, instruction, inventory management, memory, optimality