Our research group is using chess as a vehicle for studying the fusion of adaptation, multiple representation, and search technologies for real-time decision making. Chess systems such as Deep Blue have achieved Grandmaster-level play with a brute-force search of the game tree and human-supplied information such as piece values and opening books. However, subtle aspects of chess, including positional features and advanced concepts, cannot be represented or processed efficiently with this standard method. Since 1989, the Morph I-III systems have pursued `adaptive pattern-oriented chess', exhibiting more autonomy and learning ability than traditional chess programs. Like its predecessors, Morph IV is a reinforcement learner, but it also uses a new technique we call pattern-level TD and Q-learning to mathematically map the state space and learn to classify situations effectively. Its three knowledge sources comprise two traditional ones, material and a piece-square table, and a new method called Distance. These are combined using a simple genetic algorithm and a decision tree. This paper demonstrates the effectiveness of fusing knowledge to replace search in real-time situations: an agent that combines all sources consistently beats agents that employ any single knowledge source. Surprisingly, the pattern-level TD agent is slightly superior to the pattern-level Q-learning agent, even though the Q-learning agent updates more Q-values on each temporal step.
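To make the idea of pattern-level TD learning concrete, the following is a minimal sketch of a TD(0)-style update applied to pattern weights rather than to whole-position values. All names, the averaging evaluation, and the constants here are hypothetical illustrations; Morph IV's actual pattern representation and update rule may differ.

```python
# Hypothetical sketch of pattern-level TD(0): a position is evaluated as the
# mean weight of the patterns matched in it, and the TD error is credited
# back to every matched pattern. (Illustrative only; not Morph IV's code.)

ALPHA = 0.1   # learning rate (assumed value)
GAMMA = 0.9   # discount factor (assumed value)

def position_value(weights, patterns):
    """Evaluate a position as the average weight of its matched patterns."""
    return sum(weights[p] for p in patterns) / len(patterns)

def td_update(weights, patterns, reward, next_patterns):
    """Shift each matched pattern's weight toward the TD target r + gamma*V(s')."""
    v = position_value(weights, patterns)
    v_next = position_value(weights, next_patterns)
    delta = reward + GAMMA * v_next - v
    for p in patterns:
        weights[p] += ALPHA * delta
    return weights

# Example: two patterns match the current position, one matches the successor.
w = {"fork": 0.5, "pin": 0.3, "open_file": 0.8}
td_update(w, ["fork", "pin"], 0.0, ["open_file"])
```

A pattern-level Q-learning variant would instead maintain one value per (pattern, move-class) pair and update every pair touched on a step, which is why it performs more updates per temporal step than the TD version.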