The Maximum Entropy (MaxEnt) information-theoretic parametric framework was introduced in a prior paper for distributed decision fusion (DDF) without knowledge of the prior probabilities of local decisions. That paper demonstrated the effectiveness of the MaxEnt fusion center, which achieved the best realistic detection performance relative to published results for either the Bayesian formulation or the Neyman-Pearson criterion. The present paper frames an extension of MaxEnt DDF, called E-MaxEnt, that uses individual per-sensor MaxEnt classifiers for target classification/recognition and fuses the local classifier decisions. Specifically, in E-MaxEnt each sensor has a front-end pre-processing system that performs signal detection and extracts unique target attributes, for example from observed target imagery; these attributes are stored for reference, learning, and comparison in the sensor's MaxEnt classifier. Based on the degree of match, each sensor generates a local binary decision that is sent to a MaxEnt fusion center in the usual parallel architecture. No assumptions are made about knowledge of any local decision rules. The sensors take simultaneous (synchronized) measurements with overlapping field-of-view (FOV) coverage. Note that this formulation is not meant to address the "needle-in-a-haystack" problem; rather, it addresses detecting the presence of, i.e., classifying/recognizing, a previously seen "known" target in areas where such targets are most likely to appear, along with other targets. At the time of writing, the data sets needed to test the algorithm were not available, but the front-end image processing and the MaxEnt classifiers were implemented. It is hoped that the necessary data sets can be obtained so that the efficacy of the method can be demonstrated and compared with alternative approaches.
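To illustrate the MaxEnt principle that the fusion center relies on (this is a hypothetical toy sketch, not the paper's fusion rule): among all distributions over the number k of local "target present" decisions from n sensors, the maximum-entropy distribution consistent with only a known expected vote count is the exponential family p(k) ∝ exp(λk), with λ solved numerically. The function name and the bisection approach below are illustrative assumptions.

```python
import math

def maxent_vote_counts(n_sensors, mean_votes, tol=1e-10):
    """Maximum-entropy distribution over k = 0..n_sensors local
    'target present' votes, constrained only by E[k] = mean_votes.
    MaxEnt yields p(k) proportional to exp(lam * k); lam is found
    by bisection on the (monotone increasing) mean. Toy sketch,
    not the paper's E-MaxEnt algorithm."""
    def mean_for(lam):
        # Expected vote count under the exponential-family distribution.
        w = [math.exp(lam * k) for k in range(n_sensors + 1)]
        z = sum(w)
        return sum(k * wk for k, wk in enumerate(w)) / z

    lo, hi = -50.0, 50.0  # bracket for the Lagrange multiplier
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < mean_votes:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * k) for k in range(n_sensors + 1)]
    z = sum(w)
    return [wk / z for wk in w]
```

For example, with 3 sensors and an expected count of 1.5 (exactly half), λ solves to 0 and the MaxEnt distribution is uniform over {0, 1, 2, 3}, reflecting that the constraint carries no information beyond symmetry.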