Learning Problem-Oriented Decision Structures from Decision Rules
This project is concerned with learning problem-optimized decision trees from decision rules. A standard approach to constructing decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision-making situations. Such problems arise, for example, when an attribute assigned to some node cannot be measured, or when there is a significant change in the costs of measuring attributes or in the frequency distribution of events from different decision classes. An attractive way to resolve this problem is to learn and store knowledge in the form of decision rules, and to generate from them, whenever needed, a decision tree that is most suitable for the given situation.
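The core idea above can be sketched in a few lines: given a set of decision rules, grow a tree by repeatedly choosing an attribute to test and partitioning the rules by its values. The sketch below is illustrative only; the attribute-selection heuristic (frequency of occurrence in the rules) is a simple stand-in for AQDT-1's actual utility criteria, and all names and the sample rules are hypothetical.

```python
from collections import Counter

def build_tree(rules, attributes):
    """rules: list of (conditions, decision) pairs, where conditions maps
    attribute -> required value and decision is a class label."""
    class_counts = Counter(d for _, d in rules)
    majority = class_counts.most_common(1)[0][0]
    occurs = Counter(a for conds, _ in rules for a in conds if a in attributes)
    if len(class_counts) <= 1 or not occurs:
        return majority  # leaf: all rules agree, or nothing left to test
    # Pick the attribute occurring in the most rules -- a simple stand-in
    # for AQDT-1's utility criterion, which can also weigh measurement
    # costs and the frequency distribution of decision classes.
    best = occurs.most_common(1)[0][0]
    branches = {}
    for v in {conds[best] for conds, _ in rules if best in conds}:
        # A rule follows branch v if it requires best == v, or does not
        # mention the attribute at all (it is consistent with any value).
        subset = [(c, d) for c, d in rules if c.get(best, v) == v]
        branches[v] = build_tree(subset, [a for a in attributes if a != best])
    return (best, branches)

# Hypothetical weather rules, of the kind an AQ-type learner might produce:
rules = [
    ({"outlook": "sunny", "humidity": "high"}, "stay_in"),
    ({"outlook": "sunny", "humidity": "normal"}, "go_out"),
    ({"outlook": "rain"}, "stay_in"),
]
tree = build_tree(rules, ["outlook", "humidity"])
```

Because the tree is derived from the rules rather than from raw examples, a different tree can be regenerated on demand, e.g., after removing an attribute that has become unmeasurable or after changing the selection criterion.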
An additional advantage of such an approach is that it facilitates building compact decision trees, which can be much simpler than the logically equivalent conventional decision trees (by compact trees we mean decision trees that may contain branches assigned a set of values, and nodes assigned derived attributes, i.e., attributes that are logical or mathematical functions of the original ones). The project describes an efficient method, AQDT-1, that takes decision rules generated by an AQ-type learning system (AQ15 or AQ17) and builds from them a decision tree that optimizes a given optimality criterion.
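To make the compact-tree idea concrete, the sketch below shows classification with a tree whose branches are labeled with sets of values, so several values can share one subtree, and whose node tests may be derived attributes (functions of the original ones). This is a hypothetical illustration of the data structure, not AQDT-1's representation.

```python
# A compact-tree node is (test, branches): test is an attribute name or a
# callable (a derived attribute); each branch pairs a *set* of values with
# a subtree. A leaf is just a class label.

def classify(node, example):
    if not isinstance(node, tuple):  # leaf: return the class label
        return node
    test, branches = node
    value = test(example) if callable(test) else example[test]
    for values, subtree in branches:
        if value in values:
            return classify(subtree, example)
    return None  # no branch covers this value

# Example compact tree: "rain" and "cloudy" share one set-valued branch
# instead of two separate branches with identical subtrees.
tree = ("outlook", [
    ({"rain", "cloudy"}, "stay_in"),
    ({"sunny"}, ("humidity", [({"high"}, "stay_in"),
                              ({"normal"}, "go_out")])),
])
```

Grouping values this way is what can make a compact tree much smaller than the logically equivalent conventional tree.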
The method can work in two modes: the standard mode, which produces conventional decision trees, and the compact mode, which produces compact decision trees. Preliminary experiments with AQDT-1 have shown that the decision trees it generates from decision rules (both conventional and compact) outperform those learned from examples by the well-known C4.5 program, both in simplicity and in predictive accuracy.
Michalski, R.S. and Imam, I.F., “On Learning Decision Structures,” Fundamenta Informaticae, Vol. 31, No. 1, special issue dedicated to the memory of Dr. Cecylia Rauszer, Polish Academy of Sciences, pp. 49-64, 1997.
Imam, I.F. and Michalski, R.S., “An Empirical Comparison Between Learning Decision Trees from Examples and from Decision Rules,” Proceedings of the Ninth International Symposium on Methodologies for Intelligent Systems (ISMIS-96), Zakopane, Poland, June 10-13, 1996.
Imam, I.F. and Michalski, R.S., “Learning Decision Trees from Decision Rules: A Method and Initial Results from a Comparative Study,” Reports of the Machine Learning and Inference Laboratory, MLI 93-6, School of Information Technology and Engineering, George Mason University, May 1993.
Imam, I.F. and Michalski, R.S., “Should Decision Trees Be Learned from Examples or from Decision Rules?” Proceedings of the 7th International Symposium on Methodologies for Intelligent Systems (ISMIS-93), Trondheim, Norway, June 15-18, 1993; Lecture Notes in Artificial Intelligence, Springer-Verlag.
Imam, I.F. and Michalski, R.S., “Learning Decision Trees from Decision Rules: A Method and Initial Results from a Comparative Study,” Journal of Intelligent Information Systems JIIS, L. Kerschberg, Z. Ras and M. Zemankova (Eds.), Vol. 2, No. 3, pp. 279-304, Kluwer Academic, Boston, MA, 1993.
For more references, see the publications section.