The benefits of target relations: A comparison of multitask extensions and classifier chains
| Title | The benefits of target relations: A comparison of multitask extensions and classifier chains |
| --- | --- |
| Publication Type | Journal Article |
| Year of Publication | 2020 |
| Authors | Adıyeke, E., and M. Gokce Baydogan |
| Keywords | Classifier chains, Ensemble learning, Multi-objective trees, Multitask learning, Stacking |
Multitask (multi-target or multi-output) learning (MTL) deals with the simultaneous prediction of several outputs. MTL approaches rely on the optimization of a joint score function over the targets. However, defining a joint score in global models is problematic when the target scales differ. To address such problems, single-target (i.e., local) learning strategies are commonly employed. Here we propose alternative tree-based learning strategies to handle the issue of target scaling in global models, and to identify the learning order for chaining operations in local models. In the first proposal, the problems with target scaling are resolved using alternative splitting strategies that consider the learning tasks in a multi-objective optimization framework. The second proposal deals with the problem of ordering in chaining strategies. We introduce an alternative estimation strategy, the minimum error chain policy, which gradually expands the input space using estimations that approximate the true characteristics of the outputs, namely the out-of-bag estimations of a tree-based ensemble framework. Our experiments on benchmark datasets illustrate the success of the proposed multitask extension of trees compared to decision trees with the de facto design, especially for datasets with a large number of targets. Similarly, the minimum error chain policy improves on the performance of state-of-the-art chaining policies.
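The chaining idea summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, scikit-learn random forests, and the mean-squared-error criterion are illustrative assumptions. The sketch follows the stated mechanism: at each step, score every remaining target by the error of its out-of-bag (OOB) predictions, pick the minimum-error target, and append its OOB estimates (rather than in-sample fits) to the input space before learning the next target.

```python
# Hypothetical sketch of a minimum error chain policy (illustrative, not the
# paper's code): order targets greedily by out-of-bag error and grow the
# feature space with the chosen target's OOB predictions at each step.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def min_error_chain(X, Y, n_trees=100, seed=0):
    """Return the learned target order and a fitted forest per target.

    X : (n_samples, n_features) input matrix
    Y : (n_samples, n_targets) target matrix
    """
    n_targets = Y.shape[1]
    remaining = list(range(n_targets))
    order, models = [], {}
    X_aug = X.copy()
    while remaining:
        errors, oob_preds = {}, {}
        for t in remaining:
            rf = RandomForestRegressor(n_estimators=n_trees,
                                       oob_score=True,
                                       random_state=seed)
            rf.fit(X_aug, Y[:, t])
            pred = rf.oob_prediction_          # OOB estimates for target t
            errors[t] = np.mean((pred - Y[:, t]) ** 2)
            oob_preds[t] = pred
            models[t] = rf
        best = min(remaining, key=errors.get)  # minimum-error target first
        order.append(best)
        remaining.remove(best)
        # Expand the input space with OOB estimates, which approximate the
        # target's true characteristics better than in-sample predictions.
        X_aug = np.column_stack([X_aug, oob_preds[best]])
    return order, models
```

Using OOB estimates instead of in-sample fits matters here: in-sample predictions of an ensemble are nearly perfect on the training data, so chaining on them would overstate how informative an estimated target is at test time.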