- 宮田 敏
- Genome Center, Japanese Foundation for Cancer Research (財団法人癌研究会ゲノムセンター)
Bibliographic information
- Alternative titles
- Non-parametric Regression Model Estimation Based on an Adaptive Model Selection Criterion
- テキオウガタ モデル センタク キジュン ニ モトヅク ヒセンケイ カイキ モデル ノ スイテイ
Abstract
In modeling procedures for non-parametric regression, model selection via optimization of the model tuning parameters becomes an important problem. This article discusses optimal model selection with a data-adaptive penalty, and its optimization using an evolutionary algorithm (EA). Most model selection criteria, such as AIC, BIC, and Mallows's Cp, use a fixed penalty to control the complexity of the model. These selection procedures are non-adaptive, and their performance depends on the required complexity of the model: criteria with a "large" penalty perform well only for "simple" models, and vice versa. To avoid selection bias of this kind, we adopt the adaptive model selection criterion (AMSC), whose data-adaptive penalty is defined as the best estimator of the relative squared loss between the true model and the estimator, and which is applicable to various degrees of model structure complexity. In general, optimization for model selection is a complex non-linear problem. Most conventional methodologies adopt deterministic procedures, such as grid search or stepwise techniques, and obtain only suboptimal solutions. In this paper, we consider global optimization of the AMSC via stochastic optimization procedures such as the EA. The proposed procedure is applied to 1) hierarchical neural networks, namely i) a single-layer feed-forward neural network and ii) a radial basis function network, and 2) a support vector machine, and its effectiveness is confirmed via simulation studies.
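The abstract contrasts deterministic tuning (grid search, stepwise selection) with stochastic global optimization of a selection criterion. As a minimal sketch of that idea, the toy below minimizes a model selection criterion over a continuous tuning parameter with a simple (1+1)-evolution strategy. Leave-one-out cross-validation of a Nadaraya-Watson kernel smoother stands in for the AMSC, whose exact form is not given in the abstract, and the kernel bandwidth stands in for the model tuning parameter; all names here are illustrative, not the paper's.

```python
import math
import random

random.seed(1)

# Toy data from a smooth non-linear regression model.
n = 60
xs = [i / (n - 1) for i in range(n)]
ys = [math.sin(2 * math.pi * x) + random.gauss(0, 0.2) for x in xs]

def loo_cv(h):
    """Leave-one-out squared error of a Nadaraya-Watson smoother
    with Gaussian kernel bandwidth h (a stand-in for the AMSC)."""
    err = 0.0
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            if j == i:
                continue
            w = math.exp(-((xs[i] - xs[j]) / h) ** 2 / 2)
            num += w * ys[j]
            den += w
        if den == 0.0:          # bandwidth too small: no usable neighbors
            return float("inf")
        err += (ys[i] - num / den) ** 2
    return err / n

# (1+1)-evolution strategy: mutate the log-bandwidth, keep the
# mutant only if it lowers the criterion.
log_h = 0.0                      # start at h = 1 (heavily oversmoothed)
best = loo_cv(math.exp(log_h))
for _ in range(200):
    cand = log_h + random.gauss(0, 0.5)
    score = loo_cv(math.exp(cand))
    if score < best:
        log_h, best = cand, score

h_opt = math.exp(log_h)          # selected bandwidth
```

Mutating on the log scale keeps the bandwidth positive without explicit constraints, which is a common trick when a tuning parameter lives on (0, ∞); by construction the accepted criterion value never exceeds that of the oversmoothed starting point.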
Journal
- 応用統計学 (Japanese Journal of Applied Statistics)
- 応用統計学 33 (1), 71-91, 2004
- 応用統計学会 (Japanese Society of Applied Statistics)
Details

- CRID: 1390282679418794496
- NII Article ID: 10013540340
- NII Bibliographic ID: AN00330942
- ISSN: 18838081, 02850370
- NDL Bibliographic ID: 7075152
- Text language code: ja
- Data source types: JaLC, NDL, Crossref, CiNii Articles, KAKEN
- Abstract license flag: Not permitted (使用不可)