- MITSUBOSHI Ryotaro
  Department of Informatics, Kyushu University / RIKEN AIP
- HATANO Kohei
  Department of Informatics, Kyushu University / RIKEN AIP
- TAKIMOTO Eiji
  Department of Informatics, Kyushu University
Description
Following the formulation of Support Vector Regression (SVR), we consider a regression analogue of soft margin optimization over the feature space indexed by a hypothesis class H. More specifically, the problem is to find a linear model w ∈ ℝ^H that minimizes the sum of ρ-insensitive losses over all training data for as small a ρ as possible, where the ρ-insensitive loss for a single data point (x_i, y_i) is defined as max{|y_i − ∑_h w_h h(x_i)| − ρ, 0}. Intuitively, the parameter ρ and the ρ-insensitive loss are defined analogously to the target margin and the hinge loss in soft margin optimization, respectively. Our formulation differs from SVR in two ways: (1) we use L1-norm regularization instead of L2-norm regularization, and (2) the feature space is implicitly defined by a hypothesis class instead of a kernel. We propose a boosting-type algorithm for solving the problem with a theoretically guaranteed convergence rate under a natural assumption on weak learnability.
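To make the objective concrete, the following is a minimal sketch (not taken from the paper) of the ρ-insensitive loss summed over training data for a finite hypothesis class H; the function name and the toy hypotheses are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rho_insensitive_loss(w, H, X, y, rho):
    """Sum of rho-insensitive losses of the linear model w over H.

    w   : (|H|,) weights over the hypothesis class
    H   : list of hypotheses, each mapping an input x to a real value
    X   : list of n training inputs
    y   : (n,) array of training targets
    rho : insensitivity parameter (analogue of the target margin)
    """
    # Predictions sum_h w_h h(x_i) for each training input x_i.
    preds = np.array([[h(x) for h in H] for x in X]) @ w
    # Per-example loss max{|y_i - prediction| - rho, 0}, analogous to
    # the hinge loss in soft margin optimization, then summed.
    return np.maximum(np.abs(y - preds) - rho, 0.0).sum()

# Toy usage with two hypothetical base hypotheses on scalar inputs.
H = [lambda x: 1.0 if x > 0 else -1.0, lambda x: x]
w = np.array([0.5, 1.0])
X = [-1.0, 0.5, 2.0]
y = np.array([-1.0, 1.5, 2.0])
print(rho_insensitive_loss(w, H, X, y, rho=0.1))  # 1.2
```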
Journal
- IEICE Transactions on Information and Systems, E107.D (3), 294-300, 2024-03-01
  The Institute of Electronics, Information and Communication Engineers (IEICE)
Details
- CRID: 1390017843891630592
- ISSN: 1745-1361, 0916-8532
- Text language code: en
- Material type: journal article
- Data source types: JaLC, Crossref, KAKEN, OpenAIRE
- Abstract license flag: Disallowed