Development of a Neural Network Simulator for Structure-activity Correlation of Molecules: Neco. (7). Hydrophobic Parameter (logP) Prediction of Perillartine Derivatives.

  • TAKAHASHI Risa
    Department of Human Culture and Sciences, Graduate School of Ochanomizu University
  • HOSOYA Haruo
    Faculty of Sciences, Ochanomizu University
  • FUKUDA Tomoko
    National Institute for Advanced Industrial Science and Technology
  • NAGASHIMA Umpei
    National Institute for Advanced Industrial Science and Technology

Bibliographic Information

Other Title
  • 分子の構造活性相関解析のためのニューラルネットワークシミュレータ  Neco (NEural network simulator for structure‐activity COrrelation of molecules)の開発 (7)  ペリラルチン類の疎水性パラメータlogPの予測

Description

We developed a neural network simulator for structure-activity correlation of molecules, Neco. Neco is a three-layer, perceptron-type network that includes a self-organized network model for high-speed learning; the neurons in its hidden layer are self-organized using the Mahalanobis generalized distance.

This report proposes an improved training algorithm for the network. First, a self-organizing module determines the number of neurons in the hidden layer. Each hidden neuron then holds two pieces of information that describe its characteristics, which allows the network to better capture the stochastic characteristics of the input data.

Using this simulator, the hydrophobic parameter logP of perillartine derivatives was predicted. Six parameters were used as inputs: the five STERIMOL parameters (L, Wl, Wu, Wr, and Wd) and the sweet/bitter activity. Twenty-two sampled compounds were used for training. The network predicts logP accurately; compared with a conventional perceptron network, its learning ability is somewhat higher and its convergence is much faster.

Because the simulator is written in the Java programming language, it does not depend on the machine environment.
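
As a rough illustration of how such a Mahalanobis-distance hidden layer can work, the Java sketch below shows a single hidden unit that stores a cluster mean and a diagonal covariance. This is only one plausible reading of the "two pieces of information" per neuron; the paper does not specify the exact representation, so the class name, fields, and Gaussian-style activation here are assumptions, not the authors' implementation.

    // Minimal sketch of a Mahalanobis-distance hidden unit, assuming each hidden
    // neuron stores a mean vector and a diagonal covariance estimated from the
    // training samples assigned to it by the self-organizing module.
    public class MahalanobisHiddenUnit {
        private final double[] mean;      // centre of the cluster this neuron represents
        private final double[] variance;  // per-dimension variance of the cluster

        public MahalanobisHiddenUnit(double[] mean, double[] variance) {
            this.mean = mean.clone();
            this.variance = variance.clone();
        }

        // Squared Mahalanobis generalized distance from x to this neuron's centre,
        // under the diagonal-covariance assumption.
        public double squaredDistance(double[] x) {
            double d2 = 0.0;
            for (int i = 0; i < mean.length; i++) {
                double diff = x[i] - mean[i];
                d2 += diff * diff / variance[i];
            }
            return d2;
        }

        // Gaussian-style activation: inputs near the stored mean give values near 1.
        public double activate(double[] x) {
            return Math.exp(-0.5 * squaredDistance(x));
        }

        public static void main(String[] args) {
            // Hypothetical three-dimensional input (e.g. a subset of STERIMOL values).
            MahalanobisHiddenUnit unit = new MahalanobisHiddenUnit(
                    new double[]{4.2, 1.5, 1.9},   // cluster mean (assumed values)
                    new double[]{0.5, 0.2, 0.3});  // per-dimension variance (assumed)
            System.out.println(unit.activate(new double[]{4.0, 1.6, 2.0}));
        }
    }

In a full three-layer network of this kind, the outputs of several such hidden units would be combined by a trained output layer to produce the predicted logP value.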
