Results 1 - 10 of 7650
[en] A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used to adjust not only the weights but also the signal functions. The network is first mapped to an equivalent one with additional linear units; the non-linearity of these units, and thus their effective presence, is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.
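A minimal sketch of the idea of letting back-propagation shape the signal function itself: each hidden unit's activation interpolates between a linear and a tanh response through a trainable per-unit parameter beta, which starts at zero so the enlarged network is initially equivalent to a linear one. The interpolation rule and all parameter values below are illustrative assumptions, not the paper's exact weight-splitting construction.

```python
# Illustrative sketch (not the paper's exact weight-splitting rule): a hidden unit
# whose signal function interpolates between linear and tanh via a trainable
# parameter beta.  beta starts at 0, so the enlarged network is initially
# equivalent to a linear one; back-propagation then switches on the
# non-linearity only where it helps reduce the error.
import torch

torch.manual_seed(0)
X = torch.randn(64, 2)
y = (X[:, 0] * X[:, 1]).unsqueeze(1)        # a target a purely linear net cannot fit

W1 = torch.randn(2, 8, requires_grad=True)
W2 = torch.randn(8, 1, requires_grad=True)
beta = torch.zeros(8, requires_grad=True)   # per-unit degree of non-linearity

opt = torch.optim.SGD([W1, W2, beta], lr=0.05)
for step in range(2000):
    h_lin = X @ W1
    h = (1 - beta) * h_lin + beta * torch.tanh(h_lin)   # trainable signal function
    loss = ((h @ W2 - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item(), beta.detach())           # non-zero betas mark effectively "new" units
```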
[en] A discrete-time bipolar neural network depending on two parameters is studied. It is observed that its dynamical behaviors can be classified into six cases. For each case, the long-time behavior can be summarized in terms of fixed points, periodic points, basins of attraction, and related initial distributions. Mathematical reasons are supplied for these observations, and applications in cellular automata are illustrated.
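As a hedged illustration of this kind of analysis, the sketch below iterates a small bipolar (±1) network with two coupling parameters and detects whether the trajectory reaches a fixed point or a periodic orbit; the specific update rule and parameters are assumptions, not the model analyzed above.

```python
# Minimal sketch (update rule is an assumption, not the paper's exact model):
# a ring of bipolar (+/-1) units updated synchronously with self-coupling a and
# neighbour coupling b; iterating from a random initial state and recording the
# trajectory reveals a fixed point or a periodic orbit for the chosen (a, b).
import numpy as np

def step(x, a, b):
    # synchronous update: sign of a weighted sum of self and right neighbour
    s = a * x + b * np.roll(x, -1)
    return np.where(s >= 0, 1, -1)

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=8)
a, b = 0.5, -1.0

seen = {}
for t in range(260):                      # 2^8 states, so a repeat must occur
    key = tuple(x)
    if key in seen:                       # trajectory revisits a state ...
        print("period:", t - seen[key])   # ... period 1 means a fixed point
        break
    seen[key] = t
    x = step(x, a, b)
```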
[en] One of the best-known methods used in Neural Network (NN) systems is the Back Propagation (BP) method, in which the Sigmoid function is often used as the unit activation. Theoretically, however, the BP method does not require the use of the Sigmoid function. This paper discusses a BPNN with a non-Sigmoidal function, and it is proved that the non-Sigmoidal BPNN obtains far better results than the Sigmoidal one.
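The sketch below makes the point concrete: back-propagation only needs the derivative of whatever differentiable signal function is chosen, so the Sigmoid can be swapped out. The non-Sigmoidal choice here (sin) and the toy task are assumptions for illustration, not the function or data used in the paper.

```python
# Minimal back-propagation sketch with a pluggable activation: BP only requires
# the derivative of the hidden-layer signal function, not the sigmoid itself.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, :1] * X[:, 1:]                         # simple nonlinear target

act, dact = np.sin, np.cos                      # swap in (sigmoid, sigmoid') to compare
W1 = rng.normal(0, 0.5, (2, 16))
W2 = rng.normal(0, 0.5, (16, 1))

lr = 0.1
for _ in range(3000):
    z = X @ W1
    h = act(z)
    out = h @ W2
    err = out - y                               # dLoss/dout for 0.5 * MSE
    # back-propagate: output layer first, then through the activation derivative
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * dact(z)) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

print("MSE:", float(np.mean((act(X @ W1) @ W2 - y) ** 2)))
```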
[en] The binary perceptron is the simplest artificial neural network formed by N input units and one output unit, with the neural states and the synaptic weights all restricted to ±1 values. The task in the teacher-student scenario is to infer the hidden weight vector by training on a set of labeled patterns. Previous efforts on the passive learning mode have shown that learning from independent random patterns is quite inefficient. Here we consider the active online learning mode in which the student designs every new Ising training pattern. We demonstrate that it is mathematically possible to achieve perfect (error-free) inference using only N designed training patterns, but this is computationally unfeasible for large systems. We then investigate two Bayesian statistical designing protocols, which require 2.3N and 1.9N training patterns, respectively, to achieve error-free inference. If the training patterns are instead designed through deductive reasoning, perfect inference is achieved using N + log₂ N samples. The performance gap between Bayesian and deductive designing strategies may be shortened in future work by taking into account the possibility of ergodicity breaking in the version space of the binary perceptron. (paper)
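For orientation, here is a minimal teacher-student setup for the binary perceptron using passive learning from random patterns and a simple majority-vote (Hebbian) estimate; it illustrates the inference task only and is not one of the Bayesian or deductive pattern-designing protocols studied in the paper.

```python
# Illustrative teacher-student binary perceptron: hidden +/-1 teacher weights,
# random Ising patterns labelled by the teacher, and a majority-vote estimate
# of each weight from the labelled patterns (a standard passive-learning baseline).
import numpy as np

rng = np.random.default_rng(0)
N, P = 101, 501                                  # odd sizes avoid ties in sign()
w_teacher = rng.choice([-1, 1], size=N)

X = rng.choice([-1, 1], size=(P, N))             # random Ising training patterns
y = np.sign(X @ w_teacher)                       # labels from the hidden teacher

w_student = np.sign(X.T @ y)                     # majority vote per weight
print("fraction of correctly inferred weights:", np.mean(w_student == w_teacher))
```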
[en] It is known that both excitatory and inhibitory neuronal networks can achieve robust synchronization only under certain conditions, such as long synaptic delay or a low level of heterogeneity. In this work, robust synchronization is found in an excitatory/inhibitory (E/I) neuronal network with medium synaptic delay and a high level of heterogeneity, which often occurs in real neuronal networks. Two effects of post-synaptic potentials (PSP) on network synchronization are presented, and the synaptic contribution of excitatory and inhibitory neurons to robust synchronization in this E/I network is investigated. It is found that both excitatory and inhibitory neurons may contribute to robust synchronization in E/I networks; in particular, the excitatory PSP has a more positive effect on synchronization in E/I networks than in purely excitatory networks. This may explain the strong robustness of synchronization in E/I neuronal networks. (paper)
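A hedged sketch of the ingredients mentioned above (separate E and I populations, delayed post-synaptic potentials, heterogeneous drive) using a small leaky integrate-and-fire network; the neuron model, coupling strengths, and delay are assumptions chosen for illustration, not the network studied in the paper.

```python
# Minimal E/I leaky integrate-and-fire network with a fixed synaptic delay,
# implemented with a circular spike buffer; illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
N_e, N_i = 80, 20
N = N_e + N_i
dt, T, tau = 0.1, 500.0, 20.0                 # ms
delay_steps = int(5.0 / dt)                   # 5 ms synaptic delay
v_th, v_reset = 1.0, 0.0

I_ext = rng.normal(1.2, 0.1, N)               # heterogeneous suprathreshold drive
J = np.zeros((N, N))
J[:, :N_e] = 0.05 / N_e                       # excitatory PSP amplitude
J[:, N_e:] = -0.2 / N_i                       # inhibitory PSP amplitude

v = rng.uniform(0, 1, N)
spike_buffer = np.zeros((delay_steps, N), dtype=bool)   # spikes in transit
counts = []

for step in range(int(T / dt)):
    delayed = spike_buffer[step % delay_steps].astype(float)
    psp = J @ delayed                          # delta-pulse PSPs from delayed spikes
    v += dt / tau * (-v + I_ext) + psp
    fired = v >= v_th
    v[fired] = v_reset
    spike_buffer[step % delay_steps] = fired   # these spikes arrive after the delay
    counts.append(fired.sum())

print("mean spikes per time step:", np.mean(counts))
```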
[en] Speckle is a major quality degrading factor in optical coherence tomography (OCT) images. In this work we propose a new deep learning network for speckle reduction in retinal OCT images, termed DeSpecNet. Unlike traditional algorithms, the model can learn from training data instead of manually selecting parameters such as noise level. The proposed deep convolutional neural network (CNN) applies strategies including residual learning, shortcut connection, batch normalization and leaky rectified linear units to achieve good despeckling performance. Application of the proposed method to the OCT images shows great improvement in both visual quality and quantitative indices. The proposed method provides good generalization ability for different types of retinal OCT images. It outperforms state-of-the-art methods in suppressing speckles and revealing subtle features while preserving edges. (paper)
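A sketch of a residual despeckling CNN in the spirit described above (residual learning with a shortcut connection, batch normalization, leaky rectified linear units), written in PyTorch; the layer counts and channel widths are assumptions, not the published DeSpecNet architecture.

```python
# Illustrative residual despeckling CNN: the body predicts the speckle component
# and the shortcut connection subtracts it from the noisy input.
import torch
import torch.nn as nn

class DespeckleCNN(nn.Module):
    def __init__(self, channels=1, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.LeakyReLU(0.1)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.LeakyReLU(0.1)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # residual learning: output = noisy input minus predicted speckle
        return x - self.body(x)

model = DespeckleCNN()
noisy = torch.rand(1, 1, 64, 64)            # stand-in for a noisy OCT B-scan
print(model(noisy).shape)                   # torch.Size([1, 1, 64, 64])
```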
[en] This paper studies the problem of delay-dependent passivity for uncertain neural networks (UNNs) with discrete and distributed delays. Free weighting matrices and multiple integral terms, which would increase the number of linear matrix inequalities (LMIs) and scalar decision variables, are not used. By constructing a suitable Lyapunov–Krasovskii functional (LKF) and combining it with the reciprocally convex approach, some sufficient conditions are established in terms of LMIs. Compared with existing results, the derived criteria are more effective due to the delay-partitioning approach, which takes full account of the available information in the various delay intervals. Two simulation examples are given to illustrate the effectiveness of the proposed method.
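To show how criteria of this kind are checked numerically, the sketch below solves a basic Lyapunov LMI (A^T P + P A < 0 with P > 0) for a delay-free linear system using cvxpy; it illustrates the LMI machinery only and is not the delay-dependent passivity criterion derived in the paper.

```python
# Illustrative LMI feasibility check with cvxpy: find a symmetric P > 0 with
# A^T P + P A < 0 for a stable test matrix A (a standard Lyapunov LMI).
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)
```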
[en] Data pre-processing is widely used to enhance the classification of gases. However, it suppresses the concentration variances of different gas samples. The classical solution of using a single artificial neural network (ANN) architecture is also inefficient and yields degraded quantification. In this paper, a novel modular ANN design has been proposed to provide an efficient and scalable solution in real time. Here, two separate ANN blocks, viz. a classifier block and a quantifier block, have been used to provide efficient and scalable gas monitoring in real time. The classifier ANN consists of two stages. In the first stage, Net 1-NDSRT has been trained to transform raw sensor responses into corresponding virtual multi-sensor responses using the normalized difference sensor response transformation (NDSRT). These responses have been fed to the second stage (i.e., Net 2-classifier). The Net 2-classifier has been trained to assign various gas samples to their respective class. Further, the quantifier block has parallel ANN modules, multiplexed to quantify each gas. Therefore, the classifier ANN decides the class and the quantifier ANN decides the exact quantity of the gas/odor present in the respective sample of that class. (paper)
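A minimal sketch of the modular arrangement: a classifier network assigns a sample to a gas class, and a separate per-gas quantifier network then estimates its concentration. The synthetic data, the scikit-learn networks, and the omission of the NDSRT pre-processing stage are all assumptions for illustration, not the paper's design.

```python
# Illustrative classifier/quantifier modular pipeline on synthetic sensor data.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
n_gases, n_sensors = 3, 8

# synthetic responses: each gas has a characteristic pattern scaled by concentration
patterns = rng.uniform(0.5, 1.5, (n_gases, n_sensors))
labels = rng.integers(0, n_gases, 600)
conc = rng.uniform(1.0, 10.0, 600)
X = patterns[labels] * conc[:, None] + rng.normal(0, 0.05, (600, n_sensors))

classifier = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
quantifiers = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
               .fit(X[labels == g], conc[labels == g]) for g in range(n_gases)]

sample = X[:1]
gas = int(classifier.predict(sample)[0])          # classifier block decides the class
amount = quantifiers[gas].predict(sample)[0]      # matching quantifier block decides the amount
print(gas, amount)
```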
[en] This paper describes the simulation of a generalized net model of the ART2 neural network. The article presents the test process of the learned network; to achieve this, the learned bottom-up and top-down weight values are carried by two tokens. When all tokens are initialized, the network is ready to be started and the input vector is presented to it. The article also shows how each token changes its values during the test process. At the end there are two cases: either the input vector is accepted by a cluster, or it is rejected. Key words: generalized nets, neural networks, adaptive resonance theory
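A simplified sketch of the accept-or-reject decision at the end of the test process: the input is matched against each cluster's learned prototype and accepted only if the best match exceeds a vigilance threshold. This is a generic ART-style matching rule for illustration, not the generalized-net ART2 model simulated in the paper.

```python
# Illustrative ART-style vigilance test: accept the input into the best-matching
# cluster if the cosine match exceeds the vigilance, otherwise reject it.
import numpy as np

def test_input(x, prototypes, vigilance=0.9):
    x = x / np.linalg.norm(x)
    sims = prototypes @ x                       # cosine match with each cluster prototype
    best = int(np.argmax(sims))
    return best if sims[best] >= vigilance else None   # None = input rejected

prototypes = np.array([[1.0, 0.0, 0.0],
                       [0.0, 0.7, 0.7]])
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

print(test_input(np.array([0.9, 0.1, 0.0]), prototypes))   # accepted by cluster 0
print(test_input(np.array([0.1, 0.1, 0.9]), prototypes))   # rejected (returns None)
```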
[en] We consider N-variable binary optimization problems. The solution space is confined to the hypercube 0 ≤ v_i ≤ 1, where the v_i are the variables of the problem (and also the states of the corresponding neurons). The feasible solutions are assumed to lie at the corners of the hypercube. (author)
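A minimal sketch of this setting: the binary problem is relaxed to the hypercube 0 ≤ v_i ≤ 1 and graded neuron updates with a slowly increasing gain push the state toward a corner, which is then rounded to a binary solution. The energy function and dynamics are illustrative Hopfield-style choices, not the particular formulation of the paper.

```python
# Illustrative continuous relaxation of a binary optimization problem: graded
# neuron states in [0, 1] are iterated with increasing gain so that the final
# state approaches a corner of the hypercube.
import numpy as np

rng = np.random.default_rng(0)
N = 6
W = rng.normal(0, 1, (N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
b = rng.normal(0, 1, N)

def energy(v):
    return -0.5 * v @ W @ v - b @ v

v = np.full(N, 0.5)                              # start at the centre of the hypercube
for t in range(500):
    gain = 1.0 + 0.05 * t                        # increasing gain drives v toward a corner
    v = 1.0 / (1.0 + np.exp(-gain * (W @ v + b)))  # graded update keeps 0 <= v_i <= 1

solution = (v > 0.5).astype(float)               # round to the nearest corner
print(solution, energy(solution))
```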