Lateral inhibition as a conditional entropy enhancer §

Three kinds of redundant sensing appear to be utilized by the majority of living beings. Among these, the most remarkable feature of distributed sensor networks is lateral inhibition (LI), in which each sensor produces an output in proportion to its own excitation and negatively influences its nearest neighbors. This brings about local effects such as contrast enhancement, two-point discrimination, and funneling. In information theory, entropy is a measure of the uncertainty associated with a random variable; Shannon entropy quantifies the expected value of the information contained in a message, usually in units such as bits. The purpose of this study is to analyze the lateral inhibition mechanism in the light of Shannon entropy. This biological mechanism can be adapted to artificial systems such as sensory networks. To adapt it, the aim is to create an information filter that exploits the information-filtering property of lateral inhibition. Information has to be quantified before it can be filtered, and for this purpose the Shannon entropy concept is used.


INTRODUCTION
A distributed sensory network (DSN) consists of a set of geographically scattered sensors and is used to collect data from its environment. DSNs can be incorporated into many advanced systems in robotics, automation, aerospace, etc. Although very practical in applications, the approach has some drawbacks. In a DSN, redundancy is the most important problem to be solved, since it is the main cause of long processing times and excessive processing energy. In highly redundant sensing, redundancy means that the information transferred via different sensors, or via the same sensor at different times, overlaps [1]. Here, lateral inhibition (LI), with its simplicity and ubiquity, is one way to overcome redundancy thanks to its low-pass filter feature.
Ernst Mach was the first to attempt a description of lateral inhibition, based on his experiments. Hartline et al. introduced lateral inhibition by analyzing the faceted compound eye of the horseshoe crab (Limulus) [2]. Barlow, a student of Hartline, continued the study of the horseshoe crab and investigated the influence of lateral inhibition on its behavior [3]. Georg von Békésy found a large spectrum of inhibitory phenomena in sensory systems and explained them in terms of sharpening; he explored systematic effects of lateral inhibition in all aspects of human sensing, especially hearing [4]. Brooks suggested that lateral inhibition may be adapted to robotics [5].
Information can be defined as a representation of knowledge. The information transferred via sensor signals also contains noise and unnecessary data, and for efficient transmission it needs to be filtered. An information filter can be used to remove the undesired information, but to filter information it must first be quantified. Claude E. Shannon introduced entropy, a measure of the uncertainty associated with random variables, to quantify the expected value of information [6].
Harry Nyquist explained that communication channels have maximum data-transmission rates, and formulated a method to compute those rates for finite-bandwidth noiseless channels [7]. Nyquist's colleague Ralph V. L. Hartley used the word "information" as a measurable quantity and established the first mathematical foundations of information theory [8]. Claude E. Shannon, considered the father of information theory, described entropy as the fundamental measure of information [6].
The present paper concerns the formulation of the lateral inhibition mechanism as an information filter. By applying lateral inhibition to a sensory network, the aim is to filter out unnecessary information, exploiting the low-pass filter feature of lateral inhibition in the light of the Shannon entropy concept. The main purpose is to obtain efficient, low-noise information from inferior-quality, cheap sensors through lateral inhibition, instead of using expensive high-quality sensors.

*Corresponding author. Email: syildirim@cu.edu.tr (S. Yıldırım)
§ This paper was presented at the IMSEC-2016

LATERAL INHIBITION
Lateral inhibition is the most important feature of biological distributed sensory networks. In this mechanism each sensor influences its nearest neighbors negatively, resulting in contrast enhancement, two-point discrimination and funneling. Fig. 1 shows a schematic of the lateral inhibition mechanism [5].

Figure 1: Lateral inhibition schematic
The strengths of the connections, as shown above, are generally ordered as excitatory among adjacent receptors and inhibitory among more distant receptors. To put it differently, for any one sensor signal, the inhibitory connections try to decrease the signal while the excitatory connections try to increase it. As a result, every sensor in the network receives a mixture of inhibitory and excitatory signals from its neighbors. Hence the distinction between the signals of the sensors with the strongest outputs and those with weaker outputs becomes greater through this competitive process [9].
In Fig. 1, the impulse of a sensor before lateral inhibition is denoted by e, the outcome of the competitive process, called the effect of lateral inhibition, is denoted by I, and the impulse of the sensor after lateral inhibition is denoted by x, which is weaker than e. Depending on the inhibitory and excitatory coefficients and on the number of neighbors, x varies across the network.

Mathematical Formulation of Lateral Inhibition
The mathematical formulation of LI was obtained from experiments in Hartline's study on the visual system of the horseshoe crab [2]. For two neighboring sensors A and B, the results are summarized by the following equations:

$x_A = e_A - \beta_{AB}(x_B - x_B^0)$
$x_B = e_B - \beta_{BA}(x_A - x_A^0)$

where $x_A$ and $x_B$ are the impulses after the inhibition mechanism, $e_A$ and $e_B$ are the individual impulses, and $x_A^0$ and $x_B^0$ are the threshold frequencies of A and B, respectively. Also, $\beta_{AB}$ is the inhibition coefficient of B on A and $\beta_{BA}$ is the inhibition coefficient of A on B. These results can be extended to cases where one sensor has two or more neighboring sensors. If, for example, the sensors mentioned before (A and B) have another neighbor C, three equations are required to determine the responses, each containing two inhibition terms:

$x_A = e_A - \beta_{AB}(x_B - x_B^0) - \beta_{AC}(x_C - x_C^0)$
$x_B = e_B - \beta_{BA}(x_A - x_A^0) - \beta_{BC}(x_C - x_C^0)$
$x_C = e_C - \beta_{CA}(x_A - x_A^0) - \beta_{CB}(x_B - x_B^0)$

Here $\beta_{ij}$ are the inhibition coefficients among A, B and C. When the equations are extended to describe the effect of lateral inhibition on $n$ sensors and self-excitatory influences are included, they take the form

$x_p = \alpha e_p - \sum_{j=1,\, j \neq p}^{n} \beta_{pj} x_j, \qquad p = 1, 2, \ldots, n,$

where $\alpha > 0$ is the self-excitation coefficient of the sensors and $\beta_{pj} > 0$ are the inhibition coefficients. All threshold frequencies are assumed to be zero for simplicity [9].
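Under the zero-threshold n-sensor form above, the coupled equations are linear in the responses and can be solved directly. The following is a minimal sketch, assuming a uniform inhibition coefficient β; the sensor values and neighbor lists are illustrative, not taken from the paper:

```python
import numpy as np

def lateral_inhibition(e, alpha, beta, neighbors):
    """Solve the coupled equations x_p = alpha*e_p - sum_j beta*x_j,
    where j runs over each sensor's neighbors and thresholds are zero.

    e         : raw sensor impulses
    alpha     : self-excitation coefficient (> 0)
    beta      : inhibition coefficient (> 0), assumed uniform here
    neighbors : dict mapping each sensor index to its neighbor indices
    """
    n = len(e)
    A = np.eye(n)                  # identity accounts for the x_p term
    for p, js in neighbors.items():
        for j in js:
            A[p, j] += beta        # move the beta*x_j terms to the left side
    return np.linalg.solve(A, alpha * np.asarray(e, dtype=float))

# Two mutually inhibiting sensors A and B (alpha = 1 recovers the
# two-sensor Hartline form): each ends up weaker than its raw impulse.
x = lateral_inhibition([10.0, 10.0], alpha=1.0, beta=0.2,
                       neighbors={0: [1], 1: [0]})
```

For the symmetric two-sensor case this reduces to $x = e / (1 + \beta)$, so each response is attenuated by its neighbor's inhibition.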

ENTROPY (IN INFORMATION THEORY)
Entropy was introduced as a measure of the uncertainty associated with a random variable; it quantifies the expected value of the information contained in a message, usually in units such as bits or nats [6]. The Shannon entropy concept measures missing information and defines the uncertainty [10]. The Shannon entropy of a random variable $X$ is

$H(X) = -\sum_{x} P(x) \log_2 P(x)$

where $P(x)$ denotes the probability of the outcome $x$. If the entropy of a signal is low, its probable values occur abundantly, i.e. the signal is highly predictable.
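As a concrete sketch, the entropy can be estimated from a sample of observed signal values by using relative frequencies as probabilities (the sample data below are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Estimate H(X) = -sum_x P(x) * log2 P(x) from observed samples,
    using relative frequencies as the probabilities P(x)."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

shannon_entropy([7, 7, 7, 7])      # a single repeated value: 0 bits
shannon_entropy([0, 1, 0, 1])      # two equally likely values: 1 bit
```

A constant signal carries no uncertainty and hence zero entropy, while a uniformly distributed one maximizes entropy, matching the interpretation above.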

MUTUAL INFORMATION
Mutual information, a dimensionless quantity measured in bits, quantifies the information one random variable carries about another. It can be regarded as the reduction in uncertainty about one random variable given knowledge of another [11]:

$I(X;Y) = H(X) - H(X|Y)$

where $H(X)$ is the uncertainty of $X$ and $H(X|Y)$ is the uncertainty of $X$ given knowledge of $Y$. This relationship is illustrated in Fig. 2.
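A small sketch of this quantity can be computed from a joint probability table using the equivalent identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$; the example distributions below are illustrative:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy[i, j] = P(x_i, y_j),
    via the identity I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1)                     # marginal distribution of X
    p_y = p_xy.sum(axis=0)                     # marginal distribution of Y
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return h(p_x) + h(p_y) - h(p_xy.ravel())

mutual_information([[0.5, 0.0], [0.0, 0.5]])      # fully dependent: 1 bit
mutual_information([[0.25, 0.25], [0.25, 0.25]])  # independent: 0 bits
```

When the two variables are fully dependent, knowing one removes all uncertainty about the other; when they are independent, knowing one removes none.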

Figure 2: Relationship between H(X), H(Y), H(X|Y), H(Y|X) and the mutual information I(X;Y)
If a system has an LI mechanism, mutual information necessarily exists within it, and only the sensors that share mutual information are active. Note that the amount of mutual information depends on the position of the sensor.

LATERAL INHIBITION AS AN INFORMATION FILTER
The lateral inhibition mechanism filters the mutual information, so several factors must be considered when filtering information in sensor networks. First, the frequency of the obtained information should be taken into account, since the frequency is related to the mutual information. The character of the desired or undesired information is another important factor. For reliability, the distance of the sensor from the source matters, and the mutual information may be required to be high. In Fig. 3 the green line represents the sensor signals before LI and the blue line the sensor signals after LI is applied: lateral inhibition filters the amount of information and increases the contrast. Lateral inhibition can therefore be regarded as filtering the mutual information.

COMPUTER SIMULATIONS
To simulate the effect of LI on a sensory network, a group of scattered photodiodes is considered, with a single light source above the central photodiode. In the sensory network it is assumed that every photodiode has 8 neighbors, except those located on the borders. The sensor responses are defined such that the photodiode at the center of the group has the maximum strength; the defined sensor strengths without LI are shown in Table 2.

Table 2. Schematic illustration of the defined sensor strengths without LI

In this simulation the inhibition coefficient (β) was chosen as 0.05 and the excitation coefficient (α) as 0.15. The results after LI is applied to the system can be seen in Table 3, and the sensor strengths with and without LI are compared in Figure 3.

Table 3. Schematic illustration of the sensor strengths after LI

6.5500  6.5000  8.5500  6.5000  6.5500
6.5000 11.6000 12.2000 11.6000  6.5000
8.5500 12.2000 13.7500 12.2000  8.5500
6.5000 11.6000 12.2000 11.6000  6.5000
6.5500  6.5000  8.5500  6.5000  6.5500

As the number of neighbors increases, the maximum amplitude increases as well. This results in contrast enhancement, so discrimination of a desired object becomes easier. If it is assumed that two sensors do not work, and one of the broken sensors is a neighbor of the examined sensors, the simulation results can be seen in Table 5 and Figure 4.

Table 5. Simulation results of the sensor strengths (two sensors broken, shown bold)

6.9500 -2.7000  8.9500  6.5000  6.5500
6.9000 12.0000 12.6000 11.6000  6.5000

In Fig. 4 a decrease in contrast is seen on the right side of the network; however, this does not prevent the whole system from working, and there is no decrease in the maximum contrast.
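A simulation of this kind can be sketched as follows. The raw strengths of Table 2 are not reproduced in the text, so the e-values below are illustrative assumptions, and the single-pass update rule is one plausible reading of the model rather than the paper's exact scheme; only the coefficient values α = 0.15 and β = 0.05 are taken from the paper.

```python
import numpy as np

# Illustrative 5x5 photodiode grid: assumed raw strengths, chosen so that
# the central diode, located under the light source, responds most strongly.
e = np.full((5, 5), 8.0)
e[1:4, 1:4] = 12.0
e[2, 2] = 15.0

alpha, beta = 0.15, 0.05   # coefficients used in the paper's simulation

def apply_li(e, alpha, beta):
    """One plausible single-pass reading of the LI update: each sensor is
    self-excited and inhibited by the summed raw impulses of its (up to 8)
    nearest neighbors."""
    rows, cols = e.shape
    x = np.empty_like(e)
    for i in range(rows):
        for j in range(cols):
            # sum the 3x3 window around (i, j), then remove the sensor itself
            nb = e[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].sum() - e[i, j]
            x[i, j] = (1 + alpha) * e[i, j] - beta * nb
    return x

x = apply_li(e, alpha, beta)
# The central response now stands out more sharply relative to its
# neighbors than in the raw impulses, i.e. contrast is enhanced.
```

Border diodes have fewer neighbors and so receive less inhibition, which is why the edge values in the tables above differ from the interior ones.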
Now suppose this sensory network is considered as a group of thermal receptors. Even if a problem occurs in one of the sensors of the group, the network still operates well. It should also be taken into account that the preserved maximum contrast is a crucial part of this advantage.

CONCLUSIONS
Lateral inhibition provides a series of advantages for filtering undesired information as well as emphasizing the desired information. After processing with LI, the maximum signal intensity decreases and the detected signal sharpens. The LI mechanism reduces the number of active sensors, which lowers the system cost. After LI is applied, signal-processing speed increases for the same processor, and where information is stored the memory requirement is reduced. It may thus reduce costs and increase reliability.
LI also has some disadvantages in practice. The LI mechanism is applicable only in multi-sensor networks, which implies the use of a multitude of sensors; the costs of initial investment and overhaul are therefore expected to be high. LI also increases the weight and volume of the system.
On the other hand, the system possesses a fault-tolerant structure: even if one or more sensors in the sensory network break down, the system can continue its normal operation without major trouble, because mutual information is shared by every individual sensor in the network.