
International Journal on New Computer Architectures and Their Applications (IJNCAA) 2(1): 8-22. The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2220-9085)

Tracing one neuron activity from the inside of its structure

Luciana Morogan
Military Technical Academy, Bucharest, Romania
morogan.luciana@gmail.com

ABSTRACT

As new areas of neural computing try to make at least one step beyond the definition of digital computing, the neural networks field was developed around the idea of creating models of real neural systems. The key point is based on learning rather than programming. We designed the model presented in this paper based on the needs introduced by new trends in neural computing. We created a model for information processing inside neurons. It is constructed, from a neural inside point of view, as a feedback system that controls the flow of information passing through the neuron. Processes of internal neural learning take place; we translate them as processes of learning at the molecular level. Our work culminates in the construction of a learning algorithm, which we introduce at the end of this paper.

Keywords: Neural networks; hybrid intelligent systems; nature inspired computing technique; evolutionary computing.

I. INTRODUCTION

In classical neural networks (see [1], [2], [3], [4], or [5]) the electrical information exchanged between neurons is usually viewed as physical information. For our model, this was not enough, because of our need to deal with the internal processes taking place inside one neuron device of a neural-like network. In order to materialize this kind of internal information, we introduced molecular information as objects or object compounds (we already detailed this aspect in [6] and [7]). They are viewed as both computational and information-carrying elements. In this manner, one neuron device is itself a complex system for information processing, making use of object molecules that offer support for computational processes [6]. A very important role played by these objects can also be underlined. The specific internal structure [6] of each neuron can be modeled as a neuron nucleus containing its genetic material (the role of genetic control is presented in the second section of this article). This can be represented in our model as different objects that are grouped together in order to express specific properties that materialize the neuron genetics. The way this kind of information can be represented and used will be further detailed in the fourth section of this paper, while in the third section we introduce the platform on which our model is based. We also need to introduce, in the second section, some genetic background in order to explain our choices in the construction of one neuron potential. The way information


can be controlled in modeling the neuron behavior and an algorithm of the learning process at the molecular level are also presented in the final section.

A network is created by placing a finite set of neuron devices in the nodes of a finite directed graph. A neural-like system structure is represented by an ensemble composed of network units. In addition, we create a parallel distributed communication network of networks of neurons. Two types of interactions, translated as neural communication, are possible: local interactions (between neurons) and global interactions (between networks, or between neurons of different networks). The means by which communication is realized is a finite set of directed links between neurons in this graph. The directed links are called synapses. For this model, communication with the surrounding environment is also necessary; we use the same term, synapse, for the directed communication channel from one neuron to the environment and vice versa. The system may evolve or involve depending on the creation or deletion of synapses in a network during the computations. The design of the binding affinities between neurons was already described in [8], [9] and [10], as not any neuron can bind to any other neuron. Connections in a network depend on sufficient quantities of the corresponding substrates inside the neurons and on the compatibilities between the types of the transmitters in the presynaptic neurons and those of the receptors in the postsynaptic neurons (the transmitters and receptors are specific protein-like objects in the pre- and postsynaptic neurons respectively, in accordance with the real biological transmitter and receptor proteins [11], [12], [13]). Synapse weights, representing degrees of the binding affinities, are assigned

to each synapse. The value of the binding affinity degree is considered to be zero for neurons with no synapse between them and for the so-called synapses between neurons and the environment.

The main idea (already introduced in [14]) is focused on potential changes (or mutations) that may occur inside one neuron, influencing the expression of its genetic material. Those changes, as in real biological cases, will influence the binding degrees (between neurons and between neurons and the environment), inducing in this way modified synaptic communications and/or modifications of the network structure (see [9] and [10]). By analyzing the neuron activity rate, we can control the flow of information and the effect of the events. One result is that modifications of the synaptic communications will also determine the neuron to adapt to its inputs, modeling in this way its behavior (we first mentioned this idea in [8]). The neuron ability to adapt to its inputs leads to a process of learning at the molecular level and defines the neuron as a feedback control system. The implications of such a dynamic neuron design for information processing are introduced.
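As a rough illustration of the network structure described above, the following minimal Python sketch represents neurons as nodes of a directed graph and synapses as weighted directed edges, with zero weight standing both for a missing synapse and for the links to and from the environment. All names and numeric values are ours, chosen only for the example, and are not part of the formal model.

    # Illustrative sketch of the directed network of neuron devices.
    # "e" stands for the surrounding environment; weights are binding
    # affinity degrees, with 0 meaning "no synapse" (also used for the
    # neuron-environment links, as stated in the text).

    ENV = "e"

    class Network:
        def __init__(self):
            self.neurons = set()
            self.weights = {}              # (pre, post) -> affinity degree

        def add_neuron(self, name):
            self.neurons.add(name)

        def set_synapse(self, pre, post, degree):
            # degree > 0 creates/updates syn_{pre,post}; degree == 0 deletes it
            if degree > 0:
                self.weights[(pre, post)] = degree
            else:
                self.weights.pop((pre, post), None)

        def weight(self, pre, post):
            return self.weights.get((pre, post), 0)

    net = Network()
    for n in ("n1", "n2"):
        net.add_neuron(n)
    net.set_synapse("n1", "n2", 3)          # syn_12 with affinity degree 3
    print(net.weight("n1", "n2"), net.weight("n2", "n1"), net.weight(ENV, "n1"))
    # -> 3 0 0  (no reverse synapse; environment links carry weight 0)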


II. VIEW FROM INSIDE ONE NEURON: REPRESENTATION AND THE ROLE OF GENETIC CONTROL

In order to understand the processes we model in our neuron functional design, we felt the need to introduce some biological explanations about what genetic control means (see [15] or [16]). Neurons are controlled by DNA strands forming chromosomes in the nucleus of the neuronal cell. All neurons have the same chromosomes and the same DNA. What makes the difference then? The answer is found in the fact that differentiation is realized at the level of the DNA portions that are being expressed by different genes, depending on the proteins with which they are associated and delimited. One may say that the proteins are those that govern the neuron structure and functionality. Two classes of genes are already known: structural genes, which produce both functional proteins and messenger RNA (mRNA), and regulatory genes, which precisely control the expression of the first class of genes, the structural genes. The process of mRNA production (the mRNA leaving the neuron nucleus to form proteins or enzymes in the cell by a process called translation) is called transcription. mRNA production from a DNA template is catalyzed by an enzyme. The process of transcription begins with the attachment of this enzyme to a regulatory region in the DNA called the promoter. At another location in the DNA a so-called super-gene is found that codes the regulatory genes. In short, during the processes of neuronal differentiation and neuronal development, one super-regulatory gene can govern other genes

which in their turn can activate subfamilies of regulatory genes.

In biology, there are processes that lead to internal neuronal changes in the case of indirect activation of the transfer channels of the postsynaptic neuron (see [15] or [16]). To explain the biological phenomena, we must say that the action of the transmitters in the postsynaptic neurons does not depend on their chemical nature, but on the properties of the receptors of the binding neurons. It is the receptors that determine whether the synapse is excitatory or inhibitory (an excitatory synapse is a synapse in which a spike in the presynaptic cell increases the probability of a spike occurring in the postsynaptic cell). The receptors also determine the way transfer channels are activated. In the case of direct gating they quickly produce synaptic actions. In the case of indirect gating the receptors produce slow actions that usually serve for modeling the neuron behavior by modifying the affinity degree of the receptors. For our model we chose precisely this second case, as the biological secondary messenger (we refer to [16] for the explanation of this term) has a profound effect on the alteration of gene expression. In this manner, we chose to model below learning at the molecular level. This is defined as the process that simulates the neuron device ability to adapt to its inputs by altering the expression of its genes. The neuron input, represented by transmitter-like objects, is modeled below as being capable of determining internal changes in the neuron structure and of influencing the neuron device behavior. A genetic internal timing is associated with the neuron device. In normal conditions, this internal timing


leads to a gene activation. In abnormal conditions it leads to gene malfunction. The robustness comes at two stages: non-functional processes are changeable by feedback mechanisms, and non-functional neurons are killed.

III. BACKGROUND FOR MODELING THE NEURON BEHAVIOR

The definitions in this section were already introduced by the author in a series of published papers (see [8], [9], or [10]). We recall them here because they represent the background for modeling the neuron behavior. One neuron in the network represents a system for information processing. It offers objects (the equivalents of biological protein molecules) as a support for computational processes. We introduced the tree-like architecture of such a neuron in [6] as the construction τ(n) = (σ_{0,j0}, σ_{1,j1}, ..., σ_{r,jr}), where the finite number of r compartments are structured as a hierarchical tree-like arrangement. Their role is to delimit protected regions as finite spaces and to represent supports for processes involving the objects that are embedded inside ([6] is recommended for more details). σ_{0,j0}, found at the j0 = 0 level of the tree root, is the inner finite space of the neuron (containing all the other regions). The inner hierarchical arrangement of the neuron is represented by the construction σ_{1,j1}, ..., σ_{r,jr}, such that for j1, ..., jr natural numbers, not necessarily distinct, with jk ≤ r for k ∈ {1, ..., r}, we define the proper depth level of the k-th compartment. We may have a maximum of r inner depth levels. The initial neuronal architecture of n and the objects found in each of its compartment regions are described by the initial configuration. Formally, this is represented by (σ_{0,j0} : o0, σ_{1,j1} : o1, ..., σ_{r,jr} : or). For each σ_{k,jk}, with k ∈ {1, ..., r}, ok may be written in the form of the string of objects a1^{m1} a2^{m2} ... ap^{mp}, where p ∈ N is finite. Each ai^{mi} represents the quantity mi (mi ∈ N) of ai found in the region σ_{k,jk} of the neuron n. If mi = *, then we face an arbitrary finite number of copies of ai.
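The tree-like architecture and the initial configuration just described can be pictured with the short Python sketch below. It is only a data-structure illustration under our own naming; the compartment labels, objects and multiplicities are invented for the example, and "*" encodes an arbitrary finite multiplicity as in the text.

    # Tree-like neuron architecture: each compartment sigma_{k,jk} holds a
    # multiset of objects, and the root sigma_{0,0} contains all other regions.

    class Compartment:
        def __init__(self, name, depth):
            self.name = name               # e.g. "sigma_{0,0}"
            self.depth = depth             # proper depth level
            self.objects = {}              # object symbol -> multiplicity (int or "*")
            self.children = []             # nested compartments

        def add_child(self, comp):
            self.children.append(comp)

    # initial configuration (sigma_{0,0}: o0, sigma_{1,1}: o1, sigma_{2,1}: o2)
    root = Compartment("sigma_{0,0}", 0)   # inner space of the neuron
    c1 = Compartment("sigma_{1,1}", 1)
    c2 = Compartment("sigma_{2,1}", 1)
    root.add_child(c1)
    root.add_child(c2)

    root.objects = {"a1": 4, "a2": "*"}    # a1^4 a2^*, "*" = arbitrary finite copies
    c1.objects = {"a3": 2}                 # a3^2
    c2.objects = {"a1": 1, "a4": 5}        # a1^1 a4^5
    print(len(root.children), c2.objects)  # -> 2 {'a1': 1, 'a4': 5}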


The model of synaptic formation was already introduced in [9] and [10]. We underline a restriction coming from the biology field, which represents the main idea we focused on in modeling the synaptic formation: not any neuron can bind to any other neuron. We consider N, the finite set of neurons composing a network, a network contained in the system structure. The following finite sets are defined: (1) OC_ni is the set of all clusters of object compounds/complexes of the neuron ni, and OC_nj the set of all clusters of object compounds/complexes of the neuron nj (OC_ni ∩ OC_nj ≠ ∅). (2) OC_nti ⊆ OC_ni is the set of all neurotransmitters of ni and OC_rj ⊆ OC_nj the set of all receptors of nj. (3) For easier manipulation of the sets in (2), we refer to the set of neurotransmitters of ni by OC_nt and to the set of receptors of nj by OC_r.

A finite set L of labels is considered. There is a labeling function of each object compound/complex defined by l : OC_n → L, where n is either ni or nj. Tr = l(OC_nt) ∈ 2^L is the set of labels of all neurotransmitters of ni and R = l(OC_r) ∈ 2^L the set of labels of all receptors of nj. We consider T = {i·δ | i ∈ N, δ = 1/k, k ∈ N* fixed} the set of discrete times. For all ai ∈ OC_n found in the region σ_{k,jk} of n, with mi its multiplicity, the quantity found in the substrate (in the region σ_{k,jk} of neuron n) of one organic compound/organic complex ai at the computational time t, t ∈ T, is the function C_{n:σ_{k,jk}} : T × OC_n → N ∪ {*} defined by C_{n:σ_{k,jk}}(t, ai) = mi, if mi represents the number of copies of ai in the substrate, and C_{n:σ_{k,jk}}(t, ai) = *, if there is an arbitrary finite number of copies of ai in the substrate.

A few properties of the quantities of organic compounds/organic complexes in the substrate are presented below: 1) The quantity of substrate found in σ_{k,jk} at the computational time t, t ∈ T, is C_{n:σ_{k,jk}}(t) = C_{n:σ_{k,jk}}(t, a1) + C_{n:σ_{k,jk}}(t, a2) + ... + C_{n:σ_{k,jk}}(t, ap) = m1 + m2 + ... + mp

for all mi ∈ N. If there is i ∈ {1, ..., p} such that C_{n:σ_{k,jk}}(t, ai) = *, then C_{n:σ_{k,jk}}(t) = *. 2) If at the moment t we have C_{n:σ_{k,jk}}(t, ai) = mi and at a later time t' a new quantity m'i of ai was produced, supposing that in the discrete time interval [t, t'] no object ai was consumed, then C_{n:σ_{k,jk}}(t', ai) = C_{n:σ_{k,jk}}(t, ai) + m'i = mi + m'i. 3) If at the moment t we have C_{n:σ_{k,jk}}(t, ai) = mi and at a later time t' a quantity m'i of ai was consumed, supposing that in the discrete time interval [t, t'] no object ai was produced, then C_{n:σ_{k,jk}}(t', ai) = C_{n:σ_{k,jk}}(t, ai) - m'i = mi - m'i. We make the observation that mi - m'i ≥ 0 (mi ≥ m'i), because no more can be consumed than exists. 4) If at the moment t we have C_{n:σ_{k,jk}}(t, ai) = mi and at a later time t' we have C_{n:σ_{k,jk}}(t', ai) = m'i, supposing that in the discrete time interval [t, t'] no object ai was produced or consumed, then: a) if mi < m'i we say that there is a rise in the quantity of ai; b) if mi > m'i we say that there is a decrease in the quantity of ai; c) if mi = m'i we say that no changes occurred in the quantity of ai.
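A minimal Python sketch of the substrate quantity function and of properties 1-4 is given below; the class and method names are ours, and the initial multiplicities are illustrative.

    # Substrate quantities in one compartment, following properties 1-4:
    # "*" denotes an arbitrary finite number of copies and absorbs sums.

    STAR = "*"

    class Substrate:
        def __init__(self, initial):
            self.q = dict(initial)             # object -> multiplicity or "*"

        def quantity(self, obj):               # C_{n:sigma}(t, obj)
            return self.q.get(obj, 0)

        def total(self):                       # property 1: sum over all objects
            if any(m == STAR for m in self.q.values()):
                return STAR
            return sum(self.q.values())

        def produce(self, obj, amount):        # property 2
            if self.q.get(obj, 0) != STAR:
                self.q[obj] = self.q.get(obj, 0) + amount

        def consume(self, obj, amount):        # property 3: cannot consume more
            m = self.q.get(obj, 0)             # copies than exist
            if m == STAR:
                return
            if amount > m:
                raise ValueError("cannot consume more copies than exist")
            self.q[obj] = m - amount

    s = Substrate({"a1": 3, "a2": STAR})
    s.produce("a1", 2)
    s.consume("a1", 4)
    print(s.quantity("a1"), s.total())         # -> 1 *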


The binding affinity depends on a sufficient quantity of substrate and on the compatibility between the transmitter type and the receptor type at the time t of synaptic formation (t ∈ T). The sufficient quantity of substrate is an admitted fair ratio r between the number of neurotransmitters released by ni into the synapse and the number of receptors of the receiver neuron nj. Without restraining generality, we consider the sufficient quantity of substrate at the time t (t ∈ T) of synaptic formation as a boolean function evaluating this ratio. For trans ∈ Tr with m its multiplicity (where for a ∈ OC_nt with l(a) = trans and trans ∈ Tr we have C_{ni:σ_{0,0}}(t, a) = m) and rec ∈ R with n its multiplicity (where for b ∈ OC_r with l(b) = rec and rec ∈ R we have C_{nj:σ_{0,0}}(t, b) = n), we define q_t : Tr × R → {0,1}, q_t(trans, rec) = 0 if m/n ≠ r, and q_t(trans, rec) = 1 if m/n = r. The compatibility function is a surjective function C_t : Tr × R → {0,1}, such that for any trans ∈ Tr and any rec ∈ R, C_t(trans, rec) = 0 if trans and rec are not compatible, and 1 otherwise. The binding affinity function is the one that models the connection affinities between neurons by mapping an affinity degree to each possible connection. We consider W_t ⊆ N the set of all affinity degrees (at time t),

W_t = {w^t_ij ∈ N | i, j ∈ {1, 2, ..., |N|}, ni, nj ∈ N}. For any ni, nj ∈ N, trans ∈ Tr (the transmitter type of neuron ni), rec ∈ R (the receptor type of neuron nj), C_t(trans, rec) = x with x ∈ {0,1}, and q_t(trans, rec) = y with y ∈ {0,1}, we design the binding affinity function as a function A^t_f : (N × N) × (Tr × R) × C_t(Tr × R) × q_t(Tr × R) → W_t, where A^t_f((ni, nj), (trans, rec), x, y) = 0, if x = C_t(trans, rec) = 0 and y ∈ {0,1}; A^t_f((ni, nj), (trans, rec), x, y) = 0, if (x = C_t(trans, rec) = 1) and (y = q_t(trans, rec) = 0); A^t_f((ni, nj), (trans, rec), x, y) = w^t_ij, if (x = C_t(trans, rec) = 1) and (y = q_t(trans, rec) = 1) and (w^t_ij ≠ 0).

Theorem 1 (The binding affinity theorem): For Pi ⊆ Pb_ni a finite set of biochemical processes of neuron ni by which it produces a multiset of neurotransmitters of type trans (trans ∈ Tr) at the computational time t (t ∈ T), and, at the same time, for Pj ⊆ Pb_nj a finite set of biochemical processes of neuron nj by which it produces a multiset of receptors of type rec (rec ∈ R), we say that there is a binding affinity between ni and nj with the binding affinity degree w^t_ij (w^t_ij ≠ 0) if and only if there is w^t_ij ∈ W_t* such that A^t_f((ni, nj), (trans, rec), 1, 1) = w^t_ij.
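The functions q_t, C_t and A^t_f can be sketched in Python as follows. The compatibility table and the admitted ratio r are illustrative placeholders chosen by us, not values prescribed by the model.

    # Binding affinity between a presynaptic neuron n_i and a postsynaptic
    # neuron n_j, following q_t, C_t and A^t_f above.

    R_RATIO = 2.0                                # admitted fair ratio r (example)
    COMPATIBLE = {("trans_A", "rec_A")}          # assumed compatibility relation

    def q_t(m_trans, n_rec):
        """Sufficient-quantity test: 1 iff m/n equals the admitted ratio r."""
        return 1 if n_rec and m_trans / n_rec == R_RATIO else 0

    def c_t(trans, rec):
        """Compatibility test between a transmitter type and a receptor type."""
        return 1 if (trans, rec) in COMPATIBLE else 0

    def affinity(trans, rec, m_trans, n_rec, w_ij):
        """A^t_f: returns the binding affinity degree w_ij only when both the
        compatibility and the sufficient-quantity conditions hold."""
        x, y = c_t(trans, rec), q_t(m_trans, n_rec)
        if x == 1 and y == 1 and w_ij != 0:
            return w_ij
        return 0

    print(affinity("trans_A", "rec_A", 4, 2, 5))   # -> 5 (synapse can form)
    print(affinity("trans_A", "rec_B", 4, 2, 5))   # -> 0 (incompatible types)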



Definition 1: For any ni, nj ∈ N, trans ∈ Tr (the transmitter type of neuron ni) and rec ∈ R (the receptor type of neuron nj) at time t, if there is a binding affinity between ni and nj with the binding degree w^t_ij (w^t_ij ∈ W_t) and w^t_ij ≠ 0, then we say that there is a connection formed from ni to nj. This connection is called the synapse between the two neurons and is denoted by syn_ij. Each synapse has a synapse weight w^t_ij = w, w ∈ N*. If instead of ni we have e, then the synapse syn_ej represents the directed link from the environment to the neuron nj, and if instead of nj we have e, then the synapse syn_ie represents the directed link from the neuron ni to the environment. These synapse weights are considered to be zero (w^t_ie = w^t_ej = 0).

Definition 2: We say that there is no connection from neuron ni to neuron nj if and only if w^t_ij = 0.

Theorem 2 (One way synaptic direction theorem): For Pi ⊆ Pb_ni a finite set of biochemical processes of neuron ni by which it produces a multiset of neurotransmitters of type trans and receptors of type rec', at the computational time t (t ∈ T), and at the same time for Pj ⊆ Pb_nj a finite set of biochemical processes of neuron nj by which it produces a multiset of receptors of type rec and neurotransmitters of type trans', if there is w^t_ij in W_t with w^t_ij ≠ 0 such that A^t_f((ni, nj), (trans, rec), 1, 1) = w^t_ij, then there is no w^t_ji in W_t with w^t_ji ≠ 0 such that A^t_f((nj, ni), (trans', rec'), 1, 1) = w^t_ji. (If there is w^t_ji in W_t such that A^t_f((nj, ni), (trans', rec'), 1, 1) = w^t_ji, then w^t_ji = 0.)
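Definitions 1-2 and Theorem 2 can be summarized operationally by the small sketch below; it only checks which of the two possible directions receives a non-zero weight, assuming the affinity degrees have already been computed, and the priority given to the i to j direction when both candidates happen to be non-zero is our own simplification.

    # Synapse creation following Definitions 1-2 and Theorem 2: a connection
    # syn_ij exists iff its affinity degree is non-zero, and a non-zero degree
    # from n_i to n_j excludes a non-zero degree in the opposite direction.

    def form_synapses(affinity_ij, affinity_ji):
        """Return the pair of synapse weights (w_ij, w_ji), keeping at most
        one direction non-zero (one-way synaptic direction)."""
        if affinity_ij != 0:
            return affinity_ij, 0
        if affinity_ji != 0:
            return 0, affinity_ji
        return 0, 0                        # Definition 2: no connection either way

    print(form_synapses(5, 0))             # -> (5, 0): only syn_ij is formed
    print(form_synapses(0, 0))             # -> (0, 0): no synapse at all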

IV. A WAY OF INFORMATION CONTROL IN MODELING THE NEURON BEHAVIOR

The neuron genome is represented as a memory register of the results of the previous information processing and of the effect of events. The temporary memory of all information about both the cellular and the surrounding environment, and the partial recording of the results of the previous computations, are translated as DNA sequences. From the information processing point of view, for each neuron in N the following are considered of great importance:

1 - the neuron architecture and the corresponding initial neuron configuration. For each compartment, initial objects are introduced and processes handling objects are defined;

2 - the initial quantities found in the substrates of each compartment;

3 - the neuron genetics, represented below by: the neuron genome, in terms of the totality of all genes encoding the genetic information contained in the DNA (we will refer to it by G); the genetic code, represented as the set G = (g1, ..., gk), k ∈ N*, of genes composing the genome, along with their appropriate expressions (defined later in this paper); the controllers of gene expression, as a set of objects representing the class of gene controllers, denoted by C = (c1, ..., cq), q ∈ N; and the genetic information, represented by all information about the cellular and external environment.

An internal timing that sets up the neuron activity rate is considered. An important observation must be underlined: this internal timing must not be confused with the computational



timing, which represents the time unit for both internal processing and the neural-network manner of external electrical exchanges of spikes between neurons. One may say that the computational timing may be the real time. The reason is found again in biology, where the molecular timescale is measured in picoseconds (10^-12 s). On such timescales chemical bonds are forged or broken, developing in this way the physical process that we call life. The internal timing is characterized by an intracellular feedback loop. It is measured between consecutive activations of groups of genes. On activation, the expression of a group of genes encodes proteins, after which the genes will be turned off until the next activation. The computational timing was defined as the set T = {i·δ | i ∈ N, δ = 1/k, k ∈ N* fixed}, while the internal timing T_nR, T_nR ⊆ T, that sets up the neuron activity rate, is T_nR = {T_k | k ∈ N, T_k = t_i, i ∈ N, t_i ∈ T}, where T_k represents the moment of a gene (group of genes) activation. Corresponding to the computational time t_i, i ∈ N, in parallel, we may deal with an activation of a gene (group of genes) at time T_k (T_k denoted t_i); the gene (group of genes) is then turned off at a later moment in time t_{i+p}, 0 < p < j, p, j ∈ N. More than one computation may take place until the next activation of either the same gene (group of genes) or a different one (group) at T_{k+1} (T_{k+1} denoted t_j).
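A small Python sketch of the two timings is given below, assuming an arbitrary fixed k and arbitrary activation moments; it only illustrates how the internal timing T_nR selects some of the computational times and how the intervals between consecutive activations are measured.

    # Computational timing T = { i*delta } versus the internal timing T_nR,
    # which only keeps the moments at which a gene (group of genes) is
    # activated.  All numeric values below are illustrative.

    K_FIXED = 4                          # fixed k, so delta = 1/k
    DELTA = 1.0 / K_FIXED

    def computational_time(i):
        return i * DELTA                 # t_i in T

    # internal timing: indices i at which activations occur (T_k = t_i)
    activation_indices = [2, 7, 13]
    T_nR = [computational_time(i) for i in activation_indices]

    # the internal timing is measured between consecutive activations of the
    # same gene (group of genes)
    intervals = [b - a for a, b in zip(T_nR, T_nR[1:])]
    print(T_nR)         # -> [0.5, 1.75, 3.25]
    print(intervals)    # -> [1.25, 1.5]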

The internal timing is measured between consecutive activations of the same gene (group of genes). If in the discrete time interval [T_k, T_{k+1}] the neuron device computes starting from t_i to t_j (t_i, t_{i+1}, ..., t_j), then this time interval will represent the internal neuron timing characterizing the gene (group of genes) that was activated at T_k. This is illustrated in Figure 1. We generalize the assumptions presented in the above statements by presenting two successive activations of all genes of the genome G. We underline the fact that, although the arrangement of the genes in the genome is (g1, ..., gk), below they will be arranged following the moment of their expression.

Figure 1. The internal timing setting up the neuron activity rate (measured between consecutive activations of the same gene/group of genes) is not the computational timing. In parallel, corresponding to the computational time t_i, the activation of a gene (group of genes) may take place at the internal time T_k. At time t_{i+p}, 0 < p < j, the gene (group of genes) is turned off. More than one computation may take place between consecutive activations of the same gene (group of genes) or a different one (group). T_{k+1} is considered such that T_{k+1} = t_j. In the discrete time interval [T_k, T_{k+1}] the neuron device may compute starting from t_i, t_{i+1}, ..., t_j.

We denote by T_i^g the time of the activation of the gene g and by T_{i+1}^g the time of the next activation of the same gene g. For k1 + k2 + ... + kr = k and for each g_{jl}, with j ∈ {1, ..., r} and l ∈ {1, ..., kj}, representing genes that are activated at the same time T_{ij}, we have:

At one activation of all genes:
at time T_{i1}, the genes considered to be activated are (g_{11}, g_{12}, ..., g_{1k1}); their corresponding activation times, representing in fact the same T_{i1}, are (T_{i1}^{g11}, T_{i1}^{g12}, ..., T_{i1}^{g1k1});
...
at time T_{ir}, the genes considered to be activated are (g_{r1}, g_{r2}, ..., g_{rkr}); their corresponding activation times, representing in fact the same T_{ir}, are (T_{ir}^{gr1}, T_{ir}^{gr2}, ..., T_{ir}^{grkr}).

At the next activation of all genes:
at time T_{i1+1}, the genes considered to be activated are (g_{11}, g_{12}, ..., g_{1k1}); their corresponding activation times, representing in fact the same T_{i1+1}, are (T_{i1+1}^{g11}, T_{i1+1}^{g12}, ..., T_{i1+1}^{g1k1});
at time T_{i2+1}, the genes considered to be activated are (g_{21}, g_{22}, ..., g_{2k2}); their corresponding activation times, representing in fact the same T_{i2+1}, are (T_{i2+1}^{g21}, T_{i2+1}^{g22}, ..., T_{i2+1}^{g2k2});
...
at time T_{ir+1}, the genes considered to be activated are (g_{r1}, g_{r2}, ..., g_{rkr}); their corresponding activation times, representing in fact the same T_{ir+1}, are (T_{ir+1}^{gr1}, T_{ir+1}^{gr2}, ..., T_{ir+1}^{grkr}).

Two observations arise from these considerations: 1) For i1, i2, ..., ir such that the order relation between the internal times of gene activations is T_{i1} < T_{i2} < ... < T_{ir}, it is not necessary that the internal times of the next gene activations preserve the same order relation (it is not necessary that T_{i1+1} < T_{i2+1} < ... < T_{ir+1}). 2) In case a gene suffers some changes because of mutations (the mutation case will be discussed later in this paper), for i ∈ {i1, ..., ir} the gene g_{jl} activated at time T_i^{gjl} is not necessarily the same gene g_{jl} activated at T_{i+1}^{gjl}.

Considering the statements above, we can now define the neuron activity rate.

Definition 3: For the neuron n (n ∈ N), the neuron activity rate is the mapping nRate : T_nR × N → N × ... × N (the cartesian product taken k times) that, for all genes (g1, ..., gk) composing the neuron genome, is defined by nRate(T_{i+1}, i+1) = (T_{i+1}^{g1} - T_i^{g1}, T_{i+1}^{g2} - T_i^{g2}, ..., T_{i+1}^{gk} - T_i^{gk}).

We introduce next a few properties of one neuron activity rate.

Property 1: Genes that are activated at the same step are the genes from the same group. Some values in the array defining the neuron activity rate may be equal for genes that are activated at the same step.
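Definition 3 can be illustrated by the following minimal Python sketch; the gene names and activation times are invented for the example.

    # Neuron activity rate (Definition 3): for every gene of the genome, the
    # difference between its two most recent activation times.

    def n_rate(prev_times, curr_times):
        """prev_times[g] and curr_times[g] are the activation times T_i^g and
        T_{i+1}^g of gene g; the result is the k-tuple of differences."""
        return tuple(curr_times[g] - prev_times[g] for g in sorted(prev_times))

    prev = {"g1": 0.5, "g2": 0.5, "g3": 1.0}     # activation times at step i
    curr = {"g1": 1.75, "g2": 1.75, "g3": 2.5}   # activation times at step i+1
    print(n_rate(prev, curr))                    # -> (1.25, 1.25, 1.5)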


The next two properties present cases when some changes may occur in the neuron functionality (cases of mutation of at least one of the genes involved in those processes, influencing the neuron activity rate). In this manner, the iteration steps nRate(T_{i+1}, i+1) = (T_{i+1}^{g1} - T_i^{g1}, T_{i+1}^{g2} - T_i^{g2}, ..., T_{i+1}^{gk} - T_i^{gk}) and nRate(T_{i+2}, i+2) = (T_{i+2}^{g1} - T_{i+1}^{g1}, T_{i+2}^{g2} - T_{i+1}^{g2}, ..., T_{i+2}^{gk} - T_{i+1}^{gk}) are considered.

Property 2: If there are some values T_{i+1}^{gj} - T_i^{gj} = T_{i+1}^{gl} - T_i^{gl}, then it is not necessary that at the next step of the iteration T_{i+2}^{gj} - T_{i+1}^{gj} = T_{i+2}^{gl} - T_{i+1}^{gl}. Two meanings arise from this property. Genes from the same group of genes are activated at the same time; this is the case considered in Property 2: genes gj and gl are activated at the same step of the iteration. From Property 1, all values in the neuron activity rate array for different genes of the same group should be the same at any iteration step. If any difference occurs, namely if T_{i+2}^{gj} - T_{i+1}^{gj} ≠ T_{i+2}^{gl} - T_{i+1}^{gl}, then we face the case in which mutations occurred in at least one of the genes of the same group. The influences over the neuron activity rate can be seen by analyzing T_{i+2}^{gj} - T_{i+1}^{gj} ≠ T_{i+2}^{gl} - T_{i+1}^{gl}.

Property 3: If the expression of one gene, let us say gj, is considered to be affected, then we are dealing with some changes in the neuron activity: mutations occurred, and in this case gj can be analyzed through T_{i+1}^{gj} - T_i^{gj} ≠ T_{i+1}^{gl} - T_i^{gl}.
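A possible reading of Properties 1-3 as a detection procedure is sketched below in Python; it compares the activity-rate values of genes belonging to the same group at two consecutive iteration steps and flags a gene whose value drifts away from the others, which, in the sense above, may indicate a mutation. The function and the data are illustrative assumptions of ours.

    # Properties 1-3: genes of the same group should show equal activity-rate
    # values at every iteration step; a divergence between two of them signals
    # that at least one gene of the group may have mutated.

    def flag_possible_mutation(rate_step1, rate_step2, group):
        """rate_step1/rate_step2 map gene -> activity-rate value at two
        consecutive iteration steps; 'group' lists genes activated together."""
        flagged = []
        reference = group[0]
        for g in group[1:]:
            equal_before = rate_step1[g] == rate_step1[reference]
            equal_after = rate_step2[g] == rate_step2[reference]
            if equal_before and not equal_after:
                flagged.append(g)
        return flagged

    step1 = {"g1": 1.25, "g2": 1.25}
    step2 = {"g1": 1.50, "g2": 2.00}             # g2 drifts away from g1
    print(flag_possible_mutation(step1, step2, ["g1", "g2"]))   # -> ['g2']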

Property 4 (this property can be seen as a conclusion drawn from the previous properties): Some changes in the neuron functionality can be detected by analyzing the differences that may occur in the neuron activity rate values.

In the design of the system, at the level of the individual neuron, a finite set of symbol states is assigned; it is composed of a finite set of functional states and the non-functional state that corresponds to the neuron death. The neuron activity rate function provides the neuron states by the aid of a mapping f defined, at each iteration step, on N × ... × N (the cartesian product taken k times, representing the array containing the returned values of the neuron activity rate function) with values in the set of symbol states. The functional states map levels of the neuron functionality, passing through various degrees of functionality. In order to model the neuron behavior, we needed a way of controlling the information. A solution we found was to assign the neuron a new ability, the one that makes it adapt to its inputs by the alteration of its genetic material. We defined this process as the process of learning at the molecular level. The neuron device will be viewed as a feedback control system. We suppose T_k is the time of expression of the gene g, encoding the protein-like object a. After expression, g will be deactivated. For each controller object c from the set of all considered controllers C, with c ∈ OC_n, there is a


gene g ∈ G such that we define the process of genetic expression of one gene as the gene expression function in relation to the internal timing T_k, Exp(T_k) : G × C → OC_n, Exp(T_k)(g, c) = a^var, where c is the controller of the expression of gene g, a is the produced object and var its multiplicity (representing the quantity of a produced at the expression of gene g). Three remarks arise from the definition of the gene expression process:

Remark 1: At a specific moment in time, the expression of a gene under the influence of a controller encodes only one type of object. (At a specific moment in time one gene controlled by a controller can encode only one object.)

Remark 2: At different activations, a controller can influence the expression of different genes from a group. (After the deactivation of a gene that encoded one type of object, the same controller can activate, at the next iteration, either the same gene or another gene in the sequence of genes from the same group.)

Remark 3: At the same time, there are controllers that activate different genes, each one encoding different objects. (There is a set of genes that can be activated at the same time, each one of them being under the influence of a different controller and encoding a different type of object.)
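A minimal Python sketch of the gene expression function is shown below; the expression table standing in for the genome and its controllers is a hypothetical example, not part of the model.

    # Gene expression in relation to the internal timing: Exp(T_k)(g, c)
    # returns the encoded object together with its multiplicity.

    # (gene, controller) -> (object, multiplicity produced at expression)
    EXPRESSION_TABLE = {
        ("g1", "c1"): ("a", 3),
        ("g2", "c1"): ("b", 2),
        ("g3", "c2"): ("d", 1),
    }

    def exp(t_k, gene, controller):
        """Expression of 'gene' under 'controller' at internal time t_k;
        one gene controlled by one controller encodes a single object type
        (Remark 1)."""
        obj, var = EXPRESSION_TABLE[(gene, controller)]
        return obj, var

    print(exp(1, "g1", "c1"))    # -> ('a', 3), i.e. the string object a^3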

We consider T1 ∈ T_nR the moment of a gene (group of genes) activation. Below, expressed relative to T1, we present the properties of gene expression.

Property 5: At time T1 ∈ T_nR, different controllers of different genes from the group of genes activated at the same time will lead to different results of their gene expression functions. Formally, for all ci ≠ cj and for all gi ≠ gj (i ≠ j) such that ci controls the expression of gi and cj controls the expression of gj, we have Exp(T1)(gi, ci) ≠ Exp(T1)(gj, cj).

Property 6: A gene controlled by different controllers at different moments in time will lead to different results of the gene expression function. Formally, for all ci ≠ cj (i ≠ j) controllers of the same gene g and for all Tk ∈ T_nR, k ≠ 1, such that T1 < Tk, we have Exp(T1)(g, ci) ≠ Exp(Tk)(g, cj).

Property 7: Different genes controlled by the same controller at different moments in time will lead to different results of their gene expression functions. Formally, for all c that can control different genes gi ≠ gj and for all Tk ∈ T_nR, k ≠ 1, such that T1 < Tk, we have Exp(T1)(gi, c) ≠ Exp(Tk)(gj, c).

Property 8: A gene controlled by the same controller at consecutive times leads to the same result of the gene expression function. Formally, for all c that control a gene g, we have Exp(T1)(g, c) = Exp(T2)(g, c).

Property 9: For all c and for all g, the expression of gene g being controlled by c, if at time T1 we have


Exp(T1)(g, c) = a^{var1} and at time T2 (such that T2 - T1 is a positive value, not necessarily T2 = T1 + 1) we have Exp(T2)(g, c) = b^{var2}, such that we obtain two different object types a and b (a ≠ b) with var1 ≠ var2, then we say that the gene g was mutated (underwent a genetic mutation).

A. Algorithm of learning at the molecular level

We introduce in this subsection an algorithm of learning at the molecular level. At the neuron level, the process of learning finds its definition in the neuron ability to adapt to its inputs. This adaptation is done by the adoption of the product objects resulting from the alteration of the gene(s) expression. The conditions for this to happen are also introduced. Before presenting the algorithm, we introduce below the necessary pre-conditions:

the region of the neuron n architecture in which the objects are to be encoded by processes of gene expression (corresponding to the neuron nucleus) must be chosen;

a discrete time interval is considered; the steps in this algorithm correspond to the internal times of activation of the same group of genes (we consider working with genes from the same group of genes);

the gene considered to suffer a mutation (at step m) is g, g ∈ G, and its controller is considered to be c, c ∈ C.

Algorithm of learning at the molecular level

Step 0. (Step of internal time T0 of the group of genes activation. It represents a previous step in which the controller c of gene g was produced.) The controller c is produced. It will be able to control the expression of gene

g at the next activation of the group of genes to which g belongs.

Step 1. (Step of internal time T1 of the group of genes activation, T0 < T1.) The gene g controlled by c will encode the object a in var1 copies. The gene expression function is Exp(T1)(g, c) = a^{var1}.

..........

Step j. (Step of internal time Tj of the group of genes activation such that T1 < Tj ≤ T_{m-1}, for m ∈ N, m > 1.) The gene g controlled by c will encode the same object a in var1 copies (for all steps from Step 1 to Step j). The results of this step are considered to be:

the neuron activity rate: nRate(Tj, j) = (T_j^{g1} - T_{j-1}^{g1}, ..., T_j^{g} - T_{j-1}^{g}, ..., T_j^{gk} - T_{j-1}^{gk});

the product of the gene g expression process: Exp(Tj)(g, c) = a^{var1};

the quantity found in the substrate in the region σ_{p,jp} of neuron n: C_{n:σ_{p,jp}}(Tj) = C_{n:σ_{p,jp}}(Tj, a) + Σ_{x ≠ a} C_{n:σ_{p,jp}}(Tj, x) (following Property 1), where the term C_{n:σ_{p,jp}}(Tj, a) includes the var1 newly produced copies of a.

.......... Step m. (Step of internal time Tm of the group of genes activation, Tm-1 < Tm):



If in the discrete time interval (T_{m-1}, Tm) no changes occurred, then the results of this step are considered unchanged from those at Step j. If in the discrete time interval (T_{m-1}, Tm) some changes occurred and the gene g suffered a mutation, then this fact involves some changes. A change is considered to be the cause of a natural mutation suffered by the neuron. The mutations can be followed by analyzing the differences in the neuron activity rate. The results will be seen as follows:

the neuron activity rate: nRate(Tm, m) = (T_m^{g1} - T_{m-1}^{g1}, ..., T_m^{g} - T_{m-1}^{g}, ..., T_m^{gk} - T_{m-1}^{gk}), such that T_m^{g} - T_{m-1}^{g} ≠ T_j^{g} - T_{j-1}^{g} for 1 ≤ j ≤ m-1; this fact will involve the next two items below;

the product of the gene g expression process: Exp(Tm)(g, c) = b^{var2};

the quantity found in the substrate in the region σ_{p,jp} of neuron n: C_{n:σ_{p,jp}}(Tm) = C_{n:σ_{p,jp}}(Tm, b) + Σ_{x ≠ b} C_{n:σ_{p,jp}}(Tm, x).

From the properties above, as var1 ≠ var2, we obtain that C_{n:σ_{p,jp}}(Tj) ≠ C_{n:σ_{p,jp}}(Tm). As at time Tj there is no b^{var2} in the neuron substrate, C_{n:σ_{p,jp}}(Tj, b) ≠ C_{n:σ_{p,jp}}(Tm, b).

Step m+1. (Step of internal time T_{m+1} of the group of genes activation, Tm < T_{m+1}.) The same biochemical processes that led to c at the internal time T0 produce at step m+1 a different controller c' (c' ≠ c). In this way, the new substrate will undergo some changes (in the receptor conformations), inducing modified binding affinity degrees of the neuron.

Consequences of learning at the molecular level

A series of consequences can be drawn from the algorithm above: 1. The alteration of the genetic expression results in modifications of the binding affinity degrees. 2. Any change of the binding affinity degrees between neuron devices composing a network implies modified synaptic communications. 3. Modifications of the binding affinity degrees determine the neuron to adapt to its inputs. This ability makes it possible to model the behavior of one neuron. 4. The new neuron state, provided by the changes occurring in the modified neuron activity rate, is memorized in the memory of the DNA. In this way, the definition of the learning process at the molecular level, along with the view of the neuron device as a feedback control system, is justified.
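The algorithm and its consequences can be compressed into the rough Python sketch below. It is only an illustration under invented names, not the formal construction above: the gene repeatedly encodes a^var1 until a mutation changes the product to b^var2, after which a new controller appears and the binding affinity degrees would be re-evaluated.

    # Compressed sketch of the learning algorithm: the gene g, controlled by c,
    # repeatedly encodes a^var1; when a mutation changes the expression product
    # to b^var2, the substrate changes and a new controller c' is produced,
    # which in turn modifies the binding affinity degrees (learning).

    def run_learning(steps, mutate_at):
        substrate = {}                                  # region sigma_{p,jp}
        gene, controller = "g", "c"
        for step in range(1, steps + 1):
            if step < mutate_at:
                obj, var = "a", 3                       # Exp(T_step)(g, c) = a^3
            else:
                obj, var = "b", 5                       # mutated expression b^5
            substrate[obj] = substrate.get(obj, 0) + var
            print(f"step {step}: gene {gene} under {controller} -> {obj}^{var}")
        if mutate_at <= steps:
            controller = "c_prime"                      # Step m+1: new controller
            print(f"mutation detected: new controller {controller}, "
                  f"binding affinity degrees are re-evaluated")
        return substrate

    print(run_learning(steps=4, mutate_at=3))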


V. CONCLUDING REMARKS

In the present article we focused on the way the information flow passing to, from and through one neuron (in a neural network) can be controlled and monitored in modeling one neuron behavior. In addition, we introduced an algorithm of learning at the molecular scale. What brings novelty to the neural networks field is the view taken of the neuron behavior. In a usual artificial neural network, one neuron is analyzed from the outside point of view. We introduced the inside look not only into its structure, but also into its behavior. Some obvious conclusions arise from the model.

REFERENCES

[1] Maass, W.: Networks of Spiking Neurons: The Third Generation of Neural Network Models. Neural Networks 10(9), 1659-1671 (1997).
[2] Maass, W., Schmitt, M.: On the Complexity of Learning for Spiking Neurons with Temporal Coding. Information and Computation 153(1), pp. 26-46 (1999).
[3] Maass, W., Natschlager, T.: Associative Memory with Networks of Spiking Neurons in Temporal Coding. In: Smith, L. S., Hamilton, A. (eds.) Neuromorphic Systems: Engineering Silicon from Neurobiology. World Scientific, pp. 21-32 (1998).
[4] Natschlaeger, T.: Efficient Computation in Networks of Spiking Neurons - Simulations and Theory. Ph.D. thesis, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria (1999).
[5] Natschlager, T., Maass, W.: Finding the Key to a Synapse. In: Leen, T. K. et al. (eds.) Advances in Neural Information Processing Systems, NIPS 2000 Conference, vol. 13, pp. 138-145. MIT Press, Cambridge (2001).

[6] Morogan, L. M.: Prion Neural Systems, Why Choosing a Hybrid Approach of Natural Computing?. In: Elefterakis, G., et al. (eds.) Infusing Research and Knowledge in South-East Europe, 4th Annual South-East European Doctoral Student Conference, vol. 1, pp. 381-391. Thessaloniki (2009).
[7] Morogan, L. M.: Coding Information in New Computing Models. DNA Computing and Membrane Computing. In: Analele Universitatii Spiru Haret, vol. 2, pp. 59-73. FRM, Bucharest (2006).
[8] Morogan, L. M.: Prion Neural Systems: Synaptic Level. In: Psychogios, A., et al. (eds.) Infusing Research and Knowledge in South-East Europe, 5th Annual South-East European Doctoral Student Conference, pp. 436-444. Thessaloniki (2010).
[9] Morogan, L. M.: Prion Neural System: Modeling the Binding Affinities Between Neurons of a Network. In: Abraham, A. et al. (eds.) Proc. 2010 International Conference on Computer Information Systems and Industrial Management Applications with Applications to Ambient Intelligence and Ubiquitous Systems, pp. 143-147. IEEE Press, Krakow (2010).
[10] Morogan, L.: A Model of Synaptic Formation Between Neurons in a Network. In: International Journal of Computer Information Systems and Industrial Management Applications 3, pp. 88-95 (2011).
[11] Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K., Walter, P.: Molecular Biology of the Cell. Garland Science, 4th ed., New York (2002).



[12] Benga, Gh.: Biologia Moleculara a Membranelor cu Aplicatii Medicale. Dacia, Cluj-Napoca (1979).
[13] Israil, A. M.: Biologie Moleculara. Prezent si Perspective. Humanitas, Bucharest (2000).
[14] Morogan, L. M.: View from Inside One Neuron. In: Snasel, V., Platos, J., El-Qawasmeh, E. (eds.) ICDIPC 2011, Communications in Computer and Information Science (CCIS 188), Part I, pp. 176-188. Springer-Verlag, Berlin Heidelberg (2011).
[15] Istrail, S., Ben-Tabou De-Leon, S., Davidson, E.: The Regulatory Genome and the Computer. Developmental Biology 310, 187-195 (2007).
[16] Gregory, R. L.: Oxford Companion to the Mind. Oxford University Press (2004).

