
Waveform generation and new intentional approach of digital data encoding

Said Mchaalia (email: mchaalia@yahoo.de), MSc Raja Mchaalia, Electronics, Science University, Tunis, Tunisia (email: ssfofo@yahoo.de), Prof. Dr. Eng. Heinrich Loele, Microwave department, TU Ilmenau, Germany (email: ssfof@yahoo.de)

Advanced digital sciences are distinguished by a clean, clear compile-compute-conclude style of analysis. Herewith I, Said Mchaalia, would like to thank God very much for delivering the new proposal ideas on exactly true right data encoding processing and on control data flow graph generation, based on a deep investigation of information theory and on the digital acquisition of current data as defined graphically inside [1].

Figure 0001: main principles of digital data transmission.

Returning to the attachment said-thesis.pdf, one can easily see that the proposed VHDL control data flow graph involves the real color of sun light, which is white, composed from the four original, synchronized, primordial primary light colors. These light colors are inspiration items for the discrete event simulation of real wares. Therefore, in street rules, the green light indicates a move process node, the red light identifies a stop process node, and the yellow light represents a tempo-delay process node. On the other hand, in many digital transmission-reception-absorption processes, such as parallel state-machine modeling-simulation environments, the green light indicates a ready event state, the red light indicates a wrong-processing event state, and the yellow light depicts a switching bitwise-operation event activity. My proposal in the attached thesis was an inspiration item from the Gaussian format encoding used in cellular-phone digital data transmission. The active works of Said Mchaalia have aimed, since August 2012, to transform Shannon's mathematical information theory into the just language of FREQUENCY; these works are involved in rajathesis.pdf and SSSPaper.pdf. The intentional effect of the just language of FREQUENCY was to search for a mathematical description function based on frequency whose values lie inside [0, 1]; this description function is sin²(2·pi·f·t + theta). Said Mchaalia and Raja Mchaalia then introduced this description function into Claude Shannon's mathematical formulation of data transmission. Hence the new proposed mathematical formulation of digital data transmission:

$-\sum_n \left( [\sin^2]_n \log_2([\sin^2]_n) \right)$

and

$f \mapsto [\sin(2\pi f t + \varphi)]^2 \in [0, 1]$
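The substitution above can be sketched numerically; this is a minimal, illustrative sketch (the function names and the choice of sample points are mine, not from the text):

```python
import math

def sin2(f: float, t: float, phase: float = 0.0) -> float:
    """sin^2(2*pi*f*t + phase), always inside [0, 1]."""
    return math.sin(2 * math.pi * f * t + phase) ** 2

def sin2_entropy(weights) -> float:
    """Shannon-style sum -sum(w_n * log2(w_n)) with sin^2 samples as weights.

    Note: unlike true probabilities, the sin^2 samples need not sum to 1;
    this follows the substitution proposed in the text, not classical
    information theory."""
    return -sum(w * math.log2(w) for w in weights if w > 0.0)

# Sample sin^2 at a few time points for a 1 Hz signal (illustrative values).
samples = [sin2(1.0, t) for t in (0.05, 0.1, 0.2, 0.3)]
H = sin2_entropy(samples)
```

Since every sin² sample lies in [0, 1], each term of the sum is non-negative, so the result is always at least zero.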
Figure 0002: original principles of digital data transmission.

Figure 0001 and figure 0002 represent the original depicting environment for the analysis of digital data mechanisms. This analysis involves digital-to-analog and analog-to-digital data processing based on the just language of FREQUENCY, whereby frequency identification is the main primordial thread-task for digital processing and data encoding. Mainly, Said Mchaalia's frequency-based mathematical theoretical formalism of data transmission, namely
$-\sum_n \left( [\sin^2]_n \log_2([\sin^2]_n) \right)$

and

$f \mapsto [\sin(2\pi f t + \varphi)]^2 \in [0, 1]$

illustrates the big challenge in digital-analog and analog-digital data encoding and waveform generation.

Light color identification | Light color event occurrence | Light color description
Light black color | Sun light power oscillation is null. | The intensity of the magnitude of sun light power is nil during an instantaneous time value, which is so fast that it could probably not be measured; this time value could be less than 10^-19 seconds.
Light blue color | Sun light power oscillation is so fast. | To see this, please watch the movies involving sun light within this website: http://www.nasa.gov/multimedia/videogallery/index.html
Light green color | Sun light power oscillation is slower than the above oscillations of sun light. | This sun light color attains the mountains and contributes, through photosynthesis, to the transformation of fruits and vegetables until they are ready to eat and use.
Light yellow color | Sun light color power oscillation is a defined, humanly measurable unit, which could vary from perhaps 10^-19 Hertz to 10^19 Hertz, more or less. | No comments.
Light red color | Sun light color power oscillation is a defined, humanly measurable unit, which could vary from perhaps 10^-19 Hertz to 10^19 Hertz, more or less. | This sun light could attain the Earth's central kernel, as for example the light color of magma, which could be found in volcano emissions either in Alaska or in Japan.
Light undefined color | No idea as yet. | No comments.
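The street-rule and event-state color mapping described earlier (green = move/ready, red = stop/error, yellow = delay/switching) can be sketched as a small lookup. This is purely illustrative; the enum and the state names are my own, not from any standard library:

```python
from enum import Enum

class LightColor(Enum):
    GREEN = "green"    # move process node / ready event state
    RED = "red"        # stop process node / wrong-processing event state
    YELLOW = "yellow"  # tempo-delay process node / switching event state

# Discrete-event interpretation of the three street-light colors.
EVENT_STATE = {
    LightColor.GREEN: "ready",
    LightColor.RED: "error",
    LightColor.YELLOW: "switching",
}

def event_state(color: LightColor) -> str:
    """Return the event state associated with a traffic-light color."""
    return EVENT_STATE[color]
```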

Further information on discrete event simulation using the original primordial sun light colors would be, God willing, transmitted in the next few days. Otherwise, could you please publish the frequency range of light color oscillation that is involved in the command line colors (the color command from cmd)? I would be thankful for this precious legacy information. I, Said Mchaalia, would like to thank you very much for your help and consideration. Sincerely, Susi

Figure 1: made representation of a real picture identifying the true, exactly real sun light color.

Figure 1 depicts the true, exactly real sun light color. This color could only be deduced from basic oscillations starting at 9.700 GHz and reaching perhaps 300 GHz or more. As a proposal for the reconstruction and the new environment dynamism of Value Change Dump (VCD) files, I, Said Mchaalia, introduce herewith the new float format of VCD file shown in figure 2, whereby for each clock-cycle number there is a proposed number of signal lists involving signal codes, which are character pointers used when filling in VCD files.

Figure 2: new VCD file format based on actually alive incoming data from hardware offset.

Figure 2 shows the exactly true right Value Change Dump file, whereby the header of such a file is reduced to contain just the file ID, which is a string pointer for actually alive incoming data from real hardware offset processing. See the mathematical waveform description in the document wave-report.pdf for more details. Furthermore, this header includes the time unit used within any future modeling-simulation processing for the construction of the control data flow graph and then the depicting of such a compile-compute-conclude analysis. This depicting mechanism could be drawn during digital draw simulation, which uses description functions such as putpixel(.,.,.), circle(.,.,.), line(.,.,.), and so on, until the complete graphics of the control data flow graphs held inside a hash table are drawn.
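The reduced file layout described above can be sketched as follows. The header fields, the `#cycle` markers, and the code/value pairing are my reading of the proposal, not the standard IEEE 1364 VCD grammar:

```python
def write_reduced_vcd(file_id: str, time_unit: str, cycles) -> str:
    """Build a reduced VCD-like dump: a one-line header (file ID and
    time unit) followed, per clock cycle, by (signal_code, value) pairs.

    `cycles` is an iterable of (cycle_number, [(code, value), ...]).
    This layout is an illustration of the proposal in the text, not
    the IEEE 1364 VCD format."""
    lines = [f"$id {file_id} $timeunit {time_unit}"]
    for cycle, changes in cycles:
        lines.append(f"#{cycle}")
        for code, value in changes:
            # Signal codes are single printable characters, as in VCD.
            lines.append(f"{value}{code}")
    return "\n".join(lines)

dump = write_reduced_vcd("wave01", "1ns",
                         [(0, [("!", 0), ('"', 1)]),
                          (5, [("!", 1)])])
```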

Figure 3: drawn control data flow graph throughout the depicting environment dynamism involving putpixel(.,.,.), circle(.,.,.), line(.,.,.).

Figure 3 shows a control data flow graph drawn through the depicting environment involving putpixel(.,.,.), circle(.,.,.), and line(.,.,.), whereby the intention of the mathematical modeling-simulation analysis is to provide the honour, pride, and legacy of digital data encoding. This mathematical depicting of the exactly true right control data flow graph was involved in my works inspired by the mathematical information theory of Claude Shannon and of Lempel-Ziv; see the attached papers SSSpaper.pdf and said-thesis.pdf for more details. In fact, Claude Shannon [3] proposed digital data encoding via the entropy formulation, which is the following description:
$-\sum_n \left( p_n \log_2(p_n) \right)$

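As a quick numerical check of this entropy formula (a minimal sketch; the fair-coin example is mine):

```python
import math

def shannon_entropy(probs) -> float:
    """Classical Shannon entropy H = -sum(p_n * log2(p_n)) in bits.

    Terms with p == 0 contribute 0, using lim_{p->0} p*log2(1/p) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# A fair coin carries exactly one bit of uncertainty.
H_coin = shannon_entropy([0.5, 0.5])
```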
This is Shannon's mathematical formulation, which was based on the measurement and calculation of uncertainty amount quantities. This uncertainty measurement environment was an inspiration item from basic stochastic and random theory. My proposal in the attached thesis was, in turn, an inspiration item from the Gaussian format encoding used in cellular-phone digital data transmission. The active works of Said Mchaalia have aimed, since August 2012, to transform Shannon's mathematical information theory into the just language of FREQUENCY; these works are involved in rajathesis.pdf and SSSPaper.pdf. The intentional effect of the just language of FREQUENCY was to search for a mathematical description function based on frequency whose values lie inside [0, 1]; this description function is sin²(2·pi·f·t + theta). Said Mchaalia and Raja Mchaalia then introduced this description function into Claude Shannon's mathematical formulation of data transmission. Hence the new proposed mathematical formulation of digital data transmission:

$-\sum_n \left( [\sin^2]_n \log_2([\sin^2]_n) \right)$

and

$f \mapsto [\sin(2\pi f t + \varphi)]^2 \in [0, 1]$

In fact, the basic light-color-development description function flow is theoretically $y \log(1/y)$, where y is the stochastic value for a given simulation. Thereby, the description function $\sin^2(x)$ of any buy-sell cell during the functionalism fabrication flow is an identification of the old Gaussian representation. This identification depicts any fuzzy processing analysis within any functionalism fabrication flow. The usage of such a $\sin^2(x)$ function consists in following the origin of the coordinate system (the intersection of the x-axis and the y-axis, two perpendicular straight lines) during any processing along the x-axis. The values engendered by this $\sin^2(x)$ function are null (zero) at the boundary values and maximal at the middle of the considered segment [a, b]: thus $\sin^2(a) = 0 \Rightarrow a = 0$ and $\sin^2(b) = 0 \Rightarrow b = \pi$, with $\pi = 3.14\ldots$, although $\sin^2\!\left(\frac{a+b}{2}\right) = 1 \Rightarrow \frac{a+b}{2} = \frac{\pi}{2}$. Therefore, the $\sin^2(x)$ function owes its magnitude-level variation to the processing analysis within any stochastic system. The $\sin^2(x)$ function has the most measurement ability to depict any current data-edge flow inside control data flow graphs. Figure 5 shows a control data flow graph whereby a current data edge {(time_event, value_event)} could basically be mathematically modeled by the $\sin^2(x)$ function such that $x = x(t)/t$, $t = \text{time}_\text{event}$ and $\text{value}_\text{event} = \sin^2(x)$.
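The boundary behavior claimed here (zeros at the endpoints of [0, pi], maximum at the midpoint) can be checked directly; a minimal sketch:

```python
import math

def edge_value(x: float) -> float:
    """Data-edge value model from the text: value_event = sin^2(x)."""
    return math.sin(x) ** 2

# Endpoints of [0, pi] give zero; the midpoint pi/2 gives the maximum 1.
v0 = edge_value(0.0)
v_mid = edge_value(math.pi / 2)
v_end = edge_value(math.pi)
```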

Figure 5: an example of an original primordial representation of a control data flow graph whereby a(.)·sin²(.) is used as the data-edge following-flow measurement value.

The start-interruption process node within figure 5 has the role of interrupting the surrounding circuit to start collecting data to be measured within any simulation time. The end-interruption process node plays a similar role, as a cutoff link switcher. The other nodes are arithmetic-logic operation nodes. The motor flow of the data edges is that the following values are a binary representation whose last values are instantaneously stored inside the grounded information-gathering database node. Figure 5 depicts an original primordial representation of a control data flow graph, whereby the a(.)·sin²(.) function could then be used to identify a selfish set of 256 color developments based on step scaling. Therefore, each selfish 256-step could thereby be used for one color scaling. Hence, the involved waveform has the outward appearance shown in figure 6, whereby the magnitude variation level processes light color development.

[Figure: square(sin(.)) used within amplitude modulation; intensity (from -1.5 to 1.5) versus time.]

Figure 6: a(f(t))·sin²(f(t)) variation level within any waveform-generation processing, where 0 < abs(a(f(t))) < 1.

Figure 6 illustrates the a(f(t))·sin²(f(t)) variation level (where 0 < abs(a(f(t))) < 1) within any waveform. This variation level could therefore be used in digital data encoding: to each variation level a binary-valued representation would be the signal assignment. Hence, the intensities included within any waveform depict measurable signal variation levels, for example voltage levels from -100 volts to 100 volts, or current levels from -20 amperes to 20 amperes, and other kinds of measurable quantities involving intensities, which are states or qualities of being intense. The mathematical form of the energy inside such a thread-task processing is the integration of the produced power over instantaneous time values, $E = \int_{ClearVolume} \left(\int power(t)\, dt\right) dV$, whereby the clearness of the envisaged volume form depends on the power within the light production mechanism. This power could vary between a 2.5 Watt LED, 500 Watt projectors, and 1000 Watt or larger light bulbs. In my opinion, there is no source of light production to date other than electrical energy converters such as LEDs, light bulbs, electrical arcs, and similar devices. Here power(f, t) is the electrical power defined as $power(f, t) = a\,[\sin(f, t)]\,[\cos(f, t)]$, where 0 < abs(a) < 1, sin(.) is the sinusoidal function, cos(.) is the integral or perhaps the time differential of sin(.), and abs(.) is the absolute value; equivalently, $power(f, t) = b\,[\sin(f, t)]\,[\sin(f', t)]$, where 0 < abs(b) < 1. Note that to receive back just a white picture from the digital satellite Astra, or a similar digital satellite at a far-away distance of about 401,000,000 meters, an emission frequency of 18 GHz for this picture is required.
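The time integration of the power model above can be sketched numerically. The argument convention power(f, t) = a·sin(2πft)·cos(2πft) is my assumption, since the text writes sin(f, t) without fixing it:

```python
import math

def power(f: float, t: float, a: float = 0.5) -> float:
    """Instantaneous power model: a*sin(2*pi*f*t)*cos(2*pi*f*t),
    with 0 < abs(a) < 1 as required in the text. The 2*pi*f*t argument
    convention is an assumption made for this sketch."""
    return a * math.sin(2 * math.pi * f * t) * math.cos(2 * math.pi * f * t)

def energy(f: float, t_end: float, steps: int = 10000) -> float:
    """E = integral of power(t) dt over [0, t_end], trapezoidal rule."""
    dt = t_end / steps
    total = 0.5 * (power(f, 0.0) + power(f, t_end))
    total += sum(power(f, k * dt) for k in range(1, steps))
    return total * dt

# Over a whole period the sin*cos product integrates to (nearly) zero.
E_period = energy(1.0, 1.0)
```

The sin·cos product equals (1/2)·sin(4πft), so its integral over a full period vanishes, which the numeric result confirms.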
This frequency is produced by an electrical oscillator characterized by a magnetic effect, the loadstone-like storage of electrical energy in magnetic form to be sent (the inductor term $L\,\frac{\partial i(f,t)}{\partial t}$), and an electrical effect, the charging within the capacitor (the term $\frac{1}{C}\int i(f,t)\,dt$). The contained energy is just the calculation of the power, whose voltage is about 24 volts, with a current intensity that varies during the transmission process. The gradually growing question is now: which current intensity should be used within this send-receive digital signal processing? If this current intensity is 1 ampere, then the energy is E = [24 Watts] · t, i.e. 24 watt-seconds for each second needed to receive back the white picture that was sent. Although, based on the works of Claude Shannon [10], the power movable in time could be modeled as follows:

$b \log_{10}\left(a \cdot \frac{intensity}{voltage}\right)$, where $(a, b) \in \mathbb{R}^2$ are two variable parameters to be identified, for instance with a genetic algorithm or similar. Moreover, the measurable amount-quantity couple (intensity, voltage)_event is the measurement couple $\left(R\, i(t),\ \frac{1}{C}\int i(f,t)\,dt\right)$ based on the laws of Ohm, Kirchhoff, and co. Furthermore, the $x \log_{10}(1/x)$ function is an error-processing analysis function, whereby $\lim_{x \to 0} \left[x \log_{10}(1/x)\right]$ can easily be defined as null [10]. In neural network and fuzzy processing theories, the $x \log_{10}(1/x)$ function illustrates the kernel motor flow of learning and error-correction mechanisms. Shannon defined the inside variable x of the proposed $x \log_{10}(1/x)$ to lie in [0, 1], such that $x \in [0, 1]$. Therefore, this inside variable x could then be set equal to the right measurable quantities, which are $x = (\sin(f(t)))^2$, satisfying $x \to 0$ as $f(t) \to 0$, or $x = \frac{\sin(f(t))}{f(t)}$ where $f(t) \neq 0$ and $t = \text{time}_\text{event}$, during the processing of any commercial event activity; the latter is a boundary-constrained function such that $\left|\frac{\sin(x)}{x}\right| \leq 1$. Therefore, after the above definition of the inside variable x, the function $x \log_{10}(1/x)$ could also be used to achieve light-color-development modeling and simulation of instantaneous discrete data transmission. In fact, the environment dynamism of digital signal processing consists in first of all defining signals and systems, whereby a system could easily be defined as a transfer function producing outputs as a function of its inputs, as in the von Neumann compute machine model, while a signal is defined as a waveform representation. For discrete event simulation and job scheduling, whether parallel task threading or serial data acquisition and displaying, the involved waveform is a magnitude-variation-level representation as a function of iterations, whereby time is depicted as sequential discrete periods $t = nT$, $n \in \mathbb{N}$. In abstract review, the measurable physical amount is a variation level whose magnitude level (up-stairs, down-stairs basic logic influence system) varies during modeling and simulation processing analysis.
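The x·log10(1/x) behavior used above, including its limit value 0 at x = 0, can be sketched as:

```python
import math

def x_log_inv(x: float) -> float:
    """x * log10(1/x), extended by its limit value 0 at x = 0."""
    if x == 0.0:
        return 0.0
    return x * math.log10(1.0 / x)

# The function vanishes at both ends of [0, 1] and stays small in between.
values = [x_log_inv(x) for x in (0.0, 1e-6, 0.1, 0.5, 1.0)]
```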

[Figure: (sin(.))/(.) function following description; magnitude versus time.]

Figure 7: normal logic magnitude variation level encoding digital-analog data.

Figure 7 illustrates the sin(.)/(.) functional behavior variation during the processing analysis of a digital data transmission-reception-absorption system. This functional behavior could be used in encoding digital data to be sent and received with lossless data transmission. The encoding techniques involved in this description function take the magnitude summit value for logic true, and the negation of that value for logic false. Hence, within light-color-development modeling, these magnitude variation levels are the step scaling from light black color to light white color. In conclusion, two models emerge: $(\sin(x(f,t))) \cdot \tan(X(f,t))$, where X(f, t) characterizes the amount of amplitude needed to be absorbed by the reception antennas, and $(\sin(x(f,t))) \cdot y \log(1/y)$, where y is the instantaneous probability of an error occurring in signal absorption at time t.


Be awake and aware of exactly trusted sciences,
Said Mchaalia

Digital waveform encoding based on click character op-codes


Author: Said Mchaalia (email: mchaalia@yahoo.com)
Author: Raja Mchaalia (email: ssfofo@yahoo.de)

Abstract:
Advanced digital data encoding is far from inventing op-codes able to clean up the old electronics background [15], [16]. Therefore there are many illustration illusions inside some algorithms of digital data encoding [10]. The new invention of a selfish auto-compile-compute-conclude environment dynamism allows fast modeling of digital data transmission based on the electronics circuits involved and on the theory of digital data transmission, such as the transmission of digital video broadcasts. In fact, ASTRA invested many years to become aware of the background of digital-analog discrete event simulation techniques and disciplines. Indeed, this paper illustrates the usage of new digital data encoding techniques based on binary representation and on the mathematical modeling of digital data transmission [11], [15], [16].

Keywords: light color development storage, discrete event simulation, compute-compile-conclude thread-tasks, serial-parallel scheduling.

Introduction:

Digital design is the main primordial entity for achieving several surround systems. Thereby, in modeling and simulation investigation branches, an entity is an object of interest in system processing analysis. Each entity has its own property inside the alive active surround system; this property is called an entity attribute. The entity property entirely surrounds the entity's activity and the entity's events. Hence, the entity event is the instantaneous value change dump of the state of the current entity (ready to be executed, inside the queue, fetched, run, stored again). Furthermore, instead of investigating entity activity inside the alive active surround system, which is hard to identify, resolving the basic logic influence system on these tasks and threads allows the investigation of system states through entity activities.

Hence, digital design requirements envelop digital data transmission, whereby digital data transmission is based on discrete event simulation, which started with telegram data transmission threads. Therefore, the basics of discrete event simulation have their roots in language motor flows. In this field, where each character is one entire entity inside the alive active surround alphabet, as Shannon [3] called it, an alphabet is a selfish set of characters, e.g. {a, n, m, l, p}. Grounded in this theory of selfish sets, the processing surround account was invented. Thus, starting to count the characters inside a selfish set of letters is a processing account within measurement calculations, gathering discovery scopes within discrete event simulation: initialize a variable to a number and keep adding one to this number in order to reach the desired value at the end of the simulation time. For example, define a to be 8, then apply add(a, 1) repeatedly until a equals 400 after a certain simulation time.

Most development environments and industrial advances command and control processors to achieve their tasks and threads, and the object aims that many disciplines wish for are deep investigation motor flows. Thus modeling and simulation are basics in those disciplines. In fact, modeling and simulation have their roots in applied mathematics and measurement calculation principles, whereby mathematical functions and procedures drive the motor flows.

This paper is organized as follows: first of all, an overview of the mathematical functionalism will be given. Next, an illustration of modeling and simulation within discrete event simulation will be depicted. After that, the how-to-measure principles will be developed in detail, and finally a conclusion will be drawn.
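The counting example above (define a to be 8, then add 1 until 400 is reached) can be sketched as a tiny simulation loop; the function name is illustrative:

```python
def count_up(start: int, target: int, step: int = 1) -> int:
    """Increment-by-one measurement account sketched in the text:
    start from a value and add `step` until the target is reached,
    returning the number of simulation steps taken."""
    value, steps = start, 0
    while value < target:
        value += step
        steps += 1
    return steps

# Starting from 8 and adding 1 each step, 392 steps reach 400.
steps_needed = count_up(8, 400)
```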

Figure 1: basic logic influence systems into digital data encoding methodologies

Figure 1 depicts the basic logic influence system on digital data encoding methodologies, whereby digital-analog conversion and modeling-simulation should take place inside any mathematical aspect of digital data encoding and compression [3], [10].

1. Mathematical functionalism aspects of digital data encoding:

1.1. Usable functions following descriptions:

Maths is the science of invisible philosophy processing analysis. This science belongs to the compile-compute-conclude effect aspects of following motor flows. Measurement calculation steps are impure and addictive when their boundary conditions are limitless, or pure and noble when those boundary conditions are awoken within modeling and simulation tasks and threads. The strength of model descriptions, in order to provoke right simulation processes and bring about an innovative world of development environments, is the aim object of each entity activity inside an interesting active system. Indeed, in digital data transmission, mathematical functionalism is characterized by description functions and fear flows in modulation fields. Therefore, being aware of the identification of those functions is the subject of many workers within innovation system development environments.

In fact, the sinusoidal function, which has periodic characteristics during simulation time, is the awoken waveform of data-edge representation inside measurement calculation within the system development environment. Therefore, within the system development environment of digital data transmission, such as text files and email inter-exchanges, data-edge representations are electrical currents carried by electromagnetic waves. To determine the values of those data-edge representations, the resolution of the Maxwell-Ampere equation is of interest: resolve $\oint_{Curve} B(t) \cdot dl = \text{function}\left(\iint_{Surface} i(t) \cdot ds\right)$, where dl and ds are measurement quantities for, respectively, curves (one-dimensional measurement, length) and surfaces (two-dimensional measurement, coordinate viewpoints) during discrete simulation time intervals. Similarly, the number of months inside a year is always twelve; this number of twelve months is a data-edge representation inside the measurement account of year enumerations. Indeed, the main synchronization secret of data-edge representation is the value change dump during motor flows through measurement calculation nodes, whereby discrete event simulation time has to proceed in onward and forward steps with test-bench characteristics. These test-benches are faithful for digital data transmission modeling and simulation.

In fact, light color development is a crazy and lazy aim object of many searchers within discrete event simulation basics, such as Microsoft's command line color invention, whereby the command help color in the black-screen DOS environment determines the principal requirements for light color choices. Based on this idea, event states inside discrete event simulation could be modeled within a selfish set of light color developments. The frequency variation level inside an electrical {(inductor-capacitor-resistor)} circuit allows any light color development. Figure 1 shows the frequency variation level for light color development [12], [13].
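As a hedged numerical illustration of the Ampere circulation law referenced above, for the textbook case of a long straight wire (the specific current and radius are arbitrary choices of mine):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def b_field_wire(current: float, r: float) -> float:
    """Magnitude of B around a long straight wire: mu0*I / (2*pi*r)."""
    return MU0 * current / (2 * math.pi * r)

def line_integral_circle(current: float, r: float) -> float:
    """Circulation of B along a circle of radius r centered on the wire.
    By symmetry this is B * 2*pi*r, which Ampere's law says equals mu0*I."""
    return b_field_wire(current, r) * 2 * math.pi * r

lhs = line_integral_circle(current=3.0, r=0.05)
rhs = MU0 * 3.0
```

The circulation is independent of the chosen radius, which is exactly the content of the law.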
[Figure: sin(.) function following description; magnitude versus time.]
Figure 2: sin(2·pi·f·t) function following description, most used in digital data transmission-reception-absorption processes.

In figure 2, the variation of the function currently involved in digital data transmission-reception-absorption processing analysis is a clear, clean sinusoidal aspect. It starts from a nil magnitude value; after certain time values it attains its first magnitude summit value, then a negative magnitude value, and finally it attains its magnitude summit value again. The time variation needed to go from the original magnitude summit value to its first successor defines the time period T of such a sinusoidal function. This function is sin(2·pi·f·t), where f = 1/T is the original main frequency characteristic of the function. Furthermore, t is the shortest time interval value involved within the processing analysis; in some digital data transmission applications, this value is about 0.325 nanoseconds (3.25x10^-10 seconds). On the other hand, searching for and discovering the description function of an event state inside discrete-event-simulation modeling is a hard thread-task ahead. From the inspiration of Shannon's works, which finalize a description function based on stochastic probabilities, this function has the logarithmic functional behavior. This logarithmic behavior is mainly used to calculate the lossless data sent within signals. The logarithm has negative, infinitely large values when its argument lies inside the segment [0, 1], the nil value when this argument is one, and positive, relaxed infinite values otherwise.
[Figure: Log(.) function following description; magnitude versus time.]
Figure 3: Log(2·pi·f·t) function following description for a given digital data transmission-reception-absorption.

Figure 3 illustrates the logarithmic functional behavior variation during the processing analysis of a digital data transmission-reception-absorption system. This functional behavior could be used in encoding digital data to be sent and received with lossless data transmission. The encoding techniques involved take the magnitude summit value for logic true, and the negation of that value for logic false. This magnitude variation level is fuzzy encoding of digital-analog data. Hence, within light-color-development modeling, these magnitude variation levels are the fuzzy step scaling from light black color to light white color.

1.2. Required light color development mathematical following description function flow:

Signal manufacturing tools send different kinds of signals to be received at distinct destinations. Principally, these tools are characterized by transmission antennas and reception sensors. The modulation aspect is a theoretical task and thread during the modeling and simulation of digital data transmission. Grounded in some readings (see modulation principles and the Fourier transformation within digital signal processing theory), the main usable functions in the modulation branches and fields are described in table 1. Notice that the real behavior of transmission-reception-absorption processing analysis is based on the correlation of the description function and the function used inside the modulation process; this correlation is a function of the past values of the considered signal and the predicted ones. For more details on the correlation phenomena, books and papers on digital data transmission and encoding, such as the mathematical information theory of Shannon [3], are utility toolboxes for gathering the database information within those branch fields and disciplines. Through my works in such branch fields and disciplines, the abstract review shown in table 1 concludes the useful modeling-simulation mathematical description functions.

Function | Characteristics | Utilities
sin(2·pi·f·t) / (2·pi·f·t) | Bounded for all values of the inside parameters pi, f, and t. | Used in lab experiments and other fields.
(sin(2·pi·f·t))² | Bounded for all values of the inside parameters pi, f, and t. | Used as a Gaussian replacement in wireless digital transmission, such as cellular phones.
( 2+ 2 ) | Characterized by a fast descent to nil when the calculated inside phase grows gradually to infinite values. | Most used when the defined phase is less than an eighth of one tour.
tan(X(f, t)) | Characterized by fast fading to infinity when the calculated inside X(f, t) grows gradually toward values of a fourth of a tour. | Used in supra-high-frequency modulation fields.
x·log(1/x) | Bounded for all values of the inside parameter x, when the limit of the fraction of infinity and nil is defined to be nil. | Used as a replacement of the first parameter x of the function defined inside this table.

Table 1: functions used in modulation branch fields.
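The boundedness claims of the first two rows of table 1 can be checked numerically; a minimal sketch with illustrative parameter values:

```python
import math

def sinc2pi(f: float, t: float) -> float:
    """sin(2*pi*f*t) / (2*pi*f*t), with the limit value 1 at the origin."""
    x = 2 * math.pi * f * t
    return 1.0 if x == 0.0 else math.sin(x) / x

def sin_sq(f: float, t: float) -> float:
    """(sin(2*pi*f*t))**2, bounded inside [0, 1]."""
    return math.sin(2 * math.pi * f * t) ** 2

# Both stay bounded over a sweep of time values (illustrative check).
ts = [k * 1e-3 for k in range(1, 1000)]
bounded = all(abs(sinc2pi(5.0, t)) <= 1.0 and 0.0 <= sin_sq(5.0, t) <= 1.0
              for t in ts)
```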

In fact, the basic light-color-development description function flow is theoretically $y \log(1/y)$, where y is the stochastic value for a given simulation. Thereby, the description function $\sin^2(x)$ of any buy-sell cell during the functionalism fabrication flow is an identification of the old Gaussian representation. This identification depicts any fuzzy processing analysis within any functionalism fabrication flow. The usage of such a $\sin^2(x)$ function consists in following the origin of the coordinate system (the intersection of the x-axis and the y-axis, two perpendicular straight lines) during any processing along the x-axis. The values engendered by this $\sin^2(x)$ function are null (zero) at the boundary values and maximal at the middle of the considered segment [a, b]: thus $\sin^2(a) = 0 \Rightarrow a = 0$ and $\sin^2(b) = 0 \Rightarrow b = \pi$, with $\pi = 3.14\ldots$, although $\sin^2\!\left(\frac{a+b}{2}\right) = 1 \Rightarrow \frac{a+b}{2} = \frac{\pi}{2}$. Therefore, the $\sin^2(x)$ function owes its magnitude-level variation to the processing analysis within any stochastic system. The $\sin^2(x)$ function has the most measurement ability to depict any current data-edge flow inside control data flow graphs. Figure 5 shows a control data flow graph whereby a current data edge {(time_event, value_event)} could basically be mathematically modeled by the $\sin^2(x)$ function such that $x = x(t)/t$, $t = \text{time}_\text{event}$ and $\text{value}_\text{event} = \sin^2(x)$.

Figure 5: an example of an original primordial representation of a control data flow graph whereby a(·)·sin²(·) is used as the data edge following-flow measurement value.

The start interruption process node within figure 5 has the role of interrupting the surrounding circuit to start collecting the data to be measured within any simulation time. The end interruption process node plays a similar role as a cutoff link switcher. Furthermore, the other nodes are arithmetic-logic operation nodes. The following motor flow of data edges is such that the following values are binary-made representations, whose last values are instantaneously stored inside the grounded gathering-information database node. Figure 5 depicts an original primordial made representation of a control data flow graph, whereby the a(·)·sin²(·) function could then be used to identify a selfish set of 256 color developments based on step scaling. Therefore, each selfish 256-step scale could thereby be used for one color scaling. Hence, the involving waveform has the outward appearance shown in figure 6, whereby the magnitude variation level processes and proceeds the light color development.

[Plot: square(sin(·)) used within amplitude modulation; vertical axis: intensity (−1.5 to 1.5), horizontal axis: time]

Figure 6: a(f(t))·sin²(f(t)) variation level within any waveform generation proceeding processing, where 0 < |a(f(t))| < 1.

Figure 6 illustrates the a(f(t))·sin²(f(t)) variation level (where 0 < |a(f(t))| < 1) within any waveform. This variation level could therefore be used in digital data encoding: for each variation level, a binary-value made representation would be its signal assignment. Hence, the included intensities within any waveform depict measurable signal variation levels, for example voltage levels such as −100 volts to 100 volts, or ampere levels such as −20 amperes to 20 amperes, and other kinds of measurable quantities involving intensities, which are states or qualities of being intense. The mathematical format of the energy inside such a thread-task processing is the integration of the produced power over instantaneous time values: E = ∫∫∫_ClearVolume (∫ power(t) dt) dV, whereby the clearness of the envisaged volume-form depends on the power within the light production mechanisms. This power could vary between 2.5 Watt LEDs and 500 Watt projectors, or even 1000 Watt light bulbs or more. In my opinion, there is until now no other source of light production besides the electrical energy converters such as LEDs, light bulbs, electrical arcs and similar, where power(f, t) is the electrical puissance defined as follows: power(f, t) = a·[sin(f, t)]·[cos(f, t)], where 0 < |a| < 1. Thus sin(·) is the sinusoidal function. Furthermore, cos(·) is the integration, or maybe the time differential, of sin(·), and abs(·) is the absolute value. Alternatively, power(f, t) = b·[sin(f, t)]·[sin(f′, t)], where 0 < |b| < 1.

Note that to receive back just a white picture from the digital satellite Astra, or a similar digital satellite at a faraway distance of about 401000000 meters, an 18 GHz emission frequency of this picture is required. This frequency is produced by an electrical oscillator characterized by the magnetic effect, the loadstone storage of electrical energy in magnetic form to be sent (L·i(f, t)), and by the electrical effect characterizing the charging within the capacitor ((1/C)·∫ i(f, t) dt). The within energy is just the calculation of the power, which is about 24 Volts times a current intensity that varies during the transmission process. Now the gradually growing actual question is: which current intensity should I use within this send-receive digital signal processing? If this current intensity is 1 Ampere, then the energy is E = [24 Watts]·t = 24 Watts × seconds (seconds of time are needed to receive back the white picture which was sent). Although, based on the works of Claude Shannon [10], the in-time movable power could be modeled as follows: b·log10(a·intensity/voltage), where (a, b) ∈ ℝ² are two variable parameters to be identified within an involving genetic algorithm or similar. Moreover, the measurable amount-quantity couple (intensity, voltage)_event is the measurement amount-quantity couple (R·i(t), (1/C)·∫ i(f, t) dt), based on the laws of Ohm, Kirchhoff and co.

The x·log10(1/x) function is an error proceeding-processing analysis, whereby lim_{x→0} [x·log10(1/x)] could be easily defined as null [10]. In neural network and fuzzy proceeding-processing theories, the x·log10(1/x) function illustrates the kernel motor flow of learning and error correction mechanisms. Shannon did define the inside variable x within the proposal x·log10(1/x) to be involved in [0, 1], such that x ∈ [0, 1]. Therefore, this inside variable x could then be equal to the right measurable quantities, which are x = (sin(f(t)))² or x = sin(f(t))/f(t), where t = time_event during the proceeding processing of any commercial event activity. On the other hand, lim_{x→0} sin(x)/x = 1, while sin(x)/x fades to nil as x grows, and this function is a constraint boundary function such that 0 ≤ sin(x)/x ≤ 1 over the considered range.

Furthermore, x·log10(1/x) could also be used to achieve light color development modeling and simulation of instantaneous discrete data transmission. In fact, the environment dynamism of digital signal processing consists in first of all defining signals and systems, whereby a system could be easily defined as a transfer function putting out the outputs as a function of its inputs, like for example the von Neumann compute machine model. A signal, though, is defined as a waveform representation. For discrete event simulation and job scheduling, either parallel task following threading or serial data acquiring, acquisition and displaying, the involving waveform is a magnitude variation level representation as a function of iterations, whereby time is depicted as sequential discrete periods t = nT, n ∈ ℕ. In review abstract, the measurable physical amount is a variation level of its magnitude level (up-stairs, down-stairs basic logic influence system) during modeling and simulation proceeding-processing analysis. Therefore, after the above definition of the inside variable x, the function sin(x)/x remains bounded as 0 ≤ sin(x)/x ≤ 1.
[Plot: (sin(·))/(·) function following description; vertical axis: magnitude, horizontal axis: time]

Figure 7: normal logic magnitude variation level encoding digital-analog data.

Figure 7 illustrates the sin(·)/(·) functional behavior variation during the processing analysis of a digital data transmission-reception-absorption system. Thus, this functional behavior could be used in encoding digital data to be sent and received with lossless data transmission. The encoding techniques involved within this function's following description are: the magnitude summit value for logic true, and the not-function of such a value for logic false. Hence, within light color development modeling, these magnitude variation levels are the step scaling from the light black color to the light white color. In conclusion, two models stand out: (sin(x(f, t)))·tan(X(f, t)), where X(f, t) characterizes the amount of amplitude needed to be absorbed by the reception antennas, and (sin(x(f, t)))·y·log(1/y), where y is the instantaneous probability of an error occurring on signal absorption at time t.

2. Modeling of discrete event simulation based on light color development:

2.1. Introduction:

The human person's attached invisible Satan within modeling and simulation is the discrete event simulation definition and usage. This definition and description has no responsibility for the chosen modeling and simulation methodology, or similar, for doing things exactly well. Although extreme angry aspects could sometimes be engendered within modeling and simulation when wrong ways were chosen, and however much this has effects on the selfish reflexive lust, this should not be a reflexive lost result. In this section, light color development discrete event simulation modeling assures the vectorization methodology to be associated as the motor flow of fuzzy proceeding processing for searching, gathering and discovering light color identification, such

that blue faded to light black, or blue faded to light white. Moreover, many other examples could be treated, such as processing the light black color to become the light white color. Therefore, the principle used within the involving techniques is the fuzzy formalism functionalism and the learning motor flow. Thus, within the environment dynamism of the [(sin(x))²]·log(1/[(sin(x))²]) function and of the [sin(x)/x]·log(x/sin(x)) function, the manipulation of inside fuzzy variation motor flows becomes cleaner and more clearly compile-compute event attributes, whereby an event attribute could be the light dark black color, the light purple color or the light aqua color and so on. In the next section, a deep definition of discrete event simulation will be investigated, and then the light color development modeling method will be integrated. Notice that the involving function could then be [(sin(x))²]·log(1/[(sin(x))²]) or [sin(x)/x]·log(x/sin(x)), where x = f(circuit frequency). Hence, the circuit frequencies could be generated within any association of the general-purpose digital-analog data edge modeling depicted within figure 300.

Figure 300: original main sufficient suitable dynamics mechanism to gather digital data from core processing unit inside environment dynamism.

Figure 300 illustrates the original main sufficient suitable dynamics mechanism to gather digital data from the core processing unit inside the environment dynamism. In fact, as shown in figure 300, the data edge current following flow could be divided into measurable voltages throughout any parallel association of distinct resistor-capacitor filter values, which allows mixtures of frequency determination through the famous low-pass filter formula, such that f_i = [1/(R·C)]_i.

For ASCII encoding similarity, the measurable number of the envisaged (resistor, capacitor) couples is then equal to eight low-pass (resistor, capacitor) filters, whose frequencies could be measurable throughout the measurement of resistor values and capacitor values: hence the eight-bit-word encoding techniques. Consider that the first index filter has the following measurable values, which could be a 75 Ohm resistor value and a 23 microfarad capacitor value, and that the last index filter has the following measurable values, which could be a 100 megaohm resistor value and a 1000 microfarad capacitor value. In this way, the two involving frequencies are respectively 579.71 Hz and 0.00001 Hz. Figure 300 depicts the data edge current following flow, which could be faded into measurable voltages throughout any parallel association of distinct resistor-capacitor filter values.

Furthermore, the multiplexer usage allows getting out the first measurable current amount quantity, although the next tradeoff of measurable current amount quantity could be directly measured from the current source generator, which would be CMOS or bipolar transistor-transistor logic. Indeed, the within core processing unit entity is the mnemonic, mimetic and compile-compute-conclude proceeding-processing analysis, which involves the test-verification motor flow of any gathered measurement values, in order then to choose the data edge current flow requirements. Hence, the within core processing unit entity has input-output interfaces, such as a huge hard disk to store data, and co-core proceeding-processing motor flows, which treat any gathered digital data and resolve its basic logic influence systems. Those co-core proceeding-processing motor flows could be implemented as C-language algorithms for any microcontrollers or any EPROMs (electrically programmable read-only memories). The digital data encoding and compression techniques will be presented in detail in the next sections, whereby the basic logic influence system of the ASCII digital data encoding technique is the primordial main following digital data encoding and compression algorithm to be aware of for further processing-analysis aspects, such as production time amelioration within electrical machines, and image quality improvement for digital TVs, radars and co. In the electrical branch-field disciplines, the sinusoidal function model is always used to simulate input-output signals within the considered circuits. Then, the signal correlations or mathematical signal multiplexers are basic tools for image illustrations, like those treated in the MRI scientific fields [6] and [7]. The Magnetic Resonance Imaging description is based on the cutoff frequency included within the Fourier transformation, and on just the using-language of t = nT, n ∈ ℕ, to depict the received image from IBM-AT compatible PCI interface cards and similar input-output interface components.
Thus, a detailed description of correlation and the involving mathematical following-description functions will be reviewed in the next paragraphs. In fact, within electrical data edge flows, the main characteristic of the produced waveform is its periodic aspect during the proposal test-bench and simulation proceeding processing. The mathematical sinusoidal description function has the characteristic that it attains its maximum magnitude value in the middle array of the simulation time. At the boundaries of the measurement arrays of simulation time, which are periodic measurement arrays of period 2π ≈ 6.28, the values of the mathematical sinusoidal description function are null. Indeed, the magnitude-value variations are basic tools for signal encoding. Arithmetic encoding is a primordial main original thread-task within digital data transmission. Arithmetic encoding is based on encoding the magnitude-value in integer form, whereby the inferior boundary integer within a float magnitude-value variation would be used in such an arithmetic encoding [8] and [9]. The maximum voltage or current magnitude-value variations inside electrical circuits depend on the technical applications. For digital data transmission, such as send-receive alphabets and similar thread-tasks, the middle average voltage is 10 Volts [9]. The basic logic of the arithmetic encoding of this voltage level magnitude-value is to convert the 22 integer value to binary, because of the positive and negative measurement processing analysis and the error measurement calculations involved within modeling-simulation. The envisaged arithmetic encoding in computer language is to associate the binary 000000b with the −22 volts measurement magnitude-value, then the binary 000001b with the −21 volts measurement magnitude-value, after that 000010b with the −20 volts measurement magnitude-value, up to the binary 101100b for the +22 volts measurement magnitude-value.
The true-false logic involved within any one-volt magnitude-value variation inside the envisaged arithmetic encoding is illustrated using figure 2, and/or figure 6 and figure 10 described below, whereby logic true is the magnitude-value associated with the highest value level, and logic false is the lowest value level of the magnitude measurement.

2.2. Discrete event background and structure:

In digital inline verification of hardware designs of consumer electronics, in modeling-simulation processing analysis of digital data transmission, and similarly, the motor kernel flow of discrete event simulation is the following famous flow structure: if the clock's event occurs and the clock's event value is equal to logic true or false, then, while some synchronized constraint boundary conditions hold, {do some things, which are resolvings of basic logic influence systems}. Table 100 below contrasts the discrete event background in VHDL with the discrete event background in C-language.

process (clk)  -- sensitivity list: clk
begin
    if (clk'event and clk = '1') then
        while (constraint_boundary_conditions) loop
            -- resolve basic logic influence systems;
        end loop;
    end if;
end process;

#include <stdbool.h>

// light black color assignment event attribute value.
unsigned char clk = '\0';

// logic basics assignment event activity value.
bool clkEvent = false;
...
if (clkEvent && clk == '1') {
    while (constraint_boundary_conditions) {
        // resolve basic logic influence systems;
    }
}

Table 100: discrete event background structures in the signal assignment (VHDL) language and in C-language.

Table 100 illustrates the difference between discrete event background structures in two different programming languages, C-language and VHDL-language. As it was lately, in 2008, defined, the VHDL-language concerns signal assignments' measurement calculations, although C-language is for programmable control data flow graphs involving operating systems like Linux and command-line controls like tar -xf filename.tar. Thus, the invisible Satan of event descriptions is:
- Think of an occurrence in order to have its effect and its judgment: to consider or to anticipate both the worried and the hopeful side happenings.
- Think out or through an event activity: to think about it until its conclusion is reached. Therefore, to understand and resolve event activities means to resolve the basic logic influence systems on event occurrences.
- Think up an arrangement of the system environment to reach an invention or devising; this means making the decision to change the event states inside this system environment.
As an example, think of an occurrence which is writing a book. To think out this event activity is to think through its processing analysis, and then to think up that the event results will be measurement principles of compile-compute-conclude processing-analysis aspects.

2.3. Discrete event dynamism and aspects:

Discrete event dynamism engenders the event occurrence, event activity, event attribute, event state and event environment. Discrete event aspects are the instantaneous outward appearances which should be associated with this dynamism. Indeed, this dynamism is an unknown-effect analysis; fuzzy or similar analyses start from true right sources engendering gathering database information over event occurrences, hence the analysis of the inside instruction introduction of event occurrence processing and its real viewpoint illustration.
- event occurrence: what does happen instantaneously right now?
- event activity: what is going on currently live?
- event attribute: which property is characterizing this event occurrence?
- event state: why is it so depicted?
- event environment: how much time does the event occurrence need to be faded?
In the details of modeling and simulation, a list of couples (time_event, value_event) would be involved within this simulation processing analysis. Therefore, Table 2 illustrates an example of a discrete event simulation list. The main task in this modeling-simulation processing analysis is to identify the event values for each time value, whereby the mathematical ratios are equations of xi to yj, which depict the fraction xi/yj between xi and yj. These mathematical ratios are often involved in error processing analysis, as for example the entropy calculation introduced by Shannon in 1948 [3] with his mathematical theory of information. This entropy measures the incertitude amount within each set of signals to be sent. Probabilities occur often in measurement processing analysis. Thus, probabilities are ratio values less than or equal to one. To associate probabilities with measurement processing analysis, they are determined as ratios of the value at time t to the maximum value over all times. Therefore, consider a set of measurements {(t1, V1), (t2, V2), (t3, V3), ..., (tn, Vn)}. The maximum value over all times is (tj, Vmax). The probability determination is then defined as follows: {(t1, V1/Vmax), (t2, V2/Vmax), (t3, V3/Vmax), ..., (tn, Vn/Vmax)}. Those probabilities would be involved within each measurement processing analysis of amounts and quantities, which are gains in magnitudes or gains in amounts, legacies, error corrections and other kinds of measurement processing analysis.

Event | Light color development state (locally set as global) | Description

(timeG, light green color) | ready to be processed | in this event occurrence, a quantity of electrical signal in volts would be transferred to the core processing unit.

(timeB, light blue color) | be inside a queue within any cache | in this event occurrence, a quantity of electrical current in Amperes would be exchanged between the BIOS and its input-output devices.

(timeY, light yellow color) | fetched operating symbol into the core processing unit | in this event occurrence, a quantity of electrical current in Amperes would be exchanged between the core processing unit and its inter-components.

(timeR, light red color) | scheduled finished surround symbol picked up and returned back into the cache | in this event occurrence, a quantity of electrical current in Amperes would be exchanged between the core processing unit and the computer input-output devices, which are the LCD display, mouse, keyboard, printer and so on.

(timeBW, (light black color, light white color)) | basic event environment interrupting, involving power on | scaling steps from the light black to the light white color, whose correlations would be used within any other light color development.

Table 2: Example of discrete event simulation based on light color development.

Table 2 presents discrete event simulation motor flows based on light color developments. Thus, from the advanced scientific works inside NASA [14] and [15], the primordial original main selfish set of light is the four-element light color set, whereby these light colors are: the light blue color, the light green color, the light yellow color and the light red color. The light blue color is an event state of being inside a queue. Hence, the root of this choice is based on the Earth's sky light color. Although, from the scientific advanced works inside NASA [14] and [15], the oscillation of sunlight envelops the fatal following frequency flows. The fatal following frequency flows are so fast that maybe for this reason they are involved in the blue coloration of the Earth's sky. Furthermore, the light yellow color is an event state of a fetched operating symbol to be processed, just joined and linked within the core processing unit. On the other hand, the light red color characterizes the finished scheduled surround symbol, or the stopped scheduling proceeding processing. Grounded in discrete event simulation theory [16], the activation of processing (set-reset op-codes) is generated throughout the couple (light black color, light white color), whereby the light black color illustrates the end interruption node proceeding processing, although the light white color depicts the start interruption node processing dynamism mechanism. For the depicted involving control data flow graph, figure 5 is a made representation of such a dynamism mechanism proceeding processing.

[Plot: [sin(·)]·[probability·log(probability)] waveform modeling-simulation processing, where probability is the ratio of the signal value at iteration i to the max signal value; vertical axis: magnitude, horizontal axis: time]

Figure 12: waveform within digital data modeling-simulation processing analysis involving the [sin(x)]·[x_t/lim(x)]·ln(x_t/lim(x)) function.

Figure 12 depicts the function [sin(x)]·[x_t/lim(x)]·ln(x_t/lim(x)) used to modulate digital data transmission. Notice that p is the ratio of 314 to 100, f is the oscillation frequency, sin(·) is the sinusoidal function, ln(·) is the logarithm function, and lim(·) presents the superior boundary, or maximum value, of a given set of signal values.

[Plot: exp(·) waveform modeling-simulation processing, labeled 0.8·2·p·f·t; vertical axis: magnitude, horizontal axis: time]

Figure 9: waveform inside modeling-simulation processing analysis within the digital data transmission-reception-absorption process.

Figure 9 depicts the exact waveform of the resistor-capacitor filter most often used in digital data processing analysis. The charging of the capacitor depicts the holding value to be stored, and the discharge depicts the nil values within the simulation. Grounded in this detail, the analog-to-digital conversion processing could be easily deduced from the magnitude values throughout resistor value variations. As a realization example, a resistor value variation from 10 Ohms to 100 megaohms could be used in such a task realization. In fact, the above resistor value variation could be computed in modeling-simulation processing analysis as a mathematical power function of two chosen functions, such as the function depicted by figure 9 and a function which could be better used in modulation, such as a(·)·sin²(·), or others defined in table 1.

[Plot: [sin(·)]·[exp(·)] waveform modeling-simulation processing; vertical axis: magnitude, horizontal axis: time]

Figure 10: waveform within modulation and modeling-simulation processing analysis.

Figure 10 shows the waveform depicted by the mathematical function [sin(·)]·e^(·). Within this mathematical function, the magnitude change values indicate the resistor value variations of the resistor-capacitor filter. The nil values indicate logic false, and the summit magnitude value indicates logic true. The in-between value variation determines the fuzzy logic processing analysis. Notice that the original frequency within this waveform is the ratio of one to the interval of time separating two successive magnitude summit values. This frequency is measured in Hertz (unit Hz). So, varying the resistor values allows magnitude variations at the output node. This produces a frequency variation inside the envisaged circuit. For an equivalent task, an inductor could be involved; the new circuit would then be depicted as an inductor-capacitor circuit. The inside circuit frequency could be measured throughout this mathematical formula: L·C·4·p²·f² = 1, where p is the ratio of 314 to 100, and f is the frequency to be calculated. The frequency calculation is a background of variations of L when C is constant, or vice versa, or of both varying in time. Therefore, the frequency is the ratio of 1 to 2·p·√(L·C). In fact, when wanting to set the frequency to its highest possible value, the inductance L and the capacitance C should be as small as they can be. As an example, for given values of C = 1 microfarad and L = 1 microhenry, the frequency is then equal to about 159.2 kHz.

Figure 11: inductor-capacitor-resistor circuit for frequency variation realization.

Figure 11 depicts the realization of frequency variation inside a circuit to filter the signal output. The inductor L receives the signal input edge characterizing current flows in Amperes. These current flows sustain or incur some basic logic influences: they incur phase shifting and magnitude modification, then they maintain their following flows within the circuit. Some of them traverse the resistor, and others contribute to the capacitor charge. The output node is a resistor-capacitor filter characterized by the 3 dB lossy magnitude at the cutoff frequency. In fact, to measure this cutoff frequency within the 3 dB lossy magnitude, the following mathematical functional operation should be onward proceeded: Gain_dB = 20·log10(V_out/V_in). The output voltage waveform is illustrated by 1 − e^(−a·R·C·t), although the input voltage waveform is depicted by sin(2·p·f·t). The 3 dB lossy magnitude from the maximum gain, Gain_dB = 20·log10(V_out/V_in), allows then a short time interval measurement dt, which characterizes the cutoff frequency 1/dt in Hz.

In fact, frequency oscillation realization is the aim object of the digital data transmission branch fields. The simple way to achieve this is the usage of the circuit included in figure 11. Incurring onwards send-receive of those frequencies is the subject aim of digital data transmission, such as that involved within digital satellites' processing analysis. The main original theme of this incurring onwards is the data encoding-decoding processing analysis. Hence, Shannon did propose an idea of data encoding based on bit-word-length calculations, thus the minimum amount of bits to be used to encode a character a, for example, found in an alphabet set of N characters. This number is thus calculated from 2^x = N + 1. To search for x, just introduce the logarithm function as follows: 2^x = N + 1 ⟹ log2(2^x) = log2(N + 1). Therefore, this number x could be determined as follows: x = log2(N + 1). Indeed, the logarithm conversion between bases is log_a(y) = ln(y)/ln(a). Thus, x = log2(N + 1) = ln(N + 1)/ln(2), where ln(·) is the natural logarithm function. As an example, where the ASCII code (255 characters) was encoded, the amount of bits was eight bits, since log2(256) = 8. To encode one character from the 255-character alphabet set, a sequence of eight bits is required, for example 10011010b. To send this character, the above techniques, such as charging and discharging the capacitor of figure 11 eight times or more, would be involved. In fact, the 1b represents the highest magnitude amount, however the 0b represents the nil environment of the

magnitude amount. Another methodology is to convert such a binary sequence to an integer value and to use the potentiometer command and controlling for digital-analog converting. Figure 8 represents such a processing analysis. For further gathering-discovering database information about energy sources, see below the charging chemical battery equation details.

2.4. Mathematical equations involved inside measurement processing:

The first mathematical equation involved within the measurement processing is the subtraction, sub(a, b), which means the rest to be counted, a − b. The next mathematical equation is the addition operation, add(a, b), which means the number to be counted is a + b. The most difficult mathematical equation is the power calculation. The final mathematical equation involved within measurement processing is the fraction operation, or ratio calculation. Thereby, the power calculation is based on quantity integration during the simulation time, and the ratio calculation is based on quantity derivation during the simulation time. As an example, consider the RC-circuit, which presents a measurement unit in the filtering branch field. This unit is the gain on magnitude in decibels. In digital-analog waveform generation like [14], or 11 GHz radar image processing, the optimal value of the capacitor is 1000 microfarads, and in this way the calculation of the possible measurable resistor gives about 0.4 micro-ohm.

Figure 8: resistor-capacitor filter response circuit.

Figure 8 presents the resistor-capacitor filter response circuit. This circuit has as input edge the current flows, which could be measured either in Coulombs or in Amperes, then two principal elementary components, the capacitor C measured in Farads and the resistor R measured in Ohms. Finally, this circuit has an output node, which delivers the instantaneous voltages depicted by figure 9. Notice that every inside space consists of measurable entities, including light and its theoretical aspects, which have their roots in the works of Einstein, Planck and co., who did define light as a measurable amount quantity and did win Nobel Prizes. Just remember that the domicile power frequency in the USA is 60 Hz, however its value in Sweden is 50 Hz. How did the jury of made representation not discover that the frequencies are distinct but the energies are the same with a 75 Watt light bulb?

Einstein performed thread-tasks for mathematical energy theory: Einstein performed thread-task in mathematical energy theory, is the famous formula: E=mC 2 , where m is a weight in SI system measurement, and C is light velocity as defined constant in SI system measurement. Indeed, light velocity was never be constant. Furthermore, light velocity could not be nevertheless determined whereby philosophy processing analysis aspects would be investigated within modeling-simulation processing analysis. Although, clear compile-compute-conclude processing analysis of mathematical energy theory is the how to measure principles inside Watts' measurement calculations. Therefore, first of all, a clean clear mathematical power theory should be investigated. As original main thread-task of power measurement calculation is: a. current ( t)event . voltage(t )event , where 0< a<1 . For energy source producing, see sections below. This energy production start from battery charges' chemical equations (see below; my work in 1997 with accumulator's band graph modelingsimulation thread-task) [12] and [13]. For Einstein and light producing fans, the comment question is: could you please produce any light to be transmitted anywhere with a constant velocity without any energy source background investigation?? In fact, consumer commerce measurement principles consist to buy energy source producer such a 12 Volts car battery or 24 Volts truck battery or similar capacitor's energy source producers. Hence, to produce light within the bought energy source producers such a 12 Volts charged battery, is to buy Amperes-to-light converters such light bulbs, which are led emit diodes, lamps, electrical arcs, light bulbs, projectors, and other kind of current-to-light converters. 
For radio event activity light emission processing analysis, the kernel nuclear energy source producer could never emit visible observed light (see nuclear centers for electricity production, if you could see any light phenomena within the kernel of nuclear atoms such as Uranium or similar). Thus, to calculate the produced light with any bought light bulb using a 12 Volts charged battery, the distance could be varied from 1 meter to perhaps 1000 meters or maybe more within just one second of time. Therefore, the envisaged velocity calculation of produced light is: velocity = distance(t) / t. This is the main definition of velocity (read books and papers on velocity definition and measurement principles). So, in one second of time, could any energy source producer deliver a light velocity of C = 2.99 x 10^8 m/s as a velocity measurement quantity in the SI measurement system? Clearly and exactly true not. So, why did Einstein and co define the light velocity as a constant declared with this value C = 2.99 x 10^8 m/s? I think the problem was the black sun investigation study. Notice that, from [14], the sun oscillates with till now undefined frequencies, and so the black dark aspects of the sunlight oscillations are shown in many pictures taken from this website: http://www.nasa.gov/multimedia/videogallery/index.html. Otherwise, consider the following formula of light motion velocity identification:
wavelength x wave frequency = C = 2.99 x 10^8 m/s,

Hence, from the above experiments, the distinct wavelengths vary from 1 meter to maybe 1000 meters (100 meter wavelength of a laser with a 3 Volts battery energy source producer), nevertheless the wave frequency is constant. The mathematical product of these two measurement quantities could thus never be constant.

Planck performed thread-tasks for mathematical energy theory: Planck's performed thread-task in mathematical energy theory is the famous formula E = hN, where h is Planck's constant in the SI measurement system and N is the wave frequency defined as a variable in the SI measurement system. In fact, within the same frequency values and distinct power values of the envisaged light bulbs, the energy could never be constant, despite the constant value of h, which is the definition of Planck's constant; see http://hyperphysics.phy-astr.gsu.edu/ for more details about this mathematical definition of energy.

Figure 200: proposal electrical circuit of Light Emission processing analysis.

Figure 200 depicts an electrical circuit of the phenomenon of light emission. Light could only be emitted and transmitted using light-emitting diodes and light bulbs or similar. Therefore, the measurement quantities of light and its velocity depend on those involved tools, such as electrical energy-to-light converter toolboxes. Indeed, these energy-to-light converter toolboxes allow light motion anywhere. The most around applications are disco emit light gaming-operations, optical-fiber data transmission, electrical arcs, the color of Earth's sky at night and so on, whereby the movable distance of the produced light could maybe attain 400000 meters or more in the next high-tech light bulb production processing. In fact, the motion of light is a hard thread-task to be achieved. Nevertheless, the true right light motion velocity is the original main sufficient suitable organization flow for researchers in the electrical branch and field's disciplines, whereby the true right definitions of velocity should have been integrated within any light motion phenomena study. The insight of logic processing analysis of light motion velocity with the source energy batteries of cars and trucks is the distance variations belonging to this light velocity motion mechanism, from 1 meter (car rear-light) to 300 meters of bright light (headlight of a car). Hereby, to calculate the light motion velocity, the time event value would be needed. For example, for 10 seconds of time, the above detailed distances could be so easily reached (1 meter to 300 meters). By this way, the light motion velocity, which is defined as velocity = distance(t) / t, proves that the light motion velocity could never be constant and till now has not yet reached the value C = 2.99 x 10^8 m/s identified by Einstein and co. The most around intense light motion velocity is the electrical arc motions' production within the insight of high power voltages. As a reference application for light motion velocity, the primordial thread-task production is to take the beam bicycle lighting production, which is based on the physical power that could be involved within such an organization flow. When this power is less or mere, the produced beam bicycle light is not clearly bright, and when this power becomes stronger, the produced beam bicycle light becomes intensely bright. The mathematical format of the energy inside such a thread-task processing is the integration of the produced power within instantaneous time values: E = Clear . Int_Volume (Int power(t) dt) dV, whereby the clearness of the envisaged surface-form depends on the power within the light production mechanisms. This power could be varied between 2.5 Watts LEDs and 500 Watts projectors or even 1000 Watts light bulbs. In my opinion, there is till now no other source of light production other than the electrical energy converters such as LEDs, light bulbs, electrical arcs and similar.

Viewpoint conclusion overviews: In this section an introduction viewpoint over discrete event simulation was given. To fill in the requirement of the exactly true right definition of discrete event simulation, the counting processing of the number of years should be involved within any modeling and simulation inside discrete event simulation. In fact, the start of a year is the birthday of the first month in every year. Hereby, the event occurrence is the birthday of the first month. Hence, the event activity is add(month's birthday, one day), which means processing an increment of each month's birthday in order to finalize one complete month's counting, so the second month and so on. Furthermore, an event attribute is a characteristic within each event.
Herewith, the envisaged event is the counting of the number of years. Therefore, the event attribute is an account identification of each month. This account identification is the most significant number of days within each month. For example, December's account identification is thirty-one days. Moreover, an event state is a collection of variables and signals, which describe the behavior inside a system. In fact, the start of a month and its end are two primordial event states involved within each counting processing of the number of years. Although, the main original event states accomplished within this counting processing are the start and the end of year identifications. On the other hand, those event states define the boundaries of the year's number counting processing, which is the envisaged system to model and simulate. In this section an overview of the how to measure principles was also given. The secret sign of these measurement principles is the usage of an interface model card for gathering database visualizations as viewpoints and outward appearances on LCD displays or similar. This model card could be used for many applications depending on the within involved sensors. The next step within this digital data processing analysis is the storage space optimization to save huge amounts of digital data on currently used hard disks. This requires digital data compression to be involved within.

Many digital data compression techniques were invented and used, which are the Cadence model VCD (value change dump) fill-in files (reference http://www.cadence.com), Lempel-Ziv (gzip command), the tar command, jpeg compression techniques, and so on. As digital data transmission and manipulation start to grow up, the associated hard disk storage spaces become the most around measurement ability. To resolve basic logic influence systems on those measurement abilities, mathematical arrays using different kinds of data compression techniques were invented. Among them, the most average used within digital data transmission and manipulation is the Cadence model: VCD fill-in files. This technique, which was developed by me, Said Mchaalia, in September 2000 in Dortmund CEI [11] and [16], has the format shown in figure 14. Whereby, the header file contains general gathering data information, which are date, version and time units, and scoping modules' naming and declarations. Thereafter come the transition event occurrences, which are instantaneous couple values of transition time values and transition event activity values. Hence, this is grounded to discrete event simulation theory, which ascribes modeling and simulation by the couple set {(timeevent, valueevent)iteration} for each discrete iteration inside the processing analysis. Thus, when an event, which is an instantaneous occurrence associated with the change of states inside the system environment, occurs, a time transition and a signal value transition incur and sustain to be maintained and filled in the value change dump file. The measurement units involved within this value change dump file are nanoseconds, or another simulation time unit, and binary sequences, which are the amount of bits for signal encoding like the famous eight-bit word used within ASCII codes. In digital signal processing, instantaneous event occurrences should be converted to iterations by the methodology technique time = n.T, whereby n is a varying number from nil to infinity.

Figure 14: VCD (value change dump) file format developed on September 2000.

Although, the organization methodology to develop such a VCD fill-in file, like that shown in figure 14, is depicted with figure 15. Thereby, figure 15 shows the principles of the compute-compile-conclude processing analysis aspects. This processing analysis is grounded to discrete event simulation. Thereby, the just values or parameters are incoming data for each time value event. These time value events characterize the manner of changing the synchronized simulation time, either within the events, which occur on clock cycle based simulation, or with synchronized time values with event occurrences on driven cycle based simulation. For example, the phenomena of light color changes. Notice that a sign could be any object, action, event, pattern, etc., that conveys a meaning.

Figure 15: principles of discrete compute-compile-conclude processing analysis

The main intentional background development depicted in figure 15 is the principles of language measurement synchronization. Furthermore, civilization is the local process whereby following flows achieve an advanced stage of development and organization. Hence, cultural fill-in or intellectual refinement, which is good taste deduced throughout genetic algorithms, or fuzzy logic and neural networking processing analysis. In fact, these algorithms have mimetic and mnemonic object aims involved within human society, which has highly developed material and spiritual resources and a complex cultural, political, and legal organization that is an advanced event state within social development environments. Indeed, to make and to burrow (which means moving or progressing by or as if by digging holes or tunneling) are two original main primordial event attributes, which ascribe and relate incoming data to inherent sources of quality and office characteristics. In fact, neutralism is a core processing uniform-unity, which has to role and rule event states during the progress of inherent data sources. This inherent data source progress starts from the read-signal [11] or start-process node, depicting the mathematical functionalism of registering and triggering, which is a pulse or circuit that initiates the action of another component. For example, using Timer 71054 within digital transmission processing analysis as shown in figure 13 and ascribed in the header of each VCD file (value change dump file) as viewed in figure 14. Finally, zing and yes are output nodes, which represent the measurement of differential database levels. Whereby zing, which means to move and progress fast, permits digital sensory: transmitting impulses from sensors to the model card meilhaus300 to be processed within modeling and simulation tasks and threads' achievements. Thus, transmitting impulses are incoming data within shortest time intervals. In fact, binary decision diagram processing analysis views over collected data and its storage in binary format were illustrated.
For further manipulation of the stored data, some viewpoints of binary decision diagram processing analysis would be hereby presented. The general purpose following motor flow of the binary decision processing analysis could be depicted in Figure 05, whereby the gathering discovering data information's database starts with the start-process node, and finally the end-process node collects inside circuit signals and sends them to the grounded to gathering database information node. The following motor flows of edges, which represent data values, are binary values from received data through the incoming data node input signals. For example, a received 6 volts would be converted to 00110b and so on. The incoming data has the format of (timeevent, valueevent). For example, (00.00.04, 9 volts) would be converted to (00000000000000100b, 01001b), where the first binary inside-couple value represents the event time and the second binary inside-couple value represents the event value. The further processing of the obtained event activities characterizing event occurrences is enveloped inside a self set of couples: {(timeevent, valueevent)}index. To start binary decision diagram processing analysis, a sufficient suitable measurement array of event activities is required. Thus, this measurement array is based on the indexing system sign of event occurrences. The start-process node illustrates the index null of event activity inside the envisaged system environment to reach measurement calculations within modeling-simulation processing analysis. In fact, in many branch disciplines, these measurement principles are always involved within, because the core processing unit is an arithmetic logic unit, which manipulates insight binary values. Therefore, the mainly used digital data compression techniques are those whose roots come from Lempel-Ziv [3] and co's theoretical aspects and effects.
The bright idea of such a data compression technique is the usage of the sliding windows background development, whereby one focuses on the window that allows the absorption of the most significant amount of digital data involved during modeling and simulation processing analysis. Thereby, the utilization methodology is to fill in the mere (slight and small in quantities involved within any sliding window methodology) array by moving the sliding window most around the left side and the right side of the center digital data amount. In dictionary digital data compression techniques, the involved algorithm could be described as shown in table 5. Table 5 illustrates the algorithm development for dictionary digital data compression based on Lempel-Ziv and ASCII coding ideas. Whereby, two map tables are involved: one hash table for the instantaneous storing of vectors of characters and another one for the instantaneous storage of character positions inside a file. Each character position is characterized by the character position in the line and the line position in the file. In C-language, the data type declarations are defined as illustrated within Figure 17, which views over digital data compression techniques.

Event Time            Characteristics
time t                Set the local global simulation timer to time t. Define the mere
                      array for data storage: this mere array is a hash table of indexed
                      vectors of characters; in C-language:
                      typedef map<int, vector<char> > hashtable;.
                      Define the mere array for position storage: this mere array is a
                      position table of integer pairs; in C-language:
                      typedef map<char, pair<int, int> > positiontable;.
time t + t            Reading the current character.
time t + 2.t          Searching if the read character is inside the mere array; store it
                      in the hash table and then store its position in the position
                      table.
time t + j.t, j > 2   Searching within the position table based on the character key. If
                      the return value is not nil, this character exists inside the hash
                      table, thus just store its position in the position table. Else,
                      store it in the hash table and its position in the position table.
time t + n.t, n > j   while not end of file { moving the sliding window to the next
                      character and looping the operation to the reading-character step
                      again. } The fastness of this step depends on the number of clock
                      cycles reserved for reading from files and writing into access
                      memory.
time t + m.t, m > n   when end of file { storing the hash table and the position table
                      in a file. } The fastness of this step depends on the number of
                      clock cycles reserved for writing into files and reading from
                      access memory.

Table 5: digital data compression's algorithm development.

#include <cstdio>
#include <iostream>
#include <map>
#include <utility>
#include <vector>
using namespace std;

typedef map<int, vector<char> > hashtable;
typedef map<char, pair<int, int> > positiontable;

int main(int c, char *v[]) {
    if (c <= 2) {
        cout << "No input and output file names were given\n";
        return 0;
    }
    hashtable htable;
    hashtable::iterator it = htable.begin();
    positiontable ptable;
    positiontable::iterator ik = ptable.begin();
    vector<char> vect;
    FILE *fptr = fopen(v[1], "r");    // file to be read
    FILE *gptr = fopen(v[2], "w");    // file to be written
    unsigned int lnbr = 0;            // line position inside the file to be read
    unsigned int cpos = 0;            // character position inside the file to be read
    char cptr;
    unsigned int id = 0;
    fscanf(fptr, "%c", &cptr);
    if (cpos == 0) {
        ptable.insert(ik, pair<char, pair<int, int> >(cptr, pair<int, int>(lnbr, cpos)));
        vect.push_back(cptr);
        htable.insert(it, pair<int, vector<char> >(id, vect));
    } else {
        while (!feof(fptr)) { /* resolve basic logic influence systems */ }
    }
    fclose(fptr);
    fclose(gptr);
    return 0;
}

Figure 17: declaration of the envisaged tables within C-language programming.

Nowadays designers have to verify a huge number of complex levels of digital circuits, embedded software and on-chip analog circuitry with fragmented methodologies that substantially impede verification speed and efficiency. They also face a large number of technical issues including design performance, capacity, test development, test coverage, mixed-signal verification, and hardware-software co-verification methodologies. In fact, optimizing verification speed is a complex research subject. Overall, verification methodologies are used by the designers at a variety of design integration levels. To improve digital hardware design using these verification methodologies, many digital simulation techniques are used. One of them is discrete event simulation, which has a successful track record in the improvement of the hardware verification process. In contrast to other simulation methods (like differential equations) in which systems evolve continuously in continuous time, the systems in discrete event simulation are described by discrete events and appropriate processes. Discrete event simulation performs, indeed, each event or transaction or item individually using an appropriate process. Simulation, however, is not a satisfactory solution to the validation problem of digital hardware for many reasons, such as: each schedule (run) proves only the correctness of the design under verification for that particular sequence of inputs (stimuli); and only one design under verification state and input combination are visited per simulated clock cycle. Although cycle-based simulation involves these simulation limitations, it is still a sophisticated technology choice for the validation process of large synchronous systems, in which logical simulation is nicely scalable with regard to designer requests. To propagate values from system inputs to system outputs, a simulation clock cycle is required. After finishing one cycle, the next cycle begins.
Moreover, practical cycle-based simulators allow for circuits with multiple clocks and interface to event-based simulation. However, cycle-based simulation ignores system delays and inter-phase relationships. This limits the amount of information about the design that can be extracted from the simulation. Note that cycle-based simulation does not work for asynchronous designs and cannot be used in timing verification. Event-driven simulation environments use the traditional discrete event simulation mechanism and consider system delays and inter-phases. During each verification process using either cycle-based simulation or event-driven simulation, we have the opportunity of outputting the simulation results to waveform diagrams. For a detailed performance evaluation of the design under verification, a signal trace file format called the Value Change Dump file (VCD file for short) has been developed by Cadence to store signal waveforms. Not only the input and output signal waveforms are stored in this file, but also the internal signal waveforms. The waveforms will be needed to extract a trace of the real behavior of the designs. By this way, the physical size of this file can become excessively large, although the signal waveforms are held in this file in a compact format. On the other hand, using cycle-based simulation allows reducing the stored signal waveforms, because it does not consider all real transactions of signals that are, for example, caused by delays and inter-phases. But this will not be useful to improve the verification process for all digital circuit types, nor to perform timing verification. Nevertheless, even when using cycle-based simulation, the generated waveform files are usually huge, often exceeding the capabilities of the storage system. As follows, first of all we did present an overview of digital design simulation and describe how waveforms can be generated during the simulation process. Secondly, we did introduce the benefits of discrete event simulation and cycle-based simulation in detail, and illustrate how they will be used in our work.
Finally, the format of a signal trace file, which is created during the verification phase, and digital data compression techniques were discussed within the how to measure principles section. Next, a small annex about different culture viewpoints will be presented.

Measurement principles proceeding processing dynamics analysis: As aspects define a way in which something can be viewed by the mind or appears to the eye, measurement processing analysis aspects are viewpoints and outward appearances. To use the computing-compiling-concluding unit in order to visualize those viewpoints and outward appearances, an interface PCI card, such as this model card meilhaus300, would be developed and integrated within the processing. Thereby, the model card is an interface card for IBM-AT and compatibles, with an ISA 16-bit slot capacity or more, with Analog-Digital and Digital-Analog converters, and with 24 Transistor-Transistor Logic (TTL; 7400, 7404, etc.) input-outputs. This model card could be involved in software programming using C-language or other similar languages. In fact, the Analog-Digital converter generates a 12-bit word throughout 16 channels, Analog-Digital channel 0 to Analog-Digital channel 15, or 8 differential input channels and a multiplexer. Its frequency is variable from 0 Hz to 200 KHz. The manipulation of the converter elements, which are block devices and input-output types, involves: single or multiple timer, trigger and interrupt modes, the chosen channel number, unipolar with positive values or bipolar with negative and positive values, and magnitude gains. Within the software development, those block devices and input-output types are addressed, interrupted and manipulated. Two types of programming converters are distinguished: the programmable gate array PGA 203, for which the magnitude gain is 1 or 2 or 4 or 8.
Furthermore, the unipolar voltage value variations are engendered in the following measurement set: {[0 V, 10 V], [0 V, 5 V], [0 V, 2.5 V], [0 V, 1.25 V]}. For the programmable gate array PGA 202, the magnitude gain is 1 or 10 or 100 or 1000. For this PGA 202, the unipolar voltages are engendered in the following measurement set: {[0 V, 10 V], [0 V, 1 V], [0 V, 0.1 V], [0 V, 0.01 V]}. The choice of one of those input-output types is based on software development. Notice that the converter receives its input-output via a sub-digital connector (50 pins). The Analog-Digital converter (the reference of such an ADC is MAX176) is powered by a 3 W DC/DC converter. However, the Digital-Analog converter (the reference of such a DAC is DA664) generates a 12-bit word throughout four digital-analog channels, which are channel A, channel B, channel C, and channel D. It is connected directly to the 16-bit data bus and the 8-bit address bus of the computing-compiling-concluding processing unit. It is isolated by a 1 W DC/DC converter. To secure the converter, each channel is connected to the ground of the converter by a resistor and a capacitor as described in figure 11. Hence, the digital input-output system (the reference of such a system is BCT543) generates an input or an output word of 8 bits, as defined in the example of ASCII code in the previous section. The choice of digital input-output ports is among: {(DIO port A0, ..., DIO port A7), (DIO port B0, ..., DIO port B7), (DIO port C0, ..., DIO port C7), and (DIO port D0, ..., DIO port D7)}. Only one of those ports is chosen for any digital input-output processing analysis through software development. Indeed, the registers involved within this model card are an 8-bit word register used for control and a 16-bit word register used for instantaneous data storage. Although, the Timers (the reference of such a device is 71054) are synchronous devices compatible with the 8253. The timer device has three 16-bit timers. The first one and the second one are cascaded, while the third one is independent. The second one generates a frequency of 1.5 MHz. The third one generates a frequency of 3 MHz. Thus, the output of the first one activates the channels. The time of these channel activations is controlled via the scan-time input of the first one, which allows the control of channel activation.

Figure 13: model card meilhaus300 for logic bus interface communication

Figure 13 shows the different on-board components of the model card meilhaus300. This model card is a PCI interface IBM-AT compatible card. To visualize measurement processing analysis aspects using this model card, software development based on C-language or similar would be used. This allows the usage of the computer as an oscilloscope to illustrate the received data from the model card and put them onto the display through the putpixel(.,.,.) function. The x-axis is then chosen to indicate the number of iterations involved within data reception and the y-axis indicates the magnitude value levels of the received voltage from the model card. The division processing analysis is an arbitrary choice for users.

To use the software development involved within this model card for graphics visualization in a two-dimensional coordinate system, such that the x-axis is exactly the time-axis and the y-axis is exactly the magnitude-axis, sensor requirements incur. Therefore, many sensor kinds could be distinguished, which are low noise block sensors, temperature sensors, velocity sensors, volume level sensors, etc. Hence, the original main assigned sufficient suitable tasks are to search for and investigate sophisticated sensors, which allow the true right converting of detected data to voltage measurements.

References:
[1] Bodanis, David (2005), Electric Universe, New York: Three Rivers Press, ISBN 978-0-307-33598-2.
[2] SI base units, SI brochure (8th ed.), BIPM, http://www.bipm.org/en/si/si_brochure/chapter2/2-1/, retrieved 2012, August 12th.
[3] J. Ziv and A. Lempel, A universal algorithm for sequential data compression, IEEE Transactions on Information Theory, vol. IT-23, no. 3, May 1977.
[4] Mathematical courses for engineers.
[5] Basic magnetic flux courses.
[6] Wolfgang Becker et al., Magnetic localization of EEG electrodes for simultaneous EEG and MEG measurement, IEEE conference on medicine and biology, Lyon, 1992, pp. 34-36.
[7] Diekmann et al., Comparison of MEG, EEG and frequency MRI responses to identical electrical stimuli delivered to a peripheral nerve.
[8] Said Mchaalia, Bond graph modeling-simulation techniques for a battery charging system with 400 Amperes average current data edge flows (using 32-bit PCI IBM-AT compatible interface cards), ASC, at the National Tunisian engineering university, 1997, Tunisia, head reference: Prof. Ksouri.
[9] Said Mchaalia, Measurements with an 11 GHz noise radar (using a PCI interface card like meilhaus300 for PCI IBM-AT compatible interface cards), Microwave department, at Ilmenau technical university, 1998, Germany, head reference: Professor Heinrich Loele.
[10] Claude Shannon, A mathematical theory of communication, 1948.
[11] Said Mchaalia, Raja Mchaalia, Digital waveform generation principles involving data encoding and compression techniques, www.bushcenter.com and co, August 20th 2012.
[12] Mchaalia S., El Kamel A., Ksouri M., Borne P., Neuromimetic approach of multimodel representation, CESA'98, Conf. IMACS-IEEE on Computational Engineering in Systems Applications, Hammamet (Tunisia), vol. 1, pp. 500-504, April 1998.
[13] Mchaalia S., El Kamel A., Ksouri M., Borne P., Robustness analysis of the neuromimetic approach in multimodel representation, ICSSE'98 International Conference on Systems Science and Systems Engineering, Proc., Beijing (China), pp. 180-184, August 1998.
[14] www.science.nasa.gov, visited August 21st 2012.
[15] Said Mchaalia, Waveform compression (draft), Computer Engineering Institute, Dortmund university, Germany, December 11th 2002.
[16] Said Mchaalia, Raja Mchaalia, Digital waveform representation based on light color kind's selfish set, www.bushcenter.com and co, August 22nd 2012.