
MASARYKOVA UNIVERZITA
FAKULTA INFORMATIKY

MASTER'S THESIS

Design, implementation and simulation of intrusion detection system for wireless sensor networks

Bc. Lumír Honus

Brno, spring 2009

Declaration
Hereby I declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during its elaboration are properly cited and listed in complete reference to the due source.

Advisor: RNDr. Andriy Stetsko

Acknowledgement
I would like to express my thanks to my supervisor RNDr. Andriy Stetsko and to Bc. Barbora Micenková. I am grateful to Bc. Jana Jourová for her moral support.


Abstract
In this master's thesis, a simple intrusion detection system (IDS) for wireless sensor networks is designed and implemented. The proposed IDS is based on the watchdog monitoring technique and is able to detect selective forwarding attacks. In addition, improvements that make the watchdog monitoring technique more reliable are described, and the results of simulations of the IDS in the TOSSIM simulator are presented.


Keywords
Wireless Sensor Networks, Intrusion Detection System, Watchdog Monitoring Technique

Contents
1 Introduction
2 Wireless Sensor Networks
   2.1 Applications
   2.2 Hardware Platforms
   2.3 Security Objectives
   2.4 Attacker model
   2.5 Possible Attacks
3 TinyOS
   3.1 Versions
   3.2 NesC
   3.3 TinyOS Architecture
      3.3.1 Boot Sequence
      3.3.2 Concurrency
      3.3.3 Hardware Abstraction Architecture
4 Intrusion Detection Systems
   4.1 IDS Classification
   4.2 Watchdog Monitoring
      4.2.1 Regular Collisions
      4.2.2 Ambiguous Collisions
      4.2.3 Partial Dropping
      4.2.4 Receiver Collisions
      4.2.5 Limiting The Transmission Power
5 Radio Stack
   5.1 Collection Tree Protocol
   5.2 Active Message Layer
   5.3 Hardware Dependent Radio Stack on TMote Sky
      5.3.1 Receiving Message
      5.3.2 Transmitting a Message
   5.4 TOSSIM
      5.4.1 Sending message
      5.4.2 TOSSIM Shortcomings and Problems
6 Implementation
   6.1 System Overview
   6.2 Tapping communication
   6.3 Scheduler
   6.4 Statistics Manager
   6.5 Detection Manager
      6.5.1 Selective Forwarding Engine
      6.5.2 Advanced Selective Forwarding Engines
   6.6 Response Module
7 Deployment and Simulation
   7.1 Deployment on Tmote Sky
   7.2 Deployment on TOSSIM
   7.3 Simulation
      7.3.1 Error ratio
      7.3.2 Advanced selective forwarding engine
      7.3.3 Simulation with attackers
8 Conclusion

Chapter 1

Introduction
Wireless sensor networks (WSNs) are often deployed in a physically insecure environment where we can hardly prevent attackers from gaining physical access to the devices. Since making nodes resistant to physical tampering would make them much more expensive, we have to reckon with the possibility that an attacker captures nodes and retrieves their cryptographic material via physical tampering.

In this master's thesis we will design and implement a simple intrusion detection system based on the watchdog monitoring technique. The intrusion detection system will be deployed on a network that uses the Collection Tree Protocol (CTP) for gathering data measured on the nodes, and it will be able to detect selective forwarding attacks. We will focus on problems related to the watchdog monitoring technique and present improvements that make the technique more reliable in a network with a large number of ambiguous collisions.

In the second chapter we will present wireless sensor networks and describe the security objectives we would like to achieve. Besides that, the attacker models that should be taken into account in WSNs will be presented and the basic attacks that attackers can perform will be described. We have implemented our intrusion detection system for nodes running TinyOS; in the third chapter, we will take a closer look at this operating system. In the fourth chapter, we will give a brief overview of intrusion detection systems and look at the watchdog monitoring technique in detail. In addition, we will describe the problems that affect the accuracy of the watchdog monitoring technique in a real environment and the techniques an intruder may use to confuse our intrusion detection system. In the next chapter, we describe the radio stack from the top to the bottom. We also compare the hardware dependent radio stack on physical nodes that use the CC2420 radio with the radio stack in the TOSSIM simulator. Finally, we will point out the problems that make the simulation of our IDS inaccurate and describe the steps we had to take to improve it. The sixth chapter is devoted to the description of our intrusion detection system. First, we will give a brief overview of the system architecture and then we will describe the individual components of the system. In the seventh chapter we will give instructions on how to deploy our intrusion detection system and present the results of simulations on the revised TOSSIM.

Chapter 2

Wireless Sensor Networks


A wireless sensor network is an ad-hoc network composed of a large number of small inexpensive devices denoted as nodes (motes). These nodes are battery-operated devices capable of communicating with each other without relying on any fixed infrastructure.


Figure 2.1: Wireless Sensor Network

A typical WSN (see Figure 2.1) consists of a base station and nodes that sense the environment and send data to the base station. The base station (also denoted as a sink or gateway) is more powerful than the other nodes and serves as an interface to the outer world. When a node needs to send a message to the base station that is outside of its radio range, it sends it through internal nodes. The internal nodes are the same as the others, but besides local sensing they also provide a forwarding service for the others.

A typical wireless sensor node is equipped with one or more sensors capable of monitoring physical or environmental conditions such as temperature, humidity, pressure, vibrations or light intensity. In addition, each node is equipped with a radio transceiver, an energy-efficient microcontroller, and an energy source, usually a battery. Due to energy constraints, the nodes are only capable of a limited amount of computation and signal processing.

Compared to a conventional approach that deploys a few expensive and sophisticated sensors, a WSN performs networked sensing using a large number of relatively unsophisticated and cheap sensors. We can summarize the advantages of the WSN approach as greater coverage, accuracy and reliability at a possibly lower cost [13].

2.1 Applications

The development of WSNs was originally motivated by military applications. As a predecessor of WSNs we can consider the Sound Surveillance System known as SOSUS. This network consisted of bottom-mounted hydrophone arrays connected by undersea communication cables and was used for anti-submarine warfare during the Cold War [25]. The military currently uses wireless sensor networks, for instance, for battlefield surveillance: sensors can detect, classify and localize hostile forces 24 hours a day in all weather conditions.

Nowadays WSNs are used in many industrial, civilian, environmental and commercial areas. Most of the current WSN applications fall into one of the following classes [13]:

Event Detection and Reporting: Applications that fall into this class have a common characteristic: the occurrence of the events of interest is not regular. A WSN of this type is expected to be inactive most of the time and to activate only when an event occurs. Typical applications of this class are intrusion detection systems or forest fire detection systems.

Data Gathering and Periodic Reporting: These applications are often used to monitor environmental conditions such as temperature, humidity or lighting. They usually periodically sense the environment and send the measured values to a base station.

Sink-initiated Querying: Applications of this type, rather than periodically reporting their measurements, wait for a base station (sink) query. That enables the base station (sink) to extract information at a different resolution or granularity, from different regions of space.

Tracking-based Applications: In many application areas we are interested in tracking the movements of some object. WSNs for this purpose combine some characteristics of the above three classes. For instance, when a target is detected, the base station has to be notified promptly (event detection). Then, the base station may initiate queries to receive time-stamped location estimates of the target, so it can calculate the trajectory (sink-initiated query) and keep querying the appropriate set of sensors [13].

2.2 Hardware Platforms

There are many types of sensor node hardware platforms. The platforms differ from each other, but we can find several common denominators. First, it is a slow processor: with a few exceptions, contemporary sensor nodes contain a processor that operates at a frequency of the order of megahertz. Second, it is a limited size of program memory and usually only a few or tens of kilobytes of RAM. Although Moore's law, which is also applicable to WSNs, predicts that hardware for sensor networks will be smaller, cheaper, and more powerful, there will always be compromises: nodes will have to be faster or more energy-efficient, smaller or more capable, cheaper or more durable [7]. Recently we can see two trends: on the one hand, more powerful platforms such as IMote2 are introduced [33]; on the other hand, there is an effort towards further miniaturization and price reduction [24].

Figure 2.2: Tmote Sky sensor mote [23]

For the purposes of this work a present-day node, the Tmote Sky, was used. The Tmote Sky has an 8 MHz Texas Instruments MSP430 F1611 microprocessor and is equipped with 10 kB of RAM and 48 kB of flash memory. This 16-bit RISC ultra low power processor features extremely low active and sleep current consumption. The processor is in a sleep mode most of the time in order to minimize power consumption [23]. The TMote platform features the Chipcon CC2420 radio, which is an IEEE 802.15.4 compliant radio. The radio operates in the 2.4 GHz unlicensed band and is capable of a data rate of 250 kbps [23]. The mote has integrated humidity, temperature and light sensors. This platform fully supports the TinyOS operating system.

2.3 Security Objectives

In WSNs we would like to provide similar security mechanisms as in other types of (wireless) networks, such as [27]:

Data Confidentiality: Sensor networks are often used for gathering sensitive data. We would like to ensure that the data is protected and will not leak outside of the sensor network.

Data Authentication: We would like to have the opportunity to verify that received data really was sent by the claimed sender.

Data Integrity: The data integrity property ensures that data has not been modified or altered by an unauthorized party during transmission.

Data Availability and Freshness: Sensor networks are often used to monitor time-sensitive events, therefore it is crucial to ensure that the data provided by the network is fresh and available at all times.

Graceful Degradation: We would like the sensor network mechanisms to be resilient to node compromise. When a small portion of nodes becomes compromised, the performance of the network should degrade gracefully.

In conventional computer networks, message authentication, confidentiality, and integrity are usually achieved by end-to-end security mechanisms such as SSH or SSL. The reason is that in conventional networks end-to-end communication is the dominating traffic pattern [13]. By contrast, in sensor networks, many nodes are usually sending data to a single base station. In-network processing, such as data aggregation, duplicate elimination, or data compression, needs to be run in an energy-efficient manner. To achieve efficient in-network processing, the internal nodes need to access, modify, and possibly suppress the contents of messages. For this reason, it is often not possible to use end-to-end security mechanisms between a sensor node and a base station [13].

2.4 Attacker model

According to [14], we can distinguish mote-class and laptop-class attackers. A mote-class attacker has access to one or several sensor nodes with capabilities similar to the other nodes in the network. In contrast, a laptop-class attacker has access to much more powerful devices, for example laptops, which may have a more capable CPU, longer battery life or a high-power radio transmitter. This allows him to perform some attacks that are hardly feasible for a mote-class attacker (such as the wormhole attack, see Section 2.5). He can also perform some mote-class attacks much more effectively; for example, a single attacker might be able to jam the whole network.

Furthermore, attackers can be outsiders or insiders. An outsider attacker has no special access to the network. In contrast, an insider attacker can have access to cryptographic keys or other code used by the network and is a part of the network. For example, the insider can be a compromised node or a laptop-class adversary who stole cryptographic keys, code, and data from legitimate nodes [27]. We assume that the attacker can easily become an insider, because wireless sensor networks are usually deployed in a physically insecure environment and the adversary is easily able to capture nodes and then extract the cryptographic material by physical tampering [27]. Furthermore, we assume that the intruder can capture any node in the network, but generally only a limited number of them.

2.5 Possible Attacks

There are many types of attacks on WSNs, which have been described in [27, 5]; we focus particularly on selective forwarding attacks.

In the case of a wormhole attack, an attacker establishes a tunnel between two nodes, usually using a more powerful communication channel. Afterwards he is able to convince two nodes that they are neighbors, or he can offer other nodes a better route to a base station.

In the sinkhole attack, a malicious node tries to draw as much traffic as possible from a particular area by making itself look attractive with respect to the routing metric. As a result, the malicious node attracts all the traffic that is destined for a base station [15].

In the selective forwarding attack, a malicious node may refuse to forward some or all packets. This attack is most effective when the malicious node performs routing operations for a large part of the network; therefore this attack is often preceded by a sinkhole or wormhole attack in order to increase the malicious node's attractiveness.

Chapter 3

TinyOS
TinyOS is an open-source operating system designed for wireless embedded sensor networks [30]. It is a component-based, event-driven operating system written in the nesC programming language. The core of the operating system has a very small memory footprint and is extensible through various components. TinyOS has a component library that includes frequently used components such as network protocols or sensor drivers. An application connects the components it needs through wiring specifications. Thus we can compile a minimal-size operating system with just those components that the application really needs. TinyOS is not the only operating system applicable to wireless sensor networks; there are several others such as SOS [3], Contiki [6] or MANTIS [2], but it is probably the most popular one, particularly in the academic sphere. It has been adopted by thousands of developers worldwide and supports many platforms [4].

3.1 Versions

TinyOS 1.0 was released in September 2002. This first generation became very successful, but experience revealed some shortcomings in its design. In particular, there were problems with the reliability of larger applications that stemmed from the component composition, and porting applications to new platforms was very difficult. Because TinyOS was well established and major changes had to be made, it was decided to make a fresh start rather than to try to change the current, mostly stable code. To avoid many of the problems of the first generation, TinyOS 2 uses a component hierarchy based on four design principles [19]:

Telescoping Abstractions: The abstractions are logically split across hardware devices and have a spectrum of fine-grained layers. The highest layers are the most simple and portable, while the lowest allow hardware-specific optimizations.

Partial Virtualization: Some abstractions, usually the top layers of a telescoping abstraction, are virtual or shared, while others, such as buses, are not. The virtualization simplifies application development.

Static Binding and Allocation: Every resource and service is bound at compile time and all allocation is static. This means that components allocate all of the state they might possibly need.

Service Distributions: A service distribution is a collection of components that are intended to work together, providing a unified and coherent API. An application wires only to the service components, leaving internal components to ensure that the underlying implementations work properly.

The first stable version of TinyOS 2, which is also referred to as T2, was released in November 2006. When we refer to TinyOS in this work, we mean the second generation.

3.2 NesC

NesC is a programming language designed to express the structuring concepts and execution model of TinyOS. It is an extension of C designed so that the code is efficient on the target microcontrollers used in sensor networks [9]. NesC applications are built by writing and assembling components. The components are software units that consist of a specification and an implementation. The specification states which interfaces the component uses and which it provides. The implementation contains the logic behind these interfaces. Interfaces define a bidirectional relationship between components [19]. To connect two components, the first of them has to provide an interface while the second has to use it. The connection is called wiring. For example, let us have two components, ApplicationC and TimerC (see Figure 3.1). These components are connected (wired) through the Timer interface.

Figure 3.1: Example of Cooperating Components

Interfaces contain commands and events. The component which provides an interface has to implement the interface's commands, and the component which uses an interface has to implement its events. In the example (see Figure 3.1), the Timer interface contains a fired event and a start command. TimerC provides the Timer interface, so it has to implement the start command; ApplicationC uses the Timer interface, thus it has to implement the fired event. The components then communicate with each other by calling commands and signaling events. In our very simple example, ApplicationC may call Timer.start in order to execute the Timer.start command in TimerC, and TimerC may signal Timer.fired to execute the Timer.fired event implemented in ApplicationC (a small module sketch illustrating this pattern is given at the end of this section).

NesC also supports the following features that are widely used in TinyOS:

Generic components may be instantiated more than once and can take types or constants as arguments.

Tasks are pieces of code whose execution takes a significant amount of time but which are not time-critical. Posted tasks are executed later by the TinyOS scheduler when the processor is idle.

Atomic sections are time-critical pieces of code. They are implemented by disabling interrupts, therefore they should be as short as possible.
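The following minimal sketch mirrors Figure 3.1 and ties these concepts together. The simplified Timer interface is the one from the figure, not the real TinyOS Timer interface (which is parameterized by a precision tag and provides startOneShot/startPeriodic); the module shows a command call, an event handler and a posted task:

   // Simplified Timer interface as drawn in Figure 3.1 (illustrative only).
   interface Timer {
     command void start(uint32_t interval);
     event void fired();
   }

   // ApplicationC uses the Timer interface, so it may call its command
   // and must implement its event.
   module ApplicationC {
     uses interface Timer;
   }
   implementation {
     task void doWork() {
       // deferred, non-time-critical processing
     }

     event void Timer.fired() {
       post doWork();           // hand the work over to the scheduler
       call Timer.start(1024);  // restart the timer through its command
     }
   }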

3.3 TinyOS Architecture

TinyOS is a component-based operating system. There is always a top-level configuration component (configuration components are special components that are used to assemble other components together). In this top-level component it is defined which components will be used. Usually it includes the MainC component, which is responsible for the hardware initialization and the boot sequence (see Section 3.3.1), and the main application component. Besides these, the top-level component can also initialize needed service components, sensors, timers or radio abstractions and wire them to the application component. The components initialized in the top-level configuration are able to further initialize other components that they need, and there is always a path from the top-level component to any used component. This allows us to compile a minimal-size system containing only the components that have been referenced.

   configuration TopLevelComponentC {
   }
   implementation {
     components MainC, AppC;
     components new TimerMilliC() as Timer0;

     AppC -> MainC.Boot;
     AppC.Timer0 -> Timer0;
   }

Figure 3.2: Example of a top-level component

For example (see Figure 3.2), let us have a very simple top-level configuration component that defines three components to be used: MainC, AppC and an instance of the generic component TimerMilliC() referred to as Timer0. Furthermore, this configuration defines that the AppC component should use the Boot interface provided by the MainC component and the Timer interface (named Timer0) provided by Timer0.

3.3.1 Boot Sequence

The compiled TinyOS is a binary program that is programmed into the mote. When the programming is done, the mote is restarted. After the restart, the mote begins executing the main() function, which is implemented in the RealMainP component. This component is responsible for performing the TinyOS boot sequence. The system interface to the boot sequence is provided by the MainC component; components that need to hook into the boot sequence should wire to the MainC component. The standard boot sequence consists of five steps:


1. Platform Bootstrap: First, the very low-level hardware initialization is performed, such as setting the processor mode. On most platforms there is no low-level initialization needed and the platform bootstrap is just an empty function.

2. Scheduler Initialization: The scheduler is initialized in order to provide applications with the possibility to post their tasks.

3. Platform Init: TinyOS now performs the platform initializations that are specified in the PlatformC component. These initializations usually must follow in a very specific order due to hidden hardware dependencies. After the platform init is done, the scheduler runs all tasks that may have been posted during the platform init.

4. Software Init: Next comes the software init. This means that all components (applications) that are wired to MainC.SoftwareInit perform their initialization function. At the end, the scheduler runs the posted tasks again.

5. Start Main Loop: Before the main loop is started, the system interrupts are enabled and Boot.booted is signaled to the applications.
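As a concrete illustration of hooking into the boot sequence, the following sketch uses two hypothetical components, AppC and AppP (not taken from the thesis code): the module provides Init, which MainC calls during the software init step, and uses Boot, whose booted() event corresponds to the last step above.

   // AppC.nc -- configuration
   configuration AppC { }
   implementation {
     components MainC, AppP;
     MainC.SoftwareInit -> AppP.Init;   // AppP's Init.init() runs in step 4
     AppP.Boot -> MainC.Boot;           // Boot.booted() is signaled in step 5
   }

   // AppP.nc -- module
   module AppP {
     provides interface Init;
     uses interface Boot;
   }
   implementation {
     command error_t Init.init() {
       // one-time state initialization, interrupts are not yet enabled
       return SUCCESS;
     }

     event void Boot.booted() {
       // the system is up: start timers, the radio, etc.
     }
   }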

3.3.2 Concurrency

There are two types of code in TinyOS: synchronous and asynchronous. Asynchronous code is initiated by hardware interrupts and runs preemptively. This means that whenever an interrupt is caught, the current code execution is interrupted and the interrupt handler instantly begins its execution. An interrupt can preempt tasks or other interrupts, but it cannot preempt an atomic section. The functions called by asynchronous code must be asynchronous as well. The only way that asynchronous code can execute a synchronous function is to post a task. Synchronous code is only reachable from tasks. Because tasks are not preempted by other tasks and run atomically with respect to each other, we can rely on the fact that no other task suddenly executes and modifies our data. Tasks are executed by the task scheduler. The task scheduler runs the posted tasks and switches the processor to sleep mode when all tasks are done. The standard scheduler has a non-preemptive FIFO scheduling policy, but it can be replaced with a custom one. In order to improve reliability, the scheduler has a reserved slot for each potentially postable task. Thus the task queue can never overflow and task posting fails if and only if the task has already been posted [31].
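A minimal sketch of this synchronous/asynchronous split is shown below; the Alarm interface is a standard TinyOS interface whose fired() event is asynchronous, while the component and variable names are illustrative:

   #include "Timer.h"   // for the T32khz precision tag

   module AsyncDemoP {
     uses interface Alarm<T32khz, uint16_t>;
   }
   implementation {
     uint16_t counter;                 // state shared by async and sync code

     task void report() {
       uint16_t snapshot;
       atomic snapshot = counter;      // consistent read of the shared state
       // ... synchronous processing of snapshot ...
     }

     async event void Alarm.fired() {  // runs in interrupt context
       atomic counter++;               // async code must protect shared state
       post report();                  // defer synchronous work to a task
     }
   }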

3.3.3 Hardware Abstraction Architecture

The hardware abstraction architecture (HAA) [10] is used in TinyOS to promote easy portability of applications over a dozen platforms. The hardware abstraction functionality is organized in three distinct layers (see Figure 3.3). Each layer has defined responsibilities and is dependent on interfaces provided by the lower layers; the lower layers are specific to each platform and serve for handling the underlying hardware. The hardware-independent top layer is then directly used by applications.

Figure 3.3: Hardware Abstraction Architecture [10]

Hardware Presentation Layer: The components in this layer are positioned directly over the hardware interface. They access the hardware by memory-mapped or port-mapped I/O, and conversely the hardware can request servicing by signaling an interrupt. The components in this layer hide the most hardware-dependent code and provide a more usable interface (simple function calls) for the rest of the system. On the other hand, components in this layer are stateless and do not provide any substantial abstraction over the hardware beyond automating frequently used command sequences [10].

Hardware Abstraction Layer: The components in the abstraction layer use the interfaces provided by the HPL components. They try to build useful abstractions hiding the complexity associated with the use of hardware resources. In contrast to the HPL, they are allowed to maintain state, so they can be used for performing arbitration and resource control. Abstractions at the HAL level are tailored to the concrete device class and platform. To maintain the effective use of resources, HAL interfaces are rich and expose specific hardware features [10].

Hardware Independent Layer: The highest tier of the HAA takes the platform specific abstractions provided by the HAL and converts them to hardware independent interfaces. These interfaces can be used by cross-platform applications [10].


Chapter 4

Intrusion Detection Systems


It has become clear that we cannot achieve a satisfactory level of security only by using cryptographic techniques, as these techniques fall prey to insider attacks in which the attacker has compromised a number of nodes and retrieved their cryptographic material [15]. In order to counter this threat, additional techniques such as intrusion detection systems have to be deployed. An intrusion detection system (IDS) is defined as a system that tries to detect and alert on attempted intrusions into a system or network [11]. An IDS is composed of IDS agents that run on some or all nodes. The IDS solutions developed for ad-hoc networks cannot be applied directly to sensor networks because WSNs have several specifics. In particular, the computing and energy resources are more constrained in a WSN, thus it is not possible to have an active full-powered agent inside every node. Also, the IDS must be simple and highly specialized to the specific WSN protocols and threats [26].

4.1 IDS Classification

There are three basic approaches to intrusion detection according to the detection technique used [12].

Misuse detection: This technique compares the observed behavior with known attack patterns (signatures). Action patterns that may pose a security threat have to be defined and stored in the system. The advantage of this technique is that it can accurately and efficiently detect instances of known attacks, but it lacks the ability to detect unknown types of attack.

Anomaly detection: The detection is based on monitoring changes in behavior rather than searching for known attack signatures. Before an anomaly detection based system is deployed, it usually must be taught to recognize normal system activity (usually by automated training). The system then watches for activities that differ from the learned behavior by a statistically significant amount. The main disadvantage of this type of system is a high false positive rate. The system also assumes that there are no intruders during the learning phase.

Specification based: The third technique is similar to anomaly detection in that it is also based on deviations from normal behavior, but the normal behavior is specified manually as a set of system constraints. Thus there is no learning phase, which is particularly difficult in WSNs.

Furthermore, we can divide WSN intrusion detection systems into three categories according to the IDS topology.

Stand-alone: Each node operates an independent IDS agent that is responsible for detecting attacks. Individual agents do not cooperate with each other and do not share any information. The advantage of this approach is its simplicity and the fact that an attacker is not able to forge any misinformation, because nodes do not rely on others.

Cooperative: Each node still runs its own IDS, but the individual IDSs cooperate with each other. Generally, the main challenge of this approach is the question of how to deal with compromised neighbors. We do not want a malicious node to be able to confuse others with misinformation. In [16], the authors presented a voting algorithm which is based on the presumption that an attacker is not able to outnumber the legitimate nodes. The solution requires establishing key management in order to achieve vote authentication.

Hierarchical: The network is divided into clusters with cluster head nodes. The cluster head is responsible for communication with other cluster heads or base stations. The cluster heads can be more powerful or less energy-constrained devices than the other nodes. The IDS then runs only on the cluster head nodes.

4.2 Watchdog Monitoring

The watchdog monitoring technique [21, 26] is a way to detect misbehaving nodes. It is based on the broadcast nature of communication in sensor networks, where each node hears the communication of surrounding nodes even if it is not the intended recipient. This technique depends on a sufficient density of deployed nodes. In the case of wireless sensor networks the density of deployed sensors is usually high enough because of the requirement for graceful degradation: the network must continue to work even if a small portion of the nodes fails.

Figure 4.1: Possible watchdogs

Suppose that node A wants to send a message to node C, which is outside of its radio range. So it sends this message to the intermediate node B and node B forwards it to node C (see Figure 4.1). Let SA be the set of all nodes that hear the message from A to B and SB be the set of nodes that hear the message from B to C. We can define the set of possible watchdogs of the node B as the intersection of SA and SB. This means that any node that lies in the intersection region is able to hear both messages and is able to decide whether node B forwards messages from node A.

4.2.1 Regular Collisions

When two nodes that are close to each other transmit packets at the same time, they will cause a collision. As a result, the signals from both nodes interfere and neither of the packets is successfully received. In order to avoid these collisions, wireless sensor networks often follow the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol for Media Access Control (MAC). Following this protocol, a node listens to the traffic on the medium before trying to send its own packet. Thus if two nodes that are close enough want to send a packet at the same time, the CSMA algorithm prevents the one that wants to start sending a little bit later from sending it. A collision occurs only when both nodes begin transmitting at exactly the same time.

4.2.2 Ambiguous Collisions

Let the node D be the watchdog for the node B (see Figure 4.2). An ambiguous collision [21] occurs at the node D when the node B transmits a packet at the same time as the node E. The nodes B and E are far enough apart to be able to send packets at the same time. This means that the node B considers the channel to be clear (the radio signal is below its clear channel threshold) despite the node E transmitting a packet, and vice versa, the node E considers the channel to be free although the node B is transmitting. As a result, the node D (that is between them) does not hear either of these packets, because the signals interfere.


Figure 4.2: Ambiguous collision

In particular, the node D does not hear the packet that has been forwarded and it may suspect the node B of performing a selective forwarding attack. Therefore the node D cannot decide whether the node B is misbehaving or not on the basis of a single observation. We can distinguish two types of ambiguous collisions.

First, there are collisions where both signals (in Figure 4.2 from the node B and the node E) are similarly strong; when they occur, the watchdog is unable to receive either packet due to radio interference. We can hardly fight against this type of collision.

Second, there are collisions where the signal from the first node (B) is much stronger than the signal from the second one (E). If the first node (B) begins transmitting first, the watchdog successfully captures the beginning bytes of the packet frame and starts receiving. It successfully receives the whole packet although the second node (E) interferes with it during the transmission, because the signal-to-noise ratio (we consider the weaker signal to be part of the noise) is high enough during the whole transmission. On the other hand, when the second node (E) starts transmitting first, the watchdog begins receiving its packet. Then the stronger signal from the first node (B) interferes with the second one (E) and causes the signal-to-noise ratio to become too low for successful reception. As a result, the watchdog (D) does not receive the packet from the second node (E) due to radio interference, and neither does it receive the packet from the first node (B), because it did not capture the beginning bytes of the packet frame.

The number of ambiguous collisions in a network grows exponentially with the number of packets transmitted in the network. That makes the watchdog technique unreliable in networks with more traffic. We can never completely avoid ambiguous collisions, but we can try to reduce the occurrence of ambiguous collisions of the second type. We have proposed a novel technique that is based on suppressing weak signals.

Suppressing Weak Signals

Let the node D be the watchdog for node B (see Figure 4.3). The node B forwards traffic from the node A and from the node G. The node E sends messages very often (this can easily occur when the node F forwards messages from a large part of the network) to node F. Although the node E is very far from node D, it often causes ambiguous collisions with node B and prevents the node D from eavesdropping on the node B.

Figure 4.3: Suppressing weak signals

Thus, considering that the watchdog monitors a node with a strong signal, we could improve the probability of successful eavesdropping on the target node by suppressing weak signals coming from distant nodes. In Section 5.3.1 we show that we can achieve the suppression of weak signals on the CC2420 chip by lowering the receiver sensitivity on the watchdog node (D). Then the watchdog node (D) does not detect the beginning of the packet frame from distant nodes (E), and thus they do not block packets from closer nodes (B). To detect a selective forwarding attack on node B, the watchdog node D needs to hear both the incoming and the outgoing packets of this node. The negative effect of this technique is that by lowering the receiver sensitivity we may also inadvertently suppress packets incoming to the node B (for example from the node G, see Figure 4.3).


Early Packet Dropping

We could also limit the occurrence of ambiguous collisions of the second type by early dropping of packets from the nodes we do not want to monitor. On the CC2420 we could achieve early packet dropping by enabling hardware address recognition. When hardware address recognition is enabled, the node rejects a packet that does not match its address immediately after reading the address header. Thus the node does not have to receive the whole packet. Unfortunately, the CC2420 cannot be configured to have more than one hardware address. Therefore we would have to change the watchdog's hardware address to be the same as the address of the monitored node.

4.2.3 Partial Dropping

A malicious node can also circumvent the watchdog by dropping packets at a lower rate than the watchdog's configured minimum misbehaving threshold. We cannot set the minimum misbehaving threshold too low, because due to ambiguous collisions the watchdog never hears all messages and false alarms would be raised too often.
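For illustration only, such a threshold test can be sketched as a simple ratio check; the constant and function below are hypothetical and are not the detection code of this thesis, they merely show why the threshold cannot be set arbitrarily low:

   #include <stdbool.h>
   #include <stdint.h>

   /* Hypothetical misbehaving threshold: tolerate some unheard forwards,
      since ambiguous collisions hide part of the traffic from the watchdog. */
   #define MISBEHAVE_THRESHOLD 0.25

   bool is_misbehaving(uint32_t packets_in, uint32_t packets_forwarded) {
     uint32_t dropped;
     if (packets_in == 0) {
       return false;               /* nothing observed yet */
     }
     dropped = (packets_in > packets_forwarded) ? packets_in - packets_forwarded : 0;
     return ((double)dropped / packets_in) > MISBEHAVE_THRESHOLD;
   }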

4.2.4 Receiver Collisions

In the receiver collision problem, node B forwards a packet, but the watchdog node D1 is not able to verify whether node C successfully received it or not (see Figure 4.4). When B is malicious, it can delay forwarding the packet until C is transmitting, and purposefully cause a collision [21]. We can try to detect this type of attack by monitoring the forwarding delays introduced by the malicious node. The malicious node may know when C starts transmitting, but usually it has to wait until that happens. Unfortunately, the forwarding delays may also be caused by other factors, such as a local CSMA collision. Generally, with this attack the attacker can only confuse a subset of all possible watchdogs; in particular, he cannot confuse watchdogs that are close to node C (such as the node D2).



Figure 4.4: Receiver collisions

4.2.5 Limiting The Transmission Power

A malicious node can control its transmission power in order to confuse the watchdog (see Figure 4.5). The malicious node B can lower the transmission power such that the signal is strong enough to be heard by the watchdog D1, but too weak to be successfully received by the target node (C).

The countermeasures against this type of attack are relatively simple. First, we can monitor the signal strength and compare it with the average signal strength of messages from this node. Moreover, the attacker never knows who the watchdog is, so he does not know how much he must lower the transmission power (he does not know whether the watchdog is the node D1 or the node D2).


Figure 4.5: Limiting transmission power


Chapter 5

Radio Stack
The essential prerequisite for the implementation of an efficient intrusion detection system is to fully understand how network communication works. In this chapter, we describe the radio stack in detail from the highest to the lowest layers. In particular, we focus on the time delays introduced by the individual layers.

Figure 5.1: Radio Stack

We also compare the implementations of the radio stack on the TMote Sky sensor nodes [23] and in the TOSSIM simulator [32]; we point out the differences between them and show that the current simulator implementation is not sufficiently precise for the purpose of an accurate simulation of an IDS that uses the watchdog monitoring technique. We can split the radio stack into three parts (see Figure 5.1): the first one is the network protocol (in this work we consider the Collection Tree Protocol (CTP) [8]), the second part is the Active Message Layer, and the third and lowest part is the platform dependent implementation of the radio stack. Generally, applications can also be wired directly to the Active Message Layer, which only allows them to broadcast messages to their neighbors. The network protocol usually incorporates more advanced network logic such as multi-hop delivery or routing support. In this work we assume that the applications use the Collection Tree Protocol [8].

5.1 Collection Tree Protocol

The Collection Tree Protocol (CTP) is now probably the most common network protocol in TinyOS. It is used in environments where one needs to collect data from a large number of nodes at one or more base stations. CTP builds one or more collection trees, which are rooted at the base stations. Each node picks a neighbor node to be its parent, and in this way the nodes form a tree. Whenever a node wants to send a message to the base station, it simply sends it to its parent. The CTP packet is shown in Figure 5.2. Together, the origin, seqno, collect_id, and THL fields denote a unique packet instance within the network [8].

P | C | THL | ETX | origin | seqno | collect_id | data

Figure 5.2: CTP packet. P: routing pull bit, C: congestion bit, THL: time has lived (also referred to as a hop counter), ETX: ETX of the last transmitter (see Routing Engine), origin: originating address of the packet, seqno: origin sequence number, collect_id: specific application id.

The inner nodes serve as forwarders. When they receive a message from their descendants, they just increment the message hop counter and forward the message towards the root of the tree. CTP is best suited for relatively low traffic rates. For bandwidth-limited systems or high-rate applications it is better to deploy a more sophisticated protocol, which can perform an aggregation function and pack multiple small frames into a single packet [8]. Moreover, CTP provides a sufficiently elaborate congestion control: when a node becomes congested, it just sets the congestion bit in outgoing packets.

The major CTP components are:

Collection Sender: The collection sender is a virtualized sender abstraction. An application that intends to send packets using CTP has to explicitly wire to this component. The wiring is parametrized by collect_id, which can then be used to identify packets originated by this application.

Link Estimator: CTP uses the link expected transmissions (ETX) value to assess the link quality. The link ETX is computed by the LinkEstimator component and should represent the expected number of transmissions needed to successfully deliver a packet over the specific link. The current implementation of LinkEstimator derives the link ETX from the number of lost beacon packets.

Routing Engine: The CTP routing engine uses the ETX value as a routing metric. The node's ETX represents the expected number of transmissions that would have to be made to deliver a packet to the root if we used this node as our parent. The root's ETX is defined as 0. The routing engine computes the node's ETX for each neighbor as the sum of the neighbor's ETX and the assessment of the link quality to this neighbor (link ETX). The tree is proactively maintained by periodic beacons sent by the nodes. A beacon contains the node's parent, current hop count and its ETX value. Beacons are transmitted periodically with a random jitter, every 4.096 to 12.288 seconds, as messages of AM type 0x70 [8]. Each node keeps the best candidates for its parent (the nodes with the best ETX) in its routing neighbor table and updates this table every time a beacon is received. Before the node sends its beacon packet, it reselects its parent. CTP avoids changing the network topology unless it is necessary, therefore a new parent is picked only when the new parent has a much better ETX than the current one (it has to have an ETX better by PARENT_SWITCH_THRESHOLD, by default 15) or when the current parent is congested and the new parent's ETX is at least as good as the current one. The last condition is worth emphasizing: CTP never allows choosing a worse new parent. On the one hand this prevents the node from selecting a new parent that is a descendant of the current congested one (the current parent would have to forward this packet after the next hop), on the other hand this leads to

Forwarding Engine: The forwarding engine is responsible for forwarding traffic as well as for the traffic generated on the node. It maintains the send queue of pending packets and schedules their transmission. The forwarding engine is also responsible for detecting routing loops. Packets in the send queue are sent out in FIFO order. The forwarding engine does not distinguish between packets: the packets generated on the node are treated identically to packets being forwarded. When the forwarding engine wants to send a packet, it takes the first packet from the send queue and sends it to the parent. It does not remove the packet from the queue. If the link layer supports packet acknowledgments, the forwarding engine enables them, retransmits a packet that has not been acked at most MAX_RETRIES times (the default is 30 times), and drops the packet only after all these attempts fail. This feature significantly improves delivery reliability. The packet is dequeued from the send queue only after the sending has been successful or when all transmission retries have been exhausted. The forwarding engine controls the transmission scheduling. When the forwarding engine receives a packet to forward, and the send queue is empty and no previous transmit back-off is scheduled, the packet is sent out immediately. When the packet is successfully delivered (has been acked), the forwarding engine starts the back-off timer and does not allow the next packet to be sent for 16 to 31 milliseconds. If the previous transmission was not successful and the packet was not acked, the forwarding engine waits for 8 to 15 milliseconds and then tries to retransmit.
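The parent-switching rule described for the Routing Engine above can be summarized by the following C-style sketch (the structure and function names are illustrative and this is not the actual CTP implementation; PARENT_SWITCH_THRESHOLD uses the default value mentioned above):

   #include <stdbool.h>
   #include <stdint.h>

   #define PARENT_SWITCH_THRESHOLD 15   /* default CTP threshold */

   typedef struct {
     uint16_t etx;        /* ETX advertised by the neighbour in its beacons  */
     uint16_t link_etx;   /* link ETX to that neighbour (from LinkEstimator) */
     bool     congested;  /* congestion bit seen in its packets              */
   } neighbor_t;

   /* ETX of the path through a given neighbour. */
   static uint16_t path_etx(const neighbor_t *n) {
     return n->etx + n->link_etx;
   }

   /* Should we switch from the current parent to the candidate? */
   static bool should_switch_parent(const neighbor_t *current, const neighbor_t *candidate) {
     /* The candidate is better by more than the threshold ... */
     if (path_etx(candidate) + PARENT_SWITCH_THRESHOLD < path_etx(current)) {
       return true;
     }
     /* ... or the current parent is congested and the candidate is at least as good. */
     if (current->congested && path_etx(candidate) <= path_etx(current)) {
       return true;
     }
     return false;
   }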

5.2 Active Message Layer

The active message layer is the basic network HIL. Each platform must provide the ActiveMessageC component. Active message communication is virtualized through four components: AMReceiverC, AMSnooperC, AMSnoopingReceiverC, and AMSenderC. These components are generic and each of them takes an active message ID (AM type) as a parameter. The parameter distinguishes different applications (usually each application has a unique AM type) and allows applications to communicate on separate channels.

Figure 5.3: Application Wiring Example

For example, let us have two applications (see Figure 5.3). The first application wires to AMReceiverC and AMSenderC and parametrizes these wirings with the value 0x01. Thus messages from this application will be sent out with AM type 0x01, and messages coming from the network with AM type 0x01 will be forwarded to this application. Similarly, messages from the second application will have AM type 0x02 and the Active Message Layer will forward incoming messages with AM type 0x02 to this application. There are three reception abstractions: 1) AMReceiverC is signaled when a received message is destined to this node, 2) AMSnooperC is signaled when a received message is destined to another node in the network, 3) AMSnoopingReceiverC is signaled every time a message is received. Applications wire only to the abstractions that they need; in our example Application 1 wires only to AMReceiverC because it does not need to snoop on messages destined to other nodes. AMSenderC is an abstraction for sending messages out. Each application that wires to this abstraction has a reserved slot in the sending queue implemented in AMQueueP. The AMQueueP component implements the sending queue and is responsible for the exact order in which outgoing messages are serviced. By default, it goes in a round-robin fashion through the messages from the AMSenderC clients and forwards them to the underlying layer.

AMPacket

Although it is not defined how the Active Message packet should exactly be formatted (each hardware dependent implementation internally uses a different format), each ActiveMessageC component has to provide the AMPacket interface. Using this interface we can access the Active Message packet headers. There are the following headers: 1) destination, which defines the destination node, 2) source, which states the source node of the message, 3) AMType, which states the message's Active Message type, 4) group, which refers to a logical network identifier (there may exist logical subnets in a WSN, where nodes communicate only with nodes in the same subnet) [18].
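For illustration, the following sketch (hypothetical component names, AM type 0x71 chosen arbitrarily) wires a module to AMSnoopingReceiverC and AMSenderC; this is the kind of wiring a watchdog-style monitor needs in order to overhear traffic addressed to other nodes:

   // MonitorAppC.nc -- configuration
   configuration MonitorAppC { }
   implementation {
     components MonitorP, ActiveMessageC;
     components new AMSnoopingReceiverC(0x71) as Snoop;  // signals every received packet of this AM type
     components new AMSenderC(0x71) as Send;

     MonitorP.Receive  -> Snoop;
     MonitorP.AMSend   -> Send;
     MonitorP.AMPacket -> ActiveMessageC;                // access to source/destination headers
   }

   // MonitorP.nc -- module
   module MonitorP {
     uses interface Receive;
     uses interface AMSend;
     uses interface AMPacket;
   }
   implementation {
     event message_t* Receive.receive(message_t* msg, void* payload, uint8_t len) {
       am_addr_t src = call AMPacket.source(msg);
       am_addr_t dst = call AMPacket.destination(msg);
       // ... record the (src, dst) pair, e.g. for watchdog statistics ...
       return msg;
     }

     event void AMSend.sendDone(message_t* msg, error_t error) {
       // nothing to do in this sketch
     }
   }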

5.3 Hardware Dependent Radio Stack on TMote Sky

The TMote Sky sensor node is based on the TMote platform. In the TMote platform, the Chipcon CC2420 chip is used for radio communication [23]. The CC2420 is a 2.4 GHz IEEE 802.15.4 compliant RF transceiver. It was designed for low-power and low-voltage wireless applications [1]. The CC2420 radio stack is organized in several layers:

Active Message Layer: On the TMote platform, the Active Message Layer is provided by the CC2420ActiveMessageP component. When sending a message, some details are filled in the packet header within this layer (packet type, destination address, packet source and destination pan group address) and the packet is passed down the stack. When receiving a message, this layer decides whether the message is destined for the node and signals it to the corresponding ReceiverC or SnooperC component.

Unique Send/Receive Layer: When sending a message, this layer supplies each outgoing message with a unique data sequence number (dsn). The layer keeps an internal counter that is incremented by one for each packet and writes its value into the dsn byte in the packet header. The data sequence numbering is utilized during packet receiving. The layer keeps the history of the data sequence numbers of the last RECEIVE_HISTORY_SIZE received messages. If a newly received message matches a message in the recent history, it is dropped. This layer ensures that higher layers never hear a single message twice. In particular, in the context of watchdog monitoring, the watchdog never hears a packet that is being retransmitted because it was not delivered on the first try.

Packet Link Layer: This optional layer is responsible for automatic message retransmission and ensures reliable packet delivery. Automatic retransmission is activated on a per-message basis, therefore we have to set a flag for each outgoing message that should use this layer. It is most reliable when software acknowledgments are enabled, because it will fail if it receives false acknowledgments. More details can be found in [22].

Low Power Listening Layer: This layer provides an asynchronous low-power listening implementation that defines the radio duty cycles. The radio is not listening all the time, but is only turned on for defined intervals. The layer is disabled by default; it can be enabled by defining LOW_POWER_LISTENING.

Low Pan Layer: The next layer provides compatibility with other networks using the 6LoWPAN protocol [20]. 6LoWPAN is an acronym for IPv6 over Low power Wireless Personal Area Networks. TinyOS implements this protocol in the 6lowpan library that can be found in tos/lib/net/6lowpan. The 6LoWPAN protocol is also supported in other WSNs which do not run TinyOS. Unfortunately, original TinyOS is not compatible with other 6LoWPAN networks. It uses network frames (called T-Frames) that do not include the network byte identifier. Therefore I-Frames were developed to provide interoperability as defined by the 6LoWPAN specifications. The 6LoWPAN layer is optional and it is only used when interoperability with other 6LoWPAN networks is required. It adds the network identifier byte to packets on the way out. While receiving, the low pan layer detects whether the received packet belongs to TinyOS communication. If not, the packet is signaled to the 6LowpanSnoop interface instead of the default ones. This layer can be enabled by compiling the application with the CC2420_IFRAME_TYPE macro.

CSMA Layer: The CSMA layer should take control of access to the medium, but the current CSMA implementation is integrated with code in CC2420TransmitP. In the current version, the CSMA layer only computes random radio back-offs.

TransmitP/ReceiveP Layer: These layers are responsible for interacting directly with the radio hardware. We will describe these layers in detail in the following section.

5.3.1 Receiving Message

The received RF signal is amplified by the low-noise amplifier (LNA) and down-converted in quadrature (I and Q) to the intermediate frequency (IF). At IF (2 MHz), the complex I/Q signal is filtered and amplified, and then digitized by the ADC. Automatic gain control, final channel filtering, de-spreading, symbol correlation and byte synchronization are performed digitally [1, page 16]. For the purposes of this work, the received-signal amplifiers are particularly interesting. The signal is amplified twice: first by the low-noise amplifier (LNA) and second by the variable gain amplifier (VGA). By default, the LNA and VGA are automatically controlled by the Automatic Gain Control (AGC) loop.

The LNA is used for the initial amplification of the received signal. We found out that the AGC keeps the LNA in high-gain mode most of the time (see Section 7.3). The second amplifier (VGA) amplifies the signal just before it is processed by the Analog/Digital Converter. The AGC adjusts the VGA in order to keep the received signal strength inside the dynamic range of the Analog/Digital Converter. In Section 7.3 we show that we can disable the automatic mode of the AGC and set the amplifiers manually. Using this technique, we can suppress weak signals coming from distant nodes.

Figure 5.4: Simplied diagram of demodulator [1] After the signal is digitized by the ADC, it is processed by demodulator (see Figure 5.4). The most important for our work is that now the RSSI and LQI values are derived. RSSI Received signal strength indicator is an estimate of the average received signal strength. The CC2420 has a built-in received signal strength indicator whose value is always averaged over 8 period symbols. Usually, the received signal strength is measured when the transmission starts and then its value is written into the packet header. LQI The link quality indication measurement is dened in [28] as the characterization of the strength and/or link quality of received signal. The LQI value is required to be limited to range 0 through 255, with at least 8 unique values. CC2420 provides an average correlation value for each incoming packet, based on the 8 rst symbols at the beginning of the packet. The correlation value has 7 bits and can be looked upon as a measurement of the chip error rate. Correlation value of about 110 indicates the maximum quality frame while the value of about 50 is typically the lowest quality frame detectable by CC2420 [1]. The current TinyOS implementation doesnt perform any conversion of this value and takes it directly as LQI. The CC2420 has several hardware pins that TinyOS utilizes to receive a packet. First it is SFD pin (Start Frame Delimiter pin). This pin is used both in receiving and transmitting mode. Simply put, the SFD pin is high when radio is busy (receiving or transmitting a message). Second, it is RXFIFO register that serves as receive buffer. Furthermore, there are FIFO and FIFOP pins1 . These pins are used to assist the microcontroller in supervising RXFIFO [1]. The pin activity during receiving is shown in Figure 5.5. The packet reception runs as follows: 1. When CC2420 detects the start of the frame delimiter, SFD pin goes up. This causes an interrupt captured by CC2420TrasmitP but TinyOS uses this information only for packet time-stamping purposes.

1. The function of these pins is very similar. With the default settings, the only difference is that the FIFOP pin goes up a little later when hardware address recognition is enabled. TinyOS only utilizes the FIFOP pin.



Figure 5.5: Pin activity during receiving [1]

2. If address recognition is disabled, the SFD pin goes down again as soon as the last byte of the MPDU is received. If address recognition is enabled and the frame fails address recognition, the SFD pin goes down immediately. Again, this is used only for time-stamping.

3. In TinyOS, receiving begins when the FIFOP pin goes up. This happens when the number of unread bytes in the RXFIFO exceeds the threshold FIFOP_THR. In TinyOS the threshold is implicitly set to zero, so receiving begins immediately. Only if address recognition is enabled does the FIFOP pin stay down until the incoming frame passes address recognition.

4. Once the FIFOP pin goes up, an interrupt is fired in CC2420ReceiveP. TinyOS first attempts to acquire the SPI (Serial Peripheral Interface)2. In case of success, it starts reading by calling RXFIFO.beginRead. Because the first byte of each message is the packet length, we know how many bytes have to be read.

5. When reading is finished, the event RXFIFO.readDone is fired. After the packet CRC is checked, the metadata details, especially RSSI and LQI, are filled in and the packet is passed up the stack. This metadata can later be accessed through the CC2420Packet interface.
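For illustration, the following minimal C sketch shows how the raw metadata values could be interpreted once a packet has been received. The raw RSSI register value is converted to dBm by adding the offset of approximately -45 dBm given in the CC2420 datasheet [1]; the function and constant names are ours and do not appear in TinyOS.

#include <stdint.h>
#include <stdbool.h>

#define CC2420_RSSI_OFFSET_DBM (-45)   /* approximate offset from the CC2420 datasheet */

/* Convert the raw 8-bit RSSI value stored in the packet metadata to dBm. */
static int16_t cc2420_rssi_to_dbm(int8_t raw_rssi)
{
    return (int16_t)raw_rssi + CC2420_RSSI_OFFSET_DBM;
}

/* Rough interpretation of the 7-bit correlation value used as LQI:
   around 110 means a maximum-quality frame, around 50 the lowest detectable one. */
static bool lqi_looks_good(uint8_t lqi)
{
    return lqi >= 100;   /* illustrative cut-off, not a TinyOS constant */
}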

The most important thing to realize about receiving messages on the CC2420 is that, to be able to snoop messages destined to other nodes, we have to disable hardware address recognition. When hardware address recognition is disabled, the SFD and FIFOP pins stay up until the whole packet is received. Therefore, once the node begins receiving a packet, it receives it until the end, no matter how weak the received signal is. Hence, when the node has already started receiving a packet from a distant node and some close node begins transmitting, the node not only does not receive the second packet, but due to the interference it also does not successfully receive the first one. In other words, an ambiguous collision occurs.
2. The SPI is not only used for reading out the RXFIFO, but also, for example, for register access or command strobes.


5.3.2 Transmitting a Message

In the current CC2420 radio stack implementation, the CC2420TransmitP component performs the CSMA algorithm and the CSMA layer only provides random radio back-offs. The CSMA algorithm does not allow the radio to begin transmitting while the channel is not free. The CC2420 chip incorporates a special pin for this purpose, the CCA (Clear Channel Assessment) pin. The CCA pin is high only if the node is not receiving valid data and the strength of the received signal is lower than the programmed carrier sense threshold. The carrier sense threshold can be programmed through the RSSI.CCA_THR register (default value -77 dB) and MDMCTRL0.CCA_HYST (default value 2 dB). The resulting carrier sense threshold is defined as:

CCA_THR - CCA_HYST

Thus the channel is clear when the received signal strength is lower than -79 dB and the node is not receiving any valid data. Transmitting a message runs as follows:

1. First, the CC2420TransmitP sets the radio transmission power and loads the outgoing message into the outbound TXFIFO buffer. When the TXFIFO buffer is loaded, TXFIFO.writeDone() is signaled.

2. If CCA is not required, the CC2420TransmitP begins transmitting immediately by calling STXON.strobe().

3. Otherwise the CC2420CsmaP is asked to compute a random delay, the initial back-off (see the sketch at the end of this section):

backoff = (random mod (31 · CC2420_BACKOFF_PERIOD)) + CC2420_MIN_BACKOFF

For the default values (CC2420_BACKOFF_PERIOD = 10, CC2420_MIN_BACKOFF = 10) this back-off lasts between 0.3125 and 10 milliseconds.

4. After the initial back-off, CCA is sampled. Even if the channel seems to be clear (CCA is high), sending is delayed by an additional 0.21875 milliseconds, just in case the first sample was taken during the ack turn-around window. If the channel was not clear, the CC2420CsmaP is asked to compute the congestion back-off:

backoff = (random mod (7 · CC2420_BACKOFF_PERIOD)) + CC2420_MIN_BACKOFF

For the default values it takes from 0.3125 ms to 2.5 ms.

5. Finally, the attempt to send is performed. CC2420TransmitP calls STXONCCA.strobe(), which samples CCA one more time and begins transmitting only if the channel is still clear. If the channel is not clear, a new congestion back-off is scheduled.

6. When the packet is being sent, the SFD pin goes up as soon as the SFD field is completely transmitted (see Figure 5.6). This fires an interrupt that is used for time-stamping packets. If the SFD pin does not go up within 10 milliseconds, the CC2420TransmitP assumes that something is wrong, flushes the transmit buffer and schedules a retransmission.


Figure 5.6: Transmitting a message

7. When the complete MPDU is transmitted, the SFD pin goes down again. If a message acknowledgment is not required, sending is finished. Otherwise, we wait 8 ms for the ack. When the ack arrives, the packet metadata is updated (ack bit, the ack's RSSI and LQI). Even if the ack does not arrive, success is signaled; the only way to detect whether the packet was acked is through the PacketAcknowledgements interface, which is provided by ActiveMessageC.

Finally, it is important to point out that in the default mode3 the node will never start transmitting while it is receiving valid data (even if it is a packet from a distant node with a weak signal). It is also worth mentioning that a single clear-channel test is not sufficient; the CC2420TransmitP always tests the channel twice.
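The back-off computation referenced in step 3 above can be illustrated by the following C sketch. It assumes that the back-off constants are expressed in 32 kHz jiffies, which matches the 0.3125 to 10 ms range quoted above; the helper names are ours and are not part of TinyOS.

#include <stdint.h>

#define CC2420_BACKOFF_PERIOD 10   /* default, in 32 kHz jiffies (assumption) */
#define CC2420_MIN_BACKOFF    10

/* Initial back-off: random mod (31 * period) + minimum. */
static uint16_t initial_backoff(uint16_t random)
{
    return (random % (31 * CC2420_BACKOFF_PERIOD)) + CC2420_MIN_BACKOFF;
}

/* Congestion back-off: random mod (7 * period) + minimum. */
static uint16_t congestion_backoff(uint16_t random)
{
    return (random % (7 * CC2420_BACKOFF_PERIOD)) + CC2420_MIN_BACKOFF;
}

/* Convert jiffies to milliseconds: 10 jiffies -> 0.3125 ms, 320 jiffies -> 10 ms. */
static double jiffies_to_ms(uint16_t jiffies)
{
    return jiffies * 1000.0 / 32000.0;
}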

5.4 TOSSIM

TOSSIM is a discrete event simulator. It keeps events sorted by execution time in a queue and executes them one by one. There are several radio models that can be used for network behavior simulation; some of them are included in the TinyOS distribution (SimpleRadioModel, BinaryInterferenceModelC, CpmModelC), others are described in [17]. We have used the default TOSSIM radio model, CpmModelC. It is based on the Closest Pattern Matching (CPM) algorithm and on statistical extraction from empirical noise data [17]. According to [17], the model should provide a more precise simulation than the others by exploiting time-correlated noise characteristics. The TOSSIM radio stack consists of three layers:

Active Message Layer: This is the top level of the TOSSIM radio stack. It provides Active Message communication for the upper layers and also wires to the underlying network model. The responsibility of this layer is to fill in the details of the headers of outgoing messages (e.g. source address, destination address) and to signal incoming messages to the correct interface.

Packet Layer Model (TossimPacketModelC): The packet-level radio model in particular provides the CSMA functionality. When a message is about to be sent, this layer performs the basic CSMA algorithm and passes the message to the lower layer only when the channel is clear.
3. By setting CCA_MODE differently we may change the clear channel assessment algorithm to be less restrictive, but this is the kind of behavior that we need at the watchdog node.


It also keeps track of whether the node is currently transmitting a message. When the gain radio model signals a received message, the TossimPacketModelC only checks whether the node is transmitting another message; if not, it signals the message to the upper layer, otherwise it drops it. Unfortunately, this check is not sufficient, as we show in 5.4.2.

Radio Gain Model (CpmModelC): The radio model basically performs two actions. First, it takes the message that is being sent out and schedules a receive event for each node in the network. Second, when transmitting is finished, it performs the scheduled receive events, decides whether the message should be received and possibly signals the receive event to the upper layer.

5.4.1 Sending message

When a message is about to be sent, it is dispatched at the ActiveMessageC. The ActiveMessageC only fills in the headers and forwards the message to the TossimPacketModelC, which implements a basic CSMA algorithm. First it computes the initial back-off as

backoff = ((random mod (SIM_CSMA_INIT_HIGH - SIM_CSMA_INIT_LOW)) + SIM_CSMA_INIT_LOW) / SYMBOLS_PER_SEC

For the default values (SIM_CSMA_INIT_HIGH = 620, SIM_CSMA_INIT_LOW = 20, SYMBOLS_PER_SEC = 65536) it takes between 0.30518 ms and 9.76577 ms4. After the back-off, the channel is sampled. First, the environment noise is computed as

environment_noise = 10 · log10( Σ_{i=0}^{n} 10^(rssi_i / 10) )

where n stands for the number of currently transmitting nodes and rssi_i is the received signal strength from node i. The channel is considered to be clear when environment_noise is lower than the clearThreshold variable, whose default value is -72 dB. SIM_CSMA_MIN_FREE_SAMPLES defines how many free samples are needed before the transmission is allowed to start (1 sample by default). If the required number of free channel samples is not met, a new back-off value is computed as

E = EXPONENT_BASE^iteration

backoff = ((random mod ((SIM_CSMA_HIGH - SIM_CSMA_LOW) · E)) + SIM_CSMA_LOW) / SYMBOLS_PER_SEC

The default value of SIM_CSMA_HIGH is 160 and the value of SIM_CSMA_LOW is 20. Compared to the calculation of the initial CSMA back-off, a new parameter E is introduced. This parameter depends on the iteration of the algorithm and on the defined EXPONENT_BASE. Because the default EXPONENT_BASE is 1, E is always equal to 1. Thus, by default, the back-off timer is set from 0.30518 ms to 2.4414 ms. If no more free samples are needed, the transmit indicator is set to true and the transmission is scheduled after an additional 0.1678 ms caused by RXTX_DELAY. Now the number of symbols to transmit is computed as
4. All times are internally converted to the number of simulator ticks.


symbols = 8 · (SENDING_LENGTH + HEADER_LENGTH) / BITS_PER_SYMBOL + PREAMBLE_LENGTH

When an ack is required, the ack time is added:

symbols = symbols + ACK_TIME

and at the end the duration is computed:

duration = symbols / SYMBOLS_PER_SEC
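For illustration, the clear-channel decision described above can be sketched in C as follows. The function mirrors the noise formula quoted earlier; the -72 dB default is taken from the text, everything else (function names, array handling) is ours.

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#define CLEAR_THRESHOLD_DB (-72.0)   /* TOSSIM default clearThreshold */

/* Sum the signals of all currently transmitting nodes in the linear domain
   and convert the result back to dB. */
static double environment_noise_db(const double *rssi_db, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        sum += pow(10.0, rssi_db[i] / 10.0);
    }
    return 10.0 * log10(sum);
}

static bool channel_clear(const double *rssi_db, size_t n)
{
    if (n == 0) {
        return true;   /* nobody is transmitting */
    }
    return environment_noise_db(rssi_db, n) < CLEAR_THRESHOLD_DB;
}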

Finally, the message is passed to the CPM model. First, CPM goes through all the motes in the network and for each of them determines whether the mote could successfully receive the message. A node may lose this message due to: 1) being off, 2) the signal-to-noise ratio being too low, 3) being in mid-reception (currently receiving another message). If the node is already receiving another message, the CPM model treats the new message as noise and tests whether the signal-to-noise ratio of the previous message is still high enough. Then the CPM model enqueues the message reception event into the event queue, no matter whether the mote is going to successfully receive this packet or not. The reception event is performed after the transmission duration expires. The CPM model checks one more time whether the packet could be delivered and then either definitely drops the message or signals it to the upper layer.

5.4.2 TOSSIM Shortcomings and Problems

Receives even when transmitting: When TOSSIM handles the reception event, it checks whether the node is not currently transmitting a packet. The problem is that TOSSIM performs this check only at the end of receiving, not at the beginning or during it. This is not sufficient and may cause a false reception of a message that was delivered only a little while after the node finished its own transmission. Although most of the message was received during the ongoing transmission, for TOSSIM the message is successfully received. This bug was reported and confirmed by the TOSSIM developers. We also developed a fix for this misbehavior.

Begins transmitting even when receiving valid data: In TOSSIM, the clear channel assessment algorithm only checks whether the received signal strength is under the CCA_THRESHOLD. By contrast, on the CC2420 the CCA algorithm also requires that the node not be receiving valid data (in the default mode). For example, when a TOSSIM node is receiving a packet with a weak signal (e.g. -85 dB), the channel is considered to be free (the RSSI is under CCA_THRESHOLD), whereas the CC2420 would consider this channel busy. To achieve a more accurate simulation, we had to reimplement the clearChannel command in the CPM model so that it takes into account whether the node is receiving or not.

Inaccurate CSMA algorithm: First, the CCA_THRESHOLD had to be changed from -72 dB to -79 dB to be the same as on the CC2420. Second, the CC2420 radio always performs two CCA tests before it starts transmitting; the first test is scheduled by the random back-off timer, the second always comes 0.21875 milliseconds after the first one. We modified the TossimPacketModelC to achieve the same behavior in TOSSIM.
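A minimal C sketch of the corrected clear-channel check is shown below. It combines the two conditions discussed above (signal strength below the CC2420 threshold and no ongoing reception); the structure and names are ours and only illustrate the modification, they are not the actual TOSSIM code.

#include <stdbool.h>

#define CCA_THRESHOLD_DB (-79.0)   /* aligned with the CC2420 default */

struct sim_node_state {
    bool   receiving;      /* the node is currently receiving valid data */
    double noise_db;       /* current environment noise at the node */
};

/* Channel is clear only if the node is not receiving and the noise is
   below the carrier sense threshold, mirroring the CC2420 behavior. */
static bool clear_channel(const struct sim_node_state *node)
{
    if (node->receiving) {
        return false;
    }
    return node->noise_db < CCA_THRESHOLD_DB;
}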


Chapter 6

Implementation
We have designed our intrusion detection system to be as small as possible, but also to be easily extensible with new functionality that would allow detecting more complex attacks. Our IDS is composed of autonomous agents; each node runs an IDS agent. Most of the time each agent sleeps and consumes only a minimum of power. The agents are independent; they only notify the others when they detect an intrusion. However, the other nodes do not trust this information and each node decides about an intrusion on its own. Each IDS agent can have one or more Detection Engines. The Detection Engines are replaceable components that are used to detect attacks; each Detection Engine is specialized to detect a specific attack using a specific technique. At any given time an IDS agent may run only one Detection Engine. Each Detection Engine is active only for a certain period of time, during which it monitors and analyzes traffic on the network. When the time expires, the Detection Engine is replaced by another engine or the IDS agent is put into sleep mode. The application of a Detection Engine is called a test. The purpose of separate replaceable Detection Engines is the possibility of sharing system resources between them (particularly the very size-limited RAM). More details are given in Section 6.5. We have implemented two basic Detection Engines: the first one detects selective forwarding attacks (see Section 2.5) and the second one detects selective forwarding attacks using an improved watchdog monitoring technique based on suppressing weak signals (see 4.2.2).

6.1 System Overview

Figure 6.1: Proposed IDS architecture

Our proposed IDS agent (see Figure 6.1) consists of five main components: 1) the Scheduler component starts and stops tests and keeps a queue of scheduled tests, 2) the Detection Manager serves as a bridge between the active Detection Engine and the other components, 3) the Statistics Manager processes tapped

communication, manages a neighbor table and passes communication that belongs to monitored nodes to the active Detection Engine, 4) the Response Module reacts to alerts raised by the Detection Engine or received over the network, and 5) the Communication Module facilitates communication with other IDS agents. The arrows in Figure 6.1 represent relations (wirings) between components. There are also wirings to external components: 1) the Scheduler uses the Boot interface, 2) the Statistics Manager uses the AMTap interface (see Section 6.2), 3) the Communication Module uses the Send and Receive interfaces through which it sends and receives messages. The following algorithm is a brief overview of the IDS agent functionality; the individual components are described in detail in the following sections.

1. When the node boots, the Scheduler is initialized via the Boot.booted event and starts the internal periodic timer (IDSTimer).

2. Every time the IDSTimer fires and no test is currently performed, the Scheduler tries to take the first test from the tests queue. There are two possibilities:

a) If there is a scheduled test in the tests queue, the first test is performed immediately. The Scheduler asks the Detection Manager to activate the Detection Engine that is required by this test. Then the Scheduler starts the chosen Detection Engine.

b) If the tests queue is empty, a new test is (spontaneously) scheduled with some given probability P.

3. The Detection Engine cooperates with the Statistics Manager. The Statistics Manager holds the basic information about the neighbors and processes all tapped communication.

4. When the Detection Engine finds out that some attack is being performed, it raises an alert in the Response Module.

5. The Response Module can react to this alert in several ways:

a) It may warn other nodes using the Communication Module.

b) It may schedule an additional test into the tests queue.

6. When the time for the test expires, the Detection Engine is deactivated. The algorithm continues at point 2.

6.2 Tapping communication

Our IDS agent analyzes all messages in the node's communication range. For this reason it needs to hear all network communication that is processed by the node; this includes both messages being received or snooped and messages being sent out. Most applications and network libraries in TinyOS do not access the radio hardware directly, but use virtualized radio abstractions: AMSenderC to send messages out, AMReceiverC to receive messages destined to this node and AMSnooperC for promiscuous listening to messages that are not destined to this node. They can also use AMSnoopingReceiverC to receive all packets, whether destined to this node or not. The radio abstractions for receiving (AMReceiverC, AMSnooperC, AMSnoopingReceiverC) are directly wired to the ActiveMessageC component, the highest layer of the hardware-dependent radio stack (see Section 5.2). The sending abstraction (AMSenderC) is not wired directly to the ActiveMessageC;

there is one more interlayer between them, the AMQueueC, which provides queueing of outgoing messages.

Figure 6.2: Forged components

To allow tapping of communication, we have inserted an interlayer between the virtualized service abstractions and the ActiveMessageC: the ForgedActiveMessageC. The virtualized radio abstractions used for receiving messages and the AMQueueC component were replaced by modified ones. The modified components are completely the same, except that they are wired to the ForgedActiveMessageC instead of the ActiveMessageC. During the compilation, these forged components overlay the default ones1. Thus deploying our intrusion detection system does not require any changes in the code of the top-level applications or in the CTP library. The ForgedActiveMessageC acts as a bridge: it forwards messages from the virtualized radio abstractions to the ActiveMessageC and vice versa. Moreover, it provides the AMTap interface through which the IDS agent taps the processed messages. The AMTap interface has three events: 1) receive, 2) snoop, 3) send. These events are implemented in the component that uses the AMTap interface, the Statistics Manager (see 6.4). When an application or the CTP sends a message, it is dispatched by the ForgedActiveMessageC. The ForgedActiveMessageC first signals the AMTap.send event to allow the Statistics Manager to process the message, and after that it forwards the message to the ActiveMessageC. Vice versa, when a message is being received, the ForgedActiveMessageC first signals the AMTap.receive or AMTap.snoop event and then forwards the message to the corresponding receiving abstraction (AMReceiverC, AMSnooperC or AMSnoopingReceiverC).
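The tap-and-forward behavior of the ForgedActiveMessageC can be modeled in plain C as follows. This is only a language-neutral sketch of the dispatch order (tap first, then forward); the real component is a nesC module, and the type and function names below are ours.

#include <stdint.h>

typedef struct { uint8_t payload[28]; } message_t;

/* Callbacks corresponding to the three AMTap events. */
typedef void (*tap_handler_t)(message_t *msg);

struct am_tap {
    tap_handler_t on_send;
    tap_handler_t on_receive;
    tap_handler_t on_snoop;
};

/* Outgoing path: signal the tap, then hand the message to the radio stack. */
static void forged_send(struct am_tap *tap, message_t *msg,
                        void (*forward_to_radio)(message_t *))
{
    if (tap->on_send) {
        tap->on_send(msg);            /* Statistics Manager sees the message first */
    }
    forward_to_radio(msg);            /* then it continues to ActiveMessageC */
}

/* Incoming path: signal receive or snoop, then deliver to the original abstraction. */
static void forged_deliver(struct am_tap *tap, message_t *msg, int destined_to_us,
                           void (*forward_up)(message_t *))
{
    if (destined_to_us) {
        if (tap->on_receive) tap->on_receive(msg);
    } else {
        if (tap->on_snoop) tap->on_snoop(msg);
    }
    forward_up(msg);
}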

6.3 Scheduler

The Scheduler2 component is responsible for scheduling tests and managing their lifecycle. The Scheduler has an internal periodic timer whose period is defined by the IDS_CLOCK parameter. This timer serves as the internal IDS agent clock.
1. It may seem that one could simply forge the ActiveMessageC itself, but we have to realize that the ActiveMessageC is already a platform-dependent component. 2. TinyOS has a system component SchedulerC that facilitates task scheduling. Therefore we had to use a different name for our IDS's internal scheduler, IDSSchedulerC. When we refer to the scheduler, we mean the IDS scheduler.


The module holds a queue of scheduled tests, the TestsQueue. Each test task is a structure of the following type:
typedef struct {
    uint8_t detection_engine;
    uint8_t param;
} test_task_t;

The first element of the structure defines which Detection Engine will be used for the test, and the second element is an optional parameter that is passed to the Detection Engine during its initialization. A test task can be scheduled in two ways: it can be scheduled by the Response Module in response to an alert, or it can be scheduled randomly (spontaneously). A random test task is spontaneously enqueued with probability P every time the main periodic timer fires. The probability P is given as

P = (START_THRESHOLD + n · START_THRESHOLD_INC) / 65536

where n is the number of cycles and START_THRESHOLD (default 2048) and START_THRESHOLD_INC (default 1024) are constants defined in IDS.h. Thus, for the default values, the probability that the IDS spontaneously enqueues a test task is 3.125% at the beginning and it is increased by 1.5625% after each unsuccessful attempt. The IDS Scheduler has three states: SLEEP, INITIALIZING and RUNNING. Most of the time the Scheduler spends in the SLEEP mode, when no test is running and the IDS agent consumes minimum energy.
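The spontaneous scheduling decision can be illustrated with the following C sketch. It follows the formula above and assumes that a 16-bit random number is compared against the growing threshold, which is a common way to implement such probabilities; the function name is ours.

#include <stdbool.h>
#include <stdint.h>

#define START_THRESHOLD      2048   /* defaults from IDS.h */
#define START_THRESHOLD_INC  1024

/* Probability of enqueueing a spontaneous test after n idle cycles:
   (START_THRESHOLD + n * START_THRESHOLD_INC) / 65536. */
static bool should_enqueue_spontaneous_test(uint16_t random16, uint32_t n)
{
    uint32_t threshold = START_THRESHOLD + n * START_THRESHOLD_INC;
    if (threshold > 65535) {
        threshold = 65535;          /* cap once the probability approaches 1 */
    }
    return random16 < threshold;
}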
Figure 6.3: IDS Scheduler

When the main periodic timer fires and the Scheduler is in the SLEEP mode, the Scheduler checks whether there are any tests in the TestsQueue. If not, it tries to enqueue a new test spontaneously and goes back to sleep. If there is at least one scheduled test in the TestsQueue, the Scheduler begins to perform the first test in the queue. It changes its internal state to INITIALIZING and then, using the DetectionManager interface, chooses the required engine. Then it starts the engine itself by calling DetectionEngine.start(). After the engine has started up, it signals DetectionEngine.startDone(run_time) and passes as a parameter how long it wants to run. The IDS Scheduler changes its internal state to RUNNING and sets the TestTimer to fire after the required time expires. When the TestTimer fires, the time for the test has elapsed and the Scheduler calls DetectionEngine.stop() in order to stop the engine. The engine does not have to stop immediately (for example, it may need to complete some tasks), but when it does, it signals DetectionEngine.stopDone(). The Scheduler then starts the third phase of the Detection Engine lifecycle, the evaluation: it calls DetectionEngine.evaluate()

and after the evaluation is finished, the Detection Engine signals back DetectionEngine.evaluateDone(). Now the Scheduler dequeues the finished test and switches its state back to SLEEP mode.
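The lifecycle described above can be summarised as a small state machine. The following C sketch captures the SLEEP/INITIALIZING/RUNNING transitions; it is a conceptual model only, and the enum and function names are ours, not the nesC implementation.

#include <stdbool.h>

enum ids_state { SLEEP, INITIALIZING, RUNNING };

struct ids_scheduler {
    enum ids_state state;
    bool queue_empty;
};

/* Called whenever the main periodic IDS timer fires. */
static void on_ids_timer(struct ids_scheduler *s)
{
    if (s->state != SLEEP) {
        return;                       /* a test is already in progress */
    }
    if (s->queue_empty) {
        /* possibly enqueue a spontaneous test with probability P, then sleep */
        return;
    }
    s->state = INITIALIZING;          /* select and start the required engine */
}

/* Signaled by the engine once it has started; the TestTimer is then armed. */
static void on_engine_start_done(struct ids_scheduler *s)
{
    s->state = RUNNING;
}

/* Called after stopDone() and evaluateDone(): the test is dequeued. */
static void on_evaluation_done(struct ids_scheduler *s)
{
    s->state = SLEEP;
}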

6.4 Statistics Manager

The Statistics Manager has three main responsibilities: first, it processes the communication tapped at the ForgedActiveMessageC (and possibly forwards packets to the Detection Engine), second, it manages the table of neighbors, and third, it decides whether a node is monitored or not.
typedef struct {
    uint16_t node_id;      // node identification number
    uint16_t hello_num;    // number of captured hello messages
    uint8_t  avg_rssi;     // average received signal strength of hello messages
    uint16_t tests;        // number of tests carried out
    uint16_t good_counter;
    uint16_t bad_counter;
    uint16_t congested;
} neighbor_t;

Figure 6.4: Neighbor Table Structure

The Statistics Manager manages a neighbor table that contains records of the type neighbor_t (see Figure 6.4). In this table the Statistics Manager stores the basic information about each neighbor. The node_id states the unique neighbor address, hello_num is the number of captured hello messages from this neighbor, avg_rssi is the average received signal strength and tests is the number of tests carried out on this neighbor. The three other variables (good_counter, bad_counter, congested) serve as temporary variables for the Detection Engines; they are reset at the beginning of each test. The maximum number of storable neighbors is defined in the configuration file as the constant MAX_NEIGHBORS. This constant should be larger than the expected number of surrounding nodes, because we want to keep all neighbors that are in radio range in the table. We set MAX_NEIGHBORS to 32 by default, but it has to be increased when deploying the IDS on very dense networks. When the CTP protocol is used and a new neighbor appears in the network, it notifies the others by sending a hello broadcast [8]. When the Statistics Manager processes a packet of this type, it updates the neighbor table: if the table does not contain a record of this node, a new entry is added; if we already have a record of this node, we increment the counter of received hello messages from this node (hello_num). Then we compute the new average RSSI (avg_rssi) value as:

avg_rssi = (avg_rssi_old · max(hello_num, 20) + rssi_message) / (max(hello_num, 20) + 1)

Thus the avg_rssi is averaged over at most 20 measurements. We can consider the averaged RSSI value a reliable indicator of the link quality to a neighbor [29]. From the avg_rssi we can estimate the probability of successful eavesdropping on the target node; in 7.2 we show that monitoring only closer nodes leads to a lower number of false positives. Besides handling the list of neighbors, the Statistics Manager is also responsible for deciding whether a neighbor is currently monitored. For various reasons, the Detection Engine may want to monitor only a small subset of all nodes. The number of monitored neighbors is set by the Detection Engine, which adjusts it according to its needs.
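The averaging rule quoted above can be written in C as follows. The sketch implements the update exactly as stated, with the weight term max(hello_num, 20); the function names and the integer rounding are our own choices.

#include <stdint.h>

/* Weight used by the averaging rule defined in the text above. */
static uint16_t rssi_weight(uint16_t hello_num)
{
    return hello_num > 20 ? hello_num : 20;
}

/* Update the stored average RSSI with a newly measured value. */
static uint8_t update_avg_rssi(uint8_t avg_rssi_old, uint8_t rssi_message,
                               uint16_t hello_num)
{
    uint32_t w = rssi_weight(hello_num);
    return (uint8_t)(((uint32_t)avg_rssi_old * w + rssi_message) / (w + 1));
}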

The Statistics Manager always keeps the monitored neighbors at the first positions of the neighbor table and keeps track of the number of currently monitored nodes. The Statistics Manager is thus able to decide whether a node is monitored by checking only the first monitoredNodes records in the neighbor table instead of browsing the whole (potentially large) table. This operation is time critical, because it has to be done every time a packet is processed, and the processed packet is delayed until it is released by returning from the AMTap event. When the processed packet is destined to a node that is monitored, the Statistics Manager passes the packet for further evaluation to the Detection Engine by signaling StatisticsMngr.destinationMonitored. Similarly, if the packet was sent by a monitored node, StatisticsMngr.sourceMonitored is signaled. The Statistics Manager also counts the number of messages that were destined to a node and how many messages the node sent out. The Detection Engines can get this information by calling the commands StatisticsManager.incoming and StatisticsManager.outgoing.

6.5 Detection Manager

The Detection Manager is a large subsystem composed of many components. It holds all Detection Engine implementations and also the components that are used by these Detection Engines (CtpStorage, MemoryManager). Moreover, it has a special bridge component called DetectionManagerBridgeP. The DetectionManagerBridgeP provides the external interfaces: the DetectionManager interface and the DetectionEngine interface. The Scheduler, which needs to wire to the Detection Engines, does not wire to them directly but through this component. The DetectionManager interface provides two commands, selectEngine and enginesCount. This interface is used by the Scheduler, which uses the selectEngine command to select the active Detection Engine. The DetectionEngine interface (in the DetectionManagerBridgeP component) serves as a bridge to the active Detection Engine. For example, when the Scheduler calls the start command, this component forwards the call to the Detection Engine that is currently selected. Vice versa, when the Detection Engine signals, for example, startDone, this component signals it back to the Scheduler.

The advantage of this approach is the possibility of simply adding new Detection Engines; a new Detection Engine is specified only in the Detection Manager configuration component. Each Detection Engine has to provide the DetectionEngine interface, which contains the start, stop and evaluate commands and the startDone, stopDone and evaluateDone events. We can rely on the fact that only a single Detection Engine is active at any time, thus we can let the engines share resources and memory between themselves; the Detection Manager ensures that only one Detection Engine accesses them at a time. The Detection Engines usually require a non-trivial amount of operating memory for their activities; for example, they often need to store packet signatures or even whole packets. Unfortunately, TinyOS does not support dynamic memory allocation and all memory has to be statically allocated at compile time. Therefore we introduced a minimalistic Memory Manager component that allocates all the memory we can spare for the Detection Engines' purposes; components can then get a pointer to this memory by calling MemoryMngr.malloc. The Memory Manager does not provide any memory protection, but it allows two different engines to use the memory in different ways.

Figure 6.5: Detection Manager

The Selective Forwarding Detection Engines store temporary packets in the CtpStorage component. The CtpStorage stores only minimal packet signatures (see Figure 6.6); it is used by the Selective Forwarding Engine and by the Advanced Selective Forwarding Engine. Each packet signature takes 7 bytes and consists of a 1-byte identifier of the forwarder position in the neighbor table, a 2-byte origin identifier, a 1-byte sequence number, a 1-byte collect_id, a 1-byte thl (time has lived) counter and a 1-byte number of retries (the number of times we heard this packet; it is greater than 1 when the forwarder did not receive the packet on the first try).
typedef struct {
    uint8_t   forwarder;   // node that should forward the packet
    am_addr_t origin;      // packet origin
    uint8_t   seqno;       // packet sequence number
    uint8_t   type;        // packet collect_id
    uint8_t   thl;         // time has lived
    uint8_t   retry;       // the number of retries
} ctp_packet_sig_t;

Figure 6.6: CtpStorage record

6.5.1 Selective Forwarding Engine

The Selective Forwarding Engine is the basic engine used for detecting the selective forwarding attack.

Initialization: This engine sorts the neighbor table according to the average RSSI value and then starts to monitor at most SELECTIVE_FORWARDING_MONITORED_NEIGHBORS (10 by default) nodes with the greatest signal strength.

Running: When a packet destined to one of the monitored nodes is snooped (or when we send out a packet destined to a monitored node), the Selective Forwarding Engine stores it in the CtpStorage

component using the CtpStorage.insert command. The CtpStorage does not store the whole packet, but only its minimal signature, which is necessary to re-identify this packet, together with the identifier of the node that should forward it (see Figure 6.6). If we want to store a packet but the CtpStorage is already full, we drop the oldest packet and increment the bad counter of the node that was supposed to forward it. On the TOSSIM simulator3 it may happen that we hear the same packet that is already in the CtpStorage. This happens when the packet was not received at the forwarder node on the first try and is being retransmitted. In this case we do not insert a new packet into the CtpStorage, but only increment the retry counter of the previous one. When we receive a packet sent by a node that we are monitoring, we check whether this packet signature is stored in the CtpStorage by calling CtpStorage.check. If we find such a packet, we remove it from the CtpStorage and increment the monitored node's good counter (see Section 6.4). Thus, when a node successfully forwards a packet, its good counter is incremented by one; conversely, when a node does not forward a packet, its bad counter is incremented. In CTP, a node that is congested sets a congestion flag in the packet header and is then allowed to drop packets. When the Selective Forwarding Engine detects a packet from a monitored node with the congestion flag set, it increments the congestion counter of the corresponding node. A neighbor with a non-zero congestion counter is not considered an attacker. When the Selective Forwarding Engine is called to stop, it does not stop instantly, but continues its work for an additional STOP_INTERVAL milliseconds. In this period it does not insert new packets into the CtpStorage, but it still allows checking and removing packets that were forwarded after the regular test period elapsed.

Evaluation: The Selective Forwarding Engine first goes through all packets that were not forwarded and thus remained in the CtpStorage, and for each found packet it increments the bad counter of the corresponding node. Then it sequentially goes through the monitored nodes and calculates for each node the error ratio:

err_ratio = bad_counter · 255 / (bad_counter + good_counter)
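A minimal C sketch of this evaluation step is shown below. It follows the error-ratio formula above and the default threshold of 110 used below; the structure and names are ours, not the nesC code.

#include <stdbool.h>
#include <stdint.h>

#define SELECTIVE_FORWARDING_THRESHOLD 110   /* default alert threshold */

/* Error ratio scaled to 0..255, as defined above. */
static uint8_t error_ratio(uint16_t bad_counter, uint16_t good_counter)
{
    uint32_t total = (uint32_t)bad_counter + good_counter;
    if (total == 0) {
        return 0;                    /* nothing was observed for this neighbor */
    }
    return (uint8_t)(((uint32_t)bad_counter * 255) / total);
}

/* A neighbor triggers an alert if it was not congested and its error
   ratio exceeds the threshold. */
static bool should_raise_alert(uint16_t bad_counter, uint16_t good_counter,
                               uint16_t congested)
{
    if (congested > 0) {
        return false;                /* congested nodes are allowed to drop packets */
    }
    return error_ratio(bad_counter, good_counter) > SELECTIVE_FORWARDING_THRESHOLD;
}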

If the error ratio is greater than the threshold defined in SELECTIVE_FORWARDING_THRESHOLD, the Selective Forwarding Engine raises an alert by calling Response.alert. We set the default value of SELECTIVE_FORWARDING_THRESHOLD to 110.

6.5.2 Advanced Selective Forwarding Engines

The Advanced Selective Forwarding Engine works in a similar way as the basic one, but it monitors only nodes with a strong signal. In the initialization phase, the Advanced Selective Forwarding Engine first checks whether the node serves as a forwarder. Each node that serves as a forwarder is characterized by a non-zero number of received packets in a sufficiently long interval. Thus, the Advanced Selective Forwarding Engine first measures the number of messages incoming into this node over an interval of 5 seconds. This is done by calling the StatisticsMngr.incoming() command twice, 5 seconds apart.
3. On the CC2420 radio there is a Unique Receive Layer that prevents the watchdog from hearing a retransmitted message, see Section 5.3.


When the difference between these two values is zero, the node did not receive any messages and is considered not to be a forwarder. The Advanced Selective Forwarding Engine then reduces the sensitivity of the receiver by calling RadioControl.setLowGainMode(TRUE). If the node is a forwarder, or if the low-gain mode cannot be enabled, the Advanced Selective Forwarding Engine will not start. Otherwise, the Advanced Selective Forwarding Engine works the same way as the basic one.
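The initialization decision can be sketched in C as follows, following the steps just described (two incoming-counter samples taken 5 seconds apart, then an attempt to enable the low-gain mode). The function names only model the nesC commands and are ours.

#include <stdbool.h>
#include <stdint.h>

/* Returns true if the advanced engine may start: the node must not be a
   forwarder and the receiver must accept the low-gain mode. */
static bool advanced_engine_may_start(uint32_t incoming_before,
                                      uint32_t incoming_after_5s,
                                      bool (*set_low_gain_mode)(bool))
{
    if (incoming_after_5s - incoming_before != 0) {
        return false;                 /* the node forwards traffic for others */
    }
    return set_low_gain_mode(true);   /* suppress weak signals from distant nodes */
}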

6.6 Response Module

The Response Module is responsible for the reaction taken after an alert is raised. An alert can be raised in two ways: it can be called from the Detection Manager, or it may come from the network. When an alarm is raised, the Response Module can perform several actions: it can schedule an additional test, or it may broadcast a message to warn the other nodes. In this thesis we did not deal with the reactions of the Response Module; the only reaction the Response Module carried out was logging the alert report.


Chapter 7

Deployment and Simulation


The implemented intrusion detection system is a library that can be linked to any application. In order to deploy the IDS agent on a node, the programmer only needs to add the following rule into the Makefile:

CFLAGS += -I$(IDS_DIR)

The compiler will then include our IDS and will use the modified active message components instead of the default ones.

7.1 Deployment on TMote Sky

The TMote Sky sensor node [23] is based on the tmote platform; in fact, tmote is just an alias for the telosb platform in the current TinyOS implementation. The TMote Sky uses the CC2420 radio [1]. To allow the mote to promiscuously snoop the network traffic, we have to disable address recognition. If address recognition were enabled, a packet not destined to the mote would be rejected too early, which would preclude the use of the watchdog monitoring technique. We disable the hardware address recognition by adding the following line into the Makefile:

CFLAGS+=-DCC2420_NO_ADDRESS_RECOGNITION

Compilation: We compile the program by typing:

$make tmote

Installation: If the compilation was successful, we can install the program onto the mote by typing:

$make tmote install [mote_id]

7.2 Deployment on TOSSIM

Compilation: The program can be compiled for simulation in TOSSIM by typing:

$make micaz sim

TOSSIM supports two programming interfaces, C++ and Python. The Python interface is generally easier to use, but it has a performance bottleneck: it is about twice as slow as the C++ interface [32]. Because we need to simulate our intrusion detection system on large networks, we use the more efficient C++ interface. For the purposes of the IDS simulation, we have created a simple program called tester.c1. We can compile the tester.c program and link it against TOSSIM by typing:
1. It is included along with the IDS source code.


$g++ -g -c -o tester.o tester.c -I$(TOSDIR)/lib/tossim
$g++ -o tester tester.o build/micaz/tossim.o build/micaz/sim.o \
    build/micaz/c-support.o

7.3 Simulation

7.3.1 Error ratio

At first, we tested the proposed intrusion detection system without any deployed intruders. The aim of this simulation was to measure the natural error ratio of watchdogs in different environments. The error ratio for selective forwarding attacks was defined as

error_ratio = bad_counter · 255 / (good_counter + bad_counter)

where bad_counter is the number of packets that were not forwarded and good_counter is the number of packets that were forwarded. The error ratio helps us estimate an ALERT_THRESHOLD for the selective forwarding engines that leads to an acceptable number of false positives while maintaining the ability of the IDS to detect a selective forwarding attack (see Section 2.5) with a low ratio of dropped packets. In the following simulations, we set each node i to generate and send a message using CTP every t seconds; we refer to this period t as the sensing rate. First, we let the network initialize for one minute, and then performed ten one-minute cycles. At each cycle j, the nodes i_1, ..., i_m, ..., i_n that satisfied i_m mod 10 = j started their IDS agent and performed a basic selective forwarding test (thus during the simulation each node in the network performed a single test). For the purposes of these tests, the ALERT_THRESHOLD was set to 0; therefore the detection engines raised an alert for each monitored node that forwarded some packets2 (even if it forwarded all packets and had an error ratio equal to zero). Note: it is not possible to repeat a simulation twice with the same results. There are several random factors we cannot eliminate (e.g. random noise, random CSMA back-offs, random CTP back-offs) that cause the network topology to be different every time. Therefore, even if we monitor the same neighbors by the same watchdog at the same simulation time, we get different results every time, and in particular a different number of alerts. For this reason we had to normalize the simulation results to be able to compare individual simulations. In the following charts, the values on the X-axis express the error ratios that are equal to or greater than the given value; for example, the value of 100 includes all alarms with an error ratio from 100 to 255.

Various sensing rates: At first, we measured the error ratio in relation to the number of messages that were generated and transmitted through the network. Each IDS agent was set to randomly choose 10 neighbors whose average RSSI was under -86 dB. We deployed 100 nodes in the topology defined in 15-15-tight-mica2-grid.txt (it is included in the TinyOS 2.1.0 distribution and can be found in tos/lib/tossim/topologies). This topology is very dense; each node has more than 50 neighbors. We tested three cases with different sensing periods: 1) t = 5 seconds, 2) t = 2.5 seconds, 3) t = 1.250 seconds.
2. With the possible exception of the nodes that were congested and thus were allowed to drop packets.


sensing period t    traffic generated    ambiguous collisions
5 sec               13068                30100
2.5 sec             26136                148981
1.25 sec            52272                2477286

Figure 7.1: Dependence of Error Rate on Sensing Period

The results of this test are shown in Figure 7.1. We can see that increasing the traffic in the network makes the watchdog monitoring technique less reliable. For the sensing periods 5 s and 2.5 s there are almost no occurrences of errors greater than 90 (0% for t = 5 s, 1.03% for t = 2.5 s), so if we set the ALERT_THRESHOLD to 90, it would cause no false positives for t = 5 s and four false positives for t = 2.5 s. In contrast, for t = 1.250 s it would cause over 10% (26) false positives. We also measured the number of packets that were lost because the receiver was in mid-reception of another packet; we can consider this value the number of ambiguous collisions that occurred in the network. We can see that a linear increase in the number of transmitted messages caused an exponential increase in ambiguous collisions (see Section 4.2.2).

Various monitoring thresholds: In the second test, we examined the dependence of the error ratio on the value of MONITORING_THRESHOLD (a watchdog never monitors neighbors with a lower average RSSI than MONITORING_THRESHOLD). For this simulation we used a custom network topology. This topology was less dense than 15-15-tight-mica2-grid.txt; each node had approximately 25 neighbors.

Figure 7.2: Dependence of error ratio on monitoring thresholds

Each node in the network worked with the sensing rate t = 2.5 s and we tried various monitoring thresholds: -90 dB, -80 dB and -75 dB. The IDS agents were activated in the same way as in the previous test (each node performed a single selective forwarding test), but each IDS agent monitored all its neighbors that satisfied the monitoring threshold (we had to increase the capacity of the CtpStorage to make this

possible). The results of this test are shown in Figure 7.2. It is obvious that when we limit monitoring only to nodes with a stronger signal, the average error ratio is lower. On the other hand, this reduces the IDS performance, because each IDS agent can monitor only a limited subset of its neighbors.

7.3.2 Advanced selective forwarding engine

The purpose of this experiment was to confirm the hypothesis that by reducing the sensitivity of the watchdog's receiver we can suppress weak signals from distant nodes and achieve more reliable eavesdropping on near nodes. On the CC2420 radio, we can reduce the receiver sensitivity by lowering the signal amplification that is performed before the signal is processed by the analog-to-digital converter. The simplest way to control this amplification is to set the AGCTRL register. By changing the value of this register, we can override the low-noise amplifier settings and manually select one of four modes: automatic, low-gain, medium-gain and high-gain. By default (automatic mode) the LNA gain mode is chosen by the AGC (Automatic Gain Control), which dynamically adjusts the mode according to the strength of the received signal. In contrast, the other modes set the LNA gain mode to a fixed value and do not change it.

Figure 7.3: Nodes deployment

We deployed five nodes as shown in Figure 7.3. Node 0 represented the watchdog; node 2 was only 2 meters from the watchdog (a received signal strength of about -62 dB) and represented the close node we wanted to monitor. The remaining three nodes (6, 7, 9) were 15 meters from the watchdog (about -84 to -86 dB) and represented the nodes we did not intend to eavesdrop on. All sending nodes were positioned sufficiently far from each other to avoid CSMA collisions. The nodes 2, 6, 7 and 9 were set to send 500 messages in 10 seconds. Each message had 25 bytes and was destined to node 0. We disabled message acknowledgments and sent the messages directly using the Active Message Layer without any retransmissions3. We defined four receiver modes m0, m1, m2, m3. In the first mode m0 the watchdog's low-noise amplifier was controlled by the AGC, in m1 the LNA was set to low gain, in m2 to medium gain and in m3 to high gain. Node 0 (the watchdog) always worked in one of these modes and counted the number of messages num_i received from each node i during the last 10-second interval. Because we know that each node always sent 500 messages during the interval, we can calculate the packet receive ratio as

PRR_i = num_i / 500.
3. This guarantees that each node sends exactly 500 messages in 10 seconds.


For each interval we periodically changed the mode in which node 0 worked, and we repeated each mode more than 10 times. The averaged results are summarized in Table 7.1.

mode          node 2 (num2 / PRR2)   node 6 (num6 / PRR6)   node 7 (num7 / PRR7)   node 9 (num9 / PRR9)
automatic     450.21 / 90.04%        328.29 / 65.66%        258.07 / 51.61%        251.29 / 50.26%
low-gain      498.73 / 99.75%          1.93 /  0.39%          0.33 /  0.07%          0.53 /  0.11%
medium-gain   472.27 / 94.75%        158.71 / 31.74%         59.93 / 11.99%        189.67 / 37.93%
high-gain     448.14 / 89.63%        333.93 / 66.79%        224.57 / 44.91%        275.31 / 55.06%

Table 7.1: Packet receive ratio in different modes

In the automatic and high-gain modes4, the watchdog was able to receive around 90% of the packets sent by node 2 and approximately half of the packets from the distant nodes. In the medium-gain mode, the probability that a packet from the close node would be successfully received increased to 95%, and in the low-gain mode to almost 100%. This was achieved at the cost of a smaller number of received packets from the distant nodes. In the lower-gain modes, the watchdog often did not recognize the beginnings of messages coming from the distant nodes, and thus did not start receiving them. As a result, the distant nodes did not block the watchdog so often and it was able to receive more packets from the close node; in other words, the number of ambiguous collisions that occurred from the perspective of the watchdog was much smaller. In conclusion, the experiment confirmed that reducing the watchdog's receiver sensitivity improves the accuracy of eavesdropping on close nodes.

Simulation: In this simulation, we tested the difference between the basic Selective Forwarding Engine and the Advanced Selective Forwarding Engine, which tries to benefit from suppressing weak signals. We simulated the IDS behavior on a network similar to the one in the first test. The only difference was that we scheduled each node to perform two tests, the basic and the advanced one. Both tests were performed one after another, thus we can assume that both engines worked with the same data.

Figure 7.4: Advanced Selective Forwarding Engine

We can see in Figure 7.4 that the Advanced Selective Forwarding Engine produced a smaller number of occurrences of high error ratios.
4. We can see that the results achieved in the automatic mode are more or less the same as in the high-gain mode. This is due to the AGC, which for most of the time sets the low-noise amplifier to the high-gain mode.


7.3.3 Simulation with attackers

In the last simulation, we measured the number of false positives and false negatives in a network with attackers that perform a selective forwarding attack. We simulated 100 nodes deployed according to the 15-15-medium-mica2-grid.txt topology and deployed 5 attackers into the network. Each of these attackers performed a selective forwarding attack by dropping packets that should have been forwarded with probability p = 0.5. The attackers did not perform any actions that would make their nodes more attractive with respect to the routing metric. The attackers were chosen randomly after the network initialized (after 50 seconds) from the internal nodes (nodes that forward traffic from other nodes towards the base station). Unfortunately for this simulation, a network that uses CTP can dynamically change its topology, so an attacker may become a leaf node and no longer forward traffic from other nodes. In this case, the attacker loses the ability to perform a selective forwarding attack and becomes passive. It is not possible to detect a passive attacker, because it behaves the same way as the regular nodes; on the other hand, a passive attacker can become active again when other nodes choose it as their parent. In order to measure false negatives, we need the same number of attackers to be active during the whole simulation. Thus we detected events when an attacker ceased to be an internal node and in such a case we restarted the whole simulation from the beginning.
[Chart: number of false negatives and false positives (0 to 5) over the simulation time (0 to 900 seconds).]

We performed 10 simulations and averaged the results; they are shown in the chart above. The blue line represents the number of undetected intruders in the network at the given time (false negatives), the red line represents the number of nodes that are wrongly considered intruders (false positives).


Chapter 8

Conclusion
We can divide the contribution of this work into three parts:

We have designed and implemented a simple intrusion detection system that is able to detect selective forwarding attacks. The proposed intrusion detection system allows easy extensibility through the simple addition of new detection engines.

We have improved the reliability of the watchdog monitoring technique in networks with a large number of ambiguous collisions. The technique is based on limiting the watchdog's receiver sensitivity and allows the watchdog to suppress weak signals coming from distant nodes. As a result, the watchdog is able to eavesdrop better on close neighbors because it does not hear the distant nodes. We have shown that this technique is feasible on the CC2420 radio chip. By bypassing the automatic gain control loop we forced the radio receiver to amplify the received signal less than in the automatic mode. The watchdog node then does not recognize the packet preamble (synchronization header and frame delimiter) of packets from distant nodes and therefore does not start receiving them. In 7.3.2 we described an experiment that confirmed that the technique really works: in comparison with the experiment without this method, the receiver node lost fewer packets from the close mote and the packet receive ratio from that node significantly increased. The simulation presented in 7.3.2 also showed that this technique raises fewer false positives when we restrict the watchdog to monitoring only the closest nodes.

To be able to accurately simulate our intrusion detection system on the TOSSIM simulator, we had to modify the simulator, which had several serious deficiencies. First, it had a bug that allowed a node to receive a message at the same time as it was transmitting another one. This bug was reported and confirmed by the TOSSIM developers. Second, the CSMA algorithm in TOSSIM was different from the one on the CC2420; in particular, it considered the channel to be free even if the node was receiving valid data. Although the CC2420 radio can be configured to behave this way, by default it considers the channel to be free only if the node is not receiving data. All these deficiencies have been fixed and the simulation of the designed IDS has been successfully performed on the TOSSIM simulator.


Bibliography
[1] CC2420 datasheet, Jan. 2008. Available at http://inst.eecs.berkeley.edu/~cs150/Documents/CC2420.pdf.

[2] Shah Bhatti, James Carlson, Hui Dai, Jing Deng, Jeff Rose, Anmol Sheth, Brian Shucker, Charles Gruenwald, Adam Torgerson, and Richard Han. MANTIS OS: an embedded multithreaded operating system for wireless micro sensor platforms. Mob. Netw. Appl., 10(4):563-579, 2005.
[3] Han C., Kumar R. Rengaswamy, Shea R., Kohler E., and Srivastava M. SOS: A Dynamic Operating System for Sensor Networks. In Proceedings of the Third International Conference on Mobile Systems, Applications, and Services (Mobisys 2005), Seattle, Washington, June 2005.
[4] David E. Culler. TinyOS: Operating system design for wireless sensor networks. http://www.sensorsmag.com/sensors/article/articleDetail.jsp?id=324975.
[5] Ana Paula R. da Silva, Marcelo H. T. Martins, Bruno P. S. Rocha, Antonio Alfredo Ferreira Loureiro, Linnyer Beatrys Ruiz, and Hao Chi Wong. Decentralized intrusion detection in wireless sensor networks. In Azzedine Boukerche and Regina Borges de Araujo, editors, Q2SWinet, pages 16-23. ACM, 2005.
[6] Adam Dunkels, Bjorn Gronvall, and Thiemo Voigt. Contiki - a lightweight and flexible operating system for tiny networked sensors. In LCN '04: Proceedings of the 29th Annual IEEE International Conference on Local Computer Networks, pages 455-462, Washington, DC, USA, 2004. IEEE Computer Society.
[7] Jeremy Elston and Deborah Estrin. Wireless sensor networks, chapter Sensor networks: A bridge to the physical world, pages 3-21. 2004.
[8] Rodrigo Fonseca, Omprakash Gnawali, Kyle Jamieson, Sukun Kim, Philip Levis, and Alec Woo. The Collection Tree Protocol (CTP). http://www.tinyos.net/tinyos-2.x/doc/html/tep123.html.
[9] David Gay, Matt Welsh, Philip Levis, Eric Brewer, Robert Von Behren, and David Culler. The nesC language: A holistic approach to networked embedded systems. In Proceedings of Programming Language Design and Implementation (PLDI), pages 1-11, 2003.
[10] Vlado Handziski, Joseph Polastre, Jan-Hinrich Hauer, Cory Sharp, Adam Wolisz, and David Culler. Flexible hardware abstraction for wireless sensor networks. In Proceedings of the Second European Workshop on Wireless Sensor Networks (EWSN 2005), Istanbul, Turkey, 2005.
[11] Richard Heady, George Lugar, Mark Servilla, and Arthur Maccabe. The architecture of a network level intrusion detection system. Technical report, University of New Mexico, Albuquerque, NM, August 1990.

[12] Krontiris Ioannis, Tassos Dimitriou, and Felix C. Freiling. Towards intrusion detection in wireless sensor networks. In 13th European Wireless Conference, Paris, April 2007.
[13] Aravind Iyer, Sunil S. Kulkarni, Vive Mhatre, and Catherine P. Rosenberg. A Taxonomy-based Approach to Design of Large-scale Sensor Networks. Kluwer, 2004.
[14] Chris Karlof and David Wagner. Secure routing in wireless sensor networks: attacks and countermeasures. Ad Hoc Networks, 1(2-3):293-315, 2003.
[15] Ioannis Krontiris, Tassos Dimitriou, Thanassis Giannetsos, and Marios Mpasoukos. Intrusion detection of sinkhole attacks in wireless sensor networks. In Miroslaw Kutylowski, Jacek Cichon, and Przemyslaw Kubiak, editors, ALGOSENSORS, volume 4837 of Lecture Notes in Computer Science, pages 150-161. Springer, 2007.
[16] Ioannis Krontiris, Thanassis Giannetsos, and Tassos Dimitriou. LIDeA: a distributed lightweight intrusion detection architecture for sensor networks. In SecureComm '08: Proceedings of the 4th international conference on Security and privacy in communication networks, pages 1-10, New York, NY, USA, 2008. ACM.
[17] Hyung June Lee, Alberto Cerpa, and Philip Levis. Improving wireless simulation through noise modeling. In IPSN '07: Proceedings of the 6th international conference on Information processing in sensor networks, pages 21-30, New York, NY, USA, 2007. ACM.
[18] Philip Levis. Packet protocols. http://www.tinyos.net/tinyos-2.x/doc/html/tep116.html.
[19] Philip Levis, David Gay, Vlado Handziski, Jan-Heinrich Hauer, Ben Greenstein, Martin Turon, Jonathan Hui, Kevin Klues, Cory Sharp, Robert Szewczyk, Joseph Polastre, Philip Buonadonna, Lama Nachman, Gilman Tolle, David Culler, and Adam Wolisz. T2: A second generation OS for embedded sensor networks. Technical report, Telecommunication Networks Group, Technische Universitat Berlin, 2005.
[20] 6LoWPAN - IPv6 over low-power wireless personal area networks. http://6lowpan.tzi.org/.
[21] Sergio Marti, T. J. Giuli, Kevin Lai, and Mary Baker. Mitigating routing misbehavior in mobile ad hoc networks. In MobiCom '00: Proceedings of the 6th annual international conference on Mobile computing and networking, pages 255-265, New York, NY, USA, 2000. ACM.
[22] David Moss and Philip Levis. Packet link layer. http://www.tinyos.net/tinyos-2.x/doc/html/tep127.html.
[23] Moteiv. Tmote Sky datasheet, 2006. http://www.cs.uvm.edu/~crobinso/mote/tmote-sky-datasheet-102.pdf.
[24] Moteiv. Tmote Mini datasheet, 2007. http://www.sentilla.com/pdf/eol/Tmote_Mini_Datasheet.pdf.

[25] John Pike. Sound surveillance system (SOSUS). Online. http://www.globalsecurity.org/intell/systems/sosus.htm.
[26] Rodrigo Roman, Jianying Zhou, and Javier Lopez. Applying intrusion detection systems to wireless sensor networks. In Proceedings of the IEEE Consumer Communications and Networking Conference (CCNC '06), pages 640-644, Las Vegas, USA, January 2006.

[27] Tanya Roosta, Shiuhpyng Shieh, and Shankar Sastry. Taxonomy of security attacks in sensor networks and countermeasures. In The First IEEE International Conference on System Integration and Reliability Improvements, Hanoi, pages 13-15, 2006.
[28] IEEE Computer Society. IEEE standard 802.15.4a-2007, August 31, 2007.
[29] Kannan Srinivasan and Philip Levis. RSSI is under appreciated. In EmNets 2006, May 2006.
[30] TinyOS. http://www.tinyos.net/.
[31] TinyOS Core Working Group. TEP 106: Schedulers and Tasks. http://www.tinyos.net/tinyos-2.x/doc/html/tep106.html.
[32] TOSSIM tutorial. Available at http://docs.tinyos.net/index.php/TOSSIM.
[33] Imote2 datasheet. Online. http://www.xbow.com/Products/Product_pdf_files/Wireless_pdf/Imote2_Datasheet.pdf.

