
									UBICC Journal
Ubiquitous Computing and Communication Journal, 2007, Volume 2, No. 4. Published 2007-08-15. ISSN 1992-8424

UBICC Publishers © 2007 Ubiquitous Computing and Communication Journal

Edited by Usman Tariq.

Co-Editor Dr. Kevin Curran

Ubiquitous Computing and Communication Journal
Book: 2007 Volume 2 No. 4 Publishing Date: 2007-08-15 Proceedings ISSN 1992-8424
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the copyright law of 1965, in its current version, and permission for use must always be obtained from UBICC Publishers. Violations are liable to prosecution under the copyright law.

UBICC Journal is a part of UBICC Publishers www.ubicc.org

© UBICC Journal

Typesetting: Camera-ready by author, data conversion by UBICC Publishing Services.

UBICC Publishers

Table of Contents

Papers
22  Resource management strategy to support real time video across UMTS and WLAN networks
    K. Ayyappan, I. Saravanan, G. Sivaradje . . . 1
23  Capacity estimation for SIR-based power controlled CDMA system with mixed cell sizes
    Sami A. El-Dolil . . . 8
24  A cluster based approach toward sensor localization and K-Coverage problems
    Zhanyang Zhang, Olga Berger, Shane Sorbello . . . 15
25  Power aware virtual node routing protocol for ad hoc networks
    Ashwani Kush, Ram Kumar, Phalguni Gupta . . . 24
26  Java based implementation of an online home delivery system
    Fiaz Ahmad, Mohamed Osama Khozium . . . 34
27  Towards mobility enabled protocol stack for future wireless networks
    Fawad Nazir . . . 41
28  Efficient implementation of downlink CDMA equalization using frequency domain approximation
    F. Kamali, M. Dessouky, B. Sallam, Fathi Abd El-Samie . . . 56
29  Performance studies of MANET routing protocols in the presence of different broadcast route discovery strategies
    Natarajan Meghanathan . . . 67
30  A novel approach to adaptive control of networked systems
    A. H. Tahoun, Fang Hua-Jing . . . 79
31  Different MAC protocols for next generation wireless ATM networks
    Sami A. El-Dolil . . . 85
32  Intelligent network architecture for fixed-mobile convergence services
    JongMin Lee, Ae Hyang Park, Jun Kyun Choi . . . 94

RESOURCE MANAGEMENT STRATEGY TO SUPPORT REAL TIME VIDEO ACROSS UMTS AND WLAN NETWORKS
K. Ayyappan, I. Saravanan, G. Sivaradje and P. Dananjayan
Department of Electronics and Communication Engineering, Pondicherry Engineering College, Pondicherry, India
Email: shivaradje@ieee.org

ABSTRACT
The communication world is expecting an environment where a single terminal can support multiple services with pervasive network access. This paper addresses the challenges and resource management strategies to support real time video across UMTS and WLAN networks. A priority based service interworking architecture with Hybrid coupling is proposed to achieve seamless continuity of real time video sessions across the two networks. QoS consistency is an important challenge that needs to be addressed, since QoS degradation can occur during vertical handover. The results indicate that QoS consistency can be achieved for real time video sessions under certain conditions and restrictions. The proposed scheme enables the WLAN to support more UMTS video subscribers with better QoS consistency.

Keywords: Priority based service, Hybrid coupling, QoS consistency, QoS degradation, Vertical handover.

1. INTRODUCTION
In the past decade there has been fast evolution and successful deployment of a number of wireless access networks. Now the focus has turned towards next generation communication networks [1, 2], which are aimed at seamlessly integrating various existing wireless communication networks [3], such as wireless local area networks (WLANs, e.g., IEEE 802.11 a/b/g and HiperLAN/2), wireless wide area networks (WWANs, e.g., 1G, 2G, 3G, IEEE 802.20), wireless personal area networks (WPANs, e.g., Bluetooth, IEEE 802.15.1/3/4), and wireless metropolitan area networks (WMANs, e.g., IEEE 802.16). The technology of seamlessly integrating various existing wireless networks is called Convergence Technology. This technology combines different existing access technologies such as cellular, cordless, WLAN-type systems, short range wireless connectivity and wired systems on a common platform, to complement each other in an optimum way and to provide multiple possibilities for current and future services [4] and applications to users in a single terminal. The next generation communication network will be heterogeneous and provide multiple services anywhere and anytime, with users getting the benefit of seamless internet access with multimode access capability. Seamless integration does not mean that the radio access technologies are converged into a single network. Instead, the services offered by the existing radio

access technologies are integrated. By converging voice, video and data networks onto a single IP-based network, a business can lower its total cost of network ownership by reducing expenditures on equipment, maintenance, network administration and carrier charges, while enhancing its communications capabilities. The goal is set, and research today is focused on integrating different combinations of existing networks. As of now, the most popular networks are the cellular network and the wireless local area network (WLAN). The interworking [5-8] between third generation (3G) cellular networks and WLAN has been considered the suitable path towards the next generation of wireless networks. To be specific, both radio access networks have their own merits. The cellular networks support both circuit-switched and packet-switched services and have benefits such as global coverage, universal roaming and a well defined infrastructure, but they offer low data rates and lack the capability to serve bandwidth demanding applications. On the other hand, WLAN supports only packet-switched services. It supports high data rates at low cost over local coverage and is very efficient in serving bursty traffic, but it has limitations such as poor mobility management, interference, vague infrastructure, and it is not suitable for serving time sensitive services. Security is not good either. When real time services are to be supported, WLAN lacks the capability to serve them.


This is because WLAN is optimized for local, high rate and low cost data service [9]. Although WLAN has many QoS deficiencies in handling real time services, this does not mean WLAN cannot handle them. For WLAN to handle real time services, QoS consistency is the major challenge that needs to be addressed.

2. INTERWORKING ARCHITECTURE
To achieve efficient interworking, the architecture plays an important role. The issue is even more important when real time services are to be supported across the two networks. A new architecture is proposed for the interworking of the cellular network and WLANs. The most promising schemes are Tight coupling, Loose coupling and Hybrid coupling. In Tight coupling, the WLAN appears to the cellular core network as another cellular access network. In Loose coupling, the WLAN and cellular networks are completely separated and are connected through the Internet. Both coupling schemes [10-12] have drawbacks such as static routing of traffic, high latency during vertical handover [13-15] and an increased burden on core networks. In order to overcome these shortcomings, the Hybrid coupling architecture is developed. In Hybrid coupling, a new wireless link using the IEEE 802.16 standard is created between the base station (BS) in the cellular network and the 802.11 WLAN within the same cell area. Figure 1 illustrates the three coupling schemes. Hybrid coupling has advantages including dynamically reducing signaling cost and handoff latency [16, 17], relieving the burden on core networks through dynamically distributing traffic in the low level network, and enhancing the robustness of the integrated networks through the addition of a new wireless link.

Figure 1. Interworking architectures (tight, loose and hybrid coupling between the UMTS network elements GGSN, SGSN, RNC and Node B and the WLAN gateway and access point, connected through the Internet)

2.1 SUPPORT FOR REAL TIME VIDEO SERVICE
When real time packet-switched video is to be supported across the cellular network and the WLAN, the vital property that needs to be maintained is QoS consistency for the service, because to ensure seamless continuity [18] the QoS level for the service has to be maintained in the two network domains. This is a very challenging issue because IEEE 802.11 WLAN was initially developed without paying much attention to QoS aspects, aimed primarily at simple and cost-effective data service. Even with the recent IEEE 802.11e developments, WLAN QoS still exhibits several deficiencies with respect to 3G QoS. It is a difficult task for WLAN to support real time video service because of these QoS deficiencies, which include equal error protection across different media streams, no control of the residual BER and MAC service data unit size, no dedicated radio channels, and no soft handover. Nevertheless, the QoS deficiencies of WLANs do not necessarily mean that seamless session continuity from UMTS to WLAN cannot be supported. UMTS subscribers admitted to the WLAN are called UMTS roamers. When a video session initiated in the 3G network transits to a WLAN environment, the video session should continue seamlessly without any noticeable change in quality of service (QoS). In this regard, not only is 3G-based access control required, but access to 3G-based services is also needed over the WLAN network.

2.2 CONDITIONS FOR SEAMLESS CONTINUITY

To ensure seamless continuity of video sessions, the WLAN can accept UMTS roamers as long as the following two conditions are satisfied.
i. The video streams of all UMTS roamers admitted to the WLAN must experience at least the same QoS level as negotiated in the UMTS network, i.e., the MAC SDU (MSDU) loss rate in the WLAN must not exceed the corresponding UMTS SDU error ratio (10^-3). The terminals of UMTS roamers make every effort to transmit all video packets within their delay bound, which is taken equal to 40 ms for consistency with UMTS. However, if a video packet is delayed for more than 40 ms, it is dropped. This policy


guarantees that the delay experienced by all successfully transmitted video packets will be smaller than 40 ms.
ii. At the same time, the bandwidth available to WLAN data users must not diminish below a predefined threshold. The admission policy may need to ensure that WLAN data users have at least L Mbps of bandwidth available no matter how many UMTS roamers are admitted into the WLAN. So the admission policy will reject further association requests from UMTS roamers when the bandwidth reservation limit is reached.

2.3 PRIORITIZED CHANNEL ACCESS
For the WLAN to support real time video sessions, the channel access mechanism plays a major role. Traditional IEEE 802.11 WLAN has DCF (distributed coordination function) and PCF (point coordination function) as the channel access mechanisms [19]. DCF does not have any provision to support QoS. All data traffic is treated in a first come, first served, best-effort manner. All STAs (stations) in the BSS (basic service set) contend for the wireless medium with the same priority. This causes asymmetric throughput between uplink and downlink, as the AP (access point) has the same priority as other STAs but a much higher throughput requirement. There is also no differentiation between data flows to support traffic with QoS requirements. When the number of STAs in a BSS increases, the probability of collisions becomes higher and results in frequent retransmissions; therefore QoS decreases, as does the overall throughput in the BSS. Although PCF was designed to support time-bounded traffic, many inadequacies have been identified. These include unpredictable beacon delays resulting in a significantly shortened CFP (contention free period), and the unknown transmission duration of a polled STA, which makes it very difficult for the AP to predict and control the polling schedule for the remainder of the CFP. In addition, there is no management interface defined to set up and control PCF operations. So neither DCF nor PCF provides sufficient facilities to support traffic with QoS requirements. Therefore enhancements such as EDCA (enhanced distributed channel access) and HCCA (hybrid controlled channel access) were made in IEEE 802.11e [20-22]. The enhancements are made to provide priority for a particular service. To provide priority, the maximum backoff time for the service is minimized, thereby increasing the chance for that particular type of service user to access the channel.
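As an illustration of this backoff-based prioritization, the sketch below (ours, not the authors' simulation code; the slot time, AIFS values and contention-window sizes are assumed for the example) draws contention delays for a high-priority and a low-priority class and counts how often each wins the channel:

```python
import random

# Hypothetical per-class contention parameters: the high-priority class
# (e.g., UMTS roamers' video) gets a smaller maximum backoff window, so it
# usually wins the contention against the low-priority class (WLAN data).
SLOT_US = 9  # slot time in microseconds (802.11a/g value, assumed)

CLASSES = {
    "video_roamer": {"aifs_slots": 2, "cw_max": 7},    # high priority
    "wlan_data":    {"aifs_slots": 3, "cw_max": 31},   # low priority
}

def backoff_delay_us(cls: str) -> int:
    """Draw one contention delay (AIFS + random backoff) for a class."""
    p = CLASSES[cls]
    slots = p["aifs_slots"] + random.randint(0, p["cw_max"])
    return slots * SLOT_US

def winner_of_one_contention() -> str:
    """The station whose timer expires first transmits."""
    delays = {c: backoff_delay_us(c) for c in CLASSES}
    return min(delays, key=delays.get)

wins = {c: 0 for c in CLASSES}
for _ in range(10000):
    wins[winner_of_one_contention()] += 1
print(wins)  # the video class wins the large majority of contentions
```

Because the high-priority class draws its backoff from a much smaller window, it wins most contentions, which is the effect that EDCA-style parameter differentiation is meant to achieve.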

3. PERFORMANCE EVALUATION
The performance of the coupling schemes is analyzed for both contention based and contention free channel access.

3.1 Contention Based Channel Access
In contention based channel access [23], both UMTS roamers and WLAN data users contend with each other to access the channel. Here, UMTS roamers can be admitted to the system as long as the WLAN can support L Mbps of data traffic and the QoS experienced by the video streams meets or exceeds the QoS negotiated in the UMTS environment. To support real time video sessions, channel access is invoked by giving priority to a particular type of user (i.e., either WLAN data users or UMTS roamers).

3.2 Priority to WLAN Data Users
Priority can be given to a particular type of user by changing the maximum backoff time during contention for channel access. Users possessing high priority are given less backoff time than users possessing low priority, so low priority users have to wait a long time to access the channel. This mechanism gives high priority users more opportunities to access the channel than users having low priority. When WLAN data users are given preferential access to the channel, the limiting factor for the maximum number of UMTS roamers in the WLAN is not the bandwidth reservation constraint but rather the MSDU loss rate of the video streams.
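The admission rule that underlies this evaluation can be written compactly as a predicate over the two conditions of Section 2.2. The sketch below is ours (the function name, the way the loss, delay and throughput estimates are obtained, and the 7 Mbps value for L are assumptions for illustration; the paper obtains these quantities by simulation):

```python
# Minimal sketch of the roamer admission policy described in Section 2.2.
UMTS_SDU_ERROR_RATIO = 1e-3       # maximum tolerable MSDU loss rate
DELAY_BOUND_MS = 40.0             # video packet delay bound (UMTS-consistent)
DATA_BANDWIDTH_FLOOR_MBPS = 7.0   # 'L': bandwidth reserved for WLAN data users

def admit_roamer(predicted_msdu_loss: float,
                 predicted_video_delay_ms: float,
                 predicted_data_throughput_mbps: float) -> bool:
    """Accept a new UMTS roamer only if both conditions of Sec. 2.2 hold."""
    qos_ok = (predicted_msdu_loss <= UMTS_SDU_ERROR_RATIO and
              predicted_video_delay_ms <= DELAY_BOUND_MS)
    bandwidth_ok = predicted_data_throughput_mbps >= DATA_BANDWIDTH_FLOOR_MBPS
    return qos_ok and bandwidth_ok

# Example: a WLAN that would still give data users 7.2 Mbps and keep video
# loss below 10^-3 with delay under 40 ms can accept the roamer.
print(admit_roamer(5e-4, 32.0, 7.2))   # True
print(admit_roamer(2e-3, 32.0, 7.2))   # False: loss rate exceeds the UMTS level
```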
Figure 2. MSDU loss rate for video traffic (L = 7 Mbps) vs. No. of UMTS roamers (curves for Hybrid, Tight and Loose coupling)

Figure 2 reveals that when the WLAN data traffic (L) is 7 Mbps, the MSDU loss rate for video traffic reaches the UMTS-negotiated value (10^-3)


when there are 34, 37 and 43 UMTS roamers in the case of Loose coupling, Tight coupling and Hybrid coupling, respectively. Figure 3 reveals that when the WLAN data traffic (L) is 5 Mbps, the number of UMTS roamers that can be accepted into the WLAN is 39, 42 and 50 in the case of Loose coupling, Tight coupling and Hybrid coupling, respectively.
Figure 3. MSDU loss rate for video traffic (L = 5 Mbps) vs. No. of UMTS roamers

When the WLAN offered data traffic is high, the number of WLAN data users is larger, which results in accepting fewer UMTS roamers, and vice versa. From Figure 2 and Figure 3, it is clear that the number of UMTS roamers accepted is higher for Hybrid coupling than for Loose coupling and Tight coupling. This is due to the wireless link established between the base station and the WLAN access point (AP) within the same macro cell area to achieve dynamic distribution of traffic, whereas the traffic distribution is static in the case of Loose coupling and Tight coupling. When WLAN data users are given preferential access to the channel, the delay for video packets will be larger than the delay of data packets. This is because the data packets are served with higher priority than the video packets. Figure 4 and Figure 5 illustrate the average delay for delivered packets for different WLAN offered data traffic (i.e., 7 Mbps and 5 Mbps).

Figure 4. Average delay for delivered packets (L = 7 Mbps) vs. No. of UMTS roamers

Figure 5. Average delay for delivered packets (L = 5 Mbps) vs. No. of UMTS roamers

It is also clear that the delay for data packets is less than that for video packets, and that the delay for video packets in the Hybrid coupling architecture is less than in both the Loose coupling and Tight coupling architectures.

Figure 6. MSDU loss rate for data traffic vs. No. of UMTS roamers

3.3 Priority to UMTS Roamers
When UMTS roamers are given preferential access to the wireless medium, the loss rate experienced by video packets is almost negligible. Therefore, the limiting factor for the maximum number of UMTS roamers in the WLAN is not the loss rate of the video streams but rather the bandwidth reservation constraint. In order to protect the WLAN data users, they are given a bandwidth threshold, so the WLAN can


accept UMTS roamers as long as the bandwidth reservation policy is respected.
Figure 7. Average delay for delivered packets (L = 7 Mbps) vs. No. of UMTS roamers

Figure 6 illustrates that when the WLAN offered data traffic (L) is 7 Mbps, the MSDU loss rate for data traffic is equal to zero for up to 47 UMTS roamers. Up to this number, the capacity offered to the WLAN data users is indeed 7 Mbps and hence the bandwidth reservation policy is respected. But when more than 47 UMTS roamers are admitted to the WLAN, the bandwidth reservation policy cannot be satisfied, as the bandwidth available to WLAN data users quickly diminishes. So the maximum number of roamers that can be accepted into the WLAN is 47.
Figure 8. Average delay for delivered packets (L = 5 Mbps) vs. No. of UMTS roamers

When the WLAN offered data traffic is reduced (i.e., L = 5 Mbps), the number of UMTS roamers accepted increases to 78. This is because, when the WLAN offered data traffic increases, more WLAN data users are present in the WLAN, leaving less bandwidth for the acceptance of UMTS roamers. It is also clear that the number of UMTS roamers accepted is higher for the channel access mechanism in which the UMTS roamers are given preferential access to the channel. This is because the video packets are served with higher priority, which allows more UMTS roamers to be accepted. When UMTS roamers are given preferential access to the medium, the video packets are served with high priority; therefore the delay for the delivery of video packets is much smaller than the delay negotiated in the UMTS domain. Since Hybrid coupling enables dynamic distribution of traffic over the wireless link established between the BS and the WLAN in the same cell area, the delay for video packets is much smaller than for both the Tight coupling and Loose coupling interworking architectures. The data packets are then served with lower priority, which results in increased delivery delay, but the delay for the data service is not an issue, since data service requires reliability rather than low delivery delay. So the video packets are delivered with less delay when the UMTS roamers are given preferential access to the medium. Figures 7 and 8 reveal that when UMTS roamers are given preferential access to the WLAN channel, the delay experienced by video packets is very small for all coupling schemes. So the advantage of this type of channel access is the acceptance of more UMTS roamers and a decrease in the delay of video packets, but this gain is achieved at a cost in the delay performance of WLAN data users.

Figure 9. Percentage of WLAN channel time spent in contention free mode vs. No. of UMTS roamers

3.4 Contention Free Channel Access
In contention free channel access, the UMTS roamers do not contend with the WLAN data users. The channel access is managed by the AP and occurs in a centralized fashion. The access point controls


the access to the wireless channel by assigning transmission opportunities (TXOPs) to requesting WLAN terminals. Since access to the channel is centrally controlled, there is no contention or collision, and this mode is appropriate for providing parameterized QoS services. The consequence, however, is poor channel utilization, because the AP tries to respect the negotiated delay bounds and allocates more radio resources to a UMTS roamer than required. When the WLAN offered data traffic is 7 Mbps, almost 35% of the channel time can be spent in contention free mode. Figure 9 reveals that as the number of UMTS roamers increases, the percentage of WLAN channel time spent in contention free mode increases linearly; for 35% of channel time, the number of UMTS roamers supported is 26, which is fewer than with contention based channel access. This is due to the poor management of the channel by the access point, which leads to inefficient channel utilization. When the WLAN offered data traffic is reduced to 5 Mbps, 37 UMTS roamers are accepted, because less offered data traffic leaves more channel time available for UMTS roamers. The video packets are not lost, because the AP tries to respect the negotiated delay bounds. Therefore, the QoS experienced by the UMTS roamers in this mode is affected only by the delay characteristics and not by the loss rate.
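Reading the reported operating points as a simple linear model of channel time gives a quick back-of-the-envelope check (our sketch; the per-roamer share is inferred from the 35% / 26-roamer point quoted above, not stated explicitly in the paper):

```python
# Linear reading of Figure 9: supported roamers = available CFP share
# divided by the per-roamer share implied by the reported operating point.
CF_SHARE_AT_7MBPS = 0.35          # channel time available for the CFP
ROAMERS_AT_7MBPS = 26             # roamers supported at that share

per_roamer_share = CF_SHARE_AT_7MBPS / ROAMERS_AT_7MBPS   # ~1.35% each

def roamers_supported(cf_share_available: float) -> int:
    """Maximum roamers under the linear channel-time model."""
    return int(cf_share_available / per_roamer_share + 1e-9)

print(roamers_supported(0.35))   # 26, the L = 7 Mbps operating point
print(roamers_supported(0.50))   # 37, roughly the value reported for L = 5 Mbps
```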
Figure 10. Average delay for video packets vs. No. of UMTS roamers

Figure 10 illustrates the average delay experienced by the delivered video packets, which is below the delay bound negotiated in the UMTS domain (40 ms). The average delay experienced by video packets is smallest for Hybrid coupling, because of the dynamic distribution of traffic. It is interesting to note that the average delay for contention free channel access is larger than the corresponding delay in contention based channel access. This is due to the inefficient TXOP allocation of the AP, which makes a worst case estimation and allocates more channel time to the polled stations so as to accommodate the largest MSDU size.

4. CONCLUSION
The simulation results show that the number of UMTS roamers accepted into the WLAN is higher for contention based channel access than for contention free channel access, and the count is maximum for the channel access mechanism in which UMTS roamers are given preferential access to the medium. In contention free channel access, resource utilization is poor because the AP provides more resources to the users than required, which leads to the acceptance of fewer UMTS roamers. The average delay for video packets is also low in the channel access mechanism in which UMTS roamers are given preferential access to the medium, because the video packets are then served with high priority and require only a small delivery delay. The consequence is that the delivery delay for data packets increases. The resulting delay for data packets is acceptable because data service requires reliability rather than low delay. The proposed scheme suggests that the WLAN can support seamless continuity of video sessions for only a limited number of UMTS subscribers, which depends on the bandwidth reservations, the WLAN access parameters, and the QoS requirements of the video sessions. The results also show that the proposed scheme accepts more roamers than the Tight coupling and Loose coupling architectures, because Hybrid coupling dynamically distributes traffic over the wireless link created between the base station in the UMTS network and the 802.11 WLAN within the same cell area.

REFERENCES
[1] Safwat, A. M. and Mouftah, H., 4G network technologies for mobile telecommunications, IEEE Network, Vol. 19, No. 5, pp. 3-4, September 2005.
[2] Akyildiz, I. F., Mohanty, S. and Xie, J., A ubiquitous mobile communication architecture for next-generation heterogeneous wireless systems, IEEE Radio Communications, Vol. 43, No. 6, pp. S29-S36, June 2005.

[3] Cavalcanti, D., Agrawal, D., Cordeiro, C., Xie, B. and Kumar, A., Issues in integrating cellular networks, WLANs, and MANETs: A futuristic heterogeneous wireless network, IEEE Wireless Communications, Vol. 12, No. 3, pp. 30-41, June 2005.
[4] Magnusson, P., Lundsjo, J., Sachs, J. and Wallentin, P., Radio resource management distribution in a beyond 3G multi-radio access architecture, IEEE Communications Society, Globecom 2004, pp. 3372-3477, 2004.
[5] Soldatos, J. and Kormentzas, G., On the building blocks of Quality of Service in heterogeneous IP networks, IEEE Communications Surveys & Tutorials, Vol. 7, No. 1, First Quarter 2005, pp. 70-89, 2005.
[6] Salkintzis, A. K., Interworking techniques and architectures for WLAN/3G integration towards 4G mobile data networks, IEEE Wireless Communications, Vol. 11, No. 3, pp. 50-61, June 2004.
[7] Luo, H., Jiang, Z., Byoung-Jo Kim, Shankaranarayanan, N. K. and Henry, P., Integrating wireless LAN and cellular data for the enterprise, IEEE Internet Computing, Vol. 7, No. 2, pp. 25-33, April 2003.
[8] Gazis, V., Alonistioti, N. and Merakos, L., Toward a generic Always Best Connected capability in integrated WLAN/UMTS cellular mobile networks (and beyond), IEEE Wireless Communications, Vol. 12, No. 3, pp. 20-29, June 2005.
[9] Song, W., Jiang, H., Zhuang, W. and Shen, X., Resource management for QoS support in cellular/WLAN interworking, IEEE Network, Vol. 19, No. 5, pp. 12-18, September 2005.
[10] Varma, V. K., Ramesh, S., Wong, K. D. and Friedhoffer, J. A., Mobility management in integrated UMTS/WLAN networks, IEEE ICC 2003, Vol. 2, pp. 1048-1053, May 2003.
[11] Feder, P. M., A seamless mobile VPN data solution for UMTS and WLAN users, Bell Laboratories - Mobility Solutions, Lucent Technologies Inc., 2003.
[12] Ahmavaara, K., Haverinen, H. and Pichna, R., Interworking architecture between 3GPP and WLAN systems, IEEE Communications Magazine, Vol. 41, No. 11, pp. 74-81, November 2003.
[13] Pack, S. and Choi, Y., A study on performance of hierarchical mobile IPv6 in IP-based cellular networks, IEICE Transactions on Communications, Vol. E87-B, No. 3, March 2004.
[14] Montavont, N. and Noel, T., Handover management for mobile nodes in IPv6 networks, IEEE Communications Magazine, Vol. 40, No. 8, pp. 38-43, August 2002.
[15] Bernaschi, M., Cacace, F., Iannello, G., Za, S. and Pescape, A., Seamless internetworking of WLANs and cellular networks: Architecture and performance issues in a Mobile IPv6 scenario, IEEE Wireless Communications, Vol. 12, No. 3, pp. 73-80, June 2005.
[16] Zhang, Q. et al., Efficient mobility management for vertical handoff between WWAN and WLAN, IEEE Communications Magazine, Vol. 41, No. 11, pp. 102-108, November 2003.
[17] McNair, J. and Zhu, F., Vertical handoffs in fourth generation multinetwork environments, IEEE Wireless Communications, Vol. 11, No. 3, pp. 8-15, June 2004.
[18] Lampropoulos, G., Passas, N., Merakos, L. and Kaloxylos, A., Handover management architectures in integrated WLAN/cellular networks, IEEE Communications Surveys & Tutorials, Vol. 7, No. 4, Fourth Quarter 2005, pp. 30-44, 2005.
[19] Chung, S. and Piechota, K., Understanding the MAC impact of 802.11e: Part 1, CommsDesign, October 2003.
[20] Mangold, S. et al., Analysis of IEEE 802.11e for QoS support in wireless LANs, IEEE Wireless Communications, Vol. 10, No. 6, pp. 40-50, December 2003.
[21] Xiao, Y., IEEE 802.11e: QoS provisioning at the MAC layer, IEEE Wireless Communications, Vol. 11, No. 3, pp. 72-79, June 2004.
[22] Chung, S. and Piechota, K., Understanding the MAC impact of 802.11e: Part 2, Communications Design, October 2003.
[23] Xiao, Y., QoS guarantee and provisioning at the contention-based wireless MAC layer in the IEEE 802.11e wireless LANs, IEEE Wireless Communications, Vol. 13, No. 1, pp. 14-21, February 2006.

CAPACITY ESTIMATION FOR SIR-BASED POWER CONTROLLED CDMA SYSTEM WITH MIXED CELL SIZES
Sami A. El-Dolil Dept. of Electronic and Electrical Comm. Eng., Faculty of Electronic Eng, Menoufya Univ. Msel_dolil@yahoo.com

ABSTRACT

In heavily populated areas, cell splitting is used to increase the capacity of the cellular system. Cell splitting produces a cellular system with mixed cell sizes. Many previous studies assumed strength-based power control, which maintains the received power at a desired level regardless of changes in the number of active users and in the amount of total other-cell interference. With signal-to-interference ratio (SIR)-based power control systems, however, which maintain the received SIR at a desired level, the power level is a function of the above two variables. This study calculates the reverse link capacity of an SIR-based power control system with mixed cell sizes.
Keywords: CDMA system capacity, power control, and cell splitting.

1 INTRODUCTION

Capacity estimation in Code Division Multiple Access (CDMA) systems is an important issue which is closely related to power control, cell sizes, and other factors. Power control is needed to minimize each user's interference on the reverse link in varying radio environments and traffic conditions [1]. Previous studies [2]-[4] considered power control systems in which each user's signal arrives at the home base station (BS) with the same signal strength. The BS measures the received power level, compares it with a desired level and then transmits power control bit(s). This system is referred to as a strength-based power control system. The signal-to-interference ratio (SIR) is more important than signal strength in determining channel characteristics (e.g., bit error probability), so SIR-based power control determines the value of the power control bit by comparing the received SIR with the desired SIR threshold. The received signal power level varies according to the number of active home-cell users and the amount of other-cell interference. Thus, the analysis of an SIR-based power control system is significantly different from the analysis of strength-based power control systems. On the other hand, the cell size is determined based on the traffic load and population density of the service area. In heavily populated areas, cell splits are used to increase the capacity of the cellular system. After cell splitting, macro cells are split into small micro cells. The split cells have different cell sizes from those of the other cells surrounding them. The cellular system is then configured with cells of mixed sizes. In a uniform cell size environment, all the cells are in identical conditions and the interference received by individual cells is equal; therefore, every cell has the same reverse link capacity. In a mixed cell size environment, however, cells differ from each other, and each cell will have a different reverse link capacity [3]. In this paper the reverse link capacity of an SIR-based power control system with mixed cell sizes is calculated. As an example, a macro cell is split into three micro cells, and the reverse link capacities for the three micro cells and the neighboring macro cells are calculated. The remainder of this paper is organized as follows. In Section II, the reverse link capacity of an SIR-based power control system with a uniform cell size is calculated. In Section III, the reverse link capacity of an SIR-based power control system with mixed cell sizes is obtained. In Section IV, the conclusion is presented.

2 REVERSE LINK CAPACITY OF SIR-BASED POWER CONTROL SYSTEM WITH UNIFORM CELL SIZE

In a multiple-cell CDMA system, an MS is power-controlled by the BS sending the highest-strength pilot signal to the MS. This BS is called the home BS of the given MS. The path loss L between the MS and the BS is described as

L ∝ r^(-µ) 10^(ζ/10)    (1)

where

r is the distance from the MS to the BS, µ is the path loss exponent, and ζ is the attenuation in dB due to shadowing, which is a Gaussian random variable with zero mean and a standard deviation σ of 8 dB.

2.1 Calculation of Total Other Cell Interference
First, the reverse link interference from each tier to the center cell is calculated separately. Then the total other cell interference, which is the interference produced by all users who are power-controlled by other BSs, can be obtained. If the interfering subscriber in another cell is located at a distance r_m from its BS and r_o from the BS of the desired user, as shown in Fig. 1, that user, when active, produces an interference at the desired user's BS given by [4]

I(r_o, r_m) = S (10^(ζ_o/10) / r_o^µ) (r_m^µ / 10^(ζ_m/10)) = S (r_m/r_o)^µ 10^((ζ_o - ζ_m)/10) ≤ 1    (2)

where S is the received power at the home BS; the first term is due to the attenuation caused by distance and blockage to the given BS, while the second term is the effect of power control compensating for the corresponding attenuation to the BS of the out-of-cell interferer. For all values of the above parameters the expression is less than unity; otherwise the subscriber would switch to the BS that makes it less than unity.

Figure 1: A hexagonal cellular layout, with the center cell 0 surrounded by the first tier (cells 1-6) and second tier (cells 7-18) of interfering cells; r_0 is the distance from an interferer to the center BS.

In a strength-based power control system [3], S is constant regardless of changes in the number of active users and in the total other cell interference. In an SIR-based power control system, however, S is a function of the number of active home users and the total other cell interference. If the maximum received power is assumed to be limited to h_max, then S is a random variable in the range [0, h_max]. Since ζ_o and ζ_m are assumed to be mutually independent, their difference has zero mean and a variance of 2σ². Assuming a uniform density of users and normalizing the hexagonal cell radius to unity, the user density is given by

ρ = 2N / (3√3)    (3)

where N is the number of users per cell. Then the total other cell interference can be expressed as [4], [5]

I_o = ∫∫ S (r_m/r_o)^µ 10^((ξ_o - ξ_m)/10) Q(ξ_o - ξ_m, r_o/r_m) ρ dA    (4)

where m is the cell site index and is given by

r_m^µ 10^(-ξ_m) = min_{k≠0} r_k^µ 10^(-ξ_k)    (5)

and

Q(ξ_o - ξ_m, r_o/r_m) = 1 if (r_m/r_o)^µ 10^((ξ_o - ξ_m)/10) ≤ 1, and 0 otherwise    (6)

For simplicity, the smallest distance r_m is used rather than the smallest attenuation described in (6):

r_m = min_{k≠0} r_k    (7)

A Gaussian model is considered for I_0. Assuming that the received power S, the distances, and the shadowing are mutually independent, the mean of I_0 can be expressed as

E(I_0) = E[ ∫∫ S (r_m/r_o)^µ 10^((ξ_o - ξ_m)/10) Q(ξ_o - ξ_m, r_o/r_m) ρ dA ]    (8)

E(I_0) = E[S] e^({σ ln(10)/10}²) ∫∫ (r_m/r_o)^µ Q(x) ρ dA    (9)

where

x = 10µ log10(r_o/r_m) / √(2σ²) - √(2σ²) ln(10)/10    and    Q(x) = (1/√(2π)) ∫_x^∞ e^(-y²/2) dy

The variance of I_0 can be expressed as

Var(I_0) = ∫∫ (r_m/r_o)^(2µ) { E[S²] e^({σ ln(10)/5}²) Q(M) - E²[S] e^(2{σ ln(10)/10}²) Q²(x) } ρ dA    (10)

where

M = 10µ log10(r_o/r_m) / √(2σ²) - √(2σ²) ln(10)/5
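As a quick numerical illustration of equations (2) and (6) (our own sketch, not part of the paper; the path loss exponent and shadowing deviation match the values quoted later in Section 2.4), the normalized interference contributed by a single out-of-cell user can be estimated by Monte Carlo averaging over the two independent shadowing terms:

```python
import random

# Monte Carlo sketch of the normalized interference of one out-of-cell user,
# equations (2) and (6): the user is power controlled by its own BS at
# distance r_m and interferes with the desired BS at distance r_o.
MU = 4          # path loss exponent (value used in Section 2.4)
SIGMA_DB = 8.0  # shadowing standard deviation in dB

def normalized_interference(r_o: float, r_m: float, trials: int = 50000) -> float:
    """Average of (r_m/r_o)^mu * 10^((zeta_o - zeta_m)/10), counted only when
    it does not exceed 1 (otherwise the user would hand off, eq. (6))."""
    total = 0.0
    for _ in range(trials):
        zeta_o = random.gauss(0.0, SIGMA_DB)
        zeta_m = random.gauss(0.0, SIGMA_DB)
        ratio = (r_m / r_o) ** MU * 10 ** ((zeta_o - zeta_m) / 10)
        if ratio <= 1.0:          # indicator Q(.) of equation (6)
            total += ratio
    return total / trials

# A user twice as far from the desired BS as from its own BS contributes,
# on average, only a small fraction of the home-cell received power.
print(normalized_interference(r_o=2.0, r_m=1.0))
```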
The integrals in (9) and (10) can be obtained numerically. Thus the mean and variance of I_0 can be obtained by calculating E[S] and E[S²].

2.2 Calculation of E[S] and E[S²]
The received power from a user at the home BS is assumed to be a random variable H with a probability density function (pdf) f_H(h) when the MS is active. Eb/Io is given by [1]

Eb/Io = (h/R) / [ (2/3)(HK + I_o)/W + η_o ]    (11)

where R is the data rate, W is the chip rate, η_o is the background noise, H is the received power of an active user, and K is the number of active users among the (N-1) other users. When MSs are power controlled to maintain the minimum power satisfying the required Eb/Io (i.e., γ), the power level H is given by [1]

H = ((3/2) W η_o + I_o) / ((3/2)(G/γ) - K)    (12)

where G = W/R is the processing gain. H in (12) is the required MS power to satisfy Eb/Io = γ. If K exceeds (3/2)(G/γ), the required power becomes negative and the system enters an outage state. With a power limit of h_max, the system is also in an outage state when the required power exceeds h_max. Thus outage occurs when the required power is higher than h_max, or less than 0, in a power-limited system. The following variables are defined for simplicity: η = (3/2)Wη_o, α = (3/2)G/γ, and Y = I_o. The cumulative distribution function (cdf) of H under the condition K = i is expressed as [1]

F_{H|K=i}(h) = 0 for h < 0
F_{H|K=i}(h) = A_o + ∫_0^h (α - i) f_Y(x(α - i) - η) dx for 0 ≤ h ≤ h_max
F_{H|K=i}(h) = 1 for h > h_max    (13)

where A_o and A_max denote the probabilities that H is below zero and exceeds h_max, respectively. These probabilities are given by [6]

A_o = ∫_{-∞}^{0} (α - i) f_Y(x(α - i) - η) dx
A_max = ∫_{h_max}^{∞} (α - i) f_Y(x(α - i) - η) dx

Then the pdf f_H(h) can be obtained as

f_H(h) = Σ_{i=0}^{N-1} F'_{H|K=i}(h) π_i    (14)

where π_i = Pr{K = i} and Σ_{i=0}^{N-1} π_i = 1. Now E[S] and E[S²] can be defined as follows:

E[S] = P ∫_0^{h_max} h f_H(h) dh = P Σ_{i=0}^{N-1} π_i ∫_0^{h_max} h F'_{H|K=i}(h) dh    (15)

E[S²] = P ∫_0^{h_max} h² f_H(h) dh = P Σ_{i=0}^{N-1} π_i ∫_0^{h_max} h² F'_{H|K=i}(h) dh    (16)

where P is the voice activity factor.

2.3 System Capacity
In the SIR-based power control system, Pout can be defined as [1]

Pout = Pr{the required power is higher than h_max | K ≤ α} Pr{K ≤ α} + Pr{K > α}    (17)

Pout = Pr{ (η + Y)/(α - K) > h_max | K ≤ α } Pr{K ≤ α} + Pr{K > α}    (18)

Pout = Σ_{k=0}^{⌊α⌋} π_k Q( (α h_max - k h_max - η - E(I_o)) / √Var(I_o) ) + Σ_{k=⌊α⌋+1}^{N-1} π_k    (19)

where ⌊x⌋ is the greatest integer less than or equal to x.

2.4 Comparison Between Strength-Based Power Control and SIR-Based Power Control
In this section, system capacities based on the two different power control schemes are compared. It is interesting to note that the capacities of the two schemes are identical in a single cell environment, because there is no other cell interference. In the signal strength-based power control system, S is a constant [4], and Pout is given by

Pout = Pr{Eb/I_0 < γ}    (20)

Fig. 2 shows Pout versus the number of users for the two power control schemes, considering the first and second tiers with µ = 4, σ = 8 dB, W = 1.2288 Mcps, R = 9.6 kbps, η_o = 1.3×10^-20 W/Hz, h_max = 2×10^-14 W, γ = 7 dB, and P = 3/8 [1]. The values of E(I_0) and Var(I_0) are numerically obtained as 0.06175×N and 0.004875×N, respectively. The reverse link capacity N is defined as the maximum integer N for which the outage probability Pout is less than or equal to the threshold. For Pout = 0.01, an SIR-based power control system can support approximately 30% more users than a strength-based power control system, according to Fig. 2.

Figure 2: Outage probability of strength-based and SIR-based power control systems.

Fig. 3 shows the effect of the activity factor P: as P decreases, more users can be accommodated by the SIR-based power control system. For example, the capacity of the SIR-based power control system for Pout = 0.01 is 36 users for P = 1 and 66 users for P = 3/8.

Figure 3: Outage probability versus the number of users for various values of P.
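To make the capacity calculation concrete, the sketch below (ours; the binomial form of π_k follows the voice-activity model, but the normalization of E(I_o) and Var(I_o) relative to h_max is an assumption, so the printed value illustrates how (19) is evaluated rather than reproducing Fig. 2) computes the outage probability for one value of N:

```python
from math import comb, erfc, sqrt, floor

def Q(x: float) -> float:
    """Gaussian tail probability."""
    return 0.5 * erfc(x / sqrt(2.0))

def p_out(N: int, P: float, alpha: float, eta: float, h_max: float,
          e_io: float, var_io: float) -> float:
    """Equation (19): sum over the number of active interfering users K."""
    total = 0.0
    for k in range(N):
        pi_k = comb(N - 1, k) * P**k * (1 - P)**(N - 1 - k)
        if k <= floor(alpha):
            arg = (alpha * h_max - k * h_max - eta - e_io) / sqrt(var_io)
            total += pi_k * Q(arg)
        else:
            total += pi_k            # required power is negative: outage
    return total

# Illustrative numbers, normalized so that h_max = 1.
G, gamma = 128, 10 ** (7 / 10)       # processing gain, Eb/Io target of 7 dB
alpha = 1.5 * G / gamma              # (3/2) G / gamma
eta = 1.5 * 1.2288e6 * 1.3e-20 / 2e-14   # (3/2) W eta_o expressed in units of h_max
N, P = 60, 3 / 8
print(p_out(N, P, alpha, eta, h_max=1.0,
            e_io=0.06175 * N, var_io=0.004875 * N))
```

Sweeping N upward until the returned value exceeds the 0.01 threshold gives the reverse link capacity in the sense defined above.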

3 REVERSE LINK CAPACITY OF SIR-BASED POWER CONTROL SYSTEM WITH MIXED CELL SIZES

In a uniform cell environment, each individual cell has identical E(I0) and Var(I0), since every cell is in the same condition. In a mixed cell environment, however, the individual cells do not have the same cell size and outside environment, and each receives a distinct amount of interference from outer cells. Furthermore, due to the different cell sizes, each cell has a different received signal power. Thus each cell will have values of E(I0) and Var(I0) different from the other cells, and each cell will have a different reverse link capacity. In order to calculate the reverse link capacity for each cell, we should know E(I0) and Var(I0) for each cell. Let Si denote the received signal power at the cell site of cell i. We calculate Si for a given cell size, then calculate Io/Si and its mean and variance, and finally obtain the reverse link capacity for the given cell. The cell size in a CDMA cellular system is mainly determined by the path loss and the pilot signal power transmitted at the cell site. When a mobile user transmits a signal at a cell boundary, the received signal power at the cell site depends mainly on the cell size, as shown in Fig. 4 [7]. If a mobile user belonging to cell j is located at the cell boundary, the received signal power S can be expressed as [3]

S = S_t (1/R_j^µ) 10^(-ξ_j/10)    (21)

where S_t is the transmitted signal power of the mobile user, which has a range from zero to S_max, and R_j and ζ_j are the radius of cell j and the corresponding shadowing factor (zero mean and σ_j dB standard deviation), respectively. From (21), we have

10 log S_t = 10µ log R_j + 10 log S + ζ_j    (22)

Since the received power S varies according to the number of active home cell users and the amount of other cell interference in an SIR-based power control system, with mean E[S] and mean square E[S²] as in (15) and (16) respectively, the standard deviation of S can be expressed as

σ_s = √(E[S²] - E²[S])    (23)

It is clear that 10 log S_t is a Gaussian random variable with a mean of (10µ log R_j + 10 log E[S]) and a standard deviation of (10 log σ_s + σ_j) dB. In CDMA cellular systems, the interference received at a cell site is proportional to the number of other users. Therefore, as the number of other users increases, S should be increased, so that 10 log S_t will be increased. An outage occurs when 10 log S_max < 10 log S_t. If the system requires that the outage probability Pout be less than or equal to 0.01, then

Pr(10 log S_max < 10 log S_t) = Q( (10 log S_max - (10µ log R_j + 10 log E[S])) / (10 log σ_s + σ_j) ) ≤ 0.01    (24)

The received signal power at the cell site of cell j, S_j, can be defined as the maximum S satisfying Pout ≤ 0.01. The cell site of cell j will require S_j when the cell is in a fully loaded state. It is important to note that if cell j requires S larger than S_j due to an increase in the number of other users, the call of the mobile user will be force-terminated. S_j (in dB) is given as [3]

10 log S_j = -σ_j Q⁻¹(0.01) + 10 log S_max - 10µ log R_j    (25)

In a similar manner, the received signal power at cell i, S_i, can be expressed as

10 log S_i = -σ_i Q⁻¹(0.01) + 10 log S_max - 10µ log R_i    (26)

The logarithmic ratio of S_j to S_i is

10 log (S_j/S_i) = (σ_i - σ_j) Q⁻¹(0.01) + 10µ log (R_i/R_j)    (27)

S_j/S_i = (R_i/R_j)^µ 10^(((σ_i - σ_j) Q⁻¹(0.01))/10)    (28)

If ζ_j and ζ_i are assumed to have equal probability distributions, then S_j/S_i can be simplified to

S_j/S_i = (R_i/R_j)^µ    (29)

Fig. 4. Cell boundary condition: (a) reverse link, (b) forward link.

When a mobile user is located at the cell boundary between cell i and cell j, the average received pilot signal power from the cell site of cell i is equal to that from the cell site of cell j, that is

(1/R_i^µ) P_i = (1/R_j^µ) P_j    (30)

where P_j and P_i are the pilot signal powers transmitted by the cell sites of cell j and cell i, respectively. From (30), P_i can be expressed as

P_i = (R_i/R_j)^µ P_j    (31)

Suppose a mobile user is located at a distance r_i from the cell site of cell i and r_j from the cell site of cell j. From (21) and (31), the mobile user will select cell j as the serving cell if

(R_i/R_j)^µ (10^(-ξ_i/10)/r_i^µ) P_j < (10^(-ξ_j/10)/r_j^µ) P_j,   i ≠ j    (32)

Since the received power at cell j is S_j, the signal power transmitted by the mobile user can be expressed as

S_t = S_j r_j^µ 10^(ξ_j/10)    (33)

Then the interference produced by the mobile user at cell site i, when active, is

I(r_i, r_j) = S_j (r_j^µ/r_i^µ)(10^(ξ_j/10)/10^(ξ_i/10))    (34)

From (29), I(r_i, r_j) can be expressed as

I(r_i, r_j) = S_i (r_j^µ/r_i^µ)(10^(ξ_j/10)/10^(ξ_i/10))(R_i/R_j)^µ ≤ 1    (35)

If a CDMA system consists of M non-uniform cells and there are N_j users in cell j, then the total other-user interference-to-signal ratio at the cell site of cell i is

I_o/S_i = Σ_{j=1, j≠i}^{M} Σ_{k=1}^{N_j} [ x_{k,j} I_{k,j}(r_i, r_j) / S_j ] (R_i/R_j)^µ    (36)

where x_{k,j} is a random variable representing voice activity for the user (x = 1 with probability α and x = 0 with probability 1 - α), and I_{k,j}(r_i, r_j) denotes the interference produced by the kth user in cell j. E(Io) and Var(Io) for cell i can be evaluated by using (36), (9), and (10). Then the reverse link capacity of cell i can be calculated by using (19). As an example of mixed cell sizes, we assume that the center macro cell BS0 in Fig. 1 is split into three micro cells (BS0(1), BS0(2), and BS0(3)) and that the new cell sites are located at the centers of the split cells, as shown in Fig. 5.

Fig. 5. A cellular system with mixed cell sizes: the center macro cell BS0 is split into micro cells BS0(1), BS0(2) and BS0(3) of radius Rmicro, surrounded by first-tier macro cells BS1-BS6 of radius Rmacro.

The reverse link capacity of the first-tier macro cell is heavily affected and is likely to be reduced by the new micro cells. However, that of the second-tier macro cell is affected very little because of the sufficient distance between the second-tier macro cells and the micro cells. The reverse link capacity of the micro cell (i.e., using Io1) is calculated first, followed by the reverse link capacity of the first-tier macro cell (i.e., using Io2). As shown in Fig. 5, micro cell BS0(1) receives interference not only from the 18 outer macro cells but also from the two neighboring micro cells BS0(2) and BS0(3). If an interfering user is located at distance r_m from its cell site and distance r_o from micro cell BS0(1), the total interference received at BS0(1) from the other cells is expressed as [3]

I_o1 = S_micro [ Σ_{j=7}^{18} Σ_{i=1}^{Nc} (x_{i,BSj} I_{i,BSj}(r_o, r_m)/S_macro)(R_micro/R_macro)^µ
       + Σ_{j=1}^{6} Σ_{i=1}^{Nmacro} (x_{i,BSj} I_{i,BSj}(r_o, r_m)/S_macro)(R_micro/R_macro)^µ
       + Σ_{k=2}^{3} Σ_{i=1}^{Nmicro} x_{i,BSj} I_{i,BS0(k)}(r_o, r_m)/S_micro ]    (37)

where Nc is the reverse link capacity before the cell split, Nmacro and Nmicro are the reverse link capacities of the first-tier macro cells and the micro cells after the cell split, and Rmacro and Rmicro are the radii of the macro cell and the micro cell, respectively. Similarly, if an interfering user is located at distance r_m from its cell site and distance r_o from macro cell BS1, which is one of the first-tier cells, the total interference received at BS1 from the other cells is expressed as

I_o2 = S_macro [ Σ_{j=7}^{18} Σ_{i=1}^{Nc} x_{i,BSj} I_{i,BSj}(r_o, r_m)/S_macro
       + Σ_{j=2}^{6} Σ_{i=1}^{Nmacro} x_{i,BSj} I_{i,BSj}(r_o, r_m)/S_macro
       + Σ_{k=1}^{3} Σ_{i=1}^{Nmicro} (x_{i,BSj} I_{i,BS0(k)}(r_o, r_m)/S_micro)(R_micro/R_macro)^(-µ) ]    (38)

From Fig. 5, Rmicro/Rmacro = √3/2, and from Fig. 2, for Pout = 0.01, Nc = 66. E(Io1) and Var(Io1) for the micro cell and E(Io2) and Var(Io2) for the macro cell can be numerically obtained by using (9) and (10). As shown in Table 1, the values of E(Io1), Var(Io1), E(Io2) and Var(Io2) are functions of Nc, Nmicro, and Nmacro. The reverse link capacities are obtained by substituting these values into (19), under the same conditions used in Section II. The reverse link capacities (RLC) for cells BS1 and BS0(1) are the maximum Nmacro and Nmicro satisfying Pr(BER > 10^-3) < 0.01 at BS1 and BS0(1), respectively. The reverse link capacities are given in Table 1, together with those obtained when Rmicro/Rmacro equals √3/4. From the results, we can see that the cell radius is one of the factors affecting the reverse link capacity of a cell. Note that the reverse link capacity of the micro cell increases as Rmicro/Rmacro decreases. On the other hand, the reverse link capacity of the first-tier macro cell decreases significantly as Rmicro/Rmacro decreases. As Rmicro decreases, Smicro increases, as shown in Fig. 4(a); then the reverse link capacity increases, since Io1/Smicro decreases. The results show that the reverse link capacity of the first-tier macro cells is decreased by the cell split. This is due to the increased interference resulting from the increased number of users in the split cell area. In FDMA cellular systems, channel capacity is simply increased as much as the frequency reuse achieved by the cell split. In CDMA cellular systems, too, the reverse link capacity in a service area can be increased by cell splitting. When Rmicro/Rmacro = √3/2, after the cell split the reverse link capacity in the split cell area is increased from 66 channels to 65×3 = 195 channels. However, the capacity increase is obtained at the expense of a reverse link capacity decrease at the first-tier macro cells surrounding the split cells. That is, the reverse link capacity in the first-tier cells decreases from 66 to 54 channels.
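As a small worked example of equations (29) and (31) (our own calculation; it assumes µ = 4 from Section 2.4 and the radius ratios listed in Table 1), the received-power and pilot-power ratios between the micro and macro cells follow directly from the cell-radius ratio:

```python
# Worked numbers for equations (28)-(29) and (31): with equal shadowing
# statistics the power ratios depend only on the cell-radius ratio.
MU = 4

for ratio_name, r_micro_over_r_macro in [("sqrt(3)/2", 3**0.5 / 2),
                                          ("sqrt(3)/4", 3**0.5 / 4)]:
    # From (29) with i = macro, j = micro: S_micro/S_macro = (Rmacro/Rmicro)^mu
    s_ratio = (1.0 / r_micro_over_r_macro) ** MU
    # From (31): the macro cell transmits a stronger pilot by the same factor
    print(f"Rmicro/Rmacro = {ratio_name}: S_micro/S_macro = {s_ratio:.2f}, "
          f"P_macro/P_micro = {s_ratio:.2f}")
```

The smaller the micro cell, the larger its required received power relative to the macro cell, which is consistent with the observation above that Smicro increases as Rmicro decreases.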

Table 1.a: Mean and variance of Io1 and reverse link capacity (RLC) for the micro cell.

Rmicro/Rmacro | E(Io1)                                        | Var(Io1)                                       | RLC for micro cell
√3/2          | 0.01238×Nc + 0.02868×Nmacro + 0.03053×Nmicro  | 0.000675×Nc + 0.00249×Nmacro + 0.0035×Nmicro   | 65
√3/4          | 0.1132×Nc + 0.03807×Nmacro + 0.04405×Nmicro   | 0.01238×Nc + 0.00360×Nmacro + 0.00540×Nmicro   | 80

Table 1.b: Mean and variance of Io2 and reverse link capacity (RLC) for the macro cell.

Rmicro/Rmacro | E(Io2)                                        | Var(Io2)                                       | RLC for macro cell
√3/2          | 0.00394×Nc + 0.0487×Nmacro + 0.03053×Nmicro   | 0.000102×Nc + 0.00447×Nmacro + 0.00413×Nmicro  | 54
√3/4          | 0.00035×Nc + 0.0124×Nmacro + 0.0289×Nmicro    | 0.000038×Nc + 0.00123×Nmacro + 0.0040×Nmicro   | 40
4 CONCLUSION

From the above analysis it is clear that, in a uniform cell size environment, all cells are in identical conditions and the interference received by individual cells is equal; therefore, every cell has the same reverse link capacity. The results show that, with Pout = 0.01, an SIR-based power control system can support approximately 30% more users than a strength-based power control system in a uniform cell size environment. The reverse link capacity of an SIR-based power control system with mixed cell sizes was also calculated, taking as an example a macro cell split into three micro cells, for which the reverse link capacities of the three micro cells and the neighboring macro cells were obtained. The results show that the cell radius is one of the factors affecting the reverse link capacity of a cell: the reverse link capacity of the micro cell increases as Rmicro/Rmacro decreases, while the reverse link capacity of the first-tier macro cell decreases significantly as Rmicro/Rmacro decreases.

5 REFERENCES

[1] D. K. Kim and D. K. Sung, "Capacity estimation for an SIR-based power-controlled CDMA system supporting on-off traffic," IEEE Trans. Veh. Technol., vol. 49, pp. 1094-1101, July 2000.
[2] W. C. Lee, "Overview of cellular CDMA," IEEE Trans. Veh. Technol., vol. 40, pp. 291-302, May 1991.
[3] H. G. Jeon, S. M. Shin, T. Hwang, and C. E. Kang, "Reverse link capacity analysis of a CDMA cellular system with mixed cell sizes," IEEE Trans. Veh. Technol., vol. 49, pp. 2158-2163, Nov. 2000.
[4] K. S. Gilhousen, I. M. Jacobs, R. Padovani, A. J. Viterbi, L. A. Weaver Jr., and C. E. Wheatley III, "On the capacity of a cellular CDMA system," IEEE Trans. Veh. Technol., vol. 40, pp. 303-312, May 1991.
[5] K. I. Kim, "CDMA cellular engineering issues," IEEE Trans. Veh. Technol., vol. 42, pp. 345-349, Aug. 1993.
[6] S. J. Lee, H. W. Lee, and D. K. Sung, "Capacities of single-code and multi-code DS-CDMA systems accommodating multimedia services," IEEE Trans. Veh. Technol., vol. 48, pp. 376-384, Mar. 1999.
[7] J. Shapira, "Microcell engineering in CDMA cellular networks," IEEE Trans. Veh. Technol., vol. 43, pp. 3817-3825, Nov. 1994.

A CLUSTER BASED APPROACH TOWARD SENSOR LOCALIZATION AND K-COVERAGE PROBLEMS
Zhanyang Zhang Computer Science Department College of Staten Island / City University of New York 2800 Victory Boulevard, Staten Island, NY 10314, USA zhangz@mail.csi.cuny.edu

ABSTRACT
In this paper we present a cluster based approach that uses a sensor's sensing capability and a laser signal to address both the sensor node localization and K-coverage problems. After sensors are randomly deployed, a virtual grid is mapped over the deployment area. Location guided laser beams are projected at the centers of the grid cells to trigger sensors within one hop of communication range to form sensor clusters in the virtual grid. Therefore, all clusters are aware of their locations within the precision of the location guided laser beams. We assume there are redundant sensors in each cluster; thus the cluster members can operate alternately (duty-cycle) in active or sleep modes under the coordination of the cluster head, to conserve energy while providing sufficient and robust coverage in the presence of both node power depletion and unexpected failures. We introduce two distributed algorithms to form sensor clusters and to manage cluster members in duty cycles while maintaining the required K-coverage. Simulation and analysis results show that our algorithms scale well, with overhead cost a linear function of the sensor population in terms of energy consumption.
Keywords: Sensor localization, location guided laser, cluster and K-coverage.

1 INTRODUCTION

Wireless sensor networks (WSN) extend our capability to explore, monitor and control the physical world. A wireless sensor has limited data processing, storage and communication capabilities and, most critically, limited energy resources. In many WSN applications, large quantities of sensors are randomly deployed over a region to perform a variety of monitoring and control tasks. The problem of determining a sensor's physical location is a challenging one, and the sensor's location information is extremely critical for many applications. For example, location information is necessary to assess coverage requirements, generally known as the K-coverage problem. Given the limitations of these sensors, it is not feasible to equip each sensor with a GPS receiver. This has motivated many researchers to seek GPS-less solutions for locating sensors. In particular, the authors in [1, 2, 3] designed distributed algorithms that yield sensor node locations without addressing the K-coverage problem. In addition, their algorithms require the existence of anchor nodes (sensor nodes with known locations) and time synchronization of the whole network. Wang et al. [4] address the connected K-coverage problem with a localized heuristic algorithm, but their heuristic does not provide a guaranteed solution. In [5], the authors investigate linear programming techniques to optimally place a set of sensors on a sensor field (three-dimensional grids) for complete coverage of the field. In many applications, such precision deployment of sensor nodes is not feasible. Our approach to the sensor localization problem does not assume the existence of anchor nodes and time synchronization. We provide an integrated solution to both the sensor localization and K-coverage problems. In addition, due to the short battery life that powers individual sensors, the lifetime of sensor networks is severely restricted. Significant research has been done to improve energy efficiency both at the individual sensor level and for the sensor network as a whole [6, 7]. By observing societies formed by certain natural species, such as an ant colony, we learn that a colony can sustain its functions and life for a long period of time despite the fact that each individual ant has a limited lifespan. This is achieved through member collaboration and reproduction. Inspired by such observations, our research addresses the K-coverage problem while maximizing the lifetime of sensor networks and supporting robust WSN operations through self-organized sensors. Most existing energy efficient techniques at the individual sensor level are


orthogonal to our work. Therefore they can be applied within our model to further enhance WSN energy efficiency. The key components of our model are a cluster formation/localization algorithm and a cluster maintenance algorithm. We developed a signal stimulation model (SSM) that uses location guided laser beams to trigger sensors within one hop of communication range to form geographically bound sensor clusters. Based on the SSM model, we proposed a sensor cluster localization algorithm (SCLA) that can form a cluster and elect a cluster head based on their responses to the stimulation signal projected at the center of a grid cell. Once the cluster is formed, all the cluster members are aware of their locations. They can provide required Kcoverage as long as the number of members is greater than or equal to K. In reality, sensor failures happen randomly due to hardware/software malfunction or power depletion. We proposed a sensor cluster maintenance algorithm (SCMA) that can prolong WSN lifetime by maintaining the smallest subset of cluster members in active mode necessary to perform required tasks while permitting the remaining cluster members to enter sleep mode in order to conserve energy. The sensors in sleep mode turn off their communication and sensing functions while sleeping. Periodically they awaken and may replace one or more active nodes which are deceased (either because they have depleted their power or failed prematurely) or when their energy levels fall below a minimum threshold. Unlike an ant colony which can maintain its population through reproduction, we assume that a WSN does not have sensor replacement capability. To achieve the simultaneous goals of energy conservation and providing K-coverage with fault tolerance operations, it is obvious that SCMA requires a sensor population with sufficient density and redundancy. Our approach is particularly suitable to open-space environment monitoring WSN applications which are subject to long network operation time and harsh operating conditions with a high degree of random node failures. The rest of the paper is organized as follows. In section 2, we describe the SSM mode as the basic operational model for solving both sensor localization and K-coverage problems. The SCLA algorithm and its performance analysis are also presented in section 2. In section 3, we present the SCMA algorithm and discuss the behaviors of different cluster components. State transition diagrams are used to demonstrate the interactions between different cluster components. A set of definitions and theorems are presented in section 4 to prove that the above proposed algorithms satisfy the K-coverage requirements during the life of WSN operations. In section 5, we show the results of our simulation study to validate the algorithms and

performance analysis. Finally, we conclude this paper in section 6.

2 THE CLUSTER MODEL AND CLUSTER FORMATION ALGORITHM

We introduce the signal stimulation model (SSM) as a basic model that defines the parameters and constraints within which a WSN application is deployed and operated. The SSM model is defined by the following set of assumptions.
1. Sensors – Each sensor has sensing, data storage, data processing and wireless communication capability equivalent to a MICA Mote developed at UC Berkeley [8, 9]. Each sensor covers a communication cell and a sensing cell defined by radius Rc and Rs respectively. All of the sensors are non-mobile and they can sense optical signals (delivered by laser beams).
2. Sensor network – A sensor network consists of a set of homogeneous sensors. Sensors can communicate with each other via wireless channels in single or multiple hops, thus forming an ad-hoc network. There are one or more base stations located outside the sensor region but near the border of the sensor network, with wired or long range wireless communication links to the Internet for collecting data or disseminating queries and control commands to the sensor network.
3. Deployed region – Sensors are deployed over an open-space area. A virtual grid marks this area. Each cell in the grid is a D-by-D square.
4. There is a lightweight location guided laser designator system that can project a laser beam to a given location (x, y) with acceptable accuracy, such as the system produced by Northrop Grumman, which has a target range of up to 19 kilometers with an accuracy of 5 meters [10]. Obviously, this assumption requires a line-of-sight path for laser beams to reach the sensors at ground level. However, for many open space environment monitoring applications this is not a major problem.
5. To ensure the coverage and connectivity of a sensor network, the model requires that D, Rc and Rs satisfy the following constraints:
• To ensure that a sensor anywhere in a cell can cover the cell, it must satisfy the condition Rs² ≥ 2D². In our model we assume Rs² = 2D².
• To ensure that a sensor anywhere in a cell can communicate with a sensor anywhere in a neighboring cell, it must satisfy the condition Rc ≥ 2Rs. In our model, we assume Rc = 2Rs.
Figure 1 below shows the parameters that define a virtual grid with 4 neighboring cells.


Figure 1: Parameters of a virtual grid

Every point in the region can be represented by a pair of (x, y) coordinate values. A sensor has three possible states: U (unknown), H (cluster head) and M (cluster member). Initially all sensor states are set to U. During the post-deployment phase, an object flies over the deployed region and projects a laser beam to the center of a grid cell (Xc, Yc). The sensors nearby will sense the signal. The sensor readings are stronger the closer they are to the projected laser beam. The sensor with the strongest reading is identified as the cluster head (state = H). All sensors that have a reading greater than λ (the λ-cut) and are one hop away from the cluster head become members of the cluster (state = M). Ideally, λ should maximize the possibility of including a sensor in the cluster if it is within the cell, and minimize the possibility of including a sensor in the cluster if it is outside the cell. An optimal value of λ can be obtained through experimentation and simulation. Since an accurate light energy propagation model is extremely difficult to obtain, and a light wave is a form of electromagnetic wave, we believe it is a reasonable assumption for our study and simulation that the light signal decay model is similar to the attenuation of radio waves between an antenna and wireless nodes close to the ground. Radio engineers typically use a model that attenuates the power of a signal as 1/r² at short distances (where r is the distance between the nodes), and as 1/r⁴ at longer distances [11]. In our model the size of a virtual grid cell is small and thus we assume the light signal attenuation follows the short distance model. Figure 2 shows the cluster formed in grid cell 5. The black dot indicates the sensor node that is the cluster head of the cluster.
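As a rough illustration of the λ-cut rule under the short-distance 1/r² decay assumption, the following Python sketch (our illustration, not the authors' simulation code; the sensor positions, source power P0 and threshold value are invented) computes readings for a few sensors, keeps those above λ as cluster members and elects the strongest reader as cluster head:

import math

# Hypothetical sensor positions in one 10 m x 10 m cell; the beam hits the cell center.
sensors = {"s1": (2.0, 3.0), "s2": (5.5, 4.8), "s3": (9.0, 9.5)}
beam = (5.0, 5.0)      # (Xc, Yc), center of the stimulated grid cell
P0 = 1.0               # assumed source signal power (arbitrary units)
lam = 0.02             # assumed lambda-cut threshold

def reading(pos):
    """Short-distance attenuation model: received power decays as 1/r^2."""
    r = math.dist(pos, beam)
    return P0 / (r * r) if r > 0 else float("inf")

readings = {sid: reading(p) for sid, p in sensors.items()}
members = {sid for sid, value in readings.items() if value > lam}   # lambda-cut
head = max(members, key=lambda sid: readings[sid])                  # strongest reading
print(head, members)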

In this paper, we assume that a laser beam is projected to one cell at a time, with a cluster forming time interval, T, for each cell. T should be just long enough to allow the sensors in a cell to form a cluster, but not so long as to cause unnecessary delay for the operations between cells. In general, T is a function of the sensor density n (the number of sensor nodes in a cell), the radio propagation delay τ, and the IEEE 802.11 MAC layer back-off delay β of its CSMA/CA protocol, with p as the probability of packet collision:

T = n²βp + nτ + c    (1)

where c is a constant for the initial delay from sensing the light signal to transmitting the data. With this assumption, a sensor node can only belong to one cluster, since once it joins a cluster it will not respond to the laser signals projected to other cells. This works even for the sensors located on the border of grid cells. Based on the model described above, we present the SCLA algorithm with the following steps (a sketch of the election rules follows the list):
1. Let t0 be the time when the laser beam is projected to a cell and let T be the time interval for cluster formation. Let t = t0 when starting the algorithm.
2. While t > t0 and t < (t0 + T), repeat steps 3 and 4.
3. For each sensor with unknown status that has detected the signal, if the sensor reading is greater than λ, it broadcasts a message with the sensor id and sensor reading (SID, Value) to its neighbors within one hop of communication. Otherwise it keeps silent.
4. When receiving a message, a sensor with unknown status acts according to the following rules:
• Rule 1 – If the reading value of the received message is greater than its own reading and its own reading is greater than λ, then it sets its state = M (a cluster member) and resets its local memory.
• Rule 2 – If the reading value of the received message is less than its own reading and its own reading is greater than λ, then it saves the message (SID, Value) in its local memory.
5. For a sensor that still has unknown status, if its own reading is greater than λ it sets its state = H (a cluster head).
6. The cluster head sends the cluster membership information, the (SID, Value) pairs saved at step 4, to the closest base station.
7. Project the laser beam to the next cell and repeat steps 1 to 6 until all the cells are visited.
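The election logic of steps 3–5 can be condensed into the short Python sketch below. It is a single-process illustration of Rule 1, Rule 2 and the head decision rather than the distributed one-hop broadcast itself, and the helper for formula (1) uses assumed values for β, p, τ and c:

def scla_elect(readings, lam):
    """Apply the lambda-cut (step 3) and Rules 1-2 / step 5 to a dict of readings."""
    candidates = {sid: v for sid, v in readings.items() if v > lam}
    state, saved = {}, {}
    for sid, value in candidates.items():
        heard_stronger = any(v > value for o, v in candidates.items() if o != sid)
        if heard_stronger:
            state[sid] = "M"                       # Rule 1: become a member
        else:
            state[sid] = "H"                       # step 5: no stronger reading heard
            saved[sid] = {o: v for o, v in candidates.items() if o != sid}  # Rule 2 memory
    return state, saved

def cluster_forming_interval(n, beta, p, tau, c):
    """Formula (1): T = n^2*beta*p + n*tau + c (all parameter values below are illustrative)."""
    return n * n * beta * p + n * tau + c

state, memory = scla_elect({"s1": 0.077, "s2": 3.4, "s3": 0.028}, lam=0.02)
T = cluster_forming_interval(n=40, beta=0.075, p=0.1, tau=0.010, c=0.05)   # seconds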

Figure 2: Sensor cluster localization and formation


Our analysis shows the SCLA algorithm performs and scales well when sensor network size increases in terms of both the number of cells in a grid and the total number of sensors in a cell. Let L be the largest number of communication hops from a cluster head to the closest base station. Let M be the total number of cells in the grid. Let N be the total number of sensors deployed. Let n(i) be the number of sensors in cell(i). The cost of SCLA algorithm in terms of the number of messages transmitted is given as:
Cost(L, M, N) ≤ Σ_{i=1..M} n(i) + M*L        (2)

where Σ_{i=1..M} n(i) = N.

If we assume sensors are uniformly distributed, then we have:

Cost(L, M, N) ≤ M*(N/M) + M*L = N + M*L        (3)

Formula (3) is equivalent to the notation O(N) when M and L are significantly smaller than N, which is true in most high-density sensor networks. These results show that it is feasible to deploy a large number of redundant sensor nodes in order to compensate for high rates of sensor failure with limited overhead cost.
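A quick numeric check of the bound in formulas (2) and (3), with illustrative values for N, M and L rather than figures taken from the simulations:

def scla_cost_bound(N, M, L):
    """Formula (3): at most one broadcast per sensor plus M membership reports
    forwarded over at most L hops each."""
    return N + M * L

# Example: 800 sensors over a 4 x 4 grid (M = 16) with base stations at most 5 hops away.
print(scla_cost_bound(N=800, M=16, L=5))   # 880 messages, i.e. essentially O(N)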

3 SENSOR CLUSTER MAINTENANCE ALGORITHM

The SCMA algorithm consists of three components, namely the cluster member component, the cluster head component, and the base station component. These components work together to achieve the following functions:
1. Coordinating a subset of cluster members as active nodes that perform the required tasks, while tagging the remaining cluster members as sleep nodes to conserve energy.
2. Replacing the nodes that failed unexpectedly or whose energy level falls below a certain threshold, to guarantee the required coverage.
3. Warning the network operators if the required coverage is going to be compromised.
After the cluster is formed according to the SCLA algorithm, all cluster members are in active mode (m = A) by default. A member listens for the initialization message, (I, sensor_list), from the cluster head. A member turns into sleep mode (m = S) unless its SID is on the sensor_list in the initialization message. We use finite state automata to precisely describe the behavior of each component in a simple canonical form. In particular, we employ a special type of finite state automaton, called a Mealy machine, which is formally defined by Hopcroft and Ullman [12] as follows.

Let a finite state automaton be F = (Q, Σ, ∆, δ, γ, q0), where Q is a set of states, q0 ∈ Q is the initial state, Σ is a set of inputs, ∆ is a set of outputs, δ is a state transition function δ: Q x Σ → Q, and γ is an output function γ: Q x Σ → ∆.

To help understand the state transition diagrams, we list all the message definitions in the table below.

Table 1: Message definitions for the SCMA algorithm

Message           From     To            Description
(A, SID)          Head     Member        Activate a member (SID)
(Ack, SID)        Member   Head          Acknowledgement: a member (SID) is activated
(Ack, SID)        Member   Base          Acknowledgement: a member (SID) is a new head
(E, HID)          Head     Base          Emergency message to the base station
(H, HID)          Head     All members   Broadcast "hello" message
(H, SID)          Member   Head          Reply to "hello" message by a member (SID)
(I, Sensor_list)  Head     All members   Broadcast initial active sensor list
(N, SID)          Base     Member        Appoint a member (SID) to be the new head
(R, SID)          Member   Base          A member requests the cluster status
(Status, HID)     Head     Base          Send cluster status to the base station
(Status, SID)     Base     Member        Send cluster status to a member (SID)
(W, HID)          Head     All members   Broadcast "wakeup" message
(W, SID)          Member   Head          Reply to "wakeup" message by a member (SID)
Tx / e            –        –             Time trigger: after staying in state x for time interval Tx, return to the previous state without output; x ∈ {c, r, s, w, p, u, n, g}
Pa / e            –        –             Event trigger: power level falls below a

3.1 The Cluster Member Component

We use two state transition diagrams: one represents cluster members in active mode and the other represents cluster members in sleep mode. Figure 3 defines the behavior of an active member node within the SCMA algorithm and how it can change to sleep mode or become the cluster head when a cluster head has failed. For example, when an active member receives a "hello" message from the cluster head, (H, HID), it replies with the message (H, SID) and returns to the same state. An arrow indicates such a transition with a text label on top in


the form of (input message) / (output message). The same notation applies to all the state transition diagrams in this section.
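The member component can be written in exactly this Mealy-machine form, as a (state, input) → (next state, output) table. The Python sketch below encodes only the "hello"/reply transition quoted above; the remaining transitions of Figure 3 are not reproduced here, and the SID/HID values are placeholders:

SID, HID = "s7", "h1"   # illustrative member and head identifiers

# (state, input message) -> (next state, output message); None means no output.
TRANSITIONS = {
    ("active", ("H", HID)): ("active", ("H", SID)),   # reply to "hello" and stay active
}

def step(state, msg):
    # Unlisted inputs leave the state unchanged and produce no output.
    return TRANSITIONS.get((state, msg), (state, None))

print(step("active", ("H", HID)))   # ('active', ('H', 's7'))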

Figure 3: State transition diagram for active members

Figure 4 describes the behavior of a sleep member node and how it can change to active mode.

Figure 4: State transition diagram for sleep members

3.2 The Cluster Head Component
The cluster head component coordinates members between active and sleep modes. It updates the cluster status and synchronizes it with the base station. It uses an active node counter (Ac) and a sleep node counter (As) to track active members and sleep members. Figure 5 below describes how a cluster head works. The cluster head issues an emergency message, (E, HID), to the base station if the required coverage is going to be compromised within its cluster.

Figure 5: State transition diagram for cluster heads

3.3 The Base Station Component
In the SCMA algorithm, the function of the base station component is to oversee the status of the clusters and the condition of the network. Figure 6 shows how it works.

Figure 6: State transition diagram for base stations

4 K-COVERAGE THEOREMS

We introduce the following definitions and theorems to formally prove that the proposed SCLA and SCMA algorithms together can solve the K-coverage problem.

Definition 1: K-coverage problem: a WSN application is required to guarantee that, for any given point in the deployed region, there are at least K sensors whose sensing ranges cover the point.

Definition 2: The membership degree of a cluster is the number of sensors in the cluster, including the head.

Theorem 1: For a cluster(i) formed at a cell(i) according to the SCLA algorithm, if the membership degree of cluster(i) is no less than K during the lifetime of operation, then the area within cell(i) is K-covered.
Proof: Based on assumption 5.1 in the model definition section, a sensor located anywhere within a cell can cover any point in the cell, including its border. If cluster(i) has no fewer than K member sensors during the lifetime of the operation, then, by Definitions 1 and 2, any point within cell(i) is covered by no fewer than K sensors during the lifetime of the operation. Therefore cell(i) is K-covered.

Theorem 2: Given a deployed region R which is enclosed in a virtual grid G, if there is one cluster formed in each cell with membership degree no less than K during the lifetime of the operation, then the deployed region R is K-covered.
Proof: By Theorem 1, every cell in G is K-covered, and the virtual grid G encloses the entire region R; therefore the deployed region R is K-covered.

Based on Theorem 2, it becomes obvious that the SCMA algorithm can satisfy a WSN application's K-coverage requirement as long as it maintains at least K active member sensors in the cluster at each cell during the life of WSN operations.
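Theorem 2 reduces K-coverage of the whole region to a per-cell check on cluster membership degrees, which can be expressed directly as a small helper; the grid layout and degrees below are illustrative only:

def region_is_k_covered(cluster_degrees, k):
    """Theorem 2 as a check: the region is K-covered as long as every cell's
    cluster keeps a membership degree of at least K (Definition 2)."""
    return all(degree >= k for degree in cluster_degrees.values())

degrees = {(0, 0): 6, (0, 1): 4, (1, 0): 5, (1, 1): 3}   # degrees for a 2 x 2 grid
print(region_is_k_covered(degrees, k=3))   # True
print(region_is_k_covered(degrees, k=4))   # False: cell (1, 1) falls below K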


5 SIMULATION AND DATA ANALYSIS

In order to validate the model and algorithms presented in this paper and to gain insight into how the algorithms work, we conducted simulation studies using the NS2 simulator with the Monarch Extensions to ns [13, 14] for the SCLA algorithm. We extended our simulation for the SCMA algorithm with C++ modules to study the energy conservation and the impact on WSN lifetimes. Our simulations are implemented with two scenarios. The first scenario involves simulating a single cell grid with different sensor densities (number of sensors in the grid). The focus of this simulation is to study the performance and scalability of our model against sensor density. In the second scenario, we take the same measurements from a multi-cell grid simulation, with consideration of both the sensor density and the size of the deployment area in terms of the number of cells. The purpose of this simulation study is to understand the performance and scalability of our model in a multi-cell grid. We set the cell dimension to 10 meters for all the simulations presented in this paper. Our simulation tests indicate that the outcomes are not as sensitive to the cell dimension as they are to sensor density. We let the number of sensor nodes vary from 10 to 80 in increments of 10. In our simulation model, we set the propagation delay between two nodes to 10 ms (τ = 10 ms). We use multicast in the UDP protocol to simulate sensor node broadcast within one hop distance. We set p as the probability for a node to receive the broadcast message successfully (p in the range 0 to 1). The message packet size is set to 128 bytes and the bandwidth between two nodes is set to 2 Mbps. To simulate the IEEE 802.11 MAC layer CSMA/CA protocol, we introduce a back-off time delay, a random number between 50 and 100 ms, which is assigned to a node when it detects that a channel is busy. The node backs off for this delay interval before it tries to broadcast again. The simulation results capture two key measurements: the number of messages being transmitted and the time interval for cluster formation in the grid. All of the simulation results presented below are the average of five simulation runs. Figure 7 shows the number of messages being transmitted in a single cell grid. It compares the analytical result with the simulation results. It indicates that the cost of message transmissions is close to a linear function of n, the number of sensor nodes in the cell.
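A minimal sketch of how these contention parameters can be combined into a per-broadcast delay; the retry loop and the busy-channel probability are our simplification of the simulator's CSMA/CA handling, not its actual code:

import random

PKT_BITS = 128 * 8             # 128-byte message packet
BANDWIDTH = 2_000_000          # 2 Mbps link, in bits per second
PROP_DELAY = 0.010             # 10 ms propagation delay

def broadcast_delay(p_busy, rng=random.Random(1)):
    """Back off 50-100 ms while the channel is sensed busy, then transmit."""
    delay, tx_time = 0.0, PKT_BITS / BANDWIDTH
    while rng.random() < p_busy:
        delay += rng.uniform(0.050, 0.100)
    return delay + tx_time + PROP_DELAY

print(round(broadcast_delay(p_busy=0.3), 4))   # seconds for one broadcast attempt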

Figure 7: Messages transmitted in single cells

To better understand cluster membership distributions and to study the impact of λ values on member selection, we used a single grid cell of 10 meters by 10 meters with λ values in the range [0.02, 0.04]. The simulation result in Figure 8 shows the percentage of sensors that are dropped from the cluster as the value of λ changes. It shows that a higher λ value leads to more sensor nodes being excluded from the cluster.

Figure 8: Membership distribution over λ values

The simulation results below are for a multi-cell grid scenario with the same key measurements as presented for a single cell grid. Figure 9 shows that, with a fixed number of sensor nodes, the number of messages being transmitted actually drops, as expected from the cost function defined in Formula (3), because there are fewer collisions as sensor density decreases. This simulation result indicates that the number of messages being sent is more sensitive to the density in each cell than to the number of cells in a grid.

Figure 9: Performance and scalability in multi-cells


Figure 10 presents an interesting measurement, the percentage of sensors wrongly claimed by clusters, which is closely correlated with the λ value. The ratio of disputed sensors drops or stays at the same level after the number of cells reaches 16, due to the decrease in sensor density.

Figure 10: Number of disputed sensors with λ values

Figure 11 shows the percentage of sensors that are unclaimed in a multi-cell grid with 80 sensors in total. This is more likely to happen to sensors at the cell borders and is sensitive to the λ value.

Figure 11: Percentage of unclaimed sensors

Our simulation study of the SCMA algorithm is designed to understand the relationships between the network lifetime, sensor node density and the overhead cost under K-coverage constraints. The WSN lifetime is defined as the time interval over which a WSN can sustain its operations and services with respect to the K-coverage requirement. In other words, the service quality, such as the sensing and communication coverage requirements, cannot be compromised during the lifetime of WSN operations. Given the parameters defined in the SSM model, the communication coverage of a WSN can be reduced to the sensing coverage (assumption 5). The sensor network lifetime is defined by the shortest cluster lifetime: if a cluster is unable to provide the required coverage in one cell, then the whole network is considered compromised. In the notation SCMA(K), the degree of sensing coverage (the K-coverage requirement) is defined as an input parameter. It is obvious that different WSN applications may require different K-coverage. In our initial simulation, we baseline our study at unit degree coverage with K = 1. We assume sensors are uniformly distributed over grid cells during deployment for simulation study purposes. We use the sensor node power consumption characteristics published by Crossbow [15] for our energy consumption computations. Figure 12 shows the cluster lifetime vs. the number of nodes in a cluster, which grows linearly with the sensor population. However, we expect this linear growth at lower slopes as the K-coverage requirement increases (more active nodes are required). It also incurs overhead costs for the cluster head to probe active nodes with "hello" messages. Further simulation can be done to study this impact and how to control the head node's probing cycle to balance the trade-off between required coverage and overhead costs. Given the cell size in our simulation (10 m by 10 m), 100 nodes per cell is far beyond the population density of most WSN applications.

Figure 12: Network lifetime over number of nodes in a cluster

It is interesting to observe the relationship between the average node lifespan and node density. If we let an active node cycle into sleep mode when its power level falls below a certain threshold, it might increase the average member node lifetime as node density increases, since the workload is more evenly distributed among a larger node population. However, the simulation result in Figure 13 shows that the average node lifetime grows slightly below linearly in terms of node density, particularly as the node density goes beyond 50 nodes. This result is not in total agreement with our theoretical analysis of SCMA. We attribute this discrepancy to the additional overhead cost of switching an active node to a sleep node on a voluntary basis when the active node's power level is below a certain threshold. We are aware that the average lifetime of a cluster head is expected to be shorter than that of member nodes, since the head node assumes more duties than members. We could introduce the same logic into the SCMA algorithm by permitting a head node to sleep when its power level falls below a certain threshold, but we might expect a greater overhead cost in doing so. Currently we are conducting more extensive


simulation studies to understand this issue better.

Figure 13: Average node lifetime vs. node density

Figure 14 shows that the percentage of active duty cycle (the percentage of active time over total node lifetime) decreases as the node density increases with K = 1 coverage. We expect this percentage to follow the same trend, but at a lower decline rate, as the K value increases, since more nodes must stay active.
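The trend in Figure 14 is consistent with a simple back-of-the-envelope estimate: under perfect rotation, a cluster that must keep K nodes active out of n gives each node a duty cycle of roughly K/n. The sketch below is this simplification, not the simulation itself:

def ideal_duty_cycle(k, n):
    """Fraction of its lifetime each node spends active under perfect rotation."""
    return min(1.0, k / n)

for n in (10, 20, 40, 80):
    print(n, round(ideal_duty_cycle(k=1, n=n), 3))
# The decreasing trend mirrors Figure 14; a larger K shifts the whole curve upward.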

Figure 14: Percentage of active duty time vs. number of nodes in a cluster

Based on the simulation results shown in Figure 13, we expect the cluster head's probing cycle for "hello" messages (Th, the time interval before sending the next "hello" message) to impact the overhead cost of the SCMA algorithm. As the interval Th increases, the overhead cost of SCMA decreases. We conducted simulation tests under three Th values. The simulation results shown in Figure 15 validate our expectations. The figure shows the overhead cost as the percentage of SCMA messages transmitted over the total potential bandwidth capacity of a sensor cluster. For the three different Th values, we can see that the impact of Th remains significant only before the node population reaches a "critical mass" of 5 nodes per cell for K = 1 coverage. As the node population increases beyond this critical mass, the total message capacity of the cluster grows much faster than the overhead cost. This result tells us that SCMA scales well as the node population increases, measured by the percentage of overhead cost. It also suggests that it is most cost-effective to increase the Th value for clusters with a smaller population (fewer than 5 sensor nodes per cell).

Figure 15: Percentage of SCMA messages vs. node density per cell

6 CONCLUSION

In this paper we propose a unique solution to address both sensor localization and K-coverage problems that can conserve energy while supporting robust WSN operations. Simulation results show that both of our distributed algorithms, SCLA and SCMA, perform and scale well with overheads in linear proportions to the deployed sensor population in a variety of deployment densities. SCMA can guarantee the K-coverage requirements of WSN applications over their network lifetimes and it can warn network operators should coverage requirements be compromised. The work reported in this paper leads to several interesting topics for future research. We plan to investigate the possibility of using this model for differential K-coverage problems [16] where different cells may require different K-coverage within the same WSN application. We are conducting more extensive simulation studies on two issues. One investigates the overhead impact and the worthiness of having a head node fall back to sleep if its energy level falls below a certain threshold. The other studies the impact of having K-coverage values as a function of time and location. We expect these studies will help us to refine our model and to achieve even greater energy efficiency. We are aware that we did not take into consideration the energy consumed by sensors for providing normal operational tasks. Our analysis and simulation results only explore the overhead portion of energy consumed by SCLA and SCMA without regard to the application. The energy credited to WSN normal operational tasks are highly application dependent, therefore it is out of the scope of this paper.


ACKNOWLEDGEMENT

This research is continuing through participation in the International Technology Alliance (ITA) sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defense under Agreement Number W911NF-06-3-0001. This work was also supported (in part) by a grant from The City University of New York PSC-CUNY Research Award Program (68358-0037) and a grant from the CUNY Research Foundation (80210-14-04).

7 REFERENCES

[1] L. Doherty et al., “Convex Position Estimation in Wireless Sensor Networks,” IEEE INFOCOM, Alaska, April 2001. [2] N. Bulusu et al., “GPS-less Low-Cost Outdoor localization for Very Small Devices,” IEEE Personal Communications, October 2000. [3] A. Savvides et al., “Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors.” ACM SIGMOBILE, Rome, Italy. July 2001. [4] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, and C. Gill, “Integrated coverage and connectivity configuration in wireless sensor networks,” in Proceedings of the ACM SenSys, 2003. [5] K. Charkrabarty, S. Iyengar, H. Qi, and E. Cho, “Grid coverage for surveillance and target location in distributed sensor networks,” IEEE Transaction on Computers, 2002. [6] I.F. Akyildiz et al. “A Survey on Sensor Networks.” IEEE Communications Magazine, August 2002, pp. 102-114. [7] C. Jone, et al. “A Survey of Energy Efficient Network Protocols for Wireless networks”, Journal of Wireless Networks, Kluwer Academic Publishers, 2001, Vol. 7, pp. 343-358. [8] “MICA Wireless Measurement System”, http://www.xbow.com/Products/ [9] “University of Berkeley Mote”, http://webs.cs.berkeley.edu/ [10] Northrop Grumman, “Lightweight Laser Designator Rangefinder Data Sheet”, http://www.dsd.es.northropgrumman.com/DSDBrochures/laser/LLDR.pdf [11] J. Broch, et al. “A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols.” The 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking, October, 1998, Dallas, Texas. [12] Hopcroft J. and Ullman J., “Introduction to Automata Theory, Language, and Computation.” Addison Wesley publishing company, 1979. [13] “The Network Simulator ns-2: Documentation”, http://www.isi.edu/nsnam/ns/ns-documentation.html. [14] “Monarch Extensions to ns”, http://www.monarch.cs.cmu.edu/ [15] “MICA2 AA Battery Pack Service Life Test” http://www.xbow.com/Support/Support_pdf_files/M ICA2_BatteryLifeTest.pdf [16] X. Du and F. Lin, “Maintaining Differentiated Coverage in Heterogeneous Sensor Networks.” EURASIP Journal on Wireless Communications and Networking, Vol. 4, 2005, pp 565-572.


Power Aware Virtual Node Routing Protocol for Ad hoc Networks

A. Kush¹, R. Kumar², P. Gupta³
¹ Department of Computer Science, Kurukshetra University, Kurukshetra, INDIA, akush20@rediffmail.com
² Department of Computer Science, Kurukshetra University, Kurukshetra, INDIA, rkckuk@rediffmail.com
³ Department of Computer Science & Engineering, Indian Institute of Technology Kanpur, INDIA, pg@cse.iit.ac.in

ABSTRACT
A recent trend in Ad Hoc Network routing is the reactive on-demand philosophy where routes are established on demand. Most of the protocols in this category, however, use single route and do not utilize multiple alternate paths. This paper proposes a scheme to improve existing on-demand routing protocols by introducing the power aware virtual node scheme in network topology. The scheme establishes the multi paths without transmitting any extra control message. It offers quick adaptation to distributed processing, dynamic linking, low processing, memory overhead and loop freedom at all times. It also uses the concept of power aware node during route selection and concept of Virtual Nodes which insures fast selection of routes with minimal efforts and faster recovery. The scheme is incorporated with the Ad-hoc On-Demand Distance Vector protocol and its performance has been studied on simulated environment using NS-2. It is found that the scheme performs very well compared to existing schemes.

Keywords : Mobile ad hoc networks, routing, AODV, DSR 1. INTRODUCTION A Mobile Ad Hoc Network, properly known as MANET [20] is a collection of mobile devices equipped with interfaces and networking capability. Hosts [19] can be mobile, standalone or networked. Such devices can communicate with another node within their radio range or one that is outside their range by multi hop techniques. An Ad Hoc Network is adaptive in nature and is self organizing. It is an autonomous system of mobile hosts which are free to move around randomly and organize themselves arbitrarily. In this environment network topology may change rapidly and unpredictably. The main characteristic of MANET strictly depends upon both wireless link nature and node mobility features. Basically this includes dynamic topology, bandwidth, energy constraints, security limitations and lack of infrastructure. MANET is viewed as suitable systems which can support some specific applications as virtual classrooms, military communications, emergency search and rescue operations, data acquisition in hostile environments, communications set up in Exhibitions, conferences and meetings, in battle field among soldiers to coordinate defense or attack, at airport terminals for workers to share files etc. In an Ad Hoc Network, neither the network topology nor the membership is fixed; thus the traditional wired network routing protocols cannot be deployed for this paradigm. Taking into consideration both changing topology as well as changing membership, in addition to route establishment or discovery, ad hoc routing protocols provide ‘route maintenance’, for the broken routes in case of member node in the route moving out of the range or leaving the network. This makes route maintenance an essential paradigm for ad hoc networks protocols. Several routing protocols for ad hoc networks have been proposed as Dynamic Source Routing (DSR) [7], Dynamic Distributed Routing (DDR) [10], Temporarily Ordered Routing Algorithm (TORA) [11], Ad Hoc On Demand Distance Vector Routing (AODV) [13] and Relative Distance Micro Discovery Ad Hoc Routing Protocol (RDMAR) [1]. Major emphasis has been on stable and shortest routes in all these protocols while ignoring major issue of delay in response whenever break occurs. Some other areas of consideration are: 1. Most of the simulation studies use fixed environment, instead of random scenes. 2. Reconstruction phase requires better approach in all protocols for fast selection of new routes. 3. Real life scenarios need to be simulated instead of predefined scenes. 4. Stable routes for better packet delivery In the reactive protocol AODV [13], a node discovers or maintains route to a destination if and only if it is the initiator of the route to that destination or is an intermediate node on an active route to that destination. Otherwise, it does not maintain routing information to that destination. AODV maintains loop-free routes, even when the local connectivity for a node on the route changes. This is achieved by maintaining a counter for each node, called a sequence number. This sequence number of nodes increment every time as the local connectivity of the node changes. In AODV, the route discovery is initiated by the source by generating and broadcasting a route request packet


RREQ which contains sequence numbers for source as well as destination nodes, called source-sequence-num and destination-sequence-num, respectively. When a node receives a RREQ packet, if the node is itself the destination or it has a valid route to that destination, it determines the freshness of its route table entry (provided such an entry exists) for that destination by comparing the destination-sequence-num in the RREQ with that of its route table entry. The node then either responds with a route reply RREP (if it itself is the destination or has a fresh route to that destination) or rebroadcasts the RREQ to its neighbors. The node makes an entry for this route request in the route table and stores the address of the node from which it has received this request as the next hop in the route to the source of this request packet. Similarly when a node receives a response RREP for the request it stores the address of the node from which it received the response RREP as the next hop in the route to that destination. As the RREP travels back to the source, the intermediate nodes forwarding the RREP, update their routing tables with a route to the destination. In this paper a new scheme power aware virtual node ad hoc routing protocol has been suggested which would allow mobile nodes to maintain routes to destinations with more stable route selection. This scheme responds to link breakages and changes in network topology in a timely manner and also takes care of nodes that do not have better power status. It also uses concept of virtual nodes to participate in route selection, where virtual nodes are neighboring nodes at one hop distance form participating nodes and have better power status. The distinguishing feature of power aware Ad hoc routing protocol is its use of virtual nodes and power status for each route entry. Given the choice between two routes to a destination, a requesting node is required to select one with better power status and more active virtual nodes (VNs). This makes route maintenance and recovery phase more efficient and fast. Section 2 discusses a look at the related work while section 3 analyzes new proposed scheme. Section 4 describes the simulation environment and results; Conclusion is given in the last section. 2. RELATED WORK

A routing protocol is needed whenever a packet needs to be handed over via several nodes to arrive at its destination. A routing protocol finds a route for packet delivery and delivers the packet to the correct destination. Routing protocols have been an active area of research for many years; many protocols have been suggested keeping applications and the type of network in view. Routing protocols can be broadly classified into two types:
i) Table Driven Protocols or Proactive Protocols
ii) On-Demand Protocols or Reactive Protocols

2.1 Table Driven or Proactive Protocols
In Table Driven routing protocols each node maintains one or more tables containing routing information to every other node in the network. All nodes keep updating these tables to maintain the latest view of the network. Some of the existing table driven or proactive protocols are: DSDV [12], DBF [2], GSR [4], WRP [8], ZRP [6] and STAR [5].

2.2 On Demand or Reactive Protocols
In On Demand or reactive routing protocols, routes are created as and when required. When a transmission occurs from source to destination, it invokes the route discovery procedure. The route remains valid till the destination is reached or until the route is no longer needed. Some of the existing on demand routing protocols are: DSR [7], DDR [10], TORA [11], AODV [13] and RDMAR [1]. The study has concentrated on reactive protocols because they work well in dynamic topologies. Surveys of routing protocols for ad hoc networks have been discussed in [3, 15, 16]. A brief review of DSR and AODV is presented here, as the new scheme has been compared with these protocols.

2.2.1 Dynamic Source Routing (DSR)
DSR uses dynamic source routing [7] and it adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Source routing is a routing technique in which the sender of a packet determines the complete sequence of nodes through which to forward the packet; the sender explicitly lists this route in the packet's header, identifying each forwarding hop by the address of the next node to which to transmit the packet on its way to the destination host. The protocol is designed for use in the wireless environment of an ad hoc network. There are no periodic router advertisements in the protocol. Instead, when a host needs a route to another host, it dynamically determines one based on cached information and on the results of a route discovery protocol. It is on-demand routing based on a flat architecture. DSR is based on two mechanisms, route discovery and route maintenance. To perform route discovery a ROUTE_REQUEST is sent and answered by a ROUTE_REPLY from either the destination or from another node that knows a route to the destination. A route cache is maintained to reduce the cost of route discovery. Route maintenance is used when the sender detects a change in topology or an error occurs. In case of errors the sender can use another


route or invoke Route Discovery again. Performance of this algorithm as follows: (a) It works well when host movement is frequent. (b) It works well over conditions such as host density and movement rates. (c) For highest rate of host movement the overhead is quite low. (d) In all cases, the difference in length between the routes used and optimal route lengths is negligible. (e) It makes full use of the route cache. (f) It improves handling of errors. The DSR is single path routing. It suffers from scalability problem due to the nature of source routing. As network becomes larger, control packets and message packets also become larger. It does not guarantee shortest path route. 2.2.2 Ad hoc On Demand Distance Vector Routing The Ad hoc On-Demand Distance Vector (AODV) routing protocol is intended for use by mobile nodes in an ad hoc network. It offers quick adaptation to dynamic link conditions, low processing and memory overhead, low network utilization, and determines unicast routes to destinations within the ad hoc network. It uses destination sequence numbers to ensure loop freedom at all times (even in the face of anomalous delivery of routing control messages), avoiding problems (such as “counting to infinity'') associated with classical distance vector protocols. One distinguishing feature of AODV is its use of a destination sequence number for each route entry. The destination sequence number is created by the destination to be included along with any route information it sends to requesting nodes. Using destination sequence numbers ensures loop freedom and is simple to program. Given the choice between two routes to a destination, a requesting node is required to select the one with the greatest sequence number. AODV has been termed as a pure on-demand route acquisition system, since nodes not on a selected path do not maintain routing information or participate in routing table exchanges. Route Requests (RREQ), Route Replies (RREP), and Route Errors (RERR) are the phases defined by AODV. A node disseminates a RREQ when it determines that it needs a route to a destination and does not have one available. A node generates a RREP if either (i)it is itself the destination, or (ii) it has an active route to the destination, A node initiates processing for a RERR message in three situations : (i) if it detects a link break for the next hop of an active route in its routing table while transmitting data (and route repair, if attempted, was unsuccessful), or (ii) if it gets a data packet destined to a node for which it does not have an active route and is not

repairing (if using local repair), or (iii) if it receives a RERR from a neighbor for one or more active routes. AODV satisfies the following properties: (a) It is loop free routing protocol (b) It is quick in adaptation to dynamic link conditions (c) In AODV nodes that are not on a selected path do not maintain routing information or participate in routing table thus reducing (d) It establishes new routes quickly (e) In this Hello messages are used as periodic broadcasts for beaconing. (f) In this concept of sequence number is used for selection of fresh routes. (g) It erases all invalid routes within a finite time. (h) It has reduces control overhead. AODV does not specify any special security measures. It does not make any assumption about the method by which addresses are assigned to the mobile nodes, except that they are presumed to have unique IP addresses. Consideration for other better routes is absent in AODV. Also it does not exploit the fast and localized partial route discovery method. HELLO messages causes carry overhead. Links are always considered as bidirectional, RREP messages are bounced back where they are originated. Bidirectional assumption might cause improper execution of the protocol.

3. PROPOSED SCHEME
The proposed scheme takes care of on-demand routing and also of power features, along with a new concept of virtual nodes. Virtual nodes (VN) are nodes at one hop distance from a participating node. These virtual nodes help the reconstruction phase in the fast selection of new routes. The selection of virtual nodes is made based upon the availability of nodes and their power status. Each route table entry has a field for its power status (which is measured in terms of Critical, Danger and Active states) and the number of virtual nodes attached to it. Whenever the need for a new route arises, a check for virtual nodes is made, their power status is checked and a route is established. The same process is repeated in the route repair phase. Route tables are updated at each Hello interval, as in AODV, with added entries for power status and virtual nodes. The proposed scheme is explained with the help of an example shown in Figure 1. It is assumed that there are 12 nodes, numbered 1 through 12. Assume further that the node with index 1 is the source while the destination is the node with index 4. Note that the route discovered using the power aware virtual node ad hoc routing protocol may not necessarily be the shortest route between a source-destination pair.
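As an illustration of such a route table entry, the Python sketch below adds the power state and the VN count to the usual AODV-style fields and picks the more stable of two candidate routes. The numeric ranking of Critical/Danger/Active and the tie-breaking order are our assumptions, not values given in the paper:

from dataclasses import dataclass

POWER_RANK = {"Active": 2, "Danger": 1, "Critical": 0}   # assumed ordering

@dataclass
class RouteEntry:
    dest: int
    next_hop: int
    hops: int
    power_state: str      # worst power state along the route
    virtual_nodes: int    # VNs available for fast local recovery

def prefer(a, b):
    """Prefer a better power state, then more VNs, then fewer hops."""
    key = lambda e: (POWER_RANK[e.power_state], e.virtual_nodes, -e.hops)
    return a if key(a) >= key(b) else b

r1 = RouteEntry(dest=4, next_hop=2, hops=3, power_state="Critical", virtual_nodes=1)
r2 = RouteEntry(dest=4, next_hop=2, hops=6, power_state="Active", virtual_nodes=3)
print(prefer(r1, r2).hops)   # 6: the longer but more stable path is chosen

This mirrors the Figure 1 example below, where the longer path through active nodes is preferred over the shortest path through a node in the critical zone.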


Figure 1: An example of stable routing (nodes 1–12, with S the source node 1 and D the destination node 4; stable and unstable nodes are marked)

If the node with index 3 has a power status in the critical or danger zone, then although the shortest path is 1—2—3—4, the more stable path 1—2—5—8—9—10—4, in terms of active power status, is chosen. This may lead to a slight delay but improves the overall efficiency of the protocol by sending more packets without a link break, compared with the situation where some node is unable to process the route due to inadequate battery power. The process also helps when some intermediate node moves out of range and a link break occurs; in that case virtual nodes take care of the process and the route is established again without much overhead. In Figure 1, if the node with index 8 moves out, the newly established route will be 1—2—5—11—9—10—4. Here the node with index 11 acts as a virtual node (VN) for the node with index 5 and the node with index 8. Similarly, the node with index 12 can be a VN for the nodes with indices 7, 10 and 4. Some work has already been done on using a multiple routes approach in ad hoc network protocols: the scheme by Nasipuri and Das [9], the Temporally-Ordered Routing Algorithm (TORA) [11], Dynamic Source Routing [7] and Routing On-demand Acyclic Multipath (ROAM) [14]; but these algorithms require additional control messages to construct and maintain alternate routes. The proposed routing scheme is designed for mobile ad hoc networks with a large number of nodes. It can handle low, moderate, and relatively high mobility rates. It can handle a variety of data traffic levels. This scheme has been designed for use in networks in which all the nodes can trust each other, and there are no malicious intruder nodes. There are three main phases in this protocol: the REQ (Route Request) phase, the REP (Route Reply) phase and the ERR (Route Errors) phase. The message types are also defined by the protocol scheme. The messages are received via UDP, and normal IP header processing applies.

3.1 Route Construction (REQ) Phase

This scheme can be incorporated with reactive routing protocols that build routes on demand via a query and reply procedure. The scheme does not require any modification to AODV's RREQ (route request) propagation process. In this scheme, when a source needs to initiate a data session to a destination but does not have any route information, it searches for a route by flooding a ROUTE REQUEST (REQ) packet. Each REQ packet has a unique identifier so that nodes can detect and drop duplicate packets. An intermediate node with an active route (in terms of power and virtual nodes), upon receiving a non-duplicate REQ, records the previous hop and the source node information in its route table, i.e. backward learning. It then broadcasts the packet, or sends back a ROUTE REPLY (REP) packet to the source if it has an active route to the destination. The destination node sends a REP via the selected route when it receives the first REQ or subsequent REQs that traversed a better active route. Nodes monitor the link status of next hops in active routes. When a link break in an active route is detected, an ERR message is used to notify the one hop neighbor that the loss of the link has occurred. Here the ERR message indicates those destinations which are no longer reachable. Taking advantage of the broadcast nature of wireless communications, a node promiscuously overhears packets that are transmitted by its neighboring nodes. When a node that is not part of the route overhears a REP packet not directed to itself, transmitted by a neighbor on the primary route, it records that neighbor as the next hop to the destination in its alternate route table. From these packets, a node obtains alternate path information and makes entries for these virtual nodes (VN) in its route table. If a route break occurs, it just starts the route construction phase from that node. The protocol periodically updates the list of VNs and their power status in the route table.
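The two behaviours described above, duplicate-REQ suppression with backward learning and the recording of overheard REP packets as alternate (virtual node) entries, can be sketched as follows; the data structures and message fields are illustrative, not the protocol's wire format:

alternate_routes = {}    # destination -> set of alternate next hops (VNs)
seen_req_ids = set()     # duplicate-REQ suppression
route_table = {}         # destination/source -> next hop

def on_req(req_id, prev_hop, source):
    if req_id in seen_req_ids:
        return False                      # drop duplicate REQ
    seen_req_ids.add(req_id)
    route_table[source] = prev_hop        # backward learning toward the source
    return True                           # caller rebroadcasts or answers with a REP

def on_overheard_rep(dest, neighbour, on_primary_route=False):
    if not on_primary_route:              # only off-route nodes record alternates
        alternate_routes.setdefault(dest, set()).add(neighbour)

on_req("1-17", prev_hop=2, source=1)
on_overheard_rep(dest=4, neighbour=11)    # node 11 becomes a VN entry toward node 4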


3.2 Route Error and Maintenance (REP) Phase

Data packets are delivered through the primary route unless there is a route disconnection. When a node detects a link break (e.g., in Figure 2, it receives a link layer feedback signal from the MAC protocol, the node with index 1 does not receive passive acknowledgments, or the node with index 2 does not receive hello packets for a certain period of time), it performs a one hop data broadcast to its immediate neighbors. The node specifies in the data header that the link is disconnected and thus the packet is a candidate for alternate routing. Upon receiving this packet, the previous one hop neighbor starts the route maintenance phase and constructs an alternate route through virtual nodes by checking their stability and power status. Route recovery involves finding VNs and their power status, invalid route erasures, listing the affected destinations, valid route updates, and a new route (in the worst case).
(1) Nothing is done if the mobile host that has moved is not part of any active route, or if the power status of that node is below the danger level and it is not part of an active route.
(2) If the current host is the SRC (source) and the host that moved is its next hop, then a REQ is sent to search for VNs and the power status is checked.
(3) The local repair scheme is used if the host that moved is on an active route.

Nodes which have an entry for the destination in their alternate route table transmit the packet to their next hop node. Data packets, therefore, can be delivered through one or more alternate routes and are not dropped when route breaks occur. To prevent packets from tracing a loop, these mesh nodes forward the data packet only if the packet was not received from their next hop to the destination and is not a duplicate. When a node of the primary route receives the data packet from alternate routes, it operates normally and forwards the packet to its next hop, as the packet is not a duplicate. The node that detected the link break also sends a ROUTE ERROR (ERR) packet to its previous neighbor to initiate a route rediscovery. The reason for reconstructing a new route instead of continuously using the alternate paths is to build a fresh and optimal route that reflects the current network topology. Figure 2 shows the alternate path mechanism at the time of a route error ERR. In this phase, when a route error message is sent to the previous neighbor of any intermediate node, it just reinitiates the route construction phase by considering the power status of all its virtual nodes. All this route maintenance occurs under the local repair scheme.

Figure 2: Route Error and Maintenance Phase

3.2.1 Local Repair

When a link break in an active route occurs, the node upstream of that break may choose to repair the link locally if the destination is not too far away and there exist VNs that are active. To repair the link break, the node increments the sequence number for the destination and then broadcasts a REQ for that destination. The time to live (TTL) of the REQ should initially be set to the following value:

TTL = max(VN attached, 0.5 * #hops) + POWER status

where #hops is the number of hops to the sender (originator) of the currently undeliverable packet, the power status is taken from the route table, and VN attached is the number of virtual nodes attached. This factor is transmitted to all nodes to select the best available path with maximum power. Thus, local repair attempts will often be invisible to the originating node. The node initiating the repair then waits for the discovery period to receive a reply message in response to that request REQ. During local repair, data packets are buffered at the local originator. If, at the end of the discovery period, the repairing node has not received a reply message REP, it proceeds by transmitting a route error ERR to the originating node. On the other hand, if the node receives one or more route replies REP during the discovery period, it first compares the hop count of the new route with the value in the hop count field of the invalid route table entry for that destination. If the hop count of the newly determined route to the destination is greater than the hop count of the previously known route, the node should issue a route error ERR message for the destination with the 'N' bit set, and then it updates its route table entry for that destination. A node that receives an ERR message with the 'N' flag set must not delete the route to that destination; the only action taken should be the retransmission of the message. Local repair of link breaks in routes sometimes results in increased path lengths to those destinations. Repairing the link locally is likely to increase the number of data packets that are able to be delivered to the destinations, since data packets will not be dropped while the ERR travels to the originating node. Sending an ERR to the originating node after locally repairing the link break may allow the originator to find a fresher and better route to the destination, based on current node positions. However, it does not require the originating node to rebuild the route, as the originator may be done, or nearly done, with the data session. When a link breaks along an active route, there are often multiple destinations that become


unreachable. The node that is upstream of the lost link tries an immediate local repair for only the one destination towards which the data packet was traveling. Other routes using the same link must be marked as invalid, but the node handling the local repair may flag each newly lost route as locally repairable; this local repair flag in the route table must be reset when the route times out. In AODV, a route is timed out when it is not used and updated for certain duration of time. The proposed scheme uses the same technique for timing out alternate routes. Nodes that provide alternate paths overhear data packets and if the packet was transmitted by the next hop to the destination as indicated in their alternate route table, they update the path. If an alternate route is not updated during the timeout interval, the node removes the path from the table. 3.3 Route Erasure (RE) phase When a discovered route is no longer desired, a route erasure broadcast will be initiated by Source, so that all nodes will update their routing table entries. A full broadcast is needed because some nodes may have changed during route reconstruction. RE phase can only be invoked by SRC (source). The ERR message is sent whenever a link break causes one or more destinations to become unreachable from some of the node's neighbors. 4. SIMULATION AND RESULTS

A simulation study has been carried out to evaluate the performance of different existing protocols. The simulation environment used for this study is NS [21]. Earlier versions of ns had no support for multi-hop wireless networks or the MAC sub layer, but its latest version (NS-2.28) provides support for the MAC sub layer and considerable support for wireless environments. Wireless environments are taken from the next release with embedded features ported from CMU/Monarch's code [19].

4.1 Parameters used for Testing
Many parameters have been used for evaluating the performance of the new scheme. The degree of connectivity among nodes, speed, number and duration of data flows, type of packets, and size of packets are some of the parameters that influence the performance of routing schemes.

A. Degree of Connectivity among Nodes
In many scenarios simulated in previous simulation studies of ad hoc networks, nodes were usually densely connected. In a highly dense network, almost every node has at least a path to any other node, usually just a few hops away. Meanwhile, due to the high volume of routing control messages, congestion happens frequently in such networks. A sparsely connected ad hoc network bears different characteristics. In such a network, paths between two nodes do not always exist, and routing choices are more obviously affected by the mobility of the network. In the simulation study, simulations have been carried out in both sparse and dense networks. The area of simulation for the dense medium has been taken as 1 km * 1 km, and the number of nodes as 20 and 50. The transmission range of each node in the dense network is 300 m. In the case of the sparse medium, the number of nodes has been taken as 10 and the network area as 700 * 700 meters, whereas the transmission range is 200 m.

B. Degree of Mobility
Varying the degree of mobility, or the moving speed of each node in the network, is a useful way to test how adjustable a routing protocol is to the dynamic environment. There are several mobility models used in the past. The proposed scheme uses the random waypoint model because this has been used more widely than other mobility models. In this model, each node begins the simulation by remaining stationary for a fixed "pause time" in seconds. It then selects a random destination in the simulation space and moves to that destination at a speed distributed uniformly between a minimum and a maximum speed. Upon reaching the destination, the node pauses again for "pause time" seconds, selects another destination, and proceeds there as previously described, repeating this behavior for the duration of the simulation. In the simulation scenes, the minimum moving speed has been taken as 0 and the maximum speed as 30 m/sec. Different speeds of 1, 2, 5, 10, 15, 20 and 30 meters per second have been used for checking the effect of mobility. The pause time has been varied between 0 and 500 seconds. A pause time of 0 seconds corresponds to continuous motion, and a pause time of 500 corresponds to no motion. The simulation time has been taken as 500 seconds.

C. Number and Duration of Data Flows
Because on-demand protocols query routes only when data flows exist for them, the number of data flows influences the number of paths found and the control overhead for on demand protocols such as AODV, TORA and DSR. How well a protocol adjusts to the change of data flows is another important criterion for evaluating a routing protocol. In the simulation environment, the number of data flows has been varied between 5 and 50. Many connections have been established among nodes. Distant connections have been set even if the connection fails after some time. Random scenarios have been created, where many


connecting paths are initially far away and some initially connected paths move too far apart before the end of the simulation. In most previous simulation studies, each data flow started at an early time of the simulation period and continued until almost the end of the period. In the present simulations, besides this long-lasting flow pattern, protocols have been tested under data flows that last for shorter time periods. Packet sizes of 64 bytes and 512 bytes have been used.

D. Other Factors
There are other factors whose values have not been varied in this study. The effect of having one or a few static nodes as points of attachment to the Internet, such that most of the traffic in the ad hoc network is to and from such point(s), has not been taken into account. In the simulation environment of this study, as in several previous simulations, the traffic type chosen has been the constant bit rate (CBR) source. In a real case there are many popular applications with traffic patterns different from CBR. Simulations have been carried out for both TCP and UDP. The behavior of the DSR protocol has been quite different for UDP and TCP packets: DSR handles UDP much better than TCP packets at fast speeds. To observe the protocols more objectively, it would be worth trying different applications in the future.

4.2 Metrics
Simulation results have been compared with other existing protocols, namely AODV, DSR and TORA. Simulations have been conducted on a P-IV processor at 2.8 GHz with 512 MB of RAM in a Linux environment with ns-2.26 and its facilities for wireless simulations. The following metrics have been used in the simulation study.

1. Packet Delivery Ratio: The fraction of successfully received packets, i.e. packets which survive to reach their destination. This performance measure also determines the completeness and correctness of the routing protocol. If F is the fraction of successfully delivered packets, C is the total number of flows, f is a flow id, R_f is the number of packets received from flow f and T_f is the number transmitted from flow f, then

F = \frac{1}{C} \sum_{f=1}^{C} \frac{R_f}{T_f}

2. End-to-End Delay: Average end-to-end delay is the delay experienced by the successfully delivered packets in reaching their destinations. This is a good metric for comparing protocols and indicates how efficient the underlying routing algorithm is, because the delay primarily depends on the optimality of the path chosen. It is defined as

Average end-to-end Delay = \frac{1}{S} \sum_{i=1}^{S} (r_i - s_i)

where S is the number of packets received successfully, r_i is the time at which packet i is received, s_i is the time at which it is sent, and i is the unique packet identifier.

3. Routing Overhead: The number of routing packets sent by the routing protocol in order to deliver the data packets to their destinations.
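As an illustration of how these metrics are typically computed from simulation traces, the following is a minimal sketch; the data layout (per-flow send/receive counters and per-packet send/receive timestamps) is assumed for illustration and does not correspond to a specific ns-2 trace format.

import java.util.List;

/** Hypothetical sketch: computing packet delivery ratio and average end-to-end delay. */
class RoutingMetrics {

    /** F = (1/C) * sum over flows f of (received_f / transmitted_f). */
    static double packetDeliveryRatio(long[] received, long[] transmitted) {
        double sum = 0.0;
        int flows = received.length;               // C, the total number of flows
        for (int f = 0; f < flows; f++) {
            sum += (double) received[f] / transmitted[f];
        }
        return sum / flows;
    }

    /** Average end-to-end delay = (1/S) * sum over delivered packets i of (r_i - s_i). */
    static double averageEndToEndDelay(List<double[]> delivered) {
        // Each element is {sendTime s_i, receiveTime r_i} for one successfully delivered packet.
        double total = 0.0;
        for (double[] p : delivered) {
            total += p[1] - p[0];
        }
        return total / delivered.size();           // S is the number of delivered packets
    }
}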

Graph 1: Average delay in packet delivery

Since PAVNR and AODV both have the same amount of control message overhead, a different metric has been used for efficiency evaluation. It is clearly visible in Graph 1 that the average path cost of PAVNR is higher than that of AODV and the others when the link break rate is relatively low. This can be explained as follows. PAVNR has a much higher success ratio than AODV when the link break rate is 50%. Those connections which PAVNR is able to establish but AODV is not tend to have relatively long routing paths, as observed in the simulation; they also tend to have higher cost, which brings the average path cost up. There are two reasons for this result. First, when a route breaks, PAVNR uses longer alternate paths to deliver packets that are dropped in AODV. Second, when there are multiple paths, redundancy is created, which increases the number of data transmissions. It has been observed that efficiency has been slightly sacrificed in order to improve throughput and protocol effectiveness. The next graphs compare PAVNR, AODV, DSR and TORA in terms of packet delivery ratio.


Graph 2: Packet delivery ratio for 10 nodes (packet delivery ratio vs. pause time, for PAVNR, AODV, TORA and DSR)

Graph 3: Packet delivery ratio for 20 nodes (packet delivery ratio vs. pause time, for PAVNR, AODV, TORA and DSR)

In this scene the speed has been varied from 1 meter per second to 25 meters per second. It was observed, as shown in Graph 2 and Graph 3, that the packet delivery ratio was very good for TORA, since it performs well in sparse mediums; the performance of PAVNR was below AODV and DSR, the reason being fewer virtual nodes available and more time spent in calculating power status. However, the performance of PAVNR was overall the best for 20 and 50 nodes, supporting the point that it is better to take the power and virtual-node factors into account. It was much better than its counterparts for 50 nodes (Graph 4), i.e. in dense mediums, the reason being the easy availability of virtual nodes and the larger number of nodes available for the recovery phase.

Graph 4: Packet delivery ratio for 50 nodes (packet delivery ratio vs. pause time, for PAVNR, AODV, TORA and DSR)

The performance of all four protocols has also been tested in a random scene environment, where different nodes have different speeds and the movement patterns differ between connecting nodes. The scenario has been simulated for 10, 20 and 50 nodes. PAVNR performed better than the others and came closer to DSR for higher numbers of nodes; the packet delivery ratio of TORA dropped for higher numbers of nodes, although it was good for fewer nodes.

Graph 5: Percentage of packets delivered in the random scenario (percentage delivered vs. number of nodes, for PAVNR, AODV, TORA and DSR)

In Graph 5, random scene layouts are taken with 10, 20 and 50 nodes with varying pause times and varying speeds. TORA performance gets poorer with increasing speeds, while AODV and DSR performance has been relatively constant throughout.


Graph 6: End-to-end delay in delivery of packets

Graph 6 shows that PAVNR has longer delays than AODV and the others. One can only measure delays for data packets that survive to reach their destination. PAVNR delivers more data packets, and those packets that are delivered by PAVNR but not by AODV take alternate, and possibly longer, routes. The longer delays of PAVNR therefore do not indicate ineffectiveness, since these protocols use the same primary route.

5. CONCLUSION

In this paper a new scheme has been presented that utilizes a mesh structure and alternate paths. The scheme can be incorporated into any ad hoc on-demand unicast routing protocol to improve reliable packet delivery in the face of node movements and route breaks. Alternate routes are utilized only when data packets cannot be delivered through the primary route. As a case study, the proposed scheme has been applied to AODV and it was observed that the performance improved. Simulation results indicated that the technique provides robustness to mobility and enhances protocol performance. Work is currently ongoing to investigate ways to make the new protocol scheme robust to traffic load. The power aware virtual node routing protocol gives a better approach for on-demand routing protocols for route selection and maintenance. It also takes care of the power factor, which improves the performance of the protocol. It was found that the overhead of this protocol was slightly higher than that of the others, because it requires more calculation initially for checking virtual nodes and power levels; this also caused slightly higher end-to-end delay. The protocol scheme is being checked further for sparser mediums and real-life scenarios, and for other metrics such as path optimality and link layer overhead. The proposal is also to check this protocol for multicast routing. Additionally, the plan is to further evaluate the proposed scheme by using more detailed and realistic channel models with fading and obstacles in the simulation.

6. References
1. G. Aggelou and R. Tafazolli, "Bandwidth efficient routing protocol for mobile ad hoc networks (RDMAR)", CCSR, UK, 1997.
2. D. Bertsekas and R. Gallager, "Data Networks", Prentice Hall, New Jersey, 2002.
3. J. Broch, D. A. Maltz and J. Jetcheva, "A Performance Comparison of Multi-hop Wireless Ad Hoc Network Routing Protocols", Proceedings of MobiCom '98, Texas, 1998.
4. Tsu-Wei Chen and M. Gerla, "Global State Routing: A New Routing Scheme for Ad-hoc Wireless Networks", Proceedings of IEEE ICC, 1998.
5. J. J. Garcia, M. Spohn and D. Bayer, "Source Tree Adaptive Routing Protocol", IETF draft, available at www.ietf.org.
6. Z. J. Haas and M. R. Pearlman, "Zone Routing Protocol (ZRP)", Internet draft, available at www.ietf.org.
7. D. B. Johnson and D. A. Maltz, "Dynamic Source Routing in Ad Hoc Networks", in Mobile Computing, T. Imielinski and H. Korth, Eds., Kluwer, pp. 153-181, 1996.
8. S. Murthy and J. J. Garcia-Luna-Aceves, "An Efficient Routing Protocol for Wireless Networks", ACM Mobile Networks and Applications Journal, Special Issue on Routing in Mobile Communication Networks, pp. 183-197, 1996.
9. A. Nasipuri, R. Castaneda and S. R. Das, "Performance of Multipath Routing for On-Demand Protocols in Mobile Ad Hoc Networks", ACM/Baltzer Journal of Mobile Networks (MONET).
10. N. Nikaein and C. Bonnet, "Dynamic Routing Algorithm", Institut Eurecom, Navid.Nikaein@eurocom.fr.
11. V. D. Park and M. S. Corson, "A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks", Proceedings of IEEE INFOCOM, Kobe, Japan, pp. 1405-1413, 1997.
12. C. E. Perkins and P. Bhagwat, "Highly Dynamic Destination-Sequenced Distance Vector Routing (DSDV) for Mobile Computers", Proceedings of ACM SIGCOMM '94, pp. 234-244, 1994.
13. C. E. Perkins and E. M. Royer, "Ad-hoc On-Demand Distance Vector Routing", Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA), New Orleans, LA, pp. 90-100, 1999.
14. J. Raju and J. J. Garcia-Luna-Aceves, "A New Approach to On-demand Loop-Free Multipath Routing", Proceedings of the 8th Annual IEEE International Conference on Computer Communications and Networks (ICCCN), Boston, MA, pp. 522-527, 1999.
15. S. Ramanathan and M. Steenstrup, "A Survey of Routing Techniques for Mobile Communications Networks", Mobile Networks and Applications, pp. 89-104, 1996.
16. E. M. Royer and C. K. Toh, "A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks", IEEE Communications, pp. 46-55, 1999.
17. A. Tanenbaum, "Computer Networks", Prentice Hall, New Jersey, 2002.
18. C. K. Toh, "Ad Hoc Mobile Wireless Networks", Prentice Hall, New Jersey, 2002.
19. CMU/Monarch's code, available at www.monarch.cs.edu.
20. National Science Foundation, "Research Priorities in Wireless and Mobile Networking", available at www.cise.nsf.gov.
21. NS Notes and Documentation, available at www.isi.edu/vint.


JAVA BASED IMPLEMENTATION OF AN ONLINE HOME DELIVERY SYSTEM
Mr. Fiaz Ahmad, Assistant Lecturer
Dr. Mohamed Osama Khozium, Assistant Professor
Faculty of Information Technology, MISR University for Science & Technology, 6th of October City, EGYPT
fiaz.ahmad@yahoo.com, Osama@Khozium.com

ABSTRACT
Technology is the application of science and theory to help human beings, and computer technology and its developments continue to introduce new conveniences into everyday life. This paper discusses not only the benefits of computer technology that directly make human life easier but also its valuable impact on the environment of society. The paper describes the design and implementation phases of an Online Home Delivery System. It commences by highlighting the significant aspects of computer technology and the effects of its development on today's society, and shows how computer technologies are increasing the conveniences of today's life by introducing new capabilities every day. Java is used to develop the system; Java is an object oriented language that is well suited to designing software that works in conjunction with the internet. The system is designed and developed to provide the home delivery service in a completely different way. It can be utilized in a real-world environment and can yield fruitful effects in business. The key contribution of the proposed system is the entirely new concept of "delivery to a password-secured box". The system also introduces a unique interface for placing orders using cellular phones. It is a generic product developed for prospective organizations which provide home delivery of their goods. The system also shows how to cope with security issues while considering the available resources.
Keywords: Robust and Secure, Cellular Phone Application, Platform independency, POST, J2ME, Internet Security.

1. INTRODUCTION
Every moment that comes to us brings new challenges. The rising boom of computer technology has brought new horizons to our attention. Today, continuous progress and service delivery have changed business as well as the daily life of today's human being. Continuous advancement in computer technology has introduced many valuable impacts on today's life, and the Online Home Delivery System is a powerful reflection of computer technology. Is it OK to use a home-delivery service? It is not the idea, but the application of the idea, that is the key to success, or so say the business gurus. Indeed, it is striking how many successful businesses are based on ideas that failed for others before them.

The case of Webvan.com is a good example that there are often rich pickings to be had from the carcass of failure. Webvan was one of the most luminescent stars of the dotcom boom, and one of the most startling failures of its inevitable crash. The company's founders raised about $1bn to fund their idea of a super-efficient home-delivery operation, initially serving Silicon Valley in California. The company's fleet of vans promised to deliver to customers within 30-minute time slots. Customers loved the service, but the company grossly over-reached and it foundered with colossal debts [1]. Information technology advancements have introduced a number of incredible things that were only a dream in the past. The idea of a home delivery service is a very strong idea for today's business


and it can have valuable effects on the business of any organization in today's competitive business environment. The Online HDS (Home Delivery System) is developed to replace the existing manual system at organizations providing home delivery, together with online shopping capabilities. The remote user can place orders from the web and from internet-enabled cellular phones, so the system provides an online shopping facility to remote users and works as a point-of-order system. The system captures sales information at the POST (point of sale terminal) and manages inventory and customer information. Unlike the existing outdated, largely manual sale and inventory systems, the product provides accurate and up-to-date sale, inventory and customer information to the management. Tesco's e-grocery service has also proved a big success. It is now the world's largest home-delivery service, with 150,000 orders a week and sales in 2005 of £719m, an annual growth of 24%. Considering that home shopping only accounts for 2% of Tesco's total group sales of £37bn, there is still huge potential for growth [1]. The successful implementation of the system also introduces some environmental benefits; in this way, information technology is having a valuable effect on the environment of this planet. There are also possible environmental advantages, not something you can usually say of supermarkets, to an increased move towards home deliveries. You would think it obviously better to encourage people not to drive themselves to a store and instead rely on a van making multiple drop-offs, thereby cutting the need for many journeys.

2. BACKGROUND
The idea of a Home Delivery Service is crucial for any organization that wants to do strong business in the market. Research by the University of London centre for transport studies in the late 1990s showed that, even with vans each carrying just eight customer orders per round, an estimated 70-80% reduction in total vehicle kilometers could be achieved if it stopped customers going to the shops by car. A

related questionnaire also showed that 74% of car owners said they used their cars less because of their home deliveries [1]. The main purpose of this system is to replace the existing manual system. The limitations of the manual system are as follows:
• Stock checking is time consuming and error prone.
• Items can be placed at other locations in the store; because of this, the item tracking process is very cumbersome and time consuming.
• In the manual system it is very difficult to maintain the records of items, such as item price, quantity and last purchase rate.
• There is no facility to maintain the records of suppliers and manufacturers.
• There is no synchronization between item quantity at the POST and at the store.
• In rush hours the sale speed becomes low and the cashier can make mistakes.
• The sale invoice does not include any item description.
• If two items have the same price on the sale invoice, it is difficult to identify the items; this can also generate difficulties on return of sales.
• Inventory is handled very poorly.
• There is no tracking of item categories and sub-categories.
A successful implementation of the system can improve the image of the organization and catch the attention of more customers, and an automated system fulfills the customers' and owner's needs.

3. DFD OF THE SYSTEM
In the late 1970s data-flow diagrams (DFDs) were introduced and popularized for structured analysis and design (Gane and Sarson 1979). DFDs show the flow of data from external entities into the system, how the data moves from one process to another, and its logical storage [2]. The DFD of the proposed system is given in Figure 1. It shows the different processes and the system behavior while interacting with it.


Figure 1: Data Flow Diagram of the proposed system

4. SALIENT FEATURES OF THE PROPOSED SYSTEM
The proposed system brings in a new concept for efficient and secure home delivery: the concept of "delivery to box". This idea supports any organization in making speedy deliveries and also eliminates the need for the customer to be at home. With the "delivery to box" service (where the shopping is left in a password-secured box outside the home, thereby eliminating the need for the customer to be at home and allowing the driver greater flexibility to choose more efficient routes), the average journey length per delivery dropped to 0.9 km [1]. When the customer places an order, he or she provides a password to open the box outside the home. The password is recorded with the order receipt so that the deliverer can put the order into the box.
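As a minimal illustration of how the "delivery to box" option might be recorded with an order, consider the following sketch. The class and field names are hypothetical and are not taken from the actual implementation described later in this paper.

/** Hypothetical sketch of an order that carries the delivery option chosen by the customer. */
class DeliveryOrder {
    enum DeliveryOption { HAND_TO_CUSTOMER, PASSWORD_SECURED_BOX }

    private final String orderId;
    private final DeliveryOption option;
    private final String boxPassword;   // recorded with the order receipt, only for box delivery

    DeliveryOrder(String orderId, DeliveryOption option, String boxPassword) {
        if (option == DeliveryOption.PASSWORD_SECURED_BOX
                && (boxPassword == null || boxPassword.length() == 0)) {
            throw new IllegalArgumentException("Box delivery requires a password supplied by the customer");
        }
        this.orderId = orderId;
        this.option = option;
        this.boxPassword = boxPassword;
    }

    /** Printed on the order receipt so the deliverer can open the box outside the customer's home. */
    String receiptLine() {
        return option == DeliveryOption.PASSWORD_SECURED_BOX
                ? "Order " + orderId + " - deliver to box, password: " + boxPassword
                : "Order " + orderId + " - hand to customer";
    }
}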

In the last few decades the usage of internet and mobile technology has increased very rapidly, and this technology has had a very valuable impact on today's life. The graph given in Figure 2 shows the rapid growth of internet usage.

Figure 2: Growth in internet usage (1970-2005)

In the same way, usage of mobile technology has also increased very quickly. Nowadays more and more cellular phones are used to connect to the internet for different tasks, and the accessibility of the internet using cellular phones is having a clear effect on today's business. The graph given in Figure 3 shows the usage of cellular phones to connect to the internet.


Figure 3: Usage of handsets connecting to the internet (more handsets than PCs connected to the Internet, 1995-2006)

The major contribution of the proposed system is that it provides a facility not introduced before in such systems: the usage of the cellular phone for placing online orders, and the "delivery to password-secured box". This facility can have valuable effects on the business of any organization as well as introducing ease for the customers.

5. DESIGN AND IMPLEMENTATION
5.1 Development Environment
Java is a programming language that is well suited for designing software that works in conjunction with the internet [3]. Additionally, it is a cross-platform language, which means its programs can be designed to run the same way on Microsoft Windows, Apple Macintosh and most versions of UNIX, including Solaris. Java extends beyond desktops to run on devices such as televisions, wristwatches and cellular phones, as it is small, secure and portable [4]. Java is best known for its capability to run on World Wide Web pages [5]. Java's strengths include platform independence and its object oriented nature, as well as being easy to learn [6]. Furthermore, Java has dominant technologies such as JSP (Java Server Pages), Struts and EJBeans (Enterprise Java Beans) that make it attractive for the development of distributed web applications. For all the above mentioned advantages, Java was selected to develop the system.

5.2 Security & Privacy Threats and Controls
Security and privacy issues are of great importance in any organization and cannot be neglected for any secure business system. The term "system security threats" refers to the acts or incidents that can and will affect the integrity of business systems, which in turn will affect the reliability and privacy of business data. Most organizations are dependent on computer systems to function, and thus must deal with systems security threats. Small firms, however, are often understaffed for basic information technology (IT) functions as well as system security skills. Nonetheless, to protect a company's systems and ensure business continuity, all organizations must designate an individual or a group with responsibility for system security. Outsourcing system security functions may be a less expensive alternative for small organizations [7]. Possible security threats that can affect any business system are:

5.2.1 Security Threats
- Malicious Threats
- Unintentional Threats
- Physical Threats

5.2.1.1 Malicious Threats:
i. Malicious Software (codes)
ii. Unauthorized Access to Information
iii. System Penetration
iv. Theft of Proprietary Information
v. Financial Fraud
vi. Misuse of Public Web Applications
vii. Website Defacement

5.2.1.2 Unintentional Threats:
- Equipment Malfunction
- Software Malfunction
- Human Error
- Trap Door (Back Door)
- User/Operator Error

5.2.1.3 Physical Threats:
- Physical Environment
- Fire Damage
- Water Damage
- Power Loss
- Civil Disorder/Vandalism
- Battle Damage

5.2.2 The formulation of the following steps can enhance the information security structure of any organization:
1. Identify security deficiencies
2. Continuous IT planning for technical & operational tasks
3. Self assessment mechanism
4. Incident handling procedures
5. Information recovery methodology
6. Back up of data & configuration
7. Future security visions


8. Quality measures for security
9. Coordination with departments for regular monitoring of all servers
10. Develop action plans and milestones for information security

Security safeguards need to be improved via identification & authentication where a low-risk environment prevails. When considering security procedures, access privileges need to be monitored and controlled for every level of access. Organizations have to apply departmental zones with reference to security control and access

mechanism. One key mechanism that is often neglected by many organizations is continuous monitoring of network traffic with all its available resources [8]. A combination of preventive and detective controls can mitigate security threats.

5.3 Design Class Diagram of the Proposed System
In the Unified Modeling Language (UML), a class diagram is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes [9].

Figure 4: Design Class Diagram of the proposed system (customer, manager, salesman, cashier, administrator, sign-in, products, item main/sub categories, add-to-cart, place order, sales and report classes with their associations)


5.4 STRUCTURE OF THE SYSTEM
The proposed system is a distributed web application containing three modules:
1. Web Module
2. Cellular Phone Module
3. Desktop Module (Server Side Module)
Struts is used as the architecture, following the well-known model view controller pattern. EJBeans (Entity Java Beans) are used as an application layer between the browser and the database. Through the web application of the system the customer can log in to the super store and do shopping according to his or her needs. The cellular phone application is developed using J2ME (Java 2 Micro Edition) to allow the customer to place orders using a cellular phone. It is basically a MIDlet; data moves from the MIDlet to a JSP, from the JSP to EJBeans (inside the application server, which is BEA WebLogic) and then to the database. The basic functionality is to place the order and to display a unique order id to the user. Notably, at run time a catalog is built and its sub-items are retrieved from the database using EJBeans and displayed on a cellular device with constrained memory and user interface. The desktop application (server side application) communicates with the database through BEA WebLogic, which is an application server for sending and retrieving data from the database.
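As a minimal illustration of the cellular phone module described above, the following hypothetical J2ME sketch posts an order from a MIDlet to a server-side JSP over HTTP and displays the order id returned; the URL, parameter names and the JSP itself are assumptions for illustration and are not the system's actual interfaces.

import java.io.InputStream;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

/** Hypothetical MIDlet sketch: submit an order to a server-side JSP and show the returned order id. */
public class OrderMidlet extends MIDlet {
    private final Form form = new Form("Home Delivery");

    protected void startApp() {
        Display.getDisplay(this).setCurrent(form);
        // Assumed endpoint: a JSP that forwards the order to the EJB layer and returns an order id.
        new Thread(new Runnable() {
            public void run() { placeOrder("http://example.com/shop/placeOrder.jsp", "itemId=42&qty=1"); }
        }).start();
    }

    private void placeOrder(String url, String body) {
        HttpConnection conn = null;
        try {
            conn = (HttpConnection) Connector.open(url);
            conn.setRequestMethod(HttpConnection.POST);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            OutputStream out = conn.openOutputStream();
            out.write(body.getBytes());
            out.close();

            InputStream in = conn.openInputStream();      // response body carries the unique order id
            StringBuffer orderId = new StringBuffer();
            int ch;
            while ((ch = in.read()) != -1) {
                orderId.append((char) ch);
            }
            in.close();
            form.append("Order placed, id: " + orderId.toString());
        } catch (Exception e) {
            form.append("Order failed: " + e.getMessage());
        } finally {
            try { if (conn != null) conn.close(); } catch (Exception ignored) { }
        }
    }

    protected void pauseApp() { }

    protected void destroyApp(boolean unconditional) { }
}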


5.5 State Chart Diagram of Super Store Management
A state chart diagram shows the behavior of classes in response to external stimuli. This diagram models the dynamic flow of control from state to state within the present system [10].

Figure 5: State Chart Diagram of Super Store Management

6. CONCLUSION
The design and development phases of the proposed system for Online Home Delivery are described in this paper. The manual system of any organization or super store can take care of its stock and store items only to a limited extent; it does not provide the technically mature and sophisticated features that are currently needed by the management. The proposed system captures sales information at the POST, manages inventory and customer information, and provides an online shopping facility to remote users. Unlike the existing outdated, largely manual sale and inventory system, the product provides accurate


and up-to-date sale, purchase, inventory and customer information to the management. This will reduce duplication of work and improve the efficiency of the available resources. The supermarket delivery service means that I can get large and bulky items delivered and use the local shops for smaller things. It has also proved indispensable for ordering groceries for my housebound elderly relative in another county. It seems that home deliveries offer environmental advantages, but much more so if we are less demanding about delivery slots and favor using secured delivery boxes [1]. Among the advantages of the system that are normally not available in other similar systems is the facility of "delivery to the password-secured box". The system provides the facility for the customer to choose the delivery option while ordering online; in the case of delivery to box, the system asks for the password, which is dispatched with the customer address on the order receipt. The system also facilitates the customer by giving a payment option: the customer can pay online as well as at home after safely receiving his or her order. The system also contributes to achieving environmental benefits as well as personal benefits, e.g. saving money and time. The system was tested and showed high accuracy and success. It can be utilized by research knowledge-seekers to study its usage, properties and applications.

REFERENCES
[1] Guardian Website, "Is it OK ... to use a home-delivery service?", http://www.guardian.co.uk/g2/story/0,,1698496,00.html, 2007.
[2] Scott W., "The Object Primer", 3rd Edition, Cambridge University Press, 2004, ISBN 0-521-54018-6, http://www.agilemodeling.com/artifacts/dataFlowDiagram.htm, 2007.
[3] Java web site, Sun Microsystems, java.sun.com, 2007.
[4] Newman A., A Special Edition Using Java, Indianapolis, IN, Que Corporation, 1996.
[5] Gridley M., Web Programming with Java, Indianapolis, IN, Sams.net, 1996.
[6] Horstmann C., Core Java 1.2, Sun Microsystems Press, California, 1999.
[7] P. Paul Lin, "System Security Threats and Controls", The CPA Journal Online, http://www.nysscpa.org/cpajournal/2006/706/essentials/p58.htm
[8] Khozium et al., "Process Management for Information Security Assessment", The 2006

International Arab Conference on Information Technology (ACIT'2006), p. 45.
[9] Wikipedia, "The Free Encyclopedia", http://en.wikipedia.org/wiki/Class_Diagram, last visited 2007.
[10] SmartDraw, "What is a UML State Chart Diagram?", http://www.smartdraw.com/tutorials/software/uml/tutorial_09.htm, last visited 2007.

40

TOWARDS MOBILITY ENABLED PROTOCOL STACK FOR FUTURE WIRELESS NETWORKS
Fawad Nazir, Aruna Seneviratne
National ICT Australia (NICTA), Australia
University of New South Wales (UNSW), Australia
{fawad.nazir,aruna.seneviratne}@nicta.com.au

ABSTRACT
Future wireless networks have two widely accepted characteristics: firstly, they will be based on an all-IP network architecture, and secondly, they will integrate heterogeneous wireless access technologies. As a result, there exists today a multitude of solutions aimed at managing these imminent challenges. These solutions are at varying stages of deployment, from purely analytical research, to experimentally validated proposals, right through to fully standardised and commercially available systems. In this paper we discuss the meaning, requirements, responsibilities and solutions for mobility management on all seven layers of the OSI communication stack. We identify internet mobility requirements and perform a valuable three-dimensional analysis between internet mobility requirements, mobility management protocols and the layers of the OSI communication stack. We also quantify the types of mobility possible in future wireless networks and associate them with the responsible layers. We conclude that no single layer in the OSI stack can completely address all the internet mobility requirements and support all the mobility types. We strongly believe that every layer has its own responsibilities in order to support mobility. Therefore, in order to deal with the mobility challenge we should have a "Mobility Enabled Protocol Stack" instead of a mobility management solution on a specific layer. We argue that the best approach to building a complete mobility enabled protocol stack for future wireless networks is based on the concept of co-existence of mobility management protocols proposed on different layers, in a way that we get the best out of each. Finally, in order to support our arguments, we propose a novel mobility enabled protocol stack, naming mechanism and wireless network architecture for future wireless networks.
Keywords: Mobility, Mobility Management, OSI communication stack, mobile networking, wireless network architecture, heterogeneity.

1 INTRODUCTION

Mobility is an unmistakable truth in human lives, time is always a constraint, and communication is a necessity; communicating while moving, to save time, has become a challenge. However, mobility is not just limited to communication: historically the Internet was built for communication only, but its applications now go way beyond communicating, and the same is becoming true for wireless networks and the applications of mobility. This idea is driving the research in wireless networks. An obvious question is: why can the traditional internet (TCP/IP) not fulfill the requirements of future wireless networks? Two of the fundamental problems in the TCP/IP stack that hinder the support of mobility are that there is no support for mobility in TCP/IP, and the tight binding of the application, transport and IP layers (Figure 1). This opens up a new and challenging area

of research, i.e. "Mobility Management". As a result, there exists today a multitude of solutions aimed at managing this problem. These solutions are at varying stages of deployment, from purely analytical research, to experimentally validated proposals, right through to fully standardised and commercially available systems. Most of these solutions aim to solve mobility management problems at a specific layer, and questions like "to what layer does mobility belong?" [4] are being addressed. We strongly believe that the mobility handling task does not belong to any specific layer in the TCP/IP stack; every layer in the communication stack has its own responsibility in order to support mobility. For us, mobility is a functionality with its own types and requirements. Therefore, in this paper we have studied mobility management in detail while taking mobility types and mobility requirements as a reference. We also introduce a notion of mobility


management protocol co-existence. Co-existence means co-existence of different mobility management solutions proposed on different layers of OSI stack to form a “Mobility Enabled Protocol Stack”.

Figure 1: Layer Binding in TCP/IP Stack

Co-existence is further divided into two types: And-Based Co-existence (ABC) and Or-Based Co-existence (OBC). ABC means the simultaneous existence of multiple mobility management protocols, and OBC means the selection of an appropriate mobility management protocol based on multiple factors like context, preference, etc. In this paper we propose an ABC-based mobility enabled protocol stack, as OBC-based solutions have challenges that are beyond the scope of this paper. Furthermore, we demonstrate how we can create a hybrid solution towards mobility management using And-Based Co-existence of link layer, network layer, new layer and session layer mobility management solutions. Our proposed solution can fulfil all the mobility requirements and support physical, logical and QoS mobility. The rest of the paper is organized as follows. In the next section we describe mobility types and their relationship with the OSI stack layers. In section 3, we discuss what mobility means on all seven layers of the OSI stack. We review the mobility management requirements in section 4. Section 5 presents mobility management solutions on different layers and analyses them according to the mobility types and mobility requirements. Finally, we introduce a new mobility management solution and architecture, followed by the conclusion and future work in the last section.

2 Mobility Types and their relationship with the OSI Stack Layers

A clear and precise definition of mobility is required in order to perform an analysis of mobility management solutions. Mobility can be categorised in different ways. At the top level we consider mobility to have three broad types: physical, logical and QoS mobility. Throughout the paper, we will use these types as a reference for our comparisons and analysis.

Physical Mobility: Physical mobility deals with the physical movement of the device while it continues to be reachable for incoming requests and maintains ongoing sessions/connections. Physical mobility is further divided into two categories, local and global mobility. Local mobility deals with the movement of a device within a single administrative domain. On the other hand, global mobility deals with the movement of a device across two or more different administrative domains. Local mobility is further divided into two categories, inter-subnet mobility and intra-subnet mobility. Inter-subnet mobility is movement across multiple different subnets, and intra-subnet mobility refers to the movement of a device within a single subnet.

Figure 2: Classification of Mobility Types and OSI Layer Responsibilities

Global mobility can also be referred to as inter-network mobility. Inter-network mobility is the movement of a device between two different network domains. Now let us look at the layers of the OSI stack that are responsible for dealing with physical mobility. At the base level, physical mobility is divided into three types: intra-subnet, inter-subnet and inter-network. In the case of intra-subnet mobility,


whenever the mobile node moves it will have to re-associate itself with the new access point (AP) at the link layer. In the case of intra-subnet mobility we do not have to change the IP address, so the network layer may not be involved. Inter-subnet and inter-network mobility pose similar responsibilities to the OSI stack layers: in both cases we not only have to re-associate with a new access point but also obtain a new IP address, so both the link layer and the network layer are responsible. Other than these responsible layers, the layer dependency rules apply as mentioned in section 1.

Logical Mobility: Logical mobility deals with the possibility of mobility without the physical movement of the device. It is further classified into two types, inter-device mobility and intra-device mobility. Intra-device mobility deals with mobility within the device; interface change and address change are two types of intra-device mobility. Interface change can be further divided into two types, same access technology mobility (a.k.a. horizontal handoff) and different access technology mobility (a.k.a. vertical handoff). Same access technology mobility means changing to another interface of the same access technology, and different access technology mobility means changing to an interface of a different access technology, e.g. changing from 802.11 to CDMA. The second type is inter-device mobility, which deals with the movement of mobile objects [1] from one device to another. It can be further sub-divided into session mobility and application/flow mobility. Session mobility means moving a session or application/communication state to some other device; for example, a user may want to continue a session begun on a mobile device on the desktop PC when entering his or her office. A user may also want to move parts of a session, e.g. if he has specialized devices for audio and video, such as a video projector, video wall or speakerphone. Application/flow mobility, on the other hand, means movement of application and flow state between devices. This type of mobility is also known as code mobility, agent mobility or process mobility. At the base level, logical mobility can be divided into five types, namely same access technology mobility, different access technology mobility, address change, session mobility and application/flow mobility. In logical mobility we assume that the mobile device is not moving: even while static, a mobile device can change its access interfaces. If the device changes its interface, the OSI layer dependencies will depend on whether it changes interface within the same subnet, a different subnet or a different network. Having established this, the same layer dependencies apply as in the case of physical mobility. In the case of different access technology mobility, the link layer is responsible for association of the mobile device with the new access

technology, and mobility can be handled, at the lowest, on the IP layer. Address change mobility can be handled at the network layer, as an address change does not require a change in the current access point association. In application/flow mobility, all layers from the transport layer to the application layer will be responsible. This is because of the tight binding of the application layer with the session and transport layers, as discussed in section 1. The presentation layer is involved in the case where the new device has different presentation characteristics/abilities (content adaptation). Session layer mobility only involves the session layer, as we have to deal with the session states only, assuming that the device to which we are moving our sessions has all the capabilities and applications of the previous device.

QoS Mobility: QoS mobility refers to maintaining the same operating environment during mobility, when changing devices or when changing network service providers. This is further classified into two types, service mobility and preference mobility. In preference mobility the idea is that the user's working preferences, working priorities and context should move [2] with the user during mobility. Service mobility takes into account that the user should get the same services and QoS level in the newly visited network; for example, the IntServ and DiffServ parameters should be activated or negotiated in the new network. Both of the QoS mobility types aim to maintain the same QoS in the new/visited network. In general, QoS mobility is a tricky term in the sense that QoS can have different semantics for different networks, different service providers and different users. In general, preference mobility is dealt with at the application layer; preference mobility solutions keep a record of the user preferences and perform operations like user and network context gathering. In the case of service mobility, this could involve the application layer (application based services), the presentation layer (quality and level of presentation) and the network layer (throughput guarantee, low latency etc.).
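The classification above can be summarised compactly as a type hierarchy. The following sketch is only an illustration of the taxonomy in Figure 2; the names are ours and are not part of any standard API.

/** Hypothetical sketch of the mobility taxonomy described above (cf. Figure 2). */
interface Mobility { }

enum PhysicalMobility implements Mobility {
    INTRA_SUBNET,   // local: same subnet, link layer re-association only
    INTER_SUBNET,   // local: new subnet, link and network layers involved
    INTER_NETWORK   // global: new administrative domain, link and network layers involved
}

enum LogicalMobility implements Mobility {
    SAME_ACCESS_TECHNOLOGY,      // intra-device: horizontal interface change
    DIFFERENT_ACCESS_TECHNOLOGY, // intra-device: vertical interface change
    ADDRESS_CHANGE,              // intra-device: handled at the network layer
    SESSION_MOBILITY,            // inter-device: move session state to another device
    APPLICATION_FLOW_MOBILITY    // inter-device: move application/flow state
}

enum QoSMobility implements Mobility {
    SERVICE_MOBILITY,    // same services and QoS level in the visited network
    PREFERENCE_MOBILITY  // user preferences, priorities and context follow the user
}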

3 Meaning of Mobility on Different OSI Stack Layers

In the classical TCP/IP stack, mobility has no well-defined place and meaning. Many solutions have been proposed for mobility support at different layers, from the application layer down to the link layer. The physical layer is not involved, as mobility at the physical layer is handled implicitly by the wireless access technology, e.g. the physical movement of mobile nodes with respect to a single wireless access point. Every mobility management solution has its own strengths and weaknesses. To propose a new mobility solution for a specific layer it is important to understand: what does mobility mean in


association to that particular layer? In this section we will describe the meaning of mobility at different layers of the TCP/IP stack and study each layer by answering the following questions. What services are provided by this layer that are affected by mobility? What mobility types will affect this layer? How mobility can affect this layer? What is needed at this layer in case mobility? Outcome of this layer inorder to support mobility? 3.1 Physical Layer Mobility At the physical layer message is actually sent out over the network. The basic functions of physical layer are encoding, signaling, data transmission and reception of data. Encoding and signaling takes care of transforming the data from bits that reside within the computer into signals that can be sent over the network. After transformation the physical layer actually transmits the data over the link, and of-course is also receive the signals. Mobility will not affect the function of physical layer as they have to ensure that signals are transmitted and received even when the device is moving. There are other challenges associated to physical layer like fading and multi-path on the radio channel, which are outside the scope of this paper. 3.2 Link Layer Mobility Link layer mobility is also known as link layer handoffs. There are two types of link layer handoffs, horizontal handoffs and vertical handoffs. Horizontal handoffs may be invisible to higher layers, since it may occur within a single subnet. A vertical handoff is a handoff between different access technologies. Vertical handoffs are usually visible to network layers and cannot be handled at the link layer without substantial amount of effort. In the case of mobility the link layer perform channel scanning, detecting availability of potential access technologies, monitor channel conditions, authentication and reassociation. All of these operations can be performed within access points in same or different access technologies based on the type of handoff. This layer could be affected by all kind of physical mobility types and interface change mobility (logical mobility). In the case of mobility the link layer can detect different access technologies available, signal to noise ratio of different access point’s available, channel at which different access points are operating, information about the overlay networks etc. The information about channel conditions can be helpful in making decision about queuing, packet dropping and QoS[3]. Knowledge about potential links and link properties is useful in the case of overlay networks. As overlay networks have multiple heterogeneous link and involve choosing, initialing and decisions making about vertical and horizontal handoffs. Moreover, link layer techniques are also responsible for communication between different

link layer devices, to enable heterogeneity and proactive context caching for fast handoffs. The information about imminent link layer handoff to network layer could significantly improve performance of IP layer handoff. In [3], the proposed low-latency mobile-IP handoff scheme utilizes the information of signal strength to detect link layer handoffs. Using this information it speeds up network-layer handoff by replaying cached foreign agent advertisements. 3.3 Network Layer Mobility As discussed in the previous section the link layer mobility management deals with directly connected devices while network layer makes communication possible between different/remote networks. Location management (reach-ability) and naming (IP Address) are the two services provided by the network layer that could by highly effected in the case of mobility. This layer could be affected by inter-subnet mobility, inter-network mobility, interface change and address change. Network layer is also responsible for providing the required QoS services, so it will also be affected by service mobility. The major change which layer may undergo in case of mobility is the change of IP address when ever mobile device enters new network. The address change leads to the challenge of location management which apparently is another responsibility of network layer. Change of network requires the device to be configured to the new network setting, device should be able to get a new IP address and updating any naming service so that it can be reached by the corresponding hosts (location update). Protocols like DHCP and IPv6 autoreconfiguration allows dynamic reconfiguration of hosts by providing them with a new IP address and configuration parameters in a new visited network. DNS, Dynamic DNS and home agent binding (Mobile IP) are the mechanisms for location management. Furtherore, another challenge on the network layer will be the dynamic routing of the packets to reach the destination. The following two distinct solutions for routing are possible [4], use host specific routes and updating them as each host moves or use routes to sub-networks and add indirection agents to the architecture. The first approach is not scalable as numbers of internet hosts are increasing exponentially. The second approach is being followed by all Mobile IP [5] based solution. 3.4 Transport layer Mobility During mobility packet loss, link capacity and change of IP address affect the transport layer protocols. Packet loss affects the flow and congestion control algorithms. As in connection establishment Bandwidth Delay Product (BDP) is used to define the window size. This window size is then used for the duration of the connection. Once the device moves to a new network link capacity


might change, this will also effect the BDP. Therefore, new window size should also be negotiated again in the visited network. These issues of BDP are dealt by the mobility aware transport layer protocol. If we use the conventional transport layer implementation for mobility management, then transport layer will be the most affected layer because of its tight binding with the layer above and below it, as discussed in section 1. This layer in affected in the case of inter-subnet mobility, internetwork mobility, horizontal handoff, vertical handoff, address change, session mobility and application/flow mobility. In transport layer mobility management the reliability and integrity of end-toend data delivery, connection reestablishment after disconnection, longer connection state maintenance, waiting for reconnection, assessing transfer rates for new link and for ongoing connections are important issues. All of these issues are responsibility of layer 4 (Transport layer) mobility management and higher layers mobility management mechanism. The distribution of these tasks between transport, session and application layer is a thoughtful process. In any case the obvious tasks of transport layer mobility management solution will be reliable data delivery, re-ordering, re-connection and integrity. 3.5 Session Layer Mobility Main purpose of session layer is to maintain state information about the parameters involved in the session state and communication state. Mobility causes unexpected termination of transport layer service to an ongoing application communication session, which may result in the loss or invalidation of information relating to the state of the session. If session layer is used then this layer is independent of transport and lower layers. Session layer mobility protocol is only affected in the case of session mobility and application/flow mobility. Assuming that the session layer is used, then the session state information may include the number of bytes already transferred and written to disk for a file transfer application, the encryption keys and security associations set up for a secure remote login session and synchronization data for combining incoming streams. The responsibility of the session layer mobility management protocol is to ensure that this information is not lost as a result of connection termination at the transport layer. This means that application needs to provide the relevant session layer mobility handling mechanism with an access to all of the information that is required to pause, checkpoint, and restart the current session. Once transport layer service is re-established and a new communication socket is obtained, then this information can then be used to restart the application communication session in the same state as it was when the previous transport layer connection was terminated.
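A minimal sketch of the kind of session state a session layer mobility mechanism would checkpoint and restore is given below; the fields and method names are illustrative assumptions, not part of any particular protocol.

import java.io.Serializable;

/** Hypothetical sketch: session state preserved across a transport-layer disconnection. */
class SessionCheckpoint implements Serializable {
    long bytesTransferred;     // e.g. bytes already written to disk for a file transfer
    byte[] encryptionKey;      // security association set up for the session
    String synchronizationTag; // data needed to re-combine incoming streams

    /** Called by the application before the old transport connection is torn down. */
    static SessionCheckpoint pause(long bytes, byte[] key, String tag) {
        SessionCheckpoint c = new SessionCheckpoint();
        c.bytesTransferred = bytes;
        c.encryptionKey = key;
        c.synchronizationTag = tag;
        return c;
    }

    /** Called once a new socket is available, so the session resumes where it stopped. */
    void restartFrom(java.net.Socket newSocket) throws java.io.IOException {
        // An illustrative resume request: ask the peer to continue from the saved offset.
        String resume = "RESUME " + synchronizationTag + " " + bytesTransferred + "\n";
        newSocket.getOutputStream().write(resume.getBytes("UTF-8"));
    }
}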

3.6 Presentation Layer Mobility

The presentation layer is responsible for the delivery and formatting of information to the application layer for further processing or display. It relieves the application layer of concern regarding syntactical differences in data representation within the end-user systems. Issues like screen resolution, codec versions and application versions are to be dealt with at the presentation layer in the case of mobility. This might be needed in the case of session and application/flow mobility between different devices in a Personal Area Network (PAN), as different devices in the PAN might have different presentation capabilities and options. A presentation layer mobility management protocol should be capable of content adaptation, detecting the capabilities of the new device and modifying the presentation of the data accordingly.

3.7 Application Layer Mobility
Application layer mobility solutions are also known as application specific mobility solutions. There are no set requirements for a mobility solution at the application layer, so we cannot study it according to the effects of mobility on it. The application layer is highly flexible and is dependent on the underlying layers for network access, socket establishment etc. The application layer can detect the changes that occur in the underlying layers and can act accordingly to provide an application specific mobility solution. The important thing to keep in mind is that if the session state is maintained by the application, then the application needs to handle session mobility by itself. On the other hand, if the session state is made available to a 'session layer mobility handling protocol', then the application does not have to handle the mobility. Thus the benefit of session layer mobility is that applications do not need to deal with mobility handling; the drawback is that applications (or programming languages and compilers) need to be rewritten so that they can use the services provided by the session layer.

4 Mobility Management Requirements
In this section we will study the relationship between the mobility requirements and the mobility management protocols on different OSI layers. Eight major mobility requirements are listed here, i.e. location management, handoff management, security, Quality of Service (QoS), connection re-establishment, end-to-end reliability, multi-homing and layer specific performance enhancement. First of all we will give a brief overview of all these requirements, and then in the next section we will see how mobility management protocols fulfill these requirements at different layers.


4.1 Location Management
Location management is a process that involves identifying the location of the mobile node while it is moving within different networks. It includes two major tasks [6]: location registration (or location update) and call delivery.

4.2 Handoff/Handover Management
Handoff management is required to keep connections alive while the mobile node is moving. Handoff management can be done on several layers, for example the link layer, IP layer, transport layer and even the application layer. Handoff management at the IP layer is divided into two major types, inter-domain handoff and intra-domain handoff [6]. At the link layer, handoffs are divided into two major categories, horizontal handovers/handoffs and vertical handovers/handoffs [6].

4.3 Security
Security mainly involves authentication, confidentiality, integrity and authorization for access to the network resources. Firstly, the MN needs to authorize and authenticate itself while roaming in a new environment. Secondly, when QoS resources are provided to the MN, authorization should be confirmed so as to detect denial of service attacks. Another important question that arises is which layer should be responsible for security. Security solutions for wireless networks have been proposed at different layers, like the link layer (WEP), IP layer (IPSec) and application layer (SSL).

4.4 Quality of Service (QoS)
Transparency of QoS is a complex and important area in future wireless networks [8]. While moving within different networks, the user should have a guarantee of QoS. This is also referred to as service mobility [7]. QoS provisioning also comprises data plane (mainly traffic control, e.g. classification and scheduling) and control plane (mainly admission control and QoS signaling) functions. Changing location during the lifetime of a data flow introduces changed paths; thus it is required to identify the new path and install new resource control parameters via path-coupled QoS signaling. This is a really challenging problem in the wireless domain. Mobility management solutions trying to resolve QoS issues should address the above mentioned issues.

4.5 Connection Re-establishment
When mobile nodes move from one network to another they might lose their existing connections, and they need to re-establish the connections after getting the new IP layer details. This connection re-establishment is the task of the transport layer in particular; the problem can only be resolved at the transport layer or a layer above. In

mobility management for user to have transparent view of mobility we need to have connection disconnections and reconnections seamless and transparent from the user or applications. 4.6 End-to-End Reliability

End-to-end reliability is another important research area in the wireless and wired Internet. This feature depends mostly on transport layer services. In the conventional Internet, transport layer protocols depend on the services provided by the network layer and do not consider link properties; thus transport layer congestion control does not distinguish between packet loss caused by a wireless link and the normal packet loss in a wired network. This behavior degrades the performance of end-to-end connections in wireless networks. Therefore, mobility management protocols at the transport layer should take this factor into account in order to provide better end-to-end reliability.

Figure 3: Mobility management requirement analysis

4.7 Multi-homing
Multi-homing is an essential component of future wireless networks. Multi-homing means that a device is capable of communicating through multiple interfaces at the same time. These interfaces can belong to the same or different access technologies (e.g. WiFi, GPRS, GSM, CDMA and Bluetooth).

4.8 Layer-Specific Performance Enhancement
Several protocols have been built to enhance the performance of, or add new functionality to, current protocols. In our discussion of the Internet mobility requirement analysis we also look at the protocols (IAPP, LWAPP, FastMIPv6) that were developed to enhance the performance of protocols on a specific layer.

5 Mobility Management Solutions on Different Layers
In this section we discuss mobility management protocols proposed on different layers


of the OSI stack. First we briefly describe how these protocols work, then we analyze them according to the types of mobility they support (Figure 4), and finally we examine which mobility management requirements they fulfill (Figure 2).

5.1 Application Layer Solutions
5.1.1 L7-Mobility
This approach [9] introduces the concept of inter-domain mobility, which allows users to migrate their connectivity between different network domains. By adding a simple extension to current mobility practices for inter-domain mobility, L7-mobility provides support for hot and policy mobility. Inter-domain mobility enables handoff between two infrastructures that have nothing in common and may use totally incompatible mobility solutions. In this approach, applications have to create a new TCP connection every time the device hands off. Other issues, such as link layer, IP layer and transport layer issues, are dealt with by a Connection Diversity Framework [9]. L7-mobility provides handoff management and QoS using the policy mobility concept, while location management and connection re-establishment are the responsibility of the application. The applications handle the mobility aspects specific to them, such as restarting their IP connections and discovering remote application proxies. The application delegates all generic mobility functionality and link-specific mobility management to the connection manager and interacts with it through a well defined API. The role of the connection manager is to discover, evaluate, set up and monitor the various paths to the infrastructure on behalf of the applications. It directly manages the various link layers and includes abstraction modules specific to each link layer. The connection manager performs link discovery to find different paths to the infrastructure; it activates and configures link layers on demand to enable their use, monitors them for failures, and disconnects them when idle. The policy manager is responsible for policy-based QoS guarantees: it selects the most appropriate link to the infrastructure based on the current policy, the application requirements and link availability.

5.1.2 Application Layer Mobility Using SIP (ALM-SIP)
ALM-SIP uses the Session Initiation Protocol (SIP) [7] to provide terminal, personal, session and service mobility to applications ranging from Internet telephony to instant messaging. Terminal mobility is explained in two different scenarios: pre-call mobility and mid-call mobility. Whenever a mobile node changes its address, it registers its new address with its home registrar. In the case of mid-call mobility, in addition to registering with the home registrar it also sends an INVITE request to the corresponding node with its new IP address (a concept similar to route optimization in Mobile IPv4). The other mobility

types are also discussed in detail in the paper [7]. ALM-SIP provides location management, handoff management, QoS, and connection re-establishment.

5.2 Session Layer Solutions
5.2.1 Session Layer Mobility Management (SLM)
SLM [10] proposes a framework to manage connections to mobile hosts. This protocol integrates the notions of Quality of Service (QoS) management and mobility management and forms a base for overall session management. The QoS management is carried out in a number of ways. Firstly, SLM maintains the normal IP routing semantics between two hosts, so it allows resource reservation using both IntServ and DiffServ without breaking their semantics; if the host changes its address, the existing reservations can be torn down and new reservations established. Secondly, SLM allows the placement of intermediate proxy modules for data filtering. In [1], some enhancements to SLM are proposed. It is proposed to use the Network Access Identifier (NAI) for naming mobile objects [1]. Although this name is globally unique, it is not sufficient due to the diversity of endpoints that can be created by the same mobile object. In order to distinguish between these communication end-points, the name should be combined with another naming component, for example a Universally Unique Identifier (UUID), generated per end-point. An end-point identity (EPID) constructed from the pair [NAI, UUID] fulfills this requirement. This allows the end-point to move not only within a device but also between devices, so it supports both inter-device and intra-device mobility (Figure 2). SLM also provides Internet mobility services such as handoff management, QoS and connection re-establishment. SLM introduces two entities at the session layer: the reflector and the connector. Applications communicate with the reflector, and the reflector redirects the connection to the connector. The connector is responsible for handoff management and connection re-establishment: whenever the mobile node changes its point of attachment to the network, the connector layer on the mobile device establishes a new connection with the connector layer on the corresponding node, so the handoff is transparent to the applications. SLM uses a new entity called the User Location Server (ULS) for location management.

5.3 Transport Layer Solutions
5.3.1 MSOCKS
MSOCKS [11] presents an architecture called Transport Layer Mobility (TLM) that allows mobile nodes not only to change their point of attachment to the Internet, but also to control which network interface is used for the different kinds of data leaving from and arriving at the mobile node. This


approach is implemented using a split-proxy mechanism and is an extension of SOCKS. In MSOCKS, when an MN changes its IP address, it opens a new connection to the proxy and sends a RECONNECT message with the connection identifier of the existing connection. Upon receiving a RECONNECT message, the proxy separates the old connection between the MN and the proxy (MN-Proxy) from the connection between the proxy and the CN (Proxy-CN), splices the new MN-Proxy connection onto the Proxy-CN connection in place of the old MN-Proxy connection, and closes the old connection. Once the splice is set up, the proxy sends an OK message to the MN. MSOCKS provides handoff and location management using the proxy: whenever the mobile node makes a BIND or CONNECT request to the proxy asking to be connected to the corresponding node, the proxy issues a new connection identifier with which the logical session between the mobile node and the proxy is tracked. The MSOCKS RECONNECT request also adds multi-homing support to the mobile node. When the MSOCKS library wants to change the address or network interface that a TCP connection uses to communicate with the MSOCKS proxy, it simply opens a new connection to the proxy and sends a RECONNECT message specifying the connection identifier of the original connection. In this way MSOCKS also supports multi-homing.
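As an illustration of the split-proxy splice described above, the following Python sketch models a proxy that keeps both halves of each logical session and swaps in a new MN-side socket when a RECONNECT arrives. The class, message handling and identifier scheme are hypothetical simplifications, not the actual MSOCKS implementation or wire format.

    import socket

    class SplitProxy:
        # Toy model of an MSOCKS-like split proxy (hypothetical, for illustration only).
        def __init__(self):
            self.sessions = {}      # conn_id -> (mn_sock, cn_sock): MN-Proxy and Proxy-CN halves
            self.next_id = 0

        def handle_connect(self, mn_sock, cn_addr):
            # CONNECT/BIND: open the Proxy-CN half and issue a connection identifier.
            cn_sock = socket.create_connection(cn_addr)
            conn_id = self.next_id
            self.next_id += 1
            self.sessions[conn_id] = (mn_sock, cn_sock)
            return conn_id          # the MN quotes this identifier in later RECONNECTs

        def handle_reconnect(self, conn_id, new_mn_sock):
            # RECONNECT: splice the new MN-Proxy connection onto the existing Proxy-CN half.
            old_mn_sock, cn_sock = self.sessions[conn_id]
            self.sessions[conn_id] = (new_mn_sock, cn_sock)   # Proxy-CN half is untouched
            old_mn_sock.close()                               # drop the stale MN-Proxy half
            return "OK"                                       # acknowledgement sent back to the MN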

5.3.2 TCP Migrate
TCP Migrate [12] presents the design and implementation of an end-to-end architecture for Internet host mobility using dynamic updates to the Domain Name System (DNS). This protocol supports mobility management using the TCP Migrate option, which uses a token to identify the connection, while DNS is used for location management. With the TCP Migrate option, the token is negotiated at connection establishment time; after successful token negotiation, a connection can be uniquely identified either by (source address, source port, destination address, destination port) or by (source address, source port, token). This enables a mobile node to re-establish a previously established connection from a new address by sending a special Migrate SYN packet that contains the token. The mobility management requirements satisfied by TCP Migrate are handoff management, connection re-establishment and end-to-end reliability. Location management in TCP Migrate is done using DNS, and handoff management is achieved through the Migrate TCP option. TCP Migrate also provides end-to-end reliability, as it is an enhancement of conventional TCP and there is no middleware in between, such as the proxy in the case of MSOCKS.

Figure 4: Change of IP address in MSOCKS

Figure 5: TCP connection migration

Figure 6: mSCTP operation

5.3.3 mSCTP
The Stream Control Transmission Protocol (SCTP) [13] is a new transport protocol, existing at an equivalent level with UDP and TCP. The two major functions provided by SCTP that make it unique among transport layer protocols are multi-streaming and multi-homing. In particular, the multi-homing feature enables SCTP to be used for Internet mobility support without the support of network routers or special agents. The ADDIP extension [14] enables an SCTP endpoint to add a new IP address, delete an unnecessary IP address, and change the primary IP address used for an active SCTP association. mSCTP [15] is built on top of SCTP with the ADDIP extension to support soft handover at the transport layer. mSCTP supports handover and multi-homing but does not support location management. mSCTP is similar to SCTP and therefore supports unicast only. In SCTP the MN has only one association with the CN. At the initiation of the SCTP


association, the MN and the CN negotiate a list of IP addresses. Among these, one address is chosen as the primary address and the others are specified as active addresses. Whenever the mobile node enters a new network and obtains a new IP address, it sends an Address Configuration Change (ASCONF) chunk with the Add IP Address parameter to inform the CN of the new IP address. On receiving the ASCONF, the CN adds the new IP address to the list of association addresses and replies with an ASCONF-ACK chunk to the MN. While the MN is moving, it may change the primary path to the new IP address through the path management function. The SCTP association can therefore continue data transmission while moving to a new network. The MN can also inform the CN to delete the IP address of the previous network from the address list by sending an ASCONF chunk with the Delete IP Address parameter; this is done when the MN confirms that the link to the previous network has failed permanently.

5.4 Network Layer Solutions
5.4.1 MIPv6
The Mobile IPv6 [16] protocol allows nodes to remain reachable while moving around in the IPv6 Internet. Mobile nodes are always identified by their home address, regardless of their current point of attachment to the Internet. While situated away from its home network, a mobile node is also associated with a care-of address, which provides information about the mobile node's current location. IPv6 packets addressed to a mobile node's home address are transparently routed to its care-of address. The protocol is suited for mobility across both homogeneous and heterogeneous media. MIPv6 supports bidirectional tunneling and route optimization (Figure 7).

The location management and handoff management in MIPv6 and NEMO are almost similar; the only difference is that MIPv6 provides location and handoff management for a single host, whereas NEMO provides the same for the whole network associated with a mobile router. For location management, both use binding updates to the Home Agent (HA) and, in the case of route optimization, to the Corresponding Nodes (CN) whenever they change their point of attachment to the Internet. Handoff management is done in a similar way: the corresponding node creates a connection with the Home Agent, and in the case of handoff the mobile node opens a new connection with the Home Agent, making the handoff transparent to the corresponding node. This movement transparency of the mobile node no longer holds in the case of route optimization, where the mobile host sends a binding update to both the HA and the CN whenever it changes its point of attachment; in this case the architecture of the corresponding node also needs to be changed. Security in Mobile IPv6 and NEMO is achieved by using IP Security (IPSec) tunnels between the HA and the CN, and between the MN and the CN in the case of route optimization. Several variants of Mobile IPv6 have been proposed to improve its performance, such as FastMIPv6 [30], HMIPv6 [31] and NEMO [17].

5.4.2 Network Mobility
The objective of NEMO [17][25] is to develop mechanisms that provide permanent Internet connectivity to all the mobile network nodes via their permanent IP addresses and to maintain ongoing sessions as the mobile network changes its point of attachment to the Internet. In the network mobility architecture, the mobile router (MR) takes care of all the nodes within the network, irrespective of their capabilities. As a first step, the IETF NEMO Working Group is developing a basic protocol [18] that ensures uninterrupted connectivity to the mobile network nodes, without considering issues such as route optimization. The NEMO Basic protocol requires the MR to act on behalf of the nodes within its mobile network. Firstly, the MR indicates to its HA that it is acting as an MR as opposed to a mobile host. Secondly, the MR informs the HA of the mobile network prefixes. These prefixes are then used by the HA to intercept packets addressed to the mobile nodes and tunnel them to the MR (at its care-of address), which in turn decapsulates the packets and forwards them to the mobile nodes. Packets in the reverse direction are also tunneled via the HA in order to overcome ingress filtering restrictions [19].

Figure 7: Modes supported by MIPv6

Figure 8: Communication between LIN6 nodes

5.4.3 LIN6
LIN6 [20][26] is a new protocol that


supports mobility for IPv6. LIN6 claims to achieve handoff in 50 milliseconds. It is basically the Location Independent Network Architecture (LINA) [20] with IPv6 support. LINA employs a separation of identifier and locator to support node mobility. At the application layer, a target node can be specified by its identifier, in addition to the conventional model in which the target node is specified by its locator. When the application specifies a target node, the identifier sublayer maps the identifier to the corresponding locator, and the delivery sublayer then "embeds" the identifier in the locator. In conventional networks the IP address has two semantics associated with it: identification and location. LINA introduces two entities at the network layer to support node mobility: the interface locator (which uniquely identifies the node's current port) and the node identifier (which signifies the identity of the node). Location management is done by DNS and a mapping agent (MA). LIN6 defines two types of network address: the LIN6 generalized ID, formed by concatenating the LIN6 prefix and the LIN6 ID and used at the transport layer to identify the connection, and the LIN6 address, formed by concatenating the network prefix and the LIN6 ID and used for routing packets over the network. Figure 8 shows how communication between LIN6 nodes is done.
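As a rough, non-normative sketch of the two LIN6 address types just described, the snippet below builds 128-bit values by concatenating a 64-bit prefix with a 64-bit LIN6 ID; the 64/64 split and all constants are assumptions for illustration, not the exact LIN6 encoding.

    MASK64 = (1 << 64) - 1

    def lin6_generalized_id(lin6_prefix: int, lin6_id: int) -> int:
        # Used at the transport layer to identify connections; it does not change with movement.
        return (lin6_prefix << 64) | (lin6_id & MASK64)

    def lin6_address(network_prefix: int, lin6_id: int) -> int:
        # Used for routing; only the network prefix changes as the node moves.
        return (network_prefix << 64) | (lin6_id & MASK64)

    # Hypothetical values: the locator changes with the visited network, the identifier does not.
    gid  = lin6_generalized_id(0x20010DB800000001, 0xBEEF)
    addr = lin6_address(0x20010DB8FFFF0002, 0xBEEF)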

It uses IPSec for security. In the sending node, IPSec is processed as follows: when the packet is passed down from the transport layer, the source and destination address fields in the IPv6 header contain LIN6 generalized IDs. The security association is selected using the destination LIN6 generalized ID, and the IPsec calculation is then executed. After that, the source and destination LIN6 generalized IDs are converted to LIN6 addresses. In the receiving node, IPsec is processed as follows: upon packet reception, the source and destination address fields of the IPv6 header contain LIN6 addresses. First, these LIN6 addresses are converted back to LIN6 generalized IDs. The security association is then selected using the destination LIN6 generalized ID, and the IPsec calculation is executed.

5.4.4 Cellular IP
Cellular IP [21] is an Internet host mobility protocol that takes an alternative approach to those found in mobile telecommunications (e.g. General Packet Radio Service) and in IP networking (Mobile IP). Cellular IP is a new mobile host protocol that is optimized to provide access to a Mobile IP enabled Internet in support of fast-moving wireless hosts. The universal component of a Cellular IP network is the base station, which serves as a wireless access point but at the same time routes IP packets and integrates the cellular control functionality traditionally found in the Mobile Switching Center (MSC) and Base Station Controller (BSC). The Cellular IP network is connected to the Internet via a gateway router. Mobility between gateways (i.e. between Cellular IP access networks) is managed by Mobile IP, while mobility within an access network is handled by Cellular IP. Location management and handoff support are integrated with routing. To minimize control messages, regular data packets transmitted by mobile hosts are used to establish host location information. Uplink packets are routed from the mobile to the gateway on a hop-by-hop basis, and the path taken by these packets is cached in the base stations. Downlink packets addressed to a mobile host take the same path as the one used by recent packets transmitted by that host. When the mobile host has no data to transmit, it periodically sends empty IP packets to the gateway to maintain its downlink routing state. To perform a handoff, a mobile host tunes its radio to the new base station and sends a route-update packet. This creates routing cache mappings on the route to the gateway, thereby configuring the downlink route to the new base station.

Figure 9: MIP & CIP integration architecture

5.5 Link Layer Solutions
5.5.1 Inter Access Point Protocol (802.11F)
IEEE 802.11F [22], or the Inter-Access Point Protocol (IAPP), is a recommendation that describes an optional extension to IEEE 802.11 enabling wireless access points to communicate among multi-vendor systems. Briefly, IAPP is a set of functionalities and a protocol used by an AP to communicate with other APs on a common distribution system (DS). It is part of a communication system comprising Access Points (APs), Mobile Stations (STAs), an arbitrarily connected DS and Remote Authentication Dial In User Service (RADIUS) servers. RADIUS provides two functions: mapping the Basic Service Set Identification (BSSID) of an AP to its IP address on the DS, and distributing keys to the APs to allow encryption of the communication between them. The basic functions of IAPP are to facilitate


certain maintenance of the Extended Service Set (ESS), support the mobility of STAs, enable APs to enforce the requirement of a single association for each STA at a given time, and enable proactive caching for fast handoff.

Figure 10 : Working of IAPP

The working of IAPP is demonstrated in Figure 10. The mobility management requirements fulfilled by IAPP are handover management, security and layer-specific performance enhancement. In addition, it provides a means for access point communication and proactive context caching for fast handovers. It provides 802.11i (WPA2) based security and also provides a secure way for inter-access-point communication within a single Extended Service Set (ESS).

5.5.2 Lightweight Access Point Protocol (LWAPP)
LWAPP [23] is a protocol designed to make communication between access points and wireless switches automatic. It allows a router or switch to interoperably control and manage a collection of wireless access points, moving some of the load caused by Wi-Fi processing and functional complexity to centralized wireless switches or routers. LWAPP defines how lightweight access points communicate with Access Routers (ARs). It assumes a network configuration consisting of multiple APs connected either via layer 2 (Ethernet) or layer 3 (IP) to an AR. The APs can be considered remote RF interfaces controlled by the AR. The AP forwards all 802.11 frames received from mobile stations (STAs) to the AR for processing via the LWAPP protocol. Similarly, packets from authorized mobiles are forwarded by the AP to the AR via this protocol if the protocol operates at layer 3. These forwarding operations between APs and ARs are accomplished according to an LWAPP transport-layer specification, which defines how to tunnel 802.11 frames in 802.3 (Ethernet) frames or IP packets in UDP packets. LWAPP is said to fulfill handoff management, security, and layer-specific performance improvement at the link layer. As discussed above, there are two major components in LWAPP: the wireless LAN controller and the lightweight access points. Real-time frame exchange and certain real-time portions of MAC management are handled by the access points, while other management tasks such as authentication, security management and handoff management are handled by the wireless LAN controllers. LWAPP provides cellular-like fast handoffs, which makes it an excellent protocol for supporting mobile applications such as voice over WLAN. The performance enhancement is achieved by transferring the intelligence to a centralized wireless LAN controller and making the access points lightweight. Within a LAN, the mobile station does not have to re-authenticate itself as long as it is moving within the range of the same wireless LAN controller.

Figure 11: LWAPP architecture

Figure 12: Working of HIP

5.6 New Layer Solutions
5.6.1 Host Identity Protocol
HIP [24] handles mobility by introducing a thin layer of additional resolution between the network and transport layers, decoupling transport sockets from network-level addresses. Instead of binding to IP addresses, HIP-enabled applications bind to 128-bit Host Identity Tags (HITs), global identifiers generated by hashing a public key. In order for HITs to be globally reachable, some kind of infrastructural support (location management) is required to map HITs to routable network-level addresses. At present, several mechanisms, including DNS,


distributed hash tables, and rendezvous servers are being investigated as means to provide this mapping. However, once both hosts engaged in end-to-end communication are aware of each other's HIT, no further infrastructural support is required unless both hosts change network location simultaneously with no prior notification. Due to the decoupling between the network and transport layers, HIP enables applications on the mobile node to continue communicating oblivious to changes in the available network addresses, and also provides a mechanism to directly signal a change of network address to the correspondent node. An authentication process precedes each HIP communication session: HIP uses a four-way key exchange to verify the identity of the hosts, termed Initiator (I) and Responder (R). A mobility management solution based on adding a new layer appears to be quite useful in terms of fulfilling mobility requirements. HIP can provide location management, handoff management, security and multi-homing. HIP decouples the transport layer from the internetworking layer, binds the transport associations to Host Identities and keeps internetworking-layer addresses for routing; therefore, HIP can provide a degree of internetworking mobility and multi-homing. HIP mobility includes IP address changes for either party; thus, a system is considered mobile if its IP address can change dynamically. HIP links IP addresses together when multiple IP addresses correspond to the same Host Identity, and if one address becomes unusable, or a more preferred address becomes available, existing transport associations can easily be moved to another address.
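To make the HIT idea concrete, here is a minimal Python sketch of a 128-bit identifier derived by hashing a public key. It deliberately ignores the ORCHID prefix and context handling that HIP actually specifies, so it illustrates the principle rather than a standards-compliant HIT.

    import hashlib

    def hit_like_tag(public_key: bytes) -> bytes:
        # Hash the host's public key and keep 128 bits as a flat, location-independent identifier.
        # (Real HITs embed an IPv6 ORCHID prefix; that detail is omitted in this toy version.)
        return hashlib.sha256(public_key).digest()[:16]

    tag = hit_like_tag(b"...public key bytes...")   # placeholder key material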

Figure 13 : Mobility Management protocols and their support for mobility types

6 Proposed Mobility Enabled Protocol Stack
Our proposed mobility enabled protocol stack, naming mechanism and wireless network architecture for future wireless systems is shown in Fig. 14. In this architecture we demonstrate how we can use IAPP (link layer solution), LWAPP

(link layer solution), 802.11 (link layer solution), MIPv6 (network layer solution), HIP (new layer solution), SLM (session layer solution) and Freeze-TCP together, in order to support all the mobility requirements (Fig. X) and all mobility types (Fig. X). Generally, it is not recommended that the above-mentioned protocols co-exist with their full implementations, because they have different endpoint objects [1] and many mobility management functions would be redundant if they did. Therefore, in our solution we propose to use an And-Based Co-existence (ABC) mechanism, as described in Section 1. In this proposed scheme, we propose a new end-host design, an enhancement of the naming service (e.g. DNS) to include End-Point Identities (EPID) [1], and a new network architecture. We believe that in future wireless networks an end-host will have multiple interfaces and can have multiple server/client applications running on it. There may be cases in which we know only the application name/id but have no information about the device on which the application resides or the network in which it is located. Moreover, mobile users might want to move applications/sessions/flows between different devices in a Personal Area Network (PAN). Keeping these future mobility requirements in mind, we propose an end-host design with three levels of identities: interface identity, host identity and application identity. These identities have many-to-one and one-to-many relationships, respectively. Interface identities are Care-of Addresses (Mobile IPv6), which a device obtains in the visited network. A device identity is a Host Identity Tag (HIT) as defined in the Host Identity Protocol (HIP) [Ref]. A Mobile Object (MO) identity is an EPID [1]. Another advantage of such an architecture is that there is no tight binding of the IP layer, transport layer and application layer, as shown in (Figure 1). The naming system (e.g. DNS) is modified to hold an additional mapping of Application ID (AID) to IP address. The proposed naming record looks like: IP address (could be multiple, as a device could have multiple interfaces) : HIT (logically there should be only one) : Application ID (could be multiple, as a device can have multiple applications running on it). In this case, even if a user knows only the application/service he or she needs to access and has no information about the location (IP) of the service or the device it resides on (HIT), the user can still query the DNS with the AID to get the corresponding HIT and IP address information. In our network architecture we propose to have lightweight access points with which mobile devices communicate directly through the 802.11 interface. The lightweight access point and the access router communicate using LWAPP [Ref] based messages. Communication between heterogeneous access routers is done using 802.11F (IAPP), and this


protocol can also support proactive caching for fast mobile-host handoffs. The working of this proposed architecture is shown in Figure X with the help of three mobility scenarios: first, movement of the mobile device within the same LWAPP-administered domain; second, movement of the mobile device across two or more different LWAPP-administered domains; and finally, movement of mobile objects from one device to another within a Personal Area Network (PAN). In case (1), the mobile device moves between the lightweight APs within the administrative range of a single LWAPP-enabled access router. Since all the management operations, including handover management, are handled by the same Access Router (AR), no changes to the end point (mobile device) are needed (only the link-layer association needs to be handled). In case (2), the mobile device moves between two different LWAPP administrative domains. The mobile device acquires a new address (CoA) from the new access router, and IAPP initiates a fast handoff (3) as discussed in Section 5.5.1. When the handover is complete, the mobile node needs to update its CoA in the DNS, the HA and the CN (in case of route optimization). In case (6), where the mobile object (MO) moves from one device to another, the application needs to update its corresponding HIT and IP address in the naming system. A HIP and MIP integration architecture is explained in HarMoNy [27]. The transport protocol we plan to use is Freeze-TCP [28], a connection migration scheme that lets the MH 'freeze' or stop an existing TCP connection during handoff by advertising a zero window size to the CN, and unfreezes the connection after handoff. This technique suits our architecture because it is a mobility-aware scheme and reduces packet loss during the handoff process. Moreover, the technique is specific to the transport layer and can work with techniques at other higher or lower layers. In our proposed architecture, vertical handoff can be achieved using any context-aware application-layer vertical handoff solution [2][28].

Figure 14: A proposed mobility management architecture based on the AND-Based Co-Existence (ABC) concept for future wireless networks

7 Conclusion
In this paper we emphasize the concept of distributing the mobility management tasks across all layers of the OSI protocol stack. We introduced the notion of a "Mobility Enabled Protocol Stack" instead of a mobility management solution on a


specific layer. In order to distribute the mobility management tasks across all OSI layers, we discuss these layers according to the mobility management requirements, their responsibilities in the case of mobility, the mobility types that can affect them and the mobility types that they can support. We describe currently proposed protocols for mobility management on the different OSI layers. In addition, we point out the mobility management requirements that these protocols can fulfill and the mobility types that they can support. As all these protocols are specific to a mobility solution on a particular layer, they inherit the limitations imposed by the dependency of that layer on other layers. Keeping this in mind, we propose And-Based Co-existence of mobility management solutions on different layers as the ideal way to materialize the concept of a "Mobility Enabled Protocol Stack" for future wireless networks. We also propose a novel mobility enabled protocol stack based on our notion of And-Based Co-existence. In our technique we also eliminate the dependencies of the different OSI layers on each other, to introduce flexibility and hot-swapping of interfaces and mobile objects on a single device. Our proposed technique supports all the mobility types and requirements and opens up a new horizon in the area of co-existence. We identified two types of co-existence techniques: And-Based and Or-Based. Although in this paper we proposed a solution based on the And-Based co-existence concept, we are not ignoring Or-Based co-existence. In future work we plan to study Or-Based co-existence in more detail, treating redundancy as an opportunity rather than a threat for mobility management. The vision of our research is to create a hot-swappable mobility management stack that can be modified, changed and moved according to the network and user context.

8 REFERENCES

[1] Ismailov, Y., Holler, J., Herborn, S., Seneviratne, A.: Internet Mobility: An Approach to Mobile End-System Design. IEEE International Conference on Mobile Communication and Learning Technologies, (April 2006), pp. 124-124.
[2] Balasubramaniam, S., Pfeifer, T., Indulska, J.: Active Nodes Supporting Context-Aware Vertical Handover in Pervasive Computing Environments with Redundant Positioning. IEEE International Symposium on Wireless Pervasive Computing (ISWPC) 2006, Phuket, Thailand, (January 2006).
[3] Yuan Chen, Lemin Li: A Fair Packet Dropping Algorithm Considering Channel Condition in Diff-Serv Wireless Networks. The Fourth International Conference on Computer and Information Technology (CIT'04), (June 2004), pp. 554-559.
[4] Wesley M. Eddy: At What Layer Does Mobility Belong? IEEE Communications Magazine, (Oct. 2004), pp. 155-159.
[5] C. Perkins: IP Mobility Support for IPv4. IETF RFC 3220, (January 2002).
[6] Ian F. Akyildiz, Xie, J., Mohanty, S.: A Survey of Mobility Management in Next-Generation All-IP-Based Wireless Systems. IEEE Wireless Communications, (August 2004), pp. 16-28.
[7] Schulzrinne, H., Wedlund, E.: Application-Layer Mobility Using SIP. Mobile Computing and Communications Review, Volume 1, Number 2, (July 2000).
[8] Fu, X., Hogrefe, D., Narayanan, S., Soltwisch, R.: QoS and Security in 4G Networks. First Annual Global Mobile Congress, (Oct. 2004).
[9] Tourrilhes, J.: L7-Mobility: A Framework for Handling Mobility at the Application Layer. 15th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, (2004), Vol. 2, pp. 1246-1251.
[10] Landfeldt, B., Larsson, T., Ismailov, Y., Seneviratne, A.: SLM, A Framework for Session Layer Mobility Management. Proc. IEEE ICCCN, (Oct. 1999).
[11] Maltz, D., Bhagwat, P.: MSOCKS: An Architecture for Transport Layer Mobility. Proc. IEEE INFOCOM, (1998).
[12] Snoeren, A., Balakrishnan, H.: An End-to-End Approach to Host Mobility. Proc. ACM MobiCom, (2000).
[13] L. Ong, J. Yoakum: An Introduction to the Stream Control Transmission Protocol (SCTP). IETF RFC 3286, (May 2002).
[14] Stream Control Transmission Protocol (SCTP) Dynamic Address Reconfiguration. IETF Internet Draft, (March 2003).
[15] M. Riegel, M. Tuexen: Mobile SCTP. IETF Internet Draft, draft-riegel-tuexen-mobile-sctp-02.txt, (Feb. 2003).
[16] D. Johnson, C. Perkins, J. Arkko: Mobility Support in IPv6. IETF RFC 3775, (June 2004).
[17] V. Devarapalli, R. Wakikawa, A. Petrescu, P. Thubert: Network Mobility (NEMO) Basic Support Protocol. IETF RFC 3963, (Jan. 2005).
[18] NEMO Working Group: http://www.ietf.org/html.charters/nemo-charter.html
[19] Ferguson, P., Senie, D.: Network Ingress Filtering: Defeating Denial of Service Attacks which Employ IP Source Address Spoofing. IETF RFC 2267, (Jan. 1998).
[20] Ishiyama, M., Kunishi, M., Teraoka, F.: An Analysis of Mobility Handling in LIN6. International Symposium on Wireless Personal Multimedia Communications, (2001).
[21] A. G. Valko, A. T. Campbell, J. Gomez:


"Cellular IP - A Local Mobility Protocol," IEEE 13th Annual Computer Communications Workshop, Oxford, Mississippi, (October 1998). [22] Chun. C, Shin. Kang, An Enhanced Inter-Access Point Protocol for Uniform Intra and Intersubnet Handoffs. IEEE Transactions on Mobile Computing, Vol. 4, (August 2005). [23] Zhaohui Cheng, Manos Nistazakis and Richard Comley. "Security Analysis of LWAPP", IWWST-2004, London, UK, (April 2004) [24] R. Moskowitz, P. Nikander, Host Identity Protocol (HIP) Architecture, IETF RFC 4423, (May 2006) [25] Perena. E, Sivaraman. V, Seneviratne. A Survey on network mobility support, ACM SIGMOBILE Mobile Computing and Communications Review, Volume 8, Issue 2 (April 2004), pp. 7-19. [26] Xiaoming Fu, Dieter Hogrefe, Deguang Le, A Review of Mobility Support Paradigms for the Internet, IEEE Communications Surveys and Tutorials, Volume 8, No. 1, First Quarter, IEEE, ISSN 1553-877X, ( 2006). [27] Herborn. S, Haslett. L, Boreli. R, Seneviratne, HarMoNy – HIP Mobile Networks. VTC 2006. [28] T. Goff, J. Moronski, D. S. Phatak, V. Gupta, “Freeze-TCP: a true end-to-end enhancement mechanism for mobile environments”, IEEE INFOCOMM, Tel Aviv, Isreal. Pp 1537-1545, (March 2000) [29] Vidales. P, Baliosian. J etl. Autonomic System for Mobility Support in 4G Networks, IEEE Journal on Selected Areas in Communications. Vo1 23. pp 2288-2304. [30] R. Koodli, Fast Handovers for Mobile IPv6, IETF RFC 4068, (July 2005) [31] H. Soliman, C. Castelluccia, K. El Malki, L. Bellier, Hierarchical Mobile IPv6 Mobility Management (HMIPv6), IETF RFC 4140, (August 2005)


EFFICIENT IMPLEMENTATION OF DOWNLINK CDMA EQUALIZATION USING FREQUENCY DOMAIN APPROXIMATION
F. S. Al-kamali+, M. I. Dessouky, B. M. Sallam, and F. E. El-Samie++
Department of Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
E-mails: +faisalalkamali@yahoo.com, ++fathi_sayed@yahoo.com

ABSTRACT
A signal transmitted through a wireless channel may be severely distorted due to intersymbol interference (ISI) and multiple access interference (MAI). In this paper, we propose an efficient CDMA receiver based on frequency domain equalization with a regularized zero forcing equalizer and a unit clipper decision with parallel interference cancellation (FDE-RZF-CPIC) to combat both ISI and MAI. This receiver is suitable for downlink zero-padding CDMA (ZP-CDMA) cellular systems. The effects of the decision function, the channel estimation, the number of cancelled users, and the user loading on the performance of the proposed receiver are discussed in the paper. The bit error rate (BER) performance of the proposed receiver is evaluated by computer simulations. It is found that the proposed receiver provides a good BER performance, even with a large number of interfering users. At a BER of 10^-3, the performance gain of the proposed receiver is about 2 dB over the RAKE receiver with clipper decision and parallel interference cancellation in the half-loaded case (8 users) and is much larger in the fully loaded case (16 users).

Keywords: Downlink CDMA, Decision Functions, PIC, FDE-RZF, Zero Padding, Channel Estimation.

1 INTRODUCTION

Recently, single-carrier transmission with frequency domain equalization (FDE) has attracted much attention for its excellent performance even in strongly frequency-selective channels. In practice, the number of fingers in the RAKE receiver is limited because of hardware complexity. The use of an FDE can alleviate the complexity problem of the RAKE receiver arising from too many paths in a severely frequency-selective channel. It has been shown that the FDE can take the place of the conventional RAKE receiver, with much improved BER performance, for DS/CDMA signal reception over a severely frequency-selective channel [1-5]. This gives CDMA with FDE the power to compete with multi-carrier CDMA (MC-CDMA) in fourth-generation systems. The performance of CDMA is mainly limited by the interference from other users, which is called the MAI. Therefore, PIC has to be applied in CDMA receivers [6-13]. PIC has gained considerable attention for its potential ability to increase system capacity and for its simplicity. However, the conventional PIC often suffers from the error propagation phenomenon, which contributes to progressively enhanced interference [12]. There are several algorithms for interference cancellation in CDMA systems [6-13]. Most of these algorithms are designed for the uplink. For uplink interference cancellation, it is assumed that the base station knows the spreading codes of all the active users. For downlink CDMA, the receiver knows only the spreading code of the desired user. As a result, PIC has been assumed to be applicable at the base station, and not at the mobile terminal, where only the information stream is to be decoded and the spreading codes of the interfering users are unknown. However, in recent years many algorithms have been proposed for estimating the codes of the interfering users, and PIC has been applied to downlink CDMA [7]. The main target of this paper is to analyze the performance of PIC for downlink CDMA with different decision functions and hence to develop an efficient receiver based on FDE and PIC that is suitable for downlink CDMA. The remainder of this paper is organized as follows: in Section 2, the system model of downlink CDMA is presented. In Section 3, the concept of PIC


is discussed. Section 4 deals with the different possible decision functions. The FDE for downlink ZP-CDMA is presented in Section 5. In Section 6, the proposed FDE-RZF-CPIC algorithm is presented. Channel estimation is discussed in Section 7. The relative performance of the proposed receiver is compared to some existing approaches in Section 8. Finally, Section 9 gives the concluding remarks.
Notations: The symbols (.)^H, (.)^T, and (.)^-1 designate the complex conjugate transpose of a matrix, the transpose of a matrix, and the inverse of a matrix, respectively. Vectors and matrices are represented in boldface.

2 SYSTEM MODEL

In downlink CDMA, the channel is common with frequency selective fading. We assume that the channel parameters vary slowly with time, so that for sufficiently short intervals the channel is approximately a linear time-invariant system. The baseband channel response can then be expressed by Dirac delta functions as follows [9]:

$h(t) = \sum_{w=1}^{W} h_w \, \delta(t - \tau_w)$    (1)

where $h_w$ and $\tau_w$ are the complex fading gain and the propagation delay of the w-th path, and W is the number of multipath components of the channel impulse response. In this paper, we assume block fading, where the path gains stay constant over one block duration. The received signal at the mobile can be written as [9]:

$r(t) = \sum_{l=1}^{L}\sum_{k=1}^{K}\sum_{w=1}^{W} h_w A_k b_k(l)\, c(t-\tau_w)\, s_k(t - lT_s - \tau_w) + n(t)$    (2)

where $A_k$ is the amplitude, $b_k(l) \in \{-1,1\}$ is the l-th bit, $s_k(t)$ is the spreading code of user k, and $c(t)$ is the scrambling code. In matrix form, the received signal in Eq. (2) can be written as follows [6]:

$\mathbf{r} = \mathbf{H}_0\mathbf{d}_m + \mathbf{H}_1\mathbf{d}_{m-1} + \mathbf{n}_m$    (3)

where $\mathbf{d}_m$ is the m-th block of the transmitted signal, $\mathbf{r}$ is the received vector, and $\mathbf{n}_m$ is the additive noise with zero mean and variance $\sigma_n^2$. $\mathbf{H}_0$ is the (N×L)×(N×L) matrix describing a multipath channel having impulse response h(t) of length W. L is the number of symbols for each user, N = T_S/T_c is the number of chips per bit (the spreading factor), T_S is the symbol period, T_c is the chip period, and K is the number of active users. We can observe that the inter-block interference (IBI) would disappear from Eq. (3) if the last W-1 elements of the vectors $\mathbf{d}_m$ and $\mathbf{d}_{m-1}$ were equal, i.e., if a zero padding (or cyclic prefix) process is used. The length of the zero padding must be greater than W. In this paper we use the zero padding method of [1]. Eq. (3) can then be written as:

$\mathbf{r} = \mathbf{H}\mathbf{d} + \mathbf{n}$    (4)

where $\mathbf{H} = \mathbf{H}_0 + \mathbf{H}_1$ now has a circulant structure. $\mathbf{H}$ can be written as:

$\mathbf{H} = \begin{bmatrix} h[0] & 0 & \cdots & 0 & h[W-1] & \cdots & h[1] \\ \vdots & h[0] & \ddots & & & \ddots & \vdots \\ h[W-1] & & \ddots & & & & h[W-1] \\ 0 & \ddots & & \ddots & & & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & \cdots & 0 & h[W-1] & \cdots & \cdots & h[0] \end{bmatrix}$    (5)

The vector $\mathbf{d}$ can be represented as:

$\mathbf{d} = \mathbf{C}\mathbf{S}\mathbf{b}$    (6)

where $\mathbf{C}$ is an (N×L)×(N×L) scrambling code matrix, $\mathbf{S}$ is an (N×L)×(K×L) orthogonal code matrix, and $\mathbf{H}_1\mathbf{d}_{m-1}$ is the inter-block interference (IBI) term [6]. $\mathbf{H}_0$ and $\mathbf{H}_1$ can be written as [6]:

$\mathbf{H}_0 = \begin{bmatrix} h[0] & 0 & \cdots & \cdots & 0 \\ \vdots & h[0] & & & \vdots \\ h[W-1] & & \ddots & & \\ 0 & \ddots & & \ddots & 0 \\ 0 & \cdots & h[W-1] & \cdots & h[0] \end{bmatrix}, \qquad \mathbf{H}_1 = \begin{bmatrix} 0 & \cdots & 0 & h[W-1] & \cdots & h[1] \\ \vdots & & & & \ddots & \vdots \\ & & & & & h[W-1] \\ \vdots & & & & & 0 \\ 0 & \cdots & & & \cdots & 0 \end{bmatrix}$    (7)

The structures of the individual components in Eq. (6) are given below [14]:

$\mathbf{S} = \mathrm{diag}[\,\mathbf{S}\;\;\mathbf{S}\;\cdots\;\mathbf{S}\,]$    (8)
$\mathbf{S} = [\,\mathbf{s}_1\;\;\mathbf{s}_2\;\cdots\;\mathbf{s}_K\,]$    (9)
$\mathbf{s}_k = [\,s_k(0),\,s_k(1),\,\ldots,\,s_k(N-1)\,]^T$    (10)
$\mathbf{b} = [\,\mathbf{b}(1)\;\;\mathbf{b}(2)\;\cdots\;\mathbf{b}(L)\,]^T$    (11)
$\mathbf{C} = \mathrm{diag}[\,c(1),\,c(2),\,\ldots,\,c(N\times L)\,]$    (12)
$\mathbf{b}(l) = [\,b_1(l),\,b_2(l),\,\ldots,\,b_K(l)\,]^T$    (13)

where $\mathbf{s}_k$ is the signature sequence (code), $\mathbf{b}$ is a vector consisting of the users' amplitudes and transmitted bits, and $b_k(l)$ is the l-th bit. In this paper, we suppose a perfect estimation of the downlink interfering codes. So, we can write the received signal as follows:

$\mathbf{r} = \mathbf{H}\mathbf{C}\mathbf{S}_d\mathbf{b}_d + \mathbf{H}\mathbf{C}\mathbf{U}\mathbf{b}_{int} + \mathbf{n}$    (14)

where $\mathbf{S}_d$ is an (N×L)×L matrix consisting of the spreading code of the desired user, $\mathbf{U}$ is an (N×L)×((K-1)×L) matrix consisting of the spreading codes of the interfering users, $\mathbf{b}_d$ is an L×1 vector consisting of the desired symbols, and $\mathbf{b}_{int}$ is a ((K-1)×L)×1 vector consisting of the interfering symbols. In the zero padding method, N_ZP zeros are added to the end of N_F - N_ZP symbols to build a block of N_F symbols before transmission. At the receiver, the FFT detection is performed on the padded data block, and the detected zeros at the end of this data block are then discarded after despreading. The zero padding process is illustrated in Fig. (1).

3 PIC FOR DOWNLINK ZP-CDMA

Parallel interference cancellation for CDMA systems has attracted much interest due to its structured architecture, which facilitates easy implementation. It was first introduced in 1990 [15]. Such multistage PIC methods attempt to cancel the MAI based on tentative decisions: the idea of PIC is to estimate the multiple-access and multipath-induced interference and then to subtract this interference estimate. The circulant matrix $\mathbf{H}$ can be efficiently diagonalized by the fast Fourier transform (FFT) and the inverse fast Fourier transform (IFFT). Let $\Psi^{-1}$ and $\Psi$ denote the FFT matrix and the IFFT matrix, respectively. A circulant matrix $\mathbf{A}$ can be written as (see Appendix 1):

$\mathbf{A} = \Psi\Lambda\Psi^{-1}$    (15)

where $\Lambda$ is the FFT of the circulant sequence of $\mathbf{A}$. The implementation of the RAKE receiver with PIC in the frequency domain can be summarized as follows [6]:
1- Apply the RAKE receiver to the received signal as follows [14]:

$\mathbf{d}_{RAKE} = \Psi(\Lambda^H\mathbf{R}_T)$    (16)

where $\Lambda$ is a diagonal matrix containing the FFT of the circulant sequence of $\mathbf{H}$, and $\mathbf{R}_T$ is the FFT of $\mathbf{r}$.
2- Estimate all the interference as follows:

$\hat{\mathbf{b}}_{int} = \mathbf{U}^T\mathbf{C}^H\mathbf{d}_{RAKE}$    (17)

3- Discard the detected zero symbols at the end of each block to produce $\hat{\mathbf{b}}_{int}$, then take the decision as follows:

$\tilde{\mathbf{b}}_{int} = f_{dec}\{\hat{\mathbf{b}}_{int}\}$    (18)

where $f_{dec}(\cdot)$ is the tentative decision function.
4- Add zeros for padding, then regenerate the MAI as follows:

$\mathbf{r}_{MAI} = \mathbf{H}\mathbf{C}\mathbf{U}\tilde{\mathbf{b}}'_{int}$    (19)

where $\tilde{\mathbf{b}}'_{int}$ is the zero-padded version of $\tilde{\mathbf{b}}_{int}$.
5- Use PIC to cancel the effect of the interference on the received signal to obtain an interference-free signal:

$\mathbf{z} = \mathbf{r} - \mathbf{r}_{MAI}$    (20)

6- Apply the RAKE receiver to the vector $\mathbf{z}$ as follows:

$\mathbf{d}_{RAKE} = \Psi(\Lambda^H\Psi^{-1}\mathbf{z})$    (21)

7- Descramble and despread the obtained data.
8- Finally, discard the detected zero symbols and perform a hard decision.
Due to error propagation, PIC with hard decision may perform worse than PIC with linear or soft decision functions. On the other hand, hard-decision interference cancellation can completely cancel the interference when the hard decisions made are correct [12].

4 DECISION FUNCTIONS

The performance of PIC depends on the decision function used in the interference cancellation iterations, e.g., hard, soft, null zone, unit clipper, and hybrid decision functions [12]. The general model for the decision function is:

$y = f_{dec}(x)$    (22)

The following decision functions can be used.
The hard limiter [12]:


Figure 1: CDMA transmission for the FDE detector using zero padding (one slot of 2560 chips is divided into blocks of 256 chips, each ending with 16 zeros, and detected block by block by FFT detection and despreading)
$y = f_{dec}(x) = \begin{cases} 1, & x \ge 0 \\ -1, & x < 0 \end{cases}$    (23)

It makes a hard decision in favor of one of the two possible symbols.
The null zone function [11]:

$y = f_{dec}(x) = \begin{cases} 1, & x > c_n \\ 0, & x \in [-c_n, c_n] \\ -1, & x < -c_n \end{cases}$    (24)

It makes a hard decision when the soft bit estimate lies outside the interval $[-c_n, c_n]$, and sets the decision result to zero when the soft bit estimate lies inside the interval $[-c_n, c_n]$, where $c_n$ is the null zone decision threshold ($0 \le c_n \le 1$) [11].
The linear decision function:

$y = f_{dec}(x) = x$    (25)

It offers analytical access to the PIC performance, but performs worse than the other decision functions.
The unit clipper decision function [11]:

$y = f_{dec}(x) = \begin{cases} 1, & x > 1 \\ x, & x \in [-1, 1] \\ -1, & x < -1 \end{cases}$    (26)

It makes a soft bit decision when the soft bit estimate lies inside the interval [-1, 1], to avoid error propagation, and makes a hard decision when the soft bit estimate lies outside the interval [-1, 1], to avoid noise magnification [10].

5 FREQUENCY DOMAIN EQUALIZER FOR DOWNLINK ZP-CDMA

The application of FDE techniques makes single-carrier modulation a potentially valuable alternative to OFDM, especially with regard to its robustness to RF implementation impairments. Linear ZF-based chip-level equalization has been one of the most popular equalizers for single-user downlink CDMA [16]. Because of the noise enhancement in ZF equalizers, we propose the application of a regularized zero forcing equalizer. The time domain ZF estimate of $\mathbf{d}$ is given by [16]:

$\mathbf{d}_{ZF} = (\mathbf{H}^H\mathbf{H})^{-1}\mathbf{H}^H\mathbf{r}$    (27)

To counter the problem of noise enhancement in ZF equalizers, a new regularization term is added to Eq. (27) to yield:


$\mathbf{d}_{RZF} = (\mathbf{H}^H\mathbf{H} + \alpha\mathbf{I})^{-1}\mathbf{H}^H\mathbf{r} = \mathbf{M}^{-1}\mathbf{H}^H\mathbf{r} = \mathbf{G}\mathbf{r}$    (28)

where $\alpha$ is a regularization parameter. The solution of Eq. (28) requires the inversion of the matrix $\mathbf{M}$, which has dimensions of (N×L)×(N×L). This inversion process is impractical in real time, so a simplification is required. The equalizer matrix $\mathbf{G}$ can then be easily calculated as follows:

$\mathbf{G} = \Psi\Lambda_M^{-1}\Lambda^H\Psi^{-1}$    (29)

where:

$\Lambda_M^{-1} = [\Lambda^H\Lambda + \alpha\mathbf{I}]^{-1}$    (30)

The FDE-RZF algorithm can be summarized as follows:
1- Apply the FDE-RZF to the received signal as follows:

$\mathbf{d}_{FDE\text{-}RZF} = \Psi(\Lambda_M^{-1}\Lambda^H\mathbf{R}_T)$    (31)

2- Then, a better estimate of the symbols of interest can be obtained as follows:

$\hat{\mathbf{b}}_d = \mathbf{S}_d^T\mathbf{C}^H\mathbf{d}_{FDE\text{-}RZF}$    (32)

3- Finally, discard the detected zero symbols at the end of each block, and then use the decision function.
A major advantage of this equalization method is its low computational complexity. The price to be paid is a reduction of the data rate caused by the insertion of the zero padding or cyclic prefix.

5.1 Complexity
The complexity of a P-point FFT is of the order of P log2 P. The FDE therefore has a complexity of O(P log2 P), which is a significant reduction compared to the direct inversion of a P×P matrix, whose complexity is of the order of O(P^3) [1]. The FDE also has lower complexity than the RAKE receiver, which has a complexity of O(P^2) [1].

6 FDE-RZF WITH CLIPPER PIC

This section gives the proposed receiver, which is used to improve the performance of PIC for downlink CDMA. The proposed receiver uses the RAKE receiver with CPIC to estimate, regenerate, and cancel all the interfering users. The FDE-RZF equalizer is then used to reduce the ISI effect and to provide a better estimate of the desired user's data. In this section, a specific data detection algorithm for downlink CDMA is derived which is based on the FDE-RZF equalizer and CPIC. The proposed FDE-RZF-CPIC system model is depicted in Fig. (2). The FDE-RZF-CPIC algorithm can be summarized as follows:
1- Estimate all the interference as follows:

$\hat{\mathbf{b}}_{int} = \mathbf{U}^T\mathbf{C}^H\Psi\Lambda^H\mathbf{R}_T$    (33)

2- Discard the detected zero symbols at the end of each block to produce $\hat{\mathbf{b}}_{int}$, then take the decision as follows:

$\tilde{\mathbf{b}}_{int} = f_{dec}\{\hat{\mathbf{b}}_{int}\}$    (34)

where $f_{dec}(\cdot)$ is a decision function that transforms the soft estimate into a unit clipper decision.
3- Add zeros for padding, then regenerate the MAI as follows:

$\mathbf{r}_{MAI} = \mathbf{H}\mathbf{C}\mathbf{U}\tilde{\mathbf{b}}'_{int}$    (35)

where $\tilde{\mathbf{b}}'_{int}$ is the zero-padded version of $\tilde{\mathbf{b}}_{int}$.
4- Use PIC to cancel the effect of the interference on the received signal to get an interference-free signal:

$\mathbf{z} = \mathbf{r} - \mathbf{r}_{MAI}$    (36)

5- Apply the FDE-RZF to the signal vector $\mathbf{z}$ as follows:

$\mathbf{d}_{FDE\text{-}CPIC} = \Psi(\Lambda_M^{-1}\Lambda^H\Psi^{-1}\mathbf{z})$    (37)

6- Then, a better estimate of the symbols of interest can be obtained as follows:

$\hat{\mathbf{b}}_d = \mathbf{S}_d^T\mathbf{C}^H\mathbf{d}_{FDE\text{-}CPIC}$    (38)

7- Finally, discard the detected zero symbols at the end of each block, and then use the decision function.
The performance of FDE-RZF-CPIC depends heavily on the channel estimate, not only in the detection step, but also in the interference regeneration step. It is more efficient when the system is heavily loaded.
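For concreteness, the sketch below implements one FDE-RZF-CPIC stage along the lines of Eqs. (28)-(38) in NumPy. It assumes a circulant channel represented by the diagonal Λ (the FFT of the zero-padded channel taps), treats the scrambling code as a vector applied element-wise, uses the unit clipper of Eq. (26), and omits the zero-padding bookkeeping; all shapes and inputs are illustrative assumptions, not the authors' simulation code.

    import numpy as np

    def unit_clipper(x):
        # Eq. (26): pass soft values inside [-1, 1], hard-limit outside.
        return np.clip(x, -1.0, 1.0)

    def fde_rzf(r, lam, alpha):
        # Eqs. (29)-(31): per-bin regularized ZF weighting conj(lam) / (|lam|^2 + alpha).
        return np.fft.ifft(np.conj(lam) * np.fft.fft(r) / (np.abs(lam) ** 2 + alpha))

    def fde_rzf_cpic(r, lam, c, S_d, U, H, alpha=1.0):
        # One cancellation stage following Eqs. (33)-(38).
        # r: received chips, lam: FFT of the channel taps, c: scrambling sequence,
        # S_d / U: desired / interfering code matrices, H: circulant channel matrix.
        d0 = np.fft.ifft(np.conj(lam) * np.fft.fft(r))          # RAKE-type estimate, Eq. (33)
        b_int = unit_clipper((U.T @ (np.conj(c) * d0)).real)    # tentative decisions, Eq. (34)
        r_mai = H @ (c * (U @ b_int))                           # regenerated MAI, Eq. (35)
        z = r - r_mai                                           # interference cancellation, Eq. (36)
        d = fde_rzf(z, lam, alpha)                              # FDE-RZF on the cleaned signal, Eq. (37)
        return S_d.T @ (np.conj(c) * d)                         # desired-user symbols, Eq. (38)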


7 CHANNEL ESTIMATION

In this section, we consider a channel estimation method that depends on the pilot signal. When the pilot sequence is transmitted, the received signal in Eq. (4) is expressed as:

$\mathbf{r} = \mathbf{D}\mathbf{h} + \mathbf{n}$    (39)

where the complex channel impulse response $\mathbf{h}$ is expressed as:

$\mathbf{h} = [h_1, h_2, \ldots, h_W]^T$    (40)

and $\mathbf{D}$ is the circulant pilot sequence matrix. The MMSE channel estimates are found by minimizing the following squared-error quantity:

$\hat{\mathbf{h}} = \arg\min_{\mathbf{h}} \|\mathbf{r} - \mathbf{D}\mathbf{h}\|^2$    (41)

Assuming white Gaussian noise, the ZF solution is given by:

$\hat{\mathbf{h}}_{ZF} = (\mathbf{D}^H\mathbf{D})^{-1}\mathbf{D}^H\mathbf{r}$    (42)

However, with zero forcing channel estimation, the channel estimation accuracy degrades significantly due to noise enhancement.

Figure 2: The structure of FDE-RZF-CPIC for the downlink ZP-CDMA system

8 SIMULATION RESULTS

Several simulation experiments are carried out to test the performance of the proposed FDE-RZF-CPIC algorithm and to compare it to other algorithms. The simulation environment is based on a downlink synchronous ZP-CDMA system, in which each user transmits BPSK symbols. These symbols are spread and, after spreading, the resulting sum signal is scrambled using a complex scrambling sequence. The propagation channel is taken to be chip-spaced with a delay spread of 3Tc. For all simulations, we take N=16, and the block size is NF=256 chips with 16 zeros (NZP=16) at the end of each block, as shown in Fig. (1). All users are assigned the same power.
Figure 3 compares the performance of the FDE-RZF with that of the TDE-RZF and the RAKE receiver for 8 users. It is clear that equalization in the time domain is identical to equalization in the frequency domain; the only difference is in the method of implementation. Both equalizers perform better than the RAKE receiver alone.

Figure 3: Performance of the FDE-RZF, TDE-RZF and RAKE receivers vs. SNR (SF=16, K=8, α=1)
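The time/frequency-domain equivalence noted above can be checked numerically. The short NumPy experiment below builds a circulant channel matrix from an arbitrary 3-tap channel, computes the RZF solution of Eq. (28) by direct matrix inversion and the frequency-domain form of Eqs. (29)-(31) by per-bin weighting, and verifies that they coincide; the block size, channel and α = 1 are illustrative values only, not the paper's simulation parameters.

    import numpy as np

    rng = np.random.default_rng(1)
    P, W, alpha = 64, 3, 1.0                       # block size, channel taps, regularization (illustrative)
    h = rng.standard_normal(W) + 1j * rng.standard_normal(W)
    col = np.concatenate([h, np.zeros(P - W)])     # first column of the circulant channel matrix
    H = np.column_stack([np.roll(col, k) for k in range(P)])
    r = rng.standard_normal(P) + 1j * rng.standard_normal(P)

    # Time-domain RZF, Eq. (28): (H^H H + alpha*I)^-1 H^H r
    d_tde = np.linalg.solve(H.conj().T @ H + alpha * np.eye(P), H.conj().T @ r)

    # Frequency-domain RZF, Eqs. (29)-(31): per-bin weighting with Lambda = FFT of the channel column
    lam = np.fft.fft(col)
    d_fde = np.fft.ifft(np.conj(lam) * np.fft.fft(r) / (np.abs(lam) ** 2 + alpha))

    print(np.allclose(d_tde, d_fde))               # True: both implementations give the same estimate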


The performance of the RAKE receiver with PIC and different decision functions is studied. Figures 4, and 5 illustrate the average BER versus the threshold of the null zone decision function (cn) at different SNR values and different number of users.
SF=16,K=8
NULL ZONE DECISION

single-user RAKE receiver is very poor, even for high SNR values. Parallel interference cancellation improves the performance significantly. Better performance can be obtained with the clipper decision function. Linear decision performance is worse than that of the single user RAKE receiver for the heavily loaded case (K=16). This is justified by the fact that PIC with linear decision is limited by noise enhancement.
10
0

10

0

SF=16,K=8

10
AVERAGE BER

-1

SNR=[ 6 9 12 15] db
AVERAGE BER

10

-1

10

-2

10

-2

10

-3

10

-3

RAKE Null zone decision unit clipper decision hard decision soft decision

10

-4

0

0.2

0.4

0.6

0.8

1

10

-4

THRESHOLD
Figure 4: Performance of the RAKE receiver with PIC Vs null zone decision threshold (cn) at different SNRs .

0

5
SNR

10

15

10

0

SF=16, K=16
NULL ZONE DECISION

10

0

Figure 6: Performance of the RAKE receiver with PIC at different decision functions Vs the SNR for the half loaded case. cn=0.4. SF=16,K=16

10
AVERAGE BER

-1

SNR=[ 6 9 12 15] db
AVERAGE BER

10

-1

10

-2

10

-2

RAKE Null zone decision unit clipper decision hard decision soft decision

10

-3

10

-4

0

0.2

0.4

0.6

0.8

1

10

-3

0

5
SNR

10

15

THRESHOLD

Figure 5: Performance of RAKE with PIC Vs null zone decision threshold (cn) at different SNR .

Figure 7: Performance of the RAKE receiver with PIC at different decision threshold functions Vs the SNR for a full loaded case. cn=0.4.

The optimal performance is obtained when cn=0.4. Figures 4, and 5 show that cn is nonsensitive to SNR-changes and to system-load changes. The effect of the tentative decision function on the performance of PIC for K=8 (half loaded), and K=16 (Full loaded ) are studied and shown in Figs. 6, and 7. As the number of users increases, the performance deteriorates. The performance of a
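For reference, a minimal sketch of the four tentative decision functions compared in Figs. 4-7, assuming normalized BPSK soft estimates x (illustrative code, not from the paper):

import numpy as np

# Sketch of the tentative decision functions compared above, for normalized
# BPSK soft estimates x; cn is the null zone threshold.
def hard_decision(x):
    return np.sign(x)

def linear_decision(x):            # "soft"/linear decision: pass the estimate through
    return x

def unit_clipper_decision(x):
    return np.clip(x, -1.0, 1.0)

def null_zone_decision(x, cn=0.4): # output 0 inside the null zone |x| < cn
    return np.where(np.abs(x) < cn, 0.0, np.sign(x))

x = np.array([-1.3, -0.2, 0.05, 0.7, 1.8])
print(null_zone_decision(x), unit_clipper_decision(x))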

The performance of the proposed FDE-RZF-CPIC is compared to that of the RAKE receiver, the FDE-RZF equalizer, the RAKE receiver with hard decision PIC, and the RAKE receiver with unit clipper decision PIC. The effect of the regularization parameter on the performance of the FDE-RZF-CPIC is examined in two experiments, shown in Figs. 8 and 9. The optimal value of the regularization parameter alpha is equal to 1. This value is sensitive neither to SNR changes nor to system-load changes. The effect of the choice of the tentative decision function on the performance of the proposed receiver is studied in Fig. 10. The clipper decision outperforms all other decision functions.
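A short sketch of the frequency-domain regularized ZF operation of Fig. 2, with the regularization parameter alpha studied in Figs. 8 and 9 (function and variable names are assumptions, not the authors' implementation):

import numpy as np

# Sketch of regularized ZF equalization in the frequency domain (Fig. 2):
# x_hat = IFFT( conj(Lambda) / (|Lambda|^2 + alpha) * FFT(received block) ).
def fde_rzf(rx_block, h, alpha=1.0):
    nf = len(rx_block)
    lam = np.fft.fft(h, nf)                          # channel frequency response (diagonal of Lambda)
    eq = np.conj(lam) / (np.abs(lam) ** 2 + alpha)   # (Lambda^H Lambda + alpha I)^(-1) Lambda^H
    return np.fft.ifft(eq * np.fft.fft(rx_block))

# alpha -> 0 gives plain ZF (noise enhancement in deep fades);
# a larger alpha trades residual interference for noise robustness.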
Figure 8: Performance of the FDE-RZF-CPIC scheme versus the regularization parameter (alpha) at different SNRs for a half loaded system (SF=16, K=8, SNR = 6 and 12 dB).

Figure 9: Performance of the FDE-RZF-CPIC scheme versus the regularization parameter (alpha) at different SNRs for a fully loaded system (SF=16, K=16, SNR = 6 and 12 dB).

Figure 10: Performance of the proposed receiver with different decision functions versus the SNR (SF=16, K=16).

Figures 11 and 12 show the performance of five reception schemes as a function of the SNR of each user for 8 and 16 users, respectively. A clear improvement is achieved by the FDE-RZF-CPIC scheme over the other reception schemes. In Fig. 12, the BER performance of all receivers is worse than in Fig. 11 because of the larger number of users. The FDE-RZF-CPIC scheme improves the performance significantly, without the saturation at high SNRs exhibited by the RAKE receiver. For the heavily loaded case (Fig. 12), the performance of the FDE-RZF equalizer is better than that of the RAKE with PIC scheme. This can be explained by the fact that at heavy loads the RAKE receiver sees too much interference, which makes its decisions about interfering users unreliable.

Figure 11: Performance of different reception schemes versus the SNR for a half loaded system (SF=16, K=8, alpha=1).

Figure 12: Performance of different reception schemes versus the SNR for a fully loaded system (SF=16, K=16, alpha=1).

The effect of user loading on the performance of the FDE-RZF-CPIC scheme is studied and presented in Fig. 13. The BER of all receivers degrades as the number of users increases. The BER performance of the FDE-RZF-CPIC scheme degrades only slightly with an increasing number of users, and it remains better than that of the other schemes. This observation may be attributed to the MAI: the MAI with a large number of users is greater than with a small number of users, and even after interference cancellation some residual MAI still exists. The performance loss may therefore be attributed to the residual MAI.

Figure 13: Performance of different reception schemes versus the number of active users (SF=16, alpha=1, SNR = 12 dB).

Figure 14 depicts the average BER performance as a function of the number of cancelled users, at a fixed SNR per user of 12 dB. This graph shows that the performance of the FDE-RZF-CPIC scheme improves as the number of cancelled users increases.

Figure 14: Performance of the FDE-RZF-CPIC scheme versus the number of cancelled users (SF=16, alpha=1, SNR = 12 dB).

The effect of channel estimation accuracy on the performance of the FDE-RZF-CPIC scheme for K=8 is studied in Figs. 15 and 16. The performance of the FDE-RZF-CPIC scheme with ZF channel estimation shows a loss of 1 dB at a BER of 10^-2 when compared with the case of perfect channel knowledge, because of the noise enhancement in the ZF channel estimation. LMMSE channel estimation gives better performance.

Figure 15: Performance of interference cancellation versus the SNR for exact and LMMSE channel estimates (SF=16, K=8).

Figure 16: Performance of interference cancellation versus the SNR for exact and ZF channel estimates (SF=16, K=8, zero padding).

9 CONCLUSION

The paper presents an efficient FDE-RZF-CPIC receiver for downlink CDMA. This receiver is implemented using frequency domain approximations rather than a time domain implementation in order to reduce complexity. The comparison studies show that the proposed receiver outperforms several traditional receivers for different loading cases. The sensitivity of the proposed receiver is also studied for different decision functions and different channel estimation methods. The obtained results indicate that the performance of the proposed receiver is robust to the different channel estimation methods.
APPENDIX 1: Toeplitz to circulant approximation

Let Q be an S×S banded Toeplitz matrix with entries Q(m, n) = q(m - n), where q(m) = 0 for m > k or m < -l, i.e. q(0) lies on the main diagonal, q(-1), ..., q(-l) on the upper diagonals and q(1), ..., q(k) on the lower diagonals.        (A.1)

It can be approximated by an S×S circulant matrix Q^c defined as [17,18] the matrix whose first column is [q(0), q(1), ..., q(k), 0, ..., 0, q(-l), ..., q(-1)]^T,        (A.2)

where each row is a circular shift of the row above and the first row is a circular shift of the last row. The matrices Q and Q^c differ only by the elements added in the upper-right and lower-left corners to produce the cyclic structure of the rows. If the matrix size S is large and the number of non-zero elements on the main diagonals is small compared to the number of zero elements (i.e. the matrix is sparse), the added elements do not significantly affect the matrix, because they are few in proportion to the main-diagonal elements. It can be shown from the eigenvalue distributions of both matrices that Q and Q^c are asymptotically equivalent.

It is known that an S×S circulant matrix Q^c is diagonalized by [1]:

Lambda = Psi^(-1) Q^c Psi        (A.3)

where Lambda is an S×S diagonal matrix whose elements lambda(s, s) are the eigenvalues of Q^c, and Psi is an S×S unitary matrix of eigenvectors of Q^c. Thus we have:

Psi Psi^(*t) = Psi^(*t) Psi = I        (A.4)

The elements psi(s1, s2) of Psi are given by [17,18]:

psi(s1, s2) = exp( j 2 pi s1 s2 / S ),    s1, s2 = 0, 1, ..., S-1,  j^2 = -1        (A.5)

The eigenvalues lambda(s, s) can be written as lambda(s). For these eigenvalues, the following relation holds [17,18]:

lambda(s) = q(0) + sum_{m=1}^{k} q(m) exp( -j 2 pi m s / S ) + sum_{m=-l}^{-1} q(m) exp( -j 2 pi m s / S ),    s = 0, 1, ..., S-1        (A.6)

Because of the cyclic nature of Q^c, we define:

q(S - m) = q(-m)        (A.7)

and thus Eq. (A.6) can be written in the form [17,18]:

lambda(s) = sum_{m=0}^{S-1} q(m) exp( -j 2 pi m s / S ),    s = 0, 1, ..., S-1        (A.8)

Thus the circulant matrix can be simply diagonalized by computing the DFT of the cyclic sequence q(0), q(1), ..., q(S-1).
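A small NumPy check of Eq. (A.8) on made-up values: the eigenvalues of a circulant matrix built from a cyclic sequence q(0), ..., q(S-1) coincide with the DFT of that sequence.

import numpy as np

# Numerical check of Eq. (A.8): eigenvalues of a circulant matrix equal the
# DFT of its defining cyclic sequence. The sequence q below is illustrative.
S = 8
q = np.zeros(S)
q[0], q[1], q[S - 1] = 1.0, 0.5, 0.3            # q(0), q(1) and q(S-1) = q(-1)

Qc = np.array([[q[(m - n) % S] for n in range(S)] for m in range(S)])   # circulant Q^c
lam_dft = np.fft.fft(q)                          # lambda(s) from Eq. (A.8)
lam_eig = np.linalg.eigvals(Qc)

print(np.allclose(np.sort_complex(lam_dft), np.sort_complex(lam_eig)))  # True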


10 REFERENCES

[1] I. Martoyo, T. Wesis, F. Capar, and F.Jondral, “Low complexity cdma downlink receiver based on frequency domain equalization,” in Proc. IEEE Vech. Tech.. Conf., pp. 987-991, Oct. 2003. [2] J. Pan, P. De, and A Zeira, “Low complexity Data Detection Using Fast Fourier Transform Decomposition of Channel Correlation Matrix, ” IEEE Global Telecom. Conference, vol. 2 , pp. 1322-1326, Nov. 2001. [3] D. Falconer, S. L. Ariyavisitakul, A. BenyaminSeeyar, and B. Edison, “Frequency domain equalization for single-carrier broadband wireless systems,” IEEE Mag. Commun., Vol. 40, no. 4, pp. 58-66, Apr. 2002. [4] F. petre, G. Lues, L. Deneire, and M. Moonen, “Downlink frequency domain chip equalization for single-carrier block transmission ds-cdma with known symbol padding,” in Proc. GLOBCOM, pp. 453- 457, Nov. 2002 . [5] K.L. Baum, T.A. Thomas, F.W Vook, and V. Nangia, “Cyclic-prefix CDMA: an Improved Transmission Method for Broadband DSCDMA Cellular Systems,” IEEE Wireless Comm. And Networking Conference, vol. 1, pp.17-21, Mar. 2002. [6] B. Mouhouche, K. Abed-Meraim, N. Ibrahim, and P. Loubaton, “Combined MMSE equalization and blind parallel interference cancellation for downlink multirate CDMA communications,” IEEE 5th Workshop on Signal Processing Advances in Wireless Communications, pp. 492–496, 11-14 July 2004. [7] B. Mouhouche, K. Abed-Meraim, and S. Burykh, “Spreading code detection and blind interference cancellation for DS/CDMA downlink,” IEEEISS STA ’04, Sydney, Australia, pp. 774-778, 2004. [8] Z. Gao, Q. Wu, and J. Wang, “Combination of LMMSE equalization and multi-path interference cancellation in WCDMA receivers” IEEE ICMMT 4th International Conference, pp. 838– 841, 18-21 Aug. 2004. [9] M.F. Madkour, S. C. Gupta, and Y. .E. Wang, “Successive Interference Cancellation Algorithms for Downlink W-CDMA Communications, ” IEEE Trans. Wireless Comm., vol. 1, N. 1 , January 2002. [10] L. B. Nelson, and H. V. poor, “Iterative multiuser receivers for CDMA channels: An EM- based approach,” IEEE Trans. Commun., Vol. 44, pp. 1700-1710, Dec. 1996. [11] A. L. C. Hui, and K. B. Letaief, “Multiuser asynchronous DS/CDMA detectors in multipath fading links, ” IEEE Trans. Commun., vol. 46, pp. 384–391, Mar.1998.

[12] W.Zha, S., and D. Blostein , “Soft-decision multistage multiuser interference cancellation, ” IEEE Trans. Techn., Vol. 52, No. 2, pp. 380389, 2003. [13] A. Klein, G. K. Kaleh, and P. W. Baier, “Zero Forcing and Minimum Mean-Square-Error Equalization for Multi-User Detection in Code Division Multiple-Access Channels,” IEEE Trans. Vehic. Tech., vol. 45, no. 2, pp. 276-87 May 1996. [14] S. Werner, and J. Lilleberg, “Downlink channel decorrelation in CDMA systems with long codes,” IEEE 49th Vehicular Technology Conference, vol. 2, pp. 1614–1617, 16-20, 1999. [15] M. Varanasi, and B. Aazhang, “Multistage detection in Asynchronous Code-Division Multiple Access Communications,” IEEE Trans. Commun. , 38(4), pp. 509-519, 1990. [16] A. Klein, “Data detection algorithms specially designed for the downlink of CDMA mobile radio systems,” IEEE 47th Vehicular Technology Conference, vol. 1, pp. 203–207, 4-7 May 1997. [17] H.C. Anderws and B.R. Hunt, Digital Image Restoration. Englewood Cliffs, NJ: PrenticeHall, 1977. [18] Jan Biemond , Jelle Rieske and Jan J. Gerbrands, “A Fast Kalman Filter For Images Degraded By Both Blur And Noise,” IEEE Trans. Acoustics, Speech and Signal Processing, Vol. ASSP-31, No.5, pp.1248-1256, Oct. 1983.


Performance Studies of MANET Routing Protocols in the Presence of Different Broadcast Route Discovery Strategies
Dr. Natarajan Meghanathan Department of Computer Science Jackson State University Jackson, MS 39217 Email: nmeghanathan@jsums.edu ABSTRACT Simulation studies for the Mobile Ad hoc NETwork (MANET) routing protocols have so far employed flooding as the default mechanism of route discovery. During flooding, each node broadcasts the packet exactly once, causing the broadcast storm problem [1]. Several efficient broadcasting strategies [1][2] that reduce the number of retransmitted route query packets and the number of retransmitting nodes have been proposed in the literature. These include the probability-based, area-based and neighbor-knowledge based methods to reduce the retransmission overhead. Our contribution in this paper is an ns-2 simulation based analysis on the impact of employing these broadcasting strategies for route discovery on the hop count and stability of routes. We use the minimum-hop based Dynamic Source Routing (DSR) protocol [3] and the stability-based FlowOriented Routing Protocol (FORP) [4] as the routing protocols for our analysis. We compare the hop count and stability of DSR and FORP routes determined under conditions that guarantee at least 92-95% success in route discoveries and simultaneously minimize the number of retransmissions and retransmitting nodes. Keywords: Broadcasting, Routing, Stability, Hop count, Mobile Ad Hoc Networks called flooding to discover the routes. Whenever a source node has data to send to a destination node, but does not have the route to the same, it will initiate a broadcast route-query process. In the case of flooding, the source node broadcasts a RouteRequest-Query (RREQ) packet to its neighbors. Each node in the network will broadcast this RREQ packet exactly once when they see it the first time. The destination node receives the RREQ packets along several paths, chooses the best route according to the route selection principles of the particular routing protocol and notifies the source node about the selected route using a Route-Reply (RREP) packet. Flooding is a very expensive process with respect to the bandwidth and energy usage. With resourceconstrained environments like those of MANETs, employing flooding for on-demand route discovery will be very costly. Flooding also introduces lot of redundancy in the packet retransmission process. In [1], it has been observed that with flooding, when a node receives a packet for the first time, at least 39% of the node’s neighborhood would have also received the message simultaneously and on average only 41% of additional area could be covered with a rebroadcast. In general, when a node rebroadcasts a message after hearing it k times, the expected additional coverage decreases exponentially with increasing values of k [1]. These observations motivated researchers to introduce several efficient

1 INTRODUCTION

A Mobile Ad hoc NETwork (MANET) is a dynamic distributed system of autonomously moving wireless nodes (such as laptops, personal digital assistants, etc) and lacks a fixed infrastructure. The network has limited bandwidth as the wireless medium is shared and is prone to transmission interference. Nodes are battery-powered and operated with a limited transmission range. As a result, routes in MANETs are often multi-hop in nature and have to be discovered by the nodes themselves. There is no centralized administration like in cellular networks. Several unicast and multicast MANET routing protocols have been proposed in the literature. The route discovery could be either proactive or reactive. In the proactive approach, nodes determine and maintain routes for every possible source-destination pair, irrespective of their requirement. Reactive or on-demand MANET routing protocols determine a route only when required. It has been observed [5][6] that with a dynamically changing network topology where route accuracy and routing overhead are crucial, ondemand routing protocols are to be preferred over the proactive protocols. We will focus only on ondemand routing for the rest of this paper. Currently, all the on-demand MANET routing protocols employ a simple form of broadcasting


broadcasting strategies that will minimize the number of redundant retransmissions and at the same time maximize the chances of the broadcasted message reaching all the nodes in the network. The techniques for efficient broadcasting can be grouped into three families [1][2]: probability-based methods, area-based methods and the neighbor knowledge-based methods. In probability-based methods, each node is assigned a probability for retransmission. In area-based methods, a common transmission range is assumed and a node will rebroadcast if only sufficient new area can be covered with the retransmission. In neighborknowledge based methods, each node stores neighborhood state information and uses it to decide whether to retransmit or not. One or more broadcasting techniques have been proposed under each of the above three families. The objective of all these broadcasting techniques is to minimize the number of retransmitted messages and the number of nodes retransmitting the message. More information on the different broadcasting techniques can be found in Section 3. The performance of the different efficient broadcasting techniques under different conditions of topology changes and offered broadcast traffic has been studied in [2]. As the number of retransmitting nodes and the retransmitted messages get reduced when using these broadcasting techniques for RREQ propagation, the quality of the routes chosen may be different compared to those routes discovered using simple flooding. This formed the motivation for us to implement these broadcasting techniques and use them for route discovery in on-demand MANET routing protocols. On-demand MANET routing protocols can be classified into two broad categories [7]: minimumweight based routing protocols and stability-oriented routing protocols. The Dynamic Source Routing (DSR) protocol [3] is a well-known minimumweight based protocol that selects routes with the minimum hop count. The Flow-oriented Routing Protocol (FORP) [4] was observed to discover the most stable routes within the class of stable path routing protocols [8]. The stability of routes selected by a routing protocol is quantified in terms of the number of route transitions incurred by the protocol for a source-destination (s-d) session. More information on DSR and FORP is provided in Section 2. In this paper, we implement the probabilitybased method, the distance-based technique (areabased method), the Multi-Point Relaying (MPR) and the Minimum Connected Dominating Set (MCDS) based techniques (neighbor-knowledge based method) as the route discovery strategies for DSR and FORP and study the impact of these broadcasting techniques on the quality of routes chosen by the two routing protocols. We specifically

study the impact on two principal routing metrics, viz., the stability and hop count. We compare the stability and hop count of DSR and FORP routes chosen with these broadcasting techniques with those discovered using flooding. Flooding helps to discover the minimum hop routes for DSR and the most stable routes for FORP. But, these efficient broadcasting techniques may not yield the minimum hop routes for DSR or the most stable routes for FORP. The rest of the paper is organized as follows: In Section 2, we briefly discuss the DSR and FORP protocols. Section 3 discusses the different broadcasting techniques that have been published in the literature. Section 4 describes the simulation environment, illustrates the results and interprets them. Section 5 concludes the paper. Note that we use the words ‘route’ and ‘path’, ‘message’ and ‘packet’, ‘rebroadcast’ and ‘retransmit’ interchangeably in this paper. 2 REVIEW OF PROTOCOLS MANET ROUTING

In this section, we briefly review the minimumhop based Dynamic Source Routing (DSR) protocol [3] and the stability-based Flow-Oriented Routing Protocol (FORP) [4] – the two protocols we use for our simulation analysis. 2.1 Dynamic Source Routing (DSR) Protocol The unique feature of DSR [3] is source routing: data packets carry the route from the source to the destination in the packet header. As a result, intermediate nodes do not need to store up-to-date routing information. This avoids the need for beacon control neighbor detection packets that are used in the stability-oriented routing protocols. Route discovery is by means of the broadcast query-reply cycle. A source node s wishing to send a data packet to a destination d, broadcasts a Route-Request (RREQ) packet throughout the network. The RREQ packet reaching a node contains the list of intermediate nodes through which it has propagated from the source node. After receiving the first RREQ packet, the destination node waits for a short time period for any more RREQ packets and then chooses a path with the minimum hop count and sends a Route-Reply Packet (RREP) along the selected path. If any RREQ is received along a path whose hop count is lower than the one on which the RREP was sent, another RREP would be sent on the latest minimum hop path discovered. 2.2 Flow-Oriented Routing Protocol FORP [4] utilizes the mobility and location information of nodes to approximately predict the


Link Expiration Time (LET) for each wireless link. FORP selects the route with the maximum Route Expiration Time (RET), which is the minimum of the LET values of the constituent links of the route. Each node periodically sends a beacon control message to its neighbors and the message includes the current position of the nodes, velocity, the direction of movement and the transmission ranges. Each node is assumed to be able to predict the LET values of each of its links with the neighboring nodes based on the information collected using beacon packets. FORP assumes the availability of location identifying techniques like GPS (Global Positioning System) [9] and also assumes that the clocks across all nodes are synchronized. Given the motion parameters of two neighboring nodes, the duration of time the two nodes will remain neighbors can be predicted as follows: Let two nodes i and j be within the transmission range of each other. Let (xi, yi) and (xj, yj) be the co-ordinates of the mobile hosts i and j respectively. Let vi, vj be the velocities and Θi, Θj, where (0 ≤ Θi, Θj < 2π) indicate the direction of motion of nodes i and j respectively. The amount of time the two nodes i and j will stay connected, Di-j, can be predicted as follows:

D_{i-j} = ( -(ab + cd) + sqrt( (a^2 + c^2) r^2 - (ad - bc)^2 ) ) / (a^2 + c^2)

where a = vi cos(Θi) - vj cos(Θj), b = xi - xj, c = vi sin(Θi) - vj sin(Θj), d = yi - yj, and r is the transmission range.
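A small Python sketch of this link expiration time prediction (illustrative only; the function and variable names are assumptions):

import math

# Sketch of the link expiration time (LET) D_ij used by FORP.
# (xi, yi), vi, ti: position, speed and direction of node i; r: transmission range.
def link_expiration_time(xi, yi, vi, ti, xj, yj, vj, tj, r):
    a = vi * math.cos(ti) - vj * math.cos(tj)
    b = xi - xj
    c = vi * math.sin(ti) - vj * math.sin(tj)
    d = yi - yj
    if a == 0 and c == 0:          # identical velocity vectors: the link never expires
        return math.inf
    disc = (a * a + c * c) * r * r - (a * d - b * c) ** 2
    return (-(a * b + c * d) + math.sqrt(max(disc, 0.0))) / (a * a + c * c)

print(link_expiration_time(0, 0, 5, 0.0, 100, 0, 5, math.pi, 250))  # 35.0 s for nodes closing in, then separating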

Route discovery is accomplished using the broadcast query-reply cycle, with RREQ packets propagating from the source node s to the destination node d on several paths. The information recorded in this case by a node i receiving a RREQ message from a node j is the predicted LET of the link i-j. The destination d will receive several RREQ messages with the predicted LETs of the paths traversed being listed. The s-d path that has the maximum predicted RET is then selected. If more than one path has the same maximum predicted RET, the tie is broken by selecting the minimum hop path among such paths.

3 REVIEW OF BROADCASTING STRATEGIES

In general, the broadcasting strategies can be grouped into four families: simple flooding, probability-based methods, area-based methods and neighbor knowledge based methods.

3.1 Simple Flooding

A source node initiates flooding by broadcasting a packet to all its neighbors. The neighbor nodes in turn rebroadcast the packet exactly once and the process continues until each node in the network has retransmitted the packet. As a result, all nodes reachable from the source receive the packet. Flooding causes the broadcast storm problem [1], which is characterized by redundant rebroadcasts, channel contention and collision of messages.

3.2 Probability-based Methods

3.2.1 Probabilistic Scheme
When a node receives a broadcast message for the first time, the node rebroadcasts the message with a probability P. If the message received has already been seen, then the node drops the message irrespective of whether or not the node retransmitted the message when received for the first time. For sparse networks, the value of P has to be high enough to facilitate a higher packet delivery ratio. When P = 1, the scheme resorts to simple flooding.

3.2.2 Counter-based Scheme
A broadcast message received for the first time is not immediately retransmitted to the neighborhood. The message is queued up for a time called the Random Assessment Delay (RAD), during which the node may receive the same message (redundant broadcasts) from some of its other neighbors. After the RAD timer expires, if the number of times the same message has been received exceeds a counter threshold, the message is not retransmitted and is simply dropped.

3.3 Area-based Methods

3.3.1 Distance-based Scheme When a node receives a previously unseen broadcast message, the node computes the distance between itself and the sender. If the sender is closer than a threshold distance, the message is dropped and all future receptions of the same message are also dropped. Otherwise, the received message is cached and the node initiates a RAD timer. Redundant broadcast messages received before the expiry of the RAD timer are also cached. When the RAD timer expires, the node computes the distance between itself and the neighbor nodes that previously broadcast the particular message. If any such neighbor node is closer than a threshold distance value, the message is dropped. Otherwise, the message is retransmitted. 3.3.2 Location-based Scheme Whenever a node originates or rebroadcasts a message, the node puts its location information in the message header. The receiver node calculates the additional coverage area that would be obtainable if it were to rebroadcast. If the additional coverage is less than a threshold value, all future receptions of the same message will be dropped. Otherwise, the


RAD timer is started. Redundant broadcast messages received before the expiry of the RAD timer are also cached. After the RAD timer expires, the node considers all the cached messages and recalculates the additional obtainable coverage area if it were to rebroadcast the particular message. If the additional obtainable coverage area is less than a threshold value, the cached messages are dropped. Otherwise the message is rebroadcast. 3.4 Neighbor Knowledge based Methods 3.4.1 Multi-point Relaying Under this scheme, each node is assumed to have a list of its 1-hop and 2-hop neighbors, obtained via periodic “Hello” beacons. The “Hello” messages include the identifier of the sending node, the list of the node’s known neighbors and the Multi-Point Relays (MPRs). After receiving “Hello” messages from all its neighbors, a node has the 2-hop topology information centered at itself. Using this list of 1-hop and 2-hop neighbors, a node selects the MPRs – the 1-hop neighbors that most efficiently reach all nodes within its 2-hop neighborhood. Each node selects the set of MPRs using a greedy approach of iteratively including the 1-hop neighbors that would cover the largest number of uncovered 2-hop neighbors. 3.4.2 Minimum Connected Dominating Set A Connected Dominating Set (CDS) is a set of nodes in the network such that all nodes in the network are either in the CDS or directly attached to a node in the CDS. A Minimum Connected Dominating Set (MCDS) is the smallest CDS, in terms of the number of nodes in the CDS, for the entire network. The size of the MCDS is the minimum number of retransmissions required in a broadcasting process so that all nodes in the network receive the broadcast message. Determining the MCDS for a given network graph is an NP-complete problem and hence several heuristics have been proposed to approximate the MCDS for a given network graph. 4 SIMULATIONS

We use ns-2 (version 2.28) [10] as the simulator for our study. The network dimensions are 1500m x 300m. The transmission range of each node is 250m. These values are very commonly used in MANET simulations. We vary the density of the network by conducting simulations with 25 nodes (low density) and 50 nodes (high density). The simulation time is 1000 seconds. While we implemented the FORP protocol, we used the implementation of DSR that comes with ns-2.

4.1 Broadcasting Strategies

The route discovery mechanism in each of DSR and FORP is implemented with the following broadcasting strategies: simple flooding, probabilistic broadcasting, distance-based broadcasting, MPR and MCDS-based broadcasting. We choose the probabilistic scheme over the counter-based scheme as the range of counter values to experiment with changes dynamically depending on network density and node mobility. We choose the distance-based scheme over the location-based scheme because of the higher overhead in computing the additional coverage area when a node receives multiple broadcast messages from its neighbors. The probability of retransmission was varied from 0.1 to 1. The threshold distance for triggering a broadcast is varied from 20m to 200m, in increments of 20m. We do not let any intermediate node reply to the RREQ packets and we disable local route repairs, as this may affect our goal of studying the effect of the different broadcasting strategies on the routing metrics. We do not expect much congestion in our network scenarios. Hence, the value of the RAD timers used for the distance-based scheme is 0.01 seconds, as suggested in [2].

4.2 Beacon Messaging

Each node periodically broadcasts a "Hello" beacon message in its neighborhood. The "Hello" message contains the following information: the location of the node, its velocity and direction of movement, the 1-hop neighbor list of the node, and the set of MPRs for the node. The "Hello" message is used by FORP and by the MPR and MCDS based broadcasting strategies. In the case of FORP, the clocks across all nodes are assumed to be synchronized and each node also keeps track of the previously advertised location of its neighbor nodes. This helps to keep track of the direction in which the neighbor node is moving.

4.3 Simulation Models

The physical, data link and MAC layer models are based on the multi-hop wireless network extension [5] provided by CMU's Monarch research group. The MAC layer uses the Distributed Coordination Function (DCF) of the IEEE Standard 802.11 [11] for wireless LANs. The radio model uses the standard channel bandwidth of 2Mbps. The signal propagation model used is the two-ray ground reflection model [5]. The interface queue stores both the routing and data packets sent by the routing layer until the MAC layer is able to transmit them. We use a FIFO-based interface queue of length 100. The node mobility model used is the Random Waypoint model [12]. Each node starts moving from an arbitrary location to a randomly selected destination location at a speed uniformly distributed


in the range [vmin, …, vmax]. Once the destination is reached, the node may stop there for a certain time called the pause time and then continue to move by choosing a different target location and a different velocity. In this paper, we set vmin = 0. The vmax values are 5, 10 (low mobility), 20, 30 (moderate mobility), 40 and 50m/s (high mobility). The pause time is 0 seconds. 4.4 Performance Metrics We study the following performance metrics for DSR and FORP: (i) Percentage of successful route discovery – ratio (expressed as percentage) of the number of successful route discovery attempts to the total number of route discovery attempts. (ii) Number of retransmitted messages – the number of messages received at all the nodes in the network per successful route discovery attempt, averaged over all s-d sessions for the entire simulation time. (iii) Number of retransmitting nodes – the number of nodes retransmitting the RREQ packet in the network per successful route discovery attempt, averaged over all s-d sessions for the entire simulation time. (iv) Number of route transitions – average of the number of route discoveries required for all s-d sessions. (v) Hop count per route – average of the number of hops in routes, time-averaged over all s-d sessions. 4.5 Percentage of Successful Route Discovery We refer to a “successful route discovery” as the scenario when at least one RREQ packet broadcast by the source reaches the destination. The flooding, MPR and MCDS approaches guarantee successful route discovery if the underlying network is connected. With the probability and distance-based broadcasting techniques, there is always a chance that the RREQ packet does not reach its intended destination, even though the underlying network may be connected. The network density plays a huge role in determining the minimum value for the probability of retransmission and the maximum threshold distance value for retransmission that would maximize the number of successful route discoveries. Larger the network density, the lower the minimum probability of retransmission and larger the maximum threshold distance for retransmission that would maximize the chances of a successful route discovery. In this paper, we set ourselves a target of “92-95%” successful route discoveries for each s-d session. For a given probability of retransmission, the percentage of successful route discoveries increases as the network density increases. For a given network density, the percentage of success in route

discoveries increases as the probability of a retransmission increases. At high network density, one can obtain 100% success in route discoveries when the probability of a retransmission is beyond 0.7. With 25 nodes in the network, the maximum achievable percentage of successful route discovery is only 95%. Such a limitation arises due to the poor connectivity of the network at low density. For each network density, we define a Threshold Probability, ProbThresh, as the probability of retransmission that results in 92-95% success in route discoveries and at the same time the number of retransmitted messages and the number of retransmitting nodes is the minimum. For fixed probability of retransmission values below ProbThresh, the percentage of success in route discoveries decreases with increase in node mobility. This is due to the increase in the number of route discovery attempts as the node mobility increases. The value of ProbThresh was observed to be 0.7 with a network of 25 nodes and 0.4 with a network of 50 nodes. The percentage of successful route discoveries for DSR under the probabilistic schemes is shown in Figures 1.1 and 1.2. Similar results are obtained for FORP too.
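As an illustration of why a minimum retransmission probability is needed for a target success rate, the following self-contained sketch (all parameters are assumptions, not the paper's ns-2 setup) estimates the probability that a probabilistically forwarded RREQ reaches the destination on a random topology:

import random, math

# Toy estimate of route-discovery success under the probabilistic scheme:
# nodes are placed uniformly in a 1500m x 300m area, links exist within 250m,
# and each node forwards a first-time RREQ with probability p.
def success_rate(num_nodes=25, p=0.7, trials=200, area=(1500.0, 300.0), rng_range=250.0):
    hits = 0
    for _ in range(trials):
        pos = [(random.uniform(0, area[0]), random.uniform(0, area[1])) for _ in range(num_nodes)]
        def neighbors(u):
            return [v for v in range(num_nodes)
                    if v != u and math.dist(pos[u], pos[v]) <= rng_range]
        src, dst = 0, num_nodes - 1
        reached, frontier = {src}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                # the source always broadcasts; other nodes rebroadcast with probability p
                if u != src and random.random() > p:
                    continue
                for v in neighbors(u):
                    if v not in reached:
                        reached.add(v)
                        nxt.append(v)
            frontier = nxt
        hits += dst in reached
    return hits / trials

for p in (0.2, 0.4, 0.7, 1.0):
    print(p, success_rate(p=p))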

Figure 1.1: Network of 25 Nodes

Figure 1.2: Network of 50 Nodes Figure 1: Percentage of Successful Route Discoveries with Probabilistic Scheme Similarly, to obtain 92-95% success in route discovery attempts, we choose DistTresh = 100m as the maximum threshold distance value for retransmission in a network of 25 nodes and DistTresh = 180m as the maximum threshold value in a network of 50 nodes. The percentage of successful


route discoveries for DSR under the distance-based scheme is shown in Figures 2.1 and 2.2.

Figure 2.1: Network of 25 Nodes


Figure 2.2: Network of 50 Nodes Figure 2: Percentage of Successful Route Discoveries with Distance-based Scheme Figure 3 shows the percentage of successful route discoveries using the selected threshold values for the probability and distance-based schemes and the other broadcasting techniques, including flooding.
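A minimal sketch (with assumed variable names, not the authors' code) of the distance-based rebroadcast rule behind these threshold values: a node retransmits only if every previously heard sender of the same RREQ is farther away than the threshold distance.

import math

# Sketch of the distance-based rebroadcast decision (Section 3.3.1).
# my_pos: this node's position; sender_positions: positions of the neighbors
# heard rebroadcasting this RREQ before the RAD timer expired.
def should_rebroadcast(my_pos, sender_positions, dist_thresh=100.0):
    return all(math.dist(my_pos, s) > dist_thresh for s in sender_positions)

print(should_rebroadcast((0, 0), [(150, 10), (220, 0)], dist_thresh=100.0))  # True
print(should_rebroadcast((0, 0), [(40, 30)], dist_thresh=100.0))             # False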

Figure 4: Average Number of Retransmitting Nodes per Route Discovery

Figure 3: Percentage of Successful Route Discoveries with Different Broadcasting Techniques

4.6 Reduction in Retransmission Overhead

Since we did not let any intermediate node reply to RREQ messages, the number of retransmitting nodes (Figure 4) and the number of retransmitted messages (Figure 5) depend only on the network density, node mobility and the broadcasting strategy used. With simple flooding (Figure 4), each node retransmits the RREQ message exactly once. Hence, as the network density increases, the number of retransmitting nodes increases. The destination node gets the RREQ message through several paths and thus can choose the best path depending on the route selection principles of the protocol employed. The route is learnt with the least possible route-acquisition delay, but with the maximum message retransmission overhead. On the other hand, the number of retransmitting nodes in the case of MCDS based route discovery is the minimum, since the RREQ message is retransmitted only by nodes that are part of the approximate MCDS. But the MCDS approach tends to increase the route-acquisition delay, as the MCDS itself needs to be determined prior to route discovery. We run a distributed version of Kou's heuristic [13] in the network to approximate the MCDS. Each node then learns the set of its MCDS neighbors and the presence/absence of the node in the MCDS.
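For illustration, a simple centralized greedy connected-dominating-set approximation; this is a generic sketch under assumed inputs, not the distributed Kou's-heuristic implementation used in the paper.

# Generic greedy approximation of a connected dominating set (CDS) on an
# undirected graph given as an adjacency dict; assumes a connected topology.
def greedy_cds(adj):
    nodes = set(adj)
    start = max(nodes, key=lambda u: len(adj[u]))          # highest-degree seed
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != nodes:
        # grow the CDS with the covered neighbor of the CDS that dominates
        # the largest number of still-uncovered nodes (keeps the set connected)
        candidates = [u for u in covered - cds if any(v in cds for v in adj[u])]
        best = max(candidates, key=lambda u: len(set(adj[u]) - covered))
        cds.add(best)
        covered |= set(adj[best])
    return cds

adj = {1: [2], 2: [1, 3], 3: [2, 4, 5], 4: [3], 5: [3, 6], 6: [5]}
print(greedy_cds(adj))   # a small connected dominating set, e.g. {2, 3, 5}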

Figure 5: Average Number of Retransmitted Messages per Route Discovery For the MPR, the probabilistic and distance-based schemes, the number of retransmitting nodes and the number of retransmitted messages is in between the two extremes set by simple flooding and MCDS. The MPR approach is not scalable as it does not take into account the path taken by the RREQ message. The set of MPR nodes is selected statically using a greedy approach of choosing neighbor nodes that covered the maximum number of 2-hop neighbors. When a node receives a RREQ message, the node does not remove from its MPR set the neighbor


nodes that might also have received the RREQ message. The number of 1-hop and 2-hop neighbors of a node is doubled as the network density is doubled. As a result, the number of nodes constituting the MPR set (the number of retransmitting nodes) also doubles, when the number of nodes in the network is increased from 25 to 50 nodes. When simulated under the probabilistic and distance-based schemes using the threshold values mentioned in Section 4.5, we observe that the number of retransmitting nodes (Figure 4) required in a network of 50 nodes is only 20% more than the number of retransmitting nodes required in a network of 25 nodes. Similarly, we observe that in a lowdensity 25-node network operated at DistTresh = 100m, the number of retransmitting nodes required to guarantee a 92-95% success in route discovery is only 40% more (and does not double) than that required in a high-density 50-node network operated at DistTresh = 180m. From Figure 5, we also observe that the number of retransmitted messages with flooding and MPR quadruples as we double the network density. This illustrates that flooding and MPR are not scalable broadcasting techniques. For the MCDS scheme, the number of retransmitted messages just doubles as the network density doubles. With the probabilistic scheme operated under the appropriate threshold values, the number of retransmitted messages in a 50-node network is 2.4 times to that incurred in a 25node network. With the distance-based scheme operated under the appropriate threshold values, the number of retransmitted messages in a 50-node network is only 1.3 times to that incurred in a 25node network. These two observations illustrate that the probabilistic and distance-based schemes, when operated at the appropriate threshold values for retransmissions, are scalable. This is one of the significant contribution and finding of our research. 4.7 Average Hop Count per Path From Figures 6 and 7, one can observe that the average hop count per path for both DSR and FORP is not very much influenced by node mobility and is only affected by the broadcasting strategy used. When simple flooding is used as the route discovery strategy, the destination node learns about several routes from the source of the RREQ message to itself. From this set, the destination node can then choose the best route according to the route selection principles of the routing protocol. When we employ the different broadcasting strategies, we are reducing the number of retransmitting nodes as well as the number of retransmitted messages. Hence, the destination node learns only relatively fewer routes compared to the situation when flooding is used.

Figure 6.1: Network of 25 Nodes

Figure 6.2: Network of 50 Nodes Figure 6: Average Hop Count per Path for DSR In the case of DSR (Figures 6.1 and 6.2), the hop count of the routes chosen using MCDS and flooding is the minimum. Routing through the nodes that form the minimum connected dominating set results in the message traversing the minimum number of intermediate hops from the source node to the destination node. Figures 6.1 and 6.2 illustrate that flooding also discovers a similar route with the minimum number of hops from the source node to the destination node. With MPR, probability-based and distance-based schemes, the hop count of DSR routes increases by at most 15% compared to that discovered using MCDS and flooding. The hop count of FORP routes (Figures 7.1 and 7.2) is normally more than that of DSR routes. Among a set of routes learnt, FORP selects the route that has the largest predicted route expiration time. For such routes, at the time of their selection, the average physical distance of the constituent nodes of a hop is only 55-65% of the transmission range of the nodes. This results in the relatively larger hop count for FORP routes.

Figure 7.1: Network of 25 Nodes


Figure 7.2: Network of 50 Nodes Figure 7: Average Hop Count per Path for FORP The protocols learn the maximum and minimum number of source-destination (s-d) routes using flooding and MCDS respectively. Thus, the average hop count of FORP routes is 10-15% and 2-3% more than that of DSR routes when the routes are learnt respectively using flooding and MCDS. In a probability-based scheme (Figures 8.1 through 8.4), the number of retransmitting nodes decreases as the probability for retransmission is reduced. At high network density, the dense coverage of nodes within a neighborhood offsets for the lower threshold probability of retransmission. At low network densities, one has to adopt reasonably high values for the threshold probability of retransmission in order to guarantee a high percentage of success in route discoveries. At high node mobility, the hop count of the routes decreases drastically as the probability for retransmission falls below 0.4 (for low density networks) and 0.2 (for high density networks). This could be attributed to the loss of connectivity between the source and the destination for low values of the probability of retransmission. The network is partitioned into two or more segments. There exists a path from the source to the destination only if the two end nodes of the path are within the same segment, thus accounting for the reduction in the hop count when the network is partitioned. As MPR incurs more message retransmissions, if we can tolerate a 15% sub-optimality in the hop count, the distance-based or probabilistic schemes, at the appropriate threshold values, may be preferred as the route discovering strategies for DSR.

Figure 8.2: vmax = 5m/s, 50 Nodes

Figure 8.3: vmax = 50m/s, 25 Nodes

Figure 8.4: vmax = 50m/s, 50 Nodes Figure 8: Probability of Retransmission Vs Average Hop Count per Path

Figure 9.1: vmax = 5m/s, 25 Nodes

Figure 8.1: vmax = 5m/s, 25 Nodes

Figure 9.2: vmax = 5m/s, 50 Nodes



Figure 9.3: vmax = 50m/s, 25 Nodes

Figure 10.1: Network of 25 Nodes Figure 9.4: vmax = 50m/s, 50 Nodes Figure 9: Threshold Distance of Retransmission Vs Average Hop Count per Path In a distance-based scheme (Figures 9.1 through 9.4), a node rebroadcasts the RREQ message only if no neighbor node within the area covered by the threshold distance of retransmission has yet broadcasted the message. In the case of DSR, even though we may use several threshold distance values for retransmission, the protocol chooses only the route that has the minimum hop count. Hence, the hop count of DSR routes is not much sensitive towards the threshold distance for retransmission, except for high values of the distance. FORP is slightly more sensitive to the threshold distance of retransmission. Routes with physical hop distance 55-65% of the transmission range are more likely to be found when the threshold distance for retransmission of the RREQ messages is also only 55-65% of the transmission range of the nodes. 4.8 Average Number of Route Transitions In the case of DSR (Figure 10), routes discovered through flooding and MCDS have the minimum number of hops. But such routes are very unstable as observed in Figures 10.1 and 10.2. At the time of route discovery, the average physical distance between the constituent nodes of a hop is close to 7080% of the transmission range of the nodes. Such hops are highly vulnerable to fail as the constituent nodes of the hop are more susceptible to move away quickly. The chance of link failure in the near future increases with increase in node mobility. Broadcasting strategies like MPR also do not offer any improvement in the stability of the routes chosen. The DSR protocol always targets for the minimum

hop route among the set of routes discovered using these broadcasting strategies. At the threshold values for the probability of retransmission and the threshold distance for retransmission, as indicated in Figures 10.1 and 10.2, DSR incurs 20% fewer transitions compared to routes discovered using flooding.

Figure 10.2: Network of 50 Nodes Figure 10: Stability of DSR Routes

Figure 11.1: Network of 25 Nodes

Figure 11.2: Network of 50 Nodes Figure 11: Stability of FORP Routes In the case of FORP (Figures 11.1 and 11.2), the


routes are most stable when discovered using flooding. This is because the targeted destination node of the RREQ message receives the message across several routes and selects the route with the highest predicted route expiration time. When routes are discovered using flooding, the number of route transitions incurred by DSR is 70% and 125% more than that incurred by FORP routes at low and high network densities respectively. When route discovery is done using MCDS, the RREQ messages are propagated only by the nodes in the MCDS and hence, the routes learnt are very likely to be of minimum hop paths. Such routes are least stable. When routes are discovered using MCDS, the number of route transitions incurred by DSR is only 3-7% more than that incurred by FORP. Thus, FORP routes selected using MCDS based scheme are the most unstable of routes selected using the broadcasting strategies. DSR routes are less stable in networks of high density compared to networks of low density. This is due to the “edge effect” problem [14]. In high density networks, the average physical distance of a hop in a minimum-hop path during its discovery is close to 80% of the transmission range of the node. While in low-density networks, the average physical distance of a hop is only 70% of the transmission range of the nodes. In high-density networks, when we aim for minimum-hop, we can select the farthest neighbor that is on the path towards the destination. But, this results in routes that are highly unstable. When operated at the threshold distance for retransmission as shown in Figures 11.1 and 11.2, the number of route transitions incurred for both the protocols when using threshold distance of 180m is at most 1.5 times to that incurred when using threshold distance of 100m. More detailed results are shown in Figures 12.1 to 12.4.

Figure 12.3: vmax = 50m/s, 25 Nodes

Figure 12.4: vmax = 50m/s, 50 Nodes Figure 12: Threshold Distance of Retransmission Vs Average Number of Route Transitions Compared to the distance-based scheme, FORP routes discovered using MPR and probability-based scheme are relatively more stable. The number of transitions incurred by these routes is only 20-35% more than that incurred by routes discovered using flooding. For low density networks and in networks with high node mobility, the network connectivity is very limited when operated with low values for the probability of retransmission (Figures 13.1 through 13.4). Under such conditions, the number of successful route discoveries and the number of route transitions are low.

Figure 12.1: vmax = 5m/s, 25 Nodes

Figure 13.1: vmax = 5m/s, 25 Nodes

Figure 13.2: vmax = 5m/s, 50 Nodes Figure 12.2: vmax = 5m/s, 50 Nodes


Figure 13.3: vmax = 50m/s, 25 Nodes

Figure 13.4: vmax = 50m/s, 50 Nodes Figure 13: Probability of Retransmission Vs Average Number of Route Transitions 5 CONCLUSIONS

The high-level contribution of this paper is a simulation-based analysis on the impact of the broadcast route discovery techniques on the stability and hop count of routes discovered for the minimumhop based DSR and the stability-based FORP protocols. We also showed that the probability-based and distance-based schemes, when operated at the appropriate threshold values for retransmission, are more scalable compared to the flooding and MPR schemes. Future work will also involve studying the impact of the broadcasting strategies on the link efficiency and stability of trees and meshes determined for the multicast routing protocols. For networks of low density and high density, we identify the threshold values for the probability of retransmission and the distance for retransmission, using which we can obtain 92-95% success in route discoveries with the minimum number of retransmissions, and below these threshold values, the percentage of success in route discoveries decreases with increase in node mobility. When operated at the threshold probability values for retransmission, the number of retransmitting nodes in a network of 50 nodes is only 20% more than the number of retransmitting nodes in a network of 25 nodes. Also, when operated at the appropriate threshold distances for retransmission, the number of retransmitting nodes decreases as the network density increases. The probabilistic and distancebased schemes require less overhead to implement compared to the MPR and MCDS based schemes. Determining the MCDS in a highly mobile network

itself will be a significant overhead. When we employ the different broadcasting strategies, we are reducing the number of retransmitting nodes as well as the number of retransmitted messages. The routing protocols learn only relatively fewer routes compared to the situation when flooding is used. With flooding, each node in the network retransmits the RREQ packet exactly once, thus resulting in the maximum number of retransmissions. Letting the RREQs propagate through the nodes that form the minimum connected dominating set results in the packet traversing the minimum number of intermediate hops with minimum number of retransmissions from the source to the destination. So, we learn the maximum and minimum number of routes using flooding and MCDS respectively. The number of routes learnt using the other broadcasting strategies is in between these two extremes. In the case of DSR, the hop count of routes chosen using the MCDS and flooding based route discovery approaches is the minimum. Nevertheless, since DSR opts always for the minimum hop routes, the hop count of DSR routes discovered using MPR, probability-based and distance-based schemes is at most 15% more than that discovered using flooding and MCDS. This illustrates that routes having minimum hop or close to being minimum hop are very much discovered using the different broadcasting strategies. If we can tolerate a 15% suboptimality in the hop count, the distance-based or probabilistic scheme at the appropriate threshold values (which yield the minimum number of retransmissions) may be preferred as the route discovery strategies for DSR. FORP targets stable routes and the hop count of such routes are usually more than that of minimum hop routes. At the time of route selection, the average physical distance of the constituent nodes of a hop in stable routes is only 55-65% of the transmission range of the nodes. Thus, FORP is more sensitive to the different broadcasting strategies. The average hop count of FORP routes is 10-15% more than that of DSR routes when routes are learnt using flooding, MPR, distance-based and probabilistic approaches. While using MCDS, the hop count of FORP routes is only 2-3% more than that of DSR routes. The stability of DSR routes does not change much with the broadcasting strategy used. This is because the protocol always targets for minimum-hop routes and manages to discover routes with minimum hop count or routes close to minimum hop count irrespective of the broadcasting strategy used. With respect to FORP, the most stable routes are discovered using flooding. FORP routes discovered using the MCDS approach are the least stable as they are more or less similar to DSR routes. FORP routes discovered using MPR and probability-based


schemes (operated at the threshold probability for retransmission) incur only 20-35% more transitions compared to those routes discovered using flooding. With regards to distance-based schemes, FORP routes are more stable when discovered using moderate values for the distance of retransmission. This is because, at the time of route discovery itself the physical distance between the constituent nodes of a hop is at least the threshold distance of retransmission. In general, route discoveries with less retransmission overhead yield less stable routes and vice-versa. We thus observe a tradeoff between the number of retransmissions per stable route discovery and the number of stable route discoveries needed for a source-destination session.

6 REFERENCES

[1] S. Ni, Y. Tseng, Y. Chen and J. Sheu: The Broadcast Storm Problem in a Mobile Ad Hoc Network, Proceedings of the 5th ACM International Conference on Mobile Computing and Networking, pp. 151-162 (1999).
[2] B. Williams and T. Camp: Comparison of Broadcasting Techniques for Mobile Ad Hoc Networks, Proceedings of the 3rd ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 194-205 (2002).
[3] D. B. Johnson, D. A. Maltz and J. Broch: DSR: The Dynamic Source Routing Protocol for Multi-hop Wireless Ad hoc Networks, Ad hoc Networking, edited by Charles E. Perkins, Chapter 5, Addison Wesley, pp. 139-172 (2001).
[4] W. Su, S-J Lee and M. Gerla: Mobility Prediction and Routing in Ad Hoc Wireless Networks, International Journal of Network Management, Vol. 11, No. 1, pp. 3-30 (2001).
[5] J. Broch, D. A. Maltz, D. B. Johnson, Y. C. Hu and J. Jetcheva: A Performance Comparison of Multi-hop Wireless Ad Hoc Network Routing Protocols, Proceedings of the 4th ACM Annual International Conference on Mobile Computing and Networking, pp. 85-97 (1998).
[6] P. Johansson, T. Larson, N. Hedman, B. Mielczarek and M. Degermark: Scenario-based Performance Analysis of Routing Protocols for Mobile Ad Hoc Networks, Proceedings of the 5th ACM International Conference on Mobile Computing and Networking, pp. 195-206 (1999).
[7] N. Meghanathan and A. Farago: Survey and Taxonomy of 55 Unicast Routing Protocols for Mobile Ad Hoc Networks, Technical Report UTDCS-40-04, University of Texas at Dallas (2004).
[8] N. Meghanathan: A Simulation Study on the Stability-Oriented Routing Protocols for Mobile Ad Hoc Networks, Proceedings of the 3rd IEEE International Conference on Wireless and Optical Communications Networks (2006).
[9] E. D. Kaplan (ed.): Understanding the GPS: Principles and Applications, Artech House, Boston, MA (1996).
[10] K. Fall and K. Varadhan: The ns Manual, The VINT Project, a collaboration between researchers at UC Berkeley, LBL, USC/ISI and Xerox PARC.
[11] IEEE Standards Department: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Standard 802.11-1997 (1997).
[12] C. Bettstetter, H. Hartenstein and X. Perez-Costa: Stochastic Properties of the Random Waypoint Mobility Model, Wireless Networks, Vol. 10, No. 5, pp. 555-567 (2004).
[13] L. Kou, G. Markowsky and L. Berman: A Fast Algorithm for Steiner Trees, Acta Informatica, Vol. 15, pp. 141-145 (1981).
[14] G. Lim, K. Shin, S. Lee, H. Yoon and J. S. Ma: Link Stability and Route Lifetime in Ad hoc Wireless Networks, Proceedings of the International Conference on Parallel Processing Workshops, pp. 116-123 (2002).

78

A NOVEL APPROACH TO ADAPTIVE CONTROL OF NETWORKED SYSTEMS
A. H. Tahoun, Fang Hua-Jing
School of Information Technology and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
alitahoun@yahoo.com

ABSTRACT The insertion of a communication network in feedback adaptive control loops makes the analysis and design of networked control systems increasingly complex. This paper addresses the stability problem of linear time-invariant adaptive networked control systems. Our approach is novel in that knowledge of the exact values of all system parameters is not required. The case of state feedback is treated, in which an upper bound on the norm of matrix A is required to be known. This a priori knowledge of the upper bound on the norm of A is not required to construct the controller; it is required only to determine an upper bound on the transmission period h that guarantees the stability of the overall adaptive networked control system under an ideal transmission process, i.e. no transmission delay or packet dropout. Rigorous mathematical proofs are established, relying heavily on Lyapunov's stability criterion. Simulation results are given to illustrate the efficacy of our design approach. Keywords: Networked control systems, Transmission period, Adaptive control, Lyapunov's stability.

1 INTRODUCTION

Networked control systems (NCSs) are feedback control systems in which network communication channels are used for the communication between spatially distributed system components such as sensors, actuators and controllers. In recent years, the discipline of networked control systems has become a highly active research field. The use of networks as media to interconnect the different components in an industrial control system is rapidly increasing, for example in large-scale plants and in geographically distributed systems, where the number and/or location of the subsystems to be controlled makes the use of single wires to interconnect the control system prohibitively expensive [1]. The primary advantages of an NCS are reduced system wiring, ease of system diagnosis and maintenance, and increased system agility [2]. The insertion of the data network in the feedback control loop makes the analysis and design of an NCS more and more complex, especially for adaptive systems in which the system parameters are not completely known. Conventional control theories with many ideal assumptions, such as synchronized control and non-delayed sensing and actuation, must be re-evaluated before they can be applied to NCSs. Specifically, the following issues need to be addressed. The first issue is the network-induced delay (sensor-to-controller delay and controller-to-actuator delay) that occurs while exchanging data among devices connected to the shared medium. This delay, either constant (up to jitter) or time-varying, can degrade the performance of control systems designed without considering it and can even destabilize the system. Next, the network can be viewed as a web of unreliable transmission paths: some packets not only suffer transmission delay but, even worse, can be lost during transmission [3]. The main challenge to be addressed when considering a networked control system is the stability of the overall NCS. In this paper, we treat the stability analysis of networked adaptive control systems when the network is inserted only between the sensors and the controller. Under an ideal transmission process, i.e. no transmission delay or packet dropout, we derive a sufficient condition on the transmission period that guarantees that the NCS will be stable. This case is treated in [4] and [5] for completely known systems. This paper is organized as follows: the problem is formulated in Section 2; the main result is given in Section 3; Section 4 presents an example and simulation results; finally, we present our conclusions in Section 5.


2 FORMULATION OF THE PROBLEM

Consider the NCS shown in Fig. 1, in which the sensor is clock-driven and both the controller and the actuator are event-driven.

Figure 1 The block diagram of NCS (plant, sensor, network, controller and actuator)

In Fig. 1, a class of linear time-invariant plants is described as

$\dot{x}(t) = Ax(t) + bu(t), \quad t \in [t_k, t_{k+1}), \; k = 0, 1, 2, \ldots$  (1)

where $x(t) \in R^n$ is the state vector, $u(t_k) \in R$ is the control input, $(A, b)$ is controllable, $A$ is a constant matrix with unknown elements, and $b$ is a known constant vector. We assume that the control is updated at the instant $t_k$ and kept constant until the next control update is received at time $t_{k+1}$. Let $h$ be the transmission period between successive transmissions, that is, $h = t_{k+1} - t_k$. For this paper, we assume that the transmission process is ideal: there are no delays and no data (packet) losses during transmission. In future work, we will relax these assumptions. Our objective is to design an adaptive stabilizer for the networked system and to find an upper bound on the transmission period (sampling period) $h$ such that the NCS remains stable. The control input is of the form

$u(t) = k^T(t)\, x(t_k)$  (2)

where $k(t)$ is an n-dimensional control parameter vector and $T$ denotes transpose. From Eqs. (1) and (2), we get

$\dot{x}(t) = Ax(t) + bk^T(t)x(t_k) = \bar{A}x(t) - bk^{*T}x(t) + bk^T(t)x(t_k)$  (3)

where $\bar{A} = A + bk^{*T}$ is a Hurwitz matrix satisfying $\bar{A}^T P + P\bar{A} = -Q$, $P$ and $Q$ are symmetric positive-definite matrices, and $k^*$ is the true value of $k(t)$. Define $\phi(t) = k(t) - k^*$ as the control parameter error vector and $e(t) = x(t) - x(t_k)$ as the transmission error; Eq. (3) can then be rewritten as

$\dot{x}(t) = \bar{A}x(t) + b\phi^T(t)x(t_k) - bk^{*T}e(t)$  (4)

3 MAIN RESULT

The main result of this paper is given in the following theorem.

Theorem 1: The NCS with the linear time-invariant plant (1) and the adaptive stabilizer with control input (2) is globally stable if the adaptive control law takes the form [6]

$\dot{\phi}(t) = -\alpha\, x(t_k)\, x^T(t_k)\, P b$  (5)

and the transmission period satisfies $h < \min\{1, h_1, h_2, h_3\}$, where $\alpha$ is an $n \times n$ symmetric positive-definite adaptation gain matrix, and

$h_1 = \frac{1}{A_{upp}} \ln\left(1 + \frac{A_{upp}}{\zeta_1}\right)$

$h_2 = \frac{1}{A_{upp}} \ln\left(1 + \frac{\beta \lambda_{\min}(Q)\, A_{upp}}{\zeta_2}\right)$

$h_3 = \frac{1}{A_{upp}} \ln\left(1 + \frac{\left(\left(1 - \frac{\beta}{4}\right) - \sqrt{\left(1 - \frac{\beta}{4}\right)^2 - (1 - \beta)}\right) A_{upp}}{\zeta_1}\right)$

where $\zeta_1 = A_{upp} + \|bk^T(t_k)\| + \frac{1}{2}\|b\|^2 \|P\| \|x(t_k)\|^2 \|\alpha\|$ and $\zeta_2 = 4\zeta_1 \|P\| \left(1 + \lambda_{\min}(Q) + \|\bar{A}\| + A_{upp} + \|bk^T(t)\|\right)$.
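As a rough numerical illustration of Theorem 1, the sketch below evaluates ζ1, ζ2 and the three bounds h1, h2, h3 for assumed values of Aupp, β, ||Ā||, P, Q, b, k(t_k) and x(t_k). It simply transcribes the formulas above; the function name, the choice of matrix norms, and the example values are our own assumptions, so the numbers it prints need not match those reported later in the paper.

```python
import numpy as np

def transmission_period_bounds(A_upp, beta, A_bar_norm, P, Q, b, k_tk, x_tk, alpha):
    """Evaluate zeta1, zeta2 and the bounds h1, h2, h3 of Theorem 1.

    All arguments are assumed current values at the transmission instant t_k;
    spectral (2-) norms are used throughout, which is an assumption on our part.
    """
    lam_min_Q = np.linalg.eigvalsh(Q).min()
    nP = np.linalg.norm(P, 2)
    nb = np.linalg.norm(b)
    n_bk = np.linalg.norm(np.outer(b, k_tk), 2)   # ||b k^T(t_k)||
    n_x = np.linalg.norm(x_tk)
    n_alpha = np.linalg.norm(alpha, 2)

    zeta1 = A_upp + n_bk + 0.5 * nb**2 * nP * n_x**2 * n_alpha
    zeta2 = 4.0 * zeta1 * nP * (1.0 + lam_min_Q + A_bar_norm + A_upp + n_bk)

    h1 = np.log(1.0 + A_upp / zeta1) / A_upp
    h2 = np.log(1.0 + beta * lam_min_Q * A_upp / zeta2) / A_upp
    c = (1.0 - beta / 4.0) - np.sqrt((1.0 - beta / 4.0) ** 2 - (1.0 - beta))
    h3 = np.log(1.0 + c * A_upp / zeta1) / A_upp
    return h1, h2, h3

# Example values loosely based on the simulation section (A_upp = 3, beta = 0.9).
P = np.eye(2)
Q = np.diag([2.0, 4.0])
b = np.array([1.0, 1.0])
print(transmission_period_bounds(3.0, 0.9, 2.0, P, Q, b,
                                 k_tk=np.zeros(2), x_tk=np.array([1.0, 1.0]),
                                 alpha=np.eye(2)))
```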

To prove the stability of the NCS we first find an upper bound on the transmission error $e(t)$, then lower and upper bounds on the state $x(t)$, and finally we use these bounds in a Lyapunov function to prove Theorem 1.

Lemma 1 (Transmission Error Upper Bound): The transmission error $e(t)$ is bounded between two successive transmissions by

$\|e(t)\| \le \gamma\, \|x(t_k)\|$  (6)

where

$\gamma = \frac{A_{upp} + \|bk^T(t_k)\| + \frac{1}{2}\|b\|^2 \|P\| \|x(t_k)\|^2 \|\alpha\|}{A_{upp}} \left(e^{A_{upp}(t - t_k)} - 1\right)$

and $A_{upp}$ is an upper bound on $\|A\|$ such that $\|A\| \le A_{upp}$.

Proof: From the definition of $e(t)$, it can be found that

$\dot{e}(t) = \dot{x}(t) = Ax(t) + bk^T(t)x(t_k) = Ae(t) + Ax(t_k) + bk^T(t)x(t_k)$

Integrating both sides, and taking into account that $e(t_k) = 0$, we have

$e(t) = e(t_k) + \int_{t_k}^{t} \left(Ae(s) + Ax(t_k) + bk^T(t)x(t_k)\right) ds$

$\quad = \left[Ax(t_k) + bk^T(t_k)x(t_k)\right](t - t_k) - \frac{1}{2}\, b\, b^T P x(t_k)\, x^T(t_k)\, \alpha\, x(t_k)\,(t - t_k)^2 + \int_{t_k}^{t} Ae(s)\, ds$

If we choose $t - t_k < 1$, therefore,

$\|e(t)\| \le \left[\|A\| \|x(t_k)\| + \|bk^T(t_k)\| \|x(t_k)\| + \frac{1}{2}\|b\|^2 \|P\| \|x(t_k)\|^3 \|\alpha\|\right](t - t_k) + \int_{t_k}^{t} \|A\| \|e(s)\|\, ds$

If we know an upper bound of $\|A\|$, that is $\|A\| \le A_{upp}$, then applying the Bellman-Gronwall Lemma [2] yields

$\|e(t)\| \le \int_{t_k}^{t} \left[A_{upp} + \|bk^T(t_k)\| + \frac{1}{2}\|b\|^2 \|P\| \|x(t_k)\|^2 \|\alpha\|\right] \|x(t_k)\| \exp\left(\int_{s}^{t} A_{upp}\, dw\right) ds$

Then

$\|e(t)\| \le \gamma\, \|x(t_k)\|$

Lemma 2: The state of the NCS, $x(t)$, between successive transmissions is bounded by

$(1 - \gamma)\|x(t_k)\| \le \|x(t)\| \le (1 + \gamma)\|x(t_k)\|$  (7)

Proof: As $e(t) = x(t) - x(t_k)$, then

$\|x(t_k)\| - \|e(t)\| \le \|x(t)\| \le \|e(t)\| + \|x(t_k)\|$

Using Eq. (6), it can be concluded that

$(1 - \gamma)\|x(t_k)\| \le \|x(t)\| \le (1 + \gamma)\|x(t_k)\|$

Now we turn our attention to the proof of Theorem 1. Consider a positive-definite Lyapunov function $V(t)$ of the form

$V(t) = x^T(t)Px(t) + \phi^T(t)\alpha^{-1}\phi(t)$  (8)

Differentiating $V(t)$ with respect to $t$, we have

$\dot{V}(t) = \dot{x}^T(t)Px(t) + x^T(t)P\dot{x}(t) + \dot{\phi}^T(t)\alpha^{-1}\phi(t) + \phi^T(t)\alpha^{-1}\dot{\phi}(t)$  (9)

Substituting for $\dot{x}(t)$ and $\dot{\phi}(t)$ from Eqs. (4) and (5), there results

$\dot{V}(t) = x^T(t)\bar{A}^T Px(t) + x^T(t_k)\phi(t)\, b^T Px(t) - e^T(t)k^*\, b^T Px(t) + x^T(t)P\bar{A}x(t) + x^T(t)Pb\,\phi^T(t)x(t_k) - x^T(t)Pbk^{*T}e(t) - b^T Px(t_k)\, x^T(t_k)\phi(t) - \phi^T(t)x(t_k)\, x^T(t_k)Pb$  (10)

Rearranging Eq. (10) yields

$\dot{V}(t) = -x^T(t)Qx(t) + 2x^T(t)Pb\,\phi^T(t)x(t_k) - 2x^T(t)Pbk^{*T}e(t) - 2x^T(t_k)Pb\,\phi^T(t)x(t_k)$  (11)

$\dot{V}(t)$ becomes bounded from above as

$\dot{V}(t) \le -\lambda_{\min}(Q)\|x(t)\|^2 + 2\|P\| \|b\| \|\phi^T(t)\| \|e(t)\| \|x(t_k)\| + 2\|P\| \|bk^{*T}\| \|x(t)\| \|e(t)\|$  (12)

From (6) and (7), where we choose $h < 1$, then

$\dot{V}(t) \le -\lambda_{\min}(Q)\|x(t)\|^2 + \frac{2\gamma}{1 - \gamma}\|P\| \|b\| \|\phi^T(t)\| \|x(t)\| \|x(t_k)\| + 2\gamma\|P\| \|bk^{*T}\| \|x(t)\| \|x(t_k)\|$  (13)

Using (7), and rearranging,

$\dot{V}(t) \le \frac{1}{1 - \gamma}\|x(t)\| \left(-(1 - \gamma)^2 \lambda_{\min}(Q) + 2\gamma\|P\| \|bk^T(t) - bk^{*T}\| + 2\gamma(1 - \gamma)\|P\| \|bk^{*T}\|\right) \|x(t_k)\|$  (14)

By choosing $\gamma < 1$ to guarantee that $(1 - \gamma) > 0$, we can conclude that $h < h_1$, where

$h_1 = \frac{1}{A_{upp}} \ln\left(1 + \frac{A_{upp}}{\zeta_1}\right)$  (15)

Using $\|bk^{*T}\| \le \|\bar{A}\| + A_{upp}$, $\dot{V}(t)$ becomes

$\dot{V}(t) \le \frac{1}{1 - \gamma}\|x(t)\| \left\{-(1 - \gamma)^2 \lambda_{\min}(Q) + 2\gamma\|P\| \left(\|bk^T(t)\| + \|\bar{A}\| + A_{upp}\right) + 2\gamma(1 - \gamma)\|P\| \left(\|\bar{A}\| + A_{upp}\right)\right\} \|x(t_k)\|$  (16)

Again, by choosing

$\gamma < \frac{\beta \lambda_{\min}(Q)}{4\|P\| \left(1 + \lambda_{\min}(Q) + \|\bar{A}\| + A_{upp} + \|bk^T(t)\|\right)}$

and $0 < \beta < 1$, we have $h < h_2$, where

$h_2 = \frac{1}{A_{upp}} \ln\left(1 + \frac{\beta \lambda_{\min}(Q)\, A_{upp}}{\zeta_2}\right)$  (17)

Substituting for $\gamma$ in (16), we get

$\dot{V}(t) \le \frac{\lambda_{\min}(Q)}{1 - \gamma}\|x(t)\| \left(-(1 - \gamma)^2 + \beta - \frac{\gamma\beta}{2}\right) \|x(t_k)\|$  (18)

Finally, by choosing

$\gamma < \left(1 - \frac{\beta}{4}\right) - \sqrt{\left(1 - \frac{\beta}{4}\right)^2 - (1 - \beta)}$

we have $h < h_3$, where

$h_3 = \frac{1}{A_{upp}} \ln\left(1 + \frac{\left(\left(1 - \frac{\beta}{4}\right) - \sqrt{\left(1 - \frac{\beta}{4}\right)^2 - (1 - \beta)}\right) A_{upp}}{\zeta_1}\right)$  (19)

and we can conclude that $\dot{V}(t) < 0$ if $h$ satisfies $h < \min\{1, h_1, h_2, h_3\}$ with $h_1$, $h_2$, $h_3$ defined in Eqs. (15), (17), and (19), respectively. Therefore, $x(t)$, $\phi(t)$, and $V(t)$ are bounded for all $t \ge t_0$ and the overall system is globally stable.

4 SIMULATION RESULTS

Now we demonstrate the applicability of our approach through the following example. Consider the plant parameters

$A = \begin{bmatrix} 0 & 2 \\ 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$

Assume the desired plant matrix

$\bar{A} = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix}$

and let

$P = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad Q = \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix}$

Assume $A$ is unknown but only $A_{upp}$ is known (take $A_{upp} = 3$). Figure 2 shows the simulation results for the networked control system with $x(0) = [1\;\; 1]^T$, $\alpha = I$ (identity matrix), $\beta = 0.9$, and $k(0) = [0\;\; 0]^T$; it is found that $h_1 < 0.1729$ s, $h_2 < 0.0125$ s, and $h_3 < 0.0149$ s. Before starting the simulation we know that $h < 0.0125$ s, but as the simulation proceeds $h$ can be found on-line, as shown in Fig. 3 (we take $h = 0.002$ s). Figure 4 shows the simulation results for the networked control system with $x(0) = [1\;\; 1]^T$, $\alpha = I$, $\beta = 0.9$, and $k(0) = [1\;\; 1]^T$; it is found that $h_1 < 0.1279$ s, $h_2 < 0.0069$ s, and $h_3 < 0.0104$ s. Before starting the simulation we know that $h < 0.0069$ s (we take $h = 0.001$ s); again, as the simulation proceeds $h$ can be found on-line, as shown in Fig. 5. From Eqs. (15), (17), (19) and Figs. 3 and 5, it can be concluded that $h_3$ is the minimum transmission period.
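The example can be reproduced in outline with a simple Euler simulation of Eqs. (1), (2) and (5): the sensor samples the state every h seconds, the controller holds u constant between samples, and k(t) is integrated with the adaptation law (5). The following sketch is our own illustration under assumed integration settings, not the authors' simulation code.

```python
import numpy as np

# Plant and design matrices from the example; A is treated as unknown by the controller.
A = np.array([[0.0, 2.0], [1.0, 0.0]])
b = np.array([1.0, 1.0])
P = np.eye(2)
alpha = np.eye(2)            # adaptation gain
h = 0.002                    # transmission period, chosen below min{1, h1, h2, h3}
dt = 1e-4                    # Euler integration step (assumed)
T_end = 8.0

x = np.array([1.0, 1.0])     # x(0)
k = np.zeros(2)              # k(0)
x_k = x.copy()               # last transmitted state x(t_k)
next_tx = 0.0
t = 0.0

while t < T_end:
    if t >= next_tx:                              # ideal network: state arrives every h seconds
        x_k = x.copy()
        next_tx += h
    u = float(k @ x_k)                            # Eq. (2): control held between transmissions
    x = x + dt * (A @ x + b * u)                  # Eq. (1), forward Euler
    k_dot = -(alpha @ x_k) * float(x_k @ P @ b)   # Eq. (5): k_dot = phi_dot
    k = k + dt * k_dot
    t += dt

print("final state:", x, "final controller gains:", k)
```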

Figure 2 NCS states x(t)

Figure 3 Transmission period h (h1, h2, h3)

Figure 4 NCS states x(t)

Figure 5 Transmission period h (h1, h2, h3)

5 CONCLUSIONS

This paper has addressed the stability analysis of linear time-invariant adaptive networked control systems. The case of state feedback is treated, in which only an upper bound on the norm of matrix A is required. As shown in Theorem 1, a priori knowledge of the upper bound on the norm of A is not required to construct the controller; it is required only to determine an upper bound on the transmission period h that guarantees the stability of the overall adaptive networked control system under an ideal transmission process, i.e. no transmission delay or packet dropout. In future work we will try to relax these assumptions. Rigorous mathematical proofs have been established, relying heavily on Lyapunov's stability criterion. Simulation results are given to illustrate the efficacy of our design approach. It is verified that, if the sampling period of the network is less than the upper bound on h, the control parameters of the adaptive controller remain bounded and the NCS states converge to zero as time evolves.

ACKNOWLEDGEMENTS

This work is supported by National Natural Science Foundation of China, Grant #60574088 and #60274014.

REFERENCES

[1] L. A. Montestruque and P. Antsaklis: Stability of model-based networked control systems with time-varying transmission times, IEEE Trans. Automat. Contr., vol. 49, no. 9, pp. 1562-1572 (2004).
[2] W. Zhang: Stability analysis of networked control systems, PhD Thesis, Case Western Reserve University (2001).
[3] W. Zhang, M. S. Branicky and S. M. Phillips: Stability of networked control systems, IEEE Control Systems Magazine, vol. 21, pp. 84-99 (2001).
[4] H. Ye: Research on networked control systems, PhD Thesis, University of Maryland (2000).
[5] G. C. Walsh, H. Ye and L. Bushnell: Stability analysis of networked control systems, Proc. Amer. Control Conf., San Diego, CA, pp. 2876-2880 (1999).
[6] G. Tao: Adaptive Control Design and Analysis, John Wiley & Sons, Inc., New Jersey (2003).

DIFFERENT MAC PROTOCOLS FOR NEXT GENERATION WIRELESS ATM NETWORKS
Sami A. El-Dolil Dept. of Electronic and Electrical Comm. Eng., Faculty of Electronic Eng, Menoufya Univ. Msel_dolil@yahoo.com

ABSTRACT This paper presents a comparison between three proposed Medium Access Control (MAC) Protocols for next generation multimedia wireless ATM (WATM) networks. To support the ATM CBR, VBR, ABR services to end users, a MAC protocol must be able to provide bandwidth on demand with suitable performance guarantee. The protocols have been proposed to efficiently integrate multiple ATM traffics over the wireless channel while achieving high channel utilization. The objective of the comparison is to highlight the merits and demerits of the three proposed protocols. Keywords: Medium access control protocol and ATM network. wireless ATM networks. The three protocols are as follow. 1. Dynamic Allocation TDMA MAC Protocol for Wireless ATM Networks. 2. An Intelligent MAC Protocol for next generation Wireless ATM Networks. 3. Contention and Polling based Multiple Access Control with minimum Piggybacking for Wireless ATM Network. Three performance metrics, namely cell loss probability, average cell delay, and throughput, are considered. Section II gives an overview and description of the proposed protocols. In section III, the source models are identified. Section IV, describes the resource allocation algorithm. An evaluation of the performance of the proposed protocols is presented in section V. Finally, section VI concludes the paper. 2 SYSTEM DESCRIPTION

1 INTRODUCTION

Asynchronous transfer mode (ATM) was recommended by the International Telecommunication Union (ITU-T) to be the transfer protocol of the broadband integrated services digital network (B-ISDN). The concept of wireless ATM (WATM) was introduced to extend the capabilities of ATM to wireless arena in [1]. A major issue of WATM network is the selection of a medium access control (MAC) protocol that will efficiently allocate the scarce radio resources among the competing mobile stations while satisfying the QoS required for each admitted connection. Several MAC protocols are proposed for wireless ATM network [2] – [9]. In [2], a novel predictive approach is used to estimate the current requirements for the connections. The variable bit rate (VBR) traffic is divided into guaranteed and best effort traffic while the time to expiry algorithm is adapted for voice and VBR slot allocation. In [3], the leaky bucket algorithm with priority as well as the cell train concept achieves a fair and efficient slot allocation. In [4], Packet Reservation Multiple Access with Dynamic Allocation (PRMA/DA) MAC protocol adopts dynamic allocation algorithm in order to resolve the contention situation quickly and avoid the waste of bandwidth that occurs when there are several unneeded request slots. However the drawback is that this protocol does not use minislots for the access request. In [6] the use of piggybacking information from VBR connection improves the slot allocation for VBR traffic and enhances the overall protocol performance. The current paper introduces a quantitative comparison of three proposed MAC Protocol for

2.1 Air Interface Frame Structure The proposed protocols use frequency division duplex (FDD) with a fixed frame length of 2 m sec. used for the uplink (UL) and the downlink (DL) channel. Fig. 1 illustrates the frame structure for the uplink channel. The channel bit rate is 4.9 Mbps and the data slot size is 53 bytes. The number of slots per frame is 24 slots. The uplink frame is divided into control and data transmission periods, each consisting of integer number of slots. Slots assigned for control purpose are further subdivided into four control mini-slots with each mini-slot accommodating reservation mini-packet.
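To make the frame bookkeeping concrete, the sketch below shows how the 24-slot uplink frame could be split between control and data transmission periods when control slots are subdivided into four mini-slots each. This is only an illustrative calculation; the constants and function name are ours, not from the protocol specifications.

```python
SLOTS_PER_FRAME = 24               # 2 m-sec frame, 53-byte slots, 4.9 Mbps channel
MINISLOTS_PER_CONTROL_SLOT = 4

def frame_layout(control_minislots):
    """Split the uplink frame into control and data transmission periods.
    The number of control mini-slots is rounded up to whole control slots."""
    control_slots = -(-control_minislots // MINISLOTS_PER_CONTROL_SLOT)  # ceiling division
    return {"control_slots": control_slots,
            "data_slots": SLOTS_PER_FRAME - control_slots}

print(frame_layout(4))    # -> {'control_slots': 1, 'data_slots': 23}
print(frame_layout(12))   # -> {'control_slots': 3, 'data_slots': 21}
```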



Figure: 1 the frame structure. In the uplink channel, control slots provide a communication mechanism for a mobile station to send a reservation request during the contention phase of the connection. The data slots are provided with contention-free mechanism during the data transmission phase. An uplink control packet is sent whenever a mobile station needs to inform the base station with its traffic characteristics and source status. Feedback for the uplink control packets is sent in the downlink control packets. 2.2 Contention Access Scheme The first and second protocols use the same contention scheme and the length of the control period is dynamically adjusted as a function of contention traffic load. The control mini-slots are used by the mobile stations to send their reservation requests in contention mode using slotted Aloha protocol. To reduce the access time of real-time connection, which greatly affects the QoS of the real-time services, we separate the control mini-slots assigned to real-time and non real-time connections. The number of control mini-slots assigned to realtime and non real-time connections is adaptively allocated with the collision status. The total number of uplink control mini-slots ranges from 4 to 12 mini-slots. A priority is given to real-time connections by assigning their control mini-slots first according to the number of collisions occurred in the previous frame. In the third protocol, the control period is further divided into contention and polling periods. Control slots assigned in the control period are further subdivided into four control mini-slots, some of them used as contention mini-slots and the others used as polling mini-slots. A fixed number of control minislots are allocated for contention and polling access. The contention mini-slots are used by voice connections to send their reservation requests in contention mode at the beginning of talk-spurt, while the poling mini-slots are used by ABR connections to send their buffer length status to the base station. The number of polling mini-slots are chosen such that the polling period will be less than or equal to the average inter-arrival time of ABR data message (100 m-sec). Number of polling mini-slots ≥ int (number of ABR users * (TF/Tint)). where, TF: Frame duration (2 m-sec). Tint: Average inter-arrival time of ABR data message (100 m-sec). Int: largest integer value. Contention period is set to a constant number of control mini-slots and this number is chosen to satisfy the required QoS for voice traffic. The contention process is divided to four stages: First Stage: When the connection becomes active it randomly selects one of the 4 subsequent frames to send its request during the contention period. Second Stage: If the connection exhibits collision in the first stage it randomly selects one of the 3 subsequent frames to send its request during the contention period. Third Stage: If the connection exhibits collision in the second stage it randomly selects one of the 2 subsequent frames to send its request during the contention period. Fourth Stage: If the connection exhibits collision in the third stage it sends its request in every frame until the base station successfully receives its request. In every stage the connection randomly select one of the available contention mini-slots in the selected frame to send its reservation request. 
If the connection request is correctly received during any stage, the connection exits the contention process and the base station periodically allocates slots to the connection until the end of the talk-spurt. The described contention process aims to reduce the contention load during the contention period in each frame, increase the probability of successfully accessing the network, decrease the probability of collision, reduce the access delay time, and at the same time minimize the number of contention mini-slots used and utilize them efficiently. Decreasing the number of available frames for selection in each subsequent stage aims to reduce the access delay time of the connections and hence reduce the cell loss probability.
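The four-stage contention procedure can be summarized as a random frame-selection rule whose window shrinks after each collision. The sketch below illustrates that rule for one voice connection; the function and variable names are our own and the mini-slot count is an assumption.

```python
import random

def pick_contention_attempt(stage, num_contention_minislots=4):
    """Return (frame_offset, minislot) for a reservation request.
    Stage 1 chooses among the next 4 frames, stage 2 among 3, stage 3 among 2,
    and from stage 4 onward the request is sent in every frame until it succeeds."""
    window = {1: 4, 2: 3, 3: 2}.get(stage, 1)
    frame_offset = random.randrange(window)            # which of the next frames to use
    minislot = random.randrange(num_contention_minislots)
    return frame_offset, minislot

# A connection that keeps colliding moves through the stages:
for stage in (1, 2, 3, 4):
    print(stage, pick_contention_attempt(stage))
```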


2.3 Traffic Integration Strategy As different wireless ATM services share the same resources, an effective interaction between the allocation algorithms is needed to maximize the utilization efficiency of the shared resources. In the first and second protocol , the voice connections have the highest priority and the VBR connections have the next higher priority. The ABR connections have the lowest priority. In the third protocol, the available transmission slots are assigned first to active voice connections, then a minimum assigned slots are allocated to ABR traffic, then VBR traffic slots are allocated, and finally, the remaining slots are distributed between ABR connections according to the buffer length of each connections. 3 SOURCE MODELS 3.1 Voice Source Model A voice source generates a signal that follows a pattern of talk-spurts separated by silent gaps. A speech activity detector can be used to detect this pattern. Therefore, an ON/OFF model can describe a voice source: the source alternates between the ON state where the source generates packets at rate 8 kbps, and the OFF state where no packets are generated. Durations of talk-spurts and silent gaps are modelled as exponential distributions with mean values of 1 and 1.35 sec, respectively. If a voice packet is not sent within its maximum transfer delay (MTD), it should be dropped The MTD is set to be 16 m-sec. 3.2 VBR Source Model The source rates are modelled as truncated Gaussian distribution between (128 – 384 kbps) with mean rate of 256 kbps. The rate of the source varies every 33 m-sec (the duration of image frame) and the MTD of the VBR packet is set to be 50 msec. 3.3 ABR Source Model It resembles a data source with messages of certain length. The length of the message is exponentially distributed with mean 2 k bit, and the inter-arrival time between messages is negatively exponential distributed with mean of 100 m-sec. The MTD of the ABR packet is set to be 6 sec. 4 BANDWIDTH ALLOCATION ALGORITHM The bandwidth allocation for uplink transmission is only considered since the downlink transmission can be scheduled in the same manner as in a wired ATM switch.
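The per-connection slot-allocation rules described in the following subsections largely reduce to token-pool bookkeeping: tokens accumulate at (a fraction of) the mean cell rate, and a connection is granted at most min(tokens, buffered cells) slots per frame. The sketch below is only an illustration of that idea with made-up numbers; it is not taken from any of the three protocol specifications.

```python
def allocate_token_pool(connections, tokens_per_frame, max_cells_per_frame=12):
    """Grant each connection min(tokens in its pool, cells in its buffer) slots,
    capped per frame; tokens are replenished every frame at the mean cell rate."""
    grants = {}
    for cid, conn in connections.items():
        conn["tokens"] += tokens_per_frame
        n = min(conn["tokens"], conn["buffer"], max_cells_per_frame)
        conn["tokens"] -= n
        conn["buffer"] -= n
        grants[cid] = n
    return grants

# Two VBR-like connections with different backlogs (illustrative numbers only).
vbr = {1: {"tokens": 3, "buffer": 5}, 2: {"tokens": 1, "buffer": 8}}
print(allocate_token_pool(vbr, tokens_per_frame=2))   # -> {1: 5, 2: 3}
```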

4.1 Dynamic Allocation TDMA MAC Protocol for Wireless ATM Networks 4.1.1 Slot Allocation Algorithm for Voice traffic The voice connections have the higher priority. At the beginning of a talk-spurt, the mobile sends a control packet. When the base station knows that the connection becomes active the base station periodically allocates slots to the connection until the end of talk spurt. At the end of the talk-spurt, the mobile sets a flag in the last voice packet to inform the base station that the connection is no longer active. 4.1.2 Slot Allocation Algorithm for VBR traffic VBR connections have the next highest priority. They only contend (send a control packet) at the session beginning. Next, all the control information is piggybacking on the data packets, which reduces the contention over the real-time mini-slots. At the base station, a token pool of certain size is introduced for each VBR connection. Tokens are generated at a fixed rate that is equal to the mean cell rate. A token is removed from the corresponding pool for every slot allocated to the connection After slot allocation for voice connections the base station allocates one slot for each VBR connection to send one of their cells and also to piggyback the current traffic parameter (e.g. buffer length, cell delay) of the connection. Then the base station allocates slots for each connection .The number of slots allocated for a connection is the minimum of the buffer length and the number of tokens in the pool such as; Nv= min (Av, Bv) . where Nv : number of slots allocated for the VBR connection. Av : number of tokens in the pool. Bv : number of the packets in the mobile station buffer. Each connection cannot send greater than 12 cells in the frame. Within the frame, priority is given to the connection with minimum time-of-expiry to send their cells earlier. 4.1.3 Slot Allocation Algorithm for ABR traffic The base station records the buffer length status of each connection using the control information transmitted by the mobile. When a message arrives at a mobile, it sends the number of packets in the new message either piggybacked to a data packet or in a control packet. Like VBR connections a token pool is introduced for each ABR connection. ABR connections have lower priority than voice and VBR connections. The number of slots allocated for a connection is the minimum of the buffer length and the number of tokens in the pool such as;


Na= min (Aa , Ba). where, Na : number of slots allocated for the ABR connection. Aa : number of tokens in the pool. Ba : number of the packets in the mobile station buffer. The connection with higher number of tokens in its pool sends their cells earlier within the frame. If there are remaining slots inside the frame, the base station allocates them fairly between ABR and VBR connections. 4.2 An Intelligent MAC Protocol for next generation Wireless ATM Networks 4.2.1 Slot Allocation Algorithm for Voice and CBR traffic The voice connections have the highest priority. At the beginning of a talk-spurt, the mobile sends a control packet to inform the base station that the connection become active. At the base station, a token pool is introduced for each active voice connection and each token is increased by a fixed amount equal to Tv every frame to indicate the number of cells generated in the mobile station buffer and decreased by one for every slot allocated to the corresponding connection. Then the voice connections are arranged according to the content of its token and slots are allocated to the connection with higher value in its token first. At the end of the talk-spurt, the mobile sets a flag in the last voice cell to inform the base station that the connection is no longer active. Tv=Tf /Tp where, Tf : frame duration (2msec). Tp : packetization time of the ATM cell of voice connection (48m-sec). The number of slots allocated for voice connection in each frame should not exceed Lv . where Lv = number of voice connection*( Tf / Tp). The token poll has two advantages: First: it indicates the number of packets generated at the mobile station buffer. Second: it indicates the amount of delay of the generated packet. This helps in deciding which voice connection should send its packet early and leads to reducing the average delay of the voice connections. 4.2.2 Slot Allocation Algorithm for VBR traffic VBR connections have the next higher priority. They only contend (send a control packet) at the session beginning. Next, all the control information is piggybacking on the data packets, which reduces the contention over the real-time mini-slots. At the base station, one token pool of certain size is

introduced for all VBR connections. Tokens are generated at a fixed rate that is equal to the mean cell rate per connection multiplied by the number of VBR connections. A token is removed from the corresponding pool for every slot allocated to any VBR connection. The cell delay is piggybacking on the data packets. The number of slots allocated for a VBR connection depends on the cell delay and the number of token in the pool such as; Nvj=int ( Kv* ( Dvj/Dv)) where Nvj : number of slots allocated for the connection number j Kv : number of tokens in the pool Dvj : delay time of the last transmitted cell from connection number j Dv : total cell delay of all VBR connections 4.2.3 Slot Allocation Algorithm for ABR traffic The base station records the buffer length status of each connection using the control information transmitted by the mobile. When a message arrives at a mobile, it sends the number of packets in the new message either piggybacked to a data packet or in a control packet. Like VBR connections one token pool is introduced for all ABR connections. ABR connections have lower priority than voice and VBR connections. The number of slots allocated for an ABR connection depends on the buffer length and the number of token in the pool such as; Naj= Ka* ( Baj/Ba) Where Naj : number of slots allocated for the connection number j. Ka : number of tokens in the pool. Baj : number of cells in the buffer of the connection (buffer length). Ba : summation of the buffer lengths of all ABR connections. If there are remaining slots inside the frame, the base station allocates them between ABR and VBR connections such that Ψ % for VBR connections and the rest for ABR connections where; Ψ= (

Davg / Toe ) * 100.

where Davg : average delay of VBR connections. Toe : time of expiry of VBR cells (50 m-sec). 4.3 Contention and Polling based Multiple Access Control with minimum Piggybacking for Wireless ATM Network 4.3.1 Slot Allocation Algorithm for Voice Traffic At the beginning of talk spurt the voice connection sends a reservation request through the


contention mini-slot. When the base station successfully receives the request, it periodically allocates slots to the connection up to the end of talk spurt. At the end of talk spurt the connection set a one bit flag in the last transmitted cell to indicate that the connection is no longer active. 4.3.2 Slot Allocation Algorithm for VBR Traffic Initially the base station allocates one slot for each active VBR connection and then broadcast a delay threshold value to all VBR connections every frame. One bit flag is used to indicate the delay status of the buffer and is piggybacked to the data packet (cell). Each VBR connection checks its buffer and sets the flag to one when the packet delay exceeds the delay threshold, and to zero when the packet delay is lower than the delay threshold. The slot allocation procedures are performed as follow: • The base station increases the assigned slots by one for a VBR connection each time its packet delay is greater than the delay threshold (piggybacking flag equal to one). • At the base station a counter is introduced. The counter incremented by one when the number of slots allocated for VBR traffic in the frame is greater than Vmean and decremented by one when it is lower than Vmean. where; Vmean: the mean number of cells generated from all VBR connections per frame according to the mean cell generation rate per connection. • The delay threshold can be set to a fixed value or dynamically adjusted to control the slot allocation process for VBR traffic. The counter can be used to dynamically adjust the delay threshold by increasing the delay threshold value when the counter value is increased and decreasing the delay threshold value when the counter is decreased. Since the increase in the counter value indicates the increase in the allocated bandwidth (slots), so we need to reduce it by increasing the delay threshold value which in turn decreases the piggybacking and hence decreases the number of allocated slots and vice versa. Table.1 shows the dynamic delay threshold values using the dynamic adjustment. • When some of the connection reserved slots are not used by the connection for transmitting its packets (number of generated packets become lower than the number of allocated slots) the base station release this slot and decrement the number of the reserved slots for that connection in the subsequent frames by one. • When the counter becomes greater than the upper limit value (25) the base station release some of the reserved slots for the connections that have no piggybacking in the previous frame until the number

of VBR allocated slots becomes lower than Vmean, to decrease the counter. • Each VBR connection cannot have fewer than one allocated slot per frame. As suggested before, the delay threshold can instead be set at a fixed value, and this value has a significant effect on the allocation process and the achieved QoS of VBR traffic. During the simulation with a fixed delay threshold, we take its value equal to 0.5 of the maximum CTD of a VBR cell (25 m-sec) as an appropriate value and evaluate the performance of the allocation process in this case.

Table 1: Dynamic adjustment of the delay threshold

Counter                Delay threshold value (m-sec)
counter ≤ 4            15
4 < counter ≤ 8        20
8 < counter ≤ 12       25
12 < counter ≤ 15      30
15 < counter ≤ 18      35
18 < counter ≤ 20      40
20 < counter           45
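The counter-driven adjustment of the delay threshold in Table 1 amounts to a simple lookup plus an up/down counter, as in the sketch below; the function names are ours and the per-frame numbers in the example are invented for illustration.

```python
def delay_threshold_ms(counter):
    """Dynamic delay threshold of the third protocol (Table 1)."""
    table = [(4, 15), (8, 20), (12, 25), (15, 30), (18, 35), (20, 40)]
    for upper, threshold in table:
        if counter <= upper:
            return threshold
    return 45

def update_counter(counter, vbr_slots_this_frame, v_mean):
    """Counter moves up or down depending on whether the VBR allocation exceeds Vmean."""
    if vbr_slots_this_frame > v_mean:
        return counter + 1
    if vbr_slots_this_frame < v_mean:
        return max(counter - 1, 0)
    return counter

counter = 0
for slots in [12] * 10:          # ten frames in which VBR demand exceeds Vmean = 10
    counter = update_counter(counter, slots, v_mean=10)
    print(counter, delay_threshold_ms(counter))
```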

4.3.3 Slot Allocation Algorithm for ABR Traffic Polling control mini-slots are used by ABR connections to send their buffer length to the base station. The number of polling mini-slots is selected such that the polling period should be lower than or equal to the inter-arrival time between ABR data messages (100 m-sec) to enable the base station to efficiently monitor the buffer length status of each connection. Initially a minimum number of slots are allocated to ABR traffic. So that, each ABR connection has an allocated bandwidth equivalent to 50 % of its average cell generation rate. The base station controls this minimum assigned bandwidth by maintaining a leaky bucket for every ABR connection. Tokens added to the bucket at constant rate equals to 50% of the average cell generation rate. Every time a slot is allocated to the connection, a token is removed from the bucket. So, in each frame the connection with non empty leaky bucket has allocated slots equal to the number of tokens in its bucket. After allocating the VBR traffic slots, the remaining slots are allocated to ABR connections. ABR connections are arranged according to their buffer length where the connection with higher buffer length has its required slots allocated first.
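Condensing the ABR rules above into one sketch: each connection first draws on a leaky bucket filled at 50% of its average cell rate, and whatever remains after VBR allocation is then handed to the connections with the longest buffers. This is a simplified illustration with our own names and numbers, not the protocol's exact scheduler.

```python
def allocate_abr(connections, remaining_slots, tokens_per_frame):
    """ABR allocation sketch for the third protocol: leaky-bucket minimum first,
    then the remaining slots go to the connections with the longest buffers."""
    grants = {cid: 0 for cid in connections}
    # Minimum guaranteed share from the leaky bucket.
    for cid, c in connections.items():
        c["tokens"] += tokens_per_frame
        n = min(c["tokens"], c["buffer"], remaining_slots)
        c["tokens"] -= n
        c["buffer"] -= n
        remaining_slots -= n
        grants[cid] = n
    # Distribute what is left, longest buffer first.
    for cid, c in sorted(connections.items(), key=lambda kv: kv[1]["buffer"], reverse=True):
        n = min(c["buffer"], remaining_slots)
        c["buffer"] -= n
        remaining_slots -= n
        grants[cid] += n
    return grants

abr = {1: {"tokens": 1, "buffer": 4}, 2: {"tokens": 0, "buffer": 7}}
print(allocate_abr(abr, remaining_slots=6, tokens_per_frame=1))   # -> {1: 2, 2: 4}
```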


PERFORMANCE EVALUATION AND SIMULATION RESULTS A comparison has been made to evaluate the performance of the three proposed protocols using the same simulation parameters. Fig. 2 through Fig. 7 illustrates the performance with integrated traffic. There are 30 voice connection and 12 VBR connections in the network, while the ABR connections are added gradually to the network. All results are presented as a function of the number of ABR connection. For voice traffic, Fig. 2 and Fig. 5 show that a good QoS is achieved by the three protocols in term of cell loss probability (lower than 10-4) and average cell delay (lower than 5 m-sec). it is clear that, approximately, the first and the second proposed protocols achieve better performance than the third one as they use the same contention access scheme, due to using lower number of control mini-slots for contention access. It is worth to mention that the third protocol uses 8.3% of the bandwidth (8 control minislots) for contention and polling access, while the first and second protocol uses 4.45% of the bandwidth (approximately 4 control mini-slots) which make the available data transmission bandwidth for the third protocol lower by 3.85 % than that of the first and second protocol. For VBR traffic, Fig. 3 shows that with low VBR traffic up to 45 connections, the second protocol achieves the best performance in term of cell loss probability as its resource allocation algorithm depends on the cell delay at each connection buffer, so that the connection with higher delay allocated more slots than that with lower delay which leads to reduce the probability that the cell delay exceeds the maximum CTD (cell transfer delay) and then lost. This decreases the cell loss probability. On the other hand this increases the average cell delay that becomes higher than the average cell delay caused by the first protocol as indicated in Fig. 6.The third protocol achieves higher loss probability than the second protocol (Fig. 3), and the highest average cell delay (Fig. 6) since the dynamic delay threshold adjustment process produces more average delay than the other two protocols. When the number of ABR connections becomes greater than 45, the offered traffic becomes higher than the available bandwidth. The third protocol achieves the lowest cell loss probability and lower average cell delay than the second one since a considerable part of ABR slot allocation takes place after VBR slot allocation, and the VBR slot allocation is controlled by the value of delay threshold which has upper limit value. So increasing the number of ABR connection has low significant effect on VBR slot allocation. The first protocol achieves the lowest performance in terms of cell loss probability (Fig. 3), while achieves the lowest average cell delay with all number of ABR connections, (Fig. 6). This indicates that the efficiency


of its VBR Resource allocation algorithm is lower than the other two protocols. For ABR traffic, Fig. 4 and Fig. 7 show that the reduction of data transmission bandwidth of the third protocol by 3.85% due to the contention and polling periods significantly reduces the available bandwidth for ABR traffic which make the cell losses start early before 40 ABR connection and the average cell delay significantly increases with ABR connections since a considerable part of ABR resource allocations takes place after VBR slot allocation. The first and second protocols achieve good QoS for ABR traffic but the first protocol achieve slightly better performance in term of cell loss probability and average cell delay. At 45 ABR connection the first and second protocol achieve approximately 94% data transmission throughput and 98.5% total channel utilization while providing the acceptable QoS required for each traffic category. For the third protocol, at 36 ABR connection 91% data transmission throughput and 98.5% total channel utilization are achieved while preserving the required QoS for each ATM traffic type.

Figure 2: Cell loss probability of Voice connections as a function of the number of ABR connections (12 VBR and 30 Voice connection)

Figure 3: Cell loss probability of VBR connections as a function of the number of ABR connections (12 VBR and 30 Voice connection)

The performance with real time traffic is illustrated in Fig. 8 through Fig. 11. 12 VBR connections are presented while the voice connections are added gradually to the system. For voice traffic, Fig. 8 and Fig. 10 show that the performance is the same as with integrated traffic, where the first and second protocols perform better than the third one. For VBR traffic, Fig. 9 and Fig. 11 show that the second protocol achieves the lowest cell loss probability. The average cell delay of the first and second protocols is low with slightly different values until 112 voice connection (97% channel utilization), after that, the average cell delay of the second protocol become significantly higher since the efficiency of its resource allocation algorithm in reducing cell loss probability results in increasing the average cell delay. The third protocol has lower cell delay than that with integrated traffic because in the absence of ABR traffic, any remaining slots will be given to VBR connections.

A cell loss probability of 10^-3 for VBR traffic is achieved by the first protocol at 106 voice connections (95% channel utilization), by the second protocol at 112 voice connections (97% channel utilization), and by the third protocol at 103 voice connections (94.7% channel utilization).

Figure 6: Average cell delay of VBR connections as a function of the number of ABR connections (12 VBR and 30 Voice connections).

Figure 4: Cell loss probability of ABR connections as a function of the number of ABR connections (12 VBR and 30 Voice connection)

Figure 7: Average cell delay of ABR connections as a function of the number of ABR connections (12 VBR and 30 Voice connections).

Figure 5: Average cell delay of Voice connections as a function of the number of ABR connections, (12 VBR and 30 Voice connections).

These results indicate that the second protocol has the most efficient VBR slot allocation algorithm, followed by the first protocol and finally the third protocol, but the third protocol uses the lowest piggybacking overhead.

Figure 8: Cell loss probability of Voice connections as a function of the number of Voice connections (12 VBR connections)

Figure 9: Cell loss probability of VBR connections as a function of the number of Voice connections (12 VBR connections)

Figure 10: Average cell delay of Voice connections as a function of the number of Voice connections (12 VBR connections)

Figure 11: Average cell delay of VBR connections as a function of the number of Voice connections (12 VBR connections)

6 CONCLUSION

We have presented an extensive performance comparison of three proposed MAC protocols to highlight the merits and demerits of each of them. For voice traffic, a good QoS is achieved by all three protocols, but the first and second proposed protocols achieve better QoS than the third protocol while using a lower number of control mini-slots for contention access. For VBR traffic, the results indicate that the second protocol has the most efficient VBR slot allocation algorithm, followed by the first protocol and finally the third protocol, but the third protocol uses the lowest piggybacking overhead. For ABR traffic, the reduction of the data transmission bandwidth of the third protocol by 3.85% reduces the available bandwidth for ABR traffic, which makes the cell losses and the average cell delay increase significantly to values higher than those of the other two protocols. Finally, the three proposed protocols achieve very high channel utilization of approximately 98% of the wireless ATM channel while respecting the required QoS of the multimedia ATM traffic types.


REFERENCES

[1] D. Raychandhuri and N. D. Wilson, "ATM-based transport architecture for multiservices wireless personal communication networks," IEEE J. Selected Areas in Communications, vol. 12, no. 8, pp. 1401-1414, Oct. 1994.
[2] J. F. Frigon, H. C. B. Chan and V. C. M. Leung, "Dynamic reservation TDMA protocol for wireless ATM networks," IEEE J. Selected Areas in Communications, vol. 19, no. 2, pp. 370-383, Feb. 2001.
[3] N. Passas, S. Paskalis, D. Vali and L. Merakos, "Quality-of-Service-oriented medium access control for wireless ATM networks," IEEE Communications Magazine, pp. 42-50, 1997.
[4] J. Sanchez, R. Martinez and M. W. Marcellin, "A survey of MAC protocols proposed for wireless ATM," IEEE Network, pp. 52-62, Nov. 1997.
[5] H. Liu, U. Cliese and L. Dittmann, "Knowledge-based multiple access protocol in broadband wireless ATM networks," 50th Vehicular Technology Conference, Amsterdam, The Netherlands, pp. 1685-1689, Sept. 1999.
[6] S. Lee, Y. Song, D. Cho, Y. Dhong and J. Yang, "Wireless ATM MAC layer protocol for near optimal quality of service support," GLOBECOM '98, Sydney, Australia, pp. 2264-2269, Nov. 1998.
[7] Y. Kwork and K. N. Lau, "A quantitative comparison of multiple access control protocols for wireless ATM," IEEE Transactions on Vehicular Technology, vol. 50, no. 3, May 2001.
[8] R. Steele and L. Hanzo, Mobile Radio Communications, Wiley and Sons, New York, 1999.
[9] J. C. Chen, K. M. Sivalingam and R. Acharya, "Comparative analysis of wireless ATM channel access protocols," Baltzer Journals, Sept. 1997.
[10] Sami A. El-Dolil and Mohammed Abd Elnaby, "Dynamic Allocation TDMA MAC Protocol for Wireless ATM Networks," Proc. of the Twentieth National Radio Science Conference (20th NRSC 2003), March 18-20, Cairo, Egypt.
[11] Sami A. El-Dolil and Mohammed Abd Elnaby, "An Intelligent Resource Management Strategy for Next Generation WATM Personal Communication Network," Proc. of the 46th IEEE International Midwest Symposium on Circuits & Systems (MWSCAS 2003), December 2003, Cairo, Egypt.
[12] Sami A. El-Dolil and Mohammed Abd Elnaby, "Contention and Polling Based Multiple Access Control with Minimum Piggybacking for Wireless ATM Networks," 21st National Radio Science Conference (21st NRSC 2004), March 16-18, Cairo, Egypt.


INTELLIGENT NETWORK ARCHITECTURE FOR FIXED-MOBILE CONVERGENCE SERVICES
Jong Min Lee, Ae Hyang Park, Jun Kyun Choi Electronics and Telecommunications Research Institute (ETRI) 161 Gajeong-Dong, Yuseong-gu, Daejeon 305-700, Korea E-mail: leejm@etri.re.kr, ahpark@icu.ac.kr, jkchoi@icu.ac.kr

ABSTRACT Recently, competition in telecommunications markets is increasing rapidly. In order to survive in these competitive markets, service providers and network operators have to reform their marketing and service delivery strategies. Fixed-Mobile Convergence (FMC) is an evolution from both the technological and the network provision point of view. FMC can generally be achieved at the intelligent network level. Supporting both fixed and mobile services on the intelligent network architecture will bring a number of benefits to operators and customers. In this article, we propose a Fixed-Mobile Converged intelligent network architecture between the fixed network and the mobile network via existing ADSL (Asymmetric Digital Subscriber Line) technology. This article covers convergence trends, IP access architecture, the mapping mechanism for IP over other technologies, the 3GPP user plane protocol architecture, and technology forecasting. Keywords: FMC, FMS, Fixed-mobile convergence network, intelligent network

1 INTRODUCTION

Fixed-mobile convergence (FMC) is the trend towards seamless connectivity between fixed and wireless telecommunications networks. The term also describes any physical network that allows cellular telephone sets to function smoothly with the fixed network infrastructure. The ultimate goal of FMC is to optimize transmission of all data, voice and video communications to and among end users, no matter what their locations or devices. In the near future, FMC means that a single device can connect through and be switched between fixed and mobile networks. FMC is sometimes seen as a way to reverse the trend towards fixed-mobile substitution (FMS), the increasing tendency for consumers and businesses to substitute cellular telephones for hardwired or cordless landline sets. Consumers prefer mobile phones for several reasons. The most often mentioned factors are convenience and portability. With mobile service, it is not necessary for the user to locate and remain bound to a hard-wired phone set or stay within the limited range of a cordless base unit. Most mobile service providers offer packages in which there is no extra charge for roaming or long-distance calling. Another factor in the acceleration of FMS is the fact that as mobile telephone repeaters have proliferated, the per-minute cost of the services has been declining while coverage has been improving. FMS

at the consumer and enterprise level translates to the industry as a whole, offering a major opportunity to mobile companies and threatening the continued existence of traditional telecommunications companies. A number of companies are offering or developing devices that can connect to both traditional and wireless telecom networks as a means of slowing the overall trend to FMS. The FMC is an evolution from both the technological and marketing points of view. From the technological point of view, convergence can generally be achieved at one of three levels: the terminal level, the intelligent network level, or the switch level. However, incumbent operators and new entrants find that they cannot easily integrate all the current switches. The only level where significant progress has been achieved is the intelligent network level. Solutions based on the intelligent network exactly fit the market demands for flexible, innovative services and fast introduction to the market. Therefore, adoption of an intelligent network solution by mobile operators and implementation of wireless access solutions (with limited mobility) by fixed network operators are the current key drivers toward FMC [1, 2]. Support of both fixed and mobile services on intelligent network architecture brings a number of benefits to operators and customers, helping them to become or remain competitive. Consumers using GSM and WCDMA phones


will now be able to use their mobile phones at home, with the price advantages offered by fixed-line and Internet phones. The intelligent network solution includes a home base station that is, in itself, the world's smallest mobile base station. The home base station is compatible with GSM and WCDMA phones and also includes Wi-Fi and ADSL. This solution enables the operator to offer a "home-area" tariff to all the people living in a household. The home base station is connected, plug-and-play, to any existing IP backhaul network (e.g. ADSL), and the user's mobile phone switches to the indoor home base station automatically as they walk through the door. The remainder of this article is organized as follows. Section 2 investigates current FMC/FMS trends, including the activities of the FMCA, an organization for convergence products and services; there are different approaches to FMC among different operators and vendors. Section 3 presents the IP access architecture model referred from ITU-T standardization and the 3GPP user plane protocol architecture. In Section 4, the intelligent network architecture for FMC services and the proposed protocol architectures are described. Section 5 addresses future mobile technologies. Finally, we conclude this article.

2 CONVERGENCE TREND FOR FMC/FMS

Today, customers are increasingly using mobile phones to replace fixed phones, due to mobile phones’ convenience and greater functionality. With this trend, Fixed to Mobile Convergence (FMC), currently one of the crucial strategic issues in the telecommunications industry is the way to connect the mobile phone to the fixed line infrastructure. With the convergence between the mobile and fixed line networks, telecommunications operators can provide services to users irrespective of their location,

access technology, and terminal. In addition, it is expected that by 2010 mobile service penetration levels of almost 90 percent will be achieved in international market. These trends are stimulating operators and vendors to provide the same services over both fixed and mobile networks, developing a converged intelligent network Fig. 1. Fixed-mobile converged intelligent network can provide seamless location, roaming and hand-off of voice calls between indoor network and outdoor network using one mobile phone with a single number. Applicable to data and video services as well, this capability will enable providers to deliver multimedia services to a range of different devices and maintain service continuity and Quality of Service (QoS) across a range of access networks for users at work, at home, or on the road. The intelligent network can dynamically deliver these services over the most efficient and highest quality network without subscribers having to take action or even acknowledge that any change took place. This results in greater subscriber satisfaction and enhanced customer loyalty. Fig. 2 shows network development roadmap which describes the convergence paths from today’s second-generation wireless system to the thirdgeneration wireless system. There are 3 convergence paths that converge to all IP network: mobile, wireless, and the fixed network. In the past, the mobile network’s data transmission rate was 14.4Kbps to 64Kbps by using PCS (IS-95A) or IS-95B. This was suitable for messaging and short file transfer, but not convenient for Web browsing and multimedia services. In the early 2000, the data transmission rate was enhanced from 144Kbps to 2.4Mbps with newly introduced technologies, CDMA2000 and 1X EVDO. This High-speed data technology allows real-time video communication or large file transfer.

Figure 1: The Fixed-Mobile Converged Intelligent network


Figure 2: Network development roadmap The convergence path of wireless network, which started with IEEE802.11 technology, was initially provided 1Mbps data rate. However, after several years later, the IEEE802.11b technology was developed and the data transmission rate became 11Mbps. currently the data rate is 54Mbps using IEEE802.11a/g, and expect to be increased within several years. The Fixed network also has developed from the utilization of the PSTN modem to optical technology. From the ISDN, the ADSL technology with 1 to 8Mbps data rate was widely deployed and soon the VDSL with 50Mbps data rate was adopted. Now, the FTTH which provides hundreds of Mbps is being used. This improvement will be accelerated and converged to all IP networks. 2.1 Fixed-Mobile Convergence Alliance We briefly introduce the activities of 2.1 FixedMobile Convergence Alliance (FMCA) which is an organization for converged products and services. The FMCA [3] was formed in June 2004 and incorporated as a non-profit trade association under New York law in August 2006. The FMCA is therefore managed by Bylaws and operating policies and governed by the laws of the state of New York, USA. As a market driven organization, the FMCA benefits from a number of Priority Programs focused on making Convergence products and services seamless and easy to use, no matter what access technology is employed, for the benefit of the customer. Its global membership base of leading operators, representing a customer base of over 850 million customers, or 1 in 3 of the world's telecoms users, collaborates with member vendors towards the accelerated development and availability of Convergence products and services in areas such as terminals, access points and home access gateways, roaming and innovative applications. In order to accomplish its goals, the FMCA has developed close relationships with leading Standards Development, Specification & Certification Organizations (SDO/Fora), actively contributing towards the delivery of existing and emerging service requirements. Its worldwide membership represents the organizations that are thought leaders in Convergence and deeply involved in the implementation of Convergence technologies and services. As a global organization, they are working together to provide today's and tomorrow's Convergence customers with high-quality, seamless and easy to use products and services. Through their members' collaborative work, they are ensuring that devices, access points, applications and underlying networks interoperate to deliver the best user experience possible. The FMCA published Release 2.0 of the FMCA Product Requirement Definitions (PRDs). The FMCA PRDs, centered on the key Convergence Technologies (Bluetooth CTP, Wi-Fi GAN/UMA and Wi-Fi SIP), are created by senior technical and product development professionals across the FMCA membership base and reflect common operator requirements for Convergence products and services in areas such as Service Capabilities, Handset, Access Point & Gateway, Network Architecture and Roaming. (May, 2006)


The Release 2.0 PRDs comprise the following documents:
- FMCA Convergence Application Scenarios
- Convergence Services over Wi-Fi GAN (UMA)
- Convergence Services using SIP over Wi-Fi
- Access Point & Gateway Requirements
- Network Architecture Document
- Service Capabilities Document
- Technical Handset Requirements
- Terms and Definitions Document

This milestone reflects the phased evolution of the FMCA PRDs and the FMCA's commitment to collaborating with leading standards development and certification organizations in areas which require operator-led input. The documents have also received input from the Wi-Fi Alliance, the leading worldwide certification body for WLAN technology, which is actively focused on the certification of convergence products and with which the FMCA has a strategic relationship.

In order to understand the technology implications of convergence, various service scenarios have been created. Typical examples of converged service scenarios are as follows [3, 4]:

One-number service: a basic FMC service, typically offered to residential and small office/home office (SOHO) customers, who can be reached by means of enhanced IN functionality. One-number service enables pricing flexibility, such as splitting the charge for incoming calls between the called and calling parties. Furthermore, it enables charging options according to subscriber location. For instance, calls made within the home tariff zone can be priced according to the lower fixed-network tariff, while calls made outside this zone can be priced according to the mobile-network tariff; a minimal sketch of such zone-based rating follows.
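The Python sketch below illustrates the zone-based rating and charge splitting described above. The tariff values, zone names and function names are illustrative assumptions, not figures from the FMCA scenarios.

```python
# Hypothetical sketch of one-number zone-based rating; all values are assumed.
from dataclasses import dataclass

FIXED_RATE_PER_MIN = 0.02   # assumed lower fixed-network tariff
MOBILE_RATE_PER_MIN = 0.10  # assumed mobile-network tariff

@dataclass
class Call:
    minutes: float
    subscriber_zone: str    # zone reported at call setup by the converged IN

def rate_call(call: Call, home_zone: str = "home") -> float:
    """Price a call according to where the subscriber was when it was made."""
    per_min = FIXED_RATE_PER_MIN if call.subscriber_zone == home_zone else MOBILE_RATE_PER_MIN
    return round(call.minutes * per_min, 2)

def split_incoming_charge(total: float, called_share: float = 0.5) -> tuple:
    """Split the charge for an incoming call between called and calling parties."""
    called = round(total * called_share, 2)
    return called, round(total - called, 2)

if __name__ == "__main__":
    print(rate_call(Call(minutes=10, subscriber_zone="home")))     # fixed tariff: 0.2
    print(rate_call(Call(minutes=10, subscriber_zone="visited")))  # mobile tariff: 1.0
    print(split_incoming_charge(1.00))                             # (0.5, 0.5)
```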

Personal Multimedia: a service which provides secure access to the user's multimedia content (stored at home and/or in the network) from any terminal. It allows the user to upload or download content from any device, anywhere, at any time. The service ensures that the right network is used depending on the nature of the content; for example, music and video content may only be downloaded when in range of a high-speed network such as a Wi-Fi hotspot. The service also allows the user to subscribe to media feeds which are automatically delivered to the device over the best network.

Combinational Services: services based on the availability of multiple connections (circuit and data), on a fixed-mobile convergent network, during the same communication session. Using more than one connection in the same session allows media/data flows and different devices to be combined to create new services. The fixed-mobile convergent network solution guarantees that the customer experience is seamless to the end user, independent of the network access used, with different services and devices available in the two environments during a voice call:
- Outdoor: an environment where only the GSM/UMTS network is present;
- Indoor: an environment where the Wi-Fi/Bluetooth/Ethernet networks and the GSM/UMTS networks are available.
The customer can choose the best network (xDSL, Wi-Fi, UMTS, GSM, etc.) for a communication connection in every situation (outdoor or indoor, in the office, at home, or in a public hotspot); a simple selection sketch follows.
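As a rough illustration of this choice, the sketch below picks the most preferred access network from those currently reachable. The preference order is an assumed operator/terminal policy, not something specified in the paper.

```python
# Hypothetical best-network selection; the preference order is an assumed policy.
from typing import Optional

PREFERENCE = ["Ethernet", "Wi-Fi", "Bluetooth", "xDSL", "UMTS", "GSM"]

def select_access_network(available: set, preference: list = PREFERENCE) -> Optional[str]:
    """Return the most preferred network that is currently reachable."""
    for candidate in preference:
        if candidate in available:
            return candidate
    return None

if __name__ == "__main__":
    # Indoor: WLAN and cellular both visible -> Wi-Fi is chosen.
    print(select_access_network({"Wi-Fi", "UMTS", "GSM"}))
    # Outdoor: only cellular coverage -> UMTS is chosen.
    print(select_access_network({"UMTS", "GSM"}))
```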

3 SOLUTION APPROACHES FOR FMC SERVICES

3.1 IP access network architecture

This section describes the high-level IP network architecture and the models for IP services defined in ITU-T Recommendation Y.1231 [5]. We describe the access types and interfaces to be supported by the IP access network, the IP access network capabilities and requirements, and the IP access network functional models and possible arrangements.

3.1.1 General network architecture of the IP network
Fig. 3 shows the general network architecture of an IP network. In Fig. 3, the lines between the various rectangles and ellipses represent connections that are bidirectional, that may be asymmetrical in bit rate, and that may use differing media in the two directions. The reference points (RPs) illustrated are logical separations between functions and may not correspond to physical interfaces in a given implementation; in some network implementations, access and core networks may not be separable.
Figure 3: General network architecture of the IP network

3.1.2 Functional requirements for IP access
Possible IP access functions are as follows (a simplified sketch of how these functions interact during session setup appears after the list):
- Dynamic selection among multiple IP service providers
- Dynamic allocation of IP addresses using PPP
- NAT
- Authentication
- Encryption
- Billing, usage metering and interaction with an AAA server
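The Python sketch below is a toy composition of most of these functions: provider selection, authentication against an AAA server, address allocation, NAT binding and usage metering. Encryption is omitted, and every class, address and credential shown is a hypothetical placeholder rather than anything taken from Recommendation Y.1231.

```python
# Toy composition of the IP access functions listed above; all names,
# addresses and credentials are hypothetical (encryption is not shown).
from dataclasses import dataclass, field

@dataclass
class AAAServer:
    users: dict                     # username -> password
    usage: dict = field(default_factory=dict)

    def authenticate(self, user, password) -> bool:
        return self.users.get(user) == password

    def account(self, user, octets) -> None:
        # usage metering that feeds billing
        self.usage[user] = self.usage.get(user, 0) + octets

@dataclass
class AccessNode:
    providers: dict                 # selectable IP service providers -> AAAServer
    next_host: int = 10             # toy private address pool 192.0.2.x
    nat: dict = field(default_factory=dict)

    def open_session(self, provider, user, password):
        aaa = self.providers[provider]             # dynamic provider selection
        if not aaa.authenticate(user, password):
            return None
        private_ip = f"192.0.2.{self.next_host}"   # dynamic allocation (as PPP/IPCP would do)
        self.next_host += 1
        self.nat[private_ip] = "203.0.113.1"       # NAT binding to a shared public address
        return private_ip

if __name__ == "__main__":
    aaa = AAAServer(users={"alice": "secret"})
    node = AccessNode(providers={"isp-a": aaa})
    ip = node.open_session("isp-a", "alice", "secret")
    aaa.account("alice", octets=1_000_000)
    print(ip, aaa.usage)
```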


Figure 4: PPP over ATM

3.1.3 Examples of IP mapping mechanisms
The following diagrams show protocol stacks for IP over various transmission systems.

IP over PPP over ATM. The ATM AAL5 protocol is designed to provide virtual connections between end stations attached to the same network. The PPP layer treats the underlying ATM AAL5 service as a bit-synchronous point-to-point link; in this context, a PPP link corresponds to an ATM AAL5 virtual connection. Fig. 5 shows the mapping mechanism for IP over PPP over ATM [6].

Figure 5: Mapping mechanism for IP over PPP over ATM
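A rough sketch of this mapping is given below: an IP datagram is framed by PPP and the resulting frame is carried as one AAL5 CPCS-PDU split into 48-byte ATM cell payloads. The CRC-32 trailer field is left as zeros and cell headers are ignored, so this only shows the encapsulation order and is not wire-accurate.

```python
# Illustrative, simplified AAL5 segmentation of a PPP-framed IP datagram.
import struct

CELL_PAYLOAD = 48

def aal5_cells(ppp_frame: bytes) -> list:
    """Return the 48-byte cell payloads carrying one AAL5 CPCS-PDU."""
    trailer_wo_crc = struct.pack("!BBH", 0, 0, len(ppp_frame))  # UU, CPI, Length
    crc = b"\x00\x00\x00\x00"                                   # CRC-32 omitted in this sketch
    body = ppp_frame + trailer_wo_crc + crc
    pad_len = (-len(body)) % CELL_PAYLOAD
    # Padding is inserted before the 8-byte trailer in real AAL5.
    pdu = ppp_frame + b"\x00" * pad_len + trailer_wo_crc + crc
    return [pdu[i:i + CELL_PAYLOAD] for i in range(0, len(pdu), CELL_PAYLOAD)]

if __name__ == "__main__":
    ip_datagram = b"\x45" + b"\x00" * 39           # toy 40-byte IP packet
    ppp_frame = b"\xff\x03\x00\x21" + ip_datagram  # PPP header, protocol 0x0021 = IPv4
    cells = aal5_cells(ppp_frame)
    print(len(cells), [len(c) for c in cells])     # -> 2 cells of 48 bytes
```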

IP over PPP over Ethernet. The Point-to-Point Protocol (PPP) provides a standard method for transporting multi-protocol datagrams over point-to-point links. PPP over Ethernet (PPPoE) provides the ability to connect a network of hosts over a simple bridging access device to a remote access concentrator. With this model, each host uses its own PPP stack and the user is presented with a familiar user interface. To provide a point-to-point connection over Ethernet, each PPP session must learn the Ethernet address of the remote peer and establish a unique session identifier; PPPoE includes a discovery protocol that provides this. Fig. 6 shows the mapping mechanism for IP over PPP over Ethernet [7].
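The sketch below lists the four-frame PPPoE discovery exchange (PADI, PADO, PADR, PADS) through which the host learns the access concentrator's MAC address and receives a session identifier. MAC addresses and the session ID are placeholder values, and the tags and other frame fields of RFC 2516 are omitted.

```python
# Simplified PPPoE discovery exchange (RFC 2516); frame contents are reduced
# to the fields discussed in the text above.
from dataclasses import dataclass

BROADCAST = "ff:ff:ff:ff:ff:ff"

@dataclass
class Frame:
    code: str        # PADI, PADO, PADR or PADS
    src: str         # source MAC
    dst: str         # destination MAC
    session_id: int  # 0 during discovery, assigned in PADS

def discover(host_mac: str, ac_mac: str, assigned_session: int = 0x0001) -> list:
    """Return the four discovery frames of a successful PPPoE setup."""
    return [
        Frame("PADI", host_mac, BROADCAST, 0),               # host looks for concentrators
        Frame("PADO", ac_mac, host_mac, 0),                  # concentrator offers service
        Frame("PADR", host_mac, ac_mac, 0),                  # host requests that offer
        Frame("PADS", ac_mac, host_mac, assigned_session),   # session confirmed with an ID
    ]

if __name__ == "__main__":
    for f in discover("00:11:22:33:44:55", "66:77:88:99:aa:bb"):
        print(f.code, f.src, "->", f.dst, "session", hex(f.session_id))
```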

Figure 6: Mapping mechanism for IP over PPP over Ethernet

3.2 3GPP user plane protocol architecture
This section gives an overview of the WCDMA impacts on the protocol architecture as well as on element functionalities. The architecture can be divided into a user plane, which handles user data, and a control plane. The overall user plane protocol architecture is shown in Fig. 8. The main functionality of the Packet Data Convergence Protocol (PDCP) is header compression, which is not relevant for circuit-switched services. Radio Link Control (RLC) handles segmentation and retransmission. The Medium Access Control (MAC) layer in Release 99 focuses on mapping between the logical channels and handling priorities, as well as on selecting the data rate being used, i.e., the transport format (TF) being applied. Transport channel switching is also a MAC layer functionality.
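To make the layer ordering concrete, the sketch below pushes one packet through stand-ins for the three user-plane steps just described: PDCP header compression, RLC segmentation, and MAC mapping onto a transport format. The compression, PDU size and transport format names are invented for illustration and are not real ROHC or 3GPP formats.

```python
# Illustrative downlink user-plane processing order: PDCP -> RLC -> MAC.
from dataclasses import dataclass

RLC_PDU_SIZE = 40  # assumed fixed RLC PDU payload size for the sketch

@dataclass
class MacPdu:
    logical_channel: int
    transport_format: str
    payload: bytes

def pdcp_compress(ip_packet: bytes, header_len: int = 40) -> bytes:
    """Pretend header compression: replace the IP/UDP header with a 3-byte tag."""
    return b"\x01\x02\x03" + ip_packet[header_len:]

def rlc_segment(sdu: bytes) -> list:
    """Segment one PDCP SDU into fixed-size RLC PDUs (retransmission omitted)."""
    return [sdu[i:i + RLC_PDU_SIZE] for i in range(0, len(sdu), RLC_PDU_SIZE)]

def mac_map(rlc_pdus: list, channel: int, tf: str) -> list:
    """Map the RLC PDUs of a logical channel onto a chosen transport format."""
    return [MacPdu(channel, tf, p) for p in rlc_pdus]

if __name__ == "__main__":
    packet = bytes(40) + b"voice-over-ip-payload" * 4
    for pdu in mac_map(rlc_segment(pdcp_compress(packet)), channel=3, tf="TF2"):
        print(pdu.logical_channel, pdu.transport_format, len(pdu.payload))
```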


3.3 Intelligent network protocol architecture for Fixed-Mobile Convergence services

Mobile operators increasingly perceive a threat from the convergence of WCDMA, Wi-Fi and fixed telephony within the home, and are seeking a way to increase their share of the residential calls market.


Figure 7: PPP over Ethernet

The home base station, or femtocell, supports cellular calls locally and then uses broadband, typically xDSL or cable modem, to carry traffic to the operator's core network. Crucially, as a standard 3G base station, it operates with all existing handsets rather than requiring customers to upgrade to expensive dual-mode devices. This provides cellular carriers with an effective means of countering the threat of VoIP, UMA or VoWi-Fi by using the proposed fixed-mobile converged intelligent network. As the same handset is used for all calls, it improves customer loyalty and reduces churn, since the barriers to changing operator increase. An additional benefit is that network coverage and capacity are increased in a cost-effective manner, exactly where they are most needed by the end user. From the customer's perspective, a home base station offers the benefit of using a single mobile handset, with a built-in personal phonebook, for all calls, whether from home or elsewhere. This eliminates the user frustration caused by switching between handsets with different interfaces and functionality. As a result, the intelligent network architecture can provide the following benefits to both customers and network operators:
- For customers: use of a single-mode terminal everywhere; low price for using the mobile station at home; no need for another terminal (e.g. a fixed telephone); no need to upgrade to an expensive dual-mode terminal.
- For network operators: economic feasibility of investment; low equipment and construction expenditure; reduction of operational expenditure; creation of a new market; application of specialized products for targeted customers.

4 TOWARD MOBILE TECHNOLOGIES OF THE FUTURE

4.1 IMS (IP Multimedia Subsystem)
IMS is defined by 3GPP/3GPP2 as a new core and service 'domain' that allows the service provider to combine wired and wireless applications in the same session, and allows sessions to be modified dynamically on the fly, for instance adding video to a voice call or transferring a cell phone call to a landline seamlessly. This makes possible "blended" services such as video telephony, push-to-talk, chat, broadcast TV using multicast IP video streams, video-on-demand, video surveillance, and other applications. IMS carries signaling and bearer traffic over IP and acts as a session controller application that matches user profiles with the appropriate call/session-handling servers and then routes the call or session to the appropriate destination.
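As a rough illustration of "adding video to a voice call", the sketch below modifies a session description by appending a video stream and bumping the SDP version; a real IMS client would carry such an updated SDP in a SIP re-INVITE. The addresses, ports and codecs are placeholder values.

```python
# Simplified SDP-level view of adding video to an established audio session.
BASE_SDP = """v=0
o=alice 2890844526 2890844526 IN IP4 198.51.100.10
s=-
c=IN IP4 198.51.100.10
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000"""

def add_video(sdp: str, port: int = 51372) -> str:
    """Return a new SDP offer with a video stream appended to the session."""
    video = (f"m=video {port} RTP/AVP 96\n"
             "a=rtpmap:96 H263-1998/90000")
    # Bump the session version in the o= line so the peer sees a new offer.
    lines = sdp.splitlines()
    origin = lines[1].split()
    origin[2] = str(int(origin[2]) + 1)
    lines[1] = " ".join(origin)
    return "\n".join(lines) + "\n" + video

if __name__ == "__main__":
    print(add_video(BASE_SDP))
```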


Figure 8: 3GPP User Plane protocol architecture


Figure 9: Intelligent network protocol architecture for Fixed-Mobile Convergence services

4.2 ITU-T FMC Working Group [8]

The ITU Telecommunication Standardization Sector (ITU-T) coordinates standards for telecommunications on behalf of the International Telecommunication Union (ITU). The FMC working group in ITU-T SG13 describes principles and requirements for the convergence of fixed and mobile networks (Fixed-Mobile Convergence, FMC). This convergence enables mobile users to roam outside the serving area of their mobile networks and still have access to the same set of services outside their network boundaries as they do within those boundaries, subject to the constraints of physical access and commercial agreements. The origin of NGN is within fixed networks and their evolution; however, mobility services are increasingly demanded by users and operators, so the support of several aspects of mobility is a feature to be provided by NGN. This includes "discrete mobility", which can be offered even on fixed-line access technologies. In addition, there is a requirement for fixed-mobile convergence, which includes convergence of services, convergence of the basic architecture of NGN and mobile networks, and so on. These requirements are the basis of this Question and should enable the ITU-T to provide value by rapidly enhancing the NGN Recommendations to provide elements of mobility and extend the mechanisms to fully suit the requirements on NGN. The study items are:
- Service convergence and interoperability between fixed and mobile networks
- Architecture convergence between fixed and mobile networks
- Nomadicity (discrete mobility) within the NGN and possible roaming between fixed and mobile networks
The high-level architecture of the FMC scenarios studied in this Recommendation is depicted in Fig. 10. The architecture assumes a common IMS service platform for the delivery of services over fixed and mobile networks.

Figure 10: IMS based FMC architecture


4.3 802.21 (Media Independent Handover)


Roaming across heterogeneous access technologies such as CDMA, WiMAX, and 802.11, as well as wired access networks such as xDSL and cable, will become a requirement of future networking devices rather than an additional feature. However, supporting seamless roaming between heterogeneous networks can be challenging, since each access network may have different mobility, QoS and security requirements. One of the latest standards committees, IEEE 802.21, is developing protocols that cover both 802-type wireless networks and mobile telephony [9]. This group is creating a framework that defines a media-independent handover function that will help mobile devices to roam seamlessly across heterogeneous access networks.

Figure 11: Genesis for 802.21

The MIH Reference Model for Mobile Stations with Multiple Protocol Stacks is intended to provide methods and procedures that facilitate handover between heterogeneous access networks [10]. These handover procedures can make use of information gathered from both the mobile terminal and the network infrastructure to satisfy user requirements. Several factors may determine the handover decision; typically these include service continuity, application class, quality of service, network discovery and selection, security, power management and handover policy. The reference model facilitates the network discovery and selection process by exchanging network information that helps mobile devices determine which networks are in their current neighborhood. This information can include the link type, link identifier, link availability and link quality of nearby network links. This process of network discovery and selection allows a mobile to connect to the most appropriate network based on certain mobile policies, as in the sketch below.
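The sketch below scores candidate links using MIH-style reported information plus a simple terminal policy. The weights, attribute names and numeric values are hypothetical; 802.21 itself standardizes the information exchange, not the decision algorithm.

```python
# Hypothetical policy-driven target selection using MIH-reported link information.
from dataclasses import dataclass

@dataclass
class LinkInfo:
    link_type: str       # e.g. "802.11", "UMTS", "WiMAX"
    available: bool
    quality: float       # normalized 0..1
    power_cost: float    # normalized 0..1, higher drains the battery faster

def handover_score(link: LinkInfo, app_class: str) -> float:
    """Combine reported link information with a simple terminal policy."""
    if not link.available:
        return float("-inf")
    # Streaming favours quality; background traffic favours low power cost.
    quality_weight = 0.8 if app_class == "streaming" else 0.5
    return quality_weight * link.quality - (1 - quality_weight) * link.power_cost

def select_target(links: list, app_class: str) -> LinkInfo:
    return max(links, key=lambda link: handover_score(link, app_class))

if __name__ == "__main__":
    neighborhood = [
        LinkInfo("802.11", True, quality=0.9, power_cost=0.6),
        LinkInfo("UMTS", True, quality=0.6, power_cost=0.2),
        LinkInfo("WiMAX", False, quality=0.0, power_cost=0.5),
    ]
    print(select_target(neighborhood, "streaming").link_type)   # -> 802.11
    print(select_target(neighborhood, "background").link_type)  # -> UMTS
```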

5 CONCLUSION

In order to succeed in competitive telecommunication markets, network operators and service providers have to develop new markets, enlarge their range of services and provide services at a quicker pace and at more competitive prices. The integration of wired and wireless technologies to create a single telecommunications network foundation has quickly captured the collective imagination of the telecommunication industry. FMC is set to obliterate some of the physical barriers that now prevent telcos from reaching all of their potential consumers with all types of services.

Figure 12: MIH Reference Model for Mobile Stations with Multiple Protocol Stacks


Thus, they have to eliminate the operational and service constraints imposed by different technologies. An emerging solution is the removal of the barriers between the various networks by converging fixed and mobile services. Using the fixed-mobile convergence approach, operators and service providers will be able to enhance customer services and increase their own competitiveness and revenues. This can be done in a cost-effective way by upgrading existing technologies and developing FMC strategies toward third-generation wireless technologies. Furthermore, the comparative analysis of wireless technologies shows that by implementing GPRS technology some operators may upgrade their GSM networks and begin offering wireless data services now, while waiting for WCDMA technology. In other words, using GPRS they can start making a profit from wireless data services, thus creating a milestone toward achieving the vision of ubiquitous personal communications. Other operators, however, may choose to develop their TDMA networks toward EDGE, which they will use as a 3G network. Customers can be attracted by these intermediate solutions, which could provide them with attractive services from new content and service providers. Manufacturers and operators therefore need to keep abreast of all competing technologies, since currently no single technology is in the leading position.

ACKNOWLEDGEMENT
This work was partly supported by the IT R&D program of MIC/IITA [2006-S058-01, Development of Network/Service Control Technology in All-IP based Converged Network] and in part by MIC, Korea, under the ITRC program supervised by IITA.

6 REFERENCES

[1] K. Michaelis: FMC: Breaking Free of Technological Constraints, Telecommunications, pp. 47–49 (1999)
[2] D. Molony: Fixed-mobile integration, Communications Week, pp. 23–27 (1998)
[3] Fixed-Mobile Convergence Alliance (FMCA): FMCA Convergence Application Scenarios, Release 1.0, 2006. [Online]. Available: http://www.thefmca.org
[4] M. Vrdoljak, S. I. Vrdoljak, G. Skugor: Fixed-mobile convergence strategy: technologies and market opportunities, IEEE Communications Magazine, Vol. 38, pp. 116–121 (2000)
[5] International Telecommunication Union Telecommunication Standardization Sector (ITU-T): Draft New ITU-T Recommendation Y.1231, IP Access Network Architecture, 2000. [Online]. Available: http://www.itu.int
[6] G. Gross, M. Kaycee, A. Lin, A. Malis and J. Stephens: "PPP Over AAL5", IETF RFC 2364 (1998)
[7] L. Mamakos, J. Evarts, D. Carrel, D. Simone and R. Wheeler: "A Method for Transmitting PPP Over Ethernet (PPPoE)", IETF RFC 2516 (1999)
[8] ITU-T Study Group 13, Question 6: Working draft of Q.FMC-IMS: Fixed mobile convergence with a common IMS session control domain, http://www.itu.int/ITU-T (2007)
[9] IEEE 802.21: "802.21 Tutorial", http://www.ieee802.org/21/ (2006)
[10] IEEE 802.21: DCN 21-05-0271-00-0000, One_Proposal_Draft_Text.doc (2005)



Call for Papers: Ubiquitous Computing and Communication Journal
Recent advances in electronic and computer technologies have paved the way for the proliferation of ubiquitous computing. The combination of mobile and ubiquitous computing is emerging as a promising new paradigm with the goal of providing computing and communication services all the time, everywhere, transparently and invisibly to the user, using devices embedded in the surrounding physical environment. In this context, the communication devices, the objects with which they interact, or both may be mobile. The implementation of such a paradigm requires advances in wireless network technologies and devices, development of infrastructures supporting cognitive environments, and discovery and identification of ubiquitous computing applications and services. We are seeking research papers, technical reports, dissertations, etc. in these interdisciplinary areas. The goal of the UBICC journal is to publish the most recent results in the development of system aspects of ubiquitous computing. Researchers and practitioners working in this area are expected to take this opportunity to discuss and express their views on the current trends, challenges, and state-of-the-art solutions addressing various issues in this area. Topics of interest include, but are not limited to:
- Applied Soft Computing
- Graphical Models
- Image and Vision Computing
- Artificial Intelligence
- Information Security and Cryptography
- Artificial Intelligence in Medicine
- Information Systems
- Biometric Technology
- Knowledge-Based Systems
- Computer Communications and Networks
- Mathematics of Computing
- Computer Hardware
- Media Design
- Computer Imaging, Graphics & Vision
- Medical Image Analysis
- Computer Networks
- Computer Networks with Ad Hoc Networks and Optical Neural Networks
- Parallel Computing
- Optical Switching and Networking
- Decision Support Systems
- Computers in Biology and Medicine
- Database Management & Info Retrieval
- Programming, SWE & Operating Systems
- Digital Signal Processing
- Robotics and Autonomous Systems
- Displays
- Simulation Modeling Practice and Theory
- Electronic Commerce Research and Applications
- Speech Communication
- Expert Systems with Applications
- Statistical Methodology
- Foundations of Computing
- User Interface, HCI & Ergonomics
- Fuzzy Sets and Systems
- Visual Languages and Computing
- General Computer Science
- Web Semantics
- Miscellaneous

For more information: email info@ubicc.org or visit http://www.ubicc.org


Copyright © 2007 UBICC.ORG All rights reserved.


								