Automatic Translation of Arabic Text to Arabic Sign Language

AIML 06 International Conference, 13 - 15 June 2006, Sharm El Sheikh, Egypt

Automatic Translation of Arabic Text to Arabic Sign Language

Mohamed Mohandes
King Fahd University of Petroleum and Minerals (KFUPM)
1885 Dhahran, 31261, Saudi Arabia

Abstract: Many efforts have been invested in translating sign languages to spoken languages. However, it is just as important to translate a spoken language to a sign language, since that would provide two-way communication between deaf and vocal people. This paper proposes a system that translates Arabic text to Arabic Sign Language. A word that has a corresponding sign in the Arabic Sign Language dictionary calls a pre-recorded video clip of the sign, which is played on the monitor of a portable computer. If the word does not have a corresponding sign in the dictionary, it is finger-spelled; this is what deaf people do in real life for words, such as proper names, that have no specific sign. The size of each video clip is reduced to about 25-30 KB so that it can be downloaded over the Internet in a reasonable time (2-3 seconds). The system is accessible to the public on the project web site. It can also be used to teach deaf people or their relatives Arabic Sign Language.
1. Introduction

Tremendous efforts have been put into translating speech and text to other sign languages; however, not enough effort has gone into translation to Arabic Sign Language. This section discusses some research on translating text and spoken languages to sign language. Cox et al. [1] presented a system that translates English speech to British Sign Language (BSL) using a specially developed avatar. The system is constrained to post office operations and uses a phrase-lookup approach because of the highly constrained environment of the post office. The authors divided the task into three problems:
• automatic speech-to-text conversion;
• automatic translation of arbitrary English text into a suitable representation of sign language;
• display of this representation as a sequence of signs using computer graphics techniques.
The authors used a virtual human (avatar) that they had already developed for other applications to perform the signs. The system identified signed phrases with an accuracy of 61% for complete phrases and 81% for sign units; nevertheless, the feedback of deaf users and post office clerks was very encouraging for further development. A group of 21 researchers at DePaul University [2,3] participated in developing an automated American Sign Language synthesizer. Suszczanska et al. [4] developed a system to translate texts written in Polish into Polish Sign Language; they also used an avatar, with a dictionary of 600 signs. Scarlatos et al. [5] introduced a system to translate speech into video clips of American Sign Language (ASL). The system displays the ASL clips along with the written words. They used the speech recognition engine built into the Macintosh operating system, which limits the system to recognizing words from a pre-defined set; they plan to extend it to more words and later to phrases.



San-Segundo et al. [6] developed a system to translate speech into Spanish Sign Language. Their system is made up of four modules: speech recognition, semantic analysis, gesture sequence generation, and gesture playing. For speech recognition they used modules developed by IBM, and for semantic analysis, modules developed by the University of Colorado. For gesture sequence generation they used the semantic concepts associated with several Spanish Sign Language gestures. For gesture animation they developed an animated character together with a strategy for reducing the effort of gesture generation: the system automatically generates all agent positions necessary for the animation. The iCommunicator company has built proprietary software [7] that translates spoken English to text or to ASL and costs several thousand dollars; a demo is provided on the company's web page. Eva Safar and Ian Marshall [8] described the architecture of a system to translate English text into a variety of sign languages, such as Dutch, German, and British. They decomposed the process into two stages: manipulation of the English text into a semantics-based representation, and translation from this representation into graphically oriented representations that can drive a virtual avatar. This paper opens the door for research on translating Arabic text to Arabic Sign Language; later, the system will be further developed to translate from speech to sign language.

2. DESCRIPTION OF THE DEVELOPED SYSTEM

Sign language maps letters, words, or expressions of a certain language onto a set of hand gestures, enabling an individual to communicate with hands and gestures rather than by speaking. There has been a continuous increase in interest in facilitating communication with physically challenged people. Systems capable of recognizing sign language symbols can serve as a means of one-way communication from deaf to vocal people.
Many researchers are working along these lines, including a project supported by the Prince Salman Center for Disability Research for translating Arabic Sign Language into spoken words using an instrumented glove. However, that provides only one direction of communication: just as vocal people do not understand signs, deaf people cannot understand spoken words. This is one of the barriers preventing the full integration of deaf people into society. The developed system is the first step towards the final goal of translating spoken Arabic to sign language via voice recognition, with a portable computer acquiring the speech and immediately translating it to sign language shown on its screen. The current stage of the project focuses on translating Arabic text into Arabic Sign Language. As text is typed letter by letter, the system searches the database for the words that start with the letters typed so far; this continues until the last letter. When the user presses the Enter key, the system determines whether the sign exists in the database. If it exists, the corresponding clip is retrieved and shown on the monitor of the portable computer; otherwise the word is finger-spelled, just as deaf people do in their daily life. If the word is found in the dictionary, the related video clip is displayed filling the entire page, as shown in Figure 1. If the word is not in the database, the window is divided according to the number of letters so that the entire word is displayed as clearly as possible, as shown in Figure 2. Long words that are not stored in the dictionary are displayed with smaller pictures so that the entire word fits in the window, as shown in Figure 3.
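The letter-by-letter search described above behaves like an autocomplete over the dictionary's word list. A minimal Java sketch of that idea follows; the class name and sample words are illustrative assumptions, not taken from the actual system:

```java
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.ArrayList;

// Sketch of the letter-by-letter dictionary search: after each typed letter,
// the candidate list narrows to the words starting with the current prefix.
// The word list is hypothetical; the real list comes from the database.
public class PrefixSearch {
    private final TreeSet<String> words = new TreeSet<>();

    public PrefixSearch(List<String> dictionaryWords) {
        words.addAll(dictionaryWords);
    }

    /** Words that start with the given prefix, in sorted order. */
    public List<String> startingWith(String prefix) {
        // subSet selects the contiguous range of entries sharing the prefix;
        // '\uffff' sorts above every character that can follow the prefix.
        SortedSet<String> range = words.subSet(prefix, prefix + '\uffff');
        return new ArrayList<>(range);
    }
}
```

Pressing Enter would then check whether the completed word is among the candidates; if not, the system falls back to finger-spelling.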

Figure 1. A word from the dictionary is displayed over the entire window



Figure 2. The word ‫ﺳﻠﻤﺎن‬ (Salman), which does not exist in the dictionary, is finger-spelled

Figure 3. The letters of long words are displayed at a smaller size to fit the window

2.1 Methods

2.2 Dictionary based

The purpose of this application is to accept Arabic text as input and accurately present the equivalent sign language output. The typed word is processed as soon as the space bar is pressed: the application searches its database for the given Arabic word and presents the equivalent video clip if its sign is present, otherwise it finger-spells the word. As it is developed using Java Server Pages (JSP), it has all the benefits of the Java programming language, and the application is user-friendly, efficient in execution, and scalable. It uses the following software:
• Database: MS Access
• Technology: Java, JSP, JavaScript, HTML
• Web server: Apache Tomcat
• Development tool: NetBeans IDE v4.1
• Client-side requirement: Windows Media Player
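The word-level decision just described (video clip if the sign exists, finger-spelling otherwise) can be sketched in Java as follows; the class name, words, and clip paths are hypothetical examples, not the system's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the dictionary-based translation step: a known word maps to a
// pre-recorded clip, an unknown word is split into letters for finger-spelling.
// The entries below are illustrative; the real ones live in the database.
public class WordTranslator {
    private final Map<String, String> clips = new HashMap<>();

    public WordTranslator() {
        clips.put("بيت", "clips/bayt.wmv");
        clips.put("مدرسة", "clips/madrasa.wmv");
    }

    /** Clip path for a known word, or null if the word must be finger-spelled. */
    public String clipFor(String word) {
        return clips.get(word);
    }

    /** One alphabet image per letter for an unknown word. */
    public String[] fingerSpell(String word) {
        return word.split("");
    }
}
```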

All entries of the database are brought into an object and kept in memory, as shown in Figure 4. This lets the application search for a given word very quickly and decide whether it exists before any request goes to the server: matching is very fast, and unmatched entries are rejected immediately. If the given word matches an entry in the dictionary, the application fetches the corresponding file. Loading all entries of the database into memory allows faster access, better performance, reduced network traffic, and a reduced database burden.
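A minimal sketch of this first-time memory load, assuming a hypothetical loader interface (the real system reads an MS Access table over the JDBC-ODBC bridge):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the one-time memory load described above: the full word list is
// read once at start-up and kept in a Set, so later membership checks never
// touch the database. The loading interface is an assumption for illustration.
public class DictionaryCache {
    private final Set<String> words = new HashSet<>();

    /** Called once at application start-up with all words from the database. */
    public void load(List<String> allWords) {
        words.addAll(allWords);
    }

    /** Constant-time check; unmatched words can be rejected immediately. */
    public boolean contains(String word) {
        return words.contains(word);
    }
}
```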



Figure 4. First-time memory load

2.3 Java and Database based

The application includes a database holding all the signs of the Arabic Sign Language Dictionary [9] for matching, while the images and video files are stored in the file system. Since all the Arabic words of the dictionary are stored in a database table, it is very easy to add, update, or delete signs, and the complete dictionary can be stored in a single table. The application is built with Java technology using the J2EE architecture and accesses the database table through the standard JDBC-ODBC bridge of the Java Database Connectivity (JDBC) package. It can be deployed and run on any operating system with any Java-enabled web server. The images of the sign alphabet are displayed directly in the browser window and automatically sized according to the number of images presented.

2.4 How it works

The index.html page is divided into three frames, as shown in Figure 5:
• top.jsp: resides at the top and contains the translate input box.
• list.jsp: resides on the right-hand side and displays the video library entries.
• display.jsp: resides in the center and is responsible for showing the image(s) or the video.
The top.jsp fires events to list.jsp and display.jsp; list.jsp fires events to top.jsp only. Event handling is a combination of client-side JavaScript and server-side Java code translated to HTML. This structure has the following advantages:
• The three files are separate and easy to manage.
• Client-side JavaScript provides fast event handling.
• Since the application is browser based, no client-side settings are needed beyond the media player and the code being available on the client machine.
• It can be made available on the web to any number of clients anywhere.
• Server-side J2EE coding ensures high availability and fast performance while using few resources; the total rendering time depends only on the size of the video, not on the server-side Java code execution.



Figure 5. The frame coordination

2.5 Java Programming Language

The application uses Java's object-oriented programming concepts: every element is represented as an object, and some objects are serialized so that they can be persistent. Once the code is compiled, there is no need to compile it again for every hit, so the application is fast at runtime. The dictionary can be stored in a serialized object for quick matching of Arabic signs. All that is required to use the application is a Java-enabled web browser (e.g., Internet Explorer or Netscape), and using a browser is simple for an end user, which makes the application very user-friendly and flexible. The user types in a word or chooses one from the displayed list; when the required word is complete, pressing Enter displays the corresponding sign.

3. CONCLUSION and FUTURE WORK

In this paper, all words corresponding to the signs of the Arabic Sign Language Dictionary have been translated to signs. Specific effort has been placed on reducing the size of the video clips, from around 500-600 KB to about 25-30 KB each, so that the system can practically be used over a dial-up connection in a reasonable time while keeping the resolution good enough to show the signs clearly. The developed system constitutes the first phase of building a system that translates Arabic speech to Arabic Sign Language. Upon completion, it can be integrated with the system that translates Arabic Sign Language to spoken Arabic to form an integrated system that removes the barriers to integrating deaf people with the rest of society. In the next phase, it is intended to obtain the root of every word related to a sign so that any derivative of the word plays the same sign, just as the deaf do in their daily life. Then a sophisticated speech-to-text system will be integrated so that the whole system translates speech to signs. The developed system has been placed on the Internet and tested from several areas in the Kingdom.

Acknowledgement: The author would like to thank KFUPM and PSCDR for supporting this work, Mr. Mahdi Al Farag and Jaafar Okakah for performing the signs, and Asad Nafees and Usman Ahmed for implementing the Java scripts.

4. CITED REFERENCES

[1] Alison Wray, Stephen Cox, Mike Lincoln and Judy Tryggvason, "A Formulaic Approach to Translation at the Post Office: Reading the Signs", Language & Communication, No. 24, pp. 59-75, 2004.

[2] Glenn Lancaster, Karen Alkoby, Jeff Campen, Roymieco Carter, Mary Jo Davidson, Dan Ethridge, Jacob Furst, Damien Hinkle, Bret Kroll, Ryan Layesa, Barbara Loeding, John McDonald, Nedjla Ougouag, Jerry Schnepp, Lori Smallwood, Prabhakar Srinivasan, Jorge Toro, Rosalee Wolfe, "Voice Activated Display of American Sign Language for Airport Security", Technology and Persons with Disabilities Conference 2003, California State University at Northridge, Los Angeles, CA, March 17-22, 2003.



[3] Eric Sedgwick, Karen Alkoby, Mary Jo Davidson, Roymieco Carter, Juliet Christopher, Brock Craft, Jacob Furst, Damien Hinkle, Brian Konie, Glenn Lancaster, Steve Luecking, Ashley Morris, John McDonald, Noriko Tomuro, Jorge Toro, Rosalee Wolfe, "Toward the Effective Animation of American Sign Language", Proceedings of the 9th International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media, Plzen, Czech Republic, February 6-9, 2001, pp. 375-378.

[4] Suszczanska, N., Szmal, P., and Francik, J., "Translating Polish Texts into Sign Language in the TGT System", 20th IASTED International Multi-Conference on Applied Informatics, Innsbruck, Austria, pp. 282-287, 2002.

[5] Scarlatos, T., Scarlatos, L., Gallarotti, F., "iSIGN: Making the Benefits of Reading Aloud Accessible to Families with Deaf Children", 6th IASTED International Conference on Computers, Graphics, and Imaging (CGIM 2003), Hawaii, USA, August 13-15, 2003.

[6] San-Segundo, R., Montero, J.M., Macias-Guarasa, J., Cordoba, R., Ferreiros, J., and Pardo, J.M., "Generating Gestures from Speech", Proceedings of the International Conference on Spoken Language Processing (ICSLP 2004), Jeju Island, Korea, October 4-8, 2004.

[7]

[8] Eva Safar and Ian Marshall, "The Architecture of an English-Text-to-Sign-Language Translation System", Recent Advances in Natural Language Processing (RANLP), G. Angelova et al. (eds.), Tzigov Chark, Bulgaria, pp. 223-228, September 2001.

[9] The Arabic Sign Language Dictionary (‫اﻟﻘﺎﻣﻮس اﻻﺷﺎري اﻟﻌﺮﺑﻲ ﻟﻠﺼﻢ‬), League of Arab States and the Arab Federation of Bodies Working in the Care of the Deaf, Press of the Arab League Educational, Cultural and Scientific Organization, 2001.

