Development of a Wearable Sensor Glove for Real-Time Sign Language Translation

Abstract: This article describes the development of a wearable sensor glove for sign language translation, together with an Android-based application that displays words and produces speech for the translated gestures in real-time. The objective of this project is to enable a conversation between a deaf person and another person who does not know sign language. The glove is composed of five (5) flexible sensors and an inertial sensor. The article also elaborates on the development of the Android-based application using the MIT App Inventor software. The sign language gestures were measured by the sensors and transmitted to an Arduino Nano microcontroller to be translated into words. The processed data was then transmitted to the Android application via Bluetooth, which displayed the words and produced the corresponding sound. Preliminary experimental results demonstrated that the glove successfully displayed words and produced the sound of thirteen (13) translated sign language gestures via the developed application. In the future, it is hoped that further upgrades can produce a device that assists a deaf person in communicating with hearing people without over-reliance on sign language interpreters.


Introduction
Hearing impairment or deafness occurs when trauma or injury damages the components of the ear. Generally, in a noisy environment, an individual who is partially or moderately deaf can detect muted sounds but has difficulty hearing them properly. A person with moderate deafness needs a hearing aid, while a person with extreme deafness requires a cochlear implant. An individual with a hearing impairment typically utilises various ways to express themselves, such as writing on paper, speechreading (lip reading) or even using an interpreter. Based on a report from the World Federation of the Deaf (WFD), approximately 70 million deaf people utilise sign languages around the world 1. Furthermore, more than 200 types of sign languages exist globally. In Malaysia, a deaf person uses the Malaysian Sign Language (MSL). Based on the Malaysian Federation of the Deaf, over 30,000 people in Malaysia have hearing problems, and only about 100 certified sign language interpreters are available to provide services to the deaf community 2.
Sign language plays a major part in the daily life of deaf people, acting as their primary tool for communicating with others. However, they often rely on the services of sign language interpreters to translate sign language while interacting with hearing people. The conversation between deaf and hearing people becomes very difficult without the help of an interpreter. Furthermore, a deaf person also faces difficulties in finding a sign language interpreter who can interpret accurately, particularly for complex concepts and human emotions. Besides, those who rely on sign language also face difficulties in communicating efficiently, which hinders their opportunities to demonstrate their actual abilities in situations such as job interviews or discussions in the workplace. Consequently, there is a need to find new ways to facilitate communication for individuals with hearing loss. Apart from sign language itself, other alternatives to help deaf people interact with hearing people are hearing aids and sign language translation devices. A hearing aid is a sound-amplifying electronic device, worn inside or behind the ear, suitable for individuals with at least minimal hearing ability. Various types of hearing aid devices are available in the market. However, users of hearing aids might experience problems such as discomfort and difficulty adjusting for background noise, which can affect the overall experience with the device.
On the other hand, sign language interpretation systems are emerging technologies that have been extensively studied by experts, with the aim of removing communication barriers between hearing and deaf people. These systems include wearable sensory devices or vision systems that utilize real-time translation to convert sign language into texts or spoken words. These innovative technologies facilitate seamless communication, enabling deaf individuals to converse with hearing individuals without the need for an interpreter. By exploring and acknowledging these advancements, the collaborative effort is directed toward creating an inclusive environment that fosters effective communication between deaf and hearing individuals. These developments have the potential to increase accessibility, promote equality of opportunity and enable deaf individuals to participate fully in various aspects of life.
Basically, these innovative technologies are classified as vision-based and sensor-based systems. Vision-based systems utilise single or multiple cameras to track hand gestures. In these systems, frame images from recorded videos are used as the input for the sign language translation system, which requires a computer to translate the hand gestures using image processing algorithms. Vision-based systems can be implemented easily using a web camera [1][2][3][4], multiple cameras [5], and a smartphone camera [6][7]. Several researchers have utilised active techniques in vision-based systems using off-the-shelf tools such as Microsoft's Kinect [8][9][10], Intel's RealSense [11], and the Leap Motion Controller (LMC) [12][13]. Fundamentally, active techniques utilise the motion of controllable active sources, such as a laser scanner or a projected light, to scan the exterior of the object. The experimental results obtained from previous studies indicate that vision-based systems for sign language gestures demonstrate high accuracy rates with promising results. Vision-based systems such as those in [1][2][3][4][5][6][7][8][9][10][11][12][13] also allow users to gesture more spontaneously and with fewer constraints. However, these approaches have several disadvantages. For example, sign language motions are greatly influenced by the camera's viewpoint; different gestures can look the same or similar when viewed from a single, static camera, leading to potential confusion in gesture recognition. Furthermore, these systems may consist of a standard camera, multiple cameras or even an expensive high-tech camera, which must be connected to a high-performance computer for image processing tasks in a laboratory setting. This requirement makes deployment inconvenient and hinders practical use outside the laboratory. Vision-based systems often require substantial computing resources for real-time image processing and gesture recognition; the computational requirements can be demanding, requiring powerful hardware, which makes it difficult to build a mobile, real-time system. Furthermore, despite advances in image processing, translating sign language gestures can still pose challenges to vision-based systems. The processing time required for proper recognition and translation can cause delays, affecting communication speed and performance. The performance of vision-based methods is also susceptible to the backgrounds and lighting conditions of the captured images. Differences in lighting, shadows, or background complexity can affect the accuracy of gesture recognition. This limitation makes it difficult to use vision-based systems as portable and easy-to-use sign language translation tools, as they may require a controlled environment with consistent lighting conditions. It is important to consider these shortcomings when designing and implementing vision-based systems for sign language translation devices. Addressing these limitations may help improve the reliability, flexibility, and user-friendliness of such systems and promote effective communication between deaf and hearing individuals. Understanding the challenges posed by vision-based systems has led several researchers to explore alternative approaches, such as sensor-based methods for sign language recognition.
A sensor-based system is another type of sign language recognition method that utilises arrays of sensors, developed using micro-electro-mechanical-systems (MEMS) technology, to detect finger and hand motions. MEMS sensor technologies such as inertial measurement units (IMUs), flex sensors, and force sensors have become inexpensive, low-powered, and miniaturised, which is ideal for wearable devices. In research related to sign language translation, the majority of researchers have utilised a glove fitted with a fusion of sensors, namely the combination of flexible sensors and IMUs. This approach is highly preferred because of its cost-effectiveness and portability. For instance, Ahmed et al. proposed a novel sign language recognition approach utilising a sensory glove fitted with five (5) flexible sensors and five (5) IMUs, where a humanoid arm was used to mimic the sign language gestures [14]. The work obtained an exceptional accuracy of 93.4% for 75 static gestures. Similarly, Mehra et al. utilised flexible sensors and IMUs for an American Sign Language interpreter device, where the translation output was displayed on a computer monitor [15]. Another prototype of a wearable sign language interpreter was developed by Chong and Kim using only six (6) IMUs, mounted on the back of the hand and the fingertips, to detect and translate hand motions [16]. A wearable glove fitted with flex sensors and a contact sensor is another combination favoured by researchers. Rishikanth et al. proposed a gesture recognition glove that employs flexible sensors and a contact sensor [17]. The flexible sensors were fitted not only on the fingers but also on the wrist to track wrist motions, while contact sensors were positioned on the fingertips to increase the number of recognisable gestures. The glove was able to recognise 80% of the tested gestures (20 of 25 English alphabet gestures). Kannan et al. presented a gesture detection system that adopts accelerometer sensors positioned on the fingertips to track hand motions and display the result on a liquid-crystal display (LCD) [18]. They experimented with two (2) to five (5) accelerometer sensors to verify the recognition performance, in which five (5) accelerometers achieved 95.3% efficiency compared to 87% efficiency with two (2) accelerometers. However, the work concluded that three (3) accelerometers are the best option considering the cost, training time, and efficiency. Several researchers have also developed wearable gloves fitted with more than three (3) types of sensors to detect and translate hand motions. For example, Lee et al. proposed a smart wearable glove employing a sensor fusion of flexible sensors, pressure sensors, and an accelerometer for translating the American Sign Language alphabet [19]. The translated gestures were displayed on an Android-based application, which also produced an audible voice. Besides, surface electromyographic (sEMG) sensors have also been used to detect sign language motions. An sEMG sensor measures the electrical voltages produced by muscle excitation and contraction, which can be used to differentiate between finger and hand motions based on the behaviour of multiple muscles. Song et al. presented a smart detector prototype based on sEMG for a sign language recognition system [20]. The prototype consists of a skin or tissue interface that provides sEMG signals to the system and a signal amplifying interface that amplifies the obtained signal. Recently, Yu et al. proposed a wearable sensor glove for Chinese sign language translation using sEMG and IMU sensors [21]. They tested the sensor fusion with a deep learning method, which resulted in a 95.1% recognition accuracy.
Based on the literature, sensor-based systems allow for better mobility, versatility, and cost-effectiveness. Unlike vision-based systems, which rely heavily on camera capture, sensor-based systems are less affected by the view or position of the sensor. This characteristic allows for more consistent and accurate recognition of sign language gestures, regardless of the orientation or technique of the user's hands, allowing greater flexibility in capturing and interpreting gestures from different perspectives. Sensor-based systems enable accurate and detailed capture of hand motion and position. By utilizing a variety of sensors such as accelerometers, gyroscopes or flex sensors, these systems can detect subtle changes in sign language gestures and interpret them accurately. The increased sensitivity and specificity of the sensor variety contribute to higher accuracy and improved recognition of complex and intricate hand movements. Furthermore, sensor-based systems are usually able to provide real-time feedback and translation of sign language gestures. The rapid response time of sensors allows immediate recognition and translation, enabling smooth and seamless communication between hearing and deaf individuals. Real-time performance is essential for sustaining the flow of conversations and enabling more natural communication. In terms of lighting conditions, unlike vision-based systems that can be sensitive to variations in lighting, sensor-based systems are generally less affected by ambient light. The reliance on sensors rather than visual input reduces the impact of lighting changes or shadows, ensuring consistent and reliable gesture recognition even in challenging lighting environments. However, the most appealing advantage of sensor-based systems is their ability to be customized and designed to suit individual user preferences and needs. Through adjustments to sensor thresholds, gesture mappings, or other parameters, the system can be tailored to accommodate different gestures, gesture designs, or specific user needs. These customizations enhance the user experience and facilitate more personalized and accurate translation. In addition, sensor-based systems can be developed into compact, lightweight, and wearable devices. Combining sensors with gloves, wristbands or other wearables provides portability and convenience. Users can take the sensor-based system with them, allowing them to communicate in different environments such as classrooms, offices, or public spaces. This portability offers increased accessibility and engagement by providing on-the-go sign language translation. Despite these advantages, sensor-based systems still have certain limitations and disadvantages. They may have difficulty detecting and accurately interpreting some complex sign language gestures. The types of gestures that the system can recognize and correctly translate may be limited by the number of sensors used in the design, resulting in potential inaccuracies or misinterpretations. This limitation may prevent the system from fully capturing expressive sign language. To achieve high recognition accuracy, the use of multiple sensors is often important. However, it is essential to strike a balance between the number of sensors and the associated development costs, because incorporating numerous sensors increases the complexity and cost of the system, making it necessary to find a cost-effective approach without compromising accuracy. Furthermore, the placement of sensors on wearable devices is crucial for accurate gesture recognition. Ensuring that a sensor is properly placed on the user's arm or body can be challenging, as it requires careful placement and alignment. Improper sensor placement can result in inaccuracy or discomfort, affecting both usability and the user experience. In the previous studies described above, the designs of the wearable sensors tend to be bulky, heavy, and uncomfortable. This is largely because most of the research utilised an LCD or a computer monitor to display the translated gestures. However, there has been little research exploring the potential of mobile phone applications, which could make such systems more efficient while further reducing production costs. Therefore, the focus of this work was prompted by the fact that the majority of people own smartphones with built-in cameras. Advances in smartphone camera technology have greatly improved video capture, opening up new possibilities for incorporating sign language translation directly into smartphones and making it a practical solution for real-world situations. Smartphone capabilities thus work to the users' advantage, providing a flexible and simple sign language translation system that matches their daily communication needs.
This article presents the design and development of a wearable sensor glove that translates sign language into words and speech in real-time. A smart glove was developed to detect gestures using flexible sensors and an accelerometer by measuring finger and hand motions. The data from the sensors were processed and translated into words and speech, produced on an Android-based smartphone application. The remainder of the article is organised as follows. First, an overview of the work is presented, and the hardware and Android-based application designs are described. Next, experimental results demonstrating the functionality and efficiency of the real-time sign language translation system are presented.

System Overview
Figure 1 depicts the overview of the proposed wearable sensor glove for the sign language translation system. The system consists of three parts: input sensors, data processing, and output. The input sensors equipped on the smart glove are a GY-61 accelerometer positioned on the back of the hand and five (5) units of 4.5-inch flexible sensors attached to the back of each finger. All of the sensors are linked to an Arduino Nano microcontroller for data processing. The Arduino Nano is a small, portable board built around the ATmega328 single-chip 8-bit microcontroller, with a 5V operating voltage and a 16 MHz processing clock. It has 8 analogue pins, which are adequate to connect the input sensors required in this work. The data from the input sensors are processed and digitised in the microcontroller, where the gestures are recognised and translated into words, which are then transmitted to an Android-based smartphone application via an HC-05 Bluetooth module. If the smartphone application successfully receives the words from the Arduino Nano, the words are displayed on the application, while the corresponding voices for the words are produced via the smartphone's speaker.
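To make this data flow concrete, the following is a minimal sketch of the glove's processing loop under stated assumptions; the pin assignments, the SoftwareSerial wiring for the HC-05, and the matchGesture() helper are illustrative placeholders, not the authors' actual code.

```cpp
#include <SoftwareSerial.h>

// Assumed wiring: five flex-sensor voltage dividers on A0-A4, the GY-61
// accelerometer's X/Y/Z outputs on A5-A7, and the HC-05 Bluetooth module
// on digital pins 2 (RX) and 3 (TX).
SoftwareSerial bt(2, 3);

const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};
const int ACC_PINS[3]  = {A5, A6, A7};

// Placeholder matcher: the real version compares the sensor fusion against
// the sensor-gesture mapping table (Table 1) and returns the matching word,
// or NULL when no gesture matches.
const char* matchGesture(const int flex[5], const int acc[3]) {
  return NULL;
}

void setup() {
  bt.begin(9600);  // HC-05 default baud rate
}

void loop() {
  int flex[5], acc[3];
  for (int i = 0; i < 5; i++) flex[i] = analogRead(FLEX_PINS[i]);
  for (int i = 0; i < 3; i++) acc[i] = analogRead(ACC_PINS[i]);

  const char* word = matchGesture(flex, acc);
  if (word != NULL) {
    bt.println(word);  // transmit the translated word to the SLT app
    delay(1000);       // simple hold-off so one gesture yields one word
  }
}
```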

Wearable Sensor Glove Design
The circuit diagram for the wearable sensor glove was illustrated using the Fritzing software and is depicted in Figure 2(a). The circuit consists of five (5) flexible sensors, an Arduino Nano microcontroller, an accelerometer, an HC-05 Bluetooth module, five (5) units of 220 ohm resistors for the flexible sensors' voltage dividers, jumper wires, and a 6 cm x 5 cm stripboard to hold the data processing components. Figure 2(b) shows the actual circuit attached on the back of a hand glove. As shown in the figure, the flex sensors are sewn and glued to the back of the glove for the purpose of detecting the bending angle of the fingers. These flex sensors are connected using jumper wires to the stripboard, which holds the other electronic components shown in Figure 2(a). The stripboard is also attached on the back of the hand glove using heavy-duty Velcro straps; the use of Velcro straps enables easy detachment of the board from the glove if required.

Sensor Values for Sign Language Gestures
A flexible sensor is a type of variable resistor made of carbon elements. When the conductive carbon surface is bent, it produces a resistance change related to the angle at which it is bent. By characterising this relation, flexible sensors can be used to determine how an individual's fingers move through a series of hand gestures. In this work, Spectra Symbol's flexible sensors were used in the wearable sensor glove design.
Initially, the flexible sensors utilised in this work were calibrated prior to being fitted on the wearable sensor glove. This step was implemented to verify the correct sensor fusion values for a corresponding gesture. Therefore, for each gesture, a combination of five (5) flexible sensor analogue input values and three (3) accelerometer analogue input values were obtained simultaneously. The obtained analogue input values were converted to digital 10-bit resolution data using Arduino's Analog-Digital Converter (ADC). Based on these digital values, the bend angle of each flexible sensor was derived. First, the input voltage value was derived from the digital input voltage value (in the range from 0 to 1023) from Arduino's ADC using the following equation,

V_in = (D_ADC x V_cc) / (2^n - 1)    (1)

where V_in is the input voltage value, D_ADC is the digital input voltage value from Arduino's ADC, V_cc is the power supply voltage, and n is the bit size of the ADC. Then, the resistance value of each flexible sensor was calculated based on the voltage divider equation as follows,

R_flex = R x (V_cc - V_in) / V_in    (2)

where R_flex is the flexible sensor's resistance value, R is the resistance of the resistor used to create the voltage divider, V_in is the input voltage value obtained from Equation (1), and V_cc is the power supply voltage. Subsequently, the bend angle of each flexible sensor was obtained by mapping the flexible sensor's resistance value, R_flex, to the sensor's bend angle. The mapping was executed using the Arduino IDE's map() function, such as map(Rflex, flat_Res, bend_Res, 0, 90.0), where flat_Res (the flexible sensor's value when flat) is mapped to the target angle of 0 degrees, and bend_Res (the flexible sensor's value when bent) is mapped to the target angle of 90 degrees. Rflex is the flexible sensor's resistance value (obtained from Equation (2)) to be mapped to the bend angle.
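As an illustration, the sketch below applies Equations (1) and (2) and the resistance-to-angle mapping to a single flexible sensor. The 220 ohm divider resistor value is taken from the circuit description, while FLAT_RES and BEND_RES are per-sensor calibration constants whose values here are placeholders; the divider orientation (flex sensor on the supply side) is also an assumption. Arduino's built-in map() operates on integers, so a floating-point equivalent is used to reach the 90.0-degree target mentioned above.

```cpp
const float V_CC     = 5.0;    // power supply voltage, V_cc
const int   ADC_BITS = 10;     // ADC resolution n (values 0..1023)
const float R_DIV    = 220.0;  // divider resistor from the circuit description (ohms)

// Placeholder calibration constants for one sensor: resistance when the
// finger is flat and when it is bent to 90 degrees.
const float FLAT_RES = 25000.0;
const float BEND_RES = 45000.0;

// Floating-point version of map(), since the built-in map() truncates.
float fmap(float x, float inMin, float inMax, float outMin, float outMax) {
  return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Converts one analogue reading into a bend angle in degrees.
float readBendAngle(int pin) {
  int d = analogRead(pin);                            // digital ADC value
  float vIn = (d * V_CC) / (pow(2, ADC_BITS) - 1.0);  // Equation (1)
  if (vIn <= 0.0) return 0.0;                         // guard against division by zero
  float rFlex = R_DIV * (V_CC - vIn) / vIn;           // Equation (2)
  return fmap(rFlex, FLAT_RES, BEND_RES, 0.0, 90.0);  // resistance -> angle
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(readBendAngle(A0));  // print one sensor's bend angle
  delay(200);
}
```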
On the other hand, the wearable sensor glove also utilises an accelerometer sensor, whose analogue input values (x, y, and z axes) were converted to digital input values using Arduino's internal ADC, with values ranging from 0 to 1023. The obtained values were applied in this work directly. The fusion of the flexible sensors' bend angle values and the accelerometer values was used to develop a sensor-gesture mapping table, as depicted in Table 1. As shown in the table, for instance, for the translation of the "Congratulation" gesture, the bend angle values for flexible sensor A1 (thumb) are between 45 and 55 degrees, flexible sensor A2 (index finger) between 75 and 85 degrees, flexible sensor A3 (middle finger) between 0 and 10 degrees, flexible sensor A4 (ring finger) between 60 and 70 degrees, and flexible sensor A5 (little finger) between 30 and 40 degrees. Furthermore, the accelerometer produces values between 310 and 326, 280 and 292, and 329 and 333 for the X, Y, and Z axes, respectively. Similarly, for the translation of the "Thank You" gesture, the bend angle values for flexible sensor A1 (thumb) are between 2 and 15 degrees, flexible sensor A2 (index finger) between -8 and 0 degrees, flexible sensor A3 (middle finger) between -57 and 48 degrees, flexible sensor A4 (ring finger) between -8 and 17 degrees, and flexible sensor A5 (little finger) between 4 and 10 degrees. Furthermore, the accelerometer produces values between 320 and 330, 290 and 310, and 330 and 337 for the X, Y, and Z axes, respectively. A similar sensor fusion method was used to produce the other nine (9) gestures using different combinations, as shown in the same table.
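To show how such a mapping table can drive the translation in code, the following sketch checks a measured sensor fusion against per-gesture ranges. Only the "Congratulation" row is filled in, using the values quoted above; the structure of the remaining rows is an assumption about how Table 1 would be encoded, not the authors' implementation.

```cpp
struct GestureRow {
  const char* word;                // translated word for the gesture
  float angleMin[5], angleMax[5];  // bend-angle ranges for sensors A1-A5 (degrees)
  int   accMin[3],   accMax[3];    // accelerometer ranges for X, Y, Z (raw ADC values)
};

// One row per recognisable gesture; the values below are the
// "Congratulation" ranges quoted from Table 1.
const GestureRow TABLE[] = {
  {"Congratulation",
   {45, 75, 0, 60, 30}, {55, 85, 10, 70, 40},
   {310, 280, 329},     {326, 292, 333}},
  // ... the remaining gestures would follow the same pattern
};
const int N_GESTURES = sizeof(TABLE) / sizeof(TABLE[0]);

// Returns the translated word when every sensor value falls inside one
// gesture's ranges, or NULL when no row matches.
const char* matchGesture(const float angles[5], const int acc[3]) {
  for (int g = 0; g < N_GESTURES; g++) {
    bool match = true;
    for (int i = 0; i < 5 && match; i++)
      match = angles[i] >= TABLE[g].angleMin[i] && angles[i] <= TABLE[g].angleMax[i];
    for (int i = 0; i < 3 && match; i++)
      match = acc[i] >= TABLE[g].accMin[i] && acc[i] <= TABLE[g].accMax[i];
    if (match) return TABLE[g].word;
  }
  return NULL;
}

void setup() {
  Serial.begin(9600);
  float angles[5] = {50, 80, 5, 65, 35};  // sample mid-range bend angles
  int   acc[3]    = {318, 286, 331};      // sample mid-range accelerometer values
  const char* w = matchGesture(angles, acc);
  Serial.println(w != NULL ? w : "no match");  // prints "Congratulation"
}

void loop() {}
```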

Android-based Application Development
As explained in the previous subsection, the developed system includes an Android-based smartphone application. In this subsection, the steps involved in developing the application are described comprehensively. The application was developed utilising a powerful open-source application development tool called MIT App Inventor 2.
The MIT App Inventor 2 software comprises two main editor panels, referred to as the Designer and Blocks Editor panels. The Designer panel serves as a platform to create the actual application design to be displayed. It consists of four (4) distinct display windows called Palette, Viewer, Components, and Properties, as shown in Figure 3. The Palette window contains all the components that the application design can utilise. The Viewer window allows developers to visually design the user interface and preview how the final application will appear on the smartphone screen. The Components window lists the components that have been added to the application design. The Properties window gives developers the ability to manipulate various properties of the components, such as the background colour, font type, text, height, width, and more.
In MIT App Inventor 2, the Blocks Editor panel depicted in Figure 4 plays a vital role in defining the behaviour and functionality of selected components within the application. This panel is composed of four sections, namely Built-in, Screen1, Any component, and Viewer. The application developer can choose appropriate blocks from the first three (3) sections and then easily drag them into the Viewer. Within the Viewer section, these selected blocks can be interconnected and arranged to customise the functionality and behaviour of the application according to specific requirements. Using the intuitive block-based tools provided by MIT App Inventor 2, developers can create and modify the behaviour of application components with ease. The software provides a flexible and efficient development environment for building Android applications. With its intuitive Designer and Blocks Editor panels, developers can create visually appealing interfaces and easily define complex actions. Furthermore, the software supports real-time testing and debugging of the application. Developers can connect their smartphones to the software and instantly see how the application behaves on the device. This feature allows for rapid iteration and debugging, ensuring that the application functions as intended. MIT App Inventor 2 also supports the integration of external services and APIs. Developers can incorporate features such as GPS location, camera functionality, social media sharing, and database access into their applications. This enhances the capabilities and interactivity of the created applications.
Figure 5 illustrates the design of the application for sign language translation, called the Sign Language Translator (SLT) application, which was developed using the block-based tools provided by MIT App Inventor 2. The three (3) developed user interfaces are: (a) the main user interface, (b) the Bluetooth device connection interface, and (c) the translated gesture interface.
When the user launches the app, the main user interface of the SLT application appears as shown in Figure 5 (left image). The main user interface consists of a blue button labelled "Bluetooth" that is used to establish a Bluetooth connection with another Bluetooth device. Next to the button is the status of the Bluetooth connection with the other device. Below the Bluetooth button is a text box that displays the text of translated sign language gestures received from the wearable smart glove.
Initially, the app is not connected to another Bluetooth device; therefore, the initial status shows "No Connected". No Bluetooth connection is established while the app is not running. Therefore, to establish a Bluetooth connection, for example with the proposed wearable sensor glove, the user must click the Bluetooth button, which changes the display to show a list of available scanned Bluetooth devices, as shown in Figure 5 (center image). Here, the user needs to select only one device to be connected to the SLT app. As shown in the list, "00:19:08:35:FA:A0 HC05" is the wearable sensor glove's Bluetooth module. When this entry is clicked, the display returns to the main interface, but with the Bluetooth status changed to "Connected", as shown in Figure 5 (right image). This shows that Bluetooth communication between the SLT app and the wearable smart glove has been established. At this point, the user can start using the wearable smart glove to perform sign language gestures. The wearable smart glove translates the sign language gestures into texts, which are then transmitted to the app via the Bluetooth connection. When the data is received by the SLT app, the texts are displayed in the text box as shown in Figure 5 (right image). The user can disconnect the Bluetooth connection with the wearable smart glove by clicking the Bluetooth button again, which changes the Bluetooth status to "Disconnected". Figure 6 shows the simplified steps on how to use the developed Sign Language Translator application.
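On the glove side, a simple way to realise this exchange is a one-word-per-line protocol: each translated gesture is sent as a newline-terminated string, and the app's Bluetooth client reads up to the delimiter and treats the completed line as one word to display and speak. The helper below sketches this assumed protocol; it is not taken from the authors' code.

```cpp
#include <SoftwareSerial.h>

SoftwareSerial bt(2, 3);  // assumed HC-05 wiring: RX on pin 2, TX on pin 3

// Assumed one-word-per-line protocol between the glove and the SLT app:
// the app reads incoming bytes until '\n' and treats the completed line
// as one translated word to display and speak.
void sendTranslatedWord(SoftwareSerial& link, const char* word) {
  link.print(word);  // e.g. "Thank You"
  link.print('\n');  // delimiter used to split messages on the app side
}

void setup() {
  bt.begin(9600);
  sendTranslatedWord(bt, "Thank You");  // example transmission
}

void loop() {}
```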

Experimental Results and Discussion
The experimental steps and results demonstrating the functionality and efficiency of the developed wearable sensor glove and Android-based application are described based on thirteen (13) sign language gestures. Although the number of gestures could be increased, thirteen (13) gestures were initially selected to facilitate the experiment; they were carefully chosen based on their distinctive features and relevance, covering a range of hand movements, finger configurations, and meanings. Prior to the experiment, the wearable sensor glove was calibrated and trained to accurately interpret the hand movements corresponding to the selected sign language gestures. Calibration included modular testing of each sensor used in the device. This involved collecting data from various users performing the gestures and training the system to recognize and associate specific sensor patterns with each gesture. At the end of this section, the performance of the developed real-time sign language translation system is discussed.
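As a concrete picture of the per-sensor calibration step, the sketch below records each flexible sensor's flat and fully-bent readings over the serial monitor; the interactive prompt-and-capture workflow is an assumption for illustration, not the authors' documented procedure.

```cpp
const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // assumed sensor wiring
int flatVal[5], bendVal[5];

// Records one reading per flex sensor into out[].
void captureReadings(int out[5]) {
  for (int i = 0; i < 5; i++) out[i] = analogRead(FLEX_PINS[i]);
}

// Blocks until the user sends any character over the serial monitor.
void waitForKey() {
  while (!Serial.available()) {}
  while (Serial.available()) Serial.read();  // drain the input buffer
}

void setup() {
  Serial.begin(9600);
  Serial.println("Hold the hand flat, then send any character...");
  waitForKey();
  captureReadings(flatVal);  // per-sensor flat readings (flat_Res side)
  Serial.println("Bend all fingers to 90 degrees, then send any character...");
  waitForKey();
  captureReadings(bendVal);  // per-sensor fully-bent readings (bend_Res side)
  for (int i = 0; i < 5; i++) {
    Serial.print("Sensor A"); Serial.print(i + 1);
    Serial.print(": flat="); Serial.print(flatVal[i]);
    Serial.print(" bent="); Serial.println(bendVal[i]);
  }
}

void loop() {}
```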

Experiment on the Wearable Technology for Real-Time Sign Language Gesture Translation
The developed wearable sensor glove and Android-based smartphone application were tested to demonstrate the functionality and efficiency of the wearable technology. The developed prototype of the wearable sensor glove is shown in Figure 2(b), and the Android-based application is depicted in Figure 5. Experimental steps: Prior to the experiment, the Bluetooth connection between the wearable sensor glove and the application was checked five (5) times to ensure good connectivity. The experiment started by supplying power to the Arduino Nano (initially programmed with the data processing code) using a 5V power supply. Then, a subject was asked to tap the SLT application on the smartphone to launch it. Next, the subject was requested to select the Bluetooth device linked to the wearable sensor glove by pressing the "Bluetooth" button. When the Bluetooth communication was established, the subject was instructed to make the first sign language gesture. In this experiment, thirteen (13) sign language gestures were prepared for the subject, namely "Assalamualaikum", "Waalaikumussalam", "Congratulation", "Thank You", "Hello", "Please", "You're welcome", "Blind", "Deaf", "No", "Yes", "Not Yet" and "I". The SLT application showed the translated words of the gesture and produced the corresponding speech. Finally, these steps were repeated for the other sign language gestures, and the success and failure rates were recorded manually. In an effort to verify the performance of the developed device and application, each gesture was tested thirty (30) times. Before the experiment commenced, the subject was introduced to the thirteen (13) gestures and permitted to practise several times to become familiar with each gesture.

Experimental results:
The experimental results demonstrated the successful translation of the thirteen (13) sign language gestures in real-time using the developed sensor-based system. The accuracy of the system varied across gestures, with some achieving higher accuracy rates than others. Figures 7 to 13 demonstrate that the application successfully displayed the thirteen (13) translated sign language gestures, namely "Assalamualaikum", "Waalaikumussalam", "Congratulation", "Thank You", "Hello", "Please", "You're welcome", "Blind", "Deaf", "No", "Yes", "Not Yet", and "I".

Discussion on the Performance of the Wearable Sensor Glove and Android-based Application
Table 2 tabulates a comparison of the sign language translation performance using the developed wearable sensor glove. The translated gestures were displayed on the developed Android-based smartphone application in real-time. As explained in the previous subsection, a subject was asked to execute thirteen (13) sign language gestures, namely "Assalamualaikum", "Waalaikumussalam", "Congratulation", "Thank You", "Hello", "Please", "You're welcome", "Blind", "Deaf", "No", "Yes", "Not Yet", and "I", where each gesture was executed thirty (30) times.
Based on Table 2, when the "Assalamualaikum" and "Deaf" gestures were replicated thirty (30) times, both recorded only a single mistake. These two gestures have the lowest error rates owing to finger positions and hand orientations that differ clearly from the other eleven (11) gestures. On the other hand, the "You're Welcome" and "I" gestures recorded the highest translation errors, with ten (10) and eleven (11) errors respectively. Observation during the experiment showed that both gestures involve relatively similar finger and hand movements, which can cause the device to be unable to discriminate between the two gestures. Other than that, most of the gestures produced fewer than five (5) errors, which is an encouraging result for the prototype. It was observed that some errors occurred due to poor soldering of the sensors, which caused intermittent data output during the gestures. Overall, the system showed promising performance in accurately recognizing and translating sign language gestures. However, some challenges were observed, such as occasional misinterpretations or inaccuracies, particularly in complex or rapid gestures. These limitations provide areas for further refinement and improvement in future iterations of the system. The accuracy of gesture recognition varied across the thirteen (13) selected sign language gestures. Certain gestures exhibited higher recognition rates, indicating the effectiveness of the sensor-based system in capturing and interpreting those specific hand movements. However, it is important to note that some gestures, such as "You're Welcome" and "I", may have been more challenging to detect accurately for the reasons explained above, leading to occasional misinterpretations or inaccuracies. These variations in accuracy could be attributed to factors such as the complexity of the gesture, the speed of execution, and individual variations in performing the gestures. Furthermore, the usability and user experience of the developed system are critical aspects to consider. Participants' feedback regarding the comfort and convenience of wearing the sensor glove, as well as the user interface of the Android application, can provide valuable insights for system refinement. It is important to address any discomfort or design limitations that may affect the user's ability to perform sign language gestures naturally and effortlessly. While the results are promising, there are several limitations that should be acknowledged. The experiment focused on a specific set of sign language gestures, and expanding the gesture vocabulary would be essential for real-world applications. Nevertheless, the successful translation of sign language gestures in real-time opens a range of practical applications for the sensor-based system. It can facilitate communication between deaf and hearing individuals in various settings, including educational institutions, workplaces, and public spaces. The system's ability to provide instantaneous translations enhances the accessibility and inclusivity of communication for the deaf community.
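For reference, the per-gesture success rates implied by Table 2 follow directly from the error counts quoted above, with e errors out of thirty (30) trials:

```latex
% Per-gesture success rate from Table 2 (e = number of errors in 30 trials)
\[
  \text{success rate} = \frac{30 - e}{30} \times 100\%
\]
% e = 1  ("Assalamualaikum", "Deaf")  -> 96.7%
% e = 10 ("You're Welcome")           -> 66.7%
% e = 11 ("I")                        -> 63.3%
```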

Conclusion
This paper provides a comprehensive description of the design of a wearable sensor glove specifically developed for sign language interpretation. Additionally, an Android-based application was developed that can display words in real-time and generate speech based on the translated gestures. The wearable device itself has five (5) flexible sensors and an inertial sensor. Together, these sensors measure sign language gestures, which are fed into an Arduino Nano microcontroller for translation into words; the processed data is then sent via Bluetooth to a custom Android-based application called the Sign Language Translator (SLT) application. When the application receives the translated data, it displays the corresponding texts and produces the speech associated with the translated gesture. The paper describes each step in the development of this device in detail. Furthermore, the paper highlights promising results from preliminary experiments conducted using the wearable smart glove. This study demonstrated the efficiency of the device in terms of text display and speech production for a total of thirteen (13) sign language gestures. In summary, this paper not only outlines the development process of the wearable sensor glove and Android-based application, but also demonstrates their successful application in displaying words and generating speech for various sign language gestures.
In the future, this project will involve storing the sensor values in a database for easy access and analysis. In addition, the range of gestures that the application can detect is planned to be expanded. To this end, new initiatives will be taken, including the development of a pair of wearable smart gloves, in contrast to the current offering, which consists of a single glove. A pair of gloves will allow the system to recognise a wider variety of sign language gestures. Additionally, the application can be enhanced with extra features and functionality. For example, users may have the option to save text and audio, ensuring a personalized experience when using sign language gestures. A complete sign language database can also be developed within the translation application, which can become an interactive learning tool. To enhance accuracy, researchers can employ advanced techniques such as deep learning, convolutional neural networks (CNNs), or recurrent neural networks (RNNs) to capture intricate patterns and dependencies in the sensor data. These models can be trained on large datasets of annotated sign language gestures, allowing them to learn and generalize complex gesture representations effectively. The app aims to make learning sign language easier and more enjoyable for students attending deaf or sign language schools. By incorporating a guide that includes visual representations of sign language, along with corresponding text and signing audio, the learning process becomes engaging and accessible. Through these future developments, the authors wish to develop a device that provides a flexible and user-friendly method of interpreting sign language gestures and converting them into texts and audio based on the preferences of the deaf.

Figure 1. Overview of the proposed wearable sensor glove for the sign language translation system

Figure 2. (a) A circuit diagram of the wearable sensor glove, (b) actual circuit attached on the wearable sensor glove

Figure 5. The contents of the designed application: main interface (left image), user interface that displays the list of available Bluetooth devices (center image), application displaying a translated gesture in words (right image)

Figure 6. Steps on how to use the developed Sign Language Translator (SLT) application

Table 1. The range of sensor values for each gesture

Table 2. Performance of the Wearable Sensor Glove and Android-based App