Research Strategy: A Constructive-Play Anatomy Learning System Based on Human Finger Gestures on a Holographic Display

Abstract: Human anatomy is the field of biology that studies the human body, an intricate and complex piece of engineering in which every assembly plays an important role. The subject is considered highly complex and therefore calls for advanced technology to help users learn it more effectively. In this study


Introduction
Construction play can enhance a preschool child's aptitude to count, order, measure, classify, and assemble units into a desired shape or object [1]. Block construction is also known as a "super toy" because it engages a child's thinking, creativity, problem-solving skills, and interaction [2].
Compared to other games, construction play helps children think creatively and imaginatively, and it has significant value for convergent thinking and problem solving. Additionally, construction play is among the most recommended approaches for fostering language use and social interaction, because during construction play children like to collaborate and plan their "construction structures" with toys such as Lego, puzzles, and building blocks [3].
Therefore, in this research, we propose the use of digital three-dimensional (3D) displays to represent these construction blocks, here pieces of human anatomy. Digital 3D has attracted significant attention in entertainment [4], the medical field [5], and digital signage [6]. Additionally, this study chooses a holographic display [7], which reflects the image shown on a flat panel, such as a liquid crystal display (LCD), off the surface of a quadrangular half-mirror display, so that the image appears as an object floating inside the display. This blends the real and digital worlds by turning a 3D model into a 3D holographic experience.
In line with the interactive nature of construction play, this research utilizes an immersive user interface based on finger gestures, in which a user can directly manipulate and construct the 3D objects displayed in the hologram environment. Accordingly, this study integrates Leap Motion technology, invented by co-founder David Holz, which delivers sensitive and powerful touch-free 3D motion sensing and motion control. Leap Motion can track all ten fingers and both hands with up to 1/100th of a millimeter accuracy and no visible latency. As a finger-scale control device, Leap Motion has attracted attention and has already been used in remarkable resources for helping people learn [8]. It is a special kit for recognizing human finger motion and is meaningful as an input device, since the fingers are a significant carrier of human emotion and intention.
To address the emerging challenges of children's education and skills delivery, several educational kits have brought their approaches to digital multimedia software (tablet and phone apps) for learning human anatomy. Examples in the Android Play Store include atlases of human anatomy, 3D bones and organs, internal organs in 3D, and others. These apps let kids learn by watching 3D animations or 3D organ parts of the human anatomy. However, the problem with these apps is the interaction, which relies only on tapping and dragging motions that are not very engaging for kids. Some apps use drag-and-drop interaction, but children still struggle to visualize and feel the actual experience with their own hands, and the content is confined to a limited screen [9]. Therefore, several options from the latest advanced Information and Communication Technology (ICT), such as Virtual Reality (VR), Augmented Reality (AR), and holography, come into play in this study.
VR technology can achieve a fully immersive experience, but only for one user at a time, while the rest of the kids must watch the process on a flat screen. AR technology, on the other hand, can only portray the 3D objects through the camera of a phone or tablet. This obstructs the kids' interactivity because they must keep referring back to a screen of limited size (showing limited information) while one hand keeps holding the phone [10]. In addition, this limited AR screen size can cause social separation among kids [11]. Therefore, this study opts for a holographic display, which allows kids to see digital 3D objects appear in reality as a hologram [12]. Kids can view a 3D hologram object from many perspectives, flexibly and conveniently, without wearing glasses or any other device, because it appears as if it exists in the real world. This preserves the natural social interaction and discussion among kids during construction play [13].
The next sections elaborate on existing studies in this field, the research plan (a step-by-step account of how we conduct our research), and the general framework of the developed system (a big picture of how Leap Motion is used for interactive input and its connection to the hologram as output); the last section presents our current development progress with its technical details.

Literature Review
From the perspective of finger gesture interaction, several studies [14], [15], [16], [17], [18] have applied Leap Motion technology for various purposes. However, these studies use it in very limited ways: they employ only simple gestures that let users scale, rotate, and move 3D objects for viewing, which becomes tedious over long-term use. For example, the study in [15] implemented viewing finger gestures with Leap Motion to view 3D skeleton objects. Another study, [19], for biology (see Figure 2), also implements only viewing (zooming and rotation) activity. The constructing level of interactivity has yet to be explored in the literature of this field. Moreover, to achieve the highest user engagement, many scholars [20], [21], [22] state that including constructing activity in the form of game-based learning (GBL) in multimedia materials is imperative: users are more attracted to and engaged by activities framed as a game. Another study [23], which used a VR controller in a virtual reality environment, is a further example of viewing interaction, with a virtual hand that can grasp human skeleton parts to make labels appear. Therefore, unlike the existing studies, which use typical viewing interaction, we aim to develop a construction play (block construction) game on a holographic display for kids with advanced interaction techniques, in which users are immersed in and integrated into various game activities, including constructing activity, all wrapped into a unity of interactive games. We nevertheless adopt the viewing interaction methods of the existing studies and improve on them with more features.
Users can not only view the human anatomy parts but also play through the game at different difficulty levels, from a moving-objects game to a constructing-parts game. Powerful GBL elements, such as challenges, levels, storyline, consequences, and rewards, are taken into account in this study to ensure user engagement. All in all, users are engaged and attracted through a holographic 3D game projection controlled by numerous Leap Motion finger gestures, all classified and calculated thoughtfully.

Research Plan
To achieve the purpose of the study, the Unity 3D game engine is used as the means to connect and integrate the input (Leap Motion) and output (holographic display) worlds. Unity 3D can provide digital 3D objects (complete with animation and sound effects), which must then be carefully designed, accumulated, calculated, programmed, and synchronized to achieve the smooth integration required for the desired study outcome. In general, we have five main research plans:
1. Identifying Leap Motion gestures: in this phase, appropriate hand gestures are selected; the set of selected gestures is then stored in a database.
2. Projecting 3D object models onto the holographic display: using the Unity 3D game engine, the holographic display function is created and applied to project the models from the LCD monitor onto the holographic display.
3. Identifying game-based learning activities of construction play: this phase identifies appropriate game-type construction play activities that are aligned with the environment of the pyramid holographic display.
4. Synchronizing Leap Motion gesture detection with 3D objects using the 3D engine: the connections between the selected Leap Motion gestures and the 3D objects are designed and calibrated. The details of this process are shown in Figure 4.
5. Integrating the identified gestures and game-based learning activities: in this phase, all identified components are integrated. The storyline, UI design, game elements, effects, animations, etc. are elaborated and comprehensively unified.

General Framework of Developed System
In this study, the Leap Motion sensor serves as the interaction tool to manipulate the 3D construction play objects displayed on the holographic display. The Leap Motion sensor, equipped with an API, is used to gather gesture data for further processing. The holographic 3D display, in turn, acts as the display platform that gives the user an immersive interaction experience. The overall framework of the proposed system is illustrated in Figure 5. The framework is divided into three main stages: Leap Motion Gesture Recognition, 3D Human Anatomy Processing, and Holographic Display Projection.

Leap Motion Gesture Recognition
Gesture recognition is vital in the proposed framework for manipulating the human anatomy objects. The LMC is placed in front of the holographic display, as shown in Figure 6, to collect interaction data between the Leap Motion and the 3D anatomy models. Gesture recognition works in parallel with 3D human anatomy processing and involves the following steps.

Data Acquisition
The input gesture is acquired through the Leap Motion sensor, which tracks the motion of the hand and fingers in every frame within its field of view. The latest version of the LMC is adopted to collect both left- and right-hand motion.

Feature Extraction
Hand features such as finger direction and palm position are extracted from the input gesture. In addition, the relative distances between adjacent fingers and between the palm and the fingertips are calculated for later use in classification.
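As an illustrative sketch of this step (assuming the Leap C# SDK's `Hand` and `Finger` types; the `HandFeatures` container and `ExtractFeatures` helper are our own names, not part of the SDK), the features above can be gathered from one tracked hand per frame:

```csharp
using System.Collections.Generic;
using Leap;

// Hypothetical per-hand feature container (names are ours).
public struct HandFeatures
{
    public Vector PalmPosition;
    public Vector PalmNormal;
    public List<float> PalmToTipDistances;    // palm centre -> each fingertip
    public List<float> AdjacentTipDistances;  // thumb-index, index-middle, ...
}

public static class FeatureExtractor
{
    public static HandFeatures ExtractFeatures(Hand hand)
    {
        var f = new HandFeatures
        {
            PalmPosition = hand.PalmPosition,
            PalmNormal = hand.PalmNormal,
            PalmToTipDistances = new List<float>(),
            AdjacentTipDistances = new List<float>()
        };
        List<Finger> fingers = hand.Fingers;
        for (int i = 0; i < fingers.Count; i++)
        {
            // Distance between the palm and each fingertip.
            f.PalmToTipDistances.Add(hand.PalmPosition.DistanceTo(fingers[i].TipPosition));
            // Relative distance between adjacent fingertips.
            if (i > 0)
                f.AdjacentTipDistances.Add(
                    fingers[i].TipPosition.DistanceTo(fingers[i - 1].TipPosition));
        }
        return f;
    }
}
```

These distance features feed directly into the classification step described next.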

Gesture Classification
The extracted features are used to classify the gesture into different categories, including hand translation, hand rotation, index-finger key tapping, swiping, and hand occlusion.

Gesture Recognition
Once the hand gesture is recognized, the corresponding movement is activated. For example, if the gesture is recognized as an "opposable thumbs" grab, the hand can grab any 3D object in the holographic display.
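A minimal sketch of such a recognition check (using the Leap C# SDK's `Hand.GrabStrength` property; the 0.8 threshold is our illustrative choice, to be tuned empirically, not a value from the SDK):

```csharp
using Leap;

public static class GrabDetector
{
    // GrabStrength runs from 0 (open hand) to 1 (closed fist);
    // 0.8 is an illustrative threshold for the "opposable thumbs" grab.
    const float GrabThreshold = 0.8f;

    // Returns true when the tracked hand is judged to be grabbing,
    // which then activates the grab movement on the 3D object.
    public static bool IsGrabbing(Hand hand)
    {
        return hand.GrabStrength > GrabThreshold;
    }
}
```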

3D Human Anatomy Processing
As mentioned in Section 4.1, 3D human anatomy processing works in parallel with the gesture recognition process. It starts with 3D anatomy loading, followed by 3D anatomy deformation and rendering, before the holographic display projection.

3D Anatomy Loading
The 3D human anatomy, from head to toe, is loaded for constructive play in the system. The anatomy is categorized into parts and stored in object (.obj) files. From the interface, users select the part of the anatomy they plan to learn: brain anatomy, heart anatomy, lung anatomy, and others. Once the selected 3D anatomy is loaded, users can manipulate and assemble the objects using different interaction modes, including rotation, translation, directional scaling, uniform scaling, grabbing, and free manipulation through the controller.

3D Anatomy Deformation
The manipulated 3D anatomy objects then go through a deformation process to update their shape. The amount of deformation is determined by the movement of the hand projected onto the stated axis; for example, a 3D anatomy object is scaled up (deformed) along the Y axis if the hand is detected moving increasingly along the Y axis. Furthermore, users are free to assemble the 3D anatomy objects in the practicing scene mode.
In the practicing scene, the 3D anatomy parts are presented separately. Users can grab (via recognized grabbing gestures) the object parts, e.g., the frontal, parietal, and temporal lobes in the brain anatomy, with two hands, separately or simultaneously, and assemble them into one. The parts are then merged into a single object, determined by the hand transformation and the local and world coordinate systems. Note that one-hand and two-hand grabs require different transformation calculations.
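The axis-projected scaling described above can be sketched as a Unity `MonoBehaviour` (the `sensitivity` factor and the way the palm position is delivered to `OnHandMoved` are our assumptions, not details from the paper):

```csharp
using UnityEngine;

// Scales the attached anatomy object along Y in proportion to the
// hand movement projected onto the Y axis (illustrative sketch).
public class AxisScaleDeformer : MonoBehaviour
{
    [SerializeField] float sensitivity = 0.01f; // scale units per unit of hand motion (illustrative)
    Vector3 lastHandPos;
    bool tracking;

    // Call each frame with the current palm position in world space.
    public void OnHandMoved(Vector3 handPos)
    {
        if (tracking)
        {
            float deltaY = handPos.y - lastHandPos.y;  // projection onto Y
            Vector3 s = transform.localScale;
            s.y = Mathf.Max(0.1f, s.y + deltaY * sensitivity); // clamp to avoid collapse
            transform.localScale = s;
        }
        lastHandPos = handPos;
        tracking = true;
    }
}
```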

3D Anatomy Rendering
After 3D anatomy deformation, the new anatomy shape is rendered in the Unity engine, and the rendered shape is updated in the Updating Object Position stage.
3D human anatomy processing is a cyclic process: whenever a new gesture is detected interacting with the 3D anatomy objects, deformation is executed again, followed by the rendering process.

Holographic Display Projection
A single-view holographic display, also known as an aquarium holographic display (see Figure 5), with the following specification is adopted in our study:
- 38" x 20" x 22"
- Hologram glass
- Wooden casing with black lamination finish
- Full HD 1920 x 1080 LCD screen tablet
- Supports MPEG-1, MPEG-2, MPEG-4, and AVI video
- AC 100-240 V power adapter
The 3D human anatomy model from our system, rendered on a black background, is portrayed on this holographic display. High-definition (HD) 3D objects are thus shown and reflected through a specially coated glass, also known as optic glass. This glass, placed diagonally at a particular angle underneath the LCD screen, creates the illusion of the 3D anatomy objects floating as a hologram inside the display.

Current Development
In this study, Leap Motion is used as the gesture detection device, and it requires two main SDK packages: the Core package and the Interaction Engine package by Leap Motion, integrated into Unity with C# as the programming language.
The integration of Leap Motion in Unity involves a few stages, as shown in Figure 7. Both packages are required in Unity for the interactivity features. Core contains the fundamental assets for bringing Leap hand data into the Unity project; its plugin handles all the work of connecting to the Leap Motion service running on the platform and supplying hand data from the sensor to the application. The Core version used in this project is 4.3.4, which is compatible with Interaction Engine version 1.1.1.
The LMC is placed in the tracking area, where the user holds their hands over the device so it can detect them and connect to the program in Unity. The controller tracks the properties of the hand gesture position and all joints of each finger, and gesture recognition takes place at this stage. The positional information of every finger movement is converted into various data manipulations to build interactive instructions. Each data frame received from the LMC contains a snapshot of the user's hands at the current time, providing data up to 200 times per second [24]. These gestures are then interpreted as 3D object position, rotation, and deformation, as explained in Section 4.2.
In detail, the Interaction Engine requires a few components: a Camera and a LeapServiceProvider as input, a HandModelManager to connect the interactive objects, and a hand model as the interactive object. The InteractionManager receives FixedUpdate callbacks from Unity and performs the internal logic that makes interactions possible, including retrieving and updating hand/controller data and interaction object data from the Leap Motion device. Each InteractionController handles the detailed interactions with 3D objects, such as common gestures like picking up an object, touching it, hitting it, or moving near it. The object can be a 3D hand model (an InteractionHand) or a controller for the 3D object being manipulated. In the Interaction Engine hierarchy, an InteractionController is placed under the InteractionManager. An interaction object is a GameObject with an InteractionBehaviour attached, which requires one Rigidbody and one Collider.
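The component requirement above can be sketched as a small setup helper (the `InteractionBehaviour` type and `Leap.Unity.Interaction` namespace are from the Interaction Engine package; the `MakeGraspable` helper and its defaults are our own illustrative choices):

```csharp
using UnityEngine;
using Leap.Unity.Interaction;

public static class InteractionSetup
{
    // Turns an anatomy piece into an interaction object by ensuring it
    // carries the one Rigidbody and one Collider that InteractionBehaviour requires.
    public static InteractionBehaviour MakeGraspable(GameObject piece)
    {
        if (piece.GetComponent<Collider>() == null)
            piece.AddComponent<MeshCollider>().convex = true; // physics grabs need a convex collider

        var rb = piece.GetComponent<Rigidbody>();
        if (rb == null)
            rb = piece.AddComponent<Rigidbody>();
        rb.useGravity = false; // pieces float in the hologram scene (our choice)

        var ib = piece.GetComponent<InteractionBehaviour>();
        if (ib == null)
            ib = piece.AddComponent<InteractionBehaviour>();
        return ib;
    }
}
```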
To summarize, the InteractionManager, InteractionBehaviour, InteractionController, and InteractionHand components are the basic interaction components of Leap Motion. Once the Core and Interaction Engine packages are imported into Unity, the Unity physics timestep and gravity need to be updated. Unity's physics engine has a fixed timestep, which is not always in sync with the graphics frame rate, so it is very important to set the physics timestep equal to the rendering frame rate. In the development process, Leap Motion responds to a right-handed Cartesian coordinate system: the X and Z axes extend horizontally and the Y axis vertically [25]. This coordinate system represents how the Leap Motion sensor reports gestures; hands are tracked at positive Z values, and Leap Motion provides the preprocessed data through its API, which contains the detailed information required to build the hand and finger model in Unity. The Leap Motion device embeds three infrared light emitters and two cameras as sensors to capture hand gestures; it detects palm and finger movements and sends the data to the API. The Leap Motion SDK can track the position, direction, and characteristics of palms and fingers. All tracked data are assembled into a series of images identified as frames, and each frame retrieved from Leap Motion contains information about the hand and finger entities detected in that snapshot of movement [25].
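The timestep alignment mentioned above can be done in one line at startup (the 90 fps target is our illustrative value, not one stated in the paper):

```csharp
using UnityEngine;

public class PhysicsSetup : MonoBehaviour
{
    [SerializeField] float targetFrameRate = 90f; // illustrative rendering rate

    void Awake()
    {
        // Align Unity's fixed physics timestep with the rendering frame
        // rate so physics-driven interactions stay in sync with graphics.
        Time.fixedDeltaTime = 1f / targetFrameRate;
    }
}
```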
In this paper, we demonstrate an interaction between the user and a 3D model of the brain, in which the user can touch, rotate, and scale it up and down based on the Leap Motion Cartesian system and finger motion gestures (see Figure 8). For the construction play, the Collider, Rigidbody, and isKinematic features of Unity 3D are combined to create a fixed position for each organ piece while the student plays the puzzle/construction activity. Specifically, we give the learner a clue (a green object) indicating what to place next in the puzzle (see Figure 9). This task is accomplished by detecting Leap Motion gestures, so that the user can grasp a piece and move it onto the green clue object in the puzzle. The system then uses a Collider and isKinematic in Unity 3D to detect when the two objects (the green object and the held piece) collide, and a C# script validates the placement in its OnTriggerEnter function: if the piece is placed correctly, a celebration sound plays, the object is placed automatically at the position stored in the database, and the next clue is given; if the placement is wrong, a failure sound plays and a push-back animation moves the piece away. Figure 9: Placing the correct brain piece, after which the next clue is given.
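A minimal sketch of this placement check (the field names, the tag-based matching, and the push-back force are our assumptions about how the described OnTriggerEnter logic could be wired in Unity; the paper's actual script may differ):

```csharp
using UnityEngine;

// Attached to the green "clue" object. When an organ piece enters its
// trigger collider, the piece either snaps into the stored position
// (correct) or is pushed back with a failure sound (incorrect).
public class PuzzleSlot : MonoBehaviour
{
    [SerializeField] string expectedPieceTag;  // e.g. "FrontalLobe" (illustrative)
    [SerializeField] Transform snapTarget;     // stored correct position
    [SerializeField] AudioSource successSound;
    [SerializeField] AudioSource failSound;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag(expectedPieceTag))
        {
            // Correct piece: snap into place and freeze it (isKinematic).
            other.transform.SetPositionAndRotation(snapTarget.position, snapTarget.rotation);
            other.attachedRigidbody.isKinematic = true;
            successSound.Play();
            // ...reveal the next clue here.
        }
        else
        {
            // Wrong piece: push it back away from the slot.
            Vector3 away = (other.transform.position - transform.position).normalized;
            other.attachedRigidbody.AddForce(away * 2f, ForceMode.VelocityChange); // illustrative push
            failSound.Play();
        }
    }
}
```

Marking the snapped piece kinematic keeps it fixed against further collisions, matching the fixed-position behavior described above.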

Conclusion
This study integrates LMC technology, as a hand and finger motion controller, with the Unity 3D engine for the case of learning a human anatomy course. The combination is displayed on a holographic display platform so that users can view, learn, and interact with the model as a hologram. The interactivity includes moving, scaling, rotating, and construction play to enrich user engagement. To maintain student learning satisfaction and engagement over the long term, many interactions and game-based activities should be included. Therefore, in the future, we will add more GBL-based variety to the complete human anatomy model, including timers, rewards, levels, and more.