International Journal of Scientific & Engineering Research, Volume 2, Issue 7, July 2011

ISSN 2229-5518

Latest Trends, Applications and Innovations in Motion Estimation Research

Mr. P. Vijaykumar, Aman Kumar, Sidharth Bhatia

Abstract— Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another, usually between adjacent frames in a video sequence, and is therefore a useful tool for estimating the motion of any object. Motion estimation has conventionally been used in video encoding, but nowadays researchers from fields other than video encoding are turning to motion estimation to solve real-life problems in their respective domains. This paper reviews some of the innovative uses of motion estimation algorithms in applications such as bionics, psychological studies, cinematography, medicine, security and space science.

Index Terms— breathing estimation, cinematography, gesture recognition, hand posture analysis, lunar lander, marker-less analysis, motion estimation, phase correlation, plant root growth, robotic heart surgery, three step search, variable shape search

—————————— • ——————————

1 INTRODUCTION

The field of motion estimation is seeing substantial usage in non-conventional areas nowadays, some of which were not even thought of a few years ago. These newfound uses of motion estimation are opening newer and wider options for researchers to apply these algorithms and build applications that are far more reliable and efficient than their predecessor techniques. These techniques are reaching into avenues that are very new to human knowledge. This paper reviews these newfound applications of motion estimation, which are changing the world of technology by providing more accurate and efficient solutions to long-standing problems. The applications discussed in this paper are based on new motion estimation algorithms which reduce the complexity, resource requirements and response time of the systems they are used in. Some of these algorithms are fast block matching algorithms, optical flow estimation, phase correlation, etc. Using these algorithms researchers have developed new applications such as traffic movement tracking, studying plant root growth, landing modules of rovers, hand posture analysis, human posture analysis, gesture-controlled gaming, lip movement for user authentication, cinematography, robotic heart surgery and breathing motion estimation.

The paper consists of two sections. The first section briefly explains some motion estimation algorithms of low computational complexity. The second section deals with the applications of these motion estimation algorithms in new fields and directions.

————————————————

Mr. P. Vijaykumar is currently working as an assistant professor (Sr. Grade) in the Department of Electronics and Communication Engineering, SRM University, India. E-mail: vijay_at23@rediffmail.com

Aman Kumar is currently pursuing a master's degree program in Embedded Systems Technology at SRM University, India. E-mail: amansaddress@gmail.com

Sidharth Bhatia is currently pursuing a master's degree program in Embedded Systems Technology at SRM University, India. E-mail: bhatiasidharth.89@gmail.com

2 LESS COMPUTATIONALLY COMPLEX MOTION ESTIMATION ALGORITHMS

This section discusses some motion estimation algorithms which can either reduce the computational complexity, improve the accuracy, or trade one off against the other. The algorithm can be selected according to the computational power available, the time available for computing, the amount of accuracy required, the application being developed, or any combination of these factors. Some of the algorithms that can be used to reduce the computational complexity, in varying amounts according to the algorithm selected, are: Fast Full Search Block Matching Algorithm [1], Three Step Search [2], New Three Step Search [3], Simple and Efficient Search [4], Four Step Search [5], Diamond Search [6], Adaptive Rood Pattern Search [7], New Fast and Efficient Two-Step Search Algorithm [8], Simplified Block Matching Algorithm for Fast Motion Estimation [9], Fast Block-Based True Motion Estimation Using Distance Dependent Thresholds [10], Sub-Pixel Motion Estimation Using Phase Correlation [11], Variable Shape Search [12], etc. A few of these algorithms are discussed briefly in this section.

2.1 Three Step Search (TSS)

This technique [2] reduces the number of search positions evaluated while still covering a large search range, which saves computational power. The algorithm is used in the MPEG video standard. The first step is the same as in basic block matching, but with a larger search step (for example, greater than four pixels). In the next step, the centre of the current block is moved to the position of the best match from the previous step and matching is performed again, but with a smaller search step, usually half the original size. This step can be repeated as many times as desired until one is satisfied with the result.
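As an illustration only (not code from [2]), the following Python/NumPy sketch implements the search pattern just described; the 16×16 block size, the initial step of four pixels and the sum-of-absolute-differences matching cost are our assumptions.

    import numpy as np

    def sad(block, candidate):
        # Sum of absolute differences between two equally sized blocks
        return np.abs(block.astype(np.int32) - candidate.astype(np.int32)).sum()

    def three_step_search(ref, cur, top, left, block=16, step=4):
        # Estimate the motion vector of one block of `cur` within `ref`.
        # ref, cur : 2-D grayscale frames (previous and current)
        # top,left : top-left corner of the block in the current frame
        # step     : initial search step, halved after each stage
        target = cur[top:top + block, left:left + block]
        best_dy, best_dx = 0, 0
        while step >= 1:
            best_cost = None
            # evaluate the centre and the 8 surrounding points at the current step
            for dy in (-step, 0, step):
                for dx in (-step, 0, step):
                    y, x = top + best_dy + dy, left + best_dx + dx
                    if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                        continue
                    cost = sad(target, ref[y:y + block, x:x + block])
                    if best_cost is None or cost < best_cost:
                        best_cost, move = cost, (dy, dx)
            best_dy, best_dx = best_dy + move[0], best_dx + move[1]
            step //= 2  # halve the search step, as in TSS
        return best_dy, best_dx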


2.2 Variable Shape Search (VSS)

The variable shape search [12] mainly comprises three phases: (1) the big diamond search [6], to fit the directional centre-biased characteristics of real-world video sequences; (2) the directional hexagon search, to identify a small region where the best motion vector is expected to be located; and (3) the small diamond search, to select the best motion vector in the located small region. In step 2, the VSS algorithm adopts two kinds of asymmetrical hexagonal search shapes with different directionality, i.e., a horizontal hexagon and a vertical hexagon. The Mean Absolute Difference (MAD), rather than the Mean Square Error (MSE), is used as the matching criterion to reduce the block-matching computation in practice.
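For reference, the MAD and MSE matching criteria mentioned above can be written as follows (a trivial sketch; the function names are ours):

    import numpy as np

    def mad(block_a, block_b):
        # Mean Absolute Difference: cheaper than MSE because it avoids squaring
        return np.mean(np.abs(block_a.astype(np.float64) - block_b.astype(np.float64)))

    def mse(block_a, block_b):
        # Mean Square Error, shown only for comparison with MAD
        return np.mean((block_a.astype(np.float64) - block_b.astype(np.float64)) ** 2)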

2.3 Phase Correlation

Phase correlation motion estimation techniques [11] operate in the frequency domain and are commonly based on the technique of cyclic correlation [13]. The method in [11] obtains sub-pixel motion estimates of high accuracy by fitting curves [14] to the phase correlation surface, introducing a variable separable fit using a modified sinc function. Sub-pixel motion estimation is otherwise commonly achieved by straightforward extensions of the basic integer-pixel block-matching algorithm [15], mainly through the use of bilinear interpolation. Fitting prototype functions in the vicinity of the maximum peak of the phase correlation surface, located at (km, lm), is an elegant solution that circumvents many of the problems associated with interpolation [11].
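A minimal sketch of frequency-domain motion estimation along these lines is given below. It estimates a global shift from the phase correlation surface and refines the peak with a simple parabolic fit; the modified-sinc fitting of [11] is not reproduced here, and the sign convention depends on which frame is taken as the reference.

    import numpy as np

    def phase_correlation_shift(frame_a, frame_b):
        # Estimate a global translation between two frames via phase correlation.
        # Returns (dy, dx); sub-pixel refinement here is a simple parabolic fit.
        def parabolic_offset(y_m1, y_0, y_p1):
            # 1-D three-point parabolic interpolation around a maximum
            denom = y_m1 - 2.0 * y_0 + y_p1
            return 0.0 if denom == 0 else 0.5 * (y_m1 - y_p1) / denom

        Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12            # normalised cross-power spectrum
        surface = np.real(np.fft.ifft2(cross))    # phase correlation surface
        h, w = surface.shape
        py, px = np.unravel_index(np.argmax(surface), surface.shape)
        dy = py + parabolic_offset(surface[(py - 1) % h, px], surface[py, px],
                                   surface[(py + 1) % h, px])
        dx = px + parabolic_offset(surface[py, (px - 1) % w], surface[py, px],
                                   surface[py, (px + 1) % w])
        # shifts beyond half the frame size wrap around (cyclic correlation)
        if dy > h / 2.0:
            dy -= h
        if dx > w / 2.0:
            dx -= w
        return dy, dx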

3 INNOVATIVE APPLICATIONS AND DESIGNS USING MOTION ESTIMATION ALGORITHMS

This section discusses some of the latest applications of motion estimation algorithms.

3.1 Motion Estimation in Psychological Studies

3.1.1 In Gesture Recognition

One of the fields where motion estimation is most widely implemented is gesture recognition; supporting arguments can be found in [16]. Gesture recognition is used to create an interactive user interface for the mobile game REXplorer, which guides tourists through the city of Regensburg, Germany using a Nokia N70 mobile phone and a Bluetooth GPS receiver encased in a single casing. To support this mode of interaction, a gesture recognition system for camera-based motion has been designed. Camera-based motion data is, however, problematic to use for gesture recognition due to the low quality of the data, caused by a low sample rate combined with high noise. To solve this problem the authors designed a gesture recognition algorithm based on state machines, modelled from a gesture rule set, that parse the motion data and interpret the gesture the user has performed.
The approach is basically divided into three steps:
1. Obtaining motion data via motion estimation.
2. Providing “live” graphical trace motion data for user interface.
3. Evaluation of the accumulated motion information by the gesture recognition algorithm.

Motion Estimation

A block matching algorithm is used here. The last two frames of video input are compared. Both frames are subsampled to roughly 1/8th of their former size. The MSE (Mean Square Error) is computed between the old frame and shifted versions of the new frame, over a shift range of 3 × 3 pixels. The candidate with the lowest MSE then delivers the motion vector.
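This camera-motion step can be sketched as an exhaustive MSE search over a small shift range on heavily subsampled frames. The helper names, the block-averaging decimation and the interpretation of "3 × 3 pixels" as a 3 × 3 grid of candidate shifts are our assumptions, not details from [16].

    import numpy as np

    def subsample(frame, factor=8):
        # Roughly 1/8 scale by block averaging (any decimation scheme would do)
        h = frame.shape[0] // factor * factor
        w = frame.shape[1] // factor * factor
        return frame[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def camera_motion(prev_frame, new_frame, search=1):
        # Exhaustive MSE search over (2*search+1)^2 candidate shifts
        # between two subsampled grayscale frames; wrap-around via np.roll
        # is accepted here for brevity.
        a, b = subsample(prev_frame), subsample(new_frame)
        best = None
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
                err = np.mean((a - shifted) ** 2)
                if best is None or err < best[0]:
                    best = (err, (dy, dx))
        return best[1]  # motion vector with the lowest MSE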

User Interface

Feedback is provided to the users by rendering a trace of the current gesture's progress on the device's screen. In this way the user can see the path of the gesture as perceived by the device, and can take corrective measures in the gesture's path wherever required.

Gesture Recognition

The gesture recognition algorithm incrementally matches the input to a gesture by verifying that the entered motion data reaches certain predefined distance offsets. The algorithm determines which predefined rule set is matched best by the entered motion data by using a state machine. A rule set defines a sequence of distance offsets that must be fulfilled by the trace of the entered motion data.
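A hypothetical sketch of such a rule-set state machine is shown below; the encoding of a rule set as a list of (dx, dy) distance offsets and the tolerance test are our assumptions, not the authors' implementation.

    def matches_rule_set(motion_vectors, rule_set, tolerance=0.5):
        # Incrementally match accumulated motion against a sequence of
        # distance offsets; returns True once every offset has been reached.
        state = 0                 # index of the next offset to fulfil
        acc_x = acc_y = 0.0       # motion accumulated since the last fulfilled offset
        for dx, dy in motion_vectors:
            acc_x += dx
            acc_y += dy
            goal_x, goal_y = rule_set[state]
            # the offset counts as reached when accumulated motion is close enough
            if abs(acc_x - goal_x) <= abs(goal_x) * tolerance + 1 and \
               abs(acc_y - goal_y) <= abs(goal_y) * tolerance + 1:
                state += 1
                acc_x = acc_y = 0.0
                if state == len(rule_set):
                    return True   # all offsets fulfilled: gesture recognised
        return False

    # e.g. a crude "L" gesture: move right, then move down (purely illustrative)
    L_GESTURE = [(20, 0), (0, 20)]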

3.1.2 In Hand Posture Analysis

The human hand model is first analysed with 27 degrees of freedom (DOF) and its constraints, and then reduced to 12 DOFs. Using this model and its constraints, an algorithm is developed [17] to retrieve the 3D hand posture using eight feature points: the wrist, the tips of the fingers and the thumb, and the metacarpophalangeal joints of the middle finger and the thumb. Colour markers are used to identify these eight points. The feature points are extracted from the silhouette contour of the outstretched hand.

A pentagram approach is used wherein the hand's shape is derived from a predefined pentagon-shaped template whose parameters are calculated to find the actual present posture of the hand. Here the technique of posture estimation is used, which is a sub-branch of the motion estimation technique.

3.2 Motion Estimation in Markerless Analysis of Human Motion

A 3D object model provides prior knowledge for the system, using two free-form surface patches with two kinematic chains [18]. Each kinematic chain consists of seven joints (three for the shoulder, two for the elbow and two for the wrist) and one back segment joint to the torso surface patch. The input images are taken using four cameras. The motion tracking system consists of three main components, namely silhouette extraction, matching and pose estimation.
The silhouette extraction is done using image segmentation, i.e. estimating the boundaries of objects in an image. The authors use a method of forming a zero-level line which marks the boundary between two regions using threshold values. The pixels are assigned to the most probable region according to the Bayes rule. Using mathematical calculations based on the above conditions, the body silhouette is extracted. The silhouette is initialised with the pose of the last frame.

For pose estimation, a set of point correspondences is assumed between 4D (homogeneous) model points and 3D image points. Each image point is reconstructed to a Plücker line [19] with a unit direction and moment. The reconstructed Plücker lines are combined with the screw representation for rigid motions and a gradient descent method is applied. The equations thus formed are solved using the Householder algorithm. To estimate the group action of the whole posture from the estimated twist, the Rodrigues formula is used.

3.3 Motion Estimation in Space Science

To estimate the relative motion of the lander with respect to the preselected landing site, passive video sequence analysis is used. The overall system [20] for this purpose contains only an off-the-shelf camera and a processing unit with some data buses. The video image sequence analysis method is based on the Continuous Wavelet Transform (CWT), which is used to estimate the apparent 2-D motion of the lander accurately and efficiently. The 3-D relative motion of the lander can then also be estimated accurately and efficiently. An onboard navigation camera provides the digital video image sequence, which is used as the input for the motion estimation block. The video sequence is analysed at each time sample using the CWT, and the 2-D image motion is thus estimated. A perspective projection camera model provides the geometric mapping used to map the 2-D image motion onto the 3-D relative motion of the lander. The output is the 3-D dynamic motion of the lander, which is sent to the Attitude Determination and Control System (ADCS) to perform the lander's attitude control task. The algorithm [20] can be explained as follows:
Step 1: Fourier Transform:
The input video image sequence is transformed into Fourier domain first to perform the multiplication with a chosen wavelet.
Step 2: Wavelet Transform:
Wavelet transform is performed in the Fourier domain. This step finds the energy of the wavelet transform which is used for the Local Maximum Search.
Step 3: Local Maximum Search:
This step finds the local maximum in the energy map of the first video frame. Starting from this first energy map, the corresponding local maximum is then found in the wavelet transform energy of each subsequent video frame. This allows a motion trajectory of the lander to be built, according to the intensity difference from the reference video frame, for the 2-D apparent motion buried in the video image sequence.
Step 4: Geometric Mapping:
Using the geometric projection of the camera and the relation between the lander's 3-D relative motion and the 2-D apparent motion found in the above steps, the trajectory of the lander is mapped onto the 3-D motion of the lander.
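Steps 1-3 can be sketched, with heavy simplification, as follows: each frame is transformed to the Fourier domain, weighted by a radial Gaussian band-pass standing in for the chosen wavelet (which [20] does not specify here), and the local maximum of the resulting energy map is tracked from frame to frame within a small window around the previous peak. All parameters are illustrative.

    import numpy as np

    def bandpass_energy(frame, centre=0.1, width=0.05):
        # Steps 1-2: FFT the frame, weight it with a radial Gaussian band-pass
        # (a stand-in for the chosen wavelet), and return the transform energy.
        F = np.fft.fft2(frame)
        h, w = frame.shape
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        radius = np.sqrt(fy ** 2 + fx ** 2)
        kernel = np.exp(-((radius - centre) ** 2) / (2 * width ** 2))
        return np.abs(np.fft.ifft2(F * kernel)) ** 2

    def track_peak(frames, window=15):
        # Step 3: follow the local maximum of the energy map through the
        # sequence, restricted to a window around the previous peak,
        # giving a 2-D trajectory of the apparent motion.
        trajectory, prev = [], None
        for frame in frames:
            e = bandpass_energy(frame)
            if prev is None:
                peak = np.unravel_index(np.argmax(e), e.shape)
            else:
                y0, x0 = prev
                ys = slice(max(0, y0 - window), min(e.shape[0], y0 + window + 1))
                xs = slice(max(0, x0 - window), min(e.shape[1], x0 + window + 1))
                sub = e[ys, xs]
                dy, dx = np.unravel_index(np.argmax(sub), sub.shape)
                peak = (ys.start + dy, xs.start + dx)
            trajectory.append(peak)
            prev = peak
        return trajectory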

3.4 Motion Estimation in Biology

Motion estimation nowadays is also finding its use in biological studies; an example is shown in [21]. A new computerised bioimaging approach has been developed which can measure biological growth in less time than its predecessor techniques. The advantage of this approach is that it does not involve any invasive technique, and by using it one can achieve unprecedented detail. The approach involves two steps and exploits the orientation of the structure tensor, after which matching is done. The method is used to compute motion fields for biophysical studies of cellular processes that involve growth or motility, and to measure root growth in Arabidopsis thaliana. The resolutions achieved by this procedure, one micron per pixel spatially and 10 s temporally, have not previously been possible. According to the authors, the growth zone of the root can be divided into two regions: an apical region where velocity rises gradually with position, and a sub-apical region where velocity rises steeply with position. Within each zone the velocity increases almost linearly with position, and the transition between the two zones is abrupt. This pattern is found in the roots of many plants, for example arabidopsis, tomato, lettuce and timothy. The approach can also be used to measure other biological motions with slight modifications. The registered velocity profile along the medial axis of the plant root is calculated, with the velocity measured relative to the base of the root.
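The structure-tensor step mentioned above can be sketched as follows; the smoothing scale and the use of scipy.ndimage are our choices, not details from [21].

    import numpy as np
    from scipy import ndimage

    def structure_tensor_orientation(image, sigma=2.0):
        # Local orientation from the 2-D structure tensor, the quantity the
        # first step of such a bioimaging pipeline exploits; parameters are
        # illustrative only.
        img = image.astype(np.float64)
        gy, gx = np.gradient(img)
        # tensor components, smoothed over a local neighbourhood
        jxx = ndimage.gaussian_filter(gx * gx, sigma)
        jyy = ndimage.gaussian_filter(gy * gy, sigma)
        jxy = ndimage.gaussian_filter(gx * gy, sigma)
        # dominant orientation of each pixel's neighbourhood, in radians
        return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)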

3.5 Motion Estimation in Heart Surgery

Developments in medical science have led to new surgical techniques such as off-pump heart surgery, which reduces morbidity, the need for blood transfusion and the risk of renal failure compared with on-pump coronary bypass surgery. The technique involves stabilizing a part of the heart and then allowing the surgeon to operate on the organ without the assistance of a heart-lung machine. Despite the efforts at stabilization, some residual motion remains, which demands the surgeon's full attention to monitor the movement of the cardiac muscles.

To decrease this manual effort, many proposals have been made to introduce robotic surgery techniques. A system of this kind [22] would consist of a camera which plays the role of the measurement unit, a robot with actuators to move the surgical instruments in synchronization with the heart movements, and finally control and image stabilization units. The idea is to allow the machine to take over the duties of simultaneous vision and action coordination and permit the surgeon to dedicate his full attention to the complexity of the actual surgical procedure.

A system such as the one discussed above would need to be highly accurate, flexible in the sense that it should be able to adapt to different patient types, and able to operate in a real-time environment with the desired reliability. Such a system would then obviously need a precise measurement and motion estimation technique for each point of interest on the heart surface. Unfortunately these demands conflict with the properties of low computational complexity and high speed, which are also required in a real-time system.

A number of techniques exist to estimate the heart motion, but the one of most interest to the designers is modelling the heart as a physical elastic body. This approach applies partial differential equations which describe the characteristics of the heart, and solving them gives very accurate results. It comes, however, with its own set of problems: complex geometry models demand a lot of computational time, and numerical solution of the PDEs with solvers such as finite element (FEM), finite volume (FVM) or finite difference (FDM) methods generates delays in the system response time.

In [22] a novel framework for estimating the motion of the heart is presented, involving a robotic system. The main achievement is the ability to predict and reconstruct the heart surface motion based on a new physical model described by a distributed-parameter system, whose spatial and temporal decomposition is carried out by meshless methods. The purpose of the suggested model is not only to realistically reconstruct the motion of the heart but at the same time to reduce the computation time by simplifying the complex geometry of the heart.

The first step involves modelling a small heart area by a thin plate, followed by comparison with the exact solution of the thin plate's deformation. The idea here is to test the accuracy of the non-symmetric collocation meshless method employed for converting the heart surface model into lumped-parameter form. The collocation points are initialized using the coordinates of the discrete measurement points attached to the heart phantom.
The designers have tried to reconstruct the heart surface at non-measurement points using both the physical model and the measurements obtained by the camera system. Importantly, the uncertainties that arise from noisy measurements and those that occur in the physical model are also considered.

This setup was evaluated with model-based estimation by means of a Kalman filter in an experiment involving a pressure-regulated heart phantom. It is worth noting that for high precision of the model, exact information about the control function, boundary conditions and parameters of the distributed system is needed. Thus, for the purpose of adapting the system to different patients, the uncertainties of the force affecting the heart surface, the modelling parameters and the approximation errors should be taken into consideration. In conclusion, the derived model should be used with a combination of parameter and state estimation approaches in order to reconstruct the deflection of the heart surface at any point.
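As a strongly simplified illustration of the model-based estimation idea (not the distributed-parameter meshless model of [22]), the sketch below tracks one noisy heart-surface coordinate with a constant-velocity Kalman filter; all parameters are assumptions.

    import numpy as np

    def kalman_track_point(measurements, dt=0.04, q=1e-2, r=1.0):
        # Track one heart-surface coordinate with a constant-velocity Kalman filter.
        # measurements : noisy 1-D positions from the camera, one per frame
        # Returns the filtered position estimates.
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
        H = np.array([[1.0, 0.0]])              # only position is measured
        Q = q * np.array([[dt ** 4 / 4, dt ** 3 / 2],
                          [dt ** 3 / 2, dt ** 2]])
        R = np.array([[r]])
        x = np.array([[measurements[0]], [0.0]])
        P = np.eye(2)
        estimates = []
        for z in measurements:
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update with the camera measurement
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
            estimates.append(float(x[0, 0]))
        return estimates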

3.6 Motion Estimation Analysis for User Authentication Using Lip Reading

Automatic speech recognition (ASR) has become an integral part of security systems in today's world, with a high degree of accuracy. Obtaining high accuracy from clean speech using state-of-the-art technology is simpler than working in a noisy environment, especially with a large vocabulary. Increasing the robustness of such systems in noisy environments is a major issue in ASR. A proposed solution [23] to this problem is a system that combines both auditory and visual information, which has been demonstrated to be superior to audio-only systems. Most such systems use speech recognition along with visual features such as lip information. Lip reading systems can be utilized in many places, such as aids for the hearing impaired and noise-rich surroundings where speech is not easily recognizable.
The main challenge in the above system is feature extraction. Typical methods for this are the Discrete Cosine Transform (DCT), Principal Component Analysis (PCA), the Discrete Wavelet Transform and Linear Discriminant Analysis. Other techniques include motion analysis of image sequences representing lip movement while uttering speech.
In [23] the authors propose using a motion estimation technique to perform robust speech recognition from lip reading alone. Visual features are extracted from the sequence of images and are utilized in model training and recognition. A block matching algorithm is used to obtain visual features without prior information about the lip location. The proposed approach can be divided into two phases. The first phase is training, which results in feature extraction from image sequences representing the different lip movements. The second phase is recognition, where a new utterance is compared to the output of the training phase.
The authors propose a visual speech recognition scheme using motion estimation analysis of lip movement. The visual features were used in both training and testing, and the results show that the adopted methodology produces a 20% improvement when the integral information of the motion vectors, extracted by a full-search block-based motion estimation technique, is used in the training phase rather than the three step search (3SS).

3.7 Motion Estimation in Cinematography Using Inertial-Optical Techniques

The IOME Cam [24] (inertial-optical motion estimating camera) is designed to integrate optical motion estimation techniques and inertial sensor readings to determine the camera motion during a captured video sequence. It employs a Kalman filter joint estimator to observe both the optical and inertial data outputs in order to accurately predict the camera motion parameters. It is worth noting that both the inertial and the optical data can in fact be used on their own to estimate motion; the reason they have been combined here is to overcome the inherent errors in both systems, since they can correct for each other's errors. A great deal of work has already been done in the field of optical motion estimation, but such a system's accuracy is limited by the lack of information available from the video stream. Errors in such estimators are caused by external motion, lighting, shadows and unpredictable camera motion, among other things. By introducing inertial sensors, which are independent of the visual data, the quality of the motion estimate can be improved. There are a number of reasons for being interested in a camera's motion during a video sequence. A system such as the one discussed above can have many exciting applications in diverse fields such as: (1) Salient Stills 2D modelling, (2) camera operator gesture recognition, (3) 3D modelling, (4) object-based media presentation and special effects, and (5) motion capture and playback.
The process of capturing inertial data is accomplished with two types of sensors: accelerometers and gyroscopes. These sensors are generally designed to be sensitive to motion in only one axis. Inertial navigation systems (INS) used in aircraft and other vehicles contain three mutually orthogonal accelerometers and three mutually orthogonal gyroscopes; such a device is capable of measuring motion in the full six degrees of freedom. Estimating motion using a video camera has been a problem in the computer vision field for a long time, and a variety of techniques exist, such as optical flow, which analyses the apparent motion of brightness patterns on the camera's image plane. Another approach, called feature tracking, traces the motion and properties of a set of visual features from frame to frame. Unfortunately such techniques are not fully robust, since they lack information or mismodel information in the video due to discontinuities and noise in the optical flow and other error sources. Inertial navigation systems have their own set of errors, such as the characteristic positional error drift, where the position errors grow quadratically with time. Because of the dead-reckoning approach used by INSs, the errors compound continuously, and the system discussed above corrects those errors with the help of the optical data. Image-based motion estimators are better suited for extended-period tracking, and this property can be utilized for baseline restoration of the inertial error drift. If visual cues are available occasionally, absolute position and orientation can be corrected from time to time. In turn, inertial positioning can prevent false positive estimates which occur during optical analysis of moving scenes.
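The complementary nature of the two sensing modalities can be illustrated with a deliberately simple fusion sketch: inertial dead reckoning that drifts, corrected whenever an optical position fix is available. The IOME Cam itself uses a joint Kalman estimator; the gain-based correction below is our simplification.

    def fuse_inertial_optical(accels, optical_fixes, dt=0.01, gain=0.2):
        # Dead-reckon position from accelerometer samples along one axis and
        # nudge it toward occasional optical position fixes (None when no
        # fix is available). Both input lists have the same length.
        pos, vel = 0.0, 0.0
        track = []
        for a, fix in zip(accels, optical_fixes):
            vel += a * dt          # inertial integration (drifts quadratically)
            pos += vel * dt
            if fix is not None:
                # optical data restores the baseline and bounds the drift
                pos += gain * (fix - pos)
            track.append(pos)
        return track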

3.8 Motion Estimation in Breathing Analysis

Estimating the motion of the lungs during lung cancer radiation therapy is an important area where reduction of the target position uncertainty can lead to optimal results: it can allow tumor dose escalation and improve the effectiveness of the treatment. Many approaches are under investigation [25], such as breath-hold treatment and gating. The disadvantage of these systems is that they all need spatio-temporal information about the movements which occur during breathing. Treatment planning should rely on 3D images and a patient-specific breathing thorax model encapsulating all the mechanical and functional information that is available. The images taken from 4D CT imaging should be supplemented with image analysis tools such as motion estimators and anatomical structure tracking methods. They can also be used to create a 4D model of the spatio-temporal trajectories of all the elements in the thorax. The displacement vector fields identified at the different breathing phases should be integrated into the treatment optimization; the targeting of the beam can then be done in an optimized manner in accordance with the motion. By instructing the patient to breathe using a visual guide cycle, the doctor can prevent the radiation from affecting normal tissue. With the help of B-spline deformable registrations, researchers have managed to quantify the impact of respiratory motion on the generated dose distributions. The dose delivered is directly related to the time of irradiation, which makes the motion of the tumors during the respiratory cycle all the more important, since it is desired that the beam remain targeted at the tumor for as long as possible. A number of approximations are used to modulate the weight of dose calculations from the exhale model toward the inhale model as breathing progresses, using time weights obtained via fluoroscopy on a given population of patients. Others have suggested extending these concepts further and using Dynamic Multi-Leaf Collimator respiratory motion tracking.
A major challenge in deformable motion estimation is the process of validating the resulting deformation fields, and very few standards exist for evaluating deformable motion estimation. Therefore in [25] a tentative evaluation framework is discussed which had focused on the deformable registration of the brains of different individuals. The goal of the authors in [25] is to compare different motion estimators on the basis of the temporal aspects of the observed motions. The main motivation behind the research is to be able to use deformable motion estimators with 4D scans to simulate radiation dose delivery inside moving and deforming organs for a given irradiation configuration. The authors propose a framework and criteria for incorporating temporal information to evaluate the accuracy of motion estimation methods. The main contributions of [25] are the setting up of test cases and a procedure to obtain test inputs, by identifying over 500 landmarks over four phases in 3 patients, and the proposal of spatio-temporal criteria to evaluate the predictions of landmark displacements through the respiratory cycle.

A study of three different motion estimation methods (two registration-based and one biomechanical-model-based) was conducted to illustrate the comparison framework. It also allowed the researchers to compare the accuracy of those methods and to point out some of their limitations.
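The landmark-based part of this evaluation can be sketched as a simple spatio-temporal error measure: the mean Euclidean distance between predicted and expert-identified landmark positions at each breathing phase (the array layout below is our assumption, not the criteria of [25]).

    import numpy as np

    def landmark_error_per_phase(predicted, reference):
        # predicted, reference : arrays of shape (phases, landmarks, 3) holding
        # the 3-D landmark positions at each breathing phase.
        diff = np.asarray(predicted, float) - np.asarray(reference, float)
        distances = np.linalg.norm(diff, axis=-1)   # (phases, landmarks)
        return distances.mean(axis=1)               # one mean error per phase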

4 CONCLUSION

By going through the above discussions we can clearly see that motion estimation is becoming a widespread tool for researchers from various backgrounds to create newer and more efficient applications that will help radically change the way long-persistent real-life problems are approached.

5 REFERENCES

[1] Yih-Chuan Lin and Shen-Chuan Tai, "Fast Full-Search Block Matching Algorithm for Motion-Compensated Video Compression," IEEE Transactions on Communications, vol. 45, no. 5, May 1997.

[2] Aroh Barjatya, "Block Matching Algorithms for Motion Estimation," DIP 6620, Spring 2004.

[3] Renxiang Li, Bing Zeng, and Ming L. Liou, "A New Three-Step Search Algorithm for Block Motion Estimation," IEEE Trans. Circuits and Systems for Video Technology, vol. 4, no. 4, pp. 438-442, August 1994.

[4] Jianhua Lu and Ming L. Liou, "A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation," IEEE Trans. Circuits and Systems for Video Technology, vol. 7, no. 2, pp. 429-433, April 1997.

[5] Lai-Man Po and Wing-Chung Ma, "A Novel Four-Step Search Algorithm for Fast Block Motion Estimation," IEEE Trans. Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 313-317, June 1996.

[6] Shan Zhu and Kai-Kuang Ma, "A New Diamond Search Algorithm for Fast Block-Matching Motion Estimation," IEEE Trans. Image Processing, vol. 9, no. 2, pp. 287-290, February 2000.

[7] Yao Nie and Kai-Kuang Ma, "Adaptive Rood Pattern Search for Fast Block-Matching Motion Estimation," IEEE Trans. Image Processing, vol. 11, no. 12, pp. 1442-1448, December 2002.

[8] Fang-Hsuan Cheng and San-Nan Sun, "New Fast and Efficient Two-Step Search Algorithm for Block Motion Estimation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, no. 7, October 1999.

[9] M. Ezhilarasan and P. Thambidurai, "Simplified Block Matching Algorithm for Fast Motion Estimation in Video Compression," Journal of Computer Science, vol. 4, no. 4, pp. 282-289, 2008, ISSN 1549-3636, Science Publications.

[10] Golam Sorwar, "Fast Block-Based True Motion Estimation Using Distance Dependent Thresholds," Journal of Research and Practice in Information Technology, vol. 36, no. 3, August 2004.

[11] V. Argyriou and T. Vlachos, "A Study of Sub-Pixel Motion Estimation Using Phase Correlation," (unpublished).

[12] Liu, Zhang Wen-jun, and Cai Jun, "A Fast Block-Matching Algorithm Based on Variable Shape Search," Journal of Zhejiang University SCIENCE A, vol. 7, no. 2, pp. 194-198, 2006.

[13] Thomas E. Biedka, Lamine Mili, and Jeffery H. Reed, "Robust Estimation of Cyclic Correlation in Contaminated Gaussian Noise," (unpublished).

[14] En.m.wikipedia.org/wiki/curve_fitting (webpage).

[15] Videoprocessing.ussd.edu/~Stanleychan/research/MotionEstimation.html (webpage).

[16] Sven Kratz and Rafael Ballagas, "Gesture Recognition Using Motion Estimation on Mobile Phones."

[17] Chin-Seng Chua, Haiying Guan, and Yeong-Khing Ho, "Model-Based 3-D Hand Posture Estimation Using a Single 2-D Image."

[18] B. Rosenhahn, U.G. Kersting, A.W. Smith, J.K. Gurney, T. Brox, and R. Klette, "A System for Marker-Less Human Motion Estimation."

[19] R.M. Murray, Z. Li, and S.S. Sastry, "A Mathematical Introduction to Robotic Manipulation," CRC Press, 1994.

[20] Yong Heng Shang and Phil Palmer, "The Dynamic Motion Estimation of a Lunar Lander Using Optical Navigation."

[21] RootFlowRT v2.8: Biological Motion Estimation for Plant Root Growth.

[22] Evgeniya Bogatyrenko, Uwe D. Hanebeck, and Gábor Szabó, "Heart Surface Motion Estimation Framework for Robotic Surgery Employing Meshless Methods."

[23] Khaled Alghathbar and Hanan A. Mahmoud, "Block-Based Motion Estimation Analysis for Lip Reading User Authentication Systems."

[24] Christopher James Verplaetse, "Inertial-Optical Motion-Estimating Camera for Electronic Cinematography," B.S., Aerospace Engineering, Boston University.

[25] David Sarrut, Bertrand Delhay, Pierre-Frédéric Villard, Vlad Boldea, Michael Beuve, and Patrick Clarysse, "A Comparison Framework for Breathing Motion Estimation Methods from 4D Imaging."
