International Journal of Scientific & Engineering Research, Volume 4, Issue 12, December-2013 1478

ISSN 2229-5518

BAYESIAN PROCEDURE IN PHENOLOGICAL STUDY USING INFORMATIVE PRIOR UNDER CONSTANT TIME SERIES MODEL

Vijay Kumar Pandey

Department of Statistics, University of Lucknow, Lucknow-226007 (INDIA)

vijay.pandey550@gmail.com

ABSTRACT

At present, climate change is a topic of discussion for all of us, and statistical methods are necessary to assess its effects on human beings. Here we develop a Bayesian methodology for phenology using an informative prior under a constant time series model, and we adopt a non-parametric approach to estimation.

Keywords- Bayesian Analysis, Phenology, Constant Time Series Model.

INTRODUCTION

The global average surface temperature increased over the twentieth century by about 0.6 ± 0.2 ℃ and is projected to continue to rise at a rapid rate. Many studies have examined the ecological impacts of recent climate change. Phenology is perhaps the simplest and most frequently used bio-indicator to track climate change.
Bayesian statistical methods have so far been applied in climate change detection, analysis and attribution (e.g. Hobbs, 1997; Hasselman, 1998; Leroy, 1998; Tol and De Vos, 1998; Barnett, 1999; Katz, 2002; Dose and Menzel, 2004).
In this paper we focus on developing a probability model, using Bayesian concepts, for phenological study under a time series model. We use an informative prior to develop the model. In the next section we briefly introduce the Bayesian concepts.

Bayesian Procedure: Here we introduce the Bayesian procedure and terminology needed to develop the probability model for phenological studies. Bayesian probability theory is based on the application of two rules. The first is the conventional product rule for manipulating conditional probabilities. It allows a probability density function depending on two (or more) variables, P(θ⃗, d⃗ / M, I), conditional on the model M that specifies the meaning of the parameters θ⃗ and on additional information I, to be broken down into the simplified form

P(θ⃗, d⃗ / M, I) = P(θ⃗ / M, I) · P(d⃗ / θ⃗, M, I)    (1)

where P(θ⃗ / M, I) and P(d⃗ / θ⃗, M, I) depend only on the single (vector) variables θ⃗ and d⃗ respectively.
Equation (1) may be expanded in an alternative way owing to the symmetry in the variables θ⃗, d⃗:

IJSER © 2013 http://www.ijser.org


P(θ⃗, d⃗ / M, I) = P(d⃗ / M, I) · P(θ⃗ / d⃗, M, I)    (2)
Equating the right-hand sides of equations (1) and (2), we obtain Bayes' theorem, that is,

P(θ⃗ / d⃗, M, I) = P(θ⃗ / M, I) · P(d⃗ / θ⃗, M, I) / P(d⃗ / M, I)    (3)
The function on the left-hand side is called the posterior density of the parameters θ⃗ given the data d⃗ and the model M. It is equal to the prior density of the parameters, P(θ⃗ / M, I), which encodes our information on θ⃗ prior to considering the data d⃗, times the likelihood, divided by P(d⃗ / M, I), which is formally the normalisation constant of the posterior density:

P(d⃗ / M, I) = ∫ P(θ⃗ / M, I) · P(d⃗ / θ⃗, M, I) dθ⃗    (4)
By inverse application of the product rule we arrive at the Bayesian marginalization rule, which completes Bayesian analysis and has no counterpart in traditional statistics:

P(d⃗ / M, I) = ∫ P(d⃗, θ⃗ / M, I) dθ⃗    (5)

Equation (5) allows an important interpretation: it is the likelihood of the data d⃗ given the model M, regardless of the numerical values of the parameters θ⃗. Employing Bayes' theorem to invert (5), we obtain

P(M / d⃗, I) ∝ P(M / I) · P(d⃗ / M, I)    (6)

Equation (6) is then the probability of a model M, out of a possible variety, given the data d⃗.
Having identified the appropriate model to explain the data, we are left with the determination of the parameters that specify the model. The full information on the parameters is of course contained in the posterior distribution (3). If the posterior is sufficiently simple, meaning that P(θ / D, I) resembles a Gaussian function, it may be summarized in terms of mean and variance:

⟨θ⟩ = ∫ θ P(θ / D, I) dθ
⟨Δθ²⟩ = ∫ (θ − ⟨θ⟩)² P(θ / D, I) dθ    (7)

This completes a Bayesian analysis when the problem is model selection and the best estimation of the parameters that specify the model.
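The summaries in (7) can be computed numerically on a grid. A minimal Python sketch (with hypothetical data, a flat prior over the grid, and σ assumed known) illustrating the posterior mean and variance:

```python
import numpy as np

# Hypothetical data: ten observations with known scatter sigma = 1.0
d = np.array([25.1, 25.9, 25.4, 25.6, 25.2, 25.8, 25.3, 25.5, 25.7, 25.0])
sigma = 1.0

# Grid over the location parameter theta (the constant f of the model)
theta = np.linspace(20.0, 30.0, 2001)
dx = theta[1] - theta[0]

# Unnormalized posterior: flat prior on the grid times Gaussian likelihood
log_like = -0.5 * ((d[:, None] - theta[None, :]) ** 2).sum(axis=0) / sigma**2
post = np.exp(log_like - log_like.max())
post /= post.sum() * dx                 # normalize so the density integrates to 1

mean = (theta * post).sum() * dx                # <theta>, first line of (7)
var = ((theta - mean) ** 2 * post).sum() * dx   # <Delta theta^2>, second line of (7)
print(mean, var)
```

With a flat prior the posterior mean reproduces the sample mean and the posterior variance approaches σ²/N, as expected for a Gaussian posterior.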

Model: The likelihood function for this model must incorporate the data d⃗, the years x⃗, the scatter of the data, characterized by a variable σ, and the constant f that we choose to represent the absence of a trend in the data.
The model becomes

d_i − f = ε_i  ∀ i    (8)


where the ε_i are i.i.d. and follow a normal distribution with mean zero and variance σ². Hence

P(d⃗ / x⃗, f, σ, I) = (1 / (σ√(2π)))^N exp{ −(1/(2σ²)) ∑_{i=1}^{N} (d_i − f)² }    (9)

From equation (9) we must now calculate the evidence P(d⃗ / x⃗, C, I), where C denotes the constant model. From the marginalization theorem (5):

P(d⃗ / x⃗, C, I) = ∫ P(d⃗, f, σ / x⃗, C, I) df dσ    (10)
          = ∫ P(f, σ / x⃗, C, I) · P(d⃗ / x⃗, f, σ, C, I) df dσ    (11)

The first distribution under the integral in (11) is logically independent of x⃗ and C and, with f and σ independent a priori, simplifies to

P(f, σ / x⃗, C, I) = P(f / I) · P(σ / I)    (12)
The prior distribution P(f / I) on f is chosen (weakly informative) to be constant over the range 2γ:

P(f / I) = 1/(2γ)    (13)

The range γ can be estimated from the variance of the data.
Let the conjugate prior density for σ² be the inverted-gamma(α, λ) distribution, with p.d.f. (up to its normalizing constant)

g(σ²) ∝ (1/σ²)^(α+1) exp(−λ/σ²),  σ² > 0    (14)
Now,

∑_{i=1}^{N} (d_i − f)² = ∑_{i=1}^{N} (d_i − d̄ + d̄ − f)² = N(f − d̄)² + NΔd²

where d̄ = (1/N) ∑_{i=1}^{N} d_i and Δd² = (1/N) ∑_{i=1}^{N} (d_i − d̄)².
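The decomposition above is an algebraic identity, which a quick numerical check (with arbitrary hypothetical values) confirms:

```python
import numpy as np

d = np.array([25.1, 25.9, 25.4, 25.6, 25.2, 25.8, 25.3, 25.5, 25.7, 25.0])
f = 24.0                                # any constant
N = len(d)
dbar = d.mean()                         # d-bar
delta2 = ((d - dbar) ** 2).mean()       # Delta d^2

lhs = ((d - f) ** 2).sum()              # sum of squared deviations from f
rhs = N * (f - dbar) ** 2 + N * delta2  # decomposed form
print(lhs, rhs)   # identical up to floating-point rounding
```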

Therefore,

P(d⃗ / x⃗, C, I) = (1/(2π))^(N/2) (1/(2γ)) ∫₀^∞ (1/σ^N) (1/σ²)^(α+1) exp{ −(1/σ²)(NΔd²/2 + λ) } [ ∫_{−∞}^{∞} exp{ −(N/(2σ²))(f − d̄)² } df ] dσ    (15)

Now

∫_{−∞}^{∞} exp{ −(N/(2σ²))(f − d̄)² } df = σ √(2π/N)    (16)

Using equations (15) and (16) we can write

P(d⃗ / x⃗, C, I) = (1/(2π))^((N−1)/2) (1/(2γ√N)) ∫₀^∞ (1/σ^(N−1)) (1/σ²)^(α+1) exp{ −(1/σ²)(NΔd²/2 + λ) } dσ



Now, by the substitution x = 1/σ², we find

∫₀^∞ (1/σ^(N−1)) (1/σ²)^(α+1) exp{ −(1/σ²)(NΔd²/2 + λ) } dσ = (1/2) Γ((2α+N)/2) (NΔd²/2 + λ)^(−(2α+N)/2)    (17)
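Equation (17) can be verified by comparing direct numerical integration over σ with the closed form; a sketch with hypothetical values of N, α, λ and Δd²:

```python
import numpy as np
from math import gamma

N, alpha, lam = 10, 2.0, 1.0   # hypothetical hyperparameters
delta2 = 0.5                   # hypothetical Delta d^2
A = N * delta2 / 2 + lam       # shorthand for N*Delta d^2/2 + lambda

# Left side of (17): integrand is sigma^-(N+2*alpha+1) * exp(-A/sigma^2)
s = np.linspace(1e-3, 20.0, 2_000_001)
ds = s[1] - s[0]
lhs = np.sum(s ** -(N + 2 * alpha + 1) * np.exp(-A / s**2)) * ds

# Right side of (17): closed form via the gamma function
rhs = 0.5 * gamma((2 * alpha + N) / 2) * A ** (-(2 * alpha + N) / 2)
print(lhs, rhs)   # agree to several decimal places
```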

Collecting terms, the evidence for the constant model becomes

P(d⃗ / x⃗, C, I) = (1/2) (1/(2π))^((N−1)/2) (1/(2γ√N)) Γ((2α+N)/2) (NΔd²/2 + λ)^(−(2α+N)/2)    (18)
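For numerical work it is safer to evaluate (18) on the log scale. A sketch (the function name and the hyperparameter values in the example are illustrative, not from the paper):

```python
import math

def log_evidence_constant(N, delta_d2, gamma_half, alpha, lam):
    """Log of the evidence (18); gamma_half is the half-range gamma of the
    flat prior on f, and alpha, lam are the inverted-gamma hyperparameters."""
    A = N * delta_d2 / 2 + lam
    return (math.log(0.5)
            - (N - 1) / 2 * math.log(2 * math.pi)
            - math.log(2 * gamma_half * math.sqrt(N))
            + math.lgamma((2 * alpha + N) / 2)
            - (2 * alpha + N) / 2 * math.log(A))

# Example with the illustration's N and Delta d^2, hypothetical gamma, alpha, lam
print(log_evidence_constant(10, 0.539, 2.0, 2.0, 1.0))
```

Working on the log scale avoids underflow for large N, and log evidences of competing models can be compared directly.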

The residual sum of squares of the model (18) is given by the expression

R = NΔd²/2

where Δd² = (1/N) ∑_{i=1}^{N} (d_i − d̄)² = (1/N) ∑_{i=1}^{N} d_i² − d̄².

For an illustration, we have the mean temperature data of Faizabad district (in ℃) from the year 1990–1991 to 1999–2000 (N = 10), for which

∑ d_i = 254.55, so that d̄ = 25.455 and d̄² = 647.957
∑ d_i² = 6484.96, so that (1/N) ∑ d_i² = 648.496
Δd² = 648.496 − 647.957 = 0.539

Therefore, the residual sum of squares of the model (18) is

R = NΔd²/2 = (10 × 0.539)/2 ≈ 2.69
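The illustration's quantities follow directly from the reported summary statistics; in Python:

```python
N = 10
sum_d, sum_d2 = 254.55, 6484.96    # reported sums for the Faizabad data

dbar = sum_d / N                   # mean temperature d-bar
delta2 = sum_d2 / N - dbar ** 2    # Delta d^2 = (1/N) sum d_i^2 - d-bar^2
R = N * delta2 / 2                 # residual sum of squares of model (18)
print(dbar, delta2, R)
```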


ACKNOWLEDGEMENT

We thank Professor Rohit Pandey of Aacharya Narendra Dev Agriculture University, Kumarganj, Faizabad, Uttar Pradesh (INDIA) for providing the data.

CONCLUSION

From the above discussion we can say that Bayesian analysis opens new possibilities for time series description and the assessment of functional behaviour. Using model (18) we can analyse phenological time series data, assess their functional behaviour, and forecast the effect of climate change on phenological time series.

REFERENCES

[1] Barnett, V. (1982): Comparative Statistical Inference.

[2] Berger, J. O. (1985): Statistical Decision Theory and Bayesian Analysis.

[3] Box, G. E. P. and Tiao, G. C. (1973): Bayesian Inference in Statistical Analysis.

[4] Broemeling, L. D. (1985): Bayesian Analysis of Linear Models.

[5] Lindley, D. V. (1965): Introduction to Probability and Statistics from a Bayesian Point of View.

Author

Vijay Kumar Pandey

Department of Statistics, University of Lucknow, Lucknow-226007 (INDIA)


vijay.pandey550@gmail.com
