International Journal of Scientific & Engineering Research, Volume 3, Issue 2, February 2012
ISSN 2229-5518

Artificial Neural Network Based Model

for the Prediction of Effluent from Lab-Scale Upward Flow Hybrid Anaerobic Sludge Blanket (UHASB) Reactor

Sindhu J. Nair, Hota H.S., Ghosh P.K. and Agrawal M.L.

Abstract: Anaerobic processes have gained popularity over the past decade and have already been applied successfully for the treatment of a number of waste streams. One of the most attractive options available for such treatment is the upflow anaerobic sludge blanket (UASB) reactor, which acts as a compact system for the removal and digestion of the organic matter present in sewage. The hybrid UHASB reactor is an improved version of the UASB system and combines the merits of the upflow sludge blanket and fixed-film reactors; it is an economical solution for the treatment of municipal sewage. This paper presents the prediction of the effluent from a UHASB reactor using artificial neural networks. Two different neural networks, the error back-propagation network (EBPN) and the radial basis function network (RBFN), are used for prediction and their results are compared. When a UHASB reactor is put into operation, variations in wastewater quantity and quality must be predicted using mathematical models so that the treated effluent can be controlled and will meet discharge standards. In this study an ANN is used to predict the effluent biochemical oxygen demand (BOD), chemical oxygen demand (COD), suspended solids (SS) and total dissolved solids (TDS) from a lab-scale upward flow hybrid anaerobic sludge blanket (UHASB) reactor. The simulation results indicate that mean absolute percentage errors (MAPE) of 11.86, 15.53, 26.67 and 22.26 for BOD, COD, SS and TDS respectively could be achieved in testing. The prediction results suggest that the EBPA-tuned neural network (EBPN) performs well and can predict the removal efficiencies effectively and accurately.

Index Terms: Error back-propagation network (EBPN), radial basis function network (RBFN), upward flow hybrid anaerobic sludge blanket (UHASB).


1 INTRODUCTION

Sewage is the main point-source pollutant on a global scale. Between 90 and 95% of the sewage produced in the world is released into the environment without any treatment [Seghezzo, 2004]. Moreover, virtually 100% of the wastewater produced in households in most of the cities and towns of some developing countries is commonly discharged into water bodies such as rivers and lakes, with immediate and sometimes disastrous effects on public health and the quality of the environment [Seghezzo, 2004]. In India, about 70% of domestic wastewater is discharged without proper treatment into water bodies [CPCB, 1997]. The upflow anaerobic sludge blanket (UASB) reactor is being used with increasing regularity all over the world, and especially in India, for a variety of wastewater treatment operations. Its use is not limited to the traditional applications of anaerobic systems, viz. sludge digestion and treatment of high-strength industrial wastes, but extends to the treatment of low-strength domestic wastewater. In spite of the widespread application of UASB technology in India, the design of such reactors is mired in empiricism. There are several reasons for this state of affairs. The microbial ecology in an anaerobic reactor is extremely complex, with several strains of micro-organisms existing in symbiotic relation inside the reactor. Although the interactions between these organisms are well understood in qualitative terms, a quantitative description of these inter-relationships as applicable to reactor performance is not possible. Similarly, the hydraulics, substrate and biomass transport mechanisms and other process parameters responsible for reactor performance, though understood in qualitative terms, cannot be represented in quantitative terms. Through the efforts of researchers, a large volume of data on UASB/UHASB reactor performance under various working conditions has been obtained. This has undoubtedly increased the understanding of the process. However, prediction of UASB/UHASB reactor performance for specific input conditions is still not possible because of the extreme complexity of the process. To carry out detailed studies or to validate mechanistic models, much attention has been devoted to the investigation of water quality indices. The effluent quality trend cannot be predicted appropriately using mechanistic models because few data are available. A soft-computing model based on a neural network instead uses training to continually adjust the weights and biases so that the model output approaches the desired output through a black-box-type operation, relying only on the relationship between the system input and output.

2 EXPERIMENTAL SET-UP

The schematic representation of the UHASB reactor is shown in Fig. 1. The experimental investigation was carried out using a pilot-scale UHASB reactor with a working volume of 56.52 L, a height of 0.9 m and an internal diameter of 0.30 m, fed with pre-screened domestic sewage from the Nehru Nagar area, Bhilai, C.G., India, mixed with pulverized vegetable waste. The reactor was equipped with an egg-tray filter through which the sewage passed before discharge. It had a modified gas-solid-liquid separator and four sludge collection points.


Methane production was monitored using a 30 L plastic bottle filled with NaOH solution (5% w/w). A very slow stirrer (1 rpm) was installed in the reactor to avoid channelling and "piston" formation in its sludge bed (rising sludge due to biogas entrapped in the sludge layer); Goncalves et al. (1994) also used this approach. The input variables analysed were the pH, alkalinity, temperature, total solids, turbidity, COD and BOD of the influent samples.

Fig. 1: Schematic diagram of the reactor

TABLE 1
SAMPLE DATA COLLECTED FROM THE REACTOR
The biological oxygen demand of both the influent and the effluent sewage was determined by the dilution method. The BOD in ppm was then calculated using the following equation:

BOD = [(oxygen content)initial - (oxygen content)final] x dilution factor

A sample volume of 20 ml, or a fraction diluted to 20 ml, was used for the analysis. Soluble COD was determined after filtering the samples through 0.45 um membrane filter paper. Total solids, total dissolved solids, total suspended solids and total volatile suspended solids determinations were also carried out. Experimental data were collected and analysed as per standard methods [APHA et al., 1998] in order to evaluate the "steady state" performance and efficiency of the UHASB reactor on the basis of (i) COD removal efficiency, (ii) effluent variability, and (iii) operational and pH stability.
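As a simple numerical illustration of the dilution calculation above (the DO readings and dilution factor below are hypothetical values, not measurements from this study):

```python
# Hypothetical example of the BOD dilution calculation described above.
do_initial = 8.2       # dissolved oxygen of the diluted sample before incubation (mg/L)
do_final = 3.1         # dissolved oxygen after incubation (mg/L)
dilution_factor = 10   # e.g. a 20 ml sample fraction made up to 200 ml

bod = (do_initial - do_final) * dilution_factor   # BOD in mg/L (ppm)
print(f"BOD = {bod:.1f} mg/L")
```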


A sample of the data collected from the reactor is shown in Table 1, in which (I) denotes input (influent) variables and (O) denotes output (effluent) variables. The distribution of the input and output patterns is also shown in Fig. 2. The data are highly nonlinear; there is no obvious mathematical relation between influent and effluent, and hence it is a challenging job to develop a model that will produce output with high accuracy. The data used in this piece of research were collected from the reactor between January 2009 and December 2010. In all there are 489 records, of which 291 are used for training and 198 for testing the ANN models.
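The following sketch illustrates one way the 291/198 split described above could be reproduced; the file name and column labels are hypothetical, since the paper does not specify how the data were stored.

```python
# A minimal sketch of loading the reactor data and applying the 291/198 split.
# "uhasb_reactor_data.csv" and the column names are assumed for illustration only.
import pandas as pd

data = pd.read_csv("uhasb_reactor_data.csv")   # 489 records, Jan 2009 - Dec 2010
X = data[["BOD_in", "COD_in", "SS_in", "TDS_in"]].to_numpy()      # influent parameters
Y = data[["BOD_out", "COD_out", "SS_out", "TDS_out"]].to_numpy()  # effluent parameters

# First 291 records for training, remaining 198 for testing (chronological split assumed).
X_train, Y_train = X[:291], Y[:291]
X_test, Y_test = X[291:], Y[291:]
```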

Fig. 2: Distribution of the data set: (a) influent and effluent COD and BOD; (b) influent and effluent TDS and SS

3 METHODOLOGY

This study is confined to artificial neural networks (ANN), which are inspired by biological neural networks. The ANN is one of the most useful soft-computing tools used for prediction. In this piece of research work two well-known neural networks, the EBPN and the RBF network, are used for prediction; these models are shown in Fig. 3(a) and (b) respectively.


Fig. 3: Architecture of (a) the EBPA-tuned neural network (EBPN) and (b) the RBF-tuned neural network (RBFN); each network maps the four influent parameters (BOD, COD, SS, TDS) to the corresponding effluent parameters

The details of each of these two algorithms are explained below.

1. Error Back-Propagation Network (EBPN): The back-propagation learning algorithm is one of the most important developments in neural networks. It has re-awakened the scientific and engineering community to the modelling and processing of numerous quantitative phenomena using neural networks. The learning algorithm is applied to multilayer feed-forward networks consisting of processing elements with continuous, differentiable activation functions. For a given set of training input-output pairs, the algorithm provides a procedure for changing the weights in a BPN so that the given input patterns are predicted correctly. The basic concept behind this weight-update algorithm is the gradient-descent method, as used in the case of a simple perceptron network with differentiable units; the error is propagated back to the hidden units.

The back-propagation algorithm differs from other networks in the way the weights are calculated during the learning period of the network. The general difficulty with multilayer perceptrons is calculating the weights of the hidden layers in an efficient way that results in a very small or zero output error. When the number of hidden layers is increased, network training becomes more complex. To update the weights, the error must be calculated. The error, which is the difference between the actual (calculated) and the desired (target) output, is easily measured at the output layer; at the hidden layers there is no direct information about the error. Other techniques must therefore be used to calculate a hidden-layer error that minimizes the output error, which is the ultimate goal.

The training of the BPN is done in three stages: the feed-forward of the input training pattern, the calculation and back-propagation of the error, and the updating of the weights. The testing of the BPN involves only the feed-forward phase. There can be more than one hidden layer (which can be beneficial), but one hidden layer is sufficient. Even though training is very slow, once the network is trained it can produce its outputs very rapidly. In this study the EBPN was composed of three layers: input, hidden and output. The influent BOD, COD, SS and TDS were taken as the inputs, and the effluent values of the same parameters were taken as the output variables. The complete architecture of the EBPN is shown in Fig. 4, in which there are 10 neurons in the hidden layer; the activation function used in the hidden and output layers is the log-sigmoid function.

Fig. 4: Architecture of EBPN for prediction
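A minimal NumPy sketch of such a network is given below. It is not the authors' code: the learning rate, number of epochs and weight initialization are assumptions, and the influent/effluent values are assumed to be scaled to [0, 1] because log-sigmoid units are used in the output layer.

```python
# Sketch of an error back-propagation network with 4 inputs, 10 log-sigmoid hidden
# neurons and 4 log-sigmoid outputs, trained by plain gradient descent on squared error.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_ebpn(X, T, n_hidden=10, lr=0.1, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, T.shape[1])); b2 = np.zeros(T.shape[1])
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)              # feed-forward: hidden layer
        Y = sigmoid(H @ W2 + b2)              # feed-forward: output layer
        dY = (Y - T) * Y * (1.0 - Y)          # back-propagated delta at the output layer
        dH = (dY @ W2.T) * H * (1.0 - H)      # back-propagated delta at the hidden layer
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)   # gradient-descent weight update
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
    return W1, b1, W2, b2

def predict_ebpn(X, W1, b1, W2, b2):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

For example, predict_ebpn(X_test, *train_ebpn(X_train, Y_train)) would return the predicted (scaled) effluent values for the test patterns.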
2. Radial Basis Function Network (RBFN): The radial basis function (RBF) network is a classification and function-approximation neural network developed by M.J.D. Powell. The network uses common nonlinearities such as sigmoidal and Gaussian kernel functions; Gaussian functions are also used in regularization networks. The response of such a function is positive for all values of y and decreases to 0 as |y| tends to infinity. The Gaussian function is generally defined as

f(y) = e^{-y^2}

and its derivative is given by

f'(y) = -2y e^{-y^2} = -2y f(y)

When Gaussian potential functions are used, each node produces an identical output for inputs lying within a fixed radial distance from the centre of the kernel; the nodes are radially symmetric, hence the name radial basis function network. The entire network forms a linear combination of the nonlinear basis functions. Training starts in the hidden layer with an unsupervised learning algorithm and is continued in the output layer with a supervised learning algorithm; a supervised learning algorithm can also be applied simultaneously to the hidden and output layers to fine-tune the network. The training algorithm is given as follows.

Step 0: Set the weights to small random values.
Step 1: Perform Steps 2-8 while the stopping condition is false.
Step 2: Perform Steps 3-7 for each input.
Step 3: Each input unit (x_i, for i = 1 to n) receives the input signal and transmits it to the hidden-layer units.
Step 4: Calculate the radial basis function.
Step 5: Select the centres for the radial basis functions. The centres are selected from the set of input vectors; a sufficient number of centres must be selected to ensure adequate sampling of the input-vector space.
Step 6: Calculate the output of each hidden-layer unit:


exp xji xˆji 2

vi xi j l

i2

160

140

B O D (O )

Where the centre of the RBF unit for input variables is the width of ith RBF unit the jth variable of input pattern.
Step 7: Calculate the output of the neural network:

k

ynet wimvi xi w0

i 1

120

100

80

60

B O D (P )

Where

k = number of hidden layer nodes (RBF function) 40

ynet

= output value of mth node in output layer for the nth 20
incoming pattern.

wim = weight between ith RBF unit and mth output node.

w0 = biasing term at nth output node.

0

0 25 50 75 D a t 100 i n t 125 150 175 200

Step 8: Calculating the error and test for the stopping condi-
tion. The stopping condition may be number of epochs or to a
certain extent weight change.
Thus, a network can be trained using RBFN.
Architecture of RBFN which is designed for prediction of
effluent of UHSAB reactor parameters is shown in Fig. 5 with
same number of neurons in input and output layer as EBPN
but this network consisting 20 neurons in hidden layer.

Fig. 5: Architecture of RBF network for prediction
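A minimal sketch of such an RBFN, consistent with the steps above but not taken from the paper, is shown below. The centres are drawn at random from the training inputs (a simple stand-in for the unsupervised stage), a single common width sigma is assumed, and the output-layer weights and bias are fitted by least squares (the supervised stage).

```python
# Sketch of a radial basis function network with Gaussian hidden units.
import numpy as np

def rbf_design_matrix(X, centres, sigma):
    # v_i(x) = exp(-||x - c_i||^2 / sigma^2) for every pattern x and centre c_i
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2)

def fit_rbfn(X, T, n_centres=20, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_centres, replace=False)]   # centres picked from the input vectors
    V = rbf_design_matrix(X, centres, sigma)
    V = np.hstack([V, np.ones((len(X), 1))])                    # column of ones for the bias term w0
    W, *_ = np.linalg.lstsq(V, T, rcond=None)                   # supervised fit of output-layer weights
    return centres, sigma, W

def predict_rbfn(X, centres, sigma, W):
    V = rbf_design_matrix(X, centres, sigma)
    V = np.hstack([V, np.ones((len(X), 1))])
    return V @ W
```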

4 RESULTS AND DISCUSSION

Both models were trained with the 291 training data sets. Once trained, each network was tested with the 198 testing data sets; the testing results are shown in Fig. 6(a) to (d) and Fig. 7(a) to (d) for the EBPN and RBFN models respectively. The simulation results show that the predicted values are close to the observed values, although in the case of suspended solids (SS) in Fig. 6(c) the predicted value is not close to the observed value for some of the data points. However, the graphs of Fig. 7 differ from those of Fig. 6 for COD and TDS (Fig. 7(a) and (d)). Although the RBFN has more hidden-layer neurons than the EBPN, its accuracy is lower.
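The comparison plots of Fig. 6 and Fig. 7 can be reproduced along the following lines; the sketch assumes Y_test and a matrix of network predictions Y_pred (e.g. from predict_ebpn above) with columns ordered as BOD, COD, SS, TDS.

```python
# Sketch of observed-vs-predicted plots in the style of Fig. 6 / Fig. 7.
import matplotlib.pyplot as plt

labels = ["BOD", "COD", "SS", "TDS"]            # column order assumed from the split example
fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for k, ax in enumerate(axes.ravel()):
    ax.plot(Y_test[:, k], label=f"{labels[k]} (O)")   # observed effluent values
    ax.plot(Y_pred[:, k], label=f"{labels[k]} (P)")   # values predicted by the ANN
    ax.set_xlabel("Data point")
    ax.set_ylabel(labels[k])
    ax.legend()
plt.tight_layout()
plt.show()
```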


Fig. 6: Comparative graphs of prediction for the UHASB reactor using EBPN: (a) BOD(O) vs BOD(P), (b) COD(O) vs COD(P), (c) TDS(O) vs TDS(P) and (d) SS(O) vs SS(P)


Fig. 7: Comparative graphs of prediction for the UHASB reactor using RBFN: (a) BOD(O) vs BOD(P), (b) COD(O) vs COD(P), (c) TDS(O) vs TDS(P) and (d) SS(O) vs SS(P)

In order to compare the prediction accuracy of the two ANN-based models, EBPN and RBFN, the mean absolute percentage error (MAPE) is calculated using the following equation:

MAPE = (|Observed value - Predicted value| / Predicted value) x 100

MAPE values for the different parameters have been calculated using the above formula and are given in Table 2. The MAPE obtained in training is higher than that obtained in testing, as is clearly shown in the bar chart of Fig. 8.
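The MAPE defined above can be evaluated per parameter as in the short sketch below (the denominator follows the paper's definition, i.e. the predicted value):

```python
# MAPE as defined in the text, applied to one effluent parameter at a time.
import numpy as np

def mape(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(observed - predicted) / predicted) * 100.0)

# e.g. mape(Y_test[:, 0], Y_pred[:, 0]) gives the testing MAPE for effluent BOD
```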

TABLE 2
COMPARISON OF TWO NEURAL NETWORK MODELS: EBPN AND RBFN

Fig. 8: A comparative chart of testing data using EBPN and RBFN

The MAPE values for testing with the EBPN are 11.86, 15.53, 22.26 and 26.67 for BOD, COD, TDS and SS respectively, while for the RBFN they are 18.73, 16.55, 23.77 and 33.19 for the same parameters. From the table it is clear that, for all parameters, the MAPE of the EBPN is lower than that of the RBFN; hence the EBPN is more accurate than the RBFN. The range of MAPE for the testing data is between 11.86 and 26.67 for the EBPN, whereas it is between 18.73 and 33.19 for the RBFN. The range of error for the EBPN is acceptable, and the model can be accepted for the prediction of the different parameters of the UHASB reactor. Our results confirm the hypothesis that the EBPN is robust in solving nonlinear problems where a classical mathematical modelling process is unable to predict the effluent from a UHASB reactor.

REFERENCES

[1] APHA (1998). Standard Methods for the Examination of Water and Wastewater. Washington, D.C.


[2] CPCB (1997). Status of water supply and wastewater generation, collection, treatment, and disposal in metro cities. Central Pollution Control Board, series CUPS/42/1997-98, India.

[3] Goncalves, R.F., Charlier, A.C. and Sammut, F. (1994). Primary fermentation of soluble and particulate organic matter for wastewater treatment. Water Science and Technology, 30(6): 53-62.


[4] H.H. Chen and S.L. Lo, "Prediction of the effluent from a domestic wastewater treatment plant of CASP using gray model and neural network", Environmental Monitoring and Assessment, Springer, no. 162, pp. 2645-275, 2010.

[5] T.T. Chow, Z. Lin and C.L. Song, "Applying neural network and genetic algorithm in chiller optimization", 7th International IBPSA Conference, Rio de Janeiro, Brazil, pp. 1059-1065, Aug. 13-15, 2001.

[6] M. Djennas and M. Benbouziane, "A neural network & genetic algorithm hybrid model for modeling exchange rates: the case of the US Dollar/Kuwait Dinar", survey paper, www.fxtop.com, 2009.

[7] S.N. Sivanandam and S.N. Deepa, Principles of Soft Computing, second edition, Wiley India, 2011.

[8] S. Rajasekaran and Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis and Applications, PHI Learning Private Limited, 2010.

[9] Seghezzo, L. (2004). Anaerobic treatment of domestic wastewater in subtropical regions. PhD thesis, Wageningen University.
