American Journal of Engineering Research (AJER), 2013. e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-62-74. www.ajer.org. Research Paper, Open Access.

The Effect of Strain in the Curing Process on the Morphology and Mechanical Properties of Natural Rubber/Organoclay Nanocomposites

Masoomeh Mehrabi (1), Hedayatollah Sadeghi Ghari (2)
(1) Islamic Azad University, Baghmalek Branch, Khuzestan, Iran.
(2) Young Researchers and Elite Club, Islamic Azad University, Omidieh Branch, Iran.

Abstract: - In this paper, two methods were used to prepare natural rubber/organoclay nanocomposites: the ordinary method and curing under strain. Single-network natural rubber nanocomposites were used for comparison. The effects of organoclay at different extension ratios on the mechanical properties, hardness, swelling behaviour and morphology of ordinary and extended natural rubber nanocomposites were studied. The results showed that the extended natural rubber nanocomposites exhibit superior physical and mechanical properties. The tensile strength of the extended natural rubber nanocomposites increased to more than 4 times that of pure natural rubber and then decreased with further increase of the extension ratio. Modulus and hardness increased continuously with increasing extension ratio. The microstructure of the natural rubber/organoclay systems was studied by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The effect of different extension ratios on the dispersion of the nanoclay layers in the nanocomposites was then investigated.

Keywords: - Natural Rubber, Organoclay, Curing, Nanocomposite, Microstructure, Mechanical Properties.

I. INTRODUCTION
Natural rubber is the most widely used rubber in many engineering applications and generally needs to be reinforced. Fillers such as carbon black, clays and calcium carbonate are added to rubber formulations in order to optimize the properties needed for the service application [1].
Achieving a process in which the filler is adequately dispersed in the polymer matrix is a persistent challenge in polymer processing. The most important factor governing the improvement of properties when a nanofiller is added to rubber is its dispersion in the rubber matrix; the degree of nanofiller dispersion is the central issue in preparing nanoclay nanocomposites and can significantly improve the overall properties of the material. The reinforcement of rubber by nanoclay is one of the most important phenomena in polymer science. Nanoclay is a commonly used nanoscale filler for polymers because of the excellent mechanical properties, barrier performance and thermal stability achieved by adding it to a polymer composite. Recently, researchers have focused on rubber-clay nanocomposites because of the significant improvements in physical-mechanical properties [2,3], thermal stability [4,5], gas permeability [6,7] and flammability [8,9]. Layered silicates of nanoscale size are effective reinforcements for rubber materials [10-13]. The largest number of recent studies has been devoted to the production of NR nanocomposites using Cloisite 10A and Cloisite 15A [14], aliphatically and aromatically modified montmorillonite [15], octadecylammonium-modified montmorillonite [16], modified vermiculite [17], modified montmorillonite [18,19], Cloisite 30B and Nanomer I.30P [20] and bentonite [21]. Based on these reports, organically modified nanoclay can serve as an inorganic filler to improve the mechanical performance of NR. Other researchers have investigated the effect of nanoclay in various rubbers. An extended structure is formed when a partially crosslinked elastomer is further crosslinked in a state of strain; an extended elastomer is a rubber crosslinked twice, the second time while in a deformed state.
While extended rubbers can form inadvertently through chain scission or strain-induced crystallization, the potential for improved mechanical properties in such materials has elicited much interest [22]. The modulus and mechanical properties of extended NR increase with extension ratio [22-24]. Thus, when appropriately prepared, double networks can have higher mechanical properties than single networks of the same modulus. In this article, two methods were used for the preparation of natural rubber/nanoclay nanocomposites: the ordinary method and the extended NR nanocomposite. Comparatively many studies have been published on the mechanical behaviour of single-network rubbers. Recently, it has been reported that the ultimate tensile strength of extended NR is somewhat invariant to residual strain, or even improved in the direction of residual strain, notwithstanding the higher tensile modulus [22, 23]. It has also been reported that the mechanical fatigue resistance of extended NR perpendicular to the cure-stretching direction was greatly enhanced, about ten times, compared with that of a conventionally crosslinked rubber. NR/nanoclay nanocomposites were prepared by shear mixing. The effect of extension ratio on the morphology, mechanical properties and swelling resistance (solvent resistance) of NR was investigated by X-ray diffraction, rheometric and mechanical analysis. The extended natural rubber exhibited enhanced mechanical performance in comparison with the single networks. Numerous reports exist on rubber compounds prepared by the conventional method, and these have been reviewed and summarized; to date, however, no report on the production of extended nanocomposites has been published.
In this regard, only a few reports exist on traditional composites based on natural rubber, styrene-butadiene rubber (SBR), isoprene rubber (IR) and polybutadiene rubber (BR). In those systems the reinforcing agent was carbon black, while the use of nanoclay, with its nanolayered structure, has remained largely unexplored. In this paper, the effect of nanoclay on the morphology and mechanical performance of NR nanocomposites prepared under extension is investigated.

II. EXPERIMENTAL
Materials
The commercially available NR used in this study was ribbed smoked sheet (RSS) No. 1 with Mooney viscosity ML[1+4, 100 °C] = 80, from Indonesia. Organically modified montmorillonite (nanoclay) was purchased from Southern Clay Products (Gonzales, TX) under the trade name Cloisite 15A. This nanoclay is modified with a dimethyl dihydrogenated tallow quaternary ammonium salt at a concentration of 125 meq/100 g of clay. The curing additives (zinc oxide, stearic acid, sulfur) were purchased from Iranian suppliers (analytical grade). Dibenzothiazyl disulfide (MBTS) and the antioxidant IPPD 4010NA were obtained from Bayer, and toluene for the swelling experiments was supplied by Merck.

Preparation of NR nanocomposites
All rubber nanocomposites were prepared on a two-roll mill (300 mm length, 170 mm diameter, friction ratio 1.4) operated at room temperature. To mix the ingredients, the natural rubber was first masticated, then the nanoclay was added and mixed. Zinc oxide, stearic acid and antioxidant were then added to the compound. After mixing, the rubber compounds were left for 8 h, and then the sulfur and accelerator were added. The extended NR nanocomposite was prepared by a two-step crosslinking method, in which the crosslinking was completed while the NR was in a stretched state. In the first step, the rubber sheet was partially cured for 10 min at 150 °C under a pressure of 10 MPa.
In the second step, the partially crosslinked rubber sheet was uniaxially stretched to various desired lengths (extension ratios) using a metal holder. The stretched sheet was placed in a vacuum oven for 70 min at 135 °C. The fully cured rubber sheet was then kept in air at room temperature for 24 h. Finally, the extended nanocomposites were obtained by releasing the strain.

III. CHARACTERIZATION
X-ray Diffraction
To study the degree of dispersion of the nanoclay and the increase in the clay intergallery spacing in the rubber composites, XRD measurements were performed on a PHILIPS X'PERT PRO diffractometer over the range 2θ = 1-10° with a Cu target (λ = 0.154 nm). An acceleration voltage of 40 kV and a beam current of 40 mA were used, and the scanning rate was maintained at 2°/min. The d-spacing of the nanoclay particles was calculated using Bragg's law (λ = 2d sin θ).

Cure Characteristics
The curing characteristics of the nanocomposites were measured according to ASTM D2084-95 [24] using an oscillating disc rheometer (Monsanto Rheometer 100) operated at 150 °C with a 3° arc oscillation angle. The scorch time (t2), optimum cure time (t95), minimum torque (ML), maximum torque (MH) and the difference between minimum and maximum torque (ΔM) were determined.

Mechanical Properties
The mechanical behavior, including tensile strength, modulus and percentage elongation at break of the NR nanocomposites, was investigated by tensile testing. Tensile properties were measured on dumbbell-shaped specimens punched out of the molded sheets. The tests were carried out according to ASTM D412 on a universal testing machine (Zwick-Roell, model Z050, Germany) at room temperature with a crosshead speed of 500 mm/min. The tensile result for each sample was recorded as the average of three repeated measurements.
The hardness of the prepared samples was measured with a Zwick hardness tester according to ASTM D2240.

Swelling Measurements
Swelling tests in toluene were conducted on the rubber compounds. Samples of 25 × 15 × 2 mm³ were used to determine the swelling behavior of the vulcanized rubber according to ASTM D471-06. Initially, the dry weight of the samples was measured. The samples were then immersed in toluene at 25 °C for 72 h, and the swollen weight was recorded for determination of the swelling ratio and crosslink density. The samples were periodically removed from the test bottles, the adhering solvent was wiped from the surface, and the samples were weighed immediately. The degree of equilibrium swelling can be used to quantify the swelling ratio, the crosslink density and the chemical interaction between the polymer and the nanoparticles [24]. The swelling ratio Q can be calculated from the following equation [25]:

Q = ms / md    (1)

where md and ms are the initial weight of the dry rubber and the weight of solvent absorbed by the sample, respectively.

Field Emission Scanning Electron Microscopy
A field emission scanning electron microscope (FE-SEM; Hitachi model S-4160, accelerating voltage 15 kV, Japan) was used to study the morphology of the fractured surfaces of the rubber nanocomposites. Before the tests, the samples were fractured in liquid nitrogen; the fracture surfaces were then coated with gold and observed by FE-SEM.

IV. RESULTS AND DISCUSSION
Morphological Studies on NR Nanocomposites
X-ray diffraction analysis was used to study the morphology of the produced nanocomposites. The XRD pattern of the nanoclay revealed a characteristic diffraction peak at 2θ = 2.8°, corresponding to a basal spacing of 3.15 nm. The XRD pattern of pure NR showed no characteristic diffraction peak. The nanocomposite containing 5 phr (parts per hundred parts of rubber) nanoclay has a d-spacing of 3.99 nm.
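Both spacings quoted here follow directly from Bragg's law with the Cu wavelength given in the Characterization section; a quick numerical check (a minimal sketch, not part of the original analysis):

```python
import math

CU_K_ALPHA_NM = 0.154  # Cu target wavelength stated in the Characterization section

def d_spacing_nm(two_theta_deg):
    """Interlayer spacing from Bragg's law, lambda = 2 d sin(theta)."""
    return CU_K_ALPHA_NM / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

def two_theta_deg(d_nm):
    """Inverse relation: diffraction angle 2-theta for a given basal spacing."""
    return 2.0 * math.degrees(math.asin(CU_K_ALPHA_NM / (2.0 * d_nm)))

print(round(d_spacing_nm(2.8), 2))   # 3.15 -> basal spacing of pristine Cloisite 15A
print(round(two_theta_deg(3.99), 2)) # 2.21 -> implied peak position of the 5 phr nanocomposite
```

The 3.15 nm value reproduces the reported basal spacing of the pristine organoclay; the 2θ ≈ 2.21° value for the 3.99 nm spacing is the position implied by Bragg's law, not a figure quoted in the paper.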
This indicates penetration of the polymer chains into the silicate galleries, increasing the d-spacing by about 0.84 nm relative to the pristine nanoclay (Fig. 1).

Mechanical properties of extended NR nanocomposites
The mechanical properties of natural rubber nanocomposites containing 5 phr nanoclay prepared at different extension ratios (α) are presented in Figures 2-5. The tensile strength of natural rubber increases considerably (about 3 times) on adding nanoclay. Although the improvement of tensile strength in NR nanocomposites has been shown in other papers [22,23,26], a 3-fold improvement has rarely been reported. The outcomes of the tensile test are fully consistent with the rheometric findings.

Fig. 2 - Tensile strength of pure NR and extended NR nanocomposites at different stretching ratios.

As the extension ratio applied during the cure of the nanocomposites increases, the tensile strength reaches a maximum (about 33 MPa) at α = 2. This value is about 6 times higher than that of pure natural rubber. The remarkable improvement of tensile strength in these nanocomposites can be attributed to the orientation of the polymer chains and silicate nanolayers during the curing process. These chains and nanolayers can become further oriented during the tensile test, causing a remarkable increase in tensile strength. On this basis, a synergistic effect can be identified in these nanocomposites, arising from the reinforcing effect of the silicate nanolayers together with the orientation of chains and nanolayers during cure. The tensile strength diminishes as the extension ratio is raised to α = 3 and 4. The reduced properties at higher α can be related to extra free volume in the system, separation of chains from the surfaces and gallery spacing of the layers, and even breakage of crosslinks in the network.
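As a back-of-envelope check of the factors quoted for Fig. 2 (the pure-NR baseline below is inferred from the stated ratio, not reported directly in the paper):

```python
max_strength_mpa = 33.0   # extended nanocomposite at alpha = 2 (reported value)
factor_vs_pure_nr = 6.0   # "about 6 times higher than pure natural rubber" (reported)

# Implied tensile strength of the unfilled, conventionally cured NR
implied_pure_nr_mpa = max_strength_mpa / factor_vs_pure_nr
print(implied_pure_nr_mpa)  # 5.5
```

The ~5.5 MPa baseline this implies is consistent with the ~3-fold improvement attributed to nanoclay addition alone, with the remaining gain coming from the extension step.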
By adding nanoclay to natural rubber, the modulus, which reflects the resistance of the material to deformation, grows significantly (about 3 times) (Fig. 3). Increasing the extension ratio causes a steady, roughly twofold further increase in the modulus of the produced nanocomposites.

Fig. 3 - Young's modulus of pure NR and extended NR nanocomposites at different stretching ratios.

The elongation at break of the nanocomposite samples is lower than that of pure NR (Fig. 4). A significant rise in the elongation of the extended NR nanocomposites is observed as the extension ratio increases up to α = 2. At greater α values, a markedly reduced elongation at break is obtained. This remarkable reduction in the elasticity of the nanocomposites (at α = 3 and 4) is due to the detrimental effects of the pre-stretching process, which causes loss of chain entanglements, failure of crosslinks, and separation of the chains from the surfaces and gallery spacing of the layers. The same trend was observed in the tensile strength of the nanocomposite samples.

Fig. 4 - Elongation at break of pure NR and extended NR nanocomposites at different stretching ratios.

The hardness of NR is also significantly increased by adding nanoclay: with 5 phr nanoclay, the hardness rises from 43 to 48. Moreover, the hardness of the extended nanocomposites increases uniformly with increasing extension ratio, consistent with the other mechanical results (Fig. 5). Further orientation of the chains in the vicinity of the nanolayers is the reason for the increased hardness of the nanocomposites.

Fig. 5 - Hardness of pure NR and extended NR nanocomposites at different stretching ratios.

Swelling Behavior of NR nanocomposites
The degree of adhesion between the polymer chains and the filler particles can be evaluated from the equilibrium swelling of the composites in good solvents.
The extent of swelling at equilibrium is reduced when polymer chains are adsorbed on the particle surfaces [27]. The sorption curves of the nanoclay-filled nanocomposites were obtained by plotting Qt (the weight-swelling ratio) in toluene at room temperature against time. The swelling test results for pure NR, the ordinary nanocomposite and the extended nanocomposites with different extension ratios are illustrated in Fig. 6. Addition of nanoclay to NR leads to dramatic declines in both the swelling ratio and the swelling rate; nanoclay greatly increases the swelling resistance of the rubber against solvent penetration. This is due to the favorable interactions of the nanoparticles with the polymer chains, together with their high surface area, which hinder swelling and solvent penetration into the matrix. Extension of the rubber leads to significant changes in the swelling rate and the equilibrium swelling ratio of the prepared nanocomposites (Fig. 6).

Fig. 6 - Swelling behavior of pure NR and extended NR nanocomposites at different stretching ratios (the numbers in parentheses indicate the extension ratio).

With increasing extension ratio, the swelling ratio of the nanocomposites is reduced, showing its largest change at α = 2; at higher extension ratios the swelling ratio changes little. Note that the swelling ratio is a measure of the degree of crosslinking in a system. Although this behavior indicates that the true crosslink density is constant, applying extension to the samples causes the apparent crosslink density to increase.
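The quantities discussed here can be sketched numerically. Eq. (1) gives the swelling ratio directly; the crosslink density is conventionally estimated from equilibrium swelling via the Flory-Rehner equation, which the paper does not reproduce, so the expression below, the NR/toluene interaction parameter χ ≈ 0.39 and the sample masses are all illustrative assumptions rather than values from this work:

```python
import math

def swelling_ratio(dry_mass_g, swollen_mass_g):
    """Eq. (1): Q = m_s / m_d, with m_s the mass of solvent absorbed."""
    return (swollen_mass_g - dry_mass_g) / dry_mass_g

def crosslink_density(v_r, chi=0.39, solvent_molar_volume=106.3):
    """
    Flory-Rehner estimate of crosslink density (mol/cm^3).
    v_r  : volume fraction of rubber in the swollen gel
    chi  : polymer-solvent interaction parameter (NR/toluene, assumed value)
    106.3: molar volume of toluene in cm^3/mol
    """
    num = -(math.log(1.0 - v_r) + v_r + chi * v_r ** 2)
    den = solvent_molar_volume * (v_r ** (1.0 / 3.0) - v_r / 2.0)
    return num / den

# Hypothetical masses: 1.00 g dry, 4.20 g swollen after 72 h in toluene
print(round(swelling_ratio(1.00, 4.20), 2))               # 3.2
# A network that swells less (larger v_r) shows a higher apparent crosslink density:
print(crosslink_density(0.25) > crosslink_density(0.20))  # True
```

This matches the qualitative trend reported above: the lower equilibrium swelling of the extended samples corresponds to a higher apparent crosslink density.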
In other words, since the composition is the same in all the nanocomposites, this increased apparent crosslink density correlates with the enhanced interactions between the nanoparticles and the polymer chains (the large contact area of the nanoparticles with the matrix) and with the reduced free volume in the matrix, which results from the orientation and more compact arrangement of the chains. It can therefore be said that the effective crosslink density is higher for the extended nanocomposites. Using this approach, the solvent resistance of NR nanocomposites can be enhanced without requiring a higher nanoparticle content. The improved solvent resistance of these nanocomposites is due to the presence of the hard filler phase, which is impermeable to solvent molecules [27], and to the orientation of chains and nanolayers in the stretch direction, with the consequent reduction of free volume and development of interfacial area.

Dynamic-Mechanical Properties of NR Nanocomposites
Dynamic-mechanical analysis measures the response of a material under dynamic deformation; in other words, it measures the viscoelastic properties of materials as a function of temperature or frequency [28]. The changes in storage modulus with temperature for NR and its nanocomposites are presented in Figs. 7 and 8. With increasing nanofiller content, the storage modulus in the glassy region (below the glass transition temperature) increases. The curves show that 5 phr nanoclay dramatically increases the storage modulus in the glassy, glass transition and rubbery regions. The increase in modulus is due to the reduced chain mobility in the presence of the hard nanofiller. When the rubber chains intercalate into the gallery spacing, rubber is confined within that space, so the effective volume fraction of filler in the nanocomposite increases; this is one reason for the higher storage modulus of the intercalated nanocomposite.
When extension is applied to the nanocomposites and the extension ratio grows, the storage modulus in the glassy region improves. Beyond the critical stretch ratio, however, the storage modulus is reduced, as seen for the nanocomposite with extension ratio α = 4.

Fig. 7 - Storage modulus of pure NR and extended NR nanocomposites as a function of temperature.

Due to the hydrodynamic effects of the filler, the storage modulus of the nanocomposite in the glass transition region is higher than that of pure NR. However, the difference in modulus in this region is smaller than in the glassy region.

Fig. 8 - Storage modulus of pure NR and extended NR nanocomposites in the rubbery region.

Mechanical Loss Factor (Tan δ)
Important information can be obtained from plots of the mechanical loss factor against temperature (tan δ-T) in the glass transition region. These curves provide a further indication of the degree of bonding between the matrix and the nanofiller [29]. In terms of the filler effect, this parameter can be considered as the ratio of the fraction of the filler structure that breaks under dynamic strain to the fraction that remains unchanged [30]. A smaller peak at the glass transition temperature indicates better filler performance.

Fig. 9 - Mechanical loss factor of pure NR and of the extended and ordinary NR nanocomposites as a function of temperature.

The changes in the glass transition temperature and the loss factor (tan δ) of NR caused by adding nanoclay are illustrated in Fig. 9. The height of the tan δ peak declines after the addition of nanoclay.
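The loss factor plotted in these figures is simply the ratio of loss modulus to storage modulus; a minimal sketch with illustrative (not measured) moduli:

```python
def loss_factor(storage_modulus_mpa, loss_modulus_mpa):
    """Mechanical loss factor: tan(delta) = E'' / E'."""
    return loss_modulus_mpa / storage_modulus_mpa

# Illustrative glassy-region values only. A better-bonded filler network raises E'
# relative to E'', lowering the tan(delta) peak, as discussed for Fig. 9.
print(loss_factor(2000.0, 300.0))  # 0.15
```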
The reduction of tan δ is due to the decrease in the deformable polymer content under oscillatory strain, a result of part of the chains being trapped between the nanolayers, which reduces the energy-loss capability of the sample. Greater polymer-filler interaction improves the elasticity of the nanocomposite, which can lead to a lower tan δ peak height [31]. Throughout the temperature range, the nanocomposite curve lies below that of pure NR, meaning that the ordinary nanocomposite has a lower energy-loss capability than pure NR. The reduced chain mobility is caused by physical adsorption of chains on the nanofiller surface, which lowers the height of the transition-temperature peak. This decrease on addition of nanoclay has also been reported in many previous articles [32-34]. In terms of energy dissipation capability, the extended nanocomposites behave like pure NR in the glassy region, while in the glass transition region their behavior is intermediate between pure NR and the ordinary NR nanocomposite. In the glassy region, the energy dissipation capability of the ordinary nanocomposite is lower than that of the pure rubber because of the reduced segmental motion of the chains. Nanoclay can greatly reduce the energy dissipation capability of the chains throughout the temperature range, reflecting the limited mobility of chains trapped between the nanoclay layers, as well as the interfacial interactions.

Fig. 10 - Mechanical loss factor of pure NR and of the extended and ordinary NR nanocomposites in the glass transition region.

As shown in Fig. 10, on raising the temperature past the glass transition region, the extended nanocomposite with the optimum extension ratio acquires a greater energy dissipation ability than the other nanocomposites and pure NR. This can be regarded as a positive and remarkable aspect of using this method to produce nanoclay-reinforced nanocomposites.
In other words, the nanocomposite acquires a very good energy dissipation capability, indicating a strong physical network in the extended nanocomposite. After passing the glass transition temperature and reaching the rubbery region, the energy dissipation capability of the nanocomposites exceeds that of pure NR, owing to the failure of the physical network in the rubbery state. We therefore conclude that this approach can give natural rubber a higher energy dissipation capacity, especially above the glass transition temperature.

Field Emission Scanning Electron Microscopy
The dispersion of the filler in the matrix and the effect of nanoclay on the morphology were studied by field emission scanning electron microscopy (FE-SEM). FE-SEM images of the ordinary nanocomposite containing 5 phr nanoclay are shown in Fig. 11, illustrating the distribution of the nanoparticles at the micron scale. In the ordinary nanocomposite, the nanoparticles are not well dispersed in the matrix, and filler agglomeration can be seen. Such nanofiller dispersion is inadequate for achieving the desired mechanical properties.

Fig. 11 - FE-SEM micrographs of the ordinary NR nanocomposite at various magnifications.

Fig. 12 shows that extension of the samples during the curing process improves the dispersion of the nanoclay in the NR matrix: no nanoparticle agglomeration is observed, and an ideal morphology is created for the NR nanocomposite. This dispersion of the nanoclay in the NR matrix accounts for the more desirable mechanical properties of the extended nanocomposite compared with the ordinary nanocomposite.

Fig. 12 - FE-SEM micrographs of the extended NR nanocomposite at various magnifications.

V.
CONCLUSIONS
In this paper, two methods were used to prepare NR/nanoclay nanocomposites: the ordinary method (single-network NR nanocomposite) and the extended NR nanocomposite. The effect of these two methods on the morphology, rheometry and mechanical behavior of the natural rubber nanocomposites was evaluated. The results reveal that the nanocomposites reinforced through the extended structure have a more suitable morphology and better rheometric and mechanical behavior, as well as higher swelling resistance. It can be concluded that nanocomposites with high strength and good mechanical properties can be prepared by this method, whereas the nanocomposites prepared by ordinary melt intercalation have lower strength. Incorporation of 5 phr nanoclay in NR combined with the extension approach gives better dispersion and orientation of the clay nanolayers. In the curing study, shorter scorch and cure times and an increase in maximum torque were observed compared with pure NR. The study of the effect of different extension ratios on the dispersion of the nanoclay layers showed that the optimum extension ratio for the extended NR nanocomposites was 2.

VI. ACKNOWLEDGEMENT
The authors gratefully acknowledge the financial support provided by the Islamic Azad University, Baghmalek Branch (Islamic Republic of Iran) for carrying out this study.

VII. REFERENCES
[1] F. W. Barlow, Rubber Compounding: Principles, Materials, and Techniques, Marcel Dekker, New York, 1988.
[2] M. Lopez-Manchado, B. Herrero, M. Arroyo, Preparation and Characterization of Organoclay Nanocomposites Based on Natural Rubber, Polym. Int., 52, 1070-1077, 2003.
[3] Y. Sun, Y. Luo, D. Jia, Preparation and Properties of Natural Rubber Nanocomposites with Solid-State Organomodified Montmorillonite, J. Appl. Polym. Sci., 107, 2786-2792, 2008.
[4] S. Wang, Y. Zhang, Z. Peng, Y. Zhang, Morphology and Thermal Stability of BR/Clay Composites Prepared by a New Method, J. Appl. Polym. Sci., 99, 905-913, 2006.
[5] E. L. J. Denardin, D. Samios, P. R. Janissek, G. P. de Souza, Thermal Degradation of Aged Chloroprene Rubber Studied by Thermogravimetric Analysis, Rubber Chem. Technol., 74, 622-630, 2001.
[6] P. L. Teh, Z. A. Mohd Ishak, A. S. Hashim, J. Karger-Kocsis, U. S. Ishiaku, On the Potential of Organoclay with Respect to Conventional Fillers (Carbon Black, Silica) for Epoxidized Natural Rubber Compatibilized Natural Rubber Vulcanizates, J. Appl. Polym. Sci., 94, 2438-2445, 2004.
[7] S. Choudalakis, A. D. Gotsis, Permeability of Polymer/Clay Nanocomposites: A Review, Eur. Polym. J., 45, 967-984, 2009.
[8] H. Zhang, Y. Wang, Y. Wu, L. Zhang, J. Yang, Study on Flammability of Montmorillonite/Styrene-Butadiene Rubber (SBR) Nanocomposites, J. Appl. Polym. Sci., 97, 844-849, 2005.
[9] L. Liu, D. Jia, Y. Luo, B. Li, Structure and Flammability Properties of NR-Organoclay Nanocomposites, Polym. Compos., 30, 107-110, 2009.
[10] R. Rajasekar, K. Pal, G. Heinrich, A. Das, C. K. Das, Development of nitrile butadiene rubber-nanoclay composites with epoxidized natural rubber as compatibilizer, Materials and Design, 30, 3839-3845, 2009.
[11] B. T. Poh, P. G. Lee, S. C. Chuah, Adhesion property of epoxidized natural rubber (ENR)-based adhesives containing calcium carbonate, eXPRESS Polymer Letters, 2(6), 398-403, 2008.
[12] S. Varghese, J. Karger-Kocsis, Melt-Compounded Natural Rubber Nanocomposites with Pristine and Organophilic Layered Silicates of Natural and Synthetic Origin, J. Appl. Polym. Sci., 91, 813-819, 2004.
[13] K. N. Madhusoodanan, S. Varghese, Technological and Processing Properties of Natural Rubber Layered Silicate-Nanocomposites by Melt Intercalation Process, J. Appl. Polym. Sci., 102, 2537-2543, 2006.
[14] A. Jacob, P. Kurian, A. S. Aprem, Cure Characteristics and Mechanical Properties of Natural Rubber-Layered Clay Nanocomposites, International Journal of Polymeric Materials, 56, 593-604, 2007.
[15] F. Avalos, J. C. Ortiz, R. Zitzumbo, M. A. L. Manchado, R. Verdejo, M. Arroyo, Effect of montmorillonite intercalant structure on the cure parameters of natural rubber, European Polymer Journal, 44, 3108-3115, 2008.
[16] M. Arroyo, M. A. L. Manchado, B. Herrero, Organo-montmorillonite as substitute of carbon black in natural rubber compounds, Polymer, 44, 2447-2453, 2003.
[17] Y. Zhang, W. Liu, W. Han, W. Guo, C. Wu, Preparation and Properties of Novel Natural Rubber/Organo-Vermiculite Nanocomposites, Polym. Compos., 30, 38-42, 2009.
[18] M. A. L. Manchado, B. Herrero, M. Arroyo, Organoclay-natural rubber nanocomposites synthesized by mechanical and solution mixing methods, Polym. Int., 53, 1766-1772, 2004.
[19] L. Qu, G. Huang, Z. Liu, P. Zhang, G. Weng, Y. Nie, Remarkable reinforcement of natural rubber by deformation-induced crystallization in the presence of organophilic montmorillonite, Acta Materialia, 57, 5053-5060, 2009.
[20] S. Varghese, J. Karger-Kocsis, Melt-Compounded Natural Rubber Nanocomposites with Pristine and Organophilic Layered Silicates of Natural and Synthetic Origin, J. Appl. Polym. Sci., 91, 813-819, 2004.
[21] K. N. Madhusoodanan, S. Varghese, Technological and Processing Properties of Natural Rubber Layered Silicate-Nanocomposites by Melt Intercalation Process, J. Appl. Polym. Sci., 102, 2537-2543, 2006.
[22] A. S. Aprem, K. Joseph, S. Thomas, Studies on Double Networks in Natural Rubber Vulcanizates, J. Appl. Polym. Sci., 91, 1068-1076, 2004.
[23] S. Kaang, C. Nah, Fatigue crack growth of double-networked natural rubber, Polymer, 39(11), 2209-2214, 1998.
[24] J. Shah, Q. Yuan, R. D. K. Misra, Synthesis, Structure and Properties of a Novel Hybrid Bimodal Network Elastomer with Inorganic Cross-Links: The Case of Silicone-Nanocrystalline Titania, Materials Science and Engineering A, 523, 199-206, 2009.
[25] L. H. Sperling, Introduction to Physical Polymer Science, 4th ed., Wiley, New York, 472-473, 2006.
[26] S. Kaang, D. Gong, C. Nah, Some Physical Characteristics of Double-Networked Natural Rubber, J. Appl. Polym. Sci., 65, 917-924, 1997.
[27] H. Sadeghi Ghari, Z. Shakouri, Natural rubber hybrid nanocomposites reinforced with swelled organoclay and nano-calcium carbonate, Rubber Chemistry and Technology, 85(1), 2012.
[28] S. Pavlidou, C. D. Papaspyrides, A review on polymer-layered silicate nanocomposites, Progress in Polymer Science, 33, 1119-1198, 2008.
[29] S. Varghese, J. Karger-Kocsis, K. G. Gatos, Melt compounded epoxidized natural rubber/layered silicate nanocomposites: structure-properties relationships, Polymer, 44, 3977-3983, 2003.
[30] Y. T. Vu, J. E. Mark, L. Pham, M. Engelhardt, Clay Nanolayer Reinforcement of cis-1,4-Polyisoprene and Epoxidized Natural Rubber, J. Appl. Polym. Sci., 82, 1391-1403, 2001.
[31] S. Praveen, P. K. Chattopadhyay, S. Jayendran, B. C. Chakraborty, S. Chattopadhyay, Effect of Rubber Matrix Type on the Morphology and Reinforcement Effects in Carbon Black-Nanoclay Hybrid Composites: A Comparative Assessment, Polym. Compos., 31, 97-104, 2010.
[32] S. Pradhan, F. R. Costa, U. Wagenknecht, D. Jehnichen, A. K. Bhowmick, G. Heinrich, Elastomer/LDH nanocomposites: Synthesis and studies on nanoparticle dispersion, mechanical properties and interfacial adhesion, European Polymer Journal, 44, 3122-3132, 2008.
[33] R. Rajasekar, K. Pal, G. Heinrich, A. Das, C. K. Das, Development of nitrile butadiene rubber-nanoclay composites with epoxidized natural rubber as compatibilizer, Materials and Design, 30, 3839-3845, 2009.
[34] P. Li, L. Wang, G. Song, L. Yin, F. Qi, L. Sun, Characterization of High-performance Exfoliated Natural Rubber/Organoclay Nanocomposites, J. Appl. Polym. Sci., 109, 3831-3838, 2008.
American Journal of Engineering Research (AJER), 2013, e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-150-156, www.ajer.org. Research Paper. Open Access.

God Created Human?... (A New Theory on God and Creation)

M. Arulmani, B.E. (Engineer); V.R. Hema Latha, M.A., M.Sc., M.Phil. (Biologist)

Abstract: - The philosophy of GOD and HUMAN shall be considered as closely associated and related to the "DARK FLAME" and the "WHITE FLAME". The White Flame shall be called the "IMAGE" of the Dark Flame, and the two flames can never be separated. All matter (both organic and inorganic) in the material universe shall be considered as emanating from billions of White Flame rays and as continuing to exist under the radiation effect of the White Flame. It is further focused that every organic and inorganic matter of the universe, from planets to bacteria and microbes, shall be considered as having an individual identity, just like the "SIM CARD" of a mobile phone system. "Billions of rays emanating from the White Flame shall mean to represent the billions of matters existing in the universe, each with a distinguished SIM number recorded in the Master Memory of God" - Author

Key Words: - 1) Model Universe, 2) Creation of matter, 3) Three species of human, 4) Philosophy of Evolution, 5) End of Universe.

I. INTRODUCTION
The philosophy of GOD shall be considered as a pre-existing huge DARK TREE, full of dark matter and dark energy, which produces billions of organic and inorganic "Plants"; these plants shall be considered "naturally created plants", deriving their initial genetic value from the "STEM" of the Dark Tree. The stem shall be considered as composed of a "3G Tablet" which provides the required nutrients to the Dark Tree. All the naturally created plants shall be considered as undergoing three major genetic changes, producing millions of species from the natural plants.
The 3G tablet contains billions of genetic information in the form of scientific logics required for the evolution of "three generation species" in three geological periods.

II. PREVIOUS PUBLICATION
The philosophy of the origin of first life and human, the philosophy of the model Cosmo Universe, and the philosophy of fundamental neutrino particles have already been published in the various international journals mentioned below. Hence this article shall be considered an extended version of the previous articles already published by the same author. [1] Cosmo Super Star – IJSRP, April issue, 2013 [2] Super Scientist of Climate control – IJSER, May issue, 2013 [3] AKKIE MARS CODE – IJSER, June issue, 2013 [4] KARITHIRI (Dark flame) The Centromere of Cosmo Universe – IJIRD, May issue, 2013 [5] MA-AYYAN of MARS – IJIRD, June issue, 2013 [6] MARS TRIBE – IJSER, June issue, 2013 [7] MARS MATHEMATICS – IJERD, June issue, 2013 [8] MARS (EZHEM) The mother of All Planets – IJSER, June issue, 2013 [9] The Mystery of Crop Circle – IJOART, May issue, 2013 [10] Origin of First Language – IJIRD, June issue, 2013 [11] MARS TRISOMY HUMAN – IJOART, June issue, 2013 [12] MARS ANGEL – IJSTR, June issue, 2013 [13] Three principles of Akkie Management (AJIBM, August issue, 2013) [14] Prehistoric Triphthong Alphabet (IJIRD, July issue, 2013) [15] Prehistoric Akkie Music (IJST, July issue, 2013) [16] Barack Obama is Tamil Based Indian? (IJSER, August issue, 2013) [17] Philosophy of MARS Radiation (IJSER, August 2013) [18] Etymology of word "J" (IJSER, September 2013) [19] NOAH is Dravidian? (IJOART, August 2013) [20] Philosophy of Dark Cell (Soul)? (IJSER, September 2013) [21] Darwin Sir is Wrong?! (IJSER, October issue, 2013) [22] Prehistoric Pyramids are RF Antenna?!... (IJSER, October issue, 2013) [23] HUMAN IS A ROAM FREE CELL PHONE?!... (IJIRD, September issue, 2013) [24] NEUTRINOS EXIST IN EARTH ATMOSPHERE?!...
(IJERD, October issue, 2013) [25] EARLY UNIVERSE WAS HIGHLY FROZEN?!... (IJOART, October issue, 2013) [26] UNIVERSE IS LIKE SPACE SHIP?!... (AJER, October issue, 2013) [27] ANCIENT EGYPT IS DRAVIDA NAD?!... (IJSER, November issue, 2013) [28] ROSETTA STONE IS PREHISTORIC "THAMEE STONE"?!... (IJSER, November issue, 2013) [29] The Supernatural "CNO" HUMAN?... (IJOART, December issue, 2013) [30] 3G HUMAN ANCESTOR?... (AJER, December issue, 2013) [31] 3G Evolution?... (IJIRD, December issue, 2013)

III. HYPOTHESIS
1) "God" shall be considered as a closed container with a "TRIPOD"-like structure, emanating dark radiation particles from the "DARK FLAME". The Dark Flame shall be considered to be in a "highly frozen state", having as a thermodynamic property an infinite value of entropy, while the enthalpy value is considered always constant during the "expanding universe". GOD shall also be considered as having a defined structure, also called the "TRIPOD UNIVERSE", resting on the three-in-one base of SUN, EARTH, MOON, which supports the entire universe.
Region I – Perfect vacuum region (Anti-Neutrino radiation)
Region II – Partial vacuum region (Neutrino radiation)
Region III – Observable vacuum region (EMR radiation)
"The Dark Flame shall be considered as the source of the infinite internal energy of God, having constant thermodynamic property and strong upward gravitational force" - Author
2) The philosophy of GOD shall be defined within the following scope.
a) GOD is considered an invisible dark-coloured Super Human.
b) GOD is considered as having a "Dark eye Iris".
c) GOD is considered as having a Heart emanating absolutely "White Radiation". The White Flame shall be considered the heart of God.
e) The White radiations are considered to have a creation effect through which all the matters in the material universe (Region III) are "created". The material universe shall also be called the "Einstein Region".
f) The philosophy of so-called electromagnetic radiation, light and lightning might be derived from the philosophy of the white radiation emanating from the "White Flame".
g) The philosophy of dark matter and dark energy in quantum physics might be derived from the philosophy of the Dark Rays emanating from the "Dark Flame".
3) The human ancestor and other matters shall be considered as created initially by GOD, within a predefined period, through white radiation. The philosophy of the creation of all matters, including human, shall be defined within the following scope.
a) Every created matter has its own definite identity, defined by "each ray" emanating from the white radiation.
b) Every initially created matter shall be considered as having undergone three major genetic variations in three geological periods. The three genetically varied matters shall also be called "3 Generation species matters", having acquired three distinguished fundamental colours, DARK BLUE, DARK GREEN, DARK RED, in the three generations.
c) The human ancestor shall also be considered a created matter, formed through one among the billions of rays emanating from the White radiation and undergoing three major genetic variations in three geological periods.
d) The three genetically varied human populations in three geological periods shall also be considered as three species of the originally created human.
e) The three distinguished "human species" shall be identified through millions of colour variations in the "Human Eye Iris", under the three fundamental colour irises generated in three geological periods:
i) Dark Iris – human creation origin
ii) Dark Blue Iris – 1st generation species
iii) Dark Green Iris – 2nd generation species
iv) Dark Red Iris – 3rd generation species
f) The three major human species shall be considered as having distinguished genetic values in skin colour, hair colour, hair structure, nose structure and other physical structure in the three generations.
g) The philosophy of the stage at which the human ancestor was created, and the evolution of the subsequent three human species, shall be narrated as below.

IV. GOD HAS THERMODYNAMIC VALUE
It is focused that God shall be considered the source of the internal energy of the universe, comprising dark energy, dark matter and dark law. The philosophy of the thermodynamic properties entropy and enthalpy might be derived from the philosophy of the dark matter and dark energy of God. The dark energy shall be considered responsible for the structural part of the Universe, and the dark matter shall be considered the functional part of the Universe.

God Has Heart Beat
It is focused that God has a defined, systematic, sustained heart beat which is responsible for the evolution process of the material universe. The fundamental neutrino particles of the Universe, PHOTON, ELECTRON, PROTON, shall be considered as evolved from the white flame radiation of God. "The fundamental neutrinos emanated from the heart of God shall be considered as God particles" - Author

V. GOD HAS BLOOD?...
It is speculated that the Heart of God is fully influenced with absolutely white radiation, emanating the fundamental neutrino particles of the universe, PHOTON, ELECTRON, PROTON. In other words, the Blood of God is considered to be in a highly vaporous state. The white radiation emanated from the neutrino particles shall also be called "J-RADIATION" or the "MORNING STAR". The blood shall alternatively be called "Neutrino fluid", naturally secreted due to the impact of the fundamental neutrino particles Photon, Electron, Proton.

VI. GOD HAS GENDER IDENTITY?...
It is speculated that GOD shall be considered a "Superhuman" having only three chromosomes, derived from the inbuilt 3G TABLET. The three chromosomes shall also be called the "J-Chromosome".
The J Chromosome shall mean one composed of the three fundamental neutrinos of the universe, Photon, Electron, Proton, having genetic value and the creation effect of matters.
a) Right dot (Proton) - Male gender
b) Left dot (Electron) - Female gender
c) Centre dot (Photon) - Dual gender
It is focused that the occurrence of "TRISOMY SYNDROME" in medical science might be due to the impact of the "genetic reflection" of the J chromosome of GOD, the creator.

VII. GOD CAN FLY?...
It is speculated that GOD can be considered capable of "FLYING". It is focused that the J-Radiation (White Flame), comprised of the fundamental neutrino particles Photon, Electron, Proton having "ZERO MASS", enables GOD to fly and travel faster than the speed of "LIGHT". It is focused that the HUMAN ANCESTOR shall be considered as created as the Image of GOD from his heart. It is speculated that the ANGELS, ADAM and EVE populations shall be considered 1st generation Humans who lived in RAMNAD of INDIA. That the bodies of the two sons of ADAM (Abel, Cain) were buried in "RAMESWARAM" is taken as evidence that ADAM lived in TAMIL NAD. Further, it is stated that as ADAM was capable of flying, they might have lived on the MARS PLANET and could have constructed the "ADAM BRIDGE" at RAMESWARAM and the "GREAT PYRAMIDS" at EGYPT without much difficulty at a later period, for "Astronomical reasons". Further, in prehistoric time, ADAM might also have been called "MGR"; MGR shall acronymically mean MARS GEO RULER. Hence ADAM shall also be called "ADAM alias RAM".

Can we see GOD?...
It is focused that nobody can see GOD, but the image of God can be seen.

Universe Still Expands?...
It is focused that the material universe shall be considered as consistently evolved.
Evolution shall mean the acquiring of additional mass by the fundamental neutrino particles PHOTON, ELECTRON, PROTON due to consistent misalignment in the relative positions of SUN, EARTH, MOON.

VIII. CONCLUSION
Global-level research is going on for the settlement of the future-generation population on the MARS PLANET. If God himself is destroyed, then where is the question of the existence of the MARS planet and the rest of the material universe?...
"A peaceful future generation shall only be possible if God is saved by his own children" - Author
American Journal of Engineering Research (AJER), 2013, e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-296-302, www.ajer.org. Research Paper. Open Access.

The Forecasting of Potential Evapotranspiration Using Time Series Analysis in Humid and Semi-Humid Regions

Arash Asadi (Islamic Azad University, Dehdasht Branch, Iran), Seyed Farnood Vahdat (Ph.D. in hydrology and water resources engineering, Tehran, Iran), Amirpooya Sarraf (Assistant Professor, Department of Soil Sciences, Roudehen Branch, Islamic Azad University, Roudehen, Iran)

Abstract: - Stochastic models have been proposed as one technique to generate scenarios of future climate change. The goal of this study is the simulation and modeling of monthly Potential Evapotranspiration (PET) using stochastic methods. In this research, the Thornthwaite method was used to calculate PET. Twenty-eight years of PET data from the Yasuj Synoptic Station in southwest Iran were used, and based on the ARIMA framework, the autocorrelation and partial autocorrelation functions, and the assessment of parameters and model types, suitable models to forecast monthly PET were obtained. After model validation and evaluation, forecasts were made for the crop years 2012-13 and 2013-14. The findings of the forecast show an increase in PET along with a narrowing of the range of variations.

Keywords: - Potential Evapotranspiration (PET), Time Series Analysis, ARIMA

I. INTRODUCTION
The study of meteorological parameters is highly important in hydrology, since these parameters largely form the climate of a region, and variations in water, wind, rain, etc. give rise to problems such as drought. Therefore, accuracy in the collection of such data is of particular importance.
Studying the long-term statistical behavior and fluctuations of climatic parameters, and analyzing the behavior of a phenomenon in the past, makes it possible to assess its probable future trend. Therefore, one can study climatic variations by forecasting and estimating parameters such as precipitation and temperature and studying their behavior in the past. For modeling and forecasting, stochastic and time series methods can be used. Statistical methods serve two objectives: 1- understanding of random processes, 2- forecasting of series (Anderson, 1971). Time series analysis has developed rapidly in theory and practice since the 1970s for forecasting and control. This type of analysis generally concerns data which are not independent but are serially dependent on one another. In one study, the mean monthly temperature of Tabriz Station in Iran was investigated based on the Box & Jenkins ARIMA (Autoregressive Integrated Moving Average) model. The monthly temperature of Tabriz over a 40-year statistical period (1959-98) was examined based on the autocorrelation and partial autocorrelation methods, together with checks of the normality of the residuals, using the above-mentioned models. Based on the models obtained, the variations of the mean temperature of Tabriz Station were forecasted up to the year 2010 (Jahanbakhsh and Babapour Basseri, 2003). Another study analyzed the climate of Birjand Synoptic Station in Iran to recognize climatic fluctuations, especially drought and wetness, and to provide a suitable model to forecast those fluctuations, selecting the best model using statistical methods and Box-Jenkins time series models of precipitation and temperature.
Among the motivations for this study is the need for climatic forecasts for use in state planning concerning natural disasters; thus, the precipitation and temperature of Birjand Station were studied to identify climatic fluctuations and, where possible, forecast them (Bani Waheb and Alijani, 2005). Bouhaddou et al. (1997) used the AutoRegressive Moving Average (ARMA) model for the simulation of weather parameters such as ambient temperature, humidity and clearness index. Frausto et al. (2003) showed that autoregressive (AR) and ARMA models could be used to describe the inside air temperature of an unheated greenhouse. Kurunc et al. (2005) applied the ARIMA approach to water quality constituents and streamflows of the Yesilirmak River in Turkey. Yurekli and Kurunc (2006) predicted drought periods based on the water consumption of selected critical crops using the ARIMA approach. Yurekli et al. (2005) used the ARIMA model to simulate the monthly stream flow of Kelkit Stream in Turkey. Yurekli and Ozturk (2003) examined whether the daily extreme stream flow sequences of Kelkit Stream could be generated by stochastic models. In another study, drought in Fars Province in Iran was modeled using the Box-Jenkins method and the ARIMA model, and a model to forecast drought in each region was obtained after zoning the different regions (Shamsnia et al., 2009). Shahidi et al. (2010) used ITSM software for modeling and forecasting groundwater level fluctuations of the Shiraz Plain in Iran. An autoregressive model of order 24 was fitted to the series with AIC = 165.117; the coefficients of the fitted model were finalized by the residual tests.
In another study, the monthly maxima of the 24-h average time-series data of ambient air quality - sulphur dioxide (SO2), nitrogen dioxide (NO2) and suspended particulate matter (SPM) concentrations monitored at the six National Ambient Air Quality Monitoring (NAAQM) stations in Delhi - were analysed using the Box-Jenkins modelling approach. The model evaluation statistics suggested that considerably satisfactory real-time forecasts of pollution concentrations can be generated with this approach, and the developed models can provide short-term, real-time forecasts of extreme air pollution concentrations for the Air Quality Control Region (AQCR) of Delhi City, India (Sharma et al., 2009). Therefore, considering the importance of climatic parameters and the role they play in determining other climatic elements, their modeling and forecasting using advanced statistical methods is a necessity and can be a basic pillar of agricultural and water resource management. In this research, the Thornthwaite method was used to calculate PET. The goal of the present study is the simulation of monthly Potential Evapotranspiration (PET) and the provision of a forecasting model for it, using the statistical models of time series analysis, at the Synoptic Station of Yasuj City.

II. RESEARCH METHODOLOGY
In this study, the monthly data on precipitation and mean temperature of the Yasuj Synoptic Station were used for the calculation of Potential Evapotranspiration (PET), and the required information was collected from the available tables and databases. Yasuj City, located in Kohgilouyeh and Boyer-Ahmad Province in the southwest of Iran, lies at 51°35′ E longitude and 30°42′ N latitude, with an area of 26416 square kilometers. The mean annual precipitation is 860 mm and the mean annual temperature of the study area is about 15 °C (I.R. of Iran Meteorological Org.). The geographical location of the study region is shown in Figure 1.
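The Thornthwaite calculation used above can be sketched as follows. This is a minimal illustration, not the paper's code: the monthly temperatures, day lengths and month lengths below are hypothetical placeholders chosen so the annual mean is near the reported 15 °C, whereas the paper uses the observed Yasuj station records and the standard Thornthwaite correction tables.

```python
# Sketch of the Thornthwaite (1948) method for monthly PET (mm/month).
# Station values below are illustrative assumptions, not the paper's data.

def heat_index(monthly_mean_temps):
    """Annual heat index I from the 12 monthly mean temperatures (deg C)."""
    return sum((t / 5.0) ** 1.514 for t in monthly_mean_temps if t > 0)

def thornthwaite_pet(t_mean, heat_idx, day_length_hours=12.0, days_in_month=30):
    """Unadjusted Thornthwaite PET for one month, scaled by the standard
    day-length / month-length correction factor."""
    if t_mean <= 0:
        return 0.0
    a = (6.75e-7 * heat_idx**3) - (7.71e-5 * heat_idx**2) \
        + (1.792e-2 * heat_idx) + 0.49239
    pet_unadjusted = 16.0 * (10.0 * t_mean / heat_idx) ** a
    return pet_unadjusted * (day_length_hours / 12.0) * (days_in_month / 30.0)

# Hypothetical monthly mean temperatures (deg C), annual mean near 15 deg C:
temps = [3, 5, 9, 13, 18, 24, 28, 27, 23, 16, 9, 4]
heat_idx = heat_index(temps)
pet_july = thornthwaite_pet(temps[6], heat_idx,
                            day_length_hours=14.0, days_in_month=31)
```

Applying this month by month to the station's temperature record yields the monthly PET series that the time series models below are fitted to.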
The statistical period under study covers the crop years 1983-84 through 2011-12. Initially, the homogeneity of the data was confirmed using the run test; a homogeneity test should be performed before statistical analysis to ensure the data are suitable for stochastic modeling. The homogeneity test was carried out using SPSS software. Then, based on the results obtained and a study of the sequence of observations and the past behavior of the phenomenon, an appropriate forecasting model was devised using time series analysis and stochastic methods. After preparing the time series of Potential Evapotranspiration observations, the series was made stationary for modeling. For fitting an ARIMA model to the time series, the approach consists of three phases: model identification, parameter estimation and diagnostic checking (Yurekli and Ozturk, 2003). The identification stage determines the differencing required to produce stationarity, and also the orders of the AR and MA operators for a given series. Stationarity is a necessary condition in building an ARIMA model that is useful for forecasting. A stationary time series has the property that its statistical characteristics, such as the mean and the autocorrelation structure, are constant over time. When the observed time series presents trend and heteroscedasticity, differencing and power transformation are often applied to the data to remove the trend and stabilize the variance before an ARIMA model can be fitted. The estimation stage consists of using the data to estimate, and make inferences about, the values of the parameters, conditional on the tentatively identified model. The parameters are estimated such that an overall measure of the residuals is minimized; this can be done with a nonlinear optimization procedure. The diagnostic checking of model adequacy is the last stage of model building. This stage determines whether the residuals are independent, homoscedastic and normally distributed.
Several diagnostic statistics and plots of the residuals can be used to examine the goodness of fit; if the fit is inadequate, a new tentative model should be identified, followed again by the stages of parameter estimation and model verification. Diagnostic information may help to suggest alternative model(s). This three-step model-building process is typically repeated several times until a satisfactory model is finally selected; the final selected model can then be used for prediction. By plotting the original series, trends in the mean and variance may be revealed (Box and Jenkins, 1976). The ARIMA model is essentially an approach to forecasting time series data; however, it requires stationary time series data (Dickey and Fuller, 1981).

III. THE MODELING PROCEDURES
Modeling using time series analysis can be done by several methods, one of which is the ARIMA or Box-Jenkins method, also called the (p,d,q) model (Box and Jenkins, 1976). In the (p,d,q) model, p denotes the number of autoregressive terms, q denotes the number of moving average terms, and d is the order of differencing, representing the number of differencing operations required to bring the series to a kind of statistical equilibrium. In an ARIMA model, (p,d,q) is called the non-seasonal part of the model; p denotes the order of the connection of the time series with its own past, and q denotes the connection of the series with the factors effective in its construction. The mathematical formulation of ARIMA models is shown by equation (1). The analysis of a time series proceeds in several stages. At the first stage, the primary values of p, d and q are determined using the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF). A careful study of the autocorrelation and partial autocorrelation diagrams and their elements provides a general view of the time series, its trend and its characteristics.
This general view is usually a basis for the selection of a suitable model; the diagrams are also used to confirm the degree of fitness and the accuracy of the model selection. At the second stage, it is examined whether the p and q terms (representing the autoregressive and moving average parts, respectively) should remain in the model or be removed from it. At the third stage, it is evaluated whether the residual (residual error) values are random with a normal distribution; only then can one say the model fits well and is appropriate. If the time series is of seasonal type, the modeling has a two-dimensional character: in principle, part of the time series variation belongs to variations within each season and another part belongs to variations between different seasons. A special type of seasonal model that gives reliable results in practice and coincides with the general structure of ARIMA models was devised by Box and Jenkins (1976); it is called the multiplicative seasonal model and takes the form ARIMA (p,d,q)(P,D,Q). Then, for the model to be suitable, diagnostic schemes must be used to test and compare the candidate models, so that the best model is chosen for forecasting.

Y(t) = Y(t-1) ± Y(t-2) ± Y(t-3) ± ... ± Y(t-n) ± e(t)   (1)

IV. MODEL SELECTION CRITERIA
Several appropriate models may be available when selecting a model for time series analysis, or data analysis generally, for a given set of data. Sometimes the selection is easy, whereas at other times it may be much more difficult. Therefore, numerous criteria, distinct from the methods used for model identification, have been introduced to compare models. Some of these criteria are based on statistics summarized from the residuals (computed from a fitted model) and others are based on the forecasting error (computed from forecasts outside the sample).
For the first group, one can point to the AIC (Akaike Information Criterion), the BIC (Bayesian Information Criterion) and the SBC (Schwartz-Bayesian Criterion); for the criteria based on the forecasting error, one can point to the Mean Percent Error (MPE), the Root Mean Square Error (RMSE), the Mean Absolute Error (MAE), and the Mean Absolute Percent Error (MAPE). The model for which these statistics are lowest is selected as the appropriate model. Akaike (1974) suggested a mathematical formulation of the parsimony criterion of model building, the Akaike Information Criterion (AIC), for the purpose of selecting an optimal model fit to a given data set. The AIC is defined as:

AIC = n · ln(σ²) + 2M   (2)

where M is the number of AR and MA parameters to estimate, σ² is the residual variance, and n is the number of observations. The model that gives the minimum AIC is selected as the parsimonious model. Akaike (1974) showed that the AIC criterion tends to overestimate the order of the autoregression, but Akaike (1978; 1979) developed a Bayesian extension of the minimum AIC procedure, called the BIC. Another index for model evaluation is the efficiency factor. The model efficiency (EF) indicates the robustness of the model (Raes et al., 2006). EF ranges from −∞ to 1, with higher values indicating better agreement; if EF is negative, the model prediction is worse than the mean of the observations:

EF = [ Σ_{i=1}^{n} (O_i − Ō)² − Σ_{i=1}^{n} (P_i − O_i)² ] / Σ_{i=1}^{n} (O_i − Ō)²   (3)

where O_i and P_i are respectively the observed and predicted (simulated) values for each of the n study cases, and Ō is the mean observed value. In the present study, the ARIMA model, the ITSM software, and the AIC, RMSE and EF criteria were used for modeling and forecasting. The ITSM software determines the best model with minimum AIC and BIC; the best model was then validated using the model efficiency.
The time series of monthly Potential Evapotranspiration at Yasuj Station is shown in Figure 2. The trend and seasonal components recognized in the ACF/PACF diagrams (Figure 3) show peaks at lags 12 and 24. These deterministic components were removed by the difference operator. Residual testing was used for validation: the ACF/PACF values of the residuals lie entirely within the 95% confidence interval (Figure 4). The RACFs drawn for the best models indicated that the residuals were not significantly different from a white noise series at the 5% significance level. Inspection of the RACFs and the integrated periodogram of the residuals confirmed a strong model fit.

V. DISCUSSION
5.1 Modeling of monthly Potential Evapotranspiration (PET)
Using the ACF and PACF methods, the autoregressive and moving average orders were assessed, and eventually an appropriate model for the estimation of Potential Evapotranspiration at Yasuj Station was found to be ARIMA (2,0,0)(0,1,1)12. To prevent over-fitting, the AIC and EF criteria were used. In the comparison between candidate models, the final model with the best fit to the data, judged by the lowest AIC and the highest EF value, was obtained using the method of maximum likelihood and the ITSM software. The evaluation criteria are shown in Table 1. Figure 5 shows the correlation between observed and predicted data from the ARIMA models for the crop years 2009-10 through 2011-12. Because of this strong correlation, the selected model is suitable for simulating monthly Potential Evapotranspiration. Based on this model, predicted data for the crop years 2012-13 and 2013-14 are shown in Figure 6.

VI. CONCLUSION
Recent droughts in Kohgilouyeh and Boyer-Ahmad Province, with Yasuj as its center, have caused much damage. To prevent such damage, knowledge of the fluctuations during the statistical period, and forecasts of them for planning, are necessary.
The study of the climatic parameter of monthly Potential Evapotranspiration, and evaluation of its diagram, showed that the variations of Potential Evapotranspiration in the Yasuj region denote the existence of severe and, in some instances, long-term droughts. The Box-Jenkins model was used to forecast the studied parameter, and the final model was tested using the AIC and EF criteria; the results showed that, given its high accuracy, it can be used to forecast the monthly variations in Potential Evapotranspiration in the city of Yasuj. For model validation, the EF value calculated for Potential Evapotranspiration was about 0.9, and the R² obtained for the climate variables was 0.99. Consequently, the models can be used for forecasting of the studied variables. The increasing trend of the mean monthly Potential Evapotranspiration, especially in recent years, has continued, and the findings of the forecast show an increase in Potential Evapotranspiration along with a narrowing of the range of variations.

VII. REFERENCES
[1] Akaike, H. (1974). A look at the statistical model identification. IEEE Transactions on Automatic Control. 19(6):716-23.
[2] Akaike, H. (1978). A Bayesian analysis of the minimum AIC procedure. Annals of the Institute of Statistical Mathematics. 30(A):9-14.
[3] Akaike, H. (1979). A Bayesian extension of the minimum AIC procedure of autoregressive model fitting. Biometrika. 66:237-42.
[4] Anderson, T.W. (1971). The Statistical Analysis of Time Series. John Wiley & Sons, New York.
[5] Bani Waheb, E. and Alijani, B. (2005). On the drought and wetness years and forecasting of climatic variations of Birjand region using statistical models. Journal of Geographic Researches. 37(52):33-46. [In Persian].
[6] Bouhaddou, H., Hassani, M.M., Zeroual, A. and Wilkonson, A.J. (1997). Stochastic simulation of weather data using higher statistics. Renewable Energy. 12(1):21-37.
[7] Box, G.E.P. and Jenkins, G.M. (1976).
Fig 1. Regional map of Iran, location of the study area and the synoptic station
Figure 2. Time series of monthly PET at Yasuj Station
Figure 3. ACF/PACF of monthly PET at Yasuj Station
Figure 4. ACF/PACF of residuals for monthly PET
Figure 5. Correlation between observed and predicted data from ARIMA models in crop years 2009-10 through 2011-12
Figure 6. Predicted data of the mean monthly PET for 2012-13 and 2013-14

Table 1. The ARIMA models selected for the PET variable

ARIMA Model           AIC       RMSE      EF
(2,0,0)(0,1,1)12      4.18283   8.02173   0.91
(1,0,1)(0,1,1)12      4.18443   8.02816   0.85
(1,0,0)(0,1,1)12      4.18482   8.05446   0.83
(2,0,2)(0,1,1)12      4.18508   7.98135   0.76
(0,0,2)(0,1,1)12      4.18863   8.04502   0.72
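The model-selection procedure summarised in Table 1 can be reproduced in a short sketch. This is an illustrative Python implementation of the RMSE and Nash-Sutcliffe efficiency (EF) measures and of the lowest-AIC selection rule; the candidate AIC values are copied from Table 1, while the series in the assertions are hypothetical, not the paper's data:

```python
import math

def rmse(obs, pred):
    """Root-mean-square error between observed and predicted series."""
    n = len(obs)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)

def efficiency(obs, pred):
    """Nash-Sutcliffe model efficiency EF: 1.0 for a perfect fit, 0.0
    when the model does no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Candidate models and their AIC values, copied from Table 1; the
# selection rule simply keeps the model with the lowest AIC.
candidates = [
    ("(2,0,0)(0,1,1)12", 4.18283),
    ("(1,0,1)(0,1,1)12", 4.18443),
    ("(1,0,0)(0,1,1)12", 4.18482),
    ("(2,0,2)(0,1,1)12", 4.18508),
    ("(0,0,2)(0,1,1)12", 4.18863),
]
best_model = min(candidates, key=lambda m: m[1])
```

Under this rule the (2,0,0)(0,1,1)12 model is selected, matching Table 1; note that AIC is minimised while EF, a goodness-of-fit measure, is maximised.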
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-409-413 www.ajer.org Research Paper Open Access

Energy Efficient Smart Home Based on Wireless Sensor Network Using LabVIEW
1Jayashri Bangali, 2Arvind Shaligram
1 Kaveri College of Science and Commerce
2 Department of Electronic Science

Abstract: - A smart home is a house that uses technology to monitor the environment with the help of various sensors, control the electrical appliances, and communicate with the outside world. Nowadays the demand for home automation systems in homes and offices is steadily increasing. The home automation system is a key to energy conservation that can be fitted in ordinary buildings. As wireless technology has many benefits over wired, most home automation systems are based on WSN technology. In this paper we present the design and implementation of a smart home based on LabVIEW using a wireless sensor network system. The system can monitor the temperature, light, and fire & burglar alarms of the house, and has an infrared sensor to guarantee the family's security. The monitored data is automatically stored in an Excel file, and the system can be connected to the internet to monitor the security of the home from anywhere in the world.

Keywords: - WSN (Wireless Sensor Network), LabVIEW, Home automation system

I. INTRODUCTION
A smart home is a space or a room which is provided with the ability to adapt by itself to certain situations to make the occupants feel comfortable [1]. Today, the term 'smart home' is no longer alien to anybody as it was a few years ago. Smart homes are also referred to as Intelligent Homes or Automated Homes. However, the term simply indicates the automation of daily chores with reference to the equipment in the house.
Smart homes may offer simple remote control of lights or more complex functionalities such as remote viewing of the house interior for surveillance purposes. With the recent expansion of communication networks, smart home applications can be further enhanced with capabilities that were not available before. In particular, wireless access technologies will soon enable exotic and economically feasible applications. To this end, in this paper we present the design and implementation of a smart home which aims to define a framework for remote monitoring and control of smart home devices via the internet. The design is based on the wireless sensor network system of National Instruments, and the programming is done in LabVIEW. For the sensing part, an occupancy (PIR, passive infrared) sensor, infrared (IR) sensors, photosensors and temperature sensors are used, and for the controlling part relays are used. We present the design of the system and all aspects of its implementation; the design of the developed smart home is shown in figure 1. Similar systems can be used for various applications in the building automation field.

Fig 1 Design of Smart Home

The smart home using the WSN starter kit is shown in figure 1. The PIR, IR and LDR (Light Dependent Resistor) sensors are connected to a programmable analog input node, and the thermocouple is connected to a programmable thermocouple node. Both nodes are wirelessly connected to an Ethernet gateway. The paper is organized as follows. In Section 2, a brief review of existing smart home applications is given. Section 3 covers the technical portion of this paper, where the proposed and implemented solution is described. Conclusions on the developed system are covered in Section 4. II.
EXISTING SMART HOME APPLICATIONS
A smart home system mainly includes heating, ventilation and air conditioning; lighting control; audio and video distribution to multiple sources around the house; and security (involving presence simulation, alarm triggering and medical alerts). Smart home systems are grouped by their main functions:
i) Alerts and sensors – heat/smoke sensors, temperature sensors
ii) Monitoring – a regular feed of sensor data, e.g. heat, CCTV monitoring
iii) Control – switching appliances on/off, e.g. sprinklers, lighting
iv) Intelligence and logic – movement tracking, e.g. security appliances
The different technologies that can provide smart home communication are X10, Insteon, Zigbee and Z-Wave. X10, developed in 1975 by Pico Electronics of Glenrothes, Scotland, allows compatible products to talk to each other remotely over the existing electrical wiring of a home. The first "home computer" was an experimental system in 1966. The Smart House Project was initiated in the early 1980s as a project of the National Research Centre of the National Association of Home Builders (NAHB) with the cooperation of a collection of major industrial partners [2]. By using wireless technology, today one can easily control a home's mechanical systems and appliances over a cellular phone or the Internet. As GSM technology provides ubiquitous access to the system for security and can automate appliance control, it is a very popular technology nowadays. Home Security with Messaging System [3], Security & Control System, and Remote and Security Control via SMS [4] were three alarm systems designed using SMS to securely monitor the home when the owners are away or at night. The system described in [5] is also based on GSM technology; it is wireless and provides security against intrusion as well as automating various home appliances using SMS.
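The SMS-based control schemes surveyed above all reduce to parsing a short text command and switching an appliance accordingly. A minimal, hypothetical sketch of such a parser (the command grammar and appliance names below are illustrative, not taken from any of the cited systems):

```python
# Hypothetical command format: "<appliance> <ON|OFF>", e.g. "LIGHT ON".
APPLIANCES = {"LIGHT", "FAN", "SPRINKLER"}

def parse_sms_command(text):
    """Parse an SMS body into (appliance, on_state); reject anything
    that does not match the two-word command grammar."""
    parts = text.strip().upper().split()
    if len(parts) != 2 or parts[0] not in APPLIANCES or parts[1] not in {"ON", "OFF"}:
        raise ValueError("unrecognised command: %r" % text)
    return parts[0], parts[1] == "ON"
```

For example, parse_sms_command("light on") yields ("LIGHT", True), which the controller would map to the corresponding relay.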
The system uses GSM technology, thus providing ubiquitous access for security and automated appliance control. An intelligent home monitoring system based on LabVIEW is described in [5]; it can act as a security guard of the home, monitoring the temperature, humidity, lighting, fire & burglar alarms and gas density of the house, with an infrared sensor to guarantee the family's security. The paper [6] presents the hardware implementation of a multiplatform control system for house automation using LabVIEW. That system uses LabVIEW, a PIC16F877A and a data acquisition card, and also has an internet connection to monitor and control the house equipment from anywhere in the world.

III. PROPOSED SMART HOME AUTOMATION SYSTEM
The proposed smart home automation system is based on the wireless sensor network system from National Instruments, and the programming is done in LabVIEW.
I. Hardware Support
The National Instruments wireless sensor network starter kit includes the NI WSN-9791 Ethernet gateway (Fig 2), the NI WSN-3202 programmable analog input node (Fig 3) and the NI WSN-3212 programmable thermocouple node (Fig 4). The NI WSN Starter Kit requires a PC running Windows Vista/XP to act as the host controller in the system. The NI WSN-3202 programmable analog input node has a range of ±10 V, and any thermocouple can be connected to the NI WSN-3212 programmable node. TTL outputs are also available on each node; these are used to control the lights or fans, or to give an alarm indication when an intruder enters the home. The hardware requirements of the system are: an Intel Core Duo processor, 1 GB of RAM, Windows XP Professional / Vista / 7, an Ethernet NIC, and LabVIEW 8.6 or higher installed on the PC.
IV. SOFTWARE SUPPORT
NI LabVIEW software is used for a wide variety of applications and industries.
LabVIEW is a highly productive development environment for creating custom applications that interact with real-world data or signals in fields such as science and engineering. LabVIEW supports thousands of hardware devices, including scientific instruments, data acquisition devices, sensors, cameras, and motors and actuators, with a familiar programming model for all of them and portable code that supports several deployment targets. LabVIEW has freely available drivers for thousands of NI and third-party hardware devices [7]. The G programming language, often called "LabVIEW programming", can quickly tie together data acquisition, analysis and logical operations, making it easy to see how data is being modified. LabVIEW contains a powerful optimizing compiler that examines the block diagram and directly generates efficient machine code, avoiding the performance penalty associated with interpreted or cross-compiled languages. The compiler can also identify segments of code with no data dependencies (that is, no wires connecting them) and automatically split the application into multiple threads that run in parallel on multicore processors, yielding significantly faster analysis and more responsive control compared to a single-threaded, sequential application [7].

III. Smart Home System Design
With technological advances, the control in smart home systems evolves to include new and sophisticated methods based on different control programs and systems. The developed system design is shown in figure 5. In this paper we use a LabVIEW program and wireless control to operate the different systems in the smart home model. Through the LabVIEW software, the system controls the lighting system, the security system and the fan control system.
1. Lighting system
The lighting system uses light dependent resistors (LDRs) to sense the light and a PIR motion sensor to detect movement in the room.
This system automatically turns the lights on or off depending on the light level and the movement inside the room: the motion sensor detects a person in the room, the LDR senses the light intensity, and the lights are switched accordingly. In the LabVIEW program the user can monitor the light intensity and the person-detection pulse on the front panel.
2. Security system
The security system uses an infrared sensor, fitted at the sides of the window, to detect an intruder. The system raises an alarm when a person is detected at the window: the sensor transmits a pulse to the node, and from there to the Ethernet gateway, when a person enters through the window.
3. Fan control system
The fan control system uses a temperature sensor and a PIR motion sensor. Whenever there is movement in the room, the temperature sensor senses the temperature and the fan is turned on or off accordingly.
A "Low Node Power" alarm occurs when the node power level is ≤ 4 V, which is 0.4 V above the minimum required voltage [8]. An indicator LED on the front panel labeled "Low Node Power" illuminates to give the indication. The quality of the wireless connection between the nodes and the Ethernet gateway is also critical to successful operation of the system; poor signal quality could result in data loss or in losing communication with a node completely. So the wireless "Link/Signal Quality" of each node is also checked, and an indication of poor link quality is given on the front panel. The stated maximum effective range of the nodes is 90 m indoors and 300 m outdoors [9].

V. SMART HOME SYSTEM IMPLEMENTATION
The front panel, designed in LabVIEW, allows monitoring of all parts of the smart home system connected with LabVIEW via the NI WSN kit, as shown in figure 6.

Figure 6 Front Panel of Smart Home

As shown in Figure 6, the sensor outputs are monitored on the front panel.
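The monitoring logic described above can be condensed into a short sketch. The 4 V low-power threshold is the figure stated in the text; the lighting threshold is a hypothetical normalised LDR value, not a value from the paper:

```python
def light_should_be_on(motion_detected, light_level, dark_threshold=0.3):
    """Lights come on only when the PIR sees a person AND the LDR
    reading says the room is dark (0.3 is a hypothetical normalised
    light level, not a value from the paper)."""
    return motion_detected and light_level < dark_threshold

def low_node_power(battery_volts):
    """The 'Low Node Power' alarm fires when the node supply is <= 4 V,
    i.e. 0.4 V above the minimum the node needs to keep running."""
    return battery_volts <= 4.0
```

In the actual system this logic lives in the LabVIEW block diagram rather than in textual code; the sketch only makes the thresholds and the AND condition explicit.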
This data is automatically stored in an Excel file, and poor link/signal quality and low battery indications are given on the front panel. With the built-in web server in LabVIEW, the front panels of the application can be published without adding any development time: LabVIEW generates front panel images that can be accessed from any web browser. Through the Web Publishing Tool of LabVIEW, one can create an HTML file in the application instance from which the VI is opened. Clients can then view and control the front panel remotely using a browser; the VI must be in memory on the server computer for clients to do so.

VI. CONCLUSIONS
The main objective of this paper was to design and implement a control and monitoring system for a smart home. The smart home consists of many subsystems that can be controlled by LabVIEW software with the help of the wireless sensor network starter kit. Wireless connectivity is the main advantage of the developed system, and similar systems can be designed for various applications. However, the LabVIEW software runs on a host PC, so the system operates only as long as the host PC is plugged in to a power source and the sensor nodes have adequate battery power.

REFERENCES
[1] Dhiren Tejani, Ali Mohammed A. H. Al-Kuwari, "Energy Conservation in Smart Home", 5th IEEE International Conference on Digital Ecosystems and Technologies, Daejeon, Korea, May 2011.
[2] Chetana Sarode, H.S. Thakar, "Intelligent Home Monitoring System", International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol.
3, Issue 1, January-February 2013, pp. 1446-1450.
[3] Adamu Murtala Zungeru, Ufaruna Victoria Edu, Ambafi James Garba, "Design and Implementation of a Short Message Service-Based Remote Controller", Computer Engineering and Intelligent Systems, ISSN 2222-1719 (Paper), ISSN 2222-2863 (Online), Vol. 3, No. 4, 2012.
[4] G. Raghavendran, "SMS Based Wireless Home Appliance Control System", 2011 International Conference on Life Science and Technology, IPCBEE Vol. 3 (2011), IACSIT Press, Singapore.
[5] Chetana Sarode, H.S. Thakar, "Intelligent Home Monitoring System", International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 3, Issue 1, January-February 2013, pp. 1446-1450.
[6] Basil Hamed, "Design and Implementation of Smart House Control Using LabVIEW", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume-1, Issue-6, January 2012.
[7] LabVIEW User Manual, April 2003 Edition, National Instruments.
[8] Bitter, Rick, Taqi Mohiuddin, and Matt Nawrocki, "LabVIEW Advanced Programming Techniques", Boca Raton: CRC Press LLC, 2001.
[9] National Instruments Corporation, Wireless Sensor Node Data Sheet (NI WSN 3202 & NI WSN 3212), 2009, http://www.ni.com/pdf/products/us/cat_wsn32xx.pdf
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-258-264 www.ajer.org Research Paper Open Access

A Study on Earthquake Resistant Construction Techniques
Mohammad Adil Dar1, Prof (Dr) A.R. Dar2, Asim Qureshi3, Jayalakshmi Raju4
1 PG Research Student, Department of Civil Engineering, Kurukshetra University, Haryana, India
2 Professor & Head, Department of Civil Engineering, NIT Srinagar, India
3 PG Research Student, Department of Civil Engineering, IIT Bombay, Maharashtra, India
4 UG Student, Department of Civil Engineering, MSRIT, Bangalore, India

Abstract: - Apart from the modern techniques which are well documented in the codes of practice, there are some older traditional earthquake resistant techniques which have proved effective in resisting earthquake loading and are also cost effective and easy to construct.

Keywords: - catastrophic damage, non-engineered buildings, traditional architecture, lack of proper seismic knowledge, details of seismic resistant construction.

I. INTRODUCTION
Disasters are unexpected events which have adversely affected humans since the dawn of our existence. In response to such events, there have been attempts to mitigate their devastating effects. The results of such attempts are very encouraging in developed countries but unfortunately poor in developing countries, including ours. Earthquakes are one of nature's greatest hazards on our planet and have taken a heavy toll on human life and property since ancient times. The sudden and unexpected nature of an earthquake makes it even worse on a psychological level and shakes the morale of the people. Man looks to mother earth for safety and stability under his feet, and when it itself trembles, the shock he receives is indeed unnerving.
Mitigation of the devastating damage caused by earthquakes is a prime requirement in many parts of the world. Since earthquakes are so far unpreventable and unpredictable, the only option is to design and build structures which are earthquake resistant, and attempts have been made in this direction all over the world. The results of such attempts are very encouraging in developed countries but poor in developing countries, including India. This is shown by the minimal damage, generally without any loss of life, when a moderate to severe earthquake strikes a developed country, whereas even a moderate earthquake causes widespread devastation in developing countries, as observed in recent events. It is not the earthquake which kills people but the unsafe buildings which are responsible for the widespread devastation. In view of the huge loss of life and property in recent earthquakes, this has become a hot topic worldwide, and a lot of research is going on to understand the reasons for such failures and to learn useful lessons to prevent the repetition of such devastation. If buildings are built earthquake resistant in the first place (as is done in developed countries such as the USA and Japan), the devastation caused by earthquakes will be mitigated most effectively. The professionals involved in the design and construction of such structures are structural/civil engineers, who are responsible for building earthquake resistant structures and keeping society at large in a safe environment.

Understanding of earthquakes and basic terminology
An earthquake is defined as a sudden ground shaking caused by the release of huge stored strain energy at the interface of tectonic plates.
Epicenter: the point on the free surface of the earth vertically above the place of origin of an earthquake.
Focus: the point within the earth from where the seismic waves originate.
Focal depth: the vertical distance between the focus and the epicenter.
The figure explains the related terminology used in earthquake engineering. Glimpses of some earthquake related failures are shown in the figures: the collapse of a building, a total collapse of a building, and a soft storey failure.

II. BEHAVIOUR OF MASONRY BUILDINGS UNDER GROUND MOTION
Ground vibrations during earthquakes cause inertia forces at locations of mass in the building. These forces travel through the roof and walls to the foundation. The main emphasis is on ensuring that these forces reach the ground without causing major damage or collapse. Of the three components of a masonry building (roof, walls and foundation, Figure (a)), the walls are most vulnerable to damage caused by horizontal earthquake forces. A wall topples easily if pushed horizontally at the top in a direction perpendicular to its plane (termed the weak direction), but offers much greater resistance if pushed along its length (termed the strong direction) (Figure (b)).

III. ROLE & RESPONSIBILITIES OF CIVIL ENGINEERS
It is not the earthquake which kills people but the unsafe buildings which are responsible for the devastation. In view of the huge loss of life and property in recent earthquakes, this has become a hot topic, and a lot of research is going on worldwide to understand the reasons for such failures and to learn useful lessons to prevent the repetition of such devastation.
If buildings are built earthquake resistant in the first place (as is done in developed countries such as the USA and Japan), we will most effectively mitigate earthquake disasters. The professionals involved in the design and construction of such structures are civil engineers, who are responsible for building earthquake resistant structures and keeping society at large in a safe environment. It is we, the civil engineers, who shoulder this responsibility for a noble and social cause.

IV. GUIDELINES FOR EARTHQUAKE RESISTANT CONSTRUCTION
In addition to the main earthquake design code, IS 1893, the BIS (Bureau of Indian Standards) has published other relevant codes for earthquake resistant construction.
Masonry structures (IS 13828: 1993)
• Horizontal bands should be provided at plinth, lintel and roof levels as per the code.
• Vertical reinforcement should be provided at important locations such as corners and internal and external wall junctions as per the code.
• The grade of mortar should be as specified in the codes for the different earthquake zones.
• Irregular shapes should be avoided both in plan and in vertical configuration.
• Quality assurance and proper workmanship must be ensured at all costs without any compromise.
RCC framed structures (IS 13920)
• In RCC framed structures the spacing of lateral ties should be kept closer, as per the code.
• The hooks in the ties should be at 135 degrees instead of 90 degrees for better anchorage.
• The arrangement of lateral ties in the columns should be as per the code and must be continued through the joint as well.
• Wherever laps are provided, the lateral ties (stirrups for beams) should be at closer spacing as per the code.

V. CONCLUSION
Technology is available to drastically mitigate earthquake related disasters.
This is confirmed by the minimal damage, generally without any loss of life, when a moderate to severe earthquake strikes a developed country, whereas even a moderate earthquake causes huge devastation in developing countries, as observed in recent events. The reason is that earthquake resistant measures are strictly followed in developed countries, whereas such guidelines are routinely violated in developing countries. The administrative system is efficient and effective in developed countries but not in developing countries, so governments there should ensure the implementation of earthquake resistant design guidelines. It is here that civil engineers in general, and structural engineers in particular, have a great role to play in mitigating the suffering caused by earthquake related disasters.
M Adil Dar: The author received his B.E. (Hons) in Civil Engineering from M.S.R.I.T. Bangalore and is presently pursuing his M.Tech in Structural Engineering at Kurukshetra University. He has published papers in numerous peer-reviewed international journals and international conferences. His research interests include earthquake engineering, bridge engineering and steel structures. He is a Chartered Structural Engineer of the Institution of Structural Engineers, a Fellow of IAEME, a Life Member of ISE(I), ISET, ICI, ISSS, ISCE, IIBE, SEFI and IET(I), and a Member of IAStructE, ACCE, ISSE, IStructE, ASCE, ACI, ASTM, IEAust and IRC.

Prof (Dr) A.R. Dar: The author received his B.E. in Civil Engineering from R.E.C. Srinagar (presently N.I.T. Srinagar), his M.E. (Hons) in Structural Engineering from I.I.T. Roorkee, and his Ph.D. in Earthquake Engineering from the University of Bristol, U.K., under the prestigious Commonwealth Scholarship Award.
He is presently working as Distinguished Professor and Head of the Civil Engineering Department at N.I.T. Srinagar. He has published papers in several international journals and conferences. His research areas include earthquake resistant design, tall structures, structural dynamics, RCC design, steel design and prestressed design. He is a life member of several professional bodies in structural engineering, is presently the senior-most Professor, and holds many administrative responsibilities in the same institution.

Asim Qureshi: The author received his B.Tech (Hons) in Civil Engineering from N.I.T. Srinagar and is presently pursuing his M.Tech in Structural Engineering at I.I.T. Bombay. He has published papers in many international journals. His research interests include earthquake engineering, bridge engineering and prestressed structures.

Jayalakshmi Raju: The author is pursuing her B.E. (final year) in Civil Engineering at M.S. Ramaiah Institute of Technology, Bangalore. She has published papers in numerous peer-reviewed international journals and international conferences, and has presented technical papers at many state and national level technical events. She has also participated in many technical events such as cube casting and technical debates. Her research interests include steel design, RCC design and bridge engineering. She is a Fellow of IAEME and a member of ASCE, ACI, IEAust, SEFI and ISCE.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-131-135 www.ajer.org Research Paper Open Access

Effect of Temperature on the Structural and Optical Properties of Spray Pyrolysis SnO2 Thin Films
S. Parveen Banu*1, T. Saravana Kumaran*2, S. Nirmala3, J. Dhanakodi4
1 Department of Physics, Muthuyammal College of Arts and Science, Rasipuram, Tamilnadu.
2 Department of Physics, VSA College of Engineering, Salem, Tamilnadu.
3 Muthuyammal College of Arts and Science, Rasipuram, Tamilnadu.
4 Muthuyammal College of Arts and Science, Rasipuram, Tamilnadu.

Abstract: - SnO2 thin films were synthesized at 300-500 °C by the spray pyrolysis method using tin chloride pentahydrate, acetic acid and ammonia solution. The films were characterized by XRD, SEM and UV-Vis-NIR. XRD analysis of the nanocrystals prepared at three different temperatures reveals the crystalline nature, structure and particle size of the prepared SnO2. The positions of the XRD peaks show that the deposited films possess a tetragonal structure with the most prominent reflection along the (200) plane. Parameters such as crystallite size, strain and dislocation density have been analyzed. Surface morphology and film composition have been studied by scanning electron microscopy; the images of the SnO2 nanoparticles show their morphology, particle size and crystallinity. The structural and SEM analyses confirm that a phase change can be achieved by varying the temperature. The band gap of the prepared nanoparticles is found to be in the range 2.7 to 2.95 eV; as the temperature increases, the energy gap decreases.

Keywords: - Spray pyrolysis thin film, XRD, EDAX, SEM and optical properties.

I. INTRODUCTION
In recent years, there has been considerable interest in the use of thin films in solar cell devices.
Tin oxide is a semiconductor with an energy band gap of 2.7 eV whose electrical properties can be suitably controlled by altering the deposition conditions. These materials are important in the fields of catalysis, photography, electronics, photonics, data storage, optoelectronics, biological labeling, imaging and biosensing. Tin oxide (SnO2) films have been successfully used in many applications, including gas sensor devices, in pure and Cd-doped form. SnO2 films can be prepared by different techniques such as spray pyrolysis, successive ionic layer adsorption and reaction (SILAR), electrodeposition, RF sputtering, pulsed laser evaporation, physical vapour deposition, screen printing, metal organic vapour phase epitaxy (MOVPE)/metal organic chemical vapour deposition (MOCVD) and the chemical bath deposition (CBD) method. In this paper the tin oxide material is fabricated by the spray pyrolysis method; the purpose of this work was to investigate the effect of the growth conditions at various temperatures from 300-500 °C.

II. MATERIALS AND METHODS
2. Experimental work
SnO2 thin films were deposited by the CSP (chemical spray pyrolysis) technique. In this deposition technique a starting solution containing Sn precursors is sprayed through a nozzle, assisted by a carrier gas, over a hot substrate. When the fine droplets arrive at the substrate, the solid compounds react to form a new chemical compound. SnO2 thin films were deposited onto ultrasonically cleaned glass substrates by the spray pyrolysis method; the substrate temperature was varied from 300 to 500 ± 3 °C and was controlled by a thermo-controller.

2.1. Fabrication of SnO2 thin film sample
The substrates were heated to the required temperature for film deposition by an electrical heater. The precursor solution was 0.1 M tin (IV) chloride, prepared by dissolving it in deionized water.
A few drops of acetic acid were added to the aqueous solution to prevent the formation of hydroxides. The nozzle was kept at a distance of 5 cm from the substrate during deposition. The solution flow rate was held constant at 0.5 ml/min. Air was used as the carrier gas at a pressure of 3 bar. When the aerosol droplets came close to the substrates, a pyrolysis process occurred and highly adherent SnO2 films were produced. The scheme of the spray pyrolysis setup used in this study is presented in the figure. The various process parameters used in the film deposition are listed in Table 1.

Table 1. Spray deposition parameters
  Deposition rate:               0.5 ml/min
  Substrate temperature:         300, 400, 500 °C
  pH of the solution:            7
  Deposition time:               10 minutes
  Nozzle to substrate distance:  5 cm
  Carrier gas pressure:          3 bar

In the present study, the pH of the bath was measured using a digital pH meter. SnO2 thin films were deposited using aqueous solutions of 0.1 M tin (IV) chloride, maintaining the pH value at 7 using ammonia solution. If the value is increased above 8 the bath becomes cloudy due to precipitation. Hence the optimum pH value of 7 ± 0.2 was chosen for all depositions. The films were deposited at bath temperatures of 300, 400, and 500 °C; at higher temperature the dissociation is greater and gives a higher amount of Sn4+ ions. The deposition time was optimized at 10 minutes, at which uniform and adherent films were obtained. The glass substrates were treated for 15 minutes with ultrasonic waves in an isopropanol bath and then rinsed with acetone. The thickness of the film was measured using a stylus profilometer at various points on the substrate and the average was taken as the film thickness.

2.2. Characterization of SnO2 material
The deposited thin films were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and optical absorption spectroscopy.
X-ray diffraction patterns were recorded on a diffractometer (Miniflex model, Rigaku, Japan) using CuKα radiation with a wavelength λ = 1.5418 Å at 2θ values between 20° and 80°. The average crystallite size (D) was estimated using the Scherrer equation [12]: D = 0.9λ/(β cos θ), where λ, β, and θ are the X-ray wavelength, the full width at half maximum (FWHM) of the diffraction peak, and the Bragg diffraction angle, respectively. The optical absorption spectra of the films were measured in the wavelength range of 200-700 nm on a Shimadzu UV-2450 spectrophotometer.

III. RESULTS AND DISCUSSION
SnO2 thin films were deposited by the CSP technique. The transparency of the thin films so formed depends on parameters such as the substrate temperature and the concentration of the precursor solution, as well as on other parameters such as spray duration, flow rate and pressure.

Structure Analysis: The X-ray diffraction patterns of the SnO2 thin films deposited at substrate temperatures of 300, 400 and 500 °C are shown in Figs. 1-3. The most intense peak in the XRD patterns was observed at the (200) plane, and additional peaks along the (110), (101), (211), (002), (310), and (112) planes were also observed. The preferred orientation along the (200) plane of the SnO2 thin films was found to increase gradually as the substrate temperature was raised from 300 to 500 °C. This reveals that the films are polycrystalline in nature with a tetragonal structure. The interplanar spacings "d" were calculated and compared with the standard values of JCPDS 88-0287. It was found that at higher temperature intense diffraction peaks of well-crystallized films were formed. The sharpest (200) peak in the X-ray diffraction pattern was found for the SnO2 thin films deposited at the highest temperature, with correspondingly small FWHM values, as indicated in the table.
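The Scherrer estimate quoted above is easy to reproduce. The sketch below (the function name is mine, not from the paper) applies it to a single reflection; no instrumental-broadening correction is made, so results can differ slightly from the published table.

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_angstrom=1.5418, k=0.9):
    """Scherrer crystallite size D = K*lambda / (beta * cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    beta = math.radians(fwhm_deg)               # peak FWHM converted to radians
    d_angstrom = k * wavelength_angstrom / (beta * math.cos(theta))
    return d_angstrom / 10.0                    # angstrom -> nm

# (200) reflection of the film deposited at 400 C, values from the paper's XRD table
print(scherrer_size_nm(37.896, 0.187))
```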
The lattice constants (a, c) of the tetragonal SnO2 films were calculated using the relation

1/d² = (h² + k²)/a² + l²/c²

It is observed that the lattice constant values (a, c) decrease slightly as the temperature is increased. The lattice constants, crystallite sizes and related data for the various temperatures are shown in the table below. The crystallite size was calculated using the Debye-Scherrer formula, D = 0.9λ/(β cos θ), where D is the mean crystallite size, β is the full width at half maximum of the diffraction line, θ is the diffraction angle and λ is the wavelength of the X-radiation. The variation of crystallite size and micro-strain with substrate temperature was examined; the crystallite size of the SnO2 films increases with increasing substrate temperature and attains a maximum of 67 nm.

Figs. 1-3: XRD patterns of the SnO2 thin films deposited at 300 °C, 400 °C and 500 °C.

Table: Variation of lattice constants and crystallite size with substrate temperature for the SnO2 thin films.

Temperature (°C) | 2θ (°) | d (Å) | FWHM (°) | (hkl) | Crystallite size (nm) | a (Å)  | c (Å)
300              | 26.490 | 3.364 | 0.187    | (110) | 45.571                | 4.7455 | 3.1583
                 | 33.792 | 2.652 | 0.093    | (101) | 92.721                |        |
                 | 37.852 | 2.376 | 0.224    | (200) | 39.078                |        |
                 | 51.629 | 1.770 | 0.187    | (211) | 49.277                |        |
                 | 57.817 | 1.594 | 0.448    | (002) | 21.118                |        |
                 | 61.775 | 1.501 | 0.897    | (310) | 10.771                |        |
                 | 65.872 | 1.416 | 0.547    | (112) | 10.062                |        |
400              | 26.553 | 3.357 | 0.187    | (110) | 45.577                | 4.7507 | 3.1307
                 | 33.822 | 2.650 | 0.187    | (101) | 46.364                |        |
                 | 37.896 | 2.374 | 0.187    | (200) | 46.900                |        |
                 | 51.661 | 1.769 | 0.224    | (211) | 41.069                |        |
                 | 61.816 | 1.500 | 0.299    | (310) | 32.324                |        |
                 | 65.898 | 1.416 | 0.410    | (112) | 26.086                |        |
500              | 26.464 | 3.368 | 0.149    | (110) | 56.961                | 4.7486 | 3.1352
                 | 33.763 | 2.655 | 0.448    | (101) | 19.319                |        |
                 | 37.901 | 2.374 | 0.131    | (200) | 67.002                |        |
                 | 51.623 | 1.771 | 0.187    | (211) | 49.275                |        |
                 | 61.871 | 1.499 | 0.299    | (310) | 32.333                |        |
                 | 65.829 | 1.417 | 0.274    | (112) | 36.115                |        |

Chemical composition: The figure shows the EDAX spectrum of the SnO2 thin film deposited at 400 °C. Strong peaks for Sn and O were found in the spectrum; the silicon (Si) peak is due to the glass substrate (Si is a component of the glass), and no other impurities were detected, confirming the high purity of the SnO2 thin film.
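For a tetragonal cell, the lattice constants follow directly from two reflections of the relation 1/d² = (h²+k²)/a² + l²/c². A minimal sketch (the helper name is mine); the published constants were presumably refined over all peaks, so this two-peak estimate differs slightly from the table values:

```python
import math

def tetragonal_lattice_constants(d110, d101):
    """Solve 1/d^2 = (h^2+k^2)/a^2 + l^2/c^2 from the (110) and (101) d-spacings."""
    a = d110 * math.sqrt(2.0)              # (110): 1/d^2 = 2/a^2
    inv_c2 = 1.0 / d101**2 - 1.0 / a**2    # (101): 1/d^2 = 1/a^2 + 1/c^2
    c = 1.0 / math.sqrt(inv_c2)
    return a, c

# d-spacings (angstrom) of the film deposited at 400 C, from the table above
a, c = tetragonal_lattice_constants(3.357, 2.65)
```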
Surface Morphology: The surface morphologies of the SnO2 thin films were observed through scanning electron microscopy (SEM). As shown in the figure, the surface of the films is smooth and covers the glass substrates well. The surface of the films is found to be uniform, and many pellet-like grains are observed. The structure of the films and the estimated grain sizes on varying the temperature from 300 to 500 °C are depicted in the figure. The average grain sizes are in the range of 350 to 400 nm.

Optical properties: The energy band gaps of these films were calculated with the help of the absorption spectra. To determine the energy band gap, we plotted (αhν)² versus hν, where α is the absorption coefficient and hν is the photon energy. For an allowed direct transition, the absorption coefficient follows

αhν = A(hν − Eg)^(1/2)

The band gap of the films was found to increase from 3.7 to 3.95 eV with increasing temperature. Our results are in agreement with the literature.

IV. CONCLUSION
In this investigation SnO2 thin films were grown on glass substrates by CSP, and the effects of growth conditions such as the molarities of the constituents and the growth temperature on the structural and optical properties were studied. The major findings are:
1. The structural study by X-ray diffraction indicates well-crystallized films with a tetragonal structure. It revealed that the grain size of the SnO2 films increases with the increase in temperature.
2. The SEM micrographs show that the films are uniform, with many pellet-like grains on the substrate. The average grain size is in the range of 350 to 400 nm.
3. The stoichiometric compound is confirmed by the EDAX measurements.
4. The optical absorption study reveals that the SnO2 thin films have allowed direct transitions. The optical band gap energy varies from 3.7 eV to 3.95 eV with temperature.
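The Tauc extrapolation used in the optical analysis above — fitting the linear part of (αhν)² versus hν and reading Eg from the x-intercept — can be sketched in a few lines. The data below are synthetic, generated with an assumed Eg of 3.7 eV; they are not the paper's measurements.

```python
def tauc_band_gap(h_nu, alpha):
    """Least-squares fit of (alpha*h*nu)^2 = A^2 * (h*nu - Eg); Eg is the x-intercept."""
    y = [(a * e) ** 2 for a, e in zip(alpha, h_nu)]
    n = len(h_nu)
    mx, my = sum(h_nu) / n, sum(y) / n
    slope = sum((x - mx) * (yi - my) for x, yi in zip(h_nu, y)) / \
            sum((x - mx) ** 2 for x in h_nu)
    return mx - my / slope  # x-intercept of the fitted line

# synthetic direct-gap data with an assumed Eg = 3.7 eV (prefactor arbitrary)
h_nu = [3.8, 3.9, 4.0, 4.1, 4.2]                      # photon energies, eV
alpha = [1e4 * (e - 3.7) ** 0.5 / e for e in h_nu]    # alpha = A*sqrt(h*nu - Eg)/(h*nu)
print(round(tauc_band_gap(h_nu, alpha), 2))
```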
REFERENCES
[1] Arivazhagan V. and Rajesh S., Journal of Ovonic Research, Vol. 6, No. 5, pp. 221-226, 2010.
[2] J. B. Yoo, A. L. Fahrenbruch and R. H. Bube, J. Appl. Phys., 68, 4694, 1990.
[3] R. S. Rusu and G. I. Rusu, J. Optoelectron. Adv. Mater., 7(2), 823, 2005.
[4] M. Penza, S. Cozzi, M. A. Tagliente and A. Quirini, Thin Solid Films, 349, 71, 1999.
[5] S. Ishibashi, Y. Higuchi and K. Nakamura, J. Vac. Sci. Technol. A, 8, 1403, 1998.
[6] J. Joseph, V. and K. E. Abraham, Chinese Journal of Physics, 45, No. 1, 84, 2007.
[7] E. Elangovan and K. Ramamurthi, Cryst. Res. Technol., 38(9), 779, 2003.
[8] Datazoglov O., Thin Solid Films, Vol. 302, pp. 204-213, 1997.
[9] Fantini M. and Torriani I., Thin Solid Films, Vol. 138, pp. 255-265, 1986.
[10] Dainius Perednis and Ludwig J. Gauckler, "Thin Film Deposition Using Spray Pyrolysis", Journal of Electroceramics, Vol. 14, 2005, pp. 103-111.
[11] Krishna Seshan, "Handbook of Thin-Film Deposition Processes and Techniques: Principles, Methods, Equipment and Applications", Noyes Publications, 2002.
[12] Tetsuo Muranoi and Mitsuo Furukoshi, "Properties of Stannic Oxide Thin Films Produced from the SnCl4-H2O and SnCl4-H2O2 Reaction Systems", Thin Solid Films, Vol. 48, 1978, pp. 309-318.
[13] M. S. Tomar and F. J. Garcia, "Spray Pyrolysis in Solar Cells and Gas Sensors", Progress in Crystal Growth and Characterization of Materials, Vol. 4, 1981, pp. 221-248.
[14] Matthias Batzill and Ulrike Diebold, "The surface and materials science of tin oxide", Progress in Surface Science, Vol. 79, 2005, pp. 47-154.
[15] Antonius Maria Bernardus van, "Chemical Vapour Deposition of Tin Oxide Thin Films", Ph.D. Thesis, Technische Universiteit Eindhoven, 2003.
[16] Saturi Baco, Abdullah Chik and Fouziah Md. Yassin, "Study on Optical Properties of Tin Oxide Thin Film at Different Annealing Temperature", Vol. 4, 2012, pp. 61-72.
[17] Smaali Assia, Outemzabet Ratiba, Media El Mahdi and Kadi Mohamed, "Optical Reflectance of Pure and Doped Tin Oxide: From Thin Films to Poly-Crystalline Silicon/Thin Film Device", International Journal of Chemical and Biological Engineering, Vol. 2, 2009, pp. 48-51.
[18] Raül Díaz Delgado, "Tin Oxide Gas Sensors: An Electrochemical Approach", Ph.D. Thesis, Universitat de Barcelona, 2002.
[19] W. M. Sears and Michael A. Gee, "Mechanics of Film Formation During the Spray Pyrolysis of Tin Oxide", Thin Solid Films, Vol. 165, 1988, pp. 265-277.
[20] G. E. Patil, D. D. Kajale, V. B. Gaikwad and G. H. Jain, "Spray Pyrolysis Deposition of Nanostructured Tin Oxide Thin Films", ISRN Nanotechnology, Vol. 2012, 2012.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-194-202 www.ajer.org Research Paper Open Access

Assessment of Mechanical Properties of Sintered and Hot Extruded Aluminium and Aluminium Based Titania Composites

C. Vanitha1, K. S. Pandey2
1 Department of Metallurgical and Materials Engineering, National Institute of Technology, Warangal-506 004, Andhra Pradesh, India
2 Department of Metallurgical and Materials Engineering, National Institute of Technology, Tiruchirappalli-620 015, Tamil Nadu, India

Abstract: - The present investigation evaluates the quality of aluminium and aluminium-based titania composite preforms hot extruded through varying reduction ratios and assesses their density and mechanical properties. Aluminium powder and powder blend preforms of Al-6% TiO2 and Al-12% TiO2 were prepared using a 1.0 MN capacity UTM. Preform densities were maintained at 90 ± 1 per cent of theoretical by applying controlled pressure in the range of 290 ± 10 MPa and by taking accurately weighed powders. The Al-6% TiO2 and Al-12% TiO2 powder blends were separately prepared in a pot mill. Preforms were sintered under a protective coating in an electric muffle furnace for a period of 100 minutes. Extrusion experiments were carried out using the 1.0 MN capacity UTM while heating the extrusion die and the sintered preform in situ. The data obtained on extrusion were critically analyzed and the properties evaluated were summarized systematically. Addition of titania to the aluminium matrix enhanced the tensile strength with a small drop in ductility. An increase in reduction ratio was beneficial to the mechanical properties.

Keywords: - composite, extrusion, preforms, properties, sintered

I.
INTRODUCTION
The development of high-strength powder metallurgy (P/M) materials capable of withstanding elevated-temperature service conditions has long been the endeavour of materials scientists, physicists and metallurgists. The past nine decades have witnessed extremely rapid growth of P/M materials; products capable of withstanding severe service conditions were produced mainly by forging, extruding or rolling sintered P/M billets. This means that when the P/M route is suitably combined with conventional metal forming routes, it provides a more conducive and productive means of blending powders of distinctly different constituents. These homogeneously blended powders induce in the final products a uniform distribution of the dispersoid, a harder phase that retains its identity and in turn helps the components retain higher strengths. Such composites cannot be produced by any other conventional manufacturing route. Thus, these products are capable of withstanding high temperatures, permitting easy production of engine components that operate at elevated temperatures. Aluminium is among the light metals possessing a high strength-to-weight ratio, and, therefore, its application in the automotive industries is on an increasing trend. The specific advantages of aluminium and its alloys, such as light weight, corrosion resistance, high thermal and electrical conductivities, non-magnetic characteristics and a variety of forming and finishing operations, can be combined with the advantages of powder metallurgy to develop various types of aluminium-based composites. Extrusion is reported [1] to be economical when coupled with the powder metallurgical route. Basically, extrusion is the expulsion of metal by mechanical force [2] through well-defined orifice geometries. In a hot working operation, the metal is heated to give a suitable degree of softness and plasticity.
This process is adopted to develop various types of aluminium-based powder metallurgical composites [1]. It is well known that poly-phase materials are essentially composites; the material distribution here is controlled not mechanically or thermally but by chemical means. Thus, it is a process of combining materials in certain ways to achieve a desired property which the individual materials would not possess. Aluminium P/M composites are materials in which oxides, nitrides, borides or their different combinations in a metal constitute the composite. Such combinations with the major constituent being a metal (the base) are termed metal-based composites. Their reinforcement particles are of a hard phase which retains its identity by not entering into the matrix (not alloying with the metal/metals), thus enhancing the strength, wear resistance, etc. These dispersion-strengthened materials retain the high yield strength or strain hardening rate of elemental or alloy matrices, even at higher temperatures [3]. Composite materials, in general, have been produced by extrusion of thoroughly blended elemental and other hard particles of oxides, carbides, nitrides or borides from powder billets. This is possible when the deformation properties of the components under extrusion conditions are almost the same. It is a universal technique, provided compactable combinations of materials are found. Aluminium-based composites are ideal for hot extrusion in particular [4]. Hot extrusion is a process used to consolidate metal powders into useful shapes such as solid bars, hollow sections and other unusual geometries. It is a powder metallurgical process offering a large reduction in size in a single operation, which yields improved densification and enhanced mechanical properties. Thus, the resultant product can be employed for structural applications [5].
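The "large reduction in a single operation" is conventionally quantified by the extrusion ratio R = A0/Af and the equivalent true strain it imposes, ε = ln R. A short sketch (function names are mine, not the paper's) for the round-to-round case:

```python
import math

def extrusion_ratio(d_billet_mm, d_extrude_mm):
    """R = A0/Af for round-to-round extrusion (areas scale as diameter squared)."""
    return (d_billet_mm / d_extrude_mm) ** 2

def true_strain(ratio):
    """Equivalent true strain imposed by the reduction: eps = ln(R)."""
    return math.log(ratio)

# the two reductions used later in this paper
eps_low, eps_high = true_strain(6.0), true_strain(24.0)
```

At R = 24:1 the imposed strain is ln 24 ≈ 3.18, nearly twice that at 6:1 (ln 6 ≈ 1.79), which is consistent with the higher densification reported below for the larger ratio.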
The hot extrusion process is widely used for the consolidation of dispersion-strengthened materials, in particular SAP and dispersion-type nuclear fuel elements, but its application to pure metal or alloy powders is limited [6]. However, the use of P/M methods to overcome casting problems has been proven [7] beyond any doubt. Comprehensive, detailed experimental studies and a thorough analysis showing the influence of composition, extrusion ratio and extrusion temperature are described elsewhere [5]. The dies and tooling used in extrusion are required to withstand considerable abuse from high stresses, thermal shock and oxidation [8]. It has been shown that the addition of hard particles to aluminium powders improves both the adhesive and abrasive wear resistance [9, 10] of the resultant product. Niels Hansen [8] reported on the properties of dispersion-strengthened aluminium products made by hot extrusion after powder blending: the strength went up and the elongation dropped when the matrix aluminium powder particle size was decreased and the oxide concentration was raised. The sub-grain structure formed in the aluminium matrix during hot extrusion was superimposed on the oxide strengthening particles, which is effective at elevated temperatures. Some important literature on extrusion of aluminium and aluminium-based powder preforms can be referred to elsewhere [11-32]. The selection of the systems for the present investigation has been advocated purely on the basis of possible industrial application. Generally, cold working has been the most adopted process to harden and strengthen pure copper metal, but a highly stable and strong copper-base material can be produced by dispersing a non-dissolvable second phase into the copper matrix and working it hot or cold.
Since aluminium-based composites cannot be homogeneously prepared following the conventional melting and casting route, the P/M route was thought to be most appropriate because the virtually insoluble ingredient can be blended, compacted, sintered and extruded. This category is classified as dispersion-strengthened aluminium extruded products. The sound metallurgical microstructural features of the composite are likely to enhance the mechanical properties. These materials can be employed as strong, highly dense structural products where elevated-temperature properties are sought. Thus, the present investigation is to establish conclusively the effect of the extrusion temperature and extrusion ratio on the mechanical properties with and without the addition of titania, where two titania additions, 6% and 12%, were made. The properties to be assessed are tensile strength, per cent elongation and per cent area reduction, along with the effect of the lubricant employed during extrusion on the quality of the products.

II. EXPERIMENTAL DETAILS
This section includes the materials procurement and the types of equipment required for characterization of the aluminium powder and the Al-6% TiO2 and Al-12% TiO2 blends, and the design and fabrication of the die set assembly for compaction and of the extrusion dies. Compaction, ceramic coating and fabrication of the in-situ heating furnace and other relevant details are briefed. Compaction details along with the compaction assembly and the die plates containing the extrusion orifices are shown. In addition, the extrusion assembly is also shown.

II.1 Materials Required
The materials required for the present investigation were commercially pure atomized aluminium powder of -150 µm, procured from M/s The Metal Powder Company Limited, Thirumangalam, Madurai, Tamil Nadu, India, and titanium powder of -38 µm, obtained from M/s Ghrishma Speciality Powders, Mumbai, Maharashtra, India.
Molybdenum disulphide paste and graphite powder were also procured from Ghrishma Speciality Powders as stated above. Compaction and extrusion die materials were procured for designing, fabricating and heat treating them to the required hardness and toughness. Two extrusion die plates were fabricated for extruding at extrusion ratios of 6:1 and 24:1. An in-situ heating furnace was also designed and fabricated.

II.2 Equipment Required
A Universal Testing Machine of 1.0 MN capacity was required for powder compaction and powder preform extrusion. A separate electric muffle furnace was required for sintering the compacts. An electronic balance with a sensitivity of 0.0001 g was required for density measurements. Apart from this, a temperature controller cum indicator along with a chromel/alumel thermocouple was also required. A lathe machine was needed for tensile specimen preparation, along with the Haunsfield Tensometer for conducting tensile tests and other measuring devices such as electronic vernier calipers.

II.3 Preparation of Titania Powder
A known amount of titanium powder of -38 µm was spread in a stainless steel tray, and the tray was kept in an electric muffle furnace maintained at 1273 ± 10 K, where the powder was allowed to oxidize for a period of two hours before cooling to room temperature. This oxidized powder was ground manually in a porcelain bowl with a porcelain stirrer. This operation was continued till the titanium powder was completely oxidized and ground to quite fine sizes, i.e., less than 38 µm. This prepared titania powder was used to prepare the Al-6%TiO2 and Al-12%TiO2 composite powder blends. Chemical analysis revealed that in the prepared TiO2 powder, the titanium content was exactly in the stoichiometric composition of TiO2.

Table 1. Characteristics of Al Powder, Al-6%TiO2 and Al-12%TiO2 Powder Homogeneous Blends
System     | Apparent Density (g/cc) | Flow Rate (s/100 g) | Compressibility (g/cc at a pressure of 290 ± 10 MPa)
Al         | 0.9308                  | 60.37               | 2.430
Al-6%TiO2  | 0.9398                  | 58.3                | 2.491
Al-12%TiO2 | 0.9698                  | 56.7                | 2.549

Table 2. Sieve Size Analysis of Aluminium Powder
Sieve Size (µm) | Wt% Retained | Cumulative Wt% Retained
-180 +150       | 1.60         | 1.60
-150 +126       | 3.60         | 5.20
-126 +106       | 2.50         | 7.70
-106 +90        | 0.71         | 8.41
-90 +75         | 8.30         | 16.71
-75 +63         | 9.20         | 25.91
-63 +53         | 16.70        | 42.61
-53 +45         | 15.80        | 58.41
-45 +38         | 3.63         | 62.04
-38             | 37.95        | 99.99

II.4 Powder Blend Preparation
Known amounts of the Al-6%TiO2 and Al-12%TiO2 powder mixes were taken separately into two different stainless steel pots with porcelain balls (~10 to ~20 mm diameter) in a fixed powder-to-ball weight ratio, and the lids of the pots were securely tightened and placed on the pot mill. The blending operation was carried out for a period of 30 hours so as to obtain homogeneous powder blends. Homogeneity was confirmed by taking 100 g of powder mix after every hour of blending and measuring the apparent density and flow rate. Immediately after the completion of each test, the powder mixes taken out were returned to their respective pots and the lids were tightened again. This process was repeated till the last three consecutive readings of flow rate and apparent density were consistently constant; thus, the time of blending was found to be 30 hours.

II.5 Compact Preparation and Application of Ceramic Coating
Green compacts of aluminium, Al-6% TiO2 and Al-12% TiO2 powder blends were prepared using a suitable die, punch and bottom insert on a 1.0 MN capacity Universal Testing Machine. Compacts of 27.50 mm diameter and 32.00 mm height were prepared by taking pre-weighed powder or powder blends and pressing them to a density of 89 ± 1 per cent of theoretical by applying controlled pressure in the range of 345 ± 10 MPa. The powder compaction die set assembly is shown in Fig. 1.
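The pre-weighing step implies a theoretical blend density from the inverse rule of mixtures, 1/ρ_th = Σ wᵢ/ρᵢ. A rough sketch for the Al-6%TiO2 compact, using assumed handbook densities for aluminium and rutile (these values are not given in the paper):

```python
import math

# Assumed handbook densities in g/cc, not values from the paper
RHO_AL, RHO_TIO2 = 2.70, 4.23

def blend_theoretical_density(weight_fracs, densities):
    """Inverse rule of mixtures: 1/rho_th = sum(w_i / rho_i)."""
    return 1.0 / sum(w / r for w, r in zip(weight_fracs, densities))

rho_th = blend_theoretical_density([0.94, 0.06], [RHO_AL, RHO_TIO2])

# compact geometry from the text: 27.50 mm diameter x 32.00 mm height,
# pressed to about 89 per cent of theoretical density
volume_cc = math.pi / 4.0 * 2.750 ** 2 * 3.200
powder_mass_g = 0.89 * rho_th * volume_cc
```

This gives roughly 47 g of blend per compact; the actual charge would of course be set by the measured powder characteristics in Table 1.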
The indigenously developed and modified ceramic coating [33] was applied on the entire surface of each green compact while maintaining their identity, and the compacts were allowed to dry under ambient conditions for a period of 12 hours. A second coat was applied at 90° to the previous coating and allowed to dry under the aforementioned conditions for a further period of 12 hours. These ceramic-coated compacts were placed in a stainless steel tray and the complete set was transferred into the furnace chamber maintained at 473 ± 10 K. The compacts were dried at this temperature for a period of half an hour and were then allowed to remain in the furnace itself for the sintering operation.

Figure 1. Compaction Assembly Showing Complete Details

II.6 Sintering
The ceramic-coated compacts were sintered in an electric muffle furnace for a period of 100 minutes in the temperature range of 823 ± 10 K. All sintered compacts were cooled inside the furnace itself by switching off the furnace. Instead of using a hydrogen, dissociated ammonia or nitrogen furnace atmosphere, the compacts were coated with the indigenously developed and modified ceramic coating, which protected the compacts against oxidation during sintering. The ceramic coating had been tested up to 1473 ± 10 K and found to be non-permeable to air and other gases while sintering ferrous-based preforms at that temperature. Therefore, it was presumed that the ceramic coating applied over the compacts was highly protective during sintering at 823 ± 10 K as well.

II.7 Hot Extrusion
All hot extrusion experiments were carried out using the suitable die-set assembly along with the in-situ heating furnace. Figure 2(a) shows the die plates with two different openings for hot extrusion.
The entire die-set components were designed and fabricated using hot die steel, suitably heat treated to 55-58 HRC and finally tempered to retain the hardness in the range of 48-52 HRC. All hot extrusion experiments were carried out at two different extrusion temperatures, 773 ± 10 K and 823 ± 10 K, and at two extrusion ratios, namely 6:1 and 24:1.

Figure 2(a). Die Plates for Extrusion with 6:1 and 24:1 Extrusion Ratios, with Dimensions in mm

The press used for extrusion was the 1.0 MN capacity Universal Testing Machine. The die plates are shown in Fig. 2(a), and Fig. 2(b) shows the entire extrusion assembly along with the heating arrangement (in-situ heating provision).

Figure 2(b). Complete Extrusion Assembly Along with the In-situ Heating Arrangement

II.8 Tensile Testing
Required lengths of tensile pieces were cut from each of the extruded rods from both extrusion ratios and both extrusion temperatures, and the same were machined to standard tensile specimens as described elsewhere [34]. Prior to carrying out the tensile testing, density measurements were carried out using methods detailed elsewhere [35]. The masses in air and water were measured on a single-pan electronic balance of 0.0001 g sensitivity [36]. The tension-tested pieces were carefully used to determine the final dimensions such as the necked diameter and the final gauge length of the broken specimen.

III. RESULTS AND DISCUSSION
The results of all extrusion experiments were consolidated and discussed in detail. This further includes recording of the pressures to begin and to end the extrusion for each extrusion at both extrusion temperatures and both extrusion ratios. All extrudes were visually observed, and their general surface appearances fell into two categories: (a) good, and (b) excellent.
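The density measurement from masses in air and water follows Archimedes' principle, ρ = m_air · ρ_water / (m_air − m_water). A minimal sketch (the masses below are hypothetical, chosen only to illustrate a near-full-density aluminium extrude; they are not measurements from the paper):

```python
def archimedes_density(mass_air_g, mass_water_g, rho_water=0.9982):
    """Archimedes' principle: rho = m_air * rho_water / (m_air - m_water).

    rho_water defaults to the density of water near 20 C (g/cc).
    """
    return mass_air_g * rho_water / (mass_air_g - mass_water_g)

# hypothetical masses for an extruded aluminium specimen
rho = archimedes_density(5.4000, 3.3950)
```

Dividing the result by the theoretical density of the blend then gives the per cent density values tabulated below.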
Further, the densities attained at every extrusion temperature and extrusion ratio, together with the tensile properties, are also recorded. During all extrusion experiments a paste consisting of molybdenum disulphide and graphite was employed as a lubricant in order to reduce the frictional constraints while carrying out extrusion.

III.1 Effect of Lubricant on Extrusion Quality
The extrudes obtained at each of the extrusion ratios and extrusion temperatures were visually observed, and it was found that, in general, the surface appearance was good to excellent. This observation is summarized in Table 3. The table reveals that in the case of the aluminium extrudes the surface quality was categorized as good, whereas for the extrudes of Al-6% TiO2 and Al-12% TiO2 the surface quality at all extrusion temperatures and extrusion ratios was categorized as an excellent surface finish.

Table 3. Surface Finish of the Extrudes of Aluminium, Al-6% TiO2 and Al-12%TiO2 at Both Extrusion Temperatures and Ratios
System      | Lubricant                        | Temperature of Extrusion (K) | Extrusion Ratio | Quality of Surface Finish
Al          | MoS2 + graphite paste in acetone | 773                          | 6:1             | Good
            |                                  | 773                          | 24:1            | Good
            |                                  | 823                          | 6:1             | Good
            |                                  | 823                          | 24:1            | Good
Al-6% TiO2  | MoS2 + graphite paste in acetone | 773                          | 6:1             | Excellent
            |                                  | 773                          | 24:1            | Excellent
            |                                  | 823                          | 6:1             | Excellent
            |                                  | 823                          | 24:1            | Excellent
Al-12% TiO2 | MoS2 + graphite paste in acetone | 773                          | 6:1             | Excellent
            |                                  | 773                          | 24:1            | Excellent
            |                                  | 823                          | 6:1             | Excellent
            |                                  | 823                          | 24:1            | Excellent

III.2 Effect of Experimental Parameters on the Final Density Attained, With Beginning and Ending Extrusion Pressures
Table 4 shows the effect of the extrusion ratios and extrusion temperatures on the pressure required to begin the extrusion and the pressure at the final stages of extrusion. It is observed from Table 4 that at constant extrusion temperature and constant extrusion ratio, the pressure to begin extrusion rose as the titania content was raised from 0.0% to 12%.
For instance, at an extrusion ratio of 6:1 and an extrusion temperature of 773 K, the pressures to begin extrusion are in increasing order, 126 MPa, 152 MPa and 160 MPa for aluminium, Al-6% TiO2 and Al-12%TiO2 respectively. The same holds for the ending pressure, and the above is true at both the higher extrusion temperature and the higher extrusion ratio. Once the extrusion ratio is raised, the pressure to begin extrusion goes up; this is true for all compositions at both extrusion temperatures. Further, observing Table 4, the percentage density achieved is highest when the extrusion temperature and the extrusion ratio are both at their higher values.

Table 4. Effect of Experimental Parameters such as Temperature and Extrusion Ratio on the Final Achieved Density
System     | Temperature of Extrusion (K) | Extrusion Ratio | Beginning Pressure (MPa) | Ending Pressure (MPa) | % Density Achieved
Al         | 773                          | 6:1             | 126                      | 139                   | 98.58
           | 773                          | 24:1            | 209                      | 286                   | 99.60
           | 823                          | 6:1             | 105                      | 128                   | 99.22
           | 823                          | 24:1            | 130                      | 215                   | 99.71
Al-6%TiO2  | 773                          | 6:1             | 152                      | 165                   | 98.59
           | 773                          | 24:1            | 250                      | 300                   | 99.73
           | 823                          | 6:1             | 125                      | 142                   | 98.76
           | 823                          | 24:1            | 230                      | 285                   | 99.73
Al-12%TiO2 | 773                          | 6:1             | 160                      | 190                   | 98.97
           | 773                          | 24:1            | 298                      | 315                   | 99.30
           | 823                          | 6:1             | 153                      | 170                   | 98.98
           | 823                          | 24:1            | 235                      | 250                   | 99.57

It is further observed that at an extrusion ratio of 24:1 and an extrusion temperature of 823 K, the density attained in each extrude was beyond 99 per cent of theoretical; hence, the properties were expected to be considerably enhanced, and in fact they were fairly high.

III.3 Effect of Composition, Temperature and Extrusion Ratio on Mechanical Properties
Table 5 shows the percentage density achieved and the tensile properties, namely ultimate tensile strength, percentage area reduction and percentage elongation, showing the influence of extrusion ratio, extrusion temperature and system composition.
It is observed that, irrespective of the extrusion temperature and the extrusion ratio, as the titania content in aluminium is raised from 0.0 to 12 per cent the values of ultimate tensile strength go up down the column, while the reverse is true for per cent elongation and per cent area reduction.

Table 5. Effect of Extrusion Temperature, Extrusion Ratio and Composition on Attained Per cent Density and Mechanical Properties

                             Extrusion ratio 6:1                     Extrusion ratio 24:1
Temp., K  Composition   %(ρf/ρth)  UTS, MPa  %El    %A.R.    %(ρf/ρth)  UTS, MPa  %El    %A.R.
773       Al            98.58      128       15.00  13.40    99.60      154       18.40  16.53
773       Al-6%TiO2     98.59      200       14.10  10.60    99.73      210       16.01  13.06
773       Al-12%TiO2    98.96      216        9.60   9.20    99.30      229       11.10   9.98
823       Al            99.22      144       16.40  13.80    99.71      176       18.60  17.41
823       Al-6%TiO2     98.76      206       14.81  11.01    99.73      220       17.41  14.38
823       Al-12%TiO2    98.98      232       10.13  10.13    99.57      243       13.40  11.43

As the extrusion ratio and the extrusion temperature were raised, the attained densities and the attained tensile strengths went up. This clearly establishes that, at higher extrusion ratios and higher extrusion temperatures, both the coherency of the mass and bond formation are enhanced. Although the addition of titania to aluminium as a dispersoid lowered the toughness, as indicated by per cent area reduction and per cent elongation, this drop is only marginal compared with the rise in ultimate tensile strength. Thus, the present investigation leads the way for the future production of aluminium-based composites for structural applications.

IV. CONCLUSIONS
Critical analysis of the experimental data obtained and the calculated parameters led to the following major conclusions:
1. The extrudes obtained possessed a very smooth surface finish owing to the use of the mixed molybdenum disulphide and graphite lubricant during extrusion.
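The strength-ductility tradeoff described above can be quantified directly from the Table 5 values; the short sketch below compares Al with Al-12%TiO2 at 773 K and a 6:1 ratio, using the values as printed, so the reader can weigh the rise in strength against the drop in elongation.

```python
# Percent change in UTS vs. percent change in elongation when 12% TiO2
# is added to aluminium (773 K, 6:1 extrusion ratio; values from Table 5).
def pct_change(old, new):
    return (new - old) / old * 100.0

uts_gain = pct_change(128, 216)    # UTS: 128 -> 216 MPa
el_loss = pct_change(15.0, 9.6)    # elongation: 15.0% -> 9.6%
print(f"UTS change: {uts_gain:+.1f}%")   # +68.8%
print(f"%El change: {el_loss:+.1f}%")    # -36.0%
```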
The surface finish was categorized as good for aluminium and excellent for the Al-6%TiO2 and Al-12%TiO2 systems.
2. The addition of titania to the aluminium matrix established a beneficial effect in enhancing the tensile strength, even though there was a mild drop in per cent area reduction and per cent elongation, i.e. a small drop in ductility and hence in toughness.
3. An increase in extrusion ratio and extrusion temperature, in combination with the lubricant, produced a strong coherent mass with an increase in percentage elongation and percentage area reduction, i.e. an increase in ductility and hence in toughness.
4. The pressure required to begin extrusion was established to increase when the extrusion ratio was higher and the extrusion temperature lower; the same effect was established by an increase in the titania content of the aluminium matrix.
5. The addition of titania to the aluminium matrix increased both the pressure required to begin extrusion and the pressure at the end of extrusion.
6. A high level of directionality of the titania dispersoid along the direction of extrusion is an anticipated possibility when the extrusion ratio and extrusion temperature are raised.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-484-493 www.ajer.org Research Paper Open Access

"J"-Radiation Is Mother Of Hydrogen?... (A New Theory On Supernature, Nature, Science)
M. Arulmani, B.E. (Engineer); V.R. Hema Latha, M.A., M.Sc., M.Phil. (Biologist)

Abstract: - This scientific research article proposes that the early void Universe shall be considered as formed from Hydrogen and CNO. J-Radiation shall be considered the Mother of Hydrogen. The 1st-generation CNO elements shall be considered three fundamental species to Hydrogen and regarded as the highly pure matter of the Universe. The article further proposes that the various life organisms could have originated in the Universe after the formation of the fundamental pure CNO environment (a mixture of Carbon, Nitrogen, Ozone) derived from Hydrogen. The existence of life under the fundamental CNO environment shall be considered "oxygen-free" 1st-generation life. CNO shall mean Carbon, Nitrogen, Ozone; Oxygen shall be considered a 3rd-generation element and a species to ozone. Ozone is like "ICE"; oxygen is like "WATER". It is speculated that the 1st-generation life might have subsisted with OZONE BREATH.

In the expanding Universe, the subsequent evolution of oxygen in modern time (3rd generation) might have added more impurities to the fundamental CNO environment in three different geological periods, and the Universe became polluted. Present-day climate-change and global-warming issues might be due to severe CNO imbalance caused by the growth of a higher oxygen and lower carbon level. Further, the increase in the cancer growth rate is considered due to the higher content of oxygen in the atmosphere.

Keywords: 1) Philosophy of J-Radiation 2) Philosophy of Pure Science 3) Philosophy of Pure Colours 4) Philosophy of Pure Energy 5) Philosophy of Pure Human
"The prehistoric life might have switched over gradually from ozone breath to oxygen breath due to Natural Selection in the 2nd and 3rd generations. The oxygen-free environment shall be considered a Highly Pure environment." - Author.

I. INTRODUCTION
In the existing universe it is proposed that environmental change shall be considered as change in the CNO environment, which shall result in altering the fundamental properties of pre-existing matter. Further, for the occurrence of the "CNO Cycle" there is no current theory of how CNO variation is initiated. If it is due to Nature, then what is Nature?... If it is due to an act of Supernature or Science, then what does Supernature mean?... What does Science mean?... Even within Science there is a thought about the existence of Pure Science. If so, what does Pure Science mean?... Some theories further speak of a mixture of Nature and Science called Natural Science. If so, what does Natural Science mean? I am sorry… Now I am totally confused!... Shall Science confuse the Human mind?...

What we observe is Nature!... What we Experience is Science!!... The Real Truth behind the mystery is Supernature!!!... - Author

II. PREVIOUS PUBLICATION
The philosophy of the origin of first life and human, the philosophy of the model Cosmo Universe, and the philosophy of fundamental neutrino particles have already been published in the various international journals mentioned below. Hence this article shall be considered an extended version of the previous articles already published by the same author.
[1] Cosmo Super Star – IJSRP, April issue, 2013
[2] Super Scientist of Climate control – IJSER, May issue, 2013
[3] AKKIE MARS CODE – IJSER, June issue, 2013
[4] KARITHIRI (Dark flame) The Centromere of Cosmo Universe – IJIRD, May issue, 2013
[5] MA-AYYAN of MARS – IJIRD, June issue, 2013
[6] MARS TRIBE – IJSER, June issue, 2013
[7] MARS MATHEMATICS – IJERD, June issue, 2013
[8] MARS (EZHEM) The mother of All Planets – IJSER, June issue, 2013
[9] The Mystery of Crop Circle – IJOART, May issue, 2013
[10] Origin of First Language – IJIRD, June issue, 2013
[11] MARS TRISOMY HUMAN – IJOART, June issue, 2013
[12] MARS ANGEL – IJSTR, June issue, 2013
[13] Three principles of Akkie Management – AJIBM, August issue, 2013
[14] Prehistoric Triphthong Alphabet – IJIRD, July issue, 2013
[15] Prehistoric Akkie Music – IJST, July issue, 2013
[16] Barack Obama is Tamil Based Indian? – IJSER, August issue, 2013
[17] Philosophy of MARS Radiation – IJSER, August 2013
[18] Etymology of word "J" – IJSER, September 2013
[19] NOAH is Dravidian? – IJOART, August 2013
[20] Philosophy of Dark Cell (Soul)? – IJSER, September 2013
[21] Darwin Sir is Wrong?! – IJSER, October issue, 2013
[22] Prehistoric Pyramids are RF Antenna?!... – IJSER, October issue, 2013
[23] HUMAN IS A ROAM FREE CELL PHONE?!... – IJIRD, September issue, 2013
[24] NEUTRINOS EXIST IN EARTH ATMOSPHERE?!... – IJERD, October issue, 2013
[25] EARLY UNIVERSE WAS HIGHLY FROZEN?!... – IJOART, October issue, 2013
[26] UNIVERSE IS LIKE SPACE SHIP?!... – AJER, October issue, 2013
[27] ANCIENT EGYPT IS DRAVIDA NAD?!... – IJSER, November issue, 2013
[28] ROSETTA STONE IS PREHISTORIC "THAMEE STONE"?!... – IJSER, November issue, 2013
[29] The Supernatural "CNO" HUMAN?... – IJOART, December issue, 2013
[30] 3G HUMAN ANCESTOR?... – AJER, December issue, 2013
[31] 3G Evolution?... – IJIRD, November issue, 2013
[32] God Created Human?... – IJERD, December issue, 2013
[33] Prehistoric "J" – Element?... – IJSER, December issue, 2013
[34] 3G Mobile phone Induces Cancer?... – IJERD, December issue, 2013
[35] "J" Shall Mean "JOULE"?... – IRJES, December issue, 2013
[36] "J"- HOUSE IS A HEAVEN?... – AJER, December issue, 2013
[37] The Supersonic JET FLIGHT-2014?... – IJSER, January issue, 2013

III. HYPOTHESIS
As per existing current theories, there is no clear, distinguished definition of:
i) What does Supernature mean?
ii) What does Nature mean?
iii) What does Science mean?

(a) It is hypothesized that Supernature, Nature and Science shall be considered a "three-in-one" family of the Universe, and these three members of the family cannot be separated. DARK FLAME shall be considered the Hydrogen-based internal energy of the Supernature.
(b) The fundamental Hydrogen and CNO shall be considered NATURAL MATTER derived from "J-Radiation".
(c) In medical terms, Supernature, Nature and Science shall be considered equivalent to RNA (Supernature), DNA (Nature) and HORMONE (Science).
(d) In scientific codal language, the Supernature, Nature and Science of the Universe shall be formulated as below.
(i)–(iii) [formulations given as figures in the original]
(e) In the early Universe there was no existence of matter; the whole Universe shall be considered VOID and regarded as a matter-free Universe.
The philosophy of Supernature, Nature and Science shall be considered three integral parts of the Cosmo Universe, as mentioned below:
(i)–(ii) [figures in the original]
Region I – Perfect vacuum region (Anti-Neutrino radiation)
Region II – Partial vacuum region (Neutrino radiation)
Region III – Observable vacuum region (EMR radiation)
(iii) [figure in the original]

(f) The three HCNO phases shall be considered as existing due to the impact of three different types of EVOLVED RADIATION in three geological periods, as mentioned below:
(i) 1st Generation – UV Radiation only
(ii) 2nd Generation – UV, RF Radiation only
(iii) 3rd Generation – UV, RF, IR Radiation

(g) Region I of the Universe shall be considered a highly frozen zone, Region II a moderately frozen zone (Tesla region), and Region III a highly hot zone (Einstein region).

IV. HYPOTHETICAL NARRATIONS
a) It is proposed that Natural Matter and natural substance shall be considered absolutely free from Hydrogen and CNO elements. In the early universe the first chemical elements, Hydrogen and the CNO molecule, shall be considered as originated from an absolutely pure, Hydrogen-free Natural Radiation called "J-Radiation" or White Flame. In the Cosmo Universe all the Hydrogen- and early CNO-based matter, such as planets, gas molecules, humans, animals and plants, shall be considered as derived from J-RADIATION. Thousands of various other radiations, such as Alpha, Beta and Gamma, shall be considered as HCNO-content and species radiations to "PURE J-RADIATION".

"J-Radiation shall be considered absolutely free from Hydrogen and CNO elements, containing only the Neutrino Particles Photon, Electron, Proton. The Neutrino Particles shall also be called "GOD Particles"." - Author.

b) J-Radiation shall be considered as containing three-in-one fundamental energy properties, i.e. Electric, Magnetic and Optic, which shall be considered derived due to the impact of Anti-Neutrinos.
The Hydrogen and Carbon, Nitrogen, Ozone (CNO) required for life shall be considered 1st-generation pure elements derived from "J-Radiation". All other elements in the Universe shall be considered "Species Elements" with added impurities, called "3G Matters", formed under the varied environment of the three generations of the CNO Cycle due to UV, RF and IR impact in three geological periods.

c) The molecular-structure properties of all matter in the Universe, i.e. its physical, chemical and mathematical properties, vary under different compositions of pressure, temperature and density. The varied molecular structure of matter shall be considered like the varied "SIM NUMBER" in a mobile-phone system.

d) The 1st generation of humans and life organisms shall be considered derived from the billions of rays emitted from J-Radiation under the 1st-generation Hydrogen, CNO environment. The 1st-generation humans and all life organisms shall be considered as having "DARK COLOUR" at origin, and the varied colour properties might be derived from the 2nd- and 3rd-generation CNO environments.

"Dark colour and white colour shall be considered the fundamental pure colours. "White born of Dark". Billions of other colours shall be considered colours with added impurities and species to the fundamental dark and white." - Author.

e) The fundamental study of the various cosmic radiations and the associated varied mass levels of the fundamental Neutrino Particles (Photon, Electron, Proton) shall be considered "PURE SCIENCE".

"Pure Science shall be considered the study of the physical, chemical and mathematical behaviour of fundamental particle radiation, rather than the study of various auxiliary branch areas such as medical science, plant science, animal science, environmental science, etc." - Author.

f) "Pure human" shall be considered the 1st-generation human who lived under the 1st-generation Hydrogen, CNO environment and could breathe highly purified "OZONE AIR".
The 2nd- and 3rd-generation humans (the so-called modern and post-modern humans) shall be considered "impure humans" who breathe oxygen air, which shall be considered a species air to the highly pure ozone air. The pure human shall be considered a "Creative Product" or "Natural Human".

g) The entire Cosmo Universe shall be considered like a "Supernatural Human" having an infinite level of dark matter and dark energy. J-Radiation shall be considered the pure creative force of all matter of the Universe. The dark matter and dark energy shall be considered full of Anti-Neutrino Particles, and J-Radiation full of Neutrino Particles. Neutrino Particles shall be considered as possessing exactly opposite charge properties and characteristics to Anti-Neutrino Particles. The whole Cosmo Universe shall be considered as "three-in-one" regions, as stated below:
a) Region I – "BODY" (Structural)
b) Region II – "HEART" (J-Logics)
c) Region III – "MIND" (Functional)

1. Lamp is like Universe?...
The lamp shall be considered as the Universe, composed of the three-in-one parameters of Supernature, Nature, Science:
i. Fuel – Dark energy
ii. Thiri (thread) – Neutrino particles
iii. Flame – Creative rays (J-Radiation)

2. Candle is like Universe?...
The candle light shall be considered as the Universe, composed of the three-in-one parameters of Supernature, Nature, Science:
i. Wax – Science
ii. Thiri (thread) – Nature
iii. Flame – Creative rays (J-Radiation)

3. Morning Star is like Universe?...
The star illumination shall be considered as the Universe, composed of the three-in-one Neutrino particles responsible for the emission of J-Radiation. The thousands of illuminating colour stars in the sky shall be considered illuminating Neutrino Particles.

4. Philosophy of H, CNO Variation?...
It is hypothesized that the fundamental, initially created H, CNO matter shall be considered to have undergone three major genetic changes in three geological periods due to change in the relative positions of SUN, EARTH and MOON, which is considered the base of the Cosmo Universe, rather than due to the various current theories about CNO, HCNO variation.

5. Philosophy of Oxygen Origin?...
It is proposed that oxygen shall be considered a 3rd-generation matter formed only in modern periods due to the impact of 3rd-generation radiation and the 3rd-generation HCNO phase. Prehistoric life shall be considered as living with super immunity under an oxygen-free environment.

6. Oxygen influences Cancer and Climatic Changes?...
It is proposed that the growth of a higher oxygen and lower carbon level shall be considered as having a great impact on fast cancer growth and on frequent climate-change issues such as frequent cyclones, volcanic activity, sudden forest fires, and the growth of new disease microbes and bacteria.

7. Oxygen influences O-type blood evolution?...
It is proposed that the evolution of oxygen in modern time shall be considered as having a great impact on the evolution of O-type blood and WBC in humans. It is proposed that prehistoric humans had only a single blood type, AB. Three more blood types might have originated in three different geological periods due to the impact of three different radiations and three CNO phases:
i. AB type – Human origin
ii. AB, A types – 1st Generation
iii. AB, A, B types – 2nd Generation
iv. AB, A, B, O types – 3rd Generation

8. Biblical ADAM, EVE, ANGEL could breathe Ozone?...
It is proposed that ADAM, EVE and ANGEL shall be considered 1st-generation human populations subsisting with ozone breath and eating MANNA food. Manna food shall be considered holy food derived from "J-Radiation". JESUS CHRIST shall be considered a 3rd-generation population, born of O-type blood and WBC origin.
The three different generations based on Biblical understanding shall be narrated as below:
i. Angel, Adam, Eve – AB type only (1st Generation)
ii. NOAH – AB, A, B types only (2nd Generation)
iii. Jesus Christ – AB, A, B, O types (3rd Generation)

Further, the philosophy of the word "AMEN" shall be considered as creation logic, also called J-Logic or Holy Spirit. The creator of the Universe shall be called Supernature or JEHOVAH.
(i)–(iii) [formulations given as figures in the original]

The philosophy of J-LOGIC shall alternatively be called as follows:
- J-logic shall mean law of Universe
- J-logic shall mean first-born element
- J-logic shall mean carol of Ariro… Araro…
- J-logic shall mean new hope of good news
- J-logic shall mean virgin mother
- J-logic shall mean mother of hydrogen
- J-logic shall mean mercy of J-mass
- J-logic shall mean J-angel of "Indo-Canaanite"
- J-logic shall mean casteless blood of holy communion
- J-logic shall mean new covenant of peace
- J-logic shall mean born love of holy cross
- J-logic shall mean radiation hope of morning star
- J-logic shall mean redemption plan of J-kingdom
- J-logic shall mean Canaanites of J-family
- J-logic shall mean pyramid of confidence
- J-logic shall mean holistic medicine of olive leaves
- J-logic shall mean holy water of Jordan
- J-logic shall mean holy food of Manna
- J-logic shall mean command of Sinai
- J-logic shall mean holy temple of new Canaan
- J-logic shall mean "POPE-AMMA" of New Jerusalem
- J-logic shall mean visible Jehovah
- J-logic shall mean "AMEN"

V. CONCLUSION
The three-in-one Neutrino Particles shall be considered the "Law of the Universe" which makes the Universe:
i) GOD is like "Supernature"
ii) HUMAN is like "Nature"
iii) LAW is like "SCIENCE"

"What is observed is Science and Nature. What is Truth is Supernature." - Author.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-450-454 www.ajer.org Research Paper Open Access

Efficiency Assessment of a Constructed Wetland Using Eichhornia Crassipes for Wastewater Treatment
David O. Olukanni and Kola O. Kokumo
Department of Civil Engineering, Covenant University, P.M.B. 1023, Ota, Ogun State, Nigeria.

Abstract: - The practice of treating municipal wastewater at low cost prior to its disposal is continually gaining attention in developing countries. Among the current processes used for wastewater treatment, constructed wetlands have attracted interest as the unit process of choice owing to their low cost and efficient operation in tropical regions. The aim of this study is to assess the efficiency of a constructed wetland that uses water hyacinth for wastewater treatment and to investigate the impact of the hydraulic structures on the treatment system. The study also involves determining the efficiency of water hyacinth in polishing biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), phosphate, magnesium, zinc, nitrate, chloride, sulphate, potassium, pH and faecal coliform. Two samples each were collected and tested from the six WHRB reactors available at Covenant University. The wetland achieved removal efficiencies of 70% for BOD, 68% for COD, 41% for total solids (TS), 100% for zinc, 30% for nitrate, 38% for chloride, 94% for sulphate and 2% for potassium. The results also show increases of 6% in pH and 29% in phosphate, and a significant increase in magnesium. The study shows that constructed wetlands are capable of treating wastewater and also emphasizes the sustainability of the technology.

Keywords: - Wastewater, Developing Countries, Low Cost, Constructed Wetland, Eichhornia Crassipes

I. INTRODUCTION
Conventional wastewater treatment technologies used in most industrialized nations are currently not feasible options for providing environmental and public-health protection in many developing countries [1]. In these countries the treatment of wastewater has been a great concern, and it is well known that most of the projected global population increase will take place in third-world countries that already suffer from land, water, food and health problems. The greatest challenge in the water and sanitation sector over the next two decades will be the implementation of low-cost wastewater treatment that at the same time permits selective reuse of treated effluents for agricultural and industrial purposes [2]. In most developing countries, especially in Africa, wastewater is simply too valuable to waste [3]; its water and nutrients (nitrogen and phosphorus) are needed for crop irrigation and fish culture [4] [5]. However, the construction cost of a conventional wastewater treatment plant has been a major barrier to the implementation of conventional technologies by local authorities in many African countries [6]. Although these technologies are very effective, they are expensive to build and maintain, and they also require skilled personnel and technical expertise to operate [7]. Consequently, while water-borne diseases such as cholera and diarrhea have persisted because of inadequacies in wastewater treatment systems, developing nations are unable to incorporate these technologies as part of a wastewater treatment master plan. It is therefore imperative that a treatment system that is economical and sustainable be put in place. As a result, decision makers are looking for alternatives that could be used as complementary methods to reduce treatment costs.
Among the current processes used for wastewater treatment in tropical regions, constructed wetlands have attracted interest as the unit process of choice owing to their low energy consumption, low maintenance, high sustainability and efficient operation, and because they constitute an ecosystem that uses natural processes [8] [9] [10]. Constructed wetlands are engineered systems designed and built to utilize the natural processes involving wetland vegetation, soils and the associated microbial assemblage to assist in the treatment of wastewater. Constructed wetlands are based upon the symbiotic relationship between the microorganisms and the pollutants in the wastewater [11].

Some of the different wastewater treatment processes in use globally are activated sludge, biological filters, oxidation ditches, aerated lagoons, waste stabilization ponds (WSP) and constructed wetlands. In developing countries the number of choices may be higher as a result of the more diverse discharge standards encountered. Wetlands serve thousands of communities around the world. They are effective in wastewater treatment and offer potential for resource recovery through the production of biomass, which can be used as human and animal food. The growing interest in wetland systems is due in part to the recognition that natural systems offer advantages over conventional systems. Various wetland systems incorporate the use of different plants as a means of nutrient and pathogenic-organism removal. Wetland plants have the ability to transport atmospheric oxygen and other gases down through the roots into the water column. Within the water column, the stems and roots of wetland plants provide significant surface area for the attachment of microbial populations.
Water hyacinth (Eichhornia crassipes), duckweed (Lemna spp.), Spirodela spp., Wolffia spp., totora and cattails, among others, are plants that are very efficient in removing a vast range of pollutants, from suspended materials, BOD, nutrients and organic matter to heavy metals and pathogens [12]. Eichhornia crassipes can be distinguished from the others by its highly glossy leaves. Water hyacinth has demonstrated that it is an excellent pollutant remover for wastewaters [13] [14]. This study is aimed at assessing the efficiency of the constructed wetland that uses water hyacinth [water hyacinth reed bed (WHRB)] for pollutant removal in Covenant University, and at investigating how the system can be improved if necessary. II. MATERIALS AND METHODS 2.1 Description of the study area Covenant University, within Canaan land in Ota town, is in close proximity to the city of Lagos, Nigeria. The institution has experienced an increasing population since its inception in 2002, with a current population of over 9,000 people. Wastewater from septic tanks in isolated locations within Canaan land is taken by water tankers (Plate 1) for discharge into a primary clarifier, from which it subsequently flows into a secondary clarifier and then into the CW (water hyacinth reed bed). The primary clarifier was measured to have a volume of 720 m³ (15 m × 13.7 m × 3.5 m). The secondary clarifier has an area of 261 m² (17.41 m × 15 m) and a depth of 5 m. These tanks function like anaerobic ponds, within which the biochemical oxygen demand (BOD) and total solids are substantially reduced by sedimentation and anaerobic digestion before the partially treated effluent enters a diversion chamber. It is from this point that the wastes are fed into the hyacinth beds (Plate 2). The constructed wetland is a Free Water Surface (FWS) type.
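The clarifier volumes quoted above follow directly from the stated dimensions; a minimal sketch (the helper name is ours, not from the paper) reproduces the figures:

```python
# Sketch reproducing the clarifier geometry figures quoted in the text
# from the stated dimensions; the helper name is ours, not from the paper.

def box_volume(length_m: float, width_m: float, depth_m: float) -> float:
    """Volume of a rectangular tank in cubic metres."""
    return length_m * width_m * depth_m

# Primary clarifier: 15 m x 13.7 m x 3.5 m, quoted as about 720 m^3.
primary_volume = box_volume(15.0, 13.7, 3.5)   # 719.25 m^3

# Secondary clarifier: 17.41 m x 15 m in plan (quoted as 261 m^2), 5 m deep.
secondary_area = 17.41 * 15.0                  # 261.15 m^2
secondary_volume = secondary_area * 5.0        # 1305.75 m^3

print(primary_volume, secondary_area)
```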
As shown in Figure 1, the reed beds consist of six units of concrete facultative aerobic tanks 1.2 m deep, each partitioned into four cells with an internal surface area of 5.70 m by 4.80 m and with influx of wastewater into each cell at alternate ends of the partition walls (Plate 3). The effective depth of each cell is about 0.9 m, giving a volume of 23.16 m³ with a freeboard of 0.30 m. The final effluent discharges into an outfall (Plate 4) that is about 8 m long and empties into a perennial stream that drains the campus and forms a tributary of the River Atuara, a few kilometers from the campus. Plate 1 shows a tanker discharging wastewater into the treatment chamber, while Plate 2 shows the water hyacinth (Eichhornia crassipes) beds with the baffle arrangement at opposing edges. Plate 3 shows water hyacinth treating wastewater, and Plate 4 shows effluent discharging through the outfall into the thickly vegetated valley. Grab samples of the raw influent and treated effluent from the existing water hyacinth reed bed were collected and analyzed in the laboratory for BOD5, faecal coliform, pH, temperature, COD, suspended solids, total solids, nutrients and heavy metals. The variation of influent and effluent parameters (physical, chemical, bacteriological and physico-chemical characteristics) was determined. Figure 1 shows the layout of the constructed wetland [water hyacinth reed beds (WHRB)] in Covenant University and the wastewater collection points. III. RESULTS AND DISCUSSION 3.1 Physico-Chemical Parameters Table 1 shows the performance evaluation of the constructed wetland. There was a significant reduction in turbidity, with a performance of 40% reduction.
Higher turbidity levels are often associated with higher levels of disease-causing microorganisms such as viruses, parasites and some bacteria. There was an increase in the pH value, which ranged from 6.16 to 6.59, with a constant temperature of 27 °C across all the reactors. Though the optimum pH for bacteria to function is between 7.5 and 8.5, most treatment plants are able to nitrify effectively at a pH of 6.5 to 7.0. The Total Suspended Solids (TSS) were reduced by 56% at the outlet of the final reactor. However, this does not meet the standard of the Federal Environmental Protection Agency (FEPA) [15], now named the "National Environmental Standards and Regulations Enforcement Agency" (NESREA), which recommends a limit of 30 mg/L for TSS. This means that the TSS concentration in the system is high and should be further reduced. The TSS include silt, clay, plankton, organic wastes and inorganic precipitates. The treatment plant had little effect on the total dissolved solids (TDS). Though the TDS concentration is well below the 2000 mg/L standard limit given by FEPA, its concentration in the effluent can still be reduced. It can also be deduced that most of the TDS had already been removed in the primary and secondary clarifiers. The Total Solids (TS) were considerably reduced; although there is no specified limit for the amount of solids expected in wastewater, the treatment system performed significantly, reducing the total solids by 41.18%. A 37% reduction in chloride concentration was achieved by the treatment system, and the effluent chloride concentration is well below the 600 mg/L standard recommended by FEPA. It is a known fact that the chloride content of wastewater usually increases as its mineral content increases, and vice versa. The phosphate concentration increased very slightly, but it remains well under the 5 mg/L recommendation.
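The removal efficiencies reported in this section and in Table 1 are simple influent/effluent percentage reductions; a short sketch (the function name is ours) reproduces the quoted figures:

```python
# Sketch of the removal-efficiency calculation behind the quoted percentages;
# the function name is ours, not from the paper.

def percent_removal(influent: float, effluent: float) -> float:
    """Removal efficiency in percent; a negative value means an increase."""
    return (influent - effluent) / influent * 100.0

turbidity_removal = percent_removal(136.0, 82.0)   # ~39.7 %, reported as ~40 %
tss_removal = percent_removal(168.0, 74.0)         # ~55.95 %, reported as 56 %
ts_removal = percent_removal(255.0, 150.0)         # ~41.18 %
```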
The slight increase in phosphate concentration could be a result of dead and decayed water hyacinth plants in the reactors. The nitrate and sulphate contents were reduced by 30% and 94%, respectively, amounts that are acceptable for discharge into natural water bodies. The BOD/COD ratio reveals the treatability of the wastewater: if the ratio is above 0.5 the wastewater is considered highly biodegradable, while if it is below 0.3 the wastewater must undergo chemical treatment before routine biological treatment. For the University treatment plant the BOD to COD ratio is 0.85; it is therefore concluded that the wastewater generated on the campus is highly biodegradable. The CW and its associated water hyacinth plants were considered to have little or no effect on the concentrations of magnesium and potassium. In fact, a highly significant increase in the magnesium content of the wastewater was observed. Magnesium and potassium content can slow down COD removal at certain concentrations, while a fair decrease in their levels can rapidly enhance COD removal. Though the magnesium content increased, it is still well below the 200 mg/L limit in wastewater recommended by FEPA. Zinc was effectively removed from the wastewater by the CW system. Table 1.
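The BOD/COD treatability rule described above can be expressed as a tiny classifier (a sketch; the function name and the intermediate label are ours). The Table 1 influent values (BOD 298.35 mg/L, COD 330.50 mg/L) give a ratio of about 0.90, which falls in the same "highly biodegradable" class as the 0.85 quoted in the text:

```python
# Sketch of the BOD/COD treatability rule from the text: above 0.5 the
# wastewater is highly biodegradable, below 0.3 it needs chemical
# pre-treatment before biological treatment. The function name and the
# intermediate label are ours, not from the paper.

def treatability(bod_mg_l: float, cod_mg_l: float) -> str:
    ratio = bod_mg_l / cod_mg_l
    if ratio > 0.5:
        return "highly biodegradable"
    if ratio < 0.3:
        return "chemical pre-treatment required"
    return "moderately biodegradable"

# Table 1 influent values for the campus wastewater.
print(treatability(298.35, 330.50))   # highly biodegradable
```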
Overall performance of the treatment between the WHRB 1 influent and the WHRB 6 effluent for the parameters tested:

Parameter                           WHRB 1 Influent   WHRB 6 Effluent   % increase   % decrease
Turbidity                           136               82                -            39.70
pH                                  6.16              6.56              6.49         -
Total Solids (mg/L)                 255.00            150.00            -            41.18
Total Suspended Solids (mg/L)       168.00            74.00             -            55.95
Total Dissolved Solids (mg/L)       87.00             76.00             -            12.64
Chloride (mg/L)                     259.93            162.45            -            37.50
Phosphate (mg/L)                    0.113             0.146             29.20        -
Nitrate (mg/L)                      0.04              0.028             -            30.00
Sulphate (mg/L)                     0.20              0.012             -            94.00
Chemical Oxygen Demand (mg/L)       330.50            105.19            -            68.17
Biochemical Oxygen Demand (mg/L)    298.35            90.43             -            69.69
Magnesium (mg/L)                    9.00              26.00             188          -
Zinc (mg/L)                         0.04              ND                -            100.00
Potassium (mg/L)                    25.41             24.67             -            2.91

IV. CONCLUSION AND RECOMMENDATION The constructed wetland with hydrophytes (water hyacinth) is capable of removing pollutants, and the hydrophytes have shown the ability to survive in high concentrations of nutrients with significant nutrient removal. The system has a reliable nutrient-stripping capacity for the removal of the trace elements tested in this study. The use of a water hyacinth aquatic system can help reduce eutrophication effects in receiving streams and also improve water quality. It is recommended that more reactors be added to the treatment plant to enhance further settling of solids and to give the wastewater more exposure to bacteria and water hyacinth, so that more nutrients are removed from the wastewater. Improvement is also possible by increasing the retention time of the wastewater in each compartment of the constructed wetland and by providing a means of aeration at the final discharge point. REFERENCES [1] D.O. Olukanni, J.J. Ducoste, Optimization of waste stabilization pond design for developing nations using computational fluid dynamics. Ecological Engineering, 37, 2011, 1878-1888. [2] P.S.
Navaraj, Anaerobic waste stabilization ponds: a low-cost contribution to a sustainable wastewater reuse cycle, 2005, navaraj678@sify.com. [3] World Health Organization, UNICEF, Global water supply and sanitation assessment report. Geneva, 2000. [4] D. Ghosh, Turning around for a community-based technology: towards a wetland option for wastewater treatment and resource recovery that is less expensive, farmer-centered and ecologically balanced. Environment Improvement Programme, Calcutta Metropolitan Development Authority, Calcutta, 1996. [5] D.D. Mara, Appropriate wastewater collection, treatment and reuse in developing countries. Proceedings of the Institution of Civil Engineers, London, 2001, 299-303. [6] D.O. Olukanni, S.A. Aremu, Water hyacinth based wastewater treatment system and its derivable by-product. Journal of Research Information in Civil Engineering, 5(1), 2008, 43-55. [7] D.O. Olukanni, Evaluation of the influence of reactor design on the treatment performance of an optimized pilot-scale waste stabilization pond. IJET, 3(2), 2013, 189-198. [8] A. Al-Omari, M. Fayyad, Treatment of domestic wastewater by subsurface flow constructed wetlands in Jordan. Desalination, 155, 2003, 27-39. [9] E.A. Korkusuz, M. Beklioglu, G.N. Demirer, Comparison of the treatment performances of blast furnace slag-based and gravel-based vertical flow wetlands operated identically for domestic wastewater treatment in Turkey. Ecological Engineering, 24, 2005, 187-200. [10] A. Yasar, Rehabilitation by constructed wetlands of available wastewater treatment plant in Salkhnin. Ecological Engineering, 29, 2007, 27-32. [11] M. Stomp, K.H. Han, S. Wilbert, M.P. Gordon, S.D. Cunningham, Genetic strategies for enhancing phytoremediation. Ann. New York Acad. Sci., 721, 1994, 481-491. [12] S. Dhote, S. Dixit, Water quality improvement through macrophytes: a case study. Asian J. Exp. Sci., 21(2), 2007, 427-430. [13] M.A.
Maine, Nutrient and metal removal in constructed wetland for wastewater treatment from metallurgical industry. Ecological Engineering, 26(4), 2006, 341-347. [14] K. Skinner, Mercury uptake and accumulation by four species of aquatic plants. Environmental Pollution, 145(1), 2007, 234-237. [15] Federal Environmental Protection Agency (FEPA), Guidelines to standards for environmental pollution control in Nigeria. FEPA, Lagos, Nigeria, 1991, 90-91.
American Journal of Engineering Research (AJER), 2013, e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-265-275, www.ajer.org. Research Paper, Open Access. Studies of the Formation of Submicron Particle Aggregates under the Influence of Ultrasonic Vibrations. Dr. V.N. Khmelev1, A.V. Shalunov1 Ph.D., R.N. Golykh1, V.A. Nesterov1, K.V. Shalunova1 Ph.D. 1 Biysk Technological Institute (branch) of Altai State Technical University named after I.I. Polzunov, Biysk, Russia. Abstract: - The article presents the results of studies of the formation of aggregates at ultrasonic coagulation of submicron particles. The proposed mathematical model of the behaviour of two separate particles reveals the modes of ultrasonic influence providing the shortest time of convergence of the dispersed particles, and it helps to establish the most probable form of the generated aggregates. The model takes into consideration the specific features of the flow around small particles (less than 1 μm): the deviation of their form from the spherical one and the presence of rotational motion under the influence of the ultrasonic field. The analysis of the model shows that the optimum frequency range of influence is 20-50 kHz, provided the level of acoustic pressure is more than 150 dB. It is established that at the initial stage of the coagulation the aggregates are ellipsoids of revolution with the most probable ratio of semi-axes of 2.8, oriented along the field. The basic mechanism of such aggregate formation is the approach of submicron particles under the influence of Oseen forces generated by the perturbations of the second-order flow field. As the aggregates grow during the coagulation, their rotational motion arises, which makes a great contribution to the mechanism of the particle coagulation.
At the final stage of the coagulation, when the aggregates reach 200 μm and more in transverse size, their final orientation across the field takes place. The obtained results can serve as a basis for understanding aerosol evolution. The theoretical results allow working out requirements to the radiators of ultrasonic vibrations for the realization of the process with maximum efficiency. Keywords: - Aerosol, coagulation, hazardous emissions, submicron particles, ultrasound. I. INTRODUCTION One of the consequences of the rapid growth of industry is a noticeable worsening of the state of atmospheric air. The main sources of air pollution are industrial enterprises, thermal power stations, transport, etc. Technological processes of different industrial branches are accompanied by the emission of dust-laden gases, which pollute the production and ecological environment, impede the progress of technological processes and worsen the quality of the final product. According to specialists' estimations, at present industry daily emits into the atmosphere up to one billion tons of aerosols. Most of the aerosols formed in industry are fine-dispersed ones: the size of the particles is less than 1 μm. Such an aerosol is especially dangerous for people's health, as it can easily penetrate into human lung alveoli and the blood vascular system. In view of all mentioned above, the protection of atmospheric air from the pollution of industrial emissions remains one of the main modern problems. Along with harmful emissions, many technological processes at the enterprises of various branches of industry are accompanied by the release of aerosols containing the final product in the form of particles of submicron and nanometric size. There is a need to capture the final product during the production process in the field of nanotechnologies and in the food, chemical and mining industries. Thus, it is also necessary to develop the technology of capture of final-product particles from gas-dispersed systems.
The necessity of solving the problems listed above determines the urgency of the issues aimed at the design of equipment for high-efficiency capture of submicron particles from gas-dispersed systems. To collect dispersed particles, a wide range of apparatuses of dry and wet dust cleaning using different separation mechanisms (settling chambers, various cyclones, electric or textile filters) is applied. However, the sphere of application of all known apparatuses is limited. This is caused by low efficiency, by the necessity of replacement or cleaning of the filtering element, and sometimes by the fundamental impossibility of capturing submicron particles. The most efficient of all existing dust collectors are inertial and centrifugal apparatuses, which successfully prove themselves at the capture of micron particles. However, the collection of submicron particles by inertial apparatuses is inefficient. The superposition of external actions (such as steam-coagulation coalescence of the particles or a change of surface tension) can influence the efficiency of aerosol settling, but the original properties of the finished product can be changed, which is inadmissible for its further application. The most promising way of increasing the efficiency of submicron particle capture is their preliminary coagulation in high-intensity acoustic fields. Acoustic influence (acoustic coagulation) provides an increase of the particle size by 10-35 times relative to the initial size. Preliminarily coagulated particles of industrial aerosols can be collected by existing or specially developed settling methods without any difficulties. In spite of the fact that much has been done in the studies of the physics of the process and the industrial application of acoustic coagulation of aerosols [1-13] (the first investigations were carried out by S.V. Gorbachyev and A.B. Severnyi, O. Brandt, H. Freund, E. Hiedemann, H.W. St.
Clair and others in the early 1930s [1-8]), up to the present moment there has been no systematic theoretical and experimental research explaining the mechanism of the coagulation of dispersed particles in the acoustic field. The lack of understanding of the mechanism of submicron particle coagulation makes it impossible to determine the optimum modes of influence (the level of acoustic pressure and the frequency) on gas-dispersed systems, depending on their characteristics (concentration, dispersed composition, rate of the powder-gas flow), that provide maximum efficiency of the process. Besides that, the existing theories [1-13] do not take into account the features of the flow of the gas medium around submicron particles in the acoustic field. Among these features the most important are the following: the dominance of the forces of viscous stress due to the small size of the particles, as the Reynolds number does not exceed 0.1 even at very high levels of acoustic pressure (up to 165 dB); and the deviation of the form of solid particles and their aggregates from the spherical one, leading to their rotation, while liquid drops always have a spherical form, as they are in the state of minimum potential energy of surface tension. The absence of studies of the influence of these features on the coagulation process in the ultrasonic field does not allow obtaining valid data on the optimum parameters of influence. To determine the optimum parameters of influence it is necessary to investigate the mechanism of formation of fine-dispersed particle aggregates in the ultrasonic field. The definition of the dependences of the coagulation efficiency on the parameters of the acoustic influence and of the gas-dispersed system is required for the determination of the optimum modes of process realization.
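As a rough plausibility check of the small-Reynolds-number assumption, the particle Reynolds number can be bounded from the sound pressure level. This is only a sketch with assumed standard air properties (not from the paper); since the particle partly follows the oscillating gas, the true slip velocity, and hence Re, is smaller than this bound:

```python
# Sketch: an upper bound on the particle Reynolds number from the sound
# pressure level. Air properties are assumed standard values, not from the
# paper; the true slip velocity (and Re) is smaller, since the particle
# partly follows the oscillating gas.

RHO_AIR = 1.2     # kg/m^3, air density (assumed)
C_AIR = 343.0     # m/s, speed of sound in air (assumed)
ETA_AIR = 1.8e-5  # Pa*s, dynamic viscosity of air (assumed)
P_REF = 20e-6     # Pa, SPL reference pressure

def velocity_amplitude(spl_db: float) -> float:
    """Gas velocity amplitude U0 of a plane wave at a given SPL."""
    p_amp = P_REF * 10.0 ** (spl_db / 20.0)  # pressure amplitude, Pa
    return p_amp / (RHO_AIR * C_AIR)

def reynolds_bound(spl_db: float, diameter_m: float) -> float:
    """Re = rho * U0 * d / eta, using U0 as an upper bound on slip velocity."""
    return RHO_AIR * velocity_amplitude(spl_db) * diameter_m / ETA_AIR

# For a 0.6 um particle at 150 dB the bound is about 0.06, well within the
# viscous (Stokes/Oseen) regime assumed by the model.
print(reynolds_bound(150.0, 0.6e-6))
```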
The theoretical study of the formation of fine-dispersed particle aggregates presented in the paper consists of four stages: 1) the development of the model of the behavior of a separate particle in the acoustic field; 2) the development of the model of the interaction of two separate particles on the basis of the behavior model of a separate particle; 3) the analysis of the interaction model of two particles for the determination of the main influencing factors resulting in convergence and of the modes of influence at which convergence occurs at the maximum rate; 4) the determination of the regularities of aggregate formation on the basis of the results of the analysis of the interaction model of two particles. II. Mathematical model of the behavior of a separate suspended particle in the acoustic field of ultrasonic frequency and of the spatial single interaction of two submicron particles in the acoustic field of ultrasonic frequency The model of the behavior of a separate particle is based on the dynamic equations of its translational and rotary motion. As mentioned above, the behavior of a single particle should be considered for the case of small Reynolds numbers, since the size of the considered particles is within the submicron range. Moreover, in the theoretical study of the coagulation of submicron particles the deviation of their form from the spherical one should be taken into consideration. In the context of the proposed model it is assumed that each particle is an ellipsoid of revolution, which is a sphere in the special case. This assumption is based on the results of experimental studies represented by the photomicrographs of dispersed particles of submicron size (Fig. 1), which can be harmful emissions or materials in a suspended state used in technological processes [14-17].
Figure 1 (photomicrographs of submicron particles): a) carbon black, b) SiO2, c) smoke, d) volcanic ash. As the particles, whose form is in the general case not spherical, are oriented at arbitrary angles to the direction of ultrasonic wave propagation, the rotation of the particles is taken into consideration. In the presented model it is assumed that the rotation of a particle occurs only in one plane (yz), which is parallel to the direction of ultrasonic wave propagation (axis z), as shown in Figure 2 (scheme of the revolution of the ellipsoid). According to this scheme, the equations of motion of a single particle can be written as follows:

m_A \frac{\partial^2 \mathbf{x}_A}{\partial t^2} = \mathbf{F}_A\left(\mathbf{x}_A, \frac{\partial \mathbf{x}_A}{\partial t}, \Omega_A, \frac{\partial \Omega_A}{\partial t}, U_0, k, \omega, t\right)   (1)

J_A \frac{\partial^2 \Omega_A}{\partial t^2} = M_A\left(\mathbf{x}_A, \frac{\partial \mathbf{x}_A}{\partial t}, \Omega_A, \frac{\partial \Omega_A}{\partial t}, U_0, k, \omega, t\right)   (2)

where m_A is the mass of the particle, kg; J_A is the moment of inertia of the particle, kg·m²; F_A is the force acting on the particle, N; M_A is the moment of force acting on the particle, N·m; Ω_A is the turning angle of the particle, rad; U_0 is the amplitude of the vibration velocity of the gas medium, m/s; k is the wave number of the ultrasonic field, m⁻¹; ω is the circular frequency of the ultrasonic field, s⁻¹; a is the cross-section radius of the ellipsoid, m; s is the ratio of the length of the rotational axis of the ellipsoid to the cross-section diameter (dimensionless); ρ is the density of the particle substance, kg/m³; n is the normal vector to the surface; η is the dynamic viscosity of the gas medium, Pa·s.
The mass and the moment of inertia of a particle in the form of an ellipsoid of revolution are defined by the following expressions:

m_A = \frac{4}{3}\pi\rho s a^3   (3)

J_A = \frac{1}{5} m_A a^2 \left(1 + s^2\right) = \frac{4}{15}\pi\rho s a^5 \left(1 + s^2\right)   (4)

where a is the radius of the ellipsoid cross-section, m; s is the ratio of the length of the rotational axis of the ellipsoid to the cross-section diameter (dimensionless); ρ is the density of the particle substance, kg/m³. The force and the force moment are defined from the perturbation of the flow field according to the following expressions [11]:

F_{Ai} = \int_{S_A} \left[ -p n_i + \eta \sum_{j=1}^{3} \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) n_j \right] dS   (5)

M_A = \int_{S_A} \left[ -p n_2 + \eta \sum_{j=1}^{3} \left( \frac{\partial u_2}{\partial x_j} + \frac{\partial u_j}{\partial x_2} \right) n_j \right] x_3 \, dS - \int_{S_A} \left[ -p n_3 + \eta \sum_{j=1}^{3} \left( \frac{\partial u_3}{\partial x_j} + \frac{\partial u_j}{\partial x_3} \right) n_j \right] x_2 \, dS   (6)

where n is the normal vector to the surface; η is the dynamic viscosity of the gas medium, Pa·s; p is the pressure disturbance of the gas medium, Pa; u is the vector of the velocity disturbance of the gas medium, m/s. The velocity and pressure disturbances of the medium are defined on the basis of the Stokes equations for the viscous mode of flow, which are valid at small Reynolds numbers [18]:

\operatorname{div}\,\mathbf{u} = 0   (7)

0 = -\nabla p + \eta \Delta \mathbf{u}   (8)

At infinity the velocity and pressure disturbances p and u vanish. On the boundary of the particle the conditions of adherence (no slip) hold, caused by the adhesive forces between the molecules of the viscous medium and the surface; these conditions are experimentally proven. Thus the boundary condition on the surface of the particle is

\mathbf{u}\big|_{S_A} = \frac{\partial \mathbf{x}_A}{\partial t} + \frac{\partial \Omega_A}{\partial t}\,\mathbf{e}_x \times (\mathbf{x} - \mathbf{x}_A) - U_0 \sin(\omega t - k z_0)\,\mathbf{e}_z   (9)

where kz_0 is the initial phase of the ultrasonic wave, rad: the disturbance velocity on the surface equals the particle velocity (translation plus rotation) minus the unperturbed oscillatory velocity of the gas at the particle position.
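Equation (1) can be illustrated in its simplest special case: a sphere (s = 1) entrained by a plane wave under Stokes drag F = 6πηa(U − v). This is only a sketch with assumed parameter values, not the paper's full model, which also includes rotation and the multipole flow field:

```python
import math

# Illustrative special case of equation (1): a small sphere with Stokes drag
# in a plane ultrasonic wave. All parameter values are assumptions; the
# paper's full model also includes rotation and the multipole flow field.

ETA = 1.8e-5     # Pa*s, air viscosity (assumed)
RHO_P = 2200.0   # kg/m^3, particle density (SiO2, as in the paper)
A = 0.3e-6       # m, particle radius (0.6 um diameter)
U0 = 1.5         # m/s, gas velocity amplitude (assumed)
FREQ = 22e3      # Hz, frequency used in the paper's first computations
OMEGA = 2.0 * math.pi * FREQ

MASS = (4.0 / 3.0) * math.pi * RHO_P * A**3   # equation (3) with s = 1
DRAG = 6.0 * math.pi * ETA * A                # Stokes drag coefficient

def peak_velocity(periods: int = 10, steps_per_period: int = 4000) -> float:
    """Integrate m*dv/dt = drag*(U(t) - v); return peak |v| over the last period."""
    dt = 1.0 / (FREQ * steps_per_period)
    v, v_peak = 0.0, 0.0
    n = periods * steps_per_period
    for i in range(n):
        u_gas = U0 * math.sin(OMEGA * i * dt)  # oscillating gas velocity
        v += dt * DRAG * (u_gas - v) / MASS    # explicit Euler step
        if i >= n - steps_per_period:
            v_peak = max(v_peak, abs(v))
    return v_peak

# A 0.6 um particle is largely entrained: its peak velocity stays close to U0.
print(peak_velocity())
```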
For arbitrary shapes of the particles the following multipole expansions of the pressure and velocity of the medium are valid [19]:

p(\mathbf{r}) = \sum_{i=1}^{3} H^{A}_{i}\,\frac{\partial}{\partial x_i}\frac{1}{X_A} + \sum_{i,j=1}^{3} H^{A}_{ij}\,\frac{\partial^2}{\partial x_i \partial x_j}\frac{1}{X_A} + \sum_{i,j,k=1}^{3} H^{A}_{ijk}\,\frac{\partial^3}{\partial x_i \partial x_j \partial x_k}\frac{1}{X_A} + \sum_{i,j,k,h=1}^{3} H^{A}_{ijkh}\,\frac{\partial^4}{\partial x_i \partial x_j \partial x_k \partial x_h}\frac{1}{X_A} + \dots

u_i(\mathbf{r}) = \frac{2}{3} H^{A}_{i}\,\frac{1}{X_A} + \frac{1}{6}\sum_{j=1}^{3} H^{A}_{j}\,\frac{\partial^2 X_A}{\partial x_i \partial x_j} + \frac{2}{5}\sum_{j=1}^{3} H^{A}_{ij}\,\frac{\partial}{\partial x_j}\frac{1}{X_A} + \frac{4}{7}\sum_{j,k=1}^{3} H^{A}_{ijk}\,\frac{\partial^2}{\partial x_j \partial x_k}\frac{1}{X_A} + \frac{1}{10}\sum_{j,k=1}^{3} H^{A}_{jk}\,\frac{\partial^3 X_A}{\partial x_i \partial x_j \partial x_k} + \frac{1}{14}\sum_{j,k,h=1}^{3} H^{A}_{jkh}\,\frac{\partial^4 X_A}{\partial x_i \partial x_j \partial x_k \partial x_h} + \frac{1}{18}\sum_{j,k,h,m=1}^{3} H^{A}_{jkhm}\,\frac{\partial^5 X_A}{\partial x_i \partial x_j \partial x_k \partial x_h \partial x_m} + \dots

where X_A = \sqrt{\sum_{i=1}^{3} (x_i - x_{Ai})^2}. The constant values H^A_i, H^A_{ij}, … are determined from the boundary conditions: at the final stage the coefficients of the monomials 1, (x_j - x_{Aj}), (x_j - x_{Aj})(x_k - x_{Ak}), etc. are equated on both sides of the boundary conditions. The obtained formulae allow determining the position and the turning angle of a single particle as functions of time. In the case of the interaction of two particles the equations of translational (Newton's second law) and rotational motion remain valid; the multipole expansions of the velocity and pressure become the sums of the expansions written for each particle, i.e. each of the above series is supplemented by the analogous series with coefficients H^B centered at X_B = \sqrt{\sum_{i=1}^{3} (x_i - x_{Bi})^2} (equations (10) and (11)). The second-order velocities and pressures u_2 and p_2, respectively, are defined on the basis of the analysis of the Oseen equations, which are valid at small Reynolds numbers:

\operatorname{div}\,\mathbf{u}_2 = 0   (12)

\rho\,(\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\nabla p_2 + \eta\,\Delta \mathbf{u}_2   (13)

where u is the component of the velocity of first-order infinitesimal defined at the previous stage. The second-order pressure and velocity disturbances are significant at small distances between the particles and for spheres of submicron size, for which there is practically no rotational motion.
The proposed model of the interaction of two particles makes it possible to determine the dependence of the distance between the particles on time at specified initial conditions (the transverse dimensions of the given particles and the ratio of the longitudinal dimension to the transverse one, the initial distance between the particles, the frequency and level of acoustic pressure, the density of the particle matter, the angle between the particle center line and the ultrasound direction) and to state: the main regularities of aggregate formation depending on the dimensions and form of the particles and the features of the ultrasonic field; the optimum modes of the ultrasonic field depending on the size of the particles; and the main parameters of the form of the obtained aggregates. III. THEORETICAL ANALYSIS OF THE AGGREGATE FORMATION PROCESS AT DIFFERENT PARAMETERS OF ULTRASONIC INFLUENCE At the first stage we obtained the dependences of the distance between the particles on time at different levels of acoustic pressure; the frequency of the exposure was 22 kHz, the diameter of the particles was 0.6 μm, the density of the particle matter was 2200 kg/m³ (SiO2), and the starting distance between the particles was 10.75 μm. Fig. 3 shows the dependences of the distance between the particles on time at different levels of sound pressure. As follows from the presented dependences, the effects connected with the action of Oseen forces have the greatest influence on the coagulation of spherical particles of submicron size. The convergence of the particles occurs within a time equal to several periods of vibration, and the time of particle convergence before direct contact decreases in inverse proportion to the generated level of sound pressure.
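The dependence of the optimum mode on particle size has a classical entrainment interpretation: the relative (slip) velocity between particle and gas grows with ωτ, where τ = 2ρ_p a²/(9η) is the Stokes relaxation time, so smaller particles require higher frequencies to slip appreciably. A sketch with assumed air properties (this is the textbook orthokinetic argument, not the paper's Oseen model):

```python
import math

# Sketch of the classical entrainment (slip) factor for a sphere in an
# oscillating gas: slip = omega*tau / sqrt(1 + (omega*tau)^2), with the
# Stokes relaxation time tau = 2*rho_p*a^2/(9*eta). Standard air viscosity
# is assumed; this is the textbook orthokinetic argument, not the paper's
# Oseen model.

ETA_AIR = 1.8e-5   # Pa*s, air viscosity (assumed)
RHO_P = 2200.0     # kg/m^3, particle density (SiO2, as in the paper)

def slip_factor(freq_hz: float, diameter_m: float) -> float:
    """Relative slip-velocity amplitude as a fraction of the gas amplitude."""
    a = diameter_m / 2.0
    tau = 2.0 * RHO_P * a**2 / (9.0 * ETA_AIR)   # Stokes relaxation time, s
    wt = 2.0 * math.pi * freq_hz * tau
    return wt / math.sqrt(1.0 + wt * wt)

# At 20 kHz a 0.9 um particle slips noticeably, while a 0.3 um particle
# barely slips; raising the frequency increases the slip of small particles.
print(slip_factor(20e3, 0.9e-6), slip_factor(20e3, 0.3e-6), slip_factor(50e3, 0.3e-6))
```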
Figure 3: The dependences of the distance between the particles on time at different levels of sound pressure (130-145 dB). The obtained data make it possible to determine the dependence of the convergence time, which defines the coagulation efficiency, on the level of sound pressure (Figure 4: the dependence of convergence time, in units of 10⁻³ s, on the sound pressure level from 130 to 160 dB). The analysis of the dependence of the process efficiency on the level of acoustic pressure leads to the conclusion that ultrasonic influence with a sound pressure level of more than 150 dB is necessary. Further, the dependence of the convergence time on the frequency of the ultrasonic action was analyzed; the obtained results are shown in Figure 5 (the dependence of the convergence time of particles of 0.3, 0.6 and 0.9 μm on the frequency, up to 100 kHz). From the obtained dependences it follows that the optimum coagulation frequency for particles with a size of 0.9 μm is 20 kHz. For particles of smaller size it is necessary to increase the frequency of influence. However, for particles with a size of 0.3 μm and less, the coagulation time depends only weakly on the frequency above 50 kHz. Experimental studies show that ultrasonic vibrations in air at frequencies of more than 100 kHz damp rather fast. Therefore, ultrasonic influence in the range of 20-50 kHz at a sound pressure level of not less than 150 dB is considered optimum. To determine the main regularities of the change of the aggregate form at the coagulation of submicron particles, the dependences of the intensity of convergence on the orientation of the particle center line relative to the direction of acoustic wave propagation were studied. From the dependences shown in Fig.
6 it follows that the highest speed of convergence is achieved at the longitudinal orientation of the particles (Figure 6: the dependence of the distance between the particles on time). The dependence of the mean convergence velocity at a distance of 10.75 μm (corresponding to a concentration of 200 g/m³) on the angle between the particle center line and the wave vector of the acoustic field is shown in Figure 7 (angles from 0 to 100 degrees). As follows from Figure 7, the average convergence velocity changes practically linearly with the angle to the direction of the acoustic field, i.e.

v(r, \varphi) = v_0(r) \left( \frac{\pi}{2} - \varphi \right)

where r is the distance between the particles, m. Thus, at the starting stages of the coagulation aggregates are formed whose form, to a first approximation, is an ellipsoid of revolution oriented along the field. The assumed form of the aggregate generated at the first stage of the coagulation is shown in Figure 8 (the coordinates on the axes are given in relative units). Estimations of the Reynolds number show that if the longitudinal dimension of the ellipsoid equals 2 μm, Re exceeds 0.05. At such values of Re the assumption of the Oseen mode of flow is still valid. However, in the paper [10] it is pointed out that under the action of the velocity gradient in the generated stationary acoustic flow the ellipsoids turn through an angle relative to the initial orientation.
This leads to rotation of the particles under the influence of the acoustic field, which is taken into account in the presented model and depends on the ratio of the ellipsoid axes. As the ellipsoids grow further, up to 200 μm and more, they finally orient themselves across the field, which is observed experimentally. Based on the dependence of the approach velocity on the angle of location, the ratio of the longitudinal and transverse dimensions of the generated aggregate was evaluated. At the initial stage of the coagulation, when the particles have a spherical form, the number of particles with which a given particle collides is proportional to the approach velocity of the particles multiplied by the solid angle. To determine the number of particles, the summation is carried out over all possible distances from the particle of interest:

ΔN(θ, Δt) = Δω Δt Σ_{r=0}^{∞} n v₀(r) (2/π)(π/2 − θ) r² Δr = Δω Δt n (2/π)(π/2 − θ) Σ_{r=0}^{∞} v₀(r) r² Δr,

where Δω is the small solid angle, sr, and n is the number concentration of the particles in the surroundings of the particle of interest, m⁻³. It follows that the local transverse dimension of the aggregate D(x), depending on the longitudinal coordinate x, is defined by the parametric dependence

D(x) = (2L/π)(π/2 − θ(x)) sin θ(x);  x = (L/π)(π/2 − θ(x)) cos θ(x),

where L is the longitudinal dimension of the aggregate, m. As it is assumed that the particles or aggregates have the form of an ellipsoid of revolution, the transverse diameter of the ellipsoid is defined as the maximum value of the function D(x), i.e. where ∂D/∂x = 0, or

∂D/∂θ = −sin θ + (π/2 − θ) cos θ = 0, i.e. tan θ = π/2 − θ. (14)

Numerical solution of equation (14) gives θ ≈ 40°, which corresponds to a maximum transverse diameter of 0.35L. This allows one to assume that at the initial stage of the coagulation the most probable ratio of the ellipsoid axes equals 2.8.
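As a cross-check on the quoted numbers, equation (14) can be solved numerically; the root and the resulting axis ratio do not depend on L. A minimal sketch (the bisection routine and bracket are my own, not from the paper):

```python
import math

def solve_theta(lo=0.0, hi=1.5, tol=1e-10):
    """Bisection for equation (14): tan(theta) = pi/2 - theta."""
    f = lambda t: math.tan(t) - (math.pi / 2 - t)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

theta = solve_theta()  # ~0.71 rad, i.e. ~41 degrees
# Maximum transverse diameter of the boundary R(theta) = (L/pi)(pi/2 - theta),
# as a fraction of the longitudinal dimension L:
d_over_L = (2 / math.pi) * (math.pi / 2 - theta) * math.sin(theta)
axis_ratio = 1 / d_over_L
print(round(axis_ratio, 1))  # 2.8
```

The solver gives θ ≈ 40.7° and a maximum transverse diameter of ≈ 0.357L, i.e. an axis ratio of ≈ 2.8, matching the paper's rounded values of 40°, 0.35L and 2.8.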
Further analysis of the two-particle model is carried out for ellipsoids of revolution with an axis ratio of s = 1…3. Fig. 9 shows the dependences of the distance between the particles on time for ellipsoids of revolution with a longitudinal diameter of 0.5 μm, oriented at an angle of 45° to the direction of ultrasonic wave propagation, at different ratios of semi-axis lengths (s = 1.1, 2, 3, 4).

Figure 9: Dependences of the distance between the centers of the ellipsoids of revolution on time for different ratios of the semi-axes

As follows from the presented dependences, at small differences in the semi-axis lengths the interaction of the particles is rather weak. The strongest interaction of the particles is achieved at s = 2. A further increase of the semi-axis ratio causes the force of particle interaction to decrease. This allows one to conclude that the ratio of the longitudinal dimension of the ellipsoid to the transverse one is limited. The overall dependence of convergence time on the ratio of the semi-axes, in the range s = 1.1…3 within which the approach occurs, is shown in Fig. 10.

Figure 10: The dependence of convergence time (10⁻⁶ s) on the ratio of the semi-axis lengths

At the next stage of the studies, the convergence process at different transverse diameters with a constant semi-axis ratio s = 2 is considered. The dependences of the distance between the particles on time at diameters of 0.8 and 0.2 μm are shown in Fig. 11.
Figure 11: The dependence of the distance between the particles on time at different transverse diameters

As follows from the presented dependences, the transverse size of the particles significantly influences their approach velocity and, consequently, the efficiency of the coagulation. Fig. 12 shows the dependence of convergence time on the transverse diameter, for s = 2 and an initial distance between the particles of 10 μm.

Figure 12: The dependence of the convergence time (10⁻⁶ s) of ellipsoidal particles on the transverse diameter (μm)

Thus, for the considered case, at a constant distance between the particles there is an optimum particle size (0.4 μm) at which the coagulation is most efficient. The increase of convergence time at diameters larger than the optimum can be explained by the reduced rotation velocity, and the increase of convergence time at diameters smaller than the optimum by the reduced ratio of the particle size to the distance between the particles.

IV. CONCLUSION
In the course of the research carried out, the coagulation of submicron particles at the micro level under the influence of high-intensity acoustic vibrations of ultrasonic frequency was studied. The proposed mathematical models of the behavior of a single suspended particle and of the three-dimensional interaction of two submicron particles in the acoustic field take into account the rotational motion of the particles and the influence of the viscosity of the gas medium.
The analysis of the model reveals the optimum modes of ultrasonic action providing minimal convergence time:
– the coagulation of submicron particles proceeds most efficiently in the frequency range of the ultrasonic action of 20…50 kHz;
– the level of acoustic pressure should be no less than 150 dB.
Further analysis of the form of the generated aggregates makes it possible to determine the following:
– at the initial stages of the coagulation the generated aggregates have a form close to an ellipsoid of revolution oriented along the field, and the most probable ratio of the axes equals 2.8;
– ellipsoidal aggregates under the action of the ultrasonic field are set into rotational motion, which determines the main mechanism of the coagulation of particles whose form differs from the spherical one;
– at a further increase of the aggregate size (up to 2…10 μm) rotational motion under the influence of the ultrasonic field still occurs and determines the mechanism of the coagulation;
– at the final stage of the coagulation, large aggregates (more than 200 μm) are finally oriented across the field.

ACKNOWLEDGEMENTS
The reported study was partially supported by RFBR, research project 13-08-98092 r_sibir_a.

REFERENCES
[1] L. D. Rozenberg, Physical foundations of ultrasonic technology (Science, Moscow, 1969).
[2] O. Brandt, H. Freund, and E. Hiedemann, Zur Theorie der akustischen Koagulation, Kolloid-Z., 77(1), 1936, 103.
[3] H. W. St. Clair, Agglomeration of smoke, fog or dust particles by sonic waves, Industr. and Engineering Chem., 41(11), 1949, 2434.
[4] E. N. da C. Andrade, The coagulation of smoke by supersonic vibrations, Trans. Faraday Soc., 32, 1936, 1111-1115.
[5] N. Fuchs, Aerosol mechanics (Publishing House of the USSR Academy of Sciences, Moscow, 1955). In Russian.
[6] E. N. Andrade, On the circulations caused by the vibration of air in a tube, Proc. Roy. Soc., A134, A824, 193.
[7] W. Konig, Hydrodynamische-akustische Untersuchungen, Ann. Phys., 42, 1891, 549-553.
[8] C. A. Lane, Acoustic streaming in the vicinity of a sphere, JASA, 27(6), 1955, 1082.
[9] T. Prozorov, R. Prozorov, and K. S. Suslick, High velocity interparticle collisions driven by ultrasound, J. Am. Chem. Soc., 126, 2004, 13890-13891.
[10] N. N. Chernov, Acoustic methods and means of deposition of suspended particles of industrial fumes, doctoral dissertation in engineering (RSL OD 71:05-5/470, Taganrog, 2004). In Russian.
[11] S. V. Komarov and M. Hirasawa, Numerical simulation of acoustic agglomeration of dust particles in high temperature exhaust gas, Institute of Multidisciplinary Research for Advanced Materials, 2002, 10.
[12] I. P. Boriskina and S. I. Martynov, Influence of hydrodynamic interaction on the motion of particles in an ideal fluid, MVMC proceedings, 1, 2003, 93-97. In Russian.
[13] H. Bhandary, S. A. Kumar, and S. K. Dhavan, Conducting polymer nanocomposites for anticorrosive and antistatic applications, Nanocomposites - New Trends and Developments, 2012, 329-368.
[14] Alaska - Redoubt volcano eruption (URL: http://mirvkartinkah.ru/alyaska-izverzhenie-vulkana.html, application date: 03.12.2013). In Russian.
[15] S. China, C. Mazzoleni, K. Gorkowski, A. C. Aiken, and M. K. Dubey, Morphology and mixing state of individual freshly emitted wildfire carbonaceous particles, Nature Communications, 4, 2013, 7.
[16] V. I. Vishnyakov, Smoke particles agglomeration in thermal plasma, Aerodispersed Systems Physics, 2003, 263-273. In Russian.
[17] L. D. Landau and E. M. Lifshitz, Theoretical physics, vol. VI (Science, Moscow, 1986). In Russian.
[18] V. N. Khmelyov, A. V. Shalunov, R. N. Golykh, and K. V. Shalunova, Modeling of gas-dispersed systems coagulation process for definition of acoustic influence optimum modes, Tidings of Chernozem Region Higher Educational Institutions, 2010, 48-52. In Russian.
American Journal of Engineering Research (AJER), 2013, e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-441-449, www.ajer.org. Research Paper, Open Access.

Revitalization of Urban Old Texture (Case Study: Sarebagh Neighborhood of Shiraz Metropolis)

Soosan Abdollahi 1, Hamidreza Saremi 2, Hadi Keshmiri 3
1 (Architecture & Urban Planning Department, Science and Research Branch, Islamic Azad University, Boroujerd, Iran)
2 (Professor, Department of Art and Architecture, Tarbiyat Modares University, Tehran, Iran)
3 (Professor, Department of Art and Architecture, Azad University, Shiraz, Iran)

Abstract: - In recent years, owing to rapid developments in science and technology, the consequent changes in the physical, economic and social structure of urban areas, and the introduction of the theory of sustainable urban development, the revitalization of old urban tissues has become more prominent. The role of technology, of the needs, talents and desires of the public in relation to the shape and texture of human communities, and of socio-economic factors in changing the face of modern urban living has increased. The Sarebagh neighborhood in Region 8 of Shiraz Municipality, selected as the study area for this research, is considered one of the oldest textures in the country. The main purpose of this study is to understand the social and economic characteristics of the residents of the Sarebagh neighborhood of the Shiraz old texture and to assess the physical condition of the tissue. In accordance with the objectives and hypotheses, the research methodology is based on library and field work (observation, interview and questionnaire). Data were analysed using software such as GIS, SPSS and AutoCAD. The results indicate that the rich physical-social characteristics of the texture have made the tendency toward revival of the neighborhood grow stronger.
Keywords: - Fabric, Old Texture, Revitalization, Sarebagh, Shiraz

I. INTRODUCTION
A city is a collection of dynamic factors and a creature of its inhabitants' creative spirit. Over the course of time the city grows, matures and interacts with its citizens. Thus, the roots of the obsolescence and deterioration of a city should be sought in the ideas and notions of its residents. Old textures constitute a significant portion of many of our cities; because of their specific problems they have fallen outside the scope of urban life and have become problematic urban areas. In addition to physical problems, these textures have degraded the dimensions and quality of the social, economic and cultural life of cities and urban spaces, and have burdened the human presence in urban spaces with hardship and disorder. Thus, the removal of collective memories and the decline of urban life that cause the loss of the texture's vitality are the aftermath of the drop in the qualitative characteristics of the urban spaces of these old textures. Drawing on theoretical ideas and inferences from world experience on the one hand, and on the problems and failures of programs and projects of intervention in the ancient urban textures of Iran on the other, and with the help of systematic planning models, appropriate methods of intervention in the ancient textures of Iranian cities are provided (Kalantari, 2006).

1.1. Case Posing
The city is an eternal creature that has no end: the center of all social, cultural and economic attractions and the core of cultural heritage and human emotions and sentiments (Bastyh, 2000; 25). Today, the ancient texture in most of the cities of our country has become an unsolvable problem. The low standard of living in these areas, the increasing crime rate, physical deterioration, a heterogeneous population, and other social and economic issues constitute most of the problem.
These issues have consequences such as the depletion of the living environment and, following from it, the destruction and deterioration of the texture. Ancient textures have environmental qualities that we are no longer able to create. These environmental qualities are associated with aesthetic values. Their physical space is evocative; it familiarizes us with the structure of the living space of past generations and shows that we once had a precise and splendid civilization. This is what is considered today as identity (Smith, 1996). The ancient urban areas that were competent at the time of their formation now lack strong functioning, owing to developments in technology and changes in the biological, cognitive, social and economic needs of society. These areas were once the center of wealth and power of the cities, but in the current situation (in most cities) they are deteriorating in terms of poor infrastructure, structures and urban services (Koochaki, 2008). The degradation of quality indicators in the urban spaces of the old texture is one of the problems these urban areas face. And since urban spaces represent the pinnacle expressions of urban life and the attendance of citizens, this interaction between urban space quality and urban life quality further clarifies the depth and extent of this problem in the old texture.

1.2. Research Necessity
Degradation is one of the most important issues related to urban space; it causes disorganization, lack of coordination, lack of proportion and disfigurement. Degradation is a factor that eliminates collective memory and contributes to the decline of urban life and of the formation of everyday urban life. This factor, intensifying with age and more or less hurriedly, drives the texture toward its ending point (Habibi, Maghsudi, 2002; 15).
The old texture of Shiraz is located in the heart and center of the city and covers about 350 hectares of urban land, equal to 2.8% of the total area of the city. One can say that this area is equivalent to the size of the city during the Zandieh era. 47.4% of the inhabitants are not native but migrants. Statistics show that no strong social base exists for the texture: the old texture has become a nest of low-income dwellers, and the social composition that is indispensable for any urban area has become unbalanced. Cryer believes that old spaces must be rediscovered. This goal is achievable when, first of all, we value their functions and then place them in the general plan of the city, in the correct place and with proper enjoyment (Loosim, 1996).

1.3. Research Objectives
1 - Revival of the old and historical identity of the quarter, with the aim of protection and revitalization. This goal includes the following purposes:
- Maintaining the character and promoting the values of the old texture, such as its spatial structure and skeleton.
- Modernization of old and valued buildings in the quarter.
- Revitalization of valuable spaces and structural-spatial restoration in the quarter.
2 - Conservation of the old texture as a social and cultural wealth, with emphasis on its historical role. This goal includes the following purposes:
- Increasing and strengthening the sense of participation of people and local organizations.
- Raising the awareness of participants.
- Ensuring the presence of residents in the old fabric.
- Reducing the crime rate in the old fabric; producing and maintaining security in the old quarters.
- Preserving collective memories, with emphasis on historical continuity.
1.4.
Review of Literature
Experience in urban management, urban planning and the revitalization of the historical areas of cities extends from the 1920s to the present and has seen ups and downs over these years. In the 1970s, through books and scientific seminars, the role and importance of the historical districts of cities were promoted. With the victory of the Islamic Revolution, scientific efforts in this field weakened, but from 1364 onwards, with research projects, the publication of books and articles, seminars, etc., activities in this field accelerated again (Falamaki, 2009). Falamaki (2005) proposes methods for the revitalization of historical towns such as technical and health-care plans, decorative designs, reconstruction and restoration of limited spaces, and a comprehensive plan for urban restoration (M.M. Falamaki, 2005). The results of Vafayi's research (2007), an analysis of the physical transformation of the old fabric of Kashan, show that mismatches in the access network, lack of facilities and poor building quality are effective factors in the transformation and metamorphosis of the old fabric of the city of Kashan (Vafaee, 2007). Rahnama has described methods of old-fabric revitalization and urban development for samples of residential tissues of downtown Mashhad, with emphasis on restoring the historical fabric of the Sarshoor quarter, and has identified the social and economic development of the old-texture residents as necessary for the process of old-texture revitalization (Rahnema, 2004; 73).

1.5. Methodology and Data Collection
For the structural cognition of the quarter, the common method of field study and map updating was used; for understanding its history, interviews with older people, textual sources and aerial photographs were used.
For social cognition, interviews with experienced people, small discussion groups, observation, a questionnaire, textual sources and data from the Statistical Center of Iran together helped to collect the data. In the theoretical part of the research, which relies on the documentary method, arguments and conclusions are discussed. In the practical part, the research method is a case study, in which qualitative data are obtained using the tools of interview, observation and photography. Finally, to extract information and then analyze and plan, graphical software such as AutoCAD and ArcGIS and statistical software such as Excel and SPSS are used.

1.6. The Statistical Population and the Sample Size
The residential units of the Sarebagh quarter, one of the historical-cultural fabrics of the city of Shiraz, form the population of this research. The district currently has an area of 4.3 hectares and, based on fieldwork data collected with a questionnaire, consists of 119 residential units, 178 households and 890 inhabitants. From the statistical population of 119 residential units, and examining the viewpoint of residents on the basis of statistical formulas with an assumed error of 5% and a 95% significance level, a sample size of 50 was obtained; this sample can be generalized.

II. THE STUDIED RANGE
2.1. Sarebagh Quarter
This quarter, as is evident from its name, is located in the vicinity of the large Azodi garden. This garden was built by order of Azedodole Deilami. Alongside the garden, houses were built and the Sarebagh quarter arose; but in later times the garden lost its vivacity and was divided into small plots, and now no trace of it is left and buildings have risen in its place.

2.2. The Fabric's Socio-Economic Status
Today, one of the most important issues attracting attention in the old fabric is the social and economic deterioration of the fabric.
The old texture, which was once the residence of the urban nobility, has today become the residence of many low-income groups, the poor and immigrants who, owing to their various cultures, have caused the disappearance of social and cultural congruence and have provided fertile ground for many social issues and problems, so that all types of crime, deviation and social problems can be observed dramatically all over the old fabric today (Soltanzadeh, 1992).

2.3. Demographic-Physical Structure of the Sarebagh Quarter
In 1301, the Sarebagh quarter was bounded by the Saredozak, Sange Sia, Meidane Shah and Darbe Masjed quarters. In 1920 it had 923 men and boys and 1,084 women and girls, but by 2012 its total population had declined to 890 persons, including 427 women and 463 men.

Table 1. Demographic-physical structure of the Sarebagh quarter
Indicator | Value
Total Population | 890
Area | 3.8 hectares
Sex Ratio | 108
Number of Families | 178
Number of Residential Units | 119
Average Persons per Residential Unit | 6
Average Households per Residential Unit | 1.5
Population Density | 234

Figure 1.

2.3.1. Ownership of Residential Units of the Fabric
According to the following table, it can be seen that the dominant form of ownership of residential units in the texture is rental. Ownership for 24 percent of residents is of the owning type, 56 percent of inhabitants live in rental houses, 12% of residents occupy against service or free of charge, and the remaining 8% of the inhabitants have other types of ownership. Thus, one of the factors causing the fabric to decay is the high number of residents whose residential units are rented, most of whom are immigrants.
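The reported ownership shares (24/56/12/8 percent) and the cumulative column follow mechanically from the frequency counts over the sample of 50; a minimal sketch (the helper function is mine, not from the paper):

```python
def percentage_table(freqs, total=None):
    """Build percentage and cumulative-percentage columns from raw frequencies."""
    total = total or sum(freqs)
    pct = [100 * f / total for f in freqs]
    cum, running = [], 0.0
    for p in pct:
        running += p
        cum.append(running)
    return pct, cum

# Ownership frequencies: owning, rental, free of charge / against service, others
pct, cum = percentage_table([12, 28, 6, 4])
print(pct)  # [24.0, 56.0, 12.0, 8.0]
print(cum)  # [24.0, 80.0, 92.0, 100.0]
```

The same computation reproduces the percentage columns of the lifetime, pride, safety and comfort tables from their frequency counts.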
The types of ownership of housing units in the old texture of the city differ from other areas of the city, because the number of units under dedicated and private ownership in this part of the city is much higher than in other parts. This is one of the barriers to the renovation and revitalization of this area and plays an obvious role in the deterioration of the old fabric, because in this type of ownership, owing to disagreements between the partners, reconstruction of the houses remains in suspense, which aggravates the decay and deterioration of the old texture (Abdollahi, 2013).

Table 2 - Type of ownership
Ownership Type | Frequency | Percentage | Valid Percentage | Cumulative Percentage
Owning | 12 | 24 | 24 | 24
Rental | 28 | 56 | 56 | 80
Free of charge / against service | 6 | 12 | 12 | 92
Others | 4 | 8 | 8 | 100
Sum | 50 | 100 | 100 |

Figure 2 - Ownership of inhabitants' residential units

2.4. Physical Condition of the Old Texture
What differentiates the tissues of eastern cities from those of other countries is that these tissues are formed on the basis of social and economic conditions and are the product of centuries of growing civilization and urbanization. Unfortunately, urban development planning in these countries pays little attention to these issues, so that largely imitative and spontaneous development and construction, without any link to the past, has led to nothing but chaos. Ultimately such models can only create western-style places that are often alien to the native culture (Bromley, 2005).

2.4.1. Lifetime of the Texture's Buildings
The following table shows that most of the buildings in the fabric are more than 30 years old: 60% of the buildings are over 40 years old, 20 percent are between 30 and 40 years old, 12 percent are between 20 and 30 years old, and 8% of the buildings of the fabric are less than 20 years old.
However, the percentage of structural decay in the context is high.

Table 3 - Buildings' lifetime
Lifetime | Frequency | Percentage | Valid Percentage | Cumulative Percentage
More than 40 years | 30 | 60 | 60 | 60
Between 30 and 40 years | 10 | 20 | 20 | 80
Between 20 and 30 years | 6 | 12 | 12 | 92
Less than 20 years | 4 | 8 | 8 | 100
Total | 50 | 100 | 100 |

Figure 3 - Buildings' lifetime

2.5. Quality Structure of the Quarter
2.5.1. The Extent of Pride in the Residence and Neighborhood from the Residents' Viewpoint

Table 4 - Pride in the living place
Pride in Living Place | Frequency | Percentage | Valid Percentage | Cumulative Percentage
Very Much | 3 | 6 | 6 | 6
Much | 3 | 6 | 6 | 12
Average | 9 | 18 | 18 | 30
Low | 27 | 54 | 54 | 84
None | 8 | 16 | 16 | 100
Sum | 50 | 100 | 100 |

The above table indicates that, from the residents' point of view, pride in the living place and the quarter is low. Only 6 percent of the residents are very proud of their residences; they are often elderly, long-time residents of the quarter. 6 percent are much satisfied and 18% of residents report average satisfaction. 54% of residents are only a little proud of their residences, and 16% are not satisfied and feel no pride at all.

Figure 4 - The extent of pride in the quarter and residence

2.5.2. Level of the Quarter's Safety from the Residents' Viewpoint
The following table and graph indicate that overall the level of safety in the quarter is not high. 4% of residents stated that the level of security in the quarter is very high, while 8 percent believe the quarter is not safe at all; 22, 54 and 12 percent of residents rated security as much, average and low, respectively.
Table 5 - Quarter's level of safety
Level of Safety | Frequency | Percentage | Valid Percentage | Cumulative Percentage
Very Much | 2 | 4 | 4 | 4
Much | 11 | 22 | 22 | 26
Average | 27 | 54 | 54 | 80
Low | 6 | 12 | 12 | 92
None | 4 | 8 | 8 | 100
Sum | 50 | 100 | 100 |

Figure 5 - Quarter's level of safety

2.5.3. Does the Environmental Quality of the Quarter Provide Comfort for You and Your Family?

Table 6 - Level of comfort in the quarter
Comfort | Frequency | Percentage | Valid Percentage | Cumulative Percentage
Completely | 9 | 18 | 18 | 18
To Some Extent | 26 | 52 | 52 | 70
Not at All | 10 | 20 | 20 | 90
No Idea | 5 | 10 | 10 | 100
Total | 50 | 100 | 100 |

The sense of comfort, one of the dimensions of environmental quality, is shown in the above table. According to the respondents, 18 percent of residents feel completely comfortable and 52% feel comfortable to some extent in the quarter; of the remaining 30 percent, 20 percent do not feel comfortable at all and 10 percent chose the "no idea" answer.

Figure 6 - Sense of comfort in the quarter

III. HYPOTHESIS
IT SEEMS THAT THERE IS AN INVERSE RELATIONSHIP BETWEEN SATISFACTION WITH THE SOCIAL-STRUCTURAL QUALITY OF THE QUARTER'S FABRIC AND THE TENDENCY TOWARD THE QUARTER'S REVITALIZATION.

3.1. The Relationship between Residents' Tendency toward Revitalization and the Quarter's Safety
According to the chi-square test in Table 7, the correlation coefficient, with a significance level of Sig = 0.000 and P < 0.05 at the 95% confidence level, indicates a significant relationship between residents' tendency toward revitalization and the safety of the quarter, and the above hypothesis is supported by this relation and the favorable significance level. That is, as the tendency toward revitalization increases, the safety rate increases too.
3.2.
Relationship between the Tendency toward Revitalization and the Sense of Comfort
According to the chi-square test in Table 7, the correlation coefficient, with a significance level of Sig = 0.000 and P < 0.05 at the 95% confidence level, indicates a significant correlation between the tendency toward revitalization and the sense of comfort in the quarter, and the above assumption is supported by this relation and the favorable significance level. That is, as the tendency toward revitalization increases, the sense of comfort increases too.

3.3. Relationship between the Tendency toward Revitalization and the Extent of Pride in the Residence
According to the chi-square test in Table 7, the correlation coefficient, with a significance level of Sig = 0.001 and P < 0.05 at the 95% confidence level, indicates a significant correlation between residents' tendency toward revitalization and the extent of pride in the residence in the quarter, and the above assumption is supported by this relation and the favorable significance level. That is, as residents' tendency toward revitalization increases, pride in the residence in the quarter increases too.

3.4. Relationship between the Tendency toward Revitalization and the Lifetime of the Fabric's Buildings
According to the chi-square test in Table 7, the correlation coefficient, with a significance level of Sig = 0.004 and P < 0.05 at the 95% confidence level, indicates a significant correlation between residents' tendency toward revitalization and the lifetime of the fabric's buildings. That is, as residents' tendency toward revitalization increases, the lifetime of the fabric's buildings increases too.

3.5. Relationship between the Tendency toward Revitalization and the Ownership of Inhabitants' Residential Units
According to the chi-square test in Table 7, the correlation coefficient, with a significance level of Sig = 0.023 and P < 0.05 at the 95% confidence level, indicates a significant correlation between residents' tendency toward revitalization and the ownership of inhabitants' residential units. That is, as residents' tendency toward revitalization increases, ownership of the residential units increases too.

With regard to the table above and to the P-values (significance levels), we can see that the relationships between the tendency toward revitalization and security, sense of comfort, residents' satisfaction levels, the lifetime of the fabric's buildings, and ownership of residential units are significant. The results obtained from the data analysis and hypothesis testing using SPSS software demonstrate that all the significance levels in the table above (P = 0.0056) are less than 0.05 at the 95% confidence level. Thus, there is a significant relationship between satisfaction with the physical-social quality of the fabric and the tendency toward revitalization, so the hypothesis is supported.

IV. CONCLUSION
The existence of functional, communicational, social and administrative disruptions inside the quarters of the old fabric has driven this historical quarter into complete isolation. Changes in urban planning, as well as economic and social difficulties and the lack of modern facilities, have made the fabric lose its principal function and sink into decay.
Proposed strategies related to the physical-spatial structure: introduce new uses for the old and valuable historical buildings within the fabric, so that through these new uses the structures can serve as lively urban spaces and the historical identity of the city of Shiraz is preserved.
Demographic and socio-economic structure strategies: 1. Laying the groundwork for demographic shifts in the area so as to maintain a positive balance, coupled with promotion of the social status of the indigenous population. 2. Preparation of comfortable, safe and attractive neighborhoods and residences. 3. Strengthening the social and human capital of the historical context and promoting it culturally and socially, with particular attention to formal and informal education. V. NOTE 1. The Zandieh age is the historical period in Iran that lasted from 1750 to 1794. REFERENCES [1] Kh. Kalantari, Regional planning and development (theories and techniques), 2006. [2] Z. Bastieh, La Ville, 2003. [3] N. Smith, The New Urban Frontier: Gentrification and the Revanchist City (London: Routledge, 1996). [4] Gh. Koochaki, Analysis of the structural-physical fabric of the old city of Khorramabad, MA thesis, Geography and Urban Planning, University of Isfahan, 2008. [5] M. Habibi and M. Maghsoodi, Urban renewal (Tehran University Publication, 2002). [6] Loosim, Urban conservation policy and the preservation of historical and cultural heritage cities, 13(6), 1996. [7] M. M. Falamaki, Historical Urban and Structural Rehabilitation (Tehran University Publication, 2009). [8] M. M. Falamaki, Urban Renewal and Renovation (Samt Publication, 2005). [9] A. Vafaee, Analyzing the formation process of the physical form of the historic fabric of the city of Kashan, MA thesis, Geography and Urban Planning, University of Isfahan, 2007. [10] M. Rahnama, Surveying Mashhad city center revitalization (Mashhad University Publication, 2004). [11] H. Soltanzadeh, Urban Spaces in Iran's Old Texture (Tehran Municipality, 1994). [12] S. Abdollahi, Revitalization of the physical-spatial structure of historical-cultural fabric (case study: Sare Bagh quarter in Shiraz), MA thesis, Boroujerd Science and Research Branch, Department of Architecture & Planning, 2013. [13] Rosemary D. F.
Bromley, Andrew R. Tallon and Colin J. Thomas, City centre regeneration through residential development: contributing to sustainability, Urban Studies, 42(13), 2005.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-244-251, www.ajer.org. Research Paper, Open Access. Design and Hardware Implementation of a Digital Wattmeter Md. Arafat Mahmud1, Md. Tashfiq Bin Kashem2, Monzurul Islam Dewan3, and Mushfika Baishakhi Upama4 1, 2 Department of EEE, University of Asia Pacific, Dhaka, Bangladesh 3 Department of EEE, Ahsanullah University of Science and Technology, Dhaka, Bangladesh 4 Research and Development Section, Energypac Engineering Ltd., Dhaka, Bangladesh Abstract :- The design and hardware implementation of a digital wattmeter are presented in this paper. The IC ADE7751, the microcontroller ATMEGA32 and a 16x2 LCD display are the major building blocks of the design. The real power information conveyed by the output signal of the ADE7751 is extracted through a calibration process and finally displayed on the LCD via microcontroller code. The obtained results show a fair amount of accuracy, which validates the circuit design and confirms precise hardware implementation. Keywords:- Current coil, potential coil, IC ADE7751, microcontroller ATMEGA32, 16x2 LCD display I. INTRODUCTION A wattmeter is a device for measuring the real power of a load. In this paper, the design and hardware implementation of a digital wattmeter are presented, with a detailed functional description of each component forming the total circuitry. In our design, the roles of the current and potential coils are served by the IC ADE7751. The output from the IC is analyzed by an ATMEGA32 microcontroller, and the calculated power is displayed on a 16x2 LCD display through microcontroller code and proper interfacing with the microcontroller port. Results obtained from our designed wattmeter show a fair amount of accuracy, which proves the validity of our circuit design and hardware implementation. II.
WORKING PRINCIPLE The real power of a load can be expressed as Eq. (1): P = Vrms × Irms × cos φ (1) where Vrms, Irms and cos φ denote the r.m.s. voltage, the r.m.s. current and the power factor, respectively. In a wattmeter, the Potential Coil (P.C.) and the Current Coil (C.C.) give the measures of the r.m.s. voltage and current respectively, and the cosine of the phase-angle difference between voltage and current is multiplied with them to find the real power dissipated in the load [1]. Fig. 1 shows the basic circuit connection diagram of a wattmeter. Fig. 1. Basic circuit connection diagram of a wattmeter In our design, both the voltage across and the current through the load are fed to the IC ADE7751 in the form of differential voltages obtained from voltage and current transducers. The IC produces a square wave at its output whose frequency is proportional to the real power of the load. A detailed description of this functionality is presented in the circuit-level description section. This square-wave signal is sent to an input pin of a microcontroller port. The time period of the signal is measured using a counter, and thus its frequency is determined. Through a calibration process described later in the paper, the dissipated real power is calculated. Finally, the calculated real power is displayed on an LCD, whose screen is properly interfaced with the pins of a port of the microcontroller. The whole process is guided by the microcontroller code. III. FUNCTIONAL BLOCK DIAGRAM Based on the working principle described in section II, the functional block diagram of the entire circuit can be presented as in Fig. 2, which gives a better insight before going to the circuit-level description.
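Eq. (1) can be checked numerically: averaging the instantaneous power v(t)·i(t) over whole cycles of sinusoidal waveforms recovers Vrms·Irms·cos φ, which is exactly the dc extraction the ADE7751 performs. A minimal Python sketch (the 220 V, 0.45 A and 60° values are illustrative, not measurements from this paper):

```python
import math

def real_power(v_rms, i_rms, phi_rad):
    # Eq. (1): P = Vrms * Irms * cos(phi)
    return v_rms * i_rms * math.cos(phi_rad)

def real_power_from_samples(n=10000, v_rms=220.0, i_rms=0.45,
                            phi_rad=math.radians(60)):
    # Average the instantaneous power v(t)*i(t) over one full cycle;
    # for sinusoids this converges to Vrms * Irms * cos(phi).
    acc = 0.0
    for k in range(n):
        wt = 2.0 * math.pi * k / n
        v = math.sqrt(2.0) * v_rms * math.sin(wt)
        i = math.sqrt(2.0) * i_rms * math.sin(wt - phi_rad)
        acc += v * i
    return acc / n
```

Both routes give the same number, illustrating why low-pass filtering the instantaneous power yields the real power at any power factor.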
Fig. 2. Functional block diagram of the entire wattmeter circuit: the voltage across the load and the current through the load enter the voltage and current channels of the IC ADE7751, whose square-wave output is processed by the microcontroller ATMEGA32 (calibration, calculation, coding and interfacing) and shown on a 16x2 LCD display as real power in watts. IV. CIRCUIT LEVEL DESCRIPTION In this section, a circuit-level description of the components is presented. The entire circuitry can be divided into three sections: the IC ADE7751, the microcontroller ATMEGA32 and the 16x2 LCD display. A. IC ADE7751 The circuit connection diagram of the IC ADE7751 is shown in Fig. 3. The pin configuration of the IC can be found in [2]. The ADE7751 receives two analog inputs (load current and voltage) at its two channels (V1A and V1B as the current channel; V2N and V2P as the voltage channel) in the form of differential voltage inputs from current and voltage transducers. The output of the line voltage transducer is connected to the ADE7751 at the voltage channel, V2. Channel V2 is a fully differential voltage input; the maximum peak differential signal on Channel 2 is ±660 mV. Fig. 4 illustrates the connection at ADE7751 Channel 2. Channel 2 must be driven from a common-mode voltage, i.e. the differential voltage signal on the input must be referenced to a common mode (usually AGND). Fig. 3. Circuit connection diagram of IC ADE7751 Fig. 4. Circuit connection at Channel 2 (V2N, V2P) of IC ADE7751 for load voltage measurement Similarly, the voltage output from the current transducer is received at Channel 1 (V1A and V1B). The two ADCs in the IC digitize the voltage and current signals from the transducers. These ADCs are 16-bit second-order sigma-delta converters with an oversampling rate of 900 kHz.
This analog input structure greatly simplifies transducer interfacing by providing a wide dynamic range for direct connection to the transducer, and also by simplifying the design of the anti-aliasing filters. The real power calculation is derived from the instantaneous power signal, which is generated by a direct multiplication of the current and voltage signals. In order to extract the real power component (i.e., the dc component), the instantaneous power signal is low-pass filtered. Fig. 5 illustrates the instantaneous real power signal and shows how the real power information can be extracted by low-pass filtering it. This scheme correctly calculates real power for non-sinusoidal current and voltage waveforms at all power factors. All signal processing is carried out in the digital domain for superior stability over temperature and time. The low-frequency output of the ADE7751 is generated from this real power information. This low frequency inherently means a long accumulation time between output pulses; the output frequency is therefore proportional to the average real power. Because of its higher output frequency, and hence shorter integration time, the CF output is proportional to the instantaneous real power, which is useful for system calibration under steady load conditions. Fig. 5. Power information extraction from the instantaneous power signal The method used to extract the real power information from the instantaneous power signal (i.e., low-pass filtering) is still valid even when the voltage and current signals are not in phase. If we assume the voltage and current waveforms are sinusoidal with a 60° phase shift between them, the real power component of the instantaneous power signal (i.e., the dc term) is given by (V × I / 2) × cos(60°). This is the actual real power calculation. Fig.
6 illustrates how the dc component of the instantaneous power signal conveys the real power information for a power factor less than 1, i.e. when there is a phase-angle difference between load voltage and load current. Frequency outputs F1 and F2: the ADE7751 calculates the product of the two voltage signals (on Channel 1 and Channel 2) and then low-pass filters this product to extract the real power information, which is then converted to a frequency. The frequency information is output on F1 and F2 in the form of active-low pulses. The pulse rate at these outputs is relatively low, e.g., 0.34 Hz maximum for ac signals with S0 = S1 = 0, as shown in Table I. The frequency of the signal at pin 23 of the ADE7751, which is sent to the microcontroller pin, is given by:
f = (5.74 × V1 × V2 × G × F1-4) / VREF² (2)
where V1 = differential r.m.s. voltage signal on Channel 1, V2 = differential r.m.s. voltage signal on Channel 2, G = gain depending on selection pins G0 and G1, VREF = reference voltage (2.5 V ± 8%), and F1-4 = one of four possible frequencies set by selection pins S0 and S1.
TABLE I: FREQUENCY INFORMATION ON F1 AND F2 FOR DIFFERENT COMBINATIONS
S1 S0 | F1-4 (Hz) | XTAL frequency division | Maximum frequency (ac input, Hz)
0 0 | 1.7 | 3.579 MHz / 2^21 | 0.34
0 1 | 3.4 | 3.579 MHz / 2^20 | 0.68
1 0 | 6.8 | 3.579 MHz / 2^19 | 1.36
1 1 | 13.6 | 3.579 MHz / 2^18 | 2.72
Fig. 6. Real power determination when there is a phase-angle difference between load voltage and load current B. Microcontroller ATMEGA32 Output pin 23 of the IC ADE7751, which carries the real power information in its signal frequency, is taken as input to the third pin of port B (PB3) of the microcontroller ATMEGA32. Using a calibration process, the real power information is extracted from the signal: first, the time period of the signal at PB3 is measured using a counter, and from that the frequency is determined. The same process is repeated for a number of loads.
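For reference, Eq. (2) and a frequency-to-power calibration line of the kind described here can be sketched in Python. The input levels and the (frequency, power) pairs used in any test are hypothetical placeholders, not measured values from this paper:

```python
def ade7751_output_freq(v1, v2, gain, f14, v_ref=2.5):
    # Eq. (2): f = 5.74 * V1 * V2 * G * F1-4 / VREF^2
    return 5.74 * v1 * v2 * gain * f14 / v_ref ** 2

def fit_power_vs_freq(freqs, powers):
    # Least-squares line P = a*f + b through (frequency, power)
    # readings taken against a reference wattmeter.
    n = len(freqs)
    sf, sp = sum(freqs), sum(powers)
    sff = sum(f * f for f in freqs)
    sfp = sum(f * p for f, p in zip(freqs, powers))
    a = (n * sfp - sf * sp) / (n * sff - sf * sf)
    b = (sp - a * sf) / n
    return a, b
```

A fit like `fit_power_vs_freq` mirrors the MATLAB curve-fitting step described below; the derived coefficients would then be hard-coded into the microcontroller firmware.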
Simultaneously, the real power of the same loads is read using a standard, commercially available analog wattmeter. Using the data obtained in this process, real power is plotted as a function of frequency in MATLAB, and the corresponding power equation along with the values of its linear coefficients is derived. This equation is used in the microcontroller code to show the real power on the LCD display. The port D pins of the microcontroller are interfaced with a 16x2 LCD display to show the real power in watts. The microcontroller code algorithm is given in Fig. 7. C. 16x2 LCD display The 16x2 LCD display is interfaced with port D of the ATMEGA32 in 8-bit operating mode, i.e. LCD commands are sent through data lines D0 to D7; 8 bits of data are sent at a time, and the data strobe is given through the E pin of the LCD display. The interfacing between the ATMEGA32 and the LCD display is shown in Fig. 8. The pin configurations of the microcontroller ATMEGA32 and the 16x2 LCD display can be found in [3] and [4]. V. HARDWARE IMPLEMENTATION The circuit described above has been implemented in hardware. Figs. 9 and 10 show our implemented digital wattmeter. We used several lamps with ratings of 40, 60 and 100 W as loads and measured the real power dissipated in them using our designed digital wattmeter. Simultaneously, readings were also taken with a standard, commercially available analogue wattmeter. It was observed that our digital wattmeter shows good accuracy and a very small error percentage. For instance, we obtained a 98 W reading from our digital wattmeter against 99 W from the standard analogue wattmeter. The percentage error can be expressed as:
Percentage error = ((Actual watt value − Obtained watt value) / Actual watt value) × 100% (3)
Fig. 8. Interfacing of ATMEGA32 with 16x2 LCD display Fig. 9.
Hardware implemented digital wattmeter device Fig. 10. Amplified view of the LCD display showing the real power information So, percentage error = ((99 − 98) / 99) × 100% ≈ 1.01%, and the accuracy of our digital wattmeter is 98.99%, which proves the validity of the circuit design and the hardware-level implementation. VI. SPECIFICATIONS The general and electrical specifications of the digital wattmeter are presented in Table II and Table III respectively.
TABLE II: GENERAL SPECIFICATIONS OF THE DIGITAL WATTMETER
Display: 16 x 2 (16 columns, 2 rows) LCD (Liquid Crystal Display)
Measurement: Watt (real power)
Polarity: Uni-polar
Zero adjust: external adjustment for zero of the display
Operating temperature: 0°C to 50°C (32°F to 122°F)
Operating humidity: less than 80% RH
Power supply: digital circuitry: DC 5 V battery; analog circuitry: 220 V (line to neutral)
Power consumption: approx. 6 mA DC
Weight: 250 g (including battery)
TABLE III: ELECTRICAL SPECIFICATIONS OF THE DIGITAL WATTMETER
AC voltage — Range: 220 V, Resolution: 0.2 V; Range: 660 V, Resolution: 0.6 V; Frequency characteristics: 45 Hz - 65 Hz; Input impedance: 1 megohm; Converter response: average responding, calibrated to display RMS value
AC current — Range: 10 A, Resolution: 10 mA; Voltage drop (at full scale): 250 mV AC; Frequency characteristic: 45 Hz - 65 Hz; Converter response: calibrated to display RMS value of sine wave
DC voltage — Range: 4 V to 5.3 V; Input impedance: 1 megohm
DC current — Range: 10 A, Resolution: 10 mA; Maximum input current: 10 A; Input impedance: 1 megohm; Voltage drop (at full scale): 250 mV DC
VII. CONCLUSION A circuit-level design and hardware implementation of a digital wattmeter has been presented in this paper.
The wattmeter has shown remarkable accuracy in measuring a number of different loads, which proves the precision of our design and hardware-level implementation. Also, since the device is guided by microcontroller code, further features such as energy measurement and electricity bill generation in local currency can be added to this digital wattmeter. ACKNOWLEDGMENT We would like to express our gratitude to our respected teacher Mr. Ahmad Zubair, Lecturer, Department of EEE, BUET, who inspired and motivated us to involve ourselves in innovative circuit design, especially designs involving microcontroller coding and their hardware implementation. REFERENCES [1] M. Rehman, M. R. Saad, S. Ahmad, and M. S. Ansari, "Development of an analogue electronic wattmeter," IEEE Region 10 Conference (TENCON), pp. 1-2, November 2005. [2] Analog Devices Inc., Energy metering IC with on-chip fault detection: ADE7751 (US Patents 5,745,323; 5,760,617; 5,862,069; 5,872,469), USA, 2002. [3] ATMEL Corporation, 8-bit AVR microcontroller with 32 Kbytes in-system programmable flash: ATMEGA32 and ATMEGA32L, USA, 2011. [4] Revolution Education Ltd., Alphanumeric LCD display (16x2), UK.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-402-408, www.ajer.org. Research Paper, Open Access. Comparative Study of P&O and InC MPPT Algorithms I. William Christopher1 and Dr. R. Ramesh2 1 (Senior Assistant Professor, Department of EEE, Tagore Engineering College, Chennai, India) 2 (Associate Professor, Department of EEE, College of Engineering Guindy, Anna University, Chennai, India) Abstract: - Maximum Power Point Tracking (MPPT) algorithms are important in PV systems because they reduce the PV array cost by reducing the number of PV panels required to achieve the desired output power. This paper presents a comparative simulation study of two important MPPT algorithms, namely perturb and observe and incremental conductance. These algorithms are widely used because of their low cost and ease of realization. Important parameters such as the voltage, current and power output for each combination have been traced for both algorithms. The MATLAB Simulink toolbox has been used for performance evaluation with a 70 W photovoltaic (PV) array. Keywords: - Photovoltaic (PV), Maximum Power Point Tracking (MPPT), Perturb and Observe (P&O), Incremental Conductance (InC). 1. INTRODUCTION Photovoltaic (PV) generation currently represents one of the most promising sources of renewable green energy. Due to its environmental and economic benefits, PV generation is preferred over other renewable energy sources, being clean, inexhaustible and requiring little maintenance. PV cells generate electric power by directly converting solar energy into electrical energy. PV panels and arrays generate DC power that has to be converted to AC at the standard power frequency in order to feed the loads. Therefore, PV systems require interfacing power converters between the PV arrays and the grid.
Photovoltaic-generated energy can be delivered to power system networks through grid-connected inverters. One significant problem in PV systems is the probable mismatch between the operating characteristics of the load and the PV array. When a PV array is directly connected to a load, the system's operating point lies at the intersection of the I-V curves of the PV array and the load, and the Maximum Power Point (MPP) of the PV array is not attained most of the time. This problem is overcome by using an MPPT, which maintains the PV array's operating point at the MPP. The location of the MPP in the I-V plane is not known a priori; it can be calculated using a model of the PV array together with measurements of irradiance and array temperature, but obtaining these measurements is often too expensive and the required parameters of the PV array model are not known adequately. Thus, the MPPT continuously searches for the MPP. Several MPPT algorithms have been proposed that use different characteristics of solar panels and the location of the MPP [1,4]. To extract the maximum power from the solar PV module and transfer that power to the load, an MPPT is used. A dc/dc converter (step-up/step-down) transfers the maximum power from the solar PV module to the load and acts as an interface between the load and the module. Maximum power is transferred by changing the duty cycle, which varies the load impedance as seen by the source so as to match it at the peak power point. In order to maintain the PV array's operation at its MPP, different MPPT techniques can be applied. In the literature many MPPT techniques have been proposed, such as the Perturb and Observe (P&O) method, the Incremental Conductance (InC) method and the fuzzy logic method [3]. Of these, the two most popular MPPT techniques, Perturb and Observe (P&O) and Incremental Conductance, are studied here [4]. The paper is organized in the following manner.
The basic principle of the PV cell and the characteristics of the PV array are discussed in section 2. Section 3 presents the P&O and InC MPPT algorithms in detail. The simulation results for the PV array and the MPPT algorithms, and their comparison, are discussed in section 4. The last section concludes with the scope for further work. 2. PV ARRAY CHARACTERISTICS 2.1 Basic Principle of the PV Cell PV cells are essentially very large area p-n junction diodes, where such a diode is created by forming a junction between n-type and p-type regions. As sunlight strikes a PV cell, the incident energy is converted directly into electrical energy: transmitted light is absorbed within the semiconductor, its energy exciting free electrons from a low energy state to an unoccupied higher energy level. When a PV cell is illuminated, excess electron-hole pairs are generated by light throughout the material; hence, when the p-n junction is electrically shorted, a current will flow [2]. 2.2 PV Array Characteristics The use of the single-diode equivalent electric circuit makes it possible to model the characteristics of a PV cell; the mathematical model of a photovoltaic cell can be developed using the MATLAB Simulink toolbox. The basic equation from the theory of semiconductors that mathematically describes the I-V characteristic of the ideal photovoltaic cell is given by
I = I_PV,cell − I_d (1)
where
I_d = I_0,cell [exp(qV / (a k T)) − 1] (2)
Therefore
I = I_PV,cell − I_0,cell [exp(qV / (a k T)) − 1] (3)
where I_PV,cell is the current generated by the incident light (directly proportional to the sun irradiation), I_d is the diode current, I_0,cell is the reverse saturation or leakage current of the diode, q is the electron charge (1.60217646 × 10⁻¹⁹ C), k is the Boltzmann constant (1.3806503 × 10⁻²³ J/K), T is the temperature of the p-n junction, and a is the diode ideality constant. Figure 1 shows the equivalent circuit of the ideal PV cell. Figure 1.
Equivalent circuit of the ideal PV cell Practical arrays are composed of several connected PV cells, and observing the characteristics at the terminals of the PV array requires the inclusion of additional parameters (as shown in Figure 2) in the basic equation:
I = I_PV − I_0 [exp((V + R_s I) / (V_t a)) − 1] − (V + R_s I) / R_p (4)
where V_t = N_s k T / q is the thermal voltage of an array with N_s cells connected in series. Cells connected in parallel increase the current, and cells connected in series provide greater output voltages; V and I are the terminal voltage and current. The equivalent circuit of the PV cell with the series resistance (Rs) and parallel resistance (Rp) is shown in Figure 2. Figure 2. Equivalent circuit of the PV cell with Rp and Rs For a good solar cell, the series resistance (Rs) should be very small and the shunt (parallel) resistance (Rp) should be very large; for commercial solar cells, Rp is much greater than the forward resistance of a diode. The I-V curve is shown in Figure 3. The curve has three important parameters, namely the open circuit voltage (Voc), the short circuit current (Isc) and the maximum power point (MPP). In this model the single-diode equivalent circuit is considered. The I-V characteristic of the photovoltaic device depends on the internal characteristics of the device and on external influences such as the irradiation level and the temperature. Figure 3. I-V characteristics of the PV cell Figure 4. P-V characteristics of the PV cell The P-V characteristics of the PV cell are illustrated in Figure 4; they likewise depend on the open circuit voltage (Voc), the short circuit current (Isc) and the maximum power point (MPP). 3. MPPT ALGORITHMS 3.1 Perturb and Observe (P&O) Algorithm In this algorithm a slight perturbation is introduced, which causes the power of the solar module to change continuously. If the power increases due to the perturbation, then the perturbation is continued in the same direction.
After the peak power is reached, the power at the next instant decreases, and thereafter the perturbation reverses. When the steady state is reached, the algorithm oscillates around the peak point. The perturbation size is kept very small in order to keep the power variation small [4]. The algorithm can be easily understood from the flow chart shown in Figure 5. Figure 5. Perturb and Observe algorithm The algorithm is developed in such a manner that it sets a reference voltage of the module corresponding to the peak voltage of the module, and a PI controller is used to move the operating point of the module to that particular voltage level. It is observed that there is some power loss due to this perturbation, and the method also fails to track the power under fast-varying atmospheric conditions; but this algorithm remains very popular because of its simplicity. 3.2 Incremental Conductance (InC) Algorithm The Incremental Conductance (InC) method overcomes the disadvantage of the perturb and observe method in tracking the peak power under fast-varying atmospheric conditions. This method can determine whether the MPPT has reached the MPP, and then stops perturbing the operating point. If this condition is not met, the direction in which the MPPT operating point must be perturbed can be calculated from the relationship between dI/dV and −I/V. This relationship is derived from the fact that dP/dV is negative when the MPPT is to the right of the MPP and positive when it is to the left of the MPP. This algorithm determines when the MPPT has reached the MPP, whereas P&O oscillates around the MPP; this is clearly an advantage over P&O. Also, incremental conductance can track rapidly increasing and decreasing irradiance conditions with higher accuracy than the perturb and observe method [4]. The disadvantage of this algorithm is that it is more complex than P&O.
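The two update rules just described can be sketched as single-step functions in Python. This is a simplified sketch: the step size, the convergence threshold and the PI/duty-cycle machinery of the actual Simulink models are assumptions, not details from the paper.

```python
def po_step(p, v, p_prev, v_prev, step=0.01):
    # Perturb and observe: if the last perturbation raised the power,
    # keep moving in the same direction; otherwise reverse.
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return v
    return v + step if (dp > 0) == (dv > 0) else v - step

def inc_step(i, v, di, dv, step=0.01, eps=1e-6):
    # Incremental conductance: at the MPP dI/dV = -I/V;
    # dI/dV > -I/V means operating left of the MPP, so raise V.
    if abs(dv) < eps:                      # voltage unchanged
        if abs(di) < eps:
            return v                       # still at the MPP
        return v + step if di > 0 else v - step
    g = di / dv
    if abs(g + i / v) < eps:
        return v                           # dI/dV == -I/V: hold
    return v + step if g > -i / v else v - step
```

Note how `inc_step` returns `v` unchanged once dI/dV = −I/V, which is exactly the "stops perturbing at the MPP" behaviour that distinguishes InC from the perpetually oscillating P&O.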
The algorithm can be easily understood from the flow chart shown in Figure 6. Figure 6. Incremental Conductance algorithm 4. SIMULATION RESULTS AND DISCUSSION 4.1 PV Array Characteristics The mathematical model of the PV array is developed using the MATLAB Simulink toolbox, and the various parameters of the PV array are determined and chosen. The series resistance (Rs) is chosen iteratively by incrementing it from zero. Decreasing the value of the parallel resistance (Rp) too much will cause Voc to decrease, while increasing the value of the series resistance (Rs) too much will cause Isc to drop. I0 depends strongly on the temperature, and hence the simulation circuit for I0 includes Kv and Ki, the voltage and current coefficients.
TABLE I: PARAMETER SPECIFICATIONS OF THE 70 W PV MODULE
Open circuit voltage Voc: 21.4 V
Short circuit current Isc: 4.53 A
Maximum output power: 70 W
Voltage at maximum power: 17.7 V
Current at maximum power: 3.96 A
The light generated by the PV cell is modeled as an equivalent current source, and the series and parallel resistances are connected and simulated. The various equations describing the PV array characteristics are modeled using suitable blocks from the Simulink library. The complete Simulink model of the PV module is shown in Figure 7. Figure 7. Simulation model of the PV module This simulation study is done with the simulation model for the standard test condition (STC), i.e. a temperature of 30°C and an irradiation of 1000 W/m². Figure 8. Simulated I-V characteristic Figure 9. Simulated P-V characteristic The 70 W PV module is simulated in MATLAB, and the simulated I-V and P-V characteristics are shown in Figures 8 and 9 respectively. The open circuit voltage Voc = 21.4 V and the short circuit current Isc = 4.53 A are obtained for the corresponding maximum output power of 70 W.
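The ideal-cell I-V relation from section 2 can be sketched in Python. The physical constants are the values quoted in the text; the ideality factor, junction temperature and the Ipv/I0 values used when calling these functions are illustrative assumptions, not the paper's fitted module parameters:

```python
import math

Q = 1.60217646e-19   # electron charge [C] (value from the text)
K = 1.3806503e-23    # Boltzmann constant [J/K] (value from the text)

def ideal_cell_current(v, i_pv, i_0, a=1.3, t=298.15):
    # Ideal cell (Rs = 0, Rp -> infinity):
    # I = Ipv - I0 * (exp(q*V / (a*k*T)) - 1)
    vt = K * t / Q                 # thermal voltage, ~25.7 mV at 25 C
    return i_pv - i_0 * (math.exp(v / (a * vt)) - 1.0)

def open_circuit_voltage(i_pv, i_0, a=1.3, t=298.15):
    # Setting I = 0 gives Voc = a * Vt * ln(Ipv / I0 + 1)
    vt = K * t / Q
    return a * vt * math.log(i_pv / i_0 + 1.0)
```

At V = 0 the model returns Ipv (the short-circuit current), and at the Voc computed from the second function the current is zero, reproducing the two endpoints of the I-V curve in Figure 3.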
4.2 Simulink Model of the P&O Algorithm The MATLAB subsystem includes the 70 W PV array and also contains the equations required for modeling it. The DC voltage source of the dc-dc boost converter is replaced by the MATLAB subsystem integrated with the PV array. Perturbing the duty ratio of the dc-dc boost converter perturbs the PV array current and consequently the PV array voltage. The MPPT subsystem is used to compute the power at various duty cycles and to compare it with the power at the current operating point; the duty cycle then either increases, decreases or remains the same. Figure 10 shows the Simulink model of the PV array with the dc-dc boost converter and P&O MPPT. Figure 10. Simulink model of P&O MPPT with dc-dc converter Figure 11. Simulation results of the P&O MPPT algorithm: (a) current output, (b) voltage output, (c) power output The simulation results of the P&O MPPT algorithm are illustrated in Figure 11. The results show a current output of 0.073 A, a voltage output of 36 V and an output power of 2.6 W for a time period of 0.0175 seconds. 4.3 Simulink Model of the Incremental Conductance Algorithm The Simulink model of the PV array with the dc-dc boost converter and InC MPPT algorithm, simulated under the same conditions as the P&O algorithm, is shown in Figure 12. Figure 12. Simulink model of InC MPPT with dc-dc converter Figure 13. Simulation results of the InC MPPT algorithm: (a) current output, (b) voltage output, (c) power output The simulation results of the InC MPPT algorithm are illustrated in Figure 13. The results show that the output current varies from 0.093 A to 0.087 A, the output voltage from 47 V to 43 V and the output power from 4.7 W to 3.7 W over a time period of 0.1 seconds.
4.4 Comparison between the P&O and InC MPPT Algorithms The P&O and InC MPPT algorithms are simulated and compared under the same conditions. When atmospheric conditions are constant or change slowly, the P&O MPPT oscillates close to the MPP, whereas InC finds the MPP accurately even under changing atmospheric conditions. Comparisons between the two algorithms for various parameters are given in Table II.
TABLE II: COMPARISON BETWEEN P&O AND InC MPPT ALGORITHMS
MPPT | Output current | Output voltage | Output power | Time response | Accuracy
P&O MPPT | 0.073 A | 36 V | 2.6 W | 0.0175 sec | Less accurate
InC MPPT | 0.087-0.093 A | 43-47 V | 3.7-4.7 W | 0.1 sec | Accurate
5. CONCLUSIONS In this paper a mathematical model of a 70 W photovoltaic panel has been developed using MATLAB Simulink and used for the maximum power point tracking algorithms. The P&O and Incremental Conductance MPPT algorithms are discussed and their simulation results presented. It is shown that the Incremental Conductance method has better performance than the P&O algorithm. These algorithms improve the dynamic and steady-state performance of the photovoltaic system as well as the efficiency of the dc-dc converter system. 6. ACKNOWLEDGEMENTS The authors wish to thank the Management, the Principal and the Department of Electrical and Electronics Engineering of Tagore Engineering College, Chennai, for their wholehearted support and for providing the laboratory facilities to carry out this work. REFERENCES [1] D. P. Hohm and M. E. Ropp, "Comparative study of maximum power point tracking algorithms using an experimental, programmable, maximum power point tracking test bed," IEEE, 2000, 1699-1702. [2] N. Pongratananukul and T. Kasparis, "Tool for automated simulation of solar arrays using general-purpose simulators," IEEE Conference Proceedings, 2004, 10-14. [3] Trishan Esram and Patrick L.
Chapman, „Comparison of Photovoltaic Array Maximum PowerPoint Tracking Techniques,‟ IEEE Transactions on Energy Conversion, 22 (2), 2007, 439-449. Hairul Nissah Zainudin, Saad Mekhilef, „Comparison Study of Maximum Power Point Tracker Techniques for PV Systems,‟ Proc. 14th International Middle East Power Systems Conference (MEPCON‟10), Cairo University, Egypt, 2010, 750-755. www.ajer.org Page 408
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-136-143 www.ajer.org Research Paper Open Access

Analysis of the Principles and Dimensions of Urban Parks with a Focus on Green Spaces in Mashhad City, Iran

Mohammad Rahim Rahnama1, Marjan Akbari2*
1 Associate Professor of Geography and Urban Planning, Ferdowsi University of Mashhad, Iran
2 Ph.D. student of Geography and Urban Planning, Ferdowsi University of Mashhad, International Branch, Iran

Abstract: - The distribution of urban park spaces is heterogeneous, owing to the high cost of creating green spaces. The aim of this paper is therefore to analyze the principles and dimensions of urban parks, with a focus on green spaces in Mashhad city. Its theoretical foundation is based on library and desk studies and on field visits to the related organizations. Information on ecological characteristics was collected using a questionnaire. Finally, based on the information gathered from Mashhad's urban parks, a comparison was made with the existing Iranian standards and with international standards. The results showed that there are nongovernmental organizations and members of the general public who watch the trend carefully and exert a controlling influence on the removal of public urban parks. In addition, the provinces of Iran are all under extensive land use evaluation and planning, the results of which will be available in the near future.

Key Words: - Urban Parks, Green Spaces, Mashhad city, Public Satisfaction

I. INTRODUCTION
Urban parks have long played a vital role in community-based programs for all people. Urban parks are now viewed as an important part of the broader structure of urban and neighborhood development rather than just recreation and leisure facilities (Bros, 2003; Meijer, 2013). The development of urban park systems dates back to the boulevard systems of Minneapolis and Kansas City.
Beginning in 1859, when Frederick Law Olmsted, Calvert Vaux and more than 3,000 laborers created Central Park in the United States of America, a wave of enthusiasm for urban pleasure grounds swept America and the world (Harnik, 2003). "Urban parks" in this study refers to a specific piece of ground, excluding natural parks, within a city or town, set apart for the use of the general public. It may be planted with trees, lawns and other shrubbery and may include facilities for sport, entertainment and recreation (Rabare et al, 2012). The International Federation of Parks and Recreation Administration (IFPRA) is the only international organization that represents parks, recreation, amenity, cultural, leisure and related services (Xang, 2012). Among the federation's aims are the advancement of parks, recreation, cultural and leisure services through representation and the dissemination of information, and the promotion of relevant research. During the past few years, IFPRA has refocused its activities more towards urban parks, which, for example, led to the establishment of a World Urban Parks Initiative together with a range of other national and international organizations. Moreover, IFPRA strengthened its scientific base by setting up a science task force at the IFPRA world congress in Hong Kong (autumn 2010). At the end of 2011, the Executive Committee of IFPRA decided to commission a review study of urban park benefits, to be coordinated by the Science TF. In response, a research team of four, representing three different institutions, three different disciplines and four different nationalities, was set up. The research team carried out a systematic review of the scientific evidence for urban park benefits during most of 2012 (Register, 2006; Defra, 2007). Urban planning is an instrument of town management. In the past, when it turned from an operational instrument into a legal duty, it became useless.
Planning is undergoing changes towards what is believed to be a more efficient way of working. A trend of the last decades is its market orientation, more precisely its demand orientation, so that it can promptly meet demand. Low et al. (2005) wrote that in this new century we are facing a different kind of threat to urban parks: not only disuse, but patterns of design and management that exclude some people and reduce social and cultural diversity. Most parks in Mashhad city were created in the 1980s and 90s. The early Mashhad settlers had an interest in recreational gardens and parks, in line with the interests of their regions of origin (Low et al, 2005). Open spaces, playgrounds, sports fields and recreational programs make an important contribution to citizens' lives. But to realize their full potential as community resources for youth development, parks can and should go beyond recreation. At their best, they can offer a wide variety of high-quality opportunities for citizens to build the skills and strengths they need to lead full and rewarding lives (Zarabi & Azani, 2011). Urban parks have always been vital in providing youth with recreational opportunities and enriching program initiatives, but only a few are breaking new ground with innovative programs for children and adolescents that keep pace with recent advances in policy thinking. Three examples from beneficiaries of The Wallace Foundation's Urban Parks Initiative illustrate these new directions (Rahnama & Heydari, 2013). On the other hand, in land use planning, urban open space comprises open areas such as parks, green spaces and other open areas. The landscape of urban open spaces can range from playing fields to highly maintained environments to relatively natural landscapes. They are commonly open to public access; however, urban open spaces may be privately owned.
Areas outside city boundaries, such as state and national parks as well as open space in the countryside, are not considered urban open space. Streets, piazzas, plazas and urban squares are not always defined as urban open space in land use planning. The value of urban open space can also be considered with regard to the specific functions it provides. For example, the Bureau of Municipal Research in Toronto lists these functions as the nature function, urban design function, economic function, social retreat function and outdoor recreation function (Bureau of Municipal Research, 1976). Another study categorizes the values open space offers from a sociological viewpoint, listing civic and social capital, cultural expression, economic development, education, green infrastructure, public health, recreation and urban form (Berry, 1976). These studies reiterate the same core benefits of urban open spaces, and none of the categorizations is inconsistent with the others. Additional beneficial aspects of urban open space can be factored into how valuable it is compared to other urban development. One study categorizes these measures of value into six groups: utility, function, contemplative, aesthetic, recreational and ecological (Eysenbach, 2008). These categories account for the value an urban open space holds for the development of the city, in addition to the things citizens consciously appreciate. For example, the "function value" of an open space accounts for the advantages it may provide in controlling runoff. The final three values listed (aesthetic, recreational and ecological) are essentially the same as the values that make urban open spaces consciously valuable to citizens. Of course, there are several different ways to organize and refer to the merit of open space in urban planning (Rahnama & Karimi, 2013).

Fig. 1. Mashhad city urban parks.
In this regard, the usability of green spaces, which was constrained by the irregular growth of the city and by the conversion of gardens and agricultural lands into the city body, faced problems such as incorrect zoning and siting within the city, the use of unsuitable spaces, and little attention to neighborhoods, per capita figures and standards (Esmaeili, 2002). Urban parks complement the urban physical structure. These spaces are a type of urban land use with ecological and social traits. On the other hand, planning and design today is a matter of adapting green space networks (Ericson, 2004). Urban green spaces are now introduced as an appropriate means of promoting quality of life because of their impressive social and ecological influences (Barker, 1968). An urban park space is thus important both for creating a beautiful landscape and as an obstacle to air pollution in cities.

II. RESEARCH INVESTIGATION
Urbanization, both in population and in spatial extent, transforms the landscape from natural cover types to impervious urban land (Xian et al., 2005). This phenomenon is one of the most important factors changing the land surface, leading to the modification of receiving environments that are usually composed of natural cover. The pressure of additional housing and business demands in towns and urban areas alters existing urban green park spaces even further along the route to development (World Resources Institute, 1996). Urban green spaces provide a variety of functions which can be grouped in three classes: architectural and aesthetic, climatic, and engineering functions (Miller, 1997). Urban parks also provide the opportunity for recreation and for experiencing nature. These functions are essential for improving the quality of citizens' lives. Therefore, the allocation of urban land to green space as a class of land use is an important policy issue in almost all cities.
However, due to the physical expansion of Mashhad city, extensive destruction of urban park spaces has occurred, which conflicts with an environmentally sound development paradigm (Rafiee et al, 2009). Owing to its natural conditions, old Mashhad had numerous trees and pastures and was once known as a green city. Its green spaces have changed over the last century, however, as the city developed. According to the research, the ratio of green space to city area in Mashhad was high in the Qajar era. In the 1920s (Pahlavi era) this ratio decreased owing to the physical development of the city. After that, Mashhad grew sharply in the 1960s and 1970s owing to rural immigration, which led to the destruction of gardens and green spaces and the construction of apartments. This made urban park spaces an important topic in urban affairs. Furthermore, as Mashhad developed, the gardens and agricultural lands located around the city were absorbed into it (Taheri, 2007). At this stage the area of urban green space land use decreased for lack of an appropriate plan. From the city development point of view, urban park space consists of different vegetative covers and, as a living factor beside the non-living framework of the city, determines the city's morphological structure. Urban open space consists of existing green space and, at the same time, is known as potential space for urban park development. Sustainable urban development is defined as the improvement or enhancement of the quality of urban living (ecological, cultural, political, social, economic and infrastructural) without creating problems for future generations, problems which could originate from reductions in natural reserves and local assets. Put another way, sustainable urban development focuses on management and development paths that are sustainable, along which aspects of sustainable development such as energy efficiency, green space and neighborhood units are improved.
One of the shortest definitions given of the ecological city is that "the ecologic city is a healthy city from the ecological point of view," and it adds that such a city does not yet exist (André, 1994). There have been many studies of Mashhad's urban parks. Researchers such as Rahnama and Razzaghian (2012) and Khakpour et al. (2010) have studied them, and some experts have focused on measuring Mashhad's green space per capita; one of them, Bahram Soltani, conducted such a study in 1995. Until now, however, there have been no comprehensive studies of Mashhad's urban parks by means of geographical information systems (Rahnama & Razzaghian, 2012).

Fig. 2. A view of Mellat urban park in Mashhad city.

Urban parks are very important, firstly because of their environmental roles and secondly because they serve as cultural and recreational places for free time. As for standards and per capita values of urban green spaces: before any planning for the development of urban green spaces, the related standards and per capita values should be determined, so for a better introduction of the issue the criteria should first be defined. A standard is a level of provision determined by measuring criteria and considered for a given residential population (Chehrzad & Azarpishe, 1992). Given the importance of green space and the necessity of creating it in cities for air conditioning, people's recreation and the beautification of the city, it would seem that no upper limit should be set on creating green space: however much green space is developed, it will still not be enough. In other words, the more green space, as the lung of the city, the better the condition of that city.
However, the relevant standards are not everywhere the same, depending on climatic conditions, ecological features and the availability of water resources on the one hand, and on the need to clean the air of polluted cities on the other.

III. METHOD AND MATERIAL
The present research uses a descriptive-analytical method of an applied kind. Its theoretical foundation is based on library and desk studies and on field visits to the related organizations. Information on ecological characteristics was collected using questionnaires (open and closed questions), along with interviews; data were thus collected in both documentary and field ways. Different approaches were applied to calculate the ratio of urban spaces and urban parks in the thirteen districts of Mashhad city. We then compared the level of population density in each district with the concentration of urban parks in the same district, and continued this process for the other regions. After gathering the needed data, different statistical methods were used for the analysis and discussion. Finally, based on the information gathered from Mashhad's urban parks, a comparison was made with the existing Iranian standards and with international standards. The Iranian standards include standards suggested by the Ministry of Roads and Urban Development; the international standards include those suggested by the United Nations and the public health bureau.

IV. CASE STUDY REGION
Mashhad is the capital of Khorasan Razavi Province of Iran (Fig. 3). It is one of the most important cities of the country because of its religious, historical and economic values, which attract a large number of people each year. In 1986 its population was 668,000, whereas its current population is about 2.8 million. Since 1987, built-up areas in the city have expanded significantly (Rafiee, 2007); the city has witnessed rapid growth in construction, which has caused the destruction of green space areas.
This trend is in sharp contrast with the rules governing the improvement and establishment of new urban parks within the current boundary and the projected future of the city. In fact, the municipality attends closely to the urban parks and scrutinizes even single tree uprootings. On the other hand, there are nongovernmental organizations and members of the general public who watch the trend carefully and exert a controlling influence on the removal of public urban parks. In addition, the provinces of Iran are all under extensive land use evaluation and planning, the results of which will be available in the near future. The application is mostly environmentally oriented, giving high value to public urban parks, and aims to upgrade the per capita green areas in the newly built regions. However, there are other players in the field, including major private stakeholders who have influence in deciding the physical and biological properties of built-up area development plans.

Fig. 3. Case study region. Source: authors, adapted from Rafiee et al, 2009.

V. DISCUSSION AND RESULTS
Mashhad is located in the northeast of Iran, close to the borders of Afghanistan and Turkmenistan. Like other shrine cities in the world, Mashhad has considerable potential for attracting urban tourism, and advertising, as a powerful tool, plays a key role in strengthening this process. Given the need of citizens, children, tourists and other groups to spend their free time in urban parks, green spaces and other open spaces, the municipalities and NGOs must have special programs to create happiness among citizens. Accordingly, the aim of this research is to analyze the principles and dimensions of urban parks, with a focus on green spaces in Mashhad city.
Many studies have been conducted on urban parks and urban green spaces in Mashhad, but each of them adopts a particular approach and pays no attention to the spatial and human structure of these spaces. In reality we must remember that urban parks and urban green spaces are created for people, so we should try to provide more urban and green spaces that give high enjoyment. In Mashhad city, the results of the planning and programming stage show that the shortage of land has increased the value of land and has therefore caused park land use to be converted to other land uses.

Table 1: Total Area of Urban Parks and Green Spaces in Mashhad city

Area     Number of urban parks and green spaces   Total area
1        36                                       263037
2        58                                       6493534
3        24                                       6480747
4        10                                       5157623
5        17                                       3446556
6        34                                       4994155
7        22                                       24119223
8        20                                       733.025.1
9        68                                       10574022
10       65                                       5987321
11       25                                       9069732
12       9                                        549629
Samen    9                                        282147
Total    397                                      8.711.531

Source: Authors, 2013.

The distribution of urban park spaces is heterogeneous, owing to the high cost of creating green spaces; in some cities the per capita green space land use even falls below the optimal range. The green space development process of Mashhad city shows that the first modern green space is not identified with certainty, and different groups hold different opinions on the subject; what many citizens agree on is that Kohsangi urban park is an old urban park in Mashhad, but the National Garden park is in fact the oldest park in the city. It was created in 1952. Then, in order to balance urban park and green space land use with other land uses (residential, commercial, administrative, etc.), further parks were made in the 1960s. The studies show that urban green space increased to 11.1 km2 in 1998, when the number of parks was 184. In 2003 the number of parks increased to 672, with a park area of 12076761 hectare. Thereafter, the development of urban green space became important in 2006 and 2007 owing to increasing air pollution.
The areas of green space in 2006 and 2007 were 6882 and 7244 hectares respectively. In 2008 the area of urban green space increased to 7990 hectares, amounting to 130% of the city area.

Fig. 4. National Garden, one of the oldest urban parks in Mashhad city. Source: authors, 2013.

It is strongly believed that developing more sustainable cities like Mashhad is not just about improving the biotic aspects of urban life; it is also about the social aspects of city life, among them people's satisfaction, experiences and perceptions of the quality of their everyday environments. In the context of this study, the relation between urban parks and city sustainability in Mashhad is addressed through an investigation of the value of urban nature as a provider of social services essential to the quality of human life, which in turn is a key component of sustainable development.

Table 2: Land-use and green-space standards compared with Mashhad city

Land use                       Compatibility                        Per capita   In Mashhad city
Residential                    Minimum in low density               50%          70%
Residential                    Minimum in average density           40%          24%
Residential                    Minimum in high density              30%          6%
Urban park and green spaces    Small sector                         1.2          0.5
Urban park and green spaces    Neighborhood park                    1.5          1.5
Urban park and green spaces    Regional park                        1.8          2
Urban park and green spaces    Parks and gardens around the city    4.5          2.75

Source: Authors, 2013.

Furthermore, the aesthetic, historical and recreational values of urban parks in the case study region increase the attractiveness of the city and promote it as a tourist destination, thus generating employment and revenue. Natural elements such as trees or water also increase property values, and therefore tax revenues as well.

Fig. 5. Pattern of sustainable city in Mashhad city.

The administration of Mashhad city is distributed among several organizations, but there is no cooperation between them, so the organizations need to cooperate with each other.
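Returning to Table 2, the standard-versus-actual comparison it reports reduces to simple arithmetic. A minimal sketch using only the urban-park rows of the table (the residential percentage rows are omitted; units are as reported in the paper):

```python
# Per-capita urban-park provision in Mashhad city versus the standard,
# taken from the park rows of Table 2 (units as reported in the paper).
rows = [
    # (category, standard per capita, value in Mashhad city)
    ("Small sector",                      1.2, 0.5),
    ("Neighborhood park",                 1.5, 1.5),
    ("Regional park",                     1.8, 2.0),
    ("Parks and gardens around the city", 4.5, 2.75),
]

for category, standard, actual in rows:
    ratio = actual / standard        # 1.0 or more means the standard is met
    verdict = "meets the standard" if ratio >= 1.0 else f"at {ratio:.0%} of the standard"
    print(f"{category}: {verdict}")
```

Only the neighborhood-park and regional-park categories reach the standard; the small-sector and city-fringe categories fall short, consistent with the heterogeneous distribution noted above.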
Although there are several organizations for administering the city, the main one is the Parks and Green Spaces Organization. To promote green space management, a gardens office was established in 1961; it was changed into the Parks and Green Spaces Organization in 1964, and basic changes were made to its structure: traditional methods were removed and replaced by modern techniques for administering the city. In Mashhad city, projects for urban improvement came mainly with the arrival of the Islamic City Council, when the organization of gardens and sidewalks was dedicated to the interests of native symbols with respect to the economic aspect of nature in Iran. The projects were related to promoting knowledge about the economic possibilities of the local and exotic flora. Preservation of green space and the implementation of urban plans in Mashhad show that the process of green space development is appropriate. Indeed, developing urban green spaces is the best solution for achieving a sustainable city. Implementing urban green space development plans plays an important role in urban life, especially in Mashhad, where urban management should both create urban green spaces and protect the existing ones. Appropriate and systematic plans are necessary for the development of cities. Moreover, the study of urban plans shows that green spaces are the best solution for reducing air pollution, mental illness and social problems in metropolitan cities such as Mashhad.

Fig. 6. Green space and urban park standard rates compared with current rates in Mashhad city.

The analysis of the change in landscape patterns provided insight into the nature of the changes that have taken place in Mashhad city.
We conclude that urbanization in Mashhad has had important effects on the urban environment, through which urban parks and green spaces have been converted into built-up areas with a corresponding loss of the functions of the green areas. Players with roles in shaping the city and its expansion are now reaching some consensus on the balance between built-up and natural areas. This is especially backed by the ongoing land use planning project in the province, which, together with the activities of NGOs and the Foundation of the Holy Imam Reza Shrine, contributes to a new trend in protecting green areas. In summer the population of the city doubles with pilgrims, who are happy to have even a single green space and tree at which to set up their tent and stay in the city. With the rapid decline in the quality and quantity of green space in Mashhad, managers are required to take timely measures to reverse the trend of changes partially shown in this study; otherwise we will soon face a totally artificial and unpleasant urban environment in which the functions and services of the green areas are lost.

VI. CONCLUSION AND SUGGESTIONS
Green space areas in today's densely populated cities are valued more than before, while at the same time suffering shrinkage under the pressure for more open land for housing development. The results of this study revealed that green space areas in Mashhad city became isolated and decreased during the years 1987-2012. In some cities the per capita green space land use even falls below the optimal range. The green space development process of Mashhad shows that the first modern green space is not identified with certainty, and different groups hold different opinions on the subject; what many citizens agree on is that Kohsangi urban park is an old urban park in Mashhad, but the National Garden park is in fact the oldest park in the city. It was created in 1952.
Then, in order to balance urban park and green space land use with other land uses (residential, commercial, administrative, etc.), further parks were made in the 1960s. The studies show that urban green space increased to 11.1 km2 in 1998, when the number of parks was 184. In Mashhad city, projects for urban improvement came mainly with the arrival of the Islamic City Council, when the organization of gardens and sidewalks was dedicated to the interests of native symbols with respect to the economic aspect of nature in Iran. In summer the population of the city doubles with pilgrims, who are happy to have even a single green space and tree at which to set up their tent and stay in the city. With the rapid decline in the quality and quantity of green space in Mashhad, managers are required to take timely measures to reverse the trend of changes partially shown in this study. Otherwise we will soon face a totally artificial and unpleasant urban environment in which the functions and services of the green areas are lost:
1. Multipolar urban management in Mashhad (Astan Quds, the municipality, military organizations and Islamic religious legal authorities) has created different problems for the city.
2. Collusion between the various institutions involved in urban management is another factor in the inefficiency of the urban development system in Mashhad.
3. Non-cooperative urban management in Mashhad, especially in marginal areas, has created serious problems for sustainable urban development.
4. Old laws approved in earlier times should be corrected.
5. Financial resources should be prepared for using modern equipment and techniques.
6. Standards should be established in urban management.
7. An urban forest plan should be codified.
8. Different models and other urban green space plans should be presented.
9. The creation of urban green spaces should be adapted to a unified system.
10.
Strengthening the natural areas located in Mashhad.
11. Protection of mountainous open space by using native plants.
12. Conversion of industrial land uses to green space land use.
13. Rivers are the most important natural element, so their destruction must be prevented.

REFERENCES
[1] André, G. (1994). The Politics of Parks Design: a History of Urban Parks in America. Cambridge, Mass: MIT Press.
[2] Autumn, A. (2010). An integrated urban development and ecological simulation model. Integrated Assessment, 1, 215-227.
[3] Barker, R.G. (1968). Ecological Psychology. Stanford: Stanford University Press.
[4] Berry, A. (1976). Managing customer relationship management projects: The case of a large French telecommunications company. 28, 339-351.
[5] Bureau of Municipal Research. (1976). Mashhad Municipality, Comprehensive plan for Mashhad green spaces, Social Studies.
[6] Bros, G. (1993). Principles of Sociology. Translators: Gholamabas Tavasoli, Reza Fazel. Samt Publication.
[7] Chehrzad, S., Azarpishe, X. (1992). The Preparation of Designing Eco-park Principles, case study: Pardisan Eco-park of Tehran. Environmental Science and Technology magazine, Volume X, Number 4.
[8] Defra, O. (2007). An efficient hybrid approach based on PSO, ACO and K-means for cluster analysis. Applied Soft Computing, 10, 183-197.
[9] Ericson, I. (2004). New towns and future urbanization in Iran. TWPR, 22(1), 67-86.
[10] Esmaeili, A. (2002). Organization of Planning and Recreational Design. Gorgan University of Agricultural Sciences and Natural Resources Publication.
[11] Eysenbach, Y. (2008). Guiding of population of cities of Iran. Housing Ministry Press.
[12] Harnik, P. (2003). Planning through debate: The communicative turn in planning theory. Town Planning Review, 63(2), 143-162.
[13] Low, S., Taplin, D. and Schel, S. (2005). Rethinking Urban Parks. Texas: University of Texas Press.
[14] Meijer, M., et al. (2013). A Next Step for Sustainable Urban Design in the Netherlands Cities.
[15] Miller, R.W. (1997). Urban Forestry: Planning and Managing Urban Green Spaces, second edition. Prentice Hall, Inc., Upper Saddle River, NJ.
[16] Rabare, F., Sara, L. (2012). The Peru Urban Management Programme: Linking capacity building with local realities. Habitat International, 24(4), 417-431.
[17] Rafieian, M. (2009). Urban system in developing countries: Case study Iran-Esfahan. Tarbiat Modarres University Press.
[18] Rahnama, MR., Heydari, A. (2013). North West border cities of Iran and regional development: A case of Kurdistan Province. Journal of Geography and Regional Planning, Vol. 6(5), pp. 184-192.
[19] Rahnama, MR., Karimi, E. (2013). Ecologic city planning.
[20] Rahnama, MR., Razzaghian, F. (2012). Ecological Analysis of Urban Parks (Case Study: Mashhad Metropolitan). International Journal of Applied Science and Technology, Vol. 2, No. 7.
[21] Register, R. (2006). Ecocities: Building Cities in Balance with Nature.
[22] Taheri, A. Gh. (2007). History of political relations of Iran and England. Tehran: National Works Society, p. 952.
[23] Wong, T., Yuen, B. (2012). Eco-City Planning. Springer, Singapore.
[24] World Resources Institute. (1986). The Preparation of Designing Eco-park Principles, case study: Pardisan Eco-park of Tehran. Environmental Science and Technology magazine, Volume X, Number 4.
[25] Xian, G., Crane, M., Steinward, D. (2005). Dynamic modeling of Tampa Bay urban development using parallel computing. Computers and Geosciences, 31(7), 920-928.
[26] Zarabi, A., Azani, M. (2001). Sustainable development in the industrialized and developing world. Journal of Geography Education, Number 59, Tehran.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-09-15 www.ajer.org Research Paper Open Access

Performance Evaluation of IEEE 802.11g WLANs Using OPNET Modeler

Dr Adnan Hussein Ali1, Ali Nihad Abbas2, Maan Hamad Hassan3
1 Electrical & Electronics Engineering Techniques College, Baghdad (adnan_h_ali@yahoo.com)
2 Studies Planning and Follow up Office, Computer Department, MHE&SR (ali_nihad83@yahoo.com)
3 Electrical & Electronics Engineering Techniques College, Baghdad (maan_singer@yahoo.com)

Abstract: - Current trends towards a ubiquitous network (the Internet) capable of supporting different applications with varying traffic loads (data, voice, video and images) have made it imperative to improve WLAN performance to support 'bandwidth-greedy' applications [1,2]. In this paper, performance optimization methods are presented using an advanced network simulator, OPNET Modeler, to model a WLAN subnetwork deployed within an enterprise WAN framework. Performance optimization is shown via a series of simulation tests with different parameters such as the data rate and the physical-layer characteristics. The quality of service parameters chosen are the overall WLAN load, packet delay and medium access delay, and the overall throughput of the WLAN. Finally, the results are compiled to improve the performance of wireless local area networks.

Keywords: - Wireless LAN, IEEE 802.11g, OPNET

I. INTRODUCTION
A WLAN is a versatile data communications system deployed either as an extension of, or as an alternative to, a conventional wired LAN. The majority of WLAN systems use Radio Frequency (RF) transmission technology, with a few commercial installations employing the Infrared (IR) spectrum [3]. A typical WLAN is connected via the wired LAN as shown in Figure 1 below.

Fig.
1: Wireless Local Area Network II. WLAN TECHNIQUES Commercial wireless LANs employ spread-spectrum technology to achieve reliable and secure transmission in the ISM bands although bandwidth efficiency is compromised for reliability. Newer WLAN technologies such as the IEEE 802.11(a) and (g) are employing Orthogonal Frequency Division Multiplexing (OFDM) schemes[4,5]. www.ajer.org Page 9 American Journal of Engineering Research (AJER) 2013 The IEEE 802.11 WLAN standard specifies a Media Access Control (MAC) layer and a physical layer for wireless LANs. The MAC layer provides to its users both contention-based and contention-free access control on a variety of physical layers.The basic access method in the IEEE 802.11 MAC protocol is the Distributed Coordination Function (DCF), which is a Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) MAC protocol [6]. The IEEE 802.11 standard defines a Basic Service Set (BSS) of two or more fixed, portable, and/or moving nodes or stations that can communicate with each other over the air in a geographically limited area [6]. Two configurations are specified in the standard: ad-hoc and infrastructure. The ad-hoc mode is also referred to as the peer-to-peer mode or an Independent Basic Service Set (IBSS) as illustrated in Fig. 2(a). This ad-hoc mode enables mobile stations to interconnect with each other directly without the use of an access point (AP). All stations are usually independent and equivalent in the ad-hoc network. Stations may broadcast and flood packets in the wireless coverage area without accessing the Internet. The ad-hoc configuration can be deployed easily and promptly when the users involved cannot access or do not need a network infrastructure. However, in many instances, the infrastructure network configuration is adopted. As shown in Fig. 2(b), in the infrastructure mode, there are APs, which bridge mobile stations and the wired network. 
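The contention behaviour of the DCF described above can be illustrated with a minimal sketch of CSMA/CA binary exponential backoff. The helper names are illustrative assumptions (this is not the OPNET model); the slot time and contention-window bounds are the 802.11g ERP-OFDM values.

```python
import random

# Illustrative 802.11g DCF parameters (ERP-OFDM values).
CW_MIN, CW_MAX = 15, 1023  # contention window bounds, in slots
SLOT_TIME_US = 9           # slot duration in microseconds

def backoff_slots(retry: int) -> int:
    """Pick a random backoff count after `retry` failed transmissions.

    The contention window roughly doubles on each retry (binary
    exponential backoff) and is capped at CW_MAX, as in the DCF.
    """
    cw = min((CW_MIN + 1) * (2 ** retry) - 1, CW_MAX)
    return random.randint(0, cw)

def backoff_delay_us(retry: int) -> int:
    """Backoff delay in microseconds for a given retry count."""
    return backoff_slots(retry) * SLOT_TIME_US

# The window grows 15, 31, 63, ... slots until the cap is reached.
for retry in range(4):
    cw = min((CW_MIN + 1) * (2 ** retry) - 1, CW_MAX)
    print(f"retry {retry}: CW = {cw} slots")
```

This randomized deferral is what produces the medium access delay statistic examined later in the simulation results.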
BSSs can be connected by a distribution system, normally a LAN. The coverage areas of BSSs usually overlap, and a handover occurs when a station moves from the coverage area of one AP to that of another [7].

Fig. 2(a): Ad-hoc network architecture and (b): Infrastructure network architecture [6]

Due to the need for high-speed data rates, several standards, such as IEEE 802.11a and the European Telecommunications Standards Institute (ETSI) High Performance Local Area Network type 2 (HIPERLAN/2), are in place [8]. A summary of the key WLAN standards is given in Table 1 below:

Table 1: Summary of Key WLAN Standards [8]
Standard        RF Band    Max. Data Rate    Physical Layer    Range
IEEE 802.11     2.4 GHz    2 Mbps            FHSS, DSSS, IR    50-100 m
IEEE 802.11b    2.4 GHz    11 Mbps           DSSS              50-100 m
IEEE 802.11a    5 GHz      54 Mbps           OFDM              50-100 m
IEEE 802.11g    2.4 GHz    54 Mbps           OFDM              50-100 m
HIPERLAN/2      5 GHz      54 Mbps           OFDM              50 m indoor, 300 m outdoor

III. SIMULATION ENVIRONMENT
OPNET Modeler v14.5 is used for all network simulations. OPNET Modeler is a powerful communication-system discrete event simulator (DES) developed by OPNET Technologies. OPNET Modeler 14.5 assists with the design and testing of communications protocols and networks by simulating network performance for wired and/or wireless environments [9]. OPNET Modeler comes with an extensive model library, including application traffic models (e.g. HTTP, FTP, E-mail, Database), protocol models (e.g. TCP/IP, IEEE 802.11b, Ethernet), and a broad set of distributions for random variable generation. There are also adequate facilities for simulation instrumentation, report generation, and statistical analysis of results. The OPNET tool provides a hierarchical graphical user interface for the definition of network models and a comprehensive development environment for modeling and performance evaluation of communication networks and distributed systems. The package consists of a number of tools, each focusing on particular aspects of the modeling task.
These tools fall into three major categories that correspond to the three phases of modeling and simulation projects: Specification, Data Collection and Simulation, and Analysis [10]. These phases are necessarily performed in sequence. They generally form a cycle, with a return to Specification following Analysis. Specification is actually divided into two parts: initial specification and re-specification, with only the latter belonging to the cycle, as illustrated in Figure 3.

Fig. 3: OPNET tools

3.1 Simulation Network Model / Baseline Scenario
The 802.11g baseline model was created using a variation of the OPNET 802.11 standard models wlan_deployment scenario. In this scenario, the behaviour of a single infrastructure 802.11g WLAN is examined within the framework of a deployed WAN, to better emulate the configuration of an actual network. An effective and efficient way of increasing the capacity and coverage of WLANs is to place one or more access points at a central location and distribute the wireless signals from the access points to various antenna locations [11]. The WLAN is connected via its AP to an office LAN through a central switch using 100BaseT (100 Mbps) Ethernet wiring, emulating a real-life office environment with a standard Fast Ethernet LAN. An IP gateway (i.e., an enterprise router) connects the LAN to an IP cloud used to represent the backbone Internet. The gateway connects to the office LAN using 100BaseT Ethernet wiring, while the connection between the gateway and the IP cloud is made with a point-to-point T1 (1.544 Mbps) serial link, as depicted in Figure 4. The network's traffic servers are located on the other side of this IP cloud, behind a firewall connected by a T1 link, denoting the headquarters of the hypothetical corporation.
These servers connect to the firewall using 100BaseT Ethernet wiring and are used as the source and destination of all services: HyperText Transfer Protocol (HTTP), File Transfer Protocol (FTP), Electronic Mail (E-mail), Database, multimedia (voice and video) and Telnet sessions, running on the entire network and representing the traffic exchanged with the mobile nodes in the 802.11g WLAN during the simulation.

Figure 4: Simulated WAN Framework

The red octagon in Figure 4, titled subnet_1, represents the remote branch office, consisting of an office LAN with 20 workstations and an 802.11g WLAN BSS subnetwork connected by a 100BaseT link. Within that subnetwork are the mobile nodes and the access point that constitute the WLAN, as seen in Figure 5.

Figure 5: 802.11g WLAN BSS

A single fixed access point and six mobile nodes were chosen as the WLAN configuration for the model. All mobile nodes are the same distance from the AP. This small WLAN was selected both to limit the scope of the simulation and to keep simulation durations acceptable. The WLAN 802.11g baseline network model is configured to generate the following application traffic: web browsing, file transfer, e-mail, database, print, Telnet session and video conference. However, all the applications defined in OPNET Modeler are enabled for future use. Table 2 details the departments and the applications they commonly use. Figure 6 shows the profile configuration, which defines how the applications are run at the OPNET network level. Each profile contains a number of applications, configured as shown in Table 2, which run throughout the simulation.

Figure 6: Profile Configuration

3.2 Baseline IEEE 802.11g WLAN Deployment Scenario
The 802.11g baseline model was run in OPNET to test and illustrate its performance.
The goal of the simulation was to ensure proper operation of the model by analyzing a particular aspect of a protocol's behaviour or examining a specific network performance characteristic.

3.2.1 Load
The final load on the WLAN as a function of time as the simulation progressed is one of the important results. The overall WLAN load is displayed in Figure 7.1, showing an average value of 325 Kbps at the 15-minute mark.

Figure 7.1: Total load of the WLAN

The loads on each station and on the AP are shown in Figure 7.2, and the approximate average peak values at the 15-minute mark are given in Table 3 below.

Figure 7.2: Load values for the access point and stations

Table 3: Load values for the AP and stations
Node                   Load (Kbps)
AP                     162.5
e-commerce customer    0.0485
engineer               0.061
marketing              0.186875
multimedia             156.8625
researcher             0.07
salesperson            0.095
Total                  320

The sum of the average loads of the access point and the stations is approximately equal to the overall WLAN load.

3.2.2 Delay
Important parameters in determining the successful operation of the MAC layer, its timing operations, and the RTS/CTS mechanism are the medium access delay and overall packet transmission delay statistics. These results are shown in Figure 8. The average overall WLAN packet delay reaches 6 ms, while the average WLAN medium access delay reaches 5 ms.

Figure 8: Packet Delay and Medium Access Delay

3.2.3 Throughput
Since no data was dropped, the total load on the WLAN must closely match the overall throughput of the WLAN, which is the case here, as shown in Figure 9.

Figure 9: The overall throughput of the WLAN

IV. CONCLUSION
The overall performance of IEEE 802.11g wireless local area networks has been analyzed in detail with the help of OPNET Modeler.
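As a quick sanity check on Table 3, the per-node averages can be summed and compared against the reported totals. A minimal sketch; the node names and values are simply those tabulated above:

```python
# Per-node average peak loads from Table 3, in Kbps.
loads_kbps = {
    "AP": 162.5,
    "e-commerce customer": 0.0485,
    "engineer": 0.061,
    "marketing": 0.186875,
    "multimedia": 156.8625,
    "researcher": 0.07,
    "salesperson": 0.095,
}

total = sum(loads_kbps.values())
print(f"sum of node loads: {total:.4f} Kbps")

# The sum should sit close to both the tabulated total (320 Kbps)
# and the overall WLAN load read from Figure 7.1 (~325 Kbps).
assert abs(total - 320) < 1.0
```

The small residuals against 320 Kbps and 325 Kbps come from rounding and from reading peak values off the plotted curves.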
The performance has been analyzed with the help of parameters such as throughput, medium access delay, and the overall WLAN load. These parameters reveal different ways to optimize the performance of wireless local area networks; within a limited time, performance can be optimized in terms of throughput and medium access delay. The results obtained show that the WLAN subnetwork operates within the normal limits of the IEEE 802.11g standard.

REFERENCES
[1] Ms. Kaur, Dr. Sandip Vijay and Dr. S. C. Gupta, "Performance Analysis and Enhancement of IEEE Wireless Local Area Networks", Vol. 9, Issue 5 (Ver 2.0), January 2010.
[2] Sameh H. Ghwanmeh and Abedel Rehman Al-Zoubidi, "Wireless network performance optimization using OPNET Modeler", Information Technology Journal, Vol. 5, Issue 1, 2006. DOI: 10.3923/itj.2006.18.24
[3] T. Soungalo, Li Renfa and Z. Fanzi, "Evaluating and Improving Wireless Local Area Networks Performance", International Journal of Advancements in Computing Technology, Vol. 3, No. 2, March 2011.
[4] Jon W. Mark et al., "IEEE 802.11 Roaming and Authentication in WLAN/Cellular Mobile Networks", IEEE Wireless Communications, Vol. 11, August 2004, pp. 66-74.
[5] T. Regu and Dr. G. Kalivarathan, "Prediction of Wireless Communication Systems in the Context of Modeling", International Journal of Electronics and Communication Engineering & Technology (IJECET), Vol. 4, Issue 1, 2013, pp. 11-17.
[6] C. C. Chow and V. C. M. Leung, "Performance of IEEE 802.11 Medium Access Control Protocol over a Wireless Local Area Network with Distributed Radio Bridges".
[7] IEEE Standards Board, "IEEE Standard for Wireless LAN: Medium Access Control and Physical Layer Specifications", IEEE Std. 802.11-1997, November 1997.
[8] IEEE 802.11 WG, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Standard 802.11-2007, June 2007.
[9] www.opnet.com
[10] S. Mangold, S. Choi, G. R. Hiertz, O. Klein and B. Walke, "Analysis of IEEE 802.11e for QoS support in wireless LANs", IEEE Wireless Communications Magazine, December 2003.
[11] C. Fisher, "The Wireless Market: Growth Hinges on the Right Solution", Radiata Inc., September 2000.

AUTHOR'S PROFILE
The author is a Lecturer in the Computer Engineering Department at the Institute of Technology, Baghdad, Iraq. He was awarded a Doctor of Philosophy in Laser and Opto-Electronics Engineering from the University of Technology, Baghdad, in 2007. He studied for a Master of Science in Electronics Engineering (Copper Vapor Laser power supplies) at the University of Technology, Baghdad, in 2000, and gained a Bachelor's degree in Electrical and Electronic Engineering from the University of Technology, Baghdad, in 1987. He is currently the Deputy Dean of the Institute of Technology, Baghdad, Iraq. His research interests are radio over fiber, wireless networks, laser power supplies and OPNET.
American Journal of Engineering Research (AJER) 2013
e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-321-333
www.ajer.org
Research Paper                                                  Open Access

Exergy Analysis of a Combined Gas/Steam Turbine Cycle with a Supercharged Boiler

Sayed A. Abdel-Moneim* and Khaled M. Hossin**
* Prof., Mech. Eng. and Vice Dean, Faculty of Eng., Northern Border Univ. (NBU), Kingdom of Saudi Arabia (KSA).
** M.Sc., Graduate student

Abstract: - In this paper, an energy and exergy analysis of a combined cycle with a supercharged boiler was carried out. A combination of a basic gas turbine and a steam cycle with both a supercharged boiler (SB) and a heat recovery boiler (HRB) was investigated. The effects of the gas turbine inlet temperature, the excess air factor, and the compressor pressure ratio on the performance of the supercharged boiler combined cycle (SBCC) were studied. Comparisons between the SBCC and the conventional combined cycle were performed. The results indicated that the SBCC gives an output power up to 2.1 times that of the conventional combined cycle when compared at the same values of the operating parameters. However, the SBCC efficiency was found to be lower than that of the conventional combined cycle. The exergy analysis showed an advantage of the SBCC over the conventional combined cycle.

Keywords: - Thermal power plant, supercharged boiler, combined cycle, energy, exergy, second-law efficiency, exergy destruction.
NOMENCLATURE
C_P    specific heat at constant pressure (kJ/kmol K)
e      flow specific exergy (kJ/kg)
Ex_D   exergy destruction rate (kW)
h      enthalpy (kJ/kg)
Δh     enthalpy difference (kJ/kg)
LHV    lower heating value of fuel (kJ/kmol)
ṁ      mass flow rate (kg/s)
m_HP   mass of HP steam generated in the HRB (kg/kmol n.g.)
m_LP   mass of LP steam generated in the HRB (kg/kmol n.g.)
m_LH   mass ratio of LP to HP steam
m_SB   mass of steam generated in the SB (kg/kmol n.g.)
M      molecular weight (kg/kmol)
P      pressure (bar) or power (kW)
PR     ST-to-GT power ratio
q      heat transferred per kg of steam (kJ/kg)
T      temperature (K)
v      specific volume (m³/kg)
w      work per kg of steam (kJ/kg)
W      work (kJ/kmol n.g.)
X_a    actual air-to-fuel ratio (kmol_a/kmol n.g.)
X_g    amount of product gases (kmol_g/kmol n.g.)

Greek symbols
α₁, α₂, α₃   steam mass fractions
ε_HRB   efficiency of the HRB
ε_SB    efficiency of the SB
η_com   thermal efficiency of the combined cycle
η_G     generator efficiency
η_GC    thermal efficiency of the GT cycle
η_HRB   thermal efficiency of the HRB steam cycle
η_m     mechanical efficiency
η_P     pump isentropic efficiency
η_SB    thermal efficiency of the SB steam cycle
η_2nd   second-law efficiency
λ       excess air factor
π_C     compressor pressure ratio

Subscripts
a air; com combined cycle; g product gases; GC gas turbine cycle; i inlet; n.g natural gas; N net; o outlet; SC steam turbine cycle

Acronyms
C compressor; CON condenser; CP condensate (circulating) pump; EC economizer; EV evaporator; FP feed pump; FWH surface feed-water heater; GEN generator; GT gas turbine; HRB heat recovery boiler; HP high pressure; LP low pressure; NG natural gas; P pump; SB supercharged boiler; SBCC supercharged boiler combined cycle; SH superheater; ST steam turbine

I. INTRODUCTION
Exergy analysis is a technique based on the first and second laws of thermodynamics which provides an alternative and illuminating means of assessing and comparing processes and systems rationally and meaningfully. Unlike energy, exergy is not conserved and is depleted by irreversibilities in the processes. The performance of energy systems is degraded by the presence of irreversibilities, and entropy production is a measure of the irreversibilities present during a process. In particular, exergy analysis yields efficiencies that provide a true measure of how nearly actual performance approaches the ideal, and it identifies more clearly than energy analysis the causes and locations of thermodynamic losses. Consequently, exergy analysis can assist in improving and optimizing designs. Several studies have been carried out by researchers [1-5] to evaluate the performance of thermal power plants using exergy analysis.
Combined gas/steam turbine cycle power plants are widely used for cogeneration as well as electricity generation. In combined cycles, the gas turbine exhaust heat is utilized through the use of heat recovery boilers (HRBs). The overall efficiency of combined power plants can be improved by increasing the mean temperature at which heat is supplied (by increasing the gas turbine inlet temperature) and/or by decreasing the mean temperature at which heat is rejected [6-8]. Briesch et al.
[9] reported that 60% efficiency can be achieved for a combined cycle by increasing the gas turbine inlet temperature to 1427 °C. Modeling and optimization of a dual-pressure reheat combined cycle was carried out by Bassily [10], who introduced a technique to reduce the irreversibility of the steam generator.
One applicable method of saving energy and reducing steam generator size is to supercharge the steam generator by using a gas-turbine-driven compressor to furnish combustion air. Developments in metallurgy and pressure-vessel technology make it possible to build such a supercharged boiler (SB). The reduction in size and heat transfer surface of a supercharged boiler is due to two reasons. First, as the operating gas-side pressure is increased, the emissivity of the non-luminous radiating gases increases markedly. Second, the higher gas density and available pressure drop permit much higher gas mass flow rates (compared with a conventional steam generator) to be used in the convection section, with correspondingly higher convection heat transfer coefficients [11]. Mikhael et al. [12] investigated the possibility of utilizing solar energy for electrical power generation with a hybrid mode of steam generation in a combined power plant incorporating an SB and an HRB. Studies based on exergy analysis that identify the location, magnitude and sources of irreversibilities in SBCCs were presented in [13-15].
In this paper, a supercharged boiler combined cycle (SBCC) is modeled and analyzed, and the effects of the different operating parameters are extensively investigated.

II. DESCRIPTION OF THE CYCLE
Figure 1 shows the present supercharged boiler combined plant, which combines the supercharged boiler cycle with the heat recovery cycle. In this SBCC, the compressor supplies pressurized air to the SB (state 2).
All combustion takes place in the boiler, and steam can be generated at any suitable pressure and temperature (state 20S). The steam generated in the SB is circulated through a separate steam cycle. The steam expands in a steam turbine (ST), with steam fractions extracted during the expansion process to heat the feed-water before it enters the SB. High-temperature pressurized gas from the boiler is expanded as it flows through the gas turbine (GT). The power so developed drives the compressor and the generator. The hot exhaust gases from the GT pass through a dual-pressure HRB to generate steam and then go to the stack (state 9). After the water leaves the condenser (state 1S), it is pumped to the dual-pressure HRB, where it is converted to steam at low and high pressures (states 9S and 7S, respectively). The low-pressure steam is mixed with the exhaust steam from the high-pressure turbine (HPST) before entering the low-pressure turbine (LPST), where it expands to the condenser pressure (state 11S).

III. CYCLE ANALYSIS
To evaluate the thermal performance of the cycle, an analysis of each component is carried out based on the following assumptions:
- Temperature differences and pressure drops through gas and steam pipes are negligible.
- The heat losses and pressure drops in the feed-water heaters and condensers are negligible.
- The steam-side pressure drops in the HRB and SB are negligible.
- Air leakage through the gas cycle components is negligible.
The input data and other assumptions used in the present study are listed in Table 1. In the present study, three values of the gas turbine inlet temperature (T3) of 1200 °C, 1300 °C, and 1400 °C are investigated. The excess air factor (λ) for the SB is varied from 1.2 to 2.2 within a range of compressor pressure ratios (π_C) from 6 to 30.

III.i. Analysis of the GT cycle
The GT cycle is assumed to operate according to the actual Brayton cycle, and the three main processes are as follows:

III.i.i. The compression process in the compressor
The work absorbed by the compressor per kmol of air is

$w_C = C_{P,a}(T_2 - T_1)$  kJ/kmol_air   (1)

where $C_{P,a}$ is calculated at the mean temperature between the inlet and outlet of the compressor.

III.i.ii. The combustion process in the combustion chamber
In the present study a clean natural gas fuel with an ultimate analysis of 78.8% CH4, 14% C2H6, 6.8% N2 and 0.4% CO2 by volume is used [11]. The combustion equation, based on one kmol of natural gas, is

$0.788\,CH_4 + 0.14\,C_2H_6 + 0.004\,CO_2 + 0.068\,N_2 + \lambda n_{O_2}(O_2 + 3.76\,N_2) \rightarrow 1.072\,CO_2 + 1.996\,H_2O + (\lambda - 1)n_{O_2}O_2 + (3.76\,\lambda n_{O_2} + 0.068)N_2$   (2)

where λ is the excess air factor and $n_{O_2} = 1.072 + 1.996/2 - 0.004 = 2.066$ is the theoretical O2 required to burn 1 kmol of natural gas (kmol/kmol n.g.).
The energy balance equation for the combustion process, based on 1 kmol of fuel, is

$X_a C_{P,a} T_2 + LHV + C_{P,n.g.} T_{n.g.} = X_g C_{P,g} T_3 + m_{SB}(h_o - h_i)/\varepsilon_{SB}$   (3)

where LHV is the lower calorific value of the natural gas, given by

$LHV = n_{CH_4} LHV_{CH_4} + n_{C_2H_6} LHV_{C_2H_6}$  kJ/kmol n.g.   (4)

and $X_a$ is the actual amount of air (number of kmoles) per kmol of fuel, and $X_g$ is the amount of product gases per kmol of fuel. The mass flow rates of the fuel and combustion gases are then calculated from the mass flow rate of the air as follows:

$\dot{m}_{n.g} = \dot{m}_a/(X_a M_a/M_{n.g})$  kg/s   (5-a)

$\dot{m}_g = (X_g M_g/M_{n.g})\,\dot{m}_{n.g}$  kg/s   (5-b)

III.i.iii. The expansion process in the gas turbine
In this process, the work done by the GT per kmol of natural gas is

$W_{GT} = X_g C_{P,g}(T_3 - T_4)$  kJ/kmol n.g.   (6)

where $C_{P,g}$ is also determined at the mean temperature between the inlet and outlet of the GT. The net work of the GT cycle is

$W_{N,GC} = (W_{GT}\,\eta_m - X_a w_C/\eta_m)\,\eta_G$  kJ/kmol n.g.   (7)

The thermal efficiency of the GT cycle is

$\eta_{GC} = \dfrac{W_{N,GC}}{LHV + C_{P,n.g} T_{n.g}}$   (8)
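A minimal numerical sketch of Eqs. (1)-(8); the specific heats, temperatures and LHV below are rough illustrative assumptions, not the paper's property data:

```python
# Illustrative GT-cycle calculation following Eqs. (1)-(8).
# All property values below are rough assumptions, not the paper's data.

CP_A = 29.5   # molar specific heat of air, kJ/(kmol K), assumed constant
CP_G = 31.0   # molar specific heat of product gases, kJ/(kmol K), assumed
N_O2 = 1.072 + 1.996 / 2 - 0.004   # theoretical O2 per kmol fuel, Eq. (2)

def gt_cycle(T1, T2, T3, T4, lam, eta_m=0.99, eta_G=0.98):
    """Return (w_C, W_GT, W_N_GC) following Eqs. (1), (6) and (7)."""
    w_c = CP_A * (T2 - T1)                 # Eq. (1), kJ/kmol air
    X_a = lam * N_O2 * (1 + 3.76)          # actual air supplied per kmol fuel
    # Product-gas kmoles per kmol fuel, from the RHS of Eq. (2):
    X_g = 1.072 + 1.996 + (lam - 1) * N_O2 + 3.76 * lam * N_O2 + 0.068
    W_gt = X_g * CP_G * (T3 - T4)          # Eq. (6), kJ/kmol n.g.
    W_n = (W_gt * eta_m - X_a * w_c / eta_m) * eta_G   # Eq. (7)
    return w_c, W_gt, W_n

# Assumed temperatures (K) and an assumed LHV for the efficiency, Eq. (8):
w_c, W_gt, W_n = gt_cycle(T1=303.0, T2=650.0, T3=1573.0, T4=900.0, lam=1.6)
LHV, CP_NG, T_NG = 8.0e5, 36.0, 298.0     # placeholder fuel data
eta_gc = W_n / (LHV + CP_NG * T_NG)       # Eq. (8)
assert 0 < eta_gc < 1
```

Note that the air and gas amounts per kmol of fuel grow with λ, which is why excess air enters every downstream balance.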
III.ii. Analysis of the steam turbine cycles
In the present work, the combined cycle shown in Fig. 1, which encloses an HRB cycle and an SB cycle, is analyzed. Each of these two cycles is assumed to operate on a Rankine cycle. An energy balance is applied to each component (control volume) as follows:

III.ii.i. Analysis of the HRB steam cycle
The enthalpy rise in each pump in the cycle is written as

$\Delta h_P = v_i(P_o - P_i)/\eta_P$  kJ/kg   (9)

where P is the pressure in kPa. The heat added to the steam in each stage of the HRB is:

- Low-pressure economizer:
$\dot{Q}_{LPEC} = (\dot{m}_{HP} + \dot{m}_{LP})(h_{3s} - h_{2s})$  kJ/s, or $q_{LPEC} = (1 + m_{LH})(h_{3s} - h_{2s})$  kJ/kg_HP   (10)
where $m_{LH} = \dot{m}_{LP}/\dot{m}_{HP}$ is the mass ratio of LP to HP steam in the HRB.

- Low-pressure evaporator:
$q_{LPEV} = m_{LH}(h_{9s} - h_{3s})$  kJ/kg_HP   (11)

- High-pressure economizer:
$q_{HPEC} = h_{5s} - h_{4s}$  kJ/kg_HP   (12)

- High-pressure evaporator:
$q_{HPEV} = h_{6s} - h_{5s}$  kJ/kg_HP   (13)

- High-pressure superheater:
$q_{HPSH} = h_{7s} - h_{6s}$  kJ/kg_HP   (14)

The total heat added in the HRB per kg of HP steam is then

$q_{HRB} = q_{LPEC} + q_{LPEV} + q_{HPEC} + q_{HPEV} + q_{HPSH}$  kJ/kg_HP   (15)

The work of the HP and LP pumps per kg of HP steam is

$w_{HPP} = \Delta h_{HPP}$  kJ/kg_HP   (16)

$w_{LPP} = (1 + m_{LH})\Delta h_{LPP}$  kJ/kg_HP   (17)

The work of the HP and LP stages of the ST per kg of HP steam is

$w_{HPST} = h_{7s} - h_{8s}$  kJ/kg_HP   (18)

$w_{LPST} = (1 + m_{LH})(h_{10s} - h_{11s})$  kJ/kg_HP   (19)

The net work of the HRB steam cycle per kg of HP steam is given by

$w_{N,HRB} = [(w_{HPST} + w_{LPST})\eta_m - (w_{HPP} + w_{LPP})/\eta_m]\,\eta_G$  kJ/kg_HP   (20)

The thermal efficiency of the HRB steam cycle is calculated as

$\eta_{HRB} = w_{N,HRB}/q_{HRB}$   (21)

Note that the above equations are based on one kg of HP steam.
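Equations (10)-(21) reduce to simple enthalpy bookkeeping per kg of HP steam. A minimal sketch; the enthalpy values below are placeholders, not steam-table data:

```python
def hrb_per_kg_hp(h, m_lh, w_pumps, eta_m=0.99, eta_G=0.98):
    """Heat input, net work and thermal efficiency of the HRB steam
    cycle per kg of HP steam, following Eqs. (10)-(21).

    `h` maps state labels ('2s', '3s', ...) to enthalpies in kJ/kg;
    `w_pumps` is the total pump work of Eqs. (16)-(17).
    """
    q_lpec = (1 + m_lh) * (h['3s'] - h['2s'])     # Eq. (10)
    q_lpev = m_lh * (h['9s'] - h['3s'])           # Eq. (11)
    q_hpec = h['5s'] - h['4s']                    # Eq. (12)
    q_hpev = h['6s'] - h['5s']                    # Eq. (13)
    q_hpsh = h['7s'] - h['6s']                    # Eq. (14)
    q_hrb = q_lpec + q_lpev + q_hpec + q_hpev + q_hpsh   # Eq. (15)

    w_hpst = h['7s'] - h['8s']                    # Eq. (18)
    w_lpst = (1 + m_lh) * (h['10s'] - h['11s'])   # Eq. (19)
    w_net = ((w_hpst + w_lpst) * eta_m - w_pumps / eta_m) * eta_G  # Eq. (20)
    return q_hrb, w_net, w_net / q_hrb            # Eq. (21)

# Placeholder enthalpies (kJ/kg), chosen only to exercise the formulas.
h = {'2s': 170, '3s': 620, '4s': 640, '5s': 1150, '6s': 2790, '7s': 3470,
     '8s': 2900, '9s': 2740, '10s': 2960, '11s': 2300}
q_hrb, w_net, eta_hrb = hrb_per_kg_hp(h, m_lh=0.2, w_pumps=8.0)
assert 0 < eta_hrb < 1
```

In the actual analysis the enthalpies come from steam properties at the dual-pressure HRB states, not from fixed numbers.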
To calculate the mass of HP steam, an energy balance between points 4 and 8 in the HRB is carried out:

$m_{HP} = \dfrac{\varepsilon_{HRB} X_g C_{P,g}(T_4 - T_8)}{q_{HPSH} + q_{HPEV} + q_{HPEC} + q_{LPEV}}$  kg/kmol n.g.   (22)

where T8 is determined using the temperature difference at the LP pinch point ($\Delta T_{PP,LP}$) as

$T_8 = T_{sat}(P_{LP}) + \Delta T_{PP,LP}$   (23)

Then, the net work of the HRB steam cycle per kmol of natural gas is

$W_{N,HRB} = m_{HP}\, w_{N,HRB}$  kJ/kmol n.g.   (24)

III.ii.ii. Analysis of the SB steam cycle
In order to calculate the fraction of steam extracted for each surface heater, energy balances for the surface heaters are carried out.

- Energy balance for surface feed-water heater 1:
$\alpha_1 = \dfrac{h_{18s} - h_{17s}}{h_{21s} - h_{17s}}$   (25)

- Energy balance for surface feed-water heater 2:
$\alpha_2 = \dfrac{(1 - \alpha_1)(h_{16s} - h_{15s})}{h_{22s} - h_{15s}}$   (26)

- Energy balance for surface feed-water heater 3:
$\alpha_3 = \dfrac{(1 - \alpha_1 - \alpha_2)(h_{14s} - h_{13s})}{h_{23s} - h_{13s}}$   (27)

The work of the cycle pumps per kg of steam is:
- Feed pump: $w_{FP} = \Delta h_{FP}$  kJ/kg   (28)
- Heater pump 1: $w_{P1} = \Delta h_{P1}(1 - \alpha_1)$  kJ/kg   (29)
- Heater pump 2: $w_{P2} = \Delta h_{P2}(1 - \alpha_1 - \alpha_2)$  kJ/kg   (30)
- Condensate pump: $w_{CP} = \Delta h_{CP}(1 - \alpha_1 - \alpha_2 - \alpha_3)$  kJ/kg   (31)

The specific work of the ST for the SB steam cycle (per kg of steam) is

$w_{ST,SB} = (h_{20s} - h_{21s}) + (1 - \alpha_1)(h_{21s} - h_{22s}) + (1 - \alpha_1 - \alpha_2)(h_{22s} - h_{23s}) + (1 - \alpha_1 - \alpha_2 - \alpha_3)(h_{23s} - h_{24s})$  kJ/kg   (32)

The net work of the SB steam cycle per kg of steam is

$w_{N,SB} = [w_{ST,SB}\,\eta_m - (w_{FP} + w_{P1} + w_{P2} + w_{CP})/\eta_m]\,\eta_G$  kJ/kg   (33)

The heat added in the SB per kg of steam is given by

$q_{SB} = h_{20s} - h_{19s}$  kJ/kg   (34)

The thermal efficiency of the SB steam cycle is

$\eta_{SB} = w_{N,SB}/q_{SB}$   (35)

The net work of the SB steam cycle per kmol of natural gas is then

$W_{N,SB} = m_{SB}\, w_{N,SB}$  kJ/kmol n.g.   (36)

where the mass of steam generated in the SB (m_SB) is obtained from Eq. (3).
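The extraction-fraction cascade of Eqs. (25)-(32) can be sketched as follows; the enthalpy values are placeholders chosen only to exercise the formulas, not steam-table data:

```python
def sb_steam_cycle(h):
    """Extraction fractions and specific ST work per Eqs. (25)-(32).

    `h` maps state labels to enthalpies in kJ/kg (placeholder data).
    """
    a1 = (h['18s'] - h['17s']) / (h['21s'] - h['17s'])             # Eq. (25)
    a2 = (1 - a1) * (h['16s'] - h['15s']) / (h['22s'] - h['15s'])  # Eq. (26)
    a3 = (1 - a1 - a2) * (h['14s'] - h['13s']) / (h['23s'] - h['13s'])  # Eq. (27)
    w_st = ((h['20s'] - h['21s'])
            + (1 - a1) * (h['21s'] - h['22s'])
            + (1 - a1 - a2) * (h['22s'] - h['23s'])
            + (1 - a1 - a2 - a3) * (h['23s'] - h['24s']))          # Eq. (32)
    return a1, a2, a3, w_st

# Placeholder enthalpies (kJ/kg).
h = {'13s': 200, '14s': 420, '15s': 440, '16s': 640, '17s': 660, '18s': 900,
     '20s': 3400, '21s': 3000, '22s': 2800, '23s': 2600, '24s': 2200}
a1, a2, a3, w_st = sb_steam_cycle(h)
assert all(0 < a for a in (a1, a2, a3)) and w_st > 0
```

Each heater's balance is taken on the feed-water still flowing after the upstream extractions, which is why the (1 − α₁ − …) factors cascade through Eqs. (26)-(32).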
The total net output of the combined cycle per kmol of natural gas is

$W_{com} = W_{N,GC} + W_{N,HRB} + W_{N,SB}$  kJ/kmol n.g.   (37)

The combined cycle thermal efficiency is then calculated as

$\eta_{com} = \dfrac{W_{com}}{LHV + C_{P,n.g} T_{n.g}}$   (38)

Another important parameter for the combined cycle is the power ratio, defined as

$PR = \dfrac{W_{N,HRB} + W_{N,SB}}{W_{N,GC}}$   (39)

The output power produced by the combined cycle in kW can be determined from

$P_{com} = W_{com}\, \dot{m}_{n.g}/M_{n.g}$  kW   (40)

Exergy analysis
The exergy destruction in the different control volumes of the cycle is calculated by applying the exergy balance equation derived in [16-17]. This equation reads

$Ex_D = \sum \dot{m}_i e_i - \sum \dot{m}_o e_o + \left(1 - \dfrac{T_0}{T}\right)\dot{Q}_{CV} - \dot{W}_{CV}$  kW   (41)

where $\dot{Q}_{CV}$ is the heat transferred to the control volume (kW), $\dot{W}_{CV}$ is the rate of work out of the control volume (kW), T is the temperature at which heat is transferred (K), and T0 is the reference temperature, equal to 298 K. The exergy of a flow stream at a given pressure P and temperature T is given by

$e = (h - h_o) - T_o(s - s_o)$   (42)

where the properties h and s for steam are obtained from the present code, and those for gas are calculated from the ideal gas model as

$h - h_o = C_P(T - T_o)$   (43)

$s - s_o = C_P \ln(T/T_o) - R \ln(P/P_o)$   (44)

The second-law efficiency for each control volume in a steady-state, steady-flow (SSSF) process is calculated as

$\eta_{2nd} = 1 - \dfrac{Ex_D}{\sum Ex_i - \sum Ex_o}$   (45)

IV. NUMERICAL CALCULATIONS
In the present work, a FORTRAN computer code was designed that includes special subroutines utilizing the governing equations (1) to (45). This code was used to calculate the thermodynamic properties of the water at each state, perform a heat balance for each control volume in the combined cycle, evaluate the energy and exergy performance characteristics of the cycle, and predict the effect of the different operating parameters on the cycle performance.

V.
RESULTS AND DISCUSSIONS
The present results were obtained based on the operating data for the cycle following Akiba and Thani [11], as listed in Table 1.

Table 1: Assumptions of the cycle
Parameter                            Value      Unit
Air mass flow rate                   67.9268    kg/s
Ambient temperature                  30         °C
Atmospheric pressure                 1.01325    bar
Compressor isentropic efficiency*    85         %
GT isentropic efficiency*            90         %
Gas-side pressure loss in SB*        6          %
Efficiency of SB                     95         %
GT exhaust gas pressure              1.05       bar
Pump isentropic efficiency*          70         %
ST isentropic efficiency*            87         %
Condenser pressure                   0.075      bar
Efficiency of HRB                    95         %
Pinch point of HRB at HP*            15         °C
Pinch point of HRB at LP*            25         °C
LP to HP steam mass ratio            0.2        -
Mechanical efficiency*               99         %
Generator efficiency*                98         %

In addition, the following steam conditions at the various states in the cycle were considered, as listed in Table 2.

Table 2: Steam conditions of the cycle
Parameter          ST (SB)   HP ST   LP ST   FWH1   FWH2   FWH3
Pressure (bar)     170       50      4.5     59     14     1.9
Temperature (°C)   540       540     Tsat    -      -      -

The effects of excess air on the energy and exergy performance characteristics of the SBCC are predicted at a fixed T3 of 1300 °C. The energy performance characteristics are plotted against the compressor pressure ratio at different excess air factors in Figs. 2-3. The output power and power ratio are shown in Fig. 2 and the combined cycle efficiency in Fig. 3.

Fig. 2: The SBCC output power and power ratio at a fixed T3 of 1300 °C and three different excess air factors.

Fig. 3: The SBCC energy efficiency at a fixed T3 of 1300 °C and three different excess air factors.
The results showed noticeable effects of the excess air factor on the cycle performance. The output power and the power ratio decrease as the excess air factor increases, while the combined cycle efficiency increases. Also, an optimum compressor pressure ratio for the combined cycle efficiency was found, depending on the excess air factor. On the other hand, the change of the output power with the compressor pressure ratio is rather small.
The exergy destruction in the cycle components at different excess air factors is shown in Fig. 4. It was found that the exergy destruction in the SB is the major part, followed by that in the HRB. Figure 4 shows that the exergy destruction in the SB decreases with increasing excess air factor. Also, by increasing the excess air factor, the exergy destruction in the HRB is slightly decreased, due to the reduction in the temperature difference between the hot gases and the cold steam in the HRB. The exergy destruction in the compressor is not affected by the excess air factor, as the air mass flow rate was kept constant.

Fig. 4: Total exergy destruction in the SBCC at different excess air factors.

The exergy destructions in the ST, FWHs, and CON2 decrease with increasing excess air factor due to the decrease in the amount of steam generated in the SB, while those of the other components are not affected. Figure 5 shows the relative values of the total exergy destruction in the different components of the combined cycle.

Fig. 5: Exergy destructions in the cycle components at different excess air factors (λ = 1.4, 1.6, 1.8; π_c = 15; T3 = 1300 °C).
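The component-level bookkeeping behind Figs. 4-5 follows Eqs. (41)-(45). A minimal sketch for a single adiabatic control volume; the gas states, mass flow and constant C_P and R values are placeholder assumptions:

```python
import math

T0, P0 = 298.0, 1.01325   # reference (dead) state: K, bar
CP, R = 1.1, 0.287        # assumed gas properties, kJ/(kg K)

def flow_exergy(T, P):
    """Specific flow exergy of an ideal-gas stream, Eqs. (42)-(44), kJ/kg."""
    dh = CP * (T - T0)                                  # Eq. (43)
    ds = CP * math.log(T / T0) - R * math.log(P / P0)   # Eq. (44)
    return dh - T0 * ds                                 # Eq. (42)

def turbine_exergy_balance(m_dot, T_in, P_in, T_out, P_out, W_cv):
    """Exergy destruction and second-law efficiency for an adiabatic
    turbine control volume, Eqs. (41) and (45) with Q_cv = 0."""
    ex_in = m_dot * flow_exergy(T_in, P_in)
    ex_out = m_dot * flow_exergy(T_out, P_out)
    ex_d = ex_in - ex_out - W_cv             # Eq. (41)
    eta_2nd = 1 - ex_d / (ex_in - ex_out)    # Eq. (45)
    return ex_d, eta_2nd

# Placeholder expansion: 70 kg/s from 1573 K, 15 bar to 900 K, 1.05 bar.
ex_d, eta = turbine_exergy_balance(70.0, 1573.0, 15.0, 900.0, 1.05,
                                   W_cv=45000.0)
assert ex_d > 0 and 0 < eta < 1
```

Summing such per-component Ex_D values over the compressor, SB, turbines, heaters and condensers yields the component breakdown plotted in Fig. 5.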
Figure 6 shows a plot of the second-law efficiency against the compressor pressure ratio at different excess air factors. The second-law efficiency increases with increasing excess air factor. Again, an optimum compressor pressure ratio was found, depending on the value of the excess air factor.

[Plot: η2nd vs. πc at λ = 1.4, 1.6, 1.8.] Fig. 6: Second-law efficiency of the SBCC at different excess air factors.

The effect of the turbine inlet temperature on the energy performance and the exergy destruction of the cycle was also investigated in the present work. Three different values of T3 (1200ºC, 1300ºC, and 1400ºC) were studied at a fixed excess air factor of 1.6. Figure 7 shows that the turbine inlet temperature strongly affects the combined cycle thermal efficiency.

[Plot: ηcom vs. πc at T3 = 1200ºC, 1300ºC, 1400ºC.] Fig. 7: Thermal performance of the SBCC at different turbine inlet temperatures.

Figure 8 shows the effect of the turbine inlet temperature on the second-law efficiency of the combined cycle. The second-law efficiency is highly sensitive to the turbine inlet temperature; it increases strongly as the turbine inlet temperature increases.

[Plot: η2nd vs. πc at T3 = 1200ºC, 1300ºC, 1400ºC.] Fig. 8: The second-law efficiency of the SBCC at different turbine inlet temperatures.

A comparison between the SBCC and the conventional combined cycle was carried out to evaluate the performance of these cycles. This comparison was carried out at a fixed air mass flow rate of 67.9268 kg/s and a turbine inlet temperature of 1300ºC, with the other parameters as listed in Table 1.
Figure 9 shows a comparison between the thermal efficiency of the SBCC and the conventional combined cycle. The results showed that the combined cycle efficiency of the SBCC is lower than that of the conventional combined cycle. Also, for the conventional combined cycle, the efficiency increases continuously with the compressor pressure ratio.

[Plot: ηcom vs. πc for the conventional cycle and for the SBCC at λ = 1.2 and 2.2; T3 = 1300ºC.] Fig. 9: Comparison between the efficiency of the SBCC and the conventional combined cycle.

Figure 10 shows a comparison between the second-law efficiency of the SBCC and the conventional combined cycle. The second-law efficiency of the SBCC is higher than that of the conventional one at excess air factors over 1.2. An increase of 9.5% to 18.5% in the second-law efficiency was obtained for the SBCC over the conventional combined cycle.

[Plot: η2nd vs. πc for the conventional cycle and for the SBCC at λ = 1.2 and 2.2; T3 = 1300ºC.] Fig. 10: Comparison between the second-law efficiency of the SBCC and the conventional combined cycle.

Finally, the present predictions for the SBCC were correlated in terms of the investigated operating parameters. A correlation of the same form was obtained for the combined cycle efficiency, the second-law efficiency, and the total exergy destruction ratio (the total exergy destruction to the total exergy input), with different correlating coefficients as listed in Table 3. The correlation form is

    Φ = a0 · πc^a1 · λ^a2 · (T3/T0)^a3                    (46)

where the variable Φ is one of ηcom, η2nd, or ExD*com, the coefficients a0, a1, a2, and a3 are listed in Table 3, and T3 and T0 are temperatures in (K).
The obtained correlation is valid within the ranges of the operating parameters 6 ≤ πc ≤ 30, 1200ºC ≤ T3 ≤ 1400ºC, and 1.2 ≤ λ ≤ 2.0.

Table 3 Coefficients of the correlation (46).
Variable   a0        a1          a2         a3         % DEVmax
ηcom       0.15798   1.137E-2    0.13231    0.62025    2.57
η2nd       0.15204   1.698E-2    0.18693    0.71240    2.76
ExD*com    2.60315   -2.806E-2   -0.30428   -1.04413   4.75

VI. CONCLUSIONS
In the present work, a thermodynamic analysis of a supercharged boiler combined cycle was carried out. The effects of the gas turbine inlet temperature, the excess air factor, and the compressor pressure ratio on the performance of the cycle were investigated. A comparison between the SBCC and the conventional cycle performance was also carried out. The present study leads to the following conclusions:
1. The largest values of the output power for the SBCC are predicted at the minimum excess air factor and the maximum turbine inlet temperature.
2. The SBCC has higher values of the output power, ranging from 1.6 to 2.1 times that of the conventional combined cycle.
3. The combined cycle thermal efficiency of the SBCC is lower than that of the conventional cycle.
4. For a turbine inlet temperature of 1300ºC, optimum compressor pressure ratios giving maximum efficiencies are predicted for the SBCC, while for the conventional cycle the efficiency increases continuously with the compressor pressure ratio.
5. The maximum exergy losses were found in the supercharged boiler and the heat recovery boiler. Research efforts are therefore recommended to minimize the losses in these components.
6. Lower values of the total exergy destruction in the SBCC were found at excess air factors over 1.2.
7. An exergy destruction ratio ranging from 31% to 43% was found for the SBCC, while values from 43% to 52% were obtained for the conventional combined cycle.
8.
Higher values of the second-law efficiency were found for the SBCC compared with the conventional combined cycle. An enhancement ranging from 9.5% to 18.5% in the second-law efficiency was found for the SBCC compared with the conventional cycle.
9. A new correlation was obtained to correlate the combined cycle performance characteristics with the different operating parameters (turbine inlet temperature, excess air factor, and compressor pressure ratio).

REFERENCES
[1] M. J. Ebadi and M. Gorji-Bandpy, Exergetic Analysis of Gas Turbine Plants, Int. Journal of Exergy Research, 2(1), 2005, 31-39.
[2] P. O. Ayoola and N. A. Anozie, A Study of Sections Interaction Effects on Thermodynamic Efficiencies of a Thermal Power Plant, British Journal of Applied Science & Technology, ISSN: 22310843, 3(4), 2013, 1201-12143.
[3] W. Goran and G. Mei, On Exergy and Sustainable Development, Part 1: Conditions and Concepts, Exergy International Journal, 1(3), 2001, 128-145.
[4] S. Sengupta, A. Datta and S. Duttagupta, Exergy Analysis of a Coal-Based 210 MW Thermal Power Plant, Int. Journal Energy Research, 31(1), 2007, 14-28.
[5] A. Mohammad, A. Pouria and H. Armita, Energy, Exergy and Exergoeconomic Analysis of a Steam Power Plant, Int. Journal Energy Research, 33(5), 2008, 499-512.
[6] M. S. Briesch, R. L. Bannister, I. S. Diakunchak and D. J. Huber, A Combined Cycle Designed to Achieve Greater Than 60 Percent Efficiency, ASME J. Engineering for Gas Turbines and Power, 117, 1995, 734-741.
[7] A. M. Bassily, Modeling, Numerical Optimization, and Irreversibility Reduction of a Dual-Pressure Reheat Combined-Cycle, Applied Energy, 81, 2005, 127-151.
[8] M. Akiba and E. A. Thani, Thermodynamic Analysis of New Combination of Supercharged Boiler Cycle and Heat Recovery Cycle for Power Generation, ASME J. Engineering for Gas Turbines and Power, 118, 1996, 453-460.
[9] N. N. Mikhael, K. K. A. Morad and A. M. I. Mohamed, Design Criterion of Solar-Assisted Combined-Cycle Power Plants with Parabolic Trough Concentrators, Port-Said Engineering Research J., 4, 2000, 80-101.
[10] M. Ghazikhani, H. Takdehghan and A. Moosavi, Exergy Analysis of Gas Turbine Air-Bottoming Combined Cycle for Different Environment Air Temperature, Proceedings of 3rd International Energy, Exergy and Environment Symposium, 2007.
[11] C. Koch, F. Cziesla and G. Tsatsaronis, Optimization of Combined Cycle Power Plants Using Evolutionary Algorithms, Chemical Engineering and Processing, 2007.
[12] Y. Kwon, H. Kwan and S. Oh, Exergoeconomic Analysis of Gas Turbine Cogeneration System, Int. Journal of Exergy, 1, 2001, 31-40.
[13] H. Jericha and F. Hoeller, Combined Cycle Enhancement, ASME J. Engineering for Gas Turbines and Power, 113, 1991, 198-202.
[14] O. Bolland, A Comparative Evaluation of Advanced Combined Cycle Alternatives, ASME J. Engineering for Gas Turbines and Power, 113, 1991, 190-197.
[15] B. Seyedan, P. L. Dhar, R. R. Guar and G. S. Bindra, Optimization of Waste Heat Recovery Boiler of a Combined Cycle Power Plant, ASME J. Engineering for Gas Turbines and Power, 118, 1996, 561-564.
[16] G. V. Wylen, R. Sonntag and C. Borgnakke, Fundamentals of Classical Thermodynamics, 4th Ed. (John Wiley & Sons, Inc., 1994).
[17] Y. Sanjay, O. Singh and B. N. Prasad, Energy and Exergy Analysis of Steam Cooled Reheat Gas-Steam Combined Cycle, Applied Thermal Engineering, 2007.
American Journal of Engineering Research (AJER), 2013
e-ISSN: 2320-0847, p-ISSN: 2320-0936
Volume-02, Issue-12, pp-472-483
www.ajer.org
Research Paper                                    Open Access

Design And Development Of Suitable Software Engineering Techniques To Detect And Manage Buffer-Overflows In Computer Systems

Mohd Ayaz Uddin1, Mirza Younus Ali Baig2, Prof. Dr. G. Manoj Someswar3
1. Assistant Professor, Department of Information Technology, Nawab Shah Alam Khan College of Engineering & Technology (Affiliated to JNTUH), New Malakpet, Hyderabad-500024, A.P., India.
2. Assistant Professor, Department of Information Technology, Nawab Shah Alam Khan College of Engineering & Technology (Affiliated to JNTUH), New Malakpet, Hyderabad-500024, A.P., India.
3. Professor, HOD & DEAN (Research), Department of Computer Science & Engineering, Nawab Shah Alam Khan College of Engineering & Technology (Affiliated to JNTUH), New Malakpet, Hyderabad-500024, A.P., India.

Abstract: - Throughout the history of cyber security, buffer overflow has been one of the most serious vulnerabilities in computer systems. Buffer overflow vulnerability is a root cause of many cyber attacks, such as server break-ins, worms, zombies, and botnets. Buffer overflow attacks are a popular choice in these attacks, as they provide substantial control over a victim host. "A buffer overflow occurs during program execution when a fixed-size buffer has had too much data copied into it. This causes the data to overwrite adjacent memory locations, and, depending on what is stored there, the behavior of the program itself might be affected." (Note that the buffer could be in the stack or the heap.)
Taking a broader viewpoint, although buffer overflow attacks do not always carry code in their attacking requests (or packets), code-injection buffer overflow attacks such as stack smashing probably account for most of the buffer overflow attacks that have happened in the real world. Although a great deal of research has been done to tackle buffer overflow attacks, existing defenses are still quite limited in meeting four highly desired requirements: (R1) simplicity in maintenance; (R2) transparency to existing (legacy) server OS, application software, and hardware; (R3) resiliency to obfuscation; (R4) economical Internet-wide deployment.
Keywords: - Code-injection Buffer Overflow attack, C Range Error Detector, Libsafe, Libverify, Safe Pointer, Data Execution Prevention, Solar Designer, Stack guard
I. INTRODUCTION
As a result, although several very secure solutions have been proposed, they are not pervasively deployed, and a considerable number of buffer overflow attacks continue to succeed on a daily basis. To see how existing defenses are limited in meeting these four requirements, let us break down the existing buffer overflow defenses into six classes:
(1A) Finding bugs in source code.
(1B) Compiler extensions.
(1C) OS modifications.
(1D) Hardware modifications.
(1E) Defense-side obfuscation.
(1F) Capturing code running symptoms of buffer overflow attacks.
(Note that the above list does not include binary code analysis based defenses, which we will address shortly.) [6] We may briefly summarize the limitations of these defenses in terms of the four requirements as follows. (a) Class 1B, 1C, 1D, and 1E defenses may cause substantial changes to existing (legacy) server OSes, application software, and hardware; thus they are not transparent. Moreover, Class 1E defenses generally cause processes to be terminated. As a result, many businesses do not view these changes and the process termination overhead as economical deployment.
(b) Class 1F defenses can be very secure, but they either suffer from significant run-time overhead or need special auditing or diagnosis facilities which are not commonly available in commercial services. [7] The idea of SigFree is motivated by an important observation that "the nature of communication to and from network services is predominantly or exclusively data and not executable code." In particular, as summarized in , (a) on Windows platforms, most web servers (port 80) accept data only; remote access services (ports 111, 137, 138, 139) accept data only; Microsoft SQL Servers (port 1434) accept data only; workstation services (ports 139 and 445) accept data only. (b) On Linux platforms, most Apache web servers (port 80) accept data only; BIND (port 53) accepts data only; SNMP (port 161) accepts data only; most Mail Transport (port 25) accepts data only; database servers (Oracle, MySQL, PostgreSQL) at ports 1521, 3306 and 5432 accept data only. Since remote exploits are typically executable code, this observation indicates that if we can precisely distinguish (service-requesting) messages that contain code from those that do not contain any code, we can protect most Internet services (which accept data only) from code-injection buffer overflow attacks by blocking the messages that contain code. [5] The merits of SigFree are summarized below; they show that SigFree has taken a main step forward in meeting the four requirements mentioned above.
a. SigFree is signature free, thus it can block new and unknown buffer overflow attacks.
b. Without relying on string matching, SigFree is immunized from most attack-side obfuscation methods.
c. SigFree uses generic code-data separation criteria instead of limited rules. This feature separates SigFree from an independent work that tries to detect code-embedded packets.
d. Transparency.
SigFree is an out-of-the-box solution that requires no server-side changes.
e. SigFree has negligible throughput degradation.
II. ANALYSIS
Software engineering is an extremely difficult task, and of all software-creation-related professions, software architects have quite possibly the most difficult job. Initially, software architects were only responsible for the high-level design of products. More often than not this included protocol selection, third-party component evaluation and selection, and communication medium selection. We make no argument here that these are all valuable and necessary objectives for any architect, but today the job is much more difficult. It requires an intimate knowledge of operating systems, software languages, and their inherent advantages and disadvantages with regard to different platforms. Additionally, software architects face increasing pressure to design flexible software that is impenetrable to wily hackers, a near impossible feat in itself. SQL attacks, authentication brute-forcing techniques, directory traversals, cookie poisoning, cross-site scripting, and mere logic bug attacks, when analyzed via attack packets and system responses, are shockingly similar to normal or non-malicious HTTP requests. [8] Today, over 70 percent of attacks against a company's network come at the "Application layer," not the Network or System layer (The Gartner Group [4]). Buffer overflows are the most feared of vulnerabilities from a software vendor's perspective. They commonly lead to Internet worms, automated tools to assist in exploitation, and intrusion attempts. With the proper knowledge, finding and writing exploits for buffer overflows is not an impossible task, and can lead to quick fame, especially if the vulnerability has high impact and a large user base. [3]
III. EXISTING SYSTEM
Detection of Data Flow Anomalies
There are static and dynamic methods to detect data flow anomalies in the software reliability and testing field.
Static methods are not suitable in our case due to their slow speed; dynamic methods are not suitable either, due to the need for real execution of a program with some inputs. [2] It takes considerable effort to prevent buffer overflows. On the one hand, static methods produce false positive/negative results, which force manual corrections of the source code by the developer. On the other hand, instrumentation methods have a lot of overhead and are not transparent. Stack-based methods do not prevent all attacks. Hardware methods introduce less overhead but need deeper architectural changes. The dynamic methods are too expensive to protect systems against buffer overflow attacks. The methods used in existing systems are:
a. Stack based: adding redundant information / routines to protect the stack or parts of the stack.
b. Instrumentation: replacing standard functions / objects like pointers to equip them with checking tools.
c. Hardware based: the architecture checks for illegal operations and modifications.
d. Static: checking the source code for known vulnerable functions, doing flow analysis to check correct boundaries, and using heuristics. [1]
e. Operating system based: declaring the stack as non-executable to prevent code execution.
IV. STACK GUARD
Stackguard is a simple approach to protect programs against stack smashing and, with little modification, against EBP overflows. This is achieved by a compiler extension that adds so-called canary values before the saved EIP at the function prologue (see Figure 1). Before the return of a protected function is executed, the canary values are checked. An attacker could guess the canary values. This is quite hard if the values are chosen randomly for each guarded function, but it is also possible to choose a canary value made of terminator characters, which makes every string/file copy function stop at the canary value.
So even restoring the canary value would not lead to a successful program flow detour. Another way to thwart Stackguard is to find a pointer that can be overflowed so that it points to the address of the saved EIP, and to use that pointer as the target of a copy function. This way the EIP is overwritten without modifying the canary values. [9] The overhead produced is moderate, up to 125%, and this method is not transparent, meaning that the source code is needed for recompilation. This fact makes Stackguard useless for many legacy software products on the market, because they are not open source.
Figure 1: Stack layout using stack guard
V. LIBSAFE AND LIBVERIFY
These are two methods intended to protect against buffer overflow attacks. The first method is libsafe, a transparent approach set up in a DLL that replaces standard (vulnerable) functions with bounds-checked functions (e.g. strcpy could be replaced by strncpy). The upper limit of the bounds is calculated based on the EBP, so the maximum amount written to a buffer is the size of the stack frame. This method only works if the EBP can be determined, since there exist compiler options that make this impossible; further compatibility issues could arise with legacy software.
VI. INSTRUMENTATION
Safe pointer
The safe pointer structure is designed to detect all pointer and array access errors, meaning that both temporal and spatial errors are detected. The structure consists of five entries:
a. Value (the value of the safe pointer; it may contain any expressible address)
b. Base (the base address of the referent)
c. Size (the size of the referent in bytes)
d. Storage class (either Heap, Local or Global)
e. Capability (a unique capability; predefined capabilities are forever and never, else it could be an enumerated number as long as its value is unique)
Base and size are spatial attributes; capability and storage class are temporal attributes.
The capability is also stored in a capability store when it is issued, and deleted if the storage is freed or when the procedure invocation returns. This ensures that storage that is no longer available (like freed heap-allocated memory) is not accessed anymore. The transformation of a program from unsafe to safe pointers involves pointer conversion (to extend all pointer definitions), check insertion (to instrument the program to detect memory access errors) and operator conversion (to generate and maintain object attributes).
C Range Error Detector (CRED)
CRED is based on the idea of replacing every out-of-bounds (OOB) pointer value with the address of a special OOB object created for that value. To realize this, a data structure called the object table collects the base address and size information of static, heap and stack objects. To determine if an address is in bounds, the checker first locates the referent object by comparing the in-bounds pointer with the base and size information stored in the object table. Then, it checks if the new address falls within the extent of the referent object. If an address is out-of-bounds, an OOB object is created in the heap that contains the OOB address value and the referent object. If the OOB value is used as an address, it is replaced by the actual OOB address. The OOB objects are entered into an out-of-bounds object hash table, so it is easy to check whether a pointer points to an OOB object by consulting the hash table. The hash table is only consulted if the checker is not able to find the referent object in the object table, or cannot identify the object as unchecked. Arithmetic and comparison operations on OOB objects are legal, since the referent object and its value are retrieved from the OOB object. But if a pointer is dereferenced, it is checked whether the object is in the object table or is unchecked; otherwise the operation is illegal.
Buffer overflows are not prevented, but the attack goal is thwarted: because copy functions need to dereference the OOB value, the program is halted before more damage happens. This fact could still be used for a DoS attack, since the program (service) is halted and needs to be restarted, or even worse, re-administrated. The method was evaluated by comparing non-instrumented code, CRED-instrumented code, and code instrumented with the approach on which CRED is based. Since recompilation is needed, this method is not transparent, but the instrumented code is fully compatible with non-instrumented code. The overhead of this approach ranges from 1% to 130%, but it is not shown how certain kinds of buffer overflow attacks, like signed/unsigned and off-by-one overflows, are handled. The same arguments as for the safe pointers in the previous section can be applied here.
VII. HARDWARE BASED
This approach deals with an architectural change implementing a Secure Return Address Stack (SRAS), which is a cyclic, finite LIFO structure that stores return addresses. At a return call, the last SRAS entry is compared with the return address from the stack; if the comparison yields that the return address was altered, the processor can terminate the process and inform the operating system, or it continues the execution based on the SRAS return address. Since the SRAS is finite and cyclic, n/2 of the SRAS content has to be swapped on an under- or overflow. There are two methods:
a. OS-managed SRAS swapping. The operating system executes code that transfers contents to or from memory which is mapped to physical pages that can only be accessed by the kernel.
b. Processor-managed SRAS swapping. The processor maintains two pointers to two physical pages that contain spilled SRAS addresses, and a counter that indicates the space left in the pages.
If the pages over- or underflow, the OS is invoked to de-/allocate pages; otherwise the processor can directly transfer contents to and from the pages without invoking the OS. The problem is that the SRAS is not compatible with non-LIFO routines, such as C++ exception handling. This makes it necessary to change the non-LIFO routines to LIFO routines, or it must be possible to turn off the SRAS protection.
VIII. STATIC
This approach deals with the idea of annotating the source code with comments that LCLint can interpret, generating a log file which can be used to identify possible vulnerabilities. When a source code is analyzed, LCLint evaluates the conditions that must hold for safe execution of the finally compiled program. These conditions are written to the log file, so the programmer can check whether these conditions are true for every case that could happen during execution. Then the programmer can write control comments into the source to tell LCLint which conditions are fulfilled, or whether LCLint should ignore parts of the source code. This way, errors can be found before compilation, but the method has certain shortcomings, since it is not possible to efficiently determine invariants or to take advantage of idioms typically used by C programmers. Since this method is not exact, the rate of false positives grows. Further, LCLint is a lightweight checker, meaning that the program flow is also checked using heuristics, since determining all possible program states might need exponential time. The false positives can be commented out, but this means more work for the developers of the software, and since heuristics are used, false negatives are produced as well. All these facts, and the fact that this method is not transparent, make it suitable only for new or small projects. The last aspect we want to point out is that this is the only method so far that produces no overhead, since the compilers skip comments.
IX.
OPERATING SYSTEM BASED
Data Execution Prevention (DEP)
With the release of Service Pack 2 for Windows XP and Service Pack 1 for Windows 2003, a new protection called DEP was introduced to machines using these operating systems. Microsoft explains how DEP works: in cooperation with Intel (Execute Disable bit feature) and AMD (no-execute page-protection processor feature), a new CPU flag was implemented, called the NX flag. It marks all memory locations in a process as non-executable unless the location explicitly contains executable code. [10] If the machine running with DEP support has no NX flag, DEP can be enforced by the operating system (software enforcement). This protection prevents execution of injected code if the code was injected into a non-executable area. The DEP can be bypassed. Further, this method requires that even valid, working processes are (sometimes) recompiled. Another shortcoming is that this method does not prevent the buffer overflow itself, so attacks like the variable attack or BSS/heap overflows are not prevented.
Solar Designer
The Solar Designer patch does nearly the same as DEP, but it makes the stack non-executable. Since Linux needs an executable stack for signal handling, this restricts the normal behavior of Linux. If the attacker is able to locate code that would act like a shellcode and execute this code instead of injected code, the patch can be bypassed. To conclude, buffer overflows are not prevented, only the code execution. Attacks like the variable attack or BSS/heap overflows are still possible, and heap overflows can also be used to execute arbitrary code.
X. PROPOSED SYSTEM
We propose SigFree, a real-time, signature-free, out-of-the-box blocker that can filter code-injection buffer overflow attack messages, one of the most serious cyber security threats, to various Internet services.
SigFree does not require any signatures, thus it can block new, unknown attacks.
Figure 2: Signature Free prototype
We have implemented a SigFree prototype as a proxy to protect web servers. Our empirical study shows that there exist clean-cut "boundaries" between code-embedded payloads and data payloads when our code-data separation criteria are applied. We have identified the "boundaries" (or thresholds) and been able to detect/block all 50 attack packets generated by the Metasploit framework, all 200 polymorphic shellcode packets generated by the two well-known polymorphic shellcode engines ADMmutate and CLET, and the worms Slammer, CodeRed and a CodeRed variation, when they were well mixed with various types of data packets. Also, our experimental results show that the throughput degradation caused by SigFree is negligible.
XI. BUFFER OVERFLOW VARIANTS
Today buffer overflow attacks are known and well understood. In general, every buffer that can be accessed by an attacker might be compromised if vulnerable functions are used. Such variables are located on the stack and heap. The attacks are partitioned as follows:
a. Stack smashing: used to execute injected code.
b. Variable attack: used to modify program state.
c. Heap overflow: used to execute arbitrary code or to modify variables.
d. Off-by-one: a classic programmer's error; only one byte is overwritten. [11]
XII. INPUT DESIGN
The input design is the link between the information system and the user. It comprises the developing specification and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing, whether achieved by instructing the computer to read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple.
The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
a. What data should be given as input?
b. How should the data be arranged or coded?
c. The dialog to guide the operating personnel in providing input.
d. Methods for preparing input validations and steps to follow when errors occur.
XIII. OBJECTIVES
Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities. When data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is not left confused at any moment. Thus the objective of input design is to create an input layout that is easy to follow.
XIV. OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, and also the hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system's relationship with the user and helps decision-making.
Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output needed to meet the requirements, select methods for presenting the information, and create the documents, reports or other formats that contain the information produced by the system. The output of an information system should accomplish one or more of the following objectives:
a. Convey information about past activities, current status or projections of the future.
b. Signal important events, opportunities, problems or warnings.
c. Trigger an action.
d. Confirm an action.

XV. SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements of the system is essential. Three key considerations are involved in the feasibility analysis:
a. ECONOMICAL FEASIBILITY
b. TECHNICAL FEASIBILITY
c. SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. Only the customized products had to be purchased. [12]

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not have a high demand on the available technical resources.
Otherwise, high demands would be placed on the available technical resources and, in turn, on the client. The developed system must therefore have modest requirements, since only minimal or no changes are required for implementing this system. [13]

SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. It includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The user's level of confidence must be raised so that he is also able to make constructive criticism, which is welcomed, as he is the final user of the system. [14]

In this phase, we establish the software requirement specifications for the research work. We arrange all the components required to develop the project in this phase itself, so that we have a clear idea of the requirements before designing the project. We then proceed to the design phase, followed by the implementation phase of the project.

XVI. DESIGN ARCHITECTURE

Figure 3: Architecture of SigFree

DATA FLOW DIAGRAM / USE CASE DIAGRAM
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing steps carried out on these data, and the output data generated by the system. A data flow diagram shows the flow step by step.
a. A use case diagram is a description of a set of sequences of actions, including variants, that a system performs to yield an observable result of value to an actor.
b. A class diagram shows a set of classes, interfaces and collaborations and their relationships.
c. An activity diagram is a flowchart showing the flow of control from activity to activity.
d. A component diagram shows the organization of, and dependencies among, a set of components.
e.
An interaction diagram emphasizes the time ordering of messages.

UML DIAGRAMS

Figure 4: Data Flow Diagram
Figure 5: Use Case Diagram (actors Admin and User; actions: register, upload files, search, request URL, get response, download files)
Figure 6: Class Diagram (classes Send request, Upload files and Get response, with attributes such as urlid, requesturl, requestdate, fileid, filename, filetype and operations such as decoder(), ASCII code(), Distiller(), analyse(), upload(), checkresponse())
Figure 7: Sequence Diagram (Server, User and Admin: upload files, search, send request; retrieve all files if the request contains pure data, retrieve only non-executable files if it contains executable files)
Figure 8: Activity Diagram (user search and admin login; encode and decode the URL, convert to ASCII code, distill and analyse the URL; block executable files, retrieve non-executable files)
Figure 9: Component Diagram (User, Server and Admin connected by request/response and file upload)

In this way we can design the layout of the project to be implemented during the construction phase, so that we have a clear picture of the project before it is coded. Any necessary enhancements can be made during this phase, after which coding can begin and the program can be compiled and executed, keeping in view the needs and requirements proposed at the beginning of the research work.

The Testing Process: Overview
The testing process for web engineering begins with tests that exercise content and interface functionality that is immediately visible to end-users.
As testing proceeds, aspects of the design architecture and navigation are exercised. The user may or may not be cognizant of these WebApp elements. Finally, the focus shifts to tests that exercise technological capabilities that are not always apparent to end-users: WebApp infrastructure and installation/implementation issues. The tests applied are:
a. Content Testing
b. Interface Testing
c. Navigation Testing
d. Component Testing
e. Configuration Testing
f. Performance Testing
g. Security Testing

Figure 10: Testing Flows

XVII. RESULTS & CONCLUSION
We proposed SigFree, a real-time, signature-free, out-of-the-box blocker that can filter code-injection buffer overflow attack messages, one of the most serious cyber security threats to various Internet services. SigFree does not require any signatures, and thus it can block new, unknown attacks. SigFree is immune to most attack-side code obfuscation methods, is suitable for economical Internet-wide deployment with little maintenance cost and negligible throughput degradation, and can also handle encrypted SSL messages. A combination of developer education in defensive programming techniques and software reviews is the best initial approach to improving the security of custom software. Secure programming and scripting languages are the only true solution in the fight against software hackers and attackers.

REFERENCES
[1] Xinran Wang, Chi-Chun Pan, Peng Liu and Sencun Zhu, "SigFree: A Signature-Free Buffer Overflow Attack Blocker", pp. 1-3, 6-18, 47-68.
[2] "Buffer Overrun in JPEG Processing (GDI+) Could Allow Code Execution (833987)", http://www.microsoft.com/technet/security/bulletin/MS04-028.mspx, pp. 47-68.
[3] Michael Cross, Web Application Vulnerabilities: Detect, Exploit and Prevent, pp. 47-68.
[4] Jiang, Buffer Overflow Vulnerability Diagnosis for Commodity Software, pp. 6-18.
[5] James C. Foster, Buffer Overflow Attacks, pp. 47-49.
[6] Intel IA-32 Architecture Software Developer's Manual, Volume 1: Basic Architecture, p. 9.
[7] Metasploit Project, http://www.metasploit.com, pp. 47-68.
[8] Security Advisory: Acrobat and Adobe Reader Plug-in Buffer Overflow, http://www.adobe.com/support/techdocs/321644.html, pp. 13-15.
[9] Stunnel: Universal SSL Wrapper, http://www.stunnel.org.
[10] Symantec Security Response: Backdoor.Hesive, http://securityresponse.symantec.com/avcenter/venc/data/backdoor.hesive.html, pp. 6-18.
[11] Winamp3 Buffer Overflow, http://www.securityspace.com/smysecure/catid.html?id=11530, pp. 6-18.
[12] PaX documentation, http://pax.grsecurity.net/docs/pax.txt, November 2003; against stack smashing attacks, in Proc. 2000 USENIX Technical Conference (June 2000), p. 14.
[13] Professional ASP.NET, Wrox Publications, pp. 25-46.
[14] William Perry, Effective Methods for Software Testing, pp. 69-83.
American Journal of Engineering Research (AJER), e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-414-419, www.ajer.org. Research Paper, Open Access.

Reflections on the Usage of Air-Conditioning Systems in Nigeria
A.I. Obanor and H.O. Egware
Department of Mechanical Engineering, University of Benin, Benin City, Nigeria.

Abstract: - Air-conditioning systems are usually designed to meet the indoor environmental requirements of the occupants, process or products in a conditioned space. Three types of air-conditioning systems are mainly used in Nigeria, namely room or window air-conditioners, split air-conditioning systems and central air-conditioning systems. The increasing utilization of split air-conditioning systems has led to a decline in the usage of room air-conditioners and central air-conditioning systems in Nigerian buildings. Split air-conditioning systems have even been wrongly applied to condition buildings with large floor spaces. For such spaces, the specified indoor environmental conditions are not met when split air-conditioning systems are used, because their air distribution performance index is low. This paper strongly advocates the usage of central air-conditioning systems for these types of buildings and recommends the adoption of thorough procedures for the design, procurement, installation, commissioning, operation and maintenance of innovative and energy-efficient air-conditioning systems in buildings. For this to be actualized, the paper also recommends the generation of air-conditioning system design data to address the dearth of such data in the country.

Keywords: - Air-conditioning systems, design, installation, operation, maintenance, reflection.
I. INTRODUCTION
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has defined air-conditioning as the process of treating air to control simultaneously its temperature, humidity, cleanliness, quality and distribution to meet the requirements of the occupants, process or product in the conditioned space [1]. Depending on the application, the sound level and the pressure differential between the conditioned space and adjacent spaces are also controlled within prescribed limits. An air-conditioning system consists of components and equipment arranged in sequential order to heat or cool, humidify or dehumidify, clean and purify, attenuate objectionable equipment noise, convey the conditioned outdoor air and return air to the conditioned space, and control and maintain an indoor or enclosed environment at optimum energy use [2]. The types of building served by air-conditioning systems include institutional, commercial, residential and manufacturing buildings. In institutional, commercial and residential buildings, air-conditioning systems are designed principally to meet the health and comfort needs of the occupants. In manufacturing buildings, air-conditioning systems are provided to meet the requirements of product processing or storage, or the health and comfort of workers as well as processing in enclosed spaces. Historically, two types of air-conditioning system have been observed in use in Nigeria [3]: the room or window air-conditioner and the central air-conditioning system. The room air-conditioner is a factory-made encased assembly designed to deliver conditioned air to an enclosed space without the use of ducts; it is usually installed through a wall enclosing the conditioned space. The central air-conditioning system is a system in which the air is treated in a central plant and carried to and from the conditioned space(s) by one or more fans and a system of ducts.
The advent of split air-conditioning systems has brought a new dimension to the usage of air-conditioning systems in Nigeria. A split air-conditioning system consists of two main parts: the outdoor unit, which houses the compressor, condenser and associated fan as well as the expansion valve or capillary tubing, and the indoor unit, which contains the cooling coil, blower and an air filter. A recent study has shown a preference by many building owners and clients in Nigeria for the utilization of split air-conditioning systems to control the indoor environment in their buildings [3]. This has led to a considerable reduction in the usage of room air-conditioners and central air-conditioning systems in buildings. The use of split air-conditioning systems to control the indoor environment in rooms or spaces with small floor areas can adequately satisfy the comfort requirements of the occupants of such buildings. However, this requirement cannot be satisfactorily met when such systems are used to condition buildings with very large floor spaces such as large church halls, auditoriums, large boardrooms, banquet halls, theatres, conference centres, operating suites of hospitals, clean spaces, etc. This is because the supply air leaving the indoor units of split air-conditioning systems cannot be properly distributed within the entire conditioned space, resulting in localized cooling of the spaces close to the indoor units of such systems. On a very hot day, complaints are often heard from the occupants or owners of these buildings about the poor performance of the installed split air-conditioning systems.
The authors have observed the increasing utilization of split air-conditioning systems to condition the indoor environment of a good number of commercial and institutional buildings with large floor spaces, as well as churches with big auditoriums. Preliminary inquiries made by the authors concerning the selection of such systems for installation by air-conditioning contractors reveal that no proper design of the indoor environmental conditioning system was carried out. The solution to this problem, and therefore the provision of an acceptable indoor environment, is the proper design, specification, procurement, installation, commissioning, operation and maintenance of an efficient central air-conditioning system for these types of buildings. This paper discusses the various practices and measures to be undertaken in the provision of a healthy and comfortable indoor environment with acceptable indoor air quality in buildings, starting from the design of an appropriate air-conditioning system through to its proper operation and maintenance. The adoption of these measures by all professionals associated with the building industry will enhance engineering practice relating to the provision of acceptable indoor environments in Nigerian buildings.

II. DESIGN OF APPROPRIATE AIR-CONDITIONING SYSTEMS FOR BUILDINGS IN NIGERIA
A building is a structure that has a roof and walls and is primarily designed to provide shelter and ensure comfort for its occupants. Heerwagen [4] has outlined the basic requirements of a building: controlling its internal environment well enough to satisfy the occupants' physical and physiological needs, supporting the psychological state and social activities of each occupant, and resisting the natural forces that act against it (e.g. weather and climate, gravity and seismic loads). These requirements should be met at a reasonable cost and with efficient use of resources.
The design of an air-conditioning system to meet the requirements of an indoor environment begins with a study of the architectural drawings of the building with a view to determining the following data [5]:
i) The functional use of the building, namely residential, commercial, industrial, institutional or other facility.
ii) The geographical site location and means of accessing the building.
iii) The building area, height, number of storeys, internal transportation, materials used for the walls and roof, and the type and amount of fenestration.
iv) The number and distribution of the building's occupants and their occupancy patterns.
Other data required include the following:
v) The weather and climatic design data for the geographical location of the building (outdoor dry- and wet-bulb temperatures, solar radiation data, wind velocity and direction data, etc.).
vi) Indoor environmental data (indoor dry- and wet-bulb temperatures (or relative humidity)), air quality and ventilation requirements.
vii) Data about internal loads such as lights and equipment, and special conditions concerning noise and vibration.
In Nigeria, efforts have been made by various researchers to determine the climatic design data and thermal properties of building materials required to estimate the space cooling load [6-10]. These researchers must be commended for carrying out studies relevant to the determination of data used in air-conditioning system design. However, the fact that these studies did not cover the entire range of materials used for constructing buildings in Nigeria, together with the non-availability of recent climatic design data, has led local building services engineers to use air-conditioning system design data produced by foreign organizations such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), Inc. [11] and the Chartered Institution of Building Services Engineers (CIBSE) [12].
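Once these data are assembled, the space sensible cooling load is built up from envelope conduction, solar gains and internal gains. As a much-simplified sketch (the U-values, areas and gains below are made-up illustrative numbers, not design data, and real methods such as ASHRAE's CLTD or RTS procedures use time-dependent load temperature differences rather than a fixed dT):

```python
def wall_conduction_gain(U, area_m2, dT):
    """Steady-state conduction gain through one envelope element, in W:
    Q = U * A * (T_out - T_in)."""
    return U * area_m2 * dT

def space_sensible_load(elements, solar_w, internal_w):
    """Sum the conduction gains of all envelope elements with the
    solar and internal (lights, people, equipment) gains, in W."""
    return sum(wall_conduction_gain(U, A, dT) for U, A, dT in elements) \
        + solar_w + internal_w

# Hypothetical single-zone example (all values assumed for illustration):
elements = [
    (2.5, 40.0, 8.0),   # wall: U = 2.5 W/(m^2 K), A = 40 m^2, dT = 8 K
    (0.8, 60.0, 15.0),  # roof: U = 0.8 W/(m^2 K), A = 60 m^2, dT = 15 K
]
load = space_sensible_load(elements, solar_w=1200.0, internal_w=900.0)
# 800 + 720 + 1200 + 900 = 3620 W sensible load
```

The point of the sketch is only to show why wrong thermal-property or climatic data (wrong U or dT) propagate directly into the estimated load and hence into equipment sizing.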
The usage of such data, more often than not, results in the incorrect estimation of space cooling loads, and this ultimately affects the design of air-conditioning systems for buildings. In order to produce comprehensive air-conditioning system design data for Nigeria, the relevant government ministries, departments and agencies, research institutes, professional organizations, universities, etc. should execute collaborative programmes to determine the thermal properties of building materials and detailed climatic design data. The availability of these data and their proper usage by building services engineers will bring about the design of appropriate and energy-efficient air-conditioning systems in Nigeria. With the correct estimation of the building's sensible and latent cooling loads, the next step in the design process is the selection of an air-conditioning system that can compensate for the loads and produce the desired indoor environment in a sustainable way with minimum energy consumption. The selection procedure entails a consideration of various competing air-conditioning systems. All such systems must be capable of maintaining the indoor environmental condition required in each area; the ability to provide adequate thermal zoning is also mandatory [5]. For each system considered, the following items should be evaluated [5]:
(i) the relative space requirements for equipment, ducts and piping;
(ii) the fuel and/or electrical use and thermal storage requirements;
(iii) the initial and operating costs;
(iv) the acoustical requirements;
(v) the compatibility with the building plan and structural system; and
(vi) the effect on indoor air quality, illumination, noise and vibration.
The results of this rigorous study will lead to the selection of an appropriate air-conditioning system.
The selected air-conditioning system must maintain the indoor environmental condition by transferring the sensible and latent cooling loads from the building and rejecting them to a sink. A detailed psychrometric analysis is used to determine the sizes or capacities of the components of the air-conditioning system. The space air-conditioning design process requires the determination of the quantity of air to be supplied and the supply air condition necessary to remove the sensible and latent loads from the space. For buildings with large floor areas, adequately sized fans must be selected and a ducting system designed that will convey the supply air to the conditioned spaces and ensure proper air circulation within them. The ducting system must incorporate appropriate terminals (for regulating the quantity of air entering the space or conditioning it), diffusers (for admitting air to the space) and grilles (for gathering the air from the space). The diffusers, registers or grilles selected must exhibit the correct throw, drop, spread, noise level and pressure drop performance to ensure proper circulation of supply air within the conditioned space, thereby producing an indoor environment comfortable for the occupants. Air-conditioning systems are usually sized to satisfy a set of design conditions selected to generate a near-maximum load. These design conditions occur for only a few hours of the year, and the air-conditioning equipment therefore operates most of the time at less than rated capacity. Thus a control system is necessary, and its function is to adjust the equipment capacity to match the load. A properly designed, operated and maintained automatic control system is of utmost importance and will provide economy in the operation of the air-conditioning system [13].
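The supply air quantity mentioned above follows directly from the sensible-load side of the psychrometric analysis: Q_s = rho * V * c_p * (T_room - T_supply). A minimal sketch (constant air properties assumed; the latent load and humidity balance, which a full psychrometric analysis also covers, are omitted):

```python
RHO_AIR = 1.2     # kg/m^3, approximate density of supply air (assumed)
CP_AIR = 1006.0   # J/(kg K), specific heat of dry air

def supply_air_flow(q_sensible_w, t_room_c, t_supply_c):
    """Volumetric supply air flow (m^3/s) needed to absorb a sensible
    load: Q_s = rho * V * c_p * (T_room - T_supply)."""
    dT = t_room_c - t_supply_c
    if dT <= 0:
        raise ValueError("supply air must be cooler than the room")
    return q_sensible_w / (RHO_AIR * CP_AIR * dT)

# e.g. a 12 kW sensible load, 24 C room, 14 C supply air:
flow = supply_air_flow(12_000.0, 24.0, 14.0)   # about 0.99 m^3/s
```

This flow, in turn, sizes the fans and ducts, which is why the duct network and its diffusers must be designed around it rather than chosen off the shelf.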
The design process is concluded by providing a detailed specification of all components of the air handling and associated control systems, cooling equipment, heating equipment, heat transfer equipment, pumps, valves, piping and ducting systems. The design documents include drawings and specifications. These documents are the means by which the designer conveys the design requirements to the contractor, and they must define the work to be done in a clear, complete and unambiguous manner.

III. PROCUREMENT, INSTALLATION AND COMMISSIONING OF AIR-CONDITIONING SYSTEMS
With the preparation of the design documents complete, the next step is the selection of a competent and responsible contractor to be awarded the contract to execute the installation of the air-conditioning system. The contractor is selected after a competitive bidding process for the project and subsequent evaluation of the bids, and should be one with a good track record in the execution of similar projects. Good procurement practice requires that the selected contractor purchase all components of the air-conditioning system as specified in the design documents. The air-conditioning system design team of engineers should verify that the purchased items conform to the specification and are of the right quantity and quality. The contractor then proceeds with the installation of the procured components, and this activity should be monitored, supervised and inspected by the team of engineers who designed the air-conditioning system. After the installation of the air-conditioning system, the commissioning exercise is the next in the series of project execution activities.
The commissioning of an installed air-conditioning system is the implementation of a quality-oriented process for achieving, verifying and documenting that the performance of the system is in accordance with the design intent and the building owner's operational needs [14]. The commissioning process includes all the elements of Testing, Adjusting and Balancing (TAB) as well as the training of Operations and Maintenance (O&M) personnel. The process should preferably be carried out by air-conditioning system commissioning specialists consisting of competent and experienced engineers and technicians working in conjunction with the design team and the building owner. The TAB process consists of all operational tests and measurements carried out on the installed air-conditioning system to show compliance with the design requirements. The process includes adjustments to fluid (air, water, steam) flow rates and temperatures to satisfy those requirements. Load tests should also be performed on cooling and heating equipment.

IV. PROPER OPERATION AND MAINTENANCE OF AIR-CONDITIONING SYSTEMS
The design team of an air-conditioning system can greatly facilitate its proper operation and maintenance by doing a good job of turning the system over to those who will operate it. A proper commissioning process is the first step towards achieving this.

4.1 Designing for Operation and Maintenance
The air-conditioning system designers should observe the following basic criteria [13]:
i) Adequate space and accessibility should be provided for equipment. This includes ease of access, space for maintenance and repair, and access for removal and replacement of large items of equipment.
ii) Well-written operational and maintenance procedures for the air-conditioning system, which are simple, straightforward and easy for operations and maintenance personnel to understand, should be prepared.
The schematic flow and control diagrams from the contract drawings constitute reference materials for these procedures. A collection of component manufacturers' descriptive and maintenance bulletins is useful as a reference but is not a procedure.
iii) The contractor should provide comprehensive training for all operations and maintenance personnel covering all items and procedures. In this connection, the air-conditioning system designer should request the building owner or manager to embark on continuous training or retraining of all new and old operating and maintenance personnel. It is important to stress here that inadequate maintenance will result in higher operating costs.

4.2 Maintenance Management
Section 4.1 stressed the importance of well-written operation and maintenance documentation as well as the continuous training and retraining of operating and maintenance personnel. Maintenance management entails the planning, implementation and review of maintenance activities. ASHRAE [13] has stated three maintenance strategies: run-to-failure, preventive maintenance and condition-based maintenance. In the run-to-failure strategy, minimal resources are invested in maintenance until equipment or systems break down or fail. The preventive maintenance strategy schedules the maintenance of equipment, either by run time or by the calendar. Condition-based maintenance relies on equipment monitoring to establish the current condition of the equipment, and on condition and performance indices to optimize repair intervals. The success of maintenance management depends on dedicated, trained and accountable personnel, clearly defined goals and objectives, measurable benefits, management support, and constant examination and re-examination. To be effective and efficient, operations and maintenance programmes require staff with the right combination of technical and managerial skills.
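The preventive strategy's "by run time or by the calendar" triggering can be expressed as a small scheduling rule. The sketch below is illustrative only: the task names and intervals are assumptions for the example, not ASHRAE recommendations, and a real computerized maintenance management system would also log completions and generate work orders.

```python
from dataclasses import dataclass

@dataclass
class MaintenanceItem:
    """One preventive-maintenance task, triggered by whichever limit
    (calendar days or accumulated run hours) is reached first."""
    name: str
    interval_days: int
    interval_run_hours: float

    def is_due(self, days_since_service: int, run_hours_since: float) -> bool:
        return (days_since_service >= self.interval_days
                or run_hours_since >= self.interval_run_hours)

# Illustrative schedule (intervals assumed for this sketch):
schedule = [
    MaintenanceItem("clean AHU filters", 30, 500.0),
    MaintenanceItem("clean cooling coils", 365, 4000.0),
    MaintenanceItem("inspect and clean ducting", 3 * 365, 20000.0),
]

# 40 days and 300 run hours after the last service, only the filter
# task has come due:
due = [item.name for item in schedule if item.is_due(40, 300.0)]
```

The condition-based strategy would replace `is_due` with a check against measured condition indices (e.g. filter pressure drop) instead of fixed intervals.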
Technical skills range from the hands-on, correct application of methods and procedures to the analytical problem-solving skills of the plant engineer. Managerial skills include overseeing the stewardship of the facility on a day-to-day basis and in life-cycle terms. The operations and maintenance manual of the air-conditioning system should detail the maintenance practices to be undertaken by the personnel assigned to perform such duties. Some maintenance tips are worth noting here. The filters of the air-handling unit (AHU) must be cleaned monthly and replaced if found to be defective; dirty filters restrict the flow of air, thereby affecting the performance of the air-conditioning system. The cooling coils of the AHU should be cleaned at least once a year, since dirty coils are inefficient. The fan blades of the AHU should be cleaned regularly, because clean blades move more air. All condensate lines must also be cleaned, and it is important to ensure that the shut-down switches of the air-conditioning system are working. The tubes and cooling fins of air-cooled condensers should be cleaned at least once a year, since accumulated dirt around the primary and secondary heat transfer surfaces will impede their ability to dissipate heat. The ducting system must be accessible for inspection and the necessary periodic cleaning. Dirty ducts are first and foremost unhealthy, and ducts must be cleaned every 3-5 years. All ducts must be insulated, sealed and checked for air leakage.

V. USAGE OF CENTRAL AIR-CONDITIONING SYSTEMS IN NIGERIA
Sanni [3] conducted a preliminary study to examine the functional state of central air-conditioning systems installed in various buildings that had worked for a minimum of ten years. The findings of the study can be categorized as follows:
Category A: The central air-conditioning systems are functioning in an efficient and effective manner.
Category B: The central air-conditioning systems are operating in an epileptic manner.
Category C: The central air-conditioning systems are no longer working and have since been replaced by split air-conditioning units or room (window) units.
A pertinent question to ask here is: what is responsible for this state of affairs? The causative factors are as follows:
i) Poor maintenance culture and an inadequate maintenance strategy employed by the building owners or the management staff of establishments utilizing the buildings.
ii) Epileptic power supply by the Power Holding Company of Nigeria (PHCN) PLC.
iii) Incompetent technicians or maintenance personnel and a dwindling number of competent technicians, which can mainly be attributed to the neglect of technical and vocational education by the federal and state governments in Nigeria.
iv) The high cost of maintenance and of the purchase of spare parts.
However, the management staff of establishments, or building owners, whose central air-conditioning systems were classified into Category A utilized a preventive maintenance strategy to ensure that the desired environmental conditions were maintained in their buildings, while those in Categories B and C fell far short of this. A good maintenance policy must adopt the following practices:
a) It must be well funded.
b) All operating and maintenance personnel must possess the relevant skills and be properly motivated.
c) Sufficient personnel must be employed to carry out operations and maintenance duties. Where this is not possible, competent contractors can be hired to carry out maintenance operations on the air-conditioning system.
d) All spare parts required to keep the air-conditioning system functioning must be properly stocked.
The practice of using split air-conditioning systems in buildings whose spaces have large floor areas is not recommended.
This is because the conditioned supply air emanating from the indoor units of a split air-conditioning system is not distributed properly within such spaces. The design temperature and relative humidity are not maintained within the space because of the poor throw, spread and drop performance of the supply air from the indoor units serving it. For such spaces, a central air-conditioning system with a properly designed ducting network employing diffusers, registers or grilles produces the requisite air distribution performance of the supply air and ensures the thermal comfort of the occupants. As a rule of thumb, the authors recommend that if the floor area of a building space exceeds 150 m², a split air-conditioning system should not be used to condition the space. Therefore, split air-conditioning systems should not be used to condition the indoor environment of large church halls, auditoriums, large boardrooms, banquet halls, theatres, conference centres, operating suites of hospitals, clean spaces, etc.

VI. CONCLUSION AND RECOMMENDATION
This paper has focused on the use of air-conditioning systems to produce the desired indoor environment in buildings in Nigeria. It noted that the types of air-conditioning system utilized in these buildings are room or window air-conditioners, split air-conditioning systems and central air-conditioning systems. Split air-conditioning systems are increasingly being applied to condition the air in various buildings, including residential buildings, commercial and public buildings, hotels, motels and hostels, educational and health care facilities, church halls and auditoriums, theatres, conference centres, etc. This practice has led to a decline in the usage of room air-conditioners and central air-conditioning systems in Nigerian buildings. The increasing utilization of split air-conditioning systems has also led to their being used in some types of buildings not suited to their application, namely those with spaces having floor areas exceeding 150 m².
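The authors' 150 m² rule of thumb reduces to a one-line check. The function below simply encodes that recommendation for use in, say, a preliminary survey script; the function name and return labels are assumptions for this sketch.

```python
def recommended_system(floor_area_m2: float) -> str:
    """Encode the authors' rule of thumb: spaces larger than 150 m^2
    should be served by a central (ducted) system, because split
    indoor units cannot distribute supply air across such spaces."""
    return "central" if floor_area_m2 > 150.0 else "split or central"

# e.g. a typical office versus an auditorium:
office = recommended_system(45.0)      # "split or central"
auditorium = recommended_system(800.0) # "central"
```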
In order to do the right thing and utilize central air-conditioning systems to produce the desired indoor environmental conditions in these types of buildings, this paper recommends the adoption of good practices that entail the proper design, procurement, installation, commissioning, operation and maintenance of such systems. To facilitate the design of innovative and energy-efficient air-conditioning systems, the authors also recommend that a comprehensive research programme be undertaken to determine the thermal properties of local building materials and the relevant climatic data.
American Journal of Engineering Research (AJER), 2013. e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-98-109. www.ajer.org. Research Paper, Open Access.

Design, Analysis and Implementation of a Search Reporter System (SRS) Connected to a Search Engine Such as Google Using a Phonetic Algorithm

Md. Palash Uddin1, Mst. Deloara Khushi2, Fahmida Akter3, Md. Fazle Rabbi4
1&4 Computer Science and Information Technology, Hajee Mohammad Danesh Science and Technology University (HSTU), Dinajpur, Bangladesh.
2 Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka, Bangladesh.
3 Computer Science and Engineering, East Delta University, Chittagong, Bangladesh.

Abstract: - A web search engine is an information retrieval system on the World Wide Web that returns a list of both relevant and less relevant URLs for the given search keywords, so that finding the required URL takes extra time and some analysis. The web creates new challenges for information retrieval because the amount of information on the web is growing rapidly. The target of the SRS connected to a search engine such as Google is therefore to obtain the required information more easily, effectively and efficiently. For this, the admin inserts the search keywords into the SRS; the SRS, connected to a search engine, then retrieves all the titles, URLs and descriptions, and checks and counts the keywords in the web pages at those URLs. The retrieved information and an id associated with each keyword are stored in the database. Now, if a user searches for a keyword, the SRS loads the search results from the database and ranks the pages by their number of matching keywords. More significantly, a phonetic algorithm, namely the Metaphone algorithm, is used in the SRS to eliminate the problem of spelling errors in the keywords given by users.
The admin can update existing keywords as the information associated with them changes, and can also add new keywords that users could not find. The SRS, with the necessary analysis, has been implemented using appropriate, currently in-demand tools and technologies such as the Metaphone algorithm, HTML, PHP, JavaScript, CSS, MySQL, the Apache server, etc.
Keywords: - Information Retrieval, Metaphone Algorithm, Phonetic Algorithm, Search Engine, Search Reporter, Spelling Errors

I. INTRODUCTION
The Search Reporter System (SRS), working with a search engine, ensures that when we search for any keyword it provides search results exactly related to that keyword. The SRS is also very user-friendly and simple: a user can easily search and obtain informative results from the system. The SRS is designed in such a way that a user never finds the system tedious. If a user searches for a keyword, the search reporter loads the search results from a search engine such as Google and then ranks the pages by their information content; that is, the most informative page gets rank 1, the second most informative page rank 2, and so on. In this way a user gets the main information which he/she is actually looking for. There are three steps in this system: search for any keyword in the system, get the ranked results drawn from Google, and open the links to obtain the best information. It provides an outstanding web-based search interface. The salient features of the SRS are given below:
• It saves a large amount of time.
• It offers informative results without the user having to analyze all the results obtained from Google.
• It helps users who do not really know how to search.

1.1 History of Search Engines and Development of the SRS
In the summer of 1993, no search engine existed for the web, though numerous specialized catalogues were maintained by hand.
Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993 [1]. The web's second search engine, Aliweb, appeared in November 1993. One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Google adopted the idea of selling search terms in 1998, from a small search engine company named goto.com. Around 2000, Google's search engine rose to prominence [1]. The company achieved better results for many searches with an innovation called PageRank. By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! used Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions. Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology. With the passage of time, the use of search engines has kept increasing. Given this increased use of search engines for finding information, a system has been developed that helps users search for information. When a person wants to search for anything, he simply places his words in a search engine. The search engine then returns relevant information according to his/her words, based on many criteria. But the user has to extract the necessary information after much analysis, as search engines cannot give the exact information directly. This makes searching for information very time-consuming. We therefore set out to develop a search reporter system so that users can search for and obtain information directly, without this loss of time.
Search engines use many criteria, such as SEO (Search Engine Optimization), when searching for and returning information, but we primarily consider only the words that are given for searching. In developing the SRS, the admin first places a keyword in the field defined for him. The system, which is connected to a search engine such as Google, then retrieves all the titles, URLs and descriptions, and checks and counts the keyword in the web pages at those URLs. The titles, URLs, descriptions, number of matches of the keyword, and an id associated with the keyword are then stored in the database. Now, if a user searches for a keyword, the search reporter loads the search results from the database and ranks the pages by their number of matching keywords. As search engines update their information day by day, the admin needs to update the database of the SRS day by day, so that users get up-to-date information from the SRS.
1.2 Present Search Engines
Generally, a search engine is software that is designed to search for information on the World Wide Web [1]. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The structure diagram of the present system is shown below:
Figure 1: Structure Diagram of Search Engine
The structure diagram of the present system shows that it contains resources such as servers. Information from these resources then flows to the crawler, harvester and import components. A web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches [2]. A web harvester extracts every single word every time it accesses a webpage. Additionally, a web harvester stores every single page harvested as a separate version in its database. It has two main advantages.
These are analytic capabilities and versioning of web pages [3]. A web import imports information from the server and provides it to the search engine. Users request information from the search engine by query, and the search engine returns information to them in response. An information flow diagram (IFD) shows the relationship between external and internal information flows between organizations. It also shows the relationship between internal departments and sub-systems [4]. The IFD for the present system is shown below:
Figure 2: IFD of Search Engine
1.3 Limitations of the Present System
• Time consuming: In the present system, information retrieval is very time-consuming.
• Less user interactivity: In the present system, user interactivity is very poor.
• Complex queries: The queries are very complex in the present system.
• Scam: With more visitors there is a greater chance of scams, and advertising a website online can become increasingly costly as more and more people do so.
• Learning curve: Using search engines involves a learning curve. Because of these disadvantages, many beginning Internet users become discouraged and frustrated.
• Sophistication: Despite their growing sophistication, many well-thought-out search phrases produce list after list of irrelevant web pages. The typical search still requires sifting through dirt to find the gems.
• Overload: Search engines create information overload.
1.4 Proposed SRS
To minimize the problems specified above, we propose the following structure and IFD for the SRS.
Figure 3: Structure Diagram of SRS
Figure 4: IFD of SRS
1.5 Objectives of SRS
The SRS is an open system. A user who wants to search for anything just needs to visit the website and search for the desired keyword.
The main objectives of this system are:
• to save time
• to help those users who do not know how to search
• to give users the meaningful information which they need
• to place the less important information in the last positions
The SRS also possesses the following advantages:
• It offers instant service, with no time wasted.
• New users can easily understand the system and maintain it easily.
• The system enables users to precisely describe the information that they seek.
• The SRS reduces the sophistication and scam problems of present search engines.

II. DESIGN METHODOLOGY
The purpose of system design is to create a technical solution that serves both the user and the admin. The system should be designed in such a way that it is very flexible for both the administrator and the user. The preparation of the environment needed to build the system, the testing of the system, and the migration and preparation of the data that will ultimately be used by the system are equally important. In addition to designing the technical solution, system design is the time to initiate focused planning efforts for both the testing and the data preparation activities. The SRS is a real-life problem-solving application. Both the admin section and the user section are designed in such a way that both parties enjoy the facilities of the application.
2.1 Modular Design
The whole system is divided into two parts, i.e. the user section and the admin section. That is why the modular design of the system is also divided into two modular diagrams.
Figure 5(a): Modular diagram for user
Figure 5(b): Modular diagram for admin
2.2 Use-case Diagram
It covers the whole SRS and how it works. It makes communication between the user and the system developers easier. The two main components of a use-case diagram are actors and use cases, as specified in the following diagram.
Figure 6: Symbols for Actor and Use-case
Figure 7: Use-case Diagram for Admin and User
2.3 Working Structure
The SRS works in the following way:
Figure 8: Working Structure of SRS
The working steps in detail are:
• Create an index.
• Receive a query (a set of search terms).
• Look in the index file for matches.
• Gather the matching page entries and rank them by number of keyword matches.
• Format the result.
• Return the result page in HTML to the searcher's web browser.
2.4 Data Flow Diagram (DFD)
The level-0, level-1 and level-2 DFDs of the SRS are shown below:
Figure 9: Level-0 Data Flow Diagram
Figure 10: Level-1 Data Flow Diagram
Figure 11: Level-2 Data Flow Diagram
2.5 Relational Diagram
The relational or schema diagram among the tables used in the SRS is shown below:
Figure 12: Relational Diagram of SRS
2.6 Activity Diagram
The activity at the different levels of the system is shown in the following diagram:
Figure 13: Activity Diagram of SRS
2.7 Architecture Diagram
The architecture diagram of the SRS shows the different components of the system. We built the central database of the system with MySQL (MySQL Essential-5.0.67). The web server used is XAMPP-1.7.3. The system, built with HTML, PHP, JavaScript and CSS, provides an interface to the user through these servers. This interaction requires a network connecting the computers from which the personnel state their requirements.
Figure 14: Architecture Diagram of SRS

III. TOOLS AND TECHNOLOGY
Current, in-demand tools and technologies such as HTML, PHP, CSS, JavaScript, the MySQL database and the Apache web server have been used to develop the SRS.
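The indexing-and-ranking workflow described in Sections 1.1 and 2.3 can be sketched as follows. This is a minimal illustration, not the actual implementation (which uses PHP and MySQL); the page data, function names, and the plain dictionary standing in for the database are all assumptions:

```python
# Minimal sketch of the SRS workflow: the admin-side pass counts keyword
# occurrences in each fetched page and stores the records; the user-side
# pass loads the records and ranks pages by match count (most matches first).
# A dict stands in for the MySQL database; the pages below are made up.

def index_keyword(keyword, pages):
    """pages: iterable of (title, url, description, body_text) tuples."""
    records = []
    for title, url, description, body in pages:
        records.append({
            "title": title,
            "url": url,
            "description": description,
            "matches": body.lower().count(keyword.lower()),
        })
    return records  # in the real system these rows go into the database

def search(keyword, database):
    """Rank stored pages: the page with the most matches gets rank 1."""
    return sorted(database.get(keyword, []),
                  key=lambda r: r["matches"], reverse=True)

pages = [
    ("Cricket rules", "http://example.org/rules", "Laws of cricket",
     "cricket is played with a bat and ball; cricket has two teams"),
    ("Sports news", "http://example.org/news", "Daily sports",
     "today in cricket"),
]
database = {"cricket": index_keyword("cricket", pages)}
results = search("cricket", database)  # the rules page ranks first
```

The same two-pass shape (offline indexing, online ranked lookup) is what allows the SRS to answer a user query from the database without re-contacting the search engine.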
More significantly, a phonetic algorithm, namely the Metaphone algorithm, has been used for searching in the SRS to tolerate spelling errors in the search keywords.
3.1 Importance of a Phonetic Algorithm
A phonetic algorithm is an algorithm for indexing words by their pronunciation. The main advantage of a phonetic algorithm is that it eliminates the effect of misspelled words. When a user searches for a keyword, he/she may misspell the desired keyword. To solve this problem we use the Metaphone algorithm in this system, which maps letters or groups of letters with similar pronunciation to the same code, avoiding loss of information. Here, users supply search keywords, and the actual data are matched by phonetic similarity.
3.2 Metaphone Algorithm
Metaphone is a phonetic algorithm, published by Lawrence Philips in 1990, for indexing words by their English pronunciation. It fundamentally improves on the Soundex algorithm by using information about variations and inconsistencies in English spelling and pronunciation to produce a more accurate encoding, which does a better job of matching words and names which sound similar [5]. As with Soundex, similar-sounding words should share the same keys. Metaphone is available as a built-in operator in a number of systems, including later versions of PHP. Metaphone codes use the 16 consonant symbols 0BFHJKLMNPRSTWXY. The '0' represents "th" (as an ASCII approximation of Θ), 'X' represents "sh" or "ch", and the others represent their usual English pronunciations. The vowels AEIOU are also used, but only at the beginning of the code. This table summarizes most of the rules in the original implementation:
1. Drop duplicate adjacent letters, except for C.
2. If the word begins with 'KN', 'GN', 'PN', 'AE', 'WR', drop the first letter.
3. Drop 'B' if after 'M' at the end of the word.
4. 'C' transforms to 'X' if followed by 'IA' or 'H' (unless, in the latter case, it is part of '-SCH-', in which case it transforms to 'K').
'C' transforms to 'S' if followed by 'I', 'E', or 'Y'. Otherwise, 'C' transforms to 'K'.
5. 'D' transforms to 'J' if followed by 'GE', 'GY', or 'GI'. Otherwise, 'D' transforms to 'T'.
6. Drop 'G' if followed by 'H' and 'H' is not at the end or before a vowel. Drop 'G' if followed by 'N' or 'NED' and is at the end.
7. 'G' transforms to 'J' if before 'I', 'E', or 'Y', and it is not in 'GG'. Otherwise, 'G' transforms to 'K'.
8. Drop 'H' if after a vowel and not before a vowel.
9. 'CK' transforms to 'K'.
10. 'PH' transforms to 'F'.
11. 'Q' transforms to 'K'.
12. 'S' transforms to 'X' if followed by 'H', 'IO', or 'IA'.
13. 'T' transforms to 'X' if followed by 'IA' or 'IO'. 'TH' transforms to '0'. Drop 'T' if followed by 'CH'.
14. 'V' transforms to 'F'.
15. 'WH' transforms to 'W' if at the beginning. Drop 'W' if not followed by a vowel.
16. 'X' transforms to 'S' if at the beginning. Otherwise, 'X' transforms to 'KS'.
17. Drop 'Y' if not followed by a vowel.
18. 'Z' transforms to 'S'.
19. Drop all vowels unless at the beginning [5].
3.3 Works on the Metaphone Algorithm
• Metaphone Calculator [6].
• Doing a fuzzy match in MySQL: Soundex and Metaphone algorithms [7].
• Naushad UzZaman and Mumit Khan, "A Bangla Phonetic Encoding for Better Spelling Suggestions" [8].
• Chakkrit Snae and Michael Brückner, "Novel Phonetic Name Matching Algorithm with a Statistical Ontology for Analyzing Names Given in Accordance with Thai Astrology" [9].
• Chakkrit Snae, "A Comparison and Analysis of Name Matching Algorithms" [10].
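A few of the rules above can be sketched in code. The toy encoder below implements only a small, simplified subset of them (roughly rules 1, 2, 4, 9-14, 18 and 19); it is an illustration of the idea, not the full Metaphone algorithm as built into PHP, which handles many more cases:

```python
def simple_metaphone(word: str) -> str:
    """Toy encoder covering a small subset of the Metaphone rules above."""
    w = word.upper()
    # Rule 2: drop the first letter of an initial 'KN', 'GN', 'PN', 'AE', 'WR'.
    if w[:2] in ("KN", "GN", "PN", "AE", "WR"):
        w = w[1:]
    out = []
    i = 0
    while i < len(w):
        c = w[i]
        nxt = w[i + 1] if i + 1 < len(w) else ""
        if c == nxt and c != "C":        # Rule 1: drop duplicate letters
            i += 1
            continue
        if c == "P" and nxt == "H":      # Rule 10: 'PH' -> 'F'
            out.append("F"); i += 2; continue
        if c == "T" and nxt == "H":      # Rule 13 (part): 'TH' -> '0'
            out.append("0"); i += 2; continue
        if c == "S" and nxt == "H":      # Rule 12 (part): 'SH' -> 'X'
            out.append("X"); i += 2; continue
        if c == "C" and nxt == "K":      # Rule 9: 'CK' -> 'K'
            out.append("K"); i += 2; continue
        if c == "C":                     # Rule 4 (simplified): no 'IA'/'SCH' cases
            out.append("S" if nxt in ("I", "E", "Y") else "K"); i += 1; continue
        if c == "Q":                     # Rule 11: 'Q' -> 'K'
            out.append("K"); i += 1; continue
        if c == "V":                     # Rule 14: 'V' -> 'F'
            out.append("F"); i += 1; continue
        if c == "Z":                     # Rule 18: 'Z' -> 'S'
            out.append("S"); i += 1; continue
        if c in "AEIOU":                 # Rule 19: keep vowels only at the start
            if i == 0:
                out.append(c)
            i += 1; continue
        out.append(c); i += 1
    return "".join(out)

# Misspellings that sound alike share a code, which is exactly what
# the SRS exploits:
# simple_metaphone("CRICKET") == simple_metaphone("KRIKET") == "KRKT"
```

Because "CRICKET" and the misspelling "KRIKET" encode to the same key, a lookup keyed on the code still finds the intended keyword.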
3.4 Working Steps of the Metaphone Algorithm
The two main steps in the operation of the Metaphone algorithm are illustrated below:
Step 1:
• Take the input for insertion into the database.
• Select the words which may be searched.
• Generate the Metaphone code for the searchable words.
• Store the Metaphone codes in the database.
Step 2:
• Create the Metaphone code for each of the words in the search key.
• Retrieve data from the database based on the codes of the words.
• Calculate the percentage of matching between the keywords and the actual data in the database.
• Display the retrieved data ordered by their match with the actual data, best matches first.
• Divide the whole data into several pages if required.

IV. SNAPSHOTS OF SRS
The home page of the SRS looks like the following:
Figure 15: Home Page of SRS
The following figure shows the form used by users to search for keywords:
Figure 16: Searching Form
Suppose that the user has searched for "Cricket" in the SRS, which is connected to a search engine such as Google. The user may mistype the input keywords, but the Metaphone algorithm used in the SRS still returns the related results, as shown below:
Figure 17: Result Page
After successfully logging in to the SRS, the admin can insert a keyword using this page:
Figure 18: Keyword Inserting Form
The admin can then update keywords on the following page:
Figure 19: Keyword Updating Form
And the admin can see the missing keywords, in order to add them to the SRS, on this page:
Figure 20: Monitoring Missing Keywords and Then Inserting Them

V. CONCLUSION
Analyzing the above description of the Search Reporter System working with a search engine such as Google, it can be concluded that the SRS is highly effective, efficient and user-friendly in fulfilling users' requirements. Users can benefit greatly from using the SRS.
With the rapid advance of modern technology, people want useful information within a short time. A web search engine can provide the required information, but it returns many more and less relevant URLs for the search keyword, which takes extra time to sift through. Hence the SRS aims to remove the less necessary and totally unnecessary URLs from the list of URLs returned by a search engine, after analyzing some factors, thereby reducing the searching time. To get the full benefit of modern web technology through the SRS, it can be integrated with other search engines.
Future Plan: Nothing in the world is free from error, so it is quite possible that the SRS contains errors. The SRS is fully dependent on a search engine such as Google. In future the following features will be integrated into the SRS:
• Searching based on a search analyzer
• Mailing facility
• Chatting facility

REFERENCES
[1] http://en.wikipedia.org/wiki/Web_search_engine
[2] http://en.wikipedia.org/wiki/Web_crawler
[3] http://www.brightplanet.com/2012/11/deep-web-search-engines-vs-web-harvest-engines-finding-intel-in-a-growing-internet/
[4] http://en.wikipedia.org/wiki/Information_flow_diagram
[5] http://en.wikipedia.org/wiki/Metaphone
[6] http://www.vbforums.com/showthread.php?655230-Metaphone-Calculator
[7] http://theunderweb.com/doing-a-fuzzy-match-of-names-in-mysql-soundex-and-metaphone-algorithms.html
[8] Naushad UzZaman and Mumit Khan, "A Bangla Phonetic Encoding for Better Spelling Suggestions", Proc. of 7th International Conference on Computer and Information Technology (ICCIT 2004), pp. 76-80.
[9] Chakkrit Snae and Michael Brückner, "Novel Phonetic Name Matching Algorithm with a Statistical Ontology for Analyzing Names Given in Accordance with Thai Astrology", Issues in Informing Science and Information Technology, Vol. 6, 2009, pp. 497-515.
[10] Chakkrit Snae, "A Comparison and Analysis of Name Matching Algorithms", World Academy of Science, Engineering and Technology, Issue 1, 2007.

Md.
Palash Uddin (palash_cse@hstu.ac.bd) received his B.Sc. degree in Computer Science and Engineering from Hajee Mohammad Danesh Science and Technology University, Dinajpur, Bangladesh in 2013. His main working interests are artificial intelligence, bioinformatics, algorithm analysis, database structure analysis, software engineering, theory of computation, etc. Currently he is working as a lecturer in the Dept. of Computer Science and Information Technology at Hajee Mohammad Danesh Science and Technology University, Dinajpur, Bangladesh. Previously, he was a lecturer in the Department of Computer Science and Engineering at Central Women's University, Dhaka, Bangladesh. He has research publications in various fields of Computer Science and Engineering.

Mst. Deloara Khushi received her B.Sc. degree in Computer Science and Engineering from Hajee Mohammad Danesh Science and Technology University, Dinajpur, Bangladesh in 2013. Her main working interests are communication theory and computer algorithms. Currently she is working as a lecturer in the Dept. of Computer Science and Engineering at Bangladesh University of Business and Technology, Dhaka, Bangladesh.

Fahmida Akter received her B.Sc. degree in Computer Science and Engineering from Hajee Mohammad Danesh Science and Technology University, Dinajpur, Bangladesh in 2013. Her main working interests are bioinformatics and data mining. Currently she is working as a lecturer in the Dept. of Computer Science and Engineering at East Delta University, Chittagong, Bangladesh.

Md. Fazle Rabbi received his B.Sc. degree in Computer Science and Engineering from Hajee Mohammad Danesh Science and Technology University, Dinajpur, Bangladesh in 2008. His main working interests are bioinformatics, data structures and algorithms, etc. Currently he is working as an assistant professor in the Dept.
of Computer Science & Information Technology at Hajee Mohammad Danesh Science and Technology University, Dinajpur, Bangladesh.
American Journal of Engineering Research (AJER), 2013. e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-225-234. www.ajer.org. Research Paper, Open Access.

Adaptive Control of a Mobile Manipulator to Track a Horizontal Smooth Curve, Applied to the Welding Process

Tran Duy Cuong, Ngo Cao Cuong, Nguyen Thanh Phuong
HUTECH High Technology Research Institute, Vietnam.

Abstract: - In this paper, an adaptive control of a mobile manipulator to track a horizontal smooth curve, applied to the welding process, is presented. The requirements of the welding task are that the end-effector must track along a welding trajectory with a constant velocity and must be inclined to the welding trajectory at a constant angle during the whole welding process. The mobile manipulator is divided into two subsystems: the three-linked manipulator and the wheeled mobile platform. Two controllers are designed based on the decentralized motion method. However, there exists a relation among the controllers, namely the velocities of the subsystems at the previous sampling time. In order to avoid singular configurations of the manipulator, a control algorithm is proposed to maintain the initial configuration of the manipulator throughout the welding process. The mobile platform has to move so as to guarantee the unchanged configuration of the manipulator. This problem is solved using a set of tracking errors of the mobile platform such that the initial configuration of the manipulator is unchanged when these errors go to zero. The interaction between the manipulator and the mobile platform is determined based on the D'Alembert principle, and an adaptive tracking motion controller for the mobile platform is designed using the computed-torque method. The effectiveness of the proposed control system is proven through simulation results.
Keywords: - mobile platform (MP), welding mobile manipulator (WMM), manipulator, trajectory tracking, Lyapunov function.
I.
INTRODUCTION
Nowadays, working conditions in the industrial fields have been greatly improved. In hazardous and harmful environments, workers are substituted by welding robots to perform the operations. Especially in the welding field, welders are substituted by welding manipulators to perform the welding tasks. Traditionally, manipulators are fixed on the floor, and their workspaces are limited by the reach of their structures. In order to overcome this disadvantage, movable manipulators are used to enlarge the workspace. These manipulators are called mobile manipulators. In this study, the structure of the mobile manipulator comprises a three-linked manipulator plus a two-wheeled mobile platform. In recent years, there has been a great deal of interest in mobile robots and manipulators. The study of mobile robots is mostly concentrated on one question: how to move from here to there in a structured/unstructured environment. It includes three classes of algorithms: point-to-point, tracking, and path following. The manipulator is a holonomic system. The study of manipulators is mostly concentrated on the question of how to move the end-effector from here to there, and it likewise has three classes of algorithms, as in the case of the mobile robot. Although there has been a vast amount of research effort on mobile robots and manipulators in the literature, the study of mobile manipulators is very limited. It is hoped that this paper will make a small contribution to mobile manipulator research. The previous works concentrate on the following topics.
● Motion control of a wheeled mobile robot
The mobile platform is a non-holonomic system. It is assumed that the wheels roll purely on a horizontal plane without slippage. The mobile platform robot used in this study has two independent driving wheels and one passive caster for balancing.
Several researchers have studied the wheeled mobile robot as a non-holonomic system. Kanayama et al. [8] (1991) proposed a stable tracking control method for a non-holonomic mobile robot, whose stability is guaranteed by a Lyapunov function. Fierro and Lewis [3] (1995) used the backstepping kinematics-into-dynamics method to control a non-holonomic mobile robot. Lee et al. [4] (1999) proposed an adaptive control for non-holonomic mobile robots using the computed-torque method. Fukao et al. [5] (2000) developed an adaptive tracking control method with unknown parameters for the mobile robot. Bui et al. [6] (2003) proposed a tracking control method with the tracking point outside the mobile robot.
● Motion control of a manipulator
The control of a manipulator is an interesting area for research. In previous works, Craig et al. [1] (1986) proposed an algorithm for estimating parameters on-line, using an adaptive control law with the computed-torque method for the control of manipulators. Lloyd et al. [2] (1993) proposed a singularity control method for the manipulator using closed-form kinematic solutions. Tang et al. [9] (1998) proposed a decentralized robust control of a robot manipulator.
● Motion control of a mobile manipulator
A manipulator mounted on a mobile platform gains a large workspace, but it also raises many challenges. With regard to the kinematic aspect, the movement of the end-effector is a compound movement of several coordinate frames at the same time. With regard to the dynamic aspect, the interaction between the manipulator and the mobile platform must be considered. With regard to the control aspect, whether the mobile manipulator should be treated as two subsystems is also a problem that must be studied. In previous works, Dong, Xu, and Wang [7] (2000) studied a tracking control of a mobile manipulator with the effect of the interaction between the two subsystems.
Tung et al. [10] (2004) proposed a control method for a mobile manipulator using a kinematic model. Dung et al. [11] (2007) proposed a two-wheeled welding mobile robot for tracking a smooth curved welding path using an adaptive sliding-mode control technique.
2. System modeling
Fig 1. Three-link welding manipulator mounted on mobile platform
Fig 2. Schematic diagram of mobile platform-manipulator
As shown in Fig 1, the mobile manipulator is composed of a wheeled mobile platform and a manipulator. The platform has two independent driving wheels at the center of each side and two passive castor wheels at the center of the front and the rear. Fig 2 shows the schematic of the mobile manipulator considered in this paper. The following notation is used in deriving the kinematic and dynamic equations of motion.
2.1 Kinematic equations
Consider the three-linked manipulator shown in Fig 2. The velocity vector of the end-effector with respect to the moving frame is given by

$${}^1V_E = J\,\dot\Theta \quad (1)$$

where ${}^1V_E = [\dot x_E \;\; \dot y_E \;\; \dot\phi_E]^T$ is the velocity vector of the end-effector frame, $\dot\Theta = [\dot\theta_1 \;\; \dot\theta_2 \;\; \dot\theta_3]^T$ is the vector of angular velocities of the revolute joints of the three-linked manipulator with respect to the moving frame, and $J$ is the Jacobian matrix.
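As an illustration, Eq. (1) can be evaluated numerically using the explicit Jacobian given in the next equation. This is a sketch, not code from the paper: the function names and the link lengths below are our own assumptions.

```python
import numpy as np

def jacobian(theta, L=(0.1, 0.1, 0.1)):
    """Jacobian J of the planar three-link arm, Eq. (2).
    theta: joint angles (theta1, theta2, theta3) in rad.
    L: link lengths (L1, L2, L3) in m (illustrative values, not from the paper)."""
    t1, t2, t3 = theta
    L1, L2, L3 = L
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    s123, c123 = np.sin(t1 + t2 + t3), np.cos(t1 + t2 + t3)
    return np.array([
        [-L3 * s123 - L2 * s12 - L1 * s1, -L3 * s123 - L2 * s12, -L3 * s123],
        [ L3 * c123 + L2 * c12 + L1 * c1,  L3 * c123 + L2 * c12,  L3 * c123],
        [1.0, 1.0, 1.0],
    ])

def ee_velocity(theta, theta_dot, L=(0.1, 0.1, 0.1)):
    """End-effector velocity in the moving frame, Eq. (1): V_E = J * Theta_dot."""
    return jacobian(theta, L) @ np.asarray(theta_dot)
```

For example, with the arm stretched out (all joint angles zero), a unit angular velocity of the first joint moves the end-effector purely sideways at $L_1 + L_2 + L_3$ per second while rotating its orientation at 1 rad/s, exactly as the first column of $J$ predicts.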
T  L3 S123  L2 S12  L1S1  L3 S123  L2 S12 J   L3C123  L2C12  L1C1 L3C123  L2C12  1 1  L3 S123  L3C123  1  vector of with 2013 respect to the moving the revolution joints of the three-linked (2) where L1, L2, L3 are the length of links of the manipulator, and C1 = cos(1 ) ; S1 = sin(1 ) ; C12 = cos(1 +  2 ) C123 = cos(1 +  2 +  3 ) ; S12 = sin(1 +  2 ) ; S123 = sin(1 +  2 +  3 ) ; The dynamic equation obtained as follows: of the end-effector VE  VP  WP 0 Rot1 1p E  0Rot1 1vE of the manipulator with respect to the world frame is (3)  xE   0   X E   X P  cos P  sin  P 0  L1C1  L2C12  L3C123    ;    ; W   0  ; 1p  y  ; 1  L S  L S  L S  0 Rot   sin  vE   YE  vP   YP  E E P cos P 0 p     1 P E 2 12 3 123    1 1         E    0   0 1 E P  E  P   =  +  +     E = 1 +  2 +  3   P  ;  E 1 2 3 P 2 The relationship between v, ω and the angular velocities of two driving wheels is given by  R  1 / r b / r   v p  (4)    1 / r  b / r     p   L  Where Where b is the distance between the driving wheels and the axis of symmetry, r is the radius of each driving wheel. The linear velocity and the angular velocity of the end-effector in the world coordinate (frame X-Y) E (5) vE  X E cos  E  YE sin  E ;  E   2.2 Dynamic equations In this application, the welding speed is very slow so that the manipulator motion during the transient time is assumed as a disturbance for MP. 
For this reason, the dynamic equation of the MP under the nonholonomic constraint $A(q_v)\dot q_v = 0$ is described by the Euler-Lagrange formulation as

$$M_v(q_v)\ddot q_v + C_v(q_v,\dot q_v)\dot q_v = E(q_v)\tau_v + A^T(q_v)\lambda \quad (6)$$

where $A(q_v) = [-\sin\phi_p \;\; \cos\phi_p \;\; 0]$, $q_v = [X_p \;\; Y_p \;\; \phi_p]^T$, and

$$M_v(q_v) = \begin{bmatrix} m + \dfrac{2I_w}{r^2} & 0 & m_c d \sin\phi_p \\ 0 & m + \dfrac{2I_w}{r^2} & -m_c d \cos\phi_p \\ m_c d \sin\phi_p & -m_c d \cos\phi_p & I + \dfrac{2I_w b^2}{r^2} \end{bmatrix};$$

$$C_v(q_v,\dot q_v) = \begin{bmatrix} 0 & 0 & m_c d \dot\phi_p \cos\phi_p \\ 0 & 0 & m_c d \dot\phi_p \sin\phi_p \\ 0 & 0 & 0 \end{bmatrix};\qquad E(q_v) = \frac{1}{r}\begin{bmatrix} \cos\phi_p & \cos\phi_p \\ \sin\phi_p & \sin\phi_p \\ b & -b \end{bmatrix};\qquad \tau_v = \begin{bmatrix}\tau_R\\ \tau_L\end{bmatrix}$$

and $\lambda$ is the Lagrange multiplier associated with the nonholonomic constraint force.
Consider the WMM shown in Fig 2. It is modeled under the following assumptions:
• The MP has two driving wheels for body motion, positioned on an axis passing through its geometric center.
• The three-linked manipulator is mounted at the geometric center of the MP.
• The distance between the mass center and the rotation center of the MP is $d$. Fig. 2 does not show this distance; it appears in the dynamic equation of the MP.
• A magnet is mounted at the bottom of the WMM to avoid slipping.
In Fig. 2, $(X_P, Y_P)$ is the center coordinate of the MP; $\phi_p$ is the heading angle of the MP; $\omega_R, \omega_L$ are the angular velocities of the right and left wheels; $\tau_v = [\tau_R \;\; \tau_L]^T$ is the vector of motor torques acting on the right and left wheels; $2b$ is the distance between the driving wheels; $r$ is the radius of each driving wheel; $m_c$ is the mass of the WMM without the driving wheels; $m_w$ is the mass of each driving wheel with its motor; $I_w$ is the moment of inertia of a wheel and its motor about the wheel axis; $I_m$ is the moment of inertia of a wheel and its motor about the wheel diameter; and $I_c$ is the moment of inertia of the body about the vertical axis through the mass center, with

$$m = m_c + 2m_w;\qquad I = I_c + 2m_w b^2 + 2I_m.$$

3. Controllers Design
Fig 3. Scheme for deriving the tracking error vector $E_E$ of the manipulator
From the control point of view, this paper proposes an adaptive dynamic control algorithm. The controllers are based on Lyapunov functions to guarantee asymptotic stability of the system, and on a decentralized motion control method that builds on the kinematic and dynamic models of the system.
3.1 Definition of the errors
From Fig. 3, the tracking error vector $E_E$ is defined as

$$E_E = \begin{bmatrix}e_1\\ e_2\\ e_3\end{bmatrix} = \begin{bmatrix}\cos\phi_E & \sin\phi_E & 0\\ -\sin\phi_E & \cos\phi_E & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}X_R - X_E\\ Y_R - Y_E\\ \phi_R - \phi_E\end{bmatrix} \quad (7)$$

Fig 4. Scheme for deriving the MP tracking error vector
From Fig. 4, a new tracking error vector $E_M$ for the MP is defined as

$$E_M = \begin{bmatrix}e_4\\ e_5\\ e_6\end{bmatrix} = \begin{bmatrix}\cos\phi_M & \sin\phi_M & 0\\ -\sin\phi_M & \cos\phi_M & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}X_E - X_M\\ Y_E - Y_M\\ \phi_E - \phi_M\end{bmatrix} \quad (8)$$

3.2 Kinematic controller design for the manipulator
A backstepping method is used to obtain the kinematic controller. The Lyapunov function is proposed as

$$V_0 = \tfrac{1}{2}E_E^T E_E \quad (9)$$

Its first derivative is

$$\dot V_0 = E_E^T \dot E_E \quad (10)$$

To make $\dot V_0$ negative, the following equation must be satisfied:

$$\dot E_E = -K E_E \quad (11)$$

where $K = \mathrm{diag}(k_1, k_2, k_3)$ with $k_1, k_2, k_3$ positive constants. Substituting (1), (3) and (7) into (11) yields

$$\dot\Theta = J^{-1}\,{}^0Rot_1^{-1} A_1, \qquad A_1 = K E_E + V_R - V_P - W_P \times {}^0Rot_1\,{}^1p_E \quad (12)$$

where $V_R$ is the velocity of the reference point R.
3.3 Kinematic controller design for the mobile platform
The Lyapunov function is proposed as

$$V_1 = \tfrac{1}{2}E_M^T E_M \quad (13)$$

Its first derivative is

$$\dot V_1 = E_M^T \dot E_M \quad (14)$$

To make $\dot V_1$ negative, the following control law must be satisfied:

$$v_p = v_E\cos e_6 + D\dot\phi_P + k_4 e_4;\qquad \omega_p = \omega_E + v_E\sin e_6 + k_5 e_5 + k_6 e_6 \quad (15)$$

where $k_4, k_5, k_6$ are positive constants.
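The essence of the kinematic design in Eqs. (9)-(11) is that forcing the closed-loop error to obey $\dot E_E = -K E_E$ makes $V_0 = \frac{1}{2}E_E^T E_E$ decrease monotonically, so the tracking errors converge to zero. A minimal numerical sketch of that error decay (the gains, time step and initial errors below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def step_error(E, K, dt):
    """One explicit-Euler step of the closed-loop error dynamics E_dot = -K*E, Eq. (11)."""
    return E + dt * (-K @ E)

K = np.diag([2.0, 2.0, 2.0])      # positive gains k1, k2, k3 (illustrative)
E = np.array([0.1, -0.05, 0.2])   # initial tracking error (illustrative)
V0 = 0.5 * E @ E                  # Lyapunov function, Eq. (9)
for _ in range(2000):             # simulate 20 s at dt = 0.01 s
    E = step_error(E, K, dt=0.01)
assert 0.5 * E @ E < V0           # V0 has decreased: errors have converged
```

Each error component decays like $e^{-k_i t}$, which is precisely the asymptotic stability that the Lyapunov argument guarantees.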
3.4 Sliding mode controller design
To design a sliding mode controller, the sliding surfaces are defined as

$$s = \begin{bmatrix}s_1\\ s_2\end{bmatrix} = \begin{bmatrix}\dot e_4 + k_4 e_4\\ \dot e_6 + k_6 e_6 + k_5\,\beta(e_6)\,e_5\end{bmatrix} \quad (16)$$

where $k_4, k_5, k_6$ are positive constants and $\beta(e_6)$ is a bounding function defined as

$$\beta(e_6) = \begin{cases}1 & \text{if } |e_6| \le \varepsilon\\ 0 & \text{if } |e_6| \ge 2\varepsilon\\ \text{no change} & \text{otherwise}\end{cases} \quad (17)$$

where $\varepsilon$ is a positive constant. The Lyapunov function is chosen as

$$V = \tfrac{1}{2}s^T s \quad (18)$$

To satisfy the Lyapunov stability condition $\dot V \le 0$, the following controller $u_{mb}$ is obtained:

$$u_{mb} = \begin{bmatrix}\dot e_5 + (e_5 + D)\dot\phi_r + v_E e_6 \sin e_6 + k_4 e_4\\ \dot e_3 + k_6 e_6 + k_5\,\beta(e_6)\,e_5\end{bmatrix} + Q\,\operatorname{sgn}(s) + \Pi\,\operatorname{sgn}(s) \quad (19)$$

where $\Pi = [\Pi_1 \;\; \Pi_2]^T$ and $Q = [Q_1 \;\; Q_2]^T$. A switching controller smoother than the previous one is obtained by replacing the $\operatorname{sgn}(\cdot)$ function with the $\operatorname{Sat}(\cdot)$ function:

$$Q\,\operatorname{sgn}(s) \to \begin{bmatrix}Q_1 & 0\\ 0 & Q_2\end{bmatrix}\begin{bmatrix}\operatorname{Sat}(\sigma_1)\\ \operatorname{Sat}(\sigma_2)\end{bmatrix};\qquad \Pi\,\operatorname{sgn}(s) \to \begin{bmatrix}\Pi_1 & 0\\ 0 & \Pi_2\end{bmatrix}\begin{bmatrix}\operatorname{Sat}(\sigma_1)\\ \operatorname{Sat}(\sigma_2)\end{bmatrix}$$

where the saturation function is defined as

$$\operatorname{Sat}(\sigma_i) = \begin{cases}\sigma_i = s_i/\Phi & \text{if } |\sigma_i| \le 1\\ \operatorname{sgn}(\sigma_i) & \text{otherwise}\end{cases} \quad (20)$$

In this case the welding velocity is rather slow, 7.5 mm/s; therefore a thin boundary layer $\Phi = 0.1$ is chosen.
3.5 Hardware design
3.5.1 Measurement of the errors
Fig 5. The scheme of measuring errors $e_1, e_2, e_3$ (rollers $O_1$ and $O_3$ and the torch at E on the reference welding path)
From Fig. 5, the tracking error relations are given as

$$e_1 = r_s \sin e_3;\qquad e_2 = d_e - r_s \cos e_3 \quad (21)$$
From Fig. 4, with $e_3 = \angle(\overrightarrow{O_1E}, \overrightarrow{O_1O_3})$, the tracking errors $e_4, e_5, e_6$ with respect to the moving frame can be calculated as

$$e_4 = x_M - x_E + L_1\cos\theta_1 + L_2\cos(\theta_1+\theta_2) + L_3\cos(\theta_1+\theta_2+\theta_3)$$
$$e_5 = y_M - y_E + D + L_1\sin\theta_1 + L_2\sin(\theta_1+\theta_2) + L_3\sin(\theta_1+\theta_2+\theta_3)$$
$$e_6 = \phi_E - \phi_P - (\theta_1 + \theta_2 + \theta_3) + \frac{\pi}{2} \quad (22)$$

3.5.2 Measurement of the rotation angle of the MP
Fig 6. The scheme of measuring the rotation angle of the MP

$$\phi_{PF} = \tan^{-1}\!\left(\frac{l_{s2} - l_{s1}}{l_{ds}}\right) + \phi_p \quad (23)$$

3.6 Control algorithms
The schematic diagram of the decentralized control method is shown in Fig 7. In this diagram, the relationship between the controllers is that the output of each controller is one of the inputs of the other, and vice versa. The control task demands a real-time algorithm to guide the mobile manipulator along a given trajectory. A laser sensor, a rotary potentiometer and a linear potentiometer were adopted in the simulation to obtain the position and orientation of the mobile platform relative to the walls.
Fig 7. Block diagram of control system
IV. SIMULATION RESULTS
In this section, simulation results are presented to demonstrate the effectiveness of the control algorithm developed for horizontal smooth curved welding.
Fig 8a. The WMM tracking along the welding path
Fig 8b. A different perspective of the WMM
Fig 9. Trajectory of the end-effector and its reference at the beginning
Fig 10. Tracking errors $e_1, e_2, e_3$ at the beginning
Fig 11. Tracking errors $e_4, e_5, e_6$ at the beginning
Fig 12. Sliding surfaces
Fig 13.
Angular velocity and velocity of the center point of the platform ($v_p$ in mm/s and $\omega_p$ in rad/s, over 0-450 s)
Fig 14. Angular velocities of the right and the left wheels ($\omega_R$, $\omega_L$ in rad/s)
Fig 15. Angles of the revolute joints ($\theta_1$, $\theta_2$, $\theta_3$ in degrees)
Fig 16. Welding velocity of the welding point (mm/s)
Fig 17. Trajectories of the end-effector and its reference
V. CONCLUSION
In this study, a WMM in which the mobile platform and the manipulator co-work was developed for tracking a long, horizontal, smooth curved welding path. The main task of the control system is to make the end-effector (welding point) of the WMM track a reference point that moves along the welding path with constant velocity, while the angle of the welding torch is kept constant with respect to the welding curve. The WMM is divided into two subsystems controlled by decentralized controllers: a kinematic controller for the manipulator and a sliding mode controller for the mobile platform. These controllers are derived from Lyapunov functions and their stability conditions, which ensure that the error vectors are asymptotically stable. Simulation results are presented to illustrate the effectiveness of the proposed algorithm.
REFERENCES
[1] J. J. Craig, P. Hsu, and S. S. Sastry, “Adaptive Control of Mechanical Manipulators”, Proceedings of the IEEE International Conference on Robotics and Automation, Vol. 2, pp. 190-195, 1986.
[2] J. Lloyd, and V.
Hayward, “Singularity Control of Robot Manipulator Using Closed Form Kinematic Solutions”, Proceedings of the Conference on Electrical and Computer Engineering, Vol. 2, pp. 1065-1068, 1993.
[3] R. Fierro, and F. L. Lewis, “Control of a Nonholonomic Mobile Robot: Backstepping Kinematics into Dynamics”, Proceedings of the IEEE Conference on Decision and Control, Vol. 4, pp. 3805-3810, 1995.
[4] T. C. Lee, C. H. Lee, and C. C. Teng, “Adaptive Tracking Control of Nonholonomic Mobile Robots by Computed Torque”, Proceedings of the Conference on Decision and Control, Vol. 2, pp. 1254-1259, 1999.
[5] T. Fukao, H. Nakagawa, and N. Adachi, “Adaptive Tracking Control of a Nonholonomic Mobile Robot”, IEEE Transactions on Robotics and Automation, Vol. 16, No. 5, pp. 609-615, 2000.
[6] Tr. H. Bui, T. L. Chung, J. H. Suh, and S. B. Kim, “Adaptive Control for Tracking Trajectory of a Two-Wheeled Welding Mobile Robot with Unknown Parameters”, Proceedings of the International Conference on Control, Automation and Systems, pp. 191-196, 2003.
[7] W. Dong, Y. Xu, and Q. Wang, “On Tracking Control of Mobile Manipulators”, Proceedings of the IEEE International Conference on Robotics and Automation, Vol. 4, pp. 3455-3460, 2000.
[8] Y. Kanayama, Y. Kimura, F. Miyazaki, and T. Noguchi, “A Stable Tracking Control Method for a Nonholonomic Mobile Robot”, Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems, Japan, Vol. 3, pp. 1236-1241, 1991.
[9] Y. Tang, and G. Guerrero, “Decentralized Robust Control of Robot Manipulator”, Proceedings of the American Control Conference, Pennsylvania, USA, pp. 922-926, June 1998.
[10] Tan Tung Phan, Thien Phuc Tran, Trong Hieu Bui, and Sang Bong Kim, “Decentralized Control Method for Welding Mobile Manipulator”, Proceedings of the International Conference on Dynamics, Instrumentation, and Control, Nanjing, China, pp. 171-180, August 18-20, 2004.
[11] Ngo Manh Dung, Vo Hoang Duy, Nguyen Thanh Phuong, Sang Bong Kim, and Myung Suck Oh, “Two-Wheeled Welding Mobile Robot for Tracking a Smooth Curved Welding Path Using Adaptive Sliding-Mode Control Technique”, International Journal of Control, Automation, and Systems, Vol. 5, No. 3, pp. 283-294, June 2007.
American Journal of Engineering Research (AJER) 2013
e-ISSN : 2320-0847 p-ISSN : 2320-0936
Volume-02, Issue-12, pp-01-08
www.ajer.org
Research Paper — Open Access

Implementation of Gas Plasma Treatment on Cotton Fabric Tailorability
Dr. Elsisy, W.S
Assistant Professor, Apparel Design Dept., Faculty of Family Science, Taibah University, Kingdom of Saudi Arabia

Abstract: - The primary objective of the current research was to examine the effects of fabric finishes on fabric tailorability in relation to low-stress measurements (Fabric Assurance by Simple Testing, "FAST"). Plasma finishing in textile technology is very promising for various end uses such as protective textiles for soldiers, medical textiles and smart textiles. Gas plasma finishing was applied to cotton fabric using non-polymerizing gases, namely air, argon, helium, and nitrogen. Properties of the gas-plasma-treated samples, including low-stress mechanical behavior, fabric tailorability index (performance, % improvement, efficiency), and total fabric-skin comfort value, were evaluated in this study. Fabric Assurance by Simple Testing ("FAST") was employed to evaluate the influence of the dry treatment on the tested fabrics. The changes in the fabric tailorability parameters of the gas plasma, mercerized, or mercerized-plus-plasma treatments were in good agreement with earlier findings and can be attributed to the amount of air trapped between the yarns and fibers. This study suggests that gas plasma finishing and/or mercerized-plus-gas-plasma processes can influence the final fabric tailorability properties of cotton queen fabrics, and also provides information for developing mercerized-plus-gas-plasma treated cotton fabrics for very high quality queen fabrics.
Keywords: - Fabric wet processing (fabric mercerization), Fabric dry processing (non-polymerizing gas treatment), Fabric Assurance by Simple Testing (FAST), The Balanced Scorecard Concept - Demand Triangle.
I. INTRODUCTION
Future automation and robotization of apparel manufacturing processes will undoubtedly require that machines and systems be selected on the basis of the specific properties of the fabric being processed. It is therefore essential to develop and use objective evaluation methods that produce fabric compatibility data, which are necessary for the control of material handling, sewing and the other processes involved in converting fabrics into garments. This work focuses on the qualitative and quantitative analysis of the relationships between the properties of apparel fabrics and garment making-up processes. Studies of mechanical properties of fabrics, such as extension, shear and bending, and of their relationship to objective tailorability determinations constitute the main part of this research.
1.1 Adding Value with Dry Treatment
Cotton fiber is the purest form of natural cellulose and contains very little lignin or pectin, unlike flax, jute, hemp or wood. However, it still contains several unwanted impurities, and the need to remove them is obvious: to make grey fabrics white and absorbent and to prepare them for dyeing, printing or chemical finishing. The various conventional treatments bring about deterioration and degradation of cotton fabrics, in addition to their environmental impact. In essence, the method described here is based on the concept of fabric dry treatment using a non-polymerizing gas. The information in this section is mainly abstracted from Refs. [1-14]. The textile and clothing industries in some developed countries face big challenges today, largely because of the globalization process.
Therefore, the shift to high-functional, added-value and technical textiles is deemed essential for their sustainable growth. Growing environmental and energy-saving concerns will also lead to the gradual replacement of many traditional wet-chemistry-based textile processes, which use large amounts of water, energy and effluents, by various forms of low-liquor and dry finishing processes.
1.2 Advantages of Plasma Treatments for Textiles [5, 7]
As has been demonstrated, plasma treatments of textiles look very promising. They can be used both as substitutes for conventional processes and for the production of innovative textile materials with properties that cannot be achieved via wet processing [7]:
a) They are applicable, in principle, to all substrates, even those that cannot be modified by conventional methods.
b) The modification is fairly uniform over the whole substrate.
c) In general, no significant alteration of bulk properties is produced.
d) A broad range of functional groups can be introduced at the surface by varying the monomer gas used.
e) They are fast and extremely gentle, as well as environmentally friendly.
f) Being dry processes, plasma treatments are characterized by low consumption of chemicals and energy. Where they cannot replace an existing wet process (dyeing and some finishing), they can, if used as pre-treatments, markedly reduce the amount of chemicals required by the process and the concentration of pollutants in the effluents [5].
Low-temperature, low-pressure plasma (LTLPP) is already used industrially for the treatment of certain metals, semiconductors and polymer materials. For example, in chemical, pharmaceutical, biological and medical equipment, low-pressure plasma is used to treat plastic surfaces such as polyethylene moldings for bottles, pipes and containers.
LTLPP is also used for the treatment of polymer surfaces in the packaging industry. There have not, however, been many applications to fiber and textile materials, mainly because LTLPP systems have to be vacuum-based, which is expensive, and such systems are only suitable for batch processing, although some attempts have been made at developing continuous low-pressure plasma machines. For plasma processing to be used in the textile industry, it needs to be based on atmospheric-pressure, low-temperature plasma (APLTP), and a number of such systems are now being developed commercially. Nevertheless, most of the research that has identified the potential of plasma surface treatment has been undertaken with LTLPP; the technology transfer to APLTP is seen mainly as a matter of modifying process conditions. The following is, therefore, a summary of findings from LTLPP treatments of various textile structures and fiber types. LTLPP technology has been widely investigated for the surface modification of textiles, and an overview of such plasma treatments has been published by Mordent et al. Many of the improvements to fabrics of various fiber types largely depend on the gas employed.
1.3 Objective
The disadvantages of fabric wet processing are: i) it is tedious and lengthy; ii) it is costly (water, energy, chemicals, and others); and iii) it cannot be carried out without environmental impact. At the same time, evaluating the properties of plasma-treated fabrics requires costly equipment such as SEM, atomic force microscopy, and contact-angle and capillary-rise measurements. This work also deals with the qualitative and quantitative analysis of the tailorability of lightweight fabrics, as well as with the interaction between ease of tailorability and the performance characteristics of garment fabrics.
The role of the mechanical and physical properties of fabric in the making-up process of lightweight apparel fabrics must be fully understood in order to achieve trouble-free tailoring of garments made from such fabrics. A rapid and simple method for fabric finishing and its evaluation is therefore badly needed; the present work was undertaken to fill this gap.
II. MATERIALS AND TESTING
Common finishes include: mercerization (a process that makes the material more comfortable and gives it luster and added strength); plasma treatment; and the combination of mercerization and plasma treatment. This new combined process makes the fabric equal to queen textiles in all required properties: comfortable, warm in winter, cool in summer, wrinkle-resistant, moisture-absorbing, quick-drying, and resistant to soiling.
2.1 Plan (Experimental Roadmap)
The hypothesis tested is that dry processing improves fabric quality by 20%: a 100% cotton fabric was subjected to dry processing and its properties, tailorability and hand [15] were measured before and after treatment. Fig. 1 shows the experimental plan as a Define-Plan-Do-Improve-Control loop, with replanning when the improvement check fails. Trials were made to find the influence of dry processing on fabric performance (overall fabric tailorability index). For this purpose, different fabric finishing processes were used, i.e., wet and dry fabric treatment. In order to study the effect of plasma treatment on mercerized cotton fabrics, the general plan of this research is as follows (see Fig.
2 and Table 1): Step (A): fabric selection; Step (B): treatment (1- grey cotton fabric, 2- mercerized cotton fabric, 3- plasma-treated cotton fabric); Step (C): "FAST" evaluation. Fig. 2 shows the experimental roadmap.
Table 1. Properties of the mercerized fabric [16].
Structure: plain 1/1 | Fabric width: 130 cm | Yarn density per inch: 83 x 66 | Mass per unit area: 118 g/m2
2.2 Gas Plasma Finishing
The plasma process cylinder is 15 cm in diameter and 35 cm long, and the radio frequency is 20 MHz. The system possesses two gas channels with a mass-flow controller and magnetic valves for programmed, automatic, precise gas flow into the process cylinder. The cotton fabric samples were placed in the plasma cylinder as shown in Fig. 3. The plasma cylinder was first pumped down to 0.187 torr (25 Pa), then the gas was injected automatically by opening the gas valves. The gas flow rate was kept constant at 60 mL/min [16].
Fig. 3. Plasma apparatus.
2.3 Fabric Assurance by Simple Testing "FAST"
FAST measures properties that are closely related to the ease of garment making-up and the durability of fabric finishing. FAST-1 gives a direct reading of fabric thickness over a range of loads with micrometer resolution. FAST-2 measures the fabric bending length and its bending rigidity. FAST-3 measures fabric extensibility at low loads as well as its shear rigidity. FAST-4 is a quick test for measuring fabric dimensional stability, including both relaxation shrinkage and hygral expansion. FAST is thus a system of objective measurement for assessing the appearance, handle and performance properties of fabrics, using an integrated set of instruments and test methods [17]. Fabric tailorability refers to the ease with which a fabric can be fashioned into a garment, and includes factors such as sewability, drape, setting, shape retention and wrinkle resistance.
2.4 Data Presentation
There are two different ways to scale each parameter on a radius of the radar chart. (a) Without normalization: the raw data of each parameter are adjusted, and the scale on each radius is chosen so that the maximum and minimum values of the parameter span the whole range of its radius. (b) With normalization: a more general way is to normalize all the raw data as shown in equation (1) [18]:

$$\bar X_j = \frac{X_{j\max} - X_j}{X_{j\max} - X_{j\min}} \quad (1)$$

where $X_j$ and $\bar X_j$ are the values of the j-th parameter before and after normalization, and $X_{j\max}$ and $X_{j\min}$ are the maximum and minimum values of this parameter. After normalization all parameters range from 0 to 1, so the chart becomes a unit circle. In this article, only the second method is used.
III. RESULTS AND DISCUSSION
3.1 Objective evaluation of feel, handle, appearance and tailorability of fabrics
Fabric hand, one of the most significant properties of a fabric, is assessed not only by manufacturers but also by consumers prior to purchase. Over the years, textile manufacturers have used several different methods to measure fabric hand. Prior to the development of the Kawabata Evaluation System of Fabric (KES-F) and the Fabric Assurance by Simple Testing method (FAST), fabric hand was evaluated subjectively by touch and feel. Both the KES-F and FAST systems use precise instruments that help manufacturers evaluate a product and maintain a desired hand. While these instruments provide valuable information, both are time-consuming and costly to run; furthermore, the data produced by the two methods are sometimes difficult to interpret. Several studies have therefore focused on ways of determining fabric hand with methods that are relatively simple, fast and less costly than current methods. El-Hadidy, Mosbah, M. and Abd-Allh, H. [16] published a study that evaluated fabric hand using the fabric draw force through a metal cone.
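The normalization of Eq. (1) used for the radar charts is plain min-max scaling; a short sketch (the function name is our own):

```python
def normalize(values):
    """Eq. (1): X_bar_j = (Xj_max - Xj) / (Xj_max - Xj_min).
    Maps each raw parameter onto [0, 1] so all radar-chart radii share a unit scale.
    Note the orientation: the largest raw value maps to 0 and the smallest to 1."""
    x_max, x_min = max(values), min(values)
    span = x_max - x_min
    return [(x_max - x) / span for x in values]
```

For example, normalize([2, 4, 6]) returns [1.0, 0.5, 0.0], so after this rescaling every parameter can be plotted on a unit-radius circle regardless of its original units.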
On the other hand, the relationship between fabric handle and the different types of fabric finishing (wet-processing and dry-processing parameters) still needs study. The effectiveness of a gas plasma treatment is governed by a variety of factors: x1 = the composition of the gas; x2 = the type of fabric; x3 = the pressure within the plasma chamber; x4 = the frequency and power of the electrical supply; and x5 = the temperature and duration of the treatment. In our study the variables x1, x2, x4 and the temperature were held constant. It was found that:
1- The stability of the gas plasma finish (as a function of fabric handle) is affected by both the treatment time (ranging from 2 min to 6 min) and the type of gas used (air, Ar, He, and N2): a- with N2 gas, stability improves with increasing time (from 0.234 to 0.228); b- with Ar gas, stability improves and then worsens with time (from 0.225 to 0.231); and c- with He gas, stability worsens with increasing time (from 0.235 to 0.237).
2- Fabric tailorability is affected by both processing time and gas type: i- it worsens as time increases from 2 min to 6 min with He gas (from 0.618 to 0.538); ii- it improves and then worsens from 2 min to 6 min with Ar gas (from 0.713 to 0.681); and iii- it worsens and then improves from 2 min to 6 min with N2 gas.
3- Plasma treatment with Ar gave better results (0.677) than treatment with air (0.599).
4- The best results were registered after mercerization plus gas plasma treatment, followed by fabric mercerization (0.610) and finally the grey fabric (0.384).
5- Figs. 4-8 show the effect of both fabric wet processing (mercerization) and fabric dry processing (non-polymerizing gas plasma treatment) on fabric tailorability, measured by Fabric Assurance by Simple Testing, FAST.
Fig. 4. Fabric tailorability parameters of the untreated cotton shirting fabric (sample 1).
Fig. 5. Fabric tailorability parameters after mercerization (280 g/L NaOH concentration, 25 °C, 20 sec).
Fig. 6. Fabric tailorability parameters after plasma treatment (air, 1500 volt, 1 min; sample No. 8, ranking 18).
Fig. 7. Fabric tailorability parameters after plasma treatment (Ar, 2000 volt, 40 min; sample No. 18, ranking 1).
(Radar-chart axes in Figs. 4-7: W (g/m2), T (mm), ST (mm), B (μN.m), E (%), F (mm2), G (N/m).)
The changes in these properties are believed to be closely related to the inter-fiber and inter-yarn frictional forces induced by the low-temperature plasma (LTP). The increase in the overall fabric tailorability index of the LTP-treated cotton fabric is probably due to the effect of the plasma action on the fabric surface morphology. The change in the tailorability properties of the LTP-treated finished cotton fabric was in good agreement with the above finding and can be attributed to the amount of air trapped between the yarns and fibers. In the evaluation of the low-stress mechanical properties of the plasma-treated fabric, the plasma treatment showed different effects on extensibility, bending rigidity, shear rigidity, surface thickness, and formability. However, the overall fabric tailorability index (equation No. 1) confirmed that the plasma treatment could alter the low-stress mechanical properties related to the fabric tailorability of the tested cotton fabrics. Of the examined samples, sample No. 1 shows the lowest value (total comfort value reaches zero), whereas sample No. 4 (0.899) shows the highest total fabric-skin comfort value. The fabric relative value reaches 0.169 with sample 4 and 0.038 with the grey fabric.
3.2.
Results of Fabric Performance Improvement
The effectiveness of the gas plasma treatment is governed by a variety of factors: x1 = the composition of the gas (argon); x2 = the type of textile (100% cotton fabric, plain weave 1/1, 123 g/m2); x3 = the pressure within the plasma chamber (0.187 torr); x4 = the frequency and power of the electrical supply (1500 and 2000 volt); and x5 = the temperature (25 °C) and duration of the treatment (40 min).
The following data (Table 2) are the results of fabric performance before and after mercerization, gas plasma, and mercerized-plus-gas-plasma treatment.
Table 2. Tested fabric performance results [17, 18].
Fabric sample | Grey | Mercerized | Gas plasma | Mercerized + gas plasma | % Imp.
1- Grey fabric | 0.384 | -- | -- | -- | 0
2- Mercerized | 0.384 | 0.601 | -- | -- | 56.5
3- Gas plasma | 0.384 | -- | 0.677 | -- | 76.3
4- Mercerized + gas plasma | 0.384 | -- | -- | 0.833 | 116.9
Table 2 shows that the percentage improvement in the tailorability index of the fabric treated with mercerization plus gas plasma reaches 116.9%, against a predicted improvement of 20%.
3.3 The Balanced Scorecard Concept and/or Demand Triangle
The Balanced Scorecard concept involves creating a set of measurements for four strategic perspectives: 1) financial, 2) customer, 3) internal business process, and 4) learning and growth. The idea is to develop between four and seven measurements for each perspective [19]. Nevertheless, the application of plasma treatments to textiles is still limited to technical products. Several explanations can be given. Correct application of plasma processes requires a good knowledge of the physical and chemical nature of plasmas, especially if the treatments have to be applied to different materials, as is the normal case for most textile small and medium enterprises (internal business process - innovation).
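The "% Imp." column of Table 2 above is simply the relative change of the overall fabric tailorability index from the grey-fabric baseline; it can be reproduced with a short calculation (the function and variable names are our own):

```python
def percent_improvement(before, after):
    """Relative improvement of the overall fabric tailorability index, in percent."""
    return (after - before) / before * 100.0

# Reproducing the % Imp. column of Table 2 from the grey-fabric baseline of 0.384:
baseline = 0.384
treated = {"mercerized": 0.601, "gas plasma": 0.677, "mercerized + gas plasma": 0.833}
improvements = {name: round(percent_improvement(baseline, value), 1)
                for name, value in treated.items()}
# improvements == {"mercerized": 56.5, "gas plasma": 76.3, "mercerized + gas plasma": 116.9}
```

These figures match the 56.5%, 76.3% and 116.9% entries reported in Table 2.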
Therefore, skilled labor is required, which is, however, not generally available either in textile or in textile machinery companies. Without the capacity to understand the nature of the problems that can occur and to adopt the relevant corrective actions, plasma treatments may lack reproducibility and give rise to disappointment and delusion. Moreover, the very wide variety of plasma technologies makes it difficult to decide which is the best solution to adopt (learning and growth: innovation). Two graphic illustrations appear below to help convey the idea [19]. In this study a similar benchmark is used, i.e., the Demand Triangle. The results of the Demand Triangle for all tested fabrics are given in Figs. 10-13, respectively. Fig. 9 shows the Balanced Scorecard concept [19], which has been discussed in detail in Ref. 19; in this work the Demand Triangle was used instead, with the results shown in Figs. 10-13.
Fig. 10: Balanced Scorecard results of the raw fabric.
Fig. 10 shows the Balanced Scorecard results as a demand triangle for the fabric without treatment, where OFTI = overall fabric tailorability index, RP = relative price (cost), and TCV = total comfort value. It is evident that the area of the resulting demand triangle is very small (the percentage improvement reaches zero).
Fig. 11: Balanced Scorecard results of the mercerized fabric.
As can be seen from Fig. 11, the area of the demand triangle is increased due to wet processing, and the percentage improvement reaches 56.5%.
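The percentage-improvement figures quoted for Figs. 10-13 follow directly from the tailorability index values reported in Table 2. A minimal sketch, using the index values from Table 2 (the helper name `pct_improvement` is illustrative, not from the paper):

```python
# Percentage improvement in the fabric tailorability index relative to the
# grey (untreated) fabric, using the index values reported in Table 2.
def pct_improvement(treated: float, grey: float) -> float:
    return round((treated - grey) / grey * 100, 1)

grey = 0.384
print(pct_improvement(0.601, grey))  # mercerized: 56.5
print(pct_improvement(0.677, grey))  # gas plasma: 76.3
print(pct_improvement(0.833, grey))  # mercerized + gas plasma: 116.9
```

The three computed values reproduce the "% Imp." column of Table 2 exactly.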
Fig. 12: Demand triangle results of the gas plasma treated fabric.
On the other hand, Fig. 12 indicates that the area of the demand triangle increases further, the percentage of improvement reaching 76.3%.
Fig. 13: Demand triangle results of both mercerization and plasma treatment of the shirting fabric.
It is clear from Fig. 13 that the non-polymerized gas plasma (Ar, 2000 v, and 2 second) plus fabric mercerization gives the best ranking (as a function of the area of the demand triangle), the percentage of improvement reaching the maximum value of 116.9%. As can be seen, the figures reveal one common feature, i.e. dry processing improves fabric tailorability properties [16].
IV. CONCLUSION
There are a number of different treatment methods which suit different products, but one of the most interesting aspects of plasma coating is its flexibility: it can be used to treat yarns, fabrics and even whole garments. The application of fabric finishing with both mercerization and gas plasma was investigated. It was found that the fabric tailorability parameters of light weight cotton fabrics finished with both mercerization and gas plasma together were superior to those mercerized only. Substitution of conventional finishing treatments with plasma treatments has much longer pay-off times, especially if water, energy and waste treatment costs are not exactly taken into account.
The FAST system may be used instead of SAM, AFM, capillary rise, and/or contact angle methods to evaluate the influence of plasma treatment on tested fabrics. The same holds for the Demand Triangle and Balanced Scorecard system.
REFERENCES
[1] Shishoo, R., Plasma Technologies for Textiles, Woodhead Publishing Ltd., The Textile Institute, Cambridge, England (2007).
[2] Federal Ministry of Education and Research, "Plasma Technology: Process Diversity & Sustainability", Germany (2001).
[3] d'Agostino, R., Favia, P., Oehr, C., and Wertheimer, M. R., "Low-Temperature Plasma Processing of Materials: Past, Present, and Future", Plasma Processes and Polymers (2005).
[4] Shyam Sundar, P., Prabhu, K. H., and Karthikeyan, N., "Fourth state treatment for textiles", www.fibre2fashion.com
[5] Radetic, M., Jovancic, P., Puac, N., and Petrovic, Z. L., "Environmental impact of plasma application to textiles", Journal of Physics: Conference Series 71 (2007).
[6] Moore, R., "Plasma surface functionalization of textiles", Nanotechnology and Smart Textiles (March 2008), www.acteco.org
[7] Marcandalli, B., and Riccardi, C., "Plasma treatments of fibers and textiles" (2006).
[8] Matthews, S. R., "Plasma Aided Finishing of Textile Materials", PhD dissertation, North Carolina State University, Fiber and Polymer Science (2005).
[9] Sparavigna, A., "Plasma treatment advantages for textiles", Physics (Popular Physics) (2008), http://arxiv.org/abs/0801.3727v1
[10] Ahmed, A., "Atmospheric plasma treatment for surface modification of fibre assemblies", Textile Research & Innovation Centre, Textile Institute of Pakistan (2008).
[11] TNO Defence, Security and Safety, "Examples of plasma enhanced textile modification", Dutch Ministry of Defense, Netherlands, www.tno.nl
[12] Wei, Q., Wang, H., Yang, Q., and Yu, L., "Functionalization of Textile Materials by Plasma Enhanced Modification", Journal of Industrial Textiles, Vol. 36, No. 4, 301, April 2007.
[13] http://www.sciencedaily.com/releases/.htm
[14] Hegemann, D., and Balazs, D. J., "Nano-scale treatment of textiles using plasma technology" (2006).
[15] El-Hadidy, M., Mesbah, M., and Abdelah, H., "The relationship between fabric sewability and fabric handle", MEJ, 2010.
[16] El-Hadidy, A., and El-Sisy, W. S., "Influence of plasma treatment on fabric tailorability", International Conference, Faculty of Applied Arts, Cairo, 8-10 Oct. 2012.
[17] El-Hadidy, A. M., "Tailorability Analyses of Value-added Fabric of Plasma Treatment of Apparel Fabrics", International Conference, Turkey (2013).
[18] El-Hadidy, A., Eid, R., and Abd-Elaziz, L., "Effects of plasma treatment in enhancing fabric tailorability of protective fabrics", Journal of Faculty of Home Economics, Shibean El-Kum, Oct. 2013.
[19] http://www.sciencedaily.com/releases/.htm, Balanced Scorecard.
American Journal of Engineering Research (AJER), e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-235-243, www.ajer.org. Research Paper. Open Access.
Effect of Trotros on Saturation Flow at Selected Signalized Intersections on the 24th February Road, Kumasi, Ghana
A. Obiri-Yeboah1*, A. S. Amoah1, P. C. Acquah1
1 Civil Engineering Department, Kumasi Polytechnic, P. O. Box 854, Kumasi, Ghana
Abstract:- Trotros constitute a good proportion of urban traffic on the 24th February Road, so their effect on saturation flow at signalized intersections could be substantial. This research studies and analyses the effect of Trotros on the saturation flow at selected signalised intersections using data collected along the route. A strong correlation was observed between the saturation flow measured using the headway method and the proportion of Trotros stopping per hour, suggesting that their presence impacts capacity significantly and should therefore be considered in the capacity analysis of signalized intersections. The effect of Trotros on saturation flow rate was incorporated in the Highway Capacity Manual (HCM) model by comparing the field saturation flow to the adjusted saturation flow using the HCM model. Results show that saturation flow calculated using the modified HCM equation is generally closer to observed saturation flow values.
Keywords: Capacity, Kumasi, Ghana, Saturation Flow, Traffic signals, Trotro
I. INTRODUCTION
The first traffic signal was installed in 1868, and it exploded. The first three-coloured light signals were installed in 1918, almost 50 years later, and since then traffic signals have been used throughout the world, using the three colour signals of green, red and amber [1]. Traffic signals have since become the most common form of traffic control measure used in urban areas of most countries.
It is a known fact that most developed countries have developed models, based on their own conditions, to analyse the capacity of signalised intersections. These models are best suited to those conditions, where flow is homogeneous and lane discipline is adhered to. In most developing countries like Ghana, Trotros are a major mode of transportation. The Trotro is a mini-van used as the main form of public transportation (see Fig. 1). In the city of Kumasi, Ghana, Trotros constitute about 40% of the total traffic volume, with cars making up 58% and trucks and others the remaining 2% [2]. Because of this high proportion of Trotros, urban traffic characteristics in developing countries are significantly different from those of developed countries, where mini-vans do not operate as commercial vehicles. These Trotros create a nuisance in the traffic stream by dropping off and picking up passengers at unapproved locations, sometimes within the trafficked lanes, thereby impeding the flow of other traffic upstream and creating a bottleneck within the system, as shown in Fig. 2. Lane changing and overtaking manoeuvres by these Trotros are also not the same as those built into the HCM capacity analysis model. It is therefore not possible to use that model directly, since it was developed under different driver and driving behaviours within similar traffic streams; hence the need to modify these models to suit prevailing local conditions [3].
Figure 1: Trotro loading at a Trotro station
Figure 2: Trotro loading within the traffic stream during green indication
Saturation flow rate is the basic parameter used to derive the capacity of signalized intersections. It is calculated based on the minimum headway that the lane group can sustain across the stop line. Several attempts have been made previously to model saturation flow.
One study also examined the effect of approach volume and an increasing percentage of bicycles on saturation flow, and showed that saturation flow increases with approach volume. A field survey was conducted by [4] to find saturation flow and to verify the saturation flow and traffic volume adjustment factors used in various capacity manuals throughout the United States at signalised intersections. Saturation flow headways for more than 20,000 observations were collected. Factors such as road geometry, traffic characteristics, environment and signal cycle length were considered to develop a series of modified adjustment factors for determining modified saturation flow rates when calculating signalised intersection capacity [4]. The HCM (2000) [5], developed by the Transportation Research Board (TRB), USA, includes a model (1) to calculate saturation flow rate considering the effect of various factors. It assigns an adjustment factor to each parameter, which can be calculated using empirical formulas proposed in the manual. These adjustment factors are multiplied by the base saturation flow So, taken to be 1900 passenger cars (pc) per hour of green time per lane (pcphgpl) for signalised intersections, to obtain the saturation flow rate S of the intersection approach:

S = So * n * fw * fHV * fg * fp * fbb * fa * fLU * fRT * fLT * fLpb * fRpb    (1)

where
S = saturation flow rate for the lane group in vehicles per hour of green;
So = ideal saturation flow rate in pcphgpl;
n = number of lanes in the lane group;
fw = adjustment factor for lane width;
fHV = adjustment factor for heavy vehicles;
fg = adjustment factor for approach grade;
fp = adjustment factor for parking characteristics;
fbb = adjustment factor for the blocking effect of local buses that halt within the intersection area;
fa = adjustment factor for area type (Central Business District or other areas);
fLU = adjustment factor for lane utilization;
fRT = adjustment factor for right turns in the lane group;
fLT = adjustment factor for left turns in the lane group;
fLpb = pedestrian-bicycle adjustment factor for left-turn movements; and
fRpb = pedestrian-bicycle adjustment factor for right-turn movements.
As can be seen from (1), the effect of vehicle type is considered only through the heavy vehicle adjustment factor, which is obtained using the following equation:

fHV = 100 / (100 + %HV * (ET - 1))    (2)

where %HV is the heavy vehicle percentage and ET is the passenger car equivalent of the corresponding heavy vehicle. The effect of the Trotros (loading and offloading within and around the trafficked lanes in the local setting) in mixed traffic conditions is not reflected. Attempts have been made to model the effects of mixed traffic flow on saturation flow. A probabilistic approach, based on the first-order second-moment method, was proposed to estimate saturation flow at signalized intersections under heterogeneous traffic conditions [6]. The authors compared the conventional method of estimating saturation flow, i.e. the headway method, with their newly proposed probabilistic approach, and found the probabilistic approach more appropriate for Indian conditions.
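The HCM 2000 adjustment chain in (1), together with the standard HCM heavy-vehicle factor fHV = 100 / (100 + %HV(ET - 1)) in (2), can be sketched as follows. The base value of 1900 pcphgpl is from the text; the two-lane approach, the 37% heavy-vehicle share, ET = 2.0, and leaving all other factors at 1.0 are illustrative assumptions, not field data:

```python
BASE_SAT_FLOW = 1900  # pcphgpl, HCM 2000 base saturation flow So

def heavy_vehicle_factor(pct_hv: float, e_t: float) -> float:
    # Eq. (2): fHV = 100 / (100 + %HV * (ET - 1))
    return 100.0 / (100.0 + pct_hv * (e_t - 1.0))

def saturation_flow(n_lanes: int, factors: dict) -> float:
    # Eq. (1): S = So * n * (product of all adjustment factors supplied)
    s = BASE_SAT_FLOW * n_lanes
    for f in factors.values():
        s *= f
    return s

# Illustrative two-lane approach with 37% heavy vehicles (assumed ET = 2.0)
# and every other adjustment factor left at 1.0 (an assumption).
f_hv = heavy_vehicle_factor(37.0, 2.0)
print(round(f_hv, 3))                             # 0.73
print(round(saturation_flow(2, {"f_HV": f_hv})))  # 2774 veh/h of green
```

Each additional factor from (1) would simply be another entry in the `factors` dictionary, multiplied into the same product.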
An analysis of the traffic characteristics and operations at signalised intersections in Dhaka, Bangladesh, concluded that different modelling approaches are needed to analyse saturation flow rates at the intersections of developing nations, and that the concept of the passenger car unit (PCU), widely used as a signal design parameter, is not applicable to mixed traffic comprising both motorised and non-motorised vehicles [7]. A new microscopic simulation technique, in which a co-ordinate approach to modelling vehicle location is adopted, has also been developed [8]. Based on these simulation results, an equation was developed to estimate saturation flow from influencing variables such as road width, turning proportion, and percentage of heavy and non-motorised vehicles. A simulation model, HETEROSIM, was proposed by [9] to estimate the saturation flow rate of heterogeneous traffic. Simulation results were used to study the effect of road width on saturation flow measured in passenger car units (PCU) per unit width of road. The impacts of different light-duty trucks (LDTs) on the capacity of signalized intersections have also been analysed [10], [11]. Simple regression models have also been developed to estimate saturation flow at signalized intersections carrying heterogeneous traffic [12]. Summarizing the review of past literature, it is clear that the model proposed by [5] can be adapted to developing countries after necessary calibration. Considering this, the objective of the current research is to study the impact of the Trotro category of vehicles on saturation flow rate and to modify the HCM 2000 model to suit Ghanaian conditions, incorporating the contribution of Trotros.
II. METHODOLOGY
2.1 Site Selection and Description
The signalized intersections were selected based on their past accident and safety records, the limited influence of other factors, and the levels of congestion associated with them. Fig.
3 shows the map of Kumasi. The selected intersections are marked with yellow circles and labelled accordingly. All selected intersections are located on the 24th February Road. Detailed descriptions are given in the subsequent sub-headings.
Figure 3: Map Showing Location of Study Sites
2.1.1 24th February/Bomso Road Intersection (Bomso Intersection)
The intersection of the Bomso and 24th February roads is signalised and is about 550 meters west of the KNUST junction. The intersection has four (4) legs, with one (1) approach/entry and exit lane on each leg of the minor roads (Bomso/Ayigya roads) and two (2) approach/entry and exit lanes on the 24th February road. It is the intersection of a principal arterial and collector roads, namely:
- 24th February Road: Principal Arterial
- Bomso Road: Collector Road
- Ayigya Road: Collector Road
On the approach from Adum there is a lay-bye where Trotros and taxis stop for passengers. The average lane width is 3.62 m, the median is 2.0 m, and the terrain is relatively flat. Roadside friction is mainly attributable to street hawking and transit activity on the two lay-byes on the approach from and exit to Adum. Traffic composition consists of 58% cars, 37% medium buses (Trotros) and 5% trucks. The layout is shown in Fig. 4.
Figure 4: General Layout of Bomso Signalized Intersection
2.1.2 24th February/Eastern Bypass Intersection (Anloga Intersection)
The Anloga intersection is a signalised intersection comprising three (3) principal arterials. It is about 2.6 km west of the KNUST junction.
The intersection has four (4) legs with the following configuration:
- East/West approaches: 24th February road, having two (2) approach through and exit lanes
- North-East approach: Okomfo Anokye road, having one (1) approach through lane and two (2) exit lanes
- South-East approach: Eastern By-Pass, having one (1) approach through lane and two (2) exit lanes
The intersection operates on a four-phase plan. It is characterized by a lot of roadside friction in the form of hawkers, pedestrians, lay-byes and the wood factory. The traffic composition consists of 58% cars, 39% medium buses (Trotros) and 3% trucks. Fig. 5 shows the general layout of the Anloga intersection.
Figure 5: General Layout of Anloga Signalized Intersection
2.1.3 24th February/Yaa Asantewaa Road Intersection (Amakom Intersection)
The Amakom traffic light, formerly the Amakom roundabout (it was a roundabout before being changed to a signalized intersection), is a four-legged signalised intersection about 4 km west of the KNUST junction. The intersection has four (4) legs, with one (1) approach/entry and exit through lane on each leg of the minor road (Yaa Asantewaa road) and two (2) approach/entry and exit through lanes on the 24th February road. It is the intersection of a principal arterial and a collector road:
- 24th February Road: Principal Arterial, and
- Yaa Asantewaa Road: Collector Road
The intersection operates on a four-phase plan. Traffic composition consists of 62% cars, 35% medium buses (Trotros) and 2% trucks. The layout is as shown in Fig. 6.
Figure 6: General Layout of Amakom Signalized Intersection
2.2 Saturation Flow Rate Measurement
Saturation flow rate is defined as the maximum discharge rate during green time. It is expressed either in passenger car units (pcu)/hour or vehicles/hour. Period- and direction-wise classified traffic volume is necessary to calculate the saturation flow for a particular lane group. The headway method was used to measure field saturation flow rates. The theoretical saturation flow rate was also calculated using the HCM 2000 model. Correlations between the measured saturation flow, the number of Trotros stopping per hour, and the approach volume were calculated. Theoretical and measured saturation flow rates are compared; if found comparable within acceptable error limits, it can be concluded that the HCM 2000 model is good enough for local conditions, and the process ends. If not, factors to be considered for calibrating the HCM 2000 model are identified. New adjustment factors are then derived, and the modified HCM 2000 model is validated for local Ghanaian conditions by comparing the modified theoretical saturation flow with the measured saturation flow. The results of the turning movement counts done at the selected signalized intersections are given in Table I. Saturation flow measurements were done only for the through movement.
2.3 Videotaping Traffic at the Selected Signalized Intersections
A video camcorder was used to record traffic flow data at the selected signalized intersections. The data required for the saturation flow analysis were extracted from the video recordings. Calibration data were then compared with simulated results from the field, and finally the model was calibrated by introducing a new adjustment factor fT.
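The headway method referred to above estimates the field saturation flow as 3600 s divided by the mean saturated headway across the stop line, measured after the first few queued vehicles (which carry start-up lost time) have discharged. A minimal sketch; the headway readings and the choice to skip the first four vehicles are illustrative assumptions:

```python
# Field saturation flow from the headway method: once a standing queue is
# discharging steadily, headways stabilise at the saturation headway h,
# and the saturation flow is S = 3600 / h vehicles per hour of green.
def saturation_flow_from_headways(headways_s, skip_first=4):
    saturated = headways_s[skip_first:]  # discard start-up lost time
    h = sum(saturated) / len(saturated)  # mean saturated headway (s)
    return 3600.0 / h

# Hypothetical stop-line headways (seconds) for one green phase.
headways = [3.8, 3.1, 2.9, 2.6, 2.4, 2.5, 2.4, 2.3, 2.4]
print(round(saturation_flow_from_headways(headways)))  # 1500 veh/h of green
```

With a mean saturated headway of 2.4 s this gives 1500 veh/h of green, the same order as the field values reported in Table I.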
All the signals identified operate as pre-timed signals.
III. RESULTS AND DISCUSSION
Table I shows a summary of the turning movement data at the selected study sites, together with the saturation flow rates measured using the headway method and those from the HCM 2000.

Table I: Summary of Traffic Data and Saturation Flow Rates
Intersection  Direction from  Movement  Volume  Volume of Trotros  Field Saturation Flow  Adjusted Saturation Flow
Bomso         KNUST           T         1412    523                1324                   1650
                              L          107     40
                              R           40     15
              Adum            T         1262    467                1353                   1713
                              L          139     52
                              R          202     75
Anloga        KNUST           T         1718    671                1417                   1725
                              L          156     61
                              R          563    220
              Adum            T         1447    565                 942                   1710
                              L          440    172
                              R           86     34
Amakom        KNUST           T         1151    415                1895                   1792
                              L           73     27
                              R          381    138
              Adum            T         1095    395                1525                   1793
                              L          175     63
                              R          287    104
Source: From Study

A correlation between the measured saturation flow and the volume of Trotros yields a strong negative correlation (-0.52), indicating that the saturation flow decreases with an increase in the percentage of these mini buses. A further correlation was performed between the number of stops made by the Trotros interfering with the flow of traffic per hour (Table II) and the measured saturation flow. This yields a very strong negative correlation of -0.74, implying that it is the stopping effect of the buses, rather than their mere presence, that causes the reduction in saturation flow rate. The negative correlations also mean that there is an inverse relationship. This confirms the relationship in the HCM 2000 quoted in (2) above. The heavy vehicle factor is therefore modified as in (3) below to give the Trotro adjustment factor.
(3)
where NT is the number of Trotros in the traffic stream per hour, ST is the number of Trotros stopping per hour, ET is the passenger car equivalent for Trotros, and fT is the Trotro adjustment factor.
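The two reported correlations can be reproduced from the through-movement data: the field saturation flows and Trotro volumes in Table I and the stop counts in Table II. A minimal sketch (the `pearson` helper is a standard Pearson correlation, not from the paper):

```python
import math

# Pearson product-moment correlation coefficient.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Through movements only, ordered Bomso/Anloga/Amakom, KNUST then Adum.
flow = [1324, 1353, 1417, 942, 1895, 1525]       # field saturation flow
trotro_volume = [523, 467, 671, 565, 415, 395]   # Trotro volumes (Table I)
stops = [146, 165, 283, 312, 98, 97]             # Trotros stopping/hour (Table II)

print(round(pearson(flow, trotro_volume), 2))  # -0.52
print(round(pearson(flow, stops), 2))          # -0.74
```

Both values match the paper's figures, and the stronger correlation with stop counts supports the conclusion that the stopping behaviour, not the mere presence of Trotros, drives the capacity loss.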
Table II: Saturation Flow and Number of Trotros Stopping per Hour
Intersection  Direction from  Movement  Field Saturation Flow  Adjusted Saturation Flow  No. of Trotros stopping per hour
Bomso         KNUST           T         1324                   1650                      146
              Adum            T         1353                   1713                      165
Anloga        KNUST           T         1417                   1725                      283
              Adum            T          942                   1710                      312
Amakom        KNUST           T         1895                   1792                       98
              Adum            T         1525                   1793                       97
Source: From Study

Substituting the values from Tables I and II into equation (3) results in Table III; the last column is the new adjusted saturation flow rate incorporating the new Trotro adjustment factor.

Table III: The Trotro Adjustment Factor
Intersection  Direction from  Movement  %St  fT    New adjusted saturation flow rate
Bomso         KNUST           T         28   0.89  1463
              Adum            T         35   0.85  1450
Anloga        KNUST           T         42   0.87  1499
              Adum            T         55   0.81  1385
Amakom        KNUST           T         24   0.88  1577
              Adum            T         25   0.87  1560
Source: From Study

The field saturation flow was then compared to the new adjusted saturation flow rate.

Table IV: Comparison of Field Saturation Flow Rates and New Adjusted Saturation Flow Rates
Intersection  Direction from  Movement  Field Saturation Flow  New adjusted saturation flow  Error (%)
Bomso         KNUST           T         1324                   1463                            9
              Adum            T         1353                   1450                            7
Anloga        KNUST           T         1417                   1499                            5
              Adum            T          942                   1385                           32
Amakom        KNUST           T         1895                   1577                          -20
              Adum            T         1525                   1560                            2
Source: From Study

Table IV is translated into Figure 7 to show the error bars. The bars represent the standard deviation and the error margins associated with this distribution. The new adjusted saturation flow rates fall within acceptable error limits.
Figure 7: A Plot of Field and Modified Saturation Flow Showing Error Bars
In predicting saturation flow, [13] attributed estimation errors to three primary sources: the temporal variance in saturation flow predictions related to saturation flow models, the omission of certain capacity factors in predictive models and, lastly, an inadequate functional relationship between model variables and saturation flow rates.
He further admits that there is a considerable standard error of prediction, reaching between 8 and 10%. From Table IV above it can be seen that all intersections, except the Anloga approach from Adum, fall within the acceptable error margin. This could be because the fuel stations, the Trotro station and the lay-bye located just ahead of the intersection interfere considerably with the flow of traffic; this should be examined in future work.
IV. CONCLUSIONS
It can be concluded from the study that there is a relationship between the percentage of Trotros in the travel stream and the number that stop to pick up and drop passengers around the intersection approaches. Saturation flow is inversely related to the percentage of Trotros within the traffic stream and to the number of stops made by Trotros around the intersection approaches, and the new Trotro adjustment factor can be incorporated into the HCM 2000 model to better predict saturation flows. It was observed at one intersection that there is a significant reduction in the saturation flow rate even after adjusting for the effect of Trotros; this was attributed to the roadside friction from a fuel station, a lay-bye and a Trotro station just ahead of the intersection approach. Further research is therefore recommended in this area.
ACKNOWLEDGEMENT
The authors would like to acknowledge the staff of Daasaf Productions Limited for the video filming of traffic flow data at the selected intersections. The help and support of the staff of the Civil Engineering Department of Kumasi Polytechnic is well appreciated.
REFERENCES
[1] Transport Research Laboratory, The Use of Traffic Signals in Developing Cities, Overseas Road Note 13, Crowthorne, Berkshire, United Kingdom, 1996.
[2] A. A. Nyarko, Capacity and Saturation Flows at Signalized Intersections in Kumasi.
Masters diss., Department of Civil Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana, 2006.
[3] Transportation Research Board, Highway Capacity Manual 2010, National Research Council, Washington, D.C., United States of America.
[4] J. D. Zegeer, Field Validation of Intersection Capacity Factors, Transportation Research Record No. 1091, 1986, 67-77.
[5] Transportation Research Board, Highway Capacity Manual 2000, National Research Council, Washington, D.C., United States of America.
[6] V. T. Arasan and K. Jagadeesh, Effect of Heterogeneity of Traffic on Delay at Signalized Intersections, Journal of Transportation Engineering, ASCE, Vol. 121, No. 5, 1995, 397-40
[7] M. Hossain, Estimation of Saturation Flow at Signalized Intersections of Developing Cities: A Microsimulation Modelling Approach, Transportation Research Part A 35, 2001, 123-141.
[8] C. S. Anusha, A. Verma, and G. Kavitha, Effects of Two-Wheelers on Saturation Flow at Signalized Intersections in Developing Countries, Journal of Transportation Engineering, ASCE, doi:10.1061/(ASCE)TE.1943-5436.0000519, 2012.
[9] V. T. Arasan and P. Vedagiri, Estimation of Saturation Flow of Heterogeneous Traffic using Computer Simulation, Proceedings of the 20th European Conference on Modelling and Simulation, ECMS, Bonn, Germany, 2006.
[10] K. M. Kockelman and R. A. Shabih, Effect of Light Duty Vehicles on Signalized Intersections, Journal of Transportation Engineering, ASCE, Vol. 126, No. 6, 2000, 506-51
[11] K. M. Kockelman and R. A. Shabih, Effect of Vehicle Type on the Capacity of Signalized Intersections: The Case of Light Trucks, Department of Civil Engineering, University of Texas at Austin, 1999.
[12] R. G. Patil, R. V. K. Roa and M. S. N. Xu, Saturation Flow Estimation at Signalized Intersections in Developing Countries, Proc. 86th Annual Meeting, CD-ROM, TRB, Washington, D.C., 2007, 1-23.
[13] M.
Tracz and A. Tarko, Uncertainty in Saturation Flow Predictions, Unpublished.
American Journal of Engineering Research (AJER), e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-171-174, www.ajer.org. Research Paper. Open Access.
Influence of Neem Seed Husk Ash on the Tensile Strength of Concrete
Nuruddeen Muhammad Musa (Civil Engineering Department, Kano University of Science and Technology, Wudil, Kano, Nigeria)
Abstract: - This paper presents the influence of Neem seed husk ash (NSHA) on the tensile strength of concrete. Neem seed husk is a by-product obtained during the industrial processing of Neem seed to extract oil and produce fertiliser. Compressive strength, flexural strength and splitting tensile strength tests were carried out on concrete in which cement was partially replaced with 0%, 5%, 10%, 15%, 20% and 25% NSHA. Test results show that compressive strength decreases with increasing NSHA replacement level, while NSHA improves the flexural strength of the concrete. Results also show that 5%, 10% and 20% NSHA in the concrete improve the splitting tensile strength. An attempt has also been made to obtain a relationship between the compressive strength, flexural strength and splitting tensile strength.
Keywords: - Compressive strength, flexural strength, Neem seed husk ash, splitting tensile strength.
I. INTRODUCTION
Concrete is good in compression and poor in tension [1]. Understanding the response mechanisms of concrete under tensile conditions is key to understanding and using concrete in structural applications, especially in determining concrete's resistance to cracking. There is currently no well standardized test procedure for determining the direct tensile strength of concrete, that is, the strength under uniaxial tension. This is due to the difficulty involved in inducing pure axial tension within a specimen without introducing localized stress concentrations [2]. Therefore, several test procedures have been developed to indicate the tensile strength of concrete indirectly.
These include the Test Method for Flexural Strength of Concrete (Using Simple Beam with Third-Point Loading), the Test Method for Flexural Strength of Concrete (Using Simple Beam with Center-Point Loading) and the Test Method for Splitting Tensile Strength of Cylindrical Concrete Specimens [2]. Neem seed husk ash (NSHA) is obtained by burning the waste husk produced during the extraction of oil from Neem seed. The possibility of partially replacing cement with NSHA for use in low-cost construction has been shown by [3]. The present investigation aims to determine the influence of Neem seed husk ash on the splitting and flexural tensile strengths of concrete, and the corresponding compressive strengths.
II. MATERIALS AND METHOD
(i) Materials
Dangote Ordinary Portland cement was used in this study. The cement has a specific gravity of 3.14, with an initial setting time of 155 minutes and a final setting time of 208 minutes. Locally available sand was used as fine aggregate; its specific gravity was determined to be 2.55. Locally available crushed stone aggregate of maximum size 20 mm was used as coarse aggregate; its specific gravity is 2.75. The Neem seed husk used was obtained from a Neem fertiliser processing plant; it was dried and burned in open air, after which it was calcined in an oven at a temperature of 600 C to produce the ash (NSHA).
(ii) Mix Proportion
In this study, a concrete mix with a target compressive strength of 25 N/mm2 at 28 days was designed using the absolute volume mix design method. Binders were prepared by partially replacing cement with various percentages of Neem seed husk ash (NSHA) by weight: 0%, 5%, 10%, 15%, 20% and 25%, with 0% as the control specimen. The binders were then mixed with the aggregates and water in accordance with the mix design proportions to form NSHA concrete.
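The absolute volume method mentioned above proportions a mix so that the component volumes sum to 1 m3 of concrete, with each volume computed as mass / (SG x 1000). The specific gravities below are the measured values from the materials tests; the cement content, water-cement ratio, air content and fine/coarse split are illustrative assumptions, not the paper's actual mix:

```python
# Absolute volume mix design sketch: each component occupies
# mass / (SG * 1000) m3, and all volumes (plus entrapped air) sum to 1 m3.
SG_CEMENT, SG_FINE, SG_COARSE = 3.14, 2.55, 2.75  # measured in this study

def absolute_volume_mix(cement_kg=350.0, w_c_ratio=0.5,
                        air_fraction=0.02, fine_fraction=0.40):
    water_kg = cement_kg * w_c_ratio
    v_cement = cement_kg / (SG_CEMENT * 1000.0)
    v_water = water_kg / 1000.0  # SG of water = 1.0
    # Aggregates fill whatever volume remains after paste and air.
    v_agg = 1.0 - v_cement - v_water - air_fraction
    fine_kg = v_agg * fine_fraction * SG_FINE * 1000.0
    coarse_kg = v_agg * (1 - fine_fraction) * SG_COARSE * 1000.0
    return {"cement": cement_kg, "water": water_kg,
            "fine": round(fine_kg), "coarse": round(coarse_kg)}

print(absolute_volume_mix())  # kg of each component per m3 of concrete
```

With the assumed 350 kg of cement and w/c of 0.5, the remaining 0.694 m3 of aggregate volume yields roughly 707 kg of fine and 1144 kg of coarse aggregate per cubic metre.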
(iii) Tests on Concrete Compressive strength tests were carried out on concrete with 0%, 5%, 10%, 15%, 20% and 25% NSHA, using an iron mould of size 150 x 150 x 150 mm. Specimens were tested after 28 days in accordance with [5]. Flexural strength tests of concrete with 0%, 5%, 10%, 15%, 20% and 25% NSHA replacement were carried out on rectangular beams measuring 150 mm x 150 mm in cross-section and 450 mm in length. The beams were cast and cured for 28 days before being tested using the three-point loading arrangement specified in [6]. Splitting tensile tests of concrete with 0%, 5%, 10%, 15%, 20% and 25% NSHA replacement were carried out after 28 days in accordance with [7], on cylinders measuring 150 mm in diameter and 300 mm in length. All specimens were cured under water at room temperature until testing. Each strength value is the average of the strengths of three specimens. III. RESULT AND DISCUSSION For all the concrete mixes, compressive, split tensile and flexural strengths were determined at the end of 28 days. Fig. 1 shows the variation of compressive strength with NSHA replacement level. The compressive strength decreases with NSHA content. The variation shows a linear relationship as expressed in Eq. (1): fcu = -0.9837R + 27.111 (R² = 0.9468) (1) where fcu and R denote the 28-day compressive strength and the NSHA replacement, expressed in N/mm² and % respectively. From Fig. 1, only the 0% and 5% replacements satisfied the target design strength of 25 N/mm². However, all the samples attained a compressive strength of 20 N/mm² at 28 days; therefore they can be used for non-structural and mass concrete applications. Fig. 1: Compressive strength against percentage replacement of Neem seed husk ash (i) Flexural Strength Fig. 2 shows the variation of flexural strength with NSHA replacement level. It can be seen that the addition of NSHA improves the flexural strength of the concrete.
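The linear fit in Eq. (1) can be evaluated directly. The short Python sketch below tabulates the compressive strengths predicted by Eq. (1) at the replacement levels tested; the function name `predicted_fcu` is illustrative, not from the paper.

```python
# Sketch: evaluating the fitted relationship of Eq. (1),
# fcu = -0.9837*R + 27.111, at the NSHA replacement levels tested.
# Coefficients are the paper's regression values.

def predicted_fcu(replacement_pct):
    """28-day compressive strength (N/mm^2) predicted by Eq. (1)."""
    return -0.9837 * replacement_pct + 27.111

for r in (0, 5, 10, 15, 20, 25):
    print(f"{r:>2}% NSHA -> {predicted_fcu(r):.2f} N/mm^2")
```

Note that the fitted line slightly underpredicts the measured 5% result (which met the 25 N/mm² target), as expected for a least-squares fit.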
The increase in strength could be due to improvement of the interfacial zone between the paste and the aggregate in the presence of NSHA. The 5% replacement appears to have the highest flexural strength. Fig. 2: Flexural strength against percentage replacement of NSHA (ii) Relationship between Flexural and Compressive Strength Bhanja and Sengupta (2005) observed that no single equation seems to represent the flexural tensile strength with sufficient accuracy; therefore measured values should be used instead of predicted ones. In order to establish a potential relationship between flexural and compressive strength in this study, Eq. (2) was obtained from the test data: f(fs) = 3.9312fcu^(-0.037) (R² = 0.002) (2) where f(fs) and fcu denote the flexural strength and compressive strength respectively, expressed in N/mm². The coefficient of determination, R², was obtained between the test data and the regression equation. It is a measure of the portion of the total variability of the test data explained by the particular equation [8]. When R² is unity, all data points lie exactly on the regression equation, and a value of zero signifies no correlation between the data points and the regression equation. Therefore, statistically, there is little correlation between the flexural and compressive strength of concrete containing NSHA. (iv) Splitting Tensile Strength Figure 3 shows the effect of NSHA replacement on splitting tensile strength. There is an increase in splitting tensile strength from 0% replacement to 5% replacement, an increase of 11%. The strength further increases at 10% replacement, to 18% higher than the control. The strength then reduces to 4% lower than the control at 15% replacement. At 20% replacement the strength is 3% higher than the control. At 25% replacement the strength is 8% lower than the control.
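The coefficient of determination quoted alongside Eqs. (1) and (2) can be computed as shown below. This is a generic sketch of the R² calculation; the observed data points are illustrative placeholders, not the paper's measurements.

```python
# Sketch: computing the coefficient of determination R^2 between
# observed values and a regression's predictions, as used to judge
# Eq. (1) and Eq. (2). The observed values here are illustrative.

def r_squared(observed, predicted):
    """R^2 = 1 - SS_res / SS_tot."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Illustrative "measured" strengths close to the Eq. (1) line:
obs = [27.1, 22.5, 17.8, 12.4, 7.9, 2.8]
pred = [-0.9837 * r + 27.111 for r in (0, 5, 10, 15, 20, 25)]
print(round(r_squared(obs, pred), 4))
```

A value near unity (as for Eq. (1)) means the regression explains almost all of the variability; a value near zero (as for Eq. (2)) means it explains almost none.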
Generally, 5%, 10% and 20% NSHA replacement increase the splitting tensile strength, with 10% replacement giving the highest strength. Fig. 3: Splitting tensile strength against percentage replacement of Neem seed husk ash (v) Relationship between Splitting Tensile Strength and Compressive Strength Usually, the ratio of splitting tensile strength to compressive strength ranges from about 0.06 to 0.20 [9]. The ratios of splitting tensile strength to compressive strength were determined to be 0.13, 0.15, 0.17, 0.14, 0.16 and 0.14 for 0%, 5%, 10%, 15%, 20% and 25% replacement respectively. Compared with the recommended range of 0.06 to 0.20, all the ratios are within the recommended limits. Furthermore, Fig. 4 shows a relationship between splitting tensile strength and compressive strength obtained using regression analysis. Fig. 4: Relationship between splitting tensile strength and compressive strength (vi) Relationship between Flexural Strength and Splitting Tensile Strength Based on the test data, there is a weak linear relationship between the flexural and splitting tensile strengths, as shown in Figure 5. Fig. 5: Relationship between flexural and splitting tensile strength IV. CONCLUSION Based on the experimental results and discussions, the following conclusions can be drawn: 1. The compressive strength decreases with Neem seed husk ash (NSHA) content, with the 5% replacement level being the optimum at 28 days. 2. NSHA improves the flexural strength of the concrete. The increase in strength could be due to improvement of the interfacial zone between the paste and the aggregate in the presence of NSHA. Statistically, there is a weak relationship between the flexural and compressive strength of concrete containing NSHA. 3. Test results show that 5%, 10% and 20% NSHA in the concrete increase the splitting tensile strength. 4.
No strong correlation is obtained between the flexural and splitting tensile strength test data in this study. REFERENCES
[1] Selim, P. (2008), "Experimental investigation of tensile behavior of high strength concrete", Indian Journal of Engineering & Material Sciences, Vol. 15, December 2008, pp. 467-472.
[2] Ozyildirim, C. and Carino, N. J. (2006), "Concrete strength testing", Chapter 13 of Significance of Tests and Properties of Concrete and Concrete-Making Materials, STP 169D, edited by Joseph F. Lamond and James H. Pielert, ASTM Stock No. STP169D, ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959.
[3] Nuruddeen, M. M. and Ejeh, S. P., "Synergic Effect of Neem Seed Husk Ash on Strength Properties of Cement-Sand Mortar", International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 5, 2012, pp. 027-030.
[4] BS 1881-116:1983, "Testing Concrete: Part 116. Method for determination of compressive strength of concrete cubes", British Standards Institution, 389 Chiswick High Road, London, W4 4AL, http://www.bsi-global.com/.
[5] BS 1881-118:1983, "Testing Concrete: Part 118. Method for determination of flexural strength", British Standards Institution, 389 Chiswick High Road, London, W4 4AL, http://www.bsi-global.com/.
[6] BS 1881-117:1983, "Testing Concrete: Part 117. Method for determination of tensile splitting strength", British Standards Institution, 389 Chiswick High Road, London, W4 4AL, http://www.bsi-global.com/.
[7] Bhanja, S. and Sengupta, B. (2005), "Influence of silica fume on the tensile strength of concrete", Cement and Concrete Research, 35 (2005), pp. 743-747.
[8] Choi, Y. and Kang, M. (2003), "The relationship between splitting tensile strength and compressive strength of fiber reinforced concretes", Journal of the Korea Concrete Institute, Vol. 15, No. 1, February 2003, pp. 155-161.
[9] Avram, C., Facaoaru, I., Mirsu, O., Filimon, I. and Tertea, I. (1981), "Concrete Strength and Strains", Elsevier Scientific Publishing Company, pp. 105-133, 249-251.
American Journal of Engineering Research (AJER) e-ISSN : 2320-0847 p-ISSN : 2320-0936 Volume-02, Issue-12, pp-303-312 www.ajer.org Research Paper Open Access Production and Performance Evaluation of Bioethanol Fuel from Groundnut Shell Waste Nyachaka C.J.1, Yawas D.S.2, Pam G.Y.3 1. Postgraduate Student, Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Nigeria 2. Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Nigeria 3. Department of Mechanical Engineering, Ahmadu Bello University, Zaria, Nigeria Abstract: - This paper examines the feasibility of bioethanol production from groundnut shells as an important sustainable alternative source of biofuel in Nigeria. An experimental attempt has been made to determine the variation of exhaust emissions, with a view to minimising the emission of greenhouse gases in line with the Kyoto Protocol. Groundnut shells were hydrolysed to produce 1.56 mg/ml and 1.09 mg/ml reducing sugar concentrations on the first day of batches one and two respectively. Ethanol yield was 6.2 millilitres and 7.9 millilitres on the first and seventh days of batch one from 420 g of substrate. The highest brake power, volumetric efficiency and torque of 9623.40 W, 18.09% and 62.35 Nm were recorded for the E40% ethanol/gasoline blend. The lowest brake power and brake mean effective pressure, of 6385.70 W and 8.02 bar respectively, were recorded for sole gasoline. Exhaust emissions of carbon monoxide (CO) and nitrogen oxides (NOx) decreased greatly as the percentage of ethanol in the blends increased, both at the point source and at a 3 metre distance. Keywords: - Groundnut shell, fermentation, ethanol, emissions, gasoline engine I. INTRODUCTION A challenge that humanity must take seriously is to limit and decrease the greenhouse effect caused by various human activities. A major contributor to the greenhouse effect is the transport sector, due to the heavy, and increasing, traffic levels.
In spite of ongoing activity to promote efficiency, the sector is still generating significant increases in CO2 emissions. As transport levels are expected to rise substantially, especially in developing countries, fairly drastic political decisions may have to be taken to address this problem in the future. Furthermore, the dwindling supply of petroleum fuels will sooner or later become a limiting factor. Groundnut shell (GS), a residue left after separation of the pod, is available in copious amounts worldwide. The crop residue is of low economic value and is generally burned, used as a fuel source in gasifiers, or sometimes applied as manure to improve soil conditions. The residue contains 54.4% total carbohydrate (dry weight) in its cell wall (Raveendran et al., 1995), which makes it an appropriate substrate for bioconversion to fuel ethanol. II. BIOMASS RESOURCES IN NIGERIA Biomass resources in the country include agricultural crops, wood, charcoal, grasses and shrubs, residues and wastes (agricultural, forestry, municipal and industrial), and aquatic biomass. The total biomass potential in Nigeria, consisting of animal and agricultural waste and wood residues, was estimated to be 1.2 PJ in 1990 (Obioh and Fagbenle, 2004). In 2005, research put the bio-energy reserves/potential of Nigeria at: fuel wood, 13,071,464 hectares; animal waste, 61 million tonnes per year; and crop residues, 83 million tonnes (Agba et al., 2010). 2.1 Biofuel Potential in Nigeria Biofuels can be broadly defined as solid, liquid or gaseous fuels consisting of or derived from biomass. At the moment the potential crops for biofuel production in the country are cassava, sugar cane, rice and sweet sorghum for bioethanol, and palm oil, groundnut and palm kernel for biodiesel, because of their high yields and current production output in the country.
Nigeria is the largest producer of cassava in the world and has the largest capacity for oil palm plantation, which serves as a great source for biodiesel (Abiodun, 2007). It is interesting to mention that Nigeria could also be a major player in the biofuel industry, given the enormous magnitude of the various wastes/residues (agricultural, forestry, industrial and municipal solid) available in the country. Biofuel may be of special interest in many other developing countries like Nigeria for several reasons. The climates of many of these countries are well suited to growing biomass. Biomass production is inherently rural and labour-intensive, and thus may offer the prospect of new employment in regions where the majority of the population typically resides. Abila (2010) classified Nigeria as one of the countries with very high potential for energy crop production. 2.2 GROUNDNUT The peanut, or groundnut (Arachis hypogaea), is a species in the legume or "bean" family (Fabaceae). The peanut was probably first cultivated in the valleys of Peru. It is an annual herbaceous plant growing 30 to 50 cm (1.0 to 1.6 ft) tall (www.eurekalert.org/pub). Groundnuts were introduced to Nigeria in the 16th century, and the crop is extensively grown in West Africa and from Sudan to South Africa. It has been estimated that about 22.299 million hectares of land are annually planted with groundnuts. In Africa, groundnuts in shell were grown on 7.46 million hectares with a total production of 5.794 million tonnes (FAO, 1980). 2.2.1 Global Situation and Potential Bioethanol Production from Groundnut Shell In Nigeria, 1.486 million tonnes of groundnut in shell was estimated from 1.61 million hectares of land (FAO, 1995). In Zaria, SAMNUT-38 was developed at the Institute for Agricultural Research, Samaru, as a selection from Virginia bunch, with a 130-150 day maturity period and a potential yield of 2,500-3,000 kg/ha.
It has a large seed, high oil content, is Rosette-susceptible and Leaf Spot-susceptible, and has a large shell, with excellent adaptation to the Northern and Southern Guinea savannas. III. MATERIALS AND METHODS 3.1 MATERIALS 3.1.1 Groundnut Shell Groundnut shell waste of the SAMNUT-38 variety, a cheap and readily available source of lignocellulose remaining after separation of pod and grain, was obtained from the Institute for Agricultural Research, Samaru, Zaria, and taken to the Microbiology Laboratory, Ahmadu Bello University, Zaria, Nigeria. 3.2 METHODOLOGY 3.2.1 Pre-treatment of Lignocellulosic Source The substrates were washed with distilled water and dried for three days at 60 °C in a Memmert hot-air oven to reduce the moisture content and make them more susceptible to milling. The substrates were milled with a mortar and pestle and sieved to pass through a 2.2 mm mesh sieve. 1500 g of each was weighed; the samples were then soaked in 1% (w/v) sodium hydroxide solution (substrate + solution) for 2 hours at room temperature, after which they were washed with distilled water and dilute HCl until the wash water was brought to a neutral pH, free of the chemicals, and then set in a Memmert oven (Model UE-500, DIN 12880) overnight at 60 °C to dry. The NaOH pretreatment was repeated for each sample according to Amadi (2004). 3.2.2 Collection of Producer Microorganisms Pure culture strains of Aspergillus niger and Saccharomyces cerevisiae isolates were provided by the Department of Microbiology, Ahmadu Bello University, Zaria, Nigeria, and used for the study. The organisms were maintained as direct stock cultures from which inocula were prepared. The fungal species A. niger and S. cerevisiae were originally isolated from soil samples and palm wine respectively. The slant cultures were subcultured and grown on potato dextrose agar (PDA) in Petri dishes according to the manufacturer's specification, sterilized at 121 °C for 15 min, and the samples incubated at room temperature for 5 days.
The microscopic features of pure grown colonies were observed and identified according to the procedure described by Bailey et al. (2004). 3.2.3 Inoculum Preparation The organisms were grown on malt extract agar slants at 30 °C for 5 days and stored at 4 °C with regular sub-culturing. 150 ml of inoculum was prepared for each culture using 5 g glucose, 10 g peptone and 5 g yeast extract in 1000 ml distilled water. The inoculum was shaken continuously on an environment-controlled incubator shaker (Model 3527-1/34) at 200 rpm and 34 °C for 48 h before it was used for the fermentation process (Bailey et al., 2004). 3.2.4 Preparation of Fermentation Medium The fermentation medium used for ethanol production consisted of glucose 8% (w/v), peptone 0.1% (w/v), malt extract 0.1% (w/v), yeast extract 0.2% (w/v), magnesium chloride 0.01% (w/v), calcium carbonate 0.2% (w/v), ammonium sulphate 0.2% (w/v) and ferrous sulphate 0.001% (w/v). 2000 ml of medium culture was prepared and 300 ml dispensed into each 500 ml Erlenmeyer flask. The flasks were sterilized in an autoclave (Model Astell ASB 300) at 121 °C for 15 minutes and inoculated with 15 ml and 4 ml of growth inocula containing S. cerevisiae cells and 2 million A. niger spores respectively (Abouxied and Reddy, 1986). The flasks were incubated on an orbital shaker (Model Vineland NJ SH2-526) with an initial agitation rate of 300 rpm at 30 °C for seven days, with samples withdrawn at 24-hour intervals for distillation. 3.2.5 Determination of Density and Specific Gravity A digital electronic balance (Model FA2004) at the Old Chemical Engineering Analysis Laboratory was used. The densities and specific gravities of the solutions were determined using standard procedures and the results recorded. The ethanol concentration was plotted against the number of days.
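For reproducibility, the % (w/v) medium composition of Section 3.2.4 converts to gram quantities as follows. This is a minimal sketch assuming the standard definition of % (w/v) as grams of solute per 100 ml of medium; the dictionary layout is ours, not the paper's.

```python
# Sketch: converting the % (w/v) fermentation-medium composition into
# gram quantities for the 2000 ml batch prepared in Section 3.2.4.
# % (w/v) = grams of solute per 100 ml of medium.

composition_w_v = {            # % (w/v), as listed in the text
    "glucose": 8.0,
    "peptone": 0.1,
    "malt extract": 0.1,
    "yeast extract": 0.2,
    "magnesium chloride": 0.01,
    "calcium carbonate": 0.2,
    "ammonium sulphate": 0.2,
    "ferrous sulphate": 0.001,
}

volume_ml = 2000
for component, pct in composition_w_v.items():
    grams = pct / 100 * volume_ml   # g per batch
    print(f"{component}: {grams:g} g")
```

For example, the 8% (w/v) glucose corresponds to 160 g dissolved in the 2000 ml batch.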
3.2.6 Determination of Refractive Index Standard Curve The refractive index determination of ethanol was carried out at the Chemical Engineering Unit Processes Laboratory, Ahmadu Bello University, Zaria, Nigeria. The refractive indices of standard ethanol concentrations were determined using an Abbe refractometer (Model 2WAJ) at 28 °C. The refractive index of the same volume of distilled water was also determined. The refractive index values were recorded for each sample (Amadi et al., 2004). 3.3 Experimental Setup of the Petters Spark Ignition Engine and Description A four-stroke, single-cylinder petrol engine was connected to an electric dynamometer by a coupling and mounted on a rigid frame; a tachometer for RPM readings, a U-tube manometer, an air filter, a fuel-measuring tube and a gas analyzer were also arranged. 3.3.1 Experimental Fuels We used the following fuels in the experiment: ► Gasoline purchased at the Oando fuel filling station in Samaru, Zaria ► Ethanol from groundnut shell 3.3.2 Four-Stroke Engine A four-stroke, single-cylinder, stationary petrol engine with the following specification was used. Engine Data: i. Bore = 8.50 cm ii. Stroke = 8.25 cm iii. Compression Ratio = 6:1 iv. Swept Volume = 468.67 cm³ v. Maximum BHP at 1650 rev/min vi. Maximum speed = 2000 rev/min vii. Brake Arm = 32.0167 cm viii. Manometer angle = 15° ix. Orifice Size = 1.905 cm x. Coefficient of Discharge (Cd) = 0.60 3.3.3 Test Procedure on the Petters Heat Engine Before starting, water circulation to the engine was ensured by first filling the tank to full capacity. The transformer was switched on to supply current to the electrical D.C. motor which runs the Petter Paiws test engine, as it is a motor-start engine. The loads were released, and the field and start switches were switched on. On operating the starting lever, the motor runs the test engine until it fires; thereafter the test engine powers the D.C. motor, which is the D.C. electrical dynamometer from which relevant data are recorded.
The hand wheel provided on top of the balance frame was used to adjust the height of the balance arm; this should always be horizontal when taking brake horsepower readings from the dynamometer. The fuel consumption was obtained from the measuring jar on the engine by noting the time taken to consume a known quantity of fuel using stopwatches. The inlet and exhaust temperatures of the water coolant were obtained by dipping thermometers into bores on the inlet and outlet water pipes. Observations were made of the rate of fuel consumption, speed, load, and coolant and exhaust temperatures for every fuel sample. The experimental analysis commenced by using 100% gasoline as a reference and later the gasoline-ethanol fuel blends (E10, E20, E30, E40, E50). The results obtained were recorded. 3.3.4 IMR 1400 Gas Analyzer This is an instrument for sampling emission products directly from the combustion chamber. In addition to the above-mentioned parameters it measures and calculates the following: flue gas temperature, carbon dioxide (CO2), carbon monoxide (CO, corrected to 0% O2), nitrogen oxides (NOx, corrected to 0% O2) and sulfur oxides (SOx, corrected to 0% O2). The IMR gas analyzer is designed to work under strict adherence to the operating manual and within stipulated temperatures. Procedure The Petters heat engine was started, allowed to idle, and set to 1200 revolutions per minute. The duct was then connected through the gas sampling probe to the analyzer. The gas sampling probe was initially in ambient air during the zero calibration, and the unit was turned on to start the zero calibration, which took 180 seconds before measurement started. The fuel type and engineering unit (ppm) were selected on the display screen through the selection menu. The exhaust duct valve was turned open and readings were recorded for each set of experiments at the point source and at a 3-metre measured distance.
An interval of one minute was observed before the next reading was taken, and after each run the dust filter and the sensor were removed and cleaned free of soot, and the readings recorded in a table. 3.4 CHEMICAL COMPOSITION The GS used in this investigation, with a chemical composition of 35.7% cellulose and 18.7% hemicellulose constituting a total carbohydrate content (TCC) of 54.4% on a dry solid (DS) basis, is presented in Table 1 below.

Table 1: Chemical composition of groundnut shell
Component        % dry weight
Ash              5
Cellulose        38
Hemicellulose    36
Lignin           16
Moisture         5
Source: (Raveendran et al., 1995)

IV. RESULTS AND DISCUSSIONS 4.1 RESULT The results of the tests conducted were recorded in tables and plotted in Figures 1, 2, 3, 4, 5, 6, 7 and 8 below. Figure 1: Glucose concentration vs. number of days Figure 2: Ethanol yield vs. number of days Reducing Sugar Concentration Obtained from Groundnut Shell Wastes The cellulase produced by Aspergillus niger broke down the groundnut shell waste into reducing sugar. Groundnut shell was hydrolyzed to produce a reducing sugar concentration of 1.56 mg/ml on the first day and 0.46 mg/ml on the seventh day of batch one, and 1.09 mg/ml, 0.96 mg/ml and 0.85 mg/ml on the first, second and third days of batch two. Batches three, four and five of the experiment showed similar results. Thus, the reducing sugar concentration decreases gradually as the fermentation period increases. Result of Ethanol Obtained from Fermentation of Organic Waste On the first day of batch one, the substrate produced 6.2 millilitres of ethanol; as the fermentation period increased, the ethanol yield also increased, to 9.2 millilitres on the seventh day. The total ethanol yield obtained from 420 g of groundnut shell substrate was 55.8 millilitres.
Batch two showed a slight decrease from the same substrate quantity as the fermentation period increased; its total ethanol yield was 45.60 millilitres, while in batch three the groundnut shell ethanol yield was 43.76 ml. From the results, it was observed that as the concentration of the distillate increases, the refractive index also increases, which implies that ethanol concentration is directly proportional to its refractive index. The refractive index of the groundnut shell distillate rose from 1.3361 on day one to 1.3409 on day seven. The calculated density of the ethanol was 756.4 kg/m³ and the specific gravity obtained was 0.7564. The pH values of the ethanol obtained from groundnut shell wastes also decrease as the concentration increases, falling from 6.79 on the first day to 6.49 on the seventh day. The volume of ethanol produced from 420 g of groundnut shell substrate was approximately 60 ml. This is consistent with Mathewson (1980), who reported that a ton of fermentable sugar substrate can produce 70-100 gallons of ethanol. The approximate ethanol yield of 50-80 ml obtained in this experiment also agrees with the report of Akpan and Adamu (2008) that a substrate of 2500 g of fermentable sugar can produce a maximum ethanol yield of about 0.65 litres. 4.2 RESULT OF CALORIFIC VALUE OF THE BLENDS The calorific values, densities and specific gravities of the blends were determined and the results obtained are tabulated in Table 2 below. 4.3 EXPERIMENTAL CALCULATION / ENGINE PERFORMANCE The values obtained from the experiment were used to determine the various engine parameters as outlined below. These engine parameters were calculated for E0%, E10%, E20%, E30%, E40% and E50%, based on full throttle opening. 1. Brake Power This is the actual work output of an engine, i.e. the actual power available at the crankshaft.
It is usually measured using a dynamometer and is given by:

Brake Power (Bp) = WN / C   (1)

where W = load reading, N = speed and C = the dynamometer constant.

2. Torque Torque is a good indicator of an engine's ability to do work. It is defined as a force acting at a moment distance:

Torque on the dynamometer (T) = W R   (2)

where W = load and R = torque arm length.

3. Brake Mean Effective Pressure This is the mean effective pressure which would have developed power equivalent to the brake power if the engine were frictionless. For a four-stroke engine it is given by:

Brake Mean Effective Pressure (bmep) = BP / (L A N n)   (3)

where L = stroke, A = πD²/4, N = revolutions per second and n = number of cylinders = 1.

4. Volumetric Efficiency This is the ratio of the mass of air ingested to the density of atmospheric air times the displacement volume of the cylinder per cycle:

Volumetric Efficiency (ηv) = (Va + Vf) / Vs   (4)

where

Va = volume of air = ṁa R Ta / P   (5)

ṁa = mass flow of air = 0.866 √(P h)   (6)

h = manometer reading (in) = H sin θ, with θ = 15°, and R = the gas constant.

Vf = volume of fuel sample / rate of consumption   (7)

Vs = swept volume rate = Vs N n   (8)

Note: barometric readings were i. atmospheric pressure = 27.80 inHg and ii. ambient temperature = 29.96 °C = 302.96 K.

V. RESULTS OF PERFORMANCE OF THE SI ENGINE From the results of the performance of the Petters spark ignition engine, the behaviour of brake power, torque, brake mean effective pressure and volumetric efficiency versus engine speed is presented in Figures 5, 6, 7 and 8 below.
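The torque and BMEP relations above can be checked numerically. In the Python sketch below, torque follows Eq. (2) with the 32.0167 cm brake arm; because the dynamometer constant C in Eq. (1) is device-specific, brake power is instead computed from the standard relation BP = 2πNT/60, and BMEP follows Eq. (3) with N in revolutions per second as defined above. The dynamometer load value is illustrative, not a measurement from the paper.

```python
import math

# Sketch of the engine-parameter calculations of Section 4.3, using the
# engine data of Section 3.3.2. The load reading (190 N) is illustrative.

BORE = 0.085            # m  (8.50 cm)
STROKE = 0.0825         # m  (8.25 cm)
TORQUE_ARM = 0.320167   # m  (32.0167 cm brake arm)
N_CYL = 1

def torque(load_newton):
    """Eq. (2): T = W * R (Nm)."""
    return load_newton * TORQUE_ARM

def brake_power(load_newton, rpm):
    """Standard dynamometer relation: BP = 2*pi*N*T / 60 (W)."""
    return 2 * math.pi * rpm * torque(load_newton) / 60

def bmep(bp_watts, rpm):
    """Eq. (3): bmep = BP / (L * A * N * n), N in rev/s (Pa)."""
    area = math.pi * BORE ** 2 / 4      # piston area, m^2
    rev_per_s = rpm / 60
    return bp_watts / (STROKE * area * rev_per_s * N_CYL)

load, rpm = 190.0, 1500
bp = brake_power(load, rpm)
print(f"Torque: {torque(load):.2f} Nm")
print(f"Brake power: {bp:.1f} W")
print(f"BMEP: {bmep(bp, rpm) / 1e5:.2f} bar")
```

With this illustrative load the results land in the ranges reported in the paper (torque around 60 Nm, brake power around 9.5 kW and BMEP around 8 bar at 1500 rpm); note also that L × A reproduces the quoted swept volume of 468.67 cm³.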
Figure 5: Brake power vs. speed (groundnut shell) Figure 6: Torque vs. speed (groundnut shell) Figure 7: Brake mean effective pressure vs. speed (groundnut shell) Figure 8: Volumetric efficiency vs. speed (groundnut shell) VI. DISCUSSION OF RESULTS OF THE ENGINE PERFORMANCE TEST 5.1 DISCUSSION 5.1.1 Brake Power Brake power was found to be roughly equal for all fuels at the lowest engine speed of 1000 rpm, and it increased with increasing speed. At the highest speed of 1500 rpm, E40% developed the highest brake power, followed by E30%, E10% and E20%, while gasoline developed the lowest brake power. This may be due to better combustion conditions in the engine: power increases when more ethanol is added to gasoline, because the oxygen in the ethanol composition improves the combustion process. This is in agreement with the findings of Alvydas and Saugirdas (2003). 5.1.2 Torque From the graphs of torque versus speed (Fig. 6), torque decreases with increasing engine speed for all the groundnut shell blends. On the other hand, slightly higher torque is produced by the gasoline-ethanol blends at low engine speed. Gasoline (E0%) developed the lowest torque of 54.49 Nm at an engine speed of 1500 rpm.
This is because the engine is unable to ingest a full charge of air at higher speed, and also because friction loss increases with speed, as explained by Pulkrabek (2003). 5.1.3 Brake Mean Effective Pressure As shown in Equation (3), brake mean effective pressure is directly proportional to the torque developed by the engine. Fig. 7 shows slightly higher torque and BMEP at a speed of 1000 rpm for both the blends and the sole fuel (gasoline). The lowest BMEP of 7.92 bar, for gasoline (E0%), was recorded at 1400 rpm engine speed. At low engine speeds the higher heating value of gasoline is responsible for the high BMEP. 5.1.4 Volumetric Efficiency Fig. 8 shows that volumetric efficiency is slightly affected by the increasing ethanol fraction of the gasoline-ethanol blends. A peak volumetric efficiency of 22.35% was recorded at 1200 rpm engine speed, and the lowest volumetric efficiency of 10.09% for gasoline (E0%). This is in agreement with Andreas (2003), that volumetric efficiency is inversely proportional to engine speed; increasing the compression ratio decreases the clearance volume and hence a higher volumetric efficiency is obtained. 5.2 COMBUSTION ANALYSIS OF THE BIOFUEL BLENDS AND GASOLINE Graphs 9 and 10 below show the combustion analysis of the Petters engine at the point source and at 3 metres. Graph 9: Exhaust emissions at the point source vs. blend ratio (groundnut shell) Graph 10: Exhaust emissions at a 3 m distance vs. blend ratio (groundnut shell) VII. SUMMARY From the results of this research, using the fungal microorganisms A. niger and S.
cerevisiae that can convert xylose and other pentose to bioethanol will convert 420g substrate Groundnuts shell to produce approximately 50 – 80 ml ethanol yield. The test result and graphs have demonstrated the possibility of using bioethanol obtained from organic wastes to run gasoline engine with little or no modification. Result from the test carried out showed that blends of ethanol: gasoline from banana peel developed the highest maximum torque of 63.19 Nm, and the highest brake power of 9578.64 W, compared to sole gasoline with 60.92 Nm torque, and 8683.4 4W. However maximum fuel consumption was noticed from ethanol: gasoline blends when compared with that of sole gasoline. VIII. CONCLUSION The result of the experiment conducted shows that Cellulosic agricultural wastes particularly groundnuts shell is a potential substrate which can be exploited in industries for bioethanol production on a commercial scale as they are cheap and more importantly renewable. Available data support the conclusion that environmental impact associated with dedicated production of cellulosic biomass appears to be generally acceptable and can be positive. www.ajer.org Page 311 American Journal of Engineering Research (AJER) 2013 Ethanol blends with gasoline causes significant improvement in engine performance, indicating parameters like Brake power, Torque, Brake Mean Effective Pressure, Volumetric Efficiency and Fuel consumption has been observed for various additives. Addition of 50% ethanol – gasoline was feasible though with difficulty in starting but there was significant reduction in exhaust emission as engine speed increase. Values of CO, NOx, SOx, emission decreases dramatically as a result of leaning effects caused by ethanol addition REFERENCE [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] Abila N. (2010). Biofuels adoption in Nigeria: Preliminary Review of Feedstock and Fuel Production Potential, Dept. 
of Industrial Management, University of Vaasa, Vaasa, Finland, pp. 1-11.
[2] Abiodun O. (2007). Biofuel Opportunities and Development of Renewable Energies Markets in Africa. Paper presented at the Biofuel Market Africa 2007 conference, Cape Town, South Africa.
[3] Akpan, U. G., Alhakim, A. A. and Ijah, J. J. (2008). Production of ethanol from organic food waste.
[4] Adeniyi, O. D., Kovo, A. S., Abdulkarim, A. S. and Chukwudozie (2007). Ethanol production from cassava as a substitute for gasoline. J. Dispersion Sci. Technol.
[5] Akande, F. H. and Mudi, K. Y. (2005). Kinetic model for ethanol production from cassava starch by Saccharomyces cerevisiae. Proceedings of the 35th Annual Conference of NSChE, Kaduna, Nigeria.
[6] Agba A. M., Ushie M. E., Abam F. I., Agba M. S., Okoro J. (2010). Developing the Biofuel Industry for Effective Rural Transformation. European Journal of Scientific Research, Vol. 40, No. 3, pp. 441-449.
[7] Ajueyitsi O. N. (2009). Optimization of Biomass Briquette Utilisation as a Fuel for Domestic Use. PhD Research Proposal Seminar, Dept. of Mechanical Engineering, FUTO.
[8] Amadi, B. A., Agomo, E. N. and Ibegbulem, C. O. (2004). Research Methods in Biochemistry. Supreme Publishers, Owerri, Nigeria, pp. 93-99.
[9] Bailey, B. and Russell, J. (2004). Emergency Transportation Fuel Investigation: Properties and Performance. SAE Paper 81044.
[10] Elijah I. O. (2010). Emerging Bio-ethanol Projects in Nigeria: Their Opportunities and Challenges. Energy Policy, Vol. 38, Issue 11, pp. 7161-7168.
[11] Obioh I. and Fagbenle R. O. (2009). Energy Systems: Vulnerability, Adaptation, Resilience (VAR). Helio International.
[12] Ogwueleke T. (2009). Municipal Solid Waste Characteristics and Management in Nigeria. Iranian Journal of Environmental Health Science & Engineering, Vol. 6, No. 3, pp. 173-180.
[13] Raveendran, K., Anuradda, G. and Kartic, C. K. (1995). Influence of mineral matter on biomass pyrolysis characteristics. Fuel, 74, 1812-1822.
American Journal of Engineering Research (AJER), 2013; e-ISSN: 2320-0847, p-ISSN: 2320-0936; Volume-02, Issue-12, pp-110-116; www.ajer.org. Research Paper, Open Access.

A Simulation Analysis of Dislocations Reduction in InxGa1-xN/GaN Heterostructure Using Step-graded Interlayers
Sohel Hossain, Md. Farid Uddin Khan, Md. Liton Hossain, Abu Farzan Mitul
Department of Electrical and Electronic Engineering, Faculty of Electrical Engineering, Khulna University of Engineering & Technology, 920300, Khulna, Bangladesh

Abstract: Reducing the misfit dislocation density is a great challenge for semiconductor devices, since misfit dislocation generation is harmful to device performance. In this paper, we use different techniques to reduce misfit dislocations. We have studied and calculated the critical layer thickness by varying the In composition and compared the results of two models, the Matthews-Blakeslee and People-Bean models; the Matthews-Blakeslee model shows better performance than the People-Bean model. We have then analyzed misfit dislocation generation as a function of layer thickness and compared two graded layers, a uniform graded layer and a step-graded layer, for three slip systems: 1/3<11-23>{11-22}, 1/3<11-23>{1-101} and 1/3<11-23>{0001}. Notably, we also show the edge, screw and mixed dislocation densities as a function of layer thickness for these planes. The step-graded layer displays less misfit dislocation generation than the uniform graded layer. Finally, we investigate the interlayer effect: using more interlayers reduces the dislocation density sharply.
Keywords: InGaN; dislocation; critical layer thickness; In composition; step-graded layer; interlayer effect.

I.
INTRODUCTION
During the last decade, III-nitride semiconductors have received much attention due to their large direct band gap, suitable for building a new generation of electronic and optoelectronic devices. However, in heteroepitaxial nitride semiconductors, the large lattice mismatch between layers and at the layer-substrate interface degrades the quality of these promising material systems and hence the performance of the resulting devices. A high density of misfit dislocations (MDs) greatly degrades device performance, so a material system with a low MD density is highly desirable for the fabrication of future-generation electronic and optoelectronic devices. Many approaches have been analyzed to reduce misfit dislocation generation. Md. Arafat Hossain, Md. Mahbub Hasan, and Md. Rafiqul Islam [1] calculated the critical layer thickness in each step-graded layer using the Matthews-Blakeslee force balance model, finding that the critical layer thickness is inversely dependent on the In composition. The critical thicknesses were found to be 13.5 nm and 11.5 nm for x1 = 0.09 and x2 = 0.17, and the MD density decreased from 2.2×10^5 cm^-1 to 1.6×10^5 cm^-1. The authors of [2] reported critical thicknesses of 12.4, 13.9 and 3.3 nm in the (11-22), (1-101) and (0001) slip systems, respectively, for 10% In composition. Durjoy Dev, Anisul Islam, Md. Rafiqul Islam, Md. Arafat Hossain and A. Yamamoto [3] calculated a critical layer thickness of 6.792 nm at x = 0.2, and edge MD densities of 3.25×10^11, 9.39×10^10, 6.7×10^10, 4.74×10^10, 4.45×10^10 and 4.24×10^10 cm^-2 for 0, 1, 2, 3, 4 and 5 interlayers, respectively. The present article presents a theoretical analysis of the critical layer thickness using the Matthews-Blakeslee and People-Bean force balance models, of MD generation using uniform and step-graded layers, and of the effect of the number of interlayers on their reduction.
In this work we present theoretical evidence of low-density MD formation during the step increase in In composition with the thickness of InGaN grown on three possible planes of GaN. We have observed that a larger number of interlayers reduces the dislocation density sharply.

II. THEORY
All mechanical properties of GaN and InN used in the subsequent calculations are summarized in this subsection. Lattice parameters of wurtzite GaN and InN are given in Table 1.1 [7]. The lattice parameters for InxGa1-xN are derived using Vegard's law. In approximately all heteroepitaxial growth of interest, the epitaxial layer has a stress-free lattice constant that differs from that of the substrate. As the epitaxial layer thickness increases, so does the strain energy stored in the pseudomorphic layer. At a certain thickness, called the critical layer thickness (hc), it becomes energetically favorable for MDs to be introduced at the interface, relaxing some of the mismatch strain. The critical layer thickness developed from the Matthews-Blakeslee force balance model is modified to calculate hc for each step increase in In composition [4]:

h_c = \frac{b(1-\nu\cos^2\theta)}{8\pi(1+\nu)|\epsilon_m|\cos\varphi}\,\ln\!\left(\frac{h_c}{r_0}\right)   (1)

Table 1.1: Lattice parameters of GaN and InN used in the calculations throughout this work [8].

Material | a [Å] | c [Å]
GaN      | 3.189 | 5.185
InN      | 3.538 | 5.702

Here b is the length of the Burgers vector, ν is the Poisson ratio, φ is the angle between the slip plane and the normal to the film-substrate interface, θ is the angle between the dislocation line and the Burgers vector, and r0 is the dislocation cut-off parameter. The critical layer thickness hc(x), at which strain relief is expected to occur, can be estimated as a function of x using the model proposed by People and Bean.
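Because hc appears on both sides of Eq. (1), the relation is transcendental and is usually solved iteratively. The following Python sketch illustrates one such fixed-point solution; the parameter values at the bottom are illustrative assumptions, not the constants used in this work:

```python
import math

def mb_critical_thickness(b, nu, eps_m, theta, phi, r0, h0=10.0, tol=1e-10):
    """Solve the Matthews-Blakeslee relation, Eq. (1):
    h_c = b(1 - nu*cos^2(theta)) / (8*pi*(1+nu)*|eps_m|*cos(phi)) * ln(h_c/r0)
    by fixed-point iteration. Lengths in nm, angles in radians."""
    prefactor = b * (1.0 - nu * math.cos(theta) ** 2) / (
        8.0 * math.pi * (1.0 + nu) * abs(eps_m) * math.cos(phi))
    h = h0
    for _ in range(200):
        h_new = prefactor * math.log(h / r0)
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
    return h

# Illustrative values only (assumed, not taken from this paper):
# b = 0.32 nm, nu = 0.3, 1% misfit strain, 60-degree angles, r0 = b
hc = mb_critical_thickness(b=0.32, nu=0.3, eps_m=0.01,
                           theta=math.radians(60), phi=math.radians(60), r0=0.32)
print(f"critical thickness ~ {hc:.2f} nm")
```

The iteration converges because the map h -> prefactor*ln(h/r0) is a contraction near the root; larger misfit strains shrink the prefactor and hence the critical thickness, reproducing the inverse trend with In composition discussed below.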
The equation for hc(x) as a function of the lattice mismatch and the film structural properties is given by [6]:

h_c(x) = \frac{1-\nu(x)}{1+\nu(x)}\cdot\frac{1}{16\pi\sqrt{2}}\cdot\frac{b^2}{a(x)}\cdot\frac{1}{f^2(x)}\,\ln\!\left(\frac{h_c(x)}{b}\right)   (2)

where ν(x) is Poisson's ratio, a(x) is the bulk lattice constant of the film, b is the slip distance, and f(x) is the lattice mismatch. The value used for b was aGaN, and Vegard's law was assumed to obtain a(x) and ν(x). In the case of a material with hexagonal symmetry, the only non-zero component of the biaxial misfit stress tensor and the elastic energy per unit area of the interface take the form [5]:

\sigma_{xx} = \sigma_{yy} = \left(C_{11}+C_{12}-\frac{2C_{13}^2}{C_{33}}\right)\epsilon   (3)

W = \left(C_{11}+C_{12}-\frac{2C_{13}^2}{C_{33}}\right)\epsilon^2 h   (4)

where Cij are the elastic constants and h is the thickness of the epitaxial layer grown on the GaN substrate. Therefore the strain energy per unit area of the interface in a material with hexagonal symmetry is [3]:

\frac{dW}{dA} = \left(C_{11}+C_{12}-\frac{2C_{13}^2}{C_{33}}\right)\left(\epsilon_{m,i}-\frac{3}{2}\,b_{c,i}\,p_i\right)^2 h   (5)

The strain in the epitaxial layer is partially relaxed by the misfit dislocations. Therefore the residual strain after a thickness h is

|\epsilon_{r,i}| = |\epsilon_{m,i}| - \frac{3}{2}\,b\,p_i   (6)

where i = 1, 2, 3, … indexes the residual strain of the first, second, third layer, and so on. The total energy stored by the array of misfit dislocations in the ith layer with partially relaxed misfit strain is

W_{total} = \left(C_{11}+C_{12}-\frac{2C_{13}^2}{C_{33}}\right)\left(\epsilon_{m,i}-\frac{3b}{2l_i}\right)^2 h_i + \frac{3}{2l_i}\left(C_{11}+C_{12}-\frac{2C_{13}^2}{C_{33}}\right) b^2 \ln\!\left(\frac{h_i}{b}\right)   (7)

The first term of this equation is the strain energy, and the second term accounts for the energy per unit length of an array of dislocations per unit area lying in the layer-substrate interface. It is assumed that the dislocation spacing l minimizes the total energy W_total, so the misfit dislocation density is found by differentiating the above equation, which results in Eq. (8). The layer grown upon the partially relaxed layer of thickness hi will experience a misfit strain reduced by the residual strain εi of the previous layer, calculated by Eq.
(9):

p_i = \frac{2|\epsilon_{m,i}|}{3b}\left(1-\frac{h_{c,i}}{h_i}\right)   (8)

|\epsilon_{m,i+1}| = \left|\frac{a_i - a_{i+1}}{a_{i+1}}\right| - |\epsilon_{r,i}|   (9)

The misfit dislocation density p(i+1) for the (i+1)th layer is then updated using Eqs. (8) and (9) with the corresponding residual strain. The most novel contribution of this paper is the analysis of the interlayer effect on edge, screw and mixed dislocations. Using Eq. (8), we have examined the effect of one and two interlayers on the misfit dislocation density; the key observation is that the edge, screw and mixed dislocation densities with one interlayer are considerably larger than with two interlayers.

III. RESULTS AND DISCUSSION
The Matthews-Blakeslee force balance model and the People-Bean model have been used to calculate the critical layer thickness in each step-graded layer, as shown in Figures 1 and 2. The figures show an inverse relationship between the critical layer thickness and the indium composition: each step increase in indium composition leads to a lower critical layer thickness.

Figure 1: Critical thickness for the InxGa1-xN/GaN system predicted by the Matthews-Blakeslee model.

Figure 2: Critical thickness for the InxGa1-xN/GaN system predicted by the People-Bean model.

Figures 3, 4 and 5 compare the Matthews-Blakeslee and People-Bean force balance models, plotting the critical layer thickness against In composition for the 1/3<11-23>{11-22} plane. For edge-type dislocations, Fig. 3 shows a different critical layer thickness with each step increase in In composition; for the same In composition of 0.15, the critical layer thickness is 9.82 nm for the Matthews-Blakeslee model and 17.3 nm for the People-Bean method.
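The layer-by-layer update of Eqs. (6), (8) and (9) above can be sketched as a short loop. This is a minimal illustration under assumed values (a fixed hc, a generic Burgers vector, arbitrary composition steps and layer thickness), not a reproduction of the paper's calculation, in which hc is recomputed from the Matthews-Blakeslee relation at every step:

```python
A_GAN, A_INN = 3.189, 3.538  # a lattice constants of GaN and InN, Angstrom (Table 1.1)

def a_of(x):
    """Vegard's law for the a lattice constant of InxGa1-xN (Angstrom)."""
    return (1 - x) * A_GAN + x * A_INN

def step_graded_md(x_steps, h_nm=50.0, b_nm=0.32, hc_nm=10.0):
    """Propagate misfit strain and MD density through step-graded layers.
    Eq. (9): |eps_m,i+1| = |(a_i - a_{i+1})/a_{i+1}| - |eps_r,i|
    Eq. (8): p_i = 2|eps_m,i|/(3b) * (1 - hc_i/h_i)
    Eq. (6): |eps_r,i| = |eps_m,i| - (3/2) b p_i
    Returns the MD linear density of each layer in cm^-1."""
    b_cm = b_nm * 1e-7          # Burgers vector in cm, so p comes out in cm^-1
    densities = []
    a_prev = A_GAN              # the GaN substrate comes first
    eps_r = 0.0
    for x in x_steps:
        a_now = a_of(x)
        eps_m = abs((a_prev - a_now) / a_now) - eps_r            # Eq. (9)
        p = max(0.0, 2 * eps_m / (3 * b_cm) * (1 - hc_nm / h_nm))  # Eq. (8)
        eps_r = max(0.0, eps_m - 1.5 * b_cm * p)                 # Eq. (6)
        densities.append(p)
        a_prev = a_now
    return densities

# One abrupt step vs. two smaller steps to the same final composition
print(step_graded_md([0.2]))        # uniform (single) layer
print(step_graded_md([0.1, 0.2]))   # step-graded, one interlayer
```

Stepping the composition in two smaller increments leaves less residual mismatch for the top layer, so its MD density comes out lower than for the single abrupt step, in line with the trend reported in this section.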
For screw-type dislocations, Fig. 4 shows that for the same In composition of 0.1, the critical layer thickness is 2.768 nm for the Matthews-Blakeslee model and 8.279 nm for the People-Bean method. For mixed-type dislocations, Fig. 5 shows that for an In composition of 0.05, the critical layer thickness is 33.01 nm for the Matthews-Blakeslee model and 116.3 nm for the People-Bean method.

Figure 6 shows a comparison of the critical thickness for the InxGa1-xN/GaN system predicted by the Matthews-Blakeslee model on different planes. For an indium content of 0.2, the critical layer thickness is 16.99 nm for the 1/3<11-23>{11-22} plane, 11.89 nm for the 1/3<11-23>{1-101} plane and 6.802 nm for the 1/3<11-23>{0001} plane. From Figures 1 to 6 we conclude that the Matthews-Blakeslee model fits previous theoretical and practical work better than the other model, so the Matthews-Blakeslee model for the critical layer thickness is favored.

Figures 7, 8 and 9 show the comparison between uniform and step-graded layers for edge, screw and mixed-type dislocations as a function of layer thickness on different planes. Figure 7, for the 1/3<11-23>{11-22} plane, shows that for a uniform layer the edge, screw and mixed-type dislocation densities are much higher than for a step-graded layer. At the same layer thickness, the screw dislocation density is 2.826×10^5 cm^-1 for the uniform layer and 1.034×10^5 cm^-1 for the step-graded layer. The edge dislocation density is 3.331×10^5 cm^-1 for the uniform layer and 1.21×10^5 cm^-1 for the step-graded layer, and the mixed-type dislocation density is 5.191×10^5 cm^-1 for the uniform layer and 1.9×10^5 cm^-1 for the step-graded layer. Thus, among the three types of dislocation, the dislocation density is lowest for the screw type. From Fig.
8, for the 1/3<11-23>{1-101} plane, the screw dislocation density is 2.817×10^5 cm^-1 for the uniform layer and 1.028×10^5 cm^-1 for the step-graded layer. From Fig. 9, for the 1/3<11-23>{0001} plane, the screw dislocation density is 5.37×10^5 cm^-1 for the uniform layer and 1.969×10^5 cm^-1 for the step-graded layer. In every plane, the screw type shows the lowest dislocation density with layer thickness, and the 1/3<11-23>{1-101} plane performs best. From Figures 7, 8 and 9 we conclude that the step-graded layer technique is much better than a uniform layer for reducing misfit dislocations.

Figures 10 and 11 show the interlayer effect on edge, screw and mixed-type dislocations as a function of layer thickness for the 1/3<11-23>{11-22} plane. In Figure 10, with only one interlayer, the resulting dislocation densities are 2.319×10^5 cm^-1 for screw, 2.715×10^5 cm^-1 for edge and 4.271×10^5 cm^-1 for mixed dislocations. In Figure 11, with two interlayers, the densities are 1.809×10^5 cm^-1 for screw, 2.118×10^5 cm^-1 for edge and 3.328×10^5 cm^-1 for mixed dislocations. Figures 10 and 11 therefore show that using more interlayers reduces the dislocation density sharply. The interlayer effect is thus very important for reducing misfit dislocations and for good performance of the fabricated device. However, it is important to note that increasing the number of interlayers also increases the experimental complexity and introduces interfacial dislocations in every layer.

IV. CONCLUSION
Misfit dislocations, the mechanism of their generation and their properties are a crucial problem in any heteroepitaxy. The quickly evolving area of applications based on III-nitrides has forced a revision of various models. To fulfil this aim, a literature survey was carried out, which identified the most frequently used critical thickness models.
Original results on misfit dislocations in InGaN/GaN step-graded layer systems were presented. A stepwise change of lattice mismatch in a step-graded interlayer introduces a reduced misfit force and consequently less misfit dislocation generation with thickness. Increasing the number of interlayers enhances this reduction up to a definite limit. Therefore, the lower MD and TD densities in the upper layers, compared with the case without a graded layer, make the step-graded interlayer a better technique for high-performance semiconductor device fabrication. In future work, more attention will be given to obtaining reliable experimental data and a faithful comparison between theory and experiment, and we will extend the work on misfit dislocations to multiple quantum wells.

ACKNOWLEDGEMENT
This work was supported by the simulation and hardware laboratory of the Electrical and Electronic Engineering (EEE) Department of Khulna University of Engineering and Technology (KUET), Bangladesh. We would like to thank the coordinator of the laboratory, Abu Farzan Mitul, Lecturer, Department of EEE, KUET, for his continuous support and help throughout the work; his guidance made this work successful. We would also like to thank Md. Arafat Hossain, Assistant Professor, Department of EEE, KUET, Bangladesh, for helping us solve many critical problems.

REFERENCES
[1] Md. Arafat Hossain, Md. Mahbub Hasan, and Md. Rafiqul Islam, "Strain Relaxation via Misfit Dislocation in Step-Graded InGaN Heteroepitaxial Layers Grown on Semipolar (11-22) and (1-101) GaN", International Journal of Applied Physics and Mathematics, Vol. 2, No. 1, January 2012.
[2] Md. Arafat Hossain, Md. Rafiqul Islam, Md. Mahbub Hasan, A. Yamamoto, and A.
Hashimoto, "A Mathematical Modelling of Dislocations Reduction in InxGa1-xN/GaN Heteroepitaxy Using Step-graded Inter layers", 7th International Conference on Electrical and Computer Engineering, 20-22 December 2012, pp. 365-368.
[3] Durjoy Dev, Anisul Islam, Md. Rafiqul Islam, Md. Arafat Hossain and A. Yamamoto, "A Theoretical Approach for the Dislocation Reduction of Wurtzite InxGa1-xN/GaN Heteroepitaxy", 7th International Conference on Electrical and Computer Engineering, 22 December 2012, pp. 369-372.
[4] J. W. Matthews and A. E. Blakeslee, "Defects in epitaxial multilayers", J. Crystal Growth, 27, 118, 1974.
[5] D. Holec, P. M. F. J. Costa, M. J. Kappers, C. J. Humphreys, "Critical thickness calculations for InGaN/GaN", Journal of Crystal Growth, 303, 314-317, 2007.
[6] S. Pereira, M. R. Correia, E. Pereira, C. Trager-Cowan, F. Sweeney, K. P. O'Donnell, E. Alves, N. Franco, A. D. Sequeira, "Structural and optical properties of InGaN/GaN layers close to the critical layer thickness", Applied Physics Letters, Vol. 81, No. 7, 12 August 2002.
[7] Vickers M. E., Kappers M. J., Smeeton T. M., Thrush E. J., Barnard J. S., Humphreys C. J., "Determination of the indium content and layer thicknesses in InGaN/GaN quantum wells by x-ray scattering", J. Appl. Phys., Vol. 94, No. 3, August 2003, pp. 1565-1574.
[8] Ponce F. A., "Structural defects and materials performance of the III-V nitrides", in Group III Nitrides Semiconductor Compounds, edited by B. Gil, chap. 4, Clarendon Press, Oxford, 1998, pp. 123-157.
[9] Jain S. C., Willander M., Narayan J., Overstraeten R. V., "III-nitrides: Growth, characterization, and properties", J. Appl. Phys., Vol. 87, No. 3, February 2000, pp. 965-1006.
[10] Jain S. C., Harker A. H., Cowley R. A., "Misfit strain and misfit dislocations in lattice mismatched epitaxial layers and other systems", Phil. Mag. A, Vol. 75, June 1997, pp. 1461-1515.
[11] Fischer A., Kühne H., Richter H., "New approach in equilibrium theory for strained-layer relaxation", Phys. Rev.
Lett., Vol. 73, No. 20, June 1994, pp. 2712-2715.
[12] Park S.-E., O. B., Lee C.-R., "Strain relaxation in InxGa1-xN epitaxial films grown coherently on GaN", J. Crystal Growth, Vol. 249, No. 3-4, March 2003, pp. 455-460.
[13] Willis J. R., Jain S. C., Bullough R., "The energy of an array of dislocations: implications for strain relaxation in semiconductor heterostructures", Phil. Mag. A, Vol. 62, No. 1, July 1990, pp. 115-129.
American Journal of Engineering Research (AJER), 2013; e-ISSN: 2320-0847, p-ISSN: 2320-0936; Volume-02, Issue-12, pp-39-45; www.ajer.org. Research Paper, Open Access.

Predicting the Minimum Fluidization Velocity of Algal Biomass Bed
Abbas H. Sulaymon, Ahmed A. Mohammed, Tariq J. Al-Musawi*
*Corresponding author: Baghdad University, Environmental Engineering Dept.

Abstract: The minimum fluidization velocity (Umf) is an important hydrodynamic parameter in the design of fluidized bed reactors. This paper aims to predict the minimum fluidization velocity of a liquid-solid (algal biomass) reactor. The experimental work was carried out in a glass column of 1 m height and 7.7 cm inside diameter. The minimum fluidization velocities of the beds were found to be 2.27 and 3.64 mm/s for algal mesh sizes of 0.4-0.6 and 0.6-1 mm diameter, respectively. It was found that the minimum fluidization velocity was not affected by variation of the bed weight, but was a function of the particle diameter. The results showed that the experimental Umf is greater than the calculated value. This may be attributable to the following: the equations for calculating Umf are based on a homogeneous bed of spherical particles, and the calculated Umf does not take into account the friction between the fluid and the column wall.
Keywords: Fluidized bed; Algae; Fluidization velocity; Particles

I.
INTRODUCTION
Fluidized beds have been used widely in the chemical, pharmaceutical and food industries, in wastewater treatment and in the recovery of different substances (Park et al., 1999). Fluidized beds are common and important reactors in process engineering due to their favorable characteristics: uniformity of temperature and concentration, and avoidance of dead zones and clogging (Fu and Liu, 2007). In addition, a fluidized bed reactor offers a high available surface area, since there is no contact between particles, and intimate contact of the entire surface with the wastewater (Sulaymon et al., 2013). The term fluidization describes the condition of fully suspended particles. Liquid or gas is passed upwards through a bed of solid particles; at a certain velocity, the pressure drop across the bed counterbalances the force of gravity on the particles, and a further increase in velocity achieves fluidization at the minimum fluidization velocity. Fluidization quality is closely related to the intrinsic properties of the particles, e.g. particle density, particle size and size distribution, and also their surface characteristics (Richardson, 2002; Asif, 2012). Several studies have pointed out that the first challenge in designing and operating a fluidized bed reactor is finding the minimum fluidization velocity (Umf). The Umf is a crucial hydrodynamic parameter of fluidized beds, as it marks the transition at which the behavior of an initially packed bed of solids changes into that of a fluidized bed. Therefore, its accurate specification is indispensable for successful initial design, subsequent scale-up, and operation of reactors or any other contacting devices based on fluidized bed technology. Industrial practice on fluidized beds usually involves the fluidization of solids over a wide range of particle sizes and/or systems with two or more components.
In these cases, each particle fraction or each solid species has its own minimum fluidization velocity (Ngian, 1980). When a fluid flows slowly upwards through a bed of very fine particles, the flow is streamline and a linear relation exists between pressure gradient and flow rate. If the pressure drop (ΔP) across the whole bed is plotted against fluid velocity (uc) using logarithmic coordinates, as shown in Fig. (1), a linear relation is again obtained up to the point where expansion of the bed starts to take place (A), although the slope of the curve then gradually diminishes as the bed expands and its porosity increases. As the velocity is further increased, the pressure drop passes through a maximum value (B), then falls slightly and attains an approximately constant value that is independent of the fluid velocity (CD). If the fluid velocity is reduced again, the bed contracts until it reaches the condition where the particles are just resting on one another (E). The bed voidage then has the maximum stable value which can occur for a fixed bed of the particles. If the velocity is further decreased, the structure of the bed remains unaffected, provided that the bed is not subjected to vibration. The pressure drop (EF) across this reformed fixed bed at any fluid velocity is then less than that before fluidization. If the velocity is now increased again, it might be expected that the curve (FE) would be retraced and that the slope would suddenly change from 1 to 0 at the fluidizing point. This condition is difficult to reproduce, however, because the bed tends to become consolidated again unless it is kept completely free from vibration. In the absence of channeling, it is the shape and size of the particles that determine both the maximum porosity and the pressure drop across a given depth of fluidized bed.
In an ideal fluidized bed, the pressure drop corresponding to ECD is equal to the buoyant weight of particles per unit area. In practice, it may deviate appreciably from this value as a result of channeling and the effect of particle-wall friction. Point B lies above CD because the frictional forces between the particles have to be overcome before bed rearrangement can take place. In a fluidized bed, the total frictional force on the particles must equal the effective weight of the bed. Thus, the pressure drop across the bed is given by:

\Delta P = (\rho_s - \rho_l)(1-\varepsilon)\,g\,h   (1)

where ρs and ρl are the particle and fluid densities (kg/m³), respectively, ε is the void fraction, g is the gravitational acceleration (9.81 m/s²), and h is the bed height. Eq. (1) applies from the initial expansion of the bed until transport of solids takes place (Richardson, 2002). Ngian and Martin (1980) studied the bed expansion behavior of liquid fluidized beds of char particles coated with attached microbial growth of denitrifying mixed bacteria. They concluded that the correlations recommended by Richardson and Zaki for homogeneous spheres are satisfactory for estimating n for small particles (0.61 mm dia.); for the larger support particles (1.55 mm dia.), the predicted Ui values were found to be 30 to 70% below the experimental values. Tsibranska and Hristova (2010) studied the behavior of activated carbon in a fluidized bed for removal of Pb2+, Cu2+, Cd2+ and Zn2+ ions from aqueous solution. Their work presented complete theoretical equations for bed expansion, minimum fluidization velocity, the external mass transfer coefficient, and the mass balance of a fluidized bed reactor. Sulaymon et al. (2010) studied the hydrodynamic characteristics of three-phase fluidized beds. The experimental work was carried out in a QVF glass column of 10.6 cm diameter and 2 m height. Activated carbon with a diameter of 0.25-0.75 mm and density of 770 kg/m³ was used as the solid phase.
The minimum liquid flow rate required to fluidize a bed of particles was determined from the change in the bed dynamic pressure drop that occurs as the bed changes from a fixed bed to a fluidized bed. It was found that the minimum fluidization velocity increases with increasing particle size. Wang et al. (2011) studied the removal of emulsified oil from water by inverse fluidization of hydrophobic silica aerogels (nanogel). The hydrodynamic characteristics of nanogel granules of different size ranges were studied by measuring the pressure drop and bed expansion as a function of superficial water velocity. The minimum fluidization velocity was measured experimentally by plotting the pressure drop against the superficial fluid velocity. The results showed that the major factors affecting the oil removal efficiency and capacity are the size of the nanogel granules, the bed height, and the fluidization velocity. In recent years, there has been a significant increase in studies of algae as biosorbents for metal removal due to their binding ability, availability and low cost (2003). In this study, algae were used as the solid medium in a liquid-solid fluidized bed reactor. This material has been widely used in biosorption processes for various materials. Therefore, the objectives of this work are: (i) to characterize the physical properties of the algal biomass, such as density, specific surface area and bed void fraction; and (ii) to study hydrodynamic properties such as the Umf of the algal biomass in the fluidized bed reactor.

II. EXPERIMENTAL WORKS AND MATERIALS
2.1 Materials
A mixture of green (Chlorophyta) and blue-green (Cyanophyta) algae was used in this study as the bed material. Large quantities of algae have been observed spreading along the artificial irrigation canal at Baghdad University. This canal is fed by water from the Tigris River.
For this study, algae were collected from a selected location along this canal in April and September 2011. More than 5 kg of fresh algae was collected each month. Samples of 0.5 kg of the collected algae from each month were analysed for genus, species and percentage weight using a microscope. These analyses were carried out according to the standard methods (APHA, 2005) in the laboratories of the Iraqi Ministry of Science and Technology, Water Treatment Directorate. The results showed that five species dominated the two samples, with the alga Oscillatoria princeps present at the highest percentage; the results are listed in Table (1). The collected algae were washed several times with tap water and distilled water to remove impurities and salts. The algal biomass was sun-dried and then dried in an oven at 50 °C for 48 h. The dried algal biomass was shredded, ground to powder and sieved. Mesh sizes of 0.4-0.6 and 0.6-1 mm particle diameter were used. The biomass particle size distribution was determined using a set of standard sieves. Since the algal biomass could swell in water, the biomass was initially soaked in water and then wet sieved. Particle density, surface area and void fraction were measured and are listed in Table (2); these parameters are very important in the characterization of a fluidized bed. Fig. (2) shows two pictures of the powdered algal biomass particles.

2.2 Experiments
Fig. (3) shows a schematic diagram of the fluidized bed reactor used in the experiments of this work. Experiments were carried out in a glass column of 7.5 cm inner diameter and 1 m height; a stainless steel distributor of 5 mm thickness with 0.2 mm hole diameter was installed at the bottom of the reactor to distribute the influent flow smoothly. The flow rate of water was adjusted using a calibrated flow meter.
A U-tube manometer was connected to the reactor to observe the pressure drop along the bed at each flow meter reading; the manometer has an inside diameter of 5 mm and a length of 50 cm. The manometer liquid was carbon tetrachloride (CCl4) with ρl = 1590 kg/m³. The bed heights were read visually. In addition, all experiments were carried out at room temperature. A typical experimental run is described as follows. First, the pressure drop across the empty column was measured at different water flow rates in order to obtain a correlation that could be used to determine the pressure drop of the fluidized bed alone; this was done by subtracting the empty-column pressure drop from the total fluidized bed pressure drop. Then a known weight of the algal biomass particles to be fluidized was loaded into the fluidization column and vigorously agitated with water in order to arrange the particles and break down any internal structure. After that, the bed was left to settle, and then the water flow rate was increased gradually from 0 to 100 l/h. At each flow meter reading, the pressure drop and bed height were measured. The static pressure before the column was kept constant to ensure consistent readings. The algal biomass particles were fluidized by increasing the flow until the drag force on the particles balanced the buoyant force.

III. RESULTS AND DISCUSSION
3.1 Bed Expansion
It is important to establish the relationship between the superficial liquid velocity (U) and the bed voidage (ε) (Ngian, 1980). An accurate description of the bed void fraction is an important prerequisite for determining various hydrodynamic aspects of the fluidized bed, including the minimum fluidization velocity (Nidal et al., 2001). For homogeneous particles in a liquid fluidized bed, it is generally accepted that the most convenient expression relating U to ε is the Richardson-Zaki equation (Richardson, 2002):

U/U_i = \varepsilon^n
(2)

where U is the superficial fluid velocity, Ui is the settling velocity of a particle at infinite dilution, and n is a constant. The index n is a function of the Reynolds number at terminal velocity (Ret) as follows:

n = 4.65 + 20d/D (Ret < 0.2) ………….(3)
n = (4.4 + 18d/D)·Ret^(-0.03) (0.2 < Ret < 1) ……… (4)
n = (4.4 + 18d/D)·Ret^(-0.1) (1 < Ret < 200) ……..(5)
n = 4.4·Ret^(-0.1) (200 < Ret < 500) ……(6)
n = 2.4 (Ret > 500) ……..(7)

where d is the particle diameter and D is the bed diameter. The settling velocity at infinite dilution (Ui) and the terminal velocity (Ut) are related by:

log10(Ui) = log10(Ut) − d/D …………………..(8)

The particle Reynolds number and the terminal velocity are given by:

Rep = ρl·Ut·d/µ ………………….….(9)
Ut = g·d²·(ρs − ρl)/(18·µ) (Rep < 0.2) ….(10)
Ut = 0.153·g^0.71·d^1.14·(ρs − ρl)^0.71/(ρl^0.29·µ^0.43) (Rep > 0.2) ……(11)

where ρs and ρl are the densities of the particle and fluid, respectively; µ is the viscosity of the fluid; and Rep is the particle Reynolds number. Fig. (4) shows the voidage against superficial velocity for the 0.4-0.6 mm particle diameter range. The correlation obtained from this figure is (U in mm/s):

U = 15.24·ε^3.675 ………………. (12)

In addition, the bed voidage can be found experimentally by subtracting the volume of the particles (Vp) from the total volume of the fluidized bed (Vb). Hence, the voidage of the fluidized bed is:

ε = Vε/Vb = (Vb − Vp)/Vb = 1 − Vp/Vb = 1 − mp/(ρs·Vb) = 1 − mp/(ρs·A·hmf) ….. (13)

where Vε is the void volume, mp is the mass of particles (kg), A is the cross-sectional area of the bed (0.0044 m2), and hmf is the bed height (m). The bed voidage of the fluidized algal biomass was found experimentally using Eq. (13) and compared with the theoretical value calculated using Eq. (2). Table (3) shows the calculated values of n and the experimental and theoretical voidage ε. As seen in the table, the calculated voidage values are lower than the experimental values. This may be due to the fact that the Richardson-Zaki equation is based on homogeneous, spherical particles in a liquid fluidized bed.
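The regime selection of Eqs. (3)-(7) and the mass balance of Eq. (13) are straightforward to compute directly. A minimal sketch (SI units; the numerical inputs are illustrative values drawn loosely from Tables 2 and 4, not the exact run the paper analyzed):

```python
def rz_index(re_t, d, D):
    """Richardson-Zaki index n as a function of the terminal Reynolds
    number Re_t, Eqs. (3)-(7); d = particle diameter, D = bed diameter."""
    if re_t < 0.2:
        return 4.65 + 20.0 * d / D
    if re_t < 1.0:
        return (4.4 + 18.0 * d / D) * re_t ** -0.03
    if re_t < 200.0:
        return (4.4 + 18.0 * d / D) * re_t ** -0.1
    if re_t < 500.0:
        return 4.4 * re_t ** -0.1
    return 2.4

def bed_voidage(m_p, rho_s, area, h):
    """Eq. (13): voidage from particle mass, real density,
    bed cross-section and bed height."""
    return 1.0 - m_p / (rho_s * area * h)

# Illustrative: 30 g of biomass (real density 1120 kg/m3, Table 2)
# in a 3 cm bed of cross-section 0.0044 m2.
eps = bed_voidage(0.030, 1120.0, 0.0044, 0.03)
print(round(eps, 2))   # 0.8
```

Note that Eqs. (3)-(7) are piecewise in Ret, so Ut (and hence Ret) must be estimated first from Eqs. (10)-(11) before n can be selected.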
3.2 Minimum Fluidization Velocity
The Umf was determined experimentally by measuring the pressure drop across the bed of algal particles, and then compared with the calculated value. Two mesh sizes of particles were used in this study, 0.4-0.6 mm and 0.6-1 mm diameter. The weights of algal biomass used for each particle diameter range were 30, 50, 70, 100, and 150 g. Fig. (5) shows the pressure drop across the bed against the superficial fluid velocity on a logarithmic scale for the 30 and 70 g algal biomass beds. This graph is used to obtain the minimum fluidization velocity (Umf); it also shows that the pressure drop rises linearly below minimum fluidization, in the packed bed region, and then plateaus above minimum fluidization. The Umf can be read from the sharp change in the pressure drop across the fixed bed region. The pressure drop was found to be lower for the smaller particles (Fig. 5) than for the larger particles, and the fluidized bed height was double the initial static bed height for all algal biomass weights. Several correlations have been proposed for predicting the minimum fluidization velocity; the most important is the Ergun equation (Tsibranska and Hristova, 2010). Equation (14) is a simplified form of the Ergun equation, which may be applied when the flow regime at incipient fluidization is outside the range of applicability of the Carman-Kozeny equation:

Ga = 150·(1 − ε)/ε³·Remf + 1.75/ε³·Remf² …. (14)
Ga = d³·ρl·(ρs − ρl)·g/µ² ……. (15)
Remf = ρl·Umf·d/µ …….. (16)

where Ga is the Galileo number. The value of the void fraction in Eq. (14) was determined from the Richardson-Zaki correlation (Eq. (2)). It is important to note that the Ergun equation contains terms with a third-order dependence on the bed void fraction. As a result, even a small error in the bed void fraction can lead to a significantly larger error in the predicted pressure drop.
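Equation (14) is a quadratic in Remf, so Umf can be obtained in closed form once Ga and ε are known. A minimal sketch, assuming water properties for the fluid (ρl = 1000 kg/m3, µ = 1 mPa·s) and an illustrative ε from Table 3; it will not reproduce Table 4 exactly, since the representative diameter and ε used in the paper's own calculation are not fully specified:

```python
import math

def galileo(d, rho_s, rho_l, mu, g=9.81):
    """Eq. (15): Galileo number."""
    return d**3 * rho_l * (rho_s - rho_l) * g / mu**2

def u_mf(d, rho_s, rho_l, mu, eps, g=9.81):
    """Solve Eq. (14) for Re_mf (positive root of the quadratic),
    then convert to U_mf via Eq. (16)."""
    ga = galileo(d, rho_s, rho_l, mu, g)
    a = 1.75 / eps**3                  # coefficient of Re_mf**2
    b = 150.0 * (1.0 - eps) / eps**3   # coefficient of Re_mf
    re_mf = (-b + math.sqrt(b * b + 4.0 * a * ga)) / (2.0 * a)
    return re_mf * mu / (rho_l * d)    # Eq. (16) rearranged for U_mf

# Illustrative inputs: d = 0.5 mm, rho_s from Table 2, eps from Table 3;
# water properties assumed.
u = u_mf(d=0.5e-3, rho_s=1120.0, rho_l=1000.0, mu=1.0e-3, eps=0.61)
print(round(u * 1000, 2), "mm/s")   # roughly 1.1 mm/s with these inputs
```

The third-order dependence on ε noted above is visible in the `eps**3` denominators: a small error in ε shifts both quadratic coefficients substantially.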
Table (4) shows the minimum fluidization velocity, plateau pressure drop (ΔP) and fluidized bed height (hmf) for the two particle sizes. It can be seen that Umf is not a function of the weight of the bed but of the particle diameter. The experimental Umf was found to be greater than the calculated value. This may be attributable to the following: the equations for calculating Umf are based on homogeneous, spherical particles, and the calculated Umf does not take into consideration the friction between the fluid and the wall of the column. These results are in good agreement with (Sulaymon, 2013).

IV. CONCLUSION
In this study, the minimum fluidization velocity of algal biomass beds was found experimentally and then compared with the value calculated using the Ergun equation. The experimental minimum fluidization velocities were found to be 2.27 and 3.64 mm/s for mesh sizes of 0.4-0.6 and 0.6-1 mm particle diameter, respectively. The experimental Umf was greater than the calculated value, which can be attributed to the Ergun equation assuming a homogeneous bed and spherical particles. The pressure drop was found to be lower for the smaller particles than for the larger particles, and the fluidized bed height was double the initial static bed height for all algal biomass weights.

V. REFERENCES
[1] APHA (American Public Health Association), (2005), "Standard Methods for the Examination of Water and Wastewater", 21st ed., American Public Health Association.
[2] Asif, M., (2012), "Volume-change of mixing at incipient fluidization of binary-solid mixtures: Experimental data and predictive models", Powder Technology, 217, 361-368.
[3] Davis, A., Volesky, B., Mucci, A., (2003), "A review of the biochemistry of heavy metals biosorption by brown algae", Water Research, 37, 4311-4330.
[4] Fu, Y., Liu, D., (2007), "Novel experimental phenomena of fine-particle fluidized bed", Experimental Thermal and Fluid Science, 32, 341-344.
[5] Ngian, K.F., Martin, W.R., (1980), "Bed expansion characteristics of liquid fluidized particles with attached microbial growth", Biotechnol. and Bioeng., 22, 1843-1856.
[6] Nidal, H., Ghannam, M., Anabtawi, M., (2001), "Effect of bed diameter, distributor and inserts on minimum fluidization velocity", Chem. Eng. Technol., 24 (2), 161-164.
[7] Park, Y.G., Cho, S.Y., Kim, S.J., Lee, G.B., (1999), "Mass transfer in semi-fluidized and fluidized ion-exchange beds", Envi. Eng. Res., 4(2), 71-80.
[8] Richardson, J.F., Harker, J.H., Bachurst, J.R., (2002), "Chemical Engineering, Particle Technology and Separation Processes", Vol. 2, 5th Edition, Butterworth-Heinemann.
[9] Sulaymon, A.H., Mohammed, A.A., Al-Musawi, T.J., (2013), "Column Biosorption of Lead, Cadmium, Copper, and Arsenic Ions onto Algae", J. Bioprocess Biotech, 3: 128, doi: 10.4172/2155-9821.1000128.
[10] Sulaymon, A.H., Mohammed, T.H., Jawad, A.H., (2010), "Hydrodynamic Characteristics of Three-phase Non-Newtonian Liquid-Gas-Solid Fluidized Beds", Emirates Journal for Engineering Research, 15 (1), 41-49.
[11] Tsibranska, I., Hristova, E., (2010), "Modelling of heavy metal adsorption into activated carbon from apricot stones in fluidized bed", Chem. Eng. and Processing, 49, 1122-1127.
[12] Wang, D., McLaughlin, E., Pfeffer, R., Lin, Y.S., (2011), "Aqueous phase adsorption of toluene in a packed and fluidized bed of hydrophobic aerogels", Chemical Engineering, 168, 1201-1208.

Fig. 1 Pressure drop across fixed and fluidized beds (Richardson, 2002)
Fig. 2 Pictures of, a: 0.4-0.6 mm diameter powdered algal biomass, and b: microscopic picture of one particle of 0.4-0.6 mm dia.
alga biomass (mesh size: 1x1 mm2)

Fig. 3 Schematic diagram of the fluidization experimental setup (A: water tank, B: pump, C: flow meter, D: distributor, E: column reactor, F: manometer)

Fig. 4 Relationship between voidage and superficial velocity for 0.4-0.6 mm particle diameter (fitted correlation U = 15.24 ε^3.675, R² = 0.984)

Fig. 5 Pressure drop vs. superficial fluid velocity in the algal bed, a: 0.4-0.6 mm and b: 0.6-1 mm particle diameter

Table (1) Division, genus, species and weight percentage of collected algae
Division | Genus and species | June 2011 | September 2011
Cyanophyta | Oscillatoria princeps | 88 % | 91 %
Chlorophyta | Spirogyra aequinoctialis | 5 % | 3 %
Cyanophyta | Oscillatoria subbrevis | 2 % | 2 %
Cyanophyta | Oscillatoria formosa | 3 % | 1 %
Chlorophyta | Mougeotia sp. | 1 % | 2 %
--- | others | 1 % | 1 %

Table (2) Physical properties of algal biomass particles
Property | 0.4-0.6 mm | 0.6-1 mm
Bulk density (kg/m3) | 474 | 400
Real density (kg/m3) | 1120 | 1120
Surface area (m2/g) | 1.88 | 1.65
Particle porosity (-) | 0.713 | 0.77
Static bed void fraction (-) | 0.577 | 0.642

Table (3) Theoretical and experimental voidage for two particle size ranges, at U = Umf
Particle size (mm) | Ui (m/s) | Index (n) | Calculated ε, Eq. (2) | Experimental ε, Eq. (13)
0.4-0.6 | 0.015 | 3.69 | 0.61 | 0.75
0.6-1 | 0.017 | 3.53 | 0.65 | 0.83

Table (4) Umf, ΔP and hmf of two different particle sizes
Particle size (mm) | Mass (g) | Static height (cm) | ΔP (Pa) | hmf (cm) | Calculated Umf (mm/s) | Experimental Umf (mm/s)
0.4-0.6 | 30 | 1.5 | 32.9 | 3 | 2.21 | 2.27
0.4-0.6 | 50 | 2.5 | 56.3 | 5 | |
0.4-0.6 | 70 | 3.5 | 65.5 | 7 | |
0.4-0.6 | 100 | 5 | 80.1 | 10 | |
0.4-0.6 | 150 | 7.5 | 112 | 15 | |
0.6-1 | 30 | 1.8 | 50.6 | 3.6 | 3.11 | 3.64
0.6-1 | 50 | 3 | 66.1 | 6 | |
0.6-1 | 70 | 4.2 | 89.9 | 8.4 | |
0.6-1 | 100 | 6 | 103.3 | 12 | |
0.6-1 | 150 | 9 | 124.8 | 18 | |
American Journal of Engineering Research (AJER), e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-360-366, www.ajer.org, Research Paper, Open Access

A Survey on Security Requirements Elicitation and Presentation in Requirements Engineering Phase
Md. Alamgir Kabir, Md. Mijanur Rahman
1, 2 (Department of Software Engineering, Daffodil International University, Bangladesh)

Abstract: - Secure software development has become a major concern in recent years, and security is a key issue in assuring software quality. Because security is a non-functional requirement, it is ignored most of the time in the requirements phase. However, identifying user security requirements at an early stage of the software development process can reduce development cost and time. IT security must be applied to ensure a reliable system and to protect the assets of the business organization. In this context, the main task is to present the user security requirements together with the user functional requirements collected in the requirements phase of the Software Development Life Cycle (SDLC). A Secure Software Development Life Cycle (SSDLC) starts from security requirements: if we can elicit user security requirements and present them in the requirements phase, then secure software development is ensured from the very beginning. In industry and academia there are several methods to elicit and analyze user security requirements, but few of them are efficient at identifying and presenting those requirements. This paper reflects current research on user security requirements elicitation techniques in the requirements engineering phase. We try to identify the research trend based on related published work.

Keywords: - Requirements Phase, Security Requirements Engineering, Secure Software Development Life Cycle, Security Requirements Model, Security Requirements

I.
INTRODUCTION
In the competitive economic market, the demand for secure and reliable systems is increasing day by day. Successful software development is possible only by considering functional and non-functional requirements equally; non-functional requirements are as important as functional requirements. Generic non-functional requirements for a system include auditability, extensibility, maintainability, performance, portability, reliability, security, testability and usability; among them, security is a vital issue for system development. If we want to develop a reliable and secure system, we have to address security before developing the system, that is, at the earliest stage of the software development life cycle: the requirements stage. In the requirements stage we generally collect user functional requirements. But if we collect user security requirements together with the user functional requirements, then secure software development becomes possible with less effort and cost, because if user security requirements are gathered from users or stakeholders only after some development has taken place, it is more difficult, costly and time-consuming to combine them with the user functional requirements of the product or module. In a real sense, user security requirements are the security requirements for a specific requirement or function. For example, in a login system a user name and password are necessary for a successful login; but if users or stakeholders enter a wrong user name or password repeatedly and continuously, a security risk arises. If, instead, users are allowed to enter the user name and password only the two or three times specified in the requirements stage for that specific function, this kind of security goal is achieved from the early stage with less effort, time and cost.
Beyond this introductory background, the rest of the paper is organized as follows. Section II briefly reports on Security Requirements (SR) in the software development process, Section III on security requirements engineering, and Section IV presents security requirements elicitation and presentation models. Finally, the conclusion is drawn in Section V.

II. SR IN SOFTWARE DEVELOPMENT PROCESS
Adding security requirements to a system that has already been functionally developed is very difficult. Security requirements should be integrated at the requirements stage so that they can be identified in the first parts of the development phase. Salim Chehida and Mustapha Kamel Rahmouni argue that the development of a security policy must be done at the same time as the functional design stage, and that the final model must integrate both the functional and the security specifications [26]. The security of critical systems must start at the early stage and should follow an approach that asks: What are the threats? What do we have to protect? Why? [26]. P. Devanbu stated in "Software Engineering for Security: A Roadmap" that security concerns must inform every phase of software development, from requirements engineering to design, implementation, testing and deployment [27]. Microsoft states that defining and integrating user security requirements early helps minimize disruptions to plans and schedules [3]. The requirements phase of Microsoft's security development lifecycle comprises three activities: (a) establish security requirements, (b) create quality gates/bug bars, and (c) perform security and privacy risk assessments.

Fig. 1
Security development lifecycle: requirements phase

Fig. 1 shows that security requirements receive particular emphasis in the requirements phase of the SDL developed by Microsoft. According to Microsoft's description, the project inception phase is the best time for a development team to consider foundational security and privacy issues and to analyze how to align quality and regulatory requirements with costs and business needs [4]. Viega J. presented in the CLASP application security process, volume 1.1, that CLASP, a plug-in to RUP, is another well-defined and structured method to consider security in the very first step of the software lifecycle; CLASP fully supports UML 2.0 throughout the entire software development lifecycle [5]. Hence, integrating user security requirements with user functional requirements, especially in the requirements analysis phase, is considered one of today's research challenges for secure software development [2].

III. SECURITY REQUIREMENTS ENGINEERING
Requirements engineering is the first major stage of software development. Security requirements are not the initial area of interest of most application developers, and many of them are not knowledgeable about security requirements engineering. For decades, the focus has been on implementing as much functionality as possible before the deadline, and patching the inevitable bugs when it is time for the next release or hot fix [16], [17]. However, the software engineering community is slowly beginning to realize that security requirements are also important for applications [18].
Table 1: Security requirements approaches
1. Knowledge Agent-oriented System (KAOS)
2. Risk Analysis
3. Security Patterns
4. Security Design Analysis (SeDaN)
5. Abuse Cases
6. Software Cost Reduction
7. Threat Trees
8. Fault Trees
9. Problem Frames
10. Security Use Cases
11. Simple Reuse of Software Requirements (SIREN)
12. Threat Modeling for Security Requirements
13. Agile Security Requirements Engineering
14. Security Models
15. Security Development Lifecycle Tool (SDL)
16. Controlled Requirements Expression (CORE)
17. Joint Application Development (JAD)
18. Issue-Based Information Systems (IBIS)
19. Critical Discourse Analysis (CDA)
20. Accelerated Requirements Method (ARM)
21. Quality Function Deployment (QFD)
22. Misuse Cases
23. Abuser Stories
24. Secure TROPOS
25. Security Problem Frames
26. Anti-models
27. i* Security Requirements
28. Common Criteria
29. System Quality Requirements Engineering (SQUARE)
30. Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)
31. Attack Trees
32. Usage-centric Security Requirements Engineering (USeR)
33. Comprehensive Lightweight Application Security Process (CLASP)

Various research efforts are underway on the different aspects of security requirements in the requirements phase, and a rapid growth in security requirements engineering has been visible recently. Some significant contributions stand out as particularly valuable; a selection of trend-setting research contributions is briefly described, one by one, as follows. The Open Web Application Security Project (OWASP) developed a cheat sheet that builds security requirements into multiple parts or modules of the software development process, including the requirements phase; it describes various policies and rules for security requirements in the SDLC [7].
For identifying and measuring security requirements and the related verification methods in requirements engineering (RE), Souhaib Besrour and Imran Ghani presented a paper proposing a new set of tools, together with an effective checklist of security requirements questions that should be considered when identifying and measuring security in the RE phase [8]. P. Salini and S. Kanmani published a review paper on security requirements engineering in which they reviewed, analyzed and compared various requirements engineering methods [9]. Sultan Aljahdali, Jameela Bano and Nisar Hundewale published a review paper on goal-oriented requirements engineering, which helps in identifying the security requirements of goal-oriented requirements engineering [10]. Smriti Jain and Maya Ingle developed a model, the Software Requirement Gathering Instrument, that helps gather security requirements from the various stakeholders; it helps developers gather security functional requirements and incorporate them with functional requirements during the requirements phases of software development [11]. The most comprehensive model for security requirements is currently the SQUARE method, presented by the SEI of Carnegie Mellon University [12]. M. A. Hadavi, V. S. Hamishagi and H. M. Sangchi presented a paper on security requirements engineering that surveys the current research situation by reviewing and classifying the efforts into main categories: security requirements in standard software development processes, security requirements engineering (consisting of eliciting and modeling security requirements), and threat modeling as a basis for security requirements engineering [13]. P. Salini and S. Kanmani presented a survey paper on Security Requirements Engineering (SRE).
In this paper they presented a view of security requirements, security requirements issues and types, and security requirements engineering and its methods, with a comparison of different methods and trends in security requirements engineering. With this short overview, the information security requirements for banks, and the approach that can be adopted for security requirements engineering, can easily be identified by developers [14]. Daniel Mellado, Eduardo Fernández-Medina, and Mario Piattini presented a paper on applying a security requirements engineering process. They presented a case study of SREP (Security Requirements Engineering Process), a standard-centered, reuse-based approach which deals with security requirements at the earlier stages of software development in a systematic and intuitive way, by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle [15].

IV. SECURITY REQUIREMENTS ELICITATION AND PRESENTATION MODEL
Security requirements elicitation is the initial activity of most of the requirements engineering approaches in the requirements phase that we analyzed. This phase is mainly concerned with gathering as much information as possible from a variety of stakeholders, including past documentation [19]. User security requirements elicitation and presentation is the branch of software security requirements engineering concerned with the real-world goals for, security functions of, and constraints on software or modules; it also relates the user functional requirements to precise specifications of software behavior [6]. For the purposes of this review, we focus only on those approaches that proactively address the issue of security. We considered a variety of approaches that could be adapted to engineer security requirements, but we did not consider other approaches because, as they currently stand, they make no mention of security.
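Several of the elicitation approaches considered in this survey, misuse cases in particular, work by attaching to each functional use case the ways it can be abused and the security requirements that mitigate those abuses. A minimal illustrative sketch of that linkage, echoing the login example from the introduction (all names are hypothetical and not taken from any surveyed method's tooling):

```python
from dataclasses import dataclass, field

@dataclass
class MisuseCase:
    name: str                                         # a threat on a function
    mitigations: list = field(default_factory=list)   # security requirements

@dataclass
class UseCase:
    name: str                                         # functional requirement
    misuse_cases: list = field(default_factory=list)

def unmitigated(use_cases):
    """Misuse cases not yet covered by any security requirement."""
    return [m.name for uc in use_cases for m in uc.misuse_cases
            if not m.mitigations]

# Login example: a misuse case with its mitigating security requirement.
login = UseCase("Log in")
brute = MisuseCase("Brute-force password guessing")
brute.mitigations.append("Lock the account after three failed attempts")
login.misuse_cases.extend([brute, MisuseCase("Session hijacking")])
print(unmitigated([login]))   # ['Session hijacking']
```

The point of the sketch is the traceability it makes explicit: every security requirement is anchored to a specific functional requirement, and misuse cases without mitigations surface as open elicitation work.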
Over 30 SRE approaches, listed in Table 1, were originally considered [19]; these are models of security requirements. In general, we can address and evaluate security requirements before and after development, but our main target is to address security requirements before development, in the requirements analysis phase; not all of the models in Table 1 address security before development in the requirements phase of the SDLC. M. A. Hadavi, V. S. Hamishagi and H. M. Sangchi presented a paper named "Security Requirements Engineering: State of the Art and Research Challenges" in the Proceedings of the International Multi Conference of Engineers and Computer Scientists 2008, Vol. I, IMECS 2008, 19-21 March 2008, Hong Kong, in which they described current research activities and methods in SR engineering [1].

Fig. 2 Categorization of research activities and current methods in SR engineering

In Fig. 2, security requirements approaches are divided into categories: some approaches are for threat modelling and risk assessment, some for the software development process, and some for security requirements elicitation and presentation. We focus on the elicitation and presentation approaches, which are misuse cases, abuse cases, mitigation cases, security use cases, security standards, using attack patterns, and holding brainstorming sessions. Researchers are working on different aspects of security requirements elicitation and presentation models in the requirements phase. A selection of recently published research contributions is briefly described one by one, as follows, and arranged in Table 2. Guttorm Sindre, Donald G. Firesmith and Andreas L. Opdahl presented a paper named "A Reuse-Based Approach to Determining Security Requirements", in which they propose a reuse-based approach to determining security requirements.
Development with reuse involves identifying security assets, setting security goals for each asset, identifying threats to each goal, analyzing risks, and determining security requirements, based on reuse of generic threats and requirements from a repository [20]. Use cases are widely used for functional requirements elicitation; however, non-functional security requirements are often neglected in this requirements analysis process. To address this issue, Thitima Srivatanakul, John A. Clark, and Fiona Polack presented a paper named "Effective Security Requirements Analysis: HAZOP and Use Cases". The paper takes one such technique, HAZOP, and applies it to a widely used functional requirements elicitation component, UML use cases, in order to provide systematic analysis of potential security issues at the start of system development [21]. Guttorm Sindre and Andreas L. Opdahl presented a paper named "Eliciting Security Requirements with Misuse Cases", in which they present a systematic approach to eliciting security requirements based on use cases, with emphasis on description and method guidelines. The approach extends traditional use cases to also cover misuse, and is potentially useful for several other types of extra-functional requirements beyond security [22]. Ala A. Abdulrazeg, Norita Md Norwawi and Nurlida Basir published a paper named "Security Measurement Based on GQM to Improve Application Security during Requirements Stage", in which they present a security metrics model based on the Goal Question Metric (GQM) approach, focusing on the design of the misuse case model. Misuse cases are a technique to identify threats and integrate security requirements during the requirements analysis stage; the security metrics model helps in discovering and evaluating misuse case models by ensuring a defect-free model.
Here, the security metrics are based on the OWASP Top 10 (2010), in addition to misuse case modeling anti-patterns [23]. The Common Criteria is often too confusing and technical for non-security specialists to understand and therefore properly use. At the same time, it is essential that security-critical IT products under development be validated according to such standards not after but during the software engineering process. To help address these issues, Michael S. Ware, John B. Bowles and Caroline M. Eastman published a paper named "Using the Common Criteria to Elicit Security Requirements with Use Cases", presenting an approach to eliciting security requirements for IT systems with use cases, using Common Criteria methodologies. Their focus is to ensure that security issues are considered early during requirements engineering, while making the Common Criteria more readily available to end users in an understandable context [24]. Smriti Jain and Maya Ingle published a paper named "Software Security Requirements Gathering Instrument" in the (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 7, 2011. The paper describes the Software Security Requirements Gathering Instrument (SSRGI), which helps gather security requirements from the various stakeholders and guides developers in gathering security requirements along with functional requirements, further incorporating security during the other phases of software development. They presented case studies describing the integration of the SSRGI instrument with the Software Requirements Specification (SRS) document as specified in standard IEEE 830-1998. The proposed SSRGI supports software developers in gathering security requirements in detail during the requirements gathering phase [25]. Pauli, J. and Dianxiang Xu presented a paper named "Integrating Functional and Security Requirements with Use Case Decomposition" at Engineering of Complex Computer Systems, 2006.
ICECCS 2006, the 11th IEEE International Conference, Stanford, CA. In this paper they proposed an approach to decomposing use cases, misuse cases, and mitigation use cases [28]. Donald Firesmith presented a paper named "Security Use Cases" in the Journal of Object Technology, vol. 2, no. 3, May-June 2003, pp. 53-64, which provides examples and guidelines for properly specifying essential (i.e., requirements-level) security use cases [29].

V. CONCLUSION
In this paper, we have discussed security requirements engineering, security requirements in the software development process, and security requirements elicitation and presentation models. We have listed the security requirements approaches of security requirements engineering and specified models for security requirements elicitation and presentation. This research work provides knowledge of security requirements elicitation and presentation approaches for the requirements phase of the software development life cycle; these models associate security requirements with user functional requirements. Future work may include developing an elicitation and presentation method for security requirements in the requirements phase, and techniques for combining security requirements with functional requirements. A further task may be to develop a security requirements testing tool efficient enough to preserve security requirements through the requirements phase. A mathematical model can also be developed for evaluating security requirements in the requirements analysis phase. We have also planned a model for identifying user security requirements for specific functional requirements in the requirements analysis phase for secure software development.

REFERENCES
[1] M. A. Hadavi, V. S. Hamishagi, H. M.
Sangchi, "Security Requirements Engineering; State of the Art and Research Challenges", Proceedings of the International Multi Conference of Engineers and Computer Scientists 2008, Vol. I, IMECS 2008, 19-21 March 2008, Hong Kong.
[2] Paolo Giorgini, Fabio Massacci, Nicola Zannone, "Security and Trust Requirements Engineering", Foundations for Security Analysis and Design, Lecture Notes in Computer Science, Volume 3655, Berlin: Springer, 2005.
[3] Security Development Lifecycle. Retrieved December 6, 2013, from http://www.microsoft.com/security/sdl/default.aspx
[4] Security Development Lifecycle. Retrieved December 7, 2013, from http://www.microsoft.com/security/sdl/process/requirements.aspx
[5] Viega J., "The CLASP Application Security Process", Volume 1.1, Training Manual, Secure Software Inc., 2005.
[6] D. Gollmann, J. Meier, and A. Sabelfeld (Eds.), "Applying a Security Requirements Engineering Process", ESORICS 2006, LNCS 4189, pp. 192-206, Springer-Verlag Berlin Heidelberg, 2006.
[7] The Open Web Application Security Project (OWASP) cheat sheet, 2013. Retrieved December 7, 2013, from https://www.owasp.org/index.php/Secure_SDLC_Cheat_Sheet#Purpose
[8] Besrour Souhaib and Ghani Imran, 2012, "Measuring Security in Requirement Engineering", International Journal of Informatics and Communication Technology (IJ-ICT), Vol. 1, No. 2, pp. 72-81.
[9] Salini P. and Kanmani S., 2012, "Survey and Analysis on Security Requirements Engineering", Computers and Electrical Engineering, Volume 3, Issue 6, pp. 1785-1797.
[10] Aljahdali Sultan, Bano Jameela and Hundewale Nisar, 2011, "Goal Oriented Requirements Engineering - A Review", ISCA CAINE, 1-880843-83-3.
[11] Jain Smriti, Ingle Maya, 2011, “Software Security Requirements Gathering Instrument”, International Journal of Advanced Computer Science and Applications, Vol. 2, No. 7, pp 116-121.
[12] Christian T. and Mead N., 2010, “Security Requirements Reusability and the SQUARE Methodology”, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, Technical Note CMU/SEI-2010-TN-027. Retrieved on March 7, 2013, from http://www.sei.cmu.edu/library/abstracts/reports/10tn027.cfm
[13] Hadavi M. A., Hamishagi V. S. and Sangchi H. M., 2008, “Security Requirements Engineering; State of the Art and Research Challenges”, International MultiConference of Engineers and Computer Scientists, Vol. I, pp 19-21.
[14] P. Salini and S. Kanmani, 2011, “A Survey on Security Requirements Engineering”, International Journal of Review in Computing, Vol. 8, pp 1-10.
[15] Daniel Mellado, Eduardo Fernández-Medina, and Mario Piattini, 2006, “Applying a Security Requirements Engineering Process”, ESORICS 2006, LNCS 4189, pp 192-206, Springer-Verlag Berlin Heidelberg.
[16] P. Coffee, “Security Onus Is on Developers”, eWeek, 7 December 2013, www.eweek.com/article2/0,1895,1972593,00.asp
[17] H. Mouratidis, P. Giorgini, and G. Manson, “When Security Meets Software Engineering: A Case of Modeling Secure Information Systems”, Information Systems, vol. 30, no. 8, 2005, pp. 609-629.
[18] J. D. Meier, “Web Application Security Engineering”, IEEE Security & Privacy, vol. 4, no. 4, 2006, pp. 16-24.
[19] Jose Romero-Mariona, Hadar Ziv, Debra J. Richardson, “Security Requirements Engineering: A Survey”, ISR Technical Report # UCI-ISR-08-2, August 2008.
[20] Guttorm Sindre, Donald G. Firesmith and Andreas L. Opdahl, “A Reuse-Based Approach to Determining Security Requirements”.
[21] Thitima Srivatanakul, John A. Clark, and Fiona Polack, “Effective Security Requirements Analysis: HAZOP and Use Cases”, K. Zhang and Y. Zheng (Eds.): ISC 2004, LNCS 3225, pp. 416-427, Springer-Verlag Berlin Heidelberg, 2004.
[22] Guttorm Sindre, Andreas L.
Opdahl, “Eliciting Security Requirements with Misuse Cases”. Received: 15 February 2002 / Accepted: 5 March 2004 / Published online: 24 June 2004, Springer-Verlag London Limited, 2004.
[23] Ala A. Abdulrazeg, Norita Md Norwawi and Nurlida Basir, “Security Measurement Based on GQM to Improve Application Security during Requirements Stage”, International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(3): 211-220, The Society of Digital Information and Wireless Communications (SDIWC), 2012 (ISSN: 2305-0012).
[24] Michael S. Ware, John B. Bowles and Caroline M. Eastman, “Using the Common Criteria to Elicit Security Requirements with Use Cases”.
[25] Smriti Jain and Maya Ingle, “Software Security Requirements Gathering Instrument”, (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 7, 2011.
[26] Salim Chehida and Mustapha Kamel Rahmouni, “Security Requirements Analysis of Web Applications using UML”, Proceedings ICWIT 2012.
[27] P. Devanbu, “Software Engineering for Security: A Roadmap”, 2000.
[28] Pauli, J. and Dianxiang Xu, “Integrating Functional and Security Requirements with Use Case Decomposition”, Engineering of Complex Computer Systems, ICECCS 2006, 11th IEEE International Conference, Stanford, CA, 2006.
[29] Donald Firesmith, “Security Use Cases”, Journal of Object Technology, vol. 2, no. 3, May-June 2003, pp. 53-64. http://www.jot.fm/issues/issue_2003_05/column6
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-214-224 www.ajer.org Research Paper Open Access

Prediction and Evaluation of Collection Efficiency of Municipal Solid Waste Collection in Uyo Metropolis

Obot E. Essien1, Obioma C. Nnawuihie2, Joseph C. Udo3
1,2 Department of Agricultural and Food Engineering; 3 GIS Unit, Department of Geography and Regional Planning, University of Uyo, PMB 1014, Uyo, Nigeria.

Abstract: - The collection efficiency of the haul-container system for municipal solid waste collection in Uyo metropolis was evaluated based on the variables of operating time, dispatch time and loss time. Data on those variables were measured in a time study of route zones in Uyo using continuous stopwatch timing, and were analyzed for mean, standard deviation, covariance, ANOVA and correlations. Regression models of efficiency on operation and loss time components showed an efficient and significant association (R2 = 1.000, p < 0.01), and the high values of collection efficiency varied with route zone, making the models a precise predictive tool. The Nash-Sutcliffe coefficient of performance was used to test the goodness of fit. The coefficient of performance varied with route zones and operation times in the order: zone 6 (98%) > zone 4 (80%) > zone 2 (60%) > zone 3 (40%). The variation in efficiency was affected by the distribution of waste receptacles in the route zone designs and by time loss factors. Route zone designs and dispatch station locations which will effect close equalization of cycle time (dump-trip time) are recommended.

Keywords: - Time study, municipal solid waste collection, haul-container system, route zones, operation hours, efficiency

I.
INTRODUCTION
Solid waste management is the application of techniques that ensure the orderly execution of the basic functions of collection, transportation, processing and disposal of solid wastes (Masters, 1991; Sincero and Sincero, 2006). Horsfall et al. (1998) described municipal solid waste (MSW) collection as the activities of orderly gathering of solid wastes and hauling them to where the collection vehicle is emptied. Those functions are rendered according to the best principles of public health, economics, ergonomics, engineering conservation, neighborhood aesthetics and other environmental considerations that regard public attitudes (George, 1977). The intent is to keep the environment clean, devoid of nuisances and diseases (Henry and Heinke, 2005), and to improve neighbourhood aesthetics and reduce public health risk (US EPA, 1999; Gwinnett County, 2012). Different designs of collection operation are available in developed economies, aimed at improving collection efficiency. The haul container system (HCS), with a one-hauler-per-zone collection mode, was introduced to Uyo municipality following the unsatisfactory performance of previous methods (backyard waste dumping/burning and the stationary-container-on-curbsides system). Backyard waste dumping was in place prior to the new status of Uyo as a capital territory (1987). With urban development, the fallowed patches of land which received litters of yard waste vastly disappeared, making it difficult to litter wastes in patches of bush and built-up areas of the metropolis. Also, the once-a-month sanitation day cleaning-up exercise cleared gutters and brought out heaps of garbage, yard trimmings and other solid wastes, but the tipper lorries used for collection could not remove all the heaps of generated wastes, sometimes for weeks after generation, even until another mass waste cleaning-up sanitation day came around.
That meant, in some cases, that heaps of wastes were left uncollected for one month after their generation on sanitation day. These were washed by runoff back into the gutters or, where possible, were openly burnt on the curbsides. The problems were attributed to the insufficient tipper lorries released on the sanitation day for collection and disposal, and the lack of adequate crew volunteers, since the collection crew and equipment were based on voluntary participation. Thus, after the sanitation day, the tippers which turned up for the exercise were withdrawn by their owners, and the collection crew had no motivation to continue on voluntary service. Thus, the generated wastes sometimes remained until the return of the next sanitation day. Recyclables were sorted by scavengers (sorters) at the dumpsite. Also, during the collection of heaps of wastes by the tippers on sanitation day, the crew spent time sorting the recyclables from the mixed wastes at the curbside, which delayed rapid mounting of containers and made it difficult to complete collection in one sanitation day. Uyo had not hitherto witnessed a planned and scientific solid waste management programme (Uwem, 2005). Although research and information data on waste management were scanty, there was a significant increase in the volume and composition of wastes generated daily in Uyo, as well as in other major towns in the state (Uwem, 2005). Therefore, the use of the haul container system (HCS) on a one-hauler-per-zone basis was expected to offset the failures of the previous methods and produce efficient, effective and cost-effective solid waste collection through private-agency handling of solid waste collection and transportation to disposal (Gwinnettcounty.com, 2012; NSWMA, 2012).
Gwinnett County's choice of hauler preferred one solid waste hauler per zone on the claims that it increased collection efficiency, limited truck traffic in residential neighborhoods and reduced noise pollution (Gwinnettcounty.com, 2012). Also, it has been observed that outsourcing MSW collection to private management resulted in money savings and efficiency maximization (NSWMA, 2012; World Bank, 2000; Gwinnettcounty.com, 2012). Therefore the objectives of the study were:
1. to analyze the time components of the daily operation of MSW collection activities and the time losses in HCS daily collection of solid waste to disposal at the dumpsite in Uyo;
2. to evaluate the daily collection efficiency of the one-hauler-truck-per-zone operation by HCS;
3. to make recommendations for sustainable, efficient MSW collection in Uyo.

II. MATERIALS AND METHODS
2.1 Measurement of time of activities and distance
Time study (work measurement) was applied. Time study is the art of observing and recording the time required to do each detailed element of an industrial operation, where industrial (product or service) operation includes manual, mental and machining operations (Sharma et al., 2004; Nuutinon, 2013). In this case, a service industry was involved, with manual, mental and driving operations which combined to determine the time of operation and hence the efficiency. Measurements of the times of activities and the distances moved were involved. The time involved was both the on-the-job activity time and the non-job-related (or off-route) time (called time allowance). The continuous method of stopwatch timing was used for measurement of all time elements of the component activities of municipal solid waste (MSW) collection and transportation to disposal operations at the MSW dumpsite, Uyo. These times were used to compute the cycle time (tnet). For time measurement, the stopwatch was set at zero at the dispatch station and pressed on at the release of the truck(s).
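The continuous timing method described above can be sketched as follows (an illustrative Python snippet, not part of the study; the readings are hypothetical): because the stopwatch runs without being reset, each activity's duration is recovered by differencing consecutive cumulative readings, and the element times sum back to the trip's cycle time.

```python
# Continuous stopwatch timing: the watch is never reset during a trip, so an
# activity's duration is the difference between consecutive cumulative readings.

def element_times(cumulative_readings):
    """Recover per-activity durations from continuous (cumulative) readings."""
    return [b - a for a, b in zip(cumulative_readings, cumulative_readings[1:])]

# Hypothetical cumulative readings (minutes): trip start plus six activity marks.
readings = [0.0, 6.5, 15.2, 18.0, 21.4, 29.9, 33.1]
durations = element_times(readings)
cycle_time = sum(durations)  # telescopes back to the final reading (33.1 min)
```

This is why continuous timing is robust for chequered field activities: a missed intermediate reading loses only one element, not the whole trip record.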
Travel distances were measured by reading off the counter of the odometer on the dashboard of the hauler truck. Counter checking was made at the “start” and “stop” schedules of each activity.
2.2 Sampling Duration
Four route zones were randomly selected in the study area, Uyo metropolis, Nigeria (Fig. 1), for the time study of HCS collection in 2010, which lasted for two months in the wet season (June and July) and two months in the dry season (the active period of December and January), at 2 weeks per month. The temporal staggering was to meet seasonality and festive periods as well as the cost and logistic implications, especially as agreed with the MSW management contract service agency. The respective hauler trucks attached to the route zones were identified as 046 for zone 2, 053 for zone 3, 060 for zone 4 and 072 for zone 6. The times for each activity in the operations were added up and averaged for each time element for computation of the net cycle time, tnet, and the time loss.
2.3 Evaluation of operation times
Total available cycle time is the net travel time for a trip (or a collection cycle) (Sincero and Sincero, 2006; McCreanor, 2008). Net time per trip is given as:
tnet = m1 + h1 + s + u + h2 + m2 + dl (1)
Figure 1: Municipal solid waste dumpsite and solid waste receptacle sites in route zones in Uyo metropolis, Nigeria.
where m1 is the time taken to mount the used (or loaded) container onto the collection truck at the generation station; h1 is the time from the (container) station to the disposal site with the loaded container; s is the time taken to unmount the loaded container at the disposal site (mins); u is the time taken to mount the empty container at the disposal site; h2 is the time to drive back to the same container station with the empty container; m2 is the time taken to un-hitch the empty container at the same station before moving to the next station; and dl is the component time taken to move from the previous to the next container station, min.
Total allowance, TL = total loss time; hence,
TL = Σ(i = 1 to n) (Dq + Bt + Dd + Hu + Dl + Ud + De) (2)
where Dq is the delay (queuing time) en route the narrow lane to the dumpsite, min; Bt is extended break time by drivers to recover from mental/physical fatigue; Dd is dispatch delay caused by sudden truck breakdown; Hu is hold-up along the roads; Dl is extended lunch-time break; Ud is unnecessary delay (e.g. for sorting or scooping of spilt waste) at collection stations; and De is delay in evacuation, caused by insufficient trucks required to carry out round-trip activities.
Working period, H = total number of working hours per day; this is generally 8 hours per day. Operation hour = total available cycle hour, ho:
ho = total work period per day − total time allowance = total working hours per day − total time loss, i.e.
ho = H − (w + t1 + t2) (3)
where total loss time Tl = (w + t1 + t2) = w + t, with t = t1 + t2. (4)
2.4 Evaluation of Efficiency
HCS collection efficiency, Ef = total available cycle time per day / total work period per day (5)
The stopwatch measured the complete time for a cycle or trip, as well as the loss time (w) and the dispatch time (t). Thus, ho = H − w − t.
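The time bookkeeping in Equations (1)-(5) can be sketched as follows (a minimal illustration with made-up numbers; the variable names mirror the symbols defined above):

```python
# Net cycle time per trip, Eq. (1): sum of mount, haul, unload, return and
# inter-station move times (minutes; all values here are hypothetical).
m1, h1, s, u, h2, m2, dl = 3.0, 8.5, 2.0, 1.5, 7.0, 2.5, 1.0
t_net = m1 + h1 + s + u + h2 + m2 + dl  # 25.5 min

# Daily operation hour, Eq. (3): working period minus dispatch and loss times.
H = 8.0             # working period, hr/day (generally 8 hours)
t1, t2 = 0.25, 0.2  # dispatch-out and return-to-station times, hr (hypothetical)
w = 1.5             # off-route (non-job-related) loss time, hr (hypothetical)
h_o = H - (t1 + t2 + w)  # total available cycle time, hr

# Collection efficiency, Eq. (5)/(7), as a percentage of the working period.
Ef = h_o / H * 100
```

With these illustrative numbers, ho = 8.0 − 1.95 = 6.05 hr and Ef ≈ 75.6%, in the same range as the daily efficiencies reported for the route zones.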
(6)
Then, efficiency: Ef = (ho/H) × 100, % (7)
where H = Σ t1i + Σ t2i + Σ wi + ho; hence, the daily operating hour ho is:
ho = H − (t1 + t2 + w) = H − t − w (8)
where ho is the operation time or total available cycle time; H is the working hours per day; t1 and t2 are the dispatch time from the dispatch station to the first container station (t1) and the time to return from the last container station of the day back to the dispatch station (t2); t = t1 + t2 is the overall dispatch time; and w is the loss time in the collection trips in a working day.
2.4.1 Precautions: The following acts of caution were exercised:
1. the starting time at the dispatch station was the same for each day;
2. fueling was served to a full tank at the dispatch station before the start of the trips;
3. a check of the overall worthiness of each truck was made prior to releasing it for the day's exercise;
4. for less error in the data, all members of the team were briefed on the rules, ethics and involvements before the survey started, and how to measure time and distance using the respective instruments was demonstrated.
2.5 Statistical Analysis
Descriptive statistics, covariance, reliability tests, analysis of variance and tests of significant differences were made with the use of SPSS software version 17. Correlation and regression analyses were used to model the relationship between operation time and efficiency.
III. RESULTS AND DISCUSSION
The results were obtained for the cycle times of four trucks (046, 053, 060 and 072) for five trips on weekdays and Saturday and are given in Table 1. Table 2 contains values of collection efficiency and the data of the variables for computing the efficiency of the HCS pick-up trucks.
3.1 Cycle time (tnet)
The data on the elements composing the net travel time, tnet (or cycle time), were averaged and summarized. Sample tnet values for 5 trips are shown in Table 1 for eighteen (18) entries in 7 variables for the four trucks (046, 053, 060 and 072). The fastest truck was 072, with the least cycle time of 17.30 mins.
The reason for this is not clear, but a cursory look at the area showed that the placement of receptacles was not deep into the largely community area because of the bad roads; as such, solid waste burning went on at many roadside locations supposedly marked for waste containers but which stayed for days without being picked up. As such, fewer MSW were actually collected for disposal (175 kg/km2; Nnawuihe, 2006). The roads in this zone were largely not tarred compared to all other zones. Therefore, urban road development is needed in the communities making up this zone to improve collection.
3.2 HCS Collection Efficiency
3.2.1 Time components affecting efficiency: The time components affecting efficiency are captured in function (3), comprising the dispatch times (t1 + t2 = t), the non-job-related or off-route losses (w), and the operating hour, ho. Operation hour or total available cycle time (Sharma et al., 2004) varied amongst trucks or route zones, although not significantly, being 6.06 hrs for truck 046 (route zone 2), 5.96 hrs for 053 in zone 3, 6.58 hrs for 060 in zone 4 and 5.21 hrs for 072 in zone 6. Truck 072, prowling the widest area, zone 6 (2.25 ha), recorded the lowest mean daily operation hours (ho) of 5.21 hrs, while truck 060, which covered one of the two smallest zones (1.0 ha) but with the greatest waste load capacity (1699 kg) (Nnawuihe, 2006), recorded the highest mean daily operation hour (6.58 hrs), followed by truck 046 with ho = 6.06 hrs. Figure 2 shows the component mean time distribution pie charts with the mean time distribution in percentages. Dispatch time occupied between 5 and 6% of the time, with 046 having 6%, and 053, 060 and 072 individually having 5%. The loss time varied significantly (p = 0.05) between the zones or truck operations, being 19% for 046, 20% for 053, 7% for 060 and 30% for 072. This is a very reasonable account, especially as truck 072 in the widest zone had the highest loss time.
The operation time (ho) distribution in the daily solid waste collection operation also showed a significant variation (p = 0.05), with 046 having 75% of the working hours of the day, 053 with 75%, 060 with 88% and 072 with 65% (the smallest) (Figures 1 and 2).
3.3 HCS Collection Efficiency, Ef
The collection efficiency of the haul container system of municipal solid waste management was related to the total available cycle time or operation hour ho as in (7). The computed values of efficiency are shown in Table 3, while Figure 3 shows, in composite charts and comparatively, the efficiency and deficiency of route-zone solid waste collection by HCS pick-up trucks. Daily collection efficiency varied significantly with the travel time components, hence with total available cycle time (ho), loss time (w) and total dispatch time (t), as well as with route zone. For truck 046 (route zone 2), the average daily Ef was 79%, while it was 75% for truck 053 (route 3), 82% for truck 060 (route 4) and 65% for truck 072 (route 6) (Table 2). The hierarchy of efficiency was in the order truck 060 > 046 > 053 > 072, showing truck 060 to be the most efficient collector with the highest efficiency (82%). Truck 072 also had the lowest average efficiency as well as the lowest tnet and ho. Collection efficiency varied with daily cycle time in the order 072 < 053 < 046 < 060 (Table 2), which is the same sequence as for efficiency. However, the variation of efficiency with loss time (w) and dispatch time (t) did not follow the above pattern, hence regression analysis was used to understand their relationships.
The variation of efficiency with total daily tnet showed significant differences between the values for the five week days (Monday, Tuesday, Wednesday, Thursday and Friday), although a definite pattern was not observed, except that for some route zones and trucks, such as 060, 053 and 072, loss time and total loss time were highest on Tuesdays, while for 046 in particular the highest loss times and total loss time occurred on Mondays. These significant differences or variations made it necessary to use multiple regression analysis to understand their relational effect on HCS MSW collection efficiency.
Table 1: Cycle time for individual trips, tnet, from travel time components, and total net time for solid waste collection
Trucks/Day: 046 M T W T F S; 072 M T W; columns Trip 1, Trip 2, Trip 3, Trip 4, Trip 5, Hr:
24.60 29.35 26.09 27.38 25.35 24.54 21.20 24.25 22.35 22.27 19.57 19.30 24.30 20.25 21.30 18.57 23.38 18.11 18.80 13.10 13.15 12.58 22.32 14.50 24.3 21.2 21.13 22.70 24.1 15.52 1.20 1.20 1.45 1.49 1.31 1.32 18.0 19.29 16.50 14.80 16.32 17.08 17.60 17.21 16.35 14.33 16.38 18.55 15.32 15.35 16.33 1.19 1.26 1.29
T F S; 060 M T W T F; 053 M T W:
23.10 19.37 21.70 17.32 16.02 17.38 16.33 19.08 14.12 16.00 19.53 16.18 16.22 20.15 1.30 1.34 1.10 24.80 25.47 25.37 20.40 18.15 29.80 23.37 24.50 24.33 24.43 23.02 23.33 26.43 24.15 23.22 25.02 27.13 23.90 25.80 21.40 25.37 17.5 23.48 24.25 20.17 2.70 1.57 2.03 15.8 1.48 24.33 24.02 26.25 23.28 21.28 36.50 28.35 23.38 26.45 22.60 23.60 22.08 20.58 20.48 24.47 1.22 1.53 2.16
Figure 2: Percentage mean time distribution in HCS solid waste collection in respective route zones for pick-up trucks 046, 053, 060 and 072 (046, zone 2: t 6%, w 19%, ho 75%; 053, zone 3: t 5%, w 20%, ho 75%; 060, zone 4: t 5%, w 7%, ho 88%; 072, zone 6: t 5%, w 30%, ho 65%).
Figure 3: Comparative efficiency and deficiency of waste collection in route zones by HCS pick-up trucks 046, 072, 060 and 053.
Table 2: Dispatch time, non-job activity (loss) time, total time loss, operation time and collection efficiency in HCS MSW collection in route zones, Uyo
Truck/Day | Dispatch time t (hr) | Loss time w (hr) | Total loss time Tl (hr) | Operation time ho (hr) | Efficiency Ef (%)
060 M | 0.483 | 1.683 | 1.866 | 6.32 | 79
060 T | 0.350 | 1.819 | 2.169 | 5.83 | 73
060 W | 0.350 | 0.660 | 1.010 | 6.99 | 87
060 T | 0.383 | 0.870 | 1.253 | 6.75 | 84
060 F | 0.350 | 0.660 | 1.010 | 6.99 | 87
060 Avg | 0.380 | 0.570 | 0.950 | 6.58 | 82
053 M | 0.517 | 1.222 | 1.74 | 6.26 | 78
053 T | 0.350 | 1.683 | 2.03 | 5.97 | 75
053 W | 0.217 | 1.633 | 1.85 | 6.15 | 77
053 Avg | 0.360 | 1.513 | 2.04 | 5.96 | 77
046 M | 0.508 | 2.747 | 3.255 | 6.06 | 76
046 T | 0.425 | 0.887 | 1.312 | 6.69 | 84
046 W | 0.410 | 0.973 | 1.433 | 6.57 | 82
046 T | 0.483 | 1.775 | 2.175 | 5.83 | 73
046 F | 0.450 | 1.063 | 1.513 | 6.49 | 81
046 Avg | 0.455 | 1.489 | 1.938 | 6.06 | 79
072 M | 0.383 | 2.433 | 2.816 | 5.184 | 65
072 T | 0.450 | 2.732 | 3.820 | 4.818 | 60
072 W | 0.482 | 2.101 | 2.583 | 5.417 | 68
072 T | 0.332 | 2.849 | 3.181 | 4.819 | 60
072 F | 0.316 | 1.899 | 2.215 | 5.785 | 72
072 Avg | 0.393 | 2.403 | 2.923 | 5.21 | 65
NB: Total loss time = t + w
3.4 Regression Analysis
Using the data for loss time (w), dispatch time (t) and total available cycle time (ho) (Table 2), multiple-variable regression equations were modeled for collection efficiency as a function of (t, w, ho). The coefficients of the general multiple regression equations are tabulated in Table 3 for trucks 046, 053, 060 and 072. The general form of the regression equations for the collection efficiencies under the travel times of trucks 046, 053, 060 and 072 in their respective zones was obtained by substituting the coefficients into the general regression model of collection efficiency.
Thus, the general regression equation for the collection operation by each truck, using averages of values, was (signs follow the unstandardized coefficients in Table 3):
046: Ef = −3.297T + 4.992W + 17.339ho + 35.279, R2 = 1, p < 0.05 (8)
053: Ef = −0.583T + 10.680ho + 11.442 (W excluded), R2 = 1, p < 0.001 (9)
060: Ef = −2.176T + 1.062W + 13.130ho − 4.720, R2 = 1.000, p < 0.01 (10)
072: Ef = 2.547T + 0.2547W + 12.670ho − 1.941, R2 = 0.9988 (11)
Table 3: Regression model coefficients (unstandardized) for the efficiency-time relationship, coefficient of determination (R2), ANOVA and significance
Truck | Constant | T | W | Ho | R2 | Adj R2 | f-ratio
046 | 35.279 | −3.297 | 4.992 | 17.339 | 1.000 | 0.999 | 1416.940
053 | 11.442 | −0.583 | (excluded) | 10.680 | 1.000 | - | -
060 | −4.720 | −2.176 | 1.062 | 13.130 | 1.000 | 1.000 | 9.745E07
072 | −1.941 | 2.547 | 0.2547 | 12.670 | .998 | .977 | 611.581
Ns = not significant
Using ANOVA and the f-ratio, the effects of the groups of variables on efficiency were tested. For truck 046, the differences between the groups of variables (t, w, ho) had a significant effect on efficiency at p = .05, with R2 = 100% and adjusted R2 = 99.9%, in which case the variance between the variables completely explained any difference in efficiency. For truck 053, the variables (t, w, ho) had a very significant effect on efficiency at p < 0.001.
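Fitting models of the form of Equations (8)-(11) can be sketched as follows (an illustrative Python snippet; the study used SPSS, and numpy's least-squares routine is a stand-in here). The daily (t, w, ho, Ef) values for truck 060 are taken from Table 2:

```python
import numpy as np

# Least-squares fit of Ef on (t, w, ho) plus an intercept, mirroring the form
# of the regression equations above. Daily values for truck 060 (Table 2).
t  = np.array([0.483, 0.350, 0.350, 0.383, 0.350])
w  = np.array([1.683, 1.819, 0.660, 0.870, 0.660])
ho = np.array([6.32, 5.83, 6.99, 6.75, 6.99])
Ef = np.array([79.0, 73.0, 87.0, 84.0, 87.0])

X = np.column_stack([t, w, ho, np.ones_like(t)])   # predictors + intercept
coef, *_ = np.linalg.lstsq(X, Ef, rcond=None)      # [bT, bW, bHo, constant]

pred = X @ coef
r2 = 1 - np.sum((Ef - pred) ** 2) / np.sum((Ef - np.mean(Ef)) ** 2)
# r2 is essentially 1, consistent with the near-perfect fits reported above.
```

Note that because Ef is defined as (ho/H) × 100 with H fixed at 8 hours, Ef is almost an exact linear function of ho alone, which is why the reported R2 values are all at or near 1 and the ho coefficients cluster around 100/8 = 12.5.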
For truck 060, the variables highly influenced the efficiency (p < 0.001), and the within-sample error was not significant, such that the association between the predictors and efficiency was completely (100%) explained by the variances between the variables (thus, R2 = 100%). For truck 072, the differences between the groups (t, w, ho) were very significant (p < 0.01), and they explained 99.8% of the effect on efficiency (R2 = 99.8%). The 0.2% unexplained coefficient was a significant error variance in ho (p < .001). In general, the efficiency was almost completely dependent on the three independent variables (R2 = 99.9-100%). Also, there was a near-perfect correlation between the predictors (variables) and the efficiency (r = 0.999-1.000).
3.5 Predicted Efficiency, Ep
The predicted efficiency Ep was obtained by substituting the daily average time elements (t, w, ho) into the respective general regression models of HCS collection efficiency (Equations 8-11). The predicted daily collection efficiencies (Ep) are presented in Table 4.
Table 4: Predicted daily collection efficiencies from the regression models for route-zone truck operation
Day/Truck | 046 | 053 | 060 | 072
Mon. | 82 | 79 | 88 | 65
Tue. | 84 | 80 | 85 | 61
Wed. | 82 | 76 | 88 | 68
Thu. | 73 | 74 | 77 | 61
Fri. | 81 | 78 | 82 | 73
Avg | 76 | - | - | 66
3.5.1 Overall Collection Efficiency: The data for each predictor time element from all zones (t, w, ho) were merged into one list per time element and regressed on Ep using SPSS version 17, to obtain the all-zones predictive model for the overall (all-zones) collection efficiency, EZ:
EZ = 0.127t + 0.273w + 12.721ho − 1.881, R2 = 99.9% (12)
with adj. R2 = 0.999 also, which indicated a near-perfect relationship, and Se = 0.28523.
The low Se indicated that using all-zones time variables for the predictive efficiency model offered a better time-based overall prediction of collection efficiency than using singular zonal predictors (time elements). ANOVA and an F-ratio of 5870.796 indicated significant differences (p < 0.01), with the between-variables variance being greater than the within-variables variance, which means that the variance between the three groups of variables affected the efficiency prediction more than the within-variable variance (Ofo, 2000). No significant differences existed in the predictors (p = .05), except ho at p < 0.01, and Cv = 5%. The all-zones predicted efficiency was 76%.
3.6 Validation
The validity of the predictive models of collection efficiency was tested using parametric tests on the field-computed (Ef) and predicted (Ep) efficiencies for each truck or zone. The following parameters were used for the tests:
1. The Nash-Sutcliffe coefficient (Nash and Sutcliffe, 1970), also called the coefficient of performance efficiency, COE:
COE = 1 − [Σ (Pi − Oi)^2] / [Σ (Oi − Om)^2] (13)
2. Average error of bias, AEB:
AEB = 100 × [(1/n) Σ (Pi − Oi)^2]^1/2 / Om (14)
3. Coefficient of residual mass, CRM:
CRM = (Σ Oi − Σ Pi) / Σ Oi (15)
where Oi, Om and Pi are respectively the measured data, their mean, and the predicted data, and n is the number of data points from i = 1 to i = n. In addition, the coefficient of variation (Cv) and the root mean squared error (RMSE) were used to analyze variances or mean differences between the field-measured (Ef) and predicted (Ep) efficiencies. The values of Ep were plotted against those of Ef, and their goodness of fit varied with the truck and route zone. The graphs are shown in Figures 4a, b, c and d for trucks 046, 053, 060 and 072, and in Figure 5 for the overall-zone (or all-zones) average collection efficiencies.
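The three validation statistics follow directly from Equations (13)-(15). The sketch below (illustrative Python with made-up observed/predicted series, not the study's data) assumes the standard Nash-Sutcliffe form of COE:

```python
import math

def coe(obs, pred):
    """Nash-Sutcliffe coefficient of performance, Eq. (13)."""
    om = sum(obs) / len(obs)
    ss_res = sum((p - o) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - om) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def aeb(obs, pred):
    """Average error of bias, Eq. (14): RMSE as a percentage of the observed mean."""
    om = sum(obs) / len(obs)
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))
    return 100 * rmse / om

def crm(obs, pred):
    """Coefficient of residual mass, Eq. (15): relative over/under-prediction."""
    return (sum(obs) - sum(pred)) / sum(obs)

# Hypothetical daily efficiencies (%); a perfect model gives COE = 1, AEB = 0, CRM = 0.
obs, pred = [79, 73, 87, 84, 87], [80, 74, 86, 83, 88]
scores = coe(obs, pred), aeb(obs, pred), crm(obs, pred)
```

COE approaches 1 for a good fit (hence the 98% reported for truck 072), AEB expresses the typical error relative to the observed mean, and a CRM near zero indicates no systematic over- or under-prediction.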
The profiles of Ep and Ef showed very good fits; hence the model coefficient of performance was efficient and reliable for predicting the effect of travel or operation times on the collection efficiency of haul-container-system collection of municipal solid waste in Uyo metropolis. The statistics of the efficiency model characteristics are shown in Table 5.
Table 5: Characteristics of goodness of fit of the efficiency curves
Statistic | 046 | 053 | 060 | 072
RMSE | 2.09 | 2.622 | 0.78 | 2.41
AEB | 0.56 | 1.17 | 0.78 | 2.02
CRM | −0.01 | −0.015 | −0.012 | −0.013
Cv, % | 1.0 | 3.5 | 1.0 | 2.9
COE, % | 60 | 40 | 80 | 98
The goodness of fit shows the time-based efficiencies for trucks 060 and 072 to be very superior to all others; that for truck 046 was good, while it was only fair for truck 053. The performance of truck 053 would need more data to improve its predictive model.
IV. CONCLUSION
The operation and loss times and efficiencies of the haul-container-system collection and disposal of municipal solid wastes in the challenging route zones of Uyo metropolis were investigated using the strategy of one pick-up truck per route zone under private-agency consultancy management. The time study utilized the continuous method of stopwatch timing for the sample field survey, timing all the job-based activities and the non-job-related time losses for four trucks in four zones (2, 3, 4 and 6). The one-haulage-truck-per-route-zone MSW collection strategy worked successfully and efficiently; the collection efficiency ranged from 65% for truck 072 (zone 6), through 77% for truck 053 (zone 3) and 79% for truck 046 (zone 2), to 82% for truck 060 (zone 4) in Uyo metropolis. The overall net travel time varied with zone in the order: zone 3 (24.21) > zone 6 (23.83) > zone 2 (23.50) > zone 4 (17.30 mins). ANOVA showed very significant differences (p < 0.001) in w and ho for truck 072 and in t for truck 053.
The collection efficiency ranged from 65% for truck 072 (zone 6), 77% for truck 053 (zone 3) and 79% for truck 046 (zone 2) to 82% for truck 060 (zone 4), and showed very high goodness of fit with the predicted efficiencies obtained from the regression equations. The goodness of fit indicated efficient prediction of MSW collection for trucks 072 (98%), 060 (80%) and 046 (60%); however, its prediction for truck 053 was imprecise (40%), and more data are required to enrich the analysis in future. Also, better receptacle distribution and paved roads are needed for better collection efficiency. The overall collection efficiency of 76% is a very good performance.
Fig. 4 (a): Observed and predicted efficiencies for truck 060 in route zone 4
Fig. 4 (b): Observed and predicted efficiencies for truck 046 in route zone 2
Fig. 4 (c): Observed and predicted efficiencies for truck 072 in route zone 6
Fig. 4 (d): Observed and predicted efficiencies for truck 053 in route zone 3
Fig. 5: Combined observed and predicted efficiencies for all zones
REFERENCES
BIOGRAPHICAL NOTES
Dr. O. E. Essien received his B.Sc. (Agricultural Engineering) in 1978 and Ph.D. (Agricultural Engineering) in 2004 from the University of Ibadan. He is an Associate Professor in the Department of Agricultural and Food Engineering, University of Uyo. He is a fellow of the Nigerian Institution of Agricultural Engineers (NIAE). He has many publications in the areas of environmental water quality and soil and water conservation engineering, amongst others. He designed and supervised the research study and wrote the article.
O. C. Nnawuihe is an Agricultural Engineer at the University of Uyo, Uyo. He carried out data collection along with others in the field.
Dr. J. C. Udo is a specialist in GIS and lectures in the Department of Geography and Regional Planning, University of Uyo. He produced the GIS map for this study.
American Journal of Engineering Research (AJER), 2013
e-ISSN: 2320-0847, p-ISSN: 2320-0936
Volume-02, Issue-12, pp-182-193
www.ajer.org
Research Paper Open Access
Some Aspects of Sliding Velocities and Applied Normal Loads on the Accelerated Wear Behaviour of Sintered Fe-16%Cu-2.50%Mn-0.95%Cr-3.25%C P/M Steels
K. S. Pandey1, C. Vanitha2
1 Department of Metallurgical and Materials Engineering, National Institute of Technology, Tiruchirappalli - 620 015, Tamil Nadu, India
2 Department of Metallurgical and Materials Engineering, National Institute of Technology, Warangal - 506 004, Andhra Pradesh, India
Abstract: - The present investigation aims to generate experimental data on the wear behaviour of sintered Fe-16%Cu-2.50%Mn-0.95%Cr-3.25%C preforms under furnace cooled and oil quenched conditions on a pin-on-disc machine at two different sliding velocities, 2.09 and 4.19 m/s, with three different loads of 1.10, 1.55 and 1.90 kg. Preforms were made from iron, copper, manganese, chromium and graphite elemental powders, thoroughly blended in the required proportions for 30 hours in a pot mill to attain the above composition. Compacts of the blended powders were prepared on a 1.0 MN capacity UTM using a suitable die set assembly, controlling the densities in the range of 90±1 per cent of theoretical with a 1.32±0.01 initial aspect ratio. Preforms were coated with an indigenously developed ceramic coating and sintered at 1050±10°C for a period of 100 minutes; half were then oil quenched and half furnace cooled. These were separately machined to the required dimensions and subjected to an accelerated wear test on a pin-on-disc machine. The experimental data and calculated parameters revealed that wear resistance improved when the specimens were oil quenched. Higher loads and higher sliding velocities resulted in higher wear rates. Several empirical relations were established to describe the wear behaviour.
Keywords: - behaviour, experimental, generate, sliding, velocity, wear
I. INTRODUCTION
Control of material loss and of the loss of mechanical performance can be successful if, and only if, a substantial reduction in wear is achieved, which would bring considerable savings. Basically, friction is the major cause of wear and of high energy dissipation. Tribology, derived from the Greek word 'tribos' meaning rubbing or sliding [1], is defined as the "science and technology of interacting surfaces in relative motion and of related subjects and practices". Tribology in fact deals with the technology of lubrication, friction control and wear prevention of surfaces in relative motion under applied load. Thus, surface interaction controls the functioning of practically every mechanical device that was and is designed by man. Everything that man makes wears out, almost always as a result of relative motion between surfaces, and hence most machine breakdowns are due to failures and stoppages associated with interfacing moving parts such as gears, bearings, couplings, seals, cams and clutches. It has been reported [2] that sweating on the palms of the hands or the soles of the feet of humans and dogs raises the friction between the palm or foot and a solid surface. The practical objective of tribology is to minimize the two main disadvantages of solid-to-solid contact, i.e., friction and wear, but this is not always the case: in certain situations, reducing wear but not friction, reducing friction but not wear, or increasing both is desirable. For instance, reduction of wear but not friction is most desirable in brakes and lubricated clutches, reduction of friction but not wear is desirable in pencils, and an increase in both friction and wear is most desirable in erasers. Wear predominantly occurs in components like gears, piston rings and sleeves. Metallic tribological components have conventionally been manufactured by casting or forging followed by machining to the required dimensions.
Once the machining operation is completed, the mating surfaces are subjected to special finishing such as plating and chemical treatment processes. However, Powder Metallurgy (PM) is an alternative method of shaping components. PM is a highly developed technology for manufacturing ferrous and non-ferrous parts. Many components are produced by the PM route because the properties obtained are unique and quite often superior to those of conventionally produced parts. The main advantages of PM routes are high dimensional accuracy and minimal material wastage, since the powder blends obtained are uniform and homogeneous. PM parts generally weigh from less than an ounce to nearly 1000 lb (around 450 kg); however, most PM parts weigh less than 5 lb (2.3 kg) [3]. Microwave sintering of PM green compacts of various alloy systems such as Fe-Cu-C, Fe-Ni-C and WC-Co produces highly improved sintered bodies in a very short time, with a 20 to 30 per cent increase in wear performance compared to conventionally produced parts [4]. The wear resistance of high speed tool steels, among the most wear resistant alloys produced by conventional metallurgical processes, is due to the composite microstructure of a martensitic matrix reinforced with various metal carbides. Hot workability considerations, however, limit the carbide content to the ranges associated with conventional alloy compositions, whereas PM techniques allow these steels to be loaded with extra reinforcement via co-blended alloy steel and carbide powders. Further, reinforcements such as alumina are effectively used as they pose no dissolution difficulties, as the carbides might. A twenty per cent volume fraction of alumina reinforcement has been reported to enhance the wear resistance of M2 steel by an order of magnitude [5].
Apart from the above, new anti-friction materials based on iron-copper powders with several additional elements such as tin, lead and molybdenum disulphide have been developed via the PM technique in order to exhibit improved anti-friction and mechanical properties. It has been reported [6] that the linear and gravimetric wear rates were reduced as the lead content was decreased. It is also reported [7] that the optimum amount of copper added to Fe-Cu-C sintered bearing materials for high contact pressure ranges varied between 14 and 18 per cent by mass. Similarly, the addition of hard particles improves the wear resistance but, in excess, causes damage to the shaft; an optimum limit of 10-15 per cent by mass has therefore been adopted. Some 215 years ago, in the early 18th century, Jacob Rowe proposed that the application of the rolling element, i.e., bearings, to carriages in the U.K. could save one million pounds per annum [8]. In 1966, Peter Jost reported [9] that by applying the basic principles of tribology, the economy of the U.K. could save approximately 515 million pounds per annum at 1965 values. A similar report published in West Germany in 1976 revealed that economic losses due to friction and wear cost about 10 billion per annum at 1975 values, equal to about 1 per cent of the gross national product; 50 per cent of these losses were attributed to wear. In the U.S.A., it has been estimated that about 11% of the total annual energy could be saved in the four major areas of transportation, turbomachinery, power generation and industrial processes through technical progress in tribology [10]. In order to understand the wear mechanism, it is important to know the basics of the molecular theory of wear and also the wear rate.
I.1 Molecular Theory of Wear
The degree of proximity of two surfaces, that is, their compliance, depends mainly on statistical chance as the surfaces, separated in the horizontal plane during sliding, attempt to make contact due to the attractive force between their atoms. Once sufficiently close, the atoms will be repelled, and their natural tendency is to return to their original positions. However, it is a plausible hypothesis that an atom can be dislodged and move far enough to come within the field of another atom in the opposite surface, where it finds a new equilibrium position. This means that atoms of one body can be plucked out by atoms of the opposite surface. According to Tomlinson [11], this is the mechanism of wear. The energy dissipated by an atomic couple is FoL, where Fo is the interatomic force of cohesion and L is the distance of separation. If ρ is the density of the metal which is wearing, the mass of an atom is m = ρE³, where E is the distance between successive rows of atoms. If Et is the total energy dissipated, the number of atomic junctions N is given by:
N = Et/(FoL) ……… (1)
The total mass of atoms removed from the surface is then:
M = Nm ……… (2)
M = Et ρE³/(FoL) ……… (3)
Taking
FoL = μEPo/α ……… (4)
gives
M = α Et ρE²/(μPo) ……… (5)
The flow stress σy is the limiting force per unit area that the space lattice can withstand, and is given by:
σy = Pmax/E² ……… (6)
where Pmax = 2Po is the mean repulsive force. Therefore, M is given by:
M = 2α Et ρ/(μσy) ……… (7)
Thus the total mass of metal removed is inversely proportional to the flow stress.
I.2 Wear Rate
Holm [12] proposed that as sliding commences, atom-to-atom contact removes surface atoms at favourable encounters, so that the volume loss V for a sliding distance S is given by:
V = Z At S ……… (8)
where At is the true contact area and Z is the number of atoms removed per encounter.
According to the friction laws, At is given by:
At = W/σy ……… (9)
where W is the applied load and σy is the flow pressure of the softer metal. Substituting for At, equation (8) can be rearranged as:
V/S = ZW/σy ……… (10)
The term V/S is the volume rate of wear per unit sliding distance. Thus, the total volume of material removed by sliding is proportional to the applied normal load and the sliding distance, and inversely proportional to the flow pressure of the material. Some important investigations are reported elsewhere [13-25]. The present investigation aims to generate experimental data on the accelerated wear behaviour of sintered Fe-16%Cu-2.50%Mn-0.95%Cr-3.25%C preforms under furnace cooled and oil quenched conditions on a pin-on-disc machine. Data were obtained and analyzed under three loading conditions and two different sliding velocities.
II. EXPERIMENTAL DETAILS
The experimental details cover the materials required and their procurement; the instruments and equipment essentially required were identified and their availability ensured. Powder characterization, including chemical analysis and sieve size analysis, was carried out, along with the preparation of homogeneous powder blends. Compact preparation and the application of the indigenously developed ceramic coating to the compact surfaces are detailed, followed by sintering and the subsequent treatments. Standard specimens for the accelerated wear tests were then prepared and tested.
II.1 Materials Required
Commercially pure atomized iron powder of -180 µm was procured from Sundaram Fasteners Limited, Hyderabad, India; electrolytic grade copper powder of -63 µm and the manganese and chromium powders were obtained from Ghrishma Specialty Materials, Mumbai, Maharashtra, India.
The graphite powder of 3-5 µm was provided courtesy of Ashbury Mills Inc., New Jersey, USA.
II.2 Instruments and Equipment Required
A sieve shaker; a pot mill for powder blending, with stainless steel pots and porcelain balls in the diameter range of 10-20 mm; a Hall flow meter for measuring apparent density and flow rate; a die set assembly for compaction; a hydraulic press of 1.0 MN capacity; a sintering furnace capable of operating up to 1250±10°C; and an electronic balance capable of measuring to 0.0001 g were required.
II.3 Powder and Powder Blend Characterization
The sieve size analysis of the iron powder is given in Table 1. Flow rate, apparent density and compressibility are listed in Table 2. The atomized iron powder of -180 µm was analyzed for chemical purity and found to be 99.67 per cent pure, with 0.33 per cent insoluble impurities.

Table: 1. Sieve Size Analysis of Iron Powder
Sieve size, µm | Wt% powder retained | Cum. wt% powder retained
-180 +150 | 1.52 | 1.52
-150 +125 | 1.83 | 3.35
-125 +106 | 23.12 | 26.47
-106 +90 | 1.11 | 27.58
-90 +75 | 21.86 | 49.44
-75 +63 | 2.21 | 51.65
-63 +53 | 18.60 | 70.25
-53 +37 | 13.62 | 83.87
-37 | 16.11 | 99.98

II.4 Blending of Iron, Copper, Manganese, Chromium and Graphite Powders
A powder blend of the iron, copper, manganese, chromium and graphite elemental powders was prepared by blending the required amount of each powder so as to yield the final sintered alloy composition Fe-16%Cu-2.50%Mn-0.95%Cr-3.25%C. The powder mix was taken in a stainless steel pot with a powder to porcelain ball weight ratio of 1:1.1. The blending operation was carried out for a period of 36 hours to obtain a homogeneous powder blend. During blending, 100 g of the powder mix was taken at intervals of one hour to measure the apparent density and flow rate.
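The cumulative column of Table 1 is simply the running sum of the individual retained fractions, so the transcription can be checked mechanically. A minimal sketch, with the values copied from Table 1:

```python
# Check of Table 1: the cumulative wt% retained should be the running
# sum of the wt% retained in each sieve fraction (coarsest to finest).
retained = [1.52, 1.83, 23.12, 1.11, 21.86, 2.21, 18.60, 13.62, 16.11]

cumulative = []
total = 0.0
for wt in retained:
    total += wt
    cumulative.append(round(total, 2))

print(cumulative)
# The running sum reproduces the cumulative column of Table 1:
# [1.52, 3.35, 26.47, 27.58, 49.44, 51.65, 70.25, 83.87, 99.98]
```

The final value of 99.98 rather than 100.00 reflects rounding of the individual fractions.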
Immediately after measuring the apparent density and flow rate, the powder mix was returned to the pot and the blending operation was continued until consistency in apparent density and flow rate was obtained.

Table: 2. Characteristics of Iron Powder and Powder Blends
Property | Iron | Fe-16%Cu-2.50%Mn-0.95%Cr-3.25%C
Flow rate, sec/100 g | 48.35 | 46
Compressibility, g/cc at a pressure of 480±10 MPa | 6.667 | 6.711
Apparent density, g/cc | 3.352 | 3.257
Theoretical density, g/cc | 7.850 | 7.461

II.5 Cold Compaction
Cold compaction of the above elemental powder blend was carried out on a 1.0 MN capacity Universal Testing Machine using a suitable die, punch and bottom insert assembly. Graphite paste in acetone was applied as lubricant to the inner surfaces of the die and the outer surfaces of the punch and the bottom insert during compaction. Compact density was maintained in the range of 90±1 per cent of theoretical by applying a pressure of 590±1 MPa and by taking a controlled amount of powder blend. The compact dimensions were 28.5 mm diameter and 31.5 mm height.
II.6 Application of the Indigenously Developed and Modified Ceramic Coating
The indigenously developed, modified ceramic coating [26] was applied as a thin film to the entire surface of each compact and allowed to dry for a period of twelve hours under ambient conditions. A second coat was applied at 90° to the first and allowed to dry for a further twelve hours under the same conditions.
II.7 Sintering and Treatment
The ceramic coated compacts were sintered at 1050±10°C in the uniform temperature zone of an electric muffle furnace for a period of 100 minutes. Equal numbers of sintered compacts were cooled inside the furnace and oil quenched: a total of eight were oil quenched and eight were furnace cooled.
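The controlled powder charge per compact mentioned in Section II.5 follows from the die dimensions and the target green density. A rough sketch, assuming the blend's theoretical density of 7.461 g/cc from Table 2 and the stated target of 90 per cent of theoretical:

```python
import math

# Estimate of the powder charge needed per compact (assumed values:
# theoretical density of the blend from Table 2, 90% target green
# density, 28.5 mm diameter x 31.5 mm height from Section II.5).
theoretical_density = 7.461          # g/cc
green_fraction = 0.90                # 90 +/- 1 % of theoretical
diameter_cm, height_cm = 2.85, 3.15

volume_cc = math.pi * (diameter_cm / 2) ** 2 * height_cm
mass_g = green_fraction * theoretical_density * volume_cc
print(f"compact volume ~ {volume_cc:.1f} cc, powder charge ~ {mass_g:.0f} g")
```

This gives a compact volume of roughly 20 cc and a powder charge of roughly 135 g per compact.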
The ceramic coating protected the compacts against oxidation during sintering, having been shown to be impermeable up to 1300±10°C in tests carried out prior to the present investigation.
II.8 Specimen Preparation for Wear Test
After sintering and treatment, the residual ceramic coating was removed and the specimens were machined to 26.5 mm diameter and 24 mm height. Care was exercised to obtain smooth, scratch-free surfaces during machining and the final surface finishing operations.
II.9 Accelerated Wear Test
The pin-on-disc machine is a popular wear testing apparatus in which the pin is loaded normally against a rotating disc. The variables which can be changed as desired are the normal load, the sliding contact velocity, the specimen surface finish and the wheel surface. The amount of wear is established by weighing the worn specimen on an electronic balance at each time interval, say, 30 minutes; a complete wear test involves plotting weight loss per unit area against the sliding intervals to obtain the steady state wear. The machine consists of an abrasive (carborundum) wheel which is made to rotate in the horizontal plane by an electric motor. A pin holder with a groove of 27 mm diameter and 12 mm depth is placed vertically above the wheel, with provision to vary its height, and has a through hole giving access to apply loads to the pin: the pin is loaded by means of a rod to whose head various weights can be attached. Specimens were subjected to loads of 1.10, 1.55 and 1.90 kg by means of the specimen holder, and two sliding velocities, 2.09 m/s and 4.19 m/s, were used at all three loads. Specimens were placed 80 mm away from the centre of the wheel disc. III.
RESULTS AND DISCUSSION
The accelerated wear test data and calculated parameters were utilized to draw various plots in order to establish empirical relations between the weight loss per unit area (g/m²) and the sliding distance (km), and also between the wear volume (cc) and the sliding time in minutes. Further plots were drawn to assess the effects of the applied loads, the sliding velocities and the type of treatment given to the specimens prior to the accelerated wear test.
III.1 Characteristic Plots between Weight Loss per Unit Area (g/m²) and the Sliding Distance (km)
Figs. 1(a) and 1(b) show the weight loss per unit area (g/m²) against the sliding distance (km), illustrating the effects of applied load and sliding velocity for furnace cooled and oil quenched specimens respectively. The characteristic shapes of the curves in Figs. 1(a) and 1(b) are quite similar to each other, and these curves must therefore be governed by a similar mathematical expression. It was found that they conform to a third order polynomial of the form:
Wg = A0 + A1S + A2S² + A3S³ ……… (11)
Figure 1: Plots between Weight Loss/Area (g/m²) and Sliding Distance (km) for (a) Furnace Cooled and (b) Oil Quenched Specimens under the Accelerated Wear Test.
Here 'A0', 'A1', 'A2' and 'A3' are empirically determined constants which depend upon the applied load and the sliding velocity, 'Wg' represents the weight loss per unit area (g/m²) and 'S' is the sliding distance in km. The values of these constants are listed in Table 3. It is further observed that, in general, the curves exhibit higher wear rates at higher applied loads at a constant sliding velocity.
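Since A0 is fixed at zero (no sliding means no weight loss), the cubic of equation (11) cannot be fitted with an ordinary polynomial fit that includes a constant term; it can instead be fitted by linear least squares on a design matrix containing only S, S² and S³. A minimal sketch, in which the (S, Wg) readings are hypothetical and used purely for illustration (the actual coefficients are those of Table 3):

```python
import numpy as np

# Fit Wg = A1*S + A2*S^2 + A3*S^3 with the intercept A0 forced to zero.
# The data pairs below are hypothetical, for illustration only.
S = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])          # sliding distance, km
Wg = np.array([120., 230., 360., 470., 590., 700.])   # weight loss, g/m^2

X = np.column_stack([S, S**2, S**3])   # design matrix without constant term
coeffs, *_ = np.linalg.lstsq(X, Wg, rcond=None)
A1, A2, A3 = coeffs

Wg_fit = X @ coeffs
r2 = 1 - np.sum((Wg - Wg_fit)**2) / np.sum((Wg - Wg.mean())**2)
print(f"A1={A1:.2f}, A2={A2:.2f}, A3={A3:.2f}, R^2={r2:.4f}")
```

With near-linear data such as these, A1 dominates and R² is close to unity, mirroring the behaviour of the coefficients reported in Table 3.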
At constant sliding velocity, the wear rate was highest for the 1.90 kg normal load and decreased successively for the 1.55 kg and 1.10 kg loads. As the normal load acting on the specimen is raised, the number of contact points increases, local cold welding takes place, and the relative motion between the disc and the specimen breaks these junctions, thus incurring weight loss. At constant load, the wear rate was found to be higher at the greater sliding velocity. It can therefore be conclusively inferred from these figures that the wear rate increases with an increase in load, in sliding velocity, or in both. Further, for all curves the wear rate is higher in the initial stages of the test than in the later stages (for a given load and sliding velocity); the wear rate remains virtually constant after a certain sliding distance, which is attributed to the change in roughness of the specimen and the grinding disc. The constant 'A0' always remains zero, since with no wear there is no weight loss, and it is therefore not listed in the table. The coefficient 'A1' of S increases with increasing normal load at both sliding velocities, 4.19 m/s and 2.09 m/s, which reflects higher wear rates at higher applied normal loads. The coefficient 'A2' of S² at the sliding velocity of 4.19 m/s under the 1.90 kg and 1.55 kg loads was found to be negative, showing a tendency for the wear rate to fall off, and its magnitude is greatest for 1.90 kg at 4.19 m/s. The coefficient 'A3' of S³ increased with increasing applied load. At 1.10 kg and 4.19 m/s, however, 'A2' is positive, showing a steady increase in wear rate.
The same explanation holds good for all the other curves corresponding to the sliding velocity of 2.09 m/s with the respective loads of 1.10, 1.55 and 1.90 kg.

Table: 3. Coefficients of the Third Order Polynomial Wg = A0 + A1S + A2S² + A3S³ between Weight Loss per Unit Area (g/m²) and the Sliding Distance (km) for Fe-16.0%Cu-2.50%Mn-0.95%Cr-3.25%C under Furnace Cooled and Oil Quenched Conditions
Sliding velocity, m/s | Treatment | Applied load, kg | A1 | A2 | A3 | R²
4.19 | FC | 1.10 | 230.69 | 5.2718 | -0.2389 | 0.9977
4.19 | FC | 1.55 | 695.79 | -13.487 | -0.0294 | 0.9981
4.19 | FC | 1.90 | 1577.3 | -96.964 | 2.1465 | 0.9872
2.09 | FC | 1.10 | 217.35 | -16.364 | 1.3537 | 0.9958
2.09 | FC | 1.55 | 328.48 | 23.123 | -0.731 | 0.9932
2.09 | FC | 1.90 | 950.71 | -112.99 | 7.5553 | 0.9969
4.19 | OQ | 1.10 | 14.575 | 2.914 | -0.0735 | 0.9948
4.19 | OQ | 1.55 | 39.617 | 1.8706 | -0.0512 | 0.9995
4.19 | OQ | 1.90 | 146.75 | -7.7838 | 0.1866 | 0.9976
2.09 | OQ | 1.10 | 11.711 | 2.9547 | -0.2029 | 0.9933
2.09 | OQ | 1.55 | 28.099 | 2.0429 | -0.053 | 0.9929
2.09 | OQ | 1.90 | 67.394 | -1.4046 | 0.0911 | 0.9960

Since the values of the regression coefficient 'R²' are in all these cases close to unity, the third order polynomial arrived at stands justified.
III.2 Logarithmic Plots between Weight Loss per Unit Area (g/m²) and the Sliding Distance (km)
Figs. 2(a) and 2(b) show log(weight loss (g/m²)) against log(sliding distance (km)) for sintered furnace cooled and oil quenched specimens respectively. The plots were found to be represented by a straight line equation of the form:
log(Wg) = M log(S) + N ……… (12)
where 'M' and 'N' are empirically determined constants.
The values of these constants are given in Table 4. The above expression can also be written as:
Wg = 10^N S^M ……… (13)
Figure 2: Log-Log Plots between Weight Loss (g/m²) and the Sliding Distance (km) for (a) Furnace Cooled and (b) Oil Quenched Specimens during the Accelerated Wear Test.

Table: 4. Coefficients and Exponents of the Equation log(Wg) = M log(S) + N for Fe-16.0%Cu-2.50%Mn-0.95%Cr-3.25%C
Sliding velocity, m/s | Treatment | Applied load, kg | M | N | R²
4.19 | FC | 1.10 | 0.9366 | 2.4677 | 0.9957
4.19 | FC | 1.55 | 0.7936 | 2.9143 | 0.9726
4.19 | FC | 1.90 | 0.4155 | 3.4723 | 0.9896
2.09 | FC | 1.10 | 1.0145 | 2.2458 | 0.9895
2.09 | FC | 1.55 | 1.0514 | 2.6185 | 0.9895
2.09 | FC | 1.90 | 0.7112 | 2.9881 | 0.9653
4.19 | OQ | 1.10 | 1.1659 | 1.4608 | 0.9903
4.19 | OQ | 1.55 | 1.2164 | 1.4974 | 0.9946
4.19 | OQ | 1.90 | 0.7228 | 2.1992 | 0.970
2.09 | OQ | 1.10 | 0.9822 | 1.3317 | 0.9693
2.09 | OQ | 1.55 | 1.2387 | 1.3966 | 0.9823
2.09 | OQ | 1.90 | 0.9216 | 1.8635 | 0.9943

Since the regression coefficients are in close proximity to unity, the power law representation is justified.
III.3 Effect of Sliding Velocity at Constant Load
Figs. 3(a) and 3(b) show the weight loss per unit area (g/m²) against the sliding distance (km), illustrating the effect of sliding velocity for sintered furnace cooled and oil quenched specimens respectively. Both figures clearly show that the wear rate is considerably higher when the sliding velocity is raised from 2.09 m/s to 4.19 m/s at the constant load of 1.10 kg.
Figure 3: Effect of Sliding Velocity on Weight Loss per Unit Area against Sliding Distance for Sintered (a) Furnace Cooled (FC) and (b) Oil Quenched (OQ) Specimens at Constant Load.
III.4 Effect of Applied Load on Weight Loss per Unit Area (g/m²) against the Sliding Distance (km)
Figs. 4(a) and 4(b) show the weight loss per unit area (g/m²) against the sliding distance (km), exhibiting the effect of the applied normal load. It is quite clear from these two figures that the higher the applied normal load, the higher the wear rate, and vice versa.
This is true irrespective of the treatment given to the specimens prior to the wear test.
Figure 4: Effect of Applied Loads on the Plots between Weight Loss and Sliding Distance at Constant Sliding Velocity for Sintered (a) Furnace Cooled and (b) Oil Quenched Specimens during the Accelerated Wear Test.
III.5 Effect of Treatment on the Relationship between Weight Loss (g/m²) and the Sliding Distance
Figs. 5(a) and 5(b) exhibit the effect of the treatment given to the sintered specimens prior to the accelerated wear test on the relationship between weight loss (g/m²) and sliding distance (km). Fig. 5(a) corresponds to a constant applied load of 1.10 kg, whereas Fig. 5(b) represents a constant load of 1.90 kg. Both figures clearly indicate that the sintered, furnace cooled specimens wore at higher rates than the sintered, oil quenched specimens, because the impregnated 2-3 per cent of coolant oil provided improved lubrication during the wear test.
Figure 5: Effect of Cooling Media on the Plots between Weight Loss per Unit Area (g/m²) and the Sliding Distance at (a) 1.1 kg Load and 2.09 m/s Sliding Velocity and (b) 1.9 kg Load and 4.19 m/s Sliding Velocity during the Accelerated Wear Test.
III.6 Wear Volume against Time (min) of Sliding
Figs. 6(a) and 6(b) show the wear volume (cc) against the time (minutes) of sliding for sintered furnace cooled and sintered oil quenched specimens respectively during the accelerated wear test, at the two sliding velocities (4.19 m/s and 2.09 m/s) and the three applied loads of 1.10, 1.55 and 1.90 kg.
The characteristic shapes of the curves in Figs. 6(a) and 6(b) are found to be quite similar to each other, and they must therefore be governed by a similar mathematical expression.
Figure 6: Characteristic Plots between Wear Volume (cc) and Time (Minutes) of Sliding at the Given Sliding Velocities and Applied Loads for Sintered (a) Furnace Cooled and (b) Oil Quenched Specimens during the Accelerated Wear Test.

Table: 5. Coefficients of the Third Order Polynomial Vw = B1T + B2T² + B3T³ between Wear Volume (cc) and Time T (Minutes)
Sliding velocity, m/s | Treatment | Applied load, kg | B1 | B2 | B3 | R²
4.19 | FC | 1.10 | 0.0048 | 3.00E-05 | -5.00E-07 | 0.9973
4.19 | FC | 1.55 | 0.0144 | -7.00E-05 | -5.00E-08 | 0.9981
4.19 | FC | 1.90 | 0.0327 | -0.005 | 3.00E-06 | 0.9872
2.09 | FC | 1.10 | 0.0022 | -2.00E-05 | 2.00E-07 | 0.9958
2.09 | FC | 1.55 | 0.0032 | 4.00E-05 | -2.00E-07 | 0.9946
2.09 | FC | 1.90 | 0.0098 | -0.0001 | 1.00E-06 | 0.9969
4.19 | OQ | 1.10 | 0.0007 | 2.00E-05 | -1.00E-07 | 0.9945
4.19 | OQ | 1.55 | 0.0008 | 1.00E-05 | -7.00E-08 | 0.9982
4.19 | OQ | 1.90 | 0.003 | -4.00E-05 | 2.00E-07 | 0.9976
2.09 | OQ | 1.10 | 0.0001 | 4.00E-06 | -3.00E-08 | 0.9938
2.09 | OQ | 1.55 | 0.0003 | 3.00E-06 | -9.00E-09 | 0.9929
2.09 | OQ | 1.90 | 0.0007 | -2.00E-06 | 2.00E-08 | 0.9961

The curves corresponding to both heat treatment conditions conform to a third order polynomial of the form:
Vw = B0 + B1T + B2T² + B3T³ ……… (14)
where 'B0', 'B1', 'B2' and 'B3' are empirically determined constants found to depend upon the sliding velocity and the applied normal load, 'Vw' represents the wear volume in cc, and 'T' represents the time in minutes. These constants are listed in Table 5. The constant 'B0' has no influence on the curves, as it is zero: at the starting time T = 0 there is no wear and hence no volume loss. From these relations it is inferred that the wear volume is a function of time and increases with an increase in the applied normal load, the sliding velocity, or both.
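The weight-loss plots of Section III.1 and the wear-volume plots above are linked through the sintered density and the sliding velocity. A sketch of the conversions, in which the 30-minute mass-loss reading is hypothetical and the density is assumed to be 90 per cent of the blend's theoretical 7.461 g/cc from Table 2:

```python
# Convert a measured mass loss to wear volume, and a weighing interval to
# sliding distance. The mass-loss value is hypothetical; the density is an
# assumed 90% of the blend's theoretical density.
sintered_density = 0.90 * 7.461      # g/cc, assumed
mass_loss_g = 0.85                   # hypothetical reading after 30 min

wear_volume_cc = mass_loss_g / sintered_density

sliding_velocity = 2.09              # m/s
interval_s = 30 * 60                 # 30-minute weighing interval
sliding_distance_km = sliding_velocity * interval_s / 1000.0

print(f"wear volume ~ {wear_volume_cc:.4f} cc over {sliding_distance_km:.2f} km")
```

At 2.09 m/s, each 30-minute interval thus corresponds to about 3.76 km of sliding, which is why the distance-based fits of Section III.1 and the time-based fits of equation (14) carry the same information.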
III.7 Effect of Applied Loads on the Wear Behaviour, Keeping Treatment and Velocity Constant
Figs. 7(a) and 7(b) plot the wear volume (cc) against time T (minutes) for the sintered (a) furnace cooled and (b) oil quenched conditions, showing the influence of the applied normal load. Both plots are constructed at a constant sliding velocity of 2.09 m/s. The figures clearly indicate that higher applied normal loads result in larger wear volumes at constant sliding velocity; that is, more material is worn away at higher applied normal loads, irrespective of the heat treatment given to the sintered specimen prior to the accelerated wear test.

Figure 7: Effect of applied loads on the plots between wear volume (cc) and time (minutes) of sliding at a constant velocity of 2.09 m/s for sintered (a) furnace cooled and (b) oil quenched specimens during the accelerated wear test.

III.8 Effect of Sliding Velocity on the Relation between Wear Volume and Time of Wear Test
Figs. 8(a) and 8(b) plot the wear volume (cc) against the time T (minutes) of the accelerated wear test at constant load but two different sliding velocities, namely 2.09 m/s and 4.19 m/s, for the sintered (a) furnace cooled and (b) oil quenched specimens. It is observed that the higher the sliding velocity, the higher the wear volume; the wear mechanism responsible has already been established earlier. Figures 8(a) and 8(b) exhibit similar patterns but different wear rates.

Figure 8: Effect of sliding velocity on the plots between wear volume (cc) and time (minutes) of sliding at a constant applied load of 1.1 kg for sintered (a) furnace cooled and (b) oil quenched specimens during the accelerated wear test.
III.9 Effect of Treatment on the Relationship between Wear Volume (cc) and Time of Sliding (minutes) at Constant Sliding Velocity
Figs. 9(a) and 9(b) plot the wear volume against the time in minutes, showing the effect of the cooling medium at constant sliding velocities of 2.09 m/s and 4.19 m/s respectively. Fig. 9(a) was drawn for a normal applied load of 1.1 kg and a sliding velocity of 2.09 m/s, whereas Fig. 9(b) corresponds to an applied normal load of 1.9 kg at a sliding velocity of 4.19 m/s. Visual observation of these figures clearly shows that, irrespective of sliding velocity, the wear volume of the furnace cooled specimens rises much faster than that of the sintered and oil quenched specimens.

Figure 9: Effect of cooling media on the plots between wear volume (cc) and time (minutes) of sliding at constant load and constant velocity for sintered specimens at (a) 1.1 kg load and 2.09 m/s sliding velocity and (b) 1.9 kg load and 4.19 m/s sliding velocity.

Thus, the present investigation has comprehensively established that the wear behaviour of the sintered Fe-16%Cu-2.50%Mn-0.95%Cr-3%C alloy, at around 90% of its theoretical density, is far superior in the sintered and oil quenched condition compared with the sintered and furnace cooled condition. In general, the wear of the sintered, oil quenched specimens was comparatively low because the absorbed oil acted as a lubricant. Hence, the oil quenched material can be preferred for industrial applications.

IV. CONCLUSIONS
Based on critical analysis of the experimental data, the calculated parameters and the various plots drawn, the following major conclusions are arrived at:
1.
The characteristic curves drawn between weight loss per unit area (g/m²) and sliding distance are similar to each other, irrespective of the sliding velocity, the applied normal load and the heat treatment given to the specimens. Mathematical analysis revealed that these characteristic curves correspond to a third-order polynomial of the form Wg = A0 + A1S + A2S² + A3S³, where A0, A1, A2 and A3 are empirically determined constants that depend upon the sliding velocity and the applied normal load, Wg represents the weight loss per unit area (g/m²) and S is the sliding distance in km.

2. The wear rate is found to increase with increase in load, in sliding velocity, or both.

3. The wear rate is much higher for furnace cooled specimens than for the oil quenched specimens.

4. All curves reveal that the wear rate is much higher in the initial stages and comparatively slows down in the final stages of wear, which is attributed to the change in roughness of the specimen and the abrasive wheel.

5. Log-log plots were drawn between weight loss per unit area and the sliding distance S (km) to obtain a more accurate mathematical expression for the wear mechanism. These plots yielded straight lines in two segments, with two different slopes and two different intercepts. At a given normal load each segment can therefore be expressed as log(Wg) = M log(S) + log(N), where M and N are empirically determined constants and S represents the sliding distance; this yields a power-law equation of the form Wg = N·S^M, which is quite handy to apply when interpreting the wear data.

6.
The relationship between wear volume and time of wear test for both the sintered furnace cooled and the oil quenched specimens was established to be a third-order polynomial of the form Vw = B0 + B1T + B2T² + B3T³, where B0, B1, B2 and B3 are empirically determined constants found to be functions of the sliding velocity and the applied normal load; Vw represents the wear volume in cc and T represents time in minutes.
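The power-law form of conclusion 5 can be reproduced as a straight-line regression in log-log space. The data below are illustrative values that follow a single power law exactly; M and N are assumed constants, not the study's fitted values.

```python
import numpy as np

# Illustrative weight-loss data obeying Wg = N * S**M exactly
# (assumed constants, not the experimentally fitted ones).
M_true, N_true = 0.85, 12.0
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # sliding distance, km
Wg = N_true * S**M_true                    # weight loss per unit area, g/m^2

# log(Wg) = M*log(S) + log(N): the slope gives M, the intercept gives log10(N).
slope, intercept = np.polyfit(np.log10(S), np.log10(Wg), 1)
M_fit, N_fit = slope, 10.0**intercept
```

Real data falling into two straight segments, as reported, would simply be fitted segment by segment with the same procedure.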
American Journal of Engineering Research (AJER), 2013, e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-16-38, www.ajer.org, Research Paper, Open Access

Development and Application of CLOGEN-Polymer Slug as Enhanced Oil Recovery Agent in the Niger Delta Marginal Oil Fields

1Udie, A., 2Nwakaudu, M. S. and 3Anyadiegwu, C.I.C.
School of Engineering and Engineering Technology (SEET), Federal University of Technology Owerri (FUTO), Nigeria

Abstract: - Mathematical models for the application of CLOGEN Polymer Slug (CPS) were designed for chemical flooding in the Niger Delta, and used to estimate the cumulative or additional oil recovery obtainable after secondary recovery methods. Up-dip and down-dip solution-gas handling was designed to enhance the recovery. A double line-drive pattern was employed, with chemical-flooding simulation using CLOGEN-Polymer slug injection at one end of the reservoir to maintain the reservoir pressure above the bubble-point pressure and to displace the oil level up to the perforated section. Pressurized injection was equally carried out at the other end to achieve miscibility pressure and enhance lifting of the fluids to the surface. The producers were placed in between the injectors for effective drainage. The water and gas produced were recovered in a separation process and sent to the water plant and gas plant respectively for treatment and re-injection. The water treatment and injection skid conditioned the water for the CPS and the pressurized stream before re-injection. In addition to cutting down cost, the production system was designed to ensure availability of the water and pressurized streams required.
A total of nine (9) wells was estimated: three (3) injection wells for CLOGEN-Polymer injection at the lower dip, three (3) injection wells for pressurized-stream injection at the upper dip, and three (3) producing wells in between the injectors. It was found that 72 to 83% of reserves would be recovered in new fields, and an additional 10 to 25% in old wells after secondary methods. This was possible through the reduction of the interfacial tension (IFT) between oil and water to a low value, which converted the macro-emulsion of larger droplets to a micro-emulsion of smaller droplets, and through total voidage replacement by water-soluble polymer solution.

I. INTRODUCTION
CLOGEN Polymer Slug (CPS) is an improved chemical flooding agent, a combination of polymer-augmented water and micro-emulsion, for enhancing oil recovery. The objective of the design is to improve the recovery efficiency and surmount most of the problems common in chemical flooding. The application of CLOGEN-Polymer-Slug is an advanced EOR process: it recovers oil more efficiently than plain waterflooding or gas injection, and it attempts to recover oil beyond primary and secondary methods. Chemical flooding methods involve mixing chemicals, and sometimes other substances, in water prior to injection into reservoirs of low to moderate oil viscosity and moderate to high permeability; lower-mobility fluids are then injected with adequate injectivity. Active water-drive reservoirs are not good candidates for chemical flooding, because their residual oil saturation after primary recovery is already near the low limit, and in gas-cap reservoirs the mobilized oil might re-saturate the gas cap. High clay content in the formation increases adsorption of the injected chemicals. Moderate salinity with a low amount of divalent ions is preferred, since high divalent-ion contents interact negatively with the chemicals.
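The screening criteria above can be gathered into a simple yes/no checklist. A minimal sketch, treating each condition named in the text as a disqualifier; the function and flag names are my own, since the text gives only qualitative criteria.

```python
def chemical_flood_candidate(active_water_drive: bool, gas_cap: bool,
                             high_clay_content: bool,
                             high_divalent_brine: bool) -> bool:
    """Qualitative screen from the criteria in the text: active water drive,
    a gas cap, high clay content, or a high-divalent brine each rule the
    reservoir out as a chemical-flooding candidate."""
    return not (active_water_drive or gas_cap or
                high_clay_content or high_divalent_brine)

ok = chemical_flood_candidate(False, False, False, False)   # suitable
bad = chemical_flood_candidate(True, False, False, False)   # water drive rules it out
```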
Polymer-augmented waterflooding is a chemical flooding technique used to improve the mobility ratio for good displacement and sweep efficiencies (areal and vertical), the resultant effect being higher oil recovery. The ultimate oil recovery at a given economic limit may be 4% to 10% higher with a mobility-controlled flood than with plain waterflooding, and the displacement is more efficient, since less injected water is required for a given volume of oil recovered. Polymer flooding is an improved waterflooding technique; it does not recover residual oil trapped in pore spaces and isolated by water, but it produces additional oil by improving the displacement efficiency and increasing the reservoir volume contacted. Dilute aqueous solutions of water-soluble polymers have the ability to reduce the mobility of water in the reservoir and thereby improve the flood efficiency. Partially hydrolyzed polyacrylamide (HPAM) and xanthan gum (XG) polymers are good chemicals for reducing the mobility of water by increasing its viscosity. In addition, HPAM can alter the flow path by reducing the permeability to water while leaving that to oil unchanged. A resistance factor of 10 makes it 10 times more difficult for the polymer solution to flow through the system; that is, the mobility of the augmented water is reduced 10-fold, since for water with a viscosity of 1 cp the polymer solution flows with an apparent (effective) viscosity of 10 cp, even though the viscometer reading is a lower value. [Chang, 1978] Oil and gas are gifts of nature which contribute much to the economic development or growth of a nation, so advancement in recovery techniques is an added advantage. The pilot oil fields used were reservoirs with low recoverable target reserves, between 6.0 and 20.0 MMstb. The target reserves could be any value, but the recoverable value is paramount: at least 6 MMstb.
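The resistance-factor arithmetic above can be written out directly. A minimal sketch, assuming the resistance factor is defined as the ratio of water mobility to polymer-solution mobility at equal permeability; the numeric inputs are the example values from the text.

```python
def apparent_viscosity_cp(water_viscosity_cp: float,
                          resistance_factor: float) -> float:
    """Effective viscosity the polymer solution exhibits in the porous medium:
    a resistance factor of 10 makes 1-cp water behave like a 10-cp fluid."""
    return water_viscosity_cp * resistance_factor

def polymer_mobility(permeability_md: float, water_viscosity_cp: float,
                     resistance_factor: float) -> float:
    """Mobility (md/cp) of the polymer-augmented water, reduced RF-fold."""
    return permeability_md / apparent_viscosity_cp(water_viscosity_cp,
                                                   resistance_factor)

mu_app = apparent_viscosity_cp(1.0, 10.0)    # 1-cp water, RF = 10 -> 10 cp
lam_p = polymer_mobility(100.0, 1.0, 10.0)   # 10 md/cp, vs 100 for plain water
```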
The economic models in this work were designed to estimate the profit margins from the proceeds of the oil recovered, the revenue generated and the taxes due. Visual Studio (Basic programming language) was used to compute the target oil, recoverable oil, recovery value, revenue from the proceeds and tax value. The economic-model solutions use the revenue value and the effects of Petroleum Production Tax (PPT) on the NPV in the development of fields with low recoverable oil reserves. This gives an investor a good idea about the business, so that he can decide whether or not to invest in the development of the field, and helps the government formulate the agreement or contractual terms. The outstanding advantage of this research work is that it gives an investor the value of the target reserves, the recoverable value, the CAPEX and OPEX values, as well as the profit before and after PPT (technical and economic feasibilities). Many published papers on EOR are based on the principles of chemical flooding, fluid (hydrocarbon, water or gas) injection, and thermal (heating) oil recovery techniques.

1.1 Chemical Oil Recovery or Flooding
Chemical flooding for oil recovery is based on three main principles: polymer-augmented waterflooding, alkaline/caustic flooding and surfactant flooding. Craig (1971) designed a better correlation for determining water mobility at the average water saturation behind the flood front at water breakthrough. He determined the relative mobility to water at the average water saturation at breakthrough using the Welge graphical approach for the mobility-ratio (M) expression. He equally found that the mobility ratio of a waterflood remains constant before breakthrough and increases after breakthrough, corresponding to the increase in water saturation in the connected portion of the reservoir.
He concluded that, unless stated otherwise, the term mobility ratio refers to the value prior to water breakthrough, which is important in evaluating the waterflood. He defined the mobility of a fluid as the ratio of the effective permeability to the fluid to the fluid viscosity. Mathematically:

λ = K/μ ............ (1.1)

where λ = fluid mobility, md/cp; K = effective fluid permeability, md; and μ = fluid viscosity, cp. In a multi-fluid-flow reservoir system the mobility ratio is

M = λ_displacing/λ_displaced = (K_w/μ_w)/(K_o/μ_o) ............ (1.2)

API Report (1984) defined recovery efficiency as the fraction of oil in place that can be economically recovered with a given process. The API research showed that the efficiency of the primary recovery mechanism varies with the reservoir, but is normally greatest under water drive, intermediate under gas-cap drive and least under solution-gas drive; the results obtained using waterflooding confirmed these findings. It concluded that, generally, primary and ultimate recoveries from carbonate reservoirs tend to be lower than from sandstones. For pattern waterflooding, the average ratio of secondary to primary oil recovery ranges from 0.3 in California sandstones to greater than 1.0 in Texas carbonates. For edge-water injection the ratio ranged from 0.33 in Louisiana to 0.64 in Texas. By comparison, secondary recovery by gas injection into a gas-cap reservoir averaged only 0.23 in Texas sandstones and 0.49 in California sandstones. The report recommended solution-gas-drive reservoirs as the better candidates for waterflooding, because they generally have higher residual oil after primary recovery than any other type. It also pointed out that the displacement of oil by waterflooding is controlled by oil viscosity, relative permeability, rock heterogeneity, formation pore-size distribution, fluid saturations, capillary pressure, and the locations of injection wells relative to the producers.
These factors contribute to the overall oil recovery efficiency (E_R) of waterflooding, which is the product of the displacement efficiency (E_D) and the volumetric sweep efficiency (E_V), each based on the fluid mobility (λ = K/μ). Mathematically:

E_R = E_D × E_V ............ (1.3)

where E_R = recoverable reserves, % pv; E_D = fluid displaced from the pore volume, %; and E_V = volumetric sweep efficiency, %.

Muskat and Wyckoff (1934) presented analytical solutions for direct-line-drive, staggered-line-drive, 5-spot, 7-spot and 9-spot patterns. Craig, et al (1955) worked on 5-spot and line-drive patterns. Kimbler, et al (1964) worked on the 9-spot pattern flood. Prats, et al (1959) worked on the 5-spot flood pattern. All their results showed that the areal sweep efficiency is low when the mobility ratio is high. They concluded that sweep efficiency matters more for the rate-versus-time behaviour of a waterflood than for ultimate recovery, because at the economic limit most of the flooded interval has either had enough water throughput to provide 100% areal sweep, or the water bank has not yet reached the producing well, so that no correction is needed for areal sweep. Fassihi (1986) provided a correlation for calculating areal sweep efficiency, curve-fitted to the data of Dyes and Caudle, of the form

(1 − E_A)/E_A = [a1 ln(M + a2) + a3] f_w + a4 ln(M + a5) + a6 ............ (1.4)

where E_A is the areal sweep efficiency, M the mobility ratio, a1 to a6 fitted constants, and the producing water cut is obtained from the water-oil ratio as

f_w = WOR/(WOR + 1) ............ (1.5)

Willhite (1986) used a material balance (MBE) to derive a model for estimating oil recovery by waterflooding, expressing the potential recoverable oil in terms of the flooded pore volume and the saturations:

N_p = (A h φ/B_o)(S_oi − S_or) ............ (1.6, 1.7)

where N_p = potential oil recoverable by waterflooding; S_oi = initial oil saturation; S_or = residual oil saturation; and B_o = oil formation volume factor. Dyes, et al (1954) showed experimentally that if the mobility ratio M of a 5-spot waterflood is 5, the areal sweep efficiency is 52% at breakthrough; if the economic limit is a producing water-oil ratio of 100:1 (f_w = 100/101 ≈ 99%), the sweep efficiency at floodout is 97%.
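The recovery-efficiency product of Eq. (1.3) and the water-cut arithmetic of the Dyes, et al example can be checked numerically. The displacement and sweep values below are assumed example fractions, not field data.

```python
def recovery_efficiency(e_d: float, e_v: float) -> float:
    """Eq. (1.3): overall recovery = displacement efficiency x volumetric sweep
    (all quantities as fractions)."""
    return e_d * e_v

def water_cut(wor: float) -> float:
    """Eq. (1.5): producing water cut f_w from a water-oil ratio;
    WOR = 100 gives 100/101, about 0.99."""
    return wor / (wor + 1.0)

e_r = recovery_efficiency(0.60, 0.52)   # assumed E_D = 0.60, E_V = 0.52
f_w = water_cut(100.0)                  # the 100:1 economic limit of the text
```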
If the polymer lowers the mobility ratio from M = 5 to M = 2, the sweep efficiencies are 60% at breakthrough and 100% at the economic water-oil ratio of 100:1. They concluded that a properly sized polymer treatment requires 15-25% pv at a polymer concentration of 250-2000 mg/L, injected over 1 to 2 years before reverting to normal waterflooding. Martin (1986) used the aluminium citrate process, consisting of the injection of an HPAM polymer solution slug, Al³⁺ and citrate ions, and a second polymer slug. The first polymer slug was adsorbed or retained on the surface of the reservoir rock; the Al³⁺ attached to the adsorbed polymer and acted as a bridge to the second polymer layer. The process was repeated until the desired layering was achieved. The disadvantage of this process is that transport of Al³⁺ through the reservoir may be limited to the near-wellbore region, so regions further out need another treatment. Gogarty (1983) used the reduction of chromium ions to permit crosslinking of HPAM or XG polymer molecules: a polymer slug containing Cr⁶⁺ was injected, followed by a polymer slug containing a reducing agent (Cr⁶⁺ → Cr³⁺), and a gel was formed with the polymer. The amount of permeability reduction is controlled by the number of times each slug is injected, the size of each slug and the concentration used. His alternative treatment involved placing a plain-water pad between the first and second polymer slugs. A cationic polymer is injected first, since reservoir surfaces are often negatively charged and strongly adsorb the cationic polymer; the injection of this cationic adsorbent slug generates a strong attraction between the adsorbed cationic polymer and the anionic polymer that follows.
The advantage is that the polymer concentrations used in these variations are normally low, about 250 mg/L with low-molecular-weight polymer, or a 1 to 1.3% addition to those used in conventional polymer flooding if a very stiff gel is desired, though the products used for gelation command a higher price. These treatments can also be used in fractured formations; examples include chromium acetate, polyacrylamides, colloidal silica and resorcinol-formaldehydes.

II. SURFACTANT AND ALKALINE FLOODING
Alkaline flooding, like surfactant flooding, improves oil recovery by lowering the interfacial tension (IFT) between the crude oil and the displacing water. The surfactants for alkaline flooding are generated in-situ when alkaline materials react with the crude oil. This is possible if the crude oil contains a sufficient amount of organic acids to produce natural surfactant, or if emulsification of the oil alters the preferential wettability of the reservoir rock. Surfactant flooding involves mixing a surface-active agent with other compounds, such as alcohol and salt, in water, and injecting the mixture to mobilize the crude oil; polymer-thickened water is then injected to push the mobilized oil-water bank to the producing wells. Water-soluble polymer can be used in a similar fashion with alkaline flooding. Alkaline flooding consists of the injection of an aqueous solution of sodium hydroxide (NaOH), sodium carbonate (Na₂CO₃), sodium silicate (Na₂SiO₃) or potassium hydroxide (KOH). The alkaline chemicals react with organic acids in certain crude oils to produce surfactants in-situ that dramatically lower the IFT between water and oil. The alkaline agent also reacts with the reservoir rock surfaces to alter the wettability from oil-wet to water-wet, or vice versa. Other mechanisms include emulsification and entrainment of oil to aid mobility control. The slug size of the alkaline solution is often 10-15% pv. The concentrations
of alkaline chemicals are normally 0.2 to 5%; a pre-flush of fresh or softened water often precedes the alkaline slug, and a drive fluid, which is water or polymer solution, follows the slug. [William, 1996]

Surfactant/Polymer Flooding
Fassihi (1986) set out the present-day methods for designing surfactant floods for enhanced oil recovery, which include a small slug of about 5% pv with a high surfactant concentration, 5 to 10% of the total chemical solution. In many micro-emulsion cases the combination included surfactant, hydrocarbons, water, electrolytes (salt) and a solvent (alcohol). This mixture uses a slug of 30 to 50% pv of polymer-thickened water to provide mobility control in displacement toward the producing wells. The advantage of this approach is that low-cost petroleum sulfonates, or blends with other surfactants, can be used.

Alkaline/Surfactant/Polymer (ASP) Flooding
Martin, et al (1986) used a combination of chemicals to lower process cost, by lowering the injection cost and reducing surfactant adsorption. The ASP solution permits the injection of a larger slug because of the lower cost.

Hydrocarbon (HCS) or Gas Injection
Taber (1982) worked on gas injection. He classified hydrocarbon or gas injection into: miscible solvent (LPG-propane), enriched-gas drive, high-pressure gas drive, carbon dioxide (CO₂), flue gas or inert gas (N₂) application to improve oil recovery. Recent gas-injection practice increasingly uses non-hydrocarbons (CO₂, N₂ or flue gas). Miscible hydrocarbon flooding can be subdivided into three techniques: LPG-slug/solvent flooding, enriched (condensing) gas drive and high-pressure (vaporizing) gas drive. Miscible flooding depends on the pressure and depth ranges needed to achieve fluid miscibility in the system.
The disadvantages of his work include early breakthrough and a large quantity of bypassed oil in practice, as well as hydrocarbon deferment: the gas needed for the process is valuable, so most operators prefer non-hydrocarbon gases such as CO₂, N₂ or flue gas, which are less valuable. The disadvantage of non-hydrocarbon gases is that N₂ or flue gas does not recover as much oil as hydrocarbon gases or liquids, owing to their low compressibility and poor solubility in oil at reservoir conditions.

Carbon Dioxide (CO₂) Flooding
Haynes, et al (1976) stated a number of reasons why CO₂ is an effective EOR agent:
i. Carbon dioxide is very soluble in crude oil at reservoir conditions; hence it swells the net volume of oil and reduces the oil viscosity before miscibility is achieved.
ii. As miscibility is approached, both the oil and CO₂ phases, containing the oil intermediates (C₂-C₆), can flow together owing to the low IFT and the relative increase of the oil volume from the combination of the CO₂ and oil phases, compared with waterflooding.
iii. Miscibility of oil and CO₂ in a crude-oil system is achieved when the pressure is high enough, so the target is for the system to attain the minimum miscibility pressure (MMP).
Their report showed a rough correlation between API gravity and the required MMP, and that the MMP increases with temperature. Holm and Josendal (1982) showed that a better correlation is obtained with the molecular weight of the heavy fraction of the oil than with the API gravity. Orr and Jensen (1982) showed that the required pressure must be high enough to achieve a minimum density in the CO₂ phase. At this density the CO₂ becomes a good solvent for the oil, especially the C₂-C₆ hydrocarbons, and the miscibility required for the efficient displacement normally observed in CO₂ flooding can develop.
To this effect, at high temperatures correspondingly high pressures are needed to raise the CO₂ density to match the value required for MMP at low temperature. Heller and Taber (1986) studied the mechanism of CO₂ flooding and found that it appears similar to that of miscible hydrocarbon flooding, but CO₂ flooding gives better oil recoveries even when both systems are above their required MMP, especially in tertiary flooding. This is because CO₂ is much more soluble in water, and it has been shown experimentally that it diffuses through the water phase to swell bypassed oil until the oil becomes mobile; the ultimate recovery may therefore be higher than with hydrocarbon gases above the MMP.

Miscible Flooding Design and Performance Prediction
General miscible-flooding design and performance prediction show that accuracy is affected by the pore volumes of solvent and drive fluid injected, the pressure distribution, the size of the solvent slug, the type of drive fluid, the mobilities of the solvent, drive fluid and reservoir fluids, and the displacement efficiencies in both the miscible and immiscible swept areas. Laboratory tests are used to determine miscibility performance. Physical and numerical models are used for computational fluid dynamics (CFD) prediction, which considers whether the displacement is miscible or immiscible and whether flow is vertical or horizontal. For medium-to-light-gravity crude oil in deep-to-medium-depth reservoirs, miscible displacement is considered. At medium-to-shallow depths with medium-to-heavy-gravity crude oil, the miscibility pressure (if it exists) surpasses the formation parting pressure; here displacement is immiscible, with the beneficial effects of viscosity reduction and oil swelling. The direction of displacement depends on reservoir geometry and characteristics: it is horizontal in non-dipping, thin pay zones and is controlled by the displacing-fluid/oil mobility ratio.
To avoid or reduce fingering of the displacing fluid, water-alternating-gas (WAG) injection is employed. Displacement is vertical in pinnacle-reef or salt-dome reservoirs and is controlled by gravity. For a gravity-stable process, upward vertical displacement is achieved using water as the chase fluid; downward displacement is accomplished using gas as the chase fluid. The initial phase of a miscible-fluid flood is reservoir pressurization, using water, after primary pressure depletion or otherwise. The total amount of injected water W and the time t necessary for reservoir pressurization are estimated. The total amount of displacing fluid required is estimated, in a pinnacle reef, for vertical, downward, gravity-stabilized displacement. The static wellhead injection pressure of the displacing fluid is estimated, as is the parasite-tubing injection pressure. The compressor horsepower required to compress 1 MM scf/d of the displacing gas from the given pressure and temperature to the required pressure, plus the wellhead and surface-choke losses, must also be estimated. [Stalkup, 1984]

Conventional EOR Performance Predictions
The National Petroleum Council (NPC) US (1984) studied general EOR methods against conventional performance in four categories. A is 5 to 10%: tight oil reservoirs, slightly fractured, or heavy-oil reservoirs.
B is 10 to 25%: oil reservoirs producing mainly by solution-gas drive. C is 25 to 40%: oil reservoirs producing under water drive and gas injection. D is 40 to 55%: oil reservoirs produced by conventional waterflooding.

Table 1.1: EOR methods compared with conventional performance predictions, giving the predicted recovery range (%) for each method in categories A to D. The methods compared include in-situ combustion, steam injection, polymer injection, solvent injection (dry or rich gas, LPG or alcohol), surfactant flooding, gas/CO₂ injection (immiscible and miscible flooding), and improved conventional methods (infill drilling, water-gas injection, gas-cap water injection, waterflooding with gas injection, pressure pulsing, and attic-oil gravitational gross flooding). Source: [National Petroleum Council (NPC) Study US, 1984]

Adsorption of Surfactants on Grain Surfaces
Studies showed that although petroleum sulfonates of high equivalent weight cause the greatest reduction in interfacial tension, they are insoluble in water and so are readily adsorbed. Lower-equivalent-weight sulfonates show very little adsorption and are water-soluble, more so when mixed with those of high equivalent weight. In addition, the chemical system is provided with various mineral compounds which are adsorbed in preference to the surfactant; other mineral additives (NH₃ or Na₂CO₃) protect the surfactant slug against minerals in the formation water. [Carlos, et al, 2003]
Santoso (2003) worked on the effects of divalent cations and dissolved oxygen on hydrolyzed polyacrylamide (HPAM) polymers and found that HPAM polymers are unstable at elevated temperature in the presence of divalent cations (Ca²⁺, Mg²⁺) and dissolved oxygen.
Moradi-Araghi and Doe (1987) worked on the effects of divalent cations on HPAM using divalent-cation concentrations of 2000 mg/L at 75℃, 500 mg/L at 88℃, 270 mg/L at 96℃, 250 mg/L at 120℃, 200 mg/L at 160℃, 150 mg/L at 180℃, 100 mg/L at 200℃, 50 mg/L at 220℃ and less than 20 mg/L at 240℃. They found that for divalent-cation concentrations below 20 mg/L, polymer hydrolysis and precipitation (ppt) will not be a problem at temperatures of 200℃ or above. They concluded that the two chemicals known to impact the critical stability of partially hydrolyzed polyacrylamide (HPAM) are divalent cations (Ca2+, Mg2+) and dissolved oxygen. They equally showed that HPAM polymers in the absence of divalent cations or dissolved oxygen are stable for at least eight (8) years at 100℃ in brine concentrations of 0.3 to 2% NaCl, or of 0.2% NaCl + 0.1% NaHCO3 at 160℃, and are more stable above 160℃ in a brine of 2% NaCl + 1% NaHCO3 than others without an antioxidant or chemical oxygen scavenger. They recommended water pre-flushing to remove or reduce the effects of dissolved oxygen projected in the reservoir, or from any leak at surface facilities or piping; this prevents aggravation of HPAM degradation.

Emulsion Problems in Oil Recovery Efficiency
An emulsion is a dispersion of one liquid in another, with one as the continuous phase and the other as the discontinuous phase. There are two main types: oil-in-water (O/W) and water-in-oil (W/O) emulsions. O/W emulsions occur commonly in pipelines and surface tanks or facilities, while W/O emulsions occur mainly in the reservoir near the wellbore. Under reservoir conditions, an emulsion with macro-droplets of the dispersed phase tends to plug the reservoir pore spaces and reduce permeability, thereby reducing well-inflow performance; the disadvantage is a reduction in fluid-recovery efficiency. An emulsion with micro-droplets of the dispersed phase flows more easily through the pore spaces than a macro-emulsion.
This is because the micro-emulsion phase is similar to crude oil and behaves just like its droplets. The advantage of a micro-emulsion is that it mobilizes residual oil in the reservoir, thus improving recovery efficiency. Any agent that promotes a micro-emulsion with droplet sizes in the range 1×10−6 to 1×10−4 mm is an enhancement chemical for high oil-recovery efficiency. Residual oil saturation is the total volume of irreducible oil in a reservoir; it acts as a displacing agent for the recoverable oil. If the residual oil saturation is high, oil-recovery efficiency is low; if it is low, only a small volume of oil is left in the reservoir, i.e., recovery efficiency is high.
Obah, et al (1998) worked on ''Micro-Emulsion Phase in Equilibrium with Oil and Water'' and showed that when the maximum adsorption of oil is attained, the micro-emulsion becomes thermodynamically stable. Any additional oil begins to build an oil bank as a third equilibrium phase; this phase has relatively low viscosity and Newtonian flow behaviour at low flooding pressure. A micro-emulsion can equally reduce IFT to a low value with minimal interfacial energy. The advantage of a low tension force is that it reduces both the capillary and viscous forces, which are frictional forces opposing oil recovery in a reservoir; any agent that reduces both forces enhances oil-recovery efficiency. They equally showed that the oil-phase viscosity can be reduced using miscible flooding (surfactants) and thermal processes (heating). Fully miscible oil and water phases simultaneously reduce both frictional forces: the capillary force is reduced when the IFT is reduced to a minimum, while the viscous force is reduced when the fluids are miscible and flow as one phase. The viscosity of the water phase is increased using polymer, and the interfacial tension (IFT) is reduced through the addition of surfactant.
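The capillary-versus-viscous force argument above can be made quantitative through the capillary number. This is an illustrative sketch, not part of the paper's models: it uses one common definition, Nc = v·µ/σ (Darcy velocity times displacing-fluid viscosity over interfacial tension, in consistent CGS units), together with the screening thresholds of Gupta and Trushenski cited in this paper; the velocity and viscosity values are made up for illustration.

```python
# Illustrative capillary-number screening (assumed definition Nc = v*mu/sigma,
# CGS units). Thresholds 1e-5 (water-wet) and 1e-4 (oil-wet) are the critical
# values quoted from Gupta and Trushenski in this paper.

def capillary_number(velocity_cm_s: float, viscosity_poise: float,
                     ift_dyne_cm: float) -> float:
    """Dimensionless capillary number Nc = v * mu / sigma."""
    return velocity_cm_s * viscosity_poise / ift_dyne_cm

def is_eor_candidate(nc: float, water_wet: bool = True) -> bool:
    """Screen Nc against the critical value for the wetting state."""
    return nc > (1e-5 if water_wet else 1e-4)

# Hypothetical rate (1e-4 cm/s) and water viscosity (0.01 poise).
# Lowering IFT from ~30 dyne/cm (plain water drive) to 3.33e-3 dyne/cm
# (the surfactant target quoted later in Table 2.5) raises Nc by about
# four orders of magnitude.
nc_waterflood = capillary_number(1e-4, 0.01, 30.0)
nc_surfactant = capillary_number(1e-4, 0.01, 3.33e-3)
```

This is why surfactant flooding aims at the 0.01-0.001 dyne/cm IFT window discussed below: at ordinary waterflood IFT the capillary number stays far below the mobilization threshold.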
An experimental procedure was carried out on crudes from three primary oil-production terminals in the Niger Delta (Escravos, Forcados and Qua Iboe) by Obah, et al (1998). Four categories of emulsion phases were used for the study:
i. Equilibrium of oil and an oil/water emulsion phase
ii. Equilibrium of water and a water/oil emulsion phase
iii. Equilibrium between oil, water and an emulsion phase
iv. Exclusive availability of a micro-emulsion phase, as a control experiment
They found that the addition of co-surfactants such as alcohols favours the formation of micro-emulsions. They equally carried out model tests using hydrocarbons such as toluol, n-octane and cyclohexane to ascertain the factors influencing micro-emulsion phases. They found that the surfactant carboxymethylated nonylphenol ethoxylate (5 EO/mole) with the co-surfactant isopropanol favoured micro-emulsion formation and stability in aqueous solution within a given range of salt concentration (1 to 22 wt% NaCl). They concluded that micro-emulsion volume increases with surfactant concentration and decreases with temperature. Paraffinic oils need a higher temperature to form a stable micro-emulsion than others. Toluol formed a middle-phase emulsion between 12 and 13 wt% NaCl, cyclohexane between 19 and 22 wt%, but n-octane did not form an emulsion even at 22 wt%. The salinity windows of the Escravos and Qua Iboe oils are 17 and 23 respectively, while Forcados oil lies between 19 and 24; the range increased to 6%. They stated that the tendency to develop a middle-phase micro-emulsion is highest with aromatic hydrocarbons and lowest in oils with a high percentage of alkanes (saturated hydrocarbons), while cycloalkanes lie between them. The oil composition, formation-water ion content and temperature are fixed parameters, so the choice of surfactant and co-surfactant must be based on the individual system.
Table 1.2: Influence of temperature on phase behaviour (toluol)
Temp [℃] | Water [ml] | Aqueous phase [ml] | Micro-emulsion [ml] | Oil phase [ml]
48 | 25 | 5.5 | 7.5 | 12.0
54 | 25 | 8.0 | 7.0 | 10.0
60 | 25 | 8.5 | 5.5 | 11.0
66 | 25 | 9.0 | 0.0 | 16.0*
Source: [Obah, et al, 1998]. *The upper-phase micro-emulsion was observed.
They concluded that a closed oil bank developed in a pilot test and can be produced. The micro-emulsion flows at the optimal flooding velocity to the end of the flooding tube.

Interfacial Tension Maintenance
A laboratory study reported that it would be necessary to reduce and maintain the interfacial tension at 0.01 to 0.001 dyne/cm; this has a direct effect on the residual oil saturation. To obtain this low interfacial-tension value, petroleum sulphonate derived from crude oil was used. This was successful because sulphonates have high interfacial activity, are less expensive and are potentially available in large supply. The challenge is selecting the components so as to reduce or displace the residual oil saturation. [Atkinson, 1927]

Wettability and Capillary Pressure Synergy
The wettability of a fluid on rock depends on the capillary number. A reservoir will be an MP/EOR candidate if the capillary number is greater than 10−5 for the water-wetting critical value and/or 10−4 for the oil-wetting critical value. [Gupta and Trushenski, 1979]

Water Displacement in Linear Series Beds
The displacing-fluid cut in each zone of a reservoir depends on the millidarcy-feet (md.ft) of oil-flowing capacity at the time each zone breaks through to production. The distance of the advanced flood front is proportional to the absolute permeability (K). In linear bed geometry all beds undergo the same oil-saturation change due to the displacement effect of the displacing fluid, more so if all beds have similar porosity and relative permeabilities to oil and water.
Under a constant pressure drop across the beds, with a mobility ratio greater than unity, the total flow through all the beds will increase, because the less mobile oil phase is replaced by the more mobile displacing-fluid phase. [Stiles, 1949]

Petroleum Profit Taxation (PPT)
The current and past fiscal regimes relating to oil-field development only offer a reduction of 19.25% from 85% in PPT, giving 65.75% for newcomers in the first 5 years. This does not adequately pay for the use of unconventional equipment and technology, which are much more expensive. [David and Decree 23, 1996]

Legal Framework for Oil Reserve Fields
Acquiring an oil reserve in the Niger Delta (Nigeria) confers the right to exploit the assigned oil fields effectively; it is therefore necessary to consider the methods and procedures by which these fields are transferred and acquired (farm-out and farm-in) by intending investors, within the existing and pending legislation. The Petroleum Act of 1969, Decree No. 23 of 1996 (Amendment), deals with the exploration, drilling (evaluation) and production of oil and gas in Nigeria. An additional paragraph 16A of the Act provides guidelines for the development and production of these fields. Many of these fields lie within the existing OPL and OML portfolios of the major oil companies, as in joint-venture operations with NNPC. Since some of these fields are the low-reserve, smaller portions of the granted OPL and OML areas, the methods of acquisition must accord with those prescribed or allowed under the Oil and Gas Act or Decree granting the OPL and OML. [Decree-23, 1996]

Memorandum of Understanding (MOU)
Adepetun, et al (1996) worked on the MOU and stated that it was another major fiscal incentive on profit, given to enhance export, encourage exploration and production activities, increase investment volume, promote crude-oil lifting operations and enhance the reserves base.
In addition, a mechanism was introduced to ensure that producers actually realized their equity share of the crude oil recovered. Actual market prices are the basis used for computing government-take values (PPT and royalty). Contractual arrangements: 1. Concession arrangement (sole risk); 2. Joint venture; 3. Production-sharing contract (PSC), the government's preference; 4. Service contract; 5. Joint operating sharing holdings; 6. Contract (currently in use, with government interest).

II. RESEARCH METHODOLOGY
Research Work Plan
In this research a surfactant slug was designed, called the CLOGEN-Polymer slug (CPS). The second part of the design used pressurized polymer injection. Mathematical definitions and calculation procedures for the materials, reagents and proceeds of the investment are incorporated. The third part of the research covers an economic evaluation procedure for effective cost control: a mathematical evaluation of both the total oil recovery and the cost of recovering it, estimating the profit margin before and after the petroleum profit tax (PPT) taken by government.

Project Case Design
Down-dip and up-dip injection in the solution-gas-drive reservoir was designed to enhance the recovery. A double line-drive pattern was employed, with chemical-flooding simulation using CLOGEN-Polymer slug injection at one end of the reservoir to maintain the reservoir pressure above the bubble-point pressure as well as to displace the oil level to the perforated section. Pressurized injection is equally done at the other end to achieve miscibility pressure and enhance lifting of the fluids to the surface. The producers are placed in between them for effective drainage. The water and gas produced are recovered in a separation process and sent to the water plant and gas plant respectively for treatment and re-injection. The water-treatment and injection skid conditions the water for the CPS and the pressurized stream before re-injection.
In addition to cutting down cost, the production system was designed to ensure the availability of the required water and pressurized streams. A total of nine (9) wells was estimated: three (3) injection wells for CLOGEN-Polymer injection at the lower dip, three (3) injection wells for pressurized-stream injection at the upper dip and three (3) producing wells in between the injectors. Figure 2.1 shows a schematic view of the field converted for EOR.

Fig. 2.1: Mechanism of CPS operation — 1. Chase water bank; 2. Polymer slug (CPS); 3. CLOGEN surfactant solution (CSS); 4. Miscible displacement bank (CPS, oil and gas).

CLOGEN-Polymer Slug (CPS) Design
Table 2.1: CLOGEN Surfactant Solution (CSS) composition (CLOGEN-2A)
Component | Conc (wt%)
Active surfactant (HPAM) | 10.0
Crude oil | 15.0
Fresh water | 70.0
Co-surfactant (hexyl or isopropyl alcohol) | 2.0
Inorganic salt (2% + 1%) | 3.0
Total | 100.0

In each surfactant-solution preparation, about 100 g (10%) of active surfactant was placed in an anaerobic chamber; about 700 ml (70%) of fresh water was added and stirred; then about 170 ml of crude oil (density 0.8550, i.e., 15 wt%) was added and the mixture stirred again vigorously. About 20 ml of co-surfactant was added and shaken properly, and finally 30 g of inorganic salts was added to the mixture in the anaerobic chamber. The complete solution was transferred into Teflon-wrapped plugs (the CLOGEN-Polymer storage tool). The objective of CLOGEN-Polymer slug injection is to reduce and maintain the IFT between 0.01 and 0.001 dyne/cm; the slug is less expensive and potentially available in large supply. Surfactants in water solution recover more of the oil, because the proportionate composition assures a gradual transition from displacement by water to displacement of the oil without a significant interface. Another advantage is that it converts macro-emulsion to micro-emulsion, which enhances high recovery.
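The batch arithmetic behind Table 2.1 can be sketched on a 1000 g basis. This is only an illustration of the quoted weight percentages; the crude-oil density of 0.8550 g/ml is taken from the text, and fresh water is assumed to be 1.0 g/ml.

```python
# Sketch of the CSS batch arithmetic from Table 2.1, on a 1000 g basis.
# Assumed densities: crude oil 0.8550 g/ml (from the text), water 1.0 g/ml.

composition_wt_pct = {
    "HPAM surfactant": 10.0,
    "crude oil": 15.0,
    "fresh water": 70.0,
    "co-surfactant": 2.0,
    "inorganic salt": 3.0,
}

def batch_masses(basis_g: float = 1000.0) -> dict:
    """Convert the wt% recipe into component masses for one batch."""
    assert abs(sum(composition_wt_pct.values()) - 100.0) < 1e-9
    return {k: basis_g * pct / 100.0 for k, pct in composition_wt_pct.items()}

masses = batch_masses()
crude_volume_ml = masses["crude oil"] / 0.8550  # ~175 ml vs ~170 ml quoted
water_volume_ml = masses["fresh water"] / 1.0   # 700 ml, as in the text
```

The 150 g of crude implied by 15 wt% corresponds to roughly 175 ml at the quoted density, consistent with the "about 170 ml" measured out in the procedure.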
Inorganic salt is used in preparing the surfactant solution in order to gain better control of the solution viscosity. The surfactant solution is driven by a polymer slug in order to control its mobility; this is called CLOGEN surfactant-polymer (CSP) flooding. The CSP solution is miscible with the reservoir fluids (oil and water) without phase separation, assuring a lower residual oil after displacement. The percentage of fluid displaced depends on the rock uniformity, the areal sweep efficiency and the injection-fluid invasion efficiency. The surfactant solution is similar to an emulsion except that the discontinuous phase in the solution is smaller in size (more microscopic).

CLOGEN Mechanism of Operation
The three principal components of CLOGEN are surfactant (sulfonate), oil and water in the oil-and-water region. Oil and water are in equilibrium and external to the CLOGEN, each lying at opposite ends of the miscible line AB. In the miscible region all the components are present with little or no interfaces. The pseudo-critical diagram for practical CLOGEN-Polymer slug (CPS) displacement in the field of study is the oil-and-water region. The surfactant slug moves through the reservoir, changing its composition as it absorbs oil and water, thereby attaining miscible displacement in the presence of the injected pressurized stream.

Fig 2.2: Pseudo-critical saturation diagram (single-phase region; miscible line AB; miscible-phase bank; oil-and-water region) [Source: Niger Delta Oil Sample Analysis]
Fig 2.3: Volume of oil bank observed (oil bank least dense; CLOGEN slug 62.3 lbm/ft3; water 62.4 lbm/ft3)

Experimental Procedure and Observations
About 25 ml of each CLOGEN solution was pipetted into a boiling tube containing 50 ml of macro oil emulsion. The mixture was agitated, exposed to direct sun heating from 60 to 240℉ and left to settle. The volume of the oil bank observed for each CLOGEN type was recorded at every 30℉ increase.
Table 2.2 shows the detailed recorded values. In this case hydrolyzed polyacrylamide (HPAM, called CLOGEN-2A) was selected, because a fresh-water HPAM solution can provide an efficient sweep with minimum mixing with saline brine if the polymer mobility is sufficiently low. In the absence of O2 and/or divalent cations (Ca2+, Mg2+), HPAM polymer viscosity remains unchanged at 100℃ (212℉) for many years, and in EOR service it is stable up to 120℃ (248℉) even if it contacts O2 and/or divalent cations. Moreover, most reservoirs produce water with little or no detectable dissolved oxygen, and oxygen ingress can be controlled in the field by preventing leakages.

Table 2.2: Temperature effect on micro-emulsion [Source: experimental results from the field of study]
T [℉] | CLOGEN-1 micro/oil [ml] | CLOGEN-2 micro/oil [ml] | CLOGEN-3 micro/oil [ml]
60 | 5.2 / 10.5 | 5.0 / 10.7 | 5.2 / 8.3
90 | 10.3 / 12.7 | 15.1 / 20.1 | 15.3 / 17.9
120 | 14.9 / 30.2 | 15.0 / 37.4 | 15.0 / 20.4
150 | 10.4 / 38.6 | 9.2 / 43.0 | 10.3 / 40.3
180 | 6.7 / 40.7 | 7.5 / 51.7 | 8.5 / 50.4
210 | 2.3 / 41.8 | 5.3 / 62.1 | 5.3 / 58.9
240 | 0.4 / 43.1 | 0.0 / 68.8 | 0.5 / 60.0

Technical Evaluation and Modelling
Assumptions (necessary to derive the equations and make reasonable calculations) for variable permeability in series/parallel beds:
1. Linear geometry; the distance (Δx) of the advanced flood front is proportional to the absolute permeability (K): Δx ∝ K.
2. Production in each zone changes from oil to displacing fluid (CLOGEN).
3. The displacing-fluid (water or CLOGEN) cut in each zone depends on the millidarcy-feet (md.ft) of oil-flowing capacity at the time it breaks through to water production.
4. There is negligible cross-flow between zones.
5. All beds have the same porosity, relative permeability to oil (kro) ahead of the flood front and to water (krw) behind it.
6. All beds undergo a similar oil-saturation change (ΔSo) due to CLOGEN displacement.
7.
The given zone thickness is Δhj and its permeability Kj.
8. The velocity of the flood front is proportional to the permeability of the bed.
9. When the mobility ratio (M) equals 1.0 there is a constant velocity and pressure drop, meaning uniform-permeability (K) beds.
10. When M ≥ 1.0 there is variable velocity and pressure drop (non-uniform-permeability beds).
11. The total pressure drop equals the sum of the individual drops in the zones.
12. The total bed length is the sum of the individual lengths in the zones.
13. The flow is single-phase, since miscible, and two-dimensional (2D), since cross-flow is small.

Permeability in Linear Beds or Layers
The flooded and producing capacities of the layered system are combined in eqn 2.1, where
hj = total height swept at the given EV = Δh1 + Δh2 + ... + Δhj   (2.2)
ht = Δh1 + Δh2 + ... + Δhn = reservoir thickness   (2.3)
Kj·hj = completely flooded capacity, md.ft
K·ht − Kj·hj = producing capacity, md.ft
Substituting these into eqn 2.1 gives eqn 2.4; multiplying eqn 2.4 by the volumetric sweep efficiency gives eqn 2.5, the recovery efficiency (Table 3.2, column 6).

Cumulative Oil Recovery (Np) Modelling
The cumulative recovery follows from eqn 2.6 with the saturation change ΔSo; substituting this into eqn 2.6 gives eqn 2.8, the cumulative recovery Np at the given EV (Table 3.2, column 7).

Actual Oil Recovery Factor (%ER)
%ER = (Np/N) × 100   (2.9)
or %ER is read from the graphical table (Table 3.2, column 8)   (2.10)

Total Surfactant Requirement (GTS) Estimation
The total surfactant requirement (eqn 2.11) is the unit floodable pore volume Vp times the active-surfactant concentration Cs in the injected slug, corrected for the surfactant retention Ds, which depends on the clay content Wclay, the rock and surfactant densities ρr and ρs, and the ratio (1 − ø)/ø; Vs/Ds is the slug-size to surfactant-retention ratio.

Total Polymer Requirement (GPM) Estimation
When relative-permeability data are available, a plot of the polymer-buffer concentration (CpB) against cumulative injection can be made. The initial mobility of the polymer buffer is made equal to the minimum mobility of the water and oil bank; then the viscosity (mobility) of the buffer is graded down to that of the chase fluid.
Alternatively, a simplified plot of the polymer concentration in the initial portion of the drive against the oil-to-water viscosity ratio is made, applying the US Department of Energy (1980) model values.

Table 2.3: Polymer concentration based on oil-water viscosity ratio
µo/µw | CpB [ppm]
1.0 | 300
2.0 | 417
3.0 | 550
4.0 | 689
5.0 | 825
6.0 | 900
7.0 | 1082
8.0 | 1200
9.0 | 1260
10.0 | 1500

Fig 2.5: Polymer and CLOGEN viscosity-ratio synergy (average concentration of the polymer buffer ≈ 1150 ppm, CpB plotted against µo/µw).

The total polymer requirement GPM (eqn 2.12) then follows from the average polymer-buffer concentration CpB.

Project Life (t) Estimation
The reservoir pressure gradient must be 0.1 psi/ft less than the injector pattern-drive pressure gradient to maintain the elastic limit, so that the total underground withdrawal at the producing end equals the surfactant invasion rate at the other end of the reservoir block. This prevents the free-gas saturation from exceeding the critical fluid saturation for proportionate-volume flow. The resultant effect is that the double line-drive mechanism provides normal conditions for proportionate phase (oil and gas) separation. Using the US Department of Energy (1980) mathematical model, the total injection volume in pore volumes, Vinj, is the sum of the slug, polymer-buffer and chase-water volumes (eqn 2.13), and the project life t follows from the total injection volume and the injection rate (eqn 2.14).

Field Development Study and Estimation
This must be based on the number of well patterns in the given field (injectors and producers) and the CPS required to sweep the area in a given period. The total area to be developed (DA) is a function of the floodable pore volume (Vp) and the reservoir effective porosity (through the 7758·ø·h acre-ft conversion), and the total number of wells (Nw) for the project depends on the reservoir area DA (eqn 2.15). The function of the number of wells is to increase the surface area for sweep efficiency.

Economic Data and Mathematical Modelling
The revenue ($ or N) depends on the market price (P1, $/bbl) and the recoverable fluids (Np). It equally depends on the market modifier factor (XS).
About 80% of the current market price is used to minimize inflation and fluctuation effects. XS = 1.0 for sweet (non-acid) crude or 0.9 for sour crude. Nigerian crude is predominantly sweet, but the average value of 0.95 is preferred for conservative reasons. Using the OPEC oil-market price model of the US Department of Energy (1980), the current oil buying price is estimated.

Revenue from the Proceeds
Rev = Np · P1 · XS   (2.16)

Development Costs Data Estimation
This part of the model covers the expenses incurred on licence applications, field exploration bills, drilling new wells, purchasing equipment, and conversion and workover jobs on old wells to suit the EOR project, together called CAPEX (eqn 2.6), with its development-cost recovery value CAPEXRV (eqn 2.17).

Yearly Project Operations Costs (Investment)
The yearly operating cost OPEX (eqn 2.18) and its recovery value OPCR (eqn 2.19) are estimated. The annual overhead (OHDC) is 10% of the investment (eqn 2.20), with recovery value OHCR (eqn 2.21).

Yearly Operations Information-Flow Calculation
i. Yearly crude-oil production: q = Np/t   (2.22)
ii. Yearly revenue (rounded down): Rev = q · P1 · XS   (2.23)
iii. Royalty interest: Roy = 12.5% of revenue   (2.24, 2.25)
iv. Working interest: WI = Rev − Roy = 87.5% of revenue   (2.26)
v. State tax: STax = 8% of the yearly investment   (2.27, 2.28)
vi. Yearly net cash flow before tax: NCF = Rev − Roy − STax − OPEX − CAPEX − OHDC
vii. Cumulative cash flow before tax: CUM = Σ(Rev − Roy − STax − OPEX − CAPEX − OHDC)
viii. Income tax, the petroleum profit tax (PPT): government fixed the PPT at 65.75% for newcomers in the first five years of the business and at 85% thereafter and for old members. PPT is the tax rate applied to the taxable income (eqn 2.29).
Net pay value: NPV = Taxable − PPT   (2.30, 2.31)
Equation 2.31 is the general net-pay-value mathematical definition. The percentage of net cash flow (%NCF) gives the investor an idea of how much he is getting at the end of the contract.
%NCF is computed from the net cash flow and the total proceeds (eqn 2.32).

Evaluated Model Equations Applications
This section presents the application of the models to 89 reservoirs in 4 categories (Table 2.4), with total reserves of 1.24 MMMstb. About 80% of these reserves showed that 65% to 72% of the reserves were recovered using the CLOGEN slug, compared to 48% recovered using conventional methods (dissolved-gas drive and waterflooding). The economic models equally showed a good NPV after PPT.

Table 2.4: A cross-section of the 4 categories of reservoirs
Category | Reservoir capacity, N MMstb | Number of fields, Nf | Reserves, MMstb
i | 0.1 - 5.0 | 18 | 90.0
ii | 5.1 - 10.0 | 16 | 160.0
iii | 10.1 - 15.0 | 22 | 330.0
iv | 15.1 - 20.0 | 33 | 660.0
Total | | 89 | 1240.0
Probability, P(N < 5.0) = 18/89 ≈ 0.20, or 20%
Probability, P(N > 5.1) = 71/89 ≈ 0.80, or 80%

Example Application
Table 2.5: Initial, production and laboratory data
Reservoir depth, D | 60000 ft
Reservoir thickness, h | 24 ft
Porosity, ø | 28%
Irreducible water saturation, Swi | 30%
Average permeability, K | 400 md
Dykstra permeability variation, VDk | 0.5
Oil gravity | 34 °API
Oil viscosity, µo | 3.4 cp
Initial reservoir temperature, Ti | 102 °F
Oil FVF, Boi/Bof | 1.15/1.10 rb/stb
Average reservoir area, A | 80 acres
Cumulative production (gas & water drives), Np (48%) | 17.2 MMstb
Water-oil ratio, WOR | 21
Residual oil saturation (swept zone), Sorw | 26%
Oil saturation in the unswept zone, Sor | 65%
Salinity of the water, Ws | 6.5×10^4 ppm TDS
Water viscosity, µw | 0.55 cp
Clay content of the rock, Wclay | 0.05
Rock density, ρr | 156 lbm/ft3
Surfactant density, ρs | 62.3 lbm/ft3
Injection pressure gradient, Cp | 0.5 psi/ft
IFT, £ | 3.33×10−3 dyne/cm
Initial oil in place, N (17.2 × 100/48) | 35.8 MMstb
MP displacement efficiency, Emp | 77.39%
Volumetric sweep efficiency, EV | 80%
Vertical sweep efficiency, ED | 65%

The field was abandoned due to high gas production after 17.2 MMstb (48%) recovery; it was then selected for reconsideration as a pilot reservoir for this study.
It is a rectangular reservoir bordered on all sides by faults, except one side bordered by an aquifer, in a monocline dipping 13° to the faults. After a short period of production under a dissolved-gas-drive mechanism, the reservoir was converted to waterflooding in a selected single-line-drive area of an 80-acre pattern. The cumulative oil production under solution-gas drive and waterflooding was 17.2 MMstb, 48% of the pore volume. Table 2.5 collates the history, production and laboratory test data of the field.

Solution: Technical Evaluation Procedures (Table 3.2)
Column 1: volumetric sweep efficiency EV.
Column 2: Δh, bed-thickness delineation.
Column 3: absolute permeability (capacity) of each bed.
Column 4: cumulative capacity of the beds, ΣK.
Column 5: cumulative flooded capacity, Kj·hj.
Column 6: using Table 3.2, the 80% sweep efficiency in the most permeable part of the formation has a total permeability of 331 md and contains 331/400 = 83% of the total formation capacity. When the 22.4th foot has been completely flooded, the recovery efficiency estimated from eqn 2.5 is 0.87.
Column 7: applying eqn 2.8 gives the actual cumulative oil recovery at 80% volumetric sweep efficiency, Np = 25.87 MMstb.
Column 8: applying eqn 2.10 gives the actual oil recovery factor, %ER = 72.20%.
Additional oil recovery: ΔNp = 25.87 − 17.2 = 8.67 MMstb (24.20% PV).

III. RESULTS AND DISCUSSIONS
Technical Feasibility Results
About 89 reservoirs in 4 categories, with total reserves of 1.24 MMMstb, were evaluated. Table 3.1 shows the confirmed evaluation models, Table 3.3 the technical feasibility results, and Tables 3.4 to 3.6 the economic results.
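The bed bookkeeping behind Table 3.2 and the worked example's recovery arithmetic can be sketched as follows. This is an illustrative reading of the printed table, not the paper's code: the bed thickness (2.8 ft per 10% of sweep) and permeabilities are taken from columns 2-3, column 5 is interpreted as the flooded capacity Kj·hj (an inference that matches the printed md.ft values), and the OIIP comes from the quoted 48% conventional recovery.

```python
# Layer bookkeeping of Table 3.2 plus the worked example's material balance.
# Assumptions: beds in order of decreasing permeability, each 2.8 ft thick;
# column 5 read as Kj*hj (flooded capacity, md.ft).

DH = 2.8                                      # ft per bed increment (col 2)
K = [45, 44, 43, 42, 41, 40, 39, 37, 35, 34]  # bed permeabilities, md (col 3)

def table_rows(k_list, dh):
    """Return (hj, Kj, cum_K, Kj*hj) per bed, as in columns 2-5."""
    rows, cum_k, h = [], 0.0, 0.0
    for kj in k_list:
        cum_k += kj
        h += dh
        rows.append((h, kj, cum_k, kj * h))
    return rows

rows = table_rows(K, DH)
# Quoted check: at Ev = 80% the flooded beds hold 331 md of the 400 md
# total, i.e. ~83% of the formation capacity.
cap_fraction_80 = rows[7][2] / rows[-1][2]

# Worked-example material balance (all figures quoted in the text):
n_p_conventional = 17.2          # MMstb, solution-gas drive + waterflood
oiip = n_p_conventional / 0.48   # ~35.8 MMstb (Table 2.5)
recovery_factor_cps = 0.7220     # Table 3.2, column 8, at Ev = 80%
n_p_cps = recovery_factor_cps * oiip        # ~25.87 MMstb
incremental = n_p_cps - n_p_conventional    # ~8.67 MMstb additional oil
```

The Kj·hj reading reproduces the printed column 5 (e.g. 37 md × 22.4 ft = 828.8 md.ft at 80% sweep), and the material balance reproduces the 35.8 MMstb OIIP, the 25.87 MMstb CPS recovery and the 8.67 MMstb incremental oil quoted in the solution.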
Table 3.1: Technical and economic evaluation models (summary)
Technical models: eqn 2.8 — cumulative recovery Np; eqns 2.9-2.10 — recovery factor %ER; eqn 2.11 — surfactant needed GTS; eqn 2.12 — required polymer GPM; eqn 2.14 — project duration t; eqn 2.15 — total wells required (using Table 2.5).
Economic models: eqn 2.16 — total revenue; eqn 2.23 — yearly revenue; eqns 2.25-2.26 — royalty and working interest; eqn 2.29 — government tax (PPT); eqn 2.31 — net pay value; eqn 2.32 — net cash flow.

Table 3.2: CLOGEN-slug flooding performance prediction
EV % | hj ft | Kj md | ΣK md | Kj·hj md.ft | Eff | Np MMstb | %ER
10 | 2.8 | 45 | 45 | 125.0 | 0.38 | 11.30 | 31.54
20 | 5.6 | 44 | 89 | 246.4 | 0.45 | 13.38 | 37.34
30 | 8.4 | 43 | 132 | 361.2 | 0.52 | 15.46 | 43.15
40 | 11.2 | 42 | 175 | 470.4 | 0.59 | 17.55 | 48.98
50 | 14.0 | 41 | 215 | 574.0 | 0.66 | 19.63 | 54.79
60 | 16.8 | 40 | 255 | 672.0 | 0.73 | 21.71 | 60.59
70 | 19.6 | 39 | 294 | 764.4 | 0.80 | 23.79 | 66.40
80 | 22.4 | 37 | 331 | 828.8 | 0.87 | 25.87 | 72.20
90 | 25.2 | 35 | 366 | 882.0 | 0.93 | 27.66 | 77.20
100 | 28.0 | 34 | 400 | 952.0 | 1.00 | 29.74 | 83.00

Table 3.3: Technical feasibility results [calculated using the technical feasibility equations]
Oil initially in place (OIIP), N | 35.83 MMstb
Cumulative oil production, Np | 17.2 MMstb (48% PV)
Additional recovered oil, NR | 8.67 MMstb (24.20% PV)
Total recovery factor, ER (48 + 24.20) | 72.20% PV
Capillary number, Nc | 2.67×10−3
Total surfactant required, GTS | 75.00 Mstb
Total polymer required, GPM | 11.0×10^6 lbm
Project life or duration, t | 6 years
Total field for development, DA | 874 acres
Total number of wells (6 old + 3 new) | 9 wells
Wells for conversion and workover jobs | 6 wells
Total new wells to drill | 3 wells
Distribution | 6 injectors & 3 producers (3 wells each)

Table 3.4: Yearly operations information flow
Yr | MMbbl | Rev | WI | Roy | S/T | OPCR | OHCR | DVCR
0 | 0.000 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
1 | 1.445 | 118.3 | 103.51 | 14.79 | 1.28 | 16.00 | 9.60 | 33.60
2 | 1.445 | 118.3 | 103.51 | 14.79 | 1.28 | 16.00 | 9.60 | 33.60
3 | 1.445 | 118.3 | 103.51 | 14.79 | 1.28 | 16.00 | 9.60 | 33.60
4 | 1.445 | 118.3 | 103.51 | 14.79 | 1.28 | 16.00 | 9.60 | 33.60
5 | 1.445 | 118.3 | 103.51 | 14.79 | 1.28 | 16.00 | 9.60 | 33.60
6 | 1.445 | 118.3 | 103.51 | 14.79 | 1.28 | 16.00 | 9.60 | 33.60
Total | 8.67 | 709.8 | 621.06 | 88.72 | 7.68 | 96.00 | 57.60 | 201.6
Source: [Calculated using economic feasibility models]

Table 3.5: Six-year cash flow at 65% and 85% PPT ($ per year of operation)
Item | Years 1-5 (each) | Year 6
Revenue | 118.3 | 118.3
WI | 103.51 | 103.51
Roy | 14.787 | 14.787
STax (8% inv) | 1.280 | 1.280
CAPEX | 16.00 | 16.00
OPEX | 33.60 | 33.60
OHCR | 9.600 | 9.600
Taxable | 28.240 | 28.240
PPT | (18.36) | (24.00)
NPV | 9.884 | 4.236
OPEXCRV | 16.00 | 16.00
CAPEXCRV | 33.60 | 33.60
OHCRV | 9.600 | 9.600
NCF | 69.084 | 63.436
Source: [Calculated using economic feasibility models]

The research results show that in the pilot reservoir 25.87 MMstb (72.20%) was estimated recovered, compared to 17.20 MMstb (48%) by the conventional methods used. Thus an economic additional recovery factor of 24.20% of pore volume was achieved in this field, because of the effect of the CPS on the oil-displacement efficiency.
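One model year of Table 3.5 can be sketched as below. This is an inferred reading of the printed figures, not the paper's own code: matching the table requires taking the taxable income as WI − Roy − STax − CAPEX − OPEX − OHCR, applying the 65% PPT of the table header (85% in year 6), and adding the cost-recovery values back into the net cash flow; the 12.5% royalty and 87.5% working interest are inferred from the printed 14.787 and 103.51 against the 118.3 revenue.

```python
# Sketch of one model year of Table 3.5 ($ per year), with the taxable-income
# line and royalty/working-interest splits inferred from the printed figures.

def model_year(rev: float = 118.3, ppt_rate: float = 0.65):
    roy = 0.125 * rev                  # royalty interest (12.5% of revenue)
    wi = rev - roy                     # working interest (87.5% of revenue)
    stax, capex, opex, ohcr = 1.28, 16.00, 33.60, 9.60  # quoted cost lines
    taxable = wi - roy - stax - capex - opex - ohcr     # ~28.24, as printed
    ppt = ppt_rate * taxable           # petroleum profit tax
    npv = taxable - ppt                # net pay value
    ncf = npv + capex + opex + ohcr    # cost-recovery values added back
    return taxable, ppt, npv, ncf

taxable, ppt, npv, ncf = model_year()              # years 1-5 (65% PPT)
taxable6, ppt6, npv6, ncf6 = model_year(ppt_rate=0.85)  # year 6 (85% PPT)
```

With these assumptions the sketch reproduces the printed Taxable 28.24, NPV 9.884 (4.236 in year 6) and NCF 69.084 (63.436 in year 6) to within rounding.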
Table 3.6: Effect of PPT on net profit (NPV) [calculated using the economic feasibility equations]
PPT % (new/old) | NPV, $ | %NPV
10/29.25 | 55.8582 | 29.06
15/34.25 | 52.0725 | 27.09
20/39.25 | 48.2969 | 25.13
25/44.25 | 44.5211 | 23.16
30/49.25 | 38.3228 | 19.94
35/54.25 | 36.9700 | 19.24
40/59.25 | 33.1943 | 17.27
45/64.25 | 29.4186 | 15.21
50/69.25 | 25.6300 | 13.34
55/74.25 | 21.8673 | 11.38
60/79.25 | 19.0917 | 9.41
65/84.25 | 14.3132 | 7.45
70/89.25 | 10.5404 | 5.43
75/94.25 | 6.7647 | 3.52

Fig 3.1: Net profit value against petroleum profit tax [Source: generated from Table 3.6]
The graph shows that when the PPT is 30% the NPV is about 20%, and when the PPT is 40% the NPV is 17%. This implies that at the 65% and 85% rates the NPV falls to about 5%. The only remedy is the MOU between the Government and the investor.

IV. DISCUSSIONS
The primary advantage of these model results is in identifying and selecting the chemical-flooding technique for high oil recovery in the Niger Delta fields. This enhances the prediction of the fluid-production value over a given period using the chemical-flooding mechanisms. At any stage of production the designed slug controls the displacement of oil from the pore spaces and its sweep to the producers. The principal mechanisms of the CLOGEN-Polymer slug are its ability to prevent the free-gas saturation from exceeding the critical fluid saturation, to maintain the reservoir pressure above the bubble-point pressure, and to give very high displacement of the oil level and lifting to the surface. The effective fluid recovery using the CLOGEN-Polymer slug ranges from 65% to 72% of the reserves, compared to the 15% to 48% common with the conventional methods (dissolved-gas drive and waterflooding).

V. CONCLUSION
Mathematical evaluation models were successfully derived for preparing a CLOGEN-Polymer slug that effectively displaces oil from the pore spaces and sweeps it to the producers in practice.
The principal advantage is that a 10% to 25% addition to the conventional-method recovery of the recoverable reserves would be achieved. This is possible because the surfactant-oil phase activity and the changes in the CLOGEN-Polymer cause a reduction in the interfacial tension required for a miscible displacement. The surfactant-brine-oil phase measurement can control any difficulty of interfacial tension and also provides a basis for CLOGEN-surfactant flooding design.

VI. RECOMMENDATIONS
a. The CLOGEN-Polymer density must be 62.3 lb/ft3 in formation water of 62.4 lb/ft3 (i.e., about 0.1 lb/ft3 less than the formation water). This maintains a proportionate adsorption profile. The recovery in this case is between 65% and 75% if the volumetric sweep efficiency is up to 80%. These principles are achieved only in a very narrow range of salt concentration in the CLOGEN solution. The salinity of the brine influences the phase behaviour of the CLOGEN-surfactant solution, so it needs a good correlation with the interfacial tension.
b. The wells' (producers') locations must be determined using the principle of moments. The advantage of using this principle is that attainment of the fluids' miscibility pressure and micro-emulsion are possible, with assurance of vertical oil displacement.
c. Injection gradients must be slightly above the reservoir pressure gradients for controlled flow, but this is best determined in practice.
d. Amortization must be spread throughout the contract duration and not taken at once as in the conventional production operation contract. This favours business viability and stability.
e. The best way to determine PPT should be based on the individual contract, for fair consideration. In this pilot reservoir I recommend a PPT of 40% for newcomers and 60% in the subsequent years, with an NPV of 42%. This would entice small-scale investors, since the profit is good.
This equally increases indigenous firms' chances to participate in the upstream oil sector.
f. Enhanced oil recovery technology can maintain the potential of a country's declining proven elephant reserves, so developing special methods to advance recovery efficiency is recommended.
g. Government should do all that is necessary to encourage advancement in fluid-recovery-efficiency research.
h. The development of low oil fields enhances technical knowledge exchange and transfer. It equally gives citizens employment opportunities, increases both the domestic oil base and foreign reserves and exchange, and generates additional revenue for the nation.
i. The most assured philosophy for high recovery in a reservoir is to recognize early the proper techniques to use in that reservoir. This guides the development program of the reservoir towards the exploration and exploitation programs best suited for high recovery.
j. To successfully farm out and farm in low oil fields for development, the government, field owners and interested investors (OPL/OML licence holders) have to come together and reformulate the terms of agreement, or the government should use its veto power and formulate a farm-out and farm-in policy.
American Journal of Engineering Research (AJER) e-ISSN : 2320-0847 p-ISSN : 2320-0936 Volume-02, Issue-12, pp-252-257 www.ajer.org Research Paper Open Access

Ambient Intelligent Computing in Health Care
Dr. Hisham S. Katoua
Management Information Systems Dept., Faculty of Economics & Administration, King Abdul-Aziz University, Jeddah, Kingdom of Saudi Arabia

Abstract: - Ambient Intelligence Computing (AIC) is a multidisciplinary field of research that includes artificial intelligence technologies and human-centric computer interaction. Recently, AIC has become an efficient tool for health care, e-health and telemedicine. This technology enables elderly people and people with disabilities to improve their quality of life. This paper discusses the applications of AIC in health care for elderly people and people with disabilities. Challenges and current research areas are discussed as well.
Keywords: - Knowledge-based systems, ambient intelligence, disabilities, human-centric computer interaction, intelligent computing, health care, artificial intelligence

I. INTRODUCTION
Recently, ICT has produced a new computing paradigm known as ambient intelligence (AIC) [1, 8]. AIC is characterized by invisible computational power embedded in everyday objects, appliances and other common physical objects, including intelligent mobile and wearable devices [6, 9]. The concept of AIC provides a vision of the information society where the emphasis is on greater user-friendliness, more efficient service support, user empowerment, and support for human interactions.
People are surrounded by intelligent, intuitive interfaces that are embedded in all kinds of objects, and by an environment that is capable of recognizing and responding to the presence of different individuals in a seamless, unobtrusive (i.e., many distributed devices are embedded in the environment, not intruding upon our consciousness unless we need them) and often invisible way. AIC is anticipated to have a profound impact on the everyday life of people in the information society [3, 4, 9]. A variety of new products and services will be made possible by the emerging technological environment, including home networking and automation, mobile health management, interpersonal communication, and personalized information services. Many of these applications and services are anticipated to address a wide variety of domains and tasks that are critical for elderly people and people with disabilities. For example, in the health care domain, AIC technologies have the potential to greatly improve services for everyone. Sensors measuring heart rate, blood pressure, and other vital signs will provide the possibility of accurate, real-time monitoring of the user's state of health, with mobile communication devices automatically dispatching emergency calls if necessary. Portable positioning systems (e.g. GPS) can also help in identifying the location of a patient, and various mobile communication devices can be used to obtain access to a patient's health-care record from any place and at any time. The deployment of telemedicine systems in AIC settings will also contribute to continuity of care and patient education, assist patients in taking medications, and improve healthcare delivery.

II. MAIN CHARACTERISTICS OF AIC
- AIC refers to electronic environments that are sensitive and responsive to the presence of people.
- AIC aims to detect anomalous events in seemingly disconnected ambient data that we take for granted.
- AIC is a new paradigm that enables a system to understand human states and feelings and to share this intimate information.
- AIC is a vision of the future of consumer electronics, computing and telecommunications that was originally developed in the late 1990s for the time frame 2010-2020 [2]. Figure 1 shows the time frame of AIC.
Figure 1: Timeframe 2010-2020 [2]
- AIC is made possible by the convergence of affordable sensors, embedded processors, and wireless ad-hoc networks.
- The AIC paradigm builds upon ubiquitous computing and human-centric computer interaction design.
- In an ambient intelligence world (see Figure 2), devices work in concert to support people in carrying out their everyday activities and tasks in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices [5].
Figure 2: Ambient Intelligent World [5]

III. THE KEY TECHNOLOGIES OF AIC
AIC is characterized by systems and technologies that are embedded, context aware, anticipatory, adaptive, and personalized. Figure 3 shows the AIC cycle, which is composed of the following five phases: (a) Embedded: many networked devices are integrated into the environment; (b) Context aware: devices can recognize the situational context of the person; (c) Anticipatory: the person's desires are anticipated without conscious mediation; (d) Adaptive: the system changes in response to the person; and (e) Personalized: the system is tailored to personal needs.
Figure 3: AIC Cycle

Figure 4 classifies the key technologies which cooperate to deliver an AIC system. The five main classes and their corresponding technologies are:
1. Human-centric computer interfaces
   (a) Intelligent agents
   (b) Multi-modal interaction
   (c) Context awareness
2. Dynamic and massively distributed device networks
   (a) Service discovery
   (b) Auto-configuration
   (c) End-user programmable devices and systems
3. Unobtrusive hardware
   (a) Miniaturization
   (b) Nanotechnology
   (c) Smart devices
   (d) Sensors
4. Seamless mobile/fixed communication and computing infrastructure
   (a) Interoperability
   (b) Wired and wireless networks
   (c) Service-oriented architecture
   (d) Semantic web
5. Dependable and secure systems and devices
   (a) Self-testing
   (b) Self-repairing software
   (c) Privacy-ensuring technologies
Figure 4: Key Technologies of AIC

Figures 5 and 6 show the main scientific disciplines and their related topics which contribute to building an AIC system. The three main disciplines with their corresponding topics are:
1. Artificial Intelligence
   (a) Expert systems
   (b) Computer vision
   (c) Natural language processing (NLP)
   (d) Robotics
   (e) Intelligent agents
   (f) Data mining and knowledge discovery (DM & KD)
2. Computational Intelligence
   (g) Neural computing
   (h) Genetic algorithms
   (i) Decision trees
   (j) Rough sets
   (k) Fuzzy logic
   (l) Case-based reasoning (CBR)
3. Web Intelligence
   (m) Web mining
   (n) Web farming
   (o) Web log mining
   (p) Web information retrieval
   (q) Web knowledge management
   (r) Semantic web
Figure 5: AIC Scientific Disciplines and Communities, from AI to AIC
Figure 6: AmI Disciplines

IV. SOME EXAMPLES OF AIC FOR DISABILITIES
1. Neural computing to improve linguistic word prediction. Word prediction is the most frequently used technique in writing systems designed to assist people with disabilities.
2. Vision-based human-computer interfaces: intelligent eye-tracking systems that implement an eye mouse to provide computer access for people with severe disabilities.
3. Virtual reality technologies. Patients with disabilities can be trained with virtual reality systems to judge architectural barriers and tackle environmental obstacles.
4. Accelerometer-based human-computer interfaces for people with severe disabilities.
5. Mobile technologies for people with disabilities.

V. APPLICATIONS OF AIC IN THE HEALTH CARE SECTOR
A variety of new products and services will be possible for elderly people and people who are disabled, e.g. home networking and automation, mobile health management, interpersonal communication and personalized information services.
Sensors measuring heart rate, blood pressure, and other vital signs will provide the possibility of accurate, real-time monitoring of the user's state of health, with mobile communication devices automatically dispatching emergency calls if necessary. Portable positioning systems (e.g. GPS) can help in identifying the location of a patient, and mobile communication devices can be used to obtain access to a patient's healthcare record from any place and at any time. Telemedicine systems in AIC settings will contribute to continuity of care and patient education, assist patients in taking medications, and improve healthcare delivery.

VI. EMERGING CHALLENGES OF AIC
- The distribution of interaction over devices and modalities.
- The balance between automation, adaptation and direct control.
- The identification of contextual dependencies among services.
- Health and safety issues.
- Privacy and security.
- Social issues.

VII. THE SOCIAL AND POLITICAL ASPECTS OF AIC
7.1 SOCIETAL ACCEPTANCE OF AIC
The ISTAG advisory group suggests that the following characteristics will permit the societal acceptance of AIC:
(a) AIC should facilitate human contact.
(b) AIC should be orientated towards community and cultural enhancement.
(c) AIC should help to build knowledge and skills for work and a better quality of work.
(d) AIC should inspire trust and confidence.
(e) AIC should be consistent with long-term sustainability - personal, societal and environmental - and with life-long learning.
(f) AIC should be easy to live with and controllable by ordinary people.

7.2 BUSINESS MODELS FOR AIC
The ISTAG group acknowledges the following entry points to the AIC business landscape:
(a) Initial premium-value niche markets in industrial, commercial or public applications where enhanced interfaces are needed to support human performance in fast-moving or delicate situations.
(b) Start-up and spin-off opportunities from identifying potential service requirements and putting together the services that meet these new needs.
(c) High-access, low-entry-cost models based on loss leadership, in order to create economies of scale (mass customization).
(d) The customer's attention economy as a basis for 'free' end-user services paid for by advertising or complementary services.
(e) Self-provision, based upon the network economies of very large user communities providing information at near-zero cost.

VIII. CONCLUSION
Ambient intelligence computing is a new approach to developing efficient assistive technology that increases and improves the functional capabilities of individuals with disabilities. The ambient intelligence paradigm yields smart technologies that enable elderly people and people with disabilities to change and improve their quality of life and overcome many barriers. Ambient intelligence computing offers a new way to use medical devices at a distance for many health care activities, e.g. home networking, mobile health management, interpersonal communication and personalized information services.

REFERENCES
[1] P. L. Emiliani and C. Stephanidis, "Universal access to ambient intelligence environments: Opportunities and challenges for people with disabilities", IBM Systems Journal, Vol. 44, No. 3, 2005, pp. 605-619.
[2] ISTAG Advisory Group Report on Scenarios for Ambient Intelligence in 2010, http://www.hltcentral.org/usr_docs/ISTAG-Final.PDF
[3] C. Stephanidis, et al., "Toward an Information Society for All: An International R&D Agenda", International Journal of Human-Computer Interaction 10, No. 2, 107-134 (1998).
[4] C. Stephanidis, et al., "Toward an Information Society for All: An International R&D Agenda", International Journal of Human-Computer Interaction 11, No. 1, 1-28 (1999).
[5] C. Stephanidis and P. L. Emiliani, "Connecting to the Information Society: a European Perspective", Technology and Disability Journal 10, No. 1, 21-44 (1999).
[6] P. L. Emiliani, "Special Needs and Enabling Technologies: An Evolving Approach to Accessibility", in User Interfaces for All - Concepts, Methods and Tools, C. Stephanidis, Editor, Lawrence Erlbaum Associates, Mahwah, NJ (2001), pp. 97-114.
[7] Mu-Chun Su, Kuo-Chung Wang, and Gwo-Dong Chen, "An Eye Tracking System and its Application in Aids for People with Severe Disabilities", Biomedical Engineering: Applications, Basis & Communications, Vol. 18, No. 6, pp. 319-327 (2006).
[8] Y. Cai and J. Abascal (Eds), "Ambient Intelligence in Everyday Life", LNAI 3864, pp. 67-85, 2006.
[9] A. Salem and H. S. Katoua, "Exploiting the Ambient Intelligent Paradigm for Health Care", International Journal of Bio-Medical Informatics and e-
American Journal of Engineering Research (AJER) e-ISSN : 2320-0847 p-ISSN : 2320-0936 Volume-02, Issue-12, pp-126-130 www.ajer.org Research Paper Open Access

Analysis of Optical Spectrum for Hematite Nanofluid Longpass Tunable Filter
Fairuza Faiz1, Ebad Zahir2
1, 2 (Electrical and Electronic Engineering, American International University-Bangladesh, Bangladesh)

Abstract: - The appropriate liquid-filter material must meet several requirements, including exact refractive index and absorption coefficients, optical constants that determine a satisfactory spectral response, solubility and stability in cold and hot water, and environmental safety. Recent research indicates that nanofluids must be chosen very carefully to see improvement in the proposed application. This is especially true for the nanofluid optical properties in tunable filters. If the volume fraction of nanoparticles is very high, all the incoming light will be absorbed in a thin surface layer, where the thermal energy is easily lost to the environment. On the other hand, if the volume fraction of nanoparticles is low, the nanofluid will not absorb all the incoming radiation. Therefore, the optical properties of the fluid must be controlled very precisely, or a nanofluid could actually be detrimental to the application. This paper focuses on analyzing the transmittance spectrum of hematite nanofluid for varying levels of volume fraction, particle size (diameter) and thickness of the liquid layer, using simulation results based on an established mathematical model.
Keywords: - Nanofluid, transmittance, tunable filters.

I.
INTRODUCTION AND METHODOLOGY
Nanofluid optical filters, which have recently become objects of research in the field of nano-electronic devices, have bright prospects in optical communications, sensing, lighting, photography, and energy harvesting [3]. They can meet transient needs in a system because of the substantial amount of control that can be achieved by varying the particle size, volume fraction, thickness of the liquid layer, and external factors like magnetic and electric fields. The transmittance and absorbance of the spectrum are highly affected by the choice of the base fluid, alongside other factors like the extinction and scattering efficiency of the nanoparticles. An ideal optical limiter should be transparent to low-energy laser pulses and opaque at high energies, so that it can protect human eyes and optical sensors from intense laser radiation [4]. The objective of this paper is to analyze the performance of hematite, or iron(III) oxide (Fe2O3), nanofluid using simulation results to justify its usability as a longpass filter in various fields of operation. Fig. 1 shows a flow chart that describes how this proposed filter can be used as a tunable filter in communication.
Fig 1: Block diagram of a tunable nanofluid filter application in a transmitter circuit.
Shinde et al [5] compared the transmittance of aluminium-doped Fe2O3 with that of pure hematite thin films, whereas Nair et al [4] reported having successfully implemented a ferrofluid optical limiter. The simulation results here show that there is a close match between the transmittance of the hematite nanofluid filter and that of the hematite thin films given in previously published work by Shinde et al [5].
The equations given in section III of this paper take into account the effect of the base fluid (water here), the volume fraction, the thickness of the liquid layer and the particle size (diameter) of the nanoparticle on the transmittance of electromagnetic radiation through the hematite nanofluid filter.

II. RESULTS AND ANALYSIS OF TRANSMITTANCE SPECTRUM
The variation of transmittance with thickness of the liquid layer at a constant volume fraction of 70% and particle diameter of 0.005 µm is presented in Fig 2. There is significant transmittance for a nanofluid layer thickness of 1 µm compared to the rest of the values considered, as can be seen from the simulated data collected in Table 2. It is also noteworthy that such a high transmittance pattern has been obtained at a comparatively high concentration, or volume fraction, of the nanoparticle in the base fluid. Therefore, the results infer that the thinner the nanofluid layer at a high volume fraction, the greater the similarity in performance of the hematite nanofluid to the conventional solid hematite thin-film filter for a small particle size.
Fig 2: Transmittance spectrum of hematite nanofluid at different thicknesses of liquid layer. Base fluid transmittance (in red) is at l = 1000 µm.

Table 2: Transmittance for varying thicknesses of the hematite nanofluid layer at constant volume fraction (70%) and particle diameter (5 nm).

Thickness (µm)  Color code  %T at 0.300 µm  0.500 µm  0.850 µm  1.350 µm  1.550 µm
1               Magenta     0%              25%       70%       71%       71%
10              Black       0%              0%        2%        2%        2%
100             Green       0%              0%        0%        0%        0%
1000            Blue        0%              0%        0%        0%        0%

Fig. 3 shows the transmittance spectrum for varying particle sizes while the volume fraction and thickness are kept constant. From Table 3 it can be inferred that transmittance tends to improve at the mid and higher infrared wavelengths for a hematite nanofluid.
The transmittance spectrum is also consistent over considerable wavelengths in the infrared region and has a significantly high level of transmittance there. From Fig. 3 it can be seen that the cutoff frequency can be regulated by varying the particle size at fixed values of volume fraction and thickness of the nanofluid layer. The results in Fig 3 were obtained by varying the particle diameter at a constant thickness of 0.1 µm and a volume fraction of 50%.
Fig 3: Transmittance spectrum of hematite nanofluid at different particle diameters.

Table 3: Transmittance for different sizes of hematite nanoparticles at fixed thickness and volume fraction (volume fraction = 50%, thickness = 0.1 µm).

Diameter (µm)  Color code  %T at 0.300 µm  0.500 µm  0.850 µm  1.350 µm  1.550 µm
0.005          Blue        32%             87%       94%       94%       94%
0.05           Black       63%             87%       92%       94%       94%
0.2            Green       93%             94%       94%       94%       94%

Fig. 4 presents the effect of varying the volume fraction of hematite nanoparticles of diameter 0.005 µm at a nanofluid layer thickness of 0.1 µm (100 nm). As seen earlier, a high concentration evidently plays an important role in the transmittance of the fluid filter. With increasing volume fraction the level of transmittance increases, and the cutoff frequency tends to shift towards the right, i.e. deeper into the infrared region.
Fig 4: Transmittance spectrum for varying volume fraction of hematite nanofluid.

Table 4: Transmittance for different volume fractions of hematite at fixed thickness and particle diameter.
Diameter = 0.005 µm and thickness = 0.1 µm

Volume fraction  Color code  %T at 0.300 µm  0.500 µm  0.850 µm  1.350 µm  1.550 µm
0.1%             Blue        89%             89%       90%       90%       90%
5%               Green       80%             89%       90%       90%       90%
50%              Black       32%             87%       94%       94%       94%

Considering the combined effect of the size parameter, volume fraction and thickness of the liquid layer, it can be concluded that the best transmittance results for the hematite nanofluid are obtained for higher concentration, lower thickness and smaller particle size.

III. MATHEMATICAL MODEL
Mie theory was used to calculate the scattering and extinction efficiency factors [1] of a single homogeneous spherical particle. In this study the relative refractive index was applied because the nanoparticle is immersed in the base fluid. The relative refractive index is defined as

    m = (n_p + i*κ_p) / n_m                                      (1)

where n_p and κ_p are the real and imaginary parts of the complex refractive index of the nanoparticle, respectively, and n_m is the refractive index of the dispersing medium. The size parameter is defined by equation (2), where d is the diameter of the nanoparticle and λ is the wavelength in the medium:

    χ = π*d / λ                                                  (2)

The extinction efficiency factor of the nanoparticle can be calculated by

    Q_ext = (2/χ²) Σ_{n=1}^{∞} (2n + 1) Re(a_n + b_n)            (3)

where a_n and b_n are the Mie scattering coefficients, which can be found by solving Bessel functions. The extinction coefficient for a non-absorbing monodisperse particulate medium, where f_v is the volume fraction of nanoparticles, is given by equation (4). If absorption by the base fluid (water) is taken into account, the extinction coefficient for the water-based nanofluid is given by equation (5), where κ_bf is the absorption index of the base fluid:

    σ_λ = 1.5 * f_v * Q_ext / d                                  (4)

    σ_λ,nf = 1.5 * f_v * Q_ext / d + (1 − f_v) * 4π*κ_bf / λ     (5)

The regular transmittance of a liquid layer with a thickness of l µm can then be determined by Beer's law, equation (6):

    T_λ = exp(−σ_λ,nf * l)                                       (6)
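As a numerical sketch of equations (4)-(6), the transmittance of a water-based nanofluid layer can be evaluated as below. This is a hedged illustration: the Q_ext, κ_bf and wavelength values are placeholder assumptions chosen for the example, not data from this paper, and the Mie sum of equation (3) is taken as given rather than computed.

```python
import math

def transmittance(q_ext, f_v, d_um, kappa_bf, wavelength_um, l_um):
    """Beer's-law transmittance of a nanofluid layer, per eqs. (4)-(6).

    q_ext         -- Mie extinction efficiency of the particle (assumed given)
    f_v           -- particle volume fraction (0..1)
    d_um          -- particle diameter, micrometres
    kappa_bf      -- absorption index of the base fluid (dimensionless)
    wavelength_um -- wavelength in the medium, micrometres
    l_um          -- liquid-layer thickness, micrometres
    """
    sigma_particles = 1.5 * f_v * q_ext / d_um                        # eq. (4)
    sigma_fluid = (1.0 - f_v) * 4.0 * math.pi * kappa_bf / wavelength_um
    sigma_nf = sigma_particles + sigma_fluid                          # eq. (5)
    return math.exp(-sigma_nf * l_um)                                 # eq. (6)

# Placeholder inputs for illustration only (not the paper's values):
T = transmittance(q_ext=0.01, f_v=0.50, d_um=0.005,
                  kappa_bf=1e-6, wavelength_um=1.55, l_um=0.1)
print(round(T, 3))   # → 0.861
```

With these placeholder inputs, increasing the layer thickness tenfold raises the exponent tenfold, which is the Beer's-law behaviour behind the strong thickness dependence seen in Table 2.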
IV. CONCLUSION AND FUTURE WORK
It has been demonstrated through the simulation results that a hematite nanofluid can be employed to effectively control the transmittance of infrared radiation by varying the volume fraction, thickness and particle size. Further consideration of external factors like magnetic field and temperature change would have some added effect on the transmitted or absorbed spectrum, but these were not explored in this paper. In an experimental setup the liquid layer is generally enclosed in a quartz cell during measurements of radiative properties. Therefore, reflection exists at the interface of air and glass and at the interface of glass and liquid. These multiple reflections can affect the transmitted thermal radiation. Hence, the quantity obtained from Eq. 6 needs to be corrected for multiple reflection [2] in order to attain results closer to the values obtained in realistic experiments.
In general the main advantages of this type of optical filter are: (a) a single filter can be used for a range of central wavelengths, where the desired central-wavelength region can be tuned by an external magnetic field [8]; (b) it is suitable for selecting wavelengths in the ultraviolet, visible and infrared regions [8]; (c) there is no need to change the optical element for different wavelength regions [8]; (d) tuning can be easily achieved by changing the field strength; (e) the spectral distribution can be controlled by adjusting the polydispersity (the inconsistency in size, shape and mass distribution) of the emulsion [8]; (f) the intensity of the transmitted light can be controlled by changing the emulsion concentration [8]; (g) it is simple to operate and less expensive compared to existing filters [8].
For optical filters, 'line' absorbers are highly preferred for selective light extinction, and this can be achieved by the use of metallic-shell/silica-core particles that exhibit pronounced plasmon resonance [3].
The wavelength, or natural resonant frequency, at which this occurs is determined by parameters such as the particle's size, shape, shell thickness, and the bulk optical properties of the materials involved [3, 9]. Core/shell nanoparticles can thus be tuned to achieve the desired optical properties by changing the above-mentioned parameters.

REFERENCES
[1] Q. Zhu, Y. Cui, L. Mu, L. Tang, "Characterization of thermal radiative properties of nanofluids for selective absorption of solar radiation," DOI: 10.1007/s10765-012-1208-y.
[2] S.H. Wemple, J.A. Seman, Appl. Opt. 12, 2947 (1973).
[3] R.A. Taylor et al., "Feasibility of nanofluid-based optical filters," Applied Optics 52(7), 1413-1422 (2013).
[4] S.S. Nair, J. Thomas, C.S. Suchand Sandeep, M.R. Anantharaman, R. Philip, "An optical limiter based on ferrofluids," Applied Physics Letters 92, 171908 (2008), DOI: 10.1063/1.2919052.
[5] S.S. Shinde, R.A. Bansode, C.H. Bhosale, K.Y. Rajpure, "Physical properties of hematite α-Fe2O3 thin films: application to photoelectrochemical solar cells," DOI: 10.1088/1674-4926/32/1/013001.
[6] L. Zheng, J. Vaillancourt, C. Armiento, X. Lu, "Thermo-optically tunable fiber ring laser without any mechanical moving parts," Optical Engineering 45(7), 070503 (2006).
[7] E.D. Palik, Handbook of Optical Constants of Solids, vol. 3 (Academic Press, San Diego, 1998).
[8] http://www.igcar.ernet.in/igc2004/htdocs/technology/ferroseal_2009.pdf
[9] K.L. Kelly, E. Coronado, L.L. Zhao, G.C. Schatz, "The optical properties of metal nanoparticles: the influence of size, shape, and dielectric environment," J. Phys. Chem. B 107, 668-677 (2003).
American Journal of Engineering Research (AJER), e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-117-125, www.ajer.org. Research Paper, Open Access.

3G HUMAN ANCESTOR?... (A new theory on human cell mass growth and gene growth)

M. Arulmani, B.E. (Engineer); V.R. Hema Latha, M.A., M.Sc., M.Phil. (Biologist)

Abstract: - In biological science, DNA is normally considered to be packed in the form of one or more macromolecules called "chromosomes", which carry genetic information in the "cell" through "GENES". The research analysis in this article proposes that the "3G HUMAN" shall be considered the first life organism evolved in the universe. "3G" shall mean "THREE GENE", derived from the three fundamental Neutrinos of the Universe. In the expanding Universe, the fundamental Neutrinos might have gained mass and produced thousands of species Neutrinos over three generations of geological periods. It is speculated that these thousands of species Neutrinos might have added additional genes to the three fundamental genes. In brief, it is proposed that three generations of higher-mass Neutrinos might have generated three generations of "genetically varied humans". This research further proposes that the 3G HUMAN shall be considered the 1st generation human, which probably lived on the "MARS PLANET" with a different genetic structure. The so-called modern human shall be considered the 3rd generation human, and the 2nd generation human might have lived in between. Genetic study confirms that bacteria have about 500 genes and modern humans have around 20,000 to 30,000 genes. This research concludes that, in the evolutionary progression from bacteria to invertebrates, vertebrates and modern humans, all subsequently derived genes might have been derived from the fundamental three genes of the 3G HUMAN ANCESTOR. The fundamental Neutrinos shall be considered "GOD PARTICLES". Hence the 3G HUMAN ANCESTOR shall be considered "GOD".
Key Words: 1) Philosophy of God Particles; 2) Philosophy of "Tablet"; 3) Philosophy of Eye Iris Variation; 4) Philosophy of Human Blood Evolution; 5) Philosophy of pH Variation; 6) Philosophy of Universe.

I. INTRODUCTION
This article emphasizes that the 3G HUMAN ANCESTOR at origin had only 3 genes, the least cell mass, the least pH value and a dark eye iris. It is further proposed that every human at birth has a different cell mass, and that the cell mass of one individual cannot exactly match that of another, theoretically even within the same family. In other words, every individual has his own "Cell Mass Index". The "Cell Mass Index" of a person can be considered equivalent to the independent "SIM CARD" of a mobile phone, distinguished by a specific number. The three generations of humans shall be considered as distinguished by a wide difference in the "bandwidth of cell mass", which might have necessitated a wide difference in the "bandwidth of genetic characteristics". It is speculated that different diseases and disorders shall be considered harmonics which alter the normal "cell mass index" of a person, and consuming medicine for various diseases shall be considered a means of restoring the cell mass index to its normal level. "Different medicine is required for different persons of different cell mass index for the same disease" - Author

II. PREVIOUS PUBLICATIONS
The philosophy of the origin of first life and human, the philosophy of the model Cosmo Universe, and the philosophy of fundamental neutrino particles have already been published in the various international journals mentioned below. Hence this article shall be considered an extended version of the previous articles already published by the same author.
[1] Cosmo Super Star – IJSRP, April issue, 2013
[2] Super Scientist of Climate Control – IJSER, May issue, 2013
[3] AKKIE MARS CODE – IJSER, June issue, 2013
[4] KARITHIRI (Dark Flame), The Centromere of Cosmo Universe – IJIRD, May issue, 2013
[5] MA-AYYAN of MARS – IJIRD, June issue, 2013
[6] MARS TRIBE – IJSER, June issue, 2013
[7] MARS MATHEMATICS – IJERD, June issue, 2013
[8] MARS (EZHEM), The Mother of All Planets – IJSER, June issue, 2013
[9] The Mystery of Crop Circle – IJOART, May issue, 2013
[10] Origin of First Language – IJIRD, June issue, 2013
[11] MARS TRISOMY HUMAN – IJOART, June issue, 2013
[12] MARS ANGEL – IJSTR, June issue, 2013
[13] Three Principles of Akkie Management – AJIBM, August issue, 2013
[14] Prehistoric Triphthong Alphabet – IJIRD, July issue, 2013
[15] Prehistoric Akkie Music – IJST, July issue, 2013
[16] Barack Obama is Tamil Based Indian? – IJSER, August issue, 2013
[17] Philosophy of MARS Radiation – IJSER, August 2013
[18] Etymology of Word "J" – IJSER, September 2013
[19] NOAH is Dravidian? – IJOART, August 2013
[20] Philosophy of Dark Cell (Soul)? – IJSER, September 2013
[21] Darwin Sir is Wrong?! – IJSER, October issue, 2013
[22] Prehistoric Pyramids are RF Antenna?!... – IJSER, October issue, 2013
[23] HUMAN IS A ROAM FREE CELL PHONE?!... – IJIRD, September issue, 2013
[24] NEUTRINOS EXIST IN EARTH ATMOSPHERE?!... – IJERD, October issue, 2013
[25] EARLY UNIVERSE WAS HIGHLY FROZEN?!... – IJOART, October issue, 2013
[26] UNIVERSE IS LIKE SPACE SHIP?!... – AJER, October issue, 2013
[27] ANCIENT EGYPT IS DRAVIDA NAD?!... – IJSER, November issue, 2013
[28] ROSETTA STONE IS PREHISTORIC "THAMEE STONE"?!... – IJSER, November issue, 2013

III. HYPOTHESIS
1) The 3G Human Ancestor is considered to have derived its "three genes" from the three fundamental neutrinos of the universe. The three neutrinos shall be distinguished by the alphabets A, K, J to differentiate their physical, chemical and mathematical characteristics.
2) An increase in cell mass index shall be considered as an increase in genes at the microbial level. A wide difference in cell mass index shall be considered as producing wide differences in genetic characteristics.
3) Wide variation in cell mass index shall have an impact on cell structure and cell size, and shall contribute to wide variation in pH value and eye iris, and to the evolution of new blood groups.

IV. HYPOTHETICAL NARRATION
1. Philosophy of 3G Human Origin?...
In the expanding Universe, the 3G human shall be considered the "first organism", spontaneously evolved from star dust particles. The 3G human shall be considered as evolved from the most fundamental neutrinos, i.e. the Photon Neutrino, Electron Neutrino and Proton Neutrino. The three fundamental neutrinos shall be considered responsible for the existence of all matter in the universe. The 3G human shall be defined within the following scope:
1) The 3G human had only 3 chromosomes and 3 genes, derived from the three fundamental Neutrinos.
2) The 3G human had 3G Blood of "AB" type at origin.
3) The 3G human had the "least cell mass".
4) The 3G human had the least "pH value of blood".
5) The 3G human had a "Dark Blue Iris".
6) The 3G human was capable of "flying".
(a) Right dot (Proton) - considered the DNA of the cell
(b) Left dot (Electron) - considered the HORMONE of the cell
(c) Centre dot (Photon) - considered the RNA of the cell

2. Neutrinos are God Particles?...
It is proposed that the fundamental Neutrinos, considered as evolved from the star dust of the universe, shall be considered God particles. The thousands of subsequent Neutrinos evolved over three generations of geological periods shall be considered "species" of the God particles.

[Figure panels (i)-(iv) not reproduced.]

3. Neutrinos are genetic Tablets?...
It is proposed that the Tablets shall be considered prehistoric scientific logic, comprising three tablets meant for translating the genetic information of the 3G Human Ancestor.
It is proposed that the 3G human, who lived on the MARS PLANET, shall be considered a super scientist and expert in astrophysics, astronomy and genetics, who might have formulated the scientific logic in "Three Tablets".

[Figure panels (i)-(iii) not reproduced.]

4. Do Neutrinos have a genetic structure?...
It is proposed that the three-in-one fundamental neutrinos of the universe shall be considered as having a defined structure and sustained oscillation. It is speculated that the philosophy of the cell-region DNA, HORMONE and RNA molecular structure might have been derived from the philosophy of the fundamental Neutrinos.

5. Do Neutrinos have genetic characteristics?...
It is proposed that the bottom two dots shall be considered directly opposite in characteristics:
a) Right dot (Proton) - responsible for functional characteristics
b) Left dot (Electron) - responsible for structural properties
c) Centre dot (Photon) - law for regulation of the gene process

6. Do Neutrinos have gender characteristics?...
It is proposed that the three dots shall be considered as indicating three types of gender:
a) Right dot - male gender
b) Left dot - female gender
c) Centre dot - dual gender
It is further proposed that the 3G HUMAN had only three chromosomes. The bottom two dots shall be considered "Autosomes" and the centre dot shall be considered Dual Gender (half male, half female).

7. Philosophy of the colour of the Eye Iris?...
It is proposed that the variation in the colour of the "human eye iris" shall be considered due to the progressive increase of genes in the cell and the considerable increase in cell mass. The increase in gene growth shall be considered an increase in DNA mass, RNA mass and hormone mass. The billions of human iris colours shall be classified under the three most fundamental colour bandwidths during the course of the expanding universe:
1) "Dark Blue"
2) "Dark Green"
3) "Dark Red"
These correspond respectively to lower cell mass (Prehistoric Human), moderate cell mass (Ancient Human) and higher cell mass (Modern-time Human). This research proposes that, at the global level, humans in different regions having these "three types of eye iris bandwidth" shall be considered "three generations" of humans of different periods during the course of the expanding universe. The "Dark Blue Iris" shall also be called the prehistoric "J"-Iris.

[Figure panels (i)-(ii) not reproduced.]

8. Philosophy of Human Blood Variation?...
It is proposed that human blood shall be considered a natural product of the fundamental neutrinos. It is speculated that AB-type blood shall be considered the 3G Blood at origin. All other blood types (A, B, O) shall be considered species blood of the AB blood, which might have evolved over three generations of geological periods due to the subsequent growth of the human cell mass index and additional genes.

9. Philosophy of pH Variation?...
It is proposed that in prehistoric times the pH value of blood might have been well below 7.0, i.e. acidic. In the expanding Universe, the pH value of blood might have become variant over the three generations due to the growth of the cell mass index and additional genes.

10. Philosophy of Human Blood pH Variation?...
It is proposed that the AB blood type shall be considered the "3G blood", having the least cell mass and least pH value at origin. The AB blood shall also be considered "BLACK BLOOD" or "Black Body Blood". The following different types of human blood shall be considered as evolved from the AB type during the course of the expanding Universe in different environments:
1) AB type - least pH value and least cell mass (prehistoric time)
2) A, B types - moderate pH value and moderate cell mass (ancient time)
3) O type - higher pH value and higher cell mass (modern time)
It is proposed that AB-type blood shall be considered 1st generation blood; AB, A, B shall be considered 2nd generation blood; and AB, A, B, O shall be considered 3rd generation blood.
11.
Philosophy of GOD?...
The philosophy of "God" shall be distinguished from "human" within the following scope:
1) God shall be considered as having 3 genes with zero cell mass.
2) God shall be considered as having blood of zero pH value.
3) God shall be considered as having an absolutely "dark eye iris".

12. How does the 3G Human look?...
This research proposes that the 3G human shall be considered as having different genetic characteristics and as capable of flying, due to the impact of a low-level telomere sequence. Further, the 3G human at origin shall be considered as having had highly acidic blood and high immunity, and as having lived on the MARS Planet in prehistoric times. From a Biblical understanding, the 3G human shall be considered "ANGEL", "ADAM".

13. What does Universe mean?...
It is proposed that the Universe shall be considered as influenced by the fundamental neutrinos and electromagnetic radiation, under which all matter, whether organic or inorganic, shall be considered as existing in an equilibrium condition. Further, all matter shall be considered as undergoing changes in characteristics over three generations due to the expanding Universe. As such, bacteria, amoebae, plants, vertebrates, apes, humans, etc. shall be considered under the distinguished head of three generations of variant genetic structure. It is proposed that apes shall be considered second-generation matter, after a long evolutionary gap from the 3G HUMAN ANCESTOR. It is further proposed that not only apes but even bacteria, with 500 genes, might have evolved after the evolution of the 3G human, when the neutrinos gained additional mass in the expanding Universe, which might have contributed the 500 genes to the bacteria.

V. CASE STUDY
1) Case study of the impact of the growth of cell mass:
In any given steady-state culture of Escherichia coli cells, initiation of DNA replication at the chromosomal origin occurs at a specific time in the cell cycle, at a "specific cell mass". The distribution of cell mass was determined for each DNA content (i.e. each fluorescence value) in each histogram. The initiation mass (mi) is determined as the average mass at which one-chromosome cells start to increase their DNA content. A different aspect of the DNA replication/cell division relationship is displayed by newborn dnaA mutant cells: they have a much higher mass than dividing wild-type cells and yet they are not ready to divide. (Contributed by Nancy Kleckner, July 25, 1996; "Coordinating DNA replication initiation with cell growth: Differential roles for DnaA and SeqA proteins", Harvard University, Cambridge, MA 02138.)

2) Case study on the modern human gene: At a relatively recent time, as evolution goes, modern humans acquired an extra 223 genes not through gradual evolution, not vertically on the tree of life, but horizontally, as a sideways insertion of genetic material from bacteria. (Genome discovery by Zecharia Sitchin; Spanish version from the Zecharia Sitchin web site.)

3) Case study on the Y chromosome: The genetic study indicates a single female ancestor who lived about 140,000 years ago, but that genes on the Y chromosome trace back to a male who lived about 60,000 to 90,000 years ago. Further, the bulk of the genes in the nucleus trace back to different times, about two million years ago. (Google search.)

VI. CONCLUSION
It is proposed that the so-called "ANGEL" population shall be considered as having only the single AB blood type, with a little lower cell mass compared to ADAM and EVE. ADAM and EVE shall be considered as having higher cell mass and as "species" of the "Angel population". The Angel shall be defined within the following scope:
1) The Angel shall be considered a species of the "3G Human".
2) The Angel shall be considered as having a "Dark Blue Iris".
3) The Angel shall be considered as having acidic blood, with a pH value much below 7.0.
American Journal of Engineering Research (AJER), e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-83-90, www.ajer.org. Research Paper, Open Access.

A Comparative Evaluation of Automatic Sampling Methods

Sajjad Hassany Pazoky(1), Mahmoodreza Delavar(2), Alireza Chehreghan(3)
(1) PhD Student of GIS, Department of Geomatics, College of Engineering, University of Tehran, Tehran, Iran
(2) Assistant Professor, Department of Geomatics, College of Engineering, University of Tehran, Tehran, Iran
(3) PhD Student of GIS, Department of Geomatics, College of Engineering, University of Tehran, Tehran, Iran

Abstract: - Preparation of a Digital Terrain Model (DTM) is one of the necessities of the geomatics sciences, especially Geographic Information Systems (GIS). The first step in this process is sampling, which is one of the aspects most affecting the accuracy of the final model. Various types of sampling methods have been introduced and implemented over the years. These methods cover a wide range and can be categorized by different aspects. Nowadays, due to the large size of the data produced and the impact of IT on all technical fields, automatic sampling methods have gained special significance and wider utility in comparison with methods requiring human involvement. This study introduces automatic sampling methods and evaluates them comparatively. The advantage of the comparison employed here is the use of an equal number of points for each method, to make sure the final accuracy depends on the sampling method and not on the number of points. To add to the precision of the comparison, the whole process was conducted on samples including flat terrains, terrains with slight slopes, and mountainous areas. All the details, such as sampling, triangulation, locating control points within the triangulation grid, and interpolation, are completely implemented in this research.
In the next step, the errors are analyzed by various statistical criteria, and the methods are then evaluated using these criteria. Finally, the methods are analyzed and their strengths and weaknesses discussed.
Keywords: - Digital Terrain Model, Random Sampling, Systematic Sampling, Contouring, Profiling, Incremental Sampling, Statistical Criteria.

I. INTRODUCTION
A Digital Terrain Model (DTM) is a continuous, mathematical and digital representation of a real or virtual object and its surroundings [1], which is commonly used to produce topographic maps [2]. The simple structure and availability of DTMs make them a favorable tool for land-use planning, feature extraction, hydrological modeling, civil engineering, forest management, bird population modeling, the production of maps of polar ice layers, flood control, route design, large-scale map making, and telecommunications [3-7]. Such extensive utility attaches much significance to the DTM. The quality of a DTM depends on the following aspects [5, 8]: the specification of the input data, the interpolation procedure, and the specification of the terrain. Fisher and Tate [9] argue that the first two items are of an error nature, while the third should be counted as an item increasing uncertainty. The first and most important action in modeling a terrain is specifying a number of points, which determines the quality and quantity of the input data [5, 10]. This action consists of two main steps: sampling and measuring. Sampling concerns the selection of points, and measuring relates to the coordinates of the points [11]. Three important criteria for determining the points are density, accuracy and distribution. The accuracy criterion concerns the measuring step, and the other two, density and distribution, relate to sampling. Density and distribution depend completely on the specification of the terrain. For instance, three points would suffice to sample a completely flat terrain such as a plain.
Sampling a mountainous area, by contrast, requires very dense control points with an appropriate distribution; otherwise, the obtained DTM would not match the real terrain. Accuracy is the most important of the aforementioned criteria, being the most effective on the model. The accuracy criterion must of course be considered alongside cost and efficiency, so as to maintain an economic and operational method [12]. There are numerous methods of sampling, and ample studies have been conducted on selecting the best method under different conditions. Due to recent advances in data collection technologies such as aerial photography, the digitization of paper topographic maps, radargrammetry, Synthetic Aperture Radar (SAR) interferometry, LIDAR, and GPS (and similar systems), data gathering has been facilitated and sped up to the point that manual sampling methods cannot keep up with the enormously fast data production. Thus, automatic methods are of great significance. Considering the wide range of automatic methods, it is important to know which method is appropriate under which conditions; that is, which method can yield the most accurate results while spending the least possible time and performing the fewest possible calculations. We have studied and evaluated various methods in this paper. An important aspect of this study is the sampling of an equal number of points in all methods. This ensures that the resultant precision of the models is due only to the method and not to the quantity of points. As DTMs have been widely used in the geosciences, much research has been conducted on their various aspects. One important study was conducted by Li [13].
He had three objectives in mind: 1) assessment of DTM accuracy by means of contour lines, with and without feature points; 2) assessment of DTM accuracy by means of a regular grid, with and without feature points; 3) assessment of DTM accuracy by means of a regular grid and contour lines, with and without feature points. In the first two modes, he utilized the standard deviation for comparison and reported the optimization percentage for each mode with regard to the type of terrain unevenness. In the third mode, the author proved a relation between the spacing of contour lines and of the regular grid for achieving a similar accuracy. In another study, the author examined the spacing between contour lines when using this method for forming a DTM. After studying many terrains of different unevenness, he concludes that the spacing of contour lines significantly affects the final accuracy of the model, and that feature points have a positive effect on the accuracy of the model; that is, the larger the contour-line spacing, the greater the effect of the feature points on the accuracy of the model. Also, the less uneven the terrain, the greater the effect of the contour-line spacing on the decrease of model accuracy. Zhou and Liu [7] studied how data accuracy, grid size, grid direction, and terrain complexity affect the error distribution pattern in the calculation of morphologic properties such as slope magnitude and slope direction. In a more recent study by the same researchers [14], the role of terrain unevenness in DTM-derived variables such as slope and aspect was investigated. They concluded that the accuracy of slope and aspect has an inverse relationship with the slope of the terrain, and that slope and aspect depend strongly and inversely on unevenness.
Fisher and Tate [9] explained grid data errors by the difference between the obtained value and the real value (incorrect height values, height values with wrong locations, locations without data, etc.). Bonk [15] evaluated the effect of the arrangement of input points in random and grid sampling methods; he also studied the effect of the number of points on the size and spatial distribution of DTM errors. Aguilar and Aguilar [16], having studied the random method of DTM sampling, concluded that the accuracy of this method depends greatly on the input data density. Höhle and Höhle [17] studied the number of points required for appropriate quality control of a DTM; their focus was on cases where the histogram of errors is symmetric and where errors exist in the data. Zandbergen [18] studied DTMs from a hydrological perspective, i.e. the water flow on a DTM should resemble the water flow on the real earth. He concluded that small and shallow pits are more likely to occur than deep and large ones. It has also been pointed out in that paper that the selection of an extreme threshold for the identification of unreal pits leads to a considerable number of errors, and field operations are needed for their identification.

II. MATERIALS AND METHODS
1.1. Sampling theory
From a theoretical perspective, 3-dimensional surfaces are comprised of an infinite number of dimensionless points. If complete information on all points of a surface were needed, then all points would have to be measured, which is impossible; that is, modeling a surface such that the model matches reality 100% is impossible. In practice, when the height of a point is measured, it represents a neighboring area. Thus, a surface can be modeled by a finite number of points. The key point is that, since it is impossible to provide a model matching a surface 100%, a sufficient number of points should be measured to obtain appropriate accuracy.
The main aspect of sampling is using the best points for sampling.
1.2. Sampling from different perspectives
Points on a surface can be studied from different aspects, such as statistics-based, geometry-based, and feature-based, which are introduced briefly in this section [11].
1.2.1. Statistics-based
From a statistical viewpoint, a surface is comprised of an infinite number of points constituting the statistical population. To study a statistical population, a sample space must be evaluated. To choose the members of a sample space, the samples must be selected through either random or systematic methods. In random selection, points are random variables and are sampled accidentally; thus, under this mechanism, the probability of selection may differ between points. In systematic sampling, however, the points are chosen specifically, such that the probability of their selection is 100%. Systematic sampling is often performed through grid selection of points [11].
1.2.2. Geometry-based
From a geometric viewpoint, the surface of a terrain can be modeled by different geometric patterns, either regular or irregular. Regular patterns can be divided into two groups: one-dimensional and two-dimensional. Profiling and contouring are types of geometric sampling that are regular in only one dimension. In contouring, the height dimension is fixed, and in profiling, the dimension parallel to x, y, or a combination of them is fixed. In fact, the output of contouring is the cross-section in the x-y plane and the output of profiling is the cross-section in the x-z or y-z plane [11].
1.2.3. Feature-based
From a feature-based perspective, the surface of the terrain is comprised of a finite number of points whose information content may vary based on their position. Thus, surface points consist of feature points and random points. The feature points are relative extremes of the surface of the terrain, such as hills and valleys.
These points not only carry height, but also provide valuable topographic information about their surroundings. The lines connecting these points are called feature lines, including ridges, thalwegs, etc. [11].
1.3. Different types of automatic sampling
This section introduces different types of automatic sampling. These methods will be implemented and compared with one another in the following sections.
1.3.1. Random sampling
In this method, some points are selected randomly and their heights are measured. As mentioned in 1.2.1, the likelihood of being selected is equal for all points. A demonstration of random sampling is illustrated in Figure 1-A.
1.3.2. Systematic sampling (grid-based)
Points in this approach are sampled with a fixed interval in both directions. An example of systematic sampling is illustrated in Figure 1-B.
1.3.3. Sampling with one dimension fixed
As discussed in 1.2.2, one of the x, y, and z dimensions is considered fixed while moving along the other two. In photogrammetry, fixing the z value and moving on the map produces points having that fixed height. These points form lines called contour lines. This approach is shown in Figure 1-C. By fixing x, y or a combination of them, profiles are produced; an example is shown in Figure 1-D.
1.3.4. Sampling with two dimensions fixed
In this method, the two dimensions x and y are kept fixed, which is called a regular grid. The major disadvantage of this method is that it requires a large number of samples to ensure that all important points, such as slopes and topographic changes, are sampled. To resolve this issue, a procedure is added to the method; the resultant method is called incremental sampling. It works by grid sampling in which the grid spacing decreases incrementally. First, a very coarse grid is laid out. Then, using a certain criterion, a calculation is run for each cell individually to determine whether or not it is necessary to decrease the grid spacing within that cell.
If it does, a new grid is formed within the cell according to a set procedure (for instance, four or nine points are selected in each cell), and the process continues recursively. The criterion governing refinement within a cell can be the second derivative of the point heights, the parabolic distance, or any other measure [19]. This method is depicted in Figure 1-E. Although incremental sampling mitigates data overload, it still suffers from the following weaknesses [11]:
- data overload around sudden topographic changes;
- some features may be missed in the initial, coarse gridding steps;
- the path may become too long, decreasing the efficiency of the algorithm.
Another method, called ROAM, was introduced by Mark et al. [20]. Its principles are similar to those of the incremental method; the only difference is that in ROAM the grid cells are divided into triangles instead of squares. As in the incremental method, square cells are formed in the first step. Then, whenever the refinement condition holds for a square, a diagonal of the square is drawn, forming two right isosceles triangles. From that step onwards, whenever the condition holds for a triangle, a line is drawn from its right-angle vertex to the opposite side, forming two further right isosceles triangles [21]. This method cannot be considered an irregular method such as triangulation, because all of its triangles are right isosceles triangles.

Figure 1 - Types of sampling methods: A) random; B) systematic (grid); C) contour line; D) profiling; E) incremental.

III. RESULTS AND DISCUSSION
Figure 2 shows the flowchart for the comparison of the aforementioned automatic sampling methods.
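Before turning to the results, the incremental refinement loop described in section 1.3.4, a coarse grid first, then per-cell subdivision under a criterion, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the roughness criterion used here (deviation of the cell-centre height from the mean of the four corner heights) merely stands in for the second-derivative or parabolic-distance criteria mentioned in the text, and the cell sizes and threshold are invented.

```python
def incremental_sample(surface, x0, y0, size, min_size, threshold, out):
    """Progressively sample a square cell of the terrain.

    The cell centre is always sampled; if the surface inside the cell
    looks rough, the cell is split into four quarters and each quarter
    is processed the same way.
    """
    corners = [surface(x0, y0), surface(x0 + size, y0),
               surface(x0, y0 + size), surface(x0 + size, y0 + size)]
    cx, cy = x0 + size / 2.0, y0 + size / 2.0
    centre = surface(cx, cy)
    out.append((cx, cy, centre))
    # Stand-in roughness test: centre height vs. mean of corner heights.
    roughness = abs(centre - sum(corners) / 4.0)
    if roughness > threshold and size / 2.0 >= min_size:
        half = size / 2.0
        for dx in (0.0, half):
            for dy in (0.0, half):
                incremental_sample(surface, x0 + dx, y0 + dy,
                                   half, min_size, threshold, out)
    return out

# A flat surface needs only one sample; a curved one triggers refinement.
flat_pts = incremental_sample(lambda x, y: 5.0, 0, 0, 8, 2, 1.0, [])
curved_pts = incremental_sample(lambda x, y: x * x + y * y, 0, 0, 8, 2, 1.0, [])
```

This captures the trade-off noted in the text: sampling density grows only where the terrain warrants it, but features smaller than the initial coarse cells can be missed.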
The whole process is conducted for three terrain types: almost flat, gently sloping, and mountainous. Notably, the number of captured points is kept equal across methods, so that the results reflect only the efficiency of each method and not differences in the number of input points. In other words, the question is how accurate the information yielded by each sampling method is for an equal amount of input. In the next step, a DTM is created from each sample by means of a Triangular Irregular Network (TIN), and its accuracy and precision are assessed using control points; the results are then interpreted. For reconstruction of the terrain, Delaunay triangulation is used, which has been reported to give the best performance among triangulation methods [22], although other approaches, such as wavelet TINs and integrated TIN-grid merging and triangulation, also exist [23, 24]. Figure 3 shows the triangulations obtained for a mountainous region. In this step, 500 check points are selected at random. Then, using a random-walk algorithm, the triangle containing each check point is found, and the height is estimated for each sampling method using inverse distance weighting (IDW) [25]. According to research by Chaplot et al., inverse distance weighting performs considerably better than methods such as Ordinary Kriging (OK), Universal Kriging (UK), Multiquadratic Radial Basis Functions (MRBF), and Regularized Spline with Tension (RST) [10]. Moreover, complicating the procedure with those methods, or with others such as combined linear and nonlinear interpolation [26], would not serve the objective of this study. Finally, the differences between the estimated and the true heights are analyzed using the L1 norm, the L2 norm, and the standard deviation [4, 27].
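As a rough illustration of the height-estimation step above, the following sketch computes an IDW estimate. Note one simplification: in the study the weighting runs over the vertices of the Delaunay triangle containing the check point, whereas this version weights all supplied samples; the sample coordinates below are invented.

```python
def idw(samples, x, y, power=2):
    """Inverse distance weighting: estimate the height at (x, y) as a
    weighted mean of sampled heights, each weighted by 1 / d**power."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sz  # the query point coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * sz
        den += w
    return num / den

# Hypothetical sample points (x, y, height), e.g. a triangle's vertices.
samples = [(0.0, 0.0, 10.0), (4.0, 0.0, 20.0), (0.0, 4.0, 30.0)]
estimate = idw(samples, 2.0, 0.0)
```

Because the weights fall off with distance, a check point midway between two samples is pulled mainly toward those two; this is also why the flat, elongated triangles discussed later degrade the estimate.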
As the errors of a DTM depend on the type of terrain, all calculations in this study are performed for three terrain types, almost flat, gently sloping, and mountainous [27, 28]; Table 1 shows the results. It is worth noting that these values have no absolute validity and are used only to compare the methods. As blunders may exist among the points, the errors are purified using the 2.5-sigma test, the statistical analysis is rerun, and the results are presented in Table 2.

Figure 2: Flowchart of the comparison of the automatic sampling methods considered in this study.

Figure 3: Triangulations of the captured points for the different methods: A) random; B) systematic; C) contour line; D) profiling; E) incremental.

To show the impact of blunders on each method, the percentage improvement in the statistical values after their removal is presented in Table 3.
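The blunder-removal loop behind Tables 2-4 can be sketched as follows. This is an illustration under stated assumptions: the residual values are invented, and the statistics computed here (mean absolute error, mean signed error, standard deviation) are stand-ins for the L1 norm, L2 norm, and standard deviation columns of the tables.

```python
import statistics

def error_stats(residuals):
    """Mean absolute error, mean signed error and standard deviation
    of the height residuals."""
    mae = sum(abs(e) for e in residuals) / len(residuals)
    return mae, statistics.mean(residuals), statistics.stdev(residuals)

def remove_blunders(residuals, k=2.5):
    """One pass of the k-sigma test: drop residuals lying more than
    k standard deviations from the mean."""
    m, s = statistics.mean(residuals), statistics.stdev(residuals)
    kept = [e for e in residuals if abs(e - m) <= k * s]
    return kept, len(residuals) - len(kept)

def purify(residuals, k=2.5):
    """Repeat the k-sigma test until a pass removes nothing (as done
    for Table 4); return clean residuals, passes, blunders removed."""
    passes = removed = 0
    while True:
        residuals, n = remove_blunders(residuals, k)
        passes += 1
        if n == 0:
            return residuals, passes, removed
        removed += n

# Invented residuals: nine small errors and one gross blunder.
data = [0.1, -0.2, 0.15, 0.05, 0.0, 0.1, -0.1, 0.2, -0.15, 500.0]
clean, passes, removed = purify(data)
```

One caveat visible in the sketch: a single large blunder inflates the standard deviation, which is why repeated passes (Table 4) can remove points that a single pass leaves in.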
Table 1: Statistical evaluation of height errors of the different sampling methods for the three terrain types

                          Random   Systematic   Contouring   Profiling   Incremental
L1 norm    Flat             4.45       3.57         9.60        6.28        11.38
           Gentle slope    10.21      10.14        12.33       11.24        13.67
           Mountainous     35.76      26.87        28.61       37.46        25.65
L2 norm    Flat             0.19       0.07        -6.02       -1.06         0.72
           Gentle slope    -0.24       0.72         0.78       -0.72         1.30
           Mountainous    -33.19     -24.26       -11.00      -30.12       -19.40
Std. dev.  Flat            10.17       9.63        13.98       14.27        20.39
           Gentle slope    15.70      14.84        17.18       16.56        18.51
           Mountainous     47.16      41.71        40.91       49.34        38.15

Table 2: Statistical evaluation of height errors of the different methods after removal of blunders

                          Random   Systematic   Contouring   Profiling   Incremental
Blunders   Flat              16         11            8          20            9
omitted    Gentle slope      11         15           16          15           11
           Mountainous       13         19           11          18           22
L1 norm    Flat             3.14       2.59         8.87        4.18         9.64
           Gentle slope     9.03       8.98        10.90        9.92        12.63
           Mountainous     31.70      21.43        25.99       32.52        20.25
L2 norm    Flat            -0.20      -0.11        -6.39        0.07        -0.70
           Gentle slope     0.10       0.40         1.35        0.02         0.86
           Mountainous    -29.10     -18.70        -7.99      -27.80       -13.70
Std. dev.  Flat             5.82       4.96        12.00        7.72        13.87
           Gentle slope    12.62      12.41        14.49       13.79        16.30
           Mountainous     40.18      31.19        35.87       39.85        27.69

Table 3: Percentage improvement of the statistical values after removal of blunders

                          Random   Systematic   Contouring   Profiling   Incremental
L1 norm    Flat            29.61      27.49         7.47       33.44        15.21
           Gentle slope    11.56      11.40        11.56       11.76         7.58
           Mountainous     11.35      20.21         9.15       13.19        21.03
L2 norm    Flat            -7.40     -77.11        -6.24      -92.74         3.32
           Gentle slope    56.46      44.81       -73.17       96.80        34.11
           Mountainous     12.44      22.81        27.40        7.69        29.28
Std. dev.  Flat            42.74      48.46        14.19       45.89        32.01
           Gentle slope    19.60      16.41        17.09       16.72        11.93
           Mountainous     14.18      25.27        12.31       19.23        27.41

Due to the considerable difference in the mean and standard deviation for some of the methods, the removal of errors by the 2.5-sigma method has been continued up to the point
that no data is removed. The results are presented in Table 4.

Table 4: Statistical evaluation of height errors of the different methods after repeated removal of blunders until no further data are removed

                          Random   Systematic   Contouring   Profiling   Incremental
Iterations Flat              11         11            3          12           14
           Gentle slope      11          7            5           7            3
           Mountainous        5         15            4           7            7
Blunders   Flat             137        134           14         175           65
omitted    Gentle slope      67         58           38          50           20
           Mountainous       44        109           20          56           67
L1 norm    Flat             0.93       0.71         8.65        0.79         6.93
           Gentle slope     6.51       6.99         9.68        8.08        12.08
           Mountainous     25.42       9.62        24.78       25.45        14.31
L2 norm    Flat            -0.10       0.04        -6.66       -0.01        -1.42
           Gentle slope    -0.21      -0.22         1.55       -0.03         0.59
           Mountainous    -22.87      -7.76        -8.11      -21.16        -7.40
Std. dev.  Flat             1.30       1.02        11.54        1.14         9.47
           Gentle slope     8.40       9.11        12.44       10.74        15.51
           Mountainous     32.17      13.86        33.94       30.77        18.55

IV. CONCLUSION
Considering the tables presented in the previous section, the following points can be concluded about the different sampling methods:
- The systematic method can be considered the most appropriate, as it yielded the best result for most of the selected statistical variables. This can be explained by its homogeneous distribution of points over the whole area, which reduced the number of errors throughout.
- Contouring has the lowest accuracy, as the selected points lie on a few contour lines, leading to a large number of flat triangles in the triangulation. Flat triangles degrade IDW to little better than nearest-neighbour interpolation. Moreover, as shown in Figure 1-C, the distribution of points is not homogeneous, and in some places the points are so far apart that the resulting triangles are too large for the interpolation to be of any use (Figure 3-C).
Furthermore, because of the large standard deviation, the 2.5-sigma test cannot remove the erroneous points.
- As illustrated in Table 3, the greatest improvement after the 2.5-sigma test was observed, for all variables, in the incremental method. This indicates the accuracy of the method despite the presence of some blunders. As pointed out in section 1.3.4, one weakness of this method is that many features are lost while the grid spacing is still large. This is visible in Figure 3-E, where the triangles are very large in many places and many features within the grid have been missed.
- The profiling method chooses points along straight lines, so the resulting triangulations resemble spider webs, as illustrated in Figure 3-D. That is, the triangles are stretched in one direction and lose their near-equilateral form; high accuracy therefore cannot be expected.
- The random method, like the systematic method, has a good distribution, i.e. the points are spread evenly over the area. It is ranked second, after the systematic method.
- Since many features are missed by methods with poor point distribution, adding feature points should have a significant effect on their accuracy [12]. Given the large accuracy gain achieved in the incremental method through error removal, that method is expected to benefit the most from the addition of feature points.
- The contouring, profiling, and incremental methods have evolved over time, and many researchers have made great efforts to improve them. It is therefore possible that, after such optimizations, better results than those obtained in this study could be achieved.
In this research we have attempted to compare the automatic sampling methods: the random, systematic, contouring, profiling, and incremental methods.
The comparison was based on equal conditions: accuracy was evaluated with an equal number of points captured by each method. In this way, the most efficient method, the one that yields a more accurate DTM at the same cost, can be identified. The variables used for comparison were the L1 norm, the L2 norm, and the standard deviation. To limit the impact of blunders on these variables, the 2.5-sigma test was used to detect and remove them. According to the results, the systematic and contouring methods have, respectively, the highest and lowest accuracy for all studied variables; the random method ranks second, after the systematic method. The low accuracy of contouring stems from the large number of flat triangles, which effectively nullify the interpolation. The weakness of the incremental method, as predicted in the study and pointed out by other researchers, is the loss of feature points in the initial refinement steps. This work can be considered a step forward in the field in that it addresses the issue from a data-and-input perspective; in practice, its results can help in selecting the optimal sampling model under particular conditions.

REFERENCES
[1]. Aguilar, F.J., et al., Effects of Terrain Morphology, Sampling Density, and Interpolation Methods on Grid DEM Accuracy. Photogrammetric Engineering and Remote Sensing, 2005. 71: p. 805-816.
[2]. Podobnikar, T., Methods for Visual Quality Assessment of a Digital Terrain Model. Surveys and Perspectives Integrating Environment and Society, 2009. 2(2): p. 15-24.
[3]. Aguilar, F.J., et al., Modelling vertical error in LiDAR-derived digital elevation models. ISPRS Journal of Photogrammetry and Remote Sensing, 2010. 65(1): p. 103-110.
[4]. Chen, C. and T. Yu, A Method of DEM Construction and Related Error Analysis.
Computers and Geosciences, 2010. 36(6): p. 717-725.
[5]. Erdogan, S., A Comparison of Interpolation Methods for Producing Digital Elevation Models at the Field Scale. Earth Surface Processes and Landforms, 2009. 34: p. 366-376.
[6]. Lim, K., et al., LiDAR Remote Sensing of Forest Structure. Progress in Physical Geography, 2003. 27(1): p. 88-106.
[7]. Zhou, Q. and X. Liu, Analysis of Errors of Derived Slope and Aspect Related to DEM Data Properties. Computers and Geosciences, 2004. 30(4): p. 369-378.
[8]. Gong, J., et al., Effect of Various Factors on the Accuracy of DEMs: An Intensive Experimental Investigation. Photogrammetric Engineering and Remote Sensing, 2000. 66(9): p. 1113-1117.
[9]. Fisher, P.F. and N.J. Tate, Causes and Consequences of Error in Digital Elevation Models. Progress in Physical Geography, 2006. 30(4): p. 467-489.
[10]. Chaplot, V., et al., Accuracy of Interpolation Techniques for the Derivation of Digital Elevation Models in Relation to Landform Types and Data Density. Geomorphology, 2006. 77(1): p. 26-41.
[11]. Li, Z., C. Zhu, and C. Gold, Digital Terrain Modeling: Principles and Methodology. 2010: Taylor & Francis.
[12]. Li, Z., A comparative study of the accuracy of digital terrain models (DTMs) based on various data models. ISPRS Journal of Photogrammetry and Remote Sensing, 1994. 49(1): p. 2-11.
[13]. Li, Z., Variation of the Accuracy of Digital Terrain Models with Sampling Interval. The Photogrammetric Record, 1992. 14(79): p. 113-128.
[14]. Zhou, Q., X. Liu, and Y. Sun, Terrain Complexity and Uncertainties in Grid-Based Digital Terrain Analysis. International Journal of Geographical Information Science, 2006. 20(10): p. 1137-1147.
[15]. Bonk, R., Digital Terrain Modelling: Development and Applications in a Policy Support Environment, R.J. Peckham and G. Jordan, Editors. 2007, Springer Berlin Heidelberg: Berlin.
[16]. Aguilar, F.J., M.A. Aguilar, and F. Agüera, Accuracy Assessment of Digital Elevation Models Using a Non-Parametric Approach.
International Journal of Geographical Information Science, 2007. 21: p. 667-686.
[17]. Höhle, J. and M. Höhle, Accuracy Assessment of Digital Elevation Models by Means of Robust Statistical Methods. ISPRS Journal of Photogrammetry and Remote Sensing, 2009. 64(4): p. 398-406.
[18]. Zandbergen, P.A., Accuracy Considerations in the Analysis of Depressions in Medium Resolution LiDAR DEMs. GIScience and Remote Sensing, 2010. 47(2): p. 187-207.
[19]. Makarovic, B., Progressive sampling methods for digital elevation models. ITC Journal, 1973. 3: p. 397-416.
[20]. Mark, D., et al., ROAMing terrain: real-time optimally adapting meshes, in Proceedings of the 8th Conference on Visualization '97. 1997, IEEE Computer Society Press: Phoenix, Arizona, USA. p. 81-88.
[21]. Li, Z., Algorithmic Foundation of Multi-Scale Spatial Representation. 2007: Taylor & Francis.
[22]. El-Sheimy, N., C. Valeo, and A. Habib, Digital Terrain Modeling: Acquisition, Manipulation, and Applications. 2005, Boston; London: Artech House.
[23]. Wu, J. and K. Amaratunga, Wavelet Triangulated Irregular Networks. International Journal of Geographical Information Science, 2003. 17(3): p. 273-289.
[24]. Yang, B.S., W.Z. Shi, and Q. Li, An Integrated TIN and GRID Method for Constructing Multi-Resolution Digital Terrain Models. International Journal of Geographical Information Science, 2005. 19(10): p. 1019-1038.
[25]. Kyriakidis, P.C. and M.F. Goodchild, On the Prediction Error Variance of Three Common Spatial Interpolation Schemes. International Journal of Geographical Information Science, 2006. 20(8): p. 823-855.
[26]. Shi, W.Z. and Y. Tian, A Hybrid Interpolation Method for the Refinement of a Regular Grid Digital Elevation Model. International Journal of Geographical Information Science, 2006. 20(1): p. 53-67.
[27]. Carlisle, B.H., Modelling the Spatial Distribution of DEM Error. Transactions in GIS, 2005. 9(4): p. 521-540.
[28]. Kyriakidis, P.C., A.M. Shortridge, and M.F.
Goodchild, Geostatistics for Conflation and Accuracy Assessment of Digital Elevation Models. International Journal of Geographical Information Science, 1999. 13: p. 677-707.
American Journal of Engineering Research (AJER), 2013
e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-02, Issue-12, pp-91-97
www.ajer.org
Research Paper, Open Access

Suggestions to Implement Human Relations and Its Determinants in Public Sectors
Dr. Roopali Bajaj
RKDF University, Gandhi Nagar, Bhopal

Abstract: - The objectives of this study were to explore the status of Employee Relationship Management (ERM) as it exists in various public sector undertakings (PSUs), to understand the relationship of ERM with its determinants in the organizations studied, and to identify the limitations of the existing systems of Employee Relationship Management in these organizations. This paper presents the main findings of the primary and secondary research carried out. There has been a marked shift in employee relations in the Indian context with the transformation from a closed, regulated economy to an open, globalized one, and a corresponding shift in the principles and philosophy of managing people. With changes in many areas, including markets, technology, compensation, and workplace governance, there is a need to improve employee relations in organizations. The study found that the status of ERM in the state PSUs is not very good; HR practices are not being implemented as they should be.

Key Words: - Human Relations Management, Employee Relations, Public Sector

I. INTRODUCTION
The focus of the study was to explore the status of ERM as it exists in various PSUs, and for this it was important to understand the significance of HRM in organizations. HRM is important for increasing an organization's productivity and efficiency. For an organization to perform well, its complex and dynamic human resources must share good, healthy relationships, since every individual is different and has a different working style.
A few years back, employees relied on trade unions to maintain harmonious relationships in organizations, but collectivism gradually stopped working: individuals now want the organization to recognize their potential individually. Moreover, a workforce comprising all types of employees (men, women, retired people, students, people with disabilities, etc.) needs differentiated attention. Industrial relations therefore gradually declined, and a new concept called employee relationship emerged, covering the relationships between management and workers, between co-workers, between supervisor and subordinate, and among members of management. It is thus essential for any organization to maintain healthy relationships among its employees, supervisors, employers, and peers, as this motivates employees to perform at their best and enhances the productivity of the organization. In India, where a major part of the economy is accounted for by Public Sector Undertakings, it is important to know the status of Employee Relationship Management, since PSUs employ a large workforce and therefore face many human-resource issues. Public sector enterprises have a major role to perform in an economy. Madhya Pradesh (M.P.) has 23 PSUs run by the state government. The M.P. PSUs were established in the same period as the central PSUs, with the objective of providing basic amenities and other facilities for the public's welfare and betterment. It has been revealed that the PSUs in M.P. contribute a major part of investment in the economy but are still neglected with respect to issues such as finance and management. HRM practice in Indian organizations is not very encouraging, especially in PSUs; therefore the operative functions, namely Compensation Management, Role of Top Management, HRD, and Human Relations, need to be focused on and monitored both qualitatively and quantitatively.
Therefore the study chose the above aspects to determine the status of ERM through these HRM functions in the M.P. PSUs.

II. METHODOLOGY
The PSUs were chosen by lottery: approximately 50 percent of the 23 PSUs were selected for the study. The 10 PSUs chosen had different perceptions regarding the various issues of ERM. Previous research reveals that the operative functions of HRM lead to employee satisfaction in an organization. This can be achieved if employers provide employees with healthy conditions and a better environment. Employers should take care of employees' needs, which can be addressed through HRM practices such as training, job satisfaction, job rotation, participative management, performance appraisal, and career planning. Satisfied employees contribute more towards employee relations; hence ERM status can be assessed by equating ERM with employee satisfaction, with satisfaction taken as a proxy variable. ERM in the public sector of M.P. is explored in this way in this study. To gauge employee satisfaction it was important to examine the HRM operative functions, and the study found the variables Compensation Management, Role of Top Management, HRD, and Human Relations to be crucial in the M.P. PSUs. Thus, ERM status could be evaluated through employee satisfaction, and various statistical tools, such as the chi-square test and ANOVA run in SPSS, were used to quantify these determinants. A questionnaire was prepared for both employers and employees and distributed to record their responses, which were analyzed quantitatively for employees and qualitatively for employers. The perception of employees was statistically summarized through tables, graphs, and charts.
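As a small illustration of the kind of analysis the study ran in SPSS, the following sketch computes a Pearson chi-square statistic for a contingency table from scratch. The counts are invented for illustration and do not come from the study's questionnaire data.

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical counts: satisfied / neutral / dissatisfied employees
# in two PSUs (illustrative only, not the study's data).
stat, dof = chi_square([[30, 15, 5], [10, 20, 20]])
```

A large statistic relative to the chi-square distribution with the given degrees of freedom would indicate that satisfaction is not independent of the PSU, which is the kind of association the study examines between satisfaction and HRM practices.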
The results were tabulated, and a question-by-question approach was used to analyze them through graphs, tables, charts, MS Excel, and different statistical tools. The correlation table shows the effect of the various HRD and Human Relations variables on employee satisfaction; Performance Appraisal was found to be negatively related in the 10 chosen PSUs. Regression analysis showed that Performance Appraisal needs considerable improvement in these PSUs, along with motivational measures and Participative Management. Employers' views could not be quantified statistically, as the employers were few in number; their perceptions were therefore analyzed qualitatively and presented in tables and charts. Role of Top Management and Compensation Management were the two aspects analyzed qualitatively, based on the responses of the top officials of the M.P. PSUs who directly run them. Their view is that compensation management should be made effective in PSUs to improve employee satisfaction, and that top management should include professionals and have autonomy to run the PSUs effectively. The HRD and Human Relations aspects were combined with the employees' views, and the findings illustrate that HRD practices such as training, job satisfaction, job rotation, performance appraisal, participative management, and career planning, as well as Human Relations practices such as motivational measures and grievance redressal, are not satisfactory; they should therefore be critically monitored and practiced for better employee satisfaction and a better ERM status in the organizations. The components of ERM are divided into two sets of variables for the statistical analysis, discussed below.

III.
DEPENDENT VARIABLE: EMPLOYEE SATISFACTION
Independent variables: HRM functions - HRD (training, job satisfaction, job rotation, participative management, performance appraisal, career planning and development); Human Relations (motivational measures, grievance redressal, and disciplinary procedure).
Good HRM practice yields a better ERM status and increases the satisfaction of employees and employers (HODs), which depends on the various HRM functions. Thus, implementing the HRM functions can motivate employees and, in different ways, increase satisfaction, leading to healthy ERM.

1 Compensation Management
This study reveals that PSUs have to follow government norms with respect to compensation management, leaving very little autonomy to PSU management. Employers (HODs) feel that government interference in PSUs regarding compensatory benefits is excessive, with everything fixed as per government norms. Compensatory benefits are not at all satisfactory, and the treatment is the same for performers and non-performers; government parity in compensation, reservation in jobs, personnel policies, and promotions also affect satisfaction. Hence, productivity-linked pay packages should enjoy some degree of autonomy rather than simply following government norms. Compensation plays an important role in building ERM in an organization, as discussed earlier in the literature review, but in these PSUs government interference is, according to the respondents, extensive. The satisfaction and motivation of the employees are therefore affected, and they do not feel satisfied on this issue. Hence it is concluded that compensation in PSUs should be granted autonomy, with benefits given to employees according to their performance.
A well-planned variable-pay performance system can give maximum benefit to the employees as well as the organization, enhancing their motivation and satisfaction and thus improving the ERM status of the organization.

2 Role of Top Management
The study suggests that if top management inducts more HR professionals into formulating and implementing ERM/HRM policies in the PSUs, employee satisfaction will increase, as per the respondents' views. Since the role of top management in PSUs is very important, as already discussed in the literature review, it is concluded from the study and analysis that top management can play an important role in enhancing employee satisfaction and motivation through proper HR practices and by dealing with employees as HR professionals. This can definitely improve the ERM status of the organization.

3 HRD
The study concluded that HRD is a crucial function of HRM that can change the ERM status of an organization; this function could be quantified, and the statistical results confirm its importance. Hence, improved quality and productivity linked to motivation can be achieved through training, job rotation, job satisfaction, participative management, performance appraisal, career planning and development, employee involvement, and extrinsic and intrinsic rewards. The HRD function includes:
- Training
- Participative Management
- Job Rotation
- Performance Appraisal
- Job Satisfaction
- Career Planning and Development
These determinants of HRD are functions of satisfaction and depend on the HRM functions. The study concludes that the training policies are not satisfactory: training facilities are insufficient, employees are rarely sent outside the premises for training, and the effectiveness of training programmes is not assessed afterwards. Better training policies and assessment can therefore improve employee satisfaction and hence the ERM status of the organization.
For job satisfaction, the result is that satisfying employees through the different determinants increases their overall level of satisfaction. In this particular study, the level of job satisfaction can be increased by reframing government pay scales and DAs and by introducing a performance-based system instead of the fixed government-norm system. The results also conclude that more job rotation brings more satisfaction, as employees receive more varied responsibilities; if employers make an effort to keep jobs interesting by rotating staff and assigning different job descriptions, the ERM status of the organization will definitely improve. The results on participative management suggest that the organizations do not make proper efforts to care for employees and their families' needs, nor do they take much interest in employees' problems and suggestions; the level of satisfaction is therefore not very high, and the ERM status suffers accordingly. A participative-management policy can help raise the ERM status in this case. The results on performance appraisal imply that better employee satisfaction requires a variable-pay performance system, since government pay scales and DAs are not appropriate for judging employee performance; a proper performance-appraisal system is needed for a better ERM status. As far as career planning and development is concerned, employees in the PSUs do not have effective career plans for their growth, and very few employers conduct any exercise to gauge employees' job satisfaction. The promotion policy is not very motivating and promotion opportunities are few; employees need a clear-cut advancement policy for planning their careers. This would make them more satisfied and improve the ERM status of the organization.
4 Human Relations
These determinants give a clear indication that improving human relations increases satisfaction for both employers and employees; among employees, however, the relationship is not statistically significant, which shows that building human relations among employees is essential for improving the ERM status of the organization. The results conclude that making employees' day-to-day work interesting increases their satisfaction, and that a proper grievance-redressal system, which settles grievances in a timely way, enhances it further. Employers feel that both monetary and non-monetary gains are important for building human relations, but employees have to be made aware of the non-monetary gains; employers also feel that unions can help promote the ERM status of the organization. Hence the proposition that ERM has no fixed benchmark is borne out: like productivity, it is a continuous function, and this study supports the view that ERM can be continuously improved in an organization through factors that increase employee satisfaction. Beyond these issues, ERM mainly depends on trust between employers and employees; understanding employee relations from a behavioral-science perspective reveals the psychological contract as key to achieving positive employee relations in the workplace. The suggestions below are based on the responses of the employers and employees to the questions asked, and also include their own spontaneous suggestions, which can genuinely help enhance employee satisfaction and hence the ERM status of organizations.

IV.
SUGGESTIONS
The following suggestions arise from the analysis and findings of the research. This particular thesis studied ERM in various organizations, taking the public sector as its reference, so the public sector can be summed up on the basis of ERM as follows. Building on the above discussion of the PSUs and the results framed against the objectives set at the start of the thesis, the respondents, that is, the employers looking after the state PSUs, have also given suggestions that can directly affect the ERM status of the PSUs and are therefore very valuable. After studying the various facts, the study has arrived at the suggestions discussed below. Although all the top officials and employees are aware of the concept of HR, only a few PSUs have an HRD department, which means the focus on HRM is lacking; personnel should also understand the importance of HRD and the need to implement HRM practices. Hence it is suggested that every PSU have an HRD department to look after HRM practices.

1 Compensation Management
Autonomy of PSUs: In PSUs, the top management, e.g. the Managing Directors and Chairmen, are generally IAS officers appointed on deputation, on the reasoning that one IAS officer will better understand the working of, and communicate with, another IAS officer. A negative approach may sometimes create a barrier to the working of the state PSUs, but this procedure has been followed since the state PSUs were established and continues to date. Some PSUs have IFS officers rather than IAS officers as their top officials, which to some extent disturbs their smooth functioning. Since the top officials are all government appointees, all the PSUs are governed by government norms and procedures, with continuous interference in all policies and procedures.
Thus PSUs do not have autonomy in selecting professionals. State PSUs generally get all their information from the DIC (District Industries Centre), but decisions need to be approved by the state Government. In the beginning many PSUs worked as autonomous bodies, but gradually they have become facilitators, mainly sorting out, on behalf of the state Government, the difficulties occurring in PSUs. Although all PSUs should have autonomy, for many decisions they must depend on the Government: for example, to purchase a new vehicle when the old one is evidently uneconomic, Government approval is still required, so cost effectiveness cannot be implemented. The Government retains major control over all the PSUs, and they are not able to function independently and efficiently. Although all PSUs have a recruitment process and an independent board to recruit personnel, they must finally take Government approval to recruit a person; this supersedes all the decisions of the Board and makes it less important. The Government is expected to treat all employees equally, and continuous change and reshuffling is required in postings and locations. State PSUs should be given complete autonomy, as the CPSEs have, and should be ranked like CPSEs according to their performance. Large and Medium Scale industry status should also be given, as has already been proposed, and under the schedule Small Scale Industries should be included in Panchayati Raj with no Government interference. Improved Corporate Governance: Public enterprises are expected to meet social and other non-commercial objectives in addition to financial objectives; however, in pursuing these objectives, PSU management generally has limited discretion over fundamental decisions.
Faced with multifarious objectives, limited decision-making autonomy, and few incentives, PSU management will often postpone or avoid necessary decisions. This is reinforced by the existing system of performance monitoring, accountability, and appointment procedures. Promotions: a time scale is applied in most PSUs; for example, for promotion to GM level the criteria are seniority and suitability with a minimum of 7 years, so a person actually holding an Assistant Manager post can in time be promoted to GM level. Promotions should instead be based purely on the performance of employees. In PSUs the pay scale and promotions are sufficient, yet the employees are not motivated; for this a feedback system, which is lacking in almost every PSU, can be implemented. Many personnel in PSUs, as in M.P. Van Vikas Nigam, have performed well without any Government support but still have to take approval for any kind of decision; this demotivates them, and thus a sound promotion policy becomes essential. Balancing is required in salaries and DA, and commissions for those employees who are heavily involved should be appraised accordingly. Government pay scales and DAs are appropriate for the 60-70% of employees who work mainly because they have job security in mind, but the 30-40% who perform extra feel demotivated, as there is no variable performance-based pay scale. Contract-based agreements can be another suggestion to involve and motivate employees for better performance, and departmental exams should be conducted for appraising employees. Some organizations are covered by the Payment of Bonus Act, Industries Act, etc. and are governed accordingly; this practice should be the same for all PSUs so that employees get uniform benefits. Productivity-linked pay packages may be one solution to motivate employees and enhance ERM; for example, in M.P.
Warehousing and Logistics Corporation there are high and middle categories for employees; after monitoring performance, employees are transferred to the high or middle category, and are moved down if their performance is not up to the mark, which motivates them to perform accordingly. The top officials are of the opinion that in existing PSUs there is Government parity with regard to procedures such as Compensation, Promotions (DPC), and Reservation in Jobs, but only partial Government parity in personnel policies; this means all PSUs have the power to take decisions in certain matters. For example, 58 years is the maximum retirement age of any employee, which can be extended on the basis of medical fitness; such decisions are taken by the Chief Personnel Officer (CPO). Secondly, for any major decision in procedures related to Compensation, the existing PSUs need to take Government approval, but for day-to-day working the officials can take their own decisions.
2 Role of Top Management
HR professionals must be appointed to take care of employees, as the organization needs considerable knowledge of labour legislation, welfare and industrial relations; some PSUs have started this practice. The tenure of the MD, CMD and Chairman of any PSU should be a minimum of 5 years, in the best interest of the PSU, as frequent change of top officials may adversely affect its functioning. Corporate governance should be implemented in every PSU, since a PSU is not a Government department but is established to work independently.
3 HRD in terms of Training, Job Rotation, Job Satisfaction, Participative Management, Performance Appraisal, Career Planning and Development
Training: Training exists in many PSUs, but they do not have an independent training set-up. A training policy exists in many organizations, either for field officers or for safety measures, and employees can participate in many training activities, but training effectiveness is not assessed after the activity. Some PSUs, like MP Tourism, have created and filled a GM (Training) post and assess training every 15 days; this PSU has progressed remarkably and stands out in the state economy in recent years. Many PSUs share a common training set-up for central and state PSUs, and candidates from the same cadre can go for training. There are 2.5 lakh employees in M.P., and economic constraints do not allow all of them to be trained; training is assessed by taking a competency certificate from the Director of Safety. Thus provision for training, with assessment of the training programme, should be implemented in all organizations.
Job satisfaction: Since employees are not very satisfied with the roles given to them and feel they would be more satisfied if their day-to-day work were interesting, it would be very effective if employers started conducting job-satisfaction exercises; they would then be in a better position to know the satisfaction level of employees and hence improve ERM status in the organization.
Job rotation: More employees should be given opportunities to take on different kinds of roles in the organization; employees will then feel more satisfied, and employers can make their work more interesting and effective through job rotation, improving ERM status in the organization.
Participative management: The organization should take care of the needs of employees, make them participative, and try to learn the problems and suggestions related to their jobs so as to involve them more and more in organizational activities. Participation can be improved by taking care of their needs and their family requirements on attaining targets, say yearly objectives; this will definitely increase their satisfaction level and hence improve ERM status in the organization.
Performance appraisal: The performance of all employees must be appraised, and a performance-based pay system must be implemented to an extent. The Government should set targets for departments or sections and then appraise them section- or department-wise, as it appraises IAS officers. Performance appraisal should not be fixed but should vary according to the performance of employees and, if required, at the discretion of management. It is also suggested that target-based work be given to every individual, with transparency and monitoring while appraising. Most PSUs do not have a feedback system, which as per the officials is most important, although MPT has recently started one for which GM (HR) is responsible. Personnel are of the view that the purpose and mission with which the PSUs were established has been diluted over the years. Many posts have been created and cancelled purely for political reasons; for example, in certain PSUs only one of the posts of Chairman and Vice-Chairman is needed. Hence the objectives of PSUs should be redefined and the organization design made proper.
Career planning and development: Career advancement and promotion policies are not effective enough to motivate employees, so they are not able to plan their careers effectively. They do not have sufficient career advancement opportunities and know very well that a particular post is the maximum achievable in the organization, which a performer and a non-performer alike will reach within a certain period of time. PSUs must therefore have very motivating career advancement policies so that employees feel motivated to perform until they retire; this will definitely improve ERM status in the organization.
4 Human Relations in terms of Motivational Measures and Grievance Redressal
Motivational measures: All personnel are of the opinion that monetary and non-monetary motivational measures are of equal importance, and all PSUs must have the same pay scale, since different scales lead to demotivation of employees. Commissions should be given based on targets, which turn into recognition and motivate employees, enhancing ERM. Promotion always motivates individuals, but job content is equally important, as many people are not working as per their job content; for example, a finance specialist may be working in the marketing department. Recognition through medals and certificates, as in the CPSEs, will make employees more motivated, and LTC can be started in some PSUs as a motivational measure under a career advancement scheme. Many PSUs have set an example of teamwork, e.g. TRIFAC, which is now wholly a Government facilitator but has organized many activities such as road shows and seminars nationally and internationally. Personnel are of the view that it is only because of teamwork that they could achieve so many awards and improve Human Relations.
Many PSUs have started activities for improving ER, such as sports federations and societies which manage funds on their own to recognize meritorious children of the employees and distribute scholarships and certificates to motivate them. The societies also organize free health check-up camps and sports competitions to foster good interaction between top management and employees. It is also suggested that the Government support such activities by providing a budget for a welfare fund and facilities for a good work environment, to improve Human Relations. PSUs were also established to supply monopoly products to the market, and monopoly products can certainly improve the condition of state PSUs. Personnel who have discovered a unique product or project through which a PSU has achieved profits are central to enhancing the productivity of PSUs; this monopoly should therefore be continued by giving special attention to those personnel who can sustain it in the market through unique products and services. Thus job analysis should be in line with the objectives for which PSUs were established. At the time of establishment of the PSUs there was no HR planning, and this has resulted in overstaffing; for example, in M.P. State Civil Supplies the requirement at establishment was 30, but 200 employees were recruited. This overstaffing has resulted in a lack of competency. Many PSUs have not run any recruitment process for the last 25-30 years, leading to a lack of competent personnel and fresh blood; hence proper HRD planning is suggested. All processes of good ER depend on trust, which is the key element of the success of any organization. As it is said, the language of nature also teaches trust, and HR is not only for the organization but is everywhere; self-analysis, and the complete involvement of every individual in the job with a feeling of the importance of what they are doing, improve Human Relations.
Grievance redressal: Some PSUs have one or another type of grievance, and it is also revealed that very few employees are satisfied with the redressal procedure; over time they have developed a negative attitude. An orientation programme and an HR professional can help them better.
Unions: Some professionals are of the strong opinion that a person with high morale can guide others, which is truly required in PSUs, where everything is fixed and people are not easily motivated, and this can enhance Human Relations. Officers reveal that employee unions are formed basically for the support of employees and act as a check against non-performance of the organization and against corruption. Some personnel are of the opinion that unions can help to improve Employee Relations to a great extent, but this depends on the type of union: whether it consists of the right kind of people and the right office bearers to motivate employees. Though in some PSUs the employee union does not make a good impression, employee orientation can definitely help them perform well.
V. CONCLUSION
Thus it is concluded that if the suggestions, which are based on the findings of the study, are implemented, the PSUs will be in a better position, as employee satisfaction will be high and productivity will therefore increase. The suggestions are based on face-to-face interviews and hence find their applicability in PSUs.
American Journal of Engineering Research (AJER) 2013 e-ISSN : 2320-0847 p-ISSN : 2320-0936 Volume-02, Issue-12, pp-75-82 www.ajer.org Research Paper Open Access
Design and Implementation of Synchronous Generator Excitation Control System Using Fuzzy Logic Controller
Hafiz Tehzibul Hassan1, Irfan A Khan2 and M. Usman Ali3
1 Associate Professor, Dept. of Electrical Engineering, University of Lahore, Pakistan.
2 Assistant Professor, Dept. of Electrical Engineering, University of Lahore, Pakistan.
3 Dept. of Electrical Engineering, PIEAS, Pakistan.
Abstract: - The main advantage of the fuzzy logic controller (FLC) is that it can be applied to plants that are difficult to model mathematically, and the controller can be designed to apply heuristic rules that reflect the experience of human experts. This paper investigates the design and implementation of a fuzzy logic PID controller and its application in a synchronous generator excitation control system (AVR). It includes simulations in MATLAB to justify the design. The design uses basic control-system concepts and includes mathematical models representing the transfer functions of the components of the automatic voltage regulator control system.
Keywords: Fuzzy Logic Control, Synchronous Generator Excitation, PID controller
I. INTRODUCTION
The main motive of this paper is to design a regulator control system that overcomes the difficulty of obtaining a complex mathematical model of the plant to be controlled, and to develop an intelligent control system, so that the regulator becomes independent of the system being controlled and generates controlling signals on the basis of the experience it gains during operation. The system should also be very simple to understand and easy to program. Regulator control systems mostly use PID controllers, which are given by [1].
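As a concrete illustration of the digital form such regulators take, the following is a minimal sketch of the incremental (velocity-form) discrete PI control law developed in the next section. The gain values and sampling time are illustrative assumptions, not parameters from this paper.

```python
# Hedged sketch: incremental discrete PI law u = Kp*V + Ki*T*e,
# where V is the change in error over one sampling time T.
# Gains Kp, Ki are illustrative assumptions.
def pi_increment(e, e_prev, T, Kp=1.0, Ki=0.5):
    """Return u, the change in controller output m over one sample."""
    V = e - e_prev                 # change in error over one sampling time
    return Kp * V + Ki * T * e

def run_controller(errors, T, Kp=1.0, Ki=0.5):
    """Accumulate the increments u to reconstruct the absolute output m."""
    m, e_prev = 0.0, 0.0
    for e in errors:
        m += pi_increment(e, e_prev, T, Kp, Ki)
        e_prev = e
    return m
```

Accumulating the increments, as `run_controller` does, corresponds to the output interface w = u + w·z⁻¹ used later in the paper.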
The PID controller has two zeros and one pole. Generally an additional pole is required to limit the high-frequency gain [2]. To develop the control law for a fuzzy logic controller that performs the function of PID control efficiently, the concept of digital PID is necessary. In digital PID controllers, the multiplication, integration and differentiation are performed numerically in digital computers [2]. The transfer function of a digital PID controller using numerical integration and differentiation is expressed in the z-transform [3].
Fig 1 Digital PID controller
II. IMPLEMENTATION OF FUZZY LOGIC PID CONTROLLER
The basic control equation for PID is given by [2]

m(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de(t)/dt    (1)

where m(t) is the control signal and e(t) is the error signal. Differentiating (1):

dm(t)/dt = Kp de(t)/dt + Ki e(t) + Kd d²e(t)/dt²    (2)

where m and e are time-dependent variables. In discrete time, (2) (retaining the PI terms used in the fuzzy design below) can be written as follows [4]:

u = Kp V + Ki T e

where T = sampling time, u = change in output 'm' over one sampling time, and V = change in error signal 'e' over one sampling time. The characteristics of a PI controller can be represented by the phase-plane diagram shown below: a diagonal line where u = 0 divides the area where u is positive from the area where u is negative. In order to design a fuzzy controller based on the PI control structure, the following definitions are made. Let E be the linguistic variable for the error e, V the linguistic variable for the change in error over one sample time, and U the linguistic variable for the control output u over one sample time.
Fig 2 Characteristics of Fuzzy PI Controller
The following fuzzy sets can be defined:
LE = {NB, N, Z, P, PB}
LV = {NB, N, Z, P, PB}
LU = {NVB, NB, N, NS, Z, PS, P, PB, PVB}
Each element of a linguistic variable set is a membership function of that variable. The present design uses three types of functions: the s-function, the z-function, and the triangle function [5]. A.
Membership Functions
The design uses the standard S-shaped, Z-shaped and triangular membership functions. The crisp values of the input variables are mapped onto the fuzzy plane using these functions [6, 7].
B. Fuzzy Rule Base
Each input variable can take any of the five linguistic values; therefore 5×5 = 25 rules are formulated. The rules have the typical fuzzy rule structure, using linguistic variables in the antecedent, and are expressed in IF-THEN form, representing the corresponding PI control law [8, 9]. To implement this design in the FLC, let x1 = E and x2 = V. The rule base can be represented by the fuzzy associative memory (FAM) tables shown below.

TABLE I. FUZZY ASSOCIATIVE MEMORY
                 x1
  x2      B1_1  B1_2  B1_3  B1_4  B1_5
  B2_1    C1    C2    C3    C4    C5
  B2_2    C6    C7    C8    C9    C10
  B2_3    C11   C12   C13   C14   C15
  B2_4    C16   C17   C18   C19   C20
  B2_5    C21   C22   C23   C24   C25

TABLE II. FUZZY ASSOCIATIVE MEMORY
                 E
  V       NB    N     Z     P     PB
  NB      PVB   PB    P     PS    Z
  N       PB    P     PS    Z     NS
  Z       P     PS    Z     NS    N
  P       PS    Z     NS    N     NB
  PB      Z     NS    N     NB    NVB

C. Inference Engine
The FLC design in this project incorporates Mamdani's implication method of inference [5]. The first phase of Mamdani's implication involves a min-operation, since the antecedent pairs in the rule structure are connected by a logical 'AND'; all the rules are then aggregated using a max-operation. According to this rule, the elements of Table I are:
C1 = min[B1_1, B2_1]
C2 = min[B1_2, B2_1]
C3 = min[B1_3, B2_1]
C4 = min[B1_4, B2_1]
... and so on, up to
C25 = min[B1_5, B2_5]
The max operation is used to take into account the combined effect of all the rules: the 25 output conditions are aggregated into 9 linguistic values (D1 to D9) according to the conditions set by the rules.
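The membership shapes and the min/max inference steps described in this section can be sketched as follows. These are the standard textbook forms of the three functions; the breakpoints a, b, c are assumptions, since the paper's own equations are not reproduced in this extraction.

```python
# Hedged sketch of the three membership-function shapes and Mamdani's
# min/max steps; all breakpoint values are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def s_shaped(x, a, b):
    """S-function: 0 below a, 1 above b, smooth quadratic ramp between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    mid = (a + b) / 2.0
    if x <= mid:
        return 2 * ((x - a) / (b - a)) ** 2
    return 1 - 2 * ((b - x) / (b - a)) ** 2

def z_shaped(x, a, b):
    """Z-function: the mirror image of the S-function."""
    return 1.0 - s_shaped(x, a, b)

def fire_rule(mu_e, mu_v):
    """Mamdani min: antecedents joined by logical AND."""
    return min(mu_e, mu_v)

def aggregate(strengths):
    """Max-aggregation: combined effect of all rules firing on one output set."""
    return max(strengths)
```

Here `fire_rule` computes one Ci = min[B1_i, B2_j] entry of Table I, and `aggregate` computes one Dk from the Ci values feeding it.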
The max functions are:
D9 = C1
D8 = max[C2, C6]
D7 = max[C3, C7, C11]
D6 = max[C4, C8, C12, C16]
D5 = max[C5, C9, C13, C17, C21]
D4 = max[C10, C14, C18, C22]
D3 = max[C15, C19, C23]
D2 = max[C20, C24]
D1 = C25
III. DEFUZZIFICATION TECHNIQUE
The membership functions of the output values are intentionally made symmetrical, as this simplifies the defuzzification computation. In this project the weighted average method is used as the defuzzification technique. Because the output functions are symmetrical, the means of the fuzzy sets can be used as weightings for the defuzzification process. This technique requires several multiply-by-a-constant operations and only one division [8, 9]:
Dividend = E1*D1 + E2*D2 + E3*D3 + E4*D4 + E5*D5 + E6*D6 + E7*D7 + E8*D8 + E9*D9;
Divisor = D1 + D2 + D3 + D4 + D5 + D6 + D7 + D8 + D9;
Output y = Dividend / Divisor. {Divisor should not be 0}

Linguistic value, D:  D1  D2  D3  D4  D5  D6  D7  D8  D9
Weighting value, E:   E1  E2  E3  E4  E5  E6  E7  E8  E9

A. Interfacing Blocks
The input interface forms the error signal 'e' and the change in error 'Δe' from Vref and Vdc at the plant output. The output interface converts the output of the FLC into the value required by the plant. The characteristics of the interfacing blocks are described by the following equations:
Input interface: e = Vref − Vdc; x1 = e; x2 = x1 − x1·z⁻¹
Output interface: u = y; w = u + w·z⁻¹
where z⁻¹ represents a delay of one sampling time (z-transform notation).
Fig.3 Block diagram of Fuzzy Control System
IV. TRANSIENT RESPONSE
The main purpose of the derivative component in a PID controller is to improve the transient response. Transient response is related to the rate of change of the signal, i.e. speed. In an FLC the transient response depends on the weighting values [10].
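The weighted-average defuzzification and the accumulating output interface above can be sketched as follows. The set centres E1..E9 used in the example are illustrative assumptions; the paper only requires the output sets to be symmetric.

```python
# Hedged sketch of weighted-average defuzzification and the output
# interface w = u + w*z^-1; the centres in E below are assumptions.
def defuzzify(D, E):
    """Weighted average sum(Ei*Di)/sum(Di), guarding against a zero divisor."""
    divisor = sum(D)
    if divisor == 0:                       # divisor should not be 0
        return 0.0
    return sum(e * d for e, d in zip(E, D)) / divisor

class OutputInterface:
    """Accumulator w = u + w*z^-1: integrates the FLC increments u."""
    def __init__(self):
        self.w = 0.0

    def step(self, u):
        self.w += u
        return self.w

# Example with assumed symmetric set centres for NVB .. PVB:
E = [-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
D = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]   # only the Zero set fires
```

With only the central (Zero) set firing, `defuzzify(D, E)` returns 0.0, i.e. no change in control output, which is the expected behaviour on the u = 0 diagonal of the phase plane.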
If the weighting values in the defuzzification process are large, overcompensation is produced and the output oscillates. The transient response can therefore be controlled by using appropriate weighting values in the defuzzification process; it also depends on the sampling time of the input and output interfaces, i.e. the unit delay [11].
V. FUZZY LOGIC PID CONTROLLER IN SIMULINK
The Fuzzy Logic Toolbox is designed to work seamlessly with Simulink, the simulation software available from The MathWorks.
Fig.4 Masked subsystem of Fuzzy Logic PID Controller designed in MATLAB
VI. APPLICATION OF FUZZY LOGIC PID CONTROLLER IN SYNCHRONOUS GENERATOR EXCITATION SYSTEM (AVR)
Fig 5 Block diagram of the FLC (looking under the mask)
Fig 6 Block diagram of the AVR, in which each block represents the transfer function of a particular element of the AVR
In Fig (6):
- Typical values of KA are in the range of 10 to 400.
- Typical values of τA are in the range of 0.02 to 0.1 seconds.
- The value of τE for a modern exciter is very small.
- The value of the gain K can be adjusted according to controller behaviour.
- KG and τG depend on the load on the generator. Typical values of KG are 0.7 to 1, and τG is between 1.0 and 2.0 seconds.
- Values of τR may be assumed between 0.01 and 0.06, i.e. very small.
In the block diagram, the PID controller has been replaced by the FLC, which performs the job of PID more efficiently. VII.
SIMULATION IN MATLAB USING A SIMULINK MODEL OF A TYPICAL AVR FOR JUSTIFICATION OF THE FLC
Typical values for the AVR are given as: KA = 10, τA = 0.1, KE = 1, τE = 0.4, KG = 1, τG = 1.0, KR = 1, τR = 0.05. The FLC parameters are defined by Figs (7, 8 and 9).
Fig 7 Membership function definition for Input 1
Fig 8 Membership function definition for Input 2
Fig 9 Membership function definition of the output
Fig 10 Selection of input and output parameters
Fig 11 Surface showing the input-output relationship
Fig 12 Simulink model for simulation
Fig 13 Simulation Results
VIII. DISCUSSION OF RESULTS
The programmed fuzzy controller was found to be very quick in raising the value to the steady state, as shown in Fig 13, while the FIS-designed fuzzy controller was second fastest in response. The conventional transfer-function controller was slow in comparison to the fuzzy controllers in achieving the steady-state value. The reference signal had three levels; at the last level it settles and does not change for the rest of the time. These switched levels were produced in the shape of a square wave to check whether the controller follows the reference voltage signal. In response to the reference signal, the PID controller using the transfer function does follow the reference, but many oscillations are observed when switching from one level to another.
IX. CONCLUSIONS
The performance of the fuzzy logic controller is neither too fast nor too slow, but moderate. The fuzzy logic controller is somewhat more intelligent than PID, because it keeps a record of the disturbances experienced by the system in the FAM (Fuzzy Associative Memory). During defuzzification, the average area under the output variable is calculated according to the concept of centre of gravity.
Hence the effect of disturbances which are large but last only a short time is diminished. Therefore overshoot and undershoot are small, and the system is less oscillatory.
REFERENCES
[1]. Bagis, A.: Determination of the PID Controller Parameters by Modified Genetic Algorithm for Improved Performance, In: Journal of Information Science and Engineering, 23(2007), 1, 2007.
[2]. Phillips, C.; Harbor, R.: Feedback Control Systems, Prentice Hall, 1999.
[3]. Zhang, L., Cai, K-Y. and Chen, G.: An improved robust fuzzy-PID controller with optimal fuzzy reasoning, In: IEEE Trans. on Systems, Man and Cybernetics, Part B (2005), No. 35, Dec. 2005, pp. 1283-1294.
[4]. Chow, M.: Fuzzy Logic Based Control, CRC Press Industrial Electronics Handbook, D. Irwin, Ed., 1996.
[5]. Corcau, J-I.; Stoenescu, E.: Fuzzy Logic Controller as a Power System Stabilizer, In: International Journal of Circuits, Systems and Signal Processing, Issue 3(2007), Volume 1, 2007.
[6]. Tian, X.; Wang, X. and Cheng, Y.: A Self-tuning Fuzzy Controller for Networked Control System, In: IJCSNS International Journal of Computer Science and Network Security, Vol. 7(2007), No. 1, January 2007.
[7]. Sooraksa, P.: On comparison of hybrid fuzzy PI plus conventional D controller versus fuzzy PI+D controller, In: IEEE Trans. on Industrial Electronics, (2004), Vol. 15, Feb. 2004, pp. 238-239.
[8]. Taher, S.A. and Shemshadi, A.: Design of Robust Fuzzy Logic Power System Stabilizer, In: Proceedings of World Academy of Science,
[9]. Engineering and Technology, Volume 21(2007), May 2007, ISSN 1307-6884.
[10]. Voropai, V.I.: Application of Fuzzy Logic Power System Stabilizers to Transient Stability Improvement in a Large Electric Power System, In: IEEE, and P.
[11]. V. Etingov, PowerCon, vol. 2(2002), Kunming, China, October 13-17, 2002.
[12]. Zhao, Z.Y., Xie, W.F. and Zhu, W.H.: Fuzzy Optimal Control for Harmonic Drive System with Friction Variation with Temperature, In: IEEE International Conference on Mechatronics and Automation, (2007), August 5-8, 2007, Harbin, China.
[13].
Merzougui, H; Ferhat-Hamida, A; Zehar, K: Robust PID-Sliding Mode Control of a Synchronous Machine, In: SETIT 2005, 3rd International Conference: Sciences of Electronics, Technologies of Information and Telecommunications, March 27-31, 2005, Tunisia.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-313-320 www.ajer.org Research Paper Open Access
Experimental Investigations of Performance and Emission Analysis of a Direct Injection Diesel Engine Fueled with Refined Vegetable Oils
Hemanandh Janarthanam1, Narayanan K V2
1, 2 Department of Mechanical Engineering, Sathyabama University, Chennai, India.
Abstract: - In this research, refined vegetable oils are investigated to study the emissions and performance of a Kirloskar direct injection 4-stroke diesel engine: single cylinder, air-cooled, 4.4 kW, compression ratio 17.5:1, run at a constant speed of 1500 rpm. The injection pressure, blend percentage and load were used as input parameters. Emissions and performance measures such as NOx, CO, HC, CO2, brake specific fuel consumption and brake thermal efficiency were considered as output parameters. The refined vegetable oils were transesterified to methyl esters with sodium methoxide as catalyst before blending with diesel. A 3-hole nozzle was used to inject the fuel. The biodiesel and diesel properties are compared with ASTM and BIS standards. The emission results were studied using an AVL gas analyzer. The experimental data showed that the brake thermal efficiency of the refined palmolein biodiesel was marginally higher than that of diesel fuel. It was also observed that CO, HC, CO2 and NOx are lower for refined palmolein than for refined corn and refined sunflower oil. The specific fuel consumption of the refined palmolein biodiesel is also reduced by 28.57% compared to pure diesel fuel.
Keywords: - Kirloskar DI diesel engine, Injection pressure, Biodiesel, 3-hole nozzle, Combustion and emission characteristics.
I.
INTRODUCTION
In the present Indian scenario, an alternative fuel has become important due to the continuous increase in diesel fuel prices and increasing environmental pollution from diesel engine exhaust emissions. Many types of biodiesel can be used in diesel engines. Biodiesel and vegetable oils reduce greenhouse emissions and are environmentally friendly. Biodiesel is a renewable fuel and reduces emissions in the transportation sector; it is a good alternative to fossil fuels and is available in plenty. The properties of the vegetable oils were compared with diesel and analyzed. Vegetable oils cannot be used directly along with diesel, since they are highly viscous. Transesterification is carried out in the presence of methanol, with 5% animal fat added and sodium methoxide as catalyst. This improves the performance of the engine and reduces the emissions.
1.1 Background
Similar experiments on biodiesel have been conducted by many researchers. Experiments on a DI diesel Perkins engine were conducted by Dorado MP et al. [1] using reused olive oil methyl ester to study the effect on combustion efficiency. The oxygen concentration was increased, which accelerated combustion, and the combustion efficiency with reused olive oil methyl esters remained almost constant compared with diesel oil. A lower energy rate was seen in palm oil combustion by Tashtoush G et al. [2]: burning biodiesel was more efficient, with a higher rate of combustion (66%) compared with diesel combustion (56%), because of properties such as high viscosity, low volatility and density. Sudhir C.V. et al. [3] conducted tests on a diesel engine using waste cooking oil; the combustion temperature and pressure were low when operating on biodiesel, and the NOx emissions were about equal to those of diesel.
The sulphate emission was very low due to the lower level of sulphur. The pilot combustion caused the pre-combustion. A blending ratio of 15% resulted in reduced smoke opacity. Tests conducted on a DI stationary engine by Yusaf T. F. et al. [4] showed that as the blend fraction increases, the brake power and CO increase at variable speeds below 1800 rpm. A review by Shereena et al. [5] of transesterification using a catalyst along with methanol showed that the fatty acid content of the biodiesel varies, and that it could be a good alternative fuel for diesel. The study of varying engine displacement by Valentinas Mickunaitis et al. [6] showed a mass increase of 6.5% for petrol and 7.5% for diesel, and hence an increase in fuel consumption and CO2 emission. Mahinpey N. et al. [7] explain that low sulphur content with neutral CO2 is essential for transportation and safe handling. Experiments by Jewel A. Capunitan et al. [8] showed that the chemical stock produced from pyrolysis of corn stover is a valuable fuel. Chromatographic and spectroscopic studies of bio-oil by Ilknur Demiral et al. [9] reveal that corncob stock can be classified as a renewable fuel. A significant reduction of about 52.1% in greenhouse gas emissions is evident [Nathan Kauffman] [10]. The literature on production of raw material for biodiesel by Xiao Huang et al. [11] showed that corn stover hydrolysate as a fermentation feedstock for preparing microbial lipid reduces the nitrogen content. N.N.A.N. Yusuf et al. [12] showed that, compared with petroleum diesel, biodiesel reduces emissions of CO2, SO2, particulates, CO and HC, with an increase of about 10% in NOx.
However, blending biodiesel with petroleum diesel reduces the NOx emission with a slight increase in other values, which remain within acceptable criteria. Sources for producing biodiesel include edible oils such as corn and canola [Prafulla D. Patil et al.] [13]. A lower energy rate in palm oil combustion was again noted by Tashtoush G et al. [14], with biodiesel burning more efficiently (66%) than diesel (56%). Xiulian Yin et al. [15] show that methanol with catalyst produces a high yield in a shorter time, with flat plate ultrasonic irradiation combined with mechanical stirring (UIMS) and probe ultrasonic irradiation (PUI) outperforming mechanical stirring (MS) and flat plate ultrasonic irradiation (FPUI) alone, while using a smaller quantity of catalyst and less energy. Hydroconversion of sunflower oil over a Raney nickel catalyst was investigated by Gyorgy Onyestyak et al. [16], who also tested octanoic acid as a model compound at 21 bar and temperatures of 280 °C to 340 °C; an oxide additive significantly increased the alcohol yields. The combustion and emission results for the baseline fuel, and the smoke and nitrogen oxide emissions measured at the engine exhaust while using cottonseed or sunflower oil in different proportions at two speeds and three loads, were reported by D.C.
Rakopoulos et al. [17]. The blends of sunflower, cottonseed, corn and olive oil used in a six-cylinder turbocharged heavy-duty DI Mercedes-Benz minibus engine at two speeds and three load conditions, compared with neat diesel, resulted in no change in thermal efficiency, a reduction of smoke and an insignificant increase in NOx. M. S. Shehata [18] conducted experiments on sunflower oil and jojoba oil blended with 80% PD at different engine speeds, which resulted in lower brake thermal efficiency, smoke, CO and HC. Biodiesel must be used within 6 months from the date of manufacture. Cardone M et al. and Çetinkaya M et al. [19, 20] reveal that specific fuel consumption increases when biodiesel is mixed with diesel oil, whereas exhaust emissions affect the engine parameters.
1.2 Methodology
The density and kinematic viscosity of the PM, CF and SF fuels are within the limits of the biodiesel standards. The calorific value of the vegetable oils is slightly lower than that of diesel. The flash point of the vegetable oils is higher than that of pure diesel, making them safe to store and transport. The aim of the work is to analyze the emissions and performance of the diesel engine using biodiesel. This has been done by varying the injection pressure, fuelled with transesterified refined palmolein, refined corn oil and refined sunflower oil (methyl esters) blended with pure diesel at different ratios (10% + 90% PD, 30% + 70% PD, and 40% + 60% PD).
1.3 Nomenclature
PM - Biodiesel, refined palmolein
CF - Biodiesel, refined corn oil
SF - Biodiesel, refined sunflower oil
PD - Pure diesel
ρ - Density, kg/m3
BP - Brake power, kW
BSFC - Brake specific fuel consumption, kg/kW-hr
ηbt - Brake thermal efficiency
N - Engine running speed, rpm
BIS - Bureau of Indian Standards
T - Torque, N-m
CV - Calorific value of the fuel, kJ
R - Radius of the drum, mm
A - Area of the piston, mm2
K - No.
of cylinders
ASTM - American Society for Testing and Materials
II. METHODOLOGY
2.1 Transesterification process
The methyl esters are formed by the transesterification process. One litre of refined vegetable oil is treated with 400 g of methanol and 8 g of sodium methoxide as catalyst. In the first stage, the oil is preheated to 20 °C to 40 °C and allowed to cool down naturally. Methanol is added with the catalyst to the preheated oil at a cold temperature (atmospheric or lower), and the temperature is raised to 70 °C to 80 °C for the reaction. Transesterification of the oil reduces its high viscosity and gives pure methyl esters without any soap content:

CH2OCOR-CHOCOR-CH2OCOR (triglyceride) + 3 R'OH (alcohol) --catalyst--> CH2OH-CHOH-CH2OH (glycerin) + 3 RCOOR' (methyl esters)

2.2 Table 1 - Specification of Test Engine
Type: Kirloskar vertical, 4-stroke, single acting, high speed, C.I.
Combustion: Diesel, direct injection
Rated power: 4.3 kW
Rated speed: 1500 rpm
Compression ratio: 17.5:1
Injector type: Single 3-hole jet injector
Fuel injection pressure: 210 bar
Dynamometer: Eddy current
Dynamometer arm length: 200 mm
Bore: 87.5 mm
Stroke: 110 mm
Connecting rod: 200 mm
Cubic capacity: 661.5 cm3
Fuel tank capacity: 6.5 litres
Governor type: Mechanical centrifugal type
2.3 Table 2 - Details of Measuring Systems
Pressure transducer: GH 12 D with AVL piezo charge amplifier (to measure pressure)
Data analyzer from engine: AVL 617 Indimeter, software version V 2.0
Angle encoder: AVL 364
Smoke meter: AVL 437 C
5-gas analyzer (NOx, HC, CO, CO2, O2): AVL DIGAS 444
2.4 Experimental Setup
A stationary Kirloskar 4-stroke direct injection diesel engine was used to evaluate the emissions and performance of the various refined vegetable oils at various injection pressures and loads.
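The performance quantities defined in the nomenclature are related by the standard dynamometer formulas BP = 2πNT/60000 (kW), BSFC = mf/BP and ηbt = BP/(mf·CV/3600). A minimal sketch follows; the torque, fuel-flow and calorific-value readings below are assumed illustrative values, not measured data from this paper:

```python
import math

def brake_power_kw(speed_rpm, torque_nm):
    """BP = 2*pi*N*T / 60000, with N in rpm and T in N*m."""
    return 2 * math.pi * speed_rpm * torque_nm / 60000.0

def bsfc_kg_per_kwh(fuel_kg_per_h, bp_kw):
    """Brake specific fuel consumption, kg/kW-hr."""
    return fuel_kg_per_h / bp_kw

def brake_thermal_eff(bp_kw, fuel_kg_per_h, cv_kj_per_kg):
    """Fraction of the fuel's chemical energy converted to brake work."""
    fuel_power_kw = fuel_kg_per_h * cv_kj_per_kg / 3600.0
    return bp_kw / fuel_power_kw

# Hypothetical full-load reading (not measured data from the paper)
N = 1500        # rpm, the constant test speed
T = 27.4        # N*m, assumed torque at full load
m_f = 1.05      # kg/h, assumed fuel flow from the flow meter
CV = 42500.0    # kJ/kg, a typical diesel calorific value

bp = brake_power_kw(N, T)   # close to the engine's 4.3 kW rating
print(round(bp, 2), round(bsfc_kg_per_kwh(m_f, bp), 3),
      round(brake_thermal_eff(bp, m_f, CV), 3))
```

The same three functions apply to every blend and load point; only the measured torque and fuel flow change.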
Table 1 shows the specification of the Kirloskar 4-stroke diesel engine. The main parameters evaluated are the emissions (CO, HC, NOx, CO2) and the performance measures brake specific fuel consumption and brake thermal efficiency. The load on the engine was applied using electrical loading (dynamometer). The eddy current dynamometer coupled to the engine was used for various loads (0%, 25%, 50%, 75%, 100%) and blends (10% PM + 90% PD, 30% PM + 70% PD, 40% PM + 60% PD, 10% CF + 90% PD, 30% CF + 70% PD, 40% CF + 60% PD, 10% SF + 90% PD, 30% SF + 70% PD, 40% SF + 60% PD). The exhaust gas emissions from the engine were measured using the AVL DIGAS 444 5-gas analyzer (NOx, HC, CO, CO2, O2). The fuel consumption was measured by a fuel flow meter. The complete setup and schematic diagram of the experimental setup are shown in Figure 1 and Figure 2.
Fig. 1 Schematic diagram of experimental setup: 1 - Kirloskar vertical C.I. diesel engine, 2 - Fuel tank, 3 - AVL 437 C smoke meter, 4 - Electrical loading device, 5 - Engine temperature monitor
Fig. 2 Image of the experimental setup
2.5 Test procedure
The experiments were conducted at different load conditions and different injection pressures with various blends of refined vegetable oil as fuel. The tests were conducted at a constant speed of 1500 rpm. The engine was allowed to run at no-load for 10 minutes with each blend proportion before applying the load. The loads were increased gradually for each blend in steps of 25% up to 100% at a constant speed of 1500 rpm and at different injection pressures (180 bar, 210 bar, and 240 bar). The exhaust gases were sampled from the exhaust stream of the engine, and CO, CO2, HC, O2 and NO were measured by the 5-gas analyzer given in Table 2. Tests were conducted to analyze the emissions and performance under the above conditions.
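A common first estimate of the properties of the volume blends listed above is a volume-weighted density and a mass-weighted calorific value. The sketch below uses typical literature property values assumed for illustration, not the measured figures of the paper's Table 3:

```python
def blend_density(frac_bio, rho_bio, rho_diesel):
    """Volume-weighted density of a biodiesel/diesel blend, kg/m^3."""
    return frac_bio * rho_bio + (1 - frac_bio) * rho_diesel

def blend_cv(frac_bio, rho_bio, rho_diesel, cv_bio, cv_diesel):
    """Mass-weighted calorific value of a volume blend, kJ/kg."""
    m_bio = frac_bio * rho_bio          # biodiesel mass per unit volume
    m_d = (1 - frac_bio) * rho_diesel   # diesel mass per unit volume
    return (m_bio * cv_bio + m_d * cv_diesel) / (m_bio + m_d)

# Assumed properties (typical literature values, not the paper's Table 3)
rho_pd, cv_pd = 830.0, 43000.0   # pure diesel
rho_pm, cv_pm = 875.0, 39500.0   # palm-oil methyl ester (assumed)

for frac in (0.10, 0.30, 0.40):  # the three blend ratios tested
    print(frac, round(blend_density(frac, rho_pm, rho_pd), 1),
          round(blend_cv(frac, rho_pm, rho_pd, cv_pm, cv_pd), 0))
```

Such linear mixing rules are only a first approximation; measured blend properties, as compared against ASTM and BIS limits in Table 3, take precedence.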
The properties of the vegetable oils compared with diesel, ASTM and BIS standards are given in Table 3.
2.6 Table 3 - Comparison of properties of diesel, biodiesel standards and vegetable oils
III. RESULTS & DISCUSSIONS
Fig - 3 Variation of CO with respect to various vegetable oils at 240 bar & 30% blend
Carbon monoxide (CO): Fig 3 shows the CO emissions of the three refined oils. The CO emission of PM at the higher blend ratio (30% PM + 70% PD) is reduced by 31.25%, whereas CF and SF at higher loads are reduced by 22% and 12.5% compared to pure diesel. This could be due to the higher injection pressure and the effectiveness of the 3-hole nozzle, which give good spray characteristics. According to the literature, the CO emission is reduced compared to diesel: Krahl J et al. [21] observed that CO emissions were reduced by 50% for rapeseed oil compared with ultra-low-sulphur diesel. Ozsezen AN et al. [22] showed that for WPOME and COME the CO emissions decreased by 86.89% and 72.68%. Fontaras et al. [23] found that CO increases for B50 and B100 in the range of 54% and 95% due to the high viscosity and poor spray characteristics of biodiesel, which lead to poor mixing and poor combustion. Pure biodiesel of karanja and polanga oil reduces the CO emissions compared with diesel [Sahoo PK et al.] [24]; the reduced droplet size leads to better combustion. Overall, the CO emissions of PM are better than those of CF and SF.
Fig - 4 Variation of HC with respect to various vegetable oils at 240 bar & 30% blend
Hydrocarbon (HC): It is observed that the HC emission is reduced by 55.26% for PM, 28.35% for SF and 18.91% for CF at higher loads at a constant speed of 1500 rpm. This could be because, at higher injection pressure, the volatility of the fuel increases, leading to good combustion, while the low viscosity increases the gas temperature and gives a better reduction in HC emission.
The unburnt hydrocarbon (HC) emission is an important parameter. The three test fuels were analyzed and compared with diesel; the HC emission varies with load, injection pressure and blend ratio as shown in Fig 4. At high injection pressure the air-fuel mixing increases; because of the high viscosity of rapeseed oil, peak pressures were recorded with the different blends [Canakci M et al., Devan PK et al.] [25, 26, 27].
Fig - 5 Variation of NO with respect to various vegetable oils at 240 bar & 30% blend
Nitrogen oxide (NO): Fig 5 shows the emissions of nitrogen oxide (NO) for diesel and the three biodiesels at various injection pressures and loads. The formation of NO depends on the combustion temperature. NO emissions are higher at lower injection pressures; however, NO emissions are marginally lower at 240 bar and 30% blend for PM, whereas for SF and CF, NO decreases by 36.66% and 39.93%. This may be due to the rich air-fuel mixture, which reduces the combustion temperature [Usta N et al.] [28]. The marginal increase in NO is due to the presence of oxygen, which increases the combustion temperature, and it decreases at lower loads. Cheng AS et al. [29] reveal that the flame characteristics of a biodiesel increase NO formation and reduce the soot heat transfer, resulting in an increase in flame temperature.
Fig - 6 Variation of CO2 with respect to various vegetable oils at 240 bar & 30% blend
Carbon dioxide (CO2): Figure 6 shows the comparison of carbon dioxide emissions for the three biodiesel fuels at high injection pressure and full load at constant engine speed. At lower injection pressure and various loads there is a marginal increase in CO2. It is observed that at 240 bar and 30% blend, all three biodiesel fuels show decreased CO2 compared to diesel: the emissions of PM, CF and SF are reduced by 15.85%, 7.31% and 3.65% respectively.
The decrease in CO2 is because of the presence of more oxygen atoms in the vegetable oils.
Fig - 8 Variation of BSFC with respect to various vegetable oils at 240 bar & 30% blend
Brake specific fuel consumption (BSFC): The brake specific fuel consumption of the various refined vegetable oils at different injection pressures and loads, compared with pure diesel, is shown in Fig 8. The amount of fuel supplied to the engine decreased for PM at both lower and higher loads, by 28.57% at maximum load. The BSFC of the CF and SF biodiesels increases by 4.65% at initial load and is the same as diesel at higher loads. This reflects the lower energy content and higher density of the fuel.
Brake thermal efficiency (ηbt): The brake thermal efficiency of the refined vegetable oils is shown in Fig 10. It is observed that, under the average injection pressure test conditions for each fuel, the SF and CF thermal efficiencies decrease gradually, are equal at 3.5 kW, and increase at full load by 15.78% compared with pure diesel. The thermal efficiency of PM is higher than that of pure diesel. The higher injection pressure increases the atomization of the biodiesel through the 3-hole nozzle, giving fine spray characteristics, and the higher oxygen content leads to better combustion, increasing the thermal efficiency.
Fig - 10 Variation of ηbt with respect to various vegetable oils at 240 bar & 30% blend
IV. CONCLUSIONS
The engine was tested at various injection pressures, blends and loads, and the properties of the biodiesel were analyzed. The following conclusions can be drawn from the graphs.
1. For a 30% PM blend with diesel, at 240 bar and full load, CO, HC and CO2 decreased by 31.25%, 55.26% and 15.85% respectively, with a marginal decrease in NO.
2. BSFC increases with the 30% blend at higher injection pressure by 28.57% due to the lower energy content and higher density.
The brake power is increased by 15.28%.
REFERENCES
[1] Dorado M.P, Ballesteros E, Arnal J.M, Gomez J, Lopez F.J, Exhaust emissions from a diesel engine fueled with transesterified olive oil. Fuel, 2003; 82: 1311-1315.
[2] Tashtoush G, Al-Widyan M.I, Al-Shyoukh A.O, Combustion performance and emissions of ethyl esters of a waste vegetable oil in a water-cooled furnace. Appl. Therm. Eng., 2003; 23: 285-293.
[3] Sudhir C.V, Sharma N.Y, Mohanan P, Potential of waste cooking oils as biodiesel feedstock. Emirates Journal for Engineering Research (EJER), 2007; 12: 69-75.
[4] Yusaf T.F, Yousif B.F, Elawad M.M, Crude palm oil fuel for diesel engines: experimental and ANN simulation approaches. Energy, 2011; 36: 4871-4878.
[5] Shereena K.M, Thangaraj T, Biodiesel: an alternative fuel produced from vegetable oils by transesterification. European Journal of Biochemistry (EJBIO), 2009; 5: 67-74.
[6] Mickunaitis V, Pikunas A, Mackoit I, Reducing fuel consumption and CO2 emission in motor cars. Transport, 2007; 22: 160-163.
[7] Mahinpey N, Murugan P, Mani T, Raina R, Analysis of bio-oil, biogas and biochar from pressurized pyrolysis of wheat straw using a tubular reactor. Energy & Fuels, 2009; 5: 2736-2742.
[8] Capunitan J.A, Capareda S.C, Assessing the potential for biofuel production of corn stover pyrolysis using a pressurized batch reactor. Fuel, 2010; 563-572.
[9] Demiral I, Eryazici A, Sensoz S, Bio-oil production from pyrolysis of corncob (Zea mays L.). 2012; 43-49.
[10] Kauffman N, Hayes D, Brown R, A life cycle assessment of advanced biofuel production from a hectare of corn. Fuel, 2011; 11: 3306-3314.
[11] Huang X, Wang Y, Liu W, Bao J, Biological removal of inhibitors leads to improved lipid production in the lipid fermentation of corn stover hydrolysate by Trichosporon cutaneum. Bioresource Technology, 2011; 20: 9705-9709.
[12] Yusuf N.N.A.N, Kamarudin S.K, Yaakub Z, Overview on the current trends in biodiesel production. Energy Conversion and Management, 2011; 52: 2741-2751.
[13] Patil P.D, Deng S, Optimization of biodiesel production from edible and non-edible vegetable oils. Fuel, 2009; 88: 1302-1306.
[14] Tashtoush G, Al-Widyan M.I, Al-Shyoukh A.O, Combustion performance and emissions of methyl esters of a waste vegetable oil in a water-cooled furnace. Applied Thermal Engineering, 2003; 23: 285-293.
[15] Yin X, Ma H, et al., Comparison of four different enhancing methods for preparing biodiesel through transesterification of sunflower oil. Applied Energy, 2012; 320-326.
[16] Onyestyak G, Harnos S, et al., Sunflower oil to green diesel over Raney-type Ni catalyst. Fuel, 2012; 102: 282-288.
[17] Rakopoulos D.C, Heat release analysis of combustion in heavy-duty turbocharged diesel engine operating on blends of diesel fuel with cottonseed or sunflower oils and their biodiesel. Fuel, 2012; 96: 524-534.
[18] Shehata M.S, Abdel Razek S.M, Experimental investigation of diesel engine performance and emission characteristics using jojoba/diesel blend and sunflower oil. Fuel, 2011; 90: 886-897.
[19] Cardone M, Prati M.V, Rocco V, Seggiani M, Senatore A, Vitolo S, Brassica carinata as an alternative oil crop for the production of biodiesel in Italy: engine performance and regulated and unregulated exhaust emissions. Environ. Sci. Technol., 2002; 36: 4656-4662.
[20] Çetinkaya M, Karaosmanoğlu F, A new application area for used cooking oil originated biodiesel: generators. Energy & Fuels, 2005; 19: 645-652.
[21] Krahl J, Munack A, Schröder O, Stein H, Bünger J,
Influence of biodiesel and different designed diesel fuels on the exhaust gas emissions and health effects. SAE Paper 2003-01-3199, 2003.
[22] Ozsezen A.N, Canakci M, Turkcan A, Sayin C, Performance and combustion characteristics of a DI diesel engine fueled with waste palm oil and canola oil methyl esters. Fuel, 2009; 88: 629-636.
[23] Fontaras G, Karavalakis G, Kousoulidou M, Tzamkiozis T, Ntziachristos L, Bakeas E, et al., Effects of biodiesel on passenger car fuel consumption, regulated and non-regulated pollutant emissions over legislated and real-world driving cycles. Fuel, 2009; 88: 1608-1617.
[24] Sahoo P.K, Das L.M, Babu M.K.G, Arora P, Singh V.P, Kumar N.R, et al., Comparative evaluation of performance and emission characteristics of jatropha, karanja and polanga based biodiesel as fuel in a tractor engine. Fuel, 2009; 88: 1698-1707.
[25] Canakci M, Ozsezen A.N, Turkcan A, Combustion analysis of preheated crude sunflower oil in an IDI diesel engine. Biomass and Bioenergy, 2009; 33: 760-767.
[26] Devan P.K, Mahalakshmi N.V, Study of the performance, emission and combustion characteristics of a diesel engine using poon oil-based fuels. Fuel Processing Technology, 2009; 90: 513-519.
[27] Devan P.K, Mahalakshmi N.V, Performance, emission and combustion characteristics of poon oil and its blends in a DI diesel engine. Fuel, 2009; 88: 861-867.
[28] Usta N, Öztürk E, Can Ö, Conkur E.S, Nas S, Çon A.H, et al., Combustion of biodiesel fuel produced from hazelnut soapstock/waste sunflower oil mixture in a diesel engine. Energy Conversion and Management, 2005; 46: 741-755.
[29] Cheng A.S, Mueller C.J, Upatnieks A, Investigation of the impact of biodiesel fuelling on NOx emissions using an optical direct injection diesel engine. International Journal of Engine Research, 2006; 7: 297-318.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-373-381 www.ajer.org Research Paper Open Access
Analysis of First Contact Miscible WAG Displacement: the Effects of WAG Ratio and Flow Rate
Reza Cheraghi Kootiani1, Ariffin Bin Samsuri2
1, 2 (Department of Petroleum Engineering, Faculty of Petroleum and Renewable Energy Engineering, Universiti Teknologi Malaysia, Malaysia)
Abstract: - Miscible WAG injection has been implemented successfully in a number of fields around the world, and there are a number of numerical studies investigating the effect of rate, gravity, slug size and heterogeneity on WAG performance. However, there are very few laboratory studies of WAG displacement efficiency reported in the literature. In this paper we report the results from a series of well-characterized experiments using glass bead packs. The aims of these experiments are:
• To investigate the impact of first contact miscible WAG injection on oil recovery.
• To clarify the physical processes during displacements.
• To provide benchmark data-sets to validate reservoir simulations.
The use of bead packs rather than cores enabled us to observe visually, for the first time to our knowledge, the fluid interactions during each WAG experiment. The relative permeabilities for water-oil and water-solvent were carefully measured along with the permeability and porosity of the pack. A series of displacements were conducted and compared: solvent-oil, water-oil and secondary simultaneous WAG injection at WAG ratios of 1:1, 4:1 and 1:4. These were performed at a range of flow rates to investigate the influence of capillary number on recovery efficiency.
For each experiment, the pressure drop across the pack, the flow rate, the cumulative recovery of displaced fluid and the fraction of displacing fluids in the effluent were monitored, and the water-solvent-oil distributions were video recorded with time. The experimental results are compared with the predictions from conventional finite difference simulation.
Keywords: - WAG injection, WAG displacement, bead packs, permeability.
I. INTRODUCTION
Miscible WAG injection has been implemented successfully in a number of fields around the world [1]. In principle it combines the benefits of miscible gas injection and water flooding by injecting the two fluids either simultaneously or alternately. Miscible gas injection has excellent microscopic sweep efficiency but poor macroscopic sweep efficiency due to viscous fingering and gravity override; furthermore, it is expensive to implement. In contrast, water flooding is relatively cheap and is less subject to gravity segregation and frontal instabilities. However, the residual oil saturation after water flooding is typically of the order of 20%. Injecting water with the miscible gas reduces the instability of the gas-oil displacement process through relative permeability effects [2-3], thus improving the overall sweep efficiency. It also improves the economics by reducing the volume of gas that needs to be injected into the reservoir. The optimum WAG ratio for simultaneous WAG injection in a homogeneous reservoir can be obtained by matching the advance rates of the water-oil and solvent-oil displacement fronts. Stalkup [3] provides a method for calculating the optimum WAG ratio from the relative permeability via a graphical construction. However, this method assumes that the water-oil and water-solvent relative permeabilities are the same. It also neglects the influence of capillary pressure on small-scale displacement efficiency and the fact that relative permeability may alter as a function of rate [4-7].
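The front-matching idea behind the optimum WAG ratio can be illustrated with a toy Buckley-Leverett calculation. This is a deliberate simplification, not Stalkup's actual graphical construction: both displacements are treated as immiscible two-phase floods, and the Corey exponents and endpoint saturations below are hypothetical rather than measured values from this work. Only the viscosities come from the paper's fluid-property table:

```python
import numpy as np

def frontal_speed(mu_disp, mu_oil, swc=0.1, sor=0.2, n=2.0):
    """Dimensionless Buckley-Leverett shock-front speed for a unit
    injection rate, found as the Welge tangent slope from the initial
    condition (S = Swc, f = 0). Corey curves are hypothetical."""
    s = np.linspace(swc + 1e-6, 1.0 - sor, 10000)
    se = (s - swc) / (1.0 - swc - sor)            # normalised saturation
    krw, kro = se**n, (1.0 - se)**n               # Corey relative perms
    f = (krw / mu_disp) / (krw / mu_disp + kro / mu_oil)  # fractional flow
    return float((f / (s - swc)).max())           # tangent slope = shock speed

# Viscosities (cp): water 1.01, paraffin solvent 1.52, ISOPAR V 10.56
v_water = frontal_speed(mu_disp=1.01, mu_oil=10.56)  # water displacing oil
v_solv = frontal_speed(mu_disp=1.52, mu_oil=10.56)   # solvent as a toy proxy

# Matching advance rates, q_w * v_water = q_s * v_solv, fixes the ratio
wag_ratio = v_solv / v_water
print(round(v_water, 2), round(v_solv, 2), round(wag_ratio, 2))
```

The less viscous water front travels faster for the same injection rate, so the matched ratio comes out slightly below 1:1 in this toy setting; the point of the paper is precisely that such idealized constructions can miss rate effects, capillary pressure and fingering.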
Field experience [1] and numerical studies [8] suggest that the optimum WAG ratio may be around 4:1, which is rather larger than the values typically calculated using Stalkup's method. This difference between theory and practice is normally attributed to the combined influences of reservoir heterogeneity and gravity, but may also be due to using inappropriate relative permeability curves in the calculation of the WAG ratio and to the influence of capillary pressure. This is supported by the fact that the majority of WAG displacements have not recovered as much additional oil as was originally predicted [1] by simulation studies, despite the fact that simulation models take into account reservoir heterogeneity and gravity. In this paper we investigate the effect of rate on WAG displacement efficiency using a combination of well-characterized bead-pack experiments and detailed numerical simulation. We show that recovery from WAG is a function of rate as well as WAG ratio. We also show that, at least for the fluid pairs investigated, the measured water-oil and water-solvent relative permeabilities are not the same. This difference is due to increased levels of viscous fingering in the water-solvent displacement; the microscopic relative permeability appears in fact to be the same.
a. EXPERIMENTAL DESIGN AND CONDITIONS
Grade 11 (200-250 μm) Ballotini glass beads were chosen as the porous medium because they enabled a relatively homogeneous sample to be constructed and simple flow visualization techniques to be used. The beads were sealed in a Perspex box of dimensions 23 cm × 10 cm × 0.6 cm. The pack's thickness was determined by the requirement that the flow be essentially two-dimensional so that direct comparison with 2D numerical simulations could be made [9-10]. "Fig.1" shows a plan view of the Perspex model. The model was packed following the method described in Caruana [11-12].
The homogeneity of the pack was checked by performing an M=1 miscible displacement (dyed water displacing undyed water) through it and observing the linearity of the displacement front. Six inlet ports were used to ensure the injected fluid(s) entered the pack over its entire cross section. During WAG experiments the water and paraffin (solvent) were injected simultaneously into alternate ports across the inlet face in an attempt to ensure uniform injection of both fluids. Three types of displacement experiments were performed: miscible, immiscible and WAG. The miscible and immiscible displacements were performed in order to fully characterize the flow properties of the pack and to enable us to assess the efficiency of the WAG recovery process. We used ISOPAR V to represent the oil phase and paraffin to represent the miscible solvent. Both water-paraffin and water-ISOPAR V displacements were performed in order to measure the relative permeabilities for both fluid pairs. The fluid pairs used for each displacement and their properties are summarized in "Tables 1 and 2". Before starting each experiment the model was flooded with carbon dioxide to displace the air and then flooded with distilled water until it was completely saturated. The pack was then flooded with oil (ISOPAR V or paraffin) and driven to irreducible water saturation. Seven WAG displacements were conducted in all. Three displacements were performed at a rate of 5 ml/min to investigate the effect of WAG ratio on recovery, using WAG ratios of 1:4, 1:1 and 4:1. A further four displacement experiments were performed at a WAG ratio of 1:1 using constant rates of 1, 2, 4, and 6 ml/min to investigate the effect of rate on recovery. In addition, three water-paraffin immiscible displacements were performed at rates of 1, 3 and 5 ml/min to determine the effect of rate on relative permeability.
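The capillary number spanned by these injection rates can be estimated as Nc = μv/σ. The sketch below uses the pack cross-section, porosity, water viscosity and water-paraffin interfacial tension reported in the paper's experimental section and tables; taking the interstitial velocity as the characteristic speed is a modelling choice, not something the paper prescribes:

```python
# Capillary number Nc = mu * v / sigma for the water-paraffin pair.
# Pack and fluid data are from the paper; the choice of interstitial
# (rather than Darcy) velocity is an assumption made here.

MU_WATER = 1.01e-3     # Pa*s  (1.01 cp, Table 2)
SIGMA = 35.8e-3        # N/m   (35.8 mN/m water-paraffin IFT, Table 1)
AREA = 0.10 * 0.006    # m^2   (10 cm x 0.6 cm inlet cross-section)
POROSITY = 0.38        # pack porosity

def capillary_number(rate_ml_per_min):
    q = rate_ml_per_min * 1e-6 / 60.0   # volumetric rate, m^3/s
    v = q / (AREA * POROSITY)           # interstitial velocity, m/s
    return MU_WATER * v / SIGMA

for rate in (1, 3, 5):                  # ml/min rates used for rel-perm tests
    print(rate, f"{capillary_number(rate):.2e}")
```

The resulting values sit in the low 1e-6 to 1e-5 range, the regime where rate sensitivity of relative permeability is plausible, consistent with the variation with rate reported below.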
Two water-ISOPAR V displacements and two miscible (paraffin displacing ISOPAR V) displacements were performed at rates of 1 and 5 ml/min respectively, for comparison purposes. Relative permeability was not obtained from the 1 ml/min water-ISOPAR V displacement because it would have taken too long to establish residual oil saturation. All displacements were recorded using a camera and video recorder. The displacing water phase was colored with Lissamine red dye whilst the displacing solvent phase was colored with Waxoline blue dye. The recovery and effluent profiles were also recorded for all displacements. The solvent and oil effluents were distinguished using the refractive index method. This enabled outlet concentrations to be determined with an accuracy of ±2%. JBN analysis [13] was used to determine the relative permeability from the water flooding experiments. The porosity of the pack was found to be 38% and the permeability 29 D. These values are typical for glass bead packs [9-12, 14]. The longitudinal dispersion characteristics used in the simulations were taken from Muggeridge et al [14], who obtained a longitudinal dispersion coefficient of 0.036 cm²/sec for a pack of similar sized beads using the method of Brigham et al [15]. This is also comparable with the value quoted in Christie et al [10]. The transverse dispersion coefficient was chosen to be 0.0012 cm²/sec (giving αL/αT = 30), again by analogy with the experiments of Muggeridge et al [14] and Christie et al [10]. The relative permeabilities obtained from the water-paraffin displacements as a function of rate are given in "Fig.2". It can be seen that there is a significant variation with rate. The relative permeabilities obtained from the water-paraffin and the water-ISOPAR V displacements at a flow rate of 5 ml/min are given in "Fig.3". There is a significant difference in the overall displacement behavior of the two fluid systems, despite the fact that paraffin and ISOPAR V are first contact miscible.
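The JBN bookkeeping used to turn the waterflood measurements into relative permeabilities can be sketched as follows. This is a minimal illustration of the Welge/JBN outlet-face calculation with made-up post-breakthrough data; the full JBN method additionally uses the measured pressure drop (relative injectivity) to recover the absolute kro curve, which is omitted here.

```python
import numpy as np

# Illustrative (made-up) unsteady-state waterflood data, after breakthrough.
# Wi: cumulative water injected, Np: cumulative oil produced (both in pore volumes).
Wi = np.array([0.5, 0.8, 1.2, 2.0, 3.5, 6.0])
Np = np.array([0.42, 0.46, 0.49, 0.52, 0.545, 0.56])
mu_w, mu_o = 1.01, 1.52   # water and paraffin viscosities, cp (Table 2)
Swi = 0.08                # irreducible water saturation

# Outlet oil fractional flow: fo2 = dNp/dWi (Welge).
fo2 = np.gradient(Np, Wi)
fw2 = 1.0 - fo2

# Outlet-face water saturation from the Welge construction.
Sw2 = Swi + Np - fo2 * Wi

# Ratio of relative permeabilities at the outlet face.
# (Absolute kro additionally needs the relative-injectivity term
#  from the measured pressure drop, not shown here.)
krw_over_kro = (fw2 / fo2) * (mu_w / mu_o)

for s, r in zip(Sw2, krw_over_kro):
    print(f"Sw2 = {s:.3f}  krw/kro = {r:.3f}")
```

As the text notes, this construction assumes a stable, one-dimensional displacement; it has no term for viscous fingering, which is why the fingered water-ISOPAR V data yield apparently different relative permeabilities.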
It is normal engineering practice to assume that water-oil and water-gas relative permeabilities are the same when the oil and gas are first contact miscible. We attribute the difference observed in our experiments to the high level of viscous fingering observed in the water-ISOPAR V displacement (see "Fig.6"). This is not accounted for in the JBN analysis used to calculate the relative permeability.

II. FIGURES AND TABLES

Table 1. Fluid pairs used in the displacements.
Displacement | Displaced phase | Displacing phase | Interfacial tension (mN/m)
Miscible, M=1 | Clear water ("oil") | Blue water (solvent) | 0
Miscible, M=7 | ISOPAR V (viscous oil) | Blue paraffin (solvent) | 0
Immiscible, M=1.5 | Paraffin (light oil or solvent) | Red water (water) | 35.8
Immiscible, M=10.6 | ISOPAR V (viscous oil) | Red water (water) | 26.6
Miscible WAG | ISOPAR V (viscous oil) | Red water & blue paraffin | -

Table 2. Properties of fluids used in the displacements.
Fluid | Viscosity (cp)
Water | 1.01
Paraffin (solvent) | 1.52
ISOPAR V (heavy oil) | 10.56

Figure 1. Plan view of experimental setup.
Figure 2. Water-paraffin relative permeability curves obtained from 1 ml/min, 3 ml/min and 5 ml/min displacements.
Figure 3. Relative permeability data obtained from experiments for both a) water-heavy oil and b) water-light oil (solvent).
Figure 4. Comparison of experimental and simulated solvent-oil distributions at different pore volumes of solvent injected for the M=7 miscible displacement. The initial irreducible water saturation is 8%.
Figure 5. Comparison of a) recovery and b) solvent cut curves obtained from experiment and simulation for the M=7 miscible displacement.
Figure 6. Comparison of a) oil recovery obtained from experiment and simulation and b) experimental water cut for the M=10.6 and M=1.5 immiscible displacements.
Figure 7.
Comparison of experimental and simulated water-oil distributions at different pore volumes of water injected for the M=10.6 immiscible displacement. The initial irreducible water saturation is 8%.
Figure 8. Comparison of a) recovery and b) water cut obtained from experiment for the M=1.5 immiscible displacement at different injection rates.
Figure 9. Oil recovery obtained from experiments as a function of WAG ratio compared with recoveries from miscible injection and water-flooding.
Figure 10. Calculation of optimum WAG ratio from fractional flow curves using Stalkup's analysis. The water-solvent fractional flow curve was calculated using the experimental water-ISOPAR V relative permeabilities.
Figure 11. Experimental water (red) and solvent (blue) fronts at different pore volumes of water and solvent injected for simultaneous secondary first contact miscible WAG injection at a WAG ratio of a) 1:1 and b) 4:1. The initial irreducible water saturation is 8%.
Figure 12. Comparison of oil recovery obtained from experiment and simulation for WAG injection at an injection ratio of 1:1 and a flow rate of 5 ml/min. Water-oil relative permeabilities obtained at this rate were used as an input to the simulator.
Figure 13. Oil recovery obtained from experiments at a WAG ratio of 1:1, miscible flooding and water flooding as a function of injection rate.

III. RESULTS

We attempted to simulate all the experiments predictively in order to test our understanding of the physics of these displacements and thus validate our numerical model of WAG recovery processes. All the input data required by the simulator were obtained from careful characterization of the bead-pack properties. There was no history matching involved in this process.
"Figs.4 to 8" compare the experimental results from the miscible (M=7) and immiscible (M=1.5 and M=10.6) displacements with predictions from the simulator. It can be seen that the agreement between simulation and experiment is excellent. This confirms that we have characterized the bead pack and fluid properties correctly for both these displacement types. "Fig.7" indicates that the immiscible water-ISOPAR V displacement is unstable. "Fig.9" shows the recovery profiles obtained from displacements with WAG ratios of 4:1, 1:1 and 1:4. The flow rate was 5 ml/min. The recoveries obtained from miscible injection and water flooding at the same rate are shown for comparison. It can be seen that the optimum WAG ratio is around 1:1. This is the same as predicted by applying Stalkup's [3] analysis ("Fig.10") to the water-oil relative permeability obtained for this flow rate. "Fig.11" compares the fluid distributions observed in the 1:1 and 4:1 WAG ratio displacements. It can be seen that in the 1:1 WAG ratio displacement the water (red) and solvent (blue) fronts are travelling at approximately the same speed, as would be expected from Stalkup's [3] analysis. However, there is still significant fingering of the solvent. This is not expected since, at the optimum WAG ratio, the water should suppress the development of viscous fingers. Nevertheless the recovery predicted by the simulator closely matches the experimental curve ("Fig.12"). The simulation used the water-paraffin (solvent) relative permeability obtained at 5 ml/min and assumed that the water-oil and water-solvent relative permeabilities were identical. This confirms our hypothesis that the experimental measurements of water-solvent and water-oil relative permeability differ only because the JBN analysis does not account for the viscous fingering in the water-ISOPAR V displacement.
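The Stalkup-type construction behind "Fig.10" can be reproduced numerically. The sketch below is illustrative only: it uses assumed Corey-type relative permeability curves rather than the measured water-ISOPAR V data, finds the Welge tangent point on the water-solvent fractional flow curve, and converts it to the WAG ratio at which the water and solvent fronts travel at the same speed. All endpoints and exponents are assumptions, so the resulting ratio will not in general match the ~1:1 value found experimentally here.

```python
import numpy as np

# Sketch of a Stalkup-style optimum-WAG-ratio calculation from a
# water-solvent fractional flow curve.  Corey endpoints/exponents
# below are illustrative assumptions, not measured data.
mu_w, mu_s = 1.01, 1.52      # water and solvent (paraffin) viscosities, cp
Swi, Sor = 0.08, 0.25        # initial water / residual oil saturations (assumed Sor)
krw_end, krs_end = 0.3, 0.8  # endpoint relative permeabilities (assumed)
nw, ns = 2.0, 2.0            # Corey exponents (assumed)

Sw = np.linspace(Swi + 1e-6, 1.0 - Sor, 2000)
Se = (Sw - Swi) / (1.0 - Swi - Sor)            # normalized saturation
krw = krw_end * Se**nw
krs = krs_end * (1.0 - Se)**ns
fw = (krw / mu_w) / (krw / mu_w + krs / mu_s)  # water fractional flow

# Welge tangent from the initial condition (Swi, fw=0): the shock
# front sits where the chord slope fw/(Sw - Swi) is maximized.
slope = fw / (Sw - Swi)
i = np.argmax(slope)
fw_star = fw[i]

# Equal water/solvent front velocities => injected water fraction fw_star,
# i.e. a water:solvent WAG ratio of fw_star/(1 - fw_star).
wag_ratio = fw_star / (1.0 - fw_star)
print(f"tangent point: Sw = {Sw[i]:.3f}, fw = {fw_star:.3f}")
print(f"optimum WAG ratio (water:solvent) ~ {wag_ratio:.2f} : 1")
```

The key point the text makes survives any choice of inputs: the ratio depends entirely on which relative permeability curves are fed in, which is why using fingering-distorted curves shifts the calculated optimum.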
"Fig.13" compares the recovery obtained at 1 PVI as a function of rate for a WAG ratio of 1:1. Results from miscible injection and water flooding are shown for comparison. All data were obtained experimentally. It can be seen that recovery from WAG changes as a function of flow rate, and in fact there appears to be an optimum flow rate at around 3 ml/min. The change in recovery with flow rate from WAG at 1 PVI appears to follow that obtained from water flooding, as there is no appreciable rate dependence in the miscible displacement efficiency. The capillary number for the water-ISOPAR V displacement at 5 ml/min is 0.07, whilst for the water-paraffin displacement it is 1. The capillary number, the ratio of capillary to viscous forces, is defined as

N_ca = σ (k/φ)^(1/2) / (μ u L)

where σ is the interfacial tension (N m⁻¹), k is the permeability (m²), φ is the porosity (fraction), L is the length of the system (m), u is the total velocity (m s⁻¹) and μ is the viscosity (N s m⁻²). Thus the rate-dependent behavior is probably due to capillary pressure effects. This suggests that the rate dependence observed in the water-paraffin relative permeability is probably due to capillary pressure, which has not been taken into account in the JBN analysis. Further simulation work is required to confirm this.

IV. CONCLUSION

We have used a combination of well-characterized experiments and simulations to investigate secondary recovery from simultaneous, first contact miscible WAG injection. The use of bead-packs has enabled the visualization of the WAG displacement process as a function of time. To our knowledge, this is the first time this has been achieved. We observe that:
• The optimum WAG ratio calculated by Stalkup's analysis does produce the most oil; however, the viscous fingering is not suppressed as much as we expected.
• For these experiments there is an optimum injection rate which results in the best recovery from WAG. This appears to be due to capillary pressure effects.
• It is possible that water-solvent and water-oil relative permeabilities are not the same. However, our simulations suggest that the apparent difference is due to the level of viscous fingering present in the water-ISOPAR V displacement, which is not factored out in the JBN analysis.

V. ACKNOWLEDGEMENTS

I would like to thank Universiti Teknologi Malaysia for their continual support during the course of this paper. Special thanks go to my supervisor Prof. Dr. Ariffin Bin Samsuri for his support in the publication of this paper.

REFERENCES
[1] Christensen, J.R., Stenby, E.H. and Skauge, A.: "Review of WAG Field Experience," paper SPE 39883 presented at the 1998 International Petroleum Conference and Exhibition, Mexico, 3-5 March.
[2] Caudle, B.H. and Dyes, A.B.: "Improving Miscible Displacement by Gas-Water Injection," Transactions of the American Institute of Mining, Metallurgical, and Petroleum Engineering, 213 (1958), 281-284.
[3] Stalkup, F.I.: "Miscible Flooding Fundamentals," Society of Petroleum Engineers Monograph Series, 1983.
[4] Heaviside, J. and Black, C.J.J.: "Fundamentals of Relative Permeability: Experimental and Theoretical Considerations," paper SPE 12173 presented at the 58th ATCE, San Francisco, CA, 1983.
[5] Heaviside, J., Brown, C.E. and Gamble, I.J.A.: "Relative Permeability for Intermediate Wettability Reservoirs," paper SPE 16968, 62nd ATCE, Dallas, TX, 1987.
[6] Avraam, D.G. and Payatakes, A.C.: "Flow Regimes and Relative Permeabilities During Steady-State Two-Phase Flow in Porous Media," Journal of Fluid Mechanics 293 (1995), 207-236.
[7] Hughes, R.G. and Blunt, M.J.: "Pore Scale Modeling of Rate Effects in Imbibition," Transport in Porous Media 40(3) (2000), 295-322.
[8] Christie, M.A., Muggeridge, A.H. and Barley, J.J.: "3D Simulation of Viscous Fingering and WAG Schemes," SPE Reservoir Engineering 8 (1993), 19-26.
[9] Christie, M.A.
and Jones, A.D.W.: "Comparison between Laboratory Experiments and Detailed Simulation of Miscible Viscous Fingering," presented at the 4th European Symposium on Enhanced Oil Recovery, Hamburg (1987).
[10] Christie, M.A., Jones, A.D.W. and Muggeridge, A.H.: "Comparison between Laboratory Experiments and Detailed Simulations of Unstable Miscible Displacement Influenced by Gravity," in "North Sea Oil and Gas Reservoirs - II" (Graham & Trotman) 1994, 245-250 (Proc. of the North Sea Oil and Gas Reservoirs Conference, 1989).
[11] Caruana, A.: Immiscible Flow Behaviour within Heterogeneous Porous Media. Ph.D. Thesis, Imperial College, London (1997).
[12] Caruana, A. and Dawe, R.A.: "Experimental Studies of the Effects of Heterogeneities on Miscible and Immiscible Flow Processes in Porous Media," Trends in Chemical Engineering, 3 (1996), 185-203.
[13] Johnson, E.F., Bossler, D.P. and Naumann, V.O.: "Calculations of Relative Permeability from Displacement Experiments," Trans. AIME 216 (1959), 370-372.
[14] Muggeridge, A.H., Jackson, M.D., Al-Mahrooqi, S.H., Al-Marjabi, M. and Grattoni, C.A.: "Quantifying Bypassed Oil in the Vicinity of Discontinuous Shales," paper SPE 77487 presented at the 2002 SPE Annual Technical Conference and Exhibition, Texas, 29 September - 2 October.
[15] Brigham, W.E., Reed, P.W. and Dew, J.N.: "Experiments on Mixing During Miscible Displacement in Porous Media," SPE Journal 1 (March 1961), 1-8.
[16] Christie, M.A.: "High Resolution Simulation of Unstable Flows in Porous Media," SPE Reservoir Engineering 4 (August 1989), 297-304.
[17] Fayers, F.J. and Muggeridge, A.H.: "Extension to Dietz Theory and Behaviour of Gravity Tongues in Slightly Tilted Reservoirs," SPE Reservoir Engineering 5 (1990), 487-494.
[18] Davies, G.W., Muggeridge, A.H. and Jones, A.D.W.: "Miscible Displacements in a Heterogeneous Rock: Detailed Measurements and Accurate Predictive Simulation," paper SPE 22615 presented at the 1991 SPE Annual Technical Conference and Exhibition, Dallas, TX, October 6-9.
[19] Christie, M.A.
and Bond, D.J.: "Detailed Simulation of Unstable Flows in Porous Media," SPE Reservoir Engineering 2 (1987), 514-522.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-144-149 www.ajer.org Research Paper Open Access

Experimental Analysis of Single and Double Pass Smooth Plate Solar Air Collector with and without Porous Media

Dr. Bhupendra Gupta1, Jitendra Kumar Waiker2, Gopal Prasad Manikpuri3, Brahman Singh Bhalavi4
1 Assistant Professor, Jabalpur Engineering College, Jabalpur; 2,3 Student, Master of Engineering (Heat Power), JEC, Jabalpur; 4 Assistant Professor, School of Energy & Environmental Studies, DAVV, Indore, India

Abstract: - This paper presents an experimental study of the effect of the mass flow rate of air on the thermal performance of, and pressure drop through, a flat plate solar air collector. The aim is to analyze the thermal efficiency of a flat plate solar air heater. The measured parameters were the inlet, outlet, absorbing plate and ambient temperatures; measurements were performed at different values of the mass flow rate of air in the flow channel duct. It is concluded that a smooth plate double pass solar air heater is 3-4% more efficient than a single pass solar air heater. Using porous media in the double pass solar air heater raises its efficiency by about 5% over the single pass heater, and by 2-3% over the double pass heater without porous media. Keywords: - Single & double pass solar air heater, porous media, thermal performance, pressure drop.

I. INTRODUCTION

A solar air heater is the simplest form of flat plate solar collector, in which the working medium is air. The principle usually followed is to expose a dark surface to solar radiation so that the radiation is absorbed. A part of the absorbed radiation is then transferred to a fluid such as air. A flat plate collector used for heating air is generally known as a solar air heater. Adit Gaur et al. [1]
presented an experimental investigation of a novel design of double pass solar air heater; the main aim of the double pass arrangement is to minimize the heat loss to ambient from the front cover of the collector and thus improve the thermal efficiency of the system. Ajay Kumar et al. [3] experimentally investigated a solar air heater using porous media and showed the effect of mass flow rate and solar radiation on the efficiency of the solar collector. Ahmad Foudholi et al. [2] carried out analytical and experimental studies on the thermal efficiency of a double pass solar air collector with a finned absorber; the efficiency increases in proportion to the mass flow rate and the solar radiation. Bashria et al. [4] conducted a mathematical simulation to predict the effect of different parameters on system thermal performance and pressure drop in single and double flow modes, with and without porous media. Bashria et al. [5] studied the performance of the double flow solar air heater and compared it with the single pass performance, finding that double pass operation increases the efficiency of the solar collector. Ben Slama et al. [6] studied a collector with baffles: aerodynamics, heat transfer and efficiency. C. Choudhury et al. [7] presented a performance and cost analysis of a two pass solar air heater. Foued Chabane et al. [8] analyzed a flat plate solar air heater experimentally, using a smooth plate at different mass flow rates. Foued Chabane et al. [9] studied the effect of tilt angle on natural convection in a solar collector with longitudinal fins; a series of experimental tests showed that, for a single pass solar air heater with internal fins under the absorber plate, there is a significant increase in the thermal efficiency of the air heater. M. Pradharaj et al. [10] noted that the
performance of a solar air heater without any cover is very poor, and hence at least one cover should be used for better performance. Silvina Gonzalez et al. [11] presented a thermal evaluation and modeling of a double pass solar collector for air heating.

II. COLLECTOR THERMAL EFFICIENCY

The efficiency of a solar collector is defined as the ratio of the useful energy gain to the incident solar energy, that is:

η = (solar energy collected) / (total solar energy striking the collector surface)

η = Qu / (I × Ac)    (1)

where Qu is the useful energy extracted from the collector during the working period in W, I is the solar radiation in W/m², and Ac is the collector area in m². The useful heat gain for an air collector can be expressed as:

Qu = m·cp·(Tout − Tin)    (2)

Figure 1: Solar energy distribution of the solar air collector (1, 2 - inlet; 3 - solar collector; 4, 5 - valve; 6 - outlet; 7, 8 - thermocouple wire; 9 - thermometer).

U-tube manometer: This is a simple type of manometer. It consists of a glass tube bent into a U-shape, one end of which is connected to the point at which the pressure is to be measured while the other end remains open to the atmosphere. The tube generally contains mercury or any other liquid whose specific gravity is greater than that of the liquid whose pressure is to be measured. The manometer also has a graduated scale (1 mm least count) for measuring the difference in liquid levels. The orifice flow rate is:

Q = C·Ao·√(2(P1 − P2)/ρ)    (3)

C = Cd / √(1 − (d2/d1)⁴)

where Q is the volumetric flow rate (at any cross section), Cd is the coefficient of discharge, C is the orifice flow coefficient, Ao is the orifice area, d1 is the diameter of the pipe (m), d2 is the diameter of the orifice (m), P1 is the fluid upstream pressure, P2 is the fluid downstream pressure, and ρ is the air density in kg/m³.

III. EXPERIMENTAL SET UP

The experimental setup shown in Figure 3.1 has been used to estimate the efficiency of the flat plate air heater at varying mass flow rates and conditions. Plywood of 10 mm thickness was used to make the cuboidal frame of the solar collector.
The internal dimensions were 1 m × 0.5 m × 0.15 m. The top surface of the collector was left open for the glass cover. The installation angle of the collector was 24° from horizontal. A glazed glass sheet of 1.02 m × 0.52 m × 5 mm was used as the single glass cover for the apparatus. A thermocol sheet of 0.9 m × 0.5 m × 2.5 cm was secured to the bottom surface of the wooden frame by nails and glue. The absorber plate had an absorption coefficient α = 0.95; the transparent cover had a transmittance τ = 0.9 and the glass cover an absorptance αg = 0.05. The inlet was a 10 cm hole drilled on the side surface near the bottom. For the outlet section, 3 holes each of 1 inch diameter were drilled on the adjacent surface near the top. The orifice was of 12 mm diameter and the pipe diameter was 1 inch. A U-tube manometer was used to measure the pressure difference. Glass wool was used as the porous medium for the experiment. Calculations are based on a solar intensity of 900 W/m².

Figure 3.1: Experimental set-up.

Table No. 1: Component list
01. Solar collector area: 1 m (length) × 0.5 m (width)
02. Glass: 1.02 m × 0.52 m × 5 mm
03. Thermocol: 0.9 m × 0.5 m × 2.5 cm
04. Internal dimension of plywood: 1 m × 0.5 m × 0.15 m
05. Outlet pipe diameter: 12 mm
06. Fan: 220 V AC, 0.24 A

IV. RESULTS AND DISCUSSION

The performance of the double pass flow solar air heater was studied and compared with the single pass. From this analysis it is concluded that the double pass solar air heater is more efficient than the single pass air heater. It can be seen that the efficiency of the air heater depends strongly on the air flow rate. The air flow rate reached up to 1.336 kg/hr in single flow mode and up to 1.939 kg/hr in double flow mode. The results clearly show that the double flow mode is 3-4% more efficient than the single one.
Thus, efficiency increases in double pass mode due to heat removal from two passes as compared to a single pass. If porous media is used in the double flow mode, the efficiency increases by about 6% compared to the single flow mode. If porous media is not used, the double flow efficiency is 2-3% higher than that of the single flow mode without porous media. Hence, the use of porous media increases the heat transfer area, which contributes to higher thermal efficiency. Figure 4.1 shows the efficiency variation with mass flow rate for single pass mode without porous media, and figure 4.2 shows the corresponding pressure drop variation. Figure 4.3 shows the efficiency variation with mass flow rate for single pass mode with porous media, and figure 4.4 shows the corresponding pressure drop variation. Figure 4.5 shows the efficiency variation with mass flow rate for double pass mode with porous media, and figure 4.6 shows the corresponding pressure drop variation. Finally, figures 4.7 and 4.8 collect the efficiency and pressure drop variations with mass flow rate for all conditions (single pass and double pass modes); they show that the pressure drop increases with mass flow rate, and that efficiency also increases with increasing mass flow rate.
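Equations (1)-(3) from Section II can be chained into a single worked calculation: manometer reading → orifice flow rate → mass flow rate → useful heat gain → efficiency. In the sketch below the geometry and solar intensity follow the paper, but the temperatures, manometer reading and discharge coefficient Cd are illustrative assumptions.

```python
import math

# Worked example of equations (1)-(3): orifice flow from the U-tube
# manometer reading, then useful heat gain and collector efficiency.
# Temperatures, manometer reading and Cd are illustrative assumptions;
# geometry and solar intensity follow the paper.
rho_air = 1.2          # air density, kg/m^3
rho_w = 1000.0         # manometer fluid (water), kg/m^3
g = 9.81               # m/s^2
d2 = 0.012             # orifice diameter, m (12 mm)
d1 = 0.0254            # pipe diameter, m (1 inch)
Cd = 0.6               # assumed discharge coefficient
A_c = 1.0 * 0.5        # collector area, m^2
I = 900.0              # solar intensity, W/m^2
cp = 1005.0            # specific heat of air, J/kg K
T_in, T_out = 305.0, 325.0   # illustrative inlet/outlet temperatures, K
h_mm = 1.5             # illustrative manometer reading, mm of H2O

# Orifice flow coefficient, eq. (3): C = Cd / sqrt(1 - (d2/d1)^4)
C = Cd / math.sqrt(1.0 - (d2 / d1) ** 4)

dP = rho_w * g * (h_mm / 1000.0)               # pressure drop, Pa
A_o = math.pi * d2 ** 2 / 4.0                  # orifice area, m^2
Q = C * A_o * math.sqrt(2.0 * dP / rho_air)    # volumetric flow, m^3/s
m_dot = rho_air * Q                            # mass flow, kg/s

Qu = m_dot * cp * (T_out - T_in)   # useful heat gain, eq. (2), W
eta = Qu / (I * A_c)               # collector efficiency, eq. (1)
print(f"mass flow = {m_dot*3600:.2f} kg/hr, Qu = {Qu:.1f} W, eta = {eta:.3f}")
```

With these assumed inputs the mass flow comes out near 1.5 kg/hr, i.e. within the 1.3-1.9 kg/hr range of flow rates reported above.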
Figure 4.1: Efficiency variation with mass flow rate for single pass mode, non porous.
Figure 4.2: Pressure drop variation with mass flow rate for single pass mode, non porous.
Figure 4.3: Efficiency variation with mass flow rate for single pass mode with porous media.
Figure 4.4: Pressure drop variation with mass flow rate for single pass mode with porous media.
Figure 4.5: Efficiency variation with mass flow rate for double pass mode with porous media.
Figure 4.6: Pressure drop variation with mass flow rate for double pass mode with porous media.
Figure 4.7: Efficiency variation with mass flow rate (single pass non porous, single pass with porous, double pass non porous, double pass with porous).
Figure 4.8: Pressure drop variation with mass flow rate (single pass non porous, single pass with porous, double pass non porous, double pass with porous).

V.
CONCLUSIONS

An experimental analysis was done to predict the effect of different parameters on thermal performance and pressure drop for smooth plate single pass and double pass solar air heaters, with and without porous media. It is found that the thermal efficiency depends strongly on the mass flow rate: it increases with increasing mass flow rate, but the pressure drop also increases. The double flow mode is more efficient than the single flow mode, and the use of porous media increases the system efficiency and the outlet temperature.

Nomenclature
Ac - Area of collector that absorbs solar radiation, m²
Cd - Coefficient of discharge of orifice
Cp - Specific heat of air at constant pressure, J/kg K
f - Friction factor
FR - Heat removal factor
I - Solar radiation, W/m²
Ta - Ambient air temperature, K
Tc - Cover temperature, K
Tpr - Porous media temperature, K
W - Collector width, m
h - Fluid heat transfer coefficient, W/m² K
L - Collector length, m
m - Collector flow rate, kg/sec
ΔP - Pressure drop across the duct, mm of H2O
Qu - Rate of solar energy gain, W
Greek symbols
η - Collector thermal efficiency
ρ - Air density, kg/m³

REFERENCES
[1] Adit Gaur et al.: "An Experimental Investigation of a Novel Design of Double Pass Solar Air Heater," International Journal of Chem Tech Research, vol. 5, pp. 1036-1040.
[2] Ahmad Foudholi et al. (2011): "Analytical and Experimental Studies on Thermal Efficiency of the Double Pass Solar Air Collector with Finned Absorber," American Journal of Applied Sciences, vol. 8, pp. 716-723.
[3] Ajay Kumar Kapardar et al. (2012): "Experimental Investigation of Solar Air Heater Using Porous Medium," IJMET, pp. 387-396.
[4] Bashria AbdrubAlrasoul Yousef (2005): "Prediction Study on Single and Double Flow Solar Air Heater," Suranaree J. Sci. Technol., pp. 123-136.
[5] Bashria et al. (2007): "Analysis of Single and Double Passes V-Groove Solar Collector with and without Porous Media," International Journal of Energy and Management, issue 2, volume 1.
[6] Ben Slama, R. et al.:
"Air Solar Collector with Baffles: Aerodynamics, Heat Transfer and Efficiency," RERIC International Energy Journal, vol. 18.
[7] C. Choudhury (1999): "Performance and Cost Analysis of Two Pass Solar Air Heater," Elsevier Science Ltd.
[8] Foued Chabane et al. (2013): "Thermal Efficiency Analysis of a Single Flow Solar Air Heater with Different Mass Flow Rates in a Smooth Plate," Frontiers in Heat and Mass Transfer.
[9] Foued Chabane et al. (2012): "Effect of the Tilt Angle of Natural Convection in a Solar Collector with Internal Longitudinal Fins," International Journal of Science and Engineering Investigations, vol. 1.
[10] M. Pradharaj et al.: "Review on Porous and Non Porous Flat Plate Air Collector with Mirror Enclosure."
[11] Silvina Gonzalez et al. (2012): "Thermal Evaluation and Modeling of a Double-Pass Solar Collector for Air Heating," Conference: Opportunities, Limits & Needs Towards an Environmentally Responsible Architecture, Lima, pp. 7-9.
American Journal of Engineering Research (AJER) e-ISSN: 2320-0847 p-ISSN: 2320-0936 Volume-02, Issue-12, pp-437-440 www.ajer.org Research Paper Open Access

Effect of Manufactured Sand on Durability Properties of Concrete

Nimitha Vijayaraghavan1, Dr. A.S. Wayal2
1, 2 (Department of Civil and Environmental Engineering, V.J.T.I., India)

Abstract: - The volume of concrete consumed by the construction industry is very large. In India, conventional concrete contains natural sand obtained from riverbeds as fine aggregate. In recent times, with a boost in construction activities, there has been a significant increase in the consumption of concrete, causing the dwindling of natural sand. This has led to several environmental issues, and the government has therefore imposed a ban on the unrestricted use of natural sand, resulting in scarcity and a significant rise in its cost. An alternative to river sand has therefore become the need of the hour. The promotional use of manufactured sand will conserve natural resources for the sustainable development of concrete in the construction industry. Here, various durability tests were conducted on concrete. From the test results, it is observed that with increasing proportion of manufactured sand the penetration of water into concrete decreases. Keywords: - Durability, Natural Sand, Manufactured Sand, Water Permeability Test, Rapid Chloride Penetration Test.

I. INTRODUCTION

Durability of concrete is defined as its ability to withstand weathering action, chemical attack or any other process of deterioration. A durable concrete requires little or no maintenance and retains its original form, quality and serviceability when exposed to its environment, except in harsh or highly aggressive environments. With increasing pollution levels it has become necessary to check the durability of concrete.
Concrete mix design procedures consider only the compressive strength of concrete. Although compressive strength is to a great extent a measure of the durability of concrete, it is not always true that a strong concrete is a durable concrete. In order to predict the durability of concrete, the Rapid Chloride Penetration Test and the Water Permeability Test were conducted.

II. LITERATURE REVIEW

P.M. Shanmugavadivu et al. have shown from the water permeability test that permeability reduces with increasing proportion of manufactured sand. This may be due to fewer voids in concrete with manufactured sand, indicating better bonding between the aggregate and cement paste. Results of the rapid chloride penetration test show that chloride ion penetrability is high for concrete with natural sand, while it is reduced using manufactured sand. They attribute this to the coarser grain size of manufactured sand, resulting in better packing of particles. They suggest that 70% manufactured sand in concrete is the optimum replacement for natural sand. Experimental results of M.G. Shaikh et al. suggest that the sharp edges of the particles in artificial sand provide a better bond with the cement than the rounded particles of natural sand. Concretes made using artificial sand and natural sand are both moderate in chloride permeability.

III. EXPERIMENTAL INVESTIGATION

3.1. Cement
The materials used are Ordinary Portland cement of Grade 53, natural and manufactured sand obtained from a local supplier, and 20 mm and 10 mm down-size coarse aggregate. The properties of the materials are shown in the following tables; the cement properties measured were fineness (m²/kg), initial and final setting times (minutes),
standard consistency and soundness.

Table 1: Physical properties of cement
Property | Result | Requirement
Fineness | 1.63% | <10%
Initial setting time | 135 min | minimum 30 min
Final setting time | 315 min | maximum 10 hrs
Standard consistency | 30% | ----
Soundness | 5.53 mm | maximum 10 mm

The results show that the properties of the cement are within the permissible limits.

3.2. Fine aggregates
Sieve analysis of the natural and manufactured sand shows that the fineness modulus of manufactured sand is greater than that of natural sand, i.e. the fine aggregate changes from zone III to zone I. This indicates that the fine aggregate is coarser in the case of manufactured sand.

3.3. Coarse aggregates
Crushed angular aggregate with a maximum grain size of 20 mm and down-graded was used, having a bulk density of 1.38 g/cc. The specific gravity and fineness modulus were found to be 2.82 and 8 respectively.

3.4. Mix proportions and mix details
The concrete mix in this investigation was designed as per the specified guidelines. Table 2 shows the mix proportions of the concrete. Concrete mixtures with manufactured sand replacing natural sand in proportions ranging from 0% to 100% were cast.

Table 2: Mix proportion details (relative to cementitious content)
Materials | 100% natural sand (0% manufactured sand) | 50% natural sand + 50% manufactured sand | 100% manufactured sand (0% natural sand)
Cement + fly ash + micro silica | 1 | 1 | 1
Coarse aggregate, 20 mm | 1.69 | 1.41 | 0.88
Coarse aggregate, 10 mm | 1.56 | 1.3 | 0.81
Fine aggregate | 3.25 | 1.79 | 1.69
Water | 0.28 | 0.28 | 0.28

3.5. Testing details
3.5.1. Rapid Chloride Penetration Test
Concrete cubes of 150 mm × 150 mm were cast and cured for a period of 28 days. A sample of 100 mm diameter and 50 mm thickness is subjected to a direct current of 60 volts across two faces. The specimen is placed between two chambers, one with NaOH (0.3 N) and the other with sodium chloride (3%) solution. The current passing through the specimen is monitored regularly over six hours.
The total charge passed through the specimen, the product of time in seconds and current in amperes, is calculated in coulombs.

Table 3: Rapid chloride penetration test results
Mix proportion                               Coulombs   Permeability
100% natural sand                            5024       High
50% natural sand + 50% manufactured sand     3276       Moderate
100% manufactured sand                       798        Very low

Figure 1: Graph showing the charge passed (coulombs) for the various mix proportions of concrete.

3.5.2. Water Permeability Test
Concrete cubes of 150 mm x 150 mm were cast and cured for a period of 28 days. The surfaces of the cubes are wiped dry and the cubes are placed in the water permeability test apparatus as per DIN 1048. The compressor is started and a pressure of 0.5 MPa is applied for a period of 72 hours. The specimens are later removed and split open. The actual penetration of water into each specimen was measured at three different points from the edges of the split cube, and the average was found.

Table 4: Water penetration results
Mix proportion                               Penetration of water (mm)   Average (mm)
100% manufactured sand                       10, 32, 11                  17
50% manufactured sand + 50% natural sand     32, 18, 10                  20
100% natural sand                            51, 34, 54                  46

[Graph: average water penetration values for the three mix proportions]

3.6. Results and Discussion
Manufactured sands are made by crushing aggregate to sizes appropriate for use as a fine aggregate. The crushing process gives manufactured sand particles irregular shapes. Due to the irregular shape of the particles there is better packing among them, thereby reducing the voids in the concrete.
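The charge integration and the chloride-ion penetrability classification used for Table 3 can be sketched as follows. This is a minimal illustration: the five-minute reading interval and the classification bands (taken from ASTM C1202) are assumptions, not details given in the paper.

```python
def rcpt_charge(currents_amps, interval_s=300):
    """Total charge in coulombs passed during the 6-hour test,
    by trapezoidal integration of periodic current readings."""
    total = 0.0
    for i1, i2 in zip(currents_amps, currents_amps[1:]):
        total += 0.5 * (i1 + i2) * interval_s
    return total

def classify(coulombs):
    """Chloride-ion penetrability bands (assumed per ASTM C1202)."""
    if coulombs > 4000:
        return "High"
    if coulombs > 2000:
        return "Moderate"
    if coulombs > 1000:
        return "Low"
    if coulombs > 100:
        return "Very low"
    return "Negligible"

# The measured charges from Table 3 fall into the reported bands:
for mix, q in [("100% natural sand", 5024),
               ("50% NS + 50% MS", 3276),
               ("100% manufactured sand", 798)]:
    print(f"{mix}: {q} C -> {classify(q)}")
```

With the Table 3 values this reproduces the High / Moderate / Very low ratings reported above.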
Results of the experimental studies show that resistance to penetration, as demonstrated by both the rapid chloride penetration test and the water permeability test, increases with an increasing proportion of manufactured sand in concrete. The results show that river sand can be fully replaced by manufactured sand. The use of manufactured sand in the construction industry helps to prevent unnecessary damage to the environment and provides optimum exploitation of resources.

IV. REFERENCES
[1] P.M. Shanmugavadivu and R. Malathy (2011), "Durability Properties of Concrete with Natural Sand and Manufactured Sand", International Conference on Science and Engineering.
[2] M.G. Shaikh and S.A. Daimi (2011), "Durability studies of concrete made by using artificial sand with dust and natural sand", International Journal of Earth Sciences and Engineering, Volume 04, pp. 823-825.
[3] Nimitha Vijayaraghavan and A.S. Wayal (2013), "Effects of manufactured sand on compressive strength and workability of concrete", International Journal of Structural and Civil Engineering Research, Volume 02, pp. 228-232.

American Journal of Engineering Research (AJER), 2013
e-ISSN: 2320-0847, p-ISSN: 2320-0936
Volume-02, Issue-12, pp-203-213
www.ajer.org
Research Paper, Open Access

Thermal Modeling and Efficiency of Solar Water Distillation: A Review
Dr. Bhupendra Gupta1, Tonish Kumar Mandraha2, Pankaj J. Edla3, Mohit Pandya4
1 Assistant Professor, Jabalpur Engineering College, Jabalpur
2, 3 Research Scholar, Master of Engineering, Heat Power, JEC, Jabalpur
4 Assistant Professor, JNCT, Bhopal, India

Abstract: - Water is the most important requirement for sustaining life on earth. In spite of its abundant availability, only a small percentage (approximately 1%) can be used for drinking. Solar water distillation is a non-toxic and promising device that purifies water using a renewable solar energy source. Its efficiency can be enhanced by increasing the evaporation rate, which is a combined effect of the solar radiation, glass cover temperature, water contamination density and base plate absorptivity, and by providing additional heat through a solar water preheating system. Various investigators use thermal modeling techniques to analyze the performance of solar water distillation devices on the basis of the above factors and report optimum values for enhancing efficiency. The present paper is a tabulated review of these governing parameters and of the modeling equations available, for suitable selection.
Keywords: - Solar water distillation, Solar energy, Active and passive techniques, Thermal modeling, Heat and mass transfer relations.

I. INTRODUCTION
Water is a gift of nature, but around 97% of the water in the world is in the oceans, approximately 2% is at present stored as ice in the polar regions, and only 1% is fresh water available for the needs of plants, animals and human life. This available water is decreasing continuously.
This 1% of water is available in rivers, lakes and underground reservoirs. Ground water has also been polluted by industry, agriculture and population growth in recent years. Polluted water causes severe illnesses, the so-called "water-borne diseases". The term "water-borne diseases" is reserved largely for infections that are transmitted predominantly through contact with, or consumption of, infected water; in India, nearly 70-75% of diseases are linked to infected water. The world is facing a scarcity of fresh water, which has become a major problem and a global challenge. Therefore, a water purification technology is required to meet the water demand all over the world. Solar distillation (SD) is one of the solutions for purifying brackish (more saline than fresh) and underground water.

Water salinity based on dissolved salts
Fresh water      < 0.05%
Brackish water   0.05-3%
Saline water     3-5%
Brine            > 5%

It is a highly promising and environmentally friendly technology. It produces distilled water which can be used as potable water for drinking and other purposes. The performance of solar distillation depends upon the design of the solar still and the operating and climatic conditions.
Solar distillation is a relatively simple treatment for brackish (i.e. containing dissolved salts) water supplies. Distillation is one of many processes that can be used for water purification and can use any heating source; solar energy is a freely available low-grade energy source. In this process, water is evaporated using the energy of the sun, and the vapor then condenses as pure water. This process removes salts and other impurities. The solar power reaching the top of the atmosphere is 10^17 W, whereas the solar power at the earth's surface is 10^16 W. The total worldwide power demand for all the needs of civilization is 10^13 W.
Therefore, the sun gives us 1000 times more power than we need. If we could use just 5% of this energy, it would be 50 times what the world requires.

II. PRINCIPLES OF SOLAR WATER DISTILLATION
The basic principles of solar water distillation are simple yet effective, as distillation replicates the way nature makes rain. The sun's energy heats water to the point of evaporation. As the water evaporates, water vapor rises and condenses on the glass surface for collection. This process removes impurities such as salts and heavy metals and eliminates microbiological organisms. The end result is water cleaner than the purest rainwater.
Figure 1: Simple solar water distillation process

III. DESIGN PRINCIPLE OF A SINGLE-SLOPE SOLAR WATER DISTILLATION SYSTEM
Design requirements: [1]
 Distills water so that it is drinkable
 Has a maximum yield of distilled water
 Easy to build and repair (minimize the amount of maintenance needed)
 Reliable and easy to clean
 Produces minimal waste at end of life
 Can withstand harsh weather conditions and degradation by heat and UV
 Easy to use (no need to disassemble to put the dirty water in and get the fresh water out)
 Constructed with locally available, natural building materials
 Lightweight for ease of handling and transportation
 Has an effective life of 10 to 20 years
 Requires no external power source
 Also serves as a rainfall catchment surface
 Able to withstand prevailing winds
 Inexpensive / low cost
Problems and justification:
 Dust on the transparent cover
 Algae and scaling on the inner black surface; daily flushing may help
 Drying out ruins the still: the white salt dries onto the black surface, the glass heats up and becomes brittle, and the glass surface changes so that the condensate forms as droplets instead of a film, which decreases performance
IV.
CLASSIFICATION OF SOLAR DISTILLATION SYSTEMS
1) Active distillation
a) High-temperature distillation
 Auxiliary heating
 Collector/concentrator panel heating
 PV-integrated collector (hybrid)
b) Normal-temperature distillation
2) Passive distillation
a) High temperature range (> 60 °C)
 Horizontal basin solar still
 Inclined basin solar still
 Regenerative effect solar still
 Vertical solar still
 Spherical condensing solar still
b) Normal temperature range (< 60 °C)
 Conventional solar still
• Single slope solar still
• Double slope solar still
• Symmetrical
• Non-symmetrical
 New designs of solar still
 Inclined solar still
Active solar stills
In an active solar still, extra thermal energy is fed to the water in the basin to create a faster rate of evaporation. A broad classification of solar stills is depicted above. The active solar stills are further classified as:
 High-temperature distillation solar stills: hot water is fed into the basin from a solar collector panel.
 Pre-heated water application solar stills: hot water is fed into the basin at a constant flow rate.
 Natural production solar stills: hot water is fed into the basin once a day.
Passive solar stills
In a passive still, distillation takes place purely by direct sunlight. The single slope and double slope solar stills are the conventional low-temperature solar stills, operating at temperatures below 60 °C. Of the two, the single slope solar still is more versatile and efficient than the double slope solar still.

V. HEAT TRANSFER MODES IN A SOLAR WATER DISTILLATION SYSTEM
Heat transfer in a solar still is mainly classified into two categories, internal and external heat transfer. The details of the various heat transfers in a solar still are shown in Figure 2.
Figure 2: Energy flow diagram of a single slope solar still.
5.1 Internal heat transfer
In a solar still, internal heat is transferred by evaporation, convection and radiation. The convective and evaporative heat transfers take place simultaneously and are independent of the radiative heat transfer.
5.1.1 Radiative heat transfer: The view factor is taken as unity because the inclination of the glass cover in the solar still is small. The rate of radiative heat transfer from water to glass is given by
qr,w-g = hr,w-g (Tw - Tgi) (1)
where hr,w-g is the radiative heat transfer coefficient between the water and the glass,
hr,w-g = εeff σ [(Tw + 273)² + (Tgi + 273)²] [Tw + Tgi + 546] (2)
and εeff is the effective emissivity between the water and the glass cover,
εeff = 1 / [(1/εg) + (1/εw) - 1] (3)
5.1.2 Convective heat transfer: Natural convection takes place across the humid air inside the basin due to the temperature difference between the water surface and the inner surface of the glass cover. The rate of convective heat transfer from water to glass is given by [3]
qc,w-g = hc,w-g (Tw - Tgi) (4)
where hc,w-g, the convective heat transfer coefficient, depends on the temperature difference between the evaporating and condensing surfaces, the physical properties of the fluid, the flow characteristics and the condensing cover geometry. Various models have been developed to find the convective heat transfer coefficient. One of the oldest was developed by Dunkle [4]; his expressions have certain limitations, which are listed below.
I. Valid only for a normal operating temperature (≈ 50 °C) in a solar still and an equivalent temperature difference of ΔT = 17 °C.
II. Independent of the cavity volume, i.e., the average spacing between the condensing and evaporating surfaces.
III. Valid only for upward heat flow in a horizontal enclosed air space, i.e., for parallel evaporating and condensing surfaces.
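As a numerical sketch of the radiative relations, Eqs. (1)-(3) can be coded directly. The Stefan-Boltzmann constant σ = 5.67 × 10⁻⁸ W/m² K⁴ is standard; the emissivity values 0.96 for water and 0.88 for glass are illustrative assumptions, not values given in the paper.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W/m^2 K^4)

def eps_eff(eps_w, eps_g):
    """Effective emissivity between water and glass cover, Eq. (3)."""
    return 1.0 / (1.0 / eps_g + 1.0 / eps_w - 1.0)

def h_rad_wg(Tw, Tgi, eps_w=0.96, eps_g=0.88):
    """Radiative heat-transfer coefficient, Eq. (2); temperatures in deg C."""
    return (eps_eff(eps_w, eps_g) * SIGMA
            * ((Tw + 273.0) ** 2 + (Tgi + 273.0) ** 2)
            * (Tw + Tgi + 546.0))

def q_rad_wg(Tw, Tgi):
    """Rate of radiative heat transfer from water to glass, Eq. (1)."""
    return h_rad_wg(Tw, Tgi) * (Tw - Tgi)

# At typical still temperatures the coefficient is of order 6 W/m^2 K:
h = h_rad_wg(Tw=50.0, Tgi=35.0)
```

Note that the temperatures enter Eq. (2) in kelvin (hence the +273 offsets) even though the coefficient multiplies a Celsius temperature difference in Eq. (1).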
The convective heat transfer coefficient is expressed as [4]
hc,w-g = 0.884 (ΔT′)^(1/3) (5)
where
ΔT′ = (Tw - Tgi) + (Pw - Pgi)(Tw + 273) / (268.9 × 10³ - Pw) (6)
Pw = exp[25.317 - 5144/(273 + Tw)] (7)
Pgi = exp[25.317 - 5144/(273 + Tgi)] (8)
Chen et al. [5] developed a model for the free convection heat transfer coefficient of the solar still over a wide range of Rayleigh numbers (3.5 × 10³ < Ra < 10⁶):
hc,w-g = 0.2 Ra^0.26 kv/xv (9)
Zheng et al. [6] developed a modified Rayleigh number, using the Chen et al. [5] model, for evaluating the convective heat transfer coefficient:
hc,w-g = 0.2 (Ra′)^0.26 kv/xv (10)
where
Ra′ = (xv³ ρv g / µv αv) ΔT″ (11)
ΔT″ = (Tw - Tgi) + (Pw - Pgi)(Tw + 273.15) / [Ma Pt/(Ma - Mwv) - Pw] (12)
The convective heat transfer from the basin to the water is given by [7]
qw = hw (Tb - Tw) (13)
and the convective heat transfer coefficient between the basin and the water is given by
hw = (Kw/Xw) C (Gr · Pr)^n (14)
where C = 0.54 and n = 1/4.
5.1.3 Evaporative heat transfer: The performance of a solar still depends on the evaporative and convective heat transfer coefficients, and various researchers have developed mathematical relations to evaluate them. The general equation for the rate of evaporative heat transfer from water to glass is given by [3]
qe,w-g = he,w-g (Tw - Tgi) (15)
where he,w-g is the evaporative heat transfer coefficient, given by Dunkle [4] as
he,w-g = 16.273 × 10⁻³ hc,w-g (Pw - Pgi)/(Tw - Tgi) (16)
Malik et al.
[8] developed a correlation based on the Lewis relation for the low operating temperature range, expressed as
he,w-g = 0.013 hc,w-g (17)
The total heat transfer coefficient from water to glass is defined as
ht,w-g = hc,w-g + he,w-g + hr,w-g (18)
and the rate of total heat transfer from water to glass as
qt,w-g = qc,w-g + qe,w-g + qr,w-g (19)
qt,w-g = ht,w-g (Tw - Tgi) (20)
5.2 External heat transfer
The external heat transfer in a solar still is mainly governed by conduction, convection and radiation processes, which are independent of each other.
5.2.1 Top loss heat transfer coefficient: Heat is lost from the outer surface of the glass to the atmosphere through convection and radiation. The glass and atmospheric temperatures are directly related to the performance of the solar still, so the top loss must be considered in the performance analysis. The temperature of the glass cover is assumed uniform because of its small thickness. The total top loss heat transfer coefficient is defined as
ht,g-a = hr,g-a + hc,g-a (21)
qt,g-a = qr,g-a + qc,g-a (22)
qt,g-a = ht,g-a (Tgo - Ta) (23)
The radiative heat transfer from glass to atmosphere is given by [9]
qr,g-a = hr,g-a (Tgo - Ta) (24)
with the radiative heat transfer coefficient
hr,g-a = εg σ [(Tgo + 273)⁴ - (Tsky + 273)⁴] / (Tgo - Ta) (25)
where Tsky = Ta - 6.
The convective heat transfer from glass to atmosphere is given by [10]
qc,g-a = hc,g-a (Tgo - Ta) (26)
with the convective heat transfer coefficient
hc,g-a = 2.8 + 3.0 v (27)
The total internal heat transfer coefficient ht,w-g and the conductive heat transfer coefficient of the glass (Kg/Lg) combine in series as
Uwo = [(1/ht,w-g) + (Lg/Kg)]⁻¹ (28)
The overall top loss coefficient Ut from the water surface to the ambient through the glass cover is then
Ut = Uwo ht,g-a / (ht,g-a + Uwo) (29)
5.2.2 Side and bottom loss heat transfer coefficient: Heat is transferred from the water in the basin to the atmosphere through
the insulation and subsequently by convection and radiation from the side and bottom surfaces of the basin. The rate of conductive heat transfer from the basin liner to the atmosphere is given by [11]
qb = hb (Tb - Ta) (30)
The heat transfer coefficient from the basin liner to the atmosphere is given by [11]
hb = [Li/Ki + 1/ht,b-a]⁻¹ (31)
where
ht,b-a = hc,b-a + hr,b-a (32)
There is no wind velocity at the bottom of the solar still, so v = 0 is substituted to obtain this heat transfer coefficient. The bottom loss heat transfer coefficient from the water mass to the ambient through the bottom is expressed as
Ub = [1/hw + 1/hb]⁻¹ (33)
The conductive heat lost through the vertical walls and through the insulation of the still is expressed as
Us = (Ass/As) Ub (34)
The total side loss heat transfer coefficient Us can be neglected, because the side area of the still (Ass) is very small compared with the basin area (As). The overall heat transfer coefficient from the water to the ambient through the top, bottom and sides of the still is expressed as [11]
ULS = Ut + Ub (35)
5.3 Efficiency calculation
The overall thermal efficiency of the solar still is
η = [Σ mew L / (Σ I(t)c Ac × 3600 + Σ I(t)s As × 3600)] × 100% (36)
The hourly yield is given by
mew = [he,w-g (Tw - Tgi)/L] × 3600 × As (37)
and the total daily yield by summing the hourly yields over the day,
Mew = Σ (i = 1 to 24) mew,i (38)

VI. LITERATURE REVIEW
6.1 Fedali Saida and Bougriou Cherif (2010) present the thermal analysis of a passive solar still. Mathematical equations for the water, absorber, glass and insulator temperatures, the yield and the efficiency of a single slope basin have been derived. The analysis is based on the basic energy balance for the solar still, and a computer model has been developed to predict its performance. The governing equations of operation of the solar still are solved by a Runge-Kutta numerical method.
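As an illustration of how such models are integrated in time, a lumped transient energy balance for the basin water, mw cw dTw/dt = α I(t) As - [ht,w-g (Tw - Tg) + Ub (Tw - Ta)] As, can be marched with a classical fourth-order Runge-Kutta scheme. All parameter values below are assumptions made for illustration, not values from the paper, which solves a fuller set of coupled balances.

```python
import math

# Illustrative parameter values (assumed, not taken from the paper)
M_W, C_W = 40.0, 4186.0   # water mass (kg) and specific heat (J/kg K)
A_S = 1.0                 # basin area (m^2)
H_WG = 35.0               # total water-to-glass loss coefficient (W/m^2 K)
U_B = 0.8                 # bottom loss coefficient (W/m^2 K)
ALPHA = 0.8               # effective absorptivity of the basin water
T_G, T_A = 30.0, 25.0     # glass and ambient temperatures (deg C), held fixed

def solar(t):
    """Half-sine irradiance profile over a 12-hour day (W/m^2)."""
    return max(0.0, 800.0 * math.sin(math.pi * t / (12 * 3600)))

def dTw_dt(t, Tw):
    """Lumped energy balance for the basin water temperature."""
    q_in = ALPHA * solar(t) * A_S
    q_out = (H_WG * (Tw - T_G) + U_B * (Tw - T_A)) * A_S
    return (q_in - q_out) / (M_W * C_W)

def rk4(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

Tw, dt = 25.0, 60.0          # start at ambient, 60-second steps
for step in range(12 * 60):  # march through the 12-hour day
    Tw = rk4(dTw_dt, step * dt, Tw, dt)
print(f"water temperature at the end of the day: {Tw:.1f} deg C")
```

In a full still model the glass temperature would be a second state variable coupled through Eqs. (18)-(29) rather than held fixed; the integration scheme is unchanged.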
The numerical calculations indicated that the wind speed has an influence on the glass cover temperature. It was noted that, during sunshine hours, the temperatures of the various components of the distiller follow the evolution of the solar radiation. [12]
6.2 Xiaohua Liu, Wenbo Chen, Ming Gu, Shengqiang Shen and Guojian Cao (2013) presented the thermal and economic performance of a solar desalination system with evacuated tube collectors and low-temperature multi-effect distillation. Mathematical and economic models are established based on mass and energy conservation, comprising an evacuated tube collector model, a heat storage tank model, a flash tank model, a multi-effect distillation model, and an electrical heating and cooling model. Taking actual operation into account, the influence of the heating steam temperature of the first effect, and of the number of effects of the multi-effect distillation system, on system performance is analyzed. The cost constitution of the solar desalination system with evacuated tube collectors is shown; the proportion of the cost of the evacuated tube collectors is the largest. The water cost is given to assess the economic performance of the solar desalination system. Under the calculation conditions of the paper, the following conclusions were drawn:
 With increasing heating steam temperature of the first effect, the evaporator area and the fresh water cost decrease and the volume of the storage tank increases, while the fresh water production and the fresh water production per unit collector area change only slightly.
 With an increasing number of effects, the volume of the storage tank changes slightly, the evaporator area and the fresh water production increase, and the fresh water cost reduces greatly.
 In the cost constitution of the ETC solar desalination system, the proportion of the cost of the evacuated tube collectors is the largest (31%), followed by the cost of civil installation, auxiliary equipment and manpower (15%).
[13]
6.3 Rajesh Tripathi and G.N. Tiwari (2005) presented the thermal analysis of passive and active solar distillation systems using the concept of the solar fraction inside the solar still, obtained with AUTOCAD 2000 for given solar azimuth and altitude angles and the latitude and longitude of the place. Experiments were conducted for 24 h (9 am to 8 am) under New Delhi climatic conditions (latitude 28°35′ N, longitude 77°12′ E) during the months of November and December, for different water depths in the basin (0.05, 0.1 and 0.15 m), for both passive and active solar distillation systems. Analytical expressions for the water and glass cover temperatures and the yield were derived in terms of design and climatic parameters. The following conclusions were drawn:
 The degree of agreement between theoretical and experimental results is better for the active mode than for the passive mode of operation.
 The solar fraction plays a very significant role in the thermal modeling of a solar still in both the active and the passive mode of operation.
 The relative humidity should be measured inside the solar still, particularly for greater water depths in the basin.
 Temperature-dependent internal heat transfer coefficients should be considered in the thermal modeling of solar stills. [14]
Figure 3: (a) Schematic diagram of an active solar still coupled with a flat plate collector, (b) photograph of the experimental set-up and (c) flow chart of the AUTOCAD 2000 model.
6.4 Anil Kr. Tiwari and G.N. Tiwari (2007) carried out an experimental analysis of a set-up at latitude 28°35′ N for annual as well as seasonal performance. Different water depths in a single slope passive solar still, with the cover inclined at 30°, were studied for the months of June 2004 to May 2005, taking six clear days per month. The dominance of the evaporative fraction within 32-37 °C was noticed, depending on the water depth under consideration.
On the basis of these studies the following conclusions were drawn:
 The daily yield at the lower water depth of 0.02 m was found to be 32.57% and 32.39% more than the daily yield at the higher water depth of 0.18 m in summer and winter respectively. The summer daily yield at the lowest water depth (0.02 m) was found to be 66.9% more than the corresponding winter value for the same depth.
 The annual yield obtained at the lower water depth (0.04 m) is 44.28% higher than that obtained at the higher water depth (0.18 m). The annual yield becomes constant for water depths greater than 0.08-0.10 m.
 In summer, unlike in winter, the evaporative energy fraction supersedes the radiative fraction at 33 °C and 40 °C for the lower (0.02 m) and medium (0.08 m) water depths respectively, whereas it never supersedes it for the higher water depths (0.16 m or more). The dominance of the evaporative energy fraction was observed at temperatures near or above 35 °C in both seasons.
 Increasing the basin absorptivity from 0.40 to 0.80 can lead to 30.59% more daily yield at the lower water depth, whereas increasing the air velocity from 0.0 m/s to 2.4 m/s can increase the daily yield by 40.06% and 50.94% for water depths of 0.02 m and 0.12 m respectively. [15]
6.5 M.K. Ghosal, G.N. Tiwari and N.S.L. Srivastava (2002) were concerned with the seasonal analysis of a solar desalination system combined with a greenhouse. Analytical expressions for the water temperature, greenhouse room air temperature, glass cover temperature, flowing water mass over the glass cover, hourly yield of fresh water and thermal efficiency were derived in terms of design and climatic parameters for typical summer and winter days. The temperature rise of the flowing water mass with respect to distance and time in the solar still unit was also incorporated in the mathematical modeling.
Based on the above results, the following conclusions were drawn:
 The rate of increase in the yield of fresh water becomes steady after the length (L) of the south roof exceeds 2.5 m.
 The yield and the fall in maximum greenhouse room air temperature (∆Tr,max) decrease with increasing flow rate. [16]
6.6 Hikmet Ş. Aybar (2006) modeled and simulated an inclined solar water distillation (ISWD) system, which generates distilled water (i.e., condensate) and hot water at the same time. In parametric studies, the effects of the feed water mass flow rate and the solar intensity on the system parameters were investigated. Finally, the system was simulated using actual variations of solar intensity and environment temperature during a typical summer day in North Cyprus. The system can generate 3.5-5.4 kg (per m² of absorber plate area) of distilled water during a day (i.e., 7 am till 7 pm). The temperature of the produced hot water reached as high as 60 °C, and the average water temperature was about 40 °C, which is good enough for domestic use, depending on the type of feed water. The simulation results are in good agreement with the experimental results. [17]
6.7 Gajendra Singh, Shiv Kumar and G.N. Tiwari (2011) developed a double slope hybrid photovoltaic-thermal (PVT) active solar still, which was designed, fabricated and experimentally tested under field conditions in different configurations. The parallel forced-mode configuration of the solar still produces a higher yield than the other configurations, obtained as 7.54 kg/day with an energy efficiency of 17.4%. The hourly exergy efficiency is also found to be highest for the same configuration, reaching as high as 2.3%. The yield obtained is about 1.4 times higher than that obtained with the hybrid (PVT) single slope solar still, and the annual yield is expected to be 1939 kg. The estimated energy payback time is found to be 3.0 years, about 30% less than that of the hybrid (PVT) single slope solar still.
The total cost of the fabricated still is about 14% less than that of the hybrid (PVT) single slope solar still; the experimental set-up is shown in Figure 5. [18]
Figure 5: Integrated flat plate collectors (FPCs) and double slope solar still. Figure 6: Photograph of the hybrid (PVT) active solar still.
6.8 Shiv Kumar (2013) carried out a thermal and economic evaluation of a hybrid (PVT) active solar distillation system incorporating the effects of subsidy, tax benefit, inflation and maintenance costs, for the climatic conditions of New Delhi (India). The analysis is based on annualized costing for expected life spans of 15 and 30 years. Further, CO2 emission/mitigation and the revenue earned through carbon credits are taken into account as per the norms of the Kyoto Protocol for India. The energy production factor (EPF) and life cycle conversion efficiency (LCCE) are found to be 5.9% and 14.5%, respectively, for an expected life of 30 years. The energy and distillate production costs are found to be Rs. 0.85/kWh and Rs. 0.75/L, respectively, accounting for the carbon credit earned. The cost payback period is estimated to be 4.2 years if the distillate is sold at the rate of Rs. 6.0/L in the local market; the experimental set-up is shown in Figure 6. [19]

VII. CONCLUSION
Solar energy technologies and their use are very important for developing and under-developed countries in sustaining their energy needs. The use of solar energy in the desalination process is one of the best applications of renewable energy, and the solar still has become popular particularly in rural areas. Solar stills are friendly to nature and the eco-system. The various types of and developments in solar distillation systems, their theoretical analysis and the future scope for research have been reviewed in detail. Based on the review and discussion, the following points can be concluded.
 The condensing glass cover inclination should equal the latitude of the place for maximum distillation.
 The total cost of the fabricated double slope hybrid still is about 14% less than that of the hybrid (PVT) single slope solar still. Its hourly exergy efficiency is the highest, reaching 2.3%, and its yield is about 1.4 times higher than that of the hybrid (PVT) single slope solar still.
 The single slope passive solar still is more efficient than the double slope passive solar still.
 The thermal efficiency of the double slope active solar still is lower than that of the double slope passive solar still.
 The energy efficiency of the double slope active solar still is higher than that of the double slope passive solar still.
 In the active double effect solar still, the higher yield from the lower basin at noon is due to the high water temperature at that time.
 Hourly yield is possible only in the active mode of operation, which is hence commercially viable.
The solar still is suited to villages and to mass water purification. Around the world, concerns over water quality are increasing, and in special situations a solar still can provide a water supply more economically than any other method. The two big advantages of a solar still are that it uses low-grade solar energy, which is available forever, and that there is no greenhouse pollutant emission, as is the case with other desalination techniques using fossil fuels. Further, it can be used in remote places where there is no electricity or fuel.
Nomenclature
Aa  Aperture area of concentrating collector (m²)
Ac  Area of solar collector (m²)
Ar  Receiver area of concentrating collector (m²)
Ass  Area of sides of solar still (m²)
As  Area of basin of solar still (m²)
C  Constant in Nusselt number expression
Cp  Specific heat of vapor (J/kg °C)
Cw  Specific heat of water in solar still (J/kg °C)
g  Acceleration due to gravity (m/s²)
Gr  Grashof number
hc,b-a  Convective heat transfer coefficient from basin to ambient (W/m² °C)
hr,b-a  Radiative heat transfer coefficient from basin to ambient (W/m² °C)
ht,b-a  Total heat transfer coefficient from basin to ambient (W/m² °C)
hc,g-a  Convective heat transfer coefficient from glass cover to ambient (W/m² °C)
hr,g-a  Radiative heat transfer coefficient from glass cover to ambient (W/m² °C)
ht,g-a  Total heat transfer coefficient from glass cover to ambient (W/m² °C)
hc,w-g  Convective heat transfer coefficient from water to glass cover (W/m² °C)
he,w-g  Evaporative heat transfer coefficient from water to glass cover (W/m² °C)
hr,w-g  Radiative heat transfer coefficient from water to glass cover (W/m² °C)
ht,w-g  Total heat transfer coefficient from water to glass cover (W/m² °C)
hw  Convective heat transfer coefficient from basin liner to water (W/m² °C)
hb  Overall heat transfer coefficient from basin liner to ambient through bottom insulation (W/m² °C)
I(t)c  Intensity of solar radiation on the inclined surface of the solar collector (W/m²)
I(t)s  Intensity of solar radiation on the inclined surface of the solar still (W/m²)
Ki  Thermal conductivity of insulation material (W/m °C)
Kg  Thermal conductivity of glass cover (W/m °C)
Kv  Thermal conductivity of humid air (W/m °C)
Kw  Thermal conductivity of water (W/m °C)
L  Latent heat of vaporization (J/kg)
Li  Thickness of insulation material (m)
Lg  Thickness of glass cover (m)
Ma  Molecular weight of dry air (kg/mol)
mew  Hourly output from solar still (kg/m² h)
Mew  Daily output from solar still (kg/m² day)
Mw  Mass of water in the basin (kg)
Mwv  Molecular weight of water vapor (kg/mol)
n  Constant in Nusselt number expression
Pgi  Partial vapor pressure at inner glass surface temperature (N/m²)
Pr  Prandtl number
Pt  Total vapor pressure in the basin (N/m²)
Pw  Partial vapor pressure at water temperature (N/m²)
qc,w-g  Rate of convective heat transfer from water to glass cover (W/m²)
qe,w-g  Rate of evaporative heat transfer from water to glass cover (W/m²)
qr,w-g  Rate of radiative heat transfer from water to glass cover (W/m²)
qt,w-g  Rate of total heat transfer from water to glass cover (W/m²)
qr,g-a  Rate of radiative heat transfer from glass cover to ambient (W/m²)
qc,g-a  Rate of convective heat transfer from glass cover to ambient (W/m²)
qt,g-a  Rate of total heat transfer from glass cover to ambient (W/m²)
qw  Rate of convective heat transfer from basin liner to water (W/m²)
qb  Rate of heat transfer from basin liner to ambient (W/m²)
Ra  Rayleigh number
Ra′  Modified Rayleigh number
T  Time (s)
Ta  Ambient temperature (°C)
Tb  Basin temperature (°C)
Tgi  Inner glass cover surface temperature (°C)
Tgo  Outer glass cover surface temperature (°C)
Tsky  Sky temperature (°C)
Tw  Water temperature (°C)
∆T  Temperature difference between water and glass surface (°C)
Ub  Overall bottom heat loss coefficient (W/m² °C)
Us  Overall side heat loss coefficient (W/m² °C)
ULC  Overall heat transfer coefficient for solar collector (W/m² °C)
ULS  Overall heat transfer coefficient for solar still (W/m² °C)
Ut  Overall top heat loss coefficient from water surface to ambient air (W/m² °C)
V  Wind velocity (m/s)
Xv  Mean characteristic length of solar still between evaporation and condensation surfaces (m)
Xw  Mean characteristic length of solar still between basin and water surface (m)
Greek letters
α  Absorptivity
αv  Thermal diffusivity of water vapor (m²/s)
α′  Fraction of energy absorbed
(ατ)  Absorptance-transmittance product
β  Coefficient of volumetric thermal expansion (1/K)
ε  Emissivity
  Relative humidity
µv  Viscosity of humid air (Pa s)
ρv  Density of vapor (kg/m³)
σ  Stefan-Boltzmann constant (5.67 × 10⁻⁸ W/m² K⁴)
Subscripts
a  Ambient
b  Basin liner
c  Collector
eff  Effective
g  Glass cover
s  Solar still
w  Water

REFERENCES
[1]. Prem Shankar and Shiv Kumar, "Solar Distillation - A Parametric Review", VSRD-MAP, Vol. 2 (1), 2012, pp. 17-33.
[2]. K. Sampathkumar, T.V. Arjunan, P. Pitchandi and P. Senthilkumar, "Active solar distillation - a detailed review", Renewable and Sustainable Energy Reviews 14 (2010) 1503-1526.
[3]. V. Velmurugan and K. Srithar, "Solar stills integrated with a mini solar pond - analytical simulation and experimental validation", Desalination 2007; 216: 232-241.
[4]. R.V. Dunkle, "Solar water distillation, the roof type solar still and a multi effect diffusion still", International Developments in Heat Transfer, ASME Proceedings of International Heat Transfer, University of Colorado, 1961; 5: 895-902.
[5]. Z. Chen, X. Ge, X. Sun, L. Bar and Y.X. Miao, "Natural convection heat transfer across air layers at various angles of inclination", Engineering Thermophysics 1984; 211-220.
[6]. Hongfei Zheng, Xiaoyan Zhang, Jing Zhang and Yuyuan Wu, "A group of improved heat and mass transfer correlations in solar stills", Energy Conversion and Management 2002; 43: 2469-2478.
[7]. G.N. Tiwari, Vimal Dimri, Usha Singh, Aravind Chel and Bikash Sarkar, "Comparative thermal performance evaluation of an active solar distillation system", International Journal of Energy Research 2007; 31: 1465.
[8]. M.A.S. Malik, G.N. Tiwari, A. Kumar and M.S. Sodha, "Solar distillation", Oxford, UK: Pergamon Press; 1982, pp. 8-17.
[9]. Omar O. Badran and Mazen M. Abu-Khader, "Evaluating thermal performance of a single slope solar still", Heat and Mass Transfer 2007; 43: 985-995.
[10]. G.N. Tiwari and A.K. Tiwari,
―Solar distillation practice for water desalination systems‖. σew Delhi: Anamaya Publishers; 2008. Tiwari Gσ. ―Solar energy: fundamentals, design, modelling and application‖ .σew Delhi: σarosa Publishing House; 2004. p. 278–306. Fedali Saida, Bougriou Cherif, ―Thermal Modeling of Passive Solar Still‖ EFEEA’10 International Symposium on Environment Friendly Energies in Electrical Applications 2-4 November 2010 Ghardaïa, Algeria Xiaohua Liu, Wenbo Chen, Ming Gu, Shengqiang Shen, Guojian Cao, ―Thermal and economic analyses of solar desalination system with evacuated tube collectors‖, Solar Energy 93 (2013) 144–150 Rajesh Tripathi, G.σ. Tiwari, ―Thermal modeling of passive and active solar stills for different depths of water by using the concept of solar fraction‖, Solar Energy 80 (2006) 956–967. Anil Kr. Tiwari, G.σ. Tiwari, ―Thermal modeling based on solar fraction and experimental study of the annual and seasonal performance of a single slope passive solar still: The effect of water depths‖, Desalination 207 (2007) 184–204. M.K. Ghosal, GN. Tiwari, N.S.L. Srivastava, ―Thermal modeling of a controlled environment greenhouse cum solar distillation for composite and warm humid climates of India‖ Desalination 15 1 (2002) 293308. Hikmet Ş. Aybar, ―Mathematical modeling of an inclined solar water distillation system‖, Desalination 190 (2006) 63–70. Gajendra Singh , Shiv Kumar, G.σ. Tiwari ―Design, fabrication and performance evaluation of a hybrid photovoltaic thermal (PVT) double slope active solar still‖ Desalination β77 (β011) γ99–406. Shiv Kumar, ―Thermal–economic analysis of a hybrid photovoltaic thermal (PVT) active solar distillation system: Role of carbon credit‖, Urban Climate xxx (2013) xxx–xxx. Syed Firozuddin, Dr. P. V. Walke, ―Thermal Performance on Single Basin Solar Still with Evacuated Tubes Solar Collector-A review‖ International Journal of Modern Engineering Research (IJMER) Vol.3, Issue.2, March-April. 2013 pp-1022-1025. Ali A. F. Al-Hamadani, S. K. 
Shukla and Alok Dwivedi, ―Experimental Performance Analysis of a Solar Distillation System with PCM Storage‖ International Journal of Research in Engineering and Technology (IJRET) Vol. 1, No. 6, 2012 ISSN 2277 – 4378. V. Sivakumar, E. Ganapathy Sundaram, ―Improvement techniques of solar still efficiency: A review‖ Renewable and Sustainable Energy Reviews 28(2013)246–264. www.ajer.org Page 213
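The symbols above are the working variables of the still's energy balance. As a numerical sketch of how they fit together, the snippet below uses Dunkle-type correlations for the water-to-glass convective and evaporative coefficients (hc,w-g and he,w-g) and the relation mew = 3600 x qe,w-g / L. The correlation constants, the saturation-pressure fit, and the chosen temperatures are assumptions taken from the general solar-still literature, not values reproduced from this paper:

```python
import math

def sat_vapor_pressure(T_c):
    """Partial vapor pressure (N/m2) at temperature T_c (oC).

    Standard exponential fit used in the solar-still literature (assumed here).
    """
    return math.exp(25.317 - 5144.0 / (T_c + 273.15))

def hourly_output(Tw, Tgi, L=2.39e6):
    """Hourly distillate mew (kg/m2 h) for water at Tw and inner glass at Tgi (oC)."""
    Pw, Pgi = sat_vapor_pressure(Tw), sat_vapor_pressure(Tgi)
    # hc,w-g: Dunkle-type convective coefficient, water to glass (W/m2 oC)
    hc = 0.884 * ((Tw - Tgi) + (Pw - Pgi) * (Tw + 273.15) / (268.9e3 - Pw)) ** (1 / 3)
    # he,w-g: evaporative coefficient, water to glass (W/m2 oC)
    he = 16.273e-3 * hc * (Pw - Pgi) / (Tw - Tgi)
    qe = he * (Tw - Tgi)      # qe,w-g: evaporative heat flux (W/m2)
    return 3600.0 * qe / L    # mew = qe * 3600 / L

print(round(hourly_output(60.0, 40.0), 2))  # about 0.88 kg/m2 h, a plausible yield
```

The illustrative operating point (water at 60 oC, inner glass at 40 oC, latent heat 2.39 x 10^6 J/kg) is chosen only to show the order of magnitude of the hourly yield.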
American Journal of Engineering Research (AJER) e-ISSN : 2320-0847 p-ISSN : 2320-0936 Volume-02, Issue-12, pp-46-51 www.ajer.org Research Paper Open Access
Experimental Investigation on the Effect of M-Sand in High Performance Concrete
M. Adams Joe, A. Maria Rajesh, P. Brightson, M. Prem Anand
1 Assistant Professor, Department of Built Environment Engineering, Muscat College, Oman.
2, 3, 4 Research Scholar, Anna University, Chennai, Tamilnadu, India.
Abstract: - Natural river sand has long been the cheapest source of sand. However, excessive mining of river beds to meet the growing demand for sand in the construction industry has led to ecological imbalance in the country. The sand now available in river beds is very coarse and contains a high percentage of silt and clay, which reduce the strength of concrete and retain dampness. Of the few alternatives available to the industry, manufactured sand, or M-sand as it is called, is found to be the most suitable replacement for river sand. M-sand has caught the attention of the construction industry and environmentalists alike for its quality and the minimal damage it causes to nature. Because it is made with modern technology and machinery, M-sand does not contain the impurities found in river sand and produces almost no wastage, so its use can drastically reduce cost. Once M-sand becomes more popular in the construction industry, the demand for river sand, and with it illegal sand mining, should come down. Compared with river sand, M-sand offers better quality consistency for high-strength concrete with significant savings. The M-sand available is graded, sieved and washed; its particles are rounded and granular and do not have sharp edges. Use of M-sand can overcome defects occurring in concrete such as honeycombing, segregation, voids and capillarity.
The purpose of this research is to experimentally investigate the effect of M-sand in structural concrete by replacing river sand, and so develop a high performance concrete. It is proposed to determine and compare the properties of concrete containing river sand and of concrete containing M-sand. It is also proposed to use steel fibres to increase the strength of the concrete and chemical admixtures to improve its workability. The investigation covers several tests, including workability, compressive, split tensile and flexural tests.
Keywords: - M-Sand, Steel fibre, Compressive strength, Split tensile strength, Flexural strength
I. INTRODUCTION
Concrete is a material used in building construction, consisting of a hard, chemically inert substance known as aggregate, usually made from different types of sand and gravel, that is bonded together by cement and water. The word concrete comes from the Latin 'concretus', the past participle of 'concrescere': 'con' means together and 'crescere' means to grow. Concrete was used for construction in many ancient structures, and its widespread use in Roman structures has ensured that many survive to the present day. Concrete is a composite composed primarily of aggregates, cement and water, and there are many formulations with varied properties. The coarse aggregate is generally gravel or crushed rock such as limestone or granite, used along with a fine aggregate.
1.2 MANUFACTURED SAND: For aggregate producers, concrete aggregates are end products, while for concrete manufacturers, aggregates are raw materials for concrete production. The quality of aggregates can be influenced during processing, although the parent gravel or rock may have characteristics that cannot be modified by the production process. One extremely important factor is a consistent supply of coarse and fine aggregate.
In this regard, coarse aggregate is produced by crushing basaltic stone, while river sand is the major natural source of fine aggregate in our country. However, intense construction activity is causing a growing shortage and a price increase of natural sand; in addition, the aggregate and concrete industries presently face growing public awareness of environmental threats. Looking for a viable alternative to natural sand is therefore a must. One alternative used as a replacement is M-sand. Given the forecast shortfall in the supply of natural sand and increasing construction activity, the time will come when M-sand plays a significant role as an ingredient in concrete production.
M-sand characteristics: When rock is crushed and sized in a quarry, the main aim has generally been to produce coarse aggregate and road construction materials. M-sand is defined as a purpose-made crushed fine aggregate produced from suitable source materials. Manufactured sand is produced by a variety of crushing equipment, including cone crushers, impact crushers, roll crushers and road rollers. The raw material for M-sand production is the parent mass of rock; the chemical and mineral properties, texture and composition of the sand depend on the parent rock.
II. MATERIALS USED
The materials used in the concrete mix are cement, fine aggregate (M-sand and river sand) and coarse aggregate.
2.1 CEMENT
The cement used in this experimental study is 43 grade Ordinary Portland Cement. All properties of the cement were tested with reference to IS 12269-1987, the specification for 43 grade Ordinary Portland Cement. The properties of the cement are given in Table 1.

Table 1: Properties of Cement
Sl.No.  Property              Value
1       Specific Gravity      3.15
2       Fineness              97.25
3       Initial Setting Time  45 min
4       Final Setting Time    385 min
5       Fineness Modulus      6%

2.2 FINE AGGREGATE (M-SAND)
The fine aggregate used in this research is M-sand. Fine aggregates are aggregates whose size is less than 4.75 mm.

Table 2: Properties of M-Sand
Sl.No.  Property          Value
1       Specific Gravity  2.68
2       Fineness modulus  5.2
3       Water Absorption  7.0%
4       Surface texture   Smooth

2.3 FINE AGGREGATE (RIVER SAND)
Good quality natural river sand is readily available in many areas and may be easily obtained and processed. As with the gravels they often accompany, sand deposits may not have been laid down uniformly, so their quality can vary. Fines are generally classified by size: material below 4.75 mm is regarded as fine aggregate.

Table 3: Properties of Fine Aggregate (River Sand)
Sl.No.  Property          Value
1       Specific Gravity  2.55
2       Fineness modulus  4.45
3       Water Absorption  6.2%
4       Surface texture   Smooth

2.4 COARSE AGGREGATE
Coarse aggregate of 20 mm nominal size was chosen and tested to determine its physical properties as per IS 383-1970. The test results conform to the IS 383 (Part III) recommendations.

Table 4: Properties of Coarse Aggregate
Sl.No.  Property          Value
1       Specific Gravity  2.70
2       Fineness modulus  7.15
3       Water Absorption  8.0%
4       Particle Shape    Angular
5       Impact value      8.5%
6       Crushing Value    18.5

2.5 CHEMICAL ADMIXTURES
Superplasticizers, or high range water reducing admixtures (HRWRA), are an important component of High Performance Concrete. Viscosity modifying admixtures (VMA) may also be used to reduce segregation and the sensitivity of the mix to variations in other constituents, especially moisture content.
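Tables 1-4 above each quote a fineness modulus. As a worked illustration of that figure, the fineness modulus is the sum of the cumulative percentages retained on the standard sieves divided by 100; the sieve data below are hypothetical, chosen only to show the arithmetic, and are not gradings measured in this study:

```python
# Fineness modulus = (sum of cumulative % retained on standard sieves) / 100.

def fineness_modulus(cumulative_retained):
    """cumulative_retained: cumulative % retained on each successive standard sieve."""
    return sum(cumulative_retained) / 100.0

# Hypothetical sieve analysis for a fine aggregate
# (sieves 4.75 mm, 2.36 mm, 1.18 mm, 600 um, 300 um, 150 um):
cum_retained = [5, 15, 35, 55, 75, 95]
print(fineness_modulus(cum_retained))  # 2.8
```

A coarser material retains more on the larger sieves, inflating the cumulative percentages and hence the modulus, which is why the coarse aggregate in Table 4 shows the highest value.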
Other admixtures, including air entraining, accelerating and retarding admixtures, may be used in the same way as in traditional vibrated concrete, but advice should be sought from the admixture manufacturer on their use and on the optimum time of addition. The choice of admixture for optimum performance may be influenced by the physical and chemical properties of the binder. Admixtures are normally very consistent from batch to batch, but moving to another source, or to another type from the same manufacturer, is likely to have a significant effect on concrete performance and should be fully checked before any change is made.
2.6 STEEL FIBRES
In this study, corrugated steel fibre with an aspect ratio of 60 was chosen. Corrugated steel fibres offer cost-efficient concrete reinforcement. They were evenly distributed in the concrete mixtures to improve the tensile strength of the concrete and to resist micro-cracking.
III. EXPERIMENTAL TESTS
Mix design was carried out for M30 concrete as per the Indian standard code specification IS 10262-2007. Initial tests on all the ingredients of the concrete were done and the results tabulated. Fresh concrete tests, such as the slump cone test and the flow table test, were also conducted. Testing of hardened concrete plays an important role in controlling and confirming the quality of cement concrete work.
3.1 Cube Compressive Strength
Compressive strength, one of the most important properties of hardened concrete, is in general the characteristic material value used for the classification of concrete. The 28-day cube compressive strength was tested on cubes of size 150 mm x 150 mm x 150 mm.
3.2 Splitting Tensile Strength
Splitting tensile strength is an indirect method of determining the tensile strength of concrete. Tests were carried out on 150 mm x 300 mm cylinders conforming to IS 5816: 1976 to obtain the splitting tensile strength at the age of 28 days.
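In the splitting test, the tensile stress is obtained from the failure load as 2P/(πDL). A short check of that arithmetic for the 150 mm x 300 mm cylinders used here; the failure load below is an illustrative value, not a measured one:

```python
import math

def split_tensile_stress(P_n, D_mm, L_mm):
    """Splitting tensile stress (N/mm2) from failure load P (N) and
    cylinder diameter D and length L (mm): sigma = 2P / (pi * D * L)."""
    return 2.0 * P_n / (math.pi * D_mm * L_mm)

# Illustrative failure load of about 291 kN on a 150 mm x 300 mm cylinder:
sigma = split_tensile_stress(291_000, 150, 300)
print(round(sigma, 2))  # 4.12 N/mm2
```

Note that the load acts over the diametral plane D x L, so for a given stress level the required failure load grows with both the diameter and the length of the specimen.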
In the splitting tensile test, the concrete cylinder is placed with its axis horizontal between the platens of the testing machine, and the load is increased until failure occurs by splitting in the plane containing the vertical diameter of the specimen. The magnitude of the tensile stress is given by 2P/(πDL), where P is the applied load and D and L are the diameter and length of the cylinder respectively.
3.3 Flexural Strength (Modulus of Rupture)
Tests were carried out on 100 mm x 100 mm x 500 mm beams conforming to IS 516: 1959 to obtain the flexural strength at the age of 28 days. In the flexural test, a standard plain concrete beam of rectangular cross-section is simply supported and subjected to central point loading until failure.
IV. RESULTS AND DISCUSSIONS
4.1 Cube Compressive Strength
Four cube samples for each percentage of river sand replaced by M-sand were tested to determine the 7-day and 28-day compressive strengths using a 3000 kN compression testing machine. The compressive strength test on cubes was conducted as per standards. The 28-day compressive strength is seen to increase up to 50% replacement by M-sand.

Table 5: Cube compressive strength of concrete @ 28 days
Specimens (S)  River Sand (%)  M-Sand (%)  Steel Fibres (%)  Average cube compressive strength @ 28 days (N/mm2)
S1             70              30          1                 35.84
S2             60              40          1                 38.62
S3             50              50          1                 39.80
S4             40              60          1                 37.70

4.2 Split Tensile Strength
Four cylinder samples of each mix with various percentages of M-sand were tested to determine the split tensile strength after 28 days using a 3000 kN compression testing machine. The tests were conducted as per standard specifications, and the results are tabulated in Table 6. The 28-day split tensile strength is seen to increase up to 50% replacement by M-sand.

Table 6: Split tensile strength of concrete @ 28 days
Specimens (S)  River Sand (%)  M-Sand (%)  Steel Fibres (%)  Average split tensile strength @ 28 days (N/mm2)
S1             70              30          1                 2.95
S2             60              40          1                 3.58
S3             50              50          1                 4.12
S4             40              60          1                 3.62

4.3 Flexural Strength
Four beam samples of each mix with various percentages of M-sand were tested to determine the flexural strength after 28 days using a 30-tonne Shimadzu universal testing machine. The tests were conducted as per standard specifications, and the flexural strength of the concrete is given in Table 7. The 28-day flexural strength is seen to increase up to 50% replacement by M-sand.

Table 7: Flexural strength of concrete @ 28 days
Specimens (S)  River Sand (%)  M-Sand (%)  Steel Fibres (%)  Average flexural strength @ 28 days (N/mm2)
S1             70              30          1                 7.2
S2             60              40          1                 7.8
S3             50              50          1                 8.6
S4             40              60          1                 7.4

GRAPH
Charts 1, 2 and 3 show the compressive strength, split tensile strength and flexural strength, respectively, of the various mix proportions at 28 days of curing.
Chart 1: Average cube compressive strength @ 28 days (N/mm2)
Chart 2: Average split tensile strength @ 28 days (N/mm2)
Chart 3: Average flexural strength @ 28 days (N/mm2)
V. CONCLUSION
From the results it is concluded that M-sand can be used as a replacement for fine aggregate. Replacement of 50% of the fine aggregate by M-sand gives the best results in strength and durability compared with conventional concrete: it produced higher compressive strength, higher split tensile strength and higher flexural strength. The environmental effects, the illegal extraction of sand and the cost of fine aggregate can thereby be significantly reduced.
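The 50% optimum stated in the conclusion can be read mechanically from Tables 5-7; the following sketch simply replays that comparison with the tabulated 28-day values:

```python
# 28-day strength results copied from Tables 5-7 (N/mm2),
# keyed by the M-sand replacement percentage of each specimen.
results = {
    30: {"compressive": 35.84, "split_tensile": 2.95, "flexural": 7.2},
    40: {"compressive": 38.62, "split_tensile": 3.58, "flexural": 7.8},
    50: {"compressive": 39.80, "split_tensile": 4.12, "flexural": 8.6},
    60: {"compressive": 37.70, "split_tensile": 3.62, "flexural": 7.4},
}

# Find the replacement level that maximizes each strength measure:
for measure in ("compressive", "split_tensile", "flexural"):
    best = max(results, key=lambda pct: results[pct][measure])
    print(measure, best)  # each measure peaks at 50% replacement
```

All three measures peak at the same mix (S3), which is why the conclusion can quote a single optimum replacement level.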
REFERENCES
[1] Mohammad Danjehpour, Abang Abdulla Abang Ali, Ramzan Demirboga, "A review for characterization of silica fumes and its effects on concrete properties", International Journal of Sustainable Construction Engineering & Technology (ISSN: 2108-3242), Vol. 2, Issue No. 2, 2011, 1-7.
[2] M.S. Shetty, "Admixtures and Construction Chemicals", Concrete Technology (New Delhi: S. Chand & Company Ltd., 2012), 124-217.
[3] IS 10262 - 2009, Recommended guidelines for concrete mix design.
[4] Saeed Ahmad and Shahid Mahmood, "Effects of crushed and natural sand on the properties of fresh and hardened concrete", Our World in Concrete & Structures, August 2008.
[5] Shanmugavadivu P.M., Malathy R., "A comparative study on mechanical properties of concrete with manufactured sand", International Journal of Technology World, Oct - Nov 200