Abstract: Fractal-based digital image compression is a specific
technique in the field of color image compression. The method is
best suited to irregularly shaped image content such as snow,
clouds, flames, and tree leaves, and relies on the fact that parts
of an image often resemble other parts of the same image. This
technique has drawn much attention in recent years because of the
very high compression ratio that can be achieved. Hybrid schemes
incorporating fractal compression and speedup techniques have
achieved higher compression ratios than pure fractal compression.
Fractal image compression is a lossy compression method that
exploits the self-similar nature of an image. This technique
provides a high compression ratio, less encoding time, and a fast
decoding process. In this paper, fractal compression with quadtree
partitioning and the DCT is proposed to compress color images. The
proposed hybrid scheme requires four phases to compress a color
image. First, the image is segmented and the Discrete Cosine
Transform is applied to each block of the segmented image. Second,
the block values are scanned in a zigzag manner to group the zero
coefficients together. Third, the resulting image is partitioned
into fractals by the quadtree approach. Fourth, the image is
compressed using the run-length encoding technique.
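Two of the four phases above — the zigzag scan that groups the zero DCT coefficients together and the run-length encoding — can be sketched in a few lines. This is a minimal illustration assuming the standard JPEG-style zigzag order, not the authors' implementation:

```python
import numpy as np

def zigzag(block):
    """Scan an N x N block in zigzag order so that the high-frequency
    (mostly zero) DCT coefficients end up grouped together."""
    n = block.shape[0]
    indices = [(i, j) for i in range(n) for j in range(n)]
    # walk the anti-diagonals, alternating direction (JPEG-style)
    indices.sort(key=lambda ij: (ij[0] + ij[1],
                                 ij[0] if (ij[0] + ij[1]) % 2 else -ij[0]))
    return np.array([block[i, j] for i, j in indices])

def run_length_encode(seq):
    """Encode a 1-D sequence as (value, run-length) pairs; long zero
    runs produced by the zigzag scan compress to single pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return [(v, c) for v, c in out]
```

Applied after quantization, the zigzag scan typically leaves a long tail of zeros, which is exactly what makes the final run-length encoding phase effective.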
Abstract: Orthogonal Frequency Division Multiplexing
(OFDM) has been used in many advanced wireless communication
systems due to its high spectral efficiency and robustness to
frequency selective fading channels. However, the major concern
with OFDM system is the high peak-to-average power ratio (PAPR)
of the transmitted signal. Some of the popular techniques used for
PAPR reduction in OFDM system are conventional partial transmit
sequences (CPTS) and clipping. In this paper, a parallel
combination/hybrid scheme of PAPR reduction using clipping and
CPTS algorithms is proposed. The proposed method intelligently
applies both algorithms in order to reduce both the PAPR and the
computational complexity. The proposed scheme slightly degrades
bit error rate (BER) performance due to the clipping operation; this
degradation can be reduced by selecting an appropriate value of the
clipping ratio (CR). The simulation results show that the proposed algorithm
achieves significant PAPR reduction with much reduced
computational complexity.
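As a rough illustration of the clipping side of the scheme (not the CPTS stage), the sketch below computes the PAPR of an OFDM symbol and applies amplitude clipping at a given clipping ratio. The 64-subcarrier QPSK setup is an assumed example, not the paper's simulation parameters:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, cr_db):
    """Amplitude clipping: limit |x| to A = CR * sqrt(mean power),
    with the clipping ratio CR given in dB; the phase is preserved."""
    a = 10 ** (cr_db / 20) * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.abs(x)
    return np.where(mag > a, x * a / np.maximum(mag, 1e-12), x)

# One OFDM symbol: random QPSK subcarriers -> IFFT gives the time-domain signal.
rng = np.random.default_rng(0)
subcarriers = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64) / np.sqrt(2)
x = np.fft.ifft(subcarriers) * np.sqrt(64)
```

Clipping at a lower CR reduces the PAPR further but worsens BER, which is the trade-off the abstract refers to.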
Abstract: Teaching of mathematics to engineering students is an
open ended problem in education. The main goal of mathematics
learning for engineering students is the ability of applying a wide
range of mathematical techniques and skills in their engineering
classes and later in their professional work. Most undergraduate
engineering students and faculty feel that little effort is made to
demonstrate the applicability of the various topics of mathematics
that are taught, which makes mathematics unappealing to some
engineering faculty and their students. The lack
of understanding of concepts in engineering mathematics may hinder
the understanding of other concepts or even subjects. However, for
most undergraduate engineering students, mathematics is one of the
most difficult courses in their field of study. Many engineering
students never understood mathematics, or never liked it, because it
was too abstract for them and they could never relate to it. Only a
right balance of application-based and concept-based teaching can
fulfill the objectives of teaching mathematics to
engineering students. It will surely improve and enhance their
problem solving and creative thinking skills. In this paper, some
practical (informal) ways of making mathematics teaching
application-based for engineering students are discussed. An attempt
is made to understand the present state of
teaching mathematics in engineering colleges. The weaknesses and
strengths of the current teaching approach are elaborated. Some of
the causes of unpopularity of mathematics subject are analyzed and a
few pragmatic suggestions have been made. Faculty in mathematics
courses should spend more time discussing the applications as well as
the conceptual underpinnings rather than focus solely on strategies
and techniques to solve problems. They should also introduce more
‘word’ problems as these problems are commonly encountered in
engineering courses. Overspecialization in engineering education
should not occur at the expense of (or by diluting) mathematics and
basic sciences. The role of engineering education is to provide the
fundamental (basic) knowledge and to teach the students simple
methodology of self-learning and self-development. All these issues
would be better addressed if mathematics and engineering faculty
join hands together to plan and design the learning experiences for
the students who take their classes. When faculty members stop
competing against each other and instead jointly tackle the
situation, they will perform better. These suggestions can be used,
without creating any administrative hassles, by any young,
inexperienced mathematics faculty member to inspire engineering
students to learn engineering
mathematics effectively.
Abstract: In this contribution, we investigate the influence of
crystal lamellae orientation on the electromechanical behavior of
relaxor ferroelectric poly(vinylidene fluoride-trifluoroethylene-
chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) films by controlling the
polymer microstructure, aiming to map the full structure-property
relationship. To control their crystal orientation, terpolymer films
were fabricated by solution-casting, stretching, and hot-pressing
processes. Differential scanning calorimetry, impedance analysis,
and tensile testing were employed to characterize the
crystallographic parameters, the dielectric permittivity, and the
elastic Young's modulus, respectively. In addition, the large
electrically induced out-of-plane electrostrictive strain was
measured in cantilever beam mode. As-cast pristine films exhibited a
surprisingly high electrostrictive strain of 0.1774%, owing to a
considerably small elastic Young's modulus despite a relatively low
dielectric permittivity; these factors contributed to a large
mechanical elastic energy density. In contrast, owing to a two-fold
increase in elastic Young's modulus and a less than 50% increase in
dielectric constant, the fully crystallized film showed weak
electrostrictive behavior and low mechanical energy density. After
the mechanical stretching process, Film C exhibited a higher
dielectric constant and outperformed Film B in electrostrictive
strain because of the edge-on crystal lamellae orientation induced
by uniaxial mechanical stretching. Hot-pressed films were compared
in terms of cooling rate. A rather large electrostrictive strain of
0.2788% was observed for the quenched hot-pressed Film D, although
its dielectric permittivity was equivalent to that of the pristine
as-cast Film A, yielding the highest mechanical elastic energy
density of 359.5 J/m3. In the hot-press cooling process, the
dielectric permittivity of Film E reached 48.8, concomitant with an
approximately 100% increase in Young's modulus, and films with
intermediate mechanical energy density were obtained.
Abstract: The paper presents an additive manufacturing process for the production of metal and composite parts. It is termed composite metal foil manufacturing and is a combination of laminated object manufacturing and brazing techniques. The process is described in detail and is used to produce dissimilar aluminum-to-copper foil single lap joints. A three-dimensional finite element model has been developed to study the thermo-mechanical characteristics of the dissimilar Al/Cu single lap joint. The effects of thermal stress and strain have been analyzed by carrying out a transient thermal analysis of the heated plates used to join the two 0.1 mm thin metal foils. Tensile tests were carried out on the foils before joining, and after the single Al/Cu lap joints were made, they were subjected to tensile lap-shear tests to analyze the effect of heat on the foils. The analyses are designed to assess the mechanical integrity of the foils after the brazing process and to understand whether or not the heat treatment has an effect on the fracture modes of the produced specimens.
Abstract: The focus of this study was to determine the factors associated with the use of substances for sport performance among youths in Lagos State sports. A questionnaire was the instrument used for the study, and a descriptive research method was adopted. The estimated population for the study was 2000 sportsmen and sportswomen; the sample size was 200 respondents, selected using purposive sampling techniques. The instrument was validated for its content and construct validity, and it was administered with the assistance of the coaches. All 200 copies administered were returned. The data obtained were analysed using simple percentages and the chi-square (χ2) test for the stated hypotheses at the 0.05 level of significance. The findings reveal that sports injuries, exercise-induced anaphylaxis and asthma, and a feeling of loss of efficacy were associated with alcohol use and sport performance among the users of substances. Alcohol users are recommended to take part in sports like swimming, basketball, and volleyball, because these allow periods of rest during play. Government should be fully in charge of the health of sportsmen and sportswomen.
Abstract: We investigate large-scale networks in the
context of network survivability under attack. We use appropriate
techniques to evaluate both attacker-based and defender-based
network survivability. The attacker is unaware of which links are
operated by the defender. Each attacked link has some pre-specified
probability of being disconnected. The defender chooses links so as
to maximize the chance of successfully sending the flow to the
destination node. The attacker, however, selects the cut-set with
the highest chance of being disabled in order to partition the network.
Moreover, we extend the problem to the case of selecting the best p
paths to operate by the defender and the best k cut-sets to target by
the attacker, for arbitrary integers p,k>1. We investigate some
variations of the problem and suggest polynomial-time solutions.
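One polynomial-time building block consistent with the defender's side of this problem is a most-reliable-path computation: if each link survives an attack independently with a known probability, the path maximizing the product of survival probabilities is a shortest path under weights -log p. This is a hedged sketch of that standard reduction; the paper's exact models and algorithms may differ:

```python
import heapq
import math

def most_reliable_path(n, edges, s, t):
    """Defender's choice: the s-t path maximizing the product of link
    survival probabilities, solved as a shortest-path problem with
    weight -log(p) per link (Dijkstra), hence polynomial time.
    edges: list of (u, v, p) with p = probability the link survives."""
    adj = [[] for _ in range(n)]
    for u, v, p in edges:
        w = -math.log(p)
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [math.inf] * n
    prev = [None] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    if math.isinf(dist[t]):
        return None, 0.0
    path, node = [], t
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1], math.exp(-dist[t])
```

The log transform works because maximizing a product of probabilities in (0, 1] is equivalent to minimizing the sum of their negative logarithms.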
Abstract: Advances in spatial and spectral resolution of satellite
images have led to tremendous growth in large image databases. The
data we acquire through satellites, radars, and sensors consists of
important geographical information that can be used for remote
sensing applications such as region planning and disaster management.
Spatial data classification and object recognition are important tasks
for many applications. However, classifying objects and identifying
them manually from images is a difficult task. Object recognition is
often considered a classification problem, and this task can be
performed using machine-learning techniques. Among the many
machine-learning algorithms, the classification here is done using
supervised classifiers such as Support Vector Machines (SVM), as the
area of interest is known. We propose a classification method that
considers neighboring pixels in a region for feature extraction and
evaluates classifications precisely according to neighboring classes
for semantic interpretation of a region of interest (ROI). A dataset
has been created for training and testing purposes; we generated the
attributes by considering pixel intensity values and mean values of
reflectance. We demonstrate the benefits of using knowledge
discovery and data-mining techniques, which can be applied to image
data for accurate information extraction and classification from
high spatial resolution remote sensing imagery.
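The neighborhood-based feature extraction described above can be sketched as follows: each pixel's feature vector combines its own intensity with the mean of its k x k neighborhood, and the resulting matrix would then be fed to a supervised classifier such as an SVM (e.g. sklearn.svm.SVC). The exact features used in the paper (reflectance means, etc.) are only approximated here:

```python
import numpy as np

def neighborhood_features(img, k=3):
    """For each pixel, build a feature vector [intensity, mean of its
    k x k neighborhood] (edges handled by reflection padding).  The
    resulting (H*W, 2) matrix is suitable input for a supervised
    classifier when labeled regions of interest are available."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    h, w = img.shape
    means = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            means[i, j] = padded[i:i + k, j:j + k].mean()
    return np.stack([img.ravel().astype(float), means.ravel()], axis=1)
```

Including neighborhood statistics is what lets the classifier exploit spatial context rather than treating each pixel in isolation.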
Abstract: Journal bearings used in IC engines are prone to premature
failures and are likely to fail earlier than the rated life due to
highly impulsive and unstable operating conditions and frequent
starts/stops. Vibration signature extraction and wear debris analysis
techniques are prevalent in industry for condition monitoring of
rotary machinery. However, both techniques involve a great deal of
technical expertise, time, and cost. Limited literature is available on
the application of these techniques for fault detection in reciprocating
machinery, due to the complex nature of impact forces that
confounds the extraction of fault signals for vibration-based analysis
and wear prediction. In the present study, a simulation model was
developed to investigate the bearing wear behaviour resulting from
different operating conditions, to complement the vibration
analysis. In the current simulation, the dynamics of the engine were
established first, based on which the hydrodynamic journal bearing
forces were evaluated by numerical solution of the Reynolds
equation. In addition, the essential outputs of interest in this
study, critical to determining wear rates, are the tangential
velocity and the oil film thickness between the journal and bearing
sleeve, which, if not maintained appropriately, have a detrimental
effect on the bearing performance. Archard's wear prediction model was used in the simulation to
calculate the wear rate of bearings with specific location information
as all determinative parameters were obtained with reference to crank
rotation. Oil film thickness obtained from the model was used as a
criterion to determine if the lubrication is sufficient to prevent contact
between the journal and bearing thus causing accelerated wear. A
limiting value of 1 μm was used as the minimum oil film thickness
needed to prevent contact. The increased wear rate with growing
severity of operating conditions is analogous and comparable to the
rise in amplitude of the squared envelope of the referenced vibration
signals. Thus, on one hand, the developed model demonstrated its
capability to explain wear behaviour, and on the other hand, it also
helps to establish a correlation between wear-based and
vibration-based analysis. Therefore, the model provides a cost-effective and
quick approach to predict the impending wear in IC engine bearings
under various operating conditions.
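The two core ingredients of the wear computation — Archard's law and the minimum-oil-film criterion — reduce to a few lines. The sketch below uses illustrative SI units and the 1 μm threshold quoted above, whereas the paper's full model couples these quantities to the engine dynamics and the Reynolds equation:

```python
def archard_wear_depth(k, load, sliding_distance, hardness, area):
    """Archard's wear law: worn volume V = k * F * s / H; dividing by
    the nominal contact area gives a wear depth.  Units must be
    consistent (e.g. N, m, Pa, m^2 -> depth in m)."""
    volume = k * load * sliding_distance / hardness
    return volume / area

def contact_expected(h_min_um, threshold_um=1.0):
    """Flag journal/bearing contact (accelerated wear) when the minimum
    oil-film thickness drops below the limiting value of 1 micron."""
    return h_min_um < threshold_um
```

In the simulation, the load, sliding distance, and film thickness would all be functions of crank rotation, which is what gives the wear rate its location-specific character.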
Abstract: A cochlear implant (CI), implantation of which has become
a routine procedure over the last decades, is an electronic device
that provides a sense of sound for patients who are severely or
profoundly deaf. The optimal success of this implantation depends on
the electrode technology and deep insertion techniques. However, the
manual insertion procedure may cause mechanical trauma, which can
lead to severe destruction of the delicate intracochlear structure.
Accordingly, future improvement of cochlear electrode implant
insertion requires a reduction of the excessive force applied during
cochlear implantation, which causes tissue damage and trauma.
This study examined the tool-tissue interaction of a large-scale
prototype digit embedded with a distributive tactile sensor, based
upon a cochlear electrode, and a large-scale cochlea phantom
simulating the human cochlea, which could inform small-scale digit
requirements. The digit, with distributive tactile sensors embedded
in a silicon substrate, was inserted into the cochlea phantom to
measure digit/phantom interaction and the position of the digit in
order to minimize tissue damage and trauma during cochlear electrode
insertion. The digit provided tactile information from the
digit-phantom insertion interaction, such as contact status, tip
penetration, obstacles, relative shape and location, contact
orientation, and multiple contacts. The tests demonstrated that even
devices of such a relatively simple design and low cost have the
potential to improve cochlear implant surgery and other lumen
mapping applications by providing tactile sensory feedback and thus
controlling the insertion through sensing and control of the tip of
the implant during insertion. With this approach, the surgeon could
minimize the tissue damage and the potential damage to the delicate
structures within the cochlea caused by the current manual electrode
insertion. This approach can also be applied to other minimally
invasive surgery applications, as well as diagnosis and path
navigation procedures.
Abstract: Polymeric composites are being increasingly used as
repair materials for critical infrastructure such as buildings,
bridges, pressure vessels, piping, and pipelines. The technique used
to repair damaged pipes is one of the major concerns of pipeline
owners. Considerable research has been carried out on the repair of
corroded pipes using composite materials. This article presents a
short review of the subject to provide insight into the various
techniques used in repairing corroded pipes, focusing on a wide
range of composite repair systems. These systems include pre-cured
layered, flexible wet lay-up, pre-impregnated, split composite
sleeve, and flexible tape systems. Both the advantages and the
limitations of these repair systems are highlighted. Critical technical aspects have been
discussed through the current standards and practices. Research gaps
and future study scopes in achieving more effective design
philosophy are also presented.
Abstract: Web Usage Mining is the application of data mining
techniques to find usage patterns from web log data, so as to grasp
required patterns and serve the requirements of Web-based
applications. A user's experience on the internet may be improved by
minimizing the user's web access latency. This may be done by
predicting the pages a user will request next, so that they can be
prefetched and cached. Therefore, to enhance the standard of web
services, it is necessary to study user web navigation behavior.
Analysis of users' web navigation behavior is achieved through
modeling the web navigation history. We propose a technique that
clusters the user sessions based on the K-medoids algorithm.
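A minimal K-medoids (PAM-style) sketch is shown below. It operates on a precomputed session-dissimilarity matrix, which is exactly why K-medoids suits session clustering: sessions only need a pairwise distance, not a vector-space mean. The initialization and the toy distance in the usage example are illustrative assumptions:

```python
import numpy as np

def k_medoids(dist, k, max_iter=100):
    """PAM-style K-medoids on a precomputed (n x n) distance matrix."""
    medoids = list(range(k))  # simple deterministic initialization
    for _ in range(max_iter):
        # assign every session to its nearest medoid
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = []
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:               # keep the old medoid
                new_medoids.append(medoids[c])  # if a cluster empties
                continue
            # the new medoid is the member with the least total
            # distance to the rest of its cluster
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids.append(int(members[np.argmin(costs)]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return labels, medoids
```

Unlike K-means, the cluster representative is always an actual session, which keeps the result interpretable for navigation-pattern analysis.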
Abstract: The river Hindon is an important river catering to the
demands of a highly populated rural and industrial cluster of
western Uttar Pradesh, India. The water quality of the river Hindon is deteriorating at
an alarming rate due to various industrial, municipal and agricultural
activities. The present study aimed at identifying the pollution
sources and quantifying the degree to which these sources are
responsible for the deteriorating water quality of the river. Various
water quality parameters, like pH, temperature, electrical
conductivity, total dissolved solids, total hardness, calcium, chloride,
nitrate, sulphate, biological oxygen demand, chemical oxygen
demand, and total alkalinity were assessed. Water quality data
obtained from eight study sites over one year were subjected to two
multivariate techniques, namely principal component analysis
and cluster analysis. Principal component analysis was applied with
the aim of identifying spatial variability and the sources
responsible for the water quality of the river. Three varifactors were
obtained after varimax rotation of initial principal components using
principal component analysis. Cluster analysis was carried out to
classify sampling stations of certain similarity, which grouped eight
different sites into two clusters. The study reveals that the
anthropogenic influence (municipal, industrial, waste water and
agricultural runoff) was the major source of river water pollution.
Thus, this study illustrates the utility of multivariate statistical
techniques for analysis and elucidation of multifaceted data sets,
recognition of pollution sources/factors and understanding
temporal/spatial variations in water quality for effective river water
quality management.
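The PCA step can be sketched as follows: standardize the site-by-parameter data matrix and take its SVD; the right singular vectors are the component loadings, and the squared singular values give the explained variance. The varimax rotation used in the study to obtain the varifactors is omitted from this minimal sketch:

```python
import numpy as np

def principal_components(data, n_components):
    """PCA via SVD of the standardized data matrix (rows = samples,
    columns = water-quality parameters).  Returns the component
    loadings and the fraction of variance each component explains."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()
    return vt[:n_components], var[:n_components]
```

Parameters that load strongly on the same component (e.g. BOD and COD on an "organic pollution" factor) point to a common source, which is how the varifactors are interpreted.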
Abstract: Wavelength Division Multiplexing (WDM)
technology is the most promising technology for the proper
utilization of huge raw bandwidth provided by an optical fiber. One
of the key problems in implementing the all-optical WDM network is
the packet contention. This problem can be solved by several
different techniques. In the time-domain approach, packet contention
can be reduced by incorporating Fiber Delay Lines (FDLs) as optical
buffers in the switch architecture. Different types of buffering
architectures have been reported in the literature. In the present
paper, a comparative performance analysis of the three most popular
FDL architectures is presented in order to identify the best
contention resolution performance. The analysis is further extended to consider
the effect of different fiber non-linearities on the network
performance.
Abstract: In this paper, a prototype PEM fuel cell vehicle
integrated with a 1 kW air-blowing proton exchange membrane fuel
cell (PEMFC) stack as the main power source has been developed as a
lightweight cruising vehicle. The test vehicle is equipped with a
PEM fuel cell system that provides electric power to a brushed DC
motor. The vehicle was designed to compete with industrial
lightweight vehicles, with the target of consuming the least amount
of energy while delivering high performance. It is well established
in the literature that individual variations in driving style have a
significant impact on vehicle energy efficiency. The primary aim of
this study was to assess the power and fuel consumption of a
hydrogen fuel cell vehicle operated with three different driving
techniques (i.e., 25 km/h constant speed, 22-28 km/h speed range,
and 20-30 km/h speed range). The goal is to develop the best driving
strategy to maximize performance and minimize fuel consumption for the vehicle system.
The relationship between power demand and hydrogen consumption
has also been discussed. All the techniques can be evaluated and
compared on broadly similar terms. An automatic intelligent
controller was used to drive the prototype fuel cell vehicle over
different obstacles while maintaining all systems at maximum
efficiency. The results showed that the 25 km/h constant-speed
technique was optimal, giving the lowest fuel consumption.
Abstract: Average temperatures worldwide are expected to
continue to rise. At the same time, major cities in developing
countries are becoming increasingly populated and polluted.
Governments are tasked with the problem of overheating and air
quality in residential buildings. This paper presents the development
of a model, which is able to estimate the occupant exposure
to extreme temperatures and high air pollution within domestic
buildings. Building physics simulations were performed using the
EnergyPlus building physics software. An accurate metamodel is
then formed by randomly sampling building input parameters and
training on the outputs of EnergyPlus simulations. Metamodels are
used to vastly reduce the amount of computation time required when
performing optimisation and sensitivity analyses. Neural Networks
(NNs) have been compared to a Radial Basis Function (RBF)
algorithm when forming a metamodel. These techniques were
implemented using the PyBrain and scikit-learn python libraries,
respectively. NNs are shown to perform around 15% better than RBFs
when estimating overheating and air pollution metrics modelled by
EnergyPlus.
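The RBF half of the comparison can be illustrated with a tiny Gaussian RBF interpolant in plain NumPy. The study used scikit-learn and PyBrain; this stand-alone sketch only shows the mechanics — fit weights on sampled input/output pairs, then evaluate the surrogate cheaply in place of full simulations:

```python
import numpy as np

def fit_rbf(x, y, gamma=25.0):
    """Fit a 1-D Gaussian RBF interpolant: the metamodel reproduces the
    training outputs exactly and interpolates smoothly in between.
    In a real workflow, x would be sampled building parameters and y
    the corresponding EnergyPlus outputs."""
    phi = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    weights = np.linalg.solve(phi, y)
    return lambda q: np.exp(-gamma * (q[:, None] - x[None, :]) ** 2) @ weights
```

Because evaluating the metamodel is orders of magnitude cheaper than a building-physics run, optimisation and sensitivity analyses over many thousands of candidate designs become tractable.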
Abstract: Social networking sites such as Twitter and Facebook
attract over 500 million users across the world; for those users,
social life and even practical life have become intertwined with
these sites. Their interaction with social networking has affected
their lives forever. Accordingly, social networking sites have
become among the main channels responsible for the vast
dissemination of different kinds of information during real-time
events. This popularity of social networking has led to different
problems, including the possibility of exposing users to incorrect
information through fake accounts, which results in the spread of
malicious content during live events. This situation can cause huge
damage in the real world to society in general, including citizens,
business entities, and others. In this paper, we present a classification method for detecting the
fake accounts on Twitter. The study determines a minimal set of the
main factors that influence the detection of fake accounts on
Twitter, and then the determined factors are applied using different
classification techniques. A comparison of the results of these
techniques has been performed and the most accurate algorithm is
selected according to the accuracy of the results. The study has
been compared with several recent works in the same area, and this
comparison demonstrates the accuracy of the proposed approach. We
argue that this method can be applied continuously on the Twitter
social network to automatically detect fake accounts; moreover, it
can be applied to other social network sites, such as Facebook, with
minor changes according to the nature of the social network, as
discussed in this paper.
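The selection step — train several classifiers on account features, score each on held-out accounts, and keep the most accurate — can be sketched as below. The two toy classifiers and the two-dimensional features are purely illustrative placeholders; the paper's actual factor set and algorithms are not reproduced here:

```python
import numpy as np

class NearestCentroid:
    """Assign each account to the class whose feature centroid is closest."""
    def fit(self, x, y):
        self.classes = np.unique(y)
        self.centroids = np.array([x[y == c].mean(axis=0) for c in self.classes])
    def predict(self, x):
        d = np.linalg.norm(x[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.classes[np.argmin(d, axis=1)]

class OneNearestNeighbor:
    """Label each account like its single nearest training account."""
    def fit(self, x, y):
        self.x, self.y = x, y
    def predict(self, x):
        d = np.linalg.norm(x[:, None, :] - self.x[None, :, :], axis=2)
        return self.y[np.argmin(d, axis=1)]

def select_best(classifiers, x_tr, y_tr, x_te, y_te):
    """Train every candidate, score it on held-out accounts, and return
    the name of the most accurate classifier plus all scores."""
    scores = {}
    for name, clf in classifiers.items():
        clf.fit(x_tr, y_tr)
        scores[name] = float(np.mean(clf.predict(x_te) == y_te))
    return max(scores, key=scores.get), scores
```

The same harness works unchanged with richer classifiers (e.g. from scikit-learn), since only fit and predict are assumed.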
Abstract: Recently, job recommender systems have gained much
attention in industry, since they address the problem of
information overload on recruiting websites. We therefore propose
an Extended Personalized Job System that is capable of providing
appropriate jobs for job seekers and recommending suitable
information to them using data mining techniques and dynamic user
profiles. In addition, companies can interact with the system to
publish and update job information. The system supports multiple
platforms, including a web application and an Android mobile
application. In this paper, user profiles, implicit user actions,
user feedback, and clustering techniques from the WEKA libraries
were applied and implemented. Open-source tools such as the Yii Web
Application Framework, the Bootstrap front-end framework, and
Android mobile technology were also used.
Abstract: In order to utilize results from global climate models,
dynamical and statistical downscaling techniques have been
developed. For dynamical downscaling, usually a limited area
numerical model is used, with associated high computational cost.
This research proposes a dynamic equation for specific space-time
regional climate downscaling from the Educational Global Climate
Model (EdGCM) for Southeast Asia. The equation is for surface air
temperature and provides downscaled values of surface air
temperature at any specific location and time without running a
regional climate model. In the proposed equation, surface air
temperature is approximated from the ground temperature, the
sensible heat flux, and the 2 m wind speed. Results from the
application of the equation show that its errors are less than the
errors of direct interpolation from EdGCM.
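A hedged sketch of the kind of relation described — surface air temperature approximated from ground temperature, sensible heat flux, and 2 m wind speed — is a linear least-squares fit. The actual functional form and coefficients in the paper are derived from EdGCM output and are not reproduced here:

```python
import numpy as np

def fit_downscaling(t_ground, shf, wind, t_air):
    """Least-squares fit of the illustrative linear form
        T_air ~ a*T_ground + b*SHF + c*U_2m + d
    from co-located model output (all arrays of equal length)."""
    a = np.column_stack([t_ground, shf, wind, np.ones_like(t_ground)])
    coef, *_ = np.linalg.lstsq(a, t_air, rcond=None)
    return coef

def predict(coef, t_ground, shf, wind):
    """Evaluate the fitted downscaling relation at new points."""
    return coef[0] * t_ground + coef[1] * shf + coef[2] * wind + coef[3]
```

Once fitted, the relation gives a point estimate at any location and time from coarse-model fields alone, which is the stated advantage over running a regional climate model.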
Abstract: The aim of this paper is to propose a general
framework for storing, analyzing, and extracting knowledge from
two-dimensional echocardiographic images, color Doppler images,
non-medical images, and general data sets. A number of high
performance data mining algorithms have been used to carry out this
task. Our framework encompasses four layers, namely physical
storage, object identification, knowledge discovery, and the user level.
Techniques such as active contour model to identify the cardiac
chambers, pixel classification to segment the color Doppler echo
image, universal model for image retrieval, Bayesian method for
classification, parallel algorithms for image segmentation, etc., were
employed. Using the feature vector database that has been
efficiently constructed, one can perform various data mining tasks,
such as clustering and classification, with efficient algorithms,
along with image mining given a query image. All these facilities
are included in the framework, which is supported by a
state-of-the-art user interface (UI). The algorithms were tested
with actual patient data and the Corel image database, and the
results show that their performance is better than previously
reported results.