Abstract: Rapid prototyping (RP) techniques are a group of advanced manufacturing processes that can produce custom-made objects directly from computer data such as Computer Aided Design (CAD), Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data. Using RP fabrication techniques, constructs with controllable and complex internal architecture and appropriate mechanical properties can be achieved. One of the most attractive and promising applications of RP techniques is tissue engineering (TE) scaffold fabrication. A tissue engineering scaffold is a 3D construct that acts as a template for tissue regeneration. Although several conventional techniques, such as solvent casting and gas foaming, are used in scaffold fabrication, these processes yield scaffolds with poor interconnectivity and uncontrollable porosity. RP techniques have therefore become the preferred alternative for fabricating TE scaffolds. This paper reviews the current state of the art in tissue engineering scaffold fabrication using advanced RP processes, as well as the current limitations and future trends of RP scaffold-fabrication techniques.
Abstract: Today, among magnetic nanoparticles, biogenic magnetite nanoparticles have attracted particular attention because of their magnetic characteristics and potential applications in various fields such as therapeutics and diagnostics. A well-known example of these biogenic nanoparticles is the magnetosomes of magnetotactic bacteria. In this research, we used two different techniques, heat-alkaline treatment and sonication, for the isolation and purification of magnetosome nanoparticles from the isolated magnetotactic bacterial cells. We also evaluated the pyrogen content and sterility of the isolated individual magnetosomes by the Limulus Amoebocyte Lysate test and a direct impedimetric method, respectively.
Abstract: One of the primary uses of higher order statistics in signal processing has been the detection and estimation of non-Gaussian signals in Gaussian noise of unknown covariance. This is
motivated by the ability of higher order statistics to suppress additive
Gaussian noise. In this paper, several methods to test for non-
Gaussianity of a given process are presented. These methods include
histogram plot, kurtosis test, and hypothesis testing using cumulants
and bispectrum of the available sequence. The hypothesis testing is
performed by constructing a statistic to test whether the bispectrum
of the given signal is non-zero. A zero bispectrum is not a proof of
Gaussianity. Hence, other tests such as the kurtosis test should be
employed. Examples are given to demonstrate the performance of the
presented methods.
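A minimal sketch of the kurtosis test among the methods above (sample sizes and distributions are illustrative): the sample excess kurtosis is near zero for a Gaussian process and deviates markedly for a non-Gaussian one.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis E[(x-mu)^4]/sigma^4 - 3; zero for a Gaussian."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(0)
gaussian = rng.normal(size=200_000)
laplacian = rng.laplace(size=200_000)   # heavy-tailed, non-Gaussian

print(excess_kurtosis(gaussian))    # near 0
print(excess_kurtosis(laplacian))   # near 3 (the Laplace excess kurtosis)
```

As the abstract notes, a near-zero value does not prove Gaussianity on its own, which is why the bispectrum-based hypothesis test is used alongside it.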
Abstract: This paper addresses the problems encountered by conventional distance relays when protecting double-circuit transmission lines. The problems arise principally from the mutual coupling between the two circuits under different fault conditions; this mutual coupling is highly nonlinear in nature. An adaptive protection scheme based on an artificial neural network (ANN) is proposed for such lines. An ANN has the ability to capture the nonlinear relationship between measured signals by identifying different patterns of the associated signals. A key point of the present work is that only current signals measured at the local end have been used to detect and classify faults in a double-circuit transmission line with double-end infeed. The adaptive protection scheme is tested under a specific fault type but with varying fault location, fault resistance, fault inception angle, and with remote-end infeed. Once adequately trained, the neural network performs precisely when faced with different system parameters and conditions. The test results clearly show that the fault is detected and classified within a quarter cycle; thus the proposed adaptive protection technique is well suited for double-circuit transmission line fault detection and classification. Results of the performance studies show that the proposed neural network-based module can improve the performance of conventional fault selection algorithms.
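A minimal sketch of the pattern-classification idea (a small one-hidden-layer network trained on synthetic per-phase current features; the architecture, features, and fault magnitudes are illustrative assumptions, not the paper's design):

```python
import numpy as np

# Separate synthetic "healthy" vs "phase-A-to-ground fault" patterns using
# per-phase current magnitudes (Ia, Ib, Ic) as features. All values are toys.
rng = np.random.default_rng(1)
healthy = rng.normal([1.0, 1.0, 1.0], 0.05, size=(200, 3))    # balanced currents
a_g_fault = rng.normal([4.0, 1.0, 1.0], 0.30, size=(200, 3))  # raised phase-A current
X = np.vstack([healthy, a_g_fault])
X = (X - X.mean(axis=0)) / X.std(axis=0)                      # standardize features
y = np.concatenate([np.zeros(200), np.ones(200)])

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0, 0.5, (3, 4)); b1 = np.zeros(4)             # hidden layer
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)             # output neuron
lr = 0.5

for _ in range(2000):                                         # batch gradient descent
    h = sig(X @ W1 + b1)
    p = sig(h @ W2 + b2).ravel()
    g_out = ((p - y) / len(y))[:, None]                       # cross-entropy gradient
    g_h = (g_out @ W2.T) * h * (1 - h)                        # backprop to hidden layer
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum()
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

accuracy = ((sig(sig(X @ W1 + b1) @ W2 + b2).ravel() > 0.5) == y).mean()
print(accuracy)
```

The point of the sketch is only that a small network can learn a nonlinear decision boundary from local current measurements; the paper's relay uses its own feature set and training data.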
Abstract: This paper examines the impact of information and communication technology (ICT) usage, internal relationships, the supplier-retailer relationship, logistics services and inventory management on convenience store suppliers' performance. Data were collected from 275 convenience store managers in Malaysia using a questionnaire. The multiple linear regression results indicate that inventory management, the supplier-retailer relationship, logistics services and internal relationships are predictors of supplier performance as perceived by convenience store managers. However, ICT usage is not a predictor of supplier performance. The study focuses only on convenience stores and petrol station convenience stores and concentrates only on managers. The results provide insights to suppliers who serve convenience stores, and possibly similar retail formats, on factors to consider in improving their service to retailers. The results also provide insights to the government, in its aspiration to improve the business operations of convenience stores, on ways to enhance the adoption of ICT by retailers and suppliers.
Abstract: The state of the art in instructional design for computer-assisted learning has been strongly influenced by advances in information technology, the Internet and Web-based systems. The emphasis of educational systems has shifted from training to learning. Course delivery has also changed from large, inflexible content to sequences of small learning objects. The concept of learning objects, together with advanced Web and communication technologies, supports the reusability, interoperability, and accessibility design criteria currently exploited by most learning systems. These concepts enable just-in-time learning. We propose to extend these design criteria further to include a learnability concept that helps adapt content to the needs of learners. The learnability concept offers better personalization, leading to the creation and delivery of course content more appropriate to the performance and interests of each learner. In this paper we present a new framework for learning environments containing knowledge discovery as a tool to automatically learn patterns of learning behavior from learners' profiles and history.
Abstract: In this article, a clustering-based technique has been developed and implemented for short-term load forecasting. The formulation uses the Mean Absolute Percentage Error (MAPE) as the objective function, with the data matrix and cluster size as optimization variables. The designed model uses two temperature variables. It is compared with a six-input Radial Basis Function Neural Network (RBFNN) and a Fuzzy Inference Neural Network (FINN) on data for the same system over the same time period. The fuzzy inference system has the network structure and training procedure of a neural network and initially creates a rule base from existing historical load data. It is observed that the proposed clustering-based model gives better forecasting accuracy than the other two methods. Test results also indicate that the RBFNN can forecast future loads with accuracy comparable to that of the proposed method, whereas the training time required in the case of FINN is much less.
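The MAPE objective mentioned above can be written compactly; the toy load values are illustrative:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error: (100/N) * sum(|A_t - F_t| / |A_t|)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Toy hourly loads (MW) and their forecasts:
print(mape([100, 200, 400], [110, 190, 400]))  # -> 5.0
```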
Abstract: An ECG analysis method was developed using ROC analysis of PVC detection algorithms. ECG signals from the MIT-BIH arrhythmia database were analyzed in MATLAB. First, the baseline was removed with a median filter to preprocess the ECG signal. R peaks were detected for the ECG analysis method, and a normal VCG was extracted for the VCG analysis method. Four PVC detection algorithms were analyzed by ROC curves, whose parameters are the maximum amplitude of the QRS complex, the width of the QRS complex, the R-R interval, and the geometric mean of the VCG. To set the cut-off values of the parameters, the ROC curve was estimated from the true-positive rate (sensitivity) and the false-positive rate. The sensitivity and the true-negative rate (specificity) of the ROC curve were calculated, and the ECG was analyzed using the cut-off values estimated from the ROC curve. As a result, the PVC detection algorithm based on the VCG geometric mean shows high utility, and PVCs could be detected more accurately when combined with the amplitude and width of the QRS complex.
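One common way to set such a cut-off from the ROC quantities is Youden's J statistic (sensitivity + specificity - 1); the criterion and the synthetic beat scores below are illustrative assumptions, since the abstract does not name the selection rule:

```python
import numpy as np

def roc_best_cutoff(scores, labels):
    """Scan thresholds; return the one maximizing Youden's J = sens + spec - 1."""
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)              # true-positive rate
        spec = tn / (tn + fp)              # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t

rng = np.random.default_rng(0)
normal_scores = rng.normal(0.0, 1.0, 500)  # e.g. QRS width of normal beats
pvc_scores = rng.normal(3.0, 1.0, 500)     # wider QRS for PVC beats
scores = np.concatenate([normal_scores, pvc_scores])
labels = np.concatenate([np.zeros(500, int), np.ones(500, int)])
cutoff = roc_best_cutoff(scores, labels)
print(cutoff)  # near the midpoint of the two score distributions
```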
Abstract: Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of a NER system for Bengali and Hindi using a Support Vector Machine (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus. The lexical patterns have been used as features of the SVM in order to improve system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation results have demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show an improvement in the f-score of 5.13% with the use of context patterns.
Statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.
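A sketch of the kind of contextual word-window features such an SVM tagger consumes; the feature names, window size, and example sentence are illustrative assumptions, not the paper's exact feature set:

```python
# Build a feature dictionary for the token at position i, combining the word
# itself, shape/suffix cues, and the surrounding context window.
def word_features(tokens, i, window=2):
    feats = {
        "word": tokens[i].lower(),
        "is_title": tokens[i][0].isupper(),   # capitalization cue for names
        "suffix3": tokens[i][-3:].lower(),    # crude morphological feature
    }
    for off in range(-window, window + 1):    # neighbors within +/- window
        if off != 0 and 0 <= i + off < len(tokens):
            feats[f"ctx{off:+d}"] = tokens[i + off].lower()
    return feats

sent = "Rabindranath Tagore lived in Kolkata".split()
print(word_features(sent, 0))
```

In practice each such dictionary would be vectorized (e.g. one-hot) before being fed to the SVM, one training instance per token.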
Abstract: In this article, an evolutionary technique has been used for the solution of nonlinear Riccati differential equations of fractional order. In this method, a genetic algorithm is used as a competent global search method, hybridized with an active-set algorithm for efficient local search. The proposed method has been successfully applied to solve different forms of Riccati differential equations. The strength of the proposed method lies in its equal applicability to the integer-order case as well as the fractional-order case. The method has been compared with standard numerical techniques as well as analytic solutions. It is found that the designed method can provide solutions with better accuracy than its deterministic counterparts. Another advantage of the given approach is that it provides results on the entire finite continuous domain, unlike other numerical methods, which provide solutions only on a discrete grid of points.
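An illustrative sketch of the genetic-algorithm global search: the truncation selection, Gaussian mutation, one-parameter trial solution, and the integer-order test equation y' = 1 - y^2 (exactly solved by y = tanh(t)) are demonstration assumptions, not the paper's exact operators:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)

def residual(a):
    """Mean-squared residual of y' = 1 - y^2 for the trial y(t) = tanh(a*t)."""
    y = np.tanh(a * t)
    dy = a / np.cosh(a * t) ** 2        # analytic derivative of tanh(a*t)
    return np.mean((dy - (1.0 - y ** 2)) ** 2)

pop = rng.uniform(-3, 3, 40)            # initial population of parameter 'a'
for _ in range(100):
    fit = np.array([residual(a) for a in pop])
    parents = pop[np.argsort(fit)[:20]]             # truncation selection (elitist)
    children = parents + rng.normal(0, 0.1, 20)     # Gaussian mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmin([residual(a) for a in pop])]
print(best)  # approaches 1.0, recovering the exact solution y = tanh(t)
```

The paper's hybrid would hand a candidate like `best` to an active-set local search for refinement; here the GA alone suffices on this one-dimensional toy.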
Abstract: Surface metrology with image processing is a challenging task with wide applications in industry. Surface roughness can be evaluated using a texture classification approach. An important aspect here is the appropriate selection of features that characterize the surface. We propose an effective combination of features for multi-scale and multi-directional analysis of engineering surfaces. The features include the standard deviation, kurtosis and the Canny edge detector. We apply the method by analyzing the surfaces with the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DT-CWT). We used the Canberra distance metric for similarity comparison between the surface classes. Our database includes surface textures manufactured by three machining processes, namely milling, casting and shaping. The comparative study shows that DT-CWT outperforms DWT, giving a correct classification performance of 91.27% with the Canberra distance metric.
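The Canberra distance used for the similarity comparison can be sketched as follows (the zero-denominator convention is an implementation assumption):

```python
import numpy as np

def canberra(x, y):
    """Canberra distance: sum over i of |x_i - y_i| / (|x_i| + |y_i|)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    denom = np.abs(x) + np.abs(y)
    mask = denom > 0                    # convention: 0/0 terms contribute 0
    return np.sum(np.abs(x - y)[mask] / denom[mask])

# Two toy feature vectors (e.g. per-subband statistics of two textures):
print(canberra([1.0, 2.0, 0.0], [1.0, 4.0, 0.0]))  # -> 1/3
```

Because each term is normalized by the magnitudes of its coordinates, the metric weights small-valued features as strongly as large ones, which suits heterogeneous feature sets like the one above.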
Abstract: Optimal reactive power flow is an optimization problem with one or more objectives, such as minimizing the active power losses for a fixed generation schedule. The control variables are generator bus voltages, transformer tap settings and the reactive power output of the compensating devices placed on different bus bars. The Biogeography-Based Optimization (BBO) technique has been applied to solve different kinds of optimal reactive power flow problems subject to operational constraints such as the power balance constraint and line flow and bus voltage limits. BBO searches for the global optimum mainly through two steps: migration and mutation. In the present work, BBO has been applied to solve the optimal reactive power flow problem on the IEEE 30-bus and standard IEEE 57-bus power systems for minimization of active power loss. The superiority of the proposed method has been demonstrated. Considering the quality of the solutions obtained, the proposed method appears to be a promising one for solving these problems.
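A minimal sketch of the migration and mutation steps, applied to a stand-in sphere objective rather than a power-flow loss; the population size, rates and mutation scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 20, 4
pop = rng.uniform(-5, 5, (n, dim))             # habitats = candidate solutions
objective = lambda x: np.sum(x ** 2, axis=-1)  # stand-in for active power loss

for _ in range(200):
    pop = pop[np.argsort(objective(pop))]      # rank habitats best-to-worst
    mu = np.linspace(1.0, 0.0, n)              # emigration rate: best shares most
    lam = 1.0 - mu                             # immigration rate: worst receives most
    new = pop.copy()
    for i in range(n):
        for d in range(dim):
            if rng.random() < lam[i]:          # migration: import feature d
                j = rng.choice(n, p=mu / mu.sum())
                new[i, d] = pop[j, d]
            if rng.random() < 0.1:             # mutation: small Gaussian step
                new[i, d] += rng.normal(0.0, 0.3)
    new[0] = pop[0]                            # elitism: keep the best habitat
    pop = new

best = objective(pop).min()
print(best)  # far below the initial population's best objective value
```

In the actual reactive power flow problem the habitat vector would hold bus voltages, tap settings and compensator outputs, with constraint handling added to the objective.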
Abstract: In recent years, rapid advances in information technology software and hardware, along with a digital imaging revolution in the medical domain, have facilitated the generation and storage of large collections of images by hospitals and clinics. Searching these large image collections effectively and efficiently poses significant technical challenges and raises the necessity of constructing intelligent retrieval systems. Content-Based Image Retrieval (CBIR) consists of retrieving the most visually similar images to a given query image from a database of images [5]. Medical CBIR applications pose unique challenges but at the same time offer many new opportunities: while one can easily understand news or sports videos, a medical image is often completely incomprehensible to untrained eyes.
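A generic toy sketch of the CBIR idea described above (histogram matching with an L1 distance on synthetic images; not the method of any specific medical system):

```python
import numpy as np

def histogram(img, bins=16):
    """Normalized grayscale histogram as a simple global image descriptor."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def retrieve(query, database):
    """Return the index of the database image most similar to the query."""
    qh = histogram(query)
    dists = [np.abs(qh - histogram(img)).sum() for img in database]  # L1 distance
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
bright = rng.integers(150, 256, (32, 32))  # synthetic bright image
dark = rng.integers(0, 100, (32, 32))      # synthetic dark image
query = rng.integers(0, 100, (32, 32))     # another dark image as the query
print(retrieve(query, [bright, dark]))     # -> 1 (the dark image)
```

Real medical CBIR systems replace the global histogram with far richer descriptors (texture, shape, region features), but the retrieve-by-nearest-descriptor loop has the same shape.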
Abstract: Electromagnetic interference (EMI) is one of the serious problems in most electrical and electronic appliances, including fluorescent lamps. The electronic ballast used to regulate the power flow through the lamp is the major cause of EMI; the interference is due to the high-frequency switching operation of the ballast. Formerly, some EMI mitigation techniques were in practice, but they were not satisfactory because of the hardware complexity of the circuit design, increased parasitic components, power consumption, and so on. Most researchers have focused only on EMI mitigation without considering other constraints such as cost and the effective operation of the equipment. In this paper, we propose a technique for EMI mitigation in fluorescent lamps that integrates frequency modulation and evolutionary programming. With the frequency modulation technique, switching at a single central frequency is spread over a range of frequencies, so the power is distributed throughout that range, mitigating EMI. However, to meet the operating frequency of the ballast and the operating power of the fluorescent lamps, an optimal modulation index is necessary for the frequency modulation; this index is determined using evolutionary programming. Thereby, the proposed technique mitigates EMI to a satisfactory level without disturbing the operation of the fluorescent lamp.
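The spreading mechanism can be illustrated numerically; all frequencies and the modulation index below are assumed values for demonstration, not the optimized ones from the paper:

```python
import numpy as np

fs = 1_000_000                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)       # 10 ms window
fc = 50_000                          # central switching frequency, Hz (assumed)
fm, beta = 500, 10                   # modulating frequency and index (assumed)

# Fixed-frequency switching waveform vs a frequency-modulated one:
fixed = np.sign(np.sin(2 * np.pi * fc * t))
modulated = np.sign(np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t)))

# Peak spectral-line magnitude: the quantity EMI limits are sensitive to.
peak = lambda x: np.max(np.abs(np.fft.rfft(x))) / len(x)
print(peak(fixed), peak(modulated))  # the modulated peak is noticeably lower
```

The total switching power is unchanged; modulation only redistributes it across sidebands around the carrier, which is exactly the mitigation effect the abstract describes.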
Abstract: This paper proposes a copyright protection scheme for color images using secret sharing and wavelet transform. The scheme contains two phases: the share image generation phase and the watermark retrieval phase. In the generation phase, the proposed scheme first converts the image into the YCbCr color space and creates a special sampling plane from the color space. Next, the scheme extracts the features from the sampling plane using the discrete wavelet transform. Then, the scheme employs the features and the watermark to generate a principal share image. In the retrieval phase, an expanded watermark is first reconstructed using the features of the suspect image and the principal share image. Next, the scheme reduces the additional noise to obtain the recovered watermark, which is then verified against the original watermark to examine the copyright. The experimental results show that the proposed scheme can resist several attacks such as JPEG compression, blurring, sharpening, noise addition, and cropping. The accuracy rates are all higher than 97%.
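As a heavily simplified illustration of the share-generation and retrieval idea: a toy XOR-based (2,2) sharing of a random binary watermark. The paper's actual construction builds the principal share from YCbCr sampling-plane features and discrete wavelet coefficients, which this sketch omits entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
watermark = rng.integers(0, 2, (8, 8))       # binary watermark to protect
share1 = rng.integers(0, 2, (8, 8))          # random share (reveals nothing alone)
share2 = watermark ^ share1                  # "principal" share: XOR complement

recovered = share1 ^ share2                  # combining both shares recovers it
print(np.array_equal(recovered, watermark))  # -> True
```

The security property mirrored here is that either share alone is uniformly random; only the combination reveals the watermark.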
Abstract: The analysis of the Acoustic Emission (AE) signal generated by metal cutting processes has often been approached statistically. This is due to the stochastic nature of the emission signal, a result of the factors affecting the signal from its generation through transmission and sensing. Different techniques are applied in this manner, each of which is suitable for certain processes. In metal cutting, where the emission generated by the deformation process is rather continuous, an appropriate method for analysing the AE signal based on its root mean square (RMS) is often used and is suitable for use with conventional signal processing systems. The aim of this paper is to set out a strategy for tool failure detection in turning processes via statistical analysis of the AE generated from the cutting zone. The strategy is based on investigating the distribution moments of the AE signal at predetermined sampling intervals; the skewness and kurtosis of these distributions are the key elements in the detection. A normal (Gaussian) distribution was first suggested but then eliminated as insufficient. The so-called Beta distribution was then considered; used with an assumed β density function, it has given promising results with regard to chipping and tool breakage detection.
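A sketch of the moment-based monitoring idea, using synthetic stand-in AE-RMS samples (the Beta-distributed windows and the spike amplitude are illustrative assumptions): a breakage event that injects high-amplitude bursts sharply raises both skewness and kurtosis.

```python
import numpy as np

def skew_kurt(x):
    """Sample skewness and (non-excess) kurtosis of a distribution window."""
    x = np.asarray(x, float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean()

rng = np.random.default_rng(0)
steady = rng.beta(2, 5, 2000)                       # stable-cutting AE-RMS window
burst = np.concatenate([rng.beta(2, 5, 1900),
                        rng.beta(2, 5, 100) + 3.0]) # window with breakage spikes

s_steady, k_steady = skew_kurt(steady)
s_burst, k_burst = skew_kurt(burst)
print(s_steady, k_steady)
print(s_burst, k_burst)   # markedly larger skewness and kurtosis
```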
Abstract: We study a new technique for optimal data compression
subject to conditions of causality and different types of memory. The
technique is based on the assumption that some information about
compressed data can be obtained from a solution of the associated
problem without constraints of causality and memory. This allows
us to consider two separate problems, related to compression and decompression, subject to those constraints. Their solutions are given
and the analysis of the associated errors is provided.
Abstract: Erroneous computer entry problems (here: 'e-errors') in hospital labs threaten the patient–health carer relationship and undermine the credibility of the health system. Are e-errors random, made accidentally by lab professionals, or can they be traced to meaningful determinants? Theories on the internal causality of mistakes compel us to seek specific causal ascriptions of hospital lab e-errors instead of accepting some inescapability. Undeniably, 'To Err is Human'. But in view of rapid global changes in health organizations, e-errors are too expensive to go without in-depth consideration. Yet whether that e-function might supposedly be entrenched in the health carers' job description remains under dispute, at least for Hellenic labs, where e-use falls behind generalized(able) appreciation and application. In this study: i) an empirical basis of a truly high annual cost of e-errors, at about €498,000.00 per rural Hellenic hospital, was established, so interest in exploring the issue was sufficiently substantiated; ii) a sample of 270 lab-expert nurses, technicians and doctors was assessed on several personality, burnout and e-error measures; and iii) the hypothesis that the Hardiness vs Alienation personality construct disposition explains resistance vs proclivity to e-errors was tested and verified: Hardiness operates as a source of resilience in the encounter with the high pressures experienced in the hospital lab, whereas its 'opposite', Alienation, functions as a predictor not only of making e-errors but also of leading to burnout. Implications for apt interventions are discussed.
Abstract: Biometric methods include recognition techniques based on the fingerprint, iris, hand geometry, voice, face, ears and gait. The gait recognition approach has some advantages: for example, it does not require the prior cooperation of the observed subject, and it can record many biometric features for deeper analysis. However, most research proposals have a high computational cost. This paper presents a gait recognition system with feature extraction on a bounding rectangle drawn over the observed person. Statistical results on a database of 500 videos are shown.
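A toy sketch of one low-cost feature that can be computed from such a bounding rectangle (the per-frame aspect ratio of the silhouette's bounding box, a commonly used descriptor; the paper's exact features are not specified here):

```python
import numpy as np

def bbox_aspect(silhouette):
    """Width/height ratio of the bounding box of a binary silhouette mask."""
    ys, xs = np.nonzero(silhouette)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return w / h

frame = np.zeros((60, 40), int)
frame[10:50, 15:25] = 1          # a 40-tall, 10-wide synthetic "person"
print(bbox_aspect(frame))        # -> 0.25
```

Tracked over a walking sequence, this ratio oscillates with the stride, and the oscillation pattern itself becomes a cheap gait signature.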
Abstract: In this paper, a procedure for the split-pipe design of looped water distribution networks based on simulated annealing is proposed. Simulated annealing is a heuristic search algorithm, motivated by an analogy to physical annealing in solids, that is capable of solving combinatorial optimization problems. In contrast to split-pipe designs derived from a continuous diameter design, as implemented in conventional optimization techniques, the split-pipe design proposed in this paper is derived from a discrete diameter design in which pipe diameters are chosen directly from a specified set of commercial pipes. The optimality and feasibility of the solutions are guaranteed by the proposed method. The performance of the proposed procedure is demonstrated by solving three well-known water distribution network problems taken from the literature. Simulated annealing provides very promising solutions, and the lowest-cost solutions are found for all of these test problems. The results obtained from these applications show that simulated annealing is able to handle the combinatorial optimization problem of least-cost water distribution network design. The technique can be considered an alternative tool for similar areas of research, and further applications and improvements of the technique are expected.
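The annealing loop over discrete commercial diameters can be sketched as follows; the toy cost function, the penalty, the diameter set and the per-link hydraulic minima are illustrative assumptions, not the paper's split-pipe formulation:

```python
import math
import random

random.seed(0)
DIAMETERS = [80, 100, 150, 200, 250, 300]   # mm, assumed commercial pipe set
REQUIRED = [150, 100, 250]                  # assumed hydraulic minimum per link

def cost(design):
    """Toy pipe cost plus a large penalty for each hydraulically undersized link."""
    c = sum(d ** 1.5 for d in design)
    penalty = sum(10_000 for d, r in zip(design, REQUIRED) if d < r)
    return c + penalty

state = [random.choice(DIAMETERS) for _ in REQUIRED]
T = 1000.0
while T > 0.1:
    nxt = state[:]
    i = random.randrange(len(nxt))
    nxt[i] = random.choice(DIAMETERS)       # perturb one link's diameter
    delta = cost(nxt) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / T):
        state = nxt                         # accept downhill, or uphill with prob e^(-d/T)
    T *= 0.99                               # geometric cooling schedule

print(state, cost(state))
```

The uphill-acceptance rule is what lets the search escape local minima early on, while the cooling schedule makes it increasingly greedy, so the final design is feasible and low-cost.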