Abstract: The Health Level Seven (HL7) Reference Information Model (RIM) consists of six backbone classes, each with its own specialized attributes. Furthermore, to enforce semantic expression, specific mandatory vocabulary domains have been defined to represent the content values of certain attributes. Because of the variety of clinical workflows, most hospitals spend considerable time and staff effort developing and modifying Clinical Information Systems (CIS), a largely duplicated effort. This study therefore designs and develops shared RIM-based components of the CIS for different business processes, so that the CIS contains data of a consistent format and type. Programmers can perform transactions with the RIM-based clinical repository through the shared RIM-based components, and the shared components can also be adopted when developing new functions of the CIS. These components not only satisfy physicians' needs in using a CIS but also reduce the time required to develop new system components. All in all, this study provides a new viewpoint: integrating data and functions with the business processes is an easy and flexible approach to building a new CIS.
Abstract: The equivalence class subset algorithm is a powerful
tool for solving a wide variety of constraint satisfaction problems and
is based on the use of a decision function which has a very high but
not perfect accuracy. Perfect accuracy is not required in the decision
function as even a suboptimal solution contains valuable information
that can be used to help find an optimal solution. In the hardest
problems, the decision function can break down leading to a
suboptimal solution where there are more equivalence classes than
are necessary and which can be viewed as a mixture of good decisions
and bad decisions. By choosing a subset of the decisions made in
reaching a suboptimal solution, an iterative technique can lead to an
optimal solution, using a series of steadily improved suboptimal
solutions. The goal is to reach an optimal solution as quickly as
possible. Various techniques for choosing the decision subset are
evaluated.
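The abstract does not pin down a concrete constraint problem, so the iterate-on-a-decision-subset idea is sketched below on graph coloring, where color classes play the role of equivalence classes and a greedy assignment is the imperfect decision function. All names are hypothetical; this illustrates the technique, not the authors' implementation.

```python
import random

def greedy_color(nodes, adj, fixed=None):
    """Imperfect decision function: give each node the smallest color
    not used by an already-colored neighbour (may over-color)."""
    colors = dict(fixed or {})
    for v in nodes:
        if v in colors:
            continue
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

def iterate(nodes, adj, rounds=50, keep=0.5, seed=0):
    """Keep a random subset of the decisions from the current suboptimal
    solution, redo the rest in a new order, and retain any improvement."""
    rng = random.Random(seed)
    best = greedy_color(nodes, adj)
    for _ in range(rounds):
        kept = {v: c for v, c in best.items() if rng.random() < keep}
        order = nodes[:]
        rng.shuffle(order)
        cand = greedy_color(order, adj, fixed=kept)
        if len(set(cand.values())) < len(set(best.values())):
            best = cand            # fewer equivalence classes: accept
    return best
```

The accepted-only-if-better rule mirrors the series of steadily improved suboptimal solutions described above.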
Abstract: The lack of any centralized infrastructure in mobile ad
hoc networks (MANET) is one of the greatest security concerns in
the deployment of wireless networks. Thus communication in
MANET functions properly only if the participating nodes cooperate
in routing without any malicious intention. However, some of the
nodes may be malicious in their behavior, by indulging in flooding
attacks on their neighbors. Some others may act malicious by
launching active security attacks like denial of service. This paper
reviews several related works on trust evaluation and
establishment in ad hoc networks, as well as related works on
flooding attack prevention. A new trust approach based on the extent
of friendship between the nodes is proposed, which makes the nodes
cooperate to prevent flooding attacks in an ad hoc environment.
The performance of the trust algorithm is tested in an ad hoc network
implementing the Ad hoc On-demand Distance Vector (AODV)
protocol.
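The abstract does not give the exact friendship-based trust formula; the sketch below shows one plausible update rule in which a neighbour's tolerated route-request (RREQ) rate grows with its friendship level, so flooding by non-friends erodes trust fastest. The function and parameter names are assumptions, not taken from the paper.

```python
def update_trust(trust, friendship, rreq_rate, rate_limit=10.0, alpha=0.1):
    """Hypothetical trust update: penalize neighbours whose RREQ rate
    exceeds a friendship-weighted limit; otherwise rebuild trust slowly.
    trust and friendship are assumed to lie in [0, 1]."""
    allowed = rate_limit * (1.0 + friendship)   # friends get more slack
    if rreq_rate > allowed:
        trust -= alpha * (rreq_rate - allowed) / allowed
    else:
        trust += alpha * (1.0 - trust)
    return max(0.0, min(1.0, trust))            # clamp to [0, 1]
```

A node whose trust falls below a threshold would then be excluded from AODV route discovery.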
Abstract: In this paper, multi-processor job shop scheduling problems are solved by a heuristic algorithm based on a hybrid of priority dispatching rules driven by an ant colony optimization algorithm. The objective function is to minimize the makespan, i.e. the total completion time, while the simultaneous presence of various kinds of pheromones is allowed. Using a suitable hybrid of priority dispatching rules improves the process of finding the best solution. The ant colony optimization algorithm not only strengthens the proposed algorithm but also decreases the total working time, by reducing setup times and modifying the production line so that similar jobs share the same production lines. Another advantage of this algorithm is that similar (not identical) machines can be considered, so these machines are able to process a job with different processing and setup times. To evaluate this capability and the algorithm itself, a number of test problems are solved and the associated results are analyzed. The results show a significant decrease in throughput time, and also show that the algorithm is able to recognize the bottleneck machine and to schedule jobs efficiently.
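As a rough sketch of the pheromone-driven choice among priority dispatching rules: each rule carries its own pheromone trail, ants pick a rule by roulette-wheel selection, and trails evaporate and are reinforced in proportion to solution quality. The specific rules, parameters, and update formula below are assumptions, not taken from the paper.

```python
import random

RULES = ["SPT", "LPT", "FIFO"]        # candidate priority dispatching rules

def choose_rule(pheromone, rng):
    """Roulette-wheel selection proportional to pheromone levels."""
    total = sum(pheromone.values())
    r = rng.random() * total
    acc = 0.0
    for rule in RULES:
        acc += pheromone[rule]
        if r <= acc:
            return rule
    return RULES[-1]

def update_pheromone(pheromone, rule, makespan, rho=0.1, q=100.0):
    """Evaporate all trails, then reward the rule used in proportion to
    solution quality (shorter makespan => larger deposit)."""
    for k in pheromone:
        pheromone[k] *= (1.0 - rho)
    pheromone[rule] += q / makespan
    return pheromone
```

Over many iterations the rule mix that yields short makespans accumulates pheromone and is chosen more often.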
Abstract: Economic Load Dispatch (ELD) is a method of determining
the most efficient, low-cost and reliable operation of a power
system by dispatching available electricity generation resources to
supply load on the system. The primary objective of economic
dispatch is to minimize total cost of generation while honoring
operational constraints of available generation resources. In this paper
an intelligent water drop (IWD) algorithm has been proposed to
solve ELD problem with an objective of minimizing the total cost of
generation. The intelligent water drop algorithm is a swarm-based,
nature-inspired optimization algorithm modeled on natural rivers. A
natural river often finds good paths among the many possible paths
on its way from source to destination, finally reaching a near-optimal
path to its destination. These ideas are embedded into
the proposed algorithm for solving economic load dispatch problem.
The main advantages of the proposed technique are that it is easy to
implement and capable of finding a feasible, near-global-optimal solution with
less computational effort. In order to illustrate the effectiveness of
the proposed method, it has been tested on 6-unit and 20-unit test
systems with incremental fuel cost functions taking into account the
valve-point loading effects. Numerical results show that the
proposed method has good convergence properties and better solution
quality than other algorithms reported in the recent literature.
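The valve-point loading effect mentioned above is conventionally modeled by adding a rectified sine term to the quadratic fuel cost; below is a minimal sketch of that standard cost function (the coefficient names are the usual a, b, c, e, f of the ELD literature, not values from the paper).

```python
import math

def unit_cost(p, a, b, c, e=0.0, f=0.0, pmin=0.0):
    """Fuel cost of one unit at output p (MW):
    quadratic term plus the rectified-sine valve-point term."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (pmin - p)))

def total_cost(powers, coeffs):
    """Total generation cost: sum of the unit costs; the dispatch must
    separately satisfy the power-balance and limit constraints."""
    return sum(unit_cost(p, *k) for p, k in zip(powers, coeffs))
```

An algorithm such as IWD then searches the space of feasible power vectors for the minimum of this total cost.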
Abstract: We have developed an analytic model for the radial p-n junction in a nanowire (NW) core-shell structure, to be utilized as a new building block in different semiconductor devices. The potential distribution through the p-n junction is calculated, and analytical expressions are derived to compute the depletion region widths. We show that the widths of the space charge layers surrounding the core are functions of the core radius, a manifestation of the so-called classical size effect. In the asymptotic limit of infinitely large core radius, the relationship between the depletion layer width and the built-in potential transforms to the square-root dependence specific to conventional planar p-n junctions. An explicit equation is derived to compute the capacitance of the radial p-n junction. The current-voltage behavior is also carefully determined, taking into account the “short base” effects.
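For reference, the planar limit invoked above is the standard abrupt-junction result (a textbook relation, not a formula taken from the paper):

```latex
W \;=\; \sqrt{\frac{2\,\varepsilon\,V_{bi}}{q}\left(\frac{1}{N_A} + \frac{1}{N_D}\right)}
```

where ε is the permittivity, V_bi the built-in potential, q the elementary charge, and N_A, N_D the acceptor and donor concentrations; the abstract states that the radial-junction widths reduce to this square-root dependence on V_bi as the core radius grows.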
Abstract: Modern highly automated production systems face problems of reliability. Machine reliability affects the productivity rate and the efficient use of expensive industrial facilities. Predicting reliability has become an important research area and involves complex mathematical methods and calculations. The reliability of high-productivity automatic technological machines, which consist of complex mechanical, electrical and electronic components, is important because the failure of these units results in major economic losses for production systems. The reliability of transport and feeding systems for automatic technological machines is also important, because a transport failure stops the technological machines. This paper presents reliability engineering of the feeding system, and its components, that transports complex-shaped parts to automatic machines. It also discusses the calculation of the reliability parameters of the feeding unit by applying probability theory. Equations are produced for calculating the limits of the geometrical sizes of feeders and the probability of the transported parts sticking in the chute, representing the reliability of feeders as a function of their geometrical parameters.
Abstract: The acoustic and articulatory properties of fricative speech sounds are being studied using magnetic resonance imaging (MRI) and acoustic recordings from a single subject. Area functions were derived from a complete set of axial and coronal MR slices using two different methods: the Mermelstein technique and the Blum transform. Area functions derived from the two techniques were shown to differ significantly in some cases. Such differences will lead to different acoustic predictions and it is important to know which is the more accurate. The vocal tract acoustic transfer function (VTTF) was derived from these area functions for each fricative and compared with measured speech signals for the same fricative and same subject. The VTTFs for /f/ in two vowel contexts and the corresponding acoustic spectra are derived here; the Blum transform appears to show a better match between prediction and measurement than the Mermelstein technique.
Abstract: This paper studies the effect of different compression
constraints and schemes presented in a new and flexible paradigm to
achieve high compression ratios and acceptable signal to noise ratios
of Arabic speech signals. Compression parameters are computed for
variable frame sizes of a level 5 to 7 Discrete Wavelet Transform
(DWT) representation of the signals for different analyzing mother
wavelet functions. Results are obtained and compared for global-threshold and level-dependent threshold techniques, and include comparisons of signal-to-noise ratio, peak signal-to-noise ratio, and normalized root-mean-square error.
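The three figures of merit listed above can be computed as follows; this is a generic sketch of the standard definitions (peak and normalization conventions vary between papers, so the exact forms here are assumptions).

```python
import numpy as np

def snr_db(x, y):
    """Signal-to-noise ratio (dB) of reconstruction y against original x."""
    return 10.0 * np.log10(np.sum(x**2) / np.sum((x - y)**2))

def psnr_db(x, y):
    """Peak signal-to-noise ratio (dB), using the peak of the original."""
    mse = np.mean((x - y)**2)
    return 10.0 * np.log10(np.max(np.abs(x))**2 / mse)

def nrmse(x, y):
    """Root-mean-square error normalized by the original's range."""
    return np.sqrt(np.mean((x - y)**2)) / (np.max(x) - np.min(x))
```

These metrics would be evaluated on the signal reconstructed from the thresholded DWT coefficients against the original speech signal.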
Abstract: Multi-Agent Systems (MAS) emerged in the pursuit of improving our standard of living, and hence can manifest complex human behaviors such as communication, decision making, negotiation and self-organization. Social Network Services (SNSs) have attracted millions of users, many of whom have integrated these sites into their daily practices. The domains of MAS and SNS share many similarities, such as architecture, features and functions. Our research focus is therefore exploring social network users' behavior through a multi-agent model, in order to generate more accurate and meaningful information for SNS users. An application of MAS is the e-Auction and e-Rental services of the Universiti Cyber AgenT (UniCAT), a social network for students in Universiti Tunku Abdul Rahman (UTAR), Kampar, Malaysia, built around the Belief-Desire-Intention (BDI) model. However, in spite of its various advantages, the BDI model has also been found to have some shortcomings. This paper therefore proposes a multi-agent framework utilizing a modified BDI model, Belief-Desire-Intention in Dynamic and Uncertain Situations (BDIDUS), using the UniCAT system as a case study.
Abstract: One problem in evaluating recent computational models of human category learning is that there is no standardized method for systematically comparing the models' assumptions or hypotheses. In the present study, a flexible general model (called GECLE) is introduced that can be used as a framework to systematically manipulate and compare the effects and descriptive validities of a limited number of assumptions at a time. Two example simulation studies are presented to show how the GECLE framework can be useful in the field of human high-order cognition research.
Abstract: This study focuses on the development of triangular fuzzy numbers, the revision of triangular fuzzy numbers, and the construction of a half-circle fuzzy number (HCFN) model that supports a wider range of operations. These numbers are further transformed for trigonometric functions and polar coordinates. From half-circle fuzzy numbers we can derive cylindrical fuzzy numbers, which work better in algebraic operations. An example of fuzzy control is given in a simulation to show the applicability of the proposed half-circle fuzzy numbers.
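The paper's exact definition of a half-circle fuzzy number is not reproduced in the abstract; one plausible membership function, the upper half of a circle of radius r centred at c, scaled to peak at 1, can be sketched as follows (a hypothetical form, only for illustration).

```python
import math

def halfcircle_mu(x, c, r):
    """Hypothetical half-circle membership function: upper half of the
    circle centred at c with radius r, normalized to peak value 1."""
    if abs(x - c) > r:
        return 0.0
    return math.sqrt(1.0 - ((x - c) / r) ** 2)
```

Such a shape is smooth at its peak, which is one reason circular membership functions can behave better than triangular ones in repeated algebraic operations.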
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but other kernel functions can be used as well. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and the improvement of overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.
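For reference, the Gaussian kernel used in the experiments has the standard form K(x, y) = exp(−‖x − y‖² / 2σ²); a minimal vectorized sketch of the kernel matrix (not the authors' parallel decomposition implementation) is:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)),
    computed from squared norms without explicit pairwise loops."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))
```

Decomposition methods evaluate only the kernel rows of the current working set, which is what makes working-set selection critical for large training sets.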
Abstract: Since dealing with high dimensional data is
computationally complex and sometimes even intractable, several
feature reduction methods have recently been developed to reduce
the dimensionality of the data and simplify calculation and
analysis in various applications such as text categorization, signal
processing, image retrieval, and gene expression analysis. Among feature
reduction techniques, feature selection is one of the most popular
methods because it preserves the original features.
In this paper, we propose a new unsupervised feature selection
method which will remove redundant features from the original
feature space by the use of probability density functions of various
features. To show the effectiveness of the proposed method, popular
feature selection methods have been implemented and compared.
Experimental results on several datasets derived from the UCI
repository illustrate the effectiveness of our proposed
method, in comparison with the other methods, in terms of
both classification accuracy and the number of selected features.
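The abstract describes removing redundant features by comparing their probability density functions; the sketch below uses histogram densities and a histogram-intersection similarity as one plausible instantiation. The paper's actual density estimator and redundancy measure are not specified in the abstract, so these choices are assumptions, and features are assumed scaled to [0, 1].

```python
import numpy as np

def density(feature, bins=10):
    """Histogram estimate of a feature's probability mass over [0, 1]."""
    h, _ = np.histogram(feature, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def select_features(X, threshold=0.9, bins=10):
    """Greedy unsupervised selection: drop a feature whose density
    overlaps an already-kept feature by more than `threshold`
    (histogram-intersection similarity)."""
    kept = []
    for j in range(X.shape[1]):
        d = density(X[:, j], bins)
        if all(np.minimum(d, density(X[:, k], bins)).sum() <= threshold
               for k in kept):
            kept.append(j)
    return kept
```

Two features with nearly identical distributions then count as redundant and only the first is retained.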
Abstract: With the latest technological improvements, digital systems have become more popular than in the past. Despite this growing demand for digital systems, content copying and attacks against digital cinema content have become a serious problem. To solve this security problem, we propose traceable watermarking using hash functions for digital cinema systems. Digital cinema is a good application for traceable watermarking, since it uses watermarking technology during content play as well as content transmission. The watermark is embedded into randomly selected movie frames using the CRC-32 technique, a hash function. Because the embedding positions are distributed by the hash function, no party can break or alter the watermarking. Finally, our experimental results show that the proposed DWT watermarking method using CRC-32 outperforms conventional watermarking techniques in terms of robustness, image quality, and its simple but unbreakable algorithm.
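One way CRC-32 can drive the choice of embedding frames is to chain the checksum over a secret key, as sketched below. The keying and chaining scheme here is an assumption for illustration; only the use of Python's real zlib.crc32 is guaranteed (assuming the number of marks is far smaller than the number of frames, so the loop terminates quickly).

```python
import zlib

def embedding_frames(key, n_frames, n_marks):
    """Derive distinct watermark embedding frame indices by chaining
    CRC-32 over a secret key: without the key, the positions look
    pseudo-random to an attacker."""
    positions, state = [], key.encode()
    while len(positions) < n_marks:
        crc = zlib.crc32(state)            # unsigned 32-bit checksum
        pos = crc % n_frames
        if pos not in positions:
            positions.append(pos)
        state = crc.to_bytes(4, "big") + state   # chain for next index
    return positions
```

The same key deterministically regenerates the positions at detection time, which is what makes the watermark traceable.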
Abstract: This paper proposes a novel methodology for enabling
debugging and tracing of production web applications without
affecting their normal flow and functionality. This method of debugging
enables developers and maintenance engineers to replace a set of
existing resources such as images, server side scripts, cascading
style sheets with another set of resources per web session. The new
resources will only be active in the debug session and other sessions
will not be affected. This methodology will help developers in tracing
defects, especially those that appear only in production environments
and in exploring the behaviour of the system. A realization of the
proposed methodology has been implemented in Java.
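The per-session resource replacement can be sketched as a resolver that consults an override table only for sessions flagged for debugging; the paper's realization is in Java, and the class and method names below are hypothetical.

```python
class DebugResourceResolver:
    """Resolve a resource path to a replacement only for sessions that
    registered overrides; all other sessions see the original resource."""

    def __init__(self):
        self.overrides = {}   # session_id -> {original path: replacement}

    def register(self, session_id, path, replacement):
        """Activate a replacement resource for one debug session."""
        self.overrides.setdefault(session_id, {})[path] = replacement

    def resolve(self, session_id, path):
        """Return the replacement for debug sessions, else the original."""
        return self.overrides.get(session_id, {}).get(path, path)
```

Because the lookup is keyed by session, swapped scripts or stylesheets are visible only in the debug session and production traffic is untouched.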
Abstract: In this paper an algorithm for fast wavelength calibration of Optical Spectrum Analyzers (OSAs) using low power reference gas spectra is proposed. In existing OSAs a reference spectrum with low noise for precise detection of the reference extreme values is needed. To generate this spectrum costly hardware with high optical power is necessary. With this new wavelength calibration algorithm it is possible to use a noisy reference spectrum and therefore hardware costs can be cut. With this algorithm the reference spectrum is filtered and the key information is extracted by segmenting and finding the local minima and maxima. Afterwards slope and offset of a linear correction function for best matching the measured and theoretical spectra are found by correlating the measured with the stored minima. With this algorithm a reliable wavelength referencing of an OSA can be implemented on a microcontroller with a calculation time of less than one second.
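Two of the algorithm's steps, locating local minima in the filtered spectrum and fitting the slope and offset of the linear wavelength correction, can be sketched as follows. This is a simplified illustration: the paper matches measured to stored minima by correlation, which is not reproduced here, and a one-to-one pairing of minima is assumed.

```python
import numpy as np

def local_minima(y, window=3):
    """Indices of samples strictly smaller than every neighbour inside
    the window (a simple noise-tolerant extremum detector)."""
    idx = []
    for i in range(window, len(y) - window):
        if y[i] == min(y[i - window:i + window + 1]) and y[i] < y[i - window]:
            idx.append(i)
    return idx

def fit_correction(measured_wl, reference_wl):
    """Least-squares slope and offset of the linear correction mapping
    measured minimum wavelengths onto the stored reference values."""
    A = np.vstack([measured_wl, np.ones_like(measured_wl)]).T
    slope, offset = np.linalg.lstsq(A, reference_wl, rcond=None)[0]
    return slope, offset
```

Applying the fitted slope and offset to the wavelength axis then references the OSA against the gas absorption lines.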
Abstract: This paper presents a procedure for modeling and tuning the parameters of a Thyristor Controlled Series Compensation (TCSC) controller in a multi-machine power system to improve transient stability. First, a simple transfer function model of the TCSC controller for stability improvement is developed and the parameters of the proposed controller are optimally tuned. A genetic algorithm (GA) is employed for the parameter-constrained nonlinear optimization problem, implemented in a simulation environment. By minimizing an objective function involving the oscillatory rotor angle deviations of the generators, the transient stability performance of the system is improved. The proposed TCSC controller is tested on a multi-machine system and the simulation results are presented. The nonlinear simulation results validate the effectiveness of the proposed approach for transient stability improvement in a multi-machine power system installed with a TCSC, and also show that the proposed TCSC controller is effective in damping low frequency oscillations.
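The GA used for parameter tuning is not detailed in the abstract; a minimal real-coded GA of the kind typically used for such bounded parameter searches can be sketched as follows (all operator choices below, tournament selection, blend crossover, Gaussian mutation, elitism, are assumptions, and the objective would be the rotor-angle-deviation measure evaluated by simulation).

```python
import random

def genetic_minimize(objective, bounds, pop=20, gens=40, seed=1):
    """Minimal real-coded GA: elitism, 3-way tournament selection,
    midpoint (blend) crossover, Gaussian mutation clipped to bounds."""
    rng = random.Random(seed)
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=objective)
        nxt = scored[:2]                      # keep the two best (elitism)
        while len(nxt) < pop:
            a, b = (min(rng.sample(scored, 3), key=objective)
                    for _ in range(2))        # two tournament winners
            child = [clip((x + y) / 2.0 + rng.gauss(0.0, 0.1 * (hi - lo)),
                          lo, hi)
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            nxt.append(child)
        P = nxt
    return min(P, key=objective)
```

In the paper's setting, `objective` would run the multi-machine simulation with the candidate TCSC controller parameters and return the oscillatory rotor-angle deviation measure.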
Abstract: Trauma in early life is widely regarded as a cause of
adult mental health problems. This study explores the role of
secondary trauma on later functioning in a sample of 359 university
students enrolled in undergraduate psychology classes in the United
States. Participants were initially divided into four groups based on
1) having directly experienced trauma (assaultive violence), 2)
having directly experienced trauma and secondary traumatization
(through the unanticipated death of a close friend or family member,
or witnessing an injury or shocking event), 3) having no
experience of direct trauma but having experienced indirect trauma
(secondary trauma), or 4) reporting no exposure. Participants
completed a battery of measures on concepts associated with
psychological functioning which included measures of
psychological well-being, problem solving, coping and resiliency.
The findings discuss differences in psychological functioning and
resilience between participants who experienced secondary
traumatization together with assaultive violence and those who
experienced secondary traumatization alone.
Abstract: Functional gastrointestinal disorders have a detrimental impact on the quality of life of the affected population and impose a tremendous social and economic burden. There are, however, few diagnostic methods for functional gastrointestinal disorders. Our research group recently identified that the gastrointestinal tract wall in patients with functional gastrointestinal disorders becomes more rigid than in healthy people when palpating the abdominal regions overlying the gastrointestinal tract. The objective of the current study is, therefore, to identify the feasibility of a diagnostic system for functional gastrointestinal disorders based on ultrasound techniques, which can quantify the characteristics above. Two-dimensional finite difference (FD) models (one normal and two rigid models) were developed to analyze the reflective characteristic (displacement) of each soft-tissue layer in response to applied ultrasound signals. The FD analysis was based on elastic ultrasound theory. Validation of the models was performed by comparing the characteristics of the ultrasonic responses predicted by the FD analysis with those determined from actual specimens for the normal and rigid conditions. Based on the results of the FD analysis, an ultrasound system for the diagnosis of functional gastrointestinal disorders was developed and clinically tested by applying it to 40 human subjects, with and without functional gastrointestinal disorders, who were assigned to Normal and Patient Groups. The FD models were favorably validated. The results of the FD analysis showed that the maximum displacement amplitude in the rigid models (0.12 and 0.16) at the interface between the fat and muscle layers was clearly less than that in the normal model (0.29).
The results from actual specimens showed that the maximum amplitude of the ultrasonic reflective signal in the rigid models (0.2±0.1 Vp-p) at the interface between the fat and muscle layers was clearly higher than that in the normal model (0.1±0.2 Vp-p). Clinical tests using our customized ultrasound system showed that the maximum amplitudes of the ultrasonic reflective signals near the gastrointestinal tract wall for the Patient Group (2.6±0.3 Vp-p) were generally higher than those in the Normal Group (0.1±0.2 Vp-p). The maximum reflective signals appeared at a depth of approximately 20 mm from the abdominal skin in all human subjects, corresponding to the location of the boundary layer close to the gastrointestinal tract wall. These findings suggest that our customized ultrasound system using the ultrasonic reflective signal may be helpful for the diagnosis of functional gastrointestinal disorders.