Abstract: The Ministry of Defense (MoD) spends hundreds of
millions of dollars on software to support its infrastructure, operate
its weapons and provide command, control, communications,
computing, intelligence, surveillance, and reconnaissance (C4ISR)
functions. These and other new advanced systems share a common
critical component: information technology. The defense and
aerospace environment continuously strives to keep pace with
increasingly sophisticated Information Technology (IT) in order to
remain effective in today's dynamic and unpredictable threat
environment. This makes IT one of the largest and fastest growing
expenses of defense. Hundreds of millions of dollars are spent each
year on IT projects, but too many of those millions are wasted on
costly mistakes: systems that do not work properly, new components
that are not compatible with old ones, trendy new applications that do
not really satisfy defense needs, or funds lost through poorly managed
contracts.
This paper investigates and compiles effective strategies that
aim to end exasperation with the low returns and high cost of
information technology acquisition for defense; it shows how
to maximize value while reducing time and expenditure.
Abstract: We develop a new estimator of the renewal function for heavy-tailed claim amounts. Our approach is based on the peaks-over-threshold method for estimating the tail of the distribution with a generalized Pareto distribution. The asymptotic normality of an appropriately centered and normalized estimator is established, and its performance is illustrated in a simulation study.
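The peaks-over-threshold step can be illustrated with a simple method-of-moments fit of a generalized Pareto distribution to threshold exceedances. This is a generic sketch, not the authors' estimator; the threshold and the synthetic data are assumptions for illustration only.

```python
import random
import statistics

def gpd_fit_moments(exceedances):
    """Method-of-moments fit of a generalized Pareto distribution
    to threshold exceedances: returns (shape xi, scale sigma)."""
    m = statistics.mean(exceedances)
    v = statistics.variance(exceedances)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

random.seed(42)
# Synthetic demo: exponential exceedances correspond to shape xi ~ 0
sample = [random.expovariate(1.0) for _ in range(100_000)]
threshold = 2.0
exc = [x - threshold for x in sample if x > threshold]
xi, sigma = gpd_fit_moments(exc)
print(round(xi, 3), round(sigma, 3))
```

For exponential data the exceedances over any threshold are again exponential, so the fitted shape should be near 0 and the scale near 1, which makes this a quick sanity check for the fitting step.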
Abstract: Manufacturing components from fiber-reinforced
thermoplastics requires three steps: heating the matrix, forming and
consolidating the composite, and finally cooling the matrix. For
the heating process, a pre-determined temperature distribution through
the layers and the thickness of the pre-consolidated sheets is
recommended to enable the forming mechanisms. Thus, a design for the
heating process for forming composites with thermoplastic matrices
is necessary. To obtain a constant temperature through the thickness and
width of the sheet, the heating process was analyzed with the help of
the finite element method. The simulation models were validated by
experiments with resistance thermometers as well as with an infrared
camera. Based on the finite element simulation, heating methods for
infrared radiators have been developed. Using the numerical
simulation, many iteration loops are required to determine the process
parameters. Hence, the development of a model for calculating the relevant
process parameters was initiated using regression functions.
Abstract: Clustering methods developed in data mining theory
can be successfully applied to the investigation of
different kinds of dependencies between environmental conditions
and human activities. It is known that environmental
parameters such as temperature, relative humidity, atmospheric
pressure and illumination have significant effects on human
mental performance. To investigate the effect of these parameters, the data
mining technique of clustering using entropy and the Information Gain
Ratio (IGR), K(Y/X) = (H(Y) - H(Y/X))/H(Y), where
H(Y) = -Σ Pi ln(Pi), is used. This technique allows adjusting the boundaries of
clusters. It is shown that the information gain ratio (IGR) grows
monotonically with the degree of connectivity
between two variables. This approach has some advantages
compared, for example, with correlation analysis, owing to its relatively
lower sensitivity to the shape of functional dependencies. A variant of an
algorithm implementing the proposed method, with some analysis of
the above problem of environmental effects, is also presented. It is
shown that the proposed method converges in a finite number of steps.
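The entropy-based gain ratio described above can be sketched in a few lines of pure Python. The standard information-gain numerator H(Y) - H(Y|X) is assumed here, and the toy temperature/performance data are hypothetical, purely for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H = -sum p_i ln p_i (natural log, as in the abstract)."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def conditional_entropy(xs, ys):
    """H(Y|X): entropy of Y within each cluster of X, weighted by cluster size."""
    n = len(xs)
    h = 0.0
    for x in set(xs):
        ys_x = [y for xi, y in zip(xs, ys) if xi == x]
        h += len(ys_x) / n * entropy(ys_x)
    return h

def information_gain_ratio(xs, ys):
    """Gain ratio: information gain H(Y) - H(Y|X), normalized by H(Y)."""
    hy = entropy(ys)
    return (hy - conditional_entropy(xs, ys)) / hy

# Hypothetical example: temperature cluster (low/high) vs. performance class
temp = ["low", "low", "low", "high", "high", "high"]
perf = ["good", "good", "good", "bad", "bad", "good"]
igr = information_gain_ratio(temp, perf)
print(round(igr, 3))  # → 0.5
```

Adjusting cluster boundaries then amounts to re-binning `xs` and recomputing the ratio, keeping the binning that maximizes it.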
Abstract: To define or predict incipient motion in an alluvial
channel, most investigators use a standard or modified form of
Shields' diagram. Shields' diagram does provide a procedure to determine
the incipient motion parameters, but an iterative one. To design
properly (without iteration), one needs another equation for
resistance. The absence of a universal resistance equation also magnifies
the difficulties in defining the model. The neural network technique,
which is particularly useful in modeling complex processes, is
presented as a tool complementary to modeling incipient motion.
The present work develops a neural network model employing the RBF
network to predict the average velocity u and water depth y based on
experimental data on the incipient condition. Based on the model,
design curves are presented for field application.
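An RBF network of the kind mentioned can be sketched as exact Gaussian interpolation with one basis function per training point. The data values below are hypothetical stand-ins, not the paper's experimental incipient-motion measurements.

```python
import math

def gauss_solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def rbf_fit(xs, ys, width=1.0):
    """Exact-interpolation Gaussian RBF: one basis function per data point."""
    phi = lambda a, b: math.exp(-((a - b) ** 2) / (2 * width ** 2))
    A = [[phi(x, c) for c in xs] for x in xs]
    w = gauss_solve(A, ys)
    return lambda x: sum(wi * phi(x, c) for wi, c in zip(w, xs))

# Hypothetical 1-D data: flow parameter vs. critical velocity (illustrative only)
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
ys = [0.21, 0.35, 0.52, 0.78, 1.10]
model = rbf_fit(xs, ys)
print(round(model(1.0), 2))  # → 0.35, reproduces the training point
```

A real model of the kind described would be multi-input (predicting both u and y) and would use regularized fitting rather than exact interpolation; the sketch only shows the RBF mechanics.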
Abstract: The Requirements Abstraction Model (RAM) helps in managing abstraction in requirements by organizing them at four levels (product, feature, function and component). The RAM is adaptable and can be tailored to meet the needs of various organizations. Because software requirements are an important source of information for developing high-level tests, organizations willing to adopt the RAM need to know the suitability of RAM requirements for developing high-level tests. To investigate this suitability, test cases from twenty randomly selected requirements were developed, analyzed and graded. Requirements were selected from the requirements document of a Course Management System, a web-based software system that supports teachers and students in performing course-related tasks. This paper describes the results of the requirements document analysis. The results show that requirements at lower levels in the RAM are suitable for developing executable tests, whereas it is hard to develop executable tests from requirements at higher levels.
Abstract: The American Health Level Seven (HL7) Reference Information Model (RIM) consists of six back-bone classes that have different specialized attributes. Furthermore, for the purpose of enforcing semantic expression, some specific mandatory vocabulary domains have been defined for representing the content values of some attributes. Given that most hospitals duplicate effort, spending a lot of time and human cost to develop and modify Clinical Information Systems (CIS) because of the variety of workflows, this study attempts to design and develop shared RIM-based components of the CIS for different business processes. As a result, the CIS contains data of a consistent format and type. Programmers can perform transactions with the RIM-based clinical repository through the shared RIM-based components, and when developing functions of the CIS, the shared components can also be adopted in the system. These components not only satisfy physicians' needs in using a CIS but also reduce the time needed to develop new components of a system. All in all, this study provides a new viewpoint: integrating data and functions with the business processes is an easy and flexible approach to building a new CIS.
Abstract: The equivalence class subset algorithm is a powerful
tool for solving a wide variety of constraint satisfaction problems and
is based on the use of a decision function which has a very high but
not perfect accuracy. Perfect accuracy is not required in the decision
function as even a suboptimal solution contains valuable information
that can be used to help find an optimal solution. In the hardest
problems, the decision function can break down, leading to a
suboptimal solution in which there are more equivalence classes than
necessary, and which can be viewed as a mixture of good decisions
and bad decisions. By choosing a subset of the decisions made in
reaching a suboptimal solution, an iterative technique can lead to an
optimal solution, using a series of steadily improved suboptimal
solutions. The goal is to reach an optimal solution as quickly as
possible. Various techniques for choosing the decision subset are
evaluated.
Abstract: The lack of any centralized infrastructure in mobile ad
hoc networks (MANET) is one of the greatest security concerns in
the deployment of wireless networks. Thus communication in
MANET functions properly only if the participating nodes cooperate
in routing without any malicious intention. However, some of the
nodes may be malicious in their behavior, indulging in flooding
attacks on their neighbors. Others may act maliciously by
launching active security attacks like denial of service. This paper
reviews related work on trust evaluation and establishment in ad hoc
networks; related work on flooding attack prevention is also reviewed.
A new trust approach based on the extent of friendship between the
nodes is proposed, which makes the nodes cooperate and prevent
flooding attacks in an ad hoc environment.
The performance of the trust algorithm is tested in an ad hoc network
implementing the Ad hoc On-demand Distance Vector (AODV)
protocol.
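As a purely hypothetical sketch (the abstract does not give the trust formula), a friendship-weighted rate limit on route requests (RREQs) might look like the following, where friendship levels, the weights, and the base limit are all assumptions:

```python
from collections import defaultdict

# Hypothetical sketch: nodes tolerate more route requests (RREQs) from
# closer friends; strangers hit a lower flooding threshold. The levels,
# weights and base limit below are assumptions, not the paper's algorithm.
FRIENDSHIP = {"best_friend": 3.0, "friend": 2.0, "acquaintance": 1.5, "stranger": 1.0}
BASE_LIMIT = 5  # RREQs allowed per interval for a stranger

class FloodGuard:
    def __init__(self):
        self.counts = defaultdict(int)  # RREQs seen per sender this interval

    def allow_rreq(self, sender, friendship):
        self.counts[sender] += 1
        limit = BASE_LIMIT * FRIENDSHIP[friendship]
        return self.counts[sender] <= limit

guard = FloodGuard()
# A stranger flooding 8 RREQs: the first 5 pass, the rest are dropped
results = [guard.allow_rreq("nodeX", "stranger") for _ in range(8)]
print(results.count(True), results.count(False))  # → 5 3
```

In an AODV setting such a guard would sit in the RREQ handling path of each node, with counters reset every observation interval.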
Abstract: In this paper, multi-processor job shop scheduling problems are solved by a heuristic algorithm based on a hybrid of priority dispatching rules within an ant colony optimization algorithm. The objective function is to minimize the makespan, i.e. the total completion time, and the simultaneous presence of various kinds of pheromones is allowed. By using a suitable hybrid of priority dispatching rules, the process of finding the best solution is improved. The ant colony optimization algorithm not only enhances the ability of the proposed algorithm, but also decreases the total working time by reducing setup times and modifying the working production line, so that similar work shares the same production lines. Another advantage of this algorithm is that similar (not identical) machines can be considered, so these machines are able to process a job with different processing and setup times. To evaluate this capability and the algorithm itself, a number of test problems are solved and the associated results are analyzed. The results show a significant decrease in throughput time. They also show that this algorithm is able to recognize the bottleneck machine and to schedule jobs efficiently.
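A single priority dispatching rule on a toy job shop can be sketched as follows, using shortest processing time (SPT) only. The paper hybridizes several such rules under ant colony optimization; this sketch, with made-up job data, only illustrates what one dispatching pass computes.

```python
# Each job is an ordered list of (machine, processing_time) operations.
jobs = {
    "J1": [("M1", 3), ("M2", 2)],
    "J2": [("M2", 2), ("M1", 4)],
    "J3": [("M1", 2), ("M2", 3)],
}

def spt_makespan(jobs):
    next_op = {j: 0 for j in jobs}   # index of next operation per job
    job_free = {j: 0 for j in jobs}  # time each job becomes available
    machine_free = {}                # time each machine becomes free
    while any(next_op[j] < len(ops) for j, ops in jobs.items()):
        # Candidates: next unscheduled operation of every unfinished job
        ready = [(jobs[j][next_op[j]][1], j)
                 for j in jobs if next_op[j] < len(jobs[j])]
        ready.sort()                 # SPT: shortest processing time first
        _, j = ready[0]
        machine, dur = jobs[j][next_op[j]]
        start = max(job_free[j], machine_free.get(machine, 0))
        job_free[j] = start + dur
        machine_free[machine] = start + dur
        next_op[j] += 1
    return max(job_free.values())    # makespan = last job completion time

print(spt_makespan(jobs))  # → 10
```

An ACO layer would bias which rule (SPT, earliest due date, etc.) each "ant" applies at each decision point according to pheromone levels, rather than always taking `ready[0]`.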
Abstract: Economic Load Dispatch (ELD) is a method of determining
the most efficient, low-cost and reliable operation of a power
system by dispatching available electricity generation resources to
supply load on the system. The primary objective of economic
dispatch is to minimize total cost of generation while honoring
operational constraints of available generation resources. In this paper
an intelligent water drop (IWD) algorithm has been proposed to
solve ELD problem with an objective of minimizing the total cost of
generation. The intelligent water drop algorithm is a swarm-based,
nature-inspired optimization algorithm, inspired by natural
rivers. A natural river often finds good paths among the many possible
paths on its way from source to destination, and finally finds an almost
optimal path to its destination. These ideas are embedded into
the proposed algorithm for solving the economic load dispatch problem.
The main advantages of the proposed technique are that it is easy to
implement and capable of finding a feasible, near-globally-optimal solution
with less computational effort. In order to illustrate the effectiveness of
the proposed method, it has been tested on 6-unit and 20-unit test
systems with incremental fuel cost functions taking into account the
valve-point loading effects. Numerical results show that the
proposed method has good convergence properties and better solution
quality than other algorithms reported in the recent literature.
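The valve-point fuel cost model commonly used in such ELD studies, F(P) = a + bP + cP² + |e·sin(f·(Pmin - P))|, can be sketched as below. The unit coefficients are made up, and a naive random search stands in for the IWD algorithm purely to show the objective and the power-balance constraint.

```python
import math
import random

UNITS = [  # (a, b, c, e, f, Pmin, Pmax) -- hypothetical 3-unit system
    (150, 7.0, 0.0080, 100, 0.042, 100, 500),
    (120, 7.5, 0.0090, 120, 0.040, 100, 400),
    (100, 8.0, 0.0070, 80, 0.038, 50, 300),
]
DEMAND = 800  # MW

def fuel_cost(p, unit):
    """Quadratic cost plus the rectified-sine valve-point term."""
    a, b, c, e, f, pmin, _ = unit
    return a + b * p + c * p * p + abs(e * math.sin(f * (pmin - p)))

def total_cost(dispatch):
    return sum(fuel_cost(p, u) for p, u in zip(dispatch, UNITS))

def random_search(iters=20000, seed=1):
    random.seed(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        # Draw the first n-1 units; let the last unit balance the demand
        p = [random.uniform(u[5], u[6]) for u in UNITS[:-1]]
        slack = DEMAND - sum(p)
        if not (UNITS[-1][5] <= slack <= UNITS[-1][6]):
            continue  # power-balance constraint violated
        cand = p + [slack]
        cost = total_cost(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

dispatch, cost = random_search()
print(round(sum(dispatch)), round(cost, 1))  # total generation meets demand
```

The non-smooth |sin| term is what makes ELD with valve-point effects hard for gradient methods and motivates population-based heuristics like IWD.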
Abstract: We have developed an analytic model for the radial p-n junction in a nanowire (NW) core-shell structure, used as a new
building block in different semiconductor devices. The potential distribution through the p-n junction is calculated, and analytical expressions are derived to compute the depletion region widths. We
show that the widths of the space charge layers surrounding the core are
functions of the core radius, which is a manifestation of the so-called classical size effect. The relationship between the depletion layer width and the built-in potential, in the asymptotic limit of infinitely large
core radius, transforms to the square-root dependence specific to conventional planar p-n junctions. An explicit equation is derived to
compute the capacitance of the radial p-n junction. The current-voltage behavior is also carefully determined, taking into account the "short
base" effects.
Abstract: Modern highly automated production systems face
problems of reliability. Machine function reliability affects
the productivity rate and the efficient use of expensive
industrial facilities. Predicting reliability has become an important
research topic and involves complex mathematical methods and
calculations. The reliability of high-productivity automatic
technological machines, which consist of complex mechanical, electrical
and electronic components, is important, as the failure of these units
results in major economic losses in production systems. The
reliability of transport and feeding systems for automatic
technological machines is also important, because failure of transport
leads to stoppages of the technological machines. This paper presents
reliability engineering for the feeding system and its components for
transporting complex-shaped parts to automatic machines. It also
discusses the calculation of the reliability parameters of the
feeding unit by applying probability theory. The equations produced
for calculating the limits of the geometrical sizes of feeders and the
probability of the transported parts sticking in the chute represent
the reliability of feeders as a function of their geometrical parameters.
Abstract: The acoustic and articulatory properties of fricative speech sounds are being studied using magnetic resonance imaging (MRI) and acoustic recordings from a single subject. Area functions were derived from a complete set of axial and coronal MR slices using two different methods: the Mermelstein technique and the Blum transform. Area functions derived from the two techniques were shown to differ significantly in some cases. Such differences will lead to different acoustic predictions and it is important to know which is the more accurate. The vocal tract acoustic transfer function (VTTF) was derived from these area functions for each fricative and compared with measured speech signals for the same fricative and same subject. The VTTFs for /f/ in two vowel contexts and the corresponding acoustic spectra are derived here; the Blum transform appears to show a better match between prediction and measurement than the Mermelstein technique.
Abstract: This paper studies the effect of different compression
constraints and schemes presented in a new and flexible paradigm to
achieve high compression ratios and acceptable signal to noise ratios
of Arabic speech signals. Compression parameters are computed for
variable frame sizes of a level 5 to 7 Discrete Wavelet Transform
(DWT) representation of the signals for different analyzing mother
wavelet functions. Results are obtained and compared for global-threshold
and level-dependent threshold techniques. The results
obtained also include comparisons of signal-to-noise ratio, peak
signal-to-noise ratio and normalized root mean square error.
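As a minimal illustration of wavelet-threshold compression, the sketch below uses a one-level Haar transform with a global hard threshold rather than the paper's level 5-7 DWT with various mother wavelets, and a synthetic toy frame in place of Arabic speech data.

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def compress(signal, threshold):
    """Zero out small detail coefficients (global hard threshold)."""
    approx, detail = haar_dwt(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_idwt(approx, detail)

def snr_db(orig, recon):
    sig = sum(x * x for x in orig)
    err = sum((x - y) ** 2 for x, y in zip(orig, recon))
    return float("inf") if err == 0 else 10 * math.log10(sig / err)

# Toy "speech" frame: a low-frequency tone plus a small high-frequency ripple
n = 256
frame = [math.sin(2 * math.pi * 5 * i / n)
         + 0.01 * math.sin(2 * math.pi * 60 * i / n) for i in range(n)]
recon = compress(frame, threshold=0.05)
print(round(snr_db(frame, recon), 1))
```

The compression ratio comes from storing only the surviving coefficients; raising the threshold trades SNR for a higher ratio, which is the trade-off the paper studies across frame sizes and wavelet families.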
Abstract: Multi-Agent Systems (MAS) emerged in the pursuit of improving our standard of living, and hence can manifest complex human behaviors such as communication, decision making, negotiation and self-organization. Social Network Services (SNSs) have attracted millions of users, many of whom have integrated these sites into their daily practices. The domains of MAS and SNS have many similarities, such as architecture, features and functions. Exploring social network users' behavior through a multi-agent model is therefore our research focus, in order to generate more accurate and meaningful information for SNS users. An application of MAS is the e-Auction and e-Rental services of the Universiti Cyber AgenT (UniCAT), a social network for students in Universiti Tunku Abdul Rahman (UTAR), Kampar, Malaysia, built around the Belief-Desire-Intention (BDI) model. However, in spite of the various advantages of the BDI model, it has also been discovered to have some shortcomings. This paper therefore proposes a multi-agent framework utilizing a modified BDI model, Belief-Desire-Intention in Dynamic and Uncertain Situations (BDIDUS), using the UniCAT system as a case study.
Abstract: The grey oyster mushroom, Pleurotus sajor-caju
(PSC), is a common edible mushroom and is now grown
commercially around the world for food. This fungus has been
broadly used as food or food ingredients in various food products for
a long time. To enhance the nutritional quality and sensory attributes
of bakery-based products, PSC powder is used in the present study to
partially replace wheat flour in baked product formulations. The
nutrient content and sensory properties of rice-porridge and
unleavened bread (paratha) incorporated with various levels of PSC
powder were studied. These food items were formulated with either
0%, 2%, 4% or 6% PSC powder. Results show that PSC powder
contains β-glucan at 3.57 g/100 g. In sensory evaluation, consumers
gave higher scores to both the rice-porridge and the paratha bread
containing 2-4% PSC than to those without added PSC powder.
The paratha containing 4% PSC powder can be formulated with the
intention of improving the overall acceptability of paratha bread.
Meanwhile, for rice-porridge, consumers preferred the formulated
product with 4% PSC powder added. In conclusion, the addition of
PSC powder to partially replace wheat flour can be recommended for the
purpose of enhancing nutritional composition while maintaining the
acceptability of carbohydrate-based products.
Abstract: One problem in evaluating recent computational models of human category learning is that there is no standardized method for systematically comparing the models' assumptions or hypotheses. In the present study, a flexible general model (called GECLE) is introduced that can be used as a framework to systematically manipulate and compare the effects and descriptive validities of a limited number of assumptions at a time. Two example simulation studies are presented to show how the GECLE framework can be useful in the field of human high-order cognition research.
Abstract: This study focuses on the development of triangular fuzzy numbers, the revising of triangular fuzzy numbers, and the constructing of a HCFN (half-circle fuzzy number) model which can be utilized to perform more plural operations. They are further transformed for trigonometric functions and polar coordinates. From half-circle fuzzy numbers we can conceive cylindrical fuzzy numbers, which work better in algebraic operations. An example of fuzzy control is given in a simulation to show the applicability of the proposed half-circle fuzzy numbers.
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but usage of other kernel functions is possible, too. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and to the improvement of overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.