Abstract: Vortices can develop in the intakes of turbojet and turbofan aero engines during high-power operation in the vicinity of solid surfaces. These vortices can cause catastrophic damage to the engine. The factors determining vortex formation include both geometric dimensions and flow parameters. It has been shown that the threshold at which the vortex forms or disappears also depends on the initial flow condition (i.e. whether a vortex forms after stabilised non-vortex flow or vice versa). A computational fluid dynamics study was conducted to determine the difference in thresholds between the two conditions. This is the first reported numerical investigation of the "memory effect". The numerical results reproduce the phenomenon reported in previous experimental studies, and additional factors not previously studied were investigated: the rate at which the ambient velocity changes and the initial value of the ambient velocity. The former was found to shift the threshold, but not the latter. It was also found that the varying-condition thresholds are not symmetrical about the neutral threshold: the vortex-to-no-vortex threshold lies slightly further from the neutral threshold than the no-vortex-to-vortex threshold. The results suggest that experimental investigations of the vortex formation threshold performed solely in vortex-to-no-vortex conditions, or vice versa, may introduce mis-predictions greater than 10%.
Abstract: Software reusability is a primary attribute of software quality. Metrics exist for assessing the quality of reusable components, but the function that combines these metrics to determine the reusability of a software component is still not clear. If these metrics are identified in the design phase, or even in the coding phase, they can help reduce rework by improving the quality of reuse of the component, and hence improve productivity through a probabilistic increase in the reuse level. In this paper, we devise a framework of metrics that takes McCabe's Cyclomatic Complexity Measure for complexity measurement, the Regularity Metric, the Halstead Software Science Indicator for volume indication, the Reuse Frequency metric, and the Coupling Metric of a software component as input attributes, and calculates the component's reusability. A comparative analysis of fuzzy, neuro-fuzzy and fuzzy-GA approaches is performed to evaluate the reusability of software components; the fuzzy-GA results outperform the other approaches. The developed reusability model produced high-precision results consistent with the assessments of human experts.
Abstract: In this paper the principle, basic torque theory and design optimisation of a six-phase reluctance DC machine are considered. A trapezoidal phase current waveform for the machine drive is proposed and evaluated to minimise ripple torque. Low-cost, conventionally laminated salient-pole rotors with and without slits and chamfered poles are investigated. The six-phase machine is optimised in multiple dimensions by linking the finite-element analysis method directly with an optimisation algorithm; the objective is to maximise the torque per copper loss of the machine. The armature reaction effect is investigated in detail and found to be severe. The measured and calculated torque performances of a 35 kW optimally designed six-phase reluctance DC machine drive are presented.
Abstract: This paper presents the use of three-dimensional finite elements coupled with infinite elements to investigate the ground vibrations at the surface, in terms of the peak particle velocity (PPV), due to construction of the first bore of the Dublin Port Tunnel. The situation is analysed using the commercially available general-purpose finite element package ABAQUS. A series of parametric studies is carried out to examine the sensitivity of the predicted vibrations to variations in the input parameters required by the finite element method, including the stiffness and damping of the ground. The results of this study show that the stiffness of the ground has a more significant effect on the PPV than its damping.
Abstract: The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that influence the network parameters. This work attempts to design a neural network that includes an additional mechanism to adjust the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second generates output dynamically, with the capability to tune for any newly applied input. This tuning takes the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net of traditional architecture that dynamically adjusts the threshold values of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs are quite satisfactory. The supportive-layer approach achieved a recognition rate of over 90%, while the multiple-network technique showed a more effective and acceptable level of recognition; however, this is achieved at the price of network complexity and computation time. Recognition generalisation may be further improved by accommodating all the innate structures in conjunction with intelligent capabilities, at the cost of further advanced learning phases.
Abstract: The Vehicular Ad-hoc Network (VANET) is attracting increasing attention in the automotive industry due to concern for the safety of human lives on roads. Security is one of the safety aspects of a VANET. To be secure, network availability must be maintained at all times, since it is critically needed whenever a node sends life-critical information to other nodes. However, security attacks can be expected to increase in the coming years as more and more wireless applications are developed and deployed over the well-known exposed nature of the wireless medium. In this respect, network availability is exposed to many types of attacks. In this paper, the Denial of Service (DoS) attack on network availability is presented and its severity in the VANET environment is elaborated. A model to secure the VANET from DoS attacks has been developed, and some possible solutions to overcome the attacks are discussed.
Abstract: This paper describes the design of a new method of propagation delay measurement in micro- and nanostructures during the characterization of ASIC standard library cells. By providing more accurate timing information about a library cell to the design team, the quality of timing analysis within the ASIC design flow can be improved. This information can also be very useful to the semiconductor foundry team for making corrections to the technology process. The method works by comparing the propagation delay measured in the CMOS element with the result of an analog SPICE simulation, and was implemented as a digital IP core for the semiconductor manufacturing process. The method makes it possible to observe the propagation delay in one element of the standard-cell library with picosecond accuracy or better. Thus, useful solutions for VLSI schematic parameter extraction, basic cell layout verification, and design simulation and verification are presented.
Abstract: The Genetic Algorithm (GA) is one of the most important methods used to solve many combinatorial optimization problems. Many researchers have therefore tried to improve the GA, using different methods and operations, in order to find the optimal solution within a reasonable time. This paper proposes an improved GA (IGA) in which a new crossover operation, a population reformulation operation, a multi-mutation operation, a partial local optimal mutation operation, and a rearrangement operation are used to solve the Traveling Salesman Problem. The proposed IGA was compared with three GAs that use different crossover and mutation operations. The results of this comparison show that the IGA achieves better solutions in a faster time.
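For reference, a generic GA baseline for the TSP, of the kind such an IGA is typically compared against, can be sketched as follows. The order crossover and swap mutation here are standard textbook operators, not the paper's proposed operations, and the selection scheme is an assumption of this sketch.

```python
import random

# Minimal baseline GA for the TSP: order crossover + swap mutation +
# truncation selection. This is a generic sketch, not the IGA; the
# paper's special operators (population reformulation, multi-mutation,
# partial local optimal mutation, rearrangement) are not reproduced.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def order_crossover(p1, p2):
    # Copy a random slice from parent 1, fill the rest in parent-2 order.
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(tour, rate=0.1):
    tour = tour[:]
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def ga_tsp(dist, pop_size=50, generations=200):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        elite = pop[:pop_size // 2]  # keep the better half
        children = [swap_mutation(order_crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, dist))
```

The IGA described in the abstract modifies exactly these pieces: the crossover, the mutation stage, and how the population is reformed between generations.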
Abstract: In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity, and larger organizations, and were therefore forced to look for new product development methods. This paper focuses on two such methods, Design for Manufacturability (DFM) and Concurrent Engineering (CE), with the aim of examining and analyzing them. By adopting them, companies can benefit from a shorter product life cycle, lower cost, and the ability to meet delivery schedules. This paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The methodology followed in this research is the case study: two companies were analyzed with respect to their product development processes, historical data were collected and interviews conducted at these companies, and a survey of the literature and of previous research on similar topics was carried out. The paper also presents an implementation cost-benefit analysis and estimates the implementation time. This research found that the two companies did not achieve the delivery times promised to their customers: for some of their most frequently ordered products, 50% to 80% were not delivered on time. The companies follow the traditional, sequential design-then-production method of product development, which strongly affects time to market. The case study shows that by implementing the new methods and forming multidisciplinary teams for design and quality inspection, a company can reduce its workflow from 40 steps to 30.
Abstract: The goal of a network-based intrusion detection system is to classify network traffic activity into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps in data mining is feature selection, which helps improve the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we used the NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) was implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and takes less learning time than using the full feature set of the dataset with the same algorithm.
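The information-gain ranking step described above can be sketched on toy traffic records as follows. The data values and feature names here are invented for illustration; in the paper this ranking is applied to NSL-KDD features before the K-means clustering step.

```python
import math
from collections import Counter

# Illustrative sketch of information-gain feature ranking for an IDS:
# IG(F) = H(labels) - sum_v P(F=v) * H(labels | F=v).
# The toy records below are invented, not NSL-KDD data.

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(feature_values, labels):
    base = entropy(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# Toy records: (protocol, flag, label) with Normal/Attack labels.
data = [("tcp", "SF", "Normal"), ("tcp", "S0", "Attack"),
        ("udp", "SF", "Normal"), ("tcp", "S0", "Attack"),
        ("udp", "SF", "Normal"), ("icmp", "S0", "Attack")]
labels = [row[2] for row in data]
for i, name in enumerate(["protocol", "flag"]):
    ig = information_gain([row[i] for row in data], labels)
    print(f"IG({name}) = {ig:.3f}")
```

Features with the highest IG scores would be retained as the reduced feature set on which K-means (and the subsequent Normal/Attack classification) operates.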
Abstract: By the end of the 20th century, changes had occurred in the structure of humanity: alongside the historical types of ethnos (open, closed, wandering and dead ethnoses), a new ethnos, the Ethnos of Intellect, has formed and is still forming. This event was caused by technical progress and the development of information and transport communications, and especially by the creation of the Internet. The Ethnos of Intellect is very close to the "Information Society" described by J. Ellul and Y. Masuda, which was regarded as the culture of the 21st century and an antithesis of technical and technicist civilizations; it is necessary, however, to indicate an essential difference between these concepts: the Ethnos of Intellect is the antithesis of the Socium. The existence of such an ethnos within a human society that has itself already become an Information Society is extremely important for observing, legally and informatically, a new kind of reins in the hands of political power, and for revealing every attempt to violate the human rights of ordinary citizens. A concrete and eloquent example of the points of conjunction between legal informatics and informatical law is a certain kind of ambiental study of the project "State Register of Population" in Russia.
Abstract: Information systems practitioners are frequently
required to master new technology, often without the aid of formal
training. They require the skill to manage their own learning and,
when this skill is developed in their formal training, their adaptability
to new technology may be improved. Self-directed learning is the
ability of the learner to manage his or her own learning experience
with some guidance from a facilitator. Self-directed learning skills
are best improved when practiced. This paper reflects on a critical
social research project to improve the self-directed learning skills of
fourth year Information Systems students. Critical social research
differs from other research paradigms in that the researcher is viewed
as the agent of change to achieve the desired outcome in the problem
situation.
Abstract: This paper takes as its point of departure the foundational axiomatic theory of E. De Giorgi (1996, Scuola Normale Superiore di Pisa, Preprints di Matematica 26, 1), based on the two primitive notions of quality and relation. With the introduction of a unary relation, we develop a system based entirely on the sole primitive notion of relation. This modification enables a definition of the concept of a dynamic unary relation. In this way we construct a simple language capable of expressing other well-known theories, such as Robinson's arithmetic or a fragment of a theory of concatenation. A key role in this system is played by an abstract relation designated by "( )", which can be interpreted in different ways; in this paper we focus on the case in which computations can be performed and results obtained.
Abstract: Speech data compression is mostly performed with transform-based methods, which are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique, employing the VQ algorithms LBG, KPE and FCG. A results table shows the computational complexity of these three algorithms. We also introduce a new performance parameter, the Average Fractional Change in Speech Sample (AFCSS). Our FCG algorithm gives far better performance than the others, considering mean absolute error, AFCSS and complexity.
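Of the three VQ algorithms mentioned, LBG is the classical baseline; a minimal sketch of LBG codebook training on toy one-dimensional "frames" follows. Real speech VQ operates on multi-dimensional frame vectors, and the KPE and FCG variants are not shown.

```python
# Sketch of LBG (Linde-Buzo-Gray) codebook training for vector
# quantization. Note: classical LBG splits codevectors by multiplying
# by (1 +/- eps); an additive perturbation is used in this sketch to
# avoid a degenerate split when a centroid component is zero.

def nearest(codebook, vec):
    # Index of the codevector with the smallest squared distance to vec.
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(codebook[i], vec)))

def lbg(training, codebook_size, eps=0.01, iters=20):
    dim = len(training[0])
    # Start from the global centroid, then split until the target size.
    codebook = [[sum(v[d] for v in training) / len(training)
                 for d in range(dim)]]
    while len(codebook) < codebook_size:
        codebook = [[x + s * eps for x in c]
                    for c in codebook for s in (1, -1)]
        for _ in range(iters):  # Lloyd refinement of the split codebook
            cells = [[] for _ in codebook]
            for v in training:
                cells[nearest(codebook, v)].append(v)
            codebook = [[sum(v[d] for v in cell) / len(cell)
                         for d in range(dim)] if cell else c
                        for c, cell in zip(codebook, cells)]
    return codebook
```

Compression then consists of transmitting, for each frame, only the index of its nearest codevector; quality depends on the codebook size and training data.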
Abstract: This paper proposes a novel improvement of a forecasting approach based on time-invariant fuzzy time series. In contrast to traditional forecasting methods, fuzzy time series can also be applied to problems in which the historical data are linguistic values. It is shown that the proposed time-invariant method improves the performance of the forecasting process. Further, the effect of using different numbers of fuzzy sets is tested. As in most of the cited papers, the historical enrollment of the University of Alabama is used in this study to illustrate the forecasting process. The performance of the proposed method is then compared with existing time-invariant fuzzy time series models on the basis of forecasting accuracy, revealing a certain superiority of the proposed method over those described in the literature.
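A time-invariant fuzzy time series forecast in the simplified style of Chen's method, a common baseline in this literature rather than the improved method proposed above, can be sketched as follows; the interval count and defuzzification rule are assumptions of this sketch.

```python
# Sketch of a simple time-invariant fuzzy time series forecast:
# partition the universe of discourse into equal intervals, fuzzify
# each observation to its interval, group the fuzzy logical
# relationships A_i -> {A_j, ...}, and defuzzify with interval
# midpoints (Chen-style baseline, not the paper's proposed method).

def fuzzy_forecast(series, n_sets=7, margin=0):
    lo, hi = min(series) - margin, max(series) + margin
    width = (hi - lo) / n_sets
    mids = [lo + (i + 0.5) * width for i in range(n_sets)]
    fuzzify = lambda x: min(int((x - lo) / width), n_sets - 1)
    states = [fuzzify(x) for x in series]
    groups = {}  # fuzzy logical relationship groups
    for a, b in zip(states, states[1:]):
        groups.setdefault(a, set()).add(b)
    # Forecast one step past the last observation: average the
    # midpoints of the successors of the last fuzzy state.
    successors = groups.get(states[-1], {states[-1]})
    return sum(mids[s] for s in successors) / len(successors)
```

On enrollment-style data, varying `n_sets` corresponds to the "different number of fuzzy sets" experiment described in the abstract.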
Abstract: A vertical SOI-based MOSFET with a trench body structure, operated as a 1T DRAM cell, has been studied and investigated at various temperatures. Different operating temperatures are assigned to the device for performance comparison, so that its thermal stability can be carefully evaluated for future memory device applications. The simulations show that the vertical SOI-based MOSFET with trench body structure demonstrates proper electrical characteristics and possesses a conspicuous kink effect at the various operating temperatures. Transient simulations were also performed to show that the programming window values and retention time behaviour of the new 1T DRAM cell remain acceptable when it is operated at high temperature.
Abstract: Artificial neural networks (ANNs) have the ability to model input-output relationships from raw data, a characteristic that makes them invaluable in industry domains where such knowledge is scarce at best. In recent decades, in order to overcome the black-box characteristic of ANNs, researchers have attempted to extract the knowledge embedded within ANNs in the form of rules that can be used in inference systems. This paper presents a new technique that extracts a small set of rules from a two-layer ANN. The extracted rules yield high classification accuracy when implemented within a fuzzy inference system. The technique targets industry domains with less complex problems, for which no expert knowledge exists and a simpler solution is preferred to a complex one. The proposed technique is more efficient, simpler, and more widely applicable than most previously proposed techniques.
Abstract: This paper provides an overview of the auction theory literature. We present a general review of the literature on various auctions and focus specifically on the English auction. We are interested in modelling bidders' behaviour in an English auction environment; hence, we present an overview of the New Zealand wool auction, followed by a model describing a bidder's decision-making behaviour in that auction. The mathematical assumptions of an English auction environment are demonstrated from the perspective of the New Zealand wool auction.
Abstract: The most important subtype of non-Hodgkin's lymphoma is Diffuse Large B-Cell Lymphoma. Approximately 40% of the patients suffering from it respond well to therapy, whereas the remainder need more aggressive treatment in order to improve their chances of survival. Data mining techniques have helped to identify the class of the lymphoma efficiently; despite that, thousands of genes must be processed to obtain the results. This paper presents a comparison of various attribute selection methods aimed at reducing the number of genes to be searched, looking for a more effective procedure as a whole.
Abstract: The service industry accounts for about 70% of the GDP of Japan, and the importance of service innovation has been pointed out. The importance of system use and support services is increasing in information systems, one of the service industries. However, because such systems are not used enough, the purpose for which a CRM system was originally intended often cannot be achieved. To promote the use of the system, an effective service method is needed. Building a service model and clarifying the success factors are considered necessary to improve the operation service of the CRM system. In this research, a model of the operation service in the CRM system is constructed.