Abstract: Software maintenance is one of the essential processes of the Software Development Life Cycle. Its main purposes are the correction of errors, the revision of code, the prevention of future faults, and improvements in performance and capacity. Once a modification has been applied, the software has to be retested to increase confidence that it still meets its requirements. Accordingly, test cases must be selected for testing both the revised modules and the software as a whole. Regression test selection addresses this problem through approaches such as retest-all selection, random/ad-hoc selection, and safe regression test selection. The traditional techniques, in particular, rely on a mapping between the test cases in a test suite and the lines of code they execute. However, lines of code are not the only requirement affecting the size of a test suite; the number of functions and the number of faulty versions matter as well. Therefore, a model for test case selection is developed that covers these three requirements through an integrated technique, producing a smaller set of test cases than the traditional regression selection techniques.
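The coverage-based selection that the abstract describes can be sketched as follows; the test names, line numbers, and coverage map are hypothetical, and a real tool would obtain the mapping from coverage instrumentation rather than a hand-written dictionary.

```python
# Hypothetical sketch of coverage-based regression test selection:
# each test maps to the set of source lines it executes, and a test is
# re-run only if it covers at least one modified line.

def select_tests(coverage, changed_lines):
    """Return the subset of tests whose covered lines intersect the change set."""
    return sorted(t for t, lines in coverage.items()
                  if lines & changed_lines)

coverage = {
    "test_login":    {10, 11, 12, 40},
    "test_checkout": {40, 41, 42},
    "test_report":   {80, 81},
}
selected = select_tests(coverage, changed_lines={41, 42})
print(selected)  # only test_checkout touches the modified lines
```

Extending the key of the mapping from lines to functions or faulty versions, as the abstract proposes, leaves the selection logic unchanged.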
Abstract: This article studies the wire-winding process used to reinforce the frame of a pressure vessel. First, the importance of the wire-winding method is explained. The main steps in the design process are the axial-force control methodology and the wire-winding process itself; the hot isostatic press and the wire-winding process are introduced. Using the equilibrium conditions of the pressure vessel and frame, the stresses in the frame wires are analyzed. A case-study frame is examined to control the axial force in the hot isostatic press. The frame and its wires are simulated, with the friction effect and the effect of the wires on the elastic yoke included in the simulation model. The theoretical and simulated results are then compared, and the vessel pressure is applied to the frame to verify that the wound wires do not reach the yield point.
Abstract: In recent years, with the growing interest in leisure sports, an increasing number of zip-line (or zip-wire) facilities have been built. Much research has been devoted to cables and wires in bridges, but very little has addressed the safety of zip-line structures, even though fall accidents from zip-lines are reported frequently. Therefore, in this study, the structural safety of a zip-line under dynamic impact loading was evaluated for a previously installed leisure steel cable (zip-line) using a three-dimensional nonlinear finite element (FE) model. The results of this study should help assure the systematic stability of zip-lines.
Abstract: The main objective of this paper is to identify and
disseminate good practice in quality assurance and enhancement as
well as in teaching and learning at the master's level. This paper focuses
on the experience of the Erasmus Mundus Master program CIMET
(Color in Informatics and Media Technology). Amongst topics
covered, we discuss the adjustments necessary to a curriculum
designed for excellent international students and their preparation for
a global labor market.
Abstract: Defect prevention is the most vital but habitually neglected facet of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, overhead, and resources required to engineer a high-quality product. The key challenge for an IT industry is to engineer a software product with a minimum of post-deployment defects.
This effort is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention, leading to successful software development. The defect prevention techniques studied unearth 99% of defects. Inspection is found to be an essential technique for generating ideal software through enhanced methodologies of aided and unaided inspection schedules. On average, 13-15% of total project effort spent on inspection and 25-30% spent on testing are required to eliminate 99-99.75% of defects.
A comparison of the end results for the five selected projects across the companies is also presented, shedding light on the possibility for a particular company to position itself with an appropriate, complementary ratio of inspection to testing.
Abstract: The article analyzes national security as a scientific and practical problem, characterized by the effective action of the state's political institutions to maintain optimal conditions for the existence and development of the individual and society. National security, as a category of political science, reflects the relationship between security and the nation, including the social relations and social consciousness, social institutions and their activities that ensure the realization of national interests in a particular historical situation. National security comprises three levels: the individual, society, and the state. Their role and place are determined by the nature of social relations, the political system, and the presence of internal and external threats. In terms of content, the concept of national security is taken to include political, economic, military, environmental, and information security, as well as the safety of the nation's cultural development.
Abstract: Over the years, there is a growing trend towards
quality-based specifications in highway construction. In many
Quality Control/Quality Assurance (QC/QA) specifications, the
contractor is primarily responsible for quality control of the process,
whereas the highway agency is responsible for testing the acceptance
of the product. A cooperative investigation was conducted in Illinois
over several years to develop a prototype End-Result Specification
(ERS) for asphalt pavement construction. The final characteristics of
the product are stipulated in the ERS and the contractor is given
considerable freedom in achieving those characteristics. The risk for
the contractor or agency depends on how the acceptance limits and
processes are specified. Stochastic simulation models are very useful
in estimating and analyzing payment risk in ERS systems and these
form an integral part of the Illinois's prototype ERS system. This
paper describes the development of an innovative methodology to
estimate the variability components in in-situ density, air voids and
asphalt content data from ERS projects. The information gained from
this would be crucial in simulating these ERS projects for estimation
and analysis of payment risks associated with asphalt pavement
construction. However, these methods require at least two parties to
conduct tests on all the split samples obtained according to the
sampling scheme prescribed in the ERS currently implemented in Illinois.
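The role the variance components play in payment-risk simulation can be illustrated with a toy Monte Carlo sketch; all numeric values (target density, acceptance limit, variance components, tests per lot) are invented for illustration and are not the Illinois ERS parameters.

```python
# Illustrative Monte Carlo sketch (not the Illinois ERS model): lot test
# results are simulated from assumed between-lot and within-lot variance
# components, and the contractor's risk is the chance that a lot produced
# at target still fails the acceptance limit on the lot mean.
import random
import statistics

random.seed(42)

TARGET_DENSITY = 93.0   # assumed target, percent of maximum density
LOWER_LIMIT = 92.0      # assumed acceptance limit on the lot mean
SIGMA_BETWEEN = 0.6     # assumed lot-to-lot standard deviation
SIGMA_WITHIN = 0.8      # assumed test-to-test standard deviation
N_TESTS = 4             # tests per lot

def simulate_lot_mean():
    lot_effect = random.gauss(0.0, SIGMA_BETWEEN)
    tests = [TARGET_DENSITY + lot_effect + random.gauss(0.0, SIGMA_WITHIN)
             for _ in range(N_TESTS)]
    return statistics.mean(tests)

trials = 20000
failures = sum(simulate_lot_mean() < LOWER_LIMIT for _ in range(trials))
risk = failures / trials
print(f"estimated contractor risk: {risk:.3f}")
```

Splitting the total variability into its components, as the paper's methodology does, is what allows the two sources of variation to be simulated separately.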
Abstract: The seemingly ambiguous title of this paper – use of the terms maturity and innovation in concord – signifies the imperative of every organisation within the competitive domain. Where organisational maturity and innovativeness were traditionally considered antonymous, the assimilation of these two seemingly contradictory notions is fundamental to the assurance of long-term organisational prosperity. Organisations are required, now more than ever, to grow and mature their innovation capability – yielding consistent innovative outputs. This paper describes research conducted to consolidate the principles of innovation and identify the fundamental components that constitute organisational innovation capability. The process of developing an Innovation Capability Maturity Model is presented. A brief description is provided of the basic components of the model, followed by a description of the case studies that were conducted to evaluate the model. The paper concludes with a summary of the findings and potential future research.
Abstract: In most cases, it is very difficult to measure structural vibration directly with a large number of sensors because of complex geometry, time, and equipment cost. For this reason, this paper deals with the problem of locating sensors on a plate model using four advanced sensor placement optimization (SPO) techniques. It also suggests an evaluation index representing the orthogonality between natural modes. The index value assists in selecting the proper SPO technique and the optimal sensor positions for monitoring dynamic systems without experiments.
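The paper's own orthogonality index is not reproduced here; as a stand-in, the Modal Assurance Criterion (MAC) is one standard way to score how distinguishable two mode shapes remain when sampled only at the chosen sensor positions. The mode-shape values below are made up.

```python
# The Modal Assurance Criterion between two mode-shape vectors sampled at
# the candidate sensor positions: values near 0 mean the sensor set keeps
# the two modes distinguishable; values near 1 mean they look alike.

def mac(phi_i, phi_j):
    """MAC between two mode-shape vectors at the sensor positions."""
    dot = sum(a * b for a, b in zip(phi_i, phi_j))
    norm_i = sum(a * a for a in phi_i)
    norm_j = sum(b * b for b in phi_j)
    return dot * dot / (norm_i * norm_j)

mode1 = [0.0, 0.5, 1.0, 0.5]   # hypothetical first bending mode
mode2 = [0.5, 1.0, 0.0, -1.0]  # hypothetical second mode
print(f"MAC = {mac(mode1, mode2):.3f}")  # near 0: well separated
```

A sensor placement that drives the off-diagonal MAC values toward zero preserves the orthogonality between modes that the proposed index quantifies.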
Abstract: In a competitive production environment, critical decisions are based on data resulting from random sampling of product units. The efficiency of these decisions depends on the quality of the data and on their reliability, which leads to the necessity of a reliable measurement system. The process of estimating and analysing the errors contributed by a measurement system is known as Measurement System Analysis (MSA). The aim of this research is to determine the necessity of, and assure, extensive development in analysing measurement systems, particularly with the use of Gage Repeatability and Reproducibility (GR&R) studies to improve physical measurements. Although repeatability and reproducibility gages are by now well established in manufacturing industries, they are not applied as widely as other measurement system analysis methods. To introduce this method and to gain feedback for improving measurement systems, this survey focuses on the ANOVA method as the most widespread way of calculating Repeatability and Reproducibility (R&R).
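A minimal sketch of the ANOVA-based GR&R calculation the abstract refers to, assuming a crossed parts-by-operators design with no part-operator interaction; the measurement values, parts, and operators are invented for illustration.

```python
# Minimal ANOVA GR&R sketch for a crossed design, assuming no
# part-operator interaction: repeatability is the within-cell (equipment)
# variance, reproducibility is the operator-to-operator variance.
import statistics

# data[part][operator] -> repeated measurements (hypothetical values)
data = {
    "P1": {"A": [10.1, 10.2], "B": [10.4, 10.5]},
    "P2": {"A": [12.0, 12.1], "B": [12.3, 12.2]},
    "P3": {"A": [11.0, 10.9], "B": [11.2, 11.3]},
}
parts = list(data)
operators = sorted({o for cell in data.values() for o in cell})
r = 2  # replicates per cell
all_vals = [y for cell in data.values() for ys in cell.values() for y in ys]
grand = statistics.mean(all_vals)

# within-cell mean square -> repeatability (equipment variation)
ss_e = sum((y - statistics.mean(ys)) ** 2
           for cell in data.values() for ys in cell.values() for y in ys)
df_e = len(parts) * len(operators) * (r - 1)
var_repeat = ss_e / df_e

# operator mean square -> reproducibility (appraiser variation)
op_means = {o: statistics.mean([y for cell in data.values() for y in cell[o]])
            for o in operators}
ms_o = (len(parts) * r
        * sum((m - grand) ** 2 for m in op_means.values())
        / (len(operators) - 1))
var_reprod = max(0.0, (ms_o - var_repeat) / (len(parts) * r))

grr = (var_repeat + var_reprod) ** 0.5
print(f"repeatability var = {var_repeat:.4f}, "
      f"reproducibility var = {var_reprod:.4f}, GR&R = {grr:.4f}")
```

With an interaction term present, the operator expected mean square also contains the interaction variance; full GR&R studies therefore estimate the interaction component first.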
Abstract: This article analyzes the Kazakhstani experience in organizing postgraduate education following the institute of higher education, the legislative and regulatory assurance of master's preparation, and statistical data for the republic. The features of designing master's programs, the conditions for implementing the credit system of study, and the technologies of research-oriented teaching of master's students are analyzed. In conclusion, some recommendations are given on creating a person-oriented environment for the research-oriented teaching of master's students.
Abstract: A virtualized and virtual approach is presented for academically preparing students to engage successfully, from a strategic perspective, with those concerns and measures, both structured and unstructured, in the area of cyber security and information assurance. The Master of Science in Cyber Security and
Information Assurance (MSCSIA) is a professional degree for those
who endeavor through technical and managerial measures to ensure
the security, confidentiality, integrity, authenticity, control,
availability and utility of the world's computing and information
systems infrastructure. The National University Cyber Security and
Information Assurance program is offered as a Master's degree. The
emphasis of the MSCSIA program uniquely includes hands-on
academic instruction using virtual computers. This past year, 2011,
the NU facility has become fully operational using system
architecture to provide a Virtual Education Laboratory (VEL)
accessible to both onsite and online students. The first student cohort
completed their MSCSIA training this past March 2, 2012 after
fulfilling 12 courses, for a total of 54 units of college credits. The
rapid pace scheduling of one course per month is immensely
challenging, perpetually changing, and virtually multifaceted. This
paper analyses these descriptive terms in consideration of those
globalization penetration breaches as present in today's world of
cyber security. In addition, we present current NU practices to
mitigate risks.
Abstract: The purpose of this research is to develop and apply the RSCMAC to enhance the dynamic accuracy of the Global Positioning System (GPS). GPS devices provide accurate positioning, speed detection, and a highly precise time standard over more than 98% of the earth's surface. The overall Global Positioning System comprises 24 GPS satellites in space; signal transmission over two carrier frequencies (Link 1 and Link 2) modulated with two sets of pseudo-random codes (the C/A code and the P code); ground monitoring stations; and client GPS receivers. With only four satellites in view, the client's position and elevation can be determined rapidly, and the more satellites that are receivable, the more accurately the position can be decoded. The standard positioning accuracy of simplified GPS receivers has been greatly improved, but, affected by satellite clock error, tropospheric delay, and ionospheric delay, current measurement accuracy is still on the level of 5-15 m. To increase dynamic GPS positioning accuracy, most researchers rely on an inertial navigation system (INS) or on additional sensors or maps for assistance. This research instead exploits the RSCMAC's advantages of fast learning, assured learning convergence, and the capability to solve time-dependent dynamic system problems, combined with a static positioning calibration structure, to improve GPS dynamic accuracy. Increased dynamic positioning accuracy is achieved by using the RSCMAC with GPS receivers to collect dynamic error data for error prediction, and then using the predicted error to correct the GPS dynamic positioning data. The ultimate purpose of this research is to reduce the dynamic positioning error of inexpensive GPS receivers; the economic benefit grows as the accuracy increases.
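The predict-then-correct loop described above can be sketched as follows; a naive moving-average predictor stands in for the RSCMAC network, and the raw fixes and reference positions are made-up one-dimensional values in metres.

```python
# Sketch of the correction loop only: a learned predictor (RSCMAC in the
# paper; a naive moving-average stand-in here) estimates the current
# dynamic error, and the raw GPS fix is corrected by subtracting it.
from collections import deque

class NaiveErrorPredictor:
    """Stand-in for the RSCMAC network: predicts the next positioning
    error as the mean of the last k observed errors."""
    def __init__(self, k=3):
        self.history = deque(maxlen=k)
    def update(self, observed_error):
        self.history.append(observed_error)
    def predict(self):
        return sum(self.history) / len(self.history) if self.history else 0.0

predictor = NaiveErrorPredictor()
raw_fixes = [100.0, 101.0, 102.0]    # raw GPS eastings, metres (made up)
true_positions = [95.0, 96.0, 97.0]  # reference trajectory (made up)

corrected = []
for raw, truth in zip(raw_fixes, true_positions):
    corrected.append(raw - predictor.predict())  # apply predicted error
    predictor.update(raw - truth)                # learn from observed error
print(corrected)
```

In the paper's setting, the reference trajectory comes from the static positioning calibration structure, and the RSCMAC replaces the moving average with a trained, convergence-assured network.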
Abstract: The Institute of Product Development is dealing
with the development, design and dimensioning of micro components
and systems as a member of the Collaborative Research
Centre 499 “Design, Production and Quality Assurance of
Molded micro components made of Metallic and Ceramic Materials”.
Because of technological restrictions in the miniaturization
of conventional manufacturing techniques, shape and
material deviations cannot be scaled down in the same proportion
as the micro parts, rendering components with relatively
wide tolerance fields. Systems that include such components
should be designed with this particularity in mind, often requiring
large clearances. In the end, the output of such systems
is variable and prone to dynamic instability. To save
production time and resources, every study of these effects
should happen early in the product development process and be based on computer simulation to avoid costly prototypes. A suitable method is proposed here and applied, as an example, to a
micro technology demonstrator developed by the CRC499. It
consists of a one stage planetary gear train in a sun-planet-ring
configuration, with input through the sun gear and output
through the carrier. The simulation procedure relies on ordinary
Multi Body Simulation methods and subsequently adds
other techniques to further investigate details of the system's
behavior and to predict its response. The selection of the relevant
parameters and output functions followed the engineering
standards for regular sized gear trains. The first step is to
quantify the variability and to reveal the most critical points of
the system, performed through a whole-mechanism Sensitivity
Analysis. Due to the lack of previous knowledge about the system's
behavior, different DOE methods involving small and
large amount of experiments were selected to perform the SA.
In this particular case the parameter space can be divided into
two well-defined groups, one of them containing the gears' profile
information and the other the components' spatial locations.
This has been exploited to explore the different DOE techniques
more promptly. A reduced set of parameters is derived for
further investigation and to feed the final optimization process,
whether as optimization parameters or as external perturbation
collective. The 10 most relevant perturbation factors and 4 to 6
prospective variable parameters are considered in a new, simplified
model. All of the parameters are affected by the mentioned
production variability. The objective functions of interest are based on variability measures of the scalar outputs, so the problem becomes an optimization under robustness and reliability constraints. The study shows an initial step on the development
path of a method to design and optimize complex micro
mechanisms composed of widely toleranced elements, accounting
for the robustness and reliability of the systems' output.
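The whole-mechanism sensitivity analysis step can be illustrated with a minimal two-level full-factorial DOE on a toy response function; the parameter names and coefficients are hypothetical stand-ins for the real multi-body simulation model.

```python
# Minimal main-effects sketch of a whole-mechanism sensitivity analysis:
# a full two-level factorial DOE on a toy response ranks which tolerance
# parameters drive output variability (the real model is an MBS simulation).
from itertools import product

def response(profile_dev, backlash, eccentricity):
    """Toy stand-in for the simulated transmission error of the gear train."""
    return 2.0 * backlash + 0.5 * profile_dev + 0.1 * eccentricity

levels = [-1.0, 1.0]  # coded low/high tolerance levels
names = ["profile_dev", "backlash", "eccentricity"]
runs = [dict(zip(names, combo)) for combo in product(levels, repeat=3)]

effects = {}
for name in names:
    hi = [response(**run) for run in runs if run[name] == 1.0]
    lo = [response(**run) for run in runs if run[name] == -1.0]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

ranked = sorted(effects, key=lambda n: abs(effects[n]), reverse=True)
print(ranked)  # backlash dominates in this toy model
```

Screening out the weak factors this way is what yields the reduced set of 10 perturbation factors and 4 to 6 variable parameters carried into the simplified model.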
Abstract: A separation-kernel-based operating system (OS) has been designed for use in secure embedded systems by applying formal methods to the design of the separation-kernel part. The separation kernel is a small OS kernel that provides an abstract distributed environment on a single CPU. The design of the separation kernel was verified using two formal methods, the B method and the Spin model checker. A newly designed semi-formal method, the extended state transition method, was also applied. An OS comprising the separation-kernel part and additional OS services on top of the separation kernel was prototyped on the Intel IA-32 architecture. Developing and testing of a prototype embedded application, a point-of-sale application, on the prototype OS demonstrated that the proposed architecture and the use of formal methods to design its kernel part are effective for achieving a secure embedded system having a high-assurance separation kernel.
Abstract: Tensile armour wires provide a flexible pipe's
resistance to longitudinal stresses. Flexible pipe manufacturers need
to know the effect of defects such as scratches and cracks, with
dimensions less than 0.2 mm (the limit of current non-destructive detection technology) on the fracture stress and fracture
strain of the wire for quality assurance purposes. Recent research
involving the determination of the fracture strength of cracked wires
employed laboratory testing and classical fracture mechanics
approach using non-standardised fracture mechanics specimens
because standard test specimens could not be manufactured from the
wires owing to their sizes. In this work, the effect of miniature
cracks on the fracture properties of tensile armour wires was
investigated using laboratory and finite element tensile testing
simulations with the phenomenological shear fracture model. The
investigation revealed that the presence of cracks shallower than
0.2 mm has a more severe effect on the fracture strain of the wire.
Abstract: This paper proposes an innovative methodology for
Acceptance Sampling by Variables, which is a particular category of
Statistical Quality Control dealing with the assurance of product quality. Our contribution lies in the exploitation of machine learning
techniques to address the complexity and remedy the drawbacks of
existing approaches. More specifically, the proposed methodology
exploits Artificial Neural Networks (ANNs) to aid decision making
about the acceptance or rejection of an inspected sample. For any
type of inspection, ANNs are trained on data from the corresponding
tables of a standard's sampling plan schemes. Once trained, ANNs
can give closed-form solutions for any acceptance quality level and
sample size, thus leading to an automation of the reading of the
sampling plan tables, without any need to compromise on the
values of the specific standard chosen each time. The proposed
methodology provides enough flexibility to quality control engineers
during the inspection of their samples, allowing the consideration of
specific needs, while it also reduces the time and the cost required for
these inspections. Its applicability and advantages are demonstrated
through two numerical examples.
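The decision the trained ANNs automate is the variables sampling rule itself; a minimal sketch of that rule for a single lower specification limit follows. The sample values and the acceptability constant k are illustrative and are not taken from any standard's tables.

```python
# Acceptance sampling by variables, single lower specification limit:
# accept the lot when the quality statistic Q = (x̄ - L) / s is at least
# the acceptability constant k (which a standard's tables would supply).
import statistics

def accept_lot(sample, lower_spec, k):
    """Return True if the inspected sample meets the variables plan."""
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation
    return (mean - lower_spec) / s >= k

sample = [10.4, 10.6, 10.5, 10.7, 10.3]  # hypothetical measurements
print(accept_lot(sample, lower_spec=10.0, k=2.0))
```

The paper's ANNs, once trained on the tables, effectively supply k (and the sample size) as a closed-form function of the acceptance quality level, removing the manual table lookup.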
Abstract: Recently, distributed generation technologies have received much attention for the potential energy savings and reliability assurances that might be achieved as a result of their widespread adoption. Fueling this attention have been the possibilities of international agreements to reduce greenhouse gas emissions, electricity sector restructuring, high power-reliability requirements for certain activities, and concern about easing transmission and distribution capacity bottlenecks and congestion. It is therefore necessary to investigate the impact of such generators on distribution feeder reconfiguration. This paper presents an approach to distribution reconfiguration considering Distributed Generators (DGs). The objective function is the summation of electrical power losses. A Tabu search optimization is used to solve the optimal operation problem. The approach is tested on a real distribution feeder.
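A generic Tabu search skeleton of the kind mentioned above can be sketched on a toy loss function; the neighbourhood (adjacent switch settings) and the loss values are stand-ins for a real power-flow model of the feeder.

```python
# Generic Tabu search skeleton: greedy moves through a neighbourhood,
# with a short tabu list that forbids recently visited configurations
# so the search can escape local minima.
def tabu_search(initial, neighbours, cost, iters=50, tenure=5):
    best = current = initial
    tabu = []
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best non-tabu move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                      # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy problem: pick an integer switch setting minimising a loss curve.
losses = {s: (s - 7) ** 2 + 3 for s in range(16)}  # minimum at s = 7
result = tabu_search(
    initial=0,
    neighbours=lambda s: [x for x in (s - 1, s + 1) if 0 <= x <= 15],
    cost=losses.get,
    iters=40,
)
print(result)  # converges to the minimum-loss configuration
```

In the feeder problem, a "state" would be a radial switch configuration and the cost would come from a power-flow calculation of total losses with the DGs in service.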
Abstract: In the automotive industry test drives are being conducted
during the development of new vehicle models or as a part of
quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, or internal
data from the electronic control units is recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes to use machine learning to support domain experts
by preventing them from contemplating irrelevant data and
rather pointing them to the relevant parts in the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data description”
is utilised to calculate distances between feature vectors. SVDDSUBSEQ is proposed as a novel approach that classifies subsequences in multivariate time series data. The approach makes it possible to detect unexpected faults without modelling effort, as shown by experimental results on recordings from test drives.
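A true SVDD solves a quadratic program for a minimal enclosing hypersphere of the training data; the sketch below keeps the sliding-window subsequence idea but approximates the sphere with a centroid and the maximum training radius. Both signals are invented, with a spike standing in for an injected fault.

```python
# Subsequence anomaly detection sketch: learn a centre and radius from
# sliding windows over a normal recording, then flag test windows that
# fall outside the learned region (a centroid approximation of SVDD).

def windows(series, w):
    return [tuple(series[i:i + w]) for i in range(len(series) - w + 1)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fit(train, w):
    feats = windows(train, w)
    centre = tuple(sum(col) / len(feats) for col in zip(*feats))
    radius = max(dist(f, centre) for f in feats)
    return centre, radius

def anomalies(test, centre, radius, w):
    """Indices of test subsequences falling outside the learned region."""
    return [i for i, f in enumerate(windows(test, w))
            if dist(f, centre) > radius]

normal = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # made-up normal signal
faulty = [0.0, 1.0, 0.0, 5.0, 0.0, 1.0]            # spike = injected fault
centre, radius = fit(normal, w=2)
print(anomalies(faulty, centre, radius, w=2))  # windows touching the spike
```

The real SVDD weights boundary points via support vectors and admits kernels, which is what lets it enclose the normal behaviour of multivariate in-vehicle signals far more tightly than this centroid sketch.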