Abstract: Currently, a large number of licensing activities (Early
Site Permits, Combined Operating Licenses, reactor certifications,
etc.) are pending review before the United States Nuclear
Regulatory Commission (US NRC). Much of the senior staff at the
NRC is now committed to these review and licensing actions. To
address this additional workload, the NRC has recruited a large
number of new Regulatory Staff to deal with these and other
regulatory actions, such as oversight of the US fleet of Research and
Test Reactors (RTRs). These reactors pose unusual demands on
Regulatory Staff since the US fleet of RTRs, although small (32
licensed RTRs as of 2010), represents a broad range of reactor types,
operations, and research and training aspects that nuclear power
plants (such as the 104 LWRs) do not. The NRC must inspect and
regulate all these facilities. This paper addresses selected training
topics and regulatory activities provided to NRC Inspectors for RTRs.
Abstract: High level synthesis (HLS) is a process which
generates a register-transfer level design for digital systems from
a behavioral description. There are many HLS algorithms and
commercial tools. However, most of these algorithms consider a
behavioral description of the system in which a single token is
presented to the system at a time. This approach does not exploit extra
hardware efficiently, especially in the design of digital filters where
common operations may exist between successive tokens. In this
paper, we modify the behavioral description to process multiple
tokens in parallel. Unlike fully parallel processing, however, this
approach does not require full hardware replication; it exploits the
presence of common operations between successive tokens. The
performance of the proposed approach is better than sequential
processing and approaches that of full parallel processing as the
hardware resources are increased.
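The kind of sharing the abstract describes can be illustrated with a toy sketch (not the paper's HLS algorithm): in an FIR filter with repeated coefficient values, e.g. a symmetric impulse response, the same product appears in several successive output tokens and need only be computed once. The function below caches products by (coefficient value, sample index) and counts multiplications.

```python
# Illustrative sketch only: reusing common multiplications between
# successive tokens of an FIR filter y[n] = sum_k h[k] * x[n-k].
# Sharing arises when coefficient values repeat (symmetric filters).

def fir_shared(h, x):
    products = {}   # (coefficient value, sample index) -> product, computed once
    mults = 0
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            j = n - k
            if j < 0:
                continue
            if (hk, j) not in products:
                products[(hk, j)] = hk * x[j]
                mults += 1
            acc += products[(hk, j)]
        y.append(acc)
    return y, mults
```

For the symmetric filter h = [1, 2, 1] over six samples, naive per-token evaluation needs 15 multiplications while the shared version needs 11, with identical outputs.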
Abstract: Encryption protects communication partners from
disclosure of their secret messages but cannot prevent traffic analysis
and the leakage of information about "who communicates with
whom". In the presence of collaborating adversaries, this linkability
of actions can endanger anonymity. However, reliably providing
anonymity is crucial in many applications. Especially in context-aware
mobile business, where mobile users equipped with PDAs
request and receive services from service providers, providing
anonymous communication is mission-critical and challenging at the
same time. Firstly, the limited performance of mobile devices does
not allow for heavy use of expensive public-key operations which are
commonly used in anonymity protocols. Secondly, the demands for
security depend on the application (e.g., mobile dating vs. pizza
delivery service), but different users (e.g., a celebrity vs. a normal
person) may even require different security levels for the same
application. Considering both hardware limitations of mobile devices
and different sensitivity of users, we propose an anonymity
framework that is dynamically configurable according to user and
application preferences. Our framework is based on Chaum's mix-net.
We explain the proposed framework, its configuration
parameters for dynamic behavior, and the algorithm that enforces
dynamic anonymity.
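As background for the mix-net basis mentioned above, here is a toy sketch of Chaum-style mix routing (illustration only; the framework's actual cryptography is not shown, and a real mix-net uses layered public-key encryption rather than the XOR stand-in below). Each mix strips one encryption layer and shuffles the batch, so no single mix can link sender to receiver.

```python
# Toy sketch of Chaum-style mix-net routing (not the paper's framework).
import random

def encrypt(key, data):          # stand-in for real public-key encryption
    return bytes(b ^ key for b in data)

decrypt = encrypt                # the XOR toy cipher is its own inverse

def wrap(message, mix_keys):
    """Sender wraps the message in one layer per mix, innermost layer last."""
    for key in reversed(mix_keys):
        message = encrypt(key, message)
    return message

def mix_cascade(batch, mix_keys, rng=random.Random(0)):
    """Each mix decrypts its own layer and reorders the batch."""
    for key in mix_keys:
        batch = [decrypt(key, m) for m in batch]
        rng.shuffle(batch)       # breaks the input/output order linkage
    return batch
```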
Abstract: Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same finger tip. Marking all the minutiae accurately while rejecting false minutiae is another issue still under research. Our work combines many methods to build a minutia extractor and a minutia matcher, drawing on a wide investigation of the research literature. It also introduces some novel changes, such as segmentation using morphological operations, improved thinning, false minutiae removal methods, minutia marking with special consideration of triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in a unified x-y coordinate system after a two-step transformation.
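A standard way to mark terminations and bifurcations on a thinned ridge image, of the kind a minutia extractor builds on, is the crossing-number test (shown here as a generic sketch, not necessarily the paper's exact marker): a ridge pixel's crossing number is half the number of 0/1 transitions around its 8 neighbors; CN = 1 marks a termination, CN = 3 a bifurcation.

```python
# Generic crossing-number minutiae detector on a thinned 0/1 ridge image.

def crossing_number(img, r, c):
    # 8 neighbors in clockwise order, wrapping back to the first
    nbrs = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
            img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum(abs(nbrs[i] - nbrs[(i + 1) % 8]) for i in range(8)) // 2

def extract_minutiae(img):
    """Return (row, col, kind) for each minutia on a thinned binary image."""
    minutiae = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            if img[r][c] != 1:
                continue
            cn = crossing_number(img, r, c)
            if cn == 1:
                minutiae.append((r, c, "termination"))
            elif cn == 3:
                minutiae.append((r, c, "bifurcation"))
    return minutiae
```

On a small T-shaped ridge, the junction pixel is reported as a bifurcation and the three line ends as terminations.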
Abstract: In this paper, the construction of fast algorithms for the computation of the Periodic Walsh Piecewise-Linear (PWL) transform and the Periodic Haar Piecewise-Linear (PHL) transform is presented. Algorithms for the computation of the inverse transforms are also proposed, and the matrix equations of the PWL and PHL transforms are introduced. A comparison of the computational requirements shows that the periodic piecewise-linear transforms require fewer operations than some orthogonal transforms such as the Fourier, Walsh, and Discrete Cosine transforms.
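As a reference point for the operation counts discussed above, the classic in-place fast Walsh-Hadamard transform (a standard algorithm, not the paper's PWL/PHL construction) needs only n·log2(n) additions/subtractions and no multiplications:

```python
# Standard fast Walsh-Hadamard transform; input length must be a power of 2.

def fwht(a):
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly: sum and difference
        h *= 2
    return a
```

Applying the transform twice returns n times the input, which makes the inverse a scaled forward pass.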
Abstract: I/O workload is a critical factor in analyzing
I/O patterns and file system performance. However, tracing I/O
operations on the fly in a distributed parallel file system is non-trivial
due to collection overhead and the large volume of data. In this paper, we
design and implement a parallel file system logging method for high
performance computing using a shared memory-based multi-layer
scheme. It minimizes overhead by reducing logging operation
response time and provides an efficient post-processing scheme
through shared memory. A separate logging server can collect
sequential logs from multiple clients in a cluster through packet
communication. Implementation and evaluation results show the low
overhead and high scalability of this architecture for high-performance
parallel logging analysis.
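The low-overhead idea above can be sketched with a preallocated ring buffer (assumptions: fixed-size records, a single producer; the paper's shared-memory layout is not specified here): the logging call only packs bytes into a slot, and a collector drains records in batches later.

```python
# Minimal sketch of a preallocated ring-buffer log for cheap append calls.
import struct

REC = struct.Struct("<dII")   # timestamp, operation code, byte count

class RingLog:
    def __init__(self, capacity):
        self.buf = bytearray(capacity * REC.size)
        self.capacity = capacity
        self.head = 0          # next slot to write
        self.count = 0

    def append(self, ts, op, nbytes):
        """Client side: pack one record; overwrites the oldest when full."""
        REC.pack_into(self.buf, self.head * REC.size, ts, op, nbytes)
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def drain(self):
        """Collector side: decode all buffered records, oldest first."""
        start = (self.head - self.count) % self.capacity
        out = [REC.unpack_from(self.buf, ((start + i) % self.capacity) * REC.size)
               for i in range(self.count)]
        self.count = 0
        return out
```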
Abstract: The usual correctness condition for a schedule of
concurrent database transactions is some form of serializability of
the transactions. For general forms, the problem of deciding whether
a schedule is serializable is NP-complete. In those cases other approaches
to proving correctness, using proof rules that allow the steps
of the proof of serializability to be guided manually, are desirable.
Such an approach is possible in the case of conflict serializability
which is proved algebraically by deriving serial schedules using
commutativity of non-conflicting operations. However, conflict serializability
can be an unnecessarily strong form of serializability, restricting
concurrency and thereby reducing performance. In practice,
weaker, more general, forms of serializability for extended models of
transactions are used. Currently, there are no known methods using
proof rules for proving those general forms of serializability. In this
paper, we define serializability for an extended model of partitioned
transactions, which we show to be as expressive as serializability
for general partitioned transactions. An algebraic method for proving
general serializability is obtained by giving an initial-algebra specification
of serializable schedules of concurrent transactions in the
model. This demonstrates that it is possible to conduct algebraic
proofs of correctness of concurrent transactions in general cases.
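For contrast with the algebraic approach above, the standard decidable test for plain conflict serializability is the precedence-graph check (a textbook method, not the paper's initial-algebra specification): a schedule is conflict-serializable iff its precedence graph over transactions is acyclic.

```python
# Textbook precedence-graph test. Operations are (txn, action, item) with
# action "r" or "w"; two operations conflict if they touch the same item,
# come from different transactions, and at least one is a write.

def conflict_serializable(schedule):
    edges = set()
    txns = {t for t, _, _ in schedule}
    for i, (t1, a1, x1) in enumerate(schedule):
        for t2, a2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "w" in (a1, a2):
                edges.add((t1, t2))          # t1's op precedes t2's conflicting op

    def cyclic(node, stack, done):           # DFS cycle detection
        stack.add(node)
        for u, v in edges:
            if u == node and v not in done:
                if v in stack or cyclic(v, stack, done):
                    return True
        stack.discard(node)
        done.add(node)
        return False

    done = set()
    return not any(cyclic(t, set(), done) for t in txns if t not in done)
```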
Abstract: Applying a rigorous process to optimize the elements
of a supply-chain network resulted in a reduction of the waiting time
for both the service provider and the customer. Different sources of
downtime of the hydraulic pressure controller/calibrator (HPC) were
causing interruptions in operations. The process examined all the
issues to drive greater efficiencies. The issues included inherent
design issues with the HPC pump, contamination of the HPC with
impurities, and the lead time required for annual calibration in the USA.
The HPC is used for mandatory testing/verification of formation
tester/pressure measurement/logging-while-drilling tools by oilfield
service providers, including Halliburton.
After a market study and analysis, it was concluded that the current
HPC model is the best suited to the oilfield industry. To use the existing
HPC model effectively, design and contamination issues were
addressed through design and process improvements. An optimum
network is proposed after comparing different supply-chain models
for calibration lead-time reduction.
Abstract: Earthmoving operations are a major part of many
construction projects. Because of the complexity and fast-changing
environment of such operations, planning and estimating are
crucial at both the planning and operational levels. This paper presents
the framework of a microscopic discrete-event simulation system for
modeling earthmoving operations and conducting productivity
estimation at an operational level. A prototype has been developed
to demonstrate the applicability of the proposed framework, and this
simulation system is presented via a case study based on an actual
earthmoving project. The case study shows that the proposed
simulation model is capable of evaluating alternative operating
strategies and resource utilization at a very detailed level.
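The core of such a discrete-event simulation can be sketched in a few lines (hypothetical cycle times; the paper's microscopic model is far more detailed): trucks queue at a single loader, haul, dump, and return, while an event loop pops the earliest event from a priority queue.

```python
# Minimal discrete-event sketch of an earthmoving cycle (assumed times).
import heapq

LOAD, HAUL, DUMP_RETURN = 5.0, 12.0, 10.0   # minutes; illustrative values

def simulate(n_trucks, horizon):
    events = [(0.0, i, "arrive") for i in range(n_trucks)]  # (time, truck, kind)
    heapq.heapify(events)
    loader_free_at = 0.0
    loads = 0
    while events:
        t, truck, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrive":                 # truck joins the loader queue
            start = max(t, loader_free_at)
            loader_free_at = start + LOAD
            heapq.heappush(events, (loader_free_at, truck, "loaded"))
        elif kind == "loaded":               # haul, dump, and come back
            loads += 1
            heapq.heappush(events, (t + HAUL + DUMP_RETURN, truck, "arrive"))
    return loads
```

Running it with different fleet sizes shows the kind of strategy comparison the abstract describes: production grows with added trucks until the loader saturates.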
Abstract: This paper and its companion (Part 2) deal with
modeling and optimization of two NP-hard problems in production
planning of flexible manufacturing system (FMS), part type selection
problem and loading problem. The part type selection problem and
the loading problem are strongly related and heavily influence the
system's efficiency and productivity. The problems become even
harder when operational flexibilities, such as the possibility of
processing an operation on alternative machines with alternative
tools, are considered. These problems have been modeled and solved
simultaneously by using real coded genetic algorithms (RCGA)
which uses an array of real numbers as chromosome representation.
These real numbers can be converted into part type sequence and
machines that are used to process the part types. This first part
focuses on the modeling of the problems and discusses how
the novel chromosome representation can be applied to solve
them. The second part will discuss the effectiveness of the
RCGA in solving various test bed problems.
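One common way a real-valued chromosome is decoded into a sequence plus discrete choices is random-key decoding, sketched below as an assumption (the paper's exact mapping may differ): sorting the genes yields the part type order, and the fractional parts pick machines.

```python
# Hypothetical random-key style decoding of a real-coded chromosome.
# Genes are assumed to lie in [0, 1).

def decode(chromosome, n_machines):
    # part order: gene indices sorted by gene value
    order = sorted(range(len(chromosome)), key=lambda i: chromosome[i])
    # machine choice: fractional part of each gene scaled to the machine count
    machines = [int((g - int(g)) * n_machines) for g in chromosome]
    return order, machines
```

The appeal of this encoding is that ordinary real-valued crossover and mutation always produce feasible sequences, since any gene vector sorts into a valid permutation.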
Abstract: Rotation or tilt present in an image captured by digital
means can be detected and corrected using an Artificial Neural Network
(ANN) for application with a Face Recognition System (FRS). Principal
Component Analysis (PCA) features of faces at different angles
are used to train an ANN which detects the rotation of an input image;
the rotation is then corrected using a set of operations implemented by
another ANN-based system. The work also deals with the recognition
of human faces using features from the forehead, eyes, nose and
mouth as decision support entities of the system, configured using
a Generalized Feed Forward Artificial Neural Network (GFFANN).
These features are combined to provide a reinforced decision for
verification of a person's identity despite illumination variations. The
complete system, performing facial image rotation detection, correction
and recognition using reinforced decision support, provides a
success rate in the high 90s.
Abstract: In the present paper, the three-dimensional
temperature field of the tool is determined during machining and
compared with experimental work on a C45 workpiece using carbide
cutting tool inserts. During metal cutting operations, high
temperature is generated at the tool cutting edge, which influences
the rate of tool wear. Temperature is the most important characteristic of
machining processes, since many parameters such as cutting speed,
surface quality and cutting forces depend on it, and high
temperatures can cause high mechanical stresses which lead to early
tool wear and reduced tool life. Therefore, considerable attention is
paid to determining tool temperatures. The experiments are carried out
under dry and orthogonal machining conditions. The results show that
the increase in tool temperature depends on the depth of cut and
especially on the cutting speed in the high range of cutting conditions.
Abstract: In this paper, a design methodology to implement a low-power and high-speed 2nd order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement addition/subtraction operations in the design of the recursive section of the IIR filter, reducing the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture design, which is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.
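The shift-adds substitution at the heart of the abstract can be illustrated with a minimal sketch (illustration only, not the paper's synthesis flow): an integer constant multiplication becomes shifts and additions driven by the constant's binary expansion.

```python
# Replace x * constant (constant a nonnegative integer) with shifts and adds,
# one addition per set bit of the constant.

def shift_add_mul(x, constant):
    result, bit = 0, 0
    c = constant
    while c:
        if c & 1:
            result += x << bit   # add x shifted by the bit position
        c >>= 1
        bit += 1
    return result
```

Hardware synthesizers go further, e.g. recoding constants in canonical signed digit form to also allow subtractions and minimize the number of adders.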
Abstract: This paper presents a comparative analysis of a new
unsupervised PCA-based technique for steel plates texture segmentation
towards defect detection. The proposed scheme called Variance
Based Component Analysis or VBCA employs PCA for feature
extraction, applies a feature reduction algorithm based on variance of
eigenpictures, and classifies the pixels as defective or normal. While
the classic PCA approach uses a clusterer like K-means for pixel clustering,
VBCA employs thresholding and some post-processing operations to
label pixels as defective or normal. The experimental results show
that the proposed VBCA algorithm is 12.46% more accurate and
78.85% faster than the classic PCA approach.
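The thresholding step that replaces clustering can be sketched with a toy rule (a hypothetical criterion, not VBCA's actual threshold): pixels whose projected feature deviates from the mean by more than k standard deviations are labeled defective.

```python
# Toy threshold-based labeling of pixel features (hypothetical rule).
from statistics import mean, stdev

def label_pixels(features, k=2.0):
    m, s = mean(features), stdev(features)
    return ["defective" if abs(f - m) > k * s else "normal" for f in features]
```

Unlike K-means, this needs no iterative clustering pass, which is one plausible source of the speed advantage the abstract reports.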
Abstract: A new method of adaptation in a partially integrated learning environment that includes an electronic textbook (ET) and an integrated tutoring system (ITS) is described. The algorithm of adaptation is described in detail. It includes: establishment of interconnections between operations and concepts; estimation of the concept mastering level (for all concepts); estimation of the student's non-mastering level, at the current learning step, of the information on each page of the ET; and creation of a rank-order list of links to the e-manual pages containing information that requires repeated work.
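The final ranking step can be sketched with a hypothetical scoring rule (the paper's estimates are not reproduced here): each page is scored by the summed non-mastery of the concepts it covers, and links are returned in descending order of that score.

```python
# Hypothetical page-ranking step for the adaptation algorithm sketched above.

def rank_pages(pages, non_mastery):
    """pages: {page_id: [concept, ...]}; non_mastery: {concept: 0..1}."""
    score = {p: sum(non_mastery.get(c, 0.0) for c in cs)
             for p, cs in pages.items()}
    return sorted(score, key=score.get, reverse=True)
```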
Abstract: In this paper a simple watermarking method for
color images is proposed. The proposed method is based on
watermark embedding for the histograms of the HSV planes
using visual cryptography watermarking. The method has
been shown to be robust to various image processing
operations, such as filtering, compression and additive noise, and
to various geometrical attacks, such as rotation, scaling, cropping,
flipping, and shearing.
Abstract: Compost manufacturing plants are among the facilities where
wastewater is produced in significantly large amounts. Wastewater
produced in these plants contains high amounts of substrate (organic
load) and is classified as stringent waste, which creates significant
pollution when discharged into the environment without treatment. A
compost production plant in one of Iran's provinces treating
200 tons/day of waste is one of the most important environmentally
polluting operations in this zone. The main objectives of this paper
are to investigate the treatability of compost wastewater in hybrid
anaerobic reactors with an upflow-downflow arrangement, to
determine the kinetic constants, and eventually to obtain an
appropriate mathematical model. After start-up of the hybrid anaerobic
reactor of the compost production plant, the average COD removal
efficiency was 95%.
Abstract: Today, intangible assets in the form of knowledge capital are the most important and most valuable resource for organizations. All employees have knowledge, independently of the kind of jobs they do. Knowledge is thus an asset which influences business operations. The objective of this article is to identify knowledge continuity as an objective of business continuity management. The article has been prepared based on the analysis of secondary sources and the evaluation of primary data by means of a quantitative survey conducted in the Czech Republic. The conclusion of the article is that organizations that apply business continuity management do not focus on the preservation of the knowledge of key employees. Organizations ensure knowledge continuity only intuitively, on a random basis, non-systematically and discontinuously. Failure to ensure knowledge continuity represents a threat of loss of key knowledge for organizations and can also negatively affect business continuity.
Abstract: This paper focuses on operational risk measurement
techniques and on economic capital estimation methods. A data
sample of operational losses provided by an anonymous Central
European bank is analyzed using several approaches. Loss
Distribution Approach and scenario analysis method are considered.
Custom plausible loss events defined in a particular scenario are
merged with the original data sample and their impact on capital
estimates and on the financial institution is evaluated. Two main
questions are assessed: what is the most appropriate statistical
method to measure and model the operational loss data distribution,
and what is the impact of hypothetical plausible events on the
financial institution? The g&h distribution was evaluated to be the most
suitable one for operational risk modeling. The method based on the
combination of historical loss events modeling and scenario analysis
provides reasonable capital estimates and allows for the measurement
of the impact of extreme events on banking operations.
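The g&h family mentioned above is Tukey's g-and-h distribution, whose standard definition can be sketched as follows (the definition is standard; the parameter values below are illustrative, not the bank's fitted ones): a standard normal draw z is mapped to a skewed, heavy-tailed loss.

```python
# Tukey g-and-h transform: X = A + B * ((exp(g*z) - 1) / g) * exp(h * z^2 / 2),
# where g controls skewness and h controls tail heaviness.
import math
import random

def g_and_h(z, g=0.8, h=0.2, A=0.0, B=1.0):
    body = z if g == 0 else (math.exp(g * z) - 1.0) / g
    return A + B * body * math.exp(h * z * z / 2.0)

# sample a heavy-tailed, right-skewed loss distribution (illustrative params)
rng = random.Random(42)
losses = [g_and_h(rng.gauss(0, 1)) for _ in range(1000)]
```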
Abstract: In this paper, a bond graph dynamic model for a valve-controlled
hydraulic cylinder has been developed. A simplified bond
graph model of the inter-actuator interactions in a multi-cylinder
hydraulic system has also been presented. The overall bond graph
model of a valve-controlled hydraulic cylinder was developed by
combining the bond graph sub-models of the pump, spool valve and
the actuator using junction structures. Causality was then assigned
in order to obtain a computational model which could be simulated.
The causal bond graph model of the hydraulic cylinder was verified
by comparing the open loop state responses to those of an ODE
model which had been developed in the literature based on the same
assumptions. The results were found to correlate very well in the
shape of the curves, the magnitudes and the response times,
thus indicating that the developed model represents the hydraulic
dynamics of a valve-controlled cylinder. A simplified model for
inter-actuator interaction was presented by connecting an effort source
with constant pump pressure to the zero-junction from which the cylinders
in a multi-cylinder system are supplied with a constant pressure by
the pump. Simulation of the state responses of the developed model
under different cylinder operating conditions indicated that such
a simple model can be used to predict the inter-actuator interactions.
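A causal bond graph of this kind ultimately yields a set of state equations that can be integrated directly. The sketch below (made-up parameter values, not the paper's model) shows the simplest such state space for one cylinder chamber: pressure builds from net flow, and the piston accelerates under the pressure force against viscous damping.

```python
# Minimal state-space sketch of a valve-controlled cylinder (assumed values):
#   dP/dt = (beta / V) * (Q - A * v)    pressure build-up from net flow
#   dv/dt = (P * A - b * v) / m         piston force balance
# integrated with explicit Euler.

def simulate(steps=20000, dt=1e-4):
    beta, V, A = 1.5e9, 1e-3, 2e-3     # bulk modulus [Pa], volume [m^3], area [m^2]
    m, b, Q = 50.0, 2000.0, 1e-4       # mass [kg], damping [N s/m], flow [m^3/s]
    P, v, x = 0.0, 0.0, 0.0
    for _ in range(steps):
        dP = beta / V * (Q - A * v)
        dv = (P * A - b * v) / m
        P += dP * dt
        v += dv * dt
        x += v * dt                    # piston position
    return P, v, x
```

At steady state the piston speed settles to Q/A and the pressure to b·v/A, which gives a quick sanity check on any such model.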