Abstract: Elliptic curve-based certificateless signature is slowly
gaining attention due to its ability to retain the efficiency of
identity-based signatures in eliminating the need for certificate
management, while not suffering from the inherent private
key escrow problem. Generally, cryptosystems based on elliptic
curves offer equivalent security strength at smaller key sizes
than conventional cryptosystems such as RSA, which
results in faster computations and more efficient use of computing
power, bandwidth, and storage. This paper proposes implementing a
certificateless signature scheme based on bilinear pairing to
structure the framework of IKE authentication. In this paper,
we perform a comparative analysis of the certificateless signature
scheme against the well-known RSA scheme and also present
experimental results on signing and verification
execution times. By generalizing our observations, we discuss the
different trade-offs involved in implementing IKE authentication
using certificateless signatures.
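As a concrete baseline for the kind of signing/verification timing comparison described above, the sketch below uses textbook RSA with classroom-sized parameters (p = 61, q = 53, e = 17); a real IKE deployment would use 2048-bit moduli and proper padding, and the certificateless pairing side is not shown.

```python
import hashlib

# Toy textbook-RSA parameters, for illustration only.
p, q = 61, 53
n = p * q          # modulus n = 3233
e = 17             # public exponent
d = 2753           # private exponent: e*d = 1 (mod phi(n))

def sign(message: bytes) -> int:
    # Hash the message and reduce it into the modulus range.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)              # signature = h^d mod n

def verify(message: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h       # check sig^e mod n == h

sig = sign(b"IKE_SA payload")
```

Timing such sign/verify calls at matched security levels (e.g. with `timeit`) is the shape of the experiment the abstract reports.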
Abstract: For over a decade, the Pulse Coupled Neural Network
(PCNN) based algorithms have been successfully used in image
interpretation applications including image segmentation. There are
several versions of the PCNN based image segmentation methods,
and the segmentation accuracy of all of them is very sensitive to the
values of the network parameters. Most methods treat PCNN
parameters such as the linking coefficient and primary firing threshold as
global parameters and determine them by trial and error. The
automatic determination of appropriate values for the linking coefficient
and the primary firing threshold is a challenging problem that deserves
further research. This paper presents a method for obtaining global as
well as local values for the linking coefficient and the primary firing
threshold for neurons directly from the image statistics. Extensive
simulation results show that the proposed approach achieves
excellent segmentation accuracy comparable to the best accuracy
obtainable by trial-and-error for a variety of images.
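For readers unfamiliar with the network, a minimal simplified PCNN iteration is sketched below; the linking coefficient `beta` and primary firing threshold `theta0` are plain arguments here, whereas the paper derives such values from image statistics, and these update rules are the common simplified model, not necessarily the authors' exact one.

```python
import numpy as np

def pcnn_segment(img, beta=0.2, theta0=1.0, alpha_theta=0.2, v_theta=20.0, iters=10):
    """Minimal simplified PCNN sketch: returns which neurons fired."""
    img = img.astype(float)
    img = img / (img.max() or 1.0)        # feeding input F = normalized image
    theta = np.full(img.shape, theta0)    # dynamic firing threshold
    fired = np.zeros(img.shape, bool)     # neurons that have fired so far
    y = np.zeros(img.shape)               # pulse output of previous step
    kernel = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
    for _ in range(iters):
        # Linking input L: weighted sum of neighbours' previous pulses.
        L = np.zeros(img.shape)
        pad = np.pad(y, 1)
        for di in range(3):
            for dj in range(3):
                L += kernel[di, dj] * pad[di:di + img.shape[0], dj:dj + img.shape[1]]
        U = img * (1 + beta * L)          # internal activity (modulated feeding)
        y = (U > theta).astype(float)     # pulse where activity exceeds threshold
        fired |= y.astype(bool)
        theta = np.exp(-alpha_theta) * theta + v_theta * y  # decay + refractory jump
    return fired
```

Bright, mutually linked regions fire in early iterations, which is what makes the firing maps usable as segmentations.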
Abstract: Since the conception of JML, many tools, applications and implementations have been produced. In this context, users or developers who want to use JML can feel surrounded by this multitude of tools and applications. Looking for a common infrastructure and an independent language to provide a bridge between these tools and JML, we developed an approach to embedding contracts in XML for Java: XJML. This approach offers the ability to separate preconditions, postconditions and class invariants using JML and XML, so we built a front-end that can process Runtime Assertion Checking, Extended Static Checking and Full Static Program Verification. Moreover, the capabilities of this front-end can be easily extended and implemented thanks to XML. We believe that XJML is an easy starting point for building a graphical user interface, thereby delivering a friendly, IDE-independent environment to the developer community that wants to work with JML.
Abstract: In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into lowpass and highpass sequences. In the next step, either two DFTs are performed on both bands to compute the full-band DFT, or one DFT on one of the two bands to compute an approximate DFT. A combination network with correction factors is then applied after the DFTs. Another approach was proposed in 1997 [2], using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a manner similar to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters. The only difference is a constant factor in the combination network. This result is important because it completes the analysis of the W-DFT: all the results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band with the most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
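The two-band decomposition and combination network can be checked numerically: with sum/difference (Hadamard/Haar-type) subbands s and d, the full-band DFT is recovered as X[k] = ((1 + W^k) S[k] + (1 - W^k) D[k]) / 2, where S and D are the half-length DFTs and W = exp(-2*pi*j/N). A small numpy sketch (the constant-factor difference between Haar and Hadamard normalizations is absorbed here):

```python
import numpy as np

def subband_dft(x):
    """N-point DFT computed from two half-length subband DFTs."""
    x = np.asarray(x, dtype=complex)
    N = len(x)                          # N must be even
    s = x[0::2] + x[1::2]               # lowpass (sum) subband
    d = x[0::2] - x[1::2]               # highpass (difference) subband
    S = np.fft.fft(s)                   # N/2-point DFT of each band
    D = np.fft.fft(d)
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)     # twiddle (correction) factors
    # Combination network: X[k] = ((1+W^k) S[k] + (1-W^k) D[k]) / 2
    return ((1 + W) * S[k % (N // 2)] + (1 - W) * D[k % (N // 2)]) / 2
```

Dropping the high band (D = 0) gives exactly the low-band approximate DFT discussed above.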
Abstract: One of the main research directions in the CAD/CAM
machining area is the reduction of machining time.
Feedrate scheduling is one of the advanced techniques that
allows keeping the uncut chip area constant and, consequently,
keeping the main cutting force constant. There are two main ways to
optimize the feedrate. The first consists in monitoring the cutting
force, which requires complex equipment for force measurement, and
then setting the feedrate according to the cutting force variation. The
second way is to optimize the feedrate by keeping the
material removal rate constant with respect to the cutting conditions.
This paper proposes a new approach using an extended
database that replaces the system model.
The feedrate schedule is determined based on the identification
of the reconfigurable machine tool, with the feed value determined
according to the uncut chip section area, the contact length between tool
and blank, and the geometrical roughness.
The first stage consists in monitoring the blank and tool to
determine their actual profiles. The next stage is the determination
of the programmed tool path that allows obtaining the piece target
profile.
The graphic representation environment models the tool and blank
regions; the tool model is then positioned relative to the
blank model according to the programmed tool path. For each of
these positions the geometrical roughness value, the uncut chip area
and the tool-blank contact length are calculated. Each of
these parameters is compared with its admissible value, and
according to the result the feed value is established.
This approach has the following advantages:
the cutting force can be predicted for complex cutting processes;
the real cutting profile, which deviates
from the theoretical profile, is taken into account; the blank-tool
contact length can be limited; and the programmed tool
path can be corrected so that the target profile is obtained.
Applying this method yields data sets that allow
feedrate scheduling such that the uncut chip area is constant and, as a
result, the cutting force is constant, which allows the machine tool
to be used more efficiently and the machining
time to be reduced.
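The per-position feed selection described above can be caricatured as follows; the proportional-scaling assumption and all parameter names are illustrative, not the paper's model:

```python
def select_feed(feed_max, chip_area, contact_len, roughness,
                chip_area_adm, contact_len_adm, roughness_adm):
    """Cap the feed so the computed uncut chip area, tool-blank contact
    length and geometrical roughness stay within admissible values.

    Assumes (for illustration) each parameter grows roughly in
    proportion to the feed, so the admissible feed scales by the
    tightest admissible-to-computed ratio.
    """
    ratios = [chip_area_adm / chip_area,
              contact_len_adm / contact_len,
              roughness_adm / roughness]
    scale = min(1.0, *ratios)      # never exceed the programmed feed
    return feed_max * scale
```

Evaluating this at every programmed tool position yields the per-position feed data set the abstract describes.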
Abstract: The need to implement intelligent highways has become
much more pressing with the growth of vehicle production as well as vehicle intelligence. Controlling intelligent vehicles in order to
reduce human error and ease congestion cannot be accomplished by human resources alone. The present article is an attempt
to introduce an intelligent control system based on a single central computer. In this project, the central computer, without utilizing the Global
Positioning System (GPS), is capable of tracking all vehicles, managing and controlling crises, guiding traffic and recording traffic
violations along the highway. With the help of RFID technology, vehicles
are connected to computerized systems, intelligent light poles and
other available hardware along the way. By means of WiMAX
communication technology, all components of the system are
virtually connected together through the local and global networks
devised in them, and the energy of the network is provided by the
solar cells installed on the intelligent light poles.
Abstract: This paper proposes a new technique for improving
the efficiency of software testing, based on the conventional
goal of reducing the test cases that have to be executed for any given
software. The approach exploits the advantage of regression testing,
where fewer test cases lessen the time consumed by testing
as a whole. The technique also offers a means to perform test case
generation automatically. Compared to one of the techniques in the
literature, where the tester has no option but to perform test case
generation manually, the proposed technique provides a better
option. For test case reduction, the technique uses simple
algebraic conditions to assign fixed values to variables (maximum,
minimum and constant variables). By doing this, the variable values
are limited to a definite range, resulting in fewer
possible test cases to process. The technique can also be applied to
program loops and arrays.
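The reduction idea, pinning each variable to its minimum, maximum and constant values rather than exercising its whole range, can be sketched as follows (names and structure are illustrative, not the paper's algorithm):

```python
import itertools

def reduced_test_cases(variables):
    """variables: dict name -> (minimum, maximum, constant).

    Each variable's domain is reduced to its distinct boundary and
    constant values; the reduced suite is their Cartesian product.
    """
    domains = []
    for lo, hi, const in variables.values():
        domains.append(sorted({lo, hi, const}))   # at most 3 values each
    return list(itertools.product(*domains))

cases = reduced_test_cases({"x": (0, 100, 50), "y": (-10, 10, 0)})
# 3 values per variable: 9 cases instead of 101 * 21 = 2121 combinations
```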
Abstract: This paper deals with the current space-vector
decomposition in three-phase, three-wire systems on the basis of
some case studies. We propose four components of the current space-vector
in terms of the DC and AC components of the instantaneous
active and reactive powers. The notion of a supplementary useless
current vector is also introduced. The analysis shows that the current
decomposition which respects the definition of the instantaneous
apparent power vector is useful for compensation reasons only if the
supply voltages are sinusoidal. A modified definition of the
components of the current is proposed for the operation under
nonsinusoidal voltage conditions.
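As an illustration of splitting the current space-vector by the DC and AC parts of the instantaneous powers p and q, the sketch below uses the standard instantaneous-power construction on synthetic waveforms; the sign conventions and the paper's exact component definitions may differ:

```python
import numpy as np

t = np.linspace(0.0, 0.04, 2000, endpoint=False)   # two 50 Hz periods
w = 2 * np.pi * 50
v = 230 * np.exp(1j * w * t)                       # sinusoidal voltage vector
i = 10 * np.exp(1j * (w * t - 0.5)) + 2 * np.exp(-1j * 5 * w * t)  # distorted current

s = v * np.conj(i)                 # complex instantaneous power
p = s.real                         # instantaneous active power
q = s.imag                         # instantaneous reactive power
p_dc, q_dc = p.mean(), q.mean()    # DC parts (averages over whole periods)
p_ac, q_ac = p - p_dc, q - q_dc    # AC (oscillating) parts

mag2 = np.abs(v) ** 2
# Four current components, one per power term; together they rebuild i.
i_p_dc = p_dc * v / mag2           # useful active current
i_p_ac = p_ac * v / mag2
i_q_dc = -1j * q_dc * v / mag2
i_q_ac = -1j * q_ac * v / mag2
```

Compensating everything except `i_p_dc` is the usual goal, which, as the abstract notes, is only straightforward when v is sinusoidal.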
Abstract: Subgrade moisture content varies with environmental and soil conditions and has a significant influence on pavement performance. Therefore, it is important to establish realistic estimates of expected subgrade moisture contents in order to properly account for the effects of this variable on predicted pavement performance during the design stage. The initial boundary soil suction profile for a given pavement is a critical factor in determining expected moisture variations in the subgrade for given pavement, climatic and soil conditions. Several numerical models have been developed for predicting water and solute transport in saturated and unsaturated subgrade soils. Soil hydraulic properties are required by these numerical models to quantitatively describe water and chemical transport processes in soils. The required hydraulic properties are hydraulic conductivity, water diffusivity, and specific water capacity. The objective of this paper is to determine isothermal moisture profiles in a soil fill and predict soil moisture movement above the ground water table using a simple one-dimensional finite difference model.
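A minimal version of such a model can be sketched with an explicit finite-difference scheme; a constant diffusivity D is assumed here, whereas the paper's model uses the moisture-dependent hydraulic properties listed above:

```python
def moisture_profile(theta, D, dz, dt, steps, theta_bottom):
    """1-D explicit finite-difference moisture diffusion in a soil column.

    theta: initial volumetric moisture profile (index 0 = surface,
    last index = water table). Assumes constant diffusivity D.
    """
    theta = list(theta)
    n = len(theta)
    r = D * dt / dz**2            # must satisfy r <= 0.5 for stability
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        new = theta[:]
        new[-1] = theta_bottom    # saturated boundary at the water table
        for j in range(1, n - 1):
            # theta_t = D * theta_zz  (diffusion form of moisture flow)
            new[j] = theta[j] + r * (theta[j + 1] - 2 * theta[j] + theta[j - 1])
        new[0] = new[1]           # zero-flux surface (no evaporation)
        theta = new
    return theta

# Dry 1 m fill (theta = 0.10) wetting up from a water table at 0.40.
profile = moisture_profile([0.10] * 20 + [0.40], D=1e-7, dz=0.05,
                           dt=3600, steps=500, theta_bottom=0.40)
```

The explicit scheme is the simplest choice; an implicit scheme would remove the stability restriction at the cost of solving a tridiagonal system per step.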
Abstract: This work presents a neural network model for the
clustering analysis of data based on Self Organizing Maps (SOM).
The model evolves during the training stage towards a hierarchical
structure according to the input requirements. The hierarchical structure
represents a specialization mechanism that provides refinements of the
classification process. The structure behaves like a single map with
different resolutions depending on the region to analyze. The benefits
and performance of the algorithm are discussed in application to the
Iris dataset, a classical example for pattern recognition.
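A bare-bones, fixed-grid SOM (without the paper's evolving hierarchy) can be sketched as follows; grid size, decay schedules and the synthetic data are illustrative choices:

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a fixed 2-D SOM with Gaussian neighbourhood and linear decay."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=grid + (data.shape[1],))
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: node whose weight vector is nearest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), grid)
        # Decaying learning rate and neighbourhood radius.
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 0.5
        h = np.exp(-np.sum((coords - np.array(bmu))**2, axis=-1) / (2 * sigma**2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(d.argmin(), weights.shape[:2])
```

The hierarchical variant would grow child maps under nodes whose quantization error stays high, refining the classification locally.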
Abstract: The purpose of this study is to demonstrate how the characteristics of a technology and the process required for its development affect technology transfer from public organisations to industry at the technology level. In addition, taking advantage of this level of analysis and a novel means of measuring technology convergence, we examine the characteristics of converging technologies, as compared to non-converging technologies, in the technology transfer process. In sum, our study finds that a technology from the public sector is more likely to be transferred when its readiness level is closer to the generation of profit, when it is early in its life cycle, and when its economic value is high. Our findings also show that converging technologies are less likely to be transferred.
Abstract: The aim of this study was to investigate the effects of
supplementing the diluent of roosters' semen with different levels of
olive oil on motility, viability, morphology and acrosome integrity of
chicken spermatozoa after in vitro storage for up to 72 h. Semen was
collected from 60 White Layer males (62 wk of age) kept in
separated floor pens and randomly divided into six treatment groups
(10 males in each group). Experimental groups were as follows: T1:
fresh semen; T2: semen extended 1:1 with Al-Daraji 2 diluent
(AD2D) alone; T3-T6: semen samples extended 1:1 with AD2D
supplemented with 2 ml, 4 ml, 6 ml or 8 ml of olive oil per 100 ml of
diluent, respectively. Semen samples were then stored at 5 °C for 24
h, 48 h or 72 h. There was a clear influence of diluent
supplementation with olive oil on the spermatozoa motility profile;
olive oil groups (T3, T4, T5 and T6) recorded the highest scores of
mass activity and individual motility during all storage periods
compared to T1 and T2 groups. In addition, the inclusion of olive oil
into semen diluent (T3, T4, T5 and T6) gave significantly higher
percentages of viable spermatozoa, morphologically normal
spermatozoa and intact acrosomes irrespective of storage period.
These results clearly show that supplementing the diluent of
roosters' semen with olive oil can improve semen quality when
semen samples are stored in vitro at 5 °C for up to 72 h.
Abstract: The aim of this research is to evaluate surface
roughness and develop a multiple regression model for surface roughness as a function of cutting parameters during the turning of
flame hardened medium carbon steel with TiN-Al2O3-TiCN coated inserts. An experimental plan of work and signal-to-noise ratio (S/N)
were used to relate the influence of turning parameters to the
workpiece surface finish utilizing Taguchi methodology. The effects
of turning parameters were studied by using the analysis of variance (ANOVA) method. Evaluated parameters were feed, cutting speed,
and depth of cut. It was found that the most significant interaction among the considered turning parameters was between depth of cut and feed. The average surface roughness (Ra) obtained with TiN-Al2O3-
TiCN coated inserts was about 2.44 μm and the minimum value was 0.74 μm. In addition, the regression model was able to predict surface roughness values that agreed with the experimental values within
reasonable limits.
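The smaller-the-better S/N ratio that Taguchi analysis applies to surface roughness can be computed as S/N = -10 log10(mean(y^2)); a short sketch with illustrative Ra values:

```python
import math

def sn_smaller_the_better(measurements):
    """Taguchi smaller-the-better signal-to-noise ratio (dB)."""
    n = len(measurements)
    return -10.0 * math.log10(sum(y * y for y in measurements) / n)

# Lower, more consistent roughness values give a higher S/N ratio.
sn_rough = sn_smaller_the_better([2.44, 2.10, 2.30])   # Ra values in um
sn_fine = sn_smaller_the_better([0.74, 0.80, 0.77])
```

Computing this ratio for each parameter combination, then averaging by factor level, is how the most influential turning parameters and interactions are ranked before ANOVA.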
Abstract: This paper presents the design of a 24 W SEPIC converter
and its control using a microprocessor. The SEPIC converter has the
advantages of a wide input range and miniaturization owing to the low
stress on its elements. A further advantage is that the input and
output are isolated in the MOSFET-off state. This paper presents PID
control based on the SEPIC converter transfer function using a DSP,
and a protective circuit that guards the fuel cell against over-current
and reverse voltage by exploiting the characteristics of the SEPIC
converter. These designs are then validated through experiments.
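A discrete PID loop of the kind run on the DSP can be sketched as follows; the gains, sample time, duty limits and the first-order plant standing in for the SEPIC output stage are placeholders, not the converter model identified in the paper:

```python
class PID:
    """Discrete PID controller with clamped duty-cycle output."""
    def __init__(self, kp, ki, kd, dt, out_min=0.05, out_max=0.95):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(self.out_max, max(self.out_min, u))  # clamp duty cycle

# Toy first-order plant: tau = 5 ms, 24 V input, duty sets the output.
pid = PID(kp=0.05, ki=2.0, kd=0.0, dt=1e-4)
v = 0.0
for _ in range(20000):                          # simulate 2 s
    duty = pid.update(12.0, v)                  # regulate toward 12 V
    v += (duty * 24.0 - v) * 1e-4 / 5e-3
```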
Abstract: Scheduling for the flexible job shop is very important
in both fields of production management and combinatorial
optimization. However, it is quite difficult to achieve an optimal solution
to this problem with traditional optimization approaches owing to the
high computational complexity. Combining several
optimization criteria induces additional complexity and new
problems. In this paper, a Pareto approach to solving the multi-objective
flexible job shop scheduling problem is proposed. The
objectives considered are to minimize the overall completion time
(makespan) and the total weighted tardiness (TWT). An effective
simulated annealing algorithm based on the proposed approach is
presented to solve the multi-objective flexible job shop scheduling
problem. An external memory of non-dominated solutions is
considered to save and update the non-dominated solutions during
the solution process. Numerical examples are used to evaluate and
study the performance of the proposed algorithm. The proposed
algorithm can be applied easily in real factory conditions and for
large size problems. It should thus be useful to both practitioners and
researchers.
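The external memory of non-dominated solutions can be sketched as a simple Pareto archive over (makespan, TWT) pairs; the objective values are made up for illustration:

```python
def dominates(a, b):
    """a dominates b: no worse in both objectives, better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def update_archive(archive, candidate):
    """Insert a candidate schedule's objectives into the Pareto archive."""
    if any(dominates(s, candidate) for s in archive):
        return archive                          # dominated: reject
    # Remove stored solutions the candidate now dominates.
    return [s for s in archive if not dominates(candidate, s)] + [candidate]

archive = []
for sol in [(50, 30), (45, 40), (60, 20), (44, 25), (70, 50)]:
    archive = update_archive(archive, sol)
# archive now holds only mutually non-dominated (makespan, TWT) pairs
```

In the simulated annealing loop, every accepted neighbour would pass through such an update, so the archive at termination approximates the Pareto front.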
Abstract: In this paper, an image adaptive, invisible digital
watermarking algorithm with Orthogonal Polynomials based
Transformation (OPT) is proposed, for copyright protection of digital
images. The proposed algorithm utilizes a visual model to determine
the watermarking strength necessary to invisibly embed the
watermark in the mid frequency AC coefficients of the cover image,
chosen with a secret key. The visual model is designed to generate a
Just Noticeable Distortion (JND) mask by analyzing low-level
image characteristics such as textures, edges and luminance of the
cover image in the orthogonal polynomials based transformation
domain. Since the secret key is required for both embedding and
extraction of watermark, it is not possible for an unauthorized user to
extract the embedded watermark. The proposed scheme is robust to
common image processing distortions like filtering, JPEG
compression and additive noise. Experimental results show that the
quality of OPT domain watermarked images is better than its DCT
counterpart.
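The key-driven mid-frequency embedding can be illustrated with a DCT as a stand-in for the OPT; the JND mask is reduced to a fixed strength `alpha`, the mid-frequency position list is assumed, and extraction is non-blind (original available) for brevity:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (stand-in for the OPT basis)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

# Assumed mid-frequency AC positions inside an 8x8 block.
MID = [(1, 2), (2, 1), (2, 2), (0, 3), (3, 0), (1, 3), (3, 1)]

def embed(block, bits, key, alpha=4.0):
    C = dct_matrix()
    coeffs = C @ block @ C.T                     # forward 2-D transform
    rng = np.random.default_rng(key)             # secret key picks positions
    pos = rng.permutation(len(MID))[:len(bits)]
    for b, p in zip(bits, pos):
        i, j = MID[p]
        coeffs[i, j] += alpha if b else -alpha   # additive +/- alpha embedding
    return C.T @ coeffs @ C                      # inverse transform

def extract(marked, original, key, nbits):
    C = dct_matrix()
    diff = C @ (marked - original) @ C.T         # recover the +/- alpha marks
    rng = np.random.default_rng(key)
    pos = rng.permutation(len(MID))[:nbits]
    return [int(diff[MID[p]] > 0) for p in pos]
```

Without the key, an attacker does not know which coefficients carry bits, which is the access-control property the abstract claims.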
Abstract: In this paper, the RSA encryption algorithm and its hardware
implementation in Xilinx's Virtex Field Programmable Gate
Arrays (FPGAs) are analyzed. The issues of scalability, flexible performance,
and silicon efficiency for the hardware acceleration of
public key cryptosystems are explored in the present work.
Using techniques based on interleaved math for exponentiation,
the proposed RSA calculation architecture is compared to existing
FPGA-based solutions for speed, FPGA utilization, and scalability.
The paper covers the RSA encryption algorithm, interleaved multiplication,
the Miller-Rabin algorithm for primality testing, the extended Euclidean
algorithm, basic FPGA technology, and the implementation details of
the proposed RSA calculation architecture. The performance of several
alternative hardware architectures is discussed and compared. Finally,
conclusions are drawn, highlighting the advantages of a fully flexible
and parameterized design.
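The Miller-Rabin primality test mentioned above, used when generating the RSA primes, can be sketched in software in a few lines (the FPGA version would pipeline the modular exponentiations):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)           # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a witnesses that n is composite
    return True                 # probably prime (error < 4**-rounds)
```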
Abstract: Debate over the use of particular methods in interlanguage pragmatics has increased recently. Researchers have argued the advantages and disadvantages of each method, whether natural or elicited. Findings of different studies indicate that the use of a single method may not provide enough data to answer all of a study's questions. The current study investigated the validity of using a multimethod approach in interlanguage pragmatics to understand the development of requests in Arabic as a second language (Arabic L2). To this end, the study adopted two methods belonging to two types of data sources: institutional discourse (natural data) and role play (elicited data). Participants were 117 learners of Arabic L2 at the university level, representing four levels (beginner, low-intermediate, high-intermediate, and advanced). Results showed that using two or more methods in interlanguage pragmatics affects the size and nature of the data.
Abstract: This study numerically investigates the effects of electrohydrodynamics (EHD) on flow patterns and heat transfer enhancement within a cavity on the lower wall of a channel. In this simulation, the effects of using a ground wire and a ground plate on the flow patterns are compared. Moreover, the positions of the electrode wire with respect to the ground are tested over the range of angles θ = 0-180°. The high electrical voltage applied to the air is 20 kV. The bulk mean velocity and temperature of the inlet air are controlled at 0.1 m/s and 60 °C, respectively. The results show that when the electric field is applied, swirling flow appears in the channel. In addition, the swirling flow patterns in the main flow with the ground plate spread more widely than those with the ground wire. Moreover, the direction of the swirling flow also affects the flow pattern and heat transfer in the cavity. As a result, the ground wire gives a higher maximum temperature and heat transfer than the ground plate. Furthermore, when the angle is θ = 60°, a high shear flow effect is obtained, indicating strong swirling flow and effective heat transfer enhancement.
Abstract: Image mosaicing is a technique that permits enlarging the field of view of a camera. For instance, it is employed to produce panoramas with common cameras or, in scientific applications, to obtain the image of a whole culture in microscopic imaging. Usually, a mosaic of cell cultures is obtained using automated microscopes. However, this is often performed in batch, through CPU-intensive minimization algorithms. In addition, live stem cells are studied in phase contrast, showing a low contrast that cannot be improved further. We present a method to estimate the flat field from live stem cell images even in the case of 100% confluence, permitting accurate mosaics to be built online using high-performance algorithms.
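One generic way to estimate a flat field robustly, even under high confluence, is a per-pixel median across many tiles (assuming cells drift between frames, so the illumination pattern dominates the statistic); this is an illustrative sketch, not necessarily the authors' method:

```python
import numpy as np

def estimate_flat_field(tiles):
    """Per-pixel median across tiles approximates the shading pattern."""
    flat = np.median(np.stack(tiles), axis=0)
    return flat / flat.mean()          # normalize to unit mean gain

def correct(tile, flat, eps=1e-6):
    """Divide out the shading before stitching tiles into a mosaic."""
    return tile / (flat + eps)
```

After correction, adjacent tiles share a uniform background, so seams in the stitched mosaic become far less visible.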