Abstract: Formaldehyde is an illegal chemical substance used for food preservation in fish and vegetables. It can promote carcinogenesis. Superoxide dismutases are important antioxidative enzymes that catalyze the dismutation of the superoxide anion into oxygen and hydrogen peroxide. The resulting level of
oxidative stress in formaldehyde-treated lymphocytes was
investigated. The formaldehyde concentrations of 0, 20, 40, 60, 80
and 120μmol/L were treated in human lymphocytes for 12 hours.
After 12 treated hours, the superoxide dismutase activity change was
measured in formaldehyde-treated lymphocytes. The results showed
that the formaldehyde concentrations of 60, 80 and 120μmol/L
significantly decreased superoxide dismutase activities in
lymphocytes (P < 0.05). The change in superoxide dismutase activity in formaldehyde-treated lymphocytes may serve as a biomarker for detecting cellular injury, such as DNA damage, due to formaldehyde exposure.
Abstract: Many attempts have been made to strengthen Feistel-based block ciphers. Among the successful proposals is the key-dependent S-box, which was implemented in some high-profile ciphers. In this paper a key-dependent permutation box is proposed and implemented on DES as a case study. The modified DES, MDES, was tested against the Diehard tests, an avalanche test, and a performance test. The results showed that, in general, MDES is more resistant to attacks than DES, with negligible overhead. Therefore, it is believed that the proposed key-dependent permutation should be considered a valuable primitive that can help strengthen the security of the Substitution-Permutation Network, which is a core design in many Feistel-based block ciphers.
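The abstract does not spell out the MDES construction, but the general idea of a key-dependent permutation box can be sketched as follows: derive a permutation of bit positions from the secret key, for example with a key-seeded Fisher-Yates shuffle. The hash-based keystream, box size, and helper names below are illustrative assumptions, not the paper's design.

```python
import hashlib
import struct

def key_dependent_pbox(key: bytes, size: int = 32):
    """Illustrative key-dependent P-box: a Fisher-Yates shuffle driven
    by a SHA-256 keystream (an assumption, not the MDES construction)."""
    perm = list(range(size))
    pool, counter = b"", 0

    def next_word():
        nonlocal pool, counter
        while len(pool) < 4:
            pool += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        word, pool = struct.unpack(">I", pool[:4])[0], pool[4:]
        return word

    for i in range(size - 1, 0, -1):
        j = next_word() % (i + 1)     # ignoring modulo bias for brevity
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def apply_pbox(bits, perm):
    """Permute a bit vector: output position i takes input bit perm[i]."""
    return [bits[p] for p in perm]
```

Both ends of the cipher regenerate the same box from the shared key, while an attacker without the key cannot precompute the permutation, which is what makes the primitive key-dependent rather than fixed as in standard DES.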
Abstract: This paper proposes an improved approach based on conventional particle swarm optimization (PSO) for solving an economic dispatch (ED) problem considering generator constraints. The mutation operators of differential evolution (DE) are used to improve the diversity exploration of PSO; the resulting method is called particle swarm optimization with mutation operators (PSOM). The mutation operators are activated when the velocity values of PSO are nearly zero or the particles violate the boundaries. Four scenarios of mutation operators are implemented for PSOM. In the simulation results, all scenarios of PSOM outperform PSO and other existing approaches reported in the literature.
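The mechanism described above can be sketched on a toy objective. The four mutation scenarios and parameter values of the paper are not given in the abstract, so the stall/boundary trigger and the DE/rand/1 operator below are assumptions.

```python
import random

def sphere(x):                      # toy objective: global minimum 0 at origin
    return sum(v * v for v in x)

def psom(f=sphere, dim=2, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """PSO with DE-style mutation reinjected when a particle stalls
    (near-zero velocity) or leaves the feasible box."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                       # personal bests
    g = min(P, key=f)[:]                        # global best
    hist = []
    w, c1, c2, F, eps = 0.7, 1.5, 1.5, 0.5, 1e-3
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            stalled = all(abs(v) < eps for v in V[i])
            out = any(not (lo <= x <= hi) for x in X[i])
            if stalled or out:
                # DE/rand/1 mutation restores diversity: x = r1 + F*(r2 - r3)
                r1, r2, r3 = rng.sample(range(n), 3)
                X[i] = [min(hi, max(lo, P[r1][d] + F * (P[r2][d] - P[r3][d])))
                        for d in range(dim)]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
        hist.append(f(g))
    return g, hist
```

The global best is monotone by construction, so the recorded history never worsens; the DE step only affects how quickly diversity is recovered.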
Abstract: Phylogenetic analysis using the most conservative portions of the 18S rRNA gene revealed the phylogenetic relationship between the two populations, where the DNA divergence showed nucleotide diversity values of -0.00838 for the Tanjung Dawai, Kedah population and -0.00708 for the Cherating, Pahang population, respectively. The net nucleotide divergence between populations (Da) was -0.0073, indicating low polymorphism among the populations studied. The total number of mutations in the Tanjung Dawai, Kedah samples was higher than in the Cherating, Pahang samples (73 and 59, respectively), while 8 mutations were shared across the populations, revealing evolutionary change in the genome of Malaysian T. gigas. The tree topology of both populations, inferred using the Neighbour-joining method by comparing 1791 bp of partial 18S rRNA sequence, revealed that the T. gigas haplotypes clustered into seven clades, suggesting that the populations are genetically diverse yet derived from a common ancestor.
Abstract: In cryptography, confusion and diffusion are very important for achieving the confidentiality and privacy of messages in block ciphers and stream ciphers. There are two types of network that provide the confusion and diffusion properties of a message in block ciphers: the Substitution-Permutation network (S-P network) and the Feistel network. NLFS (Non-Linear Feedback Stream cipher) is a fast and secure stream cipher for software applications. NLFS has two modes: a basic (synchronous) mode and a self-synchronous mode. Real random numbers are non-deterministic. The R-box (random box) is based on dynamic properties and performs a stochastic transformation of data that can be used to effectively protect information from intentional destructive impacts. In this paper, a new implementation of such a stochastic transformation is proposed.
Abstract: Today, higher education in a global scope is subordinated to greater institutional controls through the policies of the Quality of Education. These include processes of over-evaluation of all academic activities: student and professor performance, educational logistics, managerial standards for the administration of institutions of higher education, as well as the establishment of imaginaries of excellence and prestige as the foundations on which universities of the twenty-first century will focus their present and future goals and interests. At the same time, however, higher education systems worldwide are facing a profound crisis of sense and meaning and are undergoing enormous mutations in their identity. Based on a qualitative research approach, this paper shows the social configurations that scholars at universities in Mexico build around the discourse of the Quality of Education, and how these policies put at risk the social recognition of these individuals.
Abstract: In this paper, we generate the wreath product M_{11} wr M_{12} using only two permutations. We also show the structure of some groups containing the wreath product M_{11} wr M_{12}. The structure of the groups found is determined in terms of the wreath product (M_{11} wr M_{12}) wr C_k. Some related cases are also included. Further, we show that S_{132k+1} and A_{132k+1} can be generated using the wreath product (M_{11} wr M_{12}) wr C_k together with a transposition in S_{132k+1} and an element of order 3 in A_{132k+1}, respectively. We also show that S_{132k+1} and A_{132k+1} can be generated using the wreath product M_{11} wr M_{12} and an element of order k+1.
Abstract: A hardware-efficient, multi-mode, reconfigurable architecture of an interleaver/de-interleaver for multiple standards, such as DVB, WiMAX and WLAN, is presented. The interleavers
consume a large part of silicon area when implemented by using
conventional methods as they use memories to store permutation
patterns. In addition, different types of interleavers in different
standards cannot share the hardware due to different construction
methodologies. The novelty of the work presented in this paper is
threefold: 1) Mapping of vital types of interleavers including
convolutional interleaver onto a single architecture with flexibility
to change interleaver size; 2) Hardware complexity for channel
interleaving in WiMAX is reduced by using 2-D realization of the
interleaver functions; and 3) Silicon cost overheads reduced by
avoiding the use of small memories. The proposed architecture consumes 0.18 mm² of silicon area in a 0.12 μm process and can operate at a frequency of 140 MHz. The reduced complexity helps
in minimizing the memory utilization, and at the same time
provides strong support to on-the-fly computation of permutation
patterns.
Abstract: Stable nonzero populations without the random deaths caused by the Verhulst factor (Verhulst-free) are a rarity. The majority either grow without bounds or die out from excessive harmful mutations.
To delay the accumulation of bad genes or diseases, a new
environmental parameter Γ is introduced in the simulation. Current
results demonstrate that stability may be achieved by setting Γ = 0.1.
These steady states approach a maximum size that scales inversely
with reproduction age.
Abstract: The direct implementation of the interleaver functions in WiMAX is not hardware-efficient due to the presence of complex functions. The conventional method, i.e., using memories to store the permutation tables, is also silicon-consuming. This work presents a 2-D transformation of the WiMAX channel interleaver functions which reduces the overall hardware complexity of computing the interleaver addresses on the fly. A fully reconfigurable architecture for address generation in the WiMAX channel interleaver is presented, which consumes 1.1 k-gates in total. It can be configured for any block size and any modulation scheme in WiMAX. The presented architecture can run at a frequency of 200 MHz, thus fully supporting the high bandwidth requirements of WiMAX.
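For reference, a software model of the two-step OFDM channel-interleaver permutation used in the 802.11a/WiMAX family (a block interleaver with d = 16 columns) can be written directly from the standard formulas. This is a behavioural sketch for checking addresses, not the paper's 2-D hardware realization.

```python
def interleave_indices(n_cbps, n_bpsc):
    """Bit-interleaver index map: n_cbps coded bits per symbol,
    n_bpsc coded bits per subcarrier (1/2/4/6 for BPSK..64-QAM)."""
    d = 16                      # number of columns in the block interleaver
    s = max(n_bpsc // 2, 1)
    out = []
    for k in range(n_cbps):
        # first permutation: adjacent coded bits map to nonadjacent subcarriers
        i = (n_cbps // d) * (k % d) + k // d
        # second permutation: rotate within groups of s so adjacent bits
        # alternate between more and less significant constellation bits
        j = s * (i // s) + (i + n_cbps - (d * i) // n_cbps) % s
        out.append(j)
    return out
```

Because both steps are index arithmetic on k, an address generator can evaluate them on the fly instead of storing the full table, which is the motivation for the architecture described above.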
Abstract: This paper aims to develop an algorithm of finite
capacity material requirement planning (FCMRP) system for a multistage
assembly flow shop. The developed FCMRP system has two
main stages. The first stage is to allocate operations to the first and
second priority work centers and also determine the sequence of the
operations on each work center. The second stage is to determine the
optimal start time of each operation by using a linear programming
model. Real data from a factory is used to analyze and evaluate the
effectiveness of the proposed FCMRP system and also to guarantee a
practical solution to the user. There are five performance measures,
namely, the total tardiness, the number of tardy orders, the total
earliness, the number of early orders, and the average flow-time. The
proposed FCMRP system offers an adjustable solution, which is a compromise among the conflicting performance measures. The user can adjust the weight of each performance measure to obtain the desired performance. The results show that the combination of FCMRP NP3 and EDD outperforms the other combinations in terms of the overall performance index. The calculation time for the
proposed FCMRP system is about 10 minutes which is practical for
the planners of the factory.
Abstract: Flow-shop scheduling problem (FSP) deals with the
scheduling of a set of jobs that visit a set of machines in the same
order. The FSP is NP-hard, so no efficient algorithm is known for solving it to optimality. To meet the requirements on time and to minimize the make-span of large permutation flow-shop scheduling problems in which there are sequence-dependent setup times on each machine, this paper develops a hybrid genetic algorithm (HGA). The proposed HGA applies a modified approach to generate the initial population of chromosomes and uses an improved heuristic, called the iterated swap procedure, to improve the initial solutions. Three genetic operators are also used to produce good new offspring. The results are
compared to some recently developed heuristics and computational
experimental results show that the proposed HGA performs very
competitively with respect to accuracy and efficiency of solution.
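The abstract names an "iterated swap procedure" without giving its details; a minimal stand-in, assuming a plain pairwise-swap local search over a permutation flow shop with sequence-dependent setup times, looks like this:

```python
import itertools

def makespan(seq, proc, setup):
    """Makespan of a permutation flow shop with sequence-dependent setups.
    proc[j][m]: processing time of job j on machine m;
    setup[m][a][b]: setup time on machine m when job b follows job a."""
    n_m = len(proc[0])
    done = [0.0] * n_m          # completion time of the last job per machine
    prev = None
    for j in seq:
        for m in range(n_m):
            s = setup[m][prev][j] if prev is not None else 0.0
            ready = max(done[m] + s, done[m - 1] if m > 0 else 0.0)
            done[m] = ready + proc[j][m]
        prev = j
    return done[-1]

def iterated_swap(seq, proc, setup):
    """Repeatedly apply improving pairwise swaps until no swap helps."""
    best, best_val = list(seq), makespan(seq, proc, setup)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(range(len(best)), 2):
            cand = best[:]
            cand[a], cand[b] = cand[b], cand[a]
            v = makespan(cand, proc, setup)
            if v < best_val:
                best, best_val, improved = cand, v, True
    return best, best_val
```

In an HGA such a procedure is typically applied to the initial chromosomes (and sometimes to offspring), trading a little evaluation time for a much better starting population.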
Abstract: An approach and its implementation in a 0.18 μm CMOS process of a multichannel ASIC for capacitive (up to 30 pF) sensors are described in this paper. The main design aim was to study an analog data-driven architecture. The design implements an analog derandomizing function with a 128-to-16 structure. That means the ASIC should provide parallel front-end readout of 128 input analog sensor signals and, after the corresponding fast commutation with appropriate arbitration logic, their processing by means of 16 output chains, including analog-to-digital conversion. The principal feature of the ASIC is a low power consumption, within 2 mW/channel (including a 9-bit 20 MS/s ADC), at a maximum average channel hit rate of not less than 150 kHz.
Abstract: Artificial Immune Systems have been adopted as heuristic algorithms for solving combinatorial problems for decades. Nevertheless, many of these applications exploited the approach for particular problems but seldom proposed ways to enhance its efficiency. In this paper, we continue our previous research and develop a Self-evolving Artificial Immune System II that coordinates the T and B cells of the immune system and builds a block-based artificial chromosome to speed up computation and achieve better performance on problems of different complexity. Plasma cells and clonal selection, which are related to the function of the immune response, are designed so that the immune response gives the AIS both global and local search ability and prevents it from being trapped in local optima. The experimental results validate that SEAIS II is effective in solving permutation flow-shop problems.
Abstract: Genetic Algorithms (GAs) are direct search methods which require little information about the design space. This characteristic, besides the robustness of these algorithms, has made them very popular in recent decades. On the other hand, when this method is employed there is no guarantee of achieving optimum results, which obliges the designer to run such algorithms more than once to obtain more reliable results. There have been many attempts to modify the algorithms to make them more efficient. In this paper, by applying the fractal dimension (particularly, the Box Counting Method), the complexity of the design space is estimated in order to determine the mutation and crossover probabilities (Pm and Pc). The methodology is followed by a numerical example for further clarification. It is concluded that this modification improves the efficiency of GAs and makes them produce more reliable results, especially for design spaces with higher fractal dimensions.
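The box-counting estimate referenced above can be sketched as follows: count the occupied boxes N(e) at several box sizes e and fit the slope of log N(e) versus log(1/e). How the paper then maps the estimated dimension to Pm and Pc is not stated in the abstract, so only the estimator is shown.

```python
import math

def box_counting_dimension(points, sizes=(0.25, 0.125, 0.0625, 0.03125)):
    """Box-counting (Minkowski) dimension estimate for a 2-D point set:
    least-squares slope of log N(e) against log(1/e)."""
    xs, ys = [], []
    for e in sizes:
        # a point (x, y) occupies the box with integer index (x//e, y//e)
        boxes = {(math.floor(x / e), math.floor(y / e)) for x, y in points}
        xs.append(math.log(1.0 / e))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

For a densely sampled line segment the estimate approaches 1, and for a filled region it approaches 2; finite sampling biases the slope slightly low, so the smallest box size should stay well above the point spacing.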
Abstract: Evolvable hardware (EHW) refers to a self-reconfigurable hardware design in which the configuration is under the control of an evolutionary algorithm (EA). A lot of research has been done in this area, and several different EAs have been introduced. Every time a specific EA is chosen for solving a particular problem, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. In the last three decades much research has been carried out to identify the best parameters of the EA components for different "test problems"; however, different researchers propose different solutions. In this paper the behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for designing logic circuits, which has not been studied before, is deeply analyzed. The mutation rate of an EHW system modifies the values of the logic cell inputs, the cell type (for example from AND to NOR) and the circuit output. The behaviour of the mutation has been analyzed based on the number of generations, genotype redundancy and the number of logic gates used for the evolved circuits. The experimental results provide the behaviour of the mutation rate to be used during evolution for the design and optimization of logic circuits. Research on the best mutation rate during the last 40 years is also summarized.
Abstract: This paper presents an improved variable ordering method for obtaining the minimum number of nodes in Reduced Ordered Binary Decision Diagrams (ROBDDs). The proposed method uses the graph topology to find the best variable ordering; to this end, the input Boolean function is converted into a unidirectional graph. Three levels of graph parameters are used to increase the probability of finding a good variable ordering. The initial level uses the total number of nodes (NN) in all the paths, the total number of paths (NP) and the maximum number of nodes among all paths (MNNAP). The second and third levels use two extra parameters: the shortest path between two variables (SP) and the sum of the shortest paths from one variable to all the other variables (SSP). A permutation of the graph parameters is performed at each level for each variable order, and the number of nodes is recorded. Experimental results are promising; the proposed method is found to be more effective in finding the variable ordering for the majority of benchmark circuits.
Abstract: In this research, a particle swarm optimization (PSO) algorithm is proposed for the no-wait flow-shop scheduling problem with sequence-dependent setup times and weighted earliness-tardiness penalties as the criterion. The smallest position value (SPV) rule is applied to convert the continuous position vector of the particles in PSO to job permutations. A timing algorithm is generated to find the optimal schedule and calculate the objective function value of a given sequence in the PSO algorithm. Two different neighborhood structures are applied to improve the solution quality of the PSO algorithm: the first is based on variable neighborhood search (VNS) and the second is a simple one with an invariable structure. In order to compare the performance of the two neighborhood structures, random test problems are generated and solved by both neighborhood approaches. Computational results show that the VNS algorithm performs better than the other, especially for the large-sized problems.
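The SPV rule mentioned above is essentially an argsort: the job with the smallest continuous coordinate is sequenced first, the next smallest second, and so on. A one-line sketch:

```python
def spv(position):
    """Smallest position value rule: map a particle's continuous
    position vector to a job permutation by sorting the indices."""
    return sorted(range(len(position)), key=lambda j: position[j])
```

For example, the position vector [0.7, -1.2, 0.1] yields the job permutation [1, 2, 0], so standard continuous PSO dynamics can drive a discrete scheduling search.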
Abstract: Web applications have become complex and crucial for many firms, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). The scientific community has focused its attention on Web application design, development, analysis and testing, by studying and proposing methodologies and tools. Static and dynamic techniques may be used to analyze existing Web applications. The use of traditional static source code analysis may be very difficult because of the presence of dynamically generated code and the multi-language nature of the Web. Dynamic analysis may be useful, but it has an intrinsic limitation: the low number of program executions used to extract information. Our reverse engineering analysis, used in our WAAT (Web Applications Analysis and Testing) project, applies mutational techniques in order to exploit server-side execution engines to accomplish part of the dynamic analysis. This paper studies the effects of mutation source code analysis applied to Web software to build application models. Mutation-based generated models may contain more information than necessary, so a pruning mechanism is needed.
Abstract: Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is a preferred interconnection network topology over the hypercube for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e. its edges can be partitioned into disjoint Hamiltonian cycles, and therefore a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the 5-star graph of degree 4, by defining an automorphism of the 5-star and a Hamiltonian cycle which is edge-disjoint with its image under that automorphism.
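The object in question can be built explicitly: the vertices of the n-star are the permutations of n symbols, and each generator swaps the first symbol with the symbol in some position i. The sketch below constructs the graph itself, not the paper's decomposition.

```python
from itertools import permutations

def star_graph(n):
    """Cayley graph of S_n with generators (1 i), i = 2..n: each edge
    swaps position 0 of a permutation with one other position."""
    verts = list(permutations(range(n)))
    edges = set()
    for v in verts:
        for i in range(1, n):
            w = list(v)
            w[0], w[i] = w[i], w[0]
            # frozenset makes the edge undirected and deduplicates it
            edges.add(frozenset((v, tuple(w))))
    return verts, edges
```

The 5-star of the abstract has 5! = 120 vertices of degree 4 and hence 120·4/2 = 240 edges; a Hamiltonian decomposition splits them into two edge-disjoint Hamiltonian cycles of 120 edges each.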