Abstract: This article presents the results of research related to the assessment of the weighted cumulative expected transmission time (WCETT) protocol applied to cognitive radio networks. Building on research done by different authors, we simulated a network that communicates wirelessly over a licensed channel, through which unlicensed nodes attempt to transmit during a given time window until the channel's licensed owner station begins its transmission.
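For reference, the WCETT metric is usually written (following Draves et al.; we assume the paper uses this standard formulation) as a weighted sum of per-hop expected transmission times and the busiest channel's total:

```latex
\mathrm{WCETT} \;=\; (1-\beta)\sum_{i=1}^{n}\mathrm{ETT}_i \;+\; \beta\,\max_{1\le j\le k} X_j,
\qquad X_j=\sum_{\text{hop } i \text{ on channel } j}\mathrm{ETT}_i,\quad 0\le\beta\le 1
```

Here ETT_i is the expected transmission time of hop i and X_j aggregates the ETTs of the hops using channel j, so the parameter β trades total path cost against intra-path channel diversity.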
Abstract: The article presents findings from the study and
analysis of the results of an experimental programme focused on the
production of concrete and fibre reinforced concrete in which natural
aggregate has been substituted with brick or concrete recyclate. The
research results are analyzed to monitor the effect of mechanical-physical
characteristics on the durability properties of the tested
cementitious composites. The key parts of the fibre reinforced
concrete mix are the basic components: aggregates – recyclate,
cement, fly ash, water and fibres. Their specific ratios and the
properties of individual components principally affect the resulting
behaviour of fresh fibre reinforced concrete and the characteristics of
the final product. The article builds on the sources dealing with the
use of recycled aggregates from construction and demolition waste in
the production of fibre reinforced concrete. The implemented
procedure of testing the composite contributes to the building
sustainability in environmental engineering.
Abstract: Five lignin samples were fractionated with
Acetone/Water mixtures and the obtained fractions were subjected to
extensive structural characterization, including Fourier Transform
Infrared (FT-IR), Gel Permeation Chromatography (GPC), and
Phosphorus-31 NMR spectroscopy (31P-NMR). The results showed
that for all the studied lignins the solubility increases with increasing
acetone concentration. Wheat straw lignin has the highest
solubility in the 90/10 (v/v) Acetone/Water mixture, with 400 mg of lignin
dissolved in 1 mL of the mixture. The weight-average molecular weight of
the obtained fractions increased with increasing acetone
concentration and thus with solubility. 31P-NMR analysis based on
lignin modification by reactive phospholane into phosphitylated
compounds was used to differentiate and quantify the different types
of OH groups (aromatic, aliphatic, and carboxylic) found in the
fractions obtained with 70/30 (v/v) Acetone/Water mixture.
Abstract: Multiple sequence alignment is a fundamental part in
many bioinformatics applications such as phylogenetic analysis.
Many alignment methods have been proposed. Each method gives a
different result for the same data set, and consequently generates a
different phylogenetic tree. Hence, the chosen alignment method
affects the resulting tree. However, in the literature there is no
evaluation of multiple alignment methods based on the comparison of
their phylogenetic trees. This work evaluates the following eight
aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN,
ProbCons and Align-m, based on their phylogenetic trees (test trees)
produced on a given data set. The Neighbor-Joining method is used
to estimate trees. Three criteria, namely, the dNNI, the dRF and the
Id_Tree, are established to test the ability of the different alignment
methods to produce test trees closer to the reference one
(true tree). Results show that the method which produces the most
accurate alignment gives the test tree nearest to the reference tree.
MUSCLE outperforms all aligners with respect to the three criteria
and for all datasets, performing particularly well when sequence
identities are within 10-20%. It is followed by T-Coffee at lower
sequence identities; at higher identity (30%), the tree scores of all
methods become similar.
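The dRF criterion above is the Robinson-Foulds distance: the number of bipartitions (splits) present in one tree but not the other. A minimal sketch on four taxa, assuming trees are encoded as nested tuples (a toy encoding, not any aligner's output format):

```python
def bipartitions(tree, all_leaves):
    """Collect the non-trivial splits of a tree given as nested tuples,
    e.g. (("A","B"),("C","D")).  Each split is stored as the frozenset
    of leaves on one (canonical) side of an internal edge."""
    splits = set()

    def walk(node):
        if isinstance(node, tuple):
            side = frozenset().union(*map(walk, node))
        else:
            side = frozenset({node})
        if 1 < len(side) < len(all_leaves) - 1:
            # keep the lexicographically smaller side as the canonical one
            splits.add(min(side, all_leaves - side, key=sorted))
        return side

    walk(tree)
    return splits

def rf_distance(t1, t2, leaves):
    """Robinson-Foulds distance: splits found in one tree but not the other."""
    b1, b2 = bipartitions(t1, leaves), bipartitions(t2, leaves)
    return len(b1 ^ b2)

leaves = frozenset({"A", "B", "C", "D"})
test_tree = (("A", "B"), ("C", "D"))
true_tree = (("A", "C"), ("B", "D"))
print(rf_distance(test_tree, true_tree, leaves))  # -> 2 (no shared split)
print(rf_distance(test_tree, test_tree, leaves))  # -> 0 (identical trees)
```

A test tree whose distance to the true tree is 0 recovers exactly the reference topology; larger values mean more disagreeing internal edges.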
Abstract: Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking, and controlling areas such as battlefield monitoring, object tracking, habitat monitoring, and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial-of-service attacks, and the physical compromise of sensor nodes. Nodes in a sensor network may vanish due to power exhaustion or malicious attacks. To extend the life span of the sensor network, new node deployment is needed. In military scenarios, an intruder may directly deploy malicious nodes or manipulate existing nodes to set up malicious new nodes through many kinds of attacks. To prevent malicious nodes from joining the sensor network, security is required in the design of sensor network protocols. In this paper, we propose a security framework to provide a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes node authentication for new nodes with recognition of malicious nodes. When deployed as a framework, a high degree of security is achievable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attain better computation and communication performance. It differs from conventional authentication methods based on node identity alone: it includes both the identity of nodes and the node security time stamp in the authentication procedure. Hence, the security protocols not only check the identity of each node but also distinguish between new nodes and old nodes.
Abstract: Although Model Driven Architecture has taken
successful steps toward model-based software development, this
approach still faces complex situations and ambiguous questions
when applied to real-world software systems. One of these
questions - which has attracted the most interest and focus - is how
models are transformed between the different abstraction levels that
MDA proposes. In this paper, we propose an approach based on Story
Driven Modeling and Aspect Oriented Programming to ease these
transformations. Service Oriented Architecture is taken as the target
model to test the proposed mechanism in a functional system.
Service Oriented Architecture and Model Driven Architecture [1]
are both considered frontiers of their own domains in the
software world. Following components - which were the greatest step
after object orientation - SOA was introduced, focusing on more
integrated and automated software solutions. On the other hand - and
from the designers' point of view - MDA is just initiating another
evolution: MDA is considered the next big step after UML in the
design domain.
Abstract: Allowing diagonalizability of sign patterns is still an open problem. In this paper, we give a careful discussion of the allowing of unitary diagonalizability for two sign patterns. Some necessary and sufficient conditions for allowing unitary diagonalizability are also obtained.
Abstract: Magnesium alloys have gained increased attention in recent years in the automotive, electronics, and medical industries. This is because magnesium alloys have better properties than aluminum alloys and steels in respect of their low density and high strength-to-weight ratio. However, the main problems in magnesium alloy welding are crack formation and the appearance of porosity during solidification. This paper proposes a unique technique to weld two thin sheets of AZ31B magnesium alloy using a paste containing Ag nanoparticles. The paste, containing Ag nanoparticles of 5 nm average diameter and an organic solvent, was used to coat the surface of an AZ31B thin sheet. The coated sheet was heated at 100 °C for 60 s to evaporate the solvent. The dried sheet was set as the lower AZ31B sheet on the jig, and then lap fillet welding was carried out using a pulsed Nd:YAG laser in a closed box filled with argon gas. The characteristics of the microstructure and the corrosion behavior of the joints were analyzed by optical microscopy (OM), energy dispersive spectrometry (EDS), electron probe micro-analysis (EPMA), scanning electron microscopy (SEM), and an immersion corrosion test. The experimental results show that the wrought AZ31B magnesium alloy can be joined successfully using Ag nanoparticles. The Ag nanoparticle insert promotes grain refinement, a narrower HAZ width, and a wider bond width compared to welds without an insert. The corrosion rate of AZ31B welded with Ag nanoparticles was reduced by up to 44% compared to the base metal. The improved corrosion resistance of AZ31B welded with Ag nanoparticles is due to finer grains and a larger grain boundary area with high Al content. The β-phase Mg17Al12 could serve as an effective barrier and suppress further propagation of corrosion. Furthermore, the Ag distribution in the fusion zone provides much finer grains and may stabilize the magnesium solid solution, making it less soluble or less anodic in aqueous environments.
Abstract: Currently, as the slider process in the Hard Disk Drive
industry becomes more complex, defect diagnosis for yield improvement
becomes more complicated and time-consuming. Manufacturing data
analysis with a data mining approach is widely used for solving that
problem. The existing mining approach, combining K-Means
clustering, the machine-oriented Kruskal-Wallis test, and the
multivariate chart, was applied for defect diagnosis, but it is still
a semiautomatic diagnosis system. This article aims to modify the
algorithm to support automatic decisions in the existing approach.
Based on the research framework, the new approach can perform an
automatic diagnosis and helps engineers find the defective
factors about 50% faster than the existing approach.
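The machine-oriented Kruskal-Wallis test at the core of the pipeline can be sketched from scratch: pool all measurements, rank them, and compare the mean ranks of the machine groups. A minimal sketch (no tie correction; the yield data are hypothetical):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction):
    H = 12/(N(N+1)) * sum_i R_i^2/n_i - 3(N+1),
    where R_i is the rank sum of group i and n_i its size."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    rank_sum = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sum[gi] += rank
    n = len(pooled)
    h = 12.0 / (n * (n + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sum, groups))
    return h - 3 * (n + 1)

# Hypothetical parametric yields measured on three machines:
machines = [[8.2, 8.4, 8.3], [8.1, 8.0, 8.2], [9.5, 9.7, 9.6]]
print(round(kruskal_wallis_h(machines), 3))  # -> 6.489 (machine 3 stands out)
```

A large H (here above the χ² cutoff of about 5.99 at α = 0.05 with 2 degrees of freedom) flags the machine factor as a candidate defect source; the automated pipeline would apply this per machine-oriented factor.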
Abstract: Reaction-diffusion systems are mathematical models that describe how the concentration of one or more substances distributed in space changes under the influence of local chemical reactions, in which the substances are converted into each other, and diffusion, which causes the substances to spread out in space. The classical representation of a reaction-diffusion system is given by semi-linear parabolic partial differential equations, whose general form is ∂tX(x, t) = DΔX(x, t), where X(x, t) is the state vector, D is the matrix of the diffusion coefficients and Δ is the Laplace operator. If the solutes move in a homogeneous system in thermal equilibrium, the diffusion coefficients are constants that do not depend on the local concentration of solvent and solutes or on the local temperature of the medium. In this paper a new stochastic reaction-diffusion model, in which the diffusion coefficients are functions of the local concentration, viscosity and frictional forces of solvent and solute, is presented. Such a model provides a more realistic description of the molecular kinetics in non-homogeneous and highly structured media such as the intra- and inter-cellular spaces. The movement of a molecule A from a region i to a region j of the space is described as a first-order reaction A_i -k-> A_j, where the rate constant k depends on the diffusion coefficient. Representing the diffusional motion as a chemical reaction allows one to assimilate a reaction-diffusion system to a pure reaction system and to simulate it with Gillespie-inspired stochastic simulation algorithms. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the specific speeds of the reaction and diffusion events. Redi is the software tool developed to implement this model of reaction-diffusion kinetics and dynamics.
It is free software that can be downloaded from http://www.cosbi.eu. To demonstrate the validity of the new reaction-diffusion model, the simulation results of chaperone-assisted protein folding in the cytoplasm obtained with Redi are reported. This case study has been attracting renewed attention from the scientific community due to current interest in protein aggregation as a potential cause of neurodegenerative diseases.
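The paper's device of treating a hop between regions as a first-order reaction A_i -k-> A_j can be illustrated with a Gillespie-style simulation. A minimal two-compartment sketch (rate constants and molecule counts are hypothetical; Redi itself handles full reaction-diffusion systems):

```python
import random

def gillespie_diffusion(n1, n2, k12, k21, t_end, seed=1):
    """Gillespie SSA in which the diffusion hops A_1 -> A_2 (propensity
    k12*n1) and A_2 -> A_1 (propensity k21*n2) are the only 'reactions'."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a1, a2 = k12 * n1, k21 * n2          # event propensities
        a0 = a1 + a2
        if a0 == 0:
            return n1, n2
        t += rng.expovariate(a0)             # exponential waiting time
        if t > t_end:
            return n1, n2
        if rng.random() * a0 < a1:           # pick which event fires
            n1, n2 = n1 - 1, n2 + 1
        else:
            n1, n2 = n1 + 1, n2 - 1

# 1000 molecules start in compartment 1; equal rates -> roughly even split
print(gillespie_diffusion(1000, 0, k12=1.0, k21=1.0, t_end=50.0))
```

Chemical reactions would simply add further event types to the same propensity list, which is exactly what assimilating diffusion to a pure reaction system buys.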
Abstract: As the dysfunctions of the information society and
social development progress, intrusion problems such as malicious
replies, spam mail, private information leakage, phishing, and
pharming, and side effects such as the spread of unwholesome
information and privacy invasion are becoming serious social
problems. Illegal access to information is also becoming a problem as
the exchange and sharing of information increases on the basis of the
extension of the communication network. On the other hand, as the
communication network has been constructed as an international,
global system, the legal response against invasion and cyber-attack
from abroad is reaching its limits. In addition, in an environment where
the important infrastructures are managed and controlled on the basis
of the information communication network, such problems pose a
threat to national security. Countermeasures to such threats are
developed and implemented on a yearly basis to protect the major
infrastructures of information communication. As a part of such
measures, we have developed a methodology for assessing the
information protection level which can be used to establish the
quantitative object setting method required for the improvement of the
information protection level.
Abstract: Stochastic resonance (SR) is a phenomenon whereby
the signal transmission or signal processing through certain nonlinear
systems can be improved by adding noise. This paper discusses SR in
nonlinear signal detection by a simple test statistic, which can be
computed from multiple noisy data in a binary decision problem based
on a maximum a posteriori probability criterion. The performance of
detection is assessed by the probability of detection error Per. When
the input signal is a subthreshold signal, we establish that a benefit from
noise can be gained for different noises and further confirm that
subthreshold SR exists in nonlinear signal detection. The efficacy of
SR is significantly improved and the minimum of Per can
approach zero dramatically as the sample number increases. These
results show the robustness of SR in signal detection and extend the
applicability of SR in signal processing.
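The noise benefit described above can be illustrated with a toy threshold detector: a subthreshold signal never crosses the threshold without noise, so the detector is at chance (Per = 0.5), while with a suitable noise level the crossing count becomes informative and Per drops. A minimal sketch (the signal, threshold, and noise values are hypothetical, not the paper's settings):

```python
import math
import random

def q(x):
    """Upper-tail probability of the standard normal, P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def detection_error(sigma, n_samples=50, n_trials=2000, seed=7):
    """Estimate Per for deciding s = 0 (H0) vs a subthreshold s = 0.5 (H1)
    observed through a hard threshold at 1.0, using the number of
    threshold crossings in n_samples noisy observations as the statistic."""
    theta, s1 = 1.0, 0.5
    if sigma == 0.0:
        return 0.5                        # no crossings ever: pure guessing
    p0, p1 = q(theta / sigma), q((theta - s1) / sigma)
    cutoff = n_samples * (p0 + p1) / 2.0  # midpoint decision rule
    rng = random.Random(seed)
    errors = 0
    for trial in range(n_trials):
        h1 = trial % 2 == 1
        s = s1 if h1 else 0.0
        crossings = sum(s + rng.gauss(0.0, sigma) > theta
                        for _ in range(n_samples))
        if (crossings > cutoff) != h1:
            errors += 1
    return errors / n_trials

# Zero noise is blind to the subthreshold signal; moderate noise is not.
print(detection_error(0.0), detection_error(0.6))
```

Increasing `n_samples` sharpens the separation between the two crossing-rate distributions, mirroring the paper's observation that the minimum of Per approaches zero as the sample number grows.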
Abstract: There is a great deal of interest in constructing Double Skin Facade (DSF) structures, which are considered a modern movement in the fields of energy conservation, renewable energies, and architectural design. This trend provides many promising alternatives that are frequently associated with sustainable building. In this paper a building with a Double Skin Facade is considered in the semiarid climate of Tehran, Iran, in order to assess the DSF's performance during hot seasons. Mathematical formulations calculate the solar heat gain of the external skin. Moreover, Computational Fluid Dynamics (CFD) simulations were performed on the case study building to enhance the effectiveness of the facade. The conclusions reveal the difference in energy gained by the cavity and the room with and without blinds and louvers. Some solutions are introduced to improve the performance of natural ventilation by reducing the cooling loads in summer.
Abstract: Environmental considerations have become an integral part of developmental thinking and decision making in many countries, and the field is growing rapidly in importance as a discipline of its own. Preventive approaches have been used throughout the evolution of environmental management as a broad and dynamic system for dealing with pollution and environmental degradation. In this regard, Environmental Assessment, as an activity for the identification and prediction of a project's impacts, is carried out worldwide, and its legal significance dates back to the late 1960s. In Iran, according to Article 2 of the Environmental Protection Act, an Environmental Impact Assessment (EIA) should be prepared for seven categories of projects; this article has been actively implemented by the Department of Environment since 1997. In 1989 the World Bank introduced the application of Environmental Assessment to decision making about projects requiring financial assistance in developing countries, so preparing an EIA became obligatory for obtaining a World Bank loan. The Alborz Project is one of the World Bank projects in Iran which is environmentally significant; seven out of ten W.B. safeguard policies were considered in this project. In this paper, the Alborz project, its objectives, the safeguard policies, and the role of environmental management will be elaborated.
Abstract: Nowadays, quasi-continuous wave diode lasers are
used in a widespread variety of applications. Temperature effects in
these lasers can strongly influence their performance. In this paper,
the effects of temperature have been experimentally investigated on
different features of a 60 W QCW diode laser. The obtained results
indicate that the conversion efficiency and operating voltage of the diode
laser decrease with increasing working temperature,
accompanied by a redshift in the laser peak wavelength. Experimental
results show that the emission peak wavelength of the laser shifts by 0.26 nm
and the conversion efficiency decreases by 1.76% as the
temperature increases from 40 to 50 °C. The present study also shows that the slope
efficiency decreases gradually at low temperatures and rapidly at
higher temperatures. Given the close dependence of the
mentioned parameters on the operating temperature, it is of great
importance to carefully control the working temperature of the diode
laser, particularly for medical applications.
Abstract: In most of the popular implementation of Parallel GAs
the whole population is divided into a set of subpopulations, each
subpopulation executes GA independently and some individuals are
migrated at fixed intervals on a ring topology. In these studies,
the migrations usually occur 'synchronously' among subpopulations.
Therefore, CPUs are not used efficiently and the communication
does not occur efficiently either. A few studies tried asynchronous
migration, but it is hard to implement and setting proper parameter
values is difficult.
The aim of our research is to develop a migration method which is
easy to implement, which is easy to set parameter values, and which
reduces communication traffic. In this paper, we propose a traffic
reduction method for the Asynchronous Parallel Distributed GA by
migration of elites only. This is a server-client model. Every client
executes a GA on a subpopulation and sends its elite information to the
server. The server manages the elite information of each client, and
migrations occur according to the evolution of the subpopulation in
a client. This facilitates the reduction in communication traffic.
To evaluate our proposed model, we apply it to many function optimization
problems. We confirm that our proposed method performs
as well as current methods, the communication traffic is lower, and
setting the parameters is much easier.
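The server-client elite migration described above can be illustrated with a compact stand-in: each client island runs its own GA and exchanges only its elite with a server that keeps the global best. A minimal sketch on a toy sphere-function minimisation (all parameter values are hypothetical, and the clients run sequentially rather than asynchronously):

```python
import random

def sphere(x):
    """Toy objective to minimise."""
    return sum(v * v for v in x)

def evolve_island(pop, rng, sigma=0.1):
    """One generation: binary tournament selection + Gaussian mutation.
    Returns the new subpopulation and its elite."""
    nxt = []
    for _ in range(len(pop)):
        a, b = rng.sample(pop, 2)
        parent = a if sphere(a) < sphere(b) else b
        nxt.append([v + rng.gauss(0.0, sigma) for v in parent])
    return nxt, min(nxt, key=sphere)

def run(n_islands=4, pop_size=20, dim=3, gens=100, seed=3):
    rng = random.Random(seed)
    islands = [[[rng.uniform(-2.0, 2.0) for _ in range(dim)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    server_elite = None                      # the server stores elites only
    for _ in range(gens):
        for i in range(n_islands):
            islands[i], elite = evolve_island(islands[i], rng)
            # client -> server: send this island's elite only
            if server_elite is None or sphere(elite) < sphere(server_elite):
                server_elite = list(elite)
            # server -> client: the global elite migrates back
            islands[i][0] = list(server_elite)
    return sphere(server_elite)

print(run())   # best sphere value found (non-increasing over generations)
```

Only one individual per exchange crosses the client-server boundary, which is the traffic reduction the method aims for; a full implementation would make each exchange an asynchronous message rather than a function call.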
Abstract: In the last decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster and is characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Though some approaches have been proposed to automatically determine the most suitable δ for the specific application, to date an efficient and fully satisfactory solution does not exist. The aim of this paper is to propose a novel method to compute the optimal δ based on the analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets from the literature are shown and discussed.
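The role of δ can be seen directly in the membership update: the noise cluster has no position in feature space, every point lying at the same distance δ from it, so points far from all real prototypes drain their membership into it. A minimal sketch of one membership computation in the style of Davé's noise clustering (hypothetical prototypes and fuzzifier m = 2):

```python
def noise_memberships(point, prototypes, delta, m=2.0):
    """Membership of one point in each real cluster plus the noise
    cluster (last entry), following the robust-FCM / noise-clustering
    update with fuzzifier m and noise distance delta."""
    e = 2.0 / (m - 1.0)
    d = [max(sum((p - c) ** 2 for p, c in zip(point, proto)) ** 0.5, 1e-12)
         for proto in prototypes]
    inv = [di ** -e for di in d] + [delta ** -e]   # last term: noise cluster
    total = sum(inv)
    return [v / total for v in inv]                # memberships sum to 1

protos = [(0.0, 0.0), (10.0, 0.0)]
print(noise_memberships((0.5, 0.0), protos, delta=3.0))   # mostly cluster 0
print(noise_memberships((5.0, 40.0), protos, delta=3.0))  # mostly noise
```

Shrinking δ pulls more points into the noise cluster, which is exactly the percentage-of-noise-objects curve the proposed method analyses over repeated runs.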
Abstract: The primary objective of this paper is to propose a new method for solving the assignment problem under uncertainty. In the classical assignment problem (AP), z_pq denotes the cost of assigning the qth job to the pth person, which is deterministic in nature. Here, in an uncertain situation, we assign a cost in the form of a composite relative degree F_pq instead of the crisp cost z_pq, and this replaced cost is in the maximization form. In this paper, a new mathematical formulation of the IVIF assignment problem is presented, in which the cost is considered to be an IVIFN and the membership of elements in the set can be explained by positive and negative evidence; it is solved and validated by the two proposed algorithms. To determine the composite relative degree of similarity of IVIFS, the concepts of similarity measure and score function are used to validate the solution obtained by the composite relative similarity degree method. Further, a hypothetical numerical illustration is conducted to clarify the effectiveness and feasibility of the method developed in the study. Finally, conclusions and suggestions for future work are given.
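On small instances the maximization form of the assignment can be checked by brute force: enumerate all job permutations and keep the one with the largest total composite relative degree. A minimal sketch (the score matrix is hypothetical, standing in for the IVIF-derived F_pq values):

```python
from itertools import permutations

def best_assignment(f):
    """Maximise sum of f[p][perm[p]] over all one-to-one job assignments.
    f[p][q] is the composite relative degree of person p on job q."""
    n = len(f)
    best = max(permutations(range(n)),
               key=lambda perm: sum(f[p][perm[p]] for p in range(n)))
    return list(best), sum(f[p][best[p]] for p in range(n))

# Hypothetical 3x3 composite relative degrees in [0, 1]:
f = [[0.9, 0.4, 0.3],
     [0.2, 0.8, 0.5],
     [0.6, 0.7, 0.95]]
print(best_assignment(f))   # optimal assignment [0, 1, 2], total ~2.65
```

Enumeration is only feasible for small n (n! permutations); a practical solver would apply the Hungarian method to the negated scores instead.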
Abstract: In supply chain management the customer is the most
significant component, and mass customization is closely related to
customers because it is the capability of an industry or organization
to deliver highly customized products and services to its
customers with flexibility and integration, providing such
a variety of products that nearly everyone can find what they want.
Today, all over the world, many companies and markets face a
twofold situation: on one side customers demand that their
orders be completed as quickly as possible, while on the other
hand they require highly customized products and services. By
applying mass customization some companies face unwanted cost
and complexity. Now they are realizing that they should completely
examine what kind of customization would be best suited for their
companies. In this paper the authors review some approaches and
principles which show effect in supply chain management that can be
adopted and used by companies for quickly meeting the customer
orders at reduced cost, with minimum amount of inventory and
maximum efficiency.
Abstract: In large Internet backbones, Service Providers
typically have to explicitly manage the traffic flows in order to
optimize the use of network resources. This process is often referred
to as Traffic Engineering (TE). Common objectives of traffic
engineering include balancing traffic distribution across the network
and avoiding congestion hot spots. Raj P H and SVK Raja designed
a Bayesian network approach to identify congestion hot spots in
MPLS. In this approach, for every node in the network a
Conditional Probability Distribution (CPD) is specified. Based on
the CPD, the congestion hot spots are identified. Then the traffic can
be distributed so that no link in the network is either over-utilized or
under-utilized. Although the Bayesian network approach has been
implemented in operational networks, it has a number of well known
scaling issues.
This paper proposes a new approach, which we call the Pragati
(meaning 'progress') Node Popularity (PNP) approach, to identify the
congestion hot spots from the network topology alone. In the new
Pragati Node Popularity approach, IP routing runs natively over the
physical topology rather than depending on the CPD of each node as
in Bayesian network. We first illustrate our approach with a simple
network, then present a formal analysis of the Pragati Node
Popularity approach. Our PNP approach shows that, for any
network handled by the Bayesian approach, it identifies exactly the same result
with minimum effort. We further extend the result to a more
generic one that holds for any network topology, even when the network
contains loops. A theoretical insight of our result is that the optimal routing
is always shortest-path routing with respect to some consideration of
hot spots in the network.
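A topology-only stand-in for node popularity, in the spirit of the PNP idea, is to count how many pairwise shortest paths traverse each node and flag the busiest node as a candidate hot spot. A minimal sketch on a hypothetical five-node topology (not the authors' exact algorithm):

```python
from collections import deque
from itertools import combinations

def shortest_path(adj, src, dst):
    """BFS shortest path in an unweighted graph given as an adjacency dict."""
    prev, seen = {src: None}, {src}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

def node_popularity(adj):
    """For every node, count the pairwise shortest paths that
    traverse it as an intermediate node."""
    pop = {u: 0 for u in adj}
    for s, t in combinations(adj, 2):
        for u in shortest_path(adj, s, t)[1:-1]:
            pop[u] += 1
    return pop

# Hypothetical topology: C bridges the two halves, so it should rank highest.
adj = {"A": ["C"], "B": ["C"], "C": ["A", "B", "D"], "D": ["C", "E"], "E": ["D"]}
pop = node_popularity(adj)
print(max(pop, key=pop.get))   # -> C
```

The count needs only the physical topology, which mirrors the claim that PNP avoids per-node CPDs; nodes with the highest counts are where shortest-path routing concentrates traffic.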