Abstract: Nowadays, engineering ceramics have significant
applications in industries such as automotive, aerospace,
electrical, electronics and even military industries, due to their
attractive physical and mechanical properties such as very high
hardness and strength at elevated temperatures, chemical stability,
low friction and high wear resistance. However, these attractive
properties, together with low thermal conductivity, make their
machining processes very difficult, costly and time consuming.
Many attempts have been made
to make the grinding process of engineering ceramics easier and
many scientists have tried to find proper techniques to economize
ceramics' machining processes. This paper proposes a new diamond
plunge grinding technique using ultrasonic vibration for grinding
Alumina ceramic (Al2O3). For this purpose, a set of laboratory
equipment has been designed, simulated using the Finite Element
Method (FEM), and constructed for use in various
measurements. The results obtained have been compared with the
conventional plunge grinding process without ultrasonic vibration
and indicated that the surface roughness and fracture strength
improved and the grinding forces decreased.
Abstract: High speed networks provide real-time variable bit rate
service with diversified traffic flow characteristics and quality
requirements. The variable bit rate traffic has stringent delay and
packet loss requirements. The burstiness of the correlated traffic
makes dynamic buffer management highly desirable to satisfy the
Quality of Service (QoS) requirements. This paper presents an
algorithm for optimization of adaptive buffer allocation scheme for
traffic based on loss of consecutive packets in data-stream and buffer
occupancy level. The buffer is designed to allow the input traffic
to be partitioned into different priority classes, and the threshold
is controlled dynamically based on the input traffic behavior. The
algorithm admits an input packet into the buffer if the occupancy
level is below the threshold value for that packet's priority class.
The threshold is varied dynamically at runtime based on packet loss
behavior. The
simulation is run for two priority classes of the input traffic:
real-time and non-real-time. The simulation results show that
Adaptive Partial Buffer Sharing (ADPBS) has better performance
than Static Partial Buffer Sharing (SPBS) and First In First Out
(FIFO) queue under the same traffic conditions.
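The admission rule described in this abstract can be illustrated as follows. This is a minimal sketch only: the class names, the consecutive-loss counter, the additive threshold update, and all parameter values are illustrative assumptions, not the authors' exact ADPBS scheme.

```python
from collections import deque

class AdaptiveBuffer:
    """Illustrative two-class partial buffer sharing with a dynamic threshold."""

    def __init__(self, capacity=100, threshold=60):
        self.capacity = capacity        # total buffer slots
        self.threshold = threshold      # admission limit for low-priority packets
        self.queue = deque()
        self.consecutive_losses = 0     # packets dropped in a row

    def arrive(self, packet, high_priority):
        occupancy = len(self.queue)
        # High-priority (real-time) packets may use the whole buffer;
        # low-priority packets are admitted only below the threshold.
        limit = self.capacity if high_priority else self.threshold
        if occupancy < limit:
            self.queue.append(packet)
            self.consecutive_losses = 0
            return True
        self.consecutive_losses += 1
        self._adapt()
        return False

    def _adapt(self):
        # Assumed update rule: after a burst of consecutive losses,
        # raise the threshold (bounded by capacity) to admit more traffic.
        if self.consecutive_losses >= 3 and self.threshold < self.capacity:
            self.threshold += 5
            self.consecutive_losses = 0

    def depart(self):
        return self.queue.popleft() if self.queue else None
```

The key point the sketch captures is that the admission limit seen by a packet depends on its priority class, and the limit itself moves in response to observed consecutive losses.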
Abstract: The necessity of accurate and timely field data is
shared among organizations engaged in fundamentally different
activities, public services or commercial operations. Basically, there
are three major components in the process of qualitative research:
data collection, interpretation and organization of data, and analytic
process. Representative technological advancements in terms of
innovation have been made in mobile devices (mobile phone, PDA-s,
tablets, laptops, etc). Resources that can be potentially applied on the
data collection activity for field researches in order to improve this
process.
This paper presents and discusses the main features of a mobile
phone based solution for field data collection, composed of basically
three modules: a survey editor, a server web application and a client
mobile application. The data gathering process begins with the
survey creation module, which enables the production of tailored
questionnaires. The field workforce receives the questionnaire(s) on
their mobile phones to collect the interview responses and send
them back to a server for immediate analysis.
Abstract: Network layer multicast, i.e. IP multicast, even after
many years of research, development and standardization, is not
deployed in large scale due to both technical (e.g. upgrading of
routers) and political (e.g. policy making and negotiation) issues.
Researchers looked for alternatives and proposed application/overlay
multicast where multicast functions are handled by end hosts, not
network layer routers. Member hosts wishing to receive multicast
data form a multicast delivery tree. The intermediate hosts in the tree
also act as routers, i.e. they forward data to the lower hosts in the
tree. Unlike IP multicast, where a router cannot leave the tree until all
members below it leave, in overlay multicast any member can leave
the tree at any time, thus partitioning the tree and disrupting the
data dissemination. All the disrupted hosts have to rejoin the tree.
This characteristic of overlay multicast makes the multicast tree
unstable and causes data loss and rejoin overhead. In this paper, we
propose that each node
sets its leaving time from the tree and sends a join request to a
number
of nodes in the tree. The nodes in the tree will reject the request if
their leaving time is earlier than that of the requesting node;
otherwise they
will accept the request. The node can join at one of the accepting
nodes. This makes the tree more stable, as nodes join the tree
according to their leaving times, with the earliest-leaving nodes
placed at the leaves. Some intermediate nodes may not respect their
announced leaving times and leave earlier, thus disrupting the tree.
For this, we propose a proactive recovery mechanism so that disrupted
nodes can rejoin the tree at predetermined nodes immediately. We
have shown by simulation that there is less overhead when joining
the multicast tree, and the recovery time of the disrupted nodes is
much less than in previous works.
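The join rule sketched above can be illustrated as follows. This is a minimal sketch, assuming a fixed child limit per node and that a newcomer attaches to the acceptor with the latest leaving time; the `Node` class, `max_children`, and the tie-breaking choice are assumptions, not the paper's exact protocol.

```python
class Node:
    """Overlay multicast node with an announced leaving time (sketch)."""

    def __init__(self, name, leaving_time, max_children=2):
        self.name = name
        self.leaving_time = leaving_time
        self.max_children = max_children
        self.children = []
        self.parent = None

    def accepts(self, joiner):
        # A node rejects a join request if it plans to leave before
        # the requesting node; otherwise it may accept (capacity permitting).
        return (self.leaving_time >= joiner.leaving_time
                and len(self.children) < self.max_children)

def join(tree_nodes, newcomer):
    """Send join requests to candidate nodes; attach to one acceptor.

    Preferring the acceptor with the latest leaving time pushes
    early-leaving nodes toward the leaves, stabilizing the tree.
    """
    acceptors = [n for n in tree_nodes if n.accepts(newcomer)]
    if not acceptors:
        return None
    parent = max(acceptors, key=lambda n: n.leaving_time)
    parent.children.append(newcomer)
    newcomer.parent = parent
    return parent
```

A usage consequence of the rule: a node whose leaving time exceeds that of every non-full node in the tree is rejected everywhere, which is exactly the case the paper's proactive recovery mechanism would have to handle.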
Abstract: A model predictive controller based on recursive learning is proposed. In this SISO adaptive controller, a model is automatically updated using simple recursive equations. The identified models are then stored in the memory to be re-used in the future. The decision for model update is taken based on a new control performance index. The new controller allows the use of simple linear model predictive controllers in the control of nonlinear time varying processes.
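The recursive model update mentioned in this abstract can be illustrated with a standard recursive least-squares (RLS) step for a first-order SISO ARX model. The model structure, the forgetting factor, and the function name are assumptions chosen for illustration; they are not the authors' exact equations.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step for y ≈ phi @ theta.

    theta : current parameter estimate, shape (n,)
    P     : covariance matrix, shape (n, n)
    phi   : regressor vector, e.g. [y[k-1], u[k-1]] for a 1st-order ARX model
    y     : newly measured output (scalar)
    lam   : forgetting factor (< 1 discounts old data for time-varying plants)
    """
    phi = np.asarray(phi, dtype=float)
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                  # gain vector
    err = y - phi @ theta                # one-step prediction error
    theta = theta + K * err              # parameter update
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P, err
```

The prediction error `err` returned at each step is the kind of signal a control performance index could monitor to decide when the stored model should be updated or swapped.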
Abstract: Supply chain networks are frequently hit by
unplanned events which lead to disruptions and cause operational and
financial consequences. It is neither possible to avoid disruption risk
entirely, nor are network members able to prepare for every possible
disruptive event. Therefore, a continuity plan should be set up
that supports effective operational responses in supply chain
networks in times of emergency. In this research, network-related
degrees of freedom which determine the options for responsive
actions are derived from interview data. The findings are further
embedded into a common risk management process. The paper
provides support for researchers and practitioners to identify the
network-related options for responsive actions and to determine the
need for improving the reaction capabilities.
Abstract: Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite-precision errors, and efficient implementation. In contrast, they have the major disadvantage of requiring a higher order (more coefficients) than an IIR counterpart with comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in digital filter design. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is most powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high order digital FIR filters. Here the use of DA eliminates the need for multipliers when implementing the multiply and accumulate (MAC) unit, and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output through the minimization of the non-zero coefficients.
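The idea of representing a filter with fewer non-zero coefficients can be illustrated with a crude magnitude-based pruning step. This is a hypothetical stand-in, not the proposed algorithm (which also modifies coefficient values); it only shows how zeroing small taps trades a smaller MAC count against frequency-response error.

```python
import numpy as np

def prune_fir(h, keep_fraction=0.75):
    """Zero out the smallest-magnitude taps of an FIR filter.

    Keeps only the largest |h[n]| values so the multiply-accumulate
    count drops; the caller can then check the response degradation.
    """
    h = np.asarray(h, dtype=float)
    n_keep = max(1, int(round(keep_fraction * h.size)))
    keep = np.argsort(np.abs(h))[-n_keep:]   # indices of the largest taps
    h_pruned = np.zeros_like(h)
    h_pruned[keep] = h[keep]
    return h_pruned

def response_error(h, h_pruned, n_fft=512):
    """Maximum deviation between the two magnitude responses."""
    H = np.abs(np.fft.rfft(h, n_fft))
    Hp = np.abs(np.fft.rfft(h_pruned, n_fft))
    return np.max(np.abs(H - Hp))
```

For a windowed-sinc pulse-shaping prototype, dropping the smallest taps removes the exact zero crossings for free and only slightly perturbs the passband, which is the effect the coefficient-minimization approach exploits.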
Abstract: Mycophenolic acid (MPA) is a secondary metabolite
of Penicillium brevicompactum with antibiotic and
immunosuppressive properties. In this study, a fermentation process
was established for the production of mycophenolic acid by
Penicillium brevicompactum MUCL 19011 in shake flasks. The maximum MPA
production, product yield and productivity were 1.379 g/L, 18.6 mg/g
glucose and 4.9 mg/L.h respectively. Glucose consumption, biomass
and MPA production profiles were investigated during fermentation
time. It was found that MPA production starts after approximately
180 hours and reaches a maximum at 280 h. In the next step, the
effects of methionine and acetate concentrations on MPA production
were evaluated. Maximum MPA production, product yield and
productivity (1.763 g/L, 23.8 mg/g glucose and 6.30 mg/L.h,
respectively) were obtained using 2.5 g/L methionine in the culture
medium. Further addition of methionine had no additional positive
effect on MPA production. Finally, the results showed that the
addition of acetate to the culture medium had no observable effect
on MPA production.
Abstract: In an assessment of the extractability of metals in
green liquor dregs from the chemical recovery circuit of a
semi-chemical pulp mill, the extractable concentrations of heavy
metals in
artificial gastric fluid were between 10 (Ni) and 717 (Zn) times
higher than those in artificial sweat fluid. Only Al (6.7 mg/kg; d.w.),
Ni (1.2 mg/kg; d.w.) and Zn (1.8 mg/kg; d.w.) showed extractability
in the artificial sweat fluid, whereas Al (730 mg/kg; d.w.), Ba (770
mg/kg; d.w.) and Zn (1290 mg/kg; d.w.) showed clear extractability
in the artificial gastric fluid. As certain heavy metals were
clearly soluble in the artificial gastric fluid, careful handling of
this residue is recommended in order to prevent green liquor dregs
from entering the human gastrointestinal tract.
Abstract: A new concept for long-term reagent storage for Lab-on-a-Chip (LoC) devices is described. Here we present a polymer multilayer stack with integrated stick packs for the long-term storage of several liquid reagents, which are necessary for many diagnostic applications. Stick packs are widely used in the packaging industry for storing solids and liquids for long periods. The storage concept fulfills two main requirements: first, long-term storage of reagents in stick packs without significant losses or interaction with the surroundings; second, on-demand release of the liquids, which is realized by pushing a membrane against the stick pack using pneumatic pressure. This concept enables long-term on-chip storage of liquid reagents at room temperature and allows easy implementation in different LoC devices.
Abstract: The purpose of this research was to determine the role
of the immunogenic 49 kDa protein from V. alginolyticus, which is
capable of initiating the expression of MHC class II molecules in
receptors of Cromileptes altivelis. The method used was in vivo
experimental research, testing the immunogenic 49 kDa protein from
V. alginolyticus in Cromileptes altivelis (250 - 300 grams in size)
using three boosters, with the immunogenic protein injected
intramuscularly. The response of the expressed MHC molecule was
shown using immunocytochemistry and SEM. The results
indicated that the 49 kDa adhesin of V. alginolyticus, which has an
immunogenic character, could trigger the expression of MHC class II
on grouper receptors, as proven by staining using
immunocytochemistry and SEM with labeling using an anti-MHC
antibody (anti-mouse). This visible expression is based on binding
between antigen epitopes and the anti-MHC antibody in the receptor.
Using immunocytochemistry, the intracellular response of MHC to in
vivo induction of the immunogenic adhesin from V. alginolyticus was
shown.
Abstract: In this paper, we propose an algorithm to compute
initial cluster centers for K-means clustering. The data in a cell
are partitioned using a cutting plane that divides the cell into two
smaller cells. The plane is perpendicular to the data axis with the
highest variance and is designed to reduce the sum of squared errors
of the two cells as much as possible, while at the same time keeping
the two cells as far apart as possible. Cells are partitioned one at
a time until the number of cells equals the predefined number of
clusters, K. The centers of the K cells become the initial cluster
centers for K-means. The experimental results suggest that the
proposed algorithm is effective, converging to better clustering
results than those of the random initialization method. The research
also indicated that the proposed algorithm would greatly improve the
likelihood of every cluster containing some data.
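A simplified version of the cell-splitting procedure can be sketched as follows. Here the cut is simply placed at the mean along the highest-variance axis, whereas the paper optimizes the cut position jointly for SSE reduction and cell separation, so this is only an illustrative approximation of the seeding idea.

```python
import numpy as np

def initial_centers(X, k):
    """Cell-splitting seeding for K-means (simplified sketch).

    Repeatedly split the cell with the largest sum of squared errors
    by a plane perpendicular to its highest-variance axis, cutting at
    the mean, until k cells exist; return their centroids.
    """
    cells = [X]
    while len(cells) < k:
        # pick the cell with the largest within-cell scatter (SSE)
        sse = [((c - c.mean(axis=0)) ** 2).sum() for c in cells]
        cell = cells.pop(int(np.argmax(sse)))
        axis = int(np.argmax(cell.var(axis=0)))   # highest-variance axis
        cut = cell[:, axis].mean()
        left = cell[cell[:, axis] <= cut]
        right = cell[cell[:, axis] > cut]
        cells.extend([left, right])
    return np.array([c.mean(axis=0) for c in cells])
```

On well-separated data the first cut already lands between the groups, so each seed falls inside a distinct cluster, which is what makes every final K-means cluster likely to be non-empty.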
Abstract: HSDPA is a new feature introduced in the
Release-5 specifications of the 3GPP WCDMA/UTRA standard to
realize higher data rates together with lower round-trip times.
Moreover, the HSDPA concept offers outstanding improvement of
packet throughput and also significantly reduces the packet call
transfer delay as compared to the Release-99 DSCH. Until now, the
HSDPA system has used turbo coding, which is the best coding
technique for approaching the Shannon limit. However, the main
drawbacks of turbo coding are high decoding complexity and high
latency, which make it unsuitable for some applications such as
satellite communications, since the transmission distance itself
introduces latency due to the limited speed of light. Hence, in this
paper it is proposed to use LDPC coding in place of turbo coding for
the HSDPA system, which decreases the latency and decoding
complexity. However, LDPC coding increases the encoding complexity.
Although the complexity of the transmitter increases at the NodeB,
the end user benefits in terms of receiver complexity and bit error
rate. In this paper, the LDPC encoder is implemented using a sparse
parity check matrix H to generate a codeword, and the belief
propagation algorithm is used for LDPC decoding. Simulation results
show that in LDPC coding the BER drops sharply as the number of
iterations increases with a small increase in Eb/No, which is not
possible in turbo coding. Also, the same BER was achieved using
fewer iterations, and hence the latency and receiver complexity are
decreased for LDPC coding.
HSDPA increases the downlink data rate within a cell to a theoretical
maximum of 14Mbps, with 2Mbps on the uplink. The changes that
HSDPA enables include better quality and more reliable and more
robust data services. In other words, while realistic data rates are
only a few Mbps, the actual quality and number of users achieved
will improve significantly.
Abstract: The main objective of this paper is to compare the
Wolf Pack Search (WPS), a newly introduced intelligent algorithm,
with several other well-known algorithms,
including Particle Swarm Optimization (PSO), Shuffled Frog
Leaping (SFL), and the Binary and Continuous Genetic Algorithms. All
algorithms are applied to two benchmark cost functions. The aim is
to identify the best algorithm in terms of speed and accuracy in
finding the solution, where speed is measured in terms of function
evaluations. The simulation results show that the SFL algorithm,
with fewer function evaluations, ranks first when simulation time is
important, while if accuracy is the significant issue, WPS and PSO
have better performance.
Abstract: This paper proposes a “soft systems" approach to
domain-driven design of computer-based information systems. We
propose a systemic framework combining techniques from Soft
Systems Methodology (SSM), the Unified Modelling Language
(UML), and an implementation pattern known as “Naked Objects".
We have used this framework in action research projects that have
involved the investigation and modelling of business processes using
object-oriented domain models and the implementation of software
systems based on those domain models. Within the proposed
framework, Soft Systems Methodology (SSM) is used as a guiding
methodology to explore the problem situation and to generate a
ubiquitous language (soft language) which can be used as the basis
for developing an object-oriented domain model. The domain model
is further developed using techniques based on the UML and is
implemented in software following the “Naked Objects"
implementation pattern. We argue that there are advantages from
combining and using techniques from different methodologies in this
way.
The proposed systemic framework is overviewed and justified as a
multimethodology using Mingers' multimethodology ideas.
This multimethodology approach is being evaluated through a
series of action research projects based on real-world case studies. A
Peer-Tutoring case study is presented here as a sample of the
framework evaluation process.
Abstract: This investigation examines the effect of the sintering
temperature curve on a manufactured nickel powder capillary
structure (wick) for a loop heat pipe (LHP). The sintering
temperature curve is
composed of a region of increasing temperature; a region of constant
temperature and a region of declining temperature. The most
important region is the one in which the temperature increases. The
nickel powder wick is formed during the stage of constant sintering
temperature and in the interval between the constant-temperature
stage and the falling-temperature stage. When the slope of the curve
in the region of increasing temperature is unity (equivalent to
10 °C/min), the structure of the wick is complete and the heat
transfer performance is optimal. The
result of the experimental test demonstrates that the heat transfer
performance is optimal at 320 W; the minimal total thermal resistance
is approximately 0.18 °C/W, and the heat flux is 17 W/cm2; the
internal parameters of the wick are an effective pore radius of
3.1 μm, a permeability of 3.25×10^-13 m2 and a porosity of 71%.
Abstract: Since the Cloud environment has emerged as one of the most powerful
keywords in the computing industry, the growth of VDI (Virtual Desktop
Infrastructure) has become remarkable in the domestic market. In recent years, with
mobile devices such as smartphones and pads spreading so rapidly, the strength of
VDI in allowing people to access and perform business on the move, along with
companies' office needs, has expedited an even more rapid spread of VDI.
In this paper, a mobile OTP (One-Time Password) authentication method is proposed
to secure mobile device portability through rapid and secure authentication using
mobile devices such as mobile phones or pads, without requiring users to purchase
or possess additional OTP tokens. To facilitate diverse and wide use of services in
the future, the service should be continuous and stable and, above all, security
should be considered most important in order to preserve the advanced portability
and user accessibility that are the strengths of VDI.
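The abstract does not specify the exact OTP scheme; a common building block for mobile OTP is the HMAC-based one-time password of RFC 4226, extended to time-based OTP in RFC 6238. The sketch below uses only that standard construction and is offered for illustration, not as the authors' implementation.

```python
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 time-based OTP: the counter is the current time interval."""
    t = int(time.time() if now is None else now)
    return hotp(secret, t // step, digits)
```

Because the counter is derived from the clock, the phone and the authentication server independently compute the same short-lived code from a shared secret, which is what removes the need for a separate OTP token.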
Abstract: Distributed wireless sensor networks consist of several
nodes scattered over an area of interest. Those sensors have as
their only power supply a pair of batteries that must let them live
up to five years without substitution. That is why it is necessary
to develop power-aware algorithms that can save battery lifetime as
much as possible. In this document, a review of power-aware
design for sensor nodes is presented. As examples of
implementations, some resource and task management, communication,
topology control and routing protocols are described.
Abstract: Internet computer games are becoming more and more
attractive within the context of technology-enhanced learning.
Educational games as quizzes and quests have gained significant
success in appealing and motivating learners to study in a different
way and provoke steadily increasing interest in new methods of
application. Board games are a specific group of games in which
figures are manipulated in a competitive play mode, with race
conditions, on a surface according to predefined rules. The article
presents a new, formalized model of traditional quizzes, puzzles and
quests expressed as multimedia board games, which facilitates the
construction process of such games. The authors provide different
examples of quizzes and their models in order to demonstrate that
the model is quite general and supports not only quizzes, mazes and
quests but also any set of teaching activities. The execution
process of such models is explained, as well as how they can be
useful for the creation and delivery of adaptive e-learning
courseware.
Abstract: With the movement of power systems toward restructuring, along with factors such as environmental pollution, the problems of transmission expansion, and advancements in the construction technology of small generation units, it is expected that small units such as wind turbines, fuel cells, photovoltaics, etc., which most of the time connect to distribution networks, will play a very essential role in the electric power industry. With the increasing use of small generation units, the management of distribution networks should be reviewed. The goal of this paper is to present a new method for the optimal management of active and reactive power in distribution networks with regard to the costs pertaining to various types of dispersed generation, capacitors, and the cost of electric energy obtained from the network. In other words, this method endeavors to select optimal sources of active and reactive power generation and control equipment, such as dispersed generators, capacitors, under-load tap-changer transformers and substations, in a way that, firstly, the related costs are minimized and, secondly, technical and physical constraints are respected. Because the optimal management of distribution networks is an optimization problem with continuous and discrete variables, a new evolutionary method based on the Ant Colony Algorithm has been applied. The method is tested on two cases containing 23 and 34 buses, and the simulation results are shown in later sections.