Abstract: The growing influence of service industries has
prompted greater attention to service operations management.
However, service managers often have difficulty articulating the
actual effects of their service innovations. In particular, the
performance evaluation of service innovation typically involves
uncertain and imprecise data. This paper presents a 2-tuple fuzzy
linguistic computing approach to handling heterogeneous information
and avoiding information loss during the integration of subjective
evaluations. The proposed method, set in a group decision-making
scenario, assists business managers in measuring service innovation
performance; it manages the integration of heterogeneous
information and effectively avoids information loss.
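The 2-tuple linguistic model referred to above is commonly the Herrera and Martinez representation, in which a numeric value β on a scale of labels s_0..s_g is encoded as a label index plus a symbolic translation (s_i, α), so that aggregation stays on the linguistic scale without rounding loss. A minimal sketch, assuming that standard model (the abstract does not give the paper's exact operators):

```python
# Sketch of the 2-tuple linguistic representation (assumed to be the
# Herrera-Martinez model; names and rounding details are illustrative).
# A value beta in [0, g] on a scale of g+1 labels s_0..s_g is written
# as (i, alpha) with i = round(beta) and alpha = beta - i.

def to_two_tuple(beta):
    """Delta: convert a numeric value beta into a 2-tuple (i, alpha)."""
    i = int(round(beta))
    return i, round(beta - i, 10)

def from_two_tuple(i, alpha):
    """Inverse Delta: recover the numeric value from a 2-tuple."""
    return i + alpha

def aggregate(two_tuples):
    """Arithmetic mean of 2-tuples, returned as a 2-tuple.

    Because the fractional part alpha is kept, no information is lost
    when the mean falls between two labels."""
    beta = sum(from_two_tuple(i, a) for i, a in two_tuples) / len(two_tuples)
    return to_two_tuple(beta)
```

For example, averaging the assessments (s_2, 0), (s_3, 0) and (s_3, 0.4) gives β = 2.8, represented exactly as (s_3, -0.2) rather than being rounded to a plain label.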
Abstract: The world's population continues to grow at a quarter of a million people per day, increasing energy consumption and confronting the world with an energy crisis. In response, the principles of renewable energy have gained popularity, and considerable advances have been made in developing wind and solar energy farms across the world. These farms, however, are not enough to meet the world's energy requirements, which has attracted investors to procure new substitute sources of energy. Among these, extraction of energy from ocean waves is considered the best option: the world's oceans contain enough energy to meet global demand, and significant advances in design and technology are being made to turn waves into a continuous source of energy. One major hurdle in launching wave energy devices in a developing country like Pakistan is the initial cost. A simple, reliable and cost-effective wave energy converter (WEC) is required to meet the nation's energy needs. This paper presents a novel design, proposed by team SAS, for harnessing wave energy. The paper has three major sections. The first gives a brief and concise view of ocean wave creation, propagation and the energy carried by waves. The second explains the design of SAS-2, in which a gear-chain mechanism transfers energy from the buoy to a rotary generator. The third explains the manufacturing of a scaled-down model of SAS-2; many modifications were made during the troubleshooting stage. The design of SAS-2 is simple and requires very little maintenance. SAS-2 is producing electricity at Clifton, and its initial cost is very low. This establishes SAS-2 as a cost-effective and reliable means of harnessing wave energy for developing countries.
Abstract: This study aims to demonstrate the quantification of
peptides based on isotope dilution surface enhanced Raman
scattering (IDSERS). SERS spectra of phenylalanine (Phe), leucine
(Leu) and two peptide sequences TGQIFK (T13) and
YSFLQNPQTSLCFSESIPTPSNR (T6) as part of the 22-kDa
human growth hormone (hGH) were obtained on Ag-nanoparticle
covered substrates. On the basis of the dominant Phe and Leu
vibrational modes, precise partial least squares (PLS) prediction
models were built enabling the determination of unknown T13 and
T6 concentrations. hGH was detected at its physiological
concentration, demonstrating the feasibility of protein
quantification.
Abstract: The aim of this paper is to present a three-step
methodology for forecasting supply chain demand. In the first step,
various data mining techniques are applied to prepare the data for
the forecasting models. In the second, modeling step, an artificial
neural network and a support vector machine are presented after
defining the Mean Absolute Percentage Error (MAPE) index for
measuring forecast error. The structure of the artificial neural
network is selected based on previous researchers' results, and its
accuracy is increased by using sensitivity analysis. The best
forecast from the classical forecasting methods (Moving Average,
Exponential Smoothing, and Exponential Smoothing with Trend) is
obtained from the prepared data and compared with the results of
the support vector machine and the proposed artificial neural
network. The results show that the artificial neural network
forecasts more precisely than the other methods. Finally, the
stability of the forecasting methods is analyzed using raw data,
and the effectiveness of the clustering analysis is measured.
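The MAPE index used above as the error measure is the standard mean absolute percentage error, MAPE = (100/n) Σ |(A_t - F_t)/A_t|. A small sketch of that standard definition (the abstract does not show the paper's exact variant):

```python
# MAPE (Mean Absolute Percentage Error): the standard index used to
# compare forecasting models against actual demand.

def mape(actual, forecast):
    """Mean absolute percentage error, in percent.

    Actual values must be nonzero, since each term divides by them."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))
```

For instance, actual demand [100, 200] against forecasts [110, 190] gives errors of 10% and 5%, i.e. a MAPE of 7.5%.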
Abstract: The turbulent mixing of coolant streams of different
temperature and density can cause severe temperature fluctuations in
piping systems in nuclear reactors. In certain periodic contraction
cycles these conditions lead to thermal fatigue. The resulting aging
effect prompts investigation in how the mixing of flows over a sharp
temperature/density interface evolves. To study the fundamental
turbulent mixing phenomena in the presence of density gradients,
isokinetic (shear-free) mixing experiments are performed in a square
channel with Reynolds numbers ranging from 2,500 to 60,000.
Sucrose is used to create the density difference. A Wire Mesh Sensor
(WMS) is used to determine the concentration map of the flow in the
cross section. The mean interface width as a function of velocity,
density difference and distance from the mixing point is analyzed
using traditional methods developed for atmospheric/oceanic
stratification analyses. A definition of the
mixing layer thickness more appropriate to thermal fatigue and based
on mixedness is devised. This definition shows that the thermal
fatigue risk assessed using simple mixing layer growth can be
misleading and why an approach that separates the effects of large
scale (turbulent) and small scale (molecular) mixing is necessary.
Abstract: While the problem-based learning (PBL) approach promotes unsupervised self-directed learning (SDL), many students experience difficulty juggling the roles of information recipient and information seeker. Logbooks have been used to assess trainee doctors, but not in other areas. This study aimed to determine the effectiveness of a logbook for assessing SDL during PBL sessions among first-year medical students. The logbook included a learning checklist and knowledge and skills components. Comparison of the baseline assessment of student performance in PBL with that at semester end, after the logbook intervention, showed significant improvements in student performance (31.5 ± 8 vs. 17.7 ± 4.4; p
Abstract: The paper reviews the relationship between spatial
and transportation planning in the Southern African Development
Community (SADC) region of Sub-Saharan Africa. It argues that
most urbanisation in the region has occurred since the 1950s
and, accordingly, urban development has been
profoundly and negatively affected by the (misguided) spatial and
institutional tenets of modernism. It demonstrates how a
considerable amount of the poor performance of these settlements
can be directly attributed to this. Two factors in particular about the
planning systems are emphasized: the way in which programmatic
land-use planning lies at the heart of both spatial and transportation
planning; and the way in which transportation and spatial planning
have been separated into independent processes. In the final
section, the paper identifies ways of improving the planning
system. Firstly, it identifies the performance qualities which
Southern African settlements should be seeking to achieve.
Secondly, it focuses on two necessary arenas of change: the need to
replace programmatic land-use planning practices with
structural-spatial approaches; and it makes a case for making urban
corridors a spatial focus of integrated planning, as a way of
beginning the restructuring and intensification of settlements
which are currently characterised by sprawl, fragmentation and
separation.
Abstract: Risk of infectious disease outbreaks is related to the
hygiene among the population. To assess the actual risks and modify
the relevant emergency procedures if necessary, a hygiene survey
was conducted among undergraduate students on the Rhodes
University campus. Soap was available to only 10.5% of the study
participants, and only 26.8% followed proper hygiene in relation to
food consumption. This combination increases the risk of infectious
disease outbreaks at the campus. Around 83.6% were willing to wash
their hands if soap was provided. Procurement and availability of
soap in undergraduate residences on campus should be improved, as
the total cost is estimated at only 2000 USD per annum. Awareness
campaigns about food-related hygiene and the need for regular handwashing
with soap should be run among Rhodes University students.
If successful, rates of respiratory and hygiene-related diseases will be
decreased and emergency health management simplified.
Abstract: This paper presents the results of empirical studies conducted in enterprises from the Podkarpackie Voivodeship (Poland). It shows the experiences of those enterprises in implementing and improving eco-innovation management, i.e. a formal Environmental Management System (EMS). The study presents the expected and obtained internal benefits resulting from a functioning EMS. The aim of this paper is to determine whether the benefits of implementing, operating and improving a formal EMS (based on the international standard ISO 14001) reported in international theoretical studies are confirmed by the effects of the enterprises' activities.
Abstract: This work focuses on the study of unburned carbon in
ash from the combustion of coal (and wastes) in eight combustion
tests: at three fluidised-bed power stations, at co-combustion of
coal and wastes (also in a fluidised bed), and at a bench-scale
unit simulating coal combustion in small domestic furnaces.
Attention is paid to the unburned carbon content of the bottom
ashes and fly ashes in these eight combustion tests and to the
morphology of the unburned carbon. The specific surface areas of
the coals, unburned carbon and ashes, and the relation between the
specific surface area of unburned carbon and the content of
volatile combustibles in the coal, were studied as well.
Abstract: Wireless sensor networks (WSN) are currently
receiving significant attention due to their unlimited potential. These
networks are used for various applications, such as habitat
monitoring, automation, agriculture, and security. Efficient node
energy utilization is one of the important performance factors in wireless
sensor networks because sensor nodes operate with limited battery
power. In this paper, we propose the MiSense hierarchical
cluster-based routing algorithm (MiCRA) to extend the lifetime of sensor
networks and to maintain a balanced energy consumption of nodes.
MiCRA is an extension of the HEED algorithm with two levels of
cluster heads. The performance of the proposed protocol has been
examined and evaluated through a simulation study. The simulation
results clearly show that MiCRA has a better performance in terms of
lifetime than HEED. Indeed, our proposed protocol MiCRA can
effectively extend the network lifetime without other critical
overheads or performance degradation. It has been noted that there
is about 35% of energy saving for MiCRA during the clustering
process and 65% energy savings during the routing process compared
to the HEED algorithm.
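MiCRA extends HEED with two levels of cluster heads. In HEED, a node's election probability scales with its residual energy (roughly CH_prob = C_prob · E_residual / E_max). A rough sketch of a two-level election in that spirit; MiCRA's actual rules, parameters and names are not given in the abstract, so everything below is an illustrative assumption:

```python
import random

# Illustrative two-level cluster-head election in the spirit of HEED,
# where election probability scales with residual energy. MiCRA's exact
# rules are not reproduced; constants and names are assumptions.

C_PROB = 0.05      # assumed initial cluster-head fraction
P_MIN = 1e-4       # lower bound on election probability

def ch_prob(e_residual, e_max, c_prob=C_PROB):
    """Election probability proportional to remaining battery energy."""
    return max(c_prob * e_residual / e_max, P_MIN)

def elect_two_levels(nodes, e_max, rng=random.random):
    """nodes: dict node_id -> residual energy.

    Level-1 heads are elected from all nodes; level-2 heads are then
    elected among the level-1 heads, forming the upper tier that
    forwards aggregated data toward the sink."""
    level1 = [n for n, e in nodes.items() if rng() < ch_prob(e, e_max)]
    level2 = [n for n in level1 if rng() < ch_prob(nodes[n], e_max)]
    return level1, level2
```

Nodes with more residual energy are thus more likely to take on the (energy-expensive) cluster-head roles at both levels, which is the balancing effect the abstract describes.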
Abstract: BACKGROUND: DIALIGN is a DNA/protein alignment tool
for performing pairwise and multiple alignments through the
comparison of gap-free segments (fragments) between sequence
pairs. An alignment of two sequences is a chain of fragments, i.e.
local gap-free pairwise alignments, with the highest total score.
METHOD: A new approach is defined in this article which relies on
using three-dimensional fragments, i.e. local three-way
alignments, in the alignment process instead of two-dimensional
ones. These three-dimensional fragments are gap-free alignments
consisting of equal-length segments belonging to three distinct
sequences. RESULTS: The results obtained show good improvements
over the performance of DIALIGN.
Abstract: Network layer multicast, i.e. IP multicast, even after
many years of research, development and standardization, has not
been deployed at large scale due to both technical (e.g. router
upgrades) and political (e.g. policy making and negotiation) issues.
Researchers looked for alternatives and proposed application/overlay
multicast where multicast functions are handled by end hosts, not
network layer routers. Member hosts wishing to receive multicast
data form a multicast delivery tree. The intermediate hosts in the tree
act as routers also, i.e. they forward data to the lower hosts in the
tree. Unlike IP multicast, where a router cannot leave the tree
until all members below it leave, in overlay multicast any member
can leave the tree at any time, partitioning the tree and
disrupting the data dissemination. All disrupted hosts then have to
rejoin the tree. This characteristic of overlay multicast makes the
multicast tree unstable and causes data loss and rejoin overhead.
In this paper, we propose that each node set its leaving time from
the tree and send join requests to a number of nodes in the tree. A
node in the tree rejects a request if its leaving time is earlier
than the requesting node's; otherwise it accepts. The requesting
node can then join at one of the accepting nodes. This makes the
tree more stable, as nodes join the tree according to their leaving
times, with the earliest-leaving nodes at the leaves of the tree.
Some intermediate nodes may not respect their leaving time and
leave earlier, thus disrupting the tree. For this case, we propose
a proactive recovery mechanism so that disrupted nodes can
immediately rejoin the tree at predetermined nodes. We show by
simulation that joining the multicast tree incurs less overhead and
that the recovery time of disrupted nodes is much lower than in
previous works.
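The join rule described above (a tree node accepts a requester only if the node's own leaving time is not earlier than the requester's) can be sketched as follows; the data structures and the tie-break among accepting nodes are illustrative assumptions, not taken from the paper:

```python
# Sketch of the leaving-time join rule for an overlay multicast tree.
# Accepting only requesters that leave no later than oneself pushes
# early-leaving nodes toward the leaves, so their departure disrupts
# fewer descendants.

def accepts(node_leave_time, requester_leave_time):
    """A tree node rejects any requester that outlives it."""
    return node_leave_time >= requester_leave_time

def choose_parent(candidates, requester_leave_time):
    """candidates: dict node_id -> leaving time.

    Among the nodes that accept, join the one that stays longest
    (a simple, assumed tie-break); return None if all reject."""
    accepting = {n: t for n, t in candidates.items()
                 if accepts(t, requester_leave_time)}
    if not accepting:
        return None
    return max(accepting, key=accepting.get)
```

For example, a node leaving at time 80 that probes nodes leaving at times 100, 50 and 200 is rejected by the first candidate's shorter-lived peer (50) and joins under the longest-lived one (200).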
Abstract: The major objective of this paper is to introduce a new method for selecting genes from DNA microarray data. As the selection criterion, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents tissue of a certain tumor stage and each node in the network represents a gene. From these networks we extract one tree per gene by a local decomposition of the correlation network. A tree represents the n-nearest-neighbor genes on its n-th level, measured by the Dijkstra distance, and hence gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes: those whose local neighborhood in the correlation networks changes most between normal and cancerous tissues. We find that the top-ranked genes are candidates suspected to be involved in tumor growth, indicating that our method captures essential information from the underlying DNA microarray data of cervical cancer.
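The local decomposition described above, which groups a root gene's neighbors by Dijkstra distance in the correlation network, can be sketched as follows; the graph encoding and the level definition are simplified assumptions:

```python
import heapq

# Sketch of extracting a gene's local neighborhood from a correlation
# network: Dijkstra distances from the root gene, with each "tree level"
# holding the genes at the next larger distance. The adjacency-dict
# encoding and the level grouping are simplified assumptions.

def dijkstra(graph, root):
    """graph: dict node -> {neighbor: edge weight}.

    Returns dict node -> shortest-path distance from root."""
    dist = {root: 0.0}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def neighborhood_levels(graph, root):
    """Group nodes by increasing Dijkstra distance from the root gene,
    giving the levels of the root's local-embedding tree."""
    dist = dijkstra(graph, root)
    levels = {}
    for node, d in dist.items():
        levels.setdefault(d, []).append(node)
    return [sorted(levels[d]) for d in sorted(levels)]
```

Comparing the level structures of two such trees rooted at the same gene (one per tissue) then quantifies how much that gene's local neighborhood has changed.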
Abstract: For on-line measurement and detection of partial
discharge (PD) signals, a total solution comprising an HFCT, an
A/D converter and a complete software package is proposed. The
software package includes compensation of the HFCT contribution,
filtering and noise reduction using the wavelet transform, and
soft calibration routines. The results show good performance and
high accuracy.
Abstract: The temperature rise in a transformer depends on a
variety of parameters, such as ambient temperature, output current
and the type of core. Even considering these parameters,
temperature rise estimation is still a complicated procedure. In
this paper, we present a new model based on a simple electrical
equivalent circuit. This method avoids the complications associated
with accurate estimation and is in very good agreement with
practice.
Abstract: In most cases, natural disasters lead to the
necessity of evacuating people. The quality of evacuation
management is dramatically improved by the use of information
provided by decision support systems, which become indispensable
in case of large scale evacuation operations. This paper presents a
best practice case study. In November 2007, officers from the
Emergency Situations Inspectorate “Crisana” of Bihor County,
Romania, participated in a cross-border evacuation exercise in
which 700 people were evacuated from the Netherlands to Belgium.
One of the main objectives of the exercise was to test four
different decision support systems. Afterwards, based on that
experience, a software system called TEVAC (Trans-Border
Evacuation) was developed in house by the experts of this
institution. This original
software system was successfully tested in September 2008, during
the deployment of the international exercise EU-HUROMEX 2008,
the scenario involving real evacuation of 200 persons from Hungary
to Romania. Based on the lessons learned and results, starting from
April 2009, the TEVAC software is used by all Emergency
Situations Inspectorates all over Romania.
Abstract: Using a neural network, we model an unknown function f from given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of a crisp neural network produce different values of the weight factors, which are directly affected by changes in different parameters. We propose that, for each neuron in the network, quasi-fuzzy weight sets (QFWS) can be obtained through repeated simulation of the crisp neural network. Such fuzzy weight functions may be applied where multivariate crisp input needs to be adjusted after iterative learning, as in claim amount distribution analysis. As real data is subject to noise and uncertainty, QFWS may help simplify such complex problems. Secondly, these QFWS provide a good initial solution for training fuzzy neural networks with reduced computational complexity.
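The QFWS idea, repeated training of the same crisp network from different initial weights followed by collecting the spread of each learned weight, can be sketched with a deliberately tiny one-neuron network; the network, the training loop and the interval summary below are illustrative assumptions, not the paper's construction:

```python
import random

# Illustrative QFWS collection: the same crisp model is trained
# repeatedly from random initial weights, and the spread of each learned
# weight across runs forms its weight set. The one-neuron linear model
# and hyperparameters are assumptions for demonstration only.

def train_once(data, lr=0.1, epochs=200, rng=None):
    """Fit y = w*x + b by per-sample gradient descent; return (w, b)."""
    rng = rng or random.Random()
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def quasi_fuzzy_weight_sets(data, runs=20, seed=0):
    """Repeat training from different initializations; return per-weight
    (min, max) intervals over all runs as a crude weight-set summary."""
    rng = random.Random(seed)
    samples = [train_once(data, rng=rng) for _ in range(runs)]
    ws, bs = zip(*samples)
    return (min(ws), max(ws)), (min(bs), max(bs))
```

For noiseless, exactly fittable data the intervals collapse toward a point; with noisy data they widen, which is the spread the abstract proposes to capture as a fuzzy set over each weight.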
Abstract: The notion of Next Generation Network (NGN) is
based on the Network Convergence concept which refers to
integration of services (such as IT and communication services)
over the IP layer. As the most popular implementation of Service Oriented
Architecture (SOA), Web Services technology is known as the basis
for service integration. In this paper, we present a platform to
deliver communication services as web services. We also implement
a sample service to show the simplicity of making composite web
and communication services using this platform. A Service Logic
Execution Environment (SLEE) is used to implement the
communication services. The proposed architecture is in agreement
with SOA and can be integrated with an Enterprise Service Bus to
form the basis of an NGN Service Delivery Platform (SDP).
Abstract: With advances in wireless networking, IEEE 802.16 WiMAX technology has been widely deployed for applications such as “last mile” broadband service, cellular backhaul, and high-speed enterprise connectivity. As a result, the military has employed WiMAX for many years as a high-speed wireless data-link connection because of its point-to-multipoint and non-line-of-sight (NLOS) capability. However, the risk of using WiMAX is a critical factor in some sensitive areas of military application, especially in ammunition manufacturing such as solid propellant rocket production. US DoD policy states that the following certification requirements must be met for WiMAX: electromagnetic effects on the environment (E3) and Hazards of Electromagnetic Radiation to Ordnance (HERO). This paper discusses the Recommended Power Densities and Safe Separation Distance (SSD) for HERO for WiMAX systems deployed at solid propellant rocket production facilities. This research found that WiMAX is safe to operate at close proximity to rocket production, based on the AF Guidance Memorandum immediately changing AFMAN 91-201.