Abstract: This study investigates the suitability of using plastic,
such as polyethylene terephthalate (PET), as a partial replacement of
natural coarse and fine aggregates (for example, brick chips and
natural sand) to produce lightweight concrete for load bearing
structural members. The plastic coarse aggregate (PCA) and plastic
fine aggregate (PFA) were produced from melted polyethylene
terephthalate (PET) bottles. Tests were conducted using three
different water–cement (w/c) ratios, namely 0.42, 0.48, and 0.57,
with PCA and PFA used as 50% replacements of the coarse and
fine aggregate, respectively. The fresh and hardened properties of
concrete have been compared for natural aggregate concrete (NAC),
PCA concrete (PCC) and PFA concrete (PFC). The compressive
strength of concrete at 28 days varied with the water–cement ratio for
both PCC and PFC. Of the two, PFC showed both the highest
compressive strength (23.7 MPa), at a w/c ratio of 0.42, and the
lowest (13.7 MPa), at 0.57. A significant reduction in concrete
density was observed mainly for the PCC samples, which ranged
between 1924 and 1977 kg/m³. With increasing water–cement ratio,
PCC achieved higher workability compared with both NAC and PFC.
It was found that concrete containing either PCA or PFA achieved
the compressive strength required for structural use as a partial
replacement of natural aggregate, but PCA is the most suitable for
obtaining the lower density desired of lightweight concrete.
Abstract: The biodegradable polymer family of
polyhydroxyalkanoates (PHAs) is an interesting substitute for
conventional fossil-based plastics. However, the manufacturing and environmental
impacts associated with their production via intracellular bacterial
fermentation are strongly dependent on the raw material used and on
energy consumption during the extraction process, limiting their
potential for commercialization. Industrial wastewater is studied in
this paper as a promising alternative feedstock for waste valorization.
Based on results from laboratory and pilot-scale experiments, a
conceptual process design, techno-economic analysis and life cycle
assessment are developed for the large-scale production of the most
common type of polyhydroxyalkanoate, polyhydroxybutyrate (PHB).
Intracellular PHB is obtained via fermentation by the
microbial community present in industrial wastewater, and the
downstream processing is based on chemical digestion with
surfactant and hypochlorite. The economic potential and
environmental performance results help identify bottlenecks and the
best opportunities to scale up the process prior to industrial
implementation. The outcome of this research indicates that the
fermentation of wastewater into PHB offers advantages over
traditional PHA production from sugars, because the raw material
adds no environmental burden or financial cost to the bioplastic
production process. Nevertheless, process optimization
is still required to compete with petrochemical counterparts.
Abstract: Fluid viscous damping systems are well suited for
many air vehicles subjected to shock and vibration. These damping
systems work on the principle of throttling a viscous fluid through an
orifice to create a large pressure difference between the compression
and rebound chambers and thereby obtain the required damping force.
One application of such systems is in aircraft doors, to counteract the
door's velocity and stop it safely. In emergency situations such as a
crash or emergency landing, where the door does not open easily,
possibly because of unusual tilting of the fuselage, obstacles, or
intruding debris obstructing the moving parts of the door, such a
system can be combined with other systems to provide the force
needed to open the door forcefully while also stopping it securely
within the required time, i.e. less than 8 seconds. In the present
study, a hydraulic system called a snubber, together with an actuator
and a gas bottle assembly, collectively known as the emergency power
assist system (EPAS), is designed, built and experimentally studied
to determine the angular velocity, damping force and time required
to open the door effectively.
Whenever needed, the gas pressure from the bottle is released to
actuate the actuator and at the same time pull the snubber’s piston to
operate the emergency opening of the door. The EPAS installed in
the suspension arm of the aircraft door is studied by explicitly
varying parameters such as the orifice size, oil level, oil viscosity,
and the bypass valve gap and spring of the snubber, at varying
temperatures, to arrive at the optimum design case. A comparative
analysis of the EPAS across several cases is carried out and
conclusions are drawn. It is found that during an emergency, the door
opening time and angular velocity show significant improvement over
the previous design when a snubber with a 0.3 mm piston-and-shaft
orifice and a 0.5 mm bypass valve gap with its original spring is used.
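The orifice-throttling principle described above can be sketched numerically. The following is a minimal illustration, not the paper's model: it applies the standard orifice-flow relation ΔP = ρQ²/(2·Cd²·A_o²), with assumed values for oil density, discharge coefficient and piston geometry, and the 0.3 mm orifice size mentioned in the abstract.

```python
import math

def damping_force(Q, piston_area, orifice_d, rho=870.0, Cd=0.7):
    """Estimate the viscous damping force produced by orifice throttling.

    Q            -- volumetric flow rate through the orifice (m^3/s)
    piston_area  -- effective piston area (m^2)
    orifice_d    -- orifice diameter (m)
    rho          -- oil density (kg/m^3), assumed typical hydraulic oil
    Cd           -- discharge coefficient (dimensionless), assumed
    """
    a_orifice = math.pi * orifice_d ** 2 / 4.0
    # Pressure difference between the compression and rebound chambers
    delta_p = rho * Q ** 2 / (2.0 * Cd ** 2 * a_orifice ** 2)
    return delta_p * piston_area

# Illustrative case: a 20 mm piston forcing oil through a 0.3 mm orifice
piston_area = math.pi * 0.020 ** 2 / 4.0
velocity = 0.05                     # assumed piston velocity (m/s)
Q = piston_area * velocity          # flow forced through the orifice
print(damping_force(Q, piston_area, 0.0003))
```

The quadratic dependence on Q shows why varying the orifice size, as the study does, changes the damping force so strongly.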
Abstract: A characteristic requirement for producing rectangular
bottles is a uniform thickness of the plastic bottle wall. Die shaping
is a good technique for controlling the wall thickness of bottles. The
finite element method (FEM) for blowing a parison into a rectangular
bottle was applied to reduce the plastic wasted by trial-and-error die
shaping and parison control. An artificial intelligence (AI) approach
comprising an artificial neural network and a genetic algorithm was
selected to optimize the die gap shape from the FEM results. The AI
technique could find a die gap shape suitable for parison blow
molding that does not depend on the parison control method,
producing rectangular bottles with uniform walls. In particular, this
application can be used with inexpensive blow molding machines
without a parison controller and will therefore reduce production
costs in the bottle blow molding process.
Abstract: Company managers are constantly looking for
opportunities to succeed in today's fiercely competitive market.
Maintaining a place among today's successful companies, or coming
up with a revolutionary business idea, is much more difficult than
before. Every new or improved method, tool, or approach that can
improve the functioning of business processes, or even of the entire
system, is worth checking and verifying. The use of simulation in
the design of manufacturing systems and their management in
practice is one way, free of added risk, to find the optimal
parameters of manufacturing processes and systems. The paper
presents an example of the use of simulation to solve a bottleneck
problem in a specific company.
Abstract: In-memory database systems are becoming popular
due to the availability and affordability of sufficiently large RAM and
processors in modern high-end servers with the capacity to manage
large in-memory database transactions. While fast and reliable
in-memory systems are still being developed to overcome cache misses,
CPU/IO bottlenecks and distributed transaction costs, disk-based data
stores still serve as the primary persistence layer. In addition, with the
recent growth in multi-tenancy cloud applications and associated
security concerns, many organisations consider the trade-offs and
continue to require the fast and reliable transaction processing of
disk-based database systems as an available choice. For these
organizations, the only way of increasing throughput is by improving
the performance of disk-based concurrency control. This warrants a
hybrid database system with the ability to selectively apply
enhanced disk-based data management within the context of
in-memory systems, which would help improve overall throughput.
The general view is that in-memory systems substantially
outperform disk-based systems. We question this assumption and
examine how a modified variation of access invariance that we call
enhanced memory access (EMA) can be used to allow very high
levels of concurrency in the pre-fetching of data in disk-based
systems. We demonstrate how this prefetching in disk-based systems
can yield close to in-memory performance, which paves the way for
improved hybrid database systems. This paper proposes a novel EMA
technique and presents a comparative study between disk-based EMA
systems and in-memory systems running on hardware configurations
of equivalent power in terms of the number of processors and their
speeds. The results of the experiments conducted clearly substantiate
that, when used in conjunction with all concurrency control
mechanisms, EMA can increase the throughput of disk-based systems
to levels quite close to those achieved by in-memory systems. The
promising results of this work show that enhanced disk-based
systems help improve hybrid data management within the broader
context of in-memory systems.
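The abstract does not spell out how EMA prefetching works, but the underlying idea of exploiting access invariance, warming a buffer pool with a transaction's predicted access set before its critical section, can be sketched as follows; the class and method names are illustrative assumptions, not the paper's API.

```python
class BufferPool:
    """Toy buffer pool: `disk` maps page ids to contents (simulated)."""

    def __init__(self, disk):
        self.disk = disk
        self.cache = {}

    def prefetch(self, page_ids):
        """Warm the cache with a transaction's predicted access set.
        Access invariance suggests a re-executed transaction tends to
        touch the same pages as its first run, so prediction is cheap."""
        for pid in page_ids:
            if pid not in self.cache:
                self.cache[pid] = self.disk[pid]   # simulated disk read

    def read(self, pid):
        # A cache hit avoids the slow disk path while locks are held.
        return self.cache.get(pid, self.disk[pid])

disk = {1: "row-a", 2: "row-b", 3: "row-c"}
pool = BufferPool(disk)
pool.prefetch([1, 3])              # before the transaction begins
print(pool.read(1), pool.read(3))  # served from memory
```

The point of the sketch is only that slow I/O is moved outside the concurrency-critical window, which is how near in-memory throughput could be approached.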
Abstract: Frequent pattern mining is the process of finding a
pattern (a set of items, subsequences, substructures, etc.) that occurs
frequently in a data set. It was proposed in the context of frequent
itemsets and association rule mining. Frequent pattern mining is used
to find inherent regularities in data. What products were often
purchased together? Its applications include basket data analysis,
cross-marketing, catalog design, sale campaign analysis, Web log
(click stream) analysis, and DNA sequence analysis. However, one of
the bottlenecks of frequent itemset mining is that as the data grow,
the amount of time and resources required to mine them increases at
an exponential rate. In this investigation a new algorithm is proposed
which can be used as a pre-processor for frequent itemset mining.
FASTER (FeAture SelecTion using Entropy and Rough sets) is a
hybrid pre-processor algorithm which utilizes entropy and rough sets
to carry out record reduction and feature (attribute) selection,
respectively. FASTER can speed up frequent itemset mining by a
factor of 3.1 compared with the original algorithm while maintaining
an accuracy of 71%.
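The abstract describes FASTER only at a high level. The entropy half of such a pre-processor might look like the following sketch, which drops near-constant attributes before itemset mining; the rough-set record-reduction step is omitted, and the threshold is an assumed parameter.

```python
import math
from collections import Counter

def entropy(column):
    """Shannon entropy of one attribute's value distribution."""
    counts = Counter(column)
    n = len(column)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def select_features(records, threshold):
    """Keep attributes whose entropy exceeds a threshold; near-constant
    attributes carry little information for itemset mining."""
    cols = list(zip(*records))
    return [i for i in range(len(cols)) if entropy(cols[i]) > threshold]

data = [
    ("milk",  "yes", "a"),
    ("milk",  "yes", "b"),
    ("bread", "yes", "a"),
    ("beer",  "yes", "b"),
]
print(select_features(data, 0.5))   # → [0, 2]: attribute 1 is constant
```

Shrinking the attribute set this way is what lets the downstream miner explore exponentially fewer candidate itemsets.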
Abstract: Traditional Wireless Sensor Networks (WSNs) generally use static sinks to collect data from the sensor nodes via multi-hop forwarding. The network therefore suffers from problems such as long message relay times and the bottleneck problem, which reduce its performance.
Many approaches have been proposed to prevent this problem with the help of a mobile sink that collects the data from the sensor nodes, but these approaches still suffer from the buffer overflow problem due to the limited memory of sensor nodes. This paper proposes an energy-efficient data-gathering scheme which overcomes the buffer overflow problem. The proposed scheme creates a virtual grid structure of heterogeneous nodes and is designed for sensor nodes with variable sensing rates. Every node finds its buffer overflow time, and cluster heads are elected on this basis. A controlled traversing approach is used by the proposed scheme to transmit data to the sink. The effectiveness of the proposed scheme is verified by simulation.
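The buffer-overflow-time computation at the heart of the scheme can be sketched as follows. The election rule used here (per grid cell, pick the node whose buffer fills last) and the field names are illustrative assumptions, since the abstract does not give the exact formula.

```python
def overflow_time(buffer_size, occupied, sensing_rate):
    """Seconds until a node's buffer overflows at its sensing rate."""
    return (buffer_size - occupied) / sensing_rate

def elect_cluster_head(nodes):
    """Assumed rule: elect the node whose buffer will overflow last,
    so it can hold its cell's data longest before the mobile sink
    must visit."""
    return max(nodes,
               key=lambda n: overflow_time(n["buf"], n["used"], n["rate"]))

# One grid cell of heterogeneous nodes with variable sensing rates
cell = [
    {"id": "n1", "buf": 512, "used": 500, "rate": 4.0},   # 3 s left
    {"id": "n2", "buf": 512, "used": 128, "rate": 2.0},   # 192 s left
    {"id": "n3", "buf": 512, "used": 256, "rate": 8.0},   # 32 s left
]
print(elect_cluster_head(cell)["id"])   # → n2
```

Re-running the election as buffers fill is what would keep the traversal schedule ahead of any node's overflow deadline.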
Abstract: Growing demand for gas has rekindled a debate on security of gas supply, owing to supply interruptions, increasing gas prices, cross-border bottlenecks and a growing reliance on imports over longer distances. Security of supply is defined mostly as an infrastructure package that satisfies the N-1 criterion. In the case of Estonia, Finland, Latvia and Lithuania, all of the gas infrastructure is built to supply natural gas from one single supplier, Russia. In 2012 almost 100% of the natural gas in the Eastern Baltic Region was supplied by Gazprom. Under such circumstances the N-1 infrastructure criterion does not guarantee security of supply. For the Eastern Baltic Region, an assessment of the risk of gas supply disruption has been worked out by applying the method of risk scenarios. Various risks must be tackled in the Eastern Baltic States to improve security of supply, such as single-supplier risk, physical infrastructure risk, the regulatory gap, fair pricing and competition. The objective of this paper is to evaluate the energy security of the Eastern Baltic Region within the framework of the European Union’s policies and to make recommendations on how to better guarantee the energy security of the region.
Abstract: The Automated Teller Machine (ATM) has become an important tool for commercial banks, and bank customers have come to depend on and trust ATMs to conveniently meet their banking needs. Although the overwhelming advantages of the ATM cannot be over-emphasized, its alarming fraud rate has become a bottleneck to its full adoption in Nigeria. This study examined the menace of ATM fraud in society and the cost to banks of running ATM services in the country. The researcher developed a prototype of an enhanced ATM authentication scheme using Short Message Service (SMS) verification. The developed prototype was tested by ten (10) respondents who are users of ATM cards in the country, and the data collected were analyzed using the Statistical Package for the Social Sciences (SPSS). Based on the results of the analysis, it is envisaged that the developed prototype will go a long way towards reducing the alarming rate of ATM fraud in Nigeria.
Abstract: Data replication in data grid systems is one of the important solutions for improving availability, scalability, and fault tolerance. However, this technique also raises issues such as maintaining replica consistency. Moreover, as grid environments are very dynamic, some nodes can become more heavily loaded than others and eventually turn into a bottleneck. The main idea of our work is to propose a solution that combines replica consistency maintenance with a dynamic load balancing strategy to improve access performance in a simulated grid environment.
Abstract: Support vector clustering (SVC) is an important kernel-based clustering algorithm with many applications. It has two main bottlenecks: the high computational cost and the labeling step. In this paper, we present a modified SVC method, named Grid–SVC, to improve the computational performance of the original algorithm. First we normalize, and then we partition the interval over which SVC operates, using a novel grid-based clustering algorithm. The algorithm partitions the intervals based on the density function of the data set and then applies the Cartesian product to build multi-dimensional grids. Having eliminated many outliers and much of the noise in this preprocessing step, we apply an improved SVC method to each grid cell in parallel. The experimental results show improvements in both time complexity and accuracy.
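As a rough illustration of the grid-partitioning preprocess described above (not the authors' algorithm: their density-based interval splitting is simplified here to uniform intervals), one can bin each axis and form multi-dimensional cells as the Cartesian product of the 1-D intervals, then keep only dense cells for clustering.

```python
import numpy as np

def grid_partition(points, n_cells):
    """Split each axis into n_cells intervals; a point's cell is the
    tuple of its per-axis interval indices (the Cartesian product of
    the 1-D partitions). Sparse cells can then be dropped as noise."""
    dims = points.shape[1]
    edges = [np.linspace(points[:, d].min(), points[:, d].max(),
                         n_cells + 1) for d in range(dims)]
    cells = {}
    for p in points:
        idx = tuple(min(np.searchsorted(edges[d], p[d], side="right") - 1,
                        n_cells - 1) for d in range(dims))
        cells.setdefault(idx, []).append(p)
    return cells

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))
cells = grid_partition(pts, 4)
# Drop sparse cells (assumed density threshold) before per-cell SVC
dense = {k: v for k, v in cells.items() if len(v) >= 5}
print(len(cells), len(dense))
```

Each dense cell could then be clustered independently, which is what makes the per-grid SVC runs parallelizable.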
Abstract: In this paper, the detection and tracking of the face, mouth, hands and medication bottles in the context of camera-based medication intake monitoring is presented. The aim is to recognize medication intake by the elderly in their home setting in order to avoid inappropriate use. Background subtraction is used to isolate moving objects, and then skin and bottle segmentation is performed in the normalized RGB color space. We use a minimum-displacement-distance criterion to track skin color regions and the R/G ratio to detect the mouth. The color-labeled medication bottles are simply tracked based on the color space distance to their mean color vector. For the recognition of medication intake, we propose a three-level hierarchical approach, which uses activity patterns to recognize the normal medication intake activity. The proposed method was tested with three persons, with different medication intake scenarios, and gave an overall precision of over 98%.
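The color-space operations mentioned above, normalized-RGB skin segmentation and an R/G mouth cue, can be sketched as follows. The threshold values are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def normalize_rgb(image):
    """Convert an RGB image to the normalized rg color space, which
    reduces sensitivity to illumination changes."""
    img = image.astype(float)
    s = img.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                       # avoid division by zero
    return img / s

def skin_mask(image, r_range=(0.36, 0.46), g_range=(0.28, 0.36)):
    """Threshold normalized r and g to segment skin-colored pixels
    (threshold ranges here are assumed, not the paper's values)."""
    norm = normalize_rgb(image)
    r, g = norm[..., 0], norm[..., 1]
    return ((r_range[0] <= r) & (r <= r_range[1]) &
            (g_range[0] <= g) & (g <= g_range[1]))

def mouth_score(image, mask):
    """Mean R/G ratio over a candidate region: lips are redder than
    surrounding skin, so a higher ratio suggests the mouth."""
    img = image.astype(float)
    g = np.where(img[..., 1] == 0, 1.0, img[..., 1])
    return (img[..., 0] / g)[mask].mean()

# Tiny example: one skin-toned pixel and one bluish non-skin pixel
img = np.array([[[100, 80, 70], [50, 50, 150]]], dtype=np.uint8)
print(skin_mask(img))        # first pixel is skin-toned, second is not
```

Tracking would then link the resulting skin regions frame to frame by minimum displacement distance, as the abstract describes.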
Abstract: This paper presents a case study that uses process-oriented
simulation to identify bottlenecks in the service delivery
system in an emergency department of a hospital in the United Arab
Emirates. Using results of the simulation, response surface models
were developed to explain patient waiting time and the total time
patients spend in the hospital system. Results of the study could be
used as a service improvement tool to help hospital management in
improving patient throughput and service quality in the hospital
system.
Abstract: A Simultaneous Multithreading (SMT) Processor is
capable of executing instructions from multiple threads in the same
cycle. SMT was in fact introduced as a powerful extension of the
superscalar architecture to increase processor throughput.
Simultaneous Multithreading is a technique that permits multiple
instructions from multiple independent applications or threads to
compete for limited resources each cycle. Since the fetch unit has been
identified as one of the major bottlenecks of the SMT architecture,
several fetch schemes have been proposed in prior work to enhance
fetching efficiency and overall performance.
In this paper, we propose a novel fetch policy called the queue
situation identifier (QSI), which counts certain long-latency
instructions of each thread every cycle and then selects which threads
to fetch in the next cycle. Simulation results show that in the best
case our fetch policy can achieve a 30% speedup and can also reduce
the level-1 data cache miss rate.
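A toy version of such a counting-based fetch policy might look like the following. What counts as a long-latency instruction and the exact selection rule are assumptions, since the abstract leaves them unspecified.

```python
from collections import defaultdict

class QSIFetchPolicy:
    """Sketch of a queue-situation-identifier style policy: track
    in-flight long-latency instructions (e.g. cache misses) per thread
    and fetch next cycle from the threads with the fewest, so threads
    likely to clog the issue queues are deprioritized."""

    def __init__(self, n_threads):
        self.pending = defaultdict(int, {t: 0 for t in range(n_threads)})

    def note_long_latency(self, thread, delta=1):
        # +1 when a long-latency op dispatches, -1 when it completes.
        self.pending[thread] += delta

    def select(self, k=2):
        """Return the k threads to fetch from in the next cycle."""
        order = sorted(self.pending, key=lambda t: self.pending[t])
        return order[:k]

policy = QSIFetchPolicy(4)
policy.note_long_latency(0)
policy.note_long_latency(0)   # thread 0 has two misses outstanding
policy.note_long_latency(2)   # thread 2 has one
print(policy.select())        # → [1, 3]
```

This mirrors the spirit of ICOUNT-style fetch heuristics: keep fetch bandwidth away from stalled threads so ready threads fill the pipeline.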
Abstract: In online context, the design and implementation of
effective remote laboratories environment is highly challenging on
account of hardware and software needs. This paper presents the
remote laboratory software framework modified from the iLab Shared
Architecture (ISA). The ISA is a framework which enables students to
remotely access and control experimental hardware over the internet.
The need for remote laboratories arose from the problems imposed by
traditional laboratories: the high cost of laboratory equipment and
the scarcity of space and of technical personnel, together with
restricted university budgets, create a significant bottleneck in
building the required laboratory experiments. The solution to these
problems is to build web-accessible laboratories. Remote laboratories
allow students and educators to interact with real laboratory
equipment located anywhere in the world at any time. Recently, many
universities and other educational institutions, especially in
third-world countries, have relied on simulations because they cannot
afford the experimental equipment their students require. Remote
laboratories enable users to get real data from real-time hands-on
experiments. To implement many remote laboratories, the system
architecture should be flexible, understandable and easy to
implement, so that different laboratories with different hardware can
be deployed easily. The modifications were made to enable developers
to add more equipment to the ISA framework and to attract new
developers to build more online laboratories.
Abstract: Application-Specific Instruction-set Processors (ASIPs)
have become an important design choice for embedded systems due to
their runtime flexibility, which cannot be provided by custom ASIC
solutions. One major bottleneck in maximizing ASIP performance is
the limited data bandwidth between the General Purpose Register File
(GPRF) and the Application-Specific Instructions (ASIs). This paper
presents Implicit Registers (IRs) to provide the desired data
bandwidth. An ASI input/output model is proposed to formulate the
overhead of the additional data transfers between the GPRF and the
IRs; an IR allocation algorithm is then used to achieve better
performance by minimizing the number of extra data transfer
instructions. The experimental results show a speedup of up to 3.33x
compared to the results without IRs.
Abstract: Today, building automation is advancing from simple
monitoring and control tasks for lighting and heating towards more
and more complex applications that require a dynamic perception
and interpretation of the different scenes occurring in a building.
Current approaches cannot handle these newly emerging demands. In
this article, a bionically inspired approach for multimodal, dynamic
scene perception and interpretation is presented, based on
neuroscientific and neuro-psychological research findings about the
perceptual system of the human brain. The approach builds on data
from diverse sensory modalities processed in a so-called
neuro-symbolic network. With its parallel structure, and with basic
elements that serve as information processing and storage units at
the same time, it provides a very efficient method for scene
perception that overcomes the problems and bottlenecks of classical
dynamic scene interpretation systems.
Abstract: This paper studies an intelligent glass bottle inspector
based on machine vision that replaces manual inspection. The system
structure is illustrated in detail. The paper presents a method based
on the watershed transform to segment possibly defective regions and
extract features of the bottle wall by rules. The wavelet transform is
then used to extract features of the bottle finish from images. After
feature extraction, a fuzzy support vector machine ensemble is put
forward as the classifier. To ensure that the fuzzy support vector
machines have good classification ability, a GA-based ensemble
method is used to combine the several fuzzy support vector machines.
The experiments demonstrate that, using this inspector to inspect
glass bottles, the accuracy rate can reach above 97.5%.
Abstract: In order to maximize the efficiency of an information management platform and to assist in decision making, the collection, storage and analysis of performance-relevant data have become of fundamental importance. This paper addresses the merits and drawbacks of the OLAP paradigm for efficiently navigating large volumes of performance measurement data hierarchically. System managers or database administrators navigate through adequately (re)structured measurement data to detect performance bottlenecks, identify causes of performance problems, or assess the impact of configuration changes on the system and its representative metrics. Of particular importance is finding the root cause of an imminent problem threatening the availability and performance of an information system. Leveraging OLAP techniques, in contrast to traditional static reporting, this can be accomplished within a moderate amount of time and with little processing complexity. It is shown how OLAP techniques can help improve the understandability and manageability of measurement data and hence improve the whole performance analysis process.
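A minimal example of the roll-up/drill-down navigation described above, here using pandas; the measurement cube, its column names and values are invented for illustration.

```python
import pandas as pd

# Tiny performance-measurement cube: drill down from host level to
# process level to localize a response-time bottleneck.
df = pd.DataFrame({
    "host":    ["db1", "db1", "db1", "app1", "app1"],
    "process": ["writer", "reader", "reader", "web", "web"],
    "resp_ms": [480, 35, 40, 22, 25],
})

# Roll-up: aggregate mean response time per host
by_host = df.groupby("host")["resp_ms"].mean()

# Drill-down: descend into the slowest host to find the offending process
slow_host = by_host.idxmax()
by_process = (df[df["host"] == slow_host]
              .groupby("process")["resp_ms"].mean())
print(slow_host, by_process.idxmax())   # → db1 writer
```

The same two operations, aggregate at one hierarchy level, then expand the worst member, are what an OLAP front end repeats interactively across the dimension hierarchy when hunting a root cause.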