Abstract: Six Sigma is a framework used to identify inefficiency, determine its causes, and implement the right improvements to overcome them. This paper presents the results of implementing Six Sigma to improve the piston assembly line in the Manufacturing Laboratory, Universitas Indonesia. The Six Sigma framework is used to analyze the significant factor of inefficiency that needs to be improved and that causes a bottleneck in the assembly line. After the analysis based on the Six Sigma framework was conducted, line balancing was chosen as the improvement method to overcome the causative factor of inefficiency: the differences in time between workstations that cause a bottleneck in the assembly line. After line balancing was applied to the piston assembly line, efficiency increased. This is shown by the decrease in Defects per Million Opportunities (DPMO) from 900,000 to 700,000, the increase in labor productivity from 0.0041 to 0.00742, the decrease in idle time from 121.3 seconds to 12.1 seconds, and the increase in output from 1 piston per 5 minutes to 3 pistons per 5 minutes.
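For reference, the DPMO figure quoted above is conventionally computed from the number of observed defects, the number of units inspected, and the number of defect opportunities per unit; a standard form (not spelled out in the abstract itself) is

\[ \text{DPMO} = \frac{\text{defects}}{\text{units} \times \text{opportunities per unit}} \times 10^{6}, \]

so the reported drop from 900,000 to 700,000 corresponds to the defect rate per opportunity falling from 0.9 to 0.7.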
Abstract: Trade liberalization measures, such as import tariff cuts, are not a sufficient trigger for trade growth. Given that price margins are narrow, traders and cargo operators tend to opt out of markets where the process of goods clearance is slow and costly. Excess paperwork and slow customs dispatch lead not only to institutional breakdowns and corruption but also to increasing transaction costs and trade constraints. The objective of this paper is therefore two-fold: first, to evaluate the relationship between institutional and infrastructural performance indexes and trade growth in container throughput; and second, to investigate the causes of differences in container demurrage and detention fees in Latin American countries (using other emerging countries as a benchmark). The analysis focuses on manufactured goods, typically transported in containers. Institutional and infrastructure bottlenecks and, therefore, country logistics efficiency – measured by the Logistics Performance Index (LPI, World Bank-WB) – are compared with other indexes, such as the Doing Business index (WB) and the Corruption Perception Index (Transparency International). The main results, based on the comparison between Latin American countries and the other emerging countries, indicate that growth in container trade is directly related to LPI performance. It has also been found that the main hypothesis is valid, as aspects that more specifically identify trade facilitation and corruption are significant drivers of logistics performance. The examination of port efficiency (demurrage and detention fees) demonstrated that higher efficiency is not necessarily related to lower charges; however, reductions in fees have been more significant among non-Latin American emerging countries.
Abstract: Maintenance costs represent a significant percentage of production costs, and analyzing them can lead to greater productivity and competitiveness. With this in mind, the maintenance of machines and installations is considered an essential part of organizational functions, and applying effective strategies creates significant added value in manufacturing activities. Organizations are trying to achieve performance levels on a global scale, with emphasis on creating competitive advantage, through different methods such as RCM (Reliability-Centered Maintenance), TPM (Total Productive Maintenance), etc. In this study, increasing the capacity of the Concentration Plant of Golgohar Iron Ore Mining & Industrial Company (GEG) was examined using reliability and maintainability analyses. The results of this research showed that, instead of increasing the number of machines (in order to solve the bottleneck problems), improving reliability and maintainability would solve the bottleneck problems in the best way. It should be mentioned that the data set of the Concentration Plant of GEG was applied and analyzed as a case study.
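As a point of reference for how reliability and maintainability jointly determine usable capacity (the study's own analysis may use richer models), the steady-state availability of a machine is commonly written as

\[ A = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}, \]

so raising the mean time between failures (reliability) or reducing the mean time to repair (maintainability) increases the effective output of existing equipment without adding machines.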
Abstract: The aim of information systems integration is to integrate all the data sources, applications and business flows into the new environment so that unwanted redundancies are reduced and bottlenecks and mismatches are eliminated. Two issues have to be dealt with to meet such requirements: the software architecture that supports resource integration, and the adaptor development tool that helps with integration and migration of legacy applications. In this paper, a service-enabled dependable integration environment (SDIE) is presented, which has two key components, i.e., a dependable service integration platform and a legacy application integration tool. For the dependable service integration platform, the service integration bus, the service management framework, the dependable engine for service composition, and the service registry and discovery components are described. For the legacy application integration tool, its basic organization, functionalities and the dependability measures taken are presented. Due to its service-oriented integration model, its light-weight extensible container, its service component combination-oriented p-lattice structure, and other features, SDIE has advantages in openness, flexibility, performance-price ratio and feature support over commercial products, and is better than most open-source integration software in functionality, performance and dependability support.
Abstract: Rainbow trout (Oncorhynchus mykiss) and brown trout (Salmo trutta fario), the two trout species once introduced by the British into the waters of Kashmir, have adapted well to the favorable climatic conditions. Cold-water fisheries are one of the emerging sectors in the Kashmir valley, and trout holds an important place in Jammu and Kashmir fisheries. Realizing the immense potential of trout culture in the Kashmir region, the state fisheries department began privatizing trout culture in 2009-10 under the centrally funded RKVY scheme, which provides an 80 percent subsidy for raceway construction and for the supply of feed and seed for the first year; at present there are 362 private trout farms. To cater to the growing demand for trout in the valley, it is important to understand the bottlenecks faced in the propagation of trout culture. Value chain analysis provides a generic framework for understanding the various activities and processes; mapping and studying linkages is the first step in any value chain analysis. In Kashmir, it was found that trout hatcheries play a crucial role in ensuring a continuous supply of trout seed in the valley. Feed is the most limiting factor in trout culture, and farmers incur high costs in purchasing feed and transporting it from the feed mill to the farm. The lack of aqua clinics in the Kashmir valley needs to be addressed. Brood stock maintenance, breeding and seed production, technical assistance to private farmers, and extension services have to be strengthened, and a healthier environment needs to be developed for new entrepreneurs. It was found that trout farmers do not avail themselves of credit facilities, as there is no well-defined credit scheme for fisheries in the state. The study showed weak institutional linkages. Research and development should focus more on applied science rather than basic science.
Abstract: Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions with nitrite as an electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an Anammox Hybrid Reactor (AHR), combining the dual advantages of suspended and attached growth media, for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at an HRT (hydraulic retention time) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations till they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d with the NLR increasing from 0.4 to 4.8 kg N/m3·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal, along with a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics were evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d-1. Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R2=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). An SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2-1.5 µm. Owing to its enhanced NRE, coupled with meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
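For orientation, the kinetic models referred to above are commonly applied to completely mixed reactors in the following forms (the exact formulation used in the study may differ slightly):

\[ \text{first order: } \frac{S_0 - S_e}{\text{HRT}} = k_1 S_e, \qquad \text{Grau second order: } \frac{S_0 \cdot \text{HRT}}{S_0 - S_e} = a + b\,\text{HRT}, \]

where S_0 and S_e are the influent and effluent substrate (total nitrogen) concentrations, k_1 is the first-order removal rate constant (13.0 d-1 above), and a and b are the Grau model constants obtained from the linearized plot.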
Abstract: One of the best-known techniques affecting the efficiency of a production line is the assembly line balancing (ALB) technique. This paper examines the balancing of the whole production line of a real auto glass manufacturer in three steps. In the first step, the processing time of each activity in the workstations is generated according to a practical approach. In the second step, the whole production process is simulated and the bottleneck stations are identified. Finally, in the third step, several improvement scenarios are generated to optimize system throughput, and the best one is proposed. The main contribution of the current research is the proposed framework, which combines two well-known approaches: Assembly Line Balancing and Optimization via Simulation (OvS). The results show that the proposed framework can easily be applied in practical environments.
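As a point of reference for the balancing step (the paper's own notation may differ), the line efficiency and balance delay used to judge an assembly line balance are conventionally defined as

\[ \eta = \frac{\sum_{i=1}^{n} t_i}{N \cdot c}, \qquad d = 1 - \eta, \]

where t_i are the task times, N is the number of workstations and c is the cycle time; balancing aims to reduce the total idle time N·c − Σt_i created by the bottleneck stations.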
Abstract: The distribution of a single global clock across a chip has become the major design bottleneck for high-performance VLSI systems owing to power dissipation, process variability and multi-cycle cross-chip signaling. A Network-on-Chip (NoC) architecture partitioned into several synchronous blocks has become a promising approach for attaining fine-grain power management at the system level. In an NoC architecture, the communication between the blocks is handled asynchronously. To interface these blocks on a chip operating at different frequencies, an asynchronous FIFO interface is indispensable. However, these asynchronous FIFOs are not required if adjacent blocks belong to the same clock domain. In this paper, we have designed and analyzed a 16-bit asynchronous micropipelined FIFO of depth four, with awareness of place and route on an FPGA device. We have used a commercially available Spartan 3 device and designed a high-speed implementation of the asynchronous 4-phase micropipeline. The asynchronous FIFO implemented on the FPGA device shows a throughput of 76 Mb/s and handshake cycles of 109 ns for write and 101.3 ns for read in simulation under worst-case operating conditions (voltage = 0.95 V) on a working chip at room temperature.
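The quoted figures are mutually consistent if the throughput is taken per 16-bit word moved through one write plus one read handshake (an interpretation, since the abstract does not show the calculation):

\[ \frac{16\ \text{bits}}{109\ \text{ns} + 101.3\ \text{ns}} \approx 76\ \text{Mb/s}. \]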
Abstract: The biodegradable polymer family of polyhydroxyalkanoates is an interesting substitute for conventional fossil-based plastics. However, the manufacturing and environmental impacts associated with their production via intracellular bacterial fermentation are strongly dependent on the raw material used and on energy consumption during the extraction process, limiting their potential for commercialization. Industrial wastewater is studied in this paper as a promising alternative feedstock for waste valorization. Based on results from laboratory and pilot-scale experiments, a conceptual process design, techno-economic analysis and life cycle assessment are developed for the large-scale production of the most common type of polyhydroxyalkanoate, polyhydroxybutyrate (PHB). Intracellular polyhydroxybutyrate is obtained via fermentation by the microbial community present in industrial wastewater, and the downstream processing is based on chemical digestion with surfactant and hypochlorite. The economic potential and environmental performance results help identify bottlenecks and the best opportunities to scale up the process prior to industrial implementation. The outcome of this research indicates that the fermentation of wastewater towards PHB presents advantages compared to traditional PHA production from sugars because of the null environmental burdens and financial costs of the raw material in the bioplastic production process. Nevertheless, process optimization is still required to compete with the petrochemical counterparts.
Abstract: Company managers are always looking for more and more opportunities to succeed in today's fiercely competitive market. To maintain one's place among the successful companies on the market today, or to come up with a revolutionary business idea, is much more difficult than before. Each new or improved method, tool or approach that can improve the functioning of business processes, or even of the entire system, is worth checking and verifying. The use of simulation in the design of manufacturing systems and in their management in practice is one way, without increased risk, to find the optimal parameters of manufacturing processes and systems. The paper presents an example of the use of simulation to solve a bottleneck problem in a specific company.
Abstract: In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary form of persistence. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organizations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput.
The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
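The abstract does not detail how EMA prefetching is realized; the following is only a minimal conceptual sketch, assuming that "access invariance" means a transaction's page accesses can be predicted (e.g. from a pre-execution pass) and staged in memory before the transaction runs under normal concurrency control. All names (BufferPool, predicted_pages, execute) are hypothetical and not taken from the paper.

```python
import concurrent.futures

class BufferPool:
    """Hypothetical in-memory cache used to stage prefetched pages."""

    def __init__(self, read_page_from_disk):
        self.pages = {}                       # page_id -> page contents
        self.read_page = read_page_from_disk  # callable(page_id) -> bytes

    def prefetch(self, page_ids):
        # Issue the disk reads concurrently so the prefetch pass overlaps
        # I/O instead of serializing on it.
        with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
            for pid, page in zip(page_ids, pool.map(self.read_page, page_ids)):
                self.pages[pid] = page

    def get(self, page_id):
        # After prefetching, accesses hit memory; a miss falls back to disk.
        if page_id not in self.pages:
            self.pages[page_id] = self.read_page(page_id)
        return self.pages[page_id]

def run_transaction(txn, buffer_pool):
    # Access-invariance assumption: the pages recorded in a pre-execution
    # pass (txn.predicted_pages) are the pages the transaction will touch
    # when executed for real, so staging them first makes the real
    # execution effectively memory-resident.
    buffer_pool.prefetch(txn.predicted_pages)
    return txn.execute(buffer_pool.get)  # normal concurrency control applies here
```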
Abstract: Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, for example, which products are often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that, as the data increase, the amount of time and resources required to mine the data increases at an exponential rate. In this investigation a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER can produce a speed-up of 3.1 times for frequent itemset mining when compared to the original algorithm, while maintaining an accuracy of 71%.
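The abstract names the two ingredients of FASTER (entropy and rough sets) but not the exact procedure; the sketch below illustrates only the entropy half, ranking attributes by information content and projecting the data set onto the most informative ones before itemset mining. The keep_ratio threshold and data layout are illustrative assumptions, not the authors' specification.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of one column of categorical values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_attributes(rows, keep_ratio=0.5):
    """Keep the highest-entropy attributes (illustrative criterion only).

    rows: list of dicts mapping attribute name -> categorical value.
    Returns the data set projected onto the selected attributes, ready to
    be handed to a frequent itemset miner.
    """
    attributes = list(rows[0].keys())
    ranked = sorted(attributes,
                    key=lambda a: entropy([row[a] for row in rows]),
                    reverse=True)
    kept = ranked[:max(1, int(len(ranked) * keep_ratio))]
    return [{a: row[a] for a in kept} for row in rows]
```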
Abstract: Growing demand for gas has rekindled the debate on gas security of supply due to supply interruptions, increasing gas prices, cross-border bottlenecks and a growing reliance on imports over longer distances. Security of supply is defined mostly as an infrastructure package satisfying the N-1 criterion. In the case of Estonia, Finland, Latvia and Lithuania, all the gas infrastructure is built to supply natural gas from one single supplier, Russia. In 2012 almost 100% of the natural gas to the Eastern Baltic Region was supplied by Gazprom. Under such circumstances, the infrastructure N-1 criterion does not guarantee security of supply. In the Eastern Baltic Region, the risk of gas supply disruption has been assessed by applying the method of risk scenarios. There are various risks to be tackled in the Eastern Baltic States in terms of improving security of supply, such as single-supplier risk, physical infrastructure risk, the regulatory gap, fair price and competition. The objective of this paper is to evaluate the energy security of the Eastern Baltic Region within the framework of the European Union’s policies and to make recommendations on how to better guarantee the energy security of the region.
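For reference, the N-1 infrastructure standard mentioned above is commonly formulated (e.g. in EU security-of-supply regulation) as the requirement that, after the loss of the single largest gas infrastructure, the remaining capacity can still cover demand on a day of exceptionally high gas demand:

\[ N\!-\!1\,[\%] = \frac{EP_m + P_m + S_m + LNG_m - I_m}{D_{max}} \times 100 \ \geq\ 100\%, \]

where EP_m, P_m, S_m and LNG_m are the technical capacities of entry points, domestic production, storage deliverability and LNG facilities, I_m is the capacity of the largest single infrastructure, and D_max is peak daily demand. With a single supplier and a single supply route, the formal N-1 value says little about the actual disruption risk, which is the point made in the abstract.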
Abstract: The Automated Teller Machine (ATM) has become an important tool among commercial banks, and bank customers have come to depend on and trust the ATM to conveniently meet their banking needs. Although the overwhelming advantages of the ATM cannot be over-emphasized, its alarming fraud rate has become a bottleneck to its full adoption in Nigeria. This study examined the menace of ATM fraud in society and the cost of running ATM services by banks in the country. The researcher developed a prototype of an enhanced Automated Teller Machine authentication using Short Message Service (SMS) verification. The developed prototype was tested by ten (10) respondents who are users of ATM cards in the country, and the data collected were analyzed using the Statistical Package for the Social Sciences (SPSS). Based on the results of the analysis, it is envisaged that the developed prototype will go a long way in reducing the alarming rate of ATM fraud in Nigeria.
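The abstract does not describe the verification flow in detail; below is a minimal sketch of the generic SMS one-time-code pattern such a prototype would typically follow, added purely for illustration. The class, the gateway callable and the six-digit code length are assumptions, not the author's design.

```python
import hmac
import secrets

class SmsVerifier:
    """Illustrative SMS second-factor check layered on top of the card/PIN step."""

    def __init__(self, send_sms):
        self.send_sms = send_sms  # callable(phone_number, message)
        self.pending = {}         # card_number -> one-time code awaiting entry

    def start(self, card_number, phone_number):
        code = f"{secrets.randbelow(10**6):06d}"  # random 6-digit code
        self.pending[card_number] = code
        self.send_sms(phone_number, f"Your ATM verification code is {code}")

    def verify(self, card_number, entered_code):
        expected = self.pending.pop(card_number, None)
        # compare_digest avoids leaking the code through timing differences.
        return expected is not None and hmac.compare_digest(expected, entered_code)
```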
Abstract: Data replication in data grid systems is one of the important solutions for improving availability, scalability, and fault tolerance. However, this technique also brings some issues, such as maintaining replica consistency. Moreover, as grid environments are very dynamic, some nodes can become more loaded than others and eventually turn into a bottleneck. The main idea of our work is to propose a solution that combines replica consistency maintenance with a dynamic load balancing strategy to improve access performance in a simulated grid environment.
Abstract: This paper presents a case study that uses process-oriented simulation to identify bottlenecks in the service delivery system in an emergency department of a hospital in the United Arab Emirates. Using the results of the simulation, response surface models were developed to explain patient waiting time and the total time patients spend in the hospital system. The results of the study could be used as a service improvement tool to help hospital management improve patient throughput and service quality in the hospital system.
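A response surface model of the kind mentioned here is typically a second-order polynomial fitted to the simulation outputs (the exact model terms used in the study are not given in the abstract):

\[ \hat{y} = \beta_0 + \sum_i \beta_i x_i + \sum_i \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j, \]

where \(\hat{y}\) is the predicted response (for example, expected patient waiting time) and the x_i are the controllable input factors varied in the simulation experiments.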
Abstract: A Simultaneous Multithreading (SMT) processor is capable of executing instructions from multiple threads in the same cycle. SMT was in fact introduced as a powerful extension of superscalar architectures to increase processor throughput. Simultaneous Multithreading is a technique that permits multiple instructions from multiple independent applications or threads to compete for limited resources each cycle. Since the fetch unit has been identified as one of the major bottlenecks of the SMT architecture, several fetch schemes have been proposed in prior work to enhance fetching efficiency and overall performance.
In this paper, we propose a novel fetch policy called queue situation identifier (QSI), which counts certain long-latency instructions of each thread every cycle and then selects which threads to fetch in the next cycle. Simulation results show that in the best case our fetch policy can achieve a 30% speedup and can also reduce the level-1 data cache miss rate.
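The abstract gives only the high-level idea of QSI; the sketch below illustrates that idea — count the outstanding long-latency instructions of each thread every cycle and give fetch priority to the threads with the fewest — with the thread representation and fetch width chosen here for illustration, not taken from the paper.

```python
def select_fetch_threads(threads, fetch_width=2):
    """Pick which threads may fetch in the next cycle.

    threads: objects exposing long_latency_in_flight, the number of
    long-latency instructions (e.g. cache-missing loads) the thread
    currently has in the pipeline. Threads with fewer such instructions
    are less likely to clog the issue queues, so they fetch first.
    """
    ranked = sorted(threads, key=lambda t: t.long_latency_in_flight)
    return ranked[:fetch_width]

# Called once per simulated cycle by the fetch stage, e.g.:
#   for thread in select_fetch_threads(active_threads):
#       fetch_instructions_from(thread)
```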
Abstract: In the online context, the design and implementation of an effective remote laboratory environment is highly challenging on account of hardware and software needs. This paper presents a remote laboratory software framework modified from the iLab Shared Architecture (ISA). The ISA is a framework which enables students to remotely access and control experimental hardware using internet infrastructure. The need for remote laboratories arose from the problems imposed by traditional laboratories: the high cost of laboratory equipment, scarcity of space, and scarcity of technical personnel which, along with restricted university budgets, create a significant bottleneck in building the required laboratory experiments. The solution to these problems is to build web-accessible laboratories. Remote laboratories allow students and educators to interact with real laboratory equipment located anywhere in the world, at any time. Recently, many universities and other educational institutions, especially in third-world countries, have relied on simulations because they cannot afford the experimental equipment their students require. Remote laboratories enable users to get real data from real-time, hands-on experiments. To implement many remote laboratories, the system architecture should be flexible, understandable and easy to implement, so that different laboratories with different hardware can be deployed easily. The modifications were made to enable developers to add more equipment to the ISA framework and to attract new developers to build more online laboratories.
Abstract: Application-Specific Instruction-set Processors (ASIPs) with Application-Specific Instructions (ASIs) have become an important design choice for embedded systems due to their runtime flexibility, which cannot be provided by custom ASIC solutions. One major bottleneck in maximizing ASIP performance is the limitation on the data bandwidth between the General Purpose Register File (GPRF) and the ASIs. This paper presents Implicit Registers (IRs) to provide the desired data bandwidth. An ASI Input/Output model is proposed to formulate the overhead of the additional data transfers between the GPRF and the IRs; an IR allocation algorithm is then used to achieve better performance by minimizing the number of extra data transfer instructions. The experimental results show a speedup of up to 3.33x compared to the results without IRs.
Abstract: Today, building automation is advancing from simple monitoring and control tasks for lighting and heating towards more and more complex applications that require a dynamic perception and interpretation of the different scenes occurring in a building. Current approaches cannot handle these newly emerging demands. In this article, a bionically inspired approach for multimodal, dynamic scene perception and interpretation is presented, which is based on neuroscientific and neuro-psychological research findings about the perceptual system of the human brain. The approach is based on data from diverse sensory modalities being processed in a so-called neuro-symbolic network. With its parallel structure and with its basic elements being information-processing and storage units at the same time, it provides a very efficient method for scene perception, overcoming the problems and bottlenecks of classical dynamic scene interpretation systems.