Abstract: Compared with terrestrial networks, the traffic of a spatial information network exhibits both self-similarity and short-range correlation. Studying its traffic prediction methods can improve the resource utilization of spatial information networks and provide an important basis for their traffic planning. In this paper, considering both the accuracy and the complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long-range correlation and detail components with short-range correlation, and a time-series hybrid prediction model based on wavelet decomposition is proposed to predict the spatial network traffic. Firstly, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing and truncation characteristics of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component is modeled with an ARIMA model after it has been smoothed. Finally, the prediction results of the multiple models are combined to obtain the prediction for the original data. The method considers not only the self-similarity of a spatial information network but also the short-range correlation caused by bursty network traffic, and it is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with typical time-series models, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, making it more suitable for a spatial information network.
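As a rough illustration of the hybrid scheme this abstract describes, the sketch below performs a one-level Haar decomposition and fits a simple AR(1) predictor to each component. The function names, the single decomposition level and the AR(1) choice are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: one-level Haar decomposition, then a simple AR(1)
# predictor per component (parameters and data are invented).

def haar_decompose(x):
    """Split a series into approximation (trend) and detail components."""
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return a, d

def ar1_coefficient(s):
    """Least-squares AR(1) coefficient: s[t] ~= phi * s[t-1]."""
    num = sum(s[t] * s[t-1] for t in range(1, len(s)))
    den = sum(s[t-1] ** 2 for t in range(1, len(s)))
    return num / den if den else 0.0

def predict_next(x):
    """Predict the next pair of samples by forecasting each component."""
    a, d = haar_decompose(x)
    a_next = ar1_coefficient(a) * a[-1]
    d_next = ar1_coefficient(d) * d[-1]
    # Inverse Haar: reconstruct the two raw samples of the next block.
    return a_next + d_next, a_next - d_next

traffic = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0, 13.0, 15.0]
print(predict_next(traffic))
```

A real implementation would use a deeper wavelet decomposition and full ARMA/ARIMA fitting per component, but the decompose-model-recombine structure is the same.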
Abstract: Nowadays, demand for devices capable of real-time video transmission is ever-increasing, and high-resolution video has made efficient compression techniques an essential component of capturing and transmitting video data. Motion estimation plays a critical role in encoding raw video, and various motion estimation methods have been introduced to compress video efficiently. Motion estimation methods based on low bit-depth representations simplify the computation of matching criteria and thus yield a small hardware footprint. In this paper, a hardware implementation of a two-bit-transform-based low-complexity motion estimation method using a local binary pattern approach is proposed. Image frames are represented in two-bit depth instead of full depth by using the local binary pattern as a binarization approach, and the binarization part of the hardware architecture is explained in detail. Experimental results demonstrate the difference between the proposed hardware architecture and the architectures of well-known low-complexity motion estimation methods in terms of important aspects such as resource utilization, energy and power consumption.
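The binarization idea can be illustrated roughly as follows. The two bit-planes chosen here (block mean and a single right-neighbour comparison) are simplifying assumptions; the actual method uses a richer LBP neighbourhood and a hardware datapath.

```python
# Illustrative sketch (not the paper's exact transform): each pixel is
# reduced to two bits, and block matching uses XOR counts instead of SAD.

def two_bit_transform(block):
    """Map a 2-D block of 8-bit pixels to two bit-planes.

    Bit plane 0: pixel vs. block mean (coarse intensity).
    Bit plane 1: pixel vs. its right neighbour (a local-binary-pattern-
    style comparison; the real method uses a richer LBP neighbourhood).
    """
    h, w = len(block), len(block[0])
    mean = sum(sum(row) for row in block) / (h * w)
    plane0 = [[1 if block[y][x] >= mean else 0 for x in range(w)]
              for y in range(h)]
    plane1 = [[1 if block[y][x] >= block[y][(x + 1) % w] else 0
               for x in range(w)] for y in range(h)]
    return plane0, plane1

def match_cost(planes_a, planes_b):
    """Number of mismatching bits (a cheap XOR/popcount in hardware)."""
    cost = 0
    for pa, pb in zip(planes_a, planes_b):
        for ra, rb in zip(pa, pb):
            cost += sum(a != b for a, b in zip(ra, rb))
    return cost

block = [[10, 200], [12, 210]]
same = two_bit_transform(block)
print(match_cost(same, same))  # identical blocks -> cost 0
```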
Abstract: Geoengineering approaches to climate change mitigation are unpopular and regarded with suspicion. Of these, space-based approaches are regarded as unworkable and enormously costly. Here, a space-based approach is presented that is modest in cost, fully controllable and reversible, and acts as a natural spur to the longer-term development of solar power satellites as a clean source of energy. The low-cost approach exploits self-replication technology, which, it is proposed, may be enabled by 3D printing technology. Self-replication of 3D printing platforms will enable mass production of simple spacecraft units. Key elements being developed are 3D-printable electric motors and 3D-printable vacuum-tube-based electronics. The power of such technologies will open up enormous low-cost possibilities, including space-based geoengineering.
Abstract: Cloud computing is an outcome of the rapid growth of the Internet. Due to the elastic nature of cloud computing and the unpredictable behavior of users, load balancing is a major issue in the cloud computing paradigm. An efficient load balancing technique can improve performance in terms of efficient resource utilization and higher customer satisfaction. Load balancing can be implemented through task scheduling, resource allocation and task migration. Parameters used to analyze the performance of a load balancing approach include response time, cost, data processing time and throughput. This paper demonstrates a two-level load balancer that combines the join-idle-queue and join-shortest-queue approaches. The authors have used the CloudAnalyst simulator to test the proposed two-level load balancer. The results are analyzed and compared with existing algorithms, and the proposed approach is observed to outperform the existing techniques.
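The two-level policy this abstract combines can be sketched in a few lines. The `dispatch` function and the queue representation are illustrative assumptions, not the authors' simulator.

```python
# Minimal sketch of a two-level dispatcher: level 1 is join-idle-queue
# (route to an idle VM if one exists); level 2 falls back to
# join-shortest-queue among busy VMs.

def dispatch(queues):
    """Return the index of the VM that should receive the next request.

    `queues[i]` is the number of pending requests at VM i.
    """
    # Level 1: join-idle-queue -- any idle server wins immediately.
    for i, q in enumerate(queues):
        if q == 0:
            return i
    # Level 2: join-shortest-queue among the busy servers.
    return min(range(len(queues)), key=lambda i: queues[i])

queues = [3, 0, 5]
target = dispatch(queues)
queues[target] += 1
print(target, queues)  # idle VM 1 is chosen
```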
Abstract: Revenue leakages are one of the major challenges
manufacturers face in production processes, as most of the input
materials that should emanate as products from the lines are lost as
waste. Rather than generating income from material inputs that are
meant to end up as products, further losses are incurred as costs to
manage the waste generated. In addition, due to the lack of a
clear view of the flow of resources on the lines from input to output
stage, acquiring information on the true cost of the waste generated
has become a challenge. This has therefore given birth to the
conceptualization and implementation of waste minimization
strategies by several manufacturing industries. This paper reviews the
principles and applications of three environmental management
accounting tools namely Activity-based Costing (ABC), Life-Cycle
Assessment (LCA) and Material Flow Cost Accounting (MFCA) in
the manufacturing industry and their effectiveness in curbing revenue
leakages. The paper unveils the strengths and limitations of each of
the tools, beaming a searchlight on the tool that could allow for
optimal resource utilization, transparency in production process as
well as improved cost efficiency. Findings from this review reveal
that MFCA may offer superior advantages with regard to the
provision of more detailed information (both in physical and
monetary terms) on the flow of material inputs throughout the
production process compared to the other environmental accounting
tools. This paper therefore makes a case for the adoption of MFCA as
a viable technique for the identification and reduction of waste in
production processes, and also for effective decision making by
production managers, financial advisors and other relevant
stakeholders.
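The MFCA principle of pricing waste can be illustrated with a small, invented calculation: costs are split between the positive product and the material loss in proportion to the physical quantities leaving a process, so the monetary value of waste becomes visible. The function name and all figures below are illustrative.

```python
# Hedged numeric sketch of the MFCA idea: the costs of one quantity
# centre are allocated to product vs. material loss by mass ratio
# (all figures are invented for illustration).

def mfca_split(input_kg, product_kg, material_cost, system_cost):
    """Return (product_cost, loss_cost) for one quantity centre."""
    loss_kg = input_kg - product_kg
    product_share = product_kg / input_kg
    loss_share = loss_kg / input_kg
    total = material_cost + system_cost  # system = labour, energy, etc.
    return total * product_share, total * loss_share

product_cost, loss_cost = mfca_split(
    input_kg=1000, product_kg=800, material_cost=5000, system_cost=2000)
print(product_cost, loss_cost)  # 80% of cost is product, 20% is waste
```

Conventional costing would bury the 20% loss inside product cost; MFCA reports it separately, which is the "more detailed information in physical and monetary terms" the review highlights.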
Abstract: A sensor network consists of multiple detection
locations called sensor nodes, each of which is tiny, featherweight
and portable. Single-path routing protocols in wireless sensor
networks can lead to holes in the network, since only the nodes
present in the single path are used for data transmission. Apart
from advantages such as reduced computation, complexity and
resource utilization, there are drawbacks such as lower throughput,
increased traffic load and delay in data delivery. Therefore, multipath
routing protocols are preferred for WSN. Distributing the traffic
among multiple paths increases the network lifetime. We propose a
scheme, for the data to be transmitted through a dominant path to
save energy. In order to obtain a high delivery ratio, a basic route
reconstruction protocol is utilized to reconstruct the path whenever a
failure is detected. A basic reconstruction routing (BRR) algorithm is
proposed, in which a node can bypass a path failure by using the
routing information already available in its neighbourhood while
the composed data is transmitted from the source to the sink. In order
to save energy and attain a high data delivery ratio, data is
transmitted along multiple paths, which is achieved by the BRR
algorithm whenever a failure is detected. Further, the analysis of
how the proposed protocol overcomes the drawback of the existing
protocols is presented. The performance of our protocol is compared
to AOMDV and energy efficient node-disjoint multipath routing
protocol (EENDMRP). The system is implemented using NS-2.34.
The simulation results show that the proposed protocol has high
delivery ratio with low energy consumption.
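The reconstruction idea can be sketched roughly as follows. The topology, cached routes and the `forward` helper are invented for illustration and do not reproduce the BRR message exchange.

```python
# Toy sketch: when a node's primary path to the sink contains a failed
# node, it borrows an intact route already cached by a neighbour.

ROUTES = {  # each node's cached route to the sink (invented topology)
    "A": ["A", "B", "D", "sink"],
    "C": ["C", "D", "sink"],
}
NEIGHBOURS = {"A": ["C"], "B": [], "C": ["A"], "D": []}

def forward(node, failed):
    """Return the path used from `node`, bypassing failed nodes."""
    path = ROUTES[node]
    if not any(hop in failed for hop in path[1:]):
        return path
    for nb in NEIGHBOURS[node]:  # borrow a neighbour's intact route
        alt = ROUTES.get(nb)
        if alt and not any(hop in failed for hop in alt):
            return [node] + alt
    return None  # no reconstruction possible

print(forward("A", failed={"B"}))  # reroutes via neighbour C
```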
Abstract: Grid is an environment with millions of resources
which are dynamic and heterogeneous in nature. A computational
grid is one in which the resources are computing nodes and is meant
for applications that involve large computations. A scheduling
algorithm is efficient only if it performs effective
resource allocation even in the case of resource failure. Resource
allocation is a challenging issue, since it has to consider several
requirements such as system load, processing cost and time, the user’s
deadline and resource failure. This work attempts to design a
resource allocation algorithm which is cost-effective and also targets
at load balancing, fault tolerance and user satisfaction by considering
the above requirements. The proposed Budget Constrained Load
Balancing Fault Tolerant algorithm with user satisfaction (BLBFT)
reduces the schedule makespan, schedule cost and task failure rate
and improves resource utilization. Evaluation of the proposed
BLBFT algorithm is done using the GridSim toolkit, and the results
are compared with algorithms that separately concentrate on each of
these factors. The comparison results confirm that the proposed
algorithm works better than its counterparts.
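The allocation criterion suggested by this abstract can be sketched as below. The resource fields, figures and tie-breaking order are illustrative assumptions rather than the BLBFT algorithm itself.

```python
# Illustrative sketch: pick the cheapest resource that can finish the
# task before the user's deadline and within budget, breaking ties by
# earlier finish time (i.e. lighter load). All figures are invented.

def allocate(task_len, deadline, budget, resources):
    """Return the chosen resource name, or None if no resource qualifies."""
    feasible = []
    for r in resources:
        finish = r["load"] + task_len / r["speed"]  # queue wait + run time
        cost = task_len * r["cost_per_unit"]
        if finish <= deadline and cost <= budget:
            feasible.append((cost, finish, r["name"]))
    return min(feasible)[2] if feasible else None

resources = [
    {"name": "R1", "speed": 2.0, "load": 1.0, "cost_per_unit": 3.0},
    {"name": "R2", "speed": 1.0, "load": 0.0, "cost_per_unit": 1.0},
]
# R1 finishes faster but busts the budget; R2 is chosen.
print(allocate(task_len=4.0, deadline=6.0, budget=10.0, resources=resources))
```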
Abstract: In MANET, mobile nodes communicate with each
other using the wireless channel where transmission takes place with
significant interference. The wireless medium used in MANET is a
shared resource used by all the nodes available in MANET. Packet
reserving is one important resource management scheme which
controls the allocation of bandwidth among multiple flows through
node cooperation in MANET. This paper proposes packet reserving
and clogging control via Routing Aware Packet Reserving (RAPR)
framework in MANET. It mainly focuses on the end-to-end routing
condition with maximal throughput. RAPR is a complementary system
in which packet reserving utilizes the local routing information
available in each node. Path setup in RAPR estimates the security
level of the system and characterizes the end-to-end route by
controlling the clogging. RAPR delivers packets to the destination
with a high delivery probability and minimal delay. The standard
performance measures such as network security level,
communication overhead, end-to-end throughput, resource utilization
efficiency and delay are considered in this work. The results
reveal that the proposed packet reservation and clogging control via
Routing Aware Packet Reserving (RAPR) framework performs well
on the above performance measures compared to the existing
methods.
Abstract: This paper focuses on the operational and strategic planning decisions related to the quayside of container terminals. We introduce an integrated operational research (OR) and system dynamics (SD) approach to solve the Berth Allocation Problem (BAP) and the Quay Crane Assignment Problem (QCAP). A BAP-QCAP optimization modeling approach is discussed which considers practical aspects not studied before in the integration of BAP and QCAP. A conceptual SD model is developed to determine the long-term effect of optimization on system behavior factors such as resource utilization, attractiveness of the port, number of incoming vessels and port profits. The framework can be used to improve the operational efficiency of container terminals and to provide a strategic view after applying optimization.
Abstract: Time-Cost Optimization "TCO" is one of the greatest challenges in construction project planning and control, since the optimization of either time or cost would usually be at the expense of the other. Because there is a hidden trade-off relationship between project duration and cost, it might be difficult to predict whether the total cost would increase or decrease as a result of schedule compression. Recently, a third dimension, project quality, has been taken into consideration in trade-off analysis. Few of the existing algorithms have been applied to construction projects with three-dimensional trade-off analysis of Time-Cost-Quality relationships. The objective of this paper is to present the development of a practical software system named Automatic Multi-objective Typical Construction Resource Optimization System "AMTCROS". This system incorporates the basic concepts of Line of Balance "LOB" and the Critical Path Method "CPM" in a multi-objective Genetic Algorithms "GAs" model. The main objective of this system is to provide practical support for typical construction planners who need to optimize resource utilization in order to minimize project cost and duration while simultaneously maximizing quality. The application of these research developments in planning typical construction projects holds a strong promise to: 1) increase the efficiency of resource use in typical construction projects; 2) reduce the construction duration; 3) minimize construction cost (direct cost plus indirect cost); and 4) improve the quality of new construction projects. A general description of the proposed software for the Time-Cost-Quality Trade-Off "TCQTO" is presented. The main inputs and outputs of the proposed software are outlined. The main subroutines and the inference engine of this software are detailed, and its complexity is analyzed and discussed.
In addition, the proposed software is verified and tested using a real case study.
Abstract: This paper presents a case study that uses process-oriented
simulation to identify bottlenecks in the service delivery
system in an emergency department of a hospital in the United Arab
Emirates. Using results of the simulation, response surface models
were developed to explain patient waiting time and the total time
patients spend in the hospital system. Results of the study could be
used as a service improvement tool to help hospital management in
improving patient throughput and service quality in the hospital
system.
Abstract: In this paper, we study statistical multiplexing of VBR
video in ATM networks. ATM promises to provide high-speed real-time
multi-point-to-central video transmission for telemedicine
applications in rural hospitals and in emergency medical services.
Video coders are known to produce variable bit rate (VBR) signals
and the effects of aggregating these VBR signals need to be
determined in order to design a telemedicine network infrastructure
capable of carrying these signals. We first model the VBR video
signal and simulate it using a generic continuous-data autoregressive
(AR) scheme. We carry out the queueing analysis by the Fluid
Approximation Model (FAM) and the Markov Modulated Poisson
Process (MMPP). The study has shown a trade-off: multiplexing
VBR signals reduces burstiness and improves resource utilization,
however, the buffer size needs to be increased with an associated
economic cost. We also show that the MMPP model and the Fluid
Approximation model fit best, respectively, the cell region and the
burst region. Therefore, a hybrid MMPP and FAM completely
characterizes the overall performance of the ATM statistical
multiplexer. The ramifications of this technology are clear: speed,
reliability (lower loss rate and jitter), and increased capacity in video
transmission for telemedicine. With migration to full IP-based
networks still a long way to achieving both high speed and high
quality of service, the proposed ATM architecture will remain of
significant use for telemedicine.
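The multiplexing effect described above can be checked with a toy experiment: aggregating several independent AR(1) "VBR" sources lowers the coefficient of variation (a simple burstiness measure) of the total rate. All parameters below are illustrative, not the paper's model values.

```python
# Toy check of the multiplexing effect: N independent AR(1) sources
# summed together produce a smoother aggregate than any single source.
import random

def ar1_trace(n, mean=10.0, phi=0.8, sigma=1.0, seed=0):
    """Simulate an AR(1) bitrate trace around `mean` (invented params)."""
    rng = random.Random(seed)
    x, out = mean, []
    for _ in range(n):
        x = mean + phi * (x - mean) + rng.gauss(0, sigma)
        out.append(x)
    return out

def cov(trace):
    """Coefficient of variation: std / mean."""
    m = sum(trace) / len(trace)
    var = sum((v - m) ** 2 for v in trace) / len(trace)
    return var ** 0.5 / m

single = ar1_trace(5000, seed=1)
aggregate = [sum(vals) for vals in
             zip(*(ar1_trace(5000, seed=s) for s in range(16)))]
print(cov(single) > cov(aggregate))  # True: the multiplexed stream is smoother
```

Statistical independence makes the aggregate's standard deviation grow as sqrt(N) while its mean grows as N, so the relative fluctuation shrinks, which is exactly the utilization gain the abstract reports.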
Abstract: Hospital staff and managers are under pressure and
concerned for effective use and management of scarce resources. The
hospital admissions require many decisions that have complex and
uncertain consequences for hospital resource utilization and patient
flow. It is challenging to predict risk of admissions and length of stay
of a patient due to their vague nature. There is no method to capture
the vague definition of admission of a patient. Also, current methods
and tools used to predict patients at risk of admission fail to deal with
uncertainty in unplanned admissions, length of stay (LOS) and patients’
characteristics. The main objective of this paper is to deal with
uncertainty in health system variables and to handle the uncertain
relationships among them. Machine learning techniques, combined with
statistical methods such as regression, are proposed as a
solution approach to handle uncertainty in health system variables. A
model that adapts fuzzy methods to handle uncertain data and
uncertain relationships can be an efficient solution to capture the
vague definition of admission of a patient.
Abstract: In this paper, we propose a dynamic TDMA slot
reservation (DTSR) protocol for cognitive radio ad hoc networks.
Quality of Service (QoS) guarantee plays a critically important role
in such networks. We consider the problem of providing QoS
guarantee to users as well as to maintain the most efficient use of
scarce bandwidth resources. According to one hop neighboring
information and the bandwidth requirement, our proposed protocol
dynamically changes the frame length and the transmission schedule.
A dynamic frame length expansion and shrinking scheme that
controls the excessive increase of unassigned slots has been
proposed. This method efficiently utilizes the channel bandwidth by
assigning unused slots to new neighboring nodes and increasing the
frame length when the number of slots in the frame is insufficient to
support the neighboring nodes. It also shrinks the frame length when
half of the slots in the frame of a node are empty. An efficient slot
reservation protocol not only guarantees successful data
transmissions without collisions but also enhances channel spatial
reuse to maximize the system throughput. Our proposed scheme,
which provides both a QoS guarantee and efficient resource utilization,
can be employed to optimize channel spatial reuse and maximize the
system throughput. Extensive simulation results show that the
proposed mechanism achieves desirable performance in multi-channel
multi-rate cognitive radio ad hoc networks.
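The expansion/shrinking rule described in this abstract can be sketched as a small state machine. The doubling and halving thresholds, and the slot repacking on shrink, are assumed details for illustration, not the protocol's exact rules.

```python
# Sketch of the frame-length adaptation idea (assumed policy: double the
# frame when all slots are taken, halve it when half or more are empty).

class TdmaFrame:
    def __init__(self, length=4):
        self.length = length
        self.assigned = set()  # slot indices in use

    def reserve(self):
        """Assign a free slot, expanding the frame if none remain."""
        if len(self.assigned) == self.length:
            self.length *= 2  # expansion: not enough slots for neighbours
        slot = next(i for i in range(self.length) if i not in self.assigned)
        self.assigned.add(slot)
        return slot

    def release(self, slot):
        """Free a slot, shrinking when half or more of the frame is empty."""
        self.assigned.discard(slot)
        while self.length > 1 and len(self.assigned) <= self.length // 2:
            # keep only slots that fit the halved frame (illustrative repack)
            self.assigned = {s % (self.length // 2) for s in self.assigned}
            self.length //= 2

frame = TdmaFrame(length=2)
s0, s1 = frame.reserve(), frame.reserve()
s2 = frame.reserve()   # frame was full -> expands to 4
frame.release(s2)      # half the slots are now empty -> shrinks back
print(frame.length)    # 2
```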
Abstract: Server provisioning is one of the most attractive topics in virtualization systems. Virtualization is a method of running multiple independent virtual operating systems on a single physical computer; it is a way of maximizing physical resources to maximize the investment in hardware. Additionally, it can help to consolidate servers, improve hardware utilization and reduce the consumption of power and physical space in the data center. However, managing heterogeneous workloads, especially with respect to server resource utilization (so-called provisioning), becomes a challenge. In this paper, a new concept for managing workloads based on user behavior is presented. The experimental results show that user behaviors differ across service workload types and over time. Understanding user behaviors may improve the efficiency of provisioning management. This preliminary study may be an approach to improving the management of data centers running heterogeneous workloads for provisioning in virtualization systems.
Abstract: Earthmoving operations are a major part of many
construction projects. Because of the complexity and fast-changing
environment of such operations, the planning and estimating are
crucial on both planning and operational levels. This paper presents
the framework of a microscopic discrete-event simulation system for
modeling earthmoving operations and conducting productivity
estimations on an operational level. A prototype has been developed
to demonstrate the applicability of the proposed framework, and this
simulation system is presented via a case study based on an actual
earthmoving project. The case study shows that the proposed
simulation model is capable of evaluating alternative operating
strategies and resource utilization at a very detailed level.
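A microscopic loader/truck cycle of the kind simulated here can be sketched with a minimal event list. The cycle times and fleet sizes below are invented for illustration.

```python
# Minimal discrete-event sketch of an earthmoving cycle: one loader
# serves several trucks; each truck hauls, dumps and returns to queue
# again at the loader (all parameters are invented).
import heapq

def simulate(trucks=3, loads_per_truck=4, load_time=2.0, haul_time=10.0):
    """Return the total time to complete all loads (the makespan)."""
    loader_free = 0.0
    remaining = {t: loads_per_truck for t in range(trucks)}
    events = [(0.0, t) for t in range(trucks)]  # (arrival at loader, truck)
    heapq.heapify(events)
    makespan = 0.0
    while events:
        arrival, t = heapq.heappop(events)
        start = max(arrival, loader_free)   # wait if the loader is busy
        loader_free = start + load_time
        done = loader_free + haul_time      # haul + dump + return
        remaining[t] -= 1
        makespan = max(makespan, done)
        if remaining[t]:
            heapq.heappush(events, (done, t))
    return makespan

print(simulate())
```

Varying `trucks` against `load_time`/`haul_time` in such a model is how alternative fleet configurations and resource utilization can be compared at a detailed level.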
Abstract: Designing and implementing intelligent systems has become a crucial factor for innovation and the development of better space technology products. A neural network is a parallel system capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device with reprogrammable properties and robust flexibility. For real-time neural-network-based instrument prototypes, conventional application-specific VLSI neural chip design suffers limitations in time and cost. With low-precision artificial neural network designs, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips, so many researchers have made great efforts to realize neural networks (NNs) using FPGA techniques. In this paper, a brief introduction to ANNs and the FPGA technique is given. VHDL code is proposed to implement ANNs, and simulation results with floating-point arithmetic are presented. Synthesis results for the ANN controller are obtained using Precision RTL. The proposed VHDL implementation provides a flexible, fast method with a high degree of parallelism for implementing ANNs. Implementing a multi-layer NN using look-up tables (LUTs) reduces the resource utilization and execution time of the implementation.
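The LUT idea can be modeled in a few lines of Python (the paper's implementation is VHDL); the table size and input range below are assumptions.

```python
# Python model (not VHDL) of the LUT approach: the sigmoid activation is
# precomputed into a small table, so the hardware needs only a lookup
# instead of an exponential. Table size and range are assumed values.
import math

LUT_SIZE, X_MIN, X_MAX = 256, -8.0, 8.0
STEP = (X_MAX - X_MIN) / (LUT_SIZE - 1)
SIGMOID_LUT = [1.0 / (1.0 + math.exp(-(X_MIN + i * STEP)))
               for i in range(LUT_SIZE)]

def sigmoid_lut(x):
    """Nearest-entry lookup with saturation at the table edges."""
    i = round((x - X_MIN) / STEP)
    return SIGMOID_LUT[min(max(i, 0), LUT_SIZE - 1)]

exact = 1.0 / (1.0 + math.exp(-1.5))
print(abs(sigmoid_lut(1.5) - exact) < 0.01)  # True: small quantization error
```

In the FPGA, the same table lives in block RAM or LUT fabric and is shared across neurons, which is where the resource and execution-time savings come from.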
Abstract: Virtualization-based server consolidation has been
proven to be an ideal technique to solve the server sprawl problem by
consolidating multiple virtualized servers onto a few physical servers
leading to improved resource utilization and return on investment. In
this paper, we solve this problem by using existing servers, which are
heterogeneous and diversely preferred by IT managers. Five practical
consolidation rules are introduced, and a decision model is proposed to
optimally allocate source services to physical target servers while
maximizing the average resource utilization and preference value. Our
model can be regarded as a multi-objective multi-dimension
bin-packing (MOMDBP) problem with constraints, which is strongly
NP-hard. An improved grouping genetic algorithm (GGA) is
introduced for the problem. Extensive simulations were performed and
the results are given.
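As a simplified stand-in for the GGA allocation step, a first-fit-decreasing heuristic shows the bin-packing structure of the problem; the single capacity dimension and the loads below are illustrative (the paper's model is multi-dimensional and also optimizes preference values).

```python
# Simplified sketch of the consolidation step (first-fit decreasing, a
# stand-in for the grouping genetic algorithm): place virtual servers on
# as few physical servers as possible, respecting capacity.

def consolidate(vm_loads, capacity):
    """Return a list of physical servers, each a list of VM loads."""
    servers = []
    for load in sorted(vm_loads, reverse=True):
        for srv in servers:
            if sum(srv) + load <= capacity:
                srv.append(load)
                break
        else:
            servers.append([load])  # open a new physical server
    return servers

placement = consolidate([0.5, 0.7, 0.3, 0.2, 0.4, 0.6], capacity=1.0)
print(len(placement), placement)  # six VMs fit on three servers
```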
Abstract: Developers need to evaluate software performance to make software efficient. This paper suggests a performance evaluation system for embedded software. The suggested system consists of a code analyzer, testing agents, a data analyzer, and a report viewer. The code analyzer inserts additional target-system-dependent code into the source code and compiles it. The testing agents execute the performance tests. The data analyzer translates raw-level result data into class-level APIs for the report viewer. The report viewer offers users graphical report views by using these APIs. We hope that the suggested tool will be useful for embedded software development, because developers can easily and intuitively analyze software performance and resource utilization.
Abstract: Next Generation Wireless Network (NGWN) is
expected to be a heterogeneous network which integrates all different
Radio Access Technologies (RATs) through a common platform. A
major challenge is how to allocate users to the most suitable RAT for
them. An optimized solution can lead to maximize the efficient use
of radio resources, achieve better performance for service providers
and provide Quality of Service (QoS) with low costs to users.
Currently, Radio Resource Management (RRM) is implemented
efficiently for the RAT for which it was developed. However, it is not
suitable for a heterogeneous network. Common RRM (CRRM) was
proposed to manage radio resource utilization in the heterogeneous
network. This paper presents a user-level Markov model for a network
of three co-located RATs. The load-balancing-based and service-based
CRRM algorithms are studied using the presented Markov model, and
their performance is compared in terms of
traffic distribution, new call blocking probability, vertical handover
(VHO) call dropping probability and throughput.
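The load-balancing-based admission policy studied above can be illustrated as follows. The `admit` helper and the capacities are invented, and the actual study evaluates such policies analytically with a Markov model rather than by simulation.

```python
# Toy illustration of load-balancing CRRM: each arriving call is
# admitted to the least-loaded RAT, and blocked when every RAT is full
# (capacities are invented figures).

def admit(loads, capacities):
    """Return the chosen RAT index, or None if the call is blocked."""
    candidates = [i for i in range(len(loads)) if loads[i] < capacities[i]]
    if not candidates:
        return None  # new-call blocking
    return min(candidates, key=lambda i: loads[i] / capacities[i])

loads, capacities = [4, 1, 3], [5, 5, 5]
rat = admit(loads, capacities)
print(rat)  # the least-loaded RAT (index 1) is selected
```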