Abstract: The use of hydroelectric pumped-storage systems at large scale (MW-size systems) is already widespread around the world. Although designed for large-scale applications, pumped-storage stations can be scaled down for small, remote residential applications. Given the cost and complexity associated with installing a substation further than 100 miles from the main transmission lines, a remote, independent and self-sufficient system is by far the most feasible solution. This article aims at the design of a wind and solar power generating system that uses pumped storage to replace the battery-bank energy storage of wind and/or solar power systems. A wind and solar pumped-storage power generating system can reduce the cost of the power generation system, according to the user's electricity load and resource conditions, and can also ensure the reliability of the power supply. A wind and solar pumped-storage power generation system is well suited for remote residential applications with intermittent wind and/or solar energy. This type of power system, installed in such locations, could be a very good alternative, with economic benefits and positive social effects. The advantage of the pumped-storage power system, for which wind power regulation is calculated, is that a significant smoothing of the produced power is obtained, resulting in a power-on-demand capability along with extra economic benefits.
Abstract: Floorplanning plays a vital role in the physical design
process of Very Large Scale Integrated (VLSI) chips. It is an
essential design step to estimate the chip area prior to the optimized
placement of digital blocks and their interconnections. Since VLSI
floorplanning is an NP-hard problem, many optimization techniques have been adopted in the literature. In this work, a music-inspired Harmony Search (HS) algorithm is used for fixed die outline constrained floorplanning, with the aim of reducing the total chip area. HS draws inspiration from the musical improvisation process of searching for a perfect state of harmony. Initially, a B*-tree is used to generate the primary floorplan for the given rectangular hard modules, and then the HS algorithm is applied to obtain an optimal solution for an efficient floorplan. Experimental results of the HS algorithm are reported for the MCNC benchmark circuits.
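As a minimal illustration of the HS improvisation loop described above, the sketch below minimizes a toy continuous function, not the paper's floorplanning objective; the parameter values (harmony memory size, HMCR, PAR, bandwidth) are illustrative assumptions.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000, seed=0):
    """Minimal Harmony Search minimizing f over a box (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Harmony memory: random initial solutions and their costs
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                x = rng.choice(hm)[d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        c = f(new)
        worst = max(range(hms), key=lambda i: cost[i])
        if c < cost[worst]:                    # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]

best, val = harmony_search(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```

In the floorplanning setting, `f` would instead evaluate the chip area of a candidate module arrangement (e.g. decoded from a B*-tree), with the same memory-consideration, pitch-adjustment, and replacement steps.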
Abstract: Biomass briquette gasification is regarded as a
promising route for efficient briquette use in energy generation, fuels
and other useful chemicals. However, previous research has been
focused on briquette gasification in fixed bed gasifiers such as
updraft and downdraft gasifiers. Fluidised bed gasifiers have the potential to be effectively scaled to medium or large size. This study
investigated the use of fuel briquettes produced from blends of rice
husks and corn cobs biomass, in a bubbling fluidised bed gasifier.
The study adopted a combination of numerical equations and Aspen Plus simulation software to predict the product gas (syngas) composition based on briquette density and biomass composition (blend ratio of rice husks to corn cobs). The Aspen Plus model was
based on an experimentally validated model from the literature. The
results, based on a briquette diameter of 32 mm and a relaxed density range of 500 to 650 kg/m3, indicated that the fluidisation air required in the gasifier increased with increasing briquette density, and the fluidisation air was shown to be the controlling factor compared with the actual air required for gasification of the biomass briquettes. The
mass flowrate of CO2 in the predicted syngas composition increased
with an increase in air flow, in the gasifier, while CO decreased and
H2 was almost constant. The ratio of H2 to CO for the various blends of rice husks and corn cobs did not change significantly at the design process air, but a significant difference of 1.0 was observed between the 10/90 and 90/10 % blends of rice husks and corn cobs.
Abstract: The biodegradable family of polymers, the polyhydroxyalkanoates, is an interesting substitute for conventional fossil-based plastics. However, the manufacturing and environmental
impacts associated with their production via intracellular bacterial
fermentation are strongly dependent on the raw material used and on
energy consumption during the extraction process, limiting their
potential for commercialization. Industrial wastewater is studied in
this paper as a promising alternative feedstock for waste valorization.
Based on results from laboratory and pilot-scale experiments, a
conceptual process design, techno-economic analysis and life cycle
assessment are developed for the large-scale production of the most
common type of polyhydroxyalkanoate, polyhydroxybutyrate.
Intracellular polyhydroxybutyrate is obtained via fermentation of a microbial community present in industrial wastewater, and the
downstream processing is based on chemical digestion with
surfactant and hypochlorite. The economic potential and environmental performance results help identify bottlenecks and the best opportunities to scale up the process prior to industrial implementation. The outcome of this research indicates that the fermentation of wastewater towards PHB presents advantages compared to traditional PHA production from sugars because of the negligible environmental burdens and financial costs of the raw material in the bioplastic production process. Nevertheless, process optimization is still required to compete with the petrochemical counterparts.
Abstract: Robotics brings together several very different
engineering areas and skills. There are various types of robots, such as humanoid robots, mobile robots, remotely operated vehicles, and modern autonomous robots. This paper describes the operation of a robotic car (a remotely operated vehicle) that is controlled by a mobile phone, allowing communication over large distances, even between different cities. The user makes a call to the mobile phone placed in the car. During the call, if any button is pressed, a tone corresponding to the pressed button is heard at the other end of the call. This tone is known as a DTMF (Dual-Tone Multi-Frequency) tone. The car recognizes this DTMF tone with the help of the phone stacked in the car. The received tone is processed by the Arduino microcontroller, which is programmed to make a decision for any given input and outputs that decision to the motor drivers in order to drive the motors forward, backward, left, or right. The mobile phone that makes the call to the cell phone stacked in the car thus acts as a remote control.
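The tone-to-command decision logic described above can be sketched as follows. The frequency table uses the standard DTMF row/column pairs, but the key-to-command assignments and the tolerance are illustrative assumptions, not necessarily those programmed in the paper's Arduino firmware.

```python
# Standard DTMF pairs (low-group row frequency, high-group column frequency)
# for the keys assumed here to carry driving commands.
DTMF_FREQS = {
    '2': (697, 1336),   # row 1, column 2
    '4': (770, 1209),   # row 2, column 1
    '5': (770, 1336),   # row 2, column 2
    '6': (770, 1477),   # row 2, column 3
    '8': (852, 1336),   # row 3, column 2
}
# Assumed key-to-motor-command mapping (an illustration, not the paper's).
COMMANDS = {'2': 'forward', '8': 'backward', '4': 'left', '6': 'right', '5': 'stop'}

def decode_key(low_hz, high_hz, tol=20):
    """Return the keypad key whose tone pair is within tol Hz of the input."""
    for key, (lo, hi) in DTMF_FREQS.items():
        if abs(low_hz - lo) <= tol and abs(high_hz - hi) <= tol:
            return key
    return None

def command_for_tone(low_hz, high_hz):
    """Map a detected tone pair to a motor command; unknown tones stop the car."""
    key = decode_key(low_hz, high_hz)
    return COMMANDS.get(key, 'stop')
```

On the actual hardware, a DTMF decoder chip or the microcontroller's signal processing would supply the detected key directly; the lookup step is the same.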
Abstract: To determine the potential of a low-cost Irish engineered timber product to replace high-cost solid timber for use in bending-active structures such as gridshells, a single Irish engineered timber product in the form of oriented strand board (OSB) was selected. A comparative study of OSB and solid timber was carried
out to determine the optimum properties that make a material suitable
for use in gridshells. Three parameters were identified to be relevant
in the selection of a material for gridshells. These three parameters
are the strength to stiffness ratio, the flexural stiffness of
commercially available sections, and the variability of material and
section properties. It is shown that when comparing OSB against
solid timber, OSB is a more suitable material for use in gridshells that
are at the smaller end of the scale and that have tight radii of
curvature. Typically, for solid timber materials, stiffness is used as an indicator of strength, and engineered timber is no different. Thus, low flexural stiffness would imply low flexural strength. However, when it comes to bending-active gridshells, OSB offers a significant advantage. By the addition of multiple layers, an increased section
size is created, thus endowing the structure with higher stiffness and
higher strength from initial low stiffness and low strength materials
while still maintaining tight radii of curvature. This allows OSB to
compete with solid timber on large scale gridshells. Additionally, a
preliminary sustainability study using a set of sustainability indicators
was carried out to determine the relative sustainability of building a
large-scale gridshell in Ireland with a primary focus on economic
viability but a mention is also given to social and environmental
aspects. For this, the Savill garden gridshell in the UK was used as
the functional unit with the sustainability of the structural roof
skeleton constructed from UK larch solid timber being compared
with the same structure using Irish OSB. Although the advantages of using commercially available OSB in a bending-active gridshell are marginal and limited to specific gridshell applications, further study into an optimised engineered timber product is merited.
Abstract: Meeting the growth in demand for digital services
such as social media, telecommunications, and business and cloud
services requires large scale data centres, which has led to an increase
in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead. Thus energy consumption can be reduced by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid cooling is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger, and full immersion of the microelectronics. This study quantifies the improvement of heat
transfer specifically for the case of immersed microelectronics by
varying the CPU and heat sink location. Immersion of the server is
achieved by filling the gap between the microelectronics and a water
jacket with a dielectric liquid which convects the heat from the CPU
to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection in the fixed enclosure filled with dielectric liquid, and forced convection in the water that is pumped through the water jacket. The model in this
study is validated with published numerical and experimental work
and shows good agreement with previous work. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink at the bottom of the microelectronics enclosure.
Abstract: A simulation-based VLSI (Very Large Scale Integration) implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression. The aim is to analyse the performance of lossless image compression, to reduce the image size without losing image quality, and to implement the result in a VLSI-based FELICS design. The FELICS algorithm uses a simplified adjusted binary code for image compression; the compressed image is converted into pixels and then implemented in the VLSI domain. These design choices are used to achieve high processing speed and to minimize area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. Color difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture design with a regular, four-stage pipelined data flow. With two-level parallelism, consecutive pixels can be classified into even and odd samples, with an individual hardware engine dedicated to each. This method can be further enhanced by multilevel parallelism.
Abstract: This work presents the modelling and simulation of
saponification of ethyl acetate in the presence of sodium hydroxide in
a plug flow reactor using Aspen Plus simulation software. Plug flow reactors are widely used in industry due to their non-mixing property. The use of plug flow reactors becomes significant when there is a need for a continuous large-scale reaction or a fast reaction. Plug flow reactors have a high conversion per unit volume, as the occurrence of side reactions is minimal. In this research, Aspen Plus
V8.0 has been successfully used to simulate the plug flow reactor. In order to simulate the process as accurately as possible, the HYSYS Peng-Robinson EOS package was used as the property method. The results
obtained from the simulation were verified by the experiment carried
out in the EDIBON plug flow reactor module. The correlation coefficient (r2) was 0.98, which showed that the simulation results satisfactorily fit the experimental data. The developed model
can be used as a guide for understanding the reaction kinetics of a
plug flow reactor.
Abstract: Compared with a traditional distributed environment, the net-centric environment poses more demanding challenges for information sharing, owing to its characteristics of ultra-large scale, strong distribution, dynamism, autonomy, heterogeneity, and redundancy. This paper realizes an information sharing model and a series of core services, which together provide an open, flexible, and scalable information sharing platform.
Abstract: The scope of this paper is to evaluate and compare the potential of LS-PV (Large-Scale Photovoltaic) power generation systems in the southern region of Libya at Al-Kufra for both stationary and tracking systems. A Microsoft Excel-VBA program has been developed to compute slope radiation, dew point, sky temperature, and then cell temperature, maximum power output and module efficiency for both the stationary and the tracking system. The results for energy production show that the total energy output is 114 GWh/year for the stationary system and 148 GWh/year for the tracking system. The average module efficiency is 16.6% for the stationary system and 16.2% for the tracking system.
The values of the electricity generation capacity factor (CF) and solar capacity factor (SCF) for the stationary system were found to be 26% and 62.5% respectively, and 34% and 82% for the tracking system. The GCR (Ground Cover Ratio) for the stationary system is 0.7, which corresponds to a tilt angle of 24°. The GCR for the tracking system was found to be 0.12. The estimated ground area needed to build a 50 MW PV plant amounts to approx. 0.55 km2 for a stationary PV field constituted by HIT PV arrays, corresponding to approx. 91 MW/km2. In the case of a tracker PV field, the required ground area amounts to approx. 2.4 km2, corresponding to approx. 20.5 MW/km2.
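The reported capacity factors can be cross-checked from the stated annual energy yields and the 50 MW plant rating, using CF = annual energy / (rated power x 8760 h):

```python
# Cross-check of the reported capacity factors from the stated energy yields.
RATED_MW = 50.0
HOURS_PER_YEAR = 8760

def capacity_factor(energy_gwh_per_year, rated_mw=RATED_MW):
    """CF = actual annual energy / energy at continuous rated output."""
    rated_gwh = rated_mw * HOURS_PER_YEAR / 1000.0   # MWh -> GWh
    return energy_gwh_per_year / rated_gwh

cf_fixed = capacity_factor(114.0)   # stationary system: ~0.26
cf_track = capacity_factor(148.0)   # tracking system:  ~0.34
```

Both values round to the 26% and 34% capacity factors reported in the abstract.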
Abstract: Qatar’s primary source of fresh water is through
seawater desalination. Amongst the major processes that are
commercially available on the market, the most common large scale
techniques are Multi-Stage Flash distillation (MSF), Multi Effect
distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs, allied with maintenance and the stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant: from damage to the marine ecosystem, to large land use, to the discharge of tons of GHG and a large carbon footprint.
One less energy-consuming technique based on membrane separation, being sought to reduce both the carbon footprint and operating costs, is membrane distillation (MD).
Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted more attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main
advantages of MD compared to other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate due to the transfer of only water vapor, the utilization of low-grade or waste heat from the oil and gas industries to heat the feed up to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating cost.
To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested. The objective of this study is to analyse the characteristics and morphology of membranes suitable for DCMD through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data are used to compare the DCMD distillate quality with that of other desalination techniques and standards.
Membrane SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127° with a highly porous surface, supported by a less porous, larger-pore-size PP membrane. Study of the effect of feed solution salinity and temperature on the water quality of the distillate, based on ICP and IC analysis, showed that for any salinity and different feed temperatures (up to 70°C) the electrical conductivity of the distillate is less than 5 μS/cm with 99.99% salt rejection. DCMD thus proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high feed salinity solutions (i.e. 100,000 mg/L TDS), even with a substantial quality difference compared to other desalination methods such as RO and MSF.
Abstract: Shifted polynomial basis (SPB) is a variation of
polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For
efficient implementation of pairing computation with large finite
fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes, and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed
multiplier provides a trade-off between space and time complexities.
Our proposed multiplier is modular, regular, and suitable for very
large scale integration (VLSI) implementations. It involves less area complexity than multipliers based on traditional decomposition methods. It is, therefore, more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
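Since the multiplier is built on Karatsuba schemes over binary extension fields, a software sketch of Karatsuba multiplication in GF(2)[x] may help illustrate the decomposition. Polynomials are encoded as Python integers with XOR as coefficient addition; this is an illustrative software model, not the proposed SPB hardware architecture.

```python
def gf2_mul_school(a, b):
    """Carry-less (GF(2)[x]) schoolbook multiplication of int-encoded polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # addition in GF(2) is XOR
        a <<= 1
        b >>= 1
    return r

def gf2_mul_karatsuba(a, b, threshold=64):
    """Karatsuba split a = a1*x^h + a0, b = b1*x^h + b0; the middle term
    (a0+a1)(b0+b1) - a0*b0 - a1*b1 uses only XOR, since +/- coincide in GF(2)."""
    n = max(a.bit_length(), b.bit_length())
    if n <= threshold:
        return gf2_mul_school(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    low = gf2_mul_karatsuba(a0, b0, threshold)
    high = gf2_mul_karatsuba(a1, b1, threshold)
    mid = gf2_mul_karatsuba(a0 ^ a1, b0 ^ b1, threshold) ^ low ^ high
    return (high << (2 * h)) ^ (mid << h) ^ low
```

The three half-size products in place of four are what yield the subquadratic complexity that the paper exploits at the architecture level.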
Abstract: Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (w.b.) of the pasta. Prediction models based on partial least squares (PLS) regression were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). Spectra of the pasta samples were treated with these mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined via traditional methods (R2 = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775). The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the developed calibration model.
Abstract: The conjugate gradient method has been widely used to solve large-scale unconstrained optimization problems, owing to its low iteration count, modest memory requirements, low CPU time, and convergence properties. In this paper we present a new class of nonlinear conjugate gradient coefficients with global convergence properties, proved under exact line search. The numerical results for our new βK are good when compared with well-known formulas.
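For a quadratic objective, the conjugate gradient iteration with exact line search takes a simple closed form. The sketch below is the classical linear CG with the Fletcher-Reeves-style coefficient beta_k = (r_new . r_new)/(r . r), not the paper's new βK, whose formula is not given in the abstract.

```python
def conjugate_gradient(A, b, x0, iters=50, tol=1e-10):
    """Linear CG for A x = b (A symmetric positive definite), i.e. minimizing
    f(x) = 0.5 x^T A x - b^T x with an exact line search at every step."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
    def dot(u, v):
        return sum(p * q for p, q in zip(u, v))
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # residual = -gradient
    d = list(r)                                          # initial direction
    for _ in range(iters):
        rr = dot(r, r)
        if rr < tol:
            break
        Ad = matvec(A, d)
        alpha = rr / dot(d, Ad)                 # exact line search step length
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        beta = dot(r, r) / rr                   # Fletcher-Reeves coefficient
        d = [ri + beta * di for ri, di in zip(r, d)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, [0.0, 0.0])
```

A nonlinear CG variant replaces the exact alpha with a line search on the objective and swaps in a different beta formula, which is exactly the quantity the paper's new coefficient redefines.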
Abstract: Numerical simulations have been performed to assess the compressibility correction of two-equation turbulence models suitable for large-scale separation flows perturbed by pintle strokes. In order to take pintle movement into account, a sliding mesh method was applied. The chamber pressure, mass flow rate, and thrust have been analyzed, and the response lag and sensitivity at the chamber and nozzle were estimated for a movable pintle. The nozzle performance for pintle reciprocation, i.e. its insertion and extraction processes, was analyzed to better understand the dynamic performance of the pintle nozzle.
Abstract: Economic Dispatch is one of the most important power system management tools. It is used to allocate an amount of power generation to the generating units to meet the load demand. The Economic Dispatch problem is a large-scale nonlinear constrained optimization problem. In general, heuristic optimization techniques are used to solve the non-convex Economic Dispatch problem. In this paper, ideas from Reinforcement Learning are proposed to solve the non-convex Economic Dispatch problem. Q-Learning is a reinforcement learning technique in which each generating unit learns the optimal schedule of generated power that minimizes the generation cost function. Eligibility traces are used to speed up the Q-Learning process. Q-Learning with eligibility traces is used to solve Economic Dispatch problems with valve point loading effects, multiple fuel options, and power transmission losses.
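A minimal sketch of Q-learning with eligibility traces on a toy dispatch problem may clarify the mechanism: two units are committed in sequence, the state is (unit index, power committed so far), and the reward is the negative of the unit cost plus a terminal infeasibility penalty. The cost curves, penalty weight, and hyperparameters are illustrative assumptions, not the paper's test systems, and the traces follow a naive Q(lambda) variant (they are not cut on exploratory actions).

```python
import random

DEMAND, LEVELS = 5, list(range(5))
COST = (lambda p: 0.5 * p * p + 1.0 * p,     # unit 1 cost curve (assumed)
        lambda p: 0.8 * p * p + 0.5 * p)     # unit 2 cost curve (assumed)

def train(episodes=5000, alpha=0.2, lam=0.8, eps=0.3, seed=1):
    rng = random.Random(seed)
    Q = {}
    get = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        trace, s = {}, (0, 0)                # eligibility traces per (state, action)
        while s[0] < 2:
            unit, committed = s
            if rng.random() < eps:           # epsilon-greedy exploration
                a = rng.choice(LEVELS)
            else:
                a = max(LEVELS, key=lambda a: get(s, a))
            s2 = (unit + 1, committed + a)
            r = -COST[unit](a)
            if s2[0] == 2:                   # episode ends: feasibility check
                r -= 50.0 * abs(s2[1] - DEMAND)
                target = r
            else:
                target = r + max(get(s2, a2) for a2 in LEVELS)
            delta = target - get(s, a)       # one-step TD error
            trace[(s, a)] = trace.get((s, a), 0.0) + 1.0
            for k, e in trace.items():       # broadcast TD error along traces
                Q[k] = Q.get(k, 0.0) + alpha * delta * e
                trace[k] = e * lam           # decay traces by lambda
            s = s2
    p1 = max(LEVELS, key=lambda a: get((0, 0), a))   # greedy schedule
    p2 = max(LEVELS, key=lambda a: get((1, p1), a))
    return p1, p2

p1, p2 = train()
```

After training, the greedy schedule meets the demand, and the traces let the terminal feasibility signal update the first unit's action in the same step rather than one episode later.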
Abstract: In this paper, various techniques relating to large-scale systems are presented. First, an explanation of large-scale systems and their differences from traditional systems is given. Next, possible specifications and requirements on hardware and software are listed. Finally, examples of large-scale systems are presented.
Abstract: In this paper, a robust decentralized congestion control strategy is developed for a large-scale network with Differentiated Services (Diff-Serv) traffic. The network is modeled by a nonlinear fluid flow model corresponding to two classes of traffic, namely premium traffic and ordinary traffic. The proposed congestion controller takes into account the associated physical network resource limitations and is shown to be robust to unknown and time-varying delays. Our proposed decentralized congestion control strategy is developed on the basis of the Diff-Serv architecture by utilizing a robust adaptive technique. A Linear Matrix Inequality (LMI) condition is obtained to guarantee the ultimate boundedness of the closed-loop system. Numerical simulations are presented, utilizing the QualNet and Matlab software tools, to illustrate the effectiveness and capabilities of our proposed decentralized congestion control strategy.
Abstract: Time-Cost Optimization "TCO" is one of the greatest challenges in construction project planning and control, since the optimization of either time or cost would usually be at the expense of the other. Since there is a hidden trade-off relationship between time and cost, it might be difficult to predict whether the total cost would increase or decrease as a result of schedule compression. Recently, a third dimension, the quality of the project, has been taken into consideration in trade-off analysis. Few of the existing algorithms are applied to construction projects with a three-dimensional trade-off analysis of Time-Cost-Quality relationships. The objective of this paper is to present the development of a practical software system, named the Automatic Multi-objective Typical Construction Resource Optimization System "AMTCROS". This system incorporates the basic concepts of Line Of Balance "LOB" and the Critical Path Method "CPM" in a multi-objective Genetic Algorithms "GAs" model. The main objective of this system is to provide practical support for typical construction planners who need to optimize resource utilization in order to minimize project cost and duration while simultaneously maximizing quality. The application of these research developments in planning typical construction projects holds a strong promise to: 1) increase the efficiency of resource use in typical construction projects; 2) reduce the construction duration period; 3) minimize construction cost (direct cost plus indirect cost); and 4) improve the quality of newly constructed projects. A general description of the proposed software for the Time-Cost-Quality Trade-Off "TCQTO" is presented. The main inputs and outputs of the proposed software are outlined. The main subroutines and the inference engine of this software are detailed. The complexity analysis of the software is discussed.
In addition, the verification and complexity of the proposed software are proved and tested using a real case study.