Abstract: As the data-driven economy is growing faster than
ever and the demand for energy is being spurred, we are facing
unprecedented challenges in improving energy efficiency in data centers. Effectively maximizing energy efficiency, or minimizing the cooling energy demand, has become a pervasive goal for data centers. This paper investigates the overall energy consumption and the energy efficiency of the cooling system of a data center in Finland as a case study. The power, cooling and energy consumption characteristics and the operating conditions of the facilities are examined and analyzed. Potential energy and cooling saving opportunities are identified, and further suggestions for improving the performance of the cooling system are put forward. Results are presented as a comprehensive evaluation
of both the energy performance and good practices of energy
efficient cooling operations for the data center. Utilization of an energy recovery concept for the cooling system is proposed. We conclude that, even though the analyzed data center demonstrated relatively high energy efficiency based on its power usage effectiveness (PUE) value, there is still significant potential for energy saving in its cooling systems.
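The power usage effectiveness value mentioned above is a standard ratio; a minimal sketch of its computation, with hypothetical figures rather than the case-study data:

```python
def pue(total_facility_energy_kwh, it_equipment_energy_kwh):
    """Power Usage Effectiveness: total facility energy (IT load plus
    cooling, power distribution, lighting) divided by IT equipment
    energy. 1.0 is the theoretical ideal; lower is better."""
    return total_facility_energy_kwh / it_equipment_energy_kwh
```

With hypothetical annual figures of 1500 MWh total facility energy and 1000 MWh IT load, this gives a PUE of 1.5, i.e. 0.5 kWh of overhead (mostly cooling) per kWh delivered to IT equipment.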
Abstract: Fast development of technologies, economic globalization and many other external circumstances stimulate companies' competitiveness. One of the major trends in today's business is the shift toward exploiting the Internet and the electronic environment for entrepreneurial needs. Recent research confirms that the e-environment provides a range of possibilities and opportunities for companies, especially for micro-, small- and medium-sized companies, which have limited resources. The usage of e-tools raises the effectiveness and the profitability of an organization, as well as its competitiveness.
In the electronic market, as in the classic one, there are factors, such as globalization, the development of new technology, price-sensitive consumers, the Internet, and new distribution and communication channels, that influence entrepreneurship. As a result of e-environment development, e-commerce and e-marketing grow as well.
Objective of the paper: To describe and identify factors influencing companies' competitiveness in the e-environment.
Research methodology: The authors employ well-established quantitative and qualitative research methods: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc. The theoretical and methodological background of the research is formed using scientific research and publications from mass media and professional literature, statistical information from legal institutions, and information collected by the authors during the survey process.
Research result: The authors detected and classified factors influencing competitiveness in the e-environment.
In this paper, the authors present their findings based on theoretical, scientific and field research; they conducted a survey of e-environment utilization among Latvian enterprises.
Abstract: This paper is an extension of a previous work in which a diagonally implicit harmonic balance method was developed and applied to simulate oscillatory pitching motions of an airfoil and a wing. A more detailed study of the accuracy, convergence and efficiency of the method is carried out in the current paper by varying the number of harmonics in the solution approximation. As the main advantage of the method is its usage for the design optimization of unsteady problems, its application to the more practical case of rotor flow analysis during forward flight is carried out and compared with flight test data and time-accurate computation results.
Abstract: Large-scale systems such as Grids offer
infrastructures for both data distribution and parallel processing. The
use of Grid infrastructures is a more recent issue that is already
impacting the Distributed Database Management System (DBMS) industry. In DBMSs, distributed query processing has emerged as a fundamental
technique for ensuring high performance in distributed databases.
Database placement is particularly important in large-scale systems
because it reduces communication costs and improves resource
usage. In this paper, we propose a dynamic database placement
policy that depends on query patterns and Grid sites' capabilities. We
evaluate the performance of the proposed database placement policy
using simulations. The obtained results show that dynamic database
placement can significantly improve the performance of distributed
query processing.
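The dynamic placement policy itself is specified only at a high level in the abstract; as an illustrative sketch (the data structures, the greedy heuristic and all names below are our assumptions, not the proposed policy), a frequency-driven placement might look like this:

```python
def place_replicas(query_freq, site_capacity, max_replicas):
    """Greedy placement sketch: assign replicas of each database
    fragment to the sites that query it most often, subject to a
    per-site capacity limit.

    query_freq:    {fragment: {site: observed query count}}
    site_capacity: {site: max number of fragments it can host}
    """
    load = {site: 0 for site in site_capacity}
    placement = {}
    for fragment, freqs in query_freq.items():
        chosen = []
        # Visit candidate sites from most to least frequent requester.
        for site in sorted(freqs, key=freqs.get, reverse=True):
            if load[site] < site_capacity[site]:
                chosen.append(site)
                load[site] += 1
            if len(chosen) == max_replicas:
                break
        placement[fragment] = chosen
    return placement
```

Re-running such a policy periodically, as query patterns shift, is what makes the placement "dynamic" in the sense the abstract describes.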
Abstract: For complete support of Quality of Service in Grid computing, it is preferable that the environment itself predicts the resource requirements of a job using special methods. Exact and correct prediction enables exact matching of required resources with available resources. After the execution of each job, the resources it used are saved in an active database named "History". First, some attributes are extracted from the incoming job and, according to a defined similarity algorithm, the most similar previously executed jobs are retrieved from "History"; then, using statistical techniques such as linear regression or averaging, the job's resource requirements are predicted. The novelty of this research lies in the use of an active database and centralized history maintenance. Implementation and testing of the proposed architecture yield prediction accuracies of 96.68% for CPU usage, 91.29% for memory usage and 89.80% for bandwidth usage.
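The pipeline sketched in the abstract (similarity search over "History", then regression on the most similar jobs) can be illustrated roughly as follows; the numeric attribute vectors, the Euclidean similarity measure and the neighbour count are assumptions for illustration, not the paper's similarity algorithm:

```python
import numpy as np

def predict_usage(history, job, k=3):
    """Predict a resource figure (e.g. CPU usage) for a new job by
    fitting a linear regression to its k most similar past jobs.

    history: list of (attribute_vector, observed_usage) pairs
    job:     attribute vector of the new job
    """
    X = np.array([attrs for attrs, _ in history], dtype=float)
    y = np.array([usage for _, usage in history], dtype=float)
    # Similarity = Euclidean distance over job attributes.
    dist = np.linalg.norm(X - np.asarray(job, dtype=float), axis=1)
    nearest = np.argsort(dist)[:k]
    # Least-squares fit with an intercept term on the neighbours.
    A = np.column_stack([X[nearest], np.ones(len(nearest))])
    coef, *_ = np.linalg.lstsq(A, y[nearest], rcond=None)
    return float(np.append(np.asarray(job, dtype=float), 1.0) @ coef)
```

Averaging the neighbours' observed usage, the other statistic the abstract mentions, would simply replace the regression with `y[nearest].mean()`.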
Abstract: The term interactive education refers to the multidisciplinary aspects of distance education that follow contemporary means around a common basis with different functional requirements. The aim of this paper is to reflect new techniques in education together with new methods and inventions; these methods are better supported by interactivity. The integration of interactive facilities into education through distance learning is not a new concept, but the application of these methods to design is only now being adapted to design education. This paper presents the general approach of this method and, after the analysis of different samples, identifies the advantages and disadvantages of these approaches. The method of this paper is to evaluate the related samples and then analyze the main hypothesis. The main focus is the formation processes of this kind of education. Technological developments in education should be filtered according to the necessities of design education, and the structure of the system could then be formed or renewed. The conclusion indicates that interactive methods of education in design capture not only technical and computational intelligence aspects but also aesthetic and artistic approaches coming together around the same purpose.
Abstract: Bioenergy is an archetypal appropriate technology and alternative source of energy in rural areas of China, and can meet the basic need for cooking fuel there. This paper introduces an alternative means of research that can accelerate biogas energy production: Tithonia diversifolia, the tree marigold, can serve as a feedstock for mesophilic anaerobic digestion to increase bioenergy production. Tithonia diversifolia is native to Mexico and Central America, where it serves as an ornamental plant and green manure and can prevent soil erosion; it is now widely grown and known in Asia, Africa, America and Australia as well. Considering China's geographical conditions, Tithonia diversifolia is found growing widely in many tropical and subtropical regions of southern Yunnan, where it can be of great use in accelerating and increasing bioenergy production. Aiming to prove that Tithonia diversifolia can be applied in biogas fermentation, and to assess its biogas production potential, the research carried out experiments on Tithonia diversifolia biogas fermentation under mesophilic conditions (35 °C). The results revealed that Tithonia diversifolia can be used as a biogas fermentative material, and that a 6% concentration gives the best biogas production, with a TS biogas production rate of 656 mL/g and a VS biogas production rate of 801 mL/g. Tithonia diversifolia grows wild in 53 counties and 9 cities of Yunnan Province, mainly along roadsides, at the edges of fields, in the countryside, at forest edges and in open spaces; it can form dense monospecific stands, causing serious harm to agricultural production and threatening the ecological system as a potentially harmful exotic plant. Three types of invasive daisy alien plants are found in Yunnan Province of China, Eupatorium adenophorum, Eupatorium odorata and Tithonia diversifolia, among which Tithonia diversifolia is responsible for serious harm to agricultural production. In this paper we have designed an experimental demonstration of biogas energy production, which requires an anaerobic environment and certain microbes; Tithonia diversifolia was used in the experiments, successfully generating more bioenergy and emphasizing the practical applications of the plant. This paper aims to find a new mechanism that provides a more scientific basis for the use of this plant in biogas energy and to improve its utilization throughout the world.
Abstract: In this work we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG audio layer 3 (MP3) files that operates directly in the compressed data domain, while manipulating the time and subband/channel domains. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient usage of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in time comparable to the real playing time of the original audio file, depending on the MPEG compression, while the end user/audience perceives no artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM data domain to compression/recompression attacks, as it places the watermark in the scale factors domain and not in the digitized audio data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that users establish ownership of the audio file not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
Abstract: In this paper, five options for Iran's gas flare recovery are compared via an MCDM method. For developing the model, the weighting factor of each indicator is obtained with an AHP method via the Expert Choice software. Several cases were considered in this analysis; they are defined by always keeping one criterion in the first position, while the priorities of the other criteria are defined by ordinal information describing the mutual relations of the criteria and their respective indicators. The results show that, among these cases, CHP usage obtains priority when the availability indicator is highly weighted, pipeline usage obtains priority when the environmental indicator is highly weighted, and injection obtains priority when the economic indicator is highly weighted, as well as when the weighting factors of all the criteria are the same.
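The AHP step, deriving a weight for each criterion from a pairwise comparison matrix, can be sketched as follows; the comparison matrix is a made-up example, not the paper's Expert Choice data:

```python
import numpy as np

def ahp_priorities(pairwise, iters=100):
    """Derive AHP priority weights as the (normalized) principal
    eigenvector of a reciprocal pairwise comparison matrix,
    computed by power iteration."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]   # start from equal weights
    for _ in range(iters):
        w = A @ w
        w /= w.sum()                        # keep weights summing to 1
    return w
```

For a perfectly consistent matrix such as `[[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]`, the weights come out proportional to 4:2:1. Real comparison matrices are rarely perfectly consistent, which is why AHP also defines a consistency ratio check.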
Abstract: The more recent satellite projects/programs make extensive usage of real-time embedded systems. 16-bit processors
which meet the Mil-Std-1750 standard architecture have been used in
on-board systems. Most space applications have been written in Ada. From a futuristic point of view, 32-bit/64-bit processors are
needed in the area of spacecraft computing and therefore an effort is
desirable in the study and survey of 64 bit architectures for space
applications. This will also result in significant technology
development in terms of VLSI and software tools for Ada (as the legacy code is in Ada).
There are several basic requirements for a special processor for
this purpose. They include Radiation Hardened (RadHard) devices,
very low power dissipation, compatibility with existing operational
systems, scalable architectures for higher computational needs,
reliability, higher memory and I/O bandwidth, predictability, a real-time operating system, and manufacturability of such processors.
Further on, these may include selection of FPGA devices, selection
of EDA tool chains, design flow, partitioning of the design, pin
count, performance evaluation, timing analysis etc.
This project deals with a brief study of 32- and 64-bit processors readily available in the market and with designing/fabricating a 64-bit RISC processor, named RISC MicroProcessor, with the added functionalities of an extended double-precision floating-point unit and a 32-bit signal processing unit acting as co-processors. In this
paper, we emphasize the ease and importance of using open cores (the OpenSparc T1 Verilog RTL) and open-source EDA tools such as Icarus to develop FPGA-based prototypes quickly. Commercial tools such as Xilinx ISE are also used for synthesis when appropriate.
Abstract: In the context of computer numerical control (CNC) and computer-aided manufacturing (CAM), the capabilities of programming languages such as symbolic and intuitive programming, program portability and geometrical portfolio have special importance. They make it possible to save time and avoid errors during part programming, and they permit code reuse. Our updated literature review indicates that the current state of the art presents voids in parametric programming, program portability and programming flexibility. In response to this situation, this article presents a compiler implementation for EGCL (Extended G-code Language), a new, enriched CNC programming language which allows the use of descriptive variable names, geometrical functions and flow-control statements (if-then-else, while). Our compiler produces low-level, generic, elementary ISO-compliant G-code, thus allowing flexibility in the choice of the executing CNC machine and in portability. Our results show that readable variable names and flow-control statements allow simplified and intuitive part programming and permit reuse of the programs. Future work includes allowing the programmer to define their own functions in EGCL, in contrast to the current status of having them as built-in library functions.
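The EGCL grammar itself is not reproduced in the abstract, so the following is only a hypothetical illustration of the lowering step such a compiler performs: a parametric description with named values and a loop is expanded into elementary ISO G-code moves. The function name, parameters and emitted preamble are our assumptions:

```python
def expand_linear_moves(x0, y0, x1, y1, steps, feed):
    """Lower a parametric straight-line description into elementary
    G-code: a units/coordinates preamble followed by G1 linear moves
    interpolated over `steps` increments."""
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    for i in range(1, steps + 1):
        x = x0 + (x1 - x0) * i / steps
        y = y0 + (y1 - y0) * i / steps
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}")
    return lines
```

The point of the illustration is that loops and named parameters live only in the high-level source; the output is flat, machine-portable G-code of the kind any ISO-compliant controller accepts.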
Abstract: Software reliability, defined as the probability of a
software system or application functioning without failure or errors
over a defined period of time, has been an important area of research
for over three decades. Several research efforts aimed at developing
models to improve reliability are currently underway. One of the
most popular approaches to software reliability adopted by some of
these research efforts involves the use of operational profiles to
predict how software applications will be used. Operational profiles
are a quantification of usage patterns for a software application. The
research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release
into the market. The architecture of the proposed Operational Profile
MAS (Multi-Agent System) is presented along with detailed
descriptions of the various models arrived at following the analysis
and design phases of the proposed system. The operational profile in
this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a new composite metric in order to organize the testing process as well as to decrease the time and cost involved in this process. A prototype implementation of the
proposed MAS is included as proof-of-concept and the framework is
considered as a step towards making distributed systems intelligent
and self-managing.
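An operational profile, as a quantification of usage patterns, can be sketched minimally as a relative-frequency table over observed operations; the event log and operation names below are hypothetical:

```python
from collections import Counter

def operational_profile(event_log):
    """Build an operational profile: the relative frequency of each
    operation observed in an event log of operation names."""
    counts = Counter(event_log)
    total = sum(counts.values())
    return {op: n / total for op, n in counts.items()}
```

In a profile-driven testing process, operations with high relative frequency (or high criticality, as the abstract's composite metric adds) receive proportionally more test effort.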
Abstract: As privacy becomes a major concern for consumers and enterprises, much research in recent years has focused on privacy-protecting technology. In this paper, we present a comprehensive approach to usage access control based on the notion of purpose. In our model, purpose information associated with a given data element specifies the intended use of the subjects and objects in the usage access control model. A key feature of our model is that, when an access is requested, the access purpose is checked against the intended purposes of the data item. We propose an approach to representing purpose information to support access control based on it. Our proposed solution relies on usage access control (UAC) models as well as on components based on the notions of purpose information used in subjects and objects. Finally, comparisons with related works are presented.
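The purpose-compliance check described above can be sketched as follows; the flat sets of intended and prohibited purposes are a simplification of such models (which typically organize purposes in a hierarchy), and the purpose names are hypothetical:

```python
def access_allowed(access_purpose, intended, prohibited=frozenset()):
    """Core purpose check: an access purpose must be among the data
    item's intended purposes and not among its prohibited purposes."""
    return access_purpose in intended and access_purpose not in prohibited
```

In a hierarchical variant, "in intended" would instead mean "is a descendant of some intended purpose in the purpose tree", with prohibited purposes pruning subtrees.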
Abstract: Sustainable energy usage has been recognized as one of the important measures for increasing the competitiveness of the nation globally. Strong emphasis was given in the Ninth Malaysia Plan (RMK9) to improving energy efficiency, especially in government buildings. With this in view, a project to investigate the potential for energy saving in a selected building at Universiti Tun Hussein Onn Malaysia (UTHM) was carried out. In this project, a
case study involving electric energy consumption of the academic
staff office building was conducted. The scope of the study includes identifying energy consumption in a selected building, studying energy saving opportunities, analysing the cost of investment in economic terms, and identifying users' attitudes with respect to energy usage. The
Malaysian Standard MS 1525:2001, Code of Practice on Energy Efficiency and Use of Renewable Energy for Non-Residential Buildings, was used as a reference. Several energy efficiency measures were
considered and their merits and priority were compared. Improving human behavior can reduce energy consumption by 6%, while technical measures can reduce it by 44%. Two economic evaluation methods were applied: the payback period method and the net present value method.
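The two evaluation methods named above are standard; a minimal sketch with hypothetical cash flows, not the case-study figures:

```python
def payback_period(investment, annual_saving):
    """Simple (undiscounted) payback period in years: how long the
    annual savings take to recover the up-front investment."""
    return investment / annual_saving

def npv(rate, investment, cashflows):
    """Net present value: an up-front investment followed by
    end-of-year cash flows discounted at the given rate."""
    return -investment + sum(cf / (1 + rate) ** (t + 1)
                             for t, cf in enumerate(cashflows))
```

For example, a 10,000 investment saving 2,500 per year pays back in 4 years; the same measure is worthwhile under NPV only if the discounted savings exceed the investment at the chosen rate.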
Abstract: Acoustic-imaging-based sound localization using a microphone array is a challenging task in digital signal processing. Discrete Fourier transform (DFT) based near-field acoustical holography (NAH) is an important acoustical technique for sound source localization and provides an efficient solution to this ill-posed problem. In practice, however, due to the use of a small truncated aperture and the consequent significant spectral leakage, the DFT cannot reconstruct the active region of sound (AROS) effectively, especially near the edges of the aperture. In this paper, we highlight the fundamental problems of DFT-based NAH and provide a solution to the spectral leakage effect by extrapolation based on linear predictive coding and 2D Tukey windowing. The approach has been tested on the localization of single and multi-point sound sources. We observe that incorporating the extrapolation technique increases the spatial resolution and localization accuracy and reduces spectral leakage when a small truncated aperture with a low number of sensors is used.
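A 2D Tukey (tapered cosine) window of the kind used to suppress leakage at the aperture edges can be built as the outer product of two 1D Tukey windows; this NumPy sketch is an illustration, not the paper's exact windowing code:

```python
import numpy as np

def tukey(n, alpha=0.5):
    """1D Tukey (tapered cosine) window: flat centre with cosine
    tapers; alpha is the fraction of the window inside the tapers."""
    if alpha <= 0:
        return np.ones(n)           # rectangular window
    if alpha >= 1:
        return np.hanning(n)        # fully tapered = Hann window
    x = np.linspace(0.0, 1.0, n)
    w = np.ones(n)
    left = x < alpha / 2
    right = x >= 1 - alpha / 2
    w[left] = 0.5 * (1 + np.cos(np.pi * (2 * x[left] / alpha - 1)))
    w[right] = 0.5 * (1 + np.cos(np.pi * (2 * x[right] / alpha - 2 / alpha + 1)))
    return w

def tukey2d(rows, cols, alpha=0.5):
    """Separable 2D window: outer product of two 1D Tukey windows,
    applied to the hologram aperture before the spatial DFT."""
    return np.outer(tukey(rows, alpha), tukey(cols, alpha))
```

Multiplying the measured pressure hologram by such a window before the spatial DFT softens the abrupt truncation at the aperture edges, which is the source of the leakage the abstract describes.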
Abstract: With the development of technology, the growing demand for fast and safe passenger transport, air pollution, traffic congestion, problems such as the increasing population, and the high cost of private vehicle usage have led many cities around the world, large and small, to build rail systems as a means of urban transport, in order to ensure economic and environmental sustainability and more efficient use of land in the city. The implementation phase of rail systems costs much more than that of other public transport systems. However, their long-term social and economic returns have made these systems the most popular investment tool for planned and developing cities.
In our country, the purposes, goals and policies of transportation plans lack integrity, and the problems are not clearly identified. In addition, undefined and incomplete assessment of transportation systems and insufficient financial analysis are the most important causes of failure. Addressing rail systems and other transportation systems as a whole is seen as the main factor in increasing efficiency; the fact that applications in our country are not yet integrated has led to the present problems.
Abstract: Scheduling algorithms are used in operating systems
to optimize the usage of processors. One of the most efficient
algorithms for scheduling is Multi-Layer Feedback Queue (MLFQ)
algorithm, which uses several queues with different quanta. The most important weakness of this method is the inability to determine the optimal number of queues and the quantum of each queue. This weakness has been addressed in the IMLFQ scheduling algorithm.
Number of the queues and quantum of each queue affect the response
time directly. In this paper, we review the IMLFQ algorithm for
solving these problems and minimizing the response time. In this algorithm, a recurrent neural network is utilized to find both the number of queues and the optimized quantum of each queue. Also, in order to prevent probable faults in the computation of processes' response times, a new fault-tolerant approach is presented, in which combinational software redundancy is used. The experimental results show that the IMLFQ algorithm yields better response times than other scheduling algorithms, and that the fault-tolerant mechanism further improves its performance.
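For readers unfamiliar with MLFQ, the baseline mechanism that IMLFQ tunes (several queues, each with its own quantum, with demotion on quantum expiry) can be sketched as follows; the job names and quanta are hypothetical:

```python
from collections import deque

def mlfq(jobs, quanta):
    """Simulate a multi-level feedback queue.

    jobs:   dict mapping job name -> remaining burst time
    quanta: quantum per queue level; a job that exhausts its quantum
            is demoted to the next (slower) queue.
    Returns the order in which jobs complete.
    """
    queues = [deque() for _ in quanta]
    for name in jobs:
        queues[0].append(name)          # all jobs enter the top queue
    remaining = dict(jobs)
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        run = min(quanta[level], remaining[name])
        remaining[name] -= run
        if remaining[name] == 0:
            order.append(name)
        else:                            # quantum expired: demote
            queues[min(level + 1, len(quanta) - 1)].append(name)
    return order
```

The choice of how many queues to create and which quantum to give each one, left fixed here, is exactly what IMLFQ learns with its recurrent neural network.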
Abstract: Many issues affect the modeling and design of real-time databases. One of them is maintaining consistency between the actual state of a real-time object in the external environment and its images as reflected by all of its replicas distributed over multiple nodes. The need to improve scalability is another important issue. In this paper, we present a general framework for designing a replicated real-time database for small to medium scale systems that maintains all timing constraints. In order to extend the idea to modeling a large-scale database, we present a general outline that improves scalability by applying an existing static segmentation algorithm to the whole database with the intent of lowering the degree of replication; it enables segments to have individual degrees of replication so as to avoid excessive resource usage, which together contribute to solving the scalability problem for DRTDBS.
Abstract: Increasing concerns over climate change have limited
the liberal usage of available energy technology options. India faces
a formidable challenge to meet its energy needs and provide adequate
energy of the desired quality, in various forms, to users in a sustainable manner at reasonable costs. In this paper, work was carried out with the objective of studying the role of various energy technology options under different scenarios, namely a baseline scenario, a high nuclear scenario, a high renewables scenario, and low and high growth rate scenarios. The study has been carried out using the Model for Energy
Supply Strategy Alternatives and their General Environmental
Impacts (MESSAGE) model which evaluates the alternative energy
supply strategies with user defined constraints on fuel availability,
environmental regulations, etc. The projected electricity demand at the end of the study period, i.e. 2035, is 500,490 MWyr. The model predicted demand shares of 428,170 MWyr for thermal, 40,320 MWyr for hydro, 14,000 MWyr for nuclear and 18,000 MWyr for wind in the baseline scenario. Coal remains the dominant fuel for the production of electricity during the study period; however, the import dependency of coal increases over the period. In the baseline scenario, the cumulative carbon dioxide emissions up to 2035 are about 11,000 million tonnes of CO2. In the high nuclear capacity scenario, carbon dioxide emissions are reduced by 10% when the nuclear energy share increases to 9%, compared to 3% in the baseline scenario. Similarly, aggressive use of renewables reduces carbon dioxide emissions by 4%.
Abstract: The batch nature of standard kernel principal component analysis (KPCA) limits the method in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
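For context, the standard batch KPCA whose re-computation cost the recursive formulation avoids can be sketched as follows; this is an illustrative NumPy implementation with an RBF kernel, not the paper's adaptive algorithm:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def batch_kpca(X, n_components, gamma=1.0):
    """Standard batch KPCA: centre the kernel matrix in feature
    space, eigen-decompose it, and project the training data onto
    the leading components. Every new sample forces a full redo of
    this n x n decomposition, which is the cost an online/recursive
    formulation sidesteps."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # feature-space centring
    vals, vecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # unit feature-space norm
    return Kc @ alphas                             # projections of X
```

Both the O(n^2) kernel matrix and the O(n^3) eigen-decomposition grow with the data size, which is why a recursive update with constant per-sample cost and memory matters for streaming data.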