Abstract: Until recently, energy security and climate change
were considered separate issues to be dealt with by policymakers.
The two issues are now converging, challenging the security and
climate communities to develop a better understanding of how to deal
with both issues simultaneously. Although Egypt is not a major
contributor to the world's total GHG emissions, it is particularly
vulnerable to the potential effects of global climate change such as
rising sea levels and changed patterns of rainfall in the Nile Basin.
Climate change is a major threat to sustainable growth and
development in Egypt, and to the achievement of the Millennium
Development Goals. Egypt's capacity to respond to the challenges of
climate instability will be expanded by improving overall resilience,
integrating climate change goals into sustainable development
strategies, increasing the use of modern energy systems with reduced
carbon intensity, and strengthening international initiatives. This
study seeks to establish a framework for considering the complex and
evolving links between energy security and climate change,
applicable to Egypt.
Abstract: This paper presents an optimal design of a poly-phase induction motor using Quadratic Interpolation based Particle Swarm Optimization (QI-PSO). The optimization algorithm considers the efficiency, starting torque and temperature rise as objective functions (treated separately) and ten performance-related items, including harmonic current, as constraints. The QI-PSO algorithm was implemented on a test motor and the results are compared with the Simulated Annealing (SA) technique, Standard Particle Swarm Optimization (SPSO), and a normal design. Some benchmark problems are used for validating QI-PSO. The test results show that QI-PSO gave better results and is more suitable for motor design optimization. Cµ code is used to implement all the algorithms.
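The abstract does not detail the quadratic-interpolation operator itself. As a minimal sketch (function names ours, assuming the standard three-point quadratic interpolation commonly combined with PSO, not necessarily the paper's exact formulation):

```python
def qi_min(x1, x2, x3, f):
    """Return the vertex of the quadratic fitted through
    (x1, f(x1)), (x2, f(x2)), (x3, f(x3)) -- the quadratic-interpolation
    step typically used to refine promising particles in QI-PSO."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    if den == 0:  # degenerate fit: no unique vertex, keep middle point
        return x2
    return 0.5 * num / den
```

In a full QI-PSO the operator would be applied per dimension to, e.g., the global best and two other particles, with the usual PSO velocity update running alongside.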
Abstract: In this paper we propose a new criterion for solving
the problem of channel shortening in multi-carrier systems. In a
discrete multitone receiver, a time-domain equalizer (TEQ) reduces
intersymbol interference (ISI) by shortening the effective duration of
the channel impulse response. Minimum mean square error (MMSE)
method for TEQ does not give satisfactory results. In [1] a new
criterion for partially equalizing severe ISI channels to reduce the
cyclic prefix overhead of the discrete multitone transceiver (DMT),
assuming a fixed transmission bandwidth, is introduced. Due to a
specific constraint (a unit norm constraint on the target impulse
response (TIR)) in their method, the freedom to choose the optimum
TIR vector is reduced. Better results can be obtained by avoiding
the unit norm constraint on the target impulse response (TIR). In
this paper we change the cost function proposed in [1] to the cost
function of determining the maximum of a determinant subject to
linear matrix inequality (LMI) and quadratic constraints, and solve
the resulting optimization problem. The usefulness of the proposed
method is demonstrated through simulations.
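The reformulated problem is not written out in the abstract; in the standard max-det form (notation ours, not necessarily the paper's), such problems read:

```latex
\begin{aligned}
\max_{\mathbf{w}} \quad & \log \det G(\mathbf{w}) \\
\text{s.t.} \quad & F(\mathbf{w}) \succeq 0 && \text{(linear matrix inequality)} \\
& \mathbf{w}^{\mathsf{T}} Q\, \mathbf{w} \le 1 && \text{(quadratic constraint)}
\end{aligned}
```

Here $G$ and $F$ are affine matrix-valued functions of the design vector $\mathbf{w}$; such problems are convex and solvable by interior-point methods.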
Abstract: Many researchers are working on information hiding
techniques, using different ideas and areas to hide their secret data.
This paper introduces a robust technique for hiding secret data in an
image, based on LSB insertion and the RSA encryption technique.
The key of the proposed technique is to first encrypt the secret data.
The encrypted data are then converted into a bit stream and divided
into a number of segments. The cover image is likewise divided into
the same number of segments. Each segment of data is compared
with each segment of the image to find the best matching segment,
in order to create a new random sequence of segments to be inserted
into the cover image. Experimental results show that the proposed
technique has a high security level and produces better stego-image
quality.
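The core LSB step can be sketched as follows (illustrative only: the RSA encryption and best-match segment reordering the paper describes are omitted, and the function names are ours):

```python
def embed_lsb(pixels, bits):
    """Hide each payload bit in the least significant bit of one
    pixel value of the cover image."""
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b  # clear LSB, then set it to b
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden by embed_lsb."""
    return [p & 1 for p in pixels[:n_bits]]
```

Because only the lowest bit of each pixel changes, no pixel value moves by more than 1, which is why LSB embedding preserves stego-image quality.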
Abstract: Internal controls of accounting are an essential
business function for a growth-oriented organization, and include the
elements of risk assessment, information communications and even
employees' roles and responsibilities. Internal controls of accounting
systems are designed to protect a company from fraud, abuse and
inaccurate data recording and help organizations keep track of
essential financial activities. Internal controls of accounting provide a
streamlined solution for organizing all accounting procedures and
ensuring that the accounting cycle is completed consistently and
successfully. Implementing a formal Accounting Procedures Manual
for the organization allows the financial department to facilitate
several processes and maintain rigorous standards. Internal controls
also allow organizations to keep detailed records, manage and
organize important financial transactions and set a high standard for
the organization's financial management structure and protocols. A
well-implemented system also reduces the risk of accounting errors
and abuse. A well-implemented controls system allows a company's
financial managers to regulate and streamline all functions of the
accounting department. Internal controls of accounting can be set up
for every area to track deposits, monitor check handling, keep track
of creditor accounts, and even assess budgets and financial statements
on an ongoing basis. Setting up an effective accounting system to
monitor accounting reports, analyze records and protect sensitive
financial information also can help a company set clear goals and
make accurate projections. Creating efficient accounting processes
allows an organization to set specific policies and protocols on
accounting procedures, and reach its financial objectives on a regular
basis. Internal accounting controls can help keep track of such areas
as cash-receipt recording, payroll management, appropriate recording
of grants and gifts, cash disbursements by authorized personnel, and
the recording of assets. These systems also can take into account any
government regulations and requirements for financial reporting.
Abstract: An economic operation scheduling problem of a
hydro-thermal power generation system is solved by the proposed
multipath adaptive tabu search algorithm (MATS). Four reservoirs
with their own hydro plants and one thermal plant are integrated
into the studied system, which is used to formulate the objective
function under complicated constraints, e.g., water management,
power balance and thermal generator limits. MATS, with four
sub-search units (ATSs) and a two-stage discarding mechanism
(DM), was configured and applied to solve the problem over 25
trials under a function-evaluation criterion. It is shown that MATS
provides superior results with respect to a single ATS and other
previous methods, namely genetic algorithms (GA) and differential
evolution (DE).
Abstract: A data warehouse (DW) is a system whose value lies in supporting decision-making through querying. Queries to a DW are critical with regard to their complexity and length. They often access millions of tuples, and involve joins between relations and aggregations. Materialized views can provide better performance for DW queries. However, these views incur a maintenance cost, so materializing all views is not possible. An important challenge of the DW environment is materialized view selection, because we have to realize the trade-off between query performance and view maintenance cost. Therefore, in this paper, we introduce a new approach aimed at solving this challenge based on Two-Phase Optimization (2PO), which is a combination of Simulated Annealing (SA) and Iterative Improvement (II), with the use of a Multiple View Processing Plan (MVPP). Our experiments show that our method provides a further improvement in terms of query processing cost and view maintenance cost.
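The 2PO scheme itself can be sketched generically (a toy illustration under our own assumptions, not the paper's view-selection cost model): iterative improvement descends quickly to a local optimum, then simulated annealing starts from it with a low temperature to escape shallow local minima.

```python
import math
import random

def iterative_improvement(state, cost, neighbors):
    """Phase 1: greedy descent to a local optimum."""
    while True:
        best = min(neighbors(state), key=cost)
        if cost(best) >= cost(state):
            return state  # no neighbor strictly improves
        state = best

def simulated_annealing(state, cost, neighbors, t0=1.0, alpha=0.9, steps=200):
    """Phase 2: SA seeded with the Phase-1 optimum; tracks the best seen."""
    best, t = state, t0
    for _ in range(steps):
        cand = random.choice(neighbors(state))
        delta = cost(cand) - cost(state)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = cand  # accept improving or, probabilistically, worse moves
        if cost(state) < cost(best):
            best = state
        t *= alpha  # cool down
    return best

def two_phase_optimization(cost, neighbors, starts):
    """2PO: II from several random starts, then SA from the best local optimum."""
    local = min((iterative_improvement(s, cost, neighbors) for s in starts),
                key=cost)
    return simulated_annealing(local, cost, neighbors)
```

In the paper's setting the state would be a set of views to materialize under an MVPP, and the cost the combined query-processing and view-maintenance cost.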
Abstract: The IEEE 802.11e standard, an enhanced version of the 802.11 WLAN standards, incorporates Quality of Service (QoS), which makes it a better choice for multimedia and real-time applications. In this paper we study various aspects of the 802.11e standard. Further, the analysis results for this standard are compared with the legacy 802.11 standard. Simulation results show that IEEE 802.11e outperforms legacy IEEE 802.11 in terms of quality of service, due to its flow-differentiated channel allocation and better queue management architecture. We also propose a method to improve the unfair allocation of bandwidth between downlink and uplink channels by varying the medium access priority level.
Abstract: Hydrogen is an important chemical in many industries
and it is expected to become one of the major fuels for energy
generation in the future. Unfortunately, hydrogen does not exist in its
elemental form in nature and therefore has to be produced from
hydrocarbons, hydrogen-containing compounds or water.
Above its critical point (374.8 °C and 22.1 MPa), water has a lower
density and viscosity, and a higher heat capacity, than those of
ambient water. Mass transfer in supercritical water (SCW) is
enhanced due to its increased diffusivity and transport ability. The
reduced dielectric constant makes supercritical water a better solvent
for organic compounds and gases. Hence, due to the aforementioned
desirable properties, there is a growing interest toward studies
regarding the gasification of organic matter containing biomass or
model biomass solutions in supercritical water.
In this study, hydrogen and biofuel production by the catalytic
gasification of 2-Propanol in supercritical conditions of water was
investigated. Pt/Al2O3 and Ni/Al2O3 were the catalysts used in the
gasification reactions. All of the experiments were performed under a
constant pressure of 25 MPa. The effects of five reaction temperatures
(400, 450, 500, 550 and 600°C) and five reaction times (10, 15, 20,
25 and 30 s) on the gasification yield and flammable component
content were investigated.
Abstract: In this paper we have proposed a methodology to
develop an amperometric biosensor for the analysis of glucose
concentration using a simple microcontroller based data acquisition
system. The work involves the development of a Detachable
Membrane Unit (an enzyme-based biomembrane) with immobilized
glucose oxidase on the membrane, and interfacing the same to the
signal conditioning system. The current generated by the biosensor
for different glucose concentrations was signal conditioned, then
acquired and computed by a simple AT89C51 microcontroller. The
optimum operating parameters for best performance were determined
and reported. A detailed performance evaluation of the biosensor
has been carried out. The proposed microcontroller-based biosensor
system has a sensitivity of 0.04 V/(g/dl), with a resolution of
50 mg/dl. It exhibited very good inter-day stability, observed for up
to 30 days. Compared to a reference method such as HPLC, the
accuracy of the proposed biosensor system is well within ±1.5%.
The system can be used for real-time analysis of glucose
concentration in fields such as food and fermentation, and in clinical
(in-vitro) applications.
Abstract: One major source of performance decline in speaker
recognition system is channel mismatch between training and testing.
This paper focuses on improving channel robustness of speaker
recognition system in two aspects of channel compensation technique
and channel robust features. The system is text-independent speaker
identification system based on two-stage recognition. In the aspect of
channel compensation technique, this paper applies the MAP
(Maximum A Posteriori) channel compensation technique, which
was used in speech recognition, to the speaker recognition system. In the
aspect of channel robust features, this paper introduces
pitch-dependent features and pitch-dependent speaker model for the
second stage recognition. Based on the first stage recognition to
testing speech using GMM (Gaussian Mixture Model), the system
uses the GMM scores to decide whether the test speech needs to be
recognized again. If it
needs to, the system selects a few speakers from all of the speakers
who participate in the first stage recognition for the second stage
recognition. For each selected speaker, the system obtains three
pitch-dependent results from his pitch-dependent speaker model, and
then uses an ANN (Artificial Neural Network) to combine the three
pitch-dependent results and one GMM score into a fused result.
The system makes the second stage recognition based on these fused
results. The experiments show that the correct rate of two-stage
recognition system based on MAP channel compensation technique
and pitch-dependent features is 41.7% better than the baseline system
for closed-set test.
Abstract: Subjective loneliness describes people who feel a
disagreeable or unacceptable lack of meaningful social relationships,
both at the quantitative and qualitative level. The studies to be
presented tested an Italian 18-item self-report loneliness measure
that included items adapted from previously developed scales,
namely a short version of the UCLA scale (Russell, Peplau and
Cutrona, 1980), and the 11-item Loneliness Scale by De Jong-
Gierveld & Kamphuis (JGLS; 1985). The studies aimed at testing the developed
scale and at verifying whether loneliness is better conceptualized as a
unidimensional (so-called 'general loneliness') or a bidimensional
construct, namely comprising the distinct facets of social and
emotional loneliness. The loneliness questionnaire included two
single-item criterion measures, of sad mood and social contact, and asked
participants to supply information on a number of socio-demographic
variables. Factorial analyses of responses obtained in two
preliminary studies, with 59 and 143 Italian participants respectively,
showed good factor loadings and subscale reliability and confirmed
that perceived loneliness has clearly two components, a social and an
emotional one, the latter measured by two subscales, a 7-item
'general' loneliness subscale derived from UCLA, and a 6–item
'emotional' scale included in the JGLS. Results further showed that
type and amount of loneliness are related, negatively, to frequency of
social contacts, and, positively, to sad mood. In a third study, data
were obtained from a nationwide sample of 9,097 Italian subjects,
aged 12 to about 70, who completed the test online, on the Italian
website of a large-audience magazine, Focus. The results again
confirmed the reliability of the component subscales, namely social,
emotional, and 'general' loneliness, and showed that they were
highly correlated with each other, especially the latter two.
Loneliness scores were significantly predicted by sex, age, education
level, sad mood and social contact, and, less so, by other variables –
e.g., geographical area and profession. The scale validity was
confirmed by the results of a fourth study, with elderly men and
women (N = 105) living at home or in residential care units. The three
subscales were significantly related, among others, to depression, and
to various measures of the extension of, and satisfaction with, social
contacts with relatives and friends. Finally, a fifth study with 315
career-starters showed that social and emotional loneliness correlate
with life satisfaction, and with measures of emotional intelligence.
Altogether, the results showed good validity and reliability, in the
tested samples, of the entire scale and of its components.
Abstract: Arguments on a popular microblogging site were analysed by means of a methodological approach to business rhetoric focusing on the logos communication technique. The focus of the analysis was the 100-day countdown to the 2011 Rugby World Cup as advanced by the organisers. Big sporting events provide an attractive medium for sport event marketers in that they have become important strategic communication tools directed at sport consumers. Sport event marketing is understood here in the sense of using a microblogging site as a communication tool whose purpose is to disseminate a company's marketing messages by involving the target audience in experiential activities. Sport creates a universal language in that it excites and increases the spread of information by word of mouth and other means. The findings highlight the limitations of a microblogging site in terms of marketing messages, which can inform better practice. This study can also serve as a heuristic tool for other researchers analysing sports marketing messages in social network environments.
Abstract: This paper presents a new classification algorithm using colour and texture for obstacle detection. Colour information is computationally cheap to learn and process. However, in many cases colour alone does not provide enough information for classification. Texture information can improve classification performance, but usually comes at an expensive cost. Our algorithm uses both colour and texture features, but texture is only needed when colour is unreliable. During the training stage, texture features are learned specifically to improve the performance of a colour classifier. The algorithm learns a set of simple texture features, and only the most effective features are used in the classification stage. Therefore our algorithm has a very good classification rate while still being fast enough to run on a limited computer platform. The proposed algorithm was tested on a challenging outdoor image set. Test results show that the algorithm achieves a much better trade-off between classification performance and efficiency than a typical colour classifier.
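The "texture only when colour is unreliable" idea is essentially a two-stage cascade, which can be sketched as follows (the classifier functions and the confidence threshold are our own hypothetical stand-ins, not the paper's implementation):

```python
def cascade_classify(sample, colour_clf, texture_clf, conf_threshold=0.8):
    """Classify with the cheap colour classifier; fall back to the more
    expensive texture classifier only when colour confidence is low.

    colour_clf(sample) -> (label, confidence in [0, 1])
    texture_clf(sample) -> label
    """
    label, confidence = colour_clf(sample)
    if confidence >= conf_threshold:
        return label          # colour alone is trusted
    return texture_clf(sample)  # pay the texture cost only when needed
```

The speed/accuracy trade-off comes from how often the fallback fires: a higher threshold routes more samples through the texture classifier.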
Abstract: This paper presents the determination of the proper
quality costs parameters which provide the optimum return. The
system dynamics simulation was applied. The simulation model was
constructed by the real data from a case of the electronic devices
manufacturer in Thailand. The Steepest Descent algorithm was
employed for optimisation. The experimental results show that the
company should spend on prevention and appraisal activities 850
and 10 Baht/day respectively. This yields the minimum cumulative
total quality cost, which is 258,000 Baht over twelve months. The
effect of the step size used when adjusting the variables toward the
optimum was also investigated. A smaller step size provided a better
result, at the cost of more experimental runs. However, the difference
in yield in this case is not significant in practice. Therefore, the
greater step size is recommended because the region of the optimum
can be reached more easily and rapidly.
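The step-size trade-off the abstract describes can be illustrated with a toy fixed-step steepest descent (a minimal one-dimensional sketch under our own assumptions, not the paper's simulation model):

```python
def steepest_descent(grad, x0, step, tol=1e-6, max_iters=100000):
    """Minimise a 1-D function via fixed-step steepest descent.
    Returns (x, iterations): stops when the gradient magnitude
    falls below tol."""
    x = x0
    for i in range(max_iters):
        g = grad(x)
        if abs(g) < tol:
            return x, i
        x -= step * g  # move against the gradient
    return x, max_iters
```

For example, minimising f(x) = (x - 3)^2 (gradient 2(x - 3)) from x = 0, a step of 0.01 needs hundreds of iterations while a step of 0.4 converges in about ten; too large a step, however, would oscillate or diverge, which is the practical tension behind the step-size choice.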
Abstract: In this study an extensive experimental research effort is
carried out to develop a better understanding of the effects of Piano
Key (PK) weir geometry on weir flow threshold submergence.
Experiments were conducted in a 12 m long, 0.4 m wide and 0.7 m
deep rectangular glass-wall flume. The main objectives were to
investigate the effect of the PK weir geometry, including the weir
length, weir height, inlet-outlet key widths, upstream and
downstream apex overhangs, and sloped floors, on threshold
submergence, and to study the hydraulic flow characteristics. From
the experimental results, a practical formula is proposed to evaluate
the flow threshold submergence over PK weirs.
Abstract: It is necessary to incorporate technological advances
achieved in the field of engineering into dentistry in order to enhance
the process of diagnosis, treatment planning and enable the doctors to
render better treatment to their patients. To achieve this ultimate goal
long-distance collaborations are often necessary. This paper discusses
various collaborative tools and their applications to solving a few
pressing problems confronted by dentists. Customization is often
the solution to most of these problems, but rapid design,
development and cost-effective manufacturing are difficult to
achieve. This problem can be solved using the technique of digital
manufacturing. Cases from six major branches of dentistry are
discussed, and possible solutions with the help of state-of-the-art
rapid digital manufacturing technology are proposed in the present
paper. The paper also describes the use of existing tools in the
collaborative and digital manufacturing area.
Abstract: The purpose of this paper is to describe the process of
setting up a learning community within an elementary school in
Ontario, Canada. The description is provided through reflection and
examination of field notes taken during the yearlong training and
implementation process. Specifically, the impact of teachers' capacity
on the creation of a learning community was of interest. This paper is
intended to inform and add to the debate around the tensions that
exist in implementing a bottom-up professional development model
like the learning community in a top-down organizational structure.
My reflections on the process illustrate that implementation of the
learning community professional development model may be
difficult and yet transformative in the professional lives of the
teachers, students, and administration involved in the change process.
I conclude by suggesting the need for a new model of professional
development that requires a transformative shift in power dynamics
and a shift in the view of what constitutes effective professional
learning.
Abstract: In open settings, the participants in virtual
organization are autonomous and there is no central authority to
ensure the felicity of their interactions. When agents interact in such
settings, each relies upon being able to model the trustworthiness of
the agents with whom it interacts. Fundamentally, such models must
consider the past behavior of the other parties in order to predict their
future behavior. Further, it is sensible for the agents to share
information via referrals to trustworthy agents. This article views
trust as "a bet on the future contingent actions of others" and
enumerates six major factors supporting it: (1) reputation, (2)
performance, (3) appearance, (4) accountability, (5) precommitment,
and (6) contextual facilitation.
Abstract: For the characterization of the weld defect region in a radiographic image, it is necessary to look for features that are invariant under geometrical transformations (rotation, translation and scaling), because the same defect can be seen from several angles depending on the orientation and the distance from the welded framework to the radiation source. Thus, a panoply of geometrical attributes satisfying the above conditions is proposed, resulting from the calculation of geometrical parameters (surface, perimeter, etc.) on the one hand and of moments of different orders on the other hand. Because of the large range in values of the raw features, and taking into account other considerations imposed by some classifiers, scaling these values to lie between 0 and 1 is indispensable. The principal component analysis technique is then used to reduce the number of attribute variables, with the aim of giving better performance in the subsequent defect classification.
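The preprocessing pipeline described (min-max scaling to [0, 1] followed by PCA) can be sketched as follows (a generic NumPy sketch with our own function names, not the paper's code):

```python
import numpy as np

def minmax_scale(X):
    """Scale each feature (column) of X to the [0, 1] range."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    rng = np.where(mx > mn, mx - mn, 1.0)  # guard against constant columns
    return (X - mn) / rng

def pca_reduce(X, k):
    """Project X onto its k principal components: the eigenvectors of
    the covariance matrix with the largest eigenvalues."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric eigendecomposition
    top = np.argsort(eigvals)[::-1][:k]     # indices of k largest eigenvalues
    return Xc @ eigvecs[:, top]
```

Here each row of `X` would be one defect's raw geometrical-attribute vector; scaling first keeps large-magnitude features (e.g. surface vs. normalized moments) from dominating the principal components.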