Abstract: Technology transfer of renewable energy technologies (RETs) is very often unsuccessful in the developing world. Aside from challenges with social, economic, financial, institutional and environmental dimensions, technology transfer has generally been misunderstood, being largely seen as the mere delivery of high-tech equipment from developed to developing countries, or, within the developing world, from R&D institutions to society. Technology transfer entails much more, including, but not limited to: entire systems and their component parts, know-how, goods and services, equipment, and organisational and managerial procedures. Means to facilitate the successful transfer of energy technologies, including the sharing of lessons, are therefore extremely important for developing countries as they grapple with increasing energy needs to sustain adequate economic growth and development. Improving the success of technology transfer is an ongoing process: as more projects are implemented, new problems are encountered and new lessons are learnt. Renewable energy is also critical to improving the quality of life of the majority of people in developing countries. In rural areas, energy comes primarily from traditional biomass, which is typically consumed inefficiently, working against the notion of sustainable development. This paper explores the implementation of technology transfer in the developing world (sub-Saharan Africa). The focus is necessarily on RETs, since most rural energy initiatives are RETs-based. Additionally, the paper highlights lessons drawn from the cited renewable energy projects and identifies notable differences where energy technology transfer was judged to be successful. This is done through a literature review based on a selection of documented case studies, which are judged against the definition provided for technology transfer.
This paper also puts forth research recommendations that might contribute to improved technology transfer in the developing world. Key findings include: technology transfer cannot be complete without satisfying pre-conditions such as affordability, maintenance (and associated plans), knowledge and skills transfer, appropriate know-how, ownership and commitment, the ability to adapt technology, sound business principles such as financial viability and sustainability, project management, and relevance, among others. It is also shown that lessons are learnt in both successful and unsuccessful projects.
Abstract: The purpose of this study is to suggest energy-efficient routing for ad hoc networks, which are composed of nodes with limited energy. Among the diverse problems in such networks, the limited energy supply of each node stands out, and the node energy management problem has received attention; a number of protocols have been proposed for energy conservation and energy efficiency. In this study, the critical limitation of EA-MPDSR, an energy-efficient routing scheme that uses only two paths, is addressed and improved upon. The proposed TP-MESR uses a multi-path routing technique and a traffic prediction function to increase the number of paths beyond two. Its efficiency is verified against EA-MPDSR using a network simulator (NS-2). To give the work academic rigor and explain the protocol systematically, the research guidelines suggested by Hevner (2004) are applied. The proposed TP-MESR solves the existing multi-path routing problems related to overhead, radio interference, and packet reassembly, and its contribution to the effective use of energy in ad hoc networks is confirmed.
Abstract: Routing in MANETs is extremely challenging because of their dynamic features, limited bandwidth, frequent topology changes caused by node mobility, and constrained energy consumption. In order to transmit data efficiently to destinations, applicable routing algorithms must be implemented in mobile ad-hoc networks. The efficiency of routing can thus be increased by developing routing algorithms for MANETs that satisfy the Quality of Service (QoS) parameters. Algorithms inspired by the principles of natural biological evolution and the distributed collective behavior of social colonies have shown excellence in dealing with complex optimization problems and are becoming more popular. This paper presents a survey of several meta-heuristic and nature-inspired algorithms.
Abstract: Technological processes yield, in addition to the main product, large amounts of materials called wastes; owing to the possibilities of recovery through recycling and reuse, these can fall into the category of by-products. The large amounts of dust from the steel industry are a major problem in terms of the environment, human health, the landscape, etc. These impressive amounts of waste can be dealt with through proper management and recovery appropriate to each type of waste. This article examines the recovery, through pelletizing and briquetting, of fine and powdery waste, with the aim of obtaining sponge iron as a raw material for use in blast furnaces and electric arc furnaces. The data have been processed in the Excel spreadsheet program and are presented in the form of diagrams.
Abstract: Technological innovation capability (TIC) is defined as a comprehensive set of characteristics of a firm that facilitates and supports its technological innovation strategies. An audit to evaluate the TICs of a firm may trigger improvement in its future practices. Such an audit can be used by the firm for self-assessment or for third-party independent assessment to identify problems in its capability status. This
paper attempts to develop such an auditing framework that
can help to determine the subtle links between innovation
capabilities and business performance; and to enable the
auditor to determine whether good practice is in place. The
seven TICs in this study include learning, R&D, resources
allocation, manufacturing, marketing, organization and
strategic planning capabilities. Empirical data was acquired
through a survey study of 200 manufacturing firms in the
Hong Kong/Pearl River Delta (HK/PRD) region. Structural
equation modelling was employed to examine the
relationships among TICs and various performance indicators:
sales performance, innovation performance, product
performance, and sales growth. The results revealed that
different TICs have different impacts on different
performance measures. Organization capability was found to
have the most influential impact. Hong Kong manufacturers
are now facing the challenge of high-mix-low-volume
customer orders. In order to cope with this change, good
capability in organizing different activities among various
departments is critical to the success of a company.
Abstract: The equivalence class subset algorithm is a powerful
tool for solving a wide variety of constraint satisfaction problems and
is based on the use of a decision function which has a very high but
not perfect accuracy. Perfect accuracy is not required in the decision
function as even a suboptimal solution contains valuable information
that can be used to help find an optimal solution. In the hardest problems, the decision function can break down, leading to a suboptimal solution with more equivalence classes than necessary, which can be viewed as a mixture of good and bad decisions. By choosing a subset of the decisions made in reaching a suboptimal solution, an iterative technique can lead to an optimal solution through a series of steadily improved suboptimal solutions. The goal is to reach an optimal solution as quickly as
possible. Various techniques for choosing the decision subset are
evaluated.
Abstract: In this paper, multi-processor job shop scheduling problems are solved by a heuristic algorithm based on a hybrid of priority dispatching rules guided by an ant colony optimization algorithm. The objective function is to minimize the makespan, i.e. the total completion time, and the simultaneous presence of various kinds of pheromones is allowed. By using a suitable hybrid of priority dispatching rules, the process of finding the best solution is improved. The ant colony optimization algorithm not only enhances the ability of the proposed algorithm but also decreases the total working time, by reducing setup times and modifying the production line so that similar work shares the same production lines. Another advantage of this algorithm is that similar (though not identical) machines can be considered, so these machines are able to process a job with different processing and setup times. To evaluate this capability and the algorithm itself, a number of test problems are solved and the associated results are analyzed. The results show a significant decrease in throughput time. They also show that the algorithm is able to recognize the bottleneck machine and to schedule jobs in an efficient way.
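The hybrid of rules is not detailed in the abstract, but a priority dispatching rule in isolation is easy to illustrate; a minimal single-machine sketch of the classic shortest-processing-time (SPT) rule (an illustrative assumption only — the paper's heuristic combines several such rules under ACO guidance):

```python
def spt_schedule(jobs):
    """Shortest-Processing-Time dispatching: sequence jobs in increasing
    order of processing time, then compute each job's completion time."""
    order = sorted(jobs, key=jobs.get)
    completion, t = {}, 0
    for j in order:
        t += jobs[j]            # the machine finishes job j at time t
        completion[j] = t
    return order, completion

# three jobs with processing times 4, 2 and 3
order, completion = spt_schedule({"A": 4, "B": 2, "C": 3})
print(order)            # ['B', 'C', 'A']
print(completion["A"])  # 9 -- the makespan of this single-machine sequence
```

On a single machine SPT minimizes mean flow time; in a job shop it is only one of many competing priority rules, which is why hybrids such as the one the abstract describes are used.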
Abstract: Modern, highly automated production systems face problems of reliability. Machine reliability determines the productivity rate and the efficient use of expensive industrial facilities. Predicting reliability has become an important research area and involves complex mathematical methods and calculations. The reliability of high-productivity automatic technological machines, which consist of complex mechanical, electrical and electronic components, is important, because the failure of these units results in major economic losses for production systems. The reliability of transport and feeding systems for automatic technological machines is also important, because a transport failure stops the technological machines. This paper presents reliability engineering of the feeding system and its components for transporting complex-shaped parts to automatic machines. It also discusses the calculation of the reliability parameters of the feeding unit by applying probability theory. Equations produced for calculating the limits of the geometrical sizes of feeders and the probability of the transported parts sticking in the chute represent the reliability of the feeders as a function of their geometrical parameters.
Abstract: The emerging Semantic Web has attracted many researchers and developers. New applications have been developed on top of the Semantic Web, and many supporting tools have been introduced to improve its software development process. Metadata modeling is one part of the development process for which supporting tools exist. The existing tools, however, lack the readability and ease of use that a domain expert needs to graphically model a problem as a semantic model. In this paper, a metadata modeling tool called RDFGraph is proposed to solve these problems. RDFGraph is also designed to work with modern database management systems that support RDF and to improve the performance of the query execution process. The testing results show that the rules used in RDFGraph follow the W3C standard and that the graphical models produced in this tool are properly and correctly translated.
Abstract: There are many problems associated with the World Wide Web: getting lost in hyperspace, web content that is still accessible only to humans, and difficulties of web administration. The solution to these problems is the Semantic Web, considered to be an extension of the current Web that presents information in both human-readable and machine-processable form. The aim of this study is to reach a new generic foundation architecture for the Semantic Web, because no clear architecture for it yet exists: there are four versions, but to date there is no agreement on any of them, nor is there a clear picture of the relations between the different layers and technologies inside the architecture. This can be done by building on the ideas of the previous versions as well as on Gerber's evaluation method, as a step toward an agreement on one Semantic Web architecture.
Abstract: The development of Artificial Neural Networks (ANNs) is usually a slow process in which the human expert has to test several architectures until finding the one that achieves the best results for a given problem. This work presents a new technique that uses Genetic Programming (GP) for automatically generating ANNs. To do this, the GP algorithm had to be changed to work with graph structures, so that ANNs can be developed. The technique also allows obtaining simplified networks that solve the problem with a small number of neurons. In order to measure the performance of the system and to compare the results with other ANN development methods based on Evolutionary Computation (EC) techniques, several tests were performed with problems based on some of the most widely used test databases. The comparisons show that the system achieves good results, comparable to the existing techniques and, in most cases, better.
Abstract: Trauma in early life is widely regarded as a cause for
adult mental health problems. This study explores the role of
secondary trauma on later functioning in a sample of 359 university
students enrolled in undergraduate psychology classes in the United
States. Participants were initially divided into four groups based on
1) having directly experienced trauma (assaultive violence), 2)
having directly experienced both trauma and secondary traumatization (through the unanticipated death of a close friend or family member, or the witnessing of an injury or shocking event), 3) having no experience of direct trauma but having experienced indirect trauma (secondary trauma), or 4) reporting no exposure. Participants
completed a battery of measures on concepts associated with
psychological functioning which included measures of
psychological well-being, problem solving, coping and resiliency.
The findings address differences in psychological functioning and resilience between participants who experienced both secondary traumatization and assaultive violence and those who experienced secondary traumatization alone.
Abstract: The paper is concerned with an examination of the state of the art and the problems encountered during post-surgical (orthopedic) rehabilitation of the knee and ankle joints. An overview of current passive rehabilitation devices is presented, the major necessary and basic features of intelligent rehabilitation devices are considered, and an approach for a new intelligent appliance is suggested. The main advantages of the device are both active and passive rehabilitation of the patient, based on the patient's reactions and real-time feedback. The basic components (controller, electrical motor, encoder, and force-torque sensor) are discussed in detail, and the main modes of operation of the device are considered.
Abstract: Power factor (PF) is one of the most important parameters in electrical systems, especially in water pumping stations. A low power factor at a water pumping station incurs a penalty on the electricity bill. There are many methods used for power factor improvement, each of which places a capacitor somewhere on the electrical power network; the position of the capacitors varies depending on factors such as the voltage level and the capacitor ratings. Adding capacitors at the motor terminals increases the supply power factor from 0.8 to more than 0.9, but these capacitors cause problems for the electrical grid network, such as increasing the harmonic content of the grid line voltage. In this paper, the effects of the capacitors used in water pumping stations to improve the power factor on the harmonic content of the electrical grid network are studied. One of the large water pumping stations in Kafr El-Sheikh Governorate, Egypt, was used as a case study. The effect of the capacitors on the line voltage harmonic content was measured. The station uses capacitors to improve the PF values at the 11 kV grid network. The power supply harmonics were measured by a power quality analyzer at different loading conditions. The results show that the capacitors improved the power factor of the feeder, raising it above 0.9, but that the THD values increased when these capacitors were added. The harmonic analysis shows that the 13th, 17th, and 19th harmonic orders were also increased by adding the capacitors.
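For context, the size of correction capacitor behind a figure like "from 0.8 to more than 0.9" follows from elementary power-triangle arithmetic; a minimal sketch (the 100 kW load value is an assumption for illustration, not a figure from the study):

```python
import math

def capacitor_kvar(p_kw, pf_old, pf_new):
    """kVAr a shunt capacitor must supply so that a load drawing p_kw at
    displacement power factor pf_old is corrected to pf_new."""
    return p_kw * (math.tan(math.acos(pf_old)) - math.tan(math.acos(pf_new)))

# hypothetical 100 kW pump load corrected from 0.8 to 0.9 lagging
q = capacitor_kvar(100.0, 0.8, 0.9)
print(round(q, 1))  # 26.6 kVAr
```

Note that this displacement-PF arithmetic says nothing about harmonics; as the abstract reports, the same capacitors can worsen THD, which must be measured separately.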
Abstract: The RK5GL3 method is a numerical method for solving
initial value problems in ordinary differential equations, and is based
on a combination of a fifth-order Runge-Kutta method and 3-point
Gauss-Legendre quadrature. In this paper we describe the propagation
of local errors in this method, and show that the global order of
RK5GL3 is expected to be six, one better than the underlying Runge-
Kutta method.
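The 3-point Gauss-Legendre component of RK5GL3 is standard and easy to illustrate; a minimal sketch of that quadrature rule alone (not of the RK5GL3 method itself):

```python
import math

# nodes and weights of the 3-point Gauss-Legendre rule on [-1, 1]
NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def gl3(f, a, b):
    """Approximate the integral of f over [a, b] by 3-point Gauss-Legendre."""
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    return half * sum(w * f(mid + half * x) for x, w in zip(NODES, WEIGHTS))

# the rule is exact for polynomials of degree up to 2*3 - 1 = 5
print(gl3(lambda x: x**4, -1.0, 1.0))  # 0.4 (= 2/5, up to rounding)
```

The high algebraic degree of the Gauss-Legendre nodes is what lets the combined method gain an order over the underlying fifth-order Runge-Kutta scheme.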
Abstract: A new meta-heuristic approach called the Randomized Gravitational Emulation Search algorithm (RGES) has been designed for solving vertex covering problems. The algorithm is founded upon introducing a randomization concept along with two of the four primary physical parameters, 'velocity' and 'gravity'. A new heuristic operator is introduced in RGES to maintain feasibility, specifically for the vertex covering problem, in order to yield the best solutions. The performance of the algorithm has been evaluated on a large set of benchmark problems from the OR-Library. Computational results show that the randomized gravitational emulation search heuristic is capable of producing high-quality solutions. Compared with other existing heuristic algorithms, its performance is found to be excellent in terms of solution quality.
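RGES itself cannot be reproduced from the abstract, but the problem it targets has a well-known baseline; a minimal sketch of the classic greedy 2-approximation for vertex cover (for illustration only, not the RGES heuristic):

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation: scan edges; whenever an edge is uncovered,
    add both of its endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
# every edge has at least one endpoint in the cover
print(all(u in cover or v in cover for u, v in edges))  # True
```

Such a baseline guarantees a cover at most twice the optimum size; metaheuristics like RGES compete by finding covers closer to optimal on hard benchmark instances.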
Abstract: This paper addresses the controller synthesis problem for discrete-time switched positive systems with bounded time-varying delays. Based on the switched copositive Lyapunov function approach, some necessary and sufficient conditions for the existence of a state-feedback controller are presented as a set of linear programming and linear matrix inequality problems, and hence are easy to verify. Another advantage is that the state-feedback law is independent of the time-varying delays and the initial conditions. A numerical example is provided to illustrate the effectiveness and feasibility of the developed controller.
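For context, the copositive Lyapunov idea underlying such conditions is simplest to state for an undelayed, unswitched positive system; a sketch under those simplifying assumptions (not the paper's delayed, switched case):

```latex
x(k+1) = A\,x(k), \qquad A \ge 0 \ \text{(entrywise)}, \qquad
V(x) = v^{\top} x, \quad v \succ 0 .
```

Along trajectories, $\Delta V(k) = v^{\top}(A - I)\,x(k)$, which is negative for every nonzero $x(k) \ge 0$ if and only if $A^{\top} v \prec v$ entrywise; feasibility in $v$ is then a plain linear program, which is the sense in which conditions of this type are easy to verify.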
Abstract: In many fields of study, a phenomenon cannot be studied directly; it is instead examined indirectly through a model of the phenomenon. With an accurate model of a system, new information about the modeled phenomenon can be obtained without cost or danger. Many formalisms have been developed for describing and analyzing today's complicated systems, but few of them can analyze performance within the same framework used for the system description; Petri nets are among the few formalisms that offer such a union. Petri nets are applied to problems of modeling and designing systems: Petri net theory allows a system to be modeled mathematically as a Petri net, and analyzing the Petri net can then determine key information about the modeled system's structure and dynamics. This information can be used for assessing the performance of systems and suggesting corrections to them. In this paper, besides an introduction to Petri nets, a real case study is presented in order to show the application of generalized stochastic Petri nets to modeling a resource-sharing production system and evaluating the efficiency of its machines and robots. The modeling tool used here is the SHARP software, which calculates specific indicators that help in decision making.
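The case-study net is not given in the abstract, but the token-game semantics on which any Petri-net analysis rests can be sketched; a minimal sketch with a hypothetical two-transition net in which a job competes for a single shared machine (all place and transition names are illustrative):

```python
def enabled(marking, pre):
    """Transitions whose every input place holds enough tokens."""
    return [t for t, needs in pre.items()
            if all(marking.get(p, 0) >= n for p, n in needs.items())]

def fire(marking, pre, post, t):
    """Fire transition t: consume its input tokens, produce its outputs."""
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

# hypothetical net: 'start' seizes the shared machine, 'done' releases it
pre  = {"start": {"idle": 1, "job": 1}, "done": {"busy": 1}}
post = {"start": {"busy": 1},           "done": {"idle": 1}}
m = {"idle": 1, "job": 1, "busy": 0}
m = fire(m, pre, post, "start")
print(enabled(m, pre))  # ['done'] -- the machine is now taken
```

Generalized stochastic Petri nets extend this firing rule with exponentially timed and immediate transitions, which is what allows performance indicators (machine and robot utilization) to be computed from the reachable markings.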
Abstract: Since the facilities related to this vital element, water, are underground, it is difficult during natural disasters to obtain quick, accurate and definite information about water utilities. This study was therefore carried out operationally in the city of Boukan in Western Azerbaijan, Iran, and aims to demonstrate the operation and capabilities of Geographical Information Systems (GIS) in urban water management at the time of natural disasters. The structure of the work is as follows: first, a comprehensive database of water utilities was established through data collection, entry, storage and management; then, by modeling the water utilities, their operational aspects related to urban water utility problems were examined in practice.
Abstract: A master plan is a tool to guide and manage the growth of cities in a planned manner, and its soul lies in its implementation framework. If a plan is not implemented, people are trapped in a mess of urban problems and laissez-faire development with serious long-term repercussions. Unfortunately, the master plans prepared for several major cities of Pakistan could not be fully implemented, due to a host of reasons, and Lahore is no exception. The second largest city of Pakistan, with a population of over 7 million people, Lahore holds the distinction that the first ever master plan in the country was prepared for this city, in 1966. Recently, in 2004, a new plan titled 'Integrated Master Plan for Lahore-2021' was approved for implementation. This paper provides a comprehensive account of the weaknesses and constraints in the plan preparation process and the implementation strategies of the master plans prepared for Lahore. It also critically reviews the new master plan, particularly with respect to the proposed implementation framework, and discusses the prospects and pre-conditions for successful implementation of the new plan in the light of historical analysis, interviews with stakeholders, and the new institutional context under the devolution plan.