Abstract: A case study of generation scheduling optimization for multiple hydroplants in the Yuan River Basin in China is reported in this paper. To account for inflow uncertainty, the long/mid-term generation scheduling (LMTGS) problem is solved by a stochastic model in which the inflows are treated as random variables. For the short-term generation scheduling (STGS) problem, a
constraint violation priority is defined for cases in which not all constraints can be satisfied. Given its stage-wise separable structure and low dimensionality, the hydroplant-based operational region schedules (HBORS) problem is solved by dynamic programming (DP). The
coordination of LMTGS and STGS is presented as well. The
feasibility and the effectiveness of the models and solution methods
are verified by the numerical results.
Abstract: Defect prevention is the most vital but most often neglected facet of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, overhead, and resources required to engineer a high-quality product. The key challenge for an IT organization is to engineer a software product with a minimum of post-deployment defects.
This effort is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention, leading to successful software production. The defect prevention techniques studied uncover 99% of defects. Inspection is found to be an essential technique for producing near-defect-free software through enhanced methodologies of aided and unaided inspection schedules. On average, 13%-15% of total project effort spent on inspection and 25%-30% spent on testing are required to eliminate 99%-99.75% of defects.
A comparison of the end results for the five selected projects across the companies is also presented, throwing light on how a particular company may position itself with an appropriate complementary ratio of inspection to testing.
Abstract: In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity, and larger organizations. Companies were therefore forced to look for new product development methods. This paper focuses on two such methods: Design for Manufacturability (DFM) and Concurrent Engineering (CE). The aim of this paper is to examine and analyze these two product development methods. Companies can benefit by shortening the product life cycle, reducing cost, and meeting delivery schedules. This paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The methodology followed in this research is the case study: two companies were examined and analyzed with respect to their product development processes; historical data were collected and interviews conducted at these companies; and, in addition, a survey of the literature and previous research on similar topics was carried out. This paper also attempts to show a cost-benefit analysis of implementation and to estimate the implementation time. From this research, it was found that the two companies did not meet their delivery times to the customer: for some of their most frequently produced products, 50% to 80% are not delivered on time. The companies follow the traditional, sequential design-then-production method of product development, which strongly affects time to market. The case study shows that by implementing these new methods and by forming multidisciplinary teams for design and quality inspection, a company can reduce its workflow from 40 steps to 30.
Abstract: Time-Cost Optimization (TCO) is one of the greatest challenges in construction project planning and control, since optimizing either time or cost usually comes at the expense of the other. Because of the underlying trade-off relationship between project time and cost, it can be difficult to predict whether the total cost will increase or decrease as a result of schedule compression. Recently, a third dimension has been taken into consideration in trade-off analysis, namely the quality of the project. Few of the existing algorithms have been applied to construction projects with three-dimensional trade-off analysis of Time-Cost-Quality relationships. The objective of this paper is to present the development of a practical software system, named the Automatic Multi-objective Typical Construction Resource Optimization System (AMTCROS). This system incorporates the basic concepts of Line Of Balance (LOB) and the Critical Path Method (CPM) in a multi-objective Genetic Algorithms (GAs) model. The main objective of this system is to provide practical support for typical construction planners who need to optimize resource utilization in order to minimize project cost and duration while simultaneously maximizing quality. The application of these research developments in planning typical construction projects holds strong promise to: 1) increase the efficiency of resource use in typical construction projects; 2) reduce the construction duration; 3) minimize construction cost (direct cost plus indirect cost); and 4) improve the quality of new construction projects. A general description of the proposed software for the Time-Cost-Quality Trade-Off (TCQTO) is presented. The main inputs and outputs of the proposed software are outlined, its main subroutines and inference engine are detailed, and its complexity analysis is discussed. In addition, the verification and complexity of the proposed software are demonstrated and tested using a real case study.
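The multi-objective GA at the core of such a system ranks candidate schedules by Pareto dominance over the three objectives. A minimal sketch of that dominance test, assuming time and cost are minimized and quality is maximized (the GA machinery and the LOB/CPM scheduling itself are omitted):

```python
def dominates(a, b):
    """Pareto dominance for (time, cost, quality) objective vectors:
    time and cost are minimized, quality is maximized.
    a dominates b if a is no worse in every objective and strictly
    better in at least one."""
    t1, c1, q1 = a
    t2, c2, q2 = b
    no_worse = t1 <= t2 and c1 <= c2 and q1 >= q2
    strictly_better = t1 < t2 or c1 < c2 or q1 > q2
    return no_worse and strictly_better

def pareto_front(solutions):
    """Keep only the non-dominated (time, cost, quality) alternatives."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

A schedule that is slower, costlier, and of lower quality than some alternative is discarded; schedules that trade one objective against another all survive into the front.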
Abstract: In this paper, we propose a dynamic TDMA slot
reservation (DTSR) protocol for cognitive radio ad hoc networks.
Quality of Service (QoS) guarantee plays a critically important role
in such networks. We consider the problem of providing a QoS guarantee to users while maintaining the most efficient use of scarce bandwidth resources. Based on one-hop neighboring
information and the bandwidth requirement, our proposed protocol
dynamically changes the frame length and the transmission schedule.
A dynamic frame length expansion and shrinking scheme that
controls the excessive increase of unassigned slots has been
proposed. This method efficiently utilizes the channel bandwidth by
assigning unused slots to new neighboring nodes and increasing the
frame length when the number of slots in the frame is insufficient to
support the neighboring nodes. It also shrinks the frame length when
half of the slots in a node's frame are empty. An efficient slot reservation protocol not only guarantees successful data transmissions without collisions but also enhances channel spatial reuse to maximize system throughput. Our proposed scheme, which provides both a QoS guarantee and efficient resource utilization, can be employed to optimize channel spatial reuse and maximize system throughput. Extensive simulation results show that the
proposed mechanism achieves desirable performance in multichannel
multi-rate cognitive radio ad hoc networks.
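The expansion/shrinking rule described above can be sketched as follows; this is a minimal illustration assuming a doubling/halving policy, with the protocol's slot-reservation signalling omitted:

```python
def adjust_frame(frame, required_slots):
    """Dynamic frame length adjustment sketch for a DTSR-style protocol.

    frame: list of slot owners (None = unassigned slot).
    required_slots: slots needed to support all one-hop neighbors.
    Expansion: grow (here, double) the frame when slots are insufficient.
    Shrinking: halve the frame when at least half of its slots are empty.
    """
    if required_slots > len(frame):
        # Not enough slots for the neighboring nodes: expand the frame.
        while len(frame) < required_slots:
            frame = frame + [None] * len(frame)   # double the frame length
    elif frame.count(None) >= len(frame) // 2 and len(frame) > 1:
        # Half (or more) of the slots are empty: shrink the frame,
        # repacking the assigned slots into the front half.
        assigned = [s for s in frame if s is not None]
        half = len(frame) // 2
        frame = (assigned + [None] * half)[:half]
    return frame
```

Since shrinking only fires when at least half the slots are empty, the surviving assignments always fit into the halved frame.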
Abstract: In this paper, a periodic surveillance scheme is proposed for any convex region using mobile wireless sensor nodes. A sensor network typically consists of a fixed number of
sensor nodes which report the measurements of sensed data such as
temperature, pressure, humidity, etc., of its immediate proximity
(the area within its sensing range). To sense an area of interest, an adequate number of fixed sensor nodes is required to cover the entire region. This implies that the number of fixed sensor nodes required to cover a given area depends on the sensing range of the sensors as well as the deployment strategy employed. It is assumed here that the sensors are mobile within the region of surveillance and can be mounted on moving bodies such as robots or vehicles. Therefore, in our
scheme, the surveillance time period determines the number of
sensor nodes required to be deployed in the region of interest.
The proposed scheme comprises three algorithms, namely Hexagonalization, Clustering, and Scheduling. The first algorithm partitions the coverage area into fixed-size hexagons that approximate the sensing range (cell) of an individual sensor node.
The clustering algorithm groups the cells into clusters, each of
which will be covered by a single sensor node. The last algorithm determines a schedule for each sensor to serve its respective cluster.
Each sensor node traverses all the cells belonging to the cluster
assigned to it by oscillating between the first and the last cell for
the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using fewer sensors with minimal movement, lower power consumption, and relatively low infrastructure cost.
Abstract: In this paper, the SFQ (Start-time Fair Queuing) algorithm is analyzed as applied to computer networks, in order to determine how network traffic behaves when different data sources are managed by the scheduler. Using the NS2 software, computer networks were simulated to obtain graphs showing the scheduler's performance. Different traffic sources were introduced in the simulation scripts in an attempt to reproduce a realistic scenario. The results show that, depending on the data source, the traffic can be affected at different levels: when Constant Bit Rate traffic is applied, the scheduler ensures a constant level of data sent and received, although in practice it is impossible to guarantee a level that withstands changes in workload.
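The SFQ tagging rule underlying this analysis can be sketched compactly: each packet receives a start tag (the maximum of the virtual time at arrival and the finish tag of its flow's previous packet) and a finish tag (start tag plus length over weight), and packets are served in increasing start-tag order. A minimal sketch of that rule, not the NS2 implementation:

```python
import heapq

class SFQ:
    """Minimal Start-time Fair Queuing sketch (start/finish tagging only)."""

    def __init__(self):
        self.vtime = 0.0   # virtual time = start tag of packet in service
        self.finish = {}   # finish tag of each flow's previous packet
        self.heap = []     # (start_tag, seq, flow, length), seq breaks ties
        self.seq = 0

    def enqueue(self, flow, length, weight):
        # Start tag: max(virtual time at arrival, flow's previous finish tag).
        start = max(self.vtime, self.finish.get(flow, 0.0))
        self.finish[flow] = start + length / weight
        heapq.heappush(self.heap, (start, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        # Serve packets in increasing order of start tags.
        start, _, flow, length = heapq.heappop(self.heap)
        self.vtime = start   # virtual time advances with the served packet
        return flow, length
```

With two backlogged flows of weights 2 and 1 and equal packet sizes, the heavier flow is served roughly twice as often, which is the fairness property the scheduler provides.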
Abstract: In this paper we propose a novel Run Time Interface
(RTI) technique to provide an efficient environment for MPI jobs on
the heterogeneous architecture of PARAM Padma. It suggests an
innovative, unified framework for the job management interface
system in parallel and distributed computing. This approach employs a proxy scheme. The implementation shows that the proposed RTI is highly scalable and stable. Moreover, RTI provides storage access for MPI jobs on various operating system platforms and improves data access performance through the high-performance C-DAC Parallel File System (C-PFS). The performance of the RTI is evaluated using standard HPC benchmark suites, and the simulation results show that the proposed RTI gives good performance on large-scale supercomputing systems.
Abstract: Value engineering is an effective tool for managerial decision-making. Value studies offer managers a suitable instrument for reducing life-cycle costs, improving quality, improving structure, shortening the construction schedule, prolonging service life, or a combination of these. The pressures placed on planners, their accountability within their respective fields, and the inherent risks and uncertainties of alternative options place some administrators in a dilemma. Given the complexities of implementing projects, the joint use of risk management and value engineering in project management can serve as a tool to identify and eliminate every item that causes unnecessary expense and wasted time without harming essential project functions. It should be noted that implementing risk management and value engineering to improve efficiency and performance may lengthen the project implementation schedule; here, schedule revision does not always mean schedule reduction. This article first deals with the concepts of risk management and value engineering. The relevant outcomes of their application at Iran Khodro Corporation are then considered, together with the common features and integration of the two disciplines; finally, the proposed framework is submitted for use in engineering and industrial projects, including those of Iran Khodro Corporation.
Abstract: An adaptive fuzzy PID controller with gain scheduling is proposed in this paper. The structure of the proposed gain-scheduled fuzzy PID (GS_FPID) controller consists of both a fuzzy PI-like controller and a fuzzy PD-like controller. Both the fuzzy PI-like and PD-like controllers are weighted through adaptive gain scheduling, where the weights are also determined by fuzzy logic inference. A modified genetic algorithm, called the accumulated genetic algorithm, is designed to learn the parameters of the fuzzy inference system. In order to learn the number of fuzzy rules required for the TSK model, the fuzzy rules are learned in an accumulated way; in other words, the parameters learned in the previous rules are accumulated and updated along with the parameters of the current rule. It is shown that the proposed GS_FPID controllers learned by the accumulated GA perform well not only for regular linear systems but also for higher-order and time-delayed systems.
Abstract: The usual correctness condition for a schedule of
concurrent database transactions is some form of serializability of
the transactions. For general forms, the problem of deciding whether
a schedule is serializable is NP-complete. In those cases other approaches
to proving correctness, using proof rules that allow the steps
of the proof of serializability to be guided manually, are desirable.
Such an approach is possible in the case of conflict serializability
which is proved algebraically by deriving serial schedules using
commutativity of non-conflicting operations. However, conflict serializability
can be an unnecessarily strong form of serializability restricting
concurrency and thereby reducing performance. In practice,
weaker, more general, forms of serializability for extended models of
transactions are used. Currently, there are no known methods using
proof rules for proving those general forms of serializability. In this
paper, we define serializability for an extended model of partitioned
transactions, which we show to be as expressive as serializability
for general partitioned transactions. An algebraic method for proving
general serializability is obtained by giving an initial-algebra specification
of serializable schedules of concurrent transactions in the
model. This demonstrates that it is possible to conduct algebraic
proofs of correctness of concurrent transactions in general cases.
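The conflict serializability mentioned above also admits the standard precedence-graph decision procedure: build an edge Ti → Tj for each pair of conflicting operations in which Ti acts first, and accept the schedule exactly when the graph is acyclic. A minimal sketch of that classical test (distinct from the algebraic proof method developed in the paper):

```python
def conflict_serializable(schedule):
    """Decide conflict serializability via the precedence graph.

    schedule: list of (transaction, op, item) with op in {"r", "w"}.
    Two operations conflict if they belong to different transactions,
    access the same item, and at least one of them is a write.
    The schedule is conflict serializable iff the graph is acyclic.
    """
    graph = {}
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                graph.setdefault(t1, set()).add(t2)  # edge t1 -> t2

    def cyclic(node, stack, done):
        # Depth-first search; a node revisited on the current path = cycle.
        if node in stack:
            return True
        if node in done:
            return False
        stack.add(node)
        found = any(cyclic(nxt, stack, done) for nxt in graph.get(node, ()))
        stack.discard(node)
        done.add(node)
        return found

    return not any(cyclic(n, set(), set()) for n in graph)
```

For example, the schedule r1(x) w2(x) w1(x) yields edges T1 → T2 and T2 → T1, a cycle, so it is not conflict serializable.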
Abstract: In this paper we address a multi-objective scheduling problem for unrelated parallel machines. In unrelated parallel systems, the processing cost/time of a given job may vary across machines. The objective of scheduling is to simultaneously determine the job-machine assignment and the job sequencing on each machine such that the total cost of the schedule is minimized. The cost function consists of three components, namely machining cost, earliness/tardiness penalties, and makespan-related cost. Such a scheduling problem is combinatorial in nature; therefore, a Simulated Annealing approach is employed to provide good solutions within reasonable computational times. Computational results show that the proposed approach can efficiently solve such complicated problems.
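A minimal sketch of such a Simulated Annealing approach, here with an illustrative makespan objective standing in for the paper's three-component cost function:

```python
import math
import random

def anneal(proc, t0=10.0, cooling=0.995, steps=5000, seed=0):
    """Simulated annealing for unrelated parallel machines (sketch).

    proc[j][m] = processing time of job j on machine m (machine-dependent,
    hence "unrelated"). Objective minimized here: makespan.
    """
    rng = random.Random(seed)
    n_jobs, n_mach = len(proc), len(proc[0])

    def makespan(assign):
        loads = [0.0] * n_mach
        for j, m in enumerate(assign):
            loads[m] += proc[j][m]
        return max(loads)

    # Start from a random job-to-machine assignment.
    current = [rng.randrange(n_mach) for _ in range(n_jobs)]
    best, best_val = current[:], makespan(current)
    cur_val, temp = best_val, t0
    for _ in range(steps):
        # Neighborhood move: reassign one random job to a random machine.
        cand = current[:]
        cand[rng.randrange(n_jobs)] = rng.randrange(n_mach)
        cand_val = makespan(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cand_val <= cur_val or rng.random() < math.exp((cur_val - cand_val) / temp):
            current, cur_val = cand, cand_val
            if cur_val < best_val:
                best, best_val = current[:], cur_val
        temp *= cooling
    return best, best_val
```

The geometric cooling schedule and single-job reassignment neighborhood are common choices, not necessarily those of the paper; the real cost function would add machining cost and earliness/tardiness penalties to the evaluation.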
Abstract: Producing companies aspire to high delivery availability despite the disruptions that arise. To ensure high delivery availability, safety stocks are required. However, safety stock leads to additional capital commitment and compensates for disruptions instead of resolving their causes. The intention is to increase the stability of production by configuring production planning and control systematically, so that safety stock can be reduced. The largest proportion of inventory in producing companies is caused by batch inventory, schedule deviations, and variability of demand rates. These causes of high inventory levels can be reduced by configuring production planning and control specifically, and hence the inventory level can be reduced. This is enabled by synchronizing the lot sizes, smoothing the demand, and optimizing order release, sequencing, and capacity control.
Abstract: The main objective of this work is to compare the
quality of service of the bus companies operating in the city of Rio
Branco, located in the state of Acre with the quality of service of the
bus companies operating in the city of Campos, situated in the state
of Rio de Janeiro, both cities in Brazil. This comparison, based on the
opinion of the bus users, will determine their degree of satisfaction
with the service available in both cities. The outcome of this
evaluation shows that users are unhappy with the quality of the service provided by the bus companies operating in both cities, and highlights the need to identify alternative solutions that may minimize the consequences of the main problems detected in this work. With these
alternatives available, the bus companies will be able to better
understand the needs of their customers in terms of manpower,
service cost, time schedule, etc.
Abstract: The job shop scheduling problem (JSSP) is a
notoriously difficult problem in combinatorial optimization. This
paper presents a hybrid artificial immune system for the JSSP with the
objective of minimizing makespan. The proposed approach combines
the artificial immune system, which has a powerful global exploration
capability, with the local search method, which can exploit the optimal
antibody. The antibody coding scheme is based on the operation based
representation. The decoding procedure limits the search space to the
set of full active schedules. In each generation, a local search heuristic
based on the neighborhood structure proposed by Nowicki and
Smutnicki is applied to improve the solutions. The approach is tested
on 43 benchmark problems taken from the literature and compared
with other approaches. The computational results validate the effectiveness of the proposed algorithm.
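The operation-based representation mentioned above can be illustrated with a minimal decoder: the k-th occurrence of job j in the chromosome denotes job j's k-th operation, so every permutation with repetition decodes to a feasible schedule. This sketch produces semi-active schedules; the paper's restriction to full active schedules requires a more elaborate decoding that is omitted here:

```python
def makespan(chromosome, jobs):
    """Decode an operation-based JSSP chromosome and return its makespan.

    jobs[j] = ordered list of (machine, duration) for job j.
    chromosome = list of job indices; the k-th occurrence of job j
    stands for job j's k-th operation.
    """
    next_op = [0] * len(jobs)     # next operation index per job
    job_ready = [0] * len(jobs)   # completion time of each job's last op
    mach_ready = {}               # completion time of each machine's last op
    end = 0
    for j in chromosome:
        machine, duration = jobs[j][next_op[j]]
        # An operation starts once both its job and its machine are free.
        start = max(job_ready[j], mach_ready.get(machine, 0))
        finish = start + duration
        job_ready[j] = finish
        mach_ready[machine] = finish
        next_op[j] += 1
        end = max(end, finish)
    return end
```

For instance, with two jobs of two operations each, the chromosome [0, 1, 0, 1] schedules job 0's first operation, then job 1's first, and so on, and the decoder returns the completion time of the latest operation.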
Abstract: In this paper, an enhanced ground proximity warning simulation and validation system is designed and implemented. First, based on a square grid and sub-grid structure, the global digital terrain database is designed and constructed. Terrain data searching is implemented by querying the latitude and longitude bands and separated zones of the global terrain database with the current aircraft position. A combination of dynamic scheduling and hierarchical scheduling is adopted to schedule the terrain data, so that terrain data can be read and deleted dynamically in memory. Secondly, using information such as the scope, distance, and approach speed of the dangerous terrain ahead, and using a security-profile calculation method, collision threat detection is executed in real time, providing caution and warning alarms. According to this scheme, the enhanced ground proximity warning simulation system is implemented. Simulations are carried out to verify real-time performance in terrain display and alarm triggering, and the results show that the simulation system is correct, reasonable, and stable.
Abstract: WiMAX is defined as Worldwide Interoperability for
Microwave Access by the WiMAX Forum, formed in June 2001 to
promote conformance and interoperability of the IEEE 802.16
standard, officially known as WirelessMAN. The attractive features
of WiMAX technology are very high throughput and Broadband
Wireless Access over a long distance. A detailed simulation
environment is demonstrated with the UGS, nrtPS and ertPS service
classes for throughput, delay and packet delivery ratio for a mixed
environment of fixed and mobile WiMAX. A simple mobility aspect
is considered for the mobile WiMAX and the PMP mode of
transmission is considered in TDD mode. The Network Simulator 2
(NS-2) is the tool which is used to simulate the WiMAX network
scenario. A simple Priority Scheduler and Weighted Round Robin schedulers are the WiMAX schedulers used in this research work.
Abstract: In the self-stabilizing algorithmic paradigm, each node has a local view of the system, yet in a finite amount of time the system converges to a global state with the desired property. In a graph G = (V, E), a subset S ⊆ V is a 2-packing if for all i ∈ V: |N[i] ∩ S| ≤ 1.
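The 2-packing condition can be checked directly from its definition, closed neighborhood by closed neighborhood. A minimal (centralized, non-self-stabilizing) sketch:

```python
def is_2packing(adj, S):
    """Check whether S is a 2-packing: for every vertex i, the closed
    neighborhood N[i] = adj[i] union {i} contains at most one vertex of S.

    adj: dict mapping each vertex to the set of its neighbors.
    """
    for i in adj:
        closed = adj[i] | {i}          # closed neighborhood N[i]
        if len(closed & S) > 1:
            return False
    return True
```

Equivalently, S is a 2-packing when every two distinct vertices of S lie at distance greater than 2; on a path a-b-c-d-e, {a, d} qualifies while {a, c} does not.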
Abstract: Trends in business intelligence, e-commerce and
remote access make it necessary and practical to store data in
different ways on multiple systems with different operating systems.
As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving business problems. This DTS package was developed for the sale of a variety of plants and eventually expanded into a commercial supply and landscaping business. Dimensional data modeling is used in the DTS package to extract, transform, and load data from heterogeneous database systems such as MySQL, Microsoft Access, and Oracle, consolidating it into a Data Mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. DTS is therefore an attractive solution for automatic data transfer and update that meets today's business needs.
Abstract: Intuitively, we expect distributed software development to increase the risks associated with achieving cost, schedule, and quality goals. Compounding this problem, agile software development (ASD) holds that one of the main ingredients of its success is the cohesive communication afforded by collocation of the development team. The following study identified
the degree of communication richness needed to achieve comparable
software quality (reduce pre-release defects) between distributed and
collocated teams. This paper explores the relevancy of
communication richness in various development phases and its
impact on quality. Through examination of a large distributed agile
development project, this investigation seeks to understand the levels
of communication required within each ASD phase to produce
comparable quality results achieved by collocated teams. Obviously,
a multitude of factors affects the outcome of software projects.
However, within distributed agile software development teams, the
mode of communication is one of the critical components required to
achieve team cohesiveness and effectiveness. As such, this study
constructs a distributed agile communication model (DAC-M) for
potential application to similar distributed agile development efforts
using the measurement of the suitable level of communication. The
results of the study show that less rich communication methods, in
the appropriate phase, might be satisfactory to achieve equivalent
quality in distributed ASD efforts.