Abstract: Interpolated contour maps drawn for aluminum, copper, and molybdenum in the downstream monitoring boreholes of the water dam at the Miduk Copper Complex, together with the values of pH, redox potential (Eh), and distance from the water dam, indicate different trends of variation and behavior for these three elements in the downstream groundwater resources. As these maps show, aluminum is dominant in the most alkaline borehole (MB5, pH = 9-11). The highest concentration of molybdenum is found in the borehole nearest to the water dam (MB6). The main concentration of copper is observed in the most oxidized borehole (MB3, with Eh = 293.2 mV). The spatial differences among the sampling stations can be attributed to the existence of faults and diaclases (joints) in the geologic structure of the Miduk region, which causes the groundwater sampling sites to be affected by different contamination sources (toe seepage and upper seepage water originating from different zones of the tailings dump).
Abstract: In Blind Source Separation (BSS) processing, taking advantage of the scaling-factor indeterminacy and building on the floating-point representation, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs an Automatic Gain Control (AGC) in an on-line BSS environment. We demonstrate its effectiveness using the implementation of a division-free BSS algorithm with two inputs and two outputs. The technique is computationally cheap and efficient for hardware implementation.
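As a rough illustration of the idea (the function name, thresholds, and test values below are invented for this sketch, not taken from the paper): restricting the scale factor to powers of two means rescaling only changes the floating-point exponent of each matrix entry, so the AGC needs no division or true multiplication.

```python
def rescale_row(row, y_amplitude, lo=0.25, hi=1.0):
    """Scale a separation-matrix row by a power of two so that the
    recovered source amplitude stays inside [lo, hi): an AGC applied
    to the matrix itself rather than to the output signal."""
    scale = 1.0
    a = y_amplitude
    while a >= hi:            # output would saturate -> halve the row
        a *= 0.5
        scale *= 0.5
    while 0.0 < a < lo:       # output too weak -> double the row
        a *= 2.0
        scale *= 2.0
    # multiplying by scale (a power of two) is exact in floating point
    return [w * scale for w in row], scale

row, s = rescale_row([0.8, -0.3], y_amplitude=3.2)
```

Because the scale is a power of two, applying it never introduces rounding error, which is exactly the property the floating-point representation provides for free.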
Abstract: Server provisioning is one of the most attractive topics in virtualization systems. Virtualization is a method of running multiple independent virtual operating systems on a single physical computer; it is a way of maximizing physical resources and thereby the investment in hardware. Additionally, it can help to consolidate servers, improve hardware utilization, and reduce the consumption of power and physical space in the data center. However, managing heterogeneous workloads, especially the resource utilization of the server (so-called provisioning), becomes a challenge. In this paper, a new concept for managing workloads based on user behavior is presented. The experimental results show that user behavior differs with the type of service workload and with time. Understanding user behavior may improve the efficiency of provisioning management. This preliminary study may be an approach to improving the management of data centers running heterogeneous workloads for provisioning in virtualization systems.
Abstract: This paper presents the hardware design of a unified architecture to compute the efficient 4x4, 8x8, and 16x16 two-dimensional (2-D) transforms for the HEVC standard. The architecture is based on fast integer transform algorithms and is designed only with adders and shifts in order to reduce the hardware cost significantly. The goal is to ensure maximum circuit reuse during the computation while reducing the number of operations by 40%. The architecture uses FIFOs to compute the second dimension. The proposed hardware was implemented in VHDL; the RTL code runs at 240 MHz on an Altera Stratix III FPGA. The number of cycles varies from 33 for the 4-point 2-D DCT to 172 for the 16-point 2-D DCT. Results show frequency improvements reaching 96% compared to an architecture described as a direct transcription of the algorithm.
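For concreteness, the HEVC 4-point core transform (rows [64 64 64 64], [83 36 -36 -83], [64 -64 -64 64], [36 -83 83 -36]) can be computed multiplier-free, since 83 = 64+16+2+1 and 36 = 32+4. The sketch below shows the shift-and-add butterfly and checks it against the plain matrix product; the code structure is only illustrative, not the paper's VHDL architecture.

```python
def dct4_butterfly(x):
    """4-point HEVC forward core transform using only adds and shifts.
    Constants 64, 83, 36 are the HEVC integer transform coefficients:
    83*v = (v<<6)+(v<<4)+(v<<1)+v and 36*v = (v<<5)+(v<<2)."""
    e0, e1 = x[0] + x[3], x[1] + x[2]   # even-part butterfly
    o0, o1 = x[0] - x[3], x[1] - x[2]   # odd-part butterfly
    m64 = lambda v: v << 6
    m83 = lambda v: (v << 6) + (v << 4) + (v << 1) + v
    m36 = lambda v: (v << 5) + (v << 2)
    return [m64(e0) + m64(e1),
            m83(o0) + m36(o1),
            m64(e0) - m64(e1),
            m36(o0) - m83(o1)]

# Cross-check against the direct matrix product (the "transcription").
M = [[64, 64, 64, 64], [83, 36, -36, -83],
     [64, -64, -64, 64], [36, -83, 83, -36]]
x = [1, 2, 3, 4]
ref = [sum(M[i][j] * x[j] for j in range(4)) for i in range(4)]
```

The butterfly needs 4 additions/subtractions plus the shift-add constant multiplies, versus 16 multiplications and 12 additions for the direct product, which is the kind of operation saving the abstract quantifies.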
Abstract: In this paper an efficient implementation of the RIPEMD-160 hash function is presented. Hash functions are a special family of cryptographic algorithms used in technological applications with requirements for security, confidentiality, and validity. Widely used applications such as PKI, IPSec, DSA, and MACs incorporate hash functions. RIPEMD-160 emerged from the need for algorithms that are very strong against cryptanalysis. The proposed hardware implementation can be synthesized easily for a variety of FPGA and ASIC technologies. Simulation results, using commercial tools, verified the efficiency of the implementation in terms of performance and throughput. Special care has been taken so that the proposed implementation does not introduce extra design complexity, while functionality is kept at the required level.
Abstract: High-level synthesis (HLS) is a process that generates a register-transfer-level design for a digital system from a behavioral description. There are many HLS algorithms and commercial tools. However, most of these algorithms consider the behavioral description of the system when a single token is presented to it. This approach does not exploit extra hardware efficiently, especially in the design of digital filters, where common operations may exist between successive tokens. In this paper, we modify the behavioral description to process multiple tokens in parallel. Unlike full parallel processing, which requires full hardware replication, this approach exploits the presence of common operations between successive tokens. The performance of the proposed approach is better than that of sequential processing and approaches that of full parallel processing as hardware resources are increased.
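A minimal illustration of such inter-token common operations (the filter is a toy moving sum, not one of the paper's designs): computing each output from scratch redoes k-1 additions per token, whereas reusing the previous token's result needs only one addition and one subtraction.

```python
def moving_sum_naive(x, k):
    """Each output window is summed from scratch: k-1 adds per token."""
    return [sum(x[i - k + 1:i + 1]) for i in range(k - 1, len(x))]

def moving_sum_shared(x, k):
    """Reuse the common partial sum shared with the previous token:
    one add and one subtract per token after the first window."""
    out = [sum(x[:k])]                     # first window computed in full
    for i in range(k, len(x)):
        out.append(out[-1] + x[i] - x[i - k])
    return out

x = [3, 1, 4, 1, 5, 9, 2, 6]
assert moving_sum_naive(x, 3) == moving_sum_shared(x, 3)
```

In hardware terms, the shared version needs two adders regardless of k, which is the kind of saving the modified behavioral description exposes to the HLS tool.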
Abstract: In this paper, we demonstrate adaptive least-mean-square (LMS) filter modeling using SystemC. SystemC is a modeling language that allows designers to model both hardware and software components, and it makes it possible to design from high to low levels of abstraction. We produced five adaptive LMS filter models, classified into five abstraction levels using SystemC, proceeding from the most abstract model to the most concrete one.
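The paper's models are written in SystemC; purely to illustrate what the LMS algorithm itself does at the most abstract level (the step size, signal length, and the 2-tap "unknown system" below are invented for the sketch), a plain implementation is:

```python
import random

def lms_identify(x, d, n_taps, mu):
    """Adaptive LMS filter: adjust tap weights w so the filter output
    tracks the desired signal d. x: input samples, mu: step size."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    for xi, di in zip(x, d):
        buf = [xi] + buf[:-1]                       # shift the delay line
        y = sum(wi * bi for wi, bi in zip(w, buf))  # filter output
        e = di - y                                  # error signal
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]  # LMS update
    return w

random.seed(0)
h = [0.5, -0.25]                                    # unknown system to identify
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [h[0] * x[i] + (h[1] * x[i - 1] if i else 0.0) for i in range(len(x))]
w = lms_identify(x, d, n_taps=2, mu=0.1)            # w converges toward h
```

Each of the five SystemC abstraction levels would refine this behavior toward cycle-accurate hardware, but the adaptation loop above is the common functional core.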
Abstract: This study investigated the morphology of the Spanner Barb (Puntius lateristriga Valenciennes, 1842) and the water quality at Thepchana Waterfall. The study was conducted at Thepchana Waterfall, Khao Nan National Park, from March to May 2007. Forty Spanner Barb were collected: 20 males and 20 females. Males averaged 5.57 cm in standard length, 6.62 cm in total length, and 5.18 g in total body weight. Females averaged 7.25 cm in standard length, 8.24 cm in total length, and 10.96 g in total body weight. The length (L)-weight (W) relationships for combined sexes, males, and females were log W = -2.137 + 3.355 log L, log W = -0.068 + 3.297 log L, and log W = -2.068 + 3.297 log L, respectively. The Spanner Barb is a small fish with a compressed form; terminal mouth; villiform teeth; ctenoid scales; concave tail; a generally yellowish-olive body color, with a slight reddish tint to the fins; a vertical band beginning below the dorsal fin and a horizontal stripe from the base of the tail almost to that vertical band. They also had a vertical band midway between the eye and the first vertical band, and a black spot above the anal fin. The bladder was J-shaped; small insects and insect larvae were found inside it. The ratio of body length to gut length was 1:1. The water temperature ranged from 25.00 to 27.00 °C, which was appropriate for their habitat. pH ranged from 6.65 to 6.90. Dissolved oxygen ranged from 4.55 to 4.70 mg/l. Water hardness ranged from 31.00 to 48.00 mg/l. The amount of ammonia was about 0.25 mg/l.
Abstract: Encryption protects communication partners from disclosure of their secret messages but cannot prevent traffic analysis and the leakage of information about "who communicates with whom". In the presence of collaborating adversaries, this linkability of actions can endanger anonymity. However, reliably providing anonymity is crucial in many applications. Especially in context-aware mobile business, where mobile users equipped with PDAs request and receive services from service providers, providing anonymous communication is mission-critical and challenging at the same time. Firstly, the limited performance of mobile devices does not allow for heavy use of the expensive public-key operations commonly used in anonymity protocols. Moreover, the demands for security depend on the application (e.g., a mobile dating service vs. a pizza delivery service), and different users (e.g., a celebrity vs. an ordinary person) may require different security levels even for the same application. Considering both the hardware limitations of mobile devices and the differing sensitivities of users, we propose an anonymity framework that is dynamically configurable according to user and application preferences. Our framework is based on Chaum's mix-net. We explain the proposed framework, its configuration parameters for dynamic behavior, and the algorithm to enforce dynamic anonymity.
Abstract: In H.264/AVC video encoding, rate-distortion optimization for mode selection plays a significant role in achieving outstanding compression efficiency and video quality. However, this mode selection process also makes encoding extremely complex, especially the computation of the rate-distortion cost function, which includes the sum of squared differences (SSD) between the original and reconstructed image blocks and the context-based entropy coding of the block. In this paper, a transform-domain rate-distortion optimization accelerator based on fast SSD (FSSD) and a VLC-based rate estimation algorithm is proposed. The algorithm significantly simplifies the hardware architecture for the rate-distortion cost computation with only negligible performance degradation. An efficient hardware structure for implementing the proposed accelerator is also presented. Simulation results demonstrate that the proposed algorithm reduces total encoding time by about 47% with negligible degradation of coding performance. The proposed method can easily be applied to many mobile video applications such as digital cameras and DMB (Digital Multimedia Broadcasting) phones.
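The transform-domain distortion computation rests on (near-)orthonormality: for an orthonormal transform T, ||x - y||^2 = ||Tx - Ty||^2 (Parseval), so the SSD can be evaluated directly on transform coefficients without reconstructing the pixel block. A small self-contained check with an exact orthonormal DCT-II; the codec's integer transform is only approximately orthonormal, so this is illustrative rather than the paper's FSSD algorithm.

```python
import math, random

def dct_matrix(n):
    """Orthonormal DCT-II matrix (an illustrative stand-in for the
    codec's scaled integer transform)."""
    m = []
    for k in range(n):
        c = math.sqrt((1 if k == 0 else 2) / n)
        m.append([c * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

random.seed(1)
n = 8
orig = [random.uniform(-128, 128) for _ in range(n)]
recon = [random.uniform(-128, 128) for _ in range(n)]
T = dct_matrix(n)

# Pixel-domain SSD vs. SSD of the coefficient difference: identical
# for an orthonormal transform.
ssd_pixel = sum((a - b) ** 2 for a, b in zip(orig, recon))
diff = [a - b for a, b in zip(matvec(T, orig), matvec(T, recon))]
ssd_coeff = sum(d ** 2 for d in diff)
```

This identity is what lets the accelerator fold distortion computation into the transform/quantization pipeline instead of running a separate reconstruction path.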
Abstract: In an online context, the design and implementation of effective remote laboratory environments is highly challenging on account of hardware and software needs. This paper presents a remote laboratory software framework modified from the iLab Shared Architecture (ISA). The ISA is a framework that enables students to remotely access and control experimental hardware over the internet. The need for remote laboratories arose from problems with traditional laboratories: the high cost of laboratory equipment, scarcity of space, and scarcity of technical personnel, combined with restricted university budgets, create a significant bottleneck in building the required laboratory experiments. The solution to these problems is to build web-accessible laboratories. Remote laboratories allow students and educators to interact with real laboratory equipment located anywhere in the world at any time. Recently, many universities and other educational institutions, especially in developing countries, have relied on simulations because they cannot afford the experimental equipment their students require. Remote laboratories enable users to get real data from real-time hands-on experiments. To implement many remote laboratories, the system architecture should be flexible, understandable, and easy to implement, so that different laboratories with different hardware can be deployed easily. The modifications were made to enable developers to add more equipment to the ISA framework and to attract new developers to build more online laboratories.
Abstract: In computer science and mathematics, a sorting algorithm is an algorithm that puts the elements of a list in a certain order, i.e., ascending or descending. Sorting is perhaps the most widely studied problem in computer science and is frequently used as a benchmark of a system's performance. This paper presents a comparative performance study of four sorting algorithms on different platforms. For each machine, it is found that the best-performing algorithm depends on the number of elements to be sorted. In addition, as expected, the results show that the relative performance of the algorithms differs across machines. Thus, algorithm performance depends on data size, and hardware also has an impact.
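A minimal version of such a benchmark (the algorithm choice, input sizes, and timing method here are illustrative; the paper's four algorithms and platforms are not reproduced): an O(n^2) sort can beat an O(n log n) sort on small inputs, and the crossover point shifts with the machine.

```python
import random, time

def insertion_sort(a):
    """O(n^2) in-place insertion sort on a copy of the input."""
    a = a[:]
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """O(n log n) top-down merge sort."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

random.seed(42)
timings = {}
for n in (200, 2000):
    data = [random.randint(0, 10 ** 6) for _ in range(n)]
    for fn in (insertion_sort, merge_sort, sorted):
        t0 = time.perf_counter()
        out = fn(data)
        timings[(fn.__name__, n)] = time.perf_counter() - t0
        assert out == sorted(data)      # correctness before speed
```

Running the loop over several data sizes per machine is exactly the experiment design the abstract describes.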
Abstract: Software maintenance is an extremely important activity in the software development life cycle, involving considerable human effort, cost, and time. Software maintenance may be subdivided into different activities such as fault prediction, fault detection, fault prevention, and fault correction. This topic has gained substantial attention due to sophisticated and complex applications, commercial hardware, clustered architectures, and artificial intelligence. In this paper we survey the work done in the field of software maintenance. Software fault prediction has been studied in the context of fault-prone modules, self-healing systems, developer information, maintenance models, etc. Much remains to be explored in the field of fault severity, such as modeling and weighting the impact of different kinds of faults in various types of software systems.
Abstract: Music Information Retrieval (MIR) and modern data mining techniques are applied to identify style markers in MIDI music for stylometric analysis and author attribution. Over 100 attributes are extracted from a library of 2830 songs and then mined using supervised learning techniques. Two attributes are identified that provide high information gain. These attributes are then used as style markers to predict authorship. Using these style markers, the authors are able to correctly distinguish songs written by the Beatles from those that were not, with a precision and accuracy of over 98 percent. The identification of these style markers, together with the architecture of this research, provides a foundation for future work in musical stylometry.
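The attribute-selection criterion mentioned above, information gain, is the reduction in class entropy obtained by splitting on an attribute. A small sketch (the attribute names and toy data below are invented, not the paper's 100+ MIDI attributes):

```python
import math

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(rows, labels, attr):
    """Class entropy minus the weighted entropy after splitting on attr."""
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    return entropy(labels) - sum(len(part) / len(labels) * entropy(part)
                                 for part in split.values())

rows = [{"tempo": "fast", "key": "major"},
        {"tempo": "fast", "key": "minor"},
        {"tempo": "slow", "key": "major"},
        {"tempo": "slow", "key": "minor"}]
labels = ["beatles", "beatles", "other", "other"]
# "tempo" separates the classes perfectly (gain 1 bit); "key" tells nothing.
```

Attributes with the highest gain over the song library are the ones retained as style markers.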
Abstract: In the recent past, there has been an increasing interest
in applying evolutionary methods to Knowledge Discovery in
Databases (KDD) and a number of successful applications of Genetic
Algorithms (GA) and Genetic Programming (GP) to KDD have been
demonstrated. The predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an
augmented production rule of the form:
If P Then D Unless C, where C (Censor) is an exception to the rule.
Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. Using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are limited, or when no information is available as to whether they hold. Thus, the 'If P Then D' part of a CPR expresses the important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D.
This paper presents a classification algorithm, based on an evolutionary approach, that discovers comprehensible rules with exceptions in the form of CPRs.
The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
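A minimal sketch of evaluating a CPR "If P Then D Unless C" and an accuracy-style fitness over the examples it covers (the predicate names, toy data, and fitness definition are illustrative assumptions, not the paper's encoding or fitness function):

```python
def cpr_predict(p, c, example):
    """Fire the rule on one example: conclude D when P holds,
    unless the censor C also holds (which flips D to ~D)."""
    if not p(example):
        return None                      # rule does not apply
    return not c(example)                # True = D, False = ~D

def cpr_fitness(p, c, examples, labels):
    """Accuracy over the examples the rule covers (P holds)."""
    covered = [(cpr_predict(p, c, e), y) for e, y in zip(examples, labels)]
    covered = [(pred, y) for pred, y in covered if pred is not None]
    if not covered:
        return 0.0
    return sum(pred == y for pred, y in covered) / len(covered)

# Toy rule: "If raining Then take_umbrella Unless indoors".
examples = [{"raining": True,  "indoors": False},
            {"raining": True,  "indoors": True},
            {"raining": False, "indoors": False}]
labels = [True, False, True]             # take_umbrella?
p = lambda e: e["raining"]
c = lambda e: e["indoors"]
```

In the GA, each chromosome would encode the P and C predicates, and a fitness of this general shape would additionally reward C holding rarely among the covered examples.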
Abstract: The balance between nitrogen loading and runoff in the forested headwater streams of the Kanna River was estimated to elucidate the current status of nitrogen saturation in a forested watershed. The NO3-N concentration in the study area was far higher than the average value in Japan. Estimated nitrogen runoff accounted for 55-57% of nitrogen loading, suggesting that the forest's nitrogen retention capacity is most likely in decline. Since the 1970s, Japan's forestry industry has been declining due to decreased lumber demand and an increase in cheap imported materials. This decline will likely contribute significantly to further nitrogen saturation in forest ecosystems.
Abstract: As hardware technology advances, the cost of storage is decreasing, so there is an urgent need for new techniques and tools that can intelligently and automatically assist us in transforming this data into useful knowledge. Different data mining techniques have been developed to handle such large databases [7]. Data mining is also finding a role in biotechnology. A pedigree is the associated ancestry of a crop variety. Genetic diversity is the variation in genetic composition among individuals within or among species, and it depends on the pedigree information of the varieties. Parents at lower hierarchic levels carry more weight in predicting genetic diversity than those at upper hierarchic levels; the weight decreases as the level increases. For crossbreeding, the two varieties should be as genetically diverse as possible so as to combine the useful characters of both in the newly developed variety. This paper discusses searching for and analyzing different possible pairs of varieties, selected on the basis of morphological characters, climatic conditions, and nutrients, to obtain the most suitable pair for producing the required crossbred variety. An algorithm was developed to determine the genetic diversity between the selected wheat varieties, and cluster analysis was used to retrieve the results.
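One way to realize the level-dependent weighting (a sketch under stated assumptions: the weight 1/2^level, the data structure, and the overlap measure below are invented for illustration; the paper's algorithm is not reproduced):

```python
def weighted_ancestry(pedigree, variety):
    """Map each ancestor to a weight 1/2**level, so parents count more
    than grandparents; keep the highest weight if an ancestor recurs."""
    weights = {}
    stack = [(p, 1) for p in pedigree.get(variety, ())]
    while stack:
        anc, lvl = stack.pop()
        weights[anc] = max(weights.get(anc, 0.0), 0.5 ** lvl)
        stack.extend((p, lvl + 1) for p in pedigree.get(anc, ()))
    return weights

def diversity(pedigree, a, b):
    """1 minus the weighted share of ancestry the two varieties share."""
    wa, wb = weighted_ancestry(pedigree, a), weighted_ancestry(pedigree, b)
    shared = sum(min(wa[x], wb[x]) for x in wa.keys() & wb.keys())
    return max(0.0, 1.0 - shared)

# Toy pedigree: V1 and V2 share parent P1 and grandparent G1.
pedigree = {"V1": ("P1", "P2"), "V2": ("P1", "P3"),
            "P2": ("G1",), "P3": ("G1",)}
```

Pairs with the highest diversity score under such a measure would be the crossbreeding candidates passed on to cluster analysis.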
Abstract: Intelligent schools are those that use IT devices and technologies, such as software, hardware, and networks, as media to improve the learning process. Strategic management, on the other hand, is a field that deals with the major intended and emergent initiatives taken by general managers on behalf of owners, involving the utilization of resources to enhance the performance of firms in their external environments. Here, we present a Strategic Management System model that has been applied in several schools and has produced marked improvement.
Abstract: The purpose of this study was to determine the most satisfying and frustrating aspects of ICT (Information and Communications Technologies) teaching in Turkish schools. Another aim was to compare these aspects based on ICT teachers' self-efficacy. Participants were 119 ICT teachers from different geographical areas of Turkey. Participants were asked to list the salient satisfying and frustrating aspects of ICT teaching and to fill out the Self-Efficacy Scale for ICT Teachers. Results showed that high self-efficacy teachers listed more positive and negative aspects of ICT teaching than did low self-efficacy teachers. The satisfying aspects of ICT teaching were the dynamic nature of the ICT subject, higher student interest, the opportunity to help other subject teachers, and lecturing in well-equipped labs, whereas the most frequently cited frustrating aspects were ICT-related extra work for schools and colleagues, shortages of hardware and technical problems, indifferent students, insufficient teaching time, and the status of the ICT subject in the school curriculum. This information could be useful in redesigning ICT teachers' roles, responsibilities, and job environment in schools.
Abstract: Among various testing methodologies, Built-In Self-Test (BIST) is recognized as a low-cost, effective paradigm. Full adders are among the basic building blocks of most arithmetic circuits in all processing units. In this paper, an optimized testable 2-bit full adder is proposed as a test building block. A BIST procedure is then introduced to scale up the building block and generate self-testable n-bit full adders. The target design achieves 100% fault coverage using an insignificant amount of hardware redundancy. Moreover, overall test time is reduced by utilizing polymorphic gates and by testing the full adder building blocks in parallel.
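As a toy illustration of how such a fault-coverage figure is measured (the net names and gate structure below are a generic 1-bit full adder, not the paper's optimized design), one can exhaustively simulate every single stuck-at fault and count how many input patterns expose them:

```python
from itertools import product

NETS = ["a", "b", "cin", "x1", "a1", "a2", "sum", "cout"]

def full_adder(a, b, cin, fault=None):
    """Gate-level full adder; `fault` = (net, stuck_value) forces one net."""
    v = {"a": a, "b": b, "cin": cin}
    def force(net):
        if fault and fault[0] == net:
            v[net] = fault[1]
    for net in ("a", "b", "cin"):
        force(net)
    v["x1"] = v["a"] ^ v["b"];      force("x1")   # first XOR
    v["sum"] = v["x1"] ^ v["cin"];  force("sum")  # second XOR
    v["a1"] = v["a"] & v["b"];      force("a1")   # carry AND
    v["a2"] = v["x1"] & v["cin"];   force("a2")   # carry AND
    v["cout"] = v["a1"] | v["a2"];  force("cout") # carry OR
    return v["sum"], v["cout"]

# A fault is detected if some input pattern makes the faulty circuit
# disagree with the fault-free one at an observable output.
faults = [(net, val) for net in NETS for val in (0, 1)]
detected = sum(
    any(full_adder(a, b, c) != full_adder(a, b, c, fault=f)
        for a, b, c in product((0, 1), repeat=3))
    for f in faults)
coverage = detected / len(faults)
```

For this circuit all 16 single stuck-at faults are detectable, i.e. 100% coverage; a BIST scheme embeds a pattern generator and response checker so the adder applies such patterns to itself.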