Abstract: Deoxyribonucleic acid (DNA) computing has emerged as an interdisciplinary field that draws together chemistry, molecular biology, computer science, and mathematics. In this paper, the possibility of solving an absolute 1-center problem by DNA-based computing through molecular manipulations is presented. To the best of our knowledge, this is the first attempt to solve such a problem by a DNA-based computing approach. Since part of the procedure involves shortest-path computation, research on DNA computing for the shortest-path and Traveling Salesman Problem (TSP) is reviewed. These approaches are studied, and the most appropriate one is adapted in designing the computation procedures. The DNA-based computation is designed so that every path is encoded by oligonucleotides and the path's length is directly proportional to the length of its oligonucleotides. Using these properties, gel electrophoresis is performed to separate the respective DNA molecules according to their length. One expectation arising from this paper is that it should be possible to verify an instance of the absolute 1-center problem by DNA computing in laboratory experiments.
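The encoding idea above (strand length proportional to path length, separation by gel electrophoresis) can be illustrated in software. The following is a minimal Python sketch in which the graph, the edge weights, and the four-bases-per-weight-unit encoding are all hypothetical choices for illustration, not taken from the paper:

```python
# Hypothetical weighted graph; names and weights are illustrative only
edges = {("A", "B"): 3, ("B", "C"): 2, ("A", "C"): 6}

def encode_edge(weight, unit=4):
    # Each weight unit becomes `unit` bases, so strand length ∝ edge weight
    return ("ACGT" * (weight * unit))[: weight * unit]

def encode_path(path, graph):
    # Concatenate edge oligos; total strand length ∝ total path weight
    strand = ""
    for u, v in zip(path, path[1:]):
        w = graph.get((u, v)) or graph.get((v, u))
        strand += encode_edge(w)
    return strand

# "Gel electrophoresis" in software: rank candidate strands by length
paths = [("A", "B", "C"), ("A", "C")]
strands = {p: encode_path(p, edges) for p in paths}
shortest = min(strands, key=lambda p: len(strands[p]))
```

Here the two-edge path A-B-C (total weight 5) yields a shorter strand than the direct edge A-C (weight 6), so length-based separation picks out the shorter route, mirroring the wet-lab selection step.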
Abstract: A new approach is presented for the optimized design of multipliers based on the concepts of Vedic mathematics. The design is targeted at state-of-the-art field-programmable gate arrays (FPGAs). The multiplier generates partial products by the Vedic mathematics method, employing basic 4x4 multipliers designed by exploiting 6-input LUTs and multiplexers in the same slices, resulting in a drastic reduction in area. The multiplier is realized on Xilinx Virtex-5 and Virtex-6 FPGAs. A carry-chain adder is employed to obtain the final products. The performance of the proposed multiplier was examined and compared with well-known multipliers such as the Booth, carry-save, carry-ripple, and array multipliers. It is demonstrated that the proposed multiplier is superior in terms of both speed and power consumption.
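The Vedic partial-product generation referred to above follows the Urdhva Tiryagbhyam ("vertically and crosswise") pattern: column k collects all digit products a_i*b_j with i + j = k, after which carries are propagated. A minimal software sketch of that pattern, assuming least-significant-digit-first digit lists (the paper's hardware realization with LUTs and carry-chain adders is not modeled here):

```python
def urdhva_multiply(a_digits, b_digits, base=2):
    """Vertically-and-crosswise multiplication; digits are LSB-first."""
    n, m = len(a_digits), len(b_digits)
    # Column k sums every crosswise product a_i * b_j with i + j = k
    cols = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            cols[i + j] += a_digits[i] * b_digits[j]
    # Propagate carries through the columns
    out, carry = [], 0
    for c in cols:
        carry, d = divmod(c + carry, base)
        out.append(d)
    while carry:
        carry, d = divmod(carry, base)
        out.append(d)
    return out

def to_int(digits, base=2):
    return sum(d * base**i for i, d in enumerate(digits))
```

In hardware, the independent column sums are what make the scheme attractive: partial products for all columns can be formed in parallel before a single carry-propagating addition, which is where the paper's carry-chain adder comes in.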
Abstract: E-learning has become increasingly popular, especially in Vietnam. In e-learning, study materials are very important, so it is necessary to design knowledge base systems and expert systems that support searching, querying, and problem solving. The ontology called Computational Object Knowledge Base Ontology (COKB-ONT) is a useful tool for designing knowledge base systems in practice. In this paper, a design method for knowledge base systems in education using COKB-ONT is presented. We also present the design of a knowledge base system that supports studying and solving problems in higher mathematics.
Abstract: This paper deals with the application of fuzzy sets in measuring teachers' beliefs about mathematics. The vagueness of beliefs was transformed into standard mathematical values using a fuzzy preference model. The study employed a fuzzy-approach questionnaire consisting of six attributes for measuring mathematics teachers' beliefs about mathematics. The fuzzy conjoint analysis approach, based on fuzzy set theory, was used to analyze data from twenty-three mathematics teachers from four secondary schools in Terengganu, Malaysia. Teachers' beliefs were recorded in the form of degrees of similarity and their corresponding levels of agreement. The attribute 'Drills and practice is one of the best ways of learning mathematics' scored the highest degree of similarity, at 0.79860, with the level 'strongly agree'. The results showed that the teachers' beliefs about mathematics varied, as shown by the different levels of agreement and degrees of similarity of the measured attributes.
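The abstract does not state the similarity formula used. One common choice for comparing fuzzy sets on the same universe is a Jaccard-type index: the sum of pointwise minima over the sum of pointwise maxima. A sketch under that assumption, with hypothetical membership vectors over a five-point Likert universe (the paper's actual model may differ):

```python
def similarity(response, level):
    """Jaccard-type similarity between two fuzzy sets on a shared universe."""
    num = sum(min(r, l) for r, l in zip(response, level))  # pointwise minima
    den = sum(max(r, l) for r, l in zip(response, level))  # pointwise maxima
    return num / den if den else 0.0

# Hypothetical membership vectors (illustrative, not from the study)
strongly_agree = [0.0, 0.0, 0.25, 0.75, 1.0]   # linguistic-level fuzzy set
aggregated_response = [0.0, 0.1, 0.3, 0.7, 0.9]  # aggregated teacher responses
score = similarity(aggregated_response, strongly_agree)
```

An attribute's level of agreement would then be the linguistic level whose fuzzy set attains the highest such score against the aggregated response.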
Abstract: A logic model for analyzing the stability of complex systems is useful in many areas of science. In the real world, we draw inspiration from natural phenomena such as the "biosphere", the "food chain", and "ecological balance". Through research and practice, and taking advantage of the orthogonality and symmetry defined by the theory of multilateral matrices, we put forward a logic analysis model of the stability of complex systems with three relations, and prove it mathematically. This logic model is generally successful in analyzing the stability of a complex system. Its structure is clear and simple, and it can easily be used to study and solve many stability problems of complex systems. As an application, some examples are given.
Abstract: A given polynomial, possibly with multiple roots, is factored into several lower-degree distinct-root polynomials with natural-order integer powers. All the roots of the original polynomial, including multiplicities, may then be obtained by solving these lower-degree distinct-root polynomials instead of the original high-degree multiple-root polynomial directly.
The approach requires polynomial Greatest Common Divisor (GCD) computation. A very simple and effective process, "monic polynomial subtraction", cleverly derived from the longhand polynomial division of the Euclidean algorithm, is employed. It requires only simple elementary arithmetic operations and no advanced mathematics.
Remarkably, the derived routine gives the expected results for test polynomials of very high degree, such as p(x) = (x+1)^1000.
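The paper's "monic polynomial subtraction" variant is not specified in the abstract. As a sketch of the underlying idea, the distinct-root part of a polynomial p can be extracted with the standard Euclidean GCD of p and its derivative p', here in exact rational arithmetic (coefficients stored low-to-high):

```python
from fractions import Fraction

def trim(p):
    # Drop highest-degree zero coefficients; keep at least one entry
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def deriv(p):
    return trim([Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)])

def poly_mod(a, b):
    # Remainder of a divided by b
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b) and any(a):
        q = a[-1] / b[-1]
        d = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + d] -= q * c
        a = trim(a)
    return a

def poly_gcd(a, b):
    # Euclidean algorithm; result normalized to a monic polynomial
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        a, b = b, poly_mod(a, b)
    return [c / a[-1] for c in a]

def poly_div(a, b):
    # Exact quotient a / b (assumes b divides a)
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = [Fraction(0)] * (len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        coef = a[-1] / b[-1]
        d = len(a) - len(b)
        q[d] = coef
        for i, c in enumerate(b):
            a[i + d] -= coef * c
        a = trim(a)
    return q

# p(x) = (x+1)^2 (x-2) = x^3 - 3x - 2
p = [-2, -3, 0, 1]
g = poly_gcd(p, deriv(p))    # x + 1: the repeated factor
distinct = poly_div(p, g)    # x^2 - x - 2: same roots as p, all simple
```

Iterating the gcd/divide step on g recovers the factors grouped by multiplicity, which is the decomposition the abstract describes.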
Abstract: The Goursat partial differential equation arises in linear and nonlinear partial differential equations with mixed derivatives. It is a second-order hyperbolic partial differential equation that occurs in various fields of study, such as engineering, physics, and applied mathematics. Many approaches have been suggested to approximate the solution of the Goursat partial differential equation. However, all of the suggested methods traditionally focused on numerical differentiation approaches, including forward and central differences, in deriving the scheme. We introduce an innovation in deriving the Goursat partial differential equation scheme that involves numerical integration techniques. In this paper we develop a new scheme to solve the Goursat partial differential equation based on the Adomian decomposition method (ADM), associated with Boole's integration rule to approximate the integration terms. The new scheme can easily be applied to many linear and nonlinear Goursat partial differential equations and is capable of reducing the size of the computational work. The accuracy of the results reveals the advantage of this new scheme over existing numerical methods.
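Boole's integration rule, used above to approximate the integration terms, is the closed Newton-Cotes formula on five equally spaced nodes. A minimal sketch of the rule itself (the coupling with the Adomian decomposition is not reproduced here):

```python
def boole(f, a, b):
    # Boole's rule on [a, b] with 4 subintervals of width h = (b - a) / 4:
    # (2h/45) * (7 f0 + 32 f1 + 12 f2 + 32 f3 + 7 f4)
    h = (b - a) / 4
    x = [a + i * h for i in range(5)]
    return (2 * h / 45) * (7 * f(x[0]) + 32 * f(x[1])
                           + 12 * f(x[2]) + 32 * f(x[3]) + 7 * f(x[4]))
```

The rule integrates polynomials up to degree five exactly, so for instance it returns the exact value 1/5 for the integral of x^4 over [0, 1].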
Abstract: Systems Analysis and Design is a key subject in Information Technology courses, but students do not find it easy to cope with, since it is not "precise" like programming and not exact like mathematics. It is a subject that works with many concepts, modeling ideas into visual representations and then translating the pictures into a real-life system. To complicate matters, users who are not necessarily familiar with computers need to give their input to ensure that they get the system they need. Systems Analysis and Design also covers two fields: Analysis, focusing on the analysis of the existing system, and Design, focusing on the design of the new system. To be able to test the analysis and design of a system, it is necessary to develop the system, or at least a prototype of it, to test the validity of the analysis and design. The skills necessary for each aspect differ vastly: project management skills, database knowledge, and object-oriented principles are all necessary. In the context of a developing country, where students enter tertiary education underprepared and the digital divide is alive and well, students need to be motivated to learn the necessary skills and get an opportunity to test them in a "live" but protected environment, within the framework of a university. The purpose of this article is to improve the learning experience in Systems Analysis and Design by reviewing the underlying teaching principles used, the teaching tools implemented, the observations made, and the reflections that will influence future developments in Systems Analysis and Design. Action research principles allow the focus to be on a few problematic aspects during a particular semester.
Abstract: The wavelet transform, or wavelet analysis, is a recently developed mathematical tool in applied mathematics. In numerical analysis, wavelets also serve as a Galerkin basis for solving partial differential equations. The Haar transform, or Haar wavelet transform, is the simplest and earliest example of an orthonormal wavelet transform. Owing to its popularity in wavelet analysis, there are several definitions of, and various generalizations and algorithms for, the Haar transform. The fast Haar transform (FHT) is one algorithm that reduces the tedious calculations of the Haar transform. In this paper, we present a modified fast and exact algorithm for the FHT, namely the Modified Fast Haar Transform (MFHT). The proposed procedure allows certain calculations in the decomposition process to be skipped without affecting the results.
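The abstract does not detail which calculations MFHT skips. The standard FHT that it modifies proceeds by repeated averaging and differencing of adjacent pairs, halving the working signal at each level, as in this sketch:

```python
def fht(x):
    """Fast Haar transform of a sequence whose length is a power of two."""
    x = list(x)
    out = []
    while len(x) > 1:
        # One decomposition level: pairwise averages and differences
        avgs = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
        diffs = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
        out = diffs + out   # detail coefficients, coarsest level first
        x = avgs            # recurse on the averages
    return x + out          # [overall average, details coarse -> fine]
```

For a length-n input this costs O(n) additions and subtractions in total, which is the saving over applying the Haar matrix directly.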
Abstract: In this paper we constructively prove the existence of an equilibrium in a competitive economy with sequentially locally non-constant excess demand functions, and we show that the existence of such an equilibrium in a competitive economy implies Sperner's lemma. We follow Bishop-style constructive mathematics.
Abstract: South Africa is facing a crisis in that it cannot produce enough graduates in the scarce-skills areas to sustain economic growth. The crisis is fuelled by a school system that does not produce enough potential students with Mathematics, Accounting, and Science. Since the introduction of the new school curriculum in 2008, there is no longer an option to take pure mathematics at standard grade level. Instead, only two mathematical subjects are offered: pure mathematics (which is on par with higher-grade mathematics) and mathematical literacy, and it is compulsory to take one or the other. As a result, fewer students finish Grade 12 with pure mathematics every year. This national problem needs urgent attention if South Africa is to make any headway in critical skills development, as mathematics is a gateway to the scarce-skills professions. Higher education institutions have initiated several initiatives in an attempt to address the above, including preparatory courses, bridging programmes, and extended curricula with foundation provision. In view of the above, and of government policy directives to broaden access in the scarce-skills areas and increase student throughput, foundation provision was introduced for Commerce and Information Technology programmes at the Vaal Triangle Campus (VTC) of North-West University (NWU) in 2010. Students enrolling for the extended programmes do not comply with the minimum prerequisites for the normal programmes. The question then arises as to whether these programmes have the intended impact. This paper reports the results of a two-year longitudinal study tracking the first-year academic achievement of the two cohorts enrolled since 2010. The results provide valuable insight into the structuring of an extended programme and its potential impact.
Abstract: Students often adopt routine practice as their learning strategy for mathematics, because in high school they are often bound and trained to solve conventional types of questions. This becomes problematic if students further consolidate the practice at university. Therefore, the Department of Mathematics emphasized and integrated the Discovery-enriched approach in the undergraduate curriculum. This paper presents the details of implementing the Discovery-enriched Curriculum by providing an adequate platform for project learning, expertise for guidance, and internship opportunities for students majoring in Mathematics. The Department also provided project-learning opportunities in mathematics courses aimed at students majoring in other science or engineering disciplines. The outcome is promising: the research ability and problem-solving skills of the students are enhanced.
Abstract: We examine Berge's maximum theorem from the point of view of Bishop-style constructive mathematics. We show an approximate version of the maximum theorem, and the maximum theorem for functions with sequentially locally at most one maximum.
Abstract: The paper presents an applied study of a multivariate AR(p) process fitted to daily data from U.S. commodity futures markets using Bayesian statistics. The first part gives a detailed description of the methods used. In the second part, two BVAR models are chosen: one assuming a lognormal distribution of prices conditioned on the parameters, the other a normal distribution. For comparison, two simple benchmark models commonly used in today's financial mathematics are chosen. The article compares the quality of the predictions of all the models, tries to find an adequate rate of forgetting of information, and questions the validity of the Efficient Market Hypothesis in its semi-strong form.
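The abstract does not give the model equations. One simple way to illustrate a "rate of forgetting" is exponentially discounted least squares for a univariate AR(1), a deliberately reduced stand-in for the paper's Bayesian multivariate AR(p): observations t periods old receive weight lam^t, so lam close to 1 means slow forgetting.

```python
def ar1_discounted(x, lam=0.99):
    """Fit x_t = phi * x_{t-1} by weighted least squares with
    exponential forgetting: weight lam**(T - t) on observation t."""
    T = len(x) - 1
    num = den = 0.0
    for t in range(1, len(x)):
        w = lam ** (T - t)
        num += w * x[t - 1] * x[t]
        den += w * x[t - 1] ** 2
    return num / den

series = [0.5 ** k for k in range(10)]  # exact AR(1) path with phi = 0.5
phi_hat = ar1_discounted(series)
```

On noisy data, lowering lam makes the estimate track recent observations more closely at the cost of higher variance, which is the trade-off behind choosing an adequate forgetting rate.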
Abstract: The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for the automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that, for large poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.