Abstract: A computer model of Quantum Theory (QT) has been
developed by the author. The major goal of the computer model was
to support and demonstrate as broad a scope of QT as possible.
This includes simulations for the major QT (Gedanken-) experiments
such as, for example, the famous double-slit experiment.
Besides the anticipated difficulties with (1) transforming exact
mathematics into a computer program, two further types of problems
showed up: (2) areas where QT provides a complete mathematical
formalism, but where the equations for concrete applications are
not solvable at all, or only with extremely high effort; and
(3) QT rules which are formulated in natural language and do
not seem to be translatable into precise mathematical expressions,
nor into a computer program.
The paper lists problems in all three categories and also describes
the solutions or workarounds developed for the computer model.
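As an illustration of the first problem class (turning exact mathematics into code), here is a minimal sketch of the textbook two-slit intensity relation I(θ) ∝ cos²(π·d·sinθ/λ); the wavelength and slit separation are illustrative assumptions, not the author's model:

```python
# Minimal sketch of two-slit interference intensity; parameters are
# illustrative assumptions, not taken from the author's computer model.
import math

wavelength = 500e-9       # 500 nm light (assumed)
slit_separation = 50e-6   # 50 micrometre slit spacing (assumed)

def intensity(theta: float) -> float:
    """Normalized two-slit intensity I(theta) = cos^2(pi*d*sin(theta)/lambda)."""
    phase = math.pi * slit_separation * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

print(intensity(0.0))  # central maximum: 1.0
print(round(intensity(wavelength / (2 * slit_separation)), 6))  # near-zero first minimum
```

Evaluating the pattern on a grid of angles is exactly the kind of closed-form case that translates easily; the abstract's point is that much of QT does not.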
Abstract: In this paper we investigate a number of the Internet
congestion control algorithms that have been developed in the last few
years. We found that many of these algorithms were designed to treat
Internet traffic merely as a stream of consecutive packets. A few
other algorithms were specifically tailored to handle Internet
congestion caused by media traffic carrying audiovisual content; this
latter set of algorithms can be considered aware of the nature of the
media content. In this context we briefly explain a number of
congestion control algorithms and categorize them into two groups:
i) media congestion control algorithms, and ii) common congestion
control algorithms. We recommend the use of media congestion control
algorithms because they are media content-aware, rather than the
common type of algorithms that manipulate such traffic blindly. We
show that the spread of such media content-aware algorithms over the
Internet will lead to better congestion control in the coming years.
This is due to the observed emergence of the era of digital
convergence, in which media traffic will form the majority of Internet
traffic.
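The "common" algorithms surveyed here largely descend from TCP's additive-increase/multiplicative-decrease (AIMD) rule, which reacts only to packet loss, not to content type. A minimal sketch with illustrative parameters:

```python
# Hedged sketch of AIMD, the content-blind rule at the core of classic TCP
# congestion control; window sizes and the loss event are illustrative.
def aimd(cwnd: float, loss: bool, a: float = 1.0, b: float = 0.5) -> float:
    """Additive increase by a per RTT without loss; multiplicative decrease on loss."""
    return cwnd * b if loss else cwnd + a

cwnd = 10.0
trace = []
for rtt in range(6):
    loss = (rtt == 3)        # one loss event in this toy trace
    cwnd = aimd(cwnd, loss)
    trace.append(cwnd)
print(trace)  # [11.0, 12.0, 13.0, 6.5, 7.5, 8.5]
```

A media-aware algorithm would instead adapt the sending rate to the audiovisual content (e.g. dropping enhancement layers), rather than halving the window blindly.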
Abstract: Every commercial bank optimises its asset portfolio
depending on the profitability of assets and chosen or imposed
constraints. This paper proposes and applies a stylized model for
optimising banks' asset and liability structure, reflecting profitability
of different asset categories and their risks as well as costs associated
with different liability categories and reserve requirements. The level
of detail for asset and liability categories is chosen to create a
suitably parsimonious model and to include the most important
categories in the model. It is shown that the most appropriate
optimisation criterion for the model is the maximisation of the ratio
of net interest income to assets. The maximisation of this ratio is
subject to several constraints. Some are accounting identities or
dictated by legislative requirements; others vary depending on the
market objectives for a particular bank. The model predicts a variable
amount of assets allocated to loan provision.
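The optimisation criterion described above can be sketched as a toy grid search over loan/bond allocations; the rates, reserve requirement, and concentration-loss term below are illustrative assumptions, not the paper's calibration:

```python
# Hedged sketch: maximising the ratio of net interest income to assets over a
# coarse grid of allocations. All rates and the concentration-loss term are
# illustrative assumptions, not the paper's model.
loan_rate, bond_rate = 0.06, 0.03        # gross asset yields (assumed)
deposit_cost, reserve_req = 0.01, 0.10   # liability cost, reserve ratio (assumed)
loss_slope = 0.04                        # expected loan losses grow with concentration

deposits = 100.0
assets_total = 100.0
investable = assets_total - reserve_req * deposits  # required reserves earn nothing here

best_ratio, best_pct = float("-inf"), None
for pct in range(0, 101, 5):             # share of investable funds put into loans
    f = pct / 100
    loans, bonds = investable * f, investable * (1 - f)
    nii = (loans * (loan_rate - loss_slope * f)   # riskier as loans concentrate
           + bonds * bond_rate
           - deposits * deposit_cost)
    ratio = nii / assets_total
    if ratio > best_ratio:
        best_ratio, best_pct = ratio, pct

print(best_pct, round(best_ratio, 5))
```

Without the loss term the optimum would trivially be 100% loans; the quadratic loss stands in for the risk constraints the abstract mentions and produces an interior optimum.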
Abstract: The computer, among the most important inventions of the twentieth century, has become an increasingly important component of our everyday lives. Computer games, too, have become increasingly popular, owing to their realistic virtual environments, their audio and visual features, and the roles they offer players. In the present study, the metaphors middle school students have for computer games are investigated, in an effort to fill a gap in the literature. Students were asked to complete the sentence ‘Computer game is like/similar to … because …’ to determine their metaphorical images of the concept of ‘computer game’. The metaphors created by the students were grouped into six categories based on the source of the metaphor. Ordered by the number of metaphors they included, these categories were ‘computer game as a means of entertainment’, ‘computer game as a beneficial means’, ‘computer game as a basic need’, ‘computer game as a source of evil’, ‘computer game as a means of withdrawal’, and ‘computer game as a source of addiction’.
Abstract: The paper deals with quality labels used in the food products market, especially labels of quality, labels of origin, and labels of organic farming. The aim of the paper is to identify the perception of these labels by consumers in the Czech Republic. The first part refers to the definition and specification of the food quality labels that are relevant in the Czech Republic. The second part discusses the marketing research results. Data were collected using the personal questioning method. Empirical findings on 150 respondents relate to consumer awareness and perception of the national and European food quality labels used in the Czech Republic, attitudes to purchases of labelled products, and interest in information regarding the labels. Statistical methods, specifically Pearson's chi-square test of independence, the coefficient of contingency, and the coefficient of association, are used to determine whether significant differences exist among selected demographic categories of Czech consumers.
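The statistical tests named above can be sketched as follows, on a hypothetical 2×2 awareness-by-gender table; the counts are illustrative, not the survey's data:

```python
# Hedged sketch: Pearson's chi-square test of independence and the coefficient
# of contingency on an illustrative 2x2 table (not the paper's data).
import math

observed = [[40, 35],   # e.g. men: aware / not aware of a quality label
            [55, 20]]   # e.g. women

row_sums = [sum(r) for r in observed]
col_sums = [sum(c) for c in zip(*observed)]
n = sum(row_sums)

# chi2 = sum over cells of (observed - expected)^2 / expected
chi2 = sum((observed[i][j] - row_sums[i] * col_sums[j] / n) ** 2
           / (row_sums[i] * col_sums[j] / n)
           for i in range(2) for j in range(2))

# for a 2x2 table (1 degree of freedom) the p-value has a closed form
p_value = math.erfc(math.sqrt(chi2 / 2))
contingency_c = math.sqrt(chi2 / (chi2 + n))
print(round(chi2, 3), round(p_value, 4), round(contingency_c, 3))
```

With these counts the association is significant at the 5% level and the contingency coefficient is modest, which is the kind of conclusion the paper draws per demographic category.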
Abstract: This paper aims at identifying and analyzing the
knowledge transmission channels in textile and clothing clusters
located in Brazil and in Europe. Primary data was obtained through
interviews with key individuals. The collection of primary data was
carried out based on a questionnaire with ten categories of indicators
of knowledge transmission. Secondary data was also collected
through a literature review and through international organizations
sites. Similarities related to the use of the main transmission channels
of knowledge are observed in all cases. The main similarities are:
influence of suppliers of machinery, equipment and raw materials;
imitation of products and best practices; training promoted by
technical institutions and businesses; and cluster companies being
open to acquiring new knowledge. The main differences lie in the
relationships between companies, which are stronger in Europe than in
Brazil. Differences also occur in the importance and frequency of the
relationship with the government, with the cultural environment, and
with research and development activities. Factors are also found that
reduce the importance of geographical proximity in the transmission of
knowledge, in generating trust, and in establishing collaborative
behavior.
Abstract: This study sampled and analyzed water quality in the
water reservoirs and water towers installed in two kinds of
residential buildings and in school facilities. Water quality data
were collected for correlation analysis with the frequency of
sanitization of the water reservoirs, by questioning building
managers about the inspection charts recorded for the reservoir
equipment. The statistical software package SPSS was applied to the
two data groups (cleaning frequency and water quality) for
regression analysis to determine the optimal frequency of
sanitization. The correlation coefficient (R) in this paper represents
the degree of correlation, with values of R ranging from +1 to -1. After
investigating three categories of drinking water users, this study found
that the frequency of sanitization of the water reservoir significantly
influenced drinking water quality. A higher frequency of sanitization
(more than four times per year) implied a higher quality of drinking
water. The results indicate that water reservoirs and water towers
should be sanitized at least twice annually to achieve safe drinking
water.
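The correlation and regression analysis described above can be sketched as follows, on illustrative data rather than the study's measurements:

```python
# Hedged sketch: Pearson correlation and least-squares slope between
# sanitization frequency and a water-quality score; data are illustrative,
# not the study's measurements.
import math

freq = [1, 2, 2, 3, 4, 4, 5, 6]             # cleanings per year (hypothetical)
quality = [55, 60, 58, 70, 78, 75, 85, 90]  # quality score (hypothetical)

n = len(freq)
mx, my = sum(freq) / n, sum(quality) / n
cov = sum((x - mx) * (y - my) for x, y in zip(freq, quality))
sx = math.sqrt(sum((x - mx) ** 2 for x in freq))
sy = math.sqrt(sum((y - my) ** 2 for y in quality))

r = cov / (sx * sy)       # correlation coefficient, in [-1, +1]
slope = cov / (sx * sx)   # least-squares regression slope
print(round(r, 3), round(slope, 2))
```

A strongly positive R, as in this toy data, is what the study reports: more frequent sanitization goes with higher water quality.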
Abstract: One of the main advantages of the LO paradigm is that it
makes good-quality, shareable learning material available through the
Web. The effectiveness of the retrieval process requires a formal
description of the resources (metadata) that closely fits the user's
search criteria; in spite of the huge international efforts in this
field, educational metadata schemata often fail to fulfil this
requirement. This work aims to improve the situation through the
definition of a metadata model capturing specific didactic features of
shareable learning resources. It classifies LOs into "teacher-oriented"
and "student-oriented" categories, in order to describe the role an LO
is to play when it is integrated into the educational process. This
article describes the model and a first experimental validation
process carried out in a controlled environment.
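The teacher-/student-oriented classification can be sketched as a minimal metadata record; the field names here are illustrative assumptions, not the authors' schema:

```python
# Hedged sketch: a minimal LO metadata record with the teacher/student-oriented
# role field the abstract describes; field names are illustrative, not the
# authors' actual metadata model.
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    TEACHER_ORIENTED = "teacher-oriented"
    STUDENT_ORIENTED = "student-oriented"

@dataclass
class LearningObjectMetadata:
    title: str
    role: Role
    didactic_form: str  # e.g. lecture notes vs. self-assessment quiz

lo = LearningObjectMetadata("Intro to recursion", Role.STUDENT_ORIENTED,
                            "self-assessment quiz")
print(lo.role.value)  # student-oriented
```

A retrieval system could then filter on `role` so a search matches the part the LO plays in the educational process, not just its topic.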
Abstract: The correct design of a regulator's structure requires complete prediction of the ultimate dimensions of the scour hole profile formed downstream of the solid apron. Scour downstream of regulators is studied either on solid aprons, by means of velocity distributions, or on movable beds, by studying the topography of the scour hole formed downstream. In this paper, a new technique was developed to study the scour hole downstream of regulators on movable beds. The study was divided into two categories: the first finds the sum of the length of the rigid apron behind the gates and the length of the scour hole formed downstream of it, while the second finds the minimum length of rigid apron behind the gates that prevents erosion downstream of it. The study covers free and submerged hydraulic jump conditions in both symmetrical and asymmetrical under-gated operations. From the comparison between the two categories, we found that the minimum length of rigid apron needed to prevent scour (Ls) is greater than the sum of the length of the rigid apron and that of the scour hole formed behind it (L+Xs). Furthermore, the scour hole dimensions in the case of a submerged hydraulic jump are always greater than in the free case, and the scour hole dimensions in asymmetrical operation are greater than in symmetrical operation.
Abstract: Participation in sporting activities can lead to injury.
Sport injuries have been widely studied in many sports including the
more extreme categories of aquatic board sports. Kitesurfing is a
relatively new water surface action sport, and has not yet been
widely studied in terms of injuries and stress on the body. The aim of
this study was to determine which injuries are most
common among kitesurfing participants, where they occur, and their
causes. Injuries were studied using an international open web
questionnaire (n=206).
The results showed that many respondents reported injuries: in
total 251 injuries, to the knee (24%), ankle (17%), trunk (16%) and
shoulders (10%), often sustained while doing jumps and tricks
(40%). Among the reported injuries were joint injuries (n=101),
muscle/tendon damage (n=47), wounds and cuts (n=36) and bone
fractures (n=28). Environmental factors and equipment can also
influence the risk of injury, or the extent of injury in a hazardous
situation. In conclusion, the information from this retrospective study
supports earlier studies in terms of the prevalence and site of
injuries. This information could be used to build a foundation of
knowledge about the sport for the development of physical training
applications and for product development.
Abstract: Image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours, so it is important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that homogeneous regions in an image usually lie close together on a smooth region, so the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method using a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method labels regions of the input image into the coarse categories "ground", "sky", and "vertical" for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
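The Dirichlet process mixture used above adapts its number of components to the data instead of fixing it in advance. A toy draw from its Chinese restaurant process prior illustrates that behavior; this is a standard textbook construction, not the authors' segmentation pipeline:

```python
# Hedged sketch: cluster assignments drawn from a Chinese restaurant process,
# the prior underlying Dirichlet process mixtures; a toy illustration only.
import random

def crp(n_tokens: int, alpha: float, seed: int = 0) -> list[int]:
    """Assign n_tokens items to clusters under a CRP with concentration alpha."""
    rng = random.Random(seed)
    counts: list[int] = []   # counts[k] = items already in cluster k
    labels = []
    for i in range(n_tokens):
        # join cluster k with prob counts[k]/(i+alpha); open a new one otherwise
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                labels.append(k)
                break
        else:
            counts.append(1)
            labels.append(len(counts) - 1)
    return labels

labels = crp(200, alpha=2.0)
print("clusters discovered:", len(set(labels)))
```

The number of clusters grows only logarithmically with the number of tokens, which is why such a model can segment scenes of varying complexity without a preset segment count.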
Abstract: Text categorization (the assignment of texts in natural language into predefined categories) is an important and extensively studied problem in Machine Learning. Currently, popular techniques developed to deal with this task include many preprocessing and learning algorithms, many of which in turn require tuning nontrivial internal parameters. Although partial studies are available, many authors fail to report the parameter values they use in their experiments, or the reasons why these values were chosen over others. The goal of this work, then, is to create a more thorough comparison of preprocessing parameters and their mutual influence, and to report interesting observations and results.
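Two of the preprocessing parameters such a comparison covers, lowercasing and minimum document frequency, can be sketched on a toy corpus; the parameter names are illustrative, not the paper's:

```python
# Hedged sketch: how two common text-categorization preprocessing parameters
# (lowercasing, minimum document frequency) change the vocabulary; the corpus
# and parameter names are illustrative.
import re
from collections import Counter

docs = ["Spam spam offers", "Meeting agenda attached", "spam Offers inside"]

def vocabulary(docs, lowercase=True, min_df=1):
    """Vocabulary of terms appearing in at least min_df documents."""
    df = Counter()
    for d in docs:
        tokens = re.findall(r"[A-Za-z]+", d)
        if lowercase:
            tokens = [t.lower() for t in tokens]
        df.update(set(tokens))      # document frequency, not term frequency
    return sorted(t for t, c in df.items() if c >= min_df)

print(vocabulary(docs, lowercase=True, min_df=2))   # ['offers', 'spam']
print(len(vocabulary(docs, lowercase=False, min_df=1)))
```

Even on three documents the feature space changes substantially with these two switches, which is why unreported preprocessing settings make published results hard to compare.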
Abstract: In this paper, a novel multi-join algorithm to join
multiple relations is introduced. The novel algorithm is based on a
hash-based join algorithm of two relations to produce a double index.
This is done by scanning the two relations once. Instead of moving the
records into buckets, however, a double index is built; this
eliminates the collisions that can occur in a complete hash algorithm.
The double index is divided into join buckets of similar categories
from the two relations. The algorithm then joins buckets with similar
keys to produce joined buckets. This leads in the end to a complete
join index of the two relations, without actually joining the actual
relations. The time complexity required to build the join index of two
categories is O(m log m), where m is the size of each category, giving
a total time complexity of O(n log m) for all buckets. The join index
is used to materialize the joined relation if required; otherwise, it
is used along with the join indices of other relations to build a
lattice for use in multi-join operations with minimal I/O
requirements. The lattice of join indices can be fitted into main
memory to reduce the time complexity of the multi-join algorithm.
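The idea of a join index, pairs of row identifiers rather than materialized joined rows, can be sketched as follows; the data and the bucket scheme are illustrative assumptions, not the paper's exact double-index structure:

```python
# Hedged sketch: building a join index (matching row-id pairs) for two
# relations via key buckets; illustrative of the idea, not the paper's
# double-index algorithm.
from collections import defaultdict

R = [(0, "ann"), (1, "bob"), (2, "cec")]   # (row_id, join_key)
S = [(0, "bob"), (1, "ann"), (2, "bob")]

def join_index(R, S):
    """Return sorted (r_row, s_row) pairs whose join keys match."""
    buckets = defaultdict(lambda: ([], []))
    for rid, key in R:                      # one scan of each relation
        buckets[key][0].append(rid)
    for sid, key in S:
        buckets[key][1].append(sid)
    # pair up row ids bucket by bucket; empty sides contribute nothing
    return sorted((r, s) for rs, ss in buckets.values()
                  for r in rs for s in ss)

print(join_index(R, S))  # [(0, 1), (1, 0), (1, 2)]
```

The joined relation can be materialized later from these id pairs, or the pairs can be combined with other join indices, which mirrors the lattice-based multi-join the abstract describes.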
Abstract: Free and open source software is gaining popularity at
an unprecedented rate. Despite some concerns about quality,
organizations have been using it for various purposes. One of the
biggest concerns about free and open source software is post-release
software defects and their fixing. Many believe that there is no
appropriate support available to fix the bugs. On the contrary, some
believe that, due to the active involvement of Internet users in
online forums, these forums become a major channel for communicating
the identification and fixing of defects in open source software. The
research model of this empirical investigation establishes and studies
the relationship between open source software defects and online
public forums. The results of this empirical study provide evidence
about the realities behind the myths concerning software defects in
open source software. We used a dataset consisting of 616 open source
software projects covering a broad range of categories to study the
research model of this investigation. The results show that online
forums play a significant role in identifying and fixing defects in
open source software.
Abstract: This paper presents a methodology for harvesting the kinetic energy of raindrops using piezoelectric devices. In the study, a 1 m × 1 m PVDF (polyvinylidene fluoride) piezoelectric membrane, fixed at its four edges, is considered for the numerical simulation of the deformation of the membrane due to the impact of raindrops. According to the drop size of the rain, the simulation is performed for three rainfall categories: light stratiform rain, moderate stratiform rain, and heavy thundershowers. The impact force of a raindrop depends on its terminal velocity, which is a function of the raindrop diameter. The results were then analyzed to calculate the harvestable energy from the deformation of the piezoelectric membrane.
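The diameter dependence of the available energy can be sketched using the empirical Atlas et al. (1973) terminal-velocity fit v = 9.65 − 10.3·e^(−0.6D) m/s (D in mm); the per-category drop diameters below are illustrative assumptions, not the paper's values:

```python
# Hedged sketch: kinetic energy of a single raindrop at terminal velocity,
# using the empirical Atlas et al. (1973) fit; category diameters are
# illustrative assumptions, not the paper's simulation settings.
import math

def raindrop_energy_joules(d_mm: float) -> float:
    """Kinetic energy 0.5*m*v^2 of a spherical water drop of diameter d_mm."""
    v = 9.65 - 10.3 * math.exp(-0.6 * d_mm)            # terminal velocity, m/s
    radius_m = d_mm / 2 / 1000.0
    mass_kg = 1000.0 * (4.0 / 3.0) * math.pi * radius_m ** 3  # water, 1000 kg/m^3
    return 0.5 * mass_kg * v * v

for label, d in [("light stratiform", 1.0), ("moderate stratiform", 2.0),
                 ("heavy thundershower", 4.0)]:
    print(f"{label}: {raindrop_energy_joules(d) * 1e6:.1f} microjoules")
```

Because mass grows with D³ and velocity also rises with D, the per-drop energy spans roughly three orders of magnitude between drizzle and thundershower drops, which is why the rainfall classification matters for the harvestable energy estimate.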
Abstract: IEEE 802.11e is the enhanced version of the IEEE
802.11 MAC, dedicated to providing Quality of Service (QoS) in
wireless networks. It supports QoS through service differentiation and
prioritization mechanisms: data traffic receives different priority
based on QoS requirements. Fundamentally, applications are divided
into four Access Categories (AC). Each AC has its own buffer queue
and behaves as an independent backoff entity. Every frame with a
specific priority of data traffic is assigned to one of these access
categories. IEEE 802.11e EDCA (Enhanced Distributed Channel
Access) is designed to enhance the IEEE 802.11 DCF (Distributed
Coordination Function) mechanisms by providing a distributed
access method that can support service differentiation among
different classes of traffic. The performance of the IEEE 802.11e MAC
layer with different ACs is evaluated to understand the actual benefits
deriving from the MAC enhancements.
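The four ACs differ in their contention parameters. The following sketch lists the commonly cited default EDCA parameter set for an 802.11a/g PHY (quoted from memory, so treat the exact values as indicative) and derives each AC's arbitration inter-frame space:

```python
# Hedged sketch: commonly cited default EDCA parameters per Access Category
# (802.11a/g PHY, aCWmin=15, aCWmax=1023) and the resulting
# AIFS = SIFS + AIFSN * slot_time; values quoted from memory.
SIFS_US, SLOT_US = 16, 9   # 802.11a timing, microseconds

EDCA = {  # AC: (CWmin, CWmax, AIFSN)
    "AC_BK": (15, 1023, 7),   # background
    "AC_BE": (15, 1023, 3),   # best effort
    "AC_VI": (7, 15, 2),      # video
    "AC_VO": (3, 7, 2),       # voice
}

for ac, (cwmin, cwmax, aifsn) in EDCA.items():
    aifs = SIFS_US + aifsn * SLOT_US
    print(f"{ac}: CWmin={cwmin:4d} CWmax={cwmax:4d} AIFS={aifs}us")
```

Smaller contention windows and shorter AIFS give voice and video statistically earlier channel access than best-effort and background traffic, which is the service differentiation EDCA adds over DCF.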
Abstract: Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization...). Each of these methods includes more than one algorithm. A data mining system implies different user categories, which means that the user's behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory end, which one for a decisional end, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing a data mining system. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is made on spatial data. Principal results are presented.
Abstract: Using the entropy weight and TOPSIS methods, this paper
conducts a comprehensive evaluation of the development level of
China's regional service industry. Firstly, based on
existing research results, an evaluation index system is constructed
from the scale of development, the industrial structure and the
economic benefits. An evaluation model is then built up based on
entropy weight and TOPSIS, and an empirical analysis is conducted on
the development level of service industries in 31 Chinese provinces
from 2006 to 2009 along the two dimensions of time series and
cross section, which provides a new approach for assessing regional
service industries. Furthermore, the 31 provinces are classified into
four categories based on the evaluation results, and a deep analysis
of the evaluation results is carried out.
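The entropy-weight and TOPSIS steps can be sketched on a toy decision matrix (rows standing in for provinces, columns for indicators); the numbers are illustrative, not the paper's data:

```python
# Hedged sketch: entropy weighting followed by TOPSIS ranking on a toy
# decision matrix; all numbers are illustrative, and all criteria are
# treated as benefit criteria for simplicity.
import math

X = [[4.0, 7.0, 3.0],
     [6.0, 5.0, 8.0],
     [5.0, 6.0, 6.0]]

def entropy_weights(X):
    """Weight each column by 1 - (normalized Shannon entropy), renormalized."""
    m = len(X)
    weights = []
    for col in zip(*X):
        s = sum(col)
        p = [x / s for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)
    total = sum(weights)
    return [w / total for w in weights]

def topsis(X, w):
    """Closeness of each row to the ideal solution, in [0, 1]."""
    norms = [math.sqrt(sum(x * x for x in col)) for col in zip(*X)]
    V = [[w[j] * row[j] / norms[j] for j in range(len(row))] for row in X]
    best = [max(col) for col in zip(*V)]
    worst = [min(col) for col in zip(*V)]
    scores = []
    for row in V:
        dp = math.sqrt(sum((v - b) ** 2 for v, b in zip(row, best)))
        dm = math.sqrt(sum((v - c) ** 2 for v, c in zip(row, worst)))
        scores.append(dm / (dp + dm))
    return scores

scores = topsis(X, entropy_weights(X))
print([round(s, 3) for s in scores])
```

Entropy weighting gives dispersed indicators more influence, so the third column (the most spread-out) dominates this toy ranking; the paper's four province categories would then be cut from such closeness scores.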
Abstract: The article presents analysis results of maps of
expected subsidence in undermined areas for road repair
management. The analysis was done in the area of Karvina district in
the Czech Republic, including undermined areas with ongoing or
finished deep mining activities in the years 2003 to 2009.
The article discusses the possibilities of local road maintenance
authorities to determine areas that will need most repairs in the future
with limited data available. Using the expected subsidence maps, a new
map of surface curvature was calculated. Combined with road maps
and historical repair data, the result is a classification into five
main categories of undermined areas, providing a very simple tool for
management.
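A curvature map can be derived from a subsidence grid with a discrete Laplacian; the grid values below are illustrative, not the Karvina data, and the paper's exact curvature measure may differ:

```python
# Hedged sketch: curvature proxy for a subsidence surface via the 5-point
# discrete Laplacian; grid values are illustrative, not the Karvina data.
def laplacian(grid, i, j, h=1.0):
    """5-point finite-difference Laplacian at interior cell (i, j)."""
    return (grid[i - 1][j] + grid[i + 1][j] + grid[i][j - 1] + grid[i][j + 1]
            - 4 * grid[i][j]) / (h * h)

# toy subsidence bowl: deepest in the centre, curvature strongest there
sub = [[0.0,  0.0,  0.0,  0.0, 0.0],
       [0.0, -0.2, -0.4, -0.2, 0.0],
       [0.0, -0.4, -1.0, -0.4, 0.0],
       [0.0, -0.2, -0.4, -0.2, 0.0],
       [0.0,  0.0,  0.0,  0.0, 0.0]]

print(laplacian(sub, 2, 2))  # large positive value at the concave bowl centre
```

Thresholding such curvature values over the road network is one way to flag the stretches most likely to need repairs.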
Abstract: Reconfigurable optical add/drop multiplexers
(ROADMs) can be classified into three categories based on their
underlying switching technologies. Category I consists of a single
large optical switch; category II is composed of a number of small
optical switches aligned in parallel; and category III has a single
optical switch and only one wavelength being added/dropped. In this
paper, to evaluate the wavelength-routing capability of category-II
ROADMs in dynamic optical networks, dynamic traffic models are
designed based on the Bernoulli and Poisson distributions for smooth
and regular types of traffic. Through analytical and simulation
results, the routing power of category-II ROADM networks under the
two traffic models is determined.
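The two traffic models can be sketched as follows; the rates and sample counts are illustrative, not the paper's simulation settings:

```python
# Hedged sketch: a Bernoulli (slotted) arrival process and Poisson arrival
# counts, the two traffic models the abstract names; parameters are
# illustrative, not the paper's settings.
import math
import random

def bernoulli_arrivals(n_slots: int, p: float, seed: int = 1) -> list[int]:
    """One arrival per slot with probability p (smooth, slotted traffic)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n_slots)]

def poisson_count(lam: float, rng: random.Random) -> int:
    """Knuth's method: arrivals in one interval with mean rate lam."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(1)
slots = bernoulli_arrivals(10_000, p=0.3)
counts = [poisson_count(3.0, rng) for _ in range(10_000)]
print(sum(slots) / len(slots), sum(counts) / len(counts))
```

Feeding wavelength requests drawn from each process into a ROADM simulator and comparing blocking probabilities is the kind of experiment the abstract's evaluation performs.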