Abstract: The model-based development approach is gaining growing support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best work without specific programming knowledge. The different levels of simulation support rapid prototyping, verification and validation of the product even before it exists physically. Nowadays the model-based approach is beneficial both for modelling complex embedded systems and for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries such as automotive, bringing extra automation to the expensive device certification process, and especially to software qualification. Using it, some companies report cost savings and quality improvements, while others report no major changes or even cost increases. This publication demonstrates the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of manually developed code. The measurements show that, in general, the automatically generated code is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.
Abstract: Since its inception, predictive analytics has revolutionized the IT industry through its robustness and decision-making facilities. It involves applying a set of data-processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested using predictive analytics to process and analyze big data. Nevertheless, they have been held back by the limits of classical predictive-analysis methods when the amount of data is large. In fact, because of its volume, its nature (semi-structured or unstructured) and its variety, big data cannot be analyzed efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive-analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive-analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined to make it applicable to huge quantities of data.
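For orientation, the core step of classical CART that the abstract refers to can be sketched as an exhaustive split search minimizing Gini impurity. This is a minimal illustrative sketch of the standard sequential algorithm, not the extended, distributed version proposed in the paper:

```python
# Minimal sketch of the classical CART split search on one numeric
# feature, using Gini impurity. Illustrative only -- not the
# parallelized/distributed extension described in the paper.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Return (threshold, weighted impurity) of the best binary split."""
    pairs = sorted(zip(xs, ys))
    n = len(pairs)
    best = (None, float("inf"))
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # cannot split between equal feature values
        thr = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [y for x, y in pairs[:i]]
        right = [y for x, y in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (thr, score)
    return best

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # perfect split at threshold 6.5, impurity 0.0
```

Because every candidate split scans the whole label set, this search is what becomes prohibitive at big-data scale, motivating the distributed redesign the paper describes.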
Abstract: Geometric modeling plays an important role in the construction and manufacturing of curves, surfaces and solids. Its algorithms are critically important not only in the automobile, ship and aircraft manufacturing business, but are also absolutely necessary in a wide variety of modern applications, e.g., robotics, optimization, computer vision, data analytics and visualization. The calculation and display of geometric objects can be accomplished by six techniques: polynomial basis, recursive, iterative, coefficient matrix, polar form and pyramidal algorithms. In this research, the coefficient matrix (also called the monomial form approach) is used to model polynomial rectangular patches, i.e., Said-Ball, Wang-Ball, DP, Dejdumrong and NB1 surfaces. Examples of the monomial forms for these surface models are illustrated in many aspects, e.g., construction, derivatives, model transformation, degree elevation and degree reduction.
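To illustrate what the monomial (coefficient matrix) form means in practice, a rectangular patch coordinate can be evaluated as a bilinear form over the monomial bases in u and v. The coefficient matrix below is a made-up bilinear example, not a patch from the paper:

```python
# Sketch: evaluating one coordinate of a polynomial rectangular patch
# in monomial (coefficient matrix) form, S(u, v) = U * C * V^T, where
# U = [1, u, ..., u^m] and V = [1, v, ..., v^n]. The coefficient
# matrix C here is hypothetical, chosen only for illustration.

def eval_monomial_patch(C, u, v):
    """Evaluate sum over i, j of C[i][j] * u**i * v**j."""
    U = [u ** i for i in range(len(C))]
    V = [v ** j for j in range(len(C[0]))]
    return sum(C[i][j] * U[i] * V[j]
               for i in range(len(C)) for j in range(len(C[0])))

# Bilinear patch z = 1 + u + 2*v + 3*u*v (hypothetical coefficients)
C = [[1.0, 2.0],
     [1.0, 3.0]]
print(eval_monomial_patch(C, 0.5, 0.5))  # 1 + 0.5 + 1.0 + 0.75 = 3.25
```

Converting a Said-Ball, Wang-Ball, DP, Dejdumrong or NB1 patch into this form amounts to expressing its basis functions in the monomial basis and absorbing them into C, after which derivatives and degree operations become matrix manipulations.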
Abstract: Newton-Lagrange interpolations are widely used in numerical analysis. However, their construction requires quadratic computational time. In computer-aided geometric design (CAGD), some polynomial curves, namely the Wang-Ball, DP and Dejdumrong curves, have linear-time algorithms. Thus, the computational time for Newton-Lagrange interpolations can be reduced by applying the algorithms of the Wang-Ball, DP and Dejdumrong curves. In order to use these algorithms, it is first necessary to convert Newton-Lagrange polynomials into Wang-Ball, DP or Dejdumrong polynomials. In this work, the algorithms for converting both uniform and non-uniform Newton-Lagrange polynomials into Wang-Ball, DP and Dejdumrong polynomials are investigated. The computational time for representing Newton-Lagrange polynomials can thus be reduced to linear complexity. In addition, other advantages of using CAGD curves, such as modifying the Newton-Lagrange curves, can be obtained.
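For context, the quadratic-time construction the abstract mentions is the classical divided-difference table of Newton interpolation. This is a sketch of that standard algorithm only; the paper's conversions to Wang-Ball, DP and Dejdumrong form are not reproduced here:

```python
# Sketch of classical Newton interpolation: building the
# divided-difference coefficients costs O(n^2) time (the cost the
# paper aims to avoid by converting to CAGD curve forms).

def divided_differences(xs, ys):
    """Return Newton coefficients c with
    p(x) = c[0] + c[1](x-xs[0]) + c[2](x-xs[0])(x-xs[1]) + ..."""
    n = len(xs)
    c = list(ys)
    for j in range(1, n):              # O(n^2) table construction
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(c, xs, x):
    """Horner-style evaluation of the Newton form, O(n) per point."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - xs[i]) + c[i]
    return p

xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]                   # samples of y = x^2 + 1
c = divided_differences(xs, ys)
print(newton_eval(c, xs, 1.5))         # x^2 + 1 at x = 1.5 -> 3.25
```

Evaluation of the Newton form is already linear per point; it is the coefficient construction (and repeated re-representation) whose cost the conversion to linear-time CAGD curve algorithms is intended to reduce.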
Abstract: 2016 became the year of the Artificial Intelligence explosion. AI technologies have matured to the point that most world-renowned tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to let a machine learn features directly from data. Deep learning enables many machine learning applications and expands the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run the training computation in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance of both distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
Abstract: Organizations support their operations and decision making on the data they have at their disposal, so the quality of these data is remarkably important, and Data Quality (DQ) is currently a relevant issue, with the literature unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts who ordered them according to their degree of importance, using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSFs for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.