Abstract: In the traditional buckling analysis of rectangular
plates, classical thin plate theory is generally applied, thereby
neglecting the shear deformation of the plating. This method is
clearly not fully appropriate for the analysis of thick plates, so
in the following the two-variable refined plate theory proposed
by Shimpi (2006), which takes the transverse shear effects into
account, is applied to the buckling analysis of simply supported
isotropic rectangular plates compressed in one and in two
orthogonal directions.
The relevant results are compared with the classical ones and, for
rectangular plates under uniaxial compression, a new direct
expression, similar to the classical Bryan's formula, is proposed
for the Euler buckling stress.
As buckling analysis is a widespread topic for a variety of
structures, such as ship structures, some applications for plates
uniformly compressed in one and in two orthogonal directions are
presented, and the relevant theoretical results are compared with
those obtained by a FEM analysis, carried out with ANSYS, to show
the feasibility of the presented method.
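For context, the classical Bryan formula that the proposed expression parallels gives the Euler buckling stress of a thin, simply supported plate under uniaxial compression (standard notation assumed here: E Young's modulus, ν Poisson's ratio, t plate thickness, b plate width, K the buckling coefficient, with K = 4 for a long simply supported plate):

```latex
\sigma_E \;=\; K \, \frac{\pi^{2} E}{12\left(1-\nu^{2}\right)} \left(\frac{t}{b}\right)^{2}
```

The refined theory of Shimpi (2006) modifies this thin-plate result to account for transverse shear, which matters as t/b grows.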
Abstract: In this paper, we study the synthesis of multibeam
arrays. The synthesis method implemented for this type of array
makes it possible to approximate the desired radiation pattern.
The approach used is based on neural networks, which are capable
of modelling multibeam arrays, taking predetermined general
criteria into account, and finally predicting the appropriate
pattern from the neural model. Our main contribution in this paper
is the extension of a synthesis model for these multibeam arrays.
Abstract: With the rapid advance of technology, industrial processes are becoming increasingly demanding in terms of power quality and controllability. The advent of multilevel inverters partially meets these requirements, but the new generation of multi-cell inverters now reaches higher performance, since it offers more voltage levels. The drawback of increasing the number of voltage levels through cascaded cells is the loss of synchronisation of the series IGBTs, which limits the number of cascaded cells to four. Given these constraints, a new topology is proposed in this paper, which increases the voltage levels of the three-cell inverter from 4 to 8 with the same number of IGBTs, while storing less energy in the flying capacitors. The details of the operation and modelling of this new inverter structure are also presented, then tested on a three-phase induction motor. Keywords: flying capacitors, multi-cell inverter, PWM, switches, modelling.
Abstract: An experimental study is carried out in order to verify
the Mini Heat Pipe (MHP) concept for cooling high-power-dissipation
electronic components and to determine the potential advantages of
constructing mini channels as an integrated part of a flat heat pipe. A
Flat Mini Heat Pipe (FMHP) prototype including a capillary structure
composed of parallel rectangular microchannels is manufactured and
a filling apparatus is developed in order to charge the FMHP. The
heat transfer improvement obtained by comparing the heat pipe
thermal resistance to the heat conduction thermal resistance of a
copper plate having the same dimensions as the tested FMHP is
demonstrated for different heat input flux rates. Moreover, the heat
transfer in the evaporator and condenser sections is analyzed, and
heat transfer laws are proposed. In the theoretical part of this work, a
detailed mathematical model of a FMHP with axial microchannels is
developed in which the fluid flow is considered along with the heat
and mass transfer processes during evaporation and condensation.
The model is based on the equations for the mass, momentum and
energy conservation, which are written for the evaporator, adiabatic,
and condenser zones. The model, which can simulate several
microchannel shapes, predicts the maximum heat transfer capacity of
the FMHP, the optimal fluid mass, and the flow and thermal
parameters along the FMHP. The comparison between experimental
and model results shows that the numerical model predicts the
axial temperature distribution along the FMHP well.
Abstract: The paper describes the design of an ontology in the
financial domain for mutual funds. The design of this ontology
consists of four steps, namely, specification, knowledge acquisition,
implementation and semantic query. Specification includes a
description of the taxonomy, the different types of mutual funds,
and their scope. Knowledge acquisition involves information
extraction from heterogeneous resources. Implementation describes
the conceptualization and encoding of this data. Finally, semantic
querying permits complex queries over the integrated data by
mapping the database entities to ontological concepts.
Abstract: The logistical requirements placed on industrial manufacturing companies are steadily increasing. In order to meet those requirements, a consistent and efficient concept for production control is necessary. Set up properly, production control offers considerable potential with respect to achieving the logistical targets. As experience with the many production control methods already in existence, and with their compatibility, is often inadequate, this article describes a systematic approach to the configuration of production control based on the Lödding model. This model enables production control to be set up individually to suit a company and its requirements. It therefore allows today's demands regarding logistical performance to be met.
Abstract: Human activities are increasingly based on the use of remote resources and services, and on the interaction between
remotely located parties that may know little about each other. Mobile agents must be prepared to execute on different hosts with
various environmental security conditions. The aim of this paper is to
propose a trust based mechanism to improve the security of mobile
agents and allow their execution in various environments. Thus, an
adaptive trust mechanism is proposed. It is based on the dynamic interaction between the agent and the environment. Information
collected during the interaction enables generation of an environment
key. This key indicates the host's degree of trust and permits the
mobile agent to adapt its execution. Trust estimation is based on
concrete parameter values. Thus, in case of distrust, the source of
the problem can be located and an appropriate behavior for the
mobile agent can be selected.
Abstract: This paper presents a new vision technique for
robotic manipulation of randomly oriented objects in industrial
applications. The proposed approach uses 2D and 3D vision for
efficiently extracting the 3D pose of an object in the presence of
multiple randomly positioned objects. 2D vision allows quick
selection of the objects of interest for 3D processing with a new
modified ICP algorithm (FaR-ICP), thus significantly reducing the processing
time. The extracted 3D pose is then sent to the robot manipulator for
picking. The tests show that the proposed system achieves high
performance.
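The FaR-ICP variant itself is not described in the abstract. As a hedged illustration of the underlying alignment step only, here is a plain point-to-point ICP sketch in Python; the function names and synthetic usage are our own, not from the paper:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch solution: least-squares rotation R and translation t
    # mapping the src points onto the paired dst points.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    # Plain point-to-point ICP (illustrative; not the paper's FaR-ICP).
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

The 2D pre-selection step described in the abstract matters because this inner loop is quadratic in the number of points considered.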
Abstract: Through the 1980s, management accounting researchers
described the increasing irrelevance of traditional control and
performance measurement systems. The Balanced Scorecard (BSC)
is a critical business tool for many organizations. It is a
performance measurement system which translates mission and
strategy into objectives. Strategy map approach is a development
variant of BSC in which some necessary causal relations must be
established. To identify these relations, experts usually rely on
their experience. It is also possible to use regression for the same
purpose. Structural Equation Modeling (SEM), which is one of the
most powerful methods of multivariate data analysis, obtains more
appropriate results than traditional methods such as regression. In the
present paper, we propose SEM for the first time to identify the
relations between objectives in the strategy map, and a test to
measure the importance of the relations. In SEM, factor analysis
and hypothesis testing are performed within the same analysis. SEM
is known to be
better than other techniques at supporting analysis and reporting. Our
approach provides a framework which permits the experts to design
the strategy map by applying a comprehensive and scientific method
together with their experience. This scheme is therefore more
reliable than previously established methods.
Abstract: The new programming technologies allow for the
creation of components which can be automatically or manually
assembled to provide a new experience in understanding and
mastering knowledge, or in acquiring skills in a specific knowledge
area. The project proposes an interactive framework that permits
the creation, combination and utilization of components that are
specific to mathematical training in high schools.
The main framework's objectives are:
• authoring lessons by the teacher or the students; all they need
are basic operating skills for Equation Editor (or something
similar, or LaTeX); the rest is just drag & drop operations,
inserting data into a grid, or navigating through menus
• allowing audio presentations of mathematical texts and
solving hints (more easily understood by the students)
• offering graphical representations of a mathematical function
edited in Equation Editor
• storing of learning objects in a database
• storing of predefined lessons (efficient for expressions and
commands, the rest being calculations; allows a high
compression)
• viewing and/or modifying predefined lessons, according to the
curricula
The whole framework is centred on a mini-compiler for mathematical
expressions, which stores code that will later be used for
different purposes (tables, graphics, and optimisations).
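The mini-compiler itself is proposed in C# .NET; as a language-neutral sketch of the idea only (parse an expression once, keep the compiled code object for reuse in tables or graphs), here is a Python stand-in with hypothetical names:

```python
import ast
import math

def compile_expr(src):
    # Minimal expression "mini-compiler": parse and compile once,
    # then reuse the code object for tables, plots, etc.
    # (Illustrative sketch; the project targets Visual C# .NET.)
    tree = ast.parse(src, mode="eval")
    code = compile(tree, "<expr>", "eval")
    return lambda x: eval(code, {"x": x, "math": math})

def table(f, xs):
    # Tabulate the compiled function, e.g. to feed a grid or a plot
    return [(x, f(x)) for x in xs]
```

For example, `table(compile_expr("x*x + 1"), range(3))` tabulates the parabola without re-parsing the expression for each point.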
Programming technologies used: a Visual C# .NET
implementation is proposed. New and innovative digital learning
objects for mathematics will be developed; they are capable of
interpreting, contextualizing and reacting depending on the
architecture in which they are assembled.
Abstract: In this work a new method for low-complexity
image coding is presented, which permits different settings and
great scalability in the generation of the final bit stream. This
coding scheme is a continuous-tone still image compression system
that combines lossy and lossless compression using finite-arithmetic
reversible transforms: both the color-space transform and the
wavelet transform are reversible. The transformed coefficients
are coded by a scheme based on a subdivision into smaller
components (CFDS), similar to bit-importance coding. The
subcomponents so obtained are reordered by means of a highly
configurable alignment system that, depending on the application,
makes it possible to reconfigure the elements of the image and to
obtain different importance levels from which the bit stream will
be generated. The subcomponents of each importance level are coded
using a variable-length entropy coding system (VBLm) that permits
the generation of an embedded bit stream. This bit stream by
itself encodes a compressed still image. However, applying a
packing system to the bit stream after the VBLm stage allows a
final, highly scalable bit stream to be produced from a basic
image level and one or several improvement levels.
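The abstract does not specify which reversible color-space transform is used; as one well-known example of such an integer transform, the JPEG 2000 reversible color transform (RCT) can be sketched as follows. This is a lossless round trip, not necessarily the paper's transform:

```python
import numpy as np

def rct_forward(rgb):
    # JPEG 2000 reversible color transform (integer, exactly invertible)
    r, g, b = (rgb[..., i].astype(np.int64) for i in range(3))
    y  = (r + 2 * g + b) >> 2      # floor division by 4
    cb = b - g
    cr = r - g
    return np.stack([y, cb, cr], axis=-1)

def rct_inverse(ycc):
    # Exact inverse: recovers G first, then R and B
    y, cb, cr = (ycc[..., i] for i in range(3))
    g = y - ((cb + cr) >> 2)
    r = cr + g
    b = cb + g
    return np.stack([r, g, b], axis=-1)
```

Because every step uses integer arithmetic with floor division, the forward/inverse pair reconstructs the original samples bit-exactly, which is what makes a combined lossy/lossless pipeline possible.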
Abstract: Knowledge is attributed to humans, whose
problem-solving behavior is subjective and complex. In today's knowledge
economy, the need to manage knowledge produced by a community
of actors cannot be overemphasized. This is due to the fact that
actors possess some level of tacit knowledge which is generally
difficult to articulate. Problem-solving requires searching and sharing
of knowledge among a group of actors in a particular context.
Knowledge expressed within the context of a problem resolution
must be capitalized for future reuse. In this paper, an approach is
proposed that permits dynamic capitalization of relevant and
reliable actors' knowledge in solving decision problems following
the Economic Intelligence process. A knowledge annotation method and
temporal attributes are used for handling the complexity in the
communication among actors and in contextualizing expressed
knowledge. A prototype is built to demonstrate the functionalities of
a collaborative Knowledge Management system based on this
approach. It is tested with sample cases, and the results show that
dynamic capitalization leads to knowledge validation, hence
increasing the reliability of captured knowledge for reuse. The system
can be adapted to various domains.
Abstract: Knowledge sharing in general, and the contextual
access to knowledge in particular, still represent a key challenge in
the knowledge management framework. Researchers in the semantic
web and human-machine interaction fields study techniques to
enhance this access. For instance, in the semantic web, information
retrieval is based on domain ontologies. In human-machine
interaction, keeping track of the user's activity provides some
elements of the context that can guide access to information. We
suggest an approach based on
these two key guidelines, whilst avoiding some of their weaknesses.
The approach permits a representation of both the context and the
design rationale of a project for efficient access to knowledge. In
fact, the method consists of an information retrieval environment
that, on the one hand, can infer knowledge modeled as a semantic
network, and, on the other hand, is based on the context and the
objectives of a specific activity (the design). The environment we
defined can also be used to gather similar project elements in order to
build classifications of tasks, problems, arguments, etc. produced in a
company. These classifications can show the evolution of design
strategies in the company.
Abstract: The practical implementation of audio-video coupled speech recognition systems is mainly limited by the hardware complexity of integrating two radically different information-capturing devices with good temporal synchronisation. In this paper, we propose a solution based on a smart CMOS image sensor that simplifies the hardware integration difficulties. Using on-chip image processing, this smart sensor can calculate in real time the X/Y projections of the captured image. This on-chip projection considerably reduces the volume of the output data. This data-volume reduction permits transmission of the condensed visual information over the same audio channel, using the stereophonic input available on most standard computing devices such as PCs, PDAs and mobile phones. A prototype called VMIKE (Visio-Microphone) has been designed and realised in a standard 0.35 µm CMOS technology. A preliminary experiment gives encouraging results. Its efficiency will be further investigated in a large variety of applications, such as biometrics, speech recognition in noisy environments, and vocal control for military or disabled persons.
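The X/Y projection that the sensor computes on-chip can be sketched off-chip as simple row and column sums: an H×W frame condenses to H+W values (e.g. a 480×640 frame, 307,200 pixels, reduces to 1,120 values), which is the data-volume reduction the audio-channel transmission relies on. A minimal Python sketch (names are ours):

```python
import numpy as np

def xy_projections(frame):
    # Condense an HxW frame into H+W values, mimicking the
    # sensor's on-chip X/Y projection readout.
    x_proj = frame.sum(axis=0)   # projection onto X: one sum per column
    y_proj = frame.sum(axis=1)   # projection onto Y: one sum per row
    return x_proj, y_proj
```

Lip movement then appears as changes in these two 1-D profiles between frames rather than in the full image.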
Abstract: Image mosaicing is a technique that enlarges the field of view of a camera. For instance, it is employed to obtain panoramas with common cameras or, in scientific applications, to obtain an image of a whole culture in microscopy imaging. Usually, a mosaic of cell cultures is obtained using automated microscopes. However, this is often performed in batch, through CPU-intensive minimization algorithms. In addition, live stem cells are studied in phase contrast, showing a low contrast that cannot be improved further. We present a method to estimate the flat field from live stem cell images, even in the case of 100% confluence, which permits building accurate mosaics on-line using high-performance algorithms.
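The paper's flat-field estimation method is not detailed in the abstract. A common baseline, sketched here under a multiplicative shading assumption of our own, averages many tiles with independent content so that mostly the shading pattern remains, then divides it out before blending tiles into a mosaic:

```python
import numpy as np

def estimate_flat_field(tiles):
    # Assumes each tile = scene * flat (multiplicative vignetting);
    # averaging many tiles with independent content leaves mainly
    # the flat field. (Baseline sketch, not the paper's method.)
    flat = tiles.mean(axis=0)
    return flat / flat.mean()      # normalise to unit mean

def correct_tile(tile, flat, eps=1e-6):
    # Divide out the estimated shading before mosaic blending
    return tile / (flat + eps)
```

Without such a correction, the vignetting of each tile produces visible grid-like seams in the assembled mosaic.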
Abstract: Over the years, many implementations have been
proposed for solving IA networks. These implementations are
concerned with finding a solution efficiently. The primary goal of
our implementation is simplicity and ease of use.
We present an IA network implementation based on finite domain
non-binary CSPs, and constraint logic programming. The
implementation has a GUI which permits the drawing of arbitrary IA
networks. We then show how the implementation can be extended to
find all the solutions to an IA network. One application of finding
all the solutions is solving probabilistic IA networks.
Abstract: Software maintenance, which involves making enhancements, modifications and corrections to existing software systems, consumes more than half of developer time. Specification comprehensibility plays an important role in software maintenance, as it permits the system properties to be understood more easily and quickly. The use of a formal notation such as B increases a specification's precision and consistency. However, the notation is regarded as being difficult to comprehend. A semi-formal notation such as the Unified Modelling Language (UML) is perceived as more accessible, but it lacks formality. Combining both notations could perhaps produce a specification that is not only accurate and consistent but also accessible to users. This paper presents an experiment conducted on a model that integrates the use of both UML and B notations, namely UML-B, versus a B model alone. The objective of the experiment was to evaluate the comprehensibility of a UML-B model compared to a traditional B model. The measurement used in the experiment focused on the efficiency of performing the comprehension tasks. The experiment employed a cross-over design and was conducted on forty-one subjects, including undergraduate and masters students. The results show that the notation used in the UML-B model is more comprehensible than that of the B model.
Abstract: In this paper, some problem formulations for the recovery of dynamic object parameters described by a non-autonomous system of ordinary differential equations with multipoint unshared edge conditions are investigated. Depending on the number of additional conditions, the problem is reduced to a system of algebraic equations or to a quadratic programming problem. For this purpose, the paper offers a new scheme for the edge-condition transfer method, called conditions shift. The method makes it possible to eliminate the differential links and the multipoint unshared initial-edge conditions. The advantage of the proposed approach lies in its ability to reduce a parametric identification problem to essentially simpler problems: the solution of an algebraic system or of a quadratic programming problem.
Abstract: The study of the defects generated on manufactured
parts shows the difficulty of maintaining parts in position during
the machining process and of estimating these defects during
process planning. This work presents a contribution to the development of 3D
models for the optimization of the manufacturing tolerances. An
experimental study allows the measurement of the defects of part
positioning for the determination of ε and the choice of an optimal
setup of the part. A 3D tolerancing approach based on the small
displacements method permits the upstream determination of the
manufacturing errors. A developed tool allows the automatic
generation of the tolerance intervals along the three axes.
Abstract: Myocardial scintigraphy is an imaging modality which provides functional information, whereas coronarography gives useful information about coronary artery anatomy. In the case of coronary artery disease (CAD), coronarography cannot determine precisely which moderate lesions (artery reduction between 50% and 70%), known as the "gray zone", are haemodynamically significant. In this paper, we aim to define the relationship between the location and degree of stenosis in the coronary arteries and the perfusion observed on the myocardial scintigraphy. This allows us to model the evolution of the impact of these stenoses in order to justify a coronarography, or to avoid it, for patients suspected of being in the gray zone. Our approach is decomposed into two steps. The first step consists in modelling a coronary artery bed and stenoses of different locations and degrees. The second step consists in modelling the left ventricle at stress and at rest using the spherical harmonics model and myocardial scintigraphic data. We use the spherical harmonics descriptors to analyse the left ventricle model deformation between stress and rest, which permits us to conclude whether an ischemia exists and to quantify it.