Abstract: This work proposes a study of fruit bruise detection by means of the biospeckle method, with papaya (Carica papaya) selected as the test fruit. Papaya is recognized for its outstanding nutritional qualities, with high contents of vitamin A, calcium and carbohydrates, and is highly popular worldwide in terms of consumption and acceptability. The commercialization of papaya faces particular problems associated with bruising during harvesting, packing and transportation. Papaya is classified as a climacteric fruit, which allows it to be harvested before maturation is complete. On the one hand, bruising at that stage is partially controlled because the fruit flesh exhibits high mechanical firmness; on the other hand, mechanical loads at that maturation stage can seed a future bruise that cannot yet be detected by conventional methods. Mechanical damage to the fruit skin opens an entry point for microorganisms and pathogens, which cause severe losses of quality attributes. Traditional techniques of fruit quality inspection include total soluble solids determination, mechanical firmness tests and visual inspection, which can hardly meet the conditions required for a fully automated process. The pertinent literature, however, describes a method named biospeckle, based on laser reflectance and interference phenomena. The laser biospeckle, or dynamic speckle, is quantified by means of the Moment of Inertia, named after its mechanical counterpart owing to the similarity between the defining formulae. Biospeckle techniques are able to quantify the biological activity of living tissues and have been applied to seed viability analysis, vegetable senescence and related topics. Since biospeckle techniques can monitor tissue physiology, they should also be able to detect changes in the fruit caused by mechanical damage.
The proposed technique is non-invasive and generates numerical results consistent with adequate automation. The experimental tests in this work included selecting papaya fruits at different maturation stages and submitting them to artificial mechanical bruising. The damage was visually compared with the frequency maps yielded by the biospeckle technique, and the results were in close agreement.
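As a concrete illustration of the quantification step mentioned above, the sketch below computes the Moment of Inertia from the co-occurrence matrix of a time history speckle pattern (THSP). It is illustrative only: the array shapes, 8-bit intensity range and function name are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def inertia_moment(thsp):
    """Moment of Inertia of a Time History Speckle Pattern (THSP).

    thsp: 2-D array (pixels x time) of 8-bit intensity values.
    Builds the co-occurrence matrix of successive intensities along
    time, normalizes each row, and weights each entry by the squared
    intensity jump (i - j)**2, by analogy with the mechanical moment
    of inertia about the matrix diagonal.
    """
    com = np.zeros((256, 256), dtype=float)
    for row in thsp:
        for a, b in zip(row[:-1], row[1:]):
            com[a, b] += 1
    # normalize each row so every intensity level contributes equally
    sums = com.sum(axis=1, keepdims=True)
    sums[sums == 0] = 1.0
    com /= sums
    i, j = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
    return float((com * (i - j) ** 2).sum())

# a static (low-activity) signal stays on the diagonal and yields IM = 0;
# a fluctuating (high-activity) signal spreads off-diagonal and yields IM > 0
static = np.full((4, 100), 128, dtype=np.uint8)
active = np.random.default_rng(0).integers(0, 256, size=(4, 100), dtype=np.uint8)
print(inertia_moment(static) < inertia_moment(active))  # True
```

Higher biological activity (or, per the abstract, tissue altered by a bruise) produces larger intensity jumps over time and hence a larger Moment of Inertia.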
Abstract: Three-dimensional finite element models of austenitic stainless steel AISI 304 sheets of 1.0 mm thickness in the annealed condition are developed using ABAQUS® software. These comprise spot-welded and weld-bonded joint models. Both models undergo the thermal cycle caused by the spot welding process and are then subjected to axial load up to the failure point. The properties of the elastic and plastic regions, the modulus of elasticity, the fracture limit, and the nugget and heat-affected zones are determined. A complete load-displacement curve for each joint model is obtained and compared with the experimental data and with finite element models that do not include the effect of the thermal process. In general, the results obtained for both spot-welded and weld-bonded joints including the thermal process show excellent agreement with the experimental data.
Abstract: Face recognition in the infrared spectrum has attracted a lot of interest in recent years. Many of the techniques used in the infrared are based on their visible-spectrum counterparts, especially linear techniques such as PCA and LDA. In this work, we introduce a probabilistic Bayesian framework for face recognition in the infrared spectrum. In the infrared, variations can occur between face images of the same individual due to pose, metabolic and time changes, etc. Bayesian approaches make it possible to reduce intrapersonal variation, which makes them very interesting for infrared face recognition. This framework is compared with classical linear techniques. Nonlinear techniques we developed recently for infrared face recognition are also presented and compared to the Bayesian framework. A new approach for infrared face extraction based on SVM is also introduced. Experimental results show that the Bayesian technique is promising and leads to interesting results in the infrared spectrum when a sufficient number of face images is used in the intrapersonal learning process.
Abstract: Bead-on-plate welds were carried out on AISI 316L (N) austenitic stainless steel (ASS) using the flux cored arc welding (FCAW) process. The bead-on-plate welds were conducted as per an L25 orthogonal array. In this paper, weld bead geometry parameters of AISI 316L (N) ASS, such as the depth of penetration (DOP), bead width (BW) and weld reinforcement (R), are investigated. The Taguchi approach is used as the statistical design of experiments (DOE) technique for optimizing the selected welding input parameters. Grey relational analysis and the desirability approach are applied to optimize the input parameters considering multiple output variables simultaneously. A confirmation experiment has also been conducted to validate the optimized parameters.
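To make the multi-response optimization step concrete, here is a minimal grey relational analysis sketch. The bead-geometry numbers and equal response weights are hypothetical, not the paper's L25 data.

```python
import numpy as np

def grey_relational_grade(responses, larger_better, zeta=0.5):
    """Grey relational analysis for multi-response optimization.

    responses: (runs x responses) array of measured outputs.
    larger_better: one boolean per response column
    (True = larger-the-better, e.g. depth of penetration;
     False = smaller-the-better, e.g. bead width).
    Returns the grey relational grade of each experimental run;
    the run with the highest grade is the best overall compromise.
    """
    x = np.asarray(responses, dtype=float)
    norm = np.empty_like(x)
    for k, larger in enumerate(larger_better):
        lo, hi = x[:, k].min(), x[:, k].max()
        norm[:, k] = (x[:, k] - lo) / (hi - lo) if larger else (hi - x[:, k]) / (hi - lo)
    delta = 1.0 - norm  # deviation from the ideal (reference) sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)  # equal weights across responses assumed

# three hypothetical runs with (DOP, BW, R) measurements
grades = grey_relational_grade(
    [[5.2, 9.1, 1.8], [6.0, 8.4, 1.5], [4.8, 9.5, 2.0]],
    larger_better=[True, False, False],
)
print(grades.argmax())  # 1: the second run dominates on all three responses
```

The distinguishing coefficient zeta = 0.5 is the conventional default; response weights could be set unequally when one geometry parameter matters more.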
Abstract: This paper presents a performance analysis of the Evolutionary Programming-Artificial Neural Network (EPANN) based technique to optimize the architecture and training parameters of a one-hidden-layer feedforward ANN model for predicting the energy output of a grid-connected photovoltaic system. The ANN
utilizes solar radiation and ambient temperature as its inputs while the
output is the total watt-hour energy produced from the grid-connected
PV system. EP is used to optimize the regression performance of the
ANN model by determining the optimum values for the number of
nodes in the hidden layer as well as the optimal momentum rate and
learning rate for the training. The EPANN model is tested using two
types of transfer function for the hidden layer, namely the tangent
sigmoid and logarithmic sigmoid. The best transfer function, neural
topology and learning parameters were selected based on the highest
regression performance obtained during the ANN training and testing
process. It is observed that the best transfer function configuration for
the prediction model is [logarithmic sigmoid, purely linear].
Abstract: The pyrolysis of hazelnut shell, polyethylene oxide and their blends was carried out catalytically at 500 and 650 ºC. Potassium dichromate was chosen for its oxidative characteristics and its decomposition temperature (500 ºC), at which the decomposition products are CrO3 and K2CrO4. As the main effect, a remarkable increase in gasification was observed with this catalyst for the pure components and the blends, especially at 500 ºC rather than 650 ºC, contrary to the usual observation in pyrolysis processes. The increase in gas product quantity was compensated mainly by a decrease in solid product and, in some cases, also in liquid products.
Abstract: In this study, a framework for the verification of well-known seismic codes is utilized. To verify seismic code performance, the damage quantity of RC frames is compared with the target performance. Because of the inherent randomness of seismic design and earthquake load excitation, fragility curves are developed in this paper. These curves are utilized to evaluate the performance level of structures designed by the seismic codes, and they further illustrate the effect of the load combination and reduction factors of the codes on the probability of damage exceedance. Two types of structures are designed by different seismic codes: very important structures with high ductility and moderately important structures with intermediate ductility. The results reveal that a lower damage ratio usually produces a lower probability of exceedance. In addition, the findings indicate that there are buildings with higher quantities of reinforcing bars that nevertheless have a higher probability of damage exceedance. Life-cycle cost analysis is utilized for the comparison and final decision-making process.
Abstract: Learning programming is difficult for many learners. Some studies have found that the main difficulty relates to cognitive load. Cognitive overload happens in programming because the subject is intrinsically demanding on working memory; it arises from the complexity of the subject itself. The problem is made worse by poor instructional design in the teaching and learning process. Various efforts have been proposed to reduce the cognitive load, e.g. visualization software and the part-program method, and many computer-based systems have also been tried to tackle the problem. However, little success has been achieved in alleviating it, and more has to be done to overcome this hurdle. This research attempts to understand how cognitive load can be managed so as to reduce the problem of overloading. We propose a mechanism to measure the cognitive load during the pre-instruction, in-instruction and post-instruction stages of learning. This mechanism is used to guide the instruction: as the load changes, the instruction adapts itself to ensure cognitive viability. The mechanism could be incorporated as a sub-domain in the student model of various computer-based instructional systems to facilitate the learning of programming.
Abstract: Data mining is the extraction of knowledge from large data sets generated by various data processing activities, and frequent pattern mining is a very important task within it. Previous approaches to generating frequent itemsets generally adopt candidate generation and pruning techniques to achieve the desired objective. This paper shows how the different approaches achieve the objective of frequent pattern mining, along with the complexities involved in performing the job. It also examines a hardware cache-coherence approach to improve the efficiency of the process. Data mining is helpful in building support systems for management, bioinformatics, biotechnology, medical science, statistics, mathematics, banking, networking and other computing applications. This paper proposes the use of both the upward and downward closure properties for the extraction of frequent itemsets, which reduces the total number of scans required for the generation of candidate sets.
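The candidate generation and pruning scheme mentioned above can be sketched with the classical Apriori algorithm, which relies on the downward closure property (every subset of a frequent itemset must itself be frequent). The abstract does not detail the combined upward/downward variant, so this shows only the standard scheme:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent itemset mining via candidate generation and pruning.

    Candidates containing any infrequent (k-1)-subset are pruned by
    the downward closure property before the support-counting scan.
    """
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        # join step: combine frequent (k-1)-itemsets into k-itemsets
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        # prune step (downward closure): drop any candidate with an
        # infrequent (k-1)-subset before counting support
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if support(c) >= min_support}
        result |= frequent
        k += 1
    return result

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(frozenset({"a", "b"}) in apriori(db, 0.5))  # True
```

Each pass over the database counts support only for pruned candidates, which is exactly the scan cost the proposed dual-closure approach aims to reduce further.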
Abstract: We study the problem of decision making with a Dempster-Shafer belief structure. We analyze Yager's previous work on using the ordered weighted averaging (OWA) operator in the aggregation step of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for cases where the smallest value is the best result. We then suggest introducing the ordered weighted geometric (OWG) operator into the Dempster-Shafer framework. In this case, we also discuss aggregation with an ascending order and find that it is strictly necessary, as the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example showing the different results obtained by using the OWA, ascending OWA (AOWA), OWG and ascending OWG (AOWG) operators.
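A minimal sketch of the operators under discussion, using a payoff vector and weighting vector of our own choosing (not the paper's example):

```python
import math

def owa(values, weights, ascending=False):
    """Ordered weighted averaging: weights attach to positions in the
    sorted argument list (descending by default), not to the arguments
    themselves; ascending=True gives the AOWA operator."""
    ordered = sorted(values, reverse=not ascending)
    return sum(w * v for w, v in zip(weights, ordered))

def owg(values, weights, ascending=False):
    """Ordered weighted geometric operator: the weighted geometric mean
    of the reordered arguments. It requires strictly positive inputs,
    which is why an ascending order is needed whenever the smallest
    value is the best result."""
    if any(v <= 0 for v in values):
        raise ValueError("OWG cannot aggregate non-positive numbers")
    ordered = sorted(values, reverse=not ascending)
    return math.prod(v ** w for w, v in zip(weights, ordered))

payoffs = [60.0, 30.0, 50.0]
w = [0.5, 0.3, 0.2]                      # illustrative weighting vector
print(owa(payoffs, w))                    # 0.5*60 + 0.3*50 + 0.2*30 = 51.0
print(owa(payoffs, w, ascending=True))    # 0.5*30 + 0.3*50 + 0.2*60 = 42.0
print(owg([4.0, 1.0], [0.5, 0.5]))        # sqrt(4 * 1) = 2.0
```

The descending and ascending orderings give different aggregates for the same weights, which is the distinction the paper exploits when the smallest value is the best outcome.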
Abstract: This work aims to reduce the read power consumption
as well as to enhance the stability of the SRAM cell during the read
operation. A new 10-transistor cell is proposed with a new read
scheme to minimize the power consumption within the memory core.
It has separate read and write ports, thus cell read stability is
significantly improved. A 16Kb SRAM macro operating at 1V
supply voltage is demonstrated in 65 nm CMOS process. Its read
power consumption is reduced to 24% of the conventional design.
The new cell also has lower leakage current due to its special bit-line
pre-charge scheme. As a result, it is suitable for low-power mobile
applications where power supply is restricted by the battery.
Abstract: XML is an important standard for data exchange and representation. Since the relational database is a mature technology, using it to support XML data can bring advantages. However, storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth and disk I/O when querying the XML data. For efficient storage and querying of XML, it is therefore necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this feature. Besides traditional relational database techniques, additional query processing technologies on compressed relations and for the special structure of XML are presented, as are technologies for XQuery processing in a compressed relational database.
Abstract: The environmental impact caused by industry is an issue that, over the last 20 years, has become very important socially, economically and politically in Colombia. In particular, the tannery process is extremely polluting because of ineffective treatments and regulations governing dumping and atmospheric emissions. Accordingly, this investigation proposes a management model based on the integration of Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004 that prioritizes the strategic components of the organizations. The resulting management model provides a strategic perspective on the tanning process through a systemic approach, achieved through the use of multi-criteria decision tools along with Quality Function Deployment and Fuzzy Logic. The strategic approach embraced by the management model, aligning Lean Supply Chain, Green Supply Chain, Cleaner Production and ISO 14001:2004, is an integrated perspective that allows the tactical and operative elements to be gradually framed through the correct setting of the information flow, improving the decision-making process. In this way, small and medium enterprises (SMEs) could improve their productivity and competitiveness and, as an added value, minimize their environmental impact. This improvement is expected to be monitored through a dashboard that helps the organization measure its performance during the implementation of the model in its productive process.
Abstract: A novel and simple method is introduced for rapid and
highly efficient water treatment by reverse osmosis (RO) method using
multi-walled carbon nanotubes (MWCNTs) / polyacrylonitrile (PAN)
polymer as a flexible, highly efficient, reusable and semi-permeable
mixed matrix membrane (MMM). For this purpose, MWCNTs were
directly synthesized and on-line purified by chemical vapor deposition
(CVD) process, followed by directing the MWCNT bundles towards an
ultrasonic bath, in which PAN polymer was simultaneously suspended
inside a solid porous silica support in water at a temperature of ~70 °C.
The fabrication process of the MMM was finally completed by a hot isostatic pressing (HIP) process. According to the analytical figures of merit, the efficiency of the fabricated MMM was ~97%, and the rate of the water treatment process was evaluated at 6.35 L min-1. The results reveal that the CNT-based MMM is suitable for rapid treatment of different forms of industrial, sea, drinking and well water samples.
Abstract: During the last few years, several sheet hydroforming
processes have been introduced. Despite the advantages of these
methods, they have some limitations. Of the processes, the two main
ones are the standard hydroforming and hydromechanical deep
drawing. A new sheet hydroforming die set was proposed that has the
advantages of both processes and eliminates their limitations. In this
method, a polyurethane plate was used as a part of the die-set to
control the blank holder force. This paper outlines the Taguchi
optimization methodology, which is applied to optimize the effective
parameters in forming cylindrical cups with the new sheet hydroforming die set. The process parameters evaluated in this
research are polyurethane hardness, polyurethane thickness, forming
pressure path and polyurethane hole diameter. A design of experiments based on Taguchi's L9 orthogonal array was used, and analysis of variance (ANOVA) was employed to analyze the effect of these parameters on the forming pressure. The analysis of
the results showed that the optimal combination for low forming
pressure is harder polyurethane, bigger diameter of polyurethane hole
and thinner polyurethane. Finally, a confirmation test was conducted with the optimal combination of parameters, and it showed that the Taguchi method is suitable for examining this optimization process.
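The Taguchi analysis step can be illustrated with a smaller-the-better signal-to-noise ratio, the usual criterion when a low forming pressure is desired; the run numbers and pressure values below are hypothetical, not the paper's L9 data:

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(y^2)). A larger S/N corresponds to lower
    and more consistent forming pressure across replicates."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# hypothetical forming-pressure replicates (MPa) for three of the nine runs
runs = {1: [21.0, 22.5], 2: [18.0, 18.4], 3: [25.1, 24.0]}
sn = {run: sn_smaller_the_better(ys) for run, ys in runs.items()}
best = max(sn, key=sn.get)
print(best)  # 2: the run with the lowest pressures has the highest S/N
```

In a full analysis, the S/N ratios would be averaged per level of each factor (hardness, thickness, pressure path, hole diameter) to pick the optimal level combination, with ANOVA apportioning each factor's contribution.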
Abstract: This study investigated possible ways to improve the
efficiency of the platinum precipitation process using ammonium
chloride by reducing the platinum content reporting to the effluent.
The ore treated consists of five platinum group metals, namely ruthenium, rhodium, iridium, platinum and palladium, together with the precious metal gold. Gold, ruthenium, rhodium and iridium were extracted prior to the platinum precipitation process. Temperature, reducing agent, flow rate and potential difference were the variables controlled to determine the operating conditions for optimum platinum precipitation efficiency. Hydrogen peroxide was added as the oxidizing agent at a temperature of 85-90 °C, and a potential difference of 700-850 mV was used to check the oxidation state of the platinum. The platinum was further purified at a temperature between 60-65 °C, a potential difference above 700 mV and an ammonium chloride addition of 200 l; at these conditions the platinum content reporting to the effluent was reduced to less than 300 ppm, resulting in optimum platinum precipitation efficiency and a purity of 99.9%.
Abstract: Moral decisions are considered an intuitive process, while conscious reasoning is mostly used only to justify those intuitions. This problem is described in several dual-process theories of mind being developed, e.g., by Frederick and Kahneman, Stanovich, and Evans. Those theories have recently evolved into tri-process theories, with a proposed process that makes the ultimate decision or allows paraformal processing with focal bias. The experiment presented here compares the observed decision patterns with the implications of those models.
In the present study, participants (n=179) considered different aspects of the trolley dilemma or its footbridge version and decided afterwards.
The results show that in the control group 70% of people decided to use the lever to change the tracks of the running trolley, and 20% chose to push the fat man onto the tracks. In contrast, after the experimental manipulation almost no one decided to act. The decision time difference between the dilemmas also disappeared after the manipulation.
The results support the idea of three co-working processes: the intuitive (TASS), the paraformal (reflective mind) and the algorithmic process.
Abstract: Citizens are increasingly provided with choice and customization in public services, and this has now also become a key feature of higher education in terms of policy roll-outs on personal development planning (PDP) and, more generally, as part of the
employability agenda. The goal here is to transform people, in this
case graduates, into active, responsible citizen-workers. A key part of
this rhetoric and logic is the inculcation of graduate attributes within
students. However, there has also been a concern with the issue of
student lack of engagement and perseverance with their studies. This
paper sets out to explore some of these conceptions that link graduate
attributes with citizenship as well as the notion of how identity is
forged through the higher education process. Examples are drawn
from a quality enhancement project that is being operated within the
context of the Scottish higher education system. This is further
framed within the wider context of competing and conflicting
demands on higher education, exacerbated by the current worldwide
economic climate. There are now pressures on students to develop
their employability skills as well as their capacity to engage with
global issues such as behavioural change in the light of
environmental concerns. It is argued that these pressures, in effect,
lead to a form of personalization that is concerned with how
graduates develop their sense of identity as something that is
engineered and re-engineered to meet these demands.
Abstract: Hazard rate estimation is one of the important topics
in forecasting earthquake occurrence. Forecasting earthquake
occurrence is a part of the statistical seismology where the main
subject is the point process. Generally, earthquake hazard rate is
estimated based on the point process likelihood equation called the
Hazard Rate Likelihood of Point Process (HRLPP). In this research, we develop an estimation method called hazard rate single decrement (HRSD), adapted from estimation methods used in actuarial studies. Here, each individual is associated with an earthquake whose inter-event time is exponentially distributed. The epicenter and occurrence time of the earthquakes are used to estimate the hazard rate. Finally, a case study of earthquake hazard rate is given, and the hazard rates obtained with the HRLPP and HRSD methods are compared.
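Under the stated assumption of exponentially distributed inter-event times, the hazard rate is constant and its maximum-likelihood estimate is the reciprocal of the mean inter-event time. A minimal sketch follows; the data are hypothetical, and this is a generic MLE, not the HRSD formula itself, which the abstract does not give:

```python
import math

def exponential_hazard(inter_event_times):
    """Maximum-likelihood hazard rate for exponentially distributed
    inter-event times: the hazard is constant and equals
    n / sum(t_i), i.e. 1 / (mean inter-event time)."""
    n = len(inter_event_times)
    return n / sum(inter_event_times)

# hypothetical inter-event times between earthquakes, in years
times = [2.0, 1.5, 3.5, 1.0]
lam = exponential_hazard(times)
print(lam)  # 4 / 8.0 = 0.5 events per year
# probability of an event-free interval of t years: exp(-lam * t)
print(math.exp(-lam * 2.0))
```

Both HRLPP and HRSD ultimately produce such a rate estimate from the catalog of occurrence times; they differ in how the likelihood is constructed.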
Abstract: Heavy metals have harmful effects on the environment and soils, and they can be taken up by natural hydroxyapatite (HAP), an inexpensive material that takes up large amounts of various heavy metals such as Zn(II). Natural HAP (N-HAP), extracted from bovine cortical bone ash, is a good substitute for commercial HAP. Several
experiments were done to investigate the sorption capacity of Zn (II)
to N-HAP in various particles sizes, temperatures, initial
concentrations, pH and reaction times. In this study, the sorption of
Zinc ions from a Zn solution onto HAP particles with sizes of 1537.6
nm and 47.6 nm at three initial pH values of 4.50, 6.00 and 7.50 was
studied. The results showed that better performance was obtained
through a 47.6 nm particle size and higher pH values. The
experimental data were analyzed using Langmuir, Freundlich, and
Arrhenius equations for equilibrium, kinetic and thermodynamic
studies. The analysis showed a maximum adsorption capacity of N-HAP of 1.562 mmol/g at a pH of 7.5 and the smaller particle size.
Kinetically, the prepared N-HAP is a feasible sorbent that retains Zn
(II) ions through a favorable and spontaneous sorption process.
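The Langmuir analysis mentioned above can be sketched as a fit of the linearized isotherm. The equilibrium data below are synthetic, generated from the reported capacity of 1.562 mmol/g and an assumed affinity constant KL; they are not the study's measurements:

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the Langmuir isotherm q = qmax*KL*C / (1 + KL*C) via its
    linearized form Ce/qe = Ce/qmax + 1/(KL*qmax): the slope gives
    1/qmax and the intercept gives 1/(KL*qmax)."""
    Ce, qe = np.asarray(Ce, float), np.asarray(qe, float)
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qmax = 1.0 / slope
    KL = slope / intercept
    return qmax, KL

# synthetic equilibrium data from qmax = 1.562 mmol/g, KL = 2.0 L/mmol (assumed)
Ce = np.array([0.1, 0.5, 1.0, 2.0, 4.0])      # equilibrium concentration, mmol/L
qe = 1.562 * 2.0 * Ce / (1.0 + 2.0 * Ce)      # adsorbed amount, mmol/g
qmax, KL = fit_langmuir(Ce, qe)
print(round(qmax, 3), round(KL, 3))  # recovers 1.562 and 2.0
```

With real batch data, the quality of this linear fit (versus the analogous Freundlich fit) indicates which isotherm better describes Zn(II) sorption on N-HAP.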