Abstract: The importance of this study lies in understanding how the Indonesian military court asserts jurisdiction over military members who commit general crimes within the Indonesian military judiciary system, in comparison with other countries. This research employs a normative-juridical approach combined with historical and comparative-juridical approaches. The research specification is analytical-descriptive in nature, i.e. it describes the principles, basic concepts, and norms related to the military judiciary system, which are then analyzed in the context of implementation and as input for military justice regulation under the Indonesian legal system. The main data used in this research are secondary data, comprising primary, secondary, and tertiary legal sources. The research focuses on secondary data, while primary data are supplementary. The validity of the data is checked using multiple methods, commonly known as triangulation, reflecting the effort to gain an in-depth understanding of the phenomena being studied. Here, the military element is kept intact in the judiciary process, with due observance of the Military Criminal Justice System and the Military Command Development Principle. The Indonesian military judiciary's jurisdiction over military members committing general crimes is based on the national legal system and global developments, while taking into account the structure, composition, and position of the military forces within the state structure. Jurisdiction is formulated by setting forth the substantive norms of crimes that are military in nature. At the level of adjudication jurisdiction, the military court has jurisdiction to adjudicate military personnel who commit general offences. At the level of execution jurisdiction, the military court has jurisdiction to execute the sentence against military members who have been convicted with a final and binding judgement. The military court's jurisdiction needs to be expanded when the country is in a state of war.
Abstract: This study presents an integrated investigation of how life satisfaction is associated with Korean game users' psychological variables (self-esteem, game and life self-efficacy), social variables (bonding and bridging social capital), and demographic variables (age, gender). The data used for the empirical analysis came from a representative sample survey conducted in South Korea. Results show that self-esteem and game efficacy were important antecedents of the degree of users' life satisfaction. Both bonding and bridging social capital enhance the level of users' life satisfaction. The importance of these perspectives, as well as their implications for game users and further associated research, is explored.
Abstract: In emerging economies, recycling is an opportunity for cities to increase the lifespan of sanitary landfills, reduce the costs of solid waste management, lessen the environmental problems of waste treatment by reincorporating waste into the productive cycle, and protect and develop the livelihoods of informal waste pickers. However, few studies have analysed the possibilities and strategies for integrating the formal and informal sectors in solid waste management for the benefit of both. This study makes a strength, weakness, opportunity, and threat (SWOT) analysis of three recycling associations of Bogotá, with the aim of understanding the situation of recycling from the perspective of the informal sector in its transition to becoming authorized waste providers. Data used in the analysis are derived from multiple sources, such as a literature review, Bogotá's recycling database, focus group meetings, governmental reports, national laws and regulations, and interviews with key stakeholders. The results of this study show how the main stakeholders of the formal and informal waste management sectors can identify the internal and external conditions of recycling in Bogotá. Several strategies were designed based on the SWOT factors identified; these could be useful for Bogotá in advancing and promoting recycling as a key strategy for integrated sustainable waste management in the city.
Abstract: The aim of this paper is to understand the learning conditions that emerge when visual analytics is implemented and used in K-12 education. To date, little attention has been paid to the role visual analytics (digital media and technology that highlight visual data communication in order to support analytical tasks) can play in education, and to the extent to which these tools can process actionable data for young students. This study was conducted in three public K-12 schools, in four social science classes with students aged 10 to 13 years, over a period of two to four weeks at each school. Empirical data were generated using video observations and analyzed with the help of metaphors within Actor-network theory (ANT). The learning conditions are found to be distinguished by broad complexity, characterized by four dimensions that emerge from the actors' deeply intertwined relations in the activities. In relation to these dimensions, the paper argues that novel approaches to teaching and learning could benefit students' knowledge building as they work with visual analytics, analyzing visualized data.
Abstract: In this paper we describe the security capabilities of data collection. Data are collected with probes located in the near and distant surroundings of the company. Given the numerous obstacles, e.g. forests, hills, and urban areas, the data collection is realized in several ways: via wireless communication, the LAN network, the GSM network, and, in certain areas, by using vehicles. In order to ensure the connection to the server, most of the probes have the ability to communicate in several ways. Collected data are archived and subsequently used in supervisory applications.
To ensure the collection of the required data, it is necessary to propose algorithms that allow the probes to select a suitable communication channel.
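The abstract does not specify the selection algorithm itself; as a hedged illustration, a probe's channel fallback logic could be sketched as follows (the channel names, their ordering, and the `select_channel` helper are hypothetical, not taken from the paper):

```python
# Hypothetical sketch of a probe choosing a communication channel.
# Channels are tried in a fixed order of preference; names are
# illustrative only and do not come from the described system.
PREFERRED_ORDER = ["wireless", "LAN", "GSM", "vehicle_pickup"]

def select_channel(reachable):
    """Return the first preferred channel the probe can currently reach.

    `reachable` is a set of channel names the probe has verified
    (e.g. by a successful handshake with the server). When no network
    link is available, None is returned and data stay buffered on the
    probe until vehicle-based collection.
    """
    for channel in PREFERRED_ORDER:
        if channel in reachable:
            return channel
    return None

# Example: wireless and LAN are blocked by terrain, so the probe
# falls back to GSM.
print(select_channel({"GSM", "vehicle_pickup"}))  # → GSM
```

The key design point mirrored here is that each probe holds several channels and degrades gracefully rather than failing when one link is obstructed.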
Abstract: The measured data obtained from sensors in the continuous monitoring of civil structures are mainly used for modal identification and damage detection. Consequently, when modal identification analysis is carried out, the quality of mode identification strongly influences the damage detection results. It is also widely recognized that the usefulness of the measured data used for modal identification and damage detection is significantly influenced by the number and locations of the sensors. The objective of this study is the numerical implementation of two widely known optimum sensor placement methods in beam-like structures.
Abstract: Accurate Short Term Load Forecasting (STLF) is essential for a variety of decision-making processes. However, forecasting accuracy can drop due to the presence of uncertainty in the operation of energy systems or the unexpected behavior of exogenous variables. The Interval Type-2 Fuzzy Logic System (IT2 FLS), with its additional degrees of freedom, provides an excellent tool for handling uncertainties and improves the prediction accuracy. The training data used in this study cover the period from January 1 to February 1, 2012 for the winter season and from July 1 to August 1, 2012 for the summer season. The actual load forecasting period runs from January 22 to 28, 2012 for the winter model and from July 22 to 28, 2012 for the summer model. The real data are for the Iraqi power system and were obtained from the Ministry of Electricity.
Abstract: Machine learning comprises a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction based on models derived from existing data. The data can exhibit identifiable patterns that are used to classify them into groups. The result of the analysis is a pattern that can be used to identify a data set without needing the input data used to create that pattern. An important requirement in this process is careful data preparation, validation of the model used, and its suitable interpretation. For breeders, it is important to know the origin of animals from the point of view of genetic diversity. In the case of missing pedigree information, other methods can be used to trace an animal's origin. The genetic diversity recorded in genetic data holds relatively useful information for identifying animals originating from individual countries. We conclude that the application of data mining to molecular genetic data using supervised learning is an appropriate tool for hypothesis testing and for identifying an individual.
Abstract: This research aims to approximate the amount of daily rainfall using a pixel-value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE = 3.343.
Abstract: This research aimed to study the pattern of traffic distribution and the environmental factors of roads that affect traffic accidents in Dusit District, specifically the areas under the responsibility of Samsen Police Station. The data used in this analysis are secondary data on traffic accident cases from the year 2011. The observed area units are 15 traffic lines under the responsibility of Samsen Police Station. The techniques and methods used are the Cartographic Method, Correlation Analysis, and Multiple Regression Analysis. The results on the distribution of traffic accidents show that the Samsen Road area had the most traffic accidents (24.29%), followed by Rachvithi Road (18.10%), Sukhothai Road (15.71%), Rachasrima Road (12.38%), and Amnuaysongkram Road (7.62%). The results suggest that the scale of accidents has a high positive correlation, statistically significant at the 0.05 level, with the frequency of travel (r = 0.857); traffic intersection points (r = 0.763) and traffic control equipment (r = 0.713) are also relevant factors. According to the Multiple Regression Analysis, travel frequency is the only variable with a considerable influence on traffic accidents in the Samsen Police Station area of Dusit District, and it explains 73.40% of the variation in the scale of traffic accidents (R² = 0.734). The resulting multiple regression equation was Ŷ = -7.977 + 0.044X₆.
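The fitted equation reported in the abstract has the form of an ordinary least-squares regression, Ŷ = a + bX. As a minimal sketch of how such a coefficient pair and R² are obtained (using made-up illustrative numbers, not the Samsen Police Station accident records):

```python
import numpy as np

# Illustrative only: synthetic (travel-frequency, accident-scale) pairs,
# NOT the study's data.
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
y = np.array([1.0, 2.1, 2.9, 4.2, 5.0])

# Fit y = a + b*x by ordinary least squares.
A = np.column_stack([np.ones_like(x), x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination R^2, the statistic the abstract reports.
y_hat = a + b * x
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Y-hat = {a:.3f} + {b:.3f}X, R^2 = {r2:.3f}")
```

The study's equation and its R² = 0.734 would come out of exactly this kind of fit, with travel frequency as the single retained predictor X₆.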
Abstract: Many inherited diseases and non-hereditary disorders are common in the development of renal cystic diseases. Polycystic kidney disease (PKD) is a disorder in which groups of cysts filled with a water-like fluid develop within the kidneys. PKD is responsible for 5-10% of end-stage renal failure treated by dialysis or transplantation. New experimental models and the application of molecular biology techniques have provided new insights into the pathogenesis of PKD. Researchers are showing keen interest in developing automated systems that apply computer-aided techniques to the diagnosis of diseases. In this paper, a multilayered feed-forward neural network with one hidden layer is constructed, trained, and tested by applying the back-propagation learning rule for the diagnosis of PKD, based on physical symptoms and urinalysis results collected from individual patients. The data collected from 50 patients are used to train and test the network: 75% of the data are used for training and the remaining 25% for testing. The trained network is then applied to new samples, and its output classifies the patient as normal or abnormal.
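A one-hidden-layer feed-forward network trained with the back-propagation rule, as described above, can be sketched in a few lines of numpy. This is a hedged illustration only: the features, labels, layer width, and learning rate below are synthetic stand-ins, not the study's 50-patient urinalysis data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 4))                  # 4 hypothetical symptom/test features
y = (X.sum(axis=1) > 2.0).astype(float)[:, None]  # toy normal/abnormal label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 6 units (width is an arbitrary choice here).
W1, b1 = rng.normal(0.0, 0.5, (4, 6)), np.zeros(6)
W2, b2 = rng.normal(0.0, 0.5, (6, 1)), np.zeros(1)

lr, n = 1.0, len(X)
mse_start = None
for epoch in range(3000):
    h = sigmoid(X @ W1 + b1)             # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)           # forward pass, output layer
    if mse_start is None:
        mse_start = float(np.mean((out - y) ** 2))
    # Back-propagate the squared-error gradient (averaged over the batch).
    d_out = (out - y) * out * (1 - out) / n
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

mse_end = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
```

After training, the output unit's value thresholded at 0.5 plays the role of the paper's normal/abnormal decision for a new sample.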
Abstract: The Artificial Immune System (AIS) is a relatively young paradigm for intelligent computation. The inspiration for AIS is derived from the natural Immune System (IS). Classically, it is believed that the IS strives to discriminate between self and non-self, and most existing AIS research is based on this approach. Danger Theory (DT) argues against this approach and proposes that the IS fights danger-producing elements and tolerates others. We, as computational researchers, are not concerned with the arguments among immunologists but try to extract from them novel abstractions for intelligent computation. This paper aims to follow the DT inspiration for intelligent data processing, an approach that may open a new avenue in intelligent processing. The data used are system call data, which are potentially significant in intrusion detection applications.
Abstract: Continuous measurements and multivariate methods are applied to research the effects of energy consumption on indoor air quality (IAQ) in a Finnish one-family house. The measured data used in this study were collected continuously in a house in Kuopio, Eastern Finland, over a fourteen-month period. The consumption parameters measured were district heat, electricity, and water consumption. The indoor parameters gathered were temperature, relative humidity (RH), the concentrations of carbon dioxide (CO2) and carbon monoxide (CO), and differential air pressure. In this study, the self-organizing map (SOM) and Sammon's mapping were applied to resolve the effects of energy consumption on indoor air quality. The SOM proved a suitable method, with the ability to summarize multivariable dependencies into an easily observable two-dimensional map. In addition, Sammon's mapping was used to cluster the pre-processed data to find similarities between the variables, expressing distances and groups in the data. The methods used were able to distinguish 7 different clusters characterizing indoor air quality and energy efficiency in the study house. The results indicate that the cost in euros of heating and electricity energy varies according to the differential pressure, the concentration of carbon dioxide, the temperature, and the season.
Abstract: A Decision Support System/Expert System for stock portfolio selection is presented in which, as a first step, both technical and fundamental data are used to estimate technical and fundamental return and risk (1st phase); the estimated values are then aggregated with the investor's preferences (2nd phase) to produce a convenient stock portfolio.
In the 1st phase, there are two expert systems, each responsible for either technical or fundamental estimation. In the technical expert system, twenty-seven candidate variables are identified for each stock, and the effective variables are selected using a rough-sets-based clustering method (RC). Next, for each stock, two fuzzy rule-bases are developed with the fuzzy C-means method and the Takagi-Sugeno-Kang (TSK) approach: one for return estimation and the other for risk. Thereafter, the parameters of the rule-bases are tuned with the back-propagation method. In parallel, for the fundamental expert system, fuzzy rule-bases have been identified in the form of "IF-THEN" rules through brainstorming with stock market experts, with the input data derived from financial statements; as a result, two fuzzy rule-bases have been generated for all the stocks, one for return and the other for risk.
In the 2nd phase, user preferences are represented by four criteria obtained by questionnaire. Using an expert system, the four estimated values of return and risk are aggregated with the respective values of user preference. Finally, a fuzzy rule-base with four rules treats these values and produces a ranking score for each stock, which leads to a satisfactory portfolio for the user.
The stocks of six manufacturing companies and the period 2003-2006 were selected for data gathering.
Abstract: Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to tag SNP problems, and the K-nearest neighbor (K-NN) serves as the prediction method for tag SNP selection. The experimental data used were taken from the HapMap project and consist of genotype rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature, while the number of tag SNPs identified was smaller than in the other methods. The run time of the proposed method was also much shorter than that of the SVM/STSA method when the same accuracy was reached.
Abstract: The protein residue contact map is a compact representation of the secondary structure of a protein. Owing to the information held in the contact map, it has drawn the attention of researchers in related fields, and plenty of work has been done over the past decade. Artificial intelligence approaches have been widely adopted in related work, including neural networks, genetic programming, and Hidden Markov models, as well as support vector machines. However, prediction performance has not generalized well, probably because it depends on the data used to train and generate the prediction model. This situation shows the importance of the features, or information, used in determining prediction performance. In this research, a support vector machine was used to predict protein residue contact maps on different combinations of features in order to show and analyze the effectiveness of the features.
Abstract: High Strength Concrete (HSC) is defined as concrete that meets a special combination of performance and uniformity requirements that cannot be achieved routinely using conventional constituents and normal mixing, placing, and curing procedures. It is a highly complex material, which makes modeling its behavior a very difficult task. This paper aims to show the possible applicability of Neural Networks (NN) to predicting the slump of High Strength Concrete (HSC). Neural Network models are constructed, trained, and tested using the available test data of 349 different HSC mix designs gathered from a particular Ready Mix Concrete (RMC) batching plant. The most versatile Neural Network model is selected to predict the slump of the concrete. The data used in the Neural Network models are arranged in a format of eight input parameters covering cement, fly ash, sand, coarse aggregate (10 mm), coarse aggregate (20 mm), water, super-plasticizer, and the water/binder ratio. Furthermore, to test the accuracy of the slump prediction, the final selected model is used on the data of 40 different HSC mix designs taken from another batching plant. The results are compared on the basis of an error (or performance) function.
Abstract: The nature of consumer products makes future demand difficult to forecast, and the accuracy of the forecasts significantly affects the overall performance of the supply chain system. In this study, two data mining methods, the artificial neural network (ANN) and the support vector machine (SVM), were utilized to predict the demand for consumer products. The training data used were the actual demand for six different products from a consumer product company in Thailand. The results indicated that the SVM had better forecast quality (in terms of MAPE) than the ANN in every category of products. Moreover, another important finding was that the difference in MAPE between the two methods was significantly larger when the data were highly correlated.
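MAPE, the comparison metric named in the abstract, is straightforward to compute. A minimal sketch (the demand series and both forecasts below are illustrative numbers, not the Thai company's data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

# Hypothetical demand series and two competing forecasts.
actual = [100, 120, 90, 110]
svm_forecast = [102, 118, 92, 108]   # stand-in for an SVM's output
ann_forecast = [110, 105, 100, 120]  # stand-in for an ANN's output

# The method with the smaller MAPE gives the better forecast quality.
print(mape(actual, svm_forecast))
print(mape(actual, ann_forecast))
```

Because MAPE normalizes each error by the actual demand, it allows the fair cross-product comparison the study relies on, even when products differ greatly in volume.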
Abstract: Large scale climate signals and their teleconnections can influence hydro-meteorological variables on a local scale. Several extreme flow and timing measures, including high flow and low flow measures, from 62 hydrometric stations in Canada are investigated to detect possible linkages with several large scale climate indices. The streamflow data used in this study are derived from the Canadian Reference Hydrometric Basin Network and are characterized by relatively pristine and stable land-use conditions with a minimum of 40 years of record. A composite analysis approach was used to identify linkages between the extreme flow and timing measures and the climate indices. The approach involves determining the 10 highest and 10 lowest values of the various climate indices from the data record. The extreme flow and timing measures for each station were then examined for the years associated with the 10 largest and the 10 smallest index values. In each case, a re-sampling approach was applied to determine whether the 10 values of the extreme flow measures differed significantly from the series mean. Results indicate that several stations are affected by the large scale climate indices considered in this study. The results also allow the determination of any relationship between stations that exhibit a statistically significant trend and stations for which the extreme measures exhibit a linkage with the climate indices.
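The re-sampling step in a composite analysis of this kind can be sketched as a simple permutation test: draw many random 10-year subsets and ask how often a random subset's mean deviates from the series mean as much as the composite mean does. The flow series and year indices below are synthetic placeholders, not the hydrometric records.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for one station's annual high-flow series (60 years).
flows = rng.normal(100.0, 15.0, 60)

# Hypothetical: the 10 years tied to the largest climate-index values.
composite_years = np.arange(10)
composite_mean = flows[composite_years].mean()

# Re-sampling: the mean of 10 randomly chosen years, repeated many times.
n_resamples = 5000
random_means = np.array([
    rng.choice(flows, size=10, replace=False).mean()
    for _ in range(n_resamples)
])

# Two-sided p-value: how often a random 10-year mean deviates from the
# series mean at least as much as the composite mean does.
dev = abs(composite_mean - flows.mean())
p = float(np.mean(np.abs(random_means - flows.mean()) >= dev))
```

A small p indicates that the extreme-index years carry flows that are unusually high or low relative to the station's long-term behavior, i.e. a linkage between the climate index and the flow measure.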
Abstract: The heterogeneity of solid waste characteristics, as well as the complex processes taking place within the landfill ecosystem, motivated the implementation of soft computing methodologies such as artificial neural networks (ANN), fuzzy logic (FL), and their combination. The present work uses a hybrid ANN-FL model that employs knowledge-based FL to describe the process qualitatively and implements the learning algorithm of ANN to optimize the model parameters. The model was developed to simulate and predict landfill gas production at a given time based on operational parameters. The experimental data used were compiled from a lab-scale experiment that involved various operating scenarios. The developed model was validated and statistically analyzed using an F-test, linear regression between actual and predicted data, and mean squared error measures. Overall, the simulated landfill gas production rates demonstrated reasonable agreement with the actual data. The discussion focuses on the effect of the size of the training datasets and the number of training epochs.