WIRELESS LAN


A wireless LAN is a necessary complement to the traditional wired LAN, meeting requirements for mobility, relocation, ad hoc networking, and coverage of locations that are difficult to wire. This chapter provides a survey of wireless LANs.

• The key technologies used for wireless LANs are infrared, spread spectrum, and narrowband microwave.
• The IEEE 802.11 standard defines a set of services and physical-layer options for wireless LANs.
• IEEE 802.11 services include managing associations, delivering data, and security.
• The IEEE 802.11 physical layer includes infrared and spread spectrum options and covers a range of data rates.

Wireless LANs are generally categorized according to the transmission technique used. All current wireless LAN products fall into one of the following categories:
• Infrared (IR) LANs: an individual cell of an IR LAN is limited to a single room, because infrared light does not penetrate opaque walls.
• Spread spectrum LANs: this type of LAN makes use of spread spectrum transmission technology. In most cases, these LANs operate in the ISM (industrial, scientific, and medical) microwave bands, so that no Federal Communications Commission (FCC) licensing is required for their use in the United States.

LAN Cables (Straight and Cross)



A Local Area Network (LAN) cable has two standard wiring schemes commonly used to connect network components. The two types you should know are:
1. Straight (straight-through)
2. Cross (crossover)

Straight

End 1
white with orange stripe
orange
white with green stripe
blue
white with blue stripe
green
white with brown stripe
brown

End 2
white with orange stripe
orange
white with green stripe
blue
white with blue stripe
green
white with brown stripe
brown

In a straight-through cable, end 1 and end 2 are wired the same.


Cross

End 1
white with orange stripe
orange
white with green stripe
blue
white with blue stripe
green
white with brown stripe
brown

End 2
white with green stripe
green
white with orange stripe
blue
white with blue stripe
orange
white with brown stripe
brown

In a crossover cable, end 1 and end 2 are wired differently.
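
For reference, here is a small sketch (not from the original post) that writes the two pinouts above out as data and prints which pins differ between the two ends. The wire orders listed above correspond to the common T568B and T568A color schemes, which the post does not name explicitly.

```python
# Straight-through vs crossover pinouts, written out as data.
# End 1 uses the T568B order; a crossover cable uses T568A on end 2.
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]

def show(name, end1, end2):
    print(name)
    for pin, (a, b) in enumerate(zip(end1, end2), start=1):
        note = "" if a == b else "   <- swapped"
        print(f"  pin {pin}: {a:13s} | {b:13s}{note}")

show("Straight-through (end 1 vs end 2)", T568B, T568B)
show("Crossover (end 1 vs end 2)", T568B, T568A)
```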


Hope this is useful.

Definition of an Expert System



Expert Systems
Expert systems were among the first applications to emerge from early research in artificial intelligence, and explanation and reasoning in expert systems were among the first applications of natural language generation. This is because reasoning over a knowledge base must be carried out relatively directly. Universal explanation has nonetheless come to be recognized as a desirable piece of functionality in expert systems (Arhami, Muhammad, 2005).
Fundamentally, expert systems are applied to support problem-solving activities. These problem-solving activities include:
a. Interpretation
drawing conclusions or descriptions from a collection of raw data.
b. Prediction
projecting the likely consequences of given situations.
c. Diagnosis
determining the cause of malfunctions in complex situations based on observed symptoms.
d. Design
determining a configuration of system components that meets particular performance goals while satisfying particular constraints.
e. Planning
planning a series of actions that will achieve a set of goals given certain initial conditions.
f. Debugging and Repair
determining and interpreting ways to remedy malfunctions.
g. Instruction
detecting and correcting deficiencies in the understanding of a subject domain.
h. Control
governing the behaviour of a complex environment.
i. Selection
identifying the best choice from a list of possibilities.
j. Simulation
modelling the interaction between system components.
k. Monitoring
comparing observations with the conditions that are expected.

With an expert system, users can easily obtain high-quality information, much as they would obtain it from experts in the field. An expert system can also support the work of the experts themselves, acting as an assistant that holds the required knowledge.

Basic concept and function of an expert system
According to Arhami Muhammad (2005), the basic concept of an expert system is its knowledge base. The user supplies facts or information to the expert system and receives expert advice or an expert answer in return. Internally, an expert system consists of two main components: the knowledge base, which contains the knowledge, and the inference engine, which draws the conclusions.
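
To make the two components concrete, here is a minimal, hypothetical sketch (not from the cited source): a knowledge base of if-then rules and a forward-chaining inference engine that fires rules until no new facts can be derived. The rules and facts are invented purely for illustration.

```python
# A toy expert system: knowledge base (rules) + inference engine (forward chaining).
# The facts and rules below are made up for illustration only.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_a_doctor"),
]

def infer(facts, rules):
    """Repeatedly fire every rule whose conditions are all present in the facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The user supplies facts; the inference engine returns the derived conclusions.
print(infer({"fever", "cough", "short_of_breath"}, RULES))
```
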
=================== HOPE THIS IS USEFUL ===================

Naive Bayesian Classifiers for Ranking


Naive Bayesian Classifiers for Ranking
Harry Zhang and Jiang Su
Faculty of Computer Science, University of New Brunswick
P.O. Box 4400, Fredericton, NB, Canada E3B 5A3
hzhang@csd.uwo.ca,
WWW home page: http://www.cs.unb.ca/profs/hzhang/

Abstract. It is well known that naive Bayes performs surprisingly well in classification, but its probability estimation is poor. In many applications, however, a ranking based on class probabilities is desired. For example, a ranking of customers in terms of the likelihood that they buy one's products is useful in direct marketing. What is the general performance of naive Bayes in ranking? In this paper, we study it by both empirical experiments and theoretical analysis. Our experiments show that naive Bayes outperforms C4.4, the state-of-the-art decision-tree algorithm for ranking. We study two example problems that have been used in analyzing the performance of naive Bayes in classification [3]. Surprisingly, naive Bayes performs perfectly on them in ranking, even though it does not in classification. Finally, we present and prove a sufficient condition for the optimality of naive Bayes in ranking.

1 Introduction

Naive Bayes is one of the most effective and efficient classification algorithms. In classification learning problems, a learner attempts to construct a classifier from a given set of training examples with class labels. Assume that A_1, A_2, ..., A_n are n attributes. An example E is represented by a vector (a_1, a_2, ..., a_n), where a_i is the value of A_i. Let C represent the class variable, which takes the value + (the positive class) or - (the negative class). We use c to represent the value that C takes. A naive Bayesian classifier, or simply naive Bayes, is defined as:

C_{nb}(E) = \arg\max_c \; p(c) \prod_{i=1}^{n} p(a_i \mid c).   (1)

Because the values of p(a_i|c) can be estimated from the training examples, naive Bayes is easy to construct. It is also, however, surprisingly effective [10]. Naive Bayes is based on the conditional independence assumption that all attributes are independent given the value of the class variable. It is obvious that the conditional independence assumption is rarely true in reality. Indeed, naive Bayes is found to work poorly for regression problems [7], and produces poor probability estimates [1].
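
As a concrete illustration of Equation 1, the following sketch (our own, not the authors' code) estimates p(c) and p(a_i|c) by frequency counts on a toy Boolean dataset and scores examples by p(c) * prod_i p(a_i|c); sorting by the positive-class score is exactly the probability-based ranking studied in the rest of the paper.

```python
# Naive Bayes from counts (Equation 1), used to rank examples by their
# positive-class score p(+) * prod_i p(a_i | +).
from collections import defaultdict

def train_nb(examples, labels):
    prior = defaultdict(float)
    cond = defaultdict(lambda: defaultdict(float))      # cond[c][(i, value)]
    for x, c in zip(examples, labels):
        prior[c] += 1
        for i, a in enumerate(x):
            cond[c][(i, a)] += 1
    for c in prior:
        for key in list(cond[c]):
            cond[c][key] /= prior[c]                     # p(a_i | c)
        prior[c] /= len(examples)                        # p(c)
    return prior, cond

def nb_score(x, c, prior, cond):
    score = prior[c]
    for i, a in enumerate(x):
        score *= cond[c].get((i, a), 0.0)
    return score

# Toy data: the class is the conjunction A1 AND A2.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = ["-", "-", "-", "+"]
prior, cond = train_nb(X, y)
ranking = sorted(X, key=lambda x: nb_score(x, "+", prior, cond), reverse=True)
print(ranking)    # the positive example (1, 1) comes first
```
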
Typically, the performance of a classifier is measured by its predictive accuracy (or error rate). Some classifiers, such as naive Bayes and decision trees, also produce estimates of the class probability p(c|E). This information is often ignored in classification, as long as the class with the highest class probability estimate is identical to the actual class. In many applications, however, classification and error rate are not enough. For example, a CS department needs a ranking of its students in terms of their performance in various aspects in order to award scholarships. Thus, a ranking is desired.

If a ranking is desired and only a dataset with class labels is given, the area under the ROC (Receiver Operating Characteristics) curve [18, 15], or simply AUC, can be used to evaluate the quality of the rankings generated by a classifier. AUC is a good "summary" for comparing two classifiers across the entire range of class distributions and error costs. Bradley [2] shows that AUC is a proper metric for the quality of classifiers averaged across all possible probability thresholds. It has been shown that, for binary classification, AUC is equivalent to the probability that a randomly chosen example of class - will have a smaller estimated probability of belonging to class + than a randomly chosen example of class + [9]. Thus, AUC is actually a measure of the quality of a ranking. The AUC of a ranking is 1 (the maximum AUC value) if no positive example precedes any negative example.
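
The probabilistic reading of AUC quoted above translates directly into a few lines of code. The sketch below is ours, with ties counted as one half, a common convention the paper does not spell out.

```python
# AUC as the probability that a random negative example gets a lower
# positive-class score than a random positive example (ties count 0.5).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == "+"]
    neg = [s for s, y in zip(scores, labels) if y == "-"]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Every positive ranked above every negative gives the maximum AUC of 1.
print(auc([0.9, 0.8, 0.3, 0.2], ["+", "+", "-", "-"]))   # 1.0
print(auc([0.6, 0.7, 0.8, 0.2], ["+", "-", "+", "-"]))   # 0.75
```
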
Some researchers believe that AUC is a better and more discriminating evaluation method than accuracy for classifiers that produce class probability estimates [11]. Since AUC is a different, probably better, evaluation method than accuracy for machine learning algorithms, the next natural question is: what is the performance of traditional learning algorithms, such as naive Bayes and decision trees, in terms of AUC?

It has been shown that traditional decision tree algorithms, such as C4.5, produce poor probability estimates, and thus produce poor probability-based rankings. Substantial work has been done on improving the ranking quality of decision tree algorithms (see the next section for details).

It is also well known that naive Bayes performs surprisingly well in classification, but has poor performance in probability estimation. What is its performance in ranking? In this paper, we argue that naive Bayes also works well in ranking.

The rest of the paper is organized as follows. Section 2 reviews related work on improving traditional learning algorithms to produce accurate rankings. Section 3 describes an empirical study showing that naive Bayes outperforms a sophisticated decision tree learning algorithm that has recently been developed for generating accurate rankings, which provides empirical evidence that naive Bayes performs well in ranking, just as in classification. Section 4 explores the theoretical reasons for the superb ranking performance of naive Bayes. The paper concludes with a summary of our work and a discussion.


2 Related Work

The ranking addressed in this paper is based on the class probabilities of examples. If a learning algorithm produces accurate class probability estimates, it certainly produces an accurate ranking. The opposite is not true, however. For example, assume that E_+ and E_- are a positive and a negative example respectively, and that the actual class probabilities are p(+|E_+) = 0.9 and p(+|E_-) = 0.4. An algorithm that gives the class probability estimates p̂(+|E_+) = 0.5 and p̂(+|E_-) = 0.45 gives the correct order of E_+ and E_- in the ranking, although the probability estimates are poor. In the ranking problem, an algorithm tolerates errors in the probability estimates to some extent, similarly to classification. Recall that a classification algorithm gives the correct classification for an example as long as the class with the maximum posterior probability estimate is identical to the actual class.

Naive Bayes is easy to construct and has surprisingly good performance in classification, even though the conditional independence assumption is rarely true in real-world applications. On the other hand, naive Bayes is found to produce poor probability estimates [3]. Some work has been published on improving its probability estimates. Zadrozny and Elkan [19] propose using a histogram method to calibrate probability estimation. A more effective and straightforward way to improve naive Bayes is to extend its structure to represent dependencies among attributes [8]. Most of the extensions, however, aim at improving predictive accuracy, not at better probability estimation or ranking. Lachiche and Flach present a method that uses AUC to find an optimal threshold for naive Bayes, and thus improves its classification accuracy [6]. An interesting question is: what is the performance of naive Bayes in terms of ranking (AUC)?

Decision tree learning algorithms are among the simplest and most effective learning algorithms, widely used in many applications. Traditional decision tree learning algorithms, such as C4.5, are error-based, and also produce probability estimates. In decision trees, the class probability p(c|E) of an example E is the fraction of the examples of class c in the leaf that E falls into. How to build decision trees with accurate probability estimates is an interesting question. Unfortunately, traditional decision tree algorithms, such as C4.5, have been observed to produce poor estimates of probabilities [14, 16]. According to Provost and Domingos [17], however, the decision tree representation is not (inherently) doomed to produce poor probability estimates; part of the problem is that modern decision tree algorithms are biased against building trees with accurate probability estimates. They propose two techniques to improve the AUC of C4.5: smoothing probability estimates with the Laplace correction, and turning off pruning. The resulting algorithm is called C4.4 [17]. They compared C4.4 to C4.5 in empirical experiments and found that C4.4 is a significant improvement over C4.5 with regard to AUC.

Ling and Yan proposed a method to calibrate the probability estimates generated by C4.5 [12]. Their method does not determine the class probability of an example E solely by the leaf into which it falls; instead, each leaf in the tree contributes to the probability estimate. Ferri, Flach and Hernández-Orallo present a novel algorithm for learning decision trees that is based on AUC rather than entropy. The resulting decision trees have better AUC without sacrificing accuracy [5].

To our knowledge, however, there has been no systematic study of the performance of naive Bayes with respect to ranking, measured by AUC. Through such a study, we find that naive Bayes actually performs well in ranking, just as it does in classification. In this paper, we present empirical experiments and a theoretical analysis supporting this observation.


3 Comparison between Naive Bayes and Decision Trees

In this section, we present an empirical comparison between naive Bayes and C4.4, and give some explanation of the experimental results.

3.1 Experiments

We conduct experiments to compare naive Bayes with C4.4, using AUC as the evaluation criterion. We use 15 datasets from the UCI repository [13], shown in Table 1. In our experiments, the average AUC has been obtained for both C4.4 and naive Bayes using 10-fold stratified cross validation. C4.4 has been implemented in Weka [20] and compared to the existing Weka implementation of naive Bayes. Since the Laplace correction is used in C4.4 and significantly improves its AUC [17], we also use it in naive Bayes. The experimental results are shown in Table 2.
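
The protocol just described (10-fold stratified cross validation with AUC as the metric) can be approximated with off-the-shelf tools. The sketch below is only a rough stand-in for the paper's setup: the authors used the Weka implementations of C4.4 and naive Bayes on 15 UCI datasets, whereas here scikit-learn's GaussianNB and an unpruned DecisionTreeClassifier play the two roles on the bundled Wisconsin breast cancer data.

```python
# 10-fold stratified cross validation, AUC as the metric: naive Bayes vs a
# decision tree.  This approximates the paper's Weka-based setup; it is not
# a reproduction of it (no C4.4, different dataset version).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, model in [("naive Bayes", GaussianNB()),
                    ("unpruned tree", DecisionTreeClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.4f} +/- {scores.std():.4f}")
```
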
Table 1. Description of the datasets used in the experiments.

Dataset                    Size   Num. of attributes   Missing values
Breast cancer               286           9                 Yes
Vote                        435          16                 Yes
Chess                      3196          36                 No
Mushroom                   8124          22                 Yes
Horse Colic                 368          28                 Yes
Wisconsin-breast-cancer     699           9                 Yes
Credit Approval             690          15                 Yes
German Credit              1000          24                 No
Pima Indians Diabetes       768           8                 No
Heart-statlog               270          13                 No
Hepatitis Domain            155          19                 Yes
Ionosphere                  351          34                 No
Labor                        57          16                 No
Sick                       3772          30                 Yes
Sonar                       208          61                 No


We conduct a paired two-tailed t-test with a 95% confidence level to see whether one algorithm is better than the other. Figures in Table 2 are indicated in boldface whenever the observed difference in AUC between naive Bayes and C4.4 is significant. We can see that naive Bayes outperforms C4.4 on 8 datasets, ties on 3 datasets and loses on 4 datasets, and that the average AUC of naive Bayes is 90.36%, substantially higher than the 85.25% average of C4.4. Considering that C4.4 is the state-of-the-art decision tree algorithm specifically designed for high AUC, we believe this presents evidence that naive Bayes has some advantage over decision trees in producing better rankings.

Table 2. Experimental results on AUC.

Dataset                    C4.4             Naive Bayes
Breast cancer              59.42 ± 10.94    70.43 ± 15.94
Vote                      100.00 ± 0.00     95.26 ± 1.10
Chess End-Game            100.00 ± 0.00    100.00 ± 0.00
Mushroom                   98.13 ± 2.19     97.97 ± 2.01
Wisconsin-breast-cancer    98.33 ± 2.29     99.57 ± 1.45
Credit Approval            88.47 ± 4.39     92.43 ± 3.26
German Credit              69.88 ± 5.83     79.63 ± 5.48
Pima Indians Diabetes      73.76 ± 5.74     82.43 ± 5.29
Heart-statlog              82.82 ± 9.84     91.36 ± 4.39
Hepatitis Domain           82.42 ± 11.84    89.23 ± 9.94
Ionosphere                 92.34 ± 4.65     94.95 ± 3.94
Horse Colic                86.38 ± 8.82     84.23 ± 6.85
Labor                      70.67 ± 28.18    95.73 ± 16.93
Sick                       99.84 ± 1.12     96.23 ± 2.18
Sonar                      76.24 ± 9.94     85.95 ± 11.01
Average                    85.25            90.36


3.2 Comparing Naive Bayes with Decision Trees in Representational Capacity

The experiments in the preceding section indicate that naive Bayes has some advantage over the decision tree algorithm C4.4. What are the reasons behind these experimental results? In this section, we give some intuitive explanation; we analyze the ranking performance of naive Bayes theoretically in Section 4.

In decision trees, the class probability of an example is estimated by the proportion of examples of that class in the leaf into which the example falls. Thus, all examples in the same leaf have the same probability and will be ranked randomly. This substantially weakens the capacity of decision trees to represent accurate rankings, because two contradictory factors are at play at the same time. On one hand, decision tree algorithms, such as ID3 and C4.5, tend to build small decision trees. This results in more examples per leaf and thus more reliable probability estimates at the leaves. However, smaller trees imply a smaller number of leaves, so more examples will have the same class probability. This limits the discriminating power of the tree to rank examples. On the other hand, if the tree is large, not only may the tree overfit the data, but the number of examples in each leaf becomes small, and thus the probability estimates would not be accurate. This would also produce bad rankings.

Let us assume that all attributes and the class variable are Boolean, and that we have n attributes. Then, for a given decision tree T, each leaf represents only one class probability p(C = +|E) (with p(C = -|E) = 1 - p(C = +|E)). Assume that T has L leaves; then the maximum number of possible distinct class probabilities is L. A full decision tree, in which each attribute occurs once on each path from the root to a leaf, can represent at most 2^n distinct class probabilities. Obviously, such full decision trees are rarely meaningful, since decision tree algorithms tend to construct small trees, and the number of training examples is normally much less than 2^n. Therefore, in reality, L is much less than 2^n. In a small decision tree, however, the number of distinct class probabilities that it can represent, i.e., the number of its leaves, is also small. Thus, it is very possible for many examples to have the same class probability. This is an obvious disadvantage for generating an accurate probability-based ranking, and it is why Provost and Domingos [17] recommend turning off pruning for better ranking.

That contradiction does not exist in naive Bayes, which calculates the class probability p(c|E) from p(a_i|c), as shown in Equation 1, where a_i is the value of attribute A_i of example E. Although naive Bayes has only 2n + 1 parameters, the number of possible different class probabilities can be as many as 2^n. Therefore, intuitively speaking, naive Bayes has an advantage over decision trees in its capacity for representing different class probabilities.
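
The counting argument above is easy to check numerically. In the sketch below (ours), naive Bayes with randomly chosen parameters assigns a distinct positive-class probability to essentially every one of the 2^n Boolean examples, whereas a tree with L leaves could never produce more than L distinct values.

```python
# Count how many distinct class probabilities naive Bayes produces over all
# 2^n Boolean examples, using arbitrary (random) parameters for illustration.
import itertools
import random

random.seed(0)
n = 10
p_pos = 0.3
p1_pos = [random.uniform(0.1, 0.9) for _ in range(n)]   # p(A_i = 1 | +)
p1_neg = [random.uniform(0.1, 0.9) for _ in range(n)]   # p(A_i = 1 | -)

def nb_posterior(x):
    num, den = p_pos, 1.0 - p_pos
    for i, a in enumerate(x):
        num *= p1_pos[i] if a else 1 - p1_pos[i]
        den *= p1_neg[i] if a else 1 - p1_neg[i]
    return num / (num + den)

scores = {nb_posterior(x) for x in itertools.product([0, 1], repeat=n)}
print(f"{len(scores)} distinct probabilities out of {2 ** n} examples")
```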


4 Theoretical Analysis of the Performance of Naive Bayes in Ranking

Although naive Bayes performs well in classification, its learnability is quite limited. In the binary domain, it can learn only linearly separable functions [4]. Moreover, it cannot learn even all of the linearly separable functions. For example, Domingos and Pazzani [3] discover that several specific linear functions are not learnable by naive Bayes, such as conjunctive concepts and m-of-n concepts. In other words, naive Bayes is not optimal in learning those concepts. We find, however, that naive Bayes is optimal in ranking on both conjunctive concepts and m-of-n concepts. Here, optimality in ranking is defined as follows.

Definition 1. A classifier is called locally optimal on an example E in ranking if
1. E is a positive example and there is no negative example ranked after E; or
2. E is a negative example and there is no positive example ranked before E.

Definition 2. A classifier is called globally optimal in ranking if it is locally optimal on all the examples in the example space of a given problem.

When a classifier is globally optimal, the AUC of the ranking it produces is always 1.

4.1 Conjunctive Concepts

A conjunctive concept is a conjunction of n literals L_i, where a literal is a Boolean attribute or its negation. It has been shown that naive Bayes, as a classifier, is optimal in learning conjunctive concepts if examples are uniformly distributed and the training set includes all the 2^n possible examples [3]. Let + and - denote the class of C = 1 (true) and the class of C = 0 (false), respectively. In the training set, only the one example with L_1 = L_2 = ... = L_n = 1 is in class +. Thus

p(+) = \frac{1}{2^n}, \quad p(-) = \frac{2^n - 1}{2^n}, \quad p(L_i \mid +) = 1, \quad p(\bar{L}_i \mid +) = 0, \quad p(L_i \mid -) = \frac{2^{n-1} - 1}{2^n - 1}, \quad p(\bar{L}_i \mid -) = \frac{2^{n-1}}{2^n - 1}.

Assume that E is an arbitrary example and that m of its conjunction literals are true. Then the class probability estimates given by naive Bayes are

p_{nb}(+ \mid E) = p(+) \, p^m(L_i \mid +) \, p^{n-m}(\bar{L}_i \mid +) = \begin{cases} \frac{1}{2^n} & \text{if } m = n \\ 0 & \text{otherwise} \end{cases}   (2)

and

p_{nb}(- \mid E) = p(-) \, p^m(L_i \mid -) \, p^{n-m}(\bar{L}_i \mid -) = \frac{2^n - 1}{2^n} \left( \frac{2^{n-1} - 1}{2^n - 1} \right)^m \left( \frac{2^{n-1}}{2^n - 1} \right)^{n-m}.   (3)

It is easy to show that naive Bayes gives the correct classification for all examples. Let us consider the ranking produced by naive Bayes. For a positive example E_+ we have m = n, and the probability p_{nb}(+|E_+) is 1/2^n. For any negative example E_-, m < n, and p_{nb}(+|E_-) = 0 < 1/2^n = p_{nb}(+|E_+). That means that naive Bayes never ranks a positive example before a negative example in the class-probability-based ranking. Naive Bayes is therefore optimal for conjunctive concepts under the uniform distribution.

If the assumption that examples are uniformly distributed is removed, naive Bayes still gives the correct classification for all the examples in class -, given a sufficient training set. However, for a positive example (m = n), the result depends on the class distribution. If p(+) < 1/2^n, it is possible that naive Bayes will fail to assign the correct class to a positive example. That means that naive Bayes is not optimal in classification if the example distribution is not uniform. However, no matter what the value of p(+) is, p_{nb}(+|E_-) = 0 and p_{nb}(+|E_+) = p(+) > 0. Therefore, naive Bayes is still optimal for conjunctive concepts in ranking, as stated in the theorem below.

Theorem 1. Naive Bayes is globally optimal in ranking on conjunctive concepts.
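
Theorem 1 is easy to verify numerically for a small n. The self-contained sketch below (ours) trains naive Bayes by frequency counts on the full truth table of a conjunction, as in the uniform-distribution setting above, and checks that every positive example receives a higher positive-class score than every negative example, i.e. that the ranking has AUC 1.

```python
# Check Theorem 1 on the conjunction L1 AND ... AND Ln over the full truth table.
import itertools

n = 5
examples = list(itertools.product([0, 1], repeat=n))
labels = [int(all(x)) for x in examples]

pos = [x for x, y in zip(examples, labels) if y == 1]
prior_pos = len(pos) / len(examples)                       # p(+) = 1 / 2^n
q = [sum(x[i] for x in pos) / len(pos) for i in range(n)]  # p(L_i = 1 | +) = 1

def nb_pos_score(x):
    s = prior_pos
    for i, a in enumerate(x):
        s *= q[i] if a else 1 - q[i]
    return s

pos_scores = [nb_pos_score(x) for x, y in zip(examples, labels) if y == 1]
neg_scores = [nb_pos_score(x) for x, y in zip(examples, labels) if y == 0]
print(all(p > v for p in pos_scores for v in neg_scores))  # True -> AUC is 1
```
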
4.2 m-of-n Concepts

An m-of-n concept is a Boolean function that is true if m or more out of its n Boolean attributes are true. Clearly, it is a linearly separable function. Domingos and Pazzani [3] show that for the concept 8-of-25, when the input Boolean attributes have just six or seven 1s, naive Bayes gives the incorrect answer 1 (instead of 0).

Their result is based on two assumptions: (1) the sample consists of all 2^25 examples of the 8-of-25 function, or equivalently the distribution is uniform; (2) the threshold for classification is 0.5, that is, an example E belongs to class + if and only if p(+|E) \geq 0.5. The corresponding probabilities can then be obtained explicitly [3]:

p(+) = \frac{\sum_{j=m}^{n} \binom{n}{j}}{2^n}, \qquad
p(-) = \frac{\sum_{j=0}^{m-1} \binom{n}{j}}{2^n},

p(A_i = 1 \mid +) = \frac{\sum_{j=m-1}^{n-1} \binom{n-1}{j}}{\sum_{j=m}^{n} \binom{n}{j}}, \qquad
p(A_i = 1 \mid -) = \frac{\sum_{j=0}^{m-2} \binom{n-1}{j}}{\sum_{j=0}^{m-1} \binom{n}{j}}.

Let q denote p(A_i = 1|+); obviously, q > 0.5. The class probability estimate produced by naive Bayes, denoted by p_{nb}(+|E), is

p_{nb}(+ \mid E) = p(+) \, q^{i} (1 - q)^{n-i},

where i is the number of attributes with value 1.

Now let us consider the ranking performance of naive Bayes on m-of-n concepts. Assume that E_+ is a positive example with k_1 attributes of value 1, and that E_- is a negative example with k_2 attributes of value 1. Obviously, k_1 \geq m > k_2. Then we have

p_{nb}(+ \mid E_+) - p_{nb}(+ \mid E_-) = p(+) \, q^{k_2} (1 - q)^{n - k_1} \left( q^{k_1 - k_2} - (1 - q)^{k_1 - k_2} \right).   (4)

Since q > 0.5 and k_1 > k_2, Equation 4 is always positive. Thus, for m-of-n concepts, the class probability of a positive example is always greater than the class probability of a negative example under naive Bayes. Therefore, the ranking generated by naive Bayes is optimal, as shown in the following theorem.

Theorem 2. Naive Bayes is globally optimal in ranking on m-of-n concepts.
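
A companion check of Theorem 2, in the same style as the previous sketch (ours, on a deliberately tiny 3-of-7 concept so the full truth table can be enumerated): after estimating the naive Bayes parameters from all 2^7 examples, every positive example still outranks every negative one.

```python
# Check Theorem 2 on a small m-of-n concept (3-of-7) over the full truth table.
import itertools

n, m = 7, 3
examples = list(itertools.product([0, 1], repeat=n))
labels = [int(sum(x) >= m) for x in examples]

pos = [x for x, y in zip(examples, labels) if y == 1]
prior_pos = len(pos) / len(examples)                       # p(+)
q = [sum(x[i] for x in pos) / len(pos) for i in range(n)]  # p(A_i = 1 | +) > 0.5

def nb_pos_score(x):
    s = prior_pos
    for i, a in enumerate(x):
        s *= q[i] if a else 1 - q[i]
    return s

pos_scores = [nb_pos_score(x) for x, y in zip(examples, labels) if y == 1]
neg_scores = [nb_pos_score(x) for x, y in zip(examples, labels) if y == 0]
print(all(p > v for p in pos_scores for v in neg_scores))  # True -> AUC is 1
```
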
4.3 General Optimality of Naive Bayes

The two example problems in the preceding sections are quite surprising, since it has been known that, as a classifier, naive Bayes cannot learn all m-of-n concepts under the uniform distribution and cannot learn all conjunctive concepts under some non-uniform distributions. The rankings generated by naive Bayes, however, are optimal in both problems. This provides us with evidence that naive Bayes performs well in ranking, on some problems even better than in classification.

In the following discussion, we assume that the prior probabilities p(E) of all examples E are equal. Since p(+|E) = p(+)p(E|+)/p(E), the ranking is then determined by p(E|+).

Now let us consider the general case. Assume that E_+ is a positive example and E_- is a negative example; thus p(E_+|+) > p(E_-|+). Let p_{nb}(E_i|+), i = +, -, denote the probability estimates generated by naive Bayes, and let x and y denote the errors of the probability estimates on E_+ and E_- given by naive Bayes. That is,

x = p(E_+ \mid +) - p_{nb}(E_+ \mid +),
y = p(E_- \mid +) - p_{nb}(E_- \mid +).

Naive Bayes generates the correct order for E_+ and E_- if

p_{nb}(E_+ \mid +) > p_{nb}(E_- \mid +),

that is, if

y - x + \bigl( p(E_+ \mid +) - p(E_- \mid +) \bigr) > 0.   (5)

Assuming that x and y are uniformly distributed, we plot a figure in which x and y correspond to the horizontal and vertical axes respectively, as shown in Figure 1. The shaded area corresponds to the cases in which Equation 5 is true. Since p(E_+|+) > p(E_-|+), naive Bayes is optimal in more than half of the possible area. It is easy to calculate the size of the shaded area, denoted by A:

A = -\frac{1}{2} \bigl( (p(E_+ \mid +) - p(E_- \mid +)) - 2 \bigr)^2 + 4.   (6)

It is interesting to notice that the greater the difference between p(E_+|+) and p(E_-|+), the greater the chance that naive Bayes is optimal. For example, when p(E_+|+) - p(E_-|+) = 0.5, the probability that naive Bayes is optimal is 0.78125.

Now let us assume that all the dependences among attributes are complete. An attribute A_i is said to depend on A_j completely if A_i = A_j. If A_i = A_j and all other attributes are independent, the true probability p(E|+) for an example E = (a_1, a_2, ..., a_n) is

p(E \mid +) = p(a_i \mid +) \prod_{k \neq i, j} p(a_k \mid +).

The probability p_{nb}(E|+) given by naive Bayes is

p_{nb}(E \mid +) = p(a_i \mid +)^2 \prod_{k \neq i, j} p(a_k \mid +).

Fig. 1. The optimality of naive Bayes in the general case: d = p(E_-|+) - p(E_+|+), and the shaded area (the region above the line y = x + d) corresponds to the cases in which naive Bayes is optimal.

Given two examples E_+ = (a_1^+, a_2^+, ..., a_n^+) and E_- = (a_1^-, a_2^-, ..., a_n^-) belonging to the positive and negative class respectively, we have

p(E_+ \mid +) = p(a_i^+ \mid +) \prod_{k \neq i, j} p(a_k^+ \mid +) \;>\; p(E_- \mid +) = p(a_i^- \mid +) \prod_{k \neq i, j} p(a_k^- \mid +).

It is easy to show that, if p(a_i^+|+) \geq 0.5, then p_{nb}(E_+|+) > p_{nb}(E_-|+). Since E_+ is a positive example, it is reasonable to assume that p(a_i^+|+) \geq 0.5. We give a formal definition of this property of an attribute value.

Definition 3. A value a_i of attribute A_i is called indicative to class c if p(A_i = a_i \mid c) \geq p(A_i = \bar{a}_i \mid c), where \bar{a}_i is the other value of A_i.

For example, for the problem of m-of-n concepts, p(A_i = 1|+) > p(A_i = 0|+) for every attribute, so A_i = 1 is indicative to class +. If all the attribute values of an example are indicative, naive Bayes always gives the optimal ranking for it, as shown by the theorem below.

Theorem 3. Naive Bayes is optimal on an example E = (a_1, a_2, ..., a_n) in ranking if each attribute value of E is indicative to class +.

Proof. By induction on i, the number of pairs of attributes with complete dependence.

When i = 1, the claim is true by the preceding discussion. Assume that the claim is true when i = k; that is, if there are k complete dependences among the attributes and p(E_+|+) > p(E_-|+), then p_{nb}(E_+|+) > p_{nb}(E_-|+), where E_+ = (a_1^+, a_2^+, ..., a_n^+) and E_- = (a_1^-, a_2^-, ..., a_n^-) belong to the positive and negative class respectively. Consider i = k + 1, and assume that the new complete dependence is between A_{n-1} and A_n. Then p(E_+|+) > p(E_-|+). Since A_{n-1} = A_n,

p(E_+ \mid +) = p(E_+ - \{A_{n-1}\} \mid +) = p(a_1^+, ..., a_{n-2}^+, a_n^+ \mid +),
p(E_- \mid +) = p(E_- - \{A_{n-1}\} \mid +) = p(a_1^-, ..., a_{n-2}^-, a_n^- \mid +).

Since there are only k dependences among A_1, ..., A_{n-2}, A_n, by the induction hypothesis

p_{nb}(a_1^+, ..., a_{n-2}^+, a_n^+ \mid +) > p_{nb}(a_1^-, ..., a_{n-2}^-, a_n^- \mid +).

Thus we have

\prod_{i=1, i \neq n-1}^{n} p(a_i^+ \mid +) > \prod_{i=1, i \neq n-1}^{n} p(a_i^- \mid +).

Since all the attribute values of E are indicative, p(a_{n-1}^+ \mid +) > p(a_{n-1}^- \mid +). Then we have

\prod_{i=1}^{n} p(a_i^+ \mid +) > \prod_{i=1}^{n} p(a_i^- \mid +).

Therefore, p_{nb}(E_+|+) > p_{nb}(E_-|+).

Theorem 3 presents a sufficient condition for the local optimality of naive Bayes. Notice that even when all the attribute values of an example are indicative, it is possible that naive Bayes gives a wrong classification.

5 Conclusion

In this paper, we argue that naive Bayes performs well in ranking, just as it does in classification. We empirically compare naive Bayes with the state-of-the-art decision tree learning algorithm C4.4 in terms of ranking, measured by AUC, and our experiments show that naive Bayes has some advantage over C4.4. We investigate two example problems theoretically, conjunctive concepts and m-of-n concepts, which were used to analyze the classification performance of naive Bayes in [3]. Surprisingly, naive Bayes works perfectly on both problems with respect to ranking, although it does not perform perfectly in terms of classification. For more general cases, we propose a sufficient condition for the local optimality of naive Bayes in ranking.

Generally, the performance of naive Bayes in ranking is similar to its performance in classification, in the sense that both tolerate errors in the estimated class probabilities to some extent. It is interesting to ask which one tolerates error to a greater extent; our conjecture is that, for naive Bayes, it might be ranking.

References

1. Bennett, P. N.: Assessing the calibration of Naive Bayes' posterior estimates. Technical Report CMU-CS-00-155 (2000)
2. Bradley, A. P.: The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition 30 (1997) 1145-1159
3. Domingos, P., Pazzani, M.: Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier. Machine Learning 29 (1997) 103-130
4. Duda, R. O., Hart, P. E.: Pattern Classification and Scene Analysis. Wiley-Interscience (1973)
5. Ferri, C., Flach, P. A., Hernández-Orallo, J.: Learning Decision Trees Using the Area Under the ROC Curve. Proceedings of the 19th International Conference on Machine Learning. Morgan Kaufmann (2002) 139-146
6. Lachiche, N., Flach, P. A.: Improving Accuracy and Cost of Two-class and Multi-class Probabilistic Classifiers Using ROC Curves. Proceedings of the 20th International Conference on Machine Learning. Morgan Kaufmann (2003) 416-423
7. Frank, E., Trigg, L., Holmes, G., Witten, I. H.: Naive Bayes for Regression. Machine Learning 41(1) (2000) 5-15
8. Friedman, N., Geiger, D., Goldszmidt, M.: Bayesian Network Classifiers. Machine Learning 29 (1997) 103-130
9. Hand, D. J., Till, R. J.: A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning 45 (2001) 171-186
10. Kononenko, I.: Comparison of Inductive and Naive Bayesian Learning Approaches to Automatic Knowledge Acquisition. Current Trends in Knowledge Acquisition. IOS Press (1990)
11. Ling, C. X., Huang, J., Zhang, H.: AUC: a statistically consistent and more discriminating measure than accuracy. Proceedings of the International Joint Conference on Artificial Intelligence IJCAI-03. Morgan Kaufmann (2003) 329-341
12. Ling, C. X., Yan, R. J.: Decision Tree with Better Ranking. Proceedings of the 20th International Conference on Machine Learning. Morgan Kaufmann (2003) 480-487
13. Merz, C., Murphy, P., Aha, D.: UCI Repository of Machine Learning Databases. Dept. of ICS, University of California, Irvine (1997). http://www.ics.uci.edu/~mlearn/MLRepository.html
14. Pazzani, M., Merz, C., Murphy, P., Ali, K., Hume, T., Brunk, C.: Reducing misclassification costs. Proceedings of the 11th International Conference on Machine Learning. Morgan Kaufmann (1994) 217-225
15. Provost, F., Fawcett, T.: Analysis and visualization of classifier performance: comparison under imprecise class and cost distributions. Proceedings of the Third International Conference on Knowledge Discovery and Data Mining. AAAI Press (1997) 43-48
16. Provost, F., Fawcett, T., Kohavi, R.: The case against accuracy estimation for comparing induction algorithms. Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann (1998) 445-453
17. Provost, F. J., Domingos, P.: Tree Induction for Probability-Based Ranking. Machine Learning 52(3) (2003) 199-215
18. Swets, J.: Measuring the accuracy of diagnostic systems. Science 240 (1988) 1285-1293
19. Zadrozny, B., Elkan, C.: Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. Proceedings of the Eighteenth International Conference on Machine Learning. Morgan Kaufmann (2001) 609-616
20. Witten, I. H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann (2000)
