Week 2

Topics
Data Types
Data Repositories
Data Preprocessing

Present homework assignment #1
Prepare the one-page description of your group project topic
Prepare for presentation using slides

Due date: beginning of the lecture on Friday, February 11th.
Welcome to the Real World!

No quality data, no quality mining results!
Preprocessing is one of the most critical steps in a data mining process.
Data Cleaning

"Data cleaning is one of the biggest problems in data warehousing." (Ralph Kimball)
"Data cleaning is the number one problem in data warehousing." (DCI survey)
Incomplete, noisy, and inconsistent data are commonplace properties of large real-world databases. (p. 48)
There are many possible reasons for noisy data. (p. 48)
Missing values
Fill in missing values
Noisy data (incorrect values)
Identify outliers and smooth out noisy data
Ignore the tuple
Fill in the missing value manually
Use a global constant to fill in the missing value
Use the attribute mean to fill in the missing value
Use the attribute mean for all samples belonging to the same class as the given tuple
Use the most probable value to fill in the missing value
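Three of the fill-in strategies above can be sketched in plain Python. The tuples, income values, and class labels below are hypothetical.

```python
from statistics import mean

# Hypothetical tuples of (income, class_label); None marks a missing value.
data = [(30000, "low"), (None, "low"), (52000, "high"),
        (48000, "high"), (None, "high"), (31000, "low")]

# Strategy: use a global constant to fill in the missing value.
filled_const = [(v if v is not None else -1, c) for v, c in data]

# Strategy: use the attribute mean to fill in the missing value.
overall_mean = mean(v for v, _ in data if v is not None)
filled_mean = [(v if v is not None else overall_mean, c) for v, c in data]

# Strategy: use the attribute mean of samples in the same class.
class_mean = {c: mean(v for v, cc in data if cc == c and v is not None)
              for c in {c for _, c in data}}
filled_class = [(v if v is not None else class_mean[c], c) for v, c in data]
```

Filling with the class-conditional mean (30,500 for "low", 50,000 for "high" here) preserves more structure than the overall mean, at the cost of needing the class label.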
Binning
Regression
Clustering
Binning
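A minimal sketch of binning-based smoothing, using hypothetical sorted price data and a bin depth of 3, showing both smoothing by bin means and by bin boundaries:

```python
# Hypothetical sorted price data, partitioned into equal-frequency
# (equal-depth) bins of depth 3.
prices = sorted([4, 8, 15, 21, 21, 24, 25, 28, 34])
depth = 3
bins = [prices[i:i + depth] for i in range(0, len(prices), depth)]

# Smoothing by bin means: each value is replaced by its bin's mean.
by_means = [[sum(b) / len(b)] * len(b) for b in bins]

# Smoothing by bin boundaries: each value is replaced by the closer
# of its bin's minimum and maximum.
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
             for b in bins]
```

For the first bin [4, 8, 15], smoothing by means yields [9.0, 9.0, 9.0] and smoothing by boundaries yields [4, 4, 15].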
Regression
Clustering
Figure 2.12 A 2-D plot of customer data with respect to customer locations in a city, showing three data clusters. Each cluster centroid is marked with a "+", representing the average point in space for that cluster. Outliers may be detected as values that fall outside of the sets of clusters.
Data Integration
Schema integration and object matching
Entity identification problem
Redundant data (between attributes) often occur when integrating multiple databases
Redundant attributes may be detected by correlation analysis and the chi-square method
custom_id and cust_number
Schema conflict
"H" and "S", and "1" and "2", for pay_type in one database
Value conflict
Solutions
Metadata (data about data)
If an attribute can be derived from another attribute or a set of attributes, it may be redundant
Some redundancies can be detected by correlation analysis
Correlation coefficient for numeric data
Chi-square test for categorical data
These can also be used for data reduction
Chi-square Test

For categorical (discrete) data, a correlation relationship between two attributes, A and B, can be discovered by a χ² test
Given the degrees of freedom, the value of χ² is used to decide correlation based on a significance level
χ² = Σ_{i=1}^{c} Σ_{j=1}^{r} (o_ij − e_ij)² / e_ij

where e_ij = count(A = a_i) × count(B = b_j) / N

(p. 68)

The larger the χ² value, the more likely the variables are related.
Chi-square Test

          fiction   non_fiction   Total
male        250          50         300
female      200        1000        1200
Total       450        1050        1500
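The χ² statistic for this table can be computed directly from the observed counts and the expected counts e_ij = (row total × column total) / N:

```python
# Observed counts from the contingency table
# (rows: male, female; columns: fiction, non_fiction).
observed = [[250, 50],
            [200, 1000]]

row_totals = [sum(row) for row in observed]        # [300, 1200]
col_totals = [sum(col) for col in zip(*observed)]  # [450, 1050]
N = sum(row_totals)                                # 1500

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / N      # expected count e_ij
        chi2 += (o - e) ** 2 / e

print(round(chi2, 2))  # 507.94
```

With (2−1)(2−1) = 1 degree of freedom, a value this large is far beyond common significance thresholds, so the two attributes would be judged strongly correlated.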
Correlation Coefficient

r_{A,B} = Σ_{i=1}^{N} (a_i − Ā)(b_i − B̄) / (N σ_A σ_B) = (Σ_{i=1}^{N} a_i b_i − N Ā B̄) / (N σ_A σ_B)

−1 ≤ r_{A,B} ≤ +1

(p. 68)
http://upload.wikimedia.org/wikipedia/commons/0/02/Correlation_examples.png
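Pearson's correlation coefficient can be computed directly from the second form of the formula; the attribute values below are hypothetical.

```python
from math import sqrt

# Hypothetical attribute values; B is a linear function of A, so r = 1.
A = [2.0, 4.0, 6.0, 8.0, 10.0]
B = [1.0, 3.0, 5.0, 7.0, 9.0]
N = len(A)

mean_A, mean_B = sum(A) / N, sum(B) / N
sigma_A = sqrt(sum((a - mean_A) ** 2 for a in A) / N)
sigma_B = sqrt(sum((b - mean_B) ** 2 for b in B) / N)

# r = (sum of a_i*b_i - N*mean_A*mean_B) / (N*sigma_A*sigma_B)
r = (sum(a * b for a, b in zip(A, B)) - N * mean_A * mean_B) \
    / (N * sigma_A * sigma_B)
print(round(r, 4))  # 1.0
```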
Data Transformation
Data Transformation/Consolidation

Smoothing
Aggregation
Generalization
Normalization
Attribute construction
Smoothing
Remove noise from the data
Binning, regression, and clustering
Data Normalization

Min-max normalization:
v' = ((v − min_A) / (max_A − min_A)) (new_max_A − new_min_A) + new_min_A

z-score normalization:
v' = (v − Ā) / σ_A
Data Normalization

Suppose that the minimum and maximum values for the attribute income are $12,000 and $98,000, respectively. We would like to map income to the range [0.0, 1.0]. Do min-max normalization, z-score normalization, and decimal scaling for the attribute income.
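A worked sketch of this exercise. The exercise fixes only the minimum and maximum; the sample value v = 73,600 and the mean and standard deviation used for z-score normalization are assumed figures.

```python
# Known from the exercise: min and max of income.
min_A, max_A = 12_000, 98_000
new_min, new_max = 0.0, 1.0

v = 73_600  # assumed sample income to normalize

# Min-max normalization to [0.0, 1.0]:
v_minmax = (v - min_A) / (max_A - min_A) * (new_max - new_min) + new_min

# z-score normalization (mean 54,000 and std 16,000 are assumptions):
mean_A, std_A = 54_000, 16_000
v_zscore = (v - mean_A) / std_A  # 1.225

# Decimal scaling: divide by 10**j for the smallest j such that
# max(|v'|) < 1; here j = 5 because 98000 / 10**5 = 0.98 < 1.
v_decimal = v / 10 ** 5  # 0.736
```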
Attribute Construction

New attributes are constructed from given attributes and added in order to help improve accuracy and understanding of structure in high-dimensional data

Example
Add the attribute area based on the attributes height and width
Data Reduction
Data reduction techniques can be applied to obtain a reduced representation of the data set that is much smaller in volume, yet closely maintains the integrity of the original data
Data Reduction
(Data Cube) Aggregation
Attribute (Subset) Selection
Dimensionality Reduction
Numerosity Reduction
Data Discretization
Concept Hierarchy Generation
The Curse of Dimensionality (1)

Size
The size of a data set yielding the same density of data points in an n-dimensional space increases exponentially with dimensions
Radius
A larger radius is needed to enclose a fraction of the data points in a high-dimensional space
The Curse of Dimensionality (2)

Distance
Almost every point is closer to an edge than to another sample point in a high-dimensional space
Outlier
Almost every point is an outlier in a high-dimensional space
Summarize (aggregate) data based on dimensions
The resulting data set is smaller in volume, without loss of information necessary for the analysis task
Concept hierarchies may exist for each attribute, allowing the analysis of data at multiple levels of abstraction
Data Aggregation

Figure 2.13 Sales data for a given branch of AllElectronics for the years 2002 to 2004. On the left, the sales are shown per quarter. On the right, the data are aggregated to provide the annual sales.
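The quarter-to-year roll-up of Figure 2.13 can be sketched as a group-by-year sum; the dollar amounts below are hypothetical.

```python
# Hypothetical quarterly sales (dollars) for one branch.
quarterly = {
    (2002, "Q1"): 224_000, (2002, "Q2"): 408_000,
    (2002, "Q3"): 350_000, (2002, "Q4"): 586_000,
    (2003, "Q1"): 330_000, (2003, "Q2"): 396_000,
    (2003, "Q3"): 312_000, (2003, "Q4"): 540_000,
}

# Aggregate along the time dimension: quarters roll up to years.
annual = {}
for (year, _quarter), sales in quarterly.items():
    annual[year] = annual.get(year, 0) + sales

print(annual)  # {2002: 1568000, 2003: 1578000}
```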
Data Cube

Provide fast access to precomputed, summarized data, thereby benefiting on-line analytical processing as well as data mining
Attribute (feature) selection is a search problem
Search directions
(Sequential) Forward selection
(Sequential) Backward selection (elimination)
Bidirectional selection
Decision tree algorithm (induction)
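A minimal sketch of sequential forward selection. The scoring function is a stand-in for a real evaluation criterion (e.g. cross-validated classifier accuracy); the attribute names and weights are made up.

```python
# Greedy sequential forward selection over a caller-supplied score().
def forward_select(attributes, score, k):
    selected = []
    while len(selected) < k:
        remaining = [a for a in attributes if a not in selected]
        best = max(remaining, key=lambda a: score(selected + [a]))
        if score(selected + [best]) <= score(selected):
            break  # no attribute improves the score: stop early
        selected.append(best)
    return selected

# Toy criterion: each attribute contributes an independent weight.
weights = {"age": 0.5, "income": 0.9, "height": 0.1, "zip": 0.2}
score = lambda subset: sum(weights[a] for a in subset)

print(forward_select(list(weights), score, k=2))  # ['income', 'age']
```

Backward elimination is the mirror image: start from the full set and greedily drop the attribute whose removal hurts the score least.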
Data Discretization

Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals
Interval labels can then be used to replace actual data values
Split (top-down) vs. merge (bottom-up)
Discretization can be performed recursively on an attribute
Reduce data size
Transform quantitative data into qualitative data
Merging-based (bottom-up)
Merge: find the best neighboring intervals and merge them to form larger intervals, recursively
ChiMerge [Kerber, AAAI 1992; see also Liu et al., DMKD 2002]
Initially, each distinct value of a numerical attribute A is considered to be one interval
χ² tests are performed for every pair of adjacent intervals
Adjacent intervals with the least χ² values are merged together, since low χ² values for a pair indicate similar class distributions
This merge process proceeds recursively until a predefined stopping criterion is met
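A simplified ChiMerge sketch of the steps above. Real ChiMerge stops when the minimum χ² exceeds a significance threshold; this sketch instead merges down to a fixed number of intervals, and the sample data are hypothetical.

```python
def chi2_pair(c1, c2):
    """Chi-square statistic for two adjacent intervals' class counts."""
    total = sum(c1) + sum(c2)
    chi2 = 0.0
    for counts in (c1, c2):
        n = sum(counts)
        for j in range(len(c1)):
            e = n * (c1[j] + c2[j]) / total or 1e-9  # expected, no div-by-0
            chi2 += (counts[j] - e) ** 2 / e
    return chi2

def chimerge(values, labels, classes, max_intervals):
    # Initially, each distinct value is its own interval.
    pts = sorted(set(values))
    counts = [[sum(1 for v, l in zip(values, labels) if v == p and l == c)
               for c in classes] for p in pts]
    while len(pts) > max_intervals:
        chis = [chi2_pair(counts[i], counts[i + 1])
                for i in range(len(pts) - 1)]
        i = chis.index(min(chis))  # adjacent pair with the least chi-square
        counts[i] = [a + b for a, b in zip(counts[i], counts[i + 1])]
        del counts[i + 1], pts[i + 1]
    return pts  # lower boundary of each final interval

print(chimerge([1, 2, 3, 7, 8, 9], ["a", "a", "a", "b", "b", "b"],
               ["a", "b"], max_intervals=2))  # [1, 7]
```

Intervals with identical class distributions produce χ² = 0 and are merged first, which is exactly the behavior the slide describes.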
Entropy-Based Discretization

The goal of this algorithm is to find the split with the maximum information gain
The boundary that minimizes the entropy over all possible boundaries is selected
The process is recursively applied to partitions obtained until some stopping criterion is met
Such a boundary may reduce data size and improve classification accuracy
What is Entropy?

Entropy is a measure of the uncertainty associated with a random variable
As uncertainty and/or randomness increases for a result set, so does the entropy
Values range from 0 to 1 to represent the entropy of information
Entropy Example

Entropy Example (cont'd)
Calculating Entropy

For m classes:
Entropy(S) = − Σ_{i=1}^{m} p_i log₂ p_i

For 2 classes:
Entropy(S) = − p₁ log₂ p₁ − p₂ log₂ p₂
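The m-class formula takes only a few lines of Python, working from raw class counts:

```python
from math import log2

# Entropy(S) = -sum(p_i * log2(p_i)) over the m class probabilities,
# computed here from raw class counts.
def entropy(counts):
    n = sum(counts)
    return sum(-(c / n) * log2(c / n) for c in counts if c)

print(entropy([5, 5]))   # 1.0 -- maximum uncertainty for two classes
print(entropy([10, 0]))  # 0.0 -- a pure set has no uncertainty
```

Classes with zero count are skipped, following the convention 0 log₂ 0 = 0.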
Calculating Entropy From Split

Entropies of subsets S1 and S2 are calculated
The calculations are weighted by their probability of being in set S and summed
In the formula below,
S is the set
T is the value used to split S into S1 and S2

E(S, T) = (|S1| / |S|) Entropy(S1) + (|S2| / |S|) Entropy(S2)
Calculating Information Gain

Information Gain = difference in entropy between original set (S) and weighted split (S1 + S2)

Gain(S, T) = Entropy(S) − E(S, T)
Gain(S, 56) = 0.991076 − 0.766289 = 0.224787
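The gain calculation can be sketched end-to-end. The (value, class) samples below are hypothetical and do not reproduce the slide's 0.991076/0.766289 figures; only the boundary T = 56 is taken from the slide's example.

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return sum(-(c / n) * log2(c / n) for c in counts if c)

# Hypothetical (value, class) samples and candidate boundary T = 56.
S = [(45, "no"), (50, "no"), (52, "yes"), (56, "no"),
     (60, "yes"), (62, "yes"), (70, "yes")]
T = 56

def class_counts(labels):
    return [labels.count("no"), labels.count("yes")]

S1 = [c for v, c in S if v <= T]  # labels of samples with value <= T
S2 = [c for v, c in S if v > T]   # labels of the remaining samples

E_S = entropy(class_counts([c for _, c in S]))
E_ST = (len(S1) / len(S)) * entropy(class_counts(S1)) \
     + (len(S2) / len(S)) * entropy(class_counts(S2))
gain = E_S - E_ST  # Gain(S, T) = Entropy(S) - E(S, T), about 0.52 here
```

Entropy-based discretization evaluates every candidate boundary this way and keeps the one with the largest gain.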
A concept hierarchy for a given numerical attribute defines a discretization of the attribute
Recursively reduce the data by collecting and replacing low-level concepts with higher-level concepts
A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, natural intervals
If an interval covers 3, 6, 7, or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals
Figure 2.23 Automatic generation of a concept hierarchy for profit based on the 3-4-5 rule.
Specification of a partial ordering of attributes explicitly at the schema level by users or experts
Specification of a portion of a hierarchy by explicit data grouping
Specification of a set of attributes, but not of their partial ordering
country: 15 distinct values
province_or_state: 365 distinct values
city: 3,567 distinct values
street: 674,339 distinct values
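Given distinct-value counts like those above, a hierarchy ordering can be generated automatically by sorting attributes from fewest to most distinct values (the top-level attribute name, country, is inferred here from the standard location hierarchy):

```python
# Distinct-value counts from the slide; the top-level attribute name
# ("country") is inferred from the standard location hierarchy.
distinct_counts = {
    "country": 15,
    "province_or_state": 365,
    "city": 3_567,
    "street": 674_339,
}

# Fewest distinct values -> highest level of the concept hierarchy.
hierarchy = sorted(distinct_counts, key=distinct_counts.get)
print(" < ".join(hierarchy))  # country < province_or_state < city < street
```

This heuristic works because higher-level concepts generalize more tuples and therefore tend to have fewer distinct values.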
Data preprocessing
  Data cleaning
    Missing values
      Use the most probable value to fill in the missing value (and five other methods)
    Noisy data
      Binning; regression; clustering
  Data integration
    Entity ID problem
      Metadata
    Redundancy
      Correlation analysis (correlation coefficient, chi-square test)
  Data transformation
    Smoothing (data cleaning)
    Aggregation (data reduction)
    Generalization (data reduction)
    Normalization
      Min-max; z-score; decimal scaling
    Attribute construction (data reduction)
  Data reduction
    Data cube aggregation
      Data cubes store multidimensional aggregated information
    Attribute subset selection
      Stepwise forward selection; stepwise backward selection; combination; decision tree induction
    Dimensionality reduction
      Discrete wavelet transforms (DWT); principal components analysis (PCA)
    Numerosity reduction
      Regression and log-linear models; histograms; clustering; sampling
  Data discretization
    Binning; histogram analysis; entropy-based discretization; interval merging by chi-square analysis; cluster analysis; intuitive partitioning
  Concept hierarchy generation