<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <!--Converted with LaTeX2HTML 98.1p1 release (March 2nd, 1998) originally by Nikos Drakos (nikos@cbl.leeds.ac.uk), CBLU, University of Leeds * revised and updated by: Marcus Hennecke, Ross Moore, Herb Swan * with significant contributions from: Jens Lippmann, Marek Rouchal, Martin Wilck and others --> <HTML> <HEAD> <TITLE>Signal detection</TITLE> <META NAME="description" CONTENT="Signal detection"> <META NAME="keywords" CONTENT="vol2"> <META NAME="resource-type" CONTENT="document"> <META NAME="distribution" CONTENT="global"> <META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1"> <LINK REL="STYLESHEET" HREF="vol2.css"> <LINK REL="next" HREF="node225.html"> <LINK REL="previous" HREF="node223.html"> <LINK REL="up" HREF="node222.html"> </HEAD> <BODY > <!--Navigation Panel--> <A NAME="tex2html4215" HREF="node225.html"> <IMG WIDTH="37" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="next" SRC="icons.gif/next_motif.gif"></A> <A NAME="tex2html4212" HREF="node222.html"> <IMG WIDTH="26" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="up" SRC="icons.gif/up_motif.gif"></A> <A NAME="tex2html4206" HREF="node223.html"> <IMG WIDTH="63" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="previous" SRC="icons.gif/previous_motif.gif"></A> <A NAME="tex2html4214" HREF="node1.html"> <IMG WIDTH="65" HEIGHT="24" ALIGN="BOTTOM" BORDER="0" ALT="contents" SRC="icons.gif/contents_motif.gif"></A> <BR> <B> Next:</B> <A NAME="tex2html4216" HREF="node225.html">Test statistics</A> <B> Up:</B> <A NAME="tex2html4213" HREF="node222.html">Basic principles of time</A> <B> Previous:</B> <A NAME="tex2html4207" HREF="node223.html">Signals and their models</A> <BR> <BR> <!--End of Navigation Panel--> <H2><A NAME="SECTION001722000000000000000"> Signal detection</A> </H2> <P> Paradoxically, signal detection works by fitting models to supposedly random series, in a manner resembling a mathematical proof by <I>reductio ad 
absurdum</I> of the antithesis. That is, one makes the hypothesis (the antithesis) <I>H</I><SUB><I>o</I></SUB> that the observed series <I>X</I><SUP>(<I>o</I>)</SUP> has the properties of a pure noise series, <I>N</I><SUP>(<I>o</I>)</SUP>. Then a model is fitted and the series <I>X</I><SUP>(<I>m</I>)</SUP> and <I>X</I><SUP>(<I>r</I>)</SUP> are obtained. If the quality of the model fit to the observations <I>X</I><SUP>(<I>o</I>)</SUP> does not differ significantly from the quality of a fit to pure noise <I>N</I><SUP>(<I>o</I>)</SUP>, then <I>H</I><SUB><I>o</I></SUB> cannot be rejected and we say that <I>X</I><SUP>(<I>o</I>)</SUP> contains no signal, only noise. In the opposite case, where the model fits <I>X</I><SUP>(<I>o</I>)</SUP> significantly better than <I>N</I><SUP>(<I>o</I>)</SUP>, we reject <I>H</I><SUB><I>o</I></SUB> and say that the model signal was detected in <I>X</I><SUP>(<I>o</I>)</SUP>. The difference is significant (at some level) if it is unlikely (at this level) to occur between two different realizations of the noise <I>N</I><SUP>(<I>o</I>)</SUP>. <P> The quality of the fit is evaluated using a function <I>S</I> of the series <I>X</I><SUP>(<I>o</I>)</SUP>, <I>X</I><SUP>(<I>m</I>)</SUP>, and <I>X</I><SUP>(<I>r</I>)</SUP>. A function of random variables, such as <!-- MATH: $S(X^{(o)})$ --> <I>S</I>(<I>X</I><SUP>(<I>o</I>)</SUP>), is a random variable itself and is called a <EM>statistic</EM>. A random variable <I>S</I> is characterized by its probability distribution function. Under <I>H</I><SUB><I>o</I></SUB> we use the distribution of <I>S</I> for the pure noise series <I>N</I><SUP>(<I>o</I>)</SUP>, <I>N</I><SUP>(<I>m</I>)</SUP> and <I>N</I><SUP>(<I>r</I>)</SUP>, denoted <I>p</I><SUB><I>N</I></SUB>(<I>S</I>) or simply <I>p</I>(<I>S</I>). 
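<P> As a concrete sketch of this procedure (not taken from the text), the noise distribution <I>p</I>(<I>S</I>) can be estimated by Monte Carlo: fit the model to many realizations of pure Gaussian white noise and tabulate the resulting values of <I>S</I>. Here <I>S</I> is chosen, purely for illustration, as the fraction of variance absorbed by a least-squares sinusoid fit at one fixed trial frequency; the frequency, amplitudes, and series length below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_statistic(x, t, nu):
    """Illustrative statistic S: fraction of the variance of x absorbed
    by a least-squares fit of constant + sinusoid at trial frequency nu."""
    # Design matrix: constant, sine and cosine (n_m = 3 parameters).
    M = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * nu * t),
                         np.cos(2 * np.pi * nu * t)])
    p, *_ = np.linalg.lstsq(M, x, rcond=None)
    x_m = M @ p          # modelled series X^(m)
    x_r = x - x_m        # residual series X^(r)
    return 1.0 - x_r.var() / x.var()

n_o, nu = 100, 0.1
t = np.arange(n_o, dtype=float)

# Null distribution p(S): S evaluated on many pure-noise realizations N^(o).
S_noise = np.array([fit_statistic(rng.standard_normal(n_o), t, nu)
                    for _ in range(2000)])

# A series that really does contain a sinusoid at frequency nu, plus noise.
x_obs = np.sin(2 * np.pi * nu * t) + 0.5 * rng.standard_normal(n_o)
S_obs = fit_statistic(x_obs, t, nu)

# Monte Carlo p-value: probability of an S this large under H_o.
p_value = np.mean(S_noise >= S_obs)
print(S_obs, p_value)
```

With a signal this strong, the observed <I>S</I> falls far in the tail of the Monte Carlo null distribution and the estimated <I>p</I> is essentially zero, so <I>H</I><SUB><I>o</I></SUB> is rejected.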
More precisely, we shall use the <EM>cumulative probability distribution</EM> function, which for a given critical value of the statistic <I>S</I>=<I>S</I><SUB><I>o</I></SUB> supplies the probability <I>p</I>(<I>S</I><SUB><I>o</I></SUB>) for the observed <I>S</I> to fall on one side of <I>S</I><SUB><I>o</I></SUB>. <P> The observed value of the statistic and its probability distribution, <!-- MATH: $S(X^{(o)})$ --> <I>S</I>(<I>X</I><SUP>(<I>o</I>)</SUP>) and <I>p</I>(<I>S</I>) respectively, are used to obtain the probability <I>p</I>(<I>S</I>(<I>X</I>)) of such a value of the statistic arising if <I>H</I><SUB><I>o</I></SUB> is true. That is, if <I>p</I> turns out to be small, <IMG WIDTH="15" HEIGHT="33" ALIGN="MIDDLE" BORDER="0" SRC="img441.gif" ALT="$p<\alpha$">, then <I>H</I><SUB><I>o</I></SUB> is improbable and <I>X</I><SUP>(<I>o</I>)</SUP> does not have the properties of <I>N</I><SUP>(<I>o</I>)</SUP>. We then say that the model signal has been detected at the significance level <IMG WIDTH="44" HEIGHT="54" ALIGN="BOTTOM" BORDER="0" SRC="img442.gif" ALT="$\alpha$">. The smaller <IMG WIDTH="44" HEIGHT="54" ALIGN="BOTTOM" BORDER="0" SRC="img443.gif" ALT="$\alpha$"> is, the more convincing (significant) the detection. The special realization of a random series which consists of independent variables with a common (Gaussian) distribution is called (Gaussian) white noise. We assume here that the noise <I>N</I><SUP>(<I>o</I>)</SUP> is white noise. Note that in the signal detection process the frequency <IMG WIDTH="42" HEIGHT="54" ALIGN="BOTTOM" BORDER="0" SRC="img444.gif" ALT="$\nu$"> and lag <I>l</I> are considered independent variables and do not count as parameters. <P> Summarizing, the basis for determining the properties of an observed time series is a test statistic, <I>S</I>, with a known probability distribution for (white) noise, <I>p</I>(<I>S</I>). <P> Let <I>N</I><SUP>(<I>o</I>)</SUP> consist of <I>n</I><SUB><I>o</I></SUB> random variables and let a given model have <I>n</I><SUB><I>m</I></SUB> parameters. 
Then the modeled series <I>N</I><SUP>(<I>m</I>)</SUP> corresponds to a combination of <I>n</I><SUB><I>m</I></SUB> random variables and the residual series <I>N</I><SUP>(<I>r</I>)</SUP> corresponds to a combination of <!-- MATH: $n_r=n_o-n_m$ --> <I>n</I><SUB><I>r</I></SUB>=<I>n</I><SUB><I>o</I></SUB>-<I>n</I><SUB><I>m</I></SUB> random variables. The proof of this statement rests on the observation that orthogonal transformations convert vectors of independent variables into vectors of independent variables. Let us consider an approximately linear model with matrix <IMG WIDTH="56" HEIGHT="54" ALIGN="BOTTOM" BORDER="0" SRC="img445.gif" ALT="${\cal M}$"> so that <!-- MATH: $N^{(m)} = {\cal M} \circ P$ --> <IMG WIDTH="102" HEIGHT="26" ALIGN="BOTTOM" BORDER="0" SRC="img446.gif" ALT="$N^{(m)} = {\cal M} \circ P$">, where <I>P</I> is a vector of <I>n</I><SUB><I>m</I></SUB> parameters. Then <I>N</I><SUP>(<I>m</I>)</SUP> spans a vector space of no more than <I>n</I><SUB><I>m</I></SUB> orthogonal vectors (dimensions). The numbers <I>n</I><SUB><I>o</I></SUB>, <I>n</I><SUB><I>m</I></SUB> and <I>n</I><SUB><I>r</I></SUB> are called the numbers of degrees of freedom of the observations, the model fit, and the residuals, respectively. 
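<P> This degrees-of-freedom bookkeeping can be checked numerically: for a linear model <I>N</I><SUP>(<I>m</I>)</SUP> fitted by least squares, the sum of squared residuals of unit-variance white noise has expectation <I>n</I><SUB><I>r</I></SUB>=<I>n</I><SUB><I>o</I></SUB>-<I>n</I><SUB><I>m</I></SUB>. A minimal sketch, assuming NumPy and an arbitrary illustrative model matrix (constant plus one sinusoid, <I>n</I><SUB><I>m</I></SUB>=3):

```python
import numpy as np

rng = np.random.default_rng(1)
n_o, n_m = 200, 3                 # observations and model parameters
t = np.arange(n_o, dtype=float)

# Linear model matrix M: constant, sine and cosine at one fixed frequency
# (an arbitrary choice for this check).
M = np.column_stack([np.ones(n_o),
                     np.sin(2 * np.pi * 0.05 * t),
                     np.cos(2 * np.pi * 0.05 * t)])

# Average the residual sum of squares over many white-noise realizations.
trials = 500
rss = np.empty(trials)
for i in range(trials):
    n_obs = rng.standard_normal(n_o)         # N^(o): unit-variance noise
    p, *_ = np.linalg.lstsq(M, n_obs, rcond=None)
    n_res = n_obs - M @ p                    # N^(r): residual series
    rss[i] = np.sum(n_res**2)

n_r = n_o - n_m
print(rss.mean(), n_r)   # mean RSS should be close to n_r = 197
```

The averaged residual sum of squares comes out close to <I>n</I><SUB><I>r</I></SUB>=197 rather than <I>n</I><SUB><I>o</I></SUB>=200: the fit absorbs <I>n</I><SUB><I>m</I></SUB> degrees of freedom, as stated above.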
<P> <HR> <ADDRESS> <I>Petra Nass</I> <BR><I>1999-06-15</I> </ADDRESS> </BODY> </HTML>