
This is festival.info, produced by Makeinfo version 3.12h from
festival.texi.

   This file documents the `Festival' Speech Synthesis System, a
general text-to-speech system for making your computer talk and for
developing new synthesis techniques.

   Copyright (C) 1996-2001 University of Edinburgh

   Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.

   Permission is granted to copy and distribute modified versions of
this manual under the conditions for verbatim copying, provided that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.

   Permission is granted to copy and distribute translations of this
manual into another language, under the above conditions for modified
versions, except that this permission notice may be stated in a
translation approved by the authors.


File: festival.info,  Node: Current voices,  Next: Building a new voice,  Up: Voices

Current voices
==============

   Currently there are a number of voices available in Festival and we
expect that number to increase.  Each is selected via a function of the
name `voice_*' which sets up the waveform synthesizer, phone set,
lexicon, duration and intonation models (and anything else necessary)
for that speaker.  These voice setup functions are defined in
`lib/voices.scm'.
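
   For example, selecting a voice and speaking some text from the
Festival prompt looks like this (any of the voice functions listed
below may be substituted):
     festival> (voice_rab_diphone)
     rab_diphone
     festival> (SayText "Hello world.")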

   The current voice functions are
`voice_rab_diphone'
     A British English male RP speaker, Roger.  This uses the UniSyn
     residual excited LPC diphone synthesizer.  The lexicon is the
     computer users' version of the Oxford Advanced Learners'
     Dictionary, with letter to sound rules trained from that lexicon.
     Intonation is provided by a ToBI-like system using a decision tree
     to predict accent and end tone position.  The F0 itself is
     predicted as three points on each syllable, using linear
     regression trained from the Boston University FM database (f2b)
     and mapped to Roger's pitch range.  Duration is predicted by a
     decision tree, predicting zscore durations for segments, trained
     from the 460 TIMIT sentences spoken by another British male
     speaker.

`voice_ked_diphone'
     An American English male speaker, Kurt.  Again this uses the UniSyn
     residual excited LPC diphone synthesizer.  This uses the CMU
     lexicon, and letter to sound rules trained from it.  Intonation as
     with Roger is trained from the Boston University FM Radio corpus.
     Duration for this voice also comes from that database.

`voice_kal_diphone'
     An American English male speaker.  Again this uses the UniSyn
     residual excited LPC diphone synthesizer.  Like ked, it uses the
     CMU lexicon, and letter to sound rules trained from it.
     Intonation, as with Roger, is trained from the Boston University
     FM Radio corpus.  Duration for this voice also comes from that
     database.  This voice was built in two days' work and is at least
     as good as ked, as we now understand the process better.  The
     diphone labels were autoaligned with hand correction.

`voice_don_diphone'
     Steve Isard's LPC based diphone synthesizer, Donovan diphones.  The
     other parts of this voice, lexicon, intonation, and duration are
     the same as `voice_rab_diphone' described above.  The quality of
     the diphones is not as good as the other voices because it uses
     spike excited LPC.  Although the quality is not as good it is much
     faster and the database is much smaller than the others.

`voice_el_diphone'
     A male Castilian Spanish speaker, using the Eduardo Lopez diphones.
     Alistair Conkie and Borja Etxebarria did much to make this.  It has
     improved recently but is not as comprehensive as our English
     voices.

`voice_gsw_diphone'
     This offers a male RP speaker, Gordon, famed for many previous CSTR
     synthesizers, using the standard diphone module.  Its higher
     levels are very similar to the Roger voice above.  This voice is
     not in the standard distribution, and is unlikely to be added for
     commercial reasons, even though it sounds better than Roger.

`voice_en1_mbrola'
     The Roger diphone set, using the same front end as
     `voice_rab_diphone' but with the MBROLA diphone synthesizer for
     waveform synthesis.  The MBROLA synthesizer and the Roger diphone
     database (called `en1') are not distributed by CSTR but are
     available free for non-commercial use from
     `http://tcts.fpms.ac.be/synthesis/mbrola.html'.  We do however
     provide the Festival part of the voice in `festvox_en1.tar.gz'.

`voice_us1_mbrola'
     A female American English voice using our standard US English
     front end and the `us1' database for the MBROLA diphone
     synthesizer for waveform synthesis.  The MBROLA synthesizer and
     the `us1' diphone database are not distributed by CSTR but are
     available free for non-commercial use from
     `http://tcts.fpms.ac.be/synthesis/mbrola.html'.  We provide the
     Festival part of the voice in `festvox_us1.tar.gz'.

`voice_us2_mbrola'
     A male American English voice using our standard US English front
     end and the `us2' database for the MBROLA diphone synthesizer for
     waveform synthesis.  The MBROLA synthesizer and the `us2' diphone
     database are not distributed by CSTR but are available free for
     non-commercial use from
     `http://tcts.fpms.ac.be/synthesis/mbrola.html'.  We provide the
     Festival part of the voice in `festvox_us2.tar.gz'.

`voice_us3_mbrola'
     Another male American English voice using our standard US English
     front end and the `us3' database for the MBROLA diphone
     synthesizer for waveform synthesis.  The MBROLA synthesizer and
     the `us3' diphone database are not distributed by CSTR but are
     available free for non-commercial use from
     `http://tcts.fpms.ac.be/synthesis/mbrola.html'.  We provide the
     Festival part of the voice in `festvox_us3.tar.gz'.

   Other voices will become available through time.  Groups other than
CSTR are working on new voices.  In particular, OGI's CSLU has released
a number of American English voices, two Mexican Spanish voices and two
German voices.  All use OGI's own residual excited LPC synthesizer,
which is distributed as a plug-in for Festival (see
`http://www.cse.ogi.edu/CSLU/research/TTS' for details).

   Other languages are being worked on.  German, Basque, Welsh, Greek
and Polish voices have already been developed and could be released
soon.  CSTR has a set of Klingon diphones, though the text analysis for
Klingon still requires some work (if anyone has access to a good
Klingon continuous speech corpus please let us know).

   Pointers and examples of voices developed at CSTR and elsewhere will
be posted on the Festival home page.


File: festival.info,  Node: Building a new voice,  Next: Defining a new voice,  Prev: Current voices,  Up: Voices

Building a new voice
====================

   This section runs through the definition of a new voice in Festival.
Although this voice is simple (it is a simplified version of the
distributed Spanish voice) it shows all the major parts that must be
defined to get Festival to speak in a new voice.  Thanks go to Alistair
Conkie for helping me define this, but as I don't speak Spanish there
are probably many mistakes.  Hopefully its pedagogical use is better
than its ability to be understood in Castile.

   A much more detailed document on building voices in Festival has
been written and is recommended reading for anyone attempting to add a
new voice to Festival `black99'.  The information here is a little
sparse, though it gives the basic requirements.

   The general method for defining a new voice is to define the
parameters for all the various sub-parts, e.g. phoneset, duration
parameters, intonation parameters etc., then define a function of the
form `voice_NAME' which when called will actually select the voice.

Phoneset
--------

   For most new languages and often for new dialects, a new phoneset is
required.  It is really the basic building block of a voice and most
other parts are defined in terms of this set, so defining it first is a
good start.
     (defPhoneSet
       spanish
       ;;;  Phone Features
       (;; vowel or consonant
        (vc + -)
        ;; vowel length: short long diphthong schwa
        (vlng s l d a 0)
        ;; vowel height: high mid low
        (vheight 1 2 3 -)
        ;; vowel frontness: front mid back
        (vfront 1 2 3 -)
        ;; lip rounding
        (vrnd + -)
        ;; consonant type: stop fricative affricative nasal liquid
        (ctype s f a n l 0)
        ;; place of articulation: labial alveolar palatal labio-dental
        ;;                         dental velar
        (cplace l a p b d v 0)
        ;; consonant voicing
        (cvox + -)
        )
       ;; Phone set members (features are not! set properly)
       (
        (#  - 0 - - - 0 0 -)
        (a  + l 3 1 - 0 0 -)
        (e  + l 2 1 - 0 0 -)
        (i  + l 1 1 - 0 0 -)
        (o  + l 3 3 - 0 0 -)
        (u  + l 1 3 + 0 0 -)
        (b  - 0 - - + s l +)
        (ch - 0 - - + a a -)
        (d  - 0 - - + s a +)
        (f  - 0 - - + f b -)
        (g  - 0 - - + s p +)
        (j  - 0 - - + l a +)
        (k  - 0 - - + s p -)
        (l  - 0 - - + l d +)
        (ll - 0 - - + l d +)
        (m  - 0 - - + n l +)
        (n  - 0 - - + n d +)
        (ny - 0 - - + n v +)
        (p  - 0 - - + s l -)
        (r  - 0 - - + l p +)
        (rr - 0 - - + l p +)
        (s  - 0 - - + f a +)
        (t  - 0 - - + s d +)
        (th - 0 - - + f d +)
        (x  - 0 - - + a a -)
       )
     )
     (PhoneSet.silences '(#))
   Note some phonetic features may be wrong.
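
   Once defined, the phone set can be selected as the current one; the
same call appears again in the voice definition function at the end of
this section.
     (PhoneSet.select 'spanish)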

Lexicon and LTS
---------------

   Spanish is a language whose pronunciation can almost completely be
predicted from its orthography, so in this case we do not need a list
of words and their pronunciations, and can do most of the work with
letter to sound rules.

   Let us first make a lexicon structure as follows
     (lex.create "spanish")
     (lex.set.phoneset "spanish")
   However if we did just want a few entries to test our system without
building any letter to sound rules we could add entries directly to the
addenda.  For example
     (lex.add.entry
        '("amigos" nil (((a) 0) ((m i) 1) ((g o s) 0))))
   A letter to sound rule system for Spanish is quite simple in the
format supported by Festival.  The following is a good start to a full
set.
     (lts.ruleset
     ;  Name of rule set
      spanish
     ;  Sets used in the rules
     (
       (LNS l n s )
       (AEOU a e o u )
       (AEO a e o )
       (EI e i )
       (BDGLMN b d g l m n )
     )
     ;  Rules
     (
      ( [ a ] = a )
      ( [ e ] = e )
      ( [ i ] = i )
      ( [ o ] = o )
      ( [ u ] = u )
      ( [ "'" a ] = a1 )   ;; stressed vowels
      ( [ "'" e ] = e1 )
      ( [ "'" i ] = i1 )
      ( [ "'" o ] = o1 )
      ( [ "'" u ] = u1 )
      ( [ b ] = b )
      ( [ v ] = b )
      ( [ c ] "'" EI = th )
      ( [ c ] EI = th )
      ( [ c h ] = ch )
      ( [ c ] = k )
      ( [ d ] = d )
      ( [ f ] = f )
      ( [ g ] "'" EI = x )
      ( [ g ] EI = x )
      ( [ g u ] "'" EI = g )
      ( [ g u ] EI = g )
      ( [ g ] = g )
      ( [ h u e ] = u e )
      ( [ h i e ] = i e )
      ( [ h ] =  )
      ( [ j ] = x )
      ( [ k ] = k )
      ( [ l l ] # = l )
      ( [ l l ] = ll )
      ( [ l ] = l )
      ( [ m ] = m )
      ( [ ~ n ] = ny )
      ( [ n ] = n )
      ( [ p ] = p )
      ( [ q u ] = k )
      ( [ r r ] = rr )
      ( # [ r ] = rr )
      ( LNS [ r ] = rr )
      ( [ r ] = r )
      ( [ s ] BDGLMN = th )
      ( [ s ] = s )
      ( # [ s ] C = e s )
      ( [ t ] = t )
      ( [ w ] = u )
      ( [ x ] = k s )
      ( AEO [ y ] = i )
      ( # [ y ] # = i )
      ( [ y ] = ll )
      ( [ z ] = th )
     ))
   We could simply set our lexicon to use the above letter to sound
system with the following command
     (lex.set.lts.ruleset 'spanish)
   But this would not deal with upper case letters.  Instead of writing
new rules for upper case letters we can define a Lisp function to be
called when a word is looked up, and intercept the lookup with our own
function.  First we state that unknown words should call a function,
and then define the function we wish called.  The actual link to ensure
our function will be called is made below, at lexicon selection time.
     (define (spanish_lts word features)
       "(spanish_lts WORD FEATURES)
     Using letter to sound rules build a spanish pronunciation of WORD."
       (list word
             nil
             (lex.syllabify.phstress (lts.apply (downcase word) 'spanish))))
     (lex.set.lts.method spanish_lts)
   In the function we downcase the word and apply the LTS rules to it.
Next we syllabify it and return the created lexical entry.
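
   The rules may also be applied directly, which is useful when
debugging them.  With the ruleset above this should give
     festival> (lts.apply "amigos" 'spanish)
     (a m i g o s)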

Phrasing
--------

   Without detailed labelled databases we cannot build statistical
models of phrase breaks, but we can build a simple phrase break model
based on punctuation.  The following is a CART tree to predict simple
breaks from punctuation.
     (set! spanish_phrase_cart_tree
     '
     ((lisp_token_end_punc in ("?" "." ":"))
       ((BB))
       ((lisp_token_end_punc in ("'" "\"" "," ";"))
        ((B))
        ((n.name is 0)  ;; end of utterance
         ((BB))
         ((NB))))))

Intonation
----------

   For intonation there are a number of simple options that do not
require training data.  For this example we will simply use a hat
pattern on all stressed syllables in content words and on single
syllable content words (i.e. the `Simple' intonation module).  Thus we
need an accent prediction CART tree.
     (set! spanish_accent_cart_tree
      '
       ((R:SylStructure.parent.gpos is content)
        ((stress is 1)
         ((Accented))
         ((position_type is single)
          ((Accented))
          ((NONE))))
        ((NONE))))
   We also need to specify the pitch range of our speaker.  We will be
using a male Spanish diphone database with the following range
     (set! spanish_el_int_simple_params
         '((f0_mean 120) (f0_std 30)))

Duration
--------

   We will use the trick mentioned above for duration prediction: the
zscore CART tree method, used to predict factors rather than zscores.
Since that method computes a duration as the phone's mean plus the
predicted value times its standard deviation, giving each phone a
"mean" of 0 and storing its average duration in the "standard
deviation" field makes the tree's prediction act as a simple
multiplicative factor.

   The tree predicts longer durations in stressed syllables and in
clause initial and clause final syllables.
     (set! spanish_dur_tree
      '
        ((R:SylStructure.parent.R:Syllable.p.syl_break > 1 ) ;; clause initial
         ((R:SylStructure.parent.stress is 1)
          ((1.5))
          ((1.2)))
         ((R:SylStructure.parent.syl_break > 1)   ;; clause final
          ((R:SylStructure.parent.stress is 1)
           ((2.0))
           ((1.5)))
          ((R:SylStructure.parent.stress is 1)
           ((1.2))
           ((1.0))))))
   In addition to the tree we need durations for each phone in the set
     (set! spanish_el_phone_data
     '(
        (# 0.0 0.250)
        (a 0.0 0.090)
        (e 0.0 0.090)
        (i 0.0 0.080)
        (o 0.0 0.090)
        (u 0.0 0.080)
        (b 0.0 0.065)
        (ch 0.0 0.135)
        (d 0.0 0.060)
        (f 0.0 0.100)
        (g 0.0 0.080)
        (j 0.0 0.100)
        (k 0.0 0.100)
        (l 0.0 0.080)
        (ll 0.0 0.105)
        (m 0.0 0.070)
        (n 0.0 0.080)
        (ny 0.0 0.110)
        (p 0.0 0.100)
        (r 0.0 0.030)
        (rr 0.0 0.080)
        (s 0.0 0.110)
        (t 0.0 0.085)
        (th 0.0 0.100)
        (x 0.0 0.130)
     ))
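   For example, with the tree and table above, a stressed clause-final
`a' receives a factor of 2.0 and hence a predicted duration of 2.0 *
0.090 = 0.18 seconds.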

Waveform synthesis
------------------

   There are a number of choices for waveform synthesis currently
supported.  MBROLA supports Spanish, so we could use that.  But their
Spanish diphones in fact use a slightly different phoneset so we would
need to change the above definitions to use it effectively.  Here we
will use a diphone database for Spanish recorded by Eduardo Lopez when
he was a Masters student some years ago.

   Here we simply load our pre-built diphone database
     (us_diphone_init
        (list
         '(name "el_lpc_group")
         (list 'index_file
               (path-append spanish_el_dir "group/ellpc11k.group"))
         '(grouped "true")
         '(default_diphone "#-#")))

Voice selection function
------------------------

   The standard way to define a voice in Festival is to define a
function of the form `voice_NAME' which selects all the appropriate
parameters.  Because the definition below follows the above definitions
we know that everything appropriate has been loaded into Festival, and
hence we just need to select the appropriate parameters.

     (define (voice_spanish_el)
     "(voice_spanish_el)
     Set up synthesis for Male Spanish speaker: Eduardo Lopez"
       (voice_reset)
       (Parameter.set 'Language 'spanish)
       ;; Phone set
       (Parameter.set 'PhoneSet 'spanish)
       (PhoneSet.select 'spanish)
       (set! pos_lex_name nil)
       ;; Phrase break prediction by punctuation
       (set! pos_supported nil)
       ;; Phrasing
       (set! phrase_cart_tree spanish_phrase_cart_tree)
       (Parameter.set 'Phrase_Method 'cart_tree)
       ;; Lexicon selection
       (lex.select "spanish")
       ;; Accent prediction
       (set! int_accent_cart_tree spanish_accent_cart_tree)
       (set! int_simple_params spanish_el_int_simple_params)
       (Parameter.set 'Int_Method 'Simple)
       ;; Duration prediction
       (set! duration_cart_tree spanish_dur_tree)
       (set! duration_ph_info spanish_el_phone_data)
       (Parameter.set 'Duration_Method 'Tree_ZScores)
       ;; Waveform synthesizer: diphones
       (Parameter.set 'Synth_Method 'UniSyn)
       (Parameter.set 'us_sigpr 'lpc)
       (us_db_select 'el_lpc_group)
     
       (set! current-voice 'spanish_el)
     )
     
     (provide 'spanish_el)

Last remarks
------------

   We save the above definitions in a file `spanish_el.scm'.  Now we
can declare the new voice to Festival.  *Note Defining a new voice::
for a description of methods for adding new voices.  For testing
purposes we can explicitly load the file `spanish_el.scm' directly:
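     festival> (load "spanish_el.scm")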

   The voice is now available for use in Festival.
     festival> (voice_spanish_el)
     spanish_el
     festival> (SayText "hola amigos")
     <Utterance 0x04666>

   As you can see, adding a new voice is not very difficult.  Of course
there is quite a lot more to do than the above to add a high quality,
robust voice to Festival.  But as we can see, many of the basic tools
that we wish to use already exist.  The main difference between the
above voice and the English voices already in Festival is that their
models are better trained, from databases.  This produces, in general,
better results, but the concepts behind them are basically the same.
All of those trainable methods may be parameterized with data for new
voices.

   As Festival develops, more modules will be added with better support
for training new voices, so in the end we hope that adding high quality
new voices is actually as simple as (or indeed simpler than) the above
description.

Resetting globals
-----------------

   Because the version of Scheme used in Festival only has a single flat
name space it is unfortunately too easy for voices to set some global
which accidentally affects all other voices selected after it.  Because
of this problem we have introduced a convention to try to minimise the
possibility of this becoming a problem.  Each voice function defined
should always call `voice_reset' at the start.  This will reset any
globals and also call a tidy up function provided by the previous voice
function.

   Likewise in your new voice function you should provide a tidy up
function to reset any non-standard global variables you set.  The
function `current_voice_reset' will be called by `voice_reset'.  If the
value of `current_voice_reset' is `nil' then it is not called.
`voice_reset' sets `current_voice_reset' to `nil', after calling it.

   For example suppose some new voice requires the audio device to be
directed to a different machine.  In this example we make the giant's
voice go through the netaudio machine `big_speakers' while the standard
voices go through `small_speakers'.

   Although we can easily select the machine `big_speakers' as output
when our `voice_giant' is called, we also need to set it back when the
next voice is selected, and we don't want to have to modify every other
voice defined in the system.  Let us first define two functions to
select the audio output.
     (define (select_big)
       (set! giant_previous_audio (getenv "AUDIOSERVER"))
       (setenv "AUDIOSERVER" "big_speakers"))
     
     (define (select_normal)
       (setenv "AUDIOSERVER" giant_previous_audio))
   Note we save the previous value of `AUDIOSERVER' rather than simply
assuming it was `small_speakers'.

   Our definition of `voice_giant' will look something like
     (define (voice_giant)
     "comment comment ..."
        (voice_reset)  ;; get into a known state
        (select_big)
        ;;; other giant voice parameters
        ...
     
        (set! current_voice_reset select_normal)
        (set! current-voice 'giant))
   The obvious question is which variables a voice should reset.
Unfortunately there is not a definitive answer to that.  To a certain
extent I don't want to define that list, as there will be many
variables used by various people in Festival which are not in the
original distribution, and we don't want to restrict them.  The longer
term answer is some form of partitioning of the Scheme name space,
perhaps having voice local variables (cf. Emacs buffer local
variables).  But ultimately a voice may set global variables which
could redefine the operation of later selected voices, and there seems
no real way to stop that while keeping the generality of the system.

   Note the convention of setting the global `current-voice' at the end
of any voice definition file.  We do not enforce this but probably
should.  The variable `current-voice' at any time should identify the
current voice; the voice description information (described below) will
relate this name to properties identifying it.


File: festival.info,  Node: Defining a new voice,  Prev: Building a new voice,  Up: Voices

Defining a new voice
====================

   As there are a number of voices available for Festival, and they may
or may not exist in different installations, we have tried to make it
as simple as possible to add new voices to the system without having to
change any of the basic distribution.  In fact if the voices use the
following standard method for describing themselves it is merely a
matter of unpacking them in order for them to be used by the system.

   The variable `voice-path' contains a list of directories where
voices will be automatically searched for.  If this is not set it is
set automatically by appending `/voices/' to all paths in Festival's
`load-path'.  You may add new directories explicitly to this variable
in your `sitevars.scm' file or your own `.festivalrc' as you wish.

   Each voice directory is assumed to be of the form
     LANGUAGE/VOICENAME/
   Within the `VOICENAME/' directory itself it is assumed there is a
file `festvox/VOICENAME.scm' which when loaded will define the voice
itself.  The actual voice function should be called `voice_VOICENAME'.

   For example the voices distributed with the standard Festival
distribution all unpack in `festival/lib/voices'.  The American voice
`ked_diphone' unpacks into
     festival/lib/voices/english/ked_diphone/
   Its actual definition file is in
     festival/lib/voices/english/ked_diphone/festvox/ked_diphone.scm
   Note the name of the directory and the name of the Scheme definition
file must be the same.

   Alternative voices, using perhaps a different encoding of the
database but the same front end, may be defined in the same way by
using symbolic links in the language directory to the main directory.
For example a PSOLA version of the ked voice may be defined in
     festival/lib/voices/english/ked_diphone/festvox/ked_psola.scm
   Adding a symbolic link in `festival/lib/voices/english/' to
`ked_diphone' called `ked_psola' will allow that voice to be
automatically registered when Festival starts up.

   Note that this method doesn't actually load the voices it finds, as
that could be prohibitively time consuming at start up.  It blindly
assumes that there is a file `VOICENAME/festvox/VOICENAME.scm' to load.
An autoload definition is given for `voice_VOICENAME' which when called
will load that file and call the real definition if it exists in the
file.

   This is only a recommended method to make adding new voices easier;
it may be ignored if you wish.  However we still recommend that even if
you use your own conventions for adding new voices you consider using
the autoload function to define them, in, for example, the
`siteinit.scm' file or `.festivalrc'.  The autoload function takes
three arguments: a function name, a file containing the actual
definition, and a comment.  For example a definition of a voice can be
given explicitly by
     (autoload voice_f2b  "/home/awb/data/f2b/ducs/f2b_ducs"
          "American English female f2b")
   Of course you can also load the definition file explicitly if you
wish.

   In order to allow the system to start making intelligent use of
voices we recommend that all voice definitions include a call to the
function `proclaim_voice'; this allows the system to know some
properties of the voice such as language, gender and dialect.  The
`proclaim_voice' function takes two arguments: a name (e.g.
`rab_diphone') and an assoc list of features and values.  Currently we
require `language', `gender', `dialect' and `description', the last
being a textual description of the voice itself.  An example
proclamation is
     (proclaim_voice
      'rab_diphone
      '((language english)
        (gender male)
        (dialect british)
        (description
         "This voice provides a British RP English male voice using a
          residual excited LPC diphone synthesis method.  It uses a
          modified Oxford Advanced Learners' Dictionary for pronunciations.
          Prosodic phrasing is provided by a statistically trained model
          using part of speech and local distribution of breaks.  Intonation
          is provided by a CART tree predicting ToBI accents and an F0
          contour generated from a model trained from natural speech.  The
          duration model is also trained from data using a CART tree.")))
   There are functions to access a description.  `voice.description'
will return the description for a given voice and will load that voice
if it is not already loaded.  `voice.describe' will describe the given
voice by synthesizing the textual description using the current voice.
It would be nice to use the voice itself to give a self introduction,
but unfortunately that introduces the problem of deciding which
language the description should be in; we are not all as fluent in
Welsh as we'd like to be.

   The function `voice.list' will list the _potential_ voices in the
system.  These are the names of voices which have been found in the
`voice-path'.  As they have not actually been loaded they can't
actually be confirmed as usable voices.  One solution to this would be
to load all voices at start up time, which would allow confirmation
that they exist and retrieval of their full descriptions through
`proclaim_voice'.  But start up is already too slow in Festival so we
have to accept this state for the time being.  Splitting the
description of the voice from the actual definition is a possible
solution to this problem but we have not yet looked into this.
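
   For example (the exact list will depend on the voices installed):
     festival> (voice.list)
     (kal_diphone ked_diphone rab_diphone don_diphone)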


File: festival.info,  Node: Tools,  Next: Building models from databases,  Prev: Voices,  Up: Top

Tools
*****

   A number of basic data manipulation tools are supported by Festival.
These often make building new modules very easy and are already used in
many of the existing modules.  They typically offer a Scheme method for
entering data, and Scheme and C++ functions for evaluating it.

* Menu:

* Regular expressions::
* CART trees::          Building and using CART
* Ngrams::              Building and using Ngrams
* Viterbi decoder::     Using the Viterbi decoder
* Linear regression::   Building and using linear regression models


File: festival.info,  Node: Regular expressions,  Next: CART trees,  Up: Tools

Regular expressions
===================

   Regular expressions are a formal method for describing a certain
class of mathematical languages.  They may be viewed as patterns which
match some set of strings.  They are very common in many software tools
such as scripting languages like the UNIX shell, PERL, awk, Emacs etc.
Unfortunately the exact form of regular expressions often differs
slightly between different applications, making their use a little
tricky.

   Festival supports regular expressions based mainly on the form used
in the GNU libg++ `Regex' class, though we have our own implementation
of it.  Our implementation (`EST_Regex') is actually based on Henry
Spencer's `regex.c' as distributed with BSD 4.4.

   Regular expressions are represented as character strings which are
interpreted as regular expressions by certain Scheme and C++ functions.
Most characters in a regular expression are treated as literals and
match only that character, but a number of others have special meaning.
Some characters may be escaped with preceding backslashes to change
them from operators to literals (or sometimes literals to operators).

`.'
     Matches any character.

`$'
     matches the end of a string.

`^'
     matches the beginning of a string.

`X*'
     matches zero or more occurrences of X; X may be a character, a
     range, or a parenthesized expression.

`X+'
     matches one or more occurrences of X; X may be a character, a
     range, or a parenthesized expression.

`X?'
     matches zero or one occurrence of X; X may be a character, a
     range, or a parenthesized expression.

`[...]'
     a range matches any of the values in the brackets.  The range
     operator "-" allows specification of ranges, e.g. `a-z' for all
     lower case characters.  If the first character of the range is `^'
     then it matches any character except those specified in the range.
     If you wish `-' to be in the range you must put that first.

`\\(...\\)'
     Treat contents of parentheses as single object allowing operators
     `*', `+', `?' etc to operate on more than single characters.

`X\\|Y'
     matches either X or Y.  X or Y may be single characters, ranges or
     parenthesized expressions.

   Note that actually only one backslash is needed before a character
to escape it, but because these expressions are most often contained
within Scheme or C++ strings, the escape mechanism for those strings
requires that the backslash itself be escaped.  Hence you will most
often be required to type two backslashes.

   Some examples may help in understanding the use of regular
expressions.
`a.b'
     matches any three letter string starting with an `a' and ending
     with a `b'.

`.*a'
     matches any string ending in an `a'

`.*a.*'
     matches any string containing an `a'

`[A-Z].*'
     matches any string starting with a capital letter

`[0-9]+'
     matches any string of digits

`-?[0-9]+\\(\\.[0-9]+\\)?'
     matches any positive or negative real number.  Note the optional
     preceding minus sign and the optional part containing the point
     and following digits.  The point itself must be escaped, as a dot
     on its own matches any character.

`[^aeiouAEIOU]+'
     matches any non-empty string which doesn't contain a vowel

`\\([Ss]at\\(urday\\)\\)?\\|\\([Ss]un\\(day\\)\\)'
     matches Saturday and Sunday in various ways

   The Scheme function `string-matches' takes a string and a regular
expression and returns `t' if the regular expression matches the string
and `nil' otherwise.
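
   For example
     festival> (string-matches "1998" "[0-9]+")
     t
     festival> (string-matches "no digits here" "[0-9]+")
     nil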


File: festival.info,  Node: CART trees,  Next: Ngrams,  Prev: Regular expressions,  Up: Tools

CART trees
==========

   One of the basic tools available with Festival is a system for
building and using Classification and Regression Trees (`breiman84').
This standard statistical method can be used to predict both
categorical and continuous data from a set of feature vectors.

   The tree itself contains yes/no questions about features and
ultimately provides either a probability distribution, when predicting
categorical values (classification tree), or a mean and standard
deviation when predicting continuous values (regression tree).  Well
defined techniques can be used to construct an optimal tree from a set
of training data.  The program `wagon', developed in conjunction with
Festival and distributed with the speech tools, provides a basic but
increasingly powerful method for constructing trees.

   A tree need not be automatically constructed.  CART trees have an
advantage over some other automatic training methods, such as neural
networks and linear regression, in that their output is more readable
and often understandable by humans.  Importantly this makes it possible
to modify them.  CART trees may also be fully hand constructed.  This
is used, for example, in generating some duration models for languages
for which we do not yet have full databases to train from.

   A CART tree has the following syntax
         CART ::= QUESTION-NODE || ANSWER-NODE
         QUESTION-NODE ::= ( QUESTION YES-NODE NO-NODE )
         YES-NODE ::= CART
         NO-NODE ::= CART
         QUESTION ::= ( FEATURE in LIST )
         QUESTION ::= ( FEATURE is STRVALUE )
         QUESTION ::= ( FEATURE = NUMVALUE )
         QUESTION ::= ( FEATURE > NUMVALUE )
         QUESTION ::= ( FEATURE < NUMVALUE )
         QUESTION ::= ( FEATURE matches REGEX )
         ANSWER-NODE ::= CLASS-ANSWER || REGRESS-ANSWER
         CLASS-ANSWER ::= ( (VALUE0 PROB) (VALUE1 PROB) ... MOST-PROB-VALUE )
         REGRESS-ANSWER ::= ( ( STANDARD-DEVIATION MEAN ) )
   Note that answer nodes are distinguished by their car not being
atomic.

   The interpretation of a tree is with respect to a Stream_Item.  The
FEATURE in a tree is a standard feature (*note Features::.).

   The following example tree is used in one of the Spanish voices to
predict variations from average durations.
     (set! spanish_dur_tree
      '
        ((R:SylStructure.parent.R:Syllable.p.syl_break > 1 ) ;; clause initial
         ((R:SylStructure.parent.stress is 1)
          ((1.5))
          ((1.2)))
         ((R:SylStructure.parent.syl_break > 1)   ;; clause final
          ((R:SylStructure.parent.stress is 1)
           ((2.0))
           ((1.5)))
          ((R:SylStructure.parent.stress is 1)
           ((1.2))
           ((1.0))))))
   It is applied to the segment stream to give a factor to multiply the
average by.

   `wagon' is constantly improving, and with version 1.2 of the speech
tools may now be considered fairly stable for its basic operations.
Experimental features are described in the help it gives.  See the
Speech Tools manual for a more comprehensive discussion of using
`wagon'.

   However the above format of trees is similar to those produced by
many other systems and hence it is reasonable to translate their
formats into one which Festival can use.


File: festival.info,  Node: Ngrams,  Next: Viterbi decoder,  Prev: CART trees,  Up: Tools

Ngrams
======

   Bigrams, trigrams, and general ngrams are used in the part of speech
tagger and the phrase break predictor.  An Ngram C++ class is defined
in the speech tools library and some simple facilities are added within
Festival itself.

   Ngrams may be built from files of tokens using the program
`ngram_build' which is part of the speech tools.  See the speech tools
documentation for details.

   Within Festival ngrams may be named and loaded from files and used
when required.  The LISP function `load_ngram' takes a name and a
filename as arguments and loads the Ngram from that file.  For an
example of its use once loaded see `src/modules/base/pos.cc' or
`src/modules/base/phrasify.cc'.
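
   A minimal sketch of loading an ngram (the filename here is purely
illustrative; the name matches the `ngramname' parameter used in the
Viterbi example below):
     (load_ngram "pos-tri-gram" "lib/pos-tri-gram.ngrambin")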


File: festival.info,  Node: Viterbi decoder,  Next: Linear regression,  Prev: Ngrams,  Up: Tools

Viterbi decoder
===============

   Another common tool is a Viterbi decoder.  This C++ class is defined
in the speech tools library `speech_tools/include/EST_viterbi.h' and
`speech_tools/stats/EST_viterbi.cc'.  A Viterbi decoder requires two
functions at declaration time.  The first constructs candidates at each
stage, while the second combines paths.  A number of options are
available (which may change).

   The prototypical example of use is in the part of speech tagger,
which uses standard Ngram models to predict probabilities of tags.  See
`src/modules/base/pos.cc' for an example.

   The Viterbi decoder can also be used through the Scheme function
`Gen_Viterbi'.  This function respects the parameters defined in the
variable `get_vit_params'.  Like other modules this parameter list is
an assoc list of feature name and value.  The parameters supported are:
`Relation'
     The name of the relation the decoder is to be applied to.

`cand_function'
     A function that is to be called for each item and that will
     return a list of candidates (with probabilities).

`return_feat'
     The name of a feature that the best candidate is to be returned in
     for each item in the named relation.

`p_word'
     The previous word to the first item in the named relation (only
     used when ngrams are the "language model").

`pp_word'
     The previous previous word to the first item in the named relation
     (only used when ngrams are the "language model").

`ngramname'
     the name of an ngram (loaded by `ngram.load') to be used as a
     "language model".

`wfstmname'
     the name of a WFST (loaded by `wfst.load') to be used as a
     "language model", this is ignored if an `ngramname' is also
     specified.

`debug'
     If specified more debug features are added to the items in the
     relation.

`gscale_p'
     Grammar scaling factor.

   Here is a short example to help make the use of this facility
clearer.

   There are two parts required for the Viterbi decoder: a set of
candidate observations and some "language model".  For the math to work
properly the candidate observations must be reverse probabilities (for
each candidate, the probability of the observation given that
candidate, rather than the probability of the candidate given the
observation).  These can be calculated as the probability of the
candidate given the observation divided by the probability of the
candidate in isolation.

   For the sake of simplicity let us assume we have a lexicon mapping
words to distributions of part of speech tags with reverse
probabilities, and a trigram called `pos-tri-gram' over sequences of
part of speech tags.  First we must define the candidate function
     (define (pos_cand_function w)
      ;; select the appropriate lexicon
      (lex.select 'pos_lex)
      ;; return the list of cands with rprobs
      (cadr
       (lex.lookup (item.name w) nil)))
   The returned candidate list would look something like
     ( (jj -9.872) (vbd -6.284) (vbn -5.565) )
   Our part of speech tagger function would look something like this
     (define (pos_tagger utt)
       (set! get_vit_params
             (list
              (list 'Relation "Word")
              (list 'return_feat 'pos_tag)
              (list 'p_word "punc")
              (list 'pp_word "nn")
              (list 'ngramname "pos-tri-gram")
              (list 'cand_function 'pos_cand_function)))
       (Gen_Viterbi utt)
       utt)
   This will assign the optimal part of speech tags to each word in
utt.


File: festival.info,  Node: Linear regression,  Prev: Viterbi decoder,  Up: Tools

Linear regression
=================

   The linear regression model takes models built by some external
package and computes a prediction as a weighted sum of feature values.
A model consists of a list of features.  The first should be the atom
`Intercept' plus a value.  Each following element of the list should
consist of a feature (*note Features::.) followed by a weight.  An
optional third element may be a list of atomic values.  If the result
of the feature is a member of this list the feature's value is treated
as 1, else it is 0.  This third argument allows an efficient way to map
categorical values into numeric values.  For example, from the F0
prediction model in `lib/f2bf0lr.scm', the first few parameters are
     (set! f2b_f0_lr_start
     '(
        ( Intercept 160.584956 )
        ( Word.Token.EMPH 36.0 )
        ( pp.tobi_accent 10.081770 (H*) )
        ( pp.tobi_accent 3.358613 (!H*) )
        ( pp.tobi_accent 4.144342 (*? X*? H*!H* * L+H* L+!H*) )
        ( pp.tobi_accent -1.111794 (L*) )
        ...
     )
   Note the feature `pp.tobi_accent' returns an atom, and is hence
tested with the map groups specified as third arguments.
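
   To make the mechanics concrete: the prediction for an item is the
intercept plus the sum of each feature's (mapped) value times its
weight, so with the first few terms above
     F0 = 160.584956
        + 36.0      * (value of Word.Token.EMPH)
        + 10.081770 * (1 if pp.tobi_accent is H*, else 0)
        + 3.358613  * (1 if pp.tobi_accent is !H*, else 0)
        + ...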

   Models may be built from feature data (in the same format as
`wagon') using the `ols' program distributed with the speech tools
library.


File: festival.info,  Node: Building models from databases,  Next: Programming,  Prev: Tools,  Up: Top

Building models from databases
******************************

   Because our research interests tend towards creating statistical
models trained from real speech data, Festival offers various support
for extracting information from speech databases, in a way suitable for
building models.

   Models for accent prediction, F0 generation, duration, vowel
reduction, homograph disambiguation, phrase break assignment and unit
selection have been built using Festival to extract and process various
databases.

* Menu:

* Labelling databases::    Phones, syllables, words etc.
* Extracting features::    Extraction of model parameters.
* Building models::        Building stochastic models from features


File: festival.info,  Node: Labelling databases,  Next: Extracting features,  Up: Building models from databases

Labelling databases
===================

   In order for Festival to use a database it is most useful to build
utterance structures for each utterance in the database.  As discussed
earlier, utterance structures contain relations of items.  Given such a
structure for each utterance in a database we can easily read in the
utterance representation and access it, dumping information in a
normalised way allowing for easy building and testing of models.

   Of course the level of labelling that exists, or that you are
willing to do by hand or using some automatic tool, for a particular
database will vary.  For many purposes you will at least need phonetic
labelling.  Hand labelled data is still better than auto-labelled data,
but that could change.  The size and consistency of the data is
important too.

   For this discussion we will assume labels for: segments, syllables,
words, phrases, intonation events and pitch targets.  Some of these can
be derived, some need to be labelled.  The process would not fail with
less labelling, but of course you wouldn't be able to extract as much
information from the result.

   In our databases these labels are in Entropic's Xlabel format,
though it is fairly easy to convert any reasonable format.

_Segment_
     These give phoneme labels for files.  Note that these labels
     _must_ be members of the phoneset that you will be using for this
     database.  Often phone label files may contain extra labels (e.g.
     beginning and end silence) which are not really part of the
     phoneset.  You should remove (or re-label) these phones
     accordingly.

_Word_
     Again these will need to be provided.  The end of the word should
     come at the last phone in the word (or just after).
     Pauses/silences should not be part of the word.

_Syllable_
     There is a chance these can be automatically generated from Word
     and Segment files given a lexicon.  Ideally these should include
     lexical stress.

_IntEvent_
     These should ideally mark accent/boundary tone type for each
     syllable, but this almost definitely requires hand-labelling.
     Also given that hand-labelling of accent type is harder and not as
     accurate, it is arguable that anything other than accented vs.
     non-accented can be used reliably.

_Phrase_
     This could just mark the last non-silence phone in each utterance,
     or before any silence phones in the whole utterance.

_Target_
     This can be automatically derived from an F0 file and the Segment
     files.  A marking of the mean F0 in each voiced phone seems to
     give adequate results.

   Once these files are created an utterance file can be automatically
created from the above data.  Note it is pretty easy to get the streams
right, but getting the relations between the streams is much harder.
Firstly, labelling is rarely accurate and small windows of error must
be allowed to ensure things line up properly.  The second problem is
that some label files identify point type information (IntEvent and
Target) while others identify segments (e.g.  Segment, Word etc.).
Relations have to know this in order to get it right.  For example it
is not right for all syllables between two IntEvents to be linked to
the IntEvent; only the syllable the IntEvent falls within should be.

   The script `festival/examples/make_utts' is an example Festival
script which automatically builds the utterance files from the above
labelled files.

   The script by default assumes a hierarchy in a database directory
of the following form.  Under a directory `festival/', where all
Festival specific database information can be kept, a directory
`relations/' contains a subdirectory for each basic relation (e.g.
`Segment/', `Syllable/', etc.), each of which contains the basic label
files for that relation.

   The following command will build a set of utterance structures
(including building the relations that link between these basic
relations).
     make_utts -phoneset radio festival/relations/Segment/*.Segment
   This will create utterances in `festival/utts/'.  There are a number
of options to `make_utts'; use `-h' to find them.  The `-eval' option
allows extra Scheme code to be loaded which may be called by the
utterance building process.  The function `make_utts_user_function'
will be called on each utterance created, so redefining it in database
specific loaded code will allow database specific fixes to the
utterances, as in the sketch below.
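
   A minimal sketch of such a redefinition (assuming, as its use
suggests, the function is passed each utterance and should return it;
the body here is purely illustrative):
     (define (make_utts_user_function utt)
       "Apply database specific fixes to each utterance as it is built."
       ;; e.g. correct known labelling errors here
       utt)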


File: festival.info,  Node: Extracting features,  Next: Building models,  Prev: Labelling databases,  Up: Building models from databases

Extracting features
===================

   The easiest way to extract features from a labelled database of the
form described in the previous section is by loading in each of the
utterance structures and dumping the desired features.

   Using the same mechanism to extract the features as will eventually
be used by models built from the features has the important advantage of
avoiding spurious errors easily introduced when collecting data.  For
example a feature such as `n.accent' in a Festival utterance will be
defined as 0 when there is no next accent.  Extracting all the accents
and using an external program to calculate the next accent may make a
different decision so that when the generated model is used a different
value for this feature will be produced.  Such mismatches in training
models and actual use are unfortunately common, so using the same
mechanism to extract data for training, and for actual use is
worthwhile.

   The recommended method for extracting features is to use the
Festival script `dumpfeats'.  It basically takes a list of feature
names and a list of utterance files and dumps the desired features.

   Features may be dumped into a single file or into separate files,
one for each utterance.  Feature names may be specified on the command
line or in a separate file.  Extra code to define new features may be
loaded too.

   For example suppose we wanted to save, for all segments in each
utterance in a set, the duration, phone name, and previous and next
phone names.
     dumpfeats -feats "(segment_duration name p.name n.name)" \
               -output feats/%s.dur -relation Segment \
               festival/utts/*.utt
   This will save these features in files named for the utterances they
come from, in the directory `feats/'.  The argument to `-feats' is
treated as a literal list only if it starts with a left parenthesis;
otherwise it is treated as the name of a file containing named features
(unbracketed).

   Extra code (for new feature definitions) may be loaded through the
`-eval' option.  If the argument to `-eval' starts with a left
parenthesis it is treated as an s-expression rather than a filename and
is evaluated.  If the argument to `-output' contains "%s" it will be
filled in with the utterance's filename; if it is a simple filename the
features from all utterances will be saved in that same file.  The
features for each item in the named relation are saved on a single
line.