Brief Vita
Dr. Marc Schröder is a Senior Researcher at DFKI and the leader of the DFKI speech group. Since 1998, he has been responsible at DFKI for building up technology and research in TTS and emotion-oriented computing. Within the FP6 NoE HUMAINE, Schröder built up the scientific portal http://emotion-research.net, which won the Grand Prize for the best IST project website in 2006. He is Editor of the W3C Emotion Markup Language specification, Coordinator of the FP7 STREP SEMAINE, and project leader of the nationally funded basic research project PAVOQUE and of the FP7 Network of Excellence SSPNet. Dr. Schröder has authored more than 50 scientific publications and serves as a programme committee member for many conferences and workshops.
SSPNet Tutorial: OpenMary Text-to-Speech
This tutorial (and the preceding plenary lecture) will take place on the 14th of July. Dr. Schröder will present the key properties of the OpenMary Text-to-Speech system, an open-source multilingual client-server platform for generating speech from text. OpenMary is written in 100% pure Java and currently supports British and American English, German, Turkish, and Telugu. Dr. Schröder will present both the runtime system and the toolkit for supporting new languages and building new voices. The plenary will present the concepts; the tutorial will allow participants to gain hands-on experience working with the OpenMary system. Programming skills are useful but not strictly required for the tutorial.
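To make the client-server idea concrete, the following Python sketch builds the kind of HTTP request a locally running MARY/OpenMary server typically answers. The port number and the parameter names (`INPUT_TEXT`, `OUTPUT_TYPE`, and so on) are assumptions based on common installations and should be checked against your own server's documentation page:

```python
from urllib.parse import urlencode

# Hypothetical sketch: the default port (59125) and the parameter names
# below are assumptions; verify them against your MARY server install.
MARY_URL = "http://localhost:59125/process"

def build_tts_request(text, locale="en_US", audio="WAVE"):
    """Return a URL asking the server to synthesize `text` as audio."""
    params = {
        "INPUT_TEXT": text,      # the text to be spoken
        "INPUT_TYPE": "TEXT",    # plain text input
        "OUTPUT_TYPE": "AUDIO",  # request rendered audio back
        "AUDIO": audio,          # audio container format
        "LOCALE": locale,        # language/voice locale
    }
    return MARY_URL + "?" + urlencode(params)

url = build_tts_request("Hello world")
print(url)
```

Fetching the resulting URL (for example with `urllib.request.urlopen`) would then return the synthesized audio from the server, if one is running.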
SSPNet Tutorial: Building emotion-oriented real-time interactive systems with the SEMAINE API
This tutorial (and the preceding plenary lecture) will take place on the 15th of July. The plenary talk will first describe the SEMAINE project and the Sensitive Artificial Listener system it builds. Dr. Schröder will then zoom in on the system integration level, describing the SEMAINE API as a cross-platform component integration framework based on the message-oriented middleware ActiveMQ. Using standard representation formats such as SSML, EMMA, BML or EmotionML wherever possible, the SEMAINE API aims to make it very easy to build new emotion-oriented real-time interactive systems from both old and new components with minimal integration overhead. The plenary will outline the concepts; the tutorial session will allow participants to build their own emotion-oriented systems using the SEMAINE API. Programming skills in Java or C++ are required to participate in the tutorial.
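To illustrate the integration pattern, here is a minimal, purely conceptual Python sketch of topic-based publish/subscribe messaging in the style of a message-oriented middleware such as ActiveMQ. The class, topic names, and message content are illustrative assumptions, not the actual SEMAINE API:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-memory stand-in for a broker such as ActiveMQ."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A component registers interest in a named topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every component subscribed to the topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
received = []

# A downstream component subscribes to a (hypothetical) user-state topic...
bus.subscribe("semaine.data.user.state", received.append)

# ...and an analysis component publishes its results to the same topic,
# without either component knowing about the other directly.
bus.publish("semaine.data.user.state", {"arousal": 0.7, "valence": -0.2})
```

The point of the pattern is that components are decoupled: old and new components only need to agree on topic names and message formats (such as EMMA or EmotionML documents), which is what keeps the integration overhead low.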
Brief Vita
Hakan Erdoğan
Sabanci University

Dr. Hakan Erdoğan is an assistant professor at Sabancı University in Istanbul, Turkey. He received his B.S. degree in Electrical Engineering and Mathematics in 1993 from METU, Ankara, and his M.S. and Ph.D. degrees in Electrical Engineering: Systems from the University of Michigan, Ann Arbor, in 1995 and 1999, respectively. His Ph.D. work developed algorithms to speed up statistical image reconstruction methods for PET transmission and emission scans, and resulted in three highly cited journal papers. He was with the Human Language Technologies group at IBM T.J. Watson Research Center, NY, from 1999 to 2002, where he worked on various internally funded and DARPA-funded projects. At IBM, he focused on the following problems of speech recognition: acoustic modeling, language modeling, and speech translation. He has been with Sabancı University since 2002. His research interests are in developing and applying probabilistic methods and algorithms for multimedia information extraction. Specifically, he is interested in sequence labeling, speech recognition, and learning algorithms. As of March 2010, Dr. Erdoğan has published 10 journal papers, 2 book chapters, and 50+ conference papers, and has co-edited a book. He holds 3 patents.
Erasmus Tutorial: Structured learning approaches for sequence labeling
This tutorial will take place during the last week of the workshop and is composed of five 90-minute presentations. Sequence labeling is a problem of critical importance in many fields, such as speech recognition, natural language processing, structure prediction in bioinformatics, and video analysis. In this tutorial, Dr. Erdoğan will present recent methods for solving this problem. Sequence labeling is a structured prediction problem: the predicted labels are interdependent, and the inference algorithm must take these dependencies into account. We will first discuss linear classification methods such as LDA, logistic regression, and SVMs, which are basic building blocks for structured prediction. Dr. Erdoğan will also explain the relation between generative and discriminative methods for classification. Next, the traditional approach of hidden Markov models for sequence labeling will be covered. Hidden Markov models (HMMs) are generative models of sequential observations; they have been used extensively for sequence labeling and remain a powerful approach for many problems. The next topic of interest is conditional random fields (CRFs). CRFs were introduced as a discriminative alternative to HMMs and have been shown to outperform HMMs on some problems. Their drawback is that they are computationally more demanding, and applying them to continuous data is challenging; the tutorial will explore how CRFs can be used for continuous data. After that, the use of max-margin methods for training CRFs will be explored. Max-margin methods, similar in spirit to SVMs, offer an effective way to train CRF parameters. Finally, some application areas of the sequence labeling problem will be covered, with results and comparisons among the different learning methods introduced in the seminar series. The tentative schedule is as follows:
  • August 2nd (10:30-12:00) Generative and discriminative linear classifiers: Fisher's discriminant, logistic regression and SVMs
  • August 3rd (10:00-11:30) Hidden Markov models, introduction and inference algorithms
  • August 4th (10:00-11:30) Conditional random fields, introduction and inference algorithms
  • August 5th (10:00-13:00) Max-margin training of CRFs; application areas of sequence labeling: speech recognition, bioinformatics, and natural language processing examples
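As a taste of the HMM inference material, the following self-contained Python sketch implements the classic Viterbi algorithm, which finds the most likely label sequence for a sequence of observations. The toy two-state model and its probabilities are invented purely for illustration:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for `obs` (log-space)."""
    # best[t][s] = log-prob of the best path ending in state s at time t
    best = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
             for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            # Choose the best predecessor state for s at time t.
            prev, score = max(
                ((p, best[t - 1][p] + math.log(trans_p[p][s]))
                 for p in states),
                key=lambda x: x[1])
            best[t][s] = score + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Backtrack from the best final state.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy model: hidden weather states emit observed activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

labels = viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p)
print(labels)  # -> ['Sunny', 'Rainy', 'Rainy']
```

The same dynamic-programming structure underlies inference in linear-chain CRFs; what changes is how the transition and emission scores are parameterized and trained.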
Brief Vita

SSPNet Tutorial: Elckerlyc Virtual Human Platform
This tutorial will introduce the virtual human platform Elckerlyc. Elckerlyc is a BML-compliant behavior realizer for generating multimodal verbal and nonverbal behavior for Virtual Humans (VHs). It is designed specifically for continuous (as opposed to turn-based) interaction with tight temporal coordination between the behavior of a VH and its interaction partners. Animation in Elckerlyc is generated using a mix of the precise temporal and spatial control offered by procedural motion and the naturalness of physical simulation.
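For orientation, a BML block consumed by a BML-compliant realizer such as Elckerlyc might look roughly like the following. The exact namespace and attribute set depend on the BML version supported, so treat this as an illustrative sketch rather than a verified Elckerlyc input:

```xml
<bml id="bml1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <!-- Speech plus a beat gesture, with the gesture stroke
       synchronized to a marker inside the speech text. -->
  <speech id="speech1" start="0">
    <text>Welcome <sync id="tm1"/> to the tutorial!</text>
  </speech>
  <gesture id="gesture1" lexeme="BEAT" stroke="speech1:tm1"/>
</bml>
```

The cross-behavior synchronization point (`speech1:tm1`) is exactly the kind of tight temporal coordination between modalities that the realizer resolves at runtime.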
