Speakers

»Building Mobile Experiences: Teaching Mobile HCI in a 12-Week Project-Based Course«

Frank Bentley, Motorola Mobility Research / MIT, USA

Abstract

For the past seven years Frank has been teaching a Mobile HCI class at MIT in the Comparative Media Studies department. Teaching one of the first mobile application classes (starting well before iPhones and App Stores), he has tried a variety of approaches over the years. The current class, as captured in the book Building Mobile Experiences, focuses on helping students create novel mobile applications that are evaluated in daily life. The course covers methods from generative research to paper prototyping, usability analysis, mobile programming, and multi-week field evaluations. The focus is on getting something built and in use in real-world settings as quickly as possible, to learn how these systems integrate with and enhance daily life. This talk will describe the class, several student projects, and how these same methods have been applied by Frank in his primary role as a Principal Scientist at Motorola Mobility.

Bio

Frank Bentley is a Principal Staff Research Scientist in the Core Research Group at Motorola Mobility. He also teaches Communicating with Mobile Technology, a Mobile HCI class at MIT. Frank's research centers on building new experiences to strengthen strong-tie social relationships and on helping people to better understand how complex aspects of their lives fit together. This work combines methods from anthropology, HCI, design, computer science, and business to understand how people adopt new technology in their lives, and the impact of using these new systems in longitudinal studies lasting months at a time.

»Affective-Aware Technology: What the Body Can Tell«

Nadia Berthouze, University College London, UK

Abstract

The talk will provide an introduction to the field of Affective HCI and discuss the importance of creating technology that can sense, respond to, and help to regulate our emotions. Particular attention will be dedicated to emerging full-body sensing technology and the directions the field is taking. Using examples from game technology, I will explore the possibilities that full-body technology offers as a way to read as well as to steer our own emotions. The discussion will be extended to other fields such as education and clinical intervention. Finally, I will discuss the possibilities that body movement and touch sensing technology offer on the move. I will conclude by presenting an overview of the MSc modules on Affective Computing and on Affective Interaction that I currently teach at the University College London Interaction Centre (UCLIC).

Bio

Dr Nadia Bianchi-Berthouze is Associate Professor in the UCL Interaction Centre. She received her PhD in computer science from the University of Milano. From 1996 to 2000 she was a postdoctoral fellow at the Electrotechnical Laboratory in Japan, working in the area of Kansei Engineering. From 2000 to 2006, she was a lecturer in computer science at the University of Aizu in Japan. Her main area of expertise is the study of body posture/movement as a modality for recognising, modulating, and measuring human affective states in HCI. She has published more than 100 papers in affective computing, HCI, and pattern recognition. She was awarded the 2003 Technical Prize from the Japanese Society of Kansei Engineering and was invited to give a TEDxStMartin talk (2012).

»Mutual Engagement in Mobile Artistic Experiences«

Nick Bryan-Kinns, Queen Mary University of London, UK

Abstract

Mutual engagement occurs when people creatively spark together. In this talk we suggest that mutual engagement is key to creating new forms of multi-user social music systems which will capture the public's heart and imagination. Through our ongoing research we propose a number of design features which support mutual engagement, and a set of techniques for evaluating mutual engagement by examining the minutiae of inter-person communication. We suggest how these techniques could be used in empirical studies, and how they might inform artistic practice to design and evaluate new forms of collaborative music making. Key to the success of our techniques is their deployment on mobile devices in contexts ranging from usability labs to music venues.

Bio

Dr. Nick Bryan-Kinns is Senior Lecturer in Computer Science (User Interface Design), Director of EECS Admissions (Undergraduate and Postgraduate), and the Deputy Dean for Science and Engineering at Queen Mary, University of London. He leads the Interactional Sound and Music Special Interest Group in the Centre for Digital Music and has published award-winning international journal and conference papers on his funded research on mutual engagement, cross-modal interaction, and tangible interfaces. He was a panel member for the National Science Foundation's CreativeIT funding panel, and provided expert consultation for the European Commission's funding of Creativity and ICT. He was involved in two research networks focusing on the art-computer cross-over and future design, chaired the Association for Computing Machinery (ACM) Creativity and Cognition conference 2010, and co-chaired the British Computer Society (BCS) international HCI conference 2006. Dr. Bryan-Kinns is a BCS Fellow, a recipient of the ACM Recognition of Service Award, and a recipient of the BCS Recognition of ten years' service. In 1998 he was awarded a Ph.D. in Human Computer Interaction from the University of London.

»Understanding Mobile & Local: From Check-Ins and Location Mash-Ups, to People's Ties to Their Cities«

Henriette Cramer, Mobile Life Centre, Sweden

Abstract

Location-based services have become mainstream. Millions of people share their location with others, track their journeys, leave local reviews, and get recommendations on where to go. The popularity of these services, the amount of local user-generated content, and the increasing integration of sensors in devices and our physical surroundings offer huge opportunities for new mobile interactions. This talk will present a selection of research at Mobile Life on people's usage of mobile location-based services; challenges of 'Research in the Large' when deploying apps and using user-generated data; and open opportunities based on research into people's perceptions of their cities and neighborhoods.

Bio

Henriette Cramer is a senior researcher and project lead at Mobile Life @ SICS in Stockholm, Sweden. Her research focuses on mobile location-based services and people's perceptions of cities and places; 'Research in the Large': using wide distribution channels, existing services, and mash-ups for research purposes; and people's interaction with applications and 'robotic things' that use data around, or about, them. She led the last phase of Mobile Life's Mobile 2.0 project, and currently leads its Citizen Dialogue project as well as SICS's activities within the European LIREC human-robot interaction project. Henriette has a PhD from the University of Amsterdam, focusing on people's responses to autonomous and adaptive systems, ranging from spam filters and recommenders to social robots.

More info: henriettecramer.com / mobilelifecentre.org / drawingthecity.com

»Manifestation of Emotions in the Brain and Non-Invasive Measurement Techniques«

Didem Gökçay, Middle East Technical University, Turkey

Abstract

In this talk, the underlying brain systems for the physical sensation and expression of emotions will be introduced first. The dimensional theory of emotions will be highlighted, and non-invasive measurements for two dimensions, valence and arousal, will be explained. Finally, a few application areas such as Internet chat and movie ratings will be brought to the attention of the audience for an interactive discussion.

Bio

Didem Gökçay graduated from the Electrical and Electronics Engineering Department of Middle East Technical University. She completed her PhD at the University of Florida, Department of Computer Engineering, as a Fulbright scholar. During her postdoctoral studies at the Laboratory of Neuroscience of Autism in San Diego, she was supported by a fellowship from the University of California, San Diego and the Salk Institute. Currently she is an assistant professor at the Middle East Technical University Informatics Institute. She is the founder of the METUNEURO Lab, which specializes in cognitive neuroscience and emotion research. The laboratory has widespread collaborations with several medical schools in Turkey and utilizes tools such as fMRI, fNIRS, EEG, thermal imaging, and EMG.

»Towards a Responsive Digital Environment«

Salih Ergüt, Avea Labs, Turkey

Abstract

Drastic improvements in computing and communication technologies have recently positioned the "mobile terminal" as a faithful partner of ordinary citizens, used in most aspects of their daily lives. The ubiquitous use of information technologies to get in touch with the customer, understand him/her, and take the necessary action on time has thus become a necessity, rather than an option, for operators that wish to differentiate their services in an increasingly competitive landscape. This paradigm shift presents challenges not only in the development of these "aware" technologies, but also in the training of skilled engineers who will contribute to these developments. These engineers will need to be proficient in both the technological and human dimensions of such solutions. In the first part of this talk, illustrative scenarios of the use of affective computing and innovative HCI in enhancing customer experience will be provided, along with possible evolution paths for the future. The multidisciplinary nature of such advances and the shortened time-to-market requirements of the business environment will be considered in proposing new facets of engineering training in affective computing and HCI technologies.

Bio

Dr. Salih Ergut received his B.S. degree from Bilkent University (Ankara, Turkey, 1998), and his M.S. and PhD degrees from Northeastern University (Boston, MA, 2000) and the University of California, San Diego (La Jolla, CA, 2010), respectively, all in electrical and computer engineering. He worked for Aware, Inc. (Bedford, MA), an ADSL company, for a year in 2000. In 2001 he joined Ericsson Wireless, Inc. in San Diego, CA, USA, where he worked for five years as part of an international team focusing on CDMA infrastructure. He then joined Turk Telekom (Istanbul, Turkey) in 2010, where he served as the R&D manager responsible for university collaborations. Since 2011 he has been working for Avea (Istanbul, Turkey), Turkey's sole GSM 1800 operator, and currently heads AveaLabs. Representing the innovative face of the company, AveaLabs consists of an Innovation Center, an Incubation Center, and a Customer Experience Center.

»Designing for Playful Emotional Action in Mobile Use Settings«

Ylva Fernaeus, KTH / Mobile Life Centre, Sweden

Abstract

What does it mean to design for playful emotional expressivity within a mobile use context? We present examples of how this research question has been addressed in various research projects at the Mobile Life Centre, ending with some broad orienting design challenges. The challenges are framed around the ways that human actions can support emotional expressivity: through physical manipulation, perception and sensing, social and contextually oriented action, and digitally mediated actions.

Bio

Ylva Fernaeus is a researcher at the Mobile Life Centre in Stockholm and an associate professor in Interaction Design at KTH, with a current research focus on the crafts involved in the making of interactive systems. Her PhD work was on the topic of social, bodily, and hands-on making of screen-based interactive systems with children, but recent projects have spanned many forms of creative activity around mobile, tangible, and robot technologies.

»Media As Interaction: A Human-Centered Perspective«

Alejandro Jaimes, Yahoo! Research Barcelona, Spain

Abstract

In recent years there has been a tremendous explosion in the creation and sharing of media, in particular with the advent of mobile devices. While most of the content circulating in social media is generated using mobile phones, there has also been a recent shift in consumption, driven by the advent of tablets. In this presentation, I will discuss the implications of this shift for the way we interact with our environment and with each other, and describe the types of insights that can be gained from large-scale analysis of social media. I will describe how such insights and observations can inform a human-centered design approach to innovation, what this means for mobile, and what role affect plays both in the generation of content and in its consumption.

Bio

Alejandro (Alex) Jaimes is Senior Research Scientist at Yahoo! Research, where he is leading new initiatives at the intersection of web-scale data analysis and user understanding (user engagement and improving user experience). Dr. Jaimes is the founder of the ACM Multimedia Interactive Art program, Industry Track chair for ACM RecSys 2010 and UMAP 2009, and panels chair for KDD 2009. He was program co-chair of ACM Multimedia 2008, co-editor of the IEEE Transactions on Multimedia Special Issue on Integration of Context and Content for Multimedia Management (2008), and a founding member of the IEEE CS Taskforce on Human-Centered Computing. His work has led to over 70 technical publications in international conferences and journals, and to numerous contributions to MPEG-7. He has been granted several patents, and serves on the program committees of several international conferences. He has been an invited speaker at Practitioner Web Analytics 2010, CIVR 2010, ECML-PKDD 2010 and KDD 2009 (Industry tracks), ACM Recommender Systems 2008 (panel), DAGM 2008 (keynote), the 2007 ICCV Workshop on HCI, and several others. Before joining Yahoo!, Dr. Jaimes was a visiting professor at U. Carlos III in Madrid and founded and managed the User Modeling and Data Mining group at Telefónica Research. Prior to that, Dr. Jaimes was Scientific Manager at IDIAP-EPFL (Switzerland), and was previously at Fuji Xerox (Japan), IBM TJ Watson (USA), IBM Tokyo Research Laboratory (Japan), Siemens Corporate Research (USA), and AT&T Bell Laboratories (USA). Dr. Jaimes received a Ph.D. in Electrical Engineering (2003) and an M.S. in Computer Science (1997) from Columbia U. in NYC.

»Emotions on the Go: Leveraging Mobile to Capture Affect Data in the Wild and Across the Globe«

Rana el Kaliouby, MIT Media Labs / Affectiva, USA

Abstract

Emotions influence every aspect of our lives, from our health and wellbeing to the decisions we make. Yet, researchers have struggled to capture emotion experiences in an objective, unobtrusive, and scalable way. This talk will present the latest technologies for measuring and communicating emotions, including wearable biosensors and automated facial analysis. We will show how leveraging mobile devices enables the crowdsourcing of affective data from every corner of the world and across demographics. Challenges of data collection from a mobile device will be discussed, as they present exciting research opportunities for young researchers in this field. The resulting database of affective responses also poses an exciting machine learning and data mining challenge for researchers who are interested in uncovering patterns about how emotional expressions are similar (and different) across cultures.

Bio

Rana el Kaliouby, Ph.D. is co-founder and Chief Technology Officer of the MIT start-up Affectiva, and a Research Scientist at the MIT Media Lab. Her vision is to bring emotion measurement and communication technologies to the masses, including the facial expression recognition technology (Affdex, FaceSense) for which she is the lead inventor. Her research is applied to a variety of applications ranging from advertising to autism. Rana was recently listed as one of MIT Technology Review's Top 35 Innovators under the age of 35 (TR35 award). The New York Times rated her research as one of the top 100 innovations of 2006, and her work has been featured in Forbes, Wired, The Boston Globe, TechCrunch, FastCompany, and more. She holds BSc and MSc degrees in computer science from the American University in Cairo and a Ph.D. from the Computer Laboratory, University of Cambridge.

»Speech Emotion Recognition on the Go«

Björn Schuller, Technische Universität München, Germany

Abstract

With the recently increasing use of speech processing technology in mobile HCI, the affective computing community is asked to provide suitably adapted solutions for the recognition of emotion in speech and language. Distributed technology, such as mobile Automatic Speech Recognition, offers great potential here, as massive amounts of data can be collected on the server side for centralized model updates. At the same time, however, this requires suitable encryption or reduction of information, as emotional content is often private. Further, mobile applications can be considerably more demanding with respect to acoustic conditions, with added noise, (dynamic) reverberation, coding artifacts, and packet loss. Limiting factors are often restrictions in computational power and memory on mobile devices, and in bandwidth on the transmission channel. In this light, the talk focuses on the peculiarities of making speech emotion recognition technology ready "for the go". These include distributed speech processing; robust and efficient pre-processing, enhancement, feature extraction, and vector quantization on the client front end; and active, semi-supervised, and unsupervised learning on the server back end. Further, avenues towards confidence measurement are shown, to better inform the application side and the adaptation of models.

Bio

Björn W. Schuller received his diploma in 1999 and his doctoral degree for his study on Automatic Speech and Emotion Recognition in 2006, both in electrical engineering and information technology from TUM (Munich University of Technology), where he recently finished his habilitation thesis on Intelligent Audio Analysis. At present, he is with JOANNEUM RESEARCH, Institute for Information and Communication Technologies in Graz, Austria, working in the Research Group for Remote Sensing and Geoinformation and the Research Group for Space and Acoustics. He has further been tenured as Senior Lecturer in Signal Processing and Machine Intelligence, heading the Intelligent Audio Analysis Group at TUM's Institute for Human-Machine Communication, since 2006. From 2009 to 2010 he lived in Paris, France, working with the CNRS-LIMSI Spoken Language Processing Group in Orsay on affective and social signals in speech. In 2010 he was also a visiting scientist at Imperial College London's Department of Computing, working on audiovisual behaviour recognition. In 2011 he was a guest lecturer at the Università Politecnica delle Marche (UNIVPM) in Ancona, Italy, and a visiting researcher at NICTA in Sydney, Australia.
Dr. Schuller is president-elect of the HUMAINE Association and a member of the ACM, IEEE, and ISCA, and has (co-)authored 4 books and more than 300 publications in peer-reviewed books (23), journals (43), and conference proceedings in the field, leading to more than 2,800 citations (h-index = 28). He serves as co-founding member and secretary of the steering committee, associate editor, and guest editor of the IEEE Transactions on Affective Computing; associate and repeated guest editor for Computer Speech and Language; associate editor for the IEEE Transactions on Systems, Man and Cybernetics: Part B Cybernetics and the IEEE Transactions on Neural Networks and Learning Systems; and guest editor for the IEEE Intelligent Systems Magazine, Speech Communication, Image and Vision Computing, Cognitive Computation, and the EURASIP Journal on Advances in Signal Processing. He is a reviewer for more than 50 leading journals and 30 conferences in the field, a workshop and challenge organizer, including the first-of-their-kind INTERSPEECH 2009 Emotion, 2010 Paralinguistic, 2011 Speaker State, and 2012 Speaker Trait Challenges and the 2011 and 2012 Audio/Visual Emotion Challenge and Workshop, and a programme committee member of more than 40 international workshops and conferences. His steering of and involvement in current and past research projects includes the European Community funded ASC-Inclusion STREP project as coordinator, the awarded SEMAINE project, and projects funded by the German Research Foundation (DFG) and companies such as BMW, Continental, Daimler, HUAWEI, Siemens, Toyota, and VDO. Advisory board activities comprise his membership as invited expert in the W3C Emotion Incubator and Emotion Markup Language Incubator Groups.