APSIPA ASC 2020: Keynote Speeches
Context-Aware Language Processing
Mari Ostendorf
School of Electrical and Computer Engineering, University of Washington, USA
Abstract
Automatic processing of human language (both text and speech) is playing increasingly important and diverse roles in technology, from enabling natural communication with devices to learning from social media. Language processing is challenging because word use is highly dependent on context. New methods of neural modeling that learn embedded word representations from neighboring words have enabled substantial advances on a variety of tasks, including language understanding, translation and generation. However, there are other types of context that are easily available for many forms of language: genre or speaking style, author or speaker index, location, social context, etc. This talk describes different neural architectures for contextualizing language that involve learning embedded representations of context as a separate factor in the model. Looking at a variety of language processing problems, we explore different mechanisms for representing and leveraging context, showing that explicit representation of context both improves performance and provides insights into characteristics of language associated with different contexts.
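The idea of treating context as a separately learned factor can be sketched as follows. This is a minimal illustration, not the architecture from the talk: the dimensions, tables and function below are hypothetical, and a context id here could stand for a speaker, genre or community label.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, N_CONTEXTS, D_WORD, D_CTX = 100, 5, 8, 4

# Hypothetical learned parameter tables: one embedding per word,
# and a separate embedding table for the context factor
# (e.g., a speaker, genre or community index).
W_word = rng.normal(size=(VOCAB, D_WORD))
W_ctx = rng.normal(size=(N_CONTEXTS, D_CTX))

def contextualized_input(word_ids, ctx_id):
    """Concatenate each word embedding with the learned context
    embedding, so context enters the model as an explicit,
    inspectable factor rather than being absorbed into word vectors."""
    words = W_word[word_ids]                               # (T, D_WORD)
    ctx = np.broadcast_to(W_ctx[ctx_id], (len(word_ids), D_CTX))
    return np.concatenate([words, ctx], axis=-1)           # (T, D_WORD + D_CTX)

x = contextualized_input([3, 17, 42], ctx_id=2)            # shape (3, 12)
```

Because the context embedding is a distinct parameter block, it can be examined directly, which is one way an explicit context representation yields insight into how language varies across contexts.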
Speaker's Biography
Mari Ostendorf joined the University of Washington in 1999. She is an Endowed Professor of System Design Methodologies in the Electrical & Computer Engineering Department, an Adjunct Professor in Linguistics and in Computer Science & Engineering, and Associate Vice Provost for Research.
She is a Fellow of the IEEE, ISCA and ACL, a former Australian-American Fulbright Scholar, and a member of the Washington State Academy of Sciences. In 2017, Prof. Ostendorf served as a faculty advisor for the student team winning the inaugural Alexa Prize competition to build a socialbot, and conversational AI is a focus of her current work.
Her research explores dynamic models for understanding and generating speech and text, particularly in multi-party contexts, and it contributes to a variety of applications, from education to clinical and scientific information extraction.
The New Era of Image Coding
David Taubman
School of Electrical Engineering and Telecommunications, UNSW, Australia
Abstract
For three decades, the original JPEG standard has held an unassailable position as the codec of choice for consumer imaging applications, while many professional applications have relied upon JPEG 2000 and ProRes. With the growing importance of HDR and the advent of new methods and imaging modalities, however, image compression has become a rapidly evolving field. In this talk, the speaker will aim to provide an informative and balanced perspective on recent and emerging technologies and standards. At one extreme, there exist low-complexity codecs such as the new High Throughput JPEG 2000 (a.k.a. JPH) standard, which can outperform JPEG in both coding efficiency and speed while offering rich features that are increasingly important. At the other extreme, with much higher complexity, machine learning techniques have reached coding efficiencies that rival state-of-the-art traditional methods. The speaker will also discuss hybrid approaches that combine machine learning techniques with more traditional structures, along with the challenges introduced by new imaging modalities such as light fields and digital holography.
Speaker's Biography
Professor Taubman is with the School of Electrical Engineering and Telecommunications at the University of New South Wales, where he is currently Deputy Head of School (Research). He is also Co-Director and Founder of Kakadu Software Pty Ltd. Before joining UNSW at the end of 1998, he spent four years at Hewlett-Packard's research laboratories in Palo Alto, California. He received the B.S. and B.E. (Electrical) degrees in 1986 and 1988 from the University of Sydney, Australia, and the M.S. and Ph.D. degrees in 1992 and 1994 from the University of California at Berkeley. Professor Taubman has contributed extensively to the JPEG 2000 suite of standards, developing the core coding algorithms for both Part-1 and Part-15, as well as the JPIP standard (Part-9) for interactive image communication. His contributions to scalable video compression are also widely known. He is the author, with Michael Marcellin, of the book "JPEG2000: Image Compression Fundamentals, Standards and Practice" and author of the popular "Kakadu" software tools for JPEG 2000 developers. He is the recipient of two IEEE Best Paper awards: for the 1996 paper, "A Common Framework for Rate and Distortion Based Scaling of Highly Scalable Compressed Video," and for the 2000 paper, "High Performance Scalable Image Compression with EBCOT". He is a Fellow of the IEEE and the IEAust. His research interests include scalable image and video compression; interactive, robust and efficient communication of multimedia content; motion and depth modelling and estimation; and statistical inverse problems.
Brain-Inspired Computation for Deep, Incremental Learning of Spatio-Temporal Data and for Knowledge Acquisition
Nikola Kasabov
Fellow IEEE, Fellow RSNZ, Fellow INNS College of Fellows, DV Fellow RAE UK
Director, Knowledge Engineering and Discovery Research Institute
Professor, Auckland University of Technology, Auckland, New Zealand, nkasabov@aut.ac.nz
Advisory/Visiting Professor SJTU and CASIA China, RGU UK, UZH/ETH Zurich, USI Lugano
Honorary Professor of Teesside University, UK
Abstract
The talk demonstrates that the third generation of artificial neural networks, brain-inspired spiking neural networks (SNN), are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of knowledge evolution over time as new data arrive. As in the brain, these SNN models need not be restricted in the number of layers or the number of neurons per layer, as they adopt the self-organising learning principles of the brain. The talk covers:
1. Algorithms for deep, incremental and potentially “life-long” learning in SNN.
2. Algorithms for knowledge representation and for tracing the knowledge evolution in SNN over time from incoming data.
3. Selected applications.
The material is illustrated on an exemplar SNN architecture, NeuCube, built according to a 3D spatial template of the brain (free and open-source software, along with a cloud-based version, is available from www.kedri.aut.ac.nz/neucube). Case studies are presented of the modelling and knowledge representation of brain signals and environmental streaming data using incremental and transfer learning algorithms. These include: predictive modelling of EEG and fMRI signals measuring cognitive processes and response to treatment; prediction of Alzheimer's disease; understanding depression; brain-computer interfaces; and predicting environmental hazards and extreme events. Brain-inspired SNN systems not only achieve better classification and prediction accuracy on spatio-temporal data, but also allow meaningful knowledge to be extracted, thus opening a way of building open and transparent AI. Reference: N. Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019,
https://www.springer.com/gp/book/9783662577134
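The spiking neurons used in architectures such as NeuCube are considerably richer than this, but the basic unit of any SNN can be sketched as a leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest, integrates input current, and emits a spike on crossing a threshold. The function name and parameter values below are illustrative, not NeuCube's actual model or API.

```python
def lif_spikes(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron over a
    sequence of input currents; return a binary spike train."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt * (-v + i_t) / tau      # leaky integration step
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                 # reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold current yields a regular spike train;
# with these illustrative parameters the neuron fires every 4 steps.
train = lif_spikes([2.0] * 100, tau=5.0)
```

Because information is carried in the timing of such spikes rather than in dense activations, spike trains like this one can be inspected directly, which underlies the knowledge-tracing capability described in the abstract.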
Speaker's Biography
Professor Nikola Kasabov is a Fellow of the IEEE, a Fellow of the Royal Society of New Zealand, a Fellow of the INNS College of Fellows, and a Distinguished Visiting Fellow of the Royal Academy of Engineering, UK. He is the Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland, and Professor at the School of Engineering, Computing and Mathematical Sciences at Auckland University of Technology, New Zealand. Kasabov is the 2019 President of the Asia Pacific Neural Network Society (APNNS) and Past President of the International Neural Network Society (INNS). He is a member of several technical committees of the IEEE Computational Intelligence Society and was a Distinguished Lecturer of the IEEE (2012-2014). He is Editor of the Springer Handbook of Bio-Neuroinformatics, the Springer Series of Bio- and Neuro-systems, and the Springer journal Evolving Systems. He is Associate Editor of several journals, including Neural Networks, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cognitive and Developmental Systems, Information Sciences, and Applied Soft Computing. Kasabov holds MSc and PhD degrees from TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics and neuroinformatics. He has published more than 620 publications, which are highly cited internationally. He has extensive experience at academic and research organisations in Europe and Asia, including TU Sofia, Bulgaria; the University of Essex, UK; the University of Otago, NZ; Advisory Professor at Shanghai Jiao Tong University and CASIA, China; Visiting Professor at ETH/University of Zurich and Robert Gordon University, UK; and Honorary Professor of Teesside University, UK. Prof.
Kasabov has received a number of awards, among them: Doctor Honoris Causa from Obuda University, Budapest; the INNS Ada Lovelace Meritorious Service Award; the Neural Networks Best Paper Award for 2016; the APNNA ‘Outstanding Achievements Award’; the INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’; an EU Marie Curie Fellowship; the Bayer Science Innovation Award; the APNNA Excellent Service Award; the RSNZ Science and Technology Medal; the 2015 AUT Medal; and Honorary Membership of the Bulgarian, Greek and Scottish societies for computer science. More information about Prof. Kasabov can be found at:
http://www.kedri.aut.ac.nz/staff