Keynotes
Keynote 1:
Prof. Aggelos K. Katsaggelos
He received the Diploma degree in electrical and mechanical engineering from the Aristotelian University of Thessaloniki, Greece, in 1979, and the M.S. and Ph.D. degrees in Electrical Engineering from the Georgia Institute of Technology in 1981 and 1985, respectively.
In 1985, he joined the Department of Electrical Engineering and Computer Science at Northwestern University, where he is currently a Professor and holder of the AT&T Chair. He was previously the holder of the Ameritech Chair of Information Technology (1997–2003). He is also the Director of the Motorola Center for Seamless Communications, a member of the Academic Staff of NorthShore University Health System, an affiliated faculty member of the Department of Linguistics, and he holds an appointment with the Argonne National Laboratory.
He has published extensively in the areas of multimedia signal processing and communications (over 250 journal papers, 500 conference papers and 40 book chapters) and he is the holder of 25 international patents. He is the co-author of Rate-Distortion Based Video Compression (Kluwer, 1997), Super-Resolution for Images and Video (Claypool, 2007), Joint Source-Channel Video Transmission (Claypool, 2007), and Machine Learning, Optimization, and Sparsity (Cambridge University Press, forthcoming). He has supervised 50 Ph.D. theses so far.
Among his many professional activities, Prof. Katsaggelos was Editor-in-Chief of the IEEE Signal Processing Magazine (1997–2002), a member of the Board of Governors of the IEEE Signal Processing Society (1999–2001), and a member of the Publication Board of the Proceedings of the IEEE (2003–2007), and he is currently a member of the Award Board of the IEEE Signal Processing Society. He is a Fellow of the IEEE (1998) and SPIE (2009) and the recipient of the IEEE Third Millennium Medal (2000), the IEEE Signal Processing Society Meritorious Service Award (2001), the IEEE Signal Processing Society Technical Achievement Award (2010), an IEEE Signal Processing Society Best Paper Award (2001), an IEEE ICME Paper Award (2006), an IEEE ICIP Paper Award (2007), an ISPA Paper Award (2009), and a EUSIPCO Paper Award (2013). He was a Distinguished Lecturer of the IEEE Signal Processing Society (2007–2008).
Learning for Future Video: Learning has made it possible to unleash the power of data. We have moved away from the detailed modeling of a system or a phenomenon of interest thanks to the abundance of data as well as the huge improvements in processing power. With approaches like dictionary learning we can discover linear relationships between input and output, while recent advances in deep learning have made it possible to discover non-linear relationships. As one of the examples in this talk, we discuss the application of dictionary learning and deep learning to the video super-resolution problem. We describe a multiple-frame algorithm based on dictionary learning and motion estimation. We further describe the use of a convolutional neural network that is trained on both the spatial and temporal dimensions of videos to enhance their resolution. We demonstrate the effectiveness of these approaches experimentally and conclude with future research directions on the topic of learning.
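As a rough illustration of that second idea (this is not the speaker's actual model; every layer size, name and shape below is an assumption), the following PyTorch sketch maps a short stack of low-resolution frames to one higher-resolution frame, using 3D convolutions so that both spatial and temporal information contribute:

# Minimal sketch, assuming grayscale clips of 5 frames and a 2x upscale;
# the architecture is illustrative only, not the one described in the talk.
import torch
import torch.nn as nn

class VideoSRNet(nn.Module):
    def __init__(self, num_frames=5, channels=32, scale=2):
        super().__init__()
        # 3D convolutions mix information across space and time.
        self.features = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Collapse the temporal axis, then upsample spatially with pixel shuffle.
        self.collapse = nn.Conv3d(channels, channels, kernel_size=(num_frames, 1, 1))
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, frames):               # frames: (batch, 1, T, H, W)
        x = self.features(frames)
        x = self.collapse(x).squeeze(2)      # -> (batch, channels, H, W)
        return self.upsample(x)              # -> (batch, 1, scale*H, scale*W)

if __name__ == "__main__":
    lr_clip = torch.rand(1, 1, 5, 32, 32)    # five 32x32 low-resolution frames
    sr_frame = VideoSRNet()(lr_clip)
    print(sr_frame.shape)                    # torch.Size([1, 1, 64, 64])

In practice such a network would be trained on pairs of downsampled and original clips with a pixel-wise loss; the motion-estimation variant mentioned above would instead align neighbouring frames explicitly before the frames are combined.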
Keynote 2:
Dr. Eduardo Romero, Director of the Centro de Telemedicina
Universidad Nacional de Colombia, Colombia
Towards Finding Complex Patterns in Medical Images: In this information era, many new analysis techniques have changed our understanding of many complex problems. This lecture is about different computational approaches aimed at uncovering hidden knowledge in three use cases: computational anatomy for discovering patterns in neurodegenerative diseases; motion analysis and the development of gait models to understand motion patterns in Parkinson's disease; and the construction of analysis tools for digital pathology.
Keynote 3:
Prof. Sergio A Velastin was until recently a research professor at the University of Santiago de Chile and is now Conex Research Professor in the Applied Artificial Intelligence Research Group at the Universidad Carlos III in Madrid. He trained and worked for most of his life in the UK, where he became Professor of Applied Computer Vision at Kingston University and was also Director of the Digital Imaging Research Centre. He is a Fellow of the Institution of Engineering and Technology (IET) and a Senior Member of the IEEE, where he served on the Board of Governors of the IEEE Intelligent Transportation Systems Society (IEEE-ITSS). Sergio has worked for many years in the field of artificial vision and its application to improving public safety, especially in public transport systems. He co-founded Ipsotek Ltd and has worked on projects with transport authorities in London, Rome, Paris and elsewhere, within several EU Framework Programme projects.
The potential of fusion in computer vision applications:
There are many computer vision applications that can benefit from the fusion of data and information at various levels of processing. For example, even for a monocular image it is possible to extract different image features such as edges, local neighborhood histograms, texture, transforms (Fourier, wavelet, …), etc. and it is important to define how these heterogeneous features could be combined to aid image interpretation. In the context of multiple cameras (possibly of different image modalities such as visible light, infrared, 3D) providing different views of the same phenomena, we need methods to relate the data obtained from each sensor into common frames of reference (registration) and then to combine such data in ways that take into account sensor characteristics and noise levels. Typical scenarios in computer vision include multimodal medical diagnosis, multicamera visual surveillance and multisensor ambient intelligence applications. The talk will give a number of examples of how fusion is being used in computer vision by various research teams in different parts of the world.
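As a toy example of that last point, the sketch below (my own illustration, not a method from the talk; the noise levels are assumed known) fuses two co-registered images of the same scene with inverse-variance weights, so the noisier sensor contributes less to the result:

# Minimal sketch of noise-aware, pixel-level fusion of two registered images.
import numpy as np

def fuse_inverse_variance(img_a, img_b, sigma_a, sigma_b):
    """Combine two co-registered images given each sensor's noise std. dev."""
    w_a = 1.0 / sigma_a ** 2
    w_b = 1.0 / sigma_b ** 2
    return (w_a * img_a + w_b * img_b) / (w_a + w_b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)   # toy ground truth
    visible = scene + rng.normal(0.0, 0.05, scene.shape)     # low-noise sensor
    infrared = scene + rng.normal(0.0, 0.20, scene.shape)    # high-noise sensor
    fused = fuse_inverse_variance(visible, infrared, 0.05, 0.20)
    for name, img in (("visible", visible), ("infrared", infrared), ("fused", fused)):
        print(name, "RMSE:", round(float(np.sqrt(np.mean((img - scene) ** 2))), 4))

Registration itself (estimating the geometric transform that brings the two views into a common frame of reference) is usually the harder problem in real systems and is typically solved first, for example by feature matching or mutual-information maximisation.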
Keynote 4:
Prof. Domingo Mery
Computer Vision for X-Ray Testing:
X-ray imaging has been developed not only for use in medical imaging of human beings, but also for materials and objects, where the aim is to analyze, non-destructively, those inner parts that are undetectable to the naked eye. Thus, X-ray testing is used to determine whether a test object deviates from a given set of specifications. Typical applications are the analysis of food products, screening of baggage, inspection of automotive parts, and quality control of welds. In order to achieve efficient and effective X-ray testing, automated and semi-automated systems are being developed to execute this task. This talk presents an introduction to computer vision algorithms for industrially relevant applications of X-ray testing. There are some application areas, like casting inspection, where automated systems are very effective; other areas, such as baggage screening, where human inspection is still used; certain areas, like weld and cargo inspection, where the process is semi-automatic; and areas of ongoing research, including food analysis, where processes are beginning to be characterized by the use of X-ray imaging. We will provide supporting material available online, including a database of X-ray images and a Matlab toolbox, together with some examples.
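As a toy illustration of the fully automated case (this is my own sketch, not the toolbox mentioned above; all parameters and thresholds are assumptions), small bright anomalies in a radiograph can be found classically by subtracting a median-filtered background and thresholding the residual:

# Classical defect-detection sketch for a grayscale X-ray image.
import numpy as np
from scipy import ndimage

def detect_defects(xray, background_size=15, k=4.0, min_area=5):
    """Return (binary defect map, number of defect regions)."""
    background = ndimage.median_filter(xray, size=background_size)
    residual = xray - background                      # anomalies stand out here
    threshold = residual.mean() + k * residual.std()  # simple global threshold
    mask = residual > threshold
    labels, n = ndimage.label(mask)
    # Discard tiny connected components that are probably noise.
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(areas >= min_area) + 1
    return np.isin(labels, keep), len(keep)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = rng.normal(0.5, 0.02, (128, 128))   # synthetic homogeneous casting
    image[40:44, 60:64] += 0.3                  # inject one small "defect"
    defect_map, count = detect_defects(image)
    print("defect regions found:", count)

Real inspection systems add calibration, multiple views and, increasingly, learned classifiers on top of such simple candidate detectors.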
Keynote 5:
Prof. Ebroul Izquierdo PhD, MSc, CEng, FIET, SMIEEE, MBMVA, is Chair of Multimedia and Computer Vision and Head of the Multimedia and Vision Group in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. For his thesis on the numerical approximation of algebraic-differential equations, he received the Dr. Rerum Naturalium (PhD) from the Humboldt University, Berlin, Germany. He has been a senior researcher at the Heinrich-Hertz Institute for Communication Technology (HHI), Berlin, Germany, and at the Department of Electronic Systems Engineering of the University of Essex.
Face Recognition: from forensics and machine vision to understanding the visual system of human super-recognizers: Automated face recognition is one of the oldest and probably best understood tasks in computer vision. Due to the plethora of applications, it is also the basis for a fast-evolving technology drawing attention from researchers and practitioners in several fields, including forensics, biometrics, visual information retrieval, automated surveillance, internet-driven social networking and psychology. Despite its maturity, face recognition is still regarded as one of the most challenging tasks in computer vision, since in most critical applications it requires extremely high accuracy under very adverse conditions. Indeed, in most cases the available input information undergoes significant variations in image quality, scale, orientation, noise and distortions induced by other faces or objects in the same image. This makes an already difficult problem even harder.
In this talk, important aspects of face recognition and a few crucial applications will be presented. Initially, the state of the art in face recognition technology will be outlined. Then we will refer to essential mathematical and statistical methods used to achieve highly accurate face recognition, as well as the advantages and disadvantages of available algorithmic solutions. The main open technical challenges and some important generic aspects of face recognition will be discussed, with a focus on the lack of robustness under adverse conditions in real-world automated surveillance applications. The usefulness of face recognition as a tool to help forensic investigators mine the vast amounts of data involved in crime solving will be presented. Furthermore, examples of recent technological developments in two specific application scenarios will be given. The first relates to recent developments in advanced linear algebra that promise to deliver higher accuracy in face alignment and recognition. The second introduces new discoveries coming from the human sciences (psychology): the understanding and use of super-recognizers' skills for very robust face recognition.
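To make the linear-algebra angle concrete, here is a deliberately simple sketch (an assumed example, not a method from the talk): the classical eigenfaces approach, which projects aligned face images onto a PCA subspace and classifies by nearest neighbour there:

# Eigenfaces-style recognizer: PCA subspace plus nearest-neighbour matching.
import numpy as np

class Eigenfaces:
    def __init__(self, n_components=20):
        self.n_components = n_components

    def fit(self, faces, labels):
        """faces: (n_samples, n_pixels) array of flattened, aligned face images."""
        self.mean_ = faces.mean(axis=0)
        centred = faces - self.mean_
        # Principal directions from the SVD of the centred data matrix.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        self.components_ = vt[: self.n_components]
        self.train_proj_ = centred @ self.components_.T
        self.labels_ = np.asarray(labels)
        return self

    def predict(self, faces):
        proj = (faces - self.mean_) @ self.components_.T
        # Nearest training face in the eigenface subspace.
        dists = np.linalg.norm(proj[:, None, :] - self.train_proj_[None], axis=2)
        return self.labels_[dists.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prototypes = rng.random((3, 32 * 32))        # three synthetic "identities"
    train = np.repeat(prototypes, 5, axis=0) + rng.normal(0, 0.05, (15, 1024))
    labels = np.repeat(np.arange(3), 5)
    test = prototypes + rng.normal(0, 0.05, prototypes.shape)
    print(Eigenfaces(n_components=10).fit(train, labels).predict(test))

The gap between such a textbook method and what the adverse conditions above demand (pose, illumination, occlusion, low-quality surveillance footage) is exactly what more recent alignment and recognition techniques try to close.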
Keynote 6:
Dr. Luciana Porcher Nedel
Towards "calm interfaces" using a network of sensors and actuators: Better than a friendly and natural human-computer interface is "no interface". Let's imagine the time when computers will anticipate our desires and intentions and help us to solve problems without any explicit command. In this still futuristic scenario, much more than tools, computers will be seen as personal assistants that know their owners, needs and tasks to accomplish. This concept is being explored since the 90's, but we are still far from a good solution. In this talk, we will discuss the idea of "calm interfaces" as well as our filling about its implementation through the use of a network of sensors and actuators. Ambient and personal sensors help the computer to learn about users while actuators are used to communicate to the human. Some preliminary research results will be shown to illustrate our ideas on the future of human-computer interaction.