ICDP 2017

 
Keynote Speakers
As in previous years, a number of invited speakers will address various aspects of the use of imaging technologies for crime detection and prevention.

Dr Josh Davis, Reader, Applied Psychology Research Group, University of Greenwich, United Kingdom.
Searching for super-recognisers in the police and the general public

Facial recognition ability ranges from those suffering from prosopagnosia, or face blindness, to those labelled as super-recognisers, who seemingly never forget a face. In London, some Metropolitan Police Service (MPS) officers possessing super-recognition ability have identified very large numbers of suspects from CCTV images. A few, mainly working in a specialised Super-Recogniser Unit originally based at New Scotland Yard, identify hundreds per annum. They are assisted by a keyword-searchable database containing approximately 100,000 images from recent and historical unsolved crimes, and from memory are able to ‘snap’ occurrences of the same suspect(s) depicted committing multiple crimes. Davis has empirically tested the ability of some MPS super-recognisers, and those in other police forces, using tests of familiar and unfamiliar face recognition, simultaneous face matching, and a spotting-the-face-in-a-crowd test involving searching for actors depicted in videos of crowded tourist scenes. Current research based on the European Commission funded LASIE project is examining how super-recognisers can best be assisted by technology in this type of video review task. Publicity surrounding super-recogniser police officers has also led to over 5 million international members of the public taking Davis’ online “could you be a super-recogniser” test, with over 150,000 contributing to follow-up research. Large numbers of super-recognisers have been identified this way, many of them willing to continue with the research programme and thus providing a unique source of information about this fascinating ability.

Dr Davis joined the University of Greenwich in September 2008. He had previously been awarded a PhD from Goldsmiths, University of London in 2007 on "The Forensic Identification of Unfamiliar People in CCTV Images". He has published empirical research papers and book chapters examining human face recognition and eyewitness identification ability, and his first co-edited book, “Forensic Facial Identification: Theory and Practice of Identification from Eyewitnesses, Composites and CCTV”, was published in 2015.

Prof. Carlo Regazzoni, ISIP40, University of Genova, Italy
Modular incremental learning of abnormality models for embodied agents in non-stationary environments

In an Internet of Things scenario, where agents are normally expected to have a sufficient level of intelligence to autonomously adapt the way they perform stand-alone or cooperative tasks, it is important to be able to learn models that can detect abnormalities. Such models can, for example, allow attacks on the physical network of objects and on the operating environment to be detected. This lecture will discuss how different Bayesian models can be considered and jointly used, as arising from three main sources of abnormality that are closely related to each other. First, causal temporal interaction models can describe how, in normal conditions, the relative state of the agents in the environment and the observable components of the actions performed by the agent itself are related. Abnormalities can be detected as deviations from such Environment Centered (EC) models by an observer placed in the environment-centered reference system without access to internal agent variables. This clearly does not exclude that the agent itself can use such models to detect such abnormalities. Other sources of abnormality can be associated with dynamical interaction causality models that relate:

  • sensorial data representations of the embodied agent, obtained from observing the external context, with EC agent situation models; these can be defined as Context Sensing (CS) models
  • state representations obtained by observing the outcome of the inner agent decision system, and so associated with actuator states, with respect to EC agent situation models; these are here defined as Self Sensing (SS) models
These two latter models assume that private agent variables can be observed through eso- and endo-sensors, so that state estimates of the context and of the self can be generated from an internal agent viewpoint; together they constitute the private layer (PL) of agent Self Awareness, in contrast with the shared level (SL) of Self Awareness represented by EC models. Consequently, such models cannot be directly available outside the agent, unless the observer itself can use PL-estimated models of the agent (thus exhibiting a sort of empathy with it). The agent can in any case be provided with both the PL and the SL of Self Awareness, and so is able to detect deviations from normality at different abstraction levels. An external observer, instead, unless it has direct access to inner agent variables, can only rely on EC models and on the SL of Self Awareness, which corresponds to a more traditional situation awareness.
In this talk, machine learning methods will be discussed that are capable of incrementally learning the three different normality models from temporal data, by observing the agent while it performs under normal conditions, and of using such models to detect abnormalities. A real case, in which the agent is an autonomous driving cab used for critical infrastructure monitoring, will be used to show different methods based on Gaussian Processes, Variational learning and Generative Adversarial Networks.
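
The core idea running through the abstract above, learning what "normal" looks like and then flagging deviations from it, can be illustrated with a deliberately minimal sketch. This is not the speaker's actual framework; it simply fits a one-dimensional normality model of a hypothetical agent state with a Gaussian Process and flags observations that fall outside the predictive uncertainty band. All data, variable names and thresholds below are invented for illustration.

    # Minimal sketch (not the speaker's models): learn a "normality" model of an
    # agent's observed state over time with a Gaussian Process, then flag
    # abnormalities as observations outside the predictive uncertainty band.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical training data: time stamps and a 1-D observed agent state
    # (e.g. speed) recorded while the agent operates under normal conditions.
    t_train = np.linspace(0.0, 10.0, 100).reshape(-1, 1)
    x_train = np.sin(t_train).ravel() + 0.05 * np.random.randn(100)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(t_train, x_train)

    def is_abnormal(t, x, n_sigma=3.0):
        """Flag an observation x at time t whose deviation from the learned
        normality model exceeds n_sigma predictive standard deviations."""
        mean, std = gp.predict(np.array([[t]]), return_std=True)
        return abs(x - mean[0]) > n_sigma * std[0]

    print(is_abnormal(5.0, np.sin(5.0)))        # consistent with normality
    print(is_abnormal(5.0, np.sin(5.0) + 2.0))  # large deviation -> abnormal

The same learn-normality/flag-deviation pattern carries over to the richer EC, CS and SS models discussed in the talk, with the Gaussian Process replaced by variational or adversarial generative models where appropriate.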

Carlo S. Regazzoni received the Laurea degree in Electronic Engineering and the Ph.D. in Telecommunications and Signal Processing from the University of Genoa (UniGE) in 1987 and 1992, respectively. Since 2005 he has been Full Professor of Telecommunications Systems. From April 2017 to October 2017 he is at University Carlos III Madrid, holding one of the 2017/2018 UC3M/Santander Chairs of Excellence. He is currently (2015/2017) Vice President of the IEEE Signal Processing Society.

Dr. Eduardo Cermeño, Vaelsys, Spain.
Deep Learning Techniques and their Application in the Video Surveillance Real-World

Deep learning techniques are one of the topics attracting the most attention in the scientific community, but also in the industrial world. They are applied in a wide range of fields, from self-driving cars to music creation. Image recognition has been one of the main research areas to benefit from deep learning techniques, whether to recognise handwritten characters or a variety of objects. Video surveillance is no exception: new papers and products related to deep learning appear every day. In this talk, we first present relevant applications of these techniques by researchers and manufacturers, and then comment on their capacity to tackle real-world video surveillance challenges.
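
As a small, hedged illustration of the kind of off-the-shelf building block the talk refers to (and not of Vaelsys' own technology), the sketch below runs a pretrained deep object detector over an individual surveillance frame and keeps only confident "person" detections. The frame filename and score threshold are hypothetical.

    # Illustrative sketch: per-frame person detection with a pretrained deep
    # detector from torchvision, a common building block in video surveillance.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # COCO-pretrained Faster R-CNN detector.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def detect_people(frame: Image.Image, score_threshold: float = 0.8):
        """Return bounding boxes of detected persons (COCO label 1) in one frame."""
        with torch.no_grad():
            output = model([to_tensor(frame)])[0]
        keep = (output["labels"] == 1) & (output["scores"] > score_threshold)
        return output["boxes"][keep]

    # Usage on a single hypothetical surveillance frame:
    # boxes = detect_people(Image.open("frame_0001.jpg"))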

Eduardo Cermeño holds a Master of Engineering in Computer Science, a Master of Science in Artificial Intelligence and a Ph.D. in Computer Science and Telecommunication. His research interests are mainly focused on machine visual perception, a field in which he has several international publications. Dr. Cermeño is currently the CEO of Vaelsys, one of the European pioneers in developing video processing solutions. Vaelsys technology is used to control more than 40,000 cameras and provides video solutions to the main Spanish IT companies and police corps.

Dr Dimitrios Makris, Associate Professor, Kingston University London, and Chair of the IET's Vision and Imaging Network, United Kingdom.
The case for Transfer Learning in Visual Surveillance

Over the last few years, Computer Vision has achieved great progress, mainly driven by advances in Machine Learning. Research in Deep Learning and the availability of large general-purpose datasets have contributed to many successful applications. In visual surveillance, issues such as imaging quality, clutter, crowdedness, occlusions and perspective still pose significant challenges to developing reliable security applications, partly because general-purpose datasets may not provide sufficient training examples for particular surveillance scenarios. This presentation will advocate the suitability of Transfer Learning in Visual Surveillance for resolving such challenges, focusing on the task of Action/Activity Recognition. Given a model learnt in a specific “Source Domain” for a given task, Transfer Learning allows it to be adapted to a different “Target Domain” and/or to a different task, in a supervised, semi-supervised or unsupervised manner.
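
A minimal sketch of the supervised variant described above, assuming PyTorch/torchvision: a backbone pretrained on a general-purpose source domain (ImageNet) is adapted to a small, hypothetical set of surveillance action classes by freezing its features and retraining only the classification head. The class count and training data are placeholders, not part of the speaker's work.

    # Minimal supervised transfer-learning sketch: reuse an ImageNet-pretrained
    # backbone (source domain) and retrain only a new head for a small
    # surveillance-specific target domain.
    import torch
    import torch.nn as nn
    import torchvision

    NUM_TARGET_CLASSES = 5  # hypothetical number of surveillance action classes

    # Source-domain model: ResNet-18 pretrained on ImageNet.
    model = torchvision.models.resnet18(weights="DEFAULT")

    # Freeze source-domain features; only the new head will be learned.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classifier with a head sized for the target domain.
    model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def finetune_step(images, labels):
        """One supervised adaptation step on a batch of target-domain data."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

Unfreezing more layers, or adapting without target labels, moves this sketch towards the semi-supervised and unsupervised settings mentioned in the abstract.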

Dimitrios is an Associate Professor at the Digital Information Research Centre at Kingston University. His research interests are Computer Vision, Machine Learning and, in particular, Motion Analysis and Dimensionality Reduction. His work on learning scene semantic models and on multiple-camera surveillance systems has been widely acknowledged by the international research community, as reflected in the high number of citations. His more recent work is in the area of human motion analysis. He has been awarded a number of research projects as principal investigator, financially supported by EPSRC and TSB as well as by national (Ipsotek Ltd, Legion Ltd) and international companies (BARCO Ltd/Belgium, LG Electronics/Korea). Dimitrios was one of only two UK academics interviewed by ZDF/Discovery Channel for their documentary "2057 – The World in 50 years".

(c) Website: Sergio A Velastin/IET, 2009...