The organizing committee is pleased to announce the following keynote speakers for DICTA 2018:
Terence M. Peters
Dr. Terry Peters is a Scientist in the Imaging Research Laboratories at the Robarts Research Institute, London, ON, Canada, and Director of the Biomedical Imaging Research Centre at Western University. He is a Professor in the Departments of Medical Imaging and Medical Biophysics at Western University, London, Canada, and is the Graduate Chair of Biomedical Engineering. For the past 35 years, his research has focussed on applying computational hardware and software advances to medical imaging modalities in surgery and therapy. Beginning in 1978 at the Montreal Neurological Institute, Dr. Peters’ lab pioneered many of the image-guidance techniques and applications for image-guided neurosurgery. In 1997, he moved to the Robarts Research Institute at Western to establish a focus on image-guided surgery and therapy within the Robarts Imaging Research Laboratories. He has authored over 300 peer-reviewed papers and book chapters, and has mentored over 90 Master’s, PhD, and postdoctoral trainees. He is a Fellow of the Institute of Electrical and Electronics Engineers, the Canadian College of Physicists in Medicine, the Canadian Organization of Medical Physicists, the American Association of Physicists in Medicine, the Australasian College of Physical Scientists and Engineers in Medicine, the MICCAI Society, the Canadian Academy of Health Sciences, and the Royal Society of Canada. In addition, he received the Helmuth Prize for Achievement in Research from Western in 2012, and the MICCAI Society’s Enduring Impact Award in 2014.
Over the past half century, medical imaging has grown in sophistication and its use has evolved well beyond diagnosis. Much effort has been dedicated to minimizing the invasiveness of surgical interventions, largely through developments in medical imaging, surgical navigation, visualization, and display technologies. Image-guided procedures hold the promise of dramatically changing the way therapies are delivered to many organs. This presentation provides an overview of developments in image-guided interventions, with particular emphasis on the use of ultrasound as a non-invasive imaging modality, as well as the use of augmented reality to provide an optimal user interface between the surgeon and the instruments and images employed during a patient procedure.
Frédéric Dufaux
Frédéric Dufaux is a CNRS Research Director at the Laboratoire des Signaux et Systèmes (L2S, UMR 8506), CNRS – CentraleSupélec – Université Paris-Sud, where he heads the Telecom and Networking division. He is also Editor-in-Chief of Signal Processing: Image Communication. Frédéric received his M.Sc. in physics and Ph.D. in electrical engineering from EPFL in 1990 and 1994, respectively. He has over 20 years of research experience, previously holding positions at EPFL, Emitall Surveillance, Genimedia, Compaq, Digital Equipment, and MIT. Frédéric is a Fellow of the IEEE. He was Vice General Chair of ICIP 2014, and is Chair of the IEEE SPS Multimedia Signal Processing (MMSP) Technical Committee and Chair of the EURASIP Special Area Team on Visual Information Processing. He has been involved in the standardization of digital video and imaging technologies, participating in both the MPEG and JPEG committees, and is the recipient of two ISO awards for his contributions. His research interests include image and video coding, 3D video, high dynamic range imaging, visual quality assessment, and video transmission over wireless networks. He is the author or co-author of three books (“High Dynamic Range Video”, “Digital Holographic Data Representation and Compression”, “Emerging Technologies for 3D Video”), more than 120 research publications, and 17 patents issued or pending.
Producing truly realistic video is widely seen as the holy grail for further improving the Quality of Experience (QoE) of end users of multimedia services. Directions currently under investigation include higher spatial resolutions, higher frame rates, wide color gamut, and high dynamic range.
The human visual system can perceive a wide range of colors and luminous intensities, as present in everyday outdoor scenes ranging from bright sunshine to dark shadows. However, conventional imaging technologies can neither capture nor reproduce such a broad range of luminance. The objective of high dynamic range (HDR) imaging is to overcome these limitations, leading to more realistic videos and a greatly enhanced user experience.
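As an illustrative aside (not part of the talk itself), the gap between scene luminance and what a standard display can reproduce is typically bridged by tone mapping. The sketch below uses the well-known Reinhard global operator, L / (1 + L), to compress an arbitrary positive luminance range into the display range [0, 1); the luminance values are invented for illustration.

```python
import numpy as np

def reinhard_tone_map(luminance):
    """Compress HDR luminance (any positive range) into [0, 1)
    using the Reinhard global operator L / (1 + L)."""
    luminance = np.asarray(luminance, dtype=float)
    return luminance / (1.0 + luminance)

# Hypothetical scene luminances, from deep shadow (~0.01 cd/m^2)
# to direct sunlight (~10,000 cd/m^2): five orders of magnitude,
# far beyond what a conventional display covers.
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
ldr = reinhard_tone_map(hdr)
```

The operator is monotonic, so relative brightness ordering is preserved, while extreme highlights are compressed asymptotically toward 1; practical HDR pipelines use more elaborate, often locally adaptive, operators.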
HDR applied to still images has been an active field of research and development for many years, especially in photography. However, its extension to video content has only recently been considered. The effective deployment of HDR video technologies involves redefining common interfaces for end-to-end content delivery, which, in turn, entails many technical and scientific challenges. In this talk, I will discuss recent research activities covering several aspects of an HDR video system.