12.09.2012, 16:00–17:00, LRZ, Hörsaal (H.E.009)
Prof. Katsushi Ikeuchi, The University of Tokyo
e-Heritage, Cyber Archaeology, and Cloud Museum
Abstract: We have been conducting the e-Heritage project, which converts assets that form our cultural heritage into digital forms, by using computer vision and computer graphics technologies. We hope to utilize such forms 1) for preservation in digital form of our irreplaceable treasures for future generations, 2) for planning and physical restoration, using digital forms as basic models from which we can manipulate data, 3) for cyber archaeology, i.e., investigation of digitized data through computer analysis, and 4) for education and promotion through multimedia content based on the digital data. This talk briefly overviews our e-Heritage projects underway in Italy, Cambodia, and Japan. We will explain what hardware and software issues have arisen, how to overcome them by designing new sensors using recent computer vision technologies, and how to process the resulting data using computer graphics technologies. We will also explain how to use such data for archaeological analysis, and review new findings. Finally, we will discuss a new way to display such digital data by using mixed reality systems, i.e., head-mounted displays on site, connected to cloud computers.
Short Bio: Dr. Katsushi Ikeuchi is a Professor at the University of Tokyo. He received a Ph.D. degree in Information Engineering from the University of Tokyo in 1978. After working at the Massachusetts Institute of Technology’s AI Lab for two years, the Electrotechnical Laboratory, Japan, for five years, and Carnegie Mellon University for ten years, he joined the University of Tokyo in 1996. His research interests span computer vision, robotics, and computer graphics. He has received several awards, including the IEEE Marr Award, the IEEE RAS “most active distinguished lecturer” award, and the IEEE-CS ICCV Significant Researcher Award, as well as Shiju Houshou (the Medal of Honor with Purple Ribbon) from the Emperor of Japan. He is a fellow of IEEE, IEICE, IPSJ, and RSJ.
21.06.2012, 16:00–17:30, LRZ, Hörsaal (H.E.009)
Prof. David Roberts, Professor of Telepresence, Head of the Centre for Virtual Environments and Future Media, University of Salford
Reproducing the Face-to-Face Meeting in Telepresence
Abstract: A grand challenge shared between computer science and communication technology is reproducing the face-to-face meeting across a distance. At present, we are some way from reproducing many of the semantics of a face-to-face meeting. Furthermore, while we can reproduce some semantics in certain mediums and others in different ones, we are currently unable to reproduce most of them in any single medium. For example, while some mediums can show us what someone really looks like and others what or who they are really looking at, communicating both together has not yet been achieved at any reasonable quality across a reasonable distance. This talk explains some of the primary challenges, comparing our approaches to “telepresent” video conferencing, immersive virtual environments, and 3D video based tele-immersion.
Short Bio: Professor David Roberts' primary research interest is in creative group work encouraged or supported by immersive mediums. Towards this, he leads both the development of new technologies and studies of their use. Through a framework of social human communication that includes verbal and non-verbal communication and the role of objects and environment, he studies how people interact around simulated artefacts in environments enriched or joined through technology. Most of his work has focussed on telepresence, and his current focus is on combining VR and computer vision to enhance the naturalness of telecollaboration through free-view immersive 3D video.
19.03.2012, 14:00–15:30, LRZ, Hörsaal (H.E.009)
Dr. Bernhard Reitinger, Software Development Lead, Vexcel Imaging GmbH, Microsoft Photogrammetry
Map Generation – From the sensor to imagery
Abstract: Maps are ubiquitous in our daily lives, ranging from navigation and routing to trip planning and virtual tours on the internet. Although navigation maps are common nowadays, their generation requires substantial world-class research effort. This talk will give a tour from the very beginning of the processing chain – the digital aerial camera – to the very end – the map product known as Bing Maps. Unlike consumer cameras, the developed aerial sensor captures images with a resolution of 260 MPix per shot every 2 seconds. Our research team has developed algorithms that provide interactive visualization of large amounts of image data as well as efficient parallel processing into the final product. Techniques known from Microsoft’s PhotoSynth product, but also world-class dense matching and ortho generation methods, are part of the presented pipeline.
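The stated capture rate (260 MPix per shot, one shot every 2 seconds) implies a substantial raw data rate. A minimal back-of-envelope sketch, assuming 8-bit RGB pixels (a pixel format not specified in the abstract):

```python
# Back-of-envelope data-rate estimate for the aerial sensor described above.
MPIX_PER_SHOT = 260      # stated in the abstract
SECONDS_PER_SHOT = 2     # stated in the abstract
BYTES_PER_PIXEL = 3      # assumption: 8-bit RGB (not given in the abstract)

bytes_per_shot = MPIX_PER_SHOT * 1e6 * BYTES_PER_PIXEL
rate_mb_per_s = bytes_per_shot / SECONDS_PER_SHOT / 1e6

print(f"{bytes_per_shot / 1e6:.0f} MB per shot, {rate_mb_per_s:.0f} MB/s sustained")
```

Under these assumptions the sensor produces on the order of hundreds of megabytes per second, which is why the talk emphasizes interactive visualization of large image volumes and efficient parallel processing.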
Short Bio: Bernhard Reitinger received his master's degree from the Johannes Kepler University in Linz. In 2005, he finished his PhD at the Technical University of Graz, where he focused on medical visualization within virtual reality environments. After working as a post-doc researcher in Graz on augmented reality and computer vision, he was hired by Microsoft in 2007 to lead a research and development team focusing on computer vision and visualization for digital aerial cameras.