
Master Projects Gallery

This page shows a selection of Master projects that have been completed in our research group. Please contact Prof. Zwicker if you are interested in pursuing a project with us!

Hand-Held 3D Light Field Photography and Applications

Siavash Bigdeli, Spring 2014


This thesis proposes a method to acquire 3D light fields with a hand-held camera and describes several computational photography applications facilitated by this approach. The input is an image sequence from a camera translating along an approximately linear path with limited camera rotation; such a sequence can be acquired easily in a few seconds by moving a hand-held camera. The inputs are then resampled into regular 3D light fields by aligning them in the spatio-temporal domain, and a technique for high-quality disparity estimation from light fields is proposed. Finally, the thesis develops several applications, including digital refocusing and synthetic aperture blur, foreground removal, and selective colorization.
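The refocusing idea can be illustrated with a minimal NumPy sketch (this is not the thesis's implementation): assuming grayscale views from equally spaced camera positions along the translation path, shifting each view in proportion to its offset from the central camera and averaging brings one disparity plane into focus while blurring everything else.

```python
import numpy as np

def refocus(views, disparity):
    """Synthetic-aperture refocus: shift each view in proportion to its
    offset from the central camera, then average.

    views: list of 2D grayscale images (H x W) from a linear camera path.
    disparity: horizontal shift in pixels per camera step; scene points
    with this disparity stay aligned across views and come into focus.
    """
    n = len(views)
    center = n // 2
    acc = np.zeros_like(views[0], dtype=float)
    for i, v in enumerate(views):
        shift = int(round((center - i) * disparity))
        acc += np.roll(v, shift, axis=1)  # wrap-around shift; crude but simple
    return acc / n
```

A point at the chosen disparity lands on the same pixel in every shifted view and stays sharp; points at other depths spread out, producing the aperture blur.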

Design and Development of an Endoscope Calibration Method and Unit for Laparoscopic Image Guidance Systems

Marius Schwalbe, Fall 2013


The introduction of laparoscopic techniques has revolutionized modern surgery. Although the advantages of laparoscopic surgery are evident, many drawbacks limit this patient-friendly technique. Among them, the most important is the loss of three-dimensional (3D) viewing of the surgical field and, consequently, the loss of spatial understanding. In order to reduce these limitations, a computer-aided system for laparoscopic surgery is being developed at the ARTORG Center of the University of Bern. The main aim of the system is to provide the surgeon with a 3D model of the patient's organ as an augmented reality overlay on the endoscope image. Endoscope calibration plays a fundamental role in achieving this. The goal of this thesis was to develop an endoscope calibration method and unit which can be used intra-operatively by the surgeon. Experiments verified that the endoscope calibration is accurate and reproducible. Finally, the feasibility of calibrating during surgery was proven by performing a fast and accurate calibration during an operation at the Inselspital in Bern.
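The standard figure of merit for verifying a calibration like this is the reprojection error: project known 3D points through the calibrated pinhole model and measure how far they land from their observed image positions. The sketch below (a generic pinhole model, not the thesis's specific method) shows that check; the intrinsics `K` and pose `R`, `t` are made-up example values.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N x 3) with the pinhole model x ~ K [R|t] X."""
    Xc = X @ R.T + t               # world -> camera coordinates
    x = Xc @ K.T                   # camera -> homogeneous image coordinates
    return x[:, :2] / x[:, 2:3]    # perspective divide

def rms_reprojection_error(K, R, t, X, observed):
    """RMS pixel distance between projected and observed 2D points --
    the usual way calibration accuracy is quantified."""
    d = project(K, R, t, X) - observed
    return np.sqrt((d ** 2).sum(axis=1).mean())
```

In practice the observed points come from detected markers on a calibration target, and the calibration is accepted when the RMS error falls below a sub-pixel threshold.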

Discrete Exterior Calculus: Theory and Applications in Computer Graphics

Peter Bertholet, Fall 2012


Discrete exterior calculus (DEC) provides tools to discretize partial differential equations (PDEs) defined on manifolds. The discretized PDEs are represented by global sparse linear equations which can be solved using standard linear solvers. DEC has successfully been used in various computer graphics applications. Nevertheless, DEC is not easily accessible to all graduate students in computer science. To correctly use DEC, a good understanding of exterior calculus (EC) is needed, and EC is often not covered in standard computer science courses. This thesis aims to render DEC accessible to a broader public by giving an introduction to DEC alongside EC. Only a basic knowledge of calculus and linear algebra is assumed, and basic notions like manifolds are covered as well. The geometric aspects of both DEC and EC are emphasized in order to put across the insights behind the DEC and EC formalisms. More advanced DEC and EC results are given in the context of applications where they are relevant. The use of DEC is demonstrated in the context of surface parametrization, vector field design, and fluid simulations. This text provides a working knowledge of both exterior calculus and discrete exterior calculus, enabling the reader to apply and adapt DEC to new problems and to follow reasonings made using EC.
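A flavor of why DEC reduces to sparse linear algebra: the exterior derivatives become signed incidence matrices of the mesh. The toy example below builds them for a single oriented triangle and checks the discrete analogue of d(df) = 0 (edge orientations here are a chosen convention, not taken from the thesis).

```python
import numpy as np

# Discrete exterior derivatives on one oriented triangle (vertices 0, 1, 2).
# 0-forms live on vertices, 1-forms on edges, 2-forms on faces; the
# derivatives d0 and d1 are just signed incidence matrices, and
# d1 @ d0 = 0 is the discrete counterpart of d(d(f)) = 0.

edges = [(0, 1), (1, 2), (0, 2)]     # oriented from lower to higher index
d0 = np.zeros((3, 3))                # rows: edges, columns: vertices
for e, (a, b) in enumerate(edges):
    d0[e, a], d0[e, b] = -1.0, 1.0   # difference of endpoint values

# Face boundary 0 -> 1 -> 2 -> 0 uses e01 and e12 forward, e02 backward.
d1 = np.array([[1.0, 1.0, -1.0]])    # rows: faces, columns: edges
```

Applying `d0` to a 0-form (values at vertices) yields its differences along edges; on a real mesh these matrices are sparse, and operators like the Laplacian are assembled from them together with Hodge stars.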

Adaptive Sampling and Reconstruction for Interactive Ray Tracing

Marco Manzi, Spring 2012


This thesis examines two algorithms that combine adaptive sampling and reconstruction for Monte Carlo ray tracing: Adaptive Wavelet Rendering (AWR) and Greedy Adaptive Mean Squared Error Minimization (GAMSEM). We examine them in the context of interactive ray tracing. The second algorithm has been successfully implemented on modern GPU hardware to achieve interactivity using a combination of OpenGL and the NVIDIA OptiX framework. Furthermore, GAMSEM has been extended by a method similar to irradiance caching to reduce the number of samples needed in scenes with indirect illumination. In contrast to common irradiance caching methods, no geometrical information is needed for this algorithm to work, which makes it much easier to use with effects like motion blur or depth of field.
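The greedy idea behind MSE-driven adaptive sampling can be sketched independently of any renderer: treat each pixel's MSE as variance divided by sample count, and repeatedly give the next sample to the pixel where it reduces the estimated MSE the most. This is an illustrative simplification, not GAMSEM itself (the variance estimates here are assumed given by a pilot pass).

```python
import heapq
import numpy as np

def adaptive_allocate(pixel_var, total_samples, init=1):
    """Greedily distribute samples over pixels so the estimated MSE
    (variance / sample count) drops fastest.

    pixel_var: per-pixel variance estimates from a pilot pass.
    Returns per-pixel sample counts summing to total_samples.
    """
    n = np.full(pixel_var.shape, init, dtype=int)
    # Max-heap keyed by the MSE reduction of one extra sample:
    # var/n - var/(n+1).  heapq is a min-heap, hence the negation.
    heap = [(-(v / init - v / (init + 1)), i) for i, v in enumerate(pixel_var)]
    heapq.heapify(heap)
    for _ in range(total_samples - init * pixel_var.size):
        _, i = heapq.heappop(heap)
        n[i] += 1
        v = pixel_var[i]
        heapq.heappush(heap, (-(v / n[i] - v / (n[i] + 1)), i))
    return n
```

High-variance pixels (noisy regions such as soft shadows or glossy reflections) end up with most of the budget, while smooth regions are sampled sparsely.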

A Global Texture Pipeline and Data-Driven Matting

Daniel Frey, Spring 2012


Recent sports broadcasting enhancement systems astonish spectators by presenting controversial events from novel viewpoints not possible with standard video. These novel views are generated using video-based rendering algorithms where the stadium is rendered using the current frame as background. Players are cut out and rendered as billboards. We extend the range of possible viewpoints by enhancing the rendering with global background textures and improve the player cut-out with alpha matting algorithms.

User-Guided Conversion of 2D Videos to Stereoscopic 3D

Gregor Budweiser, Spring 2012


3D media is gaining a foothold in the home entertainment segment. Many digital and live-action movies are created in stereoscopic 3D; others are converted to stereoscopic 3D in post-production by highly specialized companies. While the results are of high quality, the conversion is labour-intensive, and there is a need for tools that minimize user interaction. This thesis covers how to perform such a conversion from 2D to stereoscopic 3D, building on recent research. The resulting pipeline starts by running an over-segmentation on the input video in a preprocessing step. In an interactive session, the user then separates objects by drawing scribbles onto an arbitrary frame of the video. The scribbles are propagated over the segments by comparing the segments' histograms. Once the objects are found (i.e., all segments are merged into the user-specified objects), the user can tweak the depth of the objects or add further scribbles and objects. After each change to the scene, the current frame is rendered in stereoscopic 3D by applying a warp. To make the tool useful, the region merging and the warping have to be interactive. Since the segments, and therefore the regions, are defined over the whole video, user input on one frame also influences the rest of the video. This allows the user to skim through the frames and get a 3D view immediately at any frame. In the optimal case, the user works on one frame and obtains the whole clip in convincing 3D.
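The histogram-based propagation step can be sketched as nearest-neighbour labeling: every unscribbled segment takes the label of the scribbled segment whose colour histogram it most resembles. This is a minimal illustration of the merging idea; the actual system also exploits spatio-temporal adjacency of the segments, and the chi-square distance used here is one common histogram metric, not necessarily the one in the thesis.

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def propagate_labels(histograms, seeds):
    """Assign each over-segmented region the label of the most similar
    scribbled region by comparing colour histograms.

    histograms: (num_segments x bins) array of normalized histograms.
    seeds: dict {segment_index: label} derived from user scribbles.
    """
    labels = {}
    for i, h in enumerate(histograms):
        best = min(seeds, key=lambda s: chi2(h, histograms[s]))
        labels[i] = seeds[best]
    return labels
```

Because the segments span the whole clip, labeling them from scribbles on a single frame propagates the object selection to every other frame for free.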

Real-Time Processing of Multi-View Video Data for 3D Display

Simon Fankhauser, Spring 2012


The ability to acquire, process, transmit and display three-dimensional content in real-time is essential for many emerging applications like 3DTV live broadcasting. In this thesis we present an end-to-end 3D video streaming system. Video data captured by a one-dimensional array of cameras is processed instantly and displayed on an automultiscopic display. It supports interactive baseline control to make the displayed depth range consistent with the restrictions implied by the display device and the capabilities of the human visual system. We apply an image-based rendering technique to synthesize arbitrary novel viewpoints in between physical camera positions. In order to meet the real-time constraint, substantial parts of the processing pipeline are delegated to the graphics hardware. We implement different dense stereo algorithms with CUDA technology, evaluate and compare them, and present efficiency optimization techniques with respect to the GPU streaming architecture. Our work also covers various techniques to solve the problems of camera calibration and multiview image rectification.
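To give an idea of the dense-stereo kernels such a pipeline ports to CUDA, here is a CPU sketch of the simplest family member, winner-take-all block matching with a sum-of-absolute-differences cost (an illustrative baseline, not the thesis's specific algorithms).

```python
import numpy as np

def box3(a):
    """3x3 box sum via shifted adds (edges wrap; acceptable for a sketch)."""
    return sum(np.roll(np.roll(a, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1))

def sad_disparity(left, right, max_disp):
    """Winner-take-all block matching on rectified grayscale images:
    for each pixel, pick the disparity whose shifted right-image window
    has the minimal sum of absolute differences to the left image."""
    h, w = left.shape
    best = np.zeros((h, w), dtype=int)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left - np.roll(right, d, axis=1))  # per-pixel cost
        cost = box3(diff)                                # aggregate over window
        better = cost < best_cost
        best[better] = d
        best_cost[better] = cost[better]
    return best
```

The per-disparity cost volume maps naturally onto the GPU: each disparity slice is an independent, embarrassingly parallel image operation, which is what makes this family of algorithms attractive for CUDA.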

Multiview Autostereoscopic Displays and Rendering - A Prototype System and Potential Applications in Medical Technology

Mathias Griesen, Fall 2011


3D image acquisition and stereo display are becoming standard technology in medical systems. Yet the next step is autostereoscopic displays. They provide a more natural 3D viewing experience than two-view stereo systems because they do not require special glasses. They further provide motion parallax, and they can show appropriate 3D images to several viewers simultaneously depending on their position. In this project, a working prototype of a multi-view autostereoscopic rendering and display system was built. This includes rasterized rendering of conventional polygon-based objects and real-time volume rendering of medical datasets acquired by CT or MRI. In addition, we developed a calibration technique for autostereoscopic displays based on a light field mapping model. This also makes it possible to display multiview content in an optimal fashion on any autostereoscopic display. We also extended our prototype with viewer tracking using a stereo camera frame. This tracking system is limited to a single viewer, but it increases the image quality and the 3D perception. Finally, this thesis discusses potential applications of such multiview autostereoscopic systems in medical technology.

Real-Time Adaptive Global Illumination with Incremental Imperfect Shadow Maps

Thomas Killer, Spring 2011


Enabling real-time rendering applications to produce realistic lighting simulations is a challenging task. In this thesis, we describe an algorithm that offers a possible solution to approximating global illumination in real-time. Our algorithm provides one-bounce real-time global illumination for fully dynamic scenes. The approach is based on instant radiosity and imperfect shadow maps. We combine and improve current methods by incorporating sampling strategies to optimize computations, as well as allowing for a user controlled trade-off between image quality and rendering performance. We show that it is possible to produce convincing lighting with satisfying rendering speed on current graphics hardware, while still staying true to the physically based principles of the global illumination problem.
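The instant radiosity foundation mentioned above works by converting the first light bounce into a set of virtual point lights (VPLs) and summing their contributions at each shaded point. The sketch below shows that gathering step for diffuse surfaces, with the customary distance clamp against singularity spikes; it is a schematic illustration, not the thesis's shader.

```python
import numpy as np

def gather_vpls(p, n, vpls):
    """One-bounce indirect light at surface point p with normal n, summed
    over virtual point lights (instant radiosity).

    vpls: list of (position, normal, power) triples, one per VPL placed
    where primary light paths hit the scene.
    """
    total = 0.0
    for q, nq, power in vpls:
        d = q - p
        r2 = max(d @ d, 1e-2)          # clamp squared distance: avoids spikes
        w = d / np.sqrt(d @ d)         # unit direction towards the VPL
        cos_p = max(n @ w, 0.0)        # receiver cosine (backfacing -> 0)
        cos_q = max(nq @ -w, 0.0)      # VPL emission cosine
        total += power * cos_p * cos_q / r2
    return total
```

In the full algorithm each VPL's visibility is resolved with an (imperfect) shadow map, and the user-controlled quality/performance trade-off amounts to choosing how many VPLs and shadow-map samples to spend per frame.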

Segmentation and Deformation of Objects in Cel Animations

Simon Willi, Fall 2010


This thesis provides various tools for fast and easy segmentation and deformation of cel animation objects. The main contributions of this work are several newly developed segmentation algorithms based on graph cuts. The algorithms take into account the specific characteristics of cel animations to optimize segmentation performance. This allows users to segment objects through a sequence of video frames with just a few mouse strokes. In order to handle the large amount of data that accumulates in such video applications, a multiresolution data structure is introduced. In addition, a soft segmentation algorithm ensures smooth boundaries of the segmented objects while preserving their characteristic outlines. The presented methods are faster and more stable compared to existing segmentation tools for video and image data. The system also provides three different intuitive tools for deforming segmented objects. Together with a keyframe interpolation algorithm, the deformation tools enable users to create their own animations. The output data, either adapted to special user requirements with the deformation tools or left unmodified in their original shape, can be used in different domains, such as image and video compositing.

Real-time Rendering of Refractive Objects in Participating Media

Michael Pfeuti, Fall 2009

In this master's thesis we propose a rendering pipeline that is capable of rendering images with refractive objects in participating media in real-time. Real-time refresh rates are achieved because most parts of the pipeline are executed on the graphics processing unit. The pipeline computes the lighting by photon tracing and stores the illumination data in a 3D array. Our pipeline performs importance sampling in order to use as few photons as possible. Due to this adaptive photon sampling, a specialized filtering technique needs to be applied. In comparison with similar rendering pipelines, we were able to reduce the number of required photons by more than 50%. Yet, because of hardware limitations on current GPUs, our pipeline is slower in common environments.
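The storage scheme at the heart of this pipeline, binning traced photons into a 3D array so the GPU can later look up in-scattered radiance per voxel, can be sketched as follows (nearest-voxel binning for simplicity; the grid bounds and resolution are illustrative assumptions).

```python
import numpy as np

def deposit_photons(positions, powers, grid_res, bounds=(0.0, 1.0)):
    """Bin traced photons into a regular 3D grid.

    positions: (N x 3) photon positions inside `bounds` along each axis.
    powers: (N,) photon powers.
    Returns a (grid_res x grid_res x grid_res) array of accumulated power.
    """
    lo, hi = bounds
    grid = np.zeros((grid_res,) * 3)
    idx = ((positions - lo) / (hi - lo) * grid_res).astype(int)
    idx = np.clip(idx, 0, grid_res - 1)        # keep boundary photons in range
    for (i, j, k), p in zip(idx, powers):
        grid[i, j, k] += p
    return grid
```

With adaptive (importance-sampled) photon counts, the raw voxel densities are uneven, which is why the pipeline needs the specialized filtering pass mentioned above before the grid can be used for shading.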
