Thursday 28 August 2014

Constant colour matting with foreground estimation

Constant colour matting consists of estimating, for each pixel of an image, the proportion α in which an unknown foreground colour is mixed with a known constant background colour. The α-matte is then used to replace this background with another image. Existing approaches approximate α directly, but post-processing is required to remove spill of the background colour in semi-transparent areas. Instead of estimating α directly, we propose three methods to estimate the unknown foreground colour and then deduce α. This approach leads to high-quality mattes for transparent objects and allows spill-free results (see www.cs.nuim.ie/research/vision/data/imvip2014/). We show this through an evaluation of the proposed methods on a ground-truth dataset.
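For readers unfamiliar with the model: the composite colour of a pixel is I = αF + (1 − α)B, where F is the foreground colour and B the known constant background colour. Below is a minimal sketch, not taken from the paper, assuming float RGB NumPy arrays, of how α can be deduced per pixel once an estimate of F is available; the three foreground-estimation methods themselves are the paper's contribution and are not shown here.

```python
import numpy as np

def alpha_from_foreground(image, foreground, background):
    """Deduce the alpha matte from the compositing model
    I = alpha * F + (1 - alpha) * B, solved per pixel in a
    least-squares sense over the three colour channels.

    image:      H x W x 3 observed composite, float in [0, 1]
    foreground: H x W x 3 estimated foreground colours F
    background: length-3 known constant background colour B
    """
    background = np.asarray(background, dtype=np.float64)
    diff_fb = foreground - background              # F - B
    diff_ib = image - background                   # I - B
    num = np.sum(diff_ib * diff_fb, axis=2)        # <I - B, F - B>
    den = np.sum(diff_fb * diff_fb, axis=2)        # ||F - B||^2
    alpha = num / np.maximum(den, 1e-8)
    # Where F is (nearly) equal to B the model is ambiguous; treat the
    # pixel as fully opaque by convention.
    alpha[den < 1e-8] = 1.0
    return np.clip(alpha, 0.0, 1.0)
```

With α in hand, the background replacement described above is simply αF + (1 − α)B_new for a new background colour or image B_new.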

Friday 8 March 2013

A Vision-Based Mobile Platform for Seamless Indoor/Outdoor Positioning

The emergence of smartphones equipped with Internet access, high-resolution cameras, and positioning sensors opens up great opportunities for visualising geospatial information within augmented reality applications. While smartphones are able to provide geolocalisation, the inherent uncertainty in the estimated position, especially indoors, does not allow for completely accurate and robust alignment of the data with the camera images. In this paper we present a system that exploits computer vision techniques in conjunction with GPS and inertial sensors to create a seamless, vision-based indoor/outdoor positioning platform. The vision-based approach estimates the pose of the camera relative to the façade of a building and recognises the façade from a georeferenced image database. This permits the insertion of 3D widgets into the user's view with a known orientation relative to the façade. For example, in Figure 1 (a) we show how this feature can be used to overlay directional information on the input image. Furthermore, we provide an easy and intuitive interface for non-expert users to add their own georeferenced content to the system, encouraging volunteered geographic information. To achieve this, users only need to drag and drop predefined 3D widgets into a reference view of the façade, see Figure 1 (b). The infrastructure is flexible in that we can add different layers of content on top of the façades, which opens many possibilities for different applications. Furthermore, the system provides a representation suitable for both manual and automatic content authoring.
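As a rough illustration of the vision-based component only (the paper's actual pipeline and its fusion with GPS/inertial data are not shown), the following sketch matches a camera frame against a front-parallel reference image of a façade with known metric size and solves a planar PnP problem for the camera pose relative to the façade; all names and parameters here are illustrative assumptions.

```python
import cv2
import numpy as np

def facade_pose(query_img, facade_img, facade_size_m, K):
    """Minimal sketch: estimate the camera pose relative to a planar facade
    by matching the query frame against a front-parallel reference image of
    the facade whose metric size (width, height) in metres is known, then
    solving a planar PnP problem with RANSAC."""
    orb = cv2.ORB_create(2000)
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    kp_f, des_f = orb.detectAndCompute(facade_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_q, des_f)

    h_px, w_px = facade_img.shape[:2]
    w_m, h_m = facade_size_m
    # Reference pixels -> metric coordinates on the facade plane (z = 0).
    obj_pts = np.float32([[kp_f[m.trainIdx].pt[0] * w_m / w_px,
                           kp_f[m.trainIdx].pt[1] * h_m / h_px, 0.0]
                          for m in matches])
    img_pts = np.float32([kp_q[m.queryIdx].pt for m in matches])

    # RANSAC rejects mismatches; rvec/tvec give the facade-to-camera pose,
    # which is what allows 3D widgets to be drawn with a known orientation
    # relative to the facade.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None
```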

Sunday 4 November 2012

An Authoring Solution for a Façade-Based AR Platform: Infrastructure, Annotation and Visualization

In the last few years, the emergence of smartphones equipped with Internet access, high-resolution cameras, and positioning sensors has made augmented reality (AR) applications in urban settings possible. An important component of such systems will be the ability of non-expert users to add their own AR content to the system. A comprehensive 3D model of a scene is normally required as part of this authoring process. This can be computationally expensive to obtain and is often unnecessary, since the façades within the urban scene are usually the main reference point for the user. This paper describes a method of authoring augmented reality content based on planar façade extraction from a single image. With this method, the extracted plane can be used to create a front-parallel view, which in turn can be used as a frame of reference for authoring 3D AR content. This approach permits a simplified method for integrating augmented content into the view, which can highlight and visualize geographically or contextually meaningful information about the scene. The results presented demonstrate an authoring solution that is easy to use for a non-expert user, because the façade-based infrastructure provides an intuitive frame of reference. Furthermore, although the underlying representation is not a complete 3D representation of the environment, it still allows the user to create full 3D AR content.
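To illustrate the idea of using the extracted plane as an authoring reference, here is a hypothetical sketch (not the paper's implementation) that assumes the four façade corners have already been extracted: a single homography rectifies the façade into a front-parallel view, and content authored in that view can later be mapped back through the inverse transform.

```python
import cv2
import numpy as np

def rectify_facade(image, corners, out_w=800, out_h=600):
    """Warp an image so that the facade appears front-parallel.

    corners: four image points of the extracted facade plane, ordered
             top-left, top-right, bottom-right, bottom-left.
    Returns the rectified view and the homography that produced it."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    rectified = cv2.warpPerspective(image, H, (out_w, out_h))
    # Content placed in the rectified view can be mapped back into the
    # original image (or any new view of the same facade) via H inverse.
    return rectified, H
```

Mapping authored content into an arbitrary new photograph of the same façade then only requires composing this homography with the one estimated for that new view.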

Content authoring is an important stage in the workflow of creating rich augmented reality applications. In this paper we describe a façade-based database infrastructure for authoring and storing 3D content for use in urban environments. It provides frames of reference for the environment as well as a mechanism to match new images with the façades and thus retrieve the associated 3D content. The infrastructure is flexible in that we can add different 3D “layers” of content on top of the façades, which opens many possibilities for augmented reality applications in urban environments. Furthermore, the system provides a representation suitable for both manual and automatic content authoring.
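A minimal sketch of the retrieval step described above is given below; it is hypothetical (the database layout, detector, and thresholds are assumptions, not the paper's implementation). A query image is matched against stored façade descriptors and the best-supported façade identifier is returned, from which the associated 3D content layers would be loaded.

```python
import cv2
import numpy as np

def retrieve_facade(query_img, facade_db, min_inliers=25):
    """Return the id of the stored facade that best matches the query image.

    facade_db is assumed to be a list of dicts with keys 'id',
    'keypoints' (N x 2 pixel coordinates) and 'descriptors', precomputed
    with the same ORB detector used here."""
    orb = cv2.ORB_create(2000)
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    best_id, best_inliers = None, 0
    for entry in facade_db:
        matches = matcher.match(des_q, entry['descriptors'])
        if len(matches) < 4:
            continue
        src = np.float32([kp_q[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([entry['keypoints'][m.trainIdx] for m in matches]).reshape(-1, 1, 2)
        # A homography consistent with many matches indicates the same facade.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        inliers = int(mask.sum()) if mask is not None else 0
        if inliers > best_inliers:
            best_id, best_inliers = entry['id'], inliers
    return best_id if best_inliers >= min_inliers else None
```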

Slides from our presentation at the ISMAR 2012 workshop on Authoring Solutions for Augmented Reality.