Sibgrapi 2008 - Workshop of Theses and Dissertations

-   Editorial
-   Accepted Works
-   Program Committee
-   Full WTDCGPI proceedings



  The Workshop of Theses and Dissertations in Computer Graphics, Image Processing and Computer Vision (WTDCGPI) aims to provide a forum for the discussion and dissemination of doctoral and master's research on Computer Graphics, Image Processing, Computer Vision and Visualization carried out in Brazil. The event has been an integral part of SIBGRAPI – the Brazilian Symposium on Computer Graphics and Image Processing – which in 2008 takes place in the city of Campo Grande, MS, from October 12 to 15, 2008.

   Submissions were accepted from students regularly enrolled in doctoral or master's programs of stricto sensu graduate courses in Brazil, as well as from former students who defended their thesis or dissertation after 01/01/2008 and before the submission deadline of 09/07/2008.

   To extend the benefits of the Workshop to as many authors as possible, it was decided to accept the largest possible number of works while respecting the reviewers' assessments (at least 2 acceptances). In all, 8 works were accepted in the area of Computer Graphics and 7 in the area of Image Processing and Computer Vision, out of a total of 18 submissions.

   The works accepted in the Computer Graphics area address varied topics, such as visualization and reconstruction from point-based models, synthesis of light sources, volume visualization, image warping, and physically based simulation and animation.

   The works accepted in the Image Processing and Computer Vision area also cover varied topics, including color image segmentation, image enhancement, fingerprint biometrics, processing of aerial, medical and biological images, and methods for shape representation and analysis.



     Claudio Esperança and Herman Martins Gomes
     WTDCGPI 2008 Co-Chairs.



Accepted Works

Computer Graphics
Discrete Models for Animating Gas-Liquid and Fluid-Surface Interactions
Sicilia F. Judice, Gilson A. Giraldi, National Laboratory of Scientific Computing
(pp 7 - 16). (paper)

The past two decades have shown rapid growth in physically based modeling of fluids and solids for computer graphics applications. In particular, techniques from Computational Mechanics have been applied to the realistic animation of systems that involve gas-liquid and fluid-surface interaction for computer graphics and virtual reality applications. The main goal of our work is the development of a particle-based framework to create realistic animations of such systems. Specifically, we model and simulate the gas through a Lattice Gas Cellular Automaton (LGCA), the liquid through the Smoothed Particle Hydrodynamics (SPH) method, and the surface through mass-spring systems. LGCAs are discrete models based on point particles that move on a lattice according to suitable rules designed to mimic a full molecular dynamics. SPH is a Lagrangian, meshfree method for numerical simulation based on particle systems and interpolation theory. Mass-spring systems may be geometrically represented by regular meshes whose nodes are treated as mass points and whose edges act as springs. By combining these methods (LGCA, SPH and mass-spring), we obtain the low computational cost of cellular automata and mass-spring systems together with the realistic fluid dynamics inherent in SPH, and use them to develop a new animation framework for computer graphics applications. In this work, we discuss the theoretical elements of our proposal and present some preliminary experimental results.
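
As a rough, generic illustration of only the mass-spring ingredient (not the authors' LGCA or SPH code), the Python sketch below advances a tiny spring network with explicit Euler integration; the mesh, stiffness and time step are made-up values.

    import numpy as np

    # Hypothetical toy mass-spring network: nodes are mass points, edges are springs.
    positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # node coordinates
    velocities = np.zeros_like(positions)
    edges = [(0, 1), (1, 2), (2, 0)]                              # spring connectivity
    rest_len = [np.linalg.norm(positions[i] - positions[j]) for i, j in edges]

    K, MASS, DT = 50.0, 1.0, 0.01                                 # invented constants
    GRAVITY = np.array([0.0, -9.8])

    def step(positions, velocities):
        """One explicit Euler step: spring forces (Hooke's law) plus gravity."""
        forces = np.tile(GRAVITY * MASS, (len(positions), 1))
        for (i, j), L0 in zip(edges, rest_len):
            d = positions[j] - positions[i]
            L = np.linalg.norm(d)
            f = K * (L - L0) * d / L          # force pulling nodes toward rest length
            forces[i] += f
            forces[j] -= f
        velocities = velocities + DT * forces / MASS
        positions = positions + DT * velocities
        return positions, velocities

    for _ in range(100):
        positions, velocities = step(positions, velocities)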

Exploration of Volumetric Datasets through Interaction in Transfer Function Space
Francisco de M. Pinto, Carla M. D. S. Freitas, Instituto de Informática – Universidade Federal do Rio Grande do Sul
(pp 17 - 26). (paper)

Direct volume rendering techniques allow visualization of volume data without extracting intermediate geometry. The mapping from voxel attributes to optical properties is performed by transfer functions, which consequently play a crucial role in building informative images from the data. One-dimensional transfer functions, which are based only on a scalar value per voxel, often do not provide proper visualizations. On the other hand, multidimensional transfer functions can perform more sophisticated data classification, based on vectorial voxel signatures. Transfer function design is a nontrivial and unintuitive task, especially in the multidimensional case, and its controlled modification allows the user to selectively enhance different structures in the volume. In this paper we discuss the interactive approach of a transfer function design technique that allows the user to explore volumetric datasets by interacting with a derived space as well as with voxels in the volume space.
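
For readers unfamiliar with the mechanism, the fragment below shows the simplest possible case: a 1D transfer function implemented as a lookup table that maps each scalar voxel value to an RGBA tuple. It only illustrates the general concept, not the multidimensional technique proposed in the paper; the volume and the table entries are invented.

    import numpy as np

    # Hypothetical 8-bit scalar volume.
    volume = np.random.randint(0, 256, size=(64, 64, 64), dtype=np.uint8)

    # 1D transfer function: one RGBA entry per scalar value (256 x 4 table).
    tf = np.zeros((256, 4), dtype=np.float32)
    tf[:, 0] = np.linspace(0.0, 1.0, 256)                             # red ramps up with density
    tf[:, 2] = np.linspace(1.0, 0.0, 256)                             # blue ramps down
    tf[:, 3] = np.clip((np.arange(256) - 80) / 175.0, 0.0, 1.0)       # low values transparent

    # Classification: map every voxel to optical properties (64 x 64 x 64 x 4).
    classified = tf[volume]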

Fluid Warping
Dalia Bonilla, Luiz Velho, André Nachbin, Instituto de Matemática Pura e Aplicada
Luis Gustavo Nonato, Universidade de São Paulo - São Carlos

(pp 27 - 32). (paper)

Warping techniques can be complicated and difficult to use, but through the use of fluid dynamics warping becomes simple and is intuitively controlled by physical properties such as viscosity and forces. These properties are naturally associated with the image itself or with spatial control handles. The key idea is to think of the image domain as a two-dimensional incompressible and homogeneous fluid, and to use the Navier-Stokes equations to deform it by applying forces to the image function. In this way, the process does not move the image values as in fluid simulations, but transforms the coordinates of a parametrization of the image through a vector field generated by the simulation equations, effectively acting as a texture mapping.
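
The coordinate-warping step can be illustrated independently of the fluid solver. Assuming some displacement field is already available (below it is just a synthetic swirl, not the output of a Navier-Stokes simulation), an image can be resampled through that field by backward mapping:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(image, dx, dy):
        """Backward-map a gray image through a per-pixel displacement field (dx, dy)."""
        h, w = image.shape
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        coords = np.stack([yy - dy, xx - dx])        # source position sampled for each pixel
        return map_coordinates(image, coords, order=1, mode="reflect")

    # Synthetic test: a gradient image warped by a hypothetical swirl field.
    h = w = 128
    image = np.tile(np.linspace(0, 1, w), (h, 1))
    yy, xx = np.meshgrid(np.arange(h) - h / 2, np.arange(w) - w / 2, indexing="ij")
    dx, dy = -0.05 * yy, 0.05 * xx                   # stand-in for a simulated velocity field
    warped = warp(image, dx, dy)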

Image-Based Techniques for Surface Reconstruction of Adaptively Sampled Models
Ricardo Marroquim, Universidade Federal do Rio de Janeiro
Paulo Roma Cavalcanti, Universidade Federal do Rio de Janeiro

(pp 33 - 42). (paper)

Image-based methods have proved to render scenes more efficiently than geometry-based approaches, mainly because of one of their most important advantages: complexity bounded by the image resolution rather than by the number of primitives. Furthermore, due to their parallel and discrete nature, they are highly suitable for GPU implementations. On the other hand, during the last few years point-based graphics has emerged as a promising complement to other representations. However, with the continuous increase of scene complexity, solutions for directly processing and rendering point clouds are in demand. In this paper, algorithms for efficiently rendering large point models using image reconstruction techniques are proposed. Except for the projection of samples onto screen space, the reconstruction time is bounded only by the screen resolution. The method is also extended to interpolate other primitives, such as lines and triangles. In addition, no extra data structure is required, making the strategy memory efficient.
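
A heavily reduced version of the idea that screen resolution, rather than point count, bounds the reconstruction work is the single-pixel splat with a depth buffer sketched below; the paper's actual pyramid-based GPU reconstruction is far more elaborate, and the point cloud here is random.

    import numpy as np

    def splat(points, colors, width, height):
        """Project 3D points orthographically and keep the nearest sample per pixel."""
        depth = np.full((height, width), np.inf)
        image = np.zeros((height, width, 3))
        # Map x, y in [-1, 1] to pixel coordinates.
        px = ((points[:, 0] + 1) * 0.5 * (width - 1)).astype(int)
        py = ((points[:, 1] + 1) * 0.5 * (height - 1)).astype(int)
        for x, y, z, c in zip(px, py, points[:, 2], colors):
            if 0 <= x < width and 0 <= y < height and z < depth[y, x]:
                depth[y, x] = z                      # z-buffer test
                image[y, x] = c
        return image

    points = np.random.uniform(-1, 1, (10000, 3))    # hypothetical point cloud
    colors = np.random.rand(10000, 3)
    img = splat(points, colors, 256, 256)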

Least Squares and Point-based Surfaces: New Perspectives and Applications
João Paulo Gois, Antonio Castelo Filho, Instituto de Ciências Matemáticas e de Computação – Universidade de São Paulo
(pp 43 - 52). (paper)

Surface approximation from unorganized points belongs to the state of the art of computer graphics. In this work, we present approaches for surface reconstruction that are based on efficient numerical schemes for function approximation from scattered data and on sophisticated data structures. In addition, we develop a surface reconstruction method suited to modeling moving interfaces, specifically interfaces of numerically simulated multiphase fluid flows. Finally, from our accumulated experience with numerical schemes and with the development of surface reconstruction methods, we propose a matrix-free approach for rendering arbitrary volumetric scattered data, which presents interesting properties for GPU implementation.
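
As a small, generic illustration of least-squares approximation from scattered data (not the authors' specific schemes), the snippet below fits a plane z = ax + by + c to a noisy neighborhood of samples with NumPy's least-squares solver; the sample plane and noise level are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical noisy samples of the plane z = 0.5x - 0.2y + 1.
    pts = rng.uniform(-1, 1, (200, 2))
    z = 0.5 * pts[:, 0] - 0.2 * pts[:, 1] + 1.0 + 0.01 * rng.standard_normal(200)

    # Least-squares fit of z = a*x + b*y + c over the scattered neighborhood.
    A = np.column_stack([pts, np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    print(a, b, c)   # close to 0.5, -0.2, 1.0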

Renderizações Não Fotorealísticas para Estilização de Imagens e Vídeos usando Areia Colorida
Laurindo S. Britto Neto, Departamento de Ciências da Natureza/Picos - UFPI
Bruno M. Carvalho (Advisor), Departamento de Informática e Matemática Aplicada - UFRN

(pp 53 - 62). (paper)

Non-Photorealistic Rendering (NPR) can be defined as the processing of scenes, images or videos into artwork. This work presents a new NPR method for the stylization of images and videos, based on a typical artistic expression of the Northeast region of Brazil that uses colored sand to compose landscape images on the inner surface of glass bottles. The method comprises one technique for generating 2D procedural sand textures and two techniques that mimic effects created by the artists with their tools. We also present a method for generating 2 1/2D animations of stylized videos as if they were placed in a sandbox. The temporal coherence within these stylized videos can be enforced on individual objects with the aid of a video segmentation algorithm.
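
Purely to convey the flavor of a procedural sand texture (the thesis' techniques and parameters differ), the snippet below modulates a base color with per-pixel grain noise; the palette and grain strength are invented.

    import numpy as np

    def sand_texture(base_rgb, height, width, grain=0.15, seed=0):
        """Fill a region with a base color modulated by per-pixel 'grain' noise."""
        rng = np.random.default_rng(seed)
        noise = 1.0 + grain * (rng.random((height, width, 1)) - 0.5)
        return np.clip(np.array(base_rgb) * noise, 0.0, 1.0)

    # Hypothetical patch of ochre-colored sand.
    patch = sand_texture((0.85, 0.65, 0.35), 128, 128)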

Sistema Composto para Amostragem e Geração de Luzes a partir de Mapas de Iluminação
Aldo R. Zang, Laboratório VISGRAF - IMPA
Luiz Velho, Laboratório VISGRAF - IMPA

(pp 63 - 68). (paper)

In this paper we introduce a new approach to the problem of direct illumination in physically-based rendering of 3D scenes using illumination maps captured from real environments. We developed a system that takes advantage of the best features of the current solutions to the problem: namely, the approximation of illumination maps through directional lights; and stochastic sampling of the light maps. Our framework is flexible and can be used with most rendering programs.
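
The stochastic-sampling ingredient can be sketched in a generic form: draw texels of an illumination map with probability proportional to their luminance. The map below is synthetic and the code is only an illustrative stand-in for the system described in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical low-resolution HDR environment map (H x W x RGB).
    env = rng.random((32, 64, 3)) ** 4           # a few bright texels dominate

    # Luminance of each texel, normalized into a discrete probability distribution.
    lum = 0.2126 * env[..., 0] + 0.7152 * env[..., 1] + 0.0722 * env[..., 2]
    p = (lum / lum.sum()).ravel()

    # Draw light samples proportionally to luminance.
    idx = rng.choice(p.size, size=64, p=p)
    rows, cols = np.unravel_index(idx, lum.shape)   # texel coordinates of the samples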

Um sistema simplificado para animação física usando malhas tetraedrais
Guina Sotomayor Alzamora, Claudio Esperança, Universidade Federal do Rio de Janeiro, Computer Graphics Laboratory
(pp 69 - 78). (paper)

We present a simplified approach for the animation of geometrically complex deformable objects represented as tetrahedral meshes. Our prototype system detects and responds to collisions of objects subject to elastic deformations of variable stiffness. The proposed approach combines several techniques, namely: collision detection using spatial hashing; collision response through a contact surface that uses a consistent penetration depth computed by propagation; an estimate of the displacement vector of the deformation region; and a binary search to separate objects. The dynamics is based on shape matching and a modal analysis scheme, using an explicit-implicit Euler integrator. Preliminary results show that collisions between objects containing several hundred tetrahedra can be animated in real time.
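
Taken in isolation, the spatial hashing used for broad-phase collision detection amounts to bucketing primitives by their quantized positions so that only nearby candidates are tested; the sketch below does this for points, with an invented cell size.

    import numpy as np
    from collections import defaultdict

    CELL = 0.5   # hypothetical cell size, on the order of a primitive's extent

    def hash_points(points):
        """Bucket points by their integer grid cell; only nearby points share a bucket."""
        table = defaultdict(list)
        for i, p in enumerate(points):
            key = tuple(np.floor(p / CELL).astype(int))
            table[key].append(i)
        return table

    points = np.random.uniform(0, 4, (1000, 3))
    table = hash_points(points)

    # Broad phase: candidate pairs share a cell (a full version would also
    # check the neighboring cells, omitted here for brevity).
    candidates = [(i, j) for bucket in table.values()
                  for k, i in enumerate(bucket) for j in bucket[k + 1:]]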

Image Processing and Computer Vision
Automatização do Ajuste de Segmentadores Neurais de Imagens Coloridas
Fernando H. B. Cardoso, Herman M. Gomes, UFCG, Departamento de Sistemas e Computação - Laboratório de Visão Computacional
(pp 80 - 85). (paper)

The tuning of image segmenters capable of detecting materials is usually performed manually, with little automation. Since segmentation is typically an intermediate step for a myriad of Computer Vision applications, this lack of automation leads to effort wasted on secondary tasks. In this work, we propose a technique to automatically tune neural networks that segment images based on color and texture information. This technique does not require human supervision, reducing the effort needed to obtain segmenters. The automatically tuned neural networks detect 2.56% more of the material's surface – relative to the other segmenters tested – with a false acceptance rate of 6.89%.

Contrast Enhancement in Digital Imaging using Histogram Equalization
David Menotti, Universidade Federal de Ouro Preto
Arnaldo de A. Araújo, Gisele L. Pappa, Universidade Federal de Minas Gerais
Laurent Najman, Université Paris-Est
Jacques Facon, Pontifícia Universidade Católica do Paraná

(pp 86 - 95). (paper)

This work proposes two methodologies for fast image contrast enhancement based on histogram equalization (HE): one for gray-level images and the other for color images. For gray-level images, we propose a technique called Multi-HE, which decomposes the input image into several sub-images and then applies the classical HE process to each one of them. To decompose the input image, we propose two different discrepancy functions, giving rise to two new methods. Experimental results show that both methods are better at preserving brightness and producing natural-looking images than other HE methods. For color images, we introduce a generic fast hue-preserving histogram equalization method based on the RGB color space, along with two instantiations of the generic method using 1D and 2D histograms. HE is performed using hue-preserving shift transformations, avoiding the appearance of unrealistic colors. Experimental results show that the contrast of the images produced by our methods is on average 50% greater than that of the original image, while keeping the quality of the output images close to the original.
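
For reference, the classical gray-level histogram equalization that the Multi-HE methods decompose and refine can be written in a few lines (a textbook version, not the authors' code); the low-contrast test image is synthetic.

    import numpy as np

    def equalize(gray):
        """Classical histogram equalization of an 8-bit gray-level image."""
        hist = np.bincount(gray.ravel(), minlength=256)
        cdf = hist.cumsum() / gray.size               # cumulative distribution of levels
        lut = np.round(255 * cdf).astype(np.uint8)    # map each level through the CDF
        return lut[gray]

    image = np.random.randint(40, 90, (128, 128), dtype=np.uint8)  # low-contrast test image
    enhanced = equalize(image)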

Fusão de Métodos Baseados em Minúcias e em Cristas para Reconhecimento de Impressões Digitais
Fernanda Pereira Sartori Falguera, Aparecido Nilceu Marana, UNESP
(pp 96 - 105). (paper)

Biometrics is one of the major trends in human identification, and fingerprints are the most widely used biometric trait. However, considering automatic fingerprint recognition a completely solved problem is a common mistake. The most extensively used methods, the minutiae-based methods, do not perform well on poor-quality images or when only a small area of overlap between the template and the query image exists. Multibiometrics is considered one of the keys to overcoming these weaknesses and improving the accuracy of biometric systems. This master's thesis presents the fusion of minutiae-based and ridge-based methods. The achieved results (a mean reduction of the Equal Error Rate of more than 30% and an increase of 75% in the Correct Retrieval Rate) show that the fusion of minutiae-based and ridge-based methods can provide a significant accuracy improvement in fingerprint recognition systems.
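
A common way to combine two matchers at score level is min-max normalization followed by a weighted sum; the sketch below uses invented scores and is only a generic illustration, not necessarily the fusion rule adopted in the thesis.

    import numpy as np

    def min_max(scores):
        """Rescale matcher scores to [0, 1]."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    def fuse(minutiae_scores, ridge_scores, w=0.5):
        """Weighted-sum fusion of two normalized score sets."""
        return w * min_max(minutiae_scores) + (1 - w) * min_max(ridge_scores)

    # Hypothetical scores produced by the two matchers for the same comparisons.
    fused = fuse([12, 40, 7, 55], [0.2, 0.7, 0.1, 0.9], w=0.6)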

High-resolution image reconstruction using the Discontinuity Adaptive ICM algorithm
A. L. D. Martins, Nelson D. A. Mascarenhas, Universidade Federal de São Carlos
(pp 106 - 115). (paper)

Super-resolution reconstruction methods aim to reconstruct a high-resolution image from a set of low-resolution observations. For that, the observed images must have sub-pixel displacements relative to each other; this requirement ensures that each low-resolution image carries different information. This paper discusses a Bayesian approach to the super-resolution reconstruction problem using Markov Random Fields (MRF) and the Potts-Strauss model for image characterization. Since it is difficult to maximize the joint probability, the Iterated Conditional Modes (ICM) algorithm is used to maximize the local conditional probabilities sequentially. To counter the oversmoothing inherent in Maximum a Posteriori (MAP) formulations with MRF prior models, we adopt a discontinuity adaptive (DA) procedure for the ICM algorithm. The proposed method was evaluated in a simulated setting using the Peak Signal-to-Noise Ratio (PSNR) and the Universal Image Quality Index (UIQI). In addition, video frames with sub-pixel displacements were used for visual evaluation. The results indicate the effectiveness of our approach in both numerical and visual evaluations.
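
To make the ICM idea concrete in a much simpler setting than super-resolution, the toy example below runs ICM on a noisy binary image with a Potts prior: each pixel takes the label that minimizes a data term plus beta times the number of disagreeing 4-neighbors. The data term and beta are illustrative choices, not those of the paper.

    import numpy as np

    def icm(noisy, labels=(0, 1), beta=1.5, iters=5):
        """Iterated Conditional Modes for a Potts-prior MRF (toy denoising example)."""
        est = noisy.copy()
        h, w = noisy.shape
        for _ in range(iters):
            for y in range(h):
                for x in range(w):
                    best, best_cost = est[y, x], np.inf
                    for lab in labels:
                        data = (noisy[y, x] - lab) ** 2          # data fidelity term
                        disagree = sum(est[ny, nx] != lab        # Potts smoothness term
                                       for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                                       if 0 <= ny < h and 0 <= nx < w)
                        cost = data + beta * disagree
                        if cost < best_cost:
                            best, best_cost = lab, cost
                    est[y, x] = best
        return est

    truth = np.zeros((32, 32), dtype=int)
    truth[8:24, 8:24] = 1
    noisy = np.where(np.random.rand(32, 32) < 0.15, 1 - truth, truth)   # 15% label noise
    restored = icm(noisy)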

Mapeamento e Monitoramento Ambiental Usando Imagens Aéreas de Pequeno Formato
Natal Henrique Cordeiro, Bruno Motta de Carvalho, Luiz Marcos Garcia Gonçalves, Universidade Federal do Rio Grande do Norte
(pp 116 - 125). (paper)

We propose a technique that uses small format aerial images (SFAI), considered uncontrolled, together with stereophotogrammetry techniques to construct georeferenced mosaics. Images are obtained using a simple digital camera attached to a radio-controlled (RC) helicopter. Techniques for removing common distortions are applied, and the relative orientation of the models is recovered using perspective geometry. Ground truth points are used to obtain the absolute orientation, plus a definition of scale and a coordinate system that relates image measurements to the ground. The mosaic is loaded into a GIS system, providing useful information to different types of users, such as researchers, government officers, fishermen and tourism enterprises. Results are reported, illustrating the applicability of the system. The main contribution is the generation of georeferenced mosaics using SFAIs, which has not been widely explored in cartography projects. The proposed architecture is a viable and much less expensive solution compared to systems using controlled pictures.
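
Relating image measurements to ground coordinates through control points can be illustrated with a plain DLT homography estimate; this is a simplification of the full photogrammetric pipeline, and the point pairs below are invented.

    import numpy as np

    def homography_dlt(src, dst):
        """Estimate the 3x3 homography mapping src -> dst (>= 4 point pairs) via DLT.
        Coordinate normalization is omitted for brevity."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)          # null-space vector gives the homography

    # Hypothetical image points and their ground (georeferenced) coordinates.
    img_pts = [(10, 12), (200, 15), (195, 180), (8, 175)]
    ground  = [(500.0, 700.0), (560.0, 702.0), (558.0, 760.0), (499.0, 758.0)]
    H = homography_dlt(img_pts, ground)

    # Map a new image point into ground coordinates.
    p = H @ np.array([100.0, 90.0, 1.0])
    ground_xy = p[:2] / p[2]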

Reconhecimento Semi-Automático de Sinus Frontais para Identificação Humana Forense Baseado na Transformada Imagem-Floresta e no Contexto da Forma
Juan Rogelio Falguera, Aparecido Nilceu Marana, UNESP
(pp 126 - 135). (paper)

Several biometrics-based methods, using traits such as fingerprint, face and iris, have been proposed for person identification. However, for postmortem identification such biometric measurements may not be available. In such cases, parts of the human skeleton can be used. Previous investigations showed that frontal sinus patterns are unique to each individual. The objective of this master's thesis is to propose a computational method for frontal sinus recognition for postmortem human identification. To this end, methods for frontal sinus segmentation from anteroposterior radiographs were evaluated. The method based on the Image Foresting Transform proved efficient for frontal sinus segmentation from radiographic images. Techniques for extracting geometrical and shape-based frontal sinus descriptors were also investigated and implemented. The results obtained in our experiments confirm the findings described in the literature about the individuality of the frontal sinus and its feasibility, in terms of precision and usability, for postmortem human identification.

Shape Descriptors based on Tensor Scale
Fernanda A. Andaló, Ricardo da S. Torres, Alexandre X. Falcão, Institute of Computing – State University of Campinas (Unicamp)
(pp 136 - 144). (paper)

Tensor scale is a morphometric parameter that unifies the representation of local structure thickness, orientation, and anisotropy, and can be used in several computer vision and image processing tasks. We exploit this concept for binary images and propose two shape descriptors – the Tensor Scale Descriptor with Influence Zones and Tensor Scale Contour Saliences. We also introduce a robust method to compute tensor scale using a graph-based approach – the image foresting transform. Experimental results are provided, showing the effectiveness of the proposed methods when compared to other relevant methods with regard to their use in content-based image retrieval tasks.

 
 



Program Committee

Workshop Co-Chairs: Claudio Esperança (UFRJ) and Herman Martins Gomes (UFCG)


Carlos Morimoto, USP
Claudio Esperança, UFRJ
Esteban Clua, UFF
Herman Martins Gomes, UFCG
João Marques de Carvalho, UFCG
João Comba, UFRGS
Luiz Henrique de Figueiredo, IMPA
Nelson Mascarenhas, UFSCar
Olga Bellon, UFPR
Waldemar Celes, PUC-Rio
Wu Shin-Ting, UNICAMP

Reviewers

Arnaldo de Albuquerque, UFMG
Carlos Morimoto, USP
Chauã Queirolo, UFPR
Claudio Esperança, UFRJ
Esteban Clua, UFF
Harlen Batagelo, UNICAMP
Herman Martins Gomes, UFCG
João Carvalho, UFCG
João Comba, UFRGS
Julio d'Alge, INPE
Junior Barrera, USP
Luciano Silva, UFPR
Luiz Henrique de Figueiredo, IMPA
Murillo Homem, UFSCar
Nelson Mascarenhas, UFSCar
Olga Bellon, UFPR
Waldemar Celes, PUC-Rio
Wu Shin-Ting, UNICAMP

