Sean M. Arietta

University of California - Berkeley
Department of Computer Science
My name is Sean Michael Arietta. I am a graduate of the University of Virginia with a Bachelor of Science in Physics and a Master of Computer Science. I am now attending the University of California at Berkeley as a PhD student in Computer Science with a focus on Computer Graphics and Computer Vision. My advisors are Maneesh Agrawala and Ravi Ramamoorthi. When I'm not slaving away in front of a computer, I enjoy playing music, playing soccer, cooking, entrepreneurship, and traveling.

Early Experiences in Building and Using a Database of One Trillion Natural Image Patches
S. Arietta, J. Lawrence IEEE Computer Graphics and Applications (CG&A) (2010)

To Appear

Abstract: Many example-based image-processing algorithms operate on image patches (texture synthesis, resolution enhancement, image denoising, and so on). However, the lack of access to a large, varied collection of image patches has hindered widespread adoption of these methods. The authors describe the construction of a database of one trillion image patches and demonstrate its research utility.
A User-Assisted Approach to Visualizing Multidimensional Images
J. Lawrence, S. Arietta, M. Kazhdan, D. Lepage, and C. O'Hagan IEEE Transactions on Visualization and Computer Graphics (TVCG) (2010)

Abstract: We present a new technique for fusing together an arbitrary number of aligned images into a single color or intensity image. We approach this fusion problem from the context of Multidimensional Scaling (MDS) and describe an algorithm that preserves the relative distances between pairs of pixel values in the input (vectors of measurements) as perceived differences in a color image. The two main advantages of our approach over existing techniques are that it can incorporate user constraints into the mapping process and that it can adaptively compress or exaggerate features in the input to make better use of the output's limited dynamic range. We demonstrate these benefits by showing applications in various scientific domains and comparing our algorithm to previously proposed techniques. [pdf]
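The core idea of distance-preserving fusion can be illustrated with classical MDS: embed each pixel's measurement vector in one dimension so that differences in the output intensity approximate Euclidean distances between the input vectors. This toy sketch is not the paper's algorithm (which adds user constraints and adaptive compression); it only shows the underlying MDS step on a handful of pixels.

```python
import numpy as np

def mds_fuse_1d(pixels):
    """Map N multidimensional pixel vectors to a 1-D intensity via classical MDS.

    Toy illustration: intensity differences in the output approximate the
    Euclidean distances between the input measurement vectors.
    """
    pixels = np.asarray(pixels, dtype=float)
    n = len(pixels)
    # Squared pairwise distances between pixel vectors
    d2 = np.square(pixels[:, None, :] - pixels[None, :, :]).sum(-1)
    # Double centering: B = -1/2 * J D^2 J
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j
    # Leading eigenvector of B gives the 1-D embedding
    w, v = np.linalg.eigh(b)
    return v[:, -1] * np.sqrt(max(w[-1], 0.0))

# Toy "image": four pixels, each a 3-channel measurement vector.
# The last pixel is far from the others, so it should land far away
# in the fused intensity as well.
intensities = mds_fuse_1d([[0, 0, 0], [1, 0, 0], [0, 2, 0], [5, 5, 5]])
```

A real image would run this over all pixels (or a subsampled set with interpolation), and a color output would use the top three embedding dimensions instead of one.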

Current Research
Strontium: Bayesian Methods in Computer Vision

Strontium is not one project, but rather a set of projects aimed at applying Bayes' Theorem to problems in Computer Vision. Super-resolution is the project's initial focus. The implementation will incorporate petabytes of data from everyday images as a training set to learn the correlations between neighboring image octaves. The software is currently slated to run on Hadoop, a Distributed File System and MapReduce implementation. A C version of the code also exists for single-core execution.
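The flavor of the training pipeline can be sketched as a MapReduce pass over (low-resolution patch, high-resolution patch) pairs: the map phase keys each pair by its low-resolution patch, and the reduce phase aggregates the high-resolution examples per key. This is an in-process toy stand-in for Hadoop, not Strontium's actual code; the function names are illustrative, and the "Bayesian" step here is just the degenerate case where a flat prior reduces the MAP estimate to an empirical mean.

```python
from collections import defaultdict

def map_phase(training_pairs):
    # Emit (low-res patch key, high-res patch) pairs, as a Hadoop mapper would.
    for low_patch, high_patch in training_pairs:
        yield (tuple(low_patch), high_patch)

def reduce_phase(mapped):
    # Group by key, then aggregate each group into a prediction.
    groups = defaultdict(list)
    for key, high in mapped:
        groups[key].append(high)
    # Under a flat prior, the MAP estimate is the per-component empirical mean.
    return {k: [sum(c) / len(c) for c in zip(*v)] for k, v in groups.items()}

# Two training examples share the low-res patch (1, 2); their high-res
# counterparts are averaged into one prediction for that key.
pairs = [([1, 2], [1, 2, 3, 4]), ([1, 2], [3, 2, 1, 0]), ([0, 0], [0, 0, 0, 0])]
model = reduce_phase(map_phase(pairs))
```

On Hadoop the same two functions would run as distributed mapper and reducer tasks, with the framework handling the grouping between them.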


Previous Research
Multispectral CT Imaging

Extending the ideas of High Dynamic Range reconstruction techniques to CT, we seek to find new uses for multispectral CT scanners. Specifically, we are developing algorithms to assist in the identification of soft tissue regions in CT scans. By exploiting the energy-dependent response of soft tissue, we can more accurately differentiate two nearby regions of different tissue types that would otherwise be clustered together by a k-means or equivalent algorithm.
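The benefit of the spectral dimension can be seen with a minimal k-means sketch: two synthetic tissues that are indistinguishable at one scan energy separate cleanly when each voxel is clustered on its two-energy response vector. The numbers below are made up for illustration and this is not the project's algorithm, just a plain k-means on spectral feature vectors.

```python
import numpy as np

# Two synthetic tissues with identical mean attenuation at energy E1 (100)
# but different responses at E2 (80 vs. 120). Clustering on E1 alone could
# not separate them; the two-energy vector can.
rng = np.random.default_rng(0)
tissue_a = np.column_stack([rng.normal(100, 1, 50), rng.normal(80, 1, 50)])
tissue_b = np.column_stack([rng.normal(100, 1, 50), rng.normal(120, 1, 50)])
voxels = np.vstack([tissue_a, tissue_b])

def kmeans(x, k_centers, iters=20):
    centers = np.asarray(k_centers, dtype=float)
    for _ in range(iters):
        # Assign each voxel to its nearest center, then recompute centers.
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) for j in range(len(centers))])
    return labels

# Seed one center in each tissue so the toy example converges cleanly.
labels = kmeans(voxels, [voxels[0], voxels[-1]])
```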


MRI Reconstruction for Non-Cartesian k-Space Trajectories on Commodity GPUs

By utilizing the highly programmable nature of modern GPUs, MRI reconstruction can be greatly accelerated. The GPU is a specialized parallel processor that allows many "shaders" to process information simultaneously, and exploiting this fundamentally parallel nature enables General-Purpose GPU computing (GPGPU). Much work in the field of medical imaging has focused on speeding up MRI acquisition times; however, the time needed to reconstruct the resulting images has grown considerably, necessitating a better approach. GPUs fit this bill rather well, since the computation needed to perform the reconstruction can be broken up into parallel units.
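The parallel structure of the problem can be sketched with a minimal gridding reconstruction: each non-Cartesian k-space sample is deposited onto its nearest Cartesian grid cell, and the image is recovered with an inverse FFT. On a GPU each sample (or cell) would be handled by an independent shader thread, with the scatter-add done atomically. This is a simplified stand-in, not the project's implementation; real pipelines use convolution gridding with density compensation, both omitted here.

```python
import numpy as np

def grid_samples(kx, ky, values, n):
    """Nearest-neighbor gridding of non-Cartesian k-space samples.

    kx, ky are sample coordinates in [-0.5, 0.5]; each sample is
    scatter-added into an n x n Cartesian grid (cf. atomic adds on a GPU).
    """
    grid = np.zeros((n, n), dtype=complex)
    ix = np.clip(np.round((kx + 0.5) * (n - 1)).astype(int), 0, n - 1)
    iy = np.clip(np.round((ky + 0.5) * (n - 1)).astype(int), 0, n - 1)
    np.add.at(grid, (iy, ix), values)  # unbuffered scatter-add
    return grid

def reconstruct(kx, ky, values, n=64):
    """Grid the samples, then invert with a 2-D FFT to get an image."""
    grid = grid_samples(kx, ky, values, n)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(grid)))

# Single DC sample: lands in the center cell of an odd-sized grid.
g = grid_samples(np.array([0.0]), np.array([0.0]), np.array([1.0 + 0j]), 65)
img = reconstruct(np.array([0.0]), np.array([0.0]), np.array([1.0 + 0j]), n=64)
```

Because every sample's grid index is computed independently, the loop maps directly onto one GPU thread per sample, which is exactly the parallelism the paragraph above describes.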


High Dynamic Range Reconstruction of Temporally Varying Scenes

High Dynamic Range (HDR) reconstruction is a method of recreating HDR images from multiple Low Dynamic Range (LDR) images taken at different exposure times. Dynamic range is the ratio of the brightest spot in an image to the darkest. Even professional cameras only capture a dynamic range of about 400:1, whereas the human eye can perceive up to a 1,000,000:1 difference in light, so there is clearly room for improvement. Techniques exist to recover an HDR image from several LDR images, but they assume that the scene being photographed does not change across the exposures. To correct for this, several computer vision techniques are being employed to compensate for the incoherence between the images, allowing HDR images to be reconstructed from temporally varying LDR inputs.
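The static-scene merging step that this project extends can be sketched in a few lines: each exposure contributes a radiance estimate (pixel value divided by exposure time), and the estimates are blended with a hat weight that trusts well-exposed mid-range pixels most. This sketch assumes a linear camera response; a full pipeline in the style of Debevec and Malik would first recover the response curve, and the temporally varying case would additionally align the exposures before merging.

```python
import numpy as np

def hat_weight(z, z_min=0.0, z_max=255.0):
    """Triangle weighting: highest near mid-range, zero at the extremes."""
    mid = 0.5 * (z_min + z_max)
    return np.where(z <= mid, z - z_min, z_max - z)

def merge_hdr(images, exposures):
    """Weighted-average radiance: E ~ sum_j w(Z_j) (Z_j / t_j) / sum_j w(Z_j),
    assuming a linear response so Z_j / t_j estimates scene radiance."""
    images = np.asarray(images, dtype=float)          # (num_exposures, H, W)
    w = hat_weight(images)
    num = (w * images / np.asarray(exposures)[:, None, None]).sum(0)
    den = w.sum(0)
    return num / np.maximum(den, 1e-8)                # avoid divide-by-zero

# Two exposures of a static 2x2 scene: halving the exposure time halves
# the recorded value, so both agree on a radiance of 200.
ldr = [np.full((2, 2), 200.0), np.full((2, 2), 100.0)]
radiance = merge_hdr(ldr, [1.0, 0.5])
```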


Previous Classes
CS6240: Software Engineering

(1) Hafnium - Home Automation Software

A small software development project built around agile development methods. We used a Rapid Collaborative Refinement model to quickly produce a working prototype of a home automation system. We plan to continue this work to include a unified hardware interface as well.

(2) PaperLess - Electronic Receipts

Born out of the "green" movement, we developed a system to electronically aggregate receipts for companies large and small. Our method relies on a mechanism called gPrint, which allows us to interface easily with any existing POS system. We have also developed the software and systems architectures necessary to accommodate the high-bandwidth traffic expected in a production environment.

CS656: Operating Systems
CS660: Computational Theory
CS451: Advanced Computer Graphics
PHYS554: Computational Physics II
CS651: Computer Vision
CS647: Image Synthesis

Past Projects

An OpenGL remake of the classic NES game Marble Madness. The game was completed during the second semester of my second year in CS446 (Real Time Rendering and Gaming). It cannot be played on most machines due to the advanced graphics techniques employed, including bump mapping, environment mapping, reflection/refraction, Fresnel optics, shadow mapping, GPU-implemented particle systems, blooming, and multitexturing.



Bubble-O is an internet-based two-player game written in Java. It uses the GameGardens architecture to handle the networking aspects of the game. The game itself has nothing to do with bubbles; it is only named that because it was originally designed to involve combative bubbles. It turned out to be a remake of the childhood game played by drawing a grid of dots and taking turns connecting them with lines to create squares; the player who captures the most squares wins. NOTE: Ant must be installed in order to run this game.



A Java-based interface for controlling the motion of a security camera. Its user interface allows a security guard to define a set of points for the camera to cycle through, and simple controls such as zoom, depth of field, and elevation angle can also be adjusted. This project was completed in CS201 (Software Development Methods) and requires the Java Runtime Environment to run.



A website for students at UC Berkeley. Our photographers attended parties we were invited to and took pictures that students could later download or buy. The company was started by myself and another student during our first year of college; it has since shut down.

6th Grade Software

A website from my early days of web design. Although it is still up, it does not serve much of a purpose these days.