Immersive Analysis


Ongoing research, started with Atelier Ten, into immersive representations of environmental analysis. Tools include Rhino, Radiance, Python, JavaScript, HTML, and CSS.


I’m always on the lookout for new ways to create more meaningful and impactful data visualizations. At Atelier Ten, we oftentimes went beyond merely translating data into information; we used simulations and visualizations to represent the experiential qualities of the built environment. Simulating and representing experiences, in service of conveying an idea, is a fascinating challenge and my ongoing interest in the subject prompted me to put together some work samples to try and better articulate the problem.

But first, some background. For 99% of my work as a design consultant, a good old-fashioned bar chart or line graph does the trick. As long as there are limited variables and a clear story to tell, these simple visualizations are extremely effective. Nevertheless, there are times when we confront the limits of conventional charts and graphs, even when we extend their functionality through interactivity. One of the first instances where I felt constrained by standard visualization libraries was in trying to map thermal comfort in buildings. This was a two-part problem: first, we had to figure out the appropriate metric for evaluating thermal comfort, and then we had to represent that metric in a way people would understand.

For the metric, we settled on Predicted Mean Vote (PMV), a well-documented and widely accepted method for calculating thermal comfort. One of the components of PMV is mean radiant temperature, which is partly a function of surface view factors: the degree to which a body "sees" each surface. There are a few ways of calculating this, but since we already had the architectural model in Rhino, we used a raytracing script that simply counted the number of times a ray intersected each surface. The heart of the script relies on Vogel's method for evenly distributing the rays, or vectors, around a sphere.
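To make the idea concrete, here is a minimal sketch of Vogel's method and hit-counting view factors in Python. The `classify` callback stands in for the actual intersection test against the Rhino model, which isn't shown here; the toy classifier below is purely illustrative.

```python
import math

def vogel_sphere(n):
    """Distribute n unit vectors quasi-evenly over a sphere (Vogel's method)."""
    golden_angle = math.pi * (3 - math.sqrt(5))
    rays = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n    # even spacing along the axis
        r = math.sqrt(1 - z * z)     # radius of the latitude circle
        theta = golden_angle * i     # spiral around the axis
        rays.append((r * math.cos(theta), r * math.sin(theta), z))
    return rays

def view_factors(rays, classify):
    """Count ray hits per surface; classify(ray) -> id of the first surface hit."""
    hits = {}
    for ray in rays:
        s = classify(ray)
        hits[s] = hits.get(s, 0) + 1
    return {s: c / len(rays) for s, c in hits.items()}

# Toy classifier: everything above the horizon "hits" the ceiling, everything
# below hits the floor. A real version would raytrace the architectural model.
rays = vogel_sphere(10_000)
vf = view_factors(rays, lambda r: "ceiling" if r[2] > 0 else "floor")
```

Because the rays are distributed evenly, the hit fraction per surface converges to that surface's view factor, and the factors sum to one by construction.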

Using sub-hourly surface temperatures from the EnergyPlus model and a custom script for deriving surface view factors, we could generate a spatial map of mean radiant temperature.
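For reference, MRT at a point can be sketched from the view factors and surface temperatures using the standard fourth-power weighting (temperatures in Kelvin). The surface names and values below are hypothetical, not taken from the project model:

```python
def mean_radiant_temperature(surface_temps_c, view_factors):
    """MRT (deg C) from surface temperatures (deg C) and view factors.

    Uses the fourth-power weighting T_mrt^4 = sum(F_i * T_i^4),
    with temperatures converted to Kelvin; view factors must sum to 1.
    """
    assert abs(sum(view_factors.values()) - 1.0) < 1e-6
    t4 = sum(f * (surface_temps_c[s] + 273.15) ** 4
             for s, f in view_factors.items())
    return t4 ** 0.25 - 273.15

# Hypothetical surfaces for one grid point (values illustrative only)
temps = {"glazing": 30.0, "wall": 22.0, "ceiling": 23.0, "floor": 21.0}
vf = {"glazing": 0.25, "wall": 0.35, "ceiling": 0.2, "floor": 0.2}
mrt = mean_radiant_temperature(temps, vf)
```

Run this per grid point and per timestep against the simulated surface temperatures and you have the spatial map.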

Unfortunately, the whole question of radiative heat transfer is actually something of a Pandora's box, since the problem is a little more involved; we also wanted to capture the effects of shortwave radiation. We ultimately used Radiance and a bit of post-processing to do this, relying on a dMRT calculation from a paper by Edward Arens. We wound up with a graph (ugh) for a specific point in space, covering distinct use cases over time (windows open, shades). However, this really wasn't a good approximation of the experience. Reality is much more dynamic.
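The core of that shortwave adjustment can be sketched as follows. This is a simplified reading of the effective radiant field relation (dMRT = ERF / (f_eff * h_r)) from the Arens work, with typical default coefficients plugged in as assumptions; the full method also accounts for body geometry, posture, and the split between direct and diffuse components, none of which is shown here.

```python
def delta_mrt(solar_flux_on_body, alpha_sw=0.7, alpha_lw=0.95,
              f_eff=0.725, h_r=6.012):
    """Shortwave MRT adjustment (deg C) via the ERF approach:
    dMRT = ERF / (f_eff * h_r). Coefficient defaults are typical
    values, assumed here for illustration.

    solar_flux_on_body -- shortwave irradiance intercepted by the body (W/m2)
    alpha_sw, alpha_lw -- shortwave / longwave absorptivities of the body
    f_eff              -- fraction of body surface exchanging radiation
    h_r                -- linearized radiative heat transfer coeff (W/m2K)
    """
    erf = solar_flux_on_body * alpha_sw / alpha_lw  # effective radiant field
    return erf / (f_eff * h_r)
```

Adding this delta to the longwave MRT from the surface-temperature calculation gives the solar-adjusted value used in the PMV inputs.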

Yeah I know it’s a graph, but it was the best we could do at the time. More importantly, it got the point across. Data represents perceived temperature under different conditioning scenarios.


We face a similar problem of spatially and temporally mapping experiences in daylighting analysis. We want to show something dynamic. While it's a little easier with light (a visual representation of a visual experience), the same principles generally apply. One of the most common methods for representing lighting analysis results is with spatial maps. If the geometry is particularly complicated, you can even unfold surfaces to display analysis grids.

For particularly complex geometry, one strategy is to map illuminance values onto each surface and unfold the analysis grid onto a two-dimensional plane.

This method is fine, but it requires a little explanation and occasionally fails to capture the subtleties of daylighting design. Before computer simulations came into widespread use, we used to create physical models with heliodons to assess the quality of light in a space, particularly how light interacts with materials. Simulations made it possible to rapidly iterate on designs, but we lost something in the process of abstracting information onto a 2D plane. As the AEC industry starts to embrace AR/VR technologies, we might be able to regain some of what was lost.

DIVA is one of the most widely used plugins for daylighting analysis, and over the last year I've been experimenting with altering the Radiance batch files to extend its capabilities. Radiance has a ton of functionality, and only part of it is handled within DIVA. So far my workflow has been to let DIVA take care of generating sky files, creating materials files, and general setup work, and then to edit the batch files to customize the simulation. By adding a little metadata to the resulting JPEG, you can load the image into Google Cardboard or use it as-is with lightweight photosphere viewers, like Matthew Petroff's Pannellum JavaScript library.

Radiance rendering with a custom camera definition to create an equirectangular image via rtrace. Rendered using AWS.

Once you have the fundamentals of the process down, you can alter the Radiance files to overlay analysis results on the VR image. Moving forward, I want to start integrating animations into the VR experience and give users the ability to actually adjust the environment in real time, or make design changes on the fly and see the effects. There are some platforms that do this really well, like Unity. But Radiance is really the only platform out there that produces photometrically accurate renderings and analysis results, which is the whole point of this exercise; otherwise we're just creating nice images.

Using shell scripts you can post-process the radiance output to overlay analysis results on HDR images.
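As one concrete example of that post-processing, Radiance's falsecolor utility can overlay a result image on a rendering. A small Python helper that assembles the command (this assumes Radiance is on your PATH, and the file names are placeholders):

```python
def falsecolor_overlay(result_hdr, render_hdr, scale=1000,
                       label="lux", contour_lines=True):
    """Build a falsecolor command that overlays an analysis image
    (result_hdr) on a rendering (render_hdr). Returns an argv list;
    falsecolor writes the combined HDR to stdout, so redirect it
    to a file when you run it."""
    cmd = ["falsecolor", "-i", result_hdr, "-p", render_hdr,
           "-s", str(scale), "-l", label]
    if contour_lines:
        cmd.append("-cl")  # draw contour lines instead of a solid wash
    return cmd

cmd = falsecolor_overlay("illum.hdr", "render.hdr")
# e.g.: subprocess.run(cmd, stdout=open("overlay.hdr", "wb"))
```

Wrapping the command in Python rather than a raw shell script makes it easy to loop over timesteps or design options when generating the VR frames.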


Moving forward, how do we leverage technology to represent complex experiences in the physical world? Acoustic engineers have sound labs, and lighting designers can start to use more immersive technologies like AR/VR. What is the best way to represent more complex phenomena, like thermal comfort? Here are a few things I've learned in trying to answer these questions.

First, be very specific about what you're analyzing. If you can narrow the analysis down to one or two variables, it's easier to use the visualization as a tool for making informed design decisions. Second, use visualization to inform the design process. This might sound intuitive, but it means designing visualizations as tools rather than one-off representations of an idea. If you are in a VR environment, for example, you should be able to see design options without switching views or loading a different dataset. Preserve object constancy whenever possible. Third, don't fall into the sexy-image trap. It's hard to avoid the cult of architectural representation. Architects use various representational techniques to sell an idea, but that's not the purpose of analysis. The point of this exercise is to better represent the reality of an experience in order to inform the design process.