Current trends in publishing 3D Anatomy

From Rybczynski et al., 3D modeling of hadrosaur chewing, Palaeontologia Electronica, 2008: a beautiful 3D model pasted onto a 2D page and defaced with 2D muscles

One might say that the past decade has been the decade of 3D visualization in paleontology. Although techniques such as CT, MRI, laser scanning, serially-reconstructed histology, and confocal microscopy (and of course molding/casting) have been around for a long time, and championed by a few, only recently have they become common enough to be available to so many different researchers, students, and enthusiasts alike. With these 3D data-driven techniques have come new ways of disseminating 3D data via the internet. Publications such as Palaeontologia Electronica and PLoS have made it easier to include movies (check out Alex Tirabasso’s movie of hadrosaur chewing in PE 2008) and 3D models directly in the publication, rather than as supplementary files, which were long the only way to append video to papers in journals like Journal of Experimental Biology, Journal of Zoology, Journal of Morphology, and the like. But the ability to post 3D material directly onto the web has really changed the game.

During the past 15 years, we’ve seen numerous new virtual museums enter the Web, offering new views, visualizations, and interactive features that allow viewers to study a variety of specimens, bones, and raw data. The real heroes of this movement, the ones that first grabbed my attention, were sites like Digimorph at U Texas and 3D Museum at UC Davis. Numerous others have cropped up, like PaleoView3D at Marshall, Larry Witmer’s 3D visualization page, and most recently Aves3D at Holy Cross. These were spearheaded by Tim Rowe, Ryosuke Motani, Suzanne Strait, Larry Witmer, and Leon Claessens, respectively. I’m sure there are others, and our lab too has a growing assortment of 3D content on its page.

From Holliday Lab 3D database: a 2D picture of a 3D model

What they all have in common is that they share 3D specimen data freely, which is awesome. How they differ is in the way they process and present it. Digimorph offers views of the raw CT data (as stacked .tiffs) via its Inspector Java applet, as well as movie files of 3D reconstructions of the specimen. 3D Museum and Aves3D use a software package called WireFusion, which takes the 3D model, in these cases built from laser scan data, and embeds it as a .wrl (VRML) object in an interactive environment where you can manipulate the object directly on the webpage. We gave this method a shot two years ago, when Nick Gardner made a number of WireFusion models of the lizard microCT data used in our ongoing histology studies. Thanks to Leon for turning me on to that. PaleoView3D and the WitmerLab page take it one step further by sharing the 3D data in downloadable formats such as .obj (works well with Maya, 3ds Max), .wrp (Geomagic), .stl (most CAD programs), and finally 3D PDF.

The 3D PDF is particularly slick because the free Adobe Reader is all you need to view the models; you can also generate models with parts you can highlight, select, or hide, enabling the viewer to see inside structures. This method is particularly useful if you study endocasts, or models of brains, pneumatic cavities, or jaw muscles that happen to be inside that pesky skull. Check it out, it’s awesome. We’ve generated some of these ourselves and use them when teaching medical students and high school students, or include them in long-format talks where there’s time to switch out of your PowerPoint presentation. We’ll have online 3D content for a new crocodilian species we’re writing up too; these models, along with the raw CT data, will also go to the accessioning museum upon the return of the specimen. They’ll be posted soon enough on our lab page.

To generate a 3D PDF, first you need a 3D dataset that you can turn into one of the many file types that Adobe Acrobat 3D (v8, or 9 Pro Extended) can read. We use Amira, since we primarily work with CT and MRI data, but if you’re using laser scan data, there are several file formats you can export from SolidWorks, RapidForm, Geomagic, etc. into Acrobat. There are also VGStudio, Mimics, and the handy freeware Slicer, all of which export models into readable formats such as .stl or .obj. For those of you without access to the pricier software packages, Slicer is nice, and, geez, now years ago, Andy Farke over at The Open Source Paleontologist posted an inspiring tutorial on how to use it. Finally, I learned many of these tricks during my dissertation at Ohio University with Larry. His lab has remained a useful go-to place for help on occasion.
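And if you ever need to move a mesh between packages without the pricey software, the .stl format itself is simple enough to write by hand. As a rough illustration only (this is not how Amira or Slicer export; the file name and toy geometry are made up), here’s a minimal ASCII .stl writer in Python using nothing but NumPy:

```python
import numpy as np

def write_ascii_stl(path, vertices, faces, name="specimen"):
    """Write triangles to an ASCII .stl file, a format most CAD
    packages (and Acrobat's 3D import) can read."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    with open(path, "w") as out:
        out.write(f"solid {name}\n")
        for tri in f:
            a, b, c = v[tri]
            n = np.cross(b - a, c - a)           # facet normal
            length = np.linalg.norm(n)
            if length > 0:                       # guard degenerate triangles
                n = n / length
            out.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            out.write("    outer loop\n")
            for p in (a, b, c):
                out.write(f"      vertex {p[0]:.6e} {p[1]:.6e} {p[2]:.6e}\n")
            out.write("    endloop\n")
            out.write("  endfacet\n")
        out.write(f"endsolid {name}\n")

# Toy example: a single tetrahedron standing in for a real isosurface
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
write_ascii_stl("tetra.stl", verts, faces)
```

In practice you’d feed in the vertex and face arrays your segmentation software spits out rather than a toy tetrahedron, but the format really is that plain.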

Besides the general ease, the slick look, and the accessibility of these 3D models, what I like these days, since I don’t necessarily have as much time to “color” as I used to, is that undergrads, and even high school students (and of course, hopefully, grad students), can easily master the skills necessary to dive into a dataset, make a 3D model, and see it published/shared on the web. During this process, they learn a particular animal and its parts pretty well, while also learning how to use some still “cutting-edge” software. This is particularly useful to students with interests in radiology, surgery, and various health care professions, not to mention those who choose to go to graduate school.


Axial section of alligator embryo, with muscles and nerves, soon to be a model (and pubbed).

Currently Rebecca and Cortaiga (our undergrads), Henry (a PhD student), and Ian (a rotation PhD student) all have models in preparation that complement research projects and will be ready for our Annual Health Sciences Day on Nov 11. Each of these projects will find its way into the publication pipeline in the next few months as well, and some form of 3D model in 3D PDF format (and more) will be made available. So stay tuned.

Putting these data up for free is cool, and indeed they will be used initially in particular papers (where we can ‘spin’ their utility), and likely in others in the future. But how does one “get credit” for sharing data? Funding agencies like NSF apparently like this practice of making data and models available, so I’ll stick with it just for that; though I have to say, when I included our 3D lizard page as an example of public dissemination and undergrad training in my Broader Impacts a grant or so ago, it was called “fleeting” (since, a reviewer opined, the webpage might disappear one day) as well as “lip service”. I’m still not sure wtf that means, since I have an ok record of getting things out and then getting them up on the page, and have had the pleasure of having several most excellent undergrads working in the lab. But it takes time, people, and funding before one can share significantly, and easily.

But for readers, here’s a question: are there outlets where one can actually “publish” online atlases or small modeling projects? Are there good or bad things you see happening regarding 3D data dissemination? Do you find it easy to get 3D data and scanning facilities, easy access to software, or help with software? Or, if you’re without these things, do people share their model data if you bug them enough?

-Casey

9 thoughts on “Current trends in publishing 3D Anatomy”

  1. I suspect you’re targeting more scientific forums in your question, but I’ll throw this out there as you’ll likely confront us artist-types soon enough.

    You’re likely aware of commercial archives (e.g. turbosquid.com) of skeletons created with a good dose of artistic license. The scanned meshes will be of great interest, and you’ll certainly encounter cases of scanned meshes turning up on these venues, which has been a big issue. If anyone is releasing the meshes themselves, I’d therefore recommend including such venues from the get-go.

    I’m personally trying to create abstracted skeletals biased towards information pertinent to determining classification. A first attempt: http://www.drip.de/?p=508

    The goal is an attractive ‘light’ mesh for real-time interactive illustrations to be integrated in future eBooks and learning applications together with detailed scans. (Of course, I’ve completely underestimated the first step in this process – the determination of ‘pertinent information’. Ouch.)

    I’m convinced that interactive 3D and non-photorealistic rendering technologies will enable us to shift public fascination from specific dinosaurs to underlying scientific principles by presenting these in an experiential way. I applaud you and your colleagues for opening up these collections!

  2. David, that is way cool. I hadn’t seen those types of meshes yet. And this post isn’t completely science-geared, particularly since it takes a degree of artistic license, touch, and perspective to see some of the “science-based” models through to the finish. I envision many of these models meeting somewhere halfway. Thanks!

    • Thank YOU.
      I hate the indistinct skeletons often modeled for documentaries… the important details aren’t included, yet they purport validity via hyper-realistic surface details.
      I’d love to have a go at the hadrosaur skull. I’m working on a Kentrosaurus with Heinrich Mallison at the moment. It’s invaluable having expert guidance in these creations.

  3. Excellent and interesting post! You’re right in that this is the direction that morphological research is heading and will have to head. A few comments, some in response to your queries at the end:

    1) Re: good or bad things with 3D data dissemination, I have a few opinions. I _am_ concerned about the long-term availability and accessibility of the data, particularly as related to supplementary data posted with papers. For instance, will 3D PDFs be supported 100 years from now? This makes it extra important to archive data with museums, which brings me to point #2. . .

    2) In general, I have had little difficulty in getting archived CT data from museums or places like DigiMorph. But… my sense is also that most museums have no plan for long-term archival of data. This is a disaster in the making. Lots of museums have stacks of DVDs and CDs with specimen data on them – storage formats that will inevitably break down, get damaged, or get lost. In fact, most of the specimen databases I’ve seen don’t even have a record that scan data were ever taken! Of course, I won’t pretend that there is an easy solution – fixing this (long-term archival, databasing, dissemination, etc.) requires money that museums just don’t have these days.

  4. I am personally challenged by maintaining properly organized digital archives myself, and also have books and stacks of CDs/DVDs of raw image data, external hard drives of various quality, and the like, so I can’t imagine what larger collections must struggle with.

  5. Archiving is a huge issue.
    Dave Sproxton from Aardman told of how the studio suffered a fire in its storage rooms (clay models and digital backups). They couldn’t even ascertain what they’d lost, and the damage was severe enough that subsequent films became economically unfeasible. Since then, they’ve hired a librarian. He reported that this was a severe culture clash that resulted in entirely new work processes. The issue of archiving still isn’t solved in a satisfactory manner, and he invited other mid- to small-size studios to collaborate on solutions. Giants like Disney just bulk-store everything, but even they are running up against the limits of their huge capacities.
    No open-source results that I know of, but there may be things on the horizon. Archiving film assets is complex enough that there’s at least hope that such systems would be flexible enough to function for paleo data.

  6. Long-term storage of 3D data remains a problem that even NARA (the National Archives and Records Administration) has no solution for. For now, it’s hard to tell WTF we should do about it. It’s obvious that proprietary solutions like QuickTime movies and Adobe 3D PDFs, while wicked cool and serving up all sorts of awesome for our use (everyone loves ’em), can’t be the long-term solution. Not only do we need our data in formats that someone can come along, open, and study 20 years from now; we also need a solution for storing metadata (e.g. scan parameters, WTF the scan is even of, notes on the specimen and/or scan data, etc.). It doesn’t help that there are zillions of different formats all doing the same thing with no common framework between them.
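    One lightweight stopgap for the metadata side (strictly a suggestion, not any established standard; the accession number and field names below are made up) is a plain-text sidecar file stored next to each image stack, so the scan parameters survive even if the proprietary container doesn’t:

```python
import json

# Hypothetical sidecar: one plain-text metadata file per scan, kept
# alongside the image stack. All field names here are illustrative.
metadata = {
    "specimen": "MU-VP-0001",        # made-up accession number
    "modality": "microCT",
    "scan_date": "2011-10-01",
    "voxel_size_mm": [0.045, 0.045, 0.045],
    "kVp": 120,
    "uA": 200,
    "notes": "head only; see specimen card for locality data",
}

with open("MU-VP-0001_scan.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Round-trip check: plain JSON stays readable without special software
with open("MU-VP-0001_scan.json") as f:
    assert json.load(f)["modality"] == "microCT"
```

    Plain JSON (or even a flat text file) is dull, but dull is exactly what you want for something a curator should still be able to open in 20 years.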

  7. Pingback: Alligator Sesamoid Anatomy | Holliday Lab at Mizzou
