Sadly, the Corner Store was cut from Surviving Independence, so I'm posting a couple of screen captures to archive the level. There wasn't enough production time to add the unique gameplay and scenarios this level needed. One video shows the completed base modeling and the other shows nearly final environment art.
Independence has over 500 models & 1000 textures in it. As the only artist on the game, I had to develop fast production techniques. Knowing I wouldn't be able to hand-paint most of the textures, I created an uber shader network. The shader looked up vertex values I would flood-fill onto polygons while modeling, applied the appropriate base colors for the level theme, and added some color variation, edge wear, & damage. I could then bake all the level textures with a single push of a button. This texturing method was similar to a Substance Designer workflow, but lightweight and without having to leave & return to the base 3d application. Below are screen captures showing this technique applied to props in the Corner Store. While this level was cut from the game, this method helped ship the other 37 levels!
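To make the idea concrete, here's a minimal sketch of that lookup-and-bake flow, not the production shader: flood-filled material IDs resolve against a theme palette, then per-face variation and edge wear are layered on before baking. All names and values are illustrative.

```python
import random

# Hypothetical theme palette: material IDs flood-filled onto polygons
# while modeling map to base colors for the level's theme.
THEME_CORNER_STORE = {
    0: (200, 60, 40),    # brick
    1: (230, 225, 210),  # painted trim
    2: (90, 95, 100),    # metal
}

def shade_face(material_id, wear=0.0, seed=0):
    """Resolve a flood-filled material ID to a final baked color,
    adding subtle per-face variation and darkening worn edges."""
    base = THEME_CORNER_STORE[material_id]
    jitter = random.Random(seed).randint(-10, 10)  # color variation
    return tuple(
        max(0, min(255, round((c + jitter) * (1.0 - 0.4 * wear))))
        for c in base
    )

def bake_level(faces):
    """'Single push of a button': shade every face in one pass.
    faces is a list of (material_id, wear, seed) tuples."""
    return [shade_face(mid, wear, seed) for mid, wear, seed in faces]
```

Like the shader network, the artist only paints cheap per-vertex IDs while modeling; all the expensive-looking detail comes from the one-pass bake.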
Rendering has always been a black art with the render engines of the past. With Mental Ray, combining deformation motion blur with ambient occlusion was a recipe for disaster. On the GPU with Redshift, it's no problem at all.
Here is a Redshift vs Mental Ray sequence test. Cheats and workarounds for computationally expensive processing like global illumination, motion blur, depth of field, and refractions/reflections are a thing of the past!
Took a chance buying a new but cosmetically damaged Tesla K20c off eBay. It had a small ding on one corner, so I got it for pennies on the dollar. Works perfectly! Since it's a headless GPU compute card, I can switch its driver to TCC (Tesla Compute Cluster) mode, letting the hardware run without any overhead from the Windows display driver. In TCC mode my test renders were 50% faster, so it's worth the trouble to see if your headless GPU can run in TCC mode; Titans can... possibly others as well. Combined with the Quadro K5000, the NVIDIA drivers go into Maximus mode. Redshift screeeeeams!
Redshift GPU computing on the Reading Nook scene. This used to take hours to render with Mental Ray.
Autodesk is ending development and support of Softimage in 2016. The end of an era. I look forward to the new players this will bring to the DCC space.
I came up with a clever way to solve transitions to different areas of the city via subway (and bus) and thought I would share. The background tracks & subway cars scroll in a loop while the interior is static. I added a bit of shake animation to the game camera to sell the illusion. Below you can see how this transition level works in-game.
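The trick above boils down to two tiny pieces of math. Here's a sketch with made-up numbers: the tiled background (tracks, passing cars) wraps its scroll offset every tile length so it loops forever, while the static interior gets only a slight camera shake.

```python
import math

def scroll_offset(elapsed, speed, tile_length):
    """Wrap the background's travel so tiled geometry loops seamlessly.
    The interior never moves; only this offset does."""
    return (elapsed * speed) % tile_length

def camera_shake(elapsed, amplitude=0.05, frequency=9.0):
    """Small sinusoidal bounce on the game camera to sell the motion.
    Amplitude and frequency here are illustrative values."""
    return amplitude * math.sin(2 * math.pi * frequency * elapsed)
```

Because the offset wraps, the loop can run for the whole transition without any precision drift or end-of-track pop.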
As with syncing props to animated objects, syncing two (or more) animated objects presents a similar challenge. But in the previous post, the problem was a little easier: a character moves to a registration point and an event is triggered. The event calls synced animations on multiple objects and the objects interact. Pretty simple.
However, syncing becomes difficult when two complex animated objects with layers of blends & animations need to interact in unison. To avoid making a programmer write spaghetti code or painting oneself into an art production corner, it's better to abstract the interaction with states. In Unity, a state tree can drive a network of animations, from simple to complex. Two (or more) objects can share the same state tree, and unique animations can be called per object via overrides. So two dissimilar objects can interact together as they transition from state to state. Very slick. Below are some simple examples of how states keep animated objects glued together.
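Here's a toy version of that idea (this is not Unity's Mecanim API, just the shape of it): one shared state graph drives every object, and each object can override which animation a state plays. All names below are illustrative.

```python
# Shared state graph: every object transitions through the same states.
TRANSITIONS = {
    "idle":   {"mount": "riding"},
    "riding": {"dismount": "idle"},
}

class StateDriven:
    """An animated object driven by the shared state tree."""

    def __init__(self, overrides=None):
        self.state = "idle"
        self.overrides = overrides or {}  # per-object clip overrides
        self.playing = self._clip()

    def _clip(self):
        # Unique animation per object via override, else a default clip.
        return self.overrides.get(self.state, self.state + "_default")

    def fire(self, trigger):
        nxt = TRANSITIONS.get(self.state, {}).get(trigger)
        if nxt:
            self.state = nxt
            self.playing = self._clip()

def fire_all(trigger, *objects):
    """One trigger advances every object through the same state tree,
    keeping their dissimilar animations glued together."""
    for obj in objects:
        obj.fire(trigger)
```

For example, a rider with `{"riding": "rider_pedal"}` and a bike with `{"riding": "wheels_spin"}` both land in the "riding" state after `fire_all("mount", rider, bike)`, each playing its own clip in unison.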
Side note: To abstract art production a little further, an "object" can be just a hierarchy of transforms. Mesh properties can be attached as needed. Below you can see how the bike is a bunch of mesh parts attached to a 'naked' hierarchy of bones, making its components & paint upgradeable.
Setting up & registering multiple static & dynamic objects creates a "cookie cutter" standard from which all Unity prefabs derive. In this example, the character's "sit-in" and "sit-out" animations sync with the dynamic chair's "sit-in" and "sit-out" animations. Both the character & chair are spatially registered to the static table. As long as new mesh objects are created to match this standard, this system can drive different table & chair combos.
The same method works for registering & synchronizing deformable props with animated objects as well. As long as the bathtubs & shower curtains in the game conform to a standard, the objects will interact perfectly when triggered together.
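A sketch of the registration half of the standard (names are hypothetical): every prefab exposes the same named anchor points, so any character/chair/table combination that follows the convention snaps together correctly.

```python
def _add(a, b):
    """Component-wise vector addition for (x, y, z) tuples."""
    return tuple(x + y for x, y in zip(a, b))

class Prefab:
    """A stand-in for a Unity prefab with named registration anchors."""

    def __init__(self, position, anchors):
        self.position = position  # world-space position (x, y, z)
        self.anchors = anchors    # named local offsets from position

    def anchor_world(self, name):
        return _add(self.position, self.anchors[name])

def register(dynamic, static, anchor):
    """Spatially register a dynamic object (chair, character) to a
    static one (table) by snapping it onto a shared named anchor."""
    dynamic.position = static.anchor_world(anchor)
```

Because registration is by anchor name rather than hard-coded offsets, swapping in a different table or chair mesh works as long as it exposes the same anchors.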
Got some very good performance results running many of these characters on a Nexus 7 tablet. The exported FBX file has 60 bones and 5k triangles. Mobile isn't just for low-resolution graphics anymore! The videos below show the animated rig in Softimage and the exported result running in realtime in Unity.
I continued testing flexible, stretchy, & stylized rigs & animations in Unity. I made this mockup character... let's call him "Boxy". In the first video, you can see Boxy's rig driving the STR (scale/translation/rotation) values of nulls (the yellow dots). In the second video, the plotted nulls are driving the same deformations in realtime. Pretty slick!
highly flexible character rig - in 3d app
complex rig controls plotted down to STR data - realtime in Unity
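Here's a minimal sketch of what "plotting" means here, with `rig_eval` as a stand-in for the 3d app evaluating one null: run the full control rig once per frame offline, store raw STR samples per null, then play those samples back at runtime with zero rig cost.

```python
def plot(rig_eval, nulls, frame_count):
    """Bake the rig down: rig_eval(null, frame) -> (scale, trans, rot).
    Returns per-null lists of STR samples, one entry per frame."""
    return {n: [rig_eval(n, f) for f in range(frame_count)]
            for n in nulls}

def playback(baked, null, frame):
    """Runtime side: a plain lookup, no rig evaluation needed."""
    return baked[null][frame]
```

This is why the complex rig can stay in Softimage while Unity only ever sees cheap keyframe data on the nulls.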
Derek Jenson Blog
My website serves to archive experiments, document projects, share techniques, and motivate further exploration & artistry in 3d space.