A Wrap With Everything On It

So this is the end of studio, and it’s a good time to reflect on where I’ve gotten to and where to go from here. In the last year I’ve had the opportunity to try my hand at a variety of skills: concepting, character design, game animation, sculpting, hard surface modelling, texturing for differing styles, environment composition, lighting for a variety of situations and creating renders for final delivery. At the end of the day, though, my portfolio is sparse on content (that I’m willing to put my name to), and in the coming months filling it out is imperative to the next step of gainful employment. That means my next step is to create some refined examples that display my knowledge with polish and flair. To improve my practice going forward I’ll be focusing specifically on the following skills.

Sculpting for Realism

I adopted ZBrush very early in studio as a solution to a good deal of my high-poly needs for characters and organic set pieces, as well as for detailing other objects I produced through more conventional methods. Although I’ve picked up a good deal just through practice and familiarity with the program, I feel my work is still lacking in professional touches, and I’ve yet to achieve that truly alive feeling in any of my work up to this point.

[Image: Sculpt for Nathan Drake by Frank Tzeng, Lead Character Artist at Naughty Dog]

Going forward I want to make ZBrush an integral showing point of my skill set, and develop more complete pieces for my portfolio that display a thorough understanding of artistic principles and how they can be adapted for the production of real-time assets. By the end of the year I plan to have a character sculpted to a standard similar to that shown above. To achieve this I’m going to work through more in-depth tutorials focusing on complex muscle and bone anatomy, beginning with the following.

https://www.pluralsight.com/courses/sculpting-realistic-bust-zbrush-329

Hard Surface Techniques

This is particularly important for improving my work ahead of the upcoming release of our game Incapacitor. I need to greatly increase my workflow efficiency for hard surface production, with the intent of building adaptable skills that apply to both characters and set pieces.

[Image: My Gun]

I’ve been following Tor Frick for a while, and I think his tutorials have a lot to offer in this regard. His Modo workflow is insanely fast, and I might pick that program up in the future, but in the meantime I’m going to work as much of it as I can into my 3ds Max process, hopefully increasing quality in time for release.

[Image: The Gun My Gun Could Look Like]

Animation

Given how little has gone on in the last year, I almost feel like saying ‘the A word’ is a dirty thing at this point. Part of my responsibility on the aforementioned upcoming game is to rig and animate robotic characters. I’ve not invested as much time as I’d like into character animation in my time at SAE so far, and I feel this is a major shortcoming, especially given it doesn’t yet feature in my portfolio. To get the best result for the task at hand, and to build skills for the future, I will be looking into more complex rigging techniques and a good deal of reference for mechanized posturing. And hopefully some more organic personal stuff in there too.

At the end of the day I’m just shooting to improve quality. I’ve come a long way in two years, but this is only a starting point. From a soft-skills standpoint, my poor time management has cut majorly into every other facet of the way I work. In a way I have more reason to update now that I’m no longer posting to meet a grade, but rather just for myself, so I’ll be making a concerted effort from here on out to make this blog a hosting space for content updates.

Onward and upward, etc.

References

https://www.pluralsight.com/courses/sculpting-realistic-bust-zbrush-329

https://vimeo.com/21858497

https://www.artstation.com/artwork/L3gJr

https://www.youtube.com/watch?v=BTwxaoykh-E

Devil in the Details – OpenSubdiv

To preface: upon delving deeper into the uses for this tool, I’ve concluded that I have only begun to scratch the surface of what it offers within an animation workflow, doubly so as I’ve only been using it for its static properties.

From when the Catmull-Clark subdivision model was originally devised in 1978 up to modern times, the standard for subdivision has been a straightforward linear B-spline division of each polygonal face into four, with the differences averaged out, eventually granting a smooth surface, generally by the fifth subdivision or so, as pictured below.

[Image: The Standard Catmull-Clark Subdivision Model]
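To make the averaging concrete, here’s a minimal sketch in Python of the three Catmull-Clark point rules (face points, edge points and repositioned vertices), assuming a closed all-quad mesh. Rebuilding the subdivided face list is left out for brevity, and the helper names are my own, not from any particular package.

```python
from collections import defaultdict

def average(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def catmull_clark_points(verts, faces):
    # Face points: the average of each face's corner vertices.
    face_points = [average([verts[i] for i in f]) for f in faces]

    # Index the faces touching each edge and each vertex.
    edge_faces = defaultdict(list)
    vert_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[frozenset((a, b))].append(fi)
        for v in f:
            vert_faces[v].append(fi)

    # Edge points: average of the edge's two endpoints and its two face points.
    edge_points = {
        e: average([verts[v] for v in e] + [face_points[fi] for fi in fs])
        for e, fs in edge_faces.items()
    }

    # Repositioned vertices: (F + 2R + (n - 3)P) / n, where F averages the
    # surrounding face points, R the surrounding edge midpoints, and n is
    # the vertex valence. This is the 'difference being averaged' step.
    new_verts = []
    for v, p in enumerate(verts):
        n = len(vert_faces[v])
        F = average([face_points[fi] for fi in vert_faces[v]])
        incident = [e for e in edge_faces if v in e]
        R = average([average([verts[i] for i in e]) for e in incident])
        new_verts.append(tuple((f + 2 * r + (n - 3) * q) / n
                               for f, r, q in zip(F, R, p)))
    return face_points, edge_points, new_verts
```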

Pixar has been updating this method in-house over the years, and has recently released its more contemporary methods for public development under the name of the OpenSubdiv Project, with the intent of creating a standard across all 3D applications, from the inception of a model through to the rendering stage. For the sake of universal adaptation, subdivision now creates smaller complex patches of subdivision polygons around key control vertices, with the rest of the mesh made up of regular B-spline patches where possible. ‘Transition’ patches then fill the spaces in between with a dynamic arrangement of quads and tris to smooth the geometry, while each patch keeps the same shape as a standard B-spline segment. In this way only the primary vertex points take on an irregular (non-square) patch when subdividing, allowing the other patches to use minimal geometry to achieve a perfect curve in fewer iterations. The fact that this evaluation is now primarily GPU-based also means it can be represented in the viewport much more easily while working. The video below explains this in graphic detail (vid starts 7:40).

[Embedded video]

This trimester I have begun to use OpenSubdiv as an alternative to TurboSmooth for high-poly modelling. In addition to the benefits listed above, it also allows for much better edge-definition options via its crease values, without the need to add extra supporting edge loops. Pictured below, the center model in each image is the one OpenSubdiv acts upon to create the high-detail smooth on the left.

In this way I was able to create a suitable high-poly variant of a model from an object much more similar to my base mesh, keeping the poly count lower at both the low and high-poly stages.

For this model I set all of my creasing values manually, though for more complex models there is a ‘CreaseSet’ modifier available, which allows you to quickly set values for groups of edges based on angle thresholds. The video below illustrates that this can be used on very complex objects, assigning crease values to thousands of edges in a matter of minutes.
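As a rough illustration of what that kind of modifier automates, here’s a small Python sketch that assigns a full crease to any edge whose face normals meet at more than a threshold angle. The 40-degree threshold and the 0-to-1 crease range are illustrative assumptions on my part, not Max’s exact behaviour.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def crease_by_angle(edge_to_faces, face_normals, threshold_deg=40.0):
    """edge_to_faces maps an edge to its two adjacent face indices;
    face_normals holds unit normals. Returns a crease weight per edge."""
    creases = {}
    for edge, (f0, f1) in edge_to_faces.items():
        cos_a = max(-1.0, min(1.0, dot(face_normals[f0], face_normals[f1])))
        angle = math.degrees(math.acos(cos_a))
        # Hard edges past the threshold get a full crease, soft edges none.
        creases[edge] = 1.0 if angle > threshold_deg else 0.0
    return creases
```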

Given that the place these improvements really shine is in accurate pre-visualisation of the render when animating, I’m looking forward to applying what I’ve learned on the subject to character models in the coming months.

References

3dsmaxtrainer. (2016). 3ds Max Tips and Tricks: Modeling Open Subdiv vs. Turbosmooth. Retrieved from https://www.youtube.com/watch?v=ktM61yLeXfM

Autodesk. (2016). Exploring OpenSubdiv in Autodesk® 3ds Max® Extension 1 (part 1). Retrieved from https://www.youtube.com/watch?v=ENA2FAF_PIc

Autodesk. (2016). Meet the Experts: Pixar Animation Studios, The OpenSubdiv Project. Retrieved from https://www.youtube.com/watch?v=xFZazwvYc5o

Catmull Clark Subdivision. (2016). Retrieved from http://www.geocities.ws/jason_zxu/subdivision/pic/catmull-clark.gif

Introduction. (2016). Graphics.pixar.com. Retrieved 29 August 2016, from http://graphics.pixar.com/opensubdiv/docs/intro.html

Seymour, M. (2016). Pixar’s OpenSubdiv V2: A detailed look. Fxguide.com. Retrieved 29 August 2016, from https://www.fxguide.com/featured/pixars-opensubdiv-v2-a-detailed-look/

Smoke and Mirrors Part 2 – Shaders

Continuing on from the last post on general particle effects I now direct attention to the benefits of good shader application. As with last time, I’ll begin with some straightforward definitions.

Generally shaders fall into two groups. The first is vertex shaders, which directly affect existing geometry to create deformation effects within a scene. The most common uses are applications like wave surfaces for liquids, or scaling certain elements of a model.

The second is pixel shaders, which are present in the majority of modern modelling and texturing workflows. Pixel shaders draw information from maps that present normals data, as discussed in my previous post, or the likes of opacity, height, illumination and subsurface data maps: basically any information that directly affects the way a surface handles light outside of the actual geometric points.
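To illustrate the split, here’s a toy CPU-side sketch in Python of what each stage conceptually does. Real shaders would be written in HLSL/GLSL and run per-vertex and per-pixel on the GPU; the wave parameters and light direction here are made up for the example.

```python
import math

def vertex_shader(position, time, amplitude=0.2, frequency=2.0):
    # Vertex stage: deform geometry itself, e.g. a scrolling sine wave
    # for a liquid surface.
    x, y, z = position
    return (x, y + amplitude * math.sin(frequency * x + time), z)

def pixel_shader(normal_sample, light_dir=(0.0, 1.0, 0.0)):
    # Pixel stage: shade a surface point from map data (here a sampled
    # tangent-space normal) without touching the geometry.
    n_dot_l = sum(n * l for n, l in zip(normal_sample, light_dir))
    return max(n_dot_l, 0.0)  # a simple Lambert term
```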

Regarding specific shaders I’ve used in work this trimester, outside of the obvious standard normal maps, I’ve created instances for water, glowing runes and rain.

The water shader was used quite extensively in the Belisaere scene, as discussed in my previous post, and was also repurposed as the basis for the runes moving across the charter stone in that scene, plugged instead into the emissive colour channel, with alternating sine and cosine waves driving the pulse.

[Image: The charter stone]
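The pulsing itself boils down to very little: a sine or cosine of game time, remapped from [-1, 1] to [0, 1] and multiplied into the emissive colour. A quick sketch of the maths (the speed value is arbitrary; in UE4 the equivalent is built from Time and Sine/Cosine material nodes):

```python
import math

def emissive_pulse(time, base_color, speed=0.5):
    # Remap sin(t) from [-1, 1] to [0, 1], then scale the emissive colour.
    pulse = (math.sin(time * speed) + 1.0) * 0.5
    return tuple(channel * pulse for channel in base_color)
```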

The Quiet Alley scene made use of a pretty standard setup that paints ‘moisture’ into the blue channel of the vertex colours. The shader simultaneously uses a lerp function with a defaulted normal to decrease the overall normal detail of affected ‘wet’ areas, making it appear as though water is pooling in the cracks.
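Conceptually the ‘wet’ lerp looks something like the sketch below, where the blue vertex-colour channel drives a blend from the detailed tangent-space normal towards a flat default normal. The names and values are mine, not the actual material graph:

```python
def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def shaded_normal(detail_normal, vertex_blue):
    # vertex_blue is the painted 'moisture' mask (0 = dry, 1 = wet).
    flat_normal = (0.0, 0.0, 1.0)  # a defaulted tangent-space normal
    return lerp(detail_normal, flat_normal, vertex_blue)
```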

In the end there were a lot of points in the scene I wanted to expand upon and just didn’t have the time to. The original aim was to have a similar pan/rotation setup to the charter stone applied to the window spaces, with visual noise to make it seem as though life was taking place inside. As I added rain to the scene at a later stage, my plan is also to add the raindrop shader to the floor tiles, with an additional layer applied to the normals to show the directional flow of the pooling rain towards the edge of the building. This will take place over the coming weeks as I continue to toy with the scene, with the intent of adding it to my portfolio.

References

Allen, B. (2016). UE4 Ocean Water Shader. Retrieved from https://www.youtube.com/watch?v=u82fxXHBFhA

Hired GunGames. (2016). Unreal Engine 4.1.1 basic Water shader (Calm Water) Tutorial. Retrieved from https://www.youtube.com/watch?v=dpEK1APS3jM

OnlineMediaTutor | Maya modeling & animation tutorials. (2016). What is a Shader? | Pixel and Vertex Shaders. Retrieved from https://www.youtube.com/watch?v=TDZMSozKZ20

Opara, A. (2016). Raindrops Shader Tutorial – Unreal 4. Retrieved from https://www.youtube.com/watch?v=ymAuk1z6f-g

Pulsing Emissive Color – UE4 AnswerHub. (2016). Answers.unrealengine.com. Retrieved 28 August 2016, from https://answers.unrealengine.com/questions/65309/pulsing-emissive-color.html

Technologies, U. (2016). Unity – Manual: Vertex and fragment shader examples. Docs.unity3d.com. Retrieved 28 August 2016, from https://docs.unity3d.com/Manual/SL-VertexFragmentShaderExamples.html

Water Shader Tutorial – Epic Wiki. (2016). Wiki.unrealengine.com. Retrieved 28 August 2016, from https://wiki.unrealengine.com/Water_Shader_Tutorial

 

Shot Deconstruction – Minas Tirith

Post post-mortem, I present the shot that inspired the shot. I concepted my shot for the Belisaere project as a vista of the palace road from the top of the aqueduct, showing the expanse of the city the archers can survey from atop it, and the sheer size of the palace on the hill.

[Image: My Belisaere concept shot]

Then I considered where I had subconsciously picked this up from, and arrived at something I could draw elements from to get the shot framed just right: the establishing shot of Minas Tirith in ‘Return of the King’.

[Image: The establishing shot of Minas Tirith in ‘Return of the King’]

Lord of the Rings is king among high fantasy; it contains tropes and overarching themes that translate well to almost every fantastical setting. So what makes this shot so well composed, and how can I apply that to my own? Let’s begin with the obvious.

Colour and Tone

A rather obvious example in this case, but it does fit well with what I ultimately want, in an inverted sense. Minas Tirith and Gandalf are our two standout points of interest in the scene. Gondor is under a blanket of shadow cast from Mordor, our source of evil just off screen. To contrast this, the city is cutting through the gloom like a literal blade, the only true white in the image being caught in the white of the stone and the white of Gandalf’s robes. The light itself has no direct source, vaguely blooming from the upper right of the shot in such a way that it almost seems cast by the city itself.

The importance of this lies in the symbolic value of white as representative of purity, goodness and safety; light itself is associated with the same values. From this we can infer, without even knowing who and what the scene is depicting, that we are viewing a bastion of the ‘good’ in a shadowy world, and its champion.

In my own scene I can turn this around a bit. The city and palace in shot are controlled by the undead, and therefore strong use of shadow within the scene can imply their control: a dark bastion for the hostile force.

Figure to Ground

This is the concept that you can create a strong sense of scale and importance within the composition of an image by placing a human figure strategically within the shot. In the Minas Tirith shot Gandalf is our figure, and together with the mountains in the background he creates a grand sense of scale for the city, as well as conveying the distance that still lies between the two.

Not too much to clarify there: I have a lone figure in almost the same position as Gandalf, a huge structure in the distance, and a standard-sized mid-ground object (the houses) for a sense of distant scale.

Leading Lines

Leading lines don’t actually have to be literal lines. In the Minas Tirith scene the leading lines are the slopes of the mountains, the turrets at the base of the city, even the mud on the plains and the blades of grass, all directing the eye to the centerpiece of the city.

In my scene the leading lines converge on the palace: the walls, the peninsula, the slope of the buildings and the king’s road all direct the eye to that ominous centerpiece.

Purely from an aesthetic and framing standpoint, the final stills the shots come to rest upon hit the correct positions for the rule of thirds, with major points of interest landing on the third-line intersections if the image were divided up.

Overall the Minas Tirith shot creates a grand-scale scene which immediately paints a world with good and evil struggling for dominance. By effectively using the elements listed in this analysis I can hope to achieve similar themes in my own shot (or hope that I already have).

References

9 Composition Techniques That Will Make Your Images Eye-Catching on a Biological Level. (2016). No Film School. Retrieved 28 August 2016, from http://nofilmschool.com/2015/09/9-composition-techniques-make-images-eye-catching-biological-level

Creating and Using Leading Lines. (2014). Photography Life. Retrieved 28 August 2016, from https://photographylife.com/creating-and-using-leading-lines

How to Analyse Movies #2: Signs, Codes & Conventions. (2013). Film Inquiry. Retrieved 28 August 2016, from http://www.filminquiry.com/analyse-movies-signs/

Static and Dynamic Composition, Lead Room, Rule of Thirds | Elements of Film Photography. (2016). Elementsofcinema.com. Retrieved 28 August 2016, from http://www.elementsofcinema.com/cinematography/composition.html

 

World Builders Post Mortem – Belisaere

So this is the story all about how our project got flip-turned upside down, so I’d like to take a minute, just sit right there, and I’ll tell you what became of the town Belisaere.

My own feelings on the way this project played out are a bit dismaying, not because I think the group worked poorly overall, but because it was one of the suggestions I put forward, and in hindsight (I honestly realised it by week 3) we were never going to be able to do it justice within the time-frame. This is not to say the team didn’t work admirably; it’s just a matter of scope. The scope of producible assets led to an overall repetitive feel in the final scene, clear gaps in skills and associated program knowledge between members caused issues in workflow speeds, and tasks might have been divided differently to achieve more in the timeframe. Additionally there were general misinterpretations of sections of the brief, and of ways of overcoming steps early in production, which led to some fairly severe bottlenecks further down the track.

THE GOOD, THE BAD, THE CRITICAL REFLECTION

Starting from the beginning, there were issues relating to pre-production work. The book in question, ‘Sabriel’, had only been read by myself previously, and as such the understanding of the rest of the group was coming from the three short chapters I’d set aside as the best descriptions of the environment. Going on record: this was almost entirely my fault, and in hindsight it was not a great piece of source material to work from. Much of the greater description of the town comes in broken pieces across the series as a whole, and I found my own initial interpretation to be skewed from what was being described, to the point where we had to revise our entire initial interpretation from French to Mediterranean architectural inspirations. While none of this was world-ending by any means, it definitely slowed pre-production right down, leaving us initially without a clear vision for an art bible, compounded by the fact that colour concepts were produced by only three of the members, the other two opting for a more placeholder 3D approach, which ultimately also became what we had assumed at the time to be the requirement for a pre-vis. These were not intended as shortcuts; the work done at this stage by Ed and Heaven in particular was far and away the most impressive. My own concept image was unfinished, though from how things eventuated it probably became the basis for a good deal of the composition in the final product.

What all this amounted to, however, was a fairly weak pitch, and a pre-vis rendered as a 3ds Max file. Strictly speaking this was all the plan at the time: under the assumptions we made regarding the FBX import functions available in Unreal 4.12, we believed our entire scene could be adequately set up in Max and then transferred to Unreal for rendering. Here we made a couple of crucial mistakes, which could have been solved with better prior research. Foremost, the exported scene doesn’t acknowledge instancing or referencing, instead treating each model in the scene as its own entity. For reasons of general file size this is already an issue, as well as making any changes in-engine fairly long-winded, given the objects dwell in a sub-file separate from the entity list for the scene. In addition, we could have avoided a good deal of pain even using this method if materials had been created and imported from Max with the FBX, as in the end Ed spent a good deal of time going through and selecting each instance of specific models to apply textures that otherwise could have been handled in a single application. In truth, though, the lesson to be learned is: create your pre-vis in Unreal if your intent is to make that the end of your pipeline, so models can simply be swapped out at a later time and each genuinely remains an instance, greatly cutting down on system resources and file size, as well as keeping your digital work-space neat and easy to read.

Asset production itself suffered a bit simply on workload divide and, as previously mentioned, some misconceptions about members’ prior knowledge and working speeds. In certain respects it wasn’t a bad thing that people tried to tackle new ground here; it was clear Duncan picked up a lot from the exercise, which really is the intention of such assignments. Things took longer than expected, however, especially on my end. I made the mistake of treating two detailed character models as a trivial matter, and the results for the guard character in particular were less than stellar (though I was happy to note how much faster my ZBrush workflow has become). In the end this meant shots were vague come the pre-screening in week 6, and as a result the feedback we received was limited, as there wasn’t really much on which to comment. I like blaming myself in these circumstances (probably some acute narcissism there), though I was likely putting as much time into the activity as anyone else involved. It likely needed more time overall, and better communication and cohesion (I was fully guilty there) on a regular basis to get things done. But mostly just time. I in particular still struggle to manage tasks as they mount up, and I had my focus on the pitch for final project that week. That doesn’t excuse me being lax; I just know it was a factor in my own poor time management. I have real trouble actually sticking to a schedule outside of class, and it’s something I desperately need to sort out. I’ve recently found through final project that I can remain more focused as long as I’m verbally in a meeting (Skype) with the group I’m working with, so that may be one solution for the future.

We did eventually get a result we could be fairly proud of, in accordance with what we set out to do from our concepts, one that showed off the talents of all members strongly in the framing of the shots, so I’d qualify that as a success. As gloomy as all that sounded, everyone did work hard, communicate excellently and complete every task they were set to the best they could offer within the scope of time. Overall this was a pretty excellent group to work with.

TOOLS AND PROCESS

Given the relatively wide difference in backgrounds, skills and practices among the members, and the fact we were all working on the same overall scene for this project, we stuck to a fairly straightforward pipeline: low-poly with high-poly subdivision for the purpose of normal-map baking in Max, taken into Quixel Suite 2 for texturing, then imported into Unreal Engine 4 to compose the scene, lighting, particles, shaders and cameras for the final render. In addition to this I used ZBrush into 3D-Coat, prior to the standard workflow, to create the character models.

Asset Modelling

I’ll talk specifically about my own workflow in a moment; first let me address the overall way the team was coordinated in the task. From the first day we had a quality control rule in place, relating to the parameters listed in the art bible and associated paperwork, in addition to an agreement to revise all assets produced each week as a group in the Wednesday class. Overall this fostered a very consistent style across the assets produced, in addition to providing an excellent chance to share knowledge on different modelling techniques and workflow elements people hadn’t previously tackled. There was always the potential for this to backfire and limit the work being produced, but overall I’d say the team found their stride, and outside of a few small instances where another member spent some time tweaking or cleaning up small geometry and unwrapping mistakes, the whole pipeline was handled by the person producing the model until it came time to bring it into Unreal.

If anything was majorly lacking in the workflow, it’s that an inconsistent number of objects received any kind of normal bake. This was an experience thing, and it was left too late in production to do anything about, despite being integral to an efficient reduced-polygon workflow. It could be avoided in future by handling workflow study in the preliminary stages of the project.

Sculpting

This section pertains mostly to myself, though Duncan did make a concerted effort to learn a new tool in ZBrush during the thick of the process, for which he should be commended. My primary role for the project, until it came to compositing, was the character models for the scene: the guards of the city and the undead infesting it (and the background buildings).

[Image: Mah Bois]

I’ve used ZBrush at various points across studio projects in the past, so this was building on a present skill set rather than developing a new one. I got three major things from the practice. The first is a general efficiency increase: sculpting two character models to rigged completion in less time than it had previously taken me to do a single one, and to the same if not a better standard. The second, somewhat related to the workflow issue I brought up with the Max modelling, is getting the correct divisions of the model, splitting off its different parts (arms, legs, helmet, face, body) and producing efficient unwraps in order to get a good normal bake. This worked, with the exception of a couple of small errors around the hands, which can easily be fixed by a slight adjustment to the cage.

The third and most important point was learning the alternative forms of retopology available. The guards were retopologised using the standard method I’ve used in the past, a draw-quads workflow in 3D-Coat to get an efficient simplification of the mesh, as I’ve shown before in the Sabriel and Metum projects. The alternative, which I tried when retopologising the undead model, was using the ZRemesher tool in ZBrush to quickly decimate the mesh for use in a real-time scene, as I covered in my research post last trimester. Though I’ve done this before, it was the first time it was for a model that had to be able to move. What I found on trial was that where there’s clear edge-loop flow, the results are pretty much spot on to what you’d ordinarily want for a deforming character, with the exception of adding triangles at bends in limbs, which is an easy fix in a program like Max. The issues arise when tackling something with a more complex joint, as exists for instance around the shoulder of the guard, where it will create intersections on the joint instead of a meeting of two loops. Drawing on loops using the subsections of the tool will grant some semblance of what you could get manually. What I came to understand in the end, though, is that it can be a useful shortcut, definitely still in the case of statics, as it will often do better than a low-poly you could construct yourself, unless the object is insanely simple. If the model is to be rigged, however, unless you get the results you want within five minutes of trying, it’s likely going to be faster just to go with the more standardised manual method.

Texturing

Not a great deal to say here: the same Quixel workflow we’ve used on several projects now. We made sure we were using the same materials where they would crop up together in the scene. There were instances, as there usually are on a project using the suite, where more could have been done with the textures. Most of the group was still learning the suite on the go, so it’s likely just a matter of practice and unfamiliarity, though time management did factor in on some level, with texturing still being finished right up to the line.

There were also a few small instances where texture islands needed rotating that weren’t caught until the compositing stage. Again I feel it’s down to experience with Quixel, as the workflow it and other specialised programs like Substance use requires UV islands to be oriented correctly on the map to get the results you want.

I also made the dirt and floor-tile textures in Substance, through a fairly simple stacking of the tiling presets available with the program. In future I’d like to explore Substance as an alternative to Quixel, as it seems a very versatile and powerful tool, with more standalone options and greater direct integration with the game engines I frequently find myself working with.

Oh, and 8k textures. NEVER. AGAIN.

Composition and Lighting

I had the pleasure of handling a lot of the latter stages in the Unreal 4 scene. Most of the actual arranging of the assets had already been done by Ed, so this was mostly just the application of a few textures and some delicate tweaking of the light.

With the textures made in Substance I set up a vertex-paint shader in the same fashion as I’d used before for the blood and glowing green stuff in the Gaslit Lab scene, this time to paint dirt and grass into damaged sections of the road. I created a water shader using a couple of normalised wave-pattern noise maps to flow through the aqueducts and sit in the fountain, as well as a gravity-weighted water particle for the fountain to dispense. The lighting was a tad rushed, and really needed revisions after what the render revealed. The aim was to use a height fog and extended depth of field to create an ocean haze, given that the city sits on a peninsula in an inland sea, and to an extent I managed to achieve that, but on revisiting it I realised it could have been more effective with a bit of toning down.
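For the flowing water, the core trick is just scrolling UVs: two tiling normal maps panned in different directions over time, then blended so the pattern never visibly repeats. A rough sketch of that idea, where `sample` stands in for a texture fetch and the directions and speeds are invented for the example, not the values from the actual scene:

```python
def pan(uv, direction, speed, time):
    # Scroll the UV coordinates along a direction, wrapping at 1.0
    # so the tiling texture repeats seamlessly.
    return ((uv[0] + direction[0] * speed * time) % 1.0,
            (uv[1] + direction[1] * speed * time) % 1.0)

def water_normal(sample, uv, time):
    # Blend two differently-panned wave-pattern normal maps.
    n1 = sample(pan(uv, (1.0, 0.3), 0.05, time))
    n2 = sample(pan(uv, (-0.7, 1.0), 0.03, time))
    return tuple((a + b) * 0.5 for a, b in zip(n1, n2))
```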

The fountain and aqueducts could also have benefited from a reflection capture to get some cooler tones happening around those spaces, or even just a blue point light near the fountain. My previous experience lighting environments has been entirely interior, though, so this was a good lesson in blending light forms between exterior and interior spaces, particularly around the undead shot. The long and short of it is that I need more practice, and I didn’t really give Ed enough time to run his eyes over it before the final render, as his own experience lighting in the engine could only have improved it.

I also cobbled together some grass and low-poly buildings to fill background space and areas where the city was being taken back by nature, which I then applied using the object painter in Unreal 4. For scoping reasons we put in place early, we decided to keep the vegetation minimal, but after seeing what was done by the other teams, were we to do the project from scratch I think we’d make much greater use of store assets or items from the demo levels in Unreal to fill out areas with more detail. Some additional moss texture painting and the like would stretch a long way as well.

THE WRAP

It was a long process, with some speed bumps that could have been avoided with a bit more planning and a few more early considerations, but we got there in the end, and everyone learned something.

Smoke and Mirrors Part 1 – Particles

This is the first of a two-part research post, a topic I’m choosing to tackle almost purely on the grounds that it made my very average modelling and scene composition this trimester look somewhat presentable. I’ll aim to keep non-specifics brief here, as those who will definitely have to suffer my waffling have already covered this topic at some point.

A ‘particle system’ is a process used across varied disciplines of computer graphics to generate appropriate visual representations of ‘fuzzy’ phenomena; i.e. things like flames, snow, water, sand etc. that are complexly affected by outside forces. This is achieved by spawning a large number of small objects or images possessing variables for range, often with a limited lifespan within the space. Particle smoke is a particularly (see what I did there) easy example to illustrate this with, as it is generally made up of only one or two images, expanding in a certain range and direction and altering in opacity before reaching an assigned end of lifespan, appearing to dissipate. Then it spawns again. Wew.

The important thing to note here is that particle systems maintain their usefulness in real-time graphics (and more complex stuff on a grander scale) by imposing a simple but necessary limit upon the number of images or objects, referred to as ‘sprites’. Once this upper limit is reached, there will at any given time be the same number of particles loaded into memory on a stack awaiting spawn. As a particle on screen ‘dies’, the next on the stack takes its place, and so on. As such there can be situations where the varying lifetimes of particles cause the stack to constantly rearrange, creating the truly unpredictable, diverse visual situations that make a particle system the clear choice over more deliberate traditional methods.

[Image: Particle swapping diagram]
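A minimal sketch of the idea in Python: a fixed pool of sprites where a ‘dead’ particle is recycled as the next spawn, keeping the hard upper limit described above. It recycles in place rather than modelling a literal stack, and all the values are arbitrary:

```python
import random

class Particle:
    def __init__(self):
        self.respawn()

    def respawn(self):
        self.age = 0.0
        self.lifetime = random.uniform(1.0, 3.0)            # varied lifespans
        self.position = [0.0, 0.0]                          # emitter origin
        self.velocity = [random.uniform(-0.5, 0.5), 1.0]    # drift upwards

    def update(self, dt):
        self.age += dt
        if self.age >= self.lifetime:
            self.respawn()   # the 'dead' sprite becomes the next spawn
            return
        self.position[0] += self.velocity[0] * dt
        self.position[1] += self.velocity[1] * dt

pool = [Particle() for _ in range(64)]   # the hard upper limit on sprites

def tick(dt):
    for p in pool:
        p.update(dt)
```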

I’m somewhat keen on Unreal 4, so the rest of the post is going to focus largely on that. In Unreal you’ve got a couple of major elements that make achieving what you want with particles faster. Firstly, particles are handled by an emitter module, which serves as a spawn point, potentially for multiple particle effects at the same point. In Unreal 4 this is represented visually by the Cascade editor, which displays each individual particle system as a stack of parameters in a column. A few of the things you might edit at this stage include the lifetime of your particle, directional forces such as wind and gravity, collision meshes, sphere of influence, or changes to colour over time. For instance, if you’re aiming to create a fire system, you would determine parameters for colour over life, velocity of the flame sprite, directional influence etc., but then also create secondary systems for sparks and smoke that spawn from the same emitter but at different points during its life, with their own sets of expiry and directional influences, and in the case of the sparks likely collision targets and behaviours.

My own use this trimester hasn’t really gone beyond a single sprite-emitter system per emitter, mainly due to my own poor time management. In future works (some of which may re-use these specific assets) I plan to make greater use of secondary systems, and also to integrate shaders closely into and around the particle effects, as I will discuss in the following post…

References

Epic Games. (2016). 1 – Particle Terminology. Retrieved from https://www.youtube.com/watch?v=OXK2Xbd7D9w

Greer, M. (2012). Particle Systems From the Ground Up – Build New Games. Buildnewgames.com. Retrieved 28 August 2016, from http://buildnewgames.com/particle-systems/

Routh, B. (2016). Unreal Particles: Smoke. Retrieved from https://www.youtube.com/watch?v=YqyMle6AHCI

Digital Sculpting – Progression via Regression

This is by far one of the longest-running topics in 3D graphics, as it goes back to the roots of what people have been trying to achieve in cinema since its inception. The fairly recent insurgence of 3D over the last 20 years has replaced large portions of what was previously either live-acted or practical effects in film.

So why did I name the post ‘Progression via Regression’? Well, the technical aspect of ‘digital sculpting’ as it existed until quite recently has tended to outweigh flat artistic merit, in the sense that more traditional techniques, transferable from mediums such as clay or marble sculpture, couldn’t be brought to the table.

Back in the dark ages (or the renaissance, depending how you look at it) known as the 90s, the prospect of such an application was only a thing of dreams (well, it was in development). We were just starting to see Pixar produce feature 3D film in a fashion that would be taken seriously. It’s during this time as well that we see the rise of the now very standardised NURBS and subdivision pipeline that we still use prominently in 3D modelling today. This is still integral to the knowledge base of any 3D artist, of course, but if your primary concern is 100% to get something that LOOKS the bill, outside of the work that needs to be done later to get models functioning for animation topology or efficiency, then it becomes a somewhat cumbersome and deliberate process that is a skill set unto itself, one where more traditional techniques can’t be brought to the table.

[Image: NURBS (Non-Uniform Rational Basis Spline) surfaces create averages between vertex points to produce higher-detailed object sculpts from low-density geometry]

One of the first examples of a more traditional sculpting application, ZBrush was introduced at the 1999 SIGGRAPH conference in Los Angeles, offering the ability to manipulate a dynamic mesh in a manner simulating tools from a traditional sculpting background. In a similar vein, the main competing program is Mudbox, developed initially by Skymatter, a studio founded by Tibor Madjar, David Cardwell and Andrew Camenisch, formerly artists in the direct employ of WETA Digital (yes, that WETA), and first used to produce assets for Peter Jackson’s 2005 rendition of King Kong. Mudbox has since been bought by Autodesk and integrates near seamlessly with their software.

[Image: A comparison of Mudbox to traditional clay]

These work on a workflow very similar to that of a clay sculpt. If I were to use ZBrush as an example:

1. Reference and Roughing Out

As with most artistic endeavors, the first thing you should do is acquire and set up appropriate reference material. Most 3D suites offer the ability to place your reference directly in the scene, and ZBrush is no exception.

What ‘roughing out’ refers to is much like what you can see in the ZBrush image on the right. In traditional sculpting this would be your initial shaping of the core piece of clay with your hands. At this stage you’re not aiming for details, just to get a rough shape in place that you can work detail into on a much smaller scale later. In a program like ZBrush you’d make good use of your scale and move tools here; they become your hands, as such.

2. Adding, removing and maneuvering

When presented in a traditional fashion, your sculpting tool kit will incorporate a wide variety of knives, sticks, brushes, wires etc.; basically any physical object can give a different effect when manipulating your clay. As you refine your model, sections of that clay may need to be removed volumetrically, or new clay added to build upon the rough foundations. In reality, this involves physically adding more clay.

[Image: Traditional sculpting tools]

In ZBrush, there’s a brush for that.

3. Cutting and re-positioning

Before any small details are added, it might be necessary to shift something as large as an arm from its foundations to a new position. In a physical sense you’d use a knife to slice the wet clay away and then rejoin it elsewhere on your model (it might only move a centimetre or two).

In ZBrush you can mask off whole sections of a model to be moved, adjusted or deleted altogether. Alternatively, any major geometric divide, such as a limb, can be listed as its own subtool, either from inception or using the aforementioned technique, and treated as a separate model for the sake of position adjustment, while any overlapping geometry is still treated as the same surface for smoothing.

4. Detailing

This is the end of your workflow as far as the sculpt itself is concerned. Detailing a traditional model is either a matter of getting in very close with concise tools or, if you’re looking to repeat detail in a more uniform fashion, employing ‘pressing’ – the act of pushing a mould into the clay to indent it uniformly – or ‘sprigging’ – using an indented mould to fashion your detail from a more malleable material, such as plaster, then attaching that to the main model by pressing it to the wet clay.

In ZBrush, you can set your brushes very small to eke out fine details in much the same way, and your pressing and sprigging come from using custom stroke, alpha and texture brushes, as illustrated in the video below.

References

Comparing Traditional and Digital Sculpting (Sculpting Concepts) (Digital Sculpting with Mudbox). (2016). What-when-how.com. Retrieved 17 May 2016, from http://what-when-how.com/digital-sculpting-with-mudbox/comparing-traditional-and-digital-sculpting-sculpting-concepts-digital-sculpting-with-mudbox/

Holland, P. (2016). Clay Sculpture Technique – An Introduction. Figurines-Sculpture.com. Retrieved 17 May 2016, from http://www.figurines-sculpture.com/sculpture-technique.html

Mahon. (2011). Digital sculpting vs. traditional sculpting • Chest of Colors. Chest of Colors. Retrieved 17 May 2016, from http://chestofcolors.com/digital-sculpting-traditional-sculpting/

Polygons to NURBS. (2016). Retrieved from http://www.3dtutorialzone.com/Polygons_To_NURBS/1.jpg

Raitt, B. & Minter, G. (2016). The Minters. Theminters.com. Retrieved 17 May 2016, from http://www.theminters.com/misc/articles/derived-surfaces/index.htm

Visual Fundamentals – I’m going to talk about lighting

Because I can. Bits of what I go over here can be seen in the Gaslit Labs scene, and to a lesser extent in the rather rushed composition I put together to display Metum in Unreal. Frankly, after looking into it further I could have done better in general with my lighting composition, but some of what I learned shows through, and the rest are just points to work on. So without further ado:

Fundamentals of Lighting (for 3D and in general)

Starting with…

Things I’ve done well

Bounce light

Bounce light is, much as the name implies, ambient lighting created by bouncing light off of reflective objects.

Modern rendering has seen the introduction of global illumination, and with it manual bounce lighting has become less essential than it was previously, but it’s still important to consider if you want to get the best out of your scene space, given that what the renderer interprets as true bounce lighting may not actually end up all that visible.

For this reason, and particularly for the sake of adding coloured highlights via your bounce, it can be beneficial to manually simulate a bounce light using a point light. To get bounce light tones to look natural, use a low-intensity, high-saturation colour to add illumination to corners that would otherwise remain dark. Doing so will add life to a scene that global illumination alone might have missed.
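One way to think about the ‘low intensity, high saturation’ rule is in HSV terms: take the hue of the surface the light would be bouncing off, keep saturation high and value (intensity) low. A tiny sketch, where the default numbers are arbitrary starting points rather than rules:

```python
import colorsys

def bounce_light_color(hue, saturation=0.85, value=0.15):
    # hue in [0, 1], e.g. a warm hue for light bounced off terracotta.
    # High saturation keeps the colour readable; low value keeps it subtle.
    return colorsys.hsv_to_rgb(hue, saturation, value)
```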

Spill light

Indirect light comes in many forms, and spill light is the next on the list. Spill light is the secondary illumination you get off a directed light, illuminating the area directly surrounding where it falls. To use an example from the Gaslit scene, I applied manual spill light by adding a second and third point light to the area surrounding the wall lanterns.

[Image: Look, ma, dodgy spill light!]

So yeah, at a fairly introductory level that functions, but I could have made it even more subtle in the end to achieve a better effect. The projecting object, in this case the lamp, could also benefit from a separate collision model for lighting purposes, as at present it’s difficult to allow for accurate shadow casting with the additional spills casting shadows.

Dividing up the space

This is a bit more of an abstract concept, and relates to composition as much as to keeping lighting conditions accurate to reality. ‘Dividing up the space’ refers to the way a larger scene can have very different lighting across the space to create more visual interest. On a more technical level, if you have a room lit from the windows, the areas in the corners or the centre of the room are likely to be lit very differently. Using the lighting from Gaslit as an example again: count the light spilling from the broken window as your key lighting, with the room directly beyond it being one zone, the corner with the writing desk another, then the area around the tank where the primary source of light becomes the green glow, and finally the very dreary and obscured lighting conditions of the factory floor.

These sorts of subtle changes to the light can help make a space seem much bigger; those 4 shots are all within 10 meters of each other.

Things I didn’t do so well

Key lighting

Your key light is the primary light source when considering scene composition. As a rule it should be the strongest light in the scene, while also being the strongest light affecting your focus point. From the perspective of a dynamic game environment this can be bent a little, but if you have control over where the light points in a situation, you should always be striving to follow these rules.

Most of the key lighting in the Gaslit scene was already done for me when I got hold of it, and it does a pretty good job of what was intended. I added the key lighting around the workbench myself, so I can say in earnest I did an alright job there. The instance I’m not terribly proud of is how I handled the light in Metum’s pose scene.

[Image: Metum’s pose shot – a pretty prime example of over-reliance on global illumination]

It would be fair to say I failed to light the character properly at all for a pose shot. This was a perfect opportunity to have complete control over light sources, given it was a shot with a central pivot. In truth, despite the scene itself being fairly well lit overall when zoomed back, there is no conditional lighting at all in this shot, which is something I must ensure never to repeat for a posed turnaround of all things.

Optimising my models

This again is made apparent by the screenshot above. In this case optimising the model refers to making sure the geometry is not so much efficient as smooth enough in all places to properly interact with the light hitting it. Overall most of Metum’s model works fine in this sense, but the mechanical arm is mostly made of hard 90-degree-plus edges, which shade as hard edges as well. Where the light over the rest of the model graduates across the geometry as it fades to shadow, the arm shows an overly hard edge. As a general rule with geometry this is something I need to improve, by at least having an appropriate normal map baked to fake a curved edge for lighting purposes if nothing else.

Construct from reference

This is a bit of good and bad again, as the lighting in the Gaslit scene does draw from the compendium of reference photos we compiled for the project, but it still doesn’t truly capture what the lighting should be for that situation. It could always use tweaking.

This is pretty self-explanatory in any sense: when you go to light a scene, much like anything else in visual artistry, it will only be better if you draw from reference to reality where it’s available.

When I go back to do the Wastes for Metum’s scene, for example, I’ll spend a good deal of time getting the lighting to look something like…

[Image: Mann’s planet from Interstellar]

References

Birn, J. (2016). Top Ten Tips for More Convincing Lighting and Rendering | | Peachpit. Peachpit.com. Retrieved 16 May 2016, from http://www.peachpit.com/articles/article.aspx?p=2165641

Bounce light. (2016). Retrieved from https://www.nyip.edu/images/cms/photo-articles/bounce-basic-setup.jpg

How to Light a 3D Scene. Overview of Lighting Techniques. (2013). 3D-Ace Studio. Retrieved 16 May 2016, from http://www.3d-ace.com/press-room/articles/how-light-3d-scene-overview-lighting-techniques

Planet Mann, Interstellar. (2016). Retrieved from http://vignette4.wikia.nocookie.net/interstellarfilm/images/5/5c/Mann’s_planet.png/revision/latest?cb=20150322004702

Standard lighting. (2016). Retrieved from http://3.bp.blogspot.com/-8pPZ7uUy3v0/Ui3r91_Ib-I/AAAAAAAABQw/gWkPsOo0D14/s1600/Light_Color_Lecture_sm-6.jpg

Three Point Lighting Tutorial. (2016). 3drender.com. Retrieved 16 May 2016, from http://www.3drender.com/light/3point.html

Re-topology – The Art of Decimation

As I’ve already touched on, basically any modern pipeline with a budget on polygons that isn’t intentionally going for a low-poly aesthetic is going to incorporate retopology in some way. Retopology is the process of creating a generally lower-topology version of a more detailed high-poly mesh (though there are cases where you just want slightly different topology), which can then have the detail projected onto it via normal, ambient occlusion, cavity, thickness maps etc. In the end what you’re aiming for is the closest shape you can get to the dominant topology of the initial object, with the lowest polygon count that still allows for proper shading and, in the case of things that need to deform like character models, proper arrangement of edge loops.

So what then makes good topology? The same rules apply that normally would if you were working to a low-poly pipeline. Keep all you can to quads, and maintain edge flow in a fashion that follows the natural flow of your surface. For an organic model this generally means tracing muscle definition, and making sure your edge loops terminate in areas where it’s easiest to hide them, such as under the arms, or drawing in along the spine. If there are distinctive shapes on your high-poly, follow their edges in much the same way. The cleaner your edge flow, the more easily you can attain good results when animating.

Most 3D suites these days come packaged with some tool that allows retopology. Generally, unless the object is static, this will involve a fairly lengthy manual process of picking out topology quad by quad (or triangle by triangle) across the face of the original model until you have the results you want. Given my workflow is so heavily reliant on ZBrush, I’ll go over a couple of the options available for making a sculpted object usable in a real-time engine in short order.

Firstly, in ZBrush alone there are a couple of options you can begin with. The ZRemesher tool provides powerful options for automatic retopology, along with the ability to manually re-position verts or readjust rings, as you can with the more general geometry-overlay method of 3ds Max, for instance.

The video below details how ZRemesher can be used to quickly and automatically retopologise an object, or be set up to remesh following strict guidelines for topology density and specific edge loops.

[Embedded video]

I gave this a bit of a go myself when trying to make some quick objects for a turnaround deck, though I just used the initial remesh because it suited the purpose well enough to get some rocks happening. Ultimately I didn’t end up using it, in favour of a more standard 3D-Coat arrangement (which I’m about to go over), but I’ll post it here anyway to illustrate the point.

For many applications involving stationary objects this would be enough, and if you were to take your time with the method shown in the video above, it would generally be suitable for animation deformation as well. I’ve become fairly accustomed to 3D-Coat as part of my pipeline, though, so I’ll go over a couple of the options it presents.

For Metum I went through a fairly slow but reliable approach to retopology that saw me using the quad overlay tool to reconstruct the mesh. 3D-Coat does, however, also offer options similar to those we’ve just looked at in ZBrush, as far as generating topology by drawing in guidelines and defining shapes for certain areas is concerned. It also possesses a useful tool to drag geometric primitives out over the top of your high-poly mesh, which then collapse inward to snap to the surface, and can be adjusted in small ways as needed from the existing geometry.

[Image: A cylinder primitive from the retopo tool, dragged into the scene over my ice pillar, where it collapses down into shape]

This method could very easily be used to retopologise a more complex mesh in pieces as well; extremities like the arms and fingers of a character model could quite quickly be retopologised and joined at the seams, for example.

References

CGSociety Forums. (2012). Retrieved 16 May 2016, from http://forums.cgsociety.org/archive/index.php?t-1064185.html

Polygon Table – Help Building One. (2009). Forums.newtek.com. Retrieved 16 May 2016, from http://forums.newtek.com/showthread.php?98754-Polygon-Table-Help-Building-One

XY01. (2016). zBrush: High poly sculpt to Low poly mesh and normal map workflow. Retrieved from https://vimeo.com/111813917
