In Unity you can write your own custom shaders, but it’s no secret that writing them is hard, especially when you need shaders that interact with per-pixel lights & shadows. In Unity 3, that would be even harder because in addition to all the old stuff, your shaders would have to support the new Deferred Lighting renderer. We decided it’s time to make shaders somewhat easier to write.
Warning: a technical post ahead with almost no pictures!
Over a year ago I had a thought that “Shaders must die” (part 1, part 2, part 3). And what do you know – turns out we’re doing this in Unity 3. We call this Surface Shaders, because I have a suspicion “shaders must die” as a feature name wouldn’t have flown very far.
Idea
The main idea is that 90% of the time I just want to declare surface properties. This is what I want to say:
Hey, albedo comes from this texture mixed with this texture, and normal comes from this normal map. Use Blinn-Phong lighting model please, and don’t bother me again!
With the above, I don’t have to care whether this will be used in forward or deferred rendering, or how various light types will be handled, or how many lights per pass will be done in a forward renderer, or how some indirect illumination SH probes will come in, etc. I’m not interested in all that! These dirty bits are the job of rendering programmers, just make it work dammit!
This is not a new idea. Most graphical shader editors that make sense do not have “pixel color” as the final output node; instead they have some node that basically describes surface parameters (diffuse, specularity, normal, …), and all the lighting code is usually not expressed in the shader graph itself. OpenShadingLanguage is a similar idea as well (but because it’s targeted at offline rendering for movies, it’s much richer & more complex).
Example
Here’s a simple – but complete – Unity 3.0 shader that does diffuse lighting with a texture & a normal map.
Given pretty model & textures, it can produce pretty pictures! How cool is that?
I grayed out bits that are not really interesting (declaration of serialized shader properties & their UI names, shader fallback for older machines etc.). What’s left is Cg/HLSL code, which is then augmented by tons of auto-generated code that deals with lighting & whatnot.
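In text form, a surface shader matching that description looks roughly like this. This is a sketch, not the exact shader from the screenshot; the shader name, property names and fallback are illustrative, but the structure (properties, a CGPROGRAM block with a `surf` function) follows standard surface shader layout:

```
Shader "Example/Diffuse Bump" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _BumpMap ("Bumpmap", 2D) = "bump" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // surface shader, main function "surf", Lambert lighting model
        #pragma surface surf Lambert

        // per-pixel inputs: two sets of texture coordinates
        struct Input {
            float2 uv_MainTex;
            float2 uv_BumpMap;
        };
        sampler2D _MainTex;
        sampler2D _BumpMap;

        void surf (Input IN, inout SurfaceOutput o) {
            // albedo from the texture, normal from the normal map
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
            o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
        }
        ENDCG
    }
    Fallback "Diffuse"
}
```

Everything outside the CGPROGRAM…ENDCG block is regular ShaderLab boilerplate; the interesting part is the dozen or so lines of Cg/HLSL in the middle.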
This surface shader dissected into pieces:
- #pragma surface surf Lambert: this is a surface shader with main function “surf”, using the Lambert lighting model. Lambert is one of the predefined lighting models, but you can write your own.
- struct Input: input data for the surface shader. This can have various predefined inputs that will be computed per-vertex & passed into your surface function per-pixel. In this case, it’s two texture coordinates.
- surf function: the actual surface shader code. It takes Input, and writes into SurfaceOutput (a predefined structure). It is possible to write into custom structures, provided you use lighting models that operate on those structures. The actual code just writes Albedo and Normal to the output.
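Writing your own lighting model amounts to writing a function named Lighting<Name> in the same CGPROGRAM block, and referencing it from the #pragma line. As a sketch (the function name here is made up, and this one just reimplements Lambert-style diffuse):

```
// referenced via: #pragma surface surf SimpleLambert
half4 LightingSimpleLambert (SurfaceOutput s, half3 lightDir, half atten) {
    // standard N·L diffuse term
    half NdotL = dot (s.Normal, lightDir);
    half4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * (NdotL * atten * 2);
    c.a = s.Alpha;
    return c;
}
```

The generated lighting code then calls this function for each light, in whichever rendering path is active – which is exactly the point: the surface function and the lighting function stay small, and the plumbing is generated.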