Predefined Input values
The Input structure can contain texture coordinates and some predefined values, for example view direction, world space position, world space reflection vector and so on. Code to compute them is only generated if they are actually used. For example, if you use the world space reflection vector to do some cubemap reflections (as an emissive term) in your surface shader, then in the Deferred Lighting base pass the reflection vector will not be computed (that pass does not output emission, so it does not need the reflection vector).
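For instance, a surface shader that samples a cubemap along the world space reflection vector and outputs it as emission could look roughly like this (a minimal sketch; the _Cube property name is an illustration, not from the original post):

```
#pragma surface surf Lambert
struct Input {
    float3 worldRefl; // predefined: world space reflection vector
};
samplerCUBE _Cube;    // assumed cubemap property
void surf (Input IN, inout SurfaceOutput o) {
    // worldRefl is only computed by the generated code because we use it here
    o.Emission = texCUBE (_Cube, IN.worldRefl).rgb;
}
```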
As a small example, the shader above extended to do simple rim lighting:
#pragma surface surf Lambert
struct Input {
    float2 uv_MainTex;
    float2 uv_BumpMap;
    float3 viewDir;
};
sampler2D _MainTex;
sampler2D _BumpMap;
float4 _RimColor;
float _RimPower;
void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
    o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
    half rim = 1.0 - saturate(dot (normalize(IN.viewDir), o.Normal));
    o.Emission = _RimColor.rgb * pow (rim, _RimPower);
}
Vertex shader modifiers
It is possible to specify a custom “vertex modifier” function that will be called at the start of the generated vertex shader to modify (or generate) per-vertex data. You know, vertex shader based tree wind animation, grass billboard extrusion and so on. It can also fill in any non-predefined values in the Input structure.
My favorite vertex modifier? Moving vertices along their normals.
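That normal-extrusion modifier can be sketched like this (vertex:vert names the modifier function on the surface pragma; _Amount is a made-up property for the extrusion distance):

```
#pragma surface surf Lambert vertex:vert
float _Amount; // hypothetical extrusion distance property
void vert (inout appdata_full v) {
    // move each vertex along its normal before the surface function runs
    v.vertex.xyz += v.normal * _Amount;
}
```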
Custom Lighting Models
There are a couple of simple lighting models built in, but it’s possible to specify your own. A lighting model is nothing more than a function that will be called with the filled SurfaceOutput structure and per-light parameters (direction, attenuation and so on). Different functions have to be called in the forward and deferred rendering cases, and naturally the deferred one has much less flexibility. So for any fancy effects, it is possible to say “do not compile this shader for deferred”, in which case it will be rendered via forward rendering.
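Opting out of the deferred path is a directive on the surface pragma; as a sketch (FancyModel standing in for a hypothetical custom lighting function):

```
// exclude_path:prepass asks the compiler not to generate the Deferred
// Lighting (light pre-pass) variants, so this shader always renders forward
#pragma surface surf FancyModel exclude_path:prepass
```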
Example of wrapped-Lambert lighting model:
#pragma surface surf WrapLambert

half4 LightingWrapLambert (SurfaceOutput s, half3 dir, half atten) {
    dir = normalize(dir);
    half NdotL = dot (s.Normal, dir);
    half diff = NdotL * 0.5 + 0.5;
    half4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2);
    c.a = s.Alpha;
    return c;
}

struct Input {
    float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
Behind the scenes
We’re using the HLSL parser from Ryan Gordon’s mojoshader to parse the original surface shader code and infer some things from the abstract syntax tree mojoshader produces. This way we can figure out what members are in what structures, go over function prototypes and so on. At this stage some error checking is done to tell the user that his surface function has the wrong prototype, or that his structures are missing required members – which is much better than failing with dozens of compile errors in the generated code later.
To figure out which surface shader inputs are actually used in the various lighting passes, we generate small dummy pixel shaders, compile them with Cg and use Cg’s API to query used inputs & outputs. This way we can figure out, for example, that neither a normal map nor its texture coordinate is actually used in the Deferred Lighting final pass, and save some vertex shader instructions & a texcoord interpolator.
The code that is ultimately generated is compiled with various shader compilers depending on the target platform (Cg for Windows/Mac, XDK HLSL for Xbox 360, PS3 Cg for PS3, and our own fork of HLSL2GLSL for iPhone, Android and the upcoming NativeClient port of Unity).
So yeah, that’s it. We’ll see where this goes next, or what happens when Unity 3 is released. I hope more folks will try writing shaders!
Reposted from the official English blog:
http://blogs.unity3d.com/2010/07/17/unity-3-technology-surface-shaders/