Saturday, 26 August 2017
Cross Hatch Shader In Unity3D
A cross hatch shader is a Non-Photorealistic Rendering (NPR) shader that tries to simulate pencil hatching in 3D renderings. There are several approaches to creating it in Unity3D, such as a post-processing shader or a surface shader. This article will (at least try to) explain how to create the latter.
This is what the end result should look like.
First, create an empty text file and name it something you like, for example "crosshatch.shader". Open it with any text editor you like and add this line:
Shader "Custom/CrossHatchShader" //This will be the shader name
The line above registers this shader in the shader dropdown selection under the "Custom" section with the name "CrossHatchShader".
Next we'll add the properties of this shader. The expected properties are the main texture, which will be converted to grayscale later in the shader, and the hatching textures, which simulate the darkness of the hatching. They should be tileable textures with three darkness levels: light, medium and heavy. The last property is the "repeat" parameter, which defines how many times the hatching texture is repeated across one UV space.
Properties {
_MainTex ("Texture", 2D) = "white" {}
_LitTex ("Light Hatch", 2D) = "white" {}
_MedTex ("Medium Hatch", 2D) = "white" {}
_HvyTex ("Heavy Hatch", 2D) = "white" {}
_Repeat ("Repeat Tile", float) = 4
}
Here's my attempt at creating the hatching textures using GIMP. The left one is light, the middle one is medium and the right one is the heavy/dark hatch.
Next, we'll create the shader body where all the magic happens. First, let's declare the variables listed in the properties. Each texture needs a sampler2D so the shader can sample (read) it. And why fixed instead of float? Well, we don't need float's high precision just to define how many times the hatch is repeated, so fixed will do fine.
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf CrossHatch
sampler2D _MainTex;
sampler2D _LitTex;
sampler2D _MedTex;
sampler2D _HvyTex;
fixed _Repeat;
Next, we'll define the input structure for the surface shader. We add screenPos so we can sample the hatchings at the screen space position, while the usual uv_MainTex is used to sample them at the object (UV) space position.
struct Input {
float2 uv_MainTex;
float4 screenPos;
};
Next is the surface shader function itself:
void surf (Input IN, inout MySurfaceOutput o) {
//uncomment to use object space hatching
// o.screenUV = IN.uv_MainTex * _Repeat;
//uncomment to use screen space hatching
o.screenUV = IN.screenPos.xy * 4 / IN.screenPos.w;
half v = length(tex2D (_MainTex, IN.uv_MainTex).rgb) * 0.33;
o.val = v;
}
Here in the surface shader, we sample the main texture's RGB color, reduce it to a single grayscale value (the length of the RGB vector scaled by 0.33, a rough approximation of its brightness) and store it in "val" to be passed to the lighting function.
We also take the UV coordinate of the current sampling position and pass it to the lighting function via the "screenUV" variable. Here you can choose between screen space sampling and object UV space sampling. Screen space sampling means the hatching is placed at the screen position, so the surface position and orientation won't affect the hatching result. Object space sampling, on the other hand, takes the surface position and orientation into account when sampling the hatching. Put simply, screen space sampling is like overlaying the hatching on the screen, while object space sampling overlays the hatching over the main texture.
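As a side note, the screen space line above hard-codes a tiling factor of 4. If you'd rather let the _Repeat property drive the screen space tiling as well, that line could be rewritten like this (an optional variation, not part of the original listing):
//optional: use _Repeat as the screen space tiling factor instead of the hard-coded 4
o.screenUV = (IN.screenPos.xy / IN.screenPos.w) * _Repeat;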
Then we'll add our custom lighting function. The function has to be named "Lighting<lightingModelName>"; since the #pragma surface line above declares the lighting model as "CrossHatch", our custom lighting function is named "LightingCrossHatch". The function has to return a four-component vector, so its return type should be float4, half4 or fixed4. For this case, half4 is enough.
half4 LightingCrossHatch (MySurfaceOutput s, half3 lightDir, half atten)
{
half NdotL = dot (s.Normal, lightDir);
half4 cLit = tex2D(_LitTex, s.screenUV);
half4 cMed = tex2D(_MedTex, s.screenUV);
half4 cHvy = tex2D(_HvyTex, s.screenUV);
half4 c;
half v = saturate(length(_LightColor0.rgb) * (NdotL * atten * 2) * s.val);
c.rgb = lerp(cHvy.rgb, cMed.rgb, v);
c.rgb = lerp(c.rgb, cLit.rgb, v);
c.a = s.Alpha;
return c;
}
Note that "MySurfaceOutput" instead of usual "SurfaceOutput"? That because we need additional parameters to be passed from surface shader to lighting function. The additional parameters are the "screenUV" which is to pass the screenUV(or just UV) from surface shader to lighting function so we can sample the hatchings inside the lighting function.
The other additional parameter is "val", the grayscale value computed from the main texture's RGB components, which basically treats the main texture as a grayscale image.
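If you prefer "val" to be the exact average of the three channels rather than the scaled vector length used in surf above, a dot product does it in one instruction. This is just an optional variation, not part of the original shader:
//optional: exact average of the r, g and b channels
half v = dot(tex2D(_MainTex, IN.uv_MainTex).rgb, half3(0.333, 0.333, 0.333));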
So we have to declare the "MySurfaceOutput" first
struct MySurfaceOutput
{
fixed3 Albedo;
fixed3 Normal;
fixed3 Emission;
fixed Gloss;
fixed Alpha;
fixed val;
float2 screenUV;
};
Back to the lighting function. We basically sample all the hatching textures at once and interpolate between them using the lighting (darkness) level: when the surface is well lit we show the light hatching _LitTex, when the surface is pretty much dark we use the heavy _HvyTex hatching, and the _MedTex hatching covers anything in between. But why not use if/else instead of interpolation? Well, let's just say that shaders don't like conditional branching, especially if you are targeting mobile platforms.
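For comparison, a branching version of the same idea, replacing the two lerp lines, might look like the sketch below (the thresholds are arbitrary). Besides the potential cost of divergent branches on some GPUs, it also produces hard seams between hatch levels instead of a smooth blend:
//sketch of a branching alternative to the two lerp lines (thresholds chosen arbitrarily)
half4 hatch;
if (v < 0.33)
    hatch = cHvy;
else if (v < 0.66)
    hatch = cMed;
else
    hatch = cLit;
c.rgb = hatch.rgb;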
The resulting shader should look something like this:
Shader "Custom/CrossHatchShader"
{
Properties {
_MainTex ("Texture", 2D) = "white" {}
_LitTex ("Light Hatch", 2D) = "white" {}
_MedTex ("Medium Hatch", 2D) = "white" {}
_HvyTex ("Heavy Hatch", 2D) = "white" {}
_Repeat ("Repeat Tile", float) = 4
}
SubShader {
Tags { "RenderType" = "Opaque" }
CGPROGRAM
#pragma surface surf CrossHatch
sampler2D _MainTex;
sampler2D _LitTex;
sampler2D _MedTex;
sampler2D _HvyTex;
fixed _Repeat;
struct MySurfaceOutput
{
fixed3 Albedo;
fixed3 Normal;
fixed3 Emission;
fixed Gloss;
fixed Alpha;
fixed val;
float2 screenUV;
};
struct Input {
float2 uv_MainTex;
float4 screenPos;
};
void surf (Input IN, inout MySurfaceOutput o) {
//uncomment to use object space hatching
// o.screenUV = IN.uv_MainTex * _Repeat;
//uncomment to use screen space hatching
o.screenUV = IN.screenPos.xy * 4 / IN.screenPos.w;
half v = length(tex2D (_MainTex, IN.uv_MainTex).rgb) * 0.33;
o.val = v;
}
half4 LightingCrossHatch (MySurfaceOutput s, half3 lightDir, half atten)
{
half NdotL = dot (s.Normal, lightDir);
half4 cLit = tex2D(_LitTex, s.screenUV);
half4 cMed = tex2D(_MedTex, s.screenUV);
half4 cHvy = tex2D(_HvyTex, s.screenUV);
half4 c;
half v = saturate(length(_LightColor0.rgb) * (NdotL * atten * 2) * s.val);
c.rgb = lerp(cHvy.rgb, cMed.rgb, v);
c.rgb = lerp(c.rgb, cLit.rgb, v);
c.a = s.Alpha;
return c;
}
ENDCG
}
Fallback "Diffuse"
}
To use it, just create a material and select "Custom/CrossHatchShader" from the shader selection menu, then assign the main texture and hatching textures respectively. Adjust the repeat value if necessary.
This is what to expect when using screen space sampling.
And this one is object space sampling.
There you go, feel free to use the shader.
Comment: Hi, is there a way to display the texture colors?

Reply: Yes, you can say in the surf method:
o.Albedo = tex2D(_MainTex, IN.uv_MainTex);
and then in the LightingCrossHatch method:
c.rgb *= s.Albedo.rgb;
This will also give you the texture colors.
Comment: You can add a color property at the top:
_Color ("Color", Color) = (1,1,1,1)
And then down in the LightingCrossHatch function, fiddle with these lines:
half4 cLit = tex2D(_LitTex, s.screenUV) + _Color;
half4 cMed = tex2D(_MedTex, s.screenUV) + _Color;
half4 cHvy = tex2D(_HvyTex, s.screenUV) + _Color;
You also might want to multiply by the color instead? I'm super new to shader logic so it's probably not perfect.
Oops, and in the SubShader, under the #pragma line, you need to declare the variable:
float4 _Color;
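Pulling those suggestions together (the _Color name and the multiply-instead-of-add idea are the commenter's, not part of the original shader), the additions would look roughly like this:
//in the Properties block
_Color ("Color", Color) = (1,1,1,1)
//in the SubShader, after the #pragma line
float4 _Color;
//in LightingCrossHatch, just before "c.a = s.Alpha;" - multiplying tints the hatching, adding washes it out toward the color
c.rgb *= _Color.rgb;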
Comment: Hello there, is there a way to make it affected by the light color? Thanks!