Saturday, August 26, 2017

Cross Hatch Shader In Unity3D


A cross hatch shader is a Non-Photorealistic Rendering (NPR) shader that tries to simulate pencil hatching in 3D renderings. There are several approaches to creating one in Unity3D, such as a post-processing shader or a surface shader. This article will (at least try to) explain the latter approach.

This is how the end result should look.
 

First, create an empty text file and name it something like "crosshatch.shader". Open it in any text editor you like and add this line:

    Shader "Custom/CrossHatchShader" //This will be the shader name

The line above registers the shader in the shader dropdown under the "Custom" section, with the name "CrossHatchShader".

Next, we'll add the shader's properties. The expected properties are: the main texture, which will be converted to grayscale later in the shader; the hatching textures, which simulate the darkness of the hatching and should be tileable textures at three darkness levels (light, medium, and heavy); and the "Repeat" parameter, which defines how many times the hatching texture is repeated across one UV space.

    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _LitTex ("Light Hatch", 2D) = "white" {}
        _MedTex ("Medium Hatch", 2D) = "white" {}
        _HvyTex ("Heavy Hatch", 2D) = "white" {}
        _Repeat ("Repeat Tile", Float) = 4
    }


Here's my attempt at creating the hatching textures in GIMP. The left one is light, the middle is medium, and the right one is heavy/dark hatching.


Next, we'll create the shader body, where all the magic happens. First, let's declare variables matching the properties. The textures need a sampler2D so the shader can sample (read) them. And why half instead of float for _Repeat? We don't need float's high precision just to define how many times the hatch repeats. Note that fixed would be too low, though: it only guarantees roughly the -2 to +2 range, which can't even hold our default repeat of 4.

    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf CrossHatch   

        sampler2D _MainTex;
        sampler2D _LitTex;
        sampler2D _MedTex;
        sampler2D _HvyTex;
        half _Repeat; // half, not fixed: fixed only guarantees roughly the -2..2 range
       

Next, we'll define the input structure for the surface function. We add screenPos so we can sample the hatching at the screen-space position, while the usual uv_MainTex is used to sample it in object (UV) space.

        struct Input {
            float2 uv_MainTex;
            float4 screenPos;
        };


Next, declare the surface function itself:

        void surf (Input IN, inout MySurfaceOutput o) {
            // uncomment to use object space hatching
            // o.screenUV = IN.uv_MainTex * _Repeat;
            // uncomment to use screen space hatching
            o.screenUV = IN.screenPos.xy / IN.screenPos.w * _Repeat;
            // average the RGB channels to use the main texture as grayscale
            half v = dot (tex2D (_MainTex, IN.uv_MainTex).rgb,
                          half3(0.333, 0.333, 0.333));
            o.val = v;
        }


In the surface function, we sample the main texture's RGB color, average the three channels, and store the result in "val" to be passed to the lighting function.
We also compute the UV coordinate for the hatch sampling and pass it to the lighting function via the "screenUV" variable. Here you can choose between screen-space sampling and object-UV-space sampling. Screen-space sampling means the hatching is placed at screen positions, so the surface's position and orientation won't affect the hatching pattern. Object-space sampling, on the other hand, takes the surface position and orientation into account. Put simply, screen-space sampling is like overlaying the hatching on the screen, while object-space sampling overlays the hatching on the main texture.
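
As a side note, the plain average weighs all three channels equally. If you want something closer to perceived brightness, one alternative (my suggestion, not something this shader needs) is a luma-weighted dot product using the Rec. 601 weights:

    // Perceptual grayscale using Rec. 601 luma weights -- an optional
    // replacement for the channel average in surf:
    half v = dot (tex2D (_MainTex, IN.uv_MainTex).rgb,
                  half3(0.299, 0.587, 0.114));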

Then we'll add our custom lighting function. It has to be named "Lighting<LightingModelName>", where the lighting model name is the one we declared on the #pragma line. In our case that's "CrossHatch", so our custom lighting function is named "LightingCrossHatch". It has to return a four-component vector, so its type should be float4, half4, or fixed4. For this case, half4 is enough.

        half4 LightingCrossHatch (MySurfaceOutput s, half3 lightDir, half atten)
        {
            half NdotL = dot (s.Normal, lightDir);

            // sample all three hatch levels at the UV chosen in surf
            half4 cLit = tex2D(_LitTex, s.screenUV);
            half4 cMed = tex2D(_MedTex, s.screenUV);
            half4 cHvy = tex2D(_HvyTex, s.screenUV);
            half4 c;

            // overall brightness: light color * diffuse term * texture value
            half v = saturate(length(_LightColor0.rgb) * (NdotL * atten * 2) * s.val);

            // blend heavy -> medium -> light as the surface gets brighter
            c.rgb = lerp(cHvy.rgb, cMed.rgb, v);
            c.rgb = lerp(c.rgb, cLit.rgb, v);
            c.a = s.Alpha;
            return c;
        }
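
As an aside, a lighting function can also receive the view direction (needed for view-dependent effects like specular). In that case the same naming rule applies but the signature gains a viewDir argument; here's a sketch with a plain Lambert body, purely to illustrate the signature:

        // View-dependent signature variant: same Lighting<Name> naming
        // rule, plus a half3 viewDir argument. The Lambert body is only
        // here to make the sketch complete.
        half4 LightingCrossHatch (MySurfaceOutput s, half3 lightDir,
                                  half3 viewDir, half atten)
        {
            half NdotL = saturate(dot (s.Normal, lightDir));
            half4 c;
            c.rgb = s.Albedo * _LightColor0.rgb * NdotL * atten;
            c.a = s.Alpha;
            return c;
        }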


Notice "MySurfaceOutput" instead of the usual "SurfaceOutput"? That's because we need extra fields passed from the surface function to the lighting function. The first is "screenUV", which carries the UV (screen-space or object-space) so we can sample the hatching inside the lighting function. The other is "val", the averaged value of the main texture's RGB channels, which effectively uses the main texture as a grayscale image.
So we have to declare "MySurfaceOutput" first:

        struct MySurfaceOutput
        {
            fixed3 Albedo;
            fixed3 Normal;
            fixed3 Emission;
            fixed Gloss;
            fixed Alpha;
            fixed val;
            float2 screenUV;
        };
       

Back to the lighting function. We sample all three hatching textures at once and interpolate between them using the lighting (darkness) level: when the surface is well lit, we show the light hatching (_LitTex); when it's mostly dark, we use the heavy hatching (_HvyTex); and the moderately lit areas get the _MedTex hatching. But why interpolation instead of if/else? Let's just say that shaders don't like conditionals, especially if you're targeting mobile platforms.
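
For comparison, this is roughly what a branching version inside LightingCrossHatch would look like, with my own arbitrary 0.33/0.66 thresholds; besides the potential branching cost, it also produces hard seams between hatch levels instead of a smooth blend:

    // Branching alternative (illustration only; the lerp chain above
    // is usually cheaper on mobile GPUs and blends smoothly):
    half3 hatch;
    if (v < 0.33)
        hatch = cHvy.rgb;   // dark areas get the heavy hatching
    else if (v < 0.66)
        hatch = cMed.rgb;   // mid tones get the medium hatching
    else
        hatch = cLit.rgb;   // well-lit areas get the light hatching
    c.rgb = hatch;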

The resulting shader should look something like this:

Shader "Custom/CrossHatchShader"
{
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _LitTex ("Light Hatch", 2D) = "white" {}
        _MedTex ("Medium Hatch", 2D) = "white" {}
        _HvyTex ("Heavy Hatch", 2D) = "white" {}
        _Repeat ("Repeat Tile", Float) = 4
    }
   
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf CrossHatch

        sampler2D _MainTex;
        sampler2D _LitTex;
        sampler2D _MedTex;
        sampler2D _HvyTex;
        half _Repeat; // half, not fixed: fixed only guarantees roughly the -2..2 range
       
        struct MySurfaceOutput
        {
            fixed3 Albedo;
            fixed3 Normal;
            fixed3 Emission;
            fixed Gloss;
            fixed Alpha;
            fixed val;
            float2 screenUV;
        };
       
        struct Input {
            float2 uv_MainTex;
            float4 screenPos;
        };
     
        void surf (Input IN, inout MySurfaceOutput o) {
            // uncomment to use object space hatching
            // o.screenUV = IN.uv_MainTex * _Repeat;
            // uncomment to use screen space hatching
            o.screenUV = IN.screenPos.xy / IN.screenPos.w * _Repeat;
            // average the RGB channels to use the main texture as grayscale
            half v = dot (tex2D (_MainTex, IN.uv_MainTex).rgb,
                          half3(0.333, 0.333, 0.333));
            o.val = v;
        }
       
        half4 LightingCrossHatch (MySurfaceOutput s, half3 lightDir, half atten)
        {
            half NdotL = dot (s.Normal, lightDir);

            // sample all three hatch levels at the UV chosen in surf
            half4 cLit = tex2D(_LitTex, s.screenUV);
            half4 cMed = tex2D(_MedTex, s.screenUV);
            half4 cHvy = tex2D(_HvyTex, s.screenUV);
            half4 c;

            // overall brightness: light color * diffuse term * texture value
            half v = saturate(length(_LightColor0.rgb) * (NdotL * atten * 2) * s.val);

            // blend heavy -> medium -> light as the surface gets brighter
            c.rgb = lerp(cHvy.rgb, cMed.rgb, v);
            c.rgb = lerp(c.rgb, cLit.rgb, v);
            c.a = s.Alpha;
            return c;
        }
       
        ENDCG
    }
    Fallback "Diffuse"
}


To use it, just create a material and select "Custom/CrossHatchShader" from the shader selection menu, then assign the main texture and the hatching textures. Adjust the Repeat value if necessary.
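
One caveat with screen-space sampling: IN.screenPos.xy / IN.screenPos.w spans the 0..1 range in both axes over the whole screen, so the hatch strokes get stretched on a non-square viewport. A possible fix (a sketch, not part of the shader above) is to rescale one axis by the aspect ratio inside surf, using Unity's built-in _ScreenParams:

    // Aspect-corrected screen-space hatch UVs: _ScreenParams.x/.y hold
    // the render target width/height in pixels, so scaling u by
    // width/height keeps the hatch tiles square on wide screens.
    float2 suv = IN.screenPos.xy / IN.screenPos.w;
    suv.x *= _ScreenParams.x / _ScreenParams.y;
    o.screenUV = suv * _Repeat;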

This is what to expect when using screen space sampling.

And this one is object space sampling.

There you go, feel free to use the shader.

Friday, May 8, 2015

Setting Up Reference Image in Blender 3D

In this short tutorial, I'll show you how to set up a reference image in Blender 3D version 2.7.
Most tutorials on using reference images for modelling in Blender 3D suggest assigning the image to the background, which is quite cumbersome since you can't rotate or zoom the 3D View window without losing your image-to-mesh alignment. Even panning the camera will mess up the alignment.

Most 3D modellers place their reference images in the 3D space (world) itself, not in a separate background. So this is how I set up image references in Blender 3D.

First, of course, you'll need the reference image itself. For now, I'll just use one I found on the CG Cookie site at https://cgcookie.com/max/2012/02/28/exclusive-resource-character-modeling-sheets-krystal/

Next, of course, open Blender and add a Plane.

Then, edit the plane to make something like this

The next step is to load the image and map the UVs so you get the right view of the image from the right angle: the front view when viewing the model from the front, the side view when viewing from the left/right, and so on.
You might also want to add a Material to it and make it Shadeless.

Now adjust the model height so you have a good proportion.

There, now you have an image reference in the 3D viewport. The next step is to prepare the 3D object you're going to work on.

Add another 3D object to the scene.
Now here's the trick. Notice that every object is drawn the way the viewport shading is set: when you switch to Wireframe, the reference model turns into wireframe too. That's not good, since you'll probably want the reference Textured while the object you're working on is in Wireframe (so you can always see the reference image while working on the object).

Now go to the object properties and set the Maximum Draw Type to Wireframe, and voilà...

Now I believe you can do it yourself from here ;)
Happy modelling...

Wednesday, September 4, 2013

Cheap Murky Water FX Using Projected Image

While working on my current Unity3D project, Hi:Breed, a smartphone zombie shooter, I decided to add a sewer level, half flooded with murky water. I tried a couple of the standard water FX and found that none of them fit the gloomy environment. I can only use the Basic Water shader anyway, since I'm on the Unity3D Basic version.

The picture on the left is the standard Daylight Simple Water shader, which is obviously not suitable since I don't want the 'water' to clip every polygon under it. The middle one uses a simple UV-shifted transparent mesh, which looks much better to me, but the water is way too clean; it should be dirty, so that I can only see things near the surface. Now, the one on the right is the, uh, right one: the submerged parts near the surface are still visible, while anything deeper is harder to see.

I've read some articles about this 'murky water shader', and almost all of them require the render-to-texture feature to sample the depth buffer, something I can't use in my Basic version.

So I came up with this stupid idea: "Hey, why don't I use this lightning-fast standard image projector to fake the volumetric fog for the fake murky water?"...

I know a little bit about writing fixed-function pipeline shaders, but I'm too lazy to create one from scratch, so I took the standard "Shadow Material" projector shader and made some small modifications.

Here is my fake volumetric fog shader

Shader "Projector/Fog" {
  Properties {
        _Color ("Main Color", Color) = (1,1,1,1)      
     _ShadowTex ("Cookie", 2D) = "" { TexGen ObjectLinear }
  }
  Subshader {
     Pass {
        ZWrite off
        Fog { Color (0, 0, 0) }
        Color [_Color]
        ColorMask RGB
        Blend DstColor zero
        SetTexture [_ShadowTex] {
           combine texture, texture
           Matrix [_Projector]
        }
     }
  }
}


And here's the projected 'fog' texture:

And here is the result:

It still looks more like poisonous fog than water to me; something is missing: the water surface. So, let's add the UV-shifted transparent mesh on top of the fog...
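
In case you're wondering what that mesh's shader looks like, here's a minimal sketch of a UV-scrolling transparent surface (the shader name, property names, and scroll speeds are placeholders you'd tune yourself):

Shader "Custom/ScrollingWaterSurface" {
    Properties {
        _Color ("Tint", Color) = (1,1,1,0.5)
        _MainTex ("Water Texture", 2D) = "white" {}
        _ScrollX ("Scroll Speed X", Float) = 0.05
        _ScrollY ("Scroll Speed Y", Float) = 0.03
    }
    SubShader {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Pass {
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed4 _Color;
            float _ScrollX, _ScrollY;

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
                // shift the UVs over time to fake the water flow
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex)
                     + _Time.y * float2(_ScrollX, _ScrollY);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                // texture tinted by color; alpha controls transparency
                return tex2D(_MainTex, i.uv) * _Color;
            }
            ENDCG
        }
    }
}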

And voilà... Le Cheap Murky Water FX, which works smoothly even on my ARMv6-powered Android device.