How many times have those of us working in a SOC found it damn hard to validate an incident? Let's say we received an alert: WEB-PHP remote include path (my fav).
First, let's look at the rule, so we know how the hell the alert was triggered:
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"WEB-PHP remote include path"; flow:established,to_server; uricontent:".php"; content:"path="; pcre:"/path=(http|https|ftp)/i"; classtype:web-application-attack; sid:2002; rev:5;)
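As a rough sketch of how the rule's three payload conditions line up, here is a minimal Python reproduction (the request URI below is hypothetical, for illustration only; it is not taken from the actual capture, and real Snort matching also involves flow state and HTTP URI normalization):

```python
import re

# Hypothetical PHP remote-file-include request URI (illustration only,
# not from the captured payload)
uri = "/index.php?path=http://evil.example.com/shell.txt"

# The rule fires only when all three payload conditions hold:
matches = (
    ".php" in uri                       # uricontent:".php"
    and "path=" in uri                  # content:"path="
    and re.search(r"path=(http|https|ftp)", uri, re.IGNORECASE) is not None  # pcre
)
print(matches)
```

Note that the `pcre` only checks that `path=` is followed by a URL scheme; it says nothing about whether the target script actually includes the remote file.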
So, usually we look at the payload and try to identify what is actually happening:
User-Agent: Mozilla/5.0 (compatible; Konqueror/3.5; FreeBSD) KHTML/3.5. (like Gecko)..
Accept: text/html, image/jpeg, image/png, text/*, image/*, */*..
Accept-Encoding: x-gzip, x-deflate, gzip, deflate..Accept-Charset: iso-8859-1, utf-8;q=0.5, *;q=0.5..Accept-Language: en..Host: 192.168.2.127
What else can you do? Besides identifying the targeted system's operating system and applications, how can we be sure whether this attack was successful or not?
The same goes for this alert: WEB-IIS cmd.exe access. Look at the rule:
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"WEB-IIS cmd.exe access"; flow:to_server,established; uricontent:"cmd.exe"; nocase; classtype:web-application-attack; sid:1002; rev:8;)
and then at the payload:
Connection: Keep Alive..Content-Length: 0..User-Agent: Mozilla/4.75..Host: 192.168.2.127….
You might wonder how the hell the alert was triggered, since there seems to be no cmd.exe anywhere in the payload. Actually, there is. The attacker is taking advantage of URL encoding instead of typing "cmd.exe" directly. The URL specification (RFC 1738) states that "...only alphanumerics [0-9a-zA-Z], the special characters "$-_.+!*'()," and reserved characters used for their reserved purposes may be used unencoded within a URL." HTML, on the other hand, allows the entire ISO-8859-1 (ISO Latin-1) character set to be used in documents, and HTML4 expands the allowable range to include the whole Unicode character set as well. Non-ISO-8859-1 characters (characters above FF hex / 255 decimal in the Unicode set) simply cannot be used in URLs, because there is as yet no safe way to specify character-set information in URL content [RFC 2396].
You can read further on url encoding here.
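To see the trick concretely, here is a minimal sketch using Python's standard library. The encoded string below is my own illustration of how "cmd.exe" can be hidden behind percent-encoding; it is not the exact URI from this capture:

```python
from urllib.parse import unquote

# "cmd.exe" written entirely in %XX percent-encoding (illustrative example)
encoded = "%63%6d%64%2e%65%78%65"
print(unquote(encoded))  # cmd.exe
```

A naive string match on the raw bytes would miss this, which is why the IDS has to normalize (decode) the URI before applying `uricontent:"cmd.exe"`.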
So basically the input of
will be parsed as
%73 = s; %69 = i; %70 = p; %73 = s; %72 = r; %61 = a; %63 = c; %6c = l; %69 = i; %64 = d; %6c = l; %65 = e;
%70 = p; %6e = n; %78 = x; %69 = i; %66 = f; %69 = i; %6d = m; %70 = p; %74 = t; %65 = e; %2e = .; %5c = \;
%2e = .; %5c = \; %2e = .; %2e = .; %77 = w; %69 = i; %6e = n; %5c = \; %73 = s; %79 = y; %73 = s; %74 = t;
%65 = e; %6d = m; %33 = 3; %32 = 2; %5c = \; %64 = d; %78 = x; %65 = e; %3f = ?; %2b = +; %69 = i; %72 = r.
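The table above is nothing more than hex-to-ASCII decoding: each %XX pair maps to the character chr(0xXX). A few entries can be spot-checked like this:

```python
from urllib.parse import unquote

# Spot-check a few of the %XX mappings listed above (each is just chr(0xXX))
for token, expected in [("%73", "s"), ("%2e", "."), ("%5c", "\\"), ("%3f", "?"), ("%33", "3")]:
    decoded = unquote(token)
    assert decoded == expected
    print(token, "->", decoded)
```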
Again, back to the question: are the assets we monitor vulnerable to this kind of attack? It's not a big problem if our assets run a Unix-based or Unix-variant operating system, but what if they are running Windows with IIS as the web engine?
The key information we need in order to validate this kind of attack is how the server responds to this kind of request.
How can we get that information? I will touch on that in the second part of this topic.