Any live system being examined should be considered hostile. It has been demonstrated many times (Darren Bilby [Anti-forensic Rootkits] and Bill Blunden [Anti-Forensics: The Rootkit Connection]) that kernel-level rootkits can intercept the calls for reading physical memory and the hard disk, hide what is to be hidden, and still serve seemingly valid data. There are special circumstances that need to be considered in these cases, such as: a) the presence of the intruder on the system, b) possible "booby traps", c) involvement of law enforcement. Protection of the evidence is paramount. Prior to carrying out a live forensic examination, the following should be considered:
- Avoiding GUI (graphical user interface) tools. CLI (command-line interface) utilities, and in particular statically linked binaries, are preferable because they are more likely to leave little or no footprint on the evidence system, and they are more trustworthy when no dynamic libraries from the host system are used. CLI tools are also easier to run from a trusted command shell.
- Tools should be validated. If free or open-source tools are used instead of recognized commercial ones, they should be validated and recognized by the forensic community. Tools should be obtained from trusted sources and their actions verified. This supports both the evidence acquired and the credibility of the investigator, should he be called upon in court to validate the processes followed and the tools used. A list of checksums for all the tools should be kept with the toolkit (a sketch of how such checksums can be computed follows this list).
- Tools should be kept on write-protected removable media that contains a trusted operating system (e.g. a live forensic distribution such as Helix, a slightly older version of which is also available for free).
- Documentation of exactly what is done, and when, during every step of an investigation is extremely important. Testimony may take place a year or more later, and the more comprehensive the notes are, the easier it will be to provide accurate and less refutable testimony. There are good tools to assist with the documentation. One free program is CaseNotes, which supports MD5 hashes, encryption and a full audit trail. Helix 2008R1 has the option to save a log of all actions after exiting the GUI.
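As an illustration of maintaining such a checksum list, the following minimal sketch (using the documented Windows CryptoAPI; the default tool path is a hypothetical example) computes the MD5 hash of a binary so the list shipped with the toolkit can be built and re-verified:

```cpp
// Minimal sketch, assuming a hypothetical tool path on the removable
// media: compute an MD5 checksum with the documented Windows CryptoAPI.
#include <windows.h>
#include <wincrypt.h>
#include <cstdio>
#pragma comment(lib, "advapi32.lib")

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "E:\\toolkit\\pslist.exe";
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HCRYPTPROV prov = 0; HCRYPTHASH hash = 0;
    CryptAcquireContextA(&prov, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT);
    CryptCreateHash(prov, CALG_MD5, 0, 0, &hash);

    BYTE buf[4096]; DWORD got = 0;
    while (ReadFile(file, buf, sizeof(buf), &got, NULL) && got > 0)
        CryptHashData(hash, buf, got, 0);      // hash the file in chunks

    BYTE md5[16]; DWORD len = sizeof(md5);
    CryptGetHashParam(hash, HP_HASHVAL, md5, &len, 0);
    for (DWORD i = 0; i < len; i++) printf("%02x", md5[i]);
    printf("  %s\n", path);

    CryptDestroyHash(hash); CryptReleaseContext(prov, 0); CloseHandle(file);
    return 0;
}
```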
Data acquisition in these cases requires a methodical process that follows the level of volatility of the information. The processes and tools involved in collecting and analysing volatile data are thoroughly described in [Harlan Carvey - Windows Forensic Analysis]. Many of the tools described below could be (and some already are) incorporated into automated live incident response kits or live disks (e.g. COFEE or Helix Live Response), or simple automated scripts can be built to run them and pipe the output. All the tools discussed are provided in the attached support material. The order in which information needs to be acquired during forensic processing is as follows:
1 System date/time and timezone information
This is a very important piece of information when correlating attacks on different machines and when examining time-based log files. The BIOS time should be noted first, then the OS time and time zone information. In Windows, much of the time zone information can be extracted from the registry (under HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation), or programmatically using the Windows time-related APIs.
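A minimal sketch of such a programmatic query, using only documented Windows APIs, could look like this:

```cpp
// Minimal sketch: reading the OS time and time zone through documented
// Windows APIs (the same data is kept in the registry under
// HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation).
#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEMTIME utc, local;
    GetSystemTime(&utc);    // current time in UTC
    GetLocalTime(&local);   // current time in the local zone

    TIME_ZONE_INFORMATION tz;
    DWORD dst = GetTimeZoneInformation(&tz);   // also reports DST state

    printf("UTC:   %04u-%02u-%02u %02u:%02u:%02u\n",
           utc.wYear, utc.wMonth, utc.wDay, utc.wHour, utc.wMinute, utc.wSecond);
    printf("Local: %04u-%02u-%02u %02u:%02u:%02u\n",
           local.wYear, local.wMonth, local.wDay,
           local.wHour, local.wMinute, local.wSecond);
    printf("Zone:  %ls, bias %ld minutes, DST %s\n",
           tz.StandardName, tz.Bias,
           dst == TIME_ZONE_ID_DAYLIGHT ? "active" : "inactive");
    return 0;
}
```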
2 Memory
The field of memory analysis has progressed tremendously. Tools have been created to collect the contents of physical memory on Windows XP and Vista/7. The open-source Volatility project provides a framework for analysing memory dumps, and it can also parse hibernation files and process memory dumps. A list of memory imaging techniques and tools for Windows, Linux and Mac OS X is available online. An important aspect to keep in mind when using these tools is that, in order to collect the contents of RAM, they must be loaded into RAM as a running process (Locard's Exchange Principle).
ProDiscover
is another tool dedicated to incident response, with memory imaging capabilities; additionally, it can extract the BIOS. A basic demo version is available.
KntDD
is a tool that overcomes a problem with the previous programs: since Windows 2003 SP1, access to the \Device\PhysicalMemory object has been restricted from user mode, to help prevent security exploits that might leverage this functionality; only kernel drivers are allowed to access this object. The KntDD utility is available only for private sale and is capable of doing just this.
MDD
is a simple CLI tool that allows RAM dumping. Its output is raw, dd-style. It has a 4 GB limitation and some known issues.
Once dumping is complete, mdd displays an MD5 checksum for the resultant dump file, which can later be used to verify the integrity of the memory dump, together with an error log.
Win32dd (from the MoonSols Windows Memory Toolkit)
is a tool developed by Matthieu Suiche that can also parse different types of files: crash dumps, Windows hibernation files and VMware memory snapshots.
Memoryze
is another tool capable of dumping physical memory. The memory collection process can be started from the MemoryDD.bat script. A great article explains everything, from installation on removable media to acquiring memory images and analysing live memory.
FastDump
is a memory dumping utility from HBGary with a very small memory footprint and all code statically linked (no shared DLL libraries). It has a free version with some limitations (32-bit systems only, with no more than 4 GB of RAM).
F-Response
is a tool-independent acquisition framework that uses the iSCSI protocol for raw, read-only disk access over the network. It allows a remote drive to be mounted and seen as a local disk, after which any of the previously mentioned tools can be used (dd, FTK, ...). It also provides remote access to physical memory.
FAU (Forensic Acquisition Utilities)
is a set of tools developed by George M. Garner which contains a reliable Windows port of the dd utility (the suite also contains a data wiper and a volume information dumper, very useful for NTFS volumes).
3 Current network state
One of the reasons why it is important to save the network state before shutting down the system is that an attacker may still be connected to it, or previously installed malware may be communicating with its master server. Capturing network-related information may help later to build a time-line or to obtain further evidence (requesting logs related to the network conversations from other nodes or ISPs).
On Windows machines, netstat -b -a displays all the network connections and their state (listening or established), as well as the programs involved in opening the sockets. An option to display the processes involved exists on Linux and Unix systems too, but it differs between the various flavours of operating systems, and on some it is not present by default. Every line in the netstat output must be analysed. On Windows, the FPort tool from McAfee can be used to identify unknown ports and their applications (it outputs the same information as netstat -an but also shows the corresponding application, and it is a lot faster than the -b flag of netstat).
Another important thing to check is the routing table. An attacker could have altered it to redirect traffic through a sniffing point, or simply to use another route that avoids security control devices such as firewalls. The routing information can be obtained with the netstat -rn command. An identical response should be obtained with the route print command; if not, one of the two executables may have been replaced with a rogue version (a software-level rootkit). A sketch of how connections can be enumerated programmatically follows.
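The sketch below uses the documented GetExtendedTcpTable API (available since Windows XP SP2) to list IPv4 TCP connections together with the owning process ID, much like netstat -ano:

```cpp
// Minimal sketch of what netstat/FPort report: IPv4 TCP connections with
// the owning process ID, via the documented GetExtendedTcpTable API.
#include <winsock2.h>
#include <iphlpapi.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

int main()
{
    DWORD size = 0;   // first call just reports the required buffer size
    GetExtendedTcpTable(NULL, &size, TRUE, AF_INET, TCP_TABLE_OWNER_PID_ALL, 0);
    std::vector<BYTE> buf(size);
    if (GetExtendedTcpTable(buf.data(), &size, TRUE, AF_INET,
                            TCP_TABLE_OWNER_PID_ALL, 0) != NO_ERROR)
        return 1;

    MIB_TCPTABLE_OWNER_PID *tbl = (MIB_TCPTABLE_OWNER_PID *)buf.data();
    for (DWORD i = 0; i < tbl->dwNumEntries; i++) {
        MIB_TCPROW_OWNER_PID &row = tbl->table[i];
        IN_ADDR la, ra;
        la.S_un.S_addr = row.dwLocalAddr;
        ra.S_un.S_addr = row.dwRemoteAddr;
        char laddr[16], raddr[16];
        lstrcpyA(laddr, inet_ntoa(la));   // inet_ntoa reuses one static
        lstrcpyA(raddr, inet_ntoa(ra));   // buffer, so copy each result
        printf("%15s:%-5u -> %15s:%-5u  state=%2lu  pid=%lu\n",
               laddr, ntohs((u_short)row.dwLocalPort),
               raddr, ntohs((u_short)row.dwRemotePort),
               row.dwState, row.dwOwningPid);
    }
    return 0;
}
```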
4 Processes
Running programs
The running processes may also offer valuable information (one should not rely one hundred per cent on the process list, because there could always be an active kernel-level rootkit that hides specific processes). PsList from Sysinternals' free PsTools suite is a command-line version of Process Explorer that shows all running processes, thread and memory details, the process tree and running-time information.
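For illustration, a minimal process listing based on the documented Toolhelp snapshot API (an ANSI build is assumed) could look like this:

```cpp
// Minimal sketch of a process listing via the documented Toolhelp
// snapshot API (an ANSI build is assumed, so szExeFile is char[]).
// A kernel-level rootkit can still hide entries from this view.
#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>

int main()
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    PROCESSENTRY32 pe;
    pe.dwSize = sizeof(pe);            // must be set before the first call
    if (Process32First(snap, &pe)) {
        do {
            printf("pid=%6lu parent=%6lu threads=%3lu  %s\n",
                   pe.th32ProcessID, pe.th32ParentProcessID,
                   pe.cntThreads, pe.szExeFile);
        } while (Process32Next(snap, &pe));
    }
    CloseHandle(snap);
    return 0;
}
```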
Services
Another tool for process inspection is PsService, from the same suite. It can manage services and query the status of all of them, or of individual ones. Services are important to note because malicious attacker-planted tools may hide in them: services can be set to start at reboot and launch backdoors or rogue file-sharing FTP servers. A minimal sketch of querying services programmatically is shown below.
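The sketch uses the documented Service Control Manager enumeration APIs; it illustrates the kind of information PsService reports, not PsService's own implementation:

```cpp
// Minimal sketch of enumerating services and their status through the
// documented Service Control Manager APIs.
#include <windows.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "advapi32.lib")

int main()
{
    SC_HANDLE scm = OpenSCManagerA(NULL, NULL, SC_MANAGER_ENUMERATE_SERVICE);
    if (!scm) return 1;

    DWORD needed = 0, count = 0, resume = 0;
    // First call fails with ERROR_MORE_DATA and reports the needed size.
    EnumServicesStatusExA(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32,
                          SERVICE_STATE_ALL, NULL, 0,
                          &needed, &count, &resume, NULL);
    std::vector<BYTE> buf(needed);
    if (EnumServicesStatusExA(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32,
                              SERVICE_STATE_ALL, buf.data(), needed,
                              &needed, &count, &resume, NULL)) {
        ENUM_SERVICE_STATUS_PROCESSA *svc =
            (ENUM_SERVICE_STATUS_PROCESSA *)buf.data();
        for (DWORD i = 0; i < count; i++)
            printf("%-30s pid=%5lu state=%lu  (%s)\n",
                   svc[i].lpServiceName,
                   svc[i].ServiceStatusProcess.dwProcessId,
                   svc[i].ServiceStatusProcess.dwCurrentState,
                   svc[i].lpDisplayName);
    }
    CloseServiceHandle(scm);
    return 0;
}
```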
Scheduled tasks
An attacker may also schedule tasks to be run at a certain hour, to minimize the exposure window. For example, he could schedule an automated task to open some port at a specific hour outside business hours, or start a processing-intensive task after working hours. The at command allows tasks to be scheduled, but also lists all the ones already programmed.
Opened files
The Process Explorer utility lists the files, registry keys and other objects that running processes have opened, as well as the dynamic-link libraries (DLLs) the processes have loaded. It is a very good GUI-based tool that displays those resources individually, per process. Another tool just for showing opened files, which also works from the CLI, is OpenedFilesView, from NirSoft; it can also display remotely opened files. Microsoft provides the openfiles command to display opened files, local or remote. From Sysinternals (now Microsoft) there is the psfile tool, specifically for viewing remotely opened files.
Process memory dumps
When analysing a running process identified as malware, its memory space can provide interesting information. A couple of things impede process memory acquisition: the lack of documented methods and the restricted access to protected memory areas. Still, important information may be retrieved that would otherwise be lost, such as executed hard-coded commands, clear-text passwords, or inadvertently left-over information that could help identify the attacker.
A simple tool to do this is PMDump
by Arne Vidstrom. Its usage is pretty self-explanatory. Another tool to help with this is userdump, provided by Microsoft. It has a useful option, the -p flag, to dump the process list. Another, even more useful one, is the possibility to dump the memory of multiple processes with the -m flag (note: a binary compare between two 30 MB dumps obtained with each of these tools showed them to be almost identical).
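As an illustration (not the internal method of PMDump or userdump), a process memory dump can also be produced with the documented dbghelp MiniDumpWriteDump call; the PID and output path below are hypothetical command-line arguments:

```cpp
// Minimal sketch: dumping a process's memory with the documented dbghelp
// MiniDumpWriteDump call -- a supported way to obtain a result comparable
// to the tools above.
#include <windows.h>
#include <dbghelp.h>
#include <cstdio>
#include <cstdlib>
#pragma comment(lib, "dbghelp.lib")

int main(int argc, char **argv)
{
    if (argc < 3) { printf("usage: %s <pid> <out.dmp>\n", argv[0]); return 1; }
    DWORD pid = (DWORD)atoi(argv[1]);

    HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                              FALSE, pid);
    HANDLE file = CreateFileA(argv[2], GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (!proc || file == INVALID_HANDLE_VALUE) return 1;

    // MiniDumpWithFullMemory captures the whole address space, which is
    // the closest match to what the tools above produce.
    if (MiniDumpWriteDump(proc, pid, file, MiniDumpWithFullMemory,
                          NULL, NULL, NULL))
        printf("dump written to %s\n", argv[2]);
    else
        printf("dump failed, error %lu\n", GetLastError());

    CloseHandle(file);
    CloseHandle(proc);
    return 0;
}
```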
On the obtained dump, a first step is a basic strings check with the strings utility from Sysinternals. This will reveal the computer name, executable path, environment variables and other possibly relevant information. A more advanced GUI text extractor, which also offers many types of filters and detailed information, is BinText from McAfee.
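A strings-style scan is simple enough to sketch; the following minimal example prints every run of at least four printable ASCII characters from a dump file (the default file name is hypothetical):

```cpp
// Minimal strings-style scan: print every run of at least four printable
// ASCII characters found in a dump file.
#include <cstdio>
#include <string>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "process.dmp"; // hypothetical
    FILE *f = fopen(path, "rb");
    if (!f) return 1;

    std::string run;
    int c;
    while ((c = fgetc(f)) != EOF) {
        if (c >= 0x20 && c < 0x7f) {                 // printable ASCII
            run += (char)c;
        } else {
            if (run.size() >= 4) printf("%s\n", run.c_str());
            run.clear();
        }
    }
    if (run.size() >= 4) printf("%s\n", run.c_str());  // trailing run
    fclose(f);
    return 0;
}
```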
Debugging Tools for Windows
package provides the dumpchk program. Running it on the output of pmdump returns an "invalid dump" error; running it on a dump extracted by userdump outputs lots of information: memory regions, linked directory paths, the command line used to launch the process, timestamps. All this information can be correlated to build a profile. Sometimes the same piece of information is offered by several tools, and this can serve as a method of verification.
5 Logged-on user information gathering
Logged-on users
To view the currently logged-on users, their login date and time, and the users accessing remote shares, the PsLoggedOn tool from Sysinternals' PsTools package is safe and simple to use. When we see remote users accessing shares, their IP is not displayed, but we know that a remotely connected user must use a NetBIOS/SMB port: on Windows XP it is 445 (on older versions of Windows it was 139). So, by correlating this information, we can get the full picture of who is logged on, since when, and from where.
Here we see the local user. The same user is accessing a network share. There is also a connection with administrator privileges: the administrator user is accessing a network share through the PsExec tool with the -s flag (which spawns the PSEXECSVC service on the target machine).
To verify the network connections, we simply search the netstat output for NetBIOS/SMB ports, which confirms the listening and established connections.
History of logins
To obtain a list of the last logons, successful or not, we can use the NTLast tool from McAfee. It depends on the Windows event logs: it reads .evt files and applies different filters. It is easy to spot password-guessing attempts and failed logins (using the -f option) and to generate reports. It also has a verbose option (the -v flag) that additionally shows the logoff time, when possible.
In its output we can see different kinds of successful login events: a local administrator login (generated by a command like 'PsExec -u Administrator -p pass'), the current user's login, and two services started under NT AUTHORITY, one of them by the PsExec -s option (run a process under the System account).
System event logs
Windows event logs store lots of information about the security of the system and about application and user activity. PsLogList, from the same PsTools package, is a very flexible tool that uses the Event Log API to dump all the information from the event logs (security, system and application events, but also events logged by any particular application that reports through Event Viewer). Important features:
- It can dump logs from remote machines (using the \\computer_name -u user_name parameters).
- It can apply lots of filters, to show or to exclude records by: date/time, user name, event id, event type (errors, warnings) and application name.
- It can export files in different formats: it supports comma-separated values (CSV), and the default delimiter can be changed (with the -t switch).
- It aggregates event log data from multiple computers (using the @computers_file switch).
So this can be used automatically to check for local or network anomalies, but also during the forensic analysis process to confirm a user's actions and establish context. It really is a "Swiss-army knife event log-management utility", as its authors describe it. A minimal sketch of the underlying Event Log API is shown below.
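The sketch reads the System log newest-first through the same documented API that PsLogList builds on:

```cpp
// Minimal sketch of the Event Log API: read the System log newest-first
// and print record numbers, generation times (Unix epoch seconds) and
// event IDs.
#include <windows.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "advapi32.lib")

int main()
{
    HANDLE log = OpenEventLogA(NULL, "System");
    if (!log) return 1;

    std::vector<BYTE> buf(0x10000);
    DWORD read = 0, needed = 0;
    while (ReadEventLogA(log,
                         EVENTLOG_SEQUENTIAL_READ | EVENTLOG_BACKWARDS_READ,
                         0, buf.data(), (DWORD)buf.size(), &read, &needed)) {
        BYTE *p = buf.data();
        while (p < buf.data() + read) {   // records are packed back to back
            EVENTLOGRECORD *rec = (EVENTLOGRECORD *)p;
            printf("record=%lu time=%lu eventid=%lu type=%u\n",
                   rec->RecordNumber, rec->TimeGenerated,
                   rec->EventID & 0xFFFF, rec->EventType);
            p += rec->Length;
        }
    }
    CloseEventLog(log);
    return 0;
}
```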
Transparently encrypted data
Users may have encrypted data that needs to be accessed on the fly and that is available transparently while the user is still logged in. Here we can have files, folders or even drives protected by the EFS (Encrypting File System) feature of the NTFS file system. This uses a very strong form of encryption and is only recoverable with the user's credentials or with a recovery agent's certificate (for instance, if the computer was part of a domain and a recovery agent was already defined). Not even off-line brute forcing of the encryption is practical for EFS, because the implementation of the algorithms is not publicly documented and the protocol has not been fully reversed. There is ongoing work in this area to reverse engineer and understand the data blocks used in EFS, so that this file system could be accessed from other operating systems, given the user name and password.
Similar to EFS data are the encrypted file systems mounted by third-party tools such as TrueCrypt. If a TrueCrypt volume was mounted at seizure time and the evidence is not collected at that moment, then without the access password it is lost after shutdown.
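Before shutdown, EFS-protected files can at least be identified while they are still transparently readable; a minimal sketch using the documented FILE_ATTRIBUTE_ENCRYPTED flag (with a hypothetical example path) follows:

```cpp
// Minimal sketch: checking whether a file is EFS-encrypted (and therefore
// must be collected while the user's context is still live). The default
// path is a hypothetical example.
#include <windows.h>
#include <cstdio>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "C:\\work\\secret.doc";
    DWORD attrs = GetFileAttributesA(path);
    if (attrs == INVALID_FILE_ATTRIBUTES) {
        printf("cannot query %s\n", path);
        return 1;
    }
    printf("%s is %sEFS-encrypted\n", path,
           (attrs & FILE_ATTRIBUTE_ENCRYPTED) ? "" : "not ");
    return 0;
}
```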
Some browsers (Chrome and Internet Explorer) store the user's saved credentials for visited sites protected by DPAPI (Data Protection API). Its confidentiality is based on the user's login credentials. The relation between DPAPI, the user password, the master and session keys and the protected data is described below.
The Data Protection API is a stronger option for developers to protect sensitive data, available starting with Windows 2000. It is a service that provides confidentiality of data by using encryption. Because data protection is part of the operating system, every application can secure data without needing any specific cryptographic code other than the necessary function calls to DPAPI. It uses the Triple-DES algorithm with strong keys and ties them to the user's logon password. Because all applications running in the context of a user would have access to data protected by that user, a 'secret' can be introduced to act as secondary entropy (if it is stored unprotected, other applications could use it to unprotect the data). There is also the option to use a prompt which sets/asks for a password (usable only in GUI applications).
The key derivation process is as follows: DPAPI initially generates a Master Key (512 bits of random data). To protect it, a key is derived from the SHA-1 hash of the user's password, a salt and an iteration count, through a password-based key derivation function (PBKDF2 from PKCS #5, with 4000 iterations). An HMAC is calculated over the Master Key (to prevent tampering). The derived key is used to encrypt (TDES in CBC mode) the Master Key and the HMAC. The encrypted Master Key and HMAC, together with the unencrypted salt and iteration count, are all stored in a Master Key file, which resides in the user's profile directory.
A session key is a symmetric key used to encrypt and decrypt the actual data. It is not stored anywhere; it is always derived and then removed from memory (hence the importance of acquiring data protected by it in time). It is derived through CryptoAPI calls, using as input the Master Key, 16 random bytes, an optional entropy value and/or an optional user password. The random bytes are stored unprotected in the output data blob.
Two example source files have been included, showing how to use this functionality to encrypt and decrypt data blobs; a similar minimal sketch is given below.
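A minimal round-trip sketch of the same functionality (assuming the current user's context and no UI prompt; the "salt" entropy value is an arbitrary example) could look like this:

```cpp
// Minimal DPAPI round trip: seal a blob under the current user's
// credentials with CryptProtectData, then open it again.
#include <windows.h>
#include <wincrypt.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "crypt32.lib")

int main()
{
    char secret[] = "my password";
    DATA_BLOB in = { (DWORD)strlen(secret), (BYTE *)secret };
    DATA_BLOB entropy = { 4, (BYTE *)"salt" };   // optional secondary entropy
    DATA_BLOB sealed = { 0, NULL }, opened = { 0, NULL };

    // Seal the blob under a key derived from the user's Master Key.
    if (!CryptProtectData(&in, L"demo", &entropy, NULL, NULL, 0, &sealed))
        return 1;
    printf("protected blob: %lu bytes\n", sealed.cbData);

    // Unsealing succeeds only under the same user's logon credentials
    // (and with the same entropy, if one was supplied).
    if (CryptUnprotectData(&sealed, NULL, &entropy, NULL, NULL, 0, &opened))
        printf("recovered: %.*s\n", (int)opened.cbData, opened.pbData);

    LocalFree(sealed.pbData);
    LocalFree(opened.pbData);
    return 0;
}
```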
The Helix 2008 R1 forensic distribution contains tools to extract credentials stored in Chrome's SQLite database (the chromepass program, from NirSoft) and also from the registry, as stored by Internet Explorer, all versions (the iepv tool). The APIs used to do this are described by Microsoft, the Chrome browser is open source, and the 'tricks' used by IE after version 7 have already been discovered. For these reasons, and also because the existing browser password dumping tools are detected as viruses by much anti-virus software, contain adware/spyware, or are closed source, I have coded some simple proof-of-concept password dumpers for browsers (the C++ code is attached in tools\pass recover\my\). This could help in understanding the underlying mechanisms of storing passwords, not only those used by browsers but also by other applications utilizing the Data Protection API.
LSA Secrets
"LSA secrets" is a special protected storage for important data used by the Local Security Authority (LSA) in Windows. The LSA is designed for managing a system's local security policy, auditing, authenticating and logging users on to the system, and storing private data. The important thing to realize about LSA secrets is that they potentially contain credentials for services started under specific users, passwords for accounts that log on from external domains, as well as Dial-up Networking passwords. This "secret" information is stored in an encrypted format under the registry key HKLM\SECURITY\Policy\Secrets. Normally, these registry keys are not visible even when regedit is run as Administrator, because the permissions on this key grant access only to the SYSTEM account. Each secret (key) here holds its data in the CurrVal sub-key. For example, on systems with auto logon enabled there is a key called DefaultPassword that contains the cached logon password. For unknown reasons, this key exists even on some systems without auto logon enabled (I had that key on a Windows XP SP3 machine that never had auto login, and other people report the same problem - a possible breach). A method to try to recover the logon password (used by other tools as well, such as Cain&Abel) is to query the value from the DefaultPassword key and decrypt it using functions from the Windows API. So the LSA secret storage can be read in the context of the current user by using functions exported by the Advapi32 library, LsaRetrievePrivateData being the most useful. A sample application for this purpose has been created and attached (http://code.google.com/p/secrets/); a minimal sketch follows.
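The sketch reads the DefaultPassword secret through the documented LsaRetrievePrivateData call; it must run with sufficient privileges (typically SYSTEM or an elevated administrator):

```cpp
// Minimal sketch: reading one LSA secret (DefaultPassword) through the
// documented LSA policy APIs.
#include <windows.h>
#include <ntsecapi.h>
#include <cstdio>
#include <cwchar>
#pragma comment(lib, "advapi32.lib")

int main()
{
    LSA_OBJECT_ATTRIBUTES attrs;
    ZeroMemory(&attrs, sizeof(attrs));          // must be zero-initialized

    LSA_HANDLE policy;
    if (LsaOpenPolicy(NULL, &attrs, POLICY_GET_PRIVATE_INFORMATION,
                      &policy) != 0)            // 0 == STATUS_SUCCESS
        return 1;

    WCHAR name[] = L"DefaultPassword";
    LSA_UNICODE_STRING key;
    key.Buffer = name;
    key.Length = (USHORT)(wcslen(name) * sizeof(WCHAR));
    key.MaximumLength = (USHORT)(key.Length + sizeof(WCHAR));

    PLSA_UNICODE_STRING value = NULL;
    if (LsaRetrievePrivateData(policy, &key, &value) == 0 && value) {
        printf("DefaultPassword: %.*ls\n",
               value->Length / (int)sizeof(WCHAR), value->Buffer);
        LsaFreeMemory(value);
    } else {
        printf("secret not present or access denied\n");
    }
    LsaClose(policy);
    return 0;
}
```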
6 Non-volatile data
Live forensics and the incident response process should not neglect non-volatile data. First, logical files and logs may be collected; here the netcat program (or its variant that uses encryption, cryptcat) may be useful for sending logs to a remote server. In the final phase, the physical hard drives, floppies, backup tapes, CD/DVD-ROMs, USB thumb drives, flash memory cards and other storage media found at the scene are taken into custody, along with any relevant materials and printouts.
The initial evidence collection process is important to establish the level of response that will be needed further on and to fully understand the scope of the incident. Often, a pre-defined check-list will reduce the risk of forgetting to gather some important proof or of mistaking the order of volatility. One example of a check-list found online is the UCF University procedure for incident response on Windows systems. It is meant to assist an investigator in better recording the initial findings. A procedure like this should be kept up to date and complemented with written notes and observations from the investigators.