Featured

Locked File Access Using ESENTUTL.exe

I’m currently working on a solution to collect files off a live system to be used during some IR processes. I won’t go into any great detail but I’m limited to only using built-in Windows utilities.  I need access to browser history data and while Chrome and Firefox allow copying of the history files, the WebCacheV01.dat file that IE and Edge history are stored in is a locked file and cannot be copied using native copy commands/cmdlets like Xcopy, Copy-Item, RoboCopy, etc.

ESE Database Files and ESENTUTL.EXE

The WebCacheV01.dat file is an ESE (Extensible Storage Engine) database file and there is a built-in tool for performing maintenance operations on such files: esentutl.exe. I started wondering if I could use this tool to export the database or at least dump the history. Running esentutl.exe from a command prompt, we see two interesting options: /m to dump a file and /y to copy a file.

Copying the file sounds great to me. Let’s try:
“esentutl.exe /y WebCacheV01.dat /d C:\Path\To\Save\WebCacheV01.dat”


Strike 1. That gives us the same “file is being used” error I received with the other copy commands. OK, so taking another look at the copy options, I see the /vss and /vssrec options. A couple of important distinctions here:

  • I am running Windows 10, version 1803. The /vss and /vssrec options are only available on Windows 10 and Server 2016 or later.
  • The /vss and /vssrec options require you to be running as an administrator.

The /vss option “copies a snapshot of the file, does not replay the logs”.  We’ll talk a little more about the transaction logs later but let’s go with the /vss option for now.
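Based on the help output, the /vss syntax should look something like this (the WebCache path below is the usual location for the current user’s file and the export path is just an example):

esentutl.exe /y C:\Users\<user>\AppData\Local\Microsoft\Windows\WebCache\WebCacheV01.dat /vss /d C:\exports\WebCacheV01.dat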


OK, that’s much better. If I open up the WebCacheV01.dat file in ESEDatabaseView or BrowsingHistoryView, I see browsing history leading up to my testing. Initially, I thought it was grabbing a copy of the file from a previous Volume Shadow Copy (VSC) but that isn’t the case. Esentutl.exe is able to use the Volume Shadow Copy service to make a backup of a locked file.  This can be done even if VSCs are disabled on the system.

What about the /vssrec option? With ESE databases, data is not written directly to the database file. In simple terms, data is first written to memory and then to transaction logs before being flushed into the database file. Microsoft’s documentation says: “The data can be written to the database file later; possibly immediately, potentially much later.”

I did some testing with this and I’m not sure under what scenarios the flush doesn’t happen right away. I opened Edge and navigated to a new page, then immediately copied the WebCacheV01.dat file while Edge was still open, and the copy contained the new entry.

Just keep in mind that when using the /vss option alone, we have the potential to miss entries that have not yet been written to the database. Using the /vssrec option will replay these transaction logs. This is the syntax used:

esentutl.exe /y C:\Path\To\WebCacheV01.dat /vssrec V01 . /d c:\exports\webcachev01.dat

This can be a double-edged sword, though: once the logs are flushed, you also have the potential to lose deleted records that have yet to be purged from the database. If this is a concern, you could go with both options and save two copies of the file. This article from SANS provides more detail on the ins and outs of ESE databases and transaction logs.

https://digital-forensics.sans.org/blog/2015/06/03/ese-databases-are-dirty
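If you do want both copies, something like the following should work (the destination file names here are my own, pick whatever you like):

esentutl.exe /y C:\Path\To\WebCacheV01.dat /vss /d C:\exports\WebCacheV01_snapshot.dat
esentutl.exe /y C:\Path\To\WebCacheV01.dat /vssrec V01 . /d C:\exports\WebCacheV01_replayed.dat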

Additional Uses of Esentutl.exe

So we know we can use esentutl.exe to copy ESE database files, but what about other locked files? Well, it turns out you can copy those too. In this example, I grab a copy of the NTUSER.dat file for the currently logged-in account.

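The command follows the same pattern as before; something like this (the username and export path are placeholders):

esentutl.exe /y C:\Users\<username>\NTUSER.DAT /vss /d C:\exports\NTUSER.DAT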

I really like this as an option for copying system files when doing investigations or even testing. I’m sure it has value to Red Teams as well, since it allows you to grab other hives like the SAM and other ESE databases like NTDS.dit without introducing outside tools or using PowerShell. Blue Teams can detect this type of activity by auditing process creation and looking for executions of esentutl.exe, particularly with the /vss switch.
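As a rough sketch of what that hunt might look like: if process creation auditing (Security event ID 4688) with command-line logging is enabled, a quick PowerShell one-liner along these lines could surface it (this assumes command lines are actually being captured in the Security log):

Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4688} | Where-Object { $_.Message -match 'esentutl' -and $_.Message -match '/vss' } | Select-Object TimeCreated, Message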

Final Thoughts

I’m still looking for a good way to get IE/Edge browser history on the versions of Windows that do not have the /vss switch so if you’ve got any ideas there, let me know.


Installing Volatility on Windows

I recently had the need to run Volatility from a Windows operating system and ran into a couple of issues when trying to analyze memory dumps from the more recent versions of Windows 10.

Volatility uses profiles to handle differences in data structures between Operating Systems.  There are changes in these data structures between some builds of Windows 10 that are significant enough to cause certain plugins to fail or return incomplete and unreadable results.

Compiled versions of Volatility are available at https://www.volatilityfoundation.org/releases. These releases contain all the required dependencies and don’t require any installation, but they don’t contain the latest profiles. We can verify this by downloading the compiled Windows release and running it with the --info switch to display the available profiles. Those of you who are familiar with Windows build numbers will note that we are missing the following builds: 15063, 16299, 17134, and 17763.

Installation

To get the latest profiles, we need to install Volatility from the source code. This utilizes Python and will also require some dependencies to be installed for all plugins to work. Also, I’d like to point out that while these instructions are for Windows, the same principle applies to installing on other operating systems. For additional details, I highly recommend you take a look at the Installation page on the Volatility GitHub. It provides links for all the dependencies and explains what functionality each one provides.

  1. Download and install Python 2.7. (The Volatility setup script doesn’t currently support Python 3.) **Make sure to enable the option to add Python to your PATH during the installation.**
  2. Download the Volatility source code archive and extract files
  3. Open a command prompt, navigate to the location you extracted the Volatility source to, and run “python setup.py install”
  4. If we run “vol.py -h” at this point, we will get an error indicating that several dependencies are not installed.  Use the links and commands below to install the following dependencies.
    • diStorm3: Download from https://github.com/gdabah/distorm/releases and run the executable to install
    • pyCrypto: I had some issues installing pyCrypto. The install link on the Volatility GitHub for the pyCrypto binaries is the easiest install method, but it stopped working shortly before this posting. I’ll leave it up in case it’s a temporary issue. If not, we can use pip to install, but we will need to install the Microsoft Visual C++ Compiler for Python 2.7 first.
    • Yara: https://www.dropbox.com/sh/umip8ndplytwzj1/AADdLRsrpJL1CM1vPVAxc5JZa?dl=0&lst=
      I know the Dropbox link seems sketchy, but that’s where the Volatility GitHub points when selecting the option for binary installers. There are several options on this page; make sure to select one of the py2.7.exe options. Once downloaded, run the executable to install.
    • openpyxl: There are no compiled Windows binaries, so we will install by running “pip install openpyxl” from the command line
    • ujson: There is no compiled binary installer for this one either, so we will use pip here too: “pip install ujson”

There is one other dependency listed for Volatility: the Python Imaging Library (PIL). This gives Python the ability to process images and graphics. I was unable to install it, and since it wasn’t a capability I needed in Volatility, I chose to leave it out.

So that’s it. Now if we run “vol.py --info”, we can see the newer profiles are listed.

We can get started with Volatility by running “vol.py -h” from the command line to see the syntax.
The SANS Memory Forensics Cheat Sheet is also a great resource if you need help getting started on Memory Forensics commands.
https://digital-forensics.sans.org/media/volatility-memory-forensics-cheat-sheet.pdf
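Once the newer profiles are available, a typical run looks something like this (the memory image name here is just a placeholder; choose the profile that matches your dump’s Windows build):

vol.py -f memdump.mem --profile=Win10x64_17134 pslist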

Finally, I need to say thanks here to Richard Davis and his 13Cubed YouTube channel. Richard has a ton of great videos, one of which covers this profile issue on the SIFT Workstation and Kali Linux. I watched it several months ago, and when I ran into the Windows issue, I knew the cause right away thanks to him. Here’s the video if you are interested.

I hope this is helpful and if you have any questions or comments feel free to reach out.



More Automation: Get-ZimmermanTools.ps1

Just wanted to provide an update on a recent addition to my GitHub. In my post last week, I discussed the Start-ImageParsing.ps1 script, which automates the use of various parsing tools against a forensic image. One requirement of that script is that all of Eric Zimmerman’s tools must be in the same directory. I realized this download and extraction might be a pain for people who don’t already have the tools, so I put together this script to automate things. It’s also a good way to ensure that you always have the latest versions installed.

Installation and Execution

  1. Download the script from my Github and extract files: https://github.com/grayfold3d/POSH-Triage
  2. Unblock the file and set the PowerShell execution policy. RemoteSigned allows local scripts to run while requiring scripts downloaded from the internet to be signed or unblocked.
    • Right-click script, select Properties and then “Unblock file”
    • Open PowerShell as administrator and type:
      > Set-ExecutionPolicy -executionpolicy RemoteSigned
  3. By default, files are saved to C:\Forensic Program Files\Zimmerman. If you’d like them to be saved to a different location, you can specify this when executing from the PowerShell console using the -outDir parameter, or the script can be edited to set the location using these steps.
    • Right-Click Get-ZimmermanTools.ps1 and select Edit
    • Update the default output location in the script to your desired folder and save your changes
  4. Right-Click Get-ZimmermanTools.ps1 and select “Run with PowerShell”
  5. The script will launch and begin downloading the files
  6. Alternatively, the script can also be launched from the PowerShell console by navigating to the directory it is saved to and entering
    > .\Get-ZimmermanTools.ps1 
    In this example, we use the -outDir parameter to specify an alternative location to save the files.
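For example, to download everything to a custom folder in one shot (the path here is just an example):

    > .\Get-ZimmermanTools.ps1 -outDir 'D:\Tools\Zimmerman'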

So that’s it. Hopefully, this will save you some headaches.  As always, if you have any feedback or suggestions, leave a comment or send me a message on Twitter.


Start-ImageParsing.ps1

Earlier this year, I was able to take the SANS FOR500 course.  I’ve really never enjoyed any training more.  I took the OnDemand course which I think allows you to soak up the material at a reasonable pace. In addition to the course labs, I found it very easy to apply the topics being covered to my daily work.

For most of the artifacts covered in the course, SANS tries to present one commercial tool and one open source tool that can be used to process the data.  The tools by Eric Zimmerman get a lot of coverage in this course.  If you aren’t familiar with these, you should definitely check out Eric’s blog.  There is a good mix of GUI and command line applications which allow you to parse things like shell items, registry hives, the Master File Table ($MFT) and even mount Volume Shadow Copies. I plan to cover these in more detail and discuss how I use them in my workflow in a future blog post.

I really like the simplicity of running these tools, and I find they present information very quickly, which allows me to identify areas to focus my investigation on. The only downside is running each application against an entire image, or against multiple images. This can get a little time-consuming, particularly if the image you are working on has multiple user profiles or VSCs.

This need led me to create Start-ImageParsing.ps1, a PowerShell script that executes against a mounted image and runs the tools against all user profiles. I created this script a few months ago and it’s been a big time saver for me. Eric recently released VSCMount.exe, which mounts any available Volume Shadow Copies; I’ve updated my script to run VSCMount and then execute the other tools against each shadow copy. I also added two other applications: Hindsight, which does an outstanding job of parsing Chrome artifacts, and BrowsingHistoryView by NirSoft, which shows history for Chrome, Firefox, Edge, Internet Explorer, etc.

TL;DR – Eric’s tools are awesome. I’ve got a script that automates their execution.

So how do you get started?

  1. Download Eric’s tools. These can be manually downloaded from https://ericzimmerman.github.io/. Note they can also be installed using Chocolatey, but the path it places each file in causes issues with my script, so my recommendation is to download each of the files and extract them into the same subdirectory. (Update 10/12/18: you can now use the Get-ZimmermanTools.ps1 script in my GitHub to download these tools.) When you are done, you should see these files in the same folder. The default setup in the script is to run the tools from “C:\Forensic Program Files\Zimmerman”. I’ll show you how to change this shortly.
  2. If you want to parse browser history, we need Hindsight and BrowsingHistoryView. Download these and extract them. Two notes about Hindsight: if you download it from GitHub, there are a lot of files that allow you to run it in Python, but we are really only using the Hindsight.exe file located in the ‘dist’ folder. Also, my script currently only parses the default Chrome profile, so keep this in mind if there are other profiles on the image. Using the same folder structure as earlier, we have:
    C:\Forensic Program Files\Nirsoft\BrowsingHistoryView.exe
    C:\Forensic Program Files\Hindsight\Hindsight.exe
  3. Download my PowerShell script: https://github.com/grayfold3d/POSH-Triage
    Save this file and extract the contents anywhere. If you saved your tools to a different location than the one specified above, right-click Start-ImageParsing.ps1 and select Edit to open the script in the PowerShell ISE. Update the path parameters for each tool with the location you saved the files to, and save your changes when done. Note that $hindsightPath points to the executable while the others use the directory. If you aren’t familiar with PowerShell, there are a couple of things that need to be done for the script to execute. First, right-click Start-ImageParsing.ps1, go to Properties and select Unblock. Next, we need to modify the execution policy on the system to allow scripts to run. We will set the policy to RemoteSigned, which will allow local scripts to run, but anything from the internet will need to be signed or unblocked like we just did. This can be done by typing the following at the PowerShell prompt:
    Set-ExecutionPolicy -executionpolicy RemoteSigned
  4. Mount an image. You can use Arsenal, FTK Imager, or even mount it in SIFT Workstation and access the mount over the network. There are a couple of caveats to each method. My script will attempt to detect your mounting method and alert you as to what may be missing.
    • Arsenal Image Mounter: This is my favorite option, as it allows us to access Volume Shadow Copies. Its downside is that it doesn’t allow access to the $MFT without extracting it using another tool. Also, only certain versions of Arsenal give access to the Volume Shadow Copies; Harlan Carvey had a recent post about this here. **Make sure you select the Write Temporary option when mounting!**
    • FTK Imager: FTK works well for the most part. You get the $MFT parsing that Arsenal doesn’t have, but lose the Volume Shadow capability. There can also be an issue parsing Shellbags if the hive is dirty, as well as parsing Chrome artifacts with Hindsight. The Shellbags issue can be bypassed by holding SHIFT down while the script executes.
    • SIFT Mount: Currently not parsing Shellbags due to an issue with SIFT not recognizing Windows reparse points, which causes SBECmd.exe to loop endlessly. So, for now, I’ve excluded Shellbags if the script detects a UNC path in the mount. A SIFT mount also doesn’t give VSCMount.exe the ability to mount Volume Shadow Copies. You can manually mount these in SIFT and run the script against each mounted VSC, but it doesn’t do all of them automatically like Arsenal. So what should you pick? I typically use Arsenal, and then I grab the $MFT and parse it on its own using MFTECmd.exe.
  5. Launch PowerShell as Administrator, change directory to the location of Start-ImageParsing.ps1, and type the script name and parameters.

Example 1:
.\Start-ImageParsing.ps1 -imagepath f: -outpath G:\cases\Dblake -vsc

This executes the script against an Arsenal-mounted image ‘f:’ and saves the output into G:\Cases\Dblake. The -vsc switch parameter forces the Volume Shadow Copies to be mounted and parsed. Since the -toolPath, -hindsightPath, and -nirsoftPath parameters are not specified, the default locations will be used.

Example 2:
.\Start-ImageParsing.ps1 -imagepath g:\[root] -toolPath g:\tools\Zimmerman -hindsightPath g:\tools\hindsight.exe -nirsoftPath c:\tools\nirsoft -outpath G:\cases\Dblake

In this example, we are running against a drive mounted in FTK Imager. We are also explicitly stating the locations of the tools to be used in the parsing. As stated before, you are better off setting these in the script so you don’t have to do it this way, but it’s an option if needed. No -vsc switch parameter is used, as that’s not an option with FTK-mounted images.

My Github has more examples and there is some help built into the script.  Just type:

Get-Help Start-ImageParsing.ps1 -examples

  6. Review the output.

Looking at the output folder, we can see how everything is organized.

  • Any tools that process artifacts for an individual user will save their output in the respective folder for that user. The two exceptions to this are SBECmd.exe and BrowsingHistoryView.exe which both save into the root output folder.
  • The Mounted_VSC_* folder contains the mounted Volume Shadow Copies should you need to perform additional actions on them.  An important note on this is that you will not be able to navigate this folder structure completely using Windows Explorer. Command line or PowerShell work great though.
  • The Processed_VSC folder contains a subdirectory for each VSC found in the image and the parsed output from each tool can be seen in these.
  • The other files I’d like to point out are the log files:
    Start-ImageParsing_Detailed.log will display the output streams for each tool.
    Start-ImageParsing_Commands.log  will display the command and any arguments executed by the script. If an artifact is not found, this will be listed as well.


So that’s it. Hopefully, you’ll find this as useful as I have. It’s a work in progress, and in the next update I’m hoping to add a couple of RegRipper parsers and then combine and dedupe the output from the primary image with the VSCs.

Thanks for reading.  If you have any comments, suggestions or questions feel free to let me know.

Long distance runner, what you standin’ there for?

If you ask any of the prominent bloggers and instructors in the DFIR community for tips for those just getting started, a pretty common theme is to start a blog. This advice also applies to those who have been doing Incident Response and Forensics for a while. Phil Moore, who operates thisweekin4n6.com and thinkdfir.com, recently put out a blog post extolling the merits of running a blog. I won’t go into any great detail on his post, but two things really stood out and encouraged me to move forward.

  1. Participation Inequality – This is based on a principle that most of the content in the DFIR community is created by a small percentage of contributors.  I can see how this may be true but it still seems like there are a lot of people contributing.  I love the sharing of tools and ideas that takes place in this field and want to be part of that.
  2. Imposter Syndrome –  I think this is pretty common across most technology fields.  We always tend to think that everyone else knows more than we do.  Many of you reading this (is anyone reading this?) probably have far more experience than I do in the DFIR world.  Just the same, I know my way around Windows artifacts and think I have stuff to share that others will find useful. I had 15 years in the Infrastructure world before switching to an IR position last year and man do I love it!  I mean I really love it. I would have never blogged about that stuff. Well maybe I would have blogged about PowerShell but the rest…no way.

So here we are… blog entry #1. I’m planning on putting out something new at least once a month. What can you expect from entries 2 and on? I’ll be highlighting various artifacts along with tools that I find do a great job presenting them. I also really enjoy PowerShell, so I’ll be including a few scripts I’ve created over the past few months and discussing how they’ve helped in cases or in my daily workflow. If anything I write piques your interest, feel free to reach out.