All posts by Elad Erez

Eternal Blues – Worldwide Statistics

Finally, 2 weeks post launch, some worldwide statistics.

But before you start, here are some useful tips:

  • Read the FAQ below
  • Hover over data to see extra-detailed tooltips
  • Click data for dynamic filtering
  • CTRL+click for multi-select
  • Hit the full screen button below and enjoy ;)

Words out. Visualizations in.

 

Please share your feedback (your results, how it helped you) on Twitter or in the comments below. If you have more ideas for cool visualizations, just let me know. Need to ask something privately? You can email me or get in touch through LinkedIn.

 

Some surprising facts (July 12, 2017)

  • More than 8 million IPs were scanned; France leads with 1.5 million
  • The top 3 vulnerable countries (out of ~130) had more than 30,000 vulnerable hosts between them
  • The majority of hosts (53.82%) still have SMBv1 enabled
  • 1 out of 9 hosts in a network is vulnerable to EternalBlue
  • One network, with almost 10,000 hosts (not IPs), had 2 vulnerable hosts. How could anyone find that without Eternal Blues?

 

Conclusions

Unfortunately, exploiting EternalBlue is still a very effective method of achieving remote code execution. It is exploitable on more than 50,000 of the hosts scanned by Eternal Blues (as of July 12, 2017). Yes, even after all the latest attacks by WannaCry and NotPetya. I’m here to remind you that sometimes it takes just one vulnerable machine to take you down.

Although the numbers are quite high (remember, these are only the IPs scanned with my tool), I feel awareness did increase somewhat. Running Eternal Blues is, by definition, being aware of the problem. So good for you for taking responsibility and checking your network status. Now it’s patching time!

Recommendations

 

Please don’t be mistaken: recent ransomware attacks are the ones that made all the buzz, because they actually tell you when they hit you. I believe there are many more EternalBlue-based attacks that remain off the radar and are still unknown to us (for example, data exfiltration, or simply enlisting your computers in a botnet). So not seeing something like the ransom screen below does not mean you weren’t hit…

 

FAQ

  • Is ‘IP’ == ‘Host’?
  • No. An IP is an IP address; it may or may not be in use by an actual host
  • Can someone hack your data and see my personal data?
  • First, everything is hackable. Second, there is no personal data to hack. In fact, I’ve just made it all available online in the Power BI dashboard above, so there’s no need to hack: it’s all here! :D
    As for what’s being collected and why there is no privacy issue, read Privacy & Reporting
  • Are there any duplicates?
  • Yes. Since I don’t track users/hosts, I cannot know if a user scanned the same network twice
  • So, the true totals should be lower than reported?
  • Actually quite the opposite. There are several reasons to believe the true total is actually higher:
    • Versions 0.0.0.1-0.0.0.4 included a detection issue (as mentioned here), so to avoid even the slightest mistake in the statistics, I excluded all results collected by those versions (meaning scans of 1 million IPs were dismissed entirely)
    • Some scans were run in more secured environments with no internet access, meaning no statistics for me
    • Some users probably disabled access to my website in order not to send statistics
  • Can I use these visualizations in my website / presentations?
  • Sure. Letting me know how/where it helped you would be great
  • Are visualizations, or the data they’re based on, going to be updated?
  • Yes. At least twice a week

Eternal Blues – Versions & Reporting

Versions

Version history (version – date – size – notes – SHA-256):

  • 0.0.0.9 (latest) – July 25, 2017 – 886 KB
    Notes: increased timeout (for slow networks); removed the “Are you sure” prompt before exit
    SHA-256: 7f5f447fe870449a8245e7abc19b9f4071095e02813d5f42c622add56da15b8b
  • 0.0.0.8 – July 10, 2017 – 1.43 MB
    Notes: added host name column for better analysis
    SHA-256: 21cc36e60e661613f0c05e73b9496bf2d456931686b0693112842d91d7e64e78
  • 0.0.0.7 – July 6, 2017 – 1.43 MB
    Notes: some GUI fixes
    SHA-256: 7a08f7010402e2813830c77be1e992f6193f5c1ea97b76fbe706c2090ba66cb3
  • 0.0.0.6 – July 3, 2017 – 1.42 MB
    Notes: some GUI fixes
    SHA-256: 1e6fc5078edd00a8ecedcbd2e2054a769610bfacce81b22f1285a7e14dbeacb0
  • 0.0.0.5 – July 2, 2017 – 1.42 MB
    Notes: vulnerability detection fix
    SHA-256: 952feb69a311e0a7602b65b0e981364bc2f0d79bb7af79ea342234c28b6df099
  • 0.0.0.1-0.0.0.4 – June 29, 2017 – 1.42 MB
    Notes: first versions
    SHA-256: N/A

Privacy & Reporting

Anonymous statistics are sent to omerez.com every time Eternal Blues starts or finishes a scan. Your privacy is a top concern of mine.

The information collected is described below (each new version also collects everything listed for the versions before it):

  • 0.0.0.1-0.0.0.4
    • Eternal Blues version
    • Random ID
      • Generated with each new launch of the application. It is used for my own debugging: to see whether a scan started but did not end (or ended with a different number of hosts). Launching twice on the same user/host results in a different random number
    • # of scanned IPs
    • # of vulnerable IPs
  • 0.0.0.5
    • # of responsive IPs
  • 0.0.0.6 and later
    • # of IPs with SMBv1 enabled

Some other metadata is appended by default by Google Analytics, such as scan time and country.

I don’t know your IP address, I don’t care about it, and frankly I’m quite glad not to know anything about it, as that completely eliminates any unnecessary privacy or legal issues.

What’s not being collected?

User names, host names, IP addresses, domain names. None of it is of any interest to me.
Two scans taken by the same user & computer cannot be correlated (the only data they share is the country).

Why collect data at all?

Understanding what the world’s EternalBlue (and SMBv1) posture really looks like is of great interest to me, and actually to many others in the cyber security ecosystem. I doubt anyone has good visibility into it. I’m not sure even Microsoft knows the average ratio of hosts with SMBv1 enabled in a typical network.

Stats are coming soon.
July 10 teaser: more than 7 million IPs have been scanned so far. Power BI is coming…

Here they are ;)

Eternal Blues – Day 4 (important update)

It’s been quite a day. More than 2,000 scans in the past 24 hours and over 6,000 in total.

IMPORTANT UPDATE

My first priority for today was fixing the reported issues (I actually took a day off work). There were some scenarios of wrong detection. It happened mainly with Windows 2003, but could reproduce with other versions as well (the issue was reading 2 overwritten bytes). I can’t know the exact likelihood of reproduction, but I roughly estimate it at 1%-3%, which means approximately 2-8 hosts out of the default 256-host scan. If only half of those IPs are in use, that’s 1-4 hosts with a chance of a result mismatch.

Therefore, for people who scanned with version 0.0.0.4 or earlier:
I encourage you to run another scan with the latest version. Thankfully, a few people got in touch on day 2 and reported these mismatches. Today they verified version 0.0.0.5, and it reported 100% correct results.

How does this tool work?

I get a lot of questions about the logic behind a “YES” (vulnerable) result for a host. People were wondering whether the check was just “pinging the host”, or “checking SMBv1 status”, or “finding shares”. The answer to all three is “no”.

Eternal Blues checks for the existence of the EternalBlue vulnerability by sending 4 crafted SMB messages. There are many references online for the technical details. I think the best executive summary I read was Rapid7’s:

“…it connects to the IPC$ tree and attempts a transaction on FID 0. If the status returned is “STATUS_INSUFF_SERVER_RESOURCES”, the machine does not have the MS17-010 patch.”

It also seems that a patched host (with MS17-010) will return STATUS_INVALID_HANDLE or STATUS_ACCESS_DENIED.

The 4 crafted SMB messages are:

  • SMB Negotiate Protocol
  • SMB Session Setup AndX Request
  • SMB Tree Connect (to IPC$)
  • SMB Peek Named Pipe

Getting STATUS_INSUFF_SERVER_RESOURCES as the SMB status of the 4th message means the host is vulnerable.
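
For the curious, here is a minimal sketch (in C) of that final verdict step. It assumes the first three messages were already exchanged on a connected socket and that the buffer holds the SMB1 response to the 4th message, starting at the SMB header (with the 4-byte NetBIOS session header already stripped). The status constants are standard ntstatus.h values; the helper names and everything else are mine, not Eternal Blues’ actual code:

#include <stddef.h>

#define STATUS_INSUFF_SERVER_RESOURCES 0xC0000205u // vulnerable: MS17-010 missing
#define STATUS_INVALID_HANDLE          0xC0000008u // patched host
#define STATUS_ACCESS_DENIED           0xC0000022u // patched host

// The 32-bit NT status is a little-endian field at offset 5 of the SMB1 header:
// signature (0xFF 'S' 'M' 'B', 4 bytes), command (1 byte), then status (4 bytes).
static unsigned int smb_nt_status(const unsigned char* smb, size_t len)
{
    if (len < 9 || smb[0] != 0xFF || smb[1] != 'S' || smb[2] != 'M' || smb[3] != 'B')
        return 0xFFFFFFFFu; // not a valid SMB1 header
    return (unsigned int)smb[5] |
           ((unsigned int)smb[6] << 8)  |
           ((unsigned int)smb[7] << 16) |
           ((unsigned int)smb[8] << 24);
}

// Returns 1 if the response to the 4th message marks the host as vulnerable.
int is_vulnerable(const unsigned char* response, size_t len)
{
    return smb_nt_status(response, len) == STATUS_INSUFF_SERVER_RESOURCES;
}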

What’s next?

  • Release visibility (communicating what’s new in each version)
  • Some bug fixes (mainly UI, hopefully no more mismatches)
  • Taking some feature requests
  • Statistics. Prepare for some (super) Power BI

 

Eternal Blues – 72 hours update

It’s been three days since launch. The exposure “Eternal Blues” got is mind-blowing. The first day was very quiet, but then I had over 5,000 visits in 2 days (way more than I imagined). Actually, this traffic peak is all thanks to Tal Be’ery, Mirko Zorz (Help Net Security, Twitter) and Bleeping Computer (Twitter); without your help, I bet I would have had only 100 visitors this weekend. So one big THANK YOU to the three of you!

I got a few appreciation emails; people actually found vulnerable computers, which is fantastic. I also heard from a few people wondering about some false positives (fixed!), requesting features and suggesting improvements. This is all truly amazing, and also a lot to process in such a short time. Please be patient, I’ll do my best to answer you all and fix wherever needed. Stay tuned.

Eternal Blues

Eternal Blues is a free EternalBlue vulnerability scanner. It helps you find the blind spots in your network: the endpoints that are still vulnerable to EternalBlue.

Just hit the SCAN button and you will immediately start to see which of your computers are vulnerable and which aren’t. That’s it.

[Screenshot: Eternal Blues 0.0.0.8]

If you wish, you can switch networks or edit the range yourself (yes, you could even scan the whole internet if you wanted). Please use it for a good cause only. We have enough bad guys already…

DOWNLOAD HERE

Follow on Twitter for the latest updates.

 

Was this tool tested in real networks?

Oh yeah. Obviously I cannot say which, but in almost every network I connected to, there were a few vulnerable computers.

IMPORTANT: It does *not* exploit the vulnerability, but just checks whether it is exploitable.

 As of July 12, 2017: Worldwide statistics are available.

Yet another vulnerability scanner?

There are many vulnerability scanners out there. So… why did I create another? Mainly for the ease of use. The majority of recent WannaCry and NotPetya (Petya, GoldenEye, or whatever) victims are not technical organizations; sometimes they are just small businesses without a security team, or even an IT team, to help them mitigate this. Running Nmap or Metasploit (not to mention more commercial products) is something they will never do. I aimed to create a simple ‘one-button’ tool that tells you one thing and one thing only: which systems in your network are vulnerable.

 

Notes

This is a free tool provided for your benefit & security. I don’t charge for it. It is here to help you, and also to help me gather worldwide statistics. Learn more about it.

 

Tips

  • If you’re about to run it in your working environment, please update the IT/Security team in advance. You don’t want to cause (IDS/IPS/AV) false alarms
  • If vulnerable systems were found, please run Windows Update ASAP
  • For God’s sake, please disable SMBv1 already, whether your systems are patched or not. This protocol was written over three decades ago! (On Windows 8/Server 2012 and later, for example, the PowerShell command Set-SmbServerConfiguration -EnableSMB1Protocol $false does it.)
  • If you would like to enjoy the tool but disallow sending anonymous statistics (which is so uncool), disable access to my website

 

Final words

I really hope this can help people and organizations protect against the next attack.

This is a no-guarantees-use-at-your-own-risk tool.

Special thanks to Jonathan Smith for his contribution!

Please share your feedback -

  • Twitter: Omerez
  • LinkedIn: Elad Erez
  • Email: EternalBlues!omerez.com (replace ‘!’ with ‘@’)
  • Comment below

DOWNLOAD HERE (Learn more in version history)

Maintaining backward-forward compatibility of your own Client-Server protocol

When you are integrating your client with well-known interfaces, protocols are quite clear (well, at least they should be). Whether your server is an HTTP server, a COM interface, or even just a Windows DLL with exported functions, you know how to communicate with it just by looking at the documentation (e.g. the HTTP specification, MSDN). We know that if the documentation declares a specific interface/protocol, the implementation must stand behind it. Although this sounds obvious, when we develop our own custom client-server protocol, we sometimes miss this point.

An HTTP client and server is the obvious case, but we create many more. Let’s think of them not only as client-server, but as consumer-producer components. Take these examples:

  • Dll_A (client) calls exported functions in Dll_B (server)
  • Process_A (client) collects information from Process_B (server) over IPC
  • A user-space module (client) sends information to a kernel module (server)
  • Client sends a packet to server

All of these must work with a predefined interface.

To make your client work with the server properly, you must define a protocol known to both. The simplest and safest way to minimize the chance of bugs is to have both use the exact same protocol definition files. Whether this is a header (.h) file, an XML file, an INI file – you name it – having both the client and server (or consumer and producer) use the same file means they both “speak” the same language.

Consider the following protocol definition –

typedef enum {
    eConfigEnable,
    eConfigName
} EConfigParams;

typedef struct {
    EConfigParams eConfigParam; // which message this is
    void* pData;                // optional message payload
} SConfigMessage;

The EConfigParams enum declares the interface between the two components, and SConfigMessage is a generic message structure sent from client to server, carrying one of the two declared messages. Since SConfigMessage contains the EConfigParams enum and a void pointer, it is very generic and can be extended for many other uses (new protocol messages).

Client code will look like this:

//Send enable message
SConfigMessage enableMessage = {eConfigEnable, NULL};
send(&enableMessage);

//Send name message
SConfigMessage nameMessage = {eConfigName, "MyName"};
send(&nameMessage);

Server code will look like this:

void receive(SConfigMessage* configMessage)
{
    if (configMessage == NULL)
        return;

    switch (configMessage->eConfigParam)
    {
        case eConfigEnable: enable(); break;
        case eConfigName:   setName((char*)configMessage->pData); break;
    }
}

This is pretty straightforward: the client sends a message, and the server calls enable() or setName() according to the enum value declared in the protocol.

Now we need to add a new feature that makes the server call a disable() function. Usually we add the new enum value as the last value (good practice), but sometimes we add it where it fits the context better. In our example, it seems natural to put eConfigDisable next to (before/after) eConfigEnable – and this is risky!

If we take this approach, the enum and the new client code will look like this:

typedef enum {
    eConfigEnable,
    eConfigDisable, //This is new!
    eConfigName
} EConfigParams;

//Send disable message
SConfigMessage disableMessage = {eConfigDisable, NULL};
send(&disableMessage);

The server will have a new case for eConfigDisable:

case eConfigDisable: disable(); break;

To test your code, you rebuild both client and server and run a test. You reproduce the eConfigDisable message scenario and see that the client sends the right message and the server calls disable() as expected. You even make sure there is no regression by checking all the other messages in the protocol – enable() and setName(). Both work as expected.

Well, it seems like everything is working just fine – but it may not be.

If there is even the slightest chance that your client can be released to customers without releasing the server as well, you are at high risk of unexpected behavior on the server side.

If your client and your server are always released together – never separately, never patched independently – there is no risk.

So, where is the risk?

After inserting eConfigDisable, the values of all enum entries after it have shifted: eConfigName changed from 1 to 2. When you rebuild both server and client you’re fine, but what happens when you update just the client? Or just the server? Backward-forward compatibility breaks. Let’s take a look:

Old client + New server
  Client enum: 0 = eConfigEnable, 1 = eConfigName
  Server enum: 0 = eConfigEnable, 1 = eConfigDisable, 2 = eConfigName

  • Client sends eConfigEnable: no impact (the value remains 0)
  • Client sends eConfigDisable: no impact (the old client does not support eConfigDisable)
  • Client sends eConfigName: the client sends ‘1’, and the server treats it as eConfigDisable

New client + Old server
  Client enum: 0 = eConfigEnable, 1 = eConfigDisable, 2 = eConfigName
  Server enum: 0 = eConfigEnable, 1 = eConfigName

  • Client sends eConfigEnable: no impact (the value remains 0)
  • Client sends eConfigDisable: the client sends ‘1’; the server was supposed to ignore the unknown eConfigDisable but treats it as eConfigName. POTENTIAL CRASH!!
  • Client sends eConfigName: the client sends ‘2’, and the server ignores it (even though it actually supports eConfigName)

The table above shows states where your client and server ‘speak’ different languages – different protocols. The client sends A, but the server treats it as B. In the good case, an operation is simply ignored. In the worst case (which is very likely to happen), you get unexpected behavior on the server side, which can easily lead to a crash. Just think of the simple situation where a new client sends eConfigDisable to an old server: the server treats it as eConfigName (since both values are 1) and accesses the pData pointer, whose value is actually NULL. In this simple example it may not be a big deal, but when pData points to structure A in the client while the server treats it as structure B, crashes are very likely.

Once again, just to clarify the most important point here:

This invalid state of client-server protocol mismatch can only happen when the two are not released to your customer together 100% of the time. Separate installers, patches or component updates can lead to this invalid state quite easily.

Solution:

How can we deal with such cases? How do we avoid this invalid state in products that are not tied together into one installer, or that are sometimes patched/updated separately?

Solutions that may help but won’t solve the issue:

  • Adding a final sentinel value to the enum (“NUMBER_OF_WHATEVER”). This is just bad practice.
  • Hardcoding enum values (see the sketch after this list).
  • Manual code reviews each time the protocol changes.
  • Documenting that the enum/structure must remain consistent, for example:
    “Don’t change values” / “Don’t touch here” / “Contact me for any change” / lots of exclamation marks / etc.
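
To illustrate the hardcoding idea, here is a sketch of the explicit, append-only convention applied to the example protocol above. Pinning the wire values means an insertion can never silently renumber existing messages, and a default case lets the server safely ignore values it does not know. As noted, this helps but still relies on developer discipline, so it does not fully solve the problem:

typedef enum {
    eConfigEnable  = 0,
    eConfigName    = 1,
    eConfigDisable = 2  // new values are only ever appended, never inserted
} EConfigParams;

void receive(SConfigMessage* configMessage)
{
    if (configMessage == NULL)
        return;

    switch (configMessage->eConfigParam)
    {
        case eConfigEnable:  enable();  break;
        case eConfigDisable: disable(); break;
        case eConfigName:    setName((char*)configMessage->pData); break;
        default: break; // unknown value from a newer client: ignore, don't crash
    }
}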

All of the solutions above may help a bit, but they still will not keep you safe from errors. The ideal solution keeps your code repository free of such issues at all times. To achieve this we need:

  • To be able to run a ‘diff’ between two protocol definition files and catch risky changes.
  • Automatic execution of this diff between code that is committed to your repository server and the existing code.
  • To block the commit upon error resulting from the ‘diff’.

The setup I usually work with is SVN with the TortoiseSVN client, so I’ve created scripts to deal with such tricky cases. The idea is to deploy a pre-commit hook, so whenever a developer commits code to your SVN repository, all protocol definition files are checked for risky changes; if a risk is found, the commit fails. This way you keep your repository safe through an automated procedure.
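
To make the idea concrete before the scripts arrive, here is a minimal sketch of the core check such a hook could run. It assumes the enumerator names were already extracted from the old and new revisions of a definition file, one name per line; the append-only rule then reduces to “the old list must be a prefix of the new list”, and a non-zero exit code is what makes the commit fail:

#include <stdio.h>
#include <string.h>

int main(int argc, char** argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s old_enums.txt new_enums.txt\n", argv[0]);
        return 1;
    }
    FILE* oldFile = fopen(argv[1], "r");
    FILE* newFile = fopen(argv[2], "r");
    if (oldFile == NULL || newFile == NULL) {
        fprintf(stderr, "cannot open input files\n");
        return 1;
    }

    char oldName[256], newName[256];
    int index = 0;
    while (fgets(oldName, sizeof(oldName), oldFile)) {
        ++index;
        // Every existing enumerator must appear in the new revision,
        // with the same name, at the same position.
        if (!fgets(newName, sizeof(newName), newFile) ||
            strcmp(oldName, newName) != 0) {
            fprintf(stderr, "risky protocol change at enumerator #%d: "
                            "values were removed, reordered or renamed\n", index);
            return 1; // non-zero exit code blocks the commit
        }
    }
    return 0; // old list is a prefix of the new list: append-only, safe
}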

Stay tuned for the scripts!

Whether you are using this script or not, here are my guidelines for how to keep your client-server protocol consistent:

  • Documentation of protocol declarations (defined values, enums, structs) should be very clear about the risk: values, order and structure are easy to break
  • Use code analysis to ensure you keep your backward-forward compatibility
  • Do manual code reviews on high-risk definition files
  • Use automation

Automatic static code analysis before uploading your code

Developers and team leaders are probably familiar with static code analysis tools such as Cppcheck, Klocwork and others. The main problem with such tools is the lack of enforcement: you can upload code to your repository with issues that could have been found before the upload, and by then the issues are already in the repository server and, even worse, deployed to your customers’ endpoints.

This means the repository must be reviewed with a static code analysis tool once in a while to find and fix these issues (a cycle that ties up 2-3 engineers). So why not enforce analysis every time code is uploaded? This way you save the engineers the find-fix-test cycle and ensure issues are fixed before the product is released to customers.

Check out this article I uploaded to the CppCheck community - CppCheck integration to TortoiseSVN (includes a script for static code analysis automation).

Since the latest changes at SourceForge, the Cppcheck data there is missing. Therefore, I’m re-posting it here:

===================================================

Since we are not robots (yet), it is very possible to forget to run Cppcheck before committing code to the SVN server. Organizations that use Cppcheck (or any other static code analysis tool) usually perform the analysis once a day/week/month. The team leader assigns a task to a developer to fix the issue, commit the code and wait for the next analysis round. And then the cycle starts again. Sometimes the analysis happens after a build was already released to QA, or even worse, to customers.

So we all know that asking your developers to run Cppcheck before every commit is not feasible. However, this process can be automated (and made largely invisible) for the developers.

Attached to this page is a script that automatically runs Cppcheck on all source files being committed. The check runs when the commit is triggered (before the commit is actually performed), with zero effort from the developers. If issues are found, the script fails the commit so the developer can fix them and commit only Cppcheck-checked code (the failure can be bypassed if needed). The great value of this approach is that issues get fixed before they are ever committed to the SVN server!
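
For reference, this is the gist of what such a hook does, sketched here as a small C program rather than the attached script itself. It assumes (as with TortoiseSVN client-side hooks) that the first argument is a temporary file listing the paths being committed, one per line; a non-zero exit code is what fails the commit:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Only source files are worth passing to Cppcheck.
static int is_source_file(const char* path)
{
    const char* ext = strrchr(path, '.');
    return ext != NULL && (strcmp(ext, ".c") == 0 || strcmp(ext, ".cpp") == 0 ||
                           strcmp(ext, ".h") == 0 || strcmp(ext, ".hpp") == 0);
}

int main(int argc, char** argv)
{
    if (argc < 2)
        return 0;                       // nothing to check
    FILE* paths = fopen(argv[1], "r");  // temp file listing committed paths
    if (paths == NULL)
        return 0;

    // --error-exitcode=1 makes Cppcheck report findings via its exit code.
    char cmd[8192] = "cppcheck --error-exitcode=1";
    char line[1024];
    int haveSources = 0;
    while (fgets(line, sizeof(line), paths)) {
        line[strcspn(line, "\r\n")] = '\0';
        if (is_source_file(line) &&
            strlen(cmd) + strlen(line) + 4 < sizeof(cmd)) {
            strcat(cmd, " \"");
            strcat(cmd, line);
            strcat(cmd, "\"");
            haveSources = 1;
        }
    }
    fclose(paths);

    if (!haveSources)
        return 0;                       // no source files in this commit
    return system(cmd) == 0 ? 0 : 1;    // non-zero exit code fails the commit
}
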
Configuration

  1. Download SVN_Pre_Commit_Hook__CppCheck_Validate, extract the zipped file and edit the script:
    • cppCheckPath - Full path to your Cppcheck.exe (not CppcheckGui.exe).
    • supportedFileTypes - Add or remove file types to check. This variable exists so the script won’t check ‘.sln’, ‘.vcxproj’ and other non-source file types.
    • enableScript - ‘1’ or ‘0’ to enable/disable running the script.
  2. Right click (somewhere on desktop) → TortoiseSVN → Settings → Hook Scripts → Add…
  3. Configure Hook Scripts:
    • Hook Type: Choose ‘Pre-Commit Hook’ (upper right corner).
    • Working Copy Path: The directory where all of your SVN checkouts are done. Use the topmost directory (or just ‘C:\’, for example).
    • Command Line To Execute: Full path to the attached script.
    • Make sure that both ‘Wait for the script to finish‘ and ‘Hide the script while running‘ checkboxes are checked → OK → OK.
    [Screenshot: Configure Hook Scripts dialog]

Hints

  1. Even if the commit failed because it didn’t pass the static code analysis, SVN gives you the option to easily recommit, disregarding the failure, by clicking the ‘Retry without hooks’ button. If the commit succeeded (meaning Cppcheck found no issues), it will look like nothing happened (developers still see a commit-end message just like before).
    [Screenshot: commit dialog with the ‘Retry without hooks’ button]
  2. If you want to implement this solution in your organization/team, you can take one of two approaches:
    • Client-side solution - The steps above are taken on each development machine. The benefit is that only the relevant teams use the solution, not every developer working against the SVN server. Besides, ignoring a Cppcheck failure (for false positives, for example) is easy with the one-click ‘Retry without hooks’ button integrated in the TortoiseSVN client. This approach of course means Cppcheck must be installed on all the relevant developers’ machines.
    • Server-side solution - Cppcheck is installed only on the SVN server and the setup is done just once, server-side. Clients (developers’ machines) need no changes, since every commit triggers the hook on the server. The benefit is the one-time setup, but this may be too restrictive for some organizations. In addition, to let developers bypass the hook (again, for false positives) you need some ‘back-door’ script that allows bypassing it with a specific keyword in the commit message.
  3. More about SVN hook scripts: Client Hook Scripts, Server Hook Scripts.

All you need to do is follow the Configuration steps above just once. Afterwards, you work with SVN the same as before, except that now you see your failures before the code is committed to the SVN server.

Reverse Engineering COM dlls

Reverse engineering closed-source binaries is always a challenge. Tools such as Process Monitor, API Monitor, IDA and others make such tasks achievable, but they still require good knowledge and experience. Whether you want to exploit or defend (depending on the hat you wear) a specific application, you need to find the right interception point(s). In most cases we are dealing with DLLs, and the obvious approach is to get the addresses of the DLL’s exported functions and start investigating from there.

Lately I had to find the right place for interception in order to develop an additional security layer for one of Microsoft’s IIS components. Needless to say, the set of DLLs I investigated is completely undocumented and unfortunately does not even have symbols on Microsoft’s symbol server. So my plan was:

  • Find a few interesting entry points (exported functions) in some dlls
  • Debug the target process by attaching to it with Visual Studio
  • Load relevant dll exports (once again, we have no symbols)
  • Set breakpoints on my suggested entry points

After eliminating some of the DLLs loaded in the process, my suspicion fell on one DLL with more than 6,000 exported functions. Cutting things short, I found a function, let’s call it foo.dll:func(), which suggested I was in a good spot on the critical path. Setting a breakpoint on this function proved I was in the right place: whenever I performed the operation I wanted to intercept, the operation hung until the breakpoint was released. Done? Nope. Since my goal was to protect, not just audit, an operation, I tested what would happen if I skipped the execution of this function or just returned access denied. Doing so did not give the results I was looking for: although I skipped execution, the original operation triggered by the user still completed successfully. This probably meant this was not the function I was looking for, though I was very close. Why? Because the breakpoint really did hold the execution of the whole request, so I was in a good spot, probably on the right thread as well.

One more try with the same breakpoint showed something interesting. When execution paused at the breakpoint, I looked at the thread’s call stack and noticed that this function was not in the first frame. Simplified (to a 2-frame scope), it looked like this:
boo.dll:newFunc()+0x500 → foo.dll:func()

So, we have a new DLL in the game: boo.dll. It seems like the real operation was initiated from the newFunc exported function in boo.dll and, along the way, at offset 0x500, reached a ‘call’ instruction into foo.dll:func().

WRONG!

Why? For three reasons:

  • Setting a breakpoint at boo.dll:newFunc()’s entry point did not break the execution
  • The original name of newFunc() was not related at all to the operation I was looking for
  • Each time I performed a different operation, I saw boo.dll:newFunc() in the call stack with a different offset

So although at first glance the calls seemed to originate from boo.dll:newFunc(), they actually didn’t. Then what was going on here? COM objects were in the game.

Since this DLL has no symbols, Visual Studio’s only way to show a more informative call stack is to load the DLL’s exports, which act like a minimal symbol set. However, the DLL contains more functions than it exports, so whenever Visual Studio hits an address it does not recognize, it looks up the nearest preceding known symbol and presents the frame as that symbol plus the offset to the real address. In other words, execution looks like it originated from function X (which we have a symbol for) while it actually originated from function Y (which we don’t).
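
To see why the debugger produced the misleading boo.dll:newFunc()+0x500 frame, here is a toy version of that resolution logic (the types, names and values are hypothetical, just for illustration):

#include <stdio.h>

typedef struct {
    const char*   name;
    unsigned long rva;   // exported function's relative virtual address
} Export;

// exports must be sorted by rva, ascending
void resolve(const Export* exports, int count, unsigned long addr)
{
    const Export* best = NULL;
    for (int i = 0; i < count; ++i)
        if (exports[i].rva <= addr)
            best = &exports[i];       // nearest preceding known symbol

    if (best)
        printf("%s+0x%lx\n", best->name, addr - best->rva);
    else
        printf("0x%lx (no preceding export)\n", addr);
}

int main(void)
{
    // hypothetical export table of boo.dll
    Export exports[] = { {"newFunc", 0x1000} };
    // an address inside an unexported (e.g. COM) method at RVA 0x1500
    // is misattributed as newFunc+0x500
    resolve(exports, 1, 0x1500);
    return 0;
}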

So, apparently boo.dll does not just export functions; it also implements a couple dozen COM interfaces. So how do we proceed from here?

We have 3 main challenges –

  1. Discovering all of the DLL’s COM interfaces, methods and addresses (remember, GetProcAddress() will not do the trick here – see the sketch after this list).
  2. Making your debugger (Visual Studio in my case) show the correct call stack, including COM function calls, and making it easy to set breakpoints on COM functions. A good call stack would be boo.dll:ComFunc()+offset → foo.dll:func() (since the boo.dll:newFunc export was just misleading, we don’t want to see it in the call stack).
  3. Understanding the application’s API call flow. If you reverse engineer a set of DLLs that expose more than a few dozen APIs, finding the right interception point becomes a challenge. We need logging of all API calls for a specific operation. This is somewhat similar to API Monitor and Process Monitor, but what both are missing is fully automated logging of COM calls, without a manually preconfigured definitions file (and with Visual Studio integration).
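
On the first challenge: GetProcAddress() only walks the export table, while COM methods are reached through each object’s vtable. The first pointer-sized field of a COM object points to an array of method addresses (QueryInterface, AddRef and Release first, then the interface’s own methods). A minimal sketch of dumping them, given any interface pointer:

#include <stdio.h>

// Dump the method addresses of a COM interface by walking its vtable.
// pInterface is any COM interface pointer; methodCount must be known
// in advance (e.g. 3 for plain IUnknown).
void dump_vtable(void* pInterface, int methodCount)
{
    void** vtbl = *(void***)pInterface; // object's first field: vtable pointer
    for (int i = 0; i < methodCount; ++i)
        printf("method #%d at address %p\n", i, vtbl[i]);
    // Subtracting the module's load address from these values gives the
    // offsets to set breakpoints on, even with no symbols at all.
}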

How can we achieve all this? Stay tuned for the DbgGenerator tool I’ve created, which lets you load symbols for such DLLs and easily reverse engineer them in no time. I promise good stuff.