Tuesday, August 21, 2018

Crack me if you can 2018 write-up




Active participating members: 15
GPUs (GTX 1080 equivalent), peak: 60
GPUs (GTX 1080 equivalent), constant: 40
CPU threads, peak: 1300
CPU threads, constant: 600
Contest-related instant messages sent: ~7000
Hash:plain submissions to internal platform: >5300
Hash:plain submissions to KoreLogic: 2293



Members

blazer cvsi espira gearjunkie hops m33x mastercracker milzo jimbas mexx666666 s3in!c usasoft user vetronexe winxp5421





Prep

After hearing that KoreLogic would be awarding bonus points for first unique founds, we tuned our submission process to make sure we could capitalise on this bonus. To avoid triggering spam filters, we used an alternate email provider that supported bulk inbound/outbound requests. In addition, various functions of our hash management platform were disabled or tweaked so that hash:plain pairs could be processed and uploaded quickly, at a constant but not overly aggressive rate. We had only a handful of submission troubles, and those were rectified quickly on our end.





Patterns

It was quite cheeky of KoreLogic to use the usernames of the competing teams as plaintexts, and we spotted this quite early in our MD5 list. The same usernames appeared in the SSHA and MD5(Unix) lists, and we also noticed that each algorithm was assigned a specific range of starting characters. Since the other teams were getting bcrypt cracks, these were clearly feasible, and that was where all the points were. While some of our members continued to collect points by exploiting the 4x first-unique-found bonus on the lower-scoring hashes, others worked on getting a break into the bcrypt hashes using the patterns we had spotted. It was not long before we found the starting characters for the bcrypt hashes by running the usernames in double combo mode, i.e. username concatenated with username.
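To make the idea concrete, here is a minimal Python sketch of a username "double combo" attack against a cost-10 bcrypt hash. This is an illustration only, not our actual tooling: the usernames are placeholders, the target hash is generated on the spot for the demo, and the third-party bcrypt package stands in for MDXfind/hashcat.

from itertools import product
import bcrypt  # third-party package (pip install bcrypt)

# Placeholder usernames, not the real contest list.
usernames = ["blazer", "hops", "winxp5421"]

# Self-contained demo target: a cost-factor-10 bcrypt hash of one combo.
target = bcrypt.hashpw(b"hopsblazer", bcrypt.gensalt(rounds=10))

# "Double combo": every username concatenated with every username.
for left, right in product(usernames, repeat=2):
    candidate = (left + right).encode()
    if bcrypt.checkpw(candidate, target):
        print("cracked:", candidate.decode())
        break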


Strategy
Once we had the first bcrypt hit, we tried to uncover the complete list of usernames from the plains found in the faster algorithms. After we were confident we had a solid pattern, we brought up many CPU crackers running MDXfind to work solely on the bcrypt hashes. It was a little chaotic initially while we figured out the best way to distribute the bcrypt workload. One of our members then stepped up as the central point for handing out tasks, though distribution and requests were still done manually. Soon another member whipped up a semi-automated procedure where each member could request custom tasks from a central distribution list. At our peak we utilised roughly 1300 CPU threads, with around 600 threads sustained throughout the contest. A small cluster of 16 ODROID XU4 boards running MDXfind-ARM was also used to attack the bcrypt hashes. As a side note, attacking bcrypt on ARM cores was relatively cheap and efficient: each ODROID gave us roughly 50 H/s (800 H/s in total) on the contest's bcrypt hashes (cost factor 10), and the cluster draws approximately 200 W in total, which works out to an efficiency of 4 H/s/W.
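For reference, the efficiency figure quoted above follows directly from the cluster numbers:

# Back-of-the-envelope check of the ODROID cluster figures quoted above.
nodes = 16               # ODROID XU4 boards
rate_per_node = 50       # H/s per board on the contest's cost-10 bcrypt hashes
power_total = 200        # approximate total cluster power draw, in watts

total_rate = nodes * rate_per_node      # 800 H/s
efficiency = total_rate / power_total   # 4.0 H/s per watt
print(total_rate, "H/s,", efficiency, "H/s/W")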

Due to the GPU-unfriendly nature of bcrypt, all GPU resources were reserved for the other three algorithms, which run far more efficiently on GPU with hashcat. Members were free to work on patterns alone, which some opted to do, devising their own methods and scripts to attack patterns across the algorithms, while others joined the Hashtopolis instance, which had roughly the equivalent of 60 GTX 1080s.
We were generally quite close score-wise with Team Hashcat and trailed them for roughly the first 15 hours of the contest. When one of our members woke up and submitted over 100 unique bcrypts, we leapfrogged Hashcat into first place and took a comfortable, commanding lead. This was a great morale boost, and more CPU instances were put onto bcrypt once we realised that the other teams were using different patterns from ours and that we had identified a very efficient one which yielded many hits for little work. Additional patterns were identified later, such as popular suffixes (pass01, pass02, etc.) used across all of the algorithms, though these did not seem as efficient as the username combos.
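For illustration, the suffix pattern amounts to a simple wordlist-plus-suffix attack; a hypothetical candidate generator (with placeholder base words, not our actual scripts) might look like this:

# Hypothetical sketch of the "popular suffix" pattern mentioned above:
# append passNN-style suffixes to a base wordlist and feed the candidates
# to a cracker. The base words here are placeholders.
base_words = ["alice", "bob"]
suffixes = ["pass%02d" % n for n in range(100)]   # pass00 .. pass99

for word in base_words:
    for suffix in suffixes:
        print(word + suffix)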

Some stats from our hash management platform showing the rate of uploads (charts for MD5(Unix), SSHA, MD5 and bcrypt).

Afterthoughts
We do regret not switching over to JtR for a nice bcrypt speed-up: when more candidates than cores are in flight, its bitslice/interleaved implementation can yield up to twice the speed of MDXfind. We also failed to spot the full range of starting characters for bcrypt and lost some valuable points there too.

Towards the end we tried to spread our attacks across all the algorithms so that we would be ranked highest not only by score but also across algorithms. This was quite hard to maintain, as both Team Hashcat and john-users seemed to be gaining ground on us. Overall, we were quite impressed that we obtained more unique bcrypt firsts than john-users and Hashcat combined, which allowed us to take first place. A massive thanks to KoreLogic for hosting the contest once again; we really enjoyed the added twist this year, as it gave us all an incentive to submit constantly. A shout out to our competitive rivals, Team Hashcat and john-users, for pushing us hard and making us drink that extra cup of coffee to stay up.

Looking ahead
We have enjoyed playing CMIYC over the years, so when presented with the opportunity to create our own password cracking contest we jumped at the idea. In 2019, we will be hosting our own CMIYC-style contest at Cyphercon in Milwaukee, WI. We hope all of you will join us for the first "Crackthecon". As more information about the contest is finalized, we will update the contest site, crackthecon.com.



Tuesday, August 29, 2017

320 Million Hashes Exposed


Earlier this month (August 2017), Troy Hunt, founder of the website Have I been pwned? [0], released over 319 million passwords [1], compiled from various plaintext data breaches and published in the form of SHA-1 hashes. Making this data public might allow future passwords to be cross-checked in a secure manner, in the hope of preventing password re-use, especially re-use of passwords that earlier breaches had exposed in unhashed plaintext.

Our group (in collaboration with @m33x and @tychotithonus) set out to crack/recover as many of the hashes as possible, both for research purposes and, of course, to satisfy our curiosity while treating the opportunity as a challenge. Although each of the pwned password packs released at the time (three in total as of this writing) was labeled as 40-character ASCII-hex SHA-1 hashes, we worked under the assumption that “no hash list larger than a few hundred thousand entries contains only one kind of hash” - and these lists were no exception.

Nested Hashes
Although the majority of the recovered passwords were ordinary plaintext, as expected, we also noticed that a number of the “plaintexts” were themselves hashes or some other form of non-plaintext data. This suggested that we were dealing with more than just SHA-1.

Out of the roughly 320 million hashes, we were able to recover all but 116 of the SHA-1 hashes - a success rate of roughly 99.9999%. We then took it a step further and attempted to resolve as many “nested” hashes (hashes within hashes) as possible to their ultimate plaintext forms. Using MDXfind [2], we identified over 15 different algorithms in use across pwned-passwords-1.0.txt and the subsequent update-1 and update-2 packages. We also added support for SHA1SHA512x01 to Hashcat [3].
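As an example of what such a nested mode computes, a minimal Python sketch of sha1(sha512($pass)) could look as follows. Feeding the hex-encoded inner digest into SHA-1 is an assumption here (a common convention for nested modes); see [3] for the exact construction used in the patch.

import hashlib

def sha1_sha512(password: bytes) -> str:
    # Assumed convention: inner SHA-512 hex digest fed as ASCII into SHA-1.
    inner = hashlib.sha512(password).hexdigest().encode()
    return hashlib.sha1(inner).hexdigest()

print(sha1_sha512(b"password123"))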

Taking a deeper dive into the found “plaintexts”, we realized there were hashes within hashes, hashes of seemingly garbage data, what appear to be “seeded” hashes, and more. Here is a list of the hash types we found:

There are other hashes we have not completely resolved yet - some of which may be seeded hashes. For example, we see:

sha1(md5(md5($salt).md5($pass)))
sha1(md5($salt).md5($pass))
sha1(md5(md5($salt1).md5($pass)).$salt2)
sha1(md5($salt1).md5($pass).$salt2)

… and many more.
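For clarity, here is what one of the candidate constructions above would compute, again assuming hex-encoded intermediate digests; the salt and password are placeholders.

import hashlib

def md5_hex(data: bytes) -> bytes:
    return hashlib.md5(data).hexdigest().encode()

def sha1_md5salt_md5pass(salt: bytes, password: bytes) -> str:
    # Candidate construction sha1(md5($salt).md5($pass)) from the list above.
    return hashlib.sha1(md5_hex(salt) + md5_hex(password)).hexdigest()

print(sha1_md5salt_md5pass(b"somesalt", b"password123"))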

Personally Identifiable Information
We also saw unusual strings, resulting from incorrect import/export, that were already present in the original leaks. These strings link a hash to the owner of the password, which was clearly not intended by Troy. We found more than 2.5 million email addresses and about 230,000 email:password combinations, in forms such as:
<firstname.lastname@tld><:.,;| /><password>
<truncated-firstname.lastname@tld><:.,;| /><password>
<@tld><:.,;| /><password>
<username><:.,;| /><password>
<firstname.lastname@tld><:.,;| /><some-hash>

Trash / Other Non-Passwords
Furthermore, there were other strings that were obviously not passwords at all, but rather fragments of files. For example:

005a97e5323dac9a43c06bb5fe0a75973ee5e23f:<div><embed src="http://apps.rockyou.com/fxtext.swf?ID=31478642&nopanel=true&stage=true" quality="high" scale="noscale" width="405.37" height="116.475" wmode="transparent" name="rockyou" type="application/x-shockwave-flash" pluginspage="http://www.macrom


006bb7e8893618b02f979dd425e689b4ae64df10:honeyDo you realize who is in this image: http://thecoolpics.com/who.jpg . Just think for a moment and tell me o you realize who is in this image: http://thecoolpics.com/who.jpg . Just think for a moment and tell me soon ;))

Bad Line Parsing
We observed a number of passwords which appeared to be truncated at length 40 but which contained data from beyond the linefeed terminator of the input lines.

n.doe@gmail.com:password:123456jane.doe@

We assumed this was caused by a parsing error or some similar anomaly. To recover these strangely processed plaintexts, some utilities were coded [4] to emulate the particular behavior of concatenating successive lines while restricting them to 40 characters, producing candidates such as:

john.doe@gmail.com:password:123456jane.d
ohn.doe@gmail.com:password:123456jane.do
hn.doe@gmail.com:password:123456jane.doe
n.doe@gmail.com:password:123456jane.doe@

Furthermore, to find the position at which the initial parsing error occurred, we searched our dictionaries from right to left (see [4]), concatenating characters like this:

123456jane.doe@ho
o
ho
@ho
e@ho
...
123456jane.doe@ho
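A sketch in the spirit of the utilities in [4] (not the tools themselves): concatenate successive input lines and emit every 40-character window, so that mangled “plaintexts” like the ones above can be regenerated and checked against the hashes. The sample lines are placeholders.

def windows_40(lines):
    # Concatenate the lines (the linefeeds were evidently lost) and slide a
    # 40-character window across the result, one character at a time.
    joined = "".join(lines)
    for start in range(len(joined) - 39):
        yield joined[start:start + 40]

sample = ["john.doe@gmail.com:password:123456", "jane.doe@hotmail.com:hunter2"]
for candidate in windows_40(sample):
    print(candidate)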


 An example of a bad/invalid email imported into the haveibeenpwned.com website

Hashcat’s Hexception
During hash processing, we also caught a glimpse of Troy's methodology. We believe that he processed some “cracked” passwords as well, as suggested by the presence of $HEX[] plaintexts. This also revealed a bug in Hashcat's $HEX[] encoding.

For example, consider the following hash:

0b20b6ad0b6c7fd3655e8734cb48c001567983eb:$HEX[244845585b623436653635373737393666373236625d]

Initially, when this was found with Hashcat, it appeared as:

0b20b6ad0b6c7fd3655e8734cb48c001567983eb:$HEX[b46e6577796f726b]

The hash could not be verified as the solution since:

sha1(binary[b46e6577796f726b]):[9def6b97e0095ac93331bc2780cc35a21d9cc752]

We discovered that Hashcat fails to correctly encode a literal string with $HEX[] if that literal string itself starts with $HEX[. This means that if you take the output of Hashcat, say from hashcat.pot, and try to re-crack it using the passwords in that hashcat.pot file, you will end up with “unsolvable” hashes. Because part of our work involves building dictionaries that we can reuse, we consider this a significant bug.

Some tools [5] were put together to properly re-encode the output from Hashcat, into the proper string:

$HEX[244845585b623436653635373737393666373236625d]

This then works properly as a reusable password with Hashcat and MDXfind, as it decodes into the literal string:

$HEX[b46e6577796f726b]
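A minimal sketch of this kind of re-encoding pass (not the actual tools in [5]): it assumes the input is the literal recovered password, i.e. any genuine $HEX[...] decoding has already been applied.

def reencode(plain: str) -> str:
    # If the literal password itself begins with "$HEX[", wrap it in a fresh
    # $HEX[...] layer so that decoding the stored entry later yields the
    # literal string again rather than raw bytes.
    if plain.startswith("$HEX["):
        return "$HEX[" + plain.encode().hex() + "]"
    return plain

print(reencode("$HEX[b46e6577796f726b]"))
# -> $HEX[244845585b623436653635373737393666373236625d]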

This issue has been resolved in a beta version of Hashcat [6].

We also uncovered a second bug in Hashcat, which was later corrected in a beta version. When using certain rules, we found that some of the solutions Hashcat reported did not hash back to the correct value. We ended up with hundreds of “solutions” that were not solutions at all. This is one of the reasons we always try to double-check our work, to ensure that we have accurate hashes and plaintexts.

As a final check, we took just the SHA1x01 passwords we had found and re-ran them through both Hashcat (beta v3.6.0-351-gec874c1) and MDXfind. The results were quite illuminating. The test system was a 4-core Intel Core i7-6700K with 4x GTX 1080 cards and 64 GB of memory. With Hashcat, we found that loading more than about 250,000,000 hashes at a time was not possible [7], so the list was broken up into chunks of 225 million hashes.


Program                  Time to Complete    Hashes Found
Hashcat                  55 minutes          318,932,512
MDXfind (all hashes)     9 minutes           318,933,582
MDXfind (225m chunks)    9 minutes           318,933,582

From our usage patterns, it is evident that both applications have their strengths and caveats. MDXfind shows its strength when the hash list is too large to fit into GPU memory, when many algorithms need to be checked in parallel, and when very long password strings need to be tested. Hashcat, on the other hand, shines when massive parallel compute is needed, such as when running large rule sets and large keyspaces. Using the tools in tandem gives us the best of both worlds, since we can feed the left list (the remaining uncracked hashes) of each successive attack into either program to achieve optimal efficiency and coverage.

To further illustrate the problem with password reuse (and the importance of validation), the hashes were re-run using just the passwords found by Hashcat (beta v3.6.0-351-gec874c1). This left 86,954 hashes unrecovered, primarily due to the $HEX[] encoding error described above.

Distributed Tasks
Once the hash list was small enough that its size had a negligible effect on search speed, distributed brute-force and mask attacks were conducted via Hashtopussy [8], a Hashcat wrapper. Combining our hardware, we achieved peak speeds of over 180 GH/s on SHA-1; to put that into perspective, it is roughly the speed of 25 GTX 1080s. We were able to cover ?a up to length 8, ?l?d for lengths 9-10 and ?b up to length 6 effortlessly.
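For a rough sense of scale, here is a back-of-the-envelope sketch of the keyspaces behind those masks at 180 GH/s. The character-set sizes follow hashcat's conventions (?a = 95, ?l?d = 36, ?b = 256), and distribution overhead is ignored.

rate = 180e9  # H/s, our peak SHA-1 speed

def keyspace(charset_size, min_len, max_len):
    return sum(charset_size ** n for n in range(min_len, max_len + 1))

for label, size, lo, hi in [("?a, length 1-8", 95, 1, 8),
                            ("?l?d, length 9-10", 36, 9, 10),
                            ("?b, length 1-6", 256, 1, 6)]:
    ks = keyspace(size, lo, hi)
    print("%s: %.2e candidates, ~%.1f h at 180 GH/s" % (label, ks, ks / rate / 3600))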

Statistical Properties
To speed up the analysis of such a large volume of plaintexts, a custom tool, “Panal” (to be released at a later date), was written to quickly and accurately analyse our dataset of over 320 million passwords. The longest password we found was 400 characters, while the shortest was only 3 characters. About 0.06% of passwords were 50 characters or longer, and 96.67% were 16 characters or fewer. Roughly 87.3% of passwords fell into just four character sets: LowerNum (47.5%), LowerCase (24.75%), Num (8.15%) and MixedNum (6.89%). In addition, we saw UTF-8-encoded passwords as well as passwords containing control characters. See [9] for the full Panal output.

(Charts: password length distribution and character set distribution.)

Summary
Blocking common passwords during account creation has positive effects on the overall password security of a website [10]. While blacklisting 320 million leaked passwords might sound like a good way to further improve password security, it can have unforeseeable consequences for usability (i.e., the level of user frustration). Conventional blacklist approaches typically include only the 10,000 most common passwords, to limit the impact of online password-guessing attacks. So far, there is no evidence showing which blacklist size provides the optimal balance.

Post written in collaboration with @m33x and @tychotithonus

Resources
[0] 2017-08-03: Have I been pwned? by Troy Hunt
https://haveibeenpwned.com
[1] 2017-08-03: Introducing 306 Million Freely Downloadable Pwned Passwords 
https://www.troyhunt.com/introducing-306-million-freely-downloadable-pwned-passwords
[2] 2017-08-03: MDXfind v1.93
https://hashes.org/mdxfind.php
[3] 2017-08-28: Hashcat sha1(sha512($pass)) patch
https://gist.github.com/hops/9beda82cf3d21ab99a2971bf8d00dbb4 
[4] 2017-08-27: Some tools we developed to deal with incorrectly parsed strings
https://gist.github.com/m33x/3e0ab19a53384c036db29f996cb60733
[6] 2017-08-20: Hashcat Issue “hexify also all password of format $HEX[]”
https://github.com/hashcat/hashcat/issues/1340
[7] 2017-08-18: Hashcat Issue Potential Silent Cracking Failures at Certain Hash-Count
https://github.com/hashcat/hashcat/issues/1336
[8] 2017-08-03: Hashtopussy by s3inlc
https://github.com/s3inlc/hashtopussy
[9] 2017-08-29: Panal (Password Analysis) 320m HIBP Passwords
https://gist.github.com/m33x/03031e764ae5de179315270973c5871f
[10] 2017-08-03: Password Creation in the Presence of Blacklists
https://www.internetsociety.org/doc/password-creation-presence-blacklists