QuickHash Changelog

New in QuickHash 3.3.4 (Nov 1, 2023)

  • Removed the default XML file name specification that is used for storing some last used settings, for greater cross platform suitability; it now just uses a system default instead.
  • Corrected the project version specification for Windows (as it was left at v3.3.1 in error).
  • Adjusted the "start at a time" date to a more current default value, to save users having to scroll from 2017!
  • Added CRC32 to the disk hashing module.
  • Made big changes to the disk hashing module display: simpler layout, ensured the results screen was not disabled after saving the log file, and ensured the drop down was not disabled at annoying moments.
  • Tweaked the scheduler of the disk module to trigger when the current time is equal to or greater than the specified start time, to account for missing second precision (see the sketch after this list).
  • Removed the dependency on the no longer maintained ZVDateTimePicker throughout, in favour of the native TDateTimePicker, again for easier cross platform compilation.
  • Apple OSX version released again, which should now work on newer Mac OSX versions, in theory.
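
  A minimal sketch of the start-time test described above, using names of my own rather than QuickHash's actual scheduler code: the point is simply that "at or after" is tested rather than exact equality, so a stored start time with no seconds component cannot be skipped.

      uses
        SysUtils, DateUtils;

      // Returns True once the clock has reached (or passed) the requested start time.
      // CompareDateTime yields 0 when the two values are equal and a positive value
      // when Now is later, so ">= 0" covers both cases.
      function ScheduledStartReached(const ARequestedStart: TDateTime): Boolean;
      begin
        Result := CompareDateTime(Now, ARequestedStart) >= 0;
      end;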

New in QuickHash 3.3.3 (Oct 15, 2023)

  • By popular request, the last selected hash algorithm will now be saved on exit and re-used on next launch. Settings are saved in the QHConfig.xml file (a sketch of the general idea follows this list).
  • The correction and conversion from "ID" to "No" was not fully integrated in the "Compare Two Folders" tab, causing an SQLite error message to appear. That was fixed.
  • Fixed countless deprecated references to the "ID" column in SQL syntax - another remnant of the previous change from "ID" to "No".
  • Removed some unnecessary logging data from the log file of the "Compare Two Folders" tab, notably entries that just mirrored the UI during the process, such as "Currently searching for...".
  • SQLite version 3.43.2 bundled in 32 and 64 bit modes.
  • This changelog corrected to show the release date of v3.3.2 as June 2023, instead of Jan 2022!
  • About page updated some more.
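
  A minimal sketch of how a "last used algorithm" setting can be persisted with Free Pascal's TXMLConfig (unit XMLConf); the procedure and key names here are my own, and the real QHConfig.xml layout may well differ.

      uses
        XMLConf;

      // Write the last selected algorithm to an XML settings file, and read it back
      // on the next launch, falling back to SHA-1 if the key is absent.
      procedure SaveLastAlgorithm(const AConfigFile, AAlgorithm: string);
      var
        Cfg: TXMLConfig;
      begin
        Cfg := TXMLConfig.Create(nil);
        try
          Cfg.Filename := AConfigFile;                      // e.g. the path to QHConfig.xml
          Cfg.SetValue('LastUsed/Algorithm', UnicodeString(AAlgorithm));
          Cfg.Flush;                                        // write the file to disk
        finally
          Cfg.Free;
        end;
      end;

      function LoadLastAlgorithm(const AConfigFile: string): string;
      var
        Cfg: TXMLConfig;
      begin
        Cfg := TXMLConfig.Create(nil);
        try
          Cfg.Filename := AConfigFile;
          Result := string(Cfg.GetValue('LastUsed/Algorithm', 'SHA-1'));
        finally
          Cfg.Free;
        end;
      end;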

New in QuickHash 3.3.2 (Jul 10, 2023)

  • The column heading of "ID" in text output seems to cause the almighty Microsoft Excel a headache, because it thinks the file is a "SYLK" file. Users of QuickHash were being told "Excel has detected ‘file.csv’ is a SYLK file, but cannot load it. Either the file has errors or it is not a SYLK file format. Click OK to try to open the file in a different format." I was unaware that a two byte string of "ID" at the start of a CSV file somehow meant "all such files are SYLK files and cannot be anything else", but there we are. So yes, I have changed the value of "ID" to "No" (as in "Number") for now. Users of v3.3.1 who do not want to upgrade to v3.3.2 can do the following : open the CSV with a text editor like Notepad and change ID to some text that doesn't start with ID, e.g. No or Number. Save the CSV, then open it with Excel; this time it won't throw you the SYLK file error. Do your editing and save the CSV. Open the CSV with the text editor again and change the edited text back to ID. Save the file. It will now be OK.
  • In the Copy tab, if the user had checked the box "Save results (CSV)?", they were getting a largely empty output file. But if they right clicked the display grid of results and saved to CSV, it was OK. That was fixed. It was another error remaining from my v3.3.0 fixes back in May 2021, and I didn't catch it in the v3.3.1 release. Sorry about that.
  • In the Copy tab, the little text based percentage indicator (as controlled by lblFilesCopiedPercentage object) had somehow got buried in blankets like a small child at a crazy feather pillow-fight party at some stage in development history and was not visible. Now it is.

New in QuickHash 3.3.1 (Jan 7, 2022)

  • Function CountGridRows has been changed. This function was designed to count the number of rows in any given display grid, to determine whether the clipboard could be used or whether the data would be saved to a filestream. And, if the user chose to save the output to CSV or HTML, the same function would check whether an in-memory list of strings could be used and then saved out to a file, or whether a filestream should be used line by line.
  • But, when saving the output of very large lists of files to HTML, filestreams were supposed to be used rather than RAM. However, due to the v3.3.0 adjustment of CountGridRows to use .RecordCount, .First and .Last, the variable used to check the number of rows only reflected what was on screen instead of what was in the table. So QH was still using RAM even when the row count was many hundreds of thousands, and as such it would crash with large volumes of data. Fixing this required two significant changes:
  • Changes to CountGridRows mean a dedicated TSQLQuery is used on the fly, instead of the DBGrid itself (see the sketch after this list).
  • Changes to the function call of CountGridRows mean that both the grid and the table to query are now passed.
  • Major changes to the functions SaveFILESTabToHTML, SaveCOPYWindowToHTML and SaveC2FWindowToHTML to use TSQLQueries too, instead of DBGrid queries. All three can now handle many thousands of rows more easily and complete in just a few seconds. A test of 407K rows was saved as a 56Mb HTML file in under 10 seconds. However, I have noticed that the step of preparing the data for display in Compare Two Folders does take a long time for many tens of thousands of files. It gets there eventually, but it can take a while. This is due to the enormous SQL statement that was added in v3.3.0 in the PrepareData_COMPARE_TWO_FOLDERS function. That was added to give users greater abilities to find and sort, following earlier pre-v3.3.0 complaints that the comparison was not granular enough. It is more granular now, but that has come at the cost of taking longer to prepare. Something to work on for v3.3.2.
  • The changes described above are the most significant service release aspects of this version.
  • The user is now also shown a message on screen, with an OK button, to let them know a Save as HTML has finished. Useful if the data set is very large and the save takes some time.
  • The HTML file produced by right clicking in the FileS tab did not have a row 1 header if the row count was over 20K. Now it does.
  • The HTML file produced by right clicking in the FileS tab did not have the FileSize column if the row count was over 20K. Now it does.
  • The HTML file produced by right clicking in the FileS tab did not have the ID column if the row count was LESS than 20K. Now it does. (note that this has not been added for clipboard output on the assumption it would be pasted into spreadsheets where rows are automatically then counted)
  • (See - there were a lot of things missing in the HTML save for large volumes of data that I had missed. This is how small scale testing on your own does not compare with real world usage - it's often only when users report issues to me that I get to know about problems, and that in turn unearths other issues that I can then fix.)
  • On Linux and OSX, the "Currently Hashing" status in the FileS tab was chopping off the first characters of the path. So instead of saying /home/user/Documents/MyFile.doc it was saying e/users/Documents/MyFile.doc. This was due to the long path override character cleansing that is necessary for Windows but not for Linux or OSX, and I forgot to use a cross-platform compiler directive. Now fixed in v3.3.1.
  • The function DatasetToClipBoardFILES checked whether the number of rows was less than 20K, but didn't show a message instructing the user to use a file save if the count was greater than 20K. That has now been applied in v3.3.1, so they don't just sit there wondering what has happened.
  • If the user tried to clipboard a volume of data over 20K rows in the FileS tab, although the user was told to use a file save instead, the status still said it was copying to clipboard. Now it will tell the user the clipboard effort has been aborted.
  • The Clipboard button in the "Copy" display grid was not as complete as the right-click clipboard option. A remnant of the changes made in v3.3.0. I think. Now both methods produce the same clipboard content.
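
  For the CountGridRows change above, a minimal sketch of the idea (the function name and parameters are mine, not QuickHash's actual code): the count comes from a throwaway TSQLQuery running SELECT COUNT(*), so it reflects the whole table rather than just the records the DBGrid happens to have fetched for display.

      uses
        sqldb;

      // Count every row in the named table using a dedicated, short-lived query,
      // independently of whatever the attached DBGrid currently has in memory.
      function CountTableRows(AConnection: TSQLConnection;
        ATransaction: TSQLTransaction; const ATableName: string): Integer;
      var
        Qry: TSQLQuery;
      begin
        Qry := TSQLQuery.Create(nil);
        try
          Qry.DataBase    := AConnection;
          Qry.Transaction := ATransaction;
          // The table name comes from the program itself, not from user input
          Qry.SQL.Text := 'SELECT COUNT(*) FROM ' + ATableName;
          Qry.Open;
          Result := Qry.Fields[0].AsInteger;
          Qry.Close;
        finally
          Qry.Free;
        end;
      end;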

New in QuickHash 3.3.0 (May 31, 2021)

  • New : Ability to hash forensic images of the Expert Witness Format (EWF), also known as "E01 images". Available for Windows and Linux, for users who know what they are doing with regard to forensic images. It is not available for OSX, for now. QuickHash will conduct the hash and also report the embedded MD5 or SHA1 hash, if available, placing it in the "Expected Hash" field automatically, depending on the type of hash the user is performing. So if the E01 contains both MD5 and SHA1 but the user selects SHA1, then the embedded SHA1 hash will be reported as well as the computed SHA1 hash, and the same applies for MD5. More features for this landmark addition to QuickHash GUI will likely follow in future releases.
  • New : CRC32 algorithm added for Text, File, FileS, Compare Two Files, Compare Two Folders and Base64. Not added to disks, and not available for EWF (E01) image hashing.
  • New : Users who utilise the CRC32 algorithm via the FileS tab can now optionally choose whether to just compute the checksums of the files in the folder as normal, or to compute the checksums and then rename the files by appending the checksum in square brackets to the end. Useful for the many media and sound specialists who commonly use CRC32 values in their work.
  • New : Button added to enable the user to easily make a copy of the backend SQLite database at any given point in time, for convenience. This can help users who may wish to load it into specific database tools, like SQLite Explorer or browser extensions like SQLite Manager.
  • New : The About menu now contains a "Check Environment" option (available on all OS platforms, though the results vary on each) that scans for DLLs, reports database information etc.
  • New : Logo replaced with the newer QuickHash logo.
  • New : In some parts of QH where display grids of data are generated (FileS, Copy, Compare Two Folders), the user can now select their own delimiter character via a drop down menu, such as the tab character, hyphen and (heaven forbid) even the space char. If no character is chosen, a comma is assumed and used as before.
  • New : The user can now use the About menu to establish the version of SQLite that is being used by QuickHash.
  • Improvement : Monumentally large changes to "Compare Two Folders" processing, scrubbing away much of the earlier effort and restructuring it, with big thanks to an open-source co-developer who has helped me here. Key amongst them is that v3.3.0 addresses a bug where rows got mis-aligned if the file counts differed. The mis-match was still correctly reported in v3.2.0, and even if the filenames matched but the hashes differed, that was also still OK. But the rows got out of sync when the file counts differed, due to there being fewer files in one folder than the other and my use of the UPDATE SQL statement. Additional restructuring was applied, but note that C2F is really designed to check that two folders are a mirror image of each other; it is supposed to help you confirm that this is the case, rather than help you clean up your disk. If your aim is to use it as a file manager, then QH might not be the best option. Other tools like Beyond Compare might be better for your needs here.
  • That said, the ability now exists to compare files by name and hash value in both folders, and then the user can right click the results and see many other options too:
  • Restore Results view
  • Clipboard all rows
  • Clipboard selected row
  • Clipboard all selected rows (currently does it in reverse order for some reason)
  • Show mismatches (new, based on filename or hash or both)
  • Show duplicates (new, offers the chance to clipboard immediately after because the column row changes for this display)
  • Show matching hashes (new)
  • Show different hashes, not missing files (new)
  • Show missing FolderA files (new)
  • Show missing FolderB files (new)
  • Show missing files from Folder A or FolderB (new)
  • Save as CSV file
  • Save as HTML file
  • That is a whopping array of ways to conduct some analysis of two folders, and it is about as good as I think I can make it, based on help from the community. If that still falls short, other tools are available, or get stuck in yourself and help me.
  • Improvement : DB Rows were being counted (when required) using a slower method than I had realised. With v3.3.0, counts are now immediate by calling DBGrid.DataSource.DataSet.RecordCount;
  • Improvement : Column headers added to CSV and HTML outputs (achieved by right clicking the display grid results throughout). I may have missed one but I think I have them all covered
  • Improvement : Removed the generation of a "QH_XXXXX" time stamp named parent folder in destination folder when copying as many users reported this was unhelpful.
  • Improvement : SQLite DLLs for Windows replaced with stable version 3.35.5.0 as of April 2021 (replacing former version 3.21.0.0).
  • Improvement : The size of some fields in SQLite was set to 32767 to account for crazily large filename and filepath combinations. On reflection, that seems an extreme use of memory for what must be a one in a billion chance. Instead, a size of 4096 is set in v3.3.0, which still enables QH to account for very long paths. Filenames alone can rarely exceed 255 characters on any of the 3 OSes (even where full paths can), and even allowing for UTF8 and UTF16 variances the maximum is still 1020 bytes (4 bytes for every single char of the 255 max).
  • Improvement : Disk hashing module now presents more data in the list view, especially for logical volumes, such as the filesystem.
  • Improvement : The button to launch the disk hashing module now gives the user an indication of what it is doing while it loads the treeview of disks and volumes
  • Improvement : The system RAM label has been moved from the main interface to the new Environment Checker section of the About menu (Windows only). This frees up some GUI real-estate and avoids the use of resources unnecessarily.
  • Fix : DisableControls and EnableControls are used more extensively to expedite the "Save as CSV" and "Save as HTML" options for large volumes of data, as some users reported save efforts taking several hours for millions of rows of data. This makes sense, because QuickHash was repainting the display grid after each row was written to file (a sketch of the idea follows this list).
  • Fix : When saving results as CSV in Compare Two Folders, if the user selected an existing file to overwrite, it would do that, but the next run would result in an infinite loop telling the user it already exists and to choose another file, but not being able to actually do so. That was fixed.
  • Fix : Apple's new OSX 'Big Sur' release unhelpfully removed system libraries, like the SQLite library, from the filesystem, so they can no longer be referenced by file path. A different method of lookup is needed using the dynamic linker cache, and a 3-state compiler directive is now used for loading SQLite, depending on the OS being used. That has been applied so that Apple users can continue to enjoy the benefits of QuickHash on that most changing and challenging of operating systems. You're welcome.
  • Fix : Two stringlists are created when using "Compare Two Folders" to store the list of files for analysis. I had introduced a memory leak here without realising it and that has been corrected (with thanks to an open-source developer who spotted that for me).
  • Fix : A small memory leak existed in frmSQLiteDBases.DatasetToClipBoard for copying data to clipboard. The CSVClipboardList string list that was used to achieve this was not being freed. Now it is freed.
  • Fix : In the basic results txt file that is created during Compare Two Folders, the selected folder names in the log file were prefixed with the LongPathOverride prefix of two backslashes, a question mark and a backslash (\\?\). That was corrected to just show the normal path, as users don't really need to see it (it is just an API switch).
  • Fix : .Value was used extensively to read a value from a DB cell. But some cells can be NULL in QuickHash, and if they are, using .Value can generate an error. This has now been switched to .AsString, meaning a NULL value returns an empty string, as intended (a sketch of the difference follows this list).
  • Fix : In the Text tab, the "Expected Hash" lookup was not applied for xxHash, which was missed before so if users pasted an expected xxHash value, it would not be looked up against the computed hash. That was fixed.
  • Fix : In the File tab, the "Expected Hash" lookup was not applied for xxHash, which was missed before so if users pasted an expected xxHash value, it would not be looked up against the computed hash. That was fixed.
  • Fix : The disk hashing module showed the field for Blake after hashing, even if empty and not computed, and was not being hidden like the others. That was fixed.
  • Fix : The disk hashing module reported "Windows 8" when run on "Windows 10". This was not actually wrong, but it was misleading, and is due to the Windows API being woeful in parts with regard to how the "number" and "name" of Windows are reported. So a new function was created to speak to ntdll.dll directly, so that the major, minor, and build versions are all now reported (a sketch of the approach follows this list).
  • Code : Adjusted variable naming in the "ProcessDir" function relating to source and destination folders because it was so confusing I did not even understand it several years after first writing it.
  • Code : More effort made to initialise variables
  • Code : Disk module code entirely refactored to be more efficient, to produce more useful data for the user, to help safeguard against null values and removable drive bays with no disks, and for general ease of reading. It should now also be able to read (and hash) CD and DVD disks, for example.
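
  For the DisableControls/EnableControls fix above, a minimal sketch with made-up names showing the pattern: the dataset's controls are switched off while a large result set is streamed to disk, so the attached DBGrid is not repainted for every single row.

      uses
        Classes, db;

      // Walk a dataset and write one (unquoted, simplified) CSV line per record to a
      // filestream. DisableControls prevents a grid repaint per row; EnableControls
      // in the finally block guarantees the UI is switched back on afterwards.
      procedure SaveDatasetToCSV(ADataset: TDataSet; const AFileName: string);
      var
        FS: TFileStream;
        Line: string;
        i: Integer;
      begin
        FS := TFileStream.Create(AFileName, fmCreate);
        ADataset.DisableControls;
        try
          ADataset.First;
          while not ADataset.EOF do
          begin
            Line := '';
            for i := 0 to ADataset.FieldCount - 1 do
            begin
              if i > 0 then Line := Line + ',';
              Line := Line + ADataset.Fields[i].AsString;
            end;
            Line := Line + LineEnding;
            FS.WriteBuffer(Line[1], Length(Line));
            ADataset.Next;
          end;
        finally
          ADataset.EnableControls;
          FS.Free;
        end;
      end;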
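
  For the .Value versus .AsString fix above, a minimal illustrative sketch (the helper name is mine): reading a NULL cell through .Value yields a Null variant that must be checked explicitly, whereas .AsString quietly returns an empty string.

      uses
        db, Variants;

      // NULL-safe read of a field as text. The commented-out .Value form shows the
      // extra checking that .AsString makes unnecessary.
      function SafeFieldText(ADataset: TDataSet; const AFieldName: string): string;
      begin
        // With .Value the NULL case must be handled by hand:
        //   if VarIsNull(ADataset.FieldByName(AFieldName).Value) then
        //     Result := ''
        //   else
        //     Result := VarToStr(ADataset.FieldByName(AFieldName).Value);
        Result := ADataset.FieldByName(AFieldName).AsString;   // NULL simply becomes ''
      end;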
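
  For the Windows version reporting fix above, a minimal Windows-only sketch of the usual approach of asking ntdll.dll via RtlGetVersion, with the record and function declared locally; QuickHash's actual routine may differ in detail. Unlike GetVersionEx, which is subject to manifest-based compatibility behaviour, RtlGetVersion returns the real major, minor and build numbers.

      uses
        SysUtils;

      type
        // Mirrors RTL_OSVERSIONINFOW from the Windows headers
        TRtlOSVersionInfoW = record
          dwOSVersionInfoSize: LongWord;
          dwMajorVersion: LongWord;
          dwMinorVersion: LongWord;
          dwBuildNumber: LongWord;
          dwPlatformId: LongWord;
          szCSDVersion: array[0..127] of WideChar;
        end;

      function RtlGetVersion(var AVersionInfo: TRtlOSVersionInfoW): LongInt; stdcall;
        external 'ntdll.dll' name 'RtlGetVersion';

      // Returns e.g. 'Windows 10.0 build 19045' rather than a shimmed "Windows 8" value
      function WindowsVersionString: string;
      var
        Info: TRtlOSVersionInfoW;
      begin
        FillChar(Info, SizeOf(Info), 0);
        Info.dwOSVersionInfoSize := SizeOf(Info);
        if RtlGetVersion(Info) = 0 then        // 0 = STATUS_SUCCESS
          Result := Format('Windows %d.%d build %d',
            [Info.dwMajorVersion, Info.dwMinorVersion, Info.dwBuildNumber])
        else
          Result := 'Unknown Windows version';
      end;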

New in QuickHash 3.1.0 (Sep 10, 2019)

  • Several bug fixes and feature request inclusions, most notably it includes SHA-3 (256 bit) and Blake2b (256 bit) algorithms, and should be suitable for Apple OSX Catalina and its 64-bit enforcement of applications.

New in QuickHash 3.0.4 (Sep 10, 2019)

  • The ‘File’ tab was not showing automatically when using drag and drop. Now it does.

New in QuickHash 3.0.0 (Sep 10, 2019)

  • dozens of bug fixes, several new features added (two of which are the significant implementation of SQLite and Hash list importing)

New in QuickHash 2.8.4 (Aug 28, 2017)

  • The "Expected Hash Value" field had been broke a little in the 2.8.3 release meaning that when the user first pasted a value, it would report a mis-match even when it matched. But if the user re-pasted the value, it would match as intended (https://quickhash-gui.org/bugs/expected-hash-value-report-wrongly-on-single-file-hashing/). That fault was fixed.
  • The "Expected Hash Value" was comparing only 7 characters instead of 8 for xxHash. That was fixed.
  • The date and time formatting that was reported as fixed in v2.8.3 was not as fixed as it should be, and also was not included in the Linux version as it should have been.
  • The "Text" field had been accidentally adjusted to use pointers to widestrings. The commit was accepted without realisng the impact. So 'hello' was not being hashed but it's Unicode widestring version was being hashed. That was fixed and reverted to it's previous settings.

New in QuickHash 2.8.0 (Mar 22, 2017)

  • Major change to the hash library. All versions of QuickHash prior to and including v2.7.0 used DCPCrypt, which is a fairly old library and had to be adjusted to hash large files over 4Gb due to a 32-bit limitation. In addition, for SHA-256 and SHA-512 it was not enormously fast, though it was fast enough. With v2.8.0, HashLib4Pascal (http://wiki.freepascal.org/HashLib4Pascal and https://github.com/Xor-el/HashLib4Pascal) has been incorporated instead. There is not only a huge code readability improvement but a slight speed increase as well for all four of the major algorithms used by QuickHash. In addition, it will make the addition of other hash algorithms easier for the developers, because the library offers a large choice to pick from (a short usage sketch follows this list). Enormous credit, appreciation and thanks to Ugochukwu Mmaduekwe Stanley, aka Xor-el, for the library (https://github.com/Xor-el), which is licensed under MIT.
  • SHA256, SHA-1 & SHA256 concurrently and SHA512 hash algorithms added to the disk hashing module.
  • xxHash64 added to all areas of QuickHash – text, files and disks. XxHash was a hash library that I wanted to include a couple of years ago but never got round to. But a Freepascal form of it is also part of the HashLib4Pascal library, so implementing it was as easy as for the other algorithms. It is true what they say about how fast it is – it really is crazy fast!
  • New save dialog added to the disk hashing module (prompted by default by the enabled 'Create and save a log file' checkbox) to enable the user to save all the results of the hashing process as a text file in a location of their choosing. Or they can disable the option.
  • New date and time values added to the "File" tab so the user can report the time the process started and ended and the elapsed time, as per feature request http://quickhash-gui.org/bugs/add-date-and-document-output/. Useful for benchmarking and so on.
  • Also fixed the fact that the “Elapsed time” for the “File” tab did not refresh if the user changed the hash algorithm using the radio box. It only refreshed if the user chose a new file using the button. That was fixed so that regardless of how the user adds the file or what hash algorithm is chosen, the timers are reset.
  • Horizontal scroll bar added to the hash value field in 'Text' tab, to allow the whole hash to be read more easily.
  • Improved anchoring of several visual elements, meaning text labels were not cut off or made less visible and looked better when maximising the GUI. Thanks to Dareal Shinji for his help with that. See https://github.com/tedsmith/quickhash/issues/11
  • The settings file that was implemented in v2.7.0 caused some problems for Linux and OSX users. That was fixed by adjusting to a generic filename based on the name of the application. See https://github.com/tedsmith/quickhash/issues/6
  • The progress bars didn't automatically reset to zero when the same tabbed interface was used multiple times without restarting QuickHash. Now, for each tab where a progress is found, when the user clicks “Start”, or equivalent thereof, the progress bar will reset.
  • Fixed an issue in the disk hashing module; after hashing a volume or disk, if the user selected a different hash algorithm and then clicked the start button again, 65K of data was read and hashed and then the program just reported that no more data could be read. This was caused by a boolean flag being tripped to true when the progress form was closed; thus, when the repeat loop was executed again, it stopped at the "until" line because the abort condition was true. This was fixed, so now users can keep hashing the disk with various algorithms without restarting QuickHash.
  • New start date and time, end date and time and time taken labels added to the disk hashing module. This information is also saved to the log file by default.
  • Stop button added to disk hashing module to allow the user to easily abort if needed.
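
  A small usage sketch of the kind of call HashLib4Pascal makes possible, written from memory of the library's factory API (the unit name HlpHashFactory and the exact factory method names should be checked against the library; the helper function names are mine, not QuickHash's):

      uses
        SysUtils, HlpHashFactory;

      // SHA-256 of a text string; swapping the algorithm is essentially a one-line
      // change because every hash is created through the same factory pattern.
      function SHA256OfText(const AText: string): string;
      begin
        Result := THashFactory.TCrypto.CreateSHA2_256()
                    .ComputeString(AText, TEncoding.UTF8).ToString();
      end;

      // xxHash64 of the same text, via the 64-bit (non-cryptographic) factory.
      function XxHash64OfText(const AText: string): string;
      begin
        Result := THashFactory.THash64.CreateXXHash64()
                    .ComputeString(AText, TEncoding.UTF8).ToString();
      end;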

New in QuickHash 2.6.9 (Oct 13, 2016)

  • The UNC and long path name fixes still had not entirely worked as hoped when tested on big data sets. Further fixes implemented to ensure the filename and path to an existing file in a very long path is detected, and likewise re-created when copied.
  • Improvements made to the way QH reports errors. Errors are generally quite rare except when dealing with very large volumes of network data in a dynamic environment. Prior to v2.6.9, a message window would appear which was not very useful if there were over a few dozen errors because the list was too big for the screen and the automatic saving of that data seemed to go wrong and generate save errors. That was fixed to a simple warning that errors were found and the user is now prompted to save a text file in a place of their choosing.
  • If QuickHash fails to initiate a handle to a file at the time of hashing, not only will the user be told that there was an error initiating a handle (as it did before) but it will now tell you which file is causing the problem.
  • If the user pastes the path of a mounted drive as a UNC path (e.g. M:\MyServer\MyDataShare\MyFolder) as either source or destination, the user will now be told to fix it to a true UNC path rather than QuickHash simply crashing out!
  • Status bar in the bottom of the Copy tab (the part that shows the user what file is currently being hashed) was being truncated if the path length was particularly long, and was still truncated even if maximised to the full screen size on a 40” monitor! That has been improved.

New in QuickHash 2.6.8 (Oct 13, 2016)

  • In the 'Copy' tab, users can now select multiple source folders so that multiple folders' content can be hashed, copied to a single destination folder, and then hashed again. Note that an experimental limit exists – if the list of files in memory exceeds 2Gb, QuickHash will likely crash. Please report such instances; if there are too many, I will implement another technique.
  • In the copy tab, a bug was fixed for UNC paths when long path names were encountered. Seemingly my earlier efforts to correct this issue had not worked. Now, as of v2.6.8, long paths should not be a problem with UNC mode in the 'Copy' tab for either source or destination locations.
  • For Linux users, made the UNC path fields visible, albeit disabled, just to illustrate more clearly to the user the full path currently selected in the treeview.
  • For MD5 and SHA-1 hashes, if the handle to the file fails, a more meaningful error should be displayed rather than a standard error message that didn't tell the user or the developer much as to why the handle failed.
  • The 'Stop' button in the 'Copy' tab didn't work at all, I noticed! Now it does (it will abort after the file that was being copied at the time of the button press has finished copying, before the next file copy starts).
  • The status bar at the bottom of the 'Copy' tab now alerts the user that files are being counted after the user presses 'Go', rather than displaying nothing.
  • More of the lists used in memory are Unicode enabled which may reduce crashes.

New in QuickHash 2.6.7 (Apr 9, 2016)

  • The 'Expected Hash' comparison didn't kick in when the user dragged and dropped a file into the 'File' tab, in that QuickHash wouldn't report to the user whether the computed hash matched what they were expecting. Obviously the user could still compare the computed hash by eye, but nevertheless it needed to be fixed.
  • Added a toggle for text line-by-line hashing. Users asked if it would be possible to give them a choice, when outputting the results, of either including the original source text with the computed hashes or excluding it, resulting in just a list of hashes. So now there is an option that toggles between 'Source text INcluded in output' and 'Source text EXcluded in output'. It, along with the two line-by-line text buttons, has been put in its own group box within the 'Text' tab. Non-ASCII\ANSI characters are accepted, allowing for Western, Eastern and Asian language encodings.

New in QuickHash 2.6.6b (Apr 9, 2016)

  • Removed one element from the RAM box because it was reporting an incorrect amount of free RAM and it wasn't really that necessary anyway.

New in QuickHash 2.6.6 (Feb 4, 2016)

  • Added the ability to hash the content of a text file line-by-line (an expansion of the ability to hash pasted text line by line). This means the user can select a file containing a list of names or e-mail addresses or whatever, and each line will be hashed separately. Carriage returns, line feeds and nulled space should be trimmed from the end of each line.
  • Added a RAM status field (Windows only) that updates itself every few seconds with the RAM status of the computer. Useful if particularly large data sets are being dealt with.
  • Ever since 2011, QuickHash has only been shipped as a 32-bit version for Windows, in the knowledge that all the internal 64-bit requirements are dealt with and that QH doesn't need the extra RAM and so on provided by 64-bit systems. However, a bug was reported (#17 - http://sourceforge.net/p/quickhash/tickets/17/) that highlighted an issue with 32-bit versions of QH running on 64-bit Windows with regard to the content of the Windows\System32 folder. The files in there are presented differently to 32-bit programs than to 64-bit ones, via the SysWoW64 system.
  • "The operating system uses the %SystemRoot%\system32 directory for its 64-bit library and executable files. This is done for backward compatibility reasons, as many legacy applications are hardcoded to use that path. When executing 32-bit applications (like Quickhash, which doesn't need to be 64-bit), WoW64 transparently redirects 32-bit DLLs to %SystemRoot%\SysWoW64, which contains 32-bit libraries and executables. 32-bit applications are generally not aware that they are running on a 64-bit operating system. 32-bit applications can access %SystemRoot%\System32 through the pseudo directory %SystemRoot%\sysnative." https://en.wikipedia.org/wiki/WoW64
  • This means, essentially, that the 32-bit build of QH, when run on 64-bit systems, is presented with different data in that folder than it would see natively. The users affected by this are minimal (perhaps none except the user who reported it) because it only impacts files in that specific folder; other folders are not affected. Nevertheless, to resolve this, as of v2.6.6 dedicated 32-bit and 64-bit executables are now provided for Windows. Users are encouraged to use the appropriate executable for their system, but in 99% of cases the 32-bit one should work fine in 32-bit emulated mode, unless the content of C:\Windows\System32 is to be examined (see the sketch after this list).
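
  A minimal Windows-only sketch of the situation described above, with the API declared locally and helper names of my own (not QuickHash's code): a 32-bit process can detect that it is running under WoW64 and, if it genuinely needs the 64-bit System32 content, reach it through the "sysnative" alias.

      uses
        Windows, SysUtils;

      // IsWow64Process tells a 32-bit process whether it is running on 64-bit Windows
      function IsWow64Process(hProcess: THandle; var Wow64Process: BOOL): BOOL; stdcall;
        external 'kernel32.dll' name 'IsWow64Process';

      function RunningUnderWoW64: Boolean;
      var
        IsWow64: BOOL;
      begin
        IsWow64 := False;
        if IsWow64Process(GetCurrentProcess, IsWow64) then
          Result := IsWow64
        else
          Result := False;
      end;

      // Path that reaches the real 64-bit System32 even from a 32-bit process
      function RealSystem32Path: string;
      begin
        if RunningUnderWoW64 then
          Result := GetEnvironmentVariable('SystemRoot') + '\sysnative\'
        else
          Result := GetEnvironmentVariable('SystemRoot') + '\System32\';
      end;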

New in QuickHash 2.6.5 (Dec 17, 2015)

  • At user request, the "Text" tab now allows line-by-line hashing of each line. The results are saved to a comma separated text file that can be opened in a text file editor or spreadsheet software.
  • For example, Google Adwords requires SHA256 lowercase hashes of customer e-mail addresses. So with QuickHash, you can easily paste your list of addresses into the text field, click the "Hash Line-By-Line" button and the output is saved as CSV output for you, ready for use with Google Adwords or any similar product line (https://support.google.com/adwords/answer/6276125?hl=en-GB). Tested with data sets of the low tens of thousands. Would be interested to hear how it copes with larger volumes of data.

New in QuickHash 2.6.4-a (Dec 6, 2015)

  • Bug #16 highlighted an issue with the “Don't rebuild path” option of the “Copy” tab wherein the copy failed. This was tracked back to v2.6.3, when the new treeview feature was added, replacing the former button-based path selection functionality. The bug was caused by a reference to a path parameter that no longer existed. That was fixed.

New in QuickHash 2.6.4 (Dec 1, 2015)

  • QuickHash can now READ and WRITE from and to folders that exceed the MAX_PATH limit of MS Windows, which is 260 characters. A limit of 32K is now adhered to with QuickHash 2.6.4, meaning files may be found on filesystems that were put there by software able to bypass the MAX_PATH limit, even if regular software like Windows Explorer is unaware of their existence (see the sketch after this list).
  • “UNC Mode” added to the “Copy” tab, specifically to enable the user to operate in pure UNC mode but with the new 32K path length limits. Useful for comparing data on multiple network nodes that may not be mapped as a drive letter. Windows-only feature (not needed for Linux and Apple Mac).
  • The tree views in the copy tab are now sorted alphabetically.
  • The “Choose file types” option that has existed in the “Copy” tab for a while has now been added to the “Files” tab by user request, meaning the user can now recursively hash a folder and its sub-folders of files but choose which files to include and which to exclude. Extension basis only and not file type signature analysis.
  • Further GUI anchoring improvements, to make the program display elements better when maximised, with less overlapping hopefully.
  • Some historic error messages updated and improved, and made more OS specific.
  • User manual updated and revised for v2.6.4
  • Some other minor improvements
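
  A minimal sketch of the idea behind the long-path handling (the real routine in QuickHash, referred to elsewhere in this changelog as LongPathOverride, is more involved): prefixing a Windows path with '\\?\', or '\\?\UNC\' for network paths, tells the Windows API to skip its MAX_PATH parsing, which raises the effective limit to roughly 32K characters.

      // Apply the extended-length prefix to a Windows path if it is not already present.
      function ApplyLongPathPrefix(const APath: string): string;
      begin
        if Pos('\\?\', APath) = 1 then
          Result := APath                                  // already prefixed
        else if Pos('\\', APath) = 1 then
          Result := '\\?\UNC\' + Copy(APath, 3, MaxInt)    // \\Server\Share\... form
        else
          Result := '\\?\' + APath;                        // ordinary drive-letter path
      end;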

New in QuickHash 2.6.3 (Sep 19, 2015)

  • NEW: Replaced two buttons with two treeview panes in the 'Copy' tab. Left pane is for the user to choose where to copy files FROM. Right pane is for the user to choose where to copy files TO. On appropriate selection, the user needs just press 'Go' and on completion a new form shows the results
  • FIX: In the 'Compare Directories' tab, the save button will now also save the hash comparison result to the log file, i.e. did the comparison match or not? And how many files were counted in grids A and B (feature request #20 http://sourceforge.net/p/quickhash/feature-requests/20/)
  • FIX: In the 'Compare Directories' tab, the file counts of the grids and difference counts were overlapping with the labels when high file counts were examined (tens of thousands upwards). Fixed by anchoring the elements

New in QuickHash 2.6.2 (Aug 7, 2015)

  • As per feature request #15, and in part request #7, added an 'Expected Hash Value' field to Text and File tabs to enable the user to paste an already computed hash value (perhaps from another tool, e-mail, webpage) into QuickHash. If the field contains anything other than three dots, once the data hash is generated in QuickHash, it will compare it against the expected hash in this field and report match or mis-match
  • Corrected the fact that the values for Total Files in Dir A and Dir B, in the comparison of two directories, were the wrong way round
  • Updated copyright date range in the form captions for both the disk hashing module and QuickHash itself
  • Minor GUI improvements like anchoring
  • User manual updated

New in QuickHash 2.6.1 (Apr 17, 2015)

  • Added two buttons for copying the grid content of “Compare Directories” to the clipboard, to enable the user to simply paste the results of one or both grids to another tool like Excel, Notepad etc.
  • Added a “Save to Files” button in the same tab to allow the content of grids A and B to be saved as two separate CSV files (one for each grid) and a single combined HTML file (with the content of table A displayed in one table, and the content of table B displayed in the other).
  • Throughout all of Quickhash, a line is automatically inserted into both CSV and HTML output stating the name and version of QuickHash used and the date the log file was generated.
  • Fixed the truncation of “Total Files in DirA” and “Total Files in DirB” in Compare Directories tab, where counts more than 99 (i.e. 100+) were being truncated. So 150 files was being written as “15”. Note this only affected the user display – not the log or display grid.
  • Ensured that if the user re-runs a comparison of two directories using the “Compare Directories” tab, any values from the previous comparisons are cleared, such as the values in the display grids, the time ended, the hash match status, etc. Prior to 2.6.1, once a scan had been conducted, the display was not updated until the second scan had finished, as opposed to clearing it at the start of the subsequent scan.
  • Added a clickable link to the QuickHash project's homepage at SourceForge.

New in QuickHash 2.6.0 (Jan 29, 2015)

  • New tab added titled 'Compare Two Files' to allow the user to check if two files in two different places (folders) are identical, or not, without having to hash all the other files in those respective folders. For example, C:\Data\FileA.doc and C:\BackupFiles\FileA.doc
  • Fixed column mis-alignment for HTML output of the 'FileS' tab; the mis-alignment was caused by the separation of file name and file path into two different columns in v2.5.2, where the separation in the grid was not carried forward to the HTML output.
  • Added the ability to delete duplicate files where found, if the user chooses to detect duplicate files only.
  • Further hints corrected in 'Copy' tab.
  • Manual updated to incorporate changes brought in versions 2.5.3 and 2.6.0

New in QuickHash 2.5.2 (Nov 3, 2014)

  • For the Windows version only : Implemented a scheduler for disk hashing, allowing the user to schedule a start time for their chosen disk. Useful, for example, if a disk is currently being used or examined with an estimated completion time of 2 hours, by which time the examining user may have gone home and be unable to start the disk hashing process. Now the user can specify a start date and time that is two or three hours after the estimated end time of the other task, and QuickHash will then commence hashing automatically without the need for the user to start it. If no valid start time is entered, the program starts hashing as soon as the chosen disk is double clicked, as normal.
  • For all versions : At user request, added an additional column to the “FileS” tab to separate the path from the filename. So now the FileName column contains just the filename, and the new 'Path' column contains the file's path.
  • Added an option in “Copy” tab called “Don't rebuild path?”. If checked, the files in the source directory and all sub-directories will simply be dumped into the root of the destination directory without having the original path rebuilt. Any files with the same name will be appended with 'Filename.ext_DuplicatedFileNameX'.
  • Changed progress status labelling to look neater and more compact.

New in QuickHash 2.5.1 (Sep 8, 2014)

  • The new dynamic text hashing worked fine - new hashes appeared as the user typed, but if the user then chose a different hash algorithm, without changing the text, users felt it would be better for the hash to update dynamically too. So that was applied.
  • When you clicked in the text area, it was always cleared automatically, for convenience. However, users felt it might be better to only clear the default standing text on entering the text field, rather than always clearing it. So now it only clears it if the default standing text is in the box. After that, it only clears the box if the user consciously clicks the "Clear Text Box" button. This allows the user to add text, then add some more text a few minutes later without losing what they had first.
  • Drag and drop functionality added for SINGLE FILES in the 'File' tab. So users can now simply drag their file onto QuickHash. Switching the hash algorithm choice in that same tab will dynamically update the hash, as seen with the new text hashing changes reported above. And it will switch the user to that tab, if they do a drag and drop from another tab. Support for folder based drag and drop will not be added.
  • Adjusted the 'Started at:' value in 'File' tab from just the time to date and time, to account for large files that may exceed 24 hours to hash.
  • All hash value strings assigned as ansistrings. Not strictly necessary as SHA512 as hex is 128 characters, but future algorithms may exceed that.
  • Added an advisory to ensure users run QuickHash as administrator for hashing disks and that Windows 8 users might wish to consider other options due to a lack of testing on that rather unpredictable platform. In tests, unexpected read errors were reported on Windows 8.

New in QuickHash 2.4.2 (Jul 19, 2014)

  • Adjusted interface to make it better on small screens like notebook computers.
  • Removed a message dialog that appeared when there was an error. Instead, QH will continue when an error is encountered but warn you about it at the end.

New in QuickHash 2.4.1 (Jul 17, 2014)

  • Switched the SHA-1 file hashing functionality to the same transform function as used in the disk hashing module, for speed increases.
  • Meaning QuickHash will compute the hashes of files around 40% faster than in any earlier version.
  • Customised versions of SHA1 library merged into one unit (called 'sha1customised') that incorporate both the fixes for Unicode file handling and the faster transform routines introduced in the disk hashing module, that are now needed for both disks and files.
  • In v2.4.0, there were two separate customised SHA1 units, which made life confusing.
  • The entire process was repeated for MD5, too. It also has its own customised unit and seems to be around 3 times faster!!
  • Start Times and End Times provided as a pair, making them more useful and where possible computing the time actually taken to do the task.
  • Fixed status bar - the status bar in 'File Hashing' was being populated by 'Hash, Copy, Hash' processes instead of just the 'File Hashing' progress tab. The status bar in 'Hash, Copy, Hash' was not being populated. That was fixed.
  • Redundant Unit1 code (applied to versions prior to v2.0) removed.

New in QuickHash 2.4.0 (Jul 14, 2014)

  • After several years of trying, the functionality to hash physical disks in Windows is now part of QuickHash. It has been implemented by means of a separate self-contained module that is launched on press of a button in the fourth tabsheet titled "Disk Hashing (for Windows)".
  • The Linux version does not need this tab or this module, so neither is available to Linux users. Linux users have always had the option of hashing disks with QuickHash by running it as root or sudo and using the "Hash File" tabsheet and navigating to /dev/hdX or /dev/sdaX or whatever.
  • Note SHA1 only, for now. Others will follow in X.X.X sub releases, e.g. 2.4.1.
  • Speeds are fast - approx 3.5Gb per minute via Firewire800 and up to 8Gb per minute with direct SATA connection.
  • Some redundant unused variables removed to optimise memory usage
  • Some minor improvements to the interface - a few buttons moved around, extra hints added etc

New in QuickHash 2.3.0 (Jun 6, 2014)

  • Complete support for Unicode on Windows, ensuring filenames or directories containing Chinese or Arabic or Hebrew (etc) characters can now be processed using QuickHash without the user having to change their language and region settings. Prior to this, QuickHash was generating the default initialisation hashes for such files but not actually hashing them.

New in QuickHash 2.2.0 (Jun 6, 2014)

  • It was reported that large files failed to hash properly with the SHA256 or SHA512 implementations. It turned out this was due to a 32-bit integer declaration in the DCPCrypt library that is used by QuickHash for those two algorithms. Updated by using QWord instead of LongWord variables (see the sketch after this list). Output checked against SHA256SUM and SHA512SUM and found to be OK now.
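
  A tiny illustrative sketch of the class of bug described above (not DCPCrypt's actual code): a running byte counter held in a 32-bit LongWord wraps around at 4Gb, which is exactly the point at which large-file hashing broke, whereas a 64-bit QWord carries on counting.

      program CounterWrapDemo;

      var
        Counter32: LongWord;   // 32-bit, wraps after 4294967295
        Counter64: QWord;      // 64-bit, does not
      begin
        Counter32 := High(LongWord);
        Counter64 := High(LongWord);
        Inc(Counter32, 1024);            // wraps round to 1023
        Inc(Counter64, 1024);            // 4294968319, as expected
        WriteLn('LongWord counter: ', Counter32);
        WriteLn('QWord counter   : ', Counter64);
      end.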

New in QuickHash 2.1.0 (Jun 12, 2013)

  • All versions prior to 2.1 suffered a 32-bit 4Gb limitation when copying (as part of the 'Hash, Copy, Hash' routine) a single file larger than 4Gb. That was fixed by casting the "filesize" variable to Int64 instead of Int32 meaning the size limitation is now set by your filesystem only (16 Exabytes for NTFS).
  • International language support added for filenames and directories that contain or might be created of a non-English nature by use of UTF8 casting. For example, the destination directory for "Hash, Copy, Hash" can now contain non-English characters.
  • All hashing in QuickHash utilises Merkle–Damgård constructions. As such, zero byte files will always generate a predetermined hash, depending on the algorithm; SHA-1, for example, is always da39a3ee5e6b4b0d3255bfef95601890afd80709. To avoid confusion, if a file is zero bytes, it is not hashed at all and the entry 'Not computed, zero byte file' is entered into the results. Though I acknowledge some users may feel it is necessary to hash zero byte files for security reasons, on the whole I don't think it is for 99% of users.
  • Files of zero bytes are now copied as part of the "Hash, Copy, Hash" routine to facilitate those who wish to use QuickHash as a backup system where, on occasion, zero byte files are created by software and are required in order to function properly.
  • Date format of output directory changed again to 'yy-mm-dd_hhmmss' (e.g. QH_13-12-25_221530) due to the now widespread use of QuickHash internationally.
  • The previous format of ddmmyy worked OK for UK users, but there is some merit in the year, month, day format, especially for multiple output dirs.

New in QuickHash 2.0.0 (Mar 5, 2013)

  • New tabbed interface making the layout clearer but also enabling better use on lower resolution systems and massive speed increase of approx 75% from version 1.5.6.
  • Lots of other improvements, particularly in the area of error reporting.

New in QuickHash 1.5.6.1 (Jan 26, 2013)

  • Moved some of the tick boxes into a panel group to help with resizing, and moved the status bars of recursive directory hashing further in to the left.
  • This is because it became apparent that on resolutions of less than 1600 pixels they were not visible. These changes have reduced, but not cured, the problem. As a result, a new tabbed interface is being worked on and will probably result in a v2 release status.

New in QuickHash 1.5.6 (Jan 26, 2013)

  • The display grids for displaying hashes of multiple files in a directory and for "copy and paste" hashing now have the number of rows pre-computed based on the number of files found prior to hashing. This saves a considerable amount of time with large data sets.
  • Combined with the step above, a gigantic speed improvement caused by also disabling the dynamic bottom pane until after all files are hashed. Having it refresh for every file was not really necessary anyway, given that the status bar reports the file being hashed and the progress stats show files %, data volume etc. Benchmarks show 3K files took 2 minutes with versions < v1.5.6; with v1.5.6, the same 3K files take 12 seconds!
  • The same visibility change was applied to recursive copy and hash, though, in tests, the process of copying the files was slower than the grid display; but with lots of small files, this is likely to have made an improvement.
  • With regard to recursive directory hashing and recursive copy and hashing: the user can now decide to override the default behaviour of hashing all files in all sub-directories of the chosen directory, meaning that just the files in the root of that chosen directory can be hashed (and copied if appropriate) and no others in other sub-directories, if required.
  • The user can now decide whether to flag any duplicate files found, or not (only for standard directory hashing - not for copy and hash, yet).
  • The left to right scroll bar of the bottom pane was partly obscured by the status bar. That was corrected.

New in QuickHash 1.5.5 (Jan 26, 2013)

  • Added file mask capability to allow selective searching for one or more mixed file types, e.g. *.doc; *.xls etc. New masks can be added at will.
  • Added progress indicators to recursive copy and hash, to match the standard recursive hash without copy.
  • A new intermediary output directory, named after the date and time of execution, is now added beneath the output directory, with the output then put beneath that, ensuring that if multiple outputs are sent to the same directory at different times, each output can easily be identified.
  • A log file of files that failed to copy, or those for whom the hashes didn't match, is now recorded in the chosen output directory
  • Adjusted phrasing of the Clipboard button to "Clipboard Results", to mean "Copy the results to RAM clipboard", because the previous phrasing of "Copy to RAM" was misleading, suggesting the files would be copied to RAM, which was not true.
  • Improved layout slightly by replacing some labels with edit fields.
  • Improved the 'Hash mismatch' error to make it easier to read and to include the name of the actual file that failed, as well as just the hash value.
  • Added a warning to the recursive copy and hash feature that OS protected files or files in use will not copy properly, to make the user choose more wisely

New in QuickHash 1.5.4.1 (Jan 26, 2013)

  • All functionality added since 1.5.2.2 added for the Linux version, too, matching it to the 1.5.4 Windows release
  • Added Stop button to recursive directory copy and paste traversal (top right pane), to match the stop features of the simpler recursive directory traversal functionality (bottom pane)

New in QuickHash 1.5.4 (May 29, 2012)

  • As announced in v 1.5.3, improved the "Copy and Hash Files" display area as follows:
  • The display area is now a numerical grid with sortable columns instead of a text field. Faster and more feature rich options and responsiveness
  • For Windows only instances of QuickHash, the source files' created, last modified and last accessed dates are looked up, displayed and logged to account for NTFS\FAT32 issues with date attribute retention
  • Added the ability to export results to HTML file, including column headings
  • Added the ability to copy the grid content to the clipboard for easy pasting into spreadsheets etc
  • Some minor code improvements and interface labelling all round

New in QuickHash 1.5.2.2 (Apr 26, 2012)

  • Fixed incorrect formatting of reported date and time settings to now accurately show DD/MM/YY HH:MM:SS
  • Converted display area of "Copy & Hash Files" to a listbox, rather than a memo field to increase speed
  • Adjusted "Copy & Hash Files" delimiter to a tab (#9) instead of nothing to allow easier importing into spreadsheets
  • Coming Soon: v 1.5.3 will use a grid system for the "Copy & Hash Files" display instead of either a memo field or a listbox

New in QuickHash 1.5.2 (Mar 22, 2012)

  • System error codes are now returned with any last error, to enable better dev support to users.
  • GUI set to increase proportionally as the interface is maximized to the max screen size, to allow more data to fit in the memo fields when run on larger screens.
  • The 1.5.0 feature of copying source files to destination directories further corrected and improved as follows:
  • Radio box added to choose whether to list JUST directories, or JUST directories AND files, neither of which will be hashed or copied. Useful for occasions when the user might want to generate a list of sub-directories only (that might contain forensic images, for example) that they wish to paste into the case properties of forensic software like X-Ways Forensics or FTK, or into a report.

New in QuickHash 1.5.1 (Mar 8, 2012)

  • Main Menu added - About page, Credits page and a "File --> Exit" to free space on the form by allowing the removal of the 'Exit' button
  • Italian version - credit to Sandro of the DEFT Live CD project for translating the English to Italian - www.deftlinux.net/
  • Corrected keyboard shortcut keys as some shortcuts were applied twice to different buttons.
  • Minor re-alignment of GUI panes

New in QuickHash 1.5.0 (Mar 8, 2012)

  • Recursive directory copying and hashing from source directory to destination directory added.
  • Some minor GUI re-arrangement and improvement for readability.
  • Known Issues : Some unicode filenames cause an error, but not all. Also, illegal Windows characters in the filename may cause an error.

New in QuickHash 1.4.1 (Mar 8, 2012)

  • Took out the autosize attribute for the grid display of recursive directory file hashing. Refreshing that grid with tens of thousands of files slowed down the program considerably - sometimes by up to a third!
  • Added a 'Counting files....' entry in the progress bar at the bottom of the grid display so that when a directory is first selected, the user now knows the program is working while it calculates how many files there are to hash in total, as opposed to appearing to be doing nothing.

New in QuickHash 1.2.1 (Jul 2, 2011)

  • The data figure next to total files examined looped back round to zero with unusually large files. This was fixed by using a QWord integer, and QuickHash can now recursively SHA1 hash directories containing 18 Exabytes of data (roughly four and a half million 4 Terabyte hard disks full of data).

New in QuickHash 1.2.0 (Jun 25, 2011)

  • String hash box enlarged to allow paragraphs or long sentences to be hashed, instead of just a few words.
  • File hashing now has a start and end time counter, to determine how long the hashing process took.
  • Recursive directory and file hashing now has a start and end time counter, to determine how long the hashing process took for entire directory and its children.
  • Recursive directory and file hashing now has a field to show the total amount of data examined (bytes, Kb, Mb, Gb or Tb).
  • Linux version optimized for Linux usage
  • Windows version optimized for Windows usage
  • Minor improvements relating to layout and code optimization.

New in QuickHash 1.1 (Jun 13, 2011)

  • Larger buffers allow faster hashing of files over 1Mb.
  • Files without an extension are now detected.
  • Some additional safeguards to prevent forensic image "sets" being hashed accidentally as individual files.

New in QuickHash 1.0 (Jun 13, 2011)

  • Hashing of a string
  • Hashing of a single file (or a disk, if run in Linux using sudo or root permissions)
  • Hashing of an entire directory - its children and all sub-directories, including a percentage progress indicator.
  • Copy and Paste to Clipboard