What's new in Hextrakt Crawler
New in Hextrakt Crawler 2.1.1 Build 2101 (Dec 28, 2017)
- You can now apply a hash function to any HTML element (see “Custom Data”) to detect duplicated content at the element level. Previously, hashing was available for the HTML body only.
- The process startup speed has been improved.
New in Hextrakt Crawler 1.8.1 Build 1811 (Nov 24, 2017)
- A new “On Page / HTTPS” menu shows HTTP/HTTPS mixed content
- The projects page has been improved and now lists the crawl reports
- The “Google SERP impressions” column is now shown by default in the “Indexation” reports
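Mixed content means an HTTPS page that loads resources over plain HTTP. How Hextrakt detects it internally isn't documented here; a minimal sketch of the idea, using a regex over `src` attributes (a simplification — a real crawler would parse the DOM):

```python
import re

# Capture http:// URLs in src attributes (hypothetical simplified check).
MIXED_RE = re.compile(r'\bsrc\s*=\s*["\'](http://[^"\']+)["\']', re.I)

def find_mixed_content(page_url, html):
    # Only HTTPS pages can have mixed content.
    if not page_url.startswith("https://"):
        return []
    return MIXED_RE.findall(html)

html = '<img src="http://cdn.example.com/a.png"><script src="https://cdn.example.com/a.js"></script>'
print(find_mixed_content("https://example.com/", html))
# → ['http://cdn.example.com/a.png']
```

Note the pattern `http://` does not match `https://` URLs (the literal `:` follows `http`, not `s`), so secure resources are skipped.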
New in Hextrakt Crawler 1.5.0 Build 1501 (Jun 17, 2017)
- New indicators for network errors are available in the “Performance / Error codes” menu:
  - List of URLs with a “DNS resolution failed” error
  - List of URLs with a “Connection failed” error
  - List of URLs with a “Request timeout” error
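The three error buckets correspond to distinct failure points in a fetch. A sketch of how such errors could be classified from low-level exceptions (the label strings come from the changelog; the mapping itself is an assumption):

```python
import socket

def classify_reason(exc):
    # DNS lookup failed before any connection was attempted.
    if isinstance(exc, socket.gaierror):
        return "DNS resolution failed"
    # The server was resolved but did not answer in time.
    if isinstance(exc, (socket.timeout, TimeoutError)):
        return "Request timeout"
    # Any other socket-level failure (refused, reset, unreachable, ...).
    return "Connection failed"

# Errors caught while fetching a URL can be bucketed like this:
print(classify_reason(socket.gaierror()))         # → DNS resolution failed
print(classify_reason(ConnectionRefusedError()))  # → Connection failed
```

Separating the three cases matters diagnostically: DNS failures point at name configuration, timeouts at server load, and connection failures at firewalls or downed hosts.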
New in Hextrakt Crawler 1.4.0 (Jun 17, 2017)
- Upload a list of URLs to crawl
- Bulk export of all URL data
- New “not like” operator in User Custom Filter
- Clickable “inlinks / outlinks” columns that jump directly to the corresponding tab in the URL detail browser
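Hextrakt's exact matching rules for custom filters aren't documented here; a sketch of a “like” / “not like” pair, assuming case-insensitive wildcard matching (the operator names are from the changelog, everything else is illustrative):

```python
import fnmatch

def url_filter(urls, pattern, op="like"):
    # "like": keep URLs matching the wildcard pattern; "not like": keep the rest.
    def matches(u):
        return fnmatch.fnmatch(u.lower(), pattern.lower())
    if op == "like":
        return [u for u in urls if matches(u)]
    if op == "not like":
        return [u for u in urls if not matches(u)]
    raise ValueError(f"unknown operator: {op}")

urls = ["https://example.com/blog/post", "https://example.com/shop/item"]
print(url_filter(urls, "*/blog/*", "not like"))
# → ['https://example.com/shop/item']
```

A “not like” operator is handy for excluding whole sections (e.g. everything under /blog/) without enumerating what to keep.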
New in Hextrakt Crawler 1.3.0 (Jun 17, 2017)
- Logarithmic scale for charts
- Relative distribution in percent
- “Find URL” search field above the URL grid
- Extract the HTML body id attribute to categorize URLs
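Many CMS themes set a page-type identifier on `<body id="…">`, which makes it a convenient categorization key. A sketch of the extraction (the regex is a simplification; a real crawler would use an HTML parser):

```python
import re

# Capture the id attribute of the <body> tag, if present.
BODY_ID_RE = re.compile(r'<body\b[^>]*\bid\s*=\s*["\']([^"\']+)["\']', re.I)

def body_id(html):
    m = BODY_ID_RE.search(html)
    return m.group(1) if m else None

print(body_id('<html><body id="product-page" class="dark">...</body></html>'))
# → product-page
```

Grouping crawled URLs by this value lets reports distinguish, say, product pages from category pages even when URL patterns don't.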