robots.txt validation in Site Audit
It would be nice to have robots.txt validation in Site Audit. For example, my robots.txt file contained "User agent: MTRobot" followed by "Disallow: /" ("User agent" instead of "User-agent"), and the Lighthouse audit in Chrome DevTools (https://web.dev/robots-txt/) helped me find and fix this error.
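For illustration, here is a minimal sketch in Python of the kind of check being requested; the list of known directives is an assumption (not exhaustive), and a real validator would cover more of the format. It catches exactly the typo above:

```python
# Known robots.txt directives -- an assumed, non-exhaustive list.
KNOWN_DIRECTIVES = {"user-agent", "disallow", "allow", "sitemap", "crawl-delay"}

def lint_robots_txt(text: str) -> list[str]:
    """Flag lines whose directive is not recognised (e.g. "User agent")."""
    problems = []
    for lineno, raw in enumerate(text.splitlines(), start=1):
        line = raw.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        directive = line.partition(":")[0].strip()
        if directive.lower() not in KNOWN_DIRECTIVES:
            problems.append(f"line {lineno}: unknown directive {directive!r}")
    return problems

print(lint_robots_txt("User agent: MTRobot\nDisallow: /"))
# -> ["line 1: unknown directive 'User agent'"]
```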
location.kml erroneously flagged with errors
A KML file by definition cannot have a title tag, an h1, etc., so it should not show up on the All issues page. Please see: https://developers.google.com/kml/documentation/ Instead, Ahrefs should check whether the KML file contains all the necessary data.
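As a sketch of what a KML-aware check could look like, something along these lines could replace the HTML checks; treating "at least one Placemark with coordinates" as the necessary data is my assumption, not the full KML spec:

```python
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def kml_has_required_data(path: str) -> bool:
    """Return True if the file parses as KML and contains at least one
    Placemark with coordinates (assumed minimum, not the full spec)."""
    try:
        root = ET.parse(path).getroot()
    except ET.ParseError:
        return False  # not even well-formed XML
    if root.tag != f"{KML_NS}kml":
        return False  # well-formed XML, but not a KML document
    for placemark in root.iter(f"{KML_NS}Placemark"):
        if placemark.find(f".//{KML_NS}coordinates") is not None:
            return True
    return False
```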
Regex search for domains across all functions (backlink checking, site audit, etc.)
Our URL structure does not allow us to check a specific part of our site: rather than /uk/page, our URLs are structured as /page/uk, meaning we cannot analyse all our UK pages (or any country's, for that matter) with a simple prefix filter. With regex support, we could do this.
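To illustrate, a hypothetical filter of this kind in Python; example.com and the uk segment are placeholders for our setup:

```python
import re

# Match URLs whose country segment comes last (e.g. /page/uk),
# which a prefix filter like "example.com/uk/" cannot express.
uk_pages = re.compile(r"^https?://(www\.)?example\.com/.+/uk/?$")

urls = [
    "https://example.com/pricing/uk",
    "https://example.com/uk/pricing",  # country-first: not matched
    "https://example.com/pricing/de",
]
print([u for u in urls if uk_pages.match(u)])
# -> ['https://example.com/pricing/uk']
```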
Ahrefs Site Audit integration with Google Search Console and Google Analytics
It would be great if, like SEMrush, Ahrefs could integrate Google Search Console and Google Analytics data into Site Audit.
Possibility to mark issues as solved
It would be great if we could mark each issue as solved, so that it's not necessary to re-crawl everything to check what is done and what isn't. It would be an easy way to keep on track instead of exporting everything and working across two different documents; it's easier to have everything within the application.
Exclude URL from Audit
It would be nice to exclude URLs individually/manually, in the same way issues can be removed from the audit report with the "Turn off for this project" function. Having this at the individual URL level would help, because some URLs carry an issue that you don't want to fix (or should not fix) but that still skews the reports by showing up as unfixed!
Clicking Issues Goes Right to Affected URL
When viewing Site Audit, the first click into a row (error, warning, issue) should show the affected URLs right away, rather than the definition and 'how to fix'. This would save so many clicks and just make for better UX overall. Who's with me? Arrrg!
Option to crawl CSS for url() resources
It would be a neat option to have internal CSS files crawled for url() references, to see whether any referenced images, fonts, etc. return error responses. For example: the website URL is https://example.com, and the CSS file https://example.com/style.css includes an image for a background, url(https://example.com/images/no_image_here.jpg); the crawl would check whether that image exists and report an error if the resource returns a 404. There should be an option to check both internal and external url() URLs.
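A rough sketch of the proposed check, simplified to the happy path (a real crawler would also need redirect handling, throttling, and filtering of data: URLs):

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Pull the target out of url(...), with optional quotes and whitespace.
URL_FN = re.compile(r"""url\(\s*['"]?([^'")\s]+)['"]?\s*\)""")

def check_css_resources(css_text: str, base_url: str) -> None:
    """HEAD-request every url(...) reference and print its status code."""
    for ref in URL_FN.findall(css_text):
        resource = urllib.parse.urljoin(base_url, ref)  # resolve relative refs
        req = urllib.request.Request(resource, method="HEAD")
        try:
            with urllib.request.urlopen(req) as resp:
                status = resp.status
        except urllib.error.HTTPError as e:
            status = e.code  # e.g. 404 for the missing image
        print(status, resource)

css = "body { background: url(https://example.com/images/no_image_here.jpg); }"
check_css_resources(css, "https://example.com/style.css")
```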
Proofread Option: Spelling & Grammar Check
A site proofreading option, with spelling and grammar checks, would be a great integration.
alt="" should not be reported as missing alt text
When an image (especially an icon) is purely decorative or already has a text equivalent, the W3C recommends using an empty alt attribute to hide the image from screen readers. These should not be reported as missing alt text. See https://www.w3.org/WAI/tutorials/images/decorative/
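For illustration, the distinction being requested, sketched with Python's standard-library HTML parser: an img with alt="" is treated as decorative and skipped, while an img with no alt attribute at all is flagged:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "?")
        if "alt" not in attrs:
            print(f"missing alt: {src}")  # genuinely missing: report it
        # alt="" present: decorative per W3C guidance, so say nothing

AltChecker().feed('<img src="icon.svg" alt=""><img src="photo.jpg">')
# -> missing alt: photo.jpg
```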