HTTrack is a free, open-source website copier that downloads entire websites for offline browsing. The utility captures HTML pages, images, code, and other resources while maintaining the original site structure, enabling complete offline access to web content.
The software follows links throughout a website, downloading pages and their dependencies. Depth limiting controls how many links deep the crawler follows, while domain filtering restricts downloads to specific sites, preventing the crawl from wandering endlessly through external links.
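Depth limiting and domain filtering are both set on the command line. A typical invocation might look like the following sketch, where the URL and output directory are placeholders:

```shell
# Mirror a site into ./mirror, following links at most 3 levels deep (-r3).
# The trailing "+..." filter restricts the crawl to the example.com domain,
# so external links are recorded but not followed.
httrack "https://www.example.com/" -O ./mirror -r3 "+*.example.com/*"
```

Filters can be stacked: additional `+pattern` rules allow extra hosts (for example a CDN serving the site's images), and `-pattern` rules exclude paths within an otherwise allowed domain.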
HTTrack preserves the original directory structure and link relationships, so a downloaded site can be browsed normally in a local browser. Relative links are rewritten to work offline, making the local browsing experience mirror the original site.
Resume capability picks up interrupted downloads where they stopped. Site update mode fetches only new and modified pages when revisiting a previously captured site. Bandwidth limiting prevents the crawler from saturating the network connection.
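These three features map onto dedicated command-line options. The commands below are a sketch; the URL and output path are placeholders, and resume and update are run from the project directory created by the original crawl:

```shell
# Resume a mirror that was interrupted mid-download:
httrack --continue

# Re-crawl an existing mirror, fetching only new or changed pages:
httrack --update

# Cap the transfer rate with -A (bytes per second), here roughly 100 KB/s:
httrack "https://www.example.com/" -O ./mirror -A100000
```

Rate limiting is particularly useful for large mirrors run over shared connections, since an unthrottled crawl can otherwise monopolize the link for hours.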
The graphical interface provides easy configuration while the command-line version enables scripting and automation. Configured projects save settings for repeated use. The project wizard guides new users through common scenarios.
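Because project settings persist in the project directory, automation can be as simple as a scheduled script that refreshes an existing mirror. A minimal sketch, suitable for a cron job (the mirror path is a placeholder, and `-q` is assumed to suppress interactive prompts for unattended runs):

```shell
#!/bin/sh
# Nightly refresh of a previously configured HTTrack mirror.
# MIRROR_DIR is a placeholder for the project directory from the first run.
MIRROR_DIR="$HOME/mirrors/example"

cd "$MIRROR_DIR" || exit 1
httrack --update -q
```

A crontab entry such as `0 3 * * * /usr/local/bin/refresh-mirror.sh` would then re-run the capture every night at 03:00.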
Proxy support enables downloads through network proxies, including those that require authentication. Cookie handling lets HTTrack capture content that requires a logged-in session. HTTPS support allows secure sites to be downloaded. User-agent configuration lets the crawler identify itself as a different browser to sites that serve browser-specific content.
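Proxy and user-agent settings also have command-line equivalents. The following sketch combines them; the proxy address, credentials, and site URL are all placeholders:

```shell
# Download through an authenticated proxy (-P user:pass@host:port)
# while presenting a Firefox-style user-agent string with -F.
httrack "https://secure.example.com/" -O ./mirror \
  -P "user:pass@proxy.internal:8080" \
  -F "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0"
```

Overriding the user agent is mainly needed for sites that vary their markup by browser or reject unfamiliar crawlers; for ordinary sites the default identification works fine.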