Wget Proxy Setup (2025): Interactive Step-by-Step





David Foster
Setup Guides
Getting Started with Wget and Proxies
So, you're looking to download files or web pages programmatically? Enter `wget`, a nifty command-line tool perfect for the job. But downloading is just the start. To truly automate tasks like web scraping without hitting frustrating roadblocks (like IP bans), you'll want to pair `wget` with a proxy server.
`wget` operates much like its sibling, `cURL`. You fire off a command in your terminal, point it at a URL, and voilà – files downloaded. Imagine scheduling this to run daily, effortlessly fetching the data you need. It's a beautiful thing.
However, repeated requests from the same IP address can quickly get you flagged and blocked by websites wary of scrapers. That's where proxies become essential. They act as intermediaries, masking your real IP and making your requests appear to come from different locations. This guide will walk you through using `wget`, setting it up with proxies (specifically using reliable options like those from Evomi), installing it if you haven't already, and troubleshooting common hiccups.
Best of all? We've included examples you can run yourself. Let's dive in!
The Basic Wget Command Structure
Using `wget` boils down to a straightforward syntax:
wget [options] [URL]
When integrating proxies, you have a couple of paths. You can supply the proxy details directly within your command using an option, or you can configure your system to use a proxy globally for `wget` (and potentially other tools), saving you from typing it out each time.
Before we get to the proxy magic, let's make sure `wget` is even installed. Try running this:
wget -V
If you see version information, great! If you get an error like "command not found," don't worry – we'll sort that out next.
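For reference, on a machine where `wget` is present, the first line of that output looks something like this (the version number and build target will differ on your system):
GNU Wget 1.21.3 built on linux-gnu.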
Dealing with "Wget Command Not Found"
Many Linux distributions come with `wget` pre-installed, which is convenient. However, if you're on macOS or Windows, chances are you'll encounter an error initially. This simply means `wget` isn't installed yet, but it's an easy fix.

Installing Wget on macOS
The simplest method for Mac users is typically via Homebrew. Open your terminal and run:
brew install wget
If you don't use Homebrew, other installation methods are available, such as downloading the package directly and installing it manually.
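For example, if you happen to use MacPorts rather than Homebrew, the equivalent one-liner should be:
sudo port install wget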
Installing Wget on Windows
For Windows, you can grab the `wget` package directly from the GNU project archives (search for "wget for Windows GnuWin32"). Download the setup program, run it, and follow the installation wizard. Once done, you're ready to use `wget`. But is it always the right tool compared to `cURL`?
Wget vs. cURL: A Quick Comparison
Here’s the gist: `wget` is often seen as simpler for straightforward downloading tasks, with many common options enabled by default. `cURL`, on the other hand, is a powerhouse of flexibility, supporting a vast array of protocols and offering fine-grained control over requests. You can learn more about using cURL for file downloads in our other post.
Feature-wise, `cURL` boasts support for over two dozen protocols, whereas `wget` primarily sticks to HTTP, HTTPS, and FTP. Both tools support proxy authentication. However, `wget`'s connection handling is simpler, which can sometimes be a limitation.
For basic web scraping – grabbing pages, following redirects, downloading files – `wget` often suffices. But if you need more advanced features or protocol support, cURL might be the better choice.
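To make the contrast concrete, here's roughly the same download expressed in both tools. `wget` follows redirects and writes a file by default, while `curl` needs explicit flags for both (the target here is just the familiar example.com placeholder):
# wget: follows redirects and saves index.html automatically
wget http://example.com
# curl: -L follows redirects, -o picks the output file
curl -L -o index.html http://example.com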
Now, let's see `wget` in action, specifically focusing on using it with proxies.
Using Wget with Proxies: Stay Under the Radar
Even seemingly harmless `wget` usage can lead to IP blocks. Websites monitor traffic patterns to detect automated activity, and multiple rapid requests from a single IP are a major red flag.
The key identifier websites use is your IP address. By tracking requests from an IP, they can infer bot-like behavior (e.g., visiting too many pages too quickly). This is where proxies save the day.
Using a proxy service, like Evomi's residential proxies, routes your traffic through different IP addresses. To the target website, each request can appear to originate from a unique, legitimate user, drastically reducing the chance of being blocked. Evomi takes pride in ethically sourcing its proxies and providing robust, Swiss-quality infrastructure, all at competitive prices (residential proxies start at just $0.49/GB).
To use `wget` with an Evomi proxy, you'll need your credentials and the proxy endpoint details. You can set the `http_proxy` or `https_proxy` environment variable right before your `wget` command. Here’s how it looks using Evomi's residential proxy endpoint:
http_proxy=http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1000 \
wget -v http://httpbin.org/ip
Replace `YOUR_USERNAME` and `YOUR_PASSWORD` with your actual Evomi credentials. The command fetches a simple page that shows your current public IP address – run it with and without the proxy setting to see the difference! (You can try Evomi's service with a completely free trial to test this yourself.)
Other techniques exist, like using the `-e` option (e.g., `wget -e use_proxy=yes -e http_proxy=...`) or defining shell aliases for convenience.
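As a rough sketch of those two approaches (same placeholder credentials as above, and `wgetp` is just a made-up alias name):
# One-off: pass proxy settings as wgetrc-style commands with -e
wget -e use_proxy=yes \
     -e http_proxy=http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1000 \
     http://httpbin.org/ip
# Convenience: a shell alias (add it to ~/.bashrc or ~/.zshrc)
alias wgetp='wget -e use_proxy=yes -e http_proxy=http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1000'
wgetp http://httpbin.org/ip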
Setting the proxy variable inline like this is great for temporary or script-specific use, as it only affects the current command or terminal session. But what if you want `wget` to *always* use a proxy?
Persistently Configuring Wget to Use a Proxy
`wget` checks standard environment variables like `http_proxy` and `https_proxy`. You can set these variables permanently (or semi-permanently) so that all `wget` commands automatically use the proxy. Be aware that this might affect other command-line tools (like `cURL`) that also respect these variables.
Your system typically looks for proxy settings in these files, in order:
1. `~/.wgetrc`: User-specific `wget` configuration.
2. `/etc/wgetrc`: System-wide `wget` configuration.
3. `~/.bash_profile` (or `~/.bashrc`, `~/.zshrc`, etc.): User-specific shell environment settings.
4. `/etc/profile`: System-wide shell environment settings.
You can edit the relevant file (`~/.wgetrc` is often preferred for user-specific settings) and add lines like these:
use_proxy=yes
http_proxy=http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1000/
https_proxy=http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1001/
# You might also need ftp_proxy if downloading via FTP
# ftp_proxy=http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1000/
If you're editing shell profile files (like `~/.bash_profile` or `/etc/profile`), the syntax is slightly different:
export http_proxy="http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1000/"
export https_proxy="http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1001/"
# export ftp_proxy="http://YOUR_USERNAME:YOUR_PASSWORD@rp.evomi.com:1000/"
Remember to replace the placeholder credentials and adjust the endpoint/port if using different Evomi proxy types (e.g., datacenter: `dc.evomi.com:2000`, mobile: `mp.evomi.com:3000`). After saving the file, you might need to restart your terminal or run `source ~/.bash_profile` (or the relevant file) for the changes to take effect.
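Once the configuration is saved and loaded, a quick sanity check is to hit an IP-echo service and confirm the reported address belongs to the proxy rather than to you (this reuses the ipify endpoint from earlier; `--no-proxy` is covered a bit further down):
# Should print the proxy's IP if the persistent settings are picked up
wget -qO- https://api.ipify.org?format=json
# Compare against a direct connection
wget --no-proxy -qO- https://api.ipify.org?format=json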
Handling Redirects with Wget
This isn't strictly a proxy issue, but it's a common scenario when downloading files or scraping. Websites often use redirects (like HTTP 301 or 302). Thankfully, `wget` is smart enough to follow these automatically.
Try this command:
wget -v http://httpbin.org/redirect/1
You should see output indicating that `wget` received a redirect status and followed the new location:
--<TIMESTAMP>-- http://httpbin.org/redirect/1
Resolving httpbin.org (httpbin.org)... <IP ADDRESS>
Connecting to httpbin.org (httpbin.org)|<IP ADDRESS>|:80... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: /get [following]
--<TIMESTAMP>-- http://httpbin.org/get
Reusing existing connection to httpbin.org:80.
HTTP request sent, awaiting response... 200
Notice the "[following]" message. `wget` followed the redirect from /redirect/1
to /get
and downloaded the final content. If redirects aren't working, double-check the --max-redirect
option; by default it's usually high enough (e.g., 20), but if set too low (like --max-redirect=0
), it will prevent following redirects.
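You can see that behavior for yourself by capping the redirect count; with the limit at zero, `wget` reports the 302 but stops there instead of fetching /get:
wget --max-redirect=0 http://httpbin.org/redirect/1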
Ignoring SSL Certificate Errors
Sometimes, you might encounter websites with invalid, expired, or self-signed SSL certificates. Browsers will throw up scary warnings, and `wget` will typically refuse to connect over HTTPS in these cases due to the security risk.
While generally not recommended for sensitive browsing, if you understand the risks and *need* to fetch content from such a site (perhaps for testing or scraping specific known-safe but misconfigured targets), you can tell `wget` to ignore certificate errors:
# expired.badssl.com intentionally serves an expired certificate, handy for testing
wget --no-check-certificate https://expired.badssl.com/
This flag bypasses the certificate validation check. Use it cautiously!
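If the real issue is a self-signed certificate you actually trust (your own staging box, for instance), a safer route than switching validation off is to hand `wget` the certificate to trust. The host and file path below are purely illustrative:
# Trust one specific certificate instead of disabling checks entirely
wget --ca-certificate=/path/to/staging-cert.pem https://staging.example.com/report.csv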
Temporarily Disabling a Configured Proxy
What if you've configured a global proxy in your `.wgetrc` or environment variables, but need to make a direct connection for a specific command? The `--no-proxy` flag comes to the rescue.
wget --no-proxy https://api.ipify.org?format=json
This overrides any configured proxy settings for this single `wget` execution, forcing a direct connection.
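On a related note, if you only want certain hosts to bypass the proxy permanently (localhost, internal services), `wget` also honours the standard no_proxy environment variable. The hostnames here are placeholders:
# Hosts listed in no_proxy are always contacted directly, proxy or not
export no_proxy="localhost,127.0.0.1,intranet.example.com"
wget https://intranet.example.com/status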
Other Useful Wget Commands
Here are a few more handy `wget` options to add to your toolkit, especially useful when combined with proxies for scraping:
Specify Output Filename:
wget -O [filename] [url]
(Note: uppercase `-O` sets the output file; lowercase `-o` writes wget's log messages to a file instead.)
Use `-O` (short for `--output-document=FILE`) to save the downloaded content under a specific name, rather than letting `wget` decide.
# my_ip.json is just an example name for the saved file
wget -v -O my_ip.json https://api.ipify.org?format=json
Save to a Specific Directory:
wget -P [path] [url]
The `-P` (`--directory-prefix=PREFIX`) option tells `wget` where to save the downloaded file(s).
# Create the target directory first if it doesn't exist:
mkdir -p ./downloads
wget -v -P ./downloads http://example.com
Set Custom User-Agent:
wget --user-agent="[agent string]" [url]
Websites often check the User-Agent string to identify browsers. `wget` has a default User-Agent (like "Wget/1.21.3"), which can sometimes be blocked. You can mimic a real browser:
wget \
  --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" \
  http://httpbin.org/user-agent
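If you're scraping at any real volume, it can also help to vary the User-Agent between requests. Here's a rough bash sketch that cycles through a list of strings; user_agents.txt is a hypothetical file with one User-Agent per line:
# Loop over a (hypothetical) file of User-Agent strings, one request each
while IFS= read -r ua; do
  wget -q -O - --user-agent="$ua" http://httpbin.org/user-agent
  sleep 2   # be polite between requests
done < user_agents.txt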
Convert Links for Local Viewing:
wget --convert-links [url]
When downloading a webpage, this option modifies the links within the HTML file to point to locally downloaded resources (if they were also downloaded, e.g., using recursive download options), making it easier to browse the site offline.
wget \
  --convert-links \
  --page-requisites \
  http://example.com
You'll see a confirmation message after the download:
Converting links in example.com/index.html... 4 links converted.
FINISHED --<TIMESTAMP>--
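If you're after a fully browsable offline copy rather than a single page, these flags are often combined with `wget`'s mirroring options. Treat this as a sketch and be mindful of the target site's robots.txt and terms of service:
wget \
  --mirror \
  --convert-links \
  --page-requisites \
  --adjust-extension \
  --no-parent \
  http://example.com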
Download URLs from a File:
wget -i list_of_urls.txt
For bulk downloads, create a text file (e.g., `list_of_urls.txt`) with one URL per line. Then use the `-i` option:
# First, create list_of_urls.txt with one URL per line:
echo "http://example.com" > list_of_urls.txt
echo "https://api.ipify.org?format=json" >> list_of_urls.txt

# Then feed the file to wget:
wget -i list_of_urls.txt
`wget` will process each URL in the file sequentially.
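When you combine `-i` with a proxy for bigger jobs, `wget`'s pacing options help keep the traffic pattern gentle; the values below are just illustrative starting points:
# --wait/--random-wait space out requests, --limit-rate caps bandwidth,
# --tries retries transient failures
wget -i list_of_urls.txt --wait=2 --random-wait --limit-rate=500k --tries=3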
Wrapping Up
And there you have it! `wget` is a powerful tool for automated downloads, and combining it with proxies transforms it into a robust solution for web scraping and data gathering without easily getting blocked. By understanding how to configure proxies, handle redirects and certificates, and use its various command-line options, you can tackle a wide range of downloading tasks. Happy scripting!

Author
David Foster
Proxy & Network Security Analyst
About Author
David is an expert in network security, web scraping, and proxy technologies, helping businesses optimize data extraction while maintaining privacy and efficiency. With a deep understanding of residential, datacenter, and rotating proxies, he explores how proxies enhance cybersecurity, bypass geo-restrictions, and power large-scale web scraping. David’s insights help businesses and developers choose the right proxy solutions for SEO monitoring, competitive intelligence, and anonymous browsing.